1548 posts • joined Friday 21st December 2007 15:33 GMT
Re: Resistance is futile
If the energy stored in the capacitors could be recycled somehow, instead of being dumped/wasted during a state change, that might be of benefit.
It's been done in the lab. Look up "reversible computing".
I can attest to the fact that not a single soul in the US could find their own ass on a map
That's a mighty detailed hypothetical map you have there.
I know plenty of people in the US - myself included - who are quite adept at reading maps, and have a decent grasp of world physical and political geography.
I can be as cynical as the next curmudgeon, but this "boo hoo, no one of X nationality possesses Y skill" cliche that Reg commentators are so fond of has gotten rather dull. Let's try harder in the future, eh?
Re: Your choice of degree is an example of lateral thinking
Do people still need to write their own sort routines?
Rarely. It does still come up sometimes: when you need to sort a small array in an inner loop of performance-critical code (a problem that does arise in certain domains), calling a library function would be foolish.
The correct question, though, is "Do people still write their own sort routines, even when they don't have to?". And the answer to that is "yes". The world is full of reinvented wheels. And the sort of programmer the OP was talking about is the usual culprit.
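For the curious, the inner-loop case looks something like this - a sketch in Python for readability (in practice you'd write it in the language of the hot path), using plain insertion sort, which is the usual choice for tiny arrays:

```python
def insertion_sort_small(a):
    """Sort a small list in place and return it.

    For tiny inputs (say, n <= 16), this kind of tight loop often beats a
    general-purpose library sort, which pays for call overhead and
    comparator indirection on every element.
    """
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right until key's slot is found.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

Not coincidentally, this is also what many library sorts fall back to internally once a partition gets small enough.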
Re: Functional programming and shift registers
FP is interesting as is how shift registers work and NP-completeness or whatever to anyone who cares. However these are completely irrelevant to people who develop software on a daily basis.
This is the standard line among professional software developers. And it's why commercial software is loaded with race conditions, ghastly performance issues, and the sort of mind-numbingly stupid implementations that decorate the pages of thedailywtf.com.
A great many programmers have vague and wildly inaccurate mental models of the machine and the various abstractions layered upon it, and lack the critical and technical skills to improve them. And that is largely responsible for the generally abysmal quality of software.
Certainly many - perhaps most - computer scientists know very little about programming and software development. More than a few academics have pointed this out and urged the discipline to address it; Stroustrup, for example, had a piece on the topic in CACM some issues back. But this Snowian1 "two cultures" mythology of a fundamental divide between CS and programming / software development is bullshit, and it only exacerbates the situation.
Also, it shouldn't be necessary - but apparently it is - to point out that a great many software projects require a whole fucking lot of CS to be done correctly. I wouldn't want to see a DBMS created without benefit of actual computer scientists. Or a JITing VM. Or a distributed analytics platform.
1Or Kiplingesque, if you prefer.
Re: Comp Sci degrees were sold to many kids looking for a well paid job....
If you've not published your first Android or IOS app before your [sic] 14 your [sic] basically toast.
"Hi, I'm Anonymous Coward, and I believe All the World's a VAX - sorry, a Smartphone."
When I'm looking to hire people, their having "published" a smartphone app counts for absolutely nothing. Having produced a significant piece of software that's robust and maintainable, and solved interesting problems along the way - sure, that's important. But that doesn't describe most apps, and it does describe a vast array of non-app software.
(I'd also give points to an applicant for knowing the difference between "your" and "you're".)
Re: "one of Skype or Instagram even had self-modifying code"
That would make one of them the third legitimate use for this technique I've heard of (I gather it's popular with malware writers, but apart from them it's not really clear who else).
Self-modifying code was quite common in mainframe applications at least into the mid-1980s. It was often used in assembly-language apps for things like patching, on-demand loading, memoization, etc. Some higher-level languages supported it directly, such as COBOL with its ALTER statement.
I've also seen it used in some performance-critical code, again back in the '80s. As CPU caches became more complex, there was a lot of discussion about their effects on self-modifying code.
Depending on what you consider "self-modifying code", you might also include some techniques that involve dynamically-generated code, such as GCC trampolines (which build code at runtime on the stack). That's not usually considered self-modification, though.
As for how the QNX technique works: it's really not that complex (or new - Andrew oversells it in the article). The interrupt handler looks at the parameters to determine whether it's a QNX syscall or a Linux syscall. In the latter case it probably jumps to a thunk that does the conversion. (The conversion could be done in the handler, but that'd be messy.) QNX and Linux syscalls are mostly equivalent, thanks to POSIX, so it's mostly a matter of massaging parameters.
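The dispatch described above can be modelled, very loosely, in a few lines. Everything here (the syscall numbers, the thunk) is invented for illustration - the real handler does this at the machine level - but the shape of the logic is the same:

```python
# Toy model of the dispatch: the handler inspects the incoming call to
# decide whether it's a native syscall or a foreign one that must be
# massaged through a conversion thunk first. Numbers and handlers are
# invented for illustration.

NATIVE_SYSCALLS = {1: lambda fd, buf: ("native_write", fd, buf)}

def foreign_write_thunk(fd, buf):
    # "Massage the parameters": pretend the foreign ABI passes the
    # buffer in a form that needs converting before the native call.
    return NATIVE_SYSCALLS[1](fd, buf.encode())

FOREIGN_SYSCALLS = {64: foreign_write_thunk}  # a Linux-style call number

def handle_syscall(number, *args):
    if number in NATIVE_SYSCALLS:
        return NATIVE_SYSCALLS[number](*args)
    if number in FOREIGN_SYSCALLS:
        return FOREIGN_SYSCALLS[number](*args)  # jump to the thunk
    raise ValueError("unknown syscall %d" % number)
```

The point being: one interrupt entry point, two dispatch tables, and the POSIX overlap means most thunks are just parameter massaging.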
They don't make centuries like they used to
nearly a century ago in 1906
I guess that's nearly an imperial century ago, not one of your newfangled metric centuries. Or has this piece been sitting in the queue since 2005?
Re: so the Russian skeptics seem to be right ?
I've always been a firm believer in the fact that our local star contributed pretty much 100% of our heat
Well, it seems you're only wrong by about 50%. I guess that's not too bad.
I have to admit her design for the Broad Museum here at Michigan State leaves me cold.
But hell, what do I know about architecture? (A little, actually, having had a course on it in grammar school. But less than an enthusiast, to say nothing of a professional.) To be honest, I don't think I care for most of the architectural styles from the 20th century, to say nothing of this one. My own house is a Queen Anne, with its inefficient irregular massing and the interior chopped up into lots of rooms, and that's the way I like it. You kids get your open floorplans off my damn lawn!
Probably wouldn't be hard to implement a hobbyist version of a system like this. You could find open-source implementations of many of the basic components (eg the image-recognition components and the SVM engine). And contemporary PCs have the computing horsepower to chew through enough data to get interesting results.
If you want to scale that baby up, you could look at what Tony Pearson does in his Build Your Own Watson Jr article for ideas. He uses UIMA, which when I last used it was a lot better with text than with non-text data, but it could be wrangled into something suitable for this purpose.
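If you wanted to grow your own rather than borrow one, a linear SVM is surprisingly little code. Here's a toy Pegasos-style sub-gradient trainer in pure Python - the data and hyperparameters are made up, and a real engine would add kernels, a bias term, and so on:

```python
# Bare-bones linear SVM trained with the Pegasos sub-gradient method.
# This is the kind of component you'd normally swap an open-source
# library in for; data and hyperparameters here are invented.

def pegasos_train(data, lam=0.01, epochs=200):
    """data: list of (x, y) pairs, x a feature tuple, y in {-1, +1}."""
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            decay = 1.0 - eta * lam        # regularization shrinkage
            w = [wi * decay for wi in w]
            if margin < 1:                 # point inside the margin
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# A linearly separable toy problem through the origin.
data = [((1, 1), 1), ((2, 1), 1), ((1, 2), 1),
        ((-1, -1), -1), ((-2, -1), -1), ((-1, -2), -1)]
w = pegasos_train(data)
```

After training, w points roughly along the max-margin direction and classifies all six points correctly - but on real image data you'd want a proper library, not this sketch.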
Re: No shortcuts with AI
the fact that it's [sic] learning needs 'supervision' shows that this has severe limitiations[sic]
No, it shows that the researchers decided to go with an SSL approach. There's nothing in the article that indicates they had to reject unsupervised learning, or bootstrapped approaches such as kernel extension (which are technically semi-supervised, but all the supervision comes at the beginning).
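For the record, a bootstrapped semi-supervised loop is simple enough to sketch: all the supervision is in the initial seed labels, and after that the model pseudo-labels its own data. The 1-D "classifier" (a midpoint threshold) and the confidence rule here are toy stand-ins for a real model:

```python
# Self-training sketch: supervision only at the start (the labeled
# seeds); thereafter the model labels the points it's most confident
# about and retrains. The threshold "classifier" is a toy stand-in.

def fit_threshold(labeled):
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (min(pos) + max(neg)) / 2.0

def self_train(labeled, unlabeled, confidence=2.0, rounds=5):
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        thr = fit_threshold(labeled)
        # Pseudo-label only points far from the boundary (the
        # "confident" ones); leave ambiguous points in the pool.
        confident = [x for x in pool if abs(x - thr) >= confidence]
        if not confident:
            break
        labeled += [(x, 1 if x > thr else 0) for x in confident]
        pool = [x for x in pool if x not in confident]
    return fit_threshold(labeled)
```

Whether that counts as "supervised" for the purposes of the OP's argument is exactly the sort of distinction they're glossing over.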
This requires an extensive internal 'world model' that, to my mind, can only be achieved through years of cognitive development similar to that of an infant
That's possible, but it's just a guess. And it's particularly difficult to see what a priori constraint would make "years" of training necessary - why the time required wouldn't simply be a function of parameters such as the system's image-processing rate.
You can't take shortcuts by simply showing pictures of tanks (sorry, planes) and making associations. It's been tried before and failed.
Anecdote does not constitute proof. Care to provide evidence that no unsupervised-learning process can ever build an association graph that satisfies whatever (thus far undefined) metric you have in mind?
Honestly, I don't know who's worse - the "strong AI is just around the corner!" people, or the "machine learning is inevitably limited by X" ones. It's a huge and heterogeneous problem domain, with a huge number of avenues being investigated. Pat generalizations about it do not reflect reality.
Re: Simplest weapon
Yep. And if you have to improvise while you're on the plane and didn't think to get one of those beforehand, there are generally various usable alternatives; for example, you can make a decent knife by breaking a DVD and wrapping some cloth around the "hilt". And, of course, blunt instruments are not hard to come by.
(Of course, it's not hard to smuggle weapons past airport checkpoints either, as several people have demonstrated.)
Re: From the people who bought you Rick Dangerous
platform games don't work in 3d (Spyro soon corroborated this...
Actually, while I never cared for Tomb Raider, I found the first two Spyro games tremendous fun. (Never played any of the others.) Tastes differ, of course, but clearly for many gamers, platform games can work just fine in 3D.
Yes, the format change makes the comparison between Old Who and New Who in the chart pointless. Perhaps something like "story-hours in stories featuring X" would be a better metric.
Re: Of course it's not patientable...
all* computer programs are "computer implementations" of relatively simple abstract ideas
Really? You seem to have a pretty broad definition of "relatively simple". Relative to what?
I'm curious to know what's so simple about, say, graph spectral sparsification. Or what "relatively simple abstract idea" is embodied by, oh, the Linux kernel.
Prolepsis: "Those are collections of many simple ideas" isn't a persuasive rejoinder, since the act of combining simple ideas in particular ways adds information entropy.
Re: Datacentre Question
Definitely not Bjork. Bjork can manipulate IP routing with her mind. She doesn't need no stinkin' BGP advertisements.
Re: Everybody above this line
Lions 7, Christians 4, by my count. Looks like a few Reg readers need to go back to Internet school.
Re: @AC08:19 (was: @ Rampant Spaniel (was: I use VI! ;-)))
No offense, but not groking grok implies that you have absolutely zero clue about the culture of TehIntraWebTubes.
I'm sure most of us are familiar with Heinlein's coinage. I first read Stranger in a Strange Land (and thought it rather overrated) when I was but a lad, and reread it a couple of times over the years to see if I'd missed some hidden greatness. I'm pretty comfortable in the belief I have not.1
Use "grok" if you must - we can hardly prevent it - but its place in "Internet culture" is exaggerated, as is its utility as a term. It's only distinguished from its synonyms (at least as far as denotation) by fiat - that is, by Heinlein's claim in the novel that it means something more than, say, "comprehend".
1And I make some claim to expertise in this area, as I've been reading novels since I was 4, have a baccalaureate in English, am ABD in English Lit (specializing in literary theory and 20th century prose), etc.
Re: I use VI! ;-)
Genuine question from someone who uses pico \ nano , what would be the benefit to learning \ using vi?
[Argh! The backslashes! They burn!]
You'll find vi is more pleasant and makes you more productive - if you happen to be the sort of person who finds vi pleasant and productive. I myself prefer vi;1 I like its modal UI and over the past quarter-century or so I've memorized a lot of its functionality. Things like regex replace and repeat-command save me a lot of repetition and a little bit of time (if I didn't have vi I'd probably use sed or the like to do them anyway).
Could I do the same things with emacs? Sure, but I've only used emacs a few times, and there's no benefit in switching to it, and it doesn't suit my tastes as well.
On the other hand, if I were using pico for the sorts of things I do in vi, vi would be a revelation (once I gained some fluency in it). I'm a toolsmith2 by inclination, so I'd rather perform a regex replace than make the same change in three places.
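To make the point concrete: the sort of regex replace in question - vi's :%s/pattern/replacement/g, or sed's s/pattern/replacement/g - looks like this in any scripting language (Python here; the buffer contents are invented):

```python
import re

# What a toolsmith reaches for instead of making the same edit three
# times by hand: one regex replace over the whole buffer, equivalent
# to vi's ":%s/pattern/replacement/g" or sed's "s/pattern/replacement/g".

buffer = "err_code = 1; err_msg = 'x'; err_flag = True"

# Rename the err_ prefix to error_ everywhere, keeping the suffix (\1).
result = re.sub(r"\berr_(\w+)", r"error_\1", buffer)
# result: "error_code = 1; error_msg = 'x'; error_flag = True"
```

One command, three edits; the saving compounds quickly on real files.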
Proponents of the Editor Religious Wars - including many of the people posting in this forum - would have you believe that their editor of choice is clearly superior, and others clearly flawed. That's either bombast or stupidity. Different artisans use different tools.
1That said, I install vim on any UNIX system I use for any length of time, mostly for multiple-buffer support and multiple undo/redo. Things like visual blocks are sometimes convenient, but they really don't save any time over basic vi functionality. On Windows, I use vim, except on one Windows machine where character-mode vim takes a long time to start for some reason I've never tracked down, so I just have it aliased to gvim.
2As defined by Brooks in The Mythical Man-Month. In Brooksian nomenclature I'm also a language lawyer, and on various projects surgeon, copilot, and editor. Maybe the "surgical team" metaphor doesn't work so well after all. (Also, I just looked at TMMM again, and I notice Brooks credits Harlan Mills for this idea, from a 1971 publication. Gosh, did Agile advocates not invent the feature team?)
Re: Utter nonsense
Obvious troll is...
...still effective enough to pick up 23 downvotes, as of this writing. Like rats hitting the feeder bar.
Re: 80's Doctors
I grew up with Tom Baker's doctor; Davison's stories didn't start airing in the US until I was in high school. But I enjoyed Davison's nonetheless. True, that was at the height of my engagement with the show: a friend whose father ran a TV station got us a copy of The Five Doctors on VHS before it aired in the US (ha!), and I got to meet Tom Baker in person at one of his Boston-area appearances. So perhaps I'd've been happy with any actor playing the Doctor.
I didn't see all of the Colin Baker or McCoy stories as by that time I was in college and had other things on my mind (and for some years didn't even have a TV of my own, come to think of it). But when I caught the occasional episode I generally liked it.
Clearly opinions will differ, but so many writers - in the articles and comments - seem to believe that their low opinion of Davison's doctor is some sort of objective fact.
Re: Pedant alert
The literal meaning of "ultimate" is a bit pointless when we're talking about a character who can travel more or less arbitrarily in time.
Re: Mostly bad plans but also mostly not bad entertainment
PS I can understand not mentioning Time-Flight, I'm not sure even Peter Grimwade can explain that one, but no Castrovalva?
Eh? Castrovalva is discussed explicitly on the final page of the article, and Time Flight is alluded to ("all that business hijacking Concorde").
"they none of then anything on the Time Lords’ APC Net"
Pretty sure that was meant to be "none of them have anything on...". I agree the piece had an unfortunate number of errors of this sort. It could certainly have used another pass of copy-editing.
I'd also have appreciated a mention of the story title and Doctor's incarnation for each of the segments, since I've never seen some of them, and others I barely remembered.
Re: The Forgotten Doctor
Somehow, Peter Cushing never makes it into these lists
Somehow, in the comments section for every single one of these Dr Who articles in the Reg, someone makes exactly this comment, despite the fact that it's been asked and answered in the comments section of every single one of...
We seem to be trapped in some sort of comments chronic hysteresis. Does Meglos have an account here?
Re: Robots of renown
Although I am fond of KITT from the original show and not the gawd awful reboot, he in no way can be classified as a robot
Why not? KITT could operate itself - drive around and such - and it was a mechanical partner, which is vaguely close to Capek's original usage.
No one in this thread has mentioned Box from Logan's Run, and he was overwhelming. Though, in the end, overwhelmed.
Re: Rise of the machines....
A.I. with robots will happen, sooner rather than later.
Oh really? Care to cite any of the relevant research showing we're appreciably nearer this goal than we were in, say, 1980? The last I looked (a few months back), we were still only scraping the surface of many quite basic problems in comprehending language, such as reliably determining conversational entailment - and that's only one of many areas we need to handle for "AI with robots", and that's despite the vast amount of extremely clever work that's been done.1
"Strong" AI, even in very specialized domains, has made little or no headway in decades. "Weak" AI is doing a little better, but largely due to the general increase in computing power which has made additional approaches tractable (eg with IBM's Watson). And AI for robots is orders of magnitude worse than special-domain strong AI, because the robot 1) has to deal with hugely complex real-world inputs, and 2) has physical effectivity, which makes it dangerous.
Having a sentiant [sic] and inteligent [sic] machine is the ultimate User Interface
Good god. Why is this so difficult a concept? There is no "ultimate user interface". Users are not homogeneous, and neither are applications. No type of user interface is perfect for all users or for all applications.
Sentience wouldn't do a damn thing to make my computer use easier, for at least 90% of what I do with computers - and with the remaining 10% it'd be an annoyance.
As the robotic A.I.s creators we can ensure they enjoy what ever job they have,and would never consider anything else.
And machines working as instructed has never led to problems?
1It's only relatively recently that we've seen a number of major general algorithmic approaches, such as SVMs and MEMMs, applied in this area, not to mention algorithms more specific to NLP such as LSA. But we're still very much at the low end of the curve for a comprehensive mechanical understanding of language. That's partly because we don't understand a great many things about language in the first place - from low-level stuff like aspects of prosody (e.g. what "stress" really is) to high-level ones like how rhetorical devices really play in informal logic. Or about the neuropsychology of language. Hell, we don't even have a particularly good philosophical model of it - even within a philosophical school or movement (the New Pragmatists, say) there can be huge disagreements.
Re: Some big corporations behave like Nazis
Orwell's 1984 come true
Wrong book. Nineteen Eighty-Four1 describes a straight-up oligarchic totalitarian state. What we're seeing is the corporatist bread-and-circus of Brave New World and the like.
This isn't surprising. Heilbroner noted in 21st Century Capitalism that the fatal flaw of central planning was the lack of tension between the state and a powerful private sector; that tension on the one hand limits the excesses of both and on the other spurs innovation (through competition). But it also creates powerful incentives for the public and private sectors to mirror one another, by becoming twin domains for the same actors even while they remain separate in principle. Politicians are businesspeople.
Don Jefe was correct to refer to the East India Company, but more generally it was the rise of the stock corporation and other forms of self-organization among the bourgeoisie which started us down this path. In other words, we've been on it ever since the beginnings of Early Modern Europe. Of course governments before that were no friends to common folk; it's just that government organized around real estate and the hierarchical control over populations - i.e. feudalisms of various sorts - are inherently hostile to industry.
When the bourgeoisie wrested control of the polis from the aristocracy, by introducing far more efficient economic structures,2 they made things better for the middle class, which was by and large a Good Thing; but mostly they made things better for industry and its owners. And once that ball starts rolling, there doesn't seem to be any stopping it.
1Orwell preferred the title spelled out. In fact, he didn't even want to name it after a particular year; that was his publisher's influence. Of course, this is an area where publishers often show some wisdom, else we might have had Baz Luhrmann's overproduced version of The High-Bouncing Lover last year.
2This also led to the downfall of institutional slavery, as Eric Williams famously argued (though he likely borrowed the thesis from C. L. R. James). Plantation slavery couldn't compete with wage slavery under capitalism; slavery produces a very inelastic labor force.
I suppose this merits Reg coverage
... more or less solely because of the Red Hat patronage. Otherwise it looks like Yet Another JVM Language. That's fine - the nice thing about targeting the JVM (or CLR, or LLVM1) is that you can treat these as DSLs when they do something particularly useful in some odd corner of your source base, and use straight-up Java for your main development. It all interoperates, and if your development team is any good, maintenance shouldn't be an issue, because these languages are highly expressive and a competent developer should be able to pick one up as necessary. (That's assuming the existing code is reasonably good, but if you're not holding code reviews and ensuring quality, you get what you deserve.)
But personally I find this one not very interesting. Given world enough and time I'd play around with it, but it ranks pretty low on my list - certainly below Scala and Clojure (which are already pretty well established) in the JVM-languages league, and below Julia and R and some others outside it.
* HTML syntax - I don't generally do UI work, and when I need HTML, I write HTML, so again this fails to excite me. Even if I did a lot of HTML, Ceylon's support doesn't look too thrilling. What does it do for me that the DOM doesn't? And if I'm generating enough HTML to make it worthwhile, I'm not going to do it with individual method calls for individual DOM nodes; I'll build a higher-level abstraction, thanks.
* Reified generics, type inference, etc - sure, that's nice, but why not go to Clojure and get a proper functional-OO language?
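For what it's worth, the "higher-level abstraction" I mean for HTML generation needn't be fancy - a small recursive renderer over nested data beats a method call per DOM node. A sketch (the tag handling is deliberately minimal, and the names are mine):

```python
from html import escape

# Describe the page as nested data - (tag, attributes, children) - and
# render it in one pass, instead of issuing a call per DOM node.

def render(node):
    if isinstance(node, str):
        return escape(node)          # text nodes get escaped
    tag, attrs, children = node
    attr_str = "".join(' %s="%s"' % (k, escape(v))
                       for k, v in attrs.items())
    inner = "".join(render(c) for c in children)
    return "<%s%s>%s</%s>" % (tag, attr_str, inner, tag)

page = ("ul", {"class": "nav"}, [
    ("li", {}, ["Home"]),
    ("li", {}, ["About & Contact"]),
])
# render(page) ->
#   '<ul class="nav"><li>Home</li><li>About &amp; Contact</li></ul>'
```

Templating engines are this idea with more features; per-node method calls are this idea with none.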
1Or UCSD p-System, right guys? Right? C'mon, where are all the p-System fans?
The BBC story guide for "Power of the Daleks" says the Daleks-need-static theme was apparently toned down in that story: "Whereas they die without static in The Daleks, in this story they just become dormant."
Subsequent to "Power", I guess the idea was just dropped entirely.
Re: Personal introspection
it is kind of irrelevant who they are criticising because they themselves are guilty of the aforementioned problems
It's not irrelevant at all. That's a wildly fallacious argument that displays an utter failure of critical thought (and it's sad, but not surprising, that it's gathered a number of upvotes). It's argumentum ad hominem, and even worse you're denying the validity of their argument on the grounds that they have direct experience of the situation they describe.
A doctor can be unhealthy and still offer sound medical advice. A criminal can offer valid insights into the workings of the law. &c.
I can clearly recall using [copy and paste] back in the mid-70s and if you check out Ritchie and Thompson's June 1970 memo describing the QED editor, you'll find it there too...
Agreed (though I suspect the post you're responding to was meant as a joke). The IBM mainframe editors had copy&paste since at least the mid-70s, with SPF (the precursor to ISPF), and I suspect the pre-SPF TSO editor had it as well, which would push it back to '71. I don't know whether early editors for, say, CMS or MTS supported copy/paste; it'd be interesting to hear from folks who used them. (I used VM/CMS in the late '80s, but never the early CP-branded versions.)
3MIAB is brilliant writing with believable characters
Jerome's sequel, Three Men on the Bummel, is less successful but also worth a read. The story has some nice moments, and the final chapter is a fascinating meditation on German "character". Besides its entertainingly absurd (if entirely period-appropriate) ethnic essentialism, the piece - written at the turn of the century - contains such gems as:
"The worst that can be said against [Germans] is that they have their failings. They themselves do not know this; they consider themselves perfect, which is foolish of them. They even go so far as to think themselves superior to the Anglo-Saxon: this is incomprehensible. One feels they must be pretending."
Fortunately, that sort of ideology of ethnic superiority never amounted to much in subsequent European history.
Re: To say nothing of the dog
Agreed, it's quite funny, which makes it a bit of a relief from her other time travel stories, which while excellent are, shall we say, on the sad side. Doomsday Book and To Say Nothing of the Dog make nice companion pieces.
Now that the holidays are coming up, perhaps it's time to dig DB out again...
Re: Self-Supporting Entity?
The USPS would be doing much better if Congress 1) let it run itself, rather than blocking every move the Postmaster General tries to make, and 2) stopped stealing billions of dollars from it every year (in the form of requiring it to overfund its pension system).
The USPS has to provide universal service - an extremely expensive proposition in a large country like the US. Its most profitable lines of business have been cherry-picked by the commercial delivery firms and eroded by the Internet. Major cost areas such as fuel and retiree health care have grown tremendously. But the Federal government that the USPS is supposedly "independent" from prevents any sort of reorganization, at the behest of constituents, while robbing it blind.
In short: you don't know what the fuck you're talking about.
Re: Historical Google Books
And these rare books were still under copyright, were they?
If I were an author and someone was distributing extracts of my book with pointers about who wrote it and how it can be obtained, I'd be very happy.
While we're dealing in hypotheticals, may I suggest that if you were an author, you might understand that not all authors are motivated solely by financial gain? Indeed, some might - for whatever reason - wish to see Fair Use exemptions restricted to use by actual people, and not by an automated system.
My own publications to date are all non-fiction, and I don't have any specific reservations about Google excerpting them willy-nilly. But if I published a novel, I'd much prefer the excerpts online to be the ones I chose (and reviewers and critics, etc, chose), not the ones the Great Googly Overlords decided were appropriate.
Personally, I think Chin's interpretation of the fair-use provisions of USC 17 is rather a stretch from what Congress intended. Well, whatever; interpretation is part of the role of the judiciary, and I'm certainly no intentionalist. But this interpretation is far from "obvious", regardless of what various Reg commentators claim. (Of course, people are fond of labeling their opinion as obvious when they aren't capable of constructing a real argument, or are too lazy to do so.)
Re: Making life complicated
more or less correlative numbers
What sort of numbers are "correlative"? Or, conversely, are not?
I've never seen "correlative" used as an absolute before. Generally it indicates one entity relates to another somehow. (The hint is in the name.) But perhaps I'm not very civilized.
Shrug. Personalized plates are a profit center for the State. That profit could be eaten up quickly if they ended up defending themselves in court from residents who decided they were offended by someone's choice of plate - regardless of whether said plaintiff had any chance of convincing a judge or jury that the plate was offensive. People willing to sue over being denied a particular plate appear to be much rarer. And in many, perhaps most, cases, people who are denied one particular personalized plate will try something else until they find one that's acceptable, so the State gets its money anyway.
So the economic incentives strongly favor ridiculous restrictions.
Personally, though I favor strong protection, and liberal interpretation, of freedom of expression, I don't think human has a winning argument here. There's nothing preventing him from putting a bumper sticker that says "COPSLIE" right above or below the plate, so it's hard to argue that there's any effective restraint here.
Re: GM Foods
Is there a need? Won't these plants just die on their arse when challenged by the natural variety in their usual habitat?
Often, yes, in which case there's no problem. But also often, no, because they find a niche to which they're already well-adapted; or no, because the ecological processes that formerly prevented species with similar adaptations from establishing themselves have been disrupted.
The last is what happened with prairie grasses in most of the US grasslands, for example. They are typically tall, slow-growing plants with very deep root systems. When established, they block most of the sun from fast-growing invasives; the invasives that weren't controlled that way were generally done in either by drought (they lacked the roots to survive it) or by burn-off. Burn-off, caused by lightning strikes, was particularly important: the native grasses could burn to the ground and regrow quickly from their root systems, while the burn wiped out other plants.
When people started building permanent dwellings, cutting down the tall grasses, irrigating fields, and suppressing wildfires, the European invasive plants they brought (because guess which people these were) easily found niches in the modified ecosystem.
Invasive-species ecology is a whole complex field of study. Many people have made careers out of a single species (Asian carp, kudzu, Formosan termites ...).
Re: Indeed, who *does* use TIFF as a common file format?
TIFF is commonly used by scanning software, and by "imaging" (i.e., taking physical documents, scanning them, and storing and retrieving them) applications generally.
It was invented by Aldus and is now controlled by Adobe (in the sense that they hold the copyright on the specification). Derivative formats have been published by ISO and the IETF (eg RFC 2306).
One main advantage of TIFF, as Chemist noted, is that it can be used to store images in lossless encodings (uncompressed or LZW); it can also be a container for lossy-compressed JPEG images, so it's more flexible than JPEG1 or, say, PNG alone. It also supports multiple images ("pages") per file, layers, various sorts of metadata, etc.
1JPEG does define a lossless mode, but apparently it's not widely supported.
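Incidentally, TIFF's flexibility starts right at its 8-byte header: a byte-order mark, the magic number 42, and the offset of the first Image File Directory (the structure that enables multiple pages, layers, and metadata). It's easy to parse by hand:

```python
import struct

# Parse the 8-byte TIFF header: a 2-byte byte-order mark ("II" =
# little-endian, "MM" = big-endian), the magic number 42, and the
# offset of the first Image File Directory (IFD).

def parse_tiff_header(data):
    order = {"II": "<", "MM": ">"}[data[:2].decode("ascii")]
    magic, ifd_offset = struct.unpack(order + "HI", data[2:8])
    if magic != 42:
        raise ValueError("not a TIFF file")
    return order, ifd_offset

# A minimal little-endian header pointing at an IFD at offset 8.
header = b"II" + struct.pack("<HI", 42, 8)
# parse_tiff_header(header) -> ("<", 8)
```

Everything interesting - the per-page tags, the choice of compression, the chain to the next IFD - hangs off that one offset.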
the first time you wrote a snippet of code to persuade that clunky Commodore PET at school to send a variable string to a dot matrix printer
Good call. It had no printer, but the school's Commodore PET was indeed the machine I wrote my first snippets of code on.
Fred Brooks referred to those of us who'd rather spend time creating a program to do our work than actually do our work as "toolsmiths". In The Mythical Man-Month1, he says that every programming team should have a toolsmith, because every team needs custom tools.
Personally, I have always found Wombats obtuse. And furry.2
1Not to be confused with the mythical man-moth, featured on SyFy this Saturday.
2Sunday on SyFy: "Obtuse Wombat vs Mythical Man-Moth".
Re: Redefining "reading" is not the biggest problem
Why do your think Xerography and microfiches were invented ? War (US Army and Navy copying all telegrams in the US ca. 1925) as the mother of every invention surely applies to these two technologies.
Oh yes, because there are no other economic drivers for document duplication and storage technologies. Try reading a little history. I recommend Yates, Control Through Communication. Not that it's likely to do you any good.
When the only tool you have is an axe, everything looks like a grindstone.
“Simply put, if a computer programmed by people learns the contents of a communication, and takes action based on what it learns, it invades privacy.”
So Spam Filters and the like are illegal too?
The problem here is that "learns" is not a term of art in IT or computer science, nor does it have a clear meaning in applicable law - no more than "read" does. That was, after all, the whole point of the article this thread is commenting on. Numerous commentators seem to have missed this point. Rasch's warning was that Google was arguing that automated processing of text did not constitute "reading", because it did not involve "learning". So if Google's argument is accepted (by the courts, or the general populace, or whomever[1]), then "learn" is no more effective than "read" was.
If people want to draw lines in the sand, they must make them sharp and straight enough to mean something. "A computer ... learns" does very little to constrain interpretation.
[1] One of the problems with both Rasch's argument and the article is that they conflate the possible audiences for Google's argument. Is the issue that the courts might accept it, and deem privacy law inapplicable because machines are not "reading"? That's one danger. Is it that the citizenry will accept it, and decline to be outraged at their loss of privacy, because their texts are not being "read"? That's a different danger. Is it that government organizations with only the most tenuous inclination to follow the law, which they have largely suborned anyway, will be persuaded by it? Well, maybe, but that barn door appears to be swinging in the wind already.
The process to create an Amendment is NOT done by "a stroke of a pen."
No, but the final step of amending the US Constitution is - it's the recording of the amendment by the Archivist that causes it to take effect.
the whole process of Amendments was a very clear acknowledgment that some things were left out, but could be added later as needed, but there had to be a VERY compelling reason, and undeniable will of the people, to do so.
It was no doubt obvious to all of the Framers that the Constitution needed an amendment process, as it would be to anyone with even the most basic understanding of history and law. So that fact is not going to bear much of the weight of your argument.
And as to whether an amendment must reflect the "undeniable will of the people": that's even more tenuous. Of the two processes for proposing an amendment, one is entirely up to the legislatures of the several states, which the Federal Constitution does not require to be representational.[1] The other simply requires two-thirds of a quorum in both houses of the Federal legislature - and recall that when the Constitution was written, senators were not chosen by popular election. Here too the Federal legislature is representational in theory, but not so much in practice (and steadily becoming less so[2]).
Ratification, too, is done by the several states, and the manner of it is not determined by the Federal Constitution, and need not (in terms of Federal law) be representational.
The Framers were for the most part plutocrats who wanted to distribute political power among various competing interests to prevent any one faction from growing too powerful. The general populace was only one of those interests, and the Framers were not keen on over-indulging it (and suffering from "the tyranny of the majority", as Adams put it). It is very dubious that they introduced the amendment mechanism with the intent that it serve primarily as a vehicle for the "will of the people".
[1] And indeed state legislatures often are not very representational, thanks to gerrymandering, disenfranchisement, and other biasing factors.
[2] Obviously the 19th and 20th centuries saw improvement in Federal legislative representation with various amendments (13th, 15th, 17th, 19th, 26th) and acts (particularly the VRA). But representation is constantly being eroded by the methods I mentioned above, the gutting of the VRA, etc. It's also eroding "naturally" as the population gradient between most- and least-populous states grows (and so the Connecticut Compromise biases the proportions of representation further), and as the population of non-citizens (resident aliens) grows.
It is just possible that a few brave individuals have, in the past, written collaboratively, and done so successfully. Not using Office Web Apps, true, but they had other obstacles to overcome.
Re: Stolen Ideas
Forbidden Planet: Stolen from Shakespeare's 'The Tempest'.
And The Tempest draws heavily on Ovid's Metamorphoses and other works. Yes, we all know about influence studies (one hopes). Structural narratologists, such as the Russian Formalists (e.g. Propp) or Northrop Frye, did this whole "there are no new stories" bit to death in the early 20th century.[1] It's really not necessary to rehash it all now, is it?
[1] Which doesn't mean there isn't room for new work along these lines, of course. Recent years have seen a number of interesting computational approaches to structuralist narratology, for example in applying automated narratological analysis to large corpora of stories, or creating computational models of narrative structures. But bare observations like "all stories borrow from other stories" are a tad sophomoric.
Re: Copying from the BBC
despite the fact that it's entry into common parlance
Right, that's two canes now. We'll get you yet, my pretty, and your little dog too.
(I knew someone who claimed to have been offered a job at MIπ, but he turned them down, as he found their reasoning circular and irrational. And another chap who said he'd worked at MIi, but I'm sure they're imaginary.)
Re: Pardon me sir...
"Pardon me sir for mistaking me"? You're apologizing to yourself? Is this a new variation on Step 8?