Re: Already in the 1980s
You could, if you added sideways RAM. In any case, most games used lower resolution screen modes not only to save space but also to make updates faster.
Double buffering was widely used on home computers in the early 1980s -- I remember doing it on my BBC micro, and most games did it to get smoother updates. I suspect the method is much earlier than that.
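The technique is simple enough to sketch in a few lines. This is illustrative Python with plain lists standing in for frame buffers; none of the names refer to any real graphics API:

```python
# Double buffering sketch: draw the next frame off-screen, then swap.
WIDTH = 8

front = [" "] * WIDTH   # what the "screen" currently shows
back = [" "] * WIDTH    # where the next frame is drawn

def draw_frame(buf, ch):
    # Stand-in for rendering a whole frame into a buffer.
    for i in range(WIDTH):
        buf[i] = ch

def flip():
    # On real hardware this is a single register/pointer swap, often done
    # during vertical blanking, so the viewer never sees a half-drawn frame.
    global front, back
    front, back = back, front

draw_frame(back, "*")   # render off-screen
flip()                  # then show the finished frame atomically
print("".join(front))
```

On machines like the BBC Micro the "swap" amounted to pointing the video hardware at the other screen memory area, which is why the updates look smooth: the display never shows a partially drawn frame.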
There is a lot of research in typed assembly language, proof-carrying code, and so on, that allow static verification of safety properties without relying on sandboxing. Something like that would be great. I don't know enough about WebAssembly to decide if it does that, but I suspect not.
The payload of this glider is two people plus life support for them, in addition to instruments for sampling air. So my guess is 300-400 kg. That could be enough to carry a small rocket that could reach space, but probably not enough to get anything into orbit. Using a balloon to carry a rocket to the edge of the atmosphere seems more practical.
From the orbit of Pluto (which is a similar distance to our sun as this planet is from its main star), our sun just looks like a very bright star. The two other stars are even further away, so there is very little light indeed.
It is plausible that the planet is still somewhat hot, since it is only 16 million years old, and it is possible that tidal heating might warm some moons. But there is little chance that life has had time to evolve: Initially, the planet would be too hot for life, so if it has the right temperature for life now, it has only had that for a couple of million years, which is probably not enough. The moons are not much better off.
There are much better candidates for life among the known exoplanets.
A Fredkin gate can in theory copy classical bits (and hence basis-state qubits): Use the bit as the control and apply 0 and 1 to the two swap inputs. The control will be unchanged, but one of the outputs will be a copy of the control (and the other will be its negation). For a qubit in superposition, the gate entangles rather than copies, as the no-cloning theorem demands.
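The claim is easy to check for basis states with a few lines of Python. A Fredkin gate is just a controlled swap; this toy model handles classical bits only, so it says nothing about superpositions (where no-cloning applies):

```python
def fredkin(c, a, b):
    """Fredkin gate (controlled swap): if control bit c is 1, swap a and b."""
    return (c, b, a) if c == 1 else (c, a, b)

# Feed constants 0 and 1 into the swap inputs and use the bit to copy
# as the control: the first swap output is then always a copy of the
# control, and the second is its negation.
for c in (0, 1):
    ctrl, out1, out2 = fredkin(c, 0, 1)
    print(ctrl, out1, out2)  # out1 == c, out2 == 1 - c
```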
IIRC, ARM2 was fully static, so you could single-step through an instruction sequence or stop the clock indefinitely. So I was surprised to hear that the flags in ARM1 used dynamic logic.
If the flag logic was the only dynamic logic on the chip, it makes good sense to change that in the redesign to get single-step capability, so it is perfectly plausible that this happened.
PC and server processors have typically had a high unit price and, hence, high earnings per unit. This is the market Intel has mainly succeeded in. Processors for IoT need to have a very small unit price, which means lower earnings per unit. Intel has previously had some success with 8-bit embedded processors, but they are being pushed out of that market by low-end ARM cores in highly integrated SoCs.
Intel has traditionally not done SoCs. One reason is that no single SoC fits all purposes. ARM handles that by licensing: A large number of different companies make an even larger number of different SoCs by integrating their own peripherals around ARM cores. Intel doesn't license its cores.
So if Intel wants to get into the IoT market, they should start licensing. The x86 platform probably has too much complexity baggage to compete effectively against ARM in that market, so Intel should design a simple 64-bit microprocessor that they can license to SoC builders.
Alternatively, Intel could get its income by fabricating processors for other companies in its foundries. Intel has pretty good fabrication technology, so if it can't compete on processor sales, it might very well compete on chip fabrication.
The idea of using a filament to produce an electric field to ionize the gas has the problem that it is fragile and likely to melt when surrounded by hot plasma. 27escape proposed using magnetic fields, and that has more merit. After all, this is what fusion reactors use to contain plasma that is easily hot enough for a light sabre. This could also explain the sounds when light sabres clash (the magnetic fields interfere and create extra ionisation) and even the fact that they stop each other: If the fields have the same polarity, they would repel. But it would need serious trickery to create a strong, shaped magnetic field from something the size and shape of a light-sabre handle.
In Thumb-2, conditional execution was replaced by an if-then (IT) instruction that specifies which of the following (up to four) instructions are executed when the condition is true and which are executed when it is false. Specifying this ahead of time is better for pipelining. ARM64 has, IIRC, done away with generalised conditional execution entirely, except for jumps. I suspect this is to make implementation simpler and because branch prediction can make jumps almost free, so all you would save with conditional instructions is code space.
IIRC, the MUL and MLA instructions took four bits per cycle from the first operand, so a multiplication could take up to 8 cycles. This also meant that it would be an advantage to have the smallest number as the first operand, so multiplication terminates early.
Expanding a constant multiply into shifts and adds speeds up the computation only if the constant is relatively large and has few 1-bits (a bit simplified, as you can handle many 1-bits if you also use subtraction, so it is really the number of changes between 1-bits and 0-bits that counts). But multiplying by, say, 255 or 257 was indeed faster to do by shift-and-add/subtract.
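The 255 and 257 cases written out (illustrative Python; on the ARM each would compile to a single instruction, since the shift comes for free on an operand):

```python
def mul255(x):
    # x * 255 == x * 256 - x: one shift and one subtract
    return (x << 8) - x

def mul257(x):
    # x * 257 == x * 256 + x: one shift and one add
    return (x << 8) + x

print(mul255(7), mul257(3))  # same results as 7*255 and 3*257
```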
AC wrote: 'Code density is a good benchmark of the "goodness" of an ISA that doesn't basically boil down to "it's good because I like it, that makes it good".'
Code density is only one dimension of "goodness", and it is one of the hardest to measure. If you measure compiled code, the density depends as much on the compiler (and optimisation flags) as it does on the processor, and if you measure hand-written code, it depends a lot on whether the code was written for compactness or speed and how much effort the programmer put into this. So you should expect 10-20% error on such benchmarks. Also, for very large programs, the difference in code density is provably negligible: You can write an emulator for the more compact code in constant space, and the larger the code is, the smaller a proportion of the code size is taken by the emulator. This is basically what byte code formats (such as JVM) are for.
I agree that the original ARM ISA is not "optimal" when it comes to code density, but it was in the same ballpark as the 80386 (using 32-bit code). The main reason ARM made an effort to further reduce code size and Intel did not was because ARM targeted small embedded systems and Intel targeted PCs and servers, where code density is not so important. Also, Thumb was designed for use on systems where the data bus was 8 or 16 bits wide, so having to read only 16 bits per instruction sped up code execution. The original ARM was not designed for code density, but for simplicity and speed.
Asdf wrote: " Intel has tried repeatedly to kill their abomination but the market won't let them"
That is mainly because the processors that Intel designed to replace the x86 were utter crap. Most people vaguely remember the Itanium failure, but few these days recall the iAPX 432, which was supposed to replace the 8080. Due to delays, Intel decided to make a "stop-gap" solution called the 8086 for use until the 432 was ready. While Intel managed to make functional 432 processors, they ran extremely slowly, partly because of an object-oriented data model and partly due to bit-level alignment of data access. Parts of the memory-protection hardware made it into the 80286 and later x86 designs, but the rest was scrapped. Itanium did not do much better, so Intel had to copy AMD's 64-bit x86 design, which must have been a blow to their pride.
If Intel had designed a simple 32-bit processor back in the early 1980s, ARM probably would not have existed. Acorn designed the ARM not because they wanted to compete with x86 and other processors, but because they were not satisfied with the commercial selection of 16/32-bit microprocessors at the time (mainly the Intel 8086, Motorola 68000, Zilog Z8000 and National 32016). If there had been a good and cheap 16/32-bit design commercially available, Acorn would have picked that.
In Kim Stanley Robinson's Mars trilogy (which is highly optimistic in terms of technology, BTW), they send comets to skim the atmosphere of Mars, shedding water vapour as they go. This seems like a safer way to add water vapour (which is also a greenhouse gas) to the atmosphere. It would require a lot of comets, but it is still better than nukes.
But I agree that terraforming Mars will always be rather iffy, because it is small, far from the sun, and inactive geologically.
Given today's focus on profit from short-term fluctuations in share prices (even down to milliseconds), share prices are highly volatile and have very little to do with the profitability of the company. Since trading on share prices is a zero-sum game, the main effect is that the clever (and fast) take money from the not-so-clever (and not-so-fast). Society as a whole gains zip from this. Quite the opposite, in fact, as we have to bail out banks that made the wrong bets, while the banks that make the right bets channel their profits into huge bonuses for their CEOs, CFOs, etc., who immediately send them to the Cayman Islands.
So, IMO, shares should only be traded at par, so the only potential gain from shares would be payment of dividends. This is, by its nature, not a zero-sum game, and millisecond trading will not profit from it.
I don't see this happening, though. Not only will many financial institutions do what they can to prevent it, but they will find ways around a law that requires shares to be sold at a fixed price: Rather than trading the shares themselves at varying prices, they will trade papers that promise to buy or sell shares (at the fixed price) at a specified later date. These papers will be subject to price fluctuations, unless the laws forbid these too. And if they do, the banks will just find some other form of derivative to trade.
"Cross-platform? Up to a point. F# seems to be tied to Visual Studio".
I use Emacs for F# programming. Also, MS is releasing an open-source, cut-down version of Visual Studio for Linux and MacOS, so even if you want an IDE, you are not tied to Windows.
My main gripe with F# is that it is based on OCaml instead of Standard ML. This makes the syntax (especially for pattern matching) a bit clumsy in places. But, overall, F# is a big improvement over C#, Java, C++ and a host of other mainstream languages. If Apple's Swift is made truly cross-platform, it may rival F#, though.
JLV wrote: "And on a more general note, what are the design features that you disagree with?"
I admit that I may not be entirely up to date with the newest developments, but from what I remember from reading about it a while back:
- The choice of reference-counting GC. It is slow, and it necessitates using weak pointers if you create cyclic structures such as doubly-linked lists. There are concurrent GC algorithms that would be much better.
- That only objects are boxed. This makes it impossible to create a recursive enum without using objects.
- A rather verbose syntax for enums, as well as a few other odd syntactic choices.
Actually, the three-body problem IS chaotic in the modern, mathematical sense. There are non-chaotic three-body configurations (for example, when several small bodies orbit a much larger mass in near-circular orbits), but the general problem is chaotic. Smaller moons closely orbiting two large co-orbiting bodies are almost bound to be a chaotic system.
He mentioned wanting to learn Haskell. Here is a link to a small Haskell program for solving Sudoku: http://web.math.unifi.it/users/maggesi/haskell_sudoku_solver.html
That resolution only makes sense for wall displays, but a DisplayPort connection could drive one such display.
This drive for higher resolution resembles the similar drive in digital cameras, where the resolution of most sensors is now much higher than the resolving power of the lenses. What digital cameras need instead is better light sensitivity.
On a laptop or tablet screen, 4K is more than enough (3K is IMO the useful limit for screens under 17"). As with cameras, what is needed is not more pixels but better colour reproduction and better visibility in sunlight. Reflected-light screens (like Qualcomm's Mirasol, once fully developed) may be the future.
Most code obfuscation is done at the lexical level: whitespace and comments are eliminated, variables and procedures are renamed, macros are expanded, and so on. As mentioned in the article, such tools cannot hide coding style, as this goes far beyond lexical details. So a good obfuscation tool must work on the semantic level of the program: It must replace code with semantically equivalent code using more than just local syntactic or lexical transformations. This is very difficult to do, especially if the language semantics is loosely specified (*cough* C *cough*). Writing such a tool is (at least) as complicated as writing a compiler, which is why it is rarely done. But there is research that points the way: http://dl.acm.org/citation.cfm?id=2103761
That competitors who complete more tasks in coding competitions have, on average, longer programs than those who complete fewer tasks is not surprising. Mark Twain is credited with ending a letter: "I apologize for the length of this letter. If I had had more time, it would have been shorter." The same is true for programming: it takes more time to write shorter code. It is often faster to cut-and-paste and do local modifications than to make a parameterised procedure to cover all cases, and sometimes it is faster to special-case on different inputs than to make a general solution, which often requires insights that take too long to obtain when you are pressed for time. And you certainly don't want to spend time simplifying code that already works. Good competition programmers also often have a standard skeleton program that they modify for each task, because it is faster than starting from scratch. So there will often be procedures that the programmers do not bother to remove even if they don't use them. They do no harm, so why spend time removing them?
Coding competitions are very different from normal programming: The problems are small and self-contained, so you don't have to worry about modularisation or readability of the code (in a few hours, nobody will ever look at the code again), and the process is more explorative than normal coding. So you can't draw conclusions about general coding style from such competitions.
The article mentions three factors that (somewhat) improve code quality: strong typing, static typing and managed memory. Erlang has strong typing and managed memory, but it is dynamically typed. So by the (somewhat weak) conclusions in the paper, it is no surprise that Erlang is close to the average.
What would be interesting is factoring out the number of years the programmers have worked with their language: C and C++ programmers have at least the potential to have worked longer with their language than Erlang, TypeScript and Haskell programmers, and it is very likely that programmers who know their language intimately will make fewer errors. This also makes a case for languages that are sufficiently simple that you can actually manage to learn the whole language.
Also, these days, huge libraries mean that you can quickly produce useful code in almost any language, as long as you stay within the scope of the libraries. This makes the actual properties of the language itself largely irrelevant, again as long as you stay within the scope of the libraries. So a true test of language (isolated from library) productivity/quality would require giving programmers a task where they cannot make any significant use of non-trivial library functions, or where you deliberately forbid use of anything but the most basic libraries, e.g., by limiting the total library code used to, say, 1000 lines.
If they split off the PC-processor and graphics department in a separate unit, it is possible that they could sell it to ARM. Not so much for the products, but for the patents and the technology.
While it is true that these features are what is generally seen to distinguish RISC from CISC, the original MIPS design has a large part in that definition: It was (alongside the Berkeley RISC processor, which is the forefather of SPARC) basically what defined the concept.
I have long thought that ARM should have moved the PC out of the numbered registers when they moved the status register to a separate, unnumbered register. While you save a few instructions by not having to make separate instructions for saving/loading the PC, PC-relative loads, etc., most instructions that work on general registers are meaningless to use with the PC. And in all but the simplest pipelined implementations, it complicates the hardware to make special cases for R15 (the PC). So this move is hardly surprising. I'm less sure about the always-0 register. I think it would be better to avoid this (gaining an extra register), and make a few extra instructions for the cases where it would be useful, e.g., comparing to zero.
And while code density is less of an issue now than ten years ago, I think ARM should have designed a mixed 16/32-bit instruction format. For simplicity, you could require 32-bit alignment of 32-bit instructions, so you would always use 16-bit instructions in pairs, and branch targets could likewise be 32-bit aligned. For example, a 32-bit word starting with two one-bits could signify that the remaining 30 bits encode two 15-bit instructions where all other combinations of the two first bits encode 32-bit instructions.
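A decoder for this hypothetical encoding could be sketched as follows. This is purely an illustration of the scheme proposed above, not any real ARM instruction format:

```python
def decode(word):
    """Split one 32-bit fetch under the proposed mixed encoding:
    top two bits 11 -> two 15-bit instructions packed in the word;
    any other top bits -> a single 32-bit instruction (the top bits
    are then part of its opcode)."""
    assert 0 <= word < 1 << 32
    if word >> 30 == 0b11:
        first = (word >> 15) & 0x7FFF   # upper 15-bit instruction
        second = word & 0x7FFF          # lower 15-bit instruction
        return [first, second]
    return [word]                       # one full 32-bit instruction

print(decode(0xFFFFFFFF))  # a packed pair of 15-bit instructions
print(decode(0x12345678))  # a single 32-bit instruction
```

Because everything is fetched and aligned in 32-bit units, the decoder never sees an instruction straddling a word boundary, which keeps the fetch stage as simple as for a fixed-width ISA.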
Though the 64-bit ARM instruction set is very simple (it looks more like MIPS than 32-bit ARM), and could easily be implemented directly on a simple pipelined '80s-style CPU, most modern CPUs are superscalar, which means that internally they are variants of dataflow machines: Instructions are executed when their operands are available, not in the order they are written in the code, and they use a much larger internal register set than the visible register set, renaming visible registers to internal registers on the fly. Compiling blocks of ARM code to the dataflow machine and storing the result makes sense, as you can skip the decode, schedule and register-renaming stages of the execution pipeline. In particular, that would make mispredicted jumps run faster, as you don't get quite as large pipeline stalls.
Apple has (almost) always wanted as much control as possible over their hardware platform, both to tune hardware and software to each other and, equally importantly, to prevent people from running their software on machines not made by Apple.
By making their own System-on-Chip (SoC), they can pretty much ensure that only computers made by Apple can run their software. As for performance, the gap between ARM and Intel is dwindling now that ARM has a line of 64-bit processors. And, as the article said, MacBooks are not known as power houses anyway.
Somebody mentioned migration as an issue, but it is much less of an issue than it was at the previous processor switches (from 68K to PPC and from PPC to x86), as most software is now written in high-level languages that can easily be compiled for other processors. Also, Mac OS X and iOS already share a large code base that is just compiled for the different platforms.
So, technically, I see no major hindrance. The main reason to stay with x86 would be if Apple could pressure Intel to give them really good prices by threatening to switch if they don't.
Apart from not really needing formatting when writing a novel (apart from chapter headings, occasional blank lines and, if you write like Jonathan Stroud or Susanna Clarke, footnotes), Word takes up a lot of screen real estate for menus and similar stuff, so you see relatively less of your text while writing.
A raw text editor (VI, Emacs, gedit, ...) shows more text (and often more legibly) than Word, it loads faster, it scrolls faster and the text files are much smaller, so you can keep every previous version around even on a tiny machine. And raw text files can easily be imported into whatever software the publisher uses for the final typesetting.
Giger does not seem to have influenced Lynch's Dune film much, if at all. Giger did, however, make a lot of concept art for Jodorowsky's (sadly uncompleted) Dune project, in particular designs for the Harkonnen palace, and also some designs for another uncompleted Dune adaptation by Dino De Laurentiis.
While Intel is correct in saying that most software these days is written for single-core, sequential processors, and that it is, indeed, easier to do so, there is little doubt that the future belongs to massive parallelism from many small cores rather than speedy single cores: At a given technology, doubling the clock speed will roughly quadruple power use, because you both need more power to make the transistors switch faster and because every switch costs power. For the same power budget, you can get four cores, which gives you twice the total compute power. It is true that there are inherently sequential problems, but these are fairly rare, and the jobs that require most compute power (cryptoanalysis, data mining, image processing, graphics, ...) are typically easy to make parallel.
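The back-of-the-envelope arithmetic behind that claim, using the rough model from above (power growing with the square of clock frequency is a simplifying assumption, and the throughput figures assume a perfectly parallel workload):

```python
def power(freq, cores=1):
    # Rough model: each core's power scales with freq**2, since faster
    # switching needs higher voltage and every switch costs energy.
    return cores * freq ** 2

def throughput(freq, cores=1):
    # Idealised: total compute scales with cores * clock for a
    # perfectly parallel workload.
    return cores * freq

# One core at double the clock vs. four cores at the original clock:
assert power(2.0) == power(1.0, cores=4)                  # same power budget
assert throughput(1.0, cores=4) == 2 * throughput(2.0)    # twice the work
```

Real workloads fall short of the ideal (Amdahl's law), but for the embarrassingly parallel jobs listed above the model is close enough to explain why many slow cores win on performance per watt.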
Intel's strength is in making fast cores, but they are also power-hungry and expensive. The former because power didn't matter much for desktop PCs and servers up through the 90s and 00s, and the latter because Intel had a de-facto monopoly on x86 processors for PCs and servers, and most 3rd-party software was written exclusively for x86. These days, power matters more: Desktop PCs are more or less a thing of the past (except among a few hardcore gamers) and power use is increasingly an issue in data centres. Intel is trying to adapt, but it fears losing its dominant position before the adaptation is complete. Hence, these bombastic claims.
There are some towns in Denmark that even the locals find embarrassing: Tarm (intestine, though originally used for any long and narrow passage), Hørmested (smelly place, origin probably from "horn"), Bøvl (trouble, originally "bend"), and Lem (member, originally "barrow place").
Then there are some that are just mildly funny, such as Sengeløse (without beds), Springforbi (jump past), Tappernøje (taps liquid precisely), Bagsværd (back sword, though originally back sward) and Middelfart (middle speed, originally middle ferry passage), though we can see why some English speakers find that name a bit embarrassing.
I can't see how that name is unfortunate. In Danish the meaning is roughly "Middle seafaring" and alludes to the fact that there used to be ferry services from Middelfart to Jutland. Now there's a bridge, but you do pass Middelfart if you go from the island Fyn to Jutland. "Fart" can also mean movement or speed in general, so there is a joke that goes like this: "Q: Why are there so many speeding tickets between Odense and Fredericia? A: Because you have to go over Middelfart." (middle speed).
By the same reasoning you call Middelfart unfortunate, Middlesex is downright disastrous.
Unlike PC software, a lot of server software is written portably, at least across multiple versions of Unix/Linux. This is partly because the server landscape already supports several processor architectures: x86, x64, Sparc, and even Itanium, but also because server software doesn't use machine-specific GUI APIs. So it is relatively painless to port server software from one Unix/Linux/BSD platform to another. There may be differences in how compilers handle corner cases (due to the woefully underspecified C standard) and implicit assumptions about byte order and how unaligned access is handled, but it is still a lot easier than, say, porting Windows software to Linux or MacOS.
So, while software pretty much locked desktop users to the Windows/x86 platform, there has never been quite the same lock-in to a single platform in the server world.
I agree, and have previously proposed a similar idea. The main problem is that true innovation is hard to measure as effort and cost: If you have a flash of inspiration and find a truly ingenious solution to a problem, what is your cost and effort? Do you count all the non-productive hours where you didn't find anything of value? Do you count all the time you used to educate yourself to the level where you could understand the problem (and its solution)?
Patents have meaning in two situations: 1. Where a product has cost a fortune to develop and test but is easy to copy (such as medicine). 2. Where a simple and ingenious solution is found to a problem -- something that nobody had seen before, but which is obvious once you see it. Your idea works for the first kind, but fails for the second. So we should have two different ways of protecting IP: One that covers expensive development processes and one that covers ingenious ideas. The first kind could easily give exclusive rights until the documented expenses including interest (plus, say, 50-100%) have been earned back through profits or licensing. The second kind would need some other mechanism, which is a lot harder to make fair. It could be done by having a panel of experts in the field rate the innovation of the solution and assign a value to it. Until that value is regained, the inventor keeps the rights. It should still be possible to challenge the rated value, for example by pointing to prior art.
Speaking of prior art, a major problem is that patent offices only search previous patents and not scientific literature. So patents are often granted for things that have been known in scientific communities for ages, but never been patented. These patents can be challenged, but it is far too costly to do so. Maybe patents should have a trial period: Anyone can, for a modest fee, indicate prior art. If the prior art is accepted, the fee is returned and the patent invalidated.
Even if Google wants its own ARM server chips, it does not need to design them themselves. They can get AMD, Marvell, Samsung, or a whole host of other experienced chip companies to do so for them. Or they can, like Apple did, buy a chip design company to do it in-house. But I don't really think so. Google does not have Apple's paranoid need to have full control over all aspects of the hardware, and for the same reason I don't see Google getting into designing processors for phones or tablets.
For flying too close to the sun and burning its wings, Icarus would be a better name for ISON.
If you wanted to make a terrorist attack, why wait until you are through airport security? Airports are so crowded outside security that exploding a bomb there would kill as many as doing it on a plane. Or do it in a mall on Black Friday, at a train station or in a zillion other crowded places that have little or no security checks. A chain is only as strong as its weakest link, so why make this particular link so much stronger than the rest?
I suspect that one of the reasons for all the restrictions on what liquids etc. you can bring is to increase sales in the "Tax Free" shops. In many airports half a litre of bottled water costs around €2, for example. This also explains the apparent contradiction that you can buy stuff there that you wouldn't be allowed to bring in.
to address more than 4GB. Cortex A15 has an address bus of 48 bits (IIRC), but uses only 32-bit registers. This means that a single program using more than 4GB will need to use the MMU to switch between banks of memory (much like old 8-bit computers used banked memory to have more than 64K total). But phones and tablets usually run many programs at once, and each of these can use their own 4GB out of a larger total RAM without needing to do fancy tricks -- the OS does the bank switching.
On a longer time scale, I don't see much future in huge, flat address spaces: The future is parallel, so instead of a few cores sharing a large flat memory space, you have many cores each with their own memory communicating over channels. So 64 bit registers for memory addressing is not all-important. You can more easily operate on large integers, though, which is an advantage for cryptographic applications and a few other things. So it is by no means irrelevant, but the importance has been hyped a bit when Apple released their new phone.
All astronomers can see is the size of the planets and their approximate distance from their suns. If a planet is at a distance that allows liquid water and it is not gas-giant sized, it is deemed potentially habitable. But it takes more than potential liquid water to make a planet habitable: It takes real liquid water and an atmosphere with sufficient oxygen. The latter implies life, as free oxygen reacts with other substances, so it needs to be continually renewed, and life is AFAIK the only realistic mechanism for that.
However, discounting gas giants ignores the possibility of habitable moons orbiting these. A large fraction of known exoplanets are much more massive than Jupiter, and many of these are quite close to their stars. Probably not because such planets really are more common, but because they are easier to find, since they make their stars wobble visibly. Some of these super-massive planets are in the "Goldilocks zone", where liquid water may exist. While the planets themselves are hostile to human life, they may have Earth-sized moons that could be habitable. In our own solar system, gas giants have sizeable moons, though none are near Earth size. But it is not unreasonable that larger planets may have larger moons, so moons of super-massive planets may be a better bet for habitation than planets. A single super-massive planet may even have several habitable moons.
While Intel leads in speed in the current crop of processors, there is nothing inherent in the architectures that implies that ARM processors must be slower than x86 processors. When ARM first came out, it was faster than the x86 processors of the time. In the 90s, Intel caught up and surpassed ARM, but that was mainly because the focus for ARM processors shifted from desktop to mobile systems, in particular phones and PDAs. In the last decade, Intel has had the advantage of a 64-bit architecture, but now ARM has one too, and one that is simple enough to be implemented for high speeds at modest cost.
Since anyone can buy an architecture license and implement their own ARM processors, it is entirely possible that a big server producer will do that to make fast servers.
Word has grown in complexity to such an extent that even the save format (.docx) requires more than 7000 pages to specify its form and behaviour (http://en.wikipedia.org/wiki/Office_Open_XML#ISO.2FIEC_29500:2008). 99.9% of all users would be happy with something much less complex, such as HTML. HTML 4.01 requires less than 400 pages of specification (http://www.w3.org/TR/REC-html40/). And 99% would probably be happy with much less than even HTML.
HTML has the advantage of being readable and editable by humans without using a WYSIWYG editor (though many of these exist), so power users can get the precision and flexibility of editing raw HTML + CSS while most will be happy with selecting a predefined style and editing via a WYSIWYG editor.
HTML is not perfect, but it is a widespread standard with many implementations, and it is readable by anyone who has a browser (which even telephones have these days). The relatively simple format allows external tools to process documents fairly easily (compared to processing .docx files), so you don't need to have stuff like versioning, bibliography references, table-of-contents generation and so on built into the format: Just apply an external tool, similar to how, with LaTeX, you apply BibTeX and MakeIndex as external tools.
I will still continue to use LaTeX for my own use, but if forced out of it, I would much rather work with HTML-based text processors than with Word or derivatives like Open Office and Libre Office.
While really old models aren't very sellable, the iPhone 4 and 4S still fetch reasonable prices on the 2nd-hand market, and there are quite a few for sale on eBay and similar sites. So Apple doesn't need to introduce cheaper models to expand their market: They can rely on 2nd-hand sales to do so. While a 2nd-hand sale does not directly give Apple any income, sales of apps will do so over time, and purchasers of a 2nd-hand iPhone are likely to upgrade to a new model in the future.
Am I the only one who thinks that passing obstacles 20% of its own height is less than impressive? A house cat can jump over two meters up -- from a standstill. So until the robot becomes a high jumper and a proficient climber of trees, I would not call it cat-like.
I'm pretty sure you mean that the show starts at 8pm on 18 May rather than 8pm today.
Usually, such claims are based on maximum theoretical performance -- driving all processing units at full load. You may be able to get very close to that with specially-designed benchmark programs, but it is not realistic to get anywhere near this in "real" code.
My guess is that the actual performance gain is around x2 for graphics-intensive tasks and x1.5 for tasks that mostly involve the CPU alone. Not a bad gain, but not x3 overall.
Comparing a modern rocket to a V2 is like comparing a Ford Model T to a modern car: Both run gasoline-burning piston engines with a transmission shaft and mechanical gears. Even modern electric cars are not fundamentally different from those that were designed 100+ years ago.
The reason is that the basic design works well. The same is true for rockets. And that is why viable alternatives are still the stuff of the future.
One idea that hasn't been mentioned is using ionised air as the main propellant: The motor is a linear accelerator that ionises the incoming air, accelerates it and expels it at the other end. The energy could come from a nuclear reactor or (for a slower acceleration) solar panels. A solar-powered craft could use traditional propellers as a first stage to lift the craft to high altitude, where the next stage (with smaller wings) uses the linear accelerator motor. Eventually, the air would be too thin, at which point you could switch to air you brought along (e.g., liquid nitrogen).
Though, sadly, it never went into production, I found Acorn's Phoebe design quite distinctive.
I also liked the design of the Newbrain: http://www.theregister.co.uk/2011/11/30/bbc_micro_model_b_30th_anniversary/page3.html
Spreadsheets are excellent when you need to add up some columns and rows in an array of values, where some values are not filled in yet. But they are woefully inadequate for the more complex tasks that many people use them for. It is a bit like writing in assembly language: You can do it, but the result is unreadable and even simple errors will not be caught -- neither at compile time nor at runtime.
It is not hard to write domain-specific languages for financial analysis that have built-in checks such as preservation of money: You don't accidentally use the same $ twice, and you don't forget that you have them somewhere. Basically, all the money (or streams of interest payments) that gets into the system is accounted for exactly once in what comes out. This gives a safety that spreadsheets do not. You can, of course, still write nonsensical derivations, but a lot of common mistakes are caught. A bit like static versus dynamic types: The latter require you to write a lot more tests to catch bugs. This is no problem if your programs are small and simple enough, but once you get above a certain degree of complexity, you are likely to miss cases that would have been caught by a good static type system.
Also, by being designed specifically for the task, solving a problem in that language is usually easier and faster even than doing it in a spreadsheet (which is faster than doing it in Java).
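A minimal sketch of the preservation-of-money idea (all names here are hypothetical, not from any real DSL): a cash flow can be split, but each flow must be consumed exactly once, so duplicating money or leaving it unaccounted for becomes an error rather than a silent bug:

```python
# Sketch: linear-use cash flows. Splitting a flow consumes it; using
# a consumed flow again raises an error, so money cannot be counted twice.
class Flow:
    def __init__(self, amount):
        self.amount = amount
        self.consumed = False

    def _consume(self):
        if self.consumed:
            raise ValueError("flow already used: money would be duplicated")
        self.consumed = True

    def split(self, fraction):
        """Consume this flow, returning two flows that sum to it exactly."""
        self._consume()
        part = round(self.amount * fraction, 2)
        return Flow(part), Flow(round(self.amount - part, 2))

salary = Flow(1000.00)
taxes, net = salary.split(0.40)          # 400.00 and 600.00
savings, spending = net.split(0.25)      # 150.00 and 450.00
# salary.split(0.10)  # would raise ValueError: flow already used
print(taxes.amount + savings.amount + spending.amount)  # 1000.0
```

A spreadsheet happily lets you reference the same salary cell in two unrelated sums; here that class of mistake is caught mechanically, which is the point of the built-in check.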
I can't see who Apple is trying to protect (apart from their own image). You need an Internet connection to use their bookstore, and it is not exactly hard to find sexually explicit images on the Internet.
I would much prefer warning labels ("This work contains nudity, swear words and violence" and so on) and a rating system. The default could very well be that you are only shown works with a "child-safe" rating, but you should be able to set the filter details yourself, so you can, for example, avoid violence but accept nudity (or the reverse, which seems to be the American standard).
And, yes, I'm Danish, so censorship offends me more than nudity.