384 posts • joined 21 Sep 2006
Erlang and more
The article mentions three factors that (somewhat) improve code quality: strong typing, static typing and managed memory. Erlang has strong typing and managed memory, but it is dynamically typed. So by the (somewhat weak) conclusions in the paper, it is no surprise that Erlang is close to the average.
What would be interesting is factoring out the number of years the programmers have worked with their language: C and C++ programmers have at least the potential to have worked longer with their language than Erlang, TypeScript and Haskell programmers, and it is very likely that programmers who know their language intimately will make fewer errors. This also makes a case for languages that are sufficiently simple that you can actually manage to learn them in full.
Also, these days, huge libraries mean that you can quickly produce useful code in almost any language, as long as you stay within the scope of the libraries. This makes the actual properties of the language itself largely irrelevant, again if you stay within the scope of the libraries. So a true test of language (isolated from library) productivity/quality would require giving programmers a task where they cannot make any significant use of non-trivial library functions, or where you deliberately forbid use of anything but the most basic libraries, e.g., by limiting the total library code used to, say, 1000 lines.
Sell to ARM?
If they split off the PC-processor and graphics department in a separate unit, it is possible that they could sell it to ARM. Not so much for the products, but for the patents and the technology.
Re: RISC, not IRONIC
While it is true that these features are what is generally seen to distinguish RISC from CISC, the original MIPS design has a large part in that definition: It was (alongside the Berkeley RISC processor, which is the forefather of SPARC) basically what defined the concept.
I have long thought that ARM should have moved the PC out of the numbered registers when they moved the status register to a separate, unnumbered register. While you save a few instructions by not having to make separate instructions for saving/loading the PC, PC-relative loads, etc., most instructions that work on general registers are meaningless to use with the PC. And in all but the simplest pipelined implementations, it complicates the hardware to make special cases for R15 (the PC). So this move is hardly surprising. I'm less sure about the always-0 register. I think it would be better to avoid this (gaining an extra register), and make a few extra instructions for the cases where it would be useful, e.g., comparing to zero.
And while code density is less of an issue now than ten years ago, I think ARM should have designed a mixed 16/32-bit instruction format. For simplicity, you could require 32-bit alignment of 32-bit instructions, so you would always use 16-bit instructions in pairs, and branch targets could likewise be 32-bit aligned. For example, a 32-bit word starting with two one-bits could signify that the remaining 30 bits encode two 15-bit instructions where all other combinations of the two first bits encode 32-bit instructions.
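For concreteness, here is a little Python sketch of how such a fetch word could be decoded. The encoding is entirely hypothetical, invented for this comment:

```python
# Hypothetical decoder for the mixed 16/32-bit encoding sketched above:
# a 32-bit word whose two top bits are both 1 holds two 15-bit
# instructions; any other top-bit combination is one 32-bit instruction.

def decode_word(word: int):
    """Split a 32-bit fetch word into instruction payloads."""
    assert 0 <= word < 2**32
    if (word >> 30) == 0b11:
        # The remaining 30 bits encode a pair of 15-bit instructions.
        first = (word >> 15) & 0x7FFF
        second = word & 0x7FFF
        return [("short", first), ("short", second)]
    else:
        return [("long", word)]

# Top bits 11: two short instructions (payloads 1 and 2).
print(decode_word(0b11_000000000000001_000000000000010))
# [('short', 1), ('short', 2)]
```

Since 32-bit instructions keep their top two bits as an escape, they lose a little encoding space, but the fetch and alignment logic stays trivial.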
Though the 64-bit ARM instruction set is very simple (it looks more like MIPS than 32-bit ARM), and could easily be implemented directly on a simple pipelined 80's style CPU, most modern CPUs are superscalar, which means that they internally are variants of dataflow machines: Instructions are executed when their operands are available and not in the order they are written in the code, and they use a much larger internal register set than the visible register set, renaming visible registers to internal registers on the fly. Compiling blocks of ARM code to the dataflow machine and storing this makes sense, as you can skip the decode, schedule and register-renaming stages of the execution pipeline. In particular, that would make mispredicted jumps run faster, as you don't get quite as large pipeline stalls.
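As a toy illustration of the renaming idea (no resemblance to any real microarchitecture intended), each write to a visible register can be given a fresh internal register, which removes false dependences between instructions:

```python
# Toy register renaming: every write to an architectural register gets
# a brand-new physical register, so write-after-write and
# write-after-read dependences disappear and independent instructions
# can execute out of order.

from itertools import count

def rename(instructions):
    """instructions: list of (dest, src1, src2) architectural registers.
    Returns the same list rewritten to use physical register numbers."""
    phys = count()        # allocator handing out fresh physical registers
    mapping = {}          # architectural register -> current physical one
    renamed = []
    for dest, src1, src2 in instructions:
        for src in (src1, src2):
            if src not in mapping:      # first read of an unseen register
                mapping[src] = next(phys)
        d = next(phys)                  # fresh register for every write
        s1, s2 = mapping[src1], mapping[src2]   # read before remapping dest
        mapping[dest] = d
        renamed.append((d, s1, s2))
    return renamed

# r1 = r2+r3 ; r1 = r4+r5 : the two writes to r1 no longer collide.
print(rename([("r1", "r2", "r3"), ("r1", "r4", "r5")]))
# [(2, 0, 1), (5, 3, 4)]
```

The two renamed instructions share no registers at all, so a dataflow core can run them in either order.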
Reason #5: Control
Apple has (almost) always wanted as much control as possible over their hardware platform, both to tune hardware and software to each other and, equally important, to prevent people from running their software on machines not made by Apple.
By making their own System-on-Chip (SoC), they can pretty much ensure that only computers made by Apple can run their software. As for performance, the gap between ARM and Intel is dwindling now that ARM has a line of 64-bit processors. And, as the article said, MacBooks are not known as power houses anyway.
Somebody mentioned migration as an issue, but this is much less of an issue than it was at the previous processor switches (from 68K to PPC and from PPC to x86), as most software is now written in high-level languages that can easily be compiled for other processors. Also, Mac OS X and iOS themselves already share a large code base that is just compiled for the different platforms.
So, technically, I see no major hindrance. The main reason to stay with x86 would be if Apple could pressure Intel to give them really good prices by threatening to switch if they don't.
Word bad, raw text editor good
Apart from not really needing formatting when writing a novel (apart from chapter headings and occasional blank lines and, if you write like Jonathan Stroud or Susanna Clarke, footnotes), Word takes up a lot of screen estate for menus and similar stuff, so you see relatively less of your text when writing.
A raw text editor (VI, Emacs, gedit, ...) shows more text (and often more legibly) than Word, it loads faster, it scrolls faster and the text files are much smaller, so you can keep every previous version around even on a tiny machine. And raw text files can easily be imported into whatever software the publisher uses for the final typesetting.
Giger does not seem to have influenced Lynch's Dune film much, if at all. Giger did, however, make a lot of concept art for Jodorowsky's (sadly uncompleted) Dune project, in particular designs for the Harkonnen palace, and also some designs for another uncompleted Dune adaptation by Dino De Laurentiis.
Keeping with horse-drawn carriages
While Intel is correct in saying that most software these days is written for single-core, sequential processors, and that it is, indeed, easier to do so, there is little doubt that the future belongs to massive parallelism from many small cores rather than speedy single cores: At a given technology, doubling the clock speed will roughly quadruple power use, because you both need more power to make the transistors switch faster and because every switch costs power. For the same power budget, you can get four cores, which gives you twice the total compute power. It is true that there are inherently sequential problems, but these are fairly rare, and the jobs that require most compute power (cryptoanalysis, data mining, image processing, graphics, ...) are typically easy to make parallel.
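The arithmetic above can be checked with a trivial model, assuming (as above, and it is a simplification) that power grows with the square of the clock speed and that the workload parallelises perfectly:

```python
# Back-of-the-envelope check of the clock-vs-cores argument, under the
# simplified assumption that power scales with the square of the clock.

def power(clock, cores=1):
    return cores * clock ** 2      # relative units

def throughput(clock, cores=1):
    return cores * clock           # assumes perfectly parallel work

# One core at double clock vs four cores at the original clock:
fast_single = (power(2.0), throughput(2.0))
many_slow   = (power(1.0, cores=4), throughput(1.0, cores=4))

print(fast_single)   # (4.0, 2.0): quadruple power, double speed
print(many_slow)     # (4.0, 4.0): same power, quadruple speed
```

Same power budget, twice the total compute power of the doubled-clock core, exactly as claimed, as long as the work can actually be split four ways.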
Intel's strength is in making fast cores, but they are also power-hungry and expensive. The former because power didn't matter much for desktop PCs and servers up through the 90s and 00s, and the latter because Intel had a de-facto monopoly on x86 processors for PCs and servers, and most 3rd-party software was written exclusively for x86. These days, power matters more: Desktop PCs are more or less a thing of the past (except among a few hardcore gamers) and power use is increasingly an issue in data centres. Intel is trying to adapt, but it fears losing its dominant position before the adaptation is complete. Hence these bombastic claims.
REALLY unfortunate Danish town names
There are some towns in Denmark that even the locals find embarrassing: Tarm (intestine, though originally used for any long and narrow passage), Hørmested (smelly place, origin probably from "horn"), Bøvl (trouble, originally "bend"), and Lem (member, originally "barrow place").
Then there are some that are just mildly funny, such as Sengeløse (without beds), Springforbi (jump past), Tappernøje (taps liquid precisely), Bagsværd (back sword, though originally back sward) and Middelfart (middle speed, originally middle ferry passage), though we can see why some English speakers find that name a bit embarrassing.
I can't see how that name is unfortunate. In Danish the meaning is roughly "Middle seafaring" and alludes to the fact that there used to be ferry services from Middelfart to Jutland. Now there's a bridge, but you do pass Middelfart if you go from the island Fyn to Jutland. "Fart" can also mean movement or speed in general, so there is a joke that goes like this: "Q: Why are there so many speeding tickets between Odense and Fredericia? A: Because you have to go over Middelfart." (middle speed).
By the same reasoning you call Middelfart unfortunate, Middlesex is downright disastrous.
Unlike PC software, a lot of server software is written portably, at least across multiple versions of Unix/Linux. This is partly because the server landscape already supports several processor architectures: x86, x64, SPARC, and even Itanium, but also because server software doesn't use machine-specific GUI APIs. So it is relatively painless to port server software from one Unix/Linux/BSD platform to another. There may be differences in how compilers handle corner cases (due to the woefully underspecified C standard) and implicit assumptions about byte order and how unaligned access is handled, but it is still a lot easier than, say, porting Windows software to Linux or MacOS.
So, while software pretty much locked desktop users to the Windows/x86 platform, there has never been quite the same lock-in to a single platform in the server world.
Re: The problem with patents
I agree, and have previously proposed a similar idea. The main problem is that true innovation is hard to measure as effort and cost: If you have a flash of inspiration and find a truly ingenious solution to a problem, what is your cost and effort? Do you count all the non-productive hours where you didn't find anything of value? Do you count all the time you used to educate yourself to the level where you could understand the problem (and its solution)?
Patents have meaning in two situations: 1. Where a product has cost a fortune to develop and test but is easy to copy (such as medicine), 2. Where a simple and ingenious solution is found to a problem -- something that nobody had seen before, but is obvious once you see it. Your idea works for the first kind, but fails for the second. So we should have two different ways of protecting IP: One that covers expensive development processes and one that covers ingenious ideas. The first kind could easily give exclusive rights until the documented expenses plus interest (and, say, a 50-100% premium) have been earned back through profits or licensing. The second kind would need some other mechanism that is a lot harder to make fair. It could be done by making a panel of experts in the field rate the innovation of the solution and assign a value to it. Until that value is regained, the inventor keeps the rights. It should still be possible to challenge the rated value, for example by pointing to prior art.
Speaking of prior art, a major problem is that patent offices only search previous patents and not scientific literature. So patents are often granted for things that have been known in scientific communities for ages, but never been patented. These patents can be challenged, but it is far too costly to do so. Maybe patents should have a trial period: Anyone can, for a modest fee, indicate prior art. If the prior art is accepted, the fee is returned and the patent invalidated.
Who will design it?
Even if Google wants its own ARM server chips, it does not need to design them itself. It can get AMD, Marvell, Samsung, or a whole host of other experienced chip companies to do so for it. Or it can, like Apple did, buy a chip design company to do it in-house. But I don't really think so. Google does not have Apple's paranoid need to have full control over all aspects of the hardware, and for the same reason I don't see Google getting into designing processors for phones or tablets.
For flying too close to the sun and burning its wings, Icarus would be a better name for ISON.
If you wanted to make a terrorist attack, why wait until you are through airport security? Airports are so crowded outside security that exploding a bomb there would kill as many as doing it on a plane. Or do it in a mall during Black Friday, at a train station or in a zillion other crowded places that have little or no security checks. The chain is only as strong as the weakest link, so why make this particular link so much stronger than the rest?
I suspect that one of the reasons for all the restrictions on what liquids etc. you can bring is to increase sales in the "Tax Free" shops. In many airports half a litre of bottled water costs around €2, for example. This also explains the apparent contradiction that you can buy stuff that you wouldn't be allowed to bring in.
You don't need 64 bits
to address more than 4GB. Cortex A15 has a 40-bit physical address space, but uses only 32-bit registers. This means that a single program using more than 4GB will need to use the MMU to switch between banks of memory (much like old 8-bit computers used banked memory to access more than 64K in total). But phones and tablets usually run many programs at once, and each of these can use its own 4GB out of a larger total RAM without needing any fancy tricks -- the OS does the bank switching.
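A toy model of this arrangement (page size and table layout invented for illustration, nothing like a real ARM page table):

```python
# Toy MMU model: each process sees a 32-bit virtual address space, but
# its pages can live anywhere in a larger (here 40-bit) physical
# memory, so several processes together can use far more than 4GB.

PAGE = 4096

class Process:
    def __init__(self, page_table):
        # virtual page number -> physical page number
        self.page_table = page_table

    def translate(self, vaddr):
        assert 0 <= vaddr < 2**32      # registers are only 32 bits wide
        vpn, offset = divmod(vaddr, PAGE)
        return self.page_table[vpn] * PAGE + offset

# Two processes whose virtual page 0 lives in different physical pages,
# both above the 4GB line:
a = Process({0: 2**20})        # physical page 2**20 starts at exactly 4GB
b = Process({0: 2**20 + 1})
print(hex(a.translate(0)))     # 0x100000000 -- beyond what 32 bits hold
print(hex(b.translate(0)))     # 0x100001000
```

Each process still can't see more than 4GB at once, but the OS can park different processes in different parts of physical RAM, which is exactly the common case on a phone.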
On a longer time scale, I don't see much future in huge, flat address spaces: The future is parallel, so instead of a few cores sharing a large flat memory space, you have many cores each with their own memory communicating over channels. So 64 bit registers for memory addressing is not all-important. You can more easily operate on large integers, though, which is an advantage for cryptographic applications and a few other things. So it is by no means irrelevant, but the importance has been hyped a bit when Apple released their new phone.
All astronomers can see is the size of the planets and their approximate distance from their suns. If a planet is at a distance that allows liquid water and it is not gas-giant sized, it is deemed potentially habitable. But it takes more than potential liquid water to make a planet habitable: It takes real liquid water and an atmosphere with sufficient oxygen. The latter implies life, as free oxygen reacts with other substances, so it needs to be continually renewed, and life is AFAIK the only realistic mechanism for that.
However, discounting gas giants ignores the possibility of habitable moons orbiting these. A large fraction of known exoplanets are much more massive than Jupiter and many of these are quite close to their stars. Probably not because they really are more common, but because such planets are easier to find: they make their stars wobble visibly. Some of these super-massive planets are in the "Goldilocks zone", where liquid water may exist. While the planets themselves are hostile to human life, they may have Earth-sized moons that could be habitable. In our own solar system, gas giants have sizeable moons, though none are near Earth size. But it is not unreasonable that larger planets may have larger moons, so moons of super-massive planets may be a better bet for habitation than planets. A single super-massive planet may even have several habitable moons.
Is ARM slower than x86?
While Intel leads in speed in the current crop of processors, there is nothing inherent in the architectures that implies that ARM processors must be slower than x86 processors. When ARM first came out, it was faster than the x86 processors of the time. In the 90s, Intel caught up and surpassed ARM, but that was mainly because the focus for ARM processors shifted from desktop to mobile systems, in particular phones and PDAs. In the last decade, Intel has had the advantage of a 64-bit architecture, but now ARM has one too, and one that is simple enough to be implemented for high speeds at modest cost.
Since anyone can buy an architecture license and implement their own ARM processors, it is entirely possible that a big server producer will do that to make fast servers.
Word has grown in complexity to such an extent that even the save format (.docx) requires more than 7000 pages to specify its form and behaviour (http://en.wikipedia.org/wiki/Office_Open_XML#ISO.2FIEC_29500:2008). 99.9% of all users would be happy with something much less complex, such as HTML. HTML 4.01 requires less than 400 pages of specification (http://www.w3.org/TR/REC-html40/). And 99% would probably be happy with much less than even HTML.
HTML has the advantage of being readable and editable by humans without using a WYSIWYG editor (though many of these exist), so power users can get the precision and flexibility of editing raw HTML + CSS while most will be happy with selecting a predefined style and editing via a WYSIWYG editor.
HTML is not perfect, but it is a widespread standard with many implementations, and it is readable by anyone who has a browser (which even telephones have these days). The relatively simple format allows external tools to process documents fairly easily (compared to processing .docx files), so you don't need stuff like versioning, bibliography reference support and table-of-contents generation built into the format: Just apply an external tool, similar to how you apply BibTeX and MakeIndex as external tools with LaTeX.
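As a small example of the kind of external tool I mean, here is a table-of-contents extractor written with only Python's standard library (the choice of heading levels is mine). Doing the same for .docx would take far more machinery:

```python
# Pull a simple table of contents out of an HTML document using only
# the standard library's HTMLParser.

from html.parser import HTMLParser

class TOCBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_heading = None     # tag of the heading we are inside
        self.toc = []              # collected (level, title) pairs

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = tag

    def handle_data(self, data):
        if self.in_heading:
            self.toc.append((self.in_heading, data.strip()))
            self.in_heading = None

    def handle_endtag(self, tag):
        if tag == self.in_heading:
            self.in_heading = None

doc = "<h1>Intro</h1><p>text</p><h2>Details</h2><p>more</p>"
p = TOCBuilder()
p.feed(doc)
print(p.toc)    # [('h1', 'Intro'), ('h2', 'Details')]
```

Twenty-odd lines for a structured traversal of the whole document; that is the kind of leverage a simple, open format gives you.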
I will still continue to use LaTeX for my own use, but if forced out of it, I would much rather work with HTML-based text processors than with Word or derivatives like OpenOffice and LibreOffice.
2nd hand sales
While really old models aren't very sellable, iPhone 4 and 4S still fetch reasonable prices on the 2nd-hand market and there are quite a few for sale on eBay and similar sites. So Apple doesn't need to introduce cheaper models to expand their market: They can rely on 2nd-hand sales to do so. While a 2nd-hand sale does not directly give Apple any income, sales of apps will do so over time, and purchasers of a 2nd-hand iPhone are likely to upgrade to a new model in the future.
Am I the only one who thinks that passing obstacles 20% of its own height is less than impressive? A house cat can jump over two meters up -- from a standstill. So until the robot becomes a high jumper and a proficient climber of trees, I would not call it cat-like.
I'm pretty sure you mean that the show starts 8pm on 18 May rather than 8pm today.
Usually, such claims are based on maximum theoretical performance -- driving all processing units at full load. You may be able to get very close to that with specially-designed benchmark programs, but it is not realistic to get anywhere near this in "real" code.
My guess is that the actual performance gain is around x2 for graphics-intensive tasks and x1.5 for tasks that mostly involve the CPU alone. Not a bad gain, but not x3 overall.
Rockets and cars
Comparing a modern rocket to a V2 is like comparing a Ford Model T to a modern car: Both run gasoline-burning piston engines with a transmission shaft and mechanical gears. Even modern electric cars are not fundamentally different from those that were designed 100+ years ago.
The reason is that the basic design works well. The same is true for rockets. And that is why viable alternatives are still stuff of the future.
One idea that hasn't been mentioned is using ionised air as the main propellant: The motor is a linear accelerator that ionises the incoming air, accelerates it and expels it at the other end. The energy could come from a nuclear reactor or (for a slower acceleration) solar panels. A solar powered craft could use traditional propellers for a first stage to lift the craft to high altitude, where the next stage (with smaller wings) uses the linear accelerator motor. Eventually, the air would be too thin, at which point you could switch to air you brought along (e.g., liquid nitrogen).
Though, sadly, it never went into production, I found Acorn's Phoebe design quite distinctive.
I also liked the design of the Newbrain: http://www.theregister.co.uk/2011/11/30/bbc_micro_model_b_30th_anniversary/page3.html
The right tool for the job
Spreadsheets are excellent when you need to add up some columns and rows in an array of values, where some values are not filled in yet. But they are woefully inadequate for the more complex tasks that many people use them for. It is a bit like writing in assembly language: You can do it, but the result is unreadable and even simple errors will not be caught -- neither at compile time nor at runtime.
It is not hard to write domain-specific languages for financial analysis that have built-in checks such as preservation of money: You don't accidentally use the same $ twice, and you don't forget that you have them somewhere. Basically, all the money (or streams of interest payments) that gets into the system is accounted for exactly once in what comes out. This gives a safety that spreadsheets do not. You can, of course, still write nonsensical derivations, but a lot of common mistakes are caught. A bit like static versus dynamic types: The latter require you to write a lot more tests to catch bugs. This is no problem if your programs are small and simple enough, but once you get above a certain degree of complexity, you are likely to miss cases that would have been caught by a good static type system.
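A minimal sketch of such a conservation check (invented here for illustration; a real DSL for this would be far richer and would enforce it in the type system rather than at runtime):

```python
# Toy money-conservation check: a Pot of money can be split and moved,
# but every unit that enters must be consumed exactly once.

class Pot:
    """A sum of money that must be used exactly once."""
    def __init__(self, amount):
        self.amount = amount
        self.spent = False

    def take(self):
        if self.spent:
            raise RuntimeError("same money used twice")
        self.spent = True
        return self.amount

def split(pot, fraction):
    """Consume a pot, returning two new pots that conserve the total."""
    amount = pot.take()
    part = amount * fraction
    return Pot(part), Pot(amount - part)

payment = Pot(100.0)
fees, rest = split(payment, 0.01)
print(fees.take(), rest.take())   # 1.0 99.0
# payment.take() would now raise: the original $100 is already spent.
```

In a spreadsheet, copying a cell or forgetting a row silently double-counts or drops money; here, both mistakes fail loudly.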
Also, by being designed specifically for the task, solving a problem in that language is usually easier and faster even than doing it in a spreadsheet (which is faster than doing it in Java).
Warning labels instead of censorship?
I can't see who Apple is trying to protect (apart from their own image). You need an Internet connection to use their bookstore, and it is not exactly hard to find sexually explicit images on the Internet.
I would much prefer warning labels ("This work contains nudity, swear words and violence" and so on) and a rating system. The default could very well be that you are only shown works with a "child-safe" rating, but you should be able to set the filter details yourself, so you, for example, can avoid violence but accept nudity (or the reverse, which seems to be the American standard).
And, yes, I'm Danish, so censorship offends me more than nudity.
A couple of years ago I predicted that this would happen, and I have seen nothing to change that opinion. The main reason I see for Apple to do this is to get full control of the hardware, like they have on their iOS devices. A secondary reason is to get iOS apps running on MacBooks. I think iOS and MacOS will merge much in the same way Windows 8 has merged the desktop and phone operating systems from MS. MS has stayed with x86 on the desktop/laptop version of Win8, but that is (IMO) mainly because many Windows applications have parts written in x86 assembler or C that assumes x86 behaviour (such as endianness and non-aligned memory transfers), which makes them harder to port to a new platform. Having learned from past experience in moving to new processor platforms, Apple has far fewer processor-specific assumptions in its code. And, as another poster said, they are increasingly relying on LLVM in a move that mirrors Microsoft's increasing reliance on .NET, both of which make changing platforms easier.
With the upcoming 64-bit processors, ARM should be powerful enough to compete with 64-bit x86 processors, but the licence model that allows other companies to build their own SoCs around ARM cores (and even design their own cores) is probably the main reason: With Intel, they have to use the SoCs that Intel makes. With ARM, they can make a SoC that does exactly what they need and want. And make sure no one else can use this SoC to make clones.
As for Apple buying AMD, I doubt that. But I wouldn't be surprised if ARM bought AMD.
An updated Elite! would be very interesting. I have tried Oolite, but I found that (apart from the visuals) too little was added. Yes, that made it feel very close to the original Elite!, but, frankly, that is a bit dated. I haven't played Frontier or any of the official sequels, so I can't really comment on these.
Things I would like to see are:
- More realistic economy. In the original Elite!, prices fluctuated with every jump, so by jumping back and forth between close systems, you could select your prices. Fluctuations should be slower and prices should depend more on distance. Also, you very quickly found out what to trade: Computers from high-tech to low-tech and food, liquor and furs the other way. I would prefer all goods to be viable and some systems to specialize in certain goods. And alien artefacts should be REALLY worthwhile to grab, unlike in the original where the price is just average.
- A 3D star map. The original galaxies are flat and rectangular. A 3D local map and spiral-galaxy global map would be nice. One galaxy should be enough, though, if it is large enough.
- Worm holes: A few connections that allow you to travel long distances to specific destinations.
- No "ultimate" weapons like energy bombs.
- More choice of player ships and upgrades.
- Possibility to buy or hire escort ships.
- Exploration: Earn income by surveying unexplored systems.
Sometime between the first two films, Lucas announced that he had plans for three trilogies, one set before the first film (which was, accordingly, renamed to "Episode IV") and one after the original trilogy. It took ages for him to make the prequel trilogy (and he messed that up pretty badly), so he thankfully dropped the idea of the third trilogy. Given that the title of the first Disney film is "Episode VII", my guess is that they will make the third trilogy.
In a way, Star Wars has always been in a Disney-like style, so I can't see anything horrible coming out of this.
As far as I recall, the hardware on big.LITTLE processors can do the transition from low-power to high-power processors and vice-versa without intervention from the OS. Essentially, the processor state is saved in local memory and read by the other processor. That makes the situation different from Tegra3, where you have to write software to explicitly exploit the 5th core.
To me, one of the major attractions of Lenovo laptops has been the "nub" mouse control. Without that, it holds very little interest to me.
Reasons for decline
In my estimation, it was not the unfamiliarity of the OS that led to the failure of Acorn. It was a mixture of price (the cheapest RISC OS computer was much more expensive than the cheapest Wintel PC) and lack of software. While there were excellent applications for most tasks, there were applications on Windows and MacOS that had no decent equivalent on RISC OS -- the market would simply be too small. This led to a downwards spiral where developers would move from RISC OS to Windows to get more customers and this would reduce the uptake of RISC OS computers and so on.
In retrospect, Acorn should have done as Microsoft did: Licensed their OS so anyone could have made hardware using it. This would have given people who felt that Acorn's prices were too high a cheaper entry and, possibly, kept the market large enough to attract software developers. Apple was able to survive on a no-licensing policy and high prices, but that was because they were dominant in a market that was willing and able to pay a premium: Graphics design and publishing. Acorn's main market was education, a market that is neither willing nor able to pay a premium. Their secondary market (hobbyists) has a segment that will and can pay a premium, but this was not enough to keep Acorn alive. And after the death of Acorn, development of RISC OS was much too slow, and was hampered by ARM's shift in focus from performance to low power, which meant that ARM-based computers could no longer compete with their x86 counterparts.
As for ARM vs x86 instruction sets, ARM is in the CISCy end of RISC and x86 in the RISCy end of CISC, so you should not use them as defining instances of these terms. Rather, you should compare the instruction sets on their own merits. IMO, ARM assembly language is much easier to program than its x86 counterpart and also a lot easier to compile to. x86 code is slightly more compact, but with the Thumb extension ARM got the advantage here again. ARM suffered for a long time in not having a unified floating-point instruction set across all processors but, again, that problem is solved. Now the main failing of ARM is the lack of a true 64-bit processor, but even that is coming soon. And in any case, the advantage of 64 bits is mainly that a single program can easily use more than 4GB of RAM, but that is (as yet) not a problem for personal users. Saying it will never be is, however, as silly as saying that 640K is enough for everyone.
Instead of rotating the engines proper, you can rotate the exhausts as on the Harrier jet. Possibly also the intake vents.
I think the idea of a rotating plane makes a lot of sense. I don't think passengers will mind sitting sideways when the plane is at top speed -- the speed will be fairly steady, so you won't feel any pressure. Turbulence will throw you in random directions regardless of how you face, so that won't make any difference. Reclining seats are IMO a nuisance more than a service, so I would actually prefer a seating arrangement like in trains: Two rows facing each other, so you can stretch your legs if you can agree with the person opposite which way to turn. Tables can be pulled out of the arm rests, like they are in the emergency-exit seats now. If seats are made back-to-back and non-reclining, they can be made sturdier at less weight. Seat belts should, IMO, be made more like in cars with a strap over one shoulder and self-adjusting. This is also easier if seats are fixed.
Still waiting for films based on ...
2. Alpha Centauri
3. Lemmings (o.k, joking a bit here)
Re: I wonder...
boltar says: "Thats the problem with proving software - how do you know the tests or the proof is correct? In theory you could end up in an infinite regression of testing - prove the software, prove the proof, prove the proof of the proof etc etc. I guess at some point you just have to draw a line and say "its as good as its humanly possible to get"."
It is actually not as bad as that. Proof systems usually have a very small set of primitive rules and combining forms that are verified by hand, and then all proofs are built up from this set of primitive rules and combining forms. This means that errors in the code that generates the proofs cannot generate faulty proofs (at least not without triggering run-time errors). Basically, the worst a programming error in the proof generator can cause is failure to produce any proof, but faulty proofs cannot be made. Assuming, of course, that the small kernel of primitive rules is correct, but great effort has been made to ensure this.
When proving behaviour about programs, a much larger problem is whether the formal specification of the programming language actually corresponds to the behaviour of running programs: The compiler may not conform to the specified standard. Hence, you often verify the generated machine code instead of the source code. That way, you don't have to trust the compiler and you don't need a formal language specification for the high-level language. What you need instead is a formal specification of the machine language, but that is often easier to make. A problem is, though, that the machine language does not have information about the types of values (are they integers or pointers, and pointers to what?). So you sometimes make the compiler generate typed assembly language. The types can be checked by the proof system and help verification of the correctness of the code relative to a specification. Obviously, few "standard" compilers generate typed assembly language that can be verified this way.
While there is a lot of "religion" in choice of programming language, I find C a particularly bad choice for writing zero-defect software: There is not enough information in the types to catch even simple mistakes (such as writing x=y instead of x==y) at compile time, memory deallocation is unchecked and unsafe, lots of behaviour is specified as "implementation dependent" or "undefined" in the standard, and so on.
As a result, you have to throw a lot of complex analysis at the program just to catch errors that in most other languages would have given a compile-time error message, or which could not even occur in those languages. And to make the analysis tractable, programmers are forced to use only the simplest parts of the language, as the more complex parts are too difficult to analyse. Of course, this allows Stanford researchers to write a few scientific papers and Coverity to earn a few bucks. But that seems like a very costly solution.
I don't suggest using one of the newer mainstream languages, because while they have better type systems, they are not suited for small computers running real-time software. But there are plenty of languages designed for ease of verification and control of resources. Some of these even have compilers that are verified to generate correct code, which I don't think any (full) C compiler is.
District 9 has basically the same premise as Alien Nation: Aliens are stranded on Earth and become a new lower class.
As for War of the Worlds adaptations, I quite like the Marvel comic book series that shows Earth under Martian rule after a second invasion. It started out being called "War of the Worlds" but was later named after its lead character, "Killraven". A similar premise is used in John Christopher's Tripods series. So it would be fairly safe to say that WotW has been the inspiration for quite a few alien invasion stories.
Not so fast
One of my friends bought a Newbrain, and I recall that it wasn't really that fast, at least not when it came to graphics. One reason may be that it used real numbers as coordinates rather than integers and allowed arbitrary scaling factors for the coordinates (so you could, for example, define your screen as going from -pi to pi horizontally and from 0 to 1e20 vertically). It did have some (for the time) advanced graphics primitives, including a flood fill (which worked in a way reminiscent of the lightcycles in Tron).
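The fill itself is a standard algorithm; a minimal sketch in Python (function and variable names are mine), recolouring the 4-connected region of one colour:

```python
from collections import deque

def flood_fill(grid, x, y, new):
    """Recolour the 4-connected region containing (x, y) with `new`."""
    old = grid[y][x]
    if old == new:
        return
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old:
            grid[cy][cx] = new  # claim this pixel, then spread to its neighbours
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

screen = [[0, 0, 1],
          [0, 1, 1],
          [1, 1, 1]]
flood_fill(screen, 2, 2, 2)
print(screen)  # [[0, 0, 2], [0, 2, 2], [2, 2, 2]]
```

The explicit queue avoids the deep recursion that a naive recursive fill would hit on large regions.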
The keyboard, in spite of its odd look, was actually quite good, as I recall it.
In any case, after I bought a BBC B, my friend sold his Newbrain and bought a BBC too. I don't think he ever regretted that decision.
More pixels mean more light?
For backlit screens, the amount of light needed is a function of screen area, not of the number of pixels on that screen. So increasing pixel density will not affect power use for the backlight. What it will affect, however, is the power required to show movies or pictures at full screen resolution, as more data needs to be moved and processed.
Even with LED displays, the power needed to provide a certain brightness will not increase if you double the number of pixels while keeping the same total area, as the pixels will be smaller and each use less power. After all, the brightness is the combined energy output of the pixels, so unless increasing density incurs a higher power loss, the power needed to display a picture at a given brightness should stay the same.
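The argument is just arithmetic; a toy Python calculation with made-up numbers makes it concrete:

```python
# Toy numbers (assumed, for illustration): the total power at a given
# brightness depends on area times emissive power per unit area, not on
# how many pixels that area is divided into.
AREA_CM2 = 100.0
POWER_PER_CM2 = 0.002  # W/cm^2 needed for the target brightness (made up)

def panel_power(n_pixels):
    pixel_area = AREA_CM2 / n_pixels          # denser pixels are smaller...
    per_pixel = pixel_area * POWER_PER_CM2    # ...and each uses less power
    return n_pixels * per_pixel               # so the total is unchanged

print(panel_power(1_000_000))
print(panel_power(2_000_000))  # same total, twice the pixels
```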
The Danish Kings used to have a throne called the "Unicorn Throne" because it was made partly of alicorns (http://en.wikipedia.org/wiki/Unicorn#Alicorn), which were in reality narwhal tusks (which Denmark, being the sovereign of Greenland, had easy access to). But why say this when it was more impressive to claim they were from unicorns?
Re: Good luck with that
"In most of the US, the Metric system is still seen as some kind of commie plot."
Or even as ungodly. "If inches and feet were good enough for Jesus, they are good enough for us!"
The idea of using existing languages (in particular C and C++) for massively parallel computing is doomed: These languages are inherently sequential and rely on a flat shared memory, which is very far from what massively parallel machines look like. Sure, you can use libraries called from C or C++, and you can even program these libraries in something that superficially resembles C, but the fact is that C and related languages are hopelessly inadequate for the task.
So we need to move away from languages with implicit sequential dependencies through updates to a shared state, towards languages that have no shared state and where the only sequential dependencies are producer-consumer dependencies. This means that you don't have traditional for loops, as these over-specify an order on the iterations of the loop. Instead, you have for-all constructs that allow the "iterations" to be done in any order or even at the same time. And to replace a for loop that, say, adds up the elements of an array or other collection, you have "reduce" constructs that do this in parallel.
You might think of map-reduce, but it goes further than that. The proper reference is NESL.
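For readers unfamiliar with the style, here is a tiny sequential Python illustration of the two constructs (the parallelism itself is not shown; the point is the absence of ordering constraints):

```python
from functools import reduce
import operator

data = list(range(10))

# "for-all": every element is handled independently, so a parallel
# implementation is free to run these in any order, or all at once.
squares = [x * x for x in data]

# "reduce": combining with an associative operator means a parallel
# runtime can evaluate the sum as a balanced tree rather than a left fold.
total = reduce(operator.add, squares, 0)
print(total)  # 285
```

NESL's contribution goes beyond this: its constructs nest, so a for-all over for-alls can still be flattened onto a parallel machine.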
This image viewer and editor is one of the first things I download when I or someone in my family gets a Windows PC. I don't use Windows much myself anymore, but IrfanView is one of the few programs I miss.
"Miscopied genes" is sort of the definition of mutation, so it should be no surprise that this is behind human evolution (including intelligence). The interesting bits are that plausible specific genes have been pin-pointed and approximate dates for the mutations found.
IMO, another problem with "traditional" ways of teaching programming to kids is that the teachers want the programs to be about real-world problems. While this is realistic and can motivate some students, it also adds another layer of complexity: abstracting a real-world problem to a level that allows solution on a computer. So I think the students should initially work on problems that do not require an abstraction step. This can either be by not pretending at all that the problems have anything to do with real life, or by working in a domain that is already abstracted using an abstraction the students know well: numbers. For example, the data domain can be simple integers and a grid of pixels that can be turned on and off individually, and problems can be of the form: make a program that draws a checker-board pattern on the screen. Pixels should be big enough that you can see each individual pixel, so you can better see what goes wrong when something does. Also, make it easy to read the value of a pixel on the screen -- a feature found on most 80s home computers, but which is complicated or impossible in many modern graphics libraries.
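A sketch of what such an exercise might look like, here in Python with a text-mode "screen" (all names are illustrative):

```python
# A text-mode "screen" of big, individually addressable pixels, with the
# easy read-back described above (like POINT on 80s home micros).
WIDTH, HEIGHT = 8, 8
grid = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(x, y, on):
    grid[y][x] = 1 if on else 0

def get_pixel(x, y):  # read a pixel back
    return grid[y][x]

# The exercise itself: a checker-board pattern.
for y in range(HEIGHT):
    for x in range(WIDTH):
        set_pixel(x, y, (x + y) % 2 == 0)

for row in grid:
    print("".join("#" if p else "." for p in row))
```

No abstraction step is needed: the problem is stated directly in terms of the pixels the program manipulates, and `get_pixel` lets a student check what actually ended up on the screen.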
How to teach programming to kids
Many of the efforts to teach programming to kids have suffered from the desire to let the kids make cool stuff (animations etc.) happen on their screens within a few minutes of the start of teaching. This is supposedly to motivate the kids to go on exploring their tools.
But in order to get this stuff on the screen so quickly, the tools that are used do a lot of things under the hood that the kids have no control of or understanding of. It is a bit like thinking you can learn electronics by plugging together two black boxes to make a radio. The kids can see the cause and effect (if I put together these boxes, sound comes out of one of them), but they have no understanding of why this happens.
So, instead, the kids should use extremely simple languages where every minute step is explicit. It might take a while before they can make something "cool", but they will understand what happens when they do.
In Denmark ...
... we have towns called "Tarm" (intestine) and "Lem" (a slang word for penis). These have not considered changing their names. There is also "Kværkeby" (strangling town), "Mørke" (darkness) and "Ringe" (inferior). English-speakers might find "Middelfart" amusing.
I'm nowhere near London this weekend, otherwise I would have come. Had I known about it a few weeks ago, I might have arranged a trip. Oh, well.
I agree that the 6502 took a bit of getting used to if you wanted to program in assembler. But the 6502 in the BBC was quite fast, so it was worth it. And the BASIC on the BBC was far ahead of the BASICs on contemporary machines, both in features and speed -- and that even though the BBC used 32-bit integers while nearly all the others used 16-bit integers.
A few years ago, I had the students for my compiler class write a BASIC compiler. It targeted MIPS (as that is the architecture they were familiar with after the architecture course) and it didn't have nearly all the features of BBC BASIC. But the students thought it a fun exercise.
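For flavour, a hypothetical miniature of that kind of exercise: compiling arithmetic expressions to MIPS-style code with a naive register scheme (register numbers and mnemonics follow MIPS conventions; everything else is made up and far simpler than what the students built):

```python
# Compile arithmetic expressions to MIPS-style code. An expression is an
# int literal or a tuple (op, left, right); registers are plain MIPS
# register numbers, with temporaries allocated from $8 ($t0) upwards.
def compile_expr(expr, dest, tmp=8):
    if isinstance(expr, int):
        return [f"li ${dest}, {expr}"]
    op, left, right = expr
    code = compile_expr(left, dest, tmp + 1)   # left result lands in dest
    code += compile_expr(right, tmp, tmp + 1)  # right result in a temporary
    mnemonic = {"+": "add", "-": "sub", "*": "mul"}[op]
    code.append(f"{mnemonic} ${dest}, ${dest}, ${tmp}")
    return code

for line in compile_expr(("+", 2, ("*", 3, 4)), dest=2):  # $2 is $v0
    print(line)
```

Even this toy shows the core decisions a student compiler has to make: evaluation order and where intermediate results live.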
Not real steganography
"Real" steganography is hiding a message in an already constructed text or picture in a way that does not obviously change that text or picture. What is described is a form of cryptography.
Text steganography could, for example, work by varying the amount of space between words to encode a hidden message: on the surface, the text is unaltered and looks perfectly natural, but a message is hidden in it. If you actually have to construct a text specifically to carry the message, it is not really steganography -- it is just low-density cryptography, similar to texts where the initial letters of the words encode a message or texts where the lengths of the words encode digits (like "How I wish I could recollect pi. 'Eureka!' cried the great inventor: Christmas pudding, Christmas pie, is the problem's very centre"). The challenge of these is to make the text seem natural while it has to obey non-trivial constraints. Real steganography should have no such constraints, but be able to use any text (or picture) to hide a message.
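The word-spacing scheme is easy to sketch in Python (function names are mine; one space between words encodes 0, two spaces encode 1):

```python
import re

def hide(cover_text, bits):
    """Encode bits in the gaps between words: one space = 0, two = 1."""
    words = cover_text.split()
    assert len(bits) <= len(words) - 1, "cover text has too few gaps"
    out = [words[0]]
    for i, word in enumerate(words[1:]):
        gap = "  " if i < len(bits) and bits[i] == "1" else " "
        out.append(gap + word)
    return "".join(out)

def reveal(stego_text):
    """Read the hidden bits back from the gap widths."""
    return "".join("1" if len(gap) == 2 else "0"
                   for gap in re.findall(r" +", stego_text))

msg = hide("the quick brown fox jumps", "1010")
print(repr(msg))
print(reveal(msg))  # 1010
```

Note that the cover text is left untouched; the capacity is low (one bit per gap), which is exactly the trade-off steganography makes for invisibility.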