This happened to Earth
The modern theory of the formation of Earth and its moon is that two planets in near-identical orbits collided and merged, and that the impact ejected a large mass that became the moon.
Yes, BBC BASIC was an improvement on what you got by default on home computers at the time, which in almost all cases was a BASIC variant, most often inferior to BBC BASIC. Hard disks were uncommon even when the Archimedes shipped (with a floppy as standard), so using compilers was impractical -- you wanted programs to load and run without storing extra files on your floppy or using large amounts of memory for compiled code. It was only after I got a hard disk that I started using compilers on my Archimedes. Several compilers existed early on for RISC OS, including Pascal, which was arguably the most popular compiled language at the time (until C took over that role). There was even a compiler for ML, which is a much better language than either Pascal or C.
So, to look at alternatives to BBC BASIC for floppy-only machines, let us consider languages that run interpreted without too much overhead and which were well known at the time. Pascal and C are ruled out because they are compiled. Forth was mentioned, but only RPN enthusiasts would consider it a superior language. LISP (and variants such as Scheme) is a possibility (LISP was actually available even for the BBC Micro as a plug-in ROM), but it is probably a bit too esoteric for most hobbyists, and it requires more memory than BASIC and similar languages. Prolog is even more esoteric. COMAL is not much different from BBC BASIC, so it is no strong contender.
Also, BASIC was what was taught in schools, so using a radically different language would have hurt sales. So, overall, I would say BBC BASIC was a sensible choice as the default language. Other languages such as C, Pascal, or ML, were available as alternatives for the more professionally minded (who could afford hard disks and more than 1MB of RAM).
One thing that I would have wanted for RISC OS is to allow !Draw-files for file/application icons. This would allow them to be arbitrarily scalable, and a bit of caching would make the overhead negligible. It was such caching that made Bezier-curve fonts render in reasonable time on an 8MHz machine. Similarly, you could define cursor shapes as !Draw files to make cursors easily scalable. Thumbnails of files could also be !Draw files.
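The caching idea above can be sketched roughly like this -- a hypothetical model in Python, where `render_drawfile` stands in for a real vector rasteriser; the point is just that each (file, size) pair pays for one render, after which scalable icons are as cheap as sprites:

```python
# Hypothetical sketch: cache rasterised vector icons by (path, size) so a
# scalable !Draw icon costs one render per size, much as the font manager
# cached Bezier-rendered glyphs. render_drawfile() is a stand-in, not a
# real RISC OS API.

from functools import lru_cache

def render_drawfile(path, size):
    # Stand-in for rendering a !Draw file at the requested pixel size.
    return f"bitmap({path}@{size})"

@lru_cache(maxsize=256)
def icon_bitmap(path, size):
    # Cache hit: negligible cost. Cache miss: one vector render.
    return render_drawfile(path, size)

icon_bitmap("!MyApp.!Sprites", 34)   # rendered once
icon_bitmap("!MyApp.!Sprites", 34)   # served from the cache
```

The same memoisation would work for cursor shapes and thumbnails.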
Another obvious extension would be folder types, similar to file types. As it is, you need a ! in front of a folder name to make it into an application, and there are no other kinds of folder than normal or application. Using types, you could make, say, word-processor files into typed folders. For example, a LaTeX folder could contain the .tex, .aux, .log, .toc, and all the other files generated by running LaTeX. I recall that one of the RISC OS word processors used application folders as save files, but that meant that all file names had to start with !. You could get rid of this for both save files and applications by using folder types instead.
For modern times, you will probably need more bits for file types than back in the day. I don't know if later versions of RISC OS have added more bits, but if not you might as well add folder types at the same time that you add more bits for file types.
While I love the GUI, the file system (typed files, applications as folders, uniform display and print graphics, and a modular file system), the font manager, and the standard applications (especially Draw), I haven't used RISC OS in over a decade. It lacks a lot of things that are needed for modern desktop/laptop use: multi-core support, pre-emptive multitasking, proper Unicode support (unless this has been added since I last looked), support for most USB devices, support for graphics cards, and so on. It is also a problem that it is written in ARM32 assembler. Not only does this make it harder to maintain, it also limits use on ARM64 and other modern systems (except through emulation).
I think the best route would be to build a RISC OS desktop on top of a Linux kernel, rewriting the RISC OS modules and applications in Rust (or C), and use Linux drivers etc. to make it exploit modern hardware.
Your first touch point with this new fangled tech is...Lisp? That's some serious brain engagement.
LISP is not really that complex to learn. The syntax is dead simple (albeit a bit verbose), and you can code with loops and assignments just like in BASIC, if you find that simpler than recursive functions (I don't).
LISP on the BBC was, however, considerably slower than BASIC, which was probably because BBC BASIC was written by Sophie Wilson, who did a lot to optimise it.
"Actually, microcode has been around since CPUs existed. it's how they work internally."
True, but in the beginning microcode interpreted the instructions, whereas modern processors compile instructions into microcode. This gets rid of the interpretation overhead (which is considerable).
Actually, the description reminded me a lot of EPIC/Itanium: Code compiled into explicit groups of instructions that can execute in parallel. The main difference seems to be that each group has its own local registers. Intel had problems getting pure static scheduling to run fast enough, so they added run-time scheduling on top, which made the processor horribly complex.
I can't say if Microsoft has found a way to solve this problem, but it still seems like an attempt to get code written for single-core sequential processors to automagically run fast. There is a limit to how far you can get on this route. The future belongs to explicitly parallel programming languages that do not assume memory is a flat sequential address space.
"You want to have a context switch every time a branch causes a cache miss? That would be a Bad Thing."
It would indeed. But that is not what I am saying. What I am saying is that there is a pipeline of instructions interleaved from two or more threads, each having its own registers. No state needs to be saved, and executing every second instruction from a different thread is no more expensive than executing instructions from a single thread. The advantage is that functional units can be shared, and since independent threads do not have fine-grained dependencies on each other, instructions from one thread can easily execute in parallel with instructions from another.
This is not my idea -- it has been found in processors for decades (just look for "X threads per core" in specifications). IMO, it is a better approach than speculative execution since it does not waste work (all instructions that are executed will be needed by one thread or another) and it is not considerably more complex than having one thread per core. Note that out-of-order execution is not a problem: that also executes only instructions that are needed, it just does so out of sequence, which requires register renaming, but that is not a huge problem. The main cost is complex scheduling, which increases power use (OOO processors use more energy scheduling instructions than actually executing them).
What speculation gives that these do not is (potentially) much faster execution of a single thread. But to do so, it uses resources that could have been used to execute instructions that are definitely needed. So it improves latency at the cost of throughput. OOO execution improves both at a cost in complexity and power use, and multi-threading improves only throughput, at a small cost in latency, because the two (or more) threads are given equal priority, so each thread may have to wait for others to stop using functional units.
Speculative execution is basically a way to make sequential computation faster. When the processor has to wait for, say, a condition to be decided, it makes a guess as to the outcome and starts working from that guess. If it guesses right, you save time, if not, you both lose time (for clean-up) and waste heat (for doing wasted work). You can try to work on multiple different outcomes simultaneously, but that is more complicated, and you will definitely waste work (and heat).
Speculative execution relies on very precise predictions, and these cost a lot in resources for gathering and storing statistics and analysing these. The bottom line is that speculative execution is very costly in terms of complexity and energy.
Another solution is to pause execution until the outcome is known. While this pause lasts, you can have another thread use the execution units. This is called multi-threading, and it is usually implemented by having one or more extra copies of all registers and scheduling instructions from two (or more) threads simultaneously. You only execute instructions that are guaranteed to be needed, so there is no speculation. You can even have both threads execute instructions simultaneously, if there are no resource conflicts. The scheduling unit is somewhat more costly, as it has to look at more instructions, but it is not as bad as the complexity of speculative execution. The downside is that each thread does not run faster than if it ran alone on a processor without speculative execution, but the total throughput of instructions is likely higher. If the threads share cache, there is a risk of information spillage, so you generally limit this to threads from the same program.
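As a rough illustration -- a toy Python model, not how real hardware schedules -- interleaving two threads lets one thread's stall cycles be filled by the other thread's instructions, with no state saved or restored:

```python
# Toy model of fine-grained multithreading. Each thread is a queue of
# (label, stall_cycles) instructions; each cycle we issue one instruction
# from the first ready thread, scanning round-robin. A stalled thread
# simply yields its slot. None in the trace marks a pipeline bubble.

from collections import deque

def run(threads):
    n = len(threads)
    stalled = [0] * n          # remaining stall cycles per thread
    start, trace = 0, []
    while any(threads):
        issued = None
        for k in range(n):
            i = (start + k) % n
            if stalled[i] == 0 and threads[i]:
                label, stall = threads[i].popleft()
                stalled[i] = stall
                issued = label
                break
        trace.append(issued)
        stalled = [max(0, s - 1) for s in stalled]
        start = (start + 1) % n
    return trace

run([deque([("A1", 2), ("A2", 0)])])
# -> ['A1', None, 'A2']  (one thread: the stall becomes a bubble)
run([deque([("A1", 2), ("A2", 0)]),
     deque([("B1", 0), ("B2", 0)])])
# -> ['A1', 'B1', 'A2', 'B2']  (two threads: the bubble is filled)
```

The second run issues an instruction every cycle even though thread A stalls, which is exactly the throughput argument.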
The next step is to make multiple cores, each with their own cache. If the memory is protected (and cleared when given to a new process), this can be made safe from leakage, it scales better than multi-threading, and the complexity is lower. This is part of the reason why the trend is towards more cores rather than faster single cores. In the extreme, we have graphics processors: A large number of very simple cores that do no speculation and no out-of-order execution and which even share the same instruction stream. Sequential execution on these is horribly slow, but the throughput is fantastic, as long as you can supply a suitable workload. It is nigh impossible to make C, Java, and similar languages run fast on graphics processors, so you either need specialised languages (https://futhark-lang.org/) or call from C or Java library routines written in very low-level languages and hand-optimised.
In conclusion, the future belongs to parallel rather than speculative execution, so you should stop expecting your "dusty decks" of programs written in C, Java, Fortran, etc. to automagically run faster on the next generation of computers.
Apple moved from 68K to PowerPC because there was no high-performance 68K processor. PowerPC promised (and delivered) higher performance than 68K, at least in the foreseeable future. At that time, Apple was mainly about desktop machines, so power use was not all-important.
The move to x86 was allegedly motivated by lower power use for the same (or higher) performance, which was required for laptop use. Competition between Intel and AMD had driven an arms race for more performance for less power, and Apple could ride on that.
A move to ARM can be partially motivated by a desire for lower power use, but it is more likely so Apple can build their own ASICs, as they have done for iPhone, and so more code can be shared between iOS and MacOS.
I have long been expecting this move, and I'm surprised it hasn't happened earlier. Using the same CPU on iPhones and Macs will simplify a lot of things for Apple, as will having the ability to make their own chips combining CPUs with coprocessors of their own choice instead of relying on the fairly limited choice that Intel offers.
With the advent of 64-bit ARMs, integer performance is similar to x86 performance, and due to the smaller core size, you can fit more cores onto a chip, increasing overall performance. Where ARM CPUs have lagged behind Intel is in floating-point performance, but that may not be important for Apple. And if it is, they have a license that allows them to make their own FPU to go alongside the ARM cores. In any case, the most FP-intensive tasks are rapidly moving from classical FPUs to GPUs, so as long as Apple supplies their Macs with GPUs that run OpenCL at decent speed, sequential FP performance may not matter much. In general, single-core performance (whether integer or FP) is becoming less and less important as the number of cores grows: to get high performance, you have to code for multiple cores, regardless of whether you code for Intel or ARM.
As for running legacy code, this can be done with just-in-time binary translation: The first time a block of x86 code is executed, it is emulated by interpretation, but a process is at the same time started on another core that translates the x86 code to ARM. As soon as this translation finishes, the code will run compiled when next executed. There might even be multiple steps: Interpretation, simple translation, and optimised translation, each being started when the previous form has been executed sufficiently often that it is expected to pay off.
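The tiering logic can be sketched in a few lines -- a Python toy with a made-up hotness threshold and a pretend translator, just to show the control flow (in a real system the translation would run on another core, and the tiers would be real machine code):

```python
# Toy tiered execution: a block is interpreted until it has run HOT
# times, after which a translation is installed and used from then on.
# All names and the threshold are illustrative.

HOT = 3
translated = {}   # block -> compiled version
counts = {}       # block -> times interpreted

def interpret(block):
    return f"interpreted:{block}"

def translate(block):
    return lambda: f"native:{block}"

def execute(block):
    if block in translated:
        return translated[block]()            # fast path
    counts[block] = counts.get(block, 0) + 1
    if counts[block] >= HOT:
        translated[block] = translate(block)  # in reality: async, on another core
    return interpret(block)

execute("loop")   # 'interpreted:loop'
execute("loop")   # 'interpreted:loop'
execute("loop")   # 'interpreted:loop'  (translation installed now)
execute("loop")   # 'native:loop'
```

Adding a third tier (optimised translation) is just a second, higher threshold and a second table.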
Someone (I don't recall who) once said something along the lines of "If they ever build a computer that can be programmed in English, they will find that people can't program in English". The point being that the level of precision required for instructing a computer is far beyond most people even when using their native language -- or maybe in particular when using their native language.
As a side note, I recall that BBC BASIC had a "colour" command, while most other BASICs had a "color" command.
In Algol 68, keywords were distinguished from identifiers by case: keywords are upper case (or boldface or quoted) and identifiers lower case. This allowed non-English versions of Algol 68 just by providing a table of keyword names. And since there is no overlap with identifiers, the code could automatically be converted to use English keywords (or vice versa) without risk of variable capture.
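The mechanism can be illustrated in a few lines of Python (the German keyword table is purely illustrative): because only all-upper-case words are keywords, translation is a plain table lookup, and an identifier that happens to spell a keyword in lower case can never be captured:

```python
# Sketch of the Algol 68 trick: keywords are upper case, identifiers
# lower case, so swapping keyword languages is a safe table lookup.

import re

DE_TO_EN = {"WENN": "IF", "DANN": "THEN", "SONST": "ELSE", "ENDE": "FI"}

def to_english(source):
    # Only all-upper-case words are keywords; the lower-case identifier
    # 'wenn' below passes through untouched.
    return re.sub(r"\b[A-Z]+\b",
                  lambda m: DE_TO_EN.get(m.group(0), m.group(0)),
                  source)

to_english("WENN x > 0 DANN x SONST wenn ENDE")
# -> 'IF x > 0 THEN x ELSE wenn FI'
```

Running the table in the other direction converts English source back, again without touching identifiers.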
Similarly, in Scratch, keywords are just text embedded in graphical elements, and changing these graphical elements can change the language of the keywords without affecting other parts of the program. The same program will be shown with English keywords in an English-language Scratch system and in Japanese (or whatever) in a Japanese Scratch system, because the internal representation does not include the bitmaps.
But I agree that, unless the programming language attempts to look like English (COBOL, AppleScript, etc.), the language of the keywords matters next to nothing, as long as the letters used are easily accessible from your keyboard. There are programming languages with next to no keywords (APL being an extreme example), and (apart from sometimes requiring special keyboards) they are not really more or less difficult to learn than languages with keywords in your native language (what makes APL difficult to learn is not its syntax). An exception may be children, which is why Scratch allows "reskinning" the graphical elements.
The root of Spectre and Meltdown is speculative execution -- the processor trying to guess which instructions you are going to execute in the future. While this can increase performance if you can guess sufficiently precisely, it will also (when you guess incorrectly) mean that you will have to discard or undo work that should not really have been done in the first place. On top of that, accurate guesses aren't cheap. Some processors use more silicon for branch prediction than they do for actual computation.
This means that speculative execution is not only a security hazard (as evidenced by Meltdown and Spectre), but it also costs power. Power usage is increasingly becoming a barrier, not only for mobile computing powered by small batteries, but also for data centres, where a large part of the power is drawn by CPUs and cooling for these. Even if Moore's law continues to hold for a decade more, this won't help: Dennard scaling died a decade ago. Dennard scaling is the observation that, given the same voltage and frequency, power use in a CPU is pretty much proportional to the area of the active transistors, so halving the size of transistors would also halve the power use for similar performance.
This means that, to reduce power, you need to do something other than reduce transistor area. One possibility is to reduce voltage, but that will effectively also reduce speed. You can reduce both speed and voltage and get the same overall performance for less power by using many low-frequency cores rather than a few very fast cores. Making cores simpler while keeping the same clock frequency is another option. Getting rid of speculative execution is an obvious possibility, and while this will slow processors down somewhat, the decrease in power use (and transistor count) is greater. As with reducing clock speed, you need more cores to get the same performance, but the power use for a given performance will fall. You can also use fancier techniques such as charge-recovery logic, Bennett clocking, reversible gates, and so on, but for CMOS this will only gain you a little, as leakage is becoming more and more significant. In the future, superconductive materials or nano-magnets may be able to bring power down to where reversible gates make a significant difference, but that will take a while yet.
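A rough way to see the many-slow-cores argument is through the standard dynamic-power relation P ≈ C·V²·f. The numbers below are purely illustrative; the point is that halving frequency lets you lower voltage, and V enters squared:

```python
# Back-of-envelope dynamic power: P ~ C * V^2 * f (leakage ignored).
# Illustrative numbers only.

def power(cores, v, f, c=1.0):
    return cores * c * v * v * f

fast = power(1, v=1.0, f=3.0)   # one core at 3 GHz, full voltage
slow = power(2, v=0.8, f=1.5)   # two cores at 1.5 GHz, reduced voltage

# Same aggregate instruction rate (2 * 1.5 = 1 * 3.0), but:
slow < fast   # True: roughly 1.92 vs 3.0 in these units
```

This is the arithmetic behind "more, slower cores for the same performance at lower power" -- provided, of course, that the workload parallelises.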
In the short term, the conclusion is that we need simpler cores running at lower frequencies, but many more of them, to get higher performance at lower power use. This requires moving away from the traditional programming model of sequential execution and a large uniform memory. Shared-memory parallelism doesn't scale very well, so we need to program with small local memories and explicit communication between processors to get performance. Using more specialised processors can also help somewhat.
When Apple started making their own ARM chips, I predicted that they would move to ARM on the Mac line also. It has taken longer than I expected, but Apple has good reasons for this:
1. It would make Macs the ideal tool for developing iPhone software, as it can be made to run it without emulation.
2. More parts of the OS can be shared between Mac and iPhone.
3. It allows Apple to make a SoC to exactly their specifications instead of relying on what Intel produces.
4. It removes dependency on Intel (or AMD).
It is not impossible for Apple to make a 64-bit ARM processor that will outperform the fastest Intel processor. I'm sure Apple would love to have the fastest laptops around, so people would migrate from Wintel to Apple for performance reasons. Apple needs to do more work on the FP side to make this happen, but it is doable.
There is potential for disaster, but if handled well, it could be good. I think the best period for an initial run is the period between the Hobbit and LotR, as mentioned earlier. I'm not sure Moria will work well as a main storyline -- the retaking is probably not all that interesting, and the fall happens rather late in the time line (since Gimli is not aware of it when the Fellowship enters Moria). The rangers fighting orcs and goblins up north is probably a better idea, but with a better storyline than the "War in the North" game. It could feature a young Aragorn so there is some name recognition.
I'm not sure the story of Túrin Turambar (The Children of Húrin) will work on TV, nor Beren and Lúthien. The tale of Númenor definitely takes place over too long a time frame to work on TV.
Machine learning of almost any kind is sufficiently parallelisable to exploit such a monster. Deep learning neural networks are very popular these days, and they need lots of processor power; they already run on graphics processors for that reason. DNA analysis too.
While playing elaborate pranks on the scammers may be fun, you are wasting your own time as well as theirs -- and your time is probably much more valuable, to you at least.
So when I get a call from someone claiming to be from the Microsoft Tech Support Centre or some such, I just say "No, you're not" and hang up.
The main reason that BASIC on the TI99/4a was slow was that the BASIC interpreter was not written in assembly language (which would not have been difficult, as the TMS9900 was much easier to program than 8-bit alternatives such as the 6502 or Z80), but in a language called GPL (Graphics Programming Language), which was compiled to a byte code that was interpreted on the CPU. I estimate the overhead of using interpreted byte code to be 5-10 times, so a BASIC interpreter written directly in assembly language would have sped up the BASIC enormously -- depending on what you do, though. For some operations, such as floating-point calculation or graphics primitives, the overhead is relatively small, but for integer calculations it is pretty hefty. Games written in assembly language are not affected by this, but I still find it a curious design decision -- it made the TI99/4a compare very badly to other home computers in BASIC benchmarks, which is what most magazines used to compare the speed of home computers.
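The overhead is easy to see in miniature: the same computation done directly and via a tiny byte-code loop that pays a fetch/decode/dispatch cost on every single operation. This is a generic Python sketch of the effect, nothing to do with actual GPL encoding:

```python
# Minimal illustration of interpretation overhead: summing 1..n directly
# versus through a toy stack-machine byte code. The per-instruction
# dispatch (fetch opcode, test it, branch) is the extra layer a
# byte-coded interpreter pays on top of a native one.

def direct(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

PUSH, ADD = range(2)   # toy opcodes

def run_bytecode(program):
    stack = []
    for op, arg in program:       # fetch
        if op == PUSH:            # decode/dispatch
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

# Byte code for 1 + 2 + 3 + 4 + 5:
prog = [(PUSH, 1)] + [x for i in range(2, 6)
                      for x in [(PUSH, i), (ADD, None)]]
run_bytecode(prog)   # same result as direct(5), many more steps
```

Here the interpreter does several Python operations per byte-code instruction; on the TI99/4a the same multiplier applied in TMS9900 instructions per GPL byte code.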
"the machines will be able to do things like factor very large prime numbers"
That is not very difficult. A prime number factorises to itself, no matter how large it is.
What is meant is that it can (in theory) factor products of very large primes.
Also, the D-wave is not a universal quantum computer, but specialised to do simulated annealing. There probably was a remark about that in an earlier version of the article, since there is an orphaned footnote explaining annealing. The D-wave is similar to analogue computers that can also solve some optimisation problems very quickly, and there is debate about whether D-wave actually uses quantum effects at all or if it is just a fancy analogue computer.
SciFi Channel made a low-budget, but decent adaptation as a TV miniseries (http://www.imdb.com/title/tt0142032/), followed by a somewhat-higher-budget version of Dune Messiah/Children of Dune as another miniseries (just called "Children of Dune"), which was also quite decent.
I agree that a single film is not enough to give a decent treatment of the book. A GoT-scale TV-series would be best, but a film trilogy could also work. Then one film for Messiah and another trilogy for Children. If all succeed, one film for each of the following books (God Emperor, Heretics, and Chapter House) is a possibility.
There is a lot of research in typed assembly language, proof-carrying code, and so on, that allows static verification of safety properties without relying on sandboxing. Something like that would be great. I don't know enough about WebAssembly to decide if it does that, but I suspect not.
The payload of this glider is two people and life support for these in addition to instruments for sampling air. So my guess is 300-400 kg. That could be enough to carry a small rocket that could reach space, but probably not enough to get anything into orbit. Using a balloon to carry a rocket to the edge of the atmosphere seems more practical.
From the orbit of Pluto (which is about as far from our sun as this planet is from its main star), our sun just looks like a very bright star. The two other stars are even further away, so there is very little light indeed.
It is plausible that the planet is still somewhat hot, since it is only 16 million years old, and it is possible that tidal heating might warm some moons. But there is little chance that life has had time to evolve: Initially, the planet would be too hot for life, so if it has the right temperature for life now, it has only had that for a couple of million years, which is probably not enough. The moons are not much better off.
There are much better candidates for life among the known exoplanets.
IIRC, ARM2 was fully static, so you could single-step through an instruction sequence or stop the clock indefinitely. So I was surprised to hear that the flags in ARM1 used dynamic logic.
If the flag logic was the only dynamic logic on the chip, it makes good sense to change that in the redesign to get single-step capability, so it is perfectly plausible that this happened.
PC and server processors have typically had a high unit price and, hence, a high earning per unit. This is the market Intel has mainly succeeded in. Processors for IoT need to have a very small unit price, which means lower earnings per unit. Intel has previously had some success with 8-bit embedded processors, but they are being pushed out of that market by low-end ARM cores in highly integrated SoCs.
Intel has traditionally not done SoCs. One reason is that no single SoC fits all purposes. ARM handles that by licensing: A large number of different companies make an even larger number of different SoCs by integrating their own peripherals around ARM cores. Intel doesn't license its cores.
So if Intel wants to get into the IoT market, they should start licensing. The x86 platform probably has too much complexity baggage to compete effectively against ARM in that market, so Intel should design a simple 64-bit microprocessor that they can license to SoC builders.
Alternatively, Intel could get income by fabricating processors for other companies in its foundries. Intel has pretty good fabrication technology, so if it can't compete on processor sales, it might very well compete on chip fabrication.
The idea of using a filament to produce an electric field to ionize the gas has the problem that the filament is fragile and likely to melt when surrounded by hot plasma. 27escape proposed using magnetic fields, and that has more merit. After all, this is what fusion reactors use to contain plasma that is easily hot enough for a light sabre. This could also explain the sounds when light sabres clash (the magnetic fields interfere and create extra ionisation) and even the fact that they stop each other: if the fields have the same polarity, they would repel. But it would need serious trickery to create a strong, shaped magnetic field from something the size and shape of a light-sabre handle.
In Thumb-2, conditional execution was replaced by an if-then-else instruction that specifies which of up to four following instructions are executed when the condition is true and which are executed when it is false. Specifying this ahead of time is better for pipelining. ARM64 has, IIRC, done away with generalised conditional execution entirely, except for jumps. I suspect this is to make implementation simpler and because branch prediction can make jumps almost free, so all you would save with conditional instructions is code space.
IIRC, the MUL and MLA instructions took four bits per cycle from the first operand, so a multiplication could take up to 8 cycles. This also meant that it would be an advantage to have the smallest number as the first operand, so multiplication terminates early.
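Under that description, the cycle count is easy to model -- a sketch only, from memory; the real ARM2 timing details may differ:

```python
# Back-of-envelope model of early-terminating multiply as described
# above: MUL consumes four bits of the first operand per cycle and stops
# once the remaining bits are all zero, so small first operands finish
# early. Fixed setup costs are ignored.

def mul_cycles(rs):
    rs &= 0xFFFFFFFF
    cycles = 0
    while True:
        cycles += 1
        rs >>= 4
        if rs == 0:
            return cycles   # at most 8 for a 32-bit operand

mul_cycles(10)           # -> 1
mul_cycles(0xFFFF)       # -> 4
mul_cycles(0xFFFFFFFF)   # -> 8
```

This is why you would put the smaller number in the first operand.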
Expanding a constant-multiply to shifts and adds speeds up the computation only if the constant is relatively large and has few 1-bits (a bit simplified, as you can handle many 1-bits if you also use subtraction, so it is really the number of transitions between 1-bits and 0-bits that counts). But multiplying by, say, 255 or 257 was indeed faster to do by shift-and-add/subtract.
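The two constants mentioned come out as single shift-plus-add/subtract sequences, shown here in Python for clarity (on the ARM itself each would be one instruction with a shifted operand):

```python
# Shift-and-add/subtract expansion of the multiplies mentioned above:
# 255 = 256 - 1 and 257 = 256 + 1, so each needs one shift and one
# add or subtract instead of a multi-cycle MUL.

def mul255(x):
    return (x << 8) - x   # x*256 - x

def mul257(x):
    return (x << 8) + x   # x*256 + x

assert mul255(3) == 3 * 255
assert mul257(3) == 3 * 257
```

A constant like 0b0101...01, by contrast, needs a shift-add per 1-bit run boundary, which is where MUL wins again.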
AC wrote: 'Code density is a good benchmark of the "goodness" of an ISA that doesn't basically boil down to "it's good because I like it, that makes it good".'
Code density is only one dimension of "goodness", and it is one of the hardest to measure. If you measure compiled code, the density depends as much on the compiler (and optimisation flags) as it does on the processor, and if you measure hand-written code, it depends a lot on whether the code was written for compactness or speed and how much effort the programmer put into this. So you should expect 10-20% error on such benchmarks. Also, for very large programs, the difference in code density is provably negligible: You can write an emulator for the more compact code in constant space, and the larger the code is, the smaller a proportion of the code size is taken by the emulator. This is basically what byte code formats (such as JVM) are for.
I agree that the original ARM ISA is not "optimal" when it comes to code density, but it was in the same ballpark as 80386 (using 32-bit code). The main reason ARM made an effort to further reduce code size and Intel did not was because ARM targeted small embedded systems and Intel targeted PCs and servers, where code density is not so important. Also, Thumb was designed for use on systems where the data bus was 8 or 16-bits wide, so having to read only 16 bits per instruction sped up code execution. The original ARM was not designed for code density, but for simplicity and speed.
Asdf wrote: " Intel has tried repeatedly to kill their abomination but the market won't let them"
That is mainly because the processors that Intel designed to replace the x86 were utter crap. Most people vaguely remember the Itanium failure, but few these days recall the iAPX 432, which was supposed to replace the 8080. Due to delays, Intel decided to make a "stop-gap" solution called the 8086 for use until the 432 was ready. While Intel managed to make functional 432 processors, they ran extremely slowly, partly because of an object-oriented data model and partly due to bit-level alignment of data access. Parts of the memory-protection hardware made it into the 80286 and later x86 designs, but the rest was scrapped. Itanium did not do much better, so Intel had to copy AMD's 64-bit x86 design, which must have been a blow to their pride.
If Intel had designed a simple 32-bit processor back in the early 1980s, the ARM probably would never have been made. Acorn designed the ARM not because they wanted to compete with x86 and other processors, but because they were not satisfied with the commercial selection of 16/32-bit microprocessors at the time (mainly the Intel 8086, Motorola 68000, Zilog Z8000, and National 32016). If there had been a good and cheap 16/32-bit design commercially available, Acorn would have picked that.
In Kim Stanley Robinson's Mars trilogy (which is highly optimistic in terms of technology, BTW), they send comets to skim the atmosphere of Mars, shedding water vapour as they go. This seems like a safer way to add water vapour (which is also a greenhouse gas) to the atmosphere. It would require a lot of comets, but it is still better than nukes.
But I agree that terraforming Mars will always be rather iffy, because it is small, far from the sun, and inactive geologically.
Given today's focus on profit from short-term fluctuations in share prices (even down to milliseconds), share prices are highly volatile and have very little to do with the profitability of the company. Since trading on share-price movements is a zero-sum game, the main effect is that the clever (and fast) take money from the not-so-clever (and not-so-fast). Society as a whole gains zip from this. Quite the opposite, in fact, as we have to bail out banks that made the wrong bets, while the banks that make the right bets channel their profits into huge bonuses for their CEOs, CFOs, etc., who immediately send them to the Cayman Islands.
So, IMO, shares should only be traded at par, so the only potential gain from shares would be payment of dividends. This is, by its nature, not a zero-sum game, and millisecond trading will not profit from it.
I don't see this happening, though. Not only will many financial institutions do what they can to prevent it, but they will find ways around a law that requires shares to be sold at a fixed price: Rather than trading the shares themselves at varying prices, they will trade papers that promise to buy or sell shares (at the fixed price) at a specified later date. These papers will be subject to price fluctuations, unless the laws forbid these too. And if they do, the banks will just find some other form of derivative to trade.
"Cross-platform? Up to a point. F# seems to be tied to Visual Studio".
I use Emacs for F# programming. Also, MS is releasing an open-source, cut-down version of Visual Studio (VS Code) for Linux and macOS, so even if you want an IDE, you are not tied to Windows.
My main gripe with F# is that it is based on OCaml rather than Standard ML. This makes the syntax (especially for pattern matching) a bit clumsy in places. But, overall, F# is a big improvement over C#, Java, C++ and a host of other mainstream languages. If Apple's Swift is made truly cross-platform, it may rival F#, though.
JLV wrote: "And on a more general note, what are the design features that you disagree with?"
I admit that I may not be entirely up to date with the newest developments, but from what I remember from reading about it a while back:
- The choice of reference-counting GC. It is slow, and it necessitates using weak pointers if you create cyclic structures such as doubly-linked lists. There are concurrent GC algorithms that would be much better.
- That only objects are boxed. This makes it impossible to define a recursive enum without resorting to classes.
- A rather verbose syntax for enums, as well as a few other odd syntactic choices.
Actually, the three-body problem IS chaotic in the modern, mathematical sense. There are non-chaotic three-body configurations (for example, when several small bodies orbit a much larger mass in near-circular orbits), but the general problem is chaotic. Smaller moons closely orbiting two large co-orbiting bodies are almost bound to form a chaotic system.
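The hallmark of chaos, sensitive dependence on initial conditions, is easy to show numerically. A rough sketch, using leapfrog integration of the planar Pythagorean (Burrau) three-body configuration with one starting coordinate nudged by a part in a billion (the configuration, step size and run length are chosen for illustration only, with G = 1):

```python
import math

def accelerations(pos, masses):
    """Pairwise inverse-square gravitational accelerations (G = 1)."""
    n = len(masses)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def simulate(start, steps=50000, dt=0.0002):
    """Leapfrog (kick-drift-kick) integration; bodies start at rest."""
    masses = [3.0, 4.0, 5.0]                  # Burrau's problem
    pos = [p[:] for p in start]
    vel = [[0.0, 0.0] for _ in range(3)]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(3):
            vel[i][0] += 0.5 * dt * acc[i][0]; vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0];       pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos, masses)
        for i in range(3):
            vel[i][0] += 0.5 * dt * acc[i][0]; vel[i][1] += 0.5 * dt * acc[i][1]
    return pos

start = [[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]]
nudged = [p[:] for p in start]
nudged[0][0] += 1e-9                          # a part-in-a-billion nudge

a, b = simulate(start), simulate(nudged)
sep = math.hypot(a[0][0] - b[0][0], a[0][1] - b[0][1])
# After t = 10, the two trajectories have drifted apart by many
# orders of magnitude more than the original 1e-9 perturbation.
```

The near-circular hierarchical configurations mentioned above would show no such blow-up, which is exactly the distinction between the special cases and the general problem.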
That resolution only makes sense for wall displays, but a DisplayPort connection could drive one.
This drive for higher resolution resembles the similar drive in digital cameras, where resolutions on most cameras now exceed the resolving power of the lenses. What digital cameras need instead is better light sensitivity.
On a laptop or tablet screen, 4K is more than enough (3K is, IMO, the useful limit for screens under 17"). As with cameras, what is needed is not more pixels: what screens need is better colour reproduction and better visibility in sunlight. Reflected-light screens (like Qualcomm's Mirasol, once fully developed) may be the future.
Most code obfuscation is done at the lexical level: whitespace and comments are eliminated, variables and procedures are renamed, macros are expanded, and so on. As mentioned in the article, such tools cannot hide coding style, as style goes far beyond lexical details. So a good obfuscation tool must work on the semantic level of the program: it must replace code with semantically equivalent code using more than just local syntactic or lexical transformations. This is very difficult to do, especially if the language's semantics are loosely specified (*cough* C *cough*). Writing such a tool is (at least) as complicated as writing a compiler, which is why it is rarely done. But there is research that points the way: http://dl.acm.org/citation.cfm?id=2103761
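To see how shallow the lexical level is, here is a toy Python renamer built on the standard ast module (`ast.unparse` needs Python 3.9+; the `Renamer` class is made up for this sketch and ignores scoping, globals and attributes). Comments vanish and every variable becomes v0, v1, ..., yet the structure, and hence the style, of the code is untouched:

```python
import ast

class Renamer(ast.NodeTransformer):
    """Toy lexical obfuscator: rename every plain variable to v0, v1, ...
    (deliberately naive: no scoping, no globals, no attribute names)."""
    def __init__(self):
        self.names = {}

    def visit_Name(self, node):
        new = self.names.setdefault(node.id, f"v{len(self.names)}")
        return ast.copy_location(ast.Name(id=new, ctx=node.ctx), node)

src = """
total = 0
for item in [1, 2, 3]:  # comments vanish in the round-trip
    total = total + item
"""
tree = Renamer().visit(ast.parse(src))
print(ast.unparse(tree))
```

The loop-over-accumulator shape survives unchanged, which is why style-based authorship analysis still works; a semantic-level tool would have to rewrite that shape itself.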
That competitors who complete more tasks in coding competitions have, on average, longer programs than those who complete fewer is not surprising. Mark Twain is often credited with ending a letter: "I apologize for the length of this letter. If I had had more time, it would have been shorter." The same is true of programming; it takes more time to write shorter code. It is often faster to cut-and-paste and make local modifications than to write a parameterised procedure covering all cases, and sometimes it is faster to special-case different inputs than to make a general solution, which often requires insights that take too long to obtain when you are pressed for time. And you certainly don't want to spend time simplifying code that already works. Good competition programmers also often have a standard skeleton program that they modify for each task, because it is faster than starting from scratch, so there will often be procedures left in that are never used. They do no harm, so why spend time removing them?
Coding competitions are very different from normal programming: the problems are small and self-contained, so you don't have to worry about modularisation or readability of the code (in a few hours, nobody will ever look at it again), and the process is more exploratory than normal coding. So you can't draw conclusions about general coding style from such competitions.
The article mentions three factors that (somewhat) improve code quality: strong typing, static typing and managed memory. Erlang has strong typing and managed memory, but it is dynamically typed. So by the (somewhat weak) conclusions in the paper, it is no surprise that Erlang is close to the average.
What would be interesting is factoring out the number of years the programmers have worked with their language: C and C++ programmers at least have the potential to have worked longer with their language than Erlang, TypeScript and Haskell programmers, and it is very likely that programmers who know their language intimately make fewer errors. This also makes a case for languages that are sufficiently simple that you can actually manage to learn all of them.
Also, these days, huge libraries mean that you can quickly produce useful code in almost any language, as long as you stay within the scope of the libraries. This makes the actual properties of the language itself largely irrelevant, again as long as you stay within that scope. So a true test of language (as opposed to library) productivity/quality would require giving programmers a task where they cannot make any significant use of non-trivial library functions, or where you deliberately forbid use of anything but the most basic libraries, e.g., by limiting the total library code used to, say, 1000 lines.
While it is true that these features are what is generally seen to distinguish RISC from CISC, the original MIPS design played a large part in that definition: it was (alongside the Berkeley RISC processor, the forefather of SPARC) basically what defined the concept.
I have long thought that ARM should have moved the PC out of the numbered registers when they moved the status register into a separate, unnumbered register. While you save a few instructions by not needing separate instructions for saving/loading the PC, PC-relative loads, etc., most instructions that work on general registers are meaningless when applied to the PC. And in all but the simplest pipelined implementations, special-casing R15 (the PC) complicates the hardware. So this move is hardly surprising. I'm less sure about the always-zero register. I think it would be better to avoid it (gaining an extra general register) and add a few extra instructions for the cases where it is useful, e.g., comparing to zero.
And while code density is less of an issue now than ten years ago, I think ARM should have designed a mixed 16/32-bit instruction format. For simplicity, you could require 32-bit alignment of 32-bit instructions, so 16-bit instructions would always come in pairs, and branch targets could likewise be 32-bit aligned. For example, a 32-bit word starting with two one-bits could signify that the remaining 30 bits encode two 15-bit instructions, while all other combinations of the first two bits encode 32-bit instructions.
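The decode step for such a scheme is trivial, which is part of its appeal. A sketch (the encoding is entirely hypothetical, as proposed above, not anything ARM shipped):

```python
def decode_word(word):
    """Split one 32-bit-aligned fetch under the hypothetical scheme:
    if the top two bits are 11, the remaining 30 bits hold two 15-bit
    instructions; any other top-bit combination means the whole word
    is a single 32-bit instruction."""
    assert 0 <= word < 1 << 32
    if (word >> 30) == 0b11:
        hi15 = (word >> 15) & 0x7FFF   # first (earlier) 15-bit instruction
        lo15 = word & 0x7FFF           # second 15-bit instruction
        return [("short", hi15), ("short", lo15)]
    return [("long", word)]

print(decode_word(0xFFFF0000))  # two short instructions
print(decode_word(0x12345678))  # one long instruction
```

Because the discriminator sits in a fixed position of an aligned word, the fetch unit never has to track a half-word phase, unlike Thumb-2's freely mixed 16/32-bit stream.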
Biting the hand that feeds IT © 1998–2019