Re: Hard to find faults with PS4, isn't it?
Perhaps I should say that I'll buy one when I can run Linux on it, and they guarantee that this ability will never again be taken away on a whim.
A reactor would be a lot safer.
Seriously: a radio-isotope power source is as "hot" as it will ever be at launch. If the launch fails, some fairly short-lived really horribly radioactive crap gets showered into the atmosphere or the ocean.
A reactor could be launched unfuelled and then fuelled in orbit. The fuel is subcritical pieces of enriched Uranium, suitably packaged. If the launch of those failed, pure Uranium (enriched or otherwise) is a lot less harmful than the contents of a radioisotope source. U-235 has a half-life of about 700 million years. That's almost non-radioactive, unless you build a supercritical assembly and add neutrons.
After it's assembled in orbit, take the reactor critical and take the spacecraft out exploring on ion drive. It won't ever come back. (To be absolutely sure give it enough fuel to reach Jupiter or wherever, and not enough to make a return flight). Once the reactor has been running for a while it'll be plenty radioactive (and unscreened). But also plenty far away enough to be safe!
@Mike Richards. You fail to mention the worst Icelandic volcanic eruption in recorded history: Laki, 1783. http://en.wikipedia.org/wiki/Laki
The eruption had serious repercussions in Europe, with an estimated 23,000 deaths in the UK alone. Globally there was climate disruption lasting several years.
... to stop their citizens getting access to "subversive" literature, just as soon as the most rudimentary data-copying technology became available. They just about kept a lid on the typewriter and carbon paper, but once the photocopier reached the Soviet bloc, they'd lost. (The DPRK seems to have learned this lesson: you can keep your masses down by denying them any technology more advanced than mediaeval, and ruling them in a like manner).
Yes, Google can remove most of the filth from their indexes, but how anyone thinks they can deal with sheets of paper containing lists of URLS distributed by paedo-sneakernet or peer-to-Tor-to-peer, heaven only knows. The servers will of course be some wholly innocent organisation's PC, running malware.
Computers, even massive systems like Google's, don't really have the chance to perform actions that affect the world around them.
I don't think this is correct. One trains a neural network by "rewarding" it (+n) for getting decisions right, and "penalising" it (-n) for getting them wrong. It has a built-in imperative to try to maximise its score. If it has any consciousness at all (I hope not), that consciousness is of a virtual environment of stimuli and chosen responses and consequences of those choices. (It would have to be a pretty darned smart virtual critter to start suspecting that it's in a virtual environment embedded in a greater reality. Human-equivalent, I'd hazard.)
A very simple life-form (an earthworm, say) can be trained to associate following certain unnatural signals with food, and others with a mild electrical shock. It'll learn to distinguish the one from the other. Just how is this different? If you attribute self-awareness to an earthworm but not to the neural network model, move down to a less sophisticated organism. It's possible to train an amoeba, even though it altogether lacks a nervous system!
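The reward/penalty loop I'm describing can be sketched in a few lines of Python. This is a toy score-updater, not a real neural network, and the signal names are made up for illustration:

```python
import random

random.seed(42)

# Toy reward/penalty training: the "agent" learns to prefer whichever
# signal earns +1 over the one that earns -1, by nudging per-signal scores.
scores = {"food_signal": 0.0, "shock_signal": 0.0}

def choose():
    # Pick the signal with the higher learned score (ties broken randomly).
    return max(scores, key=lambda s: (scores[s], random.random()))

for _ in range(100):
    pick = choose()
    reward = 1 if pick == "food_signal" else -1  # the environment's response
    scores[pick] += 0.1 * reward                 # maximise-the-score imperative

print(choose())  # after training it reliably picks "food_signal"
```

The point is only that the mechanism is the same shape as conditioning an earthworm: stimulus, choice, consequence, adjustment.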
Yes, this patent is a bunch of already-invented hardware strung together in a way that ought to be deemed obvious. So it ought to fail should it ever be challenged. (I haven't read the fine print - if there's a truly novel concept in the details of the creation of a 'loon network, then my comment does not apply to that).
One of the ways in which the patent system is broken is that patent offices issue patents on almost anything, as long as the applicant's cheque is good. Then if someone wants to challenge a patent, he needs very deep pockets to pay his lawyers. So it's a system for the benefit of the large companies that works against individuals and small companies.
Memristors have many advantages over flash, but the areal density advantage is not a large one. My money is on nonvolatile RAM (memristors) over large-block-addressed, limited-rewrite Flash, but I also expect that multi-Terabyte disk drives will be around for the foreseeable future. A lot depends on whether the HDD manufacturers have built the bit-patterned media plants and the two-digit-Tb drives before SSD eats their bread-and-butter market. They might decide to stop investing in future bigger HDDs because the HDD business doesn't have a future (which would be a strongly self-fulfilling prophecy).
The price of N Terabytes of SSD will always be N times the price of one terabyte (until they can make a 1Tb nonvolatile storage chip, if ever). The price of one disk drive will be £50 plus whatever they can get for it being bigger than cheaper ones. If a 10Tb or a 50Tb drive is ever marketed, it's a fair bet that 5 years later it will be available for £50 of today's money.
Wafer-scale SSD integration might one day put a terminal spanner in the HD works, but wafer-scale integration is something that's been coming for almost as long as nuclear fusion, and like fusion we still don't have it.
But I don't think they've built (gigabyte state-of-the-art) Flash fabs in China yet. The moment Intel or Samsung or whoever do that, they've given all their know-how to the Chinese. So it's the power cost in whatever country the Flash fab is in that you need to look up.
One can deduce an upper limit on the energy input from the sale price of the chips and current industrial electricity costs.
A 10 Watt disk drive running 24 x 365 x 5 years uses 438 kWh; at 10p/unit that's £43.80. Depending on size, an SSD may not cost a lot more than that, and it's the cost of the chips it contains that you need to use for your upper energy-input limit, not the cost of the completed, tested, packaged and warrantied assembly.
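The arithmetic in that estimate, for anyone checking (figures as in the post):

```python
# Back-of-envelope: lifetime electricity cost of a 10 W disk drive
# running continuously for 5 years at 10p per kWh.
watts = 10
hours = 24 * 365 * 5            # five years, always on
kwh = watts * hours / 1000      # 438.0 kWh
cost_gbp = kwh * 0.10           # 10p per unit, so about £43.80
print(kwh, round(cost_gbp, 2))
```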
Not long-term figures. SSD tech has been developing extremely fast, disk tech less so, but in both cases by the time any device can be pronounced long-term reliable, it's also obsolete. The manufacturers do "accelerated ageing" tests but don't have a TARDIS. They have to substitute heat, humidity, vibration, extreme usage patterns ... and time axis scaling laws of very doubtful validity.
To put it bluntly, the fact that only 1/1000 of your test sample has failed after 12 months of torture testing doesn't mean that 90% of them won't have failed after five realtime years of gentle usage. And in fact we see the unexpectedly-bad-ageing problem every time a manufacturer buys a bad batch of components. Then there's a flood of "WD / Seagate / Hitachi are cr*p" messages, really meaning "model xxxxxxx with serial numbers between xxxxxxxx and xxxxyyyy are likely to fail prematurely because the ZZ corp supplied some bad widgets". It would be nice if the manufacturers put out recall notices like car manufacturers do, but of course for a £50 (or even £150) disk drive they cannot afford to do so.
The base price of a disk drive is about £50 (they can make them cheaper, but those sacrifice some long-term reliability). It's been that price since the days it bought you just a few Gbytes.
They have technologies that will permit 50Tb disk drives working in the labs. I expect they'll be on the market within a decade. I doubt that 50Tb of SSD will ever be competitive on price.
One other thing: are we sure that SSD really is more reliable? The technology has not been in use for very long. One thing we all know, an SSD is likely to fail "just like that" with no advance warning. HDDs frequently (though not always) give warning of pending failure and permit pre-emptive replacement. My instincts say wait a few years more before betting the farm on SSD storage. Also that memristor tech will supplant flash SSDs - true random access instead of large-block addressing is a huge plus.
There are official "crawler lanes" on hills, so that means that the other lanes at that location must be at least faster lanes if not fast lanes.
A driver, in the fast lane, went from 65mph to 50mph because he got a call on his cell phone. No brake lights, he simply took his foot off the gas and kept driving at 50mph with his cell phone in his hand.
Which was blatantly illegal (in the UK). Use of a hand-held mobile while driving a car should be made illegal in any jurisdiction where it isn't already.
I'd assumed this article was about hands-free mobiles? In which case I can't see the difference between talking on a mobile and talking with a passenger. Also if cars blocked mobiles, they would not be able to automatically call for help after a serious accident (which may have left the driver and passengers unable to make such a call manually).
chances of B16B00B5 in there?
Or even a DEADBABE? (I Shouldn't have read the BOFH of the week)
That is what worries me. What I want to know is if this is someone clever, or someone with a Very Large Budget
The universe might contain closed timelike curves the existence of which we'd be unable to appreciate beyond (maybe) an unrepeatable "that's odd" observation or feeling discarded as erroneous or insignificant. They might be very small and therefore doomed to be lost in the noise of ordinary molecular interactions, or very large, so a human's perspective couldn't appreciate the backward causality between the big crunch and the big bang. Or even a perfect conspiracy on a human scale that's just written off as coincidence, rather than an example of the future causing the past. How could you ever prove non-coincidence, given that you'd only spotted it after studying observations of a volume of spacetime that contained (past tense intended) the entire CTC?
We can't observe the early history of the universe, back beyond when the electrons attached themselves to the protons to create hydrogen and what is now the cosmic microwave background radiation.
If one measures time linearly, that's almost all the way back. But if one uses a logarithmic axis measuring time since the big bang in units of Planck time, then it's almost all of it that we can't see. BTW the size of a Planck time unit involves c.
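To put a number on "almost all of it": a rough calculation with assumed round figures (Planck time ~5.4e-44 s, recombination at ~380,000 years, age of the universe ~13.8 Gyr; none of these are in the post, so treat them as my inputs):

```python
import math

# Fraction of a logarithmic time axis (in Planck units) that lies
# before the CMB was emitted, i.e. the part we cannot see.
SECONDS_PER_YEAR = 3.15e7
planck = 5.4e-44                     # Planck time, seconds (assumed)
cmb = 380e3 * SECONDS_PER_YEAR       # recombination epoch, seconds
now = 13.8e9 * SECONDS_PER_YEAR      # age of the universe, seconds
hidden = math.log(cmb / planck) / math.log(now / planck)
print(round(hidden, 2))  # ~0.93: on a log axis, almost all of it is hidden
```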
If it really has to be infinitely long then it requires infinite mass and isn't possible.
I haven't read the paper. I'd guess that this particular infinity is there to simplify the maths by taking the z axis completely out of the equations, and one might reasonably suspect that a finite length would suffice. OTOH, until someone does the maths, it's possible that some sort of essential instability would develop along the z axis of any such putative time machine of finite length.
On Linux you can throttle the rebuild rate in megabytes/second, if you are willing to trade a slower rebuild for greater responsiveness during the rebuild. It also prioritizes file I/O over rebuild I/O. If your workload is still seriously impacted, that suggests to me that the hardware is only marginally adequate even under ideal (non-rebuild) conditions.
Rebuild time of a RAID array with no overlying filesystem activity is simply the time to read N disks and write one. If the controller is capable of supporting all N disks streaming in parallel, for a typical 150 Mbyte/s 2Tb drive that's about 13000 seconds or ~4 hours flat-out, 8-12 hours if throttled to allow overlying filesystem activity. A whole day would suggest a bottleneck in the controller or the CPU. Soft RAID on an Atom CPU? What's the maximum rate it can calculate XORs? OTOH a decent Core-2 quad can outperform 4 SATA disk drives without breaking sweat (and still has three cores left for other work).
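The back-of-envelope rebuild time, using the figures above:

```python
# Rough RAID rebuild time: with all spindles streaming in parallel,
# the wall-clock time is simply capacity / sustained streaming rate.
capacity_bytes = 2e12            # one 2 Tb drive
rate_bytes_per_s = 150e6         # 150 Mbyte/s sustained streaming
seconds = capacity_bytes / rate_bytes_per_s
print(round(seconds), round(seconds / 3600, 1))  # ~13333 s, ~3.7 hours
```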
The number of I/O operations needed to copy a whole filesystem as files (indeed, even to read a whole filesystem as files) exceeds the number of operations needed to rebuild the underlying RAID block device (apart from the case where the filesystem is mostly empty, or partly empty with the rest made up of a small number of very large files). Also, rebuilding does sequential block reading, with only one-cylinder seeks. Accessing a filesystem will tend to do random seeks in larger numbers. So there's no fundamental advantage in doing mirroring at the file level. (I'm not saying that there can't be an advantage when particular implementations are compared).
You get a disk failure on a hot-swap-capable RAID box and you do exactly the same. The operating system rebuilds the array in the background while you continue to use the degraded array.
I don't know which of the various home array boxes are this capable, but you can build your own that is, based on a PC, Linux software RAID, hot-swap disk cage(s) and a hot-swap-capable SATA controller.
Just supposing a major Indian bank went tits-up because of a complete XP-based IT failure, I wonder what the consequential death toll would be? This is India we are talking about. Lots of people surviving very close to the edge even when things are going well.
Yes, I can imagine thousands of deaths. It would just be impossible to tell, except perhaps by statistics, how many or even whether they happened.
Just like the excess deaths of elderly people in the UK in Winter over Summer, some of whom die because they can't afford the heating bills, or just because they think they can't. How many, we can't know.
Name any equivalent software products that get nearly 13 years of support?
Red Hat Enterprise Linux. 13 years to end of extended life cycle. (for 5 and 6, both at future dates).
But the key difference is there's no vendor kill switch. If you really *have* to keep that RHEL5 system on RHEL5 until 2025 you can get hold of the open source, fix your own bugs after Red Hat won't, or pay some other organisation to do so on your behalf. It won't be cheap, but it will be possible.
I could also mention [open]VMS, still supported after 32 years, the corporate death of two former owners, and the engineering death of its two former CPU architectures.
Microsoft has more to lose than Union Carbide did. Suppose Microsoft did kiss its entire Indian business goodbye. Would that be the end of it? India would go Linux. Then it would start pushing Linux as an export. And the rest of the world would think that if an entire subcontinent can go Linux, why can't we?
It won't happen. Neither the April cut-off, nor Microsoft pulling out of India.
Don't you have to upgrade from 32bit XP to Vista-32 first, then 7-32, then is there a 32-bit 8? It's probably about 24 hours "work" answering the odd question every 20 minutes or so. And if you believe anything will still be working after that completes ....
There's no direct upgrade path from 32-bit XP to 64-bit Windows 7 (or 8). Why not, Microsoft only knows. It's not as if nobody wanted it. They'd even pay for it.
As for that April deadline, it's like the USA debt ceiling. I'm 99% sure they'll take it to the wire, and then announce another year's support. That'll probably happen twice. Because the world's governments aren't going to make the deadline, and they will quietly explain to Microsoft what the consequences to Microsoft of not providing XP updates past next April will be. By which I mean, punitive (multi-billion) fines for abuse of their monopoly, and terminal loss of government contracts. If you have to do an emergency migration anyway, why not migrate to Linux, and never again be held to ransom by a monopolist's finger on the kill switch?
The major factor in play is "China and Russia (currently) like them".
It's bizarre that China supports them. Look at a map of the world. Where can the North Koreans invade with their army? There's South Korea, and then there's China. And that's it.
Joseph Stalin made a similar miscalculation concerning a well-known German lunatic, but one might excuse him on the grounds that he was none too sane himself. The Chinese leadership, on the other hand....
Ban any organisation engaged in significant (amoral? immoral?) artificial tax avoidance from competing for projects funded out of the public purse. Set up an independent body to scrutinise the tax arrangements of anyone submitting a tender, and to work out whether the public purse would not be better served by awarding the contract to a different organisation that returned a greater slice of its earnings as taxes. 30 outgoing, 10 back as tax, beats 28 outgoing, 5 back as tax.
These sorts of shenanigans were commonplace back when banks were small and unregulated. You put your money in, some crooks took your money out, and the bank suddenly wasn't there any more. No "too big to fail", no government guarantee, just a safe that oddly did not contain the gold that it was supposed to contain, and some bankers whose whereabouts could suddenly no longer be ascertained.
Or you could hoard your gold in the privacy of your own home, and fall victim to a different class of robber or a different sort of trusted partner.
There's nothing new under the sun?
What he means:
"I fscked up. I wanted them to be spying on all you plebs, because some of you are dangerous lunatics who fly aeroplanes into skyscrapers and plant bombs. It was kinda cool that we could keep tabs on people who simply oppose our gummint as well. But the b*stards aren't doing what we asked them to. They're spying on us as well! They're becoming a law unto themselves!! Soon, they'll mount a coup, round all of us up and shoot us!!!!"
In the world of Linux and Free software it's been a non-issue for a long while. You can run stuff locally. Or you can run stuff on a remote server with X display across the network. And there's actually not much difference between a thin client and a desktop PC ... they're both systems running Linux.
The key hardware issue is network bandwidth. In theory a Gbit LAN can transfer data at 120Mbytes/second, which is about the same as a hard drive (but considerably inferior to an SSD). In practice there are bottlenecks. As for network X-serving, the difference in bandwidth is far more noticeable if (and only if) your work necessarily involves interaction with moving imagery (especially interaction with 3D models). The CPU to GPU / Graphics card bandwidth is enormous compared even to a dedicated flat-out Gbit LAN.
But most office-pushers could be on thin clients were it not for the baroque tangle of Windows licensing. Go Linux + Libreoffice on thin clients and a fat server and LAN!
The moon stabilizes the axis of rotation of the earth, allowing for less extreme winter/summer climate fluctuations (on a timescale of O(10^5) years). It's also possible that lunar tidal drag is critical for maintaining the geo-dynamo, without which the solar wind would have stripped Earth's atmosphere, leaving this planet like a hotter Mars. This latter is speculative, because the workings of the geo-dynamo are not well understood (it's very hard to take a closer look at it :-)
If 4 billion years is accurate, the Earth wasn't even solid then.
The oldest dated terrestrial rocks are 3.8 billion years old. The oldest terrestrial zircon crystals are 4.3 billion years old. (Zircons crystallize out of even white-hot magma. They are harder than just about anything except diamond, so they survive all subsequent geological processes except deep re-melting. Zircon can contain Uranium but not Lead impurities at formation, which allows accurate radio-isotope dating: any Lead in a zircon must have started as Uranium when the zircon crystallized.)
@sandtitz 15:07 Evidence ... with the possible exception of proprietary languages where Microsoft or some other party will not permit development of a Linux compiler, can you tell me any language for which a compiler or interpreter exists on Windows but not on Linux?
A lot of scientific codes are still written in FORTRAN (which didn't stop being developed in '77, as some folks believe, and which does retain certain advantages over C/C++, whatever the C bigots say). Linux has a free f95. Windows has ... well, you can persuade f95 to run in the Cygwin Unix-emulation environment. But if you're going down that route I'd recommend just running Linux proper in a free-beer VMware Player VM instead.
It's not just the languages but the libraries. If the libraries your code needs are available on Linux and not on Windows, do you really think someone whose primary motivation is science research is going to want to volunteer to port them and shake out the resulting bugs? In which case the fact that Microsoft makes its compilers free-beer (not free-open) accessible to parts of academia subject to certain restrictions, is of little relevance.
BTW we're not talking little libraries and little programs here!
A lot of visualisation codes are written for X-windows. Yes, you can run a free X environment atop MS Windows. But it's inferior on the performance front, and these codes tend to need all the performance they can get.
The reason for preferring Linux is usually that they need to run research codes which are written to run on Linux. That in turn is at least in part because the Linux environment is open and provides a wide range of free programming tools, whereas the Windows environment is closed and does not. Also in most cases the desktop is the front-end to the computer cluster running Linux. Using Windows as a front-end would be somewhat perverse. (I do know one person who works that way! ).
A few mostly European-mainland students do turn up knowing Linux, and Windows not at all. If your personal notebook runs Linux you are hardly likely to choose a Windows desktop!
A large number (and a growing proportion) have Macbooks as their personal system and ask for huge iMacs, but unless they have unusually good funding the budget doesn't stretch far enough for big Apples.
The reason is that the Windows 8 user interface is not appreciated in a positive way, and that's putting it mildly.
As for why we buy them PCs rather than inviting them to BYOD ... sometimes I wonder myself. Some of them definitely do require hardware well above the usual desktop PC for displaying folding proteins, molecular dynamics, that sort of thing. (Computing that sort of thing uses large clusters of PCs in a server room ... just displaying the results is demanding). Maybe with others the hardware is wasted.
On the support side of the desk, it is far easier and productive supporting a few known configurations than many random malware-infested ones. Windows software deployment by changing one key in the AD is far easier than having to physically present oneself at the user's personally owned PC. And a support person costs a lot more per annum than the capital cost of quite a large number of PCs. So there's a rational reason for you. I doubt it's the actual one! The world is not that sensible a place.
I don't know if a university science department's latest intake of postgrad students is a representative group, but we offer them a choice between Windows 7 and Windows 8 (and Linux of course ). Of those wanting Windows:
8 asked for Windows 7
0 asked for Windows 8
And I'm frequently asked to remove Windows 8 from a laptop and install Windows 7. If MS wants to sell more PCs for Xmas, isn't the answer obvious?
Should I invoke Rule 34 at this point?
It's probably just a faulty battery.
But these days with fancy battery control chips capable of being controlled by the slab's CPU, I fear it's only a matter of time before someone works out how to implement HCF (halt and catch fire).
Oh yes, the UV laser pulse zap gun. Yes, you could deliver a large pulse as far as the first object in the path which is opaque to UV. That would usually be a window. You could probably break the window, but a bullet (or brick) works better. You might just about be able to disable the PC in the room (although I've experienced lightning striking a lamp-post about ten feet from my PC, and the PC survived - I had momentary doubt about myself!). But for military use, wouldn't even a humble RPG wreak much greater havoc?
A zap gun has probably got some application as a wireless Taser (or real-life "Phaser set to stun") if they can make a suitable hand-held laser. You don't need an extremely fast rise-time or enormous E-fields to knock a person out. (Once again if you want to kill them, a bullet is probably a better bet). It's other application might be near-speed-of-light delivery of such a pulse to (say) an incoming hostile aircraft or missile. But aircraft are provably hardened against lightning strikes.
I don't have access to any top secret papers, but I very much doubt that a Terawatt pulse at one end of a wire made of ionised air can arrive as a terawatt pulse at the other. It's a dispersive transmission line, and I'd expect the pulse to be considerably broadened (and consequently flattened, de-fanged) in transit.
As a small-scale disrupter in terrorist or saboteur hands, one has to worry.
On a large scale, inverse squares is your friend. You'd have to generate a LOT of electrical energy to have the same effect at 1km range as at 10m: 10,000 times as much.
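The inverse-square arithmetic, spelled out:

```python
# 1 km is 100x the distance of 10 m; the flux falls as distance squared,
# so matching it at the target needs 100**2 times the energy.
factor = (1000 / 10) ** 2
print(int(factor))  # 10000
```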
And for the record a nuke generates a serious EMP when detonated in the upper atmosphere, above 50km up. Lower, and it shorts out its own EMP with its own fireball. Not to imply that a nuclear fireball is in any way harmless ....
Very few Asians have blue eyes, but it's not completely unknown. I'm told that this is evidence that the Vikings travelled more widely than is commonly known
Not that the Vikings were in the same league as Genghis Khan.
I was thinking tentacular porn, rather more Laundry than Rule 34 ... we all know about the Redmond cultists, don't we? (Embrace, Extend, Extinguish ...)
You mean, like Firefox does?
The principle is caveat emptor. Check out the credentials of the person or org offering the plug-in. Find some satisfied users. It's no different to any other software. Indeed, the principle was well-known to the Romans (hence the latin phrase), long before software existed. In "Alice in Wonderland" it was a bottle labelled "Drink Me" which Alice unwisely took at face value.
Consider also "A fool and his money are soon parted", and "nothing is proof against an exceedingly great fool".
Blocking ads is a morally grey area, given that the Internet can't run for free, but the principle should encourage ad pushers to use less-annoying advertisements which people wouldn't bother to block in the first place.
No, it's not grey. It's just you exercising your freedom of choice. If others doing the same causes certain organisations to cease to be profitable, they'll have to find another business model or cease trading. That's commercial life. Newspapers, other than freesheets, have now almost all gone subscription-only for their online editions, which is fine by me. I'd pay, if I needed access to their news to any significant extent. (I pay for Linux Weekly News).
Google may be planning to take away that particular freedom of choice from Chrome users. If I'd ever left Firefox, I'd be returning soon in response to this news.
Has anyone ever considered an ad-blocker that classified adverts, and allowed them through if they met the user's criteria for minimal annoyance? Or the same done manually (better - maybe?), funded by the responsible advertisers who don't want to be blocked only because of the irresponsible advertisers? Or an advert-server which guarantees no adverts that don't pass minimal-annoyance criteria?
Probably not, because most www users don't block ads and probably don't know that they can.
It was a stony meteorite rather than a nickel-iron one. Stone is brittle and usually has internal weaknesses. So when it was subjected to thousands of G, and when its exterior was abruptly made very hot, it broke up and then (with a massively increased surface area) "exploded" (meaning lots of pieces in close proximity deposited most of their kinetic energy into the same smallish volume of air).
An iron meteor of the same mass would probably have held together and made a crater on the surface, had it been big enough not to burn up completely in transit. (Iron burns; most stone doesn't, because it's already as oxidized as it can get).
The predictable human death rate from meteor impact is at least 60 people per annum.
Surprised? Doubtful? That's because I'm using a conservative average, but running it back over hundreds of millions of years.
Let's assume just one Chicxulub-scale impact every 100 million years. It would kill most if not all of the human race if it happened today. 6 billion deaths / 100 million years = 60 per annum.
Aren't statistics wonderful?
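For the sceptical, the averaging trick laid bare (using the round figures above):

```python
# One civilisation-ending impact per 100 Myr, killing ~6 billion people,
# smeared out into an annual death rate.
deaths_per_event = 6e9
years_between_events = 100e6
rate = deaths_per_event / years_between_events
print(rate)  # 60.0 deaths per annum, on average
```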
The cause of the Permian-Triassic extinction is not widely believed to have been an impact event. It was coincident with the eruption of the Siberian traps, probably the most colossal volcanic eruption since life evolved. It could hardly have caused less than a global ecological catastrophe, given the gases released into the atmosphere.
Of course it's possible that an impact event provided the final straw for a seriously damaged ecosystem. This is the probable fate of the dinosaurs in the more recent mass extinction. The Chicxulub impact occurred at the same time as the eruption of the Deccan traps, another massive volcanic outpouring, though considerably smaller than the Siberian traps.
Finally, at least one sort of catastrophic event exists that would leave no direct geological trace at a remove of hundreds of My: exposure of the Earth to a gamma-ray burst in a "nearby" galaxy. This would ionise some fraction of the N2 and O2 molecules over half the planet's atmosphere, followed by recombination into Nitrogen Oxides. The Ozone layer would be almost instantly gone, and decades of nitric acid-laden rain would follow. Land-based life would suffer worst, as almost all plant life would be destroyed. You can of course posit any quantity of NOx creation depending on the gamma-ray flux.
Geologists have something of a track record of being right!
Back in Victorian days, they became quite certain that the earth was billions of years old, by measuring sedimentary rock strata thicknesses and present-day deposition rates. Physicists, however, were equally certain that the Sun could not be more than tens of millions of years old, because no chemical reaction could fuel it for any longer. The Geologists insisted that if chemistry couldn't, something else must....
...and in due course, nuclear fusion was discovered.
Music and programming (and pure mathematics) have far more in common than people who aren't at all musical can ever realize. I've never known a university maths department that doesn't have some truly gifted amateur musicians, and a random collection of upper-quartile programmers won't be far behind. It's as if they employ the same parts of our brains, to ends that are superficially very different and deep down not at all so.
I've occasionally thought that a musical score is the machine-code, and learning to perform that music amounts to reverse-compiling it inside one's head, until one has ascended back to the high-level abstractions more like those that the composer started from inside his head.
Once again: Python is a free download, works the same on Windows or Linux or an RPi. So it's not availability of a simple interpreted language at fault (one FAR better structured than BASIC!).
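A taste of what "better structured than BASIC" means: named functions, real loops and lists instead of line numbers and GOTO. (A toy example of my own, not from any curriculum.)

```python
# The kind of first program a kid might write in Python.
def times_table(n, upto=10):
    """Return the n-times table as a list of strings."""
    return [f"{n} x {i} = {n * i}" for i in range(1, upto + 1)]

for line in times_table(7, 3):
    print(line)
```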
Either the kids don't want to drink, or there are no teachers to lead them to the water.