Re: This isn't surprising
I always considered “x64” to be shorthand for “{AMD,Intel }64”. Come to think of it, do Intel still call their AMD64 homage “Intel 64”, or do they have a new name for that too?
362 publicly visible posts • joined 11 Jun 2009
Another one of the ZX Spectrum generation here. A ZX Spectrum 48K was my Xmas pressie in 1983, followed by an Alphacom 32 printer the following year, then a ZX Expansion System (Microdrive bundle), then a QL (after Amstrad reduced them to get rid of them). Knew of Sinclair some time before that though - my dad had previously bought an Oxford calculator (also reduced to clear, ISTR). One of the LED segments died, and when he sent it off to get fixed he got a Sinclair Enterprise as a replacement (which I still have). Three years after unboxing that Spectrum I had decided to study computer science at uni. Still not sure if that was a good move ;-)
The Spectrum was definitely a product in the right place at the right time - just enough capability for decently playable games, and significantly cheaper than the competition.
I suppose Clive's approach to product development could be summed up as an obsession with cutting corners, bending rules and taking unorthodox approaches (e.g. the ZX Printer, Microdrives, weird keyboards, right-angled CRTs...), all to cut costs to achieve attractive price points. For instance, the serial ports on the QL are an astonishing feat of bodgery, just to avoid using conventional UARTs.
Of course, in the fast-moving days of the 1980s, sometimes the mainstream technologies would catch up pricewise by the time Clive's alternatives hit the streets...
Not sure why the author starts off talking about minicomputers - the minicomputer architectures of Prime, DG etc were already old hat by the mid-90s - what Intel was trying to kill with Itanium were the proprietary *microprocessor* architectures of the Unix workstation/server vendors like Sun (SPARC), SGI (MIPS), DEC/Compaq (Alpha) and IBM (POWER). These were not “minicomputers”. To some extent, then, it was a success, as it did indeed kill off MIPS (as a mainstream architecture, it of course lived on in the embedded world) and Alpha. Even Sun and IBM hedged their bets at one point with OS ports to Itanium.
"1 litre of petrol is equivalent to 10kWh. So that means the petrol pump is equivalent to a 30MW supply. That is the same capacity as 10,000 UK domestic 13A plugs."
It's interesting to compare the effective energy transfer rate of petrol pumps with EV chargers, but bear in mind that electric motors are roughly three times as efficient as ICEs, so that needs factoring in too.
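A quick sanity check of the quoted figures. The 10 kWh/litre number is from the quoted post; the ~50 litres/minute pump flow rate and nominal 230 V UK mains are assumed round figures for illustration:

```python
# Back-of-the-envelope: petrol-pump energy transfer rate vs mains sockets.
# 10 kWh/litre comes from the quoted post; the pump flow rate and mains
# voltage below are illustrative assumptions.

KWH_PER_LITRE = 10        # quoted energy content of petrol
PUMP_LITRES_PER_MIN = 50  # assumed forecourt pump flow rate
MAINS_VOLTS = 230         # nominal UK mains voltage
PLUG_AMPS = 13            # standard UK plug fuse rating

# Energy per second at the pump, converted to kW:
# (kWh per minute) / 60 gives kWh per second; x3600 gives kW.
pump_kw = KWH_PER_LITRE * PUMP_LITRES_PER_MIN / 60 * 3600
plug_kw = MAINS_VOLTS * PLUG_AMPS / 1000  # ~3 kW per 13 A socket

print(f"Pump transfer rate: {pump_kw / 1000:.0f} MW")          # ~30 MW
print(f"Equivalent 13 A sockets: {pump_kw / plug_kw:,.0f}")    # ~10,000

# Allowing for an ICE being roughly a third as efficient as an
# electric drivetrain, the *useful* transfer rate is about a third:
print(f"Efficiency-adjusted: {pump_kw / 3 / 1000:.0f} MW")
```

Which reproduces the quoted 30 MW / 10,000-socket figures, and shows the like-for-like comparison drops to roughly 10 MW once efficiency is factored in.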
Of course the inefficiency of an ICE comes in very handy in the middle of winter where a continuous blast of "free" hot air out of the dashboard as you drive can be very welcome...
The 737-8200 (aka 737 MAX 200) is indeed a custom high-capacity version of the 737 MAX 8 for Ryanair.
The boring non-marketing designations used in official documentation for the 737 MAX series have always been 737-7, 737-8, 737-9 and 737-10 (not to be confused with the previous-generation 737-700, 737-800 and 737-900 of course...).
Um, no, Compaq (remember them?) decided to can Alpha in favour of Itanium before they merged with HP.
HP started the VLIW research project intended to produce a follow-on to PA-RISC in 1989. That later became Itanium, after they partnered up with Intel.
I switched to Safari on my MacBook some time ago after I got pissed off at the main Chrome process (not one of the worker processes) persistently burning lots of CPU time for no good reason. Conveniently, I found a script to reopen current Chrome tabs in Safari, which avoided manually recreating the dozens of tabs over half-a-dozen windows I perpetually have open :-). Only things I really miss are favicons and being able to pin tabs in one window only (not in every window as Safari does).
If you're using a proper camera with removable card storage, there is a simple way to do photo backups. When the card gets full, stick a new one in and put the old one somewhere safe. SD cards are cheap enough nowadays to consider them as write-once media and they certainly don't take up much space!
That occurred to me back in the days when the Linux roadmap seemed to suggest that the version number would be stuck at 2.6.x forever. In that case, why not just drop the "2.6", just like GNU Emacs 13 or Java 5. But then, I didn't expect he'd come up with the genius idea of bumping the major version number for no good reason at all....
I believe the root cause of the dreaded wobble was that Sinclair re-used a RAM pack casing design intended for the ZX80 (which had a flat, vertical rear surface for the pack to butt up against) for the ZX81 (which didn't). See Rick Dickinson's sketch of his intended ZX81 RAM pack in his fascinating (if you're an old Sinclair nerd) Sinclair design archives on Flickr.
Yes, history repeats itself.... in fact the supersonic Harrier programme (P.1154) collapsed because it was meant to be both a fighter for the Navy and a bomber for the RAF (like the F-35) and they couldn't reconcile that. Of course, in those days the Navy had full-size carriers with steam catapults, so they went with the F-4 Phantom II instead (not the F-111, that was what was supposed to substitute for the TSR2), as did the RAF, with the subsonic Harrier as a consolation prize.
Sea Hornets would, I'm sure, be much cheaper, but who builds aircraft out of balsa and plywood these days...?
Why? Because F-35As can fly further, carry more ordnance, and are about $28m cheaper than an F-35B, since they don't have a lift fan and associated impedimenta. Given that F-35s of some description will be the de facto replacement for the RAF's Tornado GR.4s, in addition to their original task of replacing Joint Force Harrier (now just a distant memory), the F-35A is probably the most appropriate variant to do that, since no Tornado squadron has ever been expected to fly off an aircraft carrier.
Oh and in the good old days of the Sea Harrier, the RN only had three squadrons (and one of those was a land-based HQ/training squadron) for their fleet of three aircraft carriers...
This isn't just "ARM". In order to compete with Wintel in the server space, ARM have now defined (and are in fact still working on) various specs (SBSA, SBBR), which describe a common 64-bit ARM server system architecture in (hopefully) enough detail that OSs such as RHELSA will Just Work on a variety of different hardware vendors' offerings.
Unfortunately, this means the dreaded ACPI has now spread to ARM systems, but if that's what it takes, then...
I think we need more than just hope here. What if it doesn't? Will it slam on the brakes wherever the car might be, say, in the outside lane of a busy motorway? Then what? Wait for someone to turn it off and turn it on again?
Yes, but it's how much electricity that's the problem. If you want to fully charge an electric car with a range comparable to an ICE-powered car in, say, half an hour (so, a power requirement on the order of 200kW), then that's equivalent to about 8 substantial houses all on the point of popping the master fuse on their mains supply. For one car.
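The arithmetic behind that, for illustration. The ~100 kWh battery size and the 100 A / 230 V domestic supply limit are assumed round figures, not from the original post:

```python
# Sketch of the fast-charging power estimate. Battery size and the
# domestic main-fuse rating are illustrative assumptions.

BATTERY_KWH = 100    # assumed pack size for ICE-comparable range
CHARGE_HOURS = 0.5   # target: full charge in half an hour
HOUSE_AMPS = 100     # typical UK domestic main fuse
MAINS_VOLTS = 230    # nominal UK mains voltage

charge_kw = BATTERY_KWH / CHARGE_HOURS      # ~200 kW charger
house_kw = HOUSE_AMPS * MAINS_VOLTS / 1000  # ~23 kW max per house

print(f"Charger power: {charge_kw:.0f} kW")
print(f"Houses at full whack: {charge_kw / house_kw:.1f}")
```

That comes out at roughly 8.7 houses drawing their absolute maximum, per charging car.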
...until you can trust a self-driving car not to say "you have control", when it decides it has no idea what is going on and you're 2 seconds away from colliding with something. And if you can't trust it not to do that, then you might as well drive the damn thing yourself.
TBH, I can't imagine any smartphone marketing person getting very enthusiastic these days about a new feature that turns your super-retina displaying, 3D-face-recognizing, animoji-capable, machine-learning, augmented-reality-projecting smartphone into a pocket tranny from the 1950s.