Looking at the games list...
I strongly suspect that this is just the DTV inside a new box, and with a couple of USB ports bolted to the side.
I've still got a DTV in a cupboard somewhere...
"What's 80 quid a month and 150 up front when your last contract was 70 quid a month and 99 quid upfront?"
Except it's probably going to be more than a tenner.
At a glance over at Carphone Warehouse, the iPhone 7 Plus is 699 quid sim-free; the most sensible contract [*] is around £43 a month with around £150 up front.
Interestingly, the Galaxy Note 8 is £869 sim-free and the contract/up-front costs are pretty much identical despite it being £170 more expensive. Presumably Samsung and/or the phone networks are offering a heftier discount?
Any which way, it suggests that contracts for the iPhone X will be around the 65 quid mark once the initial feeding frenzy subsides, which isn't that much of a jump.
[*] Unless you want to lob 660 quid at them upfront to get it down to 13 quid a month, though at that point, I'd be inclined to scrape a few more coppers together to buy the phone outright and then go hunting for the best sim-free deal...
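For what it's worth, the back-of-an-envelope sums come out roughly like this (a quick Python sketch using the figures above; the ~£13/month SIM-only figure is a guess based on the cheaper deals around, not a quote):

    # Rough 24-month cost comparison, using the numbers quoted above.
    # All figures are illustrative and will drift with whatever deals are on offer.
    months = 24

    contract_total = 150 + 43 * months    # upfront + monthly, iPhone 7 Plus on contract
    outright_total = 699 + 13 * months    # buy the phone outright + a cheap SIM-only deal

    print(f"On contract:          £{contract_total}")   # £1182
    print(f"Outright + SIM-only:  £{outright_total}")   # £1011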
5mb? Tha were lucky!
Back in t'day, I had a ZX Spectrum which loaded data from tape; a C15 would hold up to around 160KB of data, or approx. 80KB per side. A C90 was absolute luxury, though you really needed a counter on your tape player to figure out where each program was.
Fast-forwarding past the C64 and Amiga, my first PC was a 486 Viglen with (I think) a 50MB HDD. Oh, the joys of running DoubleSpace (or was it Stacker?) over this to try and increase the amount of space, as well as the hypnotic patterns of the defragger...
At some point, I acquired an 850MB 3.5" HDD; sadly, this exceeded the ~528MB limit imposed by the BIOS in the PC. IIRC, there was a software patch, though some drives had a physical jumper you could set to fool the BIOS.
A while later, I worked for a large company which was clearing out vast swathes of obsolete tech. Among a few other things, I picked up a SCSI card and a mahoosive 2GB HDD - a full-height 5.25" unit, it was literally the size of two CD drives duct-taped together. I ended up with this huge bronze lump sat atop the PC case, as there was no way it'd fit inside!
In some ways, this makes me even more appreciate the fact that I can fit the sum of all human knowledge[*] onto my fingernail. However, I do also slightly miss the challenges of the old days, when PC technology was still a bit of a wild-west arena...
[*] Wikipedia is quoted as being around 160GB in size, though this is for text only. Still, it'd leave a bit of room for some light reading!
Fundamentally, it feels like there would be a lot of issues around converting a commercial drone into a weapon.
First, navigation. GPS can be spoofed, downgraded or even disabled altogether, and is no real use against moving targets. Remote steering is possible, but can be jammed and is subject to lag - and unless you have some form of swarming technology, you need one warm body per drone. In theory, you could give a drone optical sensors and image recognition technology, or even have someone plant a beacon for a drone to home in on, but even those are subject to spoofing and jamming.
Then, there's range. A quick glance at Google indicates that commercial drone control range generally tops out at around 1km - though there is mention of one with a 7km range. Equally, flight time tops out at 15-30 minutes. I'd guess that this gas-turbine fuelled beastie may be capable of much more, but even then, it's unlikely to compare to a dedicated missile platform.
And there's more: commercial drones aren't designed to have a low radar profile, they're not designed for flying in adverse weather, they don't have systems redundancy, they don't have any defensive capabilities, they only have short-range collision-avoidance sensors, they're unarmoured, they're not EMP hardened, etc.
Some of these things can be addressed as technology improves, or even retro-fitted on, but fundamentally, there's significant differences between the capabilities required for military and commercial activities.
Personally I think commercial-grade drones will take on a much smaller scale role - e.g. acting as disposable, short range scouts during infantry actions - if you're not too worried about making your presence known, they're perfect for city clearances and building searches. It's even possible that they could be used as beacons for their larger and more heavily armed military cousins...
To be fair, CW is currently selling a sim-free dual-SIM Android phone for £47. And if you can fight through the filtering options on Amazon, there's something similar for £36.
As ever, you pays your money and you takes your chances; for me, the camera and screen size are the most important features, so I tend to stick to the higher end of the market via a 24-month contract to make the cost slightly less painful.
If those weren't criteria, these days I'd be quite happy with whatever budget Android device was on sale at the time; quality has come a long way from the early Android-landfill days!
"Explain to me again how the clueless CEOs of such companies even pretend to justify their ludicrously huge salaries when they cannot predict obvious effects of events that will impact their market"
TBH, I think the issue is more about business-politics rather than business-nous: bad news has to be couched as gently as possible to minimise impact to the share price and potential bad news will be handwaved away in the hope that nothing will actually happen until after the current incumbents have cashed out their stock options.
Anyhow, as other people have noted, the mobile-phone industry is in much the same place as the PC white-box market found itself a few years ago. Technology has become commoditised and differentiation between both product-versions and product-vendors is increasingly limited, while OEM manufacturers are undercutting premium brands and driving profit margins down.
Admittedly, the above impacts Android devices more than Apple, but I suspect even Apple will find themselves having to trim their profit margins as the quality of mid-range Android devices continues to rise.
Personally, I'm using an S7 Edge, and from what I can see, there's pretty much no point in moving to the next generation. E.g. the S8 has a slightly better CPU/GPU, a slightly larger screen, slightly longer battery life and exactly the same physical camera technology. None of which really screams "You need to upgrade NOW!!11!".
I'll therefore probably stick with my current phone until the 2-year contract runs out, by which time I'm hoping that there'll be something more interesting; if not, then I may just drop back to a SIM-only deal.
So Atari[*] are suing Nestle for a commercial which intermittently shows something vaguely resembling the game Breakout [**]?
1) It's not actually a playable game, nor is it something which is being commercially sold
2) It's clearly intended to be a comedy/parody
3) There's the best part of 40 years of prior "infringement", from 1980s magazine type-ins to commercial releases such as Arkanoid and the tutorial for pretty much every game-development kit ever made
Overall, I can see this bouncing out of the courts faster than a speeding tennis ball!
[*] Or as RPS recently put it: the creature wearing the skin of Atari...
[**] Said commercial can be viewed here: http://www.eurogamer.net/articles/2017-08-18-kitkat-accused-of-copying-ataris-breakout
But the Pound Bakery has been doing two six-inch sausage rolls for a quid for aaaages now. And as far as greasy tubes of pink protein wrapped in pastry go, they're pretty decent, especially when still warm. And it's a bit easier to save half for later... [*]
OTOH, their veggie sausage rolls are a bit odd; where most veggie rolls taste pretty much the same as a normal one, PB's have a distinct vegetable-y flavour which isn't entirely to my liking...
Either way, I've got a road-trip with friends at the weekend, so I may pick up one or two of these new monsters from Morrisons, purely for the comedy value!
[*] I'll leave the floor open for anyone wanting to make jokes about how many inches you can take at a time, etc, etc
The main thing this reminds me of is the wonderfully named Black Ghost Knife Fish - a member of the electric eel family and quite spectacular to watch. Essentially, it has a single "fin" running underneath it, which it oscillates in much the same way as I'm guessing that robot will.
It also means that unlike other fish, they're able to move backwards very easily, by reversing the oscillation...
"u can get a cheap phone and an SD card for a fraction of the price and it will be better in pretty much every single way"
Not entirely true, at least from my perspective.
iTunes is bloated and slow, but it handles playlists very well, and the integration with the iPod hardware/software means that sync'ing is quick [*]. It also facilitates 2-way synchronisation (i.e. I can tag songs on the iPod, and iTunes will pick those tags up); handy for eliminating unwanted songs from a playlist over time [**].
Then too, the non-touch iPods have another advantage: physical buttons; you don't need to turn the screen on, nor do you even need to look at the device to perform basic tasks such as skipping tracks or increasing volume.
Plus, there's the form factor; even an iPod Classic is smaller than pretty much any modern phone. And it's certainly more robust!
[*] I have experimented with an iTunes->Android plugin, but sync times are ridiculously slow, presumably because the plugin has to physically review every file which is already on the device
[**] I tend to dump albums/compilations into a playlist, and then filter the songs over time. Allegedly, MTP devices can also do 2-way syncs, but according to the Musicbee wiki, few if any Android devices actually support this feature
I can remember a friend having an MP3 player with 16MB of RAM. Yep, RAM: if your battery died, you lost your music. Mind you, it was pretty much the same for the early Palm Pilots, if you couldn't swap the AAAs over quickly enough!
Either way, at the time I was happy with my Sony MiniDisc player; in LP mode, you could get over 4 hours per disc and it could manage 30+ hours of playback (from a single AA battery!). It's a shame Sony tied the format down so much in the name of preventing piracy; didn't they learn anything from Betamax?
Anyhow, these days, I'm using a fairly bruised and battered 120GB iPod Classic; I bought it secondhand 4 or 5 years ago, and the battery still lasts for around 20 hours, so I've had my money's worth from it. Those things were definitely made to last - unlike the various iPod Touches it replaced; those seem overly prone to headphone socket failure. It may be a bit basic these days, but it can handle playlists, I can tag songs for deletion and It Just Works, to borrow the old Apple catchphrase.
If/when it does die, I may well pick up a reconditioned one from Ebay - one fitted with a new battery and solid-state storage will hopefully last for a decade or so, at which point we'll probably all have nanobots directly twanging the appropriate neurons in our brains.
"But now, a company far away can easily appropriate that park for their own uses with no permit at all. The result might be a very large crowd (just like with the concert) making heavy use of the park, interfering with local's use of said park, and if something bad happens, no liability for the 'organizers.'"
That scenario has been around for decades - see the old 90s illegal raves in the UK. Laws were passed in the UK to address that, and I'd expect the USA to have similar laws already in place.
"It's cheaper to replace the entire thing with another second hand or an entirely new one. How do these parts sellers even sell anything with at least double the price a customer would pay!?!"
Because the cost of replacing the hardware is still cheaper than the cost of rebuilding your preferred/required setup on new hardware, especially in terms of time.
Admittedly, with things moving to "cloud" based sync'ing and storage, this is becoming less of a factor. But it is still a factor.
The thing is, they've managed to get a "perfect" score on the Atari 2600 version, not the arcade original. And for all that the port was well received at the time, it's a crude and heavily cut-down copy.
Sadly, there doesn't seem to have been much analysis of the way it was coded, though there is at least one hack out there which improves the graphics (http://atariage.com/hack_page.html?SystemID=&SoftwareHackID=5). But I'd be willing to bet that the ghost algorithms in the port are entirely deterministic, unlike the arcade original, which included a random factor in the ghosts' behaviour. [*]
Beyond that, it's worth noting that the AI was only responding tactically, not strategically. Which is fine for a game like Ms Pac-man: if you can put your death off long enough, you'll eventually reach the maximum score. It wouldn't work as well in a game where there are other criteria - e.g. in Defender, you have to survive, kill all the aliens and protect the humans.
So yeah. They managed to write an AI which could produce a tactical solution for a deterministic situation with only 4 negative factors (aka: the ghosts). It's pretty much the most basic proof of concept you could produce.
Wake me up when they manage to produce something capable of tackling Defender or something more chaotic such as Robotron or Bubble Bobble...
[*] Unlike the original Pac-man, which was entirely deterministic; there were even books written on how to game the algorithms!
The gentleman in question was actually sitting in the pub and observing the results, but it's a good question - I'll have to take a look at the app at some point :)
So someone I know took great pleasure in ordering drinks to random tables. Much confusion all round. And free beer, so it's not all bad.
Back in the 16-bit days, the only form of portable media (in fact, pretty much the only media for the Atari ST and Amiga) was ye olde 3.5" floppy disc. And as impoverished teenagers and students, we tended to use the cheapest of the cheap. Magazine coverdisks were one source - I can recall people selling bin-liners full of these at markets and car-boot sales - but if you were feeling flush, you'd fork out the cash for a box of no-name disks from some far-eastern company you'd never heard of before [*].
Needless to say, quality control was an issue; aside from the usual plethora of read/write issues, you'd often get problems with the protective plate not sliding open, or the disk failing to spin correctly inside its sleeve. The best one I ever saw was when a friend enthusiastically hit the eject button on his Amiga, causing the disk to shoot out at a higher speed than normal; said disk literally disintegrated in flight, like an armour-piercing sabot round...
[*] Much the same happened with the earlier 8-bit machines (magazine cover-tapes and no-name C90s) - generally, the quality of the no-name cassettes was so low you could barely record audio on them, never mind the squeaks and squeals of a computer program. And then there were the budget VHS tapes as well; good luck watching anything you'd recorded on these, especially if you'd optimistically gone for LP recording; twice the duration and a quarter of the quality! Admittedly, budget CDs and DVDs were just as bad; there's nothing quite like picking up an old backup to find the aluminium peeling off...
I once spilled Yop (drinking yogurt) all over a Toshiba laptop - it was back when Yop bottles had a stupid pop-off lid, rather than one which screwed off.
Initially, it didn't want to boot, but it eventually recovered after I'd removed/cleaned the keyboard and then left it for a few weeks - I'm guessing some of the liquid seeped under the keyboard and needed to dry out.
Ever after though, a slightly sour smell of strawberries lingered around it...
And there isn't a rate-limiter or captcha mechanism built into these websites because...?
Admittedly, rate-limiting gets a bit trickier if you're dealing with requests coming in from a botnet, but slapping up a captcha would seriously hamper this kind of trawling.
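And I'm not talking about anything clever, either - even a bog-standard per-client sliding-window limiter would slow a scraper right down. A rough Python sketch (the window and limit values are plucked out of the air):

    import time
    from collections import defaultdict, deque

    WINDOW = 60   # seconds
    LIMIT = 20    # requests allowed per client per window

    _hits = defaultdict(deque)

    def allow(client_id: str) -> bool:
        """Sliding-window rate limit; reject (or serve a captcha) when over the limit."""
        now = time.monotonic()
        q = _hits[client_id]
        while q and now - q[0] > WINDOW:
            q.popleft()               # forget requests that fell out of the window
        if len(q) >= LIMIT:
            return False
        q.append(now)
        return True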
Agreed. I've been after an A4-sized tablet for a while for reading purposes - I've got a lot of scanned magazines from the 80s I'd like to read, but there's a little too much squinting required on a 9.7" screen and widescreen tablets make things even worse - a 12" widescreen display is only 5.9 inches tall, or nearly 2.4 inches narrower than a piece of A4 paper[*].
Alas, the iPad Pro is a tad too expensive to use for occasional archaeological browsing. I do occasionally eye up convertible laptops on eBay, but they tend to either have archaic technology, or are increasingly also widescreen; at 16:9, I'd have to get a 17" display to be able to view A4 pages at their original scale, which in turn drives up the weight and lowers the battery life.
[*] Can we stop using old money to measure screen sizes?
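In the spirit of that footnote, the arithmetic behind those numbers (diagonal plus aspect ratio to height, compared against the short edge of A4) looks roughly like this - a quick Python sketch:

    import math

    # Height of a screen from its diagonal and aspect ratio, vs. the short edge of A4.
    def screen_height_inches(diagonal_in, w=16, h=9):
        return diagonal_in * h / math.hypot(w, h)

    a4_short_edge_in = 210 / 25.4                       # 210mm = ~8.27"

    print(round(screen_height_inches(12), 1))           # 5.9  - 12" 16:9 panel
    print(round(screen_height_inches(9.7, 4, 3), 1))    # 5.8  - 9.7" 4:3 tablet
    print(round(screen_height_inches(17), 1))           # 8.3  - finally A4-width tall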
"Electron, what you post is nonsense. A 160 bit hash _does_ in practice produce a unique result for any given input (unless you spend 6,600 years of CPU time to search for two given inputs with the same result)."
160 bits makes for a big number. A really big number, with lots and lots of zeros after it. And any halfway decent hash algorithm worth its salt [*] will be designed such that even a tiny change in the input will produce a significantly different output.
However, 160 bits is not infinity. It's not even a googol. And if someone's deliberately trying to force collisions, you'd be foolish indeed to assume that things are safe.
Then too, assuming they're using the "1 GFLOP machine working for a year" definition of CPU-years [**], that figure of 6,600 years isn't as impressive as it sounds.
An Intel i7 can run at over 350 GFLOPS in double-precision mode, while a modern GPU (e.g. Radeon RX 480) can theoretically churn out up to 5 TFLOPS in single-precision mode; the Tesla K80 used by Amazon for their cloud-computing back-end can manage 8.74 TFLOPS in single precision.
Then too, there's always the possibility of using distributed computing or specialised hardware - this type of problem is inherently parallelisable, and Bitcoin mining has shown how effective ASICs can be for this kind of number crunching. Also, since the people most likely to want to force a collision are nation-states or hackers, they could well have access to a supercomputer or even botnets - I wouldn't be surprised to see people offering collision-finding as a service as more traditional profit streams dry up (albeit with botnets increasingly built from low-power IoT devices, there may not be much number-crunching grunt available there).
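To put some very rough numbers on it (a quick Python sketch; the hash rates are plucked-from-the-air assumptions, and the ~2^63 figure for the published SHA-1 collision is a ballpark, not a benchmark):

    # Back-of-the-envelope collision costs for a 160-bit hash.
    SECONDS_PER_YEAR = 3600 * 24 * 365

    generic  = 2 ** 80   # brute-force birthday bound for 160 bits
    shortcut = 2 ** 63   # rough order of the published SHA-1 collision attack

    for label, work in (("generic birthday attack", generic),
                        ("cryptanalytic shortcut", shortcut)):
        for rate in (1e9, 1e12):   # single-GPU-ish vs. large-cluster hash rates
            years = work / rate / SECONDS_PER_YEAR
            print(f"{label} at {rate:.0e} hashes/s: ~{years:,.1f} years")

In other words, once someone finds a shortcut in the algorithm and points a big cluster at it, you're talking months rather than millennia.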
So, yeah. 160 bits is good. But these days, it arguably ain't good enough.
Instead, the question being asked in court is: did Google abuse its search-engine dominance to starve out Streetmap, rather than just competing on merit?
It has to be said, the "Objective justification" argument presented by Google seems somewhat strange ("In my opinion, I think I'm best for the job"?) - in fact, I'm mildly surprised there haven't been any comparisons to the browser-integration debacle which led to Microsoft getting a slapped wrist from the EU.
Anyhow, coming back to the technology, and Streetmap in particular...
I have to agree that I remember mapping technology being clunky back in the day - but I can't say exactly when that day was, or which suppliers earned my wrath - or indeed, how much of it was due to browser technology and connectivity speeds being much more primitive.
Because, y'know: it's been a decade or more. And beer.
Equally, I can't really comment about how good Streetmap was versus Google Maps back in 2007 - as far as I know, unless someone's maintained a video archive detailing their functionality at the time, the only way to accurately compare their relative merits would be to borrow a Tardis and jump back a decade.
Finally, I'd note that while £300,000 sounds like a large sum - I'd love to have that landing in my bank account every month - for an IT company, it's not actually a huge amount. Once taxes and overheads are accounted for, it's only really enough to fund maybe half a dozen staff.
In fact, I suspect Streetmap never really stood a chance against larger companies like Google, who could throw far more resources at their implementation, as well as integrating it with their other offerings and technology, such as natural language processing.
Perhaps if they'd gotten some investors behind them early enough, or if they'd been able to build up some sort of patent portfolio, Streetmap would have done better - other companies such as TomTom and Garmin have found themselves in similar situations and are having to evolve. But I can't help thinking that Streetmap simply weren't in a position to scale up at the rate needed to compete.
"Best Practice? Nothing to do with best accounting practice regards Tesco."
I'm pretty sure that unless you're working for the Mafia, best accounting practice never involves deliberate fraud. The point I was making was that BT makes it very clear at all levels that such activities are unacceptable and will lead to dismissal and possibly prosecution.
I'll also (un)happily note that upper management can often have a different view on "unacceptable" when compared to the lower tiers of management - and often seem to get away with far more at a much lower cost than other employees would be allowed to do - but as I said above, in BT at least, everyone has to provide explicit annual proof that they're aware of the rules and regulations which pertain to their role. I may even still have some of my old printed certificate-of-completions hanging around somewhere!
""I'm guessing it was more a case of inflated numbers and stuff being rapidly shuffled between divisions and/or bank accounts to make things look good and keep bonuses high."
In my book what you have just written 'is' fraud/corruption. You seem to see this a "light touch" manipulation?"
No, it's still fraud and corruption. The point I was trying to make was related to the original poster's "how you don't notice £530 million going missing": it wouldn't have been a single lump sum which could be easily spotted by running =SUM(A1:A20) over an Excel spreadsheet. Instead, it'll have been lots of things overstated, understated, assigned the wrong depreciation rate, calculated using an incorrect exchange rate, unreceipted, double-charged and a dozen other things that will only be understood by a professional accountant with full access to the books and a thorough understanding of the company's processes and the applicable national/international legislation.
Apologies if that wasn't entirely clear!
In the meantime, it looks like the knives have been sharpened and a few heads are starting to roll, starting with the head of BT's Continental Europe division (aka the ex-head of BT Italy). Though again, as per above, I'd guess his leaving statement will be phrased to handwave as much personal responsibility as possible away...
That's a wee bit unfair. This wasn't BT, this was a subsidiary *in another country, with a (presumably) very different and complex regulatory process* which appears to have been cooking the books. I'm guessing it was more a case of inflated numbers and stuff being rapidly shuffled between divisions and/or bank accounts to make things look good and keep bonuses high.
I've no doubt that people will be running around within BT doing damage limitation, but this situation is absolutely nothing to do with BT's practices in this country, nor does it have anything to do with the politics of Openreach and Ofcom.
Also, it's worth noting that BT's employees in the UK - including in subsidiary companies - have to undertake a mandatory set of annual training courses which make it clear that equivalence has to be maintained and things like bribery and corruption are completely unacceptable. Which may not stop people, but at least they can't claim that they haven't been explicitly warned.
You have to wonder if other large-scale companies follow the same best-practice. Such as Tesco, with their £326m accounting scandal...
[ObDisclaimer: I worked for BT, about a decade ago. I don't have any shares in them, though, which may not be a bad thing at present!]
Has anyone got a list of which router chipsets are the most reliable/fastest?
... in the same way that Ford invented the Model T, Sony invented the Walkman or Nintendo invented the Wii. They took existing technologies, iterated and integrated them, and presented them in the right way in the right place at the right time.
And that's been true of pretty much every invention since someone discovered how to knap flint.
As to how much of a part the state had to play: a lot of things - especially in the IT and medical field - have been spun out of military research, though by the same token, much of this is done by private companies funded by government sources.
Equally, a lot of technology has been acquired through trade, acquisition or outright theft. In WW2, the United Kingdom gave the USA a lot of technology via the Tizard mission (and later, jet-engine technology was also licenced), and both Russia and the USA "acquired" a lot of rocket technology by picking over the bones of Germany's industrial infrastructure. Then, Russia spent the next 40 years stealing whatever nuclear/military technology it could from the USA - though I'm sure some things would have trickled the other way as well!
Anyway, if you trace any modern technology back far enough, there will have been state intervention. That shouldn't detract in any way from the work done by companies and individuals who have produced something where the sum is greater than the parts...
But they pretty much set the mould by taking the existing technology and infrastructure of the iPod, adding a highly polished and well designed UI, and then integrating everything with network-agnostic functionality.
Back when the iPhone first launched, people were still thinking of smart phones as miniaturised computers - or at best, an upgraded PDA - and designed the UI and software accordingly. So, apps had scroll-bars which needed to be physically clicked and dragged with pixel-perfect precision, which was generally only feasible if you had a stylus or sharp fingernails. It was all clunky and clumsy, especially since many devices still used single-touch resistive touch-screens and often sacrificed screen size in favour of an awkwardly small physical keyboard.
Conversely, Apple ditched the physical keyboard and built the UI from the ground up to use capacitive multi-touch, with little bits of auto-assist technology built in everywhere. And to quote the old Apple Marketing slogan, It Just Worked - as long as you were happy with the functionality Apple was willing to give you.
Then too, Apple had some other advantages: the iPhone used the same connector as their iPod line. This meant that there was already a reasonably large third-party ecosystem out there (e.g. powered speakers, recharging docks) and the cost of buying replacement cables, etc. was low. Also, you could share cables/battery packs/etc between your devices and you could charge via USB. And if you did want to use your iPhone for music, it had a 3.5mm headphone jack, unlike most other mobile phones at the time, which would at best have a 2.5mm jack for a mono earpiece.
(And yes, there's an irony there, given that Apple has now ditched the 3.5mm jack. And to be fair, manufacturers like Nokia and Sony-Ericsson had fairly standardised power connectors, but these didn't transmit audio and they only sold cabled wall-warts, so if you did want to recharge via USB, you had to track down third-party cables, sometimes of highly dubious quality)
Finally, for all that iTunes is looking very long in the tooth these days, at the time it was leagues ahead of the garbage supplied by other major manufacturers (Sony, Nokia, etc), which was often unstable/buggy or hamstrung by politics. Sony in particular were bad for this, presumably because the media division ranked higher than the hardware division; the MiniDisc in particular was one technology which could have made a much bigger impact if they hadn't locked it down so much to try and prevent music copying.
The use of iTunes also had a further impact, in that it provided a standardised and relatively simple way to push software updates to an iPhone, improving performance, stability and features. This was something other manufacturers simply couldn't begin to do, thanks in no small part to the fact that there were often network-specific elements embedded in the OS.
And iTunes had another, unexpected benefit, in that it offered a way for people to easily download - and pay for - new applications for their phone. Everything I've seen/read/remembered suggests that Apple initially failed to realise the significance of this, despite the fact that even basic games like Nokia's Snake had become a part of popular culture. Still, in time, iTunes apps actually became a major driver of iPhone sales, thanks to effectively-exclusive titles such as Angry Birds, Fruit Ninja and Doodle Jump.
Mind you, for all that I admire and recognise the impact of the iPhone, I've never actually owned one: they've always been too expensive, especially if you wanted extra storage, and by the time I could justify buying one, Android phones were giving better bang for the buck, as well as offering far more flexibility - varied screen sizes, expandable storage, replaceable batteries, widgets, etc.
I seem to recall people doing similar with a ZX Spectrum[*] at the Manchester Play expo a few years ago - I think it was either for Twitter or IRC. When all's said and done, you're just using Ye Olde Machine as a bare-bones terminal.
[*] Admittedly, as this was a Sinclair machine, it was probably done with a lot of bodged parts rescued from landfill and was at risk of crashing if there was too much wobbling. We'll have none of that freshly organic artisanal rubbish here!
To be fair, I was thinking more historically, but even more recent wars have had something of an impact - according to the ever-reliable Wikipedia, up to 80 million people died "[including] 19 to 25 million war-related famine deaths". That was 3-4% of the entire world population at the time.
Then too, it was significantly worse at a country level - some (e.g. Poland) suffered 15%+ casualties.
Any which way, war sucks.
As badly written articles go, this one is... quite badly written. Where to start?
First, the entire article is based on the strawman that a basic income policy is only intended to address a shrinking job market. However, there's far more to it than that: not only does it reduce inequality and poverty (pleasing the left) and reduce government bureaucracy overheads (pleasing the right), but it also leads to more entrepreneurial activity (pleasing both). After all, if you have a guaranteed economic safety net, you're free to experiment and take risks that would be unthinkable otherwise. And this isn't just a theory: this effect has been seen in many of the pilot schemes carried out to date (https://en.wikipedia.org/wiki/Basic_income_pilots) - in Madhya Pradesh, "The study also found an increase in economic activity as well as an increase in savings, an improvement in housing and sanitation, improved nutrition, less food poverty, improved health and schooling, greater inclusion of the disabled in society and a lack of frivolous spending".
Dismissing the concept as being just "charity" is therefore both foolish and misleading - as is the claim that it would "not be progressive or emancipatory".
Then, there's the shoe-factory example. There's an underlying assumption here that there's an infinite market for shoes - i.e. if you make 200 pairs instead of 100, you can sell all 200 for the same price as the original 100. In practice, the market will become saturated sooner or later, and then you'll have to either drop your prices or reduce your output. Either way, that shop-floor worker will lose out, as they'll either get fired, work fewer hours or get a reduced wage.
"There's no correlation between how burdensome and how well-compensated a job is". "Burdensome" generally isn't factored into compensation calculations because it's irrelevant (and often subjective, to boot). A manual job may require some degree of physical fitness but often requires little in the way of training or experience, so there's a very large number of people who can do it and the laws of supply and demand kick in again.
"The principle of production increase over leisure increase applies independently of the type of job in question". It does up to a point - the point where supply exceeds demand. At that point, you either scale back, start making a loss or end up with a large chunk of unsold/unsellable inventory - which essentially also means taking a loss.
Then, there's the claim that there will be new jobs to replace the old jobs. However, the current industrial revolution is different to previous ones in at least one important way: it's happening a lot quicker, and it's affecting many more economic areas. After all, a lot of it is being driven by software, and new apps and updates can appear virtually instantaneously across the world. Take Google Maps for an example: it's eliminated the need to keep a physical map in the car, and it's getting increasingly better at identifying and routing around traffic jams. So there goes the paper-map industry *and* the traffic-report DJs. Along with everything else that can now be handled by a mobile phone - checking your bank balance, taking photos, booking hotels, ordering food, checking mail, etc. There goes the bank-teller, the camera-manufacturer, the people on the phone and even the computer manufacturers...
Admittedly, people are using technology to create new jobs for themselves - t-shirts, 3D printed cosplay accessories, self-published media, etc. Sadly, the people I know who do this are generally making little or no money. Because with technology being so cheap and easy to use, anyone can do what they're doing and the rules of supply and demand have come into effect once more. However, having a basic income would give them more freedom to experiment, innovate and differentiate themselves, and therefore increase the revenue they earn.
It's actually worth looking at some of the classical civilisations to see how they handled over-production and over-population - Egypt, India, China, etc. Generally, what you ended up with was a heavily stratified society with very limited movement between layers and increasingly complex social models and policies - such as the imperial examinations in China, where you had to study philosophy, poetry and even horseriding and archery. They also tended to have either low productivity or some form of resource-sink, such as the Great Wall of China or the Egyptian funeral industry. And in the long run, they also tended to be dominated by more efficient and less stratified civilisations - the Romans, the Mongols, the British empire, etc.
Of course, there's always another approach to dealing with over-population: going to war - not only does it reduce your population (and that of whoever you target), but it distracts the general population and you also get to spend resources on equipping and training your troops. Win-win, except for the people at the sharp end of the axe.
So personally, if a Basic Income offers even the slightest possibility of avoiding a stratified society or war, I think it's something we should be spending a lot more time looking into!
My last phone was a G4, but when it came to upgrade time, I looked at the G5 and jumped over to a Samsung S7 Edge. Not that there was anything specifically wrong with the G5, but the add-on technology seemed pointless and the S7's specs generally had the edge.
I passed the G4 on to a friend, and then things got a bit complicated, as it stopped working a month or two later due to the "reboot loop of doom", which turns out to be a well-known issue caused by component failure, to the point where LG has actually agreed to fix all affected phones regardless of warranty.
So. A bit of a pain, but at least we could get it fixed for free - I volunteered to help with this, as my friend's not technical. However...
I sent an email to their website, asking what was needed to submit a repair request. After two weeks, I got back an automated email apologising for the delay and asking me to resubmit the email if I still needed something...
In the meantime, I'd raised an RMA request, only for that to sit untouched for far longer than the 48 hours claimed on the website. I eventually rang them, only to get through to a human on an overflow line, who advised me that there wasn't anyone to take my call(!). The day after, I finally managed to get through to someone who could deal with the issue, and he finally got the process moving. He advised me that we'd have to post the phone without any additional items - i.e. no SIM card, memory card - even the back-cover and the battery should be removed.
Fair enough. Then, I received an email for one of those Inpost automated drop-boxes, printed off the return label, packaged up the phone by itself and popped it into the drop-box after it had scanned the appropriate QR code.
Two days later, I got another email from Inpost, and went back to discover there was something in the drop-box: some official LG packaging and a note telling us to include the battery and back-cover!
Thankfully, LG then confirmed that they'd received the phone anyway, and after about two weeks, it finally came back and my friend is happily using it once more.
Overall, it's put me off using LG for anything else in the future...
For prototyping, low-volume or custom items, 3D printing is great, whether you're printing out a replacement part on a US Navy ship in the Atlantic, making a high-precision part for a jet engine or printing out some props for your latest cosplay outfit.
In fact, prototyping is probably where 3D printing has made the greatest inroads and will continue to do so.
For mass-production, it's not so great, as it's a slow and expensive process. I was at a maker's fair this weekend and someone was using a 3D printer to make little rocketship models; each one took 2-3 hours to complete.
It'll be interesting to see how this improves as the technology matures - looking at the website for the HP Multi Jet Fusion, it's actually only claiming a 10x speedup against comparable 3D printers (i.e. ones with a six-figure price tag), which is still nowhere near fast enough for mass production.
Then too, if you have used 3D printing to produce a proof-of-concept and decide to use a more traditional method for mass-production, you'll have to redesign your widget from scratch to account for differences in material strengths and stress points.
Beyond this, some of my friends genuinely think that there's going to be a revolution of sorts, where every home will end up with a 3D printer sat in the corner; if you want a new item or something breaks, you'll just download the 3D model for it and set it printing.
I just don't see this happening. Partly because non-technical people have enough issues with standard printers - I get regularly summoned to help relatives change ink cartridges or install drivers.
Partly because I suspect there'll be issues with getting hold of the actual 3D models - as with cultural media (e.g. music, books, movies, etc), the people who make them will generally want to be paid for them, which means that there's likely to be a plethora of copy-protection mechanisms and formats.
And partly for the same issue/reason as per above: you can't always replace a mass-produced item with a 3D printed model. 3D printed items can be stronger, but they can also be weaker, as the guys who scanned and printed out gun components found out (http://arstechnica.com/tech-policy/2013/03/download-this-gun-3d-printed-semi-automatic-fires-over-600-rounds/).
There's also the question of other physical properties (e.g. heat resistance, expansion/contraction when temperature changes occur) and tolerances - the Multi Jet Fusion claims to have an accuracy of 0.2mm, but only after sandblasting, whereas injection moulding and CNC routers are usually accurate to around 0.127mm.
Performance-intensive stuff: most of this comes down to the GPU these days. The Pi itself is a key example of this; the fairly underpowered ARM chip (at least in the original iteration) relied heavily on the Broadcom GPU.
A fairly quick glance online shows the PS4 GPU to be roughly equivalent to a Radeon 7850 (http://wccftech.com/playstation-4-vs-xbox-one-vs-pc-ultimate-gpu-benchmark/). These look to be available for around 75 quid online, and come with 2GB of dedicated RAM.
Admittedly, there's something of an apples/oranges comparison here, since I'm looking at second-hand prices. Then too, the PS4's custom-tuned architecture may well have some speed advantages - though conversely, GPU performance under linux is still generally behind that of Windows, and that's even assuming a hack like this is able to get access to all the hardware, and that drivers are available to take advantage of it.
Still, for around £150, you can get a quad-core machine with 8GB of RAM, 2GB of dedicated GPU RAM and a GPU equivalent to the PS4. And generally, that'll include a Windows 7 licence which can be upgraded to Windows 10 or junked and replaced with Linux.
And then you can spend the rest in the pub ;)
But these days, it does seem a bit redundant.
Getting Linux running on the PS3 was interesting at the time, as it was pretty powerful for the price in some number-crunching scenarios, thanks to the Cell architecture. But Moore's law had already marched on a fair amount by the time Sony withdrew support for Linux, thanks in no small part to the rise of the GPU as a device for massively parallel processing.
These days, "consumer" hardware is very much a commodity. Android-based USB-powered thumbsticks can be picked up for less than 15 quid - or, if you want to build something for scientific purposes, for the same price as a PS4, you could pick up ten Raspberry Pis and slap them together into a cluster.
Or you could nip onto eBay and pick up an OEM small-form-factor PC; at a glance, there's plenty of multi-core, 3GHz machines with 8GB of RAM available for less than a third of the price of a PS4 [*].
And with all of the above, you don't have to worry about the functionality vanishing if/when Sony patches the exploit.
It's still an interesting experiment, but it's definitely of limited use in the real world!
[*] This is exactly what I did a while ago; said box fits comfortably under the TV and does a good job of running Windows 10 with Kodi, Steam, iTunes and a few other bits and pieces. Plus, it's all controllable from my phone - including the TV itself!
These people were actively thinking about the future, rather than just hammering random keys. Though admittedly, it can sometimes be hard to tell the difference ;)
There's plenty of other interesting nuggets out there, too.
EE Doc Smith produced some spectacular space-opera cheese; much of this was the cliche "hero saving heroine from Certain Doom with the power of Science", but his Lensman series included some interesting concepts, and his exploration of how to handle complex space battles was cited as a key inspiration for the US military's development of Command Centre capabilities in World War 2.
Robert Heinlein produced some equally interesting stuff - the military concepts and tactics in Starship Troopers are well thought out (and the way these were ignored by the film is a major reason why I despise it) - and along the way, he also invented things like waldos (named after his story) and the water bed; his description of the latter was actually used as an example of prior art when someone tried to patent the concept!
Keith Laumer is much less well known, but produced some interesting concepts, especially in his Retief series, where a diplomat wanders the cosmos, cleaning up after his incompetent bureaucratic superiors. Admittedly, it's hard at this distance to determine how much was original and how much was drawn from other sources, but he dabbled with concepts such as virtual reality, remote-controlled robotic bodies and cloning. It's possible at least some of this was driven by the fact that he suffered a stroke which restricted his mobility.
There's many more out there - for instance, the British government ignored Arthur C Clarke's ideas about geo-stationary satellites.
Sadly, one area where the Golden Age of sci-fi seemed quite weak was around computing science (though again, EE Doc Smith did come up with the concept of "robot controlled" spaceships as the first line in massed assaults). I suspect this was down to editors/publishers not being comfortable with the concept (and/or assuming the reader wouldn't be interested); Science was there to be controlled, not self-governing!
That no-one's mentioned the Atomic Toaster from MDK 2 yet!
I spend a lot of time trying to fix things with a codebase which dates back over 15 years and has been hacked on by dozens (if not hundreds) of people with highly varying levels of knowledge and experience.
The bit of code I'm looking at *today* is a prime example: it's meant to deal with account cancellations. How does it do this, you ask? Well, it runs a query to pull back every account with a cancellation date set *regardless of whether the date is in the future or not*, and then performs a pass in the code to filter this down to the customers who we're actually interested in. Because everyone knows databases are bad at applying date and primary-key constraints to queries.
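The fix is exactly as dull as you'd expect: push the filter into the query and let the database earn its keep. A hypothetical sketch (table and column names invented, not our actual schema):

    import sqlite3

    conn = sqlite3.connect("billing.db")   # invented database, purely illustrative

    # Only fetch accounts whose cancellation has actually happened,
    # rather than dragging back every row and filtering in application code.
    cancelled = conn.execute(
        """
        SELECT account_id, cancellation_date
          FROM accounts
         WHERE cancellation_date IS NOT NULL
           AND cancellation_date <= DATE('now')
        """
    ).fetchall()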
Then there's the code which used a switch statement to round a timestamp to the nearest 15 minutes. Y'know, instead of using the modulus operator.
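For reference, the whole thing collapses into a couple of lines of arithmetic (a trivial sketch, timestamps in seconds):

    # Round a Unix timestamp to the nearest 15 minutes with plain arithmetic.
    def round_to_quarter_hour(ts: int) -> int:
        step = 15 * 60                              # 900 seconds
        return ((ts + step // 2) // step) * step    # drop the "+ step // 2" to round down

    print(round_to_quarter_hour(1_000_000_000))     # 999999900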
Or the code which used the "last modified" timestamp on a file to determine the next polling period, rather than using the "YYYYMMDD-HHIISS" metadata embedded in the filename - and the two could differ significantly as the process could take over an hour to run. Though to be (un)fair, this same code also mandated a two-hour overlap between polling periods, because who doesn't love reprocessing data?
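Pulling the window out of the filename isn't exactly rocket science either - a sketch, assuming the filename really does carry the YYYYMMDD-HHIISS stamp (the example filename below is invented):

    import re
    from datetime import datetime

    def window_start(filename: str) -> datetime:
        """Derive the polling window from the timestamp embedded in the filename."""
        stamp = re.search(r"\d{8}-\d{6}", filename).group(0)
        return datetime.strptime(stamp, "%Y%m%d-%H%M%S")

    print(window_start("export-20170712-143000.csv"))   # 2017-07-12 14:30:00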
Or the code which compared an array to itself and surprisingly always got a match!
And the list goes on...
I'll be the first to admit that I've written some bad code in the past, and newer code in the system is (generally) of a higher quality. But even so!
Fair point - I forgot about the RAM upgrade on the Pi2b - I'm still running RaspBMC on a 512MB B+, as it Just Works :)
"The culture of computing for several decades has been C and Unix or Unix-like OSes"
Off-hand, I can think of a lot of operating systems which haven't fallen into these categories. The Japanese Tron OS, BeOS, RiscOS, Amiga OS, QNX, Warp, VMS, MS-DOS, Palm OS, etc. Some may have been written in C and some may have been a bit *nixy, but not at the same time.
They just haven't caught on in the same way as *nix systems. To me, a big part of the reason for this is that Unix came from a mainframe/multi-user/batch-processing background, and therefore had a head start when it came to modern "networked server" paradigms (e.g. LAMP), where a given machine may be running dozens if not hundreds of tasks in parallel. And, y'know, the whole "free as in speech and occasionally beer" thing for OSs such as Linux and BSD; combined with the dropping cost of hardware, this led to a huge takeup of *nix systems by amateur enthusiasts, which then fed back into the workplace.
Other systems - including Oberon - generally came from a consumer or real-time/single-user perspective, and weren't able to adapt. Windows is a notable exception, though it's telling that Microsoft accomplished this by essentially ditching their old codebase and switching over to their multi-process/multi-user New Technology system - which itself was built by engineers from DEC, who had previously worked on VMS, a server-orientated competitor to *nix...
"The IT industry assumes that operating systems have to be written in C to work -- wrong -- and must by nature be big and complex -- wrong."
I don't think anyone is claiming that an OS absolutely has to be written in C [barring the odd flareup of flamewars on places like Slashdot]; it's just that the most popular operating systems have been written in it.
As to whether or not an OS should be big and complex: that's a full-blown topic all by itself. Modern hardware is so much more complex than hardware from even just a decade ago, and we expect it to do far more: more data, more threads, more peripherals, more displays, higher throughputs, more parallel processes, virtualised hardware, etc - and we expect all this to happen flawlessly and reliably on low-cost, commodity hardware. Handling everything that can go wrong - from dropped network packets to processor stalls - is complex and needs lots of defensive code.
It's also worth noting that there have been many efforts to go down the micro-kernel route - QNX and GNU Hurd being two notable examples, with the latter demonstrating how "theoretically superior" concepts don't always come out as expected in the real world.
"But it should be something simple, clean, modern, written in a single language from the bottom of the OS stack to the top -- and that language should not be C or any relative or derivative of C, because C is old, outmoded and there are better tools: easier, safer, more powerful, more capable."
I'd love to hear suggestions on what should replace it. It sounds like you're rejecting things like Java and C# (and hence, by extension, things like the Android runtime).
Other than these, the last real attempt to do this was BeOS, and this failed. Partly due to allegedly dodgy behaviour from a certain industry giant, partly because they targeted the consumer market and partly because they couldn't get a critical mass of applications and developers.
"We should start over, using the lessons we have learned. We should give kids something small, fast, simple, clean, efficient. Not piles of kludge layered on top of a late-1960s hack."
Perhaps the biggest lesson to learn is that reinventing the wheel is expensive, time consuming and generally pointless. Most if not all of the technical lessons we have learned are already encapsulated in the current popular operating systems - they've survived and grown because they've evolved and rearchitected themselves along the way. Both Windows NT and Linux have moved towards a "hybrid" kernel design - not quite microkernel, but not entirely monolithic. They handle a wide range of physical hardware from a vast range of manufacturers - CPUs, network/audio/video/etc. They handle as many real-world issues (packet drops, security attacks, parity errors, etc) as they can. There are literally thousands of man-hours which have been ploughed into making them as robust as possible.
Dismissing all of that as "hacks" is simply foolish. I'm reminded of the article by Joel Spolsky, written back when Mozilla decided to reinvent the wheel and reimplement Netscape Navigator from scratch, and in doing so essentially conceded the browser wars to Microsoft. http://www.joelonsoftware.com/articles/fog0000000069.html
"No, we should not be teaching children with "real world" tools. That is for job training. Education is not job training, and vice versa. You don't teach schoolkids woodwork with chainsaws and 100m tall trees"
Oddly, to my mind, that's exactly what you've proposed. In fact, you're essentially expecting them to first assemble the chainsaw before firing it up. And therein lies the thing which this article seems to have misunderstood; it's about fifteen years out of date. The computer as a singular device has long since stopped being the primary thing people need to learn about; these days, it's all about what you can plug into it (or what you can plug it into), whether that's a camera, a network, a mechanical device, a sensor or a coffee machine. To do this, you need a development environment and tools (e.g. an IDE and support libraries), and that's precisely what things like the Pi - and the Linux ecosystem - offer.
So no, we shouldn't be pointing schoolkids at a tree and passing them the parts to a chainsaw. We should be giving them some planks of wood, a saw, some nails and a hammer and telling them to build a birdhouse based upon an existing template. Said template may have been sketched out in the sixties and look a bit crap, but it's tried and tested and the children are free to innovate and reinterpret it - maybe they can use a 3D printer to give it a tiled-roof look, or a CNC milling device to etch the face of their mum on the side...
"The Pi's strength is its cheapness and the simplicity of its hardware, but at heart, software-wise, it's a PC... <rant about ARM vs x86>"
This is an odd complaint. At heart, the Pi is a mobile-phone chipset married to a low-end ARM chip, and it will run whatever OS is provided. It only takes a few seconds of looking at the official website (https://www.raspberrypi.org/downloads/) to see that there's a number of "officially approved" OS builds available for it, ranging from various flavours of Linux to Windows 10 /and/ RISC OS. And it doesn't take more than a few seconds to find ports of FreeBSD, Android and even more obscure OSs such as Haiku.
It's also worth noting that the Pi isn't bundled with an OS by default, which means that people are actually choosing to run Linux on it - as indeed, are many other "non-PC" devices, especially in the IoT landscape. After all, it's free and there's lots of existing dev tools and support libraries.
"There were some missed opportunities in creating the Raspberry Pi. The Foundation could have brought harmony and a de facto standard for firmware on ARM hardware"
The Pi was never intended to be a high-volume device. Instead, it was intended to be a relatively low-volume educational device, and it wasn't clear until after it had launched how popular it would become. Setting industry standards was never part of the Foundation's remit.
Also, the Pi had only sold 5 million units as of February this year. Even if we assume that volumes have since managed to double to 10 million, that's a drop in the bucket compared to the "billions" of other ARM-based devices which the article itself notes have been sold in the same timeframe. So the Pi is hardly in a commanding market position!
Finally: as the article itself comments, the Pi deliberately sidestepped the firmware issue. What it doesn't mention is that this was for several pragmatic reasons - the impact on manufacturing costs being the main one. Because, once again, it was intended to be a low-cost, low-volume educational device.
"Failing that, the Foundation could have bundled RISC OS with it"
It's available on the website for free, and there was a fair amount of excitement/publicity when the Pi first launched about the fact that RISC OS was available. Which suggests that, as fun as tinkering with obscure OSs can be, people actually wanted to use an OS which has lots of existing tools and libraries available...
"Pi project founder Eben Upton fondly recalls his first machines, a BBC Micro and an Amiga 600. A kid could fathom those; I did, with my ZX Spectrum."
Ah, the humble Speccy - the grey +2 was my introduction to the wonderful world of computing. And in truth, I think it's a lot easier to learn how to use a computer these days. The 8-bit machines did offer a BASIC prompt on startup, but there was generally little or no support structure for people other than the official manual, whatever the local library had in stock and the odd magazine type-in (which quickly died off as the commercial world moved towards the use of machine code). These days, you can use the internet to search for documentation/prior examples, or post queries to somewhere like Stack Overflow.
I'd also argue that it became significantly more difficult to learn how to code when the 16-bit era landed. You no longer had BASIC bundled with the machine, and commercial C/Pascal compilers were relatively rare, underpowered and usually badly documented. So you had to either learn assembly or pick up a third-party program such as AMOS.
Then too, if your code crashed or went into an infinite loop back in the 8/16-bit days, you generally crashed the entire computer and lost all your hard work in the process. And let's not go into the time-cost of backing up to tape or floppy disk - especially the latter, since most home coders used repurposed magazine cover disks with distinctly variable levels of quality control...
"Twenty-first century Unix is too big, too weird, too full of arcane 1960s strangeness."
"Conventional wisdom is that this complexity is an unavoidable consequence of modern software's age and maturity, and that it's worth it. You just ignore the stuff you don't need.".
The ZX Spectrum was basically a 16K ROM bolted to 16K or 48K of RAM, a 3.5MHz Z80 CPU, and a custom ULA which did some magic to reduce component counts (and led to the infamous colour-clash issues).
To take the current "high-end" Pi: the Pi 2 features 1GB of RAM, a multi-core processor, an OpenGL-capable GPU, an audio chip, a DMA controller (and an MMU), a mass media controller, a serial controller and a few other things for good measure - all essentially built into the one chip. The complexity of modern software goes hand in hand with the fact that the hardware is so much more capable. And since you can't chisel bits of silicon off the CPU, you pretty much have to ignore the stuff you don't need...
"Which brings me to the other cheap little educational computer you've never heard of: the OberonStation ... No, it won't do the 1,001 things Linux will, but it's no toy ... But what it shows is that complete, working, usable operating systems – not single-trick ponies – can be built, essentially single-handed, and can be understandable by ordinary mortals"
Hmm. An effectively proprietary OS, no USB ports, no soundcard, no network capabilities, PS/2 keyboard/mouse ports and VGA-only output. That sounds like a toy to me!
From a quick glance at the manual, Oberon was a vanity/sabbatical project built by two people in the eighties. I.e. it's pretty much idiosyncratic by definition, and was designed back before the concept of networked computers/IoT became mainstream. Also, the manual states that the system "can be understood by a reasonably determined person", which is definitely a step beyond being understandable by an "ordinary mortal"! So I really can't see any justification for using it these days. Especially since any OS-level skills/knowledge you pick up can't be reused on other devices.
So no, Oberon shouldn't grab the Pi's crown - if there's even a crown to grab. Which there probably isn't, since there are so many competitors, starting with the millions of Arduino boards already in circulation. The fact that the Pi Zero is so cheap may well let it grab some more of the "makers" market from Arduino and others of the same ilk, but there's still plenty of choice out there!
You don't use a sledgehammer to put a screw into a piece of wood. Unless, y'know, it's right next to you and the screwdriver is still in the toolbox...
It sounds like we're going back to the "write once, run anywhere" ethos that Java once enthused about, and it's likely to encounter the same issues that Java did: the levels of abstraction needed to get the same code running on devices A and B mean that you need more physical storage, more run-time memory, more processing power, and more electrickery to keep things ticking over. And for the IoT ecology, all of these - especially the electricity - are generally in short supply. It's the age old "cheap, powerful, efficient: pick two" dilemma, and in a commodity market, cheapness is generally mandatory.
Also, there's a question about what's going to be done with all the data spewing from these devices. Is someone really going to gather all the stats needed to monitor a fridge compressor - and even if they do, are they going to be able to put together a realtime monitoring mechanism *and* have some way of exposing it securely for customers to access? That sort of thing costs time and money, and unless there's some sort of high-value support contract in place, there's little or no reason to provide it. Especially since in a few years' time, there'll probably be a new model of the compressor and the entire thing will have to start again...
(To be fair, there is a case to be made for having a widget sat atop the freezer that monitors for pre-defined, short-term issues - a change in the compressor's RPM or power usage, a prolonged change in temperature, etc - and punts out an alert via email to the butcher and/or the company which provides the support contract for the freezer. But that's very different to the kind of real-time monitoring/tuning/statistical analysis that Microsoft are talking about, and requires far fewer resources to implement)
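For what it's worth, a minimal sketch of that kind of threshold-and-email widget might look something like the Python below. It's purely illustrative: the sensor reads, thresholds, addresses and mail relay are all invented placeholders rather than anything from a real product.

# Minimal sketch of the "widget sat atop the freezer" idea: poll a couple of
# sensor readings, compare them against fixed thresholds, and email an alert
# when something drifts out of range. The sensor functions, thresholds and
# SMTP details below are hypothetical placeholders.
import smtplib
import time
from email.message import EmailMessage

RPM_RANGE = (1200, 1800)   # assumed healthy compressor speed range
TEMP_LIMIT_C = -15.0       # freezer should stay below this
POLL_SECONDS = 60

def read_rpm() -> float:
    # Placeholder: read from the real tachometer / power meter here.
    return 1500.0

def read_temp_c() -> float:
    # Placeholder: read from the real temperature probe here.
    return -18.0

def send_alert(body: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Freezer alert"
    msg["From"] = "widget@example.com"
    msg["To"] = "butcher@example.com"
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:   # hypothetical mail relay
        smtp.send_message(msg)

def main() -> None:
    while True:
        rpm, temp = read_rpm(), read_temp_c()
        if not (RPM_RANGE[0] <= rpm <= RPM_RANGE[1]):
            send_alert(f"Compressor speed out of range: {rpm:.0f} rpm")
        if temp > TEMP_LIMIT_C:
            send_alert(f"Freezer temperature too high: {temp:.1f} C")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()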
1) Maybe - looking back now and reading the wikipedia summary, I guess it can be taken either way. But even if that is the case, it still feels unethical - the Doctor is basically refusing to take "no" for an answer and forcibly wiping people's memories until they agree with him!
2) True, but that doesn't address the issues which led to the splinter group becoming terrorists.
I'm starting to regret being so hard on RTD back in the day, as this felt fairly similar to some of the stuff he turned out during his reign. The Doctor bumbled around without actually achieving anything, there were some heavily telegraphed "plot twists", and lots of people died because the Doctor was faffing around. Then too, the entire ending hinged on a MacGuffin/Deus Ex Machina, and the story fizzled out with an implicitly contradictory message, a plot hole large enough to migrate the entire Zygon race through, and nothing done to address the consequences of the various events (e.g. lots of dead people) [*]...
On a brighter note, the dramatic speech actually was quite dramatic.
[*] SPOILER/RANT ALERTS
1) The cease-fire has failed /fifteen/ times, and given that Kate doesn't look to have aged drastically, this has happened within the space of no more than a couple of years. I.e. things keep breaking down to the point of a full-blown MAD scenario within 3-6 months. Surely that's a sign that the peace treaty is a complete failure?
2) As much fun as stealing the plotline from Eternal Sunshine of the Spotless Mind must have been, the memory-wipe only affected the people in the room. What about the millions of Zygons outside the room and the unknown number of humans who knew that the uprising had occurred? Is there to be no justice for people affected by the atrocities carried out by the splinter group? If nothing else, the Zygons are going to have to choose some new leaders...
3) Similarly, even if the Doctor did manage to magically erase the memories of everyone on the planet, what about all the people who died - all their friends, family, medical and legal records, etc. If the Doctor is prepared to go back and wipe out people's memories of their loved ones to artificially maintain a demonstrably unsustainable peace, he's a much bigger monster than anyone else could ever be!
4) Why did the Doctor keep clumsily asking if Osgood was human or Zygon? Of all the entities in the universe, he should be the one most aware of the power of an anonymous symbol (a question mark, say...). It would have made more sense for Clara or possibly even Kate to ask that question - Kate especially had good reason to demand an answer!
5) And since someone will no doubt spark up with a "you don't have to watch it" comment: I've actually enjoyed some of the episodes this season; it does feel like there's an effort being made to steer things towards a more interesting path. And with some fairly rare exceptions, there isn't exactly a huge amount of British sci-fi to pick from!
The soundtrack was included on a Your Sinclair covertape (http://www.ysrnry.co.uk/ys36.htm) and I have fond memories of listening to those tunes while playing various games on my humble Speccy.
It's also worth noting that Afterburner is part of a lineage at Sega which essentially started with Space Harrier (which also offered a deluxe seated edition[*]), ran through the Afterburner/G-Loc games in the arcade, moved onto the home consoles in the shape of the Panzer Dragoon series, and went out on a high note with Rez. The person who designed Rez (Tetsuya Mizuguchi) then went on to create Child of Eden, though I'd personally say Rez is the better of the two...
[*] I've got memories of a trip to Blackpool as a young'un, and I could swear that I saw my cousin playing Space Harrier while perched on some funky mechanised fighter-pilot style seat, but the only images I can find for the SH seated version show a fairly boring wooden all-in-one cabinet...
We first had to destroy it.
Instead, it's all about the network traffic - both the size of the data and the lookups/translations required to determine the route to said data. And while HTML/JS contribute to the size of the data, I'd be willing to bet that for 90% of the websites out there, the binary data (i.e. images) far outweighs the size of the code.
F'instance, on this very page... if I download it, there's about 1.35mb of data. 34kb of this is the page/content. There's another 180kb for jQuery and another 85kb of CSS.
There's then around 500kb of what looks to be advertising-related JS and a further 1280kb of data spread across some 120 images. And that all needs to be cached, decompressed and generally tinkered with to get the page rendered.
And that's not going to change, no matter what form the code wrapped around it takes.
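As a rough illustration of how that sort of breakdown can be measured, here's a quick Python sketch that fetches a page and tallies the bytes of its HTML, external scripts, stylesheets and images. It assumes the requests and beautifulsoup4 libraries are installed, won't see anything loaded dynamically by JavaScript (which is where most of the advertising weight comes from), and the URL is just an example - so treat the numbers as a lower bound compared to a browser's network tab.

# Rough page-weight tally: fetch a page, then sum the bytes of its HTML,
# external scripts, stylesheets and images. Dynamically-loaded resources
# (most ad JS, lazy-loaded images) won't be counted.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def tally(url: str) -> dict:
    page = requests.get(url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    totals = {"html": len(page.content), "js": 0, "css": 0, "img": 0}

    def size_of(link: str) -> int:
        # Resolve relative links against the page URL and fetch the resource.
        try:
            return len(requests.get(urljoin(url, link), timeout=10).content)
        except requests.RequestException:
            return 0

    for tag in soup.find_all("script", src=True):
        totals["js"] += size_of(tag["src"])
    for tag in soup.find_all("link", rel="stylesheet", href=True):
        totals["css"] += size_of(tag["href"])
    for tag in soup.find_all("img", src=True):
        totals["img"] += size_of(tag["src"])
    return totals

if __name__ == "__main__":
    for kind, size in tally("https://www.theregister.co.uk/").items():
        print(f"{kind}: {size / 1024:.0f}kb")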
"Phone apps masquerading as remote controls"
Eh? The remote app for my old WD TV Live worked pretty well (much better than the WD TV Live's SMB mounting, but that's a different story) even on my old Samsung S3.
Meanwhile, my LG G4 (and the G3 before it) works fairly well as a basic TV remote, and does a good job with the HTPC (as a KVM) and Kodi/XBMC - and potentially also the Xbox 360, though I've never actually tried using it for that!
I can also use it to VNC into my desktop machine upstairs, and it does a good job of acting as a remote control for the iTunes install sat on the same machine; Retune even lets me tell iTunes to stream from the desktop down to Kodi, so I can have good tunes and psychedelic visuals running whenever I'm downstairs.
Overall, both my original TV remote and the Logitech Harmony have been gathering dust for a wee bit now...
Having just come back from a 2,500 mile drive to Austria and back[*]... I found cruise control to be the best thing since sliced bread. However, the long roads do seem to encourage some bad driving practices on the continent; on the dual carriageways, people tend to overtake with their cruise control set to just a few kph faster than the speed you're driving at. So if you're coming up to a slower-moving vehicle, you either have to brake or rev your engine to nip out before the cruise-controller blocks you in...
Anyhow, back to HUDs, and it's the same as anything else (e.g. smart-phone interfaces): it'll take time to evolve something which offers relevant information in a non-obtrusive way. Simple shapes/icons, use of colour-coding, etc. The article's point about fighter-plane HUDs is a good one; not only do the military spend lots of money on trying to make the HUDs effective, but the pilots themselves are heavily trained to make best use of them. Something which can't be guaranteed when it comes to Joe Bloggs in his company BMW...
In fact, I suspect the main issue will be that car manufacturers will have to downplay the expectations of people who've seen the heavily contrived VR/HUD displays in things like Minority Report and Iron Man. Slapping something like those onto someone's windscreen is pretty much a guaranteed recipe for disaster...
[*] And the worst bit of this journey? It wasn't the French potholes or the German trucks. It was the M1 and M25, thanks in no small part to the huge swathes of 50mph semi-permanent roadwork zones and the enforced slowdowns for accidents/closed lanes/temporary roadworks - especially since at least two of the latter proved not to exist at all! And that brings up another point about HUDs and "smart" roadways: information needs to be both relevant and timely...
I'm not sure this is worth an upgrade!
About the only thing which stands out on the list is the camera - though it'll be interesting to see how well this performs, given that image/video processing is one of the few things out there which can greatly benefit from speedy multi-core CPUs...
obDisclaimer: I've known Matt for years, and attended a talk he gave in Sheffield about this very performance, which I recorded - https://www.youtube.com/watch?v=r1a3JYp-VFs
Said recording covers most of the points above, but to summarise:
1) It's not some sort of publicity stunt. It was just a bit of fun for his local museum, who were putting on a "Geek is good" season - and they approached Matt with a suggestion for hooking up some Speccies and BBC Micros together.
2) Memory limitations. The symphony is over an hour long, so can't be crammed into 48k of memory, especially in BASIC.
3) Hardware. Unsurprisingly, getting hold of lots of *working* Spectrums is pretty tricky these days. Matt had to search high and low to find enough kit for this - and even then, several failed to work on the day.
4) Raspberry Pi. Each model of Speccy has slightly different timings, so the Pi was used to keep them in sync; the code for the music ran locally on each Speccy, rather than using them as dumb terminals.
In the end, it was a bit of fun for a museum display. If anyone wants to go one better, then grab a 48k Spectrum (or emulated equivalent thereof) and get tinkering!