I thought all viruses had a noise associated with them. Isn't it normally "Aaaaarrrrrggghhhh bugger" coming from the user when they discover the damn thing's infected their PC/MAC/Android?
Ah, well, therein lies the problem that Google built into Android without really thinking about it.
Apple, RIM and Microsoft can push updates out to their customers with a high degree of independence from the network operators. This provides long-term assurance to customers that their handset is going to be looked after for the duration of their contract (mostly). Whereas with Android you're completely dependent on the handset manufacturers and the network operators, a much less certain proposition. Most people won't know how to root their handset.
As for a VM, with the ARM A15 cores that's not necessarily so hard to do. Don't know when they'll make it into handsets...
Yep, it'll be something like that, possibly they've done it as direct manipulation of some time string. I've not read their report.
Yet again some programmers somewhere have been shown to be a bunch of lazy ******s. Symantec had a similar problem with their antivirus software updater thinking that the year 2010 came before the year 2009... And are Apple devices capable yet of setting an alarm off properly at the appointed time? I suspect not.
I honestly don't know what goes on in such programmers' heads. If they cared to take even a casual glance at the reference manuals for things like the ANSI C library, Java class libraries, etc. they would find a wealth of functions that a bunch of careful people spent time and effort on so as to make it easy for other programmers to avoid this sort of mistake. Why don't they just ******g use those well-thought-out routines instead of thinking "I know, I'll do it all over again myself in my own code, how hard can it be, I'm sure a string will do"? It's unbelievable madness. Who supervises these idiots, reviews their code, designs their systems? Sure, the purpose of the routines available in the libraries may be a bit tricky to fully understand, but then time measurement systems (e.g. UTC plus the various local timezones) are not a trivial topic. But that's no excuse to ignore the complexity.
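The point about using well-tested library routines instead of home-made date strings can be sketched in a few lines (Python here purely for brevity; the same argument applies to the ANSI C time functions):

```python
from datetime import datetime

# The lazy approach: compare dates as home-made strings. It looks fine for
# "2009" vs "2010", but mixed-width formats fall apart at the year boundary.
a = "31-12-09"   # New Year's Eve 2009
b = "1-1-10"     # New Year's Day 2010
print(a < b)     # False - lexically, "3..." sorts after "1...", so the
                 # string comparison says 2010 comes before 2009

# The library approach: parse into real datetime objects and let the
# well-tested standard library do the comparison and the arithmetic.
d1 = datetime.strptime(a, "%d-%m-%y")
d2 = datetime.strptime(b, "%d-%m-%y")
print(d2 > d1)   # True
print(d2 - d1)   # 1 day, 0:00:00 - the year rollover is handled correctly
```

Exactly the sort of boundary case that bit Symantec's updater and the iPhone alarm.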
In The Beginning Apple chose to go with GSM then UMTS (rest of the world) ahead of CDMA2000 (USA). Clearly they'd decided to pursue the majority market first, for understandable reasons. But you'd have thought that the attraction of a few hundred million extra customers would be appealing, even for Apple.
It does seem to call into question the depth of Apple's engineering outfit. If Samsung and everyone else can spare a few engineers to shoehorn a TD-SCDMA baseband into their phones, why can't Apple? Are they lacking staff in their engineering department? Did they not consider the different basebands when they designed the kit? Do they now find themselves having to make big changes to the internals to accommodate the different chips (if that is in fact necessary...)?
Having said that, it's hard to question anything about Apple's strategy when they've got $90 billion in the bank.
Well, if you count a .vcf contact file in that category of Bluetooth action, then quite often. It's sort of the forgotten piece of functionality that Apple don't seem to care about.
In the days before Facebook, Myspace, etc. people would swap photos with Bluetooth. It's still quite effective - it incurs no data cost and works irrespective of local 3G coverage - but generally people have forgotten about that little nugget of functionality. With smartphones and now tablets becoming more and more capable, it ought to be something that happens more.
This is typical of the effect that Apple (and Android to a lesser extent) have had on the market. Apple implemented a smart-ish phone that was lacking in several key areas (crap battery life, the ability to make and hold on to a phone call, most of Bluetooth not working, no support for Java apps, poor security, expensive to buy and run, unreliable, fragile, prone to breaking down, etc. etc) and then managed to persuade customers in their millions that none of that was important. And then Apple (in an impressively cynical way) claim to be great innovators when they finally put right some of those omissions (which so far they've failed to do by most accounts).
For example, it is perfectly feasible these days to turn up with (and I pick RIM purely because I know for sure that you can do this) a Blackberry Playbook, give a Powerpoint presentation using it connected to a projector with the notes displayed on the tablet's own screen, using your Bluetooth-connected Blackberry phone as a back/forward remote control. Then if anyone wants a copy of the presentation you could just Bluetooth it straight to their phone or laptop, along with contact details. None of it needs a cloud, 3G coverage, prior knowledge of their email address, etc. So it works, and it's reliable. Unless they've got an iPhone, or possibly if they've got an Android. Reliability of such technology is absolutely key if you want to make that sale!
But because of the generally depressive effect that the likes of Apple have on what people's expectations are of technology, it is very difficult for companies that do actually make this stuff work to get their message across. In effect their advertising has to be along the lines of "Things can be better than an iPhone! Don't fall for the Cappuccino PR". Which isn't a very good message to dish out to people who've got expensive iPhones burning holes in their pockets and don't want to be told they're stupid for not having looked elsewhere.
@Destroy All Monsters: Yes there are! Once Ada runtimes emerged that actually used OS facilities like threads instead of re-creating those things for themselves, Ada got a *lot* better. From what I vaguely remember, Green Hills Ada on VxWorks was pretty decent indeed.
I can remember the problems that a bunch of colleagues had in the very early '90s with Ada (on VAX, I think). The application they'd written was too large for any of the Ada runtimes of the day to actually run. I never found out if they ever got it going...
The K machine is mighty pricey, and it would be interesting to see how that cost breaks down into CPU vs I/O development. The K machine has a very elaborate interconnect. This must surely take a lot of the credit for the machine's sustained performance being so close to the theoretical peak performance. The cost breakdown might illustrate where investment pays off best.
@synthmeister, I think you're right about RIM having osborned themselves, but I'm not sure they had much choice. If they'd kept quiet about their plans then it would have looked like they didn't care about the smartphone revolution. That would have raised questions about the long term and pushed buyers toward other platforms that were moving forward. That would probably have been worse.
As it is I think RIM have bravely chosen what will be an excellent technical solution (the Playbook really is quite good), but it will require a lot of hard work to get people to understand and want the benefits. They could have chosen to do something clunky (i.e. squeezing a desktop OS onto a mobile platform like Android and iOS) but that would have lessened the technical value of their offering.
@Chris 3; indeed, and the reason they want iPhones is because they've fallen for the Apple PR about "just working" and "secure". Now really they should be hard-nosed, unemotional decision makers who carefully weigh up decisions before making them. Questions like "are my company communications secure?" should float across their minds. Getting decisions like that wrong is potentially a company-killer these days. Pandering to the whims of staff isn't going to look so clever when your company's intellectual property becomes public because some employee jailbroke their iPhone or Android and got infected with a dodgy app.
@Henry Blackman; do some reading. As any reader of El Reg should know emails arriving on your corporate blackberry were encrypted when they left your company email server, thanks to the way that BES works. So even if BB's servers were compromised the emails themselves aren't available to a hacker/government.
And so far as availability is concerned, RIM did have one little outage, but iPhones have one every afternoon when their battery goes flat.
The SPEs mostly are Altivecs, with just enough extra to make them independent. Most of the maths instructions are exactly the same. That was the whole point of them. So long as you know what you're doing Altivec/SPEs are pretty good for image/signal processing, and I've had very good mileage out of them for ten+ years now.
As for physical size, the Power7 MCM is large. Sure, an individual core is smaller, but then that would not be a "Power7", would it. Based on previous form I doubt that IBM will be doing anything for the PS4. They're not really interested in the games or PC market, it's just not worth their while. Freescale might, but they've got their own product releases coming along without worrying about Sony too. Sony have rights to Cell, so they can go it alone. But it really wouldn't be worth their while doing something Power-ish that wasn't based on Cell because they'd lose all the existing software.
As for performance, I've seen Cell get very close to 250GFlops (though not in a PS3). You have to try quite hard to get anything with Intel written on the top near that figure. It's hard to program, but get it right on Cell and your sums are done very quickly indeed. GPUs have a lot of grunt, provided AMD or Nvidia have written you a library function...
Er, the Cell is very different to anything else, including the entire Power range. Sure, Cell borrowed bits and pieces from the Power ecosystem (a PowerPC core here, 8 Altivecs there), but they were glued together in a totally unique way.
It is almost inconceivable that Sony will change to Power7, at least not in the form it exists in when built in to an IBM mainframe. The physical size alone (we're talking something as big as your hand) would preclude that. Plus the Power7 architecture has some even weirder components; I very much doubt that a games designer will be able to find a use for a decimal maths co-processor.
But you are right to point out the dilemma that Sony are in. I see several pitfalls with going x64; even today you have to have a fairly mighty x64 before you've got as much floating point grunt as the Cell has. That won't come cheap, plus it's tricky to not appear as a fancily dressed PC that isn't running Windows.
They could rely on GPU for the floating point grunt that's needed, and ATI / Nvidia would have you believe that they're the ones for the job. But whilst GPUs undoubtedly have a lot of grunt, again it's tricky not to appear as just a PC.
Developing Cell (16 SPEs? Hooking up to a beefier GPU?) would be a brave step, but it would allow them to preserve the investments that have already been made in software whilst bringing about demonstrable improvements, and they would retain complete control. But if they do, I wish they would let Linux back on it. When the PS3 was launched there was much talk about its computing grunt. Including the GPU doing single precision floating point it was apparently topping out at about 2.1 TFLOPS, the Cell accounting for about 200 GFLOPS. Even now you have to try reasonably hard to beat that for the money.
3G phone networks; the doors on a lot of trains; aircraft navigation; the national electricity grid. I bet you've relied on one or two of those at some point in the past, and would have been seriously annoyed if they stopped working.
GPS has wormed its way into a lot of things that we count on everyday and take for granted. If GPS jamming became a major thing it would cause a lot of problems.
Those engineers who have built highly important systems that rely solely on GPS (I don't know if that's completely true for the list above) are lazy idiots. Either that or their management are cheapskates. The trouble is that GPS is just so damned convenient for many things. And to have to fall back on something else when GPS packs up is difficult. But it is necessary if you're building something that really matters like trains, planes, grids and comms.
The problem with consultants is making sure that there are enough of them around who actually know real stuff when it really matters, e.g. when war breaks out.
Consultants quite often used to be on the inside but ended up outside. If you rely on consultants that probably means that there is no internal programme growing new experts. It's not like many universities offer courses on how to build effective war machines, etc. The result is that when the current crop of consultants die off there are likely to be few competent replacements, and fewer still who recognise the deficiency. Not a good situation to be in if a major war happens...
I think you'll find that RF front ends care very much indeed about peak received power. Your, er, fail is to not read what I wrote. So I shall write it once more. That band, even if used for sat phones, makes it difficult to get a co-located built in GPS receiver to work at the same time. The FCC should have realised it, the GPS industry should have realised it, and maybe LS were naïve in assuming that the others already had.
As for your spurious references to mobile phone power management, if you do happen to be 35 km from a base station (traditionally the GSM max range) then your phone will be running at 2 watts and yes, it will go flat very quickly indeed. If anything your analysis illustrates that a sat phone in that band co-located with a GPS rx would have been in more trouble than if afflicted by LTE; you couldn't have got the satellites closer to allow the phones to put out less power. Even more reason for the FCC to spot that the band allocation was going to be problematic.
And on the topic of the FCC, if it isn't their job (as you hint) to know what band allocations are viable and best for all US users, what exactly are they for?
It would be interesting to know what plans for built in GPS LS had/have. Their whole proposition depends on GPS working in LS handsets. If they can show a working technical solution then that is the end of the debate. Alas I think that whether LS's filters are 'working' is not going to be assessed objectively by the press, commentards, lobbyists, politicians, CEOs, shareholders, accountants, bureaucrats or the courts, and I doubt that engineers will get a look in.
Firstly, your phone is capable of 2 watts. Secondly, a sat phone would have needed at least that, probably more. Thirdly, a sat phone might only have been 2 watts, but being only an inch or so from the GPS receiver in the sat phone it would have been far worse than an L^2 base station a few miles away. Or haven't you heard of the inverse square law? Or do you somehow imagine that a sat phone user would never have wanted a built-in GPS? Or that they wouldn't have wanted to do something like use Google Maps? Or do you imagine that a sat phone GPS jammed by the sat phone itself would somehow have been less annoying to every sat phone user than what LTE will cause?
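A rough back-of-envelope version of the inverse-square point. The 1.5 kW tower power is a made-up figure for illustration; the exact value doesn't matter because the distance ratio dominates:

```python
from math import pi

# Free-space inverse-square comparison, assuming isotropic antennas:
# a 2 W sat-phone transmitter about an inch from its own GPS antenna,
# versus a hypothetical 1.5 kW terrestrial transmitter two miles away.
INCH = 0.0254      # metres
MILE = 1609.34     # metres

def power_density(p_watts, d_metres):
    """Power density in W/m^2 at distance d, spread over a 4*pi*d^2 sphere."""
    return p_watts / (4 * pi * d_metres ** 2)

near = power_density(2, INCH)          # 2 W transmitter, 1 inch away
far = power_density(1500, 2 * MILE)    # 1.5 kW tower, 2 miles away

# The nearby 2 W source delivers roughly ten million times more power
# to the co-located GPS antenna than the distant kilowatt-class tower.
print(near / far)
```

Which is why a sat phone would have jammed its own built-in GPS far more effectively than any base station down the road.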
The problem (yes it is real enough) has *always* been there lurking in the frequency allocation, satellite or terrestrial, but it has taken L^2 to make everyone realise it. No one noticed before because sat phone was a commercial non starter. That's how badly the FCC have dealt with this.
I'm no happier about this than anyone else. But don't blame L^2, blame the FCC for not keeping their eye on the ball. I mean, hadn't the FCC heard of built in GPS?
" The bands adjacent to GPS are designated as space->earth only,"
No they weren't. They were designated for satellite mobile comms. That involves earth->space too, presumably in the same frequency band, or you're in for a very one sided phone call.
Given that, don't you think that the earth based transmitter that fulfilled the earth->space leg would have caused similar problems? Especially as that transmitter would have been co-located with a GPS rx, namely the one in the satellite mobile phone?
The problem with this whole debate is that no-one is thinking clearly about what the actual technical issues are, were, and always have been. In summary,
1) The GPS industry have been lazy in ignoring frequency allocations that were always going to cause them problems
2) The FCC didn't even begin to think what the technical issues would have been resulting from the sat phone band allocation
3) The FCC were negligent (as you hinted) when permitting the change use; it would have been a good time to have re-assessed the band allocation given widespread GPS usage
4) The FCC are being cowards in not telling the GPS industry "tough luck"
5) LightSquared have been naive in trusting the cowardly FCC to do their job properly. A little testing would have revealed the problem ages ago and avoided the whole thing.
As a result:
1) LightSquared's investors are going to lose a lot of money
2) The US taxpayer is going to lose a lot of money through under-exploitation of valuable spectrum space, and possibly as a result of being sued by LightSquared for negligence
3) A viable technical solution to the whole issue (the filters that LightSquared developed) is probably not going to see the light of day because the freetard GPS industry's lobbying looks to have paid off
Many people will crow if, as seems likely, LightSquared end up dead and buried. But really it is the taxpayer and consumer who are going to lose out the most. That's not good for anyone.
"buying patents they didn't develop and then using them against business rivals"
*If* the patent system worked properly it would not be possible to use patents in this way.
The idea behind patents is that someone invents something, gets the patent, and others can't copy it without a licence. The practical reality is that patents are awarded for the most trivial "inventions" these days with very little regard for what has actually been done before. This is leading to many companies having overlapping sets of weak but apparently enforceable patents, so war breaks out. The Venn diagram of companies and their patent holdings must look like a whole load of frothed-up bubble bath.
The US patent system is truly dreadful in this regard, but I'm not sure that anyone else's is very good either. The problems were built in at the start. Surely it doesn't take a super genius to spot that the prior art checking process was only going to grow exponentially. Then the US made life LOTS harder for itself by allowing software patents....
The only way to fix the system is to tighten up on what 'invention' actually means, specifically in relation to triviality, the invention 'date' and commercial realisation. That should then be retrospectively applied to all patents when a dispute is initiated by an offended company. I imagine that the majority of disputes would evaporate in a puff of smoke. A whole lot of lawyers will of course strongly lobby against such a move, so it's up to the politicians to think for themselves and see what harm is being done to their economies.
But I think you're right; all these companies are behaving in an entirely logical manner given the patent system that exists. I would like to think that some of them are thinking "why is this happening really?" and will become motivated to lobby for a change. At the moment it looks to me like all the leading companies will be run by patent law experts instead of people who actually know stuff and build things :-(
"I don't see how this is going to end well for MS"
Well, I think that's because you haven't considered the wider picture. In case you hadn't noticed there is a trend towards ARM (or at least, seeking lower power) across the whole IT industry now. It's no longer just mobile platforms where ARM matters.
Microsoft *have* to respond to that. They probably need to get Windows Server and Desktop on ARM too just to survive. What we're seeing here with WOA and tablets is clear evidence that MS are positioning the whole product line to be ready for the x86->ARM transition. Sure, what we've seen here is limited, won't run everything, but the constraints are now "artificial" (purely for the sake of battery life), not technical.
It is not so hard to imagine that they could roll out a desktop/server orientated version (where there is at least mains power to use) without too much difficulty. Remember that Linux is already there, and whilst MS haven't worried too much about the Penguin on desktops, Tux does do extremely well in server land. MS make money on servers too, and they want to carry on doing so I'm sure.
It seems clear that there are going to be many sources of hardware - TI, Qualcomm and NVidia are involved - so there would seem to be no grave prospect of hardware lock-in. A bit like the PC market. And that can only be good for consumers. There may be a good prospect of desktop ARM hardware sooner rather than later. There already is ARM server hardware at HP.
As for performance, I think that the days of ARM being too slow are already gone. And if you don't think they are, the quad core 2GHz 64bit DDR3 parts that are being talked up now should address your concerns. I mean, it's not that long ago that people were dreaming of such performance in their desktops! The smartphone revolution has shown that there is plenty of performance in ARM, and plenty more to come.
Satisfying a bunch of corporate users who want bling phones isn't necessarily good for business. RIM did have their little woopsie last year. But Apple are quite rubbish at software reliability, generally break at least one thing with every update and, as we've seen with iOS 5, are prone to making significant and unannounced API changes. Not exactly a good thing on which to bet business-critical apps and functions.
Of course, Apple have managed to persuade people that they don't need things like battery life, good antenna performance and the ability to make phone calls reliably when moving. That's good PR but not good for end users. Apple may also succeed in convincing people and their businesses that they don't need reliable software or online/cloud services either. And that may indeed be fine, right up until your business is killed as a result of a really big cock-up. Do you trust Apple not to have one of those?
Bluetooth could do it years ago, for free, using very little power, regardless of network coverage, was nigh on universal, and it goes straight to the phone of the bloke you're talking to. Plus it defines the format of the contact data (vCard, so far as I know), and it is presented on the Bluetooth link as being a contact. This allows the receiving phone to add it to the address book with little room for misinterpreting the data fields, merely prompting the user as to whether they want this to happen. Problem solved well over a decade ago.
*If* a handset manufacturer has put the 'Bluetooth' label on the box then all of the Bluetooth stack should work properly. But Androids and iPhones were (still are?) definitely a bit dodgy in the whole Bluetooth area generally (don't know about WP7), mostly I suspect because they were in too much of a hurry to do the job properly to a high quality. You couldn't even use a hands-free kit reliably, something that not even Steve Jobs would ever have been able to convince the world it doesn't need. But I digress. Solving manufacturer laziness by saying "Oh, just email it" is a backward step: an acknowledgement that things are a little bit worse than they used to be, and it makes things far more complicated than they need to be for no good reason.
But you are right, it is a personal choice. Personally I'd far rather not have to type in someone's email address or phone number just so that I can swap contact details with them when I could just zap it directly into their phone with only a couple of button presses. I mean, once you've typed in their email address you've pretty much done the whole thing anyway. By way of analogy, when you're giving someone a business card you don't want to be writing it out long hand in front of them do you? Bluetooth is instant, so there's no awkward email/text delay and there's so little room for error.
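For the curious, the payload a Bluetooth contact push carries is just a tiny vCard text file (a .vcf). A minimal sketch of what goes over the link; the contact details here are made up:

```python
# Build a minimal vCard 2.1 payload of the sort a Bluetooth Object Push
# transfer carries as a .vcf file. Because both ends agree the object is
# a contact in vCard format, the receiving phone can offer "add to
# address book" with no guesswork about which field is which.
def make_vcard(name, phone, email):
    return "\r\n".join([
        "BEGIN:VCARD",
        "VERSION:2.1",
        f"N:{name}",            # structured name: surname;forename
        f"TEL;CELL:{phone}",
        f"EMAIL:{email}",
        "END:VCARD",
    ]) + "\r\n"

card = make_vcard("Bloggs;Joe", "+44 7700 900123", "joe@example.com")
print(card)
```

That's the whole format: a few labelled text lines, defined well over a decade ago.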
Someone read the EULA?!?!
Regardless of what it used to say in the EULA, given how old Office is there was clearly no intent on MS's part to claim IPR ownership. They've not, so far as I'm aware, ever sued someone for royalties because they used a licensed version of Word. Maybe their reticence was because they were embarrassed that they'd made a legalese mistake; lawyers are supposed to be fully conversant in it.
It goes to show how useless the complex EULAs of the software world actually are when not even their originators fully understand their meaning...
Ah yes, NT for PowerPC. I saw it a few times, back in the days when the future was multiplatform-bright. It was running on embedded SBCs that the manufacturer was using instead of buying PCs... There must be many IT experts out there who are way too young to remember what was going on back then.
One has to be impressed by Intel for how well they managed to see off those platforms through being good at marketing and silicon processing. Perhaps they've now made the mistake of believing their own PR...
Yep, I agree with all that. The only thing that I'd add is that if MS can make the programming API the same as it always has been, then porting from x86 (little endian, 32 or 64 bit) to ARM (little endian, 32 and soon to be 64 bit) probably isn't so difficult.
I'm reminded of the demo MS did showing an early Win8 running on ARM with Office 10, printing to an ordinary Epson printer. The hints were that the printer driver and Office were simply recompiled and appeared to work well enough for MS to be confident about giving a public demo.
If MS have really pulled it off to that level that soon (the demo was ages ago), it might be reasonably easy for app devs to avoid the shade of WordPerfect and 1-2-3...
They should be.
It has been suggested / reported / deduced that MS and / or Apple are going to start doing proper ARM versions of their mainline OSes. Linux is, of course, already there. The OEMs (in MS's case) could then start manufacturing laptops with ARMs in them and offer customers a choice. There will always be users who genuinely need a lot of compute grunt, but most people would probably value battery life over raw horsepower.
That'd be great for end users, ARM, MS, Apple and the OEMs, but almost certainly not so good for Intel. Nor AMD.
From what I've seen of Macs being used by bio/med scientists, the features of ZFS will be utterly lost on most of them. I showed a bunch the miracle of network file sharing; it took a *lot* of explaining. Up to that point to move files between Macs (all connected to the same LAN) they were using a large number of USB drives, and were forever running out of space on them. Ironically, didn't Apple beat MS to network shared disk space? Seems to have been a waste of time. Then I tried to get them to understand the wisdom of backups. Fair enough, they know cells inside out and I don't, but even so.
ZFS is way too complicated I suspect for the average Mac user to understand and desire. A Mac is perhaps the last machine on earth you'd pick as being a platform on which to store enough data to even begin to get ZFS interested. But I wish them luck.
Coz if you had you'd know that the bezel is part of the touch screen and all four sides do something useful. Whereas the ludicrously wide top/bottom bezels on an iPhone do nothing but hold the lame home button. Apple has always thought (wrongly?) that its users can't cope with more than one button on a device.
Tilera have some big hurdles to overcome if they're going to significantly expand their market. The biggest problem is that no-one (relatively speaking) has heard of them. In contrast, everyone knows who Intel, AMD and ARM are.
A significant advantage that ARM has over Tilera is that ARM don't actually make their chips. The very large number of ARM licensees out there works in ARM's favour, because the licensees are the ones who put all the effort into working out what will actually sell, developing a whole ecosystem and marketing it. That strategy worked in the mobile space. And with the likes of Microsoft, HP, and loads of others all sniffing around the ARM server market, you'd have to say that ARM have some *very* big partners helping out. Tilera don't have that benefit.
Also I'd have to say that Tilera may have mistaken raw compute performance for being the key market driver when it comes to servers. Sure, some servers are very busy but a very large number are primarily driving disks and network interfaces. ARM look like they are offering just the right amount of compute power and features for many people's server needs (just like they did in the mobile phone market). Again, ARM can leave that sizing issue largely up to their licensees, whereas Tilera have to work it out themselves. I'm not convinced a large number of cores with presumably not much network/storage I/O will do it for a lot of server operators. And if ARM do decide that some architectural change is necessary to compete with Tilera they could easily develop the architecture and then let their many licensees do the rest.
This is not the first time that a new discrete Fourier transform has come along.
About ten years ago some academic came up with a fresh approach to the algorithm that claimed to produce the correct arithmetic result but with a reduction in the number of floating point operations. Everyone got very excited, but as far as I know it never saw the light of day in any commercial application. Whilst the exact algorithm wasn't published (they were aiming for a patent), from the vague description the authors gave I concluded that it wasn't going to be very cache friendly. And if an algorithm isn't cache friendly then it isn't going to be terribly fast on a CPU, especially if the amount of data being processed is larger than the L1 cache size; you'd have to build dedicated silicon to implement it, and that is *very* expensive to do.
The bigger CPUs (x86, SPARC, PPC; not ARM) generally are astonishingly fast if data fits in L1 cache (ever timed the Intel IPP library's FFT on smallish data sizes?) and a complicated algorithm like this new one may actually be of some real-world benefit. If it can break down larger FFTs into lumps of data that stay in L1 cache for longer then the real-world performance could be significantly better than existing algorithms. So there may be some software applications.
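The cache effect is easy to see for yourself. A rough timing sketch (assuming NumPy is installed); the sizes are guesses and the absolute numbers depend entirely on your CPU, but the per-point cost typically jumps once the working set no longer fits in L1/L2:

```python
import time
import numpy as np

# Time an FFT at several sizes and report cost per point. In theory the
# FFT is O(N log N), so per-point cost should grow only as log N; in
# practice it usually jumps harder once the array spills out of cache.
for n in (2**10, 2**14, 2**18, 2**20):
    x = np.random.rand(n).astype(np.complex64)
    t0 = time.perf_counter()
    reps = 10
    for _ in range(reps):
        np.fft.fft(x)
    dt = (time.perf_counter() - t0) / reps
    print(f"N={n:>8}: {dt / n * 1e9:.2f} ns per point")
```

Any algorithm claiming fewer FLOPs still has to win on numbers like these, not just on operation counts.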
But as for hardware applications (signal/video/image processing in mobile phones) it might not see the light of day for a long time; if it can be squeezed into existing chips then great; if not, then it'll have to wait until the next design iterations, which could be a long time away. And it will have to be worth it; if it takes twice as many transistors then the cost/benefit analysis that the hardware manufacturers will do might not stack up in its favour.
So long as humans want to go to sleep sometime after dark and wake up sometime around about when the sun comes up, we will need a time scale that is aligned with the sun.
We either stop caring about 0800-ish being the time we wake up and go to work, or we arrange matters so that 0800-ish is when the sun comes up. Trying to coordinate the former across the world without causing a lot of havoc is going to be difficult, because actually there is so much in our lives that is based on clock time being aligned with solar time.
If the ITU does abandon leap seconds the whole world would occasionally have to re-align every time-dependent aspect of our lives (timetables, contracts, laws, telephone systems, etc. etc.) to keep in step with the fact that humans live solar lives. Changing all that in one go and getting it right sounds a lot harder than dealing with a leap second every now and then.
The trouble with your proposal is that it makes it very difficult to accurately work out the time between two dates. The answer would necessarily be very ambiguous! For example the answer to the question 'how many seconds were there in 2011?' would depend on whether you're taking the current length of a second as the basis, or the length of a second at the very end of 2011, or the mean second length during 2011, and so on.
The REAL answer to the problem is to get the operating system boys to sort out time properly. Pretty much every OS, programming library and application out there has always completely ignored leap seconds purely because the original programmers were too lazy to find out what UTC actually is, which is crazy considering UTC was defined decades ago back in the early days of computing.
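You can see the problem in any mainstream language. Python's standard `datetime`, for instance, assumes every UTC day is exactly 86,400 seconds; a sketch across the real leap second inserted at the end of 2016:

```python
from datetime import datetime, timezone

# A leap second (23:59:60 UTC) was inserted at the end of 2016-12-31,
# so the true elapsed time between these two instants is 61 seconds.
a = datetime(2016, 12, 31, 23, 59, 30, tzinfo=timezone.utc)
b = datetime(2017, 1, 1, 0, 0, 30, tzinfo=timezone.utc)

naive = (b - a).total_seconds()
print(naive)  # 60.0 -- the leap second is silently ignored
```

One second out of sixty-one: harmless for a calendar app, potentially disastrous for anything doing precise timing.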
The only people who have actually got it right, so far as I know, are the astronomers; not surprising, as they actually care about accurate time differences over long periods of time. The IAU's SOFA source code library has all the routines needed to accurately convert between UTC, TAI, etc., taking proper account of leap seconds.
The only disadvantage of SOFA is that it needs a static table of leap second data manually updated (and your code recompiled) every time there is a new leap second (they're not predictable in advance). It uses this table of all the leap seconds there have ever been when converting between TAI and UTC. In this day and age it should be trivial for something like NTP to communicate that table to OSes automatically. It would take some work to update apps to use something like SOFA instead of the inaccurate libraries that are used in the mainstream today, but it would completely solve the problem for ever.
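The table itself is tiny. A minimal sketch of the idea in Python (this mimics the table-lookup approach, it is not SOFA's actual API, and the table below is deliberately partial):

```python
from datetime import datetime, timezone, timedelta

# Partial TAI-UTC table (in whole seconds). A real table goes back to
# 1972 and must be extended whenever the IERS announces a new leap
# second -- they cannot be predicted in advance.
LEAP_TABLE = [
    (datetime(2006, 1, 1, tzinfo=timezone.utc), 33),
    (datetime(2009, 1, 1, tzinfo=timezone.utc), 34),
    (datetime(2012, 7, 1, tzinfo=timezone.utc), 35),
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
]

def tai_minus_utc(t):
    """Return TAI-UTC in seconds at UTC instant t (within table coverage)."""
    offset = 32  # value in force before the first table entry (1999-2005)
    for start, value in LEAP_TABLE:
        if t >= start:
            offset = value
    return offset

def utc_to_tai(t):
    """Convert a UTC datetime to the corresponding TAI instant."""
    return t + timedelta(seconds=tai_minus_utc(t))
```

Distributing exactly this sort of table automatically (via NTP or the OS's update mechanism) is the missing piece the comment above is asking for.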
Changing OSes and software is probably a whole lot easier to do than it would be to change all the laws, working practices, train timetables, etc. etc. when humanity finally gets fed up with 0900 being increasingly early in the solar day. My great great....great grandchildren don't want to be contractually bound to turn up to work at 0900 if that's in the middle of the night.
I reckon that this is something that Linus Torvalds could single-handedly solve. He can change the Linux kernel and has enough influence over NTP and glibc to make it happen. If Linux gets it right, everyone else might follow.
Getting it right in the OSes would make a tremendous difference to software programmers who have to worry about time. For example, how many times have Apple failed to get their iPhone OS alarm clock to actually work as an alarm clock? How many people with electronic calendars have been frustrated by the inability to properly deal with daylight savings and time zones?
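The timezone side of this is at least properly solved in modern libraries, if programmers use them. A sketch using Python's `zoneinfo` (Python 3.9+, and assuming the system tz database is installed) showing the UTC offset changing across the US spring-forward transition of 14 March 2021:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib wrapper around the IANA tz database

tz = ZoneInfo("America/New_York")

# US clocks sprang forward at 02:00 local time on 2021-03-14,
# so the wall-clock hour 02:00-03:00 simply did not exist that day.
before = datetime(2021, 3, 13, 12, 0, tzinfo=tz)  # EST, UTC-5
after = datetime(2021, 3, 15, 12, 0, tzinfo=tz)   # EDT, UTC-4

print(before.utcoffset())  # -1 day, 19:00:00 i.e. UTC-5
print(after.utcoffset())   # -1 day, 20:00:00 i.e. UTC-4
```

An alarm clock that stores "07:30 local" and naively adds 24 hours of elapsed time will fire at the wrong wall-clock time the day the offset changes; the library handles it, but only if the application asks it to.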
Yep, I'd agree with all that too.
I get annoyed by Linuxes. I've just freshly downloaded and installed Linux Mint (yep, I sit on both sides of the fence). First thing it did was insist on fetching 258MB of patches. So why couldn't they keep the original download up to date and stop wasting my time and their expensive bandwidth?
Where MS do quite well in my opinion is that they differentiate between updates that are security related and those that are just improvements. Linuxes don't, at least not obviously so in Update Manager. The result is that Linuxes get patched an awful lot more mostly because the original distro apparently wasn't a finished polished product.
BTW does anyone know yet how the hacker who breached kernel.org a while back managed to escalate privileges to root? We all know that the source code for the kernel wasn't affected (phew!). But the implication is that there is still a way of getting root privileges that only the hacker knows about. That ought to be a worry for every Linux user out there.
Same here, except I've got a BB phone. I really really like the way email, calendar and contacts are bridged to my blackberry phone. I'd hate to lose that feature.
I use my Playbook at home, and the bluetooth reaches my 9800 Torch anywhere in the house. Then when I go out everything is there on my handset without needing a battery or data sucking sync to some stupid cloud somewhere.
I think that Playbooks are a real bargain at the moment, and if you have a BB phone then they're almost a no-brainer because of the price. If the OS update means that all those Android apps start working too then suddenly the Playbook will be highly competitive from a technical point of view. RIM are going to have to do a lot of demonstrations to the public to get the message across...
Apple don't seem to have a good record of getting value for money out of the start ups they acquire. PA-Semi got bought for its chip designing expertise. But according to past El Reg articles the staff left, formed another startup which then got bought by Google. Then Apple bought another ARM specialist startup. Don't know how that worked out, but if it went well why are Apple considering buying another?
I suspect that Apple are still dependent on Samsung for designing their chips, never mind just building them, which is why they're looking at getting in more external expertise. But how many startups do Apple have to chew through before they realise that being part of Apple seems not to suit chip designers? And why might that be one wonders?
Begs the question then: how many facilities went round and made the supposedly isolated (and operationally critical) network part of the general network to "make it easier"?
I suspect that it is actually cost driven. Don't underestimate how penny pinching companies can be. Quite a lot of large industrial accidents can be traced back, one way or another, to a lack of willingness to spend a small amount of money to avert what was thought to be an unlikely disaster.
In my view companies are pretty bad at taking improbable though severe risks seriously. Look at TEPCO, owners of the plant at Fukushima. They chose to continue operating their ancient reactors against all advice, just for the sake of a few Yen of profit. Look where that got them.
I'm not saying that companies using the Internet to connect industrial control networks together between sites should stop doing that. They could easily and cheaply make such networks much more robust by hiding them behind VPNs. That way any hacker would have to break through a VPN first before they can start attacking vulnerable SCADA systems. And if they were really paranoid they could rent private lines off their telecomms company. Both of those approaches are way cheaper than dealing with an oil refinery explosion...
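As a modern illustration of how little the "hide it behind a VPN" approach costs, a site-to-site WireGuard tunnel needs only a handful of lines per gateway. Everything below is made up for illustration -- the hostnames, keys, and addresses are not from any real deployment:

```
# /etc/wireguard/wg0.conf on the plant-side gateway (illustrative only)
[Interface]
PrivateKey = <plant-gateway-private-key>
Address = 10.200.0.1/24
ListenPort = 51820

[Peer]
# Head-office gateway; only the SCADA subnets are routed through the tunnel
PublicKey = <hq-gateway-public-key>
Endpoint = hq.example.com:51820
AllowedIPs = 10.200.0.0/24, 192.168.50.0/24
PersistentKeepalive = 25
```

The point of the `AllowedIPs` restriction is that an attacker on the open Internet sees nothing but an authenticated UDP endpoint; the SCADA kit itself is never directly reachable.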
I suspect that actually, quite a lot of companies already do that sort of thing one way or another. But there are bound to be some that haven't even begun to consider what sort of risk hacking represents to their systems.