Oh do stop being so negative
I'll fetch my coat
Coz if you had you'd know that the bezel is part of the touch screen and all four sides do something useful. Whereas the ludicrously wide top/bottom bezels on an iPhone do nothing but hold the lame home button. Apple has always thought (wrongly?) that its users can't cope with more than one button on a device.
Tilera have some big hurdles to overcome if they're going to significantly expand their market. The biggest problem is that no-one (relatively speaking) has heard of them. In contrast, everyone knows who Intel, AMD and ARM are.
A significant advantage that ARM has over Tilera is that ARM don't actually make their chips. The very large number of ARM licensees out there works in ARM's favour, because the licensees are the ones who put all the effort into working out what will actually sell, developing a whole ecosystem and marketing it. That strategy worked in the mobile space. And with the likes of Microsoft, HP, and loads of others all sniffing around the ARM server market, you'd have to say that ARM have some *very* big partners helping out. Tilera don't have that benefit.
Also I'd have to say that Tilera may have mistaken raw compute performance for being the key market driver when it comes to servers. Sure, some servers are very busy but a very large number are primarily driving disks and network interfaces. ARM look like they are offering just the right amount of compute power and features for many people's server needs (just like they did in the mobile phone market). Again, ARM can leave that sizing issue largely up to their licensees, whereas Tilera have to work it out themselves. I'm not convinced a large number of cores with presumably not much network/storage I/O will do it for a lot of server operators. And if ARM do decide that some architectural change is necessary to compete with Tilera they could easily develop the architecture and then let their many licensees do the rest.
Did that joke just step onto a neutrino and ride back in time to another forum a couple of days back?
This is not the first time that a new discrete Fourier transform has come along.
About ten years ago some academic came up with a fresh approach to the algorithm that claimed to produce the correct arithmetic result but with a reduction in the number of floating point operations. Everyone got very excited, but as far as I know it never saw the light of day in any commercial application. Whilst the exact algorithm wasn't published (they were aiming for a patent), from the vague description the authors gave I concluded that it wasn't going to be very cache friendly. And if an algorithm isn't cache friendly then it isn't going to be terribly fast on a CPU, especially if the amount of data being processed is larger than the L1 cache size; you'd have to build dedicated silicon to implement it, and that is *very* expensive to do.
The bigger CPUs (x86, sparc, ppc; not ARM) generally are astonishingly fast if data fits in L1 cache (ever timed the Intel IPP library's FFT on smallish data sizes?) and a complicated algorithm like this new one may actually be of some real world benefit. If it can break down larger FFTs into lumps of data that stay in L1 cache for longer then the real world performance could be significantly better than existing algorithms. So there may be some software applications.
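That cache-blocking idea is the very thing the classic Cooley-Tukey FFT already exploits: keep splitting the transform until each sub-problem fits in L1. A minimal Python sketch of the textbook radix-2 version (not the new secret algorithm from the article, obviously):

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey. Each level of recursion halves the
    # working set, so the sub-transforms eventually fit inside the L1 cache
    # however large the input is. len(x) must be a power of two here.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of the even-indexed samples
    odd = fft(x[1::2])    # transform of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Combine the two half-size results with the usual twiddle factors.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

Tuned libraries like IPP or FFTW stop the recursion at a base case sized to fit the cache and use hand-optimised codelets from there down, which is exactly the "stay in L1 for longer" trick.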
But as for hardware applications (signal/video/image processing in mobile phones) it might not see the light of day for a long time; if it can be squeezed into existing chips then great; if not, then it'll have to wait until the next design iterations, which could be a long time away. And it will have to be worth it; if it takes twice as many transistors then the cost/benefit analysis that the hardware manufacturers will do might not stack up in its favour.
So long as humans want to go to sleep sometime after dark and wake up sometime around about when the sun comes up, we will need a time scale that is aligned with the sun.
We either stop caring about 0800-ish being the time we wake up and go to work, or we arrange matters so that 0800-ish is when the sun comes up. Trying to coordinate the former across the world without causing a lot of havoc is going to be difficult, because actually there is so much in our lives that is based on clock time being equal to sidereal time.
If the ITU does abandon leap seconds the whole world would occasionally have to re-align every time dependent aspect of our lives (timetables, contracts, laws, telephone systems, etc. etc) to keep them in step with the fact that humans live sidereal lives. Changing all that in one go and getting it right sounds a lot harder than dealing with a leap second every now and then.
The trouble with your proposal is that it makes it very difficult to accurately work out the time between two dates. The answer would necessarily be very ambiguous! For example the answer to the question 'how many seconds were there in 2011?' would depend on whether you're taking the current length of a second as the basis, or the length of a second at the very end of 2011, or the mean second length during 2011, and so on.
The REAL answer to the problem is to get the operating system boys to sort out time properly. Pretty much every OS, programming library and application out there has always completely ignored leap seconds purely because the original programmers were too lazy to find out what UTC actually is, which is crazy considering UTC was defined decades ago back in the early days of computing.
The only people who have actually got it right, so far as I know, are the astronomers; not surprising as they do actually care about accurate time differences over long periods of time. The IAU's SOFA source code library has all the routines needed to accurately convert between UTC, TAI, etc, taking proper account of leap seconds.
The only disadvantage of SOFA is that it needs a static table of leap second data manually updated (and your code recompiled) every time there is a new leap second (they're not predictable in advance). It uses this table of all the leap seconds there have ever been when converting between TAI and UTC. In this day and age it should be trivial for something like NTP to communicate that table to OSes automatically. It would take some work to update apps to use something like SOFA instead of the inaccurate libraries that are used in the mainstream today, but it would completely solve the problem for ever.
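To illustrate what that static table amounts to, here's a minimal Python sketch. The dates and offsets below are a truncated but real subset of the published TAI-UTC table (SOFA's actual routine for this is iauDat, in C); a production version needs the full list back to 1972 and a way to receive new entries when the IERS announces them:

```python
import datetime

# Each entry: (first UTC date on which the offset applies, TAI-UTC in seconds).
# Truncated to the post-1999 entries for brevity.
LEAP_TABLE = [
    (datetime.date(1999, 1, 1), 32),
    (datetime.date(2006, 1, 1), 33),
    (datetime.date(2009, 1, 1), 34),
]

def tai_minus_utc(d):
    """Return TAI - UTC in whole seconds for a UTC date covered by the table."""
    offset = None
    for start, secs in LEAP_TABLE:
        if d >= start:
            offset = secs
    if offset is None:
        raise ValueError("date precedes the table")
    return offset
```

With the full table, the exact number of SI seconds between two UTC instants is just the naive difference plus the change in TAI-UTC between them, which is precisely the calculation the lazy libraries skip.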
Changing OSes and software is probably a whole lot easier to do than it would be to change all the laws, working practices, train timetables, etc. etc. when humanity finally gets fed up with 0900 being increasingly early in the sidereal day. My great great....great grandchildren don't want to be contractually bound to turn up to work at 0900 if that's in the middle of the night.
I reckon that this is something that Linus Torvalds could single handedly solve. He can change the Linux kernel and has enough influence over NTP and glibc to make it happen. If Linux gets it right, everyone else might follow.
Getting it right in the OSes would make a tremendous difference to software programmers who have to worry about time. For example, how many times have Apple failed to get their iPhone OS alarm clock to actually work as an alarm clock? How many people with electronic calendars have been frustrated by the inability to properly deal with daylight savings and time zones?
What you want is a Samsung Smart TV. They have Skype built into the telly, webcam and mic too. Seems to work very well indeed. All you need is TV, internet connection and that's it!
Yep, I'd agree with all that too.
I get annoyed by Linuxes. I've just freshly downloaded and installed Linux Mint (yep, I sit on both sides of the fence). First thing it did was insist on fetching 258MB of patches. So why couldn't they keep the original download up to date and stop wasting my time and their expensive bandwidth?
Where MS do quite well in my opinion is that they differentiate between updates that are security related and those that are just improvements. Linuxes don't, at least not obviously so in Update Manager. The result is that Linuxes get patched an awful lot more mostly because the original distro apparently wasn't a finished polished product.
BTW does anyone know yet how the hacker who breached kernel.org a while back managed to escalate to root privileges? We all know that the source code for the kernel wasn't affected (phew!). But the implication is that there is still a way of getting root privileges that only the hacker knows about. That ought to be a worry for every Linux user out there.
Same here, except I've got a BB phone. I really really like the way email, calendar and contacts are bridged to my blackberry phone. I'd hate to lose that feature.
I use my Playbook at home, and the bluetooth reaches my 9800 Torch anywhere in the house. Then when I go out everything is there on my handset without needing a battery or data sucking sync to some stupid cloud somewhere.
I think that Playbooks are a real bargain at the moment, and if you have a BB phone then they're almost a no brainer because of the price. If the OS update means that all those Android apps start working too then suddenly the Playbook will be highly competitive from a technical point of view. RIM are going to have to do a lot of demonstrations to the public to get the message across...
Aren't you forgetting the sat phones' own transmissions that would have caused exactly the same problem had the satcomms service ever become popular?
Apple don't seem to have a good record of getting value for money out of the start ups they acquire. PA-Semi got bought for its chip designing expertise. But according to past El Reg articles the staff left, formed another startup which then got bought by Google. Then Apple bought another ARM specialist startup. Don't know how that worked out, but if it went well why are Apple considering buying another?
I suspect that Apple are still dependent on Samsung for designing their chips, never mind just building them, which is why they're looking at getting in more external expertise. But how many startups do Apple have to chew through before they realise that being part of Apple seems not to suit chip designers? And why might that be one wonders?
Begs the question then, how many facilities went around and made the supposedly isolated (and operation critical) network a part of the general network to "make it easier"?
I suspect that it is actually cost driven. Don't underestimate how penny pinching companies can be. Quite a lot of large industrial accidents can be traced back, one way or another, to a lack of willingness to spend a small amount of money to avert what was thought to be an unlikely disaster.
In my view companies are pretty bad at taking improbable though severe risks seriously. Look at TEPCO, owners of the plant at Fukushima. They chose to continue to operate their ancient reactors against all advice, just for the sake of a few Yen profit. Look where that got them.
I'm not saying that companies using the Internet to connect industrial control networks together between sites should stop doing that. They could easily and cheaply make such networks much more robust by hiding them behind VPNs. That way any hacker would have to break through a VPN first before they can start attacking vulnerable SCADA systems. And if they were really paranoid they could rent private lines off their telecomms company. Both of those approaches are way cheaper than dealing with an oil refinery explosion...
I suspect that actually, quite a lot of companies already do that sort of thing one way or another. But there are bound to be some that haven't even begun to consider what sort of risk hacking represents to their systems.
"They should have stayed 'reserved for satellite communications'"
Wouldn't have helped. The satcom phone itself is also a transmitter operating in the very same band. It has to be, unless you want a one way conversation... And guess where the satcom phone would be? Right next to the GPS receiver that the owner also has.
GPS receivers aren't fussy - they'll quite happily get jammed by any adjacent interference, whether it's coming from a satcom phone or a LightSquared base station. Had that old satcom service ever become popular we would have seen this problem years ago I reckon.
"Nailed on Google search Widget"
Nothing demonstrates Google's revenue stream more clearly. You don't use that widget, Google don't make money.
Not that there's any difference between Google and MS and Apple. Except Apple will take $hundreds off you first and then extract advertising revenue from your data.
I think that Windriver's VxWorks RTOS has more or less captured the entire Martian market too.
You raise two interesting points which phone manufacturers (Apple in particular) should learn if they know what's good for them.
Firstly, the killer app for a mobile is still communications. If a smartphone doesn't do messaging and voice calls effectively then its market potential is limited.
Secondly, "non-western countries" arguably amounts to several billion people, whereas "western countries" perhaps does not. So phones meeting the needs of the majority have greater market potential than those that don't.
So does that mean that Blackberry have actually got it right from a worldwide marketing point of view? Despite their outage a few weeks back they remain the benchmark against which others are measured when it comes to messaging.
Have Apple, by focusing on the apparent needs of a comparatively few app-crazed incommunicative Westerners, missed out on the wider worldwide market where battery life, voice and messaging are the prime selling points? The reported problems with battery life and iCloud bugs don't exactly commend iPhones to those who can't charge up every day / mealtime / hour (as the trend would appear to be) and want to call or text their mates.
Will Android manage to appeal in these emerging smartphone markets? Clearly yes because local manufacturers can tailor it for local non-google services (almost certainly not what Google intended) for local people, as has happened in China. But that results in more Android fragmentation.
Time will tell I guess. If Indonesians are heading towards being Crackberry Addicts that could indicate that Apple and Android are reaching the limits of their world wide market potential. Given RIM's low share price, does that make their stock worth a punt?
"the money losing Itanium which are pure garbage."
Well, it depends on your point of view. They do boot and run, and as Turing taught us that is good enough to do anything. And I'm sure there are some workloads that are well suited to their deeply unfashionable instruction set. The philosophy that instruction pipelining and parallel execution should be under the control of the compiler instead of the processor is fine. It means that you need fewer transistors on a chip to achieve a set level of performance *provided* you get the compiler right. That is the part Intel got wrong, at least initially.
It doesn't bode well for Intel. So far their only attempt at replacing the creaking x86 has been unsuccessful. Now that the world has discovered the benefits of low power the need for Intel to get rid of x86 is paramount, but they almost dare not try. X86 is really going to struggle to be as efficient as ARM, and Intel need to change ISA to properly compete. There's an even chance they will be forced to change to ARM...
Here's my untutored explanation.
All neutrinos are super-luminal. They are created across a spectrum of velocities faster than the speed of light. However, they are still subject to the quantum uncertainty principle. Thus their instantaneous position, and therefore their instantaneous velocities, are a little uncertain. Those travelling just slightly faster than the speed of light will occasionally have an instantaneous velocity slower than the speed of light. At this point they are able to interact with normal matter, and thus we can detect them through a collision with an atom in, for example, a vast tank of cleaning fluid. Those travelling a lot faster than the speed of light are a lot less likely to have an instantaneous velocity lower than the speed of light, and so are a lot less likely to interact. My conclusion is that the rarity of neutrino interactions is due to the rarity of neutrinos having a super-luminal velocity sufficiently close to the speed of light for the interaction mechanism described above to take place.
There. That's my tuppence ha'penny's worth. If it's right, please will El Reg forward my Nobel physics prize (and especially the cheque) on to me.
"You may be thinking about Moto's competitor for the Intel 8086, the 6800."
The 6800 was an 8 bit core. The 8086 was a 16 bit core, and came along some time afterwards. Different sorts of beasts really, and didn't really compete. The commercially relevant competitor to the 8086 was really the 68000, which is what Apple picked.
It's quite noticeable how even back then the 8086 architecture was a problem; only 16-bit. The 68000 was a 32-bit core, much more future proof as history would show. It took the x86 community something like 15 years to fully make the 32-bit switch.
If a single Lightsquared base station can interfere with GPS receivers over a wide area, why wouldn't 400+ mobile phones flying in close formation interfere with the GPS on top of the plane?
I'm not convinced that they've got a way of discovering whether a plane crashed because of interference or straightforward equipment failure. Are we about to see a string of inexplicable plane crashes? They will happen, but if they're rare will anyone bother to find out why? I suspect not.
All OSes, even Windows, are getting better from a security point of view. A lack of secure boot is the chink in that armour. As time passes that lack is going to become glaringly obvious because it will be exploited more and more. If the industry doesn't address that problem then OSes will always be exposed. MS seem to recognise that and are suggesting that certain features of an existing standard are actually used to help. There is nothing else viable at this point in time to help.
However, whilst it is worth recognising that MS's plan will bring about security improvements, it is worth revisiting recent history. Essentially secure boot relies on some keys being kept private and securely stored deep inside every PC sold. But when you examine previous comparable systems (DVD, Blu-ray, PS3, Wii, Xbox to name but a few) the private key has always leaked out one way or other. Along with needing thorough technical measures (e.g. a TPM), secure boot will rely completely on all the manufacturers being able to protect the key from theft, compromise, carelessness, etc. We are kidding ourselves if we think that the magic numbers will remain secure forever. And if the private key does leak then an enormous security hole will have been blown straight through the whole scheme.
I don't think that there's anything MS can do about that - they're just doing the best they can by their customers within the limits of the technology available to them. I'm sure that the hardware manufacturers will want to protect their Linux sales by putting in some way of allowing non-secure boot (though if bootkits were a real threat the open OSes will then necessarily be "less secure" than Windows). But if we are ever going to really solve that problem we are going to have to sort out the key distribution problem to eliminate the need for a single shared private key.
Anyone got any good ideas?
So does this mean that if you ask Siri "Can I use Siri on Android or Blackberry" it will now answer with "yes"?
I may go into an Apple shop and try that out. It would be especially amusing if someone did just that, but used an Android phone to do so...
Yeah right. US prosecutors are well known for their sense of reality, reasonableness and balanced opinions. Any system that pays its prosecutors through incentive schemes where pay is based on successful convictions is always going to encourage them to look for easy wins.
Similarly the European arrest warrant system is being heavily abused. The measure was intended for serious cases only (terrorism, etc), but so far has resulted in only very petty cases indeed. For example, the recent case where the Portuguese authorities had a British woman arrested on charges that actually related to her former boyfriend from years ago and his possession of forged currency. How does that even make sense?
...think again. With the current state of the extradition arrangements between the UK and the US the federal authorities could get you with no due process in this country.
...El Reg is British, so blouses of all sorts are most welcome.
...especially over plugins. Sooner or later they're going to break Flash (probably by mistake). As soon as they do that most people will defect immediately.
Also, they don't seem to realise that most people out there don't give a stuff for the technical advancements, etc. etc. All that most people want is a browser that just works OK in the same old way. People who use software get used to how it works and don't want it to change. Hasn't Mozilla seen the fuss over, for example, Microsoft's ribbon interface? Developers are practically the last people on earth who should be allowed to make UI design changes.
...And is on my seriously-considering-it list when my current contract runs out.
Either The Reg swapped my posting order round, or my broadband got routed via a CERN neutrino SuperC datalink...
And I wants it really really badly. And I've not even read the whole article yet.
And I still want one.
That's an awful lot of double precision performance.
I still think that any manned space flight is very impressive. I think even you would be impressed if you were strapped to the top of rocket with the fuse lit and the sky beckoning!
Terry Pratchett in one of his books pointed out just how weird we are. Boredom - we're the only species to have it so bad that walking on the moon became not worth the TV air time after just two landings.
NASA have been stunningly successful in their unmanned program. Pretty much all of their deep space probes have sent back amazing results. And in the case of Voyager, it still is.
...you have a broadband connection. A considerable number of people who buy music on CD don't have broadband.
I'm fairly sure that current CD sales will not translate into the same number of digital downloads. The CD market might not be dominant, but it is still money worth having if you're in the music business. Though I can see them becoming something you can get only by mail order.
"Seriously; this would destroy the value of many American and European companies, leaving those who manufacture products, rather than those who design and invent them, with all the profit."
Quite right. Without the financial incentive provided by patents, no one would bother to make anything new because there'd be no substantive reward. And then we'd end up in a society quite like that of the old Soviet Union. Dull, drab, and nothing getting better than it already is.
Private investment has driven the technology of the world for the past 200, 300 years. Take away the incentive, and you're then reliant on government to drive things forward. How likely is that to be a good thing?
Your point about the evaluation of patents being at fault hits the nail on the head.
3 UK do a Skype client for Blackberries (and many other phones too) which works over the 3G network, and is *free* to use within the UK. They've done a good job of integrating it with the BB notification system too, and it also goes to voicemail.
If you really want Skype it's a very, very good combination. It decided things for me - always on free Skype anywhere, everywhere.
The astronomers have had to deal with the various world timescales for years. The International Astronomical Union has a library for the Standards of Fundamental Astronomy, and there's some good routines in there to convert between UTC, TAI, UT1, etc. taking proper account of known leap seconds. Only trouble is that it needs updating every time there's a new leap second.
It really would be trivial to adopt that library, add it to ntp or whatever and have every networked computer system automatically updated with a current leap second table as and when necessary.
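The table already exists in machine-readable form, too: IERS/NIST publish a leap-seconds.list file whose data lines are just "<seconds since 1900-01-01 (the NTP epoch)> <TAI-UTC>", with '#' marking comment lines. Parsing it is a few lines of Python (a sketch, based on my reading of the published format):

```python
def parse_leap_seconds_list(text):
    # Data lines in leap-seconds.list: "<NTP seconds since 1900-01-01> <TAI-UTC>".
    # Lines beginning with '#' are comments (including the '#$' last-update
    # and '#@' expiry-date markers), so we skip them.
    table = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        fields = line.split()
        table.append((int(fields[0]), int(fields[1])))
    return table
```

An NTP daemon that fetched and parsed this file could hand the OS an always-current leap second table with no recompiling at all.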
Putting off leap seconds and having leap minutes instead just makes the problem bigger, though rarer. How does that make things better?
The original designation of Lightsquared's band as being for satcoms is largely irrelevant. The satcom mobile phones themselves would likely have caused similar problems for GPS as Lightsquared's base stations. Ever heard a GSM mobile phone interfering with a set of loudspeakers? That's the sort of problem I'm on about. The problem would have been more proximate to the satcom mobile phones (they'd not have been as powerful as a basestation), but at the same time more widespread because everyone would have been carrying one.
The GPS industry got away with it because the original owners of Lightsquared's band couldn't make it commercially successful; the phones weren't widespread, so no one noticed the problem. But the potential for a problem has been there all along ever since the band allocations were made decades ago.
Also, we can't have a valuable piece of spectrum going unused just because the GPS industry can't be bothered to actually build their kit right. If you feel ripped off, complain to them, not Lightsquared.
"What I would really like to see happen is the FCC go after the GPS makers for breaching their licence, but I'm too cynical to actually believe that is ever going to happen"
AFAIK you don't need a spectrum license as such to make a GPS receiver. It is a receiver, not a transmitter. However, the stock FCC compliance statement that all the manufacturers have been putting on their GPS kit turns out to be something they're not complying with after all. They could be sued for misleading their customers.
"You seriously believe that chavs care for the technical solution of how their walled garden messaging works?"
Well, I find it very hard to believe that BB, the functional tool of the hard nosed businessman, was somehow deemed 'cool'. Some kid somewhere must have worked out that BBM was cheaper to run on their limited pocket money and stingy allowances on pay-as-you-go SIMs than SMS or anything similar from anyone else. I doubt very much that that kid looked at *why* it was cheaper; the fact that it *was* cheaper is probably why BBM has become popular with kids in the UK.
But the reason it was cheaper is because of the way RIM have implemented push.
Incidentally, does the notion behind a BB PIN remind anyone else of Compuserve's xxxxx.yyyy style addresses? RIM might need to take care if they ever open up BBM to incoming messages from the internet; it would represent an easy spamming opportunity.
It seems that some disgruntled down voting fanbois can't stand the fact that you are happy with your choice. Have yourself a rebalancing thumbs up.
I think that 'destroy' is too strong a word. 'Erode' would be more like it.
RIM's way of doing push is very good, being well optimised for minimising network traffic and maximising battery life. Blackberry is popular amongst cash starved chavs who have recognised that those characteristics save them money and hassle.
It is going to be very difficult for Apple or anyone else to surpass Blackberry push on purely technical grounds. That may translate into an inability to attract customers who demonstrably already do care about bills and batteries. Though Apple's reality distortion field does cause some very strange purchasing decisions...
Carrying a spare battery is a lot more sensible than carrying a charger... And if battery life is a problem then it's even more sensible to select a phone with good stamina.
It seems that iPhones have reached the point where they won't last long enough for quite a large fraction of their normally enthusiastic user base. It doesn't matter how shiny a phone is; people *can't* use it if it goes flat on them. And there's nothing like unavoidable loss of service to put people off what is actually a quite expensive device.
And indeed you are not alone. Many have switched, and there is certainly more app development activity in the Android space.
Personally speaking I find Android's terrible update mechanism (in fact, what update mechanism?) very off putting. Coupling that with the continual stream of reports of flaws and security mistakes in Android plus a not very good web browser makes it a non-option as far as I'm concerned.
Being a self confessed techno geek I like the architecture of QNX. It's much better than shoe-horning unix into a mobile as Apple and Google have done. I don't think that MS have done anything particularly revolutionary in their kernel either. It will be interesting to see if RIM can build on QNX's cleverness to deliver an obviously better product (better battery usage, smoother app experience, etc). For example the playbook, whilst currently somewhat flawed, does a fantastic job of web browsing and multitasking. RIM will certainly be hoping so, but I fear that Apple and Google have shown that mere technical superiority is not a market factor.
"Can we admit that Apple have supported their products longer then any other manufacturer have supported an Android phone?"
Absolutely! Though of course there are tales of Apple withholding new features from older phones "because they're not powerful enough". Shame for Apple that the hackers generally get them going on older phones anyway. And then there's the things that most Apple upgrades seem to break...
So Apple are definitely along the right road when it comes to updates; updates yes, but still trying to force people to buy new handsets.
Android is terrible for updates and, if any kind of common sense prevails, will end up costing Google dearly.
MS and RIM seem to have updates well under control. In the case of MS it could become a significant reason to give a Win Phone a go. It might not be perfect (though initial reactions seem positive), but a couple of meaningful upgrades in the handsets' lifetimes seems a real possibility. Service packs and updates have kept Win XP viable for ages; why not the same in the mobile market?
"as *SERIOUSLY* expensive as a terrorist attributed meltdown?"
The problem is that company shareholders and accountants are rubbish at understanding risk, consequences and costs. Risks with extreme consequences but which are very unlikely to happen are often ignored. Why spend money mitigating something that is unlikely?
For example, Tepco had to be strong armed by the Japanese government to install pressure release valves at Fukushima. Turns out that they need those. Without them Japan would be looking at the ruins of four exploded reactor cores instead of four minor meltdowns.
Tepco are in real trouble anyway. They were pressed by various engineers and inspectors to shut down the old reactors at Fukushima years ago. Had they done so they would be looking at a minor loss of electricity sales instead of complete corporate extinction.
In comparison, connecting vital corporate systems to the Internet seems much more likely to result in complete corporate disaster. So why do it?
Well done HP indeed. I think that the power consumption figures quoted in the article will certainly attract a lot of interest. There would appear to be a lot of real world applications out there (web serving) that would benefit handsomely.
I'm beginning to wonder whether HP have rediscovered their R&D mojo. Their enthusiasm for ARM could really pay off big time. They are also putting a lot of effort into memristor memory technologies from which they could easily end up owning what is currently the whole DRAM, HDD and FLASH markets. These are two very big bets indeed, and the payoffs would be truly monumental.
All the other tech companies out there should really be paying attention. HP are technologically very close to stealing large fractions of their businesses from underneath their noses. Some of them need to get on the ARM bandwagon very quickly. For others I think it is too late - Hynix have already done some sort of deal with HP.
Mini ITX servers as you describe would be great. Hopefully the power consumption advantages will drive the ARM genre on quickly, so we may not have to wait too long :)