* Posts by bazza

2021 posts • joined 23 Apr 2008

LightSquared to magic away GPS interference in 2 weeks

bazza
Silver badge

@bwalzer: Er, yes.

If LightSquared's transmitters were so terrible that "Things like intermodulation and IF images" were causing them to radiate outside their allocated band then they wouldn't have a leg to stand on. But by all accounts that's not the case here; no one is saying that LightSquared's transmissions will intrude in an unlicensed or unreasonable way into the allocated GPS band.

The interference to GPS receivers is caused by their designers not bounding the receiving band with a filter good enough to reject out-of-band signals that will, if LightSquared start operating, be commonly encountered. All receivers are vulnerable to high power interfering signals causing non-linear responses within the receive chain components. That's why when you design a radio you take a look at the expected operating environment (maybe glance at the frequency allocation tables) and decide how much filtering is going to be needed against legitimate and commonplace adjacent signals. And you are right - guard band practicality is an important factor for the allocating authorities.

It's worth questioning whether today's filtering requirement for GPS is so very different to that theoretically required years ago when the 1525-1559MHz band was allocated to satcoms. To set the historical context for the current debate I think that it is important to analyse the practicality of the previous use of the 1525-1559MHz band for satcoms long before LightSquared came along. And the issues are not generally evident from the online material covering this topic. For example, the last slide at:

http://www.pnt.gov/public/2011/03/munich/hegarty.pdf

shows the frequency allocations and a stylised filter response. It suggests that "Low Power On Earth Satcom Emissions" don't cause interference with GPS, and that "15kW Base Station Emissions" do. However such an analysis is valueless without considering the distance between the interference source and the GPS receiver.

So consider this: a single LightSquared 15kW base station may indeed cause operating problems for a large number of poorly filtered GPS receivers over a wide area. However, the low power 2W-ish transmission of a satcoms mobile phone would still have been able to cause problems, just over a much shorter distance (perhaps tens of meters?).
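
To put some rough numbers on that, here's a back-of-envelope free-space path loss comparison (a sketch only; the powers, distances and the assumption of no antenna gain are all illustrative guesses, not LightSquared's actual deployment figures):

    /* Back-of-envelope free-space path loss comparison. All figures are
       illustrative assumptions. Build with: cc fspl.c -lm */
    #include <stdio.h>
    #include <math.h>

    /* Free-space path loss in dB for distance d (metres) at frequency f (Hz) */
    static double fspl_db(double d_m, double f_hz)
    {
        return 20.0 * log10(d_m) + 20.0 * log10(f_hz) - 147.55;
    }

    /* Convert Watts to dBm */
    static double dbm(double watts)
    {
        return 10.0 * log10(watts * 1000.0);
    }

    int main(void)
    {
        const double f = 1550e6; /* just below the GPS L1 band */
        printf("15kW base station at 1km: %.1f dBm\n",
               dbm(15000.0) - fspl_db(1000.0, f));
        printf("2W handset at 12m:        %.1f dBm\n",
               dbm(2.0) - fspl_db(12.0, f));
        return 0;
    }

Both work out at around -25dBm at the GPS receiver, which is the point: to a badly filtered front end, a 2W handset a dozen metres away looks much the same as a 15kW base station a kilometre away.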

And consider where the satcoms mobile phone might have been used; on a train, on a plane, in a car with SatNav, etc. etc. If they had become as ubiquitous as terrestrial mobiles are today they'd be everywhere all the time. It is perfectly possible that the satcoms phone itself could have caused equally significant problems for GPS just by being physically close to critically important GPS receivers. Anyone who has ever placed a GSM mobile phone near a set of loudspeakers and heard the ticketer-ter ticketer-ter sound it makes will understand my point, especially considering that a set of loudspeakers isn’t designed to be a highly sensitive radio receiver like GPS.

What I seek to show above is that the question of how much guard band to have between GPS and satcoms was just as relevant then as it is now between GPS and LightSquared. It is highly likely that the practicality of the chosen guard band was considered long ago by the FCC when the satcom band was first allocated. That consideration ought (hindsight?) to have taken into account a close encounter between a GPS receiver and a satcom phone. That scenario is, from the point of view of a single GPS receiver, not so very different to a more distant encounter with a LightSquared base station. So if the guard band was considered practical then, why not now? Maybe the FCC didn't consider such a scenario back then, or maybe they never imagined that everyone and everything would be using GPS receivers for some purpose or other.

By endeavouring to develop an adequate filter LightSquared are seeking to show that the FCC got it right and that the GPS industry are too damned lazy and cheap to do their own jobs properly. The irony of the GPS industry having to license a filter design from LightSquared would be memorable...

However, if the GPS industry is proved right and a filter with adequate out-of-band rejection can't be built, then that really would mean that the FCC got it wrong, and arguably had got it wrong long, long ago when the 1525-1559MHz band was first allocated for satcoms.

In both outcomes it's not LightSquared's fault (though you could argue that they should have known better). I don't think that it would really be the FCC's fault either. The bands were allocated to satcoms long before anyone thought that every mobile phone, car, etc. would have a GPS receiver in it, so the need for miniaturised filters with sharp cut-offs didn't exist. It's not really the GPS industry's fault either. Hardly anyone was actually using the satcoms band. If they had been then we'd have sorted this out years ago. But the solution (whatever it turns out to be) is going to cost a lot of money, and that's always going to come from the customers one way or another.

Some people have pointed out that this is a purely North American problem. But we'd all like our phones (including their GPS) to work properly when we go there.

2
0
bazza
Silver badge

Surely Admiral of the Fleet (RN) is better?

0
0
bazza
Silver badge

Not that

Better filters on LightSquared's transmissions won't help. They don't intrude on the licensed GPS band anyway. The problem is that cheap GPS receivers don't bother to exclude the non-GPS bands from their radio receivers. The rules from the FCC and similar bodies round the world are quite clear; if you don't design equipment to reject other people's legitimate transmissions, that's your problem, not theirs.

LightSquared are, I suspect, commissioning the design of these filters to demonstrate that the GPS industry is lying in its claims that such filters are not possible. If LightSquared get these filters going (and there is no particular electronic reason why they won't) then the GPS industry will be obliged to shut up and start designing their mass market equipment properly, just like the rules say they should.

What's more, if LightSquared lay claim to and enforce the rights to the filter design, the GPS industry might have to pay a licence fee to them for every new GPS receiver built. That will fill LightSquared's coffers nicely. It would be *quite* ironic, especially when this is a problem solely of the GPS industry's making.

1
1
bazza
Silver badge

@John Sager

"Now that GPS has had augmentation and integrity features added by SBAS services such as WAAS & EGNOS, it can be used as a primary navigation aid by aircraft, so that's going to have a severe impact on the aviation industry if GPS can be randomly unreliable due to interference, continent-wide in the US."

Anyone suggesting that GPS is usable as a primary navigation aid for aircraft hasn't done enough failure mode analyses. Any radio system is susceptible to interference / deliberate jamming. They've merely done a commercial analysis that says it's cheaper than, for example, an inertial navigation system. It's especially worrying if the GPS equipment manufacturers haven't bothered to secure their receivers against legitimate out-of-band signals.

The aviation industry is already experiencing significant problems with kids (presumably kids) shining laser pointers at aircraft coming in to land. Given the ease with which GPS jammers can be bought, how long before some kid tries one out near an airfield for a laugh?

0
3

Autodesk shifts design apps to the cloud

bazza
Silver badge

@Mr Young: Yep.

Industrial espionage made easy. Don't hack the company, hack the cloud they use and get *all* of their IPR, for old and soon-to-be-released products, in one easy step. And for all their competitors.

Can you imagine Apple (for example) risking all their IPR to someone else's cloud? I severely doubt it.

How on earth do accountants manage to override the concerns of engineers when it comes to the importance of such things? Do accountants (for it is always they) not realise the company-killing value that their designs represent? Great if you can save a few bucks on software tools, but it doesn't look so clever if all your design work has been pinched by a competitor. I'm also astonished at how many companies will quite happily connect their design/engineering networks up to the internet. Do they get a kick out of risking their entire business to a remote network hack that they cannot guarantee to be able to prevent? Poor assessment of corporate risk all round I fear...

0
0

Dinosaur-murdering space boulder family found innocent

bazza
Silver badge

Georgia?

Has the Georgia state parole board looked into this? I think we should be told...

0
0

Boffins step closer to steam-powered Babbage computer

bazza
Silver badge
Pint

@alannorthants

Only because it takes 5 minutes to execute the first 10 instructions of the BIOS...

0
0

Android bug lets attackers install malware without warning

bazza
Silver badge

Updates?

"One of the hopes for Android a few years back was that it would be a viable alternative to Apple's iOS, both in terms of features and security. With the passage of time, the error of that view is becoming harder to ignore. By our count, Google developers have updated Android just 16 times since the OS debuted in September 2008."

Google may have updated Android 16 times, but I bet the number of updates actually delivered to end users by the manufacturers and networks, with all those varied handsets and configurations to support, is far, far lower than that.

All it will take is for some massively unacceptable hack to take place (e.g. all Android phones disabled by some virus) and suddenly the buying public will vote with their wallets and buy something else. Seems that Android is, amongst all the mobile platforms out there, significantly vulnerable to that. Are SE, HTC, etc. wise to base their entire business on such fragile foundations?

2
1

Hackers break SSL encryption used by millions of sites

bazza
Silver badge

@Ken Hagan. Title not optional

"But yeah, this doesn't necessarily affect any other use of SSL."

I'm half wondering if the basic technique is re-usable. These chaps have used Javascript as a way of targeting SSL/TLS sessions in use by a web browser. But I'm guessing (without any real knowledge) that the basic technique could be re-packaged as, for example, a trojan which might intercept any SSL/TLS traffic. Any thoughts?

0
0
bazza
Silver badge

Time taken?

"In an email sent shortly after this article was published, Rizzo said refinements made over the past few days have reduced the time required to under 10 minutes."

What did he do, install the latest Chrome/Firefox/IE with their faster Javascript interpreters?

1
0

IBM pitches overclocked Xeons to Wall Street

bazza
Silver badge

Hot chips

I wonder how that lot compares latency-wise to the 5.0GHz POWER6 servers?

From what I hear the serious players in that high frequency trading game are busily locating servers as close as possible to the exchanges to get shorter propagation times down the cables. It's 5ns-ish per meter down a cable you know. Being several miles away really costs!
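
If you want to see how quickly that adds up, a trivial back-of-envelope sketch (distances picked purely for illustration):

    /* One-way propagation delay down a cable at roughly 5ns per metre
       (signals travel at about two-thirds the speed of light in a cable).
       Distances are purely illustrative. */
    #include <stdio.h>

    int main(void)
    {
        const double ns_per_metre = 5.0;
        const double metres_per_mile = 1609.34;
        const double miles[] = { 0.1, 1.0, 5.0, 20.0 };
        for (int i = 0; i < 4; i++)
            printf("%5.1f miles: %8.1f us one way\n", miles[i],
                   miles[i] * metres_per_mile * ns_per_metre / 1000.0);
        return 0;
    }

Twenty miles costs you around 160 microseconds each way, which is an eternity by high frequency trading standards.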

0
0

Intel demos ultra low-juice chippery

bazza
Silver badge

@ Ken Hagan

Well, I guess it depends on what you call an interesting compute problem ;-)

When you stop and look at the high performance floating point compute jobs that your average man on the street actually wants done (and is therefore 'interesting', at least from an industrial competition point of view), it's things like video / audio codecs, and to a lesser extent 3D graphics and games physics. And that's about it. Most people's high performance floating point requirements *are* very parallel indeed. That's why Nvidia and ATI have successfully sold so many billions of very parallelised GPUs, and why almost every smart phone out there has one too. In that sense they really have taken over the world.

My only point really is that whatever Intel/AMD can achieve with a general purpose CPU, someone like NVidia, Qualcomm, ARM, etc. is likely to surpass once they've mastered the comparable silicon manufacturing techniques. That has consistently been the case up to now, and the commercial realities of today are clear evidence of that. And now there's things like CUDA and OpenCL which are threatening to take even more floating point workload away from the CPU.

Until Intel can get the performance per Watt to a level where x86 battery life is meaninglessly long or the electricity bill insignificant, they're not going to get a look in. Maybe these low operating voltages will get them there, but I doubt it. And who wants 100GFLOPS in a handheld device anyway?

0
0
bazza
Silver badge

Starting from the wrong place

"Let's take an example of a hundred-gigaFLOPS system today," he said. "If you want that performance, it will require about 200 watts of power."

Well, it might take 200Watts worth of Intel hardware to get 100GFLOPS. But there are plenty of industry examples that already outperform that. Take the Cell processor - that weighed in at about 250GFLOPS for 80Watts (32Watts per 100GFLOPS). And I wouldn't mind betting that most GPUs that get up to 100GFLOPS (i.e. all of them these days?) use much less than 200Watts. And just how many ARM SoCs do you need to get 100GFLOPS? They seem to deliver *enough* performance on very little juice indeed.

I think this is Intel missing the point again. If you *really* want to deliver a workload with the absolute minimum of power consumption, starting off with the x86 as the basis for delivering it is not necessarily going to be the optimum solution. Intel are very good at forcing silicon manufacturing towards ever more impressive transistor performance, but everyone else catches up sooner or later and just builds ARMs using the same tricks. And ARMs seem to have an inherent architectural advantage when it comes to Performance/Watt metrics.

Where this may just save Intel (at least for a while) is in the world of servers. If they can point to significant power savings in the data centre then the operators will be replacing their equipment as quickly as they possibly can.

0
0

Google Native Client: The web of the future - or the past?

bazza
Silver badge
Unhappy

@Thomas Wolf, encore

I've done a bit more digging.

Jazelle exists, but isn't widely used. Japanese phones seem to use it a fair bit, but that would appear to be that. Seems a shame. I tried to find out whether Blackberries use it with no success. Given their pretty good battery life, perhaps they do.

Hardware acceleration has made everything else on ARM pretty good - video/audio codecs, GPUs with adequate grunt, etc. etc. So why not Java?

If the chip manufacturers (TI/OMAP, Qualcomm/Snapdragon, etc) don't put it on then no one can use it. And given that a large fraction of the mobile market (Android & iOS) don't support Java anyway, why bother to put down silicon that's not going to be used?

Seems a shame - hardware accelerated Java could provide a really nice solution to the problem of write once run anywhere in the mobile space, but I guess there's too many vested interests to prevent it ever taking off. There's Apple with the iTunes store and Google with the Android store for a start; and neither of those parties want to open up their platforms to apps from just anywhere...

1
0
bazza
Silver badge

@Thomas Wolf

Almost right.

A lot of ARM devices implement Jazelle, which is ARM's Java byte code execution engine sitting alongside the ARM instruction set. In essence you can execute Java byte code natively alongside ARM instructions. There's an ARM instruction that switches the core into a state in which what it fetches from memory is executed as Java byte code; it's as seamless as that.

All of a sudden Java doesn't seem so stupid on the mobile platform, does it? Though I don't know if any of the Java ME environments out there or Android's Dalvik use it.

1
0
bazza
Silver badge

@Def

"The world+dog is moving away from languages like C and C++ for a reason."

Not entirely correct. Those who really want the ultimate in performance are using them in a big way. Many datacentre people are wondering if C/C++ are a better bet than PHP, etc. from the point of view of electricity bills. And a surprisingly large fraction of the HPC community are still using Fortran. Almost all OSes are in C / C++ one way or another. Big applications like database engines, CAD packages, CFD modellers, etc. are not written in Javascript.

0
0
bazza
Silver badge

@John Miles 1

Careful - you'll be turning JavaScript into MatLab, and you reeeeeeeeeeeeeeeally don't want to do that if you want high performance!

Other languages have done just that. Motorola extended C (and hence C++) on PowerPC with new types like "vector float" and "vector int". If you wanted to add four floating point values to another set of four values then it is a simple operation along the lines of ans_vec = vec_add(vec1,vec2), guaranteed to complete in a single clock cycle. A very good way to easily get stunning performance out of quite slow clock rate PowerPCs (equivalent to a x4 on the clock rate if you were really good).
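
For the curious, it looks something like this (a minimal sketch; vec_add, vec_st and the vector float type are the AltiVec C extensions, and you need a PowerPC compiler with AltiVec enabled, e.g. gcc -maltivec):

    /* Minimal AltiVec sketch: add four floats to four floats with one
       SIMD instruction. Build on PowerPC with: gcc -maltivec vec.c */
    #include <altivec.h>
    #include <stdio.h>

    int main(void)
    {
        vector float vec1 = { 1.0f, 2.0f, 3.0f, 4.0f };
        vector float vec2 = { 10.0f, 20.0f, 30.0f, 40.0f };

        vector float ans_vec = vec_add(vec1, vec2); /* four adds at once */

        float out[4] __attribute__((aligned(16)));
        vec_st(ans_vec, 0, out); /* store the result vector to memory */
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }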

I think that deeeeeep down in the Intel C compilers there's a very similar idea hidden away from view but still accessible if you go looking. Intel seem much more focused on providing complete libraries of useful routines that hopefully mean you as the programmer don't have to get that low level. But the low level stuff is still there somewhere.

0
0
bazza
Silver badge

@DrXym

>>>>

Even assuming the sandbox is secure, the fact that the code is processor dependent makes it a really dumb idea.

<<<<

Well, except that currently that's what you have to do to get ultimate performance. Until either x86 (eek!) or ARM or PowerPC or SPARC or MIPS (all much nicer) achieve a complete world wide instruction set monopoly we're stuck with that. And if ultimate performance isn't needed then you'd probably use Java, JavaScript, etc.

In effect Google are trying to provide a completely standardised API for native app developers on all platforms so that apps are write-once-compile-many-debug-once. History has shown that such things tend to fall to the lowest common denominator, which is a sure fire way of not being able to exploit the maximum potential of any given hardware platform, which rather defeats the whole point. I'd be surprised if they could make that outperform a really well written .NET or Java (dare I say even Javascript? On second thoughts, meh) virtual machine *and* keep it truly platform independent.

>>>>

With PNaCl, Emscripten wouldn't be necessary and apps could benefit from near native execution speeds"

<<<<

Yes, but PNaCl on top of LLVM gets away from the main thrust of NaCl which is to be purely native on the client. If PNaCl becomes their main effort then really they're just trying to compete with any other VM based cross platform ecosystem like .NET, Java, etc. Why bother doing that when they're years behind all of those?

0
0
bazza
Silver badge

@DZ-Jay

"Very good and insightful post."

Thank you very much :-)

>>>>

I believe that was exactly Blizzard's point, especially considering that not every single native application is a "properly written, decently compiled piece of native code."

<<<<

Well, maybe. Blizzard is right in that a piece of JavaScript can be run better by having a better interpreter as well as the developer actually improving the source code's own efficiency. But I suspect that Blizzard is being rather optimistic if he thinks that an interpreter can make JavaScript better than ordinary native code.

For example, imagine that they were to develop a Javascript engine that automatically spots opportunities for parallelisation in the source code. Fantastic!!! That can all be vectored up, maybe executed on a GPU if it's big enough to warrant it, amazing!

However, all those tricks will also exist in the native application world too. Many already do (loop unrolling, etc.; the native C/C++ compiler writers have been trying pretty hard over recent years, especially Intel). All you have to do is set the right compiler switches to tell it to do what it can, et voila, a faster application is produced. And ATI and Nvidia are trying very hard to make useful APIs (OpenCL and CUDA) available to developers to simplify the task of doing really big number crunching.
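
To illustrate the 'right compiler switches' point (exact flags vary by compiler; this is gcc's spelling of them): a plain loop like the one below gets auto-vectorised at high optimisation levels without any source changes at all:

    /* A plain loop that modern compilers will auto-vectorise, e.g. with
       gcc -O3 -march=native (add -fopt-info-vec to see the report). */
    void saxpy(int n, float a, const float *x, float *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i]; /* compiled down to SIMD instructions */
    }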

So there's nothing special about JavaScript that means that there are some magical optimisations that can automatically be applied that couldn't also be applied to C, C++ or indeed any other language. And if they are applied to a native application at compile time that's likely always to be better than suffering the overhead of an interpreter. Conceivably one might run the interpreter in a separate thread on a different CPU core to get round this. But that is consuming a core's runtime which might otherwise be dedicated to executing application code.

Similarly I think Google are crazy if they think they can successfully and usefully abstract all the fancy high performance computing APIs that are currently available to native application developers. For instance, will they make NVidia's CUDA or ATI's OpenCL available as a standard part of the NaCl environment? If not then already they're way behind the curve. It will likely always be the case that as APIs for high performance come along (like CUDA and OpenCL) NaCl will always be playing catch up, won't be able to support them on all platforms, or will just not bother.

The only way to achieve better performance on given hardware than is achievable through compilers / interpreters spotting the obvious or off-loading this 'n' that to a GPU is to have explicit parallelisation in the original source code. This has traditionally been perceived as very difficult, so most people, and indeed almost the entire computing industry, have tried to avoid tackling this head on.

There is some progress though. Scala (for those who've not heard of it, a JVM language that interoperates closely with Java) is a language that brings the old fashioned Communicating Sequential Processes paradigm from 1978 (!!!) back to life. This simplifies the business of developing applications that are inherently parallel. It takes a big shift in how one goes about designing a computer program, but trust me it's worth it. This is (currently) a much better starting point than trying to get a compiler or interpreter to work it out for itself. Likewise OpenCL and the like are making it easier to exploit the mass parallelisation available in a GPU.
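
To give a flavour of the CSP style for anyone who's not met it, here's a minimal sketch in C rather than Scala (the channel is hand-rolled here; CSP languages and libraries give you it built in): two threads that share no mutable state and communicate only by passing messages:

    /* CSP in miniature: producer and consumer threads communicating only
       through a blocking one-slot channel. Build with: cc csp.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int value;
        int full; /* 1 while a value is waiting to be received */
    } channel_t;

    static void chan_send(channel_t *c, int v)
    {
        pthread_mutex_lock(&c->lock);
        while (c->full)
            pthread_cond_wait(&c->cond, &c->lock); /* wait for a free slot */
        c->value = v;
        c->full = 1;
        pthread_cond_broadcast(&c->cond);
        pthread_mutex_unlock(&c->lock);
    }

    static int chan_recv(channel_t *c)
    {
        pthread_mutex_lock(&c->lock);
        while (!c->full)
            pthread_cond_wait(&c->cond, &c->lock); /* wait for a value */
        int v = c->value;
        c->full = 0;
        pthread_cond_broadcast(&c->cond);
        pthread_mutex_unlock(&c->lock);
        return v;
    }

    static channel_t chan = { PTHREAD_MUTEX_INITIALIZER,
                              PTHREAD_COND_INITIALIZER, 0, 0 };

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= 5; i++)
            chan_send(&chan, i * i);
        chan_send(&chan, -1); /* end-of-stream marker */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        for (int v; (v = chan_recv(&chan)) != -1; )
            printf("received %d\n", v);
        pthread_join(t, NULL);
        return 0;
    }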

0
0
bazza
Silver badge

Sorry, but quite long...

Good article, thank you.

There’s a whole lot of horse shit being spouted all over by the various people quoted in the article. For instance:

"While JavaScript is a fabulous language and it just keeps getting better, there is a lot of great software that isn't written in JavaScript, and there are a lot of software developers that are brilliant, but they would rather work in a different language,"

Entirely wrong. JavaScript is merely an adequate language for certain purposes. Programmers use other languages for sound technical reasons (performance, libraries, etc), not just because they’d rather not use JavaScript. If Brad Chen thinks that all programmers should somehow want to use JavaScript (or maybe some other single language) then he’s starting off on the wrong foot.

And just who is Linus Upson trying to kid:

"One of the key features of the web is that it's safe to click on any link. You can fetch code from some unknown server on the internet,"

So Google have never got stung by a dodgy web link then? There have never been holes in JavaScript interpreters have there?

"Before, when you downloaded a native application, you had to install it and you had to trust it. With Native Client, you can now download native code, just like you download JavaScript and run it, and it's every bit as safe."

That may be true but they're carefully chosen words. "Every bit as safe" doesn't mean perfectly safe.

And how about this little gem:

"You've added this arithmetic to make sure the jumps are in range. But the real issue is that if it's really clever, a program will arrange for a jump to jump past that arithmetic," says Morrisett. "You might protect one jump but not the next one."

So Morrisett is saying that someone might just do a little manual hacking to insert op codes in order to achieve something nefarious? It depends on where the verification is performed. If it’s done on the client as the code is running, then this whole NaCl sandbox idea will fall to the oldest hacking trick in the book. And using x86’s segment registers is mad. In today’s world of virtualisation there are many fine instructions on x86 from Intel and AMD to make strong sandboxing realistic, yet Google are choosing to ignore all that in favour of an archaic monstrosity from the dark ages of computer architecture history?

And Google haven’t done an ARM version yet. Haven’t they seen the mobile revolution happening just down the corridor in the Android department, in Apple’s shack, at Microsoft and literally everywhere else? Not having an ARM version is soon going to look pretty stupid if it doesn’t look stupid already… And isn’t PNaCl just mimicking Microsoft’s .NET and Sun’s Java? Does the world really need another one?

“Chrome will only accept Native Client applications distributed through the Chrome Web Store, and Google will only allow Native Client apps into the store if they're available for both 32-bit x86 and 64-bit x86”

So NaCl won’t be the web then. Users won’t be able to click on any link out there in the Web and get a NaCl app because they’ll have to visit a Google run store? That sounds *very* inconsistent with what Linus Upson was saying earlier.

But hang on, Chris Blizzard is talking junk as well:

“Once you download the native code, there's no opportunity for browser optimizations.”

Err, isn’t that the whole point of native code? Isn’t it supposed to be fully optimised already, no room for improvement without a hardware upgrade? No amount of software re-jigging inside a browser is ever going to make a properly written decently compiled piece of native code run any quicker than it already does?

“A source code–based world means that we can optimize things that the user hasn't even thought of, and we can deliver that into their hands without you, the developer, doing anything.”

Hmmm. I wonder how many web site authors, plug-in developers and the like have spent feverish hours in the middle of the night trying to fix a web site or plug-in to cope with Mozilla changing something YET AGAIN. Hasn't Blizzard heard about the debacle over Firefox version numbers? His statement is correct only if the 'optimisations' don't affect the standard, but Mozilla (and everyone else I guess) hasn't exactly agreed what the standard is nor kept to it:

"What are you going to do about version compatibility? What are you going to do about DLL hell?”

Indeed. What are you going to do about plug in hell?

And this is a real beauty:

“Chen and Upson also point to efforts like the Emscripten project, which seeks to convert LLVM bit code to JavaScript. Even if Native Client isn't available in other browsers, Upson says, this would allow all Native Client applications to span the web.”

So we're going to write in C++. That'll get compiled to LLVM bit code. Ordinarily that would get executed in some sort of VM, just like .NET and Java, in which case I might have chosen to use C# or Java in the first place. But just in case that VM is missing, the LLVM bit code will get re-compiled to JavaScript, which in turn will get interpreted to x86 op codes. IN THE NAME OF HOLY REASON HOW IS THAT SUPPOSED TO BE A GOOD IDEA? Sorry for the shouty shouty, and I'm not religious in any way either, but sometimes things make me snap. It's not April 1st is it? No, good; I thought I'd better check.

Right, enough of the rant. Web apps (Java, JavaScript, whatever) are web apps. Native apps are native apps. They serve different purposes. NaCl is another Google effort to corner more online advertising revenue by means of setting up another app store ecosystem that doesn't actually deliver any tangible benefit to the end user. All this talk of 'trust' doesn't matter two hoots. In both models you have to trust either the app developer or Google. Why is trusting the app developer worse than trusting Google? You could even argue that a Single Point of Trust is worse - just look at the problems we've had when a single CA (DigiNotar) got hacked.

Unless they pull their fingers out very quickly NaCl is going to wither and die as the consumer world transitions wholesale to ARM. This transition is likely going to be driven like we’ve never seen before by Microsoft bringing out Win8/ARM.

On the face of it Linux (well, Android), Apple’s and Microsoft’s propositions are far more sensible (though Oracle might do for Android yet in the law courts). Java and .NET do a decent enough job. Microsoft will also have to do a decent job of making the x86 / ARM choice a non-issue to native developers (and the word is that they’re doing quite well on that front). Apple has made it relatively simple for developers of native apps to target the whole Apple eco system.

Battery life is going to be king for many years to come, and NaCl looks like a very bad way of extending battery life to me, not least because it’s currently stuck in the land of x86. If Win8/ARM machines start issuing forth in large numbers and last whole days on a single charge, who’s going to want a power hungry x86 machine running anything, least of all Chrome and NaCl?

17
1

Linux.com pwned in fresh round of cyber break-ins

bazza
Silver badge

@AC, re: @Captain Scarlet

"Clearly this problem is with configuration/implementation of the security on the Linux systems involved, probably with a little user complacency thrown in for good measure and not a fundamental problem with the quality of Linux."

I'm not sure that it is clear. It is clear that a privilege escalation has occurred, but I wasn't aware that anyone was saying how it had been accomplished. If it is a kernel problem, then like wow, that's a big deal. An unknown kernel bug allowing such escalation is a big worry for any OS, not just Linux. But even if it is just a config problem, what's going on there? Why are they still offline?

2
1

Apple seeks product security boss after iPhone loss

bazza
Silver badge

Or...

Or they could just choose to chill out a bit and be less obsessively secretive. Would that really dent their sales in any measurable way whatsoever?

Anyway, Apple products are pretty predictable - shiny, lacking in some useful buttons and features that everyone else has been doing for years (FM radio, anyone?), pricey, designed to lock you into an ecosystem designed to make yet more money, and occasionally suffering form over function (antennagate?).

5
0

Skype: Microsoft's $8.5 billion identity tool

bazza
Silver badge
Happy

@Dazed and Confused: Seems to do that already?

I've got skype on PCs, phones, etc. They all ring when someone calls me, and when I answer I'm speaking to whoever.

2
0

How Apple's Lion won't let you trash documents

bazza
Silver badge
FAIL

VAX VMS?

I doubt I'll be the first to point this out but here goes anyway. Did I miss Mac OS X being transitioned from FreeBSD to VAX VMS? Are Mac users going to have to get used to typing PURGE?

I reckon that there's a high chance that the less technically experienced users out there are going to get veeeeeeery confused by this. The thought of trying to explain a complicated version control system and when it does what it does and why it does it to my Aunt is not an appealing prospect!

I shall snigger from afar....

2
0

'Satnavs are definitely not doomed', insists TomTom man

bazza
Silver badge

@Alex King

"Contrary to an earlier poster, nobody does (or should) give two hoots about gain, antenna patterns or whatnot. If it works, is easy to use and does the job then that's the point."

Except that if you as a consumer wanted to choose between them based on GPS reception performance, that is the information you need. Without it all you can do is pick one at random.

This forum has many people saying that they've suffered GPS drop outs. When someone is lost in a city with no GPS reception they do care. Shame they didn't think about that when they were buying it in the shop.

But because the industry is effectively silent on the matter there is no commercial pressure. Sure, it works quite a lot of the time but we would all like it to work better.

TomTom certainly used to care - my ancient old TomTom easily out performs any phone I've ever seen in terms of GPS signal reception. When you're driving around the Peripherique and motorways in Paris through all those half obscured almost-tunnels you really don't want a GPS drop out; you will miss your exit! My TomTom hasn't let me down yet, but every phone I've seen packs in at the first hint of overhead obscuration.

0
0
bazza
Silver badge
Unhappy

@mikeyt: crims aren't that bright

Cars used to get broken into if the windscreen had the marks from the sucker on it!

I suspect that when some idiot breaks in to a car these days they're not doing it specifically for the satnav; they're just not fashionable enough.

0
0
bazza
Silver badge

@Mark 63

I like my watch to work when my mobile battery goes flat...

You're right about the mp3 player market, hardly anything decent still on the market. I'm still using an ancient iRiver iHP-120 in the car, still works very well indeed, and the little cable remote control is just perfect - don't need to look at it even. Much better than fumbling around with a crappy touchscreen on a smartphone. Two headphone jacks (surprisingly useful, you can have great fun with a pal in an airport departure lounge listening to rude songs sniggering away without anyone else being able to hear), loads of different codecs, optical line in/out, FM radio (Apple still don't put radios in theirs, do they? Why? I mean, why oh why oh why do the f*****g idiots not just put a damned 5cent fm radio chip in their goddamn shiny toys? How long can a fit of pique over them not thinking of it first go on for?).

Anyway, I digress. Apple's success has really lowered people's expectations of what they think is technically achievable. It's no longer really commercially feasible for other manufacturers to push out superior products because not enough people understand the benefits of the technology anymore. Form is now more important than function.

Isn't it time for the competition authorities to take a serious look at Apple's dominant position before satnav is reduced to nothing more than an eBook atlas?

4
0
bazza
Silver badge

@AC, re TomTom

I've got quite an old TomTom (a One v3 with Euro maps) that I find very useful indeed. Its maps are a little out of date, but not disastrously so. I quite happily go all over Europe and it's not let me down once. On a recent family holiday in rural France I was the only one to make it direct to the remote farmhouse we were staying in with no difficulty at all. It even knew about the driveway. Everyone else with mobiles, newer satnavs that had cheap / partial euro maps, etc. spent hours driving round the countryside lost, either because they couldn't get a mobile signal, or the roads weren't on the map, etc.

I was vaguely thinking of getting a newer one, but from I've read here today I think that I'll stick with the one I've got. I don't want to use a phone either because they're expensive to buy and aren't quite as good (worse GPS in my experience, reliance to some extent on mobile coverage, voice too quiet, stupid things like auto screen blanking that the app can't control, can't make a phone call and navigate at the same time, etc. etc.). If they just made a slightly newer One v3 then I'd buy that.

Why oh why does shiny mediocrity succeed over old fashioned yet effective clunkiness? Do people want to be stylish more than they want to get to their destination with ease? Why would anyone buy a £400 smartphone and use it to navigate and suffer the inevitable compromises when a 4 year old £100 TomTom arguably does a better job?

I suspect that it works this way:

Punter: "Does this smart phone do satnav?"

Sales dude: "Of course"

Punter: "And it is nice and shiny too..."

whereas it should work this way:

Punter: "What's the GPS receiver sensitivity in dBm?"

Sales dude: "Errrr"

Punter "And what's the GPS antenna pattern like?"

Sales dude: "Welllllll"

Punter: "What the peak antenna gain?"

Sales dude: "4?"

Punter: "And what's the average time-to-update for map corrections from the date the road layout changed?"

Sales dude: "blurb blurb"

Punter: "and what's the map resolution? And what's the average time from traffic jam forming to autorecalculation of my route?"

etc. etc.

To make a useful comparison between satnavs, either phone or standalone, this is the sort of data that's actually needed. But none of the companies supplies it. So a level of mediocre performance has become the accepted norm and the general public will use the half baked products in ignorance of the fact that they could be a *lot* better than they currently are. And the trouble with mediocrity is that it has a way of letting you down just when you really, really want the damn thing to work properly.

6
0
bazza
Silver badge

@MrEee

"That's assuming you want to overpay for a preinstalled system that will cost you many times more to fix if it has trouble down the road."

Built in satnav has the potential to be very, very good. It can exploit car data (wheel speed, steering angle, etc) to provide a more reliable position fix than GPS alone. Shame that no one seems to do a good one. So why spend all that money on something that doesn't produce as good a result as something like a TomTom?

I wish there was an effective standard for these things in cars. DIN radio slots aren't the answer; it's not like every car comes with a spare DIN slot just waiting for you to put the upgrade of your choice in, and they're far too big. What would be very nice is a smaller slot that provided all the pertinent car data (wheel speed, steering angle, GPS antenna, etc) in a standardised way. The satnav manufacturers could provide units that would fit any car without having to stick them to the windscreen. Then there would be *real* competition in the satnav market.

1
0
bazza
Silver badge

@Petur

"My TomTom needs 30 seconds (sometimes more) to get a GPS lock, while my n900 has one instantly. Reason: my smartphone can access the internet, and get a good hint on its location for a fast kick-start of the GPS"

Funny, if I keep the satellite almanac data on my TomTom up to date (by plugging it in to the internet via a lappie every now and then like the book says to) it gets a complete lock within a few seconds. And it'll do that anywhere on the world's surface. I'd like to see your n900 achieve that outside of mobile phone coverage. Also a phone's approximate initial lock is OK so long as there aren't two closely spaced roads to pick from...

"Also, the analogy with cameras is completely off: the big difference between a phone camera and a dedicated one is optics and sensorsize. The impact of the GPS antenna size isn't quite as big."

Not sure about that either. I've yet to find any phone with a GPS as sensitive as almost any satnav's. My TomTom gets a GPS signal almost anywhere *inside* my house; phones don't. That gives a more reliable GPS lock in practice, something that's quite important in the urban jungle.

"Since a dedicated GPS receiver and a smartphone share a lot of common components, merging them seems like the obvious step. Goodbye dedicated GPS."

Indeed, and I think that a lot of the recent models of satnavs have 3G in them to get live traffic updates, roadwork information, etc. So there is a lot of hardware commonality between phones and satnavs. But some of the little features of satnavs that are missing from phones (like better sensitivity, no need for cell coverage, live traffic data that automatically alters the route) add up to something that the dedicated road warrior benefits significantly from.

I suspect though that the majority of the market will be phones, so the market costs for dedicated satnavs won't be sustainable and we'll all take a step back in capability like it or not. As for built in satnavs that benefit from speed and steering data direct from the car's own controls (and a lot of them have inertial sensors too), well they're already very expensive.

2
0
bazza
Silver badge

@AC, re: Fair enough

"prefer a simple, single gadget to lugging a bagfull of toys."

I don't know how many people lug their satnav around. The normal place to find a satnav would likely be in the glove box in the car, not the driver's pocket / hand bag.

0
0

Apple's ex-cop and the case of the lost iPhone 5

bazza
Silver badge

@AC, rip off USA

"Also, a monitor that's comparable to the 27" iMac's costs at least $1000 and that's in the US, where electronics are typically cheaper."

Do more Googling. They're about £490 from Hazro (IPS too!). Shows how much of a rip off Apple are, charging about twice that for something no better.

4
0
bazza
Silver badge

@Barry Shitpeas: illegal activity?

Er, aren't you missing the point? Surely even in the good ol' US of A it is illegal to impersonate a police officer? And doing so to gain illegal entry to someone's home and threaten the residents is surely an aggravating factor?

What on earth were the SFPD officers thinking when they agreed to go along with this? With the Apple guys handing out a phone number this was always going to come out.

And it doesn't say much for whatever GPS tracking is in the phone if Apple went to the wrong house.

9
0

Kernel.org Linux repository rooted in hack attack

bazza
Silver badge
Thumb Up

@sabroni: Indeed

Not holding my breath at all. No point really.

0
0
bazza
Silver badge

Always going to be a problem?

The security of a distributed development effort like the Linux kernel is going to be only as strong as the weakest link in the chain. With hundreds of contributing individuals out there on the internet it's always going to be difficult to ensure that they're each as careful / prepared / patched / etc. as everyone else. Humans as individuals aren't very good at being so consistently self-disciplined.

Whereas in an internet-isolated development environment (in which I imagine the likes of Windows are developed) there's a BOFH, rules, corporate oversight, contracts of employment, and no direct internet connection. To attack such a setup means getting a suitably motivated person in on the inside. That's much harder to achieve I should think. It's certainly less convenient for the attacker.

Perhaps the OSS community needs to be a bit more open minded? I don't know for sure but I suspect that all the main servers holding the Linux source are running Linux. A homogeneous collection of servers is much easier to compromise on a large scale than a heterogeneous set. If kernel.org used something else (FreeBSD? Windows even?) as well as Linux to host the source then an attacker's life would be much harder. With reference to the canine world, mongrels are much more resilient than pure-breds. It won't stop some individual developer's personal machine being hacked and leaking passwords, but it does complicate the matter of how to exploit that to attack the servers. Microsoft famously turned to Linux servers when a serious problem emerged with Windows a few years back. Perhaps it's time to return the compliment?

OK, it's not good PR to say that you don't totally trust your own OS, but then we're clearly past that now aren't we? Doesn't this hack underline that? Wouldn't it be quite mature to acknowledge that nothing, not even Linux, is perfect? Surely it's better to provide a more robust offering than maybe being a little bit fanbois-ish about the perfection of one's own creation?

As for 17 days, isn't that a mighty long time to notice that something's wrong on such an important set of servers? Was everyone away on holiday?

1
8

Mac Lion blindly accepts any LDAP password

bazza
Silver badge
Pint

@Dibbley: El Reg, immediate action needed

Come on El Reg, this is a desperate situation. We need to get this hard pressed person an icon with several pints and a stiff whisky to follow. An icon with a single solitary pint is nowhere near enough. This is clearly a dedicated professional with a lot on their plate.

It sounds like you're the only one standing between your CEO / majority stock holder and ruin. Good luck!

2
0

Microsoft unveils file-move changes in Windows 8

bazza
Silver badge
Pint

@Si 1: Real men?

Real hairy-chested, weather-beaten, gruff-talking, wizened old men use XTree Gold, or possibly its very welcome Windows clone ZTree.

1
0

Woman in strop strip for Bermuda airport customs

bazza
Silver badge

Proportionate?

It seems he's spent most of the past 10 years in a Scottish jail for the same reason. Now that doesn't sound very proportionate in comparison to, for example, Al Megrahi, who did a mere 8 years for bombing the Pan Am jumbo and killing 270 people.

2
0

'Devastating' Apache bug leaves servers exposed

bazza
Silver badge
FAIL

@Ru

"I know, isn't it terrible?".

Yes, it is if you're an app developer trying to support many users of many versions of many distributions. And how is an ordinary home user supposed to choose a Linux distro? For a start, which one's best? Which one does everything they need?

Ask yourself why Google chose to do what they did with Android instead of just slapping a mobile friendly shell on top of an existing distribution. Surely that would have been much easier?

"Did you know there is *more than one command line shell*? Worse yet, there is more than one programming language even within the same language family! These things can be compiled for machines with totally different architectures."

Great if you're a sysadmin or developer. Totally and utterly irrelevant bollocks if you're an ordinary desktop user.

1
1
bazza
Silver badge

@Eddie Edwards: Not a joke

Redhat should have got rid of rpm a decade ago. Apt/deb is much, much better, so why do Redhat not use it? Ubuntu got rid of Gnome as their default with not much warning. How many different APIs are there for sound in Linux, and which ones are supported in every single version of every single distribution? CUPS has at least brought some consistency to printing, but it's still weird that a program has to be able to render in PS to print and something else to display the same thing on screen. Mozilla are releasing new versions of Firefox quicker than plugin writers can cope with, and are planning on ditching version numbers as a result. These things might not matter to sysadmin type people, but they do matter to developers aiming at ordinary desktop users.

Whereas MS have said three years in advance that XP will cease to be supported in 2014. Apple gave massive warning of the cessation of Carbon. Older versions of Office are still updated, but there's a definite end of life. In short the knife gets wielded every now and then, and the planning is often quite considerate of users' needs. 'Better' does not mean quicker.

1
2
bazza
Silver badge
FAIL

@Ian McNee

"Yeah, those servers, they don't matter much, no point them being secure and reliable, it's not like they deal with anything important like financial transactions over the internet...hey...wait a minute..."

So you support my point then? Sysadmins can and do cope with Linux's fragmentation, and Linux has met with success in the server room. Even I cope with Linux's fragmentation on a daily basis, and it's infuriating. Linux doesn't succeed on the desktop because you still have to be something of a sysadmin to run it on a desktop. For example, do you *really* expect the average desktop user to know how to install an rpm packaged piece of software on an Ubuntu box, or to know what to do with a tarball? Get real. If the Linux world wants to succeed in the desktop arena, it's going to have to sort that kind of problem out.

And as for servers (Linux or otherwise) being secure and reliable, it seems that if they've been running Apache these last 4 years they've been anything but. They've all been sat there just waiting for someone to send them some dodgy HTTP requests, and it's only luck that no one did. How many sysadmins have spent the last 4 years telling their bosses that their important Apache servers doing important financial transactions on the internet are secure, protected against denial of service attacks, etc, etc?

"Actual studies based on what happens in the real world show that bugs & vulnerabilities in OSS are fixed significantly faster than in proprietary code. End of."

Given the magnitude and timescales of this particular problem in Apache, perhaps those studies' findings should be revised? I mean, MS have had their share of problems, but to be in a situation where vast swathes of the internet could have been brought to their knees with a few only slightly dodgy HTTP requests, without the need even for a DDoS attack, is pretty spectacular.

2
1
bazza
Silver badge

@Solomon Grundy: Pretty good job?

"The OSS guys do a pretty darn good job of producing some pretty great software for free."

Monetarily free is nice for the rest of us. But just how good a job have they done in this particular instance, if a reported problem with huge consequences for a very large fraction of the internet was left unfixed for many, many years?

The OSS people do get some soft benefits in return for their work - high reputation, consultancies, etc. This incident is a good example of how such benefits are just as vulnerable to bad news as cash flow is for a large company.

"The OSS community overall demonstrates project management skills that almost any big company should like to emulate"

I would dispute that. The glaring counter-example to your statement is the world of Linux. I think that the handling of Linux (not just the kernel, I mean the whole shebang) by the OSS community has been terrible, really, on an absolute scale.

They do project management well in the sense that a bunch of guys decide to do something, and a result is delivered with enthusiasm over a reasonable timescale. The part of project management that is definitely not being done at all is deciding whether the work was necessary in the first place, or deciding (with global agreement) that the new thing will wholly replace something old.

Take a look round the world of Linux. Fragmentation abounds as far as the eyes can see. There are umpteen different distributions, a variety of different desktops, different package management systems, etc. etc. FFS how on earth can a choice of software package management systems be a good thing? Ok, someone once decided that an improvement was needed, but why would anyone keep using the old one?

I would say that at best Linux is a hodge podge of competing ideas that has met some success in certain areas (servers) where this doesn't matter too much. But in the desktop arena it's in a terrible mess, and it's no wonder that most of the world's desktops and laptops are Windows or OS X. Clearly to the average user (and thus to app developers too) consistency really does matter. Linux has gained some popularity in the mobile sector only because a big organisation (Google with Android) has come along and imposed its ideas on a global scale.

I think that the proprietary world is much better at wielding the knife to cut out old stuff and sticking with just one or two ways in which things are done. That's because it's expensive otherwise, and bad for sales. The same pressure is not being felt by the OSS community.

"Every software project has tons of bugs and decisions have to be made whether to work on improving functionality or fixing rarely encountered issues."

Clearly in this case no effective triage system was in place for assessing the criticality of issues. If there had been this would have been fixed many years ago.

6
16
bazza
Silver badge
FAIL

Not a good day for open source

One of the key advantages of open source is supposed to be that anyone can fix a bug and in all likelihood someone will do so quickly.

It seems that open source communities can be as lazy as closed software companies after all. I suspect that this has happened because Apache has had a reputation for rock solid reliability for so long that people assumed there couldn't be any bugs worth fixing. Clearly not the case.

So how many other severe bugs are there lurking in the source code?

6
13

Nervous Samsung seeks Android Plan F. Or G, H ....

bazza
Silver badge

Wheels coming off the Android band wagon?

Or is a wheel bearing just beginning to squeak?

It's clear now that patent wars are more or less the most powerful commercial tool these companies have. Apple are indeed doing quite well on this front, despite the fact that there seems to be little about an iSomething that is obviously novel and without precedent. So even if Samsung do go down some other path, what's to guarantee that the end result will be fireproof from a patents point of view?

The whole patent system, principally in the US, is clearly the major issue here. It must surely be pushing up costs, and that gets passed on to the consumer. Will the US political system ever work that out?

If the patent situation in the US gets much worse it could result in non-US companies abandoning the US market, even though it is large. After all, there are 5.8 billion people elsewhere. The result would be an unintended policy of isolationism, and that really won't be any good at all for the US population.

Why does Apple sue Samsung anyway? Doesn't Samsung manufacture Apple's ARM processors?

1
1

Dish eyes 4G LTE wireless network

bazza
Silver badge

Insanity?

"LightSquared's plan was clearly insane..."

So when does insanity become genius? *If* LightSquared do get to deploy a national terrestrial service then they will have got themselves some prime bandwidth without having to pay top dollar.

It should also be a lesson to all spectrum users. Just because the adjacent frequency bands are apparently clear doesn't mean that you can assume they always will be.

0
0

Jesus Phone gives Sprint redemption 'this October'

bazza
Silver badge

"guess when the iPhone rumor broke?"

12:00pm?

1
0

HP: webOS will still run PCs and printers

bazza
Silver badge

@Asgard: Symbian had other problems

I agree that Nokia have been a poor custodian of Symbian ever since they got hold of it (EPOC32) from Psion. Some things written a long time ago by ex-Psion people are quite clear on that point.

However, it's well known that Symbian is a difficult OS to develop native apps for, far harder than OSes that have come from mainstream mains-powered hardware. The reasons for the difficulty are clear; achieving ultimate performance on a battery powered device mandated a way of doing things at odds with the normal programming paradigms we all learnt when young. This really showed through in final products. Even today Symbian phones generally have very good battery life in comparison to iOS or Android driven machines.

I reckon that Nokia were never able to assemble a large enough team of programmers who *really* knew Symbian. In essence they could not put enough development into it to allow it to compete on bling, user interfaces, etc. as well as on the purely technical matters of battery consumption and RAM requirements. I suspect that the reason why they didn't have a big enough pool of the right sort of programmer was money; acquiring programmers / developers with such rare skills is expensive in salary and/or training. Maybe if they had got it right straight away there would now be a much bigger pool of programmers, but they didn't, so there isn't.

But in a way Symbian is beyond rescuing. Even if Nokia could salvage the mess and turn out a decent user experience, there's almost no point anymore. People are now completely used to having to charge up their fondleslab once a day or more. And people want to download apps, and those apps aren't going to be native Symbian apps. It's too hard and time consuming to be worthwhile for the average mobile phone eye candy app developer. So they will have to be written in something hideous like Javascript, and bang go all those carefully crafted power saving design features.

In a way it's a bad sign for the whole computing industry. A fundamental requirement of portable devices is a long on-time, even if we've gotten used to having to charge up once or twice a day. Given the poor rate of improvement in batteries this really means less power consumption, which is something that the rest of the computing world would like too. So far the truly successful means of achieving this have been:

1) better chips

2) that's it.

So far software has not really played a significantly successful role in reducing power consumption, and arguably the modern trendy things like Javascript have made it worse. Yet Symbian shows that if you do get the software right you can make significant improvements without having to do anything at all to chip or battery design. Are we as an industry just too lazy to pursue that 'free' performance boost?

0
0

Here lies /^v.+b$/i

bazza
Silver badge

Iain M Banks?

c:\>restore.exe a: c:\*.*

0
0

iPhone 5 to include Japanese earthquake warning system

bazza
Silver badge
Thumb Up

@Joseph Haig

Ach, dammit, you got there first!

0
1

Oracle's Sparc T4 chip: Will you pay Larry's premium?

bazza
Silver badge

@Paul 77, @Chemist

@Paul 77

Why would you do that when you could just access a server across a network? Back when I first started you just accessed some server from an X terminal. Alright, you'd probably have to use a Linux PC instead of an X terminal these days, but otherwise nothing's changed.

@ Chemist

You are most likely correct. But by Linux I suspect you really mean Linux on x86/x64. Nothing wrong with that per se, but there can be very good technical reasons why x86 might not fit the bill. Not every academician (or developer for that matter) is best off hosting their work on x86, and it's their hard luck if they don't look around to see what else is available. As Kebabbert points out, Sparc T3 is very good for crypto. That might be handy if you're hosting a large website that has https only access. Similarly anyone doing large amounts of decimal (*not* floating point) math really needs to take a look at POWER, which is why IBM do quite well in the banking / financial services sector.
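
To see why 'decimal, not floating point' matters to the money men, a quick illustration (nothing POWER-specific here; it just shows the problem that hardware decimal units solve):

    /* Why binary floating point and money don't mix: 0.01 has no exact
       binary representation, so repeated addition drifts away from the
       true total. */
    #include <stdio.h>

    int main(void)
    {
        double total = 0.0;
        for (int i = 0; i < 1000; i++)
            total += 0.01; /* add one cent, a thousand times */
        printf("double total:  %.17f (should be exactly 10)\n", total);

        long cents = 0; /* the exact, integer workaround */
        for (int i = 0; i < 1000; i++)
            cents += 1;
        printf("integer total: %ld.%02ld\n", cents / 100, cents % 100);
        return 0;
    }

Decimal arithmetic (in hardware, or via C's _Decimal64 type where the compiler supports it) gives you exact results without resorting to integer tricks, which is what POWER's decimal unit accelerates.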

0
0
