1375 posts • joined 15 Mar 2007
"Recent versions of Firefox, prior to the 7.0 release, were memory hogs that had a tendency to crash all on their own"
You mean they have actually and *finally* fixed the memory leak/bloat that has seen our browsers gobble 8GB+ of memory?
"All those software patents creating havoc in the western world"
Sorry, I think you mean "havoc in the USA" as most of them are not valid in Europe due to the differences in what is and is not patentable. India is also quite competent to decide for itself if software can or cannot be patented, and hopefully will show greater sense than the USA in this area.
Sadly, maybe not for long before we in Europe have that time-wasting burden forced upon us.
We will have to wait until the analysis comes out to find the truth behind this fiasco. However, my suspicion is that one of the developers' home PCs was rooted, either through carelessness or via some flawed package in use (or development). Once rooted, the hacker had a 'free pass' into the kernel development machines, etc, thanks to that developer's trust level.
Why has this not happened to MS & Apple in such a spectacular manner?
Probably because they don't allow anyone outside of their corporate network to access any of the development machines. When you think about it, keeping a globally accessible system safe is SIGNIFICANTLY harder to do.
"Spotify made its users' private listening data public, at the same time as making Facebook membership mandatory for new signups"
I have an old Spotify 'free' account I have not used in a while, but if they are going to make FB part of it, then it's time for a single-use email address and a fake FB ID.
Exactly, there is NO EXCUSE at all for a browser plug-in or document reader to run as anything other than a user-privileged program, so causing an OS crash should be all but impossible.
Oh silly me, this is Adobe & IE...
"the sad bastards who loiter around tech websites"
Said the AC who posted on a tech website...
@There's more to worry about....
I still use w2k for some things because it works well enough and I don't want to pay for changes that bring no direct improvement to me.
Of course, it runs in a VM now so I don't need to worry about hardware drivers, nor do I use it for email/web browsing/etc, so security is much less of a headache than when it was new & supported...
Tux, my friend.
@AC 19:15 GMT
"unless your willing to slay *every other* IE6 app we can't upgrade every desktop to any other browser."
Can't you provide a standard environment with *two* browsers?
IE6 for the crappily written stuff.
Something else for everything else?
Maybe even a software firewall so IE6 can only connect to local IP addresses to improve security if anything can reach outside. Though given its lubed-up nature in Windows that may be difficult...
Most of the whipper-snappers on this site won't understand that pun!
Mine is the one with a pair of EL34s in the pocket.
That might be part of the reason, as if you can verify the boot loader, it can then verify the rest of the system* and so stop hacks that check for invalid activation keys, etc.
I don't care about MS screwing its users over non-licensed software; if you want Windows then pay for it. What I do care about is such a system being abused to prevent alternative OSes from running.
Unfortunately if you can bypass the boot check, then you can also bypass all other DRM/license protection steps (given the time to hack the OS components). If MS are only doing this to stop root kits, fine, but I can't see it being very useful (in this context) and open at the same time.
* time-dependent of course, how long to check the signatures of a multi-GB OS installation?
Key holder matters
The issue is not the 'secure' boot by verifying the OS, that on its own is good for everyone (Linux, MS, Apple, etc) as it allows protection against pre-boot root kits.
The issue is who decides what can boot.
If the UEFI loader just stops and tells me this has changed, and do I want to accept the new signature, that is fine for me and nothing is lost but I have gained control over unexpected changes to my boot loader. Maybe have a UEFI password so only admin can change it (like current BIOS offer for boot sequence, etc).
Of course, it then makes the whole "security" push rather pointless because, as we all know, asking the (l)user if they want something or not is a recipe for disaster when it comes to security.
Even so, if you can root the OS while it is running, then you could flash the UEFI firmware to disable this before loading the pre-boot root kit. Also, how long until the keys are compromised, as for DVD/Blu-ray/HDCP? It helps of course, but short of a physical switch to disable motherboard updates, it only makes things a bit harder for the bad guys.
So maybe a mandatory configurable option in the UEFI menu to enable/ask-on-change/disable would be OK. But given MS's past behaviour I have serious worries about the openness of it all.
"And I saw that I was alone. Let there be light."
What is the point?
Really, what is the point of changing to DAB? Is it any 'better' than FM in a manner that counts to the end user, lets see:
More channels? Yes, initially, but most were crap and a lot have dropped out now.
Better sound? No, most are on low bit rate (cheaper, see?) and crap.
Interference/multipath protection? Partly, but no use if the signal is below threshold anyway.
Ease of use? Not really, tuning an FM radio is hardly challenging, and short battery life for DAB is a serious loss of 'ease of use'.
Ah, FM spectrum worth a bob or two? Maybe, just maybe. But for whom? What service really wants that band, and is it of any use to us, the public?
"Fibre makes more sense now, as does going entirely mobile and ditching a landline network."
Er, no, going wireless is of limited capacity because the radio spectrum is limited. Yes, you can get closer with time, but there is a *fundamental limit* to usable channel capacity.
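That fundamental limit is the Shannon-Hartley theorem: capacity grows linearly with bandwidth but only logarithmically with signal power, so finite spectrum really is the hard constraint. A quick sketch (the channel figures here are arbitrary examples, not any operator's real numbers):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the maximum error-free data rate of a
    channel with the given bandwidth and signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 20 MHz radio channel at 30 dB SNR (SNR = 1000 in linear terms).
capacity = shannon_capacity_bps(20e6, 1000)
print(f"{capacity / 1e6:.0f} Mbit/s")  # ~199 Mbit/s
```

Doubling the SNR buys you only one extra bit per hertz, which is why "more spectrum" is the argument everyone ends up having.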
Going fibre is a much, much better idea, as long as the oinks realise it's not copper and stop digging it up. Of course you don't get power along a fibre (of any real amount), but for most of us having to power our home modem end would be no issue if it gave us gigabit speeds with negligible contention.
p.s. +1 for those who point out a lot of the cable is not solid copper.
What I am suggesting is a new/change to the law so all new products, irrespective of their type and T&C, must be "adequately secure" and maintained that way free of charge by the supplier for 5 years after sale. Otherwise the supplier bears all costs of failure.
It would get rid of the T&C you refer to and make sure that suppliers of ANY goods such as a car, TV, phone, laptop, etc, are all bound to the same standard for dealing with security fixes in a 'reasonable timescale'.
After all, it's not that hard to do: you start with a decent design that has security as a core requirement, then keep the design team (or part of it) fixing things as they come up, and have systems in place to deploy patches automatically to consumers.
Hell, even MS, the original master of incompetent security, now mostly manages that (though not always the 1 month fix time, unless its made public and they *have to* speed things along).
That is perfectly within reason for a consumer protection law, and ideally it would be an EU-wide one. Just what is wrong with that suggestion?
Time for liability?
There really should be a consumer protection law that punishes suppliers who fail to fix vulnerabilities in a reasonable time scale, for, say, 5 years after a product's official "end of life".
Something like liability for all damages, irrespective of the license T&C, if they fail to patch within 1 month of disclosure perhaps?
I'm not just talking about Android, the "new windows" of security, but for ALL software and hardware. And no wiggle room.
Yes it would cost a little, but it would also focus suppliers on releasing decent designs, and not a "ship the crap then forget" model that seems to be today's norm.
If you keep the SI second, which would be an issue for anything based on current time and frequency, then your definitions produce a 'day' that is not a cycle of light/dark by our Sun, which is how that was originally defined.
Similarly a 'year' is also based on the Earth's cyclic seasons.
For those of us who live on this planet, those are meaningful concepts. Of course, on other planets such as Mars the Earth day is not so useful, but dealing with that issue is a LONG way off for humanity.
Science has already gone through the pains of CGS, and then MKS, unit revisions to make things more logical. You can bet the issue of time/date has been looked at by a lot of very smart minds, and no compelling case has been made to change once the benefits (simpler arithmetic) are weighed against the costs of change.
@Richard 12 (part deux)
Sorry for misreading, but you are right: there is no protocol I know of specifically intended to push out TZ changes. Normally the TZ rules are fixed for long-ish periods and pushed out as a set of values with OS patches (today I got some for my desktop covering Russia's DST rules, etc).
On an open-source UNIX-like OS (Linux, BSD, Solaris, etc) it should be easy enough to implement something that replaces the static TZ rules with a dynamic set, to centralise time for instant system-wide consistency (sort of "at 12:00 UTC change to TZ = +5 hours", computed from position & bearing and sent in advance, so all devices roll at the same point). Even if the commercial justification is limited to your own business case.
Of course, you might have too much legacy stuff (specifically, outside of your control) to make that viable.
@There is a much bigger problem
The short answer is use UNIX.
It keeps time internally as UTC (time_t variable, etc) and has a timezone value that can be changed as you see fit WITHOUT breaking anything, as all time calculations are based on UTC. Unless you are Apple and make an alarm clock feature that is...
Of course, for a moving system you need to know the timezone for your location. GPS gives time and location so it could be mapped to find the global zone you are in and thus update the TZ setting.
I don't know if all applications recognise TZ updates after starting, but I imagine the normal libraries will notice a system-wide change, so it might not be a complete answer out of the box.
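The UNIX model described above fits in a few lines. This is a POSIX-only sketch (`time.tzset()` does not exist on Windows): the stored instant never changes, only its rendering.

```python
import os
import time

utc_now = time.time()  # internal time is always seconds since the epoch, in UTC

# Re-rendering the same instant under two zones changes only the display,
# never the stored value -- which is why a TZ update breaks nothing.
for zone in ("UTC", "America/New_York"):
    os.environ["TZ"] = zone
    time.tzset()  # POSIX-only: makes localtime() pick up the new TZ
    print(zone, time.strftime("%Y-%m-%d %H:%M", time.localtime(utc_now)))
```

A GPS-driven daemon could do exactly this: map position to a zone name, set TZ, and let every UTC-based calculation carry on untouched.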
Time keeping on DOS/Windows has been spectacularly crap, but now attempts to follow the same model. Except for some stupid cases where file systems keep local time (FAT32, some CIFS implementations, etc), or coders have not understood how to do it right, etc.
@you've missed the point...
"Who says we should have to put up with the SI second?"
Well I would say just about everyone in the world with a clock or other time or frequency-keeping system marked or based on the SI unit.
(a) Keep the SI second, etc, and accept that things don't add up nicely so occasional corrections are needed, or
(b) Change the SI unit, break every time/frequency system and demand a re-write of all science and engineering text books to conform to the new system. Which STILL will not be correct as the Earth's rotation is randomly variable.
The argument is?
It is an interesting point. Usually the two reasons are:
(1) You need to evaluate elapsed time across the discontinuity (or propagate a model's predictions) and can't tolerate a 1 sec error.
(2) Your code has time-wasting loops or other sequence-sensitive parts based on a clock update that is ASSUMED to be monotonic. This was one of the original reasons NTP adjusts time by rate compensation, to avoid stepping the clock backwards.
In Google's case it is more likely to be a database-type issue, which begs the question of why they did not use a GPS-like time base that is accurate and consistent across such discontinuities.
Even then, time for DB ordering is questionable: why not just a transaction counter? If you have lots of transactions per second and rely on time for ordering on a distributed system, then the time delay/sync accuracy across the network will eventually limit consistency.
Not that Google docs is consistent anyway... :(
Yes, proper option is for Google to fix their own software as it should be easy enough to cope with a 1 sec jump.
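A monotonic counter of the sort suggested above (a per-transaction sequence number rather than a timestamp) might look like this sketch; all names are illustrative:

```python
import itertools
import threading

class TransactionCounter:
    """A monotonic, thread-safe transaction counter: orders events
    without depending on wall-clock time, so a leap second (or any
    clock step) can never reorder the log. Illustrative sketch only."""

    def __init__(self) -> None:
        self._counter = itertools.count(1)
        self._lock = threading.Lock()

    def next_id(self) -> int:
        with self._lock:
            return next(self._counter)

seq = TransactionCounter()
ids = [seq.next_id() for _ in range(3)]
print(ids)  # [1, 2, 3] -- strictly increasing regardless of the clock
```

On a single node this is trivial; across a distributed system you hit exactly the sync-accuracy limits mentioned above, which is where schemes like Lamport clocks come in.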
As for decimalising time, well, revolutionary France tried it and eventually gave up. The second is now fixed as an SI unit, so you have to choose from the divisors of 86400 for your day's sub-divisions, and that limits what you can do.
Also what about the ~365.25 days per year? Ain't no way that can be rationalised base-10 while keeping a calendar that is in sync with the seasons.
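The constraint is easy to check by brute force: any equal subdivision of the day must be a divisor of 86400 = 2^7 x 3^3 x 5^2. A small sketch:

```python
def divisors(n: int) -> list[int]:
    """All whole-number ways of splitting n seconds into equal units."""
    return [d for d in range(1, n + 1) if n % d == 0]

day = 24 * 60 * 60                 # 86400 = 2**7 * 3**3 * 5**2
print(len(divisors(day)))          # 96 possible equal divisions of the day
# 10 divides 86400 (8640-second 'decimal hours' would work), and so does
# 100, but 1000 does not -- so a fully base-10 clock can't keep the
# SI second.
```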
@A curious solution
NTP works fine with this because it was designed by folk who know what they are doing.
The underlying problem is that computers (or more precisely their clocks and calendar libraries) *assume* time is always 24*60*60 seconds per day, so the time_t of UNIX (or the corresponding underlying linear system in Windows) has to be stepped by 1 sec when such an adjustment to UTC is made, and so calculations across the jump are wrong.
But the Earth day is not exactly this, and we have always had the convention that mid-day (for 0 longitude, etc) is mean solar crossing, so *something* has to be fudged.
It has been proposed not to correct UTC to be 'right' in an astronomical sense to get round this, because of the accumulated stupidity of computers. But why? Fix the computer's clocks or just get over it. No big deal, eh?
"but as computer systems have become more complex, having a rogue extra second can cause a lot of trouble."
No, more accurate version is:
"as we become more time-dependent on computers, the fact programmers did not understand (or chose to ignore) the official time standards for the last 30 years becomes more apparent"
There are plenty of ways of dealing with this if it matters, either by working with an atomic time scale (as GPS uses, they have a UTC-GPS offset that steps so the underlying time is linear, not that different from the UNIX time-zone implementation) or by coding and testing stuff so an occasional jump of +/-1 second is no big deal.
Google's work-around is a reasonable band-aid, and it would make sense for NTP to offer it as an option; however, such a server also needs to report the proper time for others to sync to, so you don't get lied to by their machines.
At least, no more lied to than usual...
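For what it's worth, a linear smear of the sort Google described can be sketched in a few lines. The 24-hour window and the function name here are my own illustration, not Google's actual implementation:

```python
def smeared_offset(t: float, leap_at: float, window: float = 86400.0) -> float:
    """Fraction of the leap second already applied at time t, when the
    full extra second is smeared linearly over `window` seconds ending
    at `leap_at`. Before the window: 0; after the leap: the full second."""
    if t <= leap_at - window:
        return 0.0
    if t >= leap_at:
        return 1.0
    return (t - (leap_at - window)) / window

# Halfway through the smear window, clocks run 0.5 s behind true UTC,
# but no client ever sees a backwards step or a 61-second minute.
print(smeared_offset(43200.0, 86400.0))  # 0.5
```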
Figures don't look good - but you fail to mention how its cost stacks up against the Anobit units. Is that not a BIG omission in a review?
There are two regions of stability: one close to either star (so the 2nd star just adds modest wobbles), and the other far, far away, where both stars' gravity appears to the planet as a single primary pull. In between, the trajectory is chaotic.
In both cases the orbit is 'stable', but nothing like the mildly perturbed ellipses that our planets enjoy.
Cunning Lister's Ingenious Test Of Rocket In Space?
Camera / lights?
I was thinking of the need for a window or an internal fast-ish CCD camera and a floodlight bright enough that the camera is not temporarily overwhelmed by LOHAN going off, so you capture the ignition moment. After all, once she blows, the camera's or window's life is going to be short.
@I have no doubt at all that astronauts walked on the moon
Honestly, you must be piss-poor with a camera!
Firstly, they had adapted Hasselblad medium format cameras, which were capable of astonishing quality.
Secondly, I remember as a 12 year old getting a Christmas present of a FED-IV camera (a Russian copy of the Leica), and with low-ASA B&W film developed at home I got amazing resolution out of an all-manual range-finder camera. Really, it was not until I got a D300s over 3 decades later that I felt a digital camera was worthy of replacing my moist-process photos (and a lot of that is down to convenience).
@And only this morning
"Still - bound to catch an IE user out"
You mean those who turned from photosynthesising to reading their email?
@John Smith 19
"No limitations on form factor, weight, materials etc."
Err, no, that is not acceptable. The battery has to be "safe" in the event of a crash (at least no more of a hazard than a tank of fuel), made from materials that are not too toxic to use, and not in such short supply that the £68/kWh cost can no longer be met once global demand reaches tens of millions of units per year (and some country with the only viable deposits decides to enjoy the profits a bit more).
Furthermore, the cost of implementing a charging grid needs to be considered, both the practicalities of charging stations and the infrastructure to deliver enough power. Hell, just now we are looking at brown-outs in the near future due to lack of capacity WITHOUT adding the demands of motoring.
Realistically, transport costs are going to rise a LOT, one way or another, and nations should be looking at how to plan employment and distribution to avoid this.
@I wouldn't count on it...
(1) Macs (and of course Linux) have far, far less malware out there, as Windows accounts for something like 99.95% of everything, with a production rate of around 5k new samples per day[*]. Hence AV that relies predominantly on daily signature updates still leaves a significant exposure.
BUT on the other side of the equation you have:
(2) The fanbois who fail to see that small != zero and no matter what you use it is still going to be vulnerable, either by implementation flaw or Trojan.
(3) An apparent attitude problem of Apple to ignore or de-prioritise security issues that arise, more so the apparent lack of interest in enterprise support.
I suspect that moving from Windows to Mac would make security better overall, but ONLY if you apply (and maintain) good IT policies. Seeing it as an excuse to cut IT support and let users have admin rights is going to be a massive FAIL in my humble opinion.
[*] Based on the GData report covered here http://www.theregister.co.uk/2010/09/13/malware_threat_lanscape/ and assuming the 1M new Windows viruses are produced at an even rate over the 1st half of 2010.
"Apple could make a decent fist of doing so"
I presume this should have read 'first'?
Or are we talking iron fist here?
So long and thanks for the fish
Have some Friday enjoyment and good luck for the future.
The problem is?
What is the big deal?
Funny how we somehow managed before mobile phones by using wires, an almost unlimited resource (subject to putting cables in) compared to the finite bandwidth of radio.
@OMG, this is bad news
Bad example, as I suspect most would notice someone physically entering their body to tamper with the unit.
Maybe "Ford denies FM radio can cause steering lock up" would be more appropriate?
Hmm, now what was the story about the Ford Pinto again....
"who would seriously consider a heavily discounted fondleslab with no future?"
If the price is right, which this is, I would. For idle web browsing or as a 'toy' for any visiting children it makes enough sense to get one.
^ my usual reaction to visiting children.
All sold now according to Dixon's web site :(
See HP? Price it so it is 'value for money' and it will sell. Aim for iPad prices without the apps & polish and it won't. Simplez?
Have you looked at the urban dictionary's definition of a growler?
Maybe a room dedicated to that would be interesting!
@the idea behind patents
That was the original goal. I suspect that today it would be easy enough for most products to be reverse-engineered to find out how they work, so it is not such a problem.
The principle of rewarding inventiveness is a good one; what is needed is a major update of a system that has fallen into a disreputable state.
We need shorter times to speed progress and limit the time spent searching for potential infringement. We need changes to stop trolling (where companies wait for infringement but don't actually have any "loss" because they don't make anything), and much faster & cheaper ways of resolving license fees (so the money goes to better products, and not armies of lawyers).
@Re: Short time?
"People change their dishwashers and vacuum cleaners every 2-3 years?"
No, but production lifetimes are commonly shorter than 5 years for a given level of technology. Today it is not uncommon for consumer-focused ICs to have last-time buy notifications after a year or so.
The original reason for patents (OK, the idealised one) was to exchange trade secrecy for the greater public good in return for a limited-time monopoly on an idea, so you could profit from it. You could argue that 5 years from the filing date may not be ideal, but something like "5 years from the first commercial implementation or 10 from filing" would be sufficient opportunity to profit and not a hindrance to progress.
Though I still think the biggest issues are vague/wide patents together with the real problem of an engineer knowing what, out of a few million active patents, actually apply and could become litigation-based blockers for trolls wanting big money (and not a ~0.01% expense for 1 of a few hundred IP-related items involved).
I suspect the "short (20 year) time frame to recoup" is not true in a lot of high technology areas, and not for a company with some backing (which I guess was Dyson's problem?).
Possibly the only case where recouping is a long term issue to justify 20-25 years protection is in pharmaceuticals where the effective cost of bringing a drug to market is staggering, much more than the cost of its initial development, due to the length and costs for clinical trials, etc.
The original 20 years you must remember was set a LONG time ago, where products had lifetimes of 10-20 years or more. I don't think that is justified any more where products go through a generation change in 2-3 years. Certainly for software patents, if they are at all justified, something like 5 years is plenty.
The idea behind patents, of ensuring reward for the inventor and to make R&D profitable compared to copying, is good. The problem comes from how it is applied: in terms of pointlessly wide & vague patents being granted, and their use in 'blocking litigation'.
Maybe in a lot of fields some compulsory licensing arrangement, with returns in proportion to a patent's share of a product's overall IP revenue, would make sense, blowing away most litigation (and the resulting waste of money) and returning success to the 'best product' and not the biggest troll.
Shock! Man talks sense about copyright!
Such a shame the industry and government seem not to have listened to such reasoned arguments; hopefully soon...
if(testicles == 0) ApplySexChange();
@Military is not affected
No, they also use the coarse acquisition band at 1.5GHz (which commercial GPS use) along with another at 1.2GHz for precision correction of ionospheric delay.
The only reason(s) the military are not complaining probably are:
(1) They have very well built equipment designed to be robust against jammers and adjacent transmitters. This requires very tight filters, which in turn demand very high unloaded Q-factor of the resonant elements. Typically that means large and expensive. It also requires other RF parts that are more costly and power-hungry as well (high dynamic range LNA/mixer, low phase-noise local oscillators, etc). Not going to fit in your phone/sat-nav any time soon.
(2) They are likely to be operating far away from most probable users of LightSquared equipment, and in an emergency on USA territory where it was an issue, they would simply disable the network one way or another.
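The dual-frequency correction mentioned above works because first-order ionospheric delay scales as 1/f^2, so two pseudoranges at different frequencies can be combined to cancel it. A sketch with the real GPS L1/L2 frequencies but made-up range numbers:

```python
F_L1 = 1575.42e6  # Hz, coarse-acquisition band (~1.5 GHz)
F_L2 = 1227.60e6  # Hz, precision band (~1.2 GHz)

def iono_free_range(p1: float, p2: float) -> float:
    """Combine L1/L2 pseudoranges (metres) to remove the first-order
    ionospheric delay, which scales as 1/f^2."""
    g = (F_L1 / F_L2) ** 2
    return (g * p1 - p2) / (g - 1)

# True range 20,000 km; the ionosphere adds 10 m on L1 and, because the
# delay goes as 1/f^2, g times as much on the lower-frequency L2.
g = (F_L1 / F_L2) ** 2
p1 = 20_000_000.0 + 10.0
p2 = 20_000_000.0 + 10.0 * g
print(iono_free_range(p1, p2))  # recovers ~20,000,000 m
```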
GPS are in the right
Exactly as Number6 points out, this *was* a satellite band that was fine to co-exist alongside GPS. The idiots at the FCC/gov moved it so LightSquared could make money, without analysing the full implications.
Maybe military GPS receivers are OK, but if so I bet they are fitted with coke-tin-sized filters and use way more power to get the phase noise performance needed to cope with this sort of thing.
Easy to settle though: have LightSquared build a GPS demonstrator that fits in a decent-sized phone at similar power & cost (no more than say 25% above the industry norm) to show how easy it is to do. If they can, they have a case; if not, they should STFU.
"automated toolkits that can scan ... in seconds"
You mean to tell me that the designers/implementers did not test this before they went live, and periodically during on-going support?
Tell me it is not so!
"Given time, we'll learn how to make huge nasty castles to defend our information and resources"
Maybe when a few CEOs get jail time for ignoring the obvious stupidity of putting critical resources on a public communication system without several properly scrutinised mitigation processes to limit damage, and without allowing local manual control/alternate systems to be restored safely by already-trained staff, then we might see the biggest of the threats go away.
Really, it appears that most of the 'critical infrastructure' threats come from using crap-security-by-design systems (typically based on Windows, with all of the existing hacking tools and knowledge to lube things up, and not even using Windows to its best) and then making them remotely accessible to save money.
My rant for today.
We want a survey!
So it was a hoax.
Will someone now do a real survey of something interesting [*] and correlate it to browser & OS usage?
[*] Thinking here of the IQ test variants, and the most excellent stats unearthed and published by the OKCupid blog.