992 posts • joined Thursday 15th March 2007 16:58 GMT
"the sound quality from my little DAB box in the kitchen is phenomenal"
YMMV! My experience is limited to one or two dual DAB/FM radios of decent quality and FM was better in both cases.
OK, I know this will depend on your reception area, etc, but there is no compelling argument for DAB in my mind, and I would hardly call tuning a difficult task! I like things to run for a long time on battery power, and see no need to replace a few older radios yet - just more landfill to be disposed of.
So is dabs/BT direct broken?
I have been waiting 10 weeks for a refund on a failed PSU from them.
For similar problems with the local PC store 3000rpm you get it swapped there and then (if in stock), so why can't big organisations actually manage things in any reasonable time?
dabs/BT also used Yodel to collect the PSU; that alone took 4 weeks until they actually came, but I don't know quite who was to blame :(
Indeed, it is doubtful that it would ever be more than an adjunct to conventional warfare (e.g. taking down defence or support systems as part of an attack, etc), but it probably will kill someone at some point.
Just now it looks attractive as a terrorist tactic (state or group), partly because people are so easily scared that you don't have to achieve much to cause panic, but also because of the ease of covering up just who was behind it.
But hopefully we will see the bosses of key infrastructure being led to gaol for criminal negligence for having the likes of an unpatched Windows box running buggy software linked to the Internet which made it all possible.
Not because malicious damage was possible, it always was and will be, but because they did what only a moron would in terms of known security practice and made it easy to do remotely.
Can it EVER be any good?
I wonder - you have the risk of a compromised PC, quite possibly via a VM-like rootkit so it is virtually undetectable by AV or anything running in the OS, and then you try adding another OS-level bit of software that is somehow going to stop the keyboard/mouse/monitor being recorded and sent to a 3rd party?
Just how is that supposed to work?
As pointed out elsewhere, what is needed is a "2nd path" of information that is much harder to guess, such as the RSA key (assuming the morons learn and don't keep the keys to everyone's kingdom in the one place) or a mobile phone (unfortunately assuming said PC-monkey won't just install a Trojan on it as well).
Can we have a 'snake oil' icon please?
It might help if I could spell "Stuxnet" but I imagine you know what I meant.
"P.S. What does the "windows user" icon represent?"
I think it is a down-and-out with a can of cheap lager, but maybe I know nothing and am simply talking through the wrong orifice.
Paris - choice of orifice...
OK, this sounds like a dumb question, but here goes to all of El Reg's readers who actively manage these Windows-based SCADA systems:
Why have these systems:
(A) not been patched to remove the compromised certificates and known vulnerabilities that suntex used?
(B) used on networks where odd traffic to unknown IP addresses is not throwing up warning bells left, right and centre?
Sadly this is what it comes down to: cost & convenience versus security. And guess what is the usual winner? Maybe if your boss's pension were put on the line if it gets hacked, it might look different...
I suspect it is not beyond belief that dedicated encryption hardware could be deployed so you have a secure VPN that only terminates in another dedicated local machine without general internet connectivity at either end?
Maybe less secure than an air gap, but better than having a general computer (and probably a Windows PC) with internet access.
Once upon a time (8.10 to be precise) I tried Ubuntu and liked it, it was Linux that was easy to set up and use, and most things worked fairly intuitively. Friends & family used to XP would have no problems I thought, and indeed they did not.
And what happened?
They spent a lot of time dicking around with the GUI for no real benefit, while failing to fix packages that were important, such as Nagios (still broken for 10.04 LTS on daylight saving changes, a YEAR after it was reported and had already been fixed by the developers), Rhythmbox (stopped syncing to MusicBrainz even though the changes were known about and discussed 2 YEARS ago), the automounter broken with NIS due to an unpredictable start-up sequence with Plymouth, etc.
Is the world so full of short attention-span people that an ever-changing desktop (and thus demanding help/training to all non-geek users) is more important than making the damned thing work?
Why do Unity? Indeed, why did they waste time on GNOME 3? No one is shipping a tablet with Ubuntu on it, and realistically no one will (Android is the choice for all who are not Apple or MS fans).
In my view it has simply pissed off a lot of users and serves to illustrate at least one reason why it never will be the year of the Linux desktop. Work put into stuff that is simply visual fluff, and not into making things 'just work'.
Trying to get the mushroom cloud back in that shiny ball of plutonium?
@I'm begiining to wonder
The lesson is not a new one - keep your secrets off any internet-connected machines. Have two networks, one private for all important stuff, one public-facing for customer related activities.
Old school physical entry or compromised staff are still ways of getting raided, but you no longer rely on the integrity of a billion lines of code written partly by low-cost code monkeys and peddled by vendors who are market focused (e.g. add features to sell new versions, rather than fixing problems).
OK, this won't happen due to cost and convenience issues, but it's not exactly rocket science to avoid internet attack vectors.
@AC 22:25 GMT
You don't get it, do you? It is not the existence of a pr0n filter, should *I* choose it; it is the mentality that 'gov knows best', and once such a system is in place, there will be function creep.
"Do you let kids into Pubs, Strip Clubs, Bookies, etc. etc?"
No I would not, and that is acting responsibly.
Delegating parental supervision & education about the world in general, and the Internet in particular, to a gov-mandated filter is NOT acting responsibly.
"It's the sort of thing that ages ago we used to call friends and family"
Somehow, that statement alone is sadder than the whole intrusive nature of the insurance business they are getting in to.
The internet is for adults, not children. Why can't they look after their own kids and leave the rest of the world alone? It's called parenting, look it up.
And what is next on these religious/political moralists' agenda? No work on Sundays? Nothing the gov deems to be harmful to public morals? Inconvenient for their business buddies?
I had hoped we had seen the last of the Nanny State for a while, but sadly political 'leaders' (media whores, more like) just can't resist the temptation to meddle in people's private lives.
I agree with you!
Except about Tesla: he essentially invented polyphase power distribution using AC, and the impact of that, in terms of providing us with the power grid and cheap electricity (due to low losses), is hard to overstate.
And now the end is here
And so I face the final shutdown
My friend I'll say it clear
I fondled my slab, of which I'm certain
I've lived a life that's full
I travelled each and every information highway
And more, much more than this
I did it my way
"Recent versions of Firefox, prior to the 7.0 release, were memory hogs that had a tendency to crash all on their own"
You mean they have actually and *finally* fixed the memory leak/bloat that has seen our browsers gobble 8GB+ of memory?
"All those software patents creating havoc in the western world"
Sorry, I think you mean "havoc in the USA" as most of them are not valid in Europe due to the differences in what is and is not patentable. India is also quite competent to decide for itself if software can or cannot be patented, and hopefully will show greater sense than the USA in this area.
Sadly, maybe not for long before we in Europe have that time-wasting burden forced upon us.
We will have to wait until the analysis comes out to find the truth behind this fiasco. However, my suspicion is that one of the developers' home PCs was rooted, either due to carelessness or from some package in use (or development) that was flawed. Once rooted, the hacker had a 'free pass' into the kernel development machines, etc, due to that developer's trust level.
Why has this not happened to MS & Apple in such a spectacular manner?
Probably because they don't allow anyone outside of their corporate network to access any of the development machines. When you think about it, keeping a globally accessible system safe is SIGNIFICANTLY harder to do.
"Spotify made its users' private listening data public, at the same time as making Facebook membership mandatory for new signups"
I have an old Spotify 'free' account I have not used in a while, but if they are going to make FB part of it, then it's time for a single-use email address and a fake FB ID.
Exactly, there is NO EXCUSE at all for a browser plug-in or document reader to run as anything other than a user-privileged program, so causing an OS crash should be all but impossible.
Oh silly me, this is Adobe & IE...
@There's more to worry about....
I still use w2k for some things because it works well enough and I don't want to pay for changes that bring no direct improvement to me.
Of course, it runs in a VM now so I don't need to worry about hardware drivers, nor do I use it for email/web browsing/etc, so security is much less of a headache than when it was new & supported...
Tux, my friend.
@AC 19:15 GMT
"unless your willing to slay *every other* IE6 app we can't upgrade every desktop to any other browser."
Can't you provide a standard environment with *two* browsers?
IE6 for the crappily written stuff.
Something else for everything else?
Maybe even a software firewall so IE6 can only connect to local IP addresses to improve security if anything can reach outside. Though given its lubed-up nature in Windows that may be difficult...
That might be part of the reason, as if you can verify the boot loader, it can then verify the rest of the system* and so stop hacks that check for invalid activation keys, etc.
I don't care about MS screwing its users for non-licensed software; if you want Windows then pay for it. What I do care about is such a system being abused to prevent alternative OSes from running.
Unfortunately if you can bypass the boot check, then you can also bypass all other DRM/license protection steps (given the time to hack the OS components). If MS are only doing this to stop root kits, fine, but I can't see it being very useful (in this context) and open at the same time.
* time-dependent of course, how long to check the signatures of a multi-GB OS installation?
Key holder matters
The issue is not the 'secure' boot by verifying the OS, that on its own is good for everyone (Linux, MS, Apple, etc) as it allows protection against pre-boot root kits.
The issue is who decides what can boot.
If the UEFI loader just stops and tells me this has changed, and do I want to accept the new signature, that is fine for me and nothing is lost but I have gained control over unexpected changes to my boot loader. Maybe have a UEFI password so only admin can change it (like current BIOS offer for boot sequence, etc).
Of course, it then makes the whole "security" push rather pointless because, as we all know, asking the (l)user if they want something or not is a recipe for disaster when it comes to security.
Even so, if you can root the OS while running, then you could flash the UEFI firmware to disable this before loading the pre-boot root kit. Also, how long until the keys are compromised, as for DVD/Blu-ray/HDCP? It helps of course, but short of a physical switch to disable motherboard updates, it is only a bit harder for the bad guys.
So maybe a mandatory configurable option in the UEFI menu to enable/ask on change/disable would be OK. But on MS' past behaviour I have serious worries about the openness of it all.
"And I saw that I was alone. Let there be light."
What is the point?
Really, what is the point of changing to DAB? Is it any 'better' than FM in a manner that counts to the end user? Let's see:
More channels? Yes, initially, but most were crap and a lot have dropped out now.
Better sound? No, most are on low bit rate (cheaper, see?) and crap.
Interference/multipath protection? Partly, but no use if the signal is below threshold anyway.
Ease of use? Not really, tuning an FM radio is hardly challenging, and short battery life for DAB is a serious loss of 'ease of use'.
Ah, the FM spectrum worth a bob or two? Maybe, just maybe. But for whom? What service really wants that band, and is it of any use to us, the public?
"Fibre makes more sense now, as does going entirely mobile and ditching a landline network."
Er, no, going wireless is of limited capacity because the radio spectrum is limited. Yes, you can get closer with time, but there is a *fundamental limit* to usable channel capacity.
Going fibre is a much, much better idea as long as the oinks realise it's not copper and stop digging it up. Of course you don't have power along a fibre (of any real amount), but for most of us having to power our home modem end would be no issue if it gave us gigabit speeds with negligible contention.
p.s. +1 for those who point out a lot of the cable is not solid copper.
What I am suggesting is a new/change to the law so all new products, irrespective of their type and T&C, must be "adequately secure" and maintained that way free of charge by the supplier for 5 years after sale. Otherwise the supplier bears all costs of failure.
It would get rid of the T&C you refer to and make sure that suppliers of ANY goods such as a car, TV, phone, laptop, etc, are all bound to the same standard for dealing with security fixes in a 'reasonable timescale'.
After all, it's not that hard to do: you start with a decent design that has security as a core part of the requirements, then keep the design team (or part of them) fixing things as they come up, and have the systems in place to allow patches to be deployed automatically to the consumers.
Hell, even MS, the original master of incompetent security, now mostly manages that (though not always the 1 month fix time, unless its made public and they *have to* speed things along).
That is perfectly within reason for a consumer protection law, and ideally it would be an EU-wide one. Just what is wrong with that suggestion?
Time for liability?
There really should be a consumer protection law that would punish suppliers who fail to fix vulnerabilities in a reasonable time scale, for, say, 5 years after the product officially goes 'end of life' for purchase.
Something like liability for all damages, irrespective of the license T&C, if they fail to patch within 1 month of disclosure perhaps?
I'm not just talking about Android, the "new windows" of security, but for ALL software and hardware. And no wiggle room.
Yes it would cost a little, but it would also focus suppliers on releasing decent designs, and not a "ship the crap then forget" model that seems to be today's norm.
If you keep the SI second, which would be an issue for anything based on current time and frequency, then your definitions produce a 'day' that is not a cycle of light/dark by our Sun, which is how that was originally defined.
Similarly a 'year' is also based on the Earth's cyclic seasons.
For those of us who live on this planet, those are meaningful concepts. Of course, on other planets such as Mars the Earth day is not so useful, but dealing with that issue is a LONG way off for humanity.
Science already has gone through the pains of CGS, and then MKS, unit revisions to make things more logical. You can bet the issue of time/date has been looked at by a lot of very smart minds and no compelling reason made to change when the benefits (simpler arithmetic) are weighed up against the costs of change.
@Richard 12 (part deux)
Sorry for misreading, but you are right that there is no protocol I know of specifically intended to push out TZ changes. Normally the TZ rules are fixed for long-ish periods and they are pushed out as a set of values with OS patches (today I got some for my desktop covering Russia's DST rules, etc).
On an open source UNIX-like OS (Linux, BSD, Solaris, etc) it should be easy enough to implement something to replace the static TZ rules with a dynamic set pushed from a central point for instant system-wide consistency (sort of "at 12:00 UTC change to TZ = +5 hours", computed from position & bearing and sent in advance, so all devices roll at the same point). Even if the commercial justification is limited to your own business case.
Of course, you might have too much legacy stuff (specifically, outside of your control) to make that viable.
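A toy sketch of that idea in Python (all names here are hypothetical - there is no such standard protocol): distribute a schedule of "at UTC time T, switch to offset X" rules in advance, and have every device evaluate it against the same UTC clock so the whole fleet rolls at the same instant.

```python
from dataclasses import dataclass

@dataclass
class TzChange:
    effective_utc: float  # UNIX time (UTC) at which the change takes effect
    new_offset_s: int     # new UTC offset in seconds, e.g. +5 hours = 18000

def current_offset(now_utc: float, schedule: list, default: int = 0) -> int:
    """Return the offset of the most recent change that has taken effect."""
    offset = default
    for change in sorted(schedule, key=lambda c: c.effective_utc):
        if change.effective_utc <= now_utc:
            offset = change.new_offset_s
    return offset

# The schedule can be pushed out well in advance; since every device applies
# it against UTC, there is no window where two machines disagree.
schedule = [TzChange(effective_utc=1_000_000.0, new_offset_s=18_000)]
print(current_offset(999_999.0, schedule))    # 0 - before the change
print(current_offset(1_000_001.0, schedule))  # 18000 - after the change
```

Because all internal time stays UTC (as on any UNIX), only the display offset ever changes, so nothing time-dependent breaks mid-roll.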
@There is a much bigger problem
The short answer is use UNIX.
It keeps time internally as UTC (time_t variable, etc) and has a timezone value that can be changed as you see fit WITHOUT breaking anything, as all time calculations are based on UTC. Unless you are Apple and make an alarm clock feature that is...
Of course, for a moving system you need to know the timezone for your location. GPS gives time and location so it could be mapped to find the global zone you are in and thus update the TZ setting.
I don't know if all applications recognise TZ updates after starting, but I imagine the normal libraries will notice a system-wide change, so it might not be a complete answer out of the box.
Time keeping on DOS/Windows has been spectacularly crap, but Windows now attempts to follow the same model. Except for some stupid cases where file systems keep local time (FAT32, some CIFS implementations, etc), or coders have not understood how to do it right, etc.
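A minimal demonstration of the UNIX model described above (Unix-only, since `time.tzset()` does not exist on Windows): the stored epoch value is UTC and never changes; the TZ setting only changes how that same instant is rendered.

```python
import os
import time

t = 1_000_000_000  # one fixed instant: 2001-09-09 01:46:40 UTC

os.environ["TZ"] = "UTC"
time.tzset()
utc_view = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(t))

os.environ["TZ"] = "America/New_York"
time.tzset()
ny_view = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(t))

# Same time_t, two renderings - no internal timestamps were harmed:
print(utc_view)  # 2001-09-09 01:46:40
print(ny_view)   # 2001-09-08 21:46:40 (EDT, four hours behind)
```

This is exactly why changing the TZ setting on the fly (say, from a GPS-derived location) is safe: every calculation based on the underlying UTC value is untouched.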
@you've missed the point...
"Who says we should have to put up with the SI second?"
Well I would say just about everyone in the world with a clock or other time or frequency-keeping system marked or based on the SI unit.
(a) Keep the SI second, etc, and accept that things don't add up nicely so occasional corrections are needed, or
(b) Change the SI unit, break every time/frequency system and demand a re-write of all science and engineering text books to conform to the new system. Which STILL will not be correct as the Earth's rotation is randomly variable.
The argument is?
It is an interesting point. Usually the two reasons are:
(1) You need to evaluate elapsed time across the discontinuity (or propagate a model's predictions) and can't deal with 1 sec error.
(2) Your code has time-wasting loops or other sequence-sensitive parts that are based on a clock update that is ASSUMED to be monotonic. This was one of the original reasons for NTP adjusting time by rate compensation to avoid backwards clock steps.
In Google's case it is more likely to be a database-type issue, which begs the question of why they did not use a GPS-like time base so it is accurate and consistent across such discontinuities.
Even then, using time for DB ordering is questionable - why not just a transaction counter? If you have lots of transactions per second and rely on this for ordering on a distributed system, then the time delay/sync accuracy across the network will eventually be a limit on consistency.
Not that Google docs is consistent anyway... :(
Yes, proper option is for Google to fix their own software as it should be easy enough to cope with a 1 sec jump.
As for decimalising time, well Napoleon tried it and eventually gave up. The second is now fixed as an SI unit, so you have to choose from the prime factors of 86400 for your day's sub-divisions, and that limits what you can do.
Also what about the ~365.25 days per year? Ain't no way that can be rationalised base-10 while keeping a calendar that is in sync with the seasons.
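A quick check of the arithmetic behind that claim: the day is 86400 SI seconds, whose prime factorisation is 2^7 × 3^3 × 5^2, so only subdivisions built from those factors come out even.

```python
# The day is 24*60*60 = 86400 SI seconds = 2**7 * 3**3 * 5**2.
assert 24 * 60 * 60 == 86400
assert 86400 == 2**7 * 3**3 * 5**2

# Any even subdivision of the day must divide 86400 exactly:
for parts in (10, 100, 1000, 100_000):
    print(parts, 86400 % parts == 0)
# 10 and 100 divide evenly (2*5 and 2**2 * 5**2), but 1000 needs 5**3 and
# 100000 needs 5**5 - more fives than 86400 has, so no fine decimal split.
```

So you could have 10 or 100 "decimal hours" per day of SI seconds, but not 1000, let alone the 100,000 decimal seconds of the French revolutionary scheme, without abandoning the SI second itself.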
@A curious solution
NTP works fine with this because it was designed by folk who know what they are doing.
The underlying problem is that computers (or more precisely their clocks and calendar libraries) *assume* time is always 24*60*60 seconds per day, so the time_t of UNIX (or the corresponding underlying linear system in Windows) has to be stepped by 1 sec when such an adjustment to UTC is made, and so calculations across the jump are wrong.
But the Earth day is not exactly this, and we have always had the convention that mid-day (for 0 longitude, etc) is mean solar crossing, so *something* has to be fudged.
It has been proposed not to correct UTC to be 'right' in an astronomical sense to get round this, because of the accumulated stupidity of computers. But why? Fix the computer's clocks or just get over it. No big deal, eh?
"but as computer systems have become more complex, having a rogue extra second can cause a lot of trouble."
No, a more accurate version is:
"as we become more time-dependent on computers, the fact programmers did not understand (or chose to ignore) the official time standards for the last 30 years becomes more apparent"
There are plenty of ways of dealing with this if it matters, either by working with an atomic time scale (as GPS uses, they have a UTC-GPS offset that steps so the underlying time is linear, not that different from the UNIX time-zone implementation) or by coding and testing stuff so an occasional jump of +/-1 second is no big deal.
Google's work-around is a reasonable band-aid to this, and it would make sense for NTP to have this as an option, however, it also needs to report the proper time for others to sync to so you don't get lied to by their machines.
At least, no more lied to than usual...
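The "smear" work-around mentioned above can be sketched roughly as follows (a hedged illustration, not Google's actual code): rather than a 1-second step at the leap, the served time is slewed across a smear window so clients never see a discontinuity.

```python
# Minimal leap-smear sketch: spread the inserted second linearly over a
# window before the leap instant, so reported time stays monotonic.
def smeared_correction(t: float, leap_utc: float, window: float = 86400.0) -> float:
    """Fraction of the leap second already absorbed at naive clock time t."""
    if t <= leap_utc - window:
        return 0.0   # smear not started yet
    if t >= leap_utc:
        return 1.0   # the whole extra second has been absorbed
    return (t - (leap_utc - window)) / window

def smeared_time(t: float, leap_utc: float, window: float = 86400.0) -> float:
    """The time a smearing server would report for naive clock time t."""
    return t - smeared_correction(t, leap_utc, window)

leap = 1_000_000.0
print(smeared_correction(leap - 86400.0, leap))  # 0.0 - window just opening
print(smeared_correction(leap - 43200.0, leap))  # 0.5 - halfway through
print(smeared_correction(leap, leap))            # 1.0 - fully absorbed
```

The trade-off is exactly the one noted above: during the window the server is deliberately reporting time that is up to a second off UTC, which is fine within one estate but a lie to anyone outside syncing to it.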
There are two regions of stability: one is close to either star (so the 2nd star just makes modest wobbles) and the other is far, far away, where both stars' gravity appears as one primary pull to the planet. In between the trajectory is chaotic.
In both cases the orbit is 'stable', but nothing like the mildly perturbed ellipses that our planets enjoy.
Cunning Lister's Ingenious Test Of Rocket In Space?
Camera / lights?
I was thinking of the need for a window or internal fast-ish CCD camera and floodlight, bright enough so the camera is not temporarily overcome by LOHAN going off, so you capture the ignition moment. After all, once she blows the camera or window's life is going to be short.
@I have no doubt at all that astronauts walked on the moon
Honestly, you must be piss-poor with a camera!
Firstly, they had adapted Hasselblad medium format cameras, and they were capable of astonishing quality.
Secondly, I remember as a 12 year old getting a Christmas present of a FED-IV camera (a Russian copy of the Leica), and with low-ASA B&W film developed at home I got amazing resolution out of an all-manual range-finder camera. Really, it was not until I got a D300s over 3 decades later that I felt a digital camera was worthy of replacing my moist process photos (and a lot of that reason is due to convenience).
@And only this morning
"Still - bound to catch an IE user out"
You mean those who turned from photosynthesising to reading their email?
@John Smith 19
"No limitations on form factor, weight, materials etc."
Err, no, that is not acceptable. The battery has to be "safe" in the event of a crash (at least no more of a hazard than a tank of fuel) and from materials that are not too toxic to use, and that are not in such a short supply that the cost of £68/kWh can still be met when the global demand is in the 10s of millions per year (and some country with the only viable deposits decides to enjoy the profits a bit more).
Furthermore, the costs of implementing a charging grid need to be considered, both the practicalities of charging stations and the infrastructure to deliver enough power. Hell, just now we are looking at brown-outs in the near future due to lack of capacity WITHOUT adding the demands of motoring.
Realistically, transport costs are going to rise a LOT, one way or another, and nations should be looking at how to plan employment and distribution to avoid this.
@I wouldn't count on it...
(1) Macs (and of course Linux) have far, far, less malware out there, as Windows has something like 99.95% of everything and a production rate of around 5k per day[*]. Hence AV that relies predominantly on daily signature updates still leaves a significant exposure.
BUT on the other side of the equation you have:
(2) The fanbois who fail to see that small != zero and no matter what you use it is still going to be vulnerable, either by implementation flaw or Trojan.
(3) An apparent attitude problem of Apple to ignore or de-prioritise security issues that arise, more so the apparent lack of interest in enterprise support.
I suspect that moving from Windows to Mac would make security better overall, but ONLY if you apply (and maintain) good IT policies. Seeing it as an excuse to cut IT support and let users have admin rights is going to be a massive FAIL in my humble opinion.
[*] Based on the GData report covered here http://www.theregister.co.uk/2010/09/13/malware_threat_lanscape/ and assuming the 1M new Windows viruses are produced at an even rate over the 1st half of 2010.