If you keep the SI second, which would be an issue for anything based on current time and frequency, then your definitions produce a 'day' that is not a cycle of light/dark by our Sun, which is how that was originally defined.
Similarly a 'year' is also based on the Earth's cyclic seasons.
For those of us who live on this planet, those are meaningful concepts. Of course, on another planet such as Mars the Earth day is not so useful, but dealing with that issue is a LONG way off for humanity.
Science has already gone through the pains of the CGS and then MKS unit revisions to make things more logical. You can bet the issue of time/date has been looked at by a lot of very smart minds, and no compelling case has been made for change when the benefits (simpler arithmetic) are weighed up against the costs of changing.
@Richard 12 (part deux)
Sorry for misreading, but you are right: there is no protocol I know of specifically intended to push out TZ changes. Normally the TZ rules are fixed for long-ish periods and they are pushed out as a set of values with OS patches (today I got some for my desktop covering Russia's DST rules, etc).
On an open source UNIX-like OS (Linux, BSD, Solaris, etc) it should be easy enough to implement something that replaces the static TZ rules with a dynamic set, centrally managed for instant system-wide consistency (along the lines of "at 12:00 UTC change to TZ = +5 hours", computed from position & bearing and sent in advance, so all devices roll over at the same point). Even if the commercial justification is limited to your own business case.
Of course, you might have too much legacy stuff (specifically, outside of your control) to make that viable.
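For the moving-ship case, a crude first cut is the nautical zone system of 15 degrees of longitude per hour; a toy sketch (my own illustration, not any existing protocol):

#include <math.h>
#include <stdio.h>

/* Map a longitude in degrees (east positive) to the nearest nautical
 * time zone offset in whole hours. Civil zones ashore follow political
 * boundaries, so a proper zone lookup is still needed near land. */
static int utc_offset_hours(double longitude_deg)
{
    return (int)lround(longitude_deg / 15.0); /* 360 deg / 24 h = 15 deg/h */
}

int main(void)
{
    printf("73.9 W  -> UTC%+d\n", utc_offset_hours(-73.9));  /* UTC-5  */
    printf("151.2 E -> UTC%+d\n", utc_offset_hours(151.2));  /* UTC+10 */
    return 0;
}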
@There is a much bigger problem
The short answer is use UNIX.
It keeps time internally as UTC (the time_t variable, etc) and has a timezone value that can be changed as you see fit WITHOUT breaking anything, as all time calculations are based on UTC. Unless you are Apple and make an alarm clock feature, that is...
Of course, for a moving system you need to know the timezone for your location. GPS gives time and location so it could be mapped to find the global zone you are in and thus update the TZ setting.
I don't know if all applications recognise TZ updates after starting, but I imagine the normal libraries will notice a system-wide change, so it might not be a complete answer out of the box.
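A minimal sketch of why changing the zone is safe, assuming a POSIX system with the usual IANA zone names (subject to the caveat above about long-running processes):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL); /* seconds since the epoch, UTC-based */
    char buf[64];

    /* The same time_t rendered in two zones: only the display changes,
     * so switching TZ can never corrupt time arithmetic. */
    setenv("TZ", "America/New_York", 1);
    tzset();
    strftime(buf, sizeof buf, "%F %T %Z", localtime(&now));
    printf("%s\n", buf);

    setenv("TZ", "Asia/Tokyo", 1);
    tzset();
    strftime(buf, sizeof buf, "%F %T %Z", localtime(&now));
    printf("%s\n", buf);
    return 0;
}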
Time keeping on DOS/Windows has been spectacularly crap, but it now attempts to follow the same model, except for some stupid cases where file systems keep local time (FAT32, some CIFS implementations, etc), or where coders have not understood how to do it right.
@you've missed the point...
"Who says we should have to put up with the SI second?"
Well I would say just about everyone in the world with a clock or other time or frequency-keeping system marked or based on the SI unit.
(a) Keep the SI second, etc, and accept that things don't add up nicely so occasional corrections are needed, or
(b) Change the SI unit, break every time/frequency system and demand a re-write of all science and engineering text books to conform to the new system. Which STILL will not be correct as the Earth's rotation is randomly variable.
The argument is?
It is an interesting point. Usually the two reasons are:
(1) You need to evaluate elapsed time across the discontinuity (or propagate a model's predictions) and can't deal with 1 sec error.
(2) Your code has delay loops or other sequence-sensitive parts based on a clock update that is ASSUMED to be monotonic. This was one of the original reasons for NTP adjusting time by rate-compensation, to avoid backwards clock steps.
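For case (2) the usual fix these days is to measure intervals on a clock that is guaranteed never to step; a minimal POSIX sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;

    /* CLOCK_MONOTONIC is unaffected by NTP steps and leap-second jumps,
     * so it is the right clock for timeouts and elapsed-time maths;
     * CLOCK_REALTIME (wall time) is the one that can jump about. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* ... do the work being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}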
In Google's case it is more likely to be a database-type issue, which raises the question of why they did not use a GPS-like time base that is accurate and consistent across such discontinuities.
Even then, using time for DB ordering is questionable: why not just a transaction counter? If you have lots of transactions per second and rely on this for ordering on a distributed system, then the time delay/sync accuracy across the network will eventually be a limit on consistency.
Not that Google docs is consistent anyway... :(
Yes, the proper option is for Google to fix their own software, as it should be easy enough to cope with a 1 sec jump.
As for decimalising time, well, revolutionary France tried it and eventually gave up. The second is now fixed as an SI unit, so you have to choose your day's sub-divisions from the factors of 86400 (= 2^7 x 3^3 x 5^2), and that limits what you can do.
Also, what about the ~365.25 days per year? Ain't no way that can be rationalised to base-10 while keeping a calendar in sync with the seasons.
@A curious solution
NTP works fine with this because it was designed by folk who know what they are doing.
The underlying problem is that computers (or more precisely their clocks and calendar libraries) *assume* a day is always 24*60*60 seconds, so the time_t of UNIX (or the corresponding underlying linear count in Windows) has to be stepped by 1 sec when such an adjustment to UTC is made, and so calculations across the jump are wrong.
But the Earth day is not exactly this, and we have always had the convention that mid-day (at 0 longitude, etc) is the mean solar crossing, so *something* has to be fudged.
It has been proposed to stop correcting UTC to keep it 'right' in an astronomical sense, to get round this accumulated stupidity of computers. But why? Fix the computers' clocks, or just get over it. No big deal, eh?
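You can see the assumption baked right in: POSIX time_t has no slot for a 61st second, so the leap second 23:59:60 simply normalises away. A small demonstration (timegm() is a GNU/BSD extension):

#define _DEFAULT_SOURCE /* for timegm() on glibc */
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* There really was a leap second at 23:59:60 UTC on 2008-12-31,
     * but the POSIX calendar just rolls it over into the new year. */
    struct tm leap = {
        .tm_year = 2008 - 1900, .tm_mon = 11, .tm_mday = 31,
        .tm_hour = 23, .tm_min = 59, .tm_sec = 60,
    };
    time_t t = timegm(&leap); /* interpret the struct tm as UTC */

    char buf[32];
    strftime(buf, sizeof buf, "%F %T", gmtime(&t));
    printf("23:59:60 becomes %s\n", buf); /* 2009-01-01 00:00:00 */
    return 0;
}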
"but as computer systems have become more complex, having a rogue extra second can cause a lot of trouble."
No, a more accurate version is:
"as we become more dependent on computers for time-keeping, the fact that programmers have not understood (or have chosen to ignore) the official time standards for the last 30 years becomes more apparent"
There are plenty of ways of dealing with this if it matters, either by working with an atomic time scale (as GPS does; there is a stepped UTC-GPS offset, so the underlying time is linear, not that different from the UNIX time-zone implementation) or by coding and testing stuff so an occasional jump of +/-1 second is no big deal.
Google's work-around is a reasonable band-aid, and it would make sense for NTP to have it as an option; however, such a server also needs to report the proper time for others to sync to, so you don't get lied to by their machines.
At least, no more lied to than usual...
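For reference, the smearing idea itself is trivial to express; a toy sketch of a linear version (the 10-hour window is my choice for illustration, not Google's exact scheme):

#include <stdio.h>

#define SMEAR_WINDOW 36000.0 /* seconds over which to absorb the leap */

/* Toy linear leap smear: instead of stepping 1 s at midnight, slow the
 * reported clock over a window before the leap so it arrives already
 * 1 s behind the un-stepped timescale. 'linear_time' is seconds on a
 * timescale with no step (TAI-like); 'leap_epoch' is the leap instant. */
double smeared(double linear_time, double leap_epoch)
{
    double dt = leap_epoch - linear_time;       /* time until the leap */
    if (dt >= SMEAR_WINDOW) return linear_time; /* before the window   */
    if (dt <= 0.0) return linear_time - 1.0;    /* after: full second  */
    return linear_time - (SMEAR_WINDOW - dt) / SMEAR_WINDOW;
}

int main(void)
{
    double leap = 1000000.0; /* hypothetical leap instant */
    for (double t = leap - SMEAR_WINDOW; t <= leap; t += 9000.0)
        printf("t=%8.0f  smeared=%12.3f\n", t, smeared(t, leap));
    return 0;
}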
There are two regions of stability: one is close to either star (so the 2nd star just adds modest wobbles) and the other is far, far away, where both stars' gravity appears to the planet as a single primary pull. In between, the trajectory is chaotic.
In both cases the orbit is 'stable', but nothing like the mildly perturbed ellipses that our planets enjoy.
Cunning Lister's Ingenious Test Of Rocket In Space?
Camera / lights?
I was thinking of the need for a window or an internal fast-ish CCD camera and floodlight, bright enough that the camera is not temporarily overwhelmed by LOHAN going off, so you capture the ignition moment. After all, once she blows, the camera or window's life is going to be short.
@I have no doubt at all that astronauts walked on the moon
Honestly, you must be piss-poor with a camera!
Firstly, they had adapted Hasselblad medium format cameras, and those were capable of astonishing quality.
Secondly, I remember as a 12 year old getting a Christmas present of a FED-IV camera (a Russian copy of the Leica), and with low-ASA B&W film developed at home I got amazing resolution out of an all-manual rangefinder camera. Really, it was not until I got a D300s over 3 decades later that I felt a digital camera was worthy of replacing my wet-process photos (and a lot of that is down to convenience).
@And only this morning
"Still - bound to catch an IE user out"
You mean those who turned from photosynthesising to reading their email?
@John Smith 19
"No limitations on form factor, weight, materials etc."
Err, no, that is not acceptable. The battery has to be "safe" in the event of a crash (at least no more of a hazard than a tank of fuel), made from materials that are not too toxic to use, and not in such short supply that the £68/kWh cost can no longer be met once global demand is in the tens of millions per year (and some country with the only viable deposits decides to enjoy the profits a bit more).
Furthermore, the costs of implementing a charging grid need to be considered, both the practicalities of charging stations and the infrastructure to deliver enough power. Hell, right now we are looking at brown-outs in the near future due to lack of capacity WITHOUT adding the demands of motoring.
Realistically, transport costs are going to rise a LOT, one way or another, and nations should be looking at how to plan employment and distribution to avoid this.
@I wouldn't count on it...
(1) Macs (and of course Linux) have far, far less malware out there, as Windows attracts something like 99.95% of everything, at a production rate of around 5k new samples per day[*]. Hence AV that relies predominantly on daily signature updates still leaves a significant exposure.
BUT on the other side of the equation you have:
(2) The fanbois who fail to see that small != zero: no matter what you use, it is still going to be vulnerable, either by implementation flaw or by Trojan.
(3) An apparent attitude problem at Apple of ignoring or de-prioritising security issues as they arise, and more so the apparent lack of interest in enterprise support.
I suspect that moving from Windows to Mac would make security better overall, but ONLY if you apply (and maintain) good IT policies. Seeing it as an excuse to cut IT support and let users have admin rights is going to be a massive FAIL in my humble opinion.
[*] Based on the GData report covered here http://www.theregister.co.uk/2010/09/13/malware_threat_lanscape/ and assuming the 1M new Windows viruses were produced at an even rate over the 1st half of 2010.
The problem is?
What is the big deal?
Funny how we somehow managed before mobile phones by using wires, an almost unlimited resource (subject to putting cables in) compared to the finite bandwidth of radio.
@OMG, this is bad news
Bad example, as I suspect most would notice someone physically entering their body to tamper with the unit.
Maybe "Ford denies FM radio can cause steering lock up" would be more appropriate?
Hmm, now what was the story about the Ford Pinto again....
"who would seriously consider a heavily discounted fondleslab with no future?"
If the price is right, which this is, I would. For idle web browsing or as a 'toy' for any visiting children it makes enough sense to get one.
^ my usual reaction to visiting children.
@the idea behind patents
That was the original goal. I suspect that today most products could easily enough be reverse-engineered to find out how they work, so trade secrecy is not such a problem.
The principle of rewarding inventiveness is a good one; what is needed is a major update of a system that has fallen into a disreputable state.
We need shorter terms, to speed progress and limit the time spent searching for potential infringement. We need changes to stop trolling (where companies wait for infringement but don't actually suffer any "loss" because they don't make anything), and much faster & cheaper ways of resolving licence fees (so the money goes into better products, not armies of lawyers).
Shock! Man talks sense about copyright!
Such a shame the industry and government seem not to have listened to such reasoned arguments; hopefully soon...
@Re: Short time?
"People change their dishwashers and vacuum cleaners every 2-3 years?"
No, but production lifetimes are commonly shorter than 5 years for a given level of technology. Today it is not uncommon for consumer-focused ICs to have last-time buy notifications after a year or so.
The original reason for patents (OK, the idealised one) was to exchange trade secrecy for the greater public good, in return for a limited-time monopoly on an idea so you could profit from it. You could argue that 5 years from the filing date may not be ideal, but something like "5 years from the first commercial implementation or 10 from filing" would be sufficient opportunity to profit and not a hindrance to progress.
Though I still think the biggest issues are vague/wide patents, together with the real problem of an engineer knowing what, out of a few million active patents, actually applies and could become a litigation-based blocker for a troll wanting big money (rather than a ~0.01% expense for one of the few hundred IP-related items involved).
I suspect the "short (20 year) time frame to recoup" is not true in a lot of high technology areas, at least not for a company with some backing (which I guess was Dyson's problem?).
Possibly the only case where recouping is a long term issue to justify 20-25 years protection is in pharmaceuticals where the effective cost of bringing a drug to market is staggering, much more than the cost of its initial development, due to the length and costs for clinical trials, etc.
The original 20 years, remember, was set a LONG time ago, when products had lifetimes of 10-20 years or more. I don't think that is justified any more when products go through a generation change in 2-3 years. Certainly for software patents, if they are justified at all, something like 5 years is plenty.
The idea behind patents, of ensuring reward for the inventor and making R&D profitable compared to copying, is good. The problem comes from how it is applied: pointlessly wide & vague patents being granted, and their use in 'blocking litigation'.
Maybe in a lot of fields some compulsory licensing arrangement, with returns in proportion to a patent's share of the overall IP in a product's revenue, would make sense, blowing away most litigation (and the resulting waste of money) and returning success to the 'best product', not the biggest troll.
if(testicles == 0) ApplySexChange();
@Military is not affected
No, they also use the coarse acquisition band at 1.5GHz (which commercial GPS uses), along with another at 1.2GHz for precision correction of ionospheric delay.
The only reasons the military are not complaining are probably:
(1) They have very well built equipment designed to be robust against jammers and adjacent transmitters. This requires very tight filters, which in turn demand very high unloaded Q-factor in the resonant elements; typically that means large and expensive. It also requires other RF parts that are more costly and power-hungry (high dynamic range LNA/mixer, low phase noise local oscillators, etc). Not going to fit in your phone/sat-nav any time soon.
(2) They are likely to be operating far away from most probable users of LightSquared equipment, and in an emergency on USA territory where it was an issue, they would simply disable the network one way or another.
GPS are in the right
Exactly as Number6 points out, this *was* a satellite band that was fine to co-exist alongside GPS. The idiots at the FCC/gov changed that so LightSquared could make money, without analysing the full implications.
Maybe military GPS receivers are OK, but if so I bet they are fitted with coke-tin sized filters and use way more power to get the necessary phase noise performance to meet this sort of thing.
Easy to settle though: have LightSquared build a GPS demonstrator that can fit in a decent-sized phone with similar power & cost (no more than say 25% above the industry norm) to show how easy it is to do. If they can, they have a case; if not, they should STFU.
"automated toolkits that can scan ... in seconds"
You mean to tell me that the designers/implementers did not test this before they went live, and periodically during on-going support?
Tell me it is not so!
"Given time, we'll learn how to make huge nasty castles to defend our information and resources"
Maybe when a few CEOs get jail-time for ignoring the obvious stupidity of putting critical resources on a public communication system without several properly scrutinised mitigation processes to limit damage, and without allowing local manual control/alternate systems to be restored safely by already-trained staff, then we might see the biggest of the threats go away.
Really, it appears that most of the 'critical infrastructure' threats come from using crap-security-by-design systems (typically based on Windows, with all of the existing hacking tools and knowledge to lube things up, and not even using Windows to its best) and then making them remotely accessible to save money.
My rant for today.
We want a survey!
So it was a hoax.
Will someone now do a real survey of something interesting [*] and correlate it to browser & OS usage?
[*] Thinking here of the IQ test variants, and the most excellent stats unearthed and published by the OKCupid blog.
@micropayments haven't failed
Good point. Maybe if you got the choice of micropayments with no adverts, or an advert-based model, folk would actually pay the odd penny or two per show, which probably works out at more than the advertising rates for a national TV channel.
I doubt it - if you have say a 16-digit car ID and 2 digits of possible commands, you still have space for a lot of authentication code.
Even if you only had 4 alphanumeric characters for authentication, that is over 14 million permutations (62^4 ≈ 14.8M if case-sensitive). If based on a strong underlying system and only taking a limited hash/truncation of the signature, an attacker will be waiting a long time to brute-force it, at roughly 7 million SMS on average.
I suspect it is the usual combination of cryptographic incompetence and lack of formal review/testing/verification of the system before it was implemented and rolled out. Having a not-as-commonly-hacked communication system is no excuse for a lack of proper encryption/authentication.
But as others have pointed out, WTF is this needed for in a car anyway? I prefer a manual switch for the ignition, as I know enough about machines not to trust them.
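Doing it properly is not hard either; a sketch of the sort of scheme I mean, using OpenSSL's HMAC with the tag truncated to fit alongside the message in an SMS (the message layout and sizes here are made up for illustration):

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h> /* link with -lcrypto */

/* Tag "car_id:command:counter" with a truncated HMAC-SHA256. Even a
 * 6-byte (48-bit) tag makes blind SMS forgery hopeless, and the
 * counter stops replay of captured messages. */
static int make_tag(const unsigned char *key, int keylen,
                    const char *msg, unsigned char tag[6])
{
    unsigned char md[32];
    unsigned int mdlen = sizeof md;
    if (!HMAC(EVP_sha256(), key, keylen,
              (const unsigned char *)msg, strlen(msg), md, &mdlen))
        return -1;
    memcpy(tag, md, 6); /* a truncated HMAC is still a sound MAC */
    return 0;
}

int main(void)
{
    const unsigned char key[] = "per-car secret provisioned at build";
    unsigned char tag[6];
    make_tag(key, sizeof key - 1, "4242424242424242:UNLOCK:000017", tag);
    for (int i = 0; i < 6; i++) printf("%02x", tag[i]);
    printf("\n");
    return 0;
}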
@Just what I was thinking!
A cunning suggestion, but I'm not sure the paper bag round a bottle of Buckfast and unkempt hair would go down well with your usual Apple clientèle.
@A polite response
As pointed out, until some proactive ass-kicking is done, nothing will change. At the very least the CEO/board should be held personally responsible for any lack of due diligence in critical systems; only then will things that cost money get done.
While true air-gap protection may not be possible, at the very least they can have a properly separated/fire-walled red/blue style network and *serious* penalties for anyone plugging in non-approved equipment to the secure one. Like a big fine (contractor) and/or immediate dismissal (employee).
Finally there is the whole 'old Windows is fine' mentality, which is so incredibly dumb, and in particular the even more piss-poor security from vendors (like Siemens; I mean, hard-coded root passwords that break the system if changed, WTF!).
Something like compulsory insurance, with a premium based on the evaluated security of the system, might just make people choose and/or replace things that are woefully insecure, because suddenly there is a very real on-going cost, rather than an 'oh, if it happens we will react' mentality.
Yes, but really how likely is it?
Number of internet-based machines potentially able to reach you = billions.
Cost of software based attack = very small.
Chance of internet-based attacker being caught = negligible (using infected PCs, foreign jurisdiction, etc).
Number of attackers with physical access = small.
Cost of hardware based attack = modest.
Chance of physical attacker being caught = significant (CCTV, fingerprints, etc).
So for most folk who don't have anything of interest to the security services or heavy weight industrial competitors, it is not a big deal. If you do, then times are interesting...
It is a basic flaw in the design of FireWire that allows a device to become, in effect, bus master and go anywhere in memory by virtue of DMA. It was covered some years ago on El Reg.
Things like ASLR make it slightly harder, and how much caching of passwords your OS performs alters the ease of attack, but essentially it applies to any OS on any machine with FireWire that an attacker has 'local' access to.
Having said that, a number of years ago when developing a USB device for XP, not only did I blue-screen the OS with just my dongle (using MS' own USB stack, etc) but I also succeeded in wiping the HDD's MBR and rendering the machine unbootable! Had I been interested and skilled in things black-hat, what more could I have achieved?
So malicious access is not restricted to badly thought through (from a security perspective) peripheral hardware :(
In general, if the attacker has got even short term physical access, you have little hope of escaping with most computers.
As the joke goes...
..when asked directions to the manor house, the village idiot thought for a while then said, "well, I wouldn't be starting from here".
Mainframe and UNIX virtualisation was designed into a well thought out system, and the guest OSes were generally planned for it too.
As already pointed out, there are a lot of reasons for x86 VM that have nothing to do with per-user tailoring. That after all should be something that generally 'just works' by the OS.
We have issues running older Windows and Linux versions due to security (we need the OS to run something legacy, but can't trust it on its own) and due to the loss of hardware support over time (hence the attraction of virtualised network cards, etc).
Beyond that, things could be improved by a less complex stack, and VM tools that allow hot migration of a running machine from server to server offer great advantages in uptime. Except of course when those tools come with bugs...
I don't think they are "meting out justice", but rather stirring things up. Illegal? Most certainly. Bad for society as a whole? Maybe not.
Would we be better off if the un-redacted MP's expenses had not been leaked to the public? That was an illegal act after all.
Or what about ACS:Law? Their reign of legal blackmail might have been stopped in due course after a lot of careful weighing up by The Law Society, but that could have taken years. Maybe those on the end of baseless 'pay up or else' threats were relieved a good deal faster by the release of the (illegally obtained) email archive to the world?
@WTF is up with you people?
Ah, so they need to be prosecuted. Should this be by the same police force that until very recently avoided properly investigating NI's sponsored hacking activities, by any chance?
"an early 60s song"
I think the most famous version was Terry Jacks' in 1974, though the original French-language one (OK, Belgian...) was probably early 60s.
Not the pish Westlife one around the millennium.
Showing my age here...
@I somehow doubt
"these morons have just destroyed a ton of potential evidence against Murdoch and his empire: anything incriminating in the email servers can just be blamed on these hackers."
Oh come on!
(1) You seriously think NI has not been purging its own servers, etc, since all this started to blow up a few years ago?
(2) If they had not purged so far, what about off-site backups?
"the UNIX model was adequate for what it had to do, and was well understood."
I think that hits the nail on the head. Unix has a simple model of file (and thus essentially everything) permissions. Groups allow a more complex take on that, but most users glaze over once an explanation gets beyond the point of you/others can read/write/execute.
Windows NT+ on NTFS has ACLs that potentially offer much finer-grained control, but the consequences are a bugger to understand and follow, so they are rarely used effectively. A lot of legacy Windows programs just break if you try to implement a properly secure system, so it becomes rather useless to the end user :(
So in reality, and most certainly for home users, Windows is basically broken and Linux is fine. Not by capability, but by complexity and working defaults.
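The entire classic model is nine bits plus a few extras, which is exactly why ordinary users can hold it in their heads; e.g. in C (the file name is just a placeholder):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* 0640 = owner read+write, group read, others nothing: the whole
     * you/group/others and read/write/execute model in one octal value. */
    if (chmod("report.txt", 0640) != 0) { perror("chmod"); return 1; }

    struct stat st;
    if (stat("report.txt", &st) != 0) { perror("stat"); return 1; }
    printf("mode bits: %o\n", st.st_mode & 07777);
    return 0;
}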
I don't hold out a lot of hope, but that is a general observation and not Toshiba-specific.
How many sites send you a password reminder of your *real* password by email?
This shows that (a) they store it in un-hashed form somehow, and (b) they don't care that email is unencrypted and your email account may well be hacked.
Folks, think about all of the times you have received such a reminder!
Assuming, like me, you occasionally forget and use such a service; if not, why not give it a try with some companies where you have important stuff...
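For contrast, doing it right costs a handful of lines; a sketch using crypt()'s SHA-512 scheme on glibc (the salt here is a fixed placeholder; in real use it must be random per user):

#include <stdio.h>
#include <string.h>
#include <crypt.h> /* glibc: link with -lcrypt; on BSD crypt() is in unistd.h */

int main(void)
{
    /* Store only the hash: a "forgot password" flow can then reset,
     * never remind, because the site cannot recover the original. */
    const char *stored = crypt("correct horse", "$6$placehold$");
    printf("stored: %s\n", stored);

    /* To verify a login, re-hash the attempt using the stored string
     * as the salt and compare. */
    const char *attempt = crypt("correct horse", stored);
    printf("login %s\n", strcmp(attempt, stored) == 0 ? "ok" : "denied");
    return 0;
}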
@hardware is the problem
Yes and no:
Yes, because the whole of x86 is a horrible kludge that only became successful because DOS/Windows (and the resulting applications) ran only on it, and so Intel was able to spend sh*t loads of money making a basically crap design good value for money.
Shame it was not spent on the ARM...
No, because of why you are likely to want a VM in the first place, and this is where I disagree with the author saying "the VM should emulate the same hardware as the host":
Typically a strong reason for a VM is that you want or need to run some horrible old OS+application that you have no practical or economic way of replacing. As such, the VM has to look like *supported hardware* to an OS that may be a decade or more old.
It is so nice being able to move a Windows VM from one type of x86 host to another without needing to change drivers, re-install, re-licence, etc. Host could be Windows or Linux, CPU could be AMD or Intel, running 32 or 64 bit modes, and my old w2k and XP VMs and the various useful but difficult (or expensive) to replace applications work just fine!
As far as I know, the same issue applies to XP (maybe not the admin part): if you shift a USB device to another port, Windows thinks you need to (re)install the driver for it. Doh!
No idea if it's fixed in Win 7 as I don't use Windows much outside of VMs now.