Re: Uhhh, since when was "embiggen" a word?
It's a tricky one.
While you might feel guilty about revealing it, there is a good chance that someone else has found it (or will) and will exploit it. Until it is understood by the AV companies (and we can assume MS knows now), there is nothing to protect those using XP from it.
Now, MS told you it's not going to be fixed as XP is EOL, but what of the embedded version that various systems use? Publishing might be the only way to force MS to fix that for those still expecting support until that version is finally EOL'd.
Finally, you might want to consider whether the same underlying bug also impacts Win7/8.x. Disclosure would allow that to be investigated.
So really, it will come out one way or another, and probably best if done via an open forum rather than black-hat sales channels. MS know, so it's their call about patching.
The deeper problem is the sorry state of SSL certificates in the first place, and why it was possible for this to go pretty much undetected until security researchers looked into it.
Lenovo deserve a really big bollocking here, but all of the web browsers, and business in general, need to be doing something more serious about stopping faked certificates being used to MITM HTTPS, or making them damned obvious to the users.
You might find this enlightening:
Don't these chips have thermal monitoring?
If so (which I assume they all do), why not scale back the clocking if they start to overheat?
You mean like we have had since 1985 (Cray UNICOS, the first 64-bit implementation of Unix)?
Or 1994 (Silicon Graphics IRIX)?
Or 1998 (Sun Solaris 7)?
Or 2000 (IBM z/OS)?
Or 2001 (Linux becomes the first OS kernel to fully support x86-64, same year as XP 64-bit)?
Or 2003 (Apple Mac OS X 10.3 "Panther")?
[Shamelessly copied from http://en.wikipedia.org/wiki/64-bit_computing]
It also depends on how well the applications were written, and how they are linked. For example, if they only ever used the libc code for time calculations (mktime(), gmtime(), etc.) then having a patched libc on the 32-bit system would allow this to be put off until the 32-bit unsigned overflow, which is around 2106.
However, if statically linked, or doing things with time_t that depend on it being signed, then it's going to have problems. Also note (as already covered) this is not a Linux problem as such; it is a C language problem, and affects anything similarly UNIX-y that uses time_t. A lot of MS software could well be using the C library, etc.
So really this is more a 32-bit application/data problem, and only code audits and (more importantly) testing will reveal what will actually happen.
There are some ways to work round this and some things might just work. But testing is needed, and more importantly there should be STANDARDS for all those embedded applications that demand testing with post-2038 dates just to be sure.
Currently 64-bit Linux works fine, of course, as time_t is natively 64 bits.
Even today, as time_t is generally used as a specific data type (and not the generic 'int' or even 'long'), if it were defined to be a 64-bit integer then most 32-bit systems would re-compile and be all OK, as the compiler should do all the necessary stuff. What would be broken is things like file systems and other file formats where exactly 4 bytes is explicitly used.
Alternatively, if the 32-bit integer were treated as unsigned, most things would also work. I tested the gmtime() function recently and found that 32-bit Linux "failed" post-2038 by design, flagging an error, same for the older MS Visual C++ 6 (also 32-bit). Ironically, the old 16-bit MS-DOS C compiler got it right post-2038 if you treated time_t as unsigned!
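If you want to check where your own toolchain stands, a quick test along these lines will do it (a minimal sketch in plain C; the exact behaviour past 2038 depends on your libc and the width of time_t):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* One second past the signed 32-bit rollover:
       03:14:08 UTC on 19 January 2038. */
    time_t t = (time_t)2147483648LL;
    struct tm *tm = gmtime(&t);

    if (tm == NULL)
        printf("gmtime() flagged an error: time_t is too small here\n");
    else
        printf("gmtime() says the year is %d\n", tm->tm_year + 1900);
    return 0;
}

With a 64-bit time_t you should see 2038; with a signed 32-bit time_t the cast typically wraps negative, so expect either an error or a bogus 1901 date, which is exactly the breakage being discussed.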
Really old bean? I thought GCHQ had nothing official to do with the US after that spot of bother in Boston with all of the tea...
I think you will find users want better content, rather than just more of it. Sadly this is misunderstood to mean there should be more channels of utter pish, rather than the available revenue being spent on fewer channels with worthwhile content.
Also, WTF is it with broadcasters/ISPs spending billions on sports coverage and nowhere near as much on creating worthwhile programmes in other areas (arts, drama, comedy, science/education, etc.)?
I applaud this just because it means we are starting to see 4k monitors at tolerable prices.
For PC use, a big 30-40" monitor in 4K would be great, as the resolution usefully delivers the equivalent of 4 * 15-20" HD monitors but without the division and physical arrangement problems. Great for all sorts of things beyond speciality video!
This family of infections has a (rare) module that can be used to infect your HDD's firmware, so even having bought a clean drive is no guarantee it will never have this.
Arr, t'is the true way!
[closest icon to a flagon of rum]
Of course, if it were not for the botched intervention in Iraq, a lot of the terrorist problems would not exist.
Sure, Saddam Hussein was a ruthless bastard, and a lot of his people suffered under his regime, but I'm not convinced that Iraq "post-democracy" is a better place to live, given the lack of security, the rise of religious power, and the enormous damage to society & infrastructure.
I wonder how much VPN use that $29/month "privacy fee" would get you?
With a smart enough router you could stuff some high-bandwidth but low-interest things like YouTube direct onto AT&T's network and everything else via the VPN.
Clearly you know little and/or have never run any significant number of single-parity RAID arrays before. Maybe you got lucky, but others know that sinking feeling when a RAID rebuild throws up errors due to bad sectors on what you had hoped were the remaining good disks.
Of course "RAID is not backup", as everyone here should know, but unless you have a 2nd RAID or some serious money in a tape system you will have a tedious and probably incomplete data restore facing you.
By the way, that is one of the nice things about ZFS: it tells you which files are corrupt, rather than just that sector 1284529784 has an error, leaving you to either spend ages on your file system of choice identifying what that impacted, or go down the "nuke it from orbit" route of a fresh start and complete restore.
Rebuild times for classical RAID (including smarter ones like ZFS) are a big problem with modern drives, because capacity has increased way beyond read/write speed, so you can be looking at days or even a week or so. That is not, in itself, a problem, but the longer time and the huge amount of data mean you have a much greater chance of another disk croaking (or revealing bad sectors) during the process.
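To put rough numbers on that: taking the oft-quoted consumer drive spec of one unrecoverable read error per 10^14 bits, a rebuild that reads a 4TB disk end to end pulls about 3.2 x 10^13 bits, so the back-of-envelope chance of hitting at least one URE on that disk is 1 - e^(-0.32), around 27%. Multiply that across several surviving disks in a big single-parity array and a rebuild starts to look like a coin toss.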
This is why you really, REALLY, should be scrubbing your RAID array every week/fortnight. This forces the disks to read every sector and then to fix/remap bad sectors while you still have parity, so when you lose a disk in RAID-1/5/10 you have a sporting chance of a successful rebuild.
Better still, look to dual parity like RAID-6 or ZFS' RAID-Z2.
I don't know if it's still the case, but fsck-ing ext4 on large arrays needs lots of memory, more than 2GB usable, and that is a problem on a small NAS.
You are better off with XFS for a lot of those NAS boxes, but ZFS (and not on LVM as Thecus do - doh!) is much better, subject to needing much more memory.
I have had a Thecus and the support was not that bad, but the product was still crappy, much like other NAS-in-a-box offerings.
Really, if you have the technical know-how (which is usually the case for El Reg readers), then a cheap server like the HP ProLiant Gen8 G1610T MicroServer, some more ECC memory, and a copy of FreeNAS will give you a much better box.
Yes, I worry when reports like this profile RAID-0 without dire warnings about how that is not really "RAID", because it lacks the redundancy part of the acronym...
Chris, this "protection from lawful interception" you speak of is complete bollocks. If the police want my data, they simply have to get a court order in my country and I will have to hand it over.
We are not talking about some free/anonymous service here; this is all about businesses paying for storage/servers/etc, so it's pretty clear who is responsible.
A much more useful measure of "cloud service" integrity would be a properly audited trail showing that YOU, the customer, set a private encryption key on your clients and that it is never made available to the cloud provider.
If the law wants your data, then the proper course of action is a court order in YOUR COUNTRY to force disclosure.
Anything less is just marketing whitewash.
You might want to look up "anonymous"; it is kind of the opposite of declaring a consistent name.
No doubt the manual also warns of the consequences of being a moron and making all of this visible & vulnerable to world+dog?
Various countries, most recently the UK, have already regulated the installation of electrical wiring to prevent stupid things being done that put lives at risk through fire or shock. It is high time that those who put important stuff (or personal stuff, via smart TVs, etc.) on the Internet are held accountable for gross stupidity and for not applying the best-practice precautions that any 1st year computing course ought to teach.
I mean really, is there any reason why all anonymous trolls should not be executed?
The problem is you can't buy chutney made with chillies in civilian establishments, so they had to improvise with chutney & chilli sauce in some unholy combination.
That I might have to try later, just in case it leads to the second coming.
I was going to make exactly the same point: regulation exists to prevent costs as well - crime, accidents, injury, etc.
The goal of regulation should be to balance folk getting on with doing things against folk getting on with ripping others off or exposing them to excessive/unknown risks.
Simple, it is when they realistically expect it to ship!
+1 for that. I have no problem with folk liking the ribbon, just mighty pissed off that we are given no bloody choice in the matter.
I often use LibreOffice and it's great, but in my experience it rarely follows the document layout in .docx; it does better with .doc.
Office 2003 + 2007 compatibility sort of works, but it often borks on newer .docx, in my sad experience.
Office 2010 is not that bad, so I tend to use it if I can't handle documents in older versions, or simply to convert to a format that is better parsed.
"Negotiations ... have been going on for more than three years and ..."
Well, if the EU just banned US corporations from handling our data until a satisfactory agreement was in place, you can be damn sure it would not take 3 years!
I was going to ask the same - just how useful is this in the real world?
I can see it matters if you can get close enough to a very high value system to record the EM signatures and (presumably) have it run stuff you know, to help break the stuff you don't, but will it matter for 99.999% of computer users?
"As-a-service is more valuable in the world of cloud because it means repeatable subscription revenue as the onus is on the customer to
cancel their account keep paying or all their business data and established work-flow vanishes."
Some of us know it's pointless...
Different log-in accounts?
But seriously, it is a point - I can imagine a lot of people not wanting all of their stuff in US clutches once they understand what this implies.
Oh I don't know, I would like to take part in a France versus Portugal smack-down on either food or nubile lady fronts.
Maybe both, but then I'm a dirty old man. Thanks, mine is the mac...
God I hope so! I mean, what if aliens have triangular sphincters?
Well, it is how governments treat all internet users after all...
"If the theft and publication of that correspondence renders her unemployable, wouldn’t Pascal have grounds for a massive lawsuit against her former employer?"
Perhaps if they had not been such dick-heads in the first place, saying things that are untrue, and/or in very poor taste, and/or showing very poor professional judgement, they would have nothing to fear?
That is what our leaders keep telling us, so it must be true...
No, I disagree. And I am telling you to take the bus, where you can cower under a blanket, wetting yourself over all of the bogeymen that invading everyone's privacy was supposed to stop.
Interesting read, and nice to see someone in the mobile phone business where the #1 goal is not whoring you from advertiser to advertiser.
I am impressed by the girth of their pr0n hose! Don't we all wish our systems could sustain 50Gbit/sec?
However, I am disappointed that El Reg failed to convert that into kilowrists.
Yes, GPS satellites are subject to time dilation, but that is accounted for in the numbers they provide. It's only a problem if you don't correct for it by design!
Really, there is a bigger picture here. Systems get screwed up for all sorts of different reasons! While we debate the leap second we should also remember faulty hardware and numerous other bugs, both in the OS (any OS) and in the applications.
If you have a big critical system you really ought to have some sort of watchdog on your servers to spot the signs of a kernel panic/lock-up or application fault and reboot. While brutal, at least you would be coming back on-line in minutes, rather than hours while support folks are called to investigate, find they can't SSH in, etc., and have to debate and then use ILOMs to reboot possibly hundreds of machines.
GPS broadcasts the linear atomic time and the offset as separate fields, and all internal calculations (other than UTC output) use the former. Some GPS receivers' firmware has had buggy handling of the GPS-UTC offset change, but again that ultimately comes down to not testing it. You can buy GPS simulators, so it's not like a company can't test for it; they just did not think and/or bother to do so.
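A minimal sketch of that split, in C with made-up field names (nothing here is from real receiver firmware):

#include <stdint.h>

struct gps_broadcast {
    int64_t gps_seconds;  /* linear atomic count, never has leap seconds */
    int32_t utc_offset;   /* broadcast GPS-UTC delta, e.g. 16 s in 2012 */
};

/* All navigation maths uses gps_seconds directly; UTC is derived only
   at the output stage, so a leap second is just the offset field
   ticking up by one, not a discontinuity in the timebase. */
static int64_t gps_to_utc_seconds(const struct gps_broadcast *b)
{
    return b->gps_seconds - b->utc_offset;
}

GPS time runs ahead of UTC (the offset is the accumulated leap seconds since 1980), hence the subtraction.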
Similarly, NTP broadcasts the leap second event for the day before it happens, and then tells the kernel to apply the step at the appropriate time. AFAIK the NTP daemon can get the pending leap-second info from an attached GPS used as a stratum-0 source, so it ought not to require networking to other peers to get that information.
The main bug was not in NTP itself, but in how the Linux kernel handled the application of the 1 second jump to the time_t UTC counter, as it allowed a deadlock situation to occur. A standard type of problem for any multi-threaded software, and again one that ought to have been better reviewed and tested.
I don't know the reason(s) for the Java bug, but most likely it was related to the kernel deadlocking while waiting for "sleep" timers to expire.
Better fix - just use the working code.
It was working properly in Linux, and then a patch was applied that broke it. No one noticed its implications at the time, and no one tested it on a leap-second generator. Then it failed in real life.
The moral is simple and needs repeating: test every bloody change you make!
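On Linux you don't even need a GPS simulator to test the kernel side: adjtimex(2) can arm the same STA_INS flag that ntpd sets, firing a fake leap second at a test box. A rough sketch (needs root, and only on a machine you don't mind upsetting):

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { 0 };

    tx.modes  = ADJ_STATUS;  /* we are changing the status bits */
    tx.status = STA_INS;     /* schedule a leap second insertion */

    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }
    puts("Leap second insertion armed for the end of the UTC day.");
    return 0;
}

Run that, wind the clock to just before UTC midnight, and watch what your kernel and applications actually do; had that been standard practice, the deadlock would have been caught before it hit production.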
A lot of space systems already use variations on "ephemeris time" that has a linear atomic basis and a variable offset to get UTC, etc. That is not a new idea, and as pointed out exactly the same approach is used by the GPS satellites.
The problem is NOT the introduction of leap seconds, it is the simple fact that they don't test systems properly to deal with this known attribute of time keeping.
Instead of trying to get rid of leap seconds, perhaps they should add/remove one every other month, with the occasional run of two months in a row?
That way people would be forced to test for this, and not cry every 1-2 years when untested/patched code throws a wobbly.
That was my thought, that they wanted to record her password for whatever reason. I'm guessing that as she is a security expert she has now changed it, and it was never the same as anything else of importance.
The bigger worry is that they had copied the encrypted HDD at another time (while she was sleeping, etc.) and wanted the password to get access to it.
As another commentard has pointed out, it is best to have a 2nd account to demo that a machine works, so you don't have to decrypt your own files (assuming per-account encryption and not just full-disk).
Hmm, might need a tighter tinfoil hat now...
No, the law should be where you do business. If FB is selling adverts to Dutch companies, even indirectly, then it should be forced to comply with Dutch laws.
Don't want to follow Saudi, NK, etc, laws? Then don't do business in those countries.
Keeping your own records sounds like a good idea, until they are needed in an emergency, or the person finds they have lost them (or the electronic copy is deleted or corrupted, the HDD has failed, etc.).
Ideally, what we need are central records that can only be accessed by the staff treating you, with an audit of access that you can see if you want, and no availability otherwise, except as anonymous data for research.
You are right.
However, the goal of a single, effective IT and management system across the NHS is a good one, but government organisations (and a lot of private industry) seem to be useless at properly specifying and developing such systems, and the contracts inevitably go to the usual suspects, who seem worse at software development than a room full of 2nd year comp sci students.
The answer? I don't know, but I guess a good start would be having a small group work with a couple of NHS trusts to prototype something, get proper feedback from those actually using it (not those who fear it, or those paying for it), and then pay more to scale & deploy it once proven.