Re: Neil Barnes
"Never mind the quality... feel the width."
Are the pr0n studios interested then?
2161 posts • joined 15 Mar 2007
If you are not doing massive images, etc., you might want to create an XP or Win7 VM and use that for your photo editing. I have a few VMs with old CAD and editor software just for that sort of job - saves me having to dual-boot now.
Also, for most VMs (certainly VMware Player) you can save the VM state mid-operation, so you can shut down or reboot your main PC and later resume the VM from *exactly* where you were...
You might be pleasantly surprised by dosemu for Linux as a way of running 16-bit code. It's not perfect (but then XP's NTVDM wasn't either), but you also have some options to customise it, or even fix problems if you are dedicated and smart enough.
Also, if you are brave/foolish/need it, you can give dosemu direct hardware access to certain I/O ranges or interrupts; this can be useful when you have special hardware.
We do, and it allows our 22-year-old software & custom hardware worth £££ to work just fine. You get the simple DOS "do what you want" ease of doing I/O, but with the relative security and remote maintenance of a modern OS. And good time-keeping, if you configure dosemu to use the host (NTP-adjusted) time.
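For what it's worth, the port and IRQ pass-through lives in dosemu.conf. The sketch below uses the directive names and syntax as I remember them from dosemu 1.x, so treat them as an assumption and check your version's documented variables before relying on it:

```
# /etc/dosemu/dosemu.conf (sketch - names/syntax may differ by version)

# Grant the DOS program direct access to an I/O port range:
$_ports = "fast range 0x300 0x31f"

# Pass a hardware interrupt straight through to the DOS guest:
$_irqpassing = "9"
```

Note that dosemu generally needs to run with enough privilege (root or suitable capabilities) for direct port and IRQ access to actually work.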
Sure, we could have re-written our software for a different OS (and then had to do it again multiple times through the various Windows HAL changes, and maybe then to Linux to escape product activation and similar silly buggers screwing things up at inconvenient times), but why? It works well, has hundreds of equivalent years of debugging behind it already, and after spending 6-12 man-months of time & cost it would have done EXACTLY THE SAME job. How do you sell that to your business manager?
Down-voted for wanting accidents independently investigated - any down-voters care to say why they DON'T want that?
How do you know they are doing so well?
Yes, they have managed OK on a pretty regular US road system, but how much do they depend upon GPS/maps being completely correct? How do they cope with partially closed roads? What about twisting country roads with passing places? Temporary traffic lights? Police flagging them down due to an accident or similar? Dumb meat-bags doing stuff where another meat-bag would see the warning signs of high stupidity and/or intoxication and keep well away?
Though Google are pooh-poohing it, the accident rate seems to be about 5 times the average, so it's hardly a stunning display of everything being just right.
And Google have a vested interest in playing up the success and not talking about any known problems, do you really want to end up buying the high-tech Ford Pinto?
THAT is why there needs to be an independent analysis of what has actually been tested, and when failures have occurred, what should have been learned.
Very true, unless it is withheld for "commercial reasons" or trade secrets, etc..
We really need the equivalent of the air crash investigation board to deal with such events in a way that the manufacturers cannot legally get out of, or withhold evidence from.
OK, maybe not as rigorous in minor cases, but to trust something as new and potentially dangerous like this demands an independent analysis.
By the Google car under manual override, or by other road users?
When do we get an independent analysis to see if they were really unavoidable, or if the software messed up in some way that a typical human would not have?
I doubt I am the only one, insurers will want to know and I bet people considering such a car will want the equivalent of the NCAP ratings for 'droid drivers.
Yes, I sound negative, but the burden of proof has to be on the suppliers to show that they are better than the average human in all reasonable situations before most people will be willing to accept them. And that includes dealing with the other human drivers that will be around for decades to come, even after commercial availability.
You clearly have missed El Reg's "Biting the hand that feeds IT" mission statement.
Not intelligent life.
SVV said it for me - the lack of strong data typing to catch mistakes in data use is the single biggest thing by far. Fine and less effort to write for a 20 line shell script, pants for anything complex.
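To illustrate the point with a hypothetical Python snippet (function and values invented for the example): without strong typing, a value of the wrong kind sails straight through and you get a silently wrong answer instead of an error at compile time.

```python
# Hypothetical example: dynamic typing accepts a type mix-up that a
# strongly typed language would reject before the program ever ran.
def total_price(quantity, unit_price):
    """Intended for numbers, but nothing stops a string slipping in."""
    return quantity * unit_price

# Correct call with numbers:
print(total_price(3, 2.5))    # 7.5

# A value read from a config file or web form is often a string;
# no error is raised - the result is just silently wrong in kind:
print(total_price("3", 2))    # '33' - string repetition, not arithmetic
```

Fine in a 20-line script where you can eyeball every call site; in a large codebase this class of mistake is exactly what a type checker exists to catch.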
Anything that allows fast and usable cross-platform code to be developed without resorting to flaky and/or proprietary systems like ActiveX, Java applets, or NaCl stuff is to be praised.
Hopefully the MS implementation will remain "standard" and thus be fully cross-platform (browser, OS, and CPU) and future web developers will look at using this best (OK, fastest) sub-set for writing stuff.
The FPTP system is basically broken once you have more than 2 candidates per seat, and even then it's a tad doubtful with only 2. Some sort of AV/PR system is going to give you a more balanced allocation of seats.
However, the biggest problem is not how we vote for the devious, thieving, two-faced bastards, but that so many of them are useless at their jobs and do little more than knee-jerk to get voted in again. Until we deal with who stands for election, and what skills they ought to have (you know, like having had a REAL job for some time and not been a career politician), nothing will really get better.
As for Scotland, 50% voted SNP but they got 95% of the seats, which is not exactly representative. Still, the only glimmer of justice is that UKIP got more votes than the SNP but only 1 seat...
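As a toy illustration of why AV produces different results from FPTP, here is a minimal instant-runoff count in Python - a sketch of the general idea, not any official counting procedure:

```python
from collections import Counter

def instant_runoff(ballots):
    """Minimal AV/IRV count: repeatedly eliminate the candidate with the
    fewest first preferences until someone holds a majority.
    `ballots` is a list of preference-ordered candidate lists."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tallies = Counter(next(c for c in b if c in remaining)
                          for b in ballots if any(c in remaining for c in b))
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader
        remaining.remove(min(remaining, key=lambda c: tallies.get(c, 0)))

# 4 voters want A; 3 want B then C; 2 want C then B.
ballots = [["A"]] * 4 + [["B", "C"]] * 3 + [["C", "B"]] * 2
print(instant_runoff(ballots))   # B - FPTP would have elected A on first preferences
```

With the sample ballots, A leads on first preferences (4 of 9) but has no majority; C is eliminated, C's voters transfer to B, and B wins 5-4.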
Usually the biggest error made in predicting RAID failures is the presumption of uncorrelated faults. Most of us know from bitter experience that faults are much more likely to happen in a strongly correlated manner due to:
1) Manufacturing defects (or buggy firmware) that impact on a lot of disks, and you have all from the same batch...
2) A stress event prompting the failure, such as power cycling after years of up-time, or an overheating event due to fan failure, etc, that is common to most/all of the HDD in the RAID array.
So you should start by assuming HDD faults of around 5% per year and do the maths from that, not from claimed BER figures.
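Doing that maths in Python (the 5%/year figure is the assumption above, and this still charitably assumes independent failures, which the correlation point says is optimistic):

```python
# Back-of-envelope RAID maths from a realistic ~5%/year per-drive
# failure rate, rather than optimistic datasheet BER figures.
def p_any_failure(n_disks, afr=0.05, years=1.0):
    """Probability that at least one of n disks fails in the period,
    assuming (optimistically) independent failures at the given AFR."""
    p_disk = 1 - (1 - afr) ** years
    return 1 - (1 - p_disk) ** n_disks

# Even with independence assumed, a 10-disk array has roughly a 40%
# chance of at least one dead drive per year:
print(round(p_any_failure(10), 3))   # 0.401
```

Correlated faults (bad batch, shared heat/power event) only push the real-world number higher than this.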
And you don't see the problem in losing/corrupting a chunk of your data without knowing what file it was?
First point has already been made - you just can't do all-flash for a lot of cost & capacity requirements.
Second point: as most folk will find out sooner or later, HDDs don't suffer from simple random bit errors; they are almost always big clusters at a time, and generally much more common than the quoted BER figures would suggest.
Worse still, most file systems don't tell you if something is corrupted, so if you do get a rebuild error on sector 1214735999, how do you know which file to restore? Yes, it is possible to work that out, but it is a major PITA to do so. Furthermore, you can have errors that are not from disk surface read flaws, such as the odd firmware bug in HDDs, controller cards, etc. So you really want something that protects against all sorts of underlying errors if you have big volumes of data (or really important stuff). Enter ZFS or GPFS as your friend - they have file-system checksums built in. And if it matters, make sure the system has ECC memory so you don't get errors in cached data being written to disk!
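The per-block checksum idea that ZFS/GPFS use can be sketched in a few lines of Python - a toy model of the principle, not their actual on-disk format:

```python
import hashlib

def block_checksums(data, block_size=4096):
    """A checksum per block lets you say WHICH block (and hence which
    file region) is corrupt, instead of just 'parity mismatch
    somewhere on sector N'."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def find_corrupt_blocks(data, checksums, block_size=4096):
    """Return indices of blocks whose stored checksum no longer matches."""
    return [i for i, c in enumerate(block_checksums(data, block_size))
            if c != checksums[i]]

original = bytes(range(256)) * 64          # 16 KiB of sample data (4 blocks)
sums = block_checksums(original)
damaged = bytearray(original)
damaged[5000] ^= 0xFF                      # flip bits inside the second block
print(find_corrupt_blocks(bytes(damaged), sums))   # [1]
```

The real file systems also keep redundant copies or parity so a detected-bad block can be repaired, not just reported.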
The multiple day rebuild times are not such a problem in some ways just so long as another HDD doesn't fail during it. So if you have any biggish array you should start by using double parity. It is much better to have 8+2 in a stripe than 2*(4+1) in terms of protection against double errors, etc.
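The 8+2 versus 2*(4+1) point can be checked by brute force: with ten drives and exactly two simultaneous failures, only the split-RAID5 layout can lose data. A quick Python enumeration (layout encoding invented for the example):

```python
from itertools import combinations

def double_failure_loss_odds(group_sizes, tolerance):
    """Chance that two simultaneous drive failures cause data loss, for
    an array split into parity groups of the given sizes, where each
    group survives `tolerance` concurrent failures."""
    disks = [g for g, size in enumerate(group_sizes) for _ in range(size)]
    pairs = list(combinations(range(len(disks)), 2))
    # Data is lost when both failures land in a group that can only
    # absorb fewer than two of them.
    losses = sum(1 for a, b in pairs
                 if disks[a] == disks[b] and tolerance < 2)
    return losses / len(pairs)

# 2 x (4+1) RAID5: two failures in the same 5-disk group lose data.
print(round(double_failure_loss_odds([5, 5], tolerance=1), 3))   # 0.444
# 8+2 RAID6: any two failures are survivable.
print(double_failure_loss_odds([10], tolerance=2))               # 0.0
```

So on the same ten spindles, the pair of RAID5 groups loses data on roughly 44% of double failures, while the single double-parity stripe shrugs all of them off.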
Finally, if you have an array, make sure you regularly scrub it - most RAID systems support this (hardware cards, Linux's MD system, ZFS, etc.) and it forces the controller to read all sectors of all disks periodically, so errors can be detected and probably corrected before a HDD fails completely (it will do the parity checks and attempt a re-write, probably forcing a sector reallocation on the flaky HDD). For consumer HDDs do it every week or two; for enterprise you can probably get away with once a month.
One day? How about the Boston Marathon bombing? That happened with the so-called PATRIOT Act in place; shit, they even had warnings from Russia that these guys were trouble, and what did it prevent?
I don't see how they can tell (yet?) which key was pressed, but they might be able to find out your password's length and so target brute-force on a subset of users with short-ish passwords.
Well, thank $DEITY that people realised this and sent them the best answer possible - not buying a shitty locked-in product. One hopes this will be a lesson, perhaps not of Ratner-esque proportions mind you, for other businesses to take heed of.
That part is, to me, fair enough.
What was not fair was that it was, in effect, a fixed fee, and not progressive taxation based on overall income (including any benefits, etc.).
Well, if Ukip have their way there will be ~~no~~ only undesirables remaining...
Fixed it for you...
But I guess there is a big difference between "local" attacks, where the person has to gain some sort of physical access, and the risks from a remote hack being used.
While there probably are very few bad/mad enough to do this in total in the world, the risk of it being done is much higher if the perpetrator need not travel or physically risk being caught. To me that is the real issue with the whole IoT craze, not that someone who gets on my LAN can do something stupid/bad, but that suddenly any twerp anywhere in the world can take a shot at things because the devices are being exposed to the WAN, without adequate security or patching, for whatever reason the designer thought cool.
You are, of course, perfectly right.
Sadly you are also in the minority as developers go, in particular if you have XP-era (or older) software that you need to run. Even a lot of MS's older stuff flouted their own "good practice" guides!
Interesting development. But for now I will stick to a handful of VMs with XP and the strange win32 stuff I can't get on other platforms.
Come on now, they never said they would catch smart terrorists or criminals.
This is about citizens who are disliked by those in power - sorry, about catching the ones trying to set fire to their underpants. Probably after they have failed, but see - we have emails to prove they have proper explosive pants!
Ah yes, illustrates the importance of recording such meetings, completely off the record of course, on a phone. An Android most probably...
Are the patents for FAT32 not expired now?
After all, it's been 20 years since Win95 came out with long file name support. Sure, it sucks as a file system, but I doubt you need a licence for it any more. Not true for exFAT, of course, as that is a recent one...
The answers to your points are:
1) Yes, realistically you need a newer machine to have a decent chance of running a VM. Think of at least 4GB RAM and support for virtualisation (an AMD A8 ought to be fine).
1.1) What the VM buys you is you don't need to have drivers for the new hardware for an old OS (currently a w2k or XP issue).
1.2) You can also (sometimes!) migrate a working machine into a VM image and thus save the process of installing the OS, patching, installing applications, getting licence keys, setting stuff up, etc. The downside is you don't then clear out years of crud.
2) Most software that is currently performing OK on a 5 year old machine will be fine in a VM, and you can get some video acceleration support for the VM as well (depends on OS/video driver/etc).
Obviously you won't get "bare metal" performance but often the convenience beats that except for really high performance tasks, gaming, etc.
3) USB dongles are not usually a problem, you can selectively connect USB devices through the host to the VM, but you might find the occasional thing that won't work.
However, all change has a cost (time, software, or hardware - sometimes all 3) and eventually you need to attend to it. Better to do it before the excrement hits the HVAC attachment, so you don't find big problems that take ages to work around.
If all you need is XP/7 application support, and not special hardware, then running Windows in a VM is a good solution.
OK, for the typical end user it's a little more training/understanding of the whole "computer in a computer" arrangement, but it allows you to totally decouple the application+OS you depend upon from the hardware you have. You can also lock it down so web/email is done from the host, and the VM has only the internet access it really needs (which could be zero). Finally, as a lot of malware now avoids running in a VM to evade analysis, and you are probably not as exposed, you can drop a lot of crappy AV software and rely on other methods of recovering from an infestation (as AV is generally pretty shit at that job).
For myself I have XP and 7 VMs for CAD software, Office, etc, and use Linux for my host machine. No need to rent, no need for cloud unless I want it, no need to sign up to a MS account, etc.
What? Did I miss Twitter having an actual use?
I think most SSDs support a "secure erase" instruction that wipes the device. They would have to prove you did it (harder if the wipe software was on the SSD when it wiped), but that way there is nothing encrypted that you can be forced into decrypting (or being stuck trying to prove that random data is in fact random data - as I have from the Numerical Recipes CD, for example). Might also be useful if your device is stolen/confiscated for espionage (industrial or nation-state) reasons.
What is a bit sad is the fact this discussion is taking place. That people feel enough of a threat of 'data' being used/abused to convict them when in the past you generally had to be shown to have physically done something and/or have corroborating evidence from others.
The issue has nothing to do with disk space (usually) but everything to do with the mindset of your typical large system IT department where if it can't be locked down by AD policies, it ain't going on their machines.
It is not that daft a rule, as typically they want to be able to control trust certificates and proxy settings, etc, as well as controlling what sort of plugins are permitted.
If Mozilla really do want to be relevant and get a bigger share of the corporate world they ought to make their web browser and email clients much easier to administer remotely using Windows practice and ideally something for Mac/Linux as well.
Stop copying the dumbed-down Chrome UI and its policy of changing stuff every month or two, as that just pisses off people who have to manage and train non-technical staff.
So if no one has checked the person requesting the certificate, how can you trust it? How do you know it was issued to the site it now claims to certify?
That is the underlying problem of the whole https system: the certificates are only as secure as the logical-OR of all 600+ authorities who can issue them, and some (or their governments) I would not trust as far as I can comfortably spit out a rat...
Hence we then have "certificate pinning", which sort of works on some browsers & sites. And we have Chrome basically ignoring certificate revocation completely (speed matters! WTF do you care if it's dodgy?)
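What pinning boils down to can be sketched in a few lines of Python (function and variable names invented for the example; in real use the DER blob would come from `SSLSocket.getpeercert(binary_form=True)` after the TLS handshake):

```python
import hashlib

def pinned_fingerprint_ok(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Certificate pinning in miniature: rather than trusting whichever
    of the 600+ CAs signed the chain, compare the server certificate's
    SHA-256 fingerprint against one recorded out of band."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

# Dummy stand-in for a DER-encoded certificate blob:
cert = b"dummy certificate bytes"
pin = hashlib.sha256(cert).hexdigest()
print(pinned_fingerprint_ok(cert, pin))          # True
print(pinned_fingerprint_ok(b"tampered", pin))   # False
```

The catch, of course, is key rotation: when the site legitimately re-keys, every client holding the old pin breaks, which is a big part of why pinning only "sort of works".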
"customers will be willing to download a free alternative"
Try telling that to the Gov, NHS, etc, who have their balls in IE's vice...
Would a better approach not be to have a system where all of the stuff they access on-line is included in the submission the examiner gets?
That way you can mark how well they "used" google and check for simply asking the question and/or using google to go to a 'mechanical Turk' site for a solution.
OK, would make marking a bit more tedious, but maybe the knowledge that their search operations are assessed would make for a more focused approach.
If MS has really done the decent thing and put a bullet in IE's multiple mutant heads, and developed a new standard-compliant browser that is up with the rest of them, I applaud them.
But why only Win10?
I mean all of the major browsers like Chrome, Firefox and the also-ran Opera (sadly now a skinned Chrome engine) manage to support various versions of Windows and also Mac & Linux. Why can't MS do this?
Of course, if their business model did not consist of screwing every last cent out of its users by whoring their data from advertiser to advertiser, maybe this would not be a problem?
Yes, it is free, and no I would not pay for it. Unless it really did offer privacy and respected laws outside of the USA.
Why? Really, if anyone is sick enough to want to use a radio-controlled bomb, there are plenty of other RC devices out there that don't need a mobile phone network. Or a timer. Or no radio link at all. Or, as demonstrated by those very 7 July tube bombings, just pushing the button and blowing yourself up as well.
One strongly suspects there is much more to this reluctance than just how some bomber could manipulate the network kill-switch.
In 5 years will we be reading "After a couple of years, ~~Microsoft~~ Google moved the goalposts again. ~~IBM~~ Microsoft couldn't keep up and threw in the towel", I wonder?
I don't like Google's behaviour with many things, but it is hard to feel much sympathy here.
"your child's child likely will have no need of handwriting"
So they can't sign things and thus we all have to be corralled in to a biometric future like cattle, all suitably tagged and compliant.
They may not be "safety critical" but they sure are business-critical as shown today.
Also, I doubt the cost of having software for two OSes is anywhere near double the cost of one, but we will have to wait and see whether it was an OS problem killing the connections or an app problem. Either way, it is a timely reminder of just how much companies depend on IT systems working.
"Problem = solved."
Not really. While the "Professional Starter 5 Server" looks as if it provides your cloudy store & share, it still leaves open the whole issue of how you secure access to your own server to host it (assuming you have enough data to make them hosting it uneconomical or too slow, so you want only some data synced but lots more available on demand).
Also, you might have software on a home/work machine you need to run remotely (maybe it's tied to a MAC address or whatever for licensing). That is why the issue of choosing & configuring a router/VPN was mentioned, as it could drastically reduce the chances of others having a pop at your server, etc.
"safe solution would be to have your own NAS box somewhere on the network"
Yes - except most home & small-office products are shockingly shit when it comes to security.
Maybe Trevor Pott has some advice from his much greater experience than mine, but personally I won't put any of my machines on the world-accessible network as I don't trust them much. My own Linux PC, which I can SSH into, also has a 2nd software firewall (behind my el-cheapo router) that only allows my work's subnet to even try a log-in.
It might make a useful article: how to choose & set up a router and NAS + a few machines so you can VPN in and access your data or desktop with tolerable risk?
It's not just the Americans, though they seem to be the worst offenders these days, given the open attitude of "USA courts can enforce USA laws in other countries".
It is about anyone out there who wants to get a hold of your data: be it spy agency in your own country or another, business competitors, jilted spouse, nosey employee at your hosting provider, whatever.
As for deliberate weaknesses, that is far easier to do in a closed source implementation (to leak the key as claimed for Crypto AG devices) than in a standard (where you hope that the breaking effort is much less than obvious brute-force due to some knowledge you have about it). Which is why the only standards you should consider are ones that have been publicly analysed by the international community (e.g. AES) and not ones where the creation was done in secret (e.g. Dual EC).
The only way that is trustworthy is to have your own encryption.
That way if anyone has a legal reason to access your data they have to come directly to you with a court order. You then only have to respond to courts that have legal authority over you, not over your ISP or over your cloud provider, etc.
Just to add that SpiderOak claim to provide a Dropbox-like file sync/share with "zero knowledge" of the data stored on their servers. Of course, that only holds so long as you don't create a share link for web access, as that needs your key to be transferred.
This is how it should be!
The only reservation I have is I don't think it has been independently audited and even if the source was available to me, I doubt I could audit it myself.
Yes, look at BT here in the UK.
They outsourced email to Yahoo and the buggers changed settings from time to time without it being updated on BT's help pages, and their useless hell desk had no clue either :(
I mean WTF are they doing changing an email server's settings without informing the users. You know, maybe by emailing them in advance?
If I am kind, then it is simple incompetence in not knowing the POP/IMAP settings at any point in time. If cynical, then it's because they want people to use the web-mail interface where they can serve up adverts.
Encryption works if you use the "cloud" for data storage, say as an off-site back-up. And it is only trustworthy if you have control over exactly what software is doing it (and realistically that means a well regarded open source system) and you are the only one holding the key.
Where it all falls down is if you are using the "cloud" as a computing-on-demand service, or for document sharing and web-based editing, because then it has to be decrypted on the servers of the host, so they have access to your key.
Sure, the data at rest (i.e. stored on disk) may be encrypted, but they could snapshot the running VM or whatever and then poke through its memory for the key.
Really, if you are concerned about privacy then run everything on a local machine, with multiple layers of firewall/VPN-style protection depending on who/where access is needed, and only use an off-site provider to keep encrypted backups - backups that you encrypt before they move off-site.
Yes, fines should be large and enforced otherwise bugger-all will change.
How said companies choose to respond is up to them. It would be better for free software, and probably cheaper for them, to cooperate in making specifications fully public; it would also help build trust that nothing dodgy was added. But sense seems to be a rare thing these days.
Even if not going so far, it is time that suppliers were punished financially for failing to freely patch bugs in a timely manner for, say, 5 years after the software/product was last sold.
I don't see the logic here: if they are using phones to simultaneously trigger bombs, then by the time you know about it all said bombs have gone off. And if your aim is to detonate other bombs a bit later, you have timers and/or the ability to notice the network has gone dead for that.
The only situation where it would make any sense, and probably it is the reason for them wanting the document kept secret, is for demonstrations and similar where you would not want the organisers to be able to re-route a march, etc. And then it starts to look rather undemocratic.
Doh, me being stupid again! Why would they presume the people should have any say in their government's actions?