"heaps of suits are doing network diagrams in Visio"
No, probably spreadsheets. But the same applies: having a 3:2 screen is much less sucky than 16:9.
No, just 0.1% phony...
Which bits were different to the official version might worry you, of course...
"Until they realised that no one wants to pirate Windows 8...."
Fixed it for you.
Clearly you, and most of El Reg's readers, are not the target market. It is mainly for folk who just want web access, with a keyboard, and don't want to manage anything to do with updates and AV software, etc.
For that sort of use-case it is very good and cheap, which is important.
Yes, it has Google's spying but most folk are still going to use Google anyway, and probably download Chrome as well, so that is not something they care about.
I got an Acer one for playing with and dual-booting, good value for money, but I do hate the lack of home/end/insert/delete keys on the keyboard.
"even in it's virgin form...KNOX complains of intrusions"
Maybe that is telling you something important about how buggy the pre-installed (and store?) apps are, in terms of poking where they should not?
The problem with this comes from the precedent the USA would set, as other ISPs around the world would start eyeing up the opportunity to charge twice for their pipes.
The real problem is not the idea of prioritised data based on type - that is already a known technical solution - but that payment by the source of the data becomes the deciding factor. Add to this the race to the bottom on ISP prices, and they won't invest in making better back-hauls unless someone big and rich pays them to.
If ISPs are forced to treat all data sources equally then of course they may have to adapt their billing model (and maybe, just maybe, be forced to honestly advertise their quality of service) and charge some end users more, but it would keep a level playing field so you don't get a few big media players delivering usable video and everyone else being throttled into oblivion.
They don't provide the DRM, just the "hooks" that allow it to be called.
In that sense it is no worse than supporting flash player. But they, and other DRM opponents, are right that it is a very worrying trend towards everything being restricted, so ad-blockers, etc, may not be allowed in this dystopian future.
MS had some of the weakest security around at the turn of the millennium but actually decided to do something about it. These days the Windows kernel is not bad at all, and in the believable comparisons (not the odd troll here) it has broadly similar numbers of flaws as the Linux kernel.
What gets your average Windows machine p0wned these days is user-space crap like Adobe reader plug-ins.
Of course, a Trojan and lack of knowledge is another easy route to the dropped trousers and bucket of soapy frogs (which is an OS-independent problem).
MS has an operating system comprising millions of lines of code in hundreds of sub-systems, and has managed to get serious bugs down to a handful per month to be patched.
Adobe has a document reader, and not much more than a video player for the web, and it can't do much better?
Food for the spiders & snakes I suspect?
Are you offering a nice Chianti with this liver?
Good point; if that makes it, then the next sales opportunity is for schools...
"...runs on ye olde spinning rust, a medium that offers lesser performance than the solid-state-disk-based tier it previously offered and therefore attracts a lower price."
Nope, it runs on ye olde spinning rust, a medium that offers lower cost per GB compared to SSD and that is the reason it is cheaper. The "lesser performance" aspect is why you might choose to pay more for SSD.
An interesting move. However, I first thought they were doing x86 and ARM in the same chip so you could get both (or just low power, etc) as needed at run-time.
Maybe if Intel had done this with the Itanium from the start it would have been less Itanic...
*cough* teledildonics *cough*
The atomic number, 117 here, is the count of protons in the nucleus, but the stability depends strongly on the number of neutrons. E.g. the simplest hydrogen has none, the deuterium isotope has one and both are stable, while tritium has two and has a half-life of 12.3 years.
So with the "island of stability" (which is a relative measure, none will be *that* stable) there is great uncertainty about what the effect of differing isotopes will be. Unfortunately it is damn hard to make any of them, let alone high neutron count versions.
Oh dear, that reads as if "bit-wise operations" are dangerous! Doh!
My point was that you can do things in C with ease, such as bit-wise operations, pointer arithmetic, etc, that can be seriously dangerous but are also essential for some OS operations. It is the same as assembly in that respect, unlike other languages that (for good reason) deny such dangerous operations.
I think C was created to be just "one step from the metal" for writing an OS in a moderately portable way. However much you might complain about the dangers of C, it sure beats assembly!
There are occasions where a goto might be the most elegant option (e.g. breaking out of multiple nested loops), but the problem I see is when you look at a goto target and wonder: just how did I get here?
I think gcc supports a variant on the idea, but then you get into serious portability issues for a library that should be cross-platform and compilable on systems of widely varying age.
Yes, one of the issues is simply crappy coding style (as the author put it so well "No bug is shallow if it lives in a bug-camouflaging environment.").
That is why the likes of MISRA C/C++ guidelines were created, to get programmers doing things in ways that are robust (i.e. common/minor mistakes are easily caught or mitigated) and readable (so bugs have less opportunity to be hidden).
You can argue C++ has more elegant ways of doing safety/clean-up things, you can also argue that it has lots of interesting ways of adding bloat or doing things inefficiently. But if you know and understand those arguments, you can probably write safe code in either C or C++ anyway.
Depends - it won't stop them if you are a high-value target worth directing a lot of resources at; hell, they will just bug your machine(s) at the $100k+ sort of cost in that case.
What it does do is make data hoovering that bit more difficult and expensive. If enough people used it then they would only be able to investigate high-value targets, sort of like the good old days when human resources (i.e. a spy) had to do the work, or that quaint idea of having proper judicial oversight.
Have an upvote for mentioning the FB Purity add-on !
Increasingly I don't bother with Facebook as the signal-to-noise ratio has decreased. Maybe my "friends" are more boring now, or simply more numerous; the adverts have also increased, along with lots of pointless article referrals.
But as an ID service? You must be joking?
Remember the "rogue" MP3 site that sold tunes by the data volume, in the format of your choosing, and DRM-free? Much easier to manage as no device type info is needed; just let consumers choose the image size/quality and price it accordingly.
Oh, and the industry might make more money if they turned out better films and fewer crap remakes. Just my opinion of course...
"since an SSD has a life of say, 3000 years"
Mistake #1: you assume that erase/write is the only failure mode, rather than, say, ion migration under voltage stress, etc. Most devices have a lot of failure modes, but often only 1 or 2 are dominant, and you may find SSDs have lives under read-dominated operations of 5-10 years max.
However, having it mirrored with another device, such as a cheaper HDD, gives you a sporting chance of surviving a failure without problems. (Incidentally, the more recent Linux RAID software supports write-mostly for situations like that, where IOPS differ a lot between the storage devices.)
Rule #3 of data sheets - NDAs exist because they suck at something or other, and don't want it more widely known or compared.
Rule #1: if it is not specified - it sucks.
Rule #2: they probably lied with the stuff that doesn't suck.
Broadly - when talking bollocks about one's self.
Allegedly - when talking bollocks about others.
***cough*** RBS Mainframe ***cough***
"I tried to embark on a process of capturing the disk images, but stopped when I had difficulty finding any new blank double sided double density floppies"
Don't do that - make an image of the whole disk, for example using 'dd' or some Windows equivalent, and then you can present a copy of said image to a VM running an emulator to extract the files (assuming it is a weird file system format). For example:
dd if=/dev/fd0 of=~/Documents/image-1.dat conv=noerror
Even if you need to make real floppies again, you can 'dd' back from the stored and backed up images you made.
Just remember though that 'dd' is nicknamed 'destroy data' because of the tragic consequences of getting source and destination confused!
So a bit like pr0n then?
Mine's the dirty mac with the profanasaurus in the pocket.
Boards are around 100 Euro, and you need a good working floppy drive as well. They also sell those, but I'm guessing the 999 Euro for a 5.25" drive is an "out of stock" indicator, rather than a "we take the piss" one.
I did, but it was using non-standard connections (compared to my 3.5" PC drives).
I had a similar problem with my father's "word processor" when it died (floppy drive no longer reading disks) and then I found out he had important stuff saved over about 20 years without any other copies (he did have two floppies for each important set, but they were both in Sharp-specific format, which he thought could be read elsewhere).
Reading the disks was the first challenge, because virtually none of our PCs had a floppy drive. They were in 720kB format and I made images of the floppies using one of our old Linux boxes that actually recognised a disk was present.
They were flakily formatted ones that a Windows 95 VM refused to understand, so it had to be an actual DOS 6.22 VM to 'read' them, and that occasionally crashed due to cross-linked files showing up and endless loops. chkdsk sort of fixed that, so the files could be found.
But they were mostly in Sharp PA-W1400 ".doc" format, as my father had never seen the need to export to ASCII, and Sharp could not tell me what that format was. So I had to look at them with a hex editor, where I could see mostly recognisable stuff, and I ended up writing a small program to parse them and convert what I could identify as special character sequences into UTF-8 for things like "1/2" and so on.
A lesson there...
No idea, I'm a bloke. But I imagine trying to get a spiky hair brush, can of hair spray, mirror for doing eyes, etc, up there would be a tad uncomfortable!
"Some people need their pcs to be slightly more secure than that...."
Exactly! That is why they don't install Windows...
Pot meet kettle.
"What does ibm bring to the table in this that would interest anyone over x86/x64?"
Oh, maybe an established 64-bit system (compared to ARM) with a better underlying architecture (compared to x86) and willingness to license at affordable costs?
Yes, Intel has the lead in process technology, and yes the legacy software market for x86 is very important and deeply ingrained, but there is a lot of new stuff that has no such constraint.
"One would have thought it's something that could be built in to a modern CPU."
Intel have that, but as it is a secret black box, who would trust it?
Oh I would not worry about a lack of a willy, as in decades of engineering work I have never needed to use mine in a professional capacity. Also I think you will find that waving lady-bits around will trump any willy-based competition!
Sorry for omitting ReFS, just I have not seen that actually used yet. And it is also Windows-only!
I thought I might as well come out from under my bridge to weigh in on this:
In the beginning there was no Windows security at all, and BillyG said Lo! Make it so we don't suck! Thus Dave Cutler was employed to design a worthy OS and, being who he is, it had to be non-UNIX in every aspect, presumably due to some nasty experience at the hands of some UNIX admins at a student party or similar.
Thus he created NT, and we saw it was good and multi-platform. Anything and everything had an ACL for security, and computer scientists around the world marvelled at how complex one could make a machine's permissions. Alas, it did not last, because those in MS' demonic marketing department decided that it had to be compatible with some legacy stuff based upon the old single-user, non-networked model of security; and speed was poor, and thus the video subsystem, and other stuff, was thrust into the ring 0 code that once was pure kernel. Then it became x86-only, until very recently when the bastard child WinRT was created.
And darkness descended upon the Windows ecosystem as software was allowed free rein by default to do things it should not, and the tenderest parts of the user's nether regions became the favourite lunch of malware writers the world over.
Meanwhile the old UNIX/Linux model chugged along on the basis of multi-user systems with a crude, but effective, set of permissions that were enforced by default, leading to far less trouble.
And so children, the lesson here is analogous to the tortoise and the hare: Windows should have been the pinnacle of security, but was let down by pesky users not knowing or caring how to use ACLs, and by the time it became a problem, so much legacy software was doing it all wrong. Given you need to use a tool simply to find out what ACLs are in use, it is hardly surprising.
Linux is indeed less sophisticated by default, but as its basic segregation of admin & user has always been enforced, software for it always played well that way, thus basic security has always "just worked".
For ACLs on Linux you can copy this way:
getfacl file1 | setfacl --set-file=- file2
And yes, ACLs on Linux are not completely consistent across different file systems, but how consistent are Windows ACLs across file systems? Oh yes, there is only NTFS...
Nothing like a good cross-forum argument!
Arguing security on ACLs versus permission bit-masks is so last decade...
Ah yes, those "unskilled and lazy developers" who wrote stuff like Outlook Express (which at one time saved emails to cryptically named hidden folders under Program Files) and Office (which, unless patched, failed when XP SP3 finally turned on the firewall by default)?
With MS playing fast-and-loose with software development for such a long time, often to get round the speed or effort penalty of doing it right, can you really blame other developers of that era for doing the same?
Maybe the VR show is all about the other sort of strap-on? Explains the general look of enfeeblement....
For whom? The boss who is not getting in the BOFH's way, or the beancounter who turns down the boss' most excellent suggestion for new kit desperately needed for his support team?
You know, those 4k monitors and extra storage arrays for "speciality" content?
I'm less concerned by lawful access, based on a court order from any competent government, than unwarranted hoovering of all data "just in case".
Have an upvote for using "salubrious", oh and a beer.
Thanks for the feedback, I stand corrected.
"If you follow the spaghetti trail that is the source code"
I think you have identified a significant problem just there.
"I.e. it's read-overflow (or 'buffer overflow' by reading rather than writing) - nothing to do with the memory allocation!"
If they are really using a stack-based source buffer then electric fence would not have caught it, but I would have hoped some of the code-profiling tools would have thrown up a warning about the copy size being potentially bigger than the buffer.
I'm not sure, but usually if you overrun a buffer then standard tools like the "electric fence" library or the valgrind tool will find the problem.
Of course, if you write obscure code and use a not-very-well-thought-through alternative version of malloc() then things might not go so well...