Re: I'm wondering ...
I wondered the same - if she can go without RFID tracking then why the hell not without the lanyard?
Or would that give other kids the idea they could be free of Big Brother oppression, which clearly will never do?
Why should the OS age matter in terms of being patched correctly? If anything, the older code should be better understood and so less likely to have problems.
All OS suck donkey balls, you just have to sit down and decide if you want warty, hairy or crusty.
Really, I would pay £100-200 for a proper working version of Office for Linux (minus the ribbon, which should have been a user choice against traditional menus). Would save me having to run a Windows VM for that.
How long until they do it for iOS? That is a real market (even if you think Linux users are all freetards) and the only reason I can see for them not doing it is to protect Windows and try to encourage it for fondleslabs. They might regret that...
Cisco of course is immune to Chinese tampering. What, you mean they are made in China as well? Tell me it's not so!
<= for the hard of thinking.
I don't know, but think the tactile feedback of a real keyboard is nice and not likely to be easily replaced. Same goes for a mouse versus touchscreen - mouse wins for precision any day.
Having said that, the masses for whom the main players target their sales may have other ideas, and thus lead us all to a crappy solution.
Can I use this post as another opportunity to complain about 1920×1080 as being crap? Sadly the market for HD TVs seems to have squeezed out better screens, with Apple offering better but not in my OS/price range.
We have some old-ish MGE UPS and they are also incompetently designed.
For example, there are some features/parameters you can set using the Windows software that you can't set from the Mac or (now missing?) Linux versions, and what is more stunningly stupid is that you can't set them over the network at all! So unless you have a Windows box directly connected by RS232 to the head unit, your UPS in the far-away data centre is crippled by design for network control/monitoring, even though it runs a web server.
Incidentally, do you know of anyone who makes HDD with programmable faults (bad sectors, silent bad sectors, read/write faults, etc) to test RAID systems more easily?
I know there is an option in Linux software RAID to put in a fault layer for debugging RAID stuff, but it would be nice to have an HDD (or SAS/SATA in-line thing) that could emulate typical hardware faults to test 'black box' systems.
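For what it's worth, the closest thing I know of in software is the md 'faulty' personality, which fakes read/write errors on an otherwise healthy device. A rough sketch in Python, purely illustrative - the device names are made up and the exact --layout spelling should be checked against your mdadm(8) man page:

import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# back the fake-faulty device with a scratch file rather than a real HDD
run("dd", "if=/dev/zero", "of=/tmp/fault.img", "bs=1M", "count=256")
run("losetup", "/dev/loop7", "/tmp/fault.img")      # assumes /dev/loop7 is free

# single-device md array using the 'faulty' personality over the loop device
run("mdadm", "--build", "/dev/md9", "--level=faulty", "--raid-devices=1", "/dev/loop7")

# ask it to fake a persistent read error roughly every 256th read
# ('rp' = read-persistent in the faulty layout naming; check your mdadm version)
run("mdadm", "--grow", "/dev/md9", "--layout=rp256")

# /dev/md9 can now be handed to the RAID or filesystem under test

Not a substitute for real hardware misbehaving, but it does let you exercise the error paths without sacrificing disks.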
Having read the article, it is clear the author likes the thing, even though it has a dumb-ass design of needing OS-specific management software. Compared to the various NAS I have used, which have a web interface, that is just such a backward move and limits your choice of OS.
But the linked article told us nothing about how "BeyondRAID" works, only the sort of thing it is supposed to do better. Furthermore some of the points made, such as disk swapping/order, have not been an issue on hardware or software RAID for years now. When anyone makes up a fancy marketing name like this, and you see little about what it actually does, my BS detector starts to twitch.
So far (I guess?) you have had no HDD fail so presumably you don't know how well it handles real-life failures? Has the unit got support for periodic disk scrubbing? Have you tested it with double parity and pulling/replacing one disk, and during the rebuild pulling and replacing another? (with something like ZFS on the iSCSI allocations so you can check the file's integrity afterwards)
Much as I hate to agree with such an overt "free marketeer", or to see anyone lose their jobs, the argument you are putting forward is similar to not wanting to get rid of the guy with the red warning flag in front of early cars.
While it might reduce the need for schedulers, by making taxis easier and possibly cheaper it might well bring more jobs driving and maintaining the cars. That is the nature (generally speaking) of Industrial era progress.
What I do question is the need for growth as such. More money per capita - where does the money come from? A lot of what we see as growth is increased consumption, and that is leading to problems of waste disposal and material scarcity (or ultimately the cost of recovering it in usable amounts).
What the West needs is an economy that is less about buying cheap tat from the lowest-priced off-shored supplier and more about having a good standard of living (which is more than your tat count) without the underlying presumption of a growing working population and consumption of materials.
You know those bank accounts that offer you a good interest rate then drop it after a year? At least you can move your money without too much trouble and have it in multiple banks for added peace of mind, because electronic money is all basically a 'commodity'.
How do you deal with a cloud provider, once established, changing the rules?
In a nutshell that is one of the biggest issues. Sure there are others like data sovereignty / jurisdiction and so on, but unless you can easily migrate your data and process from vendor to vendor you are simply putting your balls in a big vice and inviting them to turn the screws later...
If you have a RAID system you really, REALLY, should be doing a periodic "scrub" to verify all used sectors on all disks so when (not if!) you get a HDD failure there is a decent chance of the other HDD being clean enough to do a rebuild.
ZFS has a scrub command, and Linux software RAID with recent-ish kernels supports a check command to do a scrub (see http://en.gentoo-wiki.com/wiki/RAID/Software#Data_Scrubbing), while some hardware cards (like my Areca 1210) also support such a periodic background check.
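If you want to kick one off by hand on Linux md rather than waiting for a cron job, it is just a write to sysfs. A minimal sketch (assuming the array is /dev/md0 and you are root):

from pathlib import Path
import time

md = Path("/sys/block/md0/md")

md.joinpath("sync_action").write_text("check\n")        # start a background scrub
while md.joinpath("sync_action").read_text().strip() != "idle":
    time.sleep(60)                                      # wait for it to finish

print("mismatch_cnt:", md.joinpath("mismatch_cnt").read_text().strip())
# anything non-zero on a parity/mirror array is worth investigating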
Double parity is also a good idea, though matters more if you have several disks (say 5+), but is still not a substitute for a backup held elsewhere.
"I'm sure the guy made some serious cash"
Take a look, I think you will find "he" is a lot better looking than you might imagine. Enough, perhaps, for some divorce-provoking thoughts?
In my case I am thinking of using it as the journalling device for my ext4-formatted HDD RAID array; that might help a bit in dealing with write speed on big-ish files while keeping the redundancy of the RAID.
Might also try it in due course as a ZFS intent-log device, if I ever get round to re-purposing some of the HDDs I have accumulated into a high-integrity data store/backup thing.
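For the ext4 case the setup is just two mkfs steps. Sketch only - the device names below are invented, and obviously you do this on an empty/unmounted filesystem with backups in place:

import subprocess

SSD_JOURNAL = "/dev/sdb1"   # assumption: small partition on the SSD
DATA_ARRAY  = "/dev/md0"    # assumption: the HDD RAID array

def run(*cmd):
    subprocess.run(cmd, check=True)

# format the SSD partition as a dedicated external journal device
run("mke2fs", "-O", "journal_dev", "-b", "4096", SSD_JOURNAL)

# create the data filesystem with its journal pointed at the SSD
# (block sizes of the journal device and the filesystem need to match)
run("mkfs.ext4", "-b", "4096", "-J", f"device={SSD_JOURNAL}", DATA_ARRAY)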
"specifically so that actors lips move at the same time that the sound reaches the viewers ears"
I expect that delay is the opposite way - you delay the video to account for the audio's slower path.
The real problem is for the actors & musicians who need to play in time with each other, and if you remember the joys of significant delay on a phone, it can be very disconcerting (pun?) to hear yourself with a modest delay.
What Ofcom has finally been forced to recognise, it seems, is that you can't magically replace a high-power FM link and get the same near-zero delay and hi-fi sound in a GSM channel, and so live events need to have much more bandwidth (and protection for it) than a similar number of phone users.
The issue behind the shift is simply money - they wanted to combine and flog the analogue TV spectrum for money. Ah yes, analogue TV, I remember when it was near real-time for the New Year bells and where you did not get blocky compression artefacts on anything fast moving. Just needed a decent SNR...
You seem to have missed the point: it is not latency for "live" broadcast on TV - that is bollocks - it is latency for things like concerts and theatre, where you can't tolerate 0.1s of delay between actors & instruments, etc, and those involved are experiencing it all in human real time.
All of the bandwidth/power gains you see with digital radio come at the expense of delay - you need to have a significant block of data (tens of milliseconds or more) before you can strip out 'insignificant' information for audio compression, and a similar block to allow the addition of worthwhile forward error correction (ARQ is largely a lost cause when real-time matters, and most radio mics, etc, won't have a back channel).
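To put rough numbers on it (purely illustrative figures, not any particular codec):

SAMPLE_RATE_HZ  = 48_000
FRAME_SAMPLES   = 1024       # one compression block
FRAMES_BUFFERED = 3          # encoder look-ahead plus a small jitter buffer, say

frame_ms = FRAME_SAMPLES / SAMPLE_RATE_HZ * 1000
total_ms = frame_ms * FRAMES_BUFFERED
print(f"{frame_ms:.1f} ms per frame, ~{total_ms:.0f} ms before FEC and radio overheads")
# ~21 ms per frame, ~64 ms buffered - already enough to put a musician off when hearing themselves

An analogue FM link has effectively none of that.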
Have a Linux box I was going to wipe & re-install so thought I would try basically the above approach. Was quite surprised how far it got: eventually all of the text vanished from the Gnome desktop, replaced by small blank boxes (guess that was the fonts gone!), and finally it froze. Rebooted with a live CD to inspect the file system and only a handful of directories still existed (those with open 'files' before it finally stopped), but not any files as far as I remember.
Was impressed by its thoroughness!
Morris Dancing lessons are never to be laughed at!
Often I run Facebook in Chromium without an ad-blocker just to see how, and to whom, am I being whored. It is interesting to see how they move towards dating adverts as the evening progresses. I guess that tells me all I need to know about my sex life.
The fact I am looking at Facebook late in the evening, that is!
Think how much effort would be saved if software developers just implemented time sensibly. It is not like this whole world timezone & DST issue is something that happened after computers were developed, is it?
Sadly you are probably wrong and this is simply a software screw-up.
So much of MS (and presumably Adobe?) software did things in 'local' time and often without making clear what zone that was. That was just brain dead. What is worse, they assumed that the clock was in Microsoft's home time zone, not UTC, if nothing was specified.
The reason for being able to say it's crap is that this was already a solved problem before MS-DOS and Windows were created, as UNIX always uses UTC as its underlying time and just applies the local offset for presentation. That way when DST changes, or you access a LAN from another timezone, you still get the correct (OK, self-consistent) times.
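In miniature, the UNIX way looks like this (a Python 3.9+ sketch using the standard zoneinfo module): store and compare instants in UTC, and apply a zone only when showing the time to a human.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

event_utc = datetime(2012, 3, 25, 1, 30, tzinfo=timezone.utc)     # stored timestamp, always UTC

# presentation only: the same instant shown in two local zones across a DST change
print(event_utc.astimezone(ZoneInfo("Europe/London")))      # 2012-03-25 02:30:00+01:00 (BST)
print(event_utc.astimezone(ZoneInfo("America/New_York")))   # 2012-03-24 21:30:00-04:00

The stored value never changes when the clocks go forward; only the presentation does.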
All 'operating systems' have flaws, some more than others and some patch more easily than others, but we are used to the idea that every so often (and that is usually a month or less) we get some minor update to fix problems and close vulnerable orifices.
It is just a shame that phones, which now run as full an operating system as one could imagine, seem so utterly crap at being updated. Not just that the manufacturers don't seem to care much (thinking of you, HTC), but even when they do offer a patch it is often of the "save your settings and factory-wipe the phone" variety. The sort of brain-dead approach we had when Windows 95, etc, got upset all those years ago.
Why have they not learned from desktop OS that patching is, sadly, inevitable so make it something that is easy and (normally) automatic?
Yes, I know about diverse hardware, but it should be well within the capabilities of the manufacturer to have automated build/test setups. And yes, I know about the crapware some telcos add to a phone, but again that should be unimportant for OS patches as it is stuff that (should) run on top of the core OS.
Was going to add your point - if you want to enforce OpenDNS you also need to configure the router's firewall to restrict port 53 to only the OpenDNS IP addresses (208.67.222.222 & 208.67.220.220), which some, but probably not all, home routers can manage.
But it is true that setting up a home router to implement this properly & securely is not trivial even for a reader of El Reg, let alone Joe Public.
I also made in my submission the same point raised by Ken Hagan about what exactly should be blocked? Who decides and monitors this?
The consultation asked about 'blocking' but gave no indication of what would/should be blocked, and how much it would cost us, and who would pay when (not "if") it screws up and the innocent are blocked. Thankfully sense has prevailed for now, and they (the government, not necessarily certain MPs) appear to have canned the idea.
Point your home router to OpenDNS and set that up; it is the easiest way to control all home devices on DHCP. Otherwise you get into per-device configuration, either OpenDNS again or filtering software, and with a typical range of devices (Windows PC, iPad, Android phone, etc) you won't get any software uniformity for filtering, and a whole life of pain in tending to them.
Better still, talk to them and educate them about the risks on-line. Not easy to do I accept, but much better for their long term development.
I think (but don't have clear memory or facts) that the ZX series were cheaply made and used a double-sided PCB and not multi-layer boards with power & ground planes. That, if true, is probably the #1 reason for the poor EMC performance.
Also note they tested it without cables/peripherals, so real-world use would be significantly worse than observed in El Reg's article.
Really, the BT power line modems are also an abject EMC failure, but due to the money behind them Ofcom, etc, don't care. The solution? They re-draft regulations to allow more noise...
The key point is that once you have licensed a VM (which for XP is fine, though Win7 I think muddied those waters) you don't have to worry about hardware changes, drivers, etc. Furthermore, if it is running in more-or-less isolation for specific tasks you have far, far less to worry about in terms of security. To the point where I don't care about my XP VM going out of support in a year or so's time.
The manage-my-whole-network stuff from Microsoft is very attractive for corporate users, and so far Apple & Linux are not nearly as organised, etc, but most people don't want Windows, they want stuff that works and gives them less trouble.
And MS don't really get that - they foist Metro [insert latest name here] and the Office ribbon, etc, on us without the obvious and easy-to-implement option of just keeping the old way, and that means re-training and so on. Change is annoying, and it is gradually getting to the point where going from MS to MS latest is as much trouble for users as going to an alternative.
OK, Ubuntu et al are not doing themselves many favours either...
The first point is that this always-on encryption means that they can't just seize the servers and go trawling (or trolling?) for evidence. They have to take you to a court and show good reason for a judge to compel you to hand over any password in your possession. At least you know they are investigating you and have recourse to legal advice early on, and the sheer effort of going after someone through the courts means they simply can't afford to do it for anything other than serious and significant cases. A few bootleg episodes of the Simpsons, etc, are hardly going to be worth it, and copyright trolls (like the now defunct ACS:Law) will find that as well.
Second point is if you have forgotten your password, I think the ECHR would come down on them for any attempt to force you to reveal what you no longer have. Of course, if you were dumb enough to say you know but are not telling, or if a court might not be convinced of your genuine problems in remembering it, then it's not going to work.
Third point is how long will it be before someone has a third-party service in another country that manages the passwords and can be set to destroy them if not used for a couple of weeks, so unless they can go through the courts very quickly (again, meaning you have to be on a really serious charge) then there is no longer a password to be revealed, as your memorable one will no longer recover the encryption one.
I would be amazed if even 33% was actually unique and valuable enough to protect.
Well she ate his little head, and that is where a lot of men appear to keep the controlling brain.
What is really needed is someone (e.g. the EU) to force Apple, Google & MS to allow alternative public app stores to be added under YOUR control, so you get real competition, and are not simply reamed by your OS supplier having bought a device.
<= You missed the icon.
By these big-name companies moving manufacturing to the Far East for cheaper labour and using IP laws to defend high prices and block 'grey imports' of genuine goods at lower prices?
Think Tesco vs Levis anyone?
CD Wow! versus BPI perhaps?
A key problem here is that a lot, in fact almost all, of existing control systems were NOT designed to be secure enough to have world+dog probing their nether regions over t'Internet. Even when bugs are found, most operators are loath to change a fully commissioned working system due to the risks of other unexpected side effects, the possible lack of current personnel fully understanding an older system, and the difficulties of testing everything on a safe simulator/system before you go live with it.
With expected lifetimes of 10-20 years, do you really think they will replace them sooner to fix the deep-seated design problems, or just ignore the risks because it's the "done thing" in this new business model?
I would be less disturbed if you had said turkeys can be tasty, instead of "very useful".
"If it does the job, and with a lot less cycles than ZFS, what is the problem?"
The problem is no integrity checking, and it's the same issue for Linux software RAID, etc. My data is valuable, so I want to know if it is uncorrupted, and silent corruption is something I have seen before.
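If the filesystem won't checksum the data for you, the poor man's stand-in is a manifest of file hashes you can re-verify after a rebuild. Quick sketch (the manifest format here is my own invention, not any particular tool's):

import hashlib, os, sys

def sha256_of(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    for dirpath, _, names in os.walk(root):
        for name in sorted(names):
            p = os.path.join(dirpath, name)
            print(f"{sha256_of(p)}  {p}")

if __name__ == "__main__":
    build_manifest(sys.argv[1])    # keep the output somewhere off the array

Crude, and it only tells you after the fact, but at least you know which files to restore from backup.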
"Why does Oracle Linux use OCFS2?"
Because ZFS' license is not compatible with the Linux kernel's GPL one, resulting in it generally being relegated to user-space, where performance sucks (same for all other FUSE file systems). This is a legal issue, not a technical one.
"ZFS is just a ripoff of WAFL"
Hmm, I think the NetApp versus Sun/Oracle case was closed on that one after several of the patents were struck down. Odd you see that as a problem, as NetApp's customers like things like snapshots and copy-on-write. OK, they don't like the usurious license fees NetApp like to charge to actually *use* such features, but that is a separate issue.
"It also has problems with hardware RAID"
Not really, but if you use hardware RAID, or a separate software RAID layer, to present the storage to ZFS, you then lose the key advantage of error detection and recovery of 'silent' HDD/bus/memory errors that most dumb RAID systems miss. It will at least tell you the file(s) are corrupt, but too late to do anything by then.
I have wondered why you have such a problem with anything Sun-related, as your other posts on DB stuff are clear and rational. So why are you not so caring about data integrity in a storage system? What do you use/recommend to verify data is exactly the same as when written?
The problem with simply monitoring the SMART status is it won't know about bad sectors until you try to read them. Often by then it is too late.
SMART has support for a surface scan, and while that allows marginal sectors to be re-written, it just reports any uncorrectable sectors as bad, and you won't generally know about that until an HDD fails and you need to re-build the array.
Hence the advantage of the RAID scrub process:
1) It accesses all of the HDD sectors (or all in-use ones in the case of ZFS), forcing the HDD to read and maybe correct/re-map any that are marginal, just as the SMART surface scan will do.
2) For any that are bad, it, by virtue of being in a RAID system, can then re-write any bad sectors with the data from the other HDD(s) and that will normally 'fix' the bad sector (as the HDD will internally re-map a bad one on write, and you still see it as good due to the joys of logical addressing).
Recent Linux distros like Ubuntu will do a RAID scrub on the first Sunday of the month if you use the software RAID, which is good. But I don't know of any cheap NAS that pays similar attention to data integrity.
Not counting RAID-0, OK?
One critical issue in my view is data integrity. That is what a NAS is supposed to do, store data reliably. But the article fails to address that. Do they support internal file systems that have data checksums (like ZFS)?
If not (and this is important even with ZFS), do they support automatic RAID scrubbing, where periodically all of the HDDs are read and checked for errors in the background?
Most folk at home will only have 1 HDD of protection (RAID-1 or RAID-5) and what happens later in life is a HDD fails, you replace it and find bad sectors on the other disk(s), thus corrupting the valuable data. With two HDD of protection (e.g RAID-6 or ZFS' RAID-Z2) you can cope with one error per stripe of data while rebuilding, but that is not always enough.
That is why you want to check once per fortnight/month that the HDDs are all clean, and in doing so allow the HDD to internally correct/re-map sectors that had high error rates when read, and if necessary re-write any uncorrectable ones from the rest of the RAID array.
Of course, sudden HDD failure happens, maybe even multiple HDDs, or PSUs, as does "gross administrative error", which is why you should all repeat "RAID is not a backup" twice after breakfast...
Seriously, you think that a home/small business internet connection can support access to 20TB of data in the cloud?
" I can get built-in RAID on most PC mobos" - that is almost certaily 'fake RAID' where the BIOS can boot from it but it is the OS that has to actually do the RAID computation. OK for simple RAID-1 or similar its easier than ZFS, but it still lacks the advantages of data checksums.
ZFS is not the only file system that does that, GPFS has them as well, but most others I think only do metadata checksumming (e.g. Linux ext4, and MS' new and unproven ReFS unless you explicitly ask for the extra checks/load).
I can't believe you have not ever had that horrible feeling when you get a/multiple disk errors and no simple way to find out *what* has been corrupted by the failure of "sector 102345569", etc. Also I am not the only one I know to have had data corruption in a file system due to bus/memory errors that were 'silent', so it was only on decompressing a ZIP archive (which has integrity checks in it) that it was discovered. Most other files have no checksums, so the true extent of the damage was not known and the tedium of complete backup restoration had to be undertaken.
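That is exactly the sort of check that caught it: ZIP archives carry CRCs per member, so even the standard Python zipfile module can spot a silently damaged file. A tiny sketch:

import sys, zipfile

with zipfile.ZipFile(sys.argv[1]) as zf:
    bad = zf.testzip()              # first member whose CRC fails, or None
    print("archive looks intact" if bad is None else f"corrupt member: {bad}")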
We all know you have an irrational dislike of all things Sun, but from an integrity point of view ZFS is one of the best choices for file systems, unless you are playing big-league with IBM's distributed system.
"her daughter had managed to post gibberish" is so amazing, to be as capable as virtually all other Twitter users to post gibberish! Get her a mensa application now!
Indeed, the question that remains for everyone outside the USA (and hopefully some inside) is: do you trust Intel/McAfee?
If it can hide stuff from the OS, how do you check what is there and who put it there?
Where is the option for "fix the known damn bugs and quit pissing around with GUI"?
I'm no expert, but I think Mars once had a decent atmosphere; something happened a long time ago to kill the planetary dynamo that provided the magnetic shield stopping the solar wind stripping it away. Now we see little of what was once there.
Remember Venus is the same (approx) gravity as the Earth, but has a *much* higher atmospheric pressure.
Hopefully some more expert commentards will provide you with enlightenment...
Indeed, they will do anything if the price is right. Even sadder is how low that 'price' often is :(
The DRM aspect is why it is so important to keep TPB afloat - so they learn that DRM is bad for *paying* customers and the pirated sort is a better experience, you know, the sort you would actually prefer to pay for.
It took years for the music industry to accept DRM-free once they realised that the battle was lost and that the majority of customers, when treated nicely, are happy to pay for content.
So far we may have got past the "you are probably a thief" non-skippable crap with DVDs, etc, but we don't have the legal freedom to use media on any platform we want and to skip crap like trailers as we wish. The move to HD and streaming is a new battleground for DRM, and it must be the public at large that wins this one, unless we all want to be digital serfs to the few biggest corporations who hold the DRM-forged manacles.
Because you are paying someone else to do it.
In theory you get reliable operation and nice management tools, but in practice you often get a plate of donkey gonads to suck upon.
Microsoft's FAT32 patent is a very bad example, the reason no one else did it earlier was IT WOULD NOT WORK WITH WINDOWS until Microsoft did it their way.
No one in their right mind would choose to do things the FAT32 way unless they have to work with MS software, and their oligopoly status means you have to.
This is an example of why patents need reining in - where you can't interoperate without infringing. The basic idea behind the patent system is good; the problems I have are those already stated, that they may be:
not very inventive
needed for interoperability
too long lasting in areas (e.g. software) where 20-25 years represents many, many generations of a product.
OK, that explains something and makes more sense. I had assumed this was an extension of the device driver signing process where they did look at your code.
So am I right in assuming that to get approval MS get to see all of the 'trade secrets' of your source code, quite possibly to copy (sorry, "influence") for new MS products, but you don't get to see theirs?
If you have to bare all, at least go open-source and maybe get community help in bug-fixes, etc.