Re: Still don't see any reasons to buy it
I think you will find "The Register" is not a monolithic Borg, but an outlet for a number of journalists with a varying range of personal opinions.
Yes, I guess.
I am not a software scientist by training but have ended up programming in C (mostly) to solve difficult problems, not necessarily NP-hard, but not easy for the affordable hardware of the time.
Most success came from starting with a good book, in particular Numerical Recipes, and timing where things were held up, for example using the profiling tools that come with Visual Studio.
However, in a number of cases I resorted to approximating the problem or allowing sub-optimum solutions because it was good enough for the system requirements and sometimes vastly faster.
E.g. I once reduced the processing time of some software that re-projected (warped) an image by implementing my own task-aware cache rather than using the DOS/Windows95 FAT system's own one. Today you don't see file systems that inefficient in common use, and RAM is plenty big enough just to load the whole source image for random access, but the original case was a ~100MB file in the days when you might have 16MB RAM in a PC.
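As a sketch of the idea (not the original DOS-era code — the tile indexing, callback and eviction policy here are all illustrative), a task-aware cache just keeps the most recently touched tiles of the source image in RAM:

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache of fixed-size tiles from a large source image.

    Illustrative sketch: `read_tile` is a hypothetical callback that fetches
    one tile from slow storage; `max_tiles` bounds the RAM used.
    """
    def __init__(self, read_tile, max_tiles=64):
        self.read_tile = read_tile      # callback: tile index -> bytes
        self.max_tiles = max_tiles
        self.tiles = OrderedDict()      # tile index -> bytes, in LRU order

    def get(self, index):
        if index in self.tiles:
            self.tiles.move_to_end(index)    # mark as most recently used
            return self.tiles[index]
        data = self.read_tile(index)         # slow path: hit the disk
        self.tiles[index] = data
        if len(self.tiles) > self.max_tiles:
            self.tiles.popitem(last=False)   # evict least recently used tile
        return data
```

The win comes from locality: neighbouring output pixels in a warp map to neighbouring source tiles, so most lookups hit the cache instead of the (then glacial) file system.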
Parity & RAID is a bet, based on the probability of multiple failures occurring at once. The quoted figures you get for availability are based on the assumption of statistically independent failures.
We all know that is bollocks, of course, as HDDs are often from the same batch and so may share manufacturing defects, and failure can be provoked by events such as fan failure, PSU surges, etc., that are common to the whole array.
So RAID != Backup and never forget that!
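With made-up numbers, the bet looks like this under the (dubious) independence assumption — correlated same-batch failures only make the real odds worse:

```python
from math import comb

def p_at_least_k_failures(n, k, p):
    """P(at least k of n disks fail), assuming each disk fails independently
    with probability p over the window of interest (e.g. a rebuild)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative figures only: a 6-disk array, 3% per-disk failure chance.
single_parity_loss = p_at_least_k_failures(6, 2, 0.03)  # RAID-5: 2nd failure is fatal
double_parity_loss = p_at_least_k_failures(6, 3, 0.03)  # RAID-6: 3rd failure is fatal
```

The quoted availability figures come from exactly this kind of sum; shared batches, fans and PSUs break the independence it relies on.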
The trade-off in going to triple parity depends on your workload and the CPU/controller, etc., but it often demands larger stripes to be efficient, and that in turn hammers the IOPS capability. You can get a lot of that back with SSDs for journal/ZFS Intent Log use, though.
In most cases you get one failure and then others croak when the load of a rebuild kicks in; in that case double parity is a great help.
But you also get cases of an array being powered off after years of use and a number of HDDs just giving up the ghost and not spinning up; at that point you really are looking at a new array and restoring from backup :(
That was my figure of 1.2 times (or thereabouts): say 6 disks for 5 disks' capacity = 6/5 = 1.2, or, with double parity and more disks per stripe, 12/10.
Always go double parity if you can, and scrub periodically, as a HDD-failure RAID rebuild is when the trouble starts!
I am guessing they don't consider data objects bigger than a single HDD then?
Presumably the protection against HDD failure is now based on object duplication, so a 2-times storage penalty, rather than something like RAID-5/6 or RAID-Z2 where you get a penalty of around 1.2?
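The overhead arithmetic is simple enough to sketch:

```python
def storage_overhead(data_units, redundancy_units):
    """Raw storage consumed per unit of usable data."""
    return (data_units + redundancy_units) / data_units

raid5  = storage_overhead(5, 1)   # 6 disks for 5 disks' worth of data
raidz2 = storage_overhead(10, 2)  # double parity over a wider stripe
mirror = storage_overhead(1, 1)   # plain object duplication
```

So duplication costs 2.0x raw-to-usable, against roughly 1.2x for the parity schemes.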
No, you don't need a TV licence to watch Internet streamed video. Or to buy/rent DVDs, funnily enough...
It is sad to see someone believe that "the promotion of innovation and growth" comes from whoring your customers from port to port, rather than developing things people actually need or want to pay for.
Er no. Not if you have VB-heavy business stuff based on years of painful Office-based development, which is a big point for corporate users.
Still, aside from the debate about the fundamental usefulness of WinRT, at least Nokia is offering something that looks a viable competitor in battery life, price, etc.
Quite probably system bloat, but maybe it is due to DRM? Consider this analysis:
Shame none of that protects you, the owner of the PC, from malware...
Has anyone compared XP with Windows 7 on the same hardware to see if this is a factor?
The sad thing is that this attitude, which is by no means uncommon, is really NOT how the majority of USA citizens think it should be.
I have never had a "problem" as such with USA immigration and border control, but as an anonymous person from Europe I have seen how slow and troublesome it can be. As a point of comparison, on a flight to Chile, where fellow passengers were actually being fined for failing to declare fruit & veg (in that case a bag of tea), the staff were still polite and pleasant, and no guns were pointed at the visitors during the procedure.
I really wish that the USA gov, and its representatives, could be like the majority of pleasant and helpful folk I have met in my travels in the USA.
The protection from "the government" is supposed to be due process and the court of law, which gets its power from the people's choice of elected representative.
Please stop laughing in the back seats!
"B) What risks does it prevent?"
Given that a lot of "stock" phones will have an OS that is old, unpatched and vulnerable, the only reason I can see is to prevent users from loading un-vetted apps from dodgy sites.
However, there appear to be enough dodgy apps from the official site to limit that aspect as well...
"with a new version, the auditing needs to be all over again"
That is why you have an automated process, one where the agreed compilers and build environment are used and you can check that the binary coming out of the audit system matches the download version for a given code release.
Then your review of the source code changes is a meaningful activity.
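A minimal sketch of the "binary matches the download" check, assuming the build is fully deterministic (pinned compiler versions, no embedded timestamps, etc. — the file names here are placeholders):

```python
import hashlib

def sha256_of(path):
    """Hex SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def binaries_match(audited_build, published_download):
    """True if the binary out of the audited build environment is
    byte-identical to the published download."""
    return sha256_of(audited_build) == sha256_of(published_download)
```

If the hashes match, a review of the source changes really is reviewing what users run.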
But until the code has been independently audited by cryptographic experts (ideally not from the USA, etc, where there is a justifiable suspicion of court-ordered tampering) it is hard to trust the system, even as compiled from source, not to have either a foolish or deliberate flaw that makes the security much less than the password.
"a TrueCrypt virus. One that only attacks that particular program and inserts a backdoor into installed copies"
Really, you don't think that a simple key logger to grab the password would be easier and more deniable? If your machine has been compromised, even by a user-space program for your account, then ANYTHING you do from then onwards is, by definition, insecure.
"most Android users are quite happy with the Google-backed ROM which comes pre-installed"
No, I think most simply live with the donkey gonad-sucking software that device manufacturers supply and then practically never patch or fix.
Most OSes have several patches per *month* for security; when did your phone last get patched? And the only time I got a "patch" for my HTC it was a complete image, which involved a system reset and having to configure everything again. Look, you imbeciles at HTC, Google, etc., patching a Linux-based OS is a known technology, use it!
An image more like a camel's toe under the tent?
"when Microsoft software is offered free, then it's even worser than offering them drugs."
You obviously missed the bit about the school having to be fully paid up to MS, using public money, to get this. You see, that is the point, MS never offers anything for "free", it always comes with restrictions and is simply there to get them while young.
Now MS are a business, and making money is fine if it is done by honest competition and offering the best products. Some of MS's products are very good, but others are not so good, and they also have a long and inglorious history of abusing their oligopoly on the PC desktop and OEM relationships to kill competition rather than to make something better.
Funny how most pro-MS folks are ACs?
To add: I have no love of MS and can't see any special reason to buy one, but younger non-technical friends find the cheaper Nokias are "not bad" as smartphones.
Ah, but it would do wonders for FLOSS in the enterprise.
"All the Chinese are racists?"
No, that comes under xenophobia I think and not race. And it is down to history mostly. A bit like Europe's last several centuries of bloodshed...
Meet the new boss, same as the old boss...
Very much so. Now then, do you have a list of teenage daughters I could chat to?
Thanks - a complete cad & bounder.
Please Apple, could you consider a laptop with:
A bigger 16:10 screen, say 17", with at least 1200 lines resolution.
A proper Ethernet connector.
A price in the ~£1k range (or less, but let's not ask for Unicorns here).
A keyboard that gets rid of "Caps Lock".
A touch pad that is off to the side of the keyboard, so folk don't graze it with palms while typing.
Indeed, given the US law on this, what is the point in asking? Those who know are bound, on pain of imprisonment, to lie in order to cover any NSA requests.
Long term, this is going to do USA-based businesses no good at all, and if the USA gov is able to act and see sense, then they will allow at least honest answers about the number and general nature of FISA requests.
Sure, it won't deal with all issues, but then such questions about scale and privacy have half a chance of being answered honestly to EU countries, etc, and that may just help the USA to rebuild some measure of trust.
Why did I read that as "Facebook gobbles upstart Onanist for $200M"?
Is that closer to the mark?
My dirty mac =>
At this precise point in time I suspect there are more Americans worried about the Gov shutdown and potential default. Quite probably, that is more damaging to the USA than any/all of the revelations about the NSA doing what tin-foil hat folk knew all along.
This is supposed to be secure email for within the Brazilian government and not about the rest of the world.
Yes, most El Reg readers know and have known for years that email is, in almost every case, about as secure as a postcard, but it still ends up being used with some expectation of privacy. Now they know, rather than suspect, that the NSA hoovered it all up (J Edgar'd it up?), they feel it is something to bring back under national control.
As for the rest of us, until we can get and manage some sort of open/free public key system and have an interoperable email standard that "just works" on everything from the kids' computers to granny's, without any technical knowledge, then we (as in the public) are still out in the open.
"uncool brand" is kind of how most folk see MS, as their work computer it has that "dancing dad" aspect.
Too much trouble. The UK TV Licensing 'enforcers' just assume *everyone* is watching TV and thus must have a licence, unless they can show otherwise...
The argument is not that another country would be any better, but that the combined effect of them would be to ensure that no single *one* of them is in a position to, for example, compromise high-level SSL certificate generation, or backdoor key standards.
However, given the power of US-controlled businesses in this area (MS/Apple in personal computers, Google/Facebook in search and privacy violation, Verisign, etc, in "trust" certificates) this may be more symbolic than effective.
"EU and the UN to see how it would suck up expenses and how agreement would be impossible to reach"
Dude, you should take a closer look at the USA gov for a moment, you know the one currently unable to act because its global credit card is maxed out?
"Does the VPN not have to go through their system? How on earth..."
Most likely they throttle YouTube along with torrents and Usenet access as a "waste" of the bandwidth that you might have imagined you paid for, but they have not throttled VPNs yet (or have too many big business users to dare).
Do you not get pestered for a MS log-in when setting up Windows 8? My guess is this will become mandatory and one ID that MS can use to track you, and slurp your data to SkyDrive for better analysis.
As if we did not have enough reasons not to desire a move to Windows 8 already...
The problem is that the media companies will not "trust" this sensible sort of path and will want things that probe into your system and/or use undocumented aspects of closed video drivers, etc.
Just look at the demands they recently put on 4k video and the debacle (already mentioned) of older BluRay players being broken in a vain attempt to shore up the DRM.
If they are really wanting to look at DRM in HTML5 browsers, they should also be addressing the issue of trust both ways about what the DRM can, and cannot, do in accessing the users own hardware.
"2012 doesn't have any UI by default"
Actually that is one of the best things MS has done for ages, dropping bloat and avoiding the temptation of someone, somewhere, deciding to surf the net on a critical box. Never thought I would up-vote TheVogon!
We operate in a rather specialised area as well, and over the years have done DOS, Win32, Solaris and Linux code (let's forget about x86 and DSP assembler, and FORTRAN on a PDP11/32, shall we?).
At one point we thought NT would triumph and I did some stuff for it, but the dumb changes of direction at MS and the rise of Linux as a decent platform mean that now we have a couple of legacy DOS applications to support (running on Linux under dosemu, cheaper and much less effort than porting and debugging them) and are moving off Solaris to Linux as fast as we can following Sun's borging by Oracle.
If you really want to keep your options open, then use something very generic like C++ and the Qt cross-platform tools. I still use MS Visual Studio for a lot of Linux development (where nothing really Linux-specific is needed) because it is a jolly good IDE!
Avoid being too vendor-specific, and if you can, make sure all new stuff is developed & tested on two different platforms (like Win32 + Linux), as then moving to a 3rd/newer/different platform is relatively easy because nothing already in use is too proprietary.
That is something I would worry about, for that (very slim) chance of it stalling on a level crossing and not being able to start, yet also not being able to lurch off in 1st gear using the starter motor.
Allow me to explain:
Don Jefe manages to remain coherent and thoughtful in his comments, even when clearly pissed off.
Eadon, while amusing at times, came across as rabid, thoughtless, and in need of a higher dose of dried frog pills.
There is inevitably a performance hit going to ZFS if all other things are equal due to the block checksums, etc, that it uses to guarantee higher integrity.
However, you can often get a major boost if using SSD for the ZIL (ZFS Intent Log) as that provides fast confirmation of data commitment (so your application 'knows' that the data is saved) while also allowing ZFS to schedule the stripe write over the main storage HDD in a more efficient manner.
Enough RAM (about 1GB per TB of storage is the rule of thumb) is, obviously, also an advantage. But make that ECC memory, as there is little point in using ZFS for slower but higher-integrity storage if the data can be (and occasionally is) corrupted in memory.
Finally, be sure to run ZFS as a kernel-mode driver, not as a user-space loop-back device (as is often done for licensing reasons), otherwise performance takes a major hit (one of the reasons NTFS on Linux is not so fast).
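As a configuration sketch of the layout described above (pool and device names are made up; check your own device paths before doing anything like this):

```shell
# Double-parity main storage over six HDDs:
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# SSD partition as the ZFS Intent Log (SLOG) for fast write confirmation:
zpool add tank log nvme0n1p1

# Optional second SSD partition as an L2ARC read cache:
zpool add tank cache nvme0n1p2
```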
Is it not much better to do it the other way round: use ZFS to combine bulk HDD storage and SSD write-intent log drives into a high-integrity array, then use iSCSI to export a 'block device' to any application that is incapable of using a standard file system?
I am no expert in storage systems, but from my perspective we should be moving away from applications needing block devices (presumably an approach dating back to horribly inefficient FAT systems and the like) and towards network file systems, so user and application data is stored as files, but on centrally managed and backed-up machines.
No, I think the real lesson is if you really annoy someone with the massive resources of the FBI/NSA behind them then your chances of being caught by some minor flaw in any one of your tools are high.
That is not to say I agree with the USA's "war on drugs" (or terror/liberals/whatever). Personally I think the approach there, and in the UK, is flawed and failing, but that is another issue.
The first point is you can't really have an open-source DRM module, since it would be easy to modify to render it worthless. So you will either need a closed and untrustworthy browser, or the DRM to be another plug-in along the lines of Flash/Silverlight.
Then you get the issue of the anal executives who demand that the only DRM they will use has to be tied to the OS and hardware of the machine, so you lose further freedom as no open OS or graphics drivers will be allowed.
The final point is MUCH more important: I have no problem with the basic concept of protecting content against casual copying, but that is not what will happen. It will, if it becomes a "standard", be used by web sites and other miscreants for all sorts of other things.
And all of that is a damned big difference!
The issue is much worse than the browser, as the goal of DRM is to control *your* hardware according to someone else's agenda. What this will mean is you still won't get decent services on most platforms irrespective of the browser, because only the likes of MS (and possibly Apple), and maybe certain hardware, will be deemed 'secure enough' for content delivery.
Shame they are not secure enough to protect your own data or privacy...
And that is the real issue here. Flash was dropped from Netflix because it was not deemed to be 'secure enough' and as a result no more sales to folk running platforms that don't support silverlight. Oh yes, and its days are numbered as well.
Also it is not just video that will be "protected" but forced adverts and, in very little time, malware that uses the strong DRM to make monitoring it difficult or near impossible.
A pox on all of them!
You get the worst of both worlds. To be worthwhile, 4k video will mean massive file sizes, and given the pitiful state of broadband in a lot of countries and daily caps of a few GB, downloading is not a serious option. Streaming, the media companies' preferred option, will also be impossible for most (unless compressed to hell, again).
So it will be on disk.
But then you need an internet connection to make it work, so you can't use it anywhere on a remote holiday!
As for catching the pirates: if 4k ever comes to a general-purpose PC then I expect it will be malware that uses a stolen identity/credit card/whatever to "purchase" the file, then torrents it. After a few cases of the police being called out to the obvious victims of this, they will just ignore it, and so it will have achieved very little for a lot of consumer pain. And it will make consumers think twice about using such services if the papers report such false accusations.
After all, all it takes is one copy torrented per release and their plan has failed.
The first point is a perimeter firewall & its rules won't help your external users under DDoS, as most likely your link will be saturated and/or the firewall overloaded with malformed packets. But what it can do is prevent your internal users from losing the service, which I believe was the issue in the reply to JDX. Of course, it also reduces the probability of a service under overload becoming vulnerable.
The second point, the external users' IP addresses: it all depends. For example, my home is on cable in the UK and my IP address has changed only 7 times in the last 4 years. And had I used a /16 mask then only 2 changes would have been needed (obviously trading off more potential zombies being able to attack).
We have an arrangement whereby we can log in to our web server and ask for that IP address to be added to the firewall permissions; in a few minutes it then opens up SSH access, etc. Not totally automated, but good enough to allow modest home (or on-site) working to function while keeping out almost all login-forcing attempts.
As for IPv6, we just ignore it for now as our current infrastructure (and most UK broadband connections) don't support it by default. But eventually we will have to use it, so yes I will accept more potential pain there.
"Close the git server from the web" isn't an option if you want to allow your developers to work remotely.
No, but you could have a firewall list that only allows the IP addresses of your developers to gain access. Even with a bit of IP re-use on domestic broadband being added in, you are down from ~1 billion computers able to attack to a hundred or so.
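As a sketch of the idea (the addresses are documentation examples, not anyone's real developer IPs), Python's ipaddress module shows the membership test an allow-list firewall rule performs:

```python
import ipaddress

# Hypothetical allow-list: exact developer IPs, or a wider mask where an ISP
# moves people around a /16 (the trade-off mentioned above).
ALLOWED = [
    ipaddress.ip_network("203.0.113.7/32"),
    ipaddress.ip_network("198.51.0.0/16"),
]

def permitted(client_ip):
    """Would this client get through the allow-list?"""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED)
```

In practice this lives in iptables/nftables rules rather than application code, but the trade-off is the same: a wider mask survives ISP renumbering at the cost of letting more addresses through.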
Compressed memory is not a new idea, but it is a good idea for certain system usage patterns. These days even a web browser can gobble stupid amounts of RAM and in-memory compression is typically faster than disk paging (and less damaging to flash storage devices).
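A rough illustration of why it works (the buffer is a made-up stand-in for browser heap pages): in-memory data is rarely random, so even a fast compression level shrinks it several-fold:

```python
import zlib

# Repetitive, browser-ish content standing in for real heap pages:
page = b"<div class='item'>hello world</div>" * 200

# Level 1 = fastest, which is what a compressed-RAM store would favour,
# since the point is to beat disk-paging latency, not to maximise ratio:
packed = zlib.compress(page, 1)

ratio = len(page) / len(packed)
```

Decompressing a page like this takes microseconds against milliseconds for a disk page-in, and spares flash the write wear.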
How do you "intentionally cause damage to protected computers"? By definition they are protected, and AFAIK the LOIC is just a DDoS flood tool, so you are really just "causing temporary nuisance to a web server".
Or are anti-capitalist/monopolist protests now considered a terrorist charge so they have to claim the server is 'damaged' by repeated pointless requests in order to justify the prosecution?
No, it is down to backward compatibility, which is a BIG THING given the millions of lines of code written pre-Unicode/UTF-8.
Basically, in order to work, the single-byte options have to map to the old ASCII set (which is 7-bit due to the old parity issues from the serial comms days), and those extending to 2/3/4 bytes cover everything else (including the "extended ASCII" of the original IBM PC, with the £ symbol and similar, which you might think is 'imperial').
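That byte mapping can be seen directly:

```python
# One byte, high bit clear: ASCII maps to itself, so any old 7-bit
# ASCII file is already valid UTF-8 with no conversion needed.
assert "A".encode("utf-8") == b"\x41"
assert b"plain old 7-bit text".decode("utf-8") == "plain old 7-bit text"

# The pound sign is U+00A3, outside 7-bit ASCII, so it takes two bytes:
# 110xxxxx 10xxxxxx carrying the 11 bits of the code point.
assert "\u00a3".encode("utf-8") == b"\xc2\xa3"
```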
The fact that some programmer, in an attempt to show the "benefit of Unicode", should use a 'double' variable for PI and only give 6 figures tells you they should be executed and their programs not!
But yes, you speak the truth - UTF-8 is better for all practical reasons because it won't break old software/code and yet it allows all characters you (and your customers/users) might want. Subject to matching system fonts - a rant for another day...