1545 posts • joined 15 Mar 2007
Re: @Don Jefe
Allow me to explain:
Don Jefe manages to remain coherent and thoughtful in his comments, even when clearly pissed off.
Eadon, while amusing at times, came across as rabid, thoughtless, and in need of a higher dose of dried frog pills.
Re: "ZFS performance level was less than half of xfs"
There is inevitably a performance hit going to ZFS if all other things are equal due to the block checksums, etc, that it uses to guarantee higher integrity.
However, you can often get a major boost by using an SSD for the ZIL (ZFS Intent Log), as that provides fast confirmation of data commitment (so your application 'knows' the data is saved) while also allowing ZFS to schedule the stripe writes to the main storage HDDs in a more efficient manner.
Enough RAM (about 1GB per TB of storage is the rule of thumb) is, obviously, also an advantage. But make that ECC memory, as there is little point in trading speed for high-integrity storage if the data can be (and occasionally is) corrupted in memory.
Finally, be sure to run ZFS as a kernel-mode driver, not (as is often done for licensing reasons) as a user-space FUSE module, otherwise performance takes a major hit (one of the reasons NTFS on Linux is not so fast).
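The integrity overhead mentioned above comes from ZFS checksumming every block it writes and verifying on every read. A toy Python sketch of the idea (not ZFS's actual on-disk format, which stores fletcher/SHA-256 checksums in the block pointers):

```python
import hashlib

# Toy model of a checksummed block store: every write records a
# checksum, every read verifies it, so silent corruption is caught
# instead of being passed back to the application.
class ChecksummedStore:
    def __init__(self):
        self.blocks = {}  # block_id -> (data, checksum)

    def write(self, block_id, data):
        self.blocks[block_id] = (data, hashlib.sha256(data).digest())

    def read(self, block_id):
        data, checksum = self.blocks[block_id]
        if hashlib.sha256(data).digest() != checksum:
            raise IOError("checksum mismatch: block %r corrupted" % block_id)
        return data

store = ChecksummedStore()
store.write(0, b"important data")
assert store.read(0) == b"important data"

# Simulate bit-rot on the media: the next read of block 0 now fails
# loudly instead of silently returning bad data.
data, checksum = store.blocks[0]
store.blocks[0] = (b"importent data", checksum)
```

This per-block verification is the cost you pay on every I/O, which is where the "less than half of xfs" figure comes from when nothing else is tuned.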
Re: "just get a large enough LUN and put ZFS on that"
Is it not much better to do it the other way round: use ZFS to combine bulk HDD storage and SSD write-intent log drives into a high-integrity array, then use iSCSI to export a 'block device' to any application that is incapable of using a standard file system?
I am no expert in storage systems, but from my perspective we should be moving away from applications needing block devices (presumably an approach dating back to horribly inefficient FAT systems and the like) and towards network file systems, so user and application data is stored as files on centrally managed and backed-up machines.
Re: Go with it
No, I think the real lesson is if you really annoy someone with the massive resources of the FBI/NSA behind them then your chances of being caught by some minor flaw in any one of your tools are high.
That is not to say I agree with the USA's "war on drugs" (or terror/liberals/whatever). Personally I think the approach there, and in the UK, is flawed and failing, but that is another issue.
First point is you can't really have an open-source DRM module, since it would be easy to modify and thereby render worthless. So you will either need a closed and untrustworthy browser, or the DRM to be another plug-in along the lines of Flash/Silverlight.
Then you get the issue of the anal executives who demand that the only DRM they will use has to be tied to the OS and hardware of the machine, so you lose further freedom as no open OS or graphics drivers will be allowed.
Final point is MUCH more important: I have no problem with the basic concept of protecting content against casual copying, but that is not what will happen. It will, if it becomes a "standard", be used by web sites and other miscreants for all sorts of other things.
And all of that is a damned big difference!
Re: Over my cold dead browser
The issue is much worse than the browser, as the goal of DRM is to control *your* hardware according to someone else's agenda. What this will mean is you still won't get decent services on most platforms irrespective of the browser, because only the likes of MS (and possibly Apple), and maybe certain hardware, will be deemed 'secure enough' for content delivery.
Shame they are not secure enough to protect your own data or privacy...
And that is the real issue here. Flash was dropped from Netflix because it was not deemed 'secure enough', and as a result there are no more sales to folk running platforms that don't support Silverlight. Oh yes, and its days are numbered as well.
Also, it is not just video that will be "protected", but also forced adverts and, in very little time, malware that uses the strong DRM to make monitoring it difficult or near impossible.
A pox on all of them!
Re: What a bunch of charmers they are to be sure.
You get the worst of both worlds. To be worthwhile, 4k video will mean massive file sizes, and given the pitiful state of broadband in a lot of countries, and daily caps of a few GB, downloading is not a serious option. Streaming, the media companies' preferred option, will also be impossible for most (unless compressed to hell, again).
So it will be on disk.
But then you need an internet connection to make it work, so you can't use it anywhere on a remote holiday!
As for catching the pirates: if 4k ever comes to a general-purpose PC then I expect it will be malware that uses a stolen identity/credit card/whatever to "purchase" the file, then torrents it. After a few cases of the police being called out on the obvious victims of this, they will just ignore it, so the scheme will have achieved very little for a lot of consumer pain. And it will make consumers think twice about using such services if the papers report such false accusations.
After all, all it takes is one copy torrented per release and their plan has failed.
The first point is that a perimeter firewall and its rules won't help your external users under DDoS, as most likely your link will be saturated and/or the firewall overloaded with malformed packets. But what it can do is prevent your internal users from losing the service, which I believe was the issue in the reply to JDX. Of course, it also reduces the probability of a service under overload becoming vulnerable.
On the second point, the external user's IP address, it all depends. For example, my home is on cable in the UK and my IP address has changed only 7 times in the last 4 years. And had I used a /16 mask then only 2 changes would have been needed (obviously trading off more potential zombies attacking).
We have an arrangement whereby we can log in to our web server and ask for that IP address to be added to the firewall permissions; within a few minutes it then opens up SSH access, etc. Not totally automated, but good enough to allow modest home (or on-site) working to function while keeping out almost all login-forcing attempts.
As for IPv6, we just ignore it for now as our current infrastructure (and most UK broadband connections) don't support it by default. But eventually we will have to use it, so yes I will accept more potential pain there.
Re: Not even a small developer would trust it for private, internal code
"Close the git server from the web" isn't an option if you want to allow your developers to work remotely.
No, but you could have a firewall list that only allows the IP addresses of your developers to gain access. Even with a bit of IP re-use on domestic broadband being added in, you are down from ~1 billion computers able to attack to a hundred or so.
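A minimal sketch of that allow-list arithmetic using Python's ipaddress module (the addresses below are made up; a real deployment would express this as iptables/nftables rules, this just shows the numbers):

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-list: one /16 block per developer's home ISP.
# A /16 absorbs the occasional ISP-driven address change, at the cost
# of letting more of that ISP's pool through.
allowed = [ip_network("81.100.0.0/16"), ip_network("92.40.0.0/16")]

def permitted(addr):
    """True if the connecting address falls in any allowed range."""
    addr = ip_address(addr)
    return any(addr in net for net in allowed)

assert permitted("81.100.23.7")
assert not permitted("203.0.113.5")

# The reduction in attack surface: two /16s vs the whole IPv4 space.
reachable = sum(net.num_addresses for net in allowed)
print("addresses allowed:", reachable)
```

Two /16s come to 131,072 addresses out of roughly 4.3 billion, which is the "down from ~1 billion computers" point in concrete terms.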
Re: Worthwhile Features?
Compressed memory is not a new idea, but it is a good idea for certain system usage patterns. These days even a web browser can gobble stupid amounts of RAM and in-memory compression is typically faster than disk paging (and less damaging to flash storage devices).
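The trade-off is easy to demonstrate; a rough Python sketch using zlib as a stand-in for the much faster WKdm/LZ4-class compressors an OS would actually use:

```python
import zlib

# A 64 KiB "page" of fairly repetitive data, as browser heaps often are.
page = (b"<div class='comment'>...</div>" * 3000)[:65536]

# Use the fastest compression level, as a kernel would, since the point
# is to beat a disk round-trip, not to maximise the ratio.
compressed = zlib.compress(page, 1)
ratio = len(compressed) / len(page)

# Keeping the compressed copy in RAM costs a fraction of the page size,
# and decompressing it back is far cheaper than paging in from disk
# (and causes no wear on flash storage).
assert zlib.decompress(compressed) == page
print("compression ratio: %.3f" % ratio)
```

The win obviously depends on how compressible the idle pages are; incompressible data (already-compressed images, encrypted buffers) gains nothing and still has to go to swap.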
How do you "intentionally cause damage to protected computers"? By definition they are protected, and AFAIK the LOIC is just a DDoS flood tool, so you are really just "causing temporary nuisance to a web server".
Or are anti-capitalist/monopolist protests now considered a terrorist charge so they have to claim the server is 'damaged' by repeated pointless requests in order to justify the prosecution?
Re: Make 'em pay
No, it is down to backward compatibility, which is a BIG THING given the millions of lines of code written pre-Unicode/UTF-8.
Basically, in order to work, the single-byte codes have to map to the old ASCII set (which is 7-bit due to the old parity issues from the serial comms days), and sequences of 2/3/4 bytes cover everything else (including the "extended ASCII" of the original IBM PC, with the £ symbol and similar, which you might think is 'imperial').
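The backward-compatibility point can be shown in a few lines (Python here purely for illustration):

```python
# 7-bit ASCII text is byte-for-byte identical under UTF-8, which is why
# pre-Unicode code that only ever handled ASCII keeps working unchanged.
assert "Hello".encode("utf-8") == b"Hello"

# Anything outside ASCII grows to 2-4 bytes. The pound sign (U+00A3,
# familiar from the old "extended ASCII" code pages) takes two:
assert "£".encode("utf-8") == b"\xc2\xa3"

# A 4-byte example from outside the Basic Multilingual Plane:
assert len("😀".encode("utf-8")) == 4
```

Note the multi-byte sequences never contain bytes below 0x80, so old code scanning for ASCII delimiters (slashes, NULs, quotes) can't mis-fire in the middle of a character, which is the real engineering cleverness of the design.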
Cardinal sin of computing
The fact that some programmer, in an attempt to show the "benefit of Unicode", should use a 'double' variable for PI and only give 6 figures tells you they should be executed and their programs not!
But yes, you speak the truth - UTF-8 is better for all practical reasons because it won't break old software/code and yet it allows all characters you (and your customers/users) might want. Subject to matching system fonts - a rant for another day...
WTF 16:9 again?
Good to see more resolution, but why oh why this fixation with the 16:9 ratio? Myself, and others, want more vertical real estate to actually read documents!
Same for this retina resolution, nice but it is no f-ing substitute for a usable vertical display size!
Meet the new boss, same as the old boss
The deep nature of the alleged NSA compromise is worrying for anyone who believes in that quaint concept of privacy or "reasonable suspicion". But swapping them for Chinese spies is not actually an improvement, so we have a long way to go before vendors can be trusted not to have backdoored things for whatever reason...
That meant nothing to me, University of Vienna...
I have no issue with torrenting stuff I can't buy, say old Stones bootleg albums and similar.
But...most of them are in low quality MP3 format, like 128kbit, and that is often noticeable even on the lowish quality of most bootlegs.
So you are much better off ripping to lossless FLAC format, and then converting to MP3 copies in another directory (or whatever format your portable player or car accepts). Oh, and make sure you have a backup copy! An external disk of a couple of TB is not that expensive and could save a lot of tears later!
Re: Storing H2 is not a problem
I think (but am not a metallurgist) that significant exposure to hydrogen causes embrittlement of various metals, which is a serious issue for storing and handling hydrogen (or H2-rich) fuel.
Can anyone else who knows cover that topic?
Re: Slightly fruity comparison
The obligatory XKCD reference:
The millisecond time-scale is the pulsar's rotation period (i.e. damn fast for something so massive!) while the abstract says "Within a few days after a month-long X-ray outburst, radio pulses were again detected" which implies that the matter accretion process is much longer and thus believable.
Re: reset while windows was running ...
Oh the joys of non-journalling file systems!
Re: Sun Server Keyboard...
But the Sun keyboards had support for a mouse coming out of them, which was long before USB hubs, etc, and a much neater arrangement. Also we had optical mice on our Sun machines of ~1992, which were cool, though they needed a gridded mouse pad.
Shame that Sun screwed up so badly, and Oracle has done even worse :(
causation is not correlation either!
"Touch notebooks accounted for 25 per cent of the total this year, which would seem to validate Redmond's touch-centric strategy for Windows 8"
Er, did MS not push OEMs hard to include touch screens? Hence no big surprise that a large number of buyers got them whether they want to use touch or not.
If you want to "validate Redmond's touch-centric strategy" you need to be reporting on the number of users of laptops with touch screens that actually use that feature.
Re: @John G Imrie
Closed network on a power station? Then buy at least two GPS and LW equipped time servers for redundancy.
Just to add to my already excessive comments here: the problem is not actually the leap seconds, it is the handling of time-steps by the OS and/or applications.
Now, if we eliminate leap seconds we will still have the occasional time-step in real systems: as someone monkeys with the clock, or a machine with a poor clock is forced to jump from time to time to keep up, or when an NTP server is blocked for several days due to a firewall or ISP fault and then comes back on-line, etc.
In all of those cases, a time step will happen, and you have to deal with it or face problems. If the software developers DO NOT TEST for this, problems will happen. That was the lesson from last year's Linux bug. In fact, getting rid of leap seconds will mean even less testing and probably a BIGGER risk later when time steps happen for other reasons.
What happens if you don't test =>
GPS itself uses atomic time starting in 1980, so no leap seconds. BUT, and this is where you are really wrong, the GPS navigation message gives you the GPS-UTC offset, so you can and do get leap-second information that way.
Re: Stopped clocks
In the Linux case it knows (from NTP) that a step is pending, and it jumps accordingly. The ntpd slewing/stepping is for "normal" time errors.
How VMs handle this is another story. From our experience VM timekeeping is pants anyway, so this is just another minor issue. If an OS/application really needs good timekeeping for some task (e.g. audit of network delays for security such as MITM detection), it really has to run on a physical machine.
Re: Stopped clocks
AFAIK the leap second problem that affected Linux last year was down to some timers getting deadlocked, and that was due to a kernel patch that broke the previously correct time-handling for leap seconds. And nobody realised or tested it until the live event:
A short check shows a Red Hat article including a leap second simulator, so you can test a system's behaviour and debug this predictable event:
While a big event, it just shows the price you pay for not testing something for all expected conditions.
Google slewed their machines over 1 day, so no step but the same long-term behaviour. Of course, during that day they were up to 1s out, but clearly that is no big deal for them.
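For the curious, a linear "leap smear" looks like this as a sketch (the exact curve and window Google used have varied, so treat the 24-hour linear ramp as an assumption for illustration):

```python
# Linear leap smear: absorb the extra second evenly over the 86400
# seconds of the day before the leap, instead of a single 1 s step.
SMEAR_WINDOW = 86400.0  # seconds over which the leap is spread

def smear_offset(seconds_into_window):
    """Extra offset (in seconds) the smeared clock has applied so far."""
    clamped = min(max(seconds_into_window, 0.0), SMEAR_WINDOW)
    return clamped / SMEAR_WINDOW

assert smear_offset(0) == 0.0        # start of window: no offset yet
assert smear_offset(43200) == 0.5    # halfway: half a second slewed in
assert smear_offset(86400) == 1.0    # end: the full leap second absorbed
```

No clock in the fleet ever steps or repeats a timestamp; the price is that every clock is wrong by up to a second against UTC during the window, which only works if nothing compares those clocks against an un-smeared source.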
The UK's position
Thankfully, it seems the UK's position is sensible, as covered here:
Basically they point out that not only would it mean that "1 day" is no longer synchronised to the Earth's rotation as common sense expects, but that you either end up with the long-term problem of sunrise/sunset getting seriously out of sync with our working hours, or you have bigger but less frequent steps, which are worse than a leap second every 1-2 years in terms of impacting badly designed systems.
Really, why don't they just make proper time-keeping a mandatory requirement for software systems and force vendors to test and demonstrate they can handle it? That is the biggest issue here: most folk don't have (or won't pony up for) an NTP simulator to allow them to set up and test the OS/application reaction to these predictable and recurring events, so they simply hope for the best and, surprise, surprise, they get the worst!
Have you read the linked slide show? Three obvious political-style lies are included:
Page 6 - "Leap seconds interrupt normal operation of timekeeping infrastructures and are costly in staff time to implement" - no, you use NTP and it just happens! Unless the system gets broken due to bad/untested software, you need no interaction whatsoever.
Page 6 - "On June 30, 2012, every clock in the world had to stop for one second" - no, they fscking did not "stop", they simply stepped one second when needed. If you rely on a basic time-stamp then you might see it repeated, etc, but if monotonic time actually matters deeply for program flow or synchronisation, you use one of the system-supplied functions that gives you that (e.g. clock_gettime() with the CLOCK_MONOTONIC flag) or you implement your code to cope in other ways.
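The CLOCK_MONOTONIC point in Python terms (time.clock_gettime() is a thin wrapper over the same C call; POSIX systems only, and time.monotonic() is the portable equivalent):

```python
import time

# Wall-clock time (CLOCK_REALTIME) can step backwards when the clock is
# corrected - by a leap second, an admin, or NTP. CLOCK_MONOTONIC is
# guaranteed never to go backwards, so it is the right clock for
# measuring intervals and ordering events.
t0 = time.clock_gettime(time.CLOCK_MONOTONIC)
time.sleep(0.01)
t1 = time.clock_gettime(time.CLOCK_MONOTONIC)

elapsed = t1 - t0
assert elapsed > 0  # monotonic time never runs backwards
print("elapsed: %.4f s" % elapsed)
```

Code that computes intervals by subtracting two wall-clock timestamps is exactly the code that breaks when a step happens, and that is a bug in the code, not in UTC.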
Page 12 - "and significant cost reduction in their implementation" - no, you use a system-supplied library that handles time correctly and then only one competent programmer needs to do it, and everyone else "just works". Having monkey-grade programmers implementing basic time keeping over and over again, and getting it wrong (by not RTFM) is a sign of a far deeper problem in your organisation and choice of staff.
How do we get this joker to correct this and apologise?
So in order to deal with incompetent or poorly tested OS designs that don't actually bother to address the definition of time that has been around for several decades, they want to break compatibility with anything that actually uses that definition by assuming that Earth rotation is never more than +/-1 sec from UTC?
A triumph of the incompetent many :(
Why don't they just tell folk to fix their software? It's not a new problem, after all.
And for those devices that are not connected to "know" about leap seconds: how exactly would they be keeping accurate time in the first place? And even if they do, how would it matter if they don't interact with systems that are kept in sync?
Re: It was a Y2K problem ..
Calling that Y2K seems a bit misleading given it is 13 years past that point!
But really, it seems odd that they did not have the on-board memory to store just a single byte more for the date/time and then have absolutely no chance of the system running out of time-keeping before its hardware & power supply died.
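A quick illustration of how far "a single byte more" goes (the device's actual counter width and epoch are unknown, so 32-bit Unix time stands in as the classic example):

```python
from datetime import datetime, timezone

# Each extra byte of counter multiplies its lifetime by 256. The classic
# example: a signed 32-bit count of seconds since 1970 runs out on
# 19 January 2038...
last = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
assert (last.year, last.month, last.day) == (2038, 1, 19)

# ...while one more byte (40 bits, unsigned) lasts ~35,000 years,
# comfortably beyond any hardware and power supply.
seconds_per_year = 365.25 * 86400
print("40-bit lifetime: %.0f years" % (2**40 / seconds_per_year))
```

Which is the point: the marginal cost of the extra byte is trivial next to the cost of a fleet-wide failure when the counter wraps.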
Most humour pokes fun at _someone_ and in many cases humour/satire is the way people deal with terrible things.
If you can't laugh then you will cry.
Re: The real reason for the laughing
I'm guessing these software patents are only valid in the USA?
Seems the rest of the world could go another way, and I'm guessing if the software was free then there'd be no issue with a company having to have a business presence of any sort in the USA. Also, I'm guessing that the majority of users don't need the majority of features, so it's probably not that much work to fix, say, GIMP's problems with 16-bit filters, CMYK output, etc.
Re: The real reason for the laughing
You have to wonder how much money would be needed to turn one of the open alternatives into an acceptable replacement for some of these packages. Get a few hundred users together, get them to contribute half the current fees each, and see if that would pay some competent folk to implement the necessary code changes to GIMP, etc...
Outside the USA and going to put your data in the hands of AT&T and MS, both of whom seemed happy to turn over everything to the NSA?
Yes, I know they probably have no legal choice in the matter, but was PRISM not a paid-for arrangement to make the process nicer and maybe even profitable?
Closest to a tinfoil hat icon =>
Re: Didn't Microsoft kill off a better browser by giving away an inferior one?
Almost - they killed its financial viability and locked lots of corporations into a now-regretted dependency on IE5/IE6, which even MS can't/won't port, even as a 2nd-class application, to later versions of Windows.
But Netscape's legacy is still around as Firefox, and doing not too badly.
Depends; many older options did not work very well. Maybe Quickoffice will work to a "good enough" standard?
Still, had MS not been in "protect the Windows cash-cow at all costs" mode these last few years, it could have made Office properly available on iOS (at least) and Android and seen many more sales. Oh, and saved the $1B write-down on the unloved WinRT fondleslabs...
Re: SPARC hardware
There are many ways to decrypt a message that do not involve "breaking" the cypher.
As already pointed out: hacking in before it is encrypted, using your 'influence' to get a copy of the key(s), compromising the key/certificate generation software, compromising a closed-source implementation so it leaks information that you have the key to make use of...
Re: Linux anyone?
I use Linux and recommend it to friends/family, but I never tell them it is "safe". You have to always be careful and never, ever, assume the machine is immune.
On a side point, most distros disable the AppArmor profile for Firefox - a dubious step taken to allow easier file down/upload from non-default directories. If you are very serious about security you should enable it to sandbox the browser.
Oh, and if really serious, use another account for dubious browsing, maybe a 3rd for very important browsing. And change the /home/* directories to remove 'other' access.
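The /home permissions change sketched in Python (the same effect as `chmod o-rwx` on each home directory), tried here on a temporary directory rather than a real /home:

```python
import os
import stat
import tempfile

def deny_others(path):
    """Strip the 'other' read/write/execute bits from path's mode."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    os.chmod(path, mode & ~stat.S_IRWXO)

# Demonstrate on a scratch directory with the typical world-readable
# default of 0755; after the change only owner and group have access.
d = tempfile.mkdtemp()
os.chmod(d, 0o755)
deny_others(d)
assert stat.S_IMODE(os.stat(d).st_mode) == 0o750
```

On a real system you would loop over /home/* as root; the point is simply that other local accounts (including one compromised via the browser) can no longer read your files.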
Re: Sigint capability
You are forgetting the likelihood that our puritanical overlords would be quite interested in spying on our activities. Look at how they enacted pr0n laws that tried, and in some cases succeeded, in going beyond the stupid UK-wide changes that made drawing a dick on Bart Simpson a potential jail-and-sex-register crime.
Re: But why?
Corporate drones - they have no choice but to use the IT department's image.
Corporations that have screwed themselves with IE6/7-only internal systems, where the users have to use IE and it becomes a dirty (or enforced) habit on t'Internet as well.
Re: @M Gale
Today you are only likely to worry about kernel size for embedded applications, and there you probably are going to roll a customised kernel with just what you need.
As you say, Windows has a lot of microkernel-like aspects, but it has still become bloated and needs rebooting for way too many patches. Most of the bloat is probably not 'kernel' in the classic sense, but it is an issue for smaller devices like phones & fondleslabs.
And it misses the point - if going microkernel you really would be doing it primarily for security and fault tolerance/recovery, so you need a _VERY_ minimal 'kernel' and everything else as user-space modules.
Re: Otherwise it'll become bitrotten
Now then, where do I buy some new hardware to natively run my ZX81 games?
Or why can't I get this NT4 driver for my old SCSI scanner to play with Windows8?
Re: at this rate
There are a lot of good reasons for going microkernel in terms of security (even "binary blob" drivers get ring-fenced access) and in-memory footprint (only in-use drivers need be loaded).
But... usually the performance hit of going in/out of ring 0 for every driver/file-system action means it gets side-lined, and few have the stomach for trying to compete with Linux/Windows (or even BSD) for developer attention.
Re: Total loss of control.
This is a valid point, but one solution is along the lines of Nate's post above you: have your own managed server with encrypted storage to which you alone hold the key. For storage/backup only, you don't even need the physical server to be isolated, as you can encrypt-on-write at the client machine(s).
Of course, that is not going to stop a court order for access, but at least they have to deal with your own country's laws which, in theory, you have a democratic input to. That is very different to any foreign host, where you can expect to be treated differently even to the locals.
And as Trevor points out, you still need a local copy in case the provider has gone badly wrong or is holding your data hostage with usurious fees to migrate your data to another provider...
This probably covers it:
NASA has a lot of public-facing low-importance web sites that don't get maintained/updated for years. I'm surprised this is not more common really.
Re: Communicating with the rocket via Kermit?
Damn, I forgot I was that old