* Posts by Gordan

653 publicly visible posts • joined 15 Oct 2008

Red Hat acquires Permabit to put the squeeze on RHEL

Gordan

Didn't Stop Canonical

ZFS ships with Ubuntu as of 16.04.

And anyway, who gives a dead rat's ass about whether it ships with the distro or not? There are many, many packages used every day in the enterprise that don't ship with the distro. I've been using ZFS on CentOS for years.

IBM's contractor crackdown continues: Survivors refusing pay cut have hours reduced

Gordan

Re: Why contract these days?

"I don't think companies are allowed to do the furlough thing in Europe."

If you are a contractor, of course they are. The key distinction between employees and contractors is mutuality of obligation.

Employees have mutuality of obligation: the employer is obliged to provide work for them to do (or rather, to pay them whether it provides work or not), and the employee is obliged to do it in line with the terms of their contract of employment.

Contractors have no mutuality of obligation, so yes, they can indeed be told to take two weeks off, obviously unpaid.

On the other hand, two weeks is plenty of time to attend enough interviews to secure an offer, if your holiday plans don't happen to coincide with it.

Gordan

Re: Why contract these days?

"Who would want to work under these conditions?"

Contracting is not, and has never been, for everyone. There is always more work for those at the top of their field than there are hours in the day, far too much to consider the pay cut that comes with permanency.

That minutes-long power glitch? It's going to cost British Airways £80m, IAG investors told

Gordan

"emphasised that the failure "had absolutely nothing to do with changes to the way we resource our IT systems and services"."

It might not have had anything to do with the _cause_ of the outage, but what effect did it have on the time required to recover the service into a sufficiently functioning state to resume operations? The outage lasted 3 days. Would it likely have been substantially shorter with more skilled staff closer to the problem?

AI-powered dynamic pricing turns its gaze to the fuel pumps

Gordan

Next:

Somebody writes a phone app to predict when fuel will be cheapest over the next few days. The war of AIs escalates until pricing markup strategy becomes completely random - at which point one retailer gives up and goes with the most competitive pricing they can afford. Others go out of business.

While Microsoft griped about NSA exploit stockpiles, it stockpiled patches: Friday's WinXP fix was built in February

Gordan

Patches Built in February

The reason patches for XP were provided publicly at all is because MS had already written them - for XP POS (Point of Sale) edition, used for embedded systems like cash registers and ATMs. XP POS is supported until 2019, and as ElReg covered 3 years ago, XP can be tweaked to change its identity and use POS patches directly.

So there is no conspiracy or foul play here - the patches were built in February because they were built for the POS edition. Don't expect any good will patches for XP after POS goes EOL in 2019 regardless of the outcry.

Ad men hope blocking has stalled as sites guilt users into switching off

Gordan

One word:

AdNauseam

Clone it? Sure. Beat it? Maybe. Why not build your own AWS?

Gordan

TL;DR

Use OpenStack.

Yep. Bitcasa's called it quits

Gordan

$999/year!

It seems to me that their key problem was that their prices started to look ridiculous compared to the alternatives. Amazon Cloud Drive is £55/year for all you can eat storage, Backblaze and Crashplan offer unlimited backups for a similar amount, and Hubic offer 10TB for a similar amount. So their pricing seems to have been on the order of 10x higher than the alternatives.

So at a glance it looks like their key problem was that they failed to figure out how to do large scale cloud storage on the cheap to keep up with their competition.

Uber drivers entitled to UK minimum wage, London tribunal rules

Gordan

Will this do anything...

... other than accelerate the disappearance of taxi driver as a profession the instant that the first self-driving car certified for unattended road use becomes available to buy?

Self-driving cars doomed to be bullied by pedestrians

Gordan

Re: Automated lifts will never catch on

Very simple solution. Have a realistic looking dummy in the driver's seat. As long as you cannot tell at a glance whether it's a real human in the driver's seat or not, problem solved.

Windows 10 pain: Reg man has 75 per cent upgrade failure rate

Gordan

Re: EVGA SR-2

There are two classes of problems on the SR-2:

BIOS/Firmware bugs:

1) It struggles to POST with more than 48GB of RAM. The CPUs are rated for up to 192GB each, but a BIOS bug in MCH register initialization and timeouts prevents it from reliably POST-ing with 96GB. It can be made to work most of the time, but at the cost of running the memory command rate at 2T instead of 1T, which has a significant impact on memory performance.

2) Some of the settings profiles in the BIOS clobber each other (IIRC 4 and 1, but I'm not 100% sure; it's been ages since I changed any settings on any of mine).

Hardware bugs, substandard components, and design faults:

1) Clock generators get unstable long before the hardware. It's supposed to be an OC-ing motherboard, yet above about 175MHz bclk the clock stability falls off a cliff.

2) The SATA-3 controller with its 2x 6Gbit/s ports is behind a single PCIe lane with 5Gbit/s of bandwidth. So if running two SSDs off the SATA-3 ports, the total bandwidth will be worse than running both off the SATA-2 ports hanging off the ICH10 SB.

3) The SR-2 is advertised as supporting VT-d. This is questionable because there is a serious bug in the Nvidia NF200 PCIe bridges the SR-2 uses, in that DMA transfers will seemingly bypass the upstream PCIe IOMMU hub. That means that when running VMs with hardware passed through, once the VM writes to the part of its virtual address range that happens to coincide with the host physical address range of a hardware device's memory aperture, the VM will write its memory contents straight to the device's memory aperture. If you are lucky, that will result in a write to a GPU's aperture, causing brief screen corruption before it crashes the PCIe GPU and the host with it. If you are unlucky, it will write to a disk controller's memory aperture and write random garbage out to a random location on the disks.

This can be worked around to a large extent - I wrote a patch for Xen a couple of years ago that works around the issue by marking the memory between 1GB and 4GB in the guest's memory map as "reserved" to prevent the PCIe aperture memory ranges from being clobbered, but it is a nasty, dangerous bug.

Similarly, most SAS controllers will not work properly with the IOMMU enabled for similar reasons (I tested various LSI and Adaptec SAS controllers, and none worked properly).

4) The NF200 PCIe bridges act as multiplexers in that they pretend there is more PCIe bandwidth available than there actually is. The upstream PCIe hub only has 32 lanes wired to the two NF200 bridges, 16 to each, but the NF200 bridges each make 32 available to the PCIe slots. So if you are running, say, 4x GPUs each at x16, the net result is that even though each GPU shows up as being in x16 mode, only half of the bandwidth to the CPUs actually exists. This isn't so much a bug as dishonest marketing/advertising, much like the supposedly full-speed SATA-3 controller.

5) The SB fan is prone to seizing up. This has happened on all of my SR-2s within 2-3 years - not great when the warranty on them is 10 years, and even the refurb stock used for replacements ran out over a year ago, with some motherboards still having 7 years left of their supposed warranty.

There are more issues, but the above are the biggest ones that stuck in my mind.

FWIW, I just ordered an X8DTH6 to replace the last of mine. There are too many issues for it to be worth the ongoing annoyances.

But I guess if the requirements are simple (no virtualization, mild or no overclock (so why buy this board in the first place?), <= 48GB of RAM, no more than one SSD hanging off the SATA-3 controller), it might just be OK enough for somebody who doesn't scratch the surface of what they are working with too much.

Gordan

EVGA SR-2

It's a miracle you managed to get it working with any OS in the first place. That board is riddled with firmware and hardware bugs and outright design problems (as in actual bugs, not just a faulty board - the fraction of boards that are outright faulty is a separate (and unpleasant) story entirely). I just retired the last one of mine, having happily replaced them with Supermicro X8DTH boards, which are pretty much the same spec only without all the engineering having been done by the marketing team.

Uber: Why we use MySQL

Gordan

With MySQL there is the built-in replication, Galera (you'd better know what you are doing and really mean it) and Tungsten (just don't go there).

With PostgreSQL there are too many to list off the top of my head, all with slightly different advantages and disadvantages, with some being very similar in bandwidth requirements and/or performance to MySQL's native replication.

Gordan

That article seems to conveniently omit pointing out that InnoDB also uses a WAL (InnoDB log) with similar effect on write amplification, and that MySQL's replication relies on a separate, additional log (as opposed to sending the WAL directly). This goes a long way toward levelling the field, and the omission of even a brief discussion of it makes the article come across as a bit shilly.

Initializing slaves also requires a state transfer from the master in some way or another regardless of the database used - and the most efficient way is via a snapshot state transfer. Depending on the underlying file system used, this can be very efficient (e.g. ZFS snapshots take milliseconds to create and can then be sent asynchronously). And since I mentioned ZFS, it can also be used to address the inefficiency of double-caching that PostgreSQL suffers (where the same data is kept in the shared buffers within PostgreSQL and the OS page cache) by cranking up the shared buffers to a similar amount as is recommended with MySQL, and setting the FS to only cache the file metadata (primarycache=metadata).
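Roughly the kind of thing I mean, as a minimal sketch (the dataset name and snapshot name are made up, and it assumes the standard zfs CLI is installed - not anyone's production setup):

```python
import subprocess

def zfs(*args):
    """Run a zfs command, failing loudly (illustrative wrapper only)."""
    subprocess.run(["zfs", *args], check=True)

# Hypothetical dataset holding the PostgreSQL data directory.
DATASET = "tank/pgdata"

# Cache only file metadata in the ARC, leaving data caching to PostgreSQL's
# (cranked-up) shared_buffers - avoids keeping the same pages in memory twice.
zfs("set", "primarycache=metadata", DATASET)

# Near-instant snapshot to seed a new replica; the snapshot can then be
# streamed out asynchronously with `zfs send` while the master keeps running.
zfs("snapshot", f"{DATASET}@replica-seed")
```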

MySQL has also had releases that were buggy and caused on-disk data corruption.

While the initial explanation of direct index pointers vs. indirect pointers (to PK rather than on-disk location) is good and establishes some credibility, it is worth pointing out that direct pointers mean one index dive before the data can be fetched, while indirect pointers require two sequential index dives for the same operation. If all the data is cached in memory (shared buffers / buffer pool), that potentially makes PostgreSQL's direct pointers twice as fast to retrieve the data. This also applies to UPDATEs/DELETEs, and will offset the extra cost of rewriting the node pointing at the affected row in each index (vs. only in the indexes affected by the data change).
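A toy illustration of those two lookup patterns (made-up data, nothing to do with either engine's actual code - just the number of hops):

```python
# Table rows keyed by on-disk location (for the "direct" case) and by
# primary key (for the "indirect" case).
heap = {("page1", 0): {"id": 42, "email": "a@example.com"}}
rows_by_pk = {42: {"id": 42, "email": "a@example.com"}}

# PostgreSQL-style secondary index: points straight at the row's location.
email_idx_direct = {"a@example.com": ("page1", 0)}

# InnoDB-style secondary index: points at the primary key, which then has
# to be looked up in the clustered (PK) index - a second "index dive".
email_idx_indirect = {"a@example.com": 42}

row = heap[email_idx_direct["a@example.com"]]           # one hop after the index
row = rows_by_pk[email_idx_indirect["a@example.com"]]   # index -> PK -> clustered index
```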

Finally, this sentence brings the credibility of the author into question: "This may cause data to be missing or invalid, but it won’t cause a database outage." If the data is corrupted, that is a pretty dire situation, and while they don't mention experiencing bugs like this with MySQL, I have, and it's not pretty. It is in fact a bug like this that made them decide to migrate away from PostgreSQL.

PostgreSQL and MySQL both have advantages in different uses, but shilling one over the other without laying out the complete truth isn't helpful, it just makes it sound like an attempt at retroactively justifying the cost and effort of a migration. I'm not saying it wasn't justified, merely that omitting critical parts and a quantifiable comparison undermines credibility.

It's nuts but 'shared' is still shorthand for 'worthless'

Gordan

Cheating

"Using the device in the palm of our hand that just happens to be connected to a growing wealth of human knowledge?

That’s cheating."

Actually - yes it is. The point is in differentiation between the mediocre and the best. Somebody who knows stuff off the top of their head is going to be orders of magnitude more efficient, and therefore more productive, than somebody who has to google it and figure it out first.

Or to put it another way, you can be mediocre and do a mediocre job by googling things and scraping by. But for those aspiring to be "steely eyed missile men / steely eyed rocket girls" (NASA's term), by the time you've googled the answer and figured out what it means, the mission will have failed.

Do not confuse being able to google the answer with being clever or good at something - the two are not even remotely similar.

Microsoft has made SQL Server for Linux. Repeat, Microsoft has made SQL Server 2016 for Linux

Gordan

Date Error?

Are you sure there is no system clock error in play here? I'm pretty sure the international trolling day is still nearly a month away.

Wakey wakey, app developers. Mobile ad blocking will kill you all

Gordan

Re: HTTPS

"If it is being done by blocking the content-provider's site then whether that site uses https or not is unimportant."

The problem is that you cannot block just a specific site when it is accessed via HTTPS. You don't know what domain is being used.

You could block the DNS lookup, but the app could have a whole bunch of hard-coded IPs in it, and if those IPs are used generically and not just for serving ads (e.g. Google could trivially decide to use the same set of IPs for their non-advertising offerings under different domain names via TLS), blocking them becomes impractical. TLS allows multiple domain names to be served via the same IP address over HTTPS; it is only old SSL implementations that require a separate IP address per domain name. With TLS the domain name is only sent after the initial crypto setup.

Long story short, you cannot block it by domain name, and you can only block it by IP address if you are willing to accept a massive amount of collateral damage that most users won't stand for, since many of the affected addresses would serve relatively vital, heavily used internet services (e.g. anything provided by Google).

The only way you could do it is if you persuade your customers to install a man-in-the-middle allowing CA cert on their devices, which no sane person will do.

Gordan

HTTPS

The reports on this are rather unjustifiably sensationalist. All this will do is make the ads shift to being delivered over HTTPS instead of HTTP.

Broadband-pushers expand user piggyback rides on private Wi-Fi

Gordan

Re: I am not

"here in the UK there is very rarely any such thing as free WiFi."

You mean free WiFi as in O2 WiFi, TheCloud and Virgin Media WiFi on the London Underground? It may not be quite up to BT WiFi's coverage but it is reasonably available in built up areas, especially around pubs and suchlike.

Gordan

Re: @Gordon

>> The physical bandwidth shortage _can_ be an issue

> On BT's ADSL network? <sarcasm>Surely not!</sarcasm>

I don't exactly keep an eye on it and hammer it flat out all the time, but when I do stress it (remote backups using zfs send over ssh) I can generally saturate the upstream to whatever the reported sync speed is. Bandwidth generally isn't an issue when you have FTTC.
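For context, a rough sketch of the sort of backup pipeline I mean (the pool, snapshot and host names are made up; it assumes zfs on both ends and working ssh keys):

```python
import subprocess

SNAPSHOT = "tank/data@nightly"          # hypothetical snapshot to replicate
REMOTE = "backup@offsite.example.com"   # hypothetical backup host

# `zfs send` emits the snapshot as a stream; pipe it over ssh into
# `zfs receive` on the remote box. Upstream sync speed is the bottleneck.
send = subprocess.Popen(["zfs", "send", SNAPSHOT], stdout=subprocess.PIPE)
subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", "backup/data"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```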

But I do recognize that in the edge cases where the maximum achievable sync speeds are meagre it can be an issue. But you don't HAVE to enable it. You just have to live with the fact that if you disable it you won't be able to use BT WiFi elsewhere yourself.

Gordan

Re: I am not

"It's a shame you're getting downvoted for stating the facts (and the ideal use case for the system)."

Speaks volumes about the level of knowledge of the typical commentard, doesn't it. :-)

Gordan

Re: @Gordon I am not

"Does this mean that the "public" get a different IP address?"

Short answer: Yes

Long answer:

The IP you get on BT WiFi's public unencrypted connection is completely unrelated to the IP range you have on your private, encrypted connection. Each gets NAT-ed and passed to the exchange separately, from different IPs.

Additionally, BT WiFi is authenticated after connection, so even if 10 people are connected to the same public hotspot, each MAC/IP address is non-anonymous. And public access via your BT router also doesn't use up your data allowance if you are on a metered deal, due to the same non-anonymous, non-plausibly-deniable nature of the service where everything is completely logically separate even if it is multiplexed over the same physical wire.

So - no anonymity, no plausible deniability.

The physical bandwidth shortage _can_ be an issue since the connection is generally limited by the sync speed, but in reality it is very unusual to see prolonged heavy impact from this. And if you do see an impact from it, you can always switch it off, and in the process forego your own access to BT WiFi hotspots.

Gordan

Re: I am not

"Even though It may provide plausible deniability,"

It doesn't. And it doesn't intrude upon privacy of the owner, either. It's pure FUD.

The public hotspot connections are not encrypted, they live on a separate VLAN, and the only route is upstream; there is no way to cross over to the owner's encrypted connection or their LAN. It is obvious what data is flowing via each VLAN, so in no case does it introduce anonymity or plausible deniability.

In addition, most people, myself included, find it very handy to be able to hop onto BT WiFi (and/or equivalents) almost anywhere instead of burning through meagre 3G allowances.

VMware axes Fusion and Workstation US devs

Gordan

ESX in Workstation / Linux in ESX

"Fusion and Workstation probably have enough ESX in them to make it unlikely VMware would ever let the code run wild."

There are also seemingly well founded allegations of Linux GPL code in ESX:

http://www.theregister.co.uk/2015/03/05/vmware_sued_for_gpl_violation_by_linux_kernel_developer/

So arguably letting code run wild is exactly where we can hope this might legally end up.

AMD's 64-bit ARM server chip Seattle finally flies the coop ... but where will it call home?

Gordan

Re: The future can't be prevented. Only delayed.

Yeah. And the AMD offering in this case is beyond laughably late. They generated a lot of hype when they announced the board. In the end they delivered 18 months later than expected and without the originally insinuated feature set (e.g. not in *TX form factor). Other manufacturers like Gigabyte have beaten them to it by nearly a year.

I had great expectations of AMD 64-bit ARMs. In the end, they merely cemented their image of failure to deliver.

Lower video resolution can deliver better quality, says Netflix

Gordan

CRF?

"These days, the company's decided that approach was a bit arbitrary because it can result in artefacts appearing during busy moments of a complex film, while also using rather more resources than were required to stream something simpler, wasting storage and network resources along the way."

So they haven't heard of ffmpeg -crf 18 ?

Constant Rate Factor (CRF) does exactly what is described above, in that it compresses down to the minimum size for the selected level of visual quality. Granted, this means the bit rate isn't fixed/constant, but given that some buffering will be happening anyway, that isn't that big a problem most of the time.
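For the avoidance of doubt, this is the sort of invocation I mean (filenames are placeholders; assumes ffmpeg built with libx264):

```python
import subprocess

# CRF encoding: hold perceptual quality constant (lower CRF = higher quality)
# and let the bitrate float with scene complexity, rather than fixing the
# bitrate and letting quality vary.
subprocess.run([
    "ffmpeg",
    "-i", "input.mkv",     # placeholder source file
    "-c:v", "libx264",
    "-crf", "18",          # the quality target mentioned above
    "-preset", "slow",     # slower preset = smaller output at the same CRF
    "-c:a", "copy",        # leave the audio stream untouched
    "output.mkv",
], check=True)
```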

PostgreSQL learns to walk and chew gum

Gordan

Re: MySQL versus PostgreSQL comparison

"That said, Oracle seems to be doing a good job of cleaning up MySQL's warts."

Actually, it's more the case that MariaDB are doing a good job of cleaning up Oracle's MySQL warts.

Gordan

Re: MySQL versus PostgreSQL comparison

"However, MySQL with MyISAM ran a lot faster."

Actually, no, it very much isn't. Even on read-only loads InnoDB has been faster for well over a decade (since early MySQL 5.0 releases).

MyISAM may still have specific niche uses, where features that otherwise make it useless are in fact desirable; I can think of 2 cases where I would use it, and they are both quite obscure and very narrow.

How much do containers thrash VMs in power usage? Thiiiis much

Gordan

Energy Consumption Inversely Proportional to Performance on Same Hardware

Additionally, it is worth pointing out that increased power usage directly correlates with increased overheads. If the machine is burning more power to perform the same task, then it is working that much harder and staying busy for longer.

But people get very uppity when somebody points out to them that full fat virtualization results in 25-30% peak computational throughput reduction compared to bare metal.

How to build a totally open computer from the CPU to the desktop

Gordan

Or you could just...

... buy a Lemote Yeeloong:

https://en.wikipedia.org/wiki/Lemote

Sufficiently open source even by Richard Stallman's standards.

Virgin Media filters are still eating our email – Ntlworlders

Gordan

Re: Yeah, right...

Banks generally explicitly state that they will never, ever email you. The reason they don't email things out is because if the users grow to expect emails from their banks they become that much more susceptible to phishing scam emails pretending to be from their bank.

Gordan

Yeah, right...

"The problem continues and is impacting their ntlworld.com customers. The company I work for has 10,000 of their customers legitimately registered."

I somehow seriously doubt the truthfulness of that statement. I regularly end up getting email I most definitely didn't sign up for via "digital marketing" companies that make that exact claim.

Don't want to fork out for NAND flash? You're not alone. Disk still rules

Gordan

Re: Throughput

Flash is a _part_ of the answer. The rest of the answer is moving to smaller form factor drives (use 2.5" disks instead of 3.5" ones), and the biggest part of the answer is to switch to a post-RAID technology like ZFS, with appropriately sized vdevs (i.e. don't have more than, say, a 6-disk RAID6 (RAIDZ2), or an 11-disk RAIDZ3 (n+3)). If you need a bigger pool, have multiple such vdevs (the equivalent of a 12-disk RAID60).
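As a rough sketch of that "multiple vdevs" layout (pool name and device paths are made up; assumes the zpool CLI is available):

```python
import subprocess

# Two 6-disk RAIDZ2 vdevs in one pool - the ZFS equivalent of a 12-disk
# RAID60: each vdev tolerates two failed disks, and writes are striped
# across both vdevs.
subprocess.run([
    "zpool", "create", "tank",
    "raidz2", "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf",
    "raidz2", "/dev/sdg", "/dev/sdh", "/dev/sdi", "/dev/sdj", "/dev/sdk", "/dev/sdl",
], check=True)
```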

Gordan

Re: Wait just a minute!

Basic understanding of statistics and quantitative analysis methods indicates that he may as well have rolled dice to project 5 future data points from 3.

Gordan

Wait just a minute!

He has 3 data points, NOT including this year (this being the first year that SSD capacities in 2.5" form factor have not just caught up but exceeded the capacities available as spinning rust) and from that he is trying to project the next 5 data points? I call bullshit.

Boffins make brain-to-brain direct communication breakthrough

Gordan

18%? Shouldn't baseline be 50% on a binary yes/no spread?

"Participants were able to guess the correct object in 72 per cent of the real games, compared with just 18 per cent of the control rounds."

18%? Shouldn't control set baseline be 50% on a binary yes/no spread, based on random chance?

Struggling AMD re-orgs graphics groups as Radeon Technologies

Gordan

"Most likely run into unexpected consequences and instabilities."

Given the quality and stability of their drivers at the best of times, how would you distinguish this from normal operation?

Gordan

"It could show some effort in supporting HD4xxx for W10 because Nvidia can support it for 8xxx which is a GPU that is three years older."

They also never released XP/2K3 drivers for R9 290X, even though the product was released many months before XP was EOL and over a year before 2K3 was EOL. Nvidia, OTOH, had XP drivers for all of their bleeding edge GPUs.

Having said that, I thought Vista/7/8/10 all used the same driver model, so can you not use, say, Windows 7 drivers on Windows 10 for the HD4xxx?

Seagate promises to HAMR us all with spinning rust next year

Gordan

Re: Two Heads Per Platter

Yes, spinning rust is still cheaper, but the gap is ever narrowing, and cost increasingly includes power consumption. For long term storage of enormous amounts of write-once read-never (or near enough) data, NAND is increasingly becoming a contender. One case in point is here, and that was more than 2 years ago - certainly long enough ago that it may well be widely deployed today:

http://www.theregister.co.uk/2013/08/13/facebook_calls_for_worst_flas_possible/

Which leaves spinning rust being consigned to cheap desktop grade systems as the only prospect it has to look forward to.

I do still use spinning rust in some bulk storage applications, but I rather expect to be replacing them with SSDs (as I have been doing of late) as and when they expire out of warranty. Then again, I only use HGST drives so the expiry will likely take years rather than months.

Gordan

@Joe,

Well, differences in the 1, 2, 3 and 4 year old disks are all within 1% of each other, so there is no strong signal there either way, i.e. it makes negligible difference. Compared to that, < 1 year and 5+ year entries could be viewed as anomalous. I guess it comes down to personal interpretation, but it certainly comes across as not being something worth worrying about in the 1-4 year age group.

Gordan

@Joe:

From the paper:

"... only very young and very old age groups appear to show the expected behavior. After the first year, the AFR of high utilization drives is at most moderately higher than that of low utilization drives. The three-year group in fact appears to have the opposite of the expected behavior, with low utilization drives having slightly higher failure rates than high utilization ones."

So only very young drives fail faster when heavily utilized, implying that if they were already marginal when they left the factory, hammering them harder will finish them off quicker. Among the 1-4 year old drives the difference is minimal, and in case of 3 year old drives, it is the low utilization ones that are more likely to fail.

So the correlation is tenuous at best for the majority of drive ages. Also note that among the very young disks, the figures show that low utilization disks were also 2-3x more likely to fail than medium utilization disks.

So overall, not that much of a correlation.

Gordan

(Most) SMR drives have many zones, plus one or more staging zones. All the writes first go sequentially into a staging zone (append-only, similar to a log). When the drive is otherwise idle, this data gets committed into the target zone (which requires rewriting the entire zone).

So if the load profile on a SMR drive is read-mostly (and the vast majority of typical load profiles are in fact read-mostly), the performance isn't too bad most of the time. In fact, with the staging, random writes get merged into one big sequential write, which avoids some of the write-time seek latency.

However, if the staging area is full and the disk hasn't had enough idle time to commit the data from the staging zone(s) to the target zones, it will have to do so before it can accept more writes, at which point the write performance drops to less than 1/3 (as if spinning rust wasn't slow enough already). Having said that, SMR drives seem to have pretty large staging areas, so this doesn't happen particularly often in typical use.
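A toy model of that staging behaviour, if it helps (the numbers are arbitrary and this is purely to illustrate the mechanism, not any real drive's firmware):

```python
class ToySMRDrive:
    """Crude model: writes land in an append-only staging area; idle time
    drains it into the shingled target zones; a full staging area forces a
    slow, blocking drain before any new writes are accepted."""

    def __init__(self, staging_capacity_mb=20_000):
        self.staging_capacity = staging_capacity_mb
        self.staged = 0

    def write(self, size_mb):
        if self.staged + size_mb > self.staging_capacity:
            self.drain()            # forced rewrite of target zones: the slow path
        self.staged += size_mb      # fast path: sequential append to staging

    def idle(self):
        self.drain()                # background cleanup while the host is quiet

    def drain(self):
        # Committing staged data means rewriting whole shingled zones.
        self.staged = 0

drive = ToySMRDrive()
drive.write(500)   # random writes get merged into one sequential append
drive.idle()       # staged data committed to its target zones
```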

Gordan

Re: Two Heads Per Platter

@PleebSmash

"Back when NAND was running to an endurance wall"

That was never actually the case on SATA SSDs, it was merely perceived as such when people reacted with a kneejerk instead of looking at actual data and usage patterns. The endurance of NAND has actually been reducing steadily with the process sizes, not improving, despite ongoing incremental advances in flash management. What has happened is that people have begun to understand that the quoted write endurance is ample for any sane use over the most extreme foreseeable useful life of the drive (e.g. 10 years), even if you can only write 75TB of data to a 1TB SSD. Persisting 75TB of data in anything resembling desktop, or even typical server, use is way out there in statistically insignificant territory as a fraction of use cases.
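The arithmetic behind that claim, using the same 75TB-over-10-years figures as above (real drives' ratings vary):

```python
# How much would you have to write per day to burn through a 75TB
# endurance rating over a 10-year service life?
endurance_tb = 75
years = 10
gb_per_day = endurance_tb * 1000 / (years * 365)
print(f"{gb_per_day:.1f} GB/day")   # ~20.5 GB written every single day, for a decade
```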

Then consider that even under very harsh tests every SSD tested to destruction has managed to outlive its manufacturer-specified write endurance by a large multiple:

http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

Unless your use case is large scale, very ephemeral data caching (high-churn caching of TBs of hot data out of PBs of total data) or runaway logging, SSD write endurance is way beyond being worth even remotely worrying about. It is, and always has been, a non-issue for most sane use cases.

Gordan

@AC:

Actually, the discrepancy in reliability between different manufacturers and drive generations easily more than covers your perceived failure pattern, and there is still no evidence that it has anything to do with the drives working harder; it is almost certainly entirely down to the disks from some manufacturers being massively more crap than disks from others. Since you mention "DM" drives, I presume you are talking about Seagates, whose most reliable model has several times the AFR of, for example, HGST's least reliable model. See here for the Backblaze study on this:

https://www.backblaze.com/blog/what-hard-drive-should-i-buy/

The large number of disks in the sample and the relatively extensive analysis, coupled with prior evidence that the workload of the drive has no effect on longevity, strongly imply that a drive being SMR doesn't really impact its reliability, beyond the fact that it is a less mature technology (bleeding edge drives are usually less reliable than longer-established product lines that have had their manufacturing processes perfected and bugs ironed out over time).

Gordan

@AC:

There is in fact no correlation between how heavily a mechanical drive is utilized and the probability of its failure. You can find Google's disk reliability study on this subject here:

http://static.googleusercontent.com/media/research.google.com/en/archive/disk_failures.pdf

(Section 3.3)

Gordan

Two Heads Per Platter

This was tried 20 years ago by Conner, in their Chinook line of disks. It was uneconomical compared to the alternatives back then, and I don't see that it'll be any different this time around.

Spinning rust is nearly dead, anyone saying otherwise is only doing so because they are selling it.

Wangling my way into the 4K gaming club with a water-cooled whopper

Gordan

Re: It has to be said.......

FYI, I completed Crysis in 4K (3840x2400 on an IBM T221 - higher res than today's 4K screens) on a GTX 480 a few years back (no AA or motion blur, everything else on maximum (DX9)). Technology has been up to the task for quite some time. Currently running it with a 780 Ti, and Left 4 Dead 2 on Linux runs at a flat 48Hz (what the T221 does natively without retuning the signal).

Boffins promise file system that will NEVER lose data

Gordan

Re: (Open) ZFS is pretty damned good already

You beat me to it, I was just about to say "Holy crap, they reinvented ZFS!"

Embracing the life-changing qualities of USB power packs and battery extenders

Gordan

Has nobody heard of Synergy?

"You can tell the Mac KVM Link software which border of your display to use to switch over, and simply dragging your mouse over that border switches any peripherals connected to the USB hub to the other device (your keyboard and mouse need to be connected to the hub, not your system)."

Some of us have been doing this using Synergy between Windows and various *nix machines for a very long time. And before that there was x2vnc. And all without any additional hardware required.
