* Posts by Gordan

544 posts • joined 15 Oct 2008


Windows 10 pain: Reg man has 75 per cent upgrade failure rate

Gordan

Re: EVGA SR-2

There are two classes of problems on the SR-2:

BIOS/Firmware bugs:

1) It struggles to POST with more than 48GB of RAM. The CPUs are rated for up to 192GB each, but a BIOS bug in MCH register initialization and timeouts prevents it from reliably POSTing with 96GB. It can be made to work most of the time, but only at the cost of running the memory command rate at 2T instead of 1T, which has a significant impact on memory performance.

2) Some of the settings profiles in the BIOS clobber each other (IIRC 4 and 1, but I'm not 100% sure; it's been ages since I changed any settings on any of mine).

Hardware bugs, substandard components, and design faults:

1) The clock generators become unstable long before the rest of the hardware does. It's supposed to be an overclocking motherboard, yet above about 175MHz bclk clock stability falls off a cliff.

2) The SATA-3 controller with its 2x 6Gbit ports sits behind a single PCIe lane with 5Gbit/s of bandwidth. So if you run two SSDs off the SATA-3 ports, the total bandwidth will be worse than running both off the SATA-2 ports hanging off the ICH10 southbridge.

3) The SR-2 is advertised as supporting VT-d. This is questionable because there is a serious bug in the Nvidia NF200 PCIe bridges the SR-2 uses, in that DMA transfers seemingly bypass the IOMMU in the upstream PCIe hub. That means that when running VMs with hardware passed through, once the VM writes to a guest address range that happens to coincide with the host physical address range of a hardware device's memory aperture, the write goes straight to the device's memory aperture. If you are lucky, that results in a write to a GPU's aperture and brief screen corruption before it crashes the GPU and the host with it. If you are unlucky, it writes to a disk controller's memory aperture and sprays random garbage out to a random location on the disks.

This can be worked around to a large extent - I wrote a patch for Xen a couple of years ago that marks the memory between 1GB and 4GB in the guest's memory map as "reserved" to prevent the PCIe aperture memory ranges from being clobbered - but it is a nasty, dangerous bug.

Similarly, most SAS controllers will not work properly with the IOMMU enabled for similar reasons (I tested various LSI and Adaptec SAS controllers, and none worked properly).

4) The NF200 PCIe bridges act as multiplexers, in that they pretend there is more PCIe bandwidth available than there actually is. The upstream PCIe hub only has 32 lanes wired to the two NF200 bridges, 16 to each, but the NF200 bridges each make 32 available to the PCIe slots. So if you are running, say, 4x GPUs each at x16, the net result is that even though each GPU shows up as being in x16 mode, only half of the bandwidth to the CPUs actually exists. This isn't so much a bug as dishonest marketing/advertising, similar to the supposed SATA-3 controller.

5) The SB fan is prone to seizing up. This has happened on all of my SR-2s within 2-3 years - not great when the warranty on them is 10 years, and even the refurb stock for replacements ran out over a year ago, with some motherboards still having 7 years of their supposed warranty left.

There are more issues, but the above are the biggest ones that stuck in my mind.

FWIW, I just ordered an X8DTH6 to replace the last of mine. There are too many issues for it to be worth the ongoing annoyances.

But I guess if the requirements are simple (no virtualization, mild or no overclock (so why buy this board in the first place?), <= 48GB of RAM, no more than one SSD hanging off the SATA-3 controller), it might just be OK enough for somebody who doesn't scratch too far beneath the surface of what they are working with.

4
0
Gordan

EVGA SR-2

It's a miracle you managed to get it working with any OS in the first place. That board has no end of firmware and hardware bugs and outright design problems (as in actual bugs, not just a faulty board - the fraction of the ones that are outright faulty is a separate (and unpleasant) story entirely). I just retired the last one of mine, having happily replaced them with Supermicro X8DTH boards, which are pretty much the same spec, only without all the engineering having been done by the marketing team.

4
0

Uber: Why we use MySQL

Gordan

With MySQL there are the built-in replication, Galera (you'd better know what you are doing and really mean it) and Tungsten (just don't go there).

With PostgreSQL there are too many to list off the top of my head, all with slightly different advantages and disadvantages, and some very similar in bandwidth requirements and/or performance to MySQL's native replication.

1
0
Gordan

That article seems to conveniently omit pointing out that InnoDB also uses a WAL (the InnoDB log) with a similar effect on write amplification, and that MySQL's replication relies on a separate, additional log (as opposed to sending the WAL directly). This goes a long way toward levelling the field, and the omission of even a brief discussion of it makes the article come across as a bit shilly.

Initializing slaves also requires a state transfer from the master in some way or another regardless of the database used - and the most efficient way is via a snapshot state transfer. Depending on the underlying file system used, this can be very efficient (e.g. ZFS snapshots take milliseconds to create and can then be sent asynchronously). And since I mentioned ZFS, it can also be used to address the double-caching inefficiency that PostgreSQL suffers from (where the same data is kept both in PostgreSQL's shared buffers and in the OS page cache) by cranking up the shared buffers to a similar level to what is recommended with MySQL, and setting the FS to cache only file metadata (primarycache=metadata).
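
By way of illustration, the ZFS side of that is roughly the following (pool/dataset/host names are placeholders, not a recipe):

# cache only metadata in the ARC so data isn't held twice (once in shared buffers, once in the FS cache)
zfs set primarycache=metadata tank/pgdata
# near-instant snapshot of the data directory, which can then be shipped to the new slave asynchronously
zfs snapshot tank/pgdata@basebackup
zfs send tank/pgdata@basebackup | ssh replica zfs receive -F tank/pgdata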

MySQL has also had releases that were buggy and caused on-disk data corruption.

While the initial explanation of direct index pointers vs. indirect pointers (to the PK rather than to the on-disk location) is good and establishes some credibility, it is worth pointing out that direct pointers mean one index dive before the data can be fetched, while indirect pointers require two sequential index dives for the same operation. If all the data is cached in memory (shared buffers / buffer pool), that potentially makes PostgreSQL's direct pointers twice as fast at retrieving the data. This also applies to UPDATEs/DELETEs, and will offset the extra cost of rewriting the node pointing at the affected row in each index (vs. only in the indexes affected by the data change).

Finally, this sentence brings the credibility of the author into question: "This may cause data to be missing or invalid, but it won't cause a database outage." If the data is corrupted, that is a pretty dire situation, and while they don't mention experiencing bugs like this with MySQL, I have, and it's not pretty. It is in fact a bug like this that made them decide to migrate away from PostgreSQL.

PostgreSQL and MySQL both have advantages for different uses, but shilling one over the other without laying out the complete truth isn't helpful; it just makes it sound like an attempt at retroactively justifying the cost and effort of a migration. I'm not saying it wasn't justified, merely that omitting critical parts and a quantifiable comparison undermines credibility.

3
0

It's nuts but 'shared' is still shorthand for 'worthless'

Gordan

Cheating

"Using the device in the palm of our hand that just happens to be connected to a growing wealth of human knowledge?

That’s cheating."

Actually - yes, it is. The point is the differentiation between the mediocre and the best. Somebody who knows stuff off the top of their head is going to be orders of magnitude more efficient, and therefore more productive, than somebody who has to google it and figure it out first.

Or to put it another way, you can be mediocre and do mediocrely by googling things and scraping by. But for those aspiring to be "steely eyed missile men / steely eyed rocket girls" (a NASA term), by the time you've googled the answer and figured out what it means, the mission will have failed.

Do not confuse being able to google the answer with being clever or good at something - the two are not even remotely similar.

5
1

Microsoft has made SQL Server for Linux. Repeat, Microsoft has made SQL Server 2016 for Linux

Gordan

Date Error?

Are you sure there is no system clock error in play here? I'm pretty sure the international trolling day is still nearly a month away.

1
1

Wakey wakey, app developers. Mobile ad blocking will kill you all

Gordan

Re: HTTPS

"If it is being done by blocking the content-provider's site then whether that site uses https or not is unimportant."

The problem is that you cannot block just a specific site when it is accessed via HTTPS, because you don't know what domain is being used.

You could block the DNS lookup, but the app could have a whole bunch of hard-coded IPs in it. And if those IPs are used generically and not just for serving ads (e.g. Google could trivially decide to use the same set of IPs for their non-advertising offerings under different domain names), blocking them takes out far more than just the ads. TLS allows multiple domain names to be served via the same IP address over HTTPS; it is only old SSL implementations that require a separate IP address per domain name. With TLS, the requested domain name is conveyed via the SNI extension in the handshake rather than being implied by the IP address.
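
To illustrate the name-sharing point, something like the following (IP and host names are made up) shows one address answering for multiple names, each potentially with its own certificate, selected by the SNI value the client sends:

# same IP, different SNI names
openssl s_client -connect 203.0.113.10:443 -servername ads.example.com </dev/null
openssl s_client -connect 203.0.113.10:443 -servername www.example.com </dev/null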

Long story short, you cannot block it by domain name, and you can only block it by IP address if you are willing to accept a massive amount of collateral damage that most users won't stand for, since many of the blocked addresses would belong to vital, heavily used internet services (e.g. anything provided by Google).

The only way you could do it is if you persuade your customers to install a man-in-the-middle allowing CA cert on their devices, which no sane person will do.

1
0
Gordan

HTTPS

The reports on this are rather unjustifiably sensationalist. All this will do is make the ads shift to being delivered over https instead of http.

3
0

Broadband-pushers expand user piggyback rides on private Wi-Fi

Gordan

Re: I am not

"here in the UK there is very rarely any such thing as free WiFi."

You mean free WiFi as in O2 WiFi, TheCloud and Virgin Media WiFi on the London Underground? It may not be quite up to BT WiFi's coverage but it is reasonably available in built up areas, especially around pubs and suchlike.

0
2
Gordan

Re: @Gordon

>>The physical bandwidth shortage _can_ be an issue

>

> On BT's ADSL network? <sarcasm>Surely not!</sarcasm>

I don't exactly keep an eye on it and hammer it flat out all the time, but when I do stress it (remote backups using zfs send over ssh) I can generally saturate the upstream to whatever the reported sync speed is. Bandwidth generally isn't an issue when you have FTTC.
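
For reference, those backups are roughly of the following form (dataset/host names are placeholders, and the previous snapshot is assumed to already exist on the receiving end):

zfs snapshot tank/data@2016-08
# incremental send of everything changed since last month's snapshot
zfs send -i tank/data@2016-07 tank/data@2016-08 | ssh backuphost zfs receive tank/data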

I do recognize that in the edge cases where the maximum achievable sync speeds are meagre it can be an issue. But you don't HAVE to enable it; you just have to live with the fact that if you disable it, you won't be able to use BT WiFi elsewhere yourself.

0
0
Gordan

Re: I am not

"It's a shame you're getting downvoted for stating the facts (and the ideal use case for the system)."

Speaks volumes about the level of knowledge of the typical commentard, doesn't it. :-)

8
5
Gordan

Re: @Gordon I am not

"Does this mean that the "public" get a different IP address?"

Short answer: Yes

Long answer:

The IP you get on BT WiFi's public unencrypted connection is completely unrelated to the IP range you have on your private, encrypted connection. Each gets NAT-ed and passed to the exchange separately, from different IPs.

Additionally, BT WiFi is authenticated after connection, so even if 10 people are connected to the same public hotspot, each MAC/IP address is non-anonymous. And public access via your BT router also doesn't use up your data allowance if you are on a metered deal, due to the same non-anonymous, non-plausibly-deniable nature of the service where everything is completely logically separate even if it is multiplexed over the same physical wire.

So - no anonymity, no plausible deniability.

The physical bandwidth shortage _can_ be an issue since the connection is generally limited by the sync speed, but in reality it is very unusual to see prolonged heavy impact from this. And if you do see an impact from it, you can always switch it off, and in the process forego your own access to BT WiFi hotspots.

9
0
Gordan

Re: I am not

"Even though It may provide plausible deniability,"

It doesn't. And it doesn't intrude upon privacy of the owner, either. It's pure FUD.

The public hotspot traffic is unencrypted, lives on a separate VLAN, and the only route is upstream; there is no way to cross over to the owner's encrypted connection or their LAN. It is obvious which data is flowing via each VLAN, so at no point does it introduce any anonymity or plausible deniability.

In addition, most people, myself included, find it very handy to be able to hop onto BT WiFi (and/or equivalents) almost anywhere instead of burning through meagre 3G allowances.

14
13

VMware axes Fusion and Workstation US devs

Gordan

ESX in Workstation / Linux in ESX

"Fusion and Workstation probably have enough ESX in them to make it unlikely VMware would ever let the code run wild."

There are also seemingly well-founded allegations of Linux GPL code in ESX:

http://www.theregister.co.uk/2015/03/05/vmware_sued_for_gpl_violation_by_linux_kernel_developer/

So arguably letting code run wild is exactly where we can hope this might legally end up.

1
0

AMD's 64-bit ARM server chip Seattle finally flies the coop ... but where will it call home?

Gordan

Re: The future can't be prevented. Only delayed.

Yeah. And the AMD offering in this case is beyond laughably late. They generated a lot of hype when they announced the board. In the end they delivered 18 months later than expected and without the originally insinuated feature set (e.g. not in *TX form factor). Other manufacturers like Gigabyte have beaten them to it by nearly a year.

I had great expectations of AMD 64-bit ARMs. In the end, they merely cemented their image of failure to deliver.

0
0

Lower video resolution can deliver better quality, says Netflix

Gordan

CRF?

"These days, the company's decided that approach was a bit arbitrary because it can result in artefacts appearing during busy moments of a complex film, while also using rather more resources than were required to stream something simpler, wasting storage and network resources along the way."

So they haven't heard of ffmpeg -crf 18 ?

Constant Rate Factor (CRF) does exactly what is described above, in that it compresses down to the minimum size for the selected level of visual quality. Granted, this means the bit rate isn't fixed/constant, but given that some buffering will be happening anyway, that isn't a big problem most of the time.
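
For reference, a CRF encode is roughly the following (file names and preset are arbitrary; CRF 18 is generally considered close to visually transparent for x264):

ffmpeg -i input.mkv -c:v libx264 -crf 18 -preset slow -c:a copy output.mkv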

3
0

PostgreSQL learns to walk and chew gum

Gordan

Re: MySQL versus PostgreSQL comparison

"That said, Oracle seems to be doing a good job of cleaning up MySQL's warts."

Actually, it's more the case that MariaDB are doing a good job of cleaning up Oracle's MySQL warts.

7
1
Gordan

Re: MySQL versus PostgreSQL comparison

"However, MySQL with MyISAM ran a lot faster."

Actually, no, it very much isn't. Even on read-only loads InnoDB has been faster for well over a decade (since early MySQL 5.0 releases).

MyISAM may still have specific niche uses, where features that otherwise make it useless are in fact desirable; I can think of 2 cases where I would use it, and they are both quite obscure and very narrow.

0
1

How much do containers thrash VMs in power usage? Thiiiis much

Gordan

Energy Consumption Inversely Proportional to Performance on Same Hardware

Additionally, it is worth pointing out that increased power usage correlates directly with increased overheads. If the machine is burning more power to perform the same task, then it is working that much harder to get the same work done.

But people get very uppity when somebody points out to them that full fat virtualization results in 25-30% peak computational throughput reduction compared to bare metal.

1
0

How to build a totally open computer from the CPU to the desktop

Gordan

Or you could just...

... buy a Lemote Yeeloong:

https://en.wikipedia.org/wiki/Lemote

Sufficiently open source even by Richard Stallman's standards.

0
0

Virgin Media filters are still eating our email – Ntlworlders

Gordan

Re: Yeah, right...

Banks generally state explicitly that they will never, ever email you. The reason they don't email things out is that if users grow to expect emails from their banks, they become that much more susceptible to phishing scam emails pretending to be from their bank.

0
1
Gordan

Yeah, right...

"The problem continues and is impacting their ntlworld.com customers. The company I work for has 10,000 of their customers legitimately registered."

I somehow seriously doubt the truthfulness of that statement. I regularly end up getting email I most definitely didn't sign up for via "digital marketing" companies that make that exact claim.

0
0

Don't want to fork out for NAND flash? You're not alone. Disk still rules

Gordan

Re: Throughput

Flash is a _part_ of the answer. The rest of the answer is moving to smaller form factor drives (2.5" disks instead of 3.5" ones), and the biggest part of it is switching to a post-RAID technology like ZFS with appropriately sized vdevs (i.e. don't go beyond, say, a 6-disk RAID6 (RAIDZ2) or an 11-disk RAIDZ3 (n+3)). If you need a bigger pool, use multiple such vdevs (the equivalent of a 12-disk RAID60).
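
For illustration, a pool built from two 6-disk RAIDZ2 vdevs (the rough equivalent of a 12-disk RAID60) looks something like this - pool and disk names are placeholders:

zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl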

1
0
Gordan

Re: Wait just a minute!

Basic understanding of statistics and quantitative analysis methods indicates that he may as well have rolled dice to project 5 future data points from 3.

1
0
Gordan

Wait just a minute!

He has 3 data points, NOT including this year (this being the first year that SSD capacities in 2.5" form factor have not just caught up with but exceeded the capacities available as spinning rust), and from that he is trying to project the next 5 data points? I call bullshit.

0
0

Boffins make brain-to-brain direct communication breakthrough

Gordan

18%? Shouldn't baseline be 50% on a binary yes/no spread?

"Participants were able to guess the correct object in 72 per cent of the real games, compared with just 18 per cent of the control rounds."

18%? Shouldn't control set baseline be 50% on a binary yes/no spread, based on random chance?

0
0

Struggling AMD re-orgs graphics groups as Radeon Technologies

Gordan

"Most likely run into unexpected consequences and instabilities."

Given the quality and stability of their drivers at the best of times, how would you distinguish this from normal operation?

0
0
Gordan

"It could show some effort in supporting HD4xxx for W10 because Nvidia can support it for 8xxx which is a GPU that is three years older."

They also never released XP/2K3 drivers for R9 290X, even though the product was released many months before XP was EOL and over a year before 2K3 was EOL. Nvidia, OTOH, had XP drivers for all of their bleeding edge GPUs.

Having said that, I thought Vista/7/8/10 all used the same driver model, so can you not use, say, Windows 7 drivers on Windows 10 for the HD4xxx?

0
0

Seagate promises to HAMR us all with spinning rust next year

Gordan

Re: Two Heads Per Platter

Yes, spinning rust is still cheaper, but the gap is ever narrowing. Cost also increasingly includes power consumption, and for long-term storage of enormous amounts of write-once, read-never (or near enough) data, NAND is increasingly becoming a contender. A case in point is here, from more than 2 years ago - certainly long enough ago that it may well be widely deployed today:

http://www.theregister.co.uk/2013/08/13/facebook_calls_for_worst_flas_possible/

Which leaves being consigned to cheap desktop-grade systems as the only prospect spinning rust has to look forward to.

I do still use spinning rust in some bulk storage applications, but I rather expect to be replacing them with SSDs (as I have been doing of late) as and when they expire out of warranty. Then again, I only use HGST drives so the expiry will likely take years rather than months.

0
0
Gordan

@Joe,

Well, the differences between the 1-, 2-, 3- and 4-year-old disks are all within 1% of each other, so there is no strong signal there either way, i.e. it makes negligible difference. Compared to that, the < 1 year and 5+ year entries could be viewed as anomalous. I guess it comes down to personal interpretation, but it certainly comes across as not being something worth worrying about in the 1-4 year age group.

0
0
Gordan

@Joe:

From the paper:

"... only very young and very old age groups appear to show the expected behavior. After the first year, the AFR of high utilization drives is at most moderately higher than that of low utilization drives. The three-year group in fact appears to have the opposite of the expected behavior, with low utilization drives having slightly higher failure rates than high utilization ones."

So only very young drives fail faster when heavily utilized, implying that if they were already marginal when they left the factory, hammering them harder will finish them off quicker. Among the 1-4 year old drives the difference is minimal, and in the case of 3-year-old drives, it is the low utilization ones that are more likely to fail.

So the correlation is tenuous at best for the majority of drive ages. Also note that among the very young disks, the figures show that low utilization disks were also 2-3x more likely to fail than medium utilization disks.

So overall, not that much of a correlation.

0
0
Gordan

(Most) SMR drives have, in addition to the many shingled zones, a staging zone. All writes first go sequentially into the staging zone (append-only, similar to a log). When the drive is otherwise idle, this data gets committed to the target zones (which requires rewriting each affected zone in its entirety).

So if the load profile on an SMR drive is read-mostly (and the vast majority of typical load profiles are in fact read-mostly), the performance isn't too bad most of the time. In fact, with the staging, random writes get merged into one big sequential write, which avoids some of the write-time seek latency.

However, if the staging area is full and the disk hasn't had enough idle time to commit the data from the staging zone(s) to the target zones, it will have to do so before it can accept more writes, at which point write performance drops to less than 1/3 (as if spinning rust wasn't slow enough already). Having said that, SMR drives seem to have pretty large staging areas, so this doesn't happen particularly often in typical use.

1
0
Gordan

Re: Two Heads Per Platter

@PleebSmash

"Back when NAND was running to an endurance wall"

That was never actually the case on SATA SSDs; it was merely perceived as such when people reacted with a kneejerk instead of looking at actual data and usage patterns. The endurance of NAND has actually been decreasing steadily as process sizes shrink, not improving, despite ongoing incremental advances in flash management. What has happened is that people have begun to understand that the quoted write endurance is ample for any sane use over the most extreme foreseeable useful life of the drive (e.g. 10 years), even if you can only write 75TB of data to a 1TB SSD. Persisting 75TB of data in anything resembling desktop, or even typical server, use puts you in a statistically insignificant fraction of use cases.

Then consider that even under very harsh tests, every SSD tested to destruction has managed to outlive its manufacturer-specified write endurance by a large multiple:

http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

Unless your use case is large-scale, highly ephemeral data caching (high-churn caching of TBs of hot data out of PBs of total data) or runaway logging, SSD write endurance is way beyond being worth even remotely worrying about. It is, and always has been, a non-issue for most sane use cases.

1
0
Gordan

@AC:

Actually, the discrepancy in reliability between different manufacturers and drive generations easily more than covers your perceived failure pattern, and there is still no evidence that it has anything to do with the drives working harder; it is almost certainly down to the disks from some manufacturers being massively more crap than disks from others. Since you mention "DM" drives, I presume you are talking about Seagates, whose most reliable model has several times the AFR of, for example, HGST's least reliable model. See here for the Backblaze study on this:

https://www.backblaze.com/blog/what-hard-drive-should-i-buy/

The large number of disks in the sample and the relatively extensive analysis, coupled with prior evidence that the workload of a drive has no effect on longevity, strongly imply that a drive being SMR doesn't really impact its reliability, beyond the fact that it is a less mature technology (bleeding-edge drives are usually less reliable than longer-established product lines that have had their manufacturing processes perfected and bugs ironed out over time).

1
0
Gordan

@AC:

There is in fact no correlation between how heavily a mechanical drive is utilized and the probability of its failure. You can find Google's disk reliability study on this subject here:

http://static.googleusercontent.com/media/research.google.com/en/archive/disk_failures.pdf

(Section 3.3)

1
0
Gordan

Two Heads Per Platter

This was tried 20 years ago by Conner, in their Chinook line of disks. It was uneconomical compared to the alternatives back then, and I don't see that it'll be any different this time around.

Spinning rust is nearly dead, anyone saying otherwise is only doing so because they are selling it.

2
0

Wangling my way into the 4K gaming club with a water-cooled whopper

Gordan

Re: It has to be said.......

FYI, I completed Crysis in 4K (3840x2400 on an IBM T221 - a higher resolution than today's 4K screens) on a GTX480 a few years back (no AA or motion blur, everything else on maximum, DX9). Technology has been up to the task for quite some time. I'm currently running it with a 780Ti, and Left4Dead 2 on Linux runs at a flat 48Hz (what the T221 does natively without retuning the signal).

1
0

Boffins promise file system that will NEVER lose data

Gordan

Re: (Open) ZFS is pretty damned good already

You beat me to it, I was just about to say "Holy crap, they reinvented ZFS!"

4
0

Embracing the life-changing qualities of USB power packs and battery extenders

Gordan

Has nobody heard of Synergy?

"You can tell the Mac KVM Link software which border of your display to use to switch over, and simply dragging your mouse over that border switches any peripherals connected to the USB hub to the other device (your keyboard and mouse need to be connected to the hub, not your system)."

Some of us have been doing this using Synergy, between Windows and various *nix machines, for a very long time. And before that there was x2vnc. And all without any additional hardware required.

4
0

Update Firefox NOW to foil FILE-STEALING vulnerability exploit, warns Mozilla

Gordan

Re: Sandboxing

Security has layers. Sandboxing means that the attacker has to have two exploits available to them - one to breach the application itself, and another to escape the sandbox. While this is not guaranteed to thwart the attack, it makes it more difficult and less likely to succeed.

6
0
Gordan

Sandboxing

It is past time that the standard security model on all operating systems was redesigned along lines similar to Android's, where every application runs inside its own sandbox.

User logs in as "username", and for each application on the system, for each user, an account is created, for example "username_firefox". Firefox then runs as it would if you execute it with "sudo -u username_firefox firefox", and if it is compromised, the only files available to the attacker are the ones available to account "username_firefox", not the parent account "username".

I switched to this model a while back when the Steam bug surfaced that deleted all files on the system the invoking user had permissions to delete.

This isn't even all that inconvenient, since you can simply make the "username_firefox" home directory setgid ("sticky group") and group-owned by group "username" with group write permissions, so the parent user can still go and access all the downloaded files and suchlike under the sandbox account.
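
A rough sketch of that setup (account/group names are examples, and sudo still needs a sudoers entry to allow it):

# dedicated account for the browser, home directory group-owned by the parent user's group
useradd -m username_firefox
chgrp username /home/username_firefox
chmod 2770 /home/username_firefox   # setgid ("sticky group") plus group rwx, so "username" can reach the downloads
# run the browser under the sandbox account
sudo -u username_firefox firefox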

What I keep wondering is why something like this hasn't been made to be default on any Linux distributions (other than Android, if you want to consider that a Linux distribution) already.

2
2

Pray harder for AMD

Gordan

@Chewi

Their open source efforts are indeed great, but since the latest generation of cards is always only supported by the binary drivers, there is no incentive to buy the new generation of GPUs where AMD make their money. For open source usage you have to stick to buying the cards in the current line-up that are re-badged GPUs from last year's line-up. The benefit for the consumer, and the problem for AMD, is that those GPUs are cheap new, and even cheaper 2nd hand on eBay.

0
0
Gordan

Re: Prayers won't help

@Aoyagi

I haven't done an exhaustive analysis of Nvidia vs. AMD driver bugs, but I have tried various generations of AMD/ATI GPUs, including the HD4870, HD6450, HD7970 and R9 290X, and struggled to get my IBM T221 monitors working with all of them.

HD4870: Randomly switched the left and right sides of the monitor between reboots for no obvious reason. Had a very annoying rendering bug with transparency/water in several games, including Supreme Commander, where the shallows were always opaque. I kept persevering for about 6 months before I caved in and got an 8800GT, which manifested no obvious problems.

HD6450: Passively cooled, worked great on Linux with the open source driver. It wasn't possible to configure custom modes / refresh rates in Windows. It's the only AMD GPU I still have.

HD7970: Only one of the DVI ports was dual-link, the other was single-link. I needed either two dual-link or two single-link ports to run my monitors, so I couldn't get this working at all on either Windows or Linux. So I got rid of it and got a GTX680.

R9 290X: No XP drivers, even though XP was still supported at the time. I don't recall off the top of my head what the binary Linux driver issues I had were, but the open source driver didn't support it. Traded that in for a 780Ti.

Now, you cannot say that this was for lack of giving AMD's solutions plenty of chances, but I always ended up with an Nvidia card in the end, once I capitulated and needed something that "just works".

The only workable AMD based solutions are, in my experience, on Linux and only the ones that are a generation out of date and fully supported by the radeon open source driver.

Something that "just works" is far, far more important and valuable than chasing scores at the top end, especially since relatively few gamers do in fact buy top of the line cards because they are outrageously expensive.

The only reason why AMD had a good 2014 was because a lot of people were buying their cards for scrypt mining.

1
0
Gordan

Re: Prayers won't help

They don't have to be faster and cheaper, they just have to work perfectly regardless of the performance bracket they are in. AMD drivers are outrageously buggy and fall apart very quickly in anything resembling an unusual setup (e.g. try running a dual-input monitor like IBM T221 off an ATI card).

It is NOT all about performance. Intel's built-in GPUs are very popular for lower-end gaming, especially on Linux where the drivers are completely open source. AMD cards also work great with the open source drivers if you limit yourself to a chipset that is at least a generation behind, but the profits are paper-thin in the £50 GPU price bracket, unlike the £500 price range.

IMO AMD would do well to stop competing on performance and start improving the quality, stability and feature set of their drivers. There is no point in trying to compete at the top end, where they are pre-emptively disadvantaged: no matter how good their hardware is, they will still fall short due to their software.

2
0

Flash deserves to live, says Cisco security man

Gordan

And this is exactly why Flash really has to die

"Chief security officer Brad Arkin last year told the Australian Information Security Association that its focus on increasing the cost of exploiting Flash and Reader rather than just patching individual vulnerabilities..."

I completely removed it from all of my machines after the Hacking Team fiasco (I had it set to "ask to run" and used FlashBlock until then) and can happily report that I have observed no obvious loss of functionality. Uninstalling it makes it _really_ expensive to exploit.

6
0

Mozilla's ‘Great or Dead’ philosophy may save bloated blimp Firefox

Gordan

Mozilla has gone through cycles like this more than once before. Back in the days of v1 it was a debloated fork of Netscape.

It then went on a massive diet with v4, where it actually managed to maintain a smaller memory footprint than contemporary Chrome.

It sounds like it is time to lean out the code base and cut out various useless crud. Big deal. It will no doubt happen again some time, but the fact that it is happening periodically is a good sign that there are developers ready to take positive action when things start to get bad.

FF will still have the advantage over Chrome when it comes to packaging, due to its much more sane and sensible treatment of the shared libraries it requires to build against (Chrome has to bring most if not all of the specific versions of 3rd party projects with it because it won't build against anything else, needlessly enlarging the memory footprint and reducing performance).

7
0

OECD nations gang up on internet retailers, tax dodgers

Gordan

Re: This will not work

Setting up a tax free country in Antarctica might be cheaper than the Moon. And with the kind of investment they are capable of funding, a comfortable city under all that ice is not an infeasible project.

Something like Rapture from Bioshock.

1
0
Gordan

Re: This will not work

Seems you beat me to making this point.

Ultimately, all this will achieve is ensure that nobody domiciled in Australia is employed in the process. What'll happen is that an Australian phone redirection service forwards a call to an office in Singapore, and the customer deals with somebody there.

1
1

Small WordPress sites leaking like sieves

Gordan

Re: There are benefits...

Actually, a multi-tenanted WP setup is very easy to achieve - it is designed for it. I'm sure you can google it.

Features over stability and security is the blight of 21st century "agile" software development, caused by people incapable of handling the concept that you cannot implement before designing, and you cannot design before analysing requirements.

The biggest problem I have with WP is that it is rather difficult to reconcile its write permission wishes with basic security concepts without extensive per-file hand-crafting of permissions using ACLs, SELinux or equivalents. The sanest solution I've been able to come up with is to simply make the entire directory subtree it is installed in readable but not writable by Apache. This, unfortunately, breaks features such as auto-updates and user content uploads.
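
As a sketch of what that looks like in practice (paths and the Apache user/group name vary by distro):

# whole tree owned by a non-Apache account, readable but not writable by the web server
chown -R root:apache /var/www/wordpress
find /var/www/wordpress -type d -exec chmod 750 {} +
find /var/www/wordpress -type f -exec chmod 640 {} +
# if user content uploads are needed, selectively re-open just that directory
chown -R apache /var/www/wordpress/wp-content/uploads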

Keeping the number of 3rd party plugins used to an absolute minimum also helps to reduce the attack surface, since most (but by no means all) exploits in WP are in 3rd party plugins rather than the core.

2
0
Gordan

There are benefits...

... to not having your WordPress folder writable by the apache user...

4
0
