* Posts by Trevor_Pott

5992 posts • joined 31 May 2010

Thinking of following Facebook and going DIY? Think again

Trevor_Pott
Gold badge

Re: Mine's the one with Trevor's boot print on it...

I haven't worked for anyone with that little financial sense in quite some time. Trying to run Windows Server on a 30GB SSD with a projected installed lifetime of more than 6 years is called "being penny wise and pound foolish".

Examine the total cost of ownership over the life of the unit. This includes the cost of maintenance, downtime for upgrades, spares, electricity, bandwidth and so forth. I have spent my entire career as an SMB admin for the cheapest people alive, and I promise you that the sort of nonsense you advocate in that regard is far more costly over the life of the unit than simply buying a "sweet spot" drive and letting your OS grow.

I know it's very hard for some people to factor in the cost of manpower. They think that being on salary makes their time cost nothing. For sysadmins this was true 15 years ago, when there wasn't such a diversity of products to support. Now, the sheer volume of different kinds of hardware, software, networking etc that even the smallest of SMBs must support strains the ongoing maintenance capabilities of even the most dedicated "pee in jars" sysadmin.

Try to make what you deploy as "fire and forget" as possible. That will require frontloading a few extra % in terms of hardware cost in order to recoup hundreds of % in operating costs.
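Something like this back-of-the-napkin comparison is what I mean - and to be clear, every number below is a made-up placeholder, not real pricing, so plug in your own:

# Rough TCO sketch - all figures are assumed placeholders, not real pricing.
def tco(hardware_cost, admin_hours_per_year, hourly_rate, power_watts,
        kwh_price=0.12, years=6):
    """Total cost of ownership over the life of the unit."""
    labour = admin_hours_per_year * hourly_rate * years
    power = power_watts / 1000 * 24 * 365 * kwh_price * years
    return hardware_cost + labour + power

# "Sweet spot" drive: more up front, next to no babysitting.
sweet_spot = tco(hardware_cost=400, admin_hours_per_year=1, hourly_rate=75, power_watts=6)
# Undersized 30GB SSD: cheap drive, constant cleanup, migrations and downtime.
penny_wise = tco(hardware_cost=60, admin_hours_per_year=10, hourly_rate=75, power_watts=6)

print(round(sweet_spot), round(penny_wise))  # the labour line dominates long before year six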

2
0

Why are enterprises being irresistibly drawn towards SSDs?

Trevor_Pott
Gold badge

Re: @Archaon

The tears of enraged commenters power my happiness.

1
0
Trevor_Pott
Gold badge

@Archaon

I fairly explicitly stated in my original comment that there were other possible alternatives. I also made it pretty clear that they were niche and not very relevant. You then decided that the alternatives had to be spelled out and attempted to make them seem relevant.

Not only was that pointless, you did not succeed in making them seem relevant at all. Which has now become the point of this thread.

If I said "for the purposes of creating the circle used in $company logo, the value of pi used was 3.14159{irrelevant additional numbers}" you'd be the guy not only explaining "pi is more than 3.14159, and in sometimes it matters that you use {long string of numbers}". And you'd be explaining that to the guy who owns http://www.tastypi.com.

Rock on!

0
1
Trevor_Pott
Gold badge

Re: @Trevor_Pott Change in Flash technology to eliminate finite write lifetime?

No, Archaon, in your non-objective, blinkered position on things you've missed the whole thrust of my argument: namely that there is no value - except in some very niche situations, including outright poverty - in recovering 10 year old drives from systems and reusing them, even if their lifespan was infinite.

Just because you can take a 32GB SSD out of some ancient system and reuse it in a newer one (with a whole metric ****pile of TLC and babying) doesn't mean it's sane, rational, profitable or otherwise a good idea. It's also not something that the majority of individuals or businesses will do.

You, personally, may do it. That doesn't make it a good plan. It doesn't make it what the majority will do, would do, or even should do. And that, right there, is the whole damned point. Which you seem to be unable to grok.

1
2
Trevor_Pott
Gold badge

Re: @Trevor_Pott Change in Flash technology to eliminate finite write lifetime?

StartComponentCleanup does not prevent WinSXS from growing unchecked. It just slows the progression somewhat.
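For reference, the on-demand cleanup in question is the DISM pass (run from an elevated prompt); even the aggressive /ResetBase variant only reclaims superseded components that are already there - it doesn't stop new ones piling up with every update:

Dism.exe /Online /Cleanup-Image /AnalyzeComponentStore
Dism.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase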

0
0
Trevor_Pott
Gold badge

Re: @Duncan Macdonald

But that powerdown timing isn't guaranteed. Hence why data loss occurs on consumer SSDs during power-out events, and why supercaps are a thing.

0
0
Trevor_Pott
Gold badge

Re: SSDs as a system partition

Again, going to have to call that pretty niche. Your average punter wouldn't know how and your average enterprise admin wouldn't bother. Few folks have the know-how and the time to do what you do...not that it isn't a good idea. :)

0
0
Trevor_Pott
Gold badge

Crucial is the consumer brand, Micron the enterprise brand. Crucial has a cult following thanks to their RAM. Micron has traditionally sold as an OEM to others who rebrand. That's changing, and Micron is selling more and more under the Micron brand.

But yes, overall, Crucial = consumer, Micron = enterprise. Easier than remembering which model lines are which with Intel! :)

1
0
Trevor_Pott
Gold badge

Micron enterprise SSDs have been amazing to me. Micron M500DC? ****ing spectacular drive. Micron P420m? Life changing.

Also up there are the Intel drives. 3500, 3700, even the 520. Anything out of that Micron/Intel fab has been extremely good to me.

To contrast, OCZ is shit, covered in shit, with added shit, layered in shit, all wrapped up in a shit sandwich. The rest all fall somewhere between, with the consumer stuff generally being shite and the enterprise stuff being pretty passable.

0
0
Trevor_Pott
Gold badge

Re: @Duncan Macdonald

The rest of the system tends to go down between one and three seconds before the SSD. Motherboard power feeds the CPU, RAM and PCIe cards, and it gets drained essentially instantly. Enterprise SSDs are rigorously tested to be able to finish their writes before the supercap gives out. SSDs without supercaps will NOT finish writes.

Also: not all SSDs with supercaps are the same. (Front pages versus back pages.)

0
0
Trevor_Pott
Gold badge

I would qualify that as electrochemical rather than mechanical. Any of a squillion electronic bits - from capacitors to volt regs - can go on either a magnetic disk or an SSD. Outside of the electronics driving the storage components themselves, SSDs have write life due to being a solid state medium, and magnetics have mechanical bits that can seize, are affected by vibration, air pressure differences, etc.

0
0
Trevor_Pott
Gold badge

@Duncan Macdonald

Regarding your comment "Write buffering and coalescing can be done without supercaps"

I would like to refer you to my previous comment, wherein I stated the following: "SSDs without supercaps do not all do this. Some do, some don't, and there is some debate about whether or not those that do should."

I acknowledge that write buffering and coalescing can be done without supercaps. It is the supercaps, however, that allow these operations to occur safely and thus make SSDs that implement these features fit for the enterprise.

1
0
Trevor_Pott
Gold badge

"Supercaps are a feature of enterprise SSDs, but have FA to do with wear levelling."

I apologize for not being more explicit in my article. Supercaps - and the functionality they provide - allow write buffering and write coalescing to be handled by the drive itself, rather than relying entirely on the controller or OS. Because of the supercaps, writes can be stored in buffer on the drive until there is enough to write a full block.

SSDs without supercaps do not all do this. Some do, some don't, and there is some debate about whether or not those that do should.

So you are partly correct: supercaps do not directly have anything to do with wear leveling. What they enable is write coalescing, which in turn enables a more efficient form of wear leveling than would otherwise be possible.
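As a toy illustration of what I mean by coalescing (a sketch only - no real controller firmware works exactly like this): small incoming writes sit in the drive's supercap-protected buffer until a full block's worth has accumulated, and only then does the drive spend an erase/program cycle.

# Toy model of write coalescing - not real controller firmware.
BLOCK_SIZE = 16  # pages per erase block (made-up figure)

class CoalescingDrive:
    def __init__(self):
        self.buffer = []       # pending pages held in supercap-protected cache
        self.erase_cycles = 0  # each flush costs one erase/program cycle

    def write(self, page):
        self.buffer.append(page)
        if len(self.buffer) >= BLOCK_SIZE:
            self.flush()

    def flush(self):
        # One full-block program instead of many partial writes,
        # so wear is spread over far fewer erase cycles.
        self.erase_cycles += 1
        self.buffer.clear()

drive = CoalescingDrive()
for page in range(64):
    drive.write(page)
print(drive.erase_cycles)  # 4 erase cycles instead of 64 single-page programs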

4
0
Trevor_Pott
Gold badge

Re: @Trevor_Pott Change in Flash technology to eliminate finite write lifetime?

"Even taking frank ly's *Nix example as a given, I've got a machine with a pair of 30GB (not even 32GB) SSDs in RAID 1 which runs Server 2012 R2 Standard quite happily. Believe it typically sits at around 11GB free."

And I've got OS-only installs of Server 2012 R2 Standard that eat the better part of 80GB.

You *might* be able to convince me if you tried to make a case for 32GB SSDs as an ESXi disk, except that's probably useless since there are USB keys that are better fits for that job, and just plug directly onto the motherboard (or into the SATA plug).

Dragging along 32GB SSDs is an exercise more in being spectacularly cheap than anything else. I get it - I am an SMB sysadmin; we have to do this all the time. But the hassle of migrating components from system to system as everything else dies (or the system isn't worth the electricity it consumes) gets old fast.

A dirt cheap thumb drive solves the problem of a place to put a hypervisor, and the ancient SSD from the beforetimes isn't going to help me run my datacenter. It might be useful to the poorest of the poor consumers, or people in some extreme niches, but as a general rule storage devices aren't much use to the general market past about 5, maybe 6 years. After that they're just too small.

A great example is the 1TB magnetic disk. I have an effectively unlimited number of these things. I can't and won't use them. It costs me more to power the storage devices needed to run those drives for the next three years than it would to just go buy 4TB drives. To say nothing of space, cooling, OPEX, etc.
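A quick sanity check on that, using made-up but plausible numbers (your power prices and drive costs will differ):

# Back-of-napkin: keep spinning old 1TB disks vs consolidating onto 4TB drives.
# Every figure here is an assumption for illustration, not a quoted price.
drives_1tb = 16            # old disks delivering 16TB usable
watts_per_drive = 8        # rough per-spindle draw, before cooling overhead
kwh_price = 0.12
years = 3

old_power_cost = drives_1tb * watts_per_drive / 1000 * 24 * 365 * kwh_price * years
new_4tb_cost = 4 * 150     # four 4TB drives at an assumed $150 each

print(round(old_power_cost), new_4tb_cost)
# Before you even count bays, controllers, cooling and admin time,
# the electricity alone lands in the same ballpark as just buying new drives.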

Even if all our storage devices lasted forever, they would eventually stop being used. Just like my Zip drive. Just like my Blu-ray. Newer devices hold more, and they are less of a pain in the ASCII to use.

8
0
Trevor_Pott
Gold badge

Re: @Trevor_Pott Change in Flash technology to eliminate finite write lifetime?

"As the root, /home, swap and /{data} partitions of my desktop computer. "

Desktop Linux is pretty goddamned niche. From my original comment:

"What use is a SATA 32GB SSD today, excepting in some very niche applications?"

Funny how when you quote it you leave off the last bit.

Also: " I'm sure most people at home (a big market)" won't be using Linux on the desktop. Doubleplus when we talk about putting different directories on different drives. Sorry, mate. You're not so much in a class by yourself as homeschooling from a tree in the middle of the Yukon.

4
5
Trevor_Pott
Gold badge

Re: Change in Flash technology to eliminate finite write lifetime?

A) The Al/C battery work doesn't port to silicon chips. It is unlikely we will ever see flash chips without write limits.

B) You'll sell just as many new units even if your units last forever because our demand for data is insatiable. What use is a SATA 32GB SSD today, excepting in some very niche applications? Hell, what use is a 120GB? Would you buy a 240GB for your notebook?

Flash write lives aren't being artificially suppressed. It's just physics.

9
2

Need speed? Then PCIe it is – server power without the politics

Trevor_Pott
Gold badge

Re: The "Printer" Icon

http://m.theregister.co.uk/2015/04/14/pcie_breaks_out_server_power/

0
0
Trevor_Pott
Gold badge

Re: Simple fix for southbridge bandwidth limitation

Because the company that ships the SoC decides to artificially limit the amount of RAM you can attach to their lower end (SoC) CPUs in order to make you pay for the much (much) more expensive ones if you want a usable amount of RAM.

As for the "why" of that, well...greed.

0
0
Trevor_Pott
Gold badge

Re: IB tech...?

Yes and no. Infiniband was never designed to handle the kind of load that modern supercomputers are putting on it. It was also not designed to lash together as many nodes as seem to be required these days. While it is way better than Ethernet for the task, Infiniband was designed for an earlier era of supercomputer and there are some pretty big changes it would have to go through to stay relevant today.

0
0
Trevor_Pott
Gold badge
Pint

Re: Just thanks for the fine article.

(Additional beer)

0
0
Trevor_Pott
Gold badge

Re: @Trevor PCIe won't work well outside the box...

I mind Intel owning the market a lot. Unfortunately, we've collectively lost that battle already. Intel succeeded in killing AMD in the face with a jeep, and AMD is not likely to recover. ARM is a joke for server workloads.

So, okay, we lost that. Do we need more monopolists in our datacenter?

0
1
Trevor_Pott
Gold badge

Re: @Trevor PCIe won't work well outside the box...

Not at all. There are many potential successor technologies to flash and/or RAM. None of the likely candidates would appear to be the sort of thing that will be affordable to the mass market. (Oh, and I am entirely aware of Crossbar.)

The argument is no different than that of Nutanix versus VMware. Where should the power in the relationship rest: with the customer, or the vendor?

If you're happy to hand your genitals over to ViceCo Inc then, by all means, go buy something for which there is only one vendor. Maybe the commercial benefit you see from using that technology will be greater than the cost of licensing and implementing it. I doubt it, however.

Unless the proprietary technology is dramatically superior to the more pedestrian alternatives it won't get adopted by the mass market. Lock-in is a bitch, and value for dollar matters. This is why, despite all the problems with existing standards entities, standards (and FRAND) still matter.

2
0
Trevor_Pott
Gold badge

Re: Simple fix for southbridge bandwidth limitation

The limited amount of RAM you can connect to that SoC. That's what's not to like.

2
0
Trevor_Pott
Gold badge

Re: PCIe? Yeurk!

Don't be so sure that paying the patents isn't cheaper than inventing it all over again. If your assertions were correct, we wouldn't have companies reinventing interconnects over and over. Sorry mate, but while you are correct that proprietary interconnects are technologically and technically superior, that does not mean they'll win.

I know that's hard for the tried and true nerds to grok, but it's true. The technologically superior option only wins when it is as easy and cheap to consume as an inferior option. Which is sort of the point of the article.

PCI-E will become the mainstream intersystem interconnect because of its ubiquity. The ultra high end stuff, where it's taxpayers' money being spent, will continue to be proprietary.

4
0
Trevor_Pott
Gold badge
Pint

Re: Just thanks for the fine article.

(beer)

0
0
Trevor_Pott
Gold badge

Re: Simple fix for southbridge bandwidth limitation

Intel is already moving there. This is why they are soldering CPUs onto motherboards for everything but high-end workstations/gaming rigs and servers.

0
0
Trevor_Pott
Gold badge

Re: PCIe won't work well outside the box...

Except it would be a future that belonged to a single vendor, who owned the RRAM patents. You're describing HP's lock-in fetishist utopia. No thanks.

2
0
Trevor_Pott
Gold badge

Re: PCIe? Yeurk!

Patents.

Standards.

Widespread adoption.

Those are the barriers. Hypertransport is faster than PCI-E as well. It hasn't won because of...

2
0

Internet kingmakers cry mercy over mad dash to fill global DNS throne

Trevor_Pott
Gold badge

Re: Hey; it's only the Internet

I rather like gardening, and I don't see why a lack of internet access would prevent me from obtaining the automation tools required to make farming much more than "gardening at scale". Or why a lack of internet access would make it harder or all that much more expensive to produce or ship those tools.

0
0

2550100 ... An Illuminati codeword or name of new alliance demanding faster Ethernet faster?

Trevor_Pott
Gold badge

Actually 10Mb. Mega is capitalized. Bits are lower case, Bytes are upper case. Don't forget to distinguish between MB and MiB, as well.
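For anyone keeping score, the distinction isn't just pedantry - here's a quick sketch of how far apart those readings are (everything expressed in bits):

# Units sanity check: bits vs bytes, SI vs binary prefixes.
ten_megabits  = 10 * 1_000_000          # 10Mb: lower-case b means bits
ten_megabytes = 10 * 1_000_000 * 8      # 10MB: upper-case B means bytes, 8 bits each
ten_mebibytes = 10 * 1024**2 * 8        # 10MiB: binary prefix, 1,048,576 bytes per MiB

print(ten_megabits, ten_megabytes, ten_mebibytes)  # 10000000 80000000 83886080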

1
0

Nvidia's GTX 900 cards lock out open-source Linux devs yet again

Trevor_Pott
Gold badge

Re: WTF?

"it was a simple question: what's wrong with just boycotting them? Please try to answer _without_ dragging in tropical diseases."

Boycotting nVidia is not effective, nor does it address the root issue: the imbalance between the power the vendor has and the power the customer has.

Only a small fraction of very wealthy customers have any real influence with nVidia. Like it or not, that is very similar to the issues wrought by economic darwinism on the pharmaceutical industry.

Perhaps more to the point: nVidia is more than just a vendor of GPUs, and they have aspirations to be even more. nVidia has the desire - and the capability - to be one of the major players in low power devices (see: internet of things) that will be (and are) making technology ubiquitous in all aspects of our lives. Including, increasingly, ones that involve life-or-death situations, medical equipment and everything from personal cars to mining equipment many times the size of your house.

Making processors, GPUs and the like is functionally an unregulated industry. Despite this, it is rapidly moving into becoming as important to life and limb as the pharmaceutical industry. Boycotting sure as hell doesn't work there!

It is ultimately a question of how much power we wish to allow the vendors versus how much power we, as the customer, demand. That brings me right back to my previous comment: it's ultimately about social/economic darwinism. Either you support the "irrelevance" of the poor, the niche and those who bought a widget that is older than a single product cycle, or you believe that vendors need to be held to a higher standard.

I tend towards the latter.

3
3
Trevor_Pott
Gold badge

Re: JustNiz

"Ok so don't try to use someone else's wicked cool skill that you think are massively impressive to try to size up someone else because you have no idea what you're talking about."

Not being a developer (by choice, I might add), doesn't mean I don't know quite a bit about the field. (Well, fields. Development is broad enough to have specialized a long time ago.)

"You're taking offence to something that wasn't written. For most people that develop on Linux either at the kernel level, application or whatever are in no way affected by the revelation that nvidia has added signing to their GPUs. It affects a tiny minority of developers that are working on opensource drivers for nvidia GPUs and almost no one else. I think you're going to massive lengths to make this look like it deeply affects your friend's hobby project but I don't see how it does."

You are...whom? And who elevated you to the final arbiter of knowledge and understanding? And for that matter, who gave you the right to judge how many people have to be affected by something before it's important enough for others to care about?

"So don't use their stuff then."

So you're an economic/social darwinist. See my previous post in this thread.

" If everyday a bunch of charity cases walk into your office and give you their sob story will you do work for free or for a rate that means you lose money? You might once or twice out of the goodness of you heart but you aren't going to do it everyday until you go bust are you?"

First of all, you're so full of shit your eyes are brown. nVidia is at exactly zero risk of "going bust" if they support startups and hobbyists. They risk only making a (very) slightly lower profit.

As for me, I actually do my best to support those with esoteric requirements. In fact, I get together with other consultants in both of my major fields of endeavor to work out standards and do the equivalent of "open sourcing" as much of our work as possible. This makes it easier to support a large number of "edge cases" with a minimal cost on our side. It spreads the load amongst the network and ultimately builds good will amongst those individuals and companies that form the edge cases. Good will that means a lot when the odd one goes gold.

"Hobbyists should stick to hobbyist friendly vendors that release proper documentation for their products and be hard on vendors that don't release documentation. What hobbyists really don't need is people flapping their gums about stuff they don't care about or need."

More economic darwinism and some extra judgement based on an appeal to your own authority. Don't push for change or seek better of vendors; just accept whatever your betters give you and whatever they tell you you should need. Neat-o! (Double bonus if you ad hominem anyone who stands up for the edge cases.)

"The intel GPU drivers have been opensource for a long time. They still crash the whole X server when people do certain actions in kicad with some models of GPU. The bug has been there for about 5 years. Open sourcing the drivers doesn't instantly fix hard to fix bugs."

And now you're way out in the middle of nowhere. I never said anything about open source automatically fixing hard to fix bugs. I said it gave those who care enough the ability to do so. Now you're just manufacturing arguments.

"What exactly are they going to tweak/tinker? I can maybe understand that they might be able to find where values like the different core frequencies are held in the flash and overclock their cards but I very much doubt they are in IDA disassembling the stock firmware, documenting and re-implementing it on a daily basis."

Your doubts are irrelevant. You've proven your limited capacity for belief repeatedly. While I'm not going to waste my time going into a huge amount of technical detail, one example of what one group has been up to with their driver recoding has been changing the way vector calculations are done by the GPUs so that they can get (way) better performance from their algorithms. They had great success with the Phi, and so far have had good luck with AMD, but nVidia makes the hardware that is optimal for this task...if they could just change some of the behavior.

"So, yeah, poking in a hex editor to tweak the settings of the cards which nvidia doesn't make available."

Uh, no. But great job inserting your own prejudices and biases.

"What architectures do you think could really do with nvidia GPUs but don't have binary drivers. Keep in mind that there are only 3 or 4 current architectures that have pci-e interfaces."

You do realize that ARM, for example, is not remotely cross compatible, eh? A binary for one ARM chip isn't always (rarely, IME) going to work on a chip from another vendor. Or even a different chip from the same vendor. That's before we start looking at Power or MIPS.

I know, for example, of one group working to build an embedded PIC32 unit with a GPU (the "sidecar", as it's lovingly named) that is being optimized for extreme environments for purposes I can't go into because of NDAs. Some of the stuff they're doing requires some really crazy calculations to be done in real time (or as close as possible) and they absolutely need to tinker at the metal.

But I guess all them folk are just posers, eh? Just wanting to overclock cards and such.

"Unless the bugs are in the firmware that has no relation to the firmware being signed or closed source. Nvidia could have opensource drivers and closed firmware (like 99.9999999% of the stuff in your machine that has a mainlined driver but requires firmware).. would you still be demanding they remove the signing if that was the case?"

Actually, yes. I do generally request of all vendors that they either put in place a program to make it reasonably easy to get custom firmware signed (and then inject it) or - better yet - do away with signing and open source their firmware. For everything I can get my hands on from BMCs to the radios in my phone.

Part of the reason is security. These firmwares are often abandoned by vendors, yet units stay in play for bloody ages. Signing can be part of a defense, but it can also prevent community-sourced updates for abandoned hardware, which ultimately leaves us more insecure.

Smartphones are a great example of this issue: a year out, and you're not getting any love. Try to roll your own and you're fighting an uphill battle because of all the devices with undocumented, closed firmware. A lot of which has known vulnerabilities. That's before we begin discussions about nation states occupying your firmware...and they won't have trouble getting their malware signed! (I'd really like to have a nice maintenance program that re-flashed all my firmware with known good copies relatively regularly, letmetellyou.)

And for all that we're having this discussion about GPUs today, the economic ethics being hashed out over these issues will be the default for future products. I don't fancy an "internet of things" full of abandonware with closed, unupdatable firmware.

How much of your "smart house" do you want to have to replace with the latest, greatest before you sell it, hmm? Are you okay with rebuying your car every year or two?

If you think I'm being alarmist, remember that these are markets nVidia is targeting with a vengeance. I don't particularly care for the implications - on a personal level or at a societal level - of economic darwinism being applied to technology when it becomes as ubiquitous, and in many ways as critical to human life, as it is promised to become in the next 10 years.

But hey, there's nothing wrong with locking everyone out. As long as nVidia makes money doing so...right?

3
3
Trevor_Pott
Gold badge

Re: WTF?

"Serious question. Why not simply refuse to buy Nvidia unless and until they allow the level of support you think you need? Why not just boycott them, and make your reasons for boycotting them plain to see?"

Because the implication behind that is that the only thing that matters is money. This is from the same school of thought (economic/social Darwinism) that gives us neglected tropical diseases. There are plenty of individuals and companies that aren't in the "fat money belt" and could ultimately prove to be very useful to society as a whole if only they could get the initial support out of the gate.

There aren't a lot of vendors in this space. If we make it acceptable to behave as an economic Darwinist when in a position of monopoly, near monopoly or functional duopoly then we are ultimately restricting our innovation as a society. You can only really innovate if you have enough money, but you can't get enough money without being able to innovate (or unless you know the right people.)

Maybe that's fine by you. But it isn't fine by me. I happen to think that those who aren't rich and powerful - or part of the blind majority - still have much to offer society. Be this discussion about technology or real world issues like education, health care or political representation.

It's the age old debate: those who view the dollar as almighty versus those who see value in everyone (or almost everyone).

3
4
Trevor_Pott
Gold badge

@Handy Plough

But I'm a professional dick!

6
1
Trevor_Pott
Gold badge

@Daniel Palmer

At least one of the startups I'm working with is actually part of a project to bring both CUDA and OpenCL to the nouveau drivers. At the moment, he's been working on OpenCL. In fact, I just heard him log in to my testlab and spin up the GRID cluster, so I presume they've started working on the "GPU-enabled VMs" one more time.

2
3
Trevor_Pott
Gold badge

Re: JustNiz

"Where am I looking down at others exactly? You're the one trying to belittle the OP for mentioning he's some sort of developer by using someone else's apparent skills in an attempt to make him feel small. I have a feeling that the 2 or 3 lines I have in the mainline are more than the sum of *your* input to a serious kernel."

You'd be right. I'm not a developer, nor do I claim to be. But I'm also not disparaging someone's development efforts as "unimpressive" here. I never said our "professional Linux developer" should feel small because he is a Linux developer. I said that being a Linux developer - professional or not - doesn't give him standing to diminish someone like Chris. There's a difference.

"Which nvidia supply a public API for and doesn't require running third party firmware on the GPU. You're making out this is like some secure boot system that stops people running their own code on their CPU/GPU when it really isn't."

CUDA and OpenCL support aren't there for all processor types, and are still somewhat bug-ridden for x86 Linux. What's more, there are still efficiencies to be eked out that would be a lot easier to achieve if one didn't have to go through the nVidia bureaucratic gauntlet.

"Vendors need to be working to get their stuff into the mainline so it doesn't bit rot but the management is usually very much "our precious" so however much developers tell them that they should try to get their stuff mainlined it's hard work to make it happen."

No, vendors need to open source their frakking drivers so that the rest of the world isn't held up by their internal politics. There's a whole industry that needs to be able to move faster than they can.

"I work with small startups a lot bringing up Linux of their hardware. I can't think of a case where we haven't been able to get the complete source for all of the vendor's drivers."

So do I, and nVidia doesn't release that information with a simple NDA. It takes a hell of a lot of lobbying and a lot of money. Money that you don't tend to have as a startup unless you are already A (or usually B) round funded. Especially if you're not an American startup.

"Hobbyists have a bit of a problem that they aren't very valuable to big semiconductor companies that need to ship hundreds of thousands of units to make a design profitable.""

Yeah, but fuck 'em, eh? Awesome attitude.

"You seem to be arguing along the lines of "I know more than you so shut up""

Uh, no. That would be you. There are damned good reasons to want open source drivers, even if they don't apply to you, personally, or people you've worked with. But hey, because you don't personally see a need, you're entirely happy with denying everyone else. You sure you're not a bureaucrat?

"and "Won't someone think of the children that for some inexplicable reason need to be able to upload their own firmware to GPUs". Neither is making much sense."

There are lots of reasons. You just don't accept them as relevant. Poor support outside of x86. Inability to obtain source code unless you have gobs of money and influence. Bugs that never get fixed. Abandoning hardware after very limited periods of time. Reams of WONTFIX bugs and corporate history of simply ignoring bugs raised are all good reasons.

But you're also conflating two things here: 1) the ability to update card firmware (nice to have in a lot of ways) and 2) open source drivers that can be recompiled on other platforms (an absolute must).

Now some of my clients have a desire to get into the firmware and tweak and tinker, because they need every erg of speed. But I think there's a much broader need for open source drivers that can be tweaked and recompiled for different architectures, and where bugs can be fixed that nVidia won't.

4
4
Trevor_Pott
Gold badge

Re: JustNiz

"He seems to have written one kernel of limited complexity."

Which still requires an understanding of programming that most people lack. As a hobby item, it may not be the most complex, but it is more than the overwhelming majority of self professed "professional Linux programmers" are capable of.

What have you done that is of the same complexity as a "multi-million LOC proper operating system kernel like Linux", thus giving you the bragging rights to look down your long nose at others, hmm?

"I'm not sure how you go from "writing a hobby kernel" to "needs to have custom firmware for a graphics card"."

Needs to? Maybe not. Wants to? Sure! In Chris' case, he'd love to explore it just for the sake of exploring. Learning for the sake of learning. I do know people who use other cards (notably the Xeon Phi) specifically because they get the kind of access to it that they don't with nVidia, even if nVidia's cards are more powerful.

You are presuming that your own identification of needs outweighs everything and everyone else.

" I can't even see where his kernel's nvidia graphics driver is.. it seems it has a serial console only. But anyhow, he's free to do what most toy kernels do and use the standard VESA stuff that is compatible with the millions of PCs out there.""

Graphics cards aren't just for graphics. They are used for processing as well. And the last I heard he was working on integrating some GPU compute stuff, given he'd just gotten some nice embedded boards with some sexy GPUs on them. Maybe you might even stop to think that in his case, part of the frustration is that the lack of open sourced drivers makes doing that integration work harder...especially when he's working with non-x86 platforms.

"Not massively impressed really. I know lots of people that look at an instruction sequence and tell you how many clocks it will take and how to reduce the clock count by using some weird trick."

Congratulations. You know people! There are 7 billion+ of them on the planet. Now it's my turn to be completely unimpressed. OTOH, here's someone who isn't an electrical engineer or computer scientist by trade who spends his spare time learning this stuff. That is relatively rare. No matter how little it impresses you.

"If the stuff they are working on is so important they should have a contact an nvidia that can help with that. Surely they want someone that has access to the engineers that put the chip together opposed to stuff that is reverse engineered.

What a lot of people don't realise is that even with proprietary hardware if you have enough cash and sign enough NDAs you can usually get access to all the information and code you would ever need. I have the complete source for the binary drivers for various ARM SoCs sitting on my harddrive."

Wow, you really are an arrogant little weasel man who lives in his own little universe, aren't you? "Get enough money together and you can have influence"? On behalf of every small business, every startup and ever hobbyist in the world: fuck you. In the face. With a 20MT vat of battery acid.

The stuff these people are working on absolutely is important. In the case of at least one of these companies, the quicker they get their software together the better, because it quite literally saves lives. (Though they've got map rendering down from last year's industry best of 3 days to 36 hours, the lack of proper access to the cards is preventing them from hitting "real time".)

Sadly, they just don't have enough money to matter to arrogant types such as yourself. But that's okay. As long as your own inflated sense of self importance and the importance of proprietary drivers is in place, let other people die in the wilderness. You'd probably blame the victim anyways. I sense that about you. I really do.

"The proprietary drivers have public specifications right? For your previous example that should be enough. If they find bugs in the proprietary drivers they should have a contact within nvidia that they can contact to get it fixed."

Except that you have to be pretty goddamned important to have a contact within nVidia who can actually get anything done. Nor is nVidia particularly interested in actually fixing bugs, fixing them with any speed, or fixing bugs that only affect small, insignificant people without a lot of money.

Your solutions of "well, just obtain power and influence" are asnine, and prove that you don't remotely understand why open source anything is important.

Now, I happen to understand why nVidia doesn't do open source drivers. It has to do with military contracts. There's a whole lot of very long, condensed reasoning and I'd be happy to have a rational discussion with people about the whys and why nots.

What I won't stand for is tearing down someone who is actually quite intelligent, capable and a damned fine human being just for the sake of some self-important grandstanding. Especially when the rationale for the vitriol - that open source drivers are irrelevant - is complete and utter bullshit.

Have a great time.

16
8
Trevor_Pott
Gold badge

Re: JustNiz

@JustNiz, you're kind of a dick. I state this as a professional dick: someone who actually gets paid to troll the internet. As a professional dick, I recognize your amateur dickishness, and the fact that you are both an amateur dick (for free! for shame!) and so terrible at it offends me.

Let me tell you a little something about the man you are disparaging.

To start off with, he writes his own kernels. That's right, @diodesign there writes his own kernels. Not "recompiles a Linux kernel", but writes entire kernels from scratch. When he's bored, or depressed, he does things like try to port compilers to new platforms or SoCs so that emerging languages like Rust can be made to work where they have never worked before.

And then he writes a kernel in it.

Our Mr. Williams here absolutely is someone who is directly affected by the lack of open source drivers from nVidia, and he does that stuff just for fun. He does, in fact, code that close to the metal. On every CPU architecture and size of system he can find.

He won't wave this in your face, but I sure as hell will. Watching you try to wave about your own self-importance and the amazing depth of your critical experience in the face of what Chris actually knows and does is a bit like watching a "professional" social worker try to educate a medical journalist on the ins and outs of third world medicine, knowing that the medical journalist in question is an MD and spends half his year in Africa treating malaria.

You are making a 100% grade A ass of yourself, and you do not even know it.

What's more, I find the tripe you're waving around offensive. I am working with a number of startups here in Canada that are doing real, honest to god GPGPU work. Oilfield simulation. GIS rendering. Power distribution optimization. The lack of open source drivers really, honestly and truly does affect them, as there are regularly things they need to be able to change, and they have to fight tooth and nail to see them changed. It would make their lives a lot easier - and cheaper - if they could just code the drivers themselves. (And sometimes, they do fork the really, really crappy open source attempts at drivers, but they aren't nearly as feature complete, so it's a pain in the ASCII.)

I'm glad that you get by just fine on the proprietary drivers. But you're one person. Take the time to internalize the lesson learned here today: your experience does not dictate the experience of others, or even the experience of the majority. And yes, when something like "the drivers are not open source" is a problem for other people, that actually matters. Even if it does not matter to you.

Thank you, and have a great day.

35
11

'We STRONGLY DISAGREE' that we done WRONG, says Google

Trevor_Pott
Gold badge

So a company isn't allowed to make their own product worse? Why the hell isn't Microsoft in the brig for Windows h8?

0
1

Chrome version 42 will pour your Java coffee down the drain: Plugin blocked by default

Trevor_Pott
Gold badge

Re: isnt that a good thing?

Is a potato a zero or a one?

0
0

The VMware, Nutanix mud wrestle is hilarious, but which one is crying with fear on the inside?

Trevor_Pott
Gold badge

Re: About owning the data center...

You are not explaining how Nutanix is lock in. Nutanix sells a variety of appliances that use different hypervisors. So long as you are not using a custom hypervisor designed by Nutanix, how is it lock-in? The whole bloody point is that you can take your workloads and move them to another appliance vendor using the same hypervisor at the snap of a finger.

Your belief that "only vendors with an OS and application-level service offering (IaaS, PaaS) will be able to truly make a difference and truly "own" a data center" is wrong. Full stop. There are lots of parts of a datacenter that matter. If a datacenter becomes heavily invested in automatino and orchestration, for example, any move away from that automation/orchestration platform becomes practicably impossible. And that's just one example.

"Only open systems, open architecture will avoid lock-in in the long term" is also completely and utterly incorrect. You can be just as locked in to open source as you can to anything else. The "openness" of the platform doesn't prevent lock-in. Standards - the establishment and adherence to - prevent lock in.

Consider for a moment the following two scenarios:

1) There is one and only one hypervisor. It is open source. If you want a hypervisor, you must use this, or one of its not-very-far-afield derivatives. If you want change you need to bribe developers and play petty ego politics and hand-hold groups of grown children as they squabble about minutiae. Only the largest companies have the money to effect change. Features that benefit small companies don't get built.

2) There is a defined standard for both the virtual machine container (VHD/VMDK/XML) and the "integration tools", and there are multiple hypervisor vendors available. VMs can be moved between hypervisors with ease. If you don't like the features, development path or what-have-you of a given vendor then you simply choose another.

The latter is an ecosystem without lock-in. The former is an open source ecosystem with lots of lock-in. Competition and standards are what prevent lock-in. And it's competition that ultimately drives standards adoption. We're starting to see this now. Startups are emerging to migrate seamlessly between hypervisors. In some cases, even to enable vMotion between them. (See: Ravello, as one extreme example.)

Lots of hyperconvergence players is hugely good because they make it easy to transition between them. Don't like one? Throw them away and get another. Nothing - at the moment - locks you into Nutanix in any way. Lots locks you into VMware.

0
0
Trevor_Pott
Gold badge
Pint

Re: About owning the data center...

I've migrated from VMware to KVM. It wasn't that big a deal. If you have trouble migrating between hypervisors or data vendors, the startups will gladly help you. For cheap, too.

But you'll notice the article talks about migrating from one KVM hyperconverged solution to another, and from one VMware hyperconverged solution to another, and so forth.

If you want to pay more and get progressively less, go ahead. If you want to have no bargaining position with a vendor who controls more and more of your datacenter, go ahead. You are not alone in your view of the world.

But the number of individuals and companies who retain that view are diminishing. Maybe that means nothing to you today. Maybe it won't mean anything to you tomorrow. At some point, however, you might eventually notice that a rather large shift has occurred while you were refusing to look. Perhaps then you'll stop to consider what all those people know that you don't.

Cheers and beers.

0
0

SQL Server 2005 end of life is coming, run to the hills...

Trevor_Pott
Gold badge

"are there many companies running a business-critical instance of MSSQL Server 2005"

107 out of 135 on my list, at the moment.

"For those installs which do require high-availability, Standard Edition supports a 2-node Cluster. So the comparison with Enterprise Edition licensing (which I agree is really expensive, unless compared to Oracle) is a bit unfair."

You'd be surprised how many of the small oil and gas companies I know have run up against the need to be using enterprise. A couple of law firms too.

" Prefer to upgrade the application every 5 years with PostgreSQL, or every 10 years with Microsoft?"

Honestly, having done the math, I believe you pay less to do it the PostgreSQL way. Sure, you migrate more regularly (not a bad thing in and of itself), but you do so by spending your money keeping your application developer afloat instead of adding $0.00000000000000000000001 to the shareholder dividend for Microsoft.

That developer relationship will be vital to helping your company grow. Microsoft will probably try to kill your company so it can take over your entire sector.

0
0
Trevor_Pott
Gold badge

"I also found hard to believe that a properly coded database application running on MS SQL 2005 will break on 2012/2014"

Then you don't actually know much about the changes, especially in 2014. What was a perfectly okay application/database in 2005 will not necessarily work in the newer versions, especially 2014.

And then you go assuming things like "properly coded". Who defines proper? You? Or is "proper" simply "anything that ports seamlessly"? And what about "improper" applications/databases? You just say to folks "oh, sorry, you're fucked, too bad, should have been able to see the future, enjoy being out of business because you can't afford things"?

It's easy to simply write off individuals and companies you don't know with a dismissive wave and a haughty sense of superiority, but way down there past the end of your long nose there are thousands - if not millions - of organizations using applications with databases that absolutely will not migrate smoothly.

Sorry mate, but I've been doing these migrations now for three years, and you just flat out don't know what you're talking about.

0
0
Trevor_Pott
Gold badge

Re: SQL Server 2005 » 2012

@deadlockvictim

If you expect an SMB to "refactor data types" when migrating a database, you're completely insane.

Migrating to SQL Server 2012 can indeed be a pain in the ASCII because the server has changed enough that 2012 doesn't support everything in exactly the same way that 2005 does. So there absolutely, 100% are databases and applications that, when moved to SQL Server 2012, don't work without the developers changing how the app talks to the database.

That's a huge problem when your developer has not chosen to do so, is out of business, or is charging you a year's turnover for the "new version" that works with the new database.

0
0
Trevor_Pott
Gold badge

All mine run on Server 2008. When the database server runs on Server 2003 and does nothing but run SQL it's real easy to move it from 2003 to 2008 (or newer, if you have licenses).

It's a hell of a lot harder to move from SQL 2005 to SQL 2012. Experience says about 25% of your applications will just flat out stop working. And SQL 2014 is such a dramatic change from SQL 2005 that you can bet most of your applications are going to give up the ghost, unless the devs have been all over it.

Now, in the real world a lot of us use applications where the devs are emphatically not "all over it". Hell, I still have to babysit an application that uses frakking Btrieve. That's like bashing two rocks together to make fire. Underwater. While being boiled alive.

Now, SQL 2005 --> SQL 2008 R2 should work for almost everyone and every application, assuming you have licenses.

If you need to go back to your developer and ask them to port the DB, don't get them to port it to Microsoft's latest and greatest. Just get them to port to Postgres. Later this year GPU acceleration for Postgres comes out. From experience, it's pretty fantastic. What's more the licensing costs are a lot more bearable.

If you don't think that licensing can be a bit of a pig, go take a look at the cost of two SQL 2014 Enterprise 4-core licenses (to allow for replication between two 4-core servers). Tell me your average SMB can afford that.
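To sketch that bill (assuming a list price somewhere in the neighbourhood of US$7,000 per core for Enterprise edition - check the current price list yourself, it moves):

# Ballpark only - the per-core price is an assumption, not a quote.
price_per_core = 7000      # assumed Enterprise list price, USD
cores_per_server = 4
servers = 2                # primary plus replica

licence_bill = price_per_core * cores_per_server * servers
print(licence_bill)        # ~56,000 USD before Software Assurance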

Hell, for that kind of money, you can probably get your dev to port to Postgres and never worry about the licensing issues again.

Is that proper advice for the enterprise? No. But enterprises are probably not facing the same SQL 2005 issues as SMBs, and it's SMBs that are most likely still clinging to their old databases.

"Move away from SQL 2005" is not a simple, straightforward item with clear cut, universally applicable solutions, or even reasons why companies are facing the problem. It's a tangled mess of a thing and in a lot of ways it far - far - more difficult and problematic in today's datacenters than a "simple" operating system upgrade.

6
1
Trevor_Pott
Gold badge

Aye, and other than the security boogeyman, I'm unsure what the benefit is for the average SQL user of upgrading. 14x faster? But what if SQL 2005 is already ridiculous overkill? Sometimes it's used not because it's the most sensible DB for the use case, but because the developer didn't know how to code for anything else.

5
0

What just went down on Intel for three months? Er, PC and mobile chips

Trevor_Pott
Gold badge

Re: Maladies of x86 cpu's

PCs may not grow much from here on in, but they're going to take a hell of a long time to decline.

2
0

Bell Canada pulls U-turn on super-invasive web-stalking operation

Trevor_Pott
Gold badge

Re: Boycott BELL and send a clear message

If you can get Bell you should be able to get TekSavvy.

0
0

Google, Microsoft and Apple explain their tax tricks in Australia

Trevor_Pott
Gold badge

Re: Which MS Product is in decline?

Windows, Windows Server, Exchange, SQL, Dynamics and virtually every other product that you might consider installing on premises is considered "in decline" and "legacy" by Microsoft. Microsoft has radically altered its sales structures such that the only way you make your quotas is to sell Microsoft's public cloud services. Based on this, Microsoft is not merely seeing a slowdown (or halt) in growth for these segments of the market, it is actively trying to reduce those product lines to zero.

You will put all your data in Microsoft's cloud, you will submit to American legal jurisdiction and you will pay subscription fees for everything, especially when you hit a downturn and can't actually afford it. There will be no more of this "owning your own infrastructure" or "stretching your purchases a few years". You will pay Microsoft what they feel is their due per endpoint and per user (for frontend and backend services) and you'll do it with a smile in your wallet, goddamn it.

2
0
