Where did I say that the technology "Flash" was bad? Flash certainly can be content. I loves me my Flash TD as much as the next guy! The point was that having a website with awesome Flash (or HTML5) transitions, animations, menus and intros has ZERO value unless the site actually has content and/or useful functionality.
The article was emphatically not about "the technology 'Flash' is bad." It was about the fallacy of a notion currently popular amongst web developers. Namely: form over function.
Seems however that there are many people who are /very/ touchy about the idea that Flash the technology is "bad." It makes me wonder what made them so wound up?
@The Unexpected Bill
Good customer service indeed sells. I mean, the customer service I got from these guys was so fantastic I felt it was worth an article. The story is even better than the article tells.
You see, when I first called these guys looking for a transmission, they said they had the right one and they put it in the queue to be shipped out. I get a call the next day and the guy says "my boys apparently cut the kickdown cable on this transmission taking it out. What do you want to do?" I didn't know a thing about what this meant, so he said he would get details on how this would affect me and call me back. He called his transmission guy, who told me "it would be a $300 job to reattach a new kickdown cable, assuming you can find one." I was heartbroken; the transmission they were selling me was $350 after shipping!
So the guy noodles around for a day and gets back to me. He says “I found a buddy of mine with one of these trannies. I’ll tell you what; we’ll sell it to you at the same price we quoted you on the original.” I was blown away. Gast absolutely flabbered.
Here is some random company on the other side of the continent that not only lets me use the tool I am most comfortable with (instant messenger) to talk to a live person in real time, but they bent over backwards for me. They didn't know me from a hole in the ground, had no previous business relationship, no reason to treat me "special" that I can think of. Yet lo and behold: fantastic customer experience.
A couple of days later I was thinking to myself “hey, I should actually get off my duff and crank out an article or two.” I thought back to this company and thought “you know what, screw all the negativity and scandal. I want to talk about someone being awesome.”
So yeah, good customer service on this guy's part totally got them an article. I logged onto the instant messenger earlier today (after I discovered my editor had published it) and sent him the link. He was quite surprised; apparently it's been printed and is now on the company bulletin board. ;)
The whole experience contrasts starkly with my day job. At my day job the CTO of the company is banging on one more time that we need to "completely redo the website." I feel frustrated because I am trying to counter this with "it's not what the website looks like that matters (it's perfectly fine, aesthetically speaking), it's what is ON the website and what FUNCTIONALITY it provides that matters." This is countered with "our website is crap, we need to start over." There is a distinct temptation to cry/scream/howl/sob in frustration.
The reason our customers shop at the store I work at…the reason I like this random car wrecker I found on the internet…it has nothing to do with /presentation/. It’s because when you send an e-mail/text/IM/whatever there is a warm body on the other end that says “hello, how can I help you?” They then proceed to /actually help you/!
As such, I guess the whole article is a bit of cathartic venting. Since my voice is seldom heard around here, I cast my idea into the wild interwibble:
It’s not what your website looks like that matters.
It’s how you use it.
I drive a Scion XB
I think around here it's qualified as "a boxy go-cart with a plastic couch on the front." When >50% of folks in your province drive pickup trucks (and a Ford F-150 is a "starter" truck), then yes, a Camry is small. Most people have pickups or SUVs 'round here. People with sedans or smaller are driving small cars.
I drive my little Kleenex box around, with my head touching the roof (46" of headroom, and I still have to bend my neck). I can tell you that thanks to her low ground clearance and general sub-compact sizing, I am generally terrified all the time whilst driving. Everything around me is three times my size. Nobody can see me on the road; they are SITTING at about the same level as my SHOULDERS.
Let me tell you though; when you are toodling around in an F-350 with raised shocks and a big old cowcatcher on the front, you make your own parking spots in the winter. It’s a very Albertan thing to do.
I am not sure car-parts.com sells anything. At least to me. (I don't work for a car shop.) I would have to look further, but suspect they only really sell the "service" to car shops who want to register their inventories online.
Thus: nothing. There's nothing to pay for if you are just a dude searching for a transmission. ;) Although, that brings up a point: I should totally go find out who actually runs that site and let them know I wrote an article. I usually do that after it's published, but I got distracted trying to find a shop here in the city that would actually /install/ the transmission...
I know it was in jest...but I wasn't. If you can figure out a way to play Crysis without the DVI port, I'll buy you a pint. I would *love* to take these beauties for a spin! :) After all, the question has to be asked: with two Xeons, 48GB of RAM and two Tesla cards...does Crysis still run like crap?
Because it wrecks my laptop...
@Ian Michael Gumby
Getting the cards in the server seems way cheaper. What I am ordering really isn't that far off the retail price. Even the local supplier I use for retail gear has a decently low retail price: http://www.cdw.ca/shop/products/Supermicro-SuperServer-6016GT-TF-FM205-no-CPU/2251250.aspx. Remember that you have to add CPUs and RAM to that.
That said, my client has some decent connections, and got a reasonable discount off of what seems to be the Canadian retail price for this gear. Also to be noted is that I don't have any disks in any of these nodes: they load their OS over the network. It's just board/chips/RAM/GPUs.
@John Smith 19
It's a big converted warehouse sitting on top of a massive concrete slab with a two-story 3500 sq ft basement underneath. Sadly, most of the building is offices and warehousing. The corner of the building I get to work in really isn't that big...but I can punch holes in the wall/roof/floor if I need. I just can’t move walls.
I'm very sorry I didn't make that clearer. That is totally my bad. Even 48GB of RAM is probably excessive for these nodes...but I like to fill all the slots. I guess I forgot that not everyone would realise that the average video rendering box would not make use of 192GB of RAM. It's mostly about the number crunching. They typically crunch work units in the 4-8GB range, though they could get tasked with up to 36GB, depending on the job.
We're doing tests now to see if 10Gig NICs will really speed up overall farm performance, or if it is (as I suspect) going to be bottlenecked by the control software, not the network. Only tests will tell…
@Ian Michael Gumby
It only seems like a "great deal" if you assume maxed RAM. While the board supports 192GB of RAM, I'm only actually loading the systems out with 48GB. That's 12x 4GB modules, a pair of CPUs, the two GPU cards and the server. You can buy the barebones server with the 2x GPU modules retail for $5700 here in Canada. 48GB of RAM plus CPUs will run you at least another grand, retail. Buy a few of them and a discount of $1000 off the retail really isn't that much.
It's an interesting compromise, this GPU processing thing.
If I made some ridiculous uber-machine with quad 12-core CPUs and 8 GPUs it would crunch numbers so fast I'd need a little lie-down. That said, how much time is spent crunching numbers versus chatting with the control server looking for new jobs? Personally, I wish control software were a little bit more dynamic. I would love to have a couple of real number-crunching beefcakes for the render jobs that can't quite be broken up as much. The rest could be farmed out to the smaller nodes.
Instead, you need to find a balance between speed of processing, power efficiency, cooling, ability to supply X number of watts to a single system and ability to actually get jobs from the control server. Given that the client uses Lightwave, I've found from testing that 2xCPU and 2xGPU seems to be about the right balance. At the end of the day, the control software just doesn’t seem to be good enough to deal with more.
Good luck sir. I quite enjoyed all your pieces here on El Reg. You are now giving me a reason to look up the Daily Telegraph and read their technology section!
All the best, and I hope it goes well for you. Tonight, I'll be drinking my pint in your honour!
At least we agree on something.
None of this is about me, nor do I understand why it should be about you. I don't even understand why we're having this conversation in the first place. This is about a guy who wrote a great article, one that I personally am eager to read follow-ups to. It's about someone who I think did a credible job at bringing a difficult topic "down" to the level of regular folks like me. The OP to this thread was kind of harsh on the author; I felt maybe if the OP was looking to get more info from this author...
...he'd catch more flies with honey than with vinegar. Where and how and why you got involved, I’ve honestly no idea.
Further apologies to the author for the tangential nature this comments thread has taken.
"Who the author [is]"
So...who is the author? Is it someone I should know? There was no "el reg bulletin: new datacenter articles are written by X." I am going to assume the name on the article is the name of the author unless told differently. If the name is one I am supposed to recognise as "really big in the industry," then I am afraid they play well outside my pay grade. (Actually, that's evident by the Neat Stuff being discussed.)
Also, unless you are Matthew Malthouse, I wasn't talking to you at all in this thread. Are you cyber-stalking my posts in other threads now? I have no idea why you posted "wimp" to this author. I found it bizarre, but figured "meh." Why you felt that my post to Matthew Malthouse was in any way directed at you, I have absolutely no idea.
Are you feeling okay, dude?
Also, @Manek Dubash: I am very sorry to admit publicly to not recognising your name. Google came up with a few possibilities in the IT industry…but I must admit to not having heard it before. Please take that not as a slight against your experience, but rather an example of my not playing in quite the same fields as you. I also apologise that this thread has somehow grown a "jake vs. Trevor" arm. Not remotely my intention. Keep the fantastic articles coming!
I thought it was a great introductory article. It described the guy's basic structure, and left the field open for follow-up articles fleshing out individual elements. Now, I can’t speak for the author – I’ve never talked to him, so I don’t actually know under what constraints he is working – but I know they hold me to between 500 and 750 words.
The long multi-page articles are apparently not nearly as well read as the simple 500-750 word single-page ones. I think you’ll find that even the really experienced authors such as Lewis, Lester and Andrew write more single-page articles than they do multi-pagers. The multi-pagers they do write are hugely in depth and generally very concise. They have had years of writing experience to learn how to hold a reader’s attention long enough to click the button for the next page.
Consider cutting the author a little slack. I’ve read all of his articles so far and I’ve liked every one. He is doing a good job trying to take a very complex topic - “Datacenters In General” - and reduce it to something that individuals who aren’t familiar with it can grasp. He has only written a few articles for El Reg; perhaps he’s even new to being a writer in general. He’s just hitting his stride with his audience, and frankly he’s doing better than I did when I started!
Try ASKING the author for further elaboration on topic areas you prefer. El Reg’s commenttards are notoriously critical; being offensive, rude or demanding will probably just get you ignored by the author. Rightly so, in my opinion. Asking politely however will probably earn you a smile and a mental “hey, thanks for not being a douche.” If he has the leeway to do so within his contract, I’d bet that the “asking politely” bit would then manifest itself in the form of an article diving further in depth on whatever area you wanted more information on.
A great example of how to do it right is given by a couple of the commenters here: http://forums.theregister.co.uk/forum/1/2011/01/10/datacentre_cooling_and_power_constraints/
They asked very politely for further elucidation on specific areas, and I currently have three follow-up articles in draft open on my screen to accommodate them.
Anyways, for my comment to the author:
Manek Dubash: good article, sir! I however have some questions. Perhaps if you have time you could expand upon them for me, please and thank you:
1) You talk about fibre channel as the storage layer, but exclude other technologies such as iSCSI or ATAoE. Any particular reason?
2) Also: you talk about your core network as being “large, high-performance switches consisting of blades plugged into chassis, with each blade providing dozens of ports.” In my setups, I have preferred to go with large numbers of commodity switches that physically break up my subnets and/or physically provide redundant paths. I admit to not having had a datacenter under my care larger than 500 nodes, but I wonder at the reasoning specifically behind “bladed” switches. Is there something about “bladed” switches you feel is inherently superior to standalone stackable switches? (Other than space conservation?) Having not had room to play in a > 500 node datacenter, I am very curious about all the rationale.
Looking forward to the next article!
I suspect however it's not possible. Summer temps usually only drop by 5 degrees. While that might be good enough for many days…there are entire weeks which could exist outside the temperature range of “running full bore.” You’d think that shouldn’t quite be a problem, excepting that apparently a half day’s rendering can make all the difference when on deadline. That said, it’s worth exploring Amazon’s EC2 or Rackspace’s cloud as potential emergency backups for thermal excursion events.
I can do some of that. The client wants to preserve his anonymity throughout this process. (He doesn't want to give his competitors an edge in any way.) So as such, photographs are out. Design diagrams and floor plans are certainly doable, but only if you are willing to put up with my terrible Visio skills.
As to costs and specifications…some of that should be manageable. I have to ask the guy designing the liquid cooling rig what his thoughts on the whole deal are…but I’ll write up articles on what I can get away with. ;)
Humidity in Central Alberta is roughly 0% - 5% year round. As such, you are quite correct in that it is (in theory) possible to run the whole setup without chillers. Indeed, last year my chillers were only on for 3 weeks of the year. That said, I am unsure that I would ever build a datacenter without adequate chiller capacity. While we do get down to -40 in the winter, we can easily have days of +40 in the summer.
On average, the summer months are 25-30, but the spikes that go up to 40 are enough to drop any datacenter I personally know how to design. (Well, theoretically I could engineer a heat-pump system that would not be a chiller, but I am fairly certain the chillers are actually more power efficient.)
As to hardware meeting ASHRAE specs, I am not 100% sure of that. We whitebox our servers, just as we whitebox our datacenters (and everything else.) It is the reason people call me to do this stuff. Anyone can order a pre-canned (and usually very expensive) server (or even entire datacenter) from a tier 1. Not so many people take the time to look at the available off-the-shelf components from the whitebox world and ask the magical question “what if?” Hewlett Packard can deliver you a datacenter in a sea can that does everything the one I am building will do; tested to meet a dozen different standards and proof against almost anything except a nuclear strike.
I am called in when someone wants to build a datacenter into an awkward space and do it for something like half the cost of a datacenter-in-a-sea-can. (Alternately, if someone wants to make a computer system do something it was not designed to do, I can usually arrange to make it perform that function anyways.) My partner in crime on most projects – and fellow sysadmin at my day job – is the polar opposite. He is so by-the-book he makes my teeth hurt. He tests everything, checks, re-checks and then does it all over again. Every time I approach a problem from an oblique angle, there he is measuring the angle, documenting and ensuring we have enough backups to survive World War III.
In this case, we are likely going to be using some modified Supermicro servers. (I have a guy working on the liquid cooling systems now.) The issue is the video cards. I just don’t know that I can dissipate the heat off the video cards using forced air at or above 25 degrees C. They crank out stupid wattage, and trying to design this tiny little shoebox datacenter to handle 500 units without chillers is hugely outside my comfort zone.
I will naturally try to design out the need for chillers as much as is humanly possible…but I think I would be a fool not to install enough chiller capacity to completely back up the outside air system as a just-in-case measure. Call it the backup cooling system. After all, what do you do if the primary and secondary fans on the outside air system fail simultaneously?
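Just to make that fallback logic concrete, here is a toy Python sketch of how I think about the mode selection; the 20 degree cutoff, the fan-health flag and the function names are made up for illustration, not anything pulled from an actual building management system:

    def pick_cooling_mode(outside_temp_c, free_air_fans_ok, free_air_limit_c=20.0):
        """Choose between free (outside) air and mechanical chillers.
        The cutoff and the fan-health flag are illustrative placeholders."""
        if free_air_fans_ok and outside_temp_c <= free_air_limit_c:
            return "free-air"   # most of the year around here
        return "chillers"       # summer spikes, or both outside-air fans dead

    print(pick_cooling_mode(-25.0, True))   # free-air
    print(pick_cooling_mode(32.0, True))    # chillers
    print(pick_cooling_mode(10.0, False))   # chillers (fan failure fallback)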
Well, we live in Edmonton, Alberta, Canada. 10 months of the year, the outside temperature is below 20 degrees C. I have never installed a datacenter in this city without an outside air system. It would be unbelievably stupid not to take advantage of the massive source of cold air just on the other side of the wall.
The issue is that for two months of the year, the outside air temperature is often over 30 degrees C. This means that in addition to your outside air system, you need chillers capable of handling the entire datacenter, even if they are only active two months of the year. (Also: the outside air system has to be upgraded to a much higher volume/minute capacity than it currently has.)
The upgrades won't be particularly hard…but they simply cannot be done right at the moment. Edmonton has had something like a metre of snow in the past five days. We are still trying to clear our streets and walks, let alone having warm bodies to climb up on frozen rooftops to upgrade chillers!
In truth, most of the datacenters I have anything to do with here only need chillers two or at the outside three months out of the year. The rest of it is simply forcing outside air into the building, through the front of the servers and then exhausting the lot of it back outside the building. Not particularly complex, but it does take a sheet metal guy, someone to drill holes in the concrete wall and some dudes up on the roof upgrading the chillers.
Actually, we never had much of an issue here. We simply spammed spindles. Big fat RAID 10s running on multiple Adaptec 2820sa controllers. The limits were the controllers themselves, not the drives! It took some work, but we eventually realised that if you staggered the startup of the system, they wouldn't all be trying to read/write from the storage array at once. Once we mastered staggered startups, booting was no longer an issue.
The control software was also good about this: it could be configured to only hand out jobs to a preset number of nodes at a time. So the first 15 nodes would get jobs and then 30 seconds later another 15 nodes would get jobs, etc. This staggered the requests from nodes reading jobs and writing results enough that it was within the capabilities of the hardware as provided.
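For anyone curious what "staggered" means in practice, here is a minimal Python sketch of the idea; the batch size, the delay and the Node/assign names are invented for illustration and are not the control software's actual API:

    import time

    class Node:
        """Stand-in for a render node; assign() would kick off the real job."""
        def __init__(self, name):
            self.name = name

        def assign(self, job):
            print(f"{self.name} <- {job}")

    def dispatch_staggered(jobs, nodes, batch_size=15, delay_seconds=30):
        """Hand jobs out in small batches so the storage array never sees
        every node reading scene data and writing results at the same time."""
        for i in range(0, len(nodes), batch_size):
            for node in nodes[i:i + batch_size]:
                if jobs:
                    node.assign(jobs.pop(0))
            if i + batch_size < len(nodes):
                time.sleep(delay_seconds)   # let this batch's I/O settle first

    # Example: 45 frames across 45 nodes, released 15 nodes at a time
    dispatch_staggered([f"frame_{n}" for n in range(45)],
                       [Node(f"node{n:02d}") for n in range(45)])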
At the moment, the client's sysadmin is most familiar with Octane Render as their GPU rendering platform. It can only talk to CUDA cards, and so nVidia was the only choice. So far, despite Octane being in beta…I’m mightily impressed. It is dirt simple to use, and fast as could be desired. The renders done on GPU have all the fidelity of a CPU render; there are no shortcuts taken by this software. (Traditionally, GPU renders would show graininess in shadows and there were frequently issues with glass rendering.)
At the current pace of development, Octane Render should have a fully supported version 1.0 product out the door before we get the datacenter upgrades completed and the new farm installed. Version 1.0 will come with all the scripting gibbons and bobs we need to make the whole thing properly talk to a command and control server and then we’re off to the races!
Fortunately for me however, I’m not the one who has to deal with the render software. I am setting up the deployable operating system, designing the network and speccing the hardware that will be used. I get to design the datacenter’s cooling and power systems and oversee the retrofit. I’ll be ensuring that Octane Render is installed properly on the client systems and that the individual nodes grab their configs from a central location…but tying those nodes into the CnC server is the in-house sysadmin’s job.
Overall, it's a great way to play with new toys in a fully funded environment. More to the point, it's doing so in a fashion that takes full advantage of my unique skills: instead of simply following a manual someone else has written, I am doing the research and writing the book myself. Doing that which hasn't quite been done before…but for once, with a proper budget backing it up.
The fact that I am getting paid for it is simply icing on the cake. These types of jobs are so fun...honestly, I'd do them for free. Hurray for the fun gigs!
You are correct in that the render nodes don't need UPS support. The command and control servers (as well as the storage systems) do have UPSes. The UPSes are APC, and are installed on the same racks as the systems using them. So far, they have handled 2 hour outages with room to spare. Though that's probably because I went to Princess Auto and bought a bunch of very large Deep Cycle batteries and added them to the UPSes when I installed them. Works a treat!
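If you want to sanity-check that kind of runtime yourself, here is a back-of-the-napkin Python sketch; the load, battery sizes and efficiency figure are purely illustrative, not the actual rack's numbers:

    def runtime_hours(load_watts, battery_ah, battery_volts, inverter_efficiency=0.9):
        """Rough UPS runtime: usable watt-hours divided by the load.
        Ignores the Peukert effect and depth-of-discharge limits, so treat
        the result as an optimistic upper bound."""
        usable_wh = battery_ah * battery_volts * inverter_efficiency
        return usable_wh / load_watts

    # Illustrative only: a 400 W control/storage load on two 100 Ah 12 V
    # deep-cycle batteries in series (a 24 V string)
    print(f"{runtime_hours(400, 100, 24):.1f} hours")   # ~5.4 hours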
"Where did you find these guys."
All over. It's an interesting thing sometimes to walk away from our common cloud of friends and associates and realise that there is this whole huge world out there separate and distinct from the technorati. We don't know how good we have it, understanding technology, its uses and applications. Many people still use manual input devices and dead tree ledgers to do their accounting. Even more use dead tree flyers as marketing…and shockingly millions of people read them!
Every now and again I actually disconnect from the Internet and all her various digital denizens and walk the streets of the world less connected. I meet interesting people from interesting places…periodically even some who own businesses. This is where I find these people. They own my favourite cheese shop, or are my barber. They run a ski hill or a church. They own a nightclub or a shoe-repair shop or that little café around the corner with the really good grilled cheese sandwiches. They are hundreds of thousands of people in my city alone, and millions across my continent.
Every now and again I forget about them…it’s good to stop and remember.
MAIDs are for all intents and purposes standard RAIDs wherein the disks are "spun down" when not in use. That is 100% in the RAID card. The Intel RS2BL080 is my current favourite card. It uses an LSI 2108 chip. It seems to spin my disks down when idle just fine. (I believe you need MegaRAID 3.6.)
Dell PERC H700 and H800 cards can also be configured for spin down.
Most vendors don't call it MAID. They just call it "spin down" and usually tout it as a power saving feature. I should point out that a single modern RAID card can be married to SAS expanders to provide a truly "Massive Array of Idle Disks." :)
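As rough arithmetic on why spin down matters once you have a shelf or two of cold data, here is a quick Python sketch; the per-drive wattages and the spun-down fraction are ballpark figures I am assuming, not measurements:

    def maid_savings_watts(disk_count, idle_watts=7.0, standby_watts=1.0,
                           fraction_spun_down=0.8):
        """Ballpark power saved by letting the RAID card spin down cold-data
        disks; ~7 W idle vs ~1 W standby per 3.5" drive are rough figures."""
        return disk_count * fraction_spun_down * (idle_watts - standby_watts)

    print(f"{maid_savings_watts(48):.0f} W saved")   # ~230 W for a 48-disk shelf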
In any case, take some time to troll around the "about" page...he does talk about this near the bottom.
Where was "a flaw in virtualisation technology" ever remotely blamed? Also: how exactly do you feel you have the right to tell someone else they "should" have resource caps in place on testbed servers? There are dozens of reasons not to and only a few very flimsy reasons why you might want to.
I don’t see how properly testing an application – in a testbed environment – in order to do things like application resource profiling (among others) is such an issue. If you feel this article is an attack on virtualisation as a technology, then I would like to suggest that you are more than a little overly sensitive about the topic. The article was about patch testing and conveying the concept that “just because something has always behaved in X fashion does not mean that it will continue to do so forever.”
Nothing in the article remotely talked about “a flaw in virtualisation technology.” Furthermore I most certainly /was/ "asking for it." That is implied by the concept of a TESTBED server. The purpose of a testbed server is to "run the thing and see if/how it blows up." I've never met a systems administrator who didn't equate "testbed server" with "asking for it to explode."
The VM was allowed to dominate the entire server simply because...it was a test server. The point of the testing is to see if/how patches and updates change the behaviour profile of an application. If the behaviour profile of an application changes, then the entire virtual instance changes and I then go back and recalculate my load balancing.
Virtualisation is a tool in a systems administrator’s arsenal, no different from any other. While resource constraints are a fine thing in a production environment, they make absolutely no sense to me in a testbed environment. I view my testbed virtual servers similarly to how I view calibrating my various test equipment: it is there to provide you with baselines against which you can measure issues in production.
Given the radical nature of the performance changes delivered by this patch, I would say that this application server is quite simply no longer a candidate for virtualisation. Put another way: my calibration tests determined that the tool I was using is no longer valid for the environment in which it must operate. Indeed, this incident makes me grateful that I /don’t/ run my testbed systems with resource constraints. With resource constraints in place I may never have caught this performance difference. Had I not caught it, we would not be able to take advantage of the vastly superior report generation times this patch enables.
More to the point…it allows us to simply remove from service several instances of Windows Server we had been using to run “report servers” in order to compensate for the slow report generation features of this application. This frees up licences that I can use elsewhere for other projects. All in all, win/win/win largely because I take the time to reprofile my applications with every patch by running them on an “unlocked” server.
Based upon the performance delta, I have already begun the process of sourcing equipment to properly physicalise this server. This patch will not be applied to the production environment until that production environment has been fully physicalised.
There are no resource caps on the test server because the test server is used to see what kind of resources an app consumes. I /want/ them fighting over resources, because it helps me profile any changes to the application. :D
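For what it's worth, the "profile it, then compare after the patch" workflow boils down to something like this Python sketch; the sample numbers and the 25% threshold are invented for illustration:

    import statistics

    def profile_delta(baseline_samples, patched_samples, threshold_pct=25.0):
        """Compare resource samples (e.g. CPU %) taken on the testbed before
        and after a patch; flag the app for re-balancing if its appetite
        changed by more than the threshold."""
        before = statistics.mean(baseline_samples)
        after = statistics.mean(patched_samples)
        change_pct = (after - before) / before * 100.0
        return change_pct, abs(change_pct) > threshold_pct

    # Illustrative numbers only: the app jumps from ~30% CPU to ~85% CPU
    delta, rebalance = profile_delta([28, 31, 30, 29], [84, 86, 85, 83])
    print(f"CPU appetite changed {delta:+.0f}% -> rebalance: {rebalance}")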
Hot Data: 10K 2.5" Drives
Cold Data: 5.4K 3.5" Drives
There's a lot more cold data than hot on my network. Get the right controller and you can spin down your 3.5" disks when not in use. (I do like MAIDs.) Newer controllers (LSI has several very nice ones doing 6Gb SAS now) can recognise and deal with SSD/Hot 2.5"/MAID 3.5" disks separately and in different appropriate fashions for dirt cheap.
It is understandable for you to be using 3.5" drives right now if you are carrying over legacy equipment (especially if you are towards the end of your refresh cycle), but by the next refresh, there really will be no excuse.
Production and test systems were not on the same server. The test VM was on a test server with other test VMs. When the new patch finally "unlocked" the performance of the POS software, it flattened the test VM, on the test virtual server. The side effect was one of also flattening the various other test VMs on that same server.
No production systems were harmed during the testing of this patch.
The system would never really get above 30% CPU. It wasn't hitting the disk much, and didn't use the network for more than about 2-3%. RAM was about 40% in use. RAM wasn't all that active, so it wasn't a "very small chunk of RAM getting hit so hard that all the RAM bandwidth was being nommed" issue.
I poked at it on and off for four years. I never did figure out what I could possibly change that would make it actually consume more resources. There didn't appear to /be/ any form of bottleneck. Just a stubborn refusal to use what was provided it.
Truly and honestly the single most bizarre application I have ever had the opportunity to work with. With the sole exception of this one application, I have never met an app I couldn’t identify a bottleneck on.
"Common light bulb."
CFL or Incandescent? My house is all CFL moving to LED. I am not even sure you can /buy/ incandescent house bulbs any more. So you are claiming the waste power of a PSU is 14W or less on a 1kW PSU? In what universe? Even an 80 PLUS Platinum couldn’t claim that!
Now, I am always on the lookout for more energy efficient gear. You make a bold claim by saying “…those so-called "1 kilowatt" power supplies only consume about as much AC power as a common light bulb.” I would honestly like to know where you can get such a beast, if they indeed exist.
Also: you claim that “On those power supplies that say "350 watts" in big letters, that's the DC power capacity.” I would like to know which PSUs you use where you can reliably count on getting 350W DC on a unit stamped “350W.” Is that after 5 years of capacitor aging, or right off the shelf? Off the shelf, the best I’ve gotten is 98% of rated capacity, with degradation to 85% rated capacity with 5 years of capacitor aging.
I have always specced my PSUs such that system load = 80% of PSU capacity, and banked the PSU pull from the wall at 110% rated maximum. If you use alternate calculations, or know of better PSUs than 80 PLUS Platinum, please let me know!
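For concreteness, here is that rule of thumb as a tiny Python sketch (my reading of it; the 800 W load is just an example, not any particular box of mine):

    def psu_plan(system_load_watts, load_fraction=0.80, wall_margin=1.10):
        """Size the PSU so the system load sits at ~80% of its DC rating,
        then budget the AC circuit for 110% of that rating to cover
        conversion losses and capacitor aging."""
        rated_capacity = system_load_watts / load_fraction
        wall_budget = rated_capacity * wall_margin
        return rated_capacity, wall_budget

    rated, wall = psu_plan(800)   # illustrative 800 W system load
    print(f"Spec a ~{rated:.0f} W PSU and budget ~{wall:.0f} W at the wall")
    # -> Spec a ~1000 W PSU and budget ~1100 W at the wall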
I have had the best luck with FSP PSUs.
"Big, evil hard drive shredders."
I must arrange to send some equipment here, if only for the spectacle of watching a few useless drives get mauled. Beats turning them into coasters. I have about fifty formerly-hard-drive-platter coasters sitting on a shelf and dozens more "dead" drives waiting for a slow day so I can make more...
Well, I ran that idea past the head beancounter. The official response is that the beancounters don't get to decide what is considered a consumable and what is not. Apparently, that is decided by the government...there supposedly exists an actual section of law that documents computers and other electronics as being "fixed assets" rather than consumables. Would have been really nice if that trick had worked!
I wonder if it does in the US? The UK? Different countries, different laws...
Arrange for me a method of swapping out the PCs of the beancounters in Ottawa that create these silly laws and you’re on. As for the bean counters here at work…they are the “ethical” kind. If you did that, they would still maintain that they had residual value…though they would have a perfectly legitimate reason why they require a PC with more value than that evident in the older PCs.
Seriously though, who comes up with these laws? I understand that some checks are needed against people who launder money or dodge taxes in this manner…but for the everyday Joe this is just utter lunacy.
Listen to this man.
"My personal experience with HDD failures especially in enterprise level storage arrays is that frequently the disk that has been failed by the array is actually still quite serviceable and I have redeployed many of them to other less demanding situations without any issues. I suspect that the main reason for the high failure rate in storage arrays is that in a raid stripe or volume group that one slow drive can effect the performance of the other drives and storage vendors will fail these drives for performance balancing reasons."
Everyone listen to this man: he knows of what he speaks. This quote is Truth spoken freely. I should also point out that in many cases disks which consistently fail out of a RAID will pass vendor diagnostics as they are mechanically and electrically sound...they have simply remapped critical sectors as failed such that the disks are that msec slower than all the others in the array.
This indeed is why the TLER bug on the Velociraptors is such a pain: there is nothing wrong with the drives themselves...but they stop responding due to a firmware issue that ends up dropping a perfectly valid drive from the array.
Say what you want about Western Digital drives in general - and I've no good words for the Velociraptors - but I'll be damned if the RE4 Green Power 2TB drives aren't solid gear. Slow as sin...but they store a great many bits very reliably.
Bad (terrible!) idea as primary storage. Not remotely half bad as archival storage or in a MAID.
I don't actually "work for the Register." I work for a company here in Edmonton as a systems administrator full time (8-12 hours a day) as well as maintain about a dozen other networks (for example, those of my company's largest clients) "after hours." I squeeze in writing articles for El Reg mostly because part of my day job is cranking out documentation at work. Incident reports, how-tos, you name it. Half a sysadmin's job is paperwork, the other half is research.
A good example of a typical day would be today. I woke up at 7am to be on the road before 8am. I had to stop at Memory Express on the way in to pick up a spare disk and showed up at work by 9am. I managed to check the comments section and respond to a few whilst standing in line. I am at work until about 7:30pm tonight, followed by a short dinner date and then a server swap and data migration. I’ll get home around 11:00pm. That is enough time to feed the pets, check my e-mail and collapse into a heap. Rinse repeat until Sunday, which is then filled with doing all of the chores needed to keep a house maintained and various pets happy.
Even responding to this comment gets delayed; my phone let me know that it was posted at 10:39am. I have been pecking at it in between support calls and troubleshooting ever since. It is now 11:27am.
Between my day job, the various other networks I maintain and writing for El Reg I work 10-16 hours a day, 6 days a week. (Usually Sundays off.) Scheduling conflicts are thusly something fairly normal. It is something of a common complaint I hear from other people in the city. Not only sysadmins, but anyone who has to work and commute in this sprawling city knows that ecostation days – like trips to the doctor – essentially require taking a day off work. This is especially true when you consider that it can take an hour to make it through the ecostation once you arrive.
Worth a try. Edmonton's freecycle community isn't exactly something I would call "vibrant." Oddly enough, there is still a Usenet group (?!?) active in these parts named "edm.forsale." I might actually have some luck there...but the last time I played around with that usenet group, the crowd was fairly picky. Wanting complete documentation on such "give-away" prizes, etc. Worth a boo, though!