Can we finally close the book on this one, please?
We've got a Rapture to be getting on with, and we've already had to postpone it twice while they sorted this missing mass out...
You try 8 hours, 7 mins in one of those suits, fighting against the air pressure of the suit every time you try to move (and yes, that includes grabbing things), without a second's break. It's not like an 8-hour drive, where you'll likely make a couple of stops for fuel, have a coffee, sandwich, pee, whatever. Sure, these guys have drinks handy, and nappies (yum!), but it's still hard work and a constant effort.
An epic undertaking - doesn't matter if it's the longest or not.
What I don't understand is why people buy their in-game cash from these outfits. I gave up on Eve a while ago because I didn't have the time to play it (oh, and I realised that it was basically a dice-rolling game, and I could spend hours with a D12 and a bottle of Laphroaig and have at least as much fun). Prior to that, though, I had the skills, but not enough cash to buy the toys. So I did what lots of people do - bought a Game Time Card (with cash, from an approved web site), and sold it in-game for a pile of ISK.
It's safe, it's approved by CCP, and the exchange rate isn't horrible. It'll cost you a little more than going to the Russians (or whomever), but it won't get you banned if you're nicked.
Oh, and a word to anyone thinking of starting Eve: Sell bullets. You buy a gun once - you buy bullets every time you fire it. And I had (still have, I guess) some pretty big guns :D
But a lot of people (myself included) couldn't give a damn about Java and Oracle at the server end. Oracle just seem to be very good at bumping their gums at everybody.
The big strength that ARM has for a prospective buyer is power consumption (really, nobody buying servers cares if their CPU can be made by one company or a dozen so long as they work and they can replace them when they don't). If I put my gear into a datacentre, I'd probably only get a rack half-full before being levied with extra power charges. That's on (mostly 2U) Xeons. If ARM can get the work done on half the power, that's a rack filled all of a sudden. And that's money saved every month on power. Once that saving starts to approach the cost of moving your legacy app from Windows / Oracle / Java / whatever, migrations become feasible.
Full disclosure - I was one of the guys with an Archimedes at home. I had hours of fun hand-coding ARM assembly, and learning to tweak individual clock cycles out of my inner-loops. I would *love* to see ARM succeeding in the datacentre. But realistically it's not going to happen overnight. My next server hardware cycle will be x64 - no question. The one after that? That's a tough one to call right now. And I'm in the Wintel camp - a migration isn't just a recompile for me, but a whole knot of licensing.
Hacked AppleTV with XBMC on it. Boots in about a minute. After that, if I want a movie it starts in moments whether it's a DVD or a BluRay rip. All my discs are in the attic as well, keeping them away from little-people fingers. I don't understand why people are forced to wait so long on a bloody movie starting. It's like going back to the days of getting a video tape from the rental place and realising the previous lazy sod hadn't rewound it.
Except, with the VHS/Betamax you could at least wind past the crap at the start...
With your quantum computer you can throw in your fiendishly complex problem, and it will calculate it in a flash. But then it takes forever to actually read the answer... Wouldn't it have been just as good to skip the quantum computer in the first place?
Okay, so these guys have managed to speed the whole thing up, by taking a guess effectively. Just guess the answer in the first place and save yourself the hassle. That leaves you with a load of time on your hands, and a grant for a q-computer to spend. Pub o'clock?
I used VMWare GSX -> Server 1.0 for years. It was fantastic. Never got to grips with v2 though. Big bits of it just seemed awfully clunky. Perhaps you should look at "free" ESXi? Of course, I don't know your circumstances, but I'm pretty sure ESXi will do everything Server2 will without having to pay for a license.
The only real sticking point is hardware support, but so long as you're using mainstream server hardware you're unlikely to hit any real issues there. The biggest thing I had to do was upgrade the firmware on my iSCSI storage. Yes - it exercised my sphincter nicely, thanks for asking...
Can't see the medical community being too happy about the implied messages of "we're going to replace every doctor in the world for a piddling 10mill". Dollars, even - not pounds!
Expect many entries from pharmaceutical companies that end up flashing "we've got a pill for that" on the screen...
It's been years since I've watched Star Wars. Used to love it, but it's been ridden roughshod over so many times in an attempt to squeeze every last penny from its value. I honestly think Lucasfilm sees the movies' fans as piggy banks to get shaken up every time they need a quick cash injection.
I for one will not partake, *unless* there's an option for the original unadulterated versions to be shown. I want "A New Hope", "The Empire Strikes Back" and "Return of The Jedi". I want them cleaned up, but unedited from their original cinema releases. I don't want spurious CGI. I don't want "deleted scenes". I don't want "extra footage", "extra action", Jar Jar Binks, Greedo shooting first or *any* of that shit. Clean it, give me a DTS soundtrack, and leave the rest of it the hell alone. That's the only way any of my money will go anywhere near the shrivelled teat of Star Wars now.
Annoyingly, the original 7200.11s worked great in the SCSI->SATA enclosure I bought them for. Then they "fixed" the firmware and screwed them up. I got something like 40MB/sec on RAID0. Had to finally bite the bullet and buy the Constellations listed on the HCL, and suddenly went up to 180MB/sec.
I liked the rushed 7200.11s. Though in honesty, they've almost all failed where I still have some 7200.10s performing sterling service.
Fail - because that's what most of them have done by now...
They're left on standby. The system is still powered up insofar as the power switch is lit up, and the iLO board is still running. I know that the PSUs in the DL380G5s that I run make a nasty buzzing noise when the systems are in standby. They settle right down when powered up (and it's not distressingly loud anyway), but it's there.
Glad all of mine stay on!
Indeed it is, but I can point to one of my racks, which pulls 2kW. There are 4 Xeon 5450 dual-socket Proliants in there, and each of those chips has a TDP of 80W. Well, that's 640W right there, and since they're VM hosts I do try to run them hard.
So if 25% of my power budget is CPUs, and I can maybe save 50% of *that* by going ARM, that's a significant saving. Then there are X5400s that pull *twice* the power of mine, but the rest of the server is the same. Sure, there'll have to be a significant speed boost for an ARM to keep up with that, but the potential for energy saving is very real.
(Oh, that cabinet is also full of drive enclosures, tape library, switches, blah blah - it's not just those 4 servers.)
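The back-of-envelope sums, for anyone checking my working (the 50% ARM saving is pure speculation on my part, not a benchmark):

```python
# Rack power arithmetic from the post above. The rack draw and TDPs
# are the real figures; the 50% ARM saving is a guess, not a measurement.
RACK_POWER_W = 2000              # measured draw for the whole rack
SERVERS = 4                      # dual-socket Proliants
SOCKETS_PER_SERVER = 2
TDP_PER_CPU_W = 80               # Xeon 5450

cpu_power_w = SERVERS * SOCKETS_PER_SERVER * TDP_PER_CPU_W   # 640 W
cpu_share = cpu_power_w / RACK_POWER_W                       # ~0.32 (I rounded down to ~25%)
arm_saving_w = cpu_power_w // 2                              # 320 W per rack, hypothetically

print(f"CPUs are {cpu_share:.0%} of the rack; potential saving {arm_saving_w} W")
```

That 320W is per rack, every hour of every day, before you even count the knock-on saving in cooling.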
If you're going for dense servers in a rack, the benefits of saving a little power on each server can be significant. I don't know if you've dealt with colocation, but for that you're restricted on the power consumption per rack. If you can save some power you can fill the rack some more. If it's your *own* rack, you save once on the electric bill, and once again on the cooling bill.
Legacy stuff needn't be a big issue. Plenty of Linux distros run on ARM, so you're covered for those applications. Windows is coming for ARM (apparently, though I've heard that in the past), and if that means Windows Server, that means SQL and Exchange etc etc. Who cares what's happening inside the box so long as what goes in and what comes out is the same.
Migration could be a pig - I don't know if there would be endian issues with the data, and I'm too lazy to look.
Yeah - took as long as normal for AnyDVD to skip over the BD protection, and MakeMKV to copy off the tracks I wanted. Can't be bothered with those silly plastic discs in the living room...
The options seem to be:
BD (2D) + DVD
BD (3D) + BD (2D) + "digital" nonsense
BD (2D) + BD of Tron
I would have liked BD (3D) + BD (2D) + DVD, or better the 2x BDs + BD of Tron 1.
Still, big thumbs-up for the hot-chick dispenser room. Trying to get the wife to let me install one!
It's an AGR - the cooling's a lot cleverer than these boiling-water reactors. I'll not suggest that it's faultless, but the steam explosions (Chernobyl) and hydrogen explosions (Fukushima) are avoided by using a gas as your coolant. (Carbon dioxide off the top of my head - oh no, my carbon footprint!) But that's the difference between a 25-year-old reactor and a 40-year-old reactor. I dread to think how hard it is to break a new design...
I've already said it to people I work with. If Torness starts to go awry, I'll be driving over and camping by the gates just to prove a point. Still, it's not in an earthquake zone, nor a tsunami / tidal-wave zone, it's at the top of a cliff, and one of the most significant safety events since it was built was an RAF Tornado breaking down in the sky above it, with the pilot turning away from the plant before ejecting.
I guess, as Lewis says, we should have got rid of those pesky Tornados!
Is it just me who thinks a diesel/electric system would be best? Have enough batteries to cover (say) 20 miles, and a diesel engine to recharge them when they drop too low. All the range you need, your engine is running optimally (because it doesn't have to be throttled up and down, and can be tuned to a very narrow rev band), and you could even plug it in overnight to top the batteries off for the morning.
After all, it's not significantly different to a Prius or something - you're just disconnecting the diesel from the drivetrain. Maybe there are too many losses in the system to make it viable, or maybe it's a marketing thing - I don't imagine Joe Bloggs would be pleased with a car that sounds like a site generator.
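To put numbers on it, here's a toy sketch of that series-hybrid idea. The battery capacity, drain rate and generator output below are all made-up illustrative figures, not anything from a real car:

```python
# Toy series-hybrid ("range extender") sketch: the battery drives the
# wheels, and a fixed-output generator only runs when charge drops below
# a threshold. All numbers are illustrative assumptions.
BATTERY_KWH = 6.0          # enough for ~20 miles on its own
DRAIN_KWH_PER_MILE = 0.3
GEN_KW = 15.0              # generator sits at one fixed, efficient operating point
LOW, HIGH = 0.2, 0.8       # switch the generator on/off at these charge fractions

def drive(miles, hours, soc_kwh=BATTERY_KWH):
    """Simulate a steady drive; return final charge (kWh) and generator hours."""
    gen_on, gen_hours = False, 0.0
    step_h = hours / miles                 # hours per mile at a steady speed
    for _ in range(int(miles)):
        soc_kwh -= DRAIN_KWH_PER_MILE
        if soc_kwh < LOW * BATTERY_KWH:
            gen_on = True
        elif soc_kwh > HIGH * BATTERY_KWH:
            gen_on = False
        if gen_on:
            soc_kwh = min(BATTERY_KWH, soc_kwh + GEN_KW * step_h)
            gen_hours += step_h
    return soc_kwh, gen_hours
```

With those numbers, over a steady 100 miles in 2 hours the generator only cuts in after the first 16-odd miles, then just ticks over holding the charge level - which is the whole point: a narrow, efficient rev band.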
OCZ have changed the memory chips to a smaller process (which is becoming more prevalent anyway), and because they are apparently less stable than the larger-feature chips, they have to allocate more space for RAISE (reserved space to compensate for blocks wearing out). At the same time, I was reading that the SandForce controller they use also compresses the data before writing it, to reduce wear on the drive some more (though I can't say for certain).
The big issues, though, are that (1) each memory chip is a larger capacity, so there are fewer of them, and therefore there are fewer active data paths, in turn slowing data transfer; and (2) with the greater reserved space, you can't match a new 25nm drive to an old 34nm drive in a RAID set. Oops.
Still, I've just bought my 120GB Vertex2 (25nm), popped Win7 Pro onto it, and I get a score of 7.1 - that's constrained by my Phenom II, the SSD pulls 7.8.
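If you want to see why both points bite, here's the rough arithmetic. The chip sizes, reserve fractions and per-chip speeds below are illustrative guesses for the sake of the sums, not OCZ's actual figures:

```python
# Illustrative 34nm vs 25nm comparison. All figures are assumptions
# chosen to show the shape of the problem, not real OCZ/SandForce numbers.
def drive_profile(raw_gib, chip_gib, reserve_fraction, mb_s_per_chip):
    chips = raw_gib // chip_gib                    # bigger chips -> fewer of them
    usable_gib = raw_gib * (1 - reserve_fraction)  # more RAISE -> less usable space
    raw_throughput = chips * mb_s_per_chip         # fewer chips -> fewer parallel paths
    return chips, usable_gib, raw_throughput

old_34nm = drive_profile(128, 4, 0.07, 20)   # 32 chips, ~119 GiB usable
new_25nm = drive_profile(128, 8, 0.13, 20)   # 16 chips, ~111 GiB usable
```

Half the chips means half the parallelism, and that usable-capacity mismatch is exactly why you can't drop a 25nm drive into a RAID set alongside a 34nm one.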
I bought an OCZ Vertex2 on Monday, and this comes out today...
Still, for what it's worth, any one of these devices will transform a reasonable computer. Mine responds instantly now, whatever I'm doing. 120GB SSD for OS and some apps, and 1000GB spinny-thing for rest of apps and data. Well, the data that doesn't live in my server.
And, @JDX, almost all SATA SSDs are 2.5". The Vertex2 is available in 3.5", but that seems pretty pointless. The problem is not the space inside the box, but the cost of the parts. You can fit enough parts into a 2.5" SSD to make reasonable people choke at the price - you just don't need the 3.5" format. So they make them 2.5" and pack an adapter - that way you can toss it into a laptop or desktop.
And given the servers / SAN in here, hard discs are going 2.5" across the board anyway.
I think we currently generate around 20% of the electricity in the UK from nuclear sources. Upping that to 40% will greatly reduce our reliance on hydrocarbon fuels, so that's a good start. Surely, though, there's mileage in pumped storage systems. Increase the base load (nuclear is perfect for this) beyond the minimum actually consumed in the country, and then use that extra generation to pump water into reservoirs overnight (or whenever there's surplus). This gives you an excellent store of power which is (a) clean, (b) cheap because you're using "spare" electricity to do it, and (c) very responsive to peaks in consumption.
It takes seconds to open the tap on a hydro-electric generator, so this is an excellent resource to have. In fact, it could be used to handle the lulls in wind generation (or smooth it out so that less responsive generators like gas / oil can react). Of course that depends on whether wind is actually economical - who knows...
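As an order-of-magnitude sanity check, the physics is just m·g·h with an efficiency knocked off. The reservoir volume, head and efficiency here are illustrative numbers plucked for the example, not any real plant's spec:

```python
# Recoverable energy from a pumped-storage reservoir. All inputs are
# illustrative assumptions, not real plant figures.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def stored_gwh(volume_m3, head_m, round_trip_eff):
    joules = RHO_WATER * volume_m3 * G * head_m * round_trip_eff
    return joules / 3.6e12          # 1 GWh = 3.6e12 J

# ~7 million m^3 dropped through ~500 m at 75% round-trip efficiency:
print(stored_gwh(7e6, 500, 0.75))   # roughly 7 GWh - a decent chunk of an evening peak
```

A few GWh of that on tap, spinning up in seconds, is exactly the sort of buffer that makes an over-provisioned nuclear base load (or lumpy wind output) workable.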
As for exporting power, we export to France, Holland and Ireland I think. Geographically, we're limited in what we can do though. The peak will (obviously) chase the sun around the world, and there's nobody West of us that's close enough to realistically sell it. Storing it for the next day's peak, though - that's a good plan.
If you've got your "cloud" platform hosted by VMWare, and your backup hosted by Mozy (because of the favourable cost and tidy integration, see?), can you really be sure that they're taking responsible care of your data?
What if the live data and backups are in the same datacentre? You lose the datacentre (connection issues, power failure, flood, train crash, I don't know), and all your data is up in smoke. And of course this is the kind of area where they could save money by offering the backup as an add-on service.
I'd be wary...
I'll get in early with the obvious comment.
*I* don't have an HDTV or a PS3, and I'm a hard-working member of society (well, when I'm not on here). So I'm busy paying for a bunch of jailbirds to have the toys I can't afford, as some kind of punishment?
We've all been waiting for ARM netbooks for years here on the Reg (don't even pretend you've not!), but every time someone comes close to shipping one it suddenly disappears without a trace. The cynic in me reckons that could be Microsoft applying a little OEM pressure to lock them into Windows, where the ARM netbooks (you know, with the week-long battery life) would tend to run Linux (and would be incapable of running Windows).
So, if Microsoft ship Windows for ARM, their hardware-manufacturing "partners" (or lackeys if you're that way inclined) end up building ARM computers, and the netbook is a perfect device for ARM. If they're building the hardware anyway, perhaps they could ship Windows and Linux variants. If they won't, I don't imagine it'll take a particularly long time before someone crams Ubuntu (or similar) on there at home. Finally, freedom from the mains socket!
If Microsoft get it right (not a small if) then we could have decent Windows netbooks. If Microsoft produce another travesty like Bob(TM), we'll just end up with decent Linux netbooks. Everyone's a winner, baby!
It's virtually (see what I did there?) impossible to determine the OS load without directly polling the people implementing them. After all, if I go shopping for Proliants tomorrow, I'll stick XenServer on them. Does that put them in the Linux bucket (it's based on CentOS after all), or should I count up each instance on Windows / CentOS that's running virtually? And what if I'm not actually adding new VMs but just doing a hardware refresh or increasing headroom?
For what it's worth, I was one of those suckers who bought a plasma screen 9 years ago so you lot could have them cheap as chips now. A nice Panasonic display that does all of 480p. And I've started buying BluRays because they're getting to the price point now where I'm prepared to pay the premium over DVD to "future-proof" (hah!) my movies. At least for another 6 months.
Anyway, point is that the BDs look better on my 480p screen than the DVDs. I haven't compared like-for-like, but the BDs look much clearer and fresher. The DVDs just look a little muddy by comparison. Of course that's got *nothing* to do with the resolution (it's an SD screen after all), and everything to do with the encoding and the fact that there's a shedload more space on the disc. 30GB for a movie instead of 5GB? I'd hope it looked better. And to my eyes, on my equipment, it does.
Don't believe me? Don't care. This is what works for me, and once I have an HD screen to go with it I'll be a happy lad.
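For the sceptics, the space argument is just bitrate arithmetic. This assumes a two-hour film and the whole disc given over to video, so these are ceilings rather than real encode rates:

```python
# Average bitrate if the whole disc were video. Runtime and disc sizes
# are round illustrative numbers, not figures from any particular release.
def avg_mbit_s(disc_gb, runtime_hours):
    bits = disc_gb * 1e9 * 8                 # decimal GB -> bits
    return bits / (runtime_hours * 3600e6)   # -> Mbit/s

print(avg_mbit_s(30, 2))   # ~33 Mbit/s for the BD
print(avg_mbit_s(5, 2))    # ~5.6 Mbit/s for the DVD
```

Six times the bits for every second of picture - no wonder the DVDs look muddy next to it, even downscaled to 480p.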
I spent weeks trying to faff about with a Hisense 1080p video player and MediaTomb because the SMB browsing was clunky (if I'm generous). It was soul-destroying trying to get the thing to work. And then it'd play music but not videos, or some videos but not others (despite the fact that it'd play the whole lot via SMB). Tossed the thing onto eBay, got an AppleTV off eBay almost by return, cracked XBMC onto it, and I now have 1080p playback on a nice little box that just plays everything I chuck at it. And I can retire my old XBox that's been doing exactly the same job only without HD support for years!
I want DLNA to work, but whilst there are so many standards it's just unlikely. If they could make the formats used on DVDs and BluRays a mandatory part of the spec it'd be a start. Then they can spend years arguing over the containers...
I used to write programs to animate Mandelbrot sets by shifting the starting parameters. Got my old Archimedes to render them in real-time with a mix of fixed-point maths, and laboriously-optimised hand-coded assembly. Wish I still had that program - my inner-loop was a work of art.
The sad thing is that at the time I didn't really understand it. I daresay I could go back to it now and make more sense of the imaginary numbers and such, but at the time it just made pretty pictures.
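For the curious, the fixed-point trick looked something like this - sketched here in Python rather than ARM assembly, with 16.16 integers standing in for what lived in the registers:

```python
# Mandelbrot escape-time iteration done entirely in 16.16 fixed point,
# the way you would on a machine with no FPU. A sketch of the technique,
# not a reconstruction of the original Archimedes code.
FRAC = 16                      # 16 fractional bits
ONE = 1 << FRAC                # 1.0 in fixed point

def to_fix(x):
    return int(x * ONE)

def fmul(a, b):
    # Multiply two 16.16 values; the product has 32 fractional bits,
    # so shift half of them back off.
    return (a * b) >> FRAC

def mandel_iters(re, im, limit=50):
    """Iterate z = z^2 + c in fixed point; return the escape count."""
    cr, ci = to_fix(re), to_fix(im)
    zr = zi = 0
    for n in range(limit):
        zr2, zi2 = fmul(zr, zr), fmul(zi, zi)
        if zr2 + zi2 > 4 * ONE:            # |z|^2 > 4 -> escaped
            return n
        zr, zi = zr2 - zi2 + cr, 2 * fmul(zr, zi) + ci
    return limit
```

All adds, multiplies and shifts - which is exactly why it could run in real time on hardware with no floating point, once the inner loop was hand-tuned.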