92 posts • joined 14 Mar 2010
As above - Teradici PCOIP is very good for CAD workloads.
I agree with msage re. Teradici - it is a very good technology.
They also have the Apex card for virtualised servers.
Re: Obviously not
I agree with Nigel 11. There are problems where you need large amounts of RAM - as you say, in engineering simulations where you have very fine meshes. Or in bioinformatics.
Re: Obviously not
" But but I want a machine with 64TB of memory :( "
Buy an SGI Ultraviolet. Simples.
Seriously - you can spec one of these with 64 Tbytes of memory.
Re: "He used my access to make you a domain admin?!"
You can configure Linux to recognise the Bluetooth ID of your mobile.
If the mobile moves out of range the screen is locked.
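A minimal sketch of the idea, assuming BlueZ's l2ping and a systemd desktop (the MAC address, timings and lock command below are all illustrative placeholders):

```shell
#!/bin/sh
# Proximity-lock sketch: poll the phone over Bluetooth, lock the screen
# when it stops answering. All names below are illustrative.
PHONE="${PHONE:-AA:BB:CC:DD:EE:FF}"     # your phone's Bluetooth MAC
PING="${PING:-l2ping -c 1 -t 2}"        # succeeds if the phone answers (BlueZ)
LOCK="${LOCK:-loginctl lock-session}"   # or: xdg-screensaver lock

check_and_lock() {
    if $PING "$PHONE" >/dev/null 2>&1; then
        return 0        # phone in range - leave the session alone
    else
        $LOCK           # phone gone - lock the screen
        return 1
    fi
}

# Poll every 10 seconds when started as a daemon:
if [ "${1:-}" = "--daemon" ]; then
    while true; do check_and_lock; sleep 10; done
fi
```

In practice you would pair the phone first (bluetoothctl) and start this from your session startup scripts.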
Re: I've been helping friends (and businesses) upgrade from XP to ...
"(my preferred versions of openSUSE at time of writing, for example, would be 11.4 or 12.2 but both are coming to the end of their lives now) it is common to find that distros prefer you to keep up to date and provide little support for older versions."
Well - you really mean the 'community' distros like OpenSUSE and Fedora here.
And to be honest the line from openSUSE is that you can easily upgrade - just set your repositories and do a zypper dist-upgrade.
But perhaps of more relevance - check out the openSUSE Evergreen project.
That is keeping older distros alive by providing updates. So you DO have support for older versions.
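For the record, the upgrade dance is roughly this (the release numbers are just examples - check the openSUSE wiki for the current procedure before trusting any of it):

```shell
# Sketch of an openSUSE release upgrade via zypper (run as root; illustrative only).
zypper lr -u                                        # see where your repos currently point
sed -i 's/12\.2/12.3/g' /etc/zypp/repos.d/*.repo    # retarget the repos at the new release
zypper refresh                                      # pull the new repo metadata
zypper dist-upgrade                                 # do the actual distribution upgrade
```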
Re: Is that really the best place to build these things?
"At the levels of precision these colliders are operating, I'd have thought that building them in the Alps might introduce anomalies and gravitational distortions"
Well - the LEP accelerator was so finely instrumented that it detected earth tides for the first time.
There was a small change in the beam tuning noticed twice a day.
Investigations showed this was due to earth tides - like ocean tides, but the earth is moved (*)
(*) Yeah, yeah. Particle physicists make the earth move!
Re: Is the overlap of the rings part of the plan?
Well yes, you do have track junctions and marshalling yards.
CERN has an accelerator complex - the older, lower energy accelerators produce the original particles, which are then injected into the larger rings. So yup, you have a points system like a railway.
The SPS began operation in 1976 and is still used as the injector for LHC.
Re: Better response times than TCP?
Better latency than a TCP connection, by using Infiniband and RDMA.
You're looking at 1 microsecond latencies.
(Of course you can run RDMA over 10Gbps ethernet also)
Re: I'm wondering .....
"One would hope it would cost a bit less to maintain and run than actually sourcing a brand new one."
Well... maybe not.
As machines become older, it gets harder to source the parts - particularly DIMMs and CPUs.
So HPC manufacturers ask increasing amounts to keep older machines under maintenance.
Of course it is in the interests of manufacturers to sell you new machines.
In my experience it is the DIMMs which will go faulty most often on a machine such as this.
Re: serious question - not to be confused with earlier comments/screeds
"Except for the rather important question of "why didn't all this matter/antimatter just annihilate each other shortly after the Big Bang when it was all so close together,"
The reason is called CP violation - and is one of the reasons why high energy physicists study b quark decays so closely. (I studied high energy physics and was a member of a CERN experiment).
All particle interactions are invariant under the operation CPT (charge conjugation, parity and time reversal). So all reactions should run at the same rate, forwards or backwards.
However some reactions, such as b decay, exhibit an asymmetry under the combined CP operation. Given CPT invariance, that means they must also violate T - i.e. run at different rates forwards and backwards.
CP is (roughly) the operation that turns a particle into its antiparticle, so CP violation means that some particles and antiparticles have different decay rates.
I hope to goodness I have remembered the above correctly, and I stand to be corrected when a card carrying physicist comes along.
Re: red had does similar
Nate, you make a good point regarding the overhead that virtualization takes, which is decreasing.
For HPC, I believe that Docker will have a big future - packaging up containers to run a specific application with its associated libraries, and running them on VMs.
Also, HPC workloads perform best when you have CPU pinning - i.e. processes run on an allocated core and aren't moved around by the OS (you want all those caches filled nicely, not constantly being repopulated). I already run cpusets on an HPC system I manage, and see cgroups as an extension of that.
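As a rough illustration of pinning (the solver name is hypothetical, the cpuset paths vary by distro and kernel, and the cpuset part needs root):

```shell
# Pin a (hypothetical) job to cores 0-3 with taskset (util-linux):
taskset -c 0-3 ./solver input.dat

# Or carve out a cpuset via the cgroup filesystem, then put a shell in it:
mkdir /sys/fs/cgroup/cpuset/hpcjob
echo 0-3 > /sys/fs/cgroup/cpuset/hpcjob/cpuset.cpus
echo 0   > /sys/fs/cgroup/cpuset/hpcjob/cpuset.mems
echo $$  > /sys/fs/cgroup/cpuset/hpcjob/tasks   # this shell and its children now stay on cores 0-3
```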
Re: What about storing coldness in liquid nitrogen?
"That said, dumping nitrogen into the coolant reservoir might be an idea for an emergency "we need 120 seconds to shut everything down nicely" solution."
But you should have some sort of thermal monitoring anyway - hopefully shutting down automatically when the temperatures rise above a set threshold.
That's where old style mainframe 'halls' were good - high ceilings, lots of thermal mass.
BTW, Trox in the UK already produce rack doors cooled by CO2.
"or maybe the 'victorian' eggheads dont have a problem with sudden outage, interupted computation or loss of data. They can always go to the beach!"
Look at my comment re. UPS for the storage - that is very desirable and yes data corruption is not at all wanted.
But re. sudden outage, HPC jobs can and will have this. The job should write a checkpoint solution every so often (*) and could be re-run from the last checkpoint if it fails.
These workloads consist of simulations - if one of the blades running the simulation fails, the whole run is likely to stop anyway.
(*) that is an interesting problem in itself -and is one of the reasons HPC likes big fast storage.
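The checkpoint/restart pattern above is simple enough to sketch in a few lines of shell (a toy - a real code would write out solver state, not a counter):

```shell
#!/bin/sh
# Toy checkpoint/restart loop: do 5 units of 'work', checkpointing after
# each one. If the job dies and is re-run, it resumes from the last checkpoint.
CKPT="${CKPT:-checkpoint.dat}"

step=0
if [ -f "$CKPT" ]; then
    step=$(cat "$CKPT")        # resume from the last checkpoint
fi

while [ "$step" -lt 5 ]; do
    step=$((step + 1))
    # ... one unit of real simulation work would go here ...
    echo "$step" > "$CKPT"     # checkpoint: a re-run restarts from here
done
echo "finished at step $step"
```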
Remember - these are HPC systems. Hi Chris!
Typically they draw a large amount of power per rack - but jobs can be halted and checkpointed if you need to turn the system off. It is not critical that they are up 100% of the time.
(Making that clear - it is GOOD that they are up as close to 100% as possible, but it's not business critical and jobs can sit waiting in the queue to run later).
On an HPC system you would tend to be more concerned about having UPS for your storage and head nodes (login / provisioning nodes).
That said, a UPS does give you power smoothing, so for that reason there are UPSes on all nodes on the systems I look after. However we don't expect a long runtime - there is sufficient time to checkpoint jobs and shed the load by switching compute blades off.
Congratulations to the CentOS Team!
High capacity tape DRIVES aren't cheap - the economies of scale come in when you have a library with hundreds or thousands of tapes, most of which are sitting there consuming no power.
So you are using an expensive device to access multiple 'cheap' devices (not that LTO tapes are that cheap).
In the past, I would have said that the consumer equivalent for backups would be a writeable DVD - but hard drive capacity has of course far outstripped DVD capacity.
You are right though - there would be a market for a durable, high capacity backup solution for home use which could simply be parked on a shelf for years.
Re: I had the Sharp PC-1500 in 1992
I have the Casio equivalent - the FX-720P in my desk drawer.
Must get some batteries for it!
Someone else in the office uses his on a daily basis...
Alan, what flaws have you found in DMF?
It certainly satisfies the requirement for everything going to tape at least twice - you can specify that easily, and also look for any files which have somehow ended up on only one set of tapes. You can easily use disk as a cache layer (which I do).
Indeed, when the AWS announcement of virtual tape libraries on their Storage Gateway came out, it set me thinking on a configuration where you have a local tape library holding the primary copy and use Amazon Glacier for the second copy. Cost aside, you get disaster recovery.
Re: Brits forgetting their past?
Barrage balloons brought down aircraft because they held up a steel cable, with a weak link at the bottom.
If an aircraft struck the CABLE it would break off and drag the aircraft down.
I suspect an octocopter thing would simply bounce off any cable or balloon.
Sorry - don't mean to be all technical and snidey, and I've never even seen a barrage balloon.
Just think it is interesting to learn the real mechanism of how they worked.
Re: Someone's reinvented NUMA?
AC - thanks for that link to the UKUUG meeting!
wow - that's a bit of history. Look at the speaker list: http://www.ukuug.org/events/linux2001/speakers.shtml
Re: Why ?
Nigel 11, you have it right regarding NUMA systems.
And as you say quite run-of-the mill multi-CPU motherboards are already NUMA systems.
And there are much bigger NUMA systems out there!
Install the absolutely great tool 'hwloc' from the OpenMPI project
You can get a graphical display of how your system is laid out.
Assuming you are running Linux, install the 'numactl' package and use 'numactl --hardware' to see the layout.
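For example (package names vary slightly by distro, the app name is a placeholder, and the output obviously depends on your hardware):

```shell
# hwloc's lstopo draws the CPU / cache / NUMA topology of the machine:
lstopo                      # opens a graphical window
lstopo topology.png         # or renders the topology to a file

# numactl can print the NUMA node layout and memory per node:
numactl --hardware

# and pin a process to one node's cores and memory when you launch it:
numactl --cpunodebind=0 --membind=0 ./myapp
```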
"Using just (!) 21 SKA12KXs to reach an overall 1TB/sec of throughput."
If this seems a huge amount, look at the Square Kilometre Array telescope - which will generate terabytes of data per second.
Re: 61 cores meh
"In order to levellerage the raw power of this intel kit then you need a OS and apps optimised to run on these 61 cores, cerainly not something that is availible off the shelf."
Sorry to be a Linux fanboi (I am, actually), but Linux already runs on hundreds of cores on SMP machines.
Applications can already scale to thousands of cores - OK, I'll give you the 'optimised' point, but you already have applications running on multicore SMP machines.
I remember getting a copy of the Morris Worm in an email - yes I am that old!
It would either have been on an IBM Bitnet account or a DECNET email address.
Googling also proves that I might be remembering wrong - a REXX-based virus which affected BITNET preceded the Morris worm (writing viruses in REXX!).
and yeah, that is some Unix beard.
Re: A NAS by any other name...
" NAS is ... useful! These don't even have that - unless you have no directories on your filesystem and you name all your files like this:
Yes, but it is an object store. Those are unique IDs which identify objects, such as digital photographs (or whatever). The metadata about the photographs is kept in a database.
Why should we have meaningful filenames and a meaningful directory structure in this day and age?
For instance irods:
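As a toy illustration of the idea (nothing to do with how iRODS itself is implemented - every name here is made up): the data lives under an opaque ID, and the human-meaningful metadata lives separately.

```shell
#!/bin/sh
# Toy object store: objects are addressed by an opaque ID; metadata sits
# beside them (a real system would keep the metadata in a database).
put() {   # put <file> <metadata>  -> prints the new object ID
    id=$(uuidgen 2>/dev/null || date +%s%N)   # opaque identifier
    mkdir -p store
    cp "$1" "store/$id"
    echo "$2" > "store/$id.meta"
    echo "$id"
}

get()  { cat "store/$1"; }        # fetch an object by ID
meta() { cat "store/$1.meta"; }   # fetch its metadata
```

Usage: `id=$(put holiday.jpg "beach, 2013")` then `meta "$id"` - no directory tree or meaningful filename anywhere.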
Re: Approaching write once storage.
"Which some goits translate as "in their original data format and media" - which means I have ~2000 first generation Exabyte cartridges around that said Goits won't let me throw away, despite not having an Exabyte tape drive to read the damned things (The data all fits on one LTO5 tape, of which there are several copies)"
Agreed. I went through a transition from LTO3 media/drives to brand new LTO5 about two years ago - happily it was remarkably easy on our particular HSM setup (SGI DMF).
But I do agree - it's the lack of functioning read devices which will render data unusable.
One would hope (yeah, I am laughing hollowly too) that this new generation of object stores would support transparent migration onto new technology, whether that is spinning disk, tape, solid state or whatever. After all, you just have to move the object, right?
And yeah, I don't believe it either, but you can hope.
Re: Approaching write once storage.
Herby, you have it.
Also consider tape as storage for this use case - cheaper and doesn't need power till you come to read it.
Also, if I'm not wrong, the UK research councils say scientific data is to be kept for ten years.
That's a LOT of data.
Object stores are the future
Object stores are the future.
Scientists and engineers care about their DATA.
They do not care a jot about IT types rattling on about LUNs and SANs and choice of RAID levels.
Re: Not totally convinced
You don't back up very large data stores (i.e. petabyte and larger collections of data). You keep two copies of it.
You cannot back it up. (OK, technically you COULD - but you might as well be making that second copy.)
Re: interesting, but ...
As Dave says, Infiniband already has low latency.
I wouldn't know the exact figures for these particular Mellanox cards.
Also interesting is that you can extend Infiniband links over campus and 'metro' distances using active optical cables. Look up 'Luxtera'.
Re: What a truly advanced civilization would do
"Of course, the proper fix would be to adjust the rotation of the Earth to stay in sync with the atomic clocks!"
Easy-peasy. Take some of those surplus Russian atomic bombs to the equator, up a high mountain. Maybe Mount Kilimanjaro? Set them off.
Job's a good 'un.
See Project Orion for references.
I say scariest - by that I mean scariest at home.
A real scary incident: when I brought an Oracle RAC cluster online at UMIST, I blew a 100 Amp fuse.
The sparky said the fuse had actually caught fire.
Cue an entire machine room filled with bleeping alarms and scurrying techies.
Yes, their infrastructure had real (very big) fuses and not circuit breakers.
Scariest tech thing that happened to me from a power cut was when I had an old Epson inkjet printer.
Power cut at 2am, in the wee silent hours. Power comes back on and the printer runs a self test, which creates a hell of a racket. Had to peel myself off the ceiling, I jumped so high in fright.
Re: They had a good small device
I had a Dell Streak. Marvellous device - well ahead of its time.
Sadly the screen cracked, and it is no more.
A good replacement is the Samsung Note.
I'm not sure how well the screen copes in bright daylight. A colleague similarly wants a device for use in gliders. Do you know what the specifics of the Streak's screen were which made it good for daylight use?
Re: New trans-oceanic cables in use?
Gigabit over Cat5/6 copper may have distance limitations of about that (I stand to be corrected).
For 10Gbps (never mind 1Gbps) over fibre it is circa 300 metres for short range optics, and a lot, lot longer for long range. And that's 10Gbps I'm talking about.
(Off the top of my head 10Gbps/Infiniband over CX4 copper is limited to 25 metres)
Guess who had his head in wiring cabinets this morning?
You can now buy active optical cables to extend Infiniband across campus distances.
Take a look at SuperJanet, Internet II
or if you are in the movie industry Sohonet http://www.sohonet.com/
Re: Yeah OK, maybe the number of plaudits are being a bit overdone
Engineering pays the big bucks? News to me.
I have a PhD in physics and I work for a well known engineering company.
I get nowhere near big bucks. I'm laughing hollowly.
Talking about wind tunnels, I was once in the machine room of a UK aerospace company (well, there are not many to choose from...)
I noticed a VAX sitting there quietly - still in use as a data acquisition machine for the engine test stand I was told. A few years ago now I admit.
My first machine
I may have said this in an earlier article.
First machine I ever used was a PDP-11/45 in my father's research unit.
Learned FORTRAN programming at a very tender age (which may explain a lot about my subsequent career, and programming!)
Makes me feel good though - who knows I might still have a job at retirement age resurrecting those skills!
Re: Commodity systems demand a strong, free OS
95% of Top 500 systems run Linux in the list announced at ISC.
I agree with you though - the SGI UV 2000, for instance, can run Windows.
Re: Fault finding
I slightly disagree with the point about Linux not having comparable tracking of memory errors.
I do agree that generic x86 hardware will never come close to having that grade of error logging.
However, my systems have ECC memory and log memory errors quite well.
That's sad news.
VMS was really good - as the article says they clustered together easily.
When I was a graduate student at CERN my experiment used VAX clusters - we got to the biggest size you could run (somewhere near 120 machines if I'm not wrong) and then started on the next!
In the later days we were using Alphas in the clusters also.
Wrote my thesis on a Vaxstation 3000. Sniff. I still have it on a TK50 tape somewhere!
Re: sysadmin that monster
Morten, I agree with you.
I work in HPC on current SGI systems. I really don't think systems like that ever run on massive numbers of cores except for those hero runs to get the HPL number. But I wouldn't know.
And yes - checkpoint and restart. The more blades you have in the cluster, the more likely one will fail during a computation.
Good idea - good comms
This is a damn good idea.
The Stratford site has excellent transport links of course (including the International station).
Also, very good comms were put in for the Olympics - I remember meeting two BT techs happily terminating rolls of Cat5 out in the sunshine in the Olympic Park during the Games.
I thought that the Olympic Press Centre was slated for use as high tech offices anyway?
Developers developers developers
Last year Google announced 400 million Android activations; now it's 900 million. "We couldn't have got there without developers."
Did he bound around the stage screaming Developers! Developers! Developers! when delivering that statistic?
Seems like a missed opportunity....
Re: Welcome home Commander.
What you said.
Reawakened my interest in space.
Beer, as Cmdr Hadfield hasn't had one in six months.
Re: Must ask
Chickpeas? The bloke is living inside a converted plastic water tank, which will be almost airtight. And you expect him to live off CHICKPEAS? It's dangerous enough living on storm-lashed Rockall without adding the risk of suffocation.
Storage California - you can Check Out but you can Never Leave...
Regarding your point about backups, you don't back up huge amounts of data in a big store like that.
You keep two (or more) copies, hopefully in distinct locations.
Sorry if that is me appearing to be aggressive - I run an HSM system, and backups consist of backing up the 'stub' files on disk. You then keep two copies of the data on the slower tiers.
Sorry to say it - this is a pretty scrappy article. Not well thought out.
You leap about between discussing object stores, and then comparing them with the underlying technology - you mention 'tape' several times. The actual mechanism for storing the data is separate from the data.
I was at a talk recently by DDN and was very impressed by their Web Object Store.
Similarly a lightning talk by Hitachi at Cloudcamp on object stores. It is an idea whose time has come.