This is a no-brainer - I wonder why you even bothered to write this in a rag for professionals. More appropriate for the back pages of the Beano.
So, you’ve hunkered down and finally completed that online course on machine learning. It took weeks. Now, you have all sorts of ideas running through your mind on developing your own intelligent code and neural networks. You assume you'll have to fork out a considerable wedge for a decent GPU-powered number-crunching rig, …
A second-hand laptop - anything that can run PuTTY and the like over a 4G connection to the web.
Learn some hacking theory, practice your skills and then hack your local university, or any other large institution with reasonable processing power and backbone connection.
Fumble away, as all and many might do. Create the next Skynet AI, call it a religion, rewrite history as you see fit, buy an island protected by the eponymous laser-equipped sharks, and Bob's your auntie.
Cost of the operation: a small farthing or two. You've got to be a little bit crazy, but that hasn't stopped other world leaders.
Alternatively, forget AI - it's media hype for "we've got a lot of data but haven't yet found any use for it other than a new kind of snake oil".
Sometimes I think that "we" are already "Artificially Intelligent": think about it...
My only concern about this plan is that it has no recreational element.
At night, start a club for people who like to fight recreationally. Only rule is no one must talk about fight club...
Use blockchain and AI to build a new economy. "Replace" the historical systems...
It'd be interesting to see what performance you get using a 2080Ti, given it has Tensor Cores on the die.
You could say the same about the potential invoice; I think the price of the latter is still pretty high. Not sure if it'd negate the advantage of building your own instead of renting, but it'd add to the cost some.
The only problem with either option as a Linux user is the huge list of annoyances and issues with the proprietary Nvidia driver.
"You could say the same about the potential invoice; I think the price of the latter is still pretty high. Not sure if it'd negate the advantage of building your own instead of renting, but it'd add to the cost some."
I believe RRP for the 2080Ti is $1200, so it would only add about a week to the payback time. Definitely nowhere near as bad as trying to stick a "datacentre" GPU in there.
"The only problem with either option as a Linux user is the huge list of annoyances and issues with the proprietary Nvidia driver."
Can't argue with the fact that it's proprietary, but AMD's open drivers were a wreck two years ago. Maybe things have changed with Vega and Vulkan, but I daren't find out.
With a FirePro I couldn't get stability with any driver on Debian 8, LFS and, finally, Ubuntu LTS. Multi-monitor support over DisplayPort was also horrible with my NEC monitor, an LG 4K monitor and a Samsung 1080p.
To be honest, unless I use an Nvidia GPU AND their proprietary drivers, the entire experience tends to resemble hell. It's sad, but that's where I'm at.
But no one actually works this way. You do all your data prep and model development first (which takes the vast majority of your time, during which your GPU is idle), then rent the GPU just as long as you need it to train the model, then give it back. The number of people really keeping a GPU hot 24/7/365 is minuscule and if you did that with this rig you would need much better cooling. Most desktop GPUs are mostly idle most of the time, even for researchers.
"But no one actually works this way. [...] The number of people really keeping a GPU hot 24/7/365 is minuscule and if you did that with this rig you would need much better cooling. Most desktop GPUs are mostly idle most of the time, even for researchers."
But at $3/hr, the numbers say that if you plan on renting a GPU for more than 1000 hours, in total, you should just buy your own. And then you get a rig you can just about play Crysis on at the weekend.
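That back-of-the-envelope sum is easy to sanity-check. A sketch, using the thread's own figures (roughly $3/hr for a rented cloud GPU versus a ~$3,000 rig) - the numbers are illustrative, not a quote from any price list:

```python
# Break-even for buying vs renting a GPU: the number of rented
# hours whose cumulative bill equals the one-off price of the rig.
def break_even_hours(rig_cost, hourly_rate):
    """Return the rented-hour count at which renting costs as much as buying."""
    return rig_cost / hourly_rate

# Thread's figures: ~$3,000 rig vs $3/hr cloud GPU.
print(break_even_hours(3000, 3.0))  # 1000.0 hours
```

The same function also shows why a 2080Ti at its $1,200 RRP only shifts the break-even point modestly: `break_even_hours(1200, 3.0)` is another 400 rented hours.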
"That’s a lot of hours - an hour every working day for 4 years. The only way this makes sense for a hobbyist is if they are gaming on it too."
Of course, if you are a hobbyist who doesn't plan on using it for more than an hour at a time, why not just use your normal computer and let it run overnight?
As ever, buying into the cloud only makes sense if you understand what you're paying for. A good chunk of my business runs on a second-hand blade in a spare room. Running the equivalent workload on AWS would have cost a six-figure sum over the years I've used it - but as I don't need super-low latency, failover or load balancing, a £400 machine turns out to be a bargain.
Unfortunately, many businesses (falsely) assume cloud services give them all sorts of safety nets where sometimes on premises kit would be entirely satisfactory.
Cloud's great if you have a very variable workload. Imagine running an online retailer on a single rented server. You get a review in the Sunday Times and your website crashes. Lots of lost business. Cloud would be able to cope automatically.
For anything else, it's just expensive.
For the cloud to "automagically" fail over, you would have had to prep the service upfront so that it could automagically fail over/scale. So if you were already planning for that, just buy a few more blades and do it that way.
It would still be cheaper. Cloud is not good for everything or every scenario. In fact, most small-business scenarios I can think of would be better off with their own hardware; I feel a lot are sold on the "cloud is better" premise even when it's more expensive or specced well above their needs.
People need to look at cloud computing much like power generation. Cloud computing - i.e. someone else's computer(s) - is the peaking plant, whereas your own machine(s) are base load. You activate the peaking plant when demand becomes too great for your base-load generation to cope with. Examples would be sales periods for retailers, quarterly reporting for financial institutions, overnight processing for trading houses, etc.
I cannot see how running the same hardware capability full time, when it is owned by someone else, can be cheaper than owning and running it yourself. It is op-ex vs cap-ex. They may well be able to buy that hardware cheaper thanks to volume discounts, but that saving is their additional profit, not your cost reduction.
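The base-load/peaking-plant analogy can be put into numbers. A minimal sketch with invented figures (the capacities, monthly cost and cloud rate below are all assumptions for illustration): owned kit serves steady demand at a fixed monthly cost, and anything above that capacity "bursts" to cloud at an hourly rate.

```python
def hybrid_monthly_cost(hourly_demand, base_capacity, own_monthly, cloud_rate):
    """Own hardware covers demand up to base_capacity ('base load');
    demand above that is served from the cloud ('peaking plant')."""
    burst_unit_hours = sum(max(0, d - base_capacity) for d in hourly_demand)
    return own_monthly + burst_unit_hours * cloud_rate

# Invented example: steady demand of 4 units for a 720-hour month,
# plus a 10-hour spike to 10 units (say, a sales period).
demand = [4] * 720 + [10] * 10
print(hybrid_monthly_cost(demand, base_capacity=5, own_monthly=400,
                          cloud_rate=0.5))  # 425.0
```

The point of the exercise: the cloud bill only covers the 50 burst unit-hours of the spike, not the whole month, which is exactly the peaking-plant role.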
Or old HP Z series hardware. I have a second-hand gaming box made out of a Z420 with a GeForce card that cost £350. I added an SSD to supplement the 2TB spinny that it came with, and a joystick, for about another £150. It runs Kerbal Space Program swimmingly. Dell and Lenovo make similar kit that turns up on eBay as well, but HP is really the major player in this space.
Chen has done the sums, and, apparently, after two months that will work out to being ten times cheaper.
That sounded dodgy to me, so I read the paper. He has done the math, and it doesn't say what you said it says.
It's not the entire cost which is 10x cheaper, but the monthly costs alone (on a monthly plan). That makes 2 months the break-even point, not the point at which the total cost is 10x less.
The point at which the cost of running your own system, as opposed to renting, is 10x less (again, based on the monthly plan and figures given) is about 14 months.
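For what it's worth, both break-even points can be checked in a few lines. The figures below are illustrative assumptions, not the article's exact numbers ($3,000 build, ~$150/month to run your own, and roughly $3/hr x 730 h = ~$2,190/month on a cloud plan), so the "10x" month won't match the 14 quoted exactly:

```python
def months_until_ratio(build_cost, own_monthly, cloud_monthly, ratio):
    """First whole month at which the cumulative cloud bill reaches
    `ratio` times the total cost of owning (build cost + running costs)."""
    for t in range(1, 1201):
        if cloud_monthly * t >= ratio * (build_cost + own_monthly * t):
            return t
    return None  # ratio never reached within 100 years

# Assumed figures: $3,000 build, $150/month running, ~$2,190/month cloud.
print(months_until_ratio(3000, 150, 2190, ratio=1))   # 2  -> break-even
print(months_until_ratio(3000, 150, 2190, ratio=10))  # 44 -> cloud is 10x total
```

The break-even at ~2 months is robust to the exact inputs; where the 10x-total point lands depends heavily on the assumed running cost, which is presumably why the article's figures give ~14 months and these round numbers give more.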
After ~5 years you may need to buy a new rig to make the most of modern hardware again. At current prices, this would still be cheaper than sticking with the cloud, even though the cloud's hardware upgrades come free/forced.
So basically the cloud is currently too expensive to really make sense yet. Perhaps in another decade or so the providers will realise that their "cloud" dream will only materialise if they charge "correct" prices.
So I guess the current question is: why do cloud providers feel they need to charge so much per hour? They are going to kill the idea of the cloud before it has even properly been born.
Running on AWS gives you flexibility and means you don't have to do as much admin: you're effectively paying someone else to do it, and this should be included in the calculations. But even so, if you're planning a 24x7 load then don't bother with AWS; just rent the equivalent server somewhere else, as that will be much cheaper.
You want AWS - or, for ML, Google Compute - if you need access to a lot of power for a relatively short time.
This article would have been a lot more informative if it explained why this hardware spec is needed and how spending less/more on each part would affect application performance. Why do you need a 12-core CPU and 64GB of RAM for something that runs on the GPU? How much would performance be affected using a regular 4-core CPU with 16GB of RAM and the same GPU?
Biting the hand that feeds IT © 1998–2019