10 posts • joined 6 Nov 2008
Maybe .NET on OpenShift?
I guess they got tired of being totally locked out of anyone actually using their stuff on all the latest trends like OpenShift, CloudStack, Amazon EC2, Google Apps, etc.
Nearly every new cloud and PaaS platform is based on open source, because let's face it, what cloud provider wants to deal with software licensing when building platforms to offer their customers?!
Microsoft has been totally shut out of this ecosystem outside their own Azure stuff, and no one even talks about anything Microsoft these days in the world of the cloud.
Just because the dead cat bounces when it hits the ground does not mean it's not dead.
Ya know what the really interesting thing is, and what scares governments and bankers?
The fact that even after all of this, people are STILL WILLING to go with Bitcoin, because the huge positive of getting out from under their control is worth even more risk than this.
The ability to not be slaves, subservient to the masters with your wealth, is worth a HUGE amount of potential risk of not being 'regulated'.
Want to make more money to fund the BBC? Allow me in America and other countries to subscribe so I can use iPlayer just like someone in the UK.
You would add millions to the funding of the BBC.
Then how the fark did these plants grow in the first place?
Stupid global warming nuts can't even see the obvious.
>great opportunity for PaaS companies to lock-in developers.
That's why you use only open source in your PaaS development. Then you can move to whichever one you want with your code. Sure, the interface for deployments etc. might be a little different, but your app will run the same.
A Rails app on OpenShift is the same as a Rails app on Heroku is the same as a Rails app on Rackspace is the same as a Rails app on Stackato.
What you don't want to do is go down any vendor-specific languages or databases route... then you're actually locked in. That is what you get with Microsoft Azure, or Amazon, or Google. You are not locked in if you go with Red Hat, Heroku, Rackspace, Stackato, etc.
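To make the portability point concrete, here's a minimal sketch (the variable name, env var, and default URL are my own illustrative assumptions, not tied to any one PaaS): as long as the app reads its service endpoints from the environment instead of hard-coding a vendor-specific service, the same code runs unchanged on any of those providers.

```ruby
require "uri"

# Hypothetical sketch: a portable app never hard-codes a vendor-specific
# database endpoint. It reads a standard connection URL from the
# environment, which OpenShift, Heroku, Rackspace, Stackato, etc. can
# all inject at deploy time.
db_url = ENV.fetch("DATABASE_URL", "postgres://localhost/myapp_dev")
db = URI.parse(db_url)

puts "connecting to #{db.host}#{db.path}"
```

Swap providers and only the injected `DATABASE_URL` changes; the app code itself stays identical.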
>array which outperforms VMAX/VNX and costs less
Setting the bar kinda low there.
Nearly every other storage system in the world outperforms EMC and costs about 1000x less.
1) The plants withstood a 9.0 earthquake with apparent ease. These are 40-year-old reactors, and they held up well. Newer reactors, like those in the USA, are built on "rollers" to help them withstand an even more severe earthquake than this one.
2) The reactors were shut down in an orderly manner and the nuclear reaction was stopped.
3) A while later the tsunami hit, and that is where the major flaw was: the diesel generators were left susceptible to the tsunami. It was not a flaw in the reactor design.
4) Storing the spent fuel pools at the top of the reactor was a big mistake, although the nuclear hysteria that complicates spent fuel storage probably shares some of the blame.
5) The officials were slow to react and call in external assets (fire truck pumps) to help, which is probably a cultural thing.
If they had put more thought into the location of their generators and their spent fuel storage, this would have been a non-incident.
Most likely we will end up with a few bananas' worth of radiation, a shitload of media hysteria, and 4 reactors that actually held up very well through a 9.0 earthquake and a massive tsunami.
And people will still be afraid of the cleanest and lowest cost energy source known to man.
I could have sworn they were going to call it...
Invista is just as in-band as SVC
Don't let anyone fool you into thinking that Invista is 'out of band'.
Invista traffic all goes through the Linux box on the director blade... the only difference between Invista and SVC is that the inline Linux appliance in Invista is connected directly to the backplane (25 Gbit/s), instead of via physical switch ports like each SVC node (16 Gbit/s).
Example: if a server on blade 1 wants to access storage connected to blade 3, the traffic goes in on blade 1, across the backplane, into blade 8 (Invista), back onto the backplane, out to request from the storage on blade 3, back in on the storage port on blade 3, back across the backplane to Invista on blade 8, back out over the backplane to blade 1, and out to the server.
Draw it out... it's the same number of hops; just replace the fibre channel cables in SVC with backplane connections on the Invista blade.
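To "draw it out" in text form, here is a rough sketch of the round trip just described. The hop labels are my own simplification (each cable or backplane crossing counts as one hop); the point is only that both paths come out the same length.

```ruby
# Round-trip path through Invista, per the description above: every
# crossing of the backplane or a cable counts as one hop (my own
# simplification for illustration).
invista_path = [
  "server -> blade 1 port",
  "blade 1 -> backplane -> blade 8 (Invista appliance)",
  "blade 8 -> backplane -> blade 3 (storage port)",
  "blade 3 -> storage array",
  "storage array -> blade 3",
  "blade 3 -> backplane -> blade 8 (Invista appliance)",
  "blade 8 -> backplane -> blade 1",
  "blade 1 port -> server",
]

# Same trip through SVC: the backplane crossings become FC cables.
svc_path = [
  "server -> switch port",
  "switch -> SVC node (FC cable)",
  "SVC node -> switch (FC cable)",
  "switch -> storage array",
  "storage array -> switch",
  "switch -> SVC node (FC cable)",
  "SVC node -> switch (FC cable)",
  "switch port -> server",
]

puts "Invista hops: #{invista_path.length}, SVC hops: #{svc_path.length}"
# Both are 8: same number of hops either way.
```

Same topology, same hop count; the only difference is what medium carries each hop.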
And with SVC I can scale to 8 nodes using cheap Intel servers; I can't scale out any more Invista nodes without adding more DIRECTORS and expensive proprietary Intel blades for them (OUCH $$$).