Bad week for Bezos
To top things off, he lost his world's richest person title to . . . Bill Gates.
Microsoft has been awarded the $10bn decade-long US Department of Defense JEDI IT supply contract that will see the nation's military switch to the cloud. The Redmond giant's Azure platform will play host to the US armed forces in an attempt to overhaul and streamline the Pentagon's IT infrastructure under a single umbrella – …
I agree, those billionaires have money beyond our imagination. But in an interview, Bill Gates said he doesn't have that much cash; his net worth is mostly the stock he holds, and that value fluctuates because Microsoft's share price keeps changing.
One has to wonder how much cash the likes of Bill and Jeff actually need.
I once read a memoir of a pre-war European president who said that the best thing about the presidency, for his civil existence, was that he no longer needed to carry cash around, as every human need was provided for. Which raises the question of how much of their time these men spend on their own, outside their corporate existences, which would seemingly demand more than 24 hours a day, every day, from us merely somewhat-capable humans.
...that a company that recently traded OS and application development for spyware development has won the contract. A totally innocent, unrelated, and in-no-way-covered-by-any-NDA coincidence that this same company has been lobbying for the death of any proposed bill to protect the privacy and freedom of anyone earning less than $500,000 a year. Nothing to see here. Move along...NOW.
More that the owner of Microsoft doesn't own a newspaper that criticizes the glorious leader
Could come back to bite them though:
Employees protesting working for the IBM of the next final solution
NGOs and foreign customers switching away from Microsoft
Retaliation from the next non-Trump government (assuming we ever have one)
Every windows security issue is going to be a much bigger story
This post has been deleted by its author
"Windows for Warships" running on hardened laptops. Seen those a decade or so ago when I visited my old boat before it was decomissioned. As long as the DoD can license the source code they've got no reason to complain about ANY of it, except maybe the price tag...
I say give MS a chance, but make it possible to switch vendors painlessly if they price gouge in the future.
At the risk of a possible National Security violation, I will reveal that the feature is now to be called the Blue Screen of Security. "Nothing to See Here" has a new and possibly classified meaning at DoD. Azure Connect Pro Defense Edition subscribers will note that they may elect to delay relevant Azure software upgrades for the duration of any US war actually declared by Congress. Otherwise, just suck it up, soldier. Nobody said it was gonna be easy. Expensive, yes. Easy? No.
Even if Microsoft utterly failed to deliver anything of use, what makes you think that the US Military would stop throwing money at them?
When they cancelled the US Army's "Future Combat Systems", Boeing still got paid the better part of one billion dollars.
The plan to replace Marine One with a converted AW101 (a helicopter which already flew and was in production) ended up with Lockheed Martin (and AgustaWestland etc.) getting four billion dollars.
See also: every cancelled defence contract from the last forty years.
When you get a US Military contract, you're going to get paid fuckloads, even if you never deliver a thing.
Whenever I come across delays like this, it's usually because there is a vendor that can't quite make the cut, but the Powers That Be(tm) want that vendor to be awarded the contract. So the delays are to allow that vendor to get that last little piece of the puzzle in place. Don't be surprised to see AWS lose this one, probably to Microsoft.
This post has been deleted by its author
You mean, unlike this?: https://www.theregister.co.uk/2017/03/01/aws_s3_outage/
One 2-hour, one-region outage of the biggest software-defined storage deployment in the world, which affected 30% of the internet more than 2 years ago, compared to the multiple multi-hour, multi-service, multi-region outages suffered by Azure just this year, which largely go unnoticed by the internet because no really large websites rely on Azure?
Azure still only has 4 multi-AZ regions, compared to AWS's 18 regions with 3 AZs each, yet they still manage to break multiple regions simultaneously, which AWS almost *never* does.
You can bet your bottom dollar that the DOD Azure deployment won't be running on any of the public cloud kit, will have extra layers of redundancy and will be much more strictly controlled, in terms of patching/updates.
They each have their pros and cons I guess.
What is interesting to me, is that with something as important as defence, wouldn't you want resilience across cloud providers? You know, just in case one goes down and you *really* want to press the Big Red Button?
Locations and experience - AWS has made significant investments where they are needed to meet the DoD's requirements while Azure needs to expand considerably in Virginia and Texas to meet the contract requirements.
Longer term, it just makes the next $10bn, 10-year contract more likely to go to AWS, when up until now the reverse was likely to be the case. Oh... and it has likely given Oracle's cloud the kiss of "no more US government business for you".
<muffled explosions, quite nearby>
On a cutesy completely unhelpful error screen somewhere:
"We're sorry. Something seems to have gone wrong with your missiles. :("
"Would you like to restore?"
<sounds of bulldozers and assorted construction equipment>
Did I mention this deal scares the heck out of me?
Fer all ya yung'uns, Cloud Computing is JUST a fancy name for Clustered Client/Server systems, which was INVENTED by DEC (Digital Equipment Corporation, later bought by Compaq and then HP) in the 1980s with its VAXcluster system, which I STILL THINK is one of the best client/server systems ever built! The DESIGNER of VMS (i.e. Dave Cutler) went to Microsoft and DESIGNED Windows NT for them starting in 1988/1989! So I see so much of the pedigree of VAX/VMS in Windows Server, which is a VERY GOOD THING !!!!
One of my friendly clients still has an antiquated (32-bit) VAXcluster system doing arcane COBOL financial daytrading work! The high-end-at-the-time, mainframe-level VAX originally maxed out at 512 megs, but with some RAM chip layering and soldering (quad chip-on-chip with a custom bank switch between each layered chip) we got it up to 2 gigabytes. He does it for LULZ and GIGGLES, but it's his BEST daytrading machine, because its network messaging system is so hobbled by its LOW MEGAHERTZ (not gigahertz!) speed that there seems to be a "Natural Anti-aliasing" taking place which SMOOTHS OUT high-frequency trades into a CONTINUOUSLY PROFITABLE zone!
As a technical explanation, the machine is so bad (compared to my smartphone!) that it's actually GOOD! Its lag times smooth out daytrades to such an extent that profits ALWAYS exceed losses at the end of the day. Which is WHY he keeps paying his ASTRONOMICAL monthly BC Hydro bill !!! Those VAXes ARE utterly power-hungry BEASTS !!! But he STILL has been using it even after 28 years because it DOES what it does very well! And because COBOL code is so easy to read and change, AND the VAXcluster is one of the MOST easy-to-maintain client/server systems EVER created !!!
Finding financial-systems experts with COBOL (Common Business Oriented Language) programming expertise is a tad problematic! His three M.Sc.-educated COBOL programmers easily make mid six figures ($500,000 CAD, or about $375,000 US, per year) !!!! No wonder they live in 5000-square-foot mountain-slope log home chalets right by a local Vancouver-area ski mountain, with a view of the harbour that is a DREAM on a clear day!
Who knew COBOL programmers are STILL a thing? I heard some BIG BANKS in Canada, the USA, Europe, Hong Kong and Japan are paying their IBM mainframe COBOL programmers mid-to-HIGH six figures ($500,000+ US), while top-flight JAVA developers get only $90,000 to $140,000 US, because so much of the background financial system of the world STILL uses COBOL rather than C++, JAVA or Visual Basic. Some of that code is from the 1960s, quietly DOMINATING the world financial and money transfer system, running on $20 million IBM mainframe systems!!! Since only a FEW people in the world still know COBOL and Fortran, those programmers are making a killing in the financial services and automated trading systems markets!!!
I know a 70+ year old STILL coding in Fortran as a contractor and making TRULY outrageous amounts of BANK (i.e. REAL MONEY!) He says he will finally stop when his estate/legacy for his grandchildren is enough to ensure that they will ALL be millionaires when he finally kicks the bucket! I believe he is probably only a few years away (maybe 3?) from that goal!
ANYWAYS !!! Because Microsoft AZURE is basically VAXcluster on Steroids, I actually think that the BETTER CLOUD SYSTEM WON the US Department of Defense contract! All Microsoft needs to do now is go to TYAN and/or ASUS (US and Taiwanese companies, rather than a Chinese company), buy up every dual AMD EPYC Rome-chip (64 cores and 128 threads) blade server card and server motherboard they can, stuff in as much Micron-sourced (i.e. made in Idaho, USA) ECC RAM as they can fit (16 Terabytes of System RAM per motherboard should do!) and as MANY Micron-sourced 20 TB SSD drives as they can, and put it all into an abandoned underground railway tunnel in Pennsylvania ....
Line the tunnels with 50,000 psi concrete that is reinforced with anti-corrosion-coated rebar, sprayed with multiple layers of Line-X polyurethane coating for waterproofing, and layered with MANY mu-metal sheets for EMF/RF/EMP-proofing (I suggest 7 interspersed layers of Line-X and mu-metal to ENSURE proper waterproofing and EMF/RF/EMP protection), AND THEN install 200,000 of these TYAN or ASUS motherboards, each filled with FOUR of the 32x PCI Express 4 dual-AMD-EPYC-Rome CPU blade cards, all connected with 100 gigabit Ethernet-over-fibre and multiple 100 Gbit switches, into those now fixed-up old rail tunnels!
To POWER it all, install EIGHT on-site dug-into-a-side-tunnel Two Million Gallon (7,570,823 litres) industrial sized LNG tanks mated to Thin Membrane Proton Exchange-based Fuel Cells which should power the whole thing for TWO years at a time!
200,000 TYAN/ASUS AMD EPYC Rome superserver motherboards (dual CPU) with FOUR 32x PCI Express 4 slots and TWO onboard 100 gigabit Ethernet-over-fibre chips (one 100 Gbit connector is used for INBOUND-to-motherboard data and the other only for OUTBOUND-from-motherboard data!) = 2 AMD chips at 245 watts each + 120 watts for the other on-motherboard chipsets = 610 watts per BASE motherboard.
= 200,000 mobos with on-board dual AMD Epycs and 2 x 100 Gbit connectors = 122,000,000 watts (122 megawatts)
four dual AMD EPYC CPU blade servers in EACH motherboard (520 watts for each blade plugged into all available PCI-4 express slots) = 2080 watts for a four blade setup on each motherboard.
200,000 mobos x 4 blades at 2080 watts per BASE mobo setup = 416,000,000 watts (416 megawatts)
= 122,000,000 watts (122 megawatts) for BASE mobos + 416,000,000 watts (416 megawatts) for four dual-cpu blades = 538 Megawatts total motherboard power
In terms of TOTAL PROCESSING POWER, that 2-onboard plus 4 x 2-CPU-blade mobo combination (i.e. 10 AMD EPYC Rome CPUs x 2 TeraFLOPS per chip) gives 64-bit floating point CPU horsepower of 20 TeraFLOPS per motherboard x 200,000 motherboards = 4,000,000 TeraFLOPS or 4000 PetaFLOPS or 4 ExaFLOPS of 64-bit processing horsepower !!! I think THAT should be enough CPU processing horsepower for the U.S. DOD !!!
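The power and throughput arithmetic above can be sanity-checked in a few lines of Python. All inputs are taken from the post itself (245 W per CPU, 120 W of chipset, 520 W per blade, 2 TFLOPS FP64 per chip), not from vendor datasheets; the CPU count per board follows the described configuration of two onboard CPUs plus four dual-CPU blades:

```python
# Back-of-envelope check of the 200,000-motherboard power and FLOPS figures.
MOBOS = 200_000

# Base motherboard: 2 onboard EPYCs at 245 W each + 120 W of other chipsets
base_watts = 2 * 245 + 120              # 610 W per base motherboard
base_total = MOBOS * base_watts         # total watts for the base boards

# Four dual-CPU blade cards per motherboard at 520 W each
blade_total = MOBOS * 4 * 520           # total watts for all blades

grand_total = base_total + blade_total  # combined motherboard power

# Throughput: 2 onboard + 4 blades x 2 CPUs = 10 CPUs per motherboard,
# at the post's assumed 2 TFLOPS (FP64) per chip
cpus_per_mobo = 2 + 4 * 2
tflops_total = MOBOS * cpus_per_mobo * 2   # total TFLOPS
exaflops = tflops_total / 1_000_000

print(f"{grand_total / 1e6:.0f} MW total, {exaflops} EFLOPS")
```

Under these figures the totals come out to 122 MW (base) + 416 MW (blades) = 538 MW, and 4 ExaFLOPS rather than 2.4, since the described layout holds 10 CPUs per board, not 6.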
For onsite electrical power production, natural gas to Ballard-style fuel-cell electrical conversion is still cheaper than a major service provider in terms of overall cost, and if you spend a few tens of millions of dollars UP FRONT building some high-pressure multi-walled LNG storage tanks, you could put multiple two-million-gallon onsite dug-into-the-mountain LNG tanks in to get you TWO to FOUR FULL YEARS of fully off-grid onsite electrical power for less than HALF to ONE THIRD the cost of buying power from ANY East Coast provider!
The KEY ISSUE is that you are FULLY OFF-GRID in terms of power production! It's ALL onsite! Fill up the tanks ONCE and for the next two to four years you are set for power! No worrying about snow and ice storms! No hurricanes (you're all underground!) No flooding, because you're higher up in mountainous Pennsylvania! NO WORRIES ABOUT POWER, PERIOD !!!
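The multi-year runtime claim can be roughly sanity-checked too. This sketch uses the tank sizes from the post, but the LNG energy density (~22 MJ/litre) and fuel-cell efficiency (~50%) are assumed round numbers, not figures from the post, and the load is the 538 MW total from the post's own power math:

```python
# Rough runtime estimate for eight 2-million-gallon LNG tanks feeding
# fuel cells. Assumed values (NOT from the post): ~22 MJ/litre LNG
# energy density and ~50% fuel-cell electrical efficiency.
TANKS = 8
LITRES_PER_TANK = 7_570_823      # 2 million US gallons, per the post
MJ_PER_LITRE = 22                # assumed LNG energy density
EFFICIENCY = 0.5                 # assumed fuel-cell efficiency
LOAD_W = 538e6                   # total load from the post's power figures

fuel_joules = TANKS * LITRES_PER_TANK * MJ_PER_LITRE * 1e6
electric_joules = fuel_joules * EFFICIENCY
runtime_days = electric_joules / LOAD_W / 86_400  # seconds per day

print(f"{runtime_days:.1f} days of full-load runtime")
```

Under these assumptions the eight tanks cover the full 538 MW load for roughly two weeks; a two-to-four-year figure would only hold at a drastically smaller load or with ongoing refills.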
Please do educate me on just what is bollocks AT ALL about my post?
All of it is verifiable! And since I helped design the base logistics for a propane powered server farm in northern BC (i.e. now at 8 x 120,000 litre propane tanks for full off-grid power), the LNG powered one is just a fancier version of same!
Yup! A VAX 9000 still works diligently at daytrading in the year 2019, since my friend was the one who originally BOUGHT IT for like $4 million+ US back in 1989 (I helped physically move the damn thing into his office in Calgary back in the day he first bought it!) In those days it was used to do oil and gas reservoir simulations and petroleum and natural gas rights tracking.
Those things are built like tanks! The power supplies have had their capacitors replaced a few times BUT the whole shebang STILL works! Even still uses a token ring network to communicate with our modern gear at a magically slow data rate of 16 megabits per second!
He never got rid of the VAX and it kept changing roles and even ran/managed a multi-line modem-based online BBS at one time in the mid-90's before the ADSL/Cable-powered Internet became big!
He still kept it even into the 2010s, and NOW it makes him big money!!!! FAR MORE than almost any of you here on this website will ever see in your lifetimes!
He does it for lulz, BUT doing daytrading with it actually DOES make him money, and the VERY LOW SPEED computer gear and COBOL software seem to make a big difference in smoothing out his profit/loss profile on certain daytrade products!
A VAXcluster (he has MORE VAXes, which makes it a cluster!) is STILL quite superior in a few respects to modern networking gear because the VMS operating system was so well designed by Dave Cutler! It works and works and works and just never quits! 24/7/365 for 30 YEARS !!! The hardware engineers were geniuses: the CPU box, which weighs literally a tonne or two, and the tape drives (we still have those to store data on!) are so easy to maintain and repair onsite! Had DEC not fired Dave Cutler and his PRISM super-server/super-workstation team as a cost-cutting measure, DEC would have had the MULTI-CPU NETWORK SUPERCOMPUTING CROWN that would have revolutionized modern computing and NOT have been relegated to becoming a corporate mental patient in HP's insane asylum!
And ANYONE who can actually AFFORD to pay the massive hydro bill without batting an eye, can afford to keep a 30 year old 4 million dollar mainframe just for LULZ !!!
He also has an original VAX-11/780, an IBM 360, some AS/400s, some old Sperrys and Univacs, original Apple IIs and Macintoshes, MicroVAXes and DECstations, multiple IBM 5150 PCs and PC/ATs, and NeXT Cubes with the high-end NeXTdimension boards in them! An old Cray X-MP/48 and various other gear for his personal compute museum. AND, for the kicker, a massive server farm in northern BC that is quite enviable to many 3- and 4-letter government agencies!!! He can afford it!
Not only is he ignoring such fundamentals as heat dissipation and fresh air, but he's recommending ridiculous levels of shielding around run of the mill known-backdoored consumer hardware.
Not to mention weapons like the MOAB will handle the mountainous terrain quite well. And if not, some of those earthquake weapons certainly will, especially with all that LNG he's recommending.
SG7 must be the "special" Stargate team -- the one they don't talk much about. Not quite all there upstairs.
I am WELL WELL AWARE of the heat dissipation issue because, UNLIKE YOU, I have actually DONE THIS stuff previously!!! You CNC-machine multiple aluminum plates containing continuous-loop microfluidic channels, pump silicone oil through them at pressure, and clamp those microchannel-embedded plates over every single CPU and RAM stick. The system sucks all that heated silicone oil into a massive thin-metal fin/plate radiator structure submerged within multiple 50,000-litre pools of distilled water that absorb all the heat away from the silicone-oil-cooled mobos!
In another case, we just went for full silicone oil mobo submersion where entire rack and racks and racks of power supplies, mobos and networking gear were fully immersed within Olympic-sized swimming pools of non-conductive silicone oil and cooled that way!
I've done 200,000 mobo setups before! It's NOT the first time for me to do that type of setup!
AND just for your information, a MOAB is 22,000 lbs of ordnance that needs to be delivered by a C-130!!! It will do DIP ALL to a 50,000 PSI or 80,000 PSI reinforced concrete shell that is either 5000 feet underground OR has reactive explosive plates on an outer ceramic shell, which will dissipate the impact of any incoming conventional explosive system. A MOAB won't even make a dent!
Some of the underground highly-protected compute systems I've worked with will defend against DIRECT STRIKES from multiple TWO and FIVE MEGATON NUCLEAR WEAPONS !! --- I think I know my $&^T quite a bit more THAN YOU DO!
Underground LNG tanks are ALWAYS FLOATED upon active suspension hydraulics which move five-tonne pistons back and forth, so that even a surface-detonated nuclear weapon's P-waves (i.e. the primary pressure waves of a seismic event!) can be absorbed by counterweights, active hydraulic piston compression, and the strata composition of deep underground rock!
Even a 20 megaton nuke ain't gonna go through 5000 feet of rock!
I've done this before! YOU HAVEN'T!
And just for your info, a 200,000 mobo setup is SMALL POTATOES these days! The custom mobos with the 4 x 4 CPU arrays of over TWO MILLION mobos are the really BIG supercomputer setups!
The official T500 list of top supers isn't really official. I know PLENTY of under-the-radar private and government agency systems that quite exceed 500 Petaflops and many ExaFLOPS!
ExaSCALE was done LOOONG AGO !!!! We are into getting near ZettaSCALE computing nowadays in the DEEP black budget and under-the-radar white budget world!
I see you know the trick of opening a second window to refresh the authentication cookie while authoring a very long rant.
Look, if you really want your mind blown you need to get a job at Google. They have the old mainframe architecture running at global scale. Google can't run fast interactive apps but they can figure out who you are, what you're thinking, and what you should be seeing.
Why the heck would I want to work at Google when I have access to a 119 ExaFLOP 60 GHz GaAs supercomputer for some "Weekend Night Personal Projects", AND access to a 300,000 foot ceiling spaceplane with a multi-gigapixel multi-spectral camera system onboard AND all that free access to some of the most advanced 3D metal and polymer printing and 5-axis CNC machining gear on Earth?
I have access to BETTER GEAR and personnel than NASA or DARPA has! AND ... the parent company's M.Sc and Ph.D technical personnel INVENTED many of the software/hardware systems you use every day! Why the HECK would I give up all that tech and FREE personal access to some of the TOP BRAINS of the aerospace science world just to work for Google?
I have a Sonofusion generator I get to use in the Burnaby Warehouse for my plasmadynamics work. WHY would I give that up? I was handed down a bunch of "old" Microway clusters that have combined over Two Petaflops of 64-bit GPU processing horsepower just for my PERSONAL DESKTOP rendering machine! (i.e. I am the ONLY person who gets to use it!) Again? WHY would I give all that up to work for Google?
My personal "Pet Project" right now is Field Effects Propulsion Systems research that MAY be FTL capable, sooooooooo HECK NOOOOOO !!!! NO! NO! DOUBLE AND TRIPLE NO!
It's also hard to check your email when Microsoft's Office 365 cloud has gone for one of its regular little breaks. Never mind, if they have any problems I'm sure that the updates and bug fixes will be rolled out frequently, and they've never been known to break anything.
AWS stands for Amazon Web Services. There's a reason for that: AWS is primarily good at hosting Linux-based web servers. Microsoft Azure is more focused on corporates who want to move Windows Server-based systems into the cloud, which is why they are offering attractive licence deals for Windows Server and MS SQL in Azure. I suspect that this sort of workload aligns more closely with US DoD requirements. I have no doubt that Microsoft can fulfil this contract entirely.
Again, Microsoft-style "Cloud Computing" is really just 1980s-era VAXclustering technology brought into the 2019/2020 timeframe! You can ALL thank Dave Cutler, who designed the VMS operating system, which was transformed into Windows NT when he was hired by Microsoft in 1988.
See biography at:
This guy IS A GENIUS PROGRAMMER !!! He's my Hero King Nerd of the Seven Nerd Kingdoms and Azure IS the way to go for the U.S. Department of Defence! It is TRULY A PROPER distributed Client/Server architecture and just needs a MINOR TWEAK or two to Active Directory and the User/User Group Management Consoles to TRULY KILL OFF AWS !!!
AWS is pretty much Apache Server stitched badly onto a basement-teenager-coded middleware layer that creaks and groans under load and WILL EVENTUALLY COLLAPSE MIGHTILY AND DESTRUCTIVELY under its own weight, while Azure is an IBM Mainframe Class of Operating System and Networking infrastructure that will work and work and work 24/7/365 for the next 100 years BECAUSE its designers KNOW SOMETHING about ENTERPRISE-SCALE corporate computing while the OTHER side AWS are game programmers and NOT scale-up/scale-out specialists!
Congrats to Microsoft on the DOD contract win!
THANK YOU DAVE !!!
P.S. Make sure your underlying hardware is FULLY EMF/EMP-shielded (i.e. fully SOLID 1/4 inch copper-plate-based Faraday Cage boxes) with Varistor-based on-board Surge Protection and contains at least TWO Server-class processors (64 cores/128 threads) for each base controller motherboard and TWO (64 core/128 thread) server cpus on each of the four PCI-4 express server blades EACH with TWO onboard Full-Duplex 100 gigabit Ethernet over fibre connectors! You use so many connectors for each mobo and server blade so you can assign and balance processing loads and network transfers at ANY POINT in time!
PLEASE ALSO MAKE SURE you use non-conductive refined silicone-oil-based full motherboard immersion to ensure PROPER cooling !!! Just fill each solid-copper-plate Faraday cage box that contains 4 to 8 motherboards with silicone oil that circulates within a CLOSED LOOP cooling system, in and out via INSULATED PIPES, into a very large thin-fin radiator setup that is submerged in a 100,000-litre or larger open pool of distilled water, which ABSORBS lots of heat through the radiator fins. That absorbed heat is then slowly but surely dissipated into the air! 100,000 litres SHOULD do it for 200,000 motherboards, BUT if you have to, an Olympic-sized swimming pool (2.5 MILLION LITRES) absolutely WILL be enough overkill to REALLY absorb and dissipate all that mobo heat!
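For scale, the pool-as-heat-sink idea can be checked against the specific heat of water. A minimal sketch, assuming (these are assumptions, not figures from the post) that the full ~538 MW from the earlier power math reaches the water and ignoring evaporative and convective losses:

```python
# How fast would an Olympic pool warm up as the heat sink for ~538 MW?
# Assumptions (not from the post): all heat reaches the water, and
# there are no evaporative or convective losses while it does.
LOAD_W = 538e6                 # total load from the post's power figures
POOL_LITRES = 2_500_000        # Olympic-sized pool, per the post
SPECIFIC_HEAT = 4186           # J/(kg*K) for water; 1 litre ~ 1 kg

heating_rate = LOAD_W / (POOL_LITRES * SPECIFIC_HEAT)   # K per second
minutes_to_boil = (100 - 20) / heating_rate / 60        # starting at 20 C

print(f"{heating_rate:.3f} K/s, boiling in ~{minutes_to_boil:.0f} min")
```

Under these assumptions the pool is a buffer measured in tens of minutes at full load, so the continuous air-side dissipation would have to carry essentially the whole 538 MW on its own.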
Clippy : I see you are trying to launch some missiles. Can I help you in choosing a country to launch them to?
User took too long to make a valid selection.
Autolaunch is initiated and cannot be aborted.
Launching in 5 Microsoft minutes.
Oh wait, there's some Windows updates to be applied first!
Since we're now clearly going to lose any conflict we get involved in, I was wondering, to whom are we going to surrender? I want to start taking language lessons so I can understand the orders our new masters give us. Should I study Russian, Chinese, Korean, Arabic, or what?
The only upside here is I can't imagine software from Microsoft becoming intelligent and self-aware to the extent it could decide to wipe out the human race, so no worries about a potential "Skynet," then. At least not in America.
Pretty clear this award was due to Trump intervention versus merits or pricing. Given that AWS is already handling confidential CIA material and has a much larger, more robust infrastructure platform and more tools, it seems reasonable the review may take a while. Might it take as long as the election? Who says these awards are not politically driven?