Re: Oooh, clever..
Check recent America's Cups (and possibly older ones as well); the holder has been writing the rules for a long time...
You didn't realise Nokia's Snake game was just a training tool for driving in the future?
Shirley you mean rebooting Sharknado (1-5) as LaserSharknado? Or SharkLaserNado?
I guarantee the trailer will be at least as amazing as the movie. Probably more so...
I have to disagree with the use of common everyday standards.
ElReg defines the standards, just as it defines the newsworthy news, therefore I propose Beer o'clock as the universal standard.
It can be easily calculated as the time the writer thinks you should have a beer - anything interesting will be beerworthy and naturally occur at beer o'clock.
Converting between beer o'clock and local time is left as an exercise for the reader, just as it is with your fancy "scientific" measures of time...
The surprise was that the market grew at all - between a drop in enterprise spending of almost 10% amongst the top two vendors and the big cloud providers already spending heavily over the last few years, there was a lot of inertia to overcome.
This graph illustrates the problem for SPARC (and Itanium and Power).
The enterprise server market has gone from a 50/50 split between x86 and non-x86 systems in 2009, with a total value of around US$10b, to a US$15b market where x86 has a ~70% share, IBM gets 17+% and the rest share 13% and dropping fast. And if you break out cloud and enterprise sales, enterprise is falling (and has been for years) while cloud is growing, which is killing the margins and volumes in the enterprise business.
Larry's supported SPARC long enough to make sure it lasts longer than Itanium, but both are playing their end games. You can hate LarryOracle for it, but this decision has been made by the market as much as by Larry and Oracle.
"Printing was "allowed"."
Lucky you didn't fart at the same time - SAP would have found a way to bill you for energy production...
But we have so many peanuts!!!!
Why give them money when we can pay them peanuts?
...welcome a 50% improvement in the newest software version.
I don't believe this has anything to do with America.
I worked for an FMCG manufacturer in the past, and in Korea there was a very high level of scrutiny of our products by consumers.
i.e. if there were flaws in the printing of the packaging, such as ink runs, or cardboard/product matter between the cardboard packaging and the plastic shrink wrap, a customer would expect to get 10-20 free items to remain "happy" with our products, based on forum posts.
"Doing 15% YoY on cloud is shite."
And even worse than shite when most of it comes from shuffling existing lines of business rather than actual growth. i.e. shifting the hosting of an outsourcing customer's data centre from a newer IBM data centre on the US east coast to an older, less efficient "cloud" data centre on the US east coast.
Once they had cracked ROT26, ROT13 can only be a few more years away.
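For anyone who fancies an early start on the cracking effort, Python already ships the cipher in question as a codec:

```python
# ROT13, the famously unbreakable successor to ROT26.
import codecs

ciphertext = codecs.encode("attack at dawn", "rot13")
print(ciphertext)  # nggnpx ng qnja

# Applying ROT13 twice round-trips back to the plaintext;
# ROT26 "decryption" is left as an exercise (hint: it's a no-op).
assert codecs.encode(ciphertext, "rot13") == "attack at dawn"
```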
"and up to $600k Oracle licensing savings"
I'm not sure this is a marketard problem - the likelihood of Samba being a driver (let alone a big driver) for Android is close to zero, as it affects maybe a few hundred thousand out of the supposed 2 billion Android devices at best.
I suspect this was a developer trying to solve a problem they were personally experiencing, who made it publicly available, and a design decision from a few months ago (i.e. compatibility vs security) now looks completely wrong.
While there are lessons in this around the level of understanding required when developing with common libraries to ensure current security standards are implemented, I'm not sure this is due to marketards.
Let's wait for the inquiry to find out if it was the end-of-life OSes that were affected...
The NHS had a massive task to update old applications and systems to move off WinXP and they have made progress (I'll leave it as an exercise for the reader to decide if the progress is good/bad/sufficient). If the older systems had been effectively isolated and weren't hit, that leads to a very different conclusion to the Windows XP story.
While I don't doubt the NSA is the cause, patches were already available for the issue. Why weren't they applied, and how can the NHS address these deficiencies in a way that avoids the impact that we saw with WannaCry?
Finally, MS having Windows XP/Windows 2003 patches that weren't released prior to WannaCry is also dubious in my opinion - while they didn't have to patch these, the severity of the bugs, their public disclosure and the length of time that the compromise was present suggest the patches should have been publicly released as soon as they were tested.
"I think what's happening is that desktop and laptop vendors are going upmarket."
They've been trying for years - outside of Apple they haven't had a lot of success, and even Apple sees its year-on-year sales consolidating around certain models in each range.
The problem for manufacturers isn't so much the volume (as it drops it drives consolidation, and the market is still pretty big at around 250+ million units/year), it's that there is no real innovation driving a need for new PCs.
CPUs have plateaued in terms of single-core performance, and the mainstream desktop software market hasn't really found anything to capitalise on multicore to drive sales.
While SSDs provide benefits, they have to get down to US$35 or less per unit to compete with HDDs - laptops and enterprise storage are driving the price drops, so it will happen eventually.
Combined with tablets/mobiles/games consoles, the edges of the market will continue to shrink, probably at the low end. So maybe Gartner will be right this time and average prices will increase...
"Can anyone remember an accurate report from Gartner predicting the future two years out?"
Are you referring to their well known report they sent out following a particularly drunken Christmas party saying "Gartner have looked into the future and can still see people willing to pay money for this crap regardless of how wrong we are"?
Monitoring all Internet traffic and using reversible encryption (but only for the good guys...) is all well and good, but the bad guys will start avoiding those communications channels.
I favour a Matrix style approach where all the people actually sit in little beds wired up to reality so that we can monitor EVERYTHING, EVERYWHERE!!!!
You think this is a bad idea G489089890121-2? * zaps you with enough electricity to power a small town * Bwahahahaha!!!!! Re-education completed....
Endpoint AV/malware prevention tools or web/mail scanning were the only ways of preventing some form of encryption if you were hit within the first 12 hours...
Patching stopped one vector for the spread, but others were still available.
"Am I correct in understanding that this happens (in part) quicker in systems patched by Micro$oft?"
Pretty sure the answer is no.
If machines are patched against the NSA backdoors and SMBv1 is disabled, other propagation routes are still available if the user has local admin access to the PC. i.e. lsadump for any cached credentials on the PC, and then psexec/WMIC using those credentials in an attempt to access other machines via C$/Admin$ shares. Your MBR is also re-written, and after 20-40 minutes your PC is restarted and a "chkdsk" run that encrypts your hard disk. Prior to the reboot, booting from CD and re-writing the MBR allows you to recover from this.
Also consider blocking SMB access between workstations via the Windows firewall for end-user devices if there isn't a compelling reason not to (i.e. in offices where a local PC is the "server" for some dumb app), or at least reduce access to just the hosts or subnets that need it, to reduce your exposure.
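As a sketch of that mitigation (the subnet is a placeholder - substitute your own workstation ranges, and test before rolling out), a single scoped block rule via netsh stops workstation-to-workstation SMB while leaving file servers in other ranges reachable:

```shell
REM Block inbound SMB (NetBIOS session + SMB, TCP 139/445) from other
REM workstations. 10.0.0.0/16 is a placeholder for your workstation
REM subnets; file servers outside that range are unaffected.
netsh advfirewall firewall add rule name="Block workstation SMB" dir=in action=block protocol=TCP localport=139,445 remoteip=10.0.0.0/16
```

Note that Windows firewall block rules take precedence over allow rules, so scoping the block to workstation subnets (rather than pairing a blanket block with an allow) is the safer pattern.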
If the user doesn't have local admin access to allow the hash dump AND you are patched against the NSA issues across your network, the damage is limited to files matching a list of extensions being encrypted on that machine.
If you haven't been infected yet, your best protection is ensuring AV and patching are up-to-date and reviewing your usage of privileged accounts (both at domain level and local PC level) to ensure you understand the potential for propagation across your network. Changing passwords for privileged accounts to prevent cached hashes from being usable is also a good step.
I think there are a few points that need to be clarified:
- the cupboard should be FIREPROOF (to protect important things) not AIRTIGHT to prevent any issues with H&S...
- if the manager locks himself in the airtight cupboard and is unable to locate the light switch or the emergency door release or they both turn out to be faulty, then it would be a terrible accident...
- the paper trail showing the manager declining the recommended safety tests in said cupboard may also help in any subsequent inquiries by the local constabulary
Given the content, I suspect it's a floater.
You can keep on trying to flush it away but it will just keep coming back.
When playing network or telecoms poker, a clumsy excavator beats pretty much any hand.
Except for TWO clumsy excavators...
That's not so much AI as brute force.
Although wire cutters allow for less force and a little more finesse...
Given the state of the current BBC licence fee (effectively stagnant under the Tories) and content provision moving away from traditional TV/radio services (based on audience share not just for the BBC but globally across those mediums), this may be an opportunity for the BBC to negotiate their way out of what appears to be a dead end in the medium to long term.
So they can't fix the ejection seat issues then?
...the drive towards unmanned planes?
40Gbps was an OK interconnect for switches when there was nothing better, but it wasn't great for server connectivity when you were looking at upgrading from 10Gbps because you needed a little more bandwidth. Your 40Gbps costs per server were around 3x the cost of 10Gbps when you would probably only utilise 25-50% in the medium term, versus 25Gbps costing 2x 10Gbps, delivering 40-80% of your bandwidth needs, and providing a capable Fibre Channel competitor on the storage side.
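A quick back-of-envelope check of that argument - the relative costs and utilisation ranges are just the figures quoted above, with the midpoints taken for illustration:

```python
# Relative cost per Gbps actually used: 40Gbps at ~3x the 10Gbps price
# but only 25-50% utilised, versus 25Gbps at ~2x and 40-80% utilised.
def cost_per_usable_gbps(link_gbps, relative_cost, utilisation):
    """Relative cost divided by the bandwidth you actually use."""
    return relative_cost / (link_gbps * utilisation)

forty = cost_per_usable_gbps(40, 3.0, 0.375)       # midpoint of 25-50%
twenty_five = cost_per_usable_gbps(25, 2.0, 0.6)   # midpoint of 40-80%
print(f"40Gbps: {forty:.3f}, 25Gbps: {twenty_five:.3f} (relative cost per used Gbps)")
```

On those numbers the 25Gbps link comes out around a third cheaper per Gbps actually used, which is the crux of the comment.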
AWS and Google have standardised on 25Gbps in their DCs already, so it's likely the 25Gbps costs will come down while 40Gbps will remain high.
Had something similar at a tertiary educational institute.
We had a microwave link between two sites around 3 miles apart. One lunchtime, the link went down.
Upon investigation, a student was found sitting on a ledge in front of one of the microwave dishes eating his lunch. We are unsure if the microwaves did any real harm - it wasn't an institution noted for its academic achievements, and any pigeons who sat there too long usually just fell over the edge...
And this is why my new startup "Quantum Time Developments" needs government funding to take our ideas from the drawing board (ok...scribbled on the back of a beer mat) to someplace that only a massive injection of public money can possibly achieve.
All of us here in "Quantum Time Developments" look forward to the exciting new possibilities this funding will give us and society as a whole.
Note: "society" as a whole may or may not extend past local drinking establishments, suppliers of recreational pharmaceuticals and artistic venues with "girls" in their title. And any politicians who need convincing of the merits of our brave new world.
How to get to 480V?
Your incoming power feed at 240V plus your UPS/generator feed which is a separate 240V feed.
And a big switch between them, with software to control it and a manual override. Someone screwed that up, based on the article, although the engineer may be being thrown under a bus and the details may be more subtle.
BA are supposedly around 250 racks per DC - given the age of the DCs and the likely equipment (mainframes and the like come heavy), they are likely around 1-1.5MW per DC. Nothing will be small..
Given stories about the level of design customisation Google and AWS do to save power (Google using rack-level high-efficiency power supplies) and to ensure reliability (i.e. AWS writing firmware for UPSs), I wonder what HPE would offer Azure that Azure couldn't do better themselves?
I like HPE servers, but there's loads of enterprise features I like that cloud providers would get no value from.
Faulty network equipment rarely results in faulty destinations when scanning boarding passes - it results in either slow or no connectivity.
The boarding pass issue sounds more like storage, with either a fault (i.e. the power issues or a resulting hardware failure) causing a failover to another site with either stale data or the failover process not working smoothly (i.e. automated scripts firing in the wrong order or manual steps not being run correctly).
i.e. I suspect this is more of an RBS type issue rather than a Kings College type failure.
My take on ElReg's story is that it's a placeholder for comments and a link to contact someone directly for people that do know more.
Looking around the various news sites and places that might know, I don't see much more than what is known about the effects of the outage and the official statements.
The interesting stuff will come in during the week when people are back in the office and go for after work drinks with ex-colleagues :)
Google, AWS and Azure are all investing in a data centre infrastructure encompassing bandwidth/land/physical buildings/hardware that will allow them to deliver services to any point on the globe as economically as possible.
While Google's "boring" services may not match those offered by Azure and AWS, they're still in the game because of what they already have and the amount of money they bring in.
The Oracles, SAPs, IBMs, Salesforces etc., hoping that their existing applications can be delivered from much smaller cloud infrastructures, will either lose out to new solutions from the big three or become a customer of one of the other large infrastructure investors (Facebook/Alibaba/Baidu/Tencent/AT&T, based on Intel's top 7+1 customers) and move their solutions to one of those providers.
There will always be solutions where cloud doesn't fit the requirement, but for applications covering large geographic areas, the big three will be hard to beat.
How many companies with more than 10 or so employees have utterly pointless e-mails sent out that could be quickly resolved with a face-to-face discussion?
How many of the same companies have someone who spends hours looking for something online that can be achieved some other way (telephoning suppliers, checking past actions, talking to colleagues etc)?
Used Dell Update Manager to update my PC.
It ran up huge online gambling debts on all my credit cards, burnt my house down as part of an insurance scam and ran off with my wife.
0.5/5 - updates installed very quickly.
Reading modzero's advisory:
The program monitors all keystrokes made by the user to capture and react to functions such as microphone mute/unmute keys/hotkeys. Monitoring of keystrokes is added by implementing a low-level keyboard input hook function that is installed by calling
The rest just sounds like layers of fail - maybe the old HP laptop I had wasn't too slow, the drivers were just so badly implemented it never had a chance to finish starting up....
Thanks for that - very useful.
It makes the point that a firewall running on an AMT-enabled system is unable to properly secure the system (i.e. has the packet you sent to the firewall been intercepted by the management processor rather than the firewall CPU?). I suspect that may affect a lot of security people's assumptions about their network setup if the firewall is running on an AMT platform through pfSense, virtualisation or similar. And I'm very interested to see if any vendors come out with firewall patches. As for any environment you can't physically validate yourself...
I tend to go for stupidity over malice when looking for explanations for this type of thing, but I'm going to add a bit of tinfoil to my headware just in case...
I agree with your comments around IBM being scared of being a monopoly while MS and Google weren't.
I think MS have had a few bad years but have come through it - despite the damage they've done to their desktop operating system and the failure of their mobile platform, their SaaS and cloud businesses will see them through the next 10 years while some of their well-known contemporaries shrink or disappear.
For Google, they have a monopoly on search, but they're basically an entertainment service (YouTube) showing other people's content for free and making their money in advertising. The tech company is just a front, so it's hard to justify splitting up a free service lots of people love... Their infrastructure and revenue will see them continuing to dominate entertainment and advertising for some time too...
How do I swipe left and right on those?
The key figure for AWS is revenue - the goal of the three big cloud providers is to expand their market share so that when cloud is the de facto platform for applications, they have the platform to deliver that function.
That means that in the short-to-medium term, revenue trumps income.
When Oracle/SAP/IBM/etc are customers of AWS rather than competitors, we will see income increase as they probably will have reached their ~125-175 datacentre goal to allow them to serve any global requirements....
Not sure Intel will get out of it that easily...
Most of the vendors have longer support warranty programs for enterprise gear, so they HAVE to replace faulty equipment. i.e. Cisco set aside US$125 million for "product replacement" (http://www.crn.com/news/networking/300083778/cisco-cfo-doesnt-anticipate-any-massive-revenue-impact-from-faulty-clock-components-company-sets-aside-125m-for-product-replacements.htm)
While I have no doubt that there are some uses for Optane, the challenge is: do I get the best value out of Optane at 10x the price of an equivalent SSD, or am I better off with a combination of a larger SSD/multiple SSDs and more physical memory to cache the IO as a midpoint in cost?
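As a toy illustration of that trade-off - all the prices below are made up, and only the 10x Optane/SSD ratio comes from the comment:

```python
# Fixed hypothetical budget: all-Optane versus a bigger SSD plus DRAM cache.
budget_usd = 1000.0
ssd_per_gb = 0.25                  # hypothetical SSD price per GB
optane_per_gb = ssd_per_gb * 10    # "10x the price of an equivalent SSD"
dram_per_gb = 5.0                  # hypothetical DRAM price per GB

optane_gb = budget_usd / optane_per_gb
# The midpoint option: spend half on SSD capacity, half on cache RAM.
alt_ssd_gb = (budget_usd / 2) / ssd_per_gb
alt_dram_gb = (budget_usd / 2) / dram_per_gb
print(f"Optane only:     {optane_gb:.0f}GB")
print(f"SSD + DRAM cache: {alt_ssd_gb:.0f}GB SSD + {alt_dram_gb:.0f}GB RAM")
```

On these (invented) numbers the same spend buys 400GB of Optane or 2TB of SSD fronted by 100GB of RAM cache - whether the Optane wins then depends entirely on whether your hot working set fits in it.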
Reads like "IBM joins the public cloud party late, doesn't do much for a while, so cuts prices and gets an analyst to write a paper on storage prices for some publicity".
Or am I just cynical?
Wouldn't a regional failure require all DCs in the region to fail before the region is considered failed?
Yes, AWS have had failures of individual DCs, but part of Mr Hamilton's argument against Oracle's cloud was that relying on a single DC would always (given enough time) result in some event that leads to a failure, which is why AWS aims to provide 2+1 DCs per region.
I disagree that "cloud computing" is a meaningless buzzword.
While hosting can be handled internally or externally (either as a managed data centre or a fully managed service), the major difference with cloud services is scalability.
Cloud computing gives you the ability to stand up or shut down servers/infrastructure with minimal cost penalty and minimal lead time - something that you tend not to be able to do in traditional hosting (or if you can, the scale is extremely limited in my experience).
The separate question is whether you can benefit from adding/removing capacity on-demand to reduce your overall costs. Cloud computing MIGHT provide the cheapest solution in some instances, but well-managed in-house/third-party data centres/third-party managed services may provide more cost (or business) effective solutions.
Broadcom like the closed source driver model - no need for maintenance releases that most manufacturers would release anyway and it provides a reason for next years latest and greatest hardware which is almost the same as the old hardware but with slightly better drivers...
There were rumours that early batches of Intel's latest and greatest Xeons were being sucked up by a cloud provider in spite of some bugs that stopped mainstream release.
Maybe they weren't rumours after all...
Is bashing Windows 2003 really MS bashing at this time though? It went end-of-life 2 years ago and arguably anything relying on WebDAV should have been replaced once or twice in that 14+ year period to address existing security issues...