Re: Sure, get rid of the older, more expensive workers
IBM are one step ahead of you...
They've already got rid of most of the experienced people who have a clue in 10+ years of resource actions.
Maybe IBM could make things work in the past - now they're more interested in giving the task to someone cheaper in the hope that things might work in the short term and that the customers will pay them not to break things (i.e. do nothing at all with no people) in the long term.
Still, they can always bask in the glory of what they did 15+ years ago, right? Oh - and Watson...
From http://www.datacenterdynamics.com/content-tracks/colo-cloud/aws-how-to-manage-mega-growth/97431.fullarticle
"A few years back, AWS made the decision to go with 25Gbps Ethernet (25GbE) at a time when the industry was moving towards 10GbE and 40GbE. Hamilton said 40GbE is essentially four lanes of 10GbE, while 25GbE future-proofed the AWS network, allowing a switch to 50GbE (2x 25GbE) which would deliver more bandwidth at lower complexity.
To drive this network, AWS now makes its own custom silicon for server network interface cards (NICs) thanks to its acquisition of Israel-based chipmaker Annapurna for $350m last year.
The new Amazon Annapurna ASIC supports 25GbE, and will enable even faster innovation, as it gives the cloud giant control over both software and hardware down to the silicon level, Hamilton said: “Every server we deploy has at least one of these in it.”"
Once upon a time...
The vendors supplied 40Gbps and there was little demand.
The cloud providers demanded 25Gbps and started building lots of servers with 25Gbps links and using 25Gbps top-of-rack switches.
On the desktop we are seeing a similar battle for 2.5Gbps and 5Gbps where support for the higher speeds over Cat5E may see them become the new standards.
And the battle was over...
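The lane arithmetic behind that battle can be sketched as follows - the lane counts and per-lane rates are the standard Ethernet SerDes figures, not anything taken from the article:

```python
# Ethernet speeds as (lanes, Gbps per lane) - the point of 25GbE was that one
# 25G SerDes lane replaces four 10G lanes, and 50GbE is just two of them.
standards = {
    "10GbE":  (1, 10),
    "40GbE":  (4, 10),   # four 10G lanes, as Hamilton describes
    "25GbE":  (1, 25),
    "50GbE":  (2, 25),   # more bandwidth than 40GbE with half the lanes
    "100GbE": (4, 25),
}

for name, (lanes, rate) in standards.items():
    print(f"{name}: {lanes} lane(s) x {rate} Gbps = {lanes * rate} Gbps")
```

Fewer lanes per port means less cabling/switch complexity for the same (or more) bandwidth, which is the "future-proofing" argument in the quote.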
Last time it was a memory issue:
http://www.cisco.com/c/en/us/about/supplier-sustainability/memory.html
I vaguely recall it being a supplier issue (supplier initially provided components to Cisco's spec but at some point the spec changed and wasn't picked up during QA).
Or there's the long running capacitor plague - https://en.wikipedia.org/wiki/Capacitor_plague
I know it sucks having to replace newish equipment, but there's not much more that a vendor can do (from memory, Cisco shipped advance spares for equipment with the faulty memory, with the replaced components to be returned within two weeks for equipment covered by Smartnet, and a tighter return window for non-Smartnet equipment - although that might have just been because we were a large customer...)
Given the recovery process, it looks like the DBA is pretty competent - he may have made a huge mistake (in my experience, competent people can hugely misjudge the risk of their actions) and is fixing it. Not a perfect fix, but able to recover all but 6 hours of data and able to quantify what was missing in under a day isn't bad given the number of issues found.
It looks like the root cause was attempting to get replication working from live to staging, which broke the db1 to db2 replication process - the issue may have been related to performance limits in the staging environment. There was then a period of high DB utilisation that may have partially contributed to the replication problem, either directly or indirectly by distracting the DBA. While I can understand the thought process behind deleting the db2 replica and starting again, there was a risk in these actions that was unfortunately realised. At which point, things started to go horribly wrong as all the backup issues were discovered.
The bit that is missing is why did all the backups fail? I suspect the backups and backup process had been tested in the past with the earlier DB versions. 9.6 is reasonably new (Sept 2016) so they may have had a working backup strategy up until at least then and arguably based on their issue tracker until mid-December 2016.
Why is this important? Read through the comments about testing backups and ensuring high availability. They probably had both until last month when they upgraded the database...
I realise that throwing "Trump is to blame" around is the latest fad, but the reality is that the US has provided laws allowing corporates to do business with foreign countries for a long time, while the US government departments either suck up the data anyway or fight cases in court repeatedly until they get the results they want.
As far as the Privacy Shield goes, judge it on the success of legal action when it is found to be broken.
And if no cases ever come to court for non-Americans, we must all be safe....
I've seen a number of sites getting the Google CAPTCHA message and the cause is almost always someone messing around with search engine optimisation (SEO) tools without a good understanding of what they're doing.
Having said that, it can be hard to find (which of these thousands of Google requests per minute is causing the issue).
And the annoying thing is that the culprit almost always knows they were causing the issue, in spite of any communication...
As this is a tech site, I'm wondering how big an effect tech companies have had on news gathering.
- the loss of regional journalists due to advertising moving to other sources i.e. http://www.pressgazette.co.uk/how-the-rise-of-online-ads-has-prompted-a-70-per-cent-cut-in-journalist-numbers-at-big-uk-regional-dailies/
- the old media chasing clicks versus sticking with traditional journalism hasn't helped either their revenues or the quality of stories presented to their audiences. https://www.theguardian.com/media/2016/apr/17/fake-news-stories-clicks-fact-checking
- some of the comedy/parody news sites look factually accurate in hindsight...
The proliferation of news via new media and advertising revenue moving from old media to tech sites has resulted in fewer journalists, more reliance on syndicated stories (so everything looks largely the same) and page views/advertising money now going to free sites relying on non-news content or sites with questionable content.
The fix? We see the result of our actions and if they are wrong, countries correct them over time via political change or less pleasant alternatives. I don't see getting Facebook to self-regulate as an answer that won't result in them accidentally forgetting to self-regulate at the next election...
AWS has been going after US Government business for a few years now - I wonder if that's starting to hurt some of Oracle's big accounts that sat on older/slower hardware?
The question is whether AWS will provide a short term solution (a better instance to match the new pricing, with SSDs and lots of RAM to offset some of the CPU hit) and a longer term migration path to another product (i.e. an AWS-optimised PostgreSQL) to tempt Oracle customers away permanently. AWS don't have to do that much - a lot of the Oracle DB usage I've seen is unnecessary, and Oracle's pricing/sales force would provide the customers for any alternative...
...HP Labs' blue sky has been grey for a long time.
HP has given memristors something like 15 years of development to get to "not commercially usable in the near future", in spite of the hype train pulling into the station a few times already. My understanding is that the only products that have worked as stated cause degradation in the material used, which reduces the number of available writes, and the speed is lower than existing flash memory.
As for Helion, how could a server manufacturer look at the existing hosting/cloud environment and see the drive towards commoditisation from 2009 onwards and try to enter the market in 2014? Was the thinking that the other hosting/cloud providers weren't able to afford the wonderful HP servers to make them more profitable or was it just a scam to boost server sales for exec bonuses?
And "The Machine" - not sure it was ever more than a platform for memristors (see above).
The problem that I see for the historical enterprise server providers is that their high margin Unix business is gone - the customer base is only just big enough to pay for the next generation or two of research and they are no longer able to really innovate. Plus they keep wanting to re-invent the high margin server ignoring how well commodity hardware performs.
The cloud providers (Google/AWS/Facebook and I suspect MS) all use custom hardware so don't need HP and co to help them. Plus they probably all do scale better than HP ever could.
Is there hope for HPE? If they start to kill off Itanium and move customers to x86 (including migrating the software...) they could keep things going, and they might get a few more years of money out of an innovative ARM server (i.e. the bigger ARMs with the ability to tie them to a decent amount of memory in blade-server-type form factors at very high scale). That would make a big difference in application caching, and if HP developed software solutions around it they might keep the hardware sales going long enough to see the transition from x86 to ARM.
And maybe someone will notice something that will make real money in the meantime...
This is meant as an explanation, not validation...
For certificate providers, it depends what you're doing - the cost of the certificates is often tied to the warranty/insurance offered with the certificate in the event that an end user loses money in the event of a certificate compromise.
Given the conditions required to get paid out, it's unlikely that a payout would occur, but I guess with Symantec's shenanigans payouts do occur.
My experience has been if the customer wants the insurance, the account managers/sales people I have worked with have always been happy to use them and charge their own percentage on top of that...
"And presumably if it's redirecting to voicemail, BT are still getting their coin for connecting the call through, so they still get some revenue anyway."
The "huge computing power" is actually just enough storage to ensure BT can record the calls and claim the spammer revenue.
While it doesn't require "huge computing power" at present, once they start selling the automated spamming services to allow both the call and receiver to be connected to the same system and cut out the inefficient third-party systems, the requirement to automatically provide spammers with numbers that then divert to the spam mailboxes will drive requirements through the roof...
Assuming this is accurate (http://aviation.stackexchange.com/questions/25084/what-is-the-force-exerted-by-the-catapult-on-aircraft-carriers):
Steam/power settings are adjusted for each a/c type and T/O weight.
The EMALS stores 484 MJ in four 121 MJ alternators spinning at 6400 rpm. It delivers up to 122 MJ over 91 m. That averages out to 300,000 lbf. EMALS more finely controls launch forces (Max Peak-to-Mean Tow Force Ratio = 1.05), allowing it to launch smaller a/c (eg, smaller UAVs) and delivering a smoother ride that reduces airframe fatigue.
Current steam catapults deliver up to 95 MJ over 94 m. Each shot consumes up to 614 kg of steam piped from the reactor (NB: not the primary coolant loop). That averages out to 230,000 lbf.
Accelerations average around 3 g's, peak around 4 g's.
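Those averages check out if you assume average force is simply energy delivered divided by stroke length (a simplification that ignores the force profile over the stroke):

```python
# Sanity check: average tow force = delivered energy / stroke length.
# Energy and stroke figures are from the quoted stackexchange answer.
LBF_PER_NEWTON = 1 / 4.44822  # standard newton-to-pounds-force conversion

def avg_force_lbf(energy_joules, stroke_metres):
    """Average force in pounds-force, F = E / d."""
    return energy_joules / stroke_metres * LBF_PER_NEWTON

emals = avg_force_lbf(122e6, 91)  # EMALS: up to 122 MJ over 91 m
steam = avg_force_lbf(95e6, 94)   # steam:  up to  95 MJ over 94 m

print(round(emals))  # ~301,000 lbf, matching the quoted ~300,000 lbf
print(round(steam))  # ~227,000 lbf, close to the quoted ~230,000 lbf
```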
My understanding of things military indicates that the solution to these connection issues is to design a very expensive proprietary plug to address the issue.
Maybe a special USB connection with a screw in security latch that takes techs a minute to screw in/unscrew and thus avoid premature disconnection?
How many millions you say? I'll sell them to you for the low low price of US$500k a plane to cover the specialised nature of the design and manufacturing process...
I'm wondering how much of this is due to people investigating/learning the MEAN stack versus serious usage of Mongo? Combined with free AWS/Azure/other cloud hosting, there may be more than a couple that were poorly setup and abandoned...
That's no excuse for poor security defaults for Mongo installs (i.e. I don't recall any package-based installs of MySQL/SQLite/Postgres/MS SQL or their derivatives giving access to remote hosts out of the box in the last 5 years, so Mongo definitely isn't following "industry practices"...)
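For comparison, restricting a mongod instance to localhost takes only a couple of lines in mongod.conf (the YAML-format config supported since MongoDB 2.6) - a sketch, not a complete hardening guide:

```yaml
# mongod.conf - listen on loopback only and require authentication
net:
  bindIp: 127.0.0.1   # don't expose the service to remote hosts
  port: 27017
security:
  authorization: enabled
```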
Wouldn't a better comparison be two car crashes?
Excel:
You'll notice the primary reason for this crash was that the vehicle had no steering wheel and instead used a spike that resulted in the driver being impaled when they were unable to negotiate the first bend in the road. There were no survivors.
Access:
In this case, the driver has been provided not only with a steering wheel, but also additional seats in the vehicle. Unfortunately the steering wheel did not do anything and the vehicle failed to negotiate the first bend. Fortunately the driver was protected by the air bag and there was minimal damage to the car, but all passengers died. The vehicle was repaired and the air bag replaced. This is the 23rd time this has happened and the driver continues to recommend the safety of the vehicle.
I thought (i.e. based on passenger numbers through airports) that travelling between Christmas Day and the New Year was a quiet period compared to a normal working week?
Heathrow stats show December as the second quietest month of the year and traditionally long haul flights in December/January are cheapest between Christmas and New Year.
From various articles over the last 10+ years, I thought it was widely accepted within the IT community that the US electronic voting machines are at best about 10 years out of date regarding security practices and at worst are the Adobe of the election software industry - i.e. it looks OK, but underneath is a first gen product struggling to cope with the demands of the modern world and securing the product was done by capitalising the first letter of the admin password...
Having been involved in local body elections in a past life, I have some trust in the inherent checks and balances in at least some countries' election processes. If you are relying on a start-to-finish electronic process with no ability to verify actual votes, you probably get the result you deserve....
I'm shocked by your careless approach to security - surely you are aware that viruses could be introduced to HP printers by non-HP ink cartridges.
Note 1: normally I wouldn't believe that any company would be stupid enough to put anything other than a few bytes of storage on a disposable printer cartridge, but look at the prices.
Note 2: normally I wouldn't believe any printer company would be stupid enough to allow information read from a printer cartridge to be treated as executable code but look at HP's recent history...
Re:EarthDog and Zed Zee
I believe the even more generic way of putting it is:
We are going to offer all the latest shit fads because no one wants to buy the current shit fads. We have asked our customers what they want but as we don't think we can make our big bonuses doing that we will continue to chase the pot of gold at the end of the rainbow.
To our employees, we value you blah blah blah but for the best interests of the company we have to get rid of you. While you may provide some value to HP/IBM/some other outsourcer, we've talked to the accountants and the fact is providing stuff the customer wants is expensive. It's cheaper for everyone if the work isn't done and the customer just keeps on paying us. I mean why wouldn't they?
We'll speak again in three months.
Merry Christmas!
OK - put a slightly different way.
One CPU can service one Thunderbolt port at full speed at present, and you might be able to scale that to a 4S serving 4 full speed Thunderbolt ports.
How do you serve 8/16/32 ports from your TOR? Or will your NVMe storage only be shared by one or two compute nodes?
I know AFAs can already saturate their IO links (assuming enough money is spent) and while there are some nice applications for NVMe, I don't see it in the same light that Oracle does. Much like 1Gbps Ethernet in 1999, it has removed a system bottleneck, but it needs the CPU to catch up before it proves its worth.
The question for NVMe storage is how do you provide all of this potential IO to the processor?
For general purpose servers (and databases), this storage will always be further away from the CPU than cache/RAM, so it will be slower and present latency challenges. These bottlenecks will continue to be addressed, but they are likely to remain the bottlenecks for NVMe storage until the next leap in communications buses (i.e. total IO to the CPU is memory + CPU interconnects + system buses, peaking around the 200GBps mark with the current generation) and likely CPU evolution allow the NVMe bandwidth to be fully used. By which time it will be the next CPU revolution...
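As a back-of-the-envelope illustration of that ceiling (the per-drive figure is an assumed value for a fast PCIe 3.0 x4 NVMe drive, not a measurement):

```python
# Rough estimate: how many NVMe drives before the CPU's total IO budget
# becomes the bottleneck? Both figures below are illustrative assumptions.
CPU_TOTAL_IO_GBPS = 200   # the ~200 GB/s total-IO ceiling mentioned above
NVME_DRIVE_GBPS = 3.0     # assumed throughput of one fast PCIe 3.0 x4 drive

drives_to_saturate = CPU_TOTAL_IO_GBPS / NVME_DRIVE_GBPS
print(f"~{drives_to_saturate:.0f} drives saturate the CPU's IO budget")
```

A few dozen drives, in other words - and that budget is shared with memory traffic, which is why the CPU has to catch up first.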
Re: I suspect that Vimeo have forgone security in order to make it easier to get more signups. More accounts = more ad revenue after all.
Try:
More signups == more users == more VC money. Ad revenue and other "traditional" revenue streams tend not to feature in these plans...
Reference: just about any public Internet service
You can call your bunch of ARM processors a supercomputer, but whether they can be used for much will come down to how well you can distribute tasks across them, which will come down to I/O and memory bandwidth. I mean, what is the point of having 10,000 or more CPUs available to you if the first 100 or so have finished the tasks you have distributed to them before the last 5,000 or so processors have received any work?
You can break the bunch of processors down into nodes, but you can do that with other processors too.
I suspect the advantage of Fujitsu's ARMs won't be so much in the ARM core as in the offload processors that accompany it, which is why they can't use off-the-shelf products. Potentially what Fujitsu needs is the next die shrink to get the performance they need from each SoC to make this project worthwhile...
Not sure the issue will be with the process side of things. Supposedly, Apple already has 10nm products from TSMC. We might even see them announced later today...
Note that TSMC may or may not be actual 10nm - their 16nm was 20nm with FinFET gains to approximate 16nm (http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm). In which case it may be a 14nm SoC base process with FinFET giving equivalent performance to 10nm in comparison to the 14nm SoC base process.
Which leaves the issue being Fujitsu's chip design... And most likely the performance they can extract from it at present.
Softbank may take the view that ARM is just IP and the people are just a cost centre, but they will discover their mistake in a few years.
ARM's current advantage is that they provide a cost effective and flexible design with very active development and some history.
If these change (particularly cost), there are competitors (MIPS as the most likely option, but Intel could conceivably compete if they can learn to live with wafer-thin margins, and POWER could work). It might take 4-5 years and the UK is unlikely to benefit in the way it does from ARM, but no IT company is guaranteed to succeed in the future.
As this exploit is restricted to Cisco ASAs (possibly PIXes, but as they are end of life I'll conveniently ignore them...), SNMP is enabled by default but no communities/hosts are defined, so monitoring requires further configuration.
As far as best practice, I would assume:
- monitor via a secure path (VPN or secure WAN to the inside interface)
- use standard company-/location-specific SNMP strings that do not include public/private/secret
- use separate communities for RO/RW access and only use RO-communities for monitoring to make capturing RW communities harder
- ensure both SNMP settings/ACLs restrict SNMP access to trusted hosts/networks
None of these practices makes monitoring a firewall difficult for a known authorised party (i.e. if you are doing it internally or via a third party). The biggest challenge for remote monitoring is a firewall on an Internet connection with a dynamic IP, and technologies like Easy VPN address that requirement with minimal effort for competent operators.
Based on these recommendations, any ASAs discovered via the Internet with publicly accessible SNMP access are very poorly configured...
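A minimal ASA configuration along those lines might look like the following sketch - the community string, host address and interface name are all invented for illustration:

```
! Define a read-only v2c community and restrict polling to one inside host
snmp-server community Example-RO-String
snmp-server host inside 192.0.2.10 community Example-RO-String version 2c
```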
I think MS's answer actually translates to "we won't release updated install media to support newer hardware on 'legacy' Windows OSes".
As usual, there will be a little bit of noise followed by someone releasing a tool to create bootable media to get around this.
MS will replace the PR person with a new one with fewer bullet holes in their feet and a bigger gun...
The US broadband monopoly is caused by counties/states handing monopolies to the telcos - no one then has any interest in intervening to fix the problems.
While Google's fibre projects were interesting, the costs were significant - the estimate of US$1b/city and low take-up rates outside of richer neighbourhoods suggested it was never going to make rapid in-roads into the US telecoms market, but I thought they might give it more time.
As for the 90% funding cut, I guess that puts expansion on hold until current subscriber numbers increase to meet current costs.
For Google and AWS, these outages are always interesting - it results in downtime/reduced availability, but in my experience in IT, downtime or unavailability of components aren't uncommon when trying to run 24x7.
The interesting thing is how you keep the larger system in a functioning state, capture enough information to identify the root cause AND get it back to a functional state within a few hours. Sure, it turned out to be human error (software updates combined with large scale moves) but they had considered capacity during this work, and the thing that affected service was the retries rather than expected load.
Yes, you would need to have a proxy configured to trigger the CONNECTs rather than sending the requests directly to a web server.
If the attacker is on the same subnet as you and you are using Windows, they might be able to automatically configure proxy settings via WPAD using NetBIOS name resolution, but there are steps that can be taken to mitigate that (disable proxy auto detection, disable NetBIOS over TCP/IP).