Re: "can the fault detection system work fast enough. "
"What are you guys even talking about? Do you think a fault detection system has to run tensorflow computations?"
These rocket launches aren't cheap....
"I'm still trying to figure out "why" they bought Red hat."
What do they say? It somehow helps them with cloud. Doesn't sound like much money there - certainly not enough to justify the significant increase in debt (~US$17B).
What could it be then? Well, Red Hat pushed up support prices and their customers didn't squeal much. A lot of those big enterprise customers moved from expensive hardware/expensive OS support to x86 with much cheaper OS support over the last ten years, so there's plenty of scope for squeezing more.
"I believe in 10 years, architecture (i.e. x86, PowerPC, ARM) will be more of a preferred brand, except in very specialized applications."
I'd be very surprised if PowerPC is a viable architecture in 10 years' time. SPARC has been open sourced for a while and is still in decline, with MIPS' head start in open source CPU designs appearing to give it the advantage in the also-ran stakes.
ARM has got the low power/high volume CPU market covered. With Intel's missteps, it looks like a combination of Intel and AMD will keep the mid-to-high performance/high volume CPU market covered, and Power is the last real survivor in the very high performance bracket, with Itanium having had no uarch changes since 2012 and SPARC likely to be one final iteration away from being purely niche.
Can ARM challenge Intel/AMD in the mainstream server market? Possibly, but I'm unsure how it will match performance without increasing cache sizes/pipeline length and moving more functions onto the chip, which hurts power and cost.
Intel's current failings at 10nm may completely alter that assumption, as they would keep ARM/AMD on equivalent process tech. Intel now has a massive hole in its production line assuming it is sticking with 10nm - a surprise change to 7nm in Fab 28 would address a lot of this, but I'm unsure if that is even possible, and if not, Intel will end up with two failed 10nm fabs that will need to be rebuilt as x nm...
In other news, I will do the washing up and the cheque's in the mail...
I thought Cisco was one of the more successful acquisition and integration companies, at least in IT while Chambers was there.
Looking through their list of acquisitions, a lot of them still exist in some shape or form. At least I recognise the companies, and what current products they contributed to, for about 75% of the acquisitions.
* checks for updates *
* uninstalled *
"So in the UK you have to stand there breathing in the carcinogenic fumes from the evaporating petrol generated by the hot British sunshine ?"
If you avoid filling up on that day each year, you can avoid the issue with the fumes.
Or just enjoy death's sweet embrace.
Salisbury Cathedral is terrible to visit at this time of year - very little mud and slush so most people go to Stonehenge instead.
Much better to go in late winter when the cooler temperatures provide a good excuse for not hanging around for very long and getting back on the train to London before anyone asks questions.
The incident details are here:
It appears that Cisco migrated customers to a new platform, something went horribly wrong and they addressed the issue by rolling customers back.
While they have now identified the issue with the new platform, I'm guessing there will be a publicly released breakdown of the cause, troubleshooting/customer notification process, resolution and any attempts to address these types of issue in the future.
"Oh, and don't use 192.168.0.* or 192.168.1.* for your internal network.". Curious. Why these 2 address ranges but not any other RFC1918 addresses?
Technically there's no reason not to.
However, I would suggest that if you are connecting networks together at any point, maybe 50% of the world's networks use 192.168.0.x and/or 192.168.1.x, causing problems with routing, site-to-site VPNs or client VPN access.
You may be able to use NAT to work around the issues, but avoiding the pain of NAT or renumbering will make your life easier in the long run.
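As an aside (my sketch, not from the original comment), the overlap check itself is trivial with Python's standard ipaddress module - the COMMON_DEFAULTS list below just encodes the two ranges warned about above:

```python
import ipaddress

# Ranges commonly shipped as defaults by consumer routers (an assumption
# based on the "maybe 50% of the world's networks" estimate above).
COMMON_DEFAULTS = [
    ipaddress.ip_network("192.168.0.0/24"),
    ipaddress.ip_network("192.168.1.0/24"),
]

def collides_with_defaults(subnet: str) -> bool:
    """Return True if the proposed subnet overlaps a common default range."""
    net = ipaddress.ip_network(subnet)
    return any(net.overlaps(d) for d in COMMON_DEFAULTS)

print(collides_with_defaults("192.168.1.0/24"))  # True - expect VPN clashes
print(collides_with_defaults("10.23.0.0/16"))    # False - much safer choice
```

Picking an obscure chunk of 10.0.0.0/8 or 172.16.0.0/12 makes a future VPN or merger far less painful.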
Too many repeating characters...
I'm unsure that music copyright violations are the real target. If the music has a long lifetime and some value, it's likely to result in a takedown that does enough to placate both sides.
The real target is premium content with a short shelf life, i.e. sporting events or other events where there is no other footage available for months, such as movies with staggered regional release dates, concerts (although arguably these are more of a marketing tool than any real loss to the artist, based on typical levels of quality), news footage and anything else offering pay-per-view type coverage. YouTube and Facebook use this "free" user content to drive ad revenue.
My cynical prediction? Content will migrate from YouTube ("free" content distribution with no restrictions) to Facebook (content restricted to friends and friends of friends yet widely shared is harder to take down if the owners don't see it), followed by YouTube offering a similar private setting. For other material that still warrants enough profit to run the risk of public distribution, there will be Twitch (users have already come up with some ingenious ways to distribute content) or smaller file sharing platforms to allow it to make it to FB in the first place.
My question was going to be "could this be a case of exploiting an unnoticed typo or partially planned functionality that was never implemented by finding the error and discovering the domain was available" but the async AJAX suggests they were trying to minimise the impact of the calls so they remained hidden.
"the system simply prevents the plane from taking off."
Ahh...this already exists - it's called RyanAir....
"Both PCs were useless until the Novell server was rebooted."
Probably completely useless information now:
1. Drop to debugger:
2. Put the dead process to sleep:
EIP = CSleepUntilInterrupt
3. Exit the debugger:
All going well, you might be able to shutdown cleanly following that....
"especially so for people who care about security as well as performance, because of e.g. leaky speculative execution and cache consistency designs which have been revealed in recent months."
Any fixes for security issues revealed in recent months will likely require more die space OR a smaller process node to achieve the desired performance. The smaller process node will likely be required to match or exceed current performance with the fixes in-place.
i.e. if there is an answer (tagging cache entries with the privilege level of the process that filled the entry strikes me as the most likely way of keeping the benefits of caching while mitigating the worst effects of Spectre), a smaller process node will almost certainly be required to avoid a performance hit.
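To make the tagging idea concrete, here's a toy software sketch (entirely my construction - real hardware would carry the tag in each cache line, and this ignores eviction, indexing and timing detail): entries remember the privilege level that filled them, and a lookup from a different privilege level simply misses, so a user-mode probe gains no timing signal from kernel-filled lines.

```python
# Toy model of a privilege-tagged cache; purely illustrative, not a
# description of any shipping CPU design.
class TaggedCache:
    def __init__(self):
        self._lines = {}  # address -> (value, privilege_level)

    def fill(self, address, value, privilege):
        """Record an entry along with the privilege level that filled it."""
        self._lines[address] = (value, privilege)

    def lookup(self, address, privilege):
        """Hit only when the requester's privilege matches the filler's."""
        entry = self._lines.get(address)
        if entry is None or entry[1] != privilege:
            return None  # miss: no cross-privilege timing side channel
        return entry[0]

cache = TaggedCache()
cache.fill(0x1000, "kernel data", privilege=0)   # filled in kernel mode
print(cache.lookup(0x1000, privilege=3))  # None: user mode cannot hit it
print(cache.lookup(0x1000, privilege=0))  # "kernel data": same-level hit
```

The cost is visible even in the toy: every tag adds state per line, which is where the extra die space (or smaller node) in the argument above comes from.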
....a flying coffee drone could get sucked into the intake of Ginni's personal helicopter causing a catastrophic engine failure?
Oh...sorry, you only wanted to know about bad things
To set your mind at ease for latest MS OS releases:
"It's not lazy, it's about cost and corporate profit. A few cents here, a few cents there, and pretty soon, the shareholder value takes a hit. Public agencies don't answer to corporate bosses but taxpayers and no taxpayer wants taxes raised to "fix" IT stuff since they don't understand it."
In many cases, the issue is poor planning and a lack of time to fully implement plans - we want to create/configure/deploy A with features W, X, Y and Z. By the time A is in production Y and Z are mostly done, X is on the to do list and W is forgotten about.
While this can be seen as a cost issue (if only we'd employed more people or taken more time to plan properly), in many cases this isn't apparent until long after the damage is done. Treating it as a corporate profit issue ignores the other cultural issues that result in these types of security problems.
Failing to change a default password is more likely to have been down to either a lack of product knowledge or a lack of simple security knowledge ("change any default passwords to something more secure"). Given the number of organisations affected, I'm frankly astonished that somebody within the organisations didn't question the lack of security.
For the key, it will be hashed to a 160-bit value via an HMAC-SHA1 function.
Pre-computing all possible 8 character passwords (assuming 96 possible characters from A-Z, a-z, 0-9 and 34 commonly used symbols on a standard keyboard - 96^8 combinations) requires 9.68 days on a single Nvidia GTX1080 @ ~8.6GH/second. (ref: SHA-1 hashes here https://gist.github.com/epixoip/a83d38f412b4737e99bbef804a270c40)
The equivalent 10 digit password (96^10) would take 244 years to pre-compute. With distributed cracking, this is doable.
By the time you get to 12 character passwords, you are likely safe for the next few years; 16 characters would see off all but the most serious attempts at accessing a low value target, and you are more likely to be affected by a weak password hash implementation than by the password strength, assuming you avoid anything covered by a dictionary attack.
Note: all password lengths assume SHA-1 hashing as used in WPA2.
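A quick back-of-envelope check of the figures above (my sketch, reusing the quoted single-GPU SHA-1 rate; a real WPA2 crack would be far slower again, because PBKDF2 iterates the hash thousands of times per guess):

```python
# Brute-force time estimates at the benchmark rate quoted above.
RATE = 8.6e9   # SHA-1 hashes/second on one GTX 1080 (from the gist)
CHARSET = 96   # A-Z, a-z, 0-9 plus ~34 keyboard symbols

def crack_seconds(length: int) -> float:
    """Seconds to enumerate every password of the given length."""
    return CHARSET ** length / RATE

days_8 = crack_seconds(8) / 86400
years_10 = crack_seconds(10) / (86400 * 365)
print(f"8 chars:  {days_8:.2f} days")    # ~9.7 days
print(f"10 chars: {years_10:.0f} years") # ~245 years
```

Each extra character multiplies the work by 96, which is why the jump from 8 to 10 characters turns days into centuries.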
My understanding is that this makes the capture of the interesting Wifi packets easier on newer Wifi kit, primarily due to being able to grab EAPOL packets without needing an existing client connected to the AP.
If you are using any EAP based security with a session lifetime set to a reasonable level (i.e. EAP-TLS or PEAP with <2 hour session lifetime), this introduces no real increase in risk.
If you are using WPA2 with a pre-shared key, strongly consider moving to an EAP-based solution if you have servers running 24x7 and security is important.
If you don't have that option, as long as you have an adequate Wifi password (i.e. 16+ characters, a mixture of numbers and symbols and nothing that appears in any of the common hacking dictionaries) you're still forcing an attacker to go through a brute force crack of a SHA-1 password (i.e. 2^69+ potential combinations).
Feel free to correct anything I've misunderstood
Trump came 25+ years too late to hand over the keys to the swamp.
The alligators had the keys to the swamp when it was just TV and long distance calls. The plan was simple:
- negotiate with local government to provide roll out of cable and phone services in exchange for limiting competition to one or two providers
- in the case of multiple providers, one would take cable (typically Comcast) and the other provider took voice.
Adding internet to the mix just gave the providers the opportunity to take more...
Naturally, every regulator has suggested a number of fixes (including net neutrality that seems to address the issues of Comcast etc by making large interconnects to content providers the only option while doing little for the majority of end users) while branding actual competition in cities afflicted by these duopolies some form of communism...
You misunderstand, this is an evolution of the win at all costs chess strategy.
You can't beat Watson if you're dead
Unless Apple have got some IP from somewhere, I would expect their licensing costs to be a significant component of the total modem cost.
The only real reasons for Apple designing their own would likely be power savings or some additional functionality (better vSIMs?) that reduced the chip size. But even then, the cost of validating your design with third parties would probably exceed any real benefit when compared to contracting an existing supplier to do it for you.
Normally, you expect outsourcing to be a gold-plated turd or maybe a competently delivered shit sandwich that vaguely provides what someone hoped the business or customers would want.
However, in the hands of the NHS and Crapita, you can really get the worst of all worlds.
Managed end of contract service handovers? who would need that? CHECK
Arbitrary cuts to budgets without anyone understanding the effects for years to come? Maybe even requiring a third party to come in and tell you how to do YOUR job? CHECK
Reassuringly expensive for both the initial failed delivery and subsequent attempts to try and deliver the original requirements? CHECK
Making thousands of people's lives miserable? CHECK
Those responsible walk away with their Teflon shoulders intact? CHECK
PR people making bland apologies in-spite of nothing improving? CHECK
(Note: NHS managers considering using this as a requirements document should contact me first...)
If it's not fixed by this fix, it'll be fixed by the next one. Or the one after that. Or the one after that. Or the one after that. Or the one after that. Or the one after that. Or the one after that. Or the one after that.
Look... it's still fewer patches than Adobe Flash, OK? You've removed Adobe Flash, ok.... hmmmm
Head of Intel Security
"Because security is important to somebody... somewhere.... I guess"
Careful....he can hear your thoughts...
I'm just working on transforming this curry into something Crapita would be proud of....
AWS have been building in-house switches for 4+ years - initially to support 25Gbps before it was available from other vendors (https://datacenterfrontier.com/amazon-building-custom-asic-chips-to-accelerate-cloud-networking/) but also to support improvements in power usage and density.
Given that the focus of the devices is data centre networking, which is around 20% of Cisco's switching portfolio, and that the data centre network market space is already crowded, I'm not sure AWS standing up support and sales infrastructure for their switches makes financial sense, particularly given AWS's optimisation towards their own requirements, which may not fit a wider customer base.
"IBM also told TSB to prioritise telephony and branch channels"
My reading of this was that the active-active systems couldn't handle the load of all of the channels (web/mobile/branch/telephone), and that IBM was suggesting directing web/mobile to one active system and telephony/branch to the other to free up sufficient resources for staff to help customers, rather than leaving customers AND staff with non-functional systems.
They also suggested further load shedding at the load balancers (F5s), knowing that it would result in poor customer experiences (and resultant bad press) but not having a better way of getting out of the hole.
My reading of those points suggests that actual load >> designed load >> tested load. While I expect a portion of the load to have been caused by failed interactions and customer services, the time frames for fixes were beyond "quickly put more bigger boxes in".
Surely the scale of the issue (27 data centres) would be beyond what a single ePO instance could manage if it was the cause of the outage? And if the ePO service had been scaled out (i.e. via region or customer), then surely they wouldn't manage to screw them all at once?
I say surely, but maybe the take up of the service has been particularly low...
As a representative of the devil, I would like to point out that down here, we are much more flexible in our arrangements with "customers" and while we may initially dictate terms, the pricing remains constant and our products will continue to be provided for eternity with no forced upgrades...
"Now our worst competitor (M$FT) has now full access to our private repos with our software source code !"
Wasn't this considered when you started to use GitHub, or at some point in the product life cycle when things reached the point where a solution that protected against this type of eventuality could be justified? While it is convenient, it's not the only option, and if you have concerns about third-party competitors, wouldn't that justify keeping your crown jewels in-house? Or at least hosted in a cloud solution (assuming a cloud solution provides the availability/accessibility that an in-house solution may not) where you have the ability to control things like encryption to prevent unauthorised third parties accessing your crown jewels?
Here’s a link showing x86 single core improvements at around 15%/year @ a constant 4GHz clock speed:
The bit that’s missing is clock speed: if you add in CPU clock speed increases from around 2004 (potentially earlier, I just haven’t seen the data), x86 was at around ~50% performance increase per year, hence the feeling of single-thread performance stagnation.
For the rest, you answer your own question - who needs flops from single or multi-core CPUs when you can just provide massively parallel GPUs if you need them.
It's not just mathematics they should ban - it's all the things criminals rely on:
- the Earth
- the Sun
There's probably more, but once we get rid of the last two, I confidently predict that the number of criminals using encryption will drop significantly and we can even leave you with your precious mathematics!
Be careful comparing the processes at a "7nm is better than 10nm" level - TSMC/Samsung/GloFo 7nm processes are likely to have similar or worse characteristics than Intel's 10nm in terms of gate and interconnect pitch, which will likely lead to similar performance.
The big difference is that historically Intel had a 3+ year lead over the rest of the industry in delivering a stable "next gen" process, which allowed them to roll out more conservative CPU designs than their rivals. That's where they've screwed the pooch more than once on 10nm already, as that lead is now less than a year, allowing no real opportunities for further mistakes.
Given that Intel still can't get volume on 10nm after ~1.75 years, they may already be looking at 7nm chips, although that will hurt them on volume until they retool the (likely economically unviable) 10nm fabs.
So if it didn't go via AS2914 (NTT), Level 3 or Cogent, does that just leave Hurricane Electric and their customers as potential victims? eNet don't appear to be big by Internet standards (https://www.peeringdb.com/asn/10297) and are only listed as having 4 peers (https://bgpview.io/asn/10297) with 20 prefixes.
So I guess the questions are:
- why didn't HE have route filters for such a small customer?
- why doesn't MEW have HSTS given they've been the target of domain hijacking attacks in the past?
- how many potential victims are there, given the subset of ISPs using HE peering (although it looks like Google DNS was affected), users of MyEtherWallet and users who would accept an untrusted certificate? Losses are reportedly US$150k, so not many?
"Whats the long game that Google are playing in the legal field... "
I think they're prodding the edges of the law to see what can and can't be done and I'm not sure it's bad in this case.
While I can understand journalists may be upset at a clearly non-journalistic organisation attempting to use their rights to avoid taking down articles, I'm not sure it's as black and white as many are making out. I'm certainly not disputing that Google's relationship with news media is "parasitic".
Do we see Google as a library and largely content agnostic (they collect and distribute using a more user friendly method that doesn't require users to go to the source), or as something that we censor via laws? I suspect Google sees itself closer to the former (at least in search) and the reality is the latter, only we prefer not to use the word censor due to the negative connotations. If Google agrees with the laws, they are left unchallenged (i.e. content historically illegal in western countries' legal systems); if there is more of a grey area, they are prepared to fight it.
The legal arguments used might upset some, but lawyers have tended to favour a winnable argument over public perception.
It's typically the opposite.
Typically companies that hit or exceed their targets will experience a share price drop following a dividend announcement as the market has priced the expected dividend into the share price. i.e. yesterday the share price was the "share price"+"expected dividend", today it is just the "share price".
It's usually a short term thing as people move their money elsewhere to where they think they can make more money.
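The cum/ex-dividend arithmetic above is simple enough to show with made-up numbers (the figures below are mine, purely illustrative):

```python
# On the ex-dividend date the expected payout simply drops out of the price.
expected_dividend = 0.50      # per share, already priced in by the market
price_cum_dividend = 25.50    # "share price" + "expected dividend"

price_ex_dividend = price_cum_dividend - expected_dividend
print(price_ex_dividend)  # 25.0 - a "drop" with no change in company value
```

So a fall of exactly the dividend amount on announcement/ex-date is the market working as expected, not a verdict on the results.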
"There was no way we could have crammed all that traffic through a tiny 10Mbps Ethernet port." So he didn’t."
If he'd just used a regular sized Ethernet port (at least assuming it was UTP...) rather than some non-standard tiny one, he could be running 10Gbps by now...
I'll get my coat.
"I don't know. It paid for plenty overtime every 30 days with OOH reboots..... via a Shiva RAS system at 33.3k.....or a site visit to reboot not only the NT4 boxes, but the RAS devices."
Rebooting the Shiva RAS system at the places I was involved in was never actually required based on the health stats, connection speeds and session drops.
However, there was certainly a LOT of optimism that a reboot might make the connection slightly more stable or a little faster.
I enjoyed playing with Lego growing up and looked forward to my kids having a similar experience.
What I've found is that a lot of the modern Lego sets are quite fragile with lots of custom blocks per set or range compared to the brick based sets from my youth.
My kids still play with the large brick sets to create Minecraft-like buildings or vehicles, but the modern sets tend to be played with once, involve a lot of adult interaction to put the fiddly bits back together and remain untouched.
I still waste too much money on Technic stuff that's like a jigsaw puzzle...
"Microsoft's eSIM-Based Always Connected PCs won't create any more segment growth than the last 20-odd Intel innovations that fundamentally misjudge what has happened with commoditisation in the x86-based hardware marketplace and push premium features that never become mainstream"
"I think Simon sees himself more as Lucifer Morningstar (like the TV series), bringing people their just deserts, in which case heaven is most certainly where the stairway does not lead."
Yes, there are two paths you can go by
But in the long run
There's still time to change the road you're on....
So it was Peter!!!
It all fits together now. Peter did it.
And they said there was no evidence.
Connection suggests one link.
Nexus suggests this is just one connection in a larger conspiracy.
Mine's the one with the roll of aluminium foil in the pocket....
The challenge with rural vs urban is that your fibre costs for trenching/thrusting over long distances are significant and your existing ducts are unlikely to be usable (i.e. they are small, travel long distances and have likely moved since installation due to weather/other ground work).
If you're delivering 10 miles of fibre at 1 house per mile, the additional £70/month of revenue may have a long payback...
Compare that to London, where you have around 50 houses per mile and maybe even existing ducting for the majority of the route (i.e. just needing a route from the footpath into the property), and your cost model is significantly different.
Over time, rural areas will get picked up as part of either major roadwork or other incentives but all the initiatives around fibre so far have assumed customer revenue will cover the majority of the costs.
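A rough payback model makes the rural/urban gap stark. Only the densities and the £70/month figure come from the comment above; the per-mile build costs below are invented for illustration:

```python
# Simple revenue-vs-build-cost payback model; ignores opex, take-up rate
# and financing, so treat the outputs as order-of-magnitude only.
def payback_years(route_miles, homes_per_mile, cost_per_mile,
                  revenue_per_home_month=70):
    build_cost = route_miles * cost_per_mile
    annual_revenue = route_miles * homes_per_mile * revenue_per_home_month * 12
    return build_cost / annual_revenue

# Assumed costs: £50k/mile for rural trenching, £20k/mile reusing urban ducts.
rural = payback_years(10, 1, 50_000)   # 10 miles, 1 home per mile
urban = payback_years(1, 50, 20_000)   # 50 homes per mile, existing ducts
print(f"rural: {rural:.1f} years, urban: {urban:.2f} years")
```

Even with generous cost assumptions, the rural build takes decades to pay back from subscriptions alone, which is why it tends to wait for roadworks or subsidy.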
It affects ASA or FP appliances if configured for AnyConnect.
"Intel have taken a lesson from Barclays Bank*, and established a "Scandal Deployment Division" whose purpose is to cause the stock price to crash every so often so the directors and other insiders can buy back stock sold just prior to the crash."
Or, if you remove the tinfoil hat: Intel have employed heavy marketing, and have been fortunate that their main competitor (in the x86 marketplace) has struggled over recent years, which has papered over the cracks in Intel's once heralded manufacturing processes.
Historically Intel had a 2-3 year lead over competitors, and they have now reached the point where they are almost level pegging (depending on when Intel finally starts volume shipments of 10nm parts) with Samsung, TSMC and Global Foundries.
On the CPU design side, Intel has historically been pretty conservative with features, knowing their manufacturing process allowed them to outperform and out produce their competitors (total silicon shipped rather than per unit to account for differences in part size) - depending on how Spectre is managed across the industry (i.e. if AMD beats them to market with hardware fixes that remove/reduce the performance penalty), Intel may find themselves in a pretty uncomfortable situation of declining revenues, increased R&D and actual competition.
"Can't wait for the MSMedia to blame this on Man Made Global Warming / Climate Change."
You idiot! Can't you see that global warming is the only thing that can save us from the sun's weakening power?
Quick - to the V8's!!!!
(cue Tina Turner....)