It has a tiny cache, which makes it a nightmare to program any complex routines.
It'll be Cavium's ThunderX CPUs.
572 posts • joined 20 Sep 2007
With the demo I saw a few years ago they didn't need to know the position of the transmitter, but it helped. If they didn't know the location they just needed a few accurate references (e.g. starting GPS coordinates), which they could then use to locate the alternative transmitters. When the GPS signal was lost they had enough information about the 'signals of opportunity' to carry on with some accuracy.
WiFi might give you a location but it is never very accurate. SOO can be used for accurate positioning in areas that GPS can't reach. I saw a demo a few years ago by a UK military research company where the researcher was able to show continuous location tracking inside and outside a building. It spotted a single GPS satellite as he passed a window and otherwise used local terrestrial masts for its references.
I was amazed at the time, and glad to see someone a few years later doing it again (*cough* it's not new *cough*), but I'd really like to see someone commercialise the concept.
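The masts-as-references idea boils down to a multilateration solve. Here is a toy sketch of the geometry (all mast positions and ranges below are invented for illustration; a real SOO system would also have to survey the transmitters and deal with their clock offsets first):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position fix from known transmitter positions
    ('anchors') and measured ranges ('dists'), linearised by
    subtracting the first range equation from the others."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    p1, d1 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - p1)
    b = (d1**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p1**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three terrestrial masts at assumed known coordinates (metres)
masts = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
true_pos = np.array([400.0, 300.0])
ranges = [float(np.linalg.norm(true_pos - m)) for m in masts]
print(trilaterate(masts, ranges))  # ≈ [400. 300.]
```

With three or more references in view the fix is fully determined, which is why a single GPS satellite glimpsed through a window is still useful: it adds one more range equation to the solve.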
I was thinking that myself: the cost of a 1Gbps link is probably well over $2m per year. Sure, it will take a long time to amortise $2m a year against $150m, but when you consider the military value and the potential improvement in healthcare, or just in GDP, it will pay for itself.
I think the article is rather unfair to the locals, there are probably parts of rural Britain which weren't much better before the 70s, Margate for example.
My first thought when I saw this? How much did the launch event cost, and couldn't that money have been spent saving someone's life? I don't remember the Gates Foundation holding big press launches for its initiatives other than when it wants to raise public awareness of an issue, not to announce it's spending money.
Vanity much? If it was true altruism they wouldn't need a PR launch.
The lasers are directional and one-way, so each drone will have both a send laser and a receive sensor. The question is: can a drone have more than one receive and send function to provide mesh functionality?
People think that the public IXPs are important, but the majority of most large ISPs' traffic doesn't pass through IXPs. Most traffic is routed through private peering to the likes of Google, YouTube, Netflix and the CDN providers. Sure, BT has 350Gbps of LINX peering, but I can bet that their total traffic volume is much higher than that.
This is also why I find the Netflix rant strange: they only have 80Gbps of LINX peering because most of their connectivity is private peering, and the rest of Netflix content is delivered by on-net appliances within the ISPs themselves (providing hundreds of Gbps of capacity).
Agreed, femtocells and other cellular nodes are already competitive commodity products in the business. The problem is that operators struggle to manage deployments of the available tools. What is Facebook offering that isn't already in the market?
As someone spotted elsewhere, the copyright notice in the EXIF says 2015; knowing the timescales of phone development, I am not sure that 2015 is even the right answer. But perhaps the photographer forgot to change his template?
They moved their processing, not their Open Connect delivery network.
It seems a strange argument from Netflix, the majority of their traffic doesn't even pass through the IXPs. Any ISP who has a decent number of Netflix users will be using a Netflix CDN appliance inside their network. You might argue that this is necessary because of the limitations of the IXP system, but in reality for any larger ISP most of their traffic is through private peering to Google, Amazon, Microsoft, Netflix and the various CDN providers. Public internet exchanges are just there for the sizeable minority of more unusual routes.
Reminds me of the Epiphany-IV 64-core microprocessor by Adapteva; sure, it has fewer cores, but the architecture seems similar. The problem with the Adapteva, and I imagine with this 1,000-core design too, is that the on-core memory is tiny, so it is difficult to fit useful workloads into the cores. You end up spending loads of time on an external CPU with a bigger core scheduling tasks to the tiny cores. The architecture is really hard to program for.
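To see why the host ends up doing so much work, here is a minimal sketch of the host-side bookkeeping, assuming (hypothetically) a 32KB per-core local store as on the Epiphany-IV, with half of it free for data:

```python
# Host-side tiling sketch for a many-core chip with tiny local
# stores. The sizes are assumptions, not the article's 1000-core
# part: 32KB local store per core, half reserved for code/stack.
LOCAL_MEM_BYTES = 32 * 1024
ELEM_BYTES = 4                   # 32-bit elements
N_CORES = 64
TILE_ELEMS = (LOCAL_MEM_BYTES // 2) // ELEM_BYTES

def schedule(data):
    """Chop 'data' into tiles that fit a local store and assign
    them round-robin to cores; yields (core_id, tile) pairs."""
    for tile_no, i in enumerate(range(0, len(data), TILE_ELEMS)):
        yield tile_no % N_CORES, data[i:i + TILE_ELEMS]

work = list(range(20_000))
tiles = list(schedule(work))
# Every tile fits in a local store; note the host does all this
# chopping and dispatch serially before any core computes a thing.
```

The smaller the local store, the more tiles you need for the same job, and the more of the wall-clock time goes on the host shuffling data rather than the tiny cores doing useful work.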
I think one of the questions with Google is: what do they do at the edges? How do they decide when to stop laying fibre? Most importantly: who gets left behind?
Indeed, I've twice been down the road of hearing from a tame person on the other side that it's a stitch-up and we have no chance. Once my MD put in a bid below cost just to ensure that it caused problems for someone down the line.
It's not just about dark fibre; there is lots of under-utilised fibre which could be exploited. I worked for one company where we found we had just a 2Mbps E1 going to one site, and we wondered what else we could use that fibre for.
Let's look at what other wavelengths could be better utilised around the country and put them on the market.
We also need to explore more innovation in nuclear instead of relying on designs from the era of the atomic bomb. I like the Thorium designs and I don't think they've been given nearly enough investment. The Indians and Chinese are starting to invest in Thorium reactors and I think it would be really good if we didn't get left behind.
Busted... so make a quick half-apology.
At $450 this chip costs as much as a decent GPU and I haven't seen a GPU that can do 15 HD AVC transcodes yet. Certainly there is nothing from the GPU market that can do this at such a low TDP.
This chip really suits IPTV providers, streaming companies and broadcasters. Previously you bought an appliance that cost tens of thousands of pounds to do transcodes using ASIC chips; they provided excellent quality but the cost hurt. Many encoder and transcoder companies have been moving to software and cloud solutions in recent years, which has hit the video ASIC market. With chips like this providing over a dozen transcodes in a 45W TDP for $450, I can see it being very attractive in my line of work.
The VDI business is interesting, but look beyond that to the encoding space and there will be people jumping on this chipset when it hits the street.
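The back-of-envelope per-channel figures from the numbers quoted above:

```python
# Per-channel arithmetic from the quoted figures:
# $450 chip, 45W TDP, 15 simultaneous HD AVC transcodes.
price_usd, tdp_w, channels = 450, 45, 15
usd_per_channel = price_usd / channels   # 30.0
watts_per_channel = tdp_w / channels     # 3.0
print(f"${usd_per_channel:.0f} and {watts_per_channel:.0f}W per HD transcode")
```

$30 and 3W per channel is a long way below what either an ASIC appliance or a software-on-x86 farm costs per stream.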
It was a misalignment, so some of the mirrors that reflect light were off-point and no one noticed until it was too late.
Sorry Google, Facebook has it right with Aquila. Your balloons are just hot air.
I always wonder why Android needs to go into devices like this. What value does it add compared to Debian or something similar?
The mantra of "Android all the things" seems wrong to me.
In the article it wasn't particularly clear why you had suspended operations; perhaps you could look at the text to make that clearer?
There are a variety of different DSL bandwidths/profiles depending on the customer need. There is however a 2Mbps/2Mbps profile which is called SDSL (and there is also SHDSL).
Also, ADSL2+ Annex M allows for uplinks up to 3.3Mbit/sec, <sarcasm>but what someone would want with that much bandwidth is beyond me.</sarcasm>
2.6% of defectors were apparently in military service at the time of their departure from the North.
Further proof that measuring success by "average" numbers isn't relevant, it is all about the distribution.
There is plenty of spectrum, it just needs operators to use it better. They need to invest more in femtocells and other small cell architectures to off-load capacity at a local level.
In my view 5G is technology m*sturbation, operators need to work harder with what they have.
Until BT has competition they don't really try.
OpenReach doesn't have a KPI which includes reliability at any sort of fine level. As long as the bit of copper works when someone from OpenReach tests it then Schrödinger's cable is alive. We need OpenReach to be accountable for uptime as well.
I believe I've seen BT deliberately not upgrading cabinets to FTTC because they serve primarily business customers and BT knows they'll lose BTNet fibre business. That is one of the bigger crimes here.
A Stirling engine?
I believe it relates to how much data is "lost" through bit-rot or other data loss factors, rather than just being unavailable.
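The distinction matters for the arithmetic. As a toy illustration of how durability (data not lost) is usually quoted, with an invented 1% annual per-replica loss probability:

```python
# Toy durability arithmetic: with k independent replicas, each
# with an (invented) annual loss probability p, data is only
# lost when every replica is lost, i.e. roughly p**k per year.
p = 0.01
durability = {k: 1 - p**k for k in (1, 2, 3)}
for k, d in sorted(durability.items()):
    print(f"{k} replica(s): {d:.6f} annual durability")
```

Availability is a separate number: a replica can be offline for hours (unavailable) without the data being lost, which is why the two figures on a cloud provider's datasheet can differ by several nines.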
I read a posting by some company who had hosted hardware in NY; I can't remember who it was, but I think I found it through El Reg.
Interestingly, their argument was that if you can justify the headcount to manage hardware and you have predictable capacity, then outsourcing your servers doesn't necessarily make sense. Every argument I have seen for going cloud boils down to one of two things: scaling/flexibility and/or administration overhead. Whenever I hear the administration overhead argument it is put forward by software people because, understandably, software people don't want to care about hardware.
But simply put, the costs of hardware and hosting can be cheaper than the cloud if you have the right economies of scale. People should be doing more due diligence than just saying "put it all in the cloud!", because they might not be doing the best for their business.
'Disaster recovery' is an overused phrase. In my view DR is like insurance: it shouldn't be needed, but you have it just in case. If your day-to-day processes need DR then you are doing it wrong; you should have good processes with appropriate monitoring and roll-back, and in this case they didn't seem to have that.
I remember when things went off-air at a previous job it was standard procedure to yell "Nigel!!!!" in to the racks. It was probably him who had broken something.
Multiple 4k programmes? How many 65in TVs do you have anyway?
The hydraulic pipes are already full of fibre, but the core route doesn't cover as much of London as people expect because it really focused on key buildings.
I loved the irony of talking about the 'okay' camera just below a terrible shot of the phone.
Yes, I love the price vs spec (dual SIM in particular), but the fact that it looks like a Samsung clone makes me hesitant.
Take a look at the Lenovo K3 Note which is about £150 from DX.com.
Reminds me of that misunderstanding in a bar in Köln...
Perhaps it's a dirty bomb.... a dirty, dirty, filthy, dirty, naughty, oh... my....
And the Mrs Grace L. Ferguson Airline and Storm Door Company
...waits for the first person to get that reference
I am confused by the qualification that it is silent: how are people so sure that it isn't transmitting anything? Is it just because it isn't transmitting on Ku-band, C-band, L-band or UHF? Have people checked the entire EM spectrum and found nothing? I would expect the NSA/GCHQ to have done that, but it isn't easy if they aren't using standard mechanisms. It could even be using some exotic UWB communications that are very hard to spot and are easily mistaken for noise.
Redundant: adjective - not or no longer needed or useful; superfluous.
The power supplies aren't redundant, they are resilient: if they were redundant you wouldn't need them, and I am fairly certain it doesn't have its own generator inside.
Can people please stop referring to things as being redundant when they mean resilient?
My Samsung NC10 didn't perform particularly well with Ubuntu but I think Unity was to blame, I really need to find that machine and rebuild it with Mint.
Actually you can do tropo-scatter but it is really hard, really noisy and doesn't give nearly the capacity of LoS microwave.
Has anyone compared the 1PPS of GPS to the frequency reference of MSF? As far as I am aware the MSF signal is very, very accurate (around 1 part in 10^12), so it should make a good reference for the UK.
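A sketch of how such a comparison might be done, assuming you can gate a counter on a local oscillator against each reference in turn (the gate time and counts below are invented for illustration):

```python
# Hypothetical fractional-frequency comparison: count a local
# 10 MHz oscillator against each reference over a long gate time.
NOMINAL_HZ = 10_000_000
TAU_S = 100_000                      # gate time in seconds (invented)

def frac_offset(count, nominal=NOMINAL_HZ, tau=TAU_S):
    """Fractional frequency offset implied by a gated count."""
    expected = nominal * tau
    return (count - expected) / expected

# Invented counts: the oscillator looks slightly fast against both
count_vs_gps = 1_000_000_000_001     # gated by GPS 1PPS
count_vs_msf = 1_000_000_000_003     # gated against MSF
delta = frac_offset(count_vs_msf) - frac_offset(count_vs_gps)
print(f"GPS vs MSF relative offset: {delta:.1e}")  # 2.0e-12
```

The local oscillator's own error cancels in the difference; the catch is that resolving parts in 10^12 needs either a very long gate time or much finer phase measurement than simple cycle counting.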
Recently I received a handful of SD cards, delivered in a tray that would hold 48 SD cards, in an antistatic bag, wrapped in bubble wrap, in a box, inside another box packed with foam.
Those blank SD cards were, at no point, at risk.
This was not HP and I actually received two sets in two separate and identically packed shipments.