The European educational network community is feeling pleased with itself, switching on a single 100 Gbps link across the Atlantic. While submarine cables these days routinely have aggregate capacity in the Terabit range, this is the first time Europe's educational networks have had a single 100 Gbps link to play with. The …
...Wot, no IPV6 (even if it is private)...tsk...tsk...tsk.
But it's still tested with Ping. Good times :)
Am I the only one that thought:
79ms at the speed of light is: 23,683 km
Distance between EU and US: 7904km
Not bad considering there's probably a lot of internal switching and transponder equipment along that route too.
Remember it's a round-trip time, so the ping has to go there and back again.
More importantly, it will make playing games on US servers much more responsive (the real reason behind the link)
There's nothing special about 79ms - I've experienced similar with my US based servers for years:
The following shows the UK-US Telia link is about 70ms:
catflap.dyslexicfish.net (0.0.0.0) Tue Jun 4 21:12:49 2013
Keys: Help Display mode Restart statistics Order of fields quit
Host Loss% Snt Last Avg Best Wrst StDev
1. default-router.dyslexicfish.net 0.0% 60 1.6 1.4 0.4 21.4 3.6
2. ldc5-cr5.core.webfusion.com 0.0% 60 1.7 4.1 0.9 105.4 13.7
3. 220.127.116.11 0.0% 60 6.2 8.1 5.9 85.0 10.4
4. ldn-b5-link.telia.net 0.0% 59 6.2 8.5 5.7 56.9 8.8
5. ldn-bb2-link.telia.net 1.7% 59 6.7 10.0 5.8 104.9 15.2
6. nyk-bb2-link.telia.net 0.0% 59 76.1 84.3 76.0 222.3 26.7
7. nyk-b3-link.telia.net 0.0% 59 77.7 78.6 76.3 100.7 4.2
8. netaccess-tic-133837-nyk-b3.c.telia. 0.0% 59 76.7 76.9 76.3 77.4 0.0
9. 0.e1-4.tbr1.mmu.nac.net 0.0% 59 77.8 79.7 77.3 107.7 4.7
10. vlan801.esd1.mmu.nac.net 0.0% 59 77.9 79.1 77.3 100.4 3.9
11. 18.104.22.168 0.0% 59 78.1 78.1 77.3 80.9 0.6
12. catwalk.dyslexicfish.net 0.0% 59 77.1 77.8 77.0 80.2 0.5
Quickly scanning the article, the words that stood out were..
Then my mind started to wander.
I gather that this is an improvement. More is better than less, I suppose, but it is not nearly enough: it is orders of magnitude short of where it should be. A small apartment building could saturate that bandwidth through a single switch. The only thing that saves it is that the rest of the network is equally starved for bandwidth.
I am optimistic that the ability of Telcos to meter and limit bandwidth to support an obsolete telephone revenue model will eventually collapse. However, they continue to have an amazing stranglehold on bandwidth, and this has knock-on effects like this one, where long-haul bandwidth is ridiculously constrained. These pipes are narrow due to a misapprehension that the narrow pipes on either side of them are somehow a result of physical limits rather than political ones.
The constraints are usually a combination of physical and economic, not political. Pipes are narrow due to costs and infrastructure limits. You could have your small apartment building. You could offer 10Gbps to each flat. You could offer a 100Gbps backhaul link. 10G switches with 100G uplinks are available now.
You could think about 100G to every apartment, but your switch would cost more than the average apartment, especially if you filled it with transceivers that currently cost $20k+ a go. Your tenants would also need 100G NICs on their kit, and iGadgets don't include those yet. You'd also be unlikely to saturate the bandwidth given the constraints would be at the far end. There aren't many 100G servers installed yet because the costs are very high as 100G currently costs more than 10x10G. Customer perception can be that 100G should be cheaper, and it isn't, yet. Plus you'd need servers that could saturate a 100G link, and avoid SPOFs compared to 10x10G servers and a distribution switch.
And the main technical constraint would still be the number of wavelengths you can get down a fibre, which gives you the cost per wavelength. Technology allows more bits per wavelength, but it's done less to reduce the practical costs of installing and maintaining fibre.
I forget that I am speaking to the entire planet here. I am talking about our situation here in Canada where big Telcos have an effective cartel that has fought tooth and nail to keep ridiculous long distance rates alive. Here is a rate sheet:
Where I live that means you are looking at as much as a penny a second. Given that a phone call can be sustained with high quality at less than 100 Kbps, that works out to ~8,000,000,000 bits/100,000 bps * $0.01/s = $800 per gigabyte of transfer. That's $3.2 million to back up a 4TB disk. Now that is significantly better than the $1,000,000+ (sic) per gigabyte they charge for SMS bandwidth (4 billion dollars per disk), but still, it is ridiculous. I am doing rough math in my head here and might even have slipped a decimal point, but it does not affect the nature of my conclusion: charging hundreds of dollars, let alone a million dollars, a gigabyte for data transfer *demands* artificial scarcity. Enter deliberately crippled infrastructure.
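Redoing that rough maths carefully (a quick sketch, assuming the penny-a-second rate and 100 Kbps call quality stated above, and taking a gigabyte as 8×10⁹ bits):

```python
# Back-of-envelope cost per gigabyte of metered voice-rate bandwidth.
# Assumptions from the comment above: $0.01 per second, 100 kbps stream.
RATE_PER_SECOND = 0.01           # dollars per second of "call time"
STREAM_BPS = 100_000             # 100 kbps is ample for a voice call
GIGABYTE_BITS = 8_000_000_000    # 1 GB = 8e9 bits

seconds_per_gb = GIGABYTE_BITS / STREAM_BPS      # 80,000 seconds per GB
cost_per_gb = seconds_per_gb * RATE_PER_SECOND   # $800 per GB
disk_cost = cost_per_gb * 4_000                  # 4 TB is roughly 4,000 GB

print(f"${cost_per_gb:,.0f} per GB")     # $800 per GB
print(f"${disk_cost:,.0f} per 4TB disk") # $3,200,000 per 4TB disk
```

So roughly $800 a gigabyte, or $3.2 million per 4 TB disk, at voice-call rates; the decimal points shift, but the conclusion about artificial scarcity stands either way.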
I get what you are saying with the examples you cite. However, what I was talking about was a vanilla 1Gbps NIC * 100 apartments. That would saturate a 100Gbps pipe. The fact that it gets harder up the pipeline only proves my point. At least one reason larger capacity NICs and switches are so expensive is because hardly any are made yet. What is the point of having 10Gb valves on a system of pipes only 10 Mbits wide?
We can only expect the tributaries feeding the backbone to increase in size. Our infrastructure cannot even handle their current size. The article was about a 100 Gbps connection to support an entire community, presumably some way into the future. I do not think it is nearly enough. Maybe that is all they can get, but it is not much cause for celebration. In the ES.Net article about this they say "experts show that with the proper tuning and tool, just two hosts on each continent can generate almost 80 Gbps of traffic". There are 1,000,000,000+ computers in use. This connection allows 4 of them to communicate at full speed all the way across the Atlantic. That leaves 999,999,996 to go.
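To put the scale point in numbers (a back-of-envelope sketch, using the billion-computer figure from above, not anything from the article):

```python
# Fair share of a 100 Gbps link if every computer on the planet used it.
LINK_BPS = 100e9    # 100 Gbps transatlantic link
COMPUTERS = 1e9     # conservative count of computers in use

share_bps = LINK_BPS / COMPUTERS
print(f"{share_bps:.0f} bps each")  # 100 bps each
```

100 bits per second per machine: slower than an acoustic-coupler modem from the 1970s.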
When it comes to bandwidth, we always underestimate.
I guess it's low latency....? Comparatively.
and remember, light travels at roughly 2/3 of c in fibre, so you have 0.079s x 200,000,000m/s = 15,800,000m; 15,800km/2 (to get the one-way distance) = 7,900km, which is pretty much spot on, I think.
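That back-of-envelope check, as a few lines (assuming the usual rounded figures: c ≈ 3×10⁸ m/s, and about 2/3 of that in glass):

```python
# One-way distance implied by a 79 ms round-trip time over fibre.
C_VACUUM = 300_000_000         # m/s, rounded
C_FIBRE = C_VACUUM * 2 // 3    # ~200,000,000 m/s in glass (refractive index ~1.5)
RTT_S = 0.079                  # the 79 ms round-trip from the article

round_trip_m = C_FIBRE * RTT_S      # 15,800,000 m there and back
one_way_km = round_trip_m / 2 / 1000
print(f"{one_way_km:,.0f} km one way")  # 7,900 km one way
```

Which lands neatly on the ~7,900 km great-circle distance quoted earlier, so the 79 ms is close to the physical floor for fibre.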
Biting the hand that feeds IT © 1998–2017