We'll be slurping your data faster!!!
Can't really wait for it!!!
Microsoft has announced it will add five new features – some experimental – to the TCP stack it will ship in Windows Server 2016 and the Anniversary Update to Windows 10. Redmond says the following five features will make it into its new TCP stack: TCP Fast Open (TFO) for zero RTT TCP connection setup. IETF RFC 7413 Initial …
The last thing I want in MS closed source proprietary networking code. I don't care how efficient it is.
"To continue your download, please press the onscreen 'purchase' button to pay for another 20 minutes of access."
"We're sorry, your Linux system does not seem to be using Microsoft approved TCP/IP packets and your connection has been dropped."
"Your download will take 9h 47m as other traffic has been prioritized over you. For faster downloads, please click on our Premium Services website for a price schedule."
If you think I'm wrong, you've never dealt with them before.
Being less aggressive during periods of high latency was specifically mentioned
(from article)
"LEDBAT will stop Windows competing aggressively for bandwidth during times of high latency"
yes. it's ABOUT TIME on THAT one... not like I hadn't specifically complained directly to them about that OVER A YEAR AGO (and others as well).
it's probably the WORST thing that Micro-shaft does with their forced updates: DOMINATE your intarweb connection with WHATEVER SCHTUFF *THEY* want to use it for, at a time of *THEIR* choosing, even if you're watching streamed media content. OOPS - interruption - because, Micro-shaft.
The Windows TCP stack used to be one of the best and most stable, but like everything else, companies leapfrog each other, so once the Windows TCP stack got overtaken, Microsoft set up a team to do everything possible to once again make it the best.
So for the next few years it may be the best, then it will be overtaken again.
No company wants to be making small changes to its TCP stack all the time due to the cost of testing etc. Therefore a TCP stack will be frozen for many years, until there are enough significant changes to justify the cost of testing.
"Yeah, because we all have sooo much experience of 40GB connections"
Some of us do. And believe me, there's bugger all measurable difference in the IP stack between Windows and Linux performance on that (or any) kit; certainly not enough to be worth singing praises in forums. In the real world there are tons of other variables that have a bigger impact (e.g. hardware, infrastructure, etc) that make such statements completely worthless without a lot more qualification.
And believe me, all those hardware offload features in NICs are not all they are cracked up to be either - I've often had to turn them off due to buggy drivers or firmware from NIC vendors (*cough* Broadcom *cough*).
At the application layer things can be a bit different (NFS, CIFS, HTTP, etc, etc) and can swing either way depending on the service and OS. But there are so many variables there that we are heading way outside the scope of this thread. :)
Now all that said, it's good to see Microsoft making improvements. Everybody benefits from this sort of thing and shouting that $PREFERRED_OS is the best is just silly here.
Everybody benefits from this sort of thing and shouting that $PREFERRED_OS is the best is just silly here.
Indeed. Especially as no-one mentioned the true winner: FreeBSD.
Even Facebook admits that Linux is playing catchup: http://www.theregister.co.uk/2014/08/07/facebook_wants_linux_networking_as_good_as_freebsd/
"there's bugger all measurable difference in the IP stack between Windows and Linux performance on that (or any) kit
That's not my experience. Not a vast difference but at extreme bandwidth use, Windows Server is generally measurably faster and has lower CPU use.
"Yeah, because we all have sooo much experience of 40GB connections"
Just because you can't afford modern kit doesn't mean everybody can't. A new blade chassis setup would likely be running 40Gbit uplinks these days, and 40Gbit cards in the blades themselves are not that expensive where required.
And there are cheaper ways to join the club if you can't afford new:
https://www.etb-tech.com/mellanox-m3601q-40gb-infiniband-switch.html
"It's generally the fastest"
Yes. I especially liked how they made it so the OS established the TCP connections and began the application layer things before the application knew anything about the connection. It made IIS seem faster. But when too many connections came in and all the little white lies started building up too high, the OS choked.
Apparently they fixed that by increasing the amount of memory it could use to store all the layer violations.
Actually, there has been a TCP stack in Windows NT since 3.51 (maybe even 3.5); it was an OEM-ed stack back then, but it was there. I don't recall whether or not you had to pay extra for it, but...
(That stack, and all of the other OEM TCP stacks on Windows (NT and otherwise), ultimately led to the development of the Windows Sockets interface spec, which we should all be happy about - it may not be quite the same as *nix sockets, but it unified protocol access on Windows, which was a Very Good Thing)
*(and I couldn't agree more about NetBIOS and NetBeui (NetBIOS is the interface, NetBeui is the protocol used in early Windows NT versions). NetBeui is a ghastly protocol (holes in the state machine!) from a bygone era that we're well shed of, and NetBIOS being retired is a good thing as well.)
>Actually, there has been a TCP stack in Windows NT since 3.51 (maybe even 3.5);
Actually, there has been a MS TCP stack for Windows since WFW 3.11 (maybe even Win 3.1). It was not, as I recall, a free product until it was available free as the NT client.
Late reply as always.... Life is busy.
The TCP stack in Windows has an interesting history. They built it into Win 3.11; Windows 3.1 did not have support - eek, was that Win 3 or 3.1? Damn, I forget. You got full support with Trumpet Winsock. Quite a good TCP layer, especially as it didn't hang up your modem on reboot (Windows loved the blue screen back then).
But MS felt threatened.... and built something not as good, bundled it in.
So afterwards, of course, there were artificial limitations on the number of connections you could maintain, and Winsock wasn't quite the same as the Unix socket layer; it had its own quirks, if I remember correctly.
Couldn't let their mainstream OS become a server OS, but that could be made into a sharing limitation.
And the fun moved elsewhere: DHCP and DNS quirks...
Finally they let Cisco redo the IP stack in... was it 7?
MS have form, opportunity, but this time around I don't think they have the motivation, at least I sure as hell hope they don't.
Microsoft has announced it will add five new features – some experimental - to the TCP stack it will ship in Windows Server 2016 and the Anniversary Update to Windows 10.
Am I misinterpreting the anniversary update? Is that not a live branch thing? If it's just a way of saying "It'll be rolled out over time to machines after the anniversary update has been deployed" then you're probably correct. I assumed it to mean it was part of that update landing in August but not yet being tested beyond Redmond's walls.
What a great idea! I bet Microsoft are glad you suggested that, as they'd never have thought of it... Oh, hang on...
Some things, for example TCP Fast Open, have been in the fast ring for quite some time. I know because there was an issue with the implementation a few months ago that meant some sites using TCP fast open would not work properly - notably some Google properties. This was withdrawn, fixed and it's back in the fast ring as an optional setting within edge, turned on via a setting in about:flags. It's now working with every site I tried it on.
https://blogs.windows.com/msedgedev/2016/06/15/building-a-faster-and-more-secure-web-with-tcp-fast-open-tls-false-start-and-tls-1-3/
I remember when Microsoft took it upon themselves to tweak the Windows file copy engine as part of making Vista great. Whereas Linux had managed more than acceptable performance for years by reading the source file in chunks and writing those chunks to the destination file, the Windows Vista copy geniuses knew better and implemented some absurdly complex scheme involving changes to the network driver layer and more, as described here: https://blogs.technet.microsoft.com/markrussinovich/2008/02/04/inside-vista-sp1-file-copy-improvements/
Unfortunately the great unwashed masses were not grateful for all of this effort, pointing out that in the real world of copying actual files, not only was Vista much slower than XP was (by orders of magnitude!), but that the Vista copy engine was actually slower than every other approach too - even being roundly beaten by third party copying applications running on Vista.
So forgive my cynicism, because somehow I don't think Microsoft will succeed here at all. Quite the opposite.
I'll stick with anything but Windows.
"Unfortunately the great unwashed masses were not grateful for all of this effort, pointing out that in the real world of copying actual files, not only was Vista much slower than XP was (by orders of magnitude!), "
That was fixed long ago in SP1:
https://blogs.technet.microsoft.com/markrussinovich/2008/02/04/inside-vista-sp1-file-copy-improvements/
Windows file copy is still completely f***ed up in one serious respect: drag a big file from one drive to another, then while it's copying, drag another file, then another. Windows attempts to copy all the files at the same time, interleaving the operations and causing disk contention and thrashing, when simply copying them in sequence would be (in my tests) up to 5 times quicker.
"Well, you told it to copy them all at the same time...."
No, you told it to copy them all; nothing suggests they have to be done concurrently. If you tell your favorite music-playing device to play ten songs, do you expect them all to be played at the same time? Just because Windows has always done it this way and you've come to accept that as normal does not mean that that's what you told it to do.
If there's a better way of doing this than all at the same time, I'd prefer that the OS be clever enough to do it that way. That's one of the things I noticed about Linux Mint that pleasantly surprised me coming from Windows-- Nemo (the file manager) queues the second copy operation and appends the dialog to the first one, clearly indicating that it is queued for copy, where you can then click the "go" button to copy concurrently, or click the stop button on either operation to cancel.
It's so simple-- anyone capable of understanding a file copy progress meter in the first place can grasp the concept of the second copy operation waiting for the first one to be completed (especially when the dialog tells you exactly that), so the idea that everything has to be dumbed down for the beginner doesn't really work.
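The queue-then-run behaviour described above is simple to model. Here's a minimal sketch (class and method names are made up for illustration, not from any real file manager): copy jobs go into a FIFO queue and a single worker drains it, so copies run strictly one after another instead of interleaving and thrashing the disk.

```python
import os
import queue
import shutil
import tempfile
import threading
from pathlib import Path

class CopyQueue:
    """Queues file copies and performs them one at a time, in order."""

    def __init__(self):
        self.jobs = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            src, dst = self.jobs.get()
            shutil.copyfile(src, dst)   # only one copy in flight at a time
            self.jobs.task_done()

    def copy(self, src, dst):
        self.jobs.put((src, dst))       # returns immediately; copy is queued

    def wait(self):
        self.jobs.join()                # block until the queue is drained

# Queue two copies; the second does not start until the first finishes.
tmp = tempfile.mkdtemp()
for name in ("a.bin", "b.bin"):
    Path(tmp, name).write_bytes(os.urandom(1024))

cq = CopyQueue()
cq.copy(Path(tmp, "a.bin"), Path(tmp, "a_copy.bin"))
cq.copy(Path(tmp, "b.bin"), Path(tmp, "b_copy.bin"))
cq.wait()
```

A "copy now anyway" button, as Nemo offers, would just be a second path that calls `shutil.copyfile` directly instead of queueing.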
I second that irritation...
There's nothing quite like having to hand hold a windows machine for a relative when you're copying photos from various cameras whilst trying to organise some other folders while you're at it.
Now watch as 3 different copy windows go from a few seconds to minutes, then hours because....reasons.
TCP starts sending packets slowly until it finds out how fast they can be sent without a router dropping them due to a slow network, or lack of buffer space etc. Therefore a short lived connection may never get up to full speed.
This has always been an issue in IP networks, as there is no central system to set up connections and decide what speed each computer can send data at... This is also one of the reasons that IP networks scaled so well...
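A toy model makes the short-connection problem concrete. In this sketch (numbers and threshold are illustrative, not any real kernel's defaults), the congestion window doubles each round trip during slow start, then grows by one segment per RTT in congestion avoidance; a small transfer finishes while the window is still tiny:

```python
# Toy model of TCP slow start: cwnd doubles each round trip until the
# slow-start threshold, then grows linearly (congestion avoidance).
# Short transfers finish while cwnd is still small, so they never
# reach the path's capacity.

def rtts_to_send(total_segments, initial_cwnd=10, ssthresh=64):
    """Return the number of round trips needed to send total_segments."""
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd
        rtts += 1
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)   # exponential growth phase
        else:
            cwnd += 1                        # linear growth phase
    return rtts

# A 30-segment transfer (~45 KB with 1500-byte segments) takes 2 RTTs
# with an initial window of 10, versus 4 RTTs with the old default of 4.
print(rtts_to_send(30, initial_cwnd=10))  # 2
print(rtts_to_send(30, initial_cwnd=4))   # 4
```

This is why features like ICW10 and TCP Fast Open matter most for short, web-style connections: they shave round trips off exactly the phase short transfers spend all their time in.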
I understand you're joking. However, Linux has had most of these things for years. The exception is LEDBAT, RFC 6817. The actual dates are
RFC 7413 TCP Fast Open (TFO): kernel 3.13, 19 Jan 2014, https://kernelnewbies.org/Linux_3.13
Initial Congestion Window 10 (ICW10): kernel 2.6.39, 18 May 2011, https://kernelnewbies.org/Linux_2_6_39
TCP Recent ACKnowledgment: 4.4, 10 Jan 2016, https://kernelnewbies.org/Linux_4.4
Tail Loss Probe: 3.10, 30 Jun 2013, https://kernelnewbies.org/Linux_3.10
TCP LEDBAT RFC 6817: As far as I can tell, Linux does not have this yet.
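For what it's worth, opting into TCP Fast Open on the Linux side is just a socket option on the server (it's also gated system-wide by the `net.ipv4.tcp_fastopen` sysctl). A minimal sketch, assuming a Linux host with a 3.7+ kernel; the `hasattr` guard is there because the constant doesn't exist on all platforms:

```python
import socket

def make_tfo_listener(host="127.0.0.1", port=0, qlen=5):
    """Create a listening socket with TCP Fast Open enabled where available.

    The TCP_FASTOPEN option value is the maximum queue length of pending
    connections that arrived with TFO cookies.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    if hasattr(socket, "TCP_FASTOPEN"):          # Linux-only constant
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, qlen)
    s.bind((host, port))
    s.listen()
    return s

srv = make_tfo_listener()
print(srv.getsockname())   # ('127.0.0.1', <ephemeral port>)
srv.close()
```

A TFO client would send its request data in the SYN via `sendto` with the `MSG_FASTOPEN` flag instead of calling `connect` first, which is where the zero-RTT saving comes from.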
"The Linux IP stack has had all these and more since 1976"
Looks like nobody got the humor. And you got more downvotes than me. NOW I'm envious...
(/me points out Linux was invented in the 1990s, but the UNIX stack would've existed back then...)
At one time NT had an implementation of the BSD stack, with appropriate copyright statements in their documentation. They should've stuck with that, and tracked the BSDs.
Just disable telemetry. That would have the same effect.
They could also stop that "calculating" bollocks on file transfers and simply display a throughput graph.
Most of us know how big the folder is we're trying to copy and can perform basic maths... I don't need a wildly inaccurate estimate in the file transfer dialog, thank you very much.
>> They could also stop that "calculating" bollocks on file transfers and simply display a throughput graph.
Have you looked at Windows lately? Not only do you get a throughput graph for a file copy, you have the ability to pause the transfer - which is very useful if you're doing multiple transfers at the same time to a slow destination (or from a slow source).
I find the current file copy progress indicators on Windows to be far more useful than the stupid analog progress bar Linux provides...
Linux is a kernel. Kernels do not have progress bars!
Desktop environments do, though, and unlike Windows, you can choose from many different desktop environments that all work with the Linux kernel; Unity, Gnome, KDE, Cinnamon, Mate being among the better known ones. They each have their own way of handling file copies, and if you don't like it, you can choose another-- there's no such thing as the one "stupid" way Linux does it.