Researchers at US college North Carolina State University claim to have worked out how to allow Wi-Fi hotspots to fling up to 700 per cent more data back and forth, freeing large-scale Wi-Fi networks from the congestion that keeps users waiting for web pages to load and, in the worst cases, makes them think they’ve been disconnected. And …
So it's adaptive, intelligent tail dropping? I don't see what's earth-shattering about this.
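As far as I can tell from the article, the mechanism amounts to: watch the AP's transmit backlog, and when it crosses a threshold, give the AP higher channel-access priority until the queue drains. A toy sketch of that idea (the function name, thresholds, and hysteresis are my guesses, not the researchers' code):

```python
# Toy model of the described scheme: the AP normally contends like any
# station, but when its transmit backlog builds up it gets higher
# channel-access priority until the queue drains. Thresholds are invented.

HIGH_WATER = 20   # backlog (packets) that triggers the priority boost -- guessed value
LOW_WATER = 5     # backlog at which the AP drops back to normal -- guessed value

def ap_priority(backlog, currently_boosted):
    """Return True if the AP should get high channel-access priority."""
    if backlog >= HIGH_WATER:
        return True               # queue is backing up: boost the AP
    if backlog <= LOW_WATER:
        return False              # queue has drained: back to fair contention
    return currently_boosted      # in between: hysteresis, keep current state

# Rising backlog flips the boost on at 20; it stays on until backlog falls to 5.
state = False
for backlog in [3, 12, 25, 18, 9, 4]:
    state = ap_priority(backlog, state)
    print(backlog, state)
```

The hysteresis is the only vaguely interesting part: without the two thresholds the AP would flap between priority levels on every packet.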
From the description of what they have done, this is part of the WMM standard itself. Basically, if the AP has traffic, let it go out: set AIFSN(AP) < AIFSN(STA) for all ACs.
I guess the difference is that their implementation is triggered based on traffic load. It is just an asymmetric traffic configuration. But then why not just enable it all the time?
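For reference, you can already bias things this way statically on a Linux AP running hostapd: the tx_queue_* parameters set the contention settings for the AP's own transmit queues, while the wmm_ac_* parameters are what gets advertised to stations. Putting the AP's AIFS below the advertised station AIFSN gives the AP first crack at the medium. A sketch (values are illustrative, not tuned):

```
# hostapd.conf fragment -- illustrative values, not a tuned configuration.
# AP's own best-effort transmit queue: short AIFS, small contention window.
tx_queue_data2_aifs=1
tx_queue_data2_cwmin=7
tx_queue_data2_cwmax=15

# EDCA parameters advertised to stations for the best-effort AC:
# larger AIFSN, so AIFSN(AP) < AIFSN(STA), as described above.
# (Note: wmm_ac_* cwmin/cwmax are exponents, tx_queue_* are actual values.)
wmm_ac_be_aifs=4
wmm_ac_be_cwmin=4
wmm_ac_be_cwmax=10
```

The catch is exactly the one raised above: this is static, so the AP hogs the medium even when it has nothing much to send, whereas the paper's contribution seems to be toggling it based on load.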
Yeah, can't see how this is a big deal. It would be worth testing on an enterprise-class AP/deployment. Though it's a non-standard setting, almost all of them can be configured to do what is described.
I've always wondered about all these "hacks" to deal with TCP/IP's shortcomings; wouldn't it be better to improve TCP/IP itself?
Obviously any change to TCP/IP results in a lot of network kit needing to be updated or chucked, but sticking hack on hack can't be good in the long term.
Someone should come out with a new version of the TCP/IP stack, maybe with an expanded address space to cope with the larger number of IP-connected devices, improved automatic address assignment, built-in security, and a bunch of other cool functionality. Of course, you'd need to increment the version number, maybe twice even, from the existing version four.
I wonder what you'd call something like that and why it hasn't been widely implemented yet?
Good use of sarcasm, well done.
In this case, it's a problem with how WiFi is set up, rather than TCP. WiFi is a shared medium, so you're going to get collisions (CSMA/CA attempts to give fair access to the medium). TCP is affected because it sees a collision as a drop, so it scales back throughput-wise. That's why they're dropping new sessions and giving priority to the existing data flows (it's kind of cheating, throughput-wise).
TCP has quite a few throughput hacks in it (window scaling, SACK, exponential backoff, etc.), and is quite predictable and mature. The real issue here is wireless Ethernet being "non-switched", and thus having collisions and packet loss with many users.
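To illustrate why lossy shared media hurt TCP so badly: standard congestion control grows the window additively per RTT but halves it on every perceived loss, so a collision that surfaces as a drop costs far more than the one packet. A toy AIMD sketch (my own illustration, not anyone's actual stack):

```python
# Toy sketch of TCP-style AIMD congestion control: additive increase per
# RTT of successful delivery, multiplicative decrease on each perceived
# loss. On Wi-Fi, a collision that shows up as a drop triggers the same
# halving as real congestion would.

def aimd(events, cwnd=10.0, increase=1.0, decrease=0.5):
    """events: sequence of 'ack' (one clean RTT) or 'loss' (a drop)."""
    history = []
    for ev in events:
        if ev == "ack":
            cwnd += increase                      # additive increase
        else:
            cwnd = max(1.0, cwnd * decrease)      # multiplicative decrease
        history.append(cwnd)
    return history

# Ten clean RTTs grow the window linearly to 20; a single drop halves it.
print(aimd(["ack"] * 10 + ["loss"]))
```

Ten RTTs of linear growth undone by one drop is the asymmetry: on a collision-prone medium, flows keep getting knocked back down before they can fill the pipe.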
The article seems to make a distinction between hotspots and access points. I'm not that familiar with WiFi topology, but I've always considered the two to be the same. Usage here seems to imply that hotspots are public access points serving a particular network over a large area, such as a city, while access points generally relate to more or less closed networks, such as hotels and conferences.
This sounds like network management optimisation, which is unlikely to have much effect with just one device. I'm also not convinced that you can increase the yield if everyone is downloading. This approach sounds like load balancing across access points. Surely, before any effort is made in that direction, you need to make sure that the access points are set up to account for the environment, user density, and interference from each other? Not really up on much of this, so I would appreciate an explanation.
Has there been any work done on Bluetooth 3 networks which use Bluetooth as a d-channel to manage clients while data is carried on WiFi?
Yeah, sure, the manufacturers will update their firmware for this.
Or they'll just make new products with WiFox "700% faster!!" labelled on the box and expect us to upgrade.
I didn't read the entire article, but I got to the part where it said this new "technique" is dependent on how many clients are connected to the AP.
If that's true, I've been performing this technique for several years now: I'd log into my home router and disassociate my grandfather's laptop so he would stop watching YouTube and I could have the internet connection all to myself.