Is more bandwidth really the answer?

A former university lecturer used to say, "any problem can be beaten to death with pound notes". Many network managers looking at the impact of voice over IP on their networks might be thinking along similar lines, but is more bandwidth the answer to the performance and quality issues they are currently, or soon to be, suffering …

COMMENTS

This topic is closed for new posts.
  1. Tom

    Parkinson's Law!

    Basically, people will send more and more rubbish until the bandwidth is used up, and then upgrade again.

    Many years ago we could download several hundred text orders down a 14.4 modem in an hour.

    Now we get a few hundred bloated Word documents that cannot be automatically parsed... fewer orders, higher cost of processing.

    Also remember a picture may be worth a thousand words.

    Shame it tends to take up several megs.....

  2. Peter Kay

    QoS and money, not bandwidth.

    Of course it's about money - but targeted money, not throwing it at bandwidth.

    'Measure at the application level' my arse. Yes, you could measure at the application level, but the smart money would be on measuring the latency for that protocol using a decent managed switch. If QoS is used to prioritize VOIP above other services, it's going to work.

    The reason people don't measure at the application level is that the applications provide insufficient tools to measure service quality at that level, so people fall back on the parts of the infrastructure they can control in the hope that will fix things. IT staff are not miracle workers and can only use the tools provided; when applications have no ability to tune their performance it becomes a developer problem, rather than one for system administrators or management. (A rough sketch of what such an application-level probe could look like follows at the end of this comment.)

    It doesn't really help that, with VOIP in particular, the exchanges are generally a pain to get working and most phone firmware is highly buggy.
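    For what it's worth, a minimal sketch of an application-level probe (Python; it assumes a Linux host, where socket.IP_TOS is available, and a UDP echo responder at a placeholder address - illustrative only, not a recommendation):

        import socket
        import statistics
        import time

        # Rough sketch: time UDP round trips the way a voice stream would see them,
        # with packets marked DSCP EF (0xB8) so a QoS-aware switch can prioritise them.
        # ECHO_HOST/ECHO_PORT are placeholders for a UDP echo responder you control.
        ECHO_HOST, ECHO_PORT = "192.0.2.10", 7

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)  # DSCP EF, the usual voice marking
        sock.settimeout(1.0)

        rtts = []
        for seq in range(50):
            payload = seq.to_bytes(4, "big") + bytes(156)  # roughly one 160-byte G.711 frame
            start = time.monotonic()
            sock.sendto(payload, (ECHO_HOST, ECHO_PORT))
            try:
                sock.recvfrom(2048)
                rtts.append((time.monotonic() - start) * 1000.0)
            except socket.timeout:
                pass  # a dropped probe matters too, but is not counted here
            time.sleep(0.02)  # 20 ms spacing, as for a typical voice codec

        if rtts:
            print(f"RTT ms: min {min(rtts):.1f}  mean {statistics.mean(rtts):.1f}  max {max(rtts):.1f}")

    Marking the probes DSCP EF means they sit in the same queue a prioritised voice stream would, so the measured round trips reflect what the voice traffic actually sees rather than best-effort traffic.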

  3. Martin Gregorie

    Bandwidth is often not the problem: latency is

    More and more I'm noticing that once a web page starts to download it arrives at an acceptable rate, but some web sites have abysmal response times. The UK Met Office is a good example. It takes anywhere from 2 to 7 seconds for a satellite image of the UK or Europe to start to load, and the first request of the day is always the slowest. The initial delay isn't a network problem: pinging their web server shows a 30-40 ms latency, typical for a site in the UK.

  4. Cameron Colley

    Latency? Or DNS?

    "It takes anywhere from 2 to 7 seconds for a satellite image of the UK or Europe to start to load and the first request of the day is always the slowest."

    2 to 7 seconds sounds high to me for a latency problem (a lot more than 100 ms, for instance) -- are you sure the Met Office images aren't served from another domain or sub-domain and this isn't a DNS issue? There is also the possibility that the delay is caused by a lookup from a database that feeds the site.
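    One quick way to test the DNS theory is to time just the name lookup on its own - a minimal sketch in Python (the second lookup will normally be answered from a cache, so a big gap between the two points at DNS rather than the server):

        import socket
        import time

        # Time the DNS lookup alone for the site under discussion, cold then warm.
        HOST = "www.metoffice.gov.uk"

        for attempt in ("first lookup", "second lookup"):
            start = time.monotonic()
            socket.getaddrinfo(HOST, 80)
            elapsed = (time.monotonic() - start) * 1000.0
            print(f"{attempt}: {elapsed:.1f} ms")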

  5. Robin Cook

    Bandwidth is often not the problem: latency is - True

    You can fill a lorry with hard disks and drive it to the user. Awesome bandwidth, but no good for real-world apps.

  6. Martin Gregorie

    Latency? or DNS?

    It's a server delay for sure. The web page clears almost immediately and then there's a long pause while the server cranks up to start sending the image.

    I read the 2-7 secs value off the timer on the Opera menu bar - you can watch it tick up. Just now the delay was 2 seconds between clearing the page and the image starting to appear: once started it came over at a steady 15 KB/sec. For my first access this morning it was 7 seconds.

    Try it yourself. The URL is: http://www.metoffice.gov.uk/satpics/latest_uk_vis.html
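    A minimal sketch of the same measurement without the browser (Python; it fetches the page above rather than the image itself, so treat the numbers as indicative only):

        import socket
        import time

        # Time the wait for the first byte of the response, then the transfer rate
        # once data starts flowing - the two phases described above.
        HOST, PATH = "www.metoffice.gov.uk", "/satpics/latest_uk_vis.html"

        sock = socket.create_connection((HOST, 80), timeout=30)
        sock.sendall(f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode())

        start = time.monotonic()
        first = sock.recv(4096)                  # blocks until the server starts replying
        first_byte_delay = time.monotonic() - start

        total = len(first)
        while chunk := sock.recv(4096):
            total += len(chunk)
        elapsed = time.monotonic() - start
        sock.close()

        print(f"first byte after {first_byte_delay:.2f} s, "
              f"{total / 1024:.0f} KB in {elapsed:.2f} s "
              f"({total / 1024 / max(elapsed, 0.001):.0f} KB/s)")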

  7. Anonymous Coward

    Latency? Or DNS? Don't underestimate the power of latency!

    Something quick: "You can fill a lorry with hard disks and drive it to the user. Awesome bandwidth but no good for real world aps."

    Actually, I read an article a few years ago that described scientists shipping tower PCs packed with hard drives. The reason: it was cheaper and faster to ship data on them by UPS than to send it over the internet.

    Now the main point:

    In a badly written application or toolkit, it doesn't take long for 100 ms of latency to add up to 2 to 7 seconds. Lately I've started seeing web pages that load in a manner that virtually guarantees the same problems.

    A few years ago I was working on projecting a simple X application across an ssh tunnel. The application was a simple display with a lot of buttons, but not all shown at once. The X protocol was specifically designed for projection across networks; the toolkits definitely are not. Through experimentation, analysis of protocol traffic and review of the toolkit code, we learned something very disturbing: the toolkit asked the X server (the display) where the pointer was in relation to EVERY button it knew of, even the buttons not displayed. Thus a 100 ms delay multiplied by 20 buttons became 2 seconds, without any DNS delay. We eventually removed the latency by implementing our own button system - not a good thing when the toolkit already had all the data it needed.
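    A toy model of that attrition (Python, with the 100 ms and 20-button figures from the example above): each synchronous query costs one full round trip, so per-button queries scale with the button count while a single batched query does not.

        import time

        RTT = 0.100      # 100 ms of latency per round trip, as in the example
        BUTTONS = 20

        def query_per_button(n_buttons: int) -> None:
            # One blocking "where is the pointer?" request per button the toolkit knows about.
            for _ in range(n_buttons):
                time.sleep(RTT)   # stand-in for a synchronous request/reply over the tunnel

        def query_once() -> None:
            # Ask for the pointer position once and work out hit-testing locally.
            time.sleep(RTT)

        for name, fn in (("per-button queries", lambda: query_per_button(BUTTONS)),
                         ("single batched query", query_once)):
            start = time.monotonic()
            fn()
            print(f"{name}: {time.monotonic() - start:.1f} s")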

  8. Anonymous Coward

    It is also what developers and QA get used to

    If you compared the web in 1999 to today, you would be horrified at the bloat that has occurred. Why has this happened? Largely because so many content providers and consumers have high-speed broadband, they fail to notice that items which eight years ago would have meant waiting a minute are now served in a second or two. Less focus has gone into making these optimisations because the commercial need has been reduced.

    The same thing happened in the client-server world when 100 Mbit networks became commonplace, and the same thing happened as RAM became cheaper: memory optimisation became less important than it had been when RAM was expensive.

    If you look at development tools, there are very few good ones that allow you to simulate slow and poorly performing networks, yet such tools are essential when designing and testing software that will be used on those networks.
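    In the absence of such tools, a crude stand-in is easy to knock up - a minimal sketch of a latency-injecting TCP proxy (Python; the addresses, ports and 100 ms figure are placeholders, and proper traffic-shaping tools will also give you loss and jitter):

        import socket
        import threading
        import time

        LISTEN_PORT = 8080            # point the client under test here...
        TARGET = ("127.0.0.1", 80)    # ...and it reaches the real server via the proxy
        DELAY = 0.100                 # artificial one-way latency in seconds

        def pump(src: socket.socket, dst: socket.socket) -> None:
            # Copy bytes one way, sleeping before each forward to simulate latency.
            try:
                while data := src.recv(4096):
                    time.sleep(DELAY)
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                try:
                    dst.shutdown(socket.SHUT_WR)
                except OSError:
                    pass

        def main() -> None:
            listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listener.bind(("127.0.0.1", LISTEN_PORT))
            listener.listen()
            while True:
                client, _ = listener.accept()
                upstream = socket.create_connection(TARGET)
                threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
                threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

        if __name__ == "__main__":
            main()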

  9. Anonymous Coward

    latency builds

    Bandwidth is good, but latency kills by attrition, and so few developers and designers know about it until someone points out they have a dog-slow website and no one wants to wait for it. It's the same old wheeze: if you overload with layers you get a slow site.

    And if you create an interpreted program that by definition takes a certain amount of time to run, it will _always_ take that amount of time no matter how fast the platform - otherwise it's broken.
