Google's Wi-Fi not good enough for its home town

Google's offer of free Wi-Fi to its home city of Mountain View isn't good enough for residents who have been struggling to make use of the overloaded service – but now they're finally getting an alternative. Google got permission to strap access points to lampposts around Mountain View in 2005, launching the free network a …

COMMENTS

This topic is closed for new posts.
  1. thesykes

    A wi-fi network built 8 years ago is now struggling... not exactly surprising, considering the number of gadgets that can now go online.

    1. localzuk Silver badge

      Precisely. However, the contract was extended for a second 5-year period, so I'd expect them to have done a refresh during that time.

      Our wireless network is about 5 years old here, and during that time we've expanded its capacity twice, due to the shifting nature of mobile devices and the changes in how the site is used. Not to mention simple increases in device numbers.

      I'd expect such things to be needed in a metro-area wireless network more often than that!

      1. Captain Scarlet Silver badge
        Childcatcher

        A refresh after 5 years? I doubt many companies would refresh this type of device that often.

    2. Rekrabm

      Re:

      The network has been too slow to use for most of its existence. Authenticating with one's Gmail ID has always been orders of magnitude faster than loading even Google's search page.

  2. Anonymous Coward
    Anonymous Coward

    There they go.

    Cutting out the middleman to make getting to your info more cost-efficient.

  3. James 100

    Google's ISP ambitions

    Given Google's plans to build gigabit-per-user fibre services, how can they of all people be struggling with congested WiFi now? Presumably it's a backhaul issue: did they cheap out and try to use a wireless mesh instead of running fibre or copper to each AP?

    1. NinjasFTW

      Re: Google's ISP ambitions

      Why would you think it's a backhaul problem rather than simple saturation of the airwaves?

      1. Richard Jones 1

        Re: Google's ISP ambitions

        Based on little to no information, it is hard to assess where things fell down. Certainly the airwaves are as capable of getting congested as anything else, and can be harder to fix than 'simple' backhaul issues, which might be sorted by 'just bung in a bit of cable'.

        Rather than any one issue, I suggest the whole installation was predicated on best-guess 2005 information, and the whole end-to-end requirement set has substantially evolved since then; at a guess, by a substantial set of factors.

  4. btrower

    The problem is WiFi not Google

    Our current way of chopping up the EM spectrum is hopeless. In fact, the entire understanding of bandwidth has been out of whack since, let me see now, oh yeah -- FOREVER. We should consolidate the EM spectrum into one big TCP/IP** pipeline and a few smaller ones -- emergency services and the military come to mind. Maybe we could give the NSA its own without telling them where the rest of us went.

    The cellular network deals with bandwidth congestion by creating smaller cells in higher-traffic areas. This should be done with all the spectrum, and we should be joining the backbone into a finely-grained cellular wireless network.
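
    A rough back-of-envelope of why cell splitting works (all the numbers here are illustrative assumptions, not measurements): each cell re-uses the same spectrum, so aggregate capacity scales with the number of cells you tile an area with, not with the amount of spectrum you own.

    ```python
    import math

    PER_CELL_MBPS = 100  # assumed shared capacity of a single cell/AP

    def aggregate_capacity_mbps(area_km2: float, cell_radius_km: float) -> float:
        """Total capacity when an area is tiled with cells of a given radius."""
        cells = max(1, round(area_km2 / (math.pi * cell_radius_km ** 2)))
        return cells * PER_CELL_MBPS

    # Halving the cell radius roughly quadruples the cell count, and with it
    # the aggregate capacity over the same area:
    print(aggregate_capacity_mbps(31.0, 1.0))  # ~Mountain View-sized area
    print(aggregate_capacity_mbps(31.0, 0.5))  # same area, half the radius
    ```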

    I have a good idea as to how we got into this mess, and I think the only way out is to fix the network both politically and technologically. The technological side has its challenges, but right now I think the bottleneck is the politics (and related economics).

    Some of the issue relates to misunderstandings about latency, bandwidth, protocols and security. It has been to the benefit of the companies supplying these to us to keep them obscure. They are not that simple, but at the level where things bottleneck you don't have to know that much.

    Above all, bandwidth is *always* going to be too low and latency is *always* going to be too high. Whatever standards we put in place should not cap the speed or dictate the latency. This is, or should be, easy to understand, but it has defeated everyone presenting standards since the beginning. Part of that is that even very clever guys without enough experience think there is some value that is 'enough'. Part of that is that their bosses have given them a target that is an integer value rather than the idea that it needs to be as fast as possible, even in the future.

    There should be only one logical highway. Whether we are talking about inter-chip communications to a CPU cache or wide-area network speeds through a satellite, it should be logically consistent. I should be able to write software that 'reaches low' for speed and efficiency but neither knows nor cares much about the details of the source or the sink for data. Bits carried by mule on a scrap of paper can't have the same underlying physical protocol as bits moved from one CPU register to another. However, except for the guy guiding the mule, or the CPU and whatever it links to, no other element needs to know how distant elements implement things.
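
    A minimal sketch of that "one logical highway" idea (the names here are illustrative, not any real API): the application talks to an abstract link and neither knows nor cares whether the bits ride a bus, a radio or a mule.

    ```python
    from abc import ABC, abstractmethod

    class Link(ABC):
        """A logically consistent pipe, whatever the physical layer."""
        @abstractmethod
        def send(self, data: bytes) -> None: ...
        @abstractmethod
        def recv(self) -> bytes: ...

    class LoopbackLink(Link):
        """In-memory transport; a radio or fibre Link would differ only here."""
        def __init__(self) -> None:
            self._queue: list[bytes] = []
        def send(self, data: bytes) -> None:
            self._queue.append(data)
        def recv(self) -> bytes:
            return self._queue.pop(0)

    def application(link: Link) -> bytes:
        # Identical over any Link implementation, fast or slow.
        link.send(b"hello")
        return link.recv()

    print(application(LoopbackLink()))  # b'hello'
    ```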

    We are constantly struggling to keep up with changes in things like CPU word size, bus standards, LAN, WAN and wireless standards. Is there anyone who has gone through more than two of these transitions who thinks this is not a perpetually moving target? I am a software developer. I can think of reasons why I would want massive CPU word sizes in the megabits, let alone the next transition up to 128 bits that they say does not need to happen but will happen anyway. Are we going to want petabit bandwidth to our phones? Maybe. Probably terabit, at least. Why should we rule that out by design when we don't have to?

    The wireless network looks the way it does because the standards committees entirely lacked imagination. They looked at everything as point-to-point in the light of the data currently being transmitted, and then designed for it. I know because I sat on a committee for the largest private network in Canada while they sagely designed for an increase, in *some* branches, from one 56K connection to a grand total of *two* 56K connections, at a time when we were saturating 4,000K connections in daily use in a tiny 12-person workgroup. That was in 1987, when the banks did not yet 'believe in' LANs. Oh my. If you don't understand the problem in a very profound way, it is hard to convince you that your solution is off.

    Not seeing the value in the EM spectrum, our political representatives were seduced into selling the right to use it to entities with a vested interest in deliberately limiting its use. So they sold it, and its use was limited. That is why the per-gigabyte cost of SMS is so insane: something approaching a million dollars, when even over our ridiculously compromised network it is only worth about a dollar or less. BTW, if some industry weasel wants to step in to explain how that 140-byte SMS packet is special, why not go all the way and explain how that makes the actual marginal cost of an SMS message ZERO, bumping my generous theoretical million-times markup up to an actual markup of infinity. Either is a great margin if you have the lack of scruples and the monopoly power to get it.
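
    To check that markup figure, here is the arithmetic, assuming a typical out-of-bundle price of ten cents per message (the price is an assumption; the 140-byte payload is the SMS user-data limit):

    ```python
    PRICE_PER_SMS = 0.10   # assumed price per message, USD
    BYTES_PER_SMS = 140    # user data carried by a single SMS

    messages_per_gb = (1024 ** 3) / BYTES_PER_SMS   # ~7.67 million messages
    cost_per_gb = messages_per_gb * PRICE_PER_SMS

    print(f"{messages_per_gb:,.0f} messages per GB")
    print(f"${cost_per_gb:,.0f} per GB")            # ~$767,000 per gigabyte
    ```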

    What is happening with WiFi is what the network guys knew or should have known all along. When people start to actually *use* bandwidth to shift data that interests them from one place to another, they discover that the pipeline connecting them is wholly inadequate *by design*.

    I have a modest home with a wife, two daughters and a normal number of visitors. All of them use our WiFi, and over the past decade I have gone from one WiFi router that was hardly used to three that do not always keep up.

    Google has huge pockets and lots of people a lot smarter than me, but they can't fix stupid. If I were in charge of Google, I would wish the city luck and pass them the infrastructure in place while they are dumb enough to take it.

    **We need to modernize our underlying protocols, so I really mean whatever replaces TCP/IP to fix security issues, latency guarantees, etc.

    1. cyberdemon Silver badge
      Trollface

      Re: The problem is WiFi not Google

      And the TLDR award for 2013 goes to.. btrower!

      I keep reading your name as btrowel.. which is ironic since you always seem to be laying it on with one!

      1. Vector

        Re: The problem is WiFi not Google

        Just to sum up -

        We still haven't learned that 640K is not more than anybody will ever need!

  5. Kevin McMurtrie Silver badge

    Here comes an old copy of Windows to burn the place down

    Neighboring city Sunnyvale used to have free WiFi too, from a different company. The problem was that infected Windows machines would nuke the entire neighborhood's network. Their probing, attacks, and ARP "who-has" packets for nonsense addresses kept the network perpetually saturated. WiFi can only tolerate so much of that before it's useless.
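
    For the curious, a hedged sketch (assuming the scapy library is available) of the kind of monitoring that catches this: count ARP "who-has" requests per source and flag hosts probing nonsense addresses.

    ```python
    from collections import Counter
    from scapy.all import ARP, sniff

    requests = Counter()
    THRESHOLD = 1000  # arbitrary cutoff; tune for your network

    def track(pkt):
        if pkt.haslayer(ARP) and pkt[ARP].op == 1:  # op 1 == "who-has"
            src = pkt[ARP].psrc
            requests[src] += 1
            if requests[src] == THRESHOLD:
                print(f"possible infected host flooding ARP: {src}")

    sniff(filter="arp", prn=track, store=False)  # needs root privileges
    ```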

    1. vagabondo

      Re: Here comes an old copy of Windows to burn the place down

      The "public" and hotel wireless networks that I use when travelling seem to drop traffic from clients other than to ports 53, 67, 80 and 443. This should lessen the inane chatter found on many LANs.

  6. Anonymous Coward
    Anonymous Coward

    Google really should have invested a few dollars to fix this

    Because it just makes them look bad that they can't get WiFi working properly in a city that isn't very dense. If we can make WiFi work in stadiums filled with people, it shouldn't be too hard to get it working in a city with a handful of people per acre.
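
    For a sense of scale, a rough density comparison (all figures are illustrative assumptions, not measured values):

    ```python
    stadium_people, stadium_acres = 70_000, 15   # a packed venue
    city_people_per_acre = 10                    # a low-density suburb

    stadium_density = stadium_people / stadium_acres
    print(f"stadium: ~{stadium_density:,.0f} people/acre")  # ~4,667
    print(f"city:    ~{city_people_per_acre} people/acre")
    print(f"ratio:   ~{stadium_density / city_people_per_acre:,.0f}x denser")
    ```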

  7. MainframeBob

    Free = ever growing demand

    In the mind of developers:

    Free WiFi = apps need more bells & whistles, need to be always on, and need to stream data.

    There will be problems with WiFi forever.
