* Posts by Paul Lahaie

3 posts • joined 16 Dec 2008

The Next Big Thing in Wi-Fi? Multiple access points in every home

Paul Lahaie

I'm surprised no one has mentioned this yet, but the absolute last thing I would ever want is to give my telecommunications provider yet another vector to monetize things that are basically free.

I can just see it now:

Internet service with Wireless Mesh (5 clients/shared 1TB data plan) - $199.99/mo

Every additional TB of data - $49.99/mo

Every 10 additional clients - $4.99/mo

MU-MIMO Support (Premium) - $9.99/mo

Legacy Device Support (B/G/N) - $4.99/mo

All meshed networks will also be available for them to use as mobile signal, so your up-to-250Mb service will sometimes be slower because they're reselling the bandwidth you're already paying for to other customers. (I know the modems are over-provisioned to help offset the third-party usage, but DOCSIS can't deliver the full bandwidth to every customer -- during Netflix primetime, my 26MB/s turns into ~12-18MB/s -- they'll fix it when they get around to doing a node split.)

And next will come deep packet inspection (DPI) on your internal wireless network, so they can make sure you're not ripping off any of their media properties (or those of any other member of their cartel).

Let's give the absolute worst-behaving companies in the world (as far as serving their customers goes) even more ways to control your interaction with the network. It took them a few decades, but they will manage to turn the Internet into their modern X.25 data network, so they can monetize absolutely every aspect of every interaction on it (you know: monthly access fee, data charges, hourly rate, extra for "premium" sites).


It's 30 years ago: IBM's final battle with reality

Paul Lahaie

Better than OS/2 / Win9x / NT

Around the time OS/2 was making its way onto the scene and most people used DESQview to multitask on their 286/386 PCs, Quantum Software Systems (now QNX Software Systems) had a real-time, multiuser, multitasking, networked and distributed OS available for 8086/8088 & 80286 processors.

On PC/XT hardware it ran without any memory protection, but the same binaries would run on both the real-mode and protected-mode kernels.

There was something special about being able to do:

$ [2] unzip [3]/tmp/blah.zip

This would run the unzip program on node 2, read /tmp/blah.zip from node 3 as the source archive, and extract the files into the current working directory.
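As I remember it (treat the exact syntax as approximate -- this is from memory of QNX 2, not something you can run today), the `[node]` prefix worked on any path, not just the command, so moving a file between two remote nodes was just:

```
$ cp [3]/tmp/blah.zip [4]/backups/blah.zip
```

The shell didn't care where anything lived; the OS resolved the node prefixes transparently, so ordinary tools became network tools for free.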

We accessed a local BBS that ran on a 4-node QNX network (6 incoming phone lines + X.25 (Datapac)).

It even supported diskless client booting and the sharing of any device over the network. Though at around $1k per license, it wasn't "mainstream".

It's too bad that the few times Quantum tried to take it mainstream, the plans fell through. Both the Unisys ICON and a planned new Amiga had chosen QNX as the base for their OS.

Google hints at the End of Net Neutrality

Paul Lahaie

Another misleading article from someone who should know better

I'm looking at the three net neutrality (NN) points, none of which the Google CDN would violate.

1. Levying surcharges on content providers that are not their retail customers;

If Google is paying to co-locate its servers on the ISP's network, is it not a retail customer at that point? What's the difference between Google co-locating a server and you or I doing so? None.

2. Prioritizing data packet delivery based on the ownership or affiliation (the who) of the content, or the source or destination (the what) of the content; or

The fact that the server is closer does not give its packets "priority" unless the network provider makes it so. As someone who works in LAN management, you should be aware of this. The idea behind NN is that you can't prioritize based on origin or destination, but closer destinations on fatter pipes will always be faster. If I connect to my home system (cable, 10/1) versus a co-located server (100/100), I notice a HUGE speed difference, but the traffic is still being treated equally.
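To put back-of-the-envelope numbers on that (link speeds are the ones from my setup above; latency and protocol overhead ignored), here is the ideal time to pull a 100 MB file over each link:

```shell
# time in seconds = size_in_megabits / link_rate_in_Mbit/s
echo $(( 100 * 8 / 1 ))    # from home over the 1 Mbit/s cable uplink -> 800 s
echo $(( 100 * 8 / 100 ))  # from the co-located 100 Mbit/s server   -> 8 s
```

A 100x difference, with every packet on both paths treated identically -- proximity and pipe size, not prioritization, account for it.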

3. Building a new "fast lane" online that consigns Internet content and applications to a relatively slow, bandwidth-starved portion of the broadband connection

How are "Internet content and applications" consigned to a "relatively slow, bandwidth-starved portion of the BROADBAND CONNECTION" (emphasis mine)? So you're saying that once (if) these servers go up, my access to the rest of the Internet will somehow become slow?
