10 posts • joined 17 Mar 2009
Replacing the copper..
...really is essential and can't be easily dodged.
I live not-in-the-city but within 20km of the centre. We have (or at least had) nice copper - much thicker wires than those used in suburban wiring, so we can get decent data rates... mine is 8Mbit/sec!! BUT - we have the lovely gel-filled cables.
So after every heavy rain - we suffer. The service pit opposite my house fills with delightful orange clay-loaded water. The phone ceases to work. The fault calls go in. The service techs come out, drain the pit, dry the cable joints and attempt to do something with the wet gel.
Sadly for us, NBNco vehicles were seen surveying the pits just before the last election but we're now 'off the map'. FTTN is not going to do anything for us, any attempt to re-use the cable infrastructure is doomed - crap cable, collapsed ducting and pits that fill with water... I'd guess this is a common issue. Sigh.
This stuff isn't as simple as people think - scale gets you
"Providing sufficient network infrastructure is relatively simple- chuck a few wifi base stations about over campus with fibre or CAT6 connections between them."
Not quite that easy, actually... Our IT people discovered the joys of thinking that way! When you have lecture theatres that accommodate 200-300 people, each with at least one wifi device (mobile phone plus laptop/iPad/whatever), and someone runs an in-lecture quiz - the instantaneous load on the wifi infrastructure tends to kill it. You have the entire class attempting to do "something" quite literally at the same time. Just getting enough basestations in there to hold 300+ simultaneous associations is hard enough...
The systems engineering exercise is actually quite interesting.
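To put some rough numbers on that exercise - a back-of-the-envelope sketch, where every figure is an illustrative assumption rather than a measurement from any real network:

```python
import math

# Back-of-the-envelope AP count for a quiz burst in a packed lecture theatre.
# Every number below is an illustrative assumption, not a measurement.

def aps_needed(clients, assoc_limit=50, burst_kbps=200, usable_mbps=20):
    """APs needed to satisfy both the association limit and the burst load.

    assoc_limit: associations one AP can realistically hold
    burst_kbps:  per-client traffic when the whole class submits at once
    usable_mbps: real-world usable throughput per AP (far below the headline rate)
    """
    by_assoc = math.ceil(clients / assoc_limit)
    by_load = math.ceil(clients * burst_kbps / 1000 / usable_mbps)
    return max(by_assoc, by_load)

# 300 students, one device each: the association limit dominates.
print(aps_needed(300))  # 6
```

Under those assumed numbers the association limit, not raw throughput, is the binding constraint - which is consistent with the "hanging onto 300+ associations" pain above.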
Likewise the Eduroam network. I'm fully authenticated on the network at the institution I'm visiting. That is a whole lot different from a VPN tunnel. When I'm off this campus and I decide to print a document - I really *do* want to print on the printer in the office I'm in, rather than the one in *my* office! That kind of thing. Of course, in the non-academic world there isn't a need for (or possibility of) ad-hoc open access to things when visiting other sites not owned by your company - so the solution is probably only applicable in the Edu/Science sector - but it is a very effective and useful thing.
It is part of the lack of deep understanding problem...
Stats tell you that the vast majority of "new" embedded widgets are built on Linux and, to a lesser extent, on the reference microcontroller implementations provided by the industry.
So to a certain extent the view is "port Linux (which is probably already done), put our stuff on it, ship". The time to market is quite short - so the QA and field-test part of it is just missing.
The vendors quite quickly move onto a new product - possibly with an entirely different team developing the solution. So there tends to be a little amnesia in the corporate "memory" and each thing is a bit of a separate miracle.
Some of our international students have very interesting things to say about how this stuff is actually done!
Official + Timely? Not all that likely..
Disclaimer: while I'm a Fire Service officer, this isn't (naturally) the view of the Fire Service...
The words from the article about how the information flows from the fire line are the nub of the matter. I might get on the radio and report something going south to the sector commander who will maybe take some steps to see if it is as bad as I say. Then when the sad truth of the matter is revealed s/he will forward that up the chain of command to the Incident Management Team.
As an IMT person myself - any news from the ground is wonderful and is pretty much immediately listened to and fed into operations and planning. When this system is working well, the most optimistic lag time is probably 5 or so minutes from an appliance OIC reporting something to the Incident Controller being made aware of it. Without ANY valid statistics to back this up - I'd suggest 10 or so minutes would more realistically elapse..
Since the sad events in Victoria and the ensuing investigation - here in South Australia our IMTs now have a media liaison person. So if the Incident Controller decides something needs to be done with respect to notifying the general public - the media liaison bod gets tasked. I don't know what the lag through the ABC is, but it would be minutes at least.
So you'd be (realistically) looking at 10-15mins shrinking down to a very optimistic 5 or so minutes if all the ducks are lined up... So timely isn't part of the equation here.
The other part - the "official" part - is also a problem. The decision to notify (via SMS and/or radio) isn't "automatic". Someone (in the case of SA: someone at the regional office) decides if there is a real issue. This means a delay until the first arriving officer makes a Situation Report. That SITREP passes up through the chain of command quite quickly, but again - there is a delay. Our brigades being volunteer, it might take 4-5mins for a crew to respond and a further 5-10mins to drive to the incident. By which time a fire might be going "quite well". So even the alert that something is happening might be delayed by quite a bit. Not being part of the (paid) regional office staff I can't comment on what the "official" processes are to authorize an alert, but I think that bit happens pretty quickly.
The other thing that should be said is that people (like me) who live in bushfire prone areas do know that mobile phone coverage is dodgy, know where their evacuation routes are, and most importantly wouldn't be waiting for an SMS to decide to leave.. Certainly here in the Adelaide hills you know when it is going to be a bad day the moment you open the door and step outside!
/dev/telcos have been beaten before...
Don't forget that Internode set up Agile to do (close to) this down the length of the "Limestone Coast" as it is now called. That project was pretty much 100% in competition with all the incumbent telcos of the day.
So there is hope!
The keyword is *emergency*
..like when there is an earthquake and all your cell towers fall down. The simplest, most self contained technology wins. We use simple VHF band radios for fireground communications. Sometimes even this simple tech fails (fire+smoke does have a bad effect!).
Fortunately when that happens I usually get one of the crew to run the messages over to the other side. Doesn't get much lower tech than that.
If we relied on GSM/3G/whatever then here in Australia we'd be screwed as soon as we moved away from a commercially feasible area. Which would be 90+% of the continent.
Donate some bandwidth. Don't be schmucks. Planning needs to be done for the *worst* possible disaster, not the most common when it comes to comms.
Ahh - there's the problem
Emergency Services are traditionally not pushy enough...
First - disclaimer - I am a volunteer fire service person. A CFS officer. So this is emergency services biased.
There is far too much high tech in the comms systems both in use and proposed. In South Australia someone in government was sold on the idea that all-of-government comms could be implemented using a trunking radio network. So the fire service, police, ambulance, ... all use something called the GRN (a Motorola trunking radio network). This system operates in the 400MHz UHF band.
Now, we also have hills. My own area in the Adelaide hills is, well, *hilly*. Did anyone ever consider what happens to UHF carriers in hilly areas? Apparently not. Did anyone ever consider the propagation issues when you are surrounded by 15m+ high flames - again, apparently not.
This network is used for command and control functions. It is also woefully under-provisioned. One of our fine national carriers has the gong for running the inter-cell backhaul on the network. Apparently more capacity can be got when needed, by some mechanism that probably involves the minister (I'm a volunteer - I don't deal with this crap). We had a fire a couple of years back where I needed to talk to what was essentially the forward command person, who was parked in a vehicle about 50m away. I could *not* get through via the GRN network - because it was over-congested (computer said "no"!). I ended up leaving the appliance and walking the 50m (something of a fitness programme, no doubt part of the hidden "benefit" of using a trunking network...). The congestion arose because (amongst other things) all the buses (!!!) have GRN handsets and were calling back to HQ complaining that the road (the major freeway out of Adelaide through the hills) had just been closed by the fire service..
Because of the parlous state of this "communications" system, we also have VHF handsets for use on the fire ground. So you can at least call the appliance you can SEE!
Now, it would be lovely to have more channel capacity - because when it hits the fan, you need to talk RIGHT NOW. Reserving the channel capacity is basically the only feasible solution. If I need to call because my crew is in danger, I need it RIGHT BLOODY NOW. I also need it to work. When bad things happen, they happen fast and everyone wants to phone their loved ones simultaneously. Any kind of channel sharing thing will simply annoy people and cause the emergency services delay and angst. You set the capacity aside and it hardly gets used. Big deal. The lustful eyes of the telcos/TV/Radio/whatever people just need to be kept off the channel.
Emergency services should not have to go cap-in-hand to the agency responsible. They should not even have to lay money on the table. This is basic infrastructure that a civil society actually NEEDS to function. The money makers just need to accept that not everything should be sold off.
Anyway, sorry for the rant.
Random Speed data point...
Put the 0.5TB Seagate hybrid in my Dell Studio 15 laptop.. The only measured speed increase is the boot time for me; since Unix-derived operating systems cache so aggressively, you tend not to see speed lifts in the hum-drum browsing, compiling, <insert-whatever-you-do-here> (although it is faster).
But the boot-time is spectacular. From pressing the power button to being presented with the gnome greeter box is now 25 seconds. Ye olde disk took around 117 seconds, give or take.
The POST on the Dell is 10 seconds of that, and in the Linux boot there is a full 10 seconds of waiting for IDE timeouts while probing what devices are connected. So the actual OS load-to-login time is 5 seconds. Hellishly fast. Not as fast as I can boot some of my embedded Linux kit, but if we nixed the timeouts you'd be at 15 seconds from the power button, which is within a poof-teenth of instant-on. And at no great IT-guru hackerdom difficulty either.
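The arithmetic, for anyone checking the numbers above - a trivial sketch of where the 25 seconds goes:

```python
# Where the measured 25 seconds of boot time goes (figures from my timings above).
total = 25          # power button to GNOME greeter, hybrid drive
post = 10           # Dell BIOS POST
ide_timeouts = 10   # kernel waiting on IDE probe timeouts

os_load = total - post - ide_timeouts
print(os_load)               # 5 seconds of actual OS load-to-login
print(total - ide_timeouts)  # 15 seconds if the timeouts were nixed
```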
These suckers are probably the simplest "blindly plug-and-play" speed bump you'll get for your laptop short of putting in more memory (and most machines come with boggins of memory these days, anyway...).
Very impressive speed lift for such a relatively small amount of high speed cache. It'd be interesting to get our Computer Architecture students to do a simulation of this rather than memory caches - the overall speed boost might actually be on par!
Ahh. But *HOME* networks are administratorless!
@Glen Turner: Absolutely. But home networks are built by the 12 year old child who goes to the computer shop, buys the you-beaut wireless ADSL router thing and plugs it in (unless it is a home network for an El Reg reader!). Sadly, we need to engineer for the lowest common denominator...
I know my router has very limited control... I'd lay quids on the table that most are similar.
Regrettably the archetypal home user is the most likely to be bitten by this.