Cloud on the rise (Love JANET)

Dr Phil Richards from Loughborough University heads into The Reg studios on October 20th at 10am (BST) to tell us how he's building modular hybrid clouds. Dr Richards has recognised numerous challenges with ageing data centres, some of which you'll recognise and some of which you won't. One of them was that doing things the …


This topic is closed for new posts.
  1. Anonymous Coward

    Does it still run X.25?

    1. alan buxey

      a little more advanced now

      A 40Gb backbone with 10Gb links to several of the campuses. The current JANET network is still made up of Metropolitan Area Networks (MANs), which have local expertise and knowledge - and their own dark fibre in many places - providing each campus with Ethernet WAN in most places.

      JANET is also ahead of the game on wireless roaming, thanks to eduroam: 802.1X WPA-Enterprise wireless at all participating sites, meaning visitors connecting to the eduroam SSID authenticate with their home site's credentials - from wherever they are in the world.
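      For anyone curious what that looks like client-side, a minimal wpa_supplicant stanza is roughly the following sketch - the identity, EAP method and CA path here are placeholders, since each home institution publishes its own settings:

      ```
      network={
          ssid="eduroam"
          key_mgmt=WPA-EAP
          eap=PEAP
          phase2="auth=MSCHAPV2"
          # the realm after the @ is what routes the request back to your home site
          identity="user@home-institution.ac.uk"
          # pin your home IdP's CA so you don't hand credentials to a rogue AP
          ca_cert="/etc/ssl/certs/home-idp-ca.pem"
      }
      ```

      The visited site never sees the password; the inner EAP exchange is relayed back to the home RADIUS server for verification.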

      1. Anonymous Coward

        40G? 100G? you decide

        ....the regional stuff may be 40Gb but the 'core' is 100Gb now - see JANET technical pages ;-)

    2. firefly

      When I worked as a university sysadmin a decade ago I must have downloaded the entire internet several times over, with the 100Mb connection to my desktop often being the limiting factor. This was when consumer broadband was in its infancy and you had a 0.5 meg connection at home if you were lucky. Happy days.

  2. Anonymous Coward

    Yes, but it runs SLIP on top of that.

    The real question is how to prevent pricks with company credit cards from taking their deskside collection of servers and virtualizing them into a different cloud.

    p.s. not everyone with a company card is a prick.

    p.p.s not every prick has a company credit card.

  3. James 100

    IPv6 is still not even on the horizon for deployment at my University - and even Teredo is blocked by the firewall (can't "risk" having anything modern, after all!)

    I sometimes suspect from the lavish expenditure on overpriced commercial products (six figure email fiasco, I'm glaring at you!) that if anything, they are OVER-funded, breeding complacency. Why be efficient with IP addressing? They were all handed a /16 or two for free - and still hog them as shortages become an issue for the rest of us.

    I know it's different in other universities - the one I studied at originally gave us Exim and a fully-replicated open-source IMAP service, as a side-effect of delivering a far better email service at lower cost - but others just spray money up the wall on consultants and shiny toys. Much like companies, I suppose. Nice to see Loughborough seems to be on the better side of that dividing line!

    1. Anonymous Coward

      There are a few universities that do provide IPv6 connectivity - though not as many as there could be, and only a handful are doing DNSSEC. Loughborough is one of the few universities where my laptop was given an IPv6 address (a proper public one that worked!) when I went to a conference there last year... it seemed to be quicker than IPv4 too! ;-)
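      A quick way to check whether a laptop has actually been handed a routable v6 address (rather than just a link-local one) is to let the stack pick a source address for a global destination. A sketch in Python - the Google DNS address is just a convenient well-known global target, and no packet is actually sent by connect() on a UDP socket:

      ```python
      import socket

      def global_ipv6_source(target="2001:4860:4860::8888", port=53):
          """Return the local IPv6 address the kernel would pick to reach a
          global destination, or None if there is no v6 route at all.
          connect() on a UDP socket only selects a route; nothing is sent."""
          try:
              with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
                  s.connect((target, port))
                  return s.getsockname()[0]
          except OSError:
              return None

      print(global_ipv6_source())
      ```

      On a properly v6-connected network this prints a public 2000::/3 address; on a v4-only network it prints None.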

  4. James 100

    @theodore: the real key to achieving that is to make sure the in-house option is actually better than buying over the counter outside! When I am faced with a choice between registering a domain myself for under £10, or paying more than ten times as much to do it through an in-house intermediary, which option makes sense? A smarter organisation wouldn't allow the 1000+% markup in-house, a dumber organisation would mandate being ripped off internally...

    1. Anonymous Coward

      "a dumber organisation would mandate being ripped off internally..."

      Ah, I see you are familiar with the inner workings of the NHS

  5. Graham Bartlett

    Nice to see Loughborough sorting out its pipes

    When I was there (93-97), everything was seriously slow. Intra-campus networking was at best adequate, but connection to the outside world sucked. Although things got pretty good late at night, when you'd probably only have a dozen geeks using it.

    That said, the whole internet was pretty sucky back then. Sure, it was connected, but most of the connections were hardly better than a couple of tin cans and a bit of string. If you needed to get anything off a US-based server (e.g. datasheets) then you needed to be doing it before midday or after midnight. Try doing it during US working hours, and a 10 bytes/sec download speed was about average.

  6. Anonymous Coward

    Ah fond memories ...

    Thanks to JANET, my name is immortalised in a post going back to 1987 (when I was 20)... it has proved useful when people ask at interviews "just how much experience have you of the internet?". I can truthfully answer that I helped devise one of the protocol standards.


Biting the hand that feeds IT © 1998–2019