12 posts • joined 18 Nov 2010
I built an audio workstation a decade ago using this: best onboard audio I ever heard.
Is this the same Maggie that withdrew research funding for the national fibre-optic network from BT and Mercury in 1988, because of over-zealous neo-liberal ideology and an infatuation with Telewest etc.?
If so, then tell me again why it is that she helped innovation.
Font engine in the kernel: priv esc in Win2012 using a font?
That would be the Cisco method of connecting distributed datacentres.
Also I thought that vMotion was a bit latency-sensitive, with roughly 10 ms being the limit.
SELinux crossed with MINIX's treatment of userspace.
I agree if you wanted to use Fedora for your desktop, but you just wouldn't in an enterprise environment. I have been happily using CentOS 5 (5.0 to 5.7, gotta update to stay safe) for *years* on the desktop (the conf files haven't changed location in all that time and it has all "just worked"). I am now migrating to CentOS 6 (6.2), and based on what is happening in Fedora I am really looking forward to RHEL/CentOS 7 when it comes out the door.
We use CentOS because we can support it; for those without internal support, Red Hat will provide that.
Compared to being stuck on XP with no hope of upgrading, either because of a lack of Win7 drivers for old hardware or because the boxen are "too slow" (a forklift upgrade, in this economic climate?), I'll take Linux on the desktop today.
Other than games, I don't see why anyone would want to stay with Windows.
If you have business apps that need XP, then use your current licences to run an XP virtual machine; at least then your host OS will be updateable after XP goes EOL. After all, VT-x has been around since 2005.
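A quick sketch of how you would check for that hardware support on a Linux host: VT-x shows up as the `vmx` flag (AMD-V as `svm`) in `/proc/cpuinfo`. The sample string below is illustrative, not taken from a real box.

```python
def has_hw_virt(cpuinfo_text):
    """Return True if the CPU flags advertise Intel VT-x (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

# On a real host you would read the file itself:
#   has_hw_virt(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme msr pae vmx sse2"
print(has_hw_virt(sample))  # True on this sample
```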
If only there was an alternative system, like an OS that had all the software packages within some form of... oh, let's call it "a repository", which would allow you to update your desktops and servers in a planned manner, having first gone through some sort of change-control process. Maybe one where you could find alternatives like xpdf/Evince or OpenJDK, where the underlying OS would be supported for 10 years with security backports, which didn't demand hardware refreshes every 3 years, didn't seem to have problems with cruft requiring reinstallation, and had proper privilege separation.
Yes and no. The point of this (and it sounds like it involves LISP) is that you shouldn't have to change the IGP database or cause any churn in your routing protocol to move a host around your network, when you can just tunnel and then shift the traffic to the host address.
The big change is that for years we have been told that tunnels are not the way, but now it seems they are :)
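A toy sketch of that idea (a hypothetical endpoint-ID-to-locator map, not the actual LISP protocol): the mapping system, not the IGP, tracks which edge router currently holds a host, so moving the host is a single map update with no routing-protocol churn.

```python
# Toy LISP-style mapping: endpoint IDs (EIDs) stay fixed while the
# routing locator (RLOC) changes when a host moves. The IGP only ever
# carries the RLOCs, so a host move causes no IGP churn.
mapping = {"10.1.1.5": "192.0.2.1"}   # EID -> RLOC (current datacentre edge)

def encapsulate(eid, payload):
    """Tunnel the packet to whichever edge router currently holds the EID."""
    return {"outer_dst": mapping[eid], "inner_dst": eid, "payload": payload}

print(encapsulate("10.1.1.5", "hello"))
mapping["10.1.1.5"] = "198.51.100.7"  # vMotion the host: one map update
print(encapsulate("10.1.1.5", "hello"))
```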
It's the virtualisation!
A single server probably doesn't need 10Gbps, but a physical server hosting many virtual servers might. Then try using vMotion to shift a loaded virtual server to another box with more capacity while it is running, with traffic "tromboning" through the original host until the switches all update their FIBs. That will be the driver for faster links.
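Back-of-envelope arithmetic for the first point, with entirely made-up numbers: a stack of modest VMs on one physical host can easily add up past a single 10G NIC.

```python
# Made-up figures: many modest virtual servers on one physical host.
vms = 30
avg_gbps_per_vm = 0.5            # assumed average load per virtual server
aggregate = vms * avg_gbps_per_vm
print(aggregate)                 # 15.0 -- past a single 10 Gbps link
```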
I was slightly disappointed to see no mention of Juniper's QFabric in the article, which uses a non-blocking Clos network in the core to allow full capacity from any ToR switch to another, and which is apparently in production with some of their customers now.
This has the potential to create a paradigm shift in networking technology, especially in the data centre. There is a really good discussion of it in episode 40 of the (excellent) Packet Pushers podcast.
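For anyone curious what "non-blocking Clos" means concretely: Clos's classic result is that a three-stage fabric with n inputs per ingress switch is strict-sense non-blocking when the middle stage has at least 2n - 1 switches. A quick check:

```python
def clos_strictly_nonblocking(n_inputs_per_ingress, n_middle_switches):
    """Clos's condition: a 3-stage fabric is strict-sense non-blocking
    when the middle stage has at least 2n - 1 switches."""
    return n_middle_switches >= 2 * n_inputs_per_ingress - 1

print(clos_strictly_nonblocking(8, 15))  # True: 15 >= 2*8 - 1
print(clos_strictly_nonblocking(8, 14))  # False: one middle switch short
```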
It uses mac-in-mac, which kinda needs to be in layer 2.
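If I have the 802.1ah (mac-in-mac / Provider Backbone Bridges) header fields right, the per-frame cost of that outer layer-2 wrapper works out as follows, assuming a single B-TAG and I-TAG:

```python
# 802.1ah wraps the customer frame in a new outer Ethernet header.
# Field sizes in bytes (assuming one B-TAG and one I-TAG):
b_da, b_sa, b_tag, i_tag = 6, 6, 4, 6
overhead = b_da + b_sa + b_tag + i_tag
print(overhead)  # 22 bytes added per frame
```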
Use of IS-IS (the underlying protocol for TRILL) makes sense for this, but the movement away from the traditional three-layer enterprise model is probably going to happen at some point, irrespective of whether it is TRILL, SPB, or QFabric. Maybe we will all move towards controller-based networking for guided-wave as well as wireless in the next 5 years.
I do worry about massive fault domains though.
An interesting point about 40Gb/100Gb links is that they will have to use MPO connectors and custom leads (OM4, IIRC), so if you are designing a DC now, don't over-specify your fibre requirements "just in case", as none of it will work with the coming standards.
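The reason for the MPO requirement is parallel optics: the short-reach 802.3ba variants run multiple 10G lanes side by side, one fibre per lane per direction, which is why the fibre counts balloon.

```python
def fibres_needed(lanes):
    """Each 10G lane needs one fibre per direction."""
    return lanes * 2

print(fibres_needed(4))   # 40GBASE-SR4: 8 fibres (on a 12-fibre MPO)
print(fibres_needed(10))  # 100GBASE-SR10: 20 fibres (on a 24-fibre MPO)
```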
When did that start on CentOS?
You can go from 5.0 to 5.5 current in one iteration of yum update and a reboot.
I don't think I have ever had to update, then reboot, and update again, and I have been running CentOS servers for years.