Stealthy chip start-up Teradici Corp. of Burnaby, British Columbia, emerged from behind the curtain on Tuesday to reveal its long-anticipated semiconductor fix for the remote PC desktop dilemma. The product, geared toward OEM systems manufacturers, consists of a pair of chips designed to overcome the shortcomings of existing …
Will the vendors adopt it?
Will the vendors adopt it? Only if the margins are superior to what they are getting for "business" PCs. And the volume will have to make up for the raw loss in revenue.
It's going to have to be consumer-driven, which means it will come from a smaller company that will either be bought by IBM/HP/Dell or driven out of business when one of them decides to join the fray.
Blade PCs are a great solution, but I don't think many people see the problem yet. Most companies now see laptops as the solution to a lot of their desktop issues. My current customer certainly does (even contractors are issued company laptops).
Blade PCs would solve the problem of a worker taking home a million names, addresses, SSNs, birthdates and credit card numbers. If people need to work from home, issue them another client (like I said, they'd better be cheap) and use a VPN (broadband goes without saying; no one with a business need uses dial-up).
I wish them well.
The early adopters that will see a very good ROI and a security advantage will be financial institutions and design companies. By using a small box connected to their blade with characteristics similar to an IntelliStation (superior to a laptop), to take IBM as an example, they'll keep the data secure and easily transportable. This would even allow the financial guru to have a remote box at home. For a design company with designers worldwide, it would allow 24-hour productivity, with three different engineers working on the same machine. The fact that it's a 1:1 workstation in a remote capacity opens up endless possibilities.
...and what data center has room for all the PCs back there?
It's hard enough to cool and power half a rack of SERVERS, let alone put all the corporate desktops back there, UNLESS you virtualize, and I don't see that here, whereas you can already get VDI solutions from major vendors like HP and IBM.
Maybe if they put the chip on an easy-to-integrate ASIC they could sell it to the tier-1s, who could combine it with VMware.
I don't know many data center managers who have the port density to run that much more Ethernet without virtualizing that as well. HP already has with their blades.
Similar to SunRay?
The comments in the article about "not running an OS", "small box with no state on desktop", etc. all apply equally to the SunRay experience from Sun. Quite nice in some respects when it works, provided you have sufficiently well-provisioned hardware and networks.
Looks like the main advantage of this "Blade PC" would be the ability to run other OSes natively. But does your Mac OS or Windows license allow you to do this? Mac OS, at least, requires a real Apple computer.
What do we do if we need to put a CD-ROM in the computer, or perhaps a floppy disk or other media? Do we all need USB CD-ROM drives, or will we share them and send our CD-ROMs down to the admin team to put in the one single CD-ROM drive for the entire business?
A thin-Client is not just a thin PC
Interesting article; however, describing an existing thin client as a thin PC is incorrect. Thin clients do NOT have a host OS (other than a very basic embedded OS in the BIOS), are very simple to manage compared to a standard desktop PC, and are not prone to virus attacks, as no data is held on the device.
A good example of existing Thin-Client devices can be found here http://www.chippc.com/thin-clients/jack-pc/
Re: Media Devices
Martin, I think that's the point... you do not NEED CD-ROMs, USB keys or anything of the sort. Blade PCs can be remotely configured, software can be deployed using technology that does all that already. You should not need to insert a CD-ROM anywhere. Or a USB key. Or an iPod. Or... or... or...
Great idea - but not the first application for the chip.
If I'm understanding this correctly, this solution is a pair of chips that take DVI and low-level USB input, squirt it over Ethernet, then provide the DVI and general USB out the other end.
The first market for these is KVM extenders, remote-video and remote-management applications.
I've built many systems where I've had to remotely control a PC, and at present the best option is VNC. But VNC can't carry video very well, which means I often need to run a length of VGA cable, or even VGA-over-Cat5 if it's further than 50 m or so.
If I could buy two boxes to plug into the network that would send two channels of DVI and at least one fully-featured USB connector across the network, it would make life a lot easier.
And if the receiver box were PoE, I'd buy a few right now!
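The two-box pipeline described above (video and USB packetized at one end, reconstructed at the other) can be sketched as a simple tagged wire format. The field layout, header constants and function names below are assumptions for illustration only, not Teradici's actual protocol:

```python
import struct

# Hypothetical framing for a DVI-tile-plus-USB extender link.
# Header: message kind, tile position/size, payload length.
HDR = struct.Struct("!BHHHHI")   # 1 + 2*4 + 4 = 13 bytes, network byte order
KIND_VIDEO, KIND_USB = 0, 1      # assumed message types

def encode(kind, x, y, w, h, payload):
    """Prefix a payload with a fixed header so the receiver can demux it."""
    return HDR.pack(kind, x, y, w, h, len(payload)) + payload

def decode(packet):
    """Split a packet back into its header fields and payload."""
    kind, x, y, w, h, n = HDR.unpack_from(packet)
    return kind, x, y, w, h, packet[HDR.size:HDR.size + n]

# One 16x16 RGB tile of video (768 bytes of pixel data):
tile = encode(KIND_VIDEO, 0, 0, 16, 16, b"\x00" * 768)
assert decode(tile)[0] == KIND_VIDEO
```

The real chips would of course do this in silicon with compression, but the demux idea is the same: the receiver only needs the header to know whether the bytes go to the display or the USB port.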
IBM already did this...in 1964...
If my memory serves me, IBM deployed a very similar technology, albeit INCLUDING virtualization, back in the 1960s. It was called SNA, and it was deployed using "dumb" terminals (3270s) on every desktop. Instead of workgroup Ethernet switches, IBM used a 3274-xx controller box for every 8/16/32 terminals, connected via a semi-high-speed link (typically 2780/3780 RJE) back to the mainframe. The 3270 terminals were connected to the 3274 via coax cable that carried video over baseband and returned keystrokes, which the 3274 mapped to "unprotected" fields in the display.
Toward the end of the 3270 era (the 1980s), IBM pushed out colour terminals with VGA-class graphics and large-screen plasma displays (the 3279 and 3290 respectively) that included light pens, but no mouse.
At the same time, AT&T went one further: it had a square CRT terminal that DID include a mouse and "high-res" graphics, and it could be connected remotely via a 56 kbit/s link to the UNIX "mainframe" (a 3B15).
So, verily, there IS nothing new under the sun (or SUN)...hardly...
Reinvented the passive X terminal?
I remember working on passive thin clients, with a screen, a keyboard and a mouse, not so long ago.
OK, there wasn't any sound coming back, and there were no dual DVI links on the back of the box.
By the time I used them they were already things of the past, but I used them nonetheless, because a server is more reliable than a desktop machine and the passive clients never failed (and in this case because NFS was much, much faster when working on the server).
With the very high core-count servers available these days, would it not be much more interesting to have people using common resources, even for display-level tasks? I am pretty sure that massive SMP machines could handle tons of clients, as long as the network is beefy and there is some hardware for compression and possibly stream encryption.
Most of the time you do not need massive horsepower from your machine; even when working hard, most of the time you need a framebuffer and not much more. Unless you are using external hardware, there is no need for a full box at all. And if you are using external hardware, either you are the only expert on this bit in the company/service/world and you should be granted a machine, or it should be shared. I understand that this would kill the market for USB rocket launchers, but well....
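To put rough numbers on the "beefy network plus compression hardware" point, here is a back-of-envelope estimate. The resolution, colour depth, refresh rate and compression ratio are illustrative assumptions, not measured figures:

```python
# Rough bandwidth estimate for remoting one client's framebuffer.
# All figures below are illustrative assumptions, not measurements.
width, height = 1280, 1024   # assumed desktop resolution
bits_per_pixel = 24          # assumed colour depth
refresh_hz = 60              # assumed refresh rate

raw_bps = width * height * bits_per_pixel * refresh_hz
print(f"uncompressed: {raw_bps / 1e9:.2f} Gbit/s per client")

# Even with an assumed 20:1 hardware compression ratio, each client
# still claims a sizeable slice of a gigabit link:
compressed_bps = raw_bps / 20
print(f"at 20:1 compression: {compressed_bps / 1e6:.0f} Mbit/s per client")
```

At nearly 2 Gbit/s raw per client, a handful of uncompressed desktops would saturate any backbone of the day, which is why doing the compression in dedicated silicon rather than on the server's CPUs is attractive.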
I was blessed never having to work on a Windows network for too long, so I do not know how multi client on a single server is performing but for UNIX people, thin clients have been used for as long as I can remember.
....you step out of the office.
The rise of the laptop and portable computing against the desktop is well documented. Ten years ago this might possibly have been a money-spinner. Now it's an attempt to turn the Supertanker by poking it with the index finger.
The disclaimer in the small print at the end of the article says waaay more than the article itself.
No room in the brain
I don't think our corporate IT people (outsourced) could do anything innovative.
Hm... but I was more impressed by the SunRay clients.
And UNIX has not one, but *two* ways of achieving true networked systems without having data spread all around the workstations:
- NFS/NIS (or NFS/LDAP maybe?), workstations have only the OS installed and configured. Home directories are NAS'd.
- Passive X terminals. All the heavy duty processing is done server-side. With some truly evil behemoth servers I've seen, that doesn't seem so far-fetched...
Though the "KVM over Internet" solution could well serve other purposes... maybe remote emergency server administration? When your server goes down, or gets stuck in a "Press any key on console" message. Sometimes you don't even have physical access to the server...
Blade PCs don't work well...
... unless you use them 1:1 (expensive) or virtualise (which requires beefier hardware).
For whatever reason, our management decided to go for blades for a remote office (blades in London, users in Europe), but ClearCube didn't (at the time) advise us to virtualise, and so we've had endless problems with users sharing OS resources on the blades. I'm now trying to start from scratch and redo it all as virtual PCs, but it's a maintenance headache.
The whole blade PC thing sounds great to management, but to the techs who actually have to implement and maintain it, it just means more work than necessary. (This is speaking only as a ClearCube user; HP's or IBM's solutions might be completely different.)
"Interesting article; however, describing an existing thin client as a thin PC is incorrect. Thin clients do NOT have a host OS (other than a very basic embedded OS in the BIOS)"
Actually, ClearCube's I/Port (which works over Ethernet instead of local copper/fibre) runs Windows XP Embedded. These are a complete PITA to maintain and can easily brick when you try to update the image on them.
The problem with blade PCs is that they are still PCs, running applications with huge memory footprints. When I last used X terminals in anger, we had a ratio of about ten X terminals per (not very big) server, and because the software was not PC-based, we got reasonable performance. Add to that the fact that you can beef up performance by adding dedicated specialist servers elsewhere on your network that work just as well as the controlling server at delivering applications. Real distributed computing.
Sun once said "The network is the computer", and I believe it to be the case.
BTW, the AT&T systems mentioned by Brett were called BLITs (Bell Labs Intelligent Terminals), the 5620 and 630 (and I know there were later models), which worked over serial, proto-twisted-pair Ethernet (called StarLAN) or full-blown twisted-pair Ethernet (later models). They ran a proprietary OS that was probably called Layers (my memory fades) and offered windowed dumb-terminal sessions or locally run, downloaded applications.
Hey guys, a better solution already exists!!!
The problem with blade PCs is that you keep most of the potential hardware and software issues. That's why I prefer a virtualisation solution: you keep your virtual machines on a secure, redundant and powerful server, and you can dynamically allocate them more or fewer resources, move them around, etc. I heard that NEC launched that kind of solution ("Virtual PC Center") several months ago. I found, thanks to my best virtual friend (Google), that it is in fact based on a server hosting virtual PCs: each user connects to his virtual machine from a tiny thin client that has a magical embedded chip. The virtual machine sends compressed data to this chip, so that it can even display HD-quality video. And this embedded chip has hardware-based VoIP capability too. So I do not see what Teradici's innovation is...