12 posts • joined 8 Jan 2008
Another reason the standard should have been 10Gbit/s long ago
It is really too bad the industry didn't move forward with 10Gbit/s Ethernet long ago. The ASICs would be so cheap by now. It is pretty sad that I can have an SSD on my main PC and an SSD on my laptop and barely get 250MB/sec after overhead. Just like I was disappointed that SATA3 was a meager 600MB/sec when it should have been 1GB/sec. If you look at the latest SSDs that include onboard RAM, your SSD can peak way beyond 600MB/sec. Think: if we were already at SATA 3.2 speeds, SSD vendors could be loading up a couple GB of memory on an SSD and flying burst data off like crazy. For many years I've dreamed of a hard drive with a memory controller and a SO-DIMM slot on it. An 8 or 16GB memory stick for around $150 and your drive would fly.
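For what it's worth, the 600MB/sec figure isn't arbitrary: SATA uses 8b/10b line coding, so only 8 of every 10 bits on the wire are payload. A quick back-of-the-envelope sketch (my own arithmetic, not vendor numbers):

```python
# SATA payload rate from raw line rate, accounting for 8b/10b coding.

def sata_payload_mb_per_s(line_rate_gbit):
    """Usable payload in MB/s for a SATA link with 8b/10b line coding."""
    line_bits = line_rate_gbit * 1e9
    data_bits = line_bits * 8 / 10   # 8b/10b: 80% of line bits are data
    return data_bits / 8 / 1e6       # bits -> bytes -> MB

print(sata_payload_mb_per_s(3))  # SATA II:  300.0 MB/s
print(sata_payload_mb_per_s(6))  # SATA III: 600.0 MB/s
```

So the "meager" 600MB/sec is just the 6Gbit/s line rate with the coding tax taken out.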
I want this for my home PC network.. yeah, really
I'm so tired of being limited to 1GbE at home.. and 10GbE is insanely expensive.. I wish everything would move to optical.. imagine if we could have 100GbE as your home network.. PCs could natively share SMP and devices, etc..
I wish IBM's group of 32nm super friends would all band together and work out a mass-produced, low-cost optical chipset for motherboards, NICs, switches, etc. that could go mainstream.. give me 10GbE, 40GbE, 100GbE.. just something more than 1GbE, and that is stackable ( http://en.wikipedia.org/wiki/Link_aggregation ).
10GbE won't be more expensive for long.
While delayed about a year, I think later generations of X58 were planned to have 10GbE included. InfiniBand keeps losing because it's not mass market. It's only a matter of time before we are blessed with 10GbE on our motherboards.
I also have hopes that with IBM's advancements in cheap optical technology we'll see an eventual optical PCIe interconnect at 40Gb or 100Gb that will branch into home networks. I really want to be able to buy a high-end PC and natively interconnect it with a BIOS-level hypervisor creating a single resource pool. This would also affect GPU arrays.
Sprint sucks too...
Kind of ironic that a phone I bought on Sprint back in 2004 had a built-in Java version of MS Messenger and could send unlimited texts to other MS Messenger (and Yahoo IM) users. I guess back then they realized that 200 bytes was about the equivalent of 1/10th of a second of voice traffic, so why charge people for it..
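The 200-byte figure roughly checks out, assuming a nominal 16kbit/s voice codec (a round number I'm picking for the sketch; real cellular codecs like GSM full-rate run closer to 13kbit/s, which would make a text worth even less airtime):

```python
# How much talk time one text message equals at a given voice codec bitrate.

def equivalent_voice_seconds(message_bytes, codec_kbit_per_s=16):
    """Seconds of voice traffic carrying the same number of bits."""
    bits = message_bytes * 8
    return bits / (codec_kbit_per_s * 1000)

print(equivalent_voice_seconds(200))  # 0.1 seconds at 16 kbit/s
```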
Along comes my new phone, the jim-dandy Samsung Instinct. Heck of a nice phone, but gee.. no instant messenger. I can only text other cells (or people who have their cells intertwined with their messenger). Of course, I had to upgrade to the Sprint Talk/Message/Share plan, which ran me $40/mo more than my previous plan.
Gee, I wonder how a phone 4 years newer than my ancient LG couldn't handle the code of any of the standard internet IM clients. Why? Because Sprint is a nickel-and-diming, unethical PoS company.
This is why I like Google's idea of an open network. Let *me* pick what traffic I want to send over the network. Bunch of rip-off mother-lovin' frackers. (As they'd say on BSG.)
What the heck is DARPA doing funding Sun? IBM is....
Sun? They have no hope in the processor world. IBM is miles ahead on on-chip optical interconnects. Last I read, they also got funding from DARPA and were producing some pretty incredible optical results.
This sounds like another deal made with a senator on a golf course instead of by tech heads who have a clue.
ColdFusion has run under Java for years.
ColdFusion, which is still one of the best and most efficient programming languages, has basically been a Java app since 2001. Really, it feels like just another Java language in many cases. PHP has been called the poor man's ColdFusion, and rightly so.
It probably would have been adopted as a standard Java core piece if it weren't for Adobe's greed. Adobe has basically been driving CF into the ground by making it unaffordable to the majority, leaving IT people with little argument for their IT managers in a world full of free Java and .NET servers.
The Java engine should get mapped to lots of languages. There is development that just doesn't need to be written in Java or JSP. Developers can code a complex web page in CF in about 1/4th the time of a Java page. I'm not saying CF or PHP is for everyone, but these languages have their place under a universal engine.
Well, I did only say "What's Interesting".. I didn't say people would be plugging their desktop 40GbE NIC into a 45 mile fiber line they ran out to their gaming buddies house. ;-)
However, I guess it is possible for a really brightly lit enterprise adapter to span more than the 100-meter (330-foot) minimum, or even the second-tier 10-kilometer (6-mile) minimum.
40G and 100G, not just for the telecom kids.
What's interesting here is in relatively short order, desktop motherboards will include 10GbE interfaces and server motherboards will probably have 40GbE.
Intel's Eaglelake chipset, due out late this year, will have 10GbE. I'm not sure if that is copper or optical, or both. 10GbE included in a stock system basically turns it into a very attractive box for adding to a cluster. Intel released a document showing what looked to be an argument to somewhat skip 10GbE interfaces as this generation's preferred jump and go to 40GbE ASAP. The same paper said that 100GbE won't be affordable for the mainstream until 2015 or 2020.
That being said, I'd like to forward the rumor of eventual low-cost, off-the-shelf, mass-produced 32nm 100GbE interfaces around 2010 via IBM's 32nm partnership program. IBM wants a low-cost way to interconnect those massive arrays of POWER7-based systems. (Just a prediction....)
... and rumor #2: 40GbE Sony PlayStation 4 (PS4) interconnects. 10GbE will probably suffice, but 40GbE will give it the extra kick for those 100,000-node clusters that the government needs to build SkyNet. At these speeds, PS4 clusters will behave like a big nasty IBM mainframe.
.. another prediction: by 2013, real-time high-def audio/video data migrates from the HDCP-ruined HDMI standard to 10GbE/40GbE home networks, streamed with H.264 AVCHD and made universal. Any display device anywhere can display AV from any other device anywhere in the home. PCs begin to virtualize their AV content into IP streams served by Teradici's PC-over-IP technology. DisplayPort and HDMI begin to go the way of composite cables. :) (OK, this one is just a big wish.)
Good IBM, now put 4-8 of these on clusterable cards and write DirectX drivers.
The Cell was designed from the ground up to be stackable. Now that they are fast enough, let's start scaling these into huge arrays and write some DirectX drivers for them :)
My 8800GT runs at 600MHz. If a Cell can do 6GHz, and you can cluster a bunch of them, even with some wacky emulation there has to be a way to get up to speed.
I'll even buy the first version if it is only as fast as 10 8800GT's. :-0
P.S. Add native H.264 encode and decode to each of the cores, please. I have some home movies that need to be compressed.
IBM needs to add native CFML support to this
CFML is an excellent language, much easier to use than PHP. While there are open-source ColdFusion servers, IBM should license the core from Adobe and include it as standard with this new platform.
IBM - please bring us this technology into the home
I want off-the-shelf 100GbE optical in the home.. cheap interconnect chips in my PCs that let me modularly stack computing resources... IBM, bring your mainframe and server chipset technology to the home enthusiast.. I'm tired of being trapped in the Intel/MS hole.. IBM is the king of virtualization technology.. I want it in my home.. :)
I want to buy level-2 processor cache modules that can optically interconnect with the processors.. I want processor modules that are electrically isolated from my host motherboard and interconnected via dense strands of optical interconnects. Let's modularize the PC so components can stack like Cells to grow as big as my money can buy.
32nm 100GbE multi-core interface chips need multi-vendor standardization and mass production. Let's skip the 10GbE standard and go straight to the future. If a good number of partner vendors came together with IBM on the 32nm technology, a very advanced 32nm SoC controller chip with features such as a TCP/IP offload engine, iSCSI processing, and advanced QoS/firewall/SPI/etc. could be devised that would make 100GbE an instantly adopted standard.
iSCSI is becoming a simple commodity
It is kind of ridiculous what people pay for this stuff. iSCSI technology is becoming simple commodity technology. EMC has been ripping off businesses since its inception. When storage was $1/GB, the company I was at was paying $100/GB to EMC.
What is really needed is simple off-the-shelf hardware with huge banks of memory for caching. With a little modification, a Xeon server motherboard with 16-32 FB-DIMM slots (up to 256GB of RAM), RAID cards, and Linux host storage software can offer very scalable storage units on the cheap.
1GbE still doesn't cut it. 10GbE is getting there. But what I really look forward to is the 32nm-based 100GbE optical chipsets that will make 100GbE storage networks commonplace (even in the home eventually). Someday, even a home enthusiast will be able to buy a couple-TB hard drive, drop it into his central home storage server (loaded with insane amounts of cache), and run all his home machines (and DVRs, home automation, security video DVRs, etc.) on it.
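To put the "doesn't cut it" claim in rough numbers: here's a sketch of how long it takes to move a 2TB drive's worth of data at each link speed, assuming an arbitrary 90% payload efficiency (real numbers depend on protocol overhead and whether the disks and cache can keep up):

```python
# Bulk transfer time over Ethernet links of various speeds.

def transfer_hours(data_tb, link_gbit_per_s, efficiency=0.9):
    """Hours to move data_tb terabytes at the given link rate and efficiency."""
    data_bits = data_tb * 1e12 * 8
    seconds = data_bits / (link_gbit_per_s * 1e9 * efficiency)
    return seconds / 3600

for link in (1, 10, 100):
    print(f"{link:>3} GbE: {transfer_hours(2, link):.2f} h")
# 1GbE takes ~5 hours; 100GbE does the same job in about 3 minutes.
```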