Posts by ALFRED LAMDOLLES

3 publicly visible posts • joined 8 May 2008

Microsoft may lift VM licensing restrictions next week

ALFRED LAMDOLLES
Go

90 days..!?

Ever since Microsoft released its server licensing policy for virtual machines, it has been obvious that the move doesn't sound logical or reasonable at all.

Generally, companies implement virtualisation to cut IS/IT management time, cost and space, and to provide high server availability. But with this 90-day rule, anyone who wants to migrate a virtual server from one physical host to another will suffer. What does that scenario mean? Say I have a server that just broke down due to hardware failure. Well, I'm lucky: the hard disks still work. While I'm claiming the hardware warranty or waiting for a replacement, I'll have to transfer all the VHD files to another physical server and boot them up to resume service. But under Microsoft's 90-day rule, I can't transfer the virtual server again until the 90 days are up. So does that mean I need to keep spare licences on hand for emergencies? That just lets Microsoft squeeze another few hundred, or even a few thousand, out of my company's budget.

So, does this rule sound logical at all??

AMD plans 12-core server chip for 2010

ALFRED LAMDOLLES
Linux

The need for multi-core

"Your 12 core..... for dozens of cycles." by Henry Cobb

Well, it's true that the on/off switch here does matter. Personally, I've done a lot of testing on many computers, laptops and servers with most of the common processor models, ranging from the old 386 series to quad-core Xeons, and from the old AMD K6-2 to the Turion. What Henry says here is a fact. When running everyday programs like MS Office, Windows, media players and web browsers, most of our computers actually depend more on transfers between the hard drive and RAM. Most of the time we spend waiting for a program to load is spent seeking data and loading state, not computing. On average, hardly any of our machines' workloads even reach 50 per cent CPU utilisation, if you ever measure it. So multi-core doesn't seem to be a very critical technology for us.
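If you want to check that on your own machine, here's a minimal sketch. It assumes the third-party psutil package (my addition, not something from the article): it samples system-wide CPU utilisation for a minute while you work normally, then reports the average and the peak.

```python
import psutil

# Sample system-wide CPU utilisation once a second for a minute.
# cpu_percent(interval=1) blocks for one second and returns the
# utilisation over that second, as a percentage.
samples = [psutil.cpu_percent(interval=1) for _ in range(60)]

print(f"average CPU load: {sum(samples) / len(samples):.1f}%")
print(f"peak CPU load:    {max(samples):.1f}%")
```

On a typical desktop workload of office apps and browsing, the average rarely gets anywhere near 50 per cent.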

Multi-core matters more to those who need raw data processing and calculation power, for example Folding@home or earth simulation. These computing tasks can work with a very small or fixed set of data while using up all available processing power until the job is done. Therefore the data transfer rate between RAM and the processors, and from core to core, plays a critical role. If you run Folding@home or any similar grid-computing software, you'll find it has been designed to run separate work units on individual cores. Honestly, that does bring a number of benefits.
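As a rough illustration of that one-work-unit-per-core pattern, here's a minimal sketch in plain Python multiprocessing. The work function is a made-up CPU-bound stand-in, not anything from Folding@home itself:

```python
import multiprocessing

def work_unit(seed: int) -> int:
    # Pure CPU-bound busywork: no disk or network access, so each
    # worker process can keep one core saturated until it finishes.
    total = 0
    for i in range(10_000_000):
        total = (total + i * seed) % 1_000_003
    return total

if __name__ == "__main__":
    cores = multiprocessing.cpu_count()
    # One process per core; the OS scheduler spreads them out, so an
    # 8-core box runs 8 work units side by side.
    with multiprocessing.Pool(processes=cores) as pool:
        results = pool.map(work_unit, range(cores))
    print(f"{cores} workers finished:", results)
```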

"Anyone any experience with this?" BY Matt

Currently I have a few in-house databases on MS SQL and Oracle. Databases don't require much processing power unless they have to do calculations. What they need is HDD and RAM access bandwidth, which covers HDD seek time, transfer bandwidth, RAM access latency and the overall system's I/O handling. The larger the database and the query range, the longer it takes to dig out the data you need. The database vendor may tell you their system can process 32, 64 or more queries at a time, but don't forget one thing: if the database sits on a single HDD with single-channel RAM, you might as well forget about that simultaneous job handling. And even if the server runs RAID-5 across six or more high-speed SAS/SCSI HDDs, the maximum I/O and the transfer latency between the storage controller and the RAM/processor will still limit your performance.
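To make that concrete, here's a minimal sketch that times a full-table scan. It uses SQLite from Python's standard library purely as a stand-in for MS SQL or Oracle; the point is that the elapsed time is set by how fast rows come off storage, not by how many cores the box has:

```python
import sqlite3
import time

conn = sqlite3.connect("scan_demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("DELETE FROM t")
# Insert a million ~100-byte rows so the scan has real data to move.
conn.executemany(
    "INSERT INTO t (payload) VALUES (?)",
    (("x" * 100,) for _ in range(1_000_000)),
)
conn.commit()

start = time.perf_counter()
# LIKE '%y%' defeats any index and forces a full-table scan, so the
# query time is dominated by I/O and memory bandwidth, not computation.
(count,) = conn.execute("SELECT COUNT(*) FROM t WHERE payload LIKE '%y%'").fetchone()
print(f"full scan matched {count} rows in {time.perf_counter() - start:.2f}s")
conn.close()
```

Run it twice and the second pass is usually faster only because the OS file cache is serving the reads, which rather proves the point about where the bottleneck lives.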

ALFRED LAMDOLLES
Linux

12 cores... not so easy by 2010...

The supply of, and demand for, eight-core processors is still climbing slowly... very slowly...

I don't see a great need for any processor with more than four cores...

Even now, multi-core processors still have a lot of bottlenecks and issues waiting to be resolved...

The first is the interconnect within the processor and between the processor and the mainboard.

As more and more cores are combined into one processor, designers and programmers will need to work hard to ensure that data exchange, load balancing and resource sharing among the cores are smooth and fast enough to keep up with the processing power.

Take the quad-core processor as an example. The connections between cores are simple, like a cross in a square box. Each core can have its own L2 cache and/or share it with the others, and the total still comes to less than 12MB. Both HT 3.0 and PCI-E 2.0 can handle the large volume of data moving between the processor and the mainboard.

Now look at the parts with more than four cores. Whether it's the eight-core from Intel or the upcoming 6/12-core processors, they need proper, and more complex, transaction lanes and routing. Since both chip-makers have integrated the memory controller into the processor, bandwidth will be the key. Looking at the current dual-channel RAM strategy, I guess the channels will be split up to serve different core groups: cores 0, 1, 2 and 3 in one group on channel A, while cores 4, 5, 6 and 7 use channel B. Programmers for both the processor and the OS will need to keep the workload balanced by adding functions such as virtually downgrading from eight cores to four when the OS doesn't natively support eight cores or the software isn't multi-threaded (see the sketch below).

Last one: power consumption and thermal control. Engineers nowadays are trying very hard to cut processors' power consumption by shrinking the transistors, but at the same time they're squeezing more and more transistors into a smaller package. That means the processor can do more work than ever, and generates more heat.
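As a rough, Linux-only illustration of restricting a workload to one core group, here's a minimal sketch using os.sched_setaffinity from Python's standard library. The channel-A/channel-B grouping is my own hypothetical split for illustration, not anything the chip-makers have published:

```python
import os

all_cores = sorted(os.sched_getaffinity(0))   # cores this process may use
half = max(1, len(all_cores) // 2)

group_a = set(all_cores[:half])   # e.g. cores 0-3, imagined on channel A
group_b = set(all_cores[half:])   # e.g. cores 4-7, imagined on channel B
print("group A:", sorted(group_a), " group B:", sorted(group_b))

# Pin this process to group A only. The scheduler now keeps all of its
# threads on those cores, which is also how you could make an 8-core
# box behave like a quad-core for one workload.
os.sched_setaffinity(0, group_a)
print("now restricted to cores:", sorted(os.sched_getaffinity(0)))
```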