11 posts • joined 7 Jul 2009
The success of a MIPS core in the IP market for Android devices, or any other devices for that matter, will mostly be predicated upon the ease of integration and the library of functional units that can be mixed and matched with it to build the SoC. Assuming a similar license cost to ARM's, of course...
Speed to market is everything with these small devices, and if a licensee can "bish bash bosh" together his SoC faster with a MIPS core then he will do so.
Look on the bright side.
I pity you Britishers, Ed Millibob is going to be your next PM, lol.
Great little piece on the "value of certification"
Section 2 (specifically 2.2) of this white paper, which talks about certifications and their usefulness, gave me a chuckle.
Every enterprise software license scheme grows exponentially in complexity until the vendor introduces its own currency equivalent: in this case "VMware dollars", as I like to call them.
VMware would do well to remember that the next phase in product/license evolution, after points/credits/cubits etc., is that your product/technology space is made completely obsolete by being folded into either the standard OS or commodity hardware (the CPU), or in some cases a mix of both.
When does VMware's license revenue go ex-growth, and the company becomes a support/upgrade business?
Why IT really fails to perform.
The joke is that the only factor that really impacts operational efficiency, automation, and availability in IT is the quality of the staff. Even if "old hardware" were causing some problems, that's easy to fix for a few thousand pounds; don't spend more than a moment on it.
Most of the companies I meet that have badly performing IT need to fire 80% of their IT staff and raise salaries by about 40% in order to get good people. Quite a bit more expensive than buying a new Dell box... which is somehow going to make everything run smoother?!? There may also be an argument that if we were all still running Pentium III chips, things would be configured properly and efficiently: CPU cycles, RAM, and I/O would be precious commodities, not something to be squandered.
High-quality IT staff also have an easier time justifying the purchase of new servers, since they know how to plan and how to express IT expenditure in terms the business will understand and value.
You are seriously suggesting developers re-code to take advantage of new hardware/architectures? I am not a developer, more a sysadmin, but I can't remember a single time in history when developers radically changed their methods/practices to better fit the hardware.
Everyone was going to re-code for Itanium weren't they? Too hard.
How about the incredible power of cheap, fast GPUs? Even software that would suit their particular kind of performance doesn't get re-coded. I do know that parallel programming is very hard, and most languages don't make it any easier (hence the Erlang ref). We have had multi-core in the x86 market for 5-6 years, and still Microsoft and Apple are announcing "some special multi-core features coming soon" in the next versions of their OSes. At this rate we should have multi-core/multi-threaded applications in widespread use by about 2050...
Developers need faster cores, not just more of them.
For the last several years Intel seems to have thrown in the towel on making individual cores significantly faster, relying instead on process shrinks to cram more of the same into a chip package. Unfortunately for the rest of us, most software on planet Earth does not take much advantage of increasing core counts; in fact, some of it actually slows down marginally as more cores are added!
Taking multi-core out of the equation and looking at single cores only, Moore's Law, at least in the sense of ever-faster single-threaded performance, ran out of steam at Intel somewhere around 2004.
That said, how a platform like Facebook, with its thousands (tens of thousands?) of concurrent users, can fail to take at least some advantage of hyperthreading within a core and the shift from 2 to 4 to 6 cores is beyond me. Perhaps they need better developers? Consider a move to developing in Erlang?
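To illustrate the point about spreading CPU-bound work across cores, here's a minimal sketch (my own toy example, nothing to do with Facebook's actual stack) using Python's standard-library process pool. All names here are hypothetical; the workload is deliberately CPU-bound so each chunk can occupy its own core.

```python
# Hypothetical sketch: an embarrassingly parallel, CPU-bound job split
# across cores with the standard library's ProcessPoolExecutor.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- CPU-bound on purpose."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def parallel_count(limit, workers=4):
    # One chunk per worker; each chunk runs in its own process/core.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # absorb any rounding remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_count(10_000))  # 1229 primes below 10,000
```

The structure is the whole trick: the work has to be *decomposable* into independent chunks before more cores buy you anything, which is exactly what most shrink-wrapped software isn't.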
Right time right place
Between 20:1 and 50:1 using a VMCO Appliance and a free hypervisor (Citrix)
The main benefits have been obvious power, hardware maintenance, and space savings, plus improved disaster recovery/continuity, as workloads can float between physical appliances. Capacity planning is now a cinch, and I can promise the business with certainty that a given spend will result in a corresponding increase in processing capacity.
Fact Check Please?
Doesn't the amount of system RAM used impact the STREAM benchmark results? Systems with more RAM (in this case the AMD) end up doing more work to solve a larger problem space than those with less.
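For context on that question: STREAM reports a *rate* (bytes moved per second), not a total, so a larger array means more bytes moved *and* more time, not a higher score. Here is a toy pure-Python version of the triad kernel (the real benchmark is compiled C with a compile-time array size); the function name and sizes are my own illustration.

```python
# Toy version of the STREAM "triad" kernel, a[i] = b[i] + s*c[i], showing
# that the reported figure is bytes moved divided by elapsed time.
import time
from array import array

def triad_rate(n, scalar=3.0):
    b = array("d", [1.0] * n)
    c = array("d", [2.0] * n)
    a = array("d", [0.0] * n)
    start = time.perf_counter()
    for i in range(n):
        a[i] = b[i] + scalar * c[i]
    elapsed = time.perf_counter() - start
    assert a[0] == 7.0  # sanity check: 1.0 + 3.0 * 2.0
    bytes_moved = 3 * 8 * n  # read b, read c, write a; 8 bytes per double
    return bytes_moved / elapsed  # bytes per second
```

Because both numerator and denominator scale with `n`, doubling the array size does not by itself double the result; what a bigger working set *does* do is defeat the caches, which is why STREAM insists the arrays dwarf the last-level cache.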
Some already on the free train to Citrixville
If you can see your way past the inevitable puns in their posting, the folks at 360 put out an offer of free migration advice from Virtual Iron to the Citrix flavour of XenServer about a week ago. Since XenServer is free (barring a few hundred quid in support costs...), I can't imagine there will be many customers sticking with Virtual Iron by the time Oracle consolidates its virtualization strategy. More about it all here:
What about Sun's xVM and VirtualBox? Are we headed for a world with only VMware, Hyper-free, and Citrix Xen? Can't believe Big Blue is content to sit this dance out...