33 posts • joined Saturday 10th September 2011 18:51 GMT
Re: Could you please explain why the UK still sends aid to India...
Because the Brits want to pretend that they have some leverage over India. The fact is that they don't. The UK should invest their "aid" instead in providing better education and food for their citizens.
Return on Investment
They recover the costs of R&D by launching satellites for other countries, being one of six organizations capable of doing so in the world today.
Stop B!tc#!ng and get back to work!
I notice this happening repeatedly as soon as any article related to India is published here on The Register. Agreed, you are a British website, but is it too presumptuous to assume that you intend to draw hits and viewership from all parts of the world?
It is remarkable that India is doing what it's doing with technology at a fraction of the US budget. I doubt the British even have a space program (or any form of technological development underway - at least not since they were evicted from India in 1947). Instead of appreciating that, why constantly raise the bogey of poverty?
Yes... poverty is rampant in India. But India has done far more to uplift the masses in the past 60-odd years than the greedy British Raj did in the 200 years prior. Before the arrival of the East India Company, there was no historical record of epidemic poverty or malnutrition in India. So your forefathers created the problem by looting India and systematically destroying native industry for 200 years. And now you try to browbeat Indians for not having been able to eradicate poverty in 60 years?!?
You geniuses need to read this --
I used to work for a company that sold Samsung Galaxy phones and no iPhones. I had the (then) newest Galaxy II, and after 6 months of use the battery would drain after 15 minutes of talk time, or the phone would simply, mysteriously lock up, and the only way to fix it was to yank the battery and put it back in.
This was true for all my other colleagues who used those infernal devices... they simply sucked a$$.
After leaving that wireless telco, I bought my first iPhone (a 4S). It has been rock solid and does what it needs to. My 5-year-old can use it to play games or watch cartoons on Netflix when we are driving, etc. The battery life is excellent and things just work.
Prince of Persia + Netware 3.12 + RPL-booting disk-less clients == days of fun
We used to play PoP on a NetWare-based network with RPL-booting disk-less clients, where people stayed after hours to participate in Prince of Persia mayhem. It is by far the best game and nothing has ever come close to it, ever. I don't play games anymore because every time I consider playing one, I end up comparing it with Prince of Persia.
Best Fast Bowler...
That privilege goes to Glenn McGrath. That dude was so precise with his deliveries. I would believe it if someone told me he had a 100% strike rate at hitting a coin from 22 yards while running in at full steam from another 20-22 yards.
Although, after watching the movie "Fire in Babylon", I long to see the fearsome foursome of the West Indies pace attack again, in some shape or form. Alas, the game has been ruined by T20 (and, to a large extent, one-day cricket).
Re: implicateorder implicateorder implicateorde Billl HAHAHAHAA!
Actually, IBM wrote some pretty crappy code there. It's the massive hardware thread count of CMT that allows the throughput.
Re: implicateorder implicateorder implicateorde Billl HAHAHAHAA!
Is there any such thing? :)
Actually, the app I was referring to was embarrassingly "singular". No multi-threading... just a humble little Perl script munging data from raw files and inserting it into a remote DB. The trick was to run enough instances of these Perl scripts against different files to achieve the throughput needed.
Guess who the vendor of that wonderful application was? IBM!
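The fan-out trick described above - many independent single-threaded workers instead of one multi-threaded program - can be sketched roughly like this (a hypothetical Python stand-in for the Perl mungers; the names `munge_file` and `fan_out` are made up for illustration):

```python
# Hypothetical sketch of the fan-out approach: launch many independent
# single-threaded workers, each munging its own raw file, instead of
# multi-threading one script. Names here are illustrative, not from the post.
from concurrent.futures import ProcessPoolExecutor

def munge_file(path):
    # Stand-in for the real Perl munger: parse one raw CSV-ish file
    # into rows that would then be inserted into the remote DB.
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]

def fan_out(paths, workers=32):
    # Throughput comes from running 'workers' independent instances,
    # one per file -- no shared state, so no multi-threading is needed.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(munge_file, paths))
```

Each worker is a separate OS process, which is exactly the property that let those CMT boxes shine: plenty of hardware threads, each running a modest single-threaded task.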
Re: implicateorder implicateorder implicateorde Billl HAHAHAHAA!
A real-world example from an adult for Matt Bryant: there are plenty of applications that can leverage even the older-generation CMT processors. I moved a middleware application running hundreds of Perl-based data mungers spread across 2 M5000s (2.2GHz clock) to half a T5440 (128 threads @ 1.6GHz).
In data-warehousing workloads, throughput often matters more than single-thread speed. That unimpressive half T5440 did the throughput of the two M5000s in the same window.
The T4s and T5s are in a completely different class from those humble T2(+) processors. When I migrated that workload to a T4 (only 12 cores), it ran 2x as fast and completed the work of 2 M5Ks and half a T5440.
You know whose applications suck the most? IBM's. Cognos - such a piece of junk. Other horrendous applications are those of the Tivoli suite. They are so pathetic that they keep SIGSEGV'ing all over the place. IBM's solution: run on our hardware. Our suggestion to them: make your damn software work. On a Netcool upgrade a couple of years back, their 1+ year old "stable" release needed 750 patches to be functional. I've never had to deal with apps from other vendors that were that buggy. It took them 6+ months to make the product stable!
Re: implicateorder implicateorde Billl HAHAHAHAA!
[[[Sure, sounds fine on paper, but you need to understand that in circuit design you don't get anything for nothing. The space taken up on the die by additional, specialised circuits usually has to come at the cost of more generalised circuits that can help in more generic uses. In the case of M5, by finally adding the cache the CMT designs need (though not the rest of the cache-handling technology required), half the cores had to be chopped out to make room. So you have a choice - make the CPU bigger so you can add the additional specialised circuits without affecting the general circuits, or add the specialised circuits at the cost of non-specialised performance. Making the chip die bigger means increasing the wattage required and also reducing the yield per wafer, both of which drive up costs. Staying in the same envelope but reducing general performance makes your chip less attractive to those users not running the specialised tasks you have designed for. And - whilst it maybe hard for the Snoreacle fanbois to admit - not every server out there is running Larry's database software as its core role.
it would make more sense to design such offload engines onto plug-in PCI-e cards, then they can be added as required without crippling the general performance of the system.]]]
PCIe cards are of course going to be significantly slower than running the same logic in silicon, on the die itself. That is why things like TCP offload engines and crypto accelerators were moved into the processor die (they used to be co-processors and add-on cards in the past). That is the natural evolutionary process of miniaturization. If we hadn't done that, could we fit an iPhone or an Android in a palm or a pocket?
I think protestations against this paradigm are based on fallacious premises. Every vendor should have the freedom to differentiate their product and provide incentives to their customers (to better sell their products). This need not be from a "cost-savings" perspective in terms of pure capital spent; it could (and in fact should) also be cost savings in the form of more work being accomplished (achieved through performance boosts, etc.).
Re: implicateorder implicateorde Billl HAHAHAHAA!
I don't think Oracle is trying to sell their hardware on the back of "other" software. If they can accelerate their own suite of software (which is pretty extensive), that makes a sufficient case for new customers (and old) to start buying their hardware.
I'm not sure you know the full gamut of Oracle's software portfolio, but it is pretty massive. I don't think they need to care about accelerating DB2 or Sybase. Odds are that a DB2 shop is already on IBM hardware, or a Sybase shop on some other vendor's.
Re: implicateorde Billl HAHAHAHAA!
Consider how the crypto accelerators work: it is transparent to the application. The OS detects conditions that match and offloads the work to the processor's built-in crypto accelerator.
Re: Billl HAHAHAHAA!
I don't think that is the natural outcome of these "embedded" accelerators. It is not cost-effective to have different fabs for different functions. I think they will try to move some broad-spectrum functionality into silicon (some Java acceleration, some DB acceleration). Given that Oracle apps are predominantly Java-based with an Oracle DB backend, the whole gamut of Oracle apps will fit on these processors.
I also think that they might start building Exa<data/logic/lytics> engineered systems on the SPARC line once they get the hardware accelerators in place.
Re: Billl HAHAHAHAA!
There are initiatives underway (and have been for a while now) to use FPGAs to provide hardware acceleration. Why is it a bad idea to do something similar on the processor die itself?
If they design it right, it can have a significant positive impact on the performance of apps that leverage Java (we all know it can use some help in that regard) and databases. Sun/Oracle have done that with their crypto accelerators (and so has Intel, BTW). This is just taking that a few steps further.
For computing to evolve and grow, the lines between what was considered the software domain vs the hardware domain need to blur from time to time. Also, if you notice, it is a cyclical phenomenon. The cycles are getting closer together as technology evolves and miniaturization matures.
Regarding "boat anchors"
Having worked on both x86 and SPARC, I can say one thing --
the price differences between mid-range SPARC and high-end x86 are nominal. I used to pay ~100K for the M5Ks and ~100K for the M4800s with 256GB RAM. Running Solaris on both, I ran 5-8 zones and Oracle DBs on each. I didn't see much of a performance difference. The M5Ks were a lot more stable in my experience, however, with advanced RAS features.
Did some micro-benchmarks on the T4s and HP DL580/BL460 G6/G7 boxes: HP running VMware + RHEL, the T4s of course running Solaris 10 and 11. Performance-wise they were very close.
The T5s might tip the scale in SPARC's favor with the extra clock-speed.
With the STREAM benchmark I got ~32GB/s memory throughput on the T4s, scaling up to 4 cores. I didn't test beyond that... but that's pretty good. I had to tweak the benchmark and put parallel hints into the code, but one would expect modern software to parallel-process as much as possible...
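For anyone unfamiliar with what STREAM actually measures, here is a rough, hypothetical analogue of its triad kernel in NumPy - not the real C benchmark (which uses OpenMP pragmas for those parallel hints), just an illustration of how bandwidth is derived from bytes moved over elapsed time:

```python
# Rough analogue of the STREAM "triad" kernel (a = b + k*c) in NumPy.
# This is an illustration of the bandwidth calculation, NOT the real
# benchmark -- absolute numbers will differ from a tuned C/OpenMP run.
import time
import numpy as np

def triad_gbps(n=10_000_000, k=3.0):
    b = np.random.rand(n)
    c = np.random.rand(n)
    t0 = time.perf_counter()
    a = b + k * c                 # one read of b, one of c, one write of a
    dt = time.perf_counter() - t0
    bytes_moved = 3 * n * 8       # three float64 arrays touched per iteration
    return a, bytes_moved / dt / 1e9   # GB/s

_, gbps = triad_gbps()
```

The real benchmark repeats the kernel many times and reports the best run; the "parallel hints" mentioned above correspond to splitting the arrays across cores so each memory controller stays busy.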
Also, with virtualization, it is difficult to say which platform has better cost:capacity numbers. As far as scalability is concerned, I find it hard to see how something like an ERP system running on 128 h/w threads would fare on x86 at this point.
Moral of the story: there are many ways to solve infrastructure performance/scalability/reliability problems. And there is no "one best way" to do it. The answer, well... it depends.
At the end of the day, a big factor in deciding what you implement is the incumbent (technology) in your shop and how the experience with it has been.
Too many times I've seen borderline (in)competent engineers flub the implementation and management of a simple design just because they didn't have the dedication to learn a new way of doing things (or because the solution didn't fit what they considered "cool").
The T5 is interesting, and if my micro-benchmarks of the T4 are any yardstick, the T5 (if it does provide the 20% boost in single-thread performance) will outperform most Intel-based gear out there. Add the ability to multi-thread across 128-256 strands of silicon and it's a potential winner. Unfortunately, it seems that real admins are a rare commodity these days -- too many kiddies driving vCenter think they know infrastructure, if you know what I mean...
I think you are missing the point. The sizes of the workloads I'm referring to prevent them from running on VMware-type platforms. Although, once we get into the realm of 8-socket, 10-core Intel-based servers, the price differential between them and a T4-4 disappears quite rapidly.
I have deployed hundreds of containers in Solaris and as many LDoms - although I was leery of using LDoms until the T4s came along. What's more, I've also run a shop with VMware heavily leveraged. VMware isn't bad for small virtual machines. It becomes unwieldy when you start getting into larger-sized machines.
Also, I was tasked with identifying the cost of ownership of a VMware-based as well as a pure Oracle/Sun solution (just for comparison). I was amazed at how close they were once we factored in the costs of VMware Enterprise licenses and support, guest OS support (RHEL, Windows, etc.) and the hardware (not to mention multiple vendors complicating the support model).
It boils down to how efficiently you design your solution and how skilled your engineers are at managing the infrastructure.
Not sure if you know this - but since this information is freely available on the internet, it seems moot to demand that people RTFM before posting in a b!tch!ng session on an online forum.
And I had the IBM folks come down to my office and do the dog & pony...song & dance. Needless to say, I was unimpressed. A lot of what IBM is doing today seems to be in direct response to Oracle's Exa-****.
Also, anyone who has designed systems knows that you need capacity for peak loads. In clustered scenarios it is even more important to keep 1/n of the capacity free on each node of an n+1 cluster.
Overcommitment/oversubscription is nothing new. All multi-tasking operating systems have been doing something similar for decades. Usually, when that happens on a host, its performance tanks.
The virtualization model in LDoms is better because it bypasses the oversubscription issue of other virtualization solutions, and it's WYSIWYG: everything down to the I/O slots can be partitioned.
So what's your point? Do you concur with what I posted?
And what is wrong with partitioning? I find the entire time-sliced virtual machine paradigm silly, especially for big workloads that need lots of CPUs, etc. (crunching a few TB of data as fast as possible, for instance). For example, we built a 100TB data warehouse... try running that in something like VMware.
I'd prefer "partitioning" over "Classic x86 VMs" any day.
Oracle's core licensing factor works out to a 0.25 CPU license per core of the 16-core T3 processor, and 0.5 per core of the 8-core T4. I have a feeling they will use a 0.25 factor for the 16-core T5 (of course, they might get greedy).
At 0.25 licences/core, a 2-socket T5 would effectively be worth 8 CPU licenses. At 0.5 licences/core, it would be 16.
Oracle also recognizes LDoms with whole-core constraints as valid CPU boundaries. So if you don't want to license all 32 cores in a 2-socket T5, use LDoms to control how many cores are deployed...
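The licensing arithmetic above is just cores times core factor; sketched out (the T5 factors are the speculation from the post, not a published figure):

```python
# Oracle per-core license math as described above:
# licenses = sockets * cores_per_socket * core_factor.
# The T5 factors below are speculative, per the post.
def cpu_licenses(sockets, cores_per_socket, core_factor):
    return sockets * cores_per_socket * core_factor

two_socket_t5_low  = cpu_licenses(2, 16, 0.25)   # 8 licenses if Oracle is kind
two_socket_t5_high = cpu_licenses(2, 16, 0.5)    # 16 licenses if they get greedy
```

With whole-core LDom constraints, you'd simply pass the capped core count instead of the full 16 per socket.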
Re: Sun HW + Solaris - no where but down. . .
The T4 and newer lines of SPARC-based hardware have improved significantly since the "blunder" years of the Niagara fiasco.
Like most others in this field, I was skeptical about Oracle's dedication to keeping the Sun portfolio alive and nursing it back to health. I must admit I have grudgingly conceded that they seem sincere in their efforts to revive the Sun hardware line.
Having worked with Sun gear for a good part of two decades, I must say that some of the moves being made are interesting and have potential for success. Moreover, Oracle is so (brutally) fiscally minded that I don't doubt they will be able to turn things around. The only unfortunate thing is that it all might be too late.
I have found the T4 line very impressive, and when you start considering the cost of ownership of the stack, it makes a lot of sense financially (with virtualization and the management software, etc., thrown in for "free").
The Exadata platform is very impressive (albeit super expensive) -- we were able to run a monster DWH workload on it and see 40x performance improvements (no joke).
So love them or loathe them, Oracle seems to have done something the bumbling management at Sun never could: draw a line and stand by it.
Re: Racist diatribes!
Go control your own population. India is doing quite well, thank you, despite the best efforts of your nation.
As far as your aid is concerned, tell your government to spend it on local educational initiatives... it might make you lads a bit more competitive.
WTF is with these racist diatribes against India? Your ancestors bled India dry of all its wealth for 200 years... and that too like sneaky, underhanded little cheats.
Grow up and deal with it, brats... lobby your government to stop trying to pretend to be what your nation isn't anymore -- relevant on a global scale. India will do what it has to do... another 20 years and your future generations will be lining up for work permits to India...
I was in a conversation with a friend who heads up r&d at a big midwestern exchange. We were talking about interviewing and he told me how he is always looking for a reason to reject a candidate. I on the other hand look to select a candidate, until they give me a reason not to.
I once interviewed for a senior-sysadmin position where the other senior admin grilled me for 2 hours... rattling off inane questions like "what is FC-AL", "what is SCSI", etc. I had to cut the interview short by finally asking him what he hoped to find out by testing my general knowledge, and how he expected to gauge how well I could drive Unix servers by asking silly questions like that. Needless to say, I did not accept their offer...
I like to see problem solving skills relevant to the role, testing from basic to more advanced levels of knowledge and experience.
Having developed code in Perl, Python and Ruby, I can comment on a few things --
Perl is a more left-brain-oriented scripting language and is very powerful. Its strength, IMHO, is the fact that you don't have to fuss with data types... in my chaotic world of sysadmin-ing, that is very time-conserving.
Python is great, but I have used it primarily to interface my Unix systems with Windows... Perl on Unix, Python-based GUI apps on Windows. I love its enforced indentation... and hate the fact that regex doesn't work exactly like it does in Perl...
Ruby (on Rails) is great for quick MVC-type development... it autogenerates the SQL as well. IMHO, it is awesome for rapid prototyping...
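On that Perl-vs-Python regex gripe, a small illustration of how the idioms differ: where Perl binds a pattern to a string implicitly (`$line =~ /(\d+)/` and then `$1`), Python makes the match object explicit through the `re` module:

```python
# Perl:   $line =~ /(\d+)/;  my $days = $1;
# Python: no =~ operator -- re.search returns an explicit match object.
import re

line = "uptime: 42 days"
m = re.search(r"(\d+)", line)
if m:
    days = int(m.group(1))   # Perl's $1 becomes m.group(1)
```

Same regex syntax for the most part, but the surrounding plumbing (match objects, no implicit `$_`) is what trips up Perl hands.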
Re: hold on - apples and oranges?
COMSTAR servers? You mean they custom-hacked their own COMSTAR-based Storage? And did they build RAID-0 pools?
In my shop we have hundreds of TB of data sitting on ZFS (albeit ZFS pools carved on top of RAID-5 LUNs presented from our EMC arrays) and have recovered safely from catastrophic EMC array failures without any issues.
The unique feature of the ZFS Storage Appliance is its auto-tiering capability (ZFS functionality) -- how it can move hot data into flash cache and DRAM while keeping LRU data on slower storage. So it's great for certain workloads (though I wouldn't run Tier-1 apps on either of these arrays).
Add the built-in replication, dedup and compression functionality, and the ZFS array makes a good value proposition.
Like someone pointed out, cost/TB is what will potentially drive the decision-making process. The benchmark lists just the controllers with the 18x200GB SSDs, "base software" and 8 enabled 8Gb FC ports at $181K. The Oracle ZFS appliance is listed at $409K with 84TB of storage, delivering 137K IOPS (i.e., more than the 120K achieved by IBM).
How exactly is the value proposition for the IBM more enticing than that of the ZFS array?
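Putting the quoted list prices into numbers (figures from the post above; note the IBM quote covers controllers plus SSDs only, so a like-for-like $/TB can't even be computed for it):

```python
# Price arithmetic from the figures quoted above.
# IBM's $181K covers controllers + 18x200GB SSDs only -- no bulk storage,
# so its $/TB is not comparable to the ZFS appliance's.
zfs_price_usd, zfs_tb, zfs_iops = 409_000, 84, 137_000
ibm_price_usd, ibm_iops = 181_000, 120_000
ibm_ssd_tb = 18 * 0.2                          # 3.6 TB of SSD, full stop

zfs_dollars_per_tb = zfs_price_usd / zfs_tb    # ~ $4.9K per TB, IOPS included
```

That ~$4.9K/TB figure, with the higher IOPS number bundled in, is the value-proposition question the post is asking.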
The problem is a massive logistical one. The biggest cause of inequitable population density in urban India is the migration of low-wage, daily-labor workers into the big cities.
I would guess that 80-90% of the population of Asia's largest ghetto (Dharavi in Mumbai) is made up of such workers. Providing proper "facilities" to such a dynamic and large population is a highly complex process.
I would also venture to guess that in rural India the population density is not that great, and what the governments should be doing is developing those areas (I am certain efforts are underway toward that end). The mass exodus of rural citizens into urban centers is what needs to be looked at: what incentives can be provided to them, and how can they be empowered to stay in rural areas (with associated viable livelihoods, etc.)...
As far as technology goes... yeah, mobile technology is definitely more pervasive in India than other forms, and as someone pointed out, the price point is affordable for most individuals. With the advent of smartphones and cheap tablets, their "connectedness" will only increase. I give it another 5 years.
There are already private projects underway to provide "e-banking" and "e-credit" facilities to remote parts of rural India. Once these take off, the financial abilities of the "lav-less Indian" will suddenly increase. Who knows, he might even be able to "brownload" and "download" simultaneously.
BTW, as an Indian citizen, I find this article (and certain comments) in bad taste. The sense of incredulity is typical of spoilt brats who take things for granted and have a false sense of entitlement. Instead of looking at how this technology might help the poor stay connected and improve their lot in life, we get snide commentary on their "lav-less-ness"!
And you post this racist diatribe because some "3rd worlder" was smarter than you, did not b1tch as much about what he/she was asked to do, and did what was asked very well...
The problem, my dear Watson, is your marked lack of intelligence compared to the "3rd worlder": once he was transplanted into a system where his talents were allowed to flourish, he kicked your arrogant butt. I am amazed at the sense of entitlement many "1st worlders" have about the little bubbles they consider the world, and at their self-aggrandized, megalomaniacal self-image. Maybe it's time to stop eating McDonald's and try tasting the humble pie being served up these days?
Ridiculous sticker price for a "Has-been" Platform
My greatest grouse with HP has always been their "nickel-and-dime" approach to things.
Okay... you pay $$$/core for the base OS + $$$/core for virtualization + $$$/core for multipathing + $$$/core for resource-management capabilities... the end result is that dinosaur organizations pay gargantuan prices for an obsolete OS. Contrast that with Solaris or Linux (at the entry level)... it's a no-brainer. HP-UX should be retired and HP should get out of the server game.
Why on earth should anyone care about HP-UX? (and no, I'm not trolling)