HPC 2.0: The monster mash-up

This is the second of a three-part series on the convergence of HPC and business analytics, and the implications for data centers. The first article is here; you’re reading the second one; and the third story is coming soon. The genesis of this set of articles was a recent IBM analyst conference during which the company laid …


This topic is closed for new posts.

Much Useless Data

Much of the petabytes of data being gathered and stored every day will never be analyzed, and will grow old and stale. Yes, there are retrospective studies, but very few, and retrospective studies are generally not hurried. Real-time users want data about the present analyzed right now; otherwise it cannot be used for immediate decision-making.

Another problem is that many users who might benefit from certain datasets will never use them: first, because they don't know the datasets exist; and second, because the datasets are controlled by some other company, or cost too much to license for analysis.

Several factors point to diminishing returns from ever-larger datasets. The next move will be toward "focused" data gathering, justified by the business need to make immediate decisions.

