45 posts • joined Monday 22nd June 2009 18:40 GMT
Re: Why when Radio is already free?
What's so bad about DAB?
Re: Wow! 75 times faster than... whaaat?
@Steve.T: Reading comprehension, man. It was obviously irony. Should I have used HTML5-compliant <sarcasm> tags?
They can be used for light gaming, assuming you're happy with 1366×768 resolution at the absolute lowest settings (some games provide an Intel-specific setting, which offers quality even below the basic preset).
Oh, and funny you should mention AMD bought ATi. Remember the Intel740? Thought not. Intel bought Real3D and released their GPU in 1998 -- eight years before AMD bought ATi. They had EIGHT MORE YEARS to develop the (admittedly rubbish at the time) solution into a solid product. When AMD bought ATi, ATi were struggling with their lineup, slowly recovering from the 2000 series debacle with the notably improved 3000 series, and they weren't well entrenched until the 4000 series and Evergreen. Even so, ATi's integrated GPUs were already vastly better than Intel's at that time, and that excellence owed little to prior support from AMD. Intel's GPUs continued to lag behind AMD's, and when AMD integrated them into APUs, Intel was outstripped again.
Between 1998 and 2006, Intel had time to improve their GPUs. They failed. They had eight years of opportunity to integrate the CPU and GPU in hardware, even while the GPU resided in the northbridge, but they didn't care. Since 2006 they have slowly improved, roughly doubling performance each generation, but they're still way behind the curve. Seeing Intel's lack of initiative, I have to call bullshit on this 'Iris'. Maybe Haswell is not going to bring anything new to the table in terms of graphics (aside from increased clocks), and Iris is just a way to mask the lack of performance by doubling the number of GPUs.
As for playing video streams -- Intel's CPUs DO NOT use the GPU portion for decoding the stream. The CPU has a dedicated processing unit for this, and it is impressive in its own right: it can play a large number of video streams without breaking a sweat.
And your last paragraph -- as long as Intel is trying to stick x86 into everyone's face, they will continue to fail. And it's funny how Intel continuously claims that their target is ahead of them. When i740 was released, high performance GPUs were their future. They failed. Then they said their goal was best integrated graphics. They failed. Then they were supposed to release Larrabee, which was supposed to introduce Intel to the enthusiast GPU market. When that failed, they said Larrabee was intended for heterogeneous computing all the time and they never intended it to be a GPU. Now you are saying their goal is best performance in tablets? Ain't gonna happen. Iris isn't going to convince anyone, either.
Wow! 75 times faster than... whaaat?
Seriously, who are they kidding? Why not claim they are seventy-five HUNDRED (7,500) times faster than the ViRGE, the 3D decelerator? While they're at it, why not remember that their Core i7 CPUs are several hundred thousand times faster than the 8088?
75× rubbish is just rubbish, only more of it. Their drivers are bad, and the performance is in the basement compared to AMD's integrated GPUs. They could be on to something with Iris only if the competition had stood still for the last five years. Wake-up call, Intel! You are NOT competing with a 2006 chipset-integrated Radeon or GeForce! You're going to compete with 2014 APUs, which are going to include hUMA (which for most users will mean PS4-like GDDR5 system memory). Your GPU may well be 75 times faster than in 2006, but AMD's GPUs have improved more in the last 7 years and you are not going to fool anyone.
I can see nobody mentioned Bicom yet
It launched in 1993. I remember reading the review, but frankly, there's scant info on the Internet and it's hard to dig up the details.
Nevertheless, I did find information on it. There were two models: 240i and 260i, differing in HDD capacity (40 and 60 MB, which was a lot for a notebook back then). Specifications:
- Am286LX at 16 MHz (1.5 µm node)
- 2 MB RAM
- Dimensions: 223×161×31 mm (smaller than original EeePC!)
- Weight: 1 kg (which is less than some EeePC models, at almost 1.5 kg)
- 7.5" monochrome display (640×400 resolution, line-doubled CGA)
- Battery life: 3-4 hours on 5 AA batteries (!); you could use rechargeables (Ni-Cd at the time).
- Price: I can't remember now, but for what it did, it was cheaper than the cheapest regular notebooks, at some $300-400.
It's hard not to draw parallels between then and now. The subnotebook was based on technology that was two generations behind the mainstream (486, color displays), which is about where netbooks are in relation to notebooks.
Obviously, technology has progressed since then, but in the 15 years between this and the original EeePC, what did we get in return? Frankly, not much! Larger hard drives, color displays (sometimes even larger ones), more memory. But feature bloat meant the netbooks performed no better than their old rivals. If you used 700 mAh Ni-Cd rechargeables with the Bicom, you got 3 hours of battery life. With 2700 mAh NiMH rechargeables, you would get 12 hours -- compare that to 3 hours on an EeePC with batteries rated at 5600 mAh at a higher voltage. Displays obviously draw the most energy, but 15 years of progress should have brought them at least to parity. If anything, turning off the backlight (or the display altogether, running on an external monitor) should let the netbook work considerably longer, but it doesn't.
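A rough energy-budget comparison makes the gap even starker. The cell counts and voltages below are my assumptions (1.2 V per Ni-Cd/NiMH cell, a 7.4 V Li-ion pack for the EeePC), not figures from the original spec sheets:

```python
# Rough battery energy budget: 1993 Bicom subnotebook vs. 2008 EeePC.
# Voltages/cell counts are assumptions, not official specifications.

def watt_hours(mah, volts, cells=1):
    return mah / 1000 * volts * cells

bicom_nicd = watt_hours(700, 1.2, cells=5)    # ~4.2 Wh, ~3 h claimed
bicom_nimh = watt_hours(2700, 1.2, cells=5)   # ~16.2 Wh, ~12 h claimed
eeepc      = watt_hours(5600, 7.4)            # ~41.4 Wh, ~3 h claimed

# Implied average power draw:
print(round(bicom_nimh / 12, 1))  # ~1.4 W for the Bicom
print(round(eeepc / 3, 1))        # ~13.8 W for the EeePC
```

So under these assumed voltages, the EeePC carries roughly ten times the energy yet runs for a quarter of the time, which is exactly the feature-bloat point above.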
Is it unreasonable to expect that you should be able to get a 9-inch notebook running a shrunk CPU two generations old (hey, it would be the original Nehalem now) -- not downclocked, mind you, with an SSD, weighing in at less than 0.5 kg, with dimensions of an A5 sheet of paper and at most 5 mm thick?
Re: Hmmm, what about tape?
You're welcome. And no, I didn't mean absolute leadership, but some of the names and companies should have never made the list.
Hmmm, what about tape?
What about Storagetek/Sun/Oracle and tape? What about IBM, for that matter? You mentioned HP, which might go bankrupt with its assets dispersing among various other companies, but IBM and Oracle tape equipment together currently holds an order of magnitude more data than is kept on disk. Tape still has the edge over flash in overall TCO, and flash vendors have to catch up with tape, not vice versa.
I understand that you may be enamored with the new technologies, but as it stands, the list is woefully inadequate. Dropbox, Facebook and Amazon? Pretty much equivalent in terms of web storage -- pick any one of the three, or add many more to the list if you really believe they matter. How about adding Rapidshare, then? Going further, I see Fusion-io, which is apparently struggling, as you yourself reported:
Is this the financial outlook of a market leader and a successful company?
It doesn't seem that Fusion-io is leading in benchmarks, either. Or are you considering adding OCZ to the list as well? It certainly doesn't seem that the competition has to chase Fusion-io in anything.
Is this a contest of who can make the presentation most devoid of content and cram as many clip-art images onto a page as possible?
What do the gears even represent? How the hell are two completely different domains -- sourcing and monitoring -- supposed to drive one another???
Or is it just a classroom project of one of the execs' children?
Re: The success of capitalism
I've yet to see Capitalism at work. The bailouts are precisely where the problems lie -- it's not Capitalism, it's thinly-veiled Socialism with tolerated Personal Property (unless said Personal Property needs to be taken over by the Government to support its own interests).
Why would it matter whether they are experts on global climate? The only conclusion of the research is the unprecedented scale of carbon sequestration in peat, and the enormous rate of growth of mires and bogs. The additional conclusion that the amount of carbon sequestered might be high enough to offset human industrial CO2 emissions is added almost as an afterthought.
By the way, annual anthropogenic CO2 emissions are within the margin of error of the estimated CO2 emissions from a medium-sized volcanic eruption. How could human-made CO2 be responsible for anything? That's the thing I've never seen climate scientists refute. It's like they're trying to explain how we can heat an ocean with a candle.
Given that indeed.com is a global site, and contains many more jobs in other sectors than IT, it's a big deal.
Even if it is not a huge percentage within IT, it's still significant.
Compare it to other IT jobs: programming, where the basic skill is a language; generic IT jobs, where no particular skills are required; web coding, with the usual fare of CSS and PHP. Hadoop will be a significant part of the remainder of critical jobs -- like mainframe or Unix administration -- which might seem archaic to some, but they bring in a lot of money.
Not everyone will have to know Hadoop inside-out, but those who do, and whose skills are required, will rake in the big cash.
Re: To quote Top Gear... How Hard Can It Be™
Okay, so I'm risking a reply to what may be an obvious troll, but... The Soviets were thought to have an enormous advantage in rocket design 50 years ago, and, well, look up the N1.
The Soviets couldn't build a "simple rocket" to reach the Moon, which makes the American Saturn V that much more remarkable. (Some try to say it was a lucky break, but the perfect safety record says otherwise.)
Re: "squeezed the juice" out of the two papers...
Unfortunately (yes, I'm Polish), not much. It's not that a lot of the theoretical foundations weren't laid by Polish mathematicians, it's that certain political decisions caused them to fall by the wayside.
However you want to twist it, siding with the French in their code-breaking efforts cost them the chance to work at Bletchley Park. Turing was a brilliant mathematician and computer scientist and he did a lot more work in breaking the code than any other man.
Cheers to that!
Well, the writer operates on the assumption that Apple's market share is growing and is significant everywhere in the world.
Well, that's not the case in roughly 90% of the world. OS X requires Apple hardware, and people don't want to pay the Apple tax for an otherwise ordinary PC. Not to mention exorbitant prices for parts and limited upgradeability. Sorry, but for the price of a Mac I can get a much more capable PC and run whatever I wish on it.
Re: Re: Re: huh?
Aside from being a shiny toy, what can a tablet do that a PC cannot at half the price?
Re: Re: "Windows is dead."
Yes, that is, assuming all those users decide to either:
1. Accept 10-12 inch screens on their tablets.
2. Accept 10-12 kg tablets with 27 inch screens.
Everybody seems to be under the impression that screen size no longer matters. And if you add in a keyboard, mouse, external screen and a power brick to an otherwise svelte slate, the sum becomes vastly more cumbersome than a desktop, vastly more expensive, and vastly less capable.
Or has everybody forgotten that tablets cost twice the amount of a more capable desktop PC?
How much are they suing for?
It misses the most important detail -- how high do they rate their moral losses, and how much do they want from Tesco/Apple/world+dog?
Mine's the one with the disclaimer not to use as a parachute on the tag.
Firefox? They did 3.0, jumped to 3.5, sanity apparently hit them for a while, since they did 3.6, then jumped to 4.0, but suddenly lost it all by skipping to 5.0 in three months, 6.0 in about two, and 7.0 in just one more. If they release something in mid-November, it's going to be version 11...
All those searches have to pay for themselves, you know
They don't make (much) money when you search for a search engine.
Furthermore, using specialized search engines implies that you are shopping, and contextual ads in those engines are much more effective (as opposed to contextual ads when you are searching for whatever) -- and contextual ads generate the most revenue -- precisely the revenue that Google is otherwise losing.
Quite frankly, the last post is an insult to intelligence. Which of the three companies would you prefer to own (assuming you were looking for longevity):
Company A, $100 bn revenue, $150 bn costs, $50 bn loss
Company B, $80 bn revenue, $60 bn costs, $20 bn profit
Company C, $50 bn revenue, $10 bn costs, $40 bn profit
Going by your logic, you would believe company A has the best outlook of the three above.
Yes, because as we all know, increasing debt limits and doing nothing about the spending is the right way to go wrt budgets.
@E 2 : Ummm, no
The beta drivers were there earlier. Plus, the older drivers worked (sort of). And let's not forget that it was 6 weeks after the *announcement* was made. General availability wasn't until a few weeks after the release.
Well, how is *your* English?
"Ellison said in the Wall Street call today that Oracle has installed over 1,000 Exadata clusters (not racks, but distinct clusters) so far, and that it can triple the base to more than 3,000 machines in fiscal 2012."
In your own words:
What Oracle said is they have "installed" over 1000 Exadata servers. They made it clear it is not 1000 clusters.
To me, it's pretty clear Ellison said clusters. Care to re-read? BTW, you were replying to a post made at 9 p.m. and accused the guy of early drinking on Friday. You made your post at 9 a.m. on Saturday. What does that make you? Pot? Kettle? Black?
Installed vs. sold
Now, correct me if I'm wrong, but isn't this more a matter of how many Exadata clusters are installed versus sold but not yet installed? I.e., the sales team could have closed the deal already but put it on a three-month backorder; then it will take some time to install, lab-test and put into production.
You seem to be taking the common definition of something being installed or sold, and not the one that's used by businesses.
Why doesn't Tilera compare their CPU to SPARC?
Interestingly, Tilera didn't seem to show whether there are any advantages in the merge-sort when compared to a SPARC T3 CPU. Since T3 is also made on 40 nm, the comparison would make sense in that regard, especially since both CPUs are RISC designs.
Back when Itanic was being actively backed by Intel and major software vendors, its notoriety as a triumph of marketing over CPU design, pursued for the sole purpose of bringing the general-purpose CPU market under an Intel monopoly, was pointed out and rightly criticized by all the major tech sites, El Reg included.
Then, one company after another stopped supporting it, including Microsoft and Red Hat, and Intel was ostensibly lukewarm towards continued development of Itanium, but all remained fine and well, and attacks on Itanic continued.
Now, when Oracle finally jumped the gun and announced termination of development for Itanic, everyone is suddenly rushing to Itanium's defense (and to bashing Oracle)? Come on! I know the general attitude of most sites is that Oracle is a greater evil than even Microsoft, not to mention hp or Intel (which, bafflingly, is still presented in a good light despite its uncovered monopolistic practices), but it's gotten ridiculous at this point. Microsoft and Red Hat were never subjected to a fraction of the criticism that's being leveled at Oracle.
Itanium was never a good chip, plain and simple. Intel has displayed an ambivalent attitude towards it in recent years, and I can hardly believe it retains any edge over x86, much less a significant one. Coupled with dwindling market share, this was expected. But to defend Itanium all of a sudden? I'm baffled.
We really need an angelic/demonic Larry icon...
Let me clarify a few things
First of all, those customers did not migrate off of Siebel or PeopleSoft. The original assertion was that those customers migrated from Oracle support to SAP support but did not change the software (otherwise the claim would be ridiculous and Oracle would have no case).
If you are considering damages, it does matter how many customers have been lured away. However, the number of customers lured away is inconsequential (it could very well be zero) when it comes to determining whether SAP was guilty of IP theft or not.
Ellison did admit that the worst case scenario was averted, but you can't claim 300 customers (out of 300 thousand, give or take 299 thousand) is just a small bunch. The largest customers make up the bulk of the contract value.
He does make a lot of sense
The anti-Oracle tone I see all over the press would be ridiculous if it weren't so damaging to Oracle. Sun was going down, and nobody seemed to care what would happen to the IP if it went down completely. IBM certainly didn't care, but Oracle did. They bought out Sun with cold, hard-earned cash, and it was obvious then, and is just as obvious now, that they want to extract as much value out of Sun as possible.
The first such obviously negative publicity was over OpenSolaris's demise. That was fair game, though, especially as OpenSolaris apparently did not live up to its potential: most code contributions came from within Sun, not from outside developers, save for small pieces.
Then, recently, it's about OpenOffice -- true, at least that was called out by Sun, but it still appears to have been done haphazardly and, it seems, without proper funding. I doubt that without corporate backing (especially monetary), LibreOffice will get anywhere. They'll probably try to fly back under Oracle's wing before 2011 is over. I may be wrong, it depends on how hard they want to drive the point, but I'm fairly sure there will be a lot of stagnation in development in the meantime.
And now about Java -- what's wrong with the roadmap that Oracle laid out? Nothing, apparently, apart from the fact that Oracle was the one that laid it out and Oracle is against Google, which automatically makes Oracle evil and all their decisions null and void?
Is it really bad that Oracle tries to recover the money they spent on Sun?
He does make a lot of sense
1. When flash reaches high enough capacity for home use at low enough prices, the market will slowly abandon spinning drives. With fewer traditional drives sold, the loss of economies of scale will slowly hike the price of HDDs, closing the gap even further in a positive feedback loop. As consumer drives go up in price, so will enterprise drives. This does not affect tape, which was always a niche compared to disk.
2. Bit density on tape still has ample room to grow. T10K cartridges have a tape surface area of about 75,000 cm^2. Compare this to about 456 cm^2 maximum for 4-platter 3.5" disks (I'm assuming 3.5" platter diameter with a 1" diameter hub). The bit density of T10KB-formatted tape is about the same as that of a 6 GB disk, so there's ample room for growth. Assuming the bit density of a four-platter 2 TB disk, a typical (4x5x1") cartridge could hold over 150 TB of data.
3. The T10KB has 240 MB/s native throughput, not 120 as in the article (that's the throughput of the original T10K). A 20 TB cartridge will store data at 20 times the density. Assuming there would be 144 tracks (compared to 36 on the T10KB), linear bit density is 5 times higher, so 1.2 GB/s throughput should be achievable. Assuming 100,000 slots means 10 connected SL8500 libraries with 64 drives each, that 1,380 TB/hour translates to almost precisely 600 MB/s per drive (given rounding, the difference is insignificant).
4. Unlike LTO, Storagetek drives maintain backward and forward compatibility, with the same cartridges usable across various generations of equipment (based on the formatting), regardless of technology or format changes in between. It can be expected that the T10K cartridge will be usable on T10KC or T10KD drives, depending on their underlying technology. Obviously, Fowler may have meant 20 TB compressed capacity, which makes it perfectly viable -- 10 terabytes in 2015 seems almost like a breeze. Assuming a 2 TB T10KC is released before May 2011, and a 4-5 TB T10KD in 2013, a 10 TB T10KE is certainly possible in 2015. 20 terabytes native is significantly more involved and would probably require Storagetek to break backwards compatibility.
5. At some point flash may become significantly cheaper (although it's doubtful that progress would be notably faster than Moore's observation suggests, though 3-bit MLC could let flash overtake it, as could the 3D cells some people have suggested), and tape storage would be on its way out, possibly replaced by switched SATA/SAS in a MAID (zero spin-up time could make it possible). This of course assumes that the high-density storage is indeed cheaper to make and that there will be people willing to pay for a lower tier of SSD storage (slower, but higher capacity and/or significantly cheaper).
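The arithmetic behind points 2 and 3 can be sketched as follows. The 75,000 cm^2 tape area and the drive/library counts are the figures from the points above; the platter radii are my assumptions:

```python
import math

# Point 2: recording-area comparison, tape cartridge vs. 4-platter disk.
tape_area = 75_000                   # cm^2, T10K cartridge (figure from above)
r_out, r_in = 4.445, 1.27            # cm: 3.5" platter, 1" hub (my assumption)
disk_area = 8 * math.pi * (r_out**2 - r_in**2)   # 4 platters, both surfaces
print(round(disk_area))              # ~456 cm^2

ratio = tape_area / disk_area        # ~164x more recording area on tape
# At the areal density of a 6 GB disk, the cartridge holds roughly 1 TB,
# which is consistent with T10KB capacity:
print(round(6 * ratio))              # ~987 GB

# Point 3: 1,380 TB/hour spread over 10 SL8500s with 64 drives each.
drives = 10 * 64
mb_per_s = 1_380 * 1_000_000 / drives / 3600
print(round(mb_per_s))               # ~599 MB/s per drive
```

Both numbers line up with the claims above: the areal ratio reproduces the T10KB's ~1 TB capacity, and the library throughput works out to almost exactly 600 MB/s per drive.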
That's just fantastic
I just love this double standard. On one hand, you are urged to be successful in what you do. On the other hand, if you're "too" successful, the likes of those watchdogs will want to punish you for trying.
This is ridiculous. Google built an empire of its own from scratch, pretty much without any competition. Now that their business model is proven and successful, freebooters want to simply copy them and get rich in the process. But one after another fails, and then blames Google for that failure (rather than their own ineptitude and lack of originality and distinguishing features).
Oh sir, you kill me!
> The delayed entry of Intel's Larrabee and the dead-ending of IBM's Cell
> (at least on blade servers) gives AMD's Firestream GPUs a better chance
> against Nvidia's technically impressive Fermi family of Tesla 20 GPUs.
Technically, they're not impressive: they don't exist (fake cards don't count, and 7 chips do not really make volume production).
As they don't exist, you can't really bench them.
The 5870 was already benched by SiSoft to be 8.8 times faster in double-precision FP than the 260 GTX. Even assuming Fermi ends up 8 times faster than the 260 GTX, it will barely be on par with the 5870 (realistically, we can assume it will be 4-5 times faster than the previous generation).
Given that Fermi is going to be a huge part, it is going to have power issues as well, likely drawing even more than the current Tesla, which already draws 10 times more than ATI's parts -- and the 5870 is rather frugal. Needless to say, this isn't going to earn them any top spots in the Green 500.
You need error correction? Run two 5870 cards beside each other (or one 5970) and compare the results. It's still going to be cheaper than Fermi.
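To put the ratios in one place: the 8.8x figure is the SiSoft result cited above, while the Fermi multipliers are my assumptions for the sake of the argument, not measurements:

```python
# Relative double-precision FP throughput, GTX 260 = 1.0 (baseline).
# 8.8x is the SiSoft figure; the Fermi multipliers are assumptions.
gtx260 = 1.0
hd5870 = 8.8 * gtx260             # SiSoft measurement vs. GTX 260
fermi_optimistic = 8.0 * gtx260   # optimistic case: 8x the GTX 260
fermi_realistic = 4.5 * gtx260    # realistic case: 4-5x previous generation

print(round(fermi_optimistic / hd5870, 2))  # 0.91: at best roughly on par
print(round(fermi_realistic / hd5870, 2))   # 0.51: realistically about half
```

Even under the optimistic assumption, Fermi only just matches a single 5870, which is why running two 5870s for redundancy still undercuts it.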
> The Fermi chips will be available as graphics cards in the first quarter
> of next year and will be ready as co-processors and complete server
> appliances from Nvidia in the second quarter.
Oh, really? With the slips they've suffered over the last year, they'll be glad to put *anything* on the market before they run out of assets. Nvidia has nothing to compete with ATI in the GPU market; Fermi is a huge die and is going to be too expensive to interest gamers who can get two Radeons for the price of one GeForce (unless Nvidia decides to shoot themselves in the foot and sell below their margins).
> And they will likely get dominant market share, too, particularly among
> supercomputer customers who want to have error correction on the GPUs
> - a feature that AMD's Firestream GPUs currently lack.
Assuming that they can actually put anything on the market. While adding error correction is not a simple matter, I think AMD can do it within a reasonable time frame, and with Nvidia lagging behind, it would be foolish to think AMD has nothing on their roadmaps.
Pretty much the same as happened in the 80s
Back when it was about PCs, in the 80s and 90s last century, the courts blocked the PC clone industry and gave IBM the sole right to manufacture PCs. Therefore, I am writing this comment on a genuine IBM PC...
What will AMD do
@Matt -- don't worry. People use home PCs for four things:
1. Basic office work
2. Web browsing
3. Gaming
4. Limited content creation
Of course, number one hardly needs more than one core. Number two can benefit from two of them (browser+flash), number three will benefit from more cores (more and more games are springing up to take advantage of multiple cores), and number four is going to take advantage of all the cores your PC can muster.
And multiple cores make for a future-proof investment -- most applications are written to take advantage of multithreading, and it is the current paradigm. Even if a dual-core CPU is fine today, it may not be enough in a month or two. If current HD movies can tax one core when you don't have hardware acceleration, future movies might tax four of them. Sucks to be you if you don't have multiple cores then.
And multiple cores help out a lot in normal usage, too. With more of them, you can browse, listen to music and run Flash, and nothing in the background (including e.g. virus scanners) will disrupt your work.
Six might be overkill at the moment, but people will eventually find a use for them. And individual cores can idle quite nicely; the Windows 7 scheduler is supposedly aware of idling cores and will avoid assigning a workload to a parked core when doing so would hurt efficiency.
@h4rm0ny -- AMD wouldn't shoot themselves in the foot like that. They need a well-rounded lineup, and new parts should be forthcoming.
@Gary F -- from what I read online, AMD is well able to create their own i5/i7 equivalent; the problem is price, given AMD's current market share. Magny-Cours and Sao Paulo are supposed to close the gap, and if Intel screws up -- 1) by focusing too much on the integrated GPU in Sandy Bridge, or 2) if 32 nm ends up too expensive to manufacture and as a result offers no tangible benefit at its price point* -- Bulldozer might very well end up faster than Intel's future chips.
*) And it's not far-fetched, either -- analysts point out that 32 nm might be too expensive to manufacture, especially at first.
So, where's the *WEST* Antarctic?
I thought it was one big continent, centered on the South Pole, with all the coastline in fact facing NORTH.
While people can traverse east and west across Antarctica, you won't reach the shore by moving in those directions.
Thin clients are nice
And they work. They work especially well in discouraging users from browsing YouTube.
Of course, thin clients do wonders for the bills (power, but also air conditioning), and the savings scale very nicely with the number of users.
As for thick clients, you can always anticipate problems. Hence monitoring software. Most system vendors provide free tools for their systems which are able to log system events and forward them to a central repository (including SNMP traps, or more commonly, e-mail notifications).
So if any component starts operating only marginally, support people get a heads up on the problem.
Now, it's a wholly different problem to persuade management (or the beancounters) to actually okay system repair costs, so the support personnel can either risk their budget and run preemptive repairs, or wait until the part breaks -- at least then they'll magically know which part needs to be replaced.
Paris, because she knows the difference between thick and thin.
Databases and IO
Yes, databases are IO-intensive, and that's where Sparcs shine. I know I simplified (maybe oversimplified) the issue, but it boils down to the same thing. Database queries are easily threadable. Sparcs can switch out of a stalled thread (regardless of what the thread is waiting for) and execute other threads in the meantime.
The thread does not have to wait for the IO and stall in the traditional sense (which would cause the CPU to idle); this mechanism lets other threads move forward, and the CPU switches back to the stalled thread once its data is available.
What I meant by a rarely accessed dataset is that there won't be threads able to move forward while other threads are stalling, so every CPU is going to depend completely on the IO.
Now, I won't go into specint or specfp; I don't know them for pretty much any of the CPUs on the market, so I've got no idea what I could prove with them, or what you would.
@Ian Michael Gumby
> Just because a core has 8 threads per core, it doesn't mean that
> the performance of Oracle on the chip will increase significantly or
> that it can be tuned to take advantage of the extra thread.
> The current round of database designs are not parallel enough to
> take advantage of these extra threads. While Sun wants to say that
> a core w 8 threads is really like 8 virtual cores or 4 virtual cores, that
> doesn't translate to 8 times or 4 times the performance boost over
> a core and a single thread/ double thread.
Ummm, actually it does. Databases are one of the few types of applications that scale almost linearly with the number of threads. Each query can be (and usually is) run as a separate, independent thread.
Databases are also memory- and storage-dependent. As database queries are (usually) random, there is no way to avoid heavy memory use, and efficient use of the available memory bandwidth is that much more important.
Sparcs really shine there.
> There would have to be a major overhaul of Oracle to really scale
> and take advantage of these cpu advantages. Until more of the major
> chip vendors move to a similar architecture, there is little incentive for
> a major RDBMS house to make the effort to change the infrastructure
> to take advantage of these chip advances.
Maybe, no and no.
Maybe an overhaul of Oracle is required.
No, chip vendors will not pick up Sparc, as they would need to divert resources from their other designs, nor is it actually necessary.
And no, Oracle will likely own Sun soon and this gives them incentive to provide any and all necessary improvements or enhancements.
> In short, you may be better off purchasing a cheaper cpu and
> bring down the cost per transaction, than spending $$$ for
> the additional horsepower you can't use.
Maybe, but only for small, maybe medium, databases. Sun T iron is not too expensive compared to the competition -- for what they are worth, benchmark results show it's vastly cheaper than POWER iron and more or less on par with x64 (when comparing bang per buck), and their running costs -- especially power and cooling -- are much lower than systems at comparable prices. Now you also have lower licensing costs. This all translates to much lower TCO for Sparcs and Oracle will not really lose anything on that.
> Maybe this is why they're cutting their prices?
They are cutting the prices to be more competitive. Using Sparcs for databases was, and still is, overlooked by most datacenter owners, even though the pace has been slowly picking up since the T2+ was introduced.
Debian does hybrid suspend/hibernate
On Debian 5 (not sure about other distros, but Mandriva 2008 and earlier did not have this), there is s2both, which does what hibernation would (i.e. save state to disk), but instead of powering off, it suspends.
The downside is that the system takes its sweet time to go down (as much as hibernation would). The flipside is that it comes back up as fast as it would from suspend-to-RAM, yet if you lose power, the system does not do a full boot but returns from hibernation.
I have to say, that is the best of both worlds, isn't it?
They are taking competition quite seriously
> Itanium chips were originally at a 0.75 scaling factor, by the way,
> but were reduced at some point,
Well, they were, and reduced, too, because everybody in the business believed that Itanium was going to be the next big thing rather than the Itanic.
Unfortunately, while Itanium is a nice all-round CPU, it isn't really good for database work, unless the database is a rather small, rarely accessed dataset (in which case it simply sucks as much as any other CPU).
> and despite the large number of cores in modern x64 chips from
> Intel and AMD (four or six), Oracle has not been tempted to raise
> the scaling factor here. It will be interesting to see what Oracle does
> when AMD crams 12 cores in a socket and Intel starts cramming in
> eight cores.
Nothing will happen. I know AMD is going to make Magny-Cours a multi-chip module (MCM); is that also true of the 8-core Nehalem? I have read many conflicting reports on that.
Note that Oracle still has an MCM clause regarding IBM POWER CPUs, where they are licensed at 2x the cost per socket (treated as the two CPUs they actually are rather than one package). I would expect Oracle to use that clause against AMD and Intel for their upcoming chips.
This might make T2+-based machines really nice Oracle boxes, given that they are already well suited to that kind of workload.
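A quick sketch of how that licensing arithmetic would play out. The core factors below are my illustrative guesses, not Oracle's actual published table, and the machine configurations are hypothetical:

```python
import math

def licenses_needed(sockets, dies_per_socket, cores_per_die, core_factor):
    """Processor licences = total cores x core factor, rounded up.
    An MCM clause counts each die in the package separately, which is
    what dies_per_socket models."""
    total_cores = sockets * dies_per_socket * cores_per_die
    return math.ceil(total_cores * core_factor)

# Hypothetical two-socket boxes:
print(licenses_needed(2, 1, 8, 0.25))  # T2+-style, low factor -> 4
print(licenses_needed(2, 2, 6, 0.5))   # Magny-Cours-style MCM -> 12
```

The point being: a low per-core factor plus a single die per socket keeps the licence count down even with lots of cores.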
There were some interesting comments in the last round of SPARC-bashing in the linked article. I would just like to correct some of Matt's statements in that discussion:
1. Memory bandwidth does not make up for memory latency -- idle cycles are lost regardless of whether memory serves gigabytes or terabytes per second. Database queries are rarely larger than a few kilobytes, but latency prevents that data from reaching the CPU quickly. If you have a few cores and all of them are waiting on random queries, they will stall. A Niagara will stall too, but instead of having 8 or 16 threads to switch between, it has 64. The small cache has nothing to do with it: relative to the speed of a single thread (assuming all threads stall and are switched), the memory latency can be treated as a single cycle.
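Back-of-envelope version of that stall argument: a miss that takes ~50 ns costs the same number of core cycles however fat the memory pipes are. The 1.4 GHz clock here is illustrative, not a specific chip:

```python
def stall_cycles(miss_latency_ns, clock_ghz):
    # cycles lost = latency (ns) x clock (cycles per ns)
    return miss_latency_ns * clock_ghz

# ~70 cycles idle per random-access miss on a 1.4 GHz core,
# or ~70 cycles of other threads' work on a barrel processor:
print(stall_cycles(50, 1.4))
```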
Oh, and the cache of the T2 was enlarged relative to the T1 only because you need to retain more data for more threads. That's quite elementary. If Matt's argument for more cache held any water, Sun's microarchitects would have had to at least double the cache to keep up with the 2x increase in the number of threads handled, yet they increased it by a measly 33%, from 3 to 4 MB.
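Run the numbers from the post and the per-thread share of L2 actually *fell* from T1 to T2, which supports the point that cache size was not the design driver:

```python
# Per-thread share of L2 cache, using the figures above.
for name, l2_kb, threads in [("T1", 3 * 1024, 32), ("T2", 4 * 1024, 64)]:
    print(name, l2_kb / threads, "KB of L2 per thread")
# T1: 96 KB per thread, T2: 64 KB per thread
```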
By the way, as for bandwidth: a T2/T2+ chip has four DDR2 controllers on-die. That gives more bandwidth than two or three DDR2 controllers, and only 33% less than three DDR3 controllers on-die, so the Niagara chips are definitely not starved for memory bandwidth.
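Sanity check on that 33% figure, using the standard per-channel peaks (DDR2-800 = 6.4 GB/s, DDR3-1600 = 12.8 GB/s); the controller counts are the ones from the post:

```python
t2_bw = 4 * 6.4     # four on-die DDR2-800 channels  -> 25.6 GB/s
x86_bw = 3 * 12.8   # three DDR3-1600 channels       -> 38.4 GB/s
print(round(1 - t2_bw / x86_bw, 2))  # about a third less
```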
2. DDR3 memory might not be faster than DDR2 memory in some workloads. DDR3 memory might have a CAS latency (CL) of 7 or 9 cycles, whereas typical DDR2 memory has a CL of 4 or 5. DDR2-800 CL4 is always faster for small random queries than DDR3-1600 CL9, even though it has far less bandwidth.
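The cycle counts only matter once converted to wall-clock time, since DDR3's cycles are shorter. Converting the two modules above:

```python
def cas_ns(cl_cycles, data_rate_mt_s):
    clock_mhz = data_rate_mt_s / 2   # DDR: clock is half the transfer rate
    return cl_cycles / clock_mhz * 1000  # cycles / MHz -> ns

print(cas_ns(4, 800))    # DDR2-800 CL4  -> 10.0 ns
print(cas_ns(9, 1600))   # DDR3-1600 CL9 -> 11.25 ns
```

So the older DDR2 module still wins on first-word latency despite half the transfer rate.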
3. If your thread stalls, it doesn't matter whether you have a 64 MB cache or a 64 KB cache. The CPU does not work on large sets anyway -- two or three 64-bit operands at most per cycle, which with 64-bit instructions adds up to 256 bits, or 32 bytes. Some SIMD instructions take more data, and some data may be larger, with work on it spread across multiple cycles, but a small cache is never a hindrance while the CPU is waiting on memory. When a random database access comes in, the CPU will not have the data cached (by definition of random data). If the CPU waits, say, 50 nanoseconds for the data, it can either idle (as most CPUs do) or switch to a different thread (as Niagara, Nehalem and some NetBurst chips do). Nehalem and NetBurst cannot switch more than once, but Niagara can then switch 14 more times, and when the data arrives it can switch back to the requesting thread at once, or cache the data and wait for the thread. After that random data is processed, it doesn't need to be kept in cache anyway.
4. As for Rock: while it's sad that Sun will not be releasing that CPU, they did not revise their roadmap as much as has been suggested. Rock was to stay on the market for only two or three years (which is ludicrously short for an enterprise CPU), and all the improvements Rock introduced were to be incorporated into the new VT core of all future Sun CPUs rather than keeping Rock as a separate family.
To the best of my understanding, Sun has agreed with Fujitsu to not duplicate effort, leaving the general-purpose Sparcs to Fujitsu as their SPARC64 line.
OU is not meant to be a webhost, for f***'s sake
A lot of people here assume that people will use or try using Opera Unite to:
1. Host illegal content.
2. Make it available to a high number of people.
3. Run an advanced web server (ie. dynamic pages).
4. Run services on a high-availability, high-traffic, 24x7 basis.
While I firmly believe that the majority will use it in order to:
1. Share personal pictures and videos.
2. Make it available to friends and family only.
3. Run basic content frontends.
4. Make their shares available while they're online only.
And as such it is an incredible idea. You can share pictures almost instantaneously (without posting them to Facebook or sending them via e-mail), one at a time (instead of sending an entire bundle).
Some say it's a ridiculous idea. Wanna bet how long it takes the Firefox developers to start bundling an Apache-lite server along with their suite? It's going to be the fourth element in their happy circle of apps -- a Groundhog, maybe.
> As for this piece of dirt they're calling a revolution - have any of you knumbnutz actually
> considered leaving this stuff running on your parents/grandparents/daughters/sons
> machines constantly?
It's not meant to be running constantly. I assume people will share stuff when they're online and turn their machine off once they're done for the day.
> IIS comes with Windows yet none of you are using it, the question begs why not if this
> Opera bullcrap is getting you so aroused?
First, it comes with the Professional (2000 and XP) and Premium (Vista) editions only. Second, setting up your own IIS (or Apache, for that matter) server and adding a nice interface to it is difficult enough for most people. Setting this up with Opera is easy.
Plus, it's free (yeah, so is Apache; but see how many people actually download and run it), so refrain from jumping in to save them from their folly.
> You turn off the computer for a week while
> you go on vacation and all of a sudden you're New Zealand relatives can't see your
> cute puppy doing summersaults.
So what? Once you're back, they'll see it again.
> Then there's the biggest reason your mom shouldn't be sharing files:
Oh, I shudder at the thought that I will have to pay millions for sharing pictures and videos I shot myself. You think I should pull down my galleries, or risk keeping them up for my family to see?