53 posts • joined Monday 22nd June 2009 18:40 GMT
Re: Good luck to you Penny Arcade
By all means apply. The problem is, they've got millions of readers and PA has a budget of several hundred thousand dollars, perhaps into millions now. The job will definitely suck hard. Of course it won't be a 160 hours per week job, they will expect you to squeeze the 160 hours into 40.
Sure, prioritization, but I seriously doubt they have a ticketing system and any SLAs. You'll likely be doing a few things at once, like trying to work out the network because there are some looming problems that might bring down the whole operation and fixing a printer because the printouts for the next month's presentation come out a bit too dark to the art director's liking. Guess who will demand you drop everything and solve his problem immediately. You'll end up fixing the network after hours, which nobody will notice since you're directly managed by people with no IT experience whatsoever.
If you are professional, if you prioritize and never take overtime, you'll have a working network, but you'll be kicked out because of ignoring the key person in the company. Hint: in a small organization, they are *all* key people and their problems come first. If they retain you, expect no raises because you're not doing your job well.
I wonder who will fall for that ad
I read PA regularly and I appreciate their comic strip, but this ad really shows how easily you can lose touch with reality. I'm sure Mike and Jerry are perfectly happy to work their asses off for a measly reward since they own the operation and I assume they'd be happy with it even if they only barely broke even. But working behind the scenes, however important your job is, you'll never get the accolades and you'll never be in the spotlight, while you're still expected to put all of yourself into the job. Might as well get paid for it, no?
What would be the perks? Being able to talk to the owners? Play video games with them? Seriously? If that's supposed to cover for the ridiculous salary, they're really stretching it.
Problem is, it's a McJob. Expect poor pay, worthless experience and constant patting on the back, telling you how important you are to them. The problem with experience is that it will never be appreciated properly. It will be appreciated by other small operation webcomics, who will line up to fleece you at the same job just like PA is likely to, but any serious employer might actually balk at this, fully expecting that you just goofed off at the job. Finally, for all the work you put in there, you'll be laid off if they're ever merged into something larger.
Re: Well, it's Penny Arcade...
Sure. A 'normal' and 'boring' job ad, outlining the actual challenges and any items specific to the job.
Once you accept a funny ad, expect your job to be funny (but of the black humor variety). Oh, and prepare to accept funny money for that, too.
Re: Good luck to you Penny Arcade
I can't agree enough. Especially when looking at this:
- Annual Salary: Negotiable, but you should know up front we’re not a terribly money-motivated group. We’re more likely to spend less money on salary and invest that on making your day-to-day life at work better.
Improving day-to-day life at work? That still requires money, which I would rather spend on things like a car, a good lunch, etc. The salary for this combined job should be no less than four salaries minus the overhead of three extra people, so something like 200-300 thousand dollars. And frankly, the ad reads more like they expect someone to come in and work for some 20-30 thousand.
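The back-of-the-envelope above can be sketched out; the per-head figures below are illustrative assumptions, not anything PA published:

```python
# One person doing the work of several should earn roughly their
# combined salaries minus the per-head overhead the employer saves.
def combined_role_salary(single_salary, heads_replaced, overhead_per_head):
    return heads_replaced * single_salary - (heads_replaced - 1) * overhead_per_head

# e.g. four $70k roles, ~$15k overhead (benefits, desk, kit) per head saved
print(combined_role_salary(70_000, 4, 15_000))  # 235000 -- in the 200-300k range
```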
Somebody doesn't get the concept of preorders
So the mobes will hit the retail channel in November and they will only then be sent out to those who preordered and should get to them before the end of the year?
And to think they only paid €100 for the privilege...
What's IOP? The article uses a pluralization form of IOPs. Obviously IOPS is I/O Operations Per Second, but IOP?
Re: Slightly fruity comparison
Fruit flies? Of the radioactive mutant variety?
What about tape?
Let's discuss the flash solution. It could be made in small or large modules, each with its own advantages and drawbacks. Even though it's frugal, flash still needs power to function. The larger the basic module, the more power it will use. Then there's the matter of reliability: larger modules would fail more frequently per module than small ones. And ultimately, they would be more expensive per byte because they would need more complex controllers. These factors would favor smaller modules. However, smaller modules would require more complex routing and switching, and finally very complex controllers for each brick of modules.
Would it be cheaper than current flash technologies? Sure. Would it be cheap? Not by a long stretch. Flash is still 8-10 times more expensive than spinning drives. TLC doesn't bring the cost down far enough.
It's also not a matter of density. At the same node, I suppose flash makers could make features denser, but even if they were twice as dense (which is rather unrealistic), we're looking at only four times the raw capacity -- which is still more expensive than spinning media.
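The cost argument above in numbers. Only the ~8x flash-to-disk price ratio and the "twice as dense" scenario come from the comment; the absolute prices per GB are illustrative assumptions:

```python
flash_price_per_gb = 0.80  # assumed; ~8x the disk figure below
disk_price_per_gb = 0.10   # assumed

# Doubling feature density in both dimensions quadruples raw capacity
# on the same die area, i.e. roughly quarters the cost per byte...
denser_flash = flash_price_per_gb / 4

# ...yet it still doesn't undercut spinning media.
print(denser_flash > disk_price_per_gb)  # True
```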
Interference would become a greater problem, and it would probably cause the usable capacity to not increase as fast as raw capacity did. Durability would suffer, of course, but as the guy said, it's not a problem for them, especially since they already don't delete the content, but keep it hidden.
Nevertheless, it's still not a solution. Perhaps Facebook will be happy with the resultant module, even if it's expensive, if they think it will save power, or if it would be less complex to build and maintain, but I don't think so.
Which brings me to tape. There are T10000C drives that offer 5 TB per tape, and T10000D on the horizon which will offer more -- that's beyond the LTO roadmap at the moment, so I'm not talking about LTO. Tape has the nice property that when it's idle, it's not using up power and when a cartridge is needed, automation takes care of picking it up and mounting on a drive.
That said, I realize that if he said that waiting times for spinning up disks are too long, waiting half a minute or so to access a tape would probably be much too long for a user to wait. Caching part of the content on disk to wait until a tape is mounted would probably alleviate some of this concern. However, the service is free of charge, so Facebook pretty much has all power to set SLAs for it.
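The disk-cache-in-front-of-tape idea suggested above, as a minimal sketch. The class and names are hypothetical, not any real Facebook or library API; `fetch_from_tape` stands in for the automation that can take half a minute to mount a cartridge:

```python
class TieredStore:
    def __init__(self, fetch_from_tape):
        self.cache = {}                  # disk tier: object id -> data
        self.fetch_from_tape = fetch_from_tape

    def read(self, obj_id):
        if obj_id not in self.cache:     # cache miss: pay the mount latency once
            self.cache[obj_id] = self.fetch_from_tape(obj_id)
        return self.cache[obj_id]        # subsequent reads come off disk

store = TieredStore(lambda oid: f"blob-{oid}")
store.read("photo1")         # slow path (tape mount)
print(store.read("photo1"))  # fast path (disk cache): blob-photo1
```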
Re: Why when Radio is already free?
What's so bad about DAB?
Re: Wow! 75 times faster than... whaaat?
@Steve.T: Reading comprehension, man. It was obviously irony. Should I have used HTML5-compliant <sarcasm> tags?
They can be used for light gaming, assuming you're happy with 1366×768 resolution at absolutely lowest settings (some games provide Intel-specific setting, which offers quality even below the basic).
Oh, and funny you should mention AMD bought ATi. Remember the Intel740? Thought not. Intel bought Real3D and released their GPU in 1998 -- eight years before AMD bought ATi. They had EIGHT MORE YEARS to develop the (admittedly rubbish at the time) solution into a solid product. When AMD bought ATi, ATi was struggling with its lineup, slowly recovering from the 2000 series debacle with the notably improved 3000 series, and didn't become well entrenched until the 4000 series and Evergreen. Integrated GPUs from ATi were already vastly better than Intel's at that time, and they were excellent even without much prior support from AMD. Intel's GPUs continued to lag behind AMD's, and when AMD integrated them into APUs, Intel was again outstripped.
Between 1998 and 2006, Intel had time to improve their GPUs. They failed. They had eight years of possibilities to integrate the CPU and GPU within the hardware, even when the GPU resided in NB, but they didn't care about it. Since 2006 they have slowly improved, with each generation about doubling the performance, but it was still way behind the curve. Seeing Intel's lack of initiative I have to call bullshit on this 'Iris'. Maybe Haswell is not going to bring anything new to the table in terms of graphics (aside from increased clocks) and Iris is just a way to counter the lack of performance by doubling the number of GPUs.
As for playing video streams -- Intel's CPUs DO NOT use the GPU portion for decoding the stream. The CPU has a dedicated processing unit for this. That unit is impressive in its own right -- it is supposed to play high numbers of video streams without breaking a sweat.
And your last paragraph -- as long as Intel is trying to stick x86 into everyone's face, they will continue to fail. And it's funny how Intel continuously claims that their target is ahead of them. When i740 was released, high performance GPUs were their future. They failed. Then they said their goal was best integrated graphics. They failed. Then they were supposed to release Larrabee, which was supposed to introduce Intel to the enthusiast GPU market. When that failed, they said Larrabee was intended for heterogeneous computing all the time and they never intended it to be a GPU. Now you are saying their goal is best performance in tablets? Ain't gonna happen. Iris isn't going to convince anyone, either.
Wow! 75 times faster than... whaaat?
Seriously, who are they kidding? Why not claim they are seventy-five HUNDRED (7500) times faster than ViRGE, the 3D decelerator? While they're at it, why not remember that their Core i7 CPUs are several hundred thousand times faster than 8088?
75×rubbish is just rubbish, but more of it. Their drivers are bad, and the performance is in the basement compared to integrated GPUs from AMD. For Intel to be on to something with Iris, the competition would have needed to stand still for the last five years. Wake-up call, Intel! You are NOT competing with a 2006 chipset-integrated Radeon or GeForce! You're going to compete with 2014 APUs which are going to include hUMA (which for most users will mean PS4-like GDDR5 system memory). Your GPU may well be 75 times faster than in 2006, but AMD's GPUs made more improvement in the last 7 years and you are not going to fool anyone.
I can see nobody mentioned Bicom yet
It launched in 1993. I remember reading the review, but frankly, there's scant info about it on the Internet today.
Nevertheless, I did find information on it. There were two models: 240i and 260i, differing in HDD capacity (40 and 60 MB, which was a lot for a notebook back then). Specifications:
- Am286LX at 16 MHz (1.5 µm node)
- 2 MB RAM
- Dimensions: 223×161×31 mm (smaller than original EeePC!)
- Weight: 1 kg (which is less than some EeePC models, at almost 1.5 kg)
- 7.5" monochrome displays (640×400 resolution, line-doubled CGA)
- Battery life: 3-4 hours on 5 AA batteries (!), you could use rechargeables (Ni-Cd at the time).
- Price: I can't remember now, but for what it did, it was cheaper than the cheapest regular notebooks, at some $300-400.
It's hard not to draw parallels between then and now. The subnotebook was based on technology that was two generations behind the mainstream (486, color displays), which is about where netbooks are in relation to notebooks.
Obviously, technology has progressed since then, but in the 15 years between this and the original EeePC, what did we get in return? Frankly, not much! Larger hard drives, color displays (sometimes they are even larger), more memory. But feature bloat means the netbooks perform no better than their old rivals. If you used 700 mAh Ni-Cd rechargeables with the Bicom, you got 3 hours of battery life. With 2700 mAh NiMH rechargeables, you would get 12 hours -- compare that to 3 hours on an EeePC with batteries rated at 5600 mAh and a higher voltage. Displays obviously draw the most energy, but 15 years of progress should have brought them at least to parity. If anything, turning off the backlight (or the display altogether, and running on an external monitor) should allow the netbook to work considerably longer, but it doesn't.
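The battery arithmetic above can be checked. The mAh figures and runtimes are from the comment; the cell voltages (1.2 V per Ni-Cd/NiMH AA, 7.4 V for the EeePC pack) are assumptions:

```python
# Average power draw implied by a battery's capacity and runtime.
def avg_draw_w(mah, volts, hours):
    return mah / 1000 * volts / hours

bicom_nicd = avg_draw_w(700, 5 * 1.2, 3)    # ~1.4 W
bicom_nimh = avg_draw_w(2700, 5 * 1.2, 12)  # ~1.35 W -- consistent with above
eeepc      = avg_draw_w(5600, 7.4, 3)       # ~13.8 W

print(round(eeepc / bicom_nimh, 1))         # 10.2 -- roughly a 10x gap
```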
Is it unreasonable to expect that you should be able to get a 9-inch notebook running a shrunk CPU two generations old (hey, it would be the original Nehalem now) -- not downclocked, mind you, with an SSD, weighing in at less than 0.5 kg, with dimensions of an A5 sheet of paper and at most 5 mm thick?
Re: Hmmm, what about tape?
You're welcome. And no, I didn't mean absolute leadership, but some of the names and companies should have never made the list.
Hmmm, what about tape?
What about Storagetek/Sun/Oracle and tape? What about IBM for that matter? You mentioned HP, which might go bankrupt with its assets dispersing among various other companies, but IBM and Oracle tape equipment right now holds more data (an order of magnitude more) on tape than is kept on disk. Tape still has the edge over flash for overall TCO, and flash vendors have to catch up with tape, not vice versa.
I understand that you may be enamored with the new technologies, but as it stands, the list is woefully inadequate. Dropbox, Facebook and Amazon? Pretty much equivalent in terms of web storage -- pick any one of the three or add many more to the list if you really believe that they matter. How about adding Rapidshare, then? Going further, I see fusion-io, which is apparently struggling, as you yourself reported:
Is this the financial outlook of a market leader and a successful company?
It doesn't seem that Fusion-IO is leading in benchmarks, either. Or are you considering adding OCZ to the list as well? It certainly doesn't seem that competition has to chase Fusion-IO in anything.
Is this a contest of who can make the presentation most devoid of content and cram as much clip art onto a page as possible?
What do the gears even represent? How the hell are two completely different domains -- sourcing and monitoring -- supposed to drive one another???
Or is it just a classroom project of one of the execs' children?
Re: The success of capitalism
I've yet to see Capitalism at work. The bailouts are precisely where the problems lie -- it's not Capitalism, it's thinly-veiled Socialism with tolerated Personal Property (unless said Personal Property needs to be taken over by the Government to support its own interests).
Why would it matter whether they are experts on global climate? The only conclusion of the research is the unprecedented scale of carbon sequestration in peat, and the enormous rate of growth of mires and bogs. The additional conclusion that the amount of carbon sequestrated might be high enough to offset human industrial CO2 emissions is added almost as an afterthought.
By the way, annual anthropogenic CO2 emissions are within the margin of error of estimates on amount of CO2 emissions from a medium-sized volcano eruption. How could human-made CO2 be responsible for anything? That's the thing that I've never seen climate scientists refute. It's like they're trying to explain how we can heat an ocean using a candle.
Given that indeed.com is a global site, and contains many more jobs in other sectors than IT, it's a big deal.
Even if it is not a huge percentage within IT, it's still significant.
Compare it to other IT jobs: programming, where the basic skill is a language; generic IT jobs, where no particular skills are required; and web coding, where you'll find the usual fare of CSS and PHP. Hadoop will be a significant part of the remaining critical jobs -- like mainframe or Unix administration -- which might seem archaic to some, but bring in a lot of money.
Not everyone will have to know Hadoop inside-out, but those who do, and whose skills are required, will rake in the big cash.
Re: To quote Top Gear... How Hard Can It Be™
Okay, so I'm risking a reply to what may be an obvious troll, but... The Soviets were thought to have an enormous advantage in rocket design 50 years ago, and well, look up the N1.
The Soviets couldn't build a "simple rocket" to reach the Moon, which makes the American Saturn V that much more remarkable. (Some try to say it was a lucky break, but the perfect safety record says otherwise.)
Re: "squeezed the juice" out of the two papers...
Unfortunately (yes, I'm Polish), not much. It's not that a lot of the theoretical foundations weren't laid by Polish mathematicians, it's that certain political decisions caused them to fall by the wayside.
However you want to twist it, siding with the French in their code-breaking efforts cost them the chance to work at Bletchley Park. Turing was a brilliant mathematician and computer scientist and he did a lot more work in breaking the code than any other man.
Cheers to that!
Well, the writer lives on the assumption that Apple's market share is growing and is significant everywhere in the world.
Well, that's not the case in roughly 90% of the world. OS X requires Apple hardware, and people don't want to pay the Apple tax for an otherwise ordinary PC. Not to mention exorbitant prices for parts and limited upgradeability. Sorry, but for the price of a Mac I can get a much more capable PC and run whatever I wish on it.
Re: Re: Re: huh?
Aside from being a shiny toy, what can a tablet do that a PC cannot at half the price?
Re: Re: "Windows is dead."
Yes, that is, assuming all those users decide to either:
1. Accept 10-12 inch screens on their tablets.
2. Accept 10-12 kg tablets with 27 inch screens.
Everybody seems to be under the impression that screen size no longer matters. And if you add in a keyboard, mouse, external screen and a power brick to an otherwise svelte slate, the sum becomes vastly more cumbersome than a desktop, vastly more expensive, and vastly less capable.
Or has everybody forgotten that tablets cost twice the amount of a more capable desktop PC?
How much are they suing for?
It misses on the most important detail -- how high do they rate their moral losses and how much do they want from Tesco/Apple/world+dog?
Mine's the one with the disclaimer not to use as parachute on the tag.
Firefox? They did 3.0, jumped to 3.5, sanity apparently hit them for a while, since they did 3.6, then jumped to 4.0, but suddenly lost it all by skipping to 5.0 in three months, 6.0 in about two, and 7.0 in just one more. If they release something in mid-November, it's going to be version 11...
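The extrapolation above, sketched out: assume the gap between major releases keeps halving after the observed 3-, 2-, 1-month cadence (the halving is my assumption for the joke's arithmetic, not Mozilla's plan):

```python
# Major version reached some months after 7.0, given ever-shrinking gaps.
def version_by(months_after_7, gaps=(0.5, 0.25, 0.125, 0.0625)):
    version, elapsed = 7, 0.0
    for gap in gaps:
        elapsed += gap
        if elapsed > months_after_7:
            break
        version += 1
    return version

print(version_by(1.5))  # 11 -- mid-November, as predicted above
```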
All those searches have to pay for themselves, you know
They don't make (much) money when you search for a search engine.
Furthermore, using specialized search engines implies that you are shopping, and context ads in those engines are much better at being effective (as opposed to context ads when you are searching for whatever) -- and context ads generate the most revenue -- precisely the revenue that Google is otherwise losing.
Quite frankly, the last post is an insult to intelligence. Which of the three companies would you prefer to own (assuming you were looking for longevity):
Company A, $100 bn revenue, $150 bn costs, $50 bn loss
Company B, $80 bn revenue, $60 bn costs, $20 bn profit
Company C, $50 bn revenue, $10 bn costs, $40 bn profit
Going by your logic, you would believe company A has the best outlook of the three above.
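The point above in a few lines: longevity tracks profit, not revenue (figures are the three hypothetical companies from the comment, in $bn):

```python
companies = {"A": {"revenue": 100, "costs": 150},
             "B": {"revenue": 80,  "costs": 60},
             "C": {"revenue": 50,  "costs": 10}}

# Rank by profit (revenue minus costs), highest first.
by_profit = sorted(companies,
                   key=lambda c: companies[c]["revenue"] - companies[c]["costs"],
                   reverse=True)
print(by_profit)  # ['C', 'B', 'A'] -- ranking by revenue alone gives the reverse
```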
Yes, because as we all know, increasing debt limits and doing nothing about the spending is the right way to go wrt budgets.
@E 2 : Ummm, no
The beta drivers were there earlier. Plus, the older drivers worked (sort of). And let's not forget that it was 6 weeks after the *announcement* was made. General availability wasn't until a few weeks after the release.
Well, how is *your* English?
"Ellison said in the Wall Street call today that Oracle has installed over 1,000 Exadata clusters (not racks, but distinct clusters) so far, and that it can triple the base to more than 3,000 machines in fiscal 2012."
In your own words:
What Oracle said is they have "installed" over 1000 Exadata servers. They made it clear it is not 1000 clusters.
To me, it's pretty clear Ellison said clusters. Care to re-read? BTW, you were replying to a post made at 9 p.m. and accused the guy of early drinking on Friday. You made your post at 9 a.m. on Saturday. What does that make you? Pot? Kettle? Black?
Installed vs. sold
Now, correct me if I'm wrong, but isn't this more a matter of how many Exadata clusters are installed compared to sold but not yet installed? I.e., the sales team could have closed the deal already but put it on a three-month backorder; then it will take some time to install, lab-test and put into production.
You seem to be taking the common definition of something being installed or sold, and not the one that's used by businesses.
Why doesn't Tilera compare their CPU to SPARC?
Interestingly, Tilera didn't seem to show whether there are any advantages in the merge-sort when compared to a SPARC T3 CPU. Since T3 is also made on 40 nm, the comparison would make sense in that regard, especially since both CPUs are RISC designs.
Back when Itanic was being actively backed by Intel and major software vendors, its notoriety as a triumph of marketing over CPU design -- for the sole purpose of bringing the general-purpose CPU market under an Intel monopoly -- was pointed out and rightly criticized by all major tech sites, el Reg included.
Then, one company after another stopped supporting it, including Microsoft and Red Hat, and Intel was ostensibly lukewarm towards continued development of Itanium, but all remained fine and well, and attacks on Itanic continued.
Now, when Oracle finally jumped the gun and announced termination of development for Itanic, everyone is suddenly rushing to Itanium's defense (and to bashing Oracle)? Come on! I know that the general attitude of most sites is that Oracle is a greater evil than even Microsoft, not to mention hp or Intel (which, bafflingly, is still presented in a good light despite its uncovered monopolistic practices), but it's gotten ridiculous at this point. Microsoft and Red Hat were never subjected to a fraction of the criticism that's being leveled at Oracle.
Itanium was never a good chip, plain and simple. Intel has shown an ambivalent attitude towards it in recent years, and I can hardly believe it will retain any edge over x86, much less a significant one. Coupled with dwindling market share, this was expected. But to defend Itanium all of a sudden? I'm baffled.
We really need an angelic/demonic Larry icon...
Let me clarify a few things
First of all, those customers did not migrate off of Siebel or Peoplesoft. The original assertion was that those customers migrated from Oracle support to SAP support, but did not change the software (which would be ridiculous otherwise and Oracle would have no case there).
If you are considering damages, it does matter how many customers have been lured away. However, the number of customers lured away is inconsequential (it could very well be zero) when it comes to determining whether SAP was guilty of IP theft or not.
Ellison did admit that the worst case scenario was averted, but you can't claim 300 customers (out of 300 thousand, give or take 299 thousand) is just a small bunch. The largest customers make up the bulk of the contract value.
He does make a lot of sense
The anti-Oracle tone I see all over the press would be ridiculous if it wasn't so damaging to Oracle. Sun was going down and nobody seemed to care what would happen to the IP if it went down completely. IBM certainly didn't care, but Oracle did. They bought out Sun with cold, hard-earned cash, and it was obvious then and is as obvious now they want to extract as much value out of Sun as possible.
The first such obviously negative publicity was over OpenSolaris's demise. That was fair game, though, especially as it appears OpenSolaris did not live up to its potential and most code donations came from within Sun, not from outside developers, save for small pieces.
Then, recently, it's about OpenOffice -- true, at least that was called out by Sun, but it still appears to have been done haphazardly and certainly without proper funding. I doubt that without corporate backing (especially monetary), LOo will get anywhere. They'll probably try to fly back under Oracle's wing before 2011 is over. I may be wrong, it depends on how much they will want to drive the point, but I'm fairly sure there will be a lot of stagnation in development in the meantime.
And now about Java -- what's wrong with the roadmap that Oracle laid out? Nothing, apparently, apart from the fact that Oracle was the one that laid it out and Oracle is against Google, which automatically makes Oracle evil and all their decisions null and void?
Is it really bad that Oracle tries to recover the money they spent on Sun?
He does make a lot of sense
1. When flash reaches high enough capacity for home use at low enough prices, the market will slowly abandon spinning drives. With fewer traditional drives sold, the loss of economies of scale will slowly hike the price of HDDs, closing the gap even further in a positive feedback loop. As consumer drives go up in price, so will enterprise drives. This does not affect tape, which was always niche compared to disk.
2. Bit density on tape drives still has ample room to grow. T10K cartridges have surface area of about 75,000 cm^2. Compare this to about 456 cm^2 maximum for 4-platter 3.5" disks (I'm assuming 3.5" platter diameter with 1" diameter hub). The bit density for T10KB-formatted tape is about the same as of a 6 GB disk. There's ample room for growth. Assuming four-platter 2 GB disk bit density, a typical (4x5x1") cartridge could hold over 150 TB of data.
3. T10KB has 240 MB/s native throughput, not 120 as in the article (that's the throughput of the original T10K). A 20 TB cartridge will store data at 20 times the density. Assuming there would be 144 tracks (compared to 36 on the T10KB), linear bit density is 5 times higher, so 1.2 GB/s throughput should be achievable. Assuming 100,000 slots means 10 connected SL8500 libraries with 64 drives each, that 1,380 TB/hour translates to almost precisely 600 MB/s per drive (the rounding error is insignificant).
4. As opposed to LTO, Storagetek drives maintain backward and forward compatibility, with the same cartridges usable on various generations of equipment (based on the formatting), regardless of technology or format changes in between. It can be expected that the T10K cartridge will be usable on T10KC or T10KD drives, depending on their underlying technology. Of course, Fowler may have meant 20 TB compressed capacity, which makes it perfectly viable -- 10 terabytes in 2015 seems almost like a breeze. Assuming a 2 TB T10KC is released before May 2011, and a 4-5 TB T10KD in 2013, a 10 TB T10KE is certainly possible in 2015. 20 terabytes native is significantly more involved and would possibly require Storagetek to break backwards compatibility.
5. At some point, it may be possible that flash becomes significantly cheaper (although it's doubtful that progress would be notably faster than Moore's observation suggests, though 3-bit MLC could allow flash to overtake Moore's, as could 3D cells suggested by some people), and tape storage will be on the way out, possibly replaced by switched SATA/SAS in a MAID (zero spin-up time could make it possible). This of course assumes that the high-density storage is indeed cheaper to make and that there will be people willing to pay for lower tier (slower, but higher capacity and/or significantly cheaper) SSD storage.
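The throughput arithmetic in point 3 can be double-checked; all the figures below (240 MB/s native, 5x linear density, ten SL8500s with 64 drives each, 1,380 TB/hour) are taken from the comment itself:

```python
t10kb_native_mb_s = 240
linear_density_gain = 5            # 20x capacity = 4x tracks * 5x linear density
per_drive_future = t10kb_native_mb_s * linear_density_gain
print(per_drive_future)            # 1200 MB/s, i.e. 1.2 GB/s

drives = 10 * 64                   # ten SL8500 libraries, 64 drives each
total_mb_s = 1_380 * 1_000_000 / 3600   # 1,380 TB/hour expressed in MB/s
print(round(total_mb_s / drives))  # 599 -- almost precisely 600 MB/s per drive
```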
That's just fantastic
I just love this double standard. On one hand, you are urged to be successful in what you do. On the other hand, if you're "too" successful, the likes of those watchdogs will want to punish you for trying.
This is ridiculous. Google built an empire of its own from scratch, pretty much without any competition. Now that their business model is proven and successful, freebooters want to just copy them and get rich in the process. But one after another fails, and then blames Google (rather than their own ineptitude and lack of originality and distinguishing features) for the failure.
Oh sir, you kill me!
> The delayed entry of Intel's Larrabee and the dead-ending of IBM's Cell
> (at least on blade servers) gives AMD's Firestream GPUs a better chance
> against Nvidia's technically impressive Fermi family of Tesla 20 GPUs.
Technically, they're not impressive -- they don't exist (fake cards don't count, and seven chips do not make volume production).
As they don't exist, you can't really bench them.
The 5870 was already benched by SiSoft to be 8.8 times faster in double-precision FP than the 260 GTX. Even assuming Fermi is 8 times faster than the 260 GTX, it is barely going to be on par with the 5870 (and a more realistic assumption is 4-5 times faster than the previous generation).
Given that Fermi is going to be a huge part, it is going to have power issues as well, likely drawing more than the current Tesla -- which already draws 10 times more than ATI's parts, and the 5870 is rather frugal. Needless to say, this isn't going to earn them any top spots in the Green 500.
You need error correction? Run two 5870 cards beside each other (or one 5970) and compare the results. It's still going to be cheaper than Fermi.
> The Fermi chips will be available as graphics cards in the first quarter
> of next year and will be ready as co-processors and complete server
> appliances from Nvidia in the second quarter.
Oh, really? With the slips they suffered for the last year they'll be glad if they are able to put *anything* on the market before they run out of assets. Nvidia has nothing to compete with ATI in the GPU market, Fermi is a huge die and is going to be too expensive to interest gamers if they can get two Radeons for the price of one GeForce (unless Nvidia decides to shoot themselves in the foot and sell below their margins).
> And they will likely get dominant market share, too, particularly among
> supercomputer customers who want to have error correction on the GPUs
> - a feature that AMD's Firestream GPUs currently lack.
Assuming that they can actually put anything on the market. While adding error correction is not a simple matter, I think AMD can do that within a reasonable time frame and with Nvidia lagging behind, it would be foolish to think AMD does not have anything on their roadmaps.
Pretty much the same as happened in the 80s
Back when it was about PCs, in the 80s and 90s, the courts blocked the PC clone industry and granted IBM the exclusive right to manufacture PCs. Therefore, I am writing this comment on a genuine IBM PC...
What will AMD do
@Matt -- don't worry. People use home PCs for four things:
1. Basic office work
2. Web browsing
3. Gaming
4. Limited content creation
Of course, number one hardly needs more than one core. Number two can benefit from two of them (browser+flash), number 3 will benefit from more cores (more and more games are springing up to take advantage of multiple cores). Of course, 4 is going to take advantage of all the cores your PC can muster.
And multiple cores make for a future-proof investment -- most applications are written to take advantage of multithreading and it is the current paradigm. Even if a dual-core CPU is fine today, it may not be enough in a month or two. If the current HD movies can tax one core if you don't have hardware acceleration, future movies might tax four of them. Sucks to be you if you don't have multiple cores then.
And multiple cores help out a lot in normal usage, too. With more of them, you're able to browse, listen to music, and run Flash, and nothing in the background (including e.g. virus scanners) will disrupt your work.
Six might be overkill at the moment, but people will eventually find uses for them. And individual cores can idle quite nicely; the Windows 7 scheduler is supposedly aware of idling cores and will avoid waking a parked core for a workload when that would hurt efficiency.
@h4rm0ny -- AMD wouldn't shoot themselves in the foot like that. They need a well-rounded lineup, and new parts should be forthcoming.
@Gary F -- I read online that AMD is well able to create their own i5/i7 equivalent, the problem being price given AMD's current market share. Magny-Cours and Sao Paulo are supposed to close the gap, and if Intel screws up -- 1) by focusing too much on the integrated GPU in Sandy Bridge, or 2) if 32 nm ends up too expensive to manufacture and as a result offers no tangible benefit at its price point* -- Bulldozer might very well end up faster than future chips from Intel.
*) And it's not far-fetched, too -- analysts point to the fact that 32 nm might be too expensive to manufacture, especially at first.
So, where's the *WEST* Antarctic?
I thought it was one big continent, centered on the South Pole, with all of its coastline facing NORTH.
While people can traverse Antarctica east to west, you won't make it to the shore by moving in those directions.
Thin clients are nice
And they work. They work especially well at discouraging users from browsing YouTube.
Of course, thin clients also do wonders for the bills (power, but also air conditioning), and the savings scale very nicely with the number of users.
As for thick clients, you can always anticipate problems -- hence monitoring software. Most system vendors provide free tools that log system events and forward them to a central repository, whether as SNMP traps or, more commonly, e-mail notifications.
So if any component starts operating only marginally, the support people get a heads-up on the problem.
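The "heads-up" idea boils down to threshold checks on health readings. Here's a minimal Python sketch of that logic -- the names (check_health, THRESHOLDS) and the specific metrics are my own invention for illustration, not any real vendor tool's API:

```python
# Hypothetical health check: compare readings against thresholds and
# emit alert strings for any component operating only marginally.
THRESHOLDS = {"fan_rpm_min": 1500, "disk_realloc_max": 10}

def check_health(readings):
    """Return alert strings for any reading outside its threshold."""
    alerts = []
    if readings.get("fan_rpm", 10_000) < THRESHOLDS["fan_rpm_min"]:
        alerts.append("ALERT: fan speed marginal (%d rpm)" % readings["fan_rpm"])
    if readings.get("disk_realloc", 0) > THRESHOLDS["disk_realloc_max"]:
        alerts.append("ALERT: reallocated sectors rising (%d)" % readings["disk_realloc"])
    return alerts

# In practice, strings like these would be forwarded as SNMP traps or
# e-mailed to the central repository mentioned above.
print(check_health({"fan_rpm": 1200, "disk_realloc": 3}))
```

The forwarding transport (SNMP vs. e-mail) is just plumbing on top of checks like these.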
Now, it's a wholly different problem to persuade management (or the beancounters) to actually okay the repair costs, so support personnel can either risk their budget and run pre-emptive repairs, or wait until the part breaks -- at least then they'll magically know which part needs replacing.
Paris, because she knows the difference between thick and thin.
Databases and IO
Yes, databases are IO-intensive, and that's where Sparcs shine. I know I simplified (maybe oversimplified) the issue, but it boils down to the same thing. Database queries are easily threadable, and Sparcs can switch away from a stalled thread (regardless of what the thread is waiting for) and execute other threads in the meantime.
The stalled thread does not leave the CPU idle in the traditional sense: other threads move forward, and the core switches back to the stalled thread once its data is available.
What I meant by a rarely accessed dataset is that there won't be any threads able to move forward while the others are stalling, so every CPU ends up depending entirely on the IO.
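The switch-on-stall behaviour can be shown with a toy round-robin simulation -- this is a simplified model I made up to illustrate the idea, not real SPARC behaviour: each cycle, the core issues from the first hardware thread that isn't waiting on IO, so a stall never leaves the core idle as long as some other thread has work.

```python
# Toy model of a core that skips stalled hardware threads. Each cycle it
# runs the first thread whose IO has completed; stalled threads are skipped.
def run_core(threads, cycles):
    """threads: list of dicts with 'name' and 'stalled_until' (cycle number).
    Returns the order in which threads got to execute."""
    order = []
    for cycle in range(cycles):
        for t in threads:
            if cycle >= t["stalled_until"]:  # this thread's data has arrived
                order.append(t["name"])
                break                        # one thread issues per cycle
    return order

threads = [
    {"name": "A", "stalled_until": 3},  # A waits on IO for 3 cycles
    {"name": "B", "stalled_until": 0},  # B is ready immediately
]
print(run_core(threads, 5))
```

B gets the core while A is stalled, and A resumes as soon as its data arrives -- exactly the "no idle on stall" point above. A rarely accessed dataset is the degenerate case where *every* thread is stalled at once and the core has nothing left to run.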
Now, I won't go into SPECint or SPECfp -- I don't know the figures for pretty much any CPU on the market, so I've no idea what either of us could prove with them.
@Ian Michael Gumby
> Just because a core has 8 threads per core, it doesn't mean that
> the performance of Oracle on the chip will increase significantly or
> that it can be tuned to take advantage of the extra thread.
> The current round of database designs are not parallel enough to
> take advantage of these extra threads. While Sun wants to say that
> a core w 8 threads is really like 8 virtual cores or 4 virtual cores, that
> doesn't translate to 8 times or 4 times the performance boost over
> a core and a single thread/ double thread.
Ummm, actually it does. Databases are one of the few types of applications that scale almost linearly with the number of threads: each query can be (and usually is) run as a separate, independent thread.
Databases are also memory- and storage-dependent. As database access patterns are (usually) random, there is no way to avoid heavy memory use, which makes efficient use of the available memory bandwidth that much more important.
Sparcs really shine there.
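The query-per-thread point can be demonstrated with a small sketch -- using SQLite rather than Oracle, purely because it's in the Python standard library; the table and queries are made up: each independent query runs in its own worker thread with its own connection, so nothing stops them being scheduled side by side.

```python
# Sketch of query-level parallelism: one thread (and one connection) per
# independent query against the same database file.
import os
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE orders (id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 10) for i in range(100)])
conn.commit()
conn.close()

def run_query(threshold):
    """One independent query = one thread, mirroring a DB worker per session."""
    with sqlite3.connect(db_path) as c:
        return c.execute("SELECT COUNT(*) FROM orders WHERE amount > ?",
                         (threshold,)).fetchone()[0]

# Three unrelated queries dispatched concurrently; a CPU with many hardware
# threads can make progress on others whenever one stalls on IO.
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(run_query, [0, 500, 900]))
print(counts)
```

A real RDBMS does exactly this at scale, with one worker per session or query, which is why the thread count of the chip matters so much.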
> There would have to be a major overhaul of Oracle to really scale
> and take advantage of these cpu advantages. Until more of the major
> chip vendors move to a similar architecture, there is little incentive for
> a major RDBMS house to make the effort to change the infrastructure
> to take advantage of these chip advances.
Maybe, no and no.
Maybe an overhaul of Oracle is required.
No, other chip vendors will not pick up Sparc -- they would need to divert resources from their own designs -- nor is it actually necessary.
And no, Oracle will likely own Sun soon, which gives them every incentive to make any and all necessary improvements or enhancements themselves.
> In short, you may be better off purchasing a cheaper cpu and
> bring down the cost per transaction, than spending $$$ for
> the additional horsepower you can't use.
Maybe, but only for small, perhaps medium, databases. Sun's T-series iron is not too expensive compared to the competition: for what they are worth, benchmark results show it's vastly cheaper than POWER iron and more or less on par with x64 in bang per buck, and its running costs -- especially power and cooling -- are much lower than those of comparably priced systems. Add lower licensing costs on top, and it all translates to a much lower TCO for Sparcs, so Oracle won't really lose anything on that.
> Maybe this is why they're cutting their prices?
They are cutting the prices to be more competitive. Using Sparcs for databases was, and still is, overlooked by most datacenter owners, even though adoption has been slowly picking up since the T2+ was introduced.
Debian does hybrid suspend/hibernate
On Debian 5 (not sure about other distros, but Mandriva 2008 and earlier did not have this), there is s2both, which does what hibernation does (i.e. saves state to disk) but then suspends to RAM instead of powering off.
The downside is that the system takes its sweet time going down (as long as a hibernation would). The flipside is that it comes back up as fast as from suspend-to-RAM, and if you lose power, the system doesn't do a full boot but returns from hibernation.
I have to say, that is the best of both worlds, isn't it?