44 posts • joined Monday 9th October 2006 09:16 GMT
Re: Will they want their own super-duper computer?
You will find Edinburgh is a world leading supercomputer location.
I'm still not in favour of independence though.
End of the law but not the end of the line.
Yes, the exponential increase in the cost of fabs means that Moore's Law is close to the end, if not already ended. At some point we will be able to build smaller transistors but there just won't be any point.
We will have to get used to a minimum cost per transistor just as we have got used to a maximum practical clock speed. However there are plenty of worthwhile ways of improving computers to explore other than just blindly throwing more transistors at the problem. None of these are going to give us decades of exponential improvement but they are worth pursuing. The good news is that once the transistor process (with its huge fab costs) stops taking centre stage then it becomes possible for smaller companies to innovate and compete.
The GPGPU market is an example of this. Floating-point performance is increased over conventional multi-core by using smaller compute units and devoting a greater fraction of the transistors to floating-point units.
Chip stacking won't reduce the cost per transistor. Each layer needs to be manufactured and you may ruin some good layers by bonding them to flawed ones. However it may reduce energy consumption and drastically improve the communications between different components.
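To put rough (made-up) numbers on that bonding risk: if each layer independently has yield y, a naive n-layer stack only works when every layer in it is good, so the stack yield falls as y^n unless layers can be tested before bonding.

```python
# Illustrative yield model for chip stacking: a stack works only if
# every layer in it is good (no known-good-die testing assumed).
# The 90% figure below is invented for illustration.
def stack_yield(layer_yield: float, layers: int) -> float:
    return layer_yield ** layers

# With 90% per-layer yield, a 4-layer stack is down to about 66%.
print(round(stack_yield(0.9, 4), 3))
```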
The future is going to be interesting.
DDR3, DDR4 or something else?
The external Centaur chip opens up other possibilities as well. Such as the Hybrid Memory Cube.
At the high end, like POWER8, it's a smart move. The basic concept has been proposed before (e.g. Rambus) but the cost of the additional components has always been a problem at the low end.
With much of the mass market shifting down to low power portable devices the DIMM memory market is going to suffer so I think there is room for innovation at the top. IBM are obviously not committing to anything other than DDR3 but they have kept their options open.
Re: ARM is not Intel is not ARM
I recognise your point when it comes to the traditional processor market. However when it comes to the mobile and high power efficiency markets then SOC becomes very important and I believe that ARM will continue to have a real advantage there unless Intel make a big move to open up their fabs.
An Intel closed-design SOC, no matter how good it is, will define the capabilities of a device. All that the downstream manufacturers will be adding is a case, so margins will be very slim. With ARM a company can design their own SOC, built out of IP blocks from multiple sources, and have a chance to differentiate themselves from the competition and charge a higher margin.
If Intel SOCs turn out significantly cheaper than sticking with ARM, manufacturers will have to go Intel (I'm not sure that is going to happen; smaller processes are getting VERY expensive), otherwise I think they will be reluctant to move.
DRM is key
I doubt a prior art argument will hold up.
OK, phone-to-phone gifting is not new, but this is phone-to-phone gifting of heavily DRM'ed content from an online store without letting the store lose control of the content, and it's difficult to prior-art (verb?) Apple in that area.
Of course that means that gifting non-DRM content by NFC would not be covered, as it does not involve downloading the content from a content store.
Somebody should quickly patent exchanging Dropbox download links by NFC. Heck, you could probably even get it to work without an immediate network connection by generating a unique transaction identifier, having the giver upload the data when they next get a network connection, and having the receiver poll until the data is available.
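As a sketch of that idea (all names here are invented for illustration, and a plain dict stands in for the online store; this is not a real NFC or Dropbox API):

```python
import uuid

# Hypothetical sketch of the offline gifting handshake: the two phones
# exchange only a transaction ID at tap time; the content moves later,
# via a shared store, once each side has a network connection.
store = {}

def nfc_handshake():
    """Run at tap time, no network needed: both sides keep the ID."""
    return str(uuid.uuid4())

def giver_upload(txn_id, content):
    """Called by the giver when it next gets a network connection."""
    store[txn_id] = content

def receiver_poll(txn_id):
    """Called repeatedly by the receiver; None until the upload lands."""
    return store.get(txn_id)

txn = nfc_handshake()
assert receiver_poll(txn) is None        # upload hasn't happened yet
giver_upload(txn, b"song bytes")         # giver comes online
assert receiver_poll(txn) == b"song bytes"
print("transfer complete")
```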
If you look at the original Moore's Law paper, the original observation was that the COST per transistor was going down at a geometric rate. Because costs were, very roughly, per wafer, reduced transistor size meant comparable devices cost less in a new, smaller process.
However new fabs and processes are becoming increasingly more expensive than previous generations, reducing the cost savings. At some point the wheels are going to come off the cart, and it will probably happen due to increased cost before physics limits. Once the increased cost per wafer of a smaller feature size matches the increased number of transistors per wafer, it is no longer cost-effective to shrink the process.
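A toy model of that break-even point, with made-up figures: a node shrink multiplies transistors-per-wafer by some density gain but also multiplies wafer cost; the shrink only pays off while the density gain exceeds the cost gain.

```python
# Toy break-even model with invented numbers: cost per transistor is
# just wafer cost divided by transistors per wafer.
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

old = cost_per_transistor(5000.0, 1e9)        # old node
healthy = cost_per_transistor(6000.0, 2e9)    # 2x density, 1.2x wafer cost
stalled = cost_per_transistor(10000.0, 2e9)   # 2x density, 2x wafer cost

assert healthy < old    # Moore's Law still delivering
assert stalled == old   # cost gain matches density gain: no point shrinking
```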
Re: Show of hands, people - who had one?
First computer I ever had. Unfortunately it was one of the early models with an EPROM (with the window blacked out) instead of a proper ROM. I think this is why it eventually gave up the ghost on me.
Intel to manufacture ARM?
Does this include the Altera processors with embedded ARM cores?
1970s-vintage reinforced concrete computer centres have the same problem. On the plus side they are probably bomb-proof.
I would not rule out ARM HPC. ARM should be at least as good as the PowerPC cores in BlueGene.
You would need to add a much better FPU than NEON, and might need to tweak the memory interface, but it would still be an ARM. Lots of people are thinking about architectures with lots of GPU-like vector units driven by a very small lightweight CPU core. Unless you are Intel, IBM or AMD, the only sensible choice for the lightweight core is ARM.
Windows <-> Android
Provided MS don't try to ape the Apple approach too closely, developers might find it possible to share common code between Windows 8 and Android.
There seem to be tools out there to do cross-platform app development; it's just that Apple really don't like them to be used for the iPhone.
It's the Apple no-emulators rule that makes it necessary to develop every app twice. A three-way ecosystem does not necessarily require three-way app development.
If the website logs people out when the IP address presenting the session cookie changes, then it's going to be a lot harder for the attacker to do anything useful with the cookie even if they can steal it.
You might get spurious logouts if your device reconnects to a network with dynamic IP addresses, but that is probably acceptable in cases where you care about security.
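A minimal sketch of the IP-binding idea, assuming a simple server-side session table (the function names are illustrative, not any particular framework's API):

```python
import secrets

sessions = {}  # cookie value -> client IP recorded at login

def login(client_ip):
    """Issue a fresh session cookie bound to the client's address."""
    cookie = secrets.token_hex(16)
    sessions[cookie] = client_ip
    return cookie

def check_request(cookie, client_ip):
    """Accept the cookie only from the address it was issued to;
    on a mismatch, kill the session so a fresh login is forced."""
    if sessions.get(cookie) != client_ip:
        sessions.pop(cookie, None)
        return False
    return True

c = login("203.0.113.7")
assert check_request(c, "203.0.113.7")       # same address: OK
assert not check_request(c, "198.51.100.9")  # stolen cookie: rejected
assert not check_request(c, "203.0.113.7")   # session gone, must re-login
```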
Re: too easily shocked
Yes, the iPhone was and is a huge success and caused a big change in the market, but not because of patentable design or technical innovations.
This was a marketing success.
Apple took a hugely successful and fashionable product with an existing fan base (the iPod) and added the ability to make phone calls.
Lots of existing phones already played music but it turns out that people would rather buy a really good music player with a crap phone function than a good phone with a crap music function. Sure it had to look nice but I doubt the success was just because it had rounded corners.
Good and insightful reading of the market that everyone else has tried to copy but not patentable.
Re: Intel lost out I suspect
I'm not sure. SeaMicro have a working product, but Intel sell chips, and it would take a while for their customers to build the new chips into their roadmaps.
It's always going to take a product cycle before something like this makes an impact, and I think the SeaMicro lead will be eroded by then.
Makes sense in hindsight.
The cloud, virtualisation and HPC markets are all about large numbers of tightly integrated cores.
From Intel's point of view it makes a lot of sense to integrate the network onto the CPU (either directly or via a chip stack), and with AMD going this route they can't afford not to.
If the CPU manufacturers start integrating networks then the independent HPC network vendors are doomed in the long term.
Cray needs Intel, so it's a much better deal to sell them the technology than to try to compete with them.
EBL the 3D printer of chip manufacture
"E-beam lithography itself is pretty straightforward," Liebmann said. "We all sort of qualitatively understand how this all works. And we all also understand that the biggest problem is throughput. What all of these systems are working on is massively parallelizing the system to get to a point where you can begin to make this profitable."
EBL may not have the throughput for profitable mass production but as I understand it you don't have to make masks either so the cost should be similar for small volumes and prototyping as it is for large volumes.
Just as 3D printers (much more expensive than mass produced injection mouldings) open up new possibilities EBL might already be interesting for the low volume specialist designs that currently use FPGAs. Maybe what we will see is the current small number of dominant chip designs built in huge global fabs being replaced by a wide range of custom designs printed to order. This might make things profitable even if there is a big hiccup in the expected price reduction with each processor generation.
Yes a rant
The problem with Java, according to the evidence presented, seems to be poor auto-updating and a large install base of old versions, so that old and vulnerable versions of Java are frequently available for exploit.
This is a real problem for the internet as a whole but not an indication that up-to-date versions are inherently less secure than up-to-date versions of other plug-ins.
Only a stop-gap
There are two fundamental problems with conventional 2D DRAM.
1) The number of available pins on the device is proportional to the chip perimeter, so the available connections between memory and processor grow much more slowly than Moore's Law.
2) DRAM cells are built with specialised high-density processes, so you can't add much in the way of additional logic on the same die (while keeping them cheap to manufacture). DRAM chips therefore connect via simple signalling, where the energy cost is proportional to the capacitance of the wires and the frequency.
By stacking chips and using TSVs you get round both problems. The number of TSVs that can be supported is proportional to the chip area and the connections have extremely low capacitance.
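The scaling argument in rough numbers (the pad and TSV densities below are invented for illustration): doubling the edge of a square die doubles the perimeter but quadruples the area.

```python
# Pads on the perimeter scale with the die edge; TSV sites scale with
# the die area.  Densities are made-up illustrative figures.
def perimeter_pads(edge_mm, pads_per_mm=20):
    return 4 * edge_mm * pads_per_mm

def area_tsvs(edge_mm, tsvs_per_mm2=100):
    return edge_mm ** 2 * tsvs_per_mm2

print(perimeter_pads(10), area_tsvs(10))   # 10 mm die
print(perimeter_pads(20), area_tsvs(20))   # double the edge: pads x2, TSVs x4
```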
I'm assuming the plan is to put efficient RAMBUS style high speed serial interfaces in the logic layer to connect to the CPU.
This keeps memory and CPU as separate devices and allows standard memory devices to be used with different types of CPU. However, though better than what we have at the moment, high-speed serial interfaces still take power.
The right place for TSV stacked memory is right on top of the processor with external memory devices only used as a top-up for higher than normal memory configurations.
Remember it well
I still have an almost complete collection of every processor board Meiko ever made (complete with the green wire field modifications on the back).
One little known feature of the transputer was it supported multiple execution threads (including the thread scheduling) in hardware. Modern chips also support multiple threads but the OS is much more involved in the scheduling of them.
The transputer was not a multi core processor so only one thread actually ran at any one time but the threads were there so you could specify your problem as lots of tiny micro-tasks that automatically switched to a runnable state once their input data was available.
The interesting thing is that many of the programming models being proposed for future exa-scale systems are returning to this kind of thinking so we may see a return of designs like this.
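A rough sketch of that micro-task model, with plain Python standing in for what the transputer did in hardware: tasks block on a channel and become runnable the moment their input data arrives.

```python
ready = []   # runnable micro-tasks (the transputer kept this queue in hardware)

class Channel:
    """A one-shot channel: the task waiting on it runs when data arrives."""
    def __init__(self):
        self.waiter = None
    def recv(self, task):
        self.waiter = task                         # task blocks until send()
    def send(self, value):
        ready.append(lambda: self.waiter(value))   # waiter becomes runnable

results = []
a, b = Channel(), Channel()
a.recv(lambda x: b.send(x * 2))           # task 1: doubles input, feeds task 2
b.recv(lambda x: results.append(x + 1))   # task 2: adds one, records result

a.send(20)          # input data arrives
while ready:        # the scheduler loop: run whatever has become runnable
    ready.pop(0)()
print(results)      # [41]
```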
Very hard to do this right
I looked into something similar myself a number of years ago. It is very, very hard to do this in an unbiased way.
My results were that the actual compilers (I'm including the JVM JIT in this) tend to be equally good at optimising code. The difference between languages is that their different features tend to cause the programmer to make different design choices (some choices are just not available in some languages), and these affect performance.
If you can identify the performance-critical sections of the code and encapsulate them (and their data) then it is possible to rewrite them for performance; however, these sections tend to look very different from "normal" code in that language (e.g. Java code that uses arrays and looks more like C than Java).
I'd be willing to bet that the winning C++ version was making heavy use of templates (template metaprogramming). This is really a language all of its own and can give very good performance, but (IMHO) damages the code in terms of maintainability and intelligibility.
Long game going on here
Even if it does not make sense in the short term, it is not surprising that Apple is looking seriously at ARM. Not because it is obviously better for the users, but because it is better for Apple.
Apple is still primarily a hardware company, but the ongoing trend in hardware is for more and more of the important hardware to be combined into a single device package. Any company that sticks with Intel is going to end up putting a cosmetic case around the Intel package, just like all their competitors. This makes it hard to be different enough to charge much of a premium for your hardware.
Things are much easier if you license ARM and build your own processors. Even if most of the hard stuff comes from ARM, you still have plenty of input into the design, and you can take that opportunity to make sure your software won't run on third-party hardware.
Now I know that Apple are the world leaders in getting people to pay a premium for hardware that is not very dissimilar to everyone else's, but there must always be a risk that they might get landed with a legal ruling unambiguously legalising the hackintosh. The more markets they can move to the iPhone/iPod model the better, from their point of view.
Going to change the game
This might be the only easy way that the industry can continue to deliver the Moore's Law performance increase that the market expects.
People have started to notice that adding cores without doing something about the memory system is only giving a modest improvement in many cases.
I think the memory is going to HAVE to be stacked directly on top of the chip. We have already reached the point where there are not enough conventional chip pins to feed the cores.
So the big processor manufacturers will be producing single devices that contain all the major components of a server: CPU, memory, first-level network, and flash storage. Turning these devices into servers is going to become a very low-margin operation with very little differentiation between box builders. Good for anyone making processors, bad for people further down the food chain.
How many times do we have to listen to the argument about getting a free ride from the Apple ecosystem?
The vendors don't want to distribute their content through the App Store. All they WANT is to enable their customers to use devices (which the customers paid for) to access the services the vendors supply. They would be quite happy to serve apps and content directly from their own websites, never involving Apple at all beyond making the hardware, but Apple insists anything involving an iPhone has to take part in their ecosystem. Yes, Apple make good hardware, but that has already been paid for!
This is not a case of fair payment for an essential service, but of a service that Apple has artificially engineered to be essential so that everyone has to pay for it.
Why does it have to be apple doing this?
The article makes a number of points about how a single-stop online software shop has some advantages over having to track down the company website.
There is absolutely no reason this store has to be run by the device manufacturer!
As a consumer I happen to have a preference to buy music downloads from Amazon rather than iTunes.
If I want an environment that automates the software install process and allows me to use my bought software on multiple computers, this can also be done by third-party providers (look at Steam for example).
If users want app-store-style delivery of computer software then we should expect multiple players in that market. Apple has a bit of a head start due to the phone segment, but unless they try to lock down Macs like iPhones they are going to have to be a little less dictatorial.
May still get funded
The thing about zany military tech is that it does not have to work all that well, or be deployed in large numbers, to be worth having.
OK, there are lots of ways of countering a super laser, but it's going to cost the other side a lot of money to develop these, especially when they are not sure how well your super laser works.
The more laser shielding you can persuade the other side to put on their missiles and planes the less effective they will be.
Yes, subs are probably immune, and it's probably easier to harden surface ships than planes.
Thick armour plate would take a while to melt and might help against rail-guns too. No wonder the navy seem keen on this.
Let's hope it has fewer security holes than the real thing.
Of course it is less mind-bogglingly silly than emulating the entire PC in Java.
But almost nothing is.
It will never work
I can see why this looks attractive to Apple: lots of new features that you can only get if you have an iPhone. But I fail to see the advantage to retailers and product manufacturers, who would have to carry the recurrent costs.
* Lots of expensive new features that only some of your customers have the right phone to use.
* Advertising etc. only accessible to members of the public who are already holding one of your products in their hands (not a key target demographic).
Existing phone apps that scan/read barcodes and chips that exist for other purposes are fine because they add value to the phone at minimal extra cost. Telling the retail trade to choose a technology because it would let phones do something funky is just arrogant.
Business Processes cannot be patented?
Well I guess you don't want the person paying the lawyer being sued for copying the other guy.
Now all we need is a ruling that software counts as a business process and we can all get on with our jobs.
Is x86 important?
I'm not sure x86 compatibility is important. HPC codes are not written directly in x86 assembler and, to a very large extent, are re-compiled for each system rather than being distributed as binaries.
Historically the HPC market has been happy to migrate to whichever architecture provides the most value for money. The current dominance of x86 chips in HPC is due more to the price/performance benefits driven by the mass market in x86 chips than to any requirement for x86 compatibility from the HPC market. This would argue that graphics chips have the advantage due to selling in two markets (unless of course Intel subsidise their HPC-only chip).
Programmability is an issue, but the PGI compiler already targets NVIDIA GPUs to accelerate normal Fortran codes. There are problems with targeting attached co-processors with their own memory, but these would also be experienced if the attached processors were x86 cores.
Of course x86 compatibility does mean you might be able to build a large MPP system out of ONLY Larrabee chips which might save some money.
Too many clouds
The problem with the cloud buzzword is that it means too many things to too many people.
Google App Engine is a mechanism for doing business with Google. Of course it is locked to one vendor; there is nothing intrinsically wrong with that, and you get access to lots of nice underlying technology.
But there are lots of cloudy ideas out there that don't map onto App Engine (people are talking about using clouds for large number crunching jobs for example) and the ability to use multiple vendors is also important to some people.
This does not mean that App Engine is bad or inferior just not appropriate to these cases.
The problem is that people write stupid surveys about "cloud" as if it meant a single thing. Different companies have different cloud offerings targeting different use cases. Google provide an offering that is specific to their infrastructure, so inevitably it has some degree of lock-in.
And 26 percent of respondents did not realise this or voted for their favourite company regardless of the question.
Half way house
Big installations with a chilled water system and a cold climate have another option. When the weather is cold enough we switch off the chillers and pump the water through a free-cool radiator system on the roof.
Takes a lot of additional plumbing and you still need to power the pumps but you keep control over dust and humidity.
Virgin got it right
VM actually got things right with the iPlayer: they upload the content into the cable TV on-demand infrastructure.
This means that we can watch catch-up TV programmes on the TV (amazing concept) and use the same interface for catch-up TV from non-BBC channels as well.
Even better, I can grab the laptop and use the broadband while my kids are watching the iPlayer programmes.
No OS can be proof against user stupidity. No matter how secure it is, if the owner of the machine is stupid enough to try to install software of unknown origin then the machine is going to get infected.
It's therefore no wonder that an OS that's shipped by default to home users has a higher infection rate than an OS that's typically only used by IT professionals.
I'm no great fan of Vista but this seems a bit bogus to me.
I'm with the pro water cooling crowd.
Water cooling might be a bind if you are thinking in terms of a small server cupboard cooled by through the wall air-con units, but once you have a whole building full of the beasts then your air-con system has to be driven by a massive chilled water plant anyway.
In this case getting the chilled water closer to the source of the heat has to reduce the overall power use.
The only time I remember a big puddle of water under the machine-room tiles, it was not a failure of the chilled water system: the dehumidifier drain pipe got blocked, and it was the air-con system that caused the flood.
Opt out cookie
The exact form of the opt out cookie is absolutely key here.
If all opted-out users have exactly the SAME cookie then it can't be used to track your usage. If it contains a serial number then somebody will have to write a browser extension to generate a random opt-out cookie for every new web page.
That way they will have to use your ip address to track you and the ISP has always been able to do that with or without phorm.
For that matter, how about a randomly generated opt-in cookie? That might give them a few headaches for a while.
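The randomised opt-out cookie trick might look something like this (the cookie format here is invented for illustration, not Phorm's actual format):

```python
import secrets

# If the opt-out cookie carries a serial number, defeat tracking by
# presenting a freshly randomised value on every request, so no two
# requests look like the same user.
def random_opt_out_cookie():
    return f"OPTOUT={secrets.token_hex(8)}"

a, b = random_opt_out_cookie(), random_opt_out_cookie()
assert a != b                    # two requests, two apparent identities
assert a.startswith("OPTOUT=")   # but each still reads as an opt-out
```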
The whole thing would be much much more stable, scalable and easier to accept if it ignored all streams without an opt-in cookie.
Why does this system have to have everyone opted in by default, when they could target Webwise ads only at people presenting an opt-in cookie?!
What does google think of this
Call me cynical, but I bet what they are going to do is base their choice of ads to serve you entirely on your interaction with search sites.
It's going to be a lot easier to identify somebody's interests from their search terms and click-throughs than from anything else.
You could see this as an attempt to undermine Google by getting access to their raw data.
There are two parties to every communication. They may be able to claim that the user has opted in, but the website sure as hell has not. If I was Google I would sue them to make sure they don't intercept anything from my site.
Of course, they probably want all the other data as well, not to target ads but to sell on. Even anonymised, that data is valuable to somebody.
Probably the right solution is to force the encryption keys to be unloaded at every point which would also cause you to have to re-enter your login password later.
Screen locks, hibernate, sleep, switch user etc. should all dump/overwrite crypto keys immediately. If you have to re-authenticate your login, it must be because there is the possibility that somebody else has had access to the machine in between times, so you should expect to have to re-authenticate your crypto at the same points.
What we really need is a good OS hook that allows any process to be notified of this kind of event and dump any keys that it's holding. There are lots of different processes that cache security-sensitive data in memory: ssh-agent, browsers holding unencrypted passwords, etc.
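A sketch of what such a hook might look like (no portable OS hook like this actually exists; the event registry and names here are invented):

```python
# Processes holding secrets register a callback; the OS fires it on
# lock/sleep/switch-user, and each process wipes its cached key material.
_listeners = []

def on_reauth_event(callback):
    _listeners.append(callback)

def fire_reauth_event():
    """OS side: called on screen lock, hibernate, sleep, switch user."""
    for cb in _listeners:
        cb()

class KeyCache:
    """Stands in for something like ssh-agent or a browser password store."""
    def __init__(self, key):
        self.key = bytearray(key)
        on_reauth_event(self.wipe)
    def wipe(self):
        for i in range(len(self.key)):   # overwrite, don't just drop the reference
            self.key[i] = 0

cache = KeyCache(b"super-secret-key")
fire_reauth_event()                      # user locks the screen
assert all(b == 0 for b in cache.key)    # key material is gone
```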
I recently switched to NetBeans 6 after a couple of years on Eclipse.
The best reason for switching is that NB uses the Sun Java compiler.
The Eclipse compiler and the Sun one have some disagreements about what is valid code when it comes to some uses of generics. I just got sick of having code that compiled in Eclipse failing when built with Ant.
What about the memory
All the really big HPC systems on the Top500 have only a couple of cores per node; beyond that, memory bandwidth saturates. For HPC-type applications most of the parallelism comes from large node counts. Unfortunately, using multiple nodes is much harder than multi-threading.
In my opinion very large core counts will only work for niche applications unless we start to see some innovation in memory system design, but the memory manufacturers seem only to be interested in making larger chips of the same old basic types rather than investing in significantly new technologies. Whatever happened to Rambus?
old news or not ?
To those of us in the HPC community this seems like very old news. Every major supercomputer for the last 10 years has needed parallel programming.
However in this arena we have long since hit the point where codes are limited by the memory system more than the instruction rate. For this reason most really big systems are distributed memory rather than thread based. If you think thread programming is hard you should try distributing a problem across multiple memory systems.
Intel's move to massive core counts could have one of two outcomes.
1) large scale scalable shared memory systems become commodity and the HPC arena becomes much easier.
2) people discover that having lots of cores attached to a rubbish memory system goes at the same speed as a couple of cores no matter what you do.
Guess which one I believe :-(
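Outcome 2) can be put in toy numbers: once the shared memory system saturates, extra cores add nothing (all figures below are made up for illustration).

```python
# Each core can do `per_core` units of work per second, but the shared
# memory system caps the total at `mem_cap`.  Past the crossover point,
# adding cores does not add throughput.
def throughput(cores, per_core=10.0, mem_cap=40.0):
    return min(cores * per_core, mem_cap)

assert throughput(2) == 20.0    # compute-bound: cores still help
assert throughput(4) == 40.0    # the crossover point
assert throughput(16) == 40.0   # 16 cores, same speed as 4
```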
Part of the difference is just that Java has colonised the SQL/XML space so effectively that new projects that require this naturally gravitate to Java. I don't think that the performance difference between C++ and Java is really significant enough to motivate the switch to C++ by itself.
I've done quite a few performance comparisons between the two languages, and with a good JIT I find the variation in performance due to different programming styles much greater than any inherent speed difference between the languages.
I also feel that, because the core Java syntax is much less rich than C++ (not that I mind), there is often only one way to approach a problem in Java where C++ gives you a choice of four or five. In a perverse way this makes code reuse more likely.