* Posts by bazza

1950 posts • joined 23 Apr 2008

Heat sink breakthrough threatens ventblockers

bazza
Silver badge
Happy

@AC

Indeed, but that powers only the hard disk. I was thinking of the whole machine, or at least enough of it to close a few files properly. A VME chassis has an ACFAIL line, which can be used in embedded applications to do some vital stuff in the dying microseconds of the PSU's capacitor charge; quite a useful notification in some circumstances!

I thought that head parking was achieved through purely mechanical means, in that the forces exerted on the head (via the air cushion) by the spinning disk have a tendency to push the head arm off the disk. But there's clearly enough energy in the disk to do it electronically too.

Just a thought - that's not something that SSDs can really do, is it, unless they have a decent amount of capacitance somewhere. So is a power cut a slightly greater problem for an SSD than for a HDD? Whatever volume checking an OS performs after a power loss, it'd be much nicer to find that all the sectors had been correctly written.

0
0
bazza
Silver badge
Pint

Like all really good ideas...

...it's one where everyone will say "I could have thought of that".

The really clever bit is in the thin air layer. Sure, air is normally a terrible conductor of heat, but when the layer is so thin its thermal resistance is much reduced*. Thus, from the point of view of the heat, the rotating impeller is thermally 'attached' to the base plate (or at least much more so than if it were, say, 1mm away).
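
To put a rough number on it (the geometry here is entirely my own guess, not anything from the actual device), a plain conduction model R = t / (k x A) shows the gap thickness doing all the work:

#include <stdio.h>

int main(void)
{
    const double k_air = 0.026;   /* W/(m.K) for still air - assumed   */
    const double area  = 25e-4;   /* 25 cm^2 of baseplate - assumed    */

    double r_thin  = 10e-6 / (k_air * area);   /* 10 micron gap */
    double r_thick = 1e-3  / (k_air * area);   /* 1 mm gap      */

    printf("10 um gap: %.2f K/W\n", r_thin);   /* roughly 0.15 K/W */
    printf("1 mm gap : %.2f K/W\n", r_thick);  /* roughly 15 K/W   */
    return 0;
}

A hundred times thinner gap, a hundred times less thermal resistance; that's the whole trick.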

I for one hope he/they make a pile of cash out of that. Clever ideas like that need rewarding. And besides, something like that spinning away at several thousand RPM has got to sound just a little bit like a turbine, and that'd be a cool noise for any PC to make.

There must be a pile of kinetic energy built up in that spinner. That could be used as a little energy reserve; lose the mains power, and the spinner becomes a generator providing just enough electricity for a cleanish shut down. Bit like an F1 car's KERS. My idea (unless someone else thought of it first)!!!

*Just like metal loaded epoxy used to connect some flexy circuits; it's a terrible conductor over any sizeable distance, but when used in thin enough layers that doesn't amount to much.

5
0

Monolithic supers nab power efficiency crown

bazza
Silver badge

Accountants

Depends on how the accountants that plague these projects set up the budgets. If the electricity is paid out of a different bucket to the one you have available to buy hardware, why 'waste' the hardware budget on efficient kit?

0
0
bazza
Silver badge

Not a PS3 fanboi...

...because I don't own one, but I will apologise a little bit for rising to your bait. But I do like the Cell's internal architecture a lot. Much more elegant and responsive than the sledge hammer that is a GPU. Hard to programme properly, a huge amount of grunt available if you can programme it, probably extremely satisfying once mastered.

To my eternal regret I've not had to make use of one at all.

But being a SPARC fan, I am cheering on the K machine at Riken. Just goes to show that performance is as much bound up in good inter-processor comms as it is in CPU speed. All those GPU-based machines seem to be terrible from the point of view of mean/peak performance ratios; sounds like their GPUs are being starved of work. Now if someone bolted the K machine's interconnect right into the middle of a GPU, think what sort of awesome machine could be built! Though I'd still prefer a network of Cells...

0
0

MS to WinXP diehards: Just under 3 more years' support

bazza
Silver badge

Thirteen years...

...is pretty generous support (all things considered) for a proprietary OS whose updates have been free for all that time. Anyone who bought an XP retail licence all those years ago will have ended up with a pretty good deal, considering they'd have been able to move it onto new hardware several times by now.

Whether you'd want to have been stuck with it for all those years is another matter. I prefer Win7 these days, definitely a better product than XP.

How many Linux distributions can claim to have a reinstall-free upgrade path from that far back? Not many, I'd guess. My personal experience of upgrading between major editions of Ubuntu has been patchy at best. XP may have been boring all this time, but it has done (mostly) the job that its users have wanted it to do.

1
1

'Lion' Apple Mac OS X 10.7: Sneak Preview

bazza
Silver badge
Stop

ARRRRGGHHH! More Creeping Tabletisation!

Launchpad is definitely a lurch towards running on a tablet. It's a worrying trend, heed it well.

Explanation - all the big development money in the industry seems to be going into tablets and mobiles. Not that I care particularly for Apple, but the others (Ubuntu, Microsoft, etc) are all doing that too. Are we beginning to see the end of the line for making the life of the desktop / laptop user better?

We desktop users aren't necessarily doing hip and trendy things - corporate droids mostly these days, I'd guess - but we still like a nice working environment! Just because there aren't major profits to be made out of clear-thinking, hard-nosed IT departments whose favourite line is "what do you need that for?" doesn't mean that there aren't users desperate for an upgrade. Personally I find Win7 quite good, but I'd hate to think that that's the end of the line; I've actually got XP at work.

0
0

One per cent of world's web browsing happens on iPad

bazza
Silver badge
Paris Hilton

Redressing the balance

Posting this from Firefox on a Solaris 10 VM on VMWare on Win7 64bit on an AMD CPU.

Nope, the pie charts ain't changed noticeably.

1
0

IBM to snuff last Cell blade server

bazza
Silver badge

@asdf, missing out

"PowerPC wont die but at least it is largely gone from general computing."

But not gone from other fields. Sectors like telecoms, military computing, etc. tend to care less about architectural backwards compatibility and more about performance, power consumption, etc. By being willing to switch around a bit they can exploit whatever is best at the moment. PowerPC is in fine form in the telecoms world, but has slipped a bit in the high performance embedded world.

Whereas 'general computing' has been stuck in the Intel rut for decades now. The trouble is that the battery-powered and server farm sectors of 'general computing' have already chosen ARM or are threatening to do so. Why is that relevant? Well, it signifies a greater willingness on the part of the vendors to look beyond the world of x86 for the performance that sells. Doing that once means they have to keep doing it should something better come along in order to retain a competitive edge. It's entirely possible that PowerPC will be the chip of choice, and it might not be too hard for some vendor to go for it. The trouble is that such an endeavour will always be commercially driven; an offer of cheaper Intel chips might be just as commercially advantageous as switching to another architecture to get a performance edge.

ARM is having a quite interesting impact on the market. They own the mobile market, and they may end up owning the laptop market too (MS porting Windows, Apple talking about an ARM laptop). They may also end up owning a large chunk of the server market if vendors see useful performance per Watt figures for ARM servers. So where would that leave the great big hulking chips that Intel and AMD are churning out? With a somewhat smaller market, I would imagine, apart from the power desktop users, and there aren't really many of them. So will Intel/AMD keep developing these very powerful chips if the ARM architecture starts taking over the server market too? Possibly not, or not at the same pace.

So where might users who do actually need fast general purpose compute performance turn? With Freescale seemingly tarting up the PowerPC line with some recent announcements and the embedded market there to support it, there might yet be a commercial rationale to move high performance computing over to PowerPC. Adobe may yet have to dust off their old Photoshop source code. I can remember that in the early days of Intel Macs Photoshop was slower than on the G5s because there was no AltiVec unit to exploit for all those image processing functions.

Regardless, it seems likely that the end users are going to have to get used to underlying architectures changing more than once every 20 years. We should be grateful. It'll mean more performance (hopefully) and less power consumption, and who cares what instruction set lies beneath?

6
0
bazza
Silver badge

Fog of War

Pity. I've had my eye on Cell for years, but the roadmap uncertainty has been quite off-putting. Maybe Freescale's newly announced multicore PowerPCs will make up the difference. Regardless, Cell was certainly a programming challenge, and not one on which any old programmer could achieve maximum performance in their first afternoon. Perhaps that's the real reason why IBM have backed away from it. I gather that there are some in the games industry who have got to grips with it (and all that horsepower presumably makes a difference), so maybe Sony will continue with Cell. Who knows.

1
0

ITU Gen Sec: Why not speaking English can be a virtue

bazza
Silver badge
Coat

Google Translate?

Well, speaking English is clearly the best option for global communications. Consider French - perfectly precise, but not much good for making jokes. For example,

"Why did the chicken cross the road quickly?

To avoid becoming Coq au Vin."

Aha, aha, aha. It has my sides splitting in English; in French, the tumbleweed just blows past.

0
0

Apple's new Final Cut Pro X 'not actually for pros'

bazza
Silver badge

Is The Register selling tickets?

Because I'm enjoying this show immensely!

Just to stoke some flames, I think that a new version that can't read the previous version's files at all is crap software. Imagine telling a software developer that Apple's new dev tools won't import or compile their existing source code. I'm sure the dev would be justified in being furious.

MS did a good thing (after complaints I imagine) with Office2k7 in producing a plugin for older versions to allow them to read the new file formats. That's the best possible philosophy when making big changes to a file format.

Having lit that firework I shall now retire to a safe distance.

4
0

Quantum crypto felled by 'Perfect Eavesdropper' exploit

bazza
Silver badge

@Destroy All Monsters: Are you sure?

"It's far more likely that factoring turns out to be in P than that QM falls over, really."

Are you really really sure?

Firstly, as Ken Hagan said elsewhere, the only 'quantum' part of quantum cryptography is the detectors. But as the original article indicated, these were prone in this particular case to incorrect operation in the face of relatively simple attacks. That is nothing to do with whether or not quantum mechanics is valid. It is merely our inability to reliably measure quantum states in the face of a simple attack.

Secondly, whilst quantum mechanics has indeed been shown to be a theory well matched to physical observations, it is still a theory. Richard Feynman had a good few things to say about theories, and he should know. Seek out the videos of his lectures on quantum electrodynamics that he gave in New Zealand; they're very good and I think they're still freely streamable. And indeed the semiconductor junctions on which we all now depend are devices exploiting quantum effects. But my point is that quantum mechanics is just a theory, no more, albeit one that seems to work very well.

Although QM is pretty good, it is reasonable to suggest that it may not be completely correct. Firstly, I don't think anyone has managed to make QM and relativity fit together. Both have a wealth of experimental data to suggest that they're along the right lines, but they remain theoretically un-united. So *something* is wrong somewhere. One of those 'somethings' is the behaviour of Pioneer 10, which isn't quite where it ought to be according to both Newton's and Einstein's theories of gravity, which otherwise seem to work quite well in keeping the planets in the right places. Nor are galaxies quite the right shape. And does a quantum state change instantaneously or over a finite period of time when an observation is made? It's quite an important question for QKD. But some of the experiments I've read about are hinting that the answer is the latter not the former, suggesting that there may be a hole in the basic premise of QKD.

So knowing that something is wrong somewhere in the theoretical models of why stuff happens, would you ever base the security of your system on it? The *only* assurance we have that it is correct is in effect a bunch of scientists saying "it looks OK to me". Whereas logical encryption algorithms like AES, DES, etc. all exist within the rules of mathematics, which are much better understood, because mankind made up the rules.

As Pete H pointed out they are still vulnerable in their actual physical implementations, but provided the logical implementation is correct and an attacker is unable to get physical access to either end then their strengths and weaknesses are deterministic solely within the mathematical framework in which they are defined. It could be that we don't understand the maths right. But that's a much more straightforward thing to worry about than being totally certain that we understand the physics.

0
0
bazza
Silver badge

@Destroy All Monsters: quick thought experiment

Just to follow up on my previous response to your most welcome post, imagine asking a physicist the following question.

"Would you bet the life of your first born child on Newton's law of gravity ultimately being proved correct in return for £million?"

120 years ago you would get quite a few saying yes. Immediately after Einstein's general theory of relativity was published you would still get some saying yes. Today, I dearly hope for the future social well being of the world that none would say yes.

I think that if I rephrased the question along the lines of "Would you bet the life of your first born child that quantum mechanics is completely correct in return for £billion (inflation)" you might not get a 100% 'yes' rate. And if that's really the case, why should we bet the security of our communications?

0
0
bazza
Silver badge
Happy

@Pete H, not really

Your argument applies to instances where an attacker has physical access to one or other end of the communication link. Sure, if someone is in a position to do a power analysis on the encrypting device there's potentially a physical weakness to be exploited. However, the discussion so far has really been about intercepting the communications link between the two ends and whether or not there is an exploitable physical weakness. With purely logical algorithms like AES the intercepted signal is solely noughts and ones, so there is nothing to exploit beyond weaknesses in the maths. As you say, that is a bloody hard job these days. But quantum cryptography extends the physical weaknesses to all aspects of the encryption system - both ends *and* the communication link. Not a very desirable move perhaps?

0
0
bazza
Silver badge

@Remy Redert re:Dead in the water

Of course other encryption systems suffer from early problems, but you're missing my point.

The strengths and weaknesses of systems like DES and AES can be determined purely analytically, and their implementations are open to truly large scale testing and examination by anyone with the urge to download the spec and look at the source code. Whatever the weaknesses in the algorithms are, we can point to them and say definitively what they are, how hard they are to exploit, etc. Anyone can look at one aspect of an algorithm and say things like "you'd have to find the prime factors of that number there" and know that that would be a complete and definitive statement on the merits of that part of the algorithm. One can then objectively assess how hard it would be to perform said feat, keep an eye out for papers with titles like "prime factor finding" and generally be comfortable. And the same goes for implementations. This is because things like DES, AES, etc. are entirely logical systems that operate in rule sets created by man with no physical influences.

The problem with quantum cryptography is that the security of a key transfer relies entirely on the behaviour of physical processes, namely the quantum entanglement itself as well as the single photon sources and detectors. Knowing whether or not we have a complete understanding of these physical processes is much harder to be sure about. Mankind has been constantly revising its opinions of nature for millennia, and I don't suppose we're going to stop doing that anytime soon.

So far the problems that have been encountered with quantum cryptography are related to the physical properties of the detectors and photon generators (it turned out that single photons weren't always on their own...). No great surprises there - matter does not always behave as we tell it to! This latest problem is just another instance of our misunderstanding the physical properties of one electro-optic component in the system. I doubt that one can ever prove analytically that the components are designed and implemented correctly. All one can ever say is that N tests have shown them to work properly, but N can never be a truly large number. And should one test each and every photon detector, or just a sample of the production run?

But what about entanglement itself, and the impossibility of messing with it? There are several groups of physicists who are questioning whether this is in fact correct or not. It looks like the rule that you can't measure the state of an entangled photon without affecting the state is more of an assumption than a proven fact. It's easy to say that it is hard to make such measurements, but to the best of my knowledge no one has quite yet been able to completely rule it out. Some very elegant experiments are being planned by academics to explore this. Some have already been done with electrons which showed that you can 'sniff' their quantum state, repair the damage done to the state, and repeat until you know everything. Not good news so far, except that quantum cryptography uses photons.

My point is that all an experimentalist can say is that their particular experimental design could or could not measure states without disturbing them, but that says nothing about someone else's experiment. Saying "I can't do it" doesn't prove that no one else can. Yet for quantum cryptography to be guaranteed you have to prove the rule. As I said above, some results are already known for experiments with electrons which would suggest the issue is more one of experimental design, not hard physical facts. So where would quantum cryptography be if someone successfully designed and performed the right experiment? It is not guaranteed that they won't be able to do so. Certainly, if someone *does* manage to do it (which would be impressive, because it would mean our quantum model of the world is wrong; Nobel prize in the post) quantum cryptography would be finished.

And it's worth pointing out that quantum cryptography is in fact ordinary symmetric cryptography that relies on a physical trick to securely exchange the key. That still doesn't stop someone getting the design and implementation of the actual encryption/decryption algorithm wrong.
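
To make that last point concrete: the quantum part is only the key agreement, and even that boils down to fairly mundane bookkeeping. A toy, purely classical simulation of BB84-style sifting (no real quantum channel, just the protocol's logic; all names here are mine) looks like this:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_PHOTONS 32

int main(void)
{
    int alice_bit[N_PHOTONS], alice_basis[N_PHOTONS], bob_basis[N_PHOTONS];
    srand((unsigned)time(NULL));

    for (int i = 0; i < N_PHOTONS; i++) {
        alice_bit[i]   = rand() & 1;   /* raw key material               */
        alice_basis[i] = rand() & 1;   /* 0 = rectilinear, 1 = diagonal  */
        bob_basis[i]   = rand() & 1;   /* Bob guesses a basis per photon */
    }

    /* Sifting: bases are compared over a public classical channel and only
       the matching positions are kept - roughly half the bits. In the ideal
       case Bob's measurement equals Alice's bit at those positions. */
    printf("sifted key: ");
    for (int i = 0; i < N_PHOTONS; i++)
        if (alice_basis[i] == bob_basis[i])
            printf("%d", alice_bit[i]);
    printf("\n");

    /* That sifted key then drives a perfectly ordinary symmetric cipher;
       everything from here on is classical cryptography as usual. */
    return 0;
}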

9
0
bazza
Silver badge
FAIL

Dead in the water

So it seems that the limit on the security of quantum cryptography is nothing to do with the entanglement of photons, but is wholly dependent on the electronic behaviour of the detectors used to test the integrity of that entanglement. This trick has been possible because of a loophole that no-one had spotted previously. OK, so they'll plug this loophole, but who says there won't be more? That sounds like something that you can never be completely sure about.

So what exactly is the point of quantum cryptography then?

2
1

SpaceX goes to court as US rocket wars begin

bazza
Silver badge
Pint

Better Fix

"...just need a few more good lurches".

0
0

Oracle seeks 'billions' with Google Android suit

bazza
Silver badge
Thumb Down

@AC, re: It's a shame

>Ellison is a nut job along with steve jobs

Er, are you saying that Google aren't? Oh boy, they've hoodwinked you well! The *only* reason Google structured Android the way it did was to create a closed software ecosystem. Java-ish, but not Java enough to be able to run the apps elsewhere. That app lock-in just encourages people to use Google services, for which Google get ad money.

Google have taken a gamble on bending someone else's intellectual property to suit their own money making scheme, and it may yet backfire quite spectacularly. They dress it up as open source "from the very bottom of their heart", but that just disguises their corporate profit driven strategy. Google's trick is that most people don't see where the money is coming from. Apple's trick is that despite the obviously high prices people don't seem to care. All companies that have shareholders are obliged to take steps to increase profits, and we shouldn't be surprised to find that some of them are quite good at doing so without a blatant flow of cash.

The closest I've seen to a large company properly donating to the open source world is IBM, and Sun too in the good ol' days. IBM have put $billions of effort into Linux, from which everyone has benefitted. They make money out of it through server and services sales, but otherwise the rest of us use their contributions without a penny heading IBM's way. Sun developed DTrace and ZFS and gave them away under their own licence. They haven't cropped up as such in the Linux world because GPL2 isn't compatible with the licence Sun wrote. You'd have to be very cynical indeed to blame Sun for that! They have been picked up by FreeBSD though. I'm sure there are other good examples too.

Google have open sourced quite a lot, but they're a bit tardy about it with Android, and everything they've done is clearly aimed at capturing more of the search and on-line advertising market. They're not especially good at it though. You think Android is first class; all I see is version fragmentation, unfixed bugs, a heavy steer towards doing everything through Google's websites, and yet another app ecosystem that makes it difficult to port apps to another platform. Crap. Look at the hounding HTC got just recently when they said that they wouldn't put an already out-of-date Android on HTC Desires. A sign of happy Google customers? Hardly.

9
8
bazza
Silver badge

@Matt Bucknall

It's an interesting thought. It's even more interesting to wonder why Google went for Java in the first place.

Suppose the Google design requirements for Android went something like this;

1) cheap to sling together

2) app development not in C/C++, but in a pervasive and slightly trendy language

3) closed eco system - Android apps run only on Android

The answer to 1) is Linux - they could rip that off as much as they like. Android has clearly been slung together with not much thought given to updates, quality, security, etc. Java would have been a good answer to 2) but 3) gets in the way. Solution - bend Java a bit by using Dalvik, et voila! And it was cheap as chips too - they didn't have to grow a whole ecosystem from the ground up.

Only trouble is Dalvik might not turn out to be cheap at all, and might prove very expensive.

So what do they migrate to? Native, with/without something like Qt? Javascript? I suspect that for most developers the former would be yeurk.

2
2

Time to say goodbye to Risc / Itanium Unix?

bazza
Silver badge

Portability?

>The _BIG_ advantage of Linux is portability. Source code

>written on X86 should compile and run happily on

>MIPS/ARM/Power/Big Iron/Itanic/Whatever comes along.

Really? Maybe, provided you've got all the right libraries installed, the right versions of those libraries, the right GCC setup, and that your distribution's fs layout is along the same lines as the one used by the software developer, etc. etc. And then you may also have to worry about hardware architectural issues such as endianness. And then you have to wonder whether the software you have just compiled is actually running as the writer intended, or is there a need for some thorough testing?
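
The endianness point is the one that bites silently. A trivial illustration (POSIX byte-order helpers; nothing distribution-specific) of why raw integers written to files or sockets need converting at the boundary:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl(): host byte order to network (big-endian) order */

int main(void)
{
    uint32_t value = 0x11223344;
    unsigned char *p = (unsigned char *)&value;

    /* Which byte does this host store first? */
    printf("host is %s-endian\n", p[0] == 0x44 ? "little" : "big");

    /* Convert at the boundary so the bytes on disk or on the wire are the
       same regardless of which machine wrote them. */
    uint32_t wire = htonl(value);
    unsigned char *w = (unsigned char *)&wire;
    printf("on the wire: %02x %02x %02x %02x\n", w[0], w[1], w[2], w[3]);
    return 0;
}

Recompile that on x86 and on a big-endian MIPS and the second line comes out identical; skip the conversion and your 'portable' code quietly writes different files on each.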

The idea of distributing a source code tarball and then expecting ./configure, etc. to work first time for everyone on every platform is crazy. Pre-compiled packages are a joke in the Linux world too; deb or rpm? Why is there more than one way of doing things? There is no overall benefit to be gained.

It is asking a lot of a software developer to maintain up-to-date rpms, debs and tarballs for each version of each Linux distribution on each platform. Quite understandably they don't do it. If we're lucky the distribution builders do that for them.

0
0
bazza
Silver badge

@Steven Knox

You're right to suggest that some sort of performance metric should be calculated for a candidate IT solution, but you can't tell everyone what their metrics should be.

Google apparently use a metric of searches per Watt. Sensible - searches are their business, energy is their highest cost. A banking system is more likely to be measured in terms of transactions per Watt-second; banking systems are sort of real time because there is an expectation of performance, but energy costs will be a factor too. But ultimately it is for the individual business to decide what is important to them. For example a bank somewhere cold might not care about cooling costs!

I think that it is safe to conclude from IBM's sales figures that a fair proportion of businesses are analysing the performance metrics of x86, RISC, etc. and are deciding that a mainframe is the way ahead. IBM sell so much kit that not all their customers can be wrong!

0
0

Google pits C++ against Java, Scala, and Go

bazza
Silver badge

@ Destroy all monsters; Less of the little one, more of the old one

My whole point is that there's nothing really new in Scala's concurrency models. Both the Actor and CSP concurrency models date back to the 1970s. Pretty much all that fundamentally needs to be said about them was written back then. Modern interpretations have updated them for today's world (programmers have got used to objects), but the fundamentals are still as was.

[As an aside, I contend that a Communicating Sequential Process is as much an 'object' as any Java class. It is encapsulated in that its data is (or at least should be) private. It has public interfaces; it's just that the interface is a messaging specification rather than callable methods. And so on.]
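
To make that aside concrete, here's a sketch (all names made up) of a CSP-style 'object': the state is private to one process - a thread here - and the public interface is the set of messages it accepts rather than callable methods:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

enum msg_type { MSG_INC, MSG_GET, MSG_QUIT };

struct msg {
    enum msg_type type;
    int reply_fd;                  /* where MSG_GET answers get written */
};

static void *counter_process(void *arg)
{
    int in_fd = *(int *)arg;       /* read end of this process's input pipe */
    long count = 0;                /* private state - nobody else can touch it */
    struct msg m;

    while (read(in_fd, &m, sizeof m) == sizeof m) {
        switch (m.type) {
        case MSG_INC:  count++; break;
        case MSG_GET:  write(m.reply_fd, &count, sizeof count); break;
        case MSG_QUIT: return NULL;
        }
    }
    return NULL;
}

int main(void)
{
    int cmd[2], reply[2];
    pipe(cmd);
    pipe(reply);

    pthread_t t;
    pthread_create(&t, NULL, counter_process, &cmd[0]);

    struct msg inc  = { MSG_INC,  -1 };
    struct msg get  = { MSG_GET,  reply[1] };
    struct msg quit = { MSG_QUIT, -1 };
    long result;

    write(cmd[1], &inc, sizeof inc);       /* 'method calls' are just messages */
    write(cmd[1], &inc, sizeof inc);
    write(cmd[1], &get, sizeof get);
    read(reply[0], &result, sizeof result);
    printf("count = %ld\n", result);       /* prints 2 */

    write(cmd[1], &quit, sizeof quit);
    pthread_join(t, NULL);
    return 0;
}

Swap the pipes for sockets and the 'object' doesn't even have to be on the same machine; the interface - the message specification - is unchanged.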

No one in their right mind would choose to develop a programme as a set of concurrent processes or threads. It's hard, no matter what language assistance you get. The only reason to do so is if you need the performance.

CSP encouraged the development of the Transputer and Occam. They were both briefly fashionable from the late '80s to the very early '90s, when the semiconductor industry had hit a MHz dead end. A miracle really; their dev tools were diabolically bad even by the standards of the day. There was a lot of muttering about parallel processing being the way of the future, and more than a few programmers' brows were mightily furrowed.

Then Intel did the 66MHz 486, and whooosh, multi-GHz arrived in due course. Everyone could forget about parallel processing and stay sane with single threaded programmes. Hooray!

But then the GHz ran out, and the core count started going up instead. Totally unsurprisingly, all the old ideas crawl out of the woodwork and get lightly modernised. The likes of Bernard Sufrin et al do deserve credit for bringing these old ideas back to life, but I think there is a problem.

Remember, you only programme concurrent software if you have a pressing performance problem that a single core of 3GHz-ish can't satisfy. But if that's the case, does a language like Scala (which still interposes some inevitable inefficiencies) really deliver you enough performance? If a concurrent software solution is being contemplated, perhaps you're in a situation where ultimate performance might actually be highly desirable (like avoiding building a whole new power station). Wouldn't the academic effort be more effectively spent in developing better ways to teach programmers the dark arts of low level optimisation?

1
1
bazza
Silver badge
Pint

@Ken Hagan

Thank you Ken; one's smugness was indeed primarily derived from Google implying that C/C++ programmers were superior beings...

My beef with proponents of languages like Scala and node.js is that yes, whilst they are well developed (or on the way to being so) and offer the 'average programmer' a simpler means of writing more advanced applications, they do not deliver the highest possible performance. This is what Google has highlighted. Yet there is a need for more efficiency in data centres, large websites, etc. Lowering power consumption and improving speed are increasingly important commercial factors.

But if that's the case, why not aim for the very lowest power consumption and the very highest speed? Why not encourage programmers to up their game and actually get to grips with what's actually going on in their CPUs? Why not encourage universities to train software engineering students in the dark arts of low level programming for optimum computer performance? C++, and especially C, forces you to confront that reality and it is unpleasant, hard and nasty. But to scale as well as is humanly possible, you have to know exactly what it is you're asking a CPU+MMU to do.

From what I read, the big successful web services like Google and Amazon are heavily reliant on C/C++. We do hear of Facebook, Twitter, etc. all running into scaling problems; Facebook decided to compile PHP (yeeuuurk!) and Twitter adopted Scala (a halfway house in my opinion). The sooner services like them adopt metrics like 'Tweets per Watt' (or whatever), the sooner they'll work out that a few well-paid C++ programmers can save a very large amount off the electricity bill. Maybe they already have. For the largest outfits, a 10% power saving represents $millions in bills every single year; that'd pay for quite a few C/C++ developers.

A little light thumbing through university syllabuses reveals that C/C++ isn't exactly dominating degree courses any more. It didn't when I was at university 22 years ago (they tried to teach us Modula-2; I just nodded, ignored the lectures and taught myself C. Best thing I ever did). Google's paper is a clear demonstration that the software industry needs C/C++ programmers, and universities ought to be teaching it. Java, Scala, Javascript, node.js plus all the myriad scripting languages are easy for lazy lecturers to teach and seem custom-designed to provide immediate results. However, immediate results don't necessarily add up to well-engineered, scalable solutions. Ask Facebook and Twitter.

7
1
bazza
Silver badge

@ Rolf Howarth; Not always...

"My favourite adage is still that CPU cycles are cheaper than developer cycles!"

Not when you're having to build your own power stations to run your data centre they're not.

http://www.wired.com/epicenter/2010/02/google-can-sell-power-like-a-utility/

1
1
bazza
Silver badge
Angel

Caution - old git moan

Once upon a time C/C++ were the primary languages of choice for software development. A C/C++ programmer was an 'average' software developer because that was almost the only language that was used. Now Google are saying that they're effectively superior to the 'average programmer'!

Sorry about the gap, was just enjoying a short spell of smugness.

@sT0rNG b4R3 duRiD. Concurrency in C is just fine; it's no better or worse than any other language that lets you have threads accessing global or shared memory.

I don't know about yourself, but I prefer to use pipes to move data between threads. That eliminates the hard part - concurrent memory access. It involves underlying memcpy()s (for that's what a pipe is in effect doing), which runs contrary to received wisdom on how to achieve high performance.

But if you consider the underlying architecture of modern processors, and the underlying activities of languages that endeavour to make it easier to have concurrency, pipes don't really rob that much performance. Indeed, by actually copying the data you can eliminate a lot of QPI / Hypertransport traffic, especially if your NUMA PC (for that's what they are these days) is not running with interleaved memory.

It scales well too. All your threads become loops with a select() (or whatever the Windows equivalent is) at the top, followed by sections of code that do different jobs depending on what's turned up in the input pipes. However, when your app gets too big for the machine, it's easy to turn pipes into sockets and threads into processes, and run them on separate machines. Congratulations, you now have a distributed app! And you've not really changed any of the fundamentals of your source code. I normally end up writing a library that abstracts both pipes and sockets into 'channels'.
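
The loop shape I mean looks something like this (a minimal POSIX sketch; the fd names and message framing are made up). The useful property is that select() and read() don't care whether the descriptors are pipes or sockets, which is exactly why the pipe-to-socket, thread-to-process migration costs so little:

#include <sys/select.h>
#include <unistd.h>

void worker_loop(int cmd_fd, int data_fd)
{
    char buf[4096];

    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(cmd_fd, &rd);
        FD_SET(data_fd, &rd);

        int maxfd = (cmd_fd > data_fd ? cmd_fd : data_fd) + 1;
        if (select(maxfd, &rd, NULL, NULL, NULL) < 0)
            break;                          /* interrupted or shutting down */

        if (FD_ISSET(cmd_fd, &rd)) {
            ssize_t n = read(cmd_fd, buf, sizeof buf);
            if (n <= 0) break;              /* writer has gone away */
            /* ...act on the command... */
        }
        if (FD_ISSET(data_fd, &rd)) {
            ssize_t n = read(data_fd, buf, sizeof buf);
            if (n <= 0) break;
            /* ...process the data, write results to an output pipe... */
        }
    }
}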

Libraries like OpenMPI do a pretty good job of wrapping that sort of thing up into a quite sophisticated API that allows you to write quite impressive distributed apps. It's what the supercomputer people use, and they know all about that sort of problem with their 10,000+ CPU machines. It's pretty heavyweight.

If you're really interested, take a look at

http://en.wikipedia.org/wiki/Communicating_sequential_processes

and discover just how old some of these ideas are, and realise that there's nothing fundamentally new about languages like node.js, Scala, etc. The proponents of these languages who like to proclaim their inventions haven't really done their research properly. CSP was in effect the founding rationale behind the Transputer and Occam. And none of these languages do the really hard part for you anyway: working out how a task can be broken down into separate threads in the first place. That does need the mind of a superior being.

15
1

Microsoft unveils Windows Phone 7 8

bazza
Silver badge
Alert

@mraak Gnome 3? iOS?

It's worth mentioning that neither of these paid much attention to Gnome 2 or OS X. For the former that's quite important - how many Linux tablets are there in comparison to the number of actively used Linux desktops? Not many. For the latter it seems unimportant; Macs are still Mac OS X. But if Apple decide that, actually, people should use MacBooks and iMacs the iOS way too (and really the desktop metaphor has been one big horrible mistake that's costing Steve Jobs some money, and whose stupid idea was it anyway?) there may be similar gnashing of teeth. [But not too much, because fanbois are fawning, cult crazed pillocks for whom St Jobs can do no wrong. Ooops, did that come out aloud?]

The thing that worries me is that desktop machines may be deprived of the 'desktop metaphor'. Imagine if iOS's 'tablet metaphor' became dominant. How would we look at two applications' windows at once?

There seems to be an unseemly rush for tablet friendliness in operating systems. Presumably MS, Apple and Gnome are chasing commercial success / popularity (delete as appropriate), for suddenly tablets are where it's at for some reason or other. But those of us who actually have to use computers for work may be cut out of it. Boo and hiss. For instance, imagine trying to use a CAD package to make up a drawing from a sketch in a customer's email if you can't place the CAD application's window next to the email's window.

Tablet friendliness in OSes is an indicator of a general trend in computing that should be becoming quite worrying for a wide range of professionals. Undoubtedly the commercial drive is towards battery powered small portable devices. Apple really has made billions out of that market, and everyone wants a major slice of that pie. Apple's commercial success is a very powerful indicator that the majority of people are reasonably happy with a machine that browses, emails, can do iPlayer (there's an app for that...) and YouTube, and not a lot else. Most corporate users can get by quite happily with a tiny little ol' PC just about capable of running Office.

But there are many professions out there that require a decent amount of computing grunt and a couple of large screens. For example, DVD compilers, graphic artists, large scale application developers, scientists and engineers, CAD, etc. etc. Gamers count too in my argument. Trouble is, whilst there are plenty of diverse professions requiring computing grunt, there aren't that many professionals doing them. They don't represent a significant portion of the overall purchasing population, so they don't figure highly in the strategic planning of corporations like Apple, MS, Intel, AMD.

So what do we see? ARM suddenly threatening to take over the world; tabletiness creeping into OSes; R&D money being spent on battery powered devices with small screens and no keyboards; high prices for machines (especially laptops) that have decent screens and high performance.

I've no idea how far the trend away from cheap high performance computing will go. To a large extent the market for high performance desktop users is heavily subsidised by the massive server market. Intel and AMD have to produce high performance chips for servers, so it's not very expensive to spin those out for the desktop users too. But Intel and AMD may wind up losing a big share of the server market to ARM as well; it's a distinct possibility. That would drive the prices for big powerful chips right up, and certainly dull AMD and Intel's enthusiasm for spending billions of dollars on developing better ones.

All this could make life much more expensive and under resourced for the high performance desktop user. I'm not looking forward to it!

Just in case anyone thinks that it will never come to this, think again. Commercial pressures do mean that these large companies are not at all charitable when it comes to niche (but still largish) market segments. If Intel/AMD/whoever decides at some distant point in the future that there's only a couple of billion in revenue to be made from designing a new 16 core 5 GHz chip but tens of billions from 2 core 1 GHz chips running at 0.05W, they will make the latter. They wouldn't be able to justify making the faster chip to their shareholders. Worse than that, they'd likely get sued by their shareholders if they chose otherwise. The question then would be will they still manufacture their fastest design just for the sake of the niche high performance market? Maybe, but it's not guaranteed, and it likely wouldn't come cheap.

3
3

Danish embassy issues MARMITE WAFFLE

bazza
Silver badge

<Sharks with frickin lasers>Title

Just fishing to see if I can discover another icon. Oh, and down with Denmark, the marmite menacers.

0
0

Brit expats aghast as Denmark bans Marmite

bazza
Silver badge

Piat d'or?

Have seen it in France, and far enough away from the channel ports to probably not be aimed at the booze cruiser.

0
0
bazza
Silver badge

It's a good idea

Made sense to use diesel fuel. There's a lot of that on ships!

0
0

Intel rewrites 'inadequate' roadmap, 'reinvents' PC

bazza
Silver badge

@nyelvmark. Eh?

>ARM is not a processor

Where have you been during the smart phone revolution? Things like Android phones, iPads, etc. have shown that whilst an ARM might not be the fastest chip out there it's certainly plenty fast enough for browsing, email and some simple amusements which is all that most people want to do. The operative word there is 'most'. It shows where the majority of the market is. It shows where the money is to be made. Companies are interested in making money, end of. Any bragging rights over having the fastest CPU are merely secondary to the goal of making money.

So clearly compute speed is not as big a marketing advantage as all that. The features that allow one product to distinguish itself from others are power consumption and size. And that's where ARM SoCs come in streets ahead of Intel.

Intel at last seem to have realised this and have been caught on the hop by the various ARM SoC manufacturers and decisions by Microsoft and Apple to target ARM instead of / as well as Intel. So they're responding with their own x86 SoC plans, and will rely on their advantage in silicon processing to be competitive. And they may become very competitive, but only whilst everyone else works out how to match 22nm and 14nm.

It's a mighty big task for Intel. They have to completely re-invent how to implement an x86 core, re-imagine memory and peripheral buses, choose a set of peripherals to put on the SoC die, the lot. There's not really anything about current Intel chips that can survive if they're to approach the power levels of ARM SoCs.

Also, a lot of the perceived performance of an ARM SoC actually comes from the little hardware accelerators that are on board for video and audio codecs, etc. There's a lot of established software out there to use all these little extras, and the pressure to re-use those accelerators on an x86 SoC must be quite high. So there's a risk that an x86 SoC will be little more than a clone of existing ARM SoCs, except for swapping the 32,000ish transistors of the ARM core for the millions needed for an x86.

And therein lies the trouble: the core. The x86 instruction set has all sorts of old fashioned modes and complexity. To make x86 half decent in terms of performance Intel have relied on complicated pipelines, large caches, etc. These are things that ARMs can get away with not having, at least to a large extent. So can Intel simplify an x86 core so as to be able to make the necessary power savings whilst retaining enough of the performance?

The 8086 had 20,000ish active transistors, but was only 16-bit and lacked all of the things we're accustomed to in 32-bit x86 modernity. Yet Intel have to squeeze something approaching today's x86 into little more than the transistor count of an 8086! I don't think that they can do that without changing the instruction set, and then it won't be x86 anymore. They'll have to gut the instruction set of things like SSE anyway and rely on hardware accelerators instead, just like ARM SoCs. If Intel's squeezing is unsuccessful and they still need a few million transistors, then as soon as someone does a 14nm ARM SoC Intel are left with a more expensive and power hungry product.

The scary thing for Intel is that the data centre operators are waking up to their need to cut power bills too. For the big operators the biggest bill is power. So they should be asking themselves how many data centre applications actually need large amounts of compute power per user. Hardly any. Clearly there's another way to slice the data centre workload beyond massive virtualisation. If some clever operator shows a power saving by having lots of ARMs instead of a few big x86 chips, that could be game over for Intel in the server market.

In a way it's a shame. Compute performance for the masses is increasingly being delivered by fixed task hardware accelerators. Those few of us (e.g. serious PC gamers, the supercomputer crowd, etc) who do actually care about high speed single thread general purpose compute performance may become increasingly neglected. It's too small a niche for anyone to spend the billions needed for the next chip.

4
0

Linux kernel runs inside web browser

bazza
Silver badge
Pint

@David Hicks, absurdity

OK so it doesn't work on an iSomething yet. But how long before we see Steve Jobs start trashing Javascript?

The language is only going to get faster on Androids, Blackberries, etc. and as it does so the opportunities for things like this to become more serious and more capable will only increase.

Now, it would be an absurd way to run whatever software you like on a phone. You'd need a pretty good network connection for storage (I'm presuming that Javascript can't store data locally on an iPhone). The battery consumption is going to be terrible in comparison to running an equivalent native application. But with such restrictive practices emanating from His Jobness, it is quite possible that absurdity will not be such a high barrier after all.

3
5

Intel: Windows on ARM won't run 'legacy apps'

bazza
Silver badge

The New Legacy?

It's quite possible that MS will manage to arrange things so that simply recompiling the source code produces a working ARM version of an existing x86 application. It's different from the Mac's migration from PPC to x86 - there, an endianness change was involved.

But with x86 to ARM there isn't, and that makes a pretty big difference to the porting task. All MS really need to do is to ensure that C structures (or the equivalent for your chosen language) are packed in memory in the same way and that's quite simple to achieve. Sure, there'll be testing to be done, but I'd put a whole £0.05 on that testing being confirmatory rather than fault finding, provided MS get it right.
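
For what it's worth, that sort of layout assumption is cheap to pin down at compile time. A made-up example (C11 static assertions; the structure is purely illustrative) of a record that must look identical to the x86 and ARM builds:

#include <stddef.h>
#include <stdint.h>

struct wire_record {
    uint32_t id;
    uint16_t flags;
    uint16_t pad;       /* explicit padding rather than whatever the ABI picks */
    uint64_t payload;
};

/* If either compiler lays this out differently, the build fails loudly
   instead of the data being silently misread after the port. */
_Static_assert(sizeof(struct wire_record) == 16, "layout drifted");
_Static_assert(offsetof(struct wire_record, payload) == 8, "layout drifted");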

MS have already shown some good evidence for it really being that simple. They showed Office 10 printing to an Epson printer. I don't know about Office 10 (.NET?), but the printer driver was simply recompiled and just worked. If a recompiled driver just worked OK, the chances are good for applications too.

And of course, .NET, Java, Javascript, Python (and so on) apps would just run anyway.

There will be an emphasis on software providers actually bothering to recompile their applications. But if it really is that easy then open public beta testing will probably be an attractive way of keeping porting costs down.

1
0

HP breakthrough to hasten flash memory's demise?

bazza
Silver badge

@Michael Xion - Staff?

Who was it that said that cats don't have owners, they merely have staff to look after them? All too true!

1
0
bazza
Silver badge

@Graham Wilson - an old fogey like me?

"I remember the days of 'Rolls-Royce'-type spectrum analysers and other such world-class test and measurement equipment from HP"

Ah yes, still got some truly ancient (nearly 30 years old) HP test gear in the lab, still in regular use, still perfectly OK. Still like Agilent stuff even if it is all just PCs in a funny box with some funny hardware, an ADC and some funny software.

Once had a battle with the bean counters keen to divest themselves of equipment that had long since 'depreciated'. It was tempting to 'bin' (well, hide) them, let them buy new ones (from Agilent, probably), and then restore the old ones from their hiding place and have twice the kit.

It was difficult to get them to realise that buying new ones could never ever be cheaper than just keeping the ones we'd already got, no matter how cleverly they stacked up their capital items budgets. Won in the end.

Bean counters of the world, take heed; spending no money really is cheaper than spending some money. Message ends. Message restarts; not all 'electronics equipment' is worthless in three years' time. Message ends again. Oh, and sometimes the engineers do have good ideas as to how money should be spent.

1
0
bazza
Silver badge

You mean...

That those visionaries The Goodies might have been on the right lines with Kitten Kong?

Scary!

0
0
bazza
Silver badge

Oh how easy it is to forget

Much as I recall with annoyance that HP didn't ship W2k drivers for the Deskjet I'd bought (ain't bought HP since on principle), that they divested themselves of what is now Agilent thus discarding their soul, that they acquired then binned most of DEC's finest and ditched their own processor line too, and that a once fine company reduced itself to little more than a commodity x86 box maker with an expensive line of ink and toner on the side, it's nice to see that there remains at least a spark of creativity.

If HP can pull this off then I might even cheer, and I'll probably be grateful one way or another. It will be an enormous breakthrough. If it works, how can it fail to take over from almost every non-volatile storage mechanism that mankind currently has? That's an enormous market, and it could all belong to HP in years to come.

But you have to wonder why HP's management decided over the years that all that R&D heritage and expertise wasn't worth it. Look at IBM - there's a company that's still not afraid to spend on fundamental R&D, and look at how well they do. If HP can do this with whatever's left in their R&D budget, what might they have achieved if they'd kept all that they'd once had?

Bean counters. Bastards.

9
0

Intel's Tri-Gate gamble: It's now or never

bazza
Silver badge

Intel making ARMs?

I think we'd all win then, even Intel.

0
0

Jaguar hybrid supercar gets green light

bazza
Silver badge
Paris Hilton

In this hallowed place...

...is one allowed to say hhhhhhhhhhhhhhhhottttttttttttttttttttttttttttttttttttttttttttttttttt?

I think I'd like one quite a lot.

Paris, coz it's the closest corresponding icon. Or am I too old now?

BTW, do we need a Pippa Middleton icon?

0
0

Apple reportedly plans ARM shift for laptops

bazza
Silver badge

@Ken Hagan

Well, transistor for transistor and clock for clock comparisons do count. The ARM core, even today, is still about 32,000 transistors. Intel won't tell us how many transistors there are in the x86 core (just some vague count of the number on the entire chip), but it's going to be way more than 32,000. So if you're selling a customer Nmm^2 of silicon (and this is what drives the cost and power consumption) you're going to be giving them more ARM cores than x86 cores.

Then you add caches and other stuff. On x86 there is a translation unit from x86 to whatever internal RISCesque opcodes a modern x86 actually executes internally. ARMs don't need that. x86 has loads of old fashioned modes (16-bit code anyone?) and addressing schemes, and all of that makes for complicated pipelines, caches, memory systems, etc. ARM is much simpler here, so fewer transistors are needed.

What ARM are demonstrating is that whilst x86s are indeed mightily powerful beasts, they're not well optimised for the jobs people actually want to do. x86s can do almost anything, but most people just want to watch some video, play some music, do a bit of web browsing and messaging. Put a low gate count core alongside some well-chosen hardware accelerators and you can get a part that much more efficiently delivers what customers actually want.

That has been well known for a long time now, but the hard and fast constraints of power consumption have driven the mobile devices market to adopt something other than x86. No one can argue that the x86 instruction set and all the baggage that comes with it is more efficient than ARM, given the overwhelming opinion of almost every phone manufacturer out there.

On a professional level, needing as much computational grunt as I can get, both PowerPC and x86 have been very good for some considerable time. ARM's approach of shoving the maths bits out into a dedicated hardware coprocessor will do my professional domain no good whatsoever! It's already bad enough splitting a task out across tens of PowerPCs / x86s; I don't want to have to split them out even further across hundreds of ARMs.

2
0
bazza
Silver badge

@JEDIDIAH

Yes, you are correct, and indeed users of other sorts of phones don't run into performance limitations either.

What the marketplace is clearly showing is that most people don't want general purpose computing, at least not beyond a certain level of performance. After all, almost any old ARM these days can run a word processor, spreadsheet, web browser and email client perfectly well, and hardware accelerators are doing the rest.

Intel are clinging on to high performance for general purpose computing, and are failing to retain enough of that performance when they cut it down to size (Atom). ARM are in effect saying nuts to high performance and are focusing only on those areas of computing that the majority of people want.

Those of us who do want high performance general purpose computing are likely to be backed into a shrinking niche that is more and more separated from mainstream computing. The high performance embedded world has been there for years - very slow updates to Freescale's PowerPC line, Intel's chips not really being contenders until quite recently, and even then only by luck rather than judgement on Intel's part. It could be that the likes of nVidia and ATI become the only source of high speed maths grunt, but GPUs are currently quite limited in the sorts of large scale maths applications that work well on them and aren't pleasant or simple to exploit to their maximum potential. Who knows what the supercomputer people are going to do in the future.

1
0
bazza
Silver badge

Yes, but not quite

That's true if you ignore the efficiency of the instruction set, and hence the number of clock cycles needed to perform a given task. x86 is terrible - not its fault, it's ancient and of its time 30 years ago, and Intel have worked wonders keeping it going this long. But the ARM instruction set is much more efficient (it is RISC after all), so clock for clock, transistor for transistor, ARM will normally outperform x86. Intel might have some advantage in floating point performance, but with codecs being run on GPUs / dedicated hardware, who really does much floating point calculation these days?

You can see some of the effects of x86 from the performance Intel have managed to extract from Atom. That is, not very much. And all for more power and less throughput than ARMs of a similar clock rate are achieving.

1
0

IBM preps Power7+ server chip rev

bazza
Silver badge

That's not how IBM operate

You're still not getting the point. IBM don't really sell POWER chips. They don't really sell computers either.

What IBM do sell is apparently quite expensive business services (which do include some hardware), and looking at their profitability you'd have to say that they're clearly value for money. IBM's silicon needs merely reflect their need to keep selling business services. If they can do that with what some might argue are old fashioned, out of date silicon processes and chip designs, then IBM will be quite happy with that. Indeed, busting a gut to build a faster chip that doesn't help sell more business services would be commercially idiotic.

Developing their own POWER chips does allow IBM to tailor the silicon to the needs of the business services that they sell. For example, ever wondered why the POWER chips have a decimal maths FPU as well as a standard FPU? Why would IBM go to all that effort when no one else ever has?

It's because for banking / financial applications the standard double precision FPUs on Intel/AMD chips are not accurate enough, so you have to do the maths a different way. For example, currency conversions on $$$billions need to come out to the last snippet of a cent, and double precision binary floating point maths doesn't get you that.

On an Intel chip you have to do that decimal maths in software (a bit like the bad old days of having an 8086 without the 8087...). It's slow and time consuming. But a POWER processor does the decimal maths for you, so it ends up being much quicker than an Intel x64. Which means for certain banking applications IBM can offer a service that's much quicker / cheaper / power efficient than someone offering a solution based on Intel processors. And so far that 'niche' market is big enough for IBM to make very impressive profits indeed.
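
To make the rounding point concrete (this has nothing to do with IBM's actual decimal unit; it's just the standard demonstration of why binary doubles and money don't mix, plus the usual integer-cents workaround when there's no decimal hardware):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Binary doubles can't represent most decimal fractions exactly. */
    printf("0.10 stored as a double: %.20f\n", 0.10);
    printf("0.1 + 0.2 == 0.3?  %s\n", (0.1 + 0.2 == 0.3) ? "yes" : "no");  /* "no" */

    /* The common software workaround: keep money in integer minor units
       (cents) and only format at the edges. */
    int64_t price_cents = 1999;                 /* $19.99 */
    int64_t total_cents = price_cents * 3;
    printf("3 x $19.99 = $%lld.%02lld\n",
           (long long)(total_cents / 100), (long long)(total_cents % 100));
    return 0;
}

Doing it properly in software at scale means a decimal arithmetic library, which is exactly the overhead the hardware decimal FPU removes.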

Basically there is a level of sophistication to IBM's business model that escapes most people's attention, and is completely different to Intel's. You just can't read anything of consequence into comparisons of IBM and Intel hardware. You can compare IBM's and HP's business service offerings, and I'd say that HP (who just happen to base theirs on x64/Itanium) don't appear to be as good. If IBM ever decide that they're better off with Intel or anything else, they'd drop POWER quicker than you can blink.

So that's IBM's cool headed, commercially realistic side. Then they go crazy and do things like the IBM PC (a long time ago, but still crazy considering what IBM's core business was), or the CELL processor. The CELL processor in particular promised the world tremendous compute power, got used in the PS3 and had the high performance embedded market buzzing with anticipation.

And then they dropped it just like that, because it turned out that most of the rest of the world was taking far too long to learn how to use what is unarguably the most complicated and 'different' chip that anyone has produced in recent years, so they weren't selling enough to make it worth their while. Grrrrrr! Pity in a way - I think that if they had persisted then they would have cleaned up eventually, because there are aspects of the CELL which are far superior to the GPUs that have started filling the void.

IBM are a tremendous technology company, but don't often give the niche markets the things that they could. That's capitalism for you!

0
0
bazza
Silver badge

@Steve Button

Second all that.

Plus not to forget the success of PowerPC in the almost invisible embedded market, where Freescale have been earning good money in the telecommunications sector with their PowerQUICCs. And then there is the good old 8641D which, despite running at a miserable 1.3GHz, is still quicker at some sorts of useful maths than Intel chips.

0
0
bazza
Silver badge

And still going off topic even further...

I hear from distant contacts in the games industry that they're just beginning to get to grips with the PS3's CELL processor (for it is a mightily complicated beast) and discovering with joy the raw power that's available to them. I fear that Sony might just ditch the CELL processor, and that would be a great pity.

The problem they've had is that to program a CELL well you need a background in high speed real time multi-processing applications. The systems that get used for building modern military radars and the like are architecturally very similar to the CELL processor, and indeed the CELL processor has found some uses in those fields. The problem has been that not many people in the games industry have come from those highly specialised application areas, so the games industry has had to re-learn some of their tricks.

0
0

New top-secret stealth choppers used on bin Laden raid

bazza
Silver badge
Coat

Do their helicopters...

...have comfy chairs?

0
0

Intel PC hegemony facing ARMy attack

bazza
Silver badge
Welcome

Fun!

Certainly very interesting times. Just imagine the inquiry inside Intel should ARM succeed in snatching a large and damaging market share: just how did a pokey little design house from somewhere flat, cold and wet, without even a small fab to their name, manage to outmanoeuvre the mighty Intel? I would be very interested to know whether ARM's design team staff count is larger or smaller than Intel's.

If this does indeed come to pass, ARM will definitely have been a 'slow burner'. It's taken 20ish years to get this far, not exactly the fastest growth curve we've ever seen.

x86 has had a very impressively long run so far, but the fantastic growth of mobile and datacentre applications has really underlined the penalty of the x86 architecture: power consumption. Intel are trying to keep up with clever silicon manufacturing processes, but you can't escape the fact that an ARM chip implemented on the same processes is smaller, cheaper and lower power. They once had an ARM licence (StrongARM / XScale) but disposed of it, and haven't managed to compete since. Big mistake?

Intel could win if they bought ARM and wiped them out or renamed the ARM instruction set as x86-lite. I'm amazed that they haven't tried to do so as yet. It would raise the mother of all Competition Commission / SEC antitrust inquiries, and I don't think that Intel would win that one.

2
0

Mozilla refuses US request to ban Firefox add-on

bazza
Silver badge
Pint

@farizzle

Cuba? Thought that they spoke Spanish there...

Obviously it has to be Britain. Apart from the climate. And the US extraditability. But for those two inadequacies (anyway made up for by the beer and cheese), it's the perfect place.

3
4

ARM jingling with cash as its chips get everywhere

bazza
Silver badge

long term

Chances are that ARM will be earning those pennies on those designs for a very long time indeed. Intel won't be earning much on 4 year old chips... ARM are clearly in it for a long time to come, and those pennies will keep adding up.

1
0

Apple component lock-ups jump 40%

bazza
Silver badge

@RichyS

The end result is the same as if they did buy the parts and stock them in a warehouse.

It's interesting that not even Apple can rely on the market lifetime of components, and have to make this sort of up-front commitment to secure a supply. Nintendo ran into the same problem with the Wii in the early days. Because the sales were way in excess of predictions they wanted to expand production, but couldn't because the component manufacturers themselves couldn't keep up.

Component obsolescence is an increasingly major issue for everyone, and I know some small manufacturers who now routinely make lifetime buys of everything needed to make a product (maybe not the passives like resistors, etc). For small-ish production runs it's simpler to have bonded stores of every unique component, no matter how unlikely it is to go out of production. Warehouse space can be a lot cheaper than a redesign to deal with obsolescence. It's just-in-time delivery, but with a coarser timescale. It also gives you some control over when your redesign for obsolescence takes place, rather than it being sprung on you as a surprise just because some chip suddenly becomes unavailable.

Of course, Apple are big enough that they can probably get any component that's ever been sold remanufactured. But it wouldn't be as cheap.

1
0

iPhones secretly track 'scary amount' of your movements

bazza
Silver badge

A couple of different view points

I have a couple of slightly different view points:

1) Apple must surely know that people might not want their location at all times to be logged. Sure, there may be a benefit (better battery life, smaller mobile data bill or whatever) for users with the phone doing this. But from a PR point of view surely it would be better to tell the users what's going on under the hood, maybe having an option to stop it, etc.

2) With Apple having servers that dish up the information on request in the first place, there is an interesting consequence for the network operators. The networks are traditionally shy about the exact locations of all their cell stations. A network armed with the locations of a rival's cell stations can work out all sorts of things about their rival's network capacity, operating overhead, etc. etc. That counts as priceless commercial information, allowing them to accurately undercut the rival.

So what's to stop Vodafone (for example) buying O2 iPhones and using them to get a complete map of O2's cell network, and thereby deriving performance information for the whole thing? Or have the network operators accepted that their competitors know everything about their networks' costs and performance?

And we do need a popcorn icon.

2
0
