
Hands up who HASN'T sued Intel over Spectre, Meltdown chip flaws

DCFusor

Re: Fork in the road far back

I noticed the same thing - I've been playing this game since before ICs, and was lucky to have parents score me a PDP-8 early on. My first Xerox 820 (forerunner to the Kaypro) was eye-opening compared to that - and as you say, it did just about everything functional you could want, unless you wanted real-time audio/video, huge math simulations, or just really had to substitute pictures of characters for the one-byte ASCII representations a Z80 could push around quickly. For the average person that's not a huge difference, as most people don't produce audio and video, and there are other ways to consume them. The rest is just slickness.

And size, but so far "big data" seems to be used mainly for some people to control other people and take their money. I don't see a huge advantage in that for the average person.

But a company has to make money selling product, so they have to invent a need to ditch the old one and buy a new one, else they go out of business. Marketing is the root of many evils!

So are mortgages...and empty stomachs, but that's another topic.

I guess I'm trying to say that once certain architectural choices were made, we collectively fell into a sort of sunk-cost fallacy, when perhaps we should have been looking at a new organization of compute and storage.

Look where we are now - we're finally using more cores/threads, and I read here recently about a vendor touting moving compute and storage a lot closer together. If we'd started down that path back then - which I admit did seem hard at the time, since a lot of things aren't trivially parallelizable - we might be at a place that scaled a lot better and didn't have at least this set of problems. We never tried too hard to find new ways to parallelize many problems because we didn't have to - yet. Now it's another story.

I remember a paper back in the late 70's on "contextually addressable segment sequential memory", where lots of little compute/memory chunks could be tied together - it really opened my eyes. In that scheme you had a "right-sized" chunk of storage per CPU, such that the time to fully process it was "reasonable" - say a pass rate of 60 Hz or so - and then you could chain these together forever, even over relatively slow links, and scale to the skies (at least for some types of work).

None of this side-channel timing stuff would be an issue in such an architecture (which of a zillion things would you even time?). The sheer size of a database this enables is staggering compared to the old way of doing things. I'm not advocating going backwards in performance; I'm advocating looking at new ways that can pay off in the future with more performance - the current path, chosen a while back, just kicked the can down the road. We ran out of road.
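(To make the shape of the idea concrete, here's a rough sketch - purely illustrative, all names made up, and nothing like the actual hardware in that paper. It just assumes each node owns a fixed slice of data it can scan in one pass, and the host fans a query out and merges the results.)

# Illustrative sketch only: many small nodes, each owning a "right sized"
# slice it can scan in one pass, queried in parallel and merged by the host.
from concurrent.futures import ThreadPoolExecutor

class SegmentNode:
    """One compute/memory chunk: holds its own slice, answers by scanning it."""
    def __init__(self, records):
        self.records = records          # sized so one full scan fits the pass budget

    def scan(self, predicate):
        # A straight sequential pass over local storage - no shared cache,
        # no cross-node speculation, nothing interesting to time.
        return [r for r in self.records if predicate(r)]

def query(nodes, predicate):
    # Fan the same predicate out to every node; each works only on its own slice.
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        partials = list(pool.map(lambda n: n.scan(predicate), nodes))
    # Merge the partial answers - adding nodes adds capacity, not per-pass time.
    return [r for part in partials for r in part]

if __name__ == "__main__":
    data = list(range(1_000_000))
    slice_size = 100_000                # stand-in for "whatever fits a 60 Hz pass"
    nodes = [SegmentNode(data[i:i + slice_size])
             for i in range(0, len(data), slice_size)]
    hits = query(nodes, lambda r: r % 97 == 0)
    print(len(hits), "matches across", len(nodes), "nodes")

The point isn't the Python, obviously - it's that each box only ever touches its own memory, so the whole "what can I infer from someone else's cache timing" class of problem has nowhere to live.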

Time to build roads. Or airlines, or teleporters. Time to think about how we approach this, rather than just trying to brute force it into smaller nanometers, more transistors and so on. Speed of light isn't going to change anytime soon, I reckon.

And yeah, I'm a dino too - that isn't necessarily a bad thing if one learns from experience.
