Intel cozying up to Google Chrome OS

It's official: Intel is working with Google on the development of the Mountain View ad broker's new netbook operating system, Google Chrome OS. Word of the world's largest processor manufacturer's involvement with the world's largest internet searcher's purportedly virus-free OS first came by way of a comment by an Asia-Pacific …

COMMENTS

This topic is closed for new posts.

Intel Is Scared

Otellini has a big problem on his hands. He can sense that the computer industry is ripe for a seismic change. The parallel programming crisis threatens to unleash a frenzy of innovation that may render Intel's processors obsolete. So Otellini figures that the best way to leverage Intel's heavy investment in last century's technology is to support as many OSes as possible in order to lock in as many customers as possible. If a major change happens, that should give him some breathing room.

The problem with operating systems, however, is that they will all become obsolete in a few years. This includes all the dinosaurs from the 20th century: Windows, Unix, Linux, MacOS, etc. And, let's not forget the processors they run on. They will all join the buggy whip and the slide rule into the pile of abandoned artifacts. Why? Because the coming solution to the parallel programming crisis will not suffer a bunch of primitive and inferior technologies to survive.

So Google's Chrome OS is yet another Linux OS? Please, don’t make me laugh. Linux is a mummy, a decrepit museum piece from a soon to be forgotten age. Eric Schmidt is clearly delusional. Google’s mountain of cash is not enough to guarantee success in this cutthroat business. Chrome OS is doomed before it is even born. Heck, Google’s own future is precarious because the computer industry is at a dangerous crossroad. A wrong turn may turn out to be very painful if not fatal. My advice is: Y’all should think carefully before deciding on which way to proceed.

How to Solve the Parallel Programming Crisis:

http://rebelscience.blogspot.com/2008/07/how-to-solve-parallel-programming.html


An abundance of market share???

Maybe Otellini is playing a wild card here, but Google's liquidity could be playing a small part in Intel's strategy. Google cooperation could mean Microsoft subservience, if such a thing can even be imagined, but I doubt it. Mobile is the key, and instant on ;-)

Nothing to see here, just Intel collecting some OS goodies - concessions? - that AMD will have to emulate.

Not a fan of any of their houses /:


Chrome Linux?

I think they will make a Chrome OS brand of Linux - it will get them in the market in a hurry.


Re: Intel Is Scared

Ah... it's nice to see Louis 'Crackpot' Savain in our little corner of teh intertubes...


It's about Innovation

I think Intel has completed its re-visioning from a widget maker to a fount of progress. Good on them.


Intel are scared

Intel have reason to be scared, but not because of any imminent parallel programming crisis in the mass market computer industry.

Intel have reason to be scared because they've lost the plot. That Itanium aberration is costing them (and/or HP) a fortune.

Meanwhile at the other end of the market, the volume end, Intel sold off their ARM folks just before it was starting to get really interesting in that part of the market.

Anyway, if the Wintel duopoly had their way, Windows on x86 would last forever. But anyone who's seen Linux on ARM knows that the Linux/ARM technology is reasonably mature (pretty much as mature as Linux on x86).

The amount of "stuff" you can get on a single high performance low heat dissipation long battery life ARM SoC is simply amazing, especially if you have enough volume to go completely custom. x86 clone assemblers, even in the netbook market, can't come close. That hasn't mattered up till now because the Wintel duopoly have had the market muscle to make sure no one took ARM/Linux very far in the "PC" market - although in the embedded market Linux/ARM is dominant already, all those SoHo router builders and users and reflashers can't be wrong.

Now, combine the Vista flop with the resurrection of XP for netbooks etc, what does that tell us?

It tells us that MS made a mistake and that they couldn't fix it quickly and that they were scared, so scared that they made sure that their "business partners" were aware of the consequences if partners continued to stray from the One True Microsoft Way. And that was the end of the Linux netbook story. Or was it?

Now Google come along throwing their weight behind Linux. The Linux/ARM technology was already there, but Google's cash and Google's marketing machine changes the picture even more. Now, Microsoft can threaten their "partners" all they like, but it won't matter. Google have more than enough money to make their partners an even more attractive deal than Bill is offering. Who now, given Google's funding, will be able to *resist* straying from the One True Microsoft Way?

Imagine a Linux market with only a half dozen serious players. Googloss for the volume market, and some of the heritage Linuxes (debian, RedHat, MontaVista, etc) for the diehards and the datacentres. But all today's tiddlers drop out because suddenly it's even more pointless than it already was. Suddenly lots of contributors, small and large, focus on the few big-name high-impact Linuxes.

Would someone like Novell switch their efforts to Googloss, but continue with a service-oriented business model supporting Googloss inside corporate customers (instead of supporting SuSe)?

Where does it leave Intel, when their chips are totally OTT and therefore totally irrelevant and totally overpriced for 90%+ of the "internet appliance" market?

Interesting times.


For insight into Louis Savain

http://science.blogdig.net/archives/articles/January2008/14/The_return_of_Louis_Savain.html


All Windows applications supported?

In my view the reason that Windows is dominating the Netbook market is because some 'must have' applications only run on Windows.

In my personal case I bought a Linux EEE PC 900 and was very pleased with it, but ended up installing XP because Virgin Mobile Broadband didn't run on Linux.

I ended up with Virgin Mobile Broadband because at £5 a month to an existing Virgin customer it saved me £120 a year against other options.

I could also run Microsoft Autoroute linked to my Garmin eTrex GPS - I haven't yet found a comparable Linux application.

So I don't initially see how a cut down Linux used to run a web browser is going to persuade me to move from Windows XP.

[Unless, of course, it will also run all your native Windows applications faster and cleaner than under Windows.]

Oh, and no idea what your foster parents died of, why, when or whatever. My sympathy, however, it must be tough....


I, But... Bhehehehe...

But Chrome is *awful*. It's like a Teletubbies browser. Plus, it doesn't have a decent ad blocker. Funny that, innit? A stripped-out, dumbed-down Linux kernel running web-based apps through the browser doesn't sound like an experience I'd enjoy.


Louis Savain!

Can we have a Louis Savain icon, please? Something that encapsulates loopiness.


@David Roberts

Wireless broadband dongles seem to be becoming relatively well supported under Linux (I'm just about to buy one). If you want to insert VM's CD and have it run as they expect, then that's a different story.

You're in a minority now with AutoRoute and a GPS. I used to have that combination too, and it was a major factor in me not switching to Linux. More recently I've had two versions of TomTom on two S60 phones and needing AutoRoute is a thing of the past. I've had pretty much every AutoRoute since AutoRoute for DOS was a NextBase product; I've also bought TNT TravelManager and had the Personal Navigator free versions. The last Microsoft AutoRoute I bought was AutoRoute 2001, by which time it had still barely recovered from the nightmare which was AutoRoute 97 (not just my opinion - check any Microsoft AutoRoute review on Amazon etc; even the PC rags were uncomplimentary about 97). On the PC, Google Maps (oops, them again) now does much of what AutoRoute used to do, and does much of it better, and doesn't care what the OS is.

If I did still want AutoRoute proper, or any other Windows-specific application, then I'd have two options: running in a virtual machine such as VMware Player (it's free) or VirtualBox (free for non-commercial use) with Windows as the guest OS in the VM, or, for a much lighter-weight solution, Wine (which works with *some* applications, including *some* versions of AutoRoute).


stop it .. my head hurts

<rant>

Chrome "OS" shouldn't even be news .. who cares? .. I just checked stats on a couple of my websites involving 10,000s of page views, 1000s of unique visitors in the last month .. and not a single visitor is using the Chrome browser that I can see .. about 30% using Firefox though .. someone even checked the site in some odd UNIX browser .. but no Chrome ..

why is anyone *excited* about another version of Linux OS with *cloud* features tying into stupid Google *cloud* apps ? .. stoopid headlines on other sites making this out like it's a Windows killer ..

I swear, if Google said they were gonna make another redundant image format, El Reg, world + dog would have 50 articles about it after the first hour of announcement, heralding the end of Web 1.0 .jpg, .gif, .png and MS/OS2 .bmp as well ..


Is that you, PZ Myers?

Yo, Anonymous,

Is that you, PZ Myers? Come on man, go get some gonads and identify yourself. LOL.


@Louis Savain

Your solution is called CaS - "Compare and Swap" - and it's nothing to do with parallelism; it allows memory pointers to be swapped without blocking the thread, but you still need to serialise access to the CaS routine.

Secondly, your solution has been around for years; in fact we wrote a web server about 8/9 years ago now that did just what you describe, on NetWare's NKS kernel - before they went and ruined it with all the Linux/UNIX fork nonsense. This worked because on NetWare the threads were separate from their execution contexts.

This meant you could permanently run one actual thread per processor, but queue any number of unrelated "thread contexts" separately. When we detected an executing context would cause the thread to block (eg. disk access), to keep the thread running we swapped its current context with another from the run queue, and put that one on the "Blocking" queue.

BTW - you actually need 4 queues, not 2: Running, Queued, Blocking & Suspended, and you use your CaS routine to swap the Running and Queued queues as well as de-queue/en-queue the contexts into the queues themselves.

However, you have still spectacularly missed the problem with multi-threading apps. The issue isn't threads, it's serialising access to shared resources across multiple processors and then managing that in code; the rest is actually pretty easy.

See: http://support.novell.com/techcenter/articles/dnd19991004.html


Louis Refrain

"So Google's Chrome OS is yet another Linux OS? Please, don’t make me laugh. Linux is a mummy, a decrepit museum piece from a soon to be forgotten age. Eric Schmidt is clearly delusional. Google’s mountain of cash is not enough to guarantee success in this cutthroat business."

He's a successful businessman and you're a crackpot. That's all. I'm afraid anyone who continues to post the same cut-and-paste rant about the coming parallel programming crisis, along with references to their equally self-indulgent web site, needs a referral to the men in white coats and little more. Please don't soil this website with your tripe like you have so many others.

Anonymous Coward

"all Mac OS X machines running on the company's processors"

Is someone forgetting Macs that can run OS X were PowerPC until only a few years ago?


@Louis not even close

You might think you matter enough to have people follow you around, but you don't.

I just recognise the stench of a nut-job when one shows up.


Not Linux, please

Hopefully this will be new from the ground up and not some other mangled version of Linux. We've got plenty of those already and they still leave a lot to be desired, ugh.


Google is the ONE company...

Google is the one that can do Linux right; look at the number of servers they support in their data centers. With Windows they would need an entire army to keep the damn things running. Look at the density they have per container: over 1,000 per container.

It is possible to make a secure OS, but users will have to give up the option of installing ANYTHING they want and have to have ALL apps approved, like the iPhone.

I also assume these secure computers will only work right when connected to the Internet in some way.


It's the Fucking Threads, Stupid!

Neil Standbury,

Your rant is worthless. It's the fucking threads, stupid! It has always been the fucking threads. And not just because they are prone to lock and are hard to debug, but also because they are a pain in the ass to program and understand. The solution demands that we get rid of the fucking threads. Read this over and over till it sinks in.

The goal is to make it as easy as possible to produce rock solid software applications as rapidly as possible. It's not about making a bunch of aging baby-boomer geeks feel good about their ugly and hopelessly flawed cryptic code. In other words, it's about money and profit. It's time for the Turing worshippers of the last century to retire. You caused the crisis. You failed, goddamnit!

That being said, it's good to see that you, at least, identify yourself. That's gotta count for something. More than I can say for the anonymous cowards. LOL.


Hidden complexities?

I suppose some of the previous posts highlight some pitfalls, because as most will know, a network-dependent device not only requires hardware and an OS, it also depends on a network.

Having a couple of giants in the form of Google and Intel at least gives some influence in getting the framework in place for MIDs to work, and to work properly.


@louis

You put your arguments forward precisely the way any delusional pseudo-scientist might: avoiding peer review, avoiding having any effect on the scientific community and avoiding open debate.

You argue your superiority to 'experts' through assertion alone, discrediting their work as 'so-called' and flawed without providing any support for any of your ideas.

Perhaps you are a unique genius, the likes of which has never been seen before. A true revolutionary against our broken information society.

I think it's more likely your grandiosity is showing...


This post has been deleted by its author


Linux on ARM + another Louis flame

There are far more Linux-on-ARM systems than there are Linux-on-x86 systems. All those Linux-based phones, of which there are many millions, are running Linux on ARM.

The whole point of Chrome OS is that it hides the whole OS from the user. You don't need to understand grep or bash or anything like that; much like Android or a Motorola Linux phone, all that hairy-hacker stuff is hidden. You just deal with a nice UI.

Perhaps Louis is right: perhaps we need to move away from threads to some functional programming model. But then again, perhaps he was just dropped on the head as a child. Dunno about anyone else, but all my attempts to write a device driver in Haskell failed badly.


You Ain't my Peers

@James Greenhalgh

You know what you can do with your peer review, don't you, Greenhalgh? You can pack it where the sun don't shine. You (the computer academic community) are not my peers. Why would I want you as my peers? You are failures. You turned computer science and programming into a tower of Babel, a big fucking pile of crap. You were wrong about computing from the start. This is something I realized the first day I opened a book on computer programming. Your computing model (the Turing Machine) is crap. You have shot computing in the foot. Big time. The parallel programming crisis is just the chickens coming home to roost, as they say in the USA. With a vengeance, I might add, if only because billions of dollars of rich folk's money are in the balance. I'd be scared if I were you.

Y'all better get ready to face your computing KARMA. The exit is on the left and don't let the door hit your behind on the way out, just in case. LOL.


LOL

I do like the way this Savain guy signs all his rants "Lots of Love". Gives you a nice warm feeling that he's only here for our own good.


@Louis - round two.

Good start... My turn.

1) Your theory on spacetime shows a fundamental misunderstanding of calculus, and furthermore of relativity. Even as someone with very little knowledge of physics I can see the problem with your argument. I'll illustrate by applying your same principle to a one-dimensional space with vector (x): clearly, when you differentiate this with respect to x, you get 1. Ergo, by Louis maths, movement in a straight line is fundamentally flawed. You see, when differentiating by t you are calculating the rate of change of whatever you are talking about with respect to that variable. The rate of change of t with respect to t is clearly going to be one, by first principles of calculus: a small change in something creates an equally small change in itself. Congratulations, a huge victory in stating the obvious, but not a victory against spacetime. In order to measure some velocity in the time direction you would have to parameterise the time vector in your calculations, creating a geodesic equation {ct(k), x(k), y(k), z(k)}; differentiating now with respect to k, you will find a velocity, acceleration and whatever else you want can be quantified. Your problem here was that you misunderstood calculus. Don't worry, it's hard.

b) Your argument that it is possible to create a deterministic way of programming, removing the problem of non-determinism, relies entirely on things being perfect. Your model for computing fails entirely to consider user interactions and the need for interrupts. You argue that hardware can be modelled by a finite state machine, and as such software should be able to be too. But software compensates for the limitations of hardware and solves some of the problems of the non-deterministic world we live in. Consider: by your model, two objects could request a read from different areas on a disk. A third object would receive a signal when the read was complete from object 2, and pass a signal giving the current clock value. Unless your entire system halts while accessing disk, creating a huge bottleneck, you have no possible way of knowing what value will be sent as a signal from object 3.

cat) Your model of programming, by mapping the idea of a neural network into software, is inherently slow. On each clock pulse you must pass messages, analyse messages and send messages, from every active software object. And as the only way to communicate with an object is through signals, and there is no grand operating system to control it, a cycle soon becomes great fun. In a world void of interrupts and controlling software, a simple three-object cycle soon becomes cataclysmic to the operating of your system, exponentially growing the number of signals generated for each swap. While a cycle may be easy to see in a small system, in a grand system you can destroy everything, sometimes only in rare conditions.

4) Your model of programming fails entirely to consider the fundamental part of the Turing Machine: input. Returning to 2), input from a user can occur at any time, while your system is in any state. Lacking any way to handle this input, your model collapses spectacularly. If your way of handling input is to give it priority and handle it at the front of the queue, your system is non-deterministic. If your model is to add it to the front of the next queue, your model is non-deterministic in the time to service. If you add it anywhere else in the next queue, your model is non-deterministic in terms of its own state.
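The calculus point in (1) can be restated compactly. This is a sketch in standard relativity notation; the parameter k is the one from the geodesic form above (e.g. proper time):

```latex
% Differentiating t with respect to itself says nothing about motion:
\frac{dt}{dt} = 1 .
% Instead, parameterise the worldline by k, as in \{ct(k), x(k), y(k), z(k)\},
% and differentiate each coordinate with respect to k:
u^{\mu} = \frac{dx^{\mu}}{dk}
        = \left( c\,\frac{dt}{dk},\; \frac{dx}{dk},\; \frac{dy}{dk},\; \frac{dz}{dk} \right)
% which yields a well-defined velocity (and, differentiating again,
% an acceleration) along the time direction as well.
```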

I'm a sodding first year computer science student and I can see holes in everything you do. My friends and I had a good laugh yesterday reading through your page. Of course, we're all blinded by the establishment, etc, etc, etc. But your ideas are so fundamentally broken as to become insanity.

Seriously, seek academic review. Find out what is wrong with your theories from people with more experience than I have.

Or keep making an arse of yourself online. I mean, I'm really not that bothered...


@louis - put up or shut up

Show us the source code if yours is as good as you claim. Or if you're intending to sell it like Windows, then at least show us the compiled binaries. And if you can't do either, then why should anyone who can't see your claimed emperor's clothing not consider yours in the same light as we do the perpetual motion proposals, apparently intended to separate gullible investors from their money, which crop up on a frequent basis?


To the idiot Savain

You are Tommy Davis, and I claim my five pounds.


It's not Rocket Science

Greenhalgh, you're wasting my time.

copsewood, what I'm proposing is not rocket science. It's plain common sense. If you don't get it, it's because something is wrong with your brain. A baby boomer, are you? You got Alzheimer's, maybe? Oh well. Many people do get it. And some people are indeed working on their own implementations. I've got other things to be busy with. But I do enjoy getting on your nerves every so often.


@Louis - Round 3 - Knockout?

Interesting rebuttal.

So, again I ask the crux of my post.

How do you plan on mapping inherently non-deterministic events into your deterministic system? You have, as I see it, 3 options for controlling input events.

1) Defer them until the next virtual cycle and give them priority; the time to service is then indeterminate, being the number of real cycles needed to complete accessing the input buffer. This input buffer could be in any state.

2) Service them immediately. In which case you are unsure what the current state of the system is when they are serviced.

3) Defer servicing them until the end of the next buffer. In which case you could either overflow the buffer or just drop the request. With no way of determining as a programmer which outcome would occur.

And you similarly have three options with output.

1) Abort each request for a shared resource that occurs while the first is being serviced (disk read/write for example). This prevents a programmer from knowing if his request will be serviced.

2) Defer each request indefinitely, awaiting the correct time in turn for access to the disk. This prevents the programmer from knowing the state of the overall system when the access will complete and adds a layer of asynchronous behaviour.

3) Halt the system until each request is complete - This will slow down your system immensely and add to the issues described for input.

I await, with tepid enthusiasm, your response.


Reactive Sounds a Bell?

Greenhalgh,

Your entire rant is pointless because you are arguing out of ignorance. The COSA software model is reactive and synchronous, which means that nothing happens unless there has been a change. It further means that process timing is exact and that a read always immediately follows a write. Changes are automatically propagated to every component that needs them. It means that contention is a non-issue. See ya around, baby boomer.

Phew! Turing Machine heads and thread monkeys are a dime a dozen. Can't get rid of them. But take heart, Greenhalgh. It's never too late for you to repent of your manifold sins. ahahaha... AHAHAHA... ahahaha...


Ah hah!

I understand now Lou, don't worry. You see, I was confused for a second that you and I might be designing a system which could work in the real world. Seems you don't require this as an outcome. Which is fine, I guess. But you have to make it clear that you are allowed to bend physical rules like access speeds for silicon or hardware. It means I can too.

In fact, it means I can take your system to its logical conclusion and point out a nasty little truth about your entire idea. You see, what you have designed is a series of interconnected components that, when dealt with, provide a change or a signal to other interconnected cells.

Put simply, your cells take some degree of input. Process it. And provide output. If this sounds similar, it might be... Depends how far through that textbook you got.

Furthermore, as a collection of cells, the input is stored in one queue, the output in another. On a switch, input becomes output and vice versa. These lists of cells provide a series of instructions of which cells are to be affected on the next cycle. For the sake of historical correctness, let's store these lists on tape. But I mean, you can store it on whatever you want, including physics-bending infinite-write-speed silicon if you want.

Next, you have at least one processor which deals with the head of the queue: it takes it, performs the calculation needed to do whatever it is the cell needs to do, and provides some change in the output. As this processor deals with the head of the queue, let's be nice and call it, well... Let's call it the Head.

And then we're going to have changed the state of the system, and that will be stored somewhere. Let's call that somewhere the state.

Now we have infinite read and write speed. And, as we have infinitely many processors, we presumably have infinitely many heads. Because, as shown in my argument in a previous post, as soon as you don't have either of these things your system becomes indeterminate in terms of real-world processing time. Although, for this argument it doesn't particularly matter if you didn't. Because by now you must be noticing something happening here...

--The Million Dollar Question--

So Lou, when you have an infinite number of HEADS, working through a list of instructions stored on TAPE, with infinite read and write speed, affecting the STATE, and a TABLE whose instruction is, move one place forward. What do you think you have?

It looks like an incomplete, but otherwise Universal Turing Machine to me.

It also has horrible overheads for real-time processing on anything other than n dedicated processors, where n is the number of *Single Components* in your program. It is why research into a similar thing cannot be used in real robots until hardware provides the ability for millions of dedicated processors, all running a simple vector transformation.

You are not revolutionary. You are not a rebel. You are just misinformed, and so convinced of your own intelligence that criticism is ignored. I'm more than willing to take this up with you on your own site rather than wasting more of the poor Moderatrix's time, but I get the feeling the argument would descend quickly into you calling me an idiot and deleting any further comment of mine. It seems to be your style. Like a child unwilling to accept that their long-held belief in Father Christmas is misguided.

I'll repent. I'll quite happily join the Church of Lou. Just as soon as you deal with the nasty little truths behind your system.


I feel a quote coming on...

"Arguing on the internet is like competing in the special Olympics..."

And I'm done.

(apologies to any Olympians reading these boards, but it's only a simile)
