Does one of the biggest-ever revolutions in software, open source, contain the seeds of its own decay and destruction? Poul-Henning Kamp, a noted FreeBSD developer and creator of the Varnish web-server cache, wrote this year that the open-source world's bazaar development model - described in Eric Raymond's book The Cathedral …
20 years on from Tanenbaum's promotion of the microkernel as the new black, I am not aware of any such OS that has made it out of academia. When's the last time anyone heard anything about the GNU Hurd? Everything that isn't Windows is UNIX derived. Meanwhile Linux is taking over the world, apart from the desktop, which is rapidly going out of fashion.
Re: Hurding Cats
How about the Mach kernel? It's the basis for OS X's kernel (often mistakenly attributed to FreeBSD).
Re: Hurding Cats
"Everything that isn't Windows is UNIX derived."
Are you for real?
Next (AKA Mac OS, let's face it)
And any number of specialist real-time OSes
Re: Hurding Cats
MacOS is going nowhere.
PhoneOS is being marginalized.
The academically objectionable approach is still doing very well, both in terms of pure performance and its ability to drive sales. Linux continues to thrive in the server room, on mobile devices and in embedded applications.
The main problem with a Mach kernel running on a Mac is not the kernel itself but the fact that you've got very narrow limitations when it comes to that hardware and to what kind of system design trade-offs you can make.
You are better off running MacOS in a VM on a cheaper and much more powerful Linux machine.
Title and article
I didn't find the article made the argument that the title seemed to suggest it would. The title is striking and forceful and then the article just rambles on about vaguely related stuff.
Q: "Does one of the biggest-ever revolutions in software, open source, contain the seeds of its own decay and destruction?"
The article runs counter to nearly all the evidence of the last 20 years. Linux has been a phenomenal success and continues to thrive with almost explosive vigour, and that pattern seems likely to continue for the foreseeable future.
"By the end of the 1980s, things were looking bad for Unix. AT&T's former skunkworks project had metastasised into dozens of competing products from all the major computer manufacturers..."
- And how is that "looking bad"? In 1989 Unix was all over the datacentre like a cheap suit, also dominating the engineering, scientific and financial desktops, as well as the lower mid-range market subsequently taken over by NT. Unix was obscenely healthy in those days.
"Microsoft hired DEC's genius coder Dave Cutler and ...the result was Windows NT ...enough time to get the new kernel working ...today it runs on about 90 per cent of all desktop computers."
And that's the kernel that has hamstrung Windows ever since. MS was so desperate to get NT out of the door that they made the fateful decision not to implement proper protected memory spaces and execution levels. The system was prey to every user, process and virus. And every version of Windows since has carried this fatal gene. Cutler must have been grinding his teeth. Had the decision gone the other way, our world would be quite different.
"But today's Unix descendants are large, complex graphical beasts, and so are their apps. Any significant modern application is a big project..."
Obviously Unix apps are graphical. They always were. The OS is not graphical. You might run a file manager, but underneath it is still all pipes and everything is a file.
Good article though.
The problem with autoconf...
...is that it works.
It takes a really keen person to rewrite something as important and as complicated (assuming you care about "legacy" which linux normally does [thank god]) as autoconf when, at the end of the project, no-one's going to care.
Re: The problem with autoconf...
Except that autoconf doesn't really work in a frightening number of cases. Often, if a library is new, the programmer who set up autoconf won't have known to add a check for it, causing a compile failure rather than the helpful error message autoconf was designed to give. The other common mistake is for autoconf to be set up to report on which libraries exist and then not to handle any of the possible results.
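For illustration, here is a hedged sketch of the kind of check being described. `libfoo` and `foo_init` are made-up names; the point is that an explicit AC_CHECK_LIB with an action-if-not-found turns a later cryptic compile/link failure into a clear message at configure time.

```m4
# Hypothetical configure.ac fragment -- library and function names are
# invented for the example.
AC_INIT([myapp], [1.0])
AC_PROG_CC

# Without a check like this, a missing libfoo only shows up later as a
# cryptic compile or link error instead of a clear diagnostic.
AC_CHECK_LIB([foo], [foo_init], [],
             [AC_MSG_ERROR([libfoo not found; please install it first])])

AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```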
I would say that most of the time autoconf does nothing except run a ridiculous number of tests, and everyone just assumes it is working because it spends a lot of time doing things and displaying cryptic output. That's why I end up just using shell scripts for my projects.
Re: The problem with autoconf...
On balance, it seems to me that most of the accusations levelled at autoconf here are more to do with how it's used than with the software itself. It is a fairly horrendous bit of software in its own right, thanks to its steep learning curve, and I've been hit a few times by some of its idiosyncrasies (incompatible versions, missing m4 macros and the way it sometimes runs differently depending on whether you run 'sh ./configure' or './configure', mainly), but on the whole, if you've got a project beyond a certain size and you care about portability, I think it's usually a no-brainer: use autoconf.
As I already said, the problem is often more to do with how the software is used. It's not a magic bullet that will automatically make your program portable. You still have to do all the work in your source code to account for all the different flavours of *nix or whatever: whether they have certain library functions available (and in which version), what system include files are needed, and sometimes lower-level concerns like the machine's endianness, word sizes, data alignment characteristics, and especially the right architecture- (or compiler-) specific flags to use.

The other thing about autoconf, besides being an aid to portability, is that modern software generally has a multitude of dependencies, and without something like autoconf (and supporting tools/standards like pkg-config) any homebrew configure/make system is apt to get very complex very fast; worse, such systems are (relatively speaking) very difficult to maintain and often not very portable in themselves. Most build problems (besides problems with dependencies) tend to come from the author not writing portable code in the first place, or simply not knowing about the foibles of your particular system or toolchain. Again, that's not autoconf's fault, but it is what autoconf is designed to help the coder with.
"and that's why I end up just using shell scripts for my projects."
There's nothing wrong with rolling your own configure/build system, but for the end user (ie, the person building the system), I think that familiarity with the autoconf system usually makes it easier to handle cases where things go wrong for some reason. Once you've compiled a few dozen apps it becomes pretty easy to figure out where the build is going wrong and how to fix it. Maybe that's just my personal preference, though.
Can't rate this article.
A bit like a lot of other articles. No conspiracy here, but nice one trying to make out that there might be.
Wasn't article rating removed wholescale in the site redesign a few months back?
I understand why: not being able to separate ratings based on what people thought of the actual article from ratings based on what people think of the topic would have made the system useless.
Never heard of it.
BSD Dropped The Attribution Clause A Long Time Ago
BSD dropped the attribution clause a long time ago, partly because they found shady operators were twisting it into an endorsement of their products, which used pieces of BSD code.
Under BSD you cannot claim the code was all your own (unless it really was). And you cannot remove copyright notices. But you do not have to go out of your way to tell everyone that you used BSD code.
"Linux itself hasn't split is the forceful, charismatic leadership of Linus Torvalds"
Surely the bigger the headline the more easy it is to spot a cut-n-paste howler?
Linus doesn't scale?
"Linus" as a project management methodology does not *have* to scale.
The principle (and it is both ancient and not particularly related to software design) is to maintain a single coherent vision of what the project is supposed to be. You do that by having a small group who do that and then organise the rest of the work-force to be delegated to so that the architect(s) can spend time maintaining conceptual coherence. (Brooks had a whole chapter on this, IIRC.)
Of course, finding people to play the roles is tricky. The hard part is when the architect needs to say "That's shit." (or words to that effect) rather than "Are you sure about that?". At that point, the underling needs to have sufficient respect for the architect that they don't kick back. Linus seems to manage this. Bill Gates was supposed to command similar respect but I haven't heard similar remarks about his successors.
Closed source software can be as rubbish.
I don't think the argument holds water. Show me a bad open source program and I'll show you an equally bad closed source one. Fact is - if it's open source at the very least there's the chance that someone smarter than you can correct your mess.
"but - unfortunately - Linus doesn't scale. Very few projects get to have a Torvalds-like leader."
Yup - this holds true of all software, not just open source.
The cause of bad software is usually bad developers. On closed source projects those developers are hired and will keep developing it, often without improving. As long as it sells - and it will sell, as there's a large pool of stupid customers - it'll be developed by those people. If a good new developer joins such a company he'll wear himself out dealing with idiots and quit.
On bad open source projects, two things can happen:
First, the developers lose interest; since nobody wants to deal with that piece of crap, the project simply dies.
Second, a good new developer comes along and can either improve the quality of the programmers and the software, by rewriting code and mentoring developers, or make a fork.
So bad open source software has a much smaller chance of staying bad. It either dies or gets better.
Festering hacks, endlessly copied and pasted...
Not a great article, but El Reg journalism isn't *that* bad.
What is this all about?
Windows (any version): a pile of festering hacks that you can't see.
Linux (any version): a pile of festering hacks you can see.
In the former, you can only find out about the problem after the fact. In the latter, you can do some due diligence (or pay someone else to do it) before the fact (maybe even knock up a few test cases; whatever). Which one is better?
Oh, and most Linux devs are professionals who draw a salary.
As to GPL "infection"... if code has a license you don't like, don't use that code; write it yourself! Who are you (or I, or anyone) to tell an author what license they should use? You could, of course, ask the author how much a dual-license deal will cost you. Y'know... pay them.
People who moan about the GPL are fools who want to have their cake and eat it. Correction: they want to have your cake and eat it. Then demand you do the washing up.
Calling libraries written in other languages
Now there's a thought... hmm that would be DEC's VMS - you could make library calls from ANY language supported by the operating system - and the architect?.. Mr Dave Cutler, genius...
Proper operating system, proper clustering, properly scalable, totally reliable, who needs the BSOD?
There is also another approach to solving the library problem: the Unix way of using text as an interface. In fact, in Plan 9 everything is in the file system. So if you want to open a socket, you write into a file. The same goes for opening windows. In fact, your software can even easily provide file-system-based interfaces. There's an IRC client, for instance, which lets you write into a file to connect to a server. This causes a directory to appear representing the connection. From then on you can open channels, all just by writing into files. It doesn't matter what programming language you use; it just works.
FreeBSD developers need a reality check
They are beginning to sound like the US Republican Party - when facts don't match their own delusion, they invent an alternate bubble of their own, and blame factual reality on a widespread conspiracy.
About the migration from GCC to Clang/LLVM: quit yammering about it, and about how cool and amazing it is, because it hasn't happened yet. They've been working on it for two years. Maybe they should get it working first, and then brag about it.
Linux is dying. Really? Sez none other than FreeBSD?
Here's a few links - chosen at random - about FreeBSD's worldwide market share and usage:
I haven't yet seen, or heard of, a single mobile device, of any kind, running FreeBSD.
And I'm not even a Linux die-hard. I just like facts.
Put down the pipe, guys.
I don't see coherence in your arguments. You bring up the viral nature of GPL code, and I don't see that as a bad thing. But that fact doesn't explain the open/closed development model. There are plenty of closed-development projects that (regardless of the breadth of platforms they support) are actually *targeted* at Linux users. There is nothing in the GPL that forces an organisation of developers to accept code from outside parties. In fact, developers of GPL code regularly come in for criticism because of their closed development practices. That's not because of the GPL license either, though.
On the non-scalability of Linus: I'd still rather have him dropping slightly too many patches than the opposite situation, where almost every patch is accepted. But you seem to be going against the main grain of your argument in bringing that up anyway.
The main difference between development of core FreeBSD and core Linux is nothing to do with the licenses. Both are relatively low-level systems and code for each of them that is brought in from outside the main development community is BSD-licensed or GPL-licensed respectively. I do appreciate the closed-house approach that FreeBSD advocates, but differences between that and the barely-contained-whirlwind approach of Linux kernel development are due to historical imperative, *not* the licenses.
Finally, "bizarre" is spelled "bizarre", not "bazaar".
What is this article trying to say, exactly?
None of the criticism voiced here is specific to Linux or the GPL. And, one, alleged, autoconf mess does not a general indictment make.
What I read here instead is a broad indictment of open source. Not entirely unwarranted in some cases, but way too broad and not argued well.
Can open source programs be a mess? Yes. So can closed source programs. The first step in doing anything with an open source anything is 1. check when was the last time the program was updated. 2. check the open bugs. 3. if you are a dev and planning to use the libraries, take a look at the code.
I know step 3 got me to junk a once-favored Python alternative to Django - code was an incredibly ugly mess of nested IFs that would discredit any programmer. Not clever - I have a hard time grokking Django's internals because it is too clever for me - just ugly.
None of the 3 vetting criteria above can be applied as efficiently with closed source, since even bug counts are generally kept under wrap.
Second, there could be a case made that the BSD family of Unixes is kept on a tighter leash than Linux. But this is more due to the smaller teams and reluctance to change things much than to a GPL vs. BSD license argument. Stability over features and innovation. That's a different question, and not what the article covers.
Third, can open source programs be less than innovative? Yes, many are. So are most commercial programs. Can they be useless forks or vanity projects? Yes, and it behooves you to estimate long term viability before coupling your code or business processes to an open source project.
Last, spot the reasoning:
a) autoconf is a mess and uses GPL
b) Linux uses GPL
c) Therefore Linux is a mess
I prefer BSD over GPL in general, but I find this character assassination less than convincing. And, microkernel vs. monolithic has, again, little to do with the GPL. It's not like microkernels are broadly used in any license family.
Maybe I'm getting old and senile (don't answer that!), but can somebody explain exactly what is being said here.
I suppose ...
... if I had just tried to use autoconf and failed on a BSD or OSX system, I'd be ranting about those O/Ss.
If one of the issues is rogue coding, wouldn't the OSS community do themselves a favor by being better teachers in this regard?
Anyone with basic ambition can learn to code in a number of languages, but I have yet to find a quality document or tutorial explaining elegant coding principles beyond the basics.
The Art of Computer Programming
by Donald Knuth, thank you very much for asking.
Oh boy, this is a fun article to review. To save anyone the trouble of trying to understand this nonsense, I'm going to graciously provide a much needed synopsis.
First, the author complains that Linux is being killed (which of course it isn't) by the GPL, because it is a collection of copy-and-pastes. The author attempts to justify this statement with a series of copy-and-pastes, as follows:
1- Kamp, a BSD developer, once complained that Linux is a pile of festering hacks, copied and pasted. No further examples of actual problems are provided. Let's keep looking to see if the author can actually make a case...
2- After a brief history of Unix, Windows, Apple and Linux, the author complains about how BSD often has its parts taken and used in various projects with little kudos. This is of course not a comment on Linux. But a comment on how this article is a series of unrelated hacks strung together in an incoherent manner.
3- The author then quotes Ballmer's irritation at the fact that if you copy-and-paste code from a GPL program, you then have to open the code of your own program. Let's stop and congratulate the author for directly addressing his thesis. Well, OK, he didn't. But he did use the phrase "copy and paste", which was in his thesis. Kudos on kinda talking about something that was in your thesis, author.
4- The author next conquers the subject of forking. And no, this hasn't happened to Linux, but forking sure is bad! I have no idea why he thinks this. You won't get an explanation of how it harms Linux or any other project. But he does admit, "Well, OK, sometimes forked projects even merge." So what? Don't read the article if you want an answer to that. You won't find it. Interjected into this portion of the article, he complains that it's only personality that holds Linux together. Then he ignores the fact that it is corporate support that is keeping it together.
5- The article then meanders into the fact that Linux is a collection of C programs. Or maybe one day it will merge with other languages. The author goes on to suggest maybe Java would be a good choice.
6- And the best quote of them all ... someone in 1992 proclaimed that Linux is already obsolete. Yup. That's his summary to bring together all of the above points.
So, what the hell did any of that have to do with the idea that Linux is dying because it includes copy-and-pasted hacks? NOTHING! But what a joyfully insane rambling piece of writing. I have been on roller-coasters with fewer crazy twists and turns. At least the roller-coasters ended up back where they started.
Thanks, this makes me wonder...
How does one get to be a paid Reg-Author? It seems like a job even I could do.
Re: Thanks, this makes me wonder...
Post controversial crap that generates lots of clicks. The job's yours...
Re: Thanks, this makes me wonder...
"Post controversial crap that generates lots of clicks."
The job IS yours !!
Go to the source article...
Start @ http://queue.acm.org/detail.cfm?id=2349257
The point the original article is trying to make is much more limited in scope.
1. FreeBSD takes a huge amount of time to compile.
2. That's because there are a lot of Ports (think apt-get or rpm) pointing to LOTS of programs
3. The programs have horrendous package dependencies.
Example: Firefox requiring, somewhere upstream, a TIFF package, either directly or through its dependencies, even though FF does not do TIFF.
Or a package requiring both PERL and Python directly (WTF???).
4. Supposedly, autoconf makes a hash of what it has to deal with in 2. and 3. The author therefore laments that the kids these days don't know how to code.
Personally, regardless of the very ugly plumbing and cruft, which I am sure the original poster is much better qualified to comment on, I am rather impressed that I can go to an Apple command line and run MacPorts to install & compile a program automatically, including its dependencies.
Or that the various sudo apt-get flavors on Linux manage the same feat on essentially the same program source code.
When you think about it, that IS pretty impressive and a huge achievement of open source. Or are we supposed to pine for the heydays of 1990s Unix fragmentation???
Even though I can't disagree with the OP that there are a lot of cruft and hacks involved. And I am sure there are many incompetent coders distributed amongst all the FOSS licenses and proprietary stacks.
IN RE OBSOLESCENCE: TANENBAUM WAS OBSOLETE LONG BEFORE 1992. TANENBAUM WAS JUST PLAIN WRONG.
This article is flamebait, you wankers!
I don't see what is wrong with hacky code or copy and pasted code really as long as you follow the license of the code that you are using.
There are lots of people with good/useful ideas but fewer people who can write code that won't offend the most anal FreeBSD developer (basing an article about Linux dying on a FreeBSD developer's comments is just laughable...), and I guess a lot of the time the people with the ideas and the people who can develop "perfect code" aren't the same people. So a person with ideas and no coding skills will probably employ a cheap worker to implement the idea, resulting in a lump of hacky code copy-and-pasted from Stack Overflow answers. A person with an idea and limited coding skills will produce something of equal quality, and the FreeBSD developer(TM) will sit on their hands until the "crappy code" ends up in ports, because although it is a mess it is something users have found useful.

If anyone cares enough they can fix the issues or rewrite the code from scratch. If you have the time to write a blog post/news article about how crap something is, you should be prepared to actually fix the problem yourself. Quite a few times I have found myself working with some library, thinking "this library is a piece of shit" and starting to reimplement it or fix the issue, only to get so far and back out because I hit exactly the same issues the original developer had but which I didn't see as a consumer.
The beautiful cathedral of Unix, ...
"...deservedly famous for its simplicity of design, its economy of features, and its elegance of execution."
What a load of crap. UNIX was a quick & dirty hack job cobbled together for (even at that time) underpowered hardware. There's absolutely no beauty in this mess, and its stupid design features (like 'everything is a file') have made the implementation of many modern features much more painful than necessary, and are still holding it back.
The true beauty was MULTICS, which unlike UNIX was a really advanced OS, and had it not been decided to go for the crap job to make a fast buck, we wouldn't have to sustain the turd that UNIX is.
Re: The beautiful cathedral of Unix, ...
1/10, because at least MULTICS was mentioned.
Re: The beautiful cathedral of Unix, ...
Not sure why others have downvoted you..
For BSD at least there was/is a ton of hacky code. There are some BSD history videos where it's clearly stated, by one of the people involved early on, that BSD's much-lauded TCP/IP stack was a massive hack, and only done because AT&T, or whoever was actually contracted to do the TCP/IP stack, took too long about it.
So what's the difference?
The fundamental flaw in this hatchet job is the assumption that closed OSes are perfectly designed & constructed, when all the evidence is that there is no difference: they are all patch jobs. The difference with open source is that everyone can see it.
Baseless article. Do you work for Microsoft?
Let me join the chorus
The article is muddled. It makes major claims about the future of FOSS and mostly talks about the Linux kernel, a single FOSS project. The article further mixes in issues like monolithic vs micro- kernels.
I also want to note that while it is true that forks are fairly common, successful forks are not. Forking tends to be an unstable equilibrium: either the fork will fail or the original project will disappear following the fork. While there are examples of a fork and the original project both going on to be successful, this occurs in only a minority of cases.
What is a bazaar development model?
At first I thought it was an attempt at wittiness, but even if one were to liken the open source world to a bazaar, the comparison falls apart quickly. This was the first of many deficient points in the article. While the topic is one that should support many interesting discussions, this article fails to follow through and deliver. The Register disappoints again.
This is nothing new in the world. The great US of A is founded on a similar principle: the Republic can elect representatives who give away everything and require a subsection of the population to pay for it. Sorta like they did by re-electing the Socialist AssHat Obama.
Guess what kiddos, when you give people freedom you have to accept that they might be incredibly stupid with it. See the previous paragraph.
I guess I could sum up my entire post in 1 word: DUH!
You'll figure some of this stuff out in 15 or 20 years... until then you'll be a junior whatever. ;)
I read the article twice, trying to discover the point that the author was trying to make.
My conclusion was that there was none.
This is not up to the usual standard of The Register.
Essentially this: somewhere along the line, the Unix world went from elegance to, in some ways, "an embarrassing mess" (eg: autoconf, you either love it or hate it). Are licences, such as the GPL, encouraging the confusion or not? Liam discusses.
> Liam discusses.
I'm not entirely sure that's the correct verb, TBH...