
The GPL self-destruct mechanism that is killing Linux

Does one of the biggest-ever revolutions in software, open source, contain the seeds of its own decay and destruction? Poul-Henning Kamp, a noted FreeBSD developer and creator of the Varnish web-server cache, wrote this year that the open-source world's bazaar development model - described in Eric Raymond's book The Cathedral …

COMMENTS

This topic is closed for new posts.



And your point is?

I can't really get a handle on where the article is supposed to lead me.

Linux is running on more machines worldwide than just about anything else. Android smartphones, TomTom navigation devices, various set-top boxes and smart TVs alone outnumber the only operating system ordinary people have ever heard of (Windows). BSDs? I'd be hard pressed (apart from certain parts that made their way into things like the Windows TCP/IP stack decades ago) to name anything that's really come from them. So it's not really unpopular either in in-depth-hardware-geek territory (who most certainly would have heard of BSD, and would use it if they could - because it doesn't require them to expose their own code - yet hardly anyone makes embedded devices that run on it), or in general usage in homebrew projects (Raspberry Pi, various handheld consoles, etc.). I don't get the argument you're trying to present by suggesting it'll all go titsup.

And Linus is preventing Linux forking by being unique - so if it forked, who would do Linus' job in the fork? Either someone would come along (and thus Linus wouldn't be unique), or someone wouldn't (in which case it wouldn't be forked).

I've lost the point there, apart from suggesting that the GPL (*the* most popular open source license) is somehow corrupting. I believe that was its intention, so people couldn't freeload from it for their own commercial purposes and not contribute back (for hobbyist purposes, it has no real hindrance because you only have to offer your code to the people who end up with the end product of your derivative work, which is probably just you).

And quoting Tanenbaum is really the last straw - his own progeny MINIX hasn't been touched in years, barely runs any of the huge amount of code out there today, and is unheard of outside of academia teaching operating systems. He operates in a world of perfect mathematical programs; no real-life OS would ever satisfy those criteria and will always be "obsolete" (and, don't forget, MINIX predates Linux, and Linux is basically the "I can do that better" version of it that Linus wrote - I think he proved his point).

I'm not a massive advocate for the subtleties of open-source, I avoid licensing wars like the plague (seriously, BSD, GPL, or proprietary is all I really care about - even the versions don't bother me much), and I don't care for the personalities and their opinions much. But this article is a rambling mess that somehow tries to sow seeds of doubt about Linux with no, actual, real point to do it with. It's the sort of thing I'd expect on the FSF website, not here.


Re: And your point is?

Well, I know of at least one big TV manufacturer running FreeBSD on its sets, but other than that I agree with your comment. Ask somebody from the BSD camp about the GPL and there's your article ;)


Re: And your point is?

Yes, about that article: I suppose if one's thinking is muddled, the writing is muddled too.


Re: And your point is?

"... you only have to offer your code to the people who end up with the end product of your derivative work, which is probably just you."

As far as my understanding goes, (please correct me or give other opinions if you can), this also extends to in-house corporate use. If a company decides to use Linux, or any GPL software, for purely in-house use as part of its internal operations (e.g. process monitoring/control, networking, e-mail, etc), and they develop clever modifications and add-ons; then that corporation does not have to make their new source code available. In asking their employees to operate the clever machines and systems they have developed, they are not actually 'distributing' the code (as specified in the GPL license), they are simply making tools for employees to use.

There are some people who argue against corporate use of GPL code by saying, " .. if we develop anything useful and clever, we have to give it away to the rest of the world, according to the license." I believe this is not true. They also say, " .. at least with Microsoft, we'll get years of product support." Hahaha.


Re: And your point is?

FreeBSD's choice of Clang vs gcc?

But you'll need a crystal ball to guess, because this article is a mess.

This is an old war, going back to when gcc moved to GPLv3 - I think FreeBSD was still using an older version of gcc.

As a developer I welcome the inclusion of Clang: first, there is more choice, and second, it will benefit both Clang and gcc. However, I still use gcc, since I feel it is (as yet) the better compiler - that's my opinion, and certainly many people in the BSD "world" will disagree.

Now - "Linux is dying"

As a matter of fact Linux is gaining, and in many areas dominating: from embedded systems measured in mere kilobytes to almost 95% of the world's top 500 supercomputers!

Except on the "desktop" - but the paradigm has changed, and nowadays the "desktop" no longer holds the majority of the userland market...

Anonymous Coward

I thought this article was about GPL

...then we go and invoke the old micro vs. monolithic kernel debate in the last paragraph. It leaves the impression of a hatchet job: let's pile up every bit of anti-Linux FUD we can find, throw it into one article and see what happens.

Differing opinions are well and good, but in the end the current state of affairs for Linux is that its use is exploding pretty much everywhere other than the desktop... so the title premise - that something is killing it - is provably false.

On the contrary: in the past few years I have seen a marked move on the part of server software vendors towards closed, appliance VMs based on Linux, and away from self-installed versions of their software for various OSes. Forking and the GPL make that possible.

You can do better than this, Reg.


Re: I thought this article was about GPL

Yes, perhaps - but I can find no reference to Liam Proven as working for the Reg.


Re: And your point is?

Panasonic Viera.

Anonymous Coward

You've talked the point under.

Actually, the BSD family is more of a Geheimtipp - an insider tip - in the industry. So what if it doesn't have the recognition or fanbase; there are a number of people who do know to value the software and make good use of it. In fact, several embedded manufacturers would've gotten into a lot less trouble had they not tried to jump on the Linux bandwagon and then gotten bitten by the so-called "viral" features of the GPL. You know, where they "neglect" to release the source to their distributed modifications and end up getting sued.

Apparently the industry suffers more from whippersnapper engineers not having heard of this cathedral thing than you might think. Which is what the ACM article this builds upon argues - in typical forceful phk style. Or maybe it's just the managers that drive the decisions, and do so mostly on bandwagoneering reasoning.

The BSDs, in contrast, do allow you to simply take the source and do whatever you like with it. You don't even have to mention it these days (the licence got amended, and it's not GNU philosophy that drove opening the source in the first place, by the by). Still and all, there are a number of features that seeped back into the BSDs from commercial non-open-source use. Most notably netgraph, which is fantastically useful if you need that sort of thing; it came from a vendor that used the software to build routers. There are still vendors that do that, Juniper being a rather big-name one.

There's TVs, gambling machines, whatnot else that also run BSD software. That you don't see it advertised doesn't mean it's not true. It just makes it the Geheimtipp alluded to earlier. The Windows networking stack, though, is something they "rewrote from scratch" - IIRC it's been replaced twice now - reintroducing a number of really old DoS vulnerabilities around the time of the Vista previews. Apparently Redmond is not big on regression testing. By the by, the original BSD networking stack was copied from somewhere else, so as not to reinvent the wheel. The extensive work that happened afterwards did make it good enough for others to take and build upon, which happened repeatedly. The GNU crowd, though, tends to leave all that aside because it's not the right licence for them. Or rather, not left enough of one, the BSD licence having been nicknamed "copymiddle".

The Tanenbaum comment isn't as offensive as you make it out to be: technically, as in from a computer-science point of view, Andy has a point, and Linus did leave a bit of an opportunity on the table on the architecture front. Of course, he was but a student, not a professor, and he didn't care. Seeing his massive success, especially when the Hurd didn't reach anywhere near that success, he still doesn't care. And given that the sheer momentum of Linux's success manages to bulldoze over many a crack, well, few others care much either. But that doesn't make Andy wrong: QNX is a nice example of a technically successful microkernel, though its price tag didn't help uptake outside of specialist niche markets much. Then again, if you don't care about subtleties, then indeed you'd best ignore that comment.

And you'd also best ignore MINIX entirely, along with the fact that it's still being worked on. Its development doesn't come quick compared to the massive momentum Linux has, but it does progress, and the thing is still alive. Whether it'll stay with us if Andy perishes, well, who knows? But we don't know that of Linus either, and for such a large project that is a bit bothersome. Much like the jury is still out on whether Apple will survive Jobs' demise, in fact. That was one of the points the article tried to make.


Re: And your point is?

He's a footballer or something?

Anonymous Coward

Re: You've talked the point under.

Regarding the Tanenbaum comment:

The micro vs. monolithic debate is, IMHO, tantamount to RISC vs. CISC - both sides have good points to make; academically one side is viewed as better than the other, while the other has gained dominance in the market (save ARM on smartphones, and MIPS/ARM in the embedded space). The fact that both sides of each debate still exist and, to varying degrees, thrive in the market is a good thing. In both cases the debate will continue for the foreseeable future with, more or less, the same lines being drawn and the same points being made.

Complaining (at least my complaining) that it was raised in this article was not due to the validity of Tanenbaum's argument but rather the inappropriateness of bringing it up as a parting shot in an article about the GPL. I'd have the same objection to an article about a comparison of the ARM vs. MIPS vs. Intel business models if a one-sided swipe of the old RISC vs. CISC debate was thrown into the last paragraph.


@Lee Dowling

The Torvalds vs Tanenbaum argument wasn't resolved to say "Linux is better" but "Linux is quicker and easier to produce". I don't think Tanenbaum ever argued against that, but rather suggested that Linux was quicker and easier to write because it was a hack. Ideologically, I'm on the side of the microkernel, but practically, I use Linux because it's there, and because there's stuff for it. And I use Windows more often than Linux, because there's even more stuff for it.

But with the volume of people working on Linux now, I don't see why there isn't a concerted effort to shrink the kernel. It would save a lot of the "roll your own" work required for installing on non-standard or Frankenstein systems. (And it might help get rid of that persistent laptop backlight problem...)


Re: You've talked the point under.

This is just sour-grapes nonsense. The BSD folks are just mad that it's Linux that became popular and successful. On the one hand, the license on the source just doesn't matter for most people, or even most companies. Most people simply don't have a four-year-old's notion of property (what's mine is yours and what's yours is mine), so the whole drama of BSDL vs GPL is entirely irrelevant to them.

On the other hand, contributors might object at being free labor for IBM or Oracle or Apple or Microsoft.

THIS is why the GPL was created. It wasn't some subversive political agenda on the part of RMS. His contributors were p*ssed over exactly the kind of corporate proprietary assimilation that the BSDL allows for.

You gotta keep the talent happy.

Noisy BSD fans are like the Trench Coat Mafia fantasizing about revenge on the popular crowd.


Re: You've talked the point under.

"In fact, several embedded manufacturers would've gotten into a lot less trouble had they not tried to jump on the linux bandwagon then gotten bitten by the so-called "viral" features of the GPL. You know, where they "neglect" to release the source to their distributed modifications and end up getting sued."

I think they - and you - still don't understand the GPL; otherwise you'd know there are at least two options:

1. If you/they want to distribute the original code and your modifications entirely under the GPL, then yes, you must release the source;

2. If you/they want to distribute your modifications under another licence, you can do so via the exception mechanism offered by the GPL.

So, what's the point here?

Did they not read the licence, or ask whether they could somehow distribute their code under another licence?

Anyway, I absolutely agree with your point about BSD.


Re: I thought this article was about GPL

This is just the old flamewar about whether the GPL or BSD is the one true open-source license. There is just no objective answer. If you want the cathedral rather than the bazaar, get AT&T's Plan 9 (Unix done better) or Inferno (better still) or Microsoft's Midori OS.


Many negative critiques about Unix and Open Source...

Many negative critiques of Unix and Open Source are incoherent and muddled, so the quality of this article isn't too surprising.


BSD folks are just mad at Linux?

Then I propose a de-fork: a merger of BSD and Linux ...

Vic

Re: And your point is?

> it has no real hindrance because you only have to offer your code to the people who

> end up with the end product of your derivative work

This is not true.

[I'm going to quote GPLv2 here, but GPLv3 has similar clauses, albeit with different numbers.]

You have the option to redistribute under section 3(a). This requires a copy of the source to *accompany* every single binary distribution - including downloads and updates. This clause quite clearly means you need only give your source to your downstream recipients (and the licence explicitly states that). Few projects use section 3(a) distribution.

Most projects distribute under Section 3(b). This requires you to make an offer of source - valid for 3 years - to *any third party*. Any. Even those who haven't got your binary.

The third form of distribution is Section 3(c). This is only available to non-commercial distributions, and only where the distributor himself received the code in binary form (e.g. from someone else's 3(b) distribution).

Aside from that quibble, though, I'm with you on your response to the article...

Vic.


Re: And your point is?

If you're looking for devices running BSD, look no further than the PlayStation - CellOS is a FreeBSD branch - and there are others. There's a difference between devices running a streamlined kernel with a few programs made by professionals and a full-blown Linux distro, though. What the latter suffers from is called 'Lego Syndrome': too much code with too many weird dependencies. It's all about new features, more often than not inspired by Microsoft. (KDE 4's "Semantic Desktop Search" is a nice example - it needs its own database server. WTF?)

Regular Linux users don't realize the problem because new dependencies are automatically installed by their package manager. Just one more package - so what? If you're on BSD and have to compile hundreds of MBs of dependent code for a mandatory feature, you might see things differently. You'd ask yourself, like Poul-Henning Kamp did: "Is this really necessary? Can't they just code a few functions themselves instead of relying on all that third-party stuff?" Well... apparently not - and THAT is the point of the article. The more inexperienced programmers reuse code they don't understand, the more unmaintainable it gets.


Re: And your point is?

"I'd be hard pressed (apart from certain parts that made their way into things like the Windows TCP/IP stack decades ago) to name anything that's really come from them."

OS X? Though its kernel is a descendant of NeXT's Mach, it's fair to say the system application layer and command-line tools are still basically FreeBSD.


Re: And your point is?

> BSDs? I'd be hard pressed (apart from certain parts that made their way into things like the Windows TCP/IP

> stack decades ago) to name anything that's really come from them

Mac OS X runs on top of a BSD kernel, albeit a heavily patched one.

Also, FreeNAS devices (i.e. certain QNAP boxes) run a BSD kernel.

> And Linus is preventing Linux forking by being unique - so if it forked, who would do Linus' job in the fork?

Erm, didn't Google fork the Linux kernel for Android? Yeah, they still call it Linux, and they do keep pulling a new version of the kernel when it comes out, but then they heavily patch it to fit their needs. Does that count?


Re: And your point is?

You'd see why the article calls Linux a series of cheap hacks if you read the part saying "... says some dude from FreeBSD". Every couple of years, someone from the bitter BSD groups will come out and bitch about Linux, because Linux went out and did what GNU and BSD were supposed to be (the free/open alternative to Unix). See Theo de Raadt basically spewing the same bile about 5+ years ago. (The GNU people have their own tantrum: they insist on calling Linux "GNU/Linux".)

That said, the flock of C-gulls description isn't that off the mark. I've been using Linux since 1998, and during that time I've seen the silliness of branching and deprecation done real quick for either personal tantrums, pride, or infighting within the dev groups. Anyone remember ALSA, which was the one standard to supersede all other sound systems in Linux? Now there are a zillion "sound systems" still duking it out. Ditto with the XMMS project mentioned in this article. Or mpg123 and mpg321. And now the kernel itself seems to be doing the stupid change dance as well. Anyone using the latest and greatest distro might have noticed that the standard ethernet interface is no longer "eth0" but some weird thing called "p6p1". What does that mean?
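
"p6p1", for what it's worth, is the biosdevname "consistent network device naming" scheme some distros adopted around then: p&lt;slot&gt;p&lt;port&gt;, i.e. the NIC in PCI slot 6, port 1. For anyone who wants eth0 back, a rough sketch (mechanisms vary by distro and release; the MAC address below is a placeholder):

```shell
# Kernel command line (append in your bootloader config): turn the renaming off
biosdevname=0

# Or pin names explicitly with a udev rule, e.g. in
# /etc/udev/rules.d/70-persistent-net.rules:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
```

The udev form has the advantage of surviving hardware changes that would otherwise shuffle the kernel's own eth0/eth1 ordering.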

So Linux and the FOSS community do need to get their act together, but it isn't as bad as the BSDites are painting it.

SFC

Re: And your point is?

Just because you don't know what's running FreeBSD doesn't mean it has a minimal install base. Because you don't have to share the code, people don't have to tell you whether they're using it - and they never have to give anything back.

Off the top of my head: NetApp and EMC Isilon are both based on FreeBSD, as are Kace (which Dell now owns) and JunOS by Juniper.

http://en.wikipedia.org/wiki/List_of_products_based_on_FreeBSD


Re: BSD folks are just mad at Linux?

Except Linux isn't a fork. It's a rewrite...


Re: And your point is?

They started work to re-merge it last year. I believe that 3.5 or 3.6 brings both kernels into sync, at least mostly. If not a complete merge, I know it's planned that Android will be using the mainline kernel at some point in the not-so-distant future.


Re: And your point is?

"Anyone using the latest and greatest distro might have noticed that the standard ethernet interface is no longer "eth0" but some weird thing called "p6p1". What does that mean?"

There seems to be a pointless rush towards obfuscation at the moment. You see the same nonsense with the use of device UUIDs instead of /dev block devices in fstab, and the creation of that over-complicated abortion known as systemd. Why? God knows. The only reason I can think of is that the devs think it's more l337 to create and use stuff that is more cryptic than its predecessor, and frankly because sometimes they just can't seem to tell when something ain't broke.
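
In fairness to the fstab change, UUIDs do solve a real problem: /dev/sdX names depend on probe order, which can shuffle between boots or when a disc is added. Both forms side by side (the UUID below is a placeholder; `blkid` prints the real one):

```shell
# /etc/fstab - classic device-node form (breaks if discs re-order):
/dev/sda1  /  ext4  defaults  0 1

# UUID form - stable across device re-ordering; get the value from `blkid /dev/sda1`:
UUID=0a3407de-afe3-4b6a-8c7a-b64d8e6a52b2  /  ext4  defaults  0 1
```

Whether that justifies the loss of readability is, of course, exactly the argument above.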

Anonymous Coward

Re: And your point is?

"Anyone remember ALSA, which was the one standard to supersede all other sound systems in Linux? Now there are a zillion "sound systems" still duking it out."

I think you'll notice that most of the sound systems "duking it out" on Linux are in fact abstraction libraries over ALSA itself (eSound, PulseAudio, GStreamer, Phonon and so on are just ways of simplifying certain tasks; ALSA still does the heavy lifting). ALSA was meant to replace OSS (which at the time was outdated), not any of those. This it did.


more buy in

Has this guy ever noticed the festering pile of hacks that also masquerades as proprietary software?

Linux and GNU, with their terribly oppressive licenses, managed to get more buy-in more quickly than the BSDs did.


Re: more buy in

While you suggest that the reason GNU/Linux got more buy in is due to the license, the reality is rather more complex and a lot of people who were involved at the time believe it is more to do with the AT&T/USL lawsuit (mentioned in passing in the article) which accused UCB and the CSRG of copying proprietary files from SYS V into BSD. The end effect was several years where people looked elsewhere for their open source needs until the lawsuits were settled, and even then the free BSDs had considerable work to remove the allegedly offending code from their trees and move from BSD 4.3 Net-2 based code to the 4.4-Lite code. e.g. FreeBSD didn't get 2.0 (4.4-Lite based) out until 1994, and it wasn't really stable for a while.

All the time Linux was quietly spreading without all these issues.

A lot of people believe that the balance between the open-source OSes would be different today had it not been for the AT&T/USL suit.


Re: more buy in

Linux was the first x86 Unix that supported my hardware. That's really all there is to it.

You can make all of the excuses you like, but I think the fact that Linux was a populist Unix early on contributed to its popularity quite a bit. At the time, I would have been willing to shell out the coin necessary for Solaris x86 or OpenStep if only either of those had actually supported MY hardware.

On a long trajectory, a very small angular deviation can account for a very big difference where you end up.


Re: more buy in

Linux was the first x86 Unix that supported my hardware. That's really all there is to it.

Never heard of Xenix then? Or SCO Unix, which was quite good back in the day at the "old" SCO even if the "new" SCO trailed its name through the mud? Or even 386BSD or Minix? Unix on x86 hardware goes back the best part of a decade before Linux.


This post has been deleted by a moderator


Re: more buy in

"My hardware" probably entails things such as drivers for a specific NIC and VGA card, not just the CPU architecture.

But thanks for the history lesson, you really showed him, you refined chap.


Re: more buy in

That is nothing short of revisionism not backed up by reality. Try looking at the supported hardware of Linux distributions circa 1995 - i.e. after a few years of Linux development.

Linux didn't support any video cards except as text consoles. XFree86 supported around a dozen, and even then generally as unaccelerated dumb framebuffers. In any case, even then it was not tied to Linux but ran on other x86 operating systems if you so wanted, so the idea that Linux supported any video cards that the others didn't is completely wide of the mark. Quite the reverse, in fact: the X servers of the commercial Unixes had much better hardware support. SCO's X server for OpenServer supported 20-30 different cards, accelerated where supported by the underlying hardware. Linux simply wasn't in the same league.

As for NICs, it was the same story there. You had a choice of about six different cards, all Ethernet, so if you were a Token Ring shop back then you were out of luck. If you wanted 100Mbit Ethernet you had a choice of two cards. OpenServer had support for 30-40 different adapters in the core distribution, and pretty much any card not so supported would come with drivers for it - even low-end cards. You can tell exactly the same story about disk controllers too.

The idea that Linux somehow bounded into existence with a full complement of device drivers from day one is very wide of the mark. From perhaps 1997 onwards Linux's hardware support had been rounded out much more fully, but it was actually quite poor to begin with. To pretend otherwise is either ignorant or deliberately misrepresenting the facts.


Re: more buy in

The idea that Linux somehow bounded into existence with a full complement of device drivers from day one is very wide of the mark. From perhaps 1997 onwards Linux's hardware support had been rounded out much more fully, but it was actually quite poor to begin with.

All true, but the post you were originally responding to[1] made no claim to the contrary. All it said was that Linux was "the first x86 Unix"[2] that supported some unspecified hardware. No mention was made of when Linux supported that hardware.

That's not to say that I find JEDIDIAH's argument convincing, mind. In my experience, PC hardware support for any OS besides Windows was pretty lousy until the late '90s. And I worked with most of them: most of the MS-DOS versions, every version of Windows from 2.0 on, all of the OS/2 versions (which lacked decent drivers even for many of IBM's own Thinkpad models), SCO Open Desktop and Unixware, Solaris x86, FreeBSD, a handful of Linux distributions, Novell Netware, Coherent[4], and possibly some others which I've forgotten at the moment. Once Windows 3.0 became dominant in the PC market, most manufacturers just didn't care about drivers for other OSes.

[1] By block-caps JEDIDIAH, which I assume is an acronym for a cabal of some sort or perhaps for an AI project cleverly mimicking a Reg commentator as part of a Shannon language-model test.

[2] Technically incorrect, since Linux is not UNIX-branded,[3] but we'll accept "Unix" here as shorthand for "Unix-like OS".

[3] IBM z/OS, on the other hand, is UNIX 95-branded, so if you want a real UNIX, I recommend buying a System z and installing z/OS. Old UNIX hands might find ISPF a bit odd, but using USS over a TN3270 connection will be all kinds of familiar.

[4] A UNIX clone from Mark Williams Company, never very successful. Wikipedia claims it was the first UNIX-like OS for the PC and originally released in 1983; I haven't verified that (I didn't buy a copy until sometime around 1990). MWC was later made infamous by pursuing one of the first stupid-computer-patent lawsuits, over their patent on ... wait for it ... byte ordering.[5]

[5] US 4,956,809. Read it if you dare.


B0LL0X

It's been GPL for years, and ended up in a bunch of fantastic OSes. Where's the doom and gloom?


The only seeds sown for its demise

appear to be from the proprietary camp competing with lawyers instead of software. The only way the GPL is going away is if someone manages to make it illegal.

And as for the Tanenbaum comment from 20 years ago: 20 years ago they had a bit of a tiff, but I doubt Tanenbaum feels the same way now, and if you were to go and read his works on OS design you would realise your article was mostly tosh. Especially the bit about re-inventing the wheel - MS does it every two years, and as long as people are taught MS or the latest language shit in college, and not Tanenbaum and Knuth, we are going to get new wheels every few years, with people patenting rounded corners on triangle wheels (one less bump than a square wheel!). And ask again what you think an oppressive licence is - you can't get much more oppressive than MS/Apple's pay-us-or-FOAD approach to licensing.


Re: The only seeds sown for its demise

The seeds of destruction have little to do with the GPL and everything to do with the patent laws of various nations slowly making it impossible to write new code, free or otherwise. It is already impossible to come up with a new video codec that doesn't violate existing patents, thus the severe restrictions on the HTML 5 Video tag. Patents are being handed out for trivial adaptations that any engineer would see as hardly worthy of being termed an "invention". Ridiculous, really. This is the real danger to open source software. Closed source software runs into the same infringement issues, but of course it is much easier to prove an infringement against an open source project.


Nein! Nein! Nein! Nein! Plan 9!

"A pile of old festering hacks, endlessly copied and pasted by a clueless generation of IT 'professionals' who wouldn't recognise sound IT architecture if you hit them over the head with it"

Woah, I am relieved that I am not alone in the WTF reaction I had when I checked what happens underneath "./configure".

Also, the original rant (a bit over the top, and dated August 15, 2012, but then again this is El Reg) and the comment section are of paramount reading importance. The commentariat is often better than the rant, but then you have things like this:

---------

metageek | Mon, 20 Aug 2012 16:11:54 UTC

This is a typical engineering point of view. Engineers like to think that the world could be designed, however Nature shows the opposite. Any sufficiently complex system cannot be built out of design, it has to be *evolved*. Evolution is messy (anarchic?): pieces that work get reused, pieces that do not are lost. There must be something good about autoconf that has enabled it to survive and eliminate those awful "designed" makefiles that needed manual tweaking even though they did not check for 15 different Fortran compilers (the status quo of the 1980s). Does it matter that autoconf looks for those now-extinct compilers? No, but it gives it robustness, which is why it has survived. Someday a new, less bloated alternative to autoconf will appear and it will slowly overtake autoconf. In the meantime we are doing very well with it, thank you very much.

Software is the most complex construction that mankind has so far created. It is so complex that it can only be evolved much like Nature's products are. Engineers can continue living in their nice little artificial world of linear systems and planned devices. Those that venture in the real-world of complexity and are successful will take up Nature's ideas.

Goodbye Unix of the 80s. Linux is here to stay

---------

This is exactly the kind of person you actually want just carrying the boxes in the basement, lest they give you a Therac-25 again.

Anonymous Coward

Re: Nein! Nein! Nein! Nein! Plan 9!

Configure does make me wonder. Why does it look for a Fortran compiler when the program is in C? Why does it take longer than the actual compilation?

fch

Re: Nein! Nein! Nein! Nein! Plan 9!

configure / autoconf doesn't make me wonder - it makes me curse, swear and use sewer language of the worst kind. Nuking it from orbit is too kind a death for it.

It's not a tool, it's a non-tool. Full agreement with the *BSD ranter there - no one bothers understanding autoconf input or setting it up properly; it gets copied-from-somewhere and hacked-to-compile; if "development toolkits" provide/create autoconf files for you, they're usually such that they check-and-test for the world and kitchen sink, plus the ability to use the food waste shredder both ways.

The result is that most autoconf'ed sources these days achieve the opposite of the intent of autoconf. Instead of configuring / compiling on many UN*X systems, you're lucky today if the stuff still compiles when you try on a different Linux distro than the one the autocr*p setup was created on.

It had its reasons in 1992, but the UN*X wars are over; these days, if your Makefile is fine on Linux it's likely to be fine on Solaris or the *BSDs as well. Why bother with autoconf? Usually one of: "because we've always done it that way, because we've never done it otherwise, and by the way, who are you to tell us!"
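[The "plain Makefile" argument above can be made concrete with a minimal sketch. The project name and file names here are invented for illustration; the point is that nothing in it is GNU-specific, so GNU make, BSD make and Solaris make all accept it with no configure step at all. Recipe lines must be indented with tabs.]

```make
# Minimal portable Makefile sketch for a hypothetical C program "hello".
# Sticks to POSIX make features only -- no autoconf, no GNUisms.
CC     = cc
CFLAGS = -O2
PREFIX = /usr/local

hello: hello.o
	$(CC) $(CFLAGS) -o hello hello.o

hello.o: hello.c
	$(CC) $(CFLAGS) -c hello.c

install: hello
	cp hello $(PREFIX)/bin/hello

clean:
	rm -f hello hello.o
```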

5
1

Re: Nein! Nein! Nein! Nein! Plan 9!

Indeed, software, and computer systems in general, have followed an evolutionary path from the start. I wouldn't argue otherwise. Computers still have BIOS beep codes and humans still have an appendix. However, there is a subtle difference. Engineers effect evolutionary gains (or failures) by designing changes to an existing system. In Nature, evolutionary gains (or failures) come about by random chance. As an engineer, I will stick to my belief that a guided evolution must surely affect the rate of evolutionary gains in a positive manner when compared to the rate afforded by random chance.

3
1
Stop

Don't rag on autotools

The autotools are a bit difficult to learn, but I think they are worth the effort. M4 is a bit fugly, but with a bit of practice and ruthless factoring into small macros it's not that bad. (Top tip: add banner lines to your macros so you can spot the output in the generated configure script.)

With the autotools, I get cross-compilation, packaging, cross-compiled unit tests that execute in a cross environment, transparent replacement of missing functions at link time, standardised argument handling that generates help messages, binutils access, and the ability to mung various assets (images/SQL etc.) into my code with very little effort.

Mostly I copy my build-aux and m4 directories into a new project and write a simple configure.ac. My heart sinks when I have to work on a project that doesn't use autotools.

So I think the autotools survived because when you take into account everything it provides, it's streets ahead of everything else. (Libtool is still a thorn in my side, admittedly)
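[For readers who have never seen the workflow described above, a minimal autotools input pair looks something like this. This is a hedged sketch; the project name "demo" and file names are invented, and a real project would add more checks.]

```
# configure.ac -- minimal sketch for a hypothetical project "demo"
AC_INIT([demo], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- the entire build description automake needs:
#   bin_PROGRAMS = demo
#   demo_SOURCES = demo.c
```

From there, `autoreconf --install` generates the configure script and Makefile.in, and `./configure && make` builds as usual.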

1
2
Bronze badge
Stop

Where are the /free/ microkernel OSs? I think that says it all!

The only use I have for BSD is the specialised FreeBSD distribution FreeNAS running my RAID box. BSD took too long to be freed, so most free-OS users bypassed it for Linux. No, I don't have or want any iCrapple devices.

I had an Amiga twice; I thought it was brilliant because the message passing made lots of stuff really easy to do, and it did usable multi-tasking unlike most other computers of the time. However, AmigaOS is irrelevant now because it was never set free; such a missed opportunity!

Linux is massively successful because it has always been free. True, a better new OS may eventually replace it, maybe even a microkernel, but it must be free. This is where the toy OS Minix failed hugely, and Tanenbaum was talking complete nonsense then to claim that multi-tasking was not critically important for any practical OS, especially a microkernel!

As for http://www.haiku-os.org/ and http://common-lisp.net/project/movitz/: you are taking the piss. The first "car" is a hobby alpha concept car, so not safe to drive away from a test track; the second "bicycle" is more like a unicycle with a solid wheel, no tyre and no brakes, so not practical!

All the arguments for central/monopoly planning are being progressively demolished and humiliated; e.g. Steve Keen's critique of neo-classical economics aptly demonstrates why extrapolating supply-and-demand behaviour from individuals to groups of people is utter nonsense, and why the results are so different!

2
0
Anonymous Coward

Re: lest they give you a Therac-20

ITYM Therac-25

0
0
Silver badge
Unhappy

Re: lest they give you a Therac-20

Alzheimer's strikes again

0
0
Anonymous Coward

Ho hum.

As analysis goes, it misses a bit of background, doesn't quite catch the flavour, and isn't all that thorough. Best try again and find more viewpoints to compare and contrast. This just doesn't do the Unix history justice. It's not the lack of length; it's that the short and sweet of it doesn't quite manage to hit the nail on the head. One of the larger misses is that Windows is very much defined as not being Unix, q.v. that chief designer guy.

Doesn't change that the current state of software is quite sorry, and that various vaunted world-improvers have not managed to actually, you know, improve the world. They just added more code and standards on top of an already big pile (Obxkcd left as an exercise).

I for one was just mouthing off about Linux's sorry state of networking, which I won't repeat here except to note that it is fairly sorry, with fragmented, multivaried, multiversioned, mutually incompatible toolsets and hardware driver stacks. This sort of thing leads me to conclude that sometimes a design cabal is actually useful: at minimum people with a bit of background in no-reinventing conservatism (an explicit goal in the CSRG), though ideally some design and architecting background would be nice too. In that, Poul-Henning Kamp is quite correct, even if he doesn't always manage to live up to the expectations it raises.

Linus Torvalds, though, serves quite well as a bad example. He's not great at large-scale architecture, which is why the design is still "obsolete", though it has managed to muddle by. His example, however, has made people mistake forcefulness for leadership, and this has crashed a few promising projects. This is not to blame him for those failures, but more an illustration of how the lack of a good example fails to breed good examples in turn.

Then again, being forcefully opinionated is somewhat of a steady state in computing. One might say it is a sign of immaturity in the field. And since there are few truly one-true-ways of doing things, this isn't likely to change anytime soon.

4
9
Anonymous Coward

Re: Ho hum.

Instead of criticizing Linus Torvalds (I'm not saying you're right or wrong to do so), why don't you, or anyone who has a better idea, fork the Linux kernel, push it in the right direction and bring us all into a new era of computing? I'm pretty sure Linus himself would not be bothered by that (and we wouldn't care even if he were).

What I don't like is people wanting to stick their ideas (good or bad) into the Linux kernel and then complaining about being rejected. If there is a way to re-engineer the Linux kernel's architecture, and I'm sure there are other brilliant minds out there, what's stopping them?

5
1
Anonymous Coward

Why fork linux at all?

Why would anyone want to try and weather the rough-and-tumble of the linux dev community only to be rejected because some other idea happened to be hot that day and yours wasn't? Or what if you don't happen to be into that cult of personality? And if you're going away from that, why stick with linux?

I could tell you why, and I could tell you why not. But all that aside, I wasn't complaining that my code got rejected. I observe myself getting bitten by the lack of a uniform interface provided by "linux", as it happens today particularly in its networking, which overall supports phk's argument. I usually use other systems that have less such trouble, but I can't always do that. But I digress.

The point is that the state of software in general is quite poor. As far as examples of that go, linux certainly is not the only one, but it is a nice and illustrative one. For various reasons it is a common source of grief for other systems by dint of "linuxisms" in software written for it. That's not even the kernel code's fault or its apis or whatever, it's to do with the people surrounding the thing. You can't fix that by forking the linux code base.

It's not that something is "stopping them"; it's that expecting unspecified people to roll a fat one, sit back, and spend a good relaxing night of forking is entirely irrelevant. So that forking suggestion is not particularly helpful. Neither is the incessant myopic focus on Linux-the-kernel. We're talking a rather bigger picture than that. Thanks for playing; try again soon.

1
4
FAIL

Re: Ho hum.

If you fork the Linux kernel, it's still under the GPL. Why not wait 50 years until Linus dies or something and elect a new leader? Why wait at all? Why use Linus's much-criticised kernel in the first place? What is a 1970s OS doing in the 21st century? What is keeping you from designing a clean, all-modern new OS?

2
1

Page:

This topic is closed for new posts.