There once was a time when open source was all about peace, love, and Linux, a bottom-up community of self-selecting hackers that chummed together for the love of good code. As soon as Linux hit pay dirt, the nature of the open-source community changed forever. Today it is virtually impossible for a successful open-source …
> There once was a time when open source was all about peace, love, and Linux,
And maybe 20 years before that, it was peace, love and Unix. And RSX-11. And VMS.
Linux is the new kid; it wouldn't even exist if the GNU tools hadn't already been there. One thing that did change around then was licensing. A lot of the original open source had licences which said pretty much "you can do what you like with this as long as it isn't commercial". GNU and the GPL did change that, although I doubt that was the original intention.
And before UNIX, RSX-11, and VMS, there were mainframes with OSes in assembler, customer-developed patches and enhancements, and tools tapes put together by customers and vendors. Indeed, one of the earliest user groups (if not the very first) is named SHARE--it's not an acronym, it's what the participants DO.
Oh, yeah (long night, I'm tired), they STILL DO, in case it wasn't obvious, the OCO aberrations notwithstanding.
> And before UNIX, RSX-11, and VMS, there were mainframes with OSes in assembler,
Don't mention assembler in front of the children.
> Don't mention assembler in front of the children.
Psst, wanna have a look at some machine code?
0xA2 0x0A 0xCA 0x20 0x00 0x01 0xF0 0xFA 0x60
Just so long as it isn't 8088 assembler, they should be fine.
Re: AC's machine code
AC's machine code looks like 6502 (the terminal 0x60 is a give away) in which case it says:
// TODO: Get a life
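For anyone curious, the bytes do decode as 6502. Here is a minimal sketch of a table-driven disassembler covering just the five opcodes that appear; the $0600 load address is an arbitrary assumption, and the routine at $0100 is whatever the original poster had in mind:

```python
# Minimal table-driven disassembler for the nine bytes quoted above.
# The opcode table covers only the instructions that actually appear;
# a real 6502 disassembler would need all 151 documented opcodes.

OPCODES = {
    0xA2: ("LDX #$%02X", 2),   # load X immediate
    0xCA: ("DEX", 1),          # decrement X
    0x20: ("JSR $%04X", 3),    # jump to subroutine (little-endian operand)
    0xF0: ("BEQ $%04X", 2),    # branch if zero flag set (relative operand)
    0x60: ("RTS", 1),          # return from subroutine
}

def disassemble(code, origin=0x0600):
    lines, pc = [], 0
    while pc < len(code):
        op = code[pc]
        fmt, size = OPCODES[op]
        if size == 1:
            text = fmt
        elif op == 0xF0:  # relative branch: signed 8-bit offset from next pc
            offset = code[pc + 1]
            if offset >= 0x80:
                offset -= 0x100
            text = fmt % (origin + pc + 2 + offset)
        elif size == 2:
            text = fmt % code[pc + 1]
        else:  # 3-byte absolute: low byte first
            text = fmt % (code[pc + 1] | code[pc + 2] << 8)
        lines.append("%04X  %s" % (origin + pc, text))
        pc += size
    return lines

code = bytes([0xA2, 0x0A, 0xCA, 0x20, 0x00, 0x01, 0xF0, 0xFA, 0x60])
for line in disassemble(code):
    print(line)
```

Run against the nine bytes, it prints the loop the earlier poster paraphrased: load X with 10, decrement, call a subroutine at $0100, branch back to the DEX while the zero flag is set, return.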
> > Don't mention assembler in front of the children.
> Psst, wanna have a look at some machine code?
> 0xA2 0x0A 0xCA 0x20 0x00 0x01 0xF0 0xFA 0x60
WAAAAAAH! That doesn't look like C!
Myth of an idyllic age
Which probably never really existed, other than in RMS's memory of a community of university work friends who, once they took paid non-research jobs in industry and signed NDAs with their employers, could no longer talk to him about what they were working on or share source code for what they were doing. Add to that the rose-tinted specs with which grumpy old men often view events, simply because these events happened between 20 and 40 years ago; at the time, those involved had entirely competitive and career research motivations.
Open source had various origins, including the fact that AT&T were prevented from selling computer operating systems by a telecoms antitrust settlement, so they gave away the UNIX tapes to universities for the cost of copying, as this helped share development costs. BSD UNIX then developed as a US taxpayer-funded research activity, and the best research has to be shared and peer reviewed, requiring an open-source licence. The GPL probably came out of RMS's annoyance at ex-research-lab colleagues signing NDAs, privatising knowledge gained in the more inherently open environment at MIT.
Probably the most significant push towards open source was the IETF standards development history. Just as the knowledge of how to connect telephone equipment had to be shared in order to internetwork, the Internet succeeded where the OSI efforts did not because of the more open participation and publication process around the Internet RFC standards. To move those standards up the acceptance and approval chain, independent implementations had to talk to each other, which became easier once the BSD UNIX software implementing them was published as reusable source code.
Re: Myth of an idyllic age
Re OSI vs TCP/IP: it wasn't down to the licensing, it was just that TCP/IP was cheap and cheerful and dodged a lot of hard problems. OSI was/is hard sums and required developers to be a lot more professional, whereas a lot of the TCP/IP standards and tools are "good enough for jazz" efforts knocked together by grad students.
And OSI was very cutting-edge: TCP/IP took years, if not decades, to develop mail to the point where it could do what X.400 could do. And X.500, if fully implemented, far exceeds what DNS can do now.
And BTW, I used to work on BT's X.400 and X.500 products; it's just that sometimes the best technical solution doesn't win: VHS vs Betamax, for example.
Re: Myth of an idyllic age
I too worked on X.400; it was years ahead and still has some advantages. I also worked with Coloured Book, OSI and X.25, which knocked the socks off TCP/IP for reliability and security. OSI and X.400 were just about mandatory for any institution, such as banks or the armed forces, that needed reliability and security.
Still, they left their mark. We all know about network layers, LDAP and so on. ASN.1 seems not so different from XML in concept.
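To make the ASN.1/XML comparison concrete, here is a hypothetical record type in ASN.1 notation (the `Person` type and its fields are invented for illustration):

```text
-- ASN.1: the abstract syntax, encoded on the wire as BER/DER binary
Person ::= SEQUENCE {
    name  UTF8String,
    age   INTEGER
}
```

An XML rendering of the same idea would be something like `<Person><name>...</name><age>...</age></Person>`: both describe nested, typed structure, but ASN.1 separates the abstract syntax from its wire encodings (BER, DER, later PER), whereas XML is itself the encoding.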
As for Linux software quality, hmm, very debatable, knowing one or two "contributors'" coding styles. Here, BSD does it much better. Even Linux was preceded, I think, by Minix and, as said already, depends rather heavily on the existence of GNU for the shell/presentation layer and more. Software was generally, at a commercial level, more open. Either for next to nothing or for a fixed price, customers could get full sources (I had a job modifying Primos O/S, in its several implementation languages, Primos being for a short time a rival to VMS and, like UNIX, having some Multics roots, though definitely proprietary).
OSI was not successful; it was an academic charlie foxtrot. It's why a lot of the industry is so wary of the W3C: folks know how much damage an unfettered international standards committee can do. We evaluated parts of OSI at Bell Labs in the 1980s, the red book and blue book versions. TP4 was horrifically inefficient. One implementation took longer to negotiate a packet transmission than the connection timeout period of TCP. The first two full implementations of X.400 in Europe couldn't even send email to each other, because the groups had chosen such different data format options, both totally in agreement with the sprawling specification. At Bell Labs, we used the phrase "mailer science" as a derogatory term for this kind of badly engineered, over-designed system. The net effect of this standards-committee circle jerk was that a cabal of contractors and universities got a gravy train of publications and grant money, while Europe and Japan delayed joining the internet for several years -- think about the opportunity cost of that.
If there is one lesson to take away from the OSI debacle, it is never, ever, let the ITU have control of the internet. America clearly has too much control now, but just keep the corrupt, inept UN committees away from it, or everyone will be sorry.
@Don Mitchell, Re: Mailer Science
+1 for your reasons for OSI's failure, but giving the ITU authority over ICANN wouldn't revert the last 30 years of development in technical standards-making methods, which have to be openly discussed, agreed and implemented if standards are ever likely to be adopted. Such standards are more the job of the IETF and W3C, which are independent of ICANN, and of any other organisation (e.g. the IEEE) whose contributions are effective. The ITU have done a better job with the international telephone dialling code namespace than ICANN are doing with TLDs, now that ICANN have started pimping .whores and .brands for private empire-building profitability, and auctioning contended TLD names such as .bank to the highest bidder.
Great article!
Enjoyed every word of it. Thanks!
There is much more to it
A part I'm completely missing from the article is advertising. It's not uncommon for companies to maintain commercially backed open-source projects without any means of making a profit from them. If they can keep things break-even, then it's a win-win situation for them.
On the other hand, the bigger the project, the better known it can become, which automatically brings more attention to the company behind it.
Oi, Matt! Poor sport!
Nary a peep that you didn't think of this yourself. Besides, the first link you do add gives a better answer to the headline than this article, even though, like this article, it lacks a certain... historic perspective. That seems to be a recurring issue with the stuff you present us. While in toto we might get something useful, expecting your readers to read (or add) various essential pieces in the comments isn't something you should be depending on when writing the weekly deliverable. Sheesh.
It's not perfect...
...but it's better. Free Software (not quite identical to Open Source, incidentally) has positive, beneficial effects even if we don't quite reach utopia. For one, it allows an escape route. Witness MySQL: when there were irreconcilable differences, developers forked it. Like a Hydra with a head cut off, two rise in its place. Fragmentation can be bad, but as an alternative to a company buying something out to kill it, for example, or a tyrannical lead developer countenancing no breaking away from their vision, it is a positive result.
Secondly, whilst a company can take GPL'd code in house and keep their refinements to themselves, if they want to sell or work with others, they have to give back. (Yes, you can tip-toe around the edges with binary blobs and other fudging, but often this isn't feasible). So the GPL forces the production of a kind of coding Commons. Which is a good thing.
Thirdly, it reduces the barrier to entry. If you have to write a web server from scratch to underlie your product, or you have to licence one from a company that would be your natural rival, then as a small player you may just give up and go home. But if you could, say, just create it as an Apache module, maybe you can get into the market. And competition is good.
Yes, sometimes GPL has a downside for the self-interested. Maybe that Apache module has to be Open Source and others would try selling support for the module in competition with you (though you would have the edge). But it's optional - you can choose to participate in the Commons reaping the benefits as well as paying the costs, or you can go your own way and do everything from scratch. Point is, without GPL, there is only the latter and lack of choice is bad.
Finally, it lowers the barrier to entry for all those individuals who are the future of programming. They can grab Python or PHP or the GCC compiler and get stuck in. Can you imagine a world where all programming language implementations were proprietary? It wouldn't be a complete disaster, because companies would compete with free versions to get people to learn theirs, but it would certainly be worse than it is.
Could the Free Software world be better and are there abuses? Yes. Is it still a great positive force in the world of IT? Yes. I got into Open Source over a decade ago and I am not disillusioned yet.
Open source used to be distrusted by Enterprises, but now the opposite is true, possibly since IBM threw Unix under a bus and poured billions into Linux. That endorsement gave open source credibility and Linux immediately took over datacentres, where it was furtively installed previously. Then Google Chrome legitimised open source browsers after Firefox led the way, and, together with Firefox, killed the nightmare that is IE. Android is winning the mobile wars, to some extent at least, and Linux prospers on every platform, from embedded to supercomputer, except for desktops, where Microsoft still exerts a stranglehold. But even that is a stranglehold that is gradually loosening like IE's grip loosened in about 2003. Win 8 will further destroy Microsoft's credibility on desktop systems. And then Big Data came along, and, with the likes of Hadoop, open source is prevalent there in the most enterprise-like systems imaginable.
It's hard to forget that in 2004 Open source was scorned by industry. Now it is prospering by comparison.
"Has cash corrupted open source?"
Nope, design by committee might have done, though.
Computer science is like other sciences now, paid for by people who want you to help them make some cash and prove a point.
The difference is that other sciences publish their findings in journals and their results are scrutinised and any problems indicated.
Money changes everything.
Organisations not just money
Most big open-source projects are supported/managed/'controlled' by large companies who have a particular need.
So when Qt was bought by Nokia, they released it as open source and put a lot of development effort into it. BUT most of that effort was focused on using it for phones (naturally). So a lot of bug fixes and contributions for the desktop GUI went ignored. Yes, it's open source, so you could take the entire project and fork it to fix a couple of bugs!
A few top Linux kernel developers have also complained that the influence of big-iron server makers means the focus of the kernel is on server performance rather than desktop improvements.
We've got to get those guys who want to commercialise open-source software. Got to rid ourselves of them, or it will be tougher and tougher to maintain an ethical software world. Do anything you can to fight against these people.
Don't buy anything with a price attached. Don't buy any "apps".
Donate to FSF and EFF. Do that now.
The FSF and its GPL are probably the reason this article was written in the first place. Without it, there would be only two real kinds of "open-source" licenses, those that allow commercial use of the code but don't require contribution back (in which case they probably won't), and those that disallow commercial use at all. This sort of mixing of commercial and community involvement was exactly what the GPL was designed for.
> The FSF and its GPL are probably the reason this article was written in the first place. Without it, there would be only two real kinds of "open-source" licenses, those that allow commercial use of the code but don't require contribution back (in which case they probably won't),
Yeah, no commercial entity has ever contributed to, say, Apache.
> and those that disallow commercial use at all.
Unless you're being tautological (defining "real kinds of 'open-source' licenses" as those which fall into one of the categories you've listed), that's a highly dubious assertion. There's plenty of software that's free for non-commercial use but requires payment for commercial use, and some of it is distributed in source form. Certainly there are people who exclude that from their definition of "open source", but if that's what you mean, then you're assuming the consequent.
> This sort of mixing of commercial and community involvement was exactly what the GPL was designed for.
The GPL was designed to enforce source availability on derivative works, and motivated by Stallman's dislike of closed-source software. I was working in Cambridge, MA in '88 when he first published the GPL, and in all the controversy I don't remember anyone claiming it was "designed" to "mix ... commercial and community involvement". As far as I can tell, that narrative came years later.
The GPL was certainly important in the history of software distribution, and to some extent in the history of software development. (Just how important it was to the latter is a matter of speculation; I haven't seen anyone do anything like a methodologically robust study of the question.) But its importance is frequently overestimated, particularly by people who disregard the long, complex history of software source distribution prior to it, and pivotal events such as the IBM consent agreement and consequent "unbundling" practice.
Stallman's anti-closed-source position developed over several years and had many influences, but there are some incidents that were clearly significant, such as the UniPress dispute.
Though the GPL was based on licences he had created earlier for Emacs, etc.
Inherently silly article.
It's an inherently silly article based on a flawed premise and confused terminology.
It's FREE SOFTWARE. The moment you start talking about this "open source" stuff you've already compromised your ideals and have started pandering to guys with money and expensive suits. That's kind of the whole point of the thing.
If you are just realizing this now then you haven't really been paying attention.
"Open source" is watered down for corporate consumption by its very nature.
"consumed by venture capital dollars"
It isn't "consumed", it's merely funded, just like all software is funded, one way or another.
As long as the license remains Free there's no problem, and there's no way for anyone to "corrupt" Free Software into non-Free Software. That's the whole point of licenses like the GPL.
Free Software projects that are bought wholesale (not an easy task, since most of them contain code whose copyright is held by many contributors), such as CUPS for example, do run the risk of future versions being made proprietary by their new "owner". However, licences cannot be changed retroactively, therefore the existing Free Software code can and will be forked if necessary, and community development of that project will continue unabated.
In short, the funding of Free Software, by corporations or otherwise, does not and never has "corrupted" it, and moreover can't possibly do so, therefore your loaded questions are entirely moot.
Open Code != Open APIs
I was surprised by the apples-and-oranges comparison between MySQL/MongoDB and the Twitter API. I suppose both kinds of things have communities around them. But a service provided by a data silo is a fundamentally different thing from a code base. They are 'open' in quite different ways: a service can't be forked without the data coming with it.
What worries me mostly is that the aspirations for the Internet have shifted from facilitating the communication of peers using open protocols, to creating 'siloed communities'. This is quite clearly illustrated by the so-called 'friend requests' I get from Facebook ... and then LinkedIn and Google+ (which I never subscribed to, but seem to have an account on) as associates gradually discover those too. There's no service-provider-independent means of marking up my social network.
Re: Open Code != Open APIs
>There's no service-provider-independent means of marking up my social network.
That's really the inherent problem with P2P: it requires some sort of centralised directory to locate everyone. You could do local discovery and caching, but IPs are dynamic, and a good chunk of your local record cache would be out of date the same day it was created.
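For contrast, a fully decentralised directory is possible in principle: distributed hash tables route lookups by ID distance instead of consulting one central server. Below is a toy sketch of Kademlia-style XOR-distance routing; the 8-bit node IDs and the flat `network` dict are invented stand-ins for real peers exchanging messages, and a real DHT would also have to handle exactly the stale-address problem described above:

```python
# Sketch of Kademlia-style peer lookup: instead of a central directory,
# each node keeps a partial view of the network, and a lookup walks
# greedily toward the node whose ID is closest (by XOR) to the target.

def xor_distance(a, b):
    return a ^ b

def closest_peers(known, target, k=2):
    """Return the k known peers closest to target by the XOR metric."""
    return sorted(known, key=lambda n: xor_distance(n, target))[:k]

# Hypothetical 8-bit ID space; each node knows only a few neighbours.
network = {
    0b00010101: [0b01100010, 0b11110000],
    0b01100010: [0b00010101, 0b01101111],
    0b01101111: [0b01100010, 0b01101110],
    0b01101110: [0b01101111],
    0b11110000: [0b00010101, 0b01100010],
}

def iterative_lookup(start, target):
    """Walk the network greedily toward the target ID."""
    current, seen = start, {start}
    while True:
        candidates = [n for n in network[current] if n not in seen]
        if not candidates:
            return current
        nxt = closest_peers(candidates, target, k=1)[0]
        if xor_distance(nxt, target) >= xor_distance(current, target):
            return current  # no closer peer known: lookup has converged
        seen.add(nxt)
        current = nxt

print(iterative_lookup(0b00010101, 0b01101110))  # prints 110 (0b01101110)
```

The lookup starting at node 0b00010101 reaches 0b01101110 in three hops with no global directory; the price is that every routing-table entry is a cached address that can go stale, which is the churn problem the post points out.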