Re: Shutting the Stable door
And that matters to all two companies still using Solaris.
This is an interesting subject, and one not limited to Linux or to Linux distributions in general.
There are a lot of heavily used, de-facto standard open source libraries and tools on Github that rely on the continued benevolence of a handful of people - sometimes even just one - with no source of income.
How do these projects gain revenue? Some developers put up donation links, some use Flattr (ugh) or Patreon, but really, how much revenue do these things bring in? If there are numbers on this I would love to see them - there does not seem to be much information out there.
Clearly for any successful open source project, in general, to continue it has to have some source of revenue once it goes from being just a proof of concept or a hobby project to something included in distributions, heavily used as a de-facto third party provided library, or a frequently used tool.
Too many projects have been abandoned with no alternatives due to the one developer losing interest or no longer having the spare time to spend on it. Other than selling out to some corporation, which usually does not end well (hello there Graphite project), what options are there?
Well said and mirrors my own experiences with Sailfish. Nokia had the chance to be a real force in the smartphone era with an OS and UI that truly differentiated itself from others in meaningful, productive ways.
Then gave it all up to use Windows Phone.
Until about a year ago, a Sailfish powered phone was my main mobile device. Had there been newer options with better specs and more recent Android runtime support, it still would be.
Nokia did not fail by not switching to Android - it failed by not developing their own smartphone OS quickly enough, instead wasting resources on beating the Symbian dead horse.
As a result, neither platform was fit for purpose and Nokia was forced into a choice of third party OS. It does not matter which they chose - they were doomed from the moment they were in the position of having to make that choice.
Ironically, MeeGo - what was supposed to be the Symbian replacement - did get an actual day in the sun in the form of Sailfish (https://sailfishos.org), released by a startup founded by ex-Nokia engineers to deliver what MeeGo would have been.
I bought their first phone, and the OS was incredible: fast, easy to use, gesture based, with true multitasking, native apps and a third party Android runtime supporting apps up to 2.4. However, the native apps were too few and of too low quality, the phone did not have enough memory to run the memory intensive Android apps, and many newer apps were not supported at all due to the old Android runtime used.
The irony is that Nokia did have the resources to put into making high quality apps for it had they focused their efforts on one platform.
In the end, the fact that Elop was a trojan horse is irrelevant - Nokia were already doomed by the time he waded in with MS stock in his back pocket.
This is a dangerous, and false, re-imagining of the intentions behind the GPL. There has never been anything in any version of the GPL that prohibits authors of code from selling said code.
The so-called 'copyleft' clause ensures that people other than the author must distribute derivative works of the code in question under the same license. There is always the option of obtaining a different license from the author by way of, say, paying some money for one.
Without this clause, companies would be free to modify, distribute and sell derivative works with no obligation to contribute their changes back nor to seek permission to sell or distribute their derivative work. The clause, therefore, is solely intended to protect copyright owners, the authors of the code - ie, developers.
Without the copyleft provisions in the GPL, no hardware manufacturer in the world would be putting resources into writing code for the Linux kernel - they'd just modify in house and force everyone to buy "Linux" from them if they want support for their hardware. This hurts both users and developers but hey, it means the hardware companies can charge whatever they feel like for hardware support like in the good ol' days of mainframes and proprietary human interaction devices. Yay for business, boo for collaboration, competition, and the free market.
If "new" developers are less inclined to use copyleft licenses, it is most likely down to a poor understanding of what those licenses offer them. Worse still are examples of GPL-incompatible licenses like the EPL (Eclipse Public License) being used for "open source" projects which then end up in derivative works licensed under the GPL - something explicitly not allowed by the EPL - https://www.eclipse.org/legal/eplfaq.php#USEINANOTHER
These projects are lawsuits waiting to happen. Choose your license wisely and RTFM.
Intel has options, they just do not like them.
They could stop selling their entire affected product line, get everyone to work on a design fix/workaround and only start selling chips when they have a revised design that does not open up their *customers* to remote code execution.
Like, you know, every other company has done when in the exact same situation. Not only that, most companies would recall defective products at their own expense, as their customers expect them to. See Toyota (twice), Samsung et al.
But they do not want to because $.
No, wrong on all counts. The push back is from seeing these things being applied, or attempts thereof, to everything regardless of the suitability of those tools to the task.
"DevOps" is slang for "we dev, you ops". For developers it means they get to write code for ops to use. For ops it means they get to write code for devs to use. See the problem here?
"Agile" is slang for "let's spend 2 weeks doing multiple POCs that we know right now are not a good fit for what we want but will do anyway because Agile".
The first time I heard the term "scrum master" I thought it was a joke. Then I saw actual job titles with that term. After years of participating in scrums, I still wonder what these people do.
I just boil it down to "Automation". If it saves time, it's worth doing. Fucking about doing POCs and scrums while pretending to be working is not being "agile", it's just procrastination.
Pray tell, how well does blocking multi-threaded code handle a large number of network connections?
When everything needs to run in a (hardware) thread to gain parallelism, and all the application is doing is waiting on a socket, suddenly the overhead from all those context switches becomes greater than the time spent actually doing something with that data.
That is where event loops come in. Different tools for different tasks. Both have their uses.
The real low point of node.js is that it forces _everything_ to be async, even when it really needs access to the CPU because it, like, needs to run some code on it, like, programming and stuff. Go and Python with native libraries like gevent are much better in that regard.
node.js callbacks are still the epitome of sh1t-ness, though. Why bother putting in language semantics to handle async calls when you can just dump it all in the programmer's lap and force them to pass around endless callback references for their handling code..
References: Google nginx vs apache or any other multi-threaded vs event loop networking call handling benchmarks.
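The difference can be sketched with just the standard library - a minimal, hedged illustration (not a benchmark) of one thread multiplexing several connections via readiness events, instead of blocking a thread per socket:

```python
# Minimal event-loop sketch using only the standard library: a single
# selector loop services several connections in one thread, the same
# shape nginx/node.js use instead of a blocking thread per socket.
import selectors
import socket

def echo_once(pairs):
    """Echo one message on each (server, client) socketpair using a
    single selector loop - no per-connection threads."""
    sel = selectors.DefaultSelector()
    for server_side, _client_side in pairs:
        server_side.setblocking(False)
        sel.register(server_side, selectors.EVENT_READ)
    done = 0
    while done < len(pairs):
        # select() wakes only when a socket is ready - no busy-waiting,
        # no context switch per idle connection
        for key, _mask in sel.select():
            sock = key.fileobj
            data = sock.recv(1024)      # ready, so this will not block
            sock.sendall(data.upper())  # echo back, transformed
            sel.unregister(sock)
            done += 1
    sel.close()

# usage: three in-process "connections", all served by one loop
pairs = [socket.socketpair() for _ in range(3)]
for _server, client in pairs:
    client.sendall(b"ping")
echo_once(pairs)
replies = [client.recv(1024) for _server, client in pairs]
print(replies)  # [b'PING', b'PING', b'PING']
```

The same structure scales to thousands of idle connections in one thread, which is precisely where thread-per-connection designs start paying more in context switches than in useful work.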
Cheaper and faster than running Openstack on your own kit, hands down. Do the maths and test it yourself if you want. Openstack is not *free*, there are severe costs in setting up and supporting it. Man hours are not free. If you are a person getting billed for those hours, however, you may feel that is a benefit to you. It is not a benefit to your company.
AWS also gets cheaper in the long term once you factor in continuing hardware maintenance and on going support.
My comments are from first hand experience.
| <..> it’s certainly possible to hire a team to spin up OpenStack. Vendors like Mirantis, SUSE, Canonical, Red Hat, and even VMware are happy to help.
Spin it up they can, make it stable and performing well *at scale*, you know, the thing it's supposed to do, they cannot.
There are *no* cost savings at 'hyper scale', because Openstack cannot scale to that. What there is at the end of the Openstack coloured rainbow is users complaining that the APIs are sub-standard, the platform itself prone to instability, scaling nonexistent, and performance sub-par with severe degradation once people actually try to use it.
Meanwhile AWS offers all of what Openstack only claims to be able to do, cheaper in short and long term, faster and more reliably.
Speak with the companies - like a particular large American data company in the finance sector - that actually implemented Openstack private clouds at huge cost (hundreds of millions of dollars), only to shutter them not even six months in because it did not do what it purports to do, as above.
Once a Graphite API compatible front end is layered on top of InfluxDB, yes.
Like what InfluxGraph provides - https://github.com/InfluxGraph/influxgraph
InfluxDB by itself can only ingest the Graphite protocol; it does not natively support the Graphite query API. A drop-in replacement for Graphite requires both.
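To make the ingest side concrete, here is a hedged sketch of the Graphite plaintext protocol InfluxDB can accept when its Graphite listener is enabled - the metric path is illustrative, and port 2003 is the conventional Graphite default, an assumption about local configuration:

```python
# Sketch: format and ship one metric in the Graphite plaintext
# protocol ("path value timestamp\n"), which an InfluxDB instance
# with its graphite listener enabled can ingest directly.
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Format a single metric in the Graphite plaintext protocol."""
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{path} {value} {ts}\n"

def send_metric(path, value, host="localhost", port=2003):
    """Ship one metric to a Graphite-protocol listener
    (port 2003 is the conventional default - an assumption here)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(graphite_line(path, value).encode("ascii"))

print(graphite_line("servers.web01.load", 0.42, 1500000000), end="")
# servers.web01.load 0.42 1500000000
```

The query side is the part InfluxDB does not speak natively - answering Graphite render/metrics API calls - which is what a layer like InfluxGraph has to provide.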
It is a far better solution for a metrics data store, no doubt about that. Anyone that has ever tried to scale Graphite core will tell you that it does not scale well at all.
InfluxDB is many orders of magnitude better in terms of speed and resource usage.
Companies and their management have no way of telling what is difficult and what is not.
The post is made tongue in cheek and its point is that perhaps IT professionals should not be so eager to claim everything they do is stuff for idiots, lest they find themselves replaced by said idiots.
When 'something that works' gets turned into 'accepting every piece of s$#t package the distribution fancies being shoved down your throat', while still having to do many things manually, it becomes harder to justify its use.
This may not be a popular opinion, but perhaps something like OS X is a better fit for an OS that 'just works' - which Debian is a far cry from in the ease of use department. *cough* binary graphics card drivers, Debian users? *cough*
What is described above is not 'instability', but expectations being conditioned to abnormality by static release distributions that only provide security patches to static versions of packages.
With Arch and all other rolling release distributions, an update is an update, not security patching. If you make the choice of updating all packages on your system to their latest version, you should be prepared to accept that some of those versions may not work well together.
You likewise have the option of not doing that, or updating only system packages and libraries used in development. As a developer, having access to the latest versions of packages would presumably be useful.
Finally, here is something other OSes and distributions never tell people - stable software will remain stable. If your system is known to be stable and working well, why are you updating the entire system?
Sure, vulnerabilities get discovered, which is what system package upgrades are for. You can even restrict Arch (and Gentoo and others) to specific versions of packages and still get security patches for those versions.
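A minimal sketch of holding packages back on Arch via pacman's own config - the package and group names here are purely illustrative:

```ini
# /etc/pacman.conf - hold these packages at their installed versions;
# `pacman -Syu` skips them while updating everything else
IgnorePkg   = linux nvidia

# whole package groups can be held back the same way
IgnoreGroup = gnome
```

Gentoo offers similar, finer-grained control through version masks in /etc/portage/package.mask.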
Such a shame that Openstack is in such a sorry state due to shoddy software engineering practices and each part of it seemingly developed in isolation by different teams with wildly different APIs and conventions. SDN alone does not change how bad the rest of it is, in particular I/O performance.
I had noticed the recent switch to IPv6 addresses by default a couple of weeks back. Yes, some sites will route over IPv4, some over IPv6, depending on server support.
This applies to all connections on any IPv6 compatible machine and for all applications. Things like corporate VPNs and any other routing programs (Tor, p2p, skype et al) will therefore leak traffic over IPv6 since most of them are only setup to route IPv4. Watch out..
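Until a VPN client is known to handle IPv6, one blunt mitigation on Linux is to disable IPv6 system-wide - a sketch assuming a standard sysctl.d setup:

```ini
# /etc/sysctl.d/99-disable-ipv6.conf - blunt workaround: turn IPv6
# off entirely so no traffic can bypass an IPv4-only VPN tunnel
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

# apply without a reboot (run as root):
#   sysctl --system
```

This is a sledgehammer - it also breaks legitimate IPv6 connectivity - but it guarantees nothing leaks outside an IPv4-only tunnel.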
People that have not seen the cost of music fall since they were children are less likely to pay for something they have had easy, direct access to since at least their teenage years - and they continue to get it via services like that unknown underground pirate den, Youtube. News at 11.
"These adverts on my DVDs are so annoying, I mean I already paid!"
"Downloads don't have ads"
And you wonder why they are popular. Hint: they provide a better service.
Compete on quality of service, or die.
Your university is patently incorrect and doing its students a gross disservice.
The demand in the industry - in the EU and UK in particular - for *good* developers far outstrips supply, and really great developers with lots of experience who can work in 'architecture' type roles can pretty much name their own salary. To clarify, I am not referring to web UI type development.
No amount of outsourcing changes this, as it quickly becomes apparent that the people the job has been outsourced to cannot do it without significant oversight and resources from a more competent developer somewhere else. Typically you will hear something like 'Why not give it to the team in X country?'. Answer: 'They can't do it'.
On the flip side, demand for network engineers has fallen off a cliff in the last 10 years with the advent of software-defined networking, cloud services and the like - and this is coming from someone that used to work in networks. Most dedicated network engineers at my workplace are either (a) old and retiring, (b) made redundant, or (c) moved on to another role if they have the skill for it.
Network security, sure, though information security is a broader subject with more potential and, again, there is not much there wrt network security specifically. Penetration testing is at the low end of the food chain, and to get hired as an 'information security analyst' you need an information security degree and substantial experience.
I fear the real reason the university does not recommend it is because hard (as in high level) programming courses are just not being taught in many top end universities and I sadly say this as a graduate from one such university. There is not much demand for the type of programming we were taught when I graduated, that is true.
It is also true that you should not bother if you do not enjoy it - without a desire to be proactive in your learning, to learn things in your own time and to move far beyond what was taught at uni (which, again, barely scratches the surface), you will not get ahead.
Take this as you will, but personally I'd do some searching on available jobs, their salary and number of open positions before you make a far reaching decision that will impact your entire professional career.
Perhaps if Microsoft wanted better developers, they might spend their considerable resources in actual engineering rather than inventing new ways to annoy people and lock them into whatever platform they happen to be pushing.
Want great developers? Make great developers actually want to work for you.
In that sentence, 'remnants' basically reads as "matter kicked out by forces resulting from a merger of two or more black holes".
Nothing came out of the singularity as such - the gravitational waves are a result of mass being transformed into energy by the massive forces involved in a merger of two singularities and their accompanying matter within the event horizons.
They in turn seem to have an effect on the matter pulled within the event horizons of the merging black holes but not yet part of either singularity, which causes the kick and allows 'remnants' to escape as a result.
Hi there El Reg,
While I am wholly on board with offering differing views on the same subject - it is indeed one of the reasons why I feel El Reg is a cut above most online publications - that would best be applied not just to articles appearing on the site but to comments to said articles.
I am sure the pattern of comments rejected by the author of this article whenever a disagreeing viewpoint is posted has not escaped your notice. For reference, see the rejected comments on this very account, and those of the many others who have commented along the same lines on many an article.
Insisting on offering differing viewpoints in articles but rejecting comments that disagree with the article's view points can only be described as hypocritical, as much as it saddens me to say.
Once again - what you do with your own code is up to you. If _your_ code interfaces with GPL licensed code, _others_ cannot re-distribute binaries of _your_ code in any shape or form, though _you_ can.
Substitute you with either nVidia or Oracle. nVidia can and do distribute binaries. Others cannot and do not so they do not fall foul of the GPL. Same with ZoL.
Distribution is the constraint which you are - one can only presume purposefully - ignoring.
Simple - the copyright holder, for example nVidia, can build and distribute binary blobs of their own code.
End users taking those binary blobs and loading them in their kernel does not violate GPL, though it does taint the kernel - see dmesg output after loading nvidia kernel module.
nVidia cannot, on the other hand, provide their source code to be included with the kernel unless said source code is also GPL licensed.
For Ubuntu to build and distribute binary blobs of ZFS, they would have to take ZFS source code, licensed under CDDL, and combine it with kernel source code licensed under GPL, which is not allowed under the terms of the GPL.
It is the act of distributing the resulting binary which is not allowed under the terms of the license in the first place, not users loading binary blobs of any source code license in their kernels.
The above is the reason ZFS on Linux is only (for now) available as source code: users are required to build it themselves, and to take the risk, as individuals, of being sued by the copyright holder.
"<..> it’s more clear than ever that Congress needs to revisit copyright in this country — except, given how much money rights holders donate to various campaigns (both Democratic and Republican, though somewhat more flows to Democrats), it’s incredibly unlikely that any new law would be remotely pro-consumer. Nevertheless, the entire point of the Copyright Act of 1976 was to address the many technological inventions since Congress’s last examination of the law in 1909. A similar reconsideration needs to take place today to address questions of streaming, retransmission, device-shifting, time-shifting, fair use, and DVRs. What we have today is a hodgepodge of court decisions on these topics."