Of course Five Guys fries and burgers taste better than MaccyDs or BurgKings... given it's like 4 times the cost. Who'd pay 4x the price for a burger and fries that were down at McDs level, or worse, BKs level?
Title says it all: why is this all being built into SystemD? What should at most be an optional plug-in is instead built in, and more likely it's a feature that should not exist at all.
SystemD should be stripped back to being just an initialisation system, with the process manager made a plug-in and any other features made plug-ins/modules... OPTIONAL plug-ins/modules, so that administrators can actually control what is on their system. Stop making a stupidly monolithic solution where a monolithic solution was never wanted or needed.
Wait... don't MS and Oracle both have Linux distributions now? Sure, for MS it's for their internal routing in Azure, called SONiC, and Oracle has Oracle Enterprise Linux (OEL), which was based on RHEL. I don't think either would want to call Linux theft now, given that they are both using and redistributing Linux distros.
Seems I skimmed a bit too fast, apologies. Personally I haven't experienced anything like that, but then I generally try to avoid DHCP in the first place where possible. Static configurations have (IMO) always been more reliable, though I understand that static isn't always possible, of course.
My playground server is currently down due to moving, but I suspect the behaviour mentioned here is controlled within NetworkManager.conf, perhaps by the dhcp setting. I can't test to be sure, so I can't currently argue that it's just configuration.
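If it is configuration, the place I'd look is the [main] section of /etc/NetworkManager/NetworkManager.conf. As a hedged, untested sketch (I can't verify against a live box right now), the dhcp key there selects which DHCP client backend NM uses:

```ini
# /etc/NetworkManager/NetworkManager.conf -- sketch only, not verified here
[main]
# "internal" is NM's built-in DHCP client; "dhclient" has also been a
# supported value on RHEL-family systems.
dhcp=internal
```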
Whoever said doing things properly doesn't matter? Nobody ever made that argument. The argument is about having realistic expectations of what is actually achievable, instead of believing everything can just be fixed with software, such as hardware failures or badly configured systems/servers.
As far as run-flat tyres go, they are another example where there are fundamental issues. Run-flats are more expensive and generally degrade faster than conventional tyres, blowouts are still possible with them, and they generally only protect against damage in specific areas; punctures in the sidewall may leave the tyre completely unusable. Meanwhile, self-sealing tyres have come along and, for many small punctures, are an alternative way of keeping the tyre working while you look to get a service/tyre replacement.
You still seem stuck in the mindset of where NM was in CentOS/RHEL 6; it was majorly reworked for CentOS/RHEL 7 and no longer randomly breaks things. What shipped in CentOS/RHEL 6 was definitely a daemon aimed at laptops, and it was terrible by all accounts.
If you want to go for RHCSA/RHCE now, you have to learn NM properly. Today it has vastly superior tooling to where it was on CentOS/RHEL 6 and actually knows how to play ball with a server. In fact, NM has a new alternative to bonding: teaming. Further to this, you can pass it configuration in JSON, and it actually bothers to read what's in /etc/sysconfig/network-scripts/ too now. The only reason I see for disabling NM in CentOS 7 is the traumatic memory of what was given in CentOS 6.
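As a sketch of the teaming and JSON points (the device names ens3/ens4 and connection name team0 are just examples, not from any real box):

```shell
# Create a team interface, passing the runner configuration as JSON:
nmcli connection add type team ifname team0 con-name team0 \
    team.config '{"runner": {"name": "activebackup"}}'
# Attach two physical NICs as team slaves:
nmcli connection add type team-slave ifname ens3 master team0
nmcli connection add type team-slave ifname ens4 master team0
```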
And yet neither the Linux kernel nor systemd is limited to only supporting single embedded systems, and nor is the software behind most web sites hosted on the Internet.
And indeed, there are cases where software fails because it is a heap of crap, but this is just one of MANY cases. Software also often fails when it is running on dodgy hardware, or when the supporting system is not up to spec or is ill-configured. Software also fails because of unforeseen events; it's impossible to always know what a system is going to deal with, and blaming it ALL on software developers is like blaming the car manufacturer for every punctured tyre that car gets, even if it ran over a nail.
This is why I say the previous statement was arrogant: it was so limited in scope, assuming some idealistic world where you can just code away all the issues in the world but software developers just won't do their jobs... at least that is VERY much how it came across.
Obviously I understand there is a difference, hence the very wording "there are multiple cases where"; this is not an all-encompassing phrase to begin with. I could only see you raising this point if I had used an all-encompassing phrase, which I didn't. So why are we here?
"Well behaved computer-based systems have been achievable for decades, but have become unfashionable, especially since uncontrolled system failure became widely acceptable. Discuss..."
Why would "well behaved computer-based systems" become unpopular unless there was something fundamentally wrong with them? The answer is that there is something fundamentally wrong with them almost every time. It is impossible to build an entirely crash-proof computer, because components degrade and resources get pushed too far, and there are simply better ways to achieve uptime than relying on a design whose systems are single points of failure.
NetworkManager isn't something you should disable as of RHEL/CentOS 7; it's actually better (IMO) than the legacy network service there. The reason most people hate NetworkManager is how terrible the implementation in RHEL/CentOS 6 was.
One of the reasons NetworkManager is better in RHEL/CentOS 7 is that it separates the connection from the device, which (in my opinion at least) gives you far more power over your network configuration and more versatility all round. It does require a lot more learning, but at least bash completion exists.
Your response seems filled with arrogance to me, it almost sounds like you're saying "if only developers could do their jobs".
Crashes happen for various reasons, and there are multiple cases where crashing is in fact preferable to the continued running of a service. There are major differences between daemons running on a server and some terrible app running on a smartphone. More so, a lot of the time daemon failures aren't even the software's fault but the fault of the administrator for not properly configuring applications in the first place and supplying insufficient resources for what they are trying to run. A daemon killed by the OOM killer, for example, is not the fault of the software but usually the fault of the administrator(s) responsible for that server. Further, there are hardware failures and failures in other software/libraries, which also add to the stack. No serious developer is going to write even a basic web daemon from scratch in assembly.
So this isn't about developers being amateurs or designing bad software that crashes a lot, but rather a mixture of many complex issues being over-simplified by somebody who is naively idealistic about what is truly achievable in the real world.
When you go back over an old post and see a silly response... Chocolate, as in the solid brown bar that people recognise today, was invented by Joseph Fry. Yes, cocoa had been known for a long time and cocoa powder was available, but it was in England that the modern-day solid chocolate bar was invented.
"Not only has the MAX a problem with the single sensor, its main problem , according to pilot friends, is because they fitted the extra powerful CFM LEAP engines (same as actually work on newer Airbus320) but the 737 MAX geometry, location of engines , is compromised by its lower loading height (almost no lifter required to delicately toss in the hold-bags)"
You mean the entire, well-documented reason that MCAS was installed/required in the first place.
Software can change MCAS's ability to repeatedly fight the pilot's override; MCAS was relentless and definitely required some re-training, which Boeing previously denied. Also, there was a vital gauge that Boeing decided to sell as a $10K optional extra.
Actually it's because the repeal of NN was essentially blocked and reversed using the Congressional Review Act, meaning you never saw the loss of NN. Now most states have enacted their own NN laws, which the FCC would like to imagine it can repeal, but the courts do not appear to agree. So you'll never see NN disappear, because even if the FCC removed it at the federal level now, it'll remain at state level in almost every state.
So... maybe, just maybe, critical thinking is a skill you shouldn't be preaching, because you clearly didn't do your research.
"Surely, if you're working at the "global scale" you claimed in this very thread, you're using redundant servers and automatic failovers, load balancers and the like - and making sure server B is functioning before rebooting server A?"
This might be a surprise, but I haven't always worked for the same company using the same set-up. I previously worked for an MSP, before that in DC ops, and before that in app/DB dev. Unfortunately, while working at said MSP, the sales team would sell some very stupid and unsuitable solutions, and my complaints about the terrors they sold fell on deaf ears.
"Bloody hell, when I was running a couple of mate's websites off of a spare laptop in the linen cupboard under the stairs, I had a backup clone running at another mate's house where I could redirect DNS before shutting down/doing work on the main machine!"
"That's only really feasible if by "global scale" you mean - all my kit is in easy-to-reach, or otherwise manned datacentres. Which isn't always true."
Not even remotely true. You can have these things called "spares", and a lot of DCs have on-site technicians or engineers if it comes to requiring physical replacement. Some companies will swap entire racks out if a single server fails, with equipment all over the world; the fact that a sysadmin might be in the US isn't going to prevent them from replacing a rack in Austria. If you're at that scale, you aren't ever going to be relying on a single person; that'd be dumb past belief.
"Yes, you can drain the traffic and serve from elsewhere, but now you're dealing with increased latency for the duration of a remote re-install to somewhere with shakey international connectivit, or, waiting weeks for hardware to be swapped out (look how often the boat to TL is ;) ). All because some berk didn't like text logs?"
Well, apparently the set-up I'm dealing with is considerably better than the one you're dealing with, since where I work there are solutions to ALL those issues already. If you bother to do your set-ups correctly from the start, you're not going to end up with most of these problems.
"That shakey international connectivity btw? Also a bit of a shot-in-the-head for centralised ELK/Splunk. Working on Global scale doesn't always mean you're solely in big D/Cs with masses of international uplink, sometimes you're in the back-of-beyond, closer to the users"
If you're running your systems on the customer site, you can still have a localised syslog server. Sure, centralised ELK/Splunk would be better, but that doesn't stop you logging to more than just the local server.
I respect your opinion. A lot of people just look at it as sysv vs. systemd, but in reality it should be judged on its pros and cons. As I've said somewhere above, systemd is great for enterprise but terrible for expert users. As most sysadmins are expert users, they naturally do not like systemd compared to sysv, which is great for expert users but (in my honest opinion) isn't a truly enterprise-worthy solution. I personally would like to see something better than either sysv or systemd.
It's an important metric when you've got a customer shouting down your ear demanding their services be restored NOW. What you value and what your employer or your customer values are highly different things. Generally customers love high uptimes; they never like to see their services or servers go down, even if it's just to apply monthly or security patches on servers running in high availability.
There are quite a few. For starters, the lack of dependencies: processes are just loaded in a pre-defined order, so if you have one daemon reliant on another service and the first service fails, sysvinit will just continue to load the latter service. This means having to manually detect whether the previous daemon is running and terminate, or worse, letting the dependent service load when perhaps it should remain down.
The above plays into the fact that sysvinit really has no idea what state the processes it starts are running in.
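For contrast, a systemd unit states its dependencies explicitly, so the dependent service is held back or skipped if what it needs fails to start. A minimal illustrative sketch (mydaemon is a made-up service, depending on PostgreSQL as an example):

```ini
# /etc/systemd/system/mydaemon.service -- illustrative only
[Unit]
Description=Example daemon that needs the database first
# Requires=: if postgresql.service fails to start, this unit is not started.
Requires=postgresql.service
# After=: order our start after postgresql has come up.
After=postgresql.service

[Service]
ExecStart=/usr/local/bin/mydaemon

[Install]
WantedBy=multi-user.target
```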
Single-threaded initialisation: it's mainly a speed thing here, so it's opinionated, but in an enterprise environment this is actually meaningful. Longer boots mean increased downtime for any reboot, such as kernel updates. It's also not great when you have a customer on the phone shouting for their site to be brought back up right now, you check sysvinit and see it timing out on some process, and you later find out said process was trying to run a reverse-DNS lookup that never got answered and so just sat there for 5 minutes holding up the entire boot process.
But the biggest issue is generally other system administrators who put kludges into init scripts, so you end up with bespoke servers using band-aid solutions that later break for various reasons (e.g. a package update replacing said script). This often comes down to the quality of the sysadmin, but in a real enterprise environment you shouldn't be running bespoke, patchy server configurations/scripts.
"And when JournalD shits itself and stops forwarding? It happens."
That's hardly an excuse not to be sending logs to syslog, however. Just because you're using a syslog server doesn't mean you automatically stop keeping local copies of the same log files too.
"That's fine if you're working on a very local scale. It doesn't work nearly so nicely when you scale up to global scale. Don't get me wrong, it's a nice tool to have, but you don't want to have to rely on it alone.
It's also not much use if journald gets stuck and stops forwarding, or if your box is failing to boot in the first place."
Considering I work primarily at global scale: no, it's definitely nicer at global scale, more so when you can use something like Splunk or ELK to compare error logs from different servers. It's also nice when, say, /var goes read-only but you still get logging occurring, because /var can go read-only; it happens.
As for servers that have boot issues: if you're truly at global scale then you'd probably just swap the server out. Drain everything it is doing to another server and get it diagnosed later, removed, replaced or re-installed. If you're at global scale, you should have decent high availability in place, after all. No point worrying about servers going bespoke because somebody came up with a cunning plan to fix some strange boot issue, or having to worry whether somebody really fixed the corrupted root file system, etc.
If you're using imjournal, then your logs on the syslog server will be readable. You haven't even tried looking into it, have you?
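For anyone who hasn't looked: rsyslog's imjournal module reads the binary journal and re-emits entries as ordinary syslog messages, so what lands on the syslog server is plain text. A minimal config sketch (the state file name is arbitrary):

```
# /etc/rsyslog.conf fragment -- sketch only
# imjournal pulls entries from the systemd journal into rsyslog's pipeline;
# StateFile records how far into the journal it has read.
module(load="imjournal" StateFile="imjournal.state")
```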
Also, systemd reports the state of the process to you; it's pretty simple: systemctl status <name>. With sysvinit you'd need a cron job or another daemon constantly checking against the script, hoping you haven't missed any cases in your scripting where the process might not be in the expected state. With systemd you can have the process respawn itself on failure, or you can make it e-mail you.
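A sketch of what that looks like in a unit file (the notify-email@.service helper is hypothetical, standing in for whatever mail-on-failure unit you write yourself, and mydaemon is a made-up service):

```ini
# Illustrative fragment only
[Unit]
# Start another unit whenever this one enters the failed state;
# %n expands to this unit's name, so the helper knows what died.
OnFailure=notify-email@%n.service

[Service]
ExecStart=/usr/local/bin/mydaemon
# Respawn automatically on crash or non-zero exit, after a 5s pause.
Restart=on-failure
RestartSec=5
```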
Seriously, in the time I've worked at an MSP, the number of sysadmins I've seen generate terrible little scripts that they are so proud of, thinking they're production-applicable when in reality they should get a very fast rm -f... is too many to count. Scripts are great when done well; the problem is all the people who think their scripting is up to par when it isn't. Good scripts need heavy peer review, testing and maintenance before being deployed properly (not just cut/pasted or SFTP/rsynced into place). The 25-line script somebody drops onto some random server is a bad practice that just makes things bespoke, and that's so bad for production.
I am not saying you can't disable it, you definitely can. However, Red Hat has been building most of their tooling in RHEL 7 around it now, like nmcli. Having gone for RHCE, I was not able to avoid using nmcli or firewall-cmd, and in some ways I can't say they are bad tools either, but they were forced on to me. Not sure if it's Red Hat being too pushy with their new technologies or those of us being too clingy to the old ways.
It's really very stupid, and there must be much easier solutions than "the user must be logged in". I am surprised he didn't defer to Cockpit or something, actually. Even a flat-file database that stores the encryption key encoded using the relevant private key sounds at least like an option... but then where would he store it... oh right, he doesn't want it in /etc, does he.
You'd need to take too narrow a view to pin all of systemd's adoption on this. Obviously, SystemD being developed by Red Hat engineers such as Poettering, we saw it first coming from the Red Hat/Fedora side. But that does not explain adoption by other distributions like Debian or Ubuntu. I think it more likely that there are actual real-world reasons to explain this, and I'd put it back to systemd being for the enterprise and not for the expert user.
Obviously there are still many distributions that do not use SystemD, and some branches of both Debian and Ubuntu that do not use SystemD.
sysvinit was seen as legacy way before SystemD became a thing, and there have been multiple previous attempts to replace it (e.g. runit). If you're solely relying on the logs on the local server, then you're already doing something wrong in the first place; at minimum have a proper syslog server and forward your logs, if not something like Splunk or ELK. Grep is great, but using it to scan 1GB+ log files on a loaded production DB server is just a no, very bad practice.
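Forwarding is one line of rsyslog config on each client (loghost.example.com is a placeholder, not a real host); local files keep being written as normal, and this rule additionally ships everything to the collector:

```
# /etc/rsyslog.d/forward.conf -- sketch; substitute your own collector
# @@ forwards over TCP; a single @ would use UDP instead.
*.* @@loghost.example.com:514
```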
You can still use text-based files in systemd for launching services/daemons, but the issue is that this gives no real comprehension of process state, and that is problematic in a professional enterprise environment, which is part of the reason why sysvinit really is legacy. There's also the whole single-threaded initialisation process and multiple other issues, since sysvinit has no real comprehension of dependencies either.
So remember when NetworkManager came out in RHEL and it was originally for laptops switching between different wifi networks, but was slowly modified for server usage, and then by RHEL 7 it was basically put down as a requirement for RHEL servers... this whole "it's for laptops" comment concerns me that this'll follow the same route NetworkManager took to eventually become a requirement.
It is easy to say that systemd leaves a lot to be desired, but the problem is that nobody else made an alternative that gained any kind of wide-scale adoption to replace the very much legacy sysvinit. So yes, while there are systems much more in line with the Linux design philosophies, they weren't popular. Generally speaking, this can be put down to the alternatives leaving far more to be desired or failing to meet some basic criteria in real-world usage.
The last good thing I remember Atari doing was Rollercoaster Tycoon 3, but they fell out with the developers, Frontier. So when, years later, Frontier and Atari went head to head on new theme park tycoon/sim games... avoid Rollercoaster Tycoon 4 and get Frontier's Planet Coaster instead. Atari cheaped out and went for the cheapest developers they could find; after going through two different developers they ended up with a mobile games developer... for what is supposed to be a relatively high-end PC game... it did not end well. Planet Coaster, meanwhile, continues to go strong.
As for Frontier: well before Planet Coaster they had already made Elite Dangerous, and since then Jurassic World Evolution, with Planet Zoo due out this November.
People literally died from the heatwaves this year in the UK... great time to wear a winter coat, for sure. We have had two extremely hot summers in a row; it is becoming far more commonplace now, with 2018 competing for hottest summer on record and 2019 having the hottest July day on record for the UK. This is what some people would call a worrying trend.
Not sure I can even follow this one. Are you claiming that US law is better than EU law? That doesn't sound remotely true, given also that in this case it is the EU and not the US that brought Qualcomm's anti-competitive behaviour to the courts. Is there perhaps a similar action in the US?... given the parent companies are both US companies... well?
Most US law is traditionally based off of UK law anyway, given the Constitution was based off of the Magna Carta. Also, the banning of Huawei is just Trump throwing his toys out of the pram because a Chinese company is doing better than American ones. The UK actually did a FULL analysis of all the security issues within Huawei's 5G implementation and found numerous issues, but none in regard to intentional backdoors or the such. Everything you're saying here just seems to be faith-based patriotism with zero facts, logic or rationale behind it.
You'd need to prove Uber are undercutting first, and with the way Uber works that is near impossible to prove. Being cheaper is not the same thing as anti-competitive behaviour; underselling by artificially lowering prices to force out competition would be required for a case, whereas it could be argued that the prices Uber charges are in line with its costs for the services provided. If you could prove that Uber was intentionally forcing prices down past that point with the intention of forcing out competition, then you'd see a case.
I've only been made redundant once, but it was to my mega advantage since I was already looking for another job. I knew I could earn more than I was on, by a significant margin, and after my 3rd interview in a particular week I got a phone call from the team leader announcing the redundancies were coming.
Well, I eventually did find another job and had the luxury of being paid a considerable sum to leave at the same time! I did, however, have the indignity of having to travel to train up some of the replacements, who I felt sorry for, since I knew they were being tossed into the deep end of a swimming pool packed to the brim with liquid manure, as the company management and directors had no idea how operations worked in that company.
The idiot here is you. As the article has ALREADY explained, what Chinner said was in a context. What Linus did is called a quote mine, and you've REPEATED that quote mine. This quote mine was then used by Linus to construct a strawman, which Linus then attacked.
All you've done is repeat a quote mine and shown yourself to be the real idiot here.
"but in general it's just that: a bald faced like. Chinner. Just ignore."
The bald-faced lie is saying that this was a general statement; it was made IN A SPECIFIC CONTEXT.
Most companies inflate their value as much as they legally can. As far as I am aware, HP has not provided any evidence that Autonomy did anything wrong... meanwhile providing more than enough evidence that they did not know what they were doing.
The HP claim is that Autonomy inflated the income from clients that were also service providers to Autonomy, which is a claim, but I still believe at this juncture there is no evidence that Autonomy artificially did anything here, nor of the scale of it. Without solid evidence, and without even an estimate of the scale, it really leaves HP in a very haphazard situation, since even after all this, HP would still need to show by how much it made them over-estimate the value of Autonomy.
Long story short: there was evidence, even without HP's failure of due diligence, that Autonomy was not even worth 4 billion, since IBM literally laughed Autonomy off in their talks at that figure. IBM had done its due diligence, so why did HP purchase at nearly 3x that value? There are a lot of issues and unanswered questions on HP's side here. If HP cannot answer those questions (and right now it does not appear that they can), it leans this heavily towards B. Even if HP could answer these questions, they still lack the fundamental evidence to prove A... HP have effectively trapped themselves in a lose-lose situation here.
Biting the hand that feeds IT © 1998–2020