Sysadmins ought to know what they are doing.
Most Linux distributions have a significant focus on security. This does not mean they are necessarily ready for production out of the box. Tools like SELinux, excellent firewall options, and robust access controls can make Linux exceptionally secure. Despite this, actually deploying a Linux system into production still requires …
Not sure I agree about using Webmin! In fact, a new Linux user should not deviate from the prepackaged stuff in the default repos IMO. It also does not help you learn what is really going on if all you do is use a nice pretty GUI - if you want to do that you may as well use Windows if you can afford it. It's hard to move away from a GUI, but it is well worth it.
I think it's a shame that SELinux is not enabled by default on Debian though, and that the default firewall is set to allow everything! Don't forget to close down your IPv6 too! Time to switch?!
> Not sure I agree about using Webmin!
Webmin is a Good Thing(tm). It dramatically increases the discoverability of a server's features for the uninitiated.
But as I say every time the subject comes up, it has two significant problems: you need to be *very* careful if you try to have multiple users (as it doesn't really have them - they're all subsets of root), and you shouldn't do much sendmail administration with it (it writes the sendmail.cf file directly, meaning the sendmail.mc file gets out of sync, so future .mc modifications will roll back your webmin changes...)
Webmin is usually one of the first things I install on Linux just so I can get a visual overview of the system. I generally don't leave it running though...
It's also handy for finding the name/location of the config files you are looking for but can't remember (locate is great when you can remember some or all of the name of a config file).
> Sendmail module on webmin has "M4 configuration."
Yes. But if you edit that, it'll throw away stuff you've done with the other config tools, which edit the .cf file directly.
This is why I raise the issue. Every time, you tell me you get it - then post stuff like this.
Webmin is a fine tool, but it runs the risk of rolling back changes if you edit the .m4 file after you've used it to effect other changes. I wonder that you keep trying to ignore this very simple fact.
If you use the M4 config editor, it will indeed blow away all other changes in the .mc. I don't know that I'd ever "ignore that fact," Vic. It's pretty much the way M4 is supposed to work. If you use tools that edit the .mc directly - or you enjoy going in and editing the .mc by hand - then do not use the sendmail module in webmin. Period.
That said, I was taught emphatically to never edit the .mc file in sendmail directly. In fact, I have been berated and mocked by sendmail devs for doing so. If I go onto a sendmail forum for help, or I try the mailing list, etc...I am repeatedly and forcefully told that I am never to do anything outside of M4. M4 is where configuration changes are "supposed" to be made, and so I make them there.
Things like virtuser and aliases are generally include files nowadays, so I can use the Sendmail webmin module to edit those without clobbering my config every time I touch M4. This means that the Sendmail module will allow me to do things “properly,” which means that when I need support from the community, I have at least a snowball’s chance in a neutron star of getting it.
That said, I would never berate someone for editing the .mc directly. There are so many different ways to do something in Linux that I don’t feel it’s my place to tell someone that their method is “wrong,” so long as it works consistently for them. I don’t have the jihadi attitude about such things that is so prevalent amongst Linux nerds.
So if you are following the “rules” as laid out by the Sendmail devs, you are using M4 to generate every config change, with aliases, virtuser, generics etc pulled out as includes so they don’t get clobbered by M4 regeneration. In that case, I highly recommend the Sendmail module, because it works…even when something else edits the M4 or the includes.
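For anyone unfamiliar with that setup, here is a sketch of what "pulled out as includes" looks like in the .mc. The file paths are the common Linux defaults and may differ on your distro:

```m4
dnl Keep the lookup tables in external files so that regenerating
dnl sendmail.cf from this .mc never touches their contents
FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
FEATURE(`genericstable', `hash -o /etc/mail/genericstable.db')dnl
define(`ALIAS_FILE', `/etc/aliases')dnl
```

You can then edit virtusertable or aliases as often as you like - in Webmin or by hand - and an M4 regeneration only rewrites the .cf, not the maps.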
If however you edit the .mc directly, the sendmail module in Webmin will screw up your Sendmail something fierce and you should stay away from it!
Postfix isn't required for a great many cases. Such as when the local mail elements are being used to mail reports on behalf of a web application, or when you are using the Linux system's mail subsystem only as a pre-filter front-ending another system. (ClamAV + Mailscanner + Spamassassin, etc.)
They make for good, cheap, easy pre-filters for Exchange, for example.
I have spamassassin attached :
smtp inet n - - - - smtpd -o content_filter=spamassassin
etc piped at by postfix daemon. It works. ClamAV would not be very different, I guess. Postfix has a sendmail compatibility interface. gnu or bsd mail-utils and/or mutt are very handy too.
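For context, the classic way that spamassassin filter gets wired up in master.cf looks something like the following. The spamd user and the binary paths are typical but distro-dependent, so treat this as a sketch:

```
# /etc/postfix/master.cf
smtp      inet  n       -       -       -       -       smtpd
  -o content_filter=spamassassin
spamassassin unix -     n       n       -       -       pipe
  user=spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
```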
I heard that there are instances where sendmail does a very complex job that postfix cannot handle. I haven't seen any of those myself.
@eulampios where exactly did I say that postfix couldn't or wouldn't use spamassassin, mailscanner, clamav or other?
I said you didn't need postfix to use them. The implication was not that postfix cannot use these technologies, but that postfix is more complicated to configure than sendmail. The further implication of that statement is that postfix is generally A Better Option, but that using the more complex tool isn't necessarily always required.
I wouldn’t want to run a full-bore business off of Sendmail – though I do admit that my PERSONAL email server is Sendmail – Postfix or QMail are better options than Sendmail for an actual email server.
That said, if I am not using the system to store emails – merely to send them on behalf of a web server, or to filter-and-forward (with or without LDAP lookups) – then I prefer to use the simpler tool. It is kinder to future admins who in most cases won't have 30+ years *nix experience.
@eulampios please try someday to front-end an LDAP-based email service with Postfix. Exchange is common, but I have LDAP QMail systems in the wild as well. Sendmail is significantly easier to set up as a simple mail filter for LDAP-backed systems. (You want to be able to have the MTA do LDAP lookups so that you can do simple things like reject mail for addresses that don't exist, and banhammer systems that repeatedly try multiple non-existent addresses.)
Similarly on many Linux distros - CentOS is a great example - you don't need to "set up" sendmail at all. You simply "yum install apache php sendmail" and suddenly your PHP scripts can send e-mail out. (In my case, I trap all outbound mail with an edge device and apply whatever filtering I need to there, but the principle of "it Just Works" remains.) If you need to make minor changes in Sendmail, use the M4 config (text-based) or the Webmin module.
Postfix is only easier if you are actually using it to host e-mail, rather than simply to process email. (In which case; Postfix all the way; never use Sendmail to host email!)
In more modern distros "apt-get install postfix" is enough to get a running mail system hosting whatever domain you want with a local mail store and an external mail server for sending if you so desire.
I don't see how you consider it hard to use postfix with LDAP since postfix supports it natively and it's mostly just a matter of putting "LDAP:" instead of whatever other store you could have used.
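To illustrate, a minimal Postfix LDAP lookup table needs no PAM or system-level LDAP configuration at all - just main.cf pointing at a map file. The hostname and search base below are placeholders for your own environment:

```
# main.cf
virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf

# /etc/postfix/ldap-aliases.cf
server_host = ldap.example.com
search_base = dc=example,dc=com
query_filter = (mail=%s)
result_attribute = mail
```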
It also has some nice config items I use to restrict message size, ban certain attachments etc before the message reaches my mostly perl based filters.
Postfix seems to work out-of-the-box if the LDAP server is located on the same system as postfix itself, and that system is configured to talk to it, etc.
In my experience, getting sendmail to talk to an LDAP server is two lines in the M4 configuration. I don't have to configure PAM, the LDAP config file or anything else. Sendmail can be set up to talk to a remote LDAP server without having to involve or configure another thing on the Linux box.
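For the curious, those two M4 lines look roughly like this - the server name and search base are placeholders for your own environment:

```m4
define(`confLDAP_DEFAULT_SPEC', `-h ldap.example.com -b dc=example,dc=com')dnl
FEATURE(`ldap_routing')dnl
```

Regenerate the .cf from the .mc and sendmail talks to the remote LDAP server directly, with nothing else on the box needing to know about LDAP.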
Postfix only ever seems to work with a remote LDAP system if the Linux box is itself configured to authenticate against that LDAP domain. I prefer to not have to join my servers to the domain in order to do simple lookups.
That said, if you happen to have a link to any chunk of the postfix manual (or a decent walkthrough) that can show me how to set up postfix to a Windows Active Directory server without having to get the rest of the Linux system authenticating against AD, I'd be greatly indebted!
It sounds like postfix was using system accounts for delivery. That is the default and good for small setups but far from ideal when it comes to larger setups. I am not an expert on LDAP but I dug this up for you and I hope it helps.
Thank you for the link; I will give it a more in-depth look tomorrow. At first glance, it looks like a walk-through focused on creating a postfix filter server using a local LDAP setup. I'll poke at it some and see if I can figure out how to tie it back to Active Directory instead of a local LDAP setup. If I can figure it out, I'll do up a howto.
It would be nice to have a simple way of front-ending an exchange (or other) mail system using Active Directory as the user interface. Easy in Sendmail, but so far I've never made it work without Postfix using PAM to do LDAP lookups.
If I can make it as simple as sendmail...that's a huge step forward!
> I suggest postfix, which is much easier to set up
That's somewhat subjective; personally, I find sendmail much easier to set up than postfix, but that's almost certainly down to the fact I have far more familiarity with it than I do with postfix...
> BTW this would be useless with the webmin approach.
There is a webmin module available for postfix. I've no idea how well it works...
> If you use tools that edit the .mc directly - or you enjoy going in and editing the .mc by hand
> - then do not use the sendmail module in webmin
That's pretty much what I keep telling you. And you keep telling me you know better.
> I was taught emphatically to never edit the .mc file in sendmail directly
I very much doubt you were told that.
You were almost certainly told not to edit the .cf file directly.
> I am repeatedly and forcefully told that I am never to do anything outside of M4.
So you *weren't* told not to edit the .mc file. Like I said, then...
> M4 is where configuration changes are "supposed" to be made, and so I make them there.
And if you use the sendmail module in Webmin, that statement is no longer true, as it alters the .cf file directly for a number of options. Hence the warning I keep making, and you keep telling me isn't important.
I think we have a misunderstanding here; I don't use the Webmin module to edit any part of sendmail that would typically be part of the .cf. I use the M4 generator in the webmin module to edit those. I use the aliases and virtuser modules and so forth to edit those parts of the config that are in includes.
But you are correct; there are widgets in that sendmail config module that are flat out bad. They edit the .cf directly. And I would not ever recommend using those chunks of that module. It's useless! As soon as you touch the M4 config - which is how all config changes are "supposed" to be generated - then it wipes out your .cf, including any changes to the .cf that you made with the sendmail module.
So just don't use those bits. I don't. But I do find it a convenient way to edit M4, edit "include-filed" items like aliases and virtuser, as well as manage the queue.
We are both right here, I think. There is value to the Sendmail module; but not ALL of it. It does edit the cf directly, which it should not do…but there are still other parts of the thing that work properly. I see no reason to throw the whole module away based on that; you just need to know it – and Sendmail – well enough to know what bits you cannot use.
I’d be far happier if they’d just pull the section that directly edits the .cf out altogether…but I’ve simply ignored it for so long I just don’t notice it any more.
Maybe I need to do an article on that? "Webmin's sendmail module; which areas are safe, which break the rules."
Actually...now that I look at a fully up-to-date Sendmail module, it might be that only two sections of twelve are "dangerous/useless." From what I can see, "Sendmail Options" and "Network Ports" directly edit the .cf. The rest seem to edit include files that don't get clobbered every time I regenerate the M4.
That's actually doing better than I remember...
"It also does not help you learn what is really going on if all you do is use a nice pretty GUI - if you want to do that you may as well use Windows"
There is nothing wrong with GUI, if that's what you want, provided you understand its limits. Likewise there is nothing wrong with CLI, if that's what you want, provided you understand its limits.
Rename a thousand files - Use the CLI
Tell me which of those files is a picture of a dog - Use the GUI
Not at all snobbish. The Windows management GUIs are simply much better than webmin.
The problem I have with GUIs in general is just that you don't learn as much about how things work. You need this knowledge for when something breaks and you need to manually edit a zone file, change some permissions somewhere, or look at log files. Once you have this knowledge, then using a script to make the task quicker is a good option - or indeed a GUI. Obviously when viewing data or typing a letter you are not going to use the CLI, but for most server administration functions there is rarely a need. If nothing else, Webmin opens up another area of potential attack and is largely unnecessary IMO.
The CLI is best described as powerful, cryptic and dangerous. Many GUIs are basically wrappers around a CLI-like command parser, in part to make it easier for the user to carry out common tasks and in part to prevent something Godawfully stupid from happening because of a miskey. Anecdotally, I was using a CLI the day I accidentally reformatted the office dev server's hard drive. It would have taken a lot more effort to achieve this via a GUI.
> Tell me which of those files is a picture of a dog - Use the GUI
Out of a thousand files? Are you kidding? That's going to be a ton of work.
It would be far better to do that with an automated tool. The real problem is that no such beast seems to exist. You use the GUI because you don't have a better option, not because it is actually a better option.
I also frequently use Webmin on linux servers along with Virtualmin for web hosting set ups. It can speed up many mundane tasks and is a great way to make learning Linux sys admin more accessible.
Unlike other control panel type apps (I'm looking at you C'panel) it uses the default config files and packages, so you can use it for some tasks while going back to the terminal and config files for other things. Best of both worlds really.
Really makes life easier having it installed on any linux box.
Always been a bit gun-shy about webmin myself (though must admit I've not tried it in some years). Not for any failings inherent to itself, but more the worry of incompatibility with a particular distro. Admittedly for something like CentOS I'd expect this to be well-maintained, but I always feared (and did experience at least once) that a module might not match up correctly with that distro's version of the underlying app, and using it would hose something.
I never really looked into how (or to what extent) webmin's devs had made it respond correctly to different app versions and distro packaging inconsistencies. The former I'd hope they would have a stab at, the latter rests mainly with the distro maintainers. Anyone got some insight on this?
Paris loves a bit of GUI administration.
that security is overridden by rank. Most places I've worked are happier to pay for a few days outage due to a virus or firewall breach if it means the boss or other important lackeys can do as they choose.
I've even sat with some and shown them how security invariably matches almost perfectly with other company ownership issues and their little faces light up with recognition and then a few days later the desire to not have to remember what their job should entail overrides all else.
Someone has lied about computing being easy for 20 odd years and reality isn’t going to change that in a hurry.
Your ssl keys better be protected or when one of your machines gets hit, it and every other machine you connect to will become part of someone's ssh scanning network.
SSH Communications' ssh server allows key and password, but OpenSSH currently only allows key or password. Both products allow the key to be password-encrypted on the client end.
If you install fail2ban as suggested in the article, then that will automatically kill off the brute-force botnet attacks.
Alternatively, these extra firewall rules right before you accept the ssh-connection will limit the number of attempts to 2 per minute. Can also be used for other services, if you like.
iptables -A INPUT -m state --protocol tcp --destination-port ssh --state NEW -m recent --set
iptables -A INPUT -m state --protocol tcp --destination-port ssh --state NEW -m recent --update --seconds 60 --hitcount 2 -j DROP
I have an office full of web devs that use SCP with key to transfer files to the server. That's 6 people running SSH connections through our single outgoing IP, so take a guess what blindly allowing only 2 connection attempts per minute would do to their ability to get work done.
Fail2ban is a better option since it only punishes the bad users rather than everyone equally.
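For reference, a minimal fail2ban jail for SSH looks something like this - note that the jail name, filter name and log path all vary between distros and fail2ban versions, so check yours before copying:

```
# /etc/fail2ban/jail.local
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3
bantime  = 86400
```

Because only IPs that actually rack up failed logins get banned, the office full of devs sharing one outgoing IP never notices it is there.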
"Just changing the port SSH runs on doesn't make it anymore secure."
Maybe not, but moving it well up stopped a lot of noise from the script kiddies who were active several years ago. I don't just mean noise in the logs, but the disk on my home system used to rattle pretty non stop when it was on port 22; life got more peaceful once I'd moved it.
"Just changing the port SSH runs on doesn't make it anymore secure."
True. BUT: You can now configure your firewall to say "You tried to frob port 22: ON TO THE BLACKLIST YOU GO!" You can do that IMMEDIATELY: no waiting for a log in failure to be created. Under Linux, you can have a firewall rule immediately add that IP to a "recent" IPTables rule, and have that rule be checked at the very beginning of checking an incoming packet.
You can place the REAL ssh server on another port (with mandatory keypair needed, no keyboard-interactive, no root log in, FAIL2BAN in effect) and greatly reduce the amount of time J. Random ScriptBot can have at your system.
Ditto for any other well-known port you AREN'T making generally available: Put a (metaphorical) land mine there - touch that port, immediately be blacklisted.
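A sketch of that land-mine setup using the iptables "recent" module - the real SSH port of 2222 here is just an arbitrary example:

```shell
# 1. Anyone already on the blacklist is dropped outright for an hour
iptables -A INPUT -m recent --name blacklist --rcheck --seconds 3600 -j DROP
# 2. Port 22 is the land mine: touch it and you go straight onto the blacklist
iptables -A INPUT -p tcp --dport 22 -m recent --name blacklist --set -j DROP
# 3. The real sshd listens elsewhere
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
```

Rule order matters: the blacklist check comes first, so a single probe of port 22 blocks every subsequent packet from that source, no matter which port it aims at.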
@David D. Hagwood
Would you look at that? Amidst the dross; a sysadmin emerges! Yes sir, someone who understood exactly where I was going with "don't run the thing on a standard port" without having to have it explained. You don't run RDP on 3389 and you sure as all get-out don't run SSH on 22.
Them is the honeypot ports. Security through obscurity isn't security at all...but a minor dollop of obscurity is useful in catching the obvious idiots who like to eat your CPU cycles with their useless TCP packets!
Blows my mind that this apparently needs to be explained to "senior Linux administrators," but what're ya going to do, eh?
Using standard ports as a honey pot only works if you have total control over who connects to your system and over what links. The idea fails badly if you ever have offices in more than one country or have people who work from home.
Do you want to be the guy to explain to a paying client why their whole office can't connect just because someone ran some software on its default settings?
In situations like this - where I happen to know which IPs most people are coming from - I whitelist the IPs. In fact, I generally have the DNS names whitelisted alongside some dynamic DNS deployments for remote/home users.
Works wonders for more than just SSH. SIP phones for example…
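In iptables terms that whitelist amounts to something like the following. The addresses and hostnames are examples; bear in mind that iptables resolves hostnames only when a rule is loaded, so dynamic DNS entries need the rules reloaded periodically:

```shell
# The known office IP and a dynamic-DNS home user get in; everyone else is dropped
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s home.example.dyndns.org -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```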
>Just changing the port SSH runs on doesn't make it anymore secure.
Yes it does.
It doesn't make it *secure*. It's certainly 'security through obscurity', but it is a mite more secure than leaving it on the default 22.
>If you must expose SSH services at least lock them down to known source locations.
Great! Now can you tell me the IP source address of the wifi access point in Terminal 7 of Amsterdam airport, because my boss has just called to say that mail is down, and I need to ssh in quickly to reboot the server.
OK, I'm never going to be a paid Linux sysadmin. But when I started dabbling with Linux, about 5 years ago now (slight pause for that to register) the one tool which I stumbled across which transformed my experience was Webmin.
Despite being able to remember DOS 2.0, I am unashamedly a GUI fan. My argument is a *good* GUI can help enforce some sort of understanding of what's going on underneath - a classic example being an input field that is greyed out unless a checkbox is ticked. The GUI shows you the relation between the two.
The reason why CLI is alien to you is that Microsoft didn't do it right. DOS was a piece of junk, as was cmd.exe
Look at the vanilla terminal emulator official msdn tutorials use. Look at the ugly syntax there, you'll gather how much MSFT hated(s) the shell.
PowerShell is better, but still "too innovative" for a shell with its OO idiocy.
"The reason why CLI is alien to you is that Microsoft didn't do it right. DOS was a piece of junk, as was cmd.exe"
Absolutely correct. It took me a couple of hours of trial and error to get NT4's 'ntbackup' and 'at' commands working back in 1997. All I wanted to do was set backups off at a certain time of day rather than running them manually. Quite a simple task on any other O/S but Microsoft had a fixation that we should all become point and click merchants.
Now, I'm a bit of a command line jockey in both Windows and UNIX/Linux, I'll freely admit that some Windows commands are odd, but there are also many UNIX/Linux commands which are also odd, or that don't correspond in terms of switches to other commands which do related tasks. There are commands which are massively under or over engineered (why does 'ls' need more switches than letters of the alphabet, for example.)
As for PowerShell being "too innovative" you don't get to make that accusation when you were saying the other day that ACLs are too complicated. It seems to be a common attitude amongst UNIX/Linux guys that they think that because they know UNIX/Linux, they magically know Windows and anything they don't understand is rubbish or not required. Let me tell you this is never the case, the last two companies I've worked for I have had to pick apart Windows products which were engineered by UNIX guys that just didn't work and in one case actually lost data, because there just wasn't the level of understanding of Windows that they thought they had.
Anyway, MS have always strived to make sure that everything is doable at the command line, the current version of Windows server offers no GUI options, the next version is command line by default, GUI as an option.
FYI, "ls" is a command; it is not part of the shell. The shell is used to call it, pipe it and glue it together. I bet you can use it in PS too. And, btw, "many is not few". I know the switches I use quite often; if I need to do more, I'd do "man ls" or "info ls" and search through with "/" or "?"
PS has a few questionable things to consider:
1) While a shell should tend to be as simple as possible, an OO interface makes it cool, not simple - hence not very usable.
2) as I mentioned above, take a look at (any) MSFT shell and its syntax - how do you like it compared to any *nix shell? Those long, hard-to-read command names are not esthetically appealing. What about those long paths? Hasn't MS realized it could be done by linking to some path (like /bin or /usr/bin etc) and storing it in a variable like PATH?
3) look at the ugly MS vanilla terminal emulator. One can tell the attitude towards what you do by looking at your workplace.
4) take this Trevor Pott article written for (as he points out below) Linux newbies & Windows admins. Why so much fear of the CLI? Why bother with Webmin? It's the habit, indeed. Now compare the average Linux and Windows tutorial - guess which is heavier on GUI? Even Windows 2008 R2 Core, almost GUI-less, has some windowy interfaces left, just to make sure... Those newer ones you're referring to are said to be administered through an RDP GUI interface, as advised by MS, right?
5) when did MS finally decide they needed a better shell? Some 20 years after the *nix guys got it. And Windows culture hates and tries to avoid the CLI, and the reason for this is situated in the city of Redmond.
> Windows products which were engineered by UNIX guys that just didn't work and in one case actually lost data
Should I tell you about Windows XP tools being unable to see either its own backups or its own NTFS partition once Windows could not boot?
You do make some good points, but you should point out that this advice is only of practical benefit if you're placing a Linux box directly on the internet without being behind a firewall. I doubt you'll find any serious Linux setup that isn't behind a dedicated firewall.
Tools like ClamAV are designed to scan files going through the Linux system that will end up on other systems - Windows and Macs etc. There are very few viruses and trojans for Linux. If you've updated your system in the past year then you're probably safe against the ones that do exist. However you should really mention tools like Chkrootkit which will actually check for this stuff, or Aide which works as an intrusion detection system.
Incidentally, as a Senior Linux sysadmin with over a decade of commercial experience, I would advise turning SELinux off on your CentOS boxes. It really is more trouble than it's worth. However, Apparmor on Debian/Ubuntu boxes isn't too shabby, so keep that one running.
> I doubt you'll find any serious Linux setup that isn't behind a dedicated firewall.
I can point you at a few thousand...
> I would advise turning SELinux off on your CentOS boxes.
SELinux is very, very effective. Russell Coker used to publish his root password on his website and let you shell into his machine to play with it. It was quite a stunning demonstration.
SELinux often needs to be disabled because the admin doesn't understand it well enough - and that's fine, it's still a fairly new technology. But it should be left enabled if at all possible, because it really does stop bad stuff happening.
SELinux isn't new technology; it's at least a decade old. It is badly designed and poorly documented technology though.
The point I was making, however, is beginner Linux admins ought to turn off SELinux because they'll try and do something simple and it won't work because of SELinux. There are other things they could do which aren't mentioned in this article which will make their systems more secure anyway and won't be such a pain-in-the-arse.
For one, CentOS has a stupid amount of services running by default, most of them ridiculous. If I remember correctly, one is a bluetooth service or something mad like that. The first step to securing a box is to stop unnecessary services. Another is not to run Webmin which, if I remember, has had some pretty nasty vulnerabilities in the past.
There are indeed thousands of Linux boxes directly on the net. In fact my personal server is. But when I say serious, I mean serious as in "Let's hire a sysadmin to look after this" type serious. I would expect at least a screened subnet type network setup for running a serious network system, whether Linux or any other OS. Not only does it aid security, but allows you to move from a server that's a single point of failure to something more highly available.
The conclusion is of course that this article is aimed at hobbyists rather than people employed as a sysadmin, therefore SELinux would be a hindrance.
Largely agree - SELinux shouldn't be disabled unless you REALLY know what you are doing. If you don't know what you are doing and it's getting in your way, you should learn enough about it to configure its rules to make the problem go away.
Having said that - SELinux DOES cause quite an observable performance penalty. So if you really do know what you are doing and have made sure the system is otherwise suitably bolted down (or it isn't in a security-sensitive environment, e.g. not exposed to the internet), you can squeeze a bit more out of the machine by adding selinux=0 to your kernel command line (and/or putting SELINUX=disabled in your /etc/selinux/config).
A dedicated firewall is a linux box running iptables, or a proprietary piece of crap that does the same, but with more bandwidth limitations and no additional services.
Do you really want to put it behind yet another firewall ? because it's a linux box, so you know, according to your Senior advice ...
Are you joking?
Look up some basic network design and then get back to me when you understand it. You are, like the author of the article, assuming that you have one server and it's directly on the internet. I mentioned things like high availability before, which is one aspect of good network design which is what you do if you're professional. How the hell do you get high availability with a single box whether or not it's got iptables on it?
I just about knew you were going to say stupid stuff like that. It paints you as one of those adherents to the idea that just because fancy GUIs make you feel safe, this must necessarily be a good thing for everybody else. This is curiously at odds with your powershell infatuation. Or maybe it isn't, but I don't really care.
Point is, you really don't get the point of simple stuff. Like, oh, system defaults. What else would you recommend linux distributions ship as a default instead of the IANA-assigned default port for SSH then? Suppose they pick some obviously much more secure port, say number 24. So people that run scanners take notice, and now scan 22 and 24. Congratulations, burned another port. And for what? A gazillion questions by confused newbies and angered admins why ssh doesn't "just work"? The centos crew must just love your wonderful suggestions.
Granted, allowing root login is a bit poor, but that too is something experienced admins will immediately amend to comply with their site's policies. Like, no password logins, no ssh logins anywhere from the outside except to one or two bastion hosts, that sort of thing. But none of that has much place in a default install.
In fact, I don't necessarily agree that a "firewall" is a good idea on a default install. The perceived necessity of such a thing mostly stems from shoddy code and defaults entirely unsuited to the open internet...by your favourite vendor. I say systems should be hardened enough to be reasonably safe out-of-the-box without packet filtering installed. That's not to say you shouldn't use one, there are many reasons why you might want to anyway, but I am saying the system should be solid enough without. If you ship them with a packet filter anyway, it's because your audience cannot be trusted to configure the system without opening themselves up to abuse. Like forgetting to limit the database in the lamp stack to localhost only, that sort of thing.
The rest is more listing of your preferences, without much of an argument at all why everybody else should agree with your tastes. You may have a point but you're not exactly exerting yourself making it.
The only thing I can really take away is a point you didn't actually make, and that is that it's a pity that SELinux isn't equipped with more accessible documentation and tools to make it work, leaving little option for everyone except experts in SELinux to just turn the cursed thing off and so do away with the obscure interference it causes, preventing many an app from "just working". If that could be achieved with little effort, then you are much less forced to know a gazillion tricks to work around SELinux' insufferability.
If you install a server inside an enterprise, it's likely that port 22 is going to be firewalled anyway. So for the sake of convenience I don't think it's unreasonable to enable it, even though root login should not be. Most installers ask for an administrator user ID anyway, and users are encouraged to sudo from that.
I always do, but then I also run fail2ban, which will block an IP for a day after 3 attempts. It can be quite amusing seeing some of the usernames bots attempt. I'm now a big fan of PKI too. Initially I used Webmin, but after a while I got fed up with it not being flexible enough and learnt how to edit the configuration by hand; I haven't gone back.
Fail2ban is a great tool, and I use it to check against auth, mail, sql & http logs. The only issue is that one guy has a habit of locking himself out (from home) when he flattens his iPhone and gets his email credentials wrong.
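[Editor's aside: for anyone wondering how to avoid exactly that lockout, fail2ban's `ignoreip` whitelist covers it. A minimal sketch of a jail.local follows; all values are illustrative, not anyone's actual config, the jail section is named `[ssh]` or `[sshd]` depending on the fail2ban version, and 192.0.2.10 stands in for a trusted home address.]

```ini
# /etc/fail2ban/jail.local -- illustrative sketch only
[DEFAULT]
# Addresses that are never banned -- e.g. the office and a trusted
# home IP, so a flat phone and bad credentials can't lock anyone out
ignoreip = 127.0.0.1/8 192.0.2.10
bantime  = 86400   ; one day, matching the "block for a day" policy above
maxretry = 3

[sshd]
enabled = true
```

Settings in jail.local override the packaged jail.conf, so the distribution defaults stay untouched across upgrades.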
I suppose I'm not really into the whole security-by-obscurity thing, though I might set up an SSH honeytrap on a non-standard port and see how many hits it gets.
I always run SSH on port 22, as I manage and share work on too many systems to remember obscure port numbers. I disable root logins on SSH, run Denyhosts (equivalent of fail2ban) before I expose a system to the net by forwarding the port and enforce strong enough passwords so that 5 attempts won't make any dent on the number of guesses likely to be needed. The attacker has to guess login account names as well, and the logs show this doesn't happen successfully very often, though 15 addresses or so get blacklisted every day for trying on the typical SSH server I operate.
It's handy for the initial config of a Linux box and for getting an overview. For the low-level stuff, though, I agree that the CLI is better.
I also tend to turn off the Linux firewall, though, because these are internal boxes so don't need one. Oddly enough, the only Linux box with the firewall on is the one in the test lab, as that's mimicking a live environment that uses hardware firewalls.
or configuring DNS zones - it takes some of the grunt work out of generating IDs etc.
But for systems such as Postfix or Apache configs it's a real PITA.
For me it's far, far easier to configure down from defaults and apply simple-to-understand configs.
It's also OK for configuring a basic Shorewall setup, but I like to add rules by hand as it's more flexible.
and you get to add comments :-)
Is that you can't just install it and click on a few buttons without understanding anything about it.
Well you can on a user desktop, to be sure, but not as a server.
That forces sysadmins to be competent enough to avoid at least the more basic mistakes.
Dumbing down critical tasks is not necessarily a Good Thing.
There are reasonably elegant ways to mitigate SSH brute-force attacks that are available out of the box.
For example, if your machine has IP address 10.0.0.1, you could apply iptables rules along the following lines:
iptables -t filter -A INPUT -d 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name filter_10.0.0.1_22 --rsource
iptables -t filter -A INPUT -d 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 2 --rttl --name filter_10.0.0.1_22 --rsource -j DROP
iptables -t filter -A INPUT -d 10.0.0.1/32 -i eth1 -p tcp -m tcp --dport 22 -j ACCEPT
This will effectively limit the number of SSH connection attempts from a particular source IP to one per minute, which makes brute-force dictionary password attacks infeasible (unless somebody is running the attack from a large botnet).
If you are particularly bloody-minded and have the TARPIT iptables target patched into your kernel, you could replace "-j DROP" above with "-j TARPIT" for good measure, which will also tie up the attacker's connections at the IP stack level, leaving the attacking process stuck waiting for a response.
Of course, this doesn't mean it's OK to run with direct root ssh access enabled. :)
You could apply something similar at a layer further up the networking stack, for example to mitigate brute-force attacks on your blog account login:
-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "/wp-login.php" --algo bm --to 64 -m recent --set --name filter_10.0.0.1_80 --rsource
-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "/wp-login.php" --algo bm --to 64 -m recent --update --seconds 120 --hitcount 3 --rttl --name filter_10.0.0.1_80 --rsource -j DROP
Again, you can replace "-j DROP" with "-j TARPIT" if you have TARPIT patched in.
You can also drop access attempts aimed at known attack targets (which you hopefully don't have publicly reachable on your servers):
-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "phpmyadmin" --algo bm --to 1024 -j DROP
Or drop access attempts from undisguised penetration-testing tools (you'd be amazed how many script kiddies don't bother changing the User-Agent string):
-A INPUT -d 10.0.0.1/32 -i eth0 -p tcp -m tcp --dport 80 -m string --string "ZmEu" --algo bm --to 1024 -j DROP
And in those last two cases, again, you can replace "-j DROP" with "-j TARPIT".
All pretty basic stuff and all the tools required ship in the base distro. It's not the tool you have, it's what you do with it that counts. ;)
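[Editor's aside: one caveat worth adding to the rules above, which the post doesn't mention: rules entered at the prompt live only in the running kernel and vanish on reboot, so they need persisting. On a RHEL/CentOS-family system that might look like the sketch below; the paths are the typical defaults, and Debian-family systems generally use the iptables-persistent package instead.]

```
# Save the running ruleset so the init script restores it on boot
iptables-save > /etc/sysconfig/iptables

# On kernels using the xt_recent module you can also inspect the
# tracked-source state of a named list at runtime, e.g.:
cat /proc/net/xt_recent/filter_10.0.0.1_22
```

(Older kernels exposed the same state under /proc/net/ipt_recent instead.)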
Linux in fact has the worst security of any commonly used OS, and is years behind Microsoft, for instance.
The average distribution has ten times as many vulnerabilities as a Microsoft OS and twice as many as OS X. See Secunia: http://secunia.com/advisories/product/12192/
Linux having far higher vulnerability counts also holds true with a 'package adjusted' Linux that only provides the equivalent functionality of a Microsoft Server OS.
For an example of the impact of this in a market sector where Linux is actually used (so not desktops) - see http://www.zone-h.org/news/id/4737
You are many times more vulnerable running Linux, even allowing for market share.
Richto, amigo. Not even checking your numbers...
1) vulnerabilities can be severe and not that severe
2) it is important to note, how fast those severe ones are fixed, and Microsoft is not a good example here
3) Windows XP/Vista/7 has probably 1% of the software apps any Linux/BSD distro can offer
4) XP and Vista, and still 7, need no vulnerabilities to become a good target, due to poor design and some idiotic decisions
1.) Current Windows OSs have fewer and less severe vulnerabilities than Enterprise Linux distributions, and this has been the case every year since 2003
2.) That on average are fixed faster with fewer days at risk compared to Linux.
3.) The above still holds with a 'feature set' adjusted Linux distribution to match the content of Windows
4.) 7 onwards is inherently more secure in pretty much every way than Linux. With older OSs it varies, but I did say current versions. Things like a secure boot chain, ASLR and NoExecute came first in Windows. Linux had to implement bolt-ons like SELinux to even come close to what is out of the box in Windows.
> The average distribution has ten times as many vulnerabilites than a Microsoft OS and twice as many as OS-X.
You can kid yourself all you like. That won't alter the fact that 99.9% of the malware that exists is for WinDOS.
You are trying to confuse the issue by conflating every single little bug that never hurt anyone with Windows worms that have managed to effectively disable the entire Internet. Of course they're not the same thing.
So you're not kidding anyone.
We should downvote our current and respected author Trevor Pott, since he insinuated:
CentOS doesn't have any default activated anti-malware... Linux systems....play host to some pretty nasty pieces of software. My preferred front line defences are the ever popular ClamAV and LMD.
Indeed, the anti-malware tools are Windows-oriented. When using those, you want to protect your Windows clients (more frequently with mail servers than with web servers). The whole idea of an at-times very resource-demanding database of known bad things, plus some questionable empirical methods with high false-positive and false-negative rates, does not agree very well with KISS and with what the entire *nix administration stands on. Other, more reasonable and open tools like SELinux/AppArmor should be used instead (a properly set-up system is taken for granted).
A Windows admin decided to write about Linux. It is pretty laudable, however should be taken with the appropriate grain of salt.
Linux systems never get compromised? That's a larf. They most certainly do; almost always through some badly coded PHP something or other. (To be fair, they also tend to compromise windows systems.)
Yes, you generally need anti-malware on Linux systems. If for no other reason than to ensure that your web applications haven't been hijacked by someone looking to poison the rest of the net. Or do you have even the remotest shred of evidence to say that every single compromised website is IIS based? How do you dismiss a decade's worth of evidence that shows several thousand new LAMP systems compromised every month?
I’m legitimately curious.
Trevor, in any case, now tell me how much any of these AV tools apply to any of those compromises. Do they look for some "common" *nix malware known to cause any of the said compromises? If you think they might, you're very wrong. If one gets "malware" because he/she downloaded and installed some crap from an insecure source while logged in as root... the best remedy is the GB tools (good beating).
As far as the compromises are concerned they are of mainly two (+ one) types:
-I) an unknown (at the time) 0-day vulnerability in the software -- happens pretty rarely; call it the "many pairs of eyes over very few" advantage. AppArmor and SELinux (and good policies) are your best friends to mitigate the risks here.
-II) poor security policies, like root SSH login, poor passwords or password SSH logins, systematic neglect of security updates, too many unneeded features, modules and apps running -- to mention just a few
-gamma) PHP given too much liberty (without the Suhosin patch) by inexperienced admins; combined with all of the above, this deserves to be cited as a separate type.
Although Windows might involve similar risks, anti-malware would be helpful for none of these in Linux/BSD administration. Suggesting AV to fight them is very unprofessional indeed. Nevertheless, in the parallel Windows world, where the fundamental constants are proclaimed to differ very much, it is part of a pretty well-paid profession.
@eulampios: ClamAV is actually quite terrible at finding website compromises. It does find some however, and is better than nothing. LMD does a far better job, but isn't included in the primary repositories.
The issues of the type I am discussing are neither "you must be logged on as root and download some Trojan by using Linux as a desktop" issues nor are they 0-days. In nearly every case, malware on Linux occurs because someone forgot to - or couldn't, because of chained dependencies - patch.
In most cases it is a flaw in some PHP application that an admin has installed on their Apache setup. A privilege escalation bug or some other issue allows someone access to the webserver. They then alter the extant CMS/Application/whatever to include links to malware, typically as part of a drive-by-download attack targeting Windows (though increasingly Mac) users.
In general, this sort of malware does not compromise the Linux system itself. IMHO, anti-malware trying to defend the Linux operating system itself is completely pointless. Every available anti-malware package for Linux is so woefully inadequate that if and when your Linux system is compromised you nuke the whole thing and start over. (It’s quicker than defanging the thing.)
No, anti-malware on Linux is almost exclusively for cleaning e-mail and cleaning compromised websites. Generally compromised websites targeting windows systems.
I wouldn’t prescribe anti-malware for Linux for the same reasons as I would Windows. Frankly, Windows anti-malware is far more robust. It has to be; Windows has so many deep flaws (and is such an attractive target due to market share size) that there are many vectors to infect the OS itself.
Linux has a smaller attack surface in getting at the OS + core packages proper. That said, when it is infected, it’s pretty much a total loss. When a Windows system is compromised, even a half-assed Windows admin can clean the thing in ~80% of cases with less than an hour’s applied effort. (Assuming you ignore “the progress bar is going” in the effort calculations; most admins go do something else while waiting for progress bars.)
When a Linux system is compromised, this isn’t really the case. In these instances the malware is generally (by necessity) significantly more complex than your typical Windows software, written by people who know far more about the OS than the sysadmin trying to defend the thing.
Comb through the logs for long enough, test permissions and run fuzzers on enough things and you might figure out what was compromised, how, how many friends it downloaded, what they affected, etc. Then you can kill it pretty easily. In that timeframe however you could just have backed up your core configs/data, reinstalled and been on your merry way. (This isn’t remotely as easy on Windows; even with folder redirection, AD, etc, backing up configs can be a PITA.)
So, to recap: anti-malware is generally necessary on Linux for the two most common roles that Linux sees, namely e-mail (either as a pre-filter or an actual server) or web hosting. The actual usefulness of anti-malware is different than it would be on Windows, but it is still recommended nonetheless.
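[Editor's aside: to make that concrete, the scanning described usually runs on a schedule. A hypothetical cron fragment follows, assuming stock `clamscan` and an LMD install in its default /usr/local/sbin location; the paths, web root and times are illustrative, not the author's actual setup.]

```
# /etc/cron.d/webscan -- illustrative; adjust paths and schedule to taste
# Nightly ClamAV sweep of the web root; -r recurses, -i reports only
# infected files, keeping the log readable
30 2 * * * root clamscan -ri /var/www >> /var/log/clamscan-www.log 2>&1
# Weekly Linux Malware Detect (maldet) scan of the same tree
0 3 * * 0  root /usr/local/sbin/maldet -a /var/www
```

A scan like this won't stop a compromise, but it flags injected drive-by-download links in an otherwise unwatched CMS, which is exactly the use case described above.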
That there is malware for Windows desktops is because people actually use Windows, versus the ~1% Linux share in that space.
Where Linux does have high market share - like web servers - it is hacked to shreds.
The facts stand: in terms of security as measured by vulnerabilities, Linux sucks. All other things being equal, it is much easier to hack Linux.
If you honestly believe this - honestly and truly - please go back to the article proper and select "email the author." I will post for you a CentOS 6.2 virtual machine DEFAULT INSTALL hosted on an external IP address. I will use *NONE* of the security measures mentioned in this article. I will even turn the firewall off.
You can hack away to your heart's content. I will monitor all of the packets in and out (naturally) to see exactly how you "hack" my off-the-shelf, completely unsecured virtual machine. I will bet you a barrel of ale you cannot do it.
On the other hand, I could post a Windows 7 system (fully patched) default install to an external address and with the firewall turned off I don't even need you to hack it. Within a week an IP address from China will have done it for you.
Hell, there are a few hundred IRC servers where you can buy zero-day software to do exactly that for $100 USD.
"Easier to hack Linux" my ASCII.
A) I have hacked the current Windows Servers: 2008 R2 (fully patched) as well as 2012. (They fixed the bug.) Sometimes you just stumble across zero-days...
B) Considering Linux systems are often left unpatched as "fire and forget" systems, sure, I'll buy that more people manage to bust into a web app on a Linux system than compromise Windows. Busting out of that web app to compromise the Linux system itself? I doubt that.
It also still isn't comparing like for like. Compare a modern Windows to a modern Enterprise Linux, out of the box, fully patched, firewall off. That is not a contest Windows wins.
When I went out to learn Linux around 2000, I was struck by the lack of "here's what you do in Windows, here's how you do it in Linux" type of help. I even spoke to Red Hat to see if they had a Linux course targeted at advanced Windows users wanting to convert; I got a hostile answer. The "Linux web community" gave hostile answers.
It's rather depressing to see that things haven't really changed in the last twelve years and anyone who tries to help Windows guys learn Linux is slagged off by the Linux community, it's almost as if they still really want to keep it a private club, despite their protestations.
Attitude is in fact one reason why I left "linux" and its gn00munity behind. I'm not interested in taking over the world nor in taking over the guys bent on taking over the world. I'd like to get some work done and I like to be left alone, too. Yes, I still use "unix" a lot, but I tend to the *BSD family. Much better documentation, for one.
Yet it might be instructive to look at it from the other side. What would your attitude be if you had to deal with a constant influx of people who demand (yes, demand) things to Work Just Like They Were Used To On Windows, even if that obviously makes no sense whatsoever?
So it's not entirely strange that the already cranky bunch (high-octane code hackers tend to have poor social skills and are prone to tunnel vision) isn't overly welcoming. Goes for Linux's own too (qv Con Kolivas), for that matter.
Best to drop the "but I already know computers" idea entirely since what you know is such a bunch of things ripped-off from elsewhere, badly, that it leaves you with a worldview so jumbled it needs knocking on the head before you can really continue. Much like you'd experience in a GUI sense going to mac (even if you never touch the underlying darwin), but in a much more text oriented direction, so to speak.
This, by the by, is what brings Trevor the flak too: He's a windows admin and probably knows things I will never know (nor want to know, TYVM), presumably good enough to manage a bunch of sites, but is completely oblivious to how things are best done on Unix[tm] and its family and friends, yet he doesn't discern between administrating the two, justifying himself by pointing to a web interface. "Look, GUI! Pretty buttons to click! All the same, see?" That's a sure-fire way to rile up the natives. If that's what he wants, well, he's spot on there.
I get the difference between GUI and CLI. I just don't make the value judgement that one is "better" than the other. I don't see the point in writing long treatises on CLI apps or CLI systems administration largely because of the community attitude issues.
I spend months at a time neck deep in the command line, getting work done. Then I’ll spend months working on GUI-based systems. I don’t see the difference, really. The CLI is more powerful and far more flexible, but a GUI is easier and obfuscates a lot of the scut work that – frankly – I couldn’t give a rat’s ass about micromanaging.
So when I write things for a sysadmin blog I have some choices about who I target. Unix/BSD admins are few and far between. Even given that this is a tech site of some repute, they will still only make up a tiny fraction of the readership. I could write things aimed at them…but would they – or anyone else – care?
My experience with most Unix/BSD admins falls into a one of a small number of categories. They discount what I have to say because (one or more of the following):
1) I’m too young
2) I didn’t go to the right school and/or get the right degree
3) I choose to also work with Linux and Windows
4) I use a version of Unix they don’t approve of
5) They are crotchety old coots who just don’t listen to anyone.
So I could target this small niche of my potential audience who won’t listen to anything I have to say no matter what…or I can target someone else.
So what about Linux admins? Why not target them? Every now and again I make the attempt. Truth be told though, it’s a pain in the ass.
In truth, the majority of Linux admins I meet are actually good people with good critical thinking skills and the ability to function in society. They are rational and able to socialise in an acceptable manner.
Unfortunately, the Linux community attracts a highly disproportionate quantity of irrational individuals, tunnel-vision OCD folks and paranoids. They ruin it for everyone.
Take the absolute hatred that some have for GUI administration tools. There is no rational reason for the vitriol that these people spew against this class of tool. I have never encountered rabid ad hominem attacks against a gardener for using an automated sprinkler instead of watering every inch of the lawn by hand; yet within the Linux community it is everyday practice to attack people for using a GUI, the “wrong” distro, a given package rather than another…even tabbing conventions regarding comments in a config file!
I would want to deal with these people on a regular basis why, exactly?
So yeah, I’ll talk about GUIs. I’ll talk about Windows. I’ll discuss interfaces and applications instead of my favourite new way to combine grep with a neat new regex I discovered. I do that because I get more out of the community feedback writing about the GUI-enabled world than I do the CLI-only world.
Windows, Apple and mixed GUI/CLI Linux admins seem pretty open to careful, considered debate of a problem. They will come up with helpful – even novel – solutions to problems. I learn as much from commenters in these threads as I do from documentation.
A Linux article or forum thread is just bickering. Endless, circular, heated, hostile bickering. It's fun once in a while...but really, I'm starting to get too old for it.
That said, I will say what I have said many times before: I am entirely open to requests and suggestions regarding what I write about. If someone has a particular topic in mind that they would like me to address, I would be entirely willing to do so if I felt it were within my capabilities.
To date, I’ve had several requests for looks at virtualisation, Windows apps, Apple administration and so forth. No formal requests for Linuxy anything. Just a lot of complaining that I’m “biased,” “a Windows loving Microsoft shill,” and even some conspiracy theories about how I’m part of “the machine” designed to keep Linux in the shadows.
I think you're making a pretty clear value judgement there, right next to that sentence claiming you don't.
Personally I don't have a problem talking to "old guy" sysadmins. Then again I'm told I "sound old" already, so it may be me. OTOH, as the joke goes, "Sure unix is user friendly, it's just picky about who its friends are", and sure enough there's a learning curve and if you want to be taken seriously you need to show basic clue or at least capacity for absorbing same. No, m4d linex sk1llz are no entrance ticket no matter how many stuffed penguins you have. Though no-one I know actually cares about formal degrees; having a chat about what-if scenarios and past exploits, for example, is a much stronger indication of capabilities.
So I'd posit that it could well be you. Not so much because you "do" windows and linux both, but more because of a combined bouquet of MCSE, RHCE and A+ networking, the stench of obstructive cluelessness. If that's not you, well, switch deo brands or something.
Then again you might just be talking to the wrong people. Just like the windows community has its clueless, so does the wider unix community; linux fanbois don't have a monopoly on fanboiism. In fact, windows gets to keep its fanbois along with a large but shallow pool of "mere users", middle managers, and the like, where linux in particular has long been a welcoming home for windows-disaffected. That doesn't combine well, so no point complaining about the occasional clashes.
I indeed left windows behind for linux, then left linux behind for something even better, IMO. Partly because of the communities, partly because the software (and documentation) was and is genuinely better. Doesn't stop me from calling you out on writing articles written on shaky or broken premises.
And yes, I'm guilty of accusing you of liking your vendor too much, in the context of fawning over powershell like it was the bestest thing evar, right when said vendor is trumping it up as a really cool feature added to the system a score or so years down the road when a wide choice of similar and at least as good, and much better known alternatives have been available for, oh, half a century or longer. Not to mention 4DOS.
The reasons why pushing GUIs for administration get so much vitriol shouldn't be hard to see once you understand the GUI's inherent limitations, and how they relate to administration. To properly explore it takes a bit more than this paragraph, but it's the equivalent of epoxying the bonnet shut, claiming this will "ease understanding" or some similar malarky. Yet this sort of thing has been pushed wholesale on everyone regardless of whether it'd fit the workflow of the victim for quite a while. Guess who based their entire marketeering strategy on exactly this, to further their well-known goal of proprietarising the world? You're poking in a lot of history here; some people are perhaps a bit too sensitive about it but barging in like you do you're bound to meet some resistance. Even more likely so when that whole contraption is on the backswing.
Then there's claiming you write for "linux admins" (cf the title) and now trying your hardest to disown that same claim, twice now under the same article. Bit of a poor show there, Trevor. And yes, it's the little things like that that sneak up on you and do you in. No points for blaming the ants for disagreeing with your poking their nest.
Okay, even before I sit down to really nitpick this...can I get a dime bag of whatever you're smoking? I cannot connect any of your statements to reality.
“Combined bouquet of MCSE, RHCE and A+ Networking?” Um…what? I have two MCP exams? I think? They were necessary to get a discount on Microsoft Action Pack licensing.
At what point have I ever "fawned over powershell?" I loathe powershell. I think it's fantastic that powershell is something Microsoft is investing real resources in... but how - unless you are engaging in some serious mental gymnastics - does that translate into "fawning over" it?
“Liking my vendor too much?” Who is “my vendor?” I like to believe I am an equal opportunity offender, thank you very much. I criticise and employ cynicism in the general direction of everyone. Except possibly Intel; Intel haven’t actually done anything I consider overly stupid, malicious or anti-consumer in at least three or four years.
Regarding “claiming to write for Linux admins,” I don’t write the titles. The sub-ed does. That aside; a Windows admin who is dipping their toes into Linux is now a Linux admin, albeit a junior one.
Regarding GUIs, well…I wrote an article on that. Just for you.
TL;DR on that future article: I think people who limit themselves to “CLI OR GUI” without the mental capability to conceptualise “CLI AND GUI” are placing themselves at a complete disadvantage. But hey, if you want to cut off your happy bits because your religion told you so, far be it from me to tell you otherwise.
But if you want to spend your days spraying your religion around to the detriment of others, don't get all shocked and shaken if I think you're a complete twatdangle.
Special bonus question: where exactly am I making a "value judgement" in my post?
I do not consider the GUI “better” than the CLI. I do not consider the CLI “better” than the GUI. That means I am not making a value judgement about either of these tools; I believe they have their own separate and distinct uses.
Where exactly am I making a value judgement there?
OK, I'm not even going to go through all the replies, but the author's methods ring the bells of many a middle-level Linux admin, not a seasoned one.
I was not impressed by some of his suggestions. Webmin, I mean, please. So some muppet can edit a BIND zone? LOL.
Most admins spend their time in vi/vim configuring nice raw configuration files. If an admin can't configure a service without the aid of a web interface, due to not knowing the underlying configuration files and parameters, then he should be on the Windows admin team LOL.
Also, on a note about distros: CentOS ... nah, thanks; I prefer to go Scientific Linux if you're not stumping up the cost of a RHEL licence.
Suggests the author of the article doesn't know what he's talking about: Check
Suggests everything should be done at the command line: Check.
Suggests that you should go back to Windows, if you don't want to use the command line: Check
Suggests that the wrong version of Linux has been chosen: Check
Obvious troll is obvious troll: check.
Firstly, great to see some attempt to make a sysadmin article which isn't about Microsoft products.
However... as an experienced Linux/Unix sysadmin, I would say that, in practical terms, there really is no "easy" way for Windows admins to just have a go at Linux administration.
Windows administration fundamentally consists of pointing and clicking on the right boxes and trusting that Microsoft, or one of their development partners, has made sure that nothing's going to go wrong for you.
Linux administration, on the other hand, requires much deeper sysadmin skills. You need to understand for yourself what you are doing and take responsibility for it. You don't get easy tick-boxes with pop-ups and links to a paid support if you don't understand something. You do, on the other hand, get a huge pile of wonderful tools for all sorts of jobs which can be linked together creatively and usefully if you know what you are doing. You also get manpages which tend to be written by the actual software engineers so you actually know exactly what you can do with the tools available instead of having to read through some Microsoft-style waffle written by somebody on first line support.
Linux administration is about understanding how to see for yourself EXACTLY what a machine is doing and having complete and total control over it so that you can build and configure it PRECISELY how you want it. Webmin is a very poor substitute for proper CLI administration and using it is wasted time which could have been spent learning how to actually administer the system properly. If I saw someone in a sysadmin team deploying a server configured using webmin then I would raise an appropriate alarm immediately. I'm sure it's fine for someone playing around in a test environment, or for a hobbyist messing about with his own Linux box, but it simply isn't appropriate for professional system administrators to be using such things in place of doing the job properly. Building a server yourself and knowing precisely what it is doing at all times is the only way to make it secure AND to ensure that it is going to be tuned for the kind of performance you want out of it.
And this raises the issue of all the waffle on here about securing systems. People seem to have all sorts of elaborate schemes involving honeytraps, auto-ban systems, blacklists and dynamic firewall rules. All of these things have their place, but they largely miss the proper underlying sysadmin approach, i.e. to strike a good balance between security and the other things you need to achieve, such as performance, functionality for users, etc. Only when you know how to build and run a system properly, from scratch, tuned and trimmed to perfection using all the elegant and powerful tools which Linux has to offer, are you going to achieve this. Just setting up systems then hoping that fail2ban is going to make you safe is not the correct way to approach security, just as using webmin is not the correct way to configure a server for deployment in a production environment.
If Windows admins are incapable of making the jump to full-blown Linux system administration and accepting the steep but fulfilling learning curve involved with such a transition then they should stick to their easy, safe, blame-passing Microsoft world. But for those who want to take responsibility for administering systems fully and properly, getting stuck in there with the CLI and learning what Linux can REALLY do is the way to go.
P.S. for all those people saying that moving SSH off the default port does not make a system more secure: understand that system security is not a black/white on/off scenario. No system will ever be 100 percent secure. It's all about balance and compromise and risk management. If it's convenient for you to move SSH to a different port, and if doing so will reduce SSH probes on your server from one every five minutes to one a month, then it's clearly a no-brainer to do this AS PART OF a proper, overall, holistic security strategy.
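For anyone wondering, the port move itself is a two-minute job. A minimal sketch, assuming a Debian-style box using iptables and a sysvinit-style service script (port 2222 is an arbitrary example, not a recommendation):

```shell
# In /etc/ssh/sshd_config, change the Port directive, e.g.:
#   Port 2222

# Allow the new port through the firewall before restarting sshd,
# or you'll lock yourself out of a remote box:
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
service ssh restart

# Clients then connect with an explicit port:
#   ssh -p 2222 user@server.example.com
```

Test it from a second terminal *before* closing your existing session, and remember this is obscurity on top of - not instead of - key-based auth and the rest of your hardening.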
Powershell is just a glorified cmd.exe, and Windows is still a GUI-based OS. It's great that Windows enables you to use a half-decent CLI for certain things nowadays should you wish to do so, but that's a world away from the total paradigm shift that I'm talking about.
On my Linux servers I choose not to install a GUI because it's unnecessary for me and uses up resources that would be better dedicated to the daemons running on the server. When you can customise Windows to be CLI-only then your comment will be valid, but I very much doubt that day will ever come.
"Windows is just more powerful, flexible and has a lower TCO and reduced learning curve in offering you both options."
Well that's certainly the funniest sentence I've read all day. LOL.
(Oh, and just to point out that you don't have any idea about this either - Powershell is also much more flexible and powerful than any out-of-the-box Unix shell scripting like Bash, etc. as it combines similar basic scripting capabilities with a much more powerful object-oriented approach and enterprise functionality like digital signing of scripts, etc.)
> If I saw someone in a sysadmin team deploying a server configured using webmin
> then I would raise an appropriate alarm immediately
You need to be a little careful with that...
I frequently install webmin on new builds - not because I need it (although it's rather useful as a MySQL browser), but because I might not be there when something goes wrong. Talking someone through a webmin interface over the phone is very much easier than talking them through a CLI...
"I frequently install webmin on new builds - not because I need it (although it's rather useful as a MySQL browser)"
MySQL Workbench or equivalent running across a VPN or SSH tunnel is a far better solution for that, for many reasons.
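And the tunnelling part is trivial. A sketch, assuming MySQL listening on its default port on the server and ordinary SSH access (hostnames and usernames here are placeholders):

```shell
# Forward local port 3307 to MySQL (3306) on the server, over SSH.
# -f backgrounds the session, -N means "no remote command, just forward".
ssh -f -N -L 3307:127.0.0.1:3306 admin@db.example.com

# Workbench or the mysql client then connects to the local end:
#   mysql --host=127.0.0.1 --port=3307 --user=appuser -p
```

No extra daemon on the server, no web interface to secure, and the traffic is encrypted end to end.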
"I might not be there when something goes wrong"
That's what remote access is for. There are many handy and convenient ways of doing that nowadays. Installing webmin just for that reason is... odd, to say the least.
@dz-105 I don't see why that's odd. Vic has a good point. Sometimes you are just walking someone through something over the phone. Usually when you are *gasp* not in front of a computer (Maybe you discovered members of your preferred gender and decided to experiment with organic entertainment.)
If you are on call, you have to provide support; but the ability to provide said support without having to remote in and do it yourself can be sanity-saving.
Well OK, perhaps there are situations where that sort of thing is acceptable, but those aren't situations I come across in my work life. I work in environments where expectations are high, and bolting a rather kludgy GUI onto a production server and letting people loose on it who can't perform relatively basic tasks in Bash would not be acceptable as far as I or my clients are concerned. A far better solution would be to find people who have a reasonable amount of CLI experience or to get them trained to that level where they can do basic support in Bash - surely it's not _that_ hard for someone with a brain!
Also, if it's a choice between the agony of talking an inexperienced support person through something on the phone, or just logging on and doing it myself, I'd far rather take the second option every time. It would be a lot quicker and a lot less painful, leaving you more time to enjoy experimenting with that exciting organic entertainment you have procured.
@dz-105 well, your solution doesn't work for everyone. Most of my customers don't have local IT staff. They're too small. As for me, I don't live in front of a PC. So when I'm out socialising, I do rather enjoy the ability to not have to either leave where I am to get to a PC or try to fix something in bash from a smartphone.
But hey, if you get a real kick in the knickers from punching bash commands into a touch screen, you go right ahead. Me, I’ll accept the ~25MB of RAM per VM that running Webmin costs me.
My free time is just worth more to me than the “purity” of eschewing GUIs for…what exactly? I still haven’t gotten a good reason from anyone that wasn’t pure rhetoric.
“I love Apple because they’re just better than Microsoft” sounds exactly the same to me as “GUIs are evil because command lines are just better.” I’ll keep on using both, not limiting my options, and see where that takes me…
In that sort of specific situation where you're setting things up for customers with no IT staff at all, and leaving them to try and run things when you're not there, you're far better off using Windows IMHO. Chances are the UI will be much more familiar to them, and it's a safe point-and-click environment with simple setup for the relatively basic things they probably need, plus the option of paid support from Microsoft and/or whichever other software vendors are appropriate.
I wasn't suggesting GUIs should be eschewed for the sake of it. I was attempting to make the point (as various other seasoned Linux/Unix admins on here have also tried to do) that trying to get into Linux administration by using webmin is like trying to become an artist by colouring in a paint-by-numbers picture. You might be lucky enough to end up with a result that looks OK, but you won't have gained any of the deeper, next-level knowledge and understanding that comes from getting stuck in with the CLI from the ground up and working through that learning curve. You may as well just stick with Windows. That's not rhetoric, it's a very valuable and important thing to understand.
Why would clients have to "run things" at all? Barring exceptional circumstances - such as a power outage during a certain type of cron job - everything from Windows to Linux "just works."
Why should they use Windows and not Linux? I flat out don't understand the difference here other than your own prejudicial snobbery about keeping Linux "pure." I can – and do – walk users through Webmin on Linux just as easily as Windows. I will use the best tool for the job, not whatever tool makes narrow-vision nerds feel less “polluted.”
You make the argument that in order to learn Linux, one needs to learn ALL of Linux, from the ground up. You go from knowing nothing to knowing damned near everything with no stops in between. There should be nothing to help you, nothing to guide you, nothing to ease your transition. You simply study really hard, memorise everything and it is one, or it is zero.
I say bullshit. Your entire argument is bullshit. There is no requirement for that. A GUI can – and does – help someone learn the differences between operating systems. It can ease the transition.
What’s more, I know your argument is bullshit because I have seen dozens of living, breathing, regular human beings make the transition from Windows to Linux because of GUIs. GUIs helped them learn about things like “the differences in file structure” and “different naming conventions” while still using a relatively familiar environment.
GUIs – Webmin, Gnome, Unity or otherwise – have never in my experience prevented someone from learning the command line and continuing on to a more in depth knowledge of Linux and its fundamental differences from Windows. Quite the opposite; they made the transition a hell of a lot less intimidating.
In the end, they don’t use only one or the other interface. They use both. GUIs and CLIs. If that makes you – or anyone else – feel put upon because there are people who didn’t learn as you learned, didn’t suffer as you suffered…cope.
A hammer is for nails, a screwdriver for screws. Use all the tools in your toolkit; don’t limit yourself, or others.
"Your entire argument is bullshit."
I did start off by saying it's good that you had a go at a non-Microsoft-focused article for once, and it's raised some interesting discussion; but I have to say that as the author of this article, I think it's rather inappropriate for you to be so graceless and unaccepting of commenters who have different opinions, and more experience, than you do; especially when I don't think the full gist of my point has really been grasped... but there we go.
I have no problem with folks having different opinions than me. I have lots of problems with people who take those opinions and use them for ad hominem attacks.
I enjoy debating things with commenters. Many commenters can and do hold a fantastic debate. Several have taught me new things. Others have pointed out mistakes, shown me when I was wrong and I am grateful for all of them. I love The Register's commenttards...most of them at least.
But I do reserve the right to take up the debate when I disagree. Most especially when I feel that the commenter in question is turning purely professional or philosophical arguments into ad homs against myself or others. (Or when the person who evidences a difference of opinion does so with multiple easily pointed out logical fallacies.)
If you are particularly objectionable in your conversation, I will call you on it. If you’re a dong, I’ll call you on that too. If you repeat ad hominems against me, personally – especially if you back them up only with logical fallacies, rhetoric and baseless assertions – I am going to treat you like the complete twatdangle that I believe you to be.
Why would I do otherwise? I see absolutely no reason to take crap from you or anyone else.
I can and do respect the experience of the commenttards on El Reg. What I don’t do is blithely accept that your experience makes you “superior” simply on your say so. I don’t accept your opinion or life experience as more valid than my own or as more valid than those of the other systems administrators I have the pleasure of working with.
If you advocate something different from what I advocate, back it up. With solid evidence (primary research is best) and no obvious logical fallacies. Certainly no appeals to completely unverifiable authority. Above all; don’t cap your debates with snide comments about how I should “stick to Windows articles” or other such tripe.
Who – exactly – are you to tell me what to do? Who – exactly – are you that your experience, opinion and philosophical beliefs are automatically superior to mine, or that other guy over there with 30 years under his belt, or these million sysadmins over here?
You are a block. The validity of your arguments will flow from the evidence you provide. Nothing more.
Biting the hand that feeds IT © 1998–2019