At its most basic, the job of a server is to process incoming data and turn it into something more useful, thereby adding value. For example, a web server will accept a request for a web page when you click on a link, search for the page and, if found, bundle up the text and images, and squirt them back to the requester. …
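That request/response cycle can be sketched with Python's standard-library HTTP server. This is a minimal illustration of the idea, not production code; the page table and names are made up for the example:

```python
# Minimal sketch of the web-server cycle described above: accept a
# request, "search for the page", and send the result back.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PageHandler(BaseHTTPRequestHandler):
    # Hypothetical in-memory "site": path -> page body.
    PAGES = {"/": b"<html><body>Hello from the server</body></html>"}

    def do_GET(self):
        body = self.PAGES.get(self.path)   # "search for the page"
        if body is None:
            self.send_error(404, "Not Found")
            return
        # "Bundle up" the content and squirt it back to the requester.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve pages, you would run something like:
#   HTTPServer(("localhost", 8000), PageHandler).serve_forever()
```

Everything beyond this toy loop (logging, concurrency, TLS, caching) is exactly the per-role configuration the article and comments below are about.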
That was interesting.
Especially for an IT tech site where most of us are driving servers daily.
Are you sure that this article wasn't meant for the Plumbers Weekly?
I quite liked it and there is a use for it!
Keep a copy of this (or the URL) to hand out as light reading to the next whining user who struggles to understand why you "can't just" do it like he does at home.
Articles like this are good reads and help users get some sort of insight into why sysadmins are the way they are - well, partly why ;-)
While this article may seem simplistic, it touches on many practicalities and myths regarding managing servers ("one size fits all" is something I hear a lot at my workplace). They do need to be configured for their role (either general purpose or specific use), and maintained as an important link in the IT service chain.
And I can't count the times a DBA has stated "oh we can provide redundancy at the Database level instead" and when pressed for the specific steps, either can't list them or spell out 15 manual steps with a raft of dependencies. Hardware and OS redundancy is automated from my end, any intervention that is required is usually a shift operator task. Beware the "only I am clever enough to do this" brigade in IT support.
Cases for both
There are cases for both. If your business is rich enough (cares enough about the IT department), you can afford to have a SAN for shared data. For those of us without such extra funding, or perhaps in the category of grandfathered into a sprawled DAS setup, redundancy on the storage level (replicating SANs for HA or the like) isn't very feasible. Hence the DBA jumping in saying "we can replicate it." Granted, I wouldn't replicate on the DB level for redundancy since end-users would have to point to the redundant DB if the primary goes down, unless you're using a DB gateway of sorts, at which point redundancy on the back-end does nothing if your gateway fails. Replication would be more useful for distributed load, specifically to target performance. Running that 15min report against your secondary DB server puts a lot less strain on your end-user experience than running such a report against your primary DB server.
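The routing decision described here, reports to the secondary, user-facing queries to the primary, can be sketched in a few lines. This is a hypothetical illustration; the class and DSN strings are invented for the example, not from any particular DB gateway:

```python
# Hypothetical sketch of read-routing: long-running reports go to the
# replica so they don't degrade the end-user experience on the primary.
class QueryRouter:
    def __init__(self, primary_dsn: str, replica_dsn: str):
        self.primary_dsn = primary_dsn
        self.replica_dsn = replica_dsn

    def dsn_for(self, workload: str) -> str:
        # Heavy, read-only reporting workloads are sent to the replica;
        # everything else (OLTP, writes) stays on the primary.
        if workload == "report":
            return self.replica_dsn
        return self.primary_dsn

router = QueryRouter("db://primary.example/app", "db://replica.example/app")
```

Note this targets load distribution, not redundancy: if the primary dies, nothing here fails anything over, which is exactly the distinction the comment draws.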
An ideal world would have all of us running mini datacenters with a replicated SAN, fully redundant servers hosting a variety of <insert-vendor>Motion-enabled VMs with a fully redundant 10Gb+ network. But when IT is viewed as a no-returns expenditure, we make do with what we are given, and provide the best reliability that we can. This just reinforces the "can't cookie-cutter servers" idea posited in the article.
So, while there are "simple" solutions for everything, sometimes the "only I am clever enough" solutions are within the economics of a business.
Nice primer... After 20 years in the industry I'm still surprised by the number of people out there who don't have a clue about how their apps run, and even more surprised at all those who *think* they know and yet work till ridiculous hours.
Good start but where's the rest?