* Posts by Trevor_Pott

6991 publicly visible posts • joined 31 May 2010

An ode to rent-a-nerds and cable monkeys

Trevor_Pott Gold badge
Pint

@Anomalous Cowturd

Both parents are psych nurses. Sister is a trick cyclist. Family up and down the line goes from shrinks to social workers.

Career in psychoanalysis? $deity...no thanks!

Had enough exposure to that to last several lifetimes. I propose pubs and good beer instead. :)

Trevor_Pott Gold badge

I don't write the article titles; take it up with $sub_ed.

:)

As to syllables...I have faith in the commenttard community.

Pano's virtual desktops go from zero to hero

Trevor_Pott Gold badge

PLC means a small IC whose processing can be changed by flashing it. It is essentially a mini-CPU whose logic structure can be altered. In most cases, only if you remove it, put it in the right device, and then flash it with a special device.

Now the end model can have a custom IC instead of the PLC, but almost nobody does that anymore. (They just solder on an appropriately flashed PLC, which you can't alter later.)

This PLC - or custom IC - does not need RAM to work. (Or not always.) It is no different from a "hardware H.264 video decoder": a little dedicated chip that does ONE THING, and is incapable of anything else.

Yes, a PLC can fit there: it depends on the configuration of said PLC, and if it has the "flash me" pins wired to anything.

Pano's zero clients cannot be flashed. PLC or custom IC, the effect is the same: a piece of dedicated hardware - absent the need for RAM or any form of software - that turns network packets into images on the screen.

A dumb terminal.

Trevor_Pott Gold badge

I don't think you understand what a PLC is. There is no software, no memory. There is a PLC that does the "convert network into display" work entirely in hardware. Not in software. Not with a flashable firmware. Not using RAM or a CPU. The NIC itself is more complex than the entire rest of the unit.

It is not a thin client at all. Your understanding of the tech is inadequate.

Trevor_Pott Gold badge

Protocols

Pano's protocols are open. As to the CPU, nope. AFAICT, there is a NIC, a very primitive graphics chip and a PLC managing between them. The PLC could be viewed as a CPU (maybe), excepting there is no OS, nor any way to do generic computation on the device. It's a dumb terminal, not a thin client.

Round up those wireless devices before they cause trouble

Trevor_Pott Gold badge

As with most organisations I have encountered recently...

...a CTO exists, but said CTO is not a techie. (In the best cases, the CTO *was* a techie, 25 years ago, but has forgotten almost everything.)

CTOs tend to be business, accounting, or HR specialists who exist to serve as a lightning rod for the varying complaints of middle and upper managers. They exist to push orders DOWN to IT, rarely to consider IT's input.

IT has less and less pull in the modern company. The mantra is that "IT exists to serve the business, not to dictate terms to it." This isn't unique to my experience; I've done my homework on this and it is a generalised trend.

IT policy decisions are made now by accounting, HR and business units. IT is to implement those policies and otherwise be ignored. If you are still Emperor Nerd of your little domain, congrats. Enjoy it while it lasts.

Eventually, someone will figure out that we’ve just exited one recession only to start another. They’ll look at the millions in the US without jobs, realise many are nerds, and ditch their noisome alpha geek for a more compliant model.

Welcome to 2011.

Trevor_Pott Gold badge
Childcatcher

Good question. The answer boils down to "most sysadmins don't get to make that choice." The days of sysadmins with enough authority to set corporate IT policy are gone. That sort of traffic management is now a business decision handled by bean counters, PR and upper management. IT is thanked for their input, then ignored.

The best you can do is lodge a formal protest. Your job is not only to do the impossible on a budget; it's to keep employees - especially powerful ones - happy. Management doesn't want to hear that the bad IT people blocked the internets or stopped someone from seeing that "training video" that was vitally important.

That's obviously Empire Building by IT and won't be tolerated.

No, the modern sysadmin has to provide it all, and they likely won't be given a choice in the matter. If you still have an IT job where you can set policy based on best practices (rather than the squeakiest wheel), treasure it.

Soon enough, it too will be gone.

Here come hypervisors you can trust

Trevor_Pott Gold badge

Or if...

...someone manages to figure out a remote attack that registers new OS module signatures with the TPM.

...someone manages to disable the trusted compute pools functionality altogether and then insert a borked hypervisor.

...someone manages to load malware into [insert some add-in card with its own boot ROM that isn't part of the TCP scheme].

...someone manages to get hold of "measured in acres" computing power to compute alternate byte lines that meet the same signature hash but allow you to use malicious code instead of the "proper" code.

It's a fun game! But it's still better than running the hypervisor with no checks or balances and just sort of praying. Traditional security measures (lock the damned door, keep the management interfaces on segregated networks and OFF THE INTERNET) are still absolutely required.

Trevor_Pott Gold badge

@Tom Chiverton 1

Trusted Compute Pools use the TPM. All the noise around trusted compute pools talks - of course - about support from the big vendors. I am suspecting at this stage that this is an attempt at shipping out-of-box "trusted" hypervisors.

In theory, there is a process to "register" a hypervisor - such as an update - with the TPM, thus allowing the signatures for the update to be validated. That isn't going to happen automatically; think more along the lines of having to access a vPro-like interface on the hardware and manually register the signature of the new hypervisor.
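To make the measure-then-compare idea concrete, here is a minimal Python sketch. Purely illustrative: the real TPM/TXT machinery, PCRs and launch-control policies are considerably more involved, and the build names and hash values below are made up.

    # Purely illustrative: hash the hypervisor image and refuse to launch
    # unless the measurement matches a previously registered "known good" value.
    import hashlib

    # Hypothetical whitelist, populated out-of-band by an admin (e.g. via a
    # vPro-like management interface) whenever a new hypervisor build is approved.
    REGISTERED_MEASUREMENTS = {
        "esxi-build-469512": "9f2c...d41e",   # placeholder name and hash
    }

    def measure(image_path):
        """Return the SHA-256 digest of the hypervisor image."""
        h = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def allow_launch(name, image_path):
        expected = REGISTERED_MEASUREMENTS.get(name)
        return expected is not None and measure(image_path) == expected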

Also: this tech should be usable for more than hypervisors. Want to make sure your Windows/Linux/Solaris server has not been tampered with? This should do it. Assuming those OSes produce signed modules of the type necessary for the TPM to play with...

I realise this doesn’t give you absolute and total control over every aspect of your hardware in the world’s most simple fashion. But, if it were easy to do, then it wouldn’t be very secure! The ability to register this remotely through a Trojan or botnet would entirely remove any semblance of security the thing was designed to provide.

I/O holds up the traffic in virtual systems

Trevor_Pott Gold badge

Spindles

256 spindles is not all that abnormal.

Also: Flash.

Trevor_Pott Gold badge

Wyse clients.

Wyse clients made for cheap endpoints. Also: you can drag extant desktops along with a very basic OS and no management if you are using them just as thin clients.

As to "more servers under VDI," I virtualise more than desktops! Buying dedicated physical servers to perform all fhe rendering, backups, hosting, etc would result in many more boxen than nof using virtualisation.

As to "single point of failure," I did address that in that when talking about RAID, maintenance, etc.

Trevor_Pott Gold badge
Megaphone

Nate Amsden

It's a question of financial resources. We’ve rounded the bend. Virtualisation is simply cheaper.

Virtualised environment:

6x servers @ $4000 = $24000

100 thin clients @ ~$400 = $40000

Software = $125000

Total = $189000

Advantages: Everything runs on RAID, getting people used to RDP means more of them working from home. *Way* less hardware cost.

Disadvantages: Shared resources can be a little slow at peak usage. (This problem is rapidly going away as new tech becomes more mainstream.) Software costs are about 1.3 - 1.5x the other route.

Same setup, non-virtualised:

24x Servers @ $2000 = $48000

100 fat clients @ $1000 = $100000

Software = $90000

Total = $238000

Advantages: Everyone gets their own dedicated hardware. Everything is always the same speed. Cheaper software.

Disadvantages: Way, way, WAY more expensive hardware costs. End user stuff isn't typically running on RAID. Backups are a pain in the ASCII. Swapping/maintaining hardware becomes a bit of a pain near end-of-life for end units. End users can't use their own equipment without having IT's security fingers all over it, creating wailing and gnashing of teeth.
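For the lazy, the totals above as a trivial bit of Python (same rough figures as quoted, not vendor pricing):

    # Back-of-the-napkin totals for the two builds above.
    virtualised = 6 * 4000 + 100 * 400 + 125000    # servers + thin clients + software
    physical = 24 * 2000 + 100 * 1000 + 90000      # servers + fat clients + software
    print(f"virtualised: ${virtualised:,}")        # prints: virtualised: $189,000
    print(f"physical: ${physical:,}")              # prints: physical: $238,000
    print(f"difference: ${physical - virtualised:,}")  # prints: difference: $49,000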

At the end of the day, dedicated hardware for everyone is faster. It is however a person’s salary more expensive and adds something like two bodies worth of management and support overhead.

I’ve been running fully virtualised shops since ~2005. I simply wouldn’t go back to physical unless I was working at a place willing to put in about 2x the money /[unit of measurement].

Welcome to 2011, $deity help us all…

Trevor_Pott Gold badge

@Lusty

Bandwidth issue was solved with addition of more 10GBit lanes.

For now...

Trevor_Pott Gold badge

zef

I inherited the SAN on this particular network. *sigh* Unfortunately, due to a squillion little reasons that add up to "changing this design would require completely tearing up and replacing the entire datacenter," I cannot go to DAS on this network for the foreseeable future. So additional SAN filers are simply added, and added, and added....

Most other networks I run have DAS storage on VM servers for exactly this reason.

Trevor_Pott Gold badge

What am I running?

VDI. Folk watch video from inside their VMs. Servers that render images, video and audio. Web sites. File storage (deduplication front-ends, backup systems, storage replication VMs, etc)

In a word: everything. Server CPU time is cheap and plentiful. Why not abuse the crap out of it?

Deduplication: a power-hungry way to streamline storage

Trevor_Pott Gold badge

In memory database

Power failure, RAM stick gone bad, Northbridge packing it in, CPU fan seizes up...

I'll stick with things that guarantee writes of both the data and the hash tables, thanks...

Trevor_Pott Gold badge

@T.a.f.T.

Thanks! Actually rather proud of this one; was trying to compress a difficult and complex topic into ~500 words. Additionally, it was bashed out via voice-to-text on my HTC Desire whilst driving down a pitch-black highway through the rain at 100kph.

Glad someone liked it!

Trevor_Pott Gold badge

Good questions

Truth is, if you want to port data from one vendor to another, you have to suck the undeduplicated data off the long way and then rededuplicate it.

No yarding disks out and moving them around.
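If you want a mental model of why that is, here is a toy sketch of block-level dedupe (illustrative Python only; real products differ wildly in chunk format and index layout, which is precisely why the chunks themselves don't port):

    # Toy block-level dedupe: store each unique chunk once, keyed by its hash.
    import hashlib

    CHUNK = 4096
    store = {}   # chunk hash -> unique chunk bytes ("the dedupe pool")

    def dedupe(data):
        """Split data into chunks, store unique ones, return the chunk-hash recipe."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            key = hashlib.sha256(chunk).hexdigest()
            store.setdefault(key, chunk)
            recipe.append(key)
        return recipe

    def rehydrate(recipe):
        """Reassemble the original data -- the only vendor-neutral form."""
        return b"".join(store[key] for key in recipe)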

Plex flexes media server pecs

Trevor_Pott Gold badge
Angel

>_>

*whistling*

Give me 10 gig Ethernet now!

Trevor_Pott Gold badge

True, but that isn't the protocol's fault. If you have big-time collisions, you are using a HUB, and...WTF?!? Indeed, find me a 10GBE hub...

Trevor_Pott Gold badge

You'd be wrong. The nightly backup requirement is 10TB. That is using compressed deltas and OTF deduping. The cumulative array size is ~0.5PB. Fairly average nowadays.

Seriously here...my *home* network has something on the order of 50TB. Enterprise backup requirements for a medium sized business being 10TB/day is not remotely unreasonable. It is all a question of what business you are in.

All the fancy software, rsync, snapshotting or hullabaloo in the world won't help you if you can't get the necessary bits from A to B. Yes, I have 3 shops that actually modify more than 10TB of data a day. Last year, not one of them modified more than 3TB of data per day.

Next year...?!?
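For those keeping score, the napkin math on moving 10TB nightly looks like this (assuming an 8-hour window and ~80% usable throughput; both assumptions are mine, not gospel):

    # How fast does the pipe need to be to move 10TB in an 8-hour backup window?
    payload_bits = 10 * 8 * 10**12                       # 10 decimal terabytes, in bits
    window_seconds = 8 * 3600
    required_bps = payload_bits / window_seconds
    print(f"{required_bps / 1e9:.2f} Gbit/s raw")                  # ~2.78 Gbit/s
    print(f"{required_bps / 0.8 / 1e9:.2f} Gbit/s at 80% usable")  # ~3.47 Gbit/s
    # Comfortably beyond a single 1GbE link; no sweat on 10GbE.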

Trevor_Pott Gold badge

I have gotten 900MB/sec sustained out of 10GBE links. Is that 10x 1GBE? Nope. Is it still hella much faster than I can get out of 1GBE? Yep. Is it faster than Fibre Channel? Yep.

So I'll take it. Beggars can't be choosers.

Network quality of service: making the switch

Trevor_Pott Gold badge
Happy

How many jobs?

Heh. I don't sleep much. IT in my city is interestingly competitive. There are squillions of local admins (myself included) bearing only credentials from the local polytechnic. With such a glut of identical credentials, wages are abysmal and most of the good ones have been driven away to other cities.

By contrast, anyone and everyone around here will hire a fresh-off-the-line BSc comp sci to be a sysadmin, but your resume goes in the bin without one. Experience means nothing to most companies; all they care about is that you have a (any!) degree.

Inevitably, the guy with the degree in programming (which is all that CompSci from the local university amounts to) is useless at being a sysadmin. They contract someone like me to fix the mess, but simply replace the permie with someone who has identical credentials. Rinse and repeat.

So while I may only be officially chief cook and Ethernet washer on about 6 networks, I play Mr. Fixit for a dozen more on a regular basis. At least until something better comes along.

ANONYMOUS: Behind the mask, inside the Hivemind

Trevor_Pott Gold badge

@Jason Bassford

All of the individuals interviewed spoke the language, knew the memes, hung out on Anonymous-related websites, IRC servers and forums, participated in Anonymous related activities and called themselves Anonymous. The individuals I talked to would be accepted by the majority of those who self-identify as Anonymous as fellow Anons. That’s really as solid a definition as is humanly possible under these circumstances.

If you chose to call yourself Anonymous, participated in Anonymous activities, etc., you would be accepted by them as well. Even if you disagreed with them about almost everything all the time.

For an example of anonymous internal disagreement, start up an argument about “how can you fight censorship by DDoSing (denying access to) a website? Isn’t that trying to fight one form of censorship by employing another form of censorship?” The debate has raged on for years, and will continue to do so for years more. (Or start a Republican versus Democrat thread.) Despite this, both sides of that debate would recognise the other as Anons.

It is all quite…complicated.

I have had conversations about a number of topics with thousands of individual Anons over the years, though I limited myself to 83 Anons on 32 servers for “targeted” interviews for this project. If you are curious where to find some of them to talk to, there are really two main tentacles of the hivemind I can point you to:

1) Chanology Anons have a presence on irc.anonnet.org

2) Activist Anons of all stripes will be found on irc.anonops.li

These are of course two of thousands of places Anonymous can be found, but they are good “starter” locations for those seeking to have discussions with a specific tentacle. Asking around will get you links to other servers and then you’re off to the races.

Trevor_Pott Gold badge

@Scorchio!!

To 'know' something to be true is to deny absolutely the possibility that it could be otherwise. I 'know' very few things in life. I believe many things to be true - within given error bars, depending on the topic - based on information, experience and observation.

With the exception of mathematics, it is my belief that nothing can ever be ‘known.’ The closest we can come is a state of understanding wherein the possibility of our being incorrect is vanishingly small, but still non-zero.

Trevor_Pott Gold badge
Happy

@AC

This wasn’t an assignment El Reg sent me on; it was one I asked them to let me publish. If it brings in new readers…good! El Reg has a collection of great writers, and it publishes news relevant to more than just its core stable of IT nerds.

If my articles are different than those you have come to expect from authors on El Reg…also good! It’s a bad thing when any news organisation has all contributors with the exact same sociopolitical leanings. The one thing I as a reader and commenttard have always loved about The Register is the diversity of opinion amongst her writers.

For every author who does not believe in anthropogenic driven climate change, there is one who does. For every establishmentarian, there is a disestablishmentarian. There are even a few antidisestablishmentarianists!

If you want the truth about why the article was written here it is:

With LulzSec running about, coverage of Anonymous skyrocketed. Much of it was blatantly wrong. Every article that was wildly inaccurate bothered me in the same way that the misuse of “you’re” and “your” does. I decided to put my time to better use than complaining about inaccuracies.

Instead, I spent three weeks talking to nearly 100 Anons, from all walks of life, on dozens of servers. I tried my best to put together the most accurate and comprehensive article on “who Anonymous is.” Target audience? Anyone who had an inaccurate understanding of Anonymous’s origins, motivations and structure.

The regularity of readership simply wasn't a consideration.

Future of the cloud is hybrid

Trevor_Pott Gold badge

@Matthew Malthouse

I would suggest at that point that your PHBs should take some night courses. I recommend risk management as a place to start. Managerial accounting - with a focus on the concept of "false economy" - would be my next recommendation. It could save the company a lot of money, and the PHBs a lot of embarrassment.

For every individual who says information should "stay" in the cloud I have two critical questions:

1) How much per hour does loss of access to that service cost you?

2) Does that IT service contain any information that would damage your business - or expose it to legal liability if the information were compromised?

The more vital the service, or confidential the information, the better the case for hosting it in house. Some services can be tossed in the public cloud “forever.” Others…well, there are strong arguments against that.
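If you want to put question 1 in dollar terms, the arithmetic is trivial (example numbers only, not anyone's real SLA or real losses):

    # Rough expected-downtime cost from an SLA's availability figure.
    def allowed_downtime_hours(availability, hours_per_year=24 * 365):
        return hours_per_year * (1 - availability)

    cost_per_hour = 10_000   # what an outage costs *you* per hour (example figure)
    for availability in (0.99, 0.999, 0.9999):
        hrs = allowed_downtime_hours(availability)
        print(f"{availability:.2%} uptime -> {hrs:.1f} h/yr -> ${hrs * cost_per_hour:,.0f}/yr")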

Before the hosted cloud can ever take off, hosting in the cloud needs to provide people with "warm fuzzies." No matter how many studies come out that "prove" that the hosted cloud offers superior security to the local cloud, /everyone/ prefers to have someone to flog when something goes pear shaped.

Take a look at modern cloud providers; they present a very narrow flogging profile to the customer. Hosted cloud providers have their own risk management experts; they try very hard to ensure that they take on zero risk when hosting another company’s information. SLAs that actually mean something are stupendously expensive…in the rare instances they exist at all. For most people, that simply won’t do.

Since hosted providers don’t seem to “get it,” only the most uneducated of management would trust to the hosted cloud those services whose loss would present a truly negative impact to the business.

http://www.theregister.co.uk/2011/03/28/cloud_hosting_service_level_agreements/

If you’re a consultancy, or other non-retail/non-real-time organisation, then the loss of your accounting package for a day or two probably isn’t the end of the world. You can use paper backups, and enter the information back into the system when it comes back online. If you are a company that requires access to those systems for every single operating hour in order to make a sale to your customers, then you are losing money for each minute it is unavailable.

If you are The Register, wherein you don’t collect a lot of personally identifiable information on your customers, then the loss of customer information isn’t the end of the world. Embarrassing yes, but a very low risk of legal ramifications. If on the other hand you are the aforementioned retail shop, you likely collect and store credit cards for your regular customers…the loss of that information could have catastrophic legal ramifications.

So I argue then that which aspect of the cloud (hosted, private or hybrid) matters to an organisation can never be dictated externally. The hosted cloud isn’t for everyone, and neither is the private cloud. I do however suspect that the hybrid cloud – a combination of hosted cloud services and carefully guarded local company services – will become the norm. I also think that as technologies and applications mature, moving services and data between the private and hosted clouds will become easier and far more commonplace than it is today.

Personally, I don't think that any company with a server room of at least 5 servers will end up with fewer CPU cycles on the next refresh. They'll end up with the same (or more) locally, and buy some time from a hosted provider to boot.

Trevor_Pott Gold badge

@Gary F

If you want to get pedantic about nomenclature, then I really should point out that there *is* no “internet.” It isn’t a defined thing, nor a place. The “internet” is “a series of individual local networks that are INTER-NETworked into a larger network.” By definition, the instant you connect two physically disparate networks to each other, you have created an internet.

Beyond that bit of pedantry, “the cloud” has been a networking metaphor for “the internet” for quite some time. It was a reference to “a black box into which we fling packets.” You would draw a line to a cloud essentially indicating you did not care what the infrastructure was that got your packets from A to B: that was someone else’s problem.

Today, this metaphor has been extended beyond packets and to entire IT services. I don’t honestly believe the term is thusly misused. Computers have become far more than the sum of their parts these days. Few people think of the individual hard drive that stores that one bit, or the southbridge that delivers it to the CPU. People think of “email” as a service, not as a series of bits, opcodes and packets.

So the use of “the cloud” to refer to “managed or clustered services wherein the fundamental architecture is obfuscated (because it is someone else’s job to care about that)” is perfectly valid. It is an extension of the original networking “cloud” metaphor that is perfectly in line with the evolution of IT service delivery over the past decade.

What gets me are people who forget that “the internet” consists of far more than “the web” and its associated services. There are dozens of protocols that shuffle bits from place to place that have nothing to do with HTTP, FTP or whatever flavour of instant messenger “da yoof” are using today.

Heck, there is quite a bit of traffic on “the internet” that isn’t even TCP/IP! When you remember that, maybe this “cloud” thing will bother you less. “The cloud” is a metaphor for obfuscated IT service delivery. It is a part of the internet, as much as the web, IRC, or usenet. It is distinct from other parts of the internet, yet can be the underpinning behind them all.

Help! My Exchange server just rebooted

Trevor_Pott Gold badge
Mushroom

@MrCheese

The reason why most of these articles are written about Windows is because most of the networks I am asked to look after are Windows. I have a handful of non-Windows networks in my care...but they don't tend to give me any problems worth blogging about.

The real problem is convincing people that they can live without two things:

1) Excel

2) Outlook’s “public folders/public calendar/integrated presence.”

Excel can stand alone…but why would you? So that means the Office Suite. Outlook is unique amongst the Office suite in that it won’t run in WINE, so that means a Windows Desktop. To use all of Outlook’s features you need an Exchange server, which means it must run on top of a Windows Server. So suddenly you are (at least) into “small business server” territory simply because someone somewhere is addicted to Excel.

That’s before we get into “every industry in existence has industry-specific software that /HAS NO OPEN SOURCE ALTERNATIVE/, and the vast majority of this runs on Windows.”

So yes, there’s a damned lot of Windows out there, some for bad reasons, some for perfectly legitimate reasons. Personally, I don’t want to run Windows unless I can avoid it. In the instances I’ve been allowed to deploy Linux to address a problem, that problem tends to go away for good.

Now, some folks get uppity and say “well, if you were smart/”a real sysadmin”/more like me/etc.” then you would be paying developers to port your industry specific software to Linux. Except that’s about ten years out of date. Anyone who develops software for a fixed operating system and isn’t doing it for HPC/Big Iron style “every cycle counts” computing is delusional. If you redevelop an application in today’s world you do it for the cloud.

So all this stuff is slowly moving into the cloud, and the desktops are evaporating with them. With the desktops go the servers and then that network gets crossed off my list as a potential client. Linux is great; but it’s rare as hen’s teeth in the SME market and there aren’t enough deployments out there to provide enough work to feed my family. Where it exists, it “just works” and doesn’t need a babysitter.

Linux has its place in the large enterprise market, and most of the Linux deployments aren’t going anywhere. Least of all into the cloud! But there aren’t many large enterprises with datacenters or head offices in Edmonton. Those that are here chronically understaff their IT departments because we have such a glut of sysadmins that job openings at larger organizations turn into cage matches. You can burn through sysadmins like a box of cheap candles and there will always be more to take their place.

So yes, I work on Microsoft networks. Because they are what pays the bills. They also break often enough that they give me something to write about on El Reg. When all the Microsoft networks evaporate into the cloud, well…

What does mission critical mean, anyway?

Trevor_Pott Gold badge

@dave.laurello

I don't think that you and I differ too much here. I think you are dead bang on correct about Google; they make "best effort" for continual uptime without going balls-to-the-wall crazy about it. They aren't willing to put themselves out of business trying to obtain the impossible.

But that’s my whole point! If Google – of all companies – can’t maintain “perfect” uptime without breaking the bank, then no company on earth can either. I hold Google up because Google has entire divisions dedicated to continually refining their high availability technologies. They have “given more back” to the community in terms of high availability technology than any other company in recent times.

People are less forgiving about downtime…but I think they are mostly more unforgiving about /unscheduled/ downtime than anything. My ISP will periodically send me an e-mail that says “oh, and BTW, one month from now, on this date, we’re nuking the net for 30 minutes between 03h00 and 03h30 Sunday morning.” I’ll get a reminder a few days before the event. Even if I plan to be up that night, I am not upset when it happens, because I was given adequate warning, and they get it done (more or less) within the appropriate timeframe.

So what is “mission critical?” Visa can’t keep their net up all the time, and without Visa’s network the world’s economy might actually collapse. Stock market femtosecond monetary masturbation stations lose millions of dollars for every *minute* of downtime, and they are still not showing “perfect” attendance.

Thus I think “mission critical” is pretty much dead bang on defined by Google: it’s the “best effort” uptime based on what your budget and technology will bear, within the boundaries of what your customers will tolerate. What your budget, technology and customers will bear are completely different not only company by company, but often department by department within the same organisation.

(That said, Google does a spectacularly piss-poor job of scheduling downtime and keeping customers informed. Then again, so do Amazon, Sony, Microsoft, Apple...)

Alternate definitions are of course always welcome!

Trevor_Pott Gold badge
Happy

I have had a few PFYs...

...and I do work side by side with another systems administrator at my day job. His name is Peter Washburn. He is a proper bastard in his own right, and I am proud to say I learn as much from him as he does from me. You couldn't ask for a better sysadmin, in my opinion.

As to the audio quality...recorded Skype. Not always the best thing in the world. I was using a proper headset though. Used this: http://www.zalman.com/ENG/product/Product_Read.asp?idx=213

Does come with a proper microphone. Will work on this for next time.

VDI is not the only fruit

Trevor_Pott Gold badge

@Theodore

Please do some research on "Remote Desktop Gateway," formerly called "Terminal Services Gateway." I believe that this will suit your needs. As of Server 2008 R2, it works in conjunction with Hyper-V to assign not only terminal services sessions from a pool of terminal servers, but can also serve up Virtual Machines from pools you define.

Mission critical computing for the masses

Trevor_Pott Gold badge

Ouch.

I have never - honestly, not once in my life - believed something was "better" because it was new. I viciously and vociferously mock people who equate "new" with automatically "better." So I will take your comment in stride and try to do better next time.

As you pointed out, the article really was largely aimed at “reminding folks of some of the more significant developments of the last 15 years.” This article was never really meant to be a standalone: it is part of a series of interrelated articles. It is possible you have not read my previous articles on the topic that do go into far more depth on many of the issues you raised. Here are links:

http://www.theregister.co.uk/2011/03/17/pervasive_encryption/

http://www.theregister.co.uk/2011/03/21/hybrid_cloud/

http://www.theregister.co.uk/2011/03/28/cloud_hosting_service_level_agreements/

http://www.theregister.co.uk/2011/04/08/cloud_management_tools/

http://www.theregister.co.uk/2011/05/06/cloud_service_measurement/

http://www.theregister.co.uk/2011/05/08/cloud_microsoft_azure_analysis/

http://www.theregister.co.uk/2011/05/11/cloud_scale_cost_considerations/

As to “was there a problem in the first place,” well...yes! The problem is the same as it has always been: how to most efficiently deliver IT services. There isn’t “one true answer” to that problem; cloudy services are just one more tool in the toolbox. Neither innately good nor bad for their recent appearance; cloud computing must be viewed with the same sceptical eye we would use when analysing any technology.

Trevor Pott's guide to pricing up the cloud

Trevor_Pott Gold badge

Aye.

Check these out:

http://www.theregister.co.uk/2011/03/28/cloud_hosting_service_level_agreements/

http://www.theregister.co.uk/2011/05/06/cloud_service_measurement/

http://www.theregister.co.uk/2011/03/21/hybrid_cloud/

They flesh that bit out better, I feel.

Trevor_Pott Gold badge
Pint

"Canned."

86 articles total. One didn't make it. Of course, this is probably because my editor takes the time to bounce them back when they're bad. Now, when talking about work e-mails that I wish I had edited a little before sending, that's a whole other story...

Trevor_Pott Gold badge
Thumb Up

That is a good question.

It also relates not only to some of my own musings, but some issues I will soon be faced with as a minor cloud provider via my day job. Investigation required; I'll put it on my list!

Trevor_Pott Gold badge

Thanks!

I'd love to take all the credit...but I really can't. Any skill I have – and the quality of the polished, finished product – exists only because of the patience and hard work of the excellent individuals who edit my articles.

Also: this article - as well as many others - was written at the pub. I find that writing at home has too many distractions. Writing at work tends to lead to getting interrupted by support calls. Writing at the pub is oddly peaceful. Nobody bothers you, the glass of Diet Coke is never empty...and when you're done you can reward yourself with a beer.

The great thing about writing for a tech magazine is that somewhere in the neighbourhood of 75% of all the research that needs to be done can be done from a tablet at the pub using RDP. For example: taking the time to sign up for various cloud services, create instances on them, play with their management tools, try to break them, etc. So when I head to the pub tonight, I’ll raise a glass for you, sir, and all the other readers who enjoy my pub-written research. :D

Canada's internet future at stake in Monday election

Trevor_Pott Gold badge

Metered rates

Closer to $5/GB if you go over your (very low) limits. In some cases as high as $10/GB.

Internet in Canada is ass, and it's only going to get worse under Harper.

Some top cloud tools to bash up the bus factor

Trevor_Pott Gold badge

@AC

Simplecloud is but one example. Other projects exist.

As to Spiceworks, if you can't find a plug-in or feature you need...ask! You might be surprised at how helpful both the community and the company can be.

Trevor_Pott Gold badge

@AC

"OTOH, should we push for a single unified API to do "cloud-y" things? Discuss."

http://www.simplecloud.org/

"Apropos ticketing systems, find me one that will have true "email integration", meaning I can do bloody everything conceivable with it through email without ever having to touch any other interface."

Spiceworks isn't quite there yet, but it is very close. Spiceworks has an active community who often create plug-ins and extensions to the product, bringing functionality the core application doesn't have. Spiceworks themselves are also very open to working with the community to meet feature requests.

Cheers,

--Trevor

Data centres gripped by power struggle

Trevor_Pott Gold badge
Happy

Click the link

Click the link to the Microsoft case study. They discuss DC versus AC supply and its economic considerations. Apparently, MS engages in both...

Pervasive encryption: Just say yes

Trevor_Pott Gold badge

@Anonymous Coward

Well, let's look at this a little:

First: WEP. WEP is terrible. WEP and WPA are both easily cracked, with WPA2 Personal being within the "possible but unlikely" range. Had the company in question been secured with WPA2, this game would have been over before it began.

Second: Signed binaries on the router. Had the router's operating system design taken into account the idea that eventually people would find a vulnerability and root the system, they would have implemented a process of signing their binaries. Any change to the binaries (so as to add software for a man-in-the-middle attack, for example) would be rejected as compromised code. It might result in the router bricking itself, but that's better than allowing an attacker to gain a foothold.
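The signed-binaries idea in miniature, if it helps (a sketch using the Python 'cryptography' package; obviously a real router would do this in its boot ROM, not in Python, and the function here is purely illustrative):

    # Before accepting any firmware image, verify it against the vendor's
    # baked-in public key; refuse the update if the signature doesn't check out.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    def firmware_is_genuine(image, signature, vendor_pubkey_pem):
        pubkey = load_pem_public_key(vendor_pubkey_pem)
        try:
            pubkey.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False   # refuse the update: better a bricked router than a MITM box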

Third: HTTPS. While it’s true that with some SSL scenarios, you can pick up the encryption keys (or information passed to the server from the client) if you get there as the session is set up…you certainly can’t decrypt an SSL stream with the processing power available to you on a broadband router. (I have my doubts you could do it in real-time even with a van full of equipment; the wireless bandwidth would be the limitation to attacking SSL with rainbow tables.)

If you can’t pick apart the stream, you can’t inject code. If you can’t inject code, you can’t compromise the browser, root the Mac and install a bunch of lovely world-ending crap. (It was this crap that got browser-injected that eventually scraped the banking passwords.)

There are other cases where encryption saves the day for other breach scenarios, but any of these would have saved the day here. Simple things that – in my mind – the end user should never be responsible for. Setting up strong wireless encryption should be simpler than Bluetooth pairing. For *some* modern routers, it is. Routers should be designed with security in mind; they are lovely targets; manufacturers should be locking them down tight and signing binaries. Lastly…everything everywhere should be SSL forever. We need to stop assuming that the communications channels between our customers and our cloud services are secure. They aren’t. There are so many ways to intercept that stream that we need to consider it a /public/ communications stream and encrypt all interactions.

That’s where that 25% comes in. Cloud providers are afraid of the overhead on their side of the equation if they not only enable SSL, but make it the default. It is where – quite frankly – cloud providers are failing their customers.

Trevor_Pott Gold badge

@Ammaross Danan

Quite a bit of vitriol. Let us examine some of it. First off, full disk encryption (even using TrueCrypt) can be a much higher CPU hit than 3-4%. Many factors matter here. How fast is your CPU? Do you have a TPM on that system? Drivers and such loaded so that your applications can take advantage of it?

As stated in the article: encryption overheads don't have to be anywhere near the traditional 25% hit. It is however a number frequently brought out by people who believe that encryption is a waste of resources. In the case of this PowerBook G3, full-disk encryption would cheerily eat 25% of the CPU. Quite probably more.

Next, I am lacking an understanding of how pervasive encryption became a "mantra." It is true, I believe it should be used in more places than it is...but it is but one tool amongst many. There are plenty of elements of encryption that these businesses can undertake without any real budgetary impact.

The burden of encryption should not belong to end users alone. Cloud services – from gmail and twitter all the way to EC2 and Azure – should not only be offering SSL as a possibility, but redirecting every HTTP request to HTTPS. You should have to choose to use an unencrypted communications protocol – after a warning pops up to tell you the risks – rather than the other way around. The burden here in my opinion is largely on the cloud providers themselves; one they shoulder only very reluctantly, it seems.

Another point of interest: you rightly point out that sticky notes are not a bad thing in and of themselves, physical access is required, and the attacker would have to see them to use them. Simply putting the commonly used passwords on a sheet of paper, placing it inside a folder (that everyone at the workplace knows about) and putting it inside the filing cabinet beside the computer desk is a marked improvement in physical security for very little additional work.

As to the consulting job itself, here is how it played out: Updating the PowerBook’s software – most importantly the WiFi drivers – to be able to use WPA2 was my very first step. Next was simply junking the router and getting something that could support WPA2. After this, I introduced them to Firefox, and lovely plugins like https://www.eff.org/https-everywhere/

I set them up with Dropbox and a scheduled task that zips up their critical 20MB of information every night into an encrypted ball and moves it into the Dropbox folder. A second scheduled task prunes backups older than 3 months. All their sites are set up on this now; it has already saved their butts when one of the old PowerBooks dropped its disk.
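For the curious, the nightly job is roughly this shape (paths, retention and the 7-Zip invocation here are illustrative stand-ins, not the exact deployed script):

    # Stuff the critical data into an encrypted archive in the Dropbox folder,
    # then prune anything older than ~3 months.
    import subprocess
    import time
    from pathlib import Path

    SRC = Path.home() / "critical-data"
    DEST = Path.home() / "Dropbox" / "backups"
    KEEP_DAYS = 90

    def nightly_backup(password):
        DEST.mkdir(parents=True, exist_ok=True)
        archive = DEST / time.strftime("backup-%Y%m%d.7z")
        # 7-Zip: 'a' = add to archive, -p = password, -mhe=on = encrypt headers too
        subprocess.run(["7z", "a", f"-p{password}", "-mhe=on", str(archive), str(SRC)],
                       check=True)
        cutoff = time.time() - KEEP_DAYS * 86400
        for old in DEST.glob("backup-*.7z"):
            if old.stat().st_mtime < cutoff:
                old.unlink()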

These are cheap solutions, all involving just a little bit of encryption that – while not the perfect or ideal solution – add a layer of security overtop the impenetrable user apathy that exists at this business. Most importantly, it cost them only one cheap replacement wifi device per site. I didn’t even charge for the three hours of my time.

It should be pointed out that even at that, they were exceptionally reluctant to spend money on IT. This is a company where store managers must update all the accounting spreadsheets, put the numbers into the accounting program, then print out the statements and fax them in to the accountant. Why? The accountant refuses to own a computer and keeps all records by hand on a 30-column ledger.

We can’t ignore that these businesses exist. You state you are a freelancer and speak of advertising and winning business. Well, it is for yourself and other freelancers that I feel writing such articles is important. I hope that it is a bit of cold reality to remind all freelancers and consultants “these people are out there.”

Myself, I am not. I end up taking these jobs not because I am a consultant, or because I need/want their money. I do it because I feel a weird sense of duty; an obligation to help the technologically impaired. I have the ability to lend a hand…why wouldn’t I?

So sir, pervasive encryption is not a mantra, nor is it overly burdensome or expensive…except to cloud providers who are not taking full advantage of cryptoprocessors in their infrastructure. It can however – like the airbag mentioned elsewhere in this thread – be an important safety measure when others have failed.

Trevor_Pott Gold badge

@James Cooke

HTTPS doesn't add 25% at the user end, but rather to the cloud provider's end. Google, Oracle, Twitter and others have made mention of this overhead. Specifically, it is used as a reason not to enable HTTPS by default on their services. (Or in some cases, even offer HTTPS at all.) That said, any decent cloud provider would in my mind be designing upcoming systems with this in mind; installing cryptoprocessors and ensuring that they have the ability to offer encryption to their customers as a standard, without a great deal of additional burden on their datacenters.
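The "bounce every HTTP request to HTTPS" behaviour is about this complicated, in miniature (a toy Python sketch; a real provider does this at the load balancer or front-end web tier, not in a script):

    # Answer every plain-HTTP request with a 301 pointing at the HTTPS equivalent.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "example.com").split(":")[0]
            self.send_response(301)
            self.send_header("Location", f"https://{host}{self.path}")
            self.end_headers()

        do_POST = do_HEAD = do_GET

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()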

Trevor_Pott Gold badge

@Havin_it

Sir, I respectfully request you consider the geocentricity of your statements. PCI DSS is not law in Canada. Even if it were, no one would enforce it here. As a small business administrator here in Canada, I have seen this and far, far worse. What I relate is not a tale made up out of thin air. It is a tale based upon what I have seen with my own two eyes.

Seen, because when they found out something was splork (tens of thousands of dollars later,) they called a friend who called a friend who called a friend who referred them to me. You make some large assumptions about people, businesses and the technical capability of both. The majority of people aren’t IT nerds. They really, honestly do only care that it “just work.” They don’t want to – and will stubbornly refuse to – learn more than the bare minimum required to get the task done.

In this case, for all the “just do X, problem solved” comments one could lob…it doesn’t change the fact that A) a great many people don’t know that and B) a largish % of those same people wouldn’t do anything about it (until it bit them in the ass) even if they did.

Trevor_Pott Gold badge

Thank you.

I don't know why some people read an article like this and come away with "encryption is the only answer; the only solution you need!" I think it’s a tool, an important and useful one that we shouldn’t be working without. I believe it should be on by default. Its use could prevent some easily-avoidable wetware errors such as the one detailed in the article.

It is by no means foolproof. Tape a clipboard to that airbag and you might well get a chance to watch Darwin in action. Still, if the user doesn’t have to install and configure the airbag on their car, there is a reasonable chance that – barring some world-ending clipboard-esque stupidity – that airbag will be there and functional when it is needed.

An airbag doesn’t guarantee your survival in case of an accident. If you screw with the design of the airbag through apathy or idiocy, you can render it useless. I see encryption the same way; a form of digital airbag. It isn’t guaranteed to save you, but it might just help when the brakes (wetware education, training and corporate procedures) fail.

Trevor_Pott Gold badge

@Colin Millar

Well said.

Trevor_Pott Gold badge

@armanx

The difference here is that you don't have to install and configure the airbag yourself. Cars are designed with the idea in mind that the end user doesn't know how to maintain them. We also have decades of a culture wherein "if you don't understand how to fix your car on your own, bring it to the mechanic on a regular basis."

That culture has yet to spread to computers, as has the idea that they should be simple to use. Worse yet, computers don't come out-of-the-box configured for safety. Nor do several cloud services. That the option for better security exists helps not at all if the end user knows nothing about it.

IT folk love to blame the user. They love to blame the business owner. They like to blame everyone and anyone excepting themselves. Security should be built-in, on by default and easy to understand from the word “go.” In some cases, great strides have been made. In others, even the most basic precautions aren’t followed.

There is still much work to be done; I believe that applies to all sides of the IT problem. Developers, device manufacturers, service providers, sysadmins and yes...even the end user. I don’t believe any link along that chain can reasonably be expected to bear the entire burden alone.

Trevor_Pott Gold badge

@Havin_it

Sincere apologies. It was indeed a POWERbook, not a MACbook. Same insanity, different chip architecture. :D PowerBook G3, if you must know.

Trevor_Pott Gold badge

You are correct.

Virtually all cloud operators offer HTTPS. Few offer it as default. Fewer still enforce it as the only option.