Continuous delivery: What works (and what doesn't)

The notion that we might just as well automate everything is common in the perpetually dynamic world of continuous delivery. What does continuous delivery of software actually entail, and is it a practical solution all the time? It is effectively like refuelling your car while driving. Or perhaps a better …

COMMENTS

This topic is closed for new posts.
LDS
Silver badge

The fact is CD needs a lot of resources - and not everybody has them

Continuous Delivery is nice but requires a lot of resources. You need people writing a lot of tests and delivery scripts, and maintaining them. You need developers to add a lot of test hooks inside the code, and to keep the software deliverable at each commit.

You need a lot of tools, you need to make them talk to each other, and you need to maintain all that stuff. You need an infrastructure to simulate your environment, and although virtualization now helps a lot, it may be expensive anyway.

It's not practical all the time, nor always needed. Facebook can add small improvements because it is basically a very simple application; it becomes complex due to the sheer number of users. But there are applications with just a few users performing very complex tasks. During a development cycle these may enter states where it's very difficult to keep them "deliverable", because the changes may be complex as well and not completable in a single iteration. Like everything, adopt what suits you, not what is fashionable.
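As a hedged sketch of what all that tooling boils down to (stage names and structure hypothetical, not from any particular product): a delivery pipeline is a chain of stages, and a commit only counts as deliverable if every stage passes.

```python
# Minimal sketch of a delivery pipeline: a commit is "deliverable"
# only if every stage passes. Stage names are hypothetical.
def run_pipeline(commit, stages):
    """Run each stage in order; stop at the first failure."""
    for name, stage in stages:
        if not stage(commit):
            return (False, name)  # pipeline fails at this stage
    return (True, None)           # commit is deliverable

# Toy stages standing in for real test suites and delivery scripts.
stages = [
    ("unit-tests",   lambda c: c["tests_pass"]),
    ("integration",  lambda c: c["integration_pass"]),
    ("deploy-check", lambda c: c["deploy_script_ok"]),
]

ok, failed_at = run_pipeline(
    {"tests_pass": True, "integration_pass": False, "deploy_script_ok": True},
    stages,
)
# → ok is False, failed_at is "integration"
```

The maintenance cost the poster mentions lives in the stages themselves: every one of those lambdas is a real test suite or script somebody has to keep working.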

6
3
Silver badge

Re: The fact is CD needs a lot of resources - and not everybody has them

Agreed.

If you have the people and resources to "pipeline" the stages then this can work really well. If you have to take people off one task to perform another then CD becomes more a hindrance than a help.

0
0

Re: The fact is CD needs a lot of resources - and not everybody has them

And if you work on a core piece of an application that has 20+ downstream consumers in the build tree, and some of those pieces are maintained in different time zones, then even with advance warning and pre-agreement, you're going to go at least 2 days without a successful full integration build if you need to make certain changes. The core piece works correctly as committed, but each downstream piece (in stages, if the build tree is more than 2 levels deep) is going to have to write unit tests against the code changes and then commit their own updates; and so on.

And worse, if they discover bugs (e.g. performance/scaling problems with more nodes than they ever let on they were creating, or when they try to add 150 000 rows atomically into a table) then it'll have to feed back to the core team, who will need to replicate, debug and fix - all the while the deliverable isn't. Of course when core team gets the fix in, it's Thursday afternoon, and the downstream guys have gone home for the weekend...

I guess what I'm saying is that this might work for a group that produces a single piece of software, in probably less than 20 build stages, with a team that's not scattered across 8+ time zones. Like so much else, though, I don't see it as a panacea.
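A toy sketch of why that staged rebuild takes days, assuming a hypothetical four-component tree (names invented for illustration): downstream consumers can only rebuild in topological waves, one dependency level per pass, and each wave can cost a working day when the team owning it is in another time zone.

```python
from collections import defaultdict

# Hypothetical dependency tree: each component lists what it builds against.
deps = {
    "core":  [],
    "authz": ["core"],
    "api":   ["core"],
    "webui": ["api", "authz"],
}

def rebuild_waves(deps):
    """Group components into waves; each wave can rebuild in parallel,
    but a wave must wait for every wave before it to commit first."""
    depth = {}
    def d(name):
        if name not in depth:
            depth[name] = 0 if not deps[name] else 1 + max(d(p) for p in deps[name])
        return depth[name]
    waves = defaultdict(list)
    for name in deps:
        waves[d(name)].append(name)
    return [sorted(waves[i]) for i in sorted(waves)]

# A breaking change to "core" ripples down one wave at a time:
# rebuild_waves(deps) → [['core'], ['api', 'authz'], ['webui']]
```

With 20+ consumers and a tree more than two levels deep, the number of waves, not the number of components, sets the minimum time without a full integration build.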

0
0

Re: The fact is CD needs a lot of resources - and not everybody has them

These things are possible. I remember this talk covering some ways you can approach the solution:

http://skillsmatter.com/podcast/agile-testing/continuous-delivery-patterns-for-large-software-stacks

However, the best way to deal with the problem is not to have it. If you can adopt a component architecture (SOA, Microservices, plugin frameworks etc) with loose coupling then you should have a component simple enough to perf test for anticipated use and to be able to put it live without requiring downstream releases.
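One hedged sketch of how loose coupling makes a component independently releasable (interface and names hypothetical, in the spirit of consumer-driven contract testing): record which parts of the interface downstream consumers actually use, and fail the build only when that recorded contract breaks.

```python
# Sketch of a consumer-contract check: before releasing a component,
# verify its public interface still satisfies what a downstream
# consumer recorded it needs. All names are hypothetical.

# The interface the component currently exposes: function -> parameters.
provided = {"get_user": ["user_id"], "list_orders": ["user_id", "limit"]}

# What one downstream consumer recorded that it relies on.
consumer_contract = {"get_user": ["user_id"]}

def contract_satisfied(provided, contract):
    """Every function in the contract must still exist, with the
    contracted parameters in the same leading positions."""
    return all(
        name in provided and provided[name][: len(params)] == params
        for name, params in contract.items()
    )

# While this holds, the component can go live without a downstream release.
# contract_satisfied(provided, consumer_contract) → True
```

The design point is that the check encodes only what consumers use, so adding `list_orders` parameters or new functions never blocks a release; only breaking `get_user` would.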

0
0
Roo
Silver badge

Re: The fact is CD needs a lot of resources - and not everybody has them

"And if you work on a core piece of an application that has 20+ downstream consumers in the build tree, and some of those pieces are maintained in different time zones, then even with advance warning and pre-agreement, you're going to go at least 2 days without a successful full integration build if you need to make certain changes."

That problem happens with or without CD.

In my view the way to tackle adding a tricky feature that will break stuff is to create a branch and then merge back to trunk when it's done. If you wish you can apply the CD process to the branch(es) as well. ;)

0
0
Silver badge

Dammit Adrian! I'm a Coder not a Baker!

Great article though :)

1
0
Silver badge

I see what you did there ;)

Have an upvote on me.

0
0
Silver badge

Rollback

Every deployment needs a reversion plan. Any change may break another part of the system, with the risk increasing exponentially with application age and complexity. For Facebook, who cares if you can't find out the mood of your 500 friends for an hour, a day, a week? But an actually useful system needs a big red button to revert application changes while still keeping (or rebuilding) the data that arrived in between. That means either wastefully having engineering teams writing reversion plans full time, or keeping engineering teams on standby to dig you out of trouble.
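A hedged sketch of what that big red button amounts to (all names hypothetical; a real system must also preserve or rebuild the data written between deploy and rollback, which is the genuinely hard part the comment points at):

```python
# Minimal sketch of deploy-with-rollback: keep the previous version
# around so a bad release can be reverted with one call. Names are
# hypothetical; data migration/rebuild is deliberately left out.
class Deployer:
    def __init__(self, live_version):
        self.live = live_version
        self.previous = None

    def deploy(self, version):
        # Retain the outgoing version as the known-good fallback.
        self.previous, self.live = self.live, version

    def rollback(self):
        """The big red button: revert to the last known-good release."""
        if self.previous is None:
            raise RuntimeError("nothing to roll back to")
        self.live, self.previous = self.previous, None

d = Deployer("v1.4")
d.deploy("v1.5")   # v1.5 goes live, v1.4 kept as known-good
d.rollback()       # sky falls in → back to v1.4
```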

This is where IT departments are unpopular: defending the business from itself. The business loves the idea of continuous deployment and delivery, and it also gets to point the finger when the sky falls in. The idea of having to provide that level of support when you can bet IT budgets are under downward pressure makes me shudder. Not the business users' fault; they don't understand the incredible complexity of their phones, let alone massive asynchronous redundant transactional data systems.

1
0

We're still missing mature open source frameworks to support CD

There isn't an application orchestration engine (that I know of) as mature as the Puppet/Chef infrastructure tier, and this forces us all into a lot of painful in-house custom work. I've used Glu and Capistrano and both have problems. Asgard might work next time I'm on AWS again.

0
0

Something not to forget

It is already a nightmare to provide technical support to customers on different releases when those releases have gone through proper testing at all levels; it's going to be hellish if every customer has a different snapshot.

1
0
h3
Bronze badge

The way Sun used to do it seems better than any of these newer methods.

0
1
Silver badge

Which was....???

2
0
Anonymous Coward

CD = Collision Detect

Visions of the Titanic's helmsman (blindfolded) shouting 'We are unsinkable' as it goes under.

Like so many 'next best thing' memes, it comes at a cost. Unless you subscribe 10000000000% you are regarded as a heretic, unbeliever, idiot (delete as applicable).

Our IT dept is currently on the 'Agile is God' kick. Oh, by the way, we don't do documentation, so good luck supporting it, unbelievers.

1
0
Silver badge

Re: CD = Collision Detect

They're into undocumented and probably hence inconsistently implemented "agile", are they? Whatever you do, make sure you're nowhere near those projects in about 2 years when they implode. They will, and it will be messy. Claret all over the walls kind of messy. Scapegoats fired kind of messy.

Luckily there's no documentation, so nothing with your name on it ;)

1
0

Re: CD = Collision Detect

'Agile' is just an anagram of 'Chaos'

0
1

Re: CD = Collision Detect

If you don't do documentation, you're doing it badly. Unless your system is so simple it doesn't need any, but that's unlikely.

0
0
Anonymous Coward

Re: CD = Collision Detect

"Whatever you do, make sure you're nowhere near those projects in about 2 years when they implode. They will, and it will be messy"

So, have you read the latest government guidance on computer system design?

Alpha -> Beta -> Live, no documentation. Beta is a prototype, and then you miraculously go live (from a prototype) a few weeks later!

It works with noddy websites, but when they push this Ladybird approach to IT out to major systems, it's a car crash waiting to happen. I know of at least one HUGE system, which must have at least a thousand man-years of work in it, that they plan to replace entirely from scratch in one year using this bonkers approach.

Anon because my employer might not like the truth coming out. At the moment it's Emperor's New Clothes with all this GDS stuff.

0
1
Anonymous Coward

No direct CD involvement, but...

Let's take a major ERP vendor (call it Delphi to protect the guilty) which is working on its major new-ish Nova ERP product (again, a changed name).

This is run mostly from India by a bunch of very smart propeller-heads who have drunk deeply of the Java Kool-Aid. Lots of coders too, no lack of resources there ;-)

The moment a unit test breaks in the CD pipeline, the VPs (ex-coders all) haul in the unfortunate programmer and demand an immediate fix. No matter the complexity of the issue, the test has to work the next day.

Friday afternoon? Lucky you, you have till Monday morning to fix it.

Now, I know _I_ can fix 80% of the bugs I get within an hour or two of work, especially if I am not writing up Reg comments. But sometimes it's... complicated. You need to refactor or clean up something that has turned out to be badly designed. Sometimes you need to understand how the code is designed, not just band-aid an IF onto the offending function. So I really, really question the value of constantly chasing your tail that way.

Project Nova? Years late, underwhelming customer take-up at best, low feature set and fairly buggy in the wild. But 60+ hour weeks for all involved ;-)

0
1