The costs don't fall in the same places, though. It's a mistake to think of this purely in terms of money. The ultimate limited resource is time.
If the user has to spend an extra 20 seconds on every transaction re-clicking the "send" button until it works, that's a cost to the user, not to the owner of the software. The owner's cost is limited to (1) making sure the user knows enough to do this, and (2) absorbing the (probably unmeasurable) attrition among users who get too fed up to keep doing it.
Software changes follow a depressingly predictable lifecycle. Someone requests a new feature, it gets specced and costed, and a few months later it gets delivered and tested, whereupon the testers invariably discover it's a humungous pile of bat guano. They list the defects and demand that they be fixed, and a month later the new version arrives in which defects A-E have been fixed, F-J haven't, and K and L have been introduced. This process can go on as long as everyone's patience holds out, but sooner or later something will give - the customer will say "close enough", or the supplier will say "that wasn't in the spec" - and by then everyone concerned is so sick of the sight of it that they deploy it just so it becomes Somebody Else's Problem.
Is it a good system? Heck no, but it's what we've got. Software engineering practice assumes that a spec "should" be perfect and that any falling short of this ideal is a failure by one side or the other, but Gödel's Incompleteness Theorem means the "perfect" spec is a logical impossibility, even before you throw in other confounding factors such as the Law of Leaky Abstractions.