Re: "...purchasing a vehicle with non-working features?"
"Seeing as how it needs to be planned, documented and developed to some process resembling DO-178B or C, DAL perhaps A or B,"
The automotive industry has for years been "avoiding" this issue. It uses the MISRA rule set for C programming to justify claims that its software is "safe". The problems with that are:

i) MISRA compliance is more smoke and magic than hard proof of correctness, since it says nothing about whether the logic is actually right (a sketch of this appears below);

ii) MISRA tool chains that I've used were perfectly capable of compiling correct source code into junk object code that didn't implement it (optimisation bugs);

iii) there's no guarantee that the vendor's C libraries are themselves MISRA compliant. One tool chain I've used most definitely was not: its C library's source code wasn't compliant, and the library was buggy. Yet it had a tick box labelled 'MISRA', and it was, and still is, widely and highly regarded throughout the community.
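To put point i) concretely: MISRA is a coding-style standard, and a checker looks at how code is written, not at what it computes. Here's a deliberately artificial sketch (the function, names and numbers are all invented for illustration) that would sail through a typical MISRA C checker while containing an obvious logic bug:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical automatic-braking decision, written in MISRA-friendly
       style: fixed-width types, no magic numbers, single point of exit. */
    #define BRAKE_DISTANCE_CM ((uint32_t)1500U)

    static bool brake_needed(uint32_t obstacle_distance_cm)
    {
        bool result = false;

        /* Logic bug: the comparison should be '<'. A MISRA checker is
           perfectly happy with this line; functional correctness is
           simply outside its scope. */
        if (obstacle_distance_cm > BRAKE_DISTANCE_CM)
        {
            result = true;
        }

        return result;
    }

And that's before point ii) kicks in: even when the source is right, it's the compiler's object code that actually runs, and a MISRA tick box says nothing about that.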
Of course none of that has mattered, because until now actual safety in cars has been provided by everything ultimately being mechanical or hydraulic, with software never taking the primary role in controlling the car.
But with things like self-driving? Yep, the applicable standards have indeed got to be things like DO-178B, etc.
"so maybe ten lines of code a day per qualified coder drone, typically. So 250GB of tight code, at 500 bytes per Coder Drone day, it should be ready for beta release just in time to get tangled up in the Y10K problem."
Neat way of providing investment guidance!
An industry rule of thumb I picked up some time ago was more like one single line of code per coder per day across an entire software project of this type. Once the design and specification were done, the PMs would estimate the size of the project and do their cost estimation that way. And that was on systems that had to be correct but were still human supervised. I dread to think how slowly a truly safety-critical piece of software such as a self-driving car would come together.
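For anyone checking the parent's arithmetic: 250 GB at 500 bytes per coder-day is 250,000,000,000 / 500 = 500 million coder-days, or about 2 million coder-years at roughly 250 working days a year. A shop of 250 drones would therefore need about 8,000 years, which does indeed land the beta release somewhere around Y10K. At my one-line-a-day figure, multiply by ten.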
Of course the self-driving guys know this. So they're spinning up arguments for approving rapidly developed code as safe on the basis of usage statistics, to grandfather their systems into autonomous use. Kinda like: "it's not gone wrong yet in our trials, so therefore it must be OK for all eternity". Accepting code in this way would be unprecedented in the history of safety-critical systems and transportation. There'd also be the potential for a systemic and hitherto unidentified fault causing mass carnage and the world's most expensive lawsuit.
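There's a standard bit of statistics, the "rule of three", that shows just how weak "no failures yet" is as evidence: observe zero failures in N independent trials and the 95% upper confidence bound on the failure rate is about 3/N. So even a million accident-free test miles only demonstrates, at 95% confidence, a failure rate somewhere below one accident per ~330,000 miles. Given that human drivers in the US manage roughly one fatal crash per hundred million miles, you'd need hundreds of millions of failure-free test miles to demonstrate parity on that measure by usage statistics alone.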
Personally speaking, I find the industry's statistical argument for what a "safe" self-driving car would be somewhat distasteful and implausible. Saying that it's as safe as the "average driver" is nuts: by definition, roughly half of all drivers are better than average, so many passengers would, statistically speaking, be worse off in the machine than driving themselves. Terrific. The trouble is that the people who will decide what's allowed or made compulsory aren't used to thinking 'personal'; they look at nationwide or insurance statistics, and see profit in reduced costs.
Fortunately the State of California has published Google's test results, and they don't make for encouraging reading from Google's, or any other self-driver's, point of view. Google's data, if squinted at only slightly, implied an accident roughly every 1,500 miles had their cars been fully autonomous and unsupervised. Not a very good statistic in favour of approving full autonomy.
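For scale: commonly quoted insurance and police figures put the average human somewhere around one accident per one-to-five-hundred-thousand miles, depending on how minor a scrape you count. One per 1,500 miles is a couple of orders of magnitude worse than even the least flattering of those human figures.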