From my experience working on one "robotic" brain surgery product (which has been operating successfully for years), I can say that between development and deployment stands a rightfully bewildering set of compliance mountains to overcome. Getting FDA approval for these machines is not easy, and I am thankful for the huge pressure these organizations exert on the development process.
I predict that getting certification will become harder and even more technical, possibly requiring machine validation of the code (formal proofs, etc.). Presently, most human-safety software regulation focuses on the development process, not on the software itself (the 'how', not the 'what').
My concern is that the areas newly being automated, such as IoT devices and robotics entering new domains, will be lightly regulated, with overwhelming financial pressure on governments to permit them or to adopt 'light touch' standards. Self-driving cars are one such area. As an old-hand programmer I can't believe for a second that these machines are anywhere near safe, for a long list of reasons, and I have been shocked at the public's and governments' attitude of permitting software engineers to give their shitty code near-full control of cars and trucks. It's like they are living in an alternative reality, definitely drinking far too much Kool-Aid. After 20 years of experience, neither Google nor Microsoft can secure a browser. What hope is there of developing software safe enough to autonomously drive a car?