Wikipedia - Asimov's Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Rule 1: The robots (cars) should never be in such a position in the first place. They should have enough forward awareness that the scenario can always be avoided; for example, if there is very little forward visibility, the car should slow down so that it is always in a position to avoid whatever might appear (see the sketch after these rules).
Rule 2: The human being in the car would be screaming orders to save their own skin rather than that of the oncoming party, and the robot (car) would have to follow those orders.
Rule 3: In our driverless-car scenario, the Third Law has no real bearing, so we can set this ethic aside.
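To make the Rule 1 idea concrete, here is a minimal Python sketch of "never outdrive your sensors": the car's speed is capped so that the distance covered during a reaction delay plus the braking distance always fits inside the clear distance it can currently perceive. The function name, the reaction time, and the deceleration figure are all illustrative assumptions, not part of any real driverless-car stack.

import math

def max_safe_speed(clear_distance_m: float,
                   reaction_time_s: float = 0.5,
                   max_decel_mps2: float = 6.0) -> float:
    """Highest speed (m/s) at which the car can still stop within the
    perceived clear distance.

    Solves v * t_r + v^2 / (2 * a) <= d for v: the distance travelled during
    the reaction delay plus the braking distance must fit inside the sensed
    gap. The default parameter values are illustrative assumptions only.
    """
    a = max_decel_mps2
    t = reaction_time_s
    d = max(clear_distance_m, 0.0)
    # Positive root of (1 / (2a)) * v^2 + t * v - d = 0
    return -a * t + math.sqrt((a * t) ** 2 + 2.0 * a * d)

if __name__ == "__main__":
    # A blind corner with only 20 m of visible road ahead caps the car at
    # roughly 12.8 m/s (~46 km/h); an open 100 m view allows ~31.8 m/s.
    for gap in (5.0, 20.0, 100.0):
        print(f"clear distance {gap:5.1f} m -> max speed {max_safe_speed(gap):5.1f} m/s")

The point is simply that Rule 1 can be expressed as a standing constraint on speed rather than as a last-moment decision.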
In the event of a crash, the blame would probably come down to the last command screamed out by a human.
As much as we would like to believe that everything can be coded into a well-written algorithm, the final decision one makes in order to "survive" will always remain instinctive. I do not believe that "instinct" can be written as a series of rules, because we, as mere humans, do not fully understand what it is.