Those are valid concerns. For the emergency scenario, I have to wonder whether it's possible to never trust another car to the point where its message can induce a worse outcome than an independent estimate would produce. For instance, if OtherCar says "Don't worry, I'll swerve into the ditch, just keep driving" and both MyCar and OtherCar keep driving, we both die. So maybe MyCar says "he said he'd swerve, but I'm going to hard brake instead of driving," because hitting a ditch at speed x is roughly comparable to being hit while stopped at speed x, and that's the worst-case outcome for braking, whereas both cars driving at each other at speed x means a closing speed of 2x, which is worse.
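Here's a rough sketch of that intuition as a worst-case (minimax) decision rule. Everything in it is made up for illustration: the actions, the scenarios, and the harm scores are hypothetical, not anyone's real cost model.

    # A minimal sketch of "use the message, but hedge against it being false."
    # MyCar picks the action whose WORST case across the other car's possible
    # behaviors is best, instead of trusting OtherCar's stated intent outright.
    # All names and numbers are hypothetical.

    HARM = {
        # (my_action, other_action): harm to MyCar (higher is worse)
        ("keep_driving", "swerves"):       1,   # OtherCar honored its promise
        ("keep_driving", "keeps_driving"): 10,  # head-on at closing speed ~2x
        ("hard_brake",   "swerves"):       2,   # stopped, OtherCar in the ditch
        ("hard_brake",   "keeps_driving"): 5,   # hit while stopped at speed x
    }

    def worst_case(my_action, other_actions):
        """Worst harm over everything the other car might actually do."""
        return max(HARM[(my_action, other)] for other in other_actions)

    def choose_action(my_actions, other_actions):
        """Minimax: minimize worst-case harm, ignoring OtherCar's claim."""
        return min(my_actions, key=lambda a: worst_case(a, other_actions))

    # OtherCar's promise doesn't shrink the set of things it might really do.
    other_possible = ["swerves", "keeps_driving"]
    print(choose_action(["keep_driving", "hard_brake"], other_possible))
    # -> "hard_brake": its worst case (5) beats keep_driving's worst case (10)

The point of the toy model is that OtherCar's message never gets to prune the scenario set, so no lie can push MyCar into an outcome worse than its own worst-case estimate.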
Not as tight an example as yours, I'm afraid, but humans make cooperative decisions all the time where we use information while remaining suspicious of it and hedging against lies. I think any realistic cooperative tech system with untrusted components needs to stay skeptical as well.
I completely understand what you mean and agree it's a valid concern.
I'm probably super naïve, but I think there's a way of digitally signing hardware components so that we can trust sensor data from other cars.
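As a sketch of the primitive I have in mind (assuming the pyca/cryptography package; the key provisioning, attestation chain, and message format are all hand-waved), something like Ed25519 signing would at least let a receiver check that a reading came from a particular key, though it can't prove the sensor itself isn't lying or compromised:

    # Minimal sign-and-verify for a sensor reading, using pyca/cryptography.
    # Only the cryptographic primitive is shown; everything around it
    # (secure element, certificate chain, payload schema) is assumed.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    # In reality the private key would live in the sensor's secure element,
    # provisioned at manufacture; here we just generate one.
    sensor_key = Ed25519PrivateKey.generate()
    public_key = sensor_key.public_key()  # distributed via a cert chain, in theory

    reading = b'{"speed_mps": 27.5, "heading_deg": 88.0}'  # hypothetical payload
    signature = sensor_key.sign(reading)

    # The receiving car verifies before fusing the data into its own estimate.
    try:
        public_key.verify(signature, reading)
        print("reading verified, ok to use")
    except InvalidSignature:
        print("tampered or unsigned reading, discard")

Even with that in place, a valid signature only authenticates the source, which is why the skeptical worst-case reasoning upthread still seems necessary on top of it.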