Because it's meant to give the _appearance_ or _perception_ that a celebrity is involved. Their actions demonstrate that they were highly interested and expected the partnership to work out, with the express purpose of using the celebrity's identity for their own commercial ends.
If they had just screened a bunch of voice actors and chosen the same one, no one would care (legally or otherwise).
What OpenAI did here is beyond the pale. This is open and shut for me based on the actions surrounding the voice training.
I think a lot of people are wondering about a situation (which clearly doesn’t apply here) in which someone was falsely accused of impersonation based on an accidental similarity. I have more sympathy for that.
But that’s giving OpenAI far more than just the benefit of the doubt: there is no doubt in this case.
> Sounds like one of those situations you'd have to prove intent.
The discovery process may help establish intent - especially any internal communications before and after the two(!) failed attempts to get her sign-off, as well as any notes shared with the people responsible for casting.
Not necessarily: because this would be a civil matter, the burden of proof is a preponderance of the evidence. It's glaringly obvious that this voice is emulating the movie _Her_, and I suspect it wouldn't be hard to convince a jury.
I am guessing it's because you are trying to sell the voice as "that" actor's voice. I suppose if the other voice became popular in its own right (the voice actor becoming a celebrity themselves), then there would be a case to be made.