Lawyers don't exist to decide who wins a case. Much of the job is making the other side show their work: prove their evidence is valid, that it was collected legally, and so on. An AI might be able to find flaws in the work of a sloppy human who wasn't expecting opposition, but will it know when to stop grasping at increasingly inane straws? If it develops a reputation for subtly misrepresenting certain laws whenever it's pressed for an answer after the easy options have run out, wouldn't it get kicked out of the courtroom for wasting the legal system's time, unless you keep a team of programmers on standby writing output filters to stop it from repeating its mistakes? Human lawyers, I think, have to fix their behaviour or lose their licenses when they're caught repeatedly making mistakes, acting in bad faith, or misleading the court, and current AI paradigms have immense trouble learning from datasets that small.