> If you are saying you couldn't do that with ChatGPT (law version, coming soon!) then I'd say it is in fact _you_, sir, who is out of touch with reality, and who clearly has not used ChatGPT in any serious capacity whatsoever. Good day to you, sir!
The fact that either way a human lawyer still gets involved in reviewing the output already tells me that not even you can fully trust ChatGPT, or any other so-called AI, not to hallucinate its answers. It is also highly likely to output drafts that are legally unsafe, and when it does, it can't explain its own output transparently, which is my point.
This hype around LLMs clearly hasn't aged well against the reality of people being unable to trust the output or use it for anything serious other than as a sophistry generator.
Skepticism is valid. Caution, yes. But wholesale avoidance and dismissal of any utility? Bah humbug to you! You know we can definitely use this. How much does a paralegal cost these days? With ChatGPT you get an instant, rotating, 24/7 paralegal. Does a partner (or senior associate) look over the paralegal's work to ensure the paralegal is not just "hallucinating its answers"? OF-FUCKING-COURSE they do. So there's no fucking difference. You've got to get that through your head, man. You can use this stuff, and it's good.
The legal-safety thing is why you have to get a lawyer involved, but the same is true of any junior's work. I mean, this thing is a fucking junior, it's not a genius; but the fact that it's a junior at almost everything makes it a kind of genius, and one that you can use. So that's the fucking hype, man! If you're misrepresenting the hype, or not aware of it, or you just want to dismiss it, well, bah humbug to you, because you're missing out. I hope you give it a try, because it's wonderful. It's not a panacea, though, and I do think we need caution, just not the caution you're pushing; we probably need less of that and more of the caution of, "well, how the fuck is this thing going to bite us in the ass, you know, 18 months later, and what are the second-order effects of how this is gonna upend society?"
The difference between what you're suggesting and what I'm saying is not just that I apply very high skepticism to my general point of never trusting its output (hence why I said ChatGPT still needs another expert human to triple-check and review that output). It's that the thing is easily tricked into hallucinating garbage and confidently passing off atrocious output as advice, whether medical, legal, or financial, and that no one can even begin to understand or explain why it produced that output, since it is a black box.
Looking at the limitations, it is clear it is only great for generating nonsense. It cannot be used for anything requiring trustworthiness, as mentioned above for the highly regulated industries, and now even in search engines.
Either way, the technology in ChatGPT cannot currently replace qualified human professionals. In fact, those professionals will be the ones reviewing ChatGPT's output before using it, since its output cannot be trusted and they can spot it bullshitting right in front of them, rather than someone unqualified using it as a replacement 'lawyer', 'doctor', or 'financial advisor'.