If suing became an ultra-easy one-click activity, I can see two things coming: 1. maybe that would force us to shape up our tort laws a little so that not everything under the sun was actionable, and 2. maybe people and companies would stop misbehaving so much because they knew there was no longer a burden to suing them. A combination of 1 and 2 sounds like a great outcome.
I’ve been unlucky enough to have good reason to sue a few times in the last few years.
The courts are essentially inaccessible to 99% of the population. You can go to small claims court where the limit for damages is $12,500 in California (less than a half year’s rent around here), or you can hire a lawyer and pay $50-100K minimum before the trial even starts (on both sides).
The upshot is that I’m out around $100K (spread across a few different incidents) with absolutely no legal recourse.
Anyway, more access to the courts (and faster/more painful rejection of nuisance suits) would go a long way to fixing our legal system.
It would also be good if private individuals could directly press criminal charges.
> or you can hire a lawyer and pay $50-100K minimum before the trial even starts
What are those lawyers doing that's worth so much, and that one couldn't do oneself, given the time and mental capacity for it?
I don’t know about courts (though I managed my way through a simple divorce case without any need for a lawyer), but e.g. immigration attorneys are typically drastically overpriced for the services they provide - and I’ve dealt with six immigration events (two people, two countries, four events obtaining new residencies and two renewals, all six sharing a lot but being different situations under different clauses - lottery, income, marriage, family reunion) without any significant issues. Contrary to what every single lawyer implied, it wasn’t rocket science - just stuff to learn, forms to fill out, and protocols to follow. And this makes me wonder if the legal system is actually as inaccessible and as risky to a layman as lawyers picture it…
This is like stating, "What are software engineers doing that's worth so much? There's plenty of free code courses online."
It reminds me of the old story of the plumber called to a house that's leaking water out of a pipe. The plumber looks around, finds one valve, gives it a half turn, and then writes a bill for $100. The homeowner is outraged that he's charging so much for just a couple of minutes, and the plumber responds, "You aren't paying me to turn a valve, you are paying me to know which valve to turn."
Sure, you could learn the law and represent yourself, but you can't expect your results to be as good as those of someone who practices law. It's a knowledge field, and experience matters.
> This is like stating, "What are software engineers doing that's worth so much? There's plenty of free code courses online."
Well, that’s not a perfectly accurate comparison. When adjusted for the nuances, it’s much less clear what’s best. If a quick course is all you really need to get something done, there are no maintenance concerns or the like (so you don’t care if something is merely acceptable rather than up to the best standards - as I understand it, without any research, the case is effectively a one-off thing), and the professional services are notoriously costly, it makes me wonder why. In such a scenario the DIY approach looks very compelling to me.
Because I’ve heard the same thing about immigration and it turned out to be false. My current understanding is that there are a lot of immigration cases that may need a lawyer but a lot more where it’s a total waste of money.
Of course I can be wrong. There are always nuances and differences. That’s why I’ve asked what makes it so costly.
Seems reasonable to add an initial cost burden to the filer then, to pay for the review, especially if they are filing a lot of suits. Say the first 10 suits per year are free, but then you need to get on the premium plan :D
If the goal is to disincentivize all people from making frivolous lawsuits, that cost burden sharing should probably be proportional to the person’s free resources somehow. A percentage of wealth or income or something like that.
I’d rather have a tractable thought process that can be interrogated decide whether or not I can sue a corporation than an automated black box.
The whole point of law is that it’s humans regulating other humans. Not machines (in this case, built by wealthy humans) regulating humans.
I do think it’s an arms race where in many instances legal filings will be aided by AI, and then the other party will also use AI to synthesize and summarize the filings or do similar analysis. The filings themselves will not actually be written or read by real human beings.
Just keep small claims like that in small claims court. Then auto-Ronald has to send $500/hr general counsel instead of farming it out to a law firm and that makes it impractical for them to sue little people.
1. Only a handful of states prohibit hiring representation for small claims court.
2. McDonald's could make it so that to get inside you have to agree to arbitration of any disputes. That's usually just used by companies to make you use arbitration instead of the courts when you have a complaint, but offhand I can't think of any reason they couldn't require arbitration in the other direction.
I would bet on the opposite, new rules that prevent automation from gumming up the court system. The judiciary is going to be hostile to such approaches.
I imagine lawmakers would be amenable to rules that you can't use GenAI to make legal filings, or that forced arbitration can now use AI to "screen" claims.
The courts are not for the little guy with little claims, nor are they a high volume system.
Any attempt to craft legislation to deal with this that could be construed as helping large corporations would be immediately shot down in our current political environment. But this is going to be a huge problem that costs jobs and harms the economy if something isn't done about it.
> They use an AI tool to comb public information about your company and file hundreds of copyright infringement, IP, and trade secret theft cases. The scale means you can’t just ignore it or settle for a nominal amount.
The "scale" itself is the problem. Because companies are so huge, and their reach is so huge, it invites techniques that increase the efficiency of attacks. Human beings weren't meant to handle things at such scales, and that is part of the reason we have the problem of AI in the first place.
If we lived in smaller, more self-sufficient communities, then we would not have scale and the people in such communities would not have much desire to develop AI either. AI is the natural reaction of a large populace who look for a technological solution to the immense chaos of information.
This is an unnecessarily complicated rationalization.
The only reason large companies attract these attacks is because large companies have large bank accounts.
That’s it. That’s the reason. There’s no need to theorize about small communities or humans developing AI in reaction to something.
Scammers see a tool, an opportunity for exploitation, and a target with something they want (money). They leverage the tool to try to extract what they want from the most likely target.
That’s what’s happening here. There isn’t some deeper philosophical explanation. Scammers just want money and they’ll use any tool they can in order to get it.
> If we lived in smaller, more self-sufficient communities, then we would not have scale and the people in such communities would not have much desire to develop AI either.
Yes, because they’d be too busy worrying about food and disease to bother with much else. Agrarian societies weren’t fun.
I suppose you've lived in one? Or are you just fearmongering? Because agrarian societies certainly weren't always worrying about food and disease. What about the Amish? They certainly manage decently without too much technology.
Humans were also not meant to handle corporations of a scale that could engage in things like IP theft, data theft, wage fixing, and mass psychology, with impunity. It strikes me as a cat and mouse game. Not that I think either side will make things better for the rest of us.
> If we lived in smaller, more self-sufficient communities, then we would not have scale and the people in such communities would not have much desire to develop AI either.
You could say AI is the child of the internet, as it was the internet that cooked up the massive trillion-token datasets. Without the internet, no AI.
But if you look closely, the internet resembles AI - you can search for images instead of generating them, and there are billions to choose from. You can search for information, and you can chat on social networks instead of using an LLM. It's the same as AI but even better: human-made. An HGI made of humans and networks.
This attack works on a single person or a small business that produces a lot of public works. Any prolific author or artist is exposed to this. Any marketer, etc.
Your thesis doesn’t make sense about there being no desire for AI in small communities. Small communities absolutely want automation, both in the physical and information world.
I mean, sure, if we still lived as small groups of hunter-gatherers, many of today's problems would not exist. But many of yesterday's problems could never have been solved.
It isn't a dichotomy. But making a dichotomy is a thing technophiles do a lot, and I'm not sure why. It's clear that we can go back to smaller communities and keep some of the newer ways. I think technophiles irrationally put everything that isn't "advanced tech society" into the box of hunter-gatherer because they can't psychologically cope with the idea that there is an alternative to the unsustainable technological world they have become emotionally attached to.
This is actually insane. 120,000 comments! To a certain extent, if our law is already so complicated that you need to hire a lawyer to understand it, that is itself a fundamental problem.
Simplify the rules, make it easier to understand and reason about. Computers should be able to determine whether someone is breaking a law, not be stuck checking whether the law itself is bad.
We should be using computing power so that someone can ask: is this legal? Can I do this? That's the true value to society.
This kind of comes back to the common law vs. civil law distinction. Most English-speaking countries operate under a common law system where the laws as written provide the groundwork, but precedent set by previous court cases is also legally binding. In contrast, most of Europe and South America operates under a civil law system (yes, terrible name, not the opposite of criminal law) where the written law reigns supreme and previous court decisions are merely informing opinions.
As you can imagine, algorithmic decisions are incredibly difficult in any common law system. And while they might be viable in a civil law system, you would lose out on the ability of a judge to give consideration to the specific circumstances of each case.
In practice, both systems end up unwieldy in the attempt to be fair: common law systems because of the overwhelming amount of precedent to consider, civil law systems because the laws become incredibly long and complex, with complex interactions between laws.
How do they achieve consistency in civil law systems?
In the US, if, say, a district court in California and a district court in Oregon adopt incompatible interpretations of a federal law, someone will appeal to the appeals court that covers both California and Oregon; that court will interpret the law, and its interpretation is then binding in all the states under that appeals court (Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon, and Washington).
If some other appeals court in some other region goes a different way, it can go to the Supreme Court which makes an interpretation for the whole country.
We eventually reach consistency even if Congress is unwilling or unable to revisit the law.
It seems like the best use of AI / computing power would be to do common law through an LLM? I understand it's nuanced and complicated, but that seems like a good use case for our current AI systems? What am I missing?
Court isn’t about understanding if action X violates law Y. Analyzing that part is quick in preliminary work. All of the hard work is proving that the defendant performed X, proving they had intent and it wasn’t an accident, etc.
Most laws have a lot of nuances and edge cases. That's why the words and the language are so important. The more edge cases are discovered, the more complicated the text becomes.
That’s not really true, though. While legalese creates complexity, a substantial part of judicial rulings is figuring out which parts of it should be ignored because they’re bullshit. A lot of law is driven by arguments about how things ought to be, given a loose framework of law and precedent.
> Simplify the rules, make it easier to understand and reason about
This happens every few decades... A legislature wipes the slate clean and starts fresh with a new, simpler set of laws.
Then it spends the next few years discovering why the old set of laws was so complicated, as it gradually reintroduces laws to deal with the edge cases, loopholes, etc. that the new laws created. And then you end up with a complicated set of laws again.
The people who get hurt when you "simplify" the legal code aren't the corporations (they have the money to get good legal advice), nor the criminals (they don't particularly care about following the law in the first place). It's the common people who get hurt when the law is simplified, because most people are fundamentally law-abiding, and the law is complicated in order to deal with all the people who are not.
I don't know a lot, but I do know that the legal profession will change itself as required to protect itself. Which includes protecting itself from being completely intolerable to the wider populace. This is a law of the Universe on par with the "death and taxes" adage. What this may look like is measures that will make legal action, and even public action on par with "comments", far less accessible. At least to robots. Though, I can't predict how that will look in terms of detail.
Your two statements logically conflict. I understand the article; your first line is not what it says. See the "Legal Cost Collapse" subhead. In addition, logically, LLMs will increasingly allow individuals to free themselves from the need for attorneys. And again, see the Cyber Risk subhead for issues implying an intolerable state of affairs for the wider public. In that light, as well as other context, your second line will be a major part of the subtext for why the legal environment adapts away from LLM access.
> logically, LLMs will increasingly allow individuals to free themselves from the need for attorneys.
No. Not a chance. If you sue me with LLM-generated filings, I’m going to be paying a lawyer to mercilessly win, using LLMs as needed (perhaps using LLMs to test various responses to your filings that cause your LLM to spit out something that is terrible for you but you’d have no idea since you aren’t using a lawyer).
Legal access and the viability of the human-staffed legal profession aren't just about litigation. Moreover, your and everyone else's respective ability to continue to counter lawsuits is severely limited by how much cash you and they have. Change is on the way, even if it mostly ends with radical defensive restructuring on the part of the law and legal profession.
You should sometimes review the Indian judicial system and its judgments. I have never seen a more self-congratulatory, smug, and confident yet incompetent group of people in my life. All hiding under some pretense of intellectual superiority that is enabled by a corrupt collegium system that makes it difficult to disrupt.
It's essentially run by a handful of families and an inner circle. Just another shadow-government-type entity.
The Bangladesh high court ruling that reinstated the quota system and quite likely was the catalyst for the subsequent overthrow of the government was completely devoid of legal standards: https://www.supremecourt.gov.bd/resources/documents/2110398_...
The high court found a constitutional right to the quota even though it wasn’t in the text of the constitution. Textualism could have saved the government.
Generative AI can't even handle complex programming tasks, and those have discrete and measurable outcomes.
Generative AI won't be able to handle the law. That will require actual AI (meaning, a system that is capable of understanding, not just one that predicts which word should follow another as Gen AI does now).
I often read comments where people doubt there will be much work left after AGI. I think on the contrary, it will keep us busy, even busier than before. The moment the road widens, more cars fill it up to the brim. Will lawyers lose their jobs? Looks like the opposite trend is happening. With each capability unlocked, we got more work not less.
Remember the reminders about the fate of horses after the automobile was invented? How about the fate of transportation jobs? In 1910, approximately 13% of the workforce (about 6.7 million people) were involved in transportation-related employment. By 2023, this percentage had decreased to 10.3%, but the absolute number grew to around 21.3 million people due to population growth.
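Taking those figures at face value, the back-of-envelope arithmetic is easy to check. Here's a tiny Python sketch (the inputs are just the estimates above, not authoritative data) showing the share shrinking while the absolute count roughly triples:

```python
# Back-of-envelope check of the transportation-jobs figures quoted above.
# Inputs are the comment's estimates, not authoritative data.
share_1910, workers_1910_m = 0.13, 6.7     # ~13% of the workforce, ~6.7M people
share_2023, workers_2023_m = 0.103, 21.3   # ~10.3% of the workforce, ~21.3M people

total_1910_m = workers_1910_m / share_1910   # implied total workforce, ~51.5M
total_2023_m = workers_2023_m / share_2023   # implied total workforce, ~206.8M

print(f"Implied total workforce: {total_1910_m:.1f}M -> {total_2023_m:.1f}M")
print(f"Transportation jobs grew {workers_2023_m / workers_1910_m:.1f}x "
      f"while their share fell from {share_1910:.1%} to {share_2023:.1%}")
```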
Desire is like fire: feeding it only makes it grow, and it consumes, leaving ash.
One of the primary points of human life is learning the undesirability of desire itself.
- - - -
Jevons paradox, on the other hand, isn't metaphysical. When you make the road wider more people drive on it, but widening the road didn't conjure up those people and their cars. The road was too narrow. If you kept adding lanes to the freeway eventually everyone who needed or wanted to drive on it would have enough space.
Now metaphorically speaking we're considering a situation where widening the freeway is so cheap that we can make the freeway as wide as we like, and we're making free self-driving cars, and people can send the cars on the freeway to do anything at all, even really trivial things, and we imagine endless freeway and endless cars until the world is covered in freeways. Something like that?
I guess we do have to solve the problem of human nature and our tendency to flagrantly waste our time and resources on dumb stuff, eh? How does AI help us do that?
I share this view too, especially about law. There is going to be a huge increase in demand for white-collar work, but the problem we will face is that the skills required to supply this demand will go up, and there will in turn be demand for technology to lower the skill barrier to do the jobs. I think the paper-to-CAD transition is a perfect example.
We started with more demand for computer skills to increase manufacturing efficiency, and now there is increasingly cheaper and better software to lower the barrier to learning CAD.
I envision a similar transition for law and justice.
Reading between the lines, it seems the legal profession depends on strong barriers to entry to justify its high prices, and this threatens lawyers more than it does the companies.
Here's a story from my divorce 25 years ago. We were having an easy divorce, and so we used Nolo press to get it done. But it was difficult to know what forms actually had to be filed in our case.
So we went to the clerk, and said, out of all these forms, which ones do we have to file?
And the clerk said that they couldn't tell us because that would be offering legal advice. The lawyers had gotten laws passed preventing the clerk from telling us what forms we needed to give the clerk. Nice.
After a couple of minutes of thought, we decided to just fill them all out. So we filled out everything and handed it to the clerk.
The clerk then proceeded to cherry pick the forms they needed, and handed the rest back to us. ("My sole regret is that I have but one face to palm for my country.")
I think in the last 25 years, it's probably only gotten a lot worse.
The law does not actually prohibit the clerk from telling you what forms you need to give him to finalize an uncontested divorce in any state in this country. If it was against the law, the Nolo books you used wouldn't exist.
The clerk wasn't going to tell you which forms you needed to give him, because if the divorce had been messed up by you giving him the wrong forms on his advice, he could have been sued for the financial losses you and your ex-spouse incurred.
The barriers essentially amount to 'we know better', which is weird, as I feel it's subjectivity and bias that enable them. There is a good case for making 90% of court cases as objective as possible by rewriting the law from scratch, but that can never happen in this world.
Wouldn't this make legal counter-action cheap too? The robots fight each other with a human referee (for now), and the winner gets the bounty.
On a more serious note, though, people not seeking justice because complexity leads to high costs is the real issue IMHO. Maybe the idea is that this pushes people into finding a middle ground, but it's also a known barrier to real justice.
So some years ago, when Turkey wasn't as totalitarian as it is today but was on the way to becoming so, it started having problems with the European Court of Human Rights. The cases began to pile up.
The EU proposed: create a way for people to access the Constitutional Court of Turkey, so that most of the issues might be resolved before they come to us.
Turkey's proposal: why don't you introduce a considerable application fee, so the number of cases drops dramatically because only a few can afford it?
So yeah, that was the Turkish style. The EU way prevailed, but this time Turkey dropped its bid to join the EU and simply started ignoring its own Constitutional Court's decisions.
I think the moral of the story is that it doesn't matter that much, because people will end up doing it their own way.
The laws as written only work because they're not applied perfectly and to every offence.
My standard examples of this are that there's around three times as many heroin users in the UK as that country's total prison population, and that perfect enforcement of road traffic regulations would mean the only people allowed to drive would be people like me who don't have a car.
I hope that things like this result in massive liberalisation of legal systems worldwide; the alternative, de facto not de jure*, is one law for the rich and another for the poor.
* "In its majestic equality, the law forbids rich and poor alike to sleep under bridges, beg in the streets and steal loaves of bread." - https://en.m.wikiquote.org/wiki/Anatole_France
It sounds like it's about to get symmetric, doesn't it? More GPUs are unlikely to yield better outcomes once some threshold is reached, just as more and better-designed restaurant discovery apps won't make you discover better restaurants in your area if they don't exist.
It's not clear how this is relevant to the article, but I suppose people using GenAI for contracts need to be cautious, since they will be bound by whatever the AI comes up with.
The whole article was ridiculous puffery for the application of AI in the legal field, written by… people who would benefit from investment in application of AI in the legal field.
The beginning of the article is supposed to hook you in and provide some plausible basis for what follows. Yet they chose to go with AI generating comments on proposed regulations. I’m not going to spend any time going into why that’s ridiculous, so feel free to write me off as a nutter or whatever, but the idea that submitting a comment to a regulatory agency during a public comment period will suddenly transform law and legal practice is about the weakest possible case to make. Public comments aren’t legal materials; they’re comments from the public. And I’ll bet you a year’s worth of salary that as people start flooding regulatory agencies with genAI-produced public comments, a rule will appear (whether through regulation or through judicial interpretation) that agencies need not respond to, nor even consider, public comments submitted without some kind of declaration that they were not produced using genAI.
This looks like yet another example of the thing where LLMs disrupt processes which are designed to limit engagement through requiring tedious long-form writing using jargon that is not available to most people.
On the one hand this is great - real “democratization” of how society works.
But it’s going to break a whole lot of things in the short term while these processes are redesigned for this new world. And the fixes will likely be to come up with new ways of limiting the number of people who can engage.
Legal filings don't require special jargon. Most lawyers just use jargon because they know exactly what the jargon means and how a judge will interpret it. In a lot of cases, they use the jargon simply because it's what they learned in law school from professors who learned it from professors who learned it from professors who learned it when the jargon was still required.
A layperson can file their own lawsuit, and as long as they satisfy the requirements of the law outlining the necessary elements of the lawsuit, they don't need any jargon or special language. (Formatting is a court-imposed requirement; that does not come from the lawyers.)
Access to the courts is limited by imposing filing fees. An LLM like this won't change that; if anything the most likely response is to increase filing fees.
This makes little sense. Commenting on a proposed rule and filing a lawsuit are two entirely different tasks and two entirely different processes. That people were able to submit more comments and do so in an easier manner says nothing about broader "legal risk" to corporations.
And as you can see, the authors are co-founders of some related startup and this article is nothing more than a weak pitch.
Higher-volume legal actions can only be successful as "peer-to-peer". If it goes through a courtroom, there is no way the courts can handle this kind of volume (even with AI tools). Imagine that the CEO is informed there are 20 new legal actions and the first court hearing will start 10-20 years from now, when the CEO will no longer be part of the company.
The summary seems to be that you can't obfuscate with words like you used to, and normal bureaucratic hurdles are less effective against LLM generators?
Depending on how badly these things get the summaries wrong, this could certainly help people trying to understand what their government is doing. I'd love to be able to download the contents of a bill from the Federal Register and get an accurate summary of all of the things it changes and how.
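A rough sketch of what I have in mind, assuming the Federal Register's public JSON API (the /api/v1/documents/&lt;number&gt;.json endpoint) and leaving the summarization step as a stub for whatever model you'd trust; the document number and field handling here are illustrative, not a tested integration:

```python
# Sketch: pull one document from the Federal Register API and hand its text to
# a summarizer. Endpoint and field names reflect my reading of the public API
# docs; summarize() is a placeholder for whatever LLM you'd actually use.
import requests

FR_API = "https://www.federalregister.gov/api/v1/documents/{doc_number}.json"

def fetch_document(doc_number: str) -> dict:
    resp = requests.get(FR_API.format(doc_number=doc_number), timeout=30)
    resp.raise_for_status()
    return resp.json()

def summarize(text: str) -> str:
    # Placeholder: call your model of choice with a prompt like
    # "List every change this rule makes and who it affects."
    raise NotImplementedError

if __name__ == "__main__":
    doc = fetch_document("2024-12345")      # hypothetical document number
    print(doc.get("title", ""))
    print(doc.get("body_html_url", ""))     # full text lives at a separate URL
    print(summarize(doc.get("abstract") or ""))
```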
Compute is going to be king in the new world; if you have enough GPUs you can spam legal challenges. If you have more GPUs you can deploy those to take the other side and defend.
As with DoS, the interesting cases will be where an asymmetry exists in cost of request vs response (or amplification is possible).
In the short term it seems likely that the government will be on the losing side of this exchange.
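To make the DoS analogy concrete, here's a toy calculation; every dollar figure is an invented assumption, purely to illustrate the amplification idea:

```python
# Toy illustration of the request/response cost asymmetry, DoS-style.
# All dollar figures are invented for illustration only.
cost_to_file = 2.00          # assumed: GPU time + filing overhead per AI-generated claim
cost_to_respond = 10_000.00  # assumed: lawyer hours to review and answer one claim

amplification = cost_to_respond / cost_to_file
print(f"Each $1 spent filing forces ~${amplification:,.0f} of defense spending")

claims = 500                 # a batch of automated filings
print(f"{claims} claims: ${claims * cost_to_file:,.0f} to file "
      f"vs ${claims * cost_to_respond:,.0f} to answer")
```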
You need a lot more cash flow than just what it takes to run the GPUs if you file a lot of cases with the courts in some places - and that is way before hypothetically winning anything that might help recover those costs.
Lawsuits may become automatic: when you leave a store unhappy, your AI might have the ability to create a lawsuit, the store's lawsuit API will be called, and the two AIs will negotiate their way to a $15 deposit in your account by the time you get home. And it will all be because a certain hormone spiked in your head.
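Half joking, but the negotiation loop itself is trivial to sketch. Here's a toy alternating-offers exchange; every name and number is invented, and no real lawsuit API is being described:

```python
# Toy alternating-offers negotiation between a "customer" agent and a
# "store" agent. Everything here is a made-up illustration.

def negotiate(customer_demand: float, store_offer: float,
              concession: float = 0.5, max_rounds: int = 10,
              tolerance: float = 1.0):
    """Each round both sides give up part of the remaining gap between them;
    settle at the midpoint once the gap is within tolerance."""
    for _ in range(max_rounds):
        gap = customer_demand - store_offer
        if gap <= tolerance:
            return round((customer_demand + store_offer) / 2, 2)
        customer_demand -= gap * concession / 2   # customer asks for a bit less
        store_offer += gap * concession / 2       # store offers a bit more
    return None  # no deal; escalate to a human (or a court)

settlement = negotiate(customer_demand=25.0, store_offer=5.0)
print(f"Settled at ${settlement:.2f}" if settlement else "No deal reached")
```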
Right, and if we put everybody in work camps, unemployment would drop. It’s easy to improve a single metric (like “amount of litigation”) if you don’t care about any of the other effects it would have.
Some of the early lawyerly uses of AI have been no bueno.[0] Yet the legal need to produce knowledge from huge amounts of text is such an obvious alignment with LLMs...
I think most everyone would agree that using early LLMs (and personally I'd still consider current LLMs to be early) in legal contexts is ill-advised, at best. Circle back to this question in 5 years and I think the response will be very different.