
1) No, proofreading is not authorship.

2,3,4) A judge could reasonably find that you were attempting to circumvent the law and declare all of these as “bot”.

The judicial system will never spell out, in writing, complete coverage of every loophole. Judges can find you guilty regardless.




Related: The TCPA[0] disallows automated text messages, but using software[1] to help you text individual people more rapidly is legal[2]. A sketch of that distinction follows the links.

[0] https://www.nolo.com/legal-encyclopedia/the-tcpa-protection-...

[1] https://www.hustle.com/

[2] https://abc7news.com/politics/campaigns-are-texting-voters-o...
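
To make that distinction concrete, here is a minimal sketch in Python (hypothetical: the function, message, and phone numbers are invented; real tools like Hustle go through an SMS provider's API). The software may load the contacts and pre-fill the message, but a human action triggers each individual send; removing that one line is what turns the same loop into prohibited automated texting.

    # Hypothetical sketch: the tool may prepare everything, but a human
    # triggers each individual send.
    contacts = ["+15550100", "+15550101"]
    template = "Hi, this is Sam from the campaign. Can we count on your vote?"

    def send_sms(number, body):
        # Stand-in for a real SMS provider call.
        print(f"-> {number}: {body}")

    # Reportedly legal: one deliberate human keypress per recipient.
    for number in contacts:
        input(f"Press Enter to text {number}... ")  # human in the loop
        send_sms(number, template)

    # What the TCPA prohibits: the same loop with the human action removed.
    # for number in contacts:
    #     send_sms(number, template)  # fully automated texting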


> 2,3,4) A judge could reasonably find that you were attempting to circumvent the law and declare all of these as “bot”.

Unless you were disabled and using assistive technology.


> 1) No, proofreading is not authorship.

It is when the substitution is both context-aware and not what you intended to write.

> 2,3,4) A judge could reasonably find that you were attempting to circumvent the law and declare all of these as “bot”.

Wait, so if someone sends you a question and the suggestion engine can infer your answer from the context, you're a bot because you tapped the suggestion instead of typing out words with the same meaning? Then aren't most people texting going to have to declare themselves bots?

> The judicial system will never spell out, in writing, complete coverage of every loophole. Judges can find you guilty regardless.

"Judges will decide something" is no help to you when you're trying to predict what they will decide ahead of time. Finding out after the fact does a fat lot of good after you've already engaged in the behavior in question and an unfavorable ruling puts you in jail.


Assistive texting is covered in a separate reply here: https://news.ycombinator.com/item?id=20360152

The law clearly targets automated content creation that is not declared as such, not assistive writing technologies, and this will be considered by the judicial system when evaluating your stated intentions and actual actions. If you are unable to predict with confidence the outcome of your intentions and actions as they may be interpreted by the judicial system, please seek legal counsel for further guidance.


> The law clearly targets automated content creation that is not declared as such, not assistive writing technologies

How are those two different things? In each case it's a machine generating and suggesting things that you may want to write. Presumably in the second case the suggestions would have to be more sophisticated in order to be coherent most of the time, but that still doesn't give you any useful criteria to distinguish them. We're already at the point where phones have context-aware word suggestions. There isn't really a principled line to draw at the point where the suggestions get good enough to constitute the entire message. It already happens sometimes.
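
For a sense of how little separates them, here is a toy bigram suggester, a hypothetical sketch in Python far cruder than any real phone keyboard's model. Once the human just keeps accepting the top suggestion, the "assistive" tool has composed the entire message:

    # Toy next-word suggester (hypothetical; far simpler than any real
    # phone keyboard's model). Accepting the top suggestion repeatedly
    # composes the entire reply.
    from collections import Counter, defaultdict

    history = ("sounds good see you there . sounds good talk soon . "
               "sounds great see you soon .").split()

    bigrams = defaultdict(Counter)
    for prev, word in zip(history, history[1:]):
        bigrams[prev][word] += 1

    def suggest(prev):
        counts = bigrams.get(prev)
        return counts.most_common(1)[0][0] if counts else None

    # The human types one word; the tool supplies the rest, one tap at a time.
    message = ["sounds"]
    while (nxt := suggest(message[-1])) and nxt != ".":
        message.append(nxt)
    print(" ".join(message))  # -> "sounds good see you there"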


They differ in your intended use of the tool, and in whether the work is judged to be authored by you or by the tool, not in some specific aspect of technology or implementation.

Do you intend to prepare your thoughts as written word, and you use technology to write those thoughts rapidly? Then that’s probably fine.

Do you intend to prepare works written by algorithm, software, or technology, to a degree that the work can no longer reasonably be considered the creative output of a tool-assisted human and is instead the creative output of a human-assisted tool? Then that's probably not fine.

If you want another way to look at this problem, imagine that our society grants algorithms copyright over the works they produce with our assistance, while granting us copyright of the works we produce with the assistance of algorithms, and that the law demands all algorithms be credited (as with CC BY attribution) when their copyrighted works are republished by humans. Copyright law has significant experience studying the problems of entangled and commingled ownership of works, but it's too soon for US society to grant copyright to algorithms over their works, and so this law is all we get today.


You're still not providing any meaningful distinction between the two. How do you actually distinguish between a tool-assisted human and a human-assisted tool? What's the test and where is that written in the legislation?

> If you want another way to look at this problem, imagine that our society grants algorithms copyright over the works they produce with our assistance, while granting us copyright of the works we produce with the assistance of algorithms, and that the law demands all algorithms be credited (as with CC BY attribution) when their copyrighted works are republished by humans.

That's just restating the question, not answering it. And the hairy mess of copyright law is not a very promising thing to aspire to.


> How do you actually distinguish between a tool-assisted human and a human-assisted tool? What's the test and where is that written in the legislation?

That will probably be distinguished by a judge looking at all the facts of the specific case and making a decision. Details like these are the reason there's a justice system with actual humans in it, and not just some software bot calling the shots by following if-then-else statements written into law.
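
For contrast, here is what the if-then-else version would look like, a deliberately naive Python sketch with invented features and thresholds. Any rule keyed on surface features both flags people it shouldn't and clears exactly the bots it was written against:

    # Deliberately naive "law as code": a rigid rule keyed on surface
    # features, with no way to weigh context or intent.
    def is_bot_under_naive_rule(msgs_per_minute, used_suggestions):
        return msgs_per_minute > 20 and used_suggestions

    # Disabled user replying quickly via assistive technology: flagged.
    print(is_bot_under_naive_rule(30, True))   # True  (false positive)

    # Slow drip-feed spam bot typing out its own text: cleared.
    print(is_bot_under_naive_rule(5, False))   # False (false negative)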


> How are those two different things?

They are of different colour[0]. Sounds like the law is aiming at that distinction.

Whether a piece of computer-generated content is "automated content" or "assistive writing" might depend entirely on the answer to the question "why was this piece of writing created?".

--

[0] - https://ansuz.sooke.bc.ca/entry/23


The law is not a programming language.

Believing so is a common misconception amongst engineers, but depending on it as such is likely to lead to disappointment, frustration, anger, needless bickering, extended conflict, and vexatiously long, hard-to-read, and mostly unenforceable contracts.


The law should be a programming language. The fact that it's not isn't a feature, it's a bug.


Looking back at half a century of horrendous, bug-ridden code, do you still say this?

I mean, people have tried! Ethereum created a system of contracts implemented in a programming language. Know what it led to? People losing huge amounts of money after someone found a bug in a contract and exploited it. And after that, the money was gone: the hacker had followed the contract as written, so the money was theirs now.
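
That was The DAO, in 2016. The flaw was a reentrancy bug: the contract paid out before updating its books, so the recipient could re-enter the withdrawal while its balance still looked untouched. Here is a toy model of that control flow in Python, since the actual contract was Solidity:

    # Toy Python model of the reentrancy pattern (the real contract was
    # Solidity; only the exploited control flow is modeled here).
    class Contract:
        def __init__(self):
            self.balances = {"attacker": 1, "everyone_else": 99}

        def withdraw(self, caller, on_receive):
            if self.balances[caller] >= 1:
                on_receive()                  # pays out BEFORE updating state
                self.balances[caller] -= 1    # too late: already re-entered

    contract = Contract()
    paid_out = 0

    def attacker_callback():
        global paid_out
        paid_out += 1
        if paid_out < 100:                    # re-enter while the balance
            contract.withdraw("attacker", attacker_callback)  # check passes

    contract.withdraw("attacker", attacker_callback)
    print(paid_out)  # 100 payouts against a balance of 1

The code did exactly what it said; what it said just wasn't what anyone meant.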


Yup. And it led to the split of Ethereum into two chains: one followed by people who believed that code should be law, and the other by people who believed code should be law only when it works in their favour.


Which can be considered a concession of failure for the idea of "law-as-code", because apparently when the going gets tough, that concept has to fall back on good old "law-by-humans" in order to stay relevant and accepted by people. For a system that should never need a fallback, that spells fundamental defeat.


We create these systems to be of use to humans. When they aren't of use to humans, that's a bug, and it's viewed as something to fix, so we convene humans to provide a fix, whether for a specific case or for the system overall.

Ultimately, the only ways that situation doesn't play out are if the system is designed perfectly, not just for current use but for all future uses, or if humans are removed entirely from the equation. Since the former is impossible, and the latter means the system is either irrelevant or we're all dead and gone, we might as well accept human intervention as inevitable.


It really shouldn't. It will never be able to accommodate every possible case. There is a reason why stories about drones going around enforcing the law like a programming language are considered a dystopian setting.


This is exactly the same as legalizing every loophole and abuse of wording.

Above all, if the law were code, who would decide the input? Unless every conversation and record is already in the Law-Bot, huge power is handed to whoever controls the "formatting" of the evidence.


You could pick apart basically the entire body of US law like this if you wanted to. As other people have mentioned, the legal system just doesn't work like this: context and intent are taken into account, and judges and lawyers are not mindless robots reading a script.


That's the excuse used by everyone in support of ambiguous legislation. "All we need is a law that says bad people go to jail and judges are smart people who can figure out what that means based on context and intent."

Leaving predictable ambiguities to be resolved by the subjective whims of the judiciary is not the rule of law, and the fact that it regularly happens doesn't change that or make it right.


You are not supposed to inspect laws for loopholes. Many laws are deliberately written that broadly, and treat attempted circumvention as an aggravating factor.


Your complaint seems to boil down to "if the law is so imprecise and open to interpretation, how am I supposed to game the system?". Well, this may be a surprise to some people here, but the law is not intended as a system to be gamed.

Loopholes are just zero-day exploits of the legal system.


It's just the opposite. If the law isn't nailed down, then people with fancy lawyers can argue it means whatever they want, jurisdiction-shop to get it in front of a favorable judge, etc.

Making the law clear and correct is the only way to prevent the ambiguities from being construed in favor of whoever has the most money to spend litigating it.


Cool, you get to defend your bot in court. But don't worry, you'll win!


Or does your bot defend itself in court and you just press the 'submit' button each time it makes a statement?!



