Hacker News

I think AI is here to stay (obviously), but we need a much better permission model for content, whether that's the writing on your blog, your digital art, your open source code, video, audio... all of it.

The current model basically says that as soon as you publish something, others can do with it pretty much as they please under the guise of "fair use", an aggressive ToS, or the like.

I stand by the author: the current model is parasitic. You take the sum of human-produced labor, knowledge and intelligence without permission or compensation, centralize it with technology that only about two companies have or can afford, and then monetize it. Worse, you do so in a way that never attributes or even refers back to the original content.

Half-quitting GitHub will not accomplish anything; instead we need legal reform for this age of AI.

We need training-permission controls, as none of today's licenses were designed with AI in mind. The default should be no permission, with authors able to opt in per account and/or per piece of content. No content platform's ToS should be able to override this permission with a catch-all clause; it has to be truly free consent.
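To make the default-deny idea concrete, here's a minimal sketch of what a machine-readable opt-in signal could look like. Everything here is hypothetical: the "ai-training.txt" name, the `allow:` syntax, and the parsing are invented for illustration; no such standard exists today.

```python
# Hypothetical per-site training-permission file (invented format).
# The key property: default-deny, with explicit per-path opt-in.

from dataclasses import dataclass


@dataclass
class TrainingPolicy:
    # Paths the author has explicitly opted in for AI training.
    allowed_paths: set

    def may_train_on(self, path: str) -> bool:
        # Default-deny: anything not explicitly opted in is off-limits.
        return any(path.startswith(p) for p in self.allowed_paths)


def parse_policy(text: str) -> TrainingPolicy:
    """Parse lines like 'allow: /blog/cc0/'; other lines are ignored."""
    allowed = set()
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("allow:"):
            allowed.add(line.split(":", 1)[1].strip())
    return TrainingPolicy(allowed)


policy = parse_policy("allow: /blog/cc0/\n# everything else: no permission")
print(policy.may_train_on("/blog/cc0/post1"))   # explicitly opted in
print(policy.may_train_on("/art/portfolio"))    # falls back to deny
```

The point of the sketch is the inversion: today's web is effectively "allow unless forbidden", and this flips it to "forbid unless allowed".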

Ideally, we'd also include monetization options, where conditional consent is given in exchange for revenue sharing. I realize this is less practical: there's still no simple internet payment infrastructure, AI companies will likely have enough unpaid content to train on, and it doesn't solve the problem of their deep pockets letting them afford such content, so they keep their centralization advantage. The more likely outcome is that content producers increasingly withdraw into closed, paid platforms, because the open web is just too damn hostile.

I find none of this to be anti-AI, it's pro-human and pro-creator.




An important legislative step would be requiring anyone who creates and publishes an AI model to cite their sources - in this case, a list of all the GitHub repositories and files used, along with their licenses.

Only if that is made mandatory can these lists actually be checked against the licenses.
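If such a manifest were mandatory, the license check itself would be mechanical. A sketch, with everything invented for illustration: the manifest format, the repo names, and the policy of which licenses a publisher claims permit training are all hypothetical.

```python
# Hypothetical disclosed training manifest: each entry names a source
# repository and the license it was published under.
manifest = [
    {"repo": "example/widgets", "license": "MIT"},
    {"repo": "example/kernel-fork", "license": "GPL-3.0"},
    {"repo": "example/blog-export", "license": "UNKNOWN"},
]

# Hypothetical policy: licenses the model publisher claims permit training.
PERMITTED = {"MIT", "BSD-3-Clause", "Apache-2.0"}

# Any entry outside the permitted set gets flagged for review.
violations = [e for e in manifest if e["license"] not in PERMITTED]
for e in violations:
    print(f"flagged: {e['repo']} ({e['license']})")
```

The hard part is not this loop; it's the legal question of whether training even falls under the license in the first place, which is what the next point is about.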

There will also need to be a legal test case to establish whether an AI model can be considered derived from a licensed open source project - and therefore whether it falls under that license.

And finally, we'll likely get updated versions of the various OSS licenses that include a specific statement on e.g. usage within AI / machine learning.


In the age of reposts and generative AI, "attribution" is irrelevant. Nobody cares who originally made some content, and it truly doesn't matter.

>The more likely outcome is that content producers increasingly withdraw into closed paid platforms

Nah. You didn't get paid to write that post, did you? You did it for free. People were perfectly willing to create free content, often high-quality content, sometimes anonymously, even before generative AI.

There's no need for financial incentives anymore. As content creation becomes easier, people will start creating out of intrinsic motivation - to express themselves, to influence others and to inform. It's better that way.

Restricting content so that others can't benefit from it is not pro-human or pro-creator, it's selfish and wasteful. We should get rid of licenses altogether and feed everything humanity creates into a common AI model that is available for use by everyone.



