Is A.I. Art Stealing from Artists? (2023) (newyorker.com)
37 points by teddyh 10 months ago | 55 comments



This post is from 2023, and the lawsuit it mentions is still ongoing; the most recent update (May 2024) says it can go forward: https://www.reuters.com/legal/litigation/stability-ai-midjou...


(2023) Previous discussions (19 + 16 points, 1 year ago, 90 + 16 comments):
https://news.ycombinator.com/item?id=34751031
https://news.ycombinator.com/item?id=34744059


(2023) Why post this now, OP?

Some discussion then: https://news.ycombinator.com/item?id=34751031


I just happened to see it, and I had not seen the previous discussion.


I have to believe the answer will be found to be yes, but unencumbered or appropriately licensed datasets will show up (and are probably already being actively sourced by major players). I have some sympathy for artists whose style can be mimicked by people just name-checking them.


You've never been able to copyright a style though.


No, but a style has also never been able to be automatically extracted and packaged as a product in and of itself.


But if I create a magic box that generates work in the style of a specific artist, based on me feeding that artist's work into the magic box, then sell access to the output of the magic box, notably output that results from a request using the artist's name, that seems to deprive the artist of something, based on my use of the artist's work.

Copyright isn't really the issue; the issue is a company exploiting the work of others. That work is literally the only value of the magic box.


That's how human brains work too. There are people who can mimic styles very well. They need to be "fed" the style first though.


Individuals are allowed to do that by law. Companies cannot do that to create commercial products. They need to obtain relevant permissions / licenses from the creators first.


Yes, but a human brain needs years of training to get the basic skills, and further needs to refine on a specific style, which can take anywhere from days to months or years depending on prior experience. There are also social connotations that come with a human copying another artist's style. AI can do it 10x-100x quicker, and the copier can remain completely anonymous much more easily. This changes the dynamics.


This is true - but such artists can't create a new one every few seconds. A change in scale is a change in kind. Such "democratization" is, in my view, rather demeaning to an artist, who doubtless worked hard to develop a new style, only to have their work digested by the machines and a million imitators pop up almost immediately.


Selling art to pay rent isn't making art. Most of these artists are appropriating existing style elements and composing them; that itself isn't new. Won't someone think of the first-world artist reliant on sweatshop labor to serve their real needs so the artist can master science-fiction comic-book art.

Story mode economic feudalism is the last barnacle of history to wash out. It’s from religious times but stripped of the metaphor and analogy.

Automation and physical statistics can keep enough stuff on shelves that we don't riot; communities can solve their last-mile problems as their daily work and noodle on their own artistic thing without having to sell it.


But artists create for a reason. You can take away the financial component, but the artist will still hope to get something out of it for themselves.

Consider a band releasing a new album they've spent 9 months writing and recording. In our post-financial brave new world, they give it away for free. But they'll read the reviews, and get excited if their music is well received. In short, they are hoping to be "paid" in public acclaim.

Now imagine that the minute they release it, some AI system digests all their new tracks, and a thousand randoms start making songs that are similar to the style and aesthetic they have spent time creating. In short order, the number of synthetic AI tracks will dwarf the originals, and the recognition will go to whoever markets their synthetic tracks the best, rather than the original creator.

The only hope I see for artistic creativity is in training custom, boutique AI systems that accurately encompass the artist's style. But that's not really creativity, to be honest; it's more akin to licensing.


Scale and availability matter


So if you put one buggy whip maker out of work a year, it's okay. But if you put hundreds out of work over the course of a couple of years, now it's wrong? Cars must be illegal.

When have we ever made the scope of the impact part of judging whether a disruptive business is legal? When did the impact of WalMart on small businesses matter legally, or Amazon on independent bookstores, or the iTunes music store on independent music stores, or Netflix on video rental stores? Or NAFTA on American factory workers?

Sorry, I don't buy the argument that the scale of the impact transforms the question of legality; it never has before.

If you are talking about the social problem of skilled people willing to work who cannot find meaningful employment: well, let's address that directly, but not by banning generative AI.


The buggy whip analogy doesn't work because car makers didn't and don't sell buggy whips in the specific style of specific buggy whip makers.


Ever hear the term, "Horseless Carriage?" The entire point was to sell something familiar to people, just without the horse.

To be honest, this is a rabbit hole worth going down.

https://en.wikipedia.org/wiki/Horseless_carriage

And from that page is this, which put a smile on my face.

https://en.wikipedia.org/wiki/Horsey_Horseless

Look at the 1908 Model T.

https://corporate.ford.com/articles/history/the-model-t.html

And then compare that with the Curricle, say:

https://en.wikipedia.org/wiki/Curricle


I'm familiar with early automobiles.

Still not an apt analogy. Innovation is quite different to overt copying.


So that's your argument? His analogy is wrong? Ergo, you don't have to address the broader point that people resent technology because it puts them out of jobs?

A lot of good arguments exist with bad analogies.


No, I'm just saying the analogy isn't apt.

But on the broader argument, saying people just don't like it because it puts them out of jobs also seems to miss the point.

Disruptive technologies of the kind that "put people out of jobs" usually just replace labour, i.e. mechanised looms, calculators, and computers, rather than replacing the result of one's labour. Often this is a distinction without any meaningful difference, but when we're talking about art and creative work products I think it's significant.


I don't buy your argument that the artistic jobs are any more special than any other job impacted by AI. Writers, software engineers, truck drivers, managers, etc.

People already buy cheap/shitty art to hang on the walls from Wal-Mart, Target, etc. Artists already lost that market 5 maybe 6 decades ago.


I mean, last year our town lost its last custom suit tailor to retirement. In a Peruvian town I saw a cobbler making shoes by hand, cutting soles from foot tracings. Custom suits and shoes have been greatly impacted by factories pumping them out. Now, you may not respect the sartorial arts (and frankly neither do I, really), but to those who do, something really meaningful was lost there. Perhaps many will go out of business, but some few will remain, like the aforementioned tailor and cobbler. That's always been okay before.

And frankly, the best art has always been created by people who couldn't afford to do it full time. Art will not suffer.


> And frankly, the best art has always been created by people who couldn't afford to do it full time. Art will not suffer.

That's quite the claim.

I'm not sure there's a consistent correlation between "best art" and ability to do it full-time, but there certainly is a long tradition of artists and apprentices who did indeed make art their entire lives.


You've missed the mark, and admittedly I was vague, but you seem to be talking about a completely separate phenomenon from the one I replied to.

I am not talking about "technology putting people out of work."

I am talking about "technology enabling the mass copying of a specific person's copyrighted works." I am comparing these two things:

1. a professional imitator, who works for years to develop a mimicked style and then has to work for hours to produce a piece (which in any case cannot legally be claimed as original);

2. a piece of software that could produce millions of copies a day, frankly creating a nontrivial chance that eventually a copy of a specific work would be created and claimed as "original."

You can argue my points here, as I'm not a copyright lawyer, but what you've argued above is irrelevant.


> it never has before.

Look up the Digital Markets Act.


I don't understand the logic of "humans are allowed therefore machines must be too".


It comes from wanting the law to behave like physics and be universalizable. It is not at all like that in practice.


Why are machines allowed to do anything? Should machines have been allowed to weave fabric just because a human was allowed to?


Precisely.


Is it how human brains work? Even if it is, why does that make it ok on an industrial scale?


The last time I saw a human artist doing this, they were about as hated as AI is. They intentionally mimicked the style of a specific other artist which directly impacted the other artist's sales and even their mental health.


They were hated, but were they legally prevented from doing it?


I mean they could have been if they had been sued over it.


> That's how human brains work too.

TL;DR: Brains and diffusion AI work very differently, and it's short-sighted that the meme is so often repeated.

It's really not. Human brains don't categorize inputs by a set of keywords and then use other keywords to compose an output collage of inputs with the same keyword. Humans are lossy, and so our brains are actually quite good at filling in the blanks contextually. We also don't think of keywords, we think of objects. A banana is an object to us with lots of context other than how it looks, whereas it's just a tag to an AI.

Which is all to say, if an AI doesn't have a reference for a particular artist's method of doing a specific paint stroke, it can never create that paint stroke. A person can: they can re-create a paint stroke from everything else they've studied about an artist, their style, and what kinds of paints they had available to them at the time.

There's a reason that producing decent pieces of art with an AI can take tens of hours and hundreds of prompts. It's still the human brain driving the final output. They've just handicapped themselves to using keyword prompts to avoid learning a specific skill set.


No one knows exactly how AIs work. It's an active area of research; you can go and look the papers up.

This statement here:

> Which is all to say, if an AI doesn't have a reference for a particular artist's method of doing a specific paint stroke, it can never create that paint stroke.

This is unsupported at best. You don't know. Pre-eminent experts in the field don't know. But this is simply a restatement of the weird "it just interpolates" fallacy people constantly spout about AI, which isn't even true of simple mathematical interpolation.

A simple example: a way to interpolate between the points (1,1) and (2,4) would be the line y = 3x - 2. Fairly obviously, that equation will also happily tell you values outside that range; the distinction is that you don't have any more data to verify whether those values are real. Replace the point geometry with the abstract "paint stroke" algorithm, and an AI model can do exactly the same thing.
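
A minimal sketch of that extrapolation point, in plain Python using only the toy values above (this illustrates the general idea of fitting and extrapolating, not anything specific to an actual AI system):

    # Fit a straight line y = a*x + b through the two known points,
    # then evaluate it both inside and outside the range they span.
    x1, y1 = 1, 1
    x2, y2 = 2, 4

    a = (y2 - y1) / (x2 - x1)  # slope: 3.0
    b = y1 - a * x1            # intercept: -2.0

    def line(x):
        return a * x + b

    print(line(1.5))  # 2.5  -- interpolation, bracketed by known data
    print(line(5.0))  # 13.0 -- extrapolation: the formula still answers,
                      #         but no data confirms the answer is real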


An AI can only work with the inputs it has, modulo a random seed.

That is, it can't interpolate using anything more than two axes (to use your example). A human can interpolate in 4 different axes (if not more).

And let's be frank: the whole "the AI can do the exact same thing" claim is a disingenuous assertion with nothing backing it other than "we don't know that it can't." I could also be a god, since you don't know that I'm not a god.


Yeah, I think that’s it. I think there’s a difference between the machine deriving “labial flower paintings” from first principles and the machine having been trained on a bunch of pictures tagged as Georgia O’Keeffe and spitting them out when someone asks for a picture in her style.


There’s been an implicit expectation in the past that copying someone’s style requires talent, though.


[flagged]


if society decides it is a derivative work then it's easy to hem them in

1. mandatory licensing of all models

2. licensing requires disclosure of all sources used in the training set

then, if OpenAI/Meta/... trains on your property without respecting your copyright, it's standard civil copyright infringement (and potentially criminal if done on a large scale)

this would fall apart if training becomes significantly cheaper (but with the current state of Moore's law we're a long way off that)


Can you define how it is stealing?


By the common IP definition. Reproducing something, in part or in whole, and making money off it.

And before the whole "copyright should be abolished" assertion rears its head (again, it's already in this thread at least twice): almost 50% of the US economy is based, in whole or in part, on copyright protections on IP. Copyright and IP are going nowhere.


So I researched one particular AI system (Stable Diffusion) a bit when I was helping out a friend.

By your definition, at the very least Stable Diffusion is not stealing, and cannot possibly be stealing, at least to the limits of my skill to satisfy myself of this as fact.

* It does not in fact reproduce specific copyrightable things in part or in whole (except by accident, and these accidents have been reduced over time; this makes 'mens rea' hard to argue).

* Its output (not being made by humans) is by definition in the public domain. This makes it a little hard to claim that it is being exploited for monetary gain.

I won't deny that there are potential indirect effects, but if you go by direct legal definitions, it's - interesting. New law may be needed, but there too, things might be interesting. It's certainly not as cut and dried as you are making it out to be here, at least not for Stable Diffusion.

* "Copyright protections on IP" seems to be a bit muddled. Can you explain how you are using the terms of art here?

* Do you have a source for "Almost 50% of the US economy is based, in whole or in part, on copyright protections on IP"? Doesn't need to be extensive. I just wonder where you are getting the 50% number from, and I wonder if there's a breakdown available.

+ it seems lawsuit(s) on this are still ongoing, so it'll be interesting to see what the judge(s) actually rule.


I had a more detailed point by point argument, but it really boils down to this: Stability AI is breaking the intent of the law, even if it's technically respecting the letter of copyright and other IP protection laws.

That disrespect of the intent matters more to me than playing armchair lawyer over the details.


I really appreciate that you took the time to think about it.

I do acknowledge the feeling you have that they must be breaking the law, or at least the intent of some law. I mean, they must be, right? People are so used to a certain way of thinking about things, after all.

But ... the intent of copyright law would appear to be "To promote the Progress of Science and useful Arts"[1]. I have the impression the rest is just the details of one way to achieve that end.

Do you disagree? Do you think they're hurting this intent?

Of course, maybe you'll argue that I'm over-simplifying. I might be. Could you put a finger on what I should be paying attention to?

[1] https://constitution.congress.gov/browse/article-1/section-8...


This question is increasingly equivalent to asking whether everyone is stealing from everyone else. The short answer is no. But it is possible that a lot of activities will become impossible to do if their value is moved to some AI.


Isn't the short answer "it depends on cultural norms"? Like how taking a photo of a person or a private location is a legal minefield right now?


These discussions continue to come up, and I feel like it's wasting time on the wrong issues.

Is AI *borrowing concepts* from artists? Yep. Do artists borrow concepts from other artists? Yep. Do either AI or humans create new art in a vacuum? Nope. Is the cat out of the bag? Yep. Is this the same with text? Code? Literally anything AI generates? Yep.

A major problem is AI being used to replace humans, which, of course, means borrowing concepts from humans... whom, oops, you just replaced. It's not just artists.

We have much bigger issues than "AI is stealing my art". It's really not. It's just being used to "optimize" you, the human, out of the equation.


If AI art is stealing then perhaps a significant number of us should be behind bars right now.


"AI art is stealing" is an oversimplification of "companies are stealing unlicensed user generated content to create a commercial product" which is absolutely true. Nobody grants commercial use rights to their output by default, you have to ask for that.

Nobody cares that humans learn in principle the same way as AI. Law, which is meant to protect and help individuals, foster healthy competition and innovation without disrupting the social order, and prevent monopolies, allows that. Large corporate entities should not be allowed to use that unlicensed content the same way. That's it. That's the whole point. Content was stolen to create commercial AI products.


Stealing is a particular legal term of art though.

I can't and won't deny people's feelings on the matter, but if we accept them as true, then they do seem to be using the word "to steal" in a novel way.


Tons of people on HN love to downplay ANY serious problem with AI, and I am here for it. Very amusing.

And always the same arguments too — not much detail but just throw & go LOL:

“it was the same before AI, so if this is a problem then stuff humans have done is a problem, relax my luddite bro” LOL


[flagged]


You seem to be suffering from RAD


I suffer from every 3 letter acronym and half the 4 letter ones


A photocopier isn't an artist, even if it can re-create the Mona Lisa.





