Hacker News
Big Tech is replacing human artists with AI? (erikhoel.substack.com)
69 points by PinealGland on Sept 19, 2021 | hide | past | favorite | 76 comments



All of these examples fail on structure. The "novels" don't have a plot, the "music" doesn't have a direction, the "poem" mimics the tropes but not the rhetorical intent.

Really, NN-based art is just a more abstract kind of sampling - blundering around inside a human-originated space, bumping off the walls, looking for the occasional example that a human will find interesting.

It's still mechanical, it's still mimicry, and it's still repetitive. Even if nice things fall out - and sometimes they do - they're really just distortions of the originals. (That's particularly obvious in the library "paintings.")

They're still waiting for a human to say "I like that", but with no sense of what makes a human say that.

It reminds me of fractal art - pretty, but monotonous and disposable.

How many people can remember their favourite example of fractal art?


The majority of human-produced art is also mechanical, mimicry and repetitive. What is the "magic" of humans that AI lacks?


This is exactly what naive AI art will reveal: that most human art is cliché-ridden and derivative.

By naive AI art I mean just using it blindly to generate something that imitates X, or generating random art-like output.

There’s another potential angle to AI and art though. Humans could use AI as a tool to make art just as they have with electronic oscillators, photographic film, 3D renderers, etc. People once decried electronic music and photography as “not real art” too, so I assume some will say the same of anything using AI.

Personally I think AI as a rendering tool could be fascinating. The artist would be a little like a DJ cueing up patterns and themes, but you could get way more sophisticated than you can with mixing tracks and could do things other than music.


> What is the “magic” of humans that AI lacks?

Based on his examples,

> The "novels" don't have a plot, the "music" doesn't have a direction, the "poem" mimics the tropes but not the rhetorical intent.

AI lacks an actual understanding of what it is talking about.

Current AI is like a cargo cult, I guess: it sees the surface patterns but doesn't have the underlying understanding of why the patterns are the way they are.


> AI lacks an actual understanding of what it is talking about.

You could claim that a chess AI lacks understanding of chess. And yet, objectively, it performs better than a human.

The current state-of-the-art AI-generated works are probably low quality, but one cannot claim that this won't improve. At some point, I would imagine, AI-generated works will be indistinguishable from human-created works, even though the AI "doesn't actually understand" them.


> You could claim that a chess AI lacks understanding of chess.

No you can't: the AI has 100% of the rules of chess; there are no rules the human understands that the AI doesn't. Compare that to human-written text: we haven't been able to code all the rules for human writing, so the AI doesn't have them. Instead of being able to evaluate what it writes itself, the AI has to imitate what humans have written without being able to tell whether what it wrote is good or not. In chess, on the other hand, the AI can check for itself whether it made good moves, since it can play test games and see whether the moves lead to victories.

If you want to try to code an AI that doesn't understand chess, then write it entirely without encoding any chess rules at all, just training a neural network on human-played games. I'd bet it would be hard even to have it make legal moves reliably. That is the level of our current natural-language AI.


If you think art is about creating the most realistic depiction of reality, or the most varied representations of an object, or anything else that can be maximised like chess (win in as few moves as possible while minimising the risk of losing), then sure. But then you don't get the first thing about art; you're talking about a particular type of decoration.


It might just be that there is nothing “deeper” to understand about Chess. It’s an artificial game where the rules are the way they are because we said so - i.e. there is no “why”.

I never said AI will not get better. Just that its current level of understanding of many of the fields it’s trying to tackle is similar to that of a cargo cult’s.


The same could be said of music. There's nothing that inherently makes a set of frequencies between 20hz and 20khz played in a particular succession "deep". I don't think AI will ever "understand" it, because most human beings don't understand it. Because there's no inherent value to any of it. Just a confluence of rules, personal decisions, circumstances and observations that has led to the tropes and trends that make up today's, or any given period's, musical landscape.


> There's nothing that inherently makes a set of frequencies between 20hz and 20khz played in a particular succession "deep".

What about the abstract emotional perception of humans that determines whether something resonates with them? You can learn that certain chord patterns have certain emotions associated with them. It's another step of difficulty to make an emotionally coherent piece that makes people feel something. Most art is about individual expression. Not impossible to fake, but to do it well you need to understand the context of the world and the salience of certain things to humans. word2vec won't win any awards for its latest single "king - man is a queen"


Chess has a pretty well defined mathematical reward function with perfect information....

Art has never had anything like that


Humans don't work from exact specifications either.

Humans instead learn based on reinforcement, be it positive or negative. So the human essentially builds a world model of what is "good" and "acceptable", and then works against that.

I suspect that a Reinforcement Learning approach to art generation could produce art much better than current works, simply because it doesn't aim to imitate; it aims to maximize reward, the fitness of the work, without relying on individual measurements against an ideal.

It should be noted that RL research is very costly; perhaps a more feasible approach is to train a model to generate art from noise, and then train an RL agent to generate good noise.


> The majority of human-produced art is also mechanical, mimicry and repetitive.

Is that so? How much human-produced art have you looked at?


I also feel like a huge part of what makes art fun and interesting is the human-to-human communication aspect of it. Dali, without knowing he's Dali and what he and the rest of his generation saw, talked about, wrote about, and valued, is just a bunch of goofy nonsense, albeit technically well executed.


> How many people can remember their favourite example of fractal art

I’ve got a list of my 10 favorite I can share.

You should actually play with these tools before you dismiss them. There is a lot more going on than bumping around human space. The curation process between artist and machine is alive and well in this genre.


I posted a thread [1] on VQGAN+CLIP art generation; curious what you think. I definitely see the points you wrote here, but at the same time I -do- like what it generated. I think there'll be versions that use patterns better. So they won't understand why we like something, but they will be close to picking the correct patterns/features to make us like something.

[1] : https://twitter.com/Killa_ru/status/1439299650331320327?s=19


I think you're right. This is discussed in the article: GPT-3 does very well at the small scale, like short poems. The architecture of GPT-3 is built on Word2Vec-style embeddings and attention; it doesn't really have a good understanding of large-scale semantic structures in writing. (Also mentioned in the article: it can't rhyme, because it doesn't understand sub-word semantic structure either.)

I think the next big advance will be when we are able to apply attention/word2vec-style algorithms at other semantic levels.


AI adds a tool in the toolbox for artists, it will probably never replace human artists.


> For example, recently psychologists discovered a simple way to test for creativity. The simple test is to come up with a list of 10 words as different as possible from each other. After posting my word combinations and their scores people were replying with theirs, and then of course eventually GPT-3’s results were posted. According to the test, GPT-3 is far more creative than the average person, scoring in the 85-95th percentile. If you like, take that test and see if you can beat the machine.

So could a line of Python:

    import random
    random.choices(words_in_english, k=10)

This isn't really testing creativity so much as the size of the list of concepts you can keep in your head at once and then select from.

I can see how that correlates with creativity in humans, but for a machine, it is just leveraging its ability to store and process far more data.

EDIT:

I tested this. Maybe there is some confounding factor in my implementation, but random.choice seems to consistently beat most people too, around 90-95%.

Site used: https://www.datcreativity.com/task?

Repo of code: https://github.com/MattGaiser/CreativeWordChoice/tree/master...

I just grabbed a random list of nouns, narrowed it by the params specified, and fed them in as required. You may need more than 10 as it rejects some, but in that case I just re-ran it and put in the first nouns to come up.
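The steps described above can be sketched roughly like this (a reconstruction, not the linked repo's actual code; the noun list, function name, and `min_len` filter are all assumptions standing in for "narrowed it by the params specified"):

```python
import random

# Hypothetical noun list; the linked repo pulls one from the web.
NOUNS = ["otter", "wrench", "sleet", "subpoena", "basalt", "quasar",
         "brocade", "marinade", "cadenza", "moraine", "fathom", "parole",
         "ox", "keel"]

def candidate_words(nouns, k=12, min_len=3, seed=None):
    """Filter to plain words of a minimum length, then oversample a
    little so rejected entries can be swapped for the next ones."""
    rng = random.Random(seed)
    eligible = [w for w in nouns if w.isalpha() and len(w) >= min_len]
    return rng.sample(eligible, k)

print(candidate_words(NOUNS, k=12, seed=1))
```

Oversampling (k=12 rather than 10) mirrors the note above that the site rejects some words.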


> For example, recently psychologists discovered a simple way to test for creativity. The simple test is to come up with a list of 10 words as different as possible from each other.

When Robert Frost was the poet laureate of the U.S., some news magazine (I think Newsweek, although it might have been Time) did a creativity test on him, using inkblots IIRC, and he scored particularly low.

Frost was irate afterwards and remarked something to the effect that being creative required that you not see things in the inkblot, that is to say, not try to see things in distractions; that he had spent years training himself not to see things in meaningless shapes.

I think there is quite a lot of truth in this. Also, if someone wanted to give me a simple test to come up with a list of 10 words as different as possible from each other, I would probably get verbally abusive, or at least call them an idiot.


> For example, recently psychologists discovered a simple way to test for creativity. The simple test is to come up with a list of 10 words as different as possible from each other. According to the test, GPT-3 is far more creative than the average person, scoring in the 85-95th percentile.

This is the dumbest thing I've read all week. My god, how can somebody be so obtuse and so... narrow-minded?

An excellent argument in favour of including some humanities course in tech degrees at least.


> An excellent argument in favour of including some humanities course in tech degrees at least.

What are you seeing that I am not?

It looks to me like the author of the article went to a liberal arts school (Hampshire College) for a BA in cognitive science and philosophy before pursuing a PhD in neuroscience, and is a published fiction writer (both short stories and a novel).

I suppose what a tech degree is needs to be defined, but in Canada a computer science degree would involve "breadth electives" that cover humanities and the social sciences. It is at least two of each depending on if one is studying for a BSc or a BA (a BA would require more). Even a degree in computer engineering in Canada would require at least one course in the humanities. I think this applies in the US as well. I believe computer science and computer engineering are the stereotypical tech degrees people picture when the argument is made that tech is lacking in an understanding of the humanities and ethics.

It seems that the author of the article is about as far from a typical tech degree as I've outlined above, but you may be seeing something else that I've missed?


Church-Turing paradox, but in comments ~ ;p - to elaborate:

One take is that it's bollocks: a mechanistic measure for a mechanism to "win" at has nothing to do with human creativity. One would argue that the arts should be taught alongside STEM to help engineers interface with humanity, especially the indefinables.

Another take is to ask what it is the machine isn't quite doing. As I believe it's been said before, "tell me exactly what the program can't do and I will make it do exactly that".

But where does it end? To satisfy the argument we will need to define creativity, humanity, arts, not to mention bicker on the semantics of building the necessary shared ontology!

The synthesist in me thinks the next Shakespeare surely could benefit from n+1 “monkeys and typewriters”


The task asked me to name 10 nouns that are as "different" from each other as possible. I chose 10 common words that all belonged to different concepts, and scored 75%.

The results then stated that "different" refers to the probability of two words appearing close to each other in known texts.

With that in mind I retook the test and chose 10 words which are rare or vulgar. Scored 93%.


I got a dual major in psychology in undergrad, and while it's not been an exceptionally useful degree, it did bequeath me an enormous dose of skepticism toward psychometrics like this. The replication crisis solidified that skepticism.

Being able to choose less "related" words doesn't sound like it has much meaningful basis in what we care about when we say "creativity". This metric, like many others, was likely chosen and popularized because it is easy to measure.


I can see it being strongly correlated in humans, since a human with more items to draw from has more material for figuring out solutions, but I don't see how it can be used to test computer creativity. Python's random module surely isn't creative.


That's the problem with metrics, though. You can make anything sound plausible. Once a model has been presented to you, you can come up with an endless stream of just-so stories for why it measures what you were told it measures. Nonetheless, some metrics simply aren't that useful, and get adopted anyway because they produce a bunch of data. Psychometrics like this have an awful track record of standing up to more meaningful tests like "can we drive any outcome we care about with this?"


That’s a good way to demonstrate where we are with AI: it can beat a human on any given narrow definition, but still falls far short on any task that requires some ineffable creativity, synthesis, or understanding of the world. We can make a Python script more “creative” than 95% of humans (by a meaninglessly narrow definition of creativity), but all our effort as a society has still not produced a car that can drive better than your average teenager.


Arguably that line of Python wouldn't do very well: you'd want choices that maximize the difference, not merely random ones, and this is harder than it appears. The distribution of words is also uneven enough (both in relative frequency of word usage and in lexeme "density", i.e. how many terms there are for some domains and how few for others) that pure random sampling would not be a good option.


Word frequencies follow a power law, so nearly all words are used very infrequently, and a randomly chosen word is likely to be very uncommon. Also, there are many more specific concepts than general ones (because each general concept encompasses many specific ones), so a randomly chosen word is more likely to be specific than general.

Random sampling is likely to give you unusual, highly specific words; I don't see how that wouldn't do well. Sure, you could do better by mapping each word to a concept, selecting ten random concepts, and then selecting a word for each concept. But why do that when you get 99% of the way there with just random choice?

To test this empirically, I let Python choose 10 random words from the first English word list I found. I got "epilithic, codirects, serologic, capsian, digestibility, yasna, samsara, supergun, hepatoptosis, kyurin". Sure, capsian and kyurin are somewhat related (names of places/peoples), as are serologic and hepatoptosis (both medical terms), but it's a pretty good spread.
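The two-stage variant mentioned above (map words to concepts, sample concepts, pick a word per concept) can be sketched quickly; the concept map here is a toy assumption, where a real one could come from something like WordNet:

```python
import random

# Hypothetical concept -> words map; a real one could come from WordNet.
concepts = {
    "animal": ["otter", "heron", "beetle"],
    "tool": ["wrench", "chisel", "awl"],
    "emotion": ["envy", "glee", "dread"],
    "weather": ["sleet", "drought", "squall"],
    "music": ["cadenza", "timbre", "refrain"],
    "law": ["subpoena", "tort", "parole"],
    "cooking": ["braise", "sieve", "marinade"],
    "geology": ["basalt", "moraine", "scree"],
    "textile": ["loom", "warp", "brocade"],
    "sea": ["keel", "brine", "fathom"],
    "space": ["quasar", "perigee", "nebula"],
}

def ten_unrelated_words(concept_map, k=10, seed=None):
    rng = random.Random(seed)
    chosen = rng.sample(sorted(concept_map), k)  # k distinct concepts
    return [rng.choice(concept_map[c]) for c in chosen]

print(ten_unrelated_words(concepts, seed=0))
```

Since each word comes from a different concept bucket, the sample avoids the "two medical terms" collisions that pure random choice can produce.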


The example you got is exactly what I had in mind: there are terminology-heavy fields with lots of terms, and a random draw is quite likely to pull two items from a single field, which IMHO should count as quite related. The automated scoring method they're using seems to say that all obscure words are unrelated to each other, though; I'd take that as a criticism of the evaluation algorithm, not praise for the evaluated method.

On the other hand, probably there are many humans who would pick two local animals or two office-related objects that happen to be in their room, so that cuts both ways.


Not the poster, but I think the point is that you could do better, not that this doesn't work.

You can imagine some discrete optimisation maximising the dissimilarity of the words, though that's hard to do, particularly in a discrete space.


It does pretty well on this (https://www.datcreativity.com/task?), at least with the code below. It consistently gets 80-90%.

https://github.com/MattGaiser/CreativeWordChoice/tree/master...

I just grabbed a random list of nouns, narrowed it by the params specified, and fed them in as required. You may need more than 10 as it rejects some, but in that case I just re-ran it and put in the first nouns to come up.


Yeah, a random sample won't be (numerically) ideal, but given the size of the set, it's probably safe.

A really quick and easy way to optimise this would be to just grab a well-trained word embedding space and pick _k_ furthest words by your favourite distance metric.
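A sketch of that idea, with toy 2-D "embeddings" standing in for a real pretrained space (e.g. GloVe vectors), using greedy max-min ("farthest point") selection, which is a common approximation since choosing the exactly-furthest k-subset is combinatorial:

```python
import math
import random

# Toy embeddings; in practice you'd load pretrained vectors (e.g. GloVe).
emb = {
    "castle": (0.9, 0.1), "grave": (0.1, 0.9),
    "moat":   (0.85, 0.15), "coffin": (0.15, 0.85),
    "quasar": (-0.8, -0.3), "sieve": (-0.2, 0.7),
    "parole": (0.5, -0.9), "loom": (-0.9, 0.4),
}

def farthest_words(embeddings, k, seed=0):
    rng = random.Random(seed)
    words = sorted(embeddings)
    picked = [rng.choice(words)]
    while len(picked) < k:
        # Greedily add the word whose nearest already-picked word
        # is farthest away (max-min distance).
        best = max((w for w in words if w not in picked),
                   key=lambda w: min(math.dist(embeddings[w], embeddings[p])
                                     for p in picked))
        picked.append(best)
    return picked

print(farthest_words(emb, k=4))
```

Swapping `math.dist` for cosine distance is the more usual choice with real word vectors, since pretrained embeddings aren't length-normalised.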


Also, the test I took[0] scores this by determining how often the chosen words appear in the same section of a text as each other, meaning that choosing uncommon words (crenellations, gravedigger) will show much more creativity than common words (castle, grave) that are conceptually similar.

[0]: https://www.datcreativity.com/task?


My naive take is that if you can keep 2N concepts in your head at once, the number of possible connections among them is much more than twice as big as with N concepts, so you have a much wider graph to analyze and possibly fish interesting ideas out of.

(Unless N is 0, of course.)
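The arithmetic backs this up: N concepts allow N(N-1)/2 pairwise connections, so doubling N roughly quadruples the pairs rather than doubling them. A quick check:

```python
from math import comb

# Pairwise connections among n concepts: comb(n, 2) = n*(n-1)/2
for n in (10, 20, 40):
    print(n, "concepts ->", comb(n, 2), "possible pairs")

# Doubling from 10 to 20 concepts multiplies the pairs by ~4.2, not 2.
print(comb(20, 2) / comb(10, 2))
```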


I agree, but that is a two stage process. Sure the machine can beat us on concepts, but can it make meaningful connections?


I would just like to say: the insight to try a random choice of words and see how well that satisfies the criteria demonstrates a kind of creativity that isn't captured by having people generate a list of "unrelated" words from memory.


Memory is much more correlated with creativity than one would think.


Sure, and photography for the most part really did replace painting, or at least its societal purpose... family portraits are done with photos, as is most other visual documentation.

The art world didn't end when photography came along. But it did change. It won't end when AI-driven creations come along. It may change. The creative process and the resulting works have been changing based on technology for centuries. The printing press changed art. The industrial revolution changed art. 3d printing changed art. CNC machines changed art. People will always find new ways to be creative with new technology. New art forms will arise. Old art forms will still be around, and people will find uses for all of the above.


This is a great point. Maybe an AI can do the work of editing podcasts to remove "ums", or air-brushing ads to remove blemishes, cheaper than a creative type would, but I don't see podcasters or fashion models disappearing as a concept anytime soon.


There is a form of art called "found object art", which is pretty much exactly what it sounds like. Once upon a time, I asked my art instructor what the difference is between this art and a random object on my desk: couldn't anybody just pick up an object and call it art? The answer she gave was that the art is not in the object but in the context, in people observing the object as art and discussing its symbolism and merits.

Machine learning lowers the skill barrier to producing art, but in a lot of contexts this barrier never existed in the first place. To have any value, art must be meaningful to people, and there are few things less meaningful than learning that a painting was churned out mechanically by a painting mill, a paint-by-numbers kit, or an optimization algorithm.


> We cannot know what a character on TV is actually thinking—that’s the very purpose of acting, to allow us as much access into a character’s thoughts as their expressiveness allows. Novels, on the other hand, are an intrinsic medium, wherein consciousness is directly accessible. The writer can lay bare the contents of a skull to a reader.

The author appears to have an idealistic and mistaken idea of how semantics works. He implies that words have absolute meaning, and that one's thoughts can therefore be made "directly accessible" through words alone. This is clearly false: everyone has their own semantics.

Communicating clearly, minimizing ambiguity, is a non-trivial task. It is ridiculous to assert that all authors are somehow more capable of this than any actor, if indeed this is even their goal (beyond entertainment).


I find it interesting that tech writers have no problem believing that GPT-3 produces prose comparable to humans' in quality, including mastering abstract concepts such as character development or the deeper meaning of poems, but the instant GPT-3 produces computer code, we all see the flaws immediately.

I think there is some expert bias at play here.


Artists will never be replaced, although looking at the trends of modern graphic design, I doubt a computer can do any worse.


I'd make the argument that while artists haven't yet been replaced by technology, it's clear that technology has made it much harder to be an artist. Because of the internet, and the fact that we're so globally connected now, art is just a lot more competitive. There's so much music available online for free, even just on YouTube and SoundCloud, that it's a lot more difficult to be a musician. You're not just competing against something that's given away for free; you're competing against very skilled artists, skilled enough that even 10,000 hours of practice might not bring you to their level. I think the same is partly true for graphical artists.


I don't think it's entirely true.

Internet or not, if a band were to play 4 straight hours live like Led Zeppelin used to do back in the day, they'd get noticed.

Of course you can't just be at the park sitting on a bench while doing so.

You still have to go through the process of embellishment and make an effort to sell yourself both in terms of structure and looks.

What people on HN and reddit fail to realize is that the internet won't ever find a way to force its way into the golden hour of music marketing:

7pm-3am

Young people go out during that timeframe, they are with their friends and smoking and drinking all sort of funny things.

That's how one's favorite song is minted.

The problem is clubs: they want to play the hits because they sound familiar. They should scout for bands and structure a deal for a percentage of their future income in exchange for giving them the initial opportunity.

Club owners are not ambitious enough to think they could find the next big thing and become their manager, and yet the history of early pop music was all about that sort of arrangement.

The Beatles in Hamburg is the most famous example.


Technology has also made it easier to be an artist; tools often automate what took artists countless hours to perfect. This exacerbates the issue, making “art” a technical skill rather than a creative endeavor. The problem is that no matter how technical it gets, treating art as something that can be artificially constructed leaves things blatantly soulless. In the modern era everyone can be an artist, but only the exceptional will thrive, and art being an endeavor you can thrive on instead of struggle at is what’s unique about the times we live in.


I do pretty good drawing stuff that makes me happy to draw and giving it away for free, with a Patreon tip jar. There are better artists than me who are also doing this, there are worse artists than me doing this, and there are artists who work a lot harder than my lazy self doing this. Or artists who work a lot harder than my lazy self grinding out immense amounts of work in the animation/effects/game asset mines, and getting a salary.

Admittedly I am also fifty years old, and got good enough for people to pay me for drawing stuff about twenty five years ago.

The fact that every social media platform likes to hide posts that take people off their sites to places they could give me money hasn’t been making it easier, but I do okay. My audience of people who are enough like me that “stuff I draw to make me happy” fills a lot of otherwise-unfulfilled desires is slowly growing.


> What seems to make AIs more intelligent is simply their scale. More neurons, more connections (referred to as “parameters” in the community) means better, and techniques that don’t work at one scale may jump to human-level at a larger one.

I think for this reason human artists won’t be replaced anytime soon. The human brain is insanely efficient. I have a theory that humans are smart not because of some magic algorithm in the brain, but because we process so much input that we can recognize so many patterns at once.

IIRC we’ve built a supercomputer with as much “processing power” as a human brain (although we also don’t know exactly how neurons work). But it’s not just that. We’d have to build that computer and run it for decades on all kinds of input, and then we’d have something as smart and creative as one human, and not necessarily an artistic one either. Meanwhile there are over 7 billion humans, plenty of whom can make art for cheap. That’s why we’re far from companies using AI to make their graphics.

I’d love to see AI stories and art, but until we get much more processing power I doubt we’ll get AI making novel creations. These abstract art pieces? Sure. Transferring styles (e.g. Van Gogh, watercolor, polygonal) between paintings? Sure. In fact, I bet most artists will be using AI-augmented tools. But for a long time AI will still involve human work and/or be relegated to different styles.


I've been thinking something along these lines recently too. A single neuron seems like an absurdly complex device. I doubt they are really similar at all to "artificial neurons" except in some vague abstract sense. So the brain is essentially a network of billions of powerful, hyper-efficient computers (not to mention all the other nerves in the body), whose function, communication protocols, and network topology have been developed over millions of years in a way that has led to human intelligence.

And yet it still takes decades to develop a human mind. Even just to become competent with a single language takes years and it requires much more than just learning that language. If we could develop an AI system that hypothetically could achieve human intelligence, could we even train it to that level faster than it takes a human mind to mature? Is there some kind of maximum rate at which the process by which "intelligence"- whatever that may mean exactly- can be reliably run?


The main advantage of AI is that we can just copy it. So if we take 10 years to train an Einstein-level AI in a field, we can copy it a million times and have a million Einsteins. Of course a big part of creating progress is learning new things, so this approach might not create scientific progress faster than current humans, but at least it will be way more efficient for run-of-the-mill knowledge work.


The lesson is that art is much less profound than previously thought. GPT-3 is basically autocomplete with a really big training set.

The main thing it lacks is a tie to the real world. If you generated, say, an auto repair manual with GPT-3, it would read well, but it would not match the proper procedures for the car.

That might be fixable. The next level would be a system that watches someone doing a task, then generates a how-to guide.


As opposed to the artist who dropped buckets of paint from a bridge onto a canvas and called it abstract art.

Since time began, it's all in the user's appreciation or lack thereof.

Art is anything and everything if you boil it down.

But, an artist with talent and a message....


To me, the fact that SOTA AI-generated art resembles a lot of human-generated art is mainly because there is a ton of low-quality art out there.

However, I believe there have been enough major advancements to assume that there will be more and that progress will keep coming. Obviously that's speculation, but it seems plausible that we will get closer and closer to human level as time goes on and there are more breakthroughs.


To me, it’s a lot more fun to look at the work that human artists are doing with AI than to get worried about them being “replaced.” https://pitchfork.com/news/holly-herndons-ai-deepfake-twin-h...


> the linguistic and creative output of AIs, which lack all consciousness and intentionality, and therefore whose statements and products lack all meaning.

If, in fact, AI output does lack all meaning, then that should provide a major competitive advantage to humans. After all, isn't the search for meaning one of our most basic motivations?


Right, and because we are so completely caught up in the search for the meaningful, we don't have a complete idea of what it is. There isn't a scientific test for whether something is meaningful, only for how much information something conveys; Shannon literally discarded meaning to formalize his measure of information.

Thus, not knowing exactly what it is, it's not terribly surprising that we are fooled by our own attempts to produce its facsimile.


This blog post is so long, but it’s important to note that it’s hobbyists producing these results, using either open source models released by OpenAI or calling their API. Big tech didn’t set out to replace artists, but it may be a side effect of the useful models they produce.


"Good enough to fool a casual onlooker." The thing is, it's not a continuous spectrum from garbage, to good enough to fool a casual onlooker, to masterpiece. I'm not saying we won't figure it out someday, but the current generation of AI won't produce masterpieces; people who worry it will confuse art with decoration. People creating decorative designs for a living should be very worried, though.


> Notably, GPT-3 is licensed by Microsoft, and is therefore closely guarded. You interact with it only via apps which act as oracles. It’s basically a genie stuffed in Microsoft’s basement you can Zoom with.

This brings to mind Eliezer Yudkowsky's AI-box experiment [1]. Although GPT-3 is mindless, how long will future iterations _remain_ in Microsoft's basement?

[1] - https://rationalwiki.org/wiki/AI-box_experiment


Art will never be replaced by AI, because art isn't just about the manifestation itself; it's also about the intersection between cultural boundaries and the individual.

Take two pieces (of whatever medium) produced by a human and an AI and compare them: if the two are the same, that's fine. However, if you knew the human had some extraordinary circumstances involved in the piece's creation, many would elevate the human's piece despite it being literally the same as the AI's.


What if the human completely made up the story surrounding the creation of the piece? How does that change the ranking of the two works?


Yes, because then the question is one of why, and of the surrounding controversy.


The problem with stopping the research is that other countries, some of them nuclear armed, will continue it. Combined with surveillance, it might become impossible for a citizen revolution, even a peaceful culture driven one, to ever happen again.



Heh, I thought only the artists and creative types would survive AI. Looks like they are the first to go.


No AI, though, can ever be funnier than Norm Macdonald.


I am not sure what "Big Tech" is, but yeah, artists are getting replaced. It is now easy for small companies to use deepfakes to create interesting marketing ads. A company I know used their own employee to create some ads and then used a deepfake to replace her face with the Mona Lisa's. They are considering using it to swap in faces of different ethnicities for different countries. There are models that will create a realistic voice given just a script, and of course models like GPT-3 can create a reasonable script. Nothing Oscar-winning or close, but enough for a lot of companies to drastically reduce their marketing budgets.


Seems like you're talking about designers, not artists. Designers make products whose end goal is money; artists make artworks whose end goal is meaning. The intrinsic value of an artwork comes from human > human communication. AI is a tool.


That is certainly one definition of art. There are others. The answer to "What is art?" changes depending on time and culture.

FWIW, I have a degree in Fine Arts, have had my work in galleries, and my goal is neither money nor meaning, but just for people to give me some validation and say, "Hey, that's cool."


Sounds like a healthier, although very expensive, dopamine dependency.


> Designers make products whose endgoal is money, artists make artworks whose endgoal is meaning.

I don't think that's a useful classification or that those are even mutually exclusive. How would you group Michelangelo and his incredibly expensive commissioned works for instance?


What does it mean for meaning to be the endgoal of art?

I've spent the last two months making carvings and blockprinting them. Sometimes I try to come up with an idea or concept I want to convey. Other times I sit with a block and start carving without any sort of plan or idea for what will come out, just filling in the blank spaces until something emerges.

Upon completion I can often come up with various interpretations of the abstract results, but a lot of the fun comes from hearing how other people interpret my work. A veritable Rorschach test for the soul.


> The intrinsic value of an artwork comes from human > human communication

So you're saying art is a tool?

Is art not shared not art?


The necromancy! Facebook already does this. Terminator dons the skin of a fresh corpse and tries to sell you a phone plan.



