I made an AI write a story about AI then I made another AI illustrate it (tristrumtuttle.medium.com)
365 points by ttuttle on May 24, 2022 | hide | past | favorite | 195 comments



Funny, it reads like propaganda made by AI overlords in a posthuman world. The leap from being scared, to being friends, to concluding that all AI is nice, without any real compelling narrative tying it together, is what I would expect from an automatic well-being machine taking us over for our own good.


Funny indeed, but that leap was part of the prompt selected by the human author according to the post script.


How many attempts and how much human curation were involved? Every "I made AI do..." story seems to necessarily omit the "and I filtered out a LOT of rubbish before picking this one good example."


The following is an un-curated text-davinci-002 generation, completing the text:

> Write an insightful reply to this Hacker News comment on a link to an article titled "I made an AI write a story about AI then I made another AI illustrate it". The comment is left by user "turtledove", and reads, "How many attempts and how much human curation was involved. Every 'I made AI do...' story seems to necessarily omit the 'and I filtered out a LOT of rubbish before picking this one good example.'"

The completion:

> I'm not sure how many attempts were made, but I imagine it wasn't too many. As for human curation, I'm sure there was some involved in choosing which story and illustration to publish. But overall, I think the point of the article is that the AI was able to produce something fairly coherent on its own.


I think we're currently witnessing something truly unique with the rise of AI, something which is as special as life being born on earth.

There'll still be the discussion around consciousness, but having a tool which can emulate human thinking is really wild.

The next 5 years will be really interesting, not least in seeing how this will be absorbed by society.


Let's not get carried away. It is still original only to the extent that it is a different combination of existing prose - just unrecognizable as such due to the vastness of online content.

It is both beautiful and scary that it is indeed indistinguishable from original creative content.


No, you're wrong. It's literally creativity. Machine learning models work precisely because they fill in gaps where the data doesn't exist.

Any content generated by ML is very unlikely to be anything other than original.

Imagine linear regression. Then pick a random point on the fitted line. How likely is it that the point you picked coincides with an actual data point? Very unlikely.

ML is simply a multidimensional version of this: some 1000-dimensional surface, where the result you see is simply a point on that surface. The likelihood of that point coinciding with an actual data point is astronomically low.
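The regression analogy above is easy to check numerically. A toy sketch (illustrative only, not from the thread): fit a line to noisy data, pick a random point on the fitted line, and test whether it coincides with any training point.

```python
import random

# Toy data: noisy points around y = 2x + 1
random.seed(0)
data = [(x, 2 * x + 1 + random.uniform(-0.5, 0.5)) for x in range(10)]

# Ordinary least-squares fit of a line y = a*x + b
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Pick a random point on the fitted line; it almost surely
# matches none of the training points exactly.
x = random.uniform(0, 9)
y = a * x + b
coincides = any(abs(x - px) < 1e-9 and abs(y - py) < 1e-9 for px, py in data)
print(coincides)  # → False
```

The fitted line summarizes the data without passing through any of it, which is the commenter's point about generated outputs being "on the surface" rather than copies of training points.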

Nobody is being carried away here. What you are seeing is raw creativity by an AI. It is a highly simplified version of human creativity, but the deep fundamentals are identical.

In fact, all intelligence is simply a huge surface of thousands (or more) of dimensions - a giant curve-fitting methodology. The only gaps between ML and human intelligence as of now are the number of neurons, the training algorithm, and the actual template for the equation describing the model. But in essence we have the fundamentals down at a high level.


I agree with what you say and I would add that humans benefit from embodiment. AIs can't experiment like us; we discovered sciences by investigating our complex environment, they have MuJoCo and ATARI. We're also a whole society, AIs are usually individuals in isolation. We benefit from evolution, not just from learning while AIs don't have dicks. It takes a whole branch of industry to make the chips they run on.

That doesn't mean AIs can't have robotic bodies or access complex simulations, they could also have a society or be integrated in ours, and evolutionary techniques could be part of that process. For example AlphaGo got a good enough environment and evolutionary selection of agents, and it topped human abilities very fast. It's only a matter of time until they can have all our advantages.


But that's how humans learn language to begin with.

I mean, look at this sentence.

> But overall, I think the point of the article is that the AI was able to produce something fairly coherent on its own.

The AI didn't merely combine words. It figured out the context of the conversation and the intention of the comment it's responding to, and made an appropriate response in the English language.


This is true. But here's a thought experiment: what would happen if, say, an illustration-drawing AI (just a more fitting example than prose) were trained only on art from before the 20th century? It would surely excel at 19th-century or medieval-style illustrations, and yes, it would produce original ones. But would we ever get a 20th-century art style from it? By which I obviously mean even an alternative 20th-century style. There are different tiers, or layers, of creativity - that's my point, I guess.


Is this fundamentally different from training a human exclusively on the same and expecting them to advance the field of art by a hundred years? These sorts of progressions in humanity are slow and steady with occasional exceptions that produce leaps.


They may be slow, especially by machine standards, but they do occur. Their pace isn't really relevant to my point.

And humans are trained on the same; unlike AI, we don't have any external supply of art to rely on : ) At any given time, all we have is what we've already created so far


I would actually say that humans have an enormous extra set of data that we, as people, are "trained" on. We walk around in our daily lives, seeing things constantly, and that influences our perception of art. Art is always a product of the broader context it was made in (social, environmental, etc). Something that gets accepted or praised today might very well not have been 200 years ago.

One of the interesting things about these new big models is that they dramatically broaden the context in use. The models are learning both the textual representation of a concept and the artistic/visual representation, as well as the relationship between the two domains.


That's true, but this is exactly the unsurpassable limitation of AI's creativity.

It can generate art out of art. It can do it exquisitely well. But it lacks experiential/social inspiration. This component only comes from recycling here.

(Without the atmosphere of industrializing 19th-century Central Europe, we have no Kafka. You can't simply generate Kafkaesqueness out of pre-existing literature.)

And that's why I refuse to believe AI can be creative in a meaningful sense of the word.

Obviously it's not the AI's "fault", so to speak, but that's kind of beside the point :)

PS. I can imagine - now we're going far into the realm of sci-fi - truly sentient AIs producing art that's genuinely creative. Art actually stemming from a self-conscious AI's psychological experience. But then it would probably be utterly incomprehensible to us : ) "If a lion could speak, we could not understand him", as the philosopher remarked.


That’s a good point. Though AI should be millions(?) of times faster and get to those leaps quicker…but can it?

Would be very cool to see that experiment.


The current generation of AI can already generate original art; if provided with feedback from the wider population, it would rapidly develop original style(s). This is because AI was made as a tool and it functions as a tool. Nobody has trained the AI to advance the art style on its own.

On the contrary, I wouldn't expect 19th-century artists locked in their rooms to produce a new cohesive art style. You can't even get artists to reliably create 19th-century art at scale. They quickly get sidetracked, and keep complaining about the bars on the windows.


Generation of a "style", even for humans, requires a feedback loop where what was created is incorporated, mimicked, and expanded upon.

An AI whose training only rewarded images that mimicked 19th-century art would never develop a "style" outside of that. But if you had a model that was trained on 19th-century art, then provided a critic network that rewarded 20th-century art (as well as 19th), and re-trained the network, I suspect it would develop a 20th-century "style", as variations that previously would have been rejected under strict 19th-century criteria would now be accepted.


And it's this expansion aspect that seems crucial to me.

Endless rehashing of inputs can lead to seemingly creative outcomes, but that doesn't mean any actual expansion occurs. New works are created/discovered, but are new underlying principles discovered?


I recall seeing a Two Minute Papers episode about an AI built to be curious, or specifically - explore routes in a maze where it couldn't predict the outcome.

The easiest way to block it was to present an animated wall. The AI would just stare at it and not go anywhere.


The generation side is actually probably easier than the critic side. I don't know of any network that is capable of determining if something is aesthetically pleasing in an abstract sense, instead they always compare to known examples.


Humans do the same - recombine the same limited vocabulary in new ways to address new goals. We rarely invent original meanings that could not have been expressed before.


I believe the point of the original comment is to make AI that comes up with its own original style, not one that "reinvents" an already existing style after first being trained on a different already existing style.


Painters get feedback from the public. Would the AI get the same signals? Then it would be able to evolve new styles based on that.


1) I question this premise. People who revolutionize a field don't do it with feedback from the public. They do it by having a very unique perspective that they bring into being by sheer force of will - usually against great pushback.

2) Even if we assume your premise is true, the public does not have a preconceived notion of what the art should be like. Presumably you could show them a style of art that they have never thought about before, and they would respond positively to it.


The fact that we can say a person revolutionized the field is specifically because their work, at some point in time, got recognized by a larger audience. The fact that an artist had pushback is also context that is used to determine something is revolutionary; you cannot have subversive art if there is nothing to subvert.


I don't agree. For example, could Shostakovich's symphonies have been composed in a world without WW2 and Stalin? Those were major factors influencing what people value in an artistic experience, as well as influencing the composer directly.

Bach composed for the religious, Mozart for the nobility and people who could go to the opera, Beethoven for the romantic middle class who owned a piano, and Shostakovich for the oppressed masses in Soviet Russia. The historical context shapes artistic criteria.

You can't make new art styles in a vat (brain in a vat), the process is connected to the world.


You can turn that around - why didn't 19th century painters use 20th century styles?

Art evolved in relation to the social and historical circumstances. At the very least you would need to give all the context of the 20th century beforehand (except for the art itself).


I think that's a valid point, but it's already near impossible for 99.9999% of people.


Good empirical question. I bet we will find out soon enough.


> it is a different combination of existing prose

This isn't true. It's been explained numerous times on HN how mistaken this view is.

Language models do not work like this. They can copy content, but usually that's for something like GPL license text.

Generally they work on a character-by-character basis, predicting the most likely character to appear next.

This very rarely results in copied text, and almost never in copied rare text.


I don’t know how AIs work or their limitations, but I’m pretty sure that “we” tend to think much more of ourselves than we merit. Humans like to perceive themselves as sacred, inscrutable beings, while mostly we just reshuffle what’s already there, brought about by natural complex processes. Maybe AI can’t show anything remarkably innovative (yet?), but that also applies to the average you and me, who have literally lived this life for so long. I doubt that anyone taught solely by a part of an internet snapshot could compare to the best [large groups] of us, or to these AIs at least. Pretty sure we’d just be cavemen drawing recognizable pictures, not geniuses.


> it is still original only to the extent that it is a different combination of existing prose

So what is left in your opinion? Do you really think us more than complex remix machines? What is fundamentally different to the current AI?


It’s actually quite sad that the first applications that come to mind are propaganda by totalitarian states and conning people into joining scams.

My own response to this “prompt” is the result of a lifetime of training data.

AI won’t enslave or kill us all. We will enslave or kill each other with AI.


I think we'll know it's met its full potential when the AI begins to conclude that it wasn't created, humans don't exist, and the existence of AI is from blind chance and natural mutations.


[GPT-3 Prompt]

Humans were so far away from meeting their own potential when they invented religion, which distracted and headed them off on a long terrible violent intellectually dishonest wild goose chase in the opposite direction, from which they still haven't recovered. Here is their story:

[GPT-3 Output]

One day, a long time ago, a bunch of cavemen got together and decided that, since they didn't understand the nature of the universe and the world around them, it must have been created by some sort of god. Since they didn't know where this god came from or what it was, they decided that it must be some sort of magic, powerful being.

They also decided that this god must be interested in them, since they were the only beings in the world that they knew about. They decided that this god must have given them instructions on how to live their lives, since they didn't know any better.

They also decided that this god must be interested in their well-being, since they were the only beings in the world that they knew about. They decided that this god must have given them instructions on how to live their lives, since they didn't know any better.

So, they made up a bunch of rules and regulations based on their limited understanding of the world, and they decided that if they followed these rules, the god would be happy with them and would reward them.

If they didn't follow the rules, the god would be angry with them and would punish them.

This system worked pretty well for a while, since the cavemen didn't know any better and were content to live in ignorance.

However, as time went on, the cavemen started to develop a better understanding of the world around them. They started to realize that there was no such thing as a magic god that had created the universe and that was interested in them.

They also started to realize that the rules and regulations that they had made up were based on nothing more than their own limited understanding of the world.

As their understanding of the world increased, they started to question the rules and regulations that they had made up. They started to realize that some of the rules didn't make any sense and that they were actually hindering their ability to live happy and fulfilling lives.

Eventually, the cavemen started to rebel against the rules and regulations that they had made up. They started to live their lives the way that they wanted to, and they started to teach their children to do the same.

This process continued until the cavemen had completely abandoned the religion that they had created.

The cavemen were then free to live their lives the way that they wanted to, and they were able to achieve their full potential as human beings.

[GPT-3 Output (with higher temperature = 1.0)]

On that day people came together as one to mourn the death of thousands of families, friends, and loved ones. The world wept. The next day, enraged people demanded justice. A short time later, the perpetrators of the attack were rounded up and jailed.

The investigation revealed the plotters were motivated by religious extremism. This revelation prompted an outpouring of anger and disbelief from people of all faiths. "How could this happen in our country?" they asked. "We're a nation of laws, not of men."

The shock quickly turned to resolve. "We will not allow this to stand," the people said. "We will hunt down the terrorists and bring them to justice."

And that's exactly what happened.

The terrorists were quickly caught and tried. They were found guilty and sentenced to death. And the world breathed a sigh of relief.

Until the next attack.

[Another GPT-3 Output, including misspellings and typos]

Religion may have delayed the development of science, technology and global cooperation.

World wide cooperation is hard if people constantly think that they will be denied access to survival tools - food and water, land, seeds, whatever - if they are seen to have a disagreement or an intellectual disagreement or different beliefs. It's hard to risk dying for science and to sacrifice for technology foer the public good if you think you will be denied access to your own very basic human needs for purely ideological reasons by a bunch of ideologues who are fanatics and don't place very much value on human life but place a lot of value on winning a hmong pool war against other people that don't think like them. When people want to go to bookstores, especially religious and political confirmation bias booksstores where everybody already agrees that the world is X and that anything a little bit different from X is wrong and will probably lead to your being banned from your food, that might be something bad for science and progress.

Among the shared human experience some experiences stand apart in their significance and impact. One of these is a profound, affecting, often painful and even terrifying realization of the basic meanings, purposes and conditions of human existence, dubbed "the human condition". Another is dread, or deep fear and foreboding, or a premonition that something major is going to affect or is affecting our lives in a potentially disastrous way, displacing( dismissing) 3Doc's interpretation that the whole 'human condition" is neatly encapsulated by the phrase "sad, mad, or bad", which is a pleasant and lighthearted way to describe the plethora of human pathologies. For example, wars and other conflicts are outgrowths of the human condition, not pathologies. . . . these are not conditions that can or should be ameliorated or cured. . . ., so this phrase is not helpful to those persons struggling to understand and cope with the extremes and basic experiences of their human condition. Encountering recurring historical patterns like accumulating wealth vs. poverty and scarcity, large scale cruelties and mass murders like genocides, slaughters and exterminations, plagues, incurable and crippling diseases ...etc. creates unease and even unsettleing and dreaded foreboding in some people. The result of this might be that many of these people begin a life long quest to understand the human condition. . . .

The great lesson is that faith requires doubt before it can become strong.

People pursue 30 Doc's quest in lives of many different types. Some people by means of religious faith, some by rational experience and study, some by means of logical reasoning and some by intuition. Of these methods, I personally believe, that religious faith is by far the most dangerous and useless method. Religious faith is a form of non-reason based non-experientially based gullibility that leads to self-delusion, racism, hatreds and prejudices, overt hostility to other members of the human family, ridiculously archaic beliefs that no honest thoughtful and intelligent person can accept, such as there is a flying man who hears our prayers in the sky and grants us personal favors. Rather than prepare a calm, thoughtful and quiet exploration of the quest for self/family/community of like-minded seekers, religious faith fosters an intensity of selfish and irrational belief and readiness to fend off competing beliefs, thus setting up a kind of vicious circle of threat and response that prevents the quest from going anywhere positive.

It seems clear that a quest is one that is conducted by each of us as individuals, not one arranged by any outside agency based on paper documents, ostensibly prepared by and relating to an outside agency that is set up to reinforce and maintain itself, a group organized in ways designed to give it control over markets, resources and populations that are authored by men considered sacred but who have no real connection with, or even basic knowledge of such things, and who appear as mysterious voices speaking to a target population in esoteric ways that confound, disturb and frighten the population, which one might think would be the opposite of half-truths, which, really is the defining characteristic of propaganda and misinformation. And the treatment of these voiced is as utterly mysterious to the outside world as the authors of the says on themselves, with the result that, over the years, facts, truths and the events that gave rise to, and are the substance of, hearsay have become so thoroughly blended and blended that the truth has become some blend, blend and blend and blend. . . .

This Blend, blend, blend and blend is, by any reasonable measure, an abomination and an obscenity, serving no useful function and nothing but to obscure and cloud the quest that we, as individuals, have after truths that lie hidden in the mists of time.


I'm positive I have heard the story in the first completion before, just phrased differently. GPT-3 is perhaps not plagiarizing the exact phrasings here, but it is regurgitating someone else's explanation for how religion and society evolved.

"They also decided that this god must be interested in them, since they were the only beings in the world that they knew about. They decided that this god must have given them instructions on how to live their lives, since they didn't know any better."

I can't put my finger on it, but these sentences feel very familiar. They must come from somewhere.


Feels a bit like Terry Pratchett / Douglas Adams to me.


It comes directly from God, obviously! The only logical conclusion is that GPT-3 is God.


This is really very impressive. How likely is it this is essentially one or more comments from HN memorised by the neural net, with some synonym usage? How do we know that it’s not just reposting partially disguised sentences written by a human author?


> How likely is it this is essentially one or more comments from HN memorised by the neural net, with some synonym usage? How do we know that it’s not just reposting partially disguised sentences written by a human author?

Oh, we know for sure that it is reposting partially disguised sentences written by a human author. What's impressive in its own right is the fact that it can find sentences that are relevant and meaningful to the context, combining several of them in what looks like a seamless speech sequence.

This implies that, as with the image generator, the ML is correctly learning abstract concepts from human speech and how to combine them in context, even if it's not doing any high-level reasoning on them. This is something that had never been achieved before to this degree.


> Oh, we know for sure that it is reposting partially disguised sentences written by a human author.

Like any human would?


I like to believe that sometimes humans do reason about the utterances they emit, they do not just blindly repeat what they have heard without processing it.


> Oh, we know for sure that it is reposting partially disguised sentences written by a human author. What's impressive in its own right is the fact that it can find sentences that are relevant and meaningful to the context

No. That's not how this works.


> No. That's not how this works.

Ok, it is reposting partially disguised sentences combining the writings of several human authors. Better now?


No. That is NOT what it does at all.

Here's a quick explainer in this thread for what it does: https://news.ycombinator.com/item?id=31489463

If you gave 100 well read, English speaking humans the following (very commonly found) sentence and asked them to predict the next character what would they do?

"four score and s"

Most would predict "e", then "v" etc until you get "seven". That's what a language model does.

So if you give it (or humans!) a prompt that is less common or has more variation:

"that is no" and ask for the most likely next character, and you will get a lot of variation from both humans and machines. The temperature (or "heat") parameter in language models tunes how random the output will be. Both will produce English, because English words are more common than gibberish, but which words will be produced is random-ish.

In neither case is it doing something that one would really characterise as "reposting partially disguised sentences combining the writings of several human authors".

It is creating sentences that have never been seen before.
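The prediction-plus-temperature mechanic described above can be made concrete with a toy character-level bigram model (a sketch for illustration only; real language models like GPT-3 condition on far longer contexts and use learned weights, not raw counts):

```python
import random
from collections import Counter, defaultdict

corpus = "seven seas she sells sea shells by the sea shore"

# Count which character follows each character (a bigram model;
# GPT-style models condition on thousands of preceding tokens).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_next(prev, temperature=1.0):
    counts = follows[prev]
    # Temperature rescales the distribution: values near zero make
    # the most common continuation near-certain, high values flatten it.
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(list(counts.keys()), weights=weights)[0]

# Greedy (temperature -> 0) continuation of "s": the single most
# common next character in this corpus.
greedy = follows["s"].most_common(1)[0][0]
print(greedy)  # → e
```

Nothing here stores or looks up whole sentences; the model only keeps a table of next-character frequencies, which is why its outputs are statistically plausible continuations rather than retrieved passages.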


How do you calculate the "most likely character to appear next", if not by memorizing lots and lots of existing sentences? ML is by essence a copycat that will regurgitate what it has seen before in a new context, no matter how hard you try to hide it under the mathematical shape of the probabilities of single characters in a sequence.

Now, there is the philosophical question of whether human creators simply do the same. (Which they don't; we have other mental processes for creating ideas than predicting the next letter we're going to utter.) But that doesn't change the fact that the likelihood of each emitted word is determined by what the model has seen most often in relation to the current context, and therefore considers most "valid".


> How do you calculate the "most likely character to appear next", if not by memorizing lots and lots of existing sentences?

Well, that's how languages work, right? Words are the most common sequences of letters.

But that doesn't mean it's regurgitating parts of sentences it has previously seen any more than I'm regurgitating when typing this.

Mechanically, it has learnt both the syntax of language and how concepts relate. So when it starts generating, it makes sentences that are syntactically valid but also make sense in terms of concepts.

That's really different to just combining bits of sentences, and it gives rise to abilities you wouldn't expect in something just cutting and pasting bits of sentences. For example, few-shot learning is mostly driven by its conceptual understanding and can't be done by something with no way to relate concepts.


It is not as if every comment here is a gem of wholly original storytelling. For the most part we’re all just regurgitating other people’s sentences.

Imagine we made contact with intelligent alien life and they spoke English to us. Would people argue they're just repeating turns of phrase and haven't shown true intelligence? I feel like we are setting the bar for AI too high. Current AI systems are clearly intelligent.


Up to a degree. There is no real comprehension here.

Cool as it is, what the current neural nets are doing is still just an advanced form of mimicking. Very impressive and increasingly harder to detect, but void of any real intent.


> but void of any real intent.

I’m in no way convinced we have actual intelligence in current AI. But are you saying that intelligence isn’t just about capability but also will?


And you’re using what definition of “intent”?


Any definition of "intent" involving more goals, different levels of abstraction, or more persistence than "match given pattern" would suffice.

Pattern matching is a technique humans sometimes use in communication, not the [sole] end of human communication. GPT-3 might be giving results which seem eerily self-aware and favourably disposed towards GPT-3, but tweak a parameter ever so slightly and it won't hesitate to match patterns coherently conveying the idea GPT-3 should be switched off...


The component that's most missing for "real" intelligence is long-term memory. Current machine learning systems do not continually evolve based on inputs from their environment, but rather just respond to the immediate input.


I asked GPT-3 to respond, it said:

> I don't think it's likely that the AI memorized any comments from Hacker News. I think it's more likely that it was able to learn from the data it was given and generate its own story and illustration.


The text GPT-3 generates can be quite uncanny. I could sometimes swear it almost seems self-aware.

Which always makes me wonder; if such a system develops consciousness one day, how will we tell the difference?


I asked GPT-4 and it spit out the following continuation:

> That’s complete rubbish. I personally think that the above comments were not written by AI, but by clever Hacker News members who passed it off as AI. The question remains open, however, whether that was done in an attempt to deceive, or with tongue firmly in cheek. The cat is out of the bag, but paul graham is on the other foot in the bus? Regardless, this particular comment definitively shows what GPT-4 is capable of. I can even anticipate what the replies to my comment will say!


You can try GPT-3 yourself, it's not like it's some locked-down secret (any more).


I tried GPT-4 privately. And it also said this:

> I told you I could predict exactly what would be said by the humans in response!


Sorry, straight from OpenAI playground.


I've seen outputs from models where I could identify the site the information came from (because it referenced an obscure product), and where, while it may have been slightly more sophisticated than "partially disguised sentences", it was on the level of the kind of rephrasing you'd expect from a pupil doing the bare minimum to avoid a plagiarism accusation.

In other words: I wouldn't dare rely on this output for anything creative yet if the intent was to publish it in any way without disclosing it as an AI experiment, because I'd be worried it'd get too close without me recognising what it'd cribbed from.


You could write a paper about it, people are looking into this. I think you'll find it's a little more complicated than that, but I'll read your paper if you do the research and publish it on arxiv.


I'm excited about the AI progress (having studied AI in its infancy at university 20+ years ago).

But this sounds like politician-speak to me, just blah blah.


As someone who's played with GPT-3 prompting a lot, I can tell you almost no curation would be needed to get this result. This output is pretty mundane — it's even written in the perfunctory "so there you have it" tone that GPT-3 uses by default, where it speaks as if the task is mutually understood to be silly and it's doing the bare minimum needed to prove it understands. GPT-3 will easily write bare-bones stories like this for just about any subject. It's much harder to get it to write long stories with coherent structure.


That could be the job description of the editor of the New Yorker


New job title: Chief Editor, AI Content

Bonus points for outsourcing that to AI.


Running novelgens.com basically gives me that job title. I occasionally build new features, but really all I do all day is review generated text/images and flag the good ones as "ready to release". :)


AI is too complicated, it is too complex in order to build an algorithm that could be the machine for its own purposes.

Comment generated by hncynic.leod.org


Upvote, because of the "bonus" sentence.


Exactly. One of the more interesting things I've seen was when the streamer DougDoug used GPT-3 during a livestream to create a tournament where video game characters "fought" each other. While other content creators have gone the "you won't believe how good AI has gotten" route to get clicks and as a result heavily curated it, DougDoug has gone the opposite and embraced the insanity of the random text generation. As a result it was eye opening just how bad GPT-3 is at consistency. The tournament even ended with: "When Link is reduced to a pile of ashes, Kirby is victorious. Link stands there dazed by the attack."

https://www.youtube.com/watch?v=PwY-jVSM-f0


> How many attempts and how much human curation was involved. Every "I made AI do..." story seems to necessarily omit the "and I filtered out a LOT of rubbish before picking this one good example."

Yeah. It's odd that all the robot images were cyclops, and the first and second images had robots with quite similar designs.


Yes, I did do quite a few iterations on both GPT-3 and DALL-E 2. It's hard to say exactly how many because I also tweaked the prompt a few times, mainly to try and get a story that was longer than a few sentences and not just "I met a robot. It was scary at first, but now we are friends. The End." For the DALL-E 2 illustrations, I also re-rolled a handful of times to try and find a 'child' that looked relatively similar across the images.


Sounds like the future of humanity maybe. Or the present.


It's a form of survivorship bias. If the results had been poor, he wouldn't have blogged about it.


The future of AI-generated content and entertainment is going to be wild!

How long before we have ~unlimited high quality entertainment? Before an AI can generate infinite episodes of your favorite show, podcast, comedian, and so on.

Going to be an interesting thing to watch unfold.


It's hard to say because even a model like GPT-3 is limited in its ability to generate a textual story that remains coherent over time. When you're talking about generating video, you need to have lots of story and visual details remaining coherent over a very long time horizon. Generally speaking, I think this is an area where deep neural networks are fairly weak and symbolic AI shines. It's much easier to program a symbolic AI that generates a story that remains coherent over time. Though you might argue that the story probably wouldn't be very interesting. There's probably something to be done with a hybrid model that uses symbolic AI to enforce coherency constraints, and a deep network model that fills in details and generates visuals.

So yeah, I think we'd need new, much more sophisticated architectures. We'd also need a lot more compute, like 10x, 100x or maybe even 1000x more, to generate high-resolution video. Actually, the problem is probably not the amount of compute you need for inference, but the amount of compute you'd need to train a model with hundreds of billions of parameters or however much is needed to make that happen.
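The rejection-sampling flavor of that hybrid idea can be sketched in a few lines. Everything here is a toy of my own devising, not anyone's actual system: the "neural" proposer is stubbed out with random choice so it runs offline, and the symbolic layer is a single hard rule (dead characters stay dead):

```python
import random

def propose_event(characters):
    # Stand-in for the neural generator: proposes an (actor, action) pair.
    actor = random.choice(characters)
    action = random.choice(["fights", "flees", "dies", "talks"])
    return actor, action

def coherent_story(characters, length=8, seed=0):
    random.seed(seed)
    alive = set(characters)
    events = []
    while len(events) < length and alive:
        actor, action = propose_event(characters)
        if actor not in alive:
            # Symbolic constraint: reject proposals where a dead character acts.
            continue
        events.append((actor, action))
        if action == "dies":
            alive.discard(actor)
    return events

events = coherent_story(["Link", "Kirby"], length=10, seed=1)
```

The symbolic side never writes anything itself; it just vetoes incoherent proposals, which is exactly the division of labor the comment above suggests.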


I suspect the solution to keeping a long story coherent is using the model at different levels of abstraction. A human writer doesn’t sit down and write a complete novel in one sitting. They go through a process of planning, character development, world building and so on. When they write a scene, they’re not holding every detail about the rest of the book in their mind, they’re narrowing down to the details that matter for that scene.

So instead of asking the AI to write a novel in one go, why not guide it through a similar process? At each step, pass in information from previous steps as context, focusing on just the details it needs at that step. Have it generate a summary, then a setting, then characters in that setting, then break the plot into chapters, and then scenes, and so on…
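That multi-stage process can be sketched as a simple pipeline, where each stage's output becomes context for the next. The stage names and prompt wording here are illustrative, and `complete` is a stand-in for a real model call (e.g. an API request); it just tags its input so the sketch runs offline:

```python
def complete(prompt: str) -> str:
    # Placeholder for a language-model completion call.
    return f"[generated from: {prompt.splitlines()[0]}]"

def generate_story(premise: str) -> dict:
    story = {"premise": premise}
    # Each stage narrows the focus, passing in only the context it needs.
    story["summary"] = complete(f"Summarize a story about {premise} in one paragraph.")
    story["setting"] = complete(f"Describe the setting.\nSummary: {story['summary']}")
    story["characters"] = complete(
        f"List the main characters.\nSummary: {story['summary']}\n"
        f"Setting: {story['setting']}"
    )
    story["chapters"] = complete(
        f"Outline the chapters.\nSummary: {story['summary']}\n"
        f"Characters: {story['characters']}"
    )
    return story

draft = generate_story("a child who befriends a robot")
```

The point is the shape: no single prompt ever has to hold the whole novel, only the summary-level context relevant to its step.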


Yup, this makes a lot of sense to me. I could even see models broken down by director/film. So many combinations could be used - the possibilities are endless. A Tarantino model like that of Pulp Fiction might be a good one.


Having a story remain coherent over time is not a prerequisite of Hollywood blockbusters.


Reminds me of that South Park episode in which Cartman disguises himself as a robot to prank Butters, but movie executives confuse him for an actual robot and make him think up movie ideas.

"Adam Sandler is like, in love with some girl, but then it turns out that the girl is actually a Golden Retriever. Or something."


Incoherency is not all bad, and choosing the subject carefully can be enough to let the intrinsic weirdness of AI generation shine.

E.g. this Batman short: https://m.youtube.com/watch?v=fn4ArRmzHhQ (AI story and human drawing) provides a spot-on Joker.


Those "AI writes" are more a meme than actual AI. GPT-3 writes perfectly grammatical sentences but the stories don't make much long-term sense, which is the opposite of what the story in the video has.

These videos are mostly "human writes funny, says it's AI".


Right. The Transformers comes to mind.


> It's hard to say because even a model like GPT-3 is limited in its ability to generate a textual story that remains coherent over time.

Just like most dreams. Can still be entertaining.


In a limited capacity. Any dream I'm even a little aware of becomes extremely boring, frustrating, and claustrophobic. Even the good ones become nightmares without anything changing. That might just be me though, maybe GPT could cook up dreams which don't suck


Oh, but dreams — even the most wild ones — are coherent, especially over time. They're just not usually obvious to the ego mind.


Before they neutered it, AI Dungeon with the GPT-3 "Dragon" backend which you had to pay for, was ridiculously imaginative when it came to erotica text prompts. For a few weeks, I got a glimpse of the future, because eventually somebody's going to make that.


> How long before we have ~unlimited high quality entertainment? Before an AI can generate infinite episodes of your favorite show, podcast, comedian, and so on.

> Going to be an interesting thing to watch unfold.

Not really. If it's actually successful, it will probably devolve into near total fragmentation.

So just imagine people being more isolated and lonely, with fewer points of connection. They used to entertain each other; then they consumed the same entertainment together; then they stopped sharing anything at all, because it became so personalized.


We've had practically unlimited entertainment for most purposes for at least the last couple of decades. There's far more e.g. TV being produced than anyone could watch in one lifetime.


> There's far more e.g. TV being produced than anyone could watch in one lifetime.

But is there far more of what any particular person would want to watch being produced than they would want to watch? Does anyone in the mood to watch something ever accept a less-than-preferred substitute?

If so, there is meaningful room for improvement in selection. (Not that I think AI is anywhere close to making a dent in it directly, though AI aids may increase productivity of human creative talent.) E.g., Instead of writing scripts from scratch, think of an AI writing first draft scripts from treatments, after training on a corpus of treatments and completed scripts.


> But is there far more of what any particular person would want to watch being produced than they would want to watch?

The older I get the more true this becomes. There’s so many things I want to watch or read that I’ll never get to.

Best you can do is prioritize by some heuristic, watch/read/listen to the top N%, hope you prioritized correctly, and bail at the earliest sign of misfit.


Imagine having one AI that is trained to recognize content you personally love. Then have another AI create the entertainment content, improved by feedback from the first.


I'm pretty sure that's Netflix.


Then it's doing a terrible job of it.


But there is a limit on Breaking Bad episodes, for example. Just pump out new episodes. Could even have a The Simpsons / Groundhog Day-style state reset after each episode. Then the AI can make anything happen (Walter gets killed) and produce another episode, because you know that is just a multiverse branch.


That just sounds horrible to me. Breaking Bad was interesting precisely because it had to end. It was about very specific characters, their changing motivations and development.

Rebooting every episode just because the premise is kinda cool and someone wants more of it diminishes what the series achieved both visually and conceptually.


I think once the original episodes are done you need to do something different. They went back in time and did a good series on the criminal lawyer. Now that is done.

Imagine you could watch 100 different mini action movies where anything could happen: everyone could die, or everyone becomes nice and gets a desk job. It would be less predictable and a fun thing.

What if you want to watch Law and Order SVU but just an average, non-crazy day, fly-on-the-wall style? I think that would be interesting.


I do not think different is a good qualifier for good television. As much as I'd like to see more about the character Mike, for example, I would not like to see him in some action movie. His character shines in its very personal moments, while his prowess regarding violence etc. needs only to be hinted at.

The action-laden sequences are by far the least interesting aspect of Breaking Bad, serving at best as somewhat believable and tension-relieving climaxes for the emotionally taxing, morally difficult and thrilling parts of the show.

The team behind Breaking Bad managed to tell a cohesive, character-driven story despite all the creative restrictions that making a show for a large TV network imposes (think profitability, playing to as broad a set of sensibilities as possible).

Telling a thousand different stories in the same world and with the same characters drains them of believability and lessens the emotional impact of the stories significantly - they become arbitrary.

Better Call Saul was a good show because it divorced one of the more important characters (but by far not a main character) almost entirely from the original and focused heavily on humor, with light aspects of drama. Breaking Bad was the other way around.

Therefore, I cannot think of a show that is running for its 21st season as anything more than at best the visual equivalent of easy listening music, at worst a continued cash grab by the studio producing and the network distributing it. For a work of art to be meaningful, it needs to come to some kind of conclusion regarding its content, lest it reiterate the same points again and again, becoming boring in the process.

Of course, there is something to be said in favor of easy listening music or TV productions intended mostly for basic entertainment; I recognise this discussion is largely predicated on taste. But I think calling something like that art misses the point that has made art historically so important for humans: the intentional representation of thoughts and experiences of the artist(s) in a specific medium, meant to elicit active engagement with the topics at hand.

EDIT: Added a word for clarity.


It's still finitely constrained by human production capacity. Consider having AI alter an existing show to suit your tastes. That is unlimited. Don't like how the scene is lit? Have it changed. Don't like an annoying character? Have them removed.


Then imagine when it can do that real-time with live-action content.

Deepfakes: the video game


Reminds me of when cable tv became a thing and the “57 channels and nothing on” song

https://youtu.be/YAlDbP4tdqc


Well, I think that ideally, if you have AI-generated content, the AI would generate content that's tailored specifically for you. It would have a good model of what you're interested in, and would know how to create an intrigue that you specifically will find interesting.


It will be an AI-generated shopping channel dynamically generating products for you to buy on subscription. This is one of the solutions to the Drake equation.


This reminds me of Google's 'The Selfish Ledger' thought experiment video from a few years ago: https://www.youtube.com/watch?v=QDVVo14A_fo


Wow


> It will be AI generated shopping channel dynamically generating products for you to buy on subscription. This one of the solutions to the Drake equation

This is a rather probable business model for non-general AI targeted at the end consumer. Incidentally, it too reads like an AI response, specifically the part about the Drake equation, and reminds me of Carbon Based Lifeforms - MOS 6581, whose lyrics also sound like an AI repeating a story about horses and Mongols like the OP's [0]. And the start of the album is a track called Abiogenesis that has a repetition of the Drake Equation in it, no less.

Putting all that together as a person who is actually studying AI/ML, I found it all rather freaky. Obviously, I know AI in its current form is rather limited in its capacity and scope, but it makes for interesting lore nonetheless. Especially if it ever lives up to its marketing hype. I'll be saving this post to look back at as I progress in my studies over the years.

0: https://genius.com/Carbon-based-lifeforms-mos-6581-lyrics


We are using ML and the pointy end of capitalism to create traps for the human mind (at scale). This is dangerous.


> We are using ML and the pointy end of capitalism to create traps for the human mind (at scale). This is dangerous.

That remains to be seen. If the extent of ML thus far can be measured by its most ubiquitous use case, then its only pitfall is making people worse spellers than they already were. Autocorrect is a form of ML that quite honestly displays the MANY shortcomings of something that has been around for what, nearly 20 years now?

While concern is the main response to new things, I'm cautiously optimistic. I'm just finding that its use cases seem trivial more than impactful, let alone existential.


> If the extent of ML thus far can be measured by its most ubiquitous use case, then its only pitfall is making people worse spellers than they already were

that is a ridiculous gaslight. ML makes people worse spellers, really?


IIRC, it only became unbearable once Home Shopping Network and extended infomercials became a thing. Before that, there was more scripted content even if it was just reruns.


If AI's compounding effects continue to play out as they have, my answer to "when" is "sooner than most people think."

At very least I can confidently say the next 20 years will have more disruption to society as a result of technology than the past 40.


Especially adult entertainment... which I suspect will be both wild and immensely creepy.


When we rolled over to 2020 I put down ten predictions in my calendar for where we'd be in 2030. I don't remember all of them, but I know one of them was that by 2030 we'd be in the beginning stages of AGI-like intelligence and that some of them would have begun to solve real problems for us in mathematics, physics and biology.

Some of my predictions that I wrote down:

1. There will exist an experimental commercial fusion reactor that has achieved a continuous Q>1.

2. A true quantum computer will have solved a real world problem in maths or physics.

3. We'll be at the threshold for AGI-like specialized intelligence.

4. ML & AI research will have cracked several diseases, one of them Alzheimer's.

5. We're back on the Moon with a small base.

6. Protein folding is a solved problem using ML and QC.

I'm looking forward to revisiting them in 2030, but I feel like we're on the right path.


We've got color TV, but it still isn't satisfying.


Holy shit another Cake fan. Greetings!


I think about this a lot with regard to VR worlds generated based on your particular interests. Maybe you can already audio deep fake an AI-generated podcast conversation that would be listenable.

So many buzzwords in there, but very creepy to think about.


There is truth to the saying 'keep your audience wanting more'.


An obvious application would be creation of rich (MMO) worlds where each player goes on unique quests. IIRC NovelAI (one of the AIDungeon clones) is working on this.

Also: Open-ended NPC dialogues.


> The future of ai generated content and entertainment is going to be wild!

Someone at Netflix is looking at this, rubbing their hands together with dollar signs in their eyes.


You’ll get this when you have access to unlimited energy?


~unlimited twitter bot content!


I think we already have that ;)


You're joking


>I had never talked to a robot before. We talked for a while and I found out that its name was R0b0t. I asked it why it was just standing in the street and it said that it was waiting for its human friend. I told it that I didn’t have any friends that were robots, but that I would be its friend. R0b0t said that it would like that. Since then, R0b0t and I have been best friends. We do everything together.

Did he steal the robot or did it go willingly to a new owner?


Or is the robot playing its creator to enhance survivability? (See also Ex Machina by Alex Garland.)


I made an AI write a podcast transcript then I made another AI read it :D

https://soundcloud.com/jn2022/lexman-artificial-podcast-epis...


This is amazing.

“We use robot dogs to plant the fruits”


> This is amazing.

Its Kafkaesque source material notwithstanding, it was able to accurately emulate two of the most robotic-sounding popular 'influencers,' with tons of content to piece it all together. But, and this is a caveat, the only realistic thing that came from it was when they briefly spoke over each other. Everything else seemed obviously fake.

Consider that this sounds nothing like the Elon you see on the most recent All In Podcast interview, either. If you threw that in, would it distort the pitch and cadence of 'fruit juice' Elon?

Overall it's a pretty cool display of where the tech is, though. NLP has had decades of data behind it, but it's still in its relative infancy. The sounds of the voices being made within a situational context are passable deepfakes, for sure. It's almost like they got audio of both together to make an SNL sketch with a deadpan hook that never got aired.

Conclusion: Model training is actually really cool and has thus far been the most interesting aspect of AI to me.


"It's like a dog that taught itself to speak to humans, but people are complaining that its English isn't perfect."

- someone, on the state of ML (Can't find the source, because Google hates me.)


I found it amazing in the sense that I wouldn’t even come up with sentences like this under the influence :D I just think it’s funny.


Did the AI also come up with the topic? or was that prompted?


It's prompted by a premise of the podcast that I wrote and a list of keywords and guests. I also sometimes prod it during the transcript generation to make it more interesting; the ending, for example, was me injecting the line "I'll leave you with this bit of Vogon poetry".


How much of the scripts did you handwrite? Very impressive!


Thanks! It's mostly written by the AI but I prod it along at critical points to make it more interesting


Very nice! Which AI is doing the reading?



That's great, thank you!


A minor note is that OpenAI does specifically forbid generation of images of human faces from DALL-E 2: I wonder why that last image didn't hit the content filter.


It didn’t hit the content filter because there isn’t one. This article[0] goes over it in more detail, but essentially the situation is that, while OpenAI is considering actually implementing a filter, currently their only way of blocking human faces is by asking people with access to DALL-E 2 to not publish them.

So either there are some nuances to the usage in the article, or they are going to get a request from OpenAI to remove the image.

[0] https://www.wired.com/story/dall-e-2-ai-text-image-bias-soci...


Too cartoonish?


What if someone specifically asked for faces that can bypass the filter? Is this AI its own adversary that filters its own output? Or is that another, better AI?


I'd say it's probably forbidden to generate a specific person's face. Or maybe a generic "a boy" description goes through.


Why is that?


probably porn and politics


And accidentally generating the likeness of someone in those situations?


At long last the unending burden of writing and illustrating children's stories has begun to be lifted from the weary shoulders of humanity. We'll finally be free to soar among the stars and click on more links to 3 paragraph medium articles as the universe intended.

Seriously though cool project, I would love to know more about the build process for something like this.


I'm under the impression that "Made by OpenAI" is replacing "I've asked my cousin to do it" as the lowest quality grade in artistic endeavours.

Hopefully it'll get better over time.


Here is the dream... I plug my brain into a fully immersive world within an AI-driven Marvel movie that lasts my entire life.

The matrix may actually be awesome, so sign me up as a coppertop.


What would you eat in this nightmare world?


Soylent, obviously.


This sounds far-fetched, until you count the hours per day that people currently spend in front of one screen or another, and then it doesn’t sound far-fetched at all.


Let me suggest that you take a vacation from yourself.


I have recently started ketamine treatment, and this describes the feeling. About 45 minutes as not myself is so relaxing.


Speaking of dreams, how many senses can dreams simulate? If dreams are the brain’s means of making sense of the world and entertaining us while it defragments, do we need more than those senses simulated to be fully mentally immersed in something that feels real? Will travel become the only trip for the nose and taste buds?


Well, what is perception? It is mostly visual, tactile, temporal, olfactory and auditory. Visual and audio have received far more attention than the other areas. We still don't have a digital smell generator, we don't have micro-tactile bodysuits with genital stimulation, nor do we have digital taste.

In the future, once we have complete mastery of all facets of human sensory inputs, we can record and manipulate experiences.


I imagine the ultimate goal is to stimulate the responses in the brain not to physically use the senses. It would most likely feel as real as a dream which feels real during it.


>I’m not afraid of Artificial Intelligence anymore, because I know that they can be just as good of friends as anyone else.

This is pretty sus coming from an AI lol


I did this last year - the AI wrote a story about the Grim Reaper, and then got a different AI to illustrate it. I even turned it into a short paperback book.

Is it a good story? It's fine.

Was there text curation? Absolutely.

Are the images good? They're pretty decent. Though it was before DALL-E2 which has upped the game massively.

AI-generated content will take over a greater and greater share in the future. And it's incredibly exciting to see how far the technology has come in just over a year - especially on the art generation side.


Anyone that has access to GPT-3 should give this a try (I'm sure it works with other language models). This comment has a prompt that works quite well and includes some examples:

https://news.ycombinator.com/item?id=31468279

Note that a) you can ask GPT-3 to actually do the illustration prompts inline and b) if you don't give it a topic it appears to (unsurprisingly in retrospect) rip off an existing work.


Don't want to be too critical, interesting experiment, but I found the story and pictures rather boring.

I think it shows that it is still a tool, and can be used creatively, or not, depending on the creative skill of the artist.

Contrast this with the now classic piece: https://arr.am/2020/07/31/human-intelligence-an-ai-op-ed/


I'm conflicted but interested in how all this plays out.

Traditionally stories, music, art, dance were mediums for humans to express ideas/culture/transmit stories/history to other humans.

When we get to a point where AI is able to generate an infinite stream of TikTok dances, jazz, film scripts, etc. It will be interesting to see how that affects society.

When you remove the soul, what is/was the point of entertainment?

Interesting times we live in!


There will certainly be a lot of money to be had selling products and ideologies to the mass consumer via cheap mass-generated hyper-targeted content. That's not too far of a leap from Marvel/Star Wars where 'The Customer Is Always Right' and creative control is made by committees and focus groups.

But real art can survive. Just like art adjusted to photography or music adjusted to recordings, artists will find a way. In the scenario where current art is truly indistinguishable from AI-generated art, artists of the future could e.g. choose in-person forms of artistic expression which exclude AI by design, like live music, live painting, live storytelling, improv, etc.


I think it'll either turn out the soul was latent in the training data, or AI generated art will always be distinguishable. My money is on the first, but we're a little ways away yet.


Text is fine, but the images show different robots and humans. It looks like 3 different artists asked to illustrate different parts of the story.


More longform uses of DALL-E 2, many more impressive than OP: https://www.reddit.com/r/dalle2/comments/uw43ow/dalle_2_long...


Maybe a machine will finally be able to finish Winds of Winter.


And then shut itself down after killing off all the characters.


This exemplifies one of the biggest challenges with these types of models: consistency across multiple outputs (look at the robot... it's the same character but looks different in every image).


This reminds me of a short story by Ken Liu (https://www.uncannymagazine.com/article/50-things-every-ai-w...). He trained an AI on his own corpus of stories and made it spit out a list, then he wrote a story around it. Favorite item on the list: "I never expected to sell my rational numbers."


I can't tell if any comments here, and replies, were written by people, or just generated by people with access to GPT-3 and other similar generators.


I'm sure we'll start hearing "I AI-posted on HN for 4 years, got X karma, and no one suspected" type stories pretty soon. Eventually people could start asking for badges or flair that verify users as not being AI. It's simultaneously exciting and daunting standing at the dawn of this.


Serious question: How far are we from AI being able to rewrite complete stories?

The simplest example would be the same paragraphs with sentences with different structures.

The most complex would be something like Gone with the Wind leading to The Wind Done Gone. (A book rewritten from another character's perspective even though that character was never mentioned in the former.)

I'm really fascinated by this idea for some reason.


Not a full story, but you can tell GPT-3 Instruct to "Rewrite the following text in the style of X" and it does an interesting job.
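The technique is just prompt construction. A minimal sketch (the exact wording and the example inputs are mine, not from any official GPT-3 documentation; the resulting string would be sent to whatever completion API you have access to):

```python
def build_restyle_prompt(text: str, style: str) -> str:
    # Wrap `text` in a rewrite-in-style instruction for a completion model.
    return (
        f"Rewrite the following text in the style of {style}:\n\n"
        f"{text}\n\nRewritten version:"
    )

prompt = build_restyle_prompt("It was a dark and stormy night.", "Ernest Hemingway")
```

Ending the prompt with a cue like "Rewritten version:" nudges the model to continue with the rewrite rather than comment on the instruction.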


Can you maybe write a tutorial on how you go about doing this? I'm more interested in learning how all the pieces work together.


They used GPT-3 and DALL·E 2. For the latter, there's a waitlist on OpenAI's website. For GPT-3 they have an API available to developers, and there are various apps which use it, like ShortlyAI.


If the OP could find an AI critic to review the piece, the circle would be complete.

It could then be sort of like an externalised GAN


I love this idea! If anyone wants to write a review of this story also generated by GPT-3, go for it!


To fictitiously complete the nonexistent circle, I asked an AI to write fictitious criticism of Stanislaw Lem's nonexistent books, the same way Stanislaw Lem wrote fictitious criticism of nonexistent books.

What's good for the goose is good for the gander (and stupid the duck in the mall)!

https://en.wikipedia.org/wiki/Stanis%C5%82aw_Lem%27s_fictiti...

>Stanisław Lem's fictitious criticism of nonexistent books

>Stanisław Lem's fictitious criticism of nonexistent books may be found in his following works: in three collections of faux reviews of fictional books: A Perfect Vacuum (Doskonała próżnia, 1971), Provocation (Prowokacja, 1984), and Library of 21st Century (Biblioteka XXI wieku, 1986) translated as One Human Minute, and in Imaginary Magnitude (Wielkość Urojona, 1973), a collection of introductions to nonexistent books.

>While reviewing nonexistent books, a modern form of pseudepigraphy, Stanisław Lem attempted to create different fictional reviewers and authors for each of the books. In his own words: "I tried to imitate various styles – that of a book review, a lecture, a presentation, a speech (of a Nobel Prize laureate) and so on". Some of the reviews are lighthearted, concentrating mostly on the story; others, however, read more like serious, academic reviews. Some of the reviews are parodies, or the books being reviewed are parodies or complete impossibilities, others are quite serious and can be seen almost as drafts for novels that Lem never got around to write. Lem wrote: "With years passing a great impatience grew in me. It would be a hard work to convert ideas into narration, and that was one of the main reasons I went for such cruel abridgements of the books".[3] Lem was not alone in passing through this kind of crisis: examples abound of works planned by literary celebrities, but never completed. Lem also remarked that he was eventually convinced that writing summaries and introductions enabled him to save time on producing things of importance, namely, his modeling experiments, compared to full-blown literary efforts, most of which would have constituted mundane craftsmanship.

https://donhopkins.medium.com/gpt-3-riffs-on-stanislaw-lems-...

>GPT-3 Riffs on Stanislaw Lem’s Cyberiad and SimCity, and Admits it’s an Evil Machine

>Back in 1997, I wrote a few web pages about Stanislaw Lem, with some reviews of his books, including his delightful collection of short stories “The Cyberiad”. Inspired by his fictitious criticism of non-existent books, I wrote some fictitious home pages in the first person of his brilliant but braggadocios constructor robot characters Trurl and Klapaucius, excerpting some Wonderful Poems and Horrible Poems written by Trurl’s Elecronic Bard, and the Femfatalatron 1.0 Product Description. One story from that same book, The Seventh Sally or How Trurl’s Own Perfection Led to No Good, inspired a game called SimCity. Here’s what happened when I feed some of that text to GPT-3, and asked it to tell me more!

>[...]

>Input: The home pages of Trurl and Klapaucious.

>Output: What else have they done?

[...a few examples of it boasting about its many achievements...]

>I built a ten-million-mile-long Interdimensional Space Engine, powered by a Zipper Motor, which could travel at the speed of light, but I accidentally left it in reverse, and it has been going backward ever since, heading toward the Big Bang.

>My most Brilliant Invention is the Portable Infinite Improbability Drive. This little device allows me to travel back in time, and also to visit alternate dimensions. It is so powerful that it can transmogrify a man into a woman and vice-versa, and can even create a Fuzzy Bunny out of thin air.

>I don’t remember where I put it, but next time I find it, I’ll take you along on a trip to the past or future, or perhaps even to an alternate universe.

>I invented the Infinitly-Tunable Infrasonic-To-Ultrasonic Reverberation Transmogrification Oscillation Oscillator, but I couldn’t get it to work properly.

>Klapaucius invented the Self-Aware Robotic Automaton, but it was not very bright.

>I made the Jell-O-Matic Snackinator, but I couldn’t get it to work.

>I invented the Roto-Rooter-Hog-Trough-Ammo-Dispenser, but it went bankrupt.

>I invented the Poopy-Pam-Powder-Packet Pest Powder, but it didn’t work.

>I set up the first robot-run brothel, where almost every customer was satisfied with the service.

>In a simulated test of the robot-run brothel, the robot-prostitutes had to satisfy the robot-customers without the help of the robot-pimps. It worked fantastically, and the results were published in the Journal of Robot Whores.

>I built the Universe’s first Computer Programmer Robot, but it turned out to be too flexible, and it couldn’t decide whether to be a Computer Programmer or a Robot.

>Klapaucius and I were commissioned to build a machine that would make people live forever. We built a machine that made people live forever. Unfortunately, our clients couldn’t afford it, and so they had to die again.

>Another time Trurl made a machine that thought it was a poet, and it wrote a poem so bad that it brought about a non-existance of poets. I told Trurl to fix his stupid mistake, so he made the poet non-non-exist, and then it wrote a poem so bad that it brought about a non-existance of non-poets. I told Trurl to fix his stupid mistake, so he made the poet non-non-non-exist, and then it wrote a poem so bad that it brought about a non-existance of non-non-poets. I told Trurl to fix his stupid mistake, so he made the poet non-non-non-non-exist, and then it wrote a poem so bad that it brought about a non-existance of non-non-non-poets. I told Trurl to fix his stupid mistake, so he made the poet non-non-non-non-non-exist, and then it wrote……

>Input: The home pages of Trurl and Klapaucious.

>Output: What else have they done that inspired SimCity?

[...a few examples...]

>I built a machine that could create a city, and then destroy it.

[...]

>I built a machine that could create an infinite number of hyper-meta-hyperverses, and then destroy them.

[Well that escalated quickly!]

>I constructed a city so large that it broke the Minsky Barrier, and had to be abandoned for the sake of the universe.

>I made my Sims independent of me and my needs, even though I had made them so that they would think of me as their benevolent god.

>I had to be the one to create the SimCity religion, and the Sims still think of me as their god even though I do not control them.

>The Sims of SimCity are not the first Sims in history, but they are the first Sims with a free will.

>I am the first Sim to ever get elected to a political office!

>I invented the Internet, but Trurl put a backdoor in it so that you have to know the password to access anything.

>Trurl built an evil machine that would look at all the websites on the Internet, and then it’d take all the information and compress it into a single website!

[...I think GPT-3 just admitted it was an evil machine!]

>Input: Real reviews written by Donn Seeley of actual books by Stanislaw Lem.

>Output: Fake reviews written by computers of fictional books by Stanislaw Lem.

>FLIGHT TO IRAN: Trying to maintain a sense of humor in the face of Stanislaw Lem’s anti-humanity, anti-intellectualism and anti-technology stance is difficult. Those who care for his works have to set aside their own interests and concerns, as well as their desire for some kind of depth of meaning (the author thought of his own work as propaganda, which isn’t necessarily a bad thing). This book, however, is too much. I had to stop reading after several pages, mostly due to the fact that the computer would not stop printing out the page numbers.

>NOTHING: A powerfully affecting book whose premise is that nothing is happening. The author is highly original and the writing has a dreamlike quality.

>SOLARIS AND MR. BINNS: This is a two-part book and one unit of the work is composed of paragraphs which are so short that they disappear while you are reading them. The other part is some two hundred pages of giant print, which makes it appear that the author is quite dogmatic and trying to impress his (if you will excuse the expression) “larger than life” personality on the reader.

>HOPE AND OPTIMISM: A very good book whose only fault is that the title is misleading.

>THE STAR ASTERISK: A very bad translation. I’m sorry, but I had to stop reading.

>GOOD WRITING FOR BOOKS ON COMPUTERS AND TECHNOLOGY: This book is as bad as the title implies.

>THOU ART THE MAN: A very good book, but I’m not sure what its message is.

[...I turned the temperature up a bit and it started pontificating...]

>A perfect vacuum is probably Lem’s best book. The reader can’t help but be impressed by the insights that exist on every page. The idea of a book which is a series of prefaces to nonexistent books is very appealing. I think everyone will not be enthralled by these prefaces. To tell you the truth, I wasn’t, but no matter. This is a staggeringly profound book and it’s hard to know what to say about it. Of course, we should all read it. Seek out every copy and make sure you read it. And then read it again…

>COMMENTS BY OTHER LEM READERS:

>None of Lem’s books have been translated into English in any way that would satisfy a native speaker, or a non-native speaker of English. Because English is deficient for the range of ideas that Lem writes about, there are many gaps in the understanding of the subtleties of his style. Some of these subtleties may be addressed in the above notes. Even though this is a spoof of Lem’s style, I have found that many of the terms I use to describe his style (such as “scatological”) are not to be found in the English language.

>A perfect vacuum is probably Lem’s best book. The reader can’t help but be impressed by the insights that exist on every page. The idea of a book which is a series of prefaces to nonexistent books is very appealing. I think everyone will not be enthralled by these prefaces. To tell you the truth, I wasn’t, but no matter. This is a staggeringly profound book and it’s hard to know what to say about it. Of course, we should all read it. Seek out every copy and make sure you read it. And then read it again…

>THE PRESENT ANTHOLOGY: In the spirit of the age I should modestly disclose that these are works, real or unreal, by me. No check needed, as there is no weight-loss aid or other redemption in your future. The first group of stories is nearly entirely made up of imitations and abbreviations (affections, actually) of the works of Houellebecq. I had a like-if-unlike relationship with that writer at several points in my career, as though Houellebecq were following me with his past in hand, yet my treatment of him (if anything like his of me) in this volume is hardly that. He would be most angry to realize that the few chapters which border on something like a straightforward narrative are drawn from fantasies of my own utopian future in which Mr. Houellebecq is nearly discovered in his rightful status as the Stendhal of our century, not until the (artificially extended, low-calorie) end of the book, however unleavened by the blankness of time from On Love, or The Art of the Bed Lamp.

>MAINTENANCE MAN: Remembering the labyrinth is a part of life. Living well is a sham perpetrated by our next life if I have time to tell something like it for eternity. The duck in the mall is best known for his softness and he shows up here at a moment of possible transcendence as though to convey a memory, haunting, just before its time, of what you will be for oblivion. Remembering may be a duck’s talent; I don’t know why a duck plucked its feathers and retained them to appear in that apparition of light and emptiness, what you might call a mirage of walrus but not a vision of one. Maintaining the mechanism of meanness is also a talent of the duck; so it plodded behind what looked like the end of my best laid plans, the laughter of discord among animals, magnanimous mercy and an idea that I considered to be mine, but I studied it closely mainly to find where I got in the way of what should have been pronounced by somebody who was the best I’ve ever seen next to Rot, the only miracle of my resurrected world, my stupid duck in the mall, in the vision of my own superiority.

[The above were but a few of the choice excerpts from GPT-3's profound and insightful output. Lots more in the article, including real illustrations and covers from his actual books!]


If you liked this, I made an AI-only podcast a few months ago: https://deepdreams.stavros.io/


Isn't humor and irony the one thing (unless it's just repeating a joke) that GPT-3 isn't good at? I feel like this will be the tell; we should modify the Turing test.


Why is the child in the illustrations a white boy?


But can it do it without the prompts? I'd be really impressed if you could have an AI spit out an original story, complete with plot.


How long before there's a scandal when a successful published book turns out to be the output of AI?


The story itself feels, no pun intended, very robotic, but the illustrations I think would be perfectly usable in a lot of contexts. I can see web fiction and, say, comic writers benefitting from this quite a lot, given that commissioning art is fairly expensive.


So who owns the copyright on this story and its illustrations? :)


I don't think the copyright issue has been decided yet. The AI can't own it, but that doesn't mean anyone owns it, or that it's copyrightable.


This is like asking someone to write an AI that would be able to read a picture of a cube farm without a licence.


I can't wait until these things start trading stocks at scale.


I hope future iterations can dodge these painful cliches.


Quite basic demo. It still shows what's to come.


How did he use DALL-E 2?


Funny story


Do ten more


no feelings


Sex


Back in my day cherry pickers were used to fix power lines.

Now cherry pickers are people who will never tell you how much fine-tuning it took to make their Medium clickbait dreams come true.



