Hacker News
This AI is bad at drawing but will try anyways (aiweirdness.com)
313 points by corysama on Aug 21, 2018 | 122 comments



Every time I look at AI produced pictures, I feel discomfort and uneasy. These pictures are like a scene from a perfect nightmare.


I feel discomfort reading “I feel discomfort and uneasy” because discomfort is a noun and uneasy is an adjective.


You can both "feel discomfort" and "feel uneasy", so I think it works.


No. You see:

    void feel(x: Noun)
    void feel(x: Adj)
"discomfort and uneasy" is actually an instamce of the superclass of Noun and Adj, but there is no overload for that so it doesn't compile.


With better heuristics and fuzzy logic, our IDEs should detect and handle such usage without hassle.


I suspect that sort of autocorrection would be AI-complete.


I agree with this fellow human.


You must be a robot. If you'd written your post in capitals though, I'd totally have fallen for it.




Doesn't your mind expand that to "I feel discomfort and become uneasy", or is mine just broken?


My mind expanded it to "I feel discomfort and I feel uneasy," since both work. You can feel nouns and you can feel adjectives, just in a different sense. It sounds weird, because we like to construct our logic with parallelism [1], and in this case we're using different definitions of the word, "feel". This makes pulling "feel" out front in an associative manner less correct.

I feel[a] hunger, and I feel[b] sad. I feel[a or b, but not both] hunger and sad.

[a] transitive verb; a physical sensation, acting on the object (in this case, "hunger")

[b] intransitive verb [2]; an emotional state, described by the adjective (in this case, "sad")

[1] https://en.wikipedia.org/wiki/Parallelism_(grammar)

[2] https://en.wikipedia.org/wiki/Intransitive_verb

EDIT: Clarity. Also, IANAL (I am not a linguist), and generally you don't want to take lingual advice from an engineer, like me.


I semantically autocorrected to "I feel discomfort and unease".

Something about it having the same number of words and the smallest difference in letters; I can't rule out the possibility of a 'y'/'e' error.


That reminds me of a line from Green-Eyed Lady by Sugarloaf that I've always liked: "Setting suns and lonely lovers free"


It's the perfect way to convey that feeling, so I'm stealing it now.


Traditionally, with computer generated stuff, you could clearly see the math in the algorithms (the sine waves and fractals and whatnot). With AI generated stuff it looks... natural. Like someone actually drew this for abstract art class. It's entirely unpredictable and unexplainable, except maybe for very vaguely applying the key words. It's a computer no longer letting you see how it thinks.


I think it shows this kind of image recognition AI doesn't actually think, i.e. it hasn't developed high-level concepts of the things in the pictures it's trained on.

When these neural networks get an input image and spit out labels like "bird" or "car", they haven't actually recognized which parts of the image are a car, nor what pieces it's made of. Instead they have memorized some textures and simple shapes which go with the label. It provides the kind of knee-jerk reaction that allows your brain to make you jump when a large object approaches fast, or think there's a tiger hidden in the dirty laundry when you turn around in a dark room.

That's why, when you reverse the process, it doesn't create meaningful images, but clumps of relatively common textures found in the training set. It lacks the hierarchy of concepts that allows you to identify objects and distinguish them from the background, which a baby learns in their first two years.


It's not so much that. Because their training data is limited, the NNs haven't learned the constraints that typify birds or cars. They can recognise the features and apply labels, but there has been no need for them to classify a car vs. the garbage you get by generating images.

So all these garbage outputs would be classified as cars, because it's happening in the space that the NN doesn't really have information about.


What always fascinated me is how those images look almost exactly like the hallucinations you get on some psychedelics (e.g. salvia divinorum, which was legal and sold publicly in stores for a while).

The AI must be pretty close if they can already match the output of a confused human brain.


Salvia is still legal in most states, and it is far from a typical psychedelic.


Well, there is a sort of truth in that, because dreams create a world that's half reality, half abstraction, precisely on the boundary of the known and the unknown. These images reflect the same thing in essence.


Well, and both the AI and our brain employ some hugely efficient compression, trained on lots of real world data; and both these AI pictures and our dreams are sort of generated by "firing up" the compression machine from the other end with semi-random inputs.

So, yes, it's no wonder that they're alike in some ways, and disturbing, too.


Knowing this, and that AI is so good at creating nightmarish pictures, would it be possible to attempt the reverse? Ask an AI to create the most frightening, most nightmarish rendering it can ever produce?

Does such a picture even exist? One that, having seen it once, I wouldn't sleep for a week? Or, upon seeing it, would start crying without explanation?

Seeing something like this created by AI would be very impressive: "prepare to cry when you see this picture (guaranteed!)"



This is the best illustration of where AI is today. Yes, it can do great things, because we've tuned each system to do so. This is some of the tech that does it. It only works with much guidance, and outside those boundaries, anything goes.


So, kind of like children.


Like really dumb infants maybe.

Most kids are able to correctly identify things in their environment by age 1-ish, since they're using words to describe those things around that age. Specially trained AIs are about at that level. Based on the difficulty of captcha tasks, I think it's safe to say a 1yo could identify cars, signs and store fronts about as well as Google's AI (were it actually in the situation; the 1yo probably couldn't conceptualize that some of the pictures in a grid of them on-screen represent cars or whatever). That's the level we're at, and that's with AI that's specially trained to recognize those things.


...and qualitatively different. Where children have deep concepts and relationships but poor rendering, machines fake output with rendering/mimicry ability. These attempts show where the weaknesses are. When the pictures get better, then it's more sensible to allow more control (ignoring any singularity).

Having jokingly brought that up, it's the best reason for hybridizing implants. "We should all just learn to get along" still applies.


I typed "Eldritch abomination and cat" and I was not disappointed.


Agreed. Felt the same way listening to some AI-generated classical music, too.


It would be great to have an art gallery, digital or physical, devoted to ai produced media. They could run exhibits highlighting different strategies, or have awards for creative or accurate results.


What art would that be? Part of art is not just the physical piece but the message it delivers, or a message that can be interpreted in many ways. Computer programs can learn to do the aesthetic part, but if the piece has no human context from which it comes and has nothing to say, how would that be art? Why did the computer create the image, and what is it trying to tell us with it?


Well there IS a human intent via the input.

This could fall under process, or generative art.

On a side note: a vast portion of art has been purposefully void of meaning for hundreds of years. "l'art pour l'art"


An essential aspect of an artistic work, whether it is structured from words, matter, sound, or behavior is that it has a human author who “intends” it to be art. An important part of this is that it has no other intended purpose. (This is what separates it from design.) So it needs a human author. The AI created images could be turned into art if someone could credibly claim such authorship. In the instance of the images in the post, they are not intended as art, so some kind of metaphysical alchemy would be required to convert them. (A big name modern artist claims them as his own, with permission from the developers, and puts them in a show.)

In addition to the work itself, the framing or presentation is essential to complete the artistic product. Printing the images out on nice material and framing them would help.

John Cage’s 4’33” is (IMO) a brilliant performance piece. It requires the context of a stage, of live musicians to be effective. It doesn’t work so well as a recorded piece!

https://en.wikipedia.org/wiki/4′33″


Any restrictions you put on what is and isn’t art will be contradicted by something, somewhere. A large number of pieces of art exist only to raise the question of what art even is.


Then that is already the message—to question what art even is.

Computer generated works don't have that, or any message.

The process of computer-generating a number of pieces itself might be some form of art, but more like in the form of art of programming rather than the art of painting.


There are plenty of avenues here to make art. Simply curating the images which are generated is enough to call it art. In addition, it has a text interface which is somehow connected to the output. The artist could write a poem and show us the output, but they don't have to show us every image, just the ones that contribute in some fashion to the meaning of their words. It's not that different from using aleatory methods, found art, or collage.


The beautiful thing about art is that its meaning is inherently personal and subjective. What the computer is trying to say is secondary to what the computer is saying - more importantly, rather, what it's saying about its dataset. Even how the works are presented could be interpreted as an artistic work in and of itself, the presenter being the artist.

I think these are art, since they affect me in the ways which I typically associate with art.


(Warning: the discourse on the nature of art is a classic debate that will likely never end.)

Art has a quality that is not merely personal and subjective. Interpretation is personal and subjective, but if art were too, then anything would do, because there would be no objective-ish yardsticks to help judge between two random pieces. Postmodernism tried to explore the boundaries of how little is needed for the subject still to be art, which interestingly can be considered an art in itself. But the underlying current even there is that there is a human idea, motivation, or vision behind the art that is created. Computer generated pieces lack that; however, the process of computer generation probably does qualify as art. If an artist used a computer to generate images and make collages out of the selected generated images, that would be art—much like Warhol created art out of emblems of ordinary or entertainment figures.


Warning heeded :)

I think the fundamental question here is why art requires a "human" element to actually be art, and what that "human" element actually is.

My opinion is that humans aren't really as special as we want to think we are, and that fundamentally there's no real difference between how humans create "art" and how machines create "art"; in the end, we're both drawing on our inputs, applying some subjective evaluation (possibly based on other inputs), and producing a corresponding output. How that "subjective evaluation" step happens might seem different enough to warrant different classifications, but I'm not really convinced it actually is.


I think human art is only defined in the context of humans (or similarly developed entities). The best paintings of human history won't have much artistic value to non-human viewers such as pets, or possibly even aliens (should they exist).

Conversely, we can only understand, for example, animals' art to the very limited extent we understand animals: that is, we don't really have much of a clue if an animal doodling with pebbles in the sand is making art and trying to make a beautiful (to him) arrangement of stones, or just moving pebbles around for fun.

If we saw alien art, would we even know if it's art or something else? We would have to know the alien species, their culture, and how they live in order to even make educated guesses.

The concept of art likely carries over across species, but we can only see in other species' art what is similar to our own.

If a monkey draws a monkey face we can understand that because we have similar eyes and we could draw a human face. But if we don't share even basic senses with some entity we can't possibly understand what visual art might be for them.

So, a machine producing "art" is something we can only understand as a reflection of what a human would produce. We judge the machine's output as if it were created by a human. But because we know it's not human-made but just a compilation of random values and preset rules, it lacks the context we might call the human element and becomes void to us.

At a minimal scale, the "human element" could even just be the story of the person who painted the image, even if the image is simple and not very skillfully refined. But if his paintings are a testament to how, after great hardships, he ended up living on a small island and started painting, then we can reflect back on his story by looking at his pictures and try to see what parts of his life might be captured on the canvas.


The computer doesn’t create the art any more than a brush painted the Mona Lisa. It’s not just pointing a computer at a canvas and waiting for stuff to happen. There are people behind the models, people with intentions, human context, and an idea of what they want to create. The AI is just a tool to transform disconnected parts into an interesting work.


It is, however, randomly generated, so you cannot expect the same output from the same input.


Many forms of creative endeavour, from landscape gardening to Jackson Pollock drip paintings, have allowed for a large amount of the gap between the creator's broad intent and the final appearance to be filled by processes that aren't 100% predictable. There's also a lot of intent going on in which works to display and which to discard.

Whether largely unintended consequences or not, the abstract forms of the man in the tie are interesting enough to not look out of place in a gallery, and the renderings of sheep and clocks are clearly the products of a mind (even though it's a human one) trying to get a response from the viewer attuned to the idea of androids dreaming of electric sheep, melting clocks being an iconic surrealist thing, and failure to interpret "one (1) single clock" being funny. The grassy hillside, sans sheep, not so much. A quick play with the algorithm suggests a lot of its outputs are unrecognisable forms with no correspondence to anything, so he'd already thrown away a lot of uninteresting results to get to that point.


Let an artist draw the same picture a second time without seeing the first one.


Art doesn't have to have something to say, it does not require a message, there does not have to be a thought-out why to its creation. It's strictly subjective, without exception. I can give whatever meaning to computer / AI generated art that I please.

To intentionally use an exaggerated example: I can urinate on a wall, tell nobody else about it, give it no further meaning or consideration, have done it for no particular reason other than my need to urinate, and call it art (regardless of whether the setup premise is offensive to some).

I can look at this image from the site:

https://78.media.tumblr.com/015901ac8ded0c711dcec8f8de7ec223...

And I can decide that is art which represents my emancipation from indentured servitude in another life when I was a coal miner for BigCorp on Mars. Or any other seemingly ridiculous meaning I choose to give it.


Sorry, but as someone with an art degree I have to respectfully disagree.

Pissing on a wall and arbitrarily calling it art does not make it art. That style of thinking worked during the Dada movement (https://en.wikipedia.org/wiki/Dada) but is not something that would work today, unless you were doing a live art installation, in which case you are committing to some kind of commentary on some aspect of the world anyway.

Artists almost always create art with meaning, and in the rare cases they don’t they are still attempting to make genuinely thought provoking works of art, even if it’s simply showing off their skills. If you go to an art gallery there is always the artist’s thesis statement or message written out at the entrance.

You are correct that art is subjective though. What an artist envisions and what their audience gets out of it has been shown time and again to not line up in many cases. And in a way, that is the beauty of art - our shared experiences while appreciating it.


It doesn't matter if the artist's meaning is portrayed, though, so long as viewers give it meaning. I'd argue that if the viewer doesn't give some meaning or attach some emotion to it, it has failed. I might not have meaning in mind when I create things, but I know others perceive that stuff in my art. I know a few tricks to make others feel things or think about things, and employ those. I'm mostly just making stuff I like for whatever reason, though.

These AI pictures would wind up being called art if someone purposefully produced them as such and if necessary, provided some meaning or context to them all.


You say "sorry that's not how art works" but then immediately talk about an art movement that did just that? You're contradicting yourself.

I get what you're trying to say. That sort of thing isn't common in contemporary art, especially not in a gallery setting. But that's a tiny portion of all the art in the world.

For example - I know an artist in Tucson who drinks his own piss. That's the performance. He might have a personal meaning behind it, but he doesn't explain it to anyone. And he does plenty of other meaningless things intentionally.

He's not world renowned or anything, but he is known among some artists in Tucson, he considers himself an artist, and that's how he makes his living. He also has an art degree.


I get what you are saying, but the Dada movement doesn't contradict what I said; it emphasizes that there was a time when doing what was considered anti-art was itself a message. It was a strong worldwide protest among artists against World War I, as well as against the heavily traditional train of thought around art during that era. It's still a controversial topic even today!


> Pissing on a wall and arbitrarily calling it art does not make it art.

> You are correct that art is subjective though.

"art degree"


“context”


> Pissing on a wall and arbitrarily calling it art does not make it art.

That's my argument against (most) graffiti.


...unless you are a big name in art, in which case the same act will suddenly be daring and expressive - viz. piss-filled jars with crosses in 'em and similar works of 'art'.


So how could you tell an AI-generated Rothko from the real thing? If you can't tell the difference, is there a difference?...a kind of Turing test applied to art.

Personally, I think that a helluva lot of modern art has a high BS level, and welcome the uproar that a computer could bring to it.


No. The best working definition I've run into says that everything unnecessary that is done on purpose is art. If you want the art to mean something, that's your problem, not the artist's.


Google has had AI art museums at the past 2-3 I/Os.


Unlike real art, computers can produce a trillion unique works an hour. This will apply to all forms of art in the future, including stories, music and movies. So where is the scarcity?


There is hardly scarcity in art now, though. Not as a whole. If you like a particular style, you can usually find it at an affordable price so long as you aren't going by "famous" or "brand name".

I'll also add that scarcity will be simply built-in. Most folks won't initiate such art - and such art has to be initiated. There will still be a personal selection process going on with a human behind the scenes. Just because the computer can make so many doesn't mean we'll see that many at all. I imagine it will eventually edge out some art forms: Making advertising graphics for movies and print, for example. Making logos. And so on.


> So where is the scarcity?

Up one meta-level. The artwork isn't the sonata, or the portrait, or the sculpture. The artwork is the algorithm and input data.


Scarcity matters if you regard art as a commodity to be bought and sold. Otherwise I don't see why it's important.


I recommend John Berger's "Ways of Seeing", available on YouTube. https://en.m.wikipedia.org/wiki/Ways_of_Seeing


Wide as an ocean, deep as a puddle. It will have the same problems as the procedurally generated worlds in No Man's Sky.


In an era where artworks are already abundant, curation creates scarcity. Not all works are considered equal.


There's already no scarcity of stories, music, photography, paintings or sketches. Even with the higher cost of production for movies there's still more than most people could reasonably watch in a lifetime. Aren't we already past that point without AI?


Furthermore, I'd argue that it's _better_ to be without scarcity in art, because a larger pool of differences in available art makes it far more likely for any random individual to find a piece that resonates specifically with themselves.

It's the same logic with e.g. books: if there are 100 great books, people are going to love those books and some subset of people are going to find a book and say, "wow, this book was really meant for me". If there are 1,000,000 great books, then 1) there's going to be a wider selection for people who enjoy more niche aspects in books who wouldn't otherwise find those, and 2) there's a much higher chance of one of those million books really resonating with any random individual.

The real struggle is in curation and recommendation: which of these million books would John Smith _most_ like (ideally more than, say, the original 100 books), instead of requiring him to peruse through them all on his own.


The scarcity is in the "quality" - its ability to move us in new and interesting ways, just like now. A trillion similar things are not very interesting, so quality stands out.


Why do we need scarcity?


Generate them yourselves:

http://t2i.cvalenzuelab.com/


I'm really curious what data it was trained on with snakes. Anything I write with "snakes" or "snake" in it produces something I could see in an impressionist art museum (but in no way resembles snakes). They also all tend to be monochromatic.

"snakes on a plane"

"snakes with a plan"

"yellow snakes"


These are amazing. I'd love a blind test: some of these presented as art (with the description as a title), compared to some abstract art made by humans. I could definitely hang many of these on my walls if they were in high enough resolution. Do try some abstract phrases too.


It's fun to throw "A house of dust"[0] generated poems at it.

[0]https://nickm.com/memslam/a_house_of_dust.html


It looks like this webpage has been overwhelmed at this point.


Absolutely nothing happens when I type something in. Perhaps their GPU allowance ran out.


The server is not properly configured.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
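A minimal sketch of the likely fix (illustrative only; this is not the actual demo's backend, and the handler name is hypothetical): when the model API lives on a different origin than the page, the server has to answer preflight requests and send `Access-Control-Allow-Origin`, or the browser silently discards the response and nothing appears to happen when you type.

```python
from http.server import BaseHTTPRequestHandler

class CorsHandler(BaseHTTPRequestHandler):
    def end_headers(self):
        # Every response carries the CORS header the browser checks for.
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

    def do_OPTIONS(self):
        # Answer the browser's CORS preflight request before the real POST.
        self.send_response(204)
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.end_headers()
```

Serve this handler with `http.server.HTTPServer` and cross-origin fetch() calls from the front end stop being blocked.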


The hug of death


(two comments in this thread:)

> What always fascinated me is how those images look almost exactly like the hallucinations you get on some psychedelics... The AI must be pretty close if they can already match the output of a confused human brain.

> Traditionally, with computer generated stuff, you could clearly see the math in the algorithms (the sine waves and fractals and whatnot). With AI generated stuff it looks... natural... It's a computer no longer letting you see how he thinks.

The paper "Deep Image Prior" by Dmitry Ulyanov et al. gives compelling evidence that the structure of convolutional neural networks already encodes strong knowledge about the appearance of natural images, independent of any specific parameters (learned weights). Independence from parameters means it's independent from what task the network was trained to accomplish, and of the training algorithm.

This helps explain (IMO) why a neural network with "wrong" weights (meaning the training process did not fully meet the goal of the project) still produces images that look like plausible activations of the human visual cortex, rather than harsh mathematical patterns. The convolutional network structure is biased towards natural-looking images.

paper: https://arxiv.org/abs/1711.10925

third-party blog post: http://mlexplained.com/2018/01/18/paper-dissected-deep-image...
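A toy sketch of that structural bias (my own illustration, not the paper's architecture): decoder-style generators build images coarse-to-fine out of fixed upsampling and smoothing operations, and those fixed operations alone, with nothing learned, already turn white noise into spatially correlated, natural-looking blobs, whereas raw noise has uncorrelated neighbours.

```python
import random

def upsample(img):
    """Nearest-neighbour 2x upsampling -- a fixed, structural op in
    encoder-decoder generators, independent of any learned weights."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def box_blur(img):
    """2x2 box average (wrapping at the edges), standing in for the
    smoothing a bilinear upsampler applies -- again fixed structure."""
    h, w = len(img), len(img[0])
    return [[(img[i][j] + img[i][(j + 1) % w]
              + img[(i + 1) % h][j] + img[(i + 1) % h][(j + 1) % w]) / 4.0
             for j in range(w)] for i in range(h)]

def neighbour_corr(img):
    """Pearson correlation between horizontally adjacent pixels."""
    xs = [row[j] for row in img for j in range(len(row) - 1)]
    ys = [row[j + 1] for row in img for j in range(len(row) - 1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
img = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(4)]
for _ in range(4):                      # coarse-to-fine: 4x4 -> 64x64
    img = box_blur(upsample(img))

flat = [[rng.gauss(0, 1) for _ in range(64)] for _ in range(64)]
# The structurally generated image is smooth; plain noise is not.
print(neighbour_corr(img), neighbour_corr(flat))
```

The generated image's neighbouring pixels end up strongly correlated while the iid noise's correlation stays near zero, which is the "prior from structure alone" point in miniature.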


The connections between neural networks and the human brain are superficial at best, and it's unwise to push the analogy beyond its limit. The authors make no such claims, and I don't think you have the standing to either.


yeah I did not intend to claim anything strong about that. edited my comment accordingly.


Interestingly, Christie's will sell an AI-generated painting in the coming weeks. It comes from a French startup that seems to create "better" (at least more pleasing) results: https://www.christies.com/features/A-collaboration-between-t...


There's an obvious blotchy feature grid in all of the images generated by the French team. They don't look like paintings in technique at all, but only what they are: automatic collages of randomly blended pieces of photographs from old art.

The CAN images in that same Christie's article are much better, quite beautiful. But the author of CAN is full of shit in this interview answer:

> ‘An interesting question is: why is so much of the CAN’s art abstract? I think it is because the algorithm has grasped that art progresses in a certain trajectory. If it wants to make something novel, then it cannot go back and produce figurative works as existed before the 20th century. It has to move forward. The network has learned that it finds more solutions when it tends toward abstraction: that is where there is the space for novelty.’

The algorithm has grasped that art must move forward, so it paints abstract?! Or could it be that feeding random numbers into a black box neural network algorithm is never going to give you a human likeness... No, it must be that the AI just doesn't want to be Rembrandt.

With bullshit meters at this level, the next AI winter must be just around the corner.


more pleasing in the sense of even more nightmarish?


Link to AttnGAN without the tracking junk:

https://github.com/taoxugit/AttnGAN


I typed in "modern art" and it looks about right.


I tried using the same phrase several different times and got different results, so it appears the system is not deterministic.


That’s how GANs work: trying to make sense of random noise (with maybe a hint)


You can deterministically make sense of random noise.


The point is that the GAN is trained to model the "probability distribution" of the descriptions it's trained on precisely so that the concepts are generalized and can be synthesized/extrapolated. Once trained, you may fix the random seed so that the same description would generate the same image.
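As a toy illustration of that last point (hypothetical names, not any particular GAN library's API): the generator's "randomness" is just a latent noise vector drawn from a PRNG, so pinning the seed pins the output.

```python
import random

def sample_latent(seed, dim=8):
    """Draw the latent noise vector a GAN generator would consume.
    Once trained, the generator is a fixed function, so this vector
    is the only source of variation for a given description."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

# Same seed -> same latent vector -> the same image for the same description.
assert sample_latent(42) == sample_latent(42)
# Different (or unpinned) seeds -> different latents -> different images,
# which is why retyping the same phrase gives different results.
assert sample_latent(42) != sample_latent(7)
```

The demo site presumably draws a fresh latent per request, which is all the observed non-determinism amounts to.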


And this is why they need human brains in The Matrix


This reminds me of the scene from The Matrix where they question whether the AI knows how to replicate the taste of human food: “maybe they couldn’t figure out what chicken tastes like, which is why chicken tastes like everything.”

https://m.youtube.com/watch?v=2oEnJfZ9joY


This was not a good idea to read at work. I tried to control the giggle but still coworkers noticed and I had to show them.

It's hard not to anthropomorphize the neural net and have a little bit of pity/laughter at its struggle to paint even remotely accurate pictures


They'd make good album covers.


That was exactly my first thought!


The level of our AI capabilities today means for me that

- we must not give it any decision-making abilities, because it will fail terribly on some mundane task

- SkyNet is not for tomorrow, or the day after


Fear and Loathing in AI.


We were somewhere around Barstow on the edge of the desert when the drugs began to take hold. I remember saying something like “I feel a bit lightheaded; maybe you should drive. …” And suddenly there was a terrible roar all around us and the sky was full of what looked like huge bats, all swooping and screeching and diving around the car, which was going about 100 miles an hour with the top down to Las Vegas. And a voice was screaming: “Holy Jesus! What are these goddamn animals?”

Then it was quiet again. My attorney had taken his shirt off and was pouring beer on his chest, to facilitate the tanning process. “What the hell are you yelling about?” he muttered, staring up at the sun with his eyes closed and covered with wraparound Spanish sunglasses. “Never mind,” I said. “It’s your turn to drive.” I hit the brakes and aimed the Great Red Shark toward the shoulder of the highway. No point mentioning those bats, I thought. The poor bastard will see them soon enough.


Kind of sad it only recognizes English.

Which makes me wonder how useful it would be to use different languages for teaching ML about the world?

Maybe an understanding across different languages might help it differentiate between objects with more accuracy? Tho, I'm probably making this sound far more simple than it actually would be.


All the major AI conferences publish in English; are you concerned about that as well? English has long been established as the lingua franca of science. It's not discrimination, it's just reality.


I didn't say anything about "discrimination"? I was merely pondering the possibility that cross-language reference might increase the learning potential of ML.

If it can compare different language models, then the likelihood of interpreting things the right way could be increased. Google seems to be doing something like this for Google Translate, using Bible translations [0].

[0] https://motherboard.vice.com/en_us/article/j5npeg/why-is-goo...


It's not as fun when you can kind of work backwards to induce how limited the training data set is, e.g. when I tried

> a dog on a bun on dog on a bun on a dog on a bun on a dog on a bun on

or maybe I've completely misunderstood it, and in a way it's passed my art Turing test?


Perception of real life may not be what you think it is.

Visual information processing is the reasoning skill that enables us to interpret meaning from the information we gain through our eyesight; perhaps this is not the same for all of us, and this AI example illustrates that well?


I'd make prints of some of these for my wall


Is this the graphical version of Markov chains?


Reminds me of what my kids are drawing.


"try"


Why the fuck does this link to the amp version of the site?

Change http://aiweirdness.com/post/177091486527/this-ai-is-bad-at-d... to http://aiweirdness.com/post/177091486527/this-ai-is-bad-at-d... and it actually loads and looks proper.


I see your outrage, and I'll raise you a "thanks for your service: good job finding the link".


When I posted, my experience on mobile was the opposite. The AMP page loaded fine; the non-AMP page showed no images or text at all across multiple refreshes. Don't know why. Non-AMP looks fine now.


AMP is nonfunctional with Javascript disabled, non-AMP works fine for me. Perhaps regular page got slightly hugged?


Hah, this makes a lot more sense. Viewing this last night half of each image was missing.


I liked it up until the "subscribe for more" plug at the end.

    I .. ended up with way more than would
    fit in this one blog post ..
    Enter your email and I’ll send you them
    (and if you want, you can get bonus
    material each time I post).
Why so stingy about it, author? Forcing me to subscribe for more funny AI pics, rather than simply posting the content in a followup article? Right. Then you wouldn't be able to collect all those e-mail addresses and subscribers.

Probably doesn't qualify as a full-on dark pattern, but I am annoyed enough to say that I dislike this approach.

Is this Web 2.0, 3.0, or higher? I'd like to go back to the 1.0.. anyone got a D/L link so I can reinstall the good version?


Perhaps I'm naive, but I read it as, "I don't want to spam this blog post with a bunch of pictures that aren't the absolute cream of the crop, but if you're really into it, here's how to get more." It seemed pretty reasonable to me, since the post flowed well with the pictures chosen and might not have if the author tried to cram them all in. Hanlon's razor and all that.


Dark pattern? The author is advertising a mailing list, not stealing your social security number. The author is providing all of this content free of charge and giving you an opportunity to get more free content if you subscribe. There is nothing dark about this; it's extremely benign, and quite frankly how things used to work for most of the web 20 years ago. So very much Web 1.0.

You can't treat every individual blogger like they are Facebook. If my neighbor knocks on my door asking for sugar, that's not a problem, even if it would be a problem if sugar corporations hired 2 million people to knock on everyone's door in my neighborhood every day for a year asking for a cup of sugar to increase demand.


There's a place to generate your own pictures at the bottom of the article, if you want more. (The "Try It Yourself" link)

Given this, you certainly don't have to subscribe to get more of these pics.


Some people are shady af. Author should come here and explain.


This isn't a dark pattern, it isn't shady af, and the author should not be called to the rug to answer for it.

It's just someone trying to build an email distribution list for their (side?) gig. Feel free to enjoy the content without opting in.


Oh please. It's clickbait at best, in my angry engineer tone. It's shady because the person lures people in to read it, and then at the end just says "btw, you want to see more of them? Send me your email!"

How is that not shady marketing? This is bad press. Bad article. This is killing reader morale. We should not encourage people like this to build their gigs. I feel I wasted my time reading this in the first place.


> How is that not shady marketing?

Because you've already seen the good content! This blog highlights the most amusing/weird examples of stuff generated by AI. This particular post has 17 examples (I counted). They're not presented as a listicle or one-by-one slides, and there's no advert between each one. This is exactly the sort of content we should be encouraging.


No. I disagree. Just because it has 17 examples doesn't mean it's not a marketing strategy.


I didn't say it's not a marketing strategy, but it's really not shady. Again, you've already seen the best content. The rest is an optional, barely-mentioned extra.


Did you see the pictures? I noticed they didn't show on mobile. The article had a number of them; I imagine they probably generated dozens.


Had to browse in landscape - the images went off the right side of the screen but the page won’t allow you to zoom out or scroll horizontally.

