
I regularly catch my kids watching AI-generated content, and they don't know it.



It's kind of an interesting phenomenon. I read something on this. Basically, being born between ~1980 and ~1990 is a huge advantage in tech.


The only generation that ever knew how to set the clock on a VCR: our parents needed our help; our kids won't have even seen a VCR outside of a museum, much less used one.


Very interesting point. I wonder about the generation before, and what skills they had to share with their parents, who were most likely traumatised by a world war or two. I remember setting the VCR clock and tuning the new TV with the remote. I'm sure the adults could have figured it out, but they probably got more from seeing their 'smart' kids figuring it out in time for the cartoons!


The parents of those of us who grew up in the '80s and '90s invented the VCR; they could use it just fine.


The Zoomers have the advantage that the bar is pretty low these days.


A surprising amount of it is really popular too. I recently figured out that the Movie Recaps channel was all AI when the generated voice slipped and mispronounced a word in a really unnatural way. They post videos almost daily and they get millions of views. Must be making bank.


A group I follow about hobby/miniatures (as in wargaming miniatures and dioramas) recently shared an "awesome" image of a diorama from another "hobby" group.

The image had all the telltale signs of being AI generated (too much detail, the lights & shadows were the wrong scale, the focus of the lens was odd for the kind of photo, etc). I checked that other group, and sure enough, they claim to be about sharing "miniature dioramas" but all they share is AI-generated crap.

And in the original group, which I'm a member of and which is full of people who actually create dioramas -- let's say they are "subject matter experts" -- nobody suspected anything! Unfamiliar with AI art, they took it for a photo of a real hand-made diorama.


I was watching UFC recaps on YouTube and the algorithm got me onto AI-generated MMA content; I watched for a while before realizing it. They were using old videos that had been "enhanced" with AI, plus an AI narrator. I only realized it when the fight footage got so old, and the AI had to do so much work to touch it up, that artifacts started appearing in the video. Once I realized it, I rewatched the earlier clips in the video and could see the artifacts there too, but not until I was looking for them.


There are already rabbit holes of fake MMA fighting you can fall into online? Even if you're a "fan" and relatively aware of what to look for ... still difficult to spot? Horribly, I had the same sensation while watching UFC at a bar: "Haven't I seen this match where they fall on the ground and hug for hours before?" Mostly empty background audience with limited reactions.

AI video editing showed up, and within a year or two we're already at entire rabbit holes of fake MMA videos.

Commenting mostly to leave a personal, anecdotal record of how crazy the World Wide Web has gotten.


Most probably they employ overseas, underpaid workers with non-standard English accents, and so they include text-to-speech in the production process to smooth out the end result.

I won't argue whether text-to-speech qualifies as AI, but I agree they must be making bank.


I wonder if they are making bank. Seems like a race to the bottom; there's no barrier to entry, right?


Right, content creators are in a race to the bottom.

But the people who position themselves to profit from the energy consumption of the hardware will profit from all of it: the LLMs, the image generators, the video generators, etc. See discussion yesterday: https://news.ycombinator.com/item?id=41733311

Imagine the number of worthless images being generated as people try to find one they like. Slop content creators iterate on a prompt, or maybe create hundreds of video clips hoping to find one that gets views. This is a compute-intensive process that consumes an enormous amount of energy.
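A rough back-of-envelope, where every number is an assumption purely to show the shape of it:

    # Back-of-envelope only: every number below is assumed, not measured.
    wh_per_clip = 100        # assumed energy per generated video clip, in Wh
    clips_per_keeper = 200   # assumed discarded attempts per clip that gets posted
    videos_per_day = 3       # assumed posting rate of one slop channel

    daily_kwh = wh_per_clip * clips_per_keeper * videos_per_day / 1000
    print(daily_kwh)  # 60.0 kWh/day for a single channel's trial and error

Multiply that by thousands of channels and the point stands regardless of the exact numbers.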

The market for chips will fragment, margins will shrink. It's just matrix multiplication and the user interface is PyTorch or similar. Nvidia will keep some of its business, Google's TPUs will capture some, other players like Tenstorrent (https://tenstorrent.com/hardware/grayskull) and Groq and Cerebras will capture some, etc.

But at the root of it all is the electricity demand. That's where the money will be made. Data centers need baseload power, preferably clean baseload power.

Unless hydro is available, the only clean baseload power source is nuclear fission. As we emerge from the Fukushima bear market where many uranium mining companies went out of business, the bottleneck is the fuel: uranium.


You spent a lot of words to conclude that energy is the difference maker between modern Western standards of living and everything else, past and present.


Ok, too many words. Here's a summary:

Trial and error content-creation using generative AI, whether or not it creates any real-world value, consumes a lot of electricity.

This electricity demand is likely to translate into demand for nuclear power.

When this demand for nuclear power meets the undersupply of uranium post-Fukushima, higher uranium prices will result.


Continuing that thought: higher uranium prices and real demand will lead to unshuttering and exploiting known, proven deposits that are currently idle, and to increased exploration of known resources to advance their status to measured and modelled for economic feasibility, along with revisiting radiometric maps to flag raw prospects for basic investigation.

More supply and lower prices will result.

Not unlike the recent few years in (say) lithium: anticipated demand drove a surge in exploration and development, actual demand didn't meet anticipated demand, and a number of developed, economically feasible resources were shuttered .. still waiting in the wings for a future pickup in demand.


Spend a few months studying existing demand (https://en.wikipedia.org/wiki/List_of_commercial_nuclear_rea...), existing supply (mines in operation, mines in care and maintenance, undeveloped mines), and the time it takes to develop a mine. Once you know the facts we can talk again.

Look at how long NexGen's Rook 1 Arrow is taking to develop (https://s28.q4cdn.com/891672792/files/doc_downloads/2022/03/...). Spend an hour listening to what Cameco said in its most recent conference call. Look at Kazatomprom's persistent inability to deliver the promised pounds of uranium, their sulfuric acid shortages and construction delays.

Uranium mining is slow and difficult. Existing demand and existing supply are fully visible. There's a gap of 20-40 million pounds per year, with nothing to fill the gap. New mines take a decade or more to develop.

It is not in the slightest like lithium.


> Spend a few months studying existing demand

Would two decades in global exploration geophysics and being behind the original incarnation of https://www.spglobal.com/market-intelligence/en/industries/m... count?

> Once you know the facts we can talk again.

Gosh - that does come across badly.


Apologies.

When someone compares uranium to lithium, I know I'm not talking to a uranium expert.

All the best to you, and I'll try to be more polite in the future.


Weird .. and to think I've flown several million line-km of radiometric surveys, worked multiple uranium mines, made bank on the 2007 price spike, and that we published the definitive industry uranium resource maps in 2006-2010.

Clearly you're a better expert.

> when someone compares uranium to lithium, I know I'm not talking to a uranium expert.

It's about boom bust and shuttering cycles that apply in all resource exploration and production domains.

Perhaps you're a little too literal for analogies? Maybe I'm thinking in longer time cycles than you are, and don't see a few years of lag as anything other than a few years.


Once again, allow me to offer my sincere apologies.

You are well-prepared to familiarize yourself with the current supply/demand situation. It's time to "make bank", just like you did in 2007... only more so. The 2007 spike was during an oversupplied uranium market and mainly driven by financial actors.

I invite you to begin by listening to any recent interview with Mike Alkin.

Good night and enjoy your weekend.


> Most probably they employ overseas, underpaid workers with non-standard English accents, and so they include text-to-speech in the production process to smooth out the end result.

Might also be an AI voice-changer (i.e. speech2speech) model.

These models are best known for being used to create "if [famous singer] performed [famous song not by them]" covers: you sing the song yourself, then run your recording through the model to convert it into an equivalent performance in the singer's voice, and then composite that onto a vocal-less version of the track.

But you can just as well use such a model to have overseas workers read a script, and then convert that recording into an "equivalent performance" in a fluent English speaker's voice.

Such models just slip up when they hit input phonemes they can't quite parse.

(If you were setting this up for your own personal use, you could fine-tune the speech2speech model like a translation model, so it understands how your specific accent should map to the target. [I.e., take a bunch of known sample outputs, and create paired inputs by recording your own performances of them.] This wouldn't be tenable for a big low-cost operation, of course, as the recordings would come from temp workers all over the world with high churn.)
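To make the paired-data idea concrete, here's a minimal PyTorch sketch. To be clear: the toy network, file names, and crude alignment are all illustrative assumptions of mine; a real speech2speech model is far larger and typically predicts learned acoustic tokens rather than mapping raw mel frames.

    # Illustrative sketch only: fine-tune a toy "conversion" net on paired
    # recordings (my accent -> target voice) of the same lines.
    import torch
    import torch.nn as nn
    import torchaudio

    mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=80)

    def load_mel(path):
        wav, sr = torchaudio.load(path)
        wav = torchaudio.functional.resample(wav, sr, 16000)
        return mel(wav.mean(dim=0))  # mono waveform -> (80, frames)

    # Hypothetical file names: the target voice's known output lines,
    # plus my own re-recordings of those same lines.
    pairs = [("my_take_01.wav", "target_voice_01.wav"),
             ("my_take_02.wav", "target_voice_02.wav")]

    # Toy frame-to-frame mapping from source mels to target mels.
    net = nn.Sequential(
        nn.Conv1d(80, 256, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv1d(256, 80, kernel_size=5, padding=2),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    for epoch in range(10):
        for src_path, tgt_path in pairs:
            src, tgt = load_mel(src_path), load_mel(tgt_path)
            n = min(src.shape[-1], tgt.shape[-1])  # crude length alignment
            pred = net(src[:, :n].unsqueeze(0))    # (1, 80, n)
            loss = loss_fn(pred, tgt[:, :n].unsqueeze(0))
            opt.zero_grad(); loss.backward(); opt.step()

In practice you'd want proper time alignment (DTW or an attention-based model) rather than naive truncation, but the training signal is the same idea: paired (accented input, target-voice output) examples.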


Can you identify any of these models?


I think it's odd to assume they are based in the US and employ/underpay foreigners. A lot of the people making this content are simply from somewhere else.


But it uses AI only for the audio, right? The script for the video seems to be written by a human, given the channel's unusual brand of humor. I started watching this channel some time ago.


It's hard to tell whether they use AI for script generation. Having seen enough of those recaps, the humor seems rather mechanical, and basic humor is relatively easy to get from an LLM if prompted correctly. The video titles also read as if they were generated.

That said, this channel was producing videos well before ChatGPT (GPT-3.5/4), so at the very least they probably started with human-written scripts.


I thought it was just text-to-speech when I happened to see some of those videos. And it seems to have been consistently similar since before ChatGPT etc. Why do you think the titles are AI-generated?

I feel like it might actually be quite complex for AI to pull up the perfect clips and edit them to the script, with the timing and everything. Maybe it could be made automatic, but it would nonetheless be a complex process, and I don't think it was possible a few years ago. I know Gemini and possibly some others can analyze video if it's fed to them, but I'm still skeptical that this channel in particular has done that, given they've always had this upload frequency and similar tone.
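For what it's worth, feeding a video to Gemini for analysis is only a few lines with Google's Python SDK. This is a sketch from memory of the google-generativeai package; names and the polling detail may have changed, so check the current docs:

    # Sketch of video analysis with the google-generativeai SDK, from memory.
    import time
    import google.generativeai as genai

    genai.configure(api_key="YOUR_KEY")        # placeholder key
    video = genai.upload_file("recap_clip.mp4")
    while video.state.name == "PROCESSING":    # uploads are processed async
        time.sleep(5)
        video = genai.get_file(video.name)

    model = genai.GenerativeModel("gemini-1.5-flash")
    resp = model.generate_content([video, "List the key moments with timestamps."])
    print(resp.text)

Analysis is one thing, though; automatically picking and cutting the perfect clips to match a script is a much harder editing problem, which is why I'm skeptical.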

Also, there's far better TTS now with ElevenLabs and others, so it could be made much more human-like.
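For example, a hosted TTS call is about this much code. Endpoint and field names are recalled from ElevenLabs' v1 REST API, so treat this as illustrative and check their current docs:

    # Illustrative hosted-TTS call; endpoint/fields recalled from the
    # ElevenLabs v1 API and may be out of date.
    import requests

    VOICE_ID = "your-voice-id"  # placeholder
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": "YOUR_KEY"},
        json={"text": "In today's recap...", "model_id": "eleven_multilingual_v2"},
    )
    resp.raise_for_status()
    with open("narration.mp3", "wb") as f:
        f.write(resp.content)  # the response body is the rendered audio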



