
This should really be retitled to “The AI investment bubble is losing hype.” LLMs as they exist today will slowly work their way into new products and use cases. They are an important new capability and one that will change how we do certain tasks.

But as to the hype, we are in a brief pause before the election where no company wants to release anything that would hit the news cycle in a bad way and cause knee-jerk legislation. Are there new architectures and capabilities waiting? Likely some. Sora showed state of the art video generation, OpenAI has demoed an impressive voice mode, and Anthropic has teased that Opus 3.5 will be even more capable. OpenAI also clearly has some gas in the tank as they have focused on releasing small models such as GPT-4o and 4o mini. And many have been musing about agents and methods to improve system 2 like reasoning.

So while there’s a soft moratorium on showing scary new capability, there is still evidence of progress being made behind the scenes. But what will a state-of-the-art model look like when all of these techniques have been scaled up on brand-new exascale data centers?

It might not be AGI, but I think it will at least be enough for the next hype-fueled investment bubble.




We’ll see, but I doubt it’s because of the election; as another commenter said, companies can’t afford to lose that much money waiting around for months for the right “moment” to release a product. GPT-4o is good, I’ll grant you that, but it’s fundamentally the same tech as GPT-3.5, and the fundamental problem, “hallucination,” is not solved, even if there are more capabilities. No matter what, for anything besides code that may or may not be any good, someone has to go through and read the response and make sure it’s all tight so they don’t fuck up or embarrass themselves (and even then, using AI for coding will introduce long-term problems since you’ll have to go back through it for debugging anyway). We all, within a month of ChatGPT getting introduced, caught it in a contradiction or just plain error about some topic of specialist knowledge, and realized that its level of expertise on all topics was at that same level. Sam Altman royally fucked up when he started believing his own bullshit, thinking AGI was on the horizon and all we needed was to waste more resources on compute time and datacenters.

It’s done, you can’t make another LLM; all knowledge from here on out is corrupted by them, and you can never deliver an epistemic “update.” GPT will become a relic of the 21st century, like a magazine from the 1950s.


Even if the datacenter increase was a waste because the potential of LLMs never pans out relative to the massive investment, I think the flood of GPUs and compute will definitely enable whatever could benefit from it. The next better AI application will be very thankful that GPU compute is abundant.


GPUs are easily and irreparably broken by overheating, so GPU compute is something that's high maintenance. It won't stick around like a building or a bridge.


Luckily data centers have this thing called “air conditioning”…


Air conditioning is just heat displacement. It doesn't get rid of the heat, and if it gets hot enough it becomes physically impossible to remove enough heat to cool a building down, no matter how many air conditioners are installed.


Yea that’s why we have this problem today with data centers overheating.

Don’t put chips on top of each other and you’re fine.


I don’t think you’ve used LLMs enough. They are revolutionary, every day. As a coder I’m several times more productive than I was before, especially when trying to learn some new library or language.


They are revolutionary for use cases where hallucinated / wrong / unreliable output is easy and cheap to detect & fix, and where there's enough training data. That's why it fits programming so well - if you get bad code, you just throw it away or modify it until it works. That's why it works for generic stock images too - if you get a bad image, you modify the prompt, generate another one, and see if it's better.

But many jobs are not like that. Imagine an AI nurse giving bad health advice on the phone. Somebody might die. Or an AI salesman making promises that are against company policy? The company is likely to be held legally liable and may lose significant money.

Due to legal reasons, my company couldn't enable full LLM generative capabilities on the chatbot we use, because we would be legally responsible for anything it generates. Instead, the LLM is simply used to determine which of the pre-determined answers may fit the query best, which it indeed does well when more traditional technologies fail. But that's not revolutionary, just an improvement. I suspect there are many barriers like that which hinder its usage in many fields, even if it could work most of the time.

So, nearly all use cases I can think of now will still require a human in the loop, simply because of the unreliability. That way it can be a productivity booster, but not a replacement.


Human medical errors have been one of the leading causes of death[0] since we started tracking it (at least decades).

The healthcare system has always killed plenty of people because humans are notoriously unreliable, fallible, etc.

It is such a stubborn, critical, and well-known issue in healthcare that I welcome AI being deployed slowly and responsibly to see what happens, because the situation hasn’t significantly improved with everything else we’ve thrown at it.

[0] - https://www.ncbi.nlm.nih.gov/books/NBK225187/


> But many jobs are not like that. Imagine an AI nurse giving bad health advice on phone. Somebody might die.

This problem is not unique to AI; you see it with human medical professionals too. Regularly people are misdiagnosed or aren’t diagnosed at all. At least with AI you could compare the results of different models pretty much instantly and get confirmation. An AI doctor also wouldn’t miss information on a chart like a human can.

> So, nearly all use cases I can think of now will still require a human in the loop, simply because of the unreliability. That way it can be a productivity booster, but not a replacement.

This is exactly what your parent said, yet you replied seemingly disagreeing. AI tools are here to stay and they do increase productivity, be it coding, writing papers, or strategizing. Those who continue to think of AI as not useful will be left behind.


To me those use cases are already revolutionary. And human in the loop doesn't mean it is not revolutionary. I see it multiplying human productivity rather than serving as an immediate replacement. And it can take some time before it is properly iterated and integrated everywhere in a seamless manner.


A product doesn’t have to be useful for everything to still be useful.


If you adjust your standard to the level of human performance in most roles, including nursing, you’ll find that AI is reasonably similar to most people in that it makes errors, sometimes convincing ones, and that recovering from those errors is something all social/org systems must do & don’t always get right.

Human in the loop can add reliability, but the most common use cases I’m seeing with AI are helping people see the errors they are making/their lack of sufficient effort to solve the problem.


Are you making several times as much money?

IME LLMs are great at giving you the experience of learning, in the same way sugar gives you the experience of nourishment


Developer productivity doesn't map very directly to compensation. If one engineer is 10x as productive as another, they're lucky if they get 2x the compensation.


The 10x engineer will just start their own company.


A good programmer isn't necessarily also good enough at business to run their own company.

Sure, you have your John Carmacks or antirezes of the industry who are 10x programmers and also successful founders, but those guys are one in a million.

But the usual 10x engineer you'll meet is the guy who knows the ins and outs of all the running systems at work, giving him the ability to debug and solve issues 10x quicker than the rest. That knowledge is highly specific to the products of that company, often non-transferable, and not very useful for entrepreneurship.

Becoming the 10x engineer at a company usually means pigeonholing yourself in the deep inner workings of its products, which may or may not be useful later. If that stack is highly custom or proprietary, it might work in your favor, making you the 10x guy who is virtually unfireable and able to set your own demands since only you can solve the issues; or it might backfire at a round of layoffs, since your knowledge of that specific niche has little demand elsewhere.


> But your usual 10x engineer you'll meet is the guy who knows the ins and outs of all running systems at work giving him the ability to debug and solve issues 10x quicker than the rest

You're talking about the 100x engineer now. The 10x engineer is the normal engineer you are probably accustomed to working with. When you encounter a 1x engineer, you will be shocked at how slow and unproductive they are.


>When you encounter a 1x engineer, you will be shocked at how slow and unproductive they are.

Well, Of Course I Encountered Him. He's Me.


Can’t be you. Self awareness is not a feature of 1x engineers.


I guess that means the fastest way to become better than a 1x engineer is to declare that you are 1x engineer. Makes sense to me!


Unlikely, given that you see the 100x engineer as being only a 10x level up.


You are using the deca-engineer scale.


>A good programmer isn't necessarily also good at business to run his own company.

AI can help with that.


We'd all be millionaires if AI could actually help with that, but if everyone is a millionaire then nobody is.

Current AI is still at the state of recommending people jump off the golden gate bridge if they feel sad or telling them to change their blinker fluid.


You're right. And that's why I wonder how a developer can claim to get a 300% increase in productivity from AI results.


300% is a massive underestimate for someone who is AI native and understands how to prompt and interpret results correctly. In the past I would spend 30-40 minutes hunting around on StackOverflow to get the syntax of a difficult database query or bust out a long regular expression.

With AI I can do the same in 15 seconds. We’re talking a 120x increase in productivity, not a 3x improvement.


Do you spend 100% of your time on such tasks? If not, then Amdahl wants a word with you about your overall productivity gains.


Agreed, I don’t know why people here are so down on AI as a coding assistant. I concur with everything you said, and will add that I now also have a service that will produce, on demand, the exact documentation I need at the exact level of abstraction, which I can then interrogate at will. These things are huge time savers.


You can easily get 300% productivity improvements if you're using a completely new language but still have enough programming background to refine the prompts to get what you want, or if you're debugging an issue and the LLM points you in the right direction, saving you hours of googling and slamming your head against the wall.


No but I can work 2-3 hours a day (WFH) while delivering results that my boss is very happy with. I would prefer to be paid 3 times as much and working 8 hours a day, but I'm ok with this too.


Like with all productivity gains in history, this won't last long once management realizes it and squeezes deadlines by 2-3x, since everyone will be expected to use LLMs at work to get things done 2-3x faster than before.


Additionally, there's a limit to how much "productivity gains" on the tech side can cause actual positive business results. Going from an ugly, slow website to a fresh one will only increase revenue a small percentage, same with building a better shopping cart, fixing some obscure errors, etc. The majority of the web already works pretty well, there is no flood of new revenue coming in as a result of the majority of tech innovation.

Which means that after a brief honeymoon period, the effect of AI will be to heavily reduce labor costs, rather than push for turbocharged "productivity" with greatly diminishing returns.


> I don’t think you’ve used LLMs enough. They are revolutionary, every day.

How much is "enough"? Neither myself nor my coworkers have found LLMs to be all that useful in our work. Almost everybody has stopped bothering with them these days.


What line of work are you in?


Do you perhaps have some resources on how you use AI assistants for coding (I'm assuming GitHub Copilot)? I've been trying it for the past few months, and frankly, it's barely helping me at all. 95% of the time the suggestions are just noise. Maybe as a fast typist it's less useful; I just wonder why my experience is so different from what others are saying. So maybe it's because I'm not using it right?


I think it's your mindset and how you approach it. E.g. some people are genuinely bad at googling their way to a solution, while some people know exactly how to manipulate a Google search due to years of experience debugging problems. Some people will be really good at squeezing the right output out of ChatGPT/Copilot and utilizing it to its maximum potential, while others simply won't make the connection.

Its output depends on your input.

E.g. say you have an API's swagger documentation and you want to generate a TypeScript type definition from that data: you just copy-paste the docs into a comment above the type, and Copilot auto-fills your TypeScript type definition, even adding ? for properties which are not required.

If you clearly define the goal of a function in a JSDoc comment, you can implement very complex functions. E.g. you define it in steps, and in the function outline each step; this also helps your own thinking. With GPT-4o you can even draw diagrams in e.g. Excalidraw or take screenshots of the issues in your UI to complement your question relating to that code.
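To make that concrete, here's a hypothetical sketch (the doc excerpt, interface, and function are all invented for illustration, not taken from the parent comment) of the kind of scaffolding Copilot tends to complete well: a type derived from pasted docs, and a JSDoc comment whose numbered steps the suggestions then follow.

    // Hypothetical pasted doc excerpt:
    //   User: id (integer, required), name (string, required), email (string, optional)
    // Copilot fills in the interface, adding `?` for the non-required property.
    interface User {
      id: number;
      name: string;
      email?: string;
    }

    /**
     * Groups users by the domain of their email address.
     * Steps:
     * 1. Skip users without an email.
     * 2. Take the part after "@" as the domain.
     * 3. Collect users into a Map keyed by domain.
     */
    function groupUsersByDomain(users: User[]): Map<string, User[]> {
      const byDomain = new Map<string, User[]>();
      for (const user of users) {
        if (!user.email) continue;               // step 1
        const domain = user.email.split("@")[1]; // step 2
        const group = byDomain.get(domain) ?? [];
        group.push(user);
        byDomain.set(domain, group);             // step 3
      }
      return byDomain;
    }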


> some people know exactly how to manipulate the google search due to years of experience debugging problems

This really rings true for me. Especially as a junior, I always thought one of my best skills was that I was good at Googling. I was able to come up with good queries and find some page that would help. Sometimes a search was simple enough that you could just grab a line of code right off the page, but most of the time (especially with StackOverflow) the best approach was to read through a few different sources and pick and choose what was useful to the situation, synthesizing a solution. Depending on how complicated the problem was, that process might have occurred in a single step or in multiple iterations.

So I've found LLMs to be a handy tool for making that process quicker. It's rare that the LLM will write the exact code I need - though of course some queries are simple enough to make that possible. But I can sort of prime the conversation in the right direction and get into a state where I can get useful answers to questions. I don't have any particular knowledge on AI that helps me do that, just a kind of general intuition for how to phrase questions and follow-ups to get output that's helpful.

I still have to be the filter - the LLM is happy to bullshit you - but that's not really a sea change from trying to Google around to figure out a problem. LLMs seem like an overall upgrade to that specific process of engineering to me, and that's a pretty useful tool!


Keep in mind that Google's results are also much worse than they used to be.

I'm using both Kagi & LLM; depending on my need, I'll prefer one or the other.

Maybe I can reach the same result with an LLM, but all the conversation/guidance required is more time-consuming than just refining a search query and browsing through the first three results.

After all, the answer is rarely exactly available somewhere. Reading people's questions/replies will provide clues to find the actual answer I was looking for.

I have not yet been able to achieve this result through an LLM.


> E.g. you define it in steps, and in the function line out each step. This also helps your own thinking

Yeah, but there are other ways to think through problems, like asking other people what they think, which you can evaluate based on who they are and what they know. GPT is like getting advice from a cross-section of everyone in the world (and you don’t even know which one), which may be helpful depending on the question and the “people” answering it, but it may also be extraordinarily unhelpful, especially for very specialized tasks (and specialized tasks are where the profit is).

Like most people, I have very specific knowledge of things that fewer than 100 people in the world know better than me, but that thousands or even millions more have some poorly conceived general idea about.

If you asked GPT to give you an answer to a question, it would bias toward those millions, the statistically greater quantitative solution, over the qualitative one. But maybe GPT only has a few really good indexes in its training data that it uses for its response, and then it’s extremely helpful, because it’s like accidentally landing on a StackOverflow response by some crazy genius who reads all day, lives out of a van in the woods, and uses public library computers to answer queries in his spare time. But that’s sheer luck, and no more likely than what a regular search will get you.


Take a look at aider-chat or Zed. Zed just released new AI features and had a blog post about it yesterday, I think.

Also you can look into cursor.

There are actually quite a few tools.

I have my own agent framework in progress which has many plugins with different commands, including reading directories, tree, reading and writing files, running commands, and reading spreadsheets. So I can tell it to read all the Python in a module directory, run a test script, and compare the output to a spreadsheet tab. Then I ask it to come up with ideas for making the Python code match the spreadsheet better, and have it update the code and rerun the tests iteratively until it's satisfied.
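For anyone wondering what that plugin-command plumbing might look like, here is a minimal sketch in TypeScript (all names invented; the commenter's actual framework may be structured completely differently): the model replies with a command name plus arguments, and the framework dispatches it and feeds the output back into the next turn.

    import { readFileSync, readdirSync } from "node:fs";
    import { execSync } from "node:child_process";

    // Each plugin command maps a name to a function the model may invoke.
    type Command = (args: string[]) => string;

    const commands: Record<string, Command> = {
      // List a directory so the model can decide which files to read.
      tree: ([dir]) => readdirSync(dir).join("\n"),
      // Return a file's contents for the model to analyze.
      read_file: ([path]) => readFileSync(path, "utf8"),
      // Run a test script and capture its output for the next turn.
      run: ([cmd]) => execSync(cmd).toString(),
    };

    // Dispatch one model-issued command, e.g. { name: "read_file", args: ["mod.py"] }.
    function dispatch(name: string, args: string[]): string {
      const command = commands[name];
      if (!command) return `unknown command: ${name}`;
      return command(args);
    }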

If I am honest about that particular process last night, I am going to have to go over the spreadsheet to some degree manually today, because neither GPT-4o nor Claude 3.5 Sonnet was able to get the numbers to match exactly.

It's a somewhat complicated spreadsheet in a domain I don't know anything about and am just grudgingly learning. I think the agent got me 95% of the way through the task.


I rely on LLMs extensively for my work, but only a part of that is with copilots.

I have Copilot suggestions bound to an easy hotkey to turn them on or off. If I’m writing code that’s entirely new to the code base, I toggle the suggestions off; they’ll be mostly useless. If I’m following a well-established pattern, even if it’s a complicated one, I turn them on; they’ll be mostly good. When writing tests in C#, I reflexively give the test a good name and write a tiny bit of the setup, then Copilot will usually be pretty good about the rest. I toggle it multiple times an hour; it’s about knowing when it’ll be good, and when not.

Beyond that, I get more value from interacting with the LLM by chat. It’s important to have preconfigured personas, and it took me a good 500 words and some trial and error to set those up and get their interaction styles where I need them to be. There’s the “.NET runtime expert,” the “infrastructure and release mentor,” and so on. As soon as I feel the least bit stuck or unsure I consult with one of them, possibly in voice mode while going for a little walk. It’s like having the right colleague always available to talk something through, and I now rarely find myself spinning my wheels, bike-shedding, or what have you.


It is very helpful in providing highly specific "boilerplate" in languages/environments you are not very familiar with.

The text interface can also be useful for skipping across complex documentation and/or learning. Example: you can ask GPT-4 to "decode 0xdf 0xf8 0x44 0xd0 (thumb 2 assembly for arm cortex-m)" => this will tell you what instruction is encoded, what it does and even how to cajole your toolchain into providing that same information.

If you are an experienced developer already, with a clear goal and understanding, LLMs tend to be less helpful in my experience (the same way that a mentor you could ask random bullshit would be more useful to a junior than a senior dev)


> this will tell you what instruction is encoded, what it does and even how to cajole your toolchain into providing that same information.

or it will hallucinate something that's completely wrong but you won't notice it


Copilot is just an autocomplete tool; it doesn’t have much support for multi-turn prompting, so it’s best used when you know exactly what code you want and just want it done quickly, like implementing a well-defined function to satisfy an interface, refactoring existing code to match an example you’ve already written out, or prefilling boilerplate in a new file. For more complex work you need to use a chat interface where you can actually discuss the proposed changes with the model and edit and fork the conversation if necessary.


Don’t work in large established code bases. Make flappy bird games in Python.


My experience is mostly with GPT-4. Treat it like a beginner programmer. Give it small, self-contained tasks; explain the possible problems, the limitations of the environment you are working with, and possible hurdles; and suggest API functions or language features to use (it really likes to forget there is a specific function that does half of what you need instead of having to staple multiple ones together). Try it for different tasks and you will get a feel for what it excels at and what it won't be able to solve. If it doesn't give a good answer after 2 or 3 attempts, just write it yourself and move on; giving feedback barely works in my experience.


What language do you use?

If you can beat copilot in a typing race then you’re probably well within your comfort zone. It works best when working on things that you’re less confident at - typing speed doesn’t matter when you have to stop to think.


I do 120wpm, but Copilot still outpaces me, and it is not just typing, it is the little things I don't have to think about. Of course I know how to do all of it, but it still takes some mental energy to come up with algorithms and code. It takes less energy to verify what Copilot outputs, at least for me.


I use C# for the most part, sometimes PowerShell. But I can certainly see how it's more useful when I don't know much of the API yet, since it would replace a lot of googling.


My experience is similar. Most of the results are not really useful so I have to put work in to fix them. But at that point I can do the small extra step of doing it completely myself.


This comment doesn't deserve the downvotes it's getting; the author is right, and I'm having the same experience.

LLM outputs aren't always perfect, but that doesn't stop them from being extremely helpful and massively increasing my productivity.

They help me to get things done with the tech I'm familiar with much faster, get things done with tech I'm unfamiliar with that I wouldn't be able to do before, and they are extremely helpful for learning as well.

Also, I've noticed that using them has made me much more curious. I'm asking so many new questions now; I had no idea how many things I was casually curious about, but not curious enough to google.


Good luck telling a bunch of programmers that their skills are legitimately under threat. No one wants to hear that, especially when you are living a top-10% lifestyle on the back of being good at communicating with computers.

There is an old documentary about the final days of typesetters for newspapers. These were the (very skilled) people who rapidly put each individual carved steel character block into the printing frame in order to print thousands of page copies. Many were incredulous that a machine could ever replicate their work.

I don't think programmers are going to go away, but I do think those juicy salaries and compensation packages will.


At least for now it seems more like a multiplier that won't reduce the amount of work out there, and could even increase demand in certain cases: as digitisation becomes easier, projects that weren't worth doing before will be now, and more complicated use cases will open up as well.

So same programmer with the same 8h of workday will be able to output more value.


The requirement for programmers is absolutely going to decline.

Some will undoubtedly transition to broader-based business consultancy services. For those unable or unwilling to do so, the future is bleak.


> I do think those juicy salaries and compensation packages will.

I think that's inevitable with or without LLMs in the mix. I also think the industry as a whole will be better for it.


Horses before the automobile


What’s the title of the documentary?


ChatGPT says

The documentary he's referring to is likely "Farewell, Etaoin Shrdlu," released in 1980. It chronicles the last day of hot metal typesetting at The New York Times before they transitioned to newer technology. The title comes from the nonsense phrase "etaoin shrdlu," which appeared frequently in Linotype machine errors due to the way the keys were arranged. The documentary provides a fascinating look at the end of an era in newspaper production.


> But as to the hype, we are in a brief pause before the election where no company wants to release anything that would hit the news cycle in a bad way and cause knee-jerk legislation.

The knee-jerk legislation has mostly been caused by Altman's statements though. So I wouldn't call it knee-jerk, but an attempt by OpenAI to get a legally granted monopoly.


They are definitely working on that, but it needs to be the right kind of knee-jerk legislation, something that gives them regulatory capture. They can’t afford to lose control of the narrative and end up regulated to death.


Given enough money, the United States government will capitulate to anyone; hell, they didn't even finish prosecuting Scientology.


So you’re suggesting all innovation/new-functionality releases are paused because the US has elections coming up? I find that hard to believe.


"no no, it's not that the technology has reached it's current limits and these companies are bleeding money, they're just withholding their newest releases not to spook the normies!"


Here's an experiment you can try. Go to https://www.udio.com/home and grab a free account, which comes with more than enough credits to do this. Use a free chat LLM like Claude 3.5 Sonnet or ChatGPT 4o to workshop some lyrics that you like; just try a few generations and ask it to rewrite parts you don't like until you have something that you don't find too cringe.

Then go back over to Udio, go to the create tab, turn on the Manual Mode toggle, and type in only 3 or 4 comma-separated tags that describe the genre you like. Keep them very basic, like Progressive Rock, Hip Hop, 1995, Male Vocalist or whatever; you don't need to combine genres, these are just examples of tags. Then under the Lyrics section choose Custom, paste in just the chorus or a single verse from the lyrics you generated, and click Create.

It'll create two samples for you to listen to. If you don't like either of them, just click Create again to get another two, but normally it doesn't take too many tries to get something that sounds pretty good. After you have one you like, click on the ... menu next to the song title and click Extend; you can add sections before or after, and you just have to add the corresponding verse from the lyrics you generated, or choose Instrumental if you want a guitar solo or something. You'll wind up with something pretty good if you really listen to each sample and choose the best one.

Music generation is one of the easiest ways to "spook the normies" since most people are completely unaware of the current SOTA. Anyone with a good ear and access to these tools can create a listenable song that sounds like it's been professionally produced. Anyone with a good ear and competence with a DAW and these tools can produce a high quality song. Someone who is already a professional can create incredible results in a fraction of the time it would normally take with zero budget.

One of the main limitations of generative AI at the moment is the interface, Udio's could certainly be improved but I think they have something good here with the extend feature allowing you to steer the creation. Developing the key UI features that allow you to control the inputs to generative models is an area where huge advancements can be made that can dramatically improve the quality of the generated output. We've only just scratched the surface here and even if the technology has reached its current limits, which I strongly believe it hasn't since there are a lot of things that have been shown to work but haven't been productized yet, we could still see steady month over month improvements based on better tooling built around them alone.

Text generation has gone from markov chain babblers to indistinguishable from human written.

Image generation has gone from acid trip uncanny valley to photorealistic.

Audio generation has gone from 1930s AM radio quality to crystal clear.

Video generation is currently in fugue dream state but is rapidly improving.

3D is early stages.

???? is next but I'm guessing it'll be things like CAD STL models, electronic circuits, and other physics based modelling outputs.

The ride's not over yet.


I've tried Udio when it appeared, and, while it is spectacularly fascinating from the technical perspective, and can even generate songs that "sound" OK, it is still as cringe as cringe can be.

Do you have an example of any song that gained any traction among human audience? Not a Billboard hit, just something that people outside the techbubble accepted as a good song?


Have you tried the latest model? It's night and day.

Edit:

There's obviously still skill involved in creating a good song, it's not like you can just click one button and get a perfect hit. I outlined the simplest process in my first comment and specifically said you could create a "listenable" song, it's not going to be great but it probably rivals some of the slop you often hear on the radio. If you're a skilled music producer you can absolutely create something good especially now with access to the stemmed components of the songs. It's going to be a half manual process where you first generate enough to capture the feeling of the song and then download and make edits or add samples, upload and extend or remix and repeat.

If you're looking for links and don't care to peruse the trending section they have several samples on the announcement page https://www.udio.com/blog/introducing-v1-5


I think the frontpage/recommended sections on Udio and Suno both have some decent music these days. By decent I mean on the level one could expect from, say, browsing music on YouTube in areas one is not familiar with. There is of course a lot of meme/joke content, but also some pleasant/interesting-sounding songs.

The really good stuff probably will not be marked as made with AI, and will probably also go through a DAW and proper mastering.


I see this pattern a lot, and I find it telling:

- someone claims that Gen AI is overhyped

- someone responds with a Gen AI-enabled service that

    1) is really impressive

    2) is currently offered pretty much for free

    3) doesn't have that many tangible benefits.

There are many technologies for which it's very easy to answer "how does it improve the life of an average person": the desktop, the internet, the iPhone. I don't think Udio is anything like these. Long-term, how profitable do you expect a Udio-like application to be? Who would pay money to use this service?

It's just hard to imagine how you can turn this technology into a valuable product. Which isn't to say it's impossible: gen-AI is definitely quite capable and people are learning how to integrate it into products that can turn a profit. But @futureshock's point was that it is the AI investment bubble that's losing hype, and I think that's inevitable: people are realizing there are many issues with a technology that is super impressive but hard to productize.


I wrote a song for my girlfriend this way. It turned out pretty nice; a bit quirky is not necessarily a bad thing when making personalized content. Took me a couple of hours to get it to my liking, including learning all the tools and the workflow for the first time, and fixing up a couple of mispronunciations of her nickname using inpainting. Overall a very productive environment, and I will probably try to make some more songs - and replace the vocals with my own using the stems feature.

I have some audio engineering skills and dabbled in songwriting, guitar, and singing when I was younger, but never actually completed a full song. So it is quite transformative from that perspective!


That's great to hear! It's uses like this that really make Udio and other tools shine. Even just making up silly songs about things that happen in your life is fun or doing them as a gift like you did is always nice. It's also great to have the option to add music to other projects.


The ride to what? The place where human musicians can't earn a living because they can't live on less than what it costs to have an AI regurgitate the melodic patterns, chord progressions, and other music theory it has learned? This benefits who, exactly? It's a race to the bottom. Who is going to pay anything for music that can be generated basically for free? Who is going to go to a concert or festival to listen to a computer? Who is going to buy merchandise? Are the hardware, models, and algorithms used going to capture the public interest like the personalities and abilities of the musicians in a band? Will anyone be a "fan" of this kind of music? Will there be an AI Freddie Mercury, Elton John, Prince, or Taylor Swift?


It sounds like you're arguing with yourself. You provide exactly the reasons why generative AI isn't going to take us to a "place where human musicians can't earn a living". It's my understanding that most small bands primarily earn their money from live performances and merchandise, gen AI isn't going to compete with them there, if anything it'll make it much easier for them to create their own merch or at least the initial designs for it.

AI generated music is more of a threat to the current state of the recording industry. If I can create exactly the album or playlist that I want using AI then why should I pay a record label for a recording that they're going to take 90% of the retail price from? The playlist I listen to while I'm working out or driving is not competing with live band performances, I'm still going to go to a show if there's a band playing that I like.


Yeah I didn't really state that very well. My point was mostly what you say: because people are fans of artists, and because AI music is/will be essentially free to produce, AI music isn't something that will make money for anyone, unless it's the default way anything online makes money: ads are injected into it. I'm not going to pay for it. I'm not going to buy merchandise, or go to concerts, or do anything a music fan does and pays money for. I'm not even going to put it in a workout playlist, because I can just as easily make a playlist of real human artists that I like.

I disagree that it's a threat to the recording industry. They aren't going to be able to sell AI music, but nobody else is either, because anyone who wants AI music can just create it themselves. Record labels will continue to sell and promote real artists, because that's how they can make money. That's what people will pay for.


Fair enough, but I'm not sure you're even going to realize if you're listening to AI-generated music or not. One way of using these tools is to take lyrics and create several different melodies and vocal styles. An artist or a professional songwriter could do this and then either record their own version or pay musicians to perform it. That could be any combination of simply re-recording the vocal track, replacing or adding instrument tracks, making small modifications to some of the AI-generated tracks, etc. The song can then be released under the name of the singer, who can also go on tour in the flesh. You might also just come across a 100% AI song on a streaming platform, enjoy it, and add it to a playlist. Who vets all of the music they listen to anyway? And if the producer manages the web presence of the "band" and provides a website, it would withstand a cursory search. You'd have to look closely to determine that there are no human band members other than the producer. For the types of electronic music that aren't usually performed live and are solely attributed to one artist, it might be impossible to tell. The line would be especially blurry there anyway due to the current extensive use of samples and non-AI automation.

There are a lot more fuzzy edges too, you can use AI tools to "autotune" your own voice into a completely different one. You can tap out a quick melody on a keyboard and then extend, embellish and transform it into a full song. You could even do the full song yourself first and then remix it using AI.

The point I agree on would be that one-click hits are going to be few and far between for a while at least. If no effort is put into selecting the best then it's really just random chance. I'd be willing to bet that there will be an indie smash hit song created by a single person who doesn't perform any of the vocals or instruments within a year though. It'll get no play time on anything controlled by the industry titans but people will be streaming it regardless.


While it’s true that political events can influence market dynamics


Yes, I do think it’s plausible. You’re talking Microsoft and Google. Two companies with extremely close ties to the US government. And Amazon is a major investor in Anthropic. It doesn’t take a conspiracy if the folks with majority stake in these companies are all friends and handshake agree over a steak dinner just off Capitol Hill. We live in a world where price fixing is a regular occurrence, and this is not so different.

I think we will see a very nice boost in capability within 6 months of the election. I don’t personally believe all the apocalyptic AGI predictions, but I do think that AI will continue to feed a nice growth curve in IT investment and productivity growth, similar to the last few decades of IT investment.


"there’s a soft moratorium on showing scary new capability"

Yes. There is also the Hype of the "End of the Hype Cycle". There is Hype that the Hype is ending.

When really, there is something amazing being released weekly.

People are so desensitized that, just because we don't have androids walking the streets or Blade Runner-like space colonies staffed with robots, they conclude AI is over.


Nope. I couldn't care less about some elections (I care about global consequences, but there's no point wasting time & energy now; when it comes, it comes). That's US theater to focus the population on the freak show you guys made out of the election process, rather than on actually important stuff and concrete policies or actions.

What people, including me, are massively fed up with is all the companies (I mean ALL) jumping on the AI bandwagon in a beautiful show of how FOMO works and how even CEOs/shareholders are not immune to basic instincts. A literal hammer looking desperately for nails. Very few companies have amazing or potentially amazing products; the rest, not so much.

I absolutely don't want every effin' thing infused with some AI, since 1) it will be used to monitor my usage or me directly for advertising / credit & insurance scoring purposes, absolutely 0 doubt there; and 2) it may stop working once wifi is down, the product is deprecated, or the company changes its policy (Sonos anyone?). Internet of Things hate v2.0.

I hate this primitive AI fashion wave: negative added value in most cases, 0 in the rest, yet users have to foot the bill. Seeing some minor industry crash due to unfulfilled expectations is just logical in such a case.


Your observation is spot on. LLMs represent a transformative capability, offering new ways to handle tasks that were previously more challenging or resource-intensive.


> The AI investment bubble is losing hype

I disagree. Palantir is trading at 200X earnings.


I think we found the guy with NVDA calls.



