For millennials and older Gen Z, yeah, but it's my understanding (and to my complete surprise as a millennial) that Snapchat is actually big again among actual children.
Can anyone who better knows the reality here chime in?
Yeah, one of my younger brothers falls into this category; his Snapchat streaks have been going for years - from middle school to mid-university so far, and that's not an exaggeration.
This assumes that software products in the future will remain at the same complexity as they are today, just with AI building them out.
But they won’t. AI will enable building even more complex software, which counterintuitively will result in needing even more human jobs to deal with this added complexity.
Think about how, despite an ever-growing number of free open source libraries making powerful stuff easy, developer jobs have only increased, not decreased.
What part of "general" in AGI do you not understand? There will be no new style of development for which the AGI will be poorly suited that all the displaced developers can move to.
For true AGI (whatever that means; let's say something that fully replicates human abilities), discussing "developers" only is a drop in the bucket compared to all the knowledge work jobs which will be displaced.
More likely they will tailor/RL-train these models to go after coders first, using RLHF with coders hired where labor is cheap. A number of reasons for this, of course:
- Faster product development on their side as they eat their own dogfood
- Devs are the biggest market in the transition period for this tech. Gives you some revenue from direct and indirect subscriptions that the general population does not need.
- Fear among the leftover coders is great for marketing
- Tech workers are paid well, which to VCs, CEOs, etc. makes it obvious where the value of this tech comes from. Not from new use cases/apps which would be greatly beneficial to society, but from effectively making people redundant and saving costs. New use cases/new markets are risky; not paying people is something any MBA/accounting type can understand.
I've heard some people say "it's like they are targeting SWEs". I say: yes, they probably are. I wouldn't be surprised if it takes SWE jobs while most other people see it as a novelty (barely affecting their lives) for quite some time.
I've made a similar argument in the past, but now I'm not so sure. It seems to me that developer demand was linked to large expansions in software demand, first from PCs, then the web, and finally smartphones.
What if software demand is largely saturated? It seems the big tech companies have struggled to come up with the next big tech product category, despite lots of talent and capital.
There doesn’t need to be a new category. Existing categories can just continue bloating in complexity.
Compare the early web with the complicated, JavaScript-laden, single-page-application web we have now. You need way more people now. AI will make it even worse.
Consider that in the AI-driven future, there will be no more frameworks like React. Who is going to bother writing one? Instead, every company will just have its own little custom framework built by an AI that works only for that company. Joining a new company means you bring generalist skills and learn how their software works from the ground up, and when you leave for another company, that knowledge is instantly useless.
Sounds exciting.
But there are also plenty of unexplored categories that we still can't access because the technology is insufficient. Household robots with AGI, for instance, may require instructions for specific services sold as "apps" that have to be designed and developed by companies.
The new capabilities of LLMs, and of large foundation models generally, expand the range of what a computer program can do. Naturally, we will need to build all of those things with code, which will be done by a combo of people with product ideas, engineers, and LLMs. There will then be specialization and competition on each new use case, e.g., who builds the best AI doctor, etc.
This is exactly what will happen. We'll just up the complexity game to entirely new baselines. There will continue to be good money in software.
These models are tools to help engineers, not replacements. Models cannot, on their own, build novel things, no matter how much the hype suggests otherwise. What they can do is remove a hell of a lot of accidental complexity.
> These models are tools to help engineers, not replacements. Models cannot, on their own, build novel things, no matter how much the hype suggests otherwise.
But maybe models + managers/non technical people can?
There’s a very good chance that if a company can replace its programmers with pure AI, then whatever they’re doing is probably already being offered as a SaaS product, so why not just skip the AI and buy that? Much cheaper, and you don’t have to worry about dealing with bugs.
Exactly. Most businesses can get away with not having developers at all if they just glue together the right combination of SaaS products. But this doesn’t happen, implying there is something more about having your own homegrown developers that SaaS cannot replace.
The risk is not SaaS replacing internal developers. It's about increased productivity of developers reducing the number of developers needed to achieve something.
Again, you’re assuming product complexity won’t grow as a result of new AI tools.
Three decades ago you needed a big team to create the type of video game that one person can probably make on their own today, in their spare time, with modern tools.
But now modern tools have been used to make even more complicated games that require bigger teams than ever and huge amounts of money. One person has no hope of replicating that now, but maybe in the future with AI they can. And then AAA games will be even more advanced.
Maybe this is a dumb question, but why can’t you just record video with two iPhones evenly spaced with some kind of jig and synchronize the video output to get something usable for 3D video?
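For reference, the geometry behind the two-camera idea is plain stereo triangulation. Here's a toy sketch in Python (the numbers are assumed placeholders, not real iPhone specs, and it glosses over the genuinely hard parts: shutter sync, lens calibration, and rectification):

    # Depth from stereo disparity: Z = f * B / d.
    # All numbers below are made-up placeholders, not real iPhone specs.
    focal_length_px = 1500.0  # assumed focal length in pixels (depends on phone/lens)
    baseline_m = 0.065        # assumed spacing between the cameras, roughly eye distance
    disparity_px = 20.0       # how far a feature shifts between left and right frames

    depth_m = focal_length_px * baseline_m / disparity_px
    print(f"estimated depth: {depth_m:.2f} m")  # ~4.88 m with these numbers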
Yes, but ideas can have infinite resolution, while the resolution of language is finite (for a given length of words). So not every idea can be expressed with language, and some ideas that are in fact different will sound the same, because there aren't enough unique language structures to express them. The end result looks like mimicry.
Ultimately, though, an LLM has no “ideas”; it’s purely a language model.
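The finiteness claim above can be made precise: over a vocabulary of size V, the number of utterances of length at most L is bounded by a geometric sum,

    \sum_{k=0}^{L} V^k = \frac{V^{L+1} - 1}{V - 1} < \infty

so bounded-length language can only ever distinguish finitely many ideas.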
My use of the word "appear" was deliberate. Whether humans say those words or an LLM says those words, they will look the same; so distinguishing whether the underlying source was an idea or just language autoregression will keep getting harder and harder.
I don't think I would put it as the LLM having no "ideas"; I would say it doesn't generate ideas by exactly the same process as we do.
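To make "language autoregression" concrete: it just means repeatedly sampling the next token given the tokens so far. A toy sketch with a made-up bigram table (a real LLM conditions on the whole context with a neural network, but the sampling loop has the same shape):

    import random

    # Made-up bigram "model": next-token distributions keyed by the last token.
    bigram = {
        "the":   [("words", 0.5), ("ideas", 0.5)],
        "words": [("sound", 1.0)],
        "ideas": [("sound", 1.0)],
        "sound": [("identical", 0.7), ("different", 0.3)],
    }

    out = ["the"]
    while out[-1] in bigram:  # stop once we sample a token with no successors
        choices, weights = zip(*bigram[out[-1]])
        out.append(random.choices(choices, weights=weights)[0])
    print(" ".join(out))  # e.g. "the ideas sound identical"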
There is also the concept of qualia, which are the subjective properties of conscious experience. There is no way, using language, to describe what it feels like for you to see the color red, for example.
Of course there is. There are millions of examples of usage for the word "red", enough to model its relational semantics. Relational representations don't need external reference systems. LLMs represent words in the context of other words, and humans represent experience in relation to past experiences. The brain itself is locked away in the skull, connected only by a few bundles of unlabeled nerves; it gets patterns, not semantic symbols, as input. All semantics are relational; they don't need access to the thing in itself, only to how it relates to all other things.
That's a bunch of arguments that have nothing to do with anything I said.
There are millions of examples of usage of the word red - none of them communicate the subjective experience of seeing red.
The brain is purely physical, sure, there is a physical process that can be described in language when a person sees red. Explaining that process and every fact about the physics of light and color theory to a blind person won't make them understand what seeing red feels like. It can't be communicated in language, or if it can, there are no examples of it having ever been done.
> Relational representations don't need external reference systems.
What? This is irrelevant, I'm not saying that there is no such thing as subjective experience, I'm saying that communicating the essence of subjective experience is impossible. All semantics are relational, but not all relationships exist, not all sentences have meaning, and not all things that have meaning can necessarily be expressed using semantic relations.
This is pretty uncontroversial and common to most humans. If you've never experienced something and told someone, "You've got to see it yourself, no description can come close", then you need to get out more. If you have, then the existence of qualia is obvious, and the limitations of language in communicating our sense experience are clear.
> There are millions of examples of usage of the word red - none of them communicate the subjective experience of seeing red.
The trick is that all of them provide perspectives, and the model composes those perspectives in the embedding space. "Red" is related to apples but also to communism in its internal vector space. And it is also related to all the situations where humans used the word "red" and expressed emotions, so it encodes emotional valence as well.
I think the confusion comes from how we expect models to represent things. People think models represent the thing in itself, but instead they represent how each thing relates to other things. Thus inputs are both content and reference, they have dual status, and models are able to create this semantic space from the inputs themselves, without external reference.
In a relational system you don't need access to the object in itself, just how its perception relates to other perceptions.
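As a concrete toy version of this relational picture (the vectors below are invented for illustration, not taken from any real model), each word is just a point in a space, and "red" relates to "apple" or "communism" only through geometry:

    import numpy as np

    # Invented 4-d embeddings; real models learn hundreds of dimensions from usage.
    emb = {
        "red":       np.array([0.9, 0.8, 0.1, 0.7]),
        "apple":     np.array([0.8, 0.9, 0.2, 0.1]),
        "communism": np.array([0.7, 0.1, 0.1, 0.9]),
        "blue":      np.array([0.1, 0.7, 0.8, 0.1]),
    }

    def cos(a, b):
        # Cosine similarity: how "close" two word vectors are, direction-wise.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for w in ("apple", "communism", "blue"):
        print(f"red ~ {w}: {cos(emb['red'], emb[w]):.2f}")
    # With these toy vectors, "red" sits closer to "apple" (~0.89) and
    # "communism" (~0.84) than to "blue" (~0.53): meaning as pure relation.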
Has there ever been a scenario where someone gave up on a startup, dumped the code, then someone picked it up and turned it into a successful business?
This isn't exactly what you mentioned, but Josh Pigford gave up on his startup Maybe, shut down the company and open sourced it. It got incredibly popular on Github, so he started it up again. (https://maybefinance.com/)
A bigger example might be Mozilla. Netscape Navigator was a closed-source browser, and when it was dumped, some employees petitioned to open source the code. That's how we got Mozilla and, eventually, Firefox.
Close but not quite (and keeping some parts intentionally vague):
There was a data startup that ran for ~3 years until the 10-person team imploded in a tough market. Everyone left or was laid off, the office was closed, operations ceased, etc. The website and some data collection were still left running in zombie mode.
A couple of years later, the non-technical founder and CEO (who at this point had bought everyone else out) saw demand and PMF emerging. Mostly through serendipity, I was one of 3 people brought in to "see what we can do with it", originally intended as a 3-month project. We came in with zero context and jumped right into figuring out the ship we were now sailing. I was able to have one or two conversations with the previous CTO, but that was about it in terms of overlap.
Things progressed and within a few months we found ourselves as co-founders of "Company v2.0". My partner took over as CEO.
I left after 2 years, around our Series A, when we were 20-30 employees and arguably a leader in our market segment. At that point, significant and crucial parts of the stack and codebase were inherited (and evolved, obviously) from the original efforts of the previous team. The company is now an established brand, has offices in several countries, and had a successful Series B last year.
Life expectancy and median age (I didn't find the average, but I assume it is pretty close) were the first things I checked when I read your comment, but life expectancy globally is in the 70s and the median person is about 30. I don't understand how the average person could have only a little more than 10 years left.
Some people might be minutes away from death, dragging the average way down. And the youngest adults alive right now will probably not live more than another 60-70 years, max. And there are a lot more old people than young people right now, due to improved healthcare standards. It’s possible.
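The mechanics of the calculation are just a weighted average of remaining life expectancy over the age pyramid; a minimal sketch with made-up numbers (real demographic tables would settle the actual figure, and the answer depends entirely on the assumed distribution):

    # Fraction of the population at each representative age: made-up numbers.
    age_share = {10: 0.25, 30: 0.35, 50: 0.25, 70: 0.15}
    # Expected remaining years at each age: also made up, not a real life table.
    years_left_at_age = {10: 64, 30: 48, 50: 30, 70: 16}

    avg_years_left = sum(share * years_left_at_age[age]
                         for age, share in age_share.items())
    print(f"average remaining years: {avg_years_left:.1f}")
    # ~43 with these toy numbers; an age pyramid skewed heavily toward
    # the very old would pull the figure way down.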
I think we’ll probably just convert ourselves into data, beam ourselves across space at the speed of light, and install ourselves into machines deployed at various sites.
Except I'd be the one left behind, so I'd really know the difference, and if my copy was good, it would also know it's the copy and the original was left behind.
All I’m hearing is that in the future engineers will become so dependent on LLMs they will be personally shelling out $20k or more a year on subscriptions just to be able to do their jobs.