Agree: why? I can't think of a single reason, and I've worked on Java docs professionally. This was literally my day job for a while, and I have no clue what they're trying to hint at here.
> We thought about it in the context of elder care, where they could ask the robot to perform a task for them, but we first need the models to be a little better. Hence why we start here: to collect the data before it spreads further.
I hope you continue this work for the foreseeable future, because this would be such a boon if it all pans out well.
Thank you. Yes, there's a lot of positive that can come out of this technology, and it needs to be developed with everyone's help in order to get there.
Considering the entire purpose of the original post is asking people what they're doing, why do you have such a problem with the top-voted argument? If the top-voted argument were in favour of this tech taking jobs, would you feel better?
> I would guesstimate that less than 1 out of 10,000 developers are solving truly novel problems with any regularity. And those folks tend to work at places like Google Brain.
Looks like the virtue signalling is done on both sides of the AI fence.
Not sure how you equate that statement with virtue signaling.
This is just a natural consequence of the ever-growing repository of solved problems.
For example, consider that sorting a list is agreed upon as a solved problem. Sure, you could rediscover quicksort on your own, but that wouldn't be novel.
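To make the "solved problem" point concrete, here's a minimal textbook quicksort sketch in Python (not anything from the thread, just an illustration of how thoroughly worked-out this territory is):

```python
def quicksort(xs):
    # Classic divide-and-conquer: pick a pivot, partition the rest
    # into smaller and greater-or-equal halves, recurse on each.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))
```

Any developer (or LLM) can reproduce this in minutes precisely because it sits in that repository of solved problems; writing it again is practice, not research.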
I'm not living in denial; I think LLMs are extremely useful and have huge potential. But people who are all "omg my startup did this and we reduced our devs by 150% so it must be the end-all tech!" are just as insufferable as the "nope, the tech is bad and won't do anything at all" crowd.
And before you mention it: the hyperbole is for effect, not an exact representation.