When I read about potential optimizations like this, I can't believe that people trust LLMs enough to do things with minimal oversight. Do people really believe that "AI" products that use LLMs are capable enough to do things like control a computer, or write accurate code? By design, isn't _everything_ a "hallucination" or a guess? Is it really possible to overcome that?
I have written (overseen?) a few programs that we use in our production test systems using ChatGPT and Python. A program that sends actions to machines, queries them for results/errors/outputs, and then stores all that in a .csv which it later translates into a nicely formatted Excel file. It also provides a start-up guide to show the technician how to hook up things for a given test.
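For anyone curious what that logging step might look like, here's a minimal sketch in Python using only the standard library. The field names and results are invented for illustration; the actual program queries real machines:

```python
import csv
import io

# Hypothetical test results as they might come back from the machines
# (step names and values are made up for this example).
results = [
    {"step": "power_on", "status": "PASS", "output": "12.1V"},
    {"step": "comms_check", "status": "FAIL", "output": "timeout"},
]

def write_results_csv(rows, fh):
    """Store queried machine results in a CSV for later Excel conversion."""
    writer = csv.DictWriter(fh, fieldnames=["step", "status", "output"])
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
write_results_csv(results, buf)
print(buf.getvalue())
```

In practice you'd open a real file instead of a StringIO buffer, and a library like openpyxl would handle the CSV-to-formatted-Excel step.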
I am not a programmer. No one at my company is a programmer. It writes code that works and does exactly what we asked it to do. When the code choked while I was "developing" it, I just fed it back into ChatGPT to figure out. And it eventually solved everything. Took a day or so, whereas it would probably take me a month, or a contractor $10,000 and a week.
LLMs might be bad for high-level, salary-grade programming projects. But for those of us who use computers to do stuff, but can't get past the language barrier preventing us from telling the computer what to do, it's a godsend.
Really interesting. We programmers live in a bit of a bubble, so it’s good to get this perspective. Perhaps with LLMs we’ve finally reached the early dream of the “programmable computer for everyone” that seemed to slip out of reach after the ’80s.
In other words: Your problem was simple enough and well enough represented in the training corpus and you were a bit lucky. Also, the problem is not important enough for there to be a requirement for the code to be updatable/fixable at short notice, because effectively now nobody in your org knows how the solution actually works.
For this very constrained subset of a problem domain LLMs are indeed very suitable but this doesn't scale at all.
How do you overcome it as a human? If you think through it... you'll come to the conclusion that LLMs can be used to do all kinds of things. Humans don't write down code and then shove it into production, for example.
> Do people really believe that "AI" products that use LLMs are capable enough to do things like control a computer, or write accurate code?
Of course. It's not a hypothetical question. Almost all of my code is written by Claude 3.5 Sonnet. It's much more robust and accurate than my regular code and I've been programming for 20 years.
There is a band from Pasadena, California called Ozma. Way back in the day (1999), they released an album called “Songs of Audible Trucks and Cars” [0] via MP3.com. I later found out that they really wanted the title to be “Songs of Inaudible Trucks and Cars” [1], but MP3.com had a character limit on album titles, so they had to shorten it.
If, like me, you immediately thought "why not just drop the 'and' and replace it with an ampersand?", Ozma had to do that, too, and the parent comment quoted the album title incorrectly, alas.
Same thing. You die "of" every disease you have at the moment of death, or very near. Especially for diseases which impact your circulatory system such as Covid.
Swervedriver are actually working on a new record, "I Wasn't Born to Lose You", and have put out a few songs from it (more "Mezcal Head" than "99th Dream", IMO):
Sorry for the shameless plug, but: I play in a shoegaze-influenced dream pop band called Weed Hounds, and we recently put out our first LP, for anyone who's interested (think Pale Saints, Swirlies): http://open.spotify.com/album/0aABjwPh5l2UOM5yWWON11
Hi, I'm the founder of the organization. 85% of the class had never written a line of code before. We believe that there are people in Queens and other underserved communities who can learn to code and also pursue tech entrepreneurship.
Your quote in the articles says -- "We saw lots of people in the City University of New York system who graduated as computer science majors but weren't going into the tech industry" --- You can't graduate as a CS major without writing any code...So this quote has nothing to do with the people actually in your program?
That quote was part of a longer conversation about why we saw the need for this. The fact that even students studying CS lacked access and opportunity points to the opportunity to open this up to other people in these communities.
I know a former CS PhD student at a top 5 program who never coded before going into industry oddly enough - it's probably possible to focus more on the abstract side, although extremely rare.
Nope, this is like everyone doing websites in the '90s during the dotcom bubble.
Every small business trying to find honest help ended up hiring someone's 12-year-old cousin (through a spiffy LLC front) and got worse-than-useless websites full of security holes and zero accessibility.
Hope those projects are all games and fart apps and not mission-critical stuff. And since I have a very hard time thinking of mission-critical apps, I think the world will endure.
There are software companies that account for hosting fees and payment processing fees as COGS; COGS are non-marketing expenses that scale with sales, which can be nonzero for Internet software companies.
That is their projection for the future, and if you assume protracted litigation while they're restructuring is going to be the order of the day, that doesn't sound insane at all.