
GPT4 appears very intelligent when you discuss program code with it. It's simply crazy; it can write SNOBOL4 code to extract a field from /etc/passwd.

When you discuss other things, it goes off the rails a lot more. For instance, I have had it quote some passages of classic English poetry to me, flatly stating that those passages contain certain words. The passages did not contain any trace of those words, or even remotely similar words. In that situation, GPT4 was being dumber than /bin/grep, which can at least confirm that a piece of text doesn't contain a string.

GPT4 is deliberately trained as a coding technician, so that it will convince programmers who will become its advocates.

1. Programmers are vain and believe that everything they do requires great intelligence. (Not just some of it.)

2. GPT4 can "talk shop" with programmers: saying apparently intelligent things, performing complex refactorings, intuiting an understanding of a piece of code from minimal context based on meaningful identifier names, and so on.

3. Thus, some programmers will falsely conclude that it's highly intelligent, like them.

To be sure, what the technology does is unmistakably a form of intelligence. It solves complex problems of symbol manipulation and logic, in flimsy contexts. To deny that it is artificial intelligence is silly.

AI /= AGI

That's not where the goalposts are for the definition of AI.

Computers playing chess is AI. Google Assistant is AI. An expert system from 1990 is AI.



> It solves complex problems of symbol manipulation and logic

No, it solves the assumed-to-be-complex problem of constructing probable and believable text given a prompt.


The problem seems obviously complex. People need special education to do it, and some never learn how. Making a system that does it (LLMs) is beyond most people's understanding, and even people who do understand usually can't build one, for a variety of reasons, e.g. money.


It can also “talk shop” with lawyers, doctors...

Are we all vain? I guess we are, but I find this take to be a bit too simplistic.

I do agree programmers are arrogant and think everything they do requires vast amounts of intelligence.

I’m not sure what your point is. I am honestly curious. You think having coders as major advocates for your product is somehow a brilliant strategy? If anything, this will work against it.


GPT4 can talk shop, and demonstrate useful code generation and refactoring, as well as understanding.

For instance, as an exercise, I had it write a maze-generating Lisp program that produces ASCII output. I wanted it in a relatively little known Lisp dialect, so I described some of that dialect's features to it, and how they differ from what it had assumed in the code it generated.

GPT4 hypothesized, for one thing, that the dialect has a let which can destructure multiple values: (let ((x1 y1) (function ..)) ((x2 y2) (function ...)) ...).

In plain English, I explained to GPT4 that the target Lisp dialect doesn't have multiple values, but that when we have a pair of values we can return a cons cell (cons a b) instead of (values a b). I also explained that we can destructure conses using tree-bind, e.g. (tree-bind (a . b) (function ...) ...), and that this takes only one pattern, so it has to be used several times.
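
Concretely, the shape of the change is something like this (an illustrative sketch, not the actual maze code from the chat; neighbor is a made-up function name):

    ;; Return a pair as a cons cell rather than as multiple values:
    (defun neighbor (x y)
      (cons (+ x 1) y))        ; instead of (values (+ x 1) y)

    ;; Destructure with tree-bind, one pattern per form, nesting as needed:
    (tree-bind (x1 . y1) (neighbor 0 0)
      (tree-bind (x2 . y2) (neighbor x1 y1)
        (list x1 y1 x2 y2)))   ; => (1 0 2 0)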

GPT4 correctly updated the function to return a cons cell and replaced the flat let with nested tree-bind.

At the end of the chat, when the code was almost working, I made a bugfix to the function which allocates the maze grid, to get it working.

I told GPT4: this version of the grid allocation function makes it work. Without it, the output is almost blank except for the vertical walls flanking the maze left and right. Can you explain why?

GPT4 correctly explained why my function works: the version it wrote had used a single row vector for all of the grid's rows, so the rows were shared and a change to one row showed up in every row. It explained it like a computer scientist or seasoned developer would.
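
The bug it described is the classic one, roughly this (a generic Lisp sketch with made-up names, not the code from the chat):

    ;; Buggy: :initial-element is evaluated once, so every row slot holds
    ;; the very same vector object; carving a cell in one row carves it in all.
    (defun make-grid-buggy (w h)
      (make-array h :initial-element (make-array w :initial-element #\#)))

    ;; Fixed: allocate a fresh row vector for each row.
    (defun make-grid (w h)
      (let ((grid (make-array h)))
        (dotimes (i h grid)
          (setf (aref grid i) (make-array w :initial-element #\#)))))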

It's like a somewhat dim-witted, but otherwise capable coding clerk/technician, which talks smart.

With GPT4, you're Sherlock and it is Watson, so to speak. (Sorry, IBM Watson.) It can speak the language of crime investigation and make some clever inferences. In the end, you do all the real work of designing and debugging, and of weighing complex requirements against each other in the broader context. It saves you the keystrokes of doing the tedious coding you're used to doing yourself.

On the other hand, you expend some keystrokes explaining yourself. Some of the chat could be saved and used as documentation, because it captures requirements and why some things are done in a certain way and not another (rejected alternatives).


You seem to be saying that to have general intelligence one must be an expert in every field? That rules out quite a number of humans too.


No, I'm just saying that there is AI, and it's okay to use the term. There has been for a long time. E.g. SHRDLU in 1970 was/is AI.


Most humans can tell you pretty accurately whether or not a passage contains a word.


That's most likely due to the way it tokenizes words. You have to be careful here, or else for similar reasons you might imply that dyslexics aren't intelligent.


So can GPT4, but not in that bullshit answer it was giving at that time.

It's ironic how GPT4 conforms to the glitch-bot meme.


> GPT4 is deliberately trained as a coding technician, so that it will convince programmers who will become its advocates.

I'm a programmer and I've used GPT4 for a variety of tasks (well, tried to). The results have been mediocre on average: usually syntactically correct, but more often than not semantically incorrect. It usually ends in frustration, as GPT-4 keeps responding with confidently incorrect answers, and upon the slightest expression of doubt it tends to spin in circles:

    ChatGPT: <Implausible response #1>
    Me: Are you sure? Reasons A, B, C [...]
    ChatGPT: I apologize... <implausible response #2>
    Me: Are you sure? Reasons D, E, F [...]
    ChatGPT: I apologize ... <repeats implausible response #1>

I'd like to know what the people who are so impressed with GPT-4's programming capabilities are actually doing with it. It must be TODO apps, solving leetcode problems, writing basic Express.js routers, some basic React components, and code for <one of the top 10 popular libraries>. The kind of thing ChatGPT has seen a million examples of.


To use the tool effectively, you can't use Socratic questioning on it, or not always.

You have to already know how to code the problem, so that you can spot what is wrong, and, if necessary, tell the thing how to fix it.

The fix is not always going to come from the training data alone; the model has to scrape it together from what you give away in your follow-up questions, combined with its training data.

I went through an exercise in which I encoded a paragraph from Edgar Allan Poe with a Vigenere cipher. I presented this to GPT4 and asked it to crack it. First I had to persuade it that this was ethical: it's a ciphertext I made myself, so nobody's secret is being broken. We worked out a protocol by which it could ask me questions showing that I know the plaintext.
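
The cipher itself is nothing fancy: each plaintext letter is shifted by the corresponding key letter, cycling through the key. Roughly this, as a letters-only Lisp sketch (illustrative only, not the exact code behind the exercise):

    ;; Vigenere encoding sketch (Common Lisp, A-Z only; non-letters pass
    ;; through unchanged and do not advance the key).
    (defun vigenere-encode (plaintext key)
      (let ((ki 0))
        (map 'string
             (lambda (ch)
               (if (alpha-char-p ch)
                   (let* ((p (- (char-code (char-upcase ch)) (char-code #\A)))
                          (k (- (char-code (char-upcase (char key (mod ki (length key)))))
                                (char-code #\A))))
                     (incf ki)
                     (code-char (+ (char-code #\A) (mod (+ p k) 26))))
                   ch))
             plaintext)))

    ;; (vigenere-encode "ATTACK AT DAWN" "LEMON") => "LXFOPV EF RNHR"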

It was quite a long chat, during which I had to give away many hints.

In the end I basically gave the answer away, and the thing acted like it had cracked the key. Then I reproached it, and it admitted that, yes, it hadn't cracked the key but had used my hint to locate the E. A. Poe passage.

Basically, if you know the right answer to something, and GPT4 isn't getting it, it will not get it until you drop enough hints. As you drop the hints, it will produce rhetoric suggesting that it's taking credit for the solution.


But out of all the programmers I know, only certain ones are embracing it; the rest are still stuck in "I tried it and the code wasn't exactly what I wanted - it's all hype" land. And I think for one of the reasons you say: programmers are vain and believe everything they do requires great intelligence. A lot are still missing the forest for the trees.


I tried hard to use it for my day job, but it's next to useless there; it takes more time to get it to go where I want than to just do the work myself. My day job is far from requiring great intelligence, but the tasks are too specific for ChatGPT.

I've used it for side projects, though, especially the front-end stuff that I absolutely hate (JS), and it works fine for that. But that's because I'm absolutely garbage at it in the first place, and I'm probably asking it to solve the most-answered questions ever on Stack Overflow & co.


What's your day job, out of interest?


I think a lot of it is due to the unresolved legal questions. Basically nobody can use it professionally yet. As soon as that happens I expect it to become a standard tool.

Although... on the other hand there are plenty of programmers that don't even use IDEs still.


It’s a bit concerning how many folks I’ve met at meetups who talk about how they are using ChatGPT at work. The few I’ve pressed on it have said they’re not supposed to, but they do it off the company hardware so no one knows.


For programming, I find ChatGPT to be a great rubber duck.


I got a coding challenge as part of an interview process. I had 2h to complete it, but I finished in 30 mins thanks to ChatGPT. I wrote some test cases and told ChatGPT to generate more. I reviewed them and copied a few relevant ones. It also hinted that I could add support for fractions.


Stack Overflow can do that for me too, and no Bill Gates negative karma is involved.


> That's not where the goalposts are for the definition of AI.

We passed that goalpost decades ago, if you haven't realized.

The simplest form of AI is just a series of if-else statements (an expert system). A highly sophisticated network of conditional statements can make fully informed decisions far faster and more accurately than any human mind.
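
In caricature, something like this (a deliberately toy sketch, not any real system):

    ;; A two-rule "expert system": classify an animal from yes/no features.
    (defun classify (has-feathers lays-eggs)
      (cond ((and has-feathers lays-eggs) 'bird)
            (lays-eggs 'reptile)
            (t 'mammal)))

    ;; (classify t t)   => BIRD
    ;; (classify nil t) => REPTILE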


> To be sure, what the technology does is unmistakably a form of intelligence.

> Computers playing chess is AI.

Computers doing anything are following the programming of their programmers. Without feeling and free will, there is no AI.


> Without feeling and free will, there is no AI.

That's quite a leap to that conclusion, friend.



