I asked ChatGPT to parse a quadkey (a standard numeric reference to a location on OSM). It pulled three different pieces of code from the internet and gave me three different answers, in three different countries, none of which was correct.
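(For reference, decoding one is mechanical. A minimal sketch, assuming the standard Bing Maps digit convention that quadkeys use, where each base-4 digit packs one x bit and one y bit; the function name is mine:)

def quadkey_to_tile(quadkey):
    # each base-4 digit holds one x bit (low) and one y bit (high),
    # most significant zoom level first
    x = y = 0
    zoom = len(quadkey)
    for i, digit in enumerate(quadkey):
        mask = 1 << (zoom - 1 - i)
        d = int(digit)
        if d & 1:
            x |= mask
        if d & 2:
            y |= mask
    return x, y, zoom

print(quadkey_to_tile('0231'))  # (3, 6, 4): tile x, tile y, zoom level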
The other day I asked it a geometry question. It stated (correctly) that the ratios of the corresponding sides are equal. But when given measurements from two triangles it got one of the ratios reversed.
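To illustrate the kind of slip I mean (made-up numbers here, not the ones I actually gave it):

# triangles with sides (3, 4) and (6, 8) are similar,
# so corresponding ratios should agree
print(3 / 6 == 4 / 8)  # True: both ratios are 0.5
print(3 / 6 == 8 / 4)  # False: one ratio reversed, 0.5 vs 2.0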
ChatGPT only really gives the illusion of being capable of logical reasoning. When presented with specialized problems, it is more likely to produce bulls*it.
I don't know how anyone who's used it for a bit can fail to see how useless it is.
When it contradicts itself, you can ask it to provide references for each side, and it will cheerfully fabricate citations that look perfect and relevant but turn out to be broken links or papers about cats. And this happens whether or not one of the "sides" is entirely fictional.
I am actually trying to learn JavaScript and Python off and on, and you would think there's nothing it's better suited for. It sure looks as though some work went into making its output for simple programming questions look good.
matplotlib seems byzantine and annoying, and I really thought this might be a good use case.
But no, it's just the same as anything else.
I type this: "add a square patch to a pyplot table cell" (and yes, that probably doesn't make any sense to someone who knows what they are doing, and that is the point).
And I get this:
# get the cell patch for the center cell
cell = tbl[1, 1].get_children()[0]
# create a square patch with a red fill color
patch = table.CellPatch(0.5, 0.5, width=0.3, height=0.3, facecolor='red')
# add the patch to the cell
cell.set_facecolor('white') # set the cell color to white so the patch shows up
cell.add_artist(patch)
It looks plausible when you have no idea what any of it means.
But I quickly discover (among other things) that there is no CellPatch method. Someone somewhere named a variable cellpatch, and ChatGPT adeptly combined the capitalization of a method with a variable name to produce a plausible alternative reality. It's amazing! But it's not helpful. It's not helpful precisely because it's so good at making its answers look plausible.
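For contrast, here is a sketch of what should actually work, as far as I can tell: cells in a matplotlib table are themselves Rectangle patches, so you recolor one directly instead of constructing a "CellPatch" (table contents made up for illustration):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.axis('off')
tbl = ax.table(cellText=[['a', 'b'], ['c', 'd']], loc='center')
# each cell is already a Rectangle patch, so there is
# nothing separate to construct; just recolor the cell
tbl[1, 1].set_facecolor('red')
plt.show()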
It did produce a snippet that worked when asked "convert html style 6 digit hex color codes to matplotlib colors".
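Though that one is about as trivial as it gets; matplotlib accepts "#RRGGBB" strings directly, and the explicit conversion is a single library call:

from matplotlib.colors import to_rgb

# '#RRGGBB' strings are already valid matplotlib colors;
# to_rgb gives the explicit (r, g, b) float tuple if you need it
print(to_rgb('#1e90ff'))  # (30/255, 144/255, 255/255)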
But my objection to giving it credit for trivial-but-correct material is that when it returns something trivial lifted directly from a page you could have Googled, you are getting strictly less information, because you still have to verify independently what you got.
It could be lifted straight from Stack Overflow, but was it from a question, or from a mistaken answer?
That's the thing. If you ask it a well-known fact, or the solution to a common problem that could otherwise be found in the top results of a Google search, then and only then might it return a reliable answer.
Which raises the question: WHY DOES EVERYONE THINK IT'S THE GOOGLE KILLER?!
At its current capacity, once the hype dies down, it's neither going to kill Google nor take my job.