LLMs create models, not algorithms. An algorithm is a rote sequence of steps to accomplish a task.
The following is an algorithm:
- plug in input to model
- say yes if result is positive, else say no
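That two-step algorithm can be sketched in Python. Everything here is a toy for illustration; the weights are made-up numbers and the "model" is a trivial linear function standing in for a real one:

```python
# Toy "model": the weights are just data, not a procedure.
weights = [0.5, -1.2, 0.3]  # made-up numbers for illustration

def model(x):
    """The model: a function parameterized by the weights."""
    return sum(w * xi for w, xi in zip(weights, x))

def classify(x):
    """The algorithm: a rote sequence of steps that *uses* the model."""
    # Step 1: plug the input into the model.
    result = model(x)
    # Step 2: say yes if the result is positive, else say no.
    return "yes" if result > 0 else "no"

print(classify([1.0, 0.2, 0.1]))  # prints "yes" (0.29 > 0)
```

Note the split: `classify` is the algorithm, `model` is something the algorithm calls, and `weights` is data the model reads.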
LLMs use models; the model is not an algorithm.
> There are patterns in the weights that could be steps in an algorithm.
Sure, but yeah... no.
"Could be steps in an algorithm" does not constitute an algorithm.
Weights are inputs; they are not themselves part of an algorithm. An algorithm might be used to come up with the weights, but still, don't confuse procedure with data.
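A sketch of that procedure/data distinction, using a hypothetical toy training loop (gradient descent on a one-weight linear model, nothing here is a real library's API): the training *algorithm* is a procedure, and the weights it emits are just data.

```python
def train(samples, lr=0.1, steps=100):
    """A toy training *algorithm*: a procedure that produces weights."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            # Gradient step for squared error on the model w * x.
            w -= lr * 2 * (w * x - y) * x
    return w  # the output is data, not a procedure

# The "weights" here are a single number -- data you can print or store.
w = train([(1.0, 2.0), (2.0, 4.0)])  # converges toward w = 2 (y = 2x)
```

The algorithm lives in `train`; once it returns, all that remains is `w`, which does nothing on its own until some other procedure uses it.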
I don't want to get too pedantic in that response.
The model can contain complex information.
There is already evidence it can form a model of the world.
So why not something like steps to get from A to B?
And, it is clear that LLMs can follow steps.
One doesn't place in the Math Olympiad without some ability to follow steps.
"Yes, an LLM model can contain the steps of an algorithm, especially when prompted to "think step-by-step" or use a "chain-of-thought" approach, which allows it to break down a complex problem into smaller, more manageable steps and generate a solution by outlining each stage of the process in a logical sequence; essentially mimicking how a human would approach an algorithm. "
> There is already evidence it can form a model of the world.
Perhaps.
> So why not something like steps to get from A to B.
Why not? Because a model and an algorithm are different. Simply having a model does not mean you have an algorithm. An algorithm is a deterministic set of steps; a model is typically a function, or set of functions, for producing results. If the result of that model is a list of steps (which it can also evaluate), that does not make the model an algorithm.
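To make that concrete, a sketch with a toy stand-in for an LLM (the function and its canned responses are invented for illustration, not any real API): the model can *emit* a list of steps as its output, yet it remains a function.

```python
def toy_model(prompt):
    """A model: a function from input to output. Here the output
    happens to *describe* steps, but the model itself is not those steps."""
    if "sort" in prompt:
        return ["compare adjacent items",
                "swap if out of order",
                "repeat until no swaps occur"]
    return ["no known steps"]

steps = toy_model("how do I sort a list?")
# `steps` is just data the model produced; actually carrying them out
# would require a separate procedure -- an algorithm -- outside the model.
```

The list of steps is output, the same way "yes"/"no" was output in the earlier example; describing an algorithm and being one are different things.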
> And, it is clear that LLMs can follow steps
Sure, because that is what the model is set up to do.
> Yes, an LLM model can contain the steps of an algorithm, especially when prompted to "think step-by-step" or use a "chain-of-thought" approach, which allows it to break down a complex problem into smaller
This is the model looking into its training data to find algorithms that seem to match the prompt, then printing out the steps of the algorithm and executing them. That is not an algorithm in and of itself.
I feel I'm on pretty solid ground here. "Algorithmic prompting" has nothing to do with whether a model is an algorithm. I'd ask you to google the difference between a model and an algorithm very thoroughly. If something follows an algorithm, I strongly suspect it cannot be a model by definition. It can still be an AI, though, as there are non-LLM AIs out there that do follow algorithms. If we are talking about LLMs, the M is for "model". Models and algorithms are different. A model that looks for an algorithm to use is a very sophisticated model, but it's still not an algorithm itself just because it can find, interpret, and use one.
If you think so, you should publish your results. It seems like a lot of bright people are going down the road of using LLMs for algorithmic tasks. To follow steps.
I think what I'm reaching for is a little more esoteric: out of all the data the model is trained on, it has also started building up algorithms/steps in its 'model', which is part of how it picks the next item.
The whole reason algorithmic prompting started was that people noticed the LLM was already attempting some steps, and that if it was helped along by prompting the steps, the results were better.
But I am using 'algorithm' rather loosely, as just 'steps', and they are a bit fuzzy; so not a purely mathematical algorithm, but more like fuzzy logic, a first start at reasoning.
edit
Also, I should clarify: I am not confusing the algorithm that makes the model with the model itself. I'm saying that within the model, it learns to follow steps.