I mean that's just confabulating the next token with extra steps... ime it does get those wrong sometimes. I imagine there's an extra internal step to validate the syntax there.
I'm not arguing for or against anything specifically, I just want to note that in practice I assume that to the LLM it's just a bunch of repeated prompts containing the entire convo: after outputting special 'signifier' tokens, the LLM suddenly gets a prompt that includes the results of the program, which was executed in some environment. For all we know various prompts were involved in setting up that environment too, but I suspect not.
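Roughly, the loop I'm imagining looks like the sketch below. To be clear, this is guesswork about the mechanism, not any vendor's actual implementation: `fake_llm`, `run_sandboxed`, and the `<run>` marker are all made-up names standing in for the model, the execution environment, and the 'signifier' tokens.

```python
# Hypothetical sketch of an LLM tool-execution loop, assuming:
# - fake_llm() stands in for the model (here a hardcoded stub),
# - <run>...</run> stands in for the special 'signifier' tokens,
# - run_sandboxed() stands in for the execution environment.
import contextlib
import io
import re

TOOL_RE = re.compile(r"<run>(.*?)</run>", re.DOTALL)

def fake_llm(conversation):
    # Stub model: asks to run code once, then answers from the result.
    if "RESULT:" in conversation:
        return "The program printed 4."
    return "Let me check. <run>print(2 + 2)</run>"

def run_sandboxed(code):
    # Stub environment: a real host would isolate this, not bare exec().
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def chat(user_msg, max_turns=5):
    conversation = f"USER: {user_msg}\n"
    for _ in range(max_turns):
        reply = fake_llm(conversation)         # whole convo re-sent each turn
        conversation += f"ASSISTANT: {reply}\n"
        m = TOOL_RE.search(reply)
        if not m:
            return reply                       # no signifier tokens: done
        result = run_sandboxed(m.group(1))     # host runs the emitted code...
        conversation += f"RESULT: {result}\n"  # ...and injects it as a new prompt

print(chat("what is 2+2?"))
```

From the model's side there's nothing special going on: each turn is just another prompt, and the program output shows up as more text in the conversation, exactly as described above.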
In retrospect, we should probably bring back institutionalization and have more psychiatric hospitals run by the state. Some people just can't be helped and need to be kept somewhere for the rest of their lives, away from society. Hopefully, though, we could raise standards so they are all treated fairly, with no lobotomies.
This is one of those ideas that gets brought up often in the 50s-lionizing, "return to traditionalism" discourse, and one easily discredited by thinking even briefly about the way government funding influences economic activity in the US. To wit: administrators start looking for more opportunities for "business". When the hammer is, "being forcibly institutionalized," and the nails are, "whoever could conceivably pad our numbers," I would rather just not give Home Depot the building permit.
No, a thousand times no. That thinking has rightfully been placed in the waste bin of history. How about we deal with systemic inequality and raise the standard of living for everyone, so folks don't grow up in desperate situations, and families and communities have enough resources to take care of themselves.
I'm using the web as a synecdoche for the Internet as a whole because before the Web there wasn't much of a reason for Joe and Jane Q Public to use the Internet.
yup. I considered myself an /extremely/ verbal person when reasoning, but what I do with the above feels closest to 'moving the 1', almost like balancing a mental scale.
I never really noticed that before. I'm not great at math, fwiw.