So, you can assign github issues to this thing, and it can handle them, merge the results in, and mark the bug as fixed?
I kind of wonder what would happen if you added a "lead dev" AI that wrote up bugs, assigned them out, and "reviewed" the work. Then you'd add a "boss" AI that made new feature demands of the lead dev AI. Maybe the boss AI could run the program and inspect the experience in some way so it could demand more specific changes. I wonder what would happen if you just let that run for a while. Presumably it'd devolve into some sort of crazed noise, but it'd be interesting to watch. You could package the whole thing up as a startup simulator, and you could watch it like a little ant farm to see how their little note-taking app was coming along.
It's actually a decent pattern for agents. I wrote a pricing system with an analyst agent, a decision agent, and a review agent. They work together to make decisions that comply with policy. It's funny to watch them chatter sometimes; they really play their roles. If the decision agent asks the analyst for policy guidance, it refuses and explains that its role is to analyze. They do often catch mistakes that way, though, and the role playing gets good results.
Python classes. In my framework agents are class instances and tools are methods. Each agent has its own internal conversation state. They're composable, and each agent has tools for communicating with the other agents.
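Roughly this shape (a toy sketch, not my actual framework; every name here is made up):

```python
# Minimal sketch of the idea above: agents as class instances, tools as
# methods, each agent holding its own conversation state. Agent, ask_peer,
# and call_llm are all hypothetical names.
from dataclasses import dataclass, field

def call_llm(system_prompt: str, messages: list) -> str:
    # Stand-in for a real chat-completion call (OpenAI, Gemini, a local model, ...).
    raise NotImplementedError("plug in your model client here")

@dataclass
class Agent:
    name: str
    system_prompt: str
    peers: dict = field(default_factory=dict)     # other agents this one can talk to
    messages: list = field(default_factory=list)  # the agent's own conversation state

    def send(self, content: str) -> str:
        """Append a user message, call the model, record and return the reply."""
        self.messages.append({"role": "user", "content": content})
        reply = call_llm(self.system_prompt, self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    # Tools are just methods; this one lets an agent consult another agent.
    def ask_peer(self, peer_name: str, question: str) -> str:
        return self.peers[peer_name].send(question)

analyst = Agent("analyst", "You analyze pricing data and write reports.")
decider = Agent("decider", "You set prices in line with policy.", peers={"analyst": analyst})
```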
Generally, I keep the context. If I'm one-shotting then I invoke a new agent. All calls and responses append to the agent's chat history. Agents are relatively short-lived, so the context length isn't typically an issue. With the pricing agent the initial data has sometimes been longer than the context window, but that just means it needs more preprocessing. If there is a real reason to manage it more actively, I can reach into the agent internals. I have a tool-call emulation layer, because some models have poor native tool support, and in those cases it's sometimes necessary to retry a call when the response fails validation; I only keep the last successful try in the conversation history.
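Something like this (a toy sketch with made-up names, assuming a Pydantic schema for the expected reply and the hypothetical `Agent` above):

```python
# Sketch: retry a prompt-emulated tool call until the reply validates, then
# keep only the last (successful) exchange in the agent's history.
# PriceDecision and the error-feedback wording are assumptions.
import json
from pydantic import BaseModel, ValidationError

class PriceDecision(BaseModel):
    sku: str
    new_price: float
    reasoning: str

def call_with_retries(agent, prompt: str, max_tries: int = 3) -> PriceDecision:
    mark = len(agent.messages)                 # where the history stood before we started
    for _ in range(max_tries):
        reply = agent.send(prompt)
        try:
            decision = PriceDecision.model_validate(json.loads(reply))
        except (json.JSONDecodeError, ValidationError) as err:
            prompt = f"Your last reply failed validation: {err}. Reply with valid JSON only."
            continue
        del agent.messages[mark:-2]            # drop failed attempts, keep the successful pair
        return decision
    raise RuntimeError("tool call still failed validation after retries")
```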
There is one special case where I manage it more actively. I wrote a REPL process analyst to help build the pricing agent and refine the policy document. In that case I would have long threads with an artifact attachment, so I added a facility to redact old versions of the artifact, replacing them with [attachment: filename] and keeping just the last one. It works better that way because multiple versions in the same conversation history confuse the model, and I don't like to burn tokens.
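The redaction itself is trivial if each message records which attachment it carries (a sketch; the message structure here is made up):

```python
# Sketch: keep only the newest copy of an attached artifact in the history,
# replacing older copies with a short placeholder.
def redact_old_artifacts(messages: list[dict], filename: str) -> None:
    carriers = [m for m in messages if m.get("attachment") == filename]
    for old in carriers[:-1]:                      # everything except the latest copy
        old["content"] = f"[attachment: {filename}]"
```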
For longer-lived state, I give the agent memory tools. For example, the pricing agent's initial state includes the most recent decision batch and reasoning notes, and the agent can request older copies. The agent also keeps a notebook which it is required to update, allowing it to develop long-running strategies and experiments. And it uses the notebook to do just that. Honestly, the whole system works much better than I anticipated. The latest crop of models is awesome, especially Gemini 2.5 Flash.
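Sketching the idea (made-up names and schema, not the real system): the memory tools can be as plain as a couple of methods backed by SQLite.

```python
# Sketch: long-lived memory exposed to the agent as tool methods.
import sqlite3

class PricingMemory:
    def __init__(self, path: str = "pricing_memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS batches (id INTEGER PRIMARY KEY, body TEXT)")
        self.db.execute("CREATE TABLE IF NOT EXISTS notebook (id INTEGER PRIMARY KEY, body TEXT)")

    def get_batch(self, batch_id: int) -> str:
        """Tool: fetch an older decision batch by id."""
        row = self.db.execute("SELECT body FROM batches WHERE id = ?", (batch_id,)).fetchone()
        return row[0] if row else "no such batch"

    def update_notebook(self, body: str) -> str:
        """Tool: record strategy notes; the agent is required to call this every run."""
        self.db.execute("INSERT INTO notebook (body) VALUES (?)", (body,))
        self.db.commit()
        return "notebook updated"
```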
Product prices for thousands of listings across various ecommerce channels.
Funny you mention keyword bids: I use algorithms and ML models for those, but not LLMs, yet. Keyword bids are a different problem and more difficult in some ways due to sparsity. I'm actively working on an agentic system that pulls the big levers using data from the predictive models. I'm trying to tie everything together into a more unified and optimal approach, a long-running challenge that I finally have the tools to meet.
Langroid enables tool-calling with practically any LLM via prompts: the dev just defines tools using a Pydantic-derived `ToolMessage` class, which can define a tool handler, additional instructions, etc. The tool definition gets transpiled into appropriate system-message instructions. The handler is inserted as a method into the Agent, which is fine for stateless tools; alternatively, the agent can define its own handler for the tool in case tool handling needs agent state. In the agent response loop our code detects whether the LLM generated a tool, so that the agent's handler can handle it.
See ToolMessage docs: https://langroid.github.io/langroid/quick-start/chat-agent-t...
In other words we don't have to rely on any specific LLM API's "native" tool-calling, though we do support OpenAI's tools and (the older, deprecated) functions, and a config option allows leveraging that. We also support grammar constrained tools/structured outputs where available, e.g. in vLLM or llama.cpp: https://langroid.github.io/langroid/quick-start/chat-agent-t...
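Roughly, a tool definition looks like this (a simplified sketch; see the docs linked above for the exact, current API):

```python
# Simplified sketch of the ToolMessage pattern described above; check the
# linked docs for the authoritative API and option names.
import langroid as lr

class SquareTool(lr.agent.ToolMessage):
    request: str = "square"                       # the name the LLM uses to call the tool
    purpose: str = "To compute the square of <number>."
    number: int

    def handle(self) -> str:                      # stateless handler, inserted into the agent
        return str(self.number ** 2)

config = lr.ChatAgentConfig(
    use_tools=True,            # prompt-based tools, works with practically any LLM
    use_functions_api=False,   # set True to use OpenAI-native function calling instead
)
agent = lr.ChatAgent(config)
agent.enable_message(SquareTool)
```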
Love it, I did something very similar, deriving a Pydantic model from the function signature. It's simpler without the native tool-call API, even though occasional retries are required when the response fails to validate. I'll have to give Langroid a try.
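The derivation is just a small helper (a generic sketch with `inspect` plus `pydantic.create_model`, not my actual code):

```python
# Sketch: derive a Pydantic model from a tool function's signature. The
# model's JSON schema goes into the system prompt, and the LLM's JSON reply
# is validated against it (retrying on failure, as described above).
import inspect
from pydantic import create_model

def tool_model(fn):
    fields = {}
    for name, param in inspect.signature(fn).parameters.items():
        annotation = param.annotation if param.annotation is not inspect.Parameter.empty else str
        default = param.default if param.default is not inspect.Parameter.empty else ...
        fields[name] = (annotation, default)
    return create_model(f"{fn.__name__}_args", **fields)

def set_price(sku: str, new_price: float) -> str:
    """Example tool function (hypothetical)."""
    return f"{sku} -> {new_price}"

SetPriceArgs = tool_model(set_price)
print(SetPriceArgs.model_json_schema())   # schema text to embed in the system prompt
```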
I had not thought about sharing it. I rolled my own framework, even though there are several good choices. I'd have to tidy it up, but would consider it if a few people ask. Shoot me an email, info in my profile.
The more difficult part which I won't share was aggregating data from various systems with ETL scripts into a new db that I generate various views with, to look at the data by channel, timescale, price regime, cost trends, inventory trends, etc. A well structured JSON object is passed to the analyst agent who prepares a report for the decision agent. It's a lot of data to analyze. It's been running for about a month and sometimes I doubt the choices, so I go review the thought traces, and usually they are right after all. It's much better than all the heuristics I've used over the years.
I've started using agents for things all over my codebase; most are much simpler. Some earlier uses of LLMs might have been called that too, before the phrase became so popular. As everyone is discovering, it's really powerful to abstract the models with a job hat and structured data.
I think it would take quite a long while to achieve human-level anti-entropy in agentic systems.
A complex system requires tons of iterations, and the confidence level of each iteration drops unless there is a good recalibration system between iterations. Errors compound: a repeated trivial degradation quickly turns into chaos.
A typical collaboration across a group of people on a meaningfully complex project requires tons of anti-entropy to course-correct when it goes off the rails. Those corrections are not in docs: some come from experience (been there, done that), some from common sense, some from collective intelligence.
Can you think of an example in history where labour was replaced with tech and the displaced workers kept their income stream? If a machine can do your job, (eventually) it'll be cheaper to use that machine instead of you and you'll no longer have a job. Is that not a given?
Anyway, it was probably just a joke... so not sure we need to unravel it all.
Displaced hired personnel of course cannot hope for that.
But VCs own their business; they are not employees. If you own a bakery, and buy a machine to make the dough instead of doing it by hand, and an automatic oven to relieve you from tracking the temperature manually, you of course keep the proceeds from the improved efficiency (after you pay off the credit you took to purchase the machines).
The same was true of the aristocrats of centuries past: the capitalists who run our modern economy were once nothing more than their managers, delegates who handled their estates, investments, and finances, growing in power until they could dictate policy to their 'sovereign' and eventually dispose of them entirely.
The nobility used to be the dedicated warrior class, the knights. This secured their position in society and allowed them to rule, by coercion when needed.
Once they ceased to exercise their military might, some time around the 17th-18th centuries, and chose to live off the rent on their estates, their power became more and more nominal. It either slipped (or was yanked) from their hands, or they turned capitalists themselves.
The problem is your timeline: in actuality, the nobility of Western Europe lost their independent armies by the 15th century, solidly by the 16th, and thereafter held on to a military role only through participation (as officers) in the state-run standing armies that developed thereafter. Yet for centuries they held on to power: in France, until the revolution, and in Britain, until well into the 19th century. Great read on the topic: https://projects.panickssery.com/docs/allen-2009-a_theory_of...
"Living off of the rent of their estates" was enough to remain in control of the state for centuries. Only the birth of capitalism and thereafter the industrial revolution allowed for other actors -- the bourgeoisie -- to overtake the aristocrats economically.
I didn't get the impression it was meant as a joke:
"Every great venture capitalist in the last 70 years has missed most of the great companies of his generation... if it was a science, you could eventually dial it in and have somebody who gets 8 out of 10 [right]," the investor reasoned. "There's an intangibility to it, there's a taste aspect, the human relationship aspect, the psychology — by the way a lot of it is psychological analysis," he added.
"So like, it's possible that that is quite literally timeless," Andreessen posited. "And when the AIs are doing everything else, like, that may be one of the last remaining fields that people are still doing."
Similar to how some domain-name sellers acquire desirable domains to resell at a higher price, agent providers might exploit your success by hijacking your project once it gains traction.