Exactly! In the future, testers could just write tests in natural language.
Every time we detect, for instance with a vision model, that the interface has changed, we ask the Large Action Model to recompute the appropriate code and execute it.
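To make that concrete, here is a rough sketch of what such a self-healing loop could look like. All the names (`vision_model.has_changed`, `action_model.generate_code`, `page`) are hypothetical placeholders, not our actual API:

```python
# Hypothetical sketch of the re-sync loop described above.
# `vision_model`, `action_model`, and `page` are placeholder interfaces,
# not a real library API.

def run_step(instruction: str, page, vision_model, action_model, cache: dict):
    """Execute one natural-language test step, regenerating code if the UI changed."""
    screenshot = page.screenshot()

    # If the vision model thinks the interface drifted since we last generated
    # code for this instruction, ask the Large Action Model to regenerate it.
    if instruction not in cache or vision_model.has_changed(
        screenshot, cache[instruction]["screenshot"]
    ):
        code = action_model.generate_code(instruction, screenshot)
        cache[instruction] = {"code": code, "screenshot": screenshot}

    # Run the (possibly regenerated) automation code against the page.
    exec(cache[instruction]["code"], {"page": page})
```

Caching the last known good code per instruction keeps model calls to a minimum when nothing has changed.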
Regarding generating tests from bug reports: totally possible! For now we focus on getting a good mapping from low-level instructions ("click on X") to code, but once we solve that, we can have another AI turn bug reports into low-level instructions and feed those to the previously trained LLM!
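A toy sketch of that two-stage idea, assuming a generic completion API (`llm.complete` and the prompt are purely illustrative):

```python
# Hypothetical two-stage pipeline: bug report -> low-level instructions -> code.
# `llm.complete` and `action_model.generate_code` are placeholders.

def bug_report_to_test(report: str, llm, action_model, screenshot) -> list[str]:
    """Turn a free-form bug report into executable test steps."""
    # Stage 1: a general-purpose LLM decomposes the report into
    # low-level instructions like 'click on "Submit"'.
    instructions = llm.complete(
        "Rewrite this bug report as numbered reproduction steps, "
        f"one UI action per line:\n{report}"
    ).splitlines()

    # Stage 2: the already-trained action model maps each instruction to code,
    # exactly as in the normal instruction -> code flow.
    return [action_model.generate_code(step, screenshot) for step in instructions]
```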
Really like your use case and would love to chat more about it if you're open to it. Could you come on our Discord and ping me? https://discord.gg/SDxn9KpqX9