Hacker News

Exactly! In the future, testers could just write tests in natural language.

Every time we detect, for instance with a vision model, that the interface has changed, we ask the Large Action Model to recompute the appropriate code and execute it.
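A minimal sketch of that self-healing loop, with stub functions standing in for the real components (`interface_changed` for the vision model, `regenerate_code` for the Large Action Model; both names are hypothetical):

```python
def interface_changed(screenshot: str, baseline: str) -> bool:
    # Stand-in for a vision-model comparison; here just inequality.
    return screenshot != baseline

def regenerate_code(instruction: str) -> str:
    # Stand-in for asking the Large Action Model to recompute the code
    # that carries out a natural-language step.
    return f"page.click({instruction!r})"

def execute_step(instruction: str, cached_code: str,
                 screenshot: str, baseline: str) -> str:
    # Reuse the cached code unless the interface changed,
    # in which case the code is recomputed first.
    if interface_changed(screenshot, baseline):
        cached_code = regenerate_code(instruction)
    return cached_code  # this is what would then be executed
```

With an unchanged UI the cached code is reused; with a changed UI the step's code is regenerated from the instruction before running.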

Regarding generating tests from bug reports: totally possible! For now we focus on having a good mapping from low-level instructions ("click on X") -> code, but once we solve that, we can have another AI turn bug reports -> low-level instructions, and then use the previously trained LLM!
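The two-stage pipeline described above could be sketched like this; `report_to_instructions` and `instruction_to_code` are hypothetical stand-ins for the two models (the real outputs would come from LLM calls):

```python
def report_to_instructions(report: str) -> list:
    # Stand-in for the bug-report model: split a report into
    # low-level instructions (one per non-empty line here).
    return [line.strip() for line in report.splitlines() if line.strip()]

def instruction_to_code(instruction: str) -> str:
    # Stand-in for the trained low-level-instruction -> code mapping.
    return f"execute({instruction!r})"

def bug_report_to_test(report: str) -> list:
    # Chain the two stages: report -> instructions -> executable steps.
    return [instruction_to_code(step) for step in report_to_instructions(report)]
```

The key design point is that only the second stage needs the instruction-to-code training; the first stage is a separate, swappable translation step.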

Really like your use case and would love to chat more about it if you are open. Could you come on our Discord and ping me? https://discord.gg/SDxn9KpqX9




I don't use Discord much, but I joined to provide any additional thoughts.



