I really just wanted the feeling of tab-based auto-complete that just works in the terminal.
It turns out that getting the LLM responses to 'play nice' with the expected format for bash_completion was a bit of a challenge, but once that worked, I could wrap all the LLMs (OpenAI, Grok, Claude, and local ones like Ollama).
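For anyone curious, the general shape is something like this. Just a sketch, not the actual autocomplete-sh source, and `my_llm_suggest` is a hypothetical stand-in for the real API call:

```bash
# Sketch only: wire an LLM call into bash's programmable completion.
_llm_complete() {
    local line="${COMP_LINE}"
    local suggestions
    # Hypothetical helper: sends the current command line to an LLM and
    # prints one plain-text candidate per line.
    suggestions=$(my_llm_suggest "$line") || return
    # bash_completion consumes candidates via the COMPREPLY array; split on
    # newlines so multi-word suggestions survive intact.
    local IFS=$'\n'
    mapfile -t COMPREPLY < <(compgen -W "$suggestions" -- "${COMP_WORDS[COMP_CWORD]}")
}
# Register the function for whichever commands should get LLM-backed completion.
complete -F _llm_complete git docker
```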
I also put some additional info in the context window to make it smarter: a password-sanitized recent history, which environment variables are set, and output from `--help` for relevant commands.
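Gathering that context is cheap in plain bash. A rough sketch (the helper name and redaction pattern here are illustrative, not the tool's actual ones):

```bash
# Illustrative sketch: assemble prompt context from history, env vars, and --help.
_build_context() {
    local cmd="$1"
    # Redact anything that looks like a credential assignment in recent history.
    local recent_history
    recent_history=$(history 20 | sed -E 's/(password|passwd|token|secret|key)=[^[:space:]]+/\1=<REDACTED>/g')
    # Variable names only; values may contain secrets, so never ship them.
    local env_names
    env_names=$(compgen -e | tr '\n' ' ')
    # --help output of the command being completed, truncated to keep the prompt small.
    local help_text
    help_text=$("$cmd" --help 2>&1 | head -n 40)
    printf 'Recent history:\n%s\n\nEnv vars: %s\n\nHelp:\n%s\n' \
        "$recent_history" "$env_names" "$help_text"
}
```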
I've just started to promote it around the Boston area and people seem to enjoy it.
Wow, that's very useful! I had also thought about completion, but my idea was more like a copilot. Your script's user experience is probably better than what I had in mind. I'm glad I didn't start writing it.
Regarding history in the context, I suggest adding a record mode like ell's. It really helps.
The password sanitizer is great. I'll add one as a plugin too. Thanks for the idea!
Thanks for checking it out, and record mode is a great idea. I've been playing around with ways to capture terminal output, but so far I haven't loved the UX of my solutions. Your copilot approach, which can explain commands and iterate, is really valuable.
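For reference, one possible record mode, sketched with util-linux `script` (the file path and trimming policy are arbitrary choices here, not a settled design):

```bash
# Sketch: log the session with `script` so command outputs become LLM context.
autocomplete_record() {
    local log="${HOME}/.autocomplete_session.log"
    # Starts a subshell whose terminal I/O is logged; -q suppresses the banner,
    # -f flushes after each write so the log is usable mid-session.
    script -q -f "$log"
}
# Later, feed the tail of the log into the prompt:
# tail -n 50 "$HOME/.autocomplete_session.log"
```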
If you're open to joining, I have a small AI engineer / open-source dev Slack community in Boston. I'd love to have you (https://smaht.ai).
I'm open to joining any community. As long as you don't mind that I'm not in Boston, why not? I've just submitted your Google form. Thanks for the invite!
I watched autocomplete-sh in action at the AI Tinkerers meetup in Cambridge, MA, and was impressed. It's very well integrated with the shell. Writing it directly in bash is a bold idea, but an effective one for keeping it portable.