I have been an LLM skeptic for a long time, but the llm CLI and your review of Claude 3 Opus (and subsequently discovering how comparatively cheap 3.5 Sonnet is) have started to turn LLMs into something I use daily.
Exactly that kind of piping comes in handy all the time. I use it to estimate the reading time of things on the web:
curl <url> | llm -m claude-3.5-sonnet -s 'How long does the main content of this article take to read? First count words, then convert using a slow and fast common reading speed.'
It gets the word count wrong a little too often for my taste, but it's usually within the right order of magnitude, which is good enough for me.
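Since the model often miscounts, one workaround (only a sketch: the sed tag-stripping is crude and the URL is a placeholder) is to count the words locally with wc and leave just the reading-speed conversion to the model:

```shell
#!/bin/sh
# Count the words locally so the model only has to do the
# slow/fast reading-speed conversion, not the counting itself.
url='https://example.com/article'  # placeholder URL
words=$(curl -s "$url" | sed 's/<[^>]*>//g' | wc -w)
echo "How long does an article of $words words take to read at a slow and a fast common reading speed?" \
  | llm -m claude-3.5-sonnet
```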
One of my most-used shell scripts recently is one I named just `q`, which contains:
#!/bin/sh
llm -s "Answer in as few words as possible. Use a brief style with short replies." -m claude-3.5-sonnet "$*"
This lets me write stupid questions in whatever terminal I'm in and not be judged for it, like
[kqr@free-t590 tagnostic]$ q How do I run Docker with a different entrypoint to that in the container?
What's nice about it is that it stays in context. It's also possible to ask longer questions with heredocs, like
[kqr@free-t590 tagnostic]$ q <<'EOF'
> I have the following Perl code
>
> @content[sort { $dists[$a] <=> $dists[$b] } 0..$#content];
>
> What does it do?
> EOF
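The heredoc works because llm reads the prompt from stdin when something is piped in. A more explicit variant of `q` (just a sketch) that handles both arguments and stdin the same way could look like:

```shell
#!/bin/sh
# Variant of `q`: forward the arguments if any were given,
# otherwise pass stdin through, and let llm read the prompt
# from the pipe in either case.
if [ $# -gt 0 ]; then
    printf '%s\n' "$*"
else
    cat
fi | llm -s "Answer in as few words as possible. Use a brief style with short replies." -m claude-3.5-sonnet
```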
I have been meaning to write about this ever since I started a few weeks ago, but I would like my thoughts to mature a bit first...
If you prefer adding an extra piece of prompt on the fly instead of putting it in the template:
(cat somecode.py; echo "Explain this code") | ell -f -
I should've added this to the README.
I really love your "llm" and the blog posts, but somehow I missed them before. I believe I would have been a lot less motivated to write ell if I had read your posts first.
> I really love your "llm" and the blog posts, but somehow I missed them before. I believe I would have been a lot less motivated to write ell if I had read your posts first.
I mean, a simple search like "CLI interface for LLMs" turns up multiple tools people have made over the years. Not to bash your work (pun intended), but I don't see the point of creating yet another CLI interface for LLMs at this point.
>> To the parent: prefer that you hold opinions like this to yourself.
It seems weirdly inconsistent that you expect people to hear your voice while you try to shut down someone else expressing a viewpoint you don't agree with. You would have been better off with just the first half of your post.
Well, either I'm not good at googling or Google is not good at searching. I did search for similar products, and I have listed them in the README. Perhaps I just didn't pick the right keywords. I'm sorry that many wonderful similar products are not listed, but I haven't found any of them that completely covers the features of ell.
I use that with my https://llm.datasette.io/ tool all the time.
You can also separate the instructions from the piped content by putting them in a system prompt instead. Being able to pipe content like this INTO an LLM is really fun: it lets you do things like scrape a web page and use it to answer questions: https://simonwillison.net/2024/Jun/17/cli-language-models/#f...
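A sketch of that pattern, using a placeholder URL, with the instructions kept in a system prompt (`-s`) separate from the piped content:

```shell
#!/bin/sh
# Scrape a page and ask about it; the piped content is the prompt
# body, while -s carries the instructions as a system prompt.
curl -s 'https://example.com/article' \
  | llm -s 'What are the key points of this page?' -m claude-3.5-sonnet
```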