I don't use it to avoid reading man pages. Rather, as is often the case with LLMs, it's a faster way to do things I already know how to do: it looks at the commands I run in various situations and types them for me, faster than I can remember the name of a flag I use weekly with a PDF processing tool, or type five consecutive shell commands.
Money-wise, my full usage so far (including purposely running large inputs/outputs to stress test it) has cost me... 19c. And I'm not even using the cheapest model available. But you could also run it with a local model.
Yes, it is API-based and uses your last 100 unique shell commands as part of its prompt; it seemed important to remind users that this data does leave their machine.
A fork using a local model should be fairly easy to set up.
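If someone wants to attempt that fork, the history-gathering part at least is simple. Here's a guess at what it might look like in Python (the history file path and the exact dedup behavior are my assumptions, not the tool's actual code):

    from pathlib import Path

    def last_unique_commands(n=100, histfile="~/.bash_history"):
        # Walk history newest-first, keeping the first occurrence of each command.
        lines = Path(histfile).expanduser().read_text(errors="ignore").splitlines()
        seen, picked = set(), []
        for cmd in reversed(lines):
            cmd = cmd.strip()
            if cmd and cmd not in seen:
                seen.add(cmd)
                picked.append(cmd)
            if len(picked) == n:
                break
        return picked[::-1]  # oldest-first, ready to drop into the prompt

    print("\n".join(last_unique_commands()))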
I think the top post on the Krita thread does a pretty good job of setting boundaries: they want something that cannot replace artists, so it will not "beautify" art and it stays close to the input, and it should not be trained on the work of unwilling artists.
Anthropic actually encourages using Claude to refine your prompts! I am not necessarily a fan, because it has a bent towards longer prompts... which, I don't know if it is a coincidence, but the Claude system prompts are on the longer side.
They don't merely encourage it: for at least a year now, they've been offering a tool for constructing, improving, and iterating on prompts right in their console/playground/docs page! They're literally the "use an LLM to make a better prompt for the LLM" folks!
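You can even do a quick-and-dirty version of it yourself over the API. A minimal sketch with the anthropic Python SDK (the model name and the rewriting instruction are my placeholders, not Anthropic's actual prompt-improver prompt):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    draft = "Summarize this article."
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use any current model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Rewrite this prompt to be clearer and more specific. "
                       "Return only the improved prompt:\n\n" + draft,
        }],
    )
    print(msg.content[0].text)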
For people who want a non-web-based alternative, these days I use Xournal++ (https://xournalpp.github.io/) to do that type of editing locally.
What I am still looking for is a good way to clean scanned PDFs: split double pages, clean up text and make sure it is in black and white, fix deformations and margins, crop and maybe rotate pages, and compress the end result.
For the other issues, I haven't found any single good tool, but I've stitched together things like unpaper, Ghostscript and deskew (https://github.com/galfar/deskew).
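For the curious, the stitching is basically a handful of subprocess calls. A rough Python sketch (the flags are from memory, so check each tool's --help; the PDF-to-page-image extraction step is left out):

    import subprocess

    def clean_page(src, dst):
        # unpaper removes scan noise and dark edges (expects PBM/PGM/PPM pages)
        subprocess.run(["unpaper", src, dst], check=True)

    def deskew_page(src, dst):
        # galfar/deskew detects and corrects page rotation
        subprocess.run(["deskew", "-o", dst, src], check=True)

    def compress_pdf(src, dst):
        # Ghostscript's pdfwrite device recompresses the reassembled PDF
        subprocess.run([
            "gs", "-sDEVICE=pdfwrite",
            "-dPDFSETTINGS=/ebook",  # trades a bit of quality for size
            "-dNOPAUSE", "-dBATCH",
            "-o", dst, src,
        ], check=True)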
Also, if you need OCR, hocr-tools and Google's Document AI OCR API have worked really well for me (I tried Gemini, but you run into issues with big documents).
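If it helps anyone, the Document AI call itself is short with the official Python client. A minimal sketch (the project/location/processor IDs are placeholders; you create the OCR processor in the Cloud console first):

    from google.cloud import documentai

    client = documentai.DocumentProcessorServiceClient()
    # Placeholder IDs: fill in your own project, region, and processor
    name = client.processor_path("my-project", "eu", "my-processor-id")

    with open("scan.pdf", "rb") as f:
        raw = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

    result = client.process_document(
        request=documentai.ProcessRequest(name=name, raw_document=raw)
    )
    print(result.document.text)  # full extracted text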
Yes! And the next articles in the series double down on this:
"Any polynomial basis has a “natural domain” where its approximation properties are well-known. Raw features must be normalized to that domain. The natural domain of the Bernstein basis is the interval [0,1][0,1]."
As a French speaker who has done quite a bit of paid translation between French and English (so an uneducated translator rather than an amateur one, I guess...), I have found that a lot of translations (in non-fiction, but sometimes in fiction too) have a feel to them. You might not care, or notice at a glance, but the text does feel unmistakably translated. That is a constant reminder that translation is a job that takes real skill, not just knowledge of both languages.
I was also surprised to find out (roughly a year ago) that Claude is good at Old English (which, despite its misleading name, looks nothing like English and is more of a Germanic language) whereas ChatGPT would output pure hallucinations.
Claude is much better than ChatGPT at low-resource languages; at least it was a year ago. I haven't tested the new models from OpenAI, but I believe Claude still has an edge.
For example, back when ChatGPT was outputting nonsense in Georgian, Claude was speaking it fluently; by the time ChatGPT learned Georgian, Claude was able to speak Mingrelian.
Interesting. I was using ChatGPT to try to come up with a possible reconstruction of the Ketef Hinnom scrolls (I don't know Ancient Hebrew at all), with some mixed results. I had to prompt it with things like "What do you think that 'YHWH' bit could mean?", and then it sort of caught on. Maybe I'll see if Claude can do better.
Your description of Old English is a bit odd. It's certainly very different from modern English, but it's its direct ancestor and both languages are Germanic.
It is a direct ancestor, but I find that what most people picture when they hear "Old English" (with no prior knowledge of it) is something closer to Middle English, which is somewhat readable by modern English speakers, rather than something like `Oft Scyld Scefing sceaþena þreatum, monegum mægþum, meodosetla ofteah, egsode eorlas.` [0]
Claude can speak medieval and ancient languages, but it mixes up different time periods pretty often unless you hard-prompt the desired grammar. For Old English in particular, it tends to give something vaguely Shakespearean instead. It also often uses a period-incorrect alphabet or modern characters (for Slavic languages in particular).
I've also tried Old Norse, Ancient Greek, and Old East Slavic, and the result is pretty much the same. For OES in particular, it often outputs period-incorrect grammar, writes in Old Church Slavonic (a different language), or even in modern Russian or Serbian. It looks like the dataset was a bit chaotic, with religious books mixed in with old manuscripts and even modern children's books. Mentioning a specific work from the desired period makes it write better, and wrangling it by spelling out the rules gets it almost right.
I believe so! Since it is good at consistency and can be fed reference images, you can generate character references and feed those, along with the previous panels, to the model, working one panel at a time.
Short answer: the model is good at consistency. You can use it to generate a set of style reference images, then use those as references for all your subsequent generations. Generating in the same chat might also help it keep further consistency between images.