The killer feature for me is Edit>Selection Respects Soft Boundaries, which lets you copy text from inside windows defined INSIDE the terminal - like tmux or emacs splits - where iTerm figures out that, e.g., a pipe character is a window boundary.
Two more:
2) if you accidentally close a tab/window, you have a few seconds to hit ⌘z and the window will reappear as if you never closed it!
3) Minimum color contrast. If your terminal color scheme and the color scheme of whatever you're running interact poorly to produce something unreadable, iTerm notices and can automatically override those colors with higher-contrast ones.
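Under the hood, a feature like this only needs a contrast metric and a nudge rule. The standard metric is the WCAG 2.x contrast ratio; iTerm's actual algorithm isn't something I've verified, so this is just an illustrative sketch of the idea:

```python
# Sketch of a minimum-contrast check in the spirit of iTerm's feature.
# Uses the WCAG 2.x contrast-ratio formula; iTerm's real algorithm may differ.

def relative_luminance(rgb):
    """WCAG relative luminance for an (r, g, b) tuple with channels in 0..255."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, ranging from 1 (identical) to 21 (black on white)."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

# Dark grey text on a black background: nearly unreadable, ratio well below
# the WCAG minimum of 4.5, so a terminal would want to brighten the foreground.
print(round(contrast_ratio((40, 40, 40), (0, 0, 0)), 2))  # prints 1.42
```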
But that's just my killer features. iTerm is like Word - it is a bloated monster with thousands of features. Nobody needs them all, but nobody agrees on which ones they need.
I'm curious to ask others here: are there other low-config alternative tools like Fish that, looking back, now seem like a no-brainer? Ghostty is a recent example, and Helix seems like another. I'd love to hear about other tools people are using that have improved or simplified their lives.
If you're looking for something with an addressable LED matrix in a clock style form factor, the Ulanzi TC001 [0] for ~$50 is worth having a look at.
Doesn't quite have the same aesthetic, but inside it's just an ESP32 (flashed via the USB-C port) and there are various mature open-source firmware replacements. I use awtrix[1] on mine and it's very easy to tie it into HomeAssistant for doorbell notifications and that sort of thing. I also knocked up a Pomodoro app for it.
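For the doorbell-style notifications, awtrix exposes an HTTP API on the device. The `/api/notify` endpoint and JSON fields below are from my memory of the awtrix 3 docs, and the device address is made up, so verify both against your firmware's documentation before relying on this sketch:

```python
# Sketch of pushing a notification to an awtrix-flashed TC001 over HTTP.
# Endpoint name and payload fields are assumptions based on awtrix 3's docs;
# the host address is a hypothetical LAN IP.
import json
import urllib.request

AWTRIX_HOST = "192.168.1.50"  # hypothetical address of the clock on your LAN

payload = json.dumps({
    "text": "Doorbell!",
    "duration": 10,  # seconds to keep the notification on screen
}).encode()

req = urllib.request.Request(
    f"http://{AWTRIX_HOST}/api/notify",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment on a network where the device exists
print(req.full_url)  # prints http://192.168.1.50/api/notify
```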
I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.
Big fan of your work with the LLM tool. I have a cool use for it that I wanted to share with you (on mac).
First, I created a Quick Action in Automator that receives text. Then I put together this script with the help of ChatGPT:
# Pass the Automator arguments straight through; "$@" preserves each
# argument's quoting, so no manual escaping loop is needed before calling llm.
result=$(/Users/XXXX/Library/Python/3.9/bin/llm -m gpt-4 "$@")

# Escape backslashes and double quotes, and fold newlines into \n, so the
# result can be safely embedded in an AppleScript string literal.
escapedResult=$(printf '%s\n' "$result" | sed 's/\\/\\\\/g; s/"/\\"/g' | awk '{printf "%s\\n", $0}')

osascript -e "display dialog \"$escapedResult\""
Now I can highlight any text in any app and invoke `LLM` under the services menu, and get the llm output in a nice display dialog. I've even created a keyboard shortcut for it. It's a game changer for me. I use it to highlight terminal errors and perform impromptu searches from different contexts. I can even prompt LLM directly from any text editor or IDE using this method.
For anyone interested in the equivalence between AI and compression, take a look at the Hutter Prize :) http://prize.hutter1.net/
Also worth a look is the Large Text Compression Benchmark http://mattmahoney.net/dc/text.html - currently the world's best compressor is a neural network made by ... the renowned Fabrice Bellard, creator of ffmpeg and QEMU!
And I really dig these pages' refreshingly appropriate text-only style!
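The equivalence is easy to demonstrate: a model that predicts text well assigns it high probability, and -log2 P(text) is exactly the number of bits an arithmetic coder driven by that model would need (to within a couple of bits). A toy illustration of my own, not the Hutter Prize machinery:

```python
# Toy illustration of "better prediction = better compression": the ideal
# code length of a text under a model is -log2 P(text), which an arithmetic
# coder can achieve almost exactly. A better predictor yields fewer bits.
import math
from collections import Counter

def ideal_bits(text, probs):
    """Shannon code length of `text` under a per-character model `probs`."""
    return sum(-math.log2(probs[ch]) for ch in text)

text = "the theory that prediction and compression are the same thing"

# Model A: uniform over the characters that actually occur (a weak predictor).
alphabet = set(text)
uniform = {ch: 1 / len(alphabet) for ch in alphabet}

# Model B: empirical character frequencies (a better predictor).
counts = Counter(text)
empirical = {ch: n / len(text) for ch, n in counts.items()}

print(ideal_bits(text, uniform) > ideal_bits(text, empirical))  # prints True
```

The neural compressors on these leaderboards take this to the extreme: the network is just a very good next-symbol predictor feeding an arithmetic coder.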
This has been known for quite a while and can be used on arbitrary HTML elements with some CSS hacks. I'm surprised no "super bright" advertisements have shown up so far. https://kidi.ng/wanna-see-a-whiter-white/ (Safari only)
On macOS you can use apps like BetterDisplay, Vivid or BetterTouchTool to enable that HDR mode for the whole display, which makes it significantly easier to work outside. On iOS there is "Vivid Browser" - a browser that enables the HDR mode for the whole screen.
Every time I read about AI I am reminded of the mouse running a maze. Any AI algorithm can learn to complete a maze in record time. It can memorize every corner. It can run a search pattern perfectly and improve that pattern iteratively, to the point that it may create new search patterns, applying what appear to be novel ideas. But the mouse actually understands the concept of a maze. It knows that the cheese exists regardless of the maze. The mouse can see when the researcher has left the lid open, jump outside the maze and run to the cheese directly. The mouse is aware. The AI is not.
I've found Julia to be an excellent language for Project Euler [1]. Besides the speed, you can use Unicode identifiers, so solutions can follow the math more closely.