You just check the Analysis/Interpreter box and tell it how and when to use Python in the GPT instructions.
I put a mini Python lib for rolling dice and skill checks in the GPT instructions and enabled the Analysis (or whatever it's called) checkbox. Then I told it to run the code at the beginning to initialize, and to use those functions for dice rolls etc.
It can write and call functions on the fly, but I wrote them ahead of time and have it call the library functions, which keeps the code it has to generate to a minimum and helps speed up things like dice rolls.
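For a sense of what I mean, the library was along these lines (a minimal sketch, not my exact instructions — the function names and return shapes here are just illustrative):

```python
import random

def roll(count=1, sides=20, modifier=0):
    """Roll `count` dice with `sides` sides, then add a flat modifier."""
    rolls = [random.randint(1, sides) for _ in range(count)]
    return {"rolls": rolls, "total": sum(rolls) + modifier}

def skill_check(modifier=0, dc=10):
    """d20 skill check: roll 1d20 + modifier against a difficulty class."""
    result = roll(1, 20, modifier)
    result["dc"] = dc
    result["success"] = result["total"] >= dc
    return result
```

With this defined up front, the model only has to emit a one-liner like `skill_check(modifier=3, dc=15)` per roll instead of regenerating the logic each time.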
I'm curious - what's the medication? Someone I know gets pretty regular migraines. She takes sumatriptan when they occur, and has also cut out alcohol since it's a potential trigger.
Congrats! I've been watching this space for a while, having built a couple of multiplayer sync systems in private codebases, including a "redux-pubsub" library with rebasing and server canonicity that is (IIUC?) TCR-like. There's a lot to like about this model, and I find the linked article quite clear - thank you for writing and releasing this!
1. You wrote "For example, schema validation and migrations just sort of fall out of the design for free." - very curious to read about what you've found to work well for migrations! I feel like there's a lot of nice stuff you can build here (tracking schema version as part of the doc, pushing migration functions into clients, and then clients can live-update) but I never got the chance to build that.
2. Do you have a recommendation for use-cases that involve substantial shared text editing in a TCR system? I'd usually default to Yjs and Tiptap/Prosemirror here (and am watching Automerge Prosemirror with interest). The best idea I've come up with is running two data stores in parallel: a CRDT doc holding a flat key/value map that identifies a set of text docs by UUID, alongside a TCR doc representing the structured data, which references the CRDT text UUIDs where needed.
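Concretely, the split I have in mind looks something like this (a Python sketch of the data shapes only — the names are illustrative, and plain strings stand in for what would really be Yjs/CRDT text documents):

```python
import uuid

# CRDT side: a flat map from UUID to rich-text doc. In a real system
# each value would be a Yjs document synced character-by-character;
# here a plain string stands in for it.
crdt_texts = {}

def create_text(initial=""):
    """Allocate a new text doc in the CRDT store and return its UUID."""
    doc_id = str(uuid.uuid4())
    crdt_texts[doc_id] = initial
    return doc_id

# TCR side: structured app state that only *points at* text docs by
# UUID, so transactions/rebasing never have to merge character-level
# edits -- those stay entirely in the CRDT layer.
tcr_doc = {
    "title": "Campaign notes",
    "sections": [
        {"heading": "Intro", "body_text_id": create_text("Once upon a time...")},
    ],
}
```

The appeal is that each store handles the conflict semantics it's good at: the TCR doc gets transactional, rebase-friendly structure edits, while fine-grained concurrent typing merges in the CRDT layer.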