I'm hoping at a minimum to get my thoughts written out while I'm flying this weekend!
The quick gist is that Jupyter is frustrating for me because it's so easy to inadvertently end up with a cell higher up in your document that uses a value computed in a cell further down. It's all just one global namespace.
In the Julia world, Pluto gets around this by restricting what can go into a cell a little bit (e.g. you have to do some gymnastics to assign more than one variable in a single cell); by doing it this way it can do much better dependency analysis and determine a valid ordering of your cells. It's just fine to move a bunch of calculation code to the end/appendix of your document and use the values higher up.
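To make that concrete: here's a rough sketch (nothing like Pluto's actual implementation, and Python rather than Julia) of the same trick — parse each cell with the stdlib `ast` module to find what it defines and what it reads, then let `graphlib` produce a valid execution order.

```python
import ast
from graphlib import TopologicalSorter

def names_in_cell(src):
    """Return (defined, used) variable names for one cell's source."""
    defined, used = set(), set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return defined, used - defined

def cell_order(cells):
    """Topologically sort cells so definitions run before uses."""
    info = {i: names_in_cell(src) for i, src in enumerate(cells)}
    # One producer per name -- roughly Pluto's one-assignment restriction.
    producers = {name: i for i, (d, _) in info.items() for name in d}
    graph = {i: {producers[n] for n in used if n in producers}
             for i, (_, used) in info.items()}
    return list(TopologicalSorter(graph).static_order())

# Cells written "out of order": the first uses y, defined in the last.
cells = ["z = y + 1", "x = 2", "y = x * 10"]
print(cell_order(cells))  # a valid order, e.g. [1, 2, 0]
```

The one-producer-per-name assumption is doing a lot of work there, which is exactly why Pluto's restriction on multiple assignments per cell pays off.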
The idea I've been chewing on comes somewhat from using Obsidian and their whole infinite-canvas idea. Borrowing Pluto's dependency analysis, and also determining whether the contents of a given cell are pure (i.e. whether they do IO or other syscalls, or whether their outputs are solely a function of their inputs), should make it possible to build something... notebook-like that benefits from cached computations while also having arbitrary computation graphs and kind of an infinite canvas of data analysis. Thinking like a circuit simulator, it should be possible to connect a "scope" onto the lines between cells to easily make plots on the fly and visualize what's happening.
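A toy sketch of the caching half, with everything hand-waved: `is_pure` is supplied by the caller here, whereas the real tool would infer it from the AST, and `exec`-ing into a bare dict is nothing like a proper sandbox. The point is just that a pure cell's result can be keyed on its source plus its input values.

```python
import hashlib

_cache = {}

def run_cell(src, inputs, is_pure):
    """Execute one cell; reuse a cached result when the cell is pure
    and its source + input values are unchanged."""
    key = hashlib.sha256(
        (src + repr(sorted(inputs.items()))).encode()
    ).hexdigest()
    if is_pure and key in _cache:
        return _cache[key]  # cache hit: skip re-running the cell
    env = dict(inputs)
    exec(src, env)  # hypothetical: a real tool would sandbox this
    result = {k: v for k, v in env.items()
              if k not in inputs and not k.startswith("__")}
    if is_pure:
        _cache[key] = result
    return result

print(run_cell("y = x * 10", {"x": 3}, is_pure=True))  # {'y': 30}
```

An impure cell (one that reads a file, say) would simply never be cached, so it re-runs every time its inputs change.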
Anyway, that's the quick brain dump. It's not well-formed at all yet. And honestly I would be delighted if someone reads this and steals the idea and builds it themselves so that I can just use an off-the-shelf tool that doesn't frustrate me as much as Jupyter does :)
it sounds like the ideal solution would be something functional (so you have a computation graph), pure (so you can cache results) and lazy (so order of expressions doesn't matter.) why not Haskell? or even a pure/lazy subset/variant of Julia, if you want to ditch the baggage of Haskell's type bondage?
you could ditch explicit cells entirely, and implement your "scope" by selecting a (sub)expression and spying on the inputs/outputs.
I've thought about that and have written some fun Haskell code in the past but... the other goal is to actually have users :D. I've also considered Lisp, Scheme, and friends to have really easily parseable ASTs.
I jest a bit, but Python has a very rich ecosystem of really useful data analysis libraries that do somewhat exist in other ecosystems (R, Julia, etc.) but aren't nearly as... I would use the word polished, but a lot of the Python libraries have sharp edges as well. Well trodden might be a better word. My experience doing heavy data analysis with Python and Julia is that both often require some Googling to understand a weird pattern to accomplish something effectively, but there's a much higher probability that you'll find the answer quickly with Python.
I also don't really want to reinvent the universe on the first go.
It has occurred to me that it might be possible to do this in a style similar to org-mode, though, where the tool doesn't actually care what the underlying language is and you could just weave a bunch of languages together: Rust code interfacing with some hardware, C++ doing the Kalman filter, Python (via geopandas) doing geospatial computation, and R (via ggplot2) rendering the output. There's a data marshalling issue there of course, which I've also not spent too many cycles thinking about yet :)
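As a strawman for that marshalling problem, here's the crudest possible version: each foreign "cell" runs in its own process and data crosses the boundary as JSON over stdin/stdout. A child Python process stands in for the hypothetical R/Rust/C++ cells; JSON obviously won't cut it for big dataframes (something like Arrow would be the realistic format), but it shows the shape of the plumbing.

```python
import json
import subprocess
import sys

# One "cell" in a different runtime. Here it's just another Python
# interpreter, standing in for, say, an R or Julia process.
CHILD_CELL = """
import json, sys
data = json.load(sys.stdin)          # inputs from the driver
data["mean"] = sum(data["xs"]) / len(data["xs"])
json.dump(data, sys.stdout)          # outputs back to the driver
"""

def run_foreign_cell(src, inputs):
    """Run a cell in a child process, marshalling data as JSON."""
    proc = subprocess.run(
        [sys.executable, "-c", src],
        input=json.dumps(inputs),
        capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

print(run_foreign_cell(CHILD_CELL, {"xs": [1, 2, 3, 4]}))
# {'xs': [1, 2, 3, 4], 'mean': 2.5}
```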
Edit: I did copy and paste your comment into my notebook for chewing on while I'm travelling this weekend. Thanks for riffing with me!