JupyterLab 4.0 (jupyter.org)
437 points by jonbaer on June 13, 2023 | 193 comments



I keep experimenting with Jupyter in the context of telemetry/fault analysis and then hitting a wall with it where:

- I get an analysis that I like, but there isn't a good way to share it with others, so I end up just taking screenshots.

- There isn't a good way to take the same analysis and plug new data into it, other than to copy-paste the entire notebook.

- The process to "promote" fragments of a notebook into being reusable functions seemed very high-friction: basically you're rewriting it as a normal Python package and then adding that to Jupyter's environment.

- There aren't good boundaries between Jupyter's own Python environment, and that of your notebooks— if you have a dependency which conflicts with one of Jupyter's dependencies, then good luck.

Some or all of these may be wrong or out of date— Jupyter definitely passes the "oooh nifty" smell test, and I'd love to figure out how to make it usable for these longer-term workflows rather than just banging out one-off toys.


> The process to "promote" fragments of a notebook into being reusable functions seemed very high-friction: basically you're rewriting it as a normal Python package and then adding that to Jupyter's environment.

Don't get in that situation to begin with. Pop `%load_ext autoreload` and `%autoreload 2` at the top and just write functions in an imported .py file from the get-go.
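
For anyone who hasn't set this up before, a minimal sketch (the module and function names are placeholders):

  # first cell of the notebook
  %load_ext autoreload
  %autoreload 2

  # analysis_utils.py sits next to the notebook; edits to it are picked up
  # automatically the next time any cell runs
  from analysis_utils import load_run, plot_faults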


OK, but let's also acknowledge that this increases the mental model and the levels of abstraction significantly, which presents a barrier to people without any prior software engineering experience who are just getting into Python for "data science".


I don't think it's a huge leap to say your functions are stored in this .py file vs. in a cell above. The magic commands simply allow you to reload the functions automatically without needing to reimport / reload the kernel.


It is a huge gap when the notebook is documentation. Moving functions out either duplicates documentation or separates them. Notebooks are vastly superior for documentation. Extracting functions from notebooks into a .py file isn't difficult (see nbdev, for example).


Notebooks are good for demonstration, not documentation. They are bad for documentation because you need to put docstrings and comments in the code while desiring to put the same content in the markdown.


I typically don't find docstrings are particularly useful for documenting an analysis or understanding what a function is doing. Even analyses that are repeated with different data. When you are debugging or reviewing an analysis, the docstrings are the least of your concerns. Validity is far more important.

I would take a notebook that demonstrates what's happening to my data any day over a docstring that may or may not be correct. Particularly if I have to render it in yet a third system to juggle and manually keep in sync.

So we've gone from notebooks to... notebook + py + docstring... I'm sure we can think of another useless layer of indirection to bolt on.


> Particularly if I have to render it in yet a third system to juggle and manually keep in sync.

I'm not sure you know what a docstring is if this is your response.


If you are putting your docstrings in the py and they contain things like equations or figures or images, you have to render them to get the same effect that you would have just reviewing cells in the notebook. And as far as I know you can't even spread docstrings out over the body of a function, the way you could if you broke the function up into cells. Docstrings are not literate programming.


> And as far as I know you can't even spread docstrings out over the body of a function,

I really feel like you do not understand what a docstring is. This does not make any sense.

> the way you could if you broke the function up into cells

If your logic is broken up across cells then you cannot use it anywhere but in that notebook...


I realize you think the problem is that I don't understand what docstrings are, but it's more that you don't understand the sorts of documentation I'm talking about. Docstrings are fine for a certain type of limited documentation that is formatted to prevent syntax errors. But they are extremely limited. Reading docstring/doxygen style documentation is the slowest and most error prone in my experience. Particularly when you are describing a complex calculation.

> If your logic is broken up across cells then you cannot use it anywhere but in that notebook...

You don't seem to understand how notebooks can be used and processed themselves. They are just data and there are libraries for loading and transforming them. You might check out nbdev and papermill to get an idea about this (and maybe also literate programming in general).
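
To make that concrete, a rough sketch using nbformat (the filename is made up):

  import nbformat

  # a notebook is just structured data: a list of cells you can filter and transform
  nb = nbformat.read("analysis.ipynb", as_version=4)
  code_cells = [c.source for c in nb.cells if c.cell_type == "code"]
  print("\n\n".join(code_cells))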


I do all of my code in notebooks. If your docstring is slow to read, it is a bad docstring. To be clear, a docstring is the thing you get when you have a function, you don't know what it does, and you press shift tab to bring up the brief description of what it does in a pop up (in jupyter lab). It should be the quickest, least error prone form of documentation because it is available on demand and is supposed to be short.

Using notebooks as functions is a bad idea. I think papermill is trash.

Notebooks are good for illustrating what you did and what you got. They are not good for illustrating how you did it.


It truly is fascinating that you continue to insist that I don't know what a docstring is. Notebooks are good for certain things and not good for others. This conversation is going nowhere.


Well maybe we should expect people to learn the tools they use properly instead of constantly dumbing things down for the sake of convenience.

I’m not sure you should be able to just pop in and do “data science”


> I get an analysis that I like, but there isn't a good way to share it with others, so I end up just taking screenshots.

What would be your preferable way of sharing your analysis with others?

- You can turn jupyter notebooks into pdfs directly in the jupyter UI.

- You can upload them to Gitlab/Github and share the link to the rendered result.

- You can upload them to Colab/Binder/Kaggle and let people play with the code themselves.

- You can turn jupyter notebooks into beautiful websites/documents with: https://quarto.org/docs/tools/jupyter-lab.html

- You can add jupyter notebooks to your docs using nbsphinx: https://nbsphinx.readthedocs.io/en/0.9.1/

- You can turn jupyter notebooks into interactive web apps with voila: https://github.com/voila-dashboards/voila

- You can turn jupyter notebooks into presentations with rise: https://github.com/voila-dashboards/voila


To add to this, shamelessly self-promoting, Notebooker (https://github.com/man-group/notebooker) is a neat way of scheduling your Jupyter notebooks as parametrisable reports whose results are presented in a little web GUI (either as static HTML, PDF, or as reveal.js slideshow renders)


I think the ideal would be a link directly to the notebook which allows user-local editing/fiddling, with an option to fork the master copy and save it into their own workspace— basically the Github model.

But it sounds from both your comment and the many sibling replies that there are a number of tools now directly addressing this space, so I should definitely re-evaluate what is available.


Do you have an example of how this works with another tool/language?

I don't know if I understood it correctly but maybe you could:

- Upload your notebook to Github, then create a url with Binder (part of the jupyter ecosystem) directly to an editing/fiddling playground: https://mybinder.org/

- If by user-local you mean on their own machine, they can clone your repo and run their own jupyterlab to fiddle

- If everything should stay on your own computer/server, you could share a link to your own jupyterlab and collaborate with others in real-time: https://jupyterlab-realtime-collaboration.readthedocs.io/en/... (doing this securely might be a bit of a hassle)


I'm guessing the last one should link here https://rise.readthedocs.io/en/stable/ and not to voila two times?


You're right, thanks!


While I have no need for its online functionality and the SaaS part of plotly, I really do like plotly python + cufflinks [1]. It lets you make interactive plots in html/js format. Which means you can save the notebook as html, and while people won't be able to rerun the code, they can still zoom in on graphs, hover to see annotations etc, which is a really nice way to share the outcome of your work in a more accessible way.

[1] https://github.com/santosjorge/cufflinks


Cufflinks seems to be stale, maybe it is not needed anymore to bind plotly and pandas? I don't think these options existed in 2021 when cufflinks was last updated:

- Plotly can be used directly as pandas backend: https://plotly.com/python/pandas-backend/

- The plotly.express module makes it easy to create interactive plots in html/js format from pandas dataframes: https://plotly.com/python/plotly-express/#gallery
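
A rough sketch of both options (the column names are invented):

  import pandas as pd
  import plotly.express as px

  df = pd.DataFrame({"time": [0, 1, 2, 3], "temp": [20.1, 20.4, 19.8, 20.0]})

  pd.options.plotting.backend = "plotly"
  fig = df.plot(x="time", y="temp")        # interactive plotly figure straight from pandas
  fig2 = px.line(df, x="time", y="temp")   # or build it with plotly.express directly
  fig.show()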


If you're interested in an easier way to create reports using Python and Plotly/Pandas, you should check out our open-source library, Datapane: https://github.com/datapane/datapane - you can create a standalone, redistributable HTML file in a few lines of Python.


> There aren't good boundaries between Jupyter's own Python environment, and that of your notebooks— if you have a dependency which conflicts with one of Jupyter's dependencies, then good luck.

It's cumbersome, and I'm not totally sure it's the correct way, but I remember getting around this by creating a virtualenv for my projects and then using that virtualenv's python as Jupyter's "kernel".


Right. I usually `pipx install jupyterlab` and register each project's env with `python -m ipykernel install --user --name MyProject --display-name "My Project"` as per the ipython docs: https://ipython.readthedocs.io/en/stable/install/kernel_inst...


Yeah this is the right way. But I don't think it's obvious from the getting started docs.

I bet a lot of people end up installing jupyter into each virtual environment instead.

This sort of works, as (I think) most people are only working on one or two notebooks at a time, and aren't using notebooks that relate to more than one virtual environment.


Yeah, that's essentially the right way (I use conda envs, but venvs are probably fine too). Have one env for running the Jupyter Lab/Notebook server (install jupyter, jupyter widgets, nbextensions, etc into this env) and then create other envs for your projects and register that kernel with jupyter [0]. You can also just install all of the jupyter ide cruft in each env if you want, but dependencies can get annoying.

[0] https://stackoverflow.com/questions/39604271/conda-environme....


This is the way. I use miniconda to create an env for Jupyter and install only Jupyter and its dependencies in it, then I just configure it to point to all the other Python envs I use for my different projects. This is only a one-time setup and it is absolutely worth it given how messy the ecosystem is.


> miniconda

Try micromamba, it will shave years off dependency resolution


I used to use micromamba, recently moved to mach-nix with nix. It's also really fast, but with the reproducibility of nix.


I do this, but it isn't well supported. Most Jupyterlab extensions expect you to `pip install` and that will handle both the frontend and backend components. The docs make basically no mention of the frontend/backend split. I end up having to install a bunch of frontend stuff in my project envs just to make sure my notebooks work.


I just do a venv inside the venv so I can target several Python versions. I also use Jupyter for C#, Clojure, and Julia. I wish the Clojure integration was as good as the others I listed.

I’m to the point now where if anything other than venv/pip is required I won’t use it. Unfortunately there are many things that insist on conda.


Conda is a fucking nightmare. Luckily you can usually just use pip to get the same packages.


Why does “pip install gcc” keep failing then? ;)

Conda isn’t perfect but takes on a lot of problems that pip doesn’t deal with at all. Regular conda is really slow these days but you can use mamba instead or just configure conda to use the libmamba solver and it’s much nicer.

The folks at prefix.dev seem to be building some pretty cool drop in replacements for conda too.


One that you can’t is the Clojure kernel for Jupyter. I installed it from source. That’s no big deal, but they aren’t 1-to-1 with packages.


I had a ton of trouble getting jupyterlab to target an arbitrary virtualenv.


The trick is that you have to deactivate the virtual environment and then re-source it after adding Jupyter to that virtual environment.

Most shells cache executable paths, so the path for jupyter will be the global path, not the one for your virtual environment. This is unfortunately not at all obvious and leads to very hard to track down bugs that seem to disappear and reappear if you aren't familiar with the issue.

I have a recipe here which always works: https://github.com/nlothian/m1_huggingface_diffusers_demo#se...

If you don't have requirements.txt then do this: `pip3 install jupyter` for that line, then `deactivate` and `source ./venv/bin/activate`.


I almost never rely on activating environments anymore. If it's in my project, I refer to it by its path relative to the project top level. If it's meant to be "global", I use Pipx, and if for some reason I can't, I use a script in ~/bin that uses the absolute path.


For a while I viewed Jupyter as a toy that is neither here nor there (sitting between the chairs of development and explanation, briefing or visualization and not doing either job great). But about 2 years ago when I changed jobs into a "Jupyter heavy" environment I was forced to learn it and have grown to really like it.

I primarily use Jupyter for prototyping: trying ideas, plotting results and sharing notebooks for others to improve on (or punch holes in). Once a piece clearly shows promise I move it into a module. While this usually means a significant rewrite I actually see it as a benefit: I can be messy in the original prototyping, plus a rewrite after experimentation often leads to better code with a small time investment.

Beyond the minimal self-discipline of actually moving code into modules I still have two three-character friction points: vim and git. Pointers on addressing those appreciated!

I would love, love, love a tight vim and Jupyter integration to be able to switch, easily and frequently, between editing a set of cells in vim and in Jupyter with solid sync between them. I am perfectly OK with vim ignoring the output. And I would love to have a git mode that only checks in the changes that cause actual differences in the python code; not timestamps or the output.


You can comfortably edit the whole thing as Markdown in VSCode (in vim mode, if you'd prefer) - and still have executable cells and nice pdf/html rendering. https://quarto.org/docs/tools/vscode.html

EDIT: actually, you can do it directly in neovim - https://quarto.org/docs/tools/neovim.html


This is my preferred way to work as well. Some techniques to address what you are missing:

1. Only checking in semantic differences (not output, timestamps, etc.):

Use the `jupytext` extension [0] to seamlessly pair your notebook with a lightweight markup version of the input only (which can be used to regenerate the full notebook); a short sketch of what the pairing produces is at the end of this comment.

2. Being able to switch between text editor and notebook interface

As others have mentioned, there are integrations for multiple editors.

Another approach is to move the central code out to a python module which you edit in a text editor, and then use the `%autoreload` magic [1] to reimport that module whenever you execute a cell in the notebook.

[0]: https://jupytext.readthedocs.io/en/latest/install.html [1]: https://ipython.readthedocs.io/en/stable/config/extensions/a...
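
As a rough illustration of what the pairing in point 1 amounts to (the jupytext extension does this for you automatically on save; filenames are placeholders):

  import jupytext

  nb = jupytext.read("analysis.ipynb")                 # full notebook, outputs and all
  jupytext.write(nb, "analysis.py", fmt="py:percent")  # light, diff-friendly copy for git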


This seems like a nice approach. Mine is similar. I refactor often, by moving stuff from a lower level of discipline to a higher one:

Enclose scripts in functions within the notebook, which minimizes the clutter of hidden state. I also have a habit of not walking away from a notebook without doing a "restart kernel and run all cells" to make sure the notebook works. I'm not dealing with giant data sets, so this doesn't cost me much.

Frequently used functions go into .py files, using auto-reload to keep things synchronized while I'm working on them.

Mature .py files that I might want to re-use in different projects get turned into pip-installable packages. The notebooks become informal tests of the packages.

I've never used venv, and never encountered dependency version problems. Some of the dependency horror stories may be obsolete due to the maturation of the big packages such as numpy and matplotlib.


This is a plugin that makes browser text area elements into nvim buffers.

https://github.com/glacambre/firenvim

Haven't tested it in combination with Jupyter but I imagine it should work


Are there use cases where a conventional IDE isn't up to the job the way notebooks are? I can imagine this being so for data analysis, where pre-loading heavy datasets saves time. Anything else?


The notebook environment is excellent for data analysis and exploration. It matches my workflow of manhandling data as I reach for understanding.

Once I figure things out of course, either I'm done with the notebook and can copy a few plots out for inclusion in a powerpoint, or I'm done with the notebook and extract a pile of functions into a utility script or package.

Either way, it has served its purpose for rapid prototyping and one-off analysis.


For me it's a matter of actually liking the notebook environment rather than overcoming the shortcomings of IDEs.


jupytext will solve the git problems for you. Pair with a percent script, ignore the ipynb files.


jupytext is great. It even allows you to use only .py files directly as notebooks, but I recommend "pair with ipynb" and version controlling the .py file. The ipynb acts like a cache of the cell outputs between invocations of jupyterlab, which is handy too.


I always wanted a Jupyter-like environment, but one that would natively support and output .py files.

jupyter notebook as IDE, but with .py files instead of .ipynb


This is exactly what jupytext gives you. The "pairing" is optional (and sometimes confusing). You can just work with .py files as notebooks and never ever see a .ipynb on your disk again.

The notebook .py files are just regular python files with comments that can be edited by hand with any text editor. Thus you can easily collaborate with your local graybeards who dislike editing text in their web browsers.


What I miss is a notebook launcher button that creates a .py notebook. I don't understand why we don't have that (I could not find a way to configure and add it myself).

I'm not sure why you think it's viable to collaborate with others who don't use jupyter? I mean, on any software project, if we don't agree how to compile or run a project, then we usually can't collaborate IMO. Changes become nonsensical (breaking one or the other mode that is not tested by the author of the change).


I think collaborating with .py files is easier over git.

I could not find a way to check in an .ipynb file and let the reviewer review only the code, ignoring the metadata part.

If jupytext will allow me to work in Jupyter, check the file into git, create an MR, and let other people easily see, review and comment on my code, that would be great.


Great, then I'd recommend using jupyterlab 3.6 with jupytext right now


There are lots of different versions of this. One of these approaches may work for you https://alpha2phi.medium.com/jupyter-notebook-vim-neovim-c2d...


> - I get an analysis that I like, but there isn't a good way to share it with others, so I end up just taking screenshots.

Jupyter supports printing out as a PDF, an all-in-one-webpage and all sorts of other formats. I've used this and the support for Markdown to create a number of documents that I've presented up the leadership chain, or to other engineers across the org "Here's the data, here's the narrative, and here's the code that shows you how I got to it so you can reproduce". Can even share the Jupyter notebook itself.


Do you have a way to store data directly into the notebook? I do a bunch of device testing and I have master notebooks set up to analyze and condense raw data. I use papermill to evaluate the master notebook to generate a report. But I want to also store/attach intermediates (say a named numpy array) into the notebook for further analysis.


> I want to also store/attach intermediates (say a named numpy array) into the notebook for further analysis.

Is the issue that you do not want to save the data and report into a folder and distribute that? That is, you want an entirely self-contained notebook? Or is there something else going on here?

I'm sure that's possible but it seems kind of wrong to put your binary "data" in with your analysis and presentation-making code.


Somewhat. I want it to be difficult to separate the data from the report. Basically I want the report itself to be ingestible as input to other steps, so it's more of a "documented data" with the analysis results available. The inputs are documented but the final results can be restored without reevaluating the entire notebook. I don't want the entire workspace saved, just the final results. I hoped there was some magic to inject and restore a python object from a cell using some form of introspection.

The testing I do is annual equipment performance evaluation. I'd like to be able to process each test and then feed the results into longitudinal monitoring. One thing I am considering is adding a library or extension to papermill that automatically creates a workspace hdf5 or dill or whatever that I can store individual variables into. After studying the ipynb JSON it just seems odd that you can't just store blobs as attachments. But what I understand is it has to do with the kernels and notebooks running as separate processes and passing things around as notifications. So basically the kernels don't have any access to the cells or any sorts of introspection.

With papermill you have parameters, there's just not any "return values" in the processed notebooks. If it existed you could treat "reports" more easily as cached function evaluations.
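
One low-tech version of that (just a sketch; file and variable names are invented): have the last cell of the papermill-executed notebook dump the named results next to the rendered report, so the longitudinal step can load them without re-running anything:

  # last cell of the report notebook; gain_curve/noise_floor stand in for
  # whatever the analysis above actually computed
  import numpy as np
  gain_curve = np.array([1.00, 0.98, 0.95])
  noise_floor = np.array([0.02, 0.03, 0.02])
  np.savez("report_2023_results.npz", gain=gain_curve, noise=noise_floor)

  # later, in the longitudinal-monitoring step
  results = np.load("report_2023_results.npz")
  print(results["gain"])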


It looks like there’s a %store command in Jupyter. I haven’t tried it out, but is this what you are looking for?

https://stackoverflow.com/questions/34342155/how-to-pickle-o...

(I just got it by googling “pickle an object in jupyter,” so sorry if this is something obvious that you’ve already seen and doesn’t quite solve your problem).


Yeah it got my hopes up when I found it. But when I was testing it, I didn't find the data actually made it into the .ipynb. It turns out that's actually a global storage in your home directory and doesn't go into the notebook at all. So different notebooks each overwrite the value if they use the same variable name.


Oh wow, that’s a really awful design. Sorry.


The problem with that is that Jupyter requires a back-end to do any of the actual processing that runs the code in the cells.

Only way I can think to make the html file truly portable while fully functional would be to embed a python interpreter and all required libraries as wasm.

Could be possible with pyodide? I haven't used it.


I just want selected/declared "final results" to be restorable in a new kernel. Papermill lets you easily parameterize notebooks and execute them. So the notebook feels like a well-documented function that behaves like an executable with really fancy debug print logs that can be reviewed for correctness and reproducibility.

Papermill does a great job of recording the input parameters and execution history. But there isn't anything equivalent to the "return foo, bar" part of a function which makes it difficult to build up modules. You don't want to have to digitize a plot to carry on to the next step is what I'm saying.


I’m using Jupyter Book (https://jupyterbook.org/) to build all kinds of daily reports and it works really well. Individual notebooks can also be converted to HTML or pdf with nbconvert using command line or “download as” menu item.


+1 for Jupyter Book. I had to fight with it a bit for some things around LaTeX/PDF generation, but was very pleased with the result.


I manage a Jupyter instance where we do ~75% of all of our work, and before that used Jupyter daily as a quant. In response to your comments + sibling's:

Sharing: there hasn't been a good solution, so you have to hack something together. For us (small team) symlinking a "shared_notebooks" network path works fine (but looking forward to the RTC mentioned in the post). It's very important to restart your notebook before sharing.

New data: I've never really run into this in a form that "restart + run all cells" didn't really work. Are you trying to keep older versions?

Reusability: I highly recommend installing a personal Python package in development mode to your kernel (i.e. pip install -e .). Then just move functions from the notebook to the package and reload.

Boundaries: As others have said, I would just make a "user" kernel right away and never touch the Jupyter environment (as TLJH does by default).

I am kind of hopeful that Armin Ronacher's rye fixes Python environment hell and we end up with better solutions for many of these things, but there are definitely some issues with version management in Jupyter.


I've thought about this a lot in the context of our org. What you're describing is taking modeling -> production (minus the screen shot one).

Typically a data engineer will be the one who helps bridge that gap, but that raises the problem of whether the data engineer's output == the scientist's output, which can be time consuming to handle.

To shrink the gap from dev->prod - we have 2 notebooks, one for model development and one for model deployment in production. We use papermill[0] to execute notebooks directly in production.

Shared functions between dev/prod that are built by the scientist are put into a separate notebook and then imported via `run`. If I'm honest, our scientists don't do this and simply copy/paste the functions if we don't yell at them to fix it.

This basically allows us to stay within the jupyter environment entirely so that dependencies are isolated.

So, it's far from perfect, but it's allowed us to shrink the dev->prod life cycle time. Love to hear what others have done towards the same end.


If you use conda there are extensions that can help with this by automatically registering any available conda environments that include ipykernel in your Jupyter Lab environment.

nb_conda_kernels is pretty reliable but not actively maintained. Gator from the mamba folks is new and still a bit rough around the edges but looks like it will be pretty slick eventually.

https://github.com/Anaconda-Platform/nb_conda_kernels

https://github.com/mamba-org/gator


If you're looking for a way for others to reproduce your environments, you might find this useful: https://jupyenv.io/


Have you considered using Pluto notebooks instead? They are designed to be reproducible (including version references to transitive dependencies). They are also reactive, so it is trivial to change the dataset.


One thing that I keep running into myself is I want to include data in a notebook as a sort of report or record of an analysis. I really like papermill for creating notebooks that execute and then store results. But you can only include text output or plots. If I wanted to store a numpy array within the notebook for inspection or input into a next step, there doesn't seem to be a way. I understand it could be difficult to figure out how to make all of that work. But I don't know. Maybe pickling or something should work?


Ploomber does this kind of proper jupyter notebook pipeline. I've used it, but not stuck with it (yet)


I'll have to check it out. I have looked at it previously but it's somewhat overwhelming to figure out what it actually does. My sense previously was that it was more like a dependency graph thing similar to a makefile, but I'll give it a closer look.


Export to HTML is usually a decent report. If you want to refine it a bit, you can export and show only outputs, or (with some code) only certain tagged outputs.
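
For the outputs-only case, something like this should work (a sketch using nbconvert's Python API; `jupyter nbconvert --to html --no-input` is roughly the command-line equivalent, and the filenames are placeholders):

  from nbconvert import HTMLExporter

  exporter = HTMLExporter()
  exporter.exclude_input = True                       # keep markdown and outputs, hide code
  body, _ = exporter.from_filename("analysis.ipynb")
  with open("report.html", "w") as f:
      f.write(body)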

To plug in new data, I experiment with papermill.

There are solutions that can put jupyter and the kernel in different environments. It's not even too hard to set up, but it astounds me that it's not a common way to set it up. It's natural to have the kernel + the analysis's deps in their own environment.


We looked into many of these issues with Deepnote (YC S19) [https://deepnote.com/]. What we found is that these are not necessarily problems of the underlying medium (a notebook), but more of the specific implementation (Jupyter). We've seen a lot of progress in the Jupyter ecosystem, but unfortunately almost none in the areas you mentioned.


> - The process to "promote" fragments of a notebook into being reusable functions seemed very high-friction: basically you're rewriting it as a normal Python package and then adding that to Jupyter's environment.

> - There aren't good boundaries between Jupyter's own Python environment, and that of your notebooks— if you have a dependency which conflicts with one of Jupyter's dependencies, then good luck.

The best Jupyter UX for me now is VSCode. Just put an .ipynb file in your workspace and you get the notebook interface inside VSCode. Put `%load_ext autoreload` and `%autoreload 2` in the first cell, and use the same python environment you're using in your workspace for the Jupyter kernel. Then you can import libraries from your project, use them, and it's very easy to promote code from the notebook into a library. You can just cut a function from the notebook, paste it into a library, add an import, and rerun the subsequent cells to verify it still works as expected.


>- There isn't a good way to take the same analysis and plug new data into it, other than to copy-paste the entire notebook.

Can't you just set the parameters in the first cell?

>- I get an analysis that I like, but there isn't a good way to share it with others, so I end up just taking screenshots.

Export to md, pdf or html?


At least for the last point I use conda with the nb_conda/nb_conda_kernels packages, which let you switch kernels to different environments.


To promote functions to a module, use nbdev; execnb or papermill to parameterise your notebook; and nbconda to have environments for your notebooks.


You may be interested in papermill to address the parametrized analysis problem [1]. I think (but I'm not positive) this is what the data team at a previous job used to automate running notebooks for all sorts of nightly reports.

[1] https://papermill.readthedocs.io/en/latest/#
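
A minimal sketch of the pattern (paths and parameter names are made up):

  import papermill as pm

  pm.execute_notebook(
      "nightly_report.ipynb",              # template with a cell tagged "parameters"
      "nightly_report_2023-06-13.ipynb",   # executed copy, outputs baked in
      parameters={"run_date": "2023-06-13", "site": "plant-a"},
  )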


> There aren't good boundaries between Jupyter's own Python environment, and that of your notebooks— if you have a dependency which conflicts with one of Jupyter's dependencies, then good luck.

I believe that you can use https://github.com/tweag/jupyenv for this.


Why not write the experimental code in the notebook, then write a function containing that code in the notebook and try it out, and only then move your function to a py file, if at all?



I'm always hesitant to self-promote in an hn comment (actually I have never done it before!), but your problems w/ Jupyter are just too closely mapped to what Hex (https://hex.tech/) solves to not plug it here!

- I get an analysis that I like, but there isn't a good way to share it with others, so I end up just taking screenshots.

You can publish any Hex notebook with literally just a few clicks, and anyone you share it with can access it, or edit it, or fork it, without installing anything— or you can even make it public. You can easily turn a notebook into an "app" or interactive report if you want, hiding/showing certain cells or choosing cells to show only code/only output. You can just share the raw notebook though too.

- There isn't a good way to take the same analysis and plug new data into it, other than to copy-paste the entire notebook.

Super easy to duplicate a Hex project and hit a different table or data source, or you can use input parameters (like ipywidgets) to make one notebook parameterized and work on a bunch of different data sources.

- The process to "promote" fragments of a notebook into being reusable functions seemed very high-friction: basically you're rewriting it as a normal Python package and then adding that to Jupyter's environment.

You can promote any part of a project to a "Component" (docs: https://learn.hex.tech/docs/develop-logic/components) that you can import into other projects. They can be data sources, function definitions, anything. If you make upstream changes to the component, you can sync them down into projects that import it.

- There aren't good boundaries between Jupyter's own Python environment, and that of your notebooks— if you have a dependency which conflicts with one of Jupyter's dependencies, then good luck.

Hex has a ton of default packages in its already installed standard library, and all the dependencies are ironed out— if you have packages you want to use that aren't there, you can pip install them, pull them in from a private github repo, or ask us to add them to the base image. You can also run Hex projects using a custom-provided docker image if you have super custom needs.

You should *definitely* check it out if you have these pain points. Here's an example of a pretty complicated public Hex project: https://app.hex.tech/hex-public/app/9b882bc1-ead3-4f0b-87d1-...

And here's a simpler one I just made the other day on a cool Silk Road dataset https://app.hex.tech/hex-public/app/cdc1b8fe-144b-4a74-a5ef-.... There's a bunch more examples at https://hex.tech/use-cases. Happy to answer any questions!


I was going to post Hex as an ideal option. I definitely do not miss dealing with Jupyter and all the Python env related headaches; Hex solves all of these, and then the rest of the more UX-related issues with notebooks. Definitely recommend it as the best compromise for notebook-based Python data analytics tasks.


What do you use to collect telemetry data?


It's a mix of things, but the big one was ROS bag files for robots. So a lot of the conventional up-to-the-minute metrics workflows oriented around grafana/kibana don't really pan out when you're very interested in a five minute window that occurred three days ago, buried in a 1gb datafile.


I have been using VSCode notebooks with .ipynb file extensions. This gives me many advantages, as I am able to configure things I'm not able to in JupyterLab. I also have access to a very rich ecosystem of plugins. If there is anyone aware of VSCode as a solution but who keeps using JupyterLab, could they explain why?


I too use VS Code as my Jupyter platform (running remotely on a powerful EC2 instance with 32 CPUs and 256GB RAM — my own desktop is a 7 year old Intel core i7 with 8GB RAM). VS Code’s Remote extension is amazing, works over any SSH host and seamlessly blends local and remote. It’s also fast since the UI is local while the filesystem and execution is remote.

The experience is a lot better than JupyterLab (which I am forced to use from time to time on SageMaker). The VS Code UI is cleaner plus I get a full language server which means I can rename variables and refactor fearlessly.

I also get full access to VS Code plugins.


Really interesting setup. What kind of monthly expense does this run?

And on a separate but related note, does it change the way you think about how you spend your time coding? (Assuming the costs do ramp up with usage such that time literally does equal money?)


I’ve no idea what the cost is since the company pays for it — I do need the horsepower to run some really large models and I suspect most people don’t need this kind of spec. But for my company it’s just part of the cost of doing business.

There’s no IT and I can provision instances of any type (subject to limits) at any time.


VSCode is free if you self-host it. There are corporate tiers with virtual desktops etc, and you can pay for services such as GitHub Copilot if you want.

Anyhow, there is a wealth of free extensions to customize it and the setup is really straightforward. I have git version management in a private GitHub project. You can add extensions for rendering graphs in good quality, and importing and exporting stuff is easy.

I have not been able to figure out why some people prefer to use Jupyter Notebook as it is.


Do you turn that lightbulb off at night when you clock out? Just curious.


32 vCPU / 256 GB instances like r6a.8xlarge are about $900/month (r6ad, which has local disk, is about $100/month more). I don't see there being many other major costs with such a setup?


Unlike individuals, large enterprises rarely pay sticker price but a heavily discounted negotiated rate for software and services. I can’t say how much exactly but it’s less than that.


With respect to EDP, AWS doesn't do "discounted negotiated rates", only discounts tied to specific committed spend levels (non-negotiated).


I'm curious about your setup as I might have to do something similar soon due to my machine's performance constraints. I understand you can connect to a Jupyter server remotely, but how do you sync your code? Do you have your Git repo cloned on the remote host and just run Git commands over SSH? Or does VS Code have some kind of integration for remote file systems with version control?


Check out the remote SSH (https://code.visualstudio.com/docs/remote/ssh) extension. It makes everything really painless. I'm currently developing a website that resides in a docker container in a digitalocean server, and editing the code in VSCode feels literally no different at all to running it locally. If you have the SSH keys set up it's completely painless.

The other nice thing about VSCode is that you can extend it with VSCode Neovim (https://marketplace.visualstudio.com/items?itemName=asvetlia...), which runs a headless version of Neovim and allows you to do all the wonderful things that that entails, including stuff like VSCode's native multiple cursor implementation (and Lua config files!). All in all it's a great workflow, it's pretty light, and if you're paying for (or self-hosting) a beefy server it can turn any laptop into a powerhouse.


This is great, thanks. Commenting to bookmark for future reference.


My code resides entirely on the remote EC2 instance while VS Code UI runs locally. There is no sync as such. You’re working off the remote copy via SSH. (I do back up my code periodically)

VS code takes care of spinning up the remote Jupyter server. All I have to do is create a new .ipynb file and everything happens automatically. Execution and disk are remote, only the UI is local. This is the magic.

It’s exactly like SSH except you have a rich client IDE in VS Code. The only data that moves over the network are your keystrokes and pastes and what is needed to display output in VS Code. You have to try it to see.


That sounds amazing. I am going to try this today. I have been irrational to the point of not trying this because I just love the notebook so much even though I love VS Code too. I imagine you just would never go back to notebooks.

It is nice that the company pays for all that cloud compute, but for an individual it would just seem more practical to build a beast of a machine.


VSCode Jupyter notebooks + Github Copilot is my favorite way to interact with notebooks. The autocomplete is super helpful for assisting with discovery of matplotlib or numpy operations.


Yeah, I'm looking forward to Copilot chat for matplotlib stuff. Right now I have to wrangle Copilot to do what I want with comments, but with Chat you can just ask it to write the whole cell of code.


I've not been back to full-fledged Jupyter since getting in to VSCode.

Most of my analytical work now is done in .py files, broken up into blocks with `#%%`. Real notebooks feel really clunky since adopting the approach.
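
For anyone who hasn't seen it, a percent-format file is just a plain .py with cell markers, something like this (contents invented):

  # %% [markdown]
  # ## Fault counts by code

  # %%
  import pandas as pd
  df = pd.read_csv("faults.csv")
  df.head()

  # %%
  df.groupby("fault_code").size().plot(kind="bar")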


Does `#%%` have special meaning to the tooling? Are you able to view the output as a pseudo-notebook?

EDIT: Googled and answered my own question. Here are docs describing the feature: https://code.visualstudio.com/docs/python/jupyter-support-py

And a video demoing what you are describing: https://www.youtube.com/watch?v=lwN4-W1WR84


Nice! That's the Matlab way of doing things. I used to miss the Matlab workflow a lot when I was transitioning from Matlab to python. Although I'm somewhat surprised they didn't just go with "## title" to match Matlab's "%% title" (the difference is the comment character, and "# %%" reads like a python comment of a Matlab cell; I suppose it makes sense if you start with an m-file, python-comment the whole thing and then work your way down translating cells from Matlab to numpy/scipy).

Personally I've since gone full literate programming mode to the point that I care far more about the narrative and documentation (of methods and results) that I will build and modify tools rather than go back to the Matlab way. I have been looking at Quarto but haven't had the time to see if I can transition my existing (and target/ideal) workflows.

I know it gets a lot of hate but ipynb have a lot of advantages as a format for building small custom tools for modification/transformation. Most of the complaints ultimately seem to boil down to not having tools that do what you want. Only want to diff the code cells? That's easy in a python utility that loads the notebook and looks at it intelligently. You can also use pre-commit to modify the notebook and strip out things that don't belong in git.

(Also nbdev... exists... and is a good example of how tools can help. Unfortunately it's too tied to GitHub functionality and the developer is a GitHub zealot who is oddly brittle and takes offense and demands justification if anyone mentions not wanting to rely on GitHub)


I must be missing something because I am not immediately seeing the value-add. Do you prefer the separation of input/output, or is it something else? I believe all of the debug, extensions, and hinting work the same as in the standard notebook.


I'm glad you asked, because you made me think about why. Initially, I guess I just thought it was kinda neat and stuck with it, but on reflection this is what I personally feel I get out of it:

- Same interface for analysis, scripting, and building more complex multi-file pipelines. I can also use the #%% notation to break up and debug scripts, which is probably teaching me all sorts of bad habits but it's something I find helpful.

- Similarly, as another commenter in this thread notes, .ipynbs just don't play as nicely with the other dev tools (e.g., Git, Black) and generally feel like second-class citizens in VSCode.

- I much prefer having the VSCode interactive window on the right, as opposed to having my output dumped out below my code block. I now find using the classic notebook style makes the document much longer and harder to navigate, particularly as I work with text a lot and I'm often outputting large chunks of text for inspection.

This noted, I think this is all possible because I'm rarely producing my final products in notebook format. Neither my boss nor the stakeholders I typically present to can (or have any inclination to) read code, so I don't really need a format others can execute or inspect. I just take the charts and figures and dump to presentations and other normie-friendly documents.


I am torn on the separation of the output from input. I expect half the time I would be happy for the extra vertical space and the other half I would be annoyed I could not immediately correlate code with output.

Anyway, thanks for spreading the workflow, and I will definitely try it out in the coming weeks.


I wish more data scientists used light percent format notebooks (`#%%`). They can now be combined with other powerful tools (linting, formatting and git) in a way that is impossible with the `ipynb` format.


I always like this better than true notebooks for a lot of purposes, but it's long overdue that we standardize on a format here. Knitr and RMarkdown never caught on, and Org Mode and VS Code both just do their own thing. It's a shame there isn't something more "portable".


Do you find it clunky that with cells inside py files the results appear to the right? It reduces the screen real estate a lot.


I guess this is preference.

I like to have the relevant code and output side-by-side, and dislike scrolling past outputs to get at code. Again, pure preference.

My screen copes fine with two tabs and the sidebar hidden most of the time, but more real estate would be nice.

What I'd love would be to pull tabs out into separate windows, like in a browser, and have the Jupyter output and variable inspector on a second screen. If anyone knows a way to do this (not new window) I'd love to hear. Last time I looked seriously this wasn't possible.


You cannot put pictures and YouTube videos in #%% .


I have to confess I've never had much need for YouTube videos in my code. Pics are just output as far as I'm concerned.

Naturally, if you need these things then .ipynb makes sense.


Maybe because Jupyter is set up on a computing server, where everyone needs to log in to do their work? This was the case for my last 4 companies.


I use VSCode (with the remote extension) for the situation you’re describing, and find it works extremely well. There are a couple little pain points, but none related to the remote part of the equation.

Big positives are how it integrates with the rest of the IDE so go to definition, debug cell, and data explorer just work.

Some negatives are a possibly onerous setup if not already using VSCode as your IDE (to get some of the IDE-like stuff to work), and how there isn’t exact parity on hot keys so muscle memory fails you occasionally.


I see. If I faced the same constraint I would try to find whether VSCode can access files remotely, which is very likely.


The VSCode remote access is IMO better than just accessing files remotely; it’s a large part of why I use it instead of Pycharm.

It splits the editor into a UI that is run locally, and a server that does the heavy lifting on the remote machine. Conceptually it’s very similar to Jupyter, where you have a user-facing front end with the UI run in JavaScript and rendered by your browser, and a python kernel backend, and the two communicate over pipes that can be run over the internet.

What it effectively means for VSCode is that you get a more seamless experience than I experienced with Pycharm remote development.


I have tried VSCode many times, but I find the performance of the notebook UI to be terrible. It seems to be popular, so maybe it's just me.

In any case, jupyterlab + jupyter-lsp gets most of the benefits for me.


Would be curious how long ago you tried this and what your setup is? The lowest spec machine I have is a Windows 10 machine with an i7 and 8 GB RAM and it is super responsive on the latest version of VS Code.


A few weeks ago with Ubuntu 22.04 on i7 with 16GB RAM. Perhaps it's related to using the devcontainer feature of VSCode, although I run jupyterlab in the same container


That might be it. I’m not sure about devcontainers (I don’t use them) but as a data point I dual boot into kubuntu on a 7 year old i7 with 8 gigs of RAM and there are no responsiveness issues.


I would love to use vs code for notebooks, but I just cannot get the interactive console to work as I would like. Right now I don’t exactly recall the problem, but it had to do with the keyboard shortcuts and running the piece of code in the interactive console. For some reason other shortcuts took precedence, even if I disabled them, so I couldn’t get my code to reliably run on the console.


Well, I reset my VS Code environment in order to start fresh and check if my problem was still there. My issue is that VS Code interactive windows do not share the same "working space" as the cells. I usually use the interactive window to try stuff, which then gets crystallized in the notebooks. The interactive VS Code consoles are disjointed from the notebook, making it a pain to test stuff with already-defined variables and libraries.


For me it's a cleaner UI for experimentation and when you run a cell it doesn't jump to weird places depending on the output. I've been using jupyter for so long that I find it very familiar and easy to use. With that said, VSCode has improved so much for notebooks that I'll usually reach for it over jupyter.


Ctrl+Enter executes a vscode ipynb cell without jumping to the next cell. Although it doesn’t solve the issue of jumping when you re-run all cells with figures in your notebook.


Does anyone else have issues with the cells overlapping or shrinking in vscode? I've had this issue for months. Created an issue on GitHub, but there's been no improvements. It can be very frustrating, sometimes I have to completely close vscode and reopen to get it to stop glitching constantly. I tried switching the GPU to the nvidia card, but it somehow uses 30% of the GPU when a cell is running.


I’ve tried to convince junior researchers to make this jump in the past and they have not done so. I think it's a combination of lack of time and familiarity. A lot of researchers only use jupyter notebooks occasionally between their more time intensive lab work, or possibly use similar R tooling instead.


VSCode interactive notebooks are amazing, I think that should be how it is for all environments. I only dabble with notebooks but I dream of the day I can easily have all of my functions be interactive in a REPL as I code them with VSCode.


Only because I can use it anywhere that way, including on machines that I am not allowed to install things on. But if it works in vscode.dev then I am out of excuses, because you’re correct, it’s much much better in vscode.


I've been using Jupyter Notebooks for 7+ years but keep failing to find a use case for JupyterLab.

JupyterLab feels like a clunky web based IDE. I check it every year or so and go back to Notebooks.

I used to and still run a Littlest Jupyter Hub: https://tljh.jupyter.org/en/latest/ for my org.

I keep thinking whether migrating to full blown JupyterLab is worth the pain.

With the improvements that Visual Studio Code has made in ipynb support there is even less reason these days.

The biggest thing keeping me on VS Code of course is full blown Copilot support. Whenever I have to fall back to Colab I feel 2-3x less productive.

My workflow is:

* Notebook for exploration/fiddling (around 90% of the time is spent here) - keeping state open is so convenient

* Extract/export code to regular .py for production


Same here. JupyterLab is just too overwhelming and full of things I don't need. I just need an interactive Python interpreter with support for visualizations and editing code blocks. That's Jupyter Notebook, so I keep using that.


I've been using Jupyter Lab for ages now since it had a dark mode and Notebooks didn't. It's been so many years though maybe that has changed by now and I haven't noticed.


I see no mention of an improved debugger. IMHO, the atrocious debugger in JupyterLab is one of the primary reasons why it is difficult to write good code in this environment.

Even a simple improvement like remembering the sizes of various subpanels in the debugger sidebar will make me feel like I am not pulling teeth when I use it.

And don't get me started on inspecting the value of variables. If you are looking for more than an object within an object, you might as well go back to print statements.


Eh, print/log debugging works fine. Especially in an interactive environment: you've got direct access to the variables and objects, and can easily inspect them directly.

At some point I felt like I was a bad dev for not using a debugger, but at this point I think I'm more versatile since I'm less dependent on finicky tooling to figure out what some code is doing... Every language has its own debugger to learn, but logging (and good strategies for logging) works about the same everywhere.


Debugging nested dicts and high-dim arrays is a nightmare using print


In python you can turn nested dictionaries and other data structures into json, but only if the data structures don't include circular references. I use that a lot.

something like

  >>> d = { "a":1, "b":[1,2,3] }
  >>> import json
  >>> print( json.dumps(d, indent=2) )
  {
    "a": 1,
    "b": [
      1,
      2,
      3
    ]
  }


Try pprint:

  >>> d = { "a":1, "b":[1,2,3] }
  >>> d["d"] = d
  >>> import pprint
  >>> pprint.pprint(d)
  {'a': 1, 'b': [1, 2, 3], 'd': <Recursion on dict with id=4516761856>}


thanks, great!


Try `str`


Do you know how it compares to using Visual Studio Code's debugger with notebooks? I'm wanting to find a notebook debugger recommendation for my students.


The VSCode debugger works amazingly well with notebooks. I find it super useful all the time when I develop. Just install the Jupyter extensions and it should work.


Vscode's notebook is quite good (as good as you consider it to be in comparison to pycharm), but on my system it always exhibits bugs: a 'random' need to rerun code multiple times for the debugger to recognize changes, not terminating the debugging session when I stop it, a freezing interactive debugger shell in the middle of a session, etc.


Is this in the context of students learning to code, or something else where they need to use the notebooks to supplement their study?


Engineering students (not software) learning numerical methods, so doing programming themselves, but nothing very sophisticated. Their programming skills are mostly pretty limited so simpler is better. At the moment we use plain Jupyter notebooks, but a variable value explorer and simple debugger would be helpful. I want to avoid the (to them) bewildering complexity of an IDE.


Have you tried PyCharm? I believe even the Community (free) edition should have the support you are looking for.


No I haven't, but it looks like the community edition can't do notebooks: https://www.jetbrains.com/help/pycharm/jupyter-notebook-supp...


Ah, my bad then.

If it's for students, you could maybe get an educational license, which is completely free for all their products, but I imagine it could be quite a hassle.


Pycharm's notebook debugger is top notch, but their notebook implementation is, imo, clunky and slow.


Give it another shot if you haven't recently. I was a hardcore Jupyter Lab user and PyCharm pulled me away with their recent updates. Same hotkeys, good integrated tool support.

The VS Code implementation of notebooks has too much vertical space for me.


I'll try then! Last I remember they had some weird variable loading policy: even for small cell executions there'd be a noticeable delay before variables got 'refreshed', even when lazy loading was configured. Maybe only a problem on my end though.


The debugging engine is the same. They both use debugpy 'via' the Jupyter wire connection.


Also it has a poor multi-user experience.


Is there a similar notebook application like https://livebook.dev/ for Python?

I like Jupyter, but after trying out Livebook with Elixir I wish there was something similar in Python.

Smart cells and toggling parts of code on/off are extremely useful features in a notebook app.


I'm not sure what smart cells are, but the older Jupyter Notebook interface had several optional plug-ins that let you control things like freezing cells from being re-executed, controlling cell execution order, etc.


you could use streamlit


Anyone have experience with JetBrain's Data Spell product?


I found DataSpell to run much slower for me, and didn't find any value above something like VSCode.


Good enough for me, and clients love it.


What’s the current best-practice workflow for using with git? I see jupytext discussed in the comments but this saves the exported .py file. Is there a good clean method for storing the .ipynb files in git? Maybe it is easier without the cell outputs?


I use a pre-commit setup that strips and formats notebooks. IIRC I use a bunch of the nbqa hooks (to also apply black, isort, ruff, etc) but I'd have to check if I moved to different versions for some reason. My workflow uses papermill so the notebooks I store in git are essentially "parameterized templates that get instantiated with data".

https://pre-commit.com/

https://github.com/nbQA-dev/nbQA

https://papermill.readthedocs.io/en/latest/
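If it helps, here's a rough sketch of what the papermill half of that workflow can look like; the file names and parameters are just made-up placeholders:

    import papermill as pm

    # The committed, output-stripped notebook acts as a template; papermill
    # injects the parameters dict into its cell tagged "parameters" and writes
    # a fully executed copy, which I keep out of git.
    pm.execute_notebook(
        "analysis_template.ipynb",
        "analysis_2023-06-13_run.ipynb",
        parameters={"data_path": "data/latest.csv", "threshold": 0.95},
    )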


Jupytext seems like the best practice.

The .ipynb files are output artifacts. Why would you want to store them in git? It would be like storing compiled program binaries.
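For what it's worth, a minimal sketch of that pairing using jupytext's Python API (the file names are hypothetical, and the jupytext CLI can do the same thing):

    import jupytext

    # The version-controlled .py script (py:percent format) is the source;
    # the .ipynb is a generated artifact you can recreate at any time.
    nb = jupytext.read("report.py")
    jupytext.write(nb, "report.ipynb")

    # And back the other way, after editing the notebook:
    nb = jupytext.read("report.ipynb")
    jupytext.write(nb, "report.py", fmt="py:percent")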


In the sense of an evolving notebook that's used, e.g., for reports. It would be good to be able to version-control this. The output doesn't need to be saved, but the code itself would be valuable, and then you can pull it from version control and run it.


I had good experiences with nbdime.


I like my repl. Can anyone tell me whether these notebook things are an improvement? They look a little like a repl instance with a history tracking how you got there which seems like a potential upgrade.


As you guessed, the history tracking is one of the killer features. Imagine it being super easy to edit the history of a REPL session (delete, reorder, merge, and edit contents of each command) and rerun... That's a notebook! Notebooks also allow for markdown input and rich HTML output (which is killer for plotting) making it possible to polish your REPL history into a document you'd actually want to share with a colleague to explain something like a data analysis workflow.

I actually started in notebooks and then learned to love the REPL as a simplified "scratchpad notebook." I'd say in many ways notebooks are an improvement that cater heavily to REPL-lovers, but that for some quick tasks, the extra complexity isn't always worth it.
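To make the rich-output point concrete, here's a tiny illustrative example (the content is made up) of what a notebook cell can render inline that a bare REPL can't:

    from IPython.display import display, Markdown, HTML

    # In a notebook these render as formatted text and an HTML table,
    # rather than as plain repr() strings.
    display(Markdown("## Results\nThe fit converged after **12** iterations."))
    display(HTML(
        "<table>"
        "<tr><th>metric</th><th>value</th></tr>"
        "<tr><td>RMSE</td><td>0.42</td></tr>"
        "</table>"
    ))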


It would be nice to have something like that but completely native - without the need for a browser, CSS, JavaScript, etc. Just drawing directly on a surface, e.g. via Vulkan or GTK/Qt.


Even if the main interface is native, you still need an HTML/CSS/JavaScript renderer for some of the output, because most extensions just output HTML and let the browser do the rendering.


Can somebody give me a quick rundown on the difference between JupyterLab and Jupyter Notebook? Who's the target audience for each one?


The Jupyter Notebook interface/server just opens singular notebooks (i.e. one notebook per browser tab).

Jupyter Lab is an IDE where you can open notebooks, files, terminals, etc all in one interface. Additionally, you can easily adjust the layout of open files (want notebooks side by side? Click one notebook's tab and drag it to one side of the screen. Want a terminal on the bottom of the screen? Open a terminal and drag its tab to the bottom of the screen. Want 3+ notebooks side by side? Click and drag. Etc).

I basically only use the Jupyter Lab interface when I'm working with notebooks (sorry about using the term notebooks so much, it's a synonym for a .ipynb file as well as the name of a server mode Jupyter offers).


When I run `pip install jupyter && jupyter notebook`, I can still create Python notebooks, text files, or shell terminals.

I still don't quite get the difference between Jupyter Notebook and JupyterLab.

My understanding is that Notebook is for a single user working locally, while JupyterLab is multi-user, running on a server, something like that.


Lab is essentially for power users who need tabs of notebooks and multiple .py files open at a time in a single window. If you only ever work on a _single_ notebook at a time, then it is overkill.


Jupyter Notebook is a one-file-at-a-time interface.

Jupyter Lab lets you have multiple files/directories/terminal/csv files/json files/html pages/etc open at once in the same browser window.


[flagged]


Please stop doing this. If you have personal experience to share that is valuable. If you are just cutting and pasting from an AI, anyone on HN can do that. It is no more helpful than copying a list of results from a search engine. Your HN contributions should be written by you, not a computer.


That definitely sounds sensible. I had the same question and figured I'd ask an AI. It gave a pretty good answer, so I shared it.

I didn’t think it would get a backlash. But agreed, the answers are too verbose and polite for an HN comment and do not fit with the culture here.


Still looking for a nice setup with JS. Tried some notebook promises with danfoJS but couldn't find something ergonomic


Anyone know about compatibility with existing notebooks? Presumably stellar, right?

For now, I need jupytext (which is a great extension), so I'll stay on v3.6 until jupytext compatibility is resolved.

For this announcement - or in the changelog - it would be great to know why it is a major version bump: what is the major compatibility change?


This is only a change in the interface. I don't think there is any underlying change in the notebook format. Your old notebooks are completely safe.

P.S. Jupytext is a pretty cool extension: https://pypi.org/project/jupytext/


nbformat is the reference implementation of the notebook format (https://github.com/jupyter/nbformat); there have been no recent major releases.

jupytext is great. You can also review your notebooks as `.ipynb` files using GitNotebooks [https://gitnotebooks.com]. (I'm the solo dev.)
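For anyone who hasn't poked at the format itself, a small sketch of reading a notebook with the nbformat package mentioned above (the file name is hypothetical):

    import nbformat

    # .ipynb files are just JSON; nbformat gives you a validated object model.
    nb = nbformat.read("analysis.ipynb", as_version=4)
    for cell in nb.cells:
        print(cell.cell_type, len(cell.source), "characters")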


I would think of JupyterLab as more of an IDE, and the major version bump need not affect their notebook format.

I think if they made breaking changes to the notebook format, they'd mention this right away.



A question for those more experienced than me, but how do JupyterLab/Notebooks compare with the likes of datasette[0]?

[0] https://datasette.io/


Not really the same thing. Notebooks are a tool for interactive programming, maybe even literate programming if you want. They happen to be most commonly used in data science and data analysis work, where interactivity is important.


It was a nightmare making extensions in the previous version. The documentation was very scattered/cluttered across different JupyterLab versions. I don't think I'll be coming back to it for 4.0.


Making extensions is largely about looking at their own packages in their repos and at the API docs, figuring things out from procedure names and from sometimes-useless comments that merely rehash the procedure name as a phrase.

Additionally, they often point you at their terrible Discourse forum for asking questions. More often than not I don't find a good answer there either when I search for one. I think their Gitter channel has worked best for me so far, when they didn't point me to that forum.

TypeScript also helps a bit when compiling.

Sometimes I visit an old bookmark that looked, by its URL, like part of the official docs, only to find it 404ing. Ultimately I agree that good, accessible docs are not the project's strong side.

I guess I will have to figure some things out again soon when updating extensions to version 4.


Anyone got experience with the real time collaboration and have feedback?


I'm surprised I had to scroll this far down for anyone to ask this question.

I've been trying to work out exactly how it works myself for the last hour, and there is no clear indication of how to activate this mode. No documentation either, barring: https://github.com/jupyterlab/jupyter_collaboration

What am I supposed to be doing here?


Clearly it just works. Install that plugin and someone will just instantiate into your notebooks and start editing things.


Would this work as my main IDE?


I haven't upgraded to JupyterLab 4 because the plugins I use to make JupyterLab IDE-like (jupyterlab-git, jupyterlab-vim, jupyterlab-lsp) all need to be stable on the new version first.

But, yeah, you can develop Python software in JupyterLab if that makes sense for you.


[flagged]


Comedy is not your forte.


The analyses are just more data to analyze.



