True, but the languages are siloed: each notebook runs just one language. With Beaker the languages can communicate with each other. There's no easier way to combine Python and JavaScript for d3, for example: https://pub.beakernotebook.com/publications/7fdcaaa6-fb83-11...
Not everyone needs more than one language in the same notebook communicating with each other. But if that's required, then the cell magic system looks superior to me: have a look at the %%fortran, %%cython, %%javascript, %%html, and %%bash options. It is also possible to switch kernels in the same notebook, but serialization of state between kernels is left to the user.
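For reference, the cell-magic approach looks roughly like this in an IPython notebook (a sketch, not a runnable script: each magic governs a single cell, %%cython needs `%load_ext Cython` first, and any state shared between the languages is your own responsibility):

```
%%bash
# this cell runs under bash instead of Python
wc -l data.csv

%%javascript
// this cell runs in the browser's JS engine
element.text("rendered by the frontend");

%%cython
# this cell is compiled by Cython (requires %load_ext Cython)
def fast_sum(int n):
    cdef int i, total = 0
    for i in range(n):
        total += i
    return total
```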
Cell magics just seem like one possible method of choosing which language to use, a menu such as what beaker uses seems alright too.
I was setting up cling (the C++ kernel) today and gave up upon realising that the interpreter is spun up in a subprocess rather than persisted as a kernel across cells. Without cross-cell support, mixing languages is a limited feature of Jupyter.
> JupyterLab adapts easily to multiple workflow needs, letting you move from a Notebook/narrative focus to a script/console one.
I'm not sure I like where that design is going. It's starting to look an awful lot like RStudio and MATLAB, and I moved away from those tools for a reason. My favourite thing about Jupyter is that it is focused on notebooks and narrative. It brought about a revolution of sorts; now we have people blogging and writing papers in Jupyter, and GitHub is full of random useful notebooks.
This design almost seems like a step backwards in that regard.
Context: We use Jupyter heavily (mostly against Spark).
In my experience there is a set of things that "traditional" Jupyter notebooks do really well. Anytime you have a linear flow of steps, the notebook metaphor works beautifully.
However, if you are doing things approaching traditional development, where you have multiple sources of data, or loops that require debugging, or basically anything that isn't linear in nature, it doesn't work so well.
I wouldn't want to lose traditional notebooks, but I'd love to be able to offer people something like this that offers better debugging and some development tool support, rather than jumping to a full desktop IDE.
My experience is in line with yours, debugging loops and functions is a big pain point.
However, I think there's a much better solution to be had here, which is to add more powerful debugging capabilities to Notebook. I think Notebook has potential for new debugging paradigms: imagine, for example, being able to break anywhere in a cell and get a new 'forked cell' that operates in the context of the code you just broke into. I think that's the better direction to go, instead of reverting to the very interfaces and paradigms we moved away from.
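Something in that spirit can already be approximated today: after an exception, the raising frame's locals are reachable through the traceback, which is roughly the context a 'forked cell' would inherit. This is a plain-Python sketch with a made-up example function; in a notebook, the %debug magic gives the same access interactively:

```python
import sys

def running_mean(xs):
    total = 0.0
    for i, x in enumerate(xs):
        total += 10 / x  # blows up when x == 0
    return total / len(xs)

try:
    running_mean([5, 2, 0, 1])
except ZeroDivisionError:
    # Walk the traceback to the frame that raised and capture its locals,
    # much like a forked cell would inherit that execution context.
    tb = sys.exc_info()[2]
    while tb.tb_next is not None:
        tb = tb.tb_next
    ctx = dict(tb.tb_frame.f_locals)

print(ctx["i"], ctx["total"])  # the loop state at the moment of failure
```

From here you could keep experimenting against `ctx` as if you were inside the loop when it failed.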
As someone who still reaches for a Smalltalk environment when I need to prototype something, ipython notebook is about the closest thing I've ever found to the style of interaction you get with a Smalltalk REPL (aka "workspace"). Smalltalk deliberately blurs the lines between a text editor and REPL, and the debugger takes this a step further - a combined editor and REPL, within a suspended execution context. It could be argued, for example, that coding within the suspended context of a failed unit test, while the code underneath your cursor has this REPL-like liveness, is the non-cargo-cult way of doing TDD.
I'm not trying to claim Smalltalk as the Greatest Thing Ever, but its existence (and its "otherness" - from the point of view of today's conventional style of development) are evidence that there are useful tools to be had, somewhere down a road less travelled.
I learned enough Smalltalk to make prototypes with Seaside. I love the way notebooks work and to make them more Smalltalky would be wonderful not only for programmers but for the field as a whole.
I'm not sure (And by that I don't mean I disagree: I'm genuinely unsure).
To me, the traditional IDEs do work well for debugging and software development.
Notebooks are great for explanatory examples and interactive experiments. I think these are different to the type of software development I do when I use an IDE.
For example, I find notebooks great for rapid iteration of parameters when I'm doing "data science", or indeed most of the feature extraction->modelling->prediction data science pipeline.
What I don't find them good for is developing new algorithms. It isn't clear to me whether this is an inherent limitation of the notebook format, or just something that needs new developments.
(To be clear, I've also used both Zeppelin and Beaker notebooks and don't see any particular advantages. I've also used R Studio, but I don't really know enough R to comment sensibly on that)
I agree, the notebook could really have much more powerful debugging.
I've actually been working on implementing something to that effect for Python (see http://kitaev.io/xdbg). But it's been really hard to figure out who already has experience with similar workflows, because these tend to be isolated to particular language or devtools communities.
Apparently I need to take a closer look at Smalltalk.
Seeing as you're heavily using Spark, have you had a look at Apache Zeppelin (site: https://zeppelin.apache.org, demo: https://www.youtube.com/watch?v=J6Ei1RMG5Xo)? Seems like a more powerful notebook approach, plus better architecture for using embedded d3.js viz. Also painless templated SQL -> published dashboard looks great for getting data visible early on.
It looks nice, but the installation experience is (was?) terrible (as in: didn't work at all). Note the long gap between the 0.5.6 release (January) and the 0.6.0 release (July)? There were 3 (4?) Spark releases in that time, which meant that none of the out-of-the-box releases worked for anything except the version of Spark you downloaded with it (and from memory that had problems too).
I got it working and evaluated it in some depth. I'm from a Java background, so I really wanted to like it.
But it turns out that all those features that seem really nice are mostly only nice if you are trying to build applications, not notebooks. Maybe it has improved, and maybe for some usecases it makes sense.
I agree, but would phrase it more like this: Jupyter has succeeded because each of the major modes of interaction has been decoupled. If you just want to use it in a shell, you don't need to involve the browser at all. If you want a narrative format for sharing, presenting, or converting to slides, you can easily launch that environment.
This feels like a big step backwards for me too. It's effectively like replicating the MATLAB / Octave / PyDev (Eclipse) sort of IDE-with-extras-plus-console that is so, so cripplingly bad, but acting like it's great and new just because it's all in the browser.
If you're a fan of productivity, you shouldn't want to do that kind of stuff in a browser. Heck, I even disable all of the dropdown menus in Emacs because even that is too much of a productivity hindrance / inefficient use of monitor space when I am writing, reading, and thinking about code.
This is one of those things where I feel that it doesn't actually solve practical use cases, doesn't make people more productive, but because there is a big hype engine behind it, it gets adopted and talked about anyway, and eventually becomes the sort of thing that an Office Space kind of manager starts to force you to use ... which really scares me. Stay off my lawn.
Can you elaborate on what you think is bad about the IDE-with-extras-plus-console framework? I do research in quantitative finance and I actually find it really useful.
One of the main things is that it continues to perpetuate primarily mouse-driven interaction with the development environment. Even when tools like this enable Emacs or vi key configurations, the integration just never quite works, and there are environment-specific options you are required to select that come from e.g. drop-down menus, etc. Interacting with UI elements is horrendously unproductive and disruptive to thinking. Putting it in the browser makes this worse, because then you've also got the browser's own key configurations, like tab switching or bookmarking, to worry about.
There is seldom any value in looking directly at code and at a console at the same time. But if you really want that, it's super easy to do it with a window manager like xmonad, or even just arranging shell windows on your desktop so that you can alt-tab between them easily.
You often want to quickly spawn and kill shell tabs, which themselves may or may not be in the same language. For example, I often have a tab in which I'm using IPython, another tab in which it's the working directory so that I can execute things with Python directly, mv/cp files, ls, etc. And then still more tabs in which I have background processes that check for file changes and run my unit tests whenever things change, sometimes a tab for Python 2 and another tab for Python 3. And further, development projects are almost always cross-language, so I tend to also have some tab opened for writing and working with C or Haskell at the same time, and possibly another with a psql shell.
Since there are so many necessary tabs just to do even the tiniest things, it means that any and all visual overhead must die. Switching these tabs, even if it is quick, inside of a clunky GUI application like a browser is just too unproductive -- the browser periphery already wastes maybe 5% of the available visual space, and then the overhead for the tab icons, clickable close buttons, etc., wastes another 5% inside of that, and then the width of the tabs is restricted because there's some left panel with directory information or in-memory workspace information (both total wastes of time), and the height is restricted below by some worse-than-plain-shell console; it just makes no sense. It's too much visual clutter and too inefficient to facilitate switching around as much as is necessary.
I think it should be emphasized again that the left-side panels showing either directory structure or the contents of an in-memory working environment are huge wastes of time. If you need to visualize a directory structure, that should just be another buffer, like a code source file, and when you want to view it you switch to that buffer. There's no benefit to having part of your visual field distracted by it while you're working on other files.
The in-memory information is also generally crappy. It's another thing where, if you really need it, it should just be another buffer: you go to it, give it all your attention for the short time you need it, then go back. It serves nothing as an ever-present visual distraction. But more than that, relying on inspecting variables that way is a very infantile thing, and I see it a lot with MATLAB programmers. Their form of debugging is not to scientifically inspect and control the execution of the code, using proper breakpoints and watchpoints to tell them what's going on, but to just "run everything" and then click open a spreadsheet-like view of a matrix variable and manually (!) inspect the data. Then they become reliant on this as a crutch and complain when it's no longer there, instead of learning proper ways to write tests and proper debugger usage, and letting those things automate the problem of zooming in on outlier data, messy data, or bugs.
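As a concrete contrast, the "open the spreadsheet view and eyeball it" step can usually be replaced by a few lines of assertions that flag NaNs and outliers automatically (a hypothetical helper, not tied to any particular library; the thresholds are illustrative):

```python
import math

def check_column(values, lo=-1e6, hi=1e6):
    """Fail loudly on NaNs and outliers instead of eyeballing the data."""
    bad = [v for v in values
           if (isinstance(v, float) and math.isnan(v)) or not (lo <= v <= hi)]
    if bad:
        raise ValueError(f"{len(bad)} suspicious values, e.g. {bad[:3]}")
    return True

check_column([1.5, 3.2, 0.0])          # passes quietly
try:
    check_column([1.5, float("nan")])  # the NaN is caught automatically
except ValueError as e:
    caught = str(e)
```

Run as part of a test suite, checks like this zoom in on bad data every time, with no manual inspection step.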
Anyway, there's plenty more to say, but it's probably long-winded enough.
I generally much prefer GUI tools to command line tools. I've been working as a software developer for 20 years, so it isn't that I don't know the command line.
I find GUIs generally allow for better discoverability, and I much prefer switching browser tabs to switching tmux/screen terminals.
Perhaps it is possible that different people work in different ways? Maybe the things you think of as 'wrong' just don't work for you, but really do for others?
For example, I know how to use a debugger. I've written my own custom debugging clients, and attached IDEs to live production webservers and debugged code live, so I really do know what I'm doing. But sometimes I prefer outputting data into an Excel spreadsheet because it gives me better context. Sometimes that really is the right tool for the job.
Also, get more, bigger screens. It's made a huge difference for me.
One of the things about JupyterLab is that it really emphasizes extending and customizing the environment. Think of it as a platform for web-based applications and a reference set of components. We've tried hard to make the underlying platform have good support for power users. We've paid a lot of attention to making a good keyboard shortcut system, for example. I think it makes a lot of sense for someone to write a plugin for the system that registers keyboard shortcuts for doing tab management, maybe in the vein of a tiling window manager. Also, it would take changing only a few lines of code to move the file browser from the left side panel to the main docking area - it's just a widget that is registered on the left side panel rather than the main area.
We also encourage people to theme the environment, and provide themes via plugins. I think theming things with a lot more minimal whitespace would be an interesting project for users like you.
Also, for a power user, I'd definitely suggest running it in the browser application mode, which gets rid of lots of the browser chrome.
Remember - it's an alpha-level project, we're still experimenting a lot, especially on the user experience and UI. JupyterLab was built to be easily extensible and customizable, and we encourage people to experiment with customizing the environment in a way that suits them. And we appreciate feedback as well! Thanks!
The IPython terminal is still there, but you can't expect MATLAB or Excel users to start working in a vim- or emacs-based workflow. Besides, the browser provides rich output and cross-platform support. The big thing for me is widgets.
The returns to widgets in these kinds of things are so diminishing. It looks cool the first 5 times, but then your boss says, "Great. Productionize it", and it's a world of shit. Widgets are good for throwaway demos and presentations, but if you are designing functions for a business API of some kind, and your form of reporting is to show people something with a widget, now they want your internal API to include the widget, which is a dangerous game.
I like super boring static charts, such that I can completely decouple the presentation from whatever business API was used behind the scenes for the data that's displayed. Take a super severe Occam's Razor approach to absolutely avoiding any kind of animation or interactive plot for as long as possible, and only if there is some kind of outrageously severe business case that absolutely demands it (which there almost never is) will I resort to designing a way to productionize the widget aspect of the report visualization as well.
For this reason, I now see widgets like from IPython as a huge minefield. It looks pretty, but it's a big trap, and it doesn't add nearly the value everyone thinks it does.
(Just to be clear, I'm only talking about the non-demoware case, which is the kind of case I have always worked with.)
I use widgets for data exploration, model tuning, and interactive plots. Works great in practice! Have a look at bqplot from Bloomberg used in production.
I avoid widgets for data exploration, which should be written from the start in a well-tested and library-focused sort of way even when it's ad hoc.
Model tuning absolutely should not be done with something like a widget. In fact, I could see how that could easily lead to unreported issues with multiplicity of testing when someone's just sliding around a slider and seeing what looks best, oblivious to the statistical consequences. Model tuning is better handled by having a separate sort of model specification file, in which parameters, data cleaning steps, etc., have to be registered ahead of time before any code is executed whatsoever. That allows full transparency and reproducibility: you can even map model specs to unique IDs and backtrack to which analysts executed the job to fit that model, how many times it was updated, etc. ... whereas someone monkeying around in a notebook with a slider bar, that's absolutely not OK. It would be OK for demoware, but never ever for serious production cases.
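The spec-file idea above can be as simple as a declarative dict hashed into a stable ID before any fitting runs, so every result traces back to an exact, pre-registered configuration. A minimal sketch with made-up field names:

```python
import hashlib
import json

# Hypothetical model spec, registered before any code executes.
spec = {
    "model": "gradient_boosting",
    "params": {"n_estimators": 200, "max_depth": 3},
    "data_cleaning": ["drop_nulls", "winsorize_1pct"],
    "features": ["x1", "x2", "x3"],
}

def spec_id(spec):
    """Deterministic ID: the same spec always maps to the same ID."""
    blob = json.dumps(spec, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

model_id = spec_id(spec)  # log this next to every fitted artifact
```

Logging `model_id` with each run gives exactly the traceability described: you can map any fitted model back to who ran it, with which parameters, and how many times the spec changed.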
Avoiding widgets works great in practice. When I was working in quant finance, this was a reason why we heavily decoupled all data presentation code from all data exploration code.
We also realized that the interactive plots just add nothing 99% of the time and are virtually never worth the headache. Just use static plots until there's a serious use case that truly requires interactivity. Above all, don't use interactivity just because it's the shiny new thing.
bqplot, Bokeh, d3py, etc., these are great engineering projects that just unfortunately don't have pragmatic use cases and are generally adopted out of hype and an obsession for the new more than for pragmatism.
After working with lots of these tools, we just began to realize that e.g. mousing over line charts or maps and being able to click to drill down into data points was simply not helpful. Streaming plots do have some applications when you have to view real-time dashboards, but in those situations they get abused and used incorrectly, mostly due to bad dashboard ergonomics (cough Bloomberg), so there can actually be a cost-effectiveness argument for avoiding the streaming dashboard anyway, kind of like the cost-effectiveness arguments for reducing alert menus to avoid alert fatigue. The cognitive foibles of the user matter to the design!
In a Tufte sort of sense, it just did not actually aid in perceptual understanding. It's more of a "let's do it cause we can" thing than a "let's do it because it actually offers actionable insight" thing.
Hey, Tufte also criticizes Excel, which is still the most widely used tool for analyzing data and making plots, static or otherwise. Engineers love it. Anyway, I will stop my replies here.
The notebook is still the primary interface for the final version of your analysis. But during the iteration and experimentation phases, the other parts of JupyterLab are really helpful! Before, all of this was not so nicely integrated.
But from what I can tell, it still doesn't look integrated. It looks like separate tools all glued on the same webpage.
Now I have yet a separate terminal, a separate file manager, a separate Python REPL, ... How is this any better than just keeping those applications open and switching between them?
I really hope this isn't just a poorly-implemented window manager inside a web browser.
For some plugins integration just means visual integration (i.e. window-manager integration). And, to be clear, this one is not poorly-implemented. It's based on a very powerful web-application toolkit called PhosphorJS written by very experienced UI developers.
For other plugins there is very nice integration so that you can have interactions in one plugin update outputs in another and vice-versa.
The ease of creating an ecosystem of web applications connected to your workflow is one of the powerful aspects of the new platform.
I would want integration into my existing workflow, not integration that replaces my workflow with something completely different.
For me, this means using emacs-ipython-notebook. This brings the ability to interact with a running IPython Kernel into the text editor that I use daily. It's the best of both worlds: interactive plotting+notebooks+rapid development assistance from IPython, and thanks to Emacs, all the keybindings are what I expect and I'm already familiar with how to extend the environment to suit my needs. EIN has been transformative for me.
Perhaps my needs don't match the needs of others. For most people, I imagine the Jupyter-lab style of environment that's self-contained in a browser would in fact be a step up from the ad-hoc notepad++ windows and PuTTY sessions used previously.
I love how notebooks allow the mixing of code and output and support incremental development by letting you choose which cells to execute. But I find the semantics horrible. Each time you execute a cell, you do so in an environment that depends on your entire history and cannot be figured out by simply reading the notebook. I wish for an environment with the same semantics as a script, but which would snapshot the environment at the entry to each cell, so that when a cell is modified, execution does not have to restart from the beginning. Even better if downstream data dependencies are tracked, so that after modifying and re-executing a cell we know which downstream results have become stale. Does such an environment exist?
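I don't know of a mainstream notebook that does this, but the snapshot half of the idea is easy to sketch: deep-copy the namespace at each cell boundary, and restore it before re-running a modified cell (memory cost aside). A toy illustration, not any real notebook API:

```python
import copy

class Checkpoints:
    """Snapshot a cell's input namespace so a modified cell can be replayed."""
    def __init__(self):
        self._snaps = []

    def save(self, ns):
        self._snaps.append(copy.deepcopy(ns))
        return len(self._snaps) - 1

    def restore(self, idx):
        return copy.deepcopy(self._snaps[idx])

ck = Checkpoints()
ns = {"data": [1, 2, 3]}
tag = ck.save(ns)        # entering "cell 2": snapshot its inputs
ns["data"].append(99)    # cell 2 mutates its input in place
ns = ck.restore(tag)     # rewind: cell 2 can now be edited and re-run
```

The staleness-tracking half would additionally need a dependency graph between cells, which is where it stops being a few lines of code.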
I think this persistent state is one of the main advantages of the notebook environment, or the Matlab workspace, which I guess it was inspired by. It allows you to quickly try alternative values for certain variables without having to re-calculate everything. Saving snapshots would not be feasible if the project contains large amounts of data. If you want to reset everything, just "run all" from the beginning, or use a conventional IDE with a debugger.
And that came from Emacs and old Lisp environments---and perhaps something yet earlier?
As late as 2000, this was the single biggest advantage and single biggest impediment to new programmers in MIT's 6.001 lab: a bunch of nonvisible state, mutated by every C-x C-e. The student has tweaked two dozen points trying to fix a small program, and re-evaluated definitions after many of them, but maybe not all. The most straightforward help from a teacher is to get the buffer into a form such that M-x eval-region paves over all that, sets a known environment of top level definitions, and---more than half the time---the student's code now works.
I have similar concerns about much of Victor's work, for the same reason. Managing a mental model of complex state is an important skill for programming, but it's best learned incrementally over long experience with more complex programs. These very interactive environments front-load the need for that skill without giving any obvious structure to help the student learn it.
Contrast Excel and HyperCard, which have no invisible state: you can click and see everything.
But you cannot recalculate if your calculation has trashed your inputs. And if it hasn't, then the snapshot does not impose a cost. If you are willing to forego the opportunity to replay in order to save memory, just put the producer and consumer in the same cell.
What you describe looks like reactive programming and lazy evaluation; Mathematica supports this through the notion of Dynamic, which is actually how Manipulate is implemented.
I love Jupyter notebooks, I just wish I could use them in a dedicated program instead of having to run a server and using a browser based client. It feels hacky and I just prefer native apps for coding and the browser for reading documentation and similar. Even an Electron based program that could be associated with notebook files and hide the server-client model would make it nicer to use. At the moment I like to open notebooks in a tab in the browser built into Visual Studio, with the editor open in a tab next to it. With some custom CSS this can be made to look pretty nice, although it's far from perfect.
There's of course value in being able to share notebooks and view them in browsers, I'd just prefer a more native experience when editing them. Maybe an extension for Visual Studio Code would be a good idea?
Holy cow I never knew that existed! My life could have been so much different the past few months had I known.. I bet I even skimmed past it on HN without even looking.. Shame on me
Ditto! Amazing. Seems the inverse of notebooks where it is a notebook with predominantly text and a box with code and results. I'm going to take Hydrogen for a spin.
I still rely heavily on Mathematica, since the notebook interface won me over years and years ago. I think the next Mathematica paradigm to be redone in JupyterLab is easy connections to data without all the connection string fuss. You can just put in some basic phrases in Mathematica, and you are connected to the data, albeit curated by Wolfram, but still simple and immediate.
I can link to Mathematica from NetLogo and other programs I use too.
In what sense? I only get glitches in Mathematica when I am hitting extremely huge curated data from Wolfram and trying to do something at the same time.
I have only used Jupyter notebooks for a page or two, and mostly toy problems, so I can't say I have stress tested them.
I do like Hydrogen, though.
Not quite the cell-by-cell, markdown experience, but you get inline resizable graphs, !shell commands, etc. More integration is on its way. Same with VS Code.
PTVS has had IPython support for a long time, but it's not exactly like the notebooks, the IPython REPL in PTVS is more akin to Jupyter QTConsole. Better than nothing for sure, but not enough if you need or want to work with notebooks.
Like I mentioned, I've experimented with opening a notebook in a browser tab in VS, and I'd say it's passable while waiting on a real solution [0].
Somebody mentioned that PyCharm has notebook support, and that looks closer to what I want than just a REPL [1].
I've been using VS for years, from before I started using Python, and I still prefer to use it if I can. If I'm not mistaken, you work on PTVS and left a message here around eight months ago about PTVS coming to VS Code. I really like and appreciate PTVS; is there any way you can update us on a time frame for when these updates to VS/VS Code will come, and what you have in the pipeline?
By the way, a lot of times I prefer the folder based projects in VS Code to the solutions in VS. I wouldn't mind if you could bring that to regular VS too.
I've also used it for the past months (latest version at MELPA), and haven't had any problems. Much preferable to the jupyter browser interface, no browser keybindings etc. web page stuff getting in the way.
Solution to a related but different problem: I have too many projects with several notebooks, which makes window management a pain (two dozen browser tabs in a browser I would like to use for other things). My solution on OS X is to use the ancient-but-venerable Fluid app as a wrapper for each project, so I can treat it like a native Mac application. One can open as many new windows in Fluid as you like, so one can have cluster info, another netdata, another terminal, and any number of notebooks.
That's a cool idea. If you launch the browser in application mode, does it do the same thing (i.e., appear to the OS as a separate application with window grouping?)
We hope to support tearing out tabs into new child windows, so you can have multiple OS window panes associated with the single jupyterlab page that is running.
(disclaimer: I'm one of the authors of the scipy talk the post is about...)
First: JupyterLab is definitely still alpha-level software, and is undergoing very rapid iteration.
I notice that a lot of comments are missing one of the most important ideas behind JupyterLab. It's built on a very flexible plugin framework (provided by PhosphorJS), and all components are provided as plugins. JupyterLab essentially is a reference set of components, but we really see people creating their own components and a rich 3rd-party ecosystem. This also sets a foundation for exploring other ideas for what a web notebook might look like, etc., and provides an easy way to distribute such a component. More than just extending via components, we've paid a lot of attention to having a good keyboard shortcut system, a good command/command palette system, etc. The video of the talk shows some simple examples of how JupyterLab can be extended: https://www.youtube.com/watch?v=Ejh0ftSjk6g&list=PLYx7XA2nY5....
Also, I see several people talking about having a UI that is really focused on notebooks. We definitely plan on an easy way to maximize a single pane (which might be a notebook, etc.) to be able to focus on one thing.
One thing I missed is a more detailed discussion of the Command palette. Will it also be extensible? Can plugins add their own commands to it for example?
Jupyter started from IPython, originally written mostly in Python and for Python. But the project was so successful that it was extended to other languages, first via cell magics and then via the kernel system. The team now works mostly in JavaScript and TypeScript, except for the ipython sub-project. Although it's an obvious choice for schools, research, analytical work, and publications, I still really wish for more adoption in corporate environments, which are stuck on proprietary tools.
I like the idea of executable cells for developing.
It's been a couple of years since I played around with Jupyter notebooks, but then I got frustrated quite soon:
Stepping outside my preferred editor (vim) was annoying; likewise, I had no idea if (how?) I could export the code out of a notebook to a regular .py text file (without clumsily copy-pasting each cell). And the .ipynb files themselves seem quite terrible to manage in Git: write something, and if you want to 'git diff' the changes, sometimes what you see is okay, sometimes it's not very... easily decipherable. (Text-encoded image/pngs?!)
My preferred workflow with R is to open the files I'm working with in both RStudio (with vim keybindings) and vim.
Maybe if I strictly restricted the use of notebook cells to calling scripts and functions written in external files, and used the notebook as a whole just for presentation purposes. But then it wouldn't be that much of an improvement over regular IPython. Any thoughts?
There's a one-line terminal command to convert: something like `jupyter nbconvert --to script notebook.ipynb`. It works okay out of the box, and has some meager customizability.
Notebook provides keyboard shortcuts, also accessible and searchable from the command palette. There is also ipymd to write notebooks from your editor using markdown. PyCharm supports notebooks, and maybe you can use IdeaVim? You can export a notebook to a .py file from the File menu. For git, just don't save the output, which can be cleared from the Cell menu. See the ipywidgets repo for examples of notebooks without outputs.
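The output-clearing step can also be scripted, since .ipynb is just JSON. A dependency-free sketch (tools like nbstripout do this more thoroughly, including metadata):

```python
import json

def strip_outputs(nb_text):
    """Drop outputs and execution counts so .ipynb diffs stay readable in git."""
    nb = json.loads(nb_text)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, indent=1)

# A tiny hand-built notebook to demonstrate on.
raw = json.dumps({
    "cells": [{"cell_type": "code",
               "source": ["1 + 1"],
               "outputs": [{"output_type": "execute_result", "data": {}}],
               "execution_count": 7}],
    "metadata": {}, "nbformat": 4, "nbformat_minor": 5,
})
clean = strip_outputs(raw)
```

Wired into a git filter or pre-commit hook, this keeps text-encoded images and execution counters out of your diffs entirely.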
It looks nice - I kind of wish a lot of these features would have been put into jupyter itself, though. The lack of configurability in jupyter drives me insane. Sure, the configs are editable, but the lack of proper documentation makes it incredibly difficult.
Things like turning off autocomplete and limiting Jupyter's ability to freeze my browser after an accidental infinite loop would go miles for me. These days I've just gone back to vim...
Have you tried stopping or killing the kernel? I found the Jupyter user group on Google very responsive when something is not clear or needs to be extended. Google search on Jupyter is pretty bad since it still points at old IPython docs, although they do have updated docs for Jupyter.
Of course. The problem is, a lot of the kernel halt requests don't seem to make it through 90% of the time, and restarting the kernel gets annoying when using it on larger datasets. Though it has forced me into writing easily re-runnable code, which is kinda good.
Too much thread-nesting here :) I had the same problem even in Mathematica with long-running cells. Fundamentally it is a hard problem to solve in a notebook, compared to a terminal. Here is more explanation: https://mail.scipy.org/pipermail/ipython-user/2013-July/0128...
Any thoughts on the infinite loop problem, though? I've seen tons of threads for it in stackoverflow/github and not much ever comes from them. Lots of dev he-said-she-saids, though. Pretty big problem for a lot of us.
So RStudio in a browser? Jokes aside, this looks great. The biggest frustration I've had with the notebooks is the organization of code. Nice to see that become a bit more compartmentalized from the execution window.
Totally agree. My larger notebooks get pretty ugly and I have to spend a fair bit of time cleaning them up. But that's probably because I am not the most organized person ever.
That is why it is better to keep a Jupyter terminal or qtconsole around, connected to the same kernel, for quick and messy experimentation before pushing it to the notebook.
As a non-professional user, I really like the existing everything-in-the-same-space-like-a-textbook-but-of-course-you-can-hide-libraries-and-modules layout. Is it still there somewhere?
It seems to me that Wolfram Research missed a huge opportunity here with regards to Mathematica notebooks and the Wolfram Cloud. They either didn't see, or ignored, the momentum that open source and the web can build.
I love the notebook paradigm, the executable cells, and the shareability of the whole thing (from IPython notebooks). Those still seem to be there, but the look and feel takes the system a step back towards a 90's-style IDE, with the tabs and file management.
That said, probably still a good system on balance.
Rodeo is python-only last time I checked. Also it is missing notebooks, widgets, extensions, rich outputs, server architecture, multi-user support. Rodeo is closer to spyder than jupyter.
I see lots of "this is turning into RStudio" comments, but that's kind of the point of JupyterLab. Brian Granger was on a podcast recently[1] and spoke about how he feels RStudio has been great for the R community and wants to proliferate that for other languages.
I think it's easy for us to forget we are the developer minority. Many, many developers (especially hobbyists and researchers) live and love the safety and guiderails of an IDE.
I think this misinterprets what people here are saying.
I suspect that many people here like IDEs, but think that notebooks are a different thing that is valuable in itself. I feel this way myself, but provided the notebook-style interface remains, I don't see that exploring other options is a bad thing.
One thing people use it for is ephemeral computations that they don't need/want to be saved, so they don't want to bother with dealing with a notebook file.
Another thing we could support in the future is easily switching kernels mid-stream in the console. Maybe that makes sense; it did come up in conversation last night here at SciPy.
It supports these languages: Python, Python3, R, JavaScript, SQL, C++, Scala/Spark, Lua/Torch, Java, Julia, Groovy, Node, Ruby, HTML, and Clojure.
It has an experimental native version: https://github.com/twosigma/beaker-notebook/wiki/Electron-Be...
Talk at SciPy 2015: https://www.youtube.com/watch?v=iMPfLz6kKv8
[0] https://news.ycombinator.com/item?id=8364237