When you say "alter code and re-compile on the fly," do you mean continue debugging without stopping the app and re-running? Because if so, that's a terrible way to debug. You now have state that may not be possible to achieve with the new binary you've made, and you may be debugging something that won't ever exist in reality. And it may be very hard to tell that's the case. That doesn't sound very useful. It sounds very dangerous.
I find that's exactly where the debugger is most useful. I know I'm trying to reach or examine a state that my binary can't achieve. That's the problem. The debugger lets me poke around to figure out exactly what's wrong with my state, which is faster and more immersive than recompiling and rerunning every time. Then I fix the binary to match.
It occurs to me now that's essentially using the debugger as a REPL, but with access to a whole runtime's worth of external state. That's not a bad tool to have in your box.
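For anyone who wants a taste of that "debugger as REPL" idea outside Smalltalk, here's a rough sketch using nothing but Python's stdlib pdb (the function and data names are invented for illustration). pdb's `interact` command drops you into a real interpreter over the paused frame's locals, so you can poke at live state before deciding how to fix the source.

    import pdb

    def reconcile(orders, payments):
        # Suppose the balance comes out wrong and we want to inspect why,
        # with all the runtime state still in hand.
        balance = sum(o["amount"] for o in orders) - sum(payments)
        pdb.set_trace()   # pause here with orders/payments/balance live
        return balance

    reconcile([{"amount": 120}, {"amount": -5}], [100])

    # Sample session at the (Pdb) prompt:
    #   (Pdb) p balance                            # inspect the suspect value
    #   (Pdb) interact                             # full REPL over this frame's locals
    #   >>> [o for o in orders if o["amount"] < 0]
    #   >>> # experiment until the root cause is clear, then fix the source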
And to be sure, this is exactly why Jupyter is so powerful for experimentation, especially when doing data science. When you can run lines independently, alter them, and re-run them while keeping all the state from the previous lines, it's so much better than needing to re-run from the beginning. Of course, you can shoot yourself in the foot if some of those lines are like (i = i + 1) and you end up incrementing i by the number of times you've run the code rather than the number of times it appears in the code. But if you name your variables sanely and treat them as const, you can easily avoid this.
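To make that foot-gun concrete, here's a minimal sketch of the cell re-run pitfall and the naming discipline that avoids it (hypothetical cell contents, not anyone's real notebook):

    # Cell 1 -- run once:
    i = 0

    # Cell 2 -- mutates i every time it's re-run, so after three runs i == 3,
    # not the 1 you'd get from a clean top-to-bottom execution:
    i = i + 1

    # Safer pattern: give the derived value its own name and treat the input
    # as const, so re-running the cell always recomputes from the same state:
    runs_completed = 0              # Cell 1
    next_run = runs_completed + 1   # Cell 2 -- idempotent on re-run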
The question at the core here is: is a debugger meant for experimentation to find the (often subtle) root cause of a defect, or is it meant to be a read-only window onto state during execution?
If you're in a codebase composed largely of side-effect-free functions or well-encapsulated object-oriented code, it's a very good way to debug. I had great success debugging this way, and even doing new development this way, in Smalltalk environments for over a decade.
On the other hand, if your codebase is full of side effects and doesn't have good encapsulation (perhaps there's a lot of fiddling with globals), then you're going to have a bad time. But to me this isn't because the debugging method is bad. To me, it's because your codebase is designed with lots of tight coupling and side effects. You have an architecture that makes it harder to reason about, debug, and refactor your code. This isn't just me spouting off. I'm basing this on many years of experience. And yes, I saw both kinds of Smalltalk code, and the effect is exactly as I described. Guess which codebases were more productive?
And they still keep a tiny bit of it in the "Java Browser" perspective, which is basically the standard way Smalltalk presents code navigation.
Pretty much par for the course for those days. Converting Smalltalk apps to Java ones usually yielded horrible Java apps, because Java was very immature, to be frank. While it was still VisualAge for Java, it was considered one of the best Java IDEs of the time.
Well, I have to use OpenGL for my development, and it's a giant state machine that works mainly via side-effects. It's notoriously frustrating to work with, but that's life. I realize not all code works that way, but I'd bet the majority of code does. So I stand by my statement that it's a terrible way to debug, maybe with the caveat "unless you can work with only pure code and no state."
I suspect your sample is a bit skewed.
The Smalltalk class library wasn't stateless. It certainly wasn't pure. Really, code only has to be "pretty good" for such techniques to work well.
I believe Visual Studio does this by simply rewinding to the function entry point which addresses most of the concerns you've raised. A debugger is just an aid to reasoning about your code, not a substitute for it.
You might make the same argument against unit tests, or little hacks you write to separate out a problematic piece of code from an even more complicated context.
If you've never written code in a live debugger session, moved the execution point back up, and immediately run over the code you just wrote, you have a crappy debugger. VB6 could do this, Smalltalk does this, it's not dangerous, it's damn productive.
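If you mostly live in Python, pdb's `jump` command is the closest stock equivalent I know of: it can't hot-swap edited source the way VB6 or Smalltalk can, but it does let you move the execution point backwards within the current frame and run a region again after patching state by hand. The function below is just an invented example:

    import pdb

    def compute_discount(price):
        pdb.set_trace()
        rate = 0.05                        # oops -- meant 0.10
        discounted = price * (1 - rate)
        return discounted

    print(compute_discount(200.0))

    # Sample session at the (Pdb) prompt:
    #   (Pdb) next                  # runs rate = 0.05
    #   (Pdb) next                  # runs discounted = ...
    #   (Pdb) p discounted          # 190.0 -- not what we wanted
    #   (Pdb) rate = 0.10           # patch the live state
    #   (Pdb) jump 6                # move the execution point back to the
    #                               # `discounted = ...` line (line 6 of this file)
    #   (Pdb) next
    #   (Pdb) p discounted          # 180.0
    #   (Pdb) continue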
VB6... got so much hate, but folks just don't seem to see how useful its debug mode was.
Oh sure... VB6 and good OO design and modern practices (unit tests, dependency injection, etc) don't mix. But... if you need to sit down and "hack out" a bunch of working code quickly it was pretty hard to beat.
Python with the VB6 ide/debugger experience would likely take the scientific computing field by storm.
I miss it... especially since I've never been able to get "edit & continue" to work at all in Visual Studio.
You can do 'proper' OO, unit tests, and dependency injection with VB6; it's just not as easy as it could be. Most developers didn't, partly because unit tests and dependency injection weren't as popular at the time, but also because (with 'proper' OO and dependency injection) many apps were simple enough that these things were perceived as being of little importance.
When I last wrote some VB6, I was criticised by a colleague for 'over-engineering' because, to build a cancellable progress dialog, I used a (very simple) observer pattern. He didn't understand what it was or how it worked - and would have preferred to simply block the entire UI while a process ran (without the ability to cancel, even). To him, even basic OO ideas were pointless and dangerous over-'designing'.
If your application was crap, it could also remain incredibly simple in terms of design, though inevitably the multi-thousand-line methods and hacks upon hacks led to completely non-understandable, buggy, and brittle code. So I'm sticking to my 'enough design' principles!
One more aside: This developer complained that there was a bug in the VB6 IDE because it wouldn't let him add any more code to a file - he'd written such a huge code file that he'd actually hit the IDE's limit. I tried talking about modularisation, refactoring, etc... then just gave up.
VS does this, and I've used .NET since its start. It's useful now and then, but a REPL is far more useful. Considering the cost of Edit-and-Continue, I'm not sure it's worth it. I'd have much preferred MS to spend the effort on improving language tech or working on REPLs.
A REPL is OK; workspaces, à la Smalltalk, are much better. The best analogy is a SQL editor: you can simply write anything and execute it in place. The notion that one needs to enter one line at a time, a.k.a. the REPL, is quaint; not something to really be desired except in languages that have neither.
REPLs don't have to be one line at a time. F# interactive and C# interactive work well with Visual Studio. And at worst, it's a simple UI issue. Like various SQL UIs, nothing stops a REPL from being a big textbox then letting you select text and "run".
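For what it's worth, the stdlib makes the "big textbox" version pretty trivial to sketch. This isn't how the Visual Studio interactive windows are built, just an illustration that a read-eval-print core is indifferent to whether it's fed one line or a selected block (the `selection` string is a made-up stand-in for whatever the user highlighted):

    import code

    repl = code.InteractiveInterpreter()

    # Pretend this is the block of text the user selected in a big editor pane.
    selection = """
    def area(w, h):
        return w * h

    print(area(3, 4))
    """

    # Compile and run the whole selection in the interpreter's namespace,
    # exactly as if it had been typed at a prompt.
    repl.runcode(compile(selection, "<selection>", "exec"))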
Do Smalltalk workspaces do something fundamentally different?
Well, in Smalltalk everything everywhere is an execution environment; anywhere you can type, you can highlight and execute code. But it seems we have different definitions of REPL: to me a REPL is a command line; a textbox isn't a REPL, it's a workspace. Sure, they're both code execution environments, but working with them is vastly different.
When using Common Lisp with SLIME, you can type your code in the buffer, select it, and have it eval'ed. The result will be printed in the REPL buffer instead of next to the cursor, but that's a pro to me.
When I tried out Pharo, I was pressing backspace all the time because the outputs are not valid code and thus would break the highlighting.
Anyway, a REPL is just that: a read-eval-print loop. Whether you interface with it using a command line or by other means is just an implementation detail.
In Smalltalk, most of the class library, and hopefully your own codebase, is either side-effect free or at least well encapsulated. The classic trip-up in Smalltalk was with Streams. An experienced Smalltalker would make note of where the code was fiddling with Streams, then go down the stack until the debugger had effectively gone back in time to before the Stream objects were instantiated. Try that for a few seconds, then restart the app. If your environment was set up intelligently, none of that would take very long.
I believe changing data structures gives a "you must restart to make this change" when resuming debugging. It's mainly for code flow type bugs rather than data structure type bugs.
There is all of this excitement around ideas like Light Table, allowing you to see the output of your code inline as you develop it. Being able to alter code and recompile on the fly in the debugger gives you a very similar experience.
There are some situations where executing the program up to that point takes forever, and you don't want to have to spend however many hours/days/weeks re-executing just for a typo. Scientific computing comes to mind, with its huge simulations.