Sure, but the time delta is inconsequential. Why? Because you're going to run that program anyway. At the very least you'll manually test it to see if it works, so the error will be caught. You spend delta T to run the program either way: you catch the error at the start of delta T or at the end of it, but you've spent delta T regardless.
Additionally, something like your example code looks like data science work, as nobody loads huge databases into memory like that. Web developers will usually stream such data or preload it for efficiency. You'll never do this kind of thing in a server loop.
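To be concrete about what I mean by "streaming" versus loading everything: here is a rough sketch, assuming a made-up SQLite table ("events") and column names that are purely illustrative, not anything from your example:

```python
import sqlite3

def stream_rows(db_path="example.db", batch_size=1000):
    # Rough sketch: iterate rows in batches instead of pulling the whole
    # table into memory at once, so memory use stays flat.
    # (Database path, table, and columns are made-up placeholders.)
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute("SELECT id, payload FROM events")
        while True:
            batch = cur.fetchmany(batch_size)
            if not batch:
                break
            for row in batch:
                yield row  # hand each row to the caller as it arrives
    finally:
        conn.close()

# versus the "load it all at once" style your example implies:
# rows = conn.execute("SELECT id, payload FROM events").fetchall()
```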
I admit it is slightly better to have type checking here, but my point still stands. I'm talking about practical examples where code usually executes instantly. You came up with a specialized example where the code blocks for what you imply to be hours, and it has to be hours for that time delta to matter; minutes of extra execution time are hardly an argument for type checking.
Let's be real: you cherry-picked this example. It's not a practical one, unfortunately. Most code executes instantaneously from the human perspective, and code that blocks to the point where you can't practically run a test is very rare.
Data scientists, mind you, from the ones I've seen, typically don't use types in the little test scripts and model building that they do, and they're the ones most likely to write that kind of code. It goes to show that type checking gives them relatively little improvement to their workflow.
One other possibility is that expensive_computation() lives in a worker processing jobs off a queue. Possible, but not the most common use case. Even then, your end-to-end or manual testing procedures will likely load a very small dataset, which in turn makes the computation fast. Typical engineering practice and common sense lead you to uncover the error WITHOUT type checking being involved.
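To illustrate that point, here is a rough sketch; the body of expensive_computation() and the particular bug are stand-ins I made up, not the actual code under discussion. The idea is that a manual run on a tiny payload finishes in milliseconds and surfaces the same mistake a type checker would flag:

```python
def expensive_computation(records):
    # Stand-in for the hours-long job; cost scales with input size.
    total = 0
    for r in records:
        total += r["value"] * 2
    return total

def handle_job(payload):
    # Hypothetical bug a type checker would catch: passing a single dict
    # where a list of record dicts is expected.
    return expensive_computation(payload)

if __name__ == "__main__":
    # Manual test with a tiny dataset: runs instantly, and the mistake
    # raises a TypeError immediately, no type checker involved.
    tiny_payload = {"value": 1}
    handle_job(tiny_payload)
```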
To prove your point you need to give me a scenario where the programmer won't ever run their code, and that scenario has to be quite common for it to count as practical, because that's my thesis. Practicality is the keyword here: types are not "practically" that much better.