
My point here is that static types didn't do much to improve C++. We should be focusing on what made C++ bad. The things that made C++ bad, and the fixes for those things, are what make Python good.

I'm saying type checking is not one of those things.




Well, I personally find Python much more pleasant to code in when using type annotations and MyPy. Have you tried that?


Of course. I'm a Python guru. I know Python's type annotations inside and out, and I'm strict about types when writing Python.

That's why I know exactly what I'm talking about. I can say, without bias, that from a practical standpoint the type checker simply has you running the "python" application less and the "mypy" application more.

Example:

   def addOne(x: int) -> int:
       return x + 1

   addOne(None)
If you run the interpreter on the above, you get a type error. Pretty convenient: you can't add one to None.

But if you want type checking, you run mypy on it, and you get the SAME type error. They are effectively the same thing: one error happens at runtime, the other before runtime. No practical difference. Your manual testing and unit testing should give you practically all the safety and coverage you need.

The keyword here is "practically." Yes, type checking covers more, but in practice not much more.
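As a sketch of the runtime half of that comparison (minimal and self-contained; it's the same call mypy flags statically):

```python
def addOne(x: int) -> int:
    return x + 1

# mypy rejects the call below before the program ever runs;
# the interpreter raises the equivalent error at runtime instead.
try:
    addOne(None)
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```

Either way the bad call is reported; the only question is when.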


> No practical difference.

For a simple example like this, no. But consider this:

   def add_one(x: int) -> int:
       return x + 1

   data = load_huge_database()
   expensive_computation(data)
   add_one(None)
MyPy will show you the error instantly. Python, on the other hand…


Sure, but the time delta is inconsequential. Why? Because you're going to run that program anyway. At the very least you'll manually test it to see if it works, and the error will be caught. You spend delta-T time to run the program; either you catch the error at the end of delta-T or at the beginning of it. Either way, you spent delta-T.

Additionally, code like your example looks like data science work; nobody loads huge databases into memory like that. Web developers will usually stream such data or preload it for efficiency. You'd never do this kind of thing in a server loop.

I admit it's slightly better to have type checking here, but my point still stands. I'm talking about practical examples, where code usually executes instantly. You came up with a specialized example where the code blocks for what you imply to be hours. It has to be hours for that time delta to matter, because minutes of extra execution time is hardly an argument for type checking.

Let's be real: you cherry-picked this example. It's not a practical one, unfortunately. Most code executes instantaneously from a human perspective, and code that blocks so long you can't practically run a test is very rare.

Data scientists, mind you, from the ones I've seen, typically don't use types in the little test scripts and model building they do, and they're the ones most likely to write that kind of code. It goes to show that type checking gives them relatively little improvement to their workflow.

Another possibility is that expensive_computation() lives in a worker processing jobs off a queue: possible, but not the most common use case. Even then, your end-to-end or manual testing procedure will likely load a very small dataset, which in turn makes the computation fast. Typical engineering practice and common sense uncover the error WITHOUT type checking being involved.
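That small-dataset testing path can be sketched like this (hypothetical names; the computation is a stand-in whose cost scales with input size):

```python
import queue

def expensive_computation(data: list) -> int:
    # Stand-in for the slow step; a tiny payload finishes instantly.
    return sum(x * x for x in data)

def add_one(x: int) -> int:
    return x + 1

def worker(jobs: queue.Queue) -> int:
    data = jobs.get()
    return add_one(expensive_computation(data))

# Manual test with a small dataset: the run reaches add_one almost
# immediately, so a type error there would surface without mypy.
jobs = queue.Queue()
jobs.put([1, 2, 3])
print(worker(jobs))  # 15
```

With a three-element payload the whole pipeline runs in milliseconds, so the manual test exercises every call site.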

To prove your point you'd need to give me a scenario where the programmer never runs his code at all, and that scenario would have to be quite common to count as practical, because that's my thesis. Practicality is the keyword here: types are not "practically" that much better.




