Postmodern Error Handling in Python 3.6 (journalpanic.com)
145 points by knowsuchagency on Feb 25, 2017 | 146 comments



It's interesting to hear that Guido is now excited by types. A few years ago his opinion was still that types can't catch most problems. I think I will listen to that interview.

Personally I think types are over-hyped for most (mundane) code.

The problem with complicated and/or specific types (which promise to catch more errors) is, in one word, coupling. They create serious dependencies across the whole project or even across project boundaries.

The other problem is: one function author's URL is the next one's string. And all this up- and down-casting usually creates considerable noise. Overzealous typing often paints you into a corner, creating more pain than relief. It's like those totally arbitrary object-oriented, inheritance-based taxonomies which might make some sense in one place in the code but totally break down in the next.

But speaking of primitive types like int and str, or maybe list of int: most function arguments can accept only one of those for the code to make sense. So strategically placed type annotations can help reduce some boilerplate and put up useful barriers for narrowing down errors. Then again, the simple-type cases are also the easy ones - the first thing you'll notice while testing is usually that you put a str where an int was expected.
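For example, a minimal sketch of the kind of simple annotation I mean (the function is made up):

    from typing import List

    def total_length(words: List[str]) -> int:
        # A checker like mypy rejects total_length(42) or total_length(["a", 1]) before runtime.
        return sum(len(w) for w in words)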


> The problem with complicated and/or specific types (which promise to catch more errors) is, in one word, coupling. They create serious dependencies across the whole project or even across project boundaries.

With dynamic typing, you'll still have this coupling, but now it's unchecked and (in practice) undocumented.


Most parts of the code don't actually care about all aspects of the type (or the value), but only that it's an object that they can hand to the next function. Or they only care about a subset of the value's properties (like it being a string, not necessarily a URL).

For example, in Haskell there is the filter function:

    filter :: (a -> Bool) -> [a] -> [a]
    filter _ [] = []
    filter pred (x:xs)
        | pred x = x : filter pred xs
        | otherwise = filter pred xs
It doesn't actually care what (the type of) a is. This is one of the showcase examples of the power of Haskell's typesystem. (And admittedly it works nicely in this case).

But you can have that easily with no typing at all.

    def filter(pred, xs):
        return [x for x in xs if pred(x)]
So, dependencies avoided, and it's not a huge source of bugs. In the typed case, meanwhile, a system is needed to prove that the properties of the inputs come out again (or possibly some function of those properties). That quickly gets quite involved for more advanced cases.

Just look at what average Haskell code looks like, how many language extensions are typically needed, and how crippled the code typically is (libraries dictating exactly what properties the client must be able to statically precompute). Most of the Haskell community seems to agree that the end goal would be a practical system for dependent types (the unification of types and values, i.e. no types at all ;->).

For example, there are some libraries trying to capture the construction and execution of valid SQL queries in the type system. Understanding these libraries no doubt requires a genius-level IQ. But the client code is still basically an unreadable mess. And the libraries are not reusable for cases where the database schema isn't known at compile time. The static type system actually prevents code reuse -- it makes the code really inflexible.


That's actually a great argument for static typing! Let's say your distant input component is changed and in some cases doesn't send lists of lists, but one flattened list. With Haskell, the compiler will immediately notify you of this bug, while with Python, you have to hope that your integration tests are sufficiently exhaustive to catch this, or you'll catch it in production. Whoops.
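To make that concrete (a sketch with made-up names; a checker like mypy would catch in Python what the Haskell compiler catches there):

    from typing import List

    def process_batches(batches: List[List[int]]) -> int:
        return sum(len(batch) for batch in batches)

    # The distant input component now sends a flattened list:
    process_batches([1, 2, 3])  # rejected statically; unannotated Python fails only at runtime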

Like I said, the coupling is still there, but it's hidden by dynamic typing.


The problem you're describing has little to do with static or dynamic typing, and more to do with poor development practices.

Regardless of language used, if a person is changing the interface to something (either in internal code, or by upgrading a 3rd party library) then it's their responsibility to make sure the change is propagated to all the places in the codebase that use the interface. It doesn't matter much if they do that by running the compiler, grepping the code, or running a test suite.


What ends up happening in practice instead is that large projects using dynamic languages are hesitant to change anything like that. So if you didn't know the problem domain sufficiently to guess all the interfaces up front correctly, you end up in a pretty bad place.

In contrast, making such changes when you have types is an order of magnitude easier.

The equivalent would be to have integration tests that exercise the entire system with all its functions and code paths checking for basic data compatibility. Seems like a worthwhile thing to have to me, as it's not at all easy to achieve.

Not to mention you have a tool to answer "if I change X, what will be affected" - even if you are not changing the type. Just pretend that you changed the type of X (e.g. change "name" to "firstName"), and see all the places that don't compile now. So much better than grep!


>What ends up happening in practice instead is that large projects using dynamic languages are hesitant to change anything like that.

I have seen this "code fear" in both statically and dynamically typed languages and fixed it. When I have worked through it, it has been roughly an equal amount of work to solve it in dynamically and statically typed languages.

Integration tests are usually the key to getting over the code fear hump.

>The equivalent would be to have integration tests that exercise the entire system with all its functions and code paths checking for basic data compatibility.

Integration tests that exercise every relevant user story are a prerequisite for avoiding code fear in any language - statically OR dynamically typed.

In addition to this, sprinkling additional type checks around the code helps give confidence that the code is indeed working and gives you additional freedom to refactor (a sketch follows the list below). This is something I do when:

A) I encounter a type-related error that an assertion would have caught.

B) I was previously confused about what type a var is, or I think that just by looking at it somebody else might be confused.
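For example, a minimal sketch of such a check (the function and bounds are made up):

    def apply_discount(price, discount):
        # Strategic sanity checks at a trust boundary.
        assert isinstance(price, (int, float)), "price must be numeric, got %r" % type(price)
        assert 0.0 <= discount <= 1.0, "discount must be in [0, 1], got %r" % discount
        return price * (1 - discount)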

People who have terrible test suites for their code written in statically typed languages are usually insistent that it's best practice to have types checked by the compiler. This means two things about their development practice A) they don't drive development with tests (TDD) and B) they often push unexercised code.


This is a claim I hear often. I don't think it's true. It's possible to have integration tests for every user story, yet still not cover even a fraction of the possible code paths due to an explosion of different possible outcomes and inputs.

The great thing about types is that they're guaranteed to cover all code paths. They can't guarantee certain categories of things, but for the things they can guarantee, they easily provide 100% coverage, something that is impossible with integration tests alone.

They are also a tool to greatly reduce the number of allowed inputs and outputs for most code, which in turn greatly reduces the number of tests that actually need to be written. Runtime type assertions only provide one level of that: if you are sending nonsensical input to a method in an uncommon code path not exercised by your integration tests, the runtime type check will probably fail too late (in production). While runtime assertions can tell you if you call your code wrong at runtime, they can't answer "What are all the places that send incorrect input to this method?" Types can do that.

Not to mention that runtime assertions can be costly, unnecessary and very repetitive if added for every method. Comparatively types are free in terms of both performance and programmer effort, especially in languages with good inference.

In short, tests cannot replace types, just like types cannot replace tests. In the areas where they do overlap, types are a way better, more efficient, more tooling friendly choice.


>It's possible to have integration tests for every user story, yet still not cover even a fraction of the possible code paths due to an explosion of different possible outcomes and inputs.

Which is true in any language.

Having a hierarchy of integration tests helps to offset the combinatorial explosion, as do sanity checks that eliminate invalid code paths. These perform a very similar function to static types, but you don't have to use them in prototype code and you can add them retrospectively in production code.

>The great thing about types is that they're guaranteed to cover all code paths.

Types are something every programming language has.

>They can't guarantee certain categories of things, but for the things they can guarantee, they easily provide 100% coverage, something that is impossible with integration tests alone.

Inserting type and sanity checks can achieve virtually the same guarantees as static typing when you have a comprehensive test suite.

When you don't have a comprehensive test suite, static typing may seem more attractive but it's a false sense of security.

>if you are sending nonsensical input to a method in an uncommon code path not exercised by your integration tests, the runtime type check will probably fail too late (in production).

Given that you have untested code, you have a higher likelihood of bugs. Period. In statically typed languages too. In dynamic languages those bugs are likely to manifest a bit differently. Yay?

>While runtime assertions can tell you if you call your code wrong at runtime, they can't answer "What are all the places that send incorrect input to this method?" Types can do that.

Arguments about best practice that start with "given that I have poor test coverage..." are not especially convincing.

>Not to mention that runtime assertions can be costly

Runtime assertions are costly in CPU time. Static typing is costly in programmer time. CPUs are cheap, programmers are not.

>unnecessary and very repetitive if added for every method.

Oh yes, which is why I don't add them to every method - just methods where I am convinced they will be useful.

>Comparatively types are free in terms of both performance and programmer effort

Static typing never comes for free and it is often highly inappropriate (which is why many statically typed languages have dynamic typing bolted on - e.g. see reflection in Java).

>In short, tests cannot replace types, just like types cannot replace tests.

You were arguing above that static typing was great because it caught bugs in untested code. That certainly sounds like you think it is replacing tests.


> Given that you have untested code, you have a higher likelihood of bugs. Period. In statically typed languages too. In dynamic languages those bugs are likely to manifest a bit differently. Yay?

Yes, test proponents love to say "you are not testing well enough!" It's dishonest, because in practice no system can achieve 100% integration test coverage, so no system is testing well enough :)

> You were arguing above that static typing was great because it caught bugs in untested code. That certainly sounds like you think it is replacing tests.

I never argued that. In fact, I started by saying that what types provide can also be provided by integration tests that have 100% coverage, but that the second one is a lot more difficult to achieve AND doesn't give the nice extra tooling support (automatic refactoring, extra documentation, instant type incompatibility feedback in IDEs).

Indeed, if you can keep your integration test coverage at 100% in your project, you don't need types. Curious though; are you familiar with an open source project that has that?


Also, 2000 called, they want their static types arguments back. Take a look at modern type systems, especially those that are tailored for dynamic languages (TypeScript, core.typed). They've grown in expressive power and type inference. They've also patched some obviously stupid gaping holes: for example, TypeScript with strictNullChecks will prevent "NPE" bugs.
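Python's mypy can express the same check; a rough sketch, assuming strict optional checking is enabled:

    from typing import Optional

    def find_user(name: str) -> Optional[str]:
        return name if name == "alice" else None

    user = find_user("bob")
    user.upper()  # flagged statically: 'user' may be None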


Most architects of large software projects agree, in my perception. As a counterpoint, I figure that something is wrong if changes routinely affect large parts of the code. The idea is that data is "in the interfaces" instead of global. Except for a few "god objects" changes should be pretty local.


Yes, what is wrong is our understanding of the problem domain. Or even the stakeholder's real understanding of the problem domain :)

Maybe we should stop working on types and focus on easy, high quality languages and tools which will enable other professionals to easily solve problems from their domain doing all the programming themselves. And not just languages - heck, it takes 2 hours just to set up a basic React Native stack for a "Hello World" app.


In most cases you will not be able to write your code in Haskell in the first place. Do try to write a database query construction kit that deals with schemata read at runtime (a very reasonable requirement).

Even for a non-genius it's pretty easy to make something that works, with dynamic types. http://jstimpfle.de/projects/python-wsl/main.html

I don't think something like that can be created in Haskell (without resorting to dynamic types).


You always have dynamic typing as an escape. If that's what it takes, then that's an option, but I think in practice you'd be able to keep that sort of purity to a small portion of the program.

Haskell is a research language, which sometimes shows in the libraries. I'm not sure if I'd recommend it for general use; I'm more partial to Kotlin these days. It's definitely an interesting approach, though, and anyone who's designing languages ought to make themselves familiar.


>with Python, you have to hope that your integration tests are sufficiently exhaustive to catch this

If there is a border between code I trust and code I don't trust I put some sort of sanity type-checking to catch this type of problem early. This usually gives me immediate notification of certain classes of bug when a test invokes that code.

I also put other forms of sanity checking in that are not type-specific - e.g. check a file exists before passing it on to a method that will read it.

I am not sure if this style of programming entirely mimics Haskell's main USP in Python, but it sure makes tracking down certain kinds of error much quicker.

Integration tests attack the problem from one direction and strict type checking (whether static or dynamic) attacks the problem from the other direction.

IMHO you have to attack from both directions. Ever increasing integration test coverage is obviously subject to diminishing returns (in terms of bugs caught/investment), but so is ever stricter type systems.


>If there is a border between code I trust and code I don't trust I put some sort of sanity type-checking to catch this type of problem early. This usually gives me immediate notification of certain classes of bug when a test invokes that code.

That's still on a "due diligence" basis, and too late.


>too late.

If you are severely lacking in test coverage. Otherwise no, it's not too late.

Slow compilers that take ~2-10 minutes in statically typed languages give slower feedback than test suites that take seconds in dynamically typed languages. This inhibits development speed.


As an abstract statement, I can only agree. In reality, though, modern compilers can work incrementally. So when you make a change in file A, the compiler will only recompile things dependent on A. In practice this cascades to a small number of file recompiles. The end result is an instantaneous feedback loop.

I get this all the time in Eclipse. Eclipse doesn't really try to do a full compile. It just updates as above. This allows me to see within a few ms if my code compiles. If it doesn't, red line and the reason.

A full compile on about 40k classes without tests (just compile, static checking) with Maven on my 2013 MBP is about 5 seconds. So still pretty fast. Probably about as fast as your dynamic language + tests.

The same is true for hot swapping code. The JVM supports this a bit. Spring Boot supports it more. JRebel supports it entirely. When making a micro service or even a website in Spring Boot STS (Eclipse + Spring features), the run time will restart. Clocked time is 1.2 seconds. Not too bad.

For changes to just HTML/CSS/templates etc., there is no reboot time.


> e.g. check a file exists before passing it on to a method that will read it.

Is this not subject to a race condition though? I.e. you check that the file exists, get a green go-ahead flag to execute your method which will read it, by which time some external entity has caused the file to be removed. So then you handle the inevitable errors that are thrown by the OS when you try to access a non-existent file, making the first check redundant.
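(For reference, the two styles in Python, as a minimal sketch; 'path' and 'handle_missing_file' are stand-ins:)

    import os

    # Look before you leap: racy -- the file can vanish between the check and the open.
    if os.path.exists(path):
        data = open(path).read()

    # Ask forgiveness: no race -- attempt the read and handle the failure.
    try:
        data = open(path).read()
    except FileNotFoundError:
        handle_missing_file()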

Or, to provide another example. A recent project I'm working on acts as a control interface for a process running on the same machine, which exposes an HTTP service. When the application starts up, it can check that the process it wants to control is running, but it is inevitable that at some point, the process will get killed while my application is running, and therefore I have to prepare to gracefully handle that case. I may as well unify everything down to that single path of error handling.


>Is this not subject to a race condition though? I.e. you check that the file exists, get a green go-ahead flag to execute your method which will read it, by which time some external entity has caused the file to be removed. So then you handle the inevitable errors that are thrown by the OS when you try to access a non-existent file, making the first check redundant.

It's not intended to catch every edge case. It's intended to act as a sanity check and a barrier to code executing under known invalid conditions.

Point being that if your code is operating under certain assumptions (e.g. a config file is present), it makes sense to check those assumptions early and fail fast, hard and clearly if they don't hold.

>Or, to provide another example. A recent project I'm working on acts as a control interface for a process running on the same machine, which exposes an HTTP service. When the application starts up, it can check that the process it wants to control is running, but it is inevitable that at some point, the process will get killed while my application is running, and therefore I have to prepare to gracefully handle that case.

This is in essence what I was saying: do some sanity checking and report coherent error messages (i.e. was the server even up?) up front before continuing down a code path.

That way you get a clear actionable error early rather than a mess of weird behavior that means you have to play detective to figure out what went wrong.


These super general, very reusable functions are just a small part of the code. Most of it depends on exactly what the argument objects look like. It all becomes much harder to maintain the moment the project grows bigger or you are new to it. Types make it much easier to figure out what is allowed as a parameter where, and what I am expected to do in order to make the system work. When I switched from a typed to an untyped language, it was like having my hands cut off.


You are making my point: I'm very much in favor of primitive or very simple types. More complex types typically have a bad cost-to-benefit ratio.

And it's quite doable to get by even without these simple-type annotations. Just put some assertions at strategic locations. Of course there is still some amount of debugging involved. On the other hand, development was much cheaper elsewhere, and once it works you can be reasonably confident that an int will not turn into a string overnight.


The problem with assertions is that sometimes they mean people get woken up in the middle of the night for an issue that would have been caught hours/days/weeks before by a statically-typed language. If you don't have 100% test coverage in Python you just can't rule that possibility out. I say this as a full-time Python engineer and I've been woken up at 3am by things that slipped through testing and would have been caught in (say) Java or C#.

Now, it is arguable whether it's worthwhile to take that chance to make development quicker and more flexible. I can definitely see that. Also, if you have to wait a long time for the compiler then that might actually be a loss in aggregate.

It does suck trying to refactor large Python codebases though. Running and fixing the tests over and over until errors go down to zero is no fun.


>The problem with assertions is that sometimes they mean people get woken up in the middle of the night for an issue that would have been caught hours/days/weeks before by a statically-typed language. If you don't have 100% test coverage in Python you just can't rule that possibility out. I say this as a full-time Python engineer and I've been woken up at 3am by things that slipped through testing and would have been caught in (say) Java or C#.

For me, carefully placed assertions combined with a comprehensive (but not 100%) test suite nearly always mean A) catching the exception with a test or B) a clear production error that indicates a problem with the production environment.

I've been woken up plenty by poorly written Java that has had its type system intentionally weakened by terrible coders, meaning lots of detective work. It's a fallacy that dynamic automatically means weak and static automatically means strict.

Assertions aren't just useful in dynamically typed languages. Java would benefit from using them as well.


> You are making my point: I'm very much in favor of primitive or very simple types. More complex types typically have a bad cost-to-benefit ratio.

What is a "simple type" in your eyes? Primitive types plus aggregates and arrays? What if your data structure is actually complex? Do you just stuff it in a hashmap?


Primitive types plus basic containers mostly. Anything else is just ad-hoc stuff (like algorithms) that should be properly encapsulated, so its representation in a static type system has less utility.


In python, you could type it fairly loosely. Which is more readable, and also helps you catch errors. There's not so much coupling.

    from typing import Callable, Iterable
    def filter(pred: Callable, xs: Iterable) -> Iterable:
        return [x for x in xs if pred(x)]

    filter(lambda x: x>1, [1,2,3])
If you tighten it up to only take iterables of ints:

    from typing import Callable, Iterable
    def filter(pred: Callable, xs: Iterable[int]) -> Iterable:
        return [x for x in xs if pred(x)]

    filter(lambda x: x>1, [1,2,3])
    filter(lambda x: x>1, [[1],[2],[3]]) # this is an error

Int is duck type compatible with float. So if you specify float, it accepts ints and floats. If you specify int, it only accepts ints.

    from typing import Callable, Iterable
    def filter(pred: Callable, xs: Iterable[float]) -> Iterable:
        return [x for x in xs if pred(x)]

    filter(lambda x: x>1, [1,2,3])
    filter(lambda x: x>1, [1.01, 2.09, 3.34])

The moral of the story is to be as accepting as you can be unless there is an actual need to be very picky about what you accept. (Perhaps if you're doing integer arithmetic, and your code only works with ints, then require ints).
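And if you do want the fully generic version -- essentially the Haskell signature from upthread -- a type variable gives it to you (sketch):

    from typing import Callable, Iterable, List, TypeVar

    T = TypeVar('T')

    def filter(pred: Callable[[T], bool], xs: Iterable[T]) -> List[T]:
        # pred and xs must agree on T, but T itself is unconstrained.
        return [x for x in xs if pred(x)]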


The very beauty and reason behind filter is that it does not care about its arguments. It is a logical construct - if the predicate is satisfied, keep the item.

The ADTs, type-tagging, predicate logic and duck-typing are superior to, and less cluttered than, homogeneous box-like variables and restricted homogeneous conditionals - the cost of static typing.

Yes, one could shoot himself in the foot, but one could also run instead of crawl.


Sometimes it is more important to not shoot oneself in the foot than it is to run.

These discussions always end up with two antagonistic camps, so let's acknowledge that we can choose the degree of language support for keeping a program's semantics as intended, according to the circumstances. Language selection is an aspect of engineering judgement.


But I nevertheless would argue that this

   (item for item in iterable if predicate(item))
is a vastly superior idiom, and the principles behind dynamically-typed languages, which make this idiom possible, are sound.

    filter(_P, []) -> [];
    filter(P, [H|T]) -> filter(P(H), H, P, T).

    filter(true, H, P, T) -> [H|filter(P, T)];
    filter(false, _H, P, T) -> filter(P, T).
Look, ma, no IFs


>is a vastly superior idiom

What makes it superior? Just that is appears more generic?

It actually isn't. If it's passed an object for iterable that isn't actually iterable, it will break. At runtime.

If it's passed an item inside the iterable that's not compatible with the test the predicate makes, it will fail. At run time.

The only reason it looks more generic is that it does LESS.


It doesn't "appear" generic, it is generic. As generic as it could be. This particular line is also a generator comprehension, which produces a lazy sequence.

Run-time errors and so-called null-propagation are well-known issues, and once code has passed unit tests it is in principle no less trustworthy than statically-typed code. In the first run, yes, there is a chance of a run-time error.

The dynamic languages, notably Lisps, Erlang, Python and Ruby are proved to be superior for quick prototyping and so-called exploratory programming, which has been popularized by pg, along with bottom-up design and the layered-DSLs architecture, in the OnLisp book.

Another classic example is Norvig's Design Patterns in Dynamic Languages which basically ridiculed the whole thing.

There are distinct cultures around MIT Scheme and Common Lisp, Smalltalk and now Python which emphasize expressiveness, minimalism and readability. Such an erudite as you should know this.


>It doesn't "appear" generic, it is generic. As generic as it could be

Only in the sense that accepting arguments it shouldn't accept (and will crash on) is part of the general description of the filtering operation -- which it is not.

If you prefer, it's "more generic" than it should be. It's not a generic description of the "filter" operation, but a generic description of the "process_input_and_produce_output" operation.

>The dynamic languages, notably Lisps, Erlang, Python and Ruby are proved to be superior for quick prototyping and so-called exploratory programming

Where is that proof published? And what methodology did it follow? Since, you know, we are computer SCIENTISTS and all...


> are proved to be superior for quick prototyping and so-called exploratory programming

Not OP and I don't have proof but I always reach for Python for exploratory programming or a quick "let's try and see what happens". Not sure if types are part of it or not, maybe it is just terseness -- the code is closer to pseudo-code.

I hope one day to learn Rust enough to internalize the type and borrowing system so that I can crank things out just as fast (and they'd be more reliable and faster out of the door) but I am not there yet.

> Only in the sense that accepting arguments it shouldn't accept (and will crash on)

Now on that point I think it is not just types. Types are a part of it. C has types and systems written in it crash and segfault all the time. Likewise C++ and so on. Maybe Rust is one new-comer where types and lifetimes would make a difference in practice.

However, since OP mentioned Erlang, I'd say it is possible to write more reliable systems in Erlang (Elixir as well, perhaps) than in C++ or Java or other such typed systems. I have seen it work in practice, and Ericsson's customers have seen systems work for years with 99.9999999% reliability. Now, Erlang has optional types (the more annotations you add, the more benefit you get from them), but in practice isolated heaps, built-in distribution, solid error logging and reporting, and a sane concurrency model make a lot more difference.


Back when I was good at Haskell, I would actually use it for exploratory stuff. At the time I think it mostly came down to which standard library I knew better. Python has a huge stdlib, but I actually find prototyping difficult as I don't have my head in the game in terms of what function on X can I use for this purpose, what's the syntax for that again. Python has quite a lot of syntax / language-specific things to know when you think about it. And I've always found the docs hard to decipher.

When prototyping I think I actually want:

1. Little language-specific knowledge you have to learn and remember when you know a lot of other languages. A lot of this is about API consistency. You should be able to recognise a pattern in how the API is organised, and just go from there. Python still doesn't do list.sort(), it's still sorted(list). WHY???? This shit gets in the way every time.

2. Really REALLY good documentation. This is incredibly important. I should go from googling what I want to do or looking up a method to a concise description and an officially maintained usage example in under 10 seconds. Rust is nailing it most of the time here.

3. Types. Trust me to say that, but autocomplete is one hell of a drug, and so is using it to break out of the write-compile-test loop WAY earlier, usually during the write phase.

4. Not necessarily a big standard library, but at least a very good, frictionless package/dependencies system. Rust is getting better at this, the community is even thinking about 'blessed' packages.


> doesn't do list.sort(), it's still sorted(list). WHY????

Because sorted() is non-destructive sorting - it returns a new list. (list.sort() does exist; it's the in-place variant.)
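A two-line sketch of the difference:

    xs = [3, 1, 2]
    ys = sorted(xs)  # new sorted list; xs is unchanged
    xs.sort()        # sorts xs in place and returns None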


> but autocomplete is one hell of a drug,

Ah good point. I do remember that when doing C++ and Java.


> what methodology did it follow?

    http://erlang.org/download/armstrong_thesis_2003.pdf
    http://norvig.com/java-lisp.html
    http://arclanguage.org/
    http://old.ycombinator.com/viaweb/
Yes, 'proved' was too strong a claim. A few exceptionally good systems have been produced, notably Symbolics CL, Smalltalk, Erlang and Clojure, which, after removing all the hype and snowflakery, is a remarkable thing.


No, "in principle" is exactly when it's a lot less trustworthy than the statically-typed one. Because in principle you could execute some code (untested; data-driven; "is_testing?function(x){return true;}:1.5"; etc.) that passes a non-function instead of a function, and this code would be considered valid (until it goes pop at runtime). But a language that checked this property statically would give an error before execution even started.


> But a language that checked this property statically would give an error before execution even started.

Like C or C++ does.

But you are right - I shouldn't use 'in principle'. It was rather a decorative idiom here, because it is true only for simple functions like map or filter or whatever you take from a standard prelude.

What I mean is that we are not talking about run-time in this context. Yes, one could pass by reference (or even by value) some crap at run-time, but this is a quite different problem. Machine code has this problem too.


>Like C or C++ does.

C is more dynamic than typed. Types are very loosely enforced there and carry little information with them.


This is more a comparison between Python and functional programming than dynamic vs static. This is partly because the language you compared it to is Erlang, which is a dynamically typed language, with no compile-time checking. But also because you can do this superior idiom in statically typed languages if they offer it, and you also get the static type checking.

In statically typed C# with LINQ, you can have, generic over T:

    IEnumerable<T> iterable = ...;
    var filtered = from item in iterable where Predicate(item) select item;
This does not depend on the static typing. It's just syntax.

One non-obvious difference is that the type checker knows that 'filtered' must be an IEnumerable<T>, and you can't accidentally treat it as an IQueryable<T> or an IList<T>, which have subtly different behaviour. You can't, for instance, attempt to sort an IEnumerable without calling ToList() on it first. This is good, because enumerables can be lazy, and lists can't.

Python checks the enumerable/list distinction, but only at runtime. So let's say your Python library offered a method returning a list, and a new version of the library did some filtering before it returned the list.

A test for that functionality might pass independent of the distinction, because what test case attempts to sort a list for no reason? Some other part of your code used to work on the old version, but now breaks, and nobody knows until they run their program or write a really comprehensive test suite.

In comparison, in C#, this problem literally never arises, for one of two reasons:

1. The API initially offered only an IEnumerable<T> and consumers were calling .ToList() before sorting anyway (which is free on an IEnumerable that's actually an IList); or

2. The library author catches the error when it doesn't compile, because you can't implicitly downcast IEnumerable to IList.


Why are you ignoring the fact that you have to type so much less for the same result? ;)

Filter is the canonical example of a truly generic function. Suppose I wish to filter out something from a stream or a port. Then all I shall do is to supply a generic predicate, which only matches what it is supposed to be a predicate of and ignores everything else. This is what a predicate is - a matcher for some category.

So, quickly writing something like

   (filter this? (filter (lambda (x) ...) ...))
without even thinking about type signatures is what dynamic languages and type-tagging are all about - quick prototyping.

This is related to the embedding-of-DSLs technique and producing systems which are layered DSLs instead of the common big ball of mud in Java.

Shall I cite from OnLisp or arc.arc and news.arc? ;)


> Why are you ignoring the fact that you have to type so much less for the same result?

Mostly because in your Python example you're really just _using_ an inbuilt filter function (comprehensions) where in Erlang you're implementing one. If you compare using, it's shorter in Erlang. More importantly, though, it's because I agree with /u/coldtea that one line of statically typed code (like the C# example) does more for you than the equivalent dynamically typed code. It's _not_ the same result.

> a matcher for some category

Well, a type is a category, which is why the study of them is called category theory. Having them helps you write predicates, and helps you avoid trying to match something of an unrecognised category. The type system does the job of 'ignore everything else'. If you're duck-typing and you actually want to handle that job (which most programs don't, they just accept they might get errors), you have to go

    filter(x => x.some_method && x.some_method(99), list).
Try stringing that together into (filter this? (filter that? list)). My point is, you can totally have that power and expressiveness without forgoing a static type system. LINQ does it; that's enough of an example on its own. Any language worth its salt these days can string together .map(f).filter(pred).reduce((a,b) => a + b) calls and still let you hover over 'b' in your IDE and see exactly what it is. You don't have to choose between these things.

As for writing all this 'without even thinking about type signatures': I want to think about type signatures when working with complex data. Having the type system actually lets me be more productive; it is not a hindrance. IDEs are a part of that, but so is expecting a huge block of code to work the first time.

I replied because your comparison purported to demonstrate that only dynamic languages could be so expressive. It was a bad comparison, and also not a useful example of what code that begs for expressiveness looks like. In fact, most static languages allow you to be productive and expressive when you need it, and let you lock down the use of certain code to strict requirements when you want to.

I imagine there is not a thing in the world I could do to stop you from citing OnLisp.


> does more for you than the equivalent dynamically typed code. It's _not_ the same result.

Yes. It does more checking, at the cost of imposing restrictions, such as forcing homogeneous containers and conditionals, and of making the code less generic and more cluttered. This is not merely hand-waving, BTW. One just has to take a look at some decent Lisp code, such as arc.arc, or some good parts of Common Lisp, or Norvig's code from AIMA, which is simply wonderful.

As for typing, old-school ADTs are OK for me (that is, constructors, selectors and predicates explicitly defined as procedures). This requires some discipline, because the type system does not do anything for you, but all of it is trivial, including writing the pattern-matching.

I could argue that strong typing via type-tagging of values (the "values have a type, not variables" principle) is good enough as long as it comes together with other Lisp features, such as everything being an expression and everything being a first-class value, which gives one such beauties as the Numerical Tower, but this is quite another topic.

I am also OK with the SML-family languages, love Haskell for its clarity and conciseness and have nothing against them, but... I still think that there are prototyping languages and implementation languages, and I still prefer to prototype in a dynamically-typed language, and would still use Common Lisp, or Clojure, if I had the choice.


Well, typing less is not necessarily always the most important thing - though it certainly appears to be useful when writing small examples for pedagogical purposes.


Quack, quack. I'm a duck.


You should look at modern structural type systems that support generics. They have none of the flaws you're describing here. TypeScript is one such example.

For example if you define a TypeScript interface

  interface Serializable {
    toJSON():string
  }
then any object that has a toJSON() method that returns a string is automatically accepted as a Serializable. You can even define such anonymous interfaces inline without giving them a name:

  function frob(obj:{toJSON():string}) {
    // do stuff
  }
Basically, you get statically checked duck types and no coupling.


I think Go's story is similar. I know neither TypeScript nor Go, but "no coupling" isn't as simple as removing "implements" qualifiers.

Basically the only way I see to really eliminate coupling is passing explicit function dictionaries and retracting the idea that there is only one valid implementation of any given (type, concept) tuple.

It's very bad if all interface implementations have to be physically coupled with the implementation of the class itself (as I think the case is with Java).

It's still at least somewhat inflexible if you are free to put the implementation anywhere (at the class definition site, at the interface definition site, at an independent site) but the system enforces that there is at most one implementation of each interface (this is how it's done in Haskell). This strongly discourages alternative implementations, because one has to circumvent the machinery. I don't think there can be a system that automates away at least part of the boilerplate that is dictionary passing while still retaining the flexibility to choose alternative implementations.

The Haskell (maybe also TypeScript?) way can be very nice if you can be sure that there can be only one valid implementation of a concept. However, most concepts actually don't lend themselves to a single implementation. Most concepts aren't mathematically pure enough for there to be one and only one canonical implementation. Right now I can think only of a few where it's almost always fine to use the default implementation instead of being explicit, because one doesn't care much about the result - like "Show" for debug output.

Take for example JSON: I can think of a thousand ways to create some JSON output from my global data. To get them all in the bag, you would start introducing phantom types and whatnot, and implement toJSON on those. Really nothing is gained at this point; it only loses some readability at the use site.

Given some run-time introspection or metaprogramming on the other hand, a lot of flexibility can be won back.

Here is an example of my preferred way to do it (metaprogramming):

https://github.com/jstimpfle/python-wsl/blob/master/wsl/pars...

Just spend 5 lines to get the data in shape for the task at hand. The original function dict, having a specific layout convention, and containing functionality that is just not needed here, is specified here (informally)

https://github.com/jstimpfle/python-wsl/blob/master/wsl/doma...

So 5 lines and these two locations are completely decoupled. Easy.


It's not like it's impossible to model metaprogramming with types. TypeScript recently brought a new level of expressiveness and power into the mainstream with mapped types.

Example: https://goo.gl/LphqrJ

It's possible to map over record fields of different types to transform the inner type of a field. Similarly, if I understood it correctly, for your example that would mean you could map an object where every field is of a different but related type Domain<T[Key]> into an object of the same shape containing WsllexFunc<T[Key]> or Decoder<T[Key]>.

Furthermore, you don't have to use Domain<T[Key]> - you can define a more restricted interface that only requires wsllex and decode. Then the extra functionality defined in the Domain<T[Key]> is no longer required, nor mentioned. Similarly the types WsllexFunc<X> and Decoder<X> are simple function types, and any function types with the same shape (arguments and return type) would be accepted regardless of their name. Recursively the arguments and return type are themselves structural, and so on (!)

IMO it's incredible how structural type systems change the game completely, and we've yet to realise the full implications of this for dynamic languages. My guess is that this is why Guido is excited about types.


Thank you for taking the time (and referring to the point I made instead of criticizing the code, which is experimental and a little crappy in places).

Looks interesting and I will keep an eye on it. But am I right in guessing that the TypeScript you linked would not work if the schema is only read at runtime?


Yeah, unfortunately. For something like that, code generation of at least the initial record types would be required for each schema, before compile time. Then they can be transformed further by the type-level language.

Another option is to model the type as a dictionary of items, each item being a union of the possible types.

There is a tradeoff here, and I believe that if the rest of the code statically assumes a concrete schema is in place, code generation + record types is the better choice. Otherwise, dictionaries would probably work okay.

There are languages such as Idris and F# that have type providers which can be programmed to read and generate the types from the database, but AFAIK that would still happen at compile-time. https://docs.microsoft.com/en-us/dotnet/articles/fsharp/tuto... - i guess its a kind of "built in" code generation.


I agree. The benefits of static typing (catching trivial errors, documentation) usually do not outweigh the costs.

https://www.infoq.com/presentations/dynamic-static-typing


Watched the whole thing. Very articulate speaker, thanks!

Btw, going through your older posts:

> Besides immutability, what are the advantages of using namedtuples instead of dicts?

fixed attributes. And they are ordered! :-)

Makes sets / db-style programming so much simpler:

    for user, name, account in users:
        ...
You can do that with conventional tuples as well, but they can't match the introspectability and the value for documentation.
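A self-contained sketch of that style (the field names are made up):

    from collections import namedtuple

    User = namedtuple('User', ['user', 'name', 'account'])
    users = [User('jdoe', 'John Doe', 42), User('asmith', 'Ann Smith', 43)]

    for user, name, account in users:  # unpacks in declared field order
        print(user, name, account)

    users[0]._asdict()  # introspection a plain tuple can't offer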


That is what generics are for. When you want some code to be applicable to any type of input it can be applied to.


For our small team, we found that type annotations are useful as a substitute for documentation, and they cut our docstrings at least in half. For most small and well-named API functions, we don't need docstrings anymore.

We're big fans of type aliases also. For example, API functions do not return an int, they return a PK. They do not return a str (or Text), they return a URL.

We're also heavy users of typing.Optional to mark parameters or return values that can accept and return None.
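A minimal sketch of those conventions (the alias and function names here are invented):

    from typing import Optional

    PK = int   # alias: still an int, but the name documents intent
    URL = str

    def get_avatar_url(user_id: PK) -> Optional[URL]:
        ...  # None when the user has no avatar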

We tried mypy but it failed miserably on our large codebase and frankly, with our large test suite, we never had a typing error found in production so I think it's not a good investment of our time.

I'm eager to test type annotations with our next intern to see if they speed up the ramp-up process.


Python type hinting and aliases in Python 3: https://docs.python.org/3/library/typing.html

I've just finished porting a small application to Python 3, just because 2020 and deprecation are already in sight - there are so many things that have passed me by in the Py3 world.


What I missed at first was this deal with provisional APIs. Nice that it's widely tested, but it should be clear that typing is provisional: your program is not future-compatible if you use it.

Asyncio was provisional until Python 3.6, so it is effectively a Py 3.6 feature that has been backported to 3.5 in time and space.


> we never had a typing error in production

Python (currently) doesn't actually check at runtime that values match their annotations. Do you mean that you rely on, say, your IDE to do static analysis to ensure everything's lined up, or that just by virtue of having annotations, your team can reason about things more easily?

That's one thing I don't really like about type annotations in Py3.5+, unless you run it through a static analyser or have an IDE that understands them, they're effectively just noise. As such, I have mypy running through Syntastic, in Vim, and while not perfect it's a vast improvement over non-annotated Python.


Regarding typing errors in production, I meant that we never encountered an error that could have been directly prevented by a Java-like static type checker[1], e.g., a function with an int parameter that was called with a string argument.

We use types as a documentation tool so the noise you are talking about is actually very useful to us. When someone unfamiliar with the code wants to call a function, a quick look at the signature is usually enough now that we use type annotations. Before that, we provided that information in a docstring, which was more verbose so longer to write and longer to read.

We found PyCharm to offer a better out-of-the-box validation experience than mypy (less configuration), but it's not perfect, i.e., we still see occasional false negatives and false positives. Ultimately, we did not include type checking in our build pipeline, but as the tooling becomes more mature, we will definitely revisit that decision.

[1] I know other languages have more powerful type systems that may have prevented some errors we encountered, but I don't have enough experience with these type systems applied to large systems to comment on that.


>Regarding typing errors in production, I meant that we never encountered an error that could have been directly prevented by a Java-like static type checker[1], e.g., a function with an int parameter that was called with a string argument.

That's because your devs had already suffered this bug in their test runs once too many times, and killed it before it got to production.

That, and luck.


I agree. That is why I mentioned our large test suite in my first comment.


I am mostly confused by this. The author started out with a friend's question about an Either implementation and ended up with an enumeration and an option.

Enumerations are useful for a number of reasons in typed, compiled languages (I guess they provide readability in Python), but they are not the same as monads (of which Either is one). I am not a Python developer, but as far as I can tell, this implementation gets you none of the real benefits that a true Either implementation would get you in terms of functional goodness.

Am I missing something here? Is there some kind of Python magic that makes this better than just propagating the exception?


I thought the same thing. There are so many other options in the Python toolbox, Enum is far from the first I would reach for. Not sure what the benefit is.


> I’ll be the first person to admit I have no idea what postmodernism actually means

Well, if we want to take the title literally:

Error handling is a fundamentally Modernist idea - shoring up errors and "correcting" or at least containing them.

Postmodern error handling would be anything that reasonably counts as a reaction against Modernist error handling. The most basic would be not handling errors at all, and instead embracing the chaos of an error-prone system, and designing a system in which errors propagate but converge to a desirable value.


To add to this, AFAIK postmodernism is actually a critique of modernism rather than being a substitute for it. This critique may/will eventually produce a concept which can replace modernism.

I found that enlightening, but it is probably also splitting hairs with regards to the blog post.


It's also important to remember that "postmodernism" is at this point about 130 years old.


> It's also important to remember that "postmodernism" is at this point about 130 years old.

You're thinking of Modernism. Postmodernism is more like 60 years old, depending on how you draw the line, and it's still an ongoing and developing set of philosophies.


The term was used in the 1880s in painting circles; it probably varies by field


>The term was used in the 1880s in painting circles

Not in any great capacity that culture in general was aware of.

Only in the mid-20th century, and especially post-60s, did it become an actual thing with the meaning we associate with it.


There are cases where you could justify writing "except Exception as e" and having it return a value, like if you're the author of Flask or Raven or something that includes handling arbitrary errors in other people's code in its job description.

But within your own code, you would not want to represent "an unforeseen error occurred" with a value, no matter how much you like enums.

If you were parsing JSON and you got an unexpected error that isn't about parsing JSON, logging the error and continuing is not the right option. There is probably nothing reasonable your program can do. Raise the error so your broken code stops running.


I often find myself using "except Exception"; while I don't like it, I cannot find another way.

For instance when making an HTTP request. The requests library can throw thousands of different errors (SSLError, BrokenPipe, socket errors, errors from urllib, errors from requests itself...).

As far as I know, it is the only choice if you want to know if a request succeeded or not so you can deal with it without your view returning a 500.

I would really appreciate if someone has another solution to this.


IMO, there's nothing wrong with "except Exception" if your handler is, in fact, meant to handle all exceptions.

(Well, there's one thing wrong, which is that certain standard library errors in Python 2.x don't derive from Exception [socket error, I'm looking at you]. This has been addressed in more recent pythons, but there's still lots of 2.x out there.)

Python's rules for handling exceptions in a class hierarchy are designed to provide programmers with the ability to handle errors at whatever level of granularity makes sense. If your requirements say that the software should treat SSLError and BrokenPipe differently, then by all means, handle them separately.

More often, however, you're making a library call for which any failure has identical requirements. When you're trying to read a configuration file, you probably don't care whether a hard drive is corrupted, has a loose power cable, is on fire, or somehow triggered a divide by zero error somewhere in the bowels of an I/O library.


  >>> import socket
  >>> issubclass(socket.error, Exception)
  True
It's been like this since Python 2.4 at least.


I stand corrected. Well, partly. Looks like it was changed in 2.6.

https://docs.python.org/2/library/socket.html#socket.error


All of requests' exceptions inherit from requests.exceptions.RequestException [0], so you could catch it.

[0]: http://docs.python-requests.org/en/master/_modules/requests/...


Kennethreitz requests lib? Just use requests.exceptions.RequestException; that should handle every error. If not, then report it.

For Python urllib, IOError and OSError should suffice. At least in py3.


  import urllib.request
  try:
      urllib.request.urlopen('https://wrong.host.badssl.com/')
  except (IOError, OSError):
      pass
causes:

  ssl.CertificateError: hostname 'wrong.host.badssl.com' doesn't match either of '*.badssl.com', 'badssl.com'
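(A sketch of the extra clause this needs on pre-3.7 Pythons, where ssl.CertificateError derives from ValueError rather than OSError:)

  import ssl
  import urllib.request
  try:
      urllib.request.urlopen('https://wrong.host.badssl.com/')
  except (OSError, ssl.CertificateError):
      pass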


and in Python >=3.3, IOError is an alias for OSError.


You should have (+/-) one "except Exception" in your code which would be the "catch all/log error" thing.

But for how to deal with errors, see the sibling comments to this one; good libraries will give you an inheritance tree of exceptions.


This is basically the equivalent of returning a 500 or failing a task. You know that your code failed somewhere but there is no way you can recover from it (granted your code is not idempotent so retrying everything is not an option).

By catching all exceptions as close as possible of the request you make, you can at least do something about it.

As for the tree of exceptions, it works well for a while, until networks become unreliable, until remote peers start presenting certificates broken in subtle ways... You end up adding ranges of exceptions to the except clause as new ones pop up on Sentry. Then you get tired of it, so you catch "Exception".

This is the unfortunate story of every single project I do that deals with network calls on the Internet.


Where 'in your code' is a module. Then log the exception and raise a new exception with a module-specific code. requests.exceptions.RequestException is a good example.

My modules do catch SystemExit and KeyboardInterrupt before the bare exception and simply re-raise those.


Good unit testing can trigger the relevant exceptions. I know it can be a lot of work to write tests that trigger all your corner cases. I practiced this in a recent project, and now I'm much more confident in my code.

Another strategy is to catch the "expected" exceptions first followed by a general exception handler. This can use the `traceback` module to log the call chain and error message.


A better way is to implement a logging function, which will be recording unknown exceptions the way you prefer: sending an email, writing to a text file, whatever. Then you just write:

    try:
        [...]
    except Exception as e:
        log(e)
It's not much longer than `except Exception: pass` and you gain more control over your system.


Requests cannot throw errors from urllib because Requests does not ever invoke urllib code.

Any error that does not inherit from our top-level exception is a bug: please let us know so we can fix it.


I am working on a crawler that uses async/await, and from experimentation the list is:

except (aiohttp.ClientError, aiohttp.DisconnectedError, aiohttp.HttpProcessingError, aiodns.error.DNSError, asyncio.TimeoutError, RuntimeError) as e:

I want to continue running no matter what, so I also have an "except Exception", but I have better logging now that I know what the known vs. unknown exceptions are.


My pattern on this stuff is to initially write:

  try:
      do_something()
  except ExpectedException:
      handle_it()
  except OtherKnownException:
      handle_that_too()
  except Exception as exc:
      log.exception('New kind of error just happened')
      handle_fallback()


Yes, that's basically what I'm doing.


>I often find myself using "except Exception", while I don't like it, I cannot find another way.

This (pokemon exception handling: "gotta catch 'em all") is a massively, massively destructive anti-pattern.

Once a code base grows beyond a certain size, it usually leads to failures that manifest as strange behavior whose real source takes hours (sometimes days) to track down.


    try:
        ...
    except (NameError, AttributeError, TypeError):
        # likely programming error
        raise
    except Exception:
        ...
Not pretty, but maybe better than catching every exception.


>There are cases where you could justify writing "except Exception as e" and having it return a value

I actually use this pattern quite frequently in one very specific place. I use marshmallow to handle schemas for my API. On endpoints that require creating or updating a model, it's often difficult to write an exception for every single possible error type. Therefore, I usually try to catch ValueErrors, IndexErrors, etc explicitly. However, as a last line of defense I do have an except Exception as e. This is to catch and document any exception that I may not know of. Ideally, this ends up catching library defined exceptions that do not inherit from a standard defined exception or inherit from Exception directly. I haven't hit this line yet on any of my views, but when I do I'll know exactly what exception I need to catch.
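In code the shape is roughly this (a sketch: `schema`, `update_model`, and `error_response` are hypothetical stand-ins for the marshmallow schema and view helpers):

    try:
        data = schema.load(request.json)       # marshmallow validation
        result = update_model(data)
    except (ValueError, IndexError, KeyError) as e:
        return error_response(e, 400)          # errors we expect and handle
    except Exception as e:
        # last line of defense: log it, then add a specific handler later
        log.exception('uncaught exception type: %r', e)
        return error_response(e, 500)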


except Exception: is useful when you have two alternate ways of accomplishing a goal; one default that might fail, and one extra that is less complete. Although you might only expect a certain error, since you have an alternate, you might as well use it for all errors.
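For example (hypothetical names, but this is the shape):

    def render(doc):
        try:
            return render_fancy(doc)        # default path that might fail
        except Exception:
            return render_plaintext(doc)    # less complete, but always works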


This looks like an attempt to use Rust-type error handling in Python. This is pounding a screw. Rust does it that way because Rust doesn't have exceptions. (Although, as with Go, all the heavy machinery for exception unwinding has been retrofitted so "recover" will work. Rust and Go just don't have the syntax for exceptions.) Python doesn't need to become Rust, or Haskell.

Writing

    except Exception as e:
is clueless Python. If you're checking for something that went wrong in the outside world, you catch EnvironmentError. Catching Exception catches far too much, including syntax errors. It's sometimes useful to catch Exception at the top level of a program, log an error, and restart, but not in interior code.

Note how useless the examples in the article are. It's not like someone is trying to handle all the things that can go wrong in a network connection, or in nested code where you have a network connection and a database transaction open at the same time, either can fail, and you have to unwind properly.

Typed languages are fine, but this semi-checked addition of types to Python is a mess. Python doesn't need this. I'm in favor of design by contract, typing, and assertions, but not in this broken semi-optional way.

Stop Guido before he kills again.


When you have a generation of people brought up to believe that exceptions are somehow evil, what do you expect?


I know. Python is one of the few languages to get exceptions right. There's a reasonably sane exception hierarchy, and if you catch something, you get everything in its subtree. The "with" clause and exceptions interact properly, so if you hit a problem closing a resource in a "with", the right stuff happens. Since Python is reference counted/garbage collected, you don't have the ownership problem of exception objects you have in C++. Python isn't big on RAII, so you don't have the exception-in-a-destructor problem (or worse, the exception during GC problem) unless you do something to force that.

Python is probably the best type-declaration-free language around. Trying to make it into something else damages the language. Python has become uncool, though, and so "cool features" are being added that don't quite fit.

These changes to Python are so big that this should have been called Python 4. But if Guido had done that, the reply from the big users would have been "Hell, no, we won't go!".


If you just need a Rust-like Result type for Python: https://github.com/dbrgn/result
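The core idea is small enough to sketch by hand, too (a toy version, not that library's exact API):

    class Ok:
        def __init__(self, value):
            self.value = value
        def is_ok(self):
            return True

    class Err:
        def __init__(self, error):
            self.error = error
        def is_ok(self):
            return False

    def parse_port(s):
        try:
            return Ok(int(s))
        except ValueError as e:
            return Err(e)

    res = parse_port('8080')
    print(res.value if res.is_ok() else res.error)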


I'm a bit confused by this post: the 'enums' in Rust are actually algebraic data types, not enumerations like what the author is showing in Python. The code here isn't really equivalent, and if anything it just shows how much less expressive Python is than a language with proper algebraic types.


Type annotations are also helpful without mypy. PyCharm uses them to provide hints and warnings during development.

At first I was reluctant about the syntax, but after a while I got used to it. Function definitions look a lot like Rust ones and no longer require endless docstrings just to document the types of the expected parameters.
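For example (a made-up function, but it shows what the docstring no longer has to say):

    def fetch(url: str, retries: int = 3) -> bytes:
        """Download url, retrying on failure."""
        ...

PyCharm and mypy both read the annotations, so the ':param url: str' boilerplate disappears.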


My question is this: What happened to all the duck-typing evangelism? "If it walks like a duck and quacks like a duck, it is a duck", "static typing only catches trivial errors", and so on.


> "static typing only catches trivial errors",

That's partly because we tend to use the same types for everything. For instance, in a spreadsheet you can subtract a column number from a row number and get no complaints, even though the difference between the two quite likely makes no sense. If you have a separate type for row numbers and column numbers, such mistakes can be caught.
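One way to encode that with Python's type hints (a sketch; mypy treats the two NamedTuple classes nominally, so mixing them is an error):

    from typing import NamedTuple

    class Row(NamedTuple):
        n: int

    class Col(NamedTuple):
        n: int

    def rows_between(a: Row, b: Row) -> int:
        return a.n - b.n

    rows_between(Row(10), Col(3))  # mypy: argument 2 has incompatible type "Col"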


In your example you'd just have two different types with no subtraction operation defined between them. That has nothing to do with type hints.

What you probably meant: This is where being strongly typed helps. Python is and always has been strongly typed.


Everything is cyclical. Pain is forgotten, same mistakes are made, pain is rediscovered, relief is reinvented. I had a high school teacher who said something like, "I haven't bought new ties for 25 years. Why? Because things always go in and out of fashion. A particular tie might be out of fashion for ten years but it comes back eventually."


I use typing at interface boundaries within code and sometimes between functions. Within a function I use duck typing. As others have mentioned, I use this more as an editor hint to warn me of questionable behavior and almost never run mypy. Yes, it only catches trivial errors, but a trivial error highlighted in the editor is easy to see/fix. The extra typing in the function sig is offset by not having to enter the same info in the docstring, so it feels like a wash to me.


Back when static typing meant Java or worse, it was reasonable to think horribly verbose and clunky programming was inherent to static typing. But now we know better.


Not a definitive answer to the problem, but mypy supports a limited set of common "interfaces", for instance dict-like or list-like.

http://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html#st...
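For instance (any dict-like or iterable argument passes the check, which keeps much of the duck typing):

    from typing import Iterable, Mapping

    def total(scores: Mapping[str, int]) -> int:
        return sum(scores.values())   # dict, OrderedDict, any mapping

    def shout(words: Iterable[str]) -> None:
        for w in words:               # list, tuple, generator, ...
            print(w.upper())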


That's right, mirroring the built-in (3.x) collections.abc hierarchy. Which then allows you to do interesting things like delegation-based polymorphism by registering with the appropriate ABC, so - just to come back to your parent's point - interrogating the type reveals it to be list-like or dict-like.


Ah, that makes things better.


It's still here. Types don't matter in many scenarios. Example: get a parameter from a POST, use it as a key to query the db, store it into the db. Is it a string, is it an int? It doesn't really matter to the application, because the db driver can handle either. Don't slow me down by making me look at the docs of what I really receive from the web server just to declare a type. Multiply that by some hundred or thousand times. Statically typed languages only raise costs for my customers. Given that the budget is constant, they get more features with dynamically typed languages.

I'm not happy to see language designers introducing static or optional typing in languages that didn't have it, but I understand that their goals are different from those of the developers who use their languages. I'd say leave the core of Python (and Ruby [1]) alone: if we picked those dynamically typed languages over the last 25 years (they're that old), it means we like them as they are. We can use plenty of other languages if we really need types.

[1] https://blog.heroku.com/ruby-3-by-3 "Matz: ... the third major goal of the Ruby 3 is adding some kind of static typing while keeping the duck typing, so some kind of structure for soft-typing or something like that. The main goal of the type system is to detect errors early. So adding this kind of static type check or type interfaces does not affect runtime."


> Example: get a parameter from a POST, use it as a key to query the db, store it into the db.

So you get a string, and you want to pass it unmodified to something that expects a string? The static typing answer is simple: Your variable is a string! How does that slow you down, or cost your customers a single cent?


I really don't know if it's a string. AFAIK the framework could give me ints when it sees a number. I should check, but what for? I'll do it the next time I need to do some math on those values, and then forget again. These are not the things developers should waste time on.


> I really don't know if it's a string. AFAIK the framework could give my ints when it sees a number.

In addition to alexvoda's comment: If you don't know what you get from your framework, then how do you know that your database frontend can handle it? Maybe in some cases your framework gives you a list of lists, or a hashmap, and your database has no idea what to do with it. I really hope that's not the level of thoroughness you apply to your professional work.

Besides, if your framework were statically typed, it would either give you a String, or some kind of Variant/Any/Object with a to_string() method, the output of which can be passed to your database. There, problem solved, no time wasted, and you can be sure that whatever your framework does, your database can handle it.


Typical response from someone looking at things not from the database side. Someone who mainly works with the database will cringe at the thought of just throwing whatever data into the database. Consistency, integrity and all that jazz.

Different areas of programming give importance to different things.


Being able to lock down known interfaces can be beneficial, while leaving the rest open. Optional typing also lets you ignore the type hints when you really know what you want to do.

> "static typing only catches trivial errors"

To be honest I've not seen that sentiment very often, and it's definitely not something I agree with.


There exist libraries that provide 'duck types' and other goodness for type hinting.

typecheck [0] is one (which can also enforce the check at run time), used like:

    import typecheck as tc


    @tc.typecheck  # the decorator is what enforces the checks at run time
    def logathing(
        dest: tc.hasattrs('write'),
        thing: tc.any(int, float, str),
        prefix: tc.optional(str)=None
    ):
        dest.write(prefix + str(thing) if prefix else str(thing))
[0] - https://github.com/prechelt/typecheck-decorator


Both statements you quoted are still valid.

Moreover, if you think about it for a bit, you'll realize that the world around you is duck-typed. At least at the level of proteins.

One more thing: every ontological attempt to use naive categories fails and ends up in a duck-typing-like approach, because that reflects how mother nature works.


There must be something I don't get. What is the advantage of creating an Enum class to store relationships rather than just using a dictionary or namedtuple? (Arguably) readability? I mean, I see it works but fail to grasp the advantage.


I’ll be the first person to admit I have no idea what postmodernism actually means.

It means there's no right or wrong way to do it; everything's subjective. Which is actually the exact opposite of the Python philosophy!


Am I correct in thinking that duck typing and type hinting are not mutually exclusive? Type hinting is not enforced at runtime, so we are still free to pass around whatever we like, as long as it behaves as expected.


Correct, although it kind of defeats the point of typing if you ignore type errors.

There is, however, an Any type which you can use for things you don't need enforced.
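For instance (a trivial sketch):

    from typing import Any

    def passthrough(x: Any) -> Any:
        return x  # mypy accepts any argument here and any use of the result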


"Mypy allows you to add type annotations and enforce them prior to running your program"

Yes, and if I wanted type annotations to stop my program from working I wouldn't be using Python

Enforcing types is exactly what Python IS NOT about. Because of Duck Typing and everything else

So this is not "postmodern error handling", this is "let's code Java in something else and pat ourselves on the back".

Do you want to check for errors at "compile time"? Use Pylint. It does the right thing.


I'm not a big fan of type annotations, because the code easily turns unreadable, which is not what I expect from Python code.

But for larger projects type checking is valuable. Types introduce additional contracts between components to manage and control the overall complexity of the system.

For small scripts and self-contained applications I regard type checking as a burden. For larger projects I don't.

Pylint is a valuable tool for detecting typical errors, but it does not check types. Consider a function which returns the sum of its two arguments. It will work with both arguments being integers and with both being strings. Pylint will not complain here, but if your code uses the result you might trigger errors much later during program execution which are hard to detect or understand without type checking.
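The difference in a sketch (hypothetical `add`; annotating the function is enough for mypy to flag the bad call):

    def add(a: int, b: int) -> int:
        return a + b

    add('2', '3')  # runs and returns '23'; Pylint stays silent, mypy flags it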


> But for larger projects type checking is valuable.

Wrong solution to the wrong problem. Break that fucking mess that's too large for duck typing into smaller independent components with well-defined and enforced interfaces.


> components with well defined and enforced interfaces

"Well defined and enforced interfaces"? That sounds like a job for static typing!

But seriously, that's exactly what types are good for: defining and enforcing an interface.


Duck typing is still a type system that can be checked before running the code. Flow for JavaScript does that, while allowing comments to be used for type annotations. That is, with Flow one has the option to program in idiomatic JavaScript that relies on duck types and then check for type violations using the tool. The cool part of Flow is that it tries hard to check for various typical patterns, keeping annotations minimal.


Duck typing is not the solution to every problem.

Pylint has nothing to do with catching type errors. It catches problems with code style.

I don't find the code in this blog post convincing, but there are reasons to write type-safe code.


> Pylint has nothing to do with catching type errors.

No, it does catch type errors - for example, accessing methods that are not present on the object.

> but there are reasons to write type-safe code.

Yes, and there are lots of languages that give you that.


Postmodern error handling would probably be treating any error as a successful result.


What is a "maybe implementation" and an "either"?


Kind of a bad pattern.

Lots of abuse of error handling, apparently just to have a fetch and a parser in the same method?

Why not have one promise for the fetch and then a promise for a successful parse?


Another reading of this paragraph is that it's essentially douchebaggery.


We detached this subthread from https://news.ycombinator.com/item?id=13731079 and marked it off-topic.


This paragraph? Which one? I see three.


I'll never understand the techie disdain for the humanities and liberal arts.


I am not endorsing miloshadzic's comment in any way, but one can be skeptical of postmodernism without being disdainful of the humanities and liberal arts in general.


Maybe it's partly because of anti-intellectualism, the idea that being very smart is bad. I had never heard of the idea until recently, in some article about American anti-intellectualism and why it is so popular. Reading that article was a very weird experience for me. I believe this concept doesn't exist in the culture of the country I am from.


That's a very strange thing for us Europeans. When I meet someone new I will scan some cultural fields and see if we have some common ground where we can settle the discussion for a while; it could be post-modernist philosophy, classical music, anime, or even porn, but we need to find a place to settle. When I was younger I did the same with some people from the US and noticed big "!name dropping!" flags. Right, I was name dropping, but just to try to find a common topic. I wasn't trying to shower anyone with "intellectualism", though: it is perfectly OK not to have read Proust. (But it would be weird, and a no-go, to be familiar with no part of cultural life except baseball and "football".)


>That's a very strange thing for us Europeans.

It's even worse in Britain. Anything remotely smart or cultured is considered "pretentious" etc.



Tired of people taking quotes out of context too. What Gove actually said was that the country was tired of self-proclaimed experts who always turned out to be wrong. Which is not an unreasonable thing to assert.



Really, it was a joke example, but few people actually self-proclaim expertise they don't have - though even being tired of such people being wrong isn't at odds with the comment I was replying to about anti-intellectualism.


Most tech people I know have above-average knowledge of and respect for the humanities. Maybe you cannot excel in both, but you can enjoy both. There is always a subset of the population, which also intersects technical people, that increases its self-perceived value by decreasing the value of others.


I didn't realize my first paragraph might come off as being dismissive of the humanities. It was just meant as a self-deprecating yarn, nothing more.


Isn't computer science a liberal art? At least it is a feature of liberal-arts schools on the East Coast, not necessarily always an engineering degree.


It depends on your definition of "liberal arts". The classical liberal arts (i.e. arts of a liberated person, as opposed to the servile arts) are music, arithmetic, geometry, astronomy, grammar, logic, and rhetoric. So if you boil down computer science to its mathematical principles, you could make a decent case for it.

These days, the liberal arts generally relate to the humanities: language (e.g. English degree in the USA), fine arts, history, anthropology, sociology, philosophy, etc. It's the study of people and groups of people, instead of the physical world and the laws that operate on the physical world independently of the people that occupy it.



