Pybind11 — Seamless operability between C++11 and Python (github.com/pybind)
135 points by tony on Aug 25, 2017 | 117 comments



Some may find it useful: a while ago I wrote a Jupyter notebook extension for pybind11, so you can write C++ in the notebook and it gets automagically compiled and imported into the kernel (plus you get C++ syntax highlighting and a few other goodies):

https://github.com/aldanor/ipybind

(still not on PyPI, but the plan is to release it soon)


I've been using boost::python, and the experience is what I'd call cargo-cult programming. You copy and paste stuff off the tutorial, and when you get a compiler error, you just try other stuff at random, because the inner workings of the library are inscrutable. I vastly prefer cython to this. Is Pybind11 any better?


Pybind is derived from boost::python, so it's not much different. I wrote a large Python binding in the past for a moving C++ library target, using boost::python, and keeping the C++ binding code up to date was a nightmare akin to maintaining a fork of any fast-moving project.

Try cppyy [1]. It's very nice, though quite fresh. It's used extensively (as far as I can tell) at CERN, and derives from their cling C++ JIT interpreter. Plus it does lots of nice automatic transformations to make a C++ API more pythonic.

[1] http://cppyy.readthedocs.io/en/latest/index.html


Not quite -- while pybind11 was originally inspired by boost::python, it is an entirely different project that is intended to address many of its problems.


Indeed; I was using "derived" somewhat loosely, but I think there is certainly a visible lineage there. That said, pybind11 is much nicer than boost::python (and I did evaluate it for the future of the project I mentioned above).

However, the very nice feature of cppyy is that it does much (most?) of what pybind11 does, but it can also be completely on-the-fly, in the sense that it relies on the cling JIT interpreter. This means that there is absolutely no need to maintain a compiled C++ part for your bindings, and so the problem of keeping the interface up to date is greatly mitigated: the equivalent changes to match the interface when using cppyy are _much_ smaller.

Often, one maintains a "C++ interface" layer in one's Python bindings (which could be created by pybind11, boost::python, or SWIG, for example), with a pure Python shim layer on top of that. cppyy allows you to do away with this two-layer structure entirely; all you need is the shim, if you need anything at all.


The catch with cling and the derived libraries is that you have to download a whole bunch [1] of CERN stuff and then build a customized LLVM as part of the build process. That's a bit too heavy for the nice reflection- and REPL-like features that you gain.

    [1] https://github.com/antocuni/cppyy-backend/blob/master/create_src_directory.py
Boost.Python is better in that regard since you "just" have to build boost; on some platforms, you can just snatch that via a package manager; that being said, you still need to build it. SWIG, aside from being ugly, requires an extra build step.

> This means that there is absolutely no need to maintain a compiled C++ part for your bindings, and so the problem of keeping the interface up to date is greatly mitigated

I tend to disagree. I would never consider the raw (swig or cling) 1-to-1 bindings of C++ code satisfactory for end-user use in Python. Ideally (in my subjective opinion and previous experience) Python-side bindings would closely mirror the C++ API, to the point where downstream code in either language looks very similar, but they don't reference any C++ stuff, be it vectors, or maps, or template arguments, or anything else. This implies you would have to maintain a set of higher level bindings on top of swig/cling ones anyway -- and these are the ones that'll break as the code evolves and that you'll have to maintain manually. As such, I'd rather maintain one set of bindings than two.


You're right about the CERN stuff, though recent efforts seem to have been made to split at least some of that out. I hope that it continues. I think, aside from anything else, that cling is a really cool project, and if it could be easily available more widely, that would be great.

I was being quite literal when I wrote "no need to maintain a compiled C++ part": of course, you probably do want to maintain /some/ extra layer! And, in that sense, I do think that cppyy lets you maintain just one set of bindings (not two); and, as in pybind11, the ultimate aim is to transparently translate any "vectors, or maps, or template arguments" into idiomatic Python: this is why cppyy has a 'Pythonization' API.

Perhaps there are just two slightly different niches: cppyy is good when you need a more interactive interface to C++, for prototyping or exploration (for example), because of its JIT nature; and pybind11 is good for building something more static in the longer term, and where you don't mind the cost of keeping the compiled part up to date with the relevant C++ API.

It's certainly an interesting space at the moment, and I do hope both projects keep the momentum up and keep innovating!


I'm the author of cppyy and was just made aware of this thread.

The big dependency is LLVM, no longer CERN code (there's some left, but it takes up nowhere near the disk space or compilation time that the patched version of LLVM does). The CERN code exists b/c LLVM APIs are all lookup-based. The bit of leftover code merely turns that into enumerable data structures. Once pre-compiled modules can be deployed, everything can be lookup-based. That will greatly reduce the memory footprint, too, by making everything lazy.

It is hard to trim down the source of LLVM, but trimming the binary is easier to achieve and that's what I'm working on right now. The end result should be a binary wheel of less than 50MB that is usable across all Python interpreters on your system, and would be updated something like twice a year. Since that gets it down to a level where even an average phone won't blink, pushing it beyond that leads to vastly diminishing returns, and I'll leave it at that unless a compelling use case comes along.

That said, there is an alternative: on the developer side, you can use cppyy to generate code for CFFI (http://cffi.readthedocs.io/en/latest/). The upshot is that LLVM only has to live on the developer machine and would not be part of any deployed package. Of course, w/o LLVM, you have to do without such goodies as automatic template instantiation.

Finally, note that cppyy was never designed with the same use case as e.g. pybind11 in mind. Tools like that (and SWIG, etc.) are for developers who want to provide python bindings to their C++ package. The original idea behind cppyy (going back to 2001) was to allow python programmers who live in a C++ world access to those C++ packages, without having to touch C++ directly (or wait for the C++ developers to come around and provide bindings). Hence the emphasis on 100% automation (with the limitations that come with that). The reflection technology already existed for I/O, and by piggy-backing on top of it, at the very least a python programmer could access all experimental data and the most important framework and analysis classes out-of-the-box, with zero effort.


pybind11 shares some of the motivation of boost::python, but it is designed to be considerably easier to use. Out of the box, it is aware of a large set of C++ types that are automatically translated to their Python equivalents and vice versa (STL data structures, C++17 types like optional<> and variant<>, std::function<> for callbacks, and even sparse and dense matrices for numerical linear algebra).
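To give a rough idea of what that looks like (a minimal sketch with made-up names, not taken from the pybind11 docs): once pybind11/stl.h is included, STL arguments and return values convert to and from Python lists/dicts with no manual glue.

    #include <pybind11/pybind11.h>
    #include <pybind11/stl.h>   // enables automatic STL <-> Python conversions
    #include <map>
    #include <string>
    #include <vector>

    namespace py = pybind11;

    // Called from Python with a plain list of str; returns a dict of str -> int.
    std::map<std::string, int> count_words(const std::vector<std::string> &words) {
        std::map<std::string, int> counts;
        for (const auto &w : words)
            ++counts[w];
        return counts;
    }

    PYBIND11_MODULE(example, m) {
        m.def("count_words", &count_words, "Count occurrences of each word");
    }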

@aldanor recently gave a nice overview talk of what pybind11 can do: https://speakerdeck.com/aldanor/pybind11-seamless-operabilit...

Also, take a look at our documentation (http://pybind11.readthedocs.io) -- it contains detailed examples of many tricky use cases.


I'm curious - are you more of a Python or C++ programmer? C++ compiler errors can be famously cryptic so I'm wondering if these types of binding-generators are mostly suitable for C++ programmers that want to expose their libs to Python and don't mind decoding template errors in their own code.

I've done both C++ and Python and never attempted to generate bindings; in the near future I may be faced with this, though.


I have used boost::python a lot and later tried pybind11. A lot of people praise it because they consider boost bloated, but in my experiments boost::python is actually 2X faster at runtime (not at compile time). I think I'd rather use boost::python.


Other projects in that space are: cython (you need to do the wrapping manually), xdress, SWIG, SIP, clif.

Does anyone have experience with using these and able to compare/contrast?

I'm afraid of C++, but if I can wrap it and learn how to use it from a python REPL, I think I can handle it... any recommendations/tutorials/howtos would be much appreciated. (The library I want to wrap is https://github.com/openzim/libzim )


I built bindings for various C++ libraries in SWIG (not only for Python but also Lua and Ruby) and I was always very impressed with the result. What's amazing about SWIG is its ability to easily port very complex and rich code. I successfully exported very complex class hierarchies and code relying heavily on pointers and STL to Python and/or Lua. The process consists of writing several header-like configuration files that SWIG ingests and uses to generate bindings, which is often pretty straightforward. Often you can simply import the normal C/C++ header files in those configs and add some glue code / extra hints to help SWIG in cases where it can't figure out what to do with a particular type.

In my experience, the most difficult aspect of binding generation is not writing the glue code, but doing so in a scalable way. Manually writing bindings for any real-world C++ codebase would therefore be extremely tedious (IMHO), so having an automated system that does this work for you is a huge time saver.

Some larger libraries / frameworks have their own wrapping generators, btw. PyQt (which provides extremely good bindings to Qt), for example, has SIP, which is worth looking at (it's open source).


Have you used pybind? I've used SWIG extensively so I'm familiar with that - you talked about SWIG but not how it compares to this new one.

Reading through some docs, it looks like pybind is better suited for a Python codebase that occasionally needs C++ pieces added in, as you have to set up the C++ code to handle the Python side's needs.

With SWIG I was always impressed by exactly your point. You could sometimes just make one big header file in a facade pattern to expose some "start" buttons needed to run a huge C++ application, and nothing else needed to be set up. No going back and forth.

Example usage here was a single heavy-duty image processing application, but using Python as the distributor and work manager. The C++ code was a "worker", and Python and ZeroMQ did everything else. So the only "connection" we needed was a start button and some metadata passed to the C++. Maybe one header file with 100-200 lines of code to make the bridge for SWIG.

(To anyone else reading) does it sound trivially simple to do the same with pybind?


SWIG is amazing. I have usually taken a different approach so that I could get the scaling you talk about.

I point SWIG at the real headers in my projects and use ifdefs to block out things I don't want SWIG to see. There are also a few rules that should be followed when doing this. Things like: don't let SWIG see overloads if the other language doesn't support them; don't return raw pointers that need to be deleted; don't let SWIG see std headers, etc...
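To make that concrete, here's a rough sketch of the ifdef trick (hypothetical header, made-up names): SWIG defines the SWIG preprocessor symbol while parsing, so anything inside #ifndef SWIG is invisible to it but still compiles normally in C++.

    // worker.h -- hypothetical header shared between the C++ build and SWIG
    #pragma once
    #include <cstddef>
    #include <string>

    class Worker {
    public:
        // Exposed to the scripting language: simple, ownership-free types only.
        void start(const std::string &config);

    #ifndef SWIG
        // Hidden from SWIG: an overload and a raw pointer that would be
        // awkward or unsafe to expose.
        void start(const char *config, std::size_t len);
        unsigned char *scratch_buffer();
    #endif
    };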

Then I make SWIG (optionally) run as part of the build. Any warning it emits needs to be resolved somehow, because CI will reject commits causing warnings. It costs just a little more dev time, but prevents eons of debugging time.


I haven't used pybind yet, so I can't compare the two unfortunately. From how I understand the docs, it seems you would have to write the interface definitions in C++ and the compiler would generate the bindings for you, which sounds like a nice approach, but could be a bit more cumbersome, as (again, in my understanding) it forces you to write and adapt all bindings by hand.


> could be a bit more cumbersome, as (again, in my understanding) it forces you to write and adapt all bindings by hand.

That was my interpretation as well.


Swig is great when it works, a nightmare when it doesn't. A coworker in a former job used Swig to enable a C library to be used in Python.

Initially, all was great. But then we discovered a memory leak. He tried for weeks to identify it, but never could.

Cython would be my first choice.


Ownership can be a tricky question, but usually SWIG is pretty explicit about which memory/objects it owns and which it doesn't. Like the other poster I also often thought that my program was leaking memory as the Python interpreter does not immediately release unused memory, so even if everything is fine it can seem like your program is consuming more and more memory (explicitly collecting garbage using `gc.collect()` usually helps here). In general, you should be really clear about who owns a given object (either the C/C++-side of your code or the Python side) and avoid passing ownership back and forth between the two. I already mapped some pretty intricate code using SWIG (including a parser generator that created lots of object instances in C++ and passed them to Lua for manipulation, and where the Lua side could also create objects and link them to the C++-created objects) and so far didn't have any problems with memory leaks due to the binding code.


I have used SWIG a great deal. Any time I have had leaks near it, they were in my code, or they weren't really leaks at all and just quirks of the garbage collector in the non-C language.

I have used SWIG with Ruby, Lua and Java, and the code it generates doesn't leak unless the C/C++ code is bad; it's just too simple.


What's also important is to have a simple, neat API, ideally a C API. If you do have that, then writing wrappers for any language, even manually, without SWIG (which is great, but does add complexity to your project's build process), is simple.


Yes, that's a valid point; manually writing bindings for C++ is very hard though (IMHO), as there are many intricacies that you have to keep in mind, and SWIG is really good at automating all of these away. For C code the story is a bit different, because usually the code is much less complex.


I've used cython, swig, and boost.python. I prefer pybind11 to all of them. It requires the least extra tooling and allows you to seamlessly transition between C++ and Python using the idioms native to each language.


Also, Boost.Python. http://www.boost.org/doc/libs/1_64_0/libs/python/doc/html/in...

A proprietary piece of software we use at work has a Python API for which they are using Boost.Python.

I've written a piece of software to get some data from the API and to analyze the retrieved data. The Python API has a bit of a "C++ smell" to it and doesn't feel completely pythonic, but it's good enough. The bigger problem is that the vendor's idea of documentation is a single PDF generated from the class definitions, and everything is either very sparsely or not at all commented, heh.


In fact, pybind11 was heavily influenced by Boost.Python, to the point where some of the syntax may look quite similar. It has one obvious advantage though, in that it doesn't depend on Boost :)


I used pybind11 for my open source asteroid detection code. It's very slick, especially its ability to seamlessly convert between STL containers and Python lists; for example, a vector of strings converts directly to a Python list of strings. Also the interoperability with numpy: a class containing an STL container of primitive types plus the dimensions can easily be turned into a numpy array. Very neat.


https://github.com/DiracInstitute/kbmod

The project I was referring to :)


ROOT https://root.cern.ch/ provides Python bindings through its PyROOT functionality. It builds the bindings (ROOT calls them "dictionaries") by parsing your C++ using their "cling" interpreter based on LLVM. As far as usability goes, ROOT is a big package, but once it's a given, it's rather easy and non-intrusive to create Python bindings for your C++.


Also as teajunky notes below, cppyy (which was born out of PyROOT) is quite nice: http://cppyy.readthedocs.io/en/latest/


Here's how the pybind11 bindings of a C++ lib I work on look:

https://github.com/OSSIA/libossia/blob/master/OSSIA/ossia-py...

As you can see, it's just a matter of declaring the classes and methods you want to wrap. It was very straightforward and works extremely well.
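For anyone who hasn't seen it, the general shape of such declarations looks roughly like this (a generic, made-up sketch, not taken from libossia):

    #include <pybind11/pybind11.h>
    #include <string>

    namespace py = pybind11;

    // A made-up C++ class we want to expose.
    class Node {
    public:
        explicit Node(std::string name) : name_(std::move(name)) {}
        const std::string &name() const { return name_; }
        void rename(const std::string &name) { name_ = name; }
    private:
        std::string name_;
    };

    PYBIND11_MODULE(mylib, m) {
        // Each class and method is listed once; pybind11 generates the glue.
        py::class_<Node>(m, "Node")
            .def(py::init<std::string>())
            .def("name", &Node::name)
            .def("rename", &Node::rename);
    }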


I've used SIP, which wasn't too painful. The code you write to do the interfacing looks quite like the C++ definitions. I mostly used it because it was for an application which used PyQt. It's probably harder to use, as it's not the most popular option.


At what point should I start thinking about rewriting computationally expensive parts of my Python application in C++ and making bindings?

Does anyone have experience taking one expensive method and replacing it with C/C++? Were the trade offs worth it?


First, try to optimize your Python. It's surprising what you can do with it. E.g.: slicing assignment is crazy fast, removing calls and lookups goes a long way, using built-ins + generators + @lru_cache + slicing wins a lot, etc. Also Python 3.6 is faster, so upgrading is nice.

Then, you try pypy. It may very well run your code 10 times faster with no work on your part.

If you can't or the result is not as good as expected, you can start using EXISTING compiled extensions. numpy, uvloop, ujson, etc.

After that, and only then, should you think about a rewrite. Numba for number-related code, cython for classic hot paths, or Nuitka for the entire app: they can turn Python code into compiled code, and will prevent a rewrite in a low-level tech.

If all of that failed, congratulations, you are part of the 0.00001%.

So rewriting bottlenecks in a compiled language is a good option. C or C++ will do. But remember you can do it in Rust too!


I love Python, but this is one of the pain points. Working through a sequence of domain specific languages when you could have just written it in a fast one to begin with (e.g. Julia or C++).


You usually don't, that's the point.

You may, late in the project, write partially some of your code in a DSL.

While with C++, you start from the beginning with a handicap for the whole project.


The reason so few projects are rewritten in C/C++ is that many people know up front that their project will require that performance and just start there.

If you are building a high-end 3D video game with anything like current fancy graphics, no amount of Python or Ruby is going to make it work. You must start with C or C++ to make effective use of modern hardware (even using the C# Unity provides leaves a lot of performance on the table).

If you are building a system designed to be faster than some other well-defined system, then starting with C or C++ is a good idea. If your Java or C# system could handle 1 million transactions a second, you might be able to complete 1.5 million/s with C++.

Some projects never need that level of performance, and building those projects on C++ can cost you some time. Most webpages are in that vein; how many hits a day does a typical website get? Only a few of the biggest retailers and search engines need that level of performance.

That time cost is also shrinking, but not shrinking as fast as I would like. C++11, 14 and 17 took chunks off development time by polishing some of the sharp corners of the language. Memory leaks are harder to make. Threads and time are easier to work with. Error messages are better than ever.

There is still progress to make. Every C++ project still needs some time dedicated to configuring the build system. There needs to be some plan for checking for memory issues, there needs to be... I think C++ will continue to get more Rust-like and Rust will continue to grow in popularity and performance. Eventually, I think Rust or something like it will be the preferred high performance language.


The secret is to write as much as you can in "high level" (Python) and then just specialize the critical path in C/C++. That gives you the best balance between clarity, developer time and performance.


I'd remove PyPy. It's not 100% compatible and there are antipatterns between them. For some time I've treated CPython and PyPy as two very similar, but different, languages. PyPy is more C-ish, if you want to call it that: what's fast in it is more direct, similar to the C mindset. In Python a good abstraction will usually give you better performance. If you mix them, it's not quite one thing or the other.


This doesn't directly answer your question, but I think a better use case for pybind11 is when you have an existing C++ library with a fairly rich typesystem and you want to expose it to Python.

If you just want to reimplement some parts of your code in C for performance (I'd argue you neither need nor want C++; you shouldn't bother with C++'s object system and should keep using Python's), CFFI might be simpler:

https://cffi.readthedocs.io/en/latest/overview.html#purely-f...


A typical use case is pushing loops in the hot path of your code to the C++ space. A simple loop that does nothing, like `for i in range(int(1e9)): pass`, takes about 20 seconds to execute in Python on my machine, whereas in C++ the overhead would be thousands of times smaller.

Because there's also overhead for transferring objects to/from pybind11 (it has to keep track of object lifetimes, figure out the conversions, etc), it's generally more beneficial to wrap big chunks of logic in C++ rather than every single method.
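As a rough sketch of what wrapping a "big chunk" can look like (made-up names, assuming numpy arrays as the interchange format): instead of calling a tiny C++ function once per element from Python, hand over the whole buffer once and loop on the C++ side.

    #include <pybind11/pybind11.h>
    #include <pybind11/numpy.h>

    namespace py = pybind11;

    // One boundary crossing, then a tight C++ loop over the whole buffer.
    double sum_of_squares(py::array_t<double> values) {
        auto buf = values.unchecked<1>();   // 1-D view, no per-element conversion
        double total = 0.0;
        for (py::ssize_t i = 0; i < buf.shape(0); ++i)
            total += buf(i) * buf(i);
        return total;
    }

    PYBIND11_MODULE(fastmath, m) {
        m.def("sum_of_squares", &sum_of_squares,
              "Sum of squares computed in a single C++ loop");
    }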


I wrote a demuxer for CAN Bus data in C++. Then, I built a Python module using Boost.python so that our Python programmers could simply 'import demux' and write scripts to see and manipulate individual variables, load data into the DB, etc.

The productivity gains of Python scripts that could demux were huge. It made demuxing fast and easily accessible to all of the Python coders. Also, the underlying C++ code could be used in our C# environment too, although I'm not sure if they ever did that. So, we had the ability to use the exact same underlying C++ code in multiple dev environments to ensure consistency.

The downside, IMO, was maintaining the modules. The C++ code itself was short and easy to test (maybe 100 or 200 LOC), but you need good documentation and the ability to build the modules for various versions of Python, with various toolchains, on various systems, etc.

I would do it again if in the same or similar situation.


For the sake of illustration, there's a simple but realistic example at the very end of my talk at this year's EuroPython:

https://speakerdeck.com/aldanor/pybind11-seamless-operabilit...

(reimplementing rolling_stats from pandas, which ends up being faster)


The easiest would be to first give cython a try, as it can provide a nice performance boost with minimal effort in some cases.

If you just need to bind simple number-crunching functions, you may also write a small library exposing them with C linkage, then access them using ctypes.
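A minimal sketch of that approach (made-up names; build flags will vary by platform): compile the file below into a shared library, then load it from Python with ctypes.CDLL and declare the argument/return types via argtypes/restype.

    // fastdot.cpp -- build with e.g.: g++ -O2 -shared -fPIC fastdot.cpp -o libfastdot.so
    #include <cstddef>

    // extern "C" gives the function an unmangled name that ctypes can find.
    extern "C" double dot(const double *a, const double *b, std::size_t n) {
        double total = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            total += a[i] * b[i];
        return total;
    }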

For more advanced usage, you will probably benefit from using a binding framework instead of using the Python C API directly.


Has anyone looked at https://pypi.python.org/pypi/cppyy ? It seems to be even easier than pybind11.


I like how the first basic example invokes undefined behaviour without mention:

  int add(int i, int j) {
      return i + j;
  }


Just like “operator+(int, int)” invokes undefined behaviour without mention: you can write “a + b” and it will break if the addition overflows!

Or std::vector<T>::operator[] invokes undefined behaviour if the argument is invalid (out of bounds).

Or printf() invokes undefined behaviour if an argument is of the wrong type (for example "%s" with a corresponding int).

Or strcpy invokes undefined behaviour if you give it invalid pointers (or non-null terminated strings, etc).

So, yeah, many functions can invoke undefined behaviour when they are called with bad arguments; that doesn’t make them bad functions.


The difference is that this was a function exposed to Python. The expectation of Python programmers is that calling into the module won't explode.

The Python vs C++ joke here is that the minimal a + b example turns out to be unsafe to call with random numbers.


A certain loss of safety is common when you're using FFI.

Take any language's FFI and bind to the C library atoi function. Poof, you have instant undefined behavior for bad string to int conversions.

If you want a safe, robust module that is based on FFI, you have to write some padding in the higher level language that avoids misusing the foreign API in any way.

Directly exposed FFI stuff is not safe and cannot reasonably be safe; it makes no sense to expect that.


I disagree that this is an FFI. There are FFIs for Python; the builtin one is called ctypes, and users know it's an exception to Python's safety. Modules implemented in C++ and presenting functionality to Python programs in a native way are expected to be safe.


Unless I'm mistaken, the topic is Pybind11, which is a foreign binding mechanism. Being criticized is the example in its documentation:

http://pybind11.readthedocs.io/en/master/basics.html

under the heading "Creating bindings for a simple function".

Looks like FFI to me.


I for one am grateful for this comment; I program rarely enough in C/C++ that I wouldn't have assumed signed overflow was undefined. As the example below with "int a; a+5>a" illustrates, that's potentially pretty serious.

I wonder if the "best" fix here would be to use assembler intrinsics to maintain performance, possibly with a fallback to a type cast/promotion to a bigger int for the sum, and some form of defined behaviour for overflow on the Python side? (Reinventing the Python number tower with full automatic promotion seems counter-productive here, as presumably the idea is to work with fast/simple/native (signed) integer math...)

Or just redefine the function to use unsigned ints.


Looks fine to me. Are you supposed to check for integer overflow before doing the addition, or something?


Yep,

    add(INT_MAX, 1)
Is undefined.

If using Ada, a similar kind of code would produce "raised CONSTRAINT_ERROR : program.adb:7 overflow check failed"

To be fair, Ada's behavior is easy to reproduce with a checked integer class like https://accu.org/index.php/journals/324
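For illustration, a minimal sketch of such a checked addition (not the class from the linked article), using the GCC/Clang __builtin_add_overflow builtin; an uncaught C++ exception like this would surface as a Python exception through pybind11:

    #include <stdexcept>

    // Throws instead of overflowing.
    int checked_add(int a, int b) {
        int result;
        if (__builtin_add_overflow(a, b, &result))
            throw std::overflow_error("integer overflow in add()");
        return result;
    }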


So?

There are millions of lines of code, from the best C/C++ programmers, even the kernel, that add two ints in all kinds of programs.

Why is this suddenly a valid concern, especially for a code example, not NASA's missile code or Tesla's self-driving libs?


It is not a sudden concern; it has been ignored for as long as C and C++ have existed, and it has only become worse with code exposed to the world via the Internet.

"Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

-- Tony Hoare, "The 1980 ACM Turing Award Lecture"

Millions of people also drive without a seat belt or helmet; apparently those are a useless extra.


Google for "site:cvedetails.com integer overflow linux"

Yeah, there are places in code like that, and many of those lines have been security vulnerabilities.

For anyone who is surprised by this and is handling untrusted data in C/C++, it's a good idea to read up on the subject.


> Why is this suddenly a valid concern, especially for a code example, not NASA's missile code or Tesla's self-driving libs?

It's been a valid concern for years. I'm not sure where you get "suddenly" from.

And where do you think the people writing NASA or Tesla code tomorrow are learning code from today?


>I'm not sure where you get "suddenly" from.

I'm getting it from the inanity of pointing it out in a sample C++ code in a post announcing a new FFI lib.

It's something people writing 100000-line programs rarely care to check. So it's quite far from the first concern one should have when writing a sample 2-line function to showcase an FFI helper.

In other words, it's as relevant as someone saying 1+1=2 casually and someone pointing out in all pomposity that "actually the representation depends on the base of the number system, in binary it would be 1+1=10".


You read the joke as if it was a Comic Book Guy skit in The Simpsons, but I intended to convey it more like a Sideshow Bob & rake situation. "Ok, let's make a C++ extension" blam "Grml grml grml..."

I guess it depends if your expectations are those of a Python programmer or those of a C++ programmer.


> I guess it depends if your expectations are those of a Python programmer or those of a C++ programmer.

That's a very good point, especially since there are people in this thread looking for using Pybind11 to speed up their existing Python code. They should be aware of the serious risks to correctness if they naively reimplement their Python code in C++. (People who regularly write in C or C++, we would hope, have heard this a thousand times before.)


Probably. Signed integer overflow is undefined behaviour in C and C++.


So is sqrt(-1) but that doesn't mean sqrt() shouldn't exist. Just don't call add() with arguments that will overflow.


So, sanitize your `/add?a=1&b=2` endpoint inputs by making sure that they don't add up to something greater than... wait a sec. Actually, let's just return an unsigned int. No wait! We might get a negative number as input. Hmm, can I bitbang my way out of this? Can I be sure that the signed ints are two's complement?

Man, these languages are seriously !!fun!!


There will always be syntactically valid yet semantically invalid statements in any grammar. Undefined behavior isn't bad, it's an unavoidable consequence of all languages, programming or otherwise. Grammatical constructions that have no meaning will always exist.

If you want to make sure your callers are obeying the interface contract, use assert() to not incur runtime overhead in production and guard against programming error:

    int add(int a, int b) {
        int sum;  // __builtin_sadd_overflow requires a valid pointer for its result
        assert(!__builtin_sadd_overflow(a, b, &sum));
        return a + b;
    }


> Undefined behavior isn't bad, it's a unavoidable consequence of all languages, programming or otherwise.

This is incorrect.

"Undefined behavior" is a specific technical term for a scenario in a program that the compiler is permitted by the specification to assume will not happen, for the purposes of optimization. For instance, this code:

    int silly(int a) {
        if (a + 5 > a) return 0;
        return 1;
    }
can be optimized to just "return 0", because, as a human would read it, obviously a + 5 > a. So the spec says that signed integer overflow cannot occur, allowing the compiler to optimize this the way a human would want it optimized. (Whether the spec actually matches human expectations is a good question, but in general it's right, and forbidding all undefined behavior in C and C++ would cause you to miss out on tons of optimizations that you obviously want.)

"Undefined behavior" does not mean providing invalid input to a function and getting an exception, or a crash, or a particular error result, if the result is well-defined. For instance, this is defined behavior:

    >>> import math
    >>> math.sqrt(-1)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: math domain error
because Python defines the behavior that math.sqrt(-1) raises a ValueError and I can reliably catch that exception. If it were undefined behavior, then I wouldn't have any guarantee of being able to catch the ValueError: the Python interpreter might just choose to return 27 if it's more convenient.


Thanks for the clarifications. Indeed this is what I meant. Undefined behavior in particular isn't unavoidable, though the existence of meaningless statements often is.

Python's philosophy of giving well-defined behavior to meaningless code seems wasteful. If your code is unintentionally executing sqrt(-1) (or unintentionally indexing out of bounds, etc.) then something is wrong with your program. You don't necessarily need the behavior to be defined if there is a bug in your program. In the case that you want something predictable to happen, better to just abort(). Catching ValueError/IndexError in those cases is futile; how can one trust a buggy program to handle itself?

Python using exceptions to signal both runtime errors and programmer errors is a design smell. The former should always be caught and handled, the latter should never be caught and should probably only reasonably abort().


> If your code is unintentionally executing sqrt(-1) (or unintentionally indexing out of bounds, etc) then something is wrong with your program.

This is not Python's design. You might dislike Python's design, sure (many people do!), but it is recommended Python practice to try to do something, catch the well-defined exception, and implement the fallback rather than to check in advance.

https://docs.python.org/3.5/glossary.html#term-eafp

https://blogs.msdn.microsoft.com/pythonengineering/2016/06/2...

> Catching ValueError/IndexError in those cases is futile, how can one trust a buggy program to handle itself?

The program is not buggy.

(Also, it's absolutely possible to design software where bugs are contained to part of the code: for instance, if you hit a bug in your Python code while handling a web request, you should return a 500 to that user and carry on instead of aborting the whole web server. In fact, it is precisely the relative lack of undefined behavior in Python that makes this a good approach: if you were doing this in C, a wild pointer could well corrupt some other server thread!)


I said unintentionally. Intentionally running code that may throw an exception and catching it predictably (in the normal EAFP fashion, which I am well aware of) doesn't apply to my argument.

I'm arguing that an unintentional or unexpected ValueError, IndexError, TypeError, etc. means your program is buggy and there is no sane/practical way to 100% safely recover from it outside of restarting the process.

Your example of the catch-all handler in a modular HTTP server is inappropriate and dangerous. Continuing to run a server with persistent in-process state after an unexpected exception risks corrupting data. Just because Python doesn't have C pointers doesn't mean it can't have inconsistent or incorrect state that can affect other server threads.


> Python's philosophy of giving well-defined behavior to meaningless code seems wasteful.

The code has a well-defined meaning, it's just that the input domain is larger than the one defined in C.


Just about all programs have bugs. There's a huge difference between trying to defend against any possibility of undefined behaviour happening (because it may result in very bad, unpredictable things happening to your computer) and defending against bugs that result in the error getting signaled in a well-defined way.

This is the fundamental difference that makes C/C++ unsafe programming languages.

(Yeah, this specific case of the exported add function is unlikely to be too apocalyptic, but the general guarantees are important).


Rust, Python and other so-called safe languages are unsafe in the same fundamental ways C/C++ are unsafe. Safety is a larger class than "memory safety" which is what you are referring to. As long as the language permits running despite the existence of a bug / programming error, it is unsafe. RAM may not be corrupted, but state can nonetheless be left inconsistent and cause undesirable behavior ("unsafe" behavior).


That's all true, but there's a reason that we define "safe languages" in this way: it's that you can isolate parts of the program from other parts of the program, and know with confidence that a failure in one part of the program will not corrupt data in another part of the program.

Again, my proposal is to wrap only that part of the program which doesn't interact with shared state in a giant try/catch block (and you can know with 100% reliability what that part is in a safe language). The parts about taking a request off the queue, or storing results in a shared data structure, or whatever, should be outside of the try/catch block, because if they break, your shared state is indeed at risk.


Your proposal doesn't scale and doesn't apply in general. Neither Rust nor Python enforce high-level data structure integrity (e.g. transactionally pulling data off one queue and stuffing it into another). A team of 100 programmers will get this wrong. Big try:except:log:continue should be the exception (rimshot); the rule should be to abort().


> Neither Rust nor Python enforce high-level data structure integrity (e.g. transactionally pulling data off one queue and stuffing in another).

Rust absolutely does. For instance, if you unwind while holding a lock, the lock gets poisoned, preventing further access to the locked data. If you're not holding the lock, no safe Rust code can possibly corrupt it, unwinding or no unwinding. So you definitely can put a lock around your two queues, operate them from multiple threads running safe (or correct unsafe) code, and be robust to a panic in one of those threads.

I'm less familiar with Python's concurrent structures, but as I keep saying, this is why you leave the stuff that touches the data structure outside of your try/except - Python does guarantee that a thread that doesn't have explicit access to a shared object can't mess with the shared object by mistake.


Just because Rust has safe guards for lock usage during unwinds doesn't mean it prevents all high level data structure inconsistencies or even just plain old bugs.

It doesn't matter how you choose to handle invalid semantic forms, whether via undefined behavior, error code, exception, or assert: as long as you silently ignore it, your code is unsafe. Rust doesn't have undefined behavior, but that doesn't mean it doesn't suffer from silent errors, e.g. returning NaN from sqrt(-1) or signed integer overflow wrapping.

That's my entire point.

As a programmer your intent is to use APIs in the manner they expect. An invalid use is an invalid program. Garbage in, garbage out. No amount of subsequent error handling is going to help you. Better to abort().


Yes, if you break the contract, better abort. But throwing errors can be part of the contract, even for some programming errors. Failure tolerance, resilience etc.


A runtime exception is fine to handle. Like ENOENT, etc. these are expected and your program can be designed to handle these errors.

A programming error is a sign that your program is not operating the way you expect. No correct program should ever call sqrt(-1) or overflow arithmetic.

Outside of effectively aborting the process, what other way is there to safely handle a programming error (aka bug) when encountered?


Not all programming errors lead to incorrect programs (correctness being defined by the language).

You shouldn't call sqrt(-1) in C, and if you do, you abort. But maybe you are not supposed to call sqrt(20) either, because 20 is a sign your programmer did not understand the application. In that case, the programming error is still a correct program.

In languages like Python, or Lisp, there is a whole set of programming errors, like dividing by zero or calling length on something that is not a sequence, that are designed to not crash the system (nor make it inconsistent), in part because those errors can happen while developing the code and are just a way of providing interactive feedback.

Now, if you ship a product and there is a programming error that manifests itself while in production, you better not try anything stupid, I agree.


You essentially agree with me. Aborting with a stack trace is still an abort. It doesn't need to be catchable.


You are speaking as if it was an All-or-Nothing situation.

> RAM may not be corrupted, but state can nonetheless be left inconsistent

... yes, but at least RAM is not corrupted, that's a little step towards reliability. And if you can manage your state in a transactional way, that's another step.

Do you restart your OS when your program crashes?


I never said it was an all or nothing. My point is that trying to handle unexpected errors due to programming error is not safe and Python allows that.

It doesn't matter if throwing ValueError exceptions on sqrt(-1) is well-defined; continuing to run the program by ignoring the exception is no less harmful than silent integer overflows or buffer overruns.

I don't restart my OS when a process crashes because it has been designed to use hardware mechanisms to clean up dead processes. I absolutely do restart my OS when it kernel panics, it doesn't try:except:log:continue.


Those are hardware mechanisms backed by software that tells the hardware mechanisms what to do. You have a lot of trust in them!

I recently discovered that my Windows machine wouldn't boot because my boot sector had been replaced with some random XML. That's exactly the sort of thing that hardware protection is supposed to prevent - nothing during a normal run of the OS should be writing to the boot sector, at all.

Do you restart your OS when it oopses and kills a process? Linux in fact catches bad memory accesses from kernelspace and attempts to just kill the current process and not the whole kernel.


I trust the code as long as it's behaving correctly, when it encounters a bug I no longer trust it and I shut it down before it can do further harm. A modular HTTP server should do the same.

The OS/process analogy doesn't hold here. The process has completely isolated state from the kernel.


> The OS/process analogy doesn't hold here. The process has completely isolated state from the kernel.

In one direction. That's why I'm asking you if you reboot your machine when your kernel dereferences a wild pointer when executing a system call on behalf of a process - in theory it could have corrupted the kernel itself or any process on the system, but Linux makes a practice of trying to just abort execution of the system call, kill the process, and keep going.


If that's what Linux does, that seems fully intentional and the possible consequences on kernel state are probably well-thought out. Are you claiming what Linux does normally is unsafe and could possibly corrupt kernel state? Like every EFAULT? If that's not your claim, then the analogy doesn't hold and you're entirely missing my point.


That is absolutely my claim, and I am absolutely claiming that it is not well-thought-out - it's literally doing this in response to any fault from kernelspace. If you were unlucky enough that the wild pointer referred to someone else's memory, well, sucks to be you, some random process on your system (or random file on disk, or whatever) has memory corruption and nobody even knew.


> Undefined behavior isn't bad, it's a unavoidable consequence of all languages

Sorry, but it sounds like you are not familiar with the special meaning of the term "undefined behaviour" in the context of C/C++. It means your program may crash or corrupt memory in a way that results in remote code execution. (Or anything else... http://catb.org/jargon/html/N/nasal-demons.html )


It specifically means that the compiler may assume that this case never happens, and do what's convenient to it.

In practice, for the addition function above, the convenient thing for any compiler on any reasonable platform is to just let the hardware handle overflow the way the hardware wants to. Signed overflow is undefined so that C as a language is portable to hardware with different ways of overflowing signed integers, and so there isn't a need for a compiler on one platform to implement special behavior. But it's unlikely to crash, corrupt memory, conjure demons, etc.

Corrupting memory requires a bit more setup: you do a bounds-check on a pointer in a way that hits signed overflow (or you do some inappropriate casts, or something), you get a result, and you use that result to index into an array. If the way that you got that result involved UB, your bounds check may not be valid. Again, this isn't because the compiler particularly desires to access invalid memory, but because it wants to do the cheapest possible thing that is still correct for all defined behavior.
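A sketch of the kind of bounds check being described (hypothetical code, not from any real project): the compiler is allowed to assume the signed addition never overflows and may therefore simplify the check away.

    // BROKEN: if pos + len overflows int, this is undefined behavior, and the
    // compiler may assume the overflow never happens and elide the check.
    bool in_bounds_broken(int pos, int len, int size) {
        return pos + len <= size;     // pos + len can wrap past INT_MAX
    }

    // Safer: rearrange so no signed addition can overflow.
    bool in_bounds(int pos, int len, int size) {
        return pos >= 0 && len >= 0 && pos <= size && len <= size - pos;
    }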


Crashing isn't outside the realm of possibility even with the a+b example: trapping and aborting the program may happen if the hardware has trapping overflow, or if the C implementation is safety focused and inserts explicit overflow checks to avert further unsafe things from happening.


not incur runtime overhead in production and guard against programming error

I'm a huge fan of assertions and use them to document 'programmer screwed up' and to get notified of cases when that happens; I find that a more correct description than 'guarding against'. However, we leave assertions on in release builds as well: tracking down cases of UB only happening in release builds (most often due to someone forgetting to initialize something) is hard enough already, and we found that leaving assertions on can help with that. Only in certain hot paths where the runtime overhead provably matters (and there's really not a lot of those) will we turn them off.


There will always be syntactically valid yet semantically invalid statements in any grammar.

Really? Why? And can you give an example of that in, say, Python?


print(type("foo") + "foo")

Syntactically correct. Will provoke an error at runtime due to invalid semantics.


That's a TypeError though, not undefined behaviour. People are talking about different things.


True.

But I was answering this specific to the question of "syntactically valid yet semantically invalid statements" in python.


    (-1) ** 0.5
Why define something that has no meaning? Just a waste of cycles, CPU and brain.


This is well-defined:

    >>> (-1) ** 0.5
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: negative number cannot be raised to a fractional power
Any conforming implementation of Python 2.7 must raise ValueError. Why define it? So that a programmer can catch ValueError.


As I said in another thread, trusting a buggy program to debug itself is futile. Effectively handling unintentional ValueError (or IndexError, TypeError) is not practically possible. For one, there's a good chance the program's persistent shared state is in an inconsistent state when those unintentional exceptions happen. Better to just abort().


They can choose to do just that. But they're also afforded a little more flexibility with this approach. A developer could use this to serve up a stack trace rather than just crashing the server, for example.


You can always restart the process after an abort(). Continuing to run a server with persistent in-process state after an unexpected exception is dangerous and you risk corrupting data.


Many servers do not have persistent in-process state, or have in-process state that is robust against bugs in other parts of the program. In particular, in most languages where you need special syntax or data types to access shared state (so, definitely not C or C++, but Python should count), you can isolate all the code that doesn't use this syntax or these types of objects inside a giant try/catch, and know that any misbehavior inside that block of code cannot possibly have affected the shared state.


This is not scalable. A production server might import code from dozens, if not hundreds, of modules, all of which have varying degrees of code quality and are written by different authors from different organizations. It's not practical to trust that all the code is written in a way that is exception-safe. Try:catch:log:continue is a hazard and a signal that the person who wrote the code hasn't thought very deeply about correctness.


I am assuming this is python. This fails in python2 but works (returns complex number) in python3.


Sanitize your endpoint to ensure that both a and b are within INT_MIN/2 to INT_MAX/2.

After all, you didn't pick INT_MIN; the entire point of a 32-bit integer or a 64-bit integer is that it's more than enough for reasonable purposes. It's not based on a physical constant or anything, it's just convenient and large. If you need a specific number of bits for your purposes that's more than that, use a u128 extension or a bigint type.


If you just specify that a < INT_MAX/2 and b < INT_MAX/2, you are fine.

If not (and you are providing a general purpose addition), then you need a big number library.


You can also overflow by adding two large enough negative numbers.

    /tmp/file.c:2:[kernel] warning: signed overflow. assert -2147483648 ≤ i+j;
    /tmp/file.c:2:[kernel] warning: signed overflow. assert i+j ≤ 2147483647;
(tis-analyzer)


It must be nice to live in a world where you don't need to write fast code.


Safe semantics don't preclude fast code. See e.g. Rust, which arguably enables you to write faster code than C++, because its guarantees let you have confidence in the correctness of complex programs involving fine-grained shared-memory parallelism mutating common data.


So, like what's the case for 99% of programmers?


    CL-USER> (sqrt -1)
    #C(0.0 1.0)


I like how contrived complaints can get...


What about opening a ticket / doing a pull request with a better trivial example that does not "teach" bad habits?


A simple fix is using unsigned integers, as unsigned overflow is well-defined.

(There isn't a way in C/C++ to request an exception or fault on integer overflow, which is what you'd want here, since you're running inside Python. A better conceptual fix is just doing the addition in Python, which is a more robust language than C/C++ for signed integer addition, but that kind of defeats the point of the example.)
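For reference, a minimal sketch of that unsigned variant (same shape as the docs example, just with unsigned types):

    unsigned add(unsigned i, unsigned j) {
        return i + j;   // wraps modulo 2^N, which is well-defined for unsigned types
    }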


> There isn't a way in C/C++ to request an exception or fault on integer overflow

This is incorrect.

See compiler options: -ftrapv, -fsanitize=undefined. See compiler builtins: __builtin_add_overflow and friends.


But no way in standard C/C++.


GCC/Clang/VC++ represent 99% of the compiler market. In the exceedingly rare chance your vendor doesn't implement those extensions, you're probably a big enough customer of theirs that you can request it.


I mean, yes, I'm sympathetic to the argument that most people should use a variant of C that isn't actually C and is much better about undefined behavior (not just integer overflow but other things too). https://blog.regehr.org/archives/1180 has some thoughts along these lines.

I'm not sure that pybind11 wants to have its docs using a nonstandard variant of C, though, for lots of reasons, including that I expect the major use case of pybind11 is binding to existing C or C++ projects which are generally written in standard C or C++. (But not always! Postgres, for instance, is written in a variant of C with -fwrapv - not even -ftrapv, but a definition of signed overflow - instead of standard C.)


> Postgres, for instance, is written in a variant of C with -fwrapv - not even -ftrapv, but a definition of signed overflow - instead of standard C.

To my knowledge that's solely because historically there have been signed integer overflow checks that relied on signed overflow semantics, not because overflows are otherwise relied upon. At some point we didn't have a nice, portable and fast way to do overflow checks, and so -fwrapv was added... I'd like to revisit that.



