What's new in Cython 0.29?

I'm happy to announce the release of Cython 0.29. In case you didn't hear about Cython before, it's the most widely used statically optimising Python compiler out there. It translates Python (2/3) code to C, and makes it as easy as Python itself to tune the code all the way down into fast native code. This time, we added several new features that help with speeding up and parallelising regular Python code to escape from the limitations of the GIL.

So, what exactly makes this another great Cython release?

The contributors

First of all, our contributors. A substantial part of the changes in this release was written by users and non-core developers and contributed via pull requests. A big "Thank You!" to all of our contributors and bug reporters! You really made this a great release.

Above all, Gabriel de Marmiesse has invested a remarkable amount of time into restructuring and rewriting the documentation. It now has a lot less historic smell, and much better, tested (!) code examples. And he obviously found more than one problematic piece of code in the docs that we were able to fix along the way.

Cython 3.0

And this will be the last 0.x release of Cython. The Cython compiler has been in production-critical use for years, all over the world, and there is really no good reason for it to have a 0.x version scheme. In fact, the 0.x release series can easily be counted as 1.x, which is one of the reasons why we decided to skip the 1.x series altogether. And, while we're at it, why not the 2.x prefix as well. Shift the decimals of 0.29 a bit to the left, and the next release will be 3.0. The main reason is that we want 3.0 to do two things: a) switch the default language compatibility level from Python 2.x to 3.x and b) break with some backwards compatibility choices that get more in the way than they help. We have started collecting a list of things to rethink and change in our bug tracker.

Flipping the language level switch is a tiny code change for us, but a larger change for our users and the millions of source lines in their code bases. In order to avoid any resemblance to the years of effort that went into the Py2/3 switch, we took measures that allow users to choose how much effort they want to invest, from "almost none at all" to "as much as they want".

Cython has a long tradition of helping users adapt their code for both Python 2 and Python 3, ever since we ported it to Python 3.0. We used to joke back in 2008 that Cython was the easiest way to migrate an existing Py2 code base to Python 3, and it was never really meant as a joke. Many annoying details are handled internally in the compiler, such as the range versus xrange renaming, or dict iteration. Cython supported dict and set comprehensions before they were backported to Py2.7, and has long provided three string types (or four, if you want) instead of two. It distinguishes between bytes, str and unicode (and it knows basestring), where str is the type that changes between Py2's byte string and Py3's Unicode string. This distinction helps users to be explicit, even at the C level, about what kind of character or byte sequence they want, and how it should behave across the Py2/3 boundary.

For Cython 3.0, we plan to switch only the default language level, which users can always change via a command line option or the compiler directive language_level. To be clear, Cython will continue to support the existing language semantics. They will just no longer be the default, and users have to select them explicitly by setting language_level=2. That's the "almost none at all" case. In order to prepare the switch to Python 3 language semantics by default, Cython now issues a warning when no language level is explicitly requested, and thus pushes users into being explicit about what semantics their code requires. We obviously hope that many of our users will take the opportunity and migrate their code to the nicer Python 3 semantics, which Cython has long supported as language_level=3.

But we added something even better, so let's see what the current release has to offer.

A new language-level

Cython 0.29 supports a new setting for the language_level directive, language_level=3str, which will become the new default language level in Cython 3.0. We are adding it already now, so that users can opt in and benefit from it right away, and prepare their code for the coming change. It's an "in between" kind of setting, which enables all the nice Python 3 goodies that are not syntax compatible with Python 2.x, but without requiring all unprefixed string literals to become Unicode strings when the compiled code runs in Python 2.x. This was one of the biggest problems in the general Py3 migration. And in the context of Cython's integration with C code, it got in the way of our users even a bit more than it would in Python code. Our goals are to make it easy for new users who come from Python 3 to compile their code with Cython, and to allow existing (Cython/Python 2) code bases to make use of the benefits before they can make a 100% switch.
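
As a sketch of how this looks in practice, the language level can be selected per module with a directive comment at the top of the source file (the module content below is hypothetical):

```python
# cython: language_level=3str

# ... rest of the module: compiled with Python 3 semantics,
# but unprefixed string literals keep the str type of the
# Python version the compiled module runs under ...
```

The same directive can also be set globally, e.g. via `cython -X language_level=3str` on the command line or through the `compiler_directives` argument of `cythonize()` in setup.py.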

Module initialisation like Python does

One great change under the hood is that we managed to enable PEP-489 support (again). It was already mostly available in Cython 0.27, but led to problems that made us back-pedal at the time. Now we believe that we found a way to bring the saner module initialisation of Python 3.5 to our users, without risking the previous breakage. Most importantly, features like subinterpreter support or module reloading are detected and disabled, so that Cython compiled extension modules cannot be mistreated in such environments. Actual support for these little-used features will probably come at some point, but will certainly require an explicit opt-in from users, since it is expected to reduce the overall performance of Python operations quite visibly. The more important features, like a correct __file__ path being available at import time, and, in fact, extension modules looking and behaving exactly like Python modules during the import, are much more helpful to most users.

Compiling plain Python code with OpenMP and memory views

Another PEP is worth mentioning next, actually two PEPs: 484 and 526, commonly known as type annotations. Cython has supported type declarations in Python code for years, switched to PEP-484/526 compatible typing with release 0.27 (more than one year ago), and has now gained several new features that make static typing in Python code much more widely usable. Users can now declare their statically typed Python functions as not requiring the GIL, and thus call them from parallel OpenMP loops and parallel Python threads, all without leaving Python code compatibility. Even exceptions can now be raised directly from thread-parallel code, without first having to acquire the GIL explicitly.

And memory views are available in Python typing notation:

import cython
from cython.parallel import prange

@cython.cfunc
@cython.nogil
def compute_one_row(row: cython.double[:]) -> cython.int:
    ...

def process_2d_array(data: cython.double[:,:]):
    i: cython.Py_ssize_t

    for i in prange(data.shape[0], num_threads=16, nogil=True):
        compute_one_row(data[i])

This code will work with NumPy arrays when run in Python, and with any data provider that supports the Python buffer interface when compiled with Cython. As a compiled extension module, it will execute at full C speed, in parallel, with 16 OpenMP threads, as requested by the prange() loop. As a normal Python module, it will support all the great Python tools for code analysis, test coverage reporting, debugging, and what not, although Cython also has direct support for a couple of those by now: profiling (with cProfile) and coverage analysis (with coverage.py) have been around for several releases, for example. But debugging a Python module in the interpreter is obviously still much easier than debugging a native extension module, with all the edit-compile-run cycle overhead.
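
As a minimal sketch of the compilation step (assuming the code above is saved as compute.py; the file name is hypothetical, and Cython must be installed), a conventional setup.py along these lines would compile it:

```python
# setup.py - minimal sketch for compiling the pure Python module above.
# The module name "compute" is a placeholder for this example.
from setuptools import setup
from Cython.Build import cythonize

setup(
    ext_modules=cythonize(
        "compute.py",
        compiler_directives={"language_level": "3str"},
    ),
)
```

Note that actually running the prange() loop with OpenMP additionally requires passing the C compiler's OpenMP flags to the extension (e.g. -fopenmp for gcc, via extra_compile_args and extra_link_args).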

Cython's support for compiling pure Python code combines the best of both worlds: native C speed, and easy Python code development, with full support for all the great Python 3.7 language features, even if you still need your (compiled) code to run in Python 2.7.

More speed

Several improvements make use of the dict versioning that was introduced in CPython 3.6. It allows module global names to be looked up much faster, close to the speed of static C globals. Also, the attribute lookup for calls to cpdef methods (C methods with Python wrappers) can benefit a lot: it can become up to 4x faster.

Constant tuples and slices are now deduplicated and only created once at module init time. Especially with common slices like [1:] or [::-1], this can reduce the amount of one-time initialisation code in the generated extension modules.
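
To see at the Python level what these constants are: a subscription like seq[::-1] uses a slice object under the hood, and that object can be built once and reused, which is what the generated module code now does. A small plain-Python illustration:

```python
# seq[::-1] is just seq[slice(None, None, -1)] under the hood, so the
# slice object itself is a constant that can be created once at module
# init time and reused for every execution of the subscription.
REVERSE = slice(None, None, -1)   # the constant behind [::-1]
TAIL = slice(1, None)             # the constant behind [1:]

print("hello"[REVERSE])  # olleh
print("hello"[TAIL])     # ello
```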

The changelog lists several other optimisations and improvements.

Many important bug fixes

We've had a hard time following a change in CPython 3.7 that "broke the world", as Mark Shannon put it. It was meant as a mostly internal change on their side that improved the handling of exceptions inside of generators, but it turned out to break all extension modules out there that were built with Cython, and then some. A minimal fix was already released in Cython 0.28.4, but 0.29 brings complete support for the new generator exception stack in CPython 3.7, which allows exceptions raised or handled by Cython implemented generators to interact correctly with CPython's own generators. Upgrading is therefore warmly recommended for better CPython 3.7 support. As usual with Cython, translating your existing code with the new release will make it benefit from the new features, improvements and fixes.

Stackless Python has not been a big focus for Cython development so far, but the developers noticed a problem with Cython modules earlier this year. Normally, they try to keep Stackless binary compatible with CPython, but there are corner cases where this is not possible (specifically frames), and one of these broke the compatibility with Cython compiled modules. Cython 0.29 now contains a fix that makes it play nicely with Stackless 3.x.

A funny bug worth noting is a mysteriously disappearing string multiplier in earlier Cython versions. A constant expression like "x" * 5 correctly results in the string "xxxxx", but "x" * 5 + "y" was wrongly folded into "xy" instead of "xxxxxy". Apparently not a common code construct, since no user ever complained about it.

Long-time users of Cython and NumPy will be happy to hear that Cython's memory views are now API-1.7 clean, which means that they can get rid of the annoying "Using deprecated NumPy API" warnings in the C compiler output. Simply add the C macro definition ("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION") to the macro setup of your distutils extensions in setup.py to make them disappear. Note that this does not apply to the old low-level ndarray[...] syntax, which exposes several deprecated internals of the NumPy C-API that are not easy to replace. Memory views are a fast high-level abstraction that does not rely specifically on NumPy and therefore does not suffer from these API constraints.
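
In setup.py terms, the macro goes into the define_macros list of the extension; a minimal sketch (extension and file names here are hypothetical):

```python
# Sketch: silencing the deprecated NumPy API warning for a memory view
# based extension. The names "fastmod"/"fastmod.pyx" are placeholders.
from setuptools import Extension

ext = Extension(
    "fastmod",
    sources=["fastmod.pyx"],
    define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
)
print(ext.define_macros)
```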

Less compilation :)

And finally, as if to make a point that static compilation is a great tool but not always a good idea, we decided to reduce the number of modules that Cython compiles of itself from 13 down to 8, thus keeping 5 more modules normally interpreted by Python. This makes compiler runs about 5-7% slower, but reduces the packaged size and the installed binary size by about half, thus reducing download times in CI builds and virtualenv creations. Python is a very efficient language when it comes to functionality per line of code, and its byte code is similarly high-level and efficient. Compiled native code is a lot larger and more verbose in comparison, and this can easily make a difference of megabytes of shared libraries versus kilobytes of Python modules.

We therefore repeat our recommendation to focus Cython's usage on the major pain points in your application, on the critical code sections that a profiler has pointed you at. The ability to compile those, and to tune them at the C level, is what makes Cython such a great and versatile tool.

The Facebook effect, or: why election results surprise us again these days

Written in July 2016.

Most people have heard of the "small-world phenomenon". It explains various everyday effects, among them the surprise of meeting someone, in however improbable a situation, with whom I share some connection without our ever having met before. Be it a mutual acquaintance, a similar background, or a shared event in the past. Technically speaking, it describes the property of a network (or graph) in which every node is connected to every other node via an extremely short path. This often holds for human acquaintance relations and social networks, where the distance between two arbitrarily chosen people is usually less than seven hops via mutual acquaintances.

In what we now call social networks on the Internet in particular, this property is positively celebrated. Since it is so easy to connect with any people (or at least any accounts) anywhere in the world, these networks form very pronounced small-world graphs. Here, every other participant really is only a few network clicks away. Total global connectedness. Humanity is finally growing together. Or is it?

What is often forgotten is that such graphs have a second aspect. One is the minimal distance to every node through the entire graph. The other, however, is the set of direct connections of each individual node. Especially in the social Internet networks, it is so easy to create new connections and thereby win new "friends" that I can immediately connect with everyone who interests me in some way or whom I encounter in some positive sense. Conversely, this means that the second level, that of the "friends" of my "friends", is actually no longer relevant at all. Not to mention the third level and all those beyond it. After all, I can also connect directly with all "friends" of my "friends" who interest me. Which is exactly what I do. That makes them my direct "friends".

But whom do I accept into the circle of my own "friends"? Whom do I "follow" in these social networks? Naturally, people (or accounts) who think like me, with whom I am in harmony, whom I like. But do I also connect with people who think differently than I do? Who do not share my political views? Who would contradict me if I only talked to them? Or whose manner of expression does not match my social class? Proles? Commies? Right-wing radicals? Child beaters? EU opponents? Warm-shower propagandists?

Why should I put myself through that? If the "friends" of my "friends" do such things, then let them. But that will not make them my "friends". Maybe I will occasionally read a comment from these people and get upset about it, but that is quite enough. I certainly do not need it every day.

So it becomes apparent rather quickly that these social Internet networks amplify an effect that seeks like among like and excludes the other. A modern form of ghettoisation. However small the world may be, however short the shortest path to every person on it: the one direct connection to those who share my opinion is all I need.

In reality, humanity is not growing together. The dividing lines are merely shifting. Away from place of residence and appearance, towards behaviour, level of education and social differences. And the division is deepening. Why should I even talk to "friends" of my "friends" whom I would never make my own "friends"?

There are many people, young people in particular, who have moved their media consumption largely or even completely away from the classic media of newspapers and television into social Internet networks. "My friends will keep me up to date" is often the underlying tenor. If something happens, you will hear about it anyway. Certainly. But you also run the risk of blanking out the "non-friends" and their opinions. The others. Those with whom I would never connect directly. Because they do not share my opinion. Because they hold an opinion that I reject. That I do not want to hear. That I blank out. That does not match the opinion of my "friends". This is how not only the social networks around an Anders Breivik or those of Pegida work. Self-selecting affiliation and ghetto formation is a fundamental property of social networks on the Internet, whatever the respective selection criteria may be.

A decisive fact that we saw in the vote on Britain's exit from the EU on 23 June 2016 was the low turnout among young voters. Only a third of those under 25 went to vote at all. And only half of the 25- to 35-year-olds. Even though precisely these age groups benefit most from EU membership through exchange programmes, freedom of travel and the open labour market, and, compared to the highly engaged over-65s, would have continued to benefit from it for a very, very long time. For most of their entire lives.

There is a good explanation for this: false security. Many people in the young, well-educated age groups are likely to be well connected with each other on the Internet, but to have few direct contacts with considerably older or socially disadvantaged people. A ghettoisation by age group and social background, in other words. In such ghettos, the impression can quickly arise that there is no need to get involved, because everyone is of the same opinion anyway. The majorities seem to be settled in advance, and since they come out in my favour, I, as a ghetto dweller, feel comfortably and cosily wrapped up in them and lose any pressure to stand up for anything myself. My majority will decide correctly without me; as for me, the weather today is too bad, or attending a concert too important, to go out and vote.

There are further fine examples of this effect. In the primaries for the 2004 US presidential election, the candidate Howard Dean relied almost exclusively on Internet campaigning and organised his campaign through it. He thereby achieved high visibility among his supporters, among journalists, and among other users of these media. Only in the first primaries did it become apparent that this high visibility in Internet campaigns, and the high poll numbers achieved there, did not translate into real election results. A clear case of self-deception within one's own ghetto.

Meanwhile, several studies show that people who are active in social Internet networks engage considerably less in their real-world environment. That they easily mistake clicking a "like" button for social engagement. Why take part in a demonstration, hold exhausting discussions with people who think differently, or help affected people with donations and deeds, when I can also "show" my opinion by clicking a button or quickly signing a petition? "Am" I really Charlie, Paris, Brussels, Istanbul, Aleppo or Baghdad just because I clicked a button on the Internet? Or because I "made a statement" with a hashtag? And for whom did I perform that click? For the people affected? Really? Was it not rather for my "friends", who see my "statement", whom I thereby cosily wrap up, and to whom my statement lets me feel I belong? What good does my click on the "like" button of Doctors Without Borders do for a wounded person in Baghdad?

We must understand and accept again that pluralism and diversity of opinion are subject to a symmetry, and are not just something that concerns those who hold a different opinion. Wherever there are people of a different opinion, I too am a person with a different opinion. Not even when one of these opinions contradicts fundamental values such as human rights can I be certain that it is propagated only by "idiots" and "outsiders". These categories, too, are merely subjective labels.

Defending the human right to freedom of opinion means, first of all, actually noticing other opinions and accepting their existence. Second, showing tolerance towards the people who hold them. And only in third place comes my right to openly contradict the opinions that do not match my own, especially when they contradict my convictions. But alongside it also stands the duty to object when these opinions are directed against people and minorities. For every articulated opinion must also be measured against the first article of the German constitution, the Grundgesetz: human dignity is inviolable.

Freedom of opinion naturally leaves every individual free to ignore other people and their opinions. But it is not an invitation to snuggle into ghettos and stop talking to one another. It must never lead to entire population groups ignoring each other. A democracy lives only in discussion and exchange. Breaking off the dialogue leads straight into ghettoisation, tunnel vision and radicalisation. And to unexpected election results.

Cython, pybind11, cffi – which tool should you choose?

In and after the conference talks that I give about Cython, I am often asked how it compares to other tools like pybind11 and cffi. There are others, but these are definitely the three that are widely used and "modern" in the sense that they provide an efficient user experience for today's real-world problems. And as with all tools from the same problem space, there are overlaps and differences. First of all, pybind11 and cffi are pure wrapping tools, whereas Cython is a Python compiler and a complete programming language that is used to implement actual functionality and not just bind to it. So let's focus on the area where the three tools compete: extending Python with native code and libraries.

Using native code from the Python runtime (specifically CPython) has been at the heart of the Python ecosystem since the very early days. Python is a great language for all sorts of programming needs, but it would not have the success and would not be where it is today without its great ecosystem that is heavily based on fast, low-level, native code. And the world of computing is full of such code, often decades old, heavily tuned, well tested and production proven. Looking at indicators like the TIOBE Index suggests that low-level languages like C and C++ have been gaining importance again in recent years, decades after their creation.

Today, no-one would attempt to design a (serious/practical) programming language anymore that does not come out of the box with a complete and easy to use way to access all that native code. This ability is often referred to as an FFI, a foreign function interface. Rust is an excellent example of a modern language that was designed with that ability in mind. The FFI in LuaJIT is a great design of a fast and easy to use FFI for the Lua language. Even Java and its JVM, which are certainly not known for their ease of code reuse, have provided the JNI (Java Native Interface) from the early days. CPython, being written in C, has made it very easy to interface with C code right from the start, and above all others the whole scientific computing and big data community has made great use of that over the past 25 years.

Over time, many tools have aimed to simplify the wrapping of external code. The venerable SWIG with its long list of supported target languages is clearly worth mentioning here. Partially a successor to SWIG (and sip), shiboken is a C++ bindings generator used by the PySide project to auto-create wrapper code for the large Qt C++ API.

A general shortcoming of all wrapper generators is that many users eventually reach the limits of their capabilities, be it in terms of performance, feature support, language integration to one side or the other, or whatever. From that point on, users start fighting the tool in order to make it support their use case at hand, and it is not unheard of that projects start over from scratch with a different tool. Therefore, most projects are better off starting directly with a manually written wrapper, at least when the part of the native API that they need to wrap is not prohibitively vast.

The lxml XML toolkit is an excellent example of that. It wraps libxml2 and libxslt with their huge low-level C-APIs. But if the project had used a wrapper generator, mapping this C-API to Python would have made the language integration of the Python-level API close to unusable. In fact, the whole project started because generated Python bindings for both already existed that were "like the thrilling embrace of an exotic stranger" (Mark Pilgrim). And beautifying the API at the Python level by adding another Python wrapper layer would have countered the advantages of a generated wrapper and also severely limited its performance. Despite the sheer vastness of the C-API that it wraps, the decision for manual wrapping and against a wrapper generator was the foundation of a very fast and highly pythonic tool.

Nowadays, three modern tools are widely used in the Python community that support manual wrapping: Cython, cffi and pybind11. These three tools serve three different sides of the need to extend (C)Python with native code.

  • Cython is Python with native C/C++ data types.

    Cython is a static Python compiler. For people coming from a Python background, it is much easier to express their coding needs in Python and then optimise and tune the code, than to rewrite it in a foreign language. Cython allows them to do that by automatically translating their Python code to C, which often avoids the need for an implementation in a low-level language.

    Cython uses C type declarations to mix C/C++ operations into Python code freely, be it the usage of C/C++ data types and containers, or of C/C++ functions and objects defined in external libraries. There is a very concise Cython syntax that uses special additional keywords (cdef) outside of Python syntax, as well as ways to declare C types in pure Python syntax. The latter allows writing type-annotated Python code that gets optimised into fast C code when compiled by Cython, but that remains entirely pure Python code that can be run, analysed and debugged with the usual Python tools.

    When it comes to wrapping native libraries, Cython has strong support for designing a Python API for them. Being Python, it really keeps the developer focussed on the usage from the Python side and on solving the problem at hand, and takes care of most of the boilerplate code through automatic type conversions and low-level code generation. Its usage is essentially writing C code without having to write C code, but remaining in the wonderful world of the Python language.

  • pybind11 is modern C++ with Python integration.

    pybind11 is the exact opposite of Cython. Coming from C++, and targeting C++ developers, it provides a C++ API that wraps native functions and classes into Python representations. For that, it makes good use of the compile-time introspection features that were added to C++11 (hence the name). Thus, it keeps the user focussed on the C++ side of things and takes care of the boilerplate code for mapping it to a Python API.

    For everyone who is comfortable with programming in C++ and wants to make direct use of all C++ features, pybind11 is the easiest way to make the C++ code available to Python.

  • CFFI is Python with a dynamic runtime interface to native code.

    cffi then is the dynamic way to load and bind to external shared libraries from regular Python code. It is similar to the ctypes module in the Python standard library, but generally faster and easier to use. Also, it has very good support for the PyPy Python runtime, still better than what Cython and pybind11 can offer. However, the runtime overhead prevents it from coming anywhere close in performance to the statically compiled code that Cython and pybind11 generate for CPython. And the dependency on a well-defined ABI (binary interface) means that C++ support is mostly lacking.

    As long as there is a clear API-to-ABI mapping of a shared library, cffi can directly load and use the library file at runtime, given a header file description of the API. In the more complex cases (e.g. when macros are involved), cffi uses a C compiler to generate a native stub wrapper from the description and uses that to communicate with the library. That raises the runtime dependency bar quite a bit compared to ctypes (and both Cython and pybind11 only need a C compiler at build time, not at runtime), but on the other hand also enables wrapping library APIs that are difficult to use with ctypes.
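
The runtime-binding idea can be sketched with ctypes, the stdlib cousin of cffi mentioned above: load a shared library dynamically and declare the C signature of a function by hand. This assumes a Unix-like system where the C math library can be located.

```python
# Sketch: dynamically load the C math library at runtime and call its
# sqrt() function. Assumes a Unix-like system where find_library("m")
# can locate libm; cffi offers the same idea with a friendlier API.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]  # declare the C signature by hand
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(4.0))  # 2.0
```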

This list shows the clear tradeoffs of the three tools. If performance is not important, if dynamic runtime access to libraries is an advantage, and if users prefer writing their wrapping code in Python, then cffi (or even ctypes) will do the job, nicely and easily. Otherwise, users with a strong C++ background will probably prefer pybind11 since it allows them to write functionality and wrapper code in C++ without switching between languages. For users with a Python background (or at least not with a preference for C/C++), Cython will be very easy to learn and use since the code remains Python, but gains the ability to do efficient native C/C++ operations at any point.

What CPython could use Cython for

There has been a recent discussion about using Cython for CPython development. I think this is a great opportunity for the CPython project to make more efficient use of its scarcest resource: developer time of its spare time contributors and maintainers.

The entry level for new contributors to the CPython project is often perceived to be quite high. While many tasks are actually beginner friendly, such as helping with the documentation or adding features to the Python modules in the stdlib, such important tasks as fixing bugs in the core interpreter, working on data structures, optimising language constructs, or improving the test coverage of the C-API require a solid understanding of C and the CPython C-API.

Since a large part of CPython is implemented in C, and since it exposes a large C-API to extensions and applications, C level testing is key to providing a correct and reliable native API. There were a couple of cases in the past years where new CPython releases actually broke certain parts of the C-API, and it was not noticed until people complained that their applications broke when trying out the new release. This is because the test coverage of the C-API is much lower than the well tested Python level and standard library tests of the runtime. And the main reason for this is that it is much more difficult to write tests in C than in Python, so people have a high incentive to get around it if they can. Since the C-API is used internally inside of the runtime, it is often assumed to be implicitly tested by the Python tests anyway, which raises the bar for an explicit C test even further. But this implicit coverage is not always given, and it also does not reduce the need for regression tests. Cython could help here by making it easier to write C level tests that integrate nicely with the existing Python unit test framework that the CPython project uses.

Basically, writing a C level test in Cython means writing a Python unittest function and then doing an explicit C operation in it that represents the actual test code. Here is an example for testing the PyList_Append C-API function:

from cpython.object cimport PyObject
from cpython.list cimport PyList_Append

def test_PyList_Append_on_empty_list():
    # setup code
    l = []
    assert len(l) == 0
    value = "abc"
    pyobj_value = <PyObject*> value
    refcount_before = pyobj_value.ob_refcnt

    # conservative test call, translates to the expected C code,
    # although with automatic exception propagation if it returns -1:
    errcode = PyList_Append(l, value)

    # validation
    assert errcode == 0
    assert len(l) == 1
    assert l[0] is value
    assert pyobj_value.ob_refcnt == refcount_before + 1

In the Cython project itself, what we actually do is write doctests. The functions and classes in a test module are compiled with Cython, and the doctests are then executed in Python and call the Cython implementations. This provides a very nice and easy way to compare the results of Cython operations with those of Python, and it also trivially supports data-driven tests by calling a function multiple times from a doctest, for example:

from cpython.number cimport PyNumber_Add

def test_PyNumber_Add(a, b):
    """
    >>> test_PyNumber_Add('abc', 'def')
    'abcdef'
    >>> test_PyNumber_Add('abc', '')
    'abc'
    >>> test_PyNumber_Add(2, 5)
    7
    >>> -2 + 5
    3
    >>> test_PyNumber_Add(-2, 5)
    3
    """
    # The following is equivalent to writing "return a + b" in Python or Cython.
    return PyNumber_Add(a, b)

This could even trivially be combined with hypothesis and other data-driven testing tools.

But Cython's use cases are not limited to testing. Maintenance and feature development would probably benefit even more from a reduced entry level.

Many language optimisations are applied in the AST optimiser these days, and that is implemented in C. However, these tree operations can be fairly complex and are thus non-trivial to implement. Doing that in Python rather than C would be much easier to write and maintain, but since this code is a part of the Python compilation process, there's a chicken-and-egg problem here in addition to the performance problem. Cython could solve both problems and allow for more far-reaching optimisations by keeping the necessary transformation code readable.

Performance is also an issue in other parts of CPython, namely the standard library. Several stdlib modules are compute intensive. Many of them have two implementations: one in Python and a faster one in C, a so-called accelerator module. This means that adding a feature to these modules requires duplicate effort, proficiency in both Python and C, and a solid understanding of the C-API, reference counting, garbage collection, and so on. On the other hand, many modules that could certainly benefit from native performance lack such an accelerator, e.g. difflib, textwrap, fractions, statistics, argparse, email, urllib.parse and many, many more. The asyncio module is becoming more and more important these days, but its native accelerator only covers a very small part of its large functionality, and it also does not expose a native API that performance hungry async tools could hook into. And even though the native accelerator of the ElementTree module is an almost complete replacement, the somewhat complex serialisation code is still implemented completely in Python, which shows in comparison to the native serialisation in lxml.

Compiling these modules with Cython would speed them up, probably quite visibly. For this use case, it is possible to keep the code entirely in Python, and just add enough type declarations to make it fast when compiled. The typing syntax that PEP-484 and PEP-526 added to Python 3.6 makes this really easy and straightforward. A manually written accelerator module could thus be avoided, and with it a lot of duplicated functionality and maintenance overhead.
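To make this concrete, here is a minimal sketch of what such typed, stdlib-style code could look like. The function and its types are invented for illustration; the point is that the PEP-526 annotations are plain Python, so the module keeps working unchanged in CPython, while Cython can turn the annotated variables into native C variables when compiling:

```python
# Hypothetical sketch: pure Python with PEP-484/526 type annotations.
# Runs unchanged in CPython; when compiled with Cython, the annotated
# ints and floats can become native C variables for speed.
def mean_of_squares(data: list) -> float:
    total: float = 0.0
    count: int = 0
    for value in data:
        x: float = value
        total += x * x
        count += 1
    return total / count if count else 0.0
```

Whether interpreted or compiled, the function behaves identically, which is exactly what would let a typed Python implementation replace a hand-written C accelerator.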

Feature development would also be substantially simplified, especially for new contributors. Since Cython compiles Python code, it would allow people to contribute a Python implementation of a new feature that compiles down to C. And we all know that developing new functionality is much easier in Python than in C. The remaining task is then only to optimise it and not to rewrite it in a different language.

My feeling is that replacing some parts of the CPython C development with Cython has the potential to bring a visible boost for the contributions to the CPython project.

Update 2018-09-12: Jeroen Demeyer reminded me that I should also mention the ease of wrapping external native libraries. While this is not a big priority for the standard library anymore, it is certainly true that modules like sqlite (which wraps sqlite3), ssl (OpenSSL), expat or even the io module (which wraps system I/O capabilities) would have been easier to write and maintain in Cython than in C. I/O related code in particular tends to be heavy on error handling, which is much nicer to do with raise and f-strings than with error code passing in C.

A really fast Python web server with Cython

Shortly after I wrote about speeding up Python web frameworks with Cython, Nexedi posted an article about their attempt to build a fast multicore web server for Python that can compete with the performance of compiled coroutines in the Go language.

Their goal is to use Cython to build a web framework around a fast native web server, and to use Cython's concurrency and coroutine support to gain native performance also in the application code, without sacrificing the readability that Python provides.

Their experiments look very promising so far. They managed to process 10K requests per second concurrently, requests which actually do real processing work. That is worth noting, because many web server benchmarks out there content themselves with the blank response time for a "hello world", thus ignoring any concurrency overhead etc. For that simple static "Hello world!", they even got 400K requests per second, which shows that this is not a very realistic benchmark. Under load, their system seems to scale pretty linearly with the number of threads, which is also not a given among web frameworks.

I might personally get involved in further improving Cython for this kind of concurrent, async applications. Stay tuned.

Cython for web frameworks

I'm excited to see the Python web community pick up Cython more and more to speed up their web frameworks.

uvloop as a fast drop-in replacement for asyncio has been around for a while now, and it's mostly written in Cython as a wrapper around libuv. The Falcon web framework optionally compiles itself with Cython, while keeping up support for PyPy as a plain Python package. New projects like Vibora show that it pays off to design a framework for both (Flask-like) simplicity and (native) speed from the ground up to leverage Cython for the critical parts. Quote of the day:

"300.000 req/sec is a number comparable to Go's built-in web server (I'm saying this based on a rough test I made some years ago). Given that Go is designed to do exactly that, this is really impressive. My kudos to your choice to use Cython." – Reddit user 'beertown'.

Alex Orlov gave a talk at PyCon US 2017 about using Cython for more efficient code, in which he mentioned the possibility of speeding up the Django URL dispatcher by 3x, simply by compiling the module as it is.

Especially in async frameworks, minimising the time spent in processing (i.e. outside of the I/O-Loop) is critical for the overall responsiveness and performance. Anton Caceres and I presented fast async code with Cython at EuroPython 2016, showing how to speed up async coroutines by compiling and optimising them.

In order to minimise the processing time on the server, many template engines use native accelerators in one way or another, and writing those in Cython (instead of C/C++) is a huge boost in terms of maintenance (and probably also speed). But several engines also generate Python code from a templating language, and those templates tend to be largely static (they are rarely generated at runtime themselves). Therefore, compiling the generated template code, or better yet, directly targeting Cython with the code generation instead of just plain Python, has the potential to speed up template processing a lot. For example, Cython has very fast support for PEP-498 f-strings and even transforms some '%'-formatting patterns into them to speed them up (also in code that must stay backwards compatible with older Python versions). That alone can easily make a difference, as can the faster function and method calls and looping code that Cython generates.
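As a concrete illustration, here is a hypothetical sketch of the kind of code a template engine might generate for a tiny template (the function name and template are invented). Since the static structure is known at generation time, emitting an f-string hands Cython, which compiles f-strings down to efficient C string building, something to optimise directly:

```python
# Hypothetical code generated for the template
# "Hello {{ name }}! You have {{ n }} new messages."
def render_greeting(name, n):
    # Static template text becomes literal parts, dynamic parts become
    # interpolations; Cython can compile this into fast C string building.
    return f"Hello {name}! You have {n} new messages."
```

The generated code stays valid Python, so the engine loses nothing by targeting Cython: uncompiled, it runs as before; compiled, the hot rendering path gets native speed.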

I'm sure there's way more to come and I'm happily looking forward to all those cool developments in the web area that we are only just starting to see appear.

Update 2018-07-15: Nexedi posted an article about their attempts to build a fast web server using Cython, both for the framework layer and the processing at the application layer. Worth keeping an eye on.

Immigration

Let me say it loud and clear for once – Munich has an immigration problem.

More and more economic refugees from the surrounding municipalities and villages, from the socially and economically left-behind regions of Bavaria, keep moving to Munich.

But: these people are not part of our society. They do not share our traditional, urban values!

They live out their perverse car fetishism by setting up one war-grade traffic jam after another with their oversized armoured vehicles, instead of using public transport like us righteous citizens.

With their fraternity-style overpowered two-wheelers, they terrorise the peacefully sleeping native population.

In their primitive naivety, they accept completely undemocratic entry-level rents and thereby force horrendous rent increases, which are then used to drive out long-established local tenants.

They raise the tyrannical power of the nationalist unity party above the sacred flourishing of our multicultural traditions.

Believe me, if you let them, they will treacherously vote for the CSU behind your backs!

City dwellers! Defend yourselves! Do not concede the fight!

Free yourselves from the infiltration of the provincials! Stop the immigration of the clerical-pastorals!

Now! Forever!

What's new in Cython 0.28?

The freshly released Cython 0.28 is another major step in the development of the Cython compiler. It comes with several big new features, some of them long awaited, as well as various little optimisations. It also improves the integration with recent Python features (including the upcoming Python 3.7) and the Pythran compiler for NumPy expressions. The Changelog has the long list of relevant changes. As always, recompiling with the latest version will make your code adapt automatically to new Python releases and take advantage of the new optimisations.

The longest-requested change in this release, however, is the support for read-only memory views. You can now simply declare a memory view as const, e.g. const double[:,:], and Cython will request a read-only buffer for it. This allows interaction with bytes objects and other non-writable buffer providers. Note that this makes the item type of the memory view const, i.e. non-writable, and not just the view itself. If the item type is not just a simple numeric type, this might require minor changes to the data types used in the code reading from the view. This feature had been an open issue essentially ever since memory views were first introduced into Cython, back in 2009, but during all that time, no-one stepped forward to implement it. Alongside this improvement, users can now write view[i][j] instead of view[i,j] if they want to, without the previous slow-down due to sub-view creation.
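A minimal sketch of the new syntax (Cython code, assuming compilation with 0.28 or later; bytes objects only provide read-only buffers, which previously could not back a memory view at all):

```cython
# A const memory view can now accept a read-only buffer,
# such as the one provided by a bytes object.
cdef const unsigned char[:] data = b"read-only bytes"

print(data[0])       # reading items works as usual
# data[0] = 65       # writing would be rejected at compile time
```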

The second long-requested feature is support for copying C code verbatim into the generated files. The background is that some constructs, especially C hacks and macros, but also some adaptations to C specifics used by external libraries, can really only be done in plain C in order to use them from Cython. Previously, users had to create an external C header file to implement these things and then use a cdef extern from ... block to include the file from Cython code. Cython 0.28 now allows docstrings on these cdef extern blocks (with or without a specific header file name) that can contain arbitrary C/C++ code, for example:

cdef extern from *:
    """
    #define add1(i) ((i) + 1)
    """
    cdef int add1(int x) nogil

print(add1(x=2))

This is definitely considered an expert feature. Since the code is copied verbatim into the generated C code file, Cython has no way to apply any validation or safety checks. Use at your own risk.

Another big new feature is the support for multiple inheritance from Python classes by extension types (a.k.a. cdef classes). Previously, extension types could only inherit from other natively implemented types, including builtins. While cdef classes still cannot inherit only from Python classes, and also cannot inherit from multiple cdef classes, it is now possible to use normal Python classes as additional base classes, following an extension type as primary base. This enables use cases like Python mixins, while still keeping up the efficient memory layout and the fast C-level access to attributes and cdef methods that cdef classes provide.
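A hedged sketch of what this enables (Cython code with hypothetical names): an extension type as the primary base, followed by a plain Python mixin class that adds behaviour on top of the C-level attributes:

```cython
class ReprMixin:                      # a normal Python class
    def __repr__(self):
        return "%s(%d)" % (type(self).__name__, self.x)

cdef class Base:                      # extension type, the primary base
    cdef public int x
    def __init__(self, x):
        self.x = x

cdef class Value(Base, ReprMixin):    # extension type first, mixin after
    pass
```

The attribute x keeps its fast C-level storage in Base, while ReprMixin contributes ordinary Python methods.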

A bit of work has been done to start reducing the shared library size of Cython generated extension modules. In general, Cython aims to optimise its operations (especially Python operations) for speed and extensively uses C function inlining, optimistic code branches and type specialisations for that. However, the code in the module init function is really only executed once and rarely contains any loops, certainly not time critical ones. Therefore, Cython has now started to avoid certain code intensive optimisations inside of the module init code and also uses GCC pragmas to make the C compiler optimise this specific function for smaller size instead of speed. Without making the import visibly slower, this results in a certain reduction of the overall library size, but probably still leaves some space for future improvements.

Several new optimisations for Python builtins were implemented and often contributed by users. This includes faster operations and iteration for sets and bytearrays, from which existing code can benefit through simple recompilation. We are always happy to receive these contributions, and several tickets in the bug tracker are now marked as beginner friendly "first issues".

Cython has long supported f-strings, and the new release brings another set of little performance improvements for them. More interestingly, however, several common cases of unicode string %-formatting are now mapped to the f-string builder, as long as the argument side is a literal tuple. If the template string uses no unsupported formats, Cython applies this transformation automatically, which leads to visibly faster string formatting and avoids the intermediate creation of Python number objects and the value tuple. Existing code that makes use of %-formatting, including code in compiled Python .py files that needs to stay compatible with Python 2.x, can therefore benefit directly without rewriting all the template strings. Further coverage of formatting features for this transformation is certainly possible, and contributions are welcome.
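A sketch of an eligible pattern (hypothetical function, a literal tuple on the argument side with simple format codes); since the transformation is purely internal, the semantics are exactly those of plain Python:

```python
# %-formatting with a literal tuple argument: Cython 0.28 can map this
# pattern to its internal f-string builder when compiling, avoiding the
# intermediate value tuple, while the result stays identical to plain Python.
def describe(name, count):
    return "%s contains %d items" % (name, count)
```

Code like this, including .py files that must stay Python 2.x compatible, gets the speed-up on recompilation without touching the template strings.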

Finally, a last minute change improves the handling of string literals that are being passed into C++ functions as std::string& references. Previously, the generated code always unpacked a Python byte string and made a fresh copy of it, whereas now Cython detects const arguments and passes the string literal directly. Also in the non-const case, Cython does not follow C++ in outright rejecting the literal argument at compile time, but instead just creates a writable copy and passes it into the function. This avoids special-casing in user code and leads to working code by default, as expected in Python, and now also in Cython.

What's new in Cython 0.27?

Cython 0.27 is freshly released and comes with several great improvements. It finally implements all major features that were added in CPython 3.6, such as async generators and variable annotations. The long list of new features and resolved issues can be found in the changelog, or in the list of resolved tickets.

Probably the biggest new feature is the support for asynchronous generators and asynchronous comprehensions, as specified in PEP 525 and PEP 530 respectively. They allow using yield inside of async coroutines and await inside of comprehensions, so that the following becomes possible:

async def generate_results(source):
    async for i in source:
        yield i ** 2
    ...
    d = {i: await result for i, result in enumerate(async_results)}
    ...
    l = [s for c in more_async_results
         for s in await c]

As usual, this feature is available in Cython compiled modules across all supported Python versions, starting with Python 2.6. However, using async cleanup in generators, e.g. in a finally-block, requires CPython 3.6 in order to remember which I/O-loop the generator must use. Async comprehensions do not suffer from this.

The next big and long awaited feature is support for PEP 484 compatible typing. Both signature annotations (PEP 484) and variable annotations (PEP 526) are now parsed for Python types and cython.* types like list or cython.int. Complex types like List[int] are not currently evaluated as the semantics are less clear in the context of static compilation. This will be added in a future release.

One special twist here is exception handling, which tries to mimic Python more closely than the defaults in Cython code. Thus, it is no longer necessary to explicitly declare an exception return value in code like this:

@cython.cfunc
def add_1(x: cython.int) -> cython.int:
    if x < 0:
        raise ValueError("...")
    return x + 1

Cython will automatically return -1 (the default exception value for C integer types) when an exception is raised and check for exceptions after calling it. This is identical to the Cython signature declaration except? -1.

In cases where annotations are not meant as static type declarations during compilation, the extraction can be disabled with the compiler directive annotation_typing=False.

The new release brings another long awaited feature: automatic __richcmp__() generation. Previously, extension types differed from Python classes in a major way with respect to the special methods for comparison, __eq__, __lt__ etc.: users had to implement a single special __richcmp__() method that handled all comparisons at once. Now, Cython can automatically generate an efficient __richcmp__() method from the normal comparison methods, including inherited base type implementations. This brings Python classes and extension types another huge step closer.
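A minimal sketch (Cython code, hypothetical class): the plain comparison methods below are all that needs to be written; Cython 0.27 generates the combined __richcmp__() slot from them:

```cython
cdef class Interval:
    cdef public double lo, hi

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __eq__(self, other):
        return self.lo == other.lo and self.hi == other.hi

    def __lt__(self, other):
        return (self.lo, self.hi) < (other.lo, other.hi)

    # Cython now generates __richcmp__() from the methods above,
    # instead of requiring a hand-written all-in-one implementation.
```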

To bring extension modules also closer to Python modules, Cython now implements the new extension module initialisation process of PEP 489 in CPython 3.5 and later. This makes the special global names like __file__ and __path__ correctly available to module level code and improves the support for module-level relative imports. As with most internal features, existing Cython code will benefit from this by simple recompilation.

As a last feature worth mentioning, the IPython/Jupyter magic integration gained a new option %%cython --pgo for easy profile guided optimisation. This allows the C compiler to take better decisions during its optimisation phase based on a (hopefully) realistic runtime profile. The option compiles the cell with PGO settings for the C compiler, executes it to generate the runtime profile, and then compiles it again using that profile for C compiler optimisation. This is currently only tested with gcc. Support for other compilers can easily be added to the IPythonMagic.py module and pull requests are welcome, as usual.

By design, the Jupyter cell itself is responsible for generating a suitable profile. This can be done by implementing the functions that should be optimised via PGO, and then calling them directly in the same cell on some realistic training data like this:

%%cython --pgo
def critical_function(data):
    for item in data:
        ...

# execute function several times to build profile
from somewhere import some_typical_data
for _ in range(100):
    critical_function(some_typical_data)

Together with the improved module initialisation in Python 3.5 and later, you can now also distinguish between the profile and non-profile runs as follows:

if "_pgo_" in __name__:
    ...  # execute critical code here

Nichts falsch machen

»Das Netz ist voll mit mal mehr, mal weniger ernst gemeinten Aufzählungen der konkreten Vor- und Nachteile einzelner Programmiersprachen. Es ist zwar verlockend, diesen Listen eine weitere hinzuzufügen und dabei unsubtil die eigenen Vorlieben und Abneigungen einfließen zu lassen. Aber die Details solcher konkreten Empfehlungen ändern sich alle paar Jahre, und Sie lernen nicht viel dazu, wenn wir Ihnen einfach raten, »Nehmen Sie einfach Python, damit machen Sie nichts falsch!« (Obwohl Sie damit sicher wirklich nichts falsch machen.)« – Kathrin Passig und Johannes Jander, Weniger schlecht programmieren.