I keep reiterating that it would be nice if a couple of Python stdlib modules got compiled by Cython during Python’s installation, simply because it makes them faster right away, without sacrificing compatibility. Besides the difflib module, another nice example I found is the logging module. The logging benchmarks in Python’s benchmark suite run between 20% and 50% faster when the module is compiled. The silent logging case in particular is interesting with its ~50%, because a lot of log messages never actually end up in a log anywhere, so you can leave more log statements in your code and activate them when needed. And I’m sure there’s still a bit more to gain here by adding a couple of static type declarations in the right spots of that module. Feel free to give it a try.
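To get a feeling for why the silent case matters, the overhead of a disabled log call can be measured with the stdlib alone. This is just an illustrative sketch (the logger name and call count are arbitrary); the concrete speedup obviously depends on whether the module was compiled:

```python
import logging
import timeit

# A logger configured so that DEBUG messages are filtered out entirely.
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("demo")

# Each call still pays for the level check and the call overhead, even
# though no log record is ever created or emitted.  This is the code
# path that the benchmarks above show getting ~50% faster when compiled.
silent = timeit.timeit(
    lambda: log.debug("value is %s", 42), number=100000)
print("100k silent debug calls: %.3fs" % silent)
```

If that per-call cost is cut in half, the "just leave the debug statements in" style becomes cheaper across the whole application.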
This year’s PyCon-US included a talk with the rather lurid title “Cython vs. SWIG, Fight!”. I’m sure many of the attendees expected something different from what they actually got, which was mostly a feature-wise comparison at the “this is how some basic examples might look” level. In fact, I think the difference between Cython and SWIG, or really between Cython and any of those binding generators or wrapping tools, can be summarised in one sentence: they are binding generators, whereas Cython is a programming language, with all its expressiveness.
Cython’s main selling point is that it completely blurs the border between Python and C. A binding generator, or a foreign language interface tool like ctypes or cffi, will only ever give you the choice of two extremes:
- write your code in Python and let it talk to wrapped C, or
- write your code in C and wrap it.
Either easy or hard, nothing in between, and it’s not always your choice. They give you no help at all with the parts where it needs to be hard for some reason, and some do not even reduce that unfriendly feeling of working with truly foreign code. Even worse, these tools usually have a very specific idea about what the wrapping should look like, so if you come to a point where you’re not happy with the way they work, you’ll have to start working against the tool.
Cython is different. With Cython, you can
- write your code in Python and let it talk to wrapped C
- compile your Python code
- add static type declarations to your Python code to specialise and speed up the result of the compilation
- change your Python code file extension to “.pyx” to start using elements of the Cython language that let your code talk to C directly (note that this makes it Cython code)
- write your code in the Cython language and freely mix elements of Python and C, talking to both sides natively
- write your code in C and wrap it
So it opens up that entire huge grey area between the two extremes that other tools confine you to. It lets you freely choose the tradeoff between simplicity and speed, and makes it very easy to move gradually between one and the other.
Want to reduce complexity and use fast, high-level tools? Use Python constructs. Compiling them in Cython makes them even faster by statically analysing and optimising your code and inferring types for you. And it allows you to give the compiler explicit static type hints that reduce overhead and specialise your code even further.
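As a rough sketch of what that gradient looks like in practice, here is a hypothetical example written as plain Python that runs unchanged in CPython; the type annotations are exactly the kind of static hints the Cython compiler can pick up to specialise the loop into C arithmetic (the classic spelling for such hints is the cython.locals decorator, but plain annotations illustrate the idea):

```python
# Plain Python: works uncompiled, but gives a static compiler
# enough type information to turn the loop body into C arithmetic.
def f(x: float) -> float:
    return x * x - x

def integrate_f(a: float, b: float, n: int) -> float:
    """Approximate the integral of f over [a, b] with n rectangles."""
    dx: float = (b - a) / n
    total: float = 0.0
    for i in range(n):
        total += f(a + i * dx)
    return total * dx
```

The same source file can be run interpreted, compiled as-is, or compiled with the hints, which is precisely the gradual movement between the extremes described above.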
Want low-level speed? Move closer to C constructs, in exactly those areas in your code that need it. Any point along that gradient is right at your fingertips. Need to talk to C, C++, Fortran, maybe other languages? Without having to go through a level of indirection that bites right into your performance benefit? You can. Cython makes this easy, actually “normal”.
And, we’re always interested in improving the “static type declarations in Python code” kind of usage (which we call “pure Python mode”), so if you want to help extend the expressiveness of Cython’s type declarations for pure Python code to further blur the gradients for users, you should talk to us about it. We have a mailing list.
A while back, I wrote up my opinion on writing CPython extensions by hand, using the C-API. Most of the replies I got were asking for a proof, but the article was more of a summary of my prior experience than anything really new.
Now, David Malcolm, author of the GCC Python plugin, has given a talk at this year’s PyCon-US where he used a static analysis tool chain that he’s been working on based on his GCC plugin to find bugs in CPython extension modules. Being a Fedora developer, he ran it against the wealth of binary Python packages in that distribution and ended up finding a *lot* of bugs. Very unsurprisingly to me, most of them were refcount bugs, mainly memory leaks, especially in error handling cases, but also lots of other issues with reference handling, e.g. missing NULL/error tests etc. At the end of the talk, he was asked what bugs his tools found not only in manually written code but in generated code, specifically C code generated by Cython. He answered that it was rather the other way round: he had used Cython generated code to prune false positives from his analysis tool, because it was quite obvious that the code that Cython generated was actually correct.
I think that nicely supports what I wrote in my last post.
It seems I can’t repeat this often enough. People who write Python wrappers for libraries in plain C “because it’s faster” tend to overestimate their C-API skills and simply have no idea how costly maintenance is. It’s like the old advice about optimisation: Don’t do it! (and, if you’re an expert: Don’t do it yet!). If you write your wrapper code in C instead of Cython, it will be
- less portable
- harder to maintain
- harder to extend
- harder to optimise
- harder to debug and fix
It will cost you a lot of effort, both short term and long term, that is much better spent adding cool features and optimising the performance-critical parts of your code once you get it working. Say, is your time really so cheap that you want to waste it writing C code?
I’ve finally found the time to look through the talks of this year’s EuroPython (which I didn’t attend - I mean, Firenze? In the middle of summer? Seriously?). That made me stumble over a rather lengthy talk by Kay Hayen about his Nuitka compiler project. It took more than an hour, almost one and a half. I had to skip ahead through the video more than once. It certainly reminded me that it’s a good idea to keep my own talks short.
Apparently, there was a mixed reception of that talk. Some people seemed to be heavily impressed, others didn’t like it at all. According to the comments, Guido was more in the latter camp. I can understand that. The way Kay presented his project was not very convincing. The only “excuse” he had for its existence was basically “I do it in my spare time” and “I don’t like the alternatives”. In the stream of details that he presented, he completely failed to make the case for a static Python compiler at all. And Guido’s little remark in his keynote that “some people still try to do this” showed once again that this case must still be made.
So, what’s the problem with static Python compilers, compared to static compilers for other languages? Python can obviously be translated into static code; the mere fact that it can be interpreted shows that. Simply chaining up all the code that the interpreter executes will yield a static code representation. However, that doesn’t answer the question whether it’s worth doing. The interpreter in CPython is a much more compact piece of code than the result of such a translation would be, and it’s also much simpler. The trace pruning that HotPy does, according to another talk at the same conference, is a very good example of the complexity involved. The fact that ShedSkin and PyPy’s RPython explicitly do not try to implement the whole Python language speaks volumes. And the overhead of an additional compilation step is actually something that drives many people to use the Python interpreter in the first place. Static compilation is not a virtue. Thus, I would expect an excuse for writing a static translator from anyone who attempts it. The usual excuse that people bring forward is “because it’s faster”. Faster than interpretation.
Now, Python is a dynamic language, which makes static translation difficult already, but it’s a dynamic language where side-effects are the normal case rather than an exception. That means that static analysis and optimisation can never be as effective as runtime analysis and optimisation, not with reasonable effort. At least WPA (whole program analysis) would be required in order to make static optimisations as effective as runtime optimisations, but both ShedSkin and RPython make it clear that this can only be done for a limited subset of the language. And it obviously requires the whole program to be available at compile time, which is usually not the case, if only due to the excessive resource requirements of a WPA. PyPy is a great example: compiling its RPython sources takes tons of memory and a ridiculous amount of time.
That’s why I don’t think that “because it’s faster” quite captures it, not as plain as that. The case for a static compiler must be that “it solves a problem”. Cython does that. People don’t use Cython because it has such a great Python code optimiser. Plain, unmodified Python code compiled by Cython, while usually faster than interpretation in CPython, will often be slower and sometimes several times slower than what PyPy’s JIT driven optimiser gets out of it. No, people use Cython because it helps them solve a problem. Which is either that they want to connect to external non-Python libraries from Python code or that they want to be able to manually optimise their code, or both. It’s manual code optimisation and tuning where static compilers are great. Runtime optimisers can’t give you that and interpreters obviously won’t give you that either. The whole selling point of Cython is not that it will make Python code magically run fast all by itself, but that it allows users to tremendously expand the range of manual optimisations that they can apply to their Python code, up to the point where it’s no longer Python code but essentially C code in a Python-like syntax, or even plain C code that they interface with as if it were Python code. And this works completely seamlessly, without building new language barriers along the way.
So, the point is not that Cython is a static Python compiler, the point is that it is more than a Python compiler. It solves a problem in addition to just being a compiler. People have been trying to write static compilers for Python over and over again, but all of them fail to provide that additional feature that can make them useful to a broad audience. I don’t mind them doing that, having fun writing code is a perfectly valid reason to do it. But they shouldn’t expect others to start raving about the result, unless they can provide more than just static compilation.
To ask which is faster, CPython, PyPy or Cython, outside of a very well defined and specific context of existing code and requirements, is basically comparing apples, oranges and tomatoes. Any of the three can win against the others for the right kind of applications (apple sauce on your pasta, anyone?). Here’s a rule-of-thumb kind of comparison that may be way off for a given piece of code but should give you a general idea.
Note that we’re only talking about CPU-bound code here. I/O-bound code will only show a difference in some very well selected cases (e.g. because Cython allows you to step down into low-level minimum-copy I/O using C, in which case it may not really have been I/O bound before).
PyPy is very fast for pure Python code that generally runs in loops for a while and makes heavy use of Python objects. It’s great for computational code (and often way faster than CPython for it) but has its limits for numerics, huge data sets and other seriously performance critical code because it doesn’t really allow you to fine-tune your code. Like any JIT compiler, it’s a black box where you put something in and either you like the result or not. That equally applies to the integration with native code through the ctypes library, where you can be very lucky, or not. Although the platform situation keeps improving, the PyPy platform still lacks a wide range of external libraries that are available for the CPython platform, including many tools that people use to speed up their Python code.
CPython is usually quite a bit faster than PyPy for one-shot scripts (especially when including the startup time) and more generally for code that doesn’t benefit from long-running loops. For example, I was surprised to see how much slower it is to run something as large as the Cython compiler inside of PyPy to compile code, despite its being written in pure Python. CPython is also very portable and extensible (especially using Cython) and has a much larger set of external (native) libraries available than the PyPy platform, including all of NumPy and SciPy, for example. However, its performance loses against PyPy for most pure Python applications that keep doing the same stuff for a while, without resorting to native code or optimised native libraries for the heavy lifting.
Cython is very fast for low-level computations, for (thread-)parallel code and for code that benefits from switching seamlessly between C/C++ and Python. The main feature is that it allows for very fine-grained manual code tuning from pure Python to C-ish Python to C to external libraries. It is designed to extend a Python runtime, not to replace it. When used to extend CPython, it obviously inherits all advantages of that platform in terms of available code. It’s usually way slower than PyPy for the kind of object-heavy pure Python code in which PyPy excels, including some kinds of computational code, even if you start optimising the code manually. Compared to CPython, however, Cython compiled pure Python code usually runs faster and it’s easy to make it run much faster.
So, for an existing (mostly) pure Python application, PyPy is generally worth a try. It’s usually faster than CPython and often fast enough all by itself. If it’s not, well, then it’s not and you can go and file a bug report with them. Or just drop it and happily ignore that it exists from that point on. Or just ignore it entirely in the first place, because your application runs fast enough anyway, so why change anything about it?
However, for most other, non-trivial applications, the simplistic question “which platform is faster” is much less important in real life. If an application has (existing or anticipated) non-trivial external dependencies that are not available or do not work reliably in a given platform, then the choice is obvious. And if you want to (or have to) optimise and tune the code yourself (where it makes sense to do that), the combination of CPython and Cython is often more rewarding, but requires more manual work than a quick test run in PyPy. For cases where most of the heavy lifting is being done in some C, C++, Fortran or other low-level library, either platform will do, often with a “there’s already a binding for it” advantage for CPython and otherwise a usability and tunability advantage for Cython when the binding needs to be written from scratch. Apples, oranges and tomatoes, if you only ask which is faster.
Another thing to consider is that CPython and PyPy can happily communicate with each other from separate processes. So, there are ways to let applications benefit from both platforms at the same time when the need arises. Even heterogeneous MPI setups might be possible.
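A minimal sketch of that kind of split: one interpreter drives the application and hands work to a second interpreter process over a pipe. Both sides use sys.executable here so the example is self-contained; in a mixed deployment you would point the subprocess at a PyPy binary instead (the function and variable names are purely illustrative):

```python
import json
import subprocess
import sys

# Worker code that could just as well run under a PyPy interpreter:
# it reads a JSON list from stdin and writes the squared values back.
worker = """
import json, sys
data = json.load(sys.stdin)
json.dump([x * x for x in data], sys.stdout)
"""

def run_in_subprocess(data, interpreter=sys.executable):
    # Serialise the input, run the worker in a separate process,
    # and deserialise its result.
    result = subprocess.run(
        [interpreter, "-c", worker],
        input=json.dumps(data),
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

print(run_in_subprocess([1, 2, 3]))  # -> [1, 4, 9]
```

Any serialisation channel works the same way; JSON over a pipe is just the simplest one to demonstrate.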
There is also work going on to improve the new integration of Cython with PyPy, which makes it possible to compile and run Cython code on the PyPy platform. The performance of that interface currently suffers from the lack of optimisation in PyPy’s cpyext emulation layer, but that should get better over time. The main point for now is that the integration lifts the platform lock-in for both sides, which makes more native code available for both platforms.
I did a couple of experiments compiling itertools with the new generator support in Cython. In CPython, the itertools module is actually written in hand tuned C and does very little computation in its generators, so I knew it would be hard to reach with generated code. But Cython does a pretty good job.
Something as trivial as chain() is exactly as fast as in the C implementation, but compared to the more than 60 lines of C code, it is certainly a lot more readable in Cython:
def chain(*iterables):
    """Make an iterator that returns elements from the first iterable
    until it is exhausted, then proceeds to the next iterable, until
    all of the iterables are exhausted. Used for treating consecutive
    sequences as a single sequence.
    """
    for it in iterables:
        for element in it:
            yield element
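Since the function body is plain Python, you can check its behaviour against the stdlib version without compiling anything:

```python
import itertools

def chain(*iterables):
    # Same pure Python implementation as above.
    for it in iterables:
        for element in it:
            yield element

# Both produce identical sequences for mixed iterable types.
data = ([1, 2], "ab", range(3))
assert list(chain(*data)) == list(itertools.chain(*data))
```

That round-trip equivalence is what makes the compile-the-stdlib idea attractive: the same source serves as both the specification and the fast implementation.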
Other functions, like islice(), are faster in C, partly because CPython actually takes a couple of shortcuts, e.g. by only looking up the iterator slot method once. You cannot do that in Python code, and I wanted to keep the implementation compatible with regular Python. Specifically, the C speed advantage for islice() is currently about 30-50% in general, although the Cython implementation can also be up to 10% faster for some cases, e.g. when extracting only a couple of items from the middle of a longer sequence. The C implementation is about 90 lines, here is the Cython implementation:
import sys
import cython

# Python 2/3 compatibility
_max_size = cython.declare(
    cython.Py_ssize_t,
    getattr(sys, "maxsize", getattr(sys, "maxint", None)))

@cython.locals(i=cython.Py_ssize_t, nexti=cython.Py_ssize_t,
               start=cython.Py_ssize_t, stop=cython.Py_ssize_t,
               step=cython.Py_ssize_t)
def islice(iterable, *args):
    """Make an iterator that returns selected elements from the
    iterable. If start is non-zero, then elements from the iterable
    are skipped until start is reached. Afterward, elements are
    returned consecutively unless step is set higher than one which
    results in items being skipped. If stop is None, then iteration
    continues until the iterator is exhausted, if at all; otherwise,
    it stops at the specified position. Unlike regular slicing,
    islice() does not support negative values for start, stop, or
    step. Can be used to extract related fields from data where the
    internal structure has been flattened (for example, a multi-line
    report may list a name field on every third line).
    """
    s = slice(*args)
    start = s.start or 0
    stop = s.stop or _max_size
    step = s.step or 1
    if start < 0:
        raise ValueError("...")
    if step < 1:
        raise ValueError("...")
    if start >= stop:
        return
    nexti = start
    for i, element in enumerate(iterable):
        if i == nexti:
            yield element
            nexti += step
            if nexti >= stop or nexti < 0:
                return
Here is one that is conceptually quite simple: count(). I had to optimise it quite a bit, because the iteration code in the C implementation is extremely tight. Even the tuned version below runs about 10% slower than the hand tuned C version, which is about 230 lines long.
@cython.locals(i=cython.Py_ssize_t)
def count(n=0):
    """Make an iterator that returns consecutive integers starting
    with n. If not specified n defaults to zero. Often used as an
    argument to imap() to generate consecutive data points. Also,
    used with zip() to add sequence numbers.
    """
    try:
        i = n
    except OverflowError:
        i = _max_size   # skip i-loop
    else:
        n = _max_size   # first value after i-loop
    while i < _max_size:
        yield i
        i += 1
    while True:
        yield n
        n += 1
Note that all of the above generators execute on the order of microseconds, so even a slow-down of 50% will likely not be measurable in real-world code.
So far, I did not try any of the more fancy functions in itertools (those that actually do something). The Cython project has announced a Google Summer of Code project with exactly the intent to rewrite some of the C stdlib modules of CPython in pure Python code with Cython compiler hints. So I leave this exercise to interested readers for now.