I keep reiterating that it would be nice if a couple of Python stdlib modules got compiled by Cython during Python’s installation, simply because it makes them faster right away, without sacrificing compatibility. Besides the difflib module, another nice example I found is the logging module. The logging benchmarks in Python’s benchmark suite run between 20% and 50% faster when the module is compiled. The silent logging case is especially interesting with its ~50% gain, because a lot of log messages never actually end up in a log anywhere, so you can leave more log statements in your code and activate them when needed. And I’m sure there’s still a bit more to gain here by adding a couple of static type declarations in the right spots of that module. Feel free to give it a try.
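If you want to try it, here is a minimal sketch of how to compile a local copy of such a module by hand (the file name “mylogging.py” is made up; there is no official build hook for this):

    # setup.py -- compile a copied-out stdlib-style module with Cython
    from distutils.core import setup
    from Cython.Build import cythonize

    setup(ext_modules=cythonize("mylogging.py"))

Running “python setup.py build_ext --inplace” then gives you a compiled extension module that can be imported in place of the interpreted original, ready for benchmarking.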
This year’s PyCon-US included a talk with the rather lurid title “Cython vs. SWIG, Fight!”. I’m sure many of the attendees expected something different from what they actually got, which was mostly a feature-wise comparison at the “this is how some basic examples might look” level. In fact, I think the difference between Cython and SWIG, or really between Cython and any of those binding generators or wrapping tools, can be summarised in one sentence: they are binding generators, whereas Cython is a programming language, with all of its expressiveness.
Cython’s main selling point is that it completely blurs the border between Python and C. A binding generator, or a foreign language interface tool like ctypes or cffi, will only ever give you the choice of two extremes:
- write your code in Python and let it talk to wrapped C, or
- write your code in C and wrap it.
Either easy or hard, nothing in between, and it’s not always your choice. They give you no help at all with the parts where it needs to be hard for some reason, and some do not even reduce that unfriendly feeling of working with truly foreign code. Even worse, these tools usually have a very specific idea about what the wrapping should look like, so if you come to a point where you’re not happy with the way they work, you’ll have to start working against the tool.
Cython is different. With Cython, you can
- write your code in Python and let it talk to wrapped C
- compile your Python code
- add static type declarations to your Python code to specialise and speed up the result of the compilation
- change your Python code file extension to “.pyx” to start using elements of the Cython language that let your code talk to C directly (note that this makes it Cython code)
- write your code in the Cython language and freely mix elements of Python and C, talking to both sides natively (see the sketch after this list)
- write your code in C and wrap it
So it opens up that entire huge grey area between the two extremes that other tools leave you with. It lets you freely choose the tradeoff between simplicity and speed, and makes it very easy to move gradually from one to the other.
Want to reduce complexity and use fast, high-level tools? Use Python constructs. Compiling them with Cython makes them even faster by statically analysing and optimising your code and inferring types for you. And it allows you to give the compiler explicit static type hints that reduce overhead and specialise your code even further.
Want low-level speed? Move closer to C constructs, in exactly those areas of your code that need it. Any point along that gradient is right at your fingertips. Need to talk to C, C++, Fortran, maybe other languages? Without having to go through a level of indirection that bites right into your performance benefit? You can. Cython makes this easy, actually “normal”.
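Talking to C from a .pyx file is about as direct as it gets. A minimal sketch, using nothing but the C math library:

    # cmath_demo.pyx -- declare an external C function, then call it natively
    cdef extern from "math.h":
        double sqrt(double x)

    def norm(double x, double y):
        # reads like Python, but sqrt() compiles to a plain C function call
        return sqrt(x * x + y * y)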
And we’re always interested in improving the “static type declarations in Python code” kind of usage (which we call “pure Python mode”), so if you want to help extend the expressiveness of Cython’s type declarations for pure Python code, to further blur the gradient for users, you should talk to us about it. We have a mailing list.
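In pure Python mode, the declarations stay in regular Python syntax, so the module keeps working unchanged in the interpreter. Here is the same toy loop as above, sketched in that style:

    # still a valid .py file: runs in plain Python (via Cython's shadow
    # "cython" module) and specialises to C types when compiled
    import cython

    @cython.locals(n=cython.int, i=cython.int, total=cython.double)
    def harmonic(n):
        total = 0.0
        for i in range(1, n + 1):
            total += 1.0 / i
        return total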
A while back, I wrote up my opinion on writing CPython extensions by hand, using the C-API. Most of the replies I got were asking for a proof, but the article was more of a summary of my prior experience than anything really new.
Now, David Malcolm, author of the GCC Python plugin, has given a talk at this year’s PyCon-US where he used a static analysis toolchain that he has been building on top of his GCC plugin to find bugs in CPython extension modules. Being a Fedora developer, he ran it against the wealth of binary Python packages in that distribution and ended up finding a *lot* of bugs. Very unsurprisingly to me, most of them were refcount bugs, mainly memory leaks, especially in error handling cases, but also lots of other issues with reference handling, e.g. missing NULL and error checks. At the end of the talk, he was asked what bugs his tools found not only in manually written code but also in generated code, specifically C code generated by Cython. He answered that it was rather the other way round: he had used Cython generated code to prune false positives from his analysis tool, because it was quite obvious that the code that Cython generated was actually correct.
I think that nicely supports what I wrote in my last post.
It seems I can’t repeat this often enough. People who write Python wrappers for libraries in plain C “because it’s faster” tend to overestimate their C-API skills and simply have no idea how costly maintenance is. It’s like the old advice about optimisation: Don’t do it! (and, if you’re an expert: Don’t do it yet!). If you write your wrapper code in C instead of Cython, it will be
- less portable
- harder to maintain
- harder to extend
- harder to optimise
- harder to debug and fix
It will cost you a lot of effort, both short term and long term, that is much better spent adding cool features and optimising the performance-critical parts of your code once you have it working. Say, is your time really so cheap that you want to waste it writing C code?
I’ve finally found the time to look through the talks of this year’s EuroPython (which I didn’t attend - I mean, Firenze? In the middle of summer? Seriously?). That made me stumble over a rather lengthy talk by Kay Hayen about his Nuitka compiler project. It took more than an hour, almost one and a half. I had to skip ahead through the video more than once. It certainly reminded me that it’s a good idea to keep my own talks short.
Apparently, there was a mixed reception of that talk. Some people seemed to be heavily impressed, others didn’t like it at all. According to the comments, Guido was more in the latter camp. I can understand that. The way Kay presented his project was not very convincing. The only “excuse” he had for its existence was basically “I do it in my spare time” and “I don’t like the alternatives”. In the stream of details that he presented, he completely failed to make the case for a static Python compiler at all. And Guido’s little remark in his keynote that “some people still try to do this” showed once again that this case must still be made.
So, what’s the problem with static Python compilers, compared to static compilers for other languages? Python can obviously be translated into static code; the mere fact that it can be interpreted shows that. Simply chaining up all the code that the interpreter executes would yield a static code representation. However, that doesn’t answer the question of whether it’s worth doing. The interpreter in CPython is a much more compact piece of code than the result of such a translation would be, and it’s also much simpler. The trace pruning that HotPy does, according to another talk at the same conference, is a very good example of the complexity involved. The fact that ShedSkin and PyPy’s RPython explicitly do not try to implement the whole Python language speaks volumes. And the overhead of an additional compilation step is actually something that drives many people to use the Python interpreter in the first place. Static compilation is not a virtue. Thus, I would expect an excuse for writing a static translator from anyone who attempts it. The usual excuse that people bring forward is “because it’s faster”. Faster than interpretation.
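To see what such naive chaining means, look at the opcode stream the interpreter already works from; a translator of this kind would essentially emit one C call per opcode (my own illustration, not from the talk):

    # show the bytecode that a naive static translator would chain into C calls
    import dis
    dis.dis(compile("a + b", "<expr>", "eval"))
    # prints something like: LOAD_NAME a, LOAD_NAME b, BINARY_ADD, RETURN_VALUE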
Now, Python is a dynamic language, which makes static translation difficult to begin with, but it’s a dynamic language where side effects are the normal case rather than the exception. That means that static analysis and optimisation can never be as effective as runtime analysis and optimisation, not with reasonable effort. At least WPA (whole program analysis) would be required in order to make static optimisations as effective as runtime optimisations, but both ShedSkin and RPython make it clear that this can only be done for a limited subset of the language. And it obviously requires the whole program to be available at compile time, which is usually not the case, if only due to the excessive resource requirements of a WPA. PyPy is a great example: compiling its RPython sources takes tons of memory and a ridiculous amount of time.
That’s why I don’t think that “because it’s faster” cuts it, not as plainly as that. The case for a static compiler must be that “it solves a problem”. Cython does that. People don’t use Cython because it has such a great Python code optimiser. Plain, unmodified Python code compiled by Cython, while usually faster than interpretation in CPython, will often be slower and sometimes several times slower than what PyPy’s JIT driven optimiser gets out of it. No, people use Cython because it helps them solve a problem: either they want to connect to external non-Python libraries from Python code, or they want to be able to manually optimise their code, or both. It’s manual code optimisation and tuning where static compilers shine. Runtime optimisers can’t give you that, and interpreters obviously won’t give you that either. The whole selling point of Cython is not that it makes Python code magically run fast all by itself, but that it allows users to tremendously expand the range of manual optimisations that they can apply to their Python code, up to the point where it’s no longer Python code but essentially C code in a Python-like syntax, or even plain C code that they interface with as if it were Python code. And this works completely seamlessly, without building new language barriers along the way.
So, the point is not that Cython is a static Python compiler, the point is that it is more than a Python compiler. It solves a problem in addition to just being a compiler. People have been trying to write static compilers for Python over and over again, but all of them fail to provide that additional feature that can make them useful to a broad audience. I don’t mind them doing that, having fun writing code is a perfectly valid reason to do it. But they shouldn’t expect others to start raving about the result, unless they can provide more than just static compilation.
To ask which is faster, CPython, PyPy or Cython, outside of a very well defined and specific context of existing code and requirements, is basically comparing apples, oranges and tomatoes. Any of the three can win against the others for the right kind of applications (apple sauce on your pasta, anyone?). Here’s a rule-of-thumb kind of comparison that may be way off for a given piece of code but should give you a general idea.
Note that we’re only talking about CPU-bound code here. I/O-bound code will only show a difference in a few well-chosen cases (e.g. because Cython allows you to step down into low-level, minimum-copy I/O using C, in which case the code may not really have been I/O-bound before).
PyPy is very fast for pure Python code that generally runs in loops for a while and makes heavy use of Python objects. It’s great for computational code (and often way faster than CPython for it) but has its limits for numerics, huge data sets and other seriously performance-critical code, because it doesn’t really allow you to fine-tune your code. Like any JIT compiler, it’s a black box: you put something in, and either you like the result or you don’t. The same applies to the integration with native code through the ctypes library, where you can be very lucky, or not. Although the situation keeps improving, the PyPy platform still lacks a wide range of external libraries that are available for the CPython platform, including many tools that people use to speed up their Python code.
CPython is usually quite a bit faster than PyPy for one-shot scripts (especially when including the startup time) and, more generally, for code that doesn’t benefit from long-running loops. For example, I was surprised to see how much slower it is to run something as large as the Cython compiler inside of PyPy to compile code, despite its being written in pure Python. CPython is also very portable and extensible (especially using Cython) and has a much larger set of external (native) libraries available than the PyPy platform, including all of NumPy and SciPy, for example. However, its performance loses against PyPy for most pure Python applications that keep doing the same stuff for a while without resorting to native code or optimised native libraries for the heavy lifting.
Cython is very fast for low-level computations, for (thread-)parallel code and for code that benefits from switching seamlessly between C/C++ and Python. Its main feature is that it allows for very fine-grained manual code tuning, from pure Python to C-ish Python to C to external libraries. It is designed to extend a Python runtime, not to replace it. When used to extend CPython, it obviously inherits all the advantages of that platform in terms of available code. It’s usually way slower than PyPy for the kind of object-heavy pure Python code in which PyPy excels, including some kinds of computational code, even if you start optimising the code manually. Compared to CPython, however, Cython compiled pure Python code usually runs faster, and it’s easy to make it run much faster.
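As an illustration of the (thread-)parallel side, here is a hypothetical sketch using Cython’s prange loop, which releases the GIL and distributes the iterations over OpenMP threads (the module needs to be compiled and linked with OpenMP flags):

    # parallel_sum.pyx -- toy example of a GIL-free, thread-parallel loop
    from cython.parallel import prange

    def total(double[:] data):
        cdef double s = 0.0
        cdef Py_ssize_t i
        for i in prange(data.shape[0], nogil=True):
            s += data[i]   # Cython recognises this as a parallel reduction
        return s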
So, for an existing (mostly) pure Python application, PyPy is generally worth a try. It’s usually faster than CPython and often fast enough all by itself. If it’s not, well, then it’s not and you can go and file a bug report with them. Or just drop it and happily ignore that it exists from that point on. Or just ignore it entirely in the first place, because your application runs fast enough anyway, so why change anything about it?
However, for most other, non-trivial applications, the simplistic question “which platform is faster” is much less important in real life. If an application has (existing or anticipated) non-trivial external dependencies that are not available or do not work reliably on a given platform, then the choice is obvious. And if you want to (or have to) optimise and tune the code yourself (where it makes sense to do that), the combination of CPython and Cython is often more rewarding, but requires more manual work than a quick test run in PyPy. For cases where most of the heavy lifting is done in some C, C++, Fortran or other low-level library, either platform will do, often with a “there’s already a binding for it” advantage for CPython, and otherwise a usability and tunability advantage for Cython when the binding needs to be written from scratch. Apples, oranges and tomatoes, if you only ask which is faster.
Another thing to consider is that CPython and PyPy can happily communicate with each other from separate processes. So, there are ways to let applications benefit from both platforms at the same time when the need arises. Even heterogeneous MPI setups might be possible.
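As a minimal sketch of what that could look like (script name and message format entirely made up): a CPython process starts a PyPy worker and exchanges line-delimited JSON with it over pipes.

    # run from CPython; assumes a "pypy" executable on the PATH and a
    # hypothetical worker.py that answers one JSON request per input line
    import json, subprocess

    worker = subprocess.Popen(["pypy", "worker.py"],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    request = json.dumps({"task": "sum", "data": [1, 2, 3]}) + "\n"
    worker.stdin.write(request.encode())
    worker.stdin.flush()
    result = json.loads(worker.stdout.readline())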
There is also ongoing work to improve the new integration of Cython with PyPy, which makes it possible to compile and run Cython code on the PyPy platform. The performance of that interface currently suffers from the lack of optimisation in PyPy’s cpyext emulation layer, but that should get better over time. The main point for now is that the integration lifts the platform lock-in on both sides, which makes more native code available for both platforms.
I recently showed some benchmark results comparing the XML parser performance in CPython 3.3 to that in PyPy 1.7. Here’s an update for PyPy 1.9 that also includes the current state of the lxml port to that platform, parsing a 3.4MB document-style XML file. The CPython 3.3 numbers first:
    Initial Memory usage: 11332
    xml.etree.ElementTree.parse done in 0.041 seconds
    Memory usage: 21468 (+10136)
    xml.etree.cElementTree.parse done in 0.041 seconds
    Memory usage: 21464 (+10132)
    xml.etree.cElementTree.XMLParser.feed(): 25317 nodes read in 0.041 seconds
    Memory usage: 21736 (+10404)
    lxml.etree.parse done in 0.032 seconds
    Memory usage: 28324 (+16992)
    drop_whitespace.parse done in 0.030 seconds
    Memory usage: 25172 (+13840)
    lxml.etree.XMLParser.feed(): 25317 nodes read in 0.037 seconds
    Memory usage: 30608 (+19276)
    minidom tree read in 0.492 seconds
    Memory usage: 29852 (+18520)
PyPy without JIT warmup:
    Initial Memory usage: 42156
    xml.etree.ElementTree.parse done in 0.452 seconds
    Memory usage: 44084 (+1928)
    xml.etree.cElementTree.parse done in 0.450 seconds
    Memory usage: 44080 (+1924)
    xml.etree.cElementTree.XMLParser.feed(): 25317 nodes read in 0.457 seconds
    Memory usage: 47920 (+5768)
    lxml.etree.parse done in 0.033 seconds
    Memory usage: 58688 (+16536)
    drop_whitespace.parse done in 0.033 seconds
    Memory usage: 55536 (+13384)
    lxml.etree.XMLParser.feed(): 25317 nodes read in 0.055 seconds
    Memory usage: 64724 (+22564)
    minidom tree read in 0.541 seconds
    Memory usage: 59456 (+17296)
PyPy with JIT warmup:
    Initial Memory usage: 646824
    xml.etree.ElementTree.parse done in 0.341 seconds
    xml.etree.cElementTree.parse done in 0.345 seconds
    xml.etree.cElementTree.XMLParser.feed(): 25317 nodes read in 0.342 seconds
    lxml.etree.parse done in 0.026 seconds
    drop_whitespace.parse done in 0.025 seconds
    lxml.etree.XMLParser.feed(): 25317 nodes read in 0.039 seconds
    minidom tree read in 0.383 seconds
What you can quickly see is that lxml performs equally well on both platforms (actually slightly faster on PyPy) and beats the other libraries on PyPy by more than an order of magnitude. The absolute numbers are fairly low, though, way below a second for the 3.4MB file. It will be interesting to see more complete benchmarks at some point that also take some realistic processing into account.
Remark: on PyPy, cElementTree is just an alias for the plain Python ElementTree, and in CPython 3.3, ElementTree uses cElementTree in the background, which is why both show the same performance on each platform. The memory sizes were measured in forked processes, whereas the PyPy JIT numbers were measured in a repeatedly running process in order to take advantage of the JIT compiler. Note the substantially higher memory load of PyPy here.
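For reference, here is a rough reconstruction of that kind of forked measurement (my own sketch, not the actual benchmark script):

    import os, resource

    def measure_forked(func):
        # run one benchmark step in a child process so that its allocations
        # never pollute the parent, then read the peak RSS of waited-for
        # children (note: ru_maxrss aggregates over all children so far)
        pid = os.fork()
        if pid == 0:
            func()
            os._exit(0)
        os.waitpid(pid, 0)
        return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss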
Update: I originally reported the forked memory size with the non-forked performance for PyPy. The above now shows both separately. A more real-world comparison would likely yield an even higher memory usage on PyPy than the numbers above, which were mostly meant to give an idea of the memory usage of the in-memory tree (i.e. the data impact).