Faster XML stream processing in Python

It's been a while since I last wrote something about processing XML, specifically about finding something in XML. Recently, I read a blog post by Eli Bendersky about faster XML processing in Go, and he was comparing it to iterparse() in Python's ElementTree and lxml. Basically, all he said about lxml is that it performs more or less like ElementTree, so he concentrated on the latter (and on C and Go). That's not wrong to say, but it also doesn't help much. lxml has much more fine-grained tools for processing XML, so here's a reply.

I didn't have the exact same XML input file that Eli used, but I used the same (deterministic, IIUC) tool for generating one, running xmlgen -f2 -o bench.xml. That resulted in a 223MiB XML file of the same structure that Eli used, thus probably almost the same as his.

Let's start with the original implementation:

import sys
import xml.etree.ElementTree as ET

count = 0
for event, elem in ET.iterparse(sys.argv[1], events=("end",)):
    if event == "end":
        if elem.tag == 'location' and elem.text and 'Africa' in elem.text:
            count += 1
        elem.clear()

print('count =', count)

The code parses the XML file, searches for location tags, and counts those that contain the word Africa.

Running this under time with ElementTree in CPython 3.6.8 (Ubuntu 18.04) shows:

count = 92
4.79user 0.08system 0:04.88elapsed 99%CPU (0avgtext+0avgdata 14828maxresident)k

We can switch to lxml (4.3.4) by changing the import to import lxml.etree as ET:

count = 92
4.58user 0.08system 0:04.67elapsed 99%CPU (0avgtext+0avgdata 23060maxresident)k

You can see that it uses somewhat more memory overall (~23MiB), but runs just a little faster, not even 5%. Both are roughly comparable.

For comparison, the baseline memory usage of doing nothing but importing ElementTree versus lxml is:

$ time python3.6 -c 'import xml.etree.ElementTree'
0.08user 0.01system 0:00.09elapsed 96%CPU (0avgtext+0avgdata 9892maxresident)k
0inputs+0outputs (0major+1202minor)pagefaults 0swaps

$ time python3.6 -c 'import lxml.etree'
0.07user 0.01system 0:00.09elapsed 96%CPU (0avgtext+0avgdata 15264maxresident)k
0inputs+0outputs (0major+1742minor)pagefaults 0swaps

Back to our task at hand. As you may know, global variables in Python are more costly to access than local variables, and as you certainly know, code at module level is hard to test. So, let's start with something obvious that we would always do in Python: write a function.

import sys
import lxml.etree as ET

def count_locations(file_path, match):
    count = 0
    for event, elem in ET.iterparse(file_path, events=("end",)):
        if event == "end":
            if elem.tag == 'location' and elem.text and match in elem.text:
                count += 1
            elem.clear()
    return count

count = count_locations(sys.argv[1], 'Africa')
print('count =', count)
count = 92
4.39user 0.06system 0:04.46elapsed 99%CPU (0avgtext+0avgdata 23264maxresident)k

Another thing we can see is that we're explicitly asking for only end events, but then still check whether each event we got is an end event. That's redundant. Removing this check yields:
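The loop body then shrinks to a single condition. A minimal, self-contained sketch of this step (using stdlib ElementTree and an inline toy document so it runs standalone; the real code takes a file path, and the lxml version only changes the import):

```python
import io
import xml.etree.ElementTree as ET

# Hypothetical minimal sample in the same shape as the benchmark file.
SAMPLE = b"""<site><regions><africa>
  <item id="item0"><location>Africa Centre</location></item>
  <item id="item1"><location>United States</location></item>
</africa></regions></site>"""

def count_locations(file_obj, match):
    count = 0
    # Only "end" events are requested, so no per-event check is needed.
    for _, elem in ET.iterparse(file_obj, events=("end",)):
        if elem.tag == 'location' and elem.text and match in elem.text:
            count += 1
    return count

print(count_locations(io.BytesIO(SAMPLE), 'Africa'))  # prints 1
```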

count = 92
4.24user 0.06system 0:04.31elapsed 99%CPU (0avgtext+0avgdata 23264maxresident)k

Ok, another tiny improvement. We won a couple of percent, although not really worth mentioning. Now let's see what lxml's API can do for us.

First, let's look at the structure of the XML file. Nicely, the xmlgen tool has a mode for generating an indented version of the same file, which makes it easier to investigate. Here's the start of the indented version of the file (note that we are always parsing the smaller version of the file, which contains newlines but no indentation):

<?xml version="1.0" standalone="yes"?>
<site>
  <regions>
    <africa>
      <item id="item0">
        <location>United States</location>
        <name>duteous nine eighteen </name>

The root tag is site, which then contains regions (apparently one per continent), then a series of item elements, which contain the location. In a real data file, it would probably be enough to only look at the africa region when looking for Africa as a location, but a) this is (pseudo-)randomly generated data, b) even "real" data isn't always clean, and c) a location "Africa" actually seems weird when the region is already africa.

Anyway. Let's assume we have to look through all regions to get a correct count. But given the structure of the item tag, we can simply select the location elements and do the following in lxml:

def count_locations(file_path, match):
    count = 0
    for _, elem in ET.iterparse(file_path, events=("end",), tag='location'):
        if elem.text and match in elem.text:
            count += 1
        elem.clear()
    return count
count = 92
3.06user 0.62system 0:03.68elapsed 99%CPU (0avgtext+0avgdata 1529292maxresident)k

That's a lot faster. But what happened to the memory? 1.5 GB? We used to be able to process the whole file with only 23 MiB peak!

The reason is that the loop now only runs for location elements; everything else is handled internally by the parser – and the parser builds an in-memory XML tree for us. The elem.clear() call that we previously used for deleting processed parts of that tree is now only executed for location, a pure text tag, and thus cleans up almost nothing. We need to take care to clean up more again, so let's intercept on the item instead and look for the location from there.
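To make the cleanup point concrete, here is a standalone sketch in stdlib ElementTree terms, again with hypothetical inline sample data (lxml additionally lets you delete already-processed siblings via elem.getparent() and elem.getprevious(), which ElementTree does not offer):

```python
import io
import xml.etree.ElementTree as ET

# Hypothetical minimal sample in the same shape as the benchmark file.
SAMPLE = b"""<site><regions><africa>
  <item id="item0"><location>Africa Centre</location></item>
  <item id="item1"><location>United States</location></item>
</africa></regions></site>"""

def count_locations(file_obj, match):
    count = 0
    for _, elem in ET.iterparse(file_obj, events=("end",)):
        if elem.tag == 'item':
            text = elem.findtext('location')
            if text and match in text:
                count += 1
            # Clear at the item level: this frees the whole subtree we just
            # processed, leaving only a tiny empty element behind.
            elem.clear()
    return count

print(count_locations(io.BytesIO(SAMPLE), 'Africa'))  # prints 1
```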

def count_locations(file_path, match):
    count = 0
    for _, elem in ET.iterparse(file_path, events=("end",), tag='item'):
        text = elem.findtext('location')
        if text and match in text:
            count += 1
        elem.clear()
    return count
count = 92
3.11user 0.37system 0:03.50elapsed 99%CPU (0avgtext+0avgdata 994280maxresident)k

Ok, almost as fast, but still – 1 GB of memory? Why doesn't the cleanup work? Let's look at the file structure some more.

$ egrep -n '^(  )?<' bench_pp.xml
1:<?xml version="1.0" standalone="yes"?>
3:  <regions>
2753228:  </regions>
2753229:  <categories>
2822179:  </categories>
2822180:  <catgraph>
2824181:  </catgraph>
2824182:  <people>
3614042:  </people>
3614043:  <open_auctions>
5520437:  </open_auctions>
5520438:  <closed_auctions>
6401794:  </closed_auctions>

Ah, so there is actually much more data in there that is completely irrelevant for our task! All we really need to look at is the first ~2.7 million lines that contain the regions data. The entire second half of the file is useless, and simply generates heaps of data that our cleanup code does not handle. Let's make use of that knowledge in our code. We can intercept on both the item and the regions tags, and stop as soon as the regions data section ends.

def count_locations(file_path, match):
    count = 0
    for _, elem in ET.iterparse(file_path, events=("end",),
                                tag=('item', 'regions')):
        if elem.tag == 'regions':
            break
        text = elem.findtext('location')
        if text and match in text:
            count += 1
        elem.clear()
    return count
count = 92
1.22user 0.04system 0:01.27elapsed 99%CPU (0avgtext+0avgdata 22048maxresident)k

That's great! We're actually using less memory than in the beginning now, and managed to cut down the runtime from 4.6 seconds to 1.2 seconds. That's almost a factor of 4!

Let's try one more thing. We are already intercepting on two tag names, and then searching for a third one. Why not intercept on all three directly?

def count_locations(file_path, match):
    count = 0
    for _, elem in ET.iterparse(file_path, events=("end",),
                                tag=('item', 'location', 'regions')):
        if elem.tag == 'location':
            text = elem.text
            if text and match in text:
                count += 1
        elif elem.tag == 'regions':
            break
        else:
            elem.clear()
    return count
count = 92
1.10user 0.03system 0:01.13elapsed 99%CPU (0avgtext+0avgdata 21912maxresident)k

Nice. Another bit faster, and another bit less memory used.

Anything else we can do? Yes. We can tune the parser a little more. Since we're only interested in the non-empty text content inside of tags, we can ignore all newlines that appear in our input file between the tags. lxml's parser has an option for removing such blank text, which avoids creating an in-memory representation for it.

def count_locations(file_path, match):
    count = 0
    for _, elem in ET.iterparse(file_path, events=("end",),
                                tag=('item', 'location', 'regions'),
                                remove_blank_text=True):
        if elem.tag == 'location':
            text = elem.text
            if text and match in text:
                count += 1
        elif elem.tag == 'regions':
            break
        else:
            elem.clear()
    return count
count = 92
0.97user 0.02system 0:01.00elapsed 99%CPU (0avgtext+0avgdata 21928maxresident)k

While the overall memory usage didn't change, the avoided processing time for creating the useless text nodes and cleaning them up from memory is quite visible.

Overall, algorithmically improving our code and making better use of lxml's features gave us a speedup from initially 4.6 seconds down to one second. And we paid for that improvement with 4 additional lines of code inside our function. That's only half of the code which Eli's SAX based Go implementation needs (which, mind you, does not build an in-memory tree for you at all). And the Go code is only slightly faster than the initial Python implementations that we started from. Way to go! ;-)

Speaking of SAX, lxml also has a SAX interface. So let's compare how that performs.

import sys
import lxml.etree as ET

class Done(Exception):
    pass

class SaxCounter:
    def __init__(self, match):
        self.count = 0
        self.match = match
        self.text = []
        self.data = self.text.append

    def start(self, tag, attribs):
        del self.text[:]

    def end(self, tag):
        if tag == 'location':
            if self.text and self.match in ''.join(self.text):
                self.count += 1
        elif tag == 'regions':
            raise Done()

    def close(self):
        pass

def count_locations(file_path, match):
    target = SaxCounter(match)
    parser = ET.XMLParser(target=target)
    try:
        ET.parse(file_path, parser=parser)
    except Done:
        pass
    return target.count

count = count_locations(sys.argv[1], 'Africa')
print('count =', count)
count = 92
1.23user 0.02system 0:01.25elapsed 99%CPU (0avgtext+0avgdata 16060maxresident)k

And the exact same code works in ElementTree if you change the import again:

count = 92
1.83user 0.02system 0:01.85elapsed 99%CPU (0avgtext+0avgdata 10280maxresident)k

Also, removing the regions check from the end() SAX method above, thus reading the entire file, yields this for lxml:

count = 92
3.22user 0.04system 0:03.27elapsed 99%CPU (0avgtext+0avgdata 15932maxresident)k

and this for ElementTree:

count = 92
4.72user 0.07system 0:04.79elapsed 99%CPU (0avgtext+0avgdata 10300maxresident)k

Seeing the numbers in comparison to iterparse(), it does not seem worth the complexity, unless the memory usage is really, really pressing.

A final note: here's the improved ElementTree iterparse() implementation that also avoids parsing useless data.

import sys
import xml.etree.ElementTree as ET

def count_locations(file_path, match):
    count = 0
    for event, elem in ET.iterparse(file_path, events=("end",)):
        if elem.tag == 'location':
            if elem.text and match in elem.text:
                count += 1
        elif elem.tag == 'regions':
            break
        elem.clear()
    return count

count = count_locations(sys.argv[1], 'Africa')
print('count =', count)
count = 92
1.71user 0.02system 0:01.74elapsed 99%CPU (0avgtext+0avgdata 11876maxresident)k

And while not as fast as the lxml version, it still runs considerably faster than the original implementation. And uses less memory.

Learnings to take away:

  • Say what you want.
  • Stop when you have it.

Speeding up basic object operations in Cython

Raymond Hettinger published a nice little micro-benchmark script for comparing basic operations like attribute or item access in CPython and comparing the performance across Python versions. Unsurprisingly, Cython performs quite well in comparison to the latest CPython 3.8-pre development version, executing most operations 30-50% faster. But the script allowed me to tune some more performance out of certain less well performing operations. The timings are shown below, first those for CPython 3.8-pre as a baseline, then (for comparison) the Cython timings with all optimisations disabled that can be controlled by C macros (gcc -DCYTHON_...=0), the normal (optimised) Cython timings, and the now improved version at the end.
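For readers who want to measure this kind of thing themselves, here is a minimal timeit-based sketch in the spirit of the benchmark (this is not Hettinger's actual script; the label in the output is just ours, and the measured operation is a plain instance attribute read):

```python
import timeit

# Setup code compiled once per timing run: a class with one instance attribute.
SETUP = """
class A:
    def __init__(self):
        self.x = 1
a = A()
"""

def bench_ns(stmt, setup=SETUP, number=100_000, repeat=3):
    # Take the best of several runs and convert to nanoseconds per operation.
    best = min(timeit.repeat(stmt, setup=setup, number=number, repeat=repeat))
    return best / number * 1e9

print(f"read_instancevar: {bench_ns('a.x'):.1f} ns")
```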

                               CPython 3.8  Cython 3.0  Cython 3.0  Cython 3.0
                               (pre)        (no opt)    (pre)       (tuned)
Variable and attribute read access:
  read_local                      5.5 ns      0.2 ns      0.2 ns      0.2 ns
  read_nonlocal                   6.0 ns      0.2 ns      0.2 ns      0.2 ns
  read_global                    17.9 ns     13.3 ns      2.2 ns      2.2 ns
  read_builtin                   21.0 ns      0.2 ns      0.2 ns      0.1 ns
  read_classvar_from_class       23.7 ns     16.1 ns     14.1 ns     14.1 ns
  read_classvar_from_instance    20.9 ns     11.9 ns     11.2 ns     11.0 ns
  read_instancevar               31.7 ns     22.3 ns     20.8 ns     22.0 ns
  read_instancevar_slots         25.8 ns     16.5 ns     15.3 ns     17.0 ns
  read_namedtuple                23.6 ns     16.2 ns     13.9 ns     13.5 ns
  read_boundmethod               32.5 ns     23.4 ns     22.2 ns     21.6 ns
Variable and attribute write access:
  write_local                     6.4 ns      0.2 ns      0.1 ns      0.1 ns
  write_nonlocal                  6.8 ns      0.2 ns      0.1 ns      0.1 ns
  write_global                   22.2 ns     13.2 ns     13.7 ns     13.0 ns
  write_classvar                114.2 ns    103.2 ns    113.9 ns     94.7 ns
  write_instancevar              49.1 ns     34.9 ns     28.6 ns     29.8 ns
  write_instancevar_slots        33.4 ns     22.6 ns     16.7 ns     17.8 ns
Data structure read access:
  read_list                      23.1 ns      5.5 ns      4.0 ns      4.1 ns
  read_deque                     24.0 ns      5.7 ns      4.3 ns      4.4 ns
  read_dict                      28.7 ns     21.2 ns     16.5 ns     16.5 ns
  read_strdict                   23.3 ns     10.7 ns     10.5 ns     12.0 ns
Data structure write access:
  write_list                     28.0 ns      8.2 ns      4.3 ns      4.2 ns
  write_deque                    29.5 ns      8.2 ns      6.3 ns      6.4 ns
  write_dict                     32.9 ns     24.0 ns     21.7 ns     22.6 ns
  write_strdict                  29.2 ns     16.4 ns     15.8 ns     16.0 ns
Stack (or queue) operations:
  list_append_pop                63.6 ns     67.9 ns     20.6 ns     20.5 ns
  deque_append_pop               56.0 ns     81.5 ns    159.3 ns     46.0 ns
  deque_append_popleft           58.0 ns     56.2 ns     88.1 ns     36.4 ns
Timing loop overhead:
  loop_overhead                   0.4 ns      0.2 ns      0.1 ns      0.2 ns

Some things that are worth noting:

  • There is always a bit of variance across the runs, so don't get excited about a couple of percent difference.
  • The read/write access to local variables is not reasonably measurable in Cython since it uses local/global C variables, and the C compiler discards any useless access to them. But don't worry, they are really fast.
  • Builtins (and module global variables in Py3.6+) are cached, which explains the "close to nothing" timings for them above.
  • Even with several optimisations disabled, Cython code is still visibly faster than CPython.
  • The write_classvar benchmark revealed a performance problem in CPython that is being worked on.
  • The deque related benchmarks revealed performance problems in Cython that are now fixed, as you can see in the last column.

The lucky 2,000

Every now and then I meet people who look a little surprised about things that "everyone knows". There's a nice xkcd comic about this. I looked it up: in Germany, the birth rate in 2017 was 785,000 children, with a clearly rising trend since 2011. That means that in this country, on average, around 2,000 people hear about something for the first time every single day that "everyone" already knows (at least all adults). Every day, 2,000 people. Let's make it a nice experience for them.

What's new in Cython 0.29?

I'm happy to announce the release of Cython 0.29. In case you didn't hear about Cython before, it's the most widely used statically optimising Python compiler out there. It translates Python (2/3) code to C, and makes it as easy as Python itself to tune the code all the way down into fast native code. This time, we added several new features that help with speeding up and parallelising regular Python code to escape from the limitations of the GIL.

So, what exactly makes this another great Cython release?

The contributors

First of all, our contributors. A substantial part of the changes in this release was written by users and non-core developers and contributed via pull requests. A big "Thank You!" to all of our contributors and bug reporters! You really made this a great release.

Above all, Gabriel de Marmiesse has invested a remarkable amount of time into restructuring and rewriting the documentation. It now has a lot less historic smell, and much better, tested (!) code examples. And he obviously found more than one problematic piece of code in the docs that we were able to fix along the way.

Cython 3.0

And this will be the last 0.x release of Cython. The Cython compiler has been in production critical use for years, all over the world, and there is really no good reason for it to have a 0.x version scheme. In fact, the 0.x release series can easily be counted as 1.x, which is one of the reasons why we now decided to skip the 1.x series altogether. And, while we're at it, why not the 2.x prefix as well. Shift the decimals of 0.29 a bit to the left, and then the next release will be 3.0. The main reason for that is that we want 3.0 to do two things: a) switch the default language compatibility level from Python 2.x to 3.x and b) break with some backwards compatibility issues that get more in the way than they help. We have started collecting a list of things to rethink and change in our bug tracker.

Turning the language level switch is a tiny code change for us, but a larger change for our users and the millions of source lines in their code bases. In order to avoid any resemblance with the years of effort that went into the Py2/3 switch, we took measures that allow users to choose how much effort they want to invest, from "almost none at all" to "as much as they want".

Cython has a long tradition of helping users adapt their code for both Python 2 and Python 3, ever since we ported it to Python 3.0. We used to joke back in 2008 that Cython was the easiest way to migrate an existing Py2 code base to Python 3, and it was never really meant as a joke. Many annoying details are handled internally in the compiler, such as the range versus xrange renaming, or dict iteration. Cython has supported dict and set comprehensions before they were backported to Py2.7, and has long provided three string types (or four, if you want) instead of two. It distinguishes between bytes, str and unicode (and it knows basestring), where str is the type that changes between Py2's bytes str and Py3's Unicode str. This distinction helps users to be explicit, even at the C level, what kind of character or byte sequence they want, and how it should behave across the Py2/3 boundary.

For Cython 3.0, we plan to switch only the default language level, which users can always change via a command line option or the compiler directive language_level. To be clear, Cython will continue to support the existing language semantics. They will just no longer be the default, and users have to select them explicitly by setting language_level=2. That's the "almost none at all" case. In order to prepare the switch to Python 3 language semantics by default, Cython now issues a warning when no language level is explicitly requested, and thus pushes users into being explicit about what semantics their code requires. We obviously hope that many of our users will take the opportunity and migrate their code to the nicer Python 3 semantics, which Cython has long supported as language_level=3.

But we added something even better, so let's see what the current release has to offer.

A new language-level

Cython 0.29 supports a new setting for the language_level directive, language_level=3str, which will become the new default language level in Cython 3.0. We already added it now, so that users can opt in and benefit from it right away, and already prepare their code for the coming change. It's an "in between" kind of setting, which enables all the nice Python 3 goodies that are not syntax compatible with Python 2.x, but without requiring all unprefixed string literals to become Unicode strings when the compiled code runs in Python 2.x. This was one of the biggest problems in the general Py3 migration. And in the context of Cython's integration with C code, it got in the way of our users even a bit more than it would in Python code. Our goals are to make it easy for new users who come from Python 3 to compile their code with Cython and to allow existing (Cython/Python 2) code bases to make use of the benefits before they can make a 100% switch.
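Opting in is a one-line directive comment at the top of a module (shown here as a sketch of a config fragment; the level can equally be set per compiler run, as described above):

```python
# cython: language_level=3str

# With this directive, unprefixed string literals stay str on both Py2 and
# Py3, while the other Python 3 semantics are enabled for the module.
```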

Module initialisation like Python does

One great change under the hood is that we managed to enable the PEP-489 support (again). It was already mostly available in Cython 0.27, but led to problems that made us back-pedal at the time. Now we believe that we found a way to bring the saner module initialisation of Python 3.5 to our users, without risking the previous breakage. Most importantly, features like subinterpreter support or module reloading are detected and disabled, so that Cython compiled extension modules cannot be mistreated in such environments. Actual support for these little-used features will probably come at some point, but will certainly require an opt-in of the users, since it is expected to reduce the overall performance of Python operations quite visibly. The more important features, like a correct __file__ path being available at import time, and in fact, extension modules looking and behaving exactly like Python modules during the import, are much more helpful to most users.

Compiling plain Python code with OpenMP and memory views

Another PEP is worth mentioning next, actually two PEPs: 484 and 526, vulgo type annotations. Cython has supported type declarations in Python code for years, has switched to PEP-484/526 compatible typing with release 0.27 (more than one year ago), and has now gained several new features that make static typing in Python code much more widely usable. Users can now declare their statically typed Python functions as not requiring the GIL, and thus call them from parallel OpenMP loops and parallel Python threads, all without leaving Python code compatibility. Even exceptions can now be raised directly from thread-parallel code, without first having to acquire the GIL explicitly.

And memory views are available in Python typing notation:

import cython
from cython.parallel import prange

@cython.cfunc
@cython.nogil
def compute_one_row(row: cython.double[:]) -> cython.int:
    ...

def process_2d_array(data: cython.double[:,:]):
    i: cython.Py_ssize_t

    for i in prange(data.shape[0], num_threads=16, nogil=True):
        compute_one_row(data[i])
This code will work with NumPy arrays when run in Python, and with any data provider that supports the Python buffer interface when compiled with Cython. As a compiled extension module, it will execute at full C speed, in parallel, with 16 OpenMP threads, as requested by the prange() loop. As a normal Python module, it will support all the great Python tools for code analysis, test coverage reporting, debugging, and whatnot, although Cython also has direct support for a couple of those by now. Profiling (with cProfile) and coverage analysis (with coverage.py) have been around for several releases, for example. But debugging a Python module in the interpreter is obviously still much easier than debugging a native extension module, with all the edit-compile-run cycle overhead.

Cython's support for compiling pure Python code combines the best of both worlds: native C speed, and easy Python code development, with full support for all the great Python 3.7 language features, even if you still need your (compiled) code to run in Python 2.7.

More speed

Several improvements make use of the dict versioning that was introduced in CPython 3.6. It allows module global names to be looked up much faster, close to the speed of static C globals. Also, the attribute lookup for calls to cpdef methods (C methods with Python wrappers) can benefit a lot; such calls can become up to 4x faster.

Constant tuples and slices are now deduplicated and only created once at module init time. Especially with common slices like [1:] or [::-1], this can reduce the amount of one-time initialisation code in the generated extension modules.

The changelog lists several other optimisations and improvements.

Many important bug fixes

We've had a hard time following a change in CPython 3.7 that "broke the world", as Mark Shannon put it. It was meant as a mostly internal change on their side that improved the handling of exceptions inside of generators, but it turned out to break all extension modules out there that were built with Cython, and then some. A minimal fix was already released in Cython 0.28.4, but 0.29 brings complete support for the new generator exception stack in CPython 3.7, which allows exceptions raised or handled by Cython implemented generators to interact correctly with CPython's own generators. Upgrading is therefore warmly recommended for better CPython 3.7 support. As usual with Cython, translating your existing code with the new release will make it benefit from the new features, improvements and fixes.

Stackless Python has not been a big focus for Cython development so far, but the developers noticed a problem with Cython modules earlier this year. Normally, they try to keep Stackless binary compatible with CPython, but there are corner cases where this is not possible (specifically frames), and one of these broke the compatibility with Cython compiled modules. Cython 0.29 now contains a fix that makes it play nicely with Stackless 3.x.

A funny bug that is worth noting is a mysteriously disappearing string multiplier in earlier Cython versions. A constant expression like "x" * 5 results in the string "xxxxx", but "x" * 5 + "y" becomes "xy". Apparently not a common code construct, since no user ever complained about it.
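For the record, the expected constant-folding results (the second expression is the one that earlier versions got wrong):

```python
# Correct results after the fix; earlier Cython versions folded the
# second constant expression to "xy" instead of "xxxxxy".
assert "x" * 5 == "xxxxx"
assert "x" * 5 + "y" == "xxxxxy"
print("constant folding ok")
```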

Long-time users of Cython and NumPy will be happy to hear that Cython's memory views are now API-1.7 clean, which means that they can get rid of the annoying "Using deprecated NumPy API" warnings in the C compiler output. Simply append the C macro definition ("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION") to the macro setup of your distutils extensions in setup.py to make them disappear. Note that this does not apply to the old low-level ndarray[...] syntax, which exposes several deprecated internals of the NumPy C-API that are not easy to replace. Memory views are a fast high-level abstraction that does not rely specifically on NumPy and therefore does not suffer from these API constraints.
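In a setup.py, that macro goes into the define_macros list of the extension. A sketch with hypothetical module and file names (only the define_macros entry is taken from the text above):

```python
from setuptools import Extension, setup
from Cython.Build import cythonize

extensions = [
    Extension(
        "fast_views",               # hypothetical module name
        ["fast_views.pyx"],         # hypothetical source file
        # Silence the "Using deprecated NumPy API" warnings for memory views:
        define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
    )
]

setup(ext_modules=cythonize(extensions))
```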

Less compilation :)

And finally, as if to make a point that static compilation is a great tool but not always a good idea, we decided to reduce the number of modules that Cython compiles of itself from 13 down to 8, thus keeping 5 more modules normally interpreted by Python. This makes the compiler runs about 5-7% slower, but reduces the packaged size and the installed binary size by about half, thus reducing download times in CI builds and virtualenv creations. Python is a very efficient language when it comes to functionality per line of code, and its byte code is similarly high-level and efficient. Compiled native code is a lot larger and more verbose in comparison, and this can easily make a difference of megabytes of shared libraries versus kilobytes of Python modules.

We therefore repeat our recommendation to focus Cython's usage on the major pain points in your application, on the critical code sections that a profiler has pointed you at. The ability to compile those, and to tune them at the C level, is what makes Cython such a great and versatile tool.

The Facebook effect, or: why election results surprise us again these days

Written in July 2016.

Most people have heard of the "small-world phenomenon". It explains various everyday effects, among them the surprise of meeting someone, in however improbable a situation, with whom I share some connection without our ever having met before. Be it a mutual acquaintance, a similar origin, or a shared event in the past. Technically speaking, it describes the property of a network (or graph) in which every node is connected to every other node by an extremely short path. This often applies to human acquaintance and social networks, where the distance between two arbitrarily chosen people is usually less than 7 steps via mutual acquaintances.

Especially in what are nowadays called social networks on the Internet, this property is practically celebrated. Since it is so easy to connect with any people (or at least any accounts) anywhere in the world, these networks form very pronounced small-world graphs. Here, every other participant really is only a few network clicks away. Total global connectedness. Humanity is finally growing together. Or is it?

What is often forgotten is that there is a second aspect to such graphs. One is the minimal distance to every node through the entire graph. The other, however, is the set of direct connections of each individual node. Precisely in the social Internet networks it is so easy to create new connections and thereby gain new "friends" that I can immediately connect with everyone who interests me in some way or whom I encounter in some positive sense. Conversely, this means that the second level, that of the "friends" of my "friends", is actually no longer relevant. Not to mention the third and all further levels. After all, I can also connect directly with all "friends" of my "friends" who interest me. Which is what I do. And so they become my direct "friends".

But whom do I accept into the circle of my own "friends"? Whom do I "follow" in these social networks? Naturally, people (or accounts) who think like me, with whom I am in agreement, whom I like. But do I also connect with people who think differently? Who do not share my political views? Who would contradict me if only I talked to them? Or whose manner of expression does not match my social class? Proles? Commies? Right-wing radicals? Child beaters? EU opponents? Warm-shower propagandists?

Why should I put myself through that? If the "friends" of my "friends" do such things, then let them. But they won't become my "friends" that way. Maybe I'll read a comment by these people now and then and get worked up about it, but that's quite enough. I certainly don't need that every day.

So it becomes apparent rather quickly that these social Internet networks amplify an effect that seeks like among like and excludes the other. A modern form of ghettoisation. However small the world may be, however short the shortest path to everyone on the planet: the one direct connection to those who share my opinion is all I need.

In reality, humanity is not growing together. Only the dividing lines are shifting. Away from place of residence and appearance, towards behaviour, level of education and social differences. And the division is deepening. Why should I talk at all to "friends" of my "friends" whom I would never make my own "friends"?

There are many people, especially young people, who have shifted their media consumption largely or even completely away from the classic media of newspapers and television into social networks on the Internet. "My friends will keep me up to date" is often the sentiment behind this. If something happens, you will hear about it anyway. Certainly. But you also run the risk of blanking out the "non-friends" and their opinions. The others. Those with whom I would never connect directly. Because they do not share my opinion. Because they hold an opinion that I reject. That I do not want to hear. That I blank out. That does not match the opinion of my "friends". This is how not only the social networks around an Anders Breivik or those of Pegida work. Self-selecting affiliation and ghetto formation is a fundamental property of social networks on the Internet, whatever the respective selection criteria may be.

A decisive fact that we saw in the vote on Britain's exit from the EU on 23 June 2016 was the low turnout among young voters. Only a third of those under 25 went to vote at all. And only half of the 25- to 35-year-olds. Even though precisely these age groups benefit most from EU membership, through exchange programmes, freedom of travel and the open labour market, and, compared to the highly engaged over-65s, would have continued to benefit from it for a very, very long time. For the better part of their entire lives.

There is a good explanation for this: false security. Many people in the young, well-educated age groups are likely well connected with each other on the Internet, but have few direct contacts with considerably older or socially disadvantaged people. A ghettoisation by age group and social background, in other words. In such ghettos, the impression can quickly arise that there is no need to get involved in anything, because everyone is of the same opinion anyway. The majorities appear to be settled in advance, and since they fall in my favour, I, the ghetto dweller, feel cosily and snugly wrapped up in them and lose any pressure to stand up for something myself. My majority will decide correctly anyway, while for me the weather today is too bad, or attending a concert too important, to go out and vote.

There are further fine examples of this effect. In the primaries for the 2004 US presidential election, the candidate Howard Dean relied almost exclusively on Internet activities and organised his campaign through them. He thereby achieved high visibility among his supporters, among journalists and among other users of these media. Only in the first primaries did it become apparent that this high visibility in Internet campaigns, and the high poll numbers achieved there, did not translate into real election results. A clear case of self-deception within one's own ghetto.

Meanwhile, several studies show that people who are active in social networks on the Internet engage considerably less in their real-world surroundings. That they easily mistake clicking a "Like" button for social engagement. Why take part in a demonstration, lead exhausting discussions with people who think differently, or help affected people with donations and deeds, when I can also "show" my opinion by clicking a button or quickly signing a petition? "Am" I really Charlie, Paris, Brussels, Istanbul, Aleppo or Baghdad, just because I clicked a button on the Internet? Or because I "took a stand" with a hashtag? And for whom did I perform this click? For the people affected? Really? Was it not rather for my "friends", who see my "stand", whom I thereby wrap up snugly, and to whom my stand lets me feel I belong? What good does my click on the "Like" button of Doctors Without Borders do for a wounded person in Baghdad?

We must understand and accept again that pluralism and diversity of opinion are subject to a symmetry, and are not just something that concerns those who hold a different opinion. Wherever there are people of a different opinion, I too am a person with a different opinion. Not even when one of these opinions contradicts fundamental values like human rights can I be sure that it is only propagated by "idiots" and "outsiders". These categories, too, are merely subjective labels.

Defending the human right to freedom of opinion means, first of all, actually taking notice of other opinions and accepting their existence. Secondly, showing tolerance towards the people who hold them. And only in third place comes my right to openly contradict the opinions that do not coincide with mine, especially when they conflict with my convictions. But there also stands the duty to object when these opinions are directed against people and minorities. For every articulated opinion must also be measured against the first article of our Basic Law. Human dignity is inviolable.

Freedom of opinion naturally leaves every individual free to ignore other people and their opinions. But it is not an invitation to snuggle up in ghettos and stop talking to one another. It must never lead to entire population groups ignoring each other. A democracy lives only through discussion and exchange. Breaking off the dialogue leads straight into ghettoisation, tunnel vision and radicalisation. And to unexpected election results.

Cython, pybind11, cffi – which tool should you choose?

In and after the conference talks that I give about Cython, I often get asked how it compares to other tools like pybind11 and cffi. There are others, but these are definitely the three that are widely used and "modern" in the sense that they provide an efficient user experience for today's real-world problems. And as with all tools from the same problem space, there are overlaps and differences. First of all, pybind11 and cffi are pure wrapping tools, whereas Cython is a Python compiler and a complete programming language that is used to implement actual functionality and not just bind to it. So let's focus on the area where the three tools compete: extending Python with native code and libraries.

Using native code from the Python runtime (specifically CPython) has been at the heart of the Python ecosystem since the very early days. Python is a great language for all sorts of programming needs, but it would not have the success and would not be where it is today without its great ecosystem, which is heavily based on fast, low-level, native code. And the world of computing is full of such code, often decades old, heavily tuned, well tested and production proven. Indicators like the TIOBE Index suggest that low-level languages like C and C++ have even been gaining importance again in recent years, decades after their creation.

Today, no-one would attempt to design a (serious/practical) programming language anymore that does not ship out of the box with a complete and easy-to-use way to access all that native code. This ability is often referred to as an FFI, a foreign function interface. Rust is an excellent example of a modern language that was designed with that ability in mind. The FFI in LuaJIT is a great design of a fast and easy-to-use FFI for the Lua language. Even Java and its JVM, which are certainly not known for their ease of code reuse, have provided the JNI (Java Native Interface) from the early days. CPython, being written in C, has made it very easy to interface with C code right from the start, and above all others the whole scientific computing and big data community has made great use of that over the past 25 years.

Over time, many tools have aimed to simplify the wrapping of external code. The venerable SWIG with its long list of supported target languages is clearly worth mentioning here. Partially a successor to SWIG (and sip), shiboken is a C++ bindings generator used by the PySide project to auto-create wrapper code for the large Qt C++ API.

A general shortcoming of all wrapper generators is that many users eventually reach the limits of their capabilities, be it in terms of performance, feature support, or language integration on one side or the other. From that point on, users start fighting the tool in order to make it support their use case at hand, and it is not unheard of that projects start over from scratch with a different tool. Therefore, most projects are better off starting directly with a manually written wrapper, at least when the part of the native API that they need to wrap is not prohibitively vast.

The lxml XML toolkit is an excellent example of that. It wraps libxml2 and libxslt with their huge low-level C-APIs. Had the project used a wrapper generator, mapping this C-API directly to Python would have made the language integration of the Python-level API close to unusable. In fact, the whole project started because generated Python bindings for both already existed that were like the thrilling embrace of an exotic stranger (Mark Pilgrim). And beautifying the API at the Python level by adding another Python wrapper layer would have countered the advantages of a generated wrapper and also severely limited its performance. Despite the sheer vastness of the C-API that it wraps, the decision for manual wrapping and against a wrapper generator was the foundation of a very fast and highly pythonic tool.

Nowadays, three modern tools are widely used in the Python community that support manual wrapping: Cython, cffi and pybind11. These three tools serve three different sides of the need to extend (C)Python with native code.

  • Cython is Python with native C/C++ data types.

    Cython is a static Python compiler. For people coming from a Python background, it is much easier to express their coding needs in Python and then optimise and tune the code than to rewrite it in a foreign language. Cython allows them to do that by automatically translating their Python code to C, which often avoids the need for an implementation in a low-level language.

    Cython uses C type declarations to mix C/C++ operations into Python code freely, be it the usage of C/C++ data types and containers, or of C/C++ functions and objects defined in external libraries. There is a very concise Cython syntax that uses special additional keywords (cdef) outside of Python syntax, as well as ways to declare C types in pure Python syntax. The latter allows writing type-annotated Python code that gets optimised into fast C code when compiled by Cython, but that remains entirely pure Python code that can be run, analysed and debugged with the usual Python tools.

    When it comes to wrapping native libraries, Cython has strong support for designing a Python API for them. Being Python, it really keeps the developer focussed on the usage from the Python side and on solving the problem at hand, and takes care of most of the boilerplate code through automatic type conversions and low-level code generation. Its usage is essentially writing C code without having to write C code, but remaining in the wonderful world of the Python language.

  • pybind11 is modern C++ with Python integration.

    pybind11 is the exact opposite of Cython. Coming from C++, and targeting C++ developers, it provides a C++ API that wraps native functions and classes into Python representations. For that, it makes good use of the compile time introspection features that were added to C++11 (hence the name). Thus, it keeps the user focussed on the C++ side of things and takes care of the boilerplate code for mapping it to a Python API.

    For everyone who is comfortable with programming in C++ and wants to make direct use of all C++ features, pybind11 is the easiest way to make the C++ code available to Python.

  • CFFI is Python with a dynamic runtime interface to native code.

    cffi, then, is the dynamic way to load and bind to external shared libraries from regular Python code. It is similar to the ctypes module in the Python standard library, but generally faster and easier to use. Also, it has very good support for the PyPy Python runtime, still better than what Cython and pybind11 can offer. However, the runtime overhead prevents it from coming anywhere close in performance to the statically compiled code that Cython and pybind11 generate for CPython. And the dependency on a well-defined ABI (binary interface) means that C++ support is mostly lacking.

    As long as there is a clear API-to-ABI mapping of a shared library, cffi can directly load and use the library file at runtime, given a header file description of the API. In the more complex cases (e.g. when macros are involved), cffi uses a C compiler to generate a native stub wrapper from the description and uses that to communicate with the library. That raises the runtime dependency bar quite a bit compared to ctypes (and both Cython and pybind11 only need a C compiler at build time, not at runtime), but on the other hand also enables wrapping library APIs that are difficult to use with ctypes.

This list shows the clear tradeoffs of the three tools. If performance is not important, if dynamic runtime access to libraries is an advantage, and if users prefer writing their wrapping code in Python, then cffi (or even ctypes) will do the job, nicely and easily. Otherwise, users with a strong C++ background will probably prefer pybind11 since it allows them to write functionality and wrapper code in C++ without switching between languages. For users with a Python background (or at least not with a preference for C/C++), Cython will be very easy to learn and use since the code remains Python, but gains the ability to do efficient native C/C++ operations at any point.
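For a feel of what the dynamic end of this spectrum looks like, here is a minimal sketch using the stdlib ctypes module, the simpler cousin of cffi's ABI mode. It assumes a Unix-like system where the C math library can be located at runtime; the declared signature is the real C prototype `double sqrt(double)`:

```python
import ctypes
import ctypes.util

# Load the C math library dynamically at runtime -- the same idea that
# cffi's ABI mode builds on, here with only the standard library.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature of sqrt: double sqrt(double).
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

cffi removes most of this ceremony by parsing the C declarations from a header-like string, but the underlying mechanism is the same: no compiler at runtime, just symbol lookup in a shared library.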

What CPython could use Cython for

There has been a recent discussion about using Cython for CPython development. I think this is a great opportunity for the CPython project to make more efficient use of its scarcest resource: developer time of its spare time contributors and maintainers.

The entry level for new contributors to the CPython project is often perceived to be quite high. While many tasks are actually beginner friendly, such as helping with the documentation or adding features to the Python modules in the stdlib, such important tasks as fixing bugs in the core interpreter, working on data structures, optimising language constructs, or improving the test coverage of the C-API require a solid understanding of C and the CPython C-API.

Since a large part of CPython is implemented in C, and since it exposes a large C-API to extensions and applications, C level testing is key to providing a correct and reliable native API. There were a couple of cases in the past years where new CPython releases actually broke certain parts of the C-API, and it was not noticed until people complained that their applications broke when trying out the new release. This is because the test coverage of the C-API is much lower than that of the runtime's well-tested Python level and standard library. And the main reason for that is that it is much more difficult to write tests in C than in Python, so people have a high incentive to get around it if they can. Since the C-API is used internally inside of the runtime, it is often assumed to be implicitly tested by the Python tests anyway, which raises the bar for an explicit C test even further. But this implicit coverage is not always given, and it also does not reduce the need for regression tests. Cython could help here by making it easier to write C level tests that integrate nicely with the existing Python unit test framework that the CPython project uses.

Basically, writing a C level test in Cython means writing a Python unittest function and then doing an explicit C operation in it that represents the actual test code. Here is an example for testing the PyList_Append C-API function:

from cpython.object cimport PyObject
from cpython.list cimport PyList_Append

def test_PyList_Append_on_empty_list():
    # setup code
    l = []
    assert len(l) == 0
    value = "abc"
    pyobj_value = <PyObject*> value
    refcount_before = pyobj_value.ob_refcnt

    # conservative test call, translates to the expected C code,
    # although with automatic exception propagation if it returns -1:
    errcode = PyList_Append(l, value)

    # validation
    assert errcode == 0
    assert len(l) == 1
    assert l[0] is value
    assert pyobj_value.ob_refcnt == refcount_before + 1

In the Cython project itself, what we actually do is write doctests. The functions and classes in a test module are compiled with Cython, and the doctests are then executed in Python and call the Cython implementations. This provides a very nice and easy way to compare the results of Cython operations with those of Python, and it also trivially supports data-driven tests by calling a function multiple times from a doctest, for example:

from cpython.number cimport PyNumber_Add

def test_PyNumber_Add(a, b):
    """
    >>> test_PyNumber_Add('abc', 'def')
    'abcdef'
    >>> test_PyNumber_Add('abc', '')
    'abc'
    >>> test_PyNumber_Add(2, 5)
    7
    >>> -2 + 5
    3
    >>> test_PyNumber_Add(-2, 5)
    3
    """
    # The following is equivalent to writing "return a + b" in Python or Cython.
    return PyNumber_Add(a, b)

This could even trivially be combined with hypothesis and other data driven testing tools.

But Cython's use cases are not limited to testing. Maintenance and feature development would probably benefit even more from a reduced entry level.

Many language optimisations are applied in the AST optimiser these days, and that is implemented in C. However, these tree operations can be fairly complex and are thus non-trivial to implement. Doing that in Python rather than C would be much easier to write and maintain, but since this code is a part of the Python compilation process, there's a chicken-and-egg problem here in addition to the performance problem. Cython could solve both problems and allow for more far-reaching optimisations by keeping the necessary transformation code readable.

Performance is also an issue in other parts of CPython, namely the standard library. Several stdlib modules are compute intensive. Many of them have two implementations: one in Python and a faster one in C, a so-called accelerator module. This means that adding a feature to these modules requires duplicate effort, proficiency in both Python and C, and a solid understanding of the C-API, reference counting, garbage collection, and whatnot. On the other hand, many modules that could certainly benefit from native performance lack such an accelerator, e.g. difflib, textwrap, fractions, statistics, argparse, email, urllib.parse and many, many more. The asyncio module is becoming more and more important these days, but its native accelerator only covers a very small part of its large functionality, and it also does not expose a native API that performance hungry async tools could hook into. And even though the native accelerator of the ElementTree module is an almost complete replacement, the somewhat complex serialisation code is still implemented completely in Python, which shows in comparison to the native serialisation in lxml.
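The dual-implementation pattern behind such accelerator modules can be sketched in a few lines of plain Python. The accelerator name below (`_fastmod`) is made up for illustration, so the import fails here and the pure Python version stays in place, exactly as it would on a build without the C module:

```python
# The stdlib's "accelerator module" pattern: the pure Python
# implementation is always defined, and a C accelerator overrides it
# when available ("_fastmod" is a hypothetical module name).
def dot(xs, ys):
    """Pure Python reference implementation of a dot product."""
    return sum(x * y for x, y in zip(xs, ys))

try:
    from _fastmod import dot  # hypothetical C accelerator
except ImportError:
    pass  # keep the pure Python version

print(dot([1, 2, 3], [4, 5, 6]))  # 4 + 10 + 18 = 32
```

Every feature added to such a module has to be implemented and maintained twice, which is exactly the duplication that compiling the Python version with Cython could avoid.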

Compiling these modules with Cython would speed them up, probably quite visibly. For this use case, it is possible to keep the code entirely in Python, and just add enough type declarations to make it fast when compiled. The typing syntax that PEP-484 (Python 3.5) and PEP-526 (Python 3.6) added makes this really easy and straightforward. A manually written accelerator module could thus be avoided, and with it a lot of duplicated functionality and maintenance overhead.
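As a sketch of that idea, the following is plain annotated Python (a made-up helper, not stdlib code): it runs unchanged in CPython, but the PEP-484/526 annotations give a compiler like Cython enough type information to generate fast C code for the loop:

```python
# Plain Python with PEP-484/526 type annotations. It runs unchanged in
# CPython; compiled with Cython, the annotated locals can become native
# C variables. ("mean" is an illustrative function, not stdlib code.)
def mean(values: list) -> float:
    total: float = 0.0
    n: int = 0
    for v in values:
        total += v
        n += 1
    return total / n if n else 0.0

print(mean([1.0, 2.0, 3.0]))  # 2.0
```

The point is that the same file serves as both the pure Python implementation and the source of the accelerated build, so there is only one code base to maintain.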

Feature development would also be substantially simplified, especially for new contributors. Since Cython compiles Python code, it would allow people to contribute a Python implementation of a new feature that compiles down to C. And we all know that developing new functionality is much easier in Python than in C. The remaining task is then only to optimise it and not to rewrite it in a different language.

My feeling is that replacing some parts of the CPython C development with Cython has the potential to bring a visible boost for the contributions to the CPython project.

Update 2018-09-12: Jeroen Demeyer reminded me that I should also mention the ease of wrapping external native libraries. While this is not a big priority for the standard library anymore, it is certainly true that modules like sqlite (which wraps sqlite3), ssl (OpenSSL), expat or even the io module (which wraps system I/O capabilities) would have been easier to write and maintain in Cython than in C. I/O related code in particular tends to be heavy on error handling, which is much nicer to do with raise and f-strings than with error code passing in C.

A really fast Python web server with Cython

Shortly after I wrote about speeding up Python web frameworks with Cython, Nexedi posted an article about their attempt to build a fast multicore web server for Python that can compete with the performance of compiled coroutines in the Go language.

Their goal is to use Cython to build a web framework around a fast native web server, and to use Cython's concurrency and coroutine support to gain native performance also in the application code, without sacrificing the readability that Python provides.

Their experiments look very promising so far. They managed to process 10K requests per second concurrently, which actually do real processing work. That is worth noting, because many web server benchmarks out there content themselves with the blank response time for a "hello world", thus ignoring any concurrency overhead etc. For that simple static "Hello world!", they even got 400K requests per second, which shows that this is not a very realistic benchmark. Under load, their system seems to scale pretty linearly with the number of threads, also not a given among web frameworks.

I might personally get involved in further improving Cython for these kinds of concurrent, async applications. Stay tuned.

Cython for web frameworks

I'm excited to see the Python web community pick up Cython more and more to speed up their web frameworks.

uvloop as a fast drop-in replacement for asyncio has been around for a while now, and it's mostly written in Cython as a wrapper around libuv. The Falcon web framework optionally compiles itself with Cython, while keeping up support for PyPy as a plain Python package. New projects like Vibora show that it pays off to design a framework for both (Flask-like) simplicity and (native) speed from the ground up to leverage Cython for the critical parts. Quote of the day:

"300.000 req/sec is a number comparable to Go's built-in web server (I'm saying this based on a rough test I made some years ago). Given that Go is designed to do exactly that, this is really impressive. My kudos to your choice to use Cython." – Reddit user 'beertown'.

Alex Orlov gave a talk at PyCon US 2017 about using Cython for more efficient code, in which he mentioned the possibility of speeding up the Django URL dispatcher by 3x, simply by compiling the module as it is.

Especially in async frameworks, minimising the time spent in processing (i.e. outside of the I/O-Loop) is critical for the overall responsiveness and performance. Anton Caceres and I presented fast async code with Cython at EuroPython 2016, showing how to speed up async coroutines by compiling and optimising them.

In order to minimise the processing time on the server, many template engines use native accelerators in one way or another, and writing those in Cython (instead of C/C++) is a huge gain in terms of maintenance (and probably also speed). But several engines also generate Python code from a templating language, and those templates tend to be more static than not (they are rarely generated at runtime themselves). Therefore, compiling the generated template code, or better yet, targeting Cython directly with the code generation instead of just plain Python, has the potential to speed up template processing a lot. For example, Cython has very fast support for PEP-498 f-strings and even transforms some '%'-formatting patterns into them to speed them up (also in code that requires backwards compatibility with older Python versions). That alone can easily make a difference, as can the faster function and method calls or the looping code that Cython generates.
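The '%'-to-f-string transformation mentioned above rests on the fact that both spellings produce identical output, so a compiler is free to rewrite one into the other. A minimal illustration in plain Python:

```python
# A '%'-formatting pattern and the equivalent PEP-498 f-string produce
# identical results, which is what allows Cython to rewrite the former
# into the latter and skip the runtime cost of the '%' operator.
name, count = "lxml", 3
old_style = "%s: %d" % (name, count)
new_style = f"{name}: {count}"
assert old_style == new_style
print(new_style)  # "lxml: 3"
```

In generated template code, where such formatting runs in tight loops, replacing the dynamic '%' dispatch with direct string building is exactly the kind of micro-optimisation that adds up.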

I'm sure there's way more to come and I'm happily looking forward to all those cool developments in the web area that we are only just starting to see appear.

Update 2018-07-15: Nexedi posted an article about their attempts to build a fast web server using Cython, both for the framework layer and the processing at the application layer. Worth keeping an eye on.


Let me say it quite clearly for once – Munich has an immigration problem.

More and more economic migrants from the surrounding municipalities and villages, from the socially and economically left-behind regions of Bavaria, are moving to Munich.

But: these people are not part of our society. They do not share our traditional, urban values!

They live out their perverse car fetishism by establishing one traffic-jam battleground after another with their oversized armoured vehicles, instead of using public transport like us righteous citizens.

With their fraternity-grade overpowered two-wheelers, they terrorise the peacefully sleeping native population.

In their primitive naivety, they accept completely undemocratic entry-level rents, forcing horrendous rent increases that are then used to drive out long-established local tenants.

They elevate the tyrannical power of the völkisch unity party above the sacred flourishing of our multicultural traditions.

Believe me, if you let them, they will treacherously vote for the CSU behind your backs!

City dwellers! Defend yourselves! Do not concede the fight!

Free yourselves from the infiltration of the provincials! Stop the immigration of the clerical-pastorals!

Now! And forever!