Continuations in Python

In previous posts, I already wrote about continuations in Python. Below are some new ideas on the topic. First of all, I released yesterday a new version (v. 1.2.1) of the tco module, with a decorator. Thinking about it again led me to write a new module from scratch, keeping a similar internal mechanism but behaving quite differently. My new module is called continuation.

While the tco module requires the coder to use a rather heavy syntax, I wanted to achieve a more mature project, easier to use. Here is tail-recursive code for the factorial function:

@with_continuation
def k_factorial(k):
    def _inner(n, f):
        return f if n < 2 else (k << k_factorial)(n-1, n*f)
    return _inner

factorial = lambda n: (with_CC >> k_factorial)(n, 1)

Every feature of the module can be seen in this piece of code:

  • a decorator is provided for defining the functions to be used with the module;
  • the syntax (with_CC >> k_func)(*args) is used for passing the current continuation to a function before calling it with its arguments;
  • the syntax (k << k_func)(*args) is used for calling a function with some arguments after having returned back in the stack to the level of the initial call.

A first difference with the tco module is that decorated functions now take a single argument (a continuation), making the syntax simpler and more consistent. On the other hand, achieving tail-recursion may seem a little more complicated at first glance. Another significant difference is the “visual” notation for playing with the continuation, which should make the process easier to understand: the two operators >> and << are intended to look like arrows pointing toward a meaningful left or right direction.
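To make these two operators more concrete, here is a toy, self-contained sketch of the principle (this is NOT the actual code of the continuation module: the decorator and several details are omitted, and all class and helper names are mine): (k << func)(*args) raises an exception that unwinds the stack, and (with_CC >> func)(*args) catches it and resumes at its own level, so the stack never grows.

```python
class _Jump(Exception):
    # raised by (k << func)(*args): unwind the stack, then call func(*args)
    def __init__(self, func, args):
        self.func, self.args = func, args

class _K:
    def __lshift__(self, func):
        def jump(*args):
            raise _Jump(func, args)
        return jump

class _WithCC:
    def __rshift__(self, func):
        def run(*args):
            k = _K()
            f, a = func, args
            while True:
                try:
                    return f(k)(*a)        # f is a k_-style function: f(k) is its body
                except _Jump as j:
                    f, a = j.func, j.args  # resume at the level of the initial call
        return run

with_CC = _WithCC()

def k_factorial(k):
    def _inner(n, acc):
        return acc if n < 2 else (k << k_factorial)(n - 1, n * acc)
    return _inner

factorial = lambda n: (with_CC >> k_factorial)(n, 1)
print(factorial(5))  # → 120
```

The loop in run acts as a trampoline: each jump lands back at the level of the initial call, which is exactly the behaviour described above.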

Importing the module is done with:

from continuation import with_CC, with_continuation

The following code shows that nested calls are correctly handled: while trapped inside three levels of while True loops, the single return statement in the k_identity function makes the outermost call return immediately:

@with_continuation
def k_test1(k):
    def _inner(v):
        print("Entering loop 1")
        while True: (with_CC >> k_test2)(k, v)
    return _inner

@with_continuation
def k_test2(k):
    def _inner(cont, v):
        print("Entering loop 2")
        while True: (with_CC >> k_test3)(cont, v)
    return _inner

@with_continuation
def k_test3(k):
    def _inner(cont, v):
        print("Entering loop 3")
        while True: (cont << k_identity)(v)
    return _inner

@with_continuation
def k_identity(k):
    def _inner(x):
        print("Entering k_identity")
        return x
    return _inner

print ((with_CC >> k_test1)(42))
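The effect of that single return statement can also be emulated in plain Python with an exception used as a one-shot escape continuation; the following sketch (names are mine, nothing here comes from the module) shows the same jump out of three nested loops:

```python
class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def with_escape(func):
    # call func with a one-shot continuation that aborts all pending frames
    def k(value):
        raise _Escape(value)
    try:
        return func(k)
    except _Escape as e:
        return e.value

def nested(k):
    while True:            # loop 1
        while True:        # loop 2
            while True:    # loop 3
                k(42)      # escapes all three loops at once

print(with_escape(nested))  # → 42
```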

Explaining functional aspects in Python


Many Python programmers who are not aware of really different programming languages sometimes wonder why some coders make such a fuss about various functional features allegedly missing from Python. This post is addressed to them: using the language they know and like, I will try to explain some of the programming techniques involved in these questions.

First of all, I am not at all saying here or below that something is wrong with Python. As a programmer who has spent much time implementing tail-recursion and continuation-passing style as modules for the Python language, I am not claiming that such things should be implemented in the core of the language; I read the position of the author of Python here and here; I understand it and agree with it up to a certain point. Let’s face the truth: nobody actually forces me to use Python if I don’t want to use it. When I need speed, I use C or Fortran; when I want to focus on elegant algorithms, I use Lisp or Scheme; when I want to understand some problem more deeply, I use APL or J. For this reason, I think that Python is more or less as it should be: a versatile language with many libraries that allow one to quickly implement a piece of code and debug it.

But sometimes I miss some of these features, and this is why I care about them (as other coders also do). Some Python programmers may ask: what can’t I do with Python that I could do with tail-recursion elimination? This question is asked repeatedly; I can even say that I am writing this post in order to reply to an old question on Stack Overflow.

To that question, the answer is easy: there is absolutely nothing I can do with functional techniques that I can’t already do with Python. But this is very certainly an ill-posed question. The great programmer Kenneth Iverson always thought that the way you code helps you think about what to code. If such a claim sounds too strange, or if you rather think that you merely want to know which algorithm you need for such or such problem and then adapt it to Python, then you probably don’t need to read further: you will not discover here some hidden gem that could make your coding more productive.

Using the tco module

The tco module is a powerful module made of only a couple of lines, mainly allowing:

  • tail-call optimization
  • optimized tail-recursion
  • continuation-passing style
  • pseudo call-with-current-continuation

Two ideas lie behind these features: repeatedly calling functions without having the size of the execution stack increase, and possibly jumping back from a place in the execution stack to a previous one without passing through the intermediate calls and their own waiting return statements.
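The first idea is usually implemented with some kind of trampoline. The sketch below illustrates the principle only (it is not the tco module’s actual code; tail_call and trampoline are hypothetical names): a tail call is represented as a (function, arguments) pair, and a driver loop performs the deferred calls one after the other, so the execution stack stays flat.

```python
def tail_call(func, *args):
    # defer the call: return a description of it instead of performing it
    return (func, args)

def trampoline(func, *args):
    result = tail_call(func, *args)
    # keep performing deferred calls until a plain value comes back
    while isinstance(result, tuple) and len(result) == 2 and callable(result[0]):
        f, a = result
        result = f(*a)
    return result

def countdown(n):
    # tail-recursive countdown: returns a deferred call instead of recursing
    return "done" if n == 0 else tail_call(countdown, n - 1)

print(trampoline(countdown, 10**6))  # → done
```

A depth of one million would blow the default recursion limit with ordinary recursion; here the stack depth stays constant.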

Drawing hands by M.C. Escher

A first example: a binary search tree

Let’s assume we have a binary tree; searching for some node in it is easily done with recursion. Of course, Python can already do it very well, and most of the time the default size of the execution stack is enough; but I will show here how to do it with tail-recursion. I will also add one more thing: directly escaping from the deepest call of the recursion to the next function which has to handle the result of the search, without leaving the recursion through a chain of return statements.

First, let’s build some perfect binary tree:

from random import sample
nbrlevels = 16                # number of levels in the tree
n = 2**nbrlevels - 1          # number of nodes
t = sample(range(2*n), n)     # values of the nodes
def make_tree(l):
    if len(l) == 1: return [l[0], [], []]
    return [l[0], make_tree(l[1:len(l)//2+1]),
                  make_tree(l[len(l)//2+1:])]
tree = make_tree(t)
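For comparison, the same kind of traversal written with plain, non-optimized recursion over these [value, left, right] lists could look like this (plain_search is my own name, not part of the tco examples):

```python
def plain_search(node, n):
    # mirrors the traversal used in the tco version of the search
    if n == node[0]:
        return node                           # found
    if node[1]:                               # children exist
        if n < node[2][0]:
            return plain_search(node[1], n)   # search left
        return plain_search(node[2], n)       # search right
    return None                               # leaf reached, not found

# tiny hand-built tree: [value, left, right]
small = [2, [1, [], []], [3, [], []]]
print(plain_search(small, 3)[0])  # → 3
```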

You can now use the tco module for writing a function that can both:

  • call itself (recursion) without overloading the stack
  • call another function from the inside of the recursion without having to first exit from the recursion

from tco import C

def do_something(node):
    # an arbitrary function for simulating some manipulations
    print("The node", node[0], "is in the tree.")

# exit function
exit_success = C(lambda self: do_something)()
exit_failure = C(lambda self: lambda : None)()

# search function
search = C(lambda self, k1, k2: lambda node, n:
    k1(node) if n==node[0]
       else ( # not the current node
                 ( # children do exist
                      self(node[1],n) # search left
                   if n < node[2][0]
                      else self(node[2],n) ) # search right
              if node[1]
                 else k2() # no child exists
            ))(exit_success, exit_failure)

Of course you could object that nobody would write such code (though it can be very handy for smaller functions); if you prefer, you can also write:

from tco import C

def search(self, k1, k2):
  def code(node, n): 
    if n == node[0]: k1(node)
    elif node[1]:
      if n < node[2][0]: self(node[1],n)
      else: self(node[2],n)
    else: k2()
  return code
search = C(search)(exit_success, exit_failure)

which works equally well; notice that the return statement is useless in the body of the code part (you can use it if you want, but you don’t have to, and I think you shouldn’t if you really understand what I will explain below).

You can now use the function with something like search(tree, 42) in order to search for the value 42 in the tree; the do_something function doesn’t do anything interesting here except printing a message, but the important thing is that this function will be directly executed outside of the recursion, despite appearances.

Now, how does it work? The important things are the parameters: self, k1 and k2. Only the first one is required (you can give it another name if you wish); you can add as many others as you want. The letter k is a customary name in the theory of continuations, but you can also choose more explicit names than k1 or k2 (like success or failure, for instance). These parameters allow the inner function to call either itself or any other function as the continuation of the whole search function.

Except for the first parameter (called self here), all these parameters are bound to functions (to be called later) when the function returned by the C wrapper is initially called (with these functions as arguments); the second version of the code above may be easier to understand if we look at the very last line:

search = C(search)(exit_success, exit_failure)

But you are not forced to use these continuations for exiting from such a function; you can also use a normal return statement (and even mix several kinds of exit depending on the case: either a return or a continuation). See the quick example below:

from tco import C

def test(self):
  def code(n):
    return n**2
  return code
test = C(test)()

This code has no real use; it merely creates a function test returning the square of its argument (the special features of the module aren’t used here), but you can check that normally returning any object is also allowed.

A second example: escaping from an infinite loop

Look carefully at the example below; you will notice a while True statement calling a classical Python function; how could the program escape from the infinite loop by calling the untrap function? The trick is that the very Pythonic untrap function takes as a parameter the outermost continuation of the trap function; thus the infinite loop is started, the untrap function is normally called (adding one more level to the execution stack, since it isn’t an optimized function), and the untrap function calls (from the inside of the loop) a continuation which is outside of the loop. Here the call trap() evaluates to 42.

from tco import C

escape = C(lambda f: lambda n: n)() # identity function

def untrap(k):
    k(42)

def trap(self, k1):
  def code():
    while True:
      untrap(k1)
  return code
trap = C(trap)(escape)

In A. E. van Vogt’s novel The Wizard of Linn, Clane owns a sphere which is itself the whole universe: Clane is inside the universe, yet he can also act upon the universe from outside of it. See also the lithograph by M.C. Escher called Print Gallery; the standing man is inside the painting on the wall and at the same time outside of it.

Print gallery by M.C. Escher

This can be useful for exiting several loops at the same time; it is well known that the break statement only exits the innermost loop (for that reason, programmers often resort to rather ugly tricks, like using booleans as flags, to emulate a more powerful break).
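For reference, the flag-based workaround alluded to above typically looks like this (a generic example, unrelated to the tco module):

```python
# find the first pair (i, j) with i * j == 12, escaping both loops
found = None
done = False
for i in range(10):
    for j in range(10):
        if i * j == 12:
            found = (i, j)
            done = True
            break               # only exits the inner loop...
    if done:
        break                   # ...so a flag is needed for the outer one
print(found)  # → (2, 6)
```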

Nested systems of continuations

The previous version of the tco module (related to the ideas I described here) didn’t allow nesting several distinct systems of functions passing their own continuations. The code of the last version was carefully redesigned to allow it. See the code below:

from tco import C

escape = C(lambda f: lambda n: n)() # identity function

def trap1(self, k1):
  def code():
    while True:
      trap2(k1)
  return code
trap1 = C(trap1)(escape)

def trap2(self):
  def code(k):
    k(42)
  return code
trap2 = C(trap2)()

Again, you can see an infinite loop, but the new thing is that the function called from the infinite loop in trap1 isn’t a classical Python function as before, but another function, trap2, optimized by the tco module. One could wonder whether the system is clever enough to understand that the k(42) call must not be confused with the continuation of the innermost C function, but it actually is: the call trap1() is (as expected) evaluated to 42.

You may think of it as a kind of goto statement, but rather than the old horizontal goto jumping from one location to another, it should rather be thought of as a vertical goto, jumping from somewhere in the stack to a previous level.

Self Reference in Python

The J language has a verb called “Self-Reference”, written $:, allowing a recursive call to the “longest verb that contains it” (see the J dictionary).

Here is a way of implementing a similar feature in Python. I made it work on several versions of Python (versions 2 and 3, with CPython as well as with PyPy).

import sys
class _self_reference():
    def __call__(self, *args):
        f = lambda *a: None
        f.__code__ = sys._getframe(1).f_code
        return f(*args)
    def __getitem__(self,n):
        def func(*args):
            f = lambda *a: None
            f.__code__ = sys._getframe(n+1).f_code
            return f(*args)
        return func
self = _self_reference()

Of course, the self variable can be given another name, or even simply _.

The self function calls the function from which it is called, with new arguments. Now the factorial function is:

fac = lambda n: 1 if n <= 1 else n*self(n-1)
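As a quick self-contained check (the class above is reproduced so the snippet runs on its own, and the instance is named this here to avoid shadowing anything):

```python
import sys

class _self_reference():
    def __call__(self, *args):
        f = lambda *a: None
        # borrow the caller's code object and run it with new arguments
        f.__code__ = sys._getframe(1).f_code
        return f(*args)

this = _self_reference()

# recursive factorial with no explicit reference to its own name
fac = lambda n: 1 if n <= 1 else n * this(n - 1)
print(fac(5))  # → 120
```

Note that the lambda never mentions the name fac: renaming the variable does not break the recursion, which is the whole point of the trick.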

Of course, some care has to be taken concerning the number of call levels that a single function may add to the execution stack. If needed, the syntax self[1](...) may be used to refer to the function called one level further down, and of course any positive integer will work.

The following example is rather useless but it shows how to jump to the second level by complicating the previous definition of the factorial:

fac2 = lambda n: 1 if n <= 1 else n*((lambda k: self[1](k-1))(n))

Continuation passing style in Python

Here are some thoughts concerning continuation-passing style in Python. Of course, since the language can take any function as an argument of another function, it already allows this style of programming when needed; but this article focuses on some elegant ways of calling a function from the innermost part of an expression without burdening the call stack of the interpreter. As in a previous article, tail-call optimization is the purpose of this study.

First part: the main idea

First, let’s create a function displaying its argument together with the current size of the stack; it will be used below in order to check where exactly a call to a continuation is made:

import traceback

def disp(x):
    print((len(traceback.extract_stack()), x))

Now, let’s write some function performing some kind of computation, adding many calls to the execution stack, and finally calling a continuation. A recursive function will suit these needs:

def test(k,n):
    return test(k,n-1) if n>0 else k(42)

While useless, this function is easy to understand: it calls itself many times with a decreasing counter, then calls the function k with the argument 42.

Now, let’s see what happens:

>>> test(disp, 100)
(103, 42)

(The exact left value may change with the interpreter but it should be a little more than 100.)

An elegant way of getting rid of these useless calls waiting to return is to embed the initial call to the test function in some wrapper like:

C = lambda f: lambda c, *a: f(lambda x: lambda : c(x), *a)()

Let’s check:

>>> C(test)(disp, 100)
(4, 42)

The idea is to trap the argument intended for the disp function in a closure, to return (emptying the execution stack), and then to call the continuation.

The previous syntax allows a single kind of return value, namely the (delayed) call to the continuation; thus the following function can’t work properly:

def test2(k,n,b):
    return test2(k,n-1,not b) if n>0 else k(42) if b else None

Below is a working example and a faulty one:

>>> C(test2)(disp, 100, True)
(4, 42)
>>> C(test2)(disp, 100, False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <lambda>
TypeError: 'NoneType' object is not callable

A workaround can be found with:

C2 = ( lambda f: lambda c, *a:
       (lambda g: g() if callable(g) else g)
       (f(lambda x: lambda : c(x), *a) ) )

But the following parts of the article will focus on the initial case, where the only allowed return value is the call to the continuation.

Second part: variants of the previous solution

The above solution requires the user to put the continuation argument k first, while many programmers usually put it after all other arguments. This order can be implemented, but it is less elegant than the previous one:

C3 = lambda f: lambda *a: f(*(list(a[:-1]) + [lambda x: lambda : a[-1](x)]) )()

I personally prefer another way of expressing the very same idea (I also use the Y combinator for implementing the recursion, in order to rely only on lambda calculus here):

Y = lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))
test = lambda k: lambda n: Y(lambda f: lambda i: f(i-1) if i>0 else k(42))(n)
C = lambda f: lambda c: lambda a: f(lambda x: lambda : c(x))(a)()

which has to be used as:

>>> C(test)(disp)(100)

This style is itself a good transition to a more general solution allowing several continuations to be used (the choice being made at the top of the stack, but the evaluation occurring once the stack has emptied):

C = lambda f: lambda *c: lambda *a: f(*map(lambda k: lambda x: lambda : k(x), c))(*a)()

which can still be used as:

>>> C(test)(disp)(100)

but which can also be used as:

def disp1(x):
    print((1, len(traceback.extract_stack()), x))

def disp2(x):
    print((2, len(traceback.extract_stack()), x))

test = ( lambda k1, k2: lambda n:
           Y(lambda f: lambda i,j: f(i-1,not j) if i>0
                else k1(42) if j else k2(42))(n,False) )

where two different continuations are now used. The new wrapper can be used as:

>>> C(test)(disp1, disp2)(100)

Third part: two more refinements

Two more improvements will now be added to the wrapper:

  • allowing the continuation to take any number of arguments (usual continuation-passing style requires the continuation to take a single argument, but it won’t hurt here to allow more);
  • removing one more level of the execution stack by initially returning a tuple containing the function and its arguments, rather than a function.

The wrapper becomes (in Python 2):

C = lambda f: lambda *c: lambda *a: apply(*f(*map(lambda k: lambda *x: (k,x), c))(*a))

and in Python 3 (where apply doesn’t exist any more):

def C(f):
    def D(*c):
        def E(*a):
            func, args = f(*map(lambda k: lambda *x: (k,x), c))(*a)
            return func(*args)
        return E
    return D

Now the continuation is added to the stack only one level above the wrapper itself, with no intermediate call.

The final word

I finally wrote a little module containing various implementations of a general wrapper handling tail-call elimination in both tail-recursion and continuation-passing style; it is located at

Infinite power in Python

A nice feature of the J programming language is the infinite power operator, which gives the ability to raise a function to the infinite power. The idea is to repeatedly apply a function to its previous result until some convergence is reached or some system is stabilized. Such a feature allows one to “hide” many while loops.

Here is a link to the page of the J for C programmers book explaining more about this way of coding: Loopless Code IV.

Implementing such a feature in Python is rather easy:

class __infinitepower__():
    def __rpow__(self, f):
        def func(a):
            # apply f repeatedly until a fixed point is reached
            while True:
                b = f(a)
                if b == a:
                    return b
                a = b
        return func
    __rxor__ = __rpow__   # defined on the class so that ^ also works
infinitepower = __infinitepower__()

This piece of code creates the infinitepower variable, which can be used for raising any function to the infinite power. The object may be assigned to a new variable with a shorter name for convenience (rename it to oo, Inf, or whatever you like).

>>> oo=infinitepower
>>> from math import sqrt, cos
>>> (cos**oo)(.5)
>>> (sqrt^oo)(.5)

Of course, the power operator in Python is **, but it doesn’t take much more code to also allow the variable to be used with ^ (even though this could be considered a very bad idea from a Pythonic point of view).
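As a final self-contained check (the class is reproduced so the snippet runs alone), here is a toy halve function of my own, chosen because integer halving reaches its fixed point 0 exactly, with both spellings of the operator:

```python
class __infinitepower__():
    def __rpow__(self, f):
        def func(a):
            # apply f repeatedly until a fixed point is reached
            while True:
                b = f(a)
                if b == a:
                    return b
                a = b
        return func
    __rxor__ = __rpow__   # defined on the class so that ^ also works

oo = __infinitepower__()
halve = lambda n: n // 2   # 100 → 50 → 25 → ... → 1 → 0 → 0 (fixed point)

print((halve ** oo)(100))  # → 0
print((halve ^ oo)(100))   # → 0
```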