Playing with exceptions inside lambda expressions in Python

The following piece of code can certainly claim to be one of the most insane Python expressions ever written:

C = lambda f: (
      lambda *args: (
        lambda o: (
          lambda g: { None } | {
              None for _ in iter(lambda:
               (lambda G: (not G) or o.append(G.pop()))
                   (list((lambda: (yield from (g(*o.pop()) for _ in (None,))))())), None)
          } and o.pop())(f(lambda *a: o.append(a) or next(iter(())))))([args]))

The two most notable hidden features of this expression are a true infinite loop and a rudimentary try/except/raise system.

Since PEP 463 was rejected, there is no general way of catching an exception in an expression (raising one, on the other hand, is very easy). I came across a question about it and answered it by explaining all my ideas about what could actually be achieved.

Then I decided to implement a trampoline-based system allowing user-defined functions to jump back to the initial continuation by using the single exception I finally managed to fully handle in an expression: StopIteration. (Note that this relies on pre-PEP 479 behaviour: since Python 3.7, a StopIteration leaking into a generator frame is turned into a RuntimeError, so the generator-based snippets of this post require Python 3.6 or older.)

Implementing an infinite loop is very easy:

{ None for _ in iter(f, None) }

with a relevant f function will run as long as needed without using any memory, since the size of the set does not increase during the computation.
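As a minimal illustration (the names c, out and f are mine, chosen for this sketch), here is an f that performs a countdown through side effects and returns None, the sentinel of iter, when it is done:

```python
# f returns True while there is work left, then None (the sentinel of iter),
# which terminates the callable-iterator and therefore the set comprehension.
c, out = [5], []
f = lambda: (c[0] or None) and (out.append(c[0]) or c.__setitem__(0, c[0] - 1) or True)
{None for _ in iter(f, None)}
print(out)  # [5, 4, 3, 2, 1]
```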

Raising the StopIteration exception can be achieved either with an ad hoc expression like next(iter(())) or with the general solution below, which can throw any kind of exception:

(_ for _ in ()).throw(StopIteration)


Catching the exception can be achieved by computing everything inside an iterator intended to run over a single value: once converted to a list, the sequence will either be empty or contain one returned value. The following piece of code randomly returns [] or [1] (depending on whether an exception is raised or not):

from random import randrange

list((lambda:(yield from (randrange(0,2) or next(iter(())) for _ in (None,))))())

Here is a tail-recursive version of the factorial using the C expression defined above:

myfac = C(lambda f: lambda n, acc: f(n-1, n*acc) if n else acc)

print("Factorial of 0 is", myfac(0, 1))
print("Factorial of 1 is", myfac(1, 1))
print("Factorial of 5 is", myfac(5, 1))
print("Factorial of 6 is", myfac(6, 1))
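For readers who prefer statements, here is roughly what the C expression does, written as an ordinary trampoline (the names trampoline and _Bounce are mine; the one-liner inlines all of this and uses StopIteration as its escape exception):

```python
class _Bounce(Exception):
    """Raised by the callback in place of an actual recursive call."""

def trampoline(f):
    def run(*args):
        pending = [args]
        def callback(*a):
            pending.append(a)   # store the arguments of the "recursive" call
            raise _Bounce       # and jump back to the loop below
        g = f(callback)
        while True:             # this loop replaces the growing call stack
            try:
                return g(*pending.pop())
            except _Bounce:
                pass
    return run

myfac = trampoline(lambda f: lambda n, acc: f(n-1, n*acc) if n else acc)
print(myfac(6, 1))  # 720
```

Since each "recursive" call immediately raises and is caught at the same stack level, the depth of the recursion is unbounded.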

Here is another function showing how the callback allows escaping from an infinite loop: the call g(1) obviously makes the evaluation enter the inner loop, but the word DEBUG is never printed, because evaluating f(0) returns straight back to the initial continuation:

def outer(f):
    def inner(n):
        if n == 0: return 42
        while True:
            print("DEBUG", f(0))
    return inner
g = C(outer)
print(g(1))  # prints 42; DEBUG never appears

The C expression should be strong enough to be used with any user-defined function as long as it doesn’t use the callback function inside an iterator wrapped in a yield from expression, which is unlikely to happen.

Python Obfuscation #1

As usual I am going to cheat with Python good practices in order to have fun. As is said at the beginning of a funny piece of code: If you use this for anything important, you’re mad! Still, I will polish everything below and try to provide functions on par with well-written ones in regard to speed and memory usage.

My general rules in such challenges are the following ones:

  • avoid statements;
  • avoid importing any module;
  • obviously avoid exec or eval;
  • avoid def or class for defining anything.

Such requirements come from a strong functional programming background, though I am aware they don’t fit well with Python standards.

The task

I want to write a loop function; since it has to be a single expression that could easily be embedded in a larger one, I will rather say I want to write a loop macro. It should be able to iterate infinitely if required, with no stack or heap overflow.

The usage of the macro is intended to be:

result = loop(func, cond, x0)

where func is a function repeatedly applied to x0, then x1=func(x0), then x2=func(x1), etc.; cond is a function applied (before func) to the current value, returning True as long as the iteration has to go on; and x0 is the initial value. The macro should return the last computed value, which is the first one for which cond(result) is False.
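Before obfuscating anything, this contract can be stated as a plain, statement-based reference implementation (loop_ref is my name for it):

```python
# Plain reference version of the macro: what the expression below must emulate.
def loop_ref(func, cond, x0):
    while cond(x0):
        x0 = func(x0)
    return x0

print(loop_ref(lambda x: x - 1, lambda x: x > 0, 5))  # 0
```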


There are several ways of iterating inside an expression. It is well known that lambda calculus allows iterating with the famous Y combinator, but that will not allow iterating infinitely, because of the maximum recursion depth.

The most common iterating process in an expression (in Python) relies on the list comprehension syntax. A funny way of using it is to iterate over a list being modified along the way, in order to get some control over the number of iterations from inside the list comprehension; see the example below, computing consecutive terms of the Collatz sequence:

collatz = lambda n: (lambda c:
               [k > 1 and c.append((k%2 and 3*k+1) or k//2) for k in c]
             and c)([n])

It is assumed here that the reader is familiar with the exact meaning of and and or in Python (what is evaluated or not, and what exactly is returned).
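For instance (repeating the definition so the snippet is self-contained):

```python
collatz = lambda n: (lambda c:
               [k > 1 and c.append((k%2 and 3*k+1) or k//2) for k in c]
             and c)([n])

print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```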

But appending new terms to a list will of course soon lead to a MemoryError when too many iterations are performed.

The same idea can however be tweaked in order to avoid any memory usage; if G is some iterator running as long as needed, then:

[ None for x in G if <expr> and None ]

will not append any term to the current list; of course, if <expr> is known to return something like None, the final part can be removed; if <expr> is known to return something like True, another simplification would be:

[ None for x in G if not <expr> ]

and if <expr> is known to return always the same value (like None), then a very nice simplification is:

{ <expr> for x in G }

because a set doesn’t keep duplicates!
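A quick check of these side-effect-only variants (out is a hypothetical accumulator used here only to observe the iterations):

```python
out = []
# <expr> is out.append(x), which returns None, so "and None" keeps the list empty:
res = [None for x in range(5, 0, -1) if out.append(x) and None]
print(res, out)  # [] [5, 4, 3, 2, 1]
# the set variant performs the same kind of side effects while storing a single None:
s = {out.append(x) for x in range(3)}
print(s)  # {None}
```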

Creating a class from within an expression

Of course, something like range(2**65536) does not run infinitely, though no actual task should require such a huge number of iterations anyway. On the other hand, I couldn’t figure out any way of getting an infinite iterator without importing some module, which I want to avoid.

Creating a new iterator implies defining a new class, which is achievable in an expression:

type("MyClass", (object,), {})
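As a quick check that the three-argument form of type really builds a usable class (Greeter is a made-up example of mine):

```python
# type(name, bases, dict) is the expression-level equivalent of a class statement
Greeter = type("Greeter", (object,), {"greet": lambda self, name: "Hello " + name})
print(Greeter().greet("world"))  # Hello world
```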

yielding a kind of empty class. New methods can be added to it with:

Evil = (
          lambda o:
               setattr(o, '__iter__', lambda x:x)
               or setattr(o, '__next__', lambda x:True)
               or o
       )(type("EvilIterator", (object,), {}))

Of course an instance can be returned by adding () at the end of the whole expression or after the or o part.

It can be checked that this iterator looks like what I want by typing:

{ print("Hello world") for _ in Evil() }

The final part

Now comes the final part; the Evil iterator above has to be tuned according to our needs.

Since the loop has to end when some condition is met, the iterator should raise a StopIteration exception, which can be achieved either by evaluating some expression raising it as a side effect, like next(_ for _ in ()), or by using the throw method of any available generator, which allows throwing any kind of exception: (_ for _ in ()).throw(StopIteration).

A first attempt is:

loop = lambda f, t, i: (
          lambda o, w:
               setattr(o, '__iter__', lambda x:x)
               or setattr(o, '__next__', lambda x:
                     (t(w[-1]) and not w.append(f(w.pop())))
                         or next(_ for _ in ()))
               or o
       )(type("EvilIterator", (object,), {}), [i])()

tested with:

[x for x in loop(lambda x: x-1, lambda x: x>0, 5)]

which yields the expected [True, True, True, True, True]. Using a set rather than a list helps controlling the size of the collection, but since we want to return the last computed value, we have to retrieve it from inside the expression. As a cosmetic change, the line containing the append part can also be turned into

(t(w[0]) and not w.__setitem__(0, f(w[0])))

which may be slightly faster.

The final version is:

loop = lambda f, t, i: (
          lambda o, w:
               setattr(o, '__iter__', lambda x:x)
               or setattr(o, '__next__', lambda x:
                     (t(w[0]) and not w.__setitem__(0, f(w[0])))
                         or next(_ for _ in ()))
               or { _ for _ in o() }
               and w[0]
       )(type("EvilIterator", (object,), {}), [i])

tested with: loop(lambda x: x-1, lambda x: x > 0, 5). (Note that when the condition is false from the very start, the comprehension is empty and the whole expression evaluates to the falsy empty set rather than to the initial value; prefixing the comprehension with { None } |, as in the C expression earlier, would fix that corner case.) Of course a list can be used instead of a set, as long as a filter keeps its size under control:

               or [ x for x in o() if not x ]
               or w[0]

but I rather like the version using a set, because it does not even contain a single if keyword!

Now we can try to find some factor of an integer by brute force:

loop(lambda x: x+1, lambda x: 8927%x, 2)
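Putting everything together (repeating the final version so that the check is self-contained), the macro can be verified on both examples:

```python
loop = lambda f, t, i: (
          lambda o, w:
               setattr(o, '__iter__', lambda x: x)
               or setattr(o, '__next__', lambda x:
                     (t(w[0]) and not w.__setitem__(0, f(w[0])))
                         or next(_ for _ in ()))
               or {_ for _ in o()}
               and w[0]
       )(type("EvilIterator", (object,), {}), [i])

print(loop(lambda x: x - 1, lambda x: x > 0, 5))    # 0
print(loop(lambda x: x + 1, lambda x: 8927 % x, 2)) # 79 (a factor of 8927)
```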

Playing with variables in Python

Trying again to hack the Python language itself, I will publish in this post two little pieces of code for tweaking the scope of global variables.

The first one is a decorator “freezing” some global variables to their current value at the time a function is defined.

def protect(*A):
    def D(f):
        protected = {}
        for k in A: protected[k] = globals()[k]
        def wrapper(*args, **kwargs):
            previous = {}
            g = globals()
            for k in A:
                if k in g: previous[k] = g[k]
            for k in A: g[k] = protected[k]
            try:
                out = f(*args, **kwargs)
            except Exception:
                for k in A:
                    if k in previous: g[k] = previous[k]
                    else: del g[k]
                raise
            for k in A:
                if k in previous: g[k] = previous[k]
                else: del g[k]
            return out
        return wrapper
    return D

Let’s try it:

a = 42

@protect("a")
def test(n):
    return a*n

print(test(15))
a = 100
print(test(15))

This should print the value 630 twice, despite the fact that the global variable a has been redefined in the meantime.

The second piece of code is a context manager mimicking the behaviour of the let construct in Lisp-like languages:

class RestrictedScope():
    def __init__(self, scope):
        self.scope = scope
    def __enter__(self):
        self.p = {}
        g = globals()
        for k in self.scope:
            if k in g: self.p[k] = g[k]
            g[k] = self.scope[k]
    def __exit__(self, *a):
        g = globals()
        for k in self.scope:
            if k in self.p: g[k] = self.p[k]
            else: del g[k]

This context manager temporarily hides global variables, restoring them to their previous state as soon as the context is left (note that globals() refers to the module where the class is defined, so the class is intended to be used in that same module):

a = 42
def test(n):
    return a*n

with RestrictedScope({"a": 10}):
    print(test(5))   # prints 50, since a is temporarily 10
print(test(5))       # prints 210, since a is 42 again

Continuations in Python

In previous posts, I already wrote about continuations in Python. Below are some new ideas on the topic. First of all, I released yesterday a new version (v. 1.2.1) of the tco module, with a decorator. Thinking about it again led me to write a new module from scratch, keeping a similar internal mechanism but behaving quite differently. My new module is called continuation.

While the tco module requires the coder to use a rather heavy syntax, I wanted this to be a more mature project, easier to use. Here is a tail-recursive version of the factorial function:

@with_continuation
def k_factorial(k):
    def _inner(n, f):
        return f if n < 2 else (k << k_factorial)(n-1, n*f)
    return _inner

factorial = lambda n: (with_CC >> k_factorial)(n, 1)

Every feature of the module can be seen in this piece of code:

  • a decorator is provided for defining the functions to be used with the module;
  • the syntax (with_CC >> k_func)(*args) is used for passing the current continuation to a function before calling it with its arguments;
  • the syntax (k << k_func)(*args) is used for calling a function with some arguments after having returned back in the stack to the level of the initial call.

A first difference from the tco module is that decorated functions now take a single argument (a continuation), making the syntax simpler and more consistent. On the other hand, achieving tail-recursion may seem a little more complicated at first glance. Another significant difference is the “visual” notation for playing with the continuation, which should make the process easier to understand: the two operators >> and << are intended to look like arrows pointing toward a meaningful left or right direction.

Importing the module is done with:

from continuation import with_CC, with_continuation

The following code shows that nested calls are correctly handled; while trapped inside three levels of while True loops, the single return statement in the k_identity function makes the outermost call return immediately:

@with_continuation
def k_test1(k):
    def _inner(v):
        print("Entering loop 1")
        while True: (with_CC >> k_test2)(k, v)
    return _inner

@with_continuation
def k_test2(k):
    def _inner(cont, v):
        print("Entering loop 2")
        while True: (with_CC >> k_test3)(cont, v)
    return _inner

@with_continuation
def k_test3(k):
    def _inner(cont, v):
        print("Entering loop 3")
        while True: (cont << k_identity)(v)
    return _inner

@with_continuation
def k_identity(k):
    def _inner(x):
        print("Entering k_identity")
        return x
    return _inner

print((with_CC >> k_test1)(42))
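Since the internals of continuation are not shown here, a simplified sketch of how such operators could be implemented may help (this is my own reconstruction, not the module’s actual code): each jump exception is tagged with its continuation, so inner trampolines forward a jump aimed at an outer level:

```python
class _Jump(Exception):
    def __init__(self, k, k_func, args):
        self.k, self.k_func, self.args = k, k_func, args

class _Cont:
    def __lshift__(self, k_func):            # (k << k_func)(*args)
        def call(*args):
            # unwind the stack back to the trampoline owning this continuation
            raise _Jump(self, k_func, args)
        return call

class _WithCC:
    def __rshift__(self, k_func):            # (with_CC >> k_func)(*args)
        def call(*args):
            k = _Cont()
            func = k_func(k)
            while True:                      # trampoline loop at this stack level
                try:
                    return func(*args)
                except _Jump as j:
                    if j.k is not k:         # aimed at an outer continuation
                        raise
                    func, args = j.k_func(k), j.args
        return call

with_CC = _WithCC()
```

With these hypothetical definitions, the k_factorial example above runs unchanged (minus the decorator); the tagging test (j.k is not k) is what lets an inner trampoline pass along a jump belonging to an outer one, reproducing the nested behaviour shown above.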

Explaining functional aspects in python


Many Python programmers who are not aware of really different programming languages sometimes wonder why some coders make such a noise about various functional features allegedly missing from the Python language. I want to address them in this post and try to explain, using the language they know and like, some of the programming techniques involved in these questions.

First of all, I am not at all saying here or below that something is wrong with Python. As a programmer who spent much time implementing tail-recursion and continuation-passing style as modules for the Python language, I am not claiming that such things should be implemented in the core of the language; I read the position of the author of Python here and here; I understand it and agree with it up to a certain point. Let’s face the truth: nobody actually forces me to use Python if I don’t want to use it. When I need speed, I use C or Fortran; when I want to focus on elegant algorithms, I use Lisp or Scheme; when I want to understand some problem more deeply, I use APL or J. For this reason, I think that Python is more or less what it should be: a versatile language with many libraries allowing a piece of code to be quickly implemented and debugged.

But sometimes I miss some of these features, and this is why I care about them (as other coders do). Some Python programmers would ask: what can’t I do in Python that I could do with tail-recursion elimination? This question is asked again and again; I can even say that I am currently writing this post in order to reply to an old post on Stack Overflow.

To that question, the answer is easy: there is absolutely nothing that I can do with functional techniques that I can’t already do with plain Python. But it is very certainly an ill-posed question. The great programmer Kenneth Iverson always thought that the way you code shapes the way you think about what to code; if such a claim sounds too strange to you, if you rather think that you merely want to know which algorithm some problem needs and then adapt it to Python, then you probably don’t need to read this post any further: you will not discover here some hidden gem that could make your coding more productive.

Using the tco module

The tco module is a powerful module made of only a couple of lines, mainly allowing:

  • tail-call optimization
  • optimized tail-recursion
  • continuation-passing style
  • pseudo call-with-current-continuation

Two ideas lie behind these features: repeatedly calling functions without the size of the execution stack increasing, and possibly jumping back from a place in the execution stack to a previous one without passing through the intermediate calls and their pending return statements.

Drawing hands by M.C. Escher

A first example: a binary search tree

Let’s assume we have a binary tree; searching for some node in it is easily done with recursion. Of course, Python can already do it very well, and most of the time the default size of the execution stack is enough; but I will show here how to do it with tail-recursion. I will also add one more thing: directly escaping from the deepest call of the recursion to the next function that has to handle the result of the search, without unwinding the recursion with return statements.

First, let’s build some perfect binary tree:

from itertools import groupby
from random import sample
nbrlevels = 16                # number of levels in the tree
n = 2**nbrlevels - 1          # number of nodes
t = sample(range(2*n), n)     # values of the nodes
def make_tree(l):
    if len(l)==1: return [l[0], [], []]
    return [l[0], make_tree(l[1:len(l)//2+1]),
                  make_tree(l[len(l)//2+1:])]
tree = make_tree(t)
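As a quick self-contained sanity check of make_tree (repeated here), here is the shape it produces on a seven-element list:

```python
def make_tree(l):
    # each node is [value, left subtree, right subtree]
    if len(l) == 1: return [l[0], [], []]
    return [l[0], make_tree(l[1:len(l)//2 + 1]),
                  make_tree(l[len(l)//2 + 1:])]

small = make_tree(list(range(7)))
print(small)  # [0, [1, [2, [], []], [3, [], []]], [4, [5, [], []], [6, [], []]]]
```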

You can now use the tco module for writing a function that can both:

  • call itself (recursion) without overloading the stack
  • call another function from the inside of the recursion without having to previously exit from the recursion

from tco import C

def do_something(node):
    # an arbitrary function for simulating some manipulations
    print("The node", node[0], "is in the tree.")

# exit functions
exit_success = C(lambda self: do_something)()
exit_failure = C(lambda self: lambda : None)()

# search function
search = C(lambda self, k1, k2: lambda node, n:
    k1(node) # the node has been found
    if n==node[0]
       else ( # not the current node
                 ( # children do exist
                      self(node[1],n) # search left
                   if n < node[2][0]
                      else self(node[2],n) ) # search right
              if node[1]
                 else k2() # no child exists
            ))(exit_success, exit_failure)

Of course you could object that nobody would write such code (though it can be very handy for smaller functions); if you prefer, you can also write:

from tco import C

def search(self, k1, k2):
  def code(node, n): 
    if n == node[0]: k1(node)
    elif node[1]:
      if n < node[2][0]: self(node[1],n)
      else: self(node[2],n)
    else: k2()
  return code
search = C(search)(exit_success, exit_failure)

which will work equally well; notice that the return statement is useless in the body of the code part (you can use it if you want, but you don’t have to, and I think you shouldn’t if you really understand what I explain below).

You can now use the function with something like search(tree, 42) in order to search for the value 42 in the tree; the do_something function doesn’t do anything interesting here except printing a message, but the important point is that this function is actually executed outside of the recursion, despite appearances.

Now, how does it work? The important things are the parameters self, k1 and k2. Only the first one is required (you can give it another name if you wish); you can add as many others as you want. The letter k is a customary name in the theory of continuations, but you can also choose more explicit names than k1 or k2 (like success or failure for instance). These parameters allow the inner function to call either itself or any other function as the continuation of the whole search function.

Except for the first parameter (called self here), all these parameters are bound to functions (to be called later) when the function returned by the C wrapper is initially called (with these functions as arguments); the second version of the code above may be easier to understand if we look at the very last line:

search = C(search)(exit_success, exit_failure)

But you are not forced to use these continuations for exiting such a function; you can also use a normal return statement (and even mix several kinds of exit according to the case: either a return or a continuation). See the quick example below:

from tco import C

def test(self):
  def code(n):
    return n**2
  return code
test = C(test)()

This code has no real use; it merely creates a function test returning the square of its argument (the special features of the module aren’t used here), but you can check that normally returning any object is also allowed.

A second example: escaping from an infinite loop

Look carefully at the second example below; you will notice a while True loop calling a classical Python function; how can the program escape from the infinite loop by calling the untrap function? The trick is that the very pythonic untrap function takes as a parameter the outermost continuation of the trap function; thus the infinite loop is started, the untrap function is normally called (adding one more level to the execution stack, since it isn’t an optimized function), and the untrap function calls (from inside the loop) a continuation which is outside of the loop. Here the call trap() evaluates to 42.

from tco import C

escape = C(lambda f: lambda n: n)() # identity function

def untrap(k):
    k(42)

def trap(self, k1):
  def code():
    while True:
      untrap(k1)
  return code
trap = C(trap)(escape)

In Van Vogt’s novel The Wizard of Linn, Clane owns a sphere which is itself the whole universe: Clane is inside the universe, but he can as well act upon the universe from outside of it. See also the lithograph by M.C. Escher called Print Gallery; the standing man is inside the painting on the wall and at the same time outside of it.

Print gallery by M.C. Escher

This can be useful for exiting several loops at the same time; it is well known that the break statement only exits the innermost loop (for that reason, programmers often resort to rather ugly tricks, like using booleans as flags to emulate a more powerful break).
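For comparison, here is that flag-based idiom, escaping two nested loops without any continuation (a made-up search for the first pair (i, j) with i * j == 42 in a 10x10 grid):

```python
found = False
for i in range(10):
    for j in range(10):
        if i * j == 42:
            found = True
            break           # only exits the inner loop...
    if found:
        break               # ...so a flag is needed for the outer one
print(i, j)  # 6 7
```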

Nested systems of continuations

The previous version of the tco module (related to the ideas I described here) wouldn’t allow nesting several distinct systems of functions passing their own continuations. The code of the latest version was carefully redesigned to allow it. See the code below:

from tco import C

escape = C(lambda f: lambda n: n)() # identity function

def trap1(self, k1):
  def code():
    while True:
      trap2(k1)
  return code
trap1 = C(trap1)(escape)

def trap2(self):
  def code(k):
    k(42)
  return code
trap2 = C(trap2)()

Again, you can see an infinite loop, but the new thing is that the function called from the infinite loop in trap1 isn’t a classical Python function as before, but another function called trap2, optimized by the tco module. One could wonder whether the system is clever enough to understand that the k(42) call must not be confused with the continuation of the innermost C function, but it actually is: the call trap1() is (as expected) evaluated to 42.

You may think of it as a kind of goto statement, but rather than the old horizontal goto jumping from one location to another, it should be thought of as a vertical goto jumping from somewhere in the stack back to a previous level.