FP2: Fully In-Place Functional Programming provides memory reuse for pure functional programs 

This post originally appeared at Microsoft Research.

This research paper was presented at the 28th ACM SIGPLAN International Conference on Functional Programming (ICFP), a premier forum for discussing the design, implementation, principles, and uses of functional programming.

FP2: Fully In-Place Functional Programming; ICFP 2023

Functional programming languages offer a host of advantages, such as ensuring memory safety and eliminating arbitrary side effects. This enables systematic analysis and compositional program construction, facilitating the development of scalable and complex software systems. However, a drawback of functional programming is its tendency to liberally allocate new memory. We believe this characteristic has impeded widespread adoption in performance-critical domains. How can we overcome this limitation and harness the benefits of functional programming while maintaining efficient memory usage?

To illustrate the issue, let’s examine the well-known functional program to reverse a list in linear time using an accumulating parameter:
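The accumulator-based reversal can be sketched in Koka as follows (the type annotations are illustrative; the concrete listing in the paper may differ slightly):

```koka
// Reverse a list in linear time using an accumulating parameter.
fun reverse-acc( xs : list<a>, acc : list<a> ) : list<a>
  match xs
    Cons(x, xx) -> reverse-acc(xx, Cons(x, acc))  // prepend the head to acc
    Nil         -> acc                            // acc now holds the reversal

fun reverse( xs : list<a> ) : list<a>
  reverse-acc(xs, Nil)
```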

The reversal function is written in Koka, a functional language developed at Microsoft that implements the techniques described in this blog post. Here, a list is either empty (Nil) or a non-empty node Cons(head, tail), where head is the first element and tail is the rest of the list.

In most functional languages, reversing a list this way allocates a fresh result list on the heap. Figure 1 shows the state midway through reversing a list of the integers 1 to 10.

Figure 1: The list [1..5] has already been reversed into acc, but we still must reverse the list [6..10].

As the list xs is non-empty, we add its first element to the accumulating parameter acc before recursing on the rest of the list xx. As shown in Figure 2, this step allocates a new Cons cell and also leaves the old Cons cell of xs to be garbage collected. This is rather wasteful.

Figure 2: The lists after one step of recursion. The top Cons cell on the left has become garbage, while the top Cons cell on the right is freshly allocated.

Fully in-place functional programming avoids allocation 

Recent developments have made it possible to avoid such allocations. In particular, by using a compiler-guided reference counting algorithm called Perceus, we can reuse objects in place whenever they are uniquely referenced at runtime. With such reuse, the reverse function can reverse a unique input list xs in place without allocating any fresh Cons nodes, essentially reversing the tail pointers of xs in place. However, this form of reuse is dynamic, which makes it hard to predict whether it will actually apply at runtime.

In our paper, “FP2: Fully in-Place Functional Programming,” which we’re presenting at ICFP 2023, we describe the new fip keyword. It statically checks that programs like the accumulating reverse function execute fully in place, that is, in constant stack space and without any heap allocation, as long as their arguments are unique.


Tree traversals and zippers

In fact, many familiar functions and algorithms satisfy our fully in-place criteria. For example, consider a binary tree with all the values at the leaves:
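Such a tree can be declared in Koka as below (constructor and field names follow the paper’s example):

```koka
// A binary tree that stores all values at the leaves.
type tree<a>
  Leaf( value : a )
  Bin( left : tree<a>, right : tree<a> )
```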

Now, suppose that we want to navigate through this tree, moving up and down in search of a particular element. You might add parent pointers, but in a functional language, there is an alternative solution originally proposed by Gérard Huet known as the zipper:
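A zipper for the tree type above can be sketched as follows (field names are illustrative; the Done case marks that we have reached the root):

```koka
// Huet's zipper: the path from the current node back up to the root.
type zipper<a>
  Done                                      // at the root
  BinL( up : zipper<a>, right : tree<a> )   // went down-left; right subtree still unvisited
  BinR( left : tree<a>, up : zipper<a> )    // went down-right; left subtree already visited
```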

The zipper stores subtrees along the path from the current node up to the root node. We can define operations on pairs consisting of this type of zipper and the current tree, enabling seamless movement through the tree. For example, the following function uses the zipper to move the focus to the left subtree:
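A sketch of this operation, returning the new focus together with the extended zipper (the tuple-returning signature is an illustrative adaptation of the paper’s listing):

```koka
// Move the focus to the left subtree (if any), extending the zipper.
fip fun left( t : tree<a>, z : zipper<a> ) : (tree<a>, zipper<a>)
  match t
    Bin(l, r) -> (l, BinL(z, r))  // the matched Bin cell can be reused as BinL
    Leaf(x)   -> (Leaf(x), z)     // a leaf has no left subtree: stay put
```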

Here, we move to the left subtree of the current node (if it exists) and extend the zipper accordingly. In his 1997 paper, Huet already observed that such zipper operations can be implemented in place:

Efficient destructive algorithms on binary trees may be programmed with these completely applicative primitives, which all use constant time, since they all reduce to local pointer manipulation.

In Koka, we can now make Huet’s intuition precise: the fip keyword guarantees that left executes in place. On closer examination, this might be surprising. While the list reversal example reused a Cons node, here it seems we would need to garbage collect a Bin constructor and allocate a new BinL constructor. Nonetheless, because both constructors have two fields, the previous Bin memory location can still be reused (only the constructor tag is updated). Our paper provides the details of the analysis that enables this, rooted in the concept of “reuse credits.”

Now, suppose we want to update all the values stored in a tree. Using a zipper, we can do this fully in place. While traversing, the zipper stores fragments of the input tree in order, using BinL for unvisited and BinR for visited subtrees. Reusing the zipper nodes allows in-order tree mapping without any heap allocation or stack growth. The tree map function starts by descending to the leftmost leaf, accumulating unvisited subtrees in BinL. Once we hit the leftmost leaf, we apply the argument function f and work our way back up, recursively processing any unvisited subtrees, as shown in Figure 3.
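The traversal can be sketched as a pair of mutually tail-recursive functions in the style of the paper (the ^ marks f as a borrowed parameter, which is not counted for reuse; exact signatures may differ from the published listing):

```koka
// Map f over every leaf of the tree, fully in place.
fip fun tmap( t : tree<a>, ^f : a -> a ) : tree<a>
  down(t, Done, f)

// Descend to the leftmost leaf, recording unvisited right subtrees.
fip fun down( t : tree<a>, z : zipper<a>, ^f : a -> a ) : tree<a>
  match t
    Bin(l, r) -> down(l, BinL(z, r), f)    // reuse Bin as BinL
    Leaf(x)   -> app(z, Leaf(f(x)), f)     // apply f, then head back up

// Walk back up, processing any unvisited subtrees along the way.
fip fun app( z : zipper<a>, t : tree<a>, ^f : a -> a ) : tree<a>
  match z
    BinL(up, r) -> down(r, BinR(t, up), f) // reuse BinL as BinR
    BinR(l, up) -> app(up, Bin(l, t), f)   // reuse BinR as Bin
    Done        -> t                       // back at the root: done
```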

The mutually tail-recursive app and down functions are fully in place. Each matched Bin pairs with BinL, and each BinL with BinR, ultimately leading to BinR pairing with Bin. The definition of tmap may seem somewhat complex, but it is much simpler than its iterative imperative counterpart that uses direct pointer reversal.

Figure 3: The program after visiting the leaf containing f(2) on the given tree. The pointers in the zipper are reversed.

Perspectives and further reading

Koka’s new fip keyword ensures that certain functions do not allocate and use only constant stack space, offering efficiency and safety guarantees akin to those of static linear types or Rust’s borrow checker. This introduces a new paradigm for writing programs that are purely functional but can still execute in place. We consider this new technique a significant milestone on the path toward using high-level functional programming to develop robust software that delivers both competitive and predictable performance.

To learn about fully in-place functional programming and the Koka language, start at the Koka homepage. Koka implements a variety of innovative language features, including algebraic effect handlers and first-class constructor contexts. We encourage readers to continue exploring and experimenting with fully in-place programming. For example, try implementing skew binary heaps in Koka. Can you demonstrate fully in-place heap union?
