top | item 39309157

What Does "With Continuation" Mean? (2020)

107 points | d4nyll | 2 years ago | forum.snap.berkeley.edu

58 comments

[+] carterschonwald|2 years ago|reply
So there’s a funny thing this doesn’t touch on: the semantics of call/cc is genuinely confusing to understand! There’s a related construct that’s much more legible and much easier to understand: call with delimited continuation!

Oleg Kiselyov wrote a very articulate piece about this a long time ago: https://okmij.org/ftp/continuations/against-callcc.html

[+] cryptonector|2 years ago|reply
I think of `call/cc` as a parlor trick that helps introduce the concept of continuations more generally.

Threads and co-routines are on the heavy-weight end of the concurrent programming techniques spectrum because they require stacks (possibly large, with guard pages and all) and encourage smearing application state onto that stack, increasing cache pressure.

Continuation passing style (CPS) is on the light-weight end because it encourages the programmer to make application state explicit and compact rather than smearing it on a stack. Callback hell is hand-coded continuation passing style (CPS). Async/await is a compromise that gets one close to the light weight of continuations.

To understand all of that one has to understand the concept of continuations. In computer science, the cheap parlor trick that is `call/cc` is helpful in introducing the concept of continuations to students. After all, `call/cc` is shocking when one first sees it, and the curious will want to understand it.
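Here's a minimal sketch (illustrative names, not from any particular library) of what "callback hell is hand-coded CPS" means: the same two-step pipeline written directly and in continuation passing style, where every step receives an explicit "what happens next" callback.

```python
# Direct style: results flow back via return values.
def fetch(x):
    return x * 2

def parse(x):
    return x + 1

def direct(x):
    return parse(fetch(x))

# CPS: results flow forward into an explicit continuation k.
def fetch_cps(x, k):
    k(x * 2)

def parse_cps(x, k):
    k(x + 1)

def pipeline_cps(x, k):
    # nesting grows with every added step: callback hell in miniature
    fetch_cps(x, lambda a: parse_cps(a, k))

out = []
pipeline_cps(20, out.append)
assert out == [direct(20)]  # both give 41
```

Note that the CPS version never needs the stack to remember pending work: everything still to be done is packed into `k`.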

[+] eru|2 years ago|reply
Approximately everything by Oleg is great!

We half-jokingly considered renaming our Sydney Computer Science Paper Club to 'Oleg Fan Club'.

[+] kazinator|2 years ago|reply
Delimited continuations are in fact harder to understand, because resumed delimited continuations hit an arbitrarily set brick wall, at which point they return like functions: the value which bubbles out of that brick wall is the return value. Thus they don't represent the true future of the computation. The true future of the computation will not hit a prompt and then stop dead, returning a value to the point where it was resumed. It's like a future ... on a yo-yo string. This is an extra feature. By adding code around full blown continuations, we can get delimited ones and so to understand that we have to understand full blown continuations, and that extra code.

Once you get it, you get it. You then understand that it's better for the continuation not to continue into an indefinite future, and easier to reason about when you know that it's going to stop at a certain enclosing contour, and you will have access to the value bubbling out of there.

Once you already understand it, it's very easy to explain it to yourself.
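Python generators make the yo-yo behavior above concrete, if a rough analogy helps: each resume runs the suspended rest-of-computation only up to the next yield, or to the end of the generator (the brick wall), where a final value bubbles out.

```python
def steps():
    a = yield 'first stop'
    b = yield 'second stop'
    return a + b  # the brick wall: the value that bubbles out of the prompt

g = steps()
print(next(g))      # 'first stop'
print(g.send(1))    # 'second stop'
try:
    g.send(2)
except StopIteration as fin:
    print(fin.value)  # 3: returned like a function, not run forever
```

A full (undelimited) continuation would have no such stopping point: invoking it would carry the whole rest of the program along.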

[+] nerdponx|2 years ago|reply
Is there any resource that explains the different varieties of delimited continuations in a way that doesn't require an advanced theoretical CS degree? I see that Racket has both prompt/control and shift/reset, but I've never been able to make sense of how they differ, if at all.
[+] canjobear|2 years ago|reply
I took a Scheme programming class where we did tons of stuff with continuations. I got obsessed and wrote all kinds of weird code. I even started using continuation passing style in Python. It was a dark time. Looking back, none of that code makes any sense at all.

When I think back to my days of mucking around with call/cc my main emotion is relief that I’ve forgotten how it works. It’s a load off my mind

[+] neilv|2 years ago|reply
In case first-class continuations scare anyone away from Scheme: you probably will never use them directly in practice, unless you're doing something very unusual that happens to really need them.

For example, say you have a really hairy AI search algorithm, and capturing a continuation happens to make backtracking easier.

Or you're implementing another language or DSL in Scheme, and you use first-class continuations to implement some control structure semantics exactly how you want them to be.

I think the closest I've used, in writing maybe a hundred modules, is an escape procedure (like a premature `return` statement in many other languages, which you generally avoid in idiomatic Scheme).
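An escape procedure like that can be sketched in any language with exceptions; this is roughly how one-shot, upward-only escape continuations are often implemented under the hood (names here are illustrative):

```python
def call_with_escape(body):
    # Each invocation gets its own exception class, so nested escapes
    # can't accidentally catch each other's jumps.
    class Escape(Exception):
        def __init__(self, value):
            self.value = value

    def escape(value):
        raise Escape(value)

    try:
        return body(escape)
    except Escape as e:
        return e.value

def first_even(xs):
    def search(escape):
        for x in xs:
            if x % 2 == 0:
                escape(x)  # "premature return" out of the whole search
        return None
    return call_with_escape(search)

print(first_even([3, 5, 8, 9]))  # 8
```

This covers only the "escape upward once" use; full call/cc additionally lets you jump back *into* a context after it has returned, which exceptions cannot do.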

[+] cryptonector|2 years ago|reply
For me it's the opposite: I can't forget how it works, I don't want to forget how it works, and I marvel at the beauty of it all.

When I was in school I carried one of Guy Steele's (I think, if I remember correctly) articles about continuations in my back pocket for a year, breaking it out when I was bored, until I got it.

[+] eru|2 years ago|reply
Continuation passing style is mostly a good tool for writers of compilers, and perhaps interpreters.

It's very, very similar to single-static-assignment (SSA) style that will be more familiar to people coming from imperative languages.
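A small sketch of that correspondence: SSA names every intermediate value once, and a CPS transform does the same, with each temporary becoming a continuation parameter.

```python
# SSA for (a + b) * (a - b):
#   t1 = a + b
#   t2 = a - b
#   t3 = t1 * t2
# The CPS version binds the same temporaries as lambda parameters.

def add(a, b, k): k(a + b)
def sub(a, b, k): k(a - b)
def mul(a, b, k): k(a * b)

def compute(a, b, k):
    add(a, b, lambda t1:
        sub(a, b, lambda t2:
            mul(t1, t2, k)))

out = []
compute(5, 3, out.append)
print(out[0])  # 16, i.e. (5+3)*(5-3)
```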

[+] jedberg|2 years ago|reply
I'm just happy to see Professor Harvey still participating in the forums. He taught me SICP in Scheme 27 years ago and it is still some of the most important computer science I ever learned.
[+] vegashacker|2 years ago|reply
Same! (CS 61A Fall Semester 1996) And I agree!
[+] kccqzy|2 years ago|reply
I'm glad that this explanation involves a comparison with goto, especially with a discussion of "goto considered harmful". But IMO excessively using continuations results in the same kind of spaghetti code as code that excessively uses goto. Delimited continuations, on the other hand, essentially place a restriction on where the continuation can return to. The analogy with goto is that the target of the goto has to be within a well-specified block of code. This restriction gets rid of the temptation for programmers to take shortcuts and write overly clever but unmaintainable code.
[+] kazinator|2 years ago|reply
Even the simple use of plain tail calls gives you goto-like spaghetti code.

Any if/goto program graph or flowchart can be turned into tail calls that have the same shape. For each node in the goto graph, we can have a tail-called function, 1:1.
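That 1:1 mapping can be sketched directly. Python lacks tail-call elimination, so each "node function" below returns a thunk for the next jump and a trampoline does the actual going (the countdown graph is just an illustrative example):

```python
def countdown_graph(n, acc):
    # node LOOP: if n == 0 goto DONE; else acc += n; n -= 1; goto LOOP
    def loop(n, acc):
        if n == 0:
            return lambda: done(acc)
        return lambda: loop(n - 1, acc + n)

    # node DONE: terminal, no further jump
    def done(acc):
        return acc

    state = lambda: loop(n, acc)
    while callable(state):  # the trampoline: "goto" until a value appears
        state = state()
    return state

print(countdown_graph(10, 0))  # 55
```

In a language with guaranteed tail calls (Scheme, say), the thunks and trampoline disappear and each node is literally a tail-called function.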

[+] nerdponx|2 years ago|reply
Does any language other than Common Lisp provide a "delimited goto"?
[+] jillesvangurp|2 years ago|reply
Continuations are also the backbone of coroutines in Kotlin, where they have first-class support. One nice feature of the coroutines framework is that it makes it really easy to adapt existing asynchronous frameworks in Java (there are many) and on other platforms. The list of supported frameworks includes things like reactive Java, Vert.x, Spring WebFlux, and Java futures. And that's just the JVM. If you are using kotlin-js in the browser or on Node.js, promises and async/await are covered too. And on iOS, coroutines also do the right thing with e.g. garbage collection and other platform mechanisms.

Most of this stuff is supported via extension functions. But for asynchronous things that aren't supported it's really easy to adapt them yourself with the suspendCancellableCoroutine function. This is a suspending function that takes a function with a continuation as the parameter. Inside the function you do whatever needs doing and handle your callbacks, futures, awaits, or whatever and call continuation.resume, continuation.resumeWithException, or continuation.invokeOnCancellation as needed (which sadly is not a feature every async framework or language supports).

With a few small extension functions you can pretend most things are nice Kotlin suspend functions and not deal with the spaghetti code typically needed otherwise. E.g. Spring WebFlux is an absolute PITA in Java because of this and essentially effortless in Kotlin. Your code looks like normal Kotlin code. Nothing special about it. All the flux madness is shoveled under the carpet.

In the same way the new virtual threads in Java are supported trivially, because all of that is exposed via the existing threading and thread-pool APIs. So you can just dispatch your coroutine asyncs and launches as virtual threads via a dispatcher that you create with a little extension function on the thread pool. Most of this stuff "just works" out of the box. There's a lot of confusion on this topic in the Java community. For Kotlin this is just yet another way to do async stuff that joins a long list of existing other ways. It has its pros and cons for different use cases and is there if you need it. No big deal but very nice to have around. I've not found any use for it yet and we're on Spring Boot and Java 21. We already have everything asynchronous and non-blocking.
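The adapter pattern described above (suspend, hand the callback API a way to resume you) has a direct Python analogue using asyncio, which may help readers who don't know Kotlin. `timer_api` here is a made-up stand-in for some callback-based library:

```python
import asyncio

def timer_api(seconds, on_done):
    # Hypothetical callback-style API: invokes on_done(result) later.
    loop = asyncio.get_running_loop()
    loop.call_later(seconds, on_done, f'waited {seconds}s')

async def timer(seconds):
    # The adapter: create a Future and let the callback resolve it.
    # fut.set_result plays the role of continuation.resume.
    fut = asyncio.get_running_loop().create_future()
    timer_api(seconds, fut.set_result)
    return await fut

async def main():
    print(await timer(0.01))  # prints: waited 0.01s

asyncio.run(main())
```

The Future is where the suspended coroutine's continuation lives; resolving it from inside the callback is exactly the "resume point" the Kotlin `suspendCancellableCoroutine` pattern exposes.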

[+] neilv|2 years ago|reply
I don't know whether it helps, but here's another explanation that doesn't involve a visual programming language:

First, imagine that procedures (or functions) are first-class values, like what some languages call "anonymous functions", "closures", "lambdas", etc.

Now imagine that every procedure call is effectively a goto with arguments... and the familiar part about being able to return from that procedure, to its calling point, is implemented by an implicit additional value created at calling time... which is a "continuation" procedure with argument(s) that you call to return the value(s) from the other procedure.

To make it first-class continuations, imagine that "continuation" procedure value you call to return the value(s) from another procedure... can be made accessible to the program as a first-class value. Which means you can store it in a variable or data structure like any other value. And you can call that "continuation" procedure value multiple times -- returning to that dynamic state of the calling context in the program many times.

It's one of those things that application code almost never uses directly, but that could be super useful and simplifying in the implementation of another programming language, or carefully controlled in particular libraries (e.g., you wanted to make a stateful Web backend library or DSL that made it really easy to express the logic for some kind of interaction tree responding to multiple steps of form input).

[+] Someone|2 years ago|reply
An alternative way for C/C++/Pascal/… programmers to look at it is to treat the return keyword not as a keyword, but as an implicit argument that’s a pointer to a function. Imagine this would work:

   global fooBack
   global barBack
   main() {
     print foo()
   }
   foo() {
     fooBack = return
     bar()
     return "foo"
   }
   bar() {
     barBack = return
     if randomBool
       fooBack("bar")
     else
       return "bar"
   }
and that, 50% of the time, would print "bar".

In a C-like system with a call stack, that would give you a nicer setjmp (still with quite a few limitations). Systems with true continuations would allow code to call ‘up the stack’ for example if main were to call barBack in the scenario above. That wouldn’t work in C as bar’s stack frame wouldn’t exist anymore.
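The pseudocode above can be made runnable by passing `return` around as an explicit continuation argument; this Python sketch (with `coin` standing in for randomBool so both paths are exercised) shows bar returning on foo's behalf:

```python
def main(coin):
    result = []
    backs = {}  # the global fooBack/barBack slots from the pseudocode

    def foo(ret):
        backs['fooBack'] = ret        # fooBack = return
        bar(lambda _v: ret('foo'))    # if bar returns normally, return "foo"

    def bar(ret):
        backs['barBack'] = ret        # barBack = return (saved but unused here)
        if coin:
            backs['fooBack']('bar')   # return "bar" from foo, skipping foo's rest
        else:
            ret('bar')                # ordinary return from bar

    foo(result.append)                # "print foo()": collect the value instead
    return result[0]

print(main(True))   # bar
print(main(False))  # foo
```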

[+] joe_the_user|2 years ago|reply
Required reference: "objects are a poor man's closures [continuations]" [1]. Well, the link actually says "Closures And Objects Are Equivalent", which is the point. But I'd offer a deeper point: continuations/closures are rightly considered one of the hardest things most programmers ever encounter, while objects are things most programmers deal with daily. Sure, that means use continuations if you want to look like a badass, but use objects if you want a project to succeed.

[1] http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent

[+] canjobear|2 years ago|reply
Continuations aren't equivalent to closures.
[+] cryptonector|2 years ago|reply
The easiest way to think about continuations is to consider them a generalization of function returns. The continuation of a C function f() is the return address and the saved frame pointer of the calling function -- and that looks a lot like a closure, and that's because it is, except that a) you can only pass that closure one argument in C: the return value, and b) you actually can't get a value for this closure in C, and c) there's no closures as such in C anyways :)

[And what is a closure? It's a tuple of {function body pointer, data pointer}, and "callbacks" in C often look just like that, but not as a singular value combining the two parts but as two separate values. In better languages a function is indistinguishable from a closure.]

Anywhere that you see `call/cc` you can think of it as making a first-class function value (a closure) of the current location (which would be the return address of the call to `call/cc`) and then passing that closure to the function given as an argument to `call/cc`. After you parse that sentence you'll realize that's a bit crazy: how does one make a function out of... a function's return address (and saved frame pointer)? The answer is: reify that {return address, saved frame pointer} as a closure. ('Reify' means, roughly, to make a hidden thing not hidden.)

Still, it must seem crazy. Yet it's not that crazy, certainly not if you make it so you can only call it once, and only if `call/cc` hasn't returned yet. What's crazy is that in Scheme this continuation can be called repeatedly, so how? Here's one way to do it: replace the stack with function call frames allocated on the heap, which in a language with a GC means that all those call frames remain alive as long as closures (which a continuation is) refer to them. (Another way to do it is with copying stacks.)

One can easily (for some value of "easily") implement a language that has `call/cc` in a language that doesn't but which has a) closures, b) a powerful macro system / AST / homoiconicity. All one has to do is take a program, apply continuation passing style (CPS) conversion to it (a fairly straightforward transformation where the result is unreadable for most humans), which automatically causes all function call frames to be captured in closures (thus put on the heap), but also automatically makes continuations explicit values (closures). The `call/cc` is a trivial function that passes its now-explicit continuation argument to the function that `call/cc` is given to call.

Allocating function call frames on the heap is a performance disaster. And that's where delimited continuations come in: the goal being to allocate function call frames on mini stacks.

`call/cc` is really a bit of a parlor trick, and a very nifty one at that, but continuations are a very important concept that comes up all over the place, and one that computer scientists and software engineers ought to be at least conversant with.
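A tiny sketch of the last few paragraphs: once the program is in CPS, the continuation is already an explicit value, and `call/cc` becomes a one-liner that reifies it.

```python
def call_cc(f, k):
    # Reify k, the continuation of the call/cc itself. The escape
    # procedure ignores the continuation of its own call site and
    # jumps straight back to k instead.
    return f(lambda v, _abandoned: k(v), k)

def add(a, b, k):
    return k(a + b)

# Scheme: (+ 1 (call/cc (lambda (k) (+ 10 (k 2)))))  =>  3
# Calling k abandons the pending "(+ 10 _)" context.
def body(esc, k):
    return esc(2, lambda t: add(10, t, k))

result = call_cc(body, lambda v: add(1, v, lambda x: x))
print(result)  # 3
```

Since Python closes over the environment and everything here lives on the heap via closures, the reified continuation stays valid regardless of any stack discipline, which is exactly the point of the CPS conversion.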

[+] JonChesterfield|2 years ago|reply
Simpler still is to recognise that "call a function" and "return from a function" are different syntax over the same thing. They both mean "jump to somewhere with a convention about where to find state".

If you replace "call a function" with goto, and replace "return from a function" with goto, then it becomes immediately obvious that "continuation" is a name for where you're going to jump to next. It only looks complicated from the context of calls/returns/stacks.

Confusing call and return for different things is unfortunate but popular. It gives rise to things like four registers available for passing arguments and one available for returning a result, when the calling convention really should be symmetric.

"Continuation passing style" is what all function calls use - x86 writes the address of the continuation to the stack, more sensible ISAs pass it in a specific register - modulo language syntax that somewhat obfuscates control flow.

[+] kazinator|2 years ago|reply
Henry Baker showed that call/cc doesn't require a spaghetti stack with dynamically allocated frames. All you need is one linear stack, and never return.

Richard Stallman made similar observations in "Phantom Stacks: If You Look Too Hard, They Aren't There".

Chicken Scheme, based on C, implements Baker's idea. Chicken Scheme's C functions never return. They take a continuation argument, and just call that instead of returning. So every logical function return is actually a new C function call. These all happen on the same stack, so it grows and grows. All allocations are off the stack, including lambda environments and other objects like dynamic strings. When the stack reaches a certain limit, it is rewound back to the top, like a treadmill. During this rewinding phase, all objects allocated from the stack which are still reachable are moved to the heap.

Thus the combination of the CPS strategy (all returns are continuation invocations) and the linear stack with rewinding obviates the need for dynamically allocated frames.
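Baker's "never return" trick can be sketched in a few lines. Every call goes through a checkpoint; past a depth limit (standing in for the C stack limit), we unwind with an exception and resume from the top. Since every call is a tail call in CPS, the abandoned frames held no pending work, so throwing them away is safe:

```python
class Rewind(Exception):
    def __init__(self, thunk):
        self.thunk = thunk

LIMIT = 50
depth = 0

def call(f, *args):
    # every "function call" goes through here; past the limit, unwind
    global depth
    depth += 1
    if depth > LIMIT:
        depth = 0
        raise Rewind(lambda: f(*args))
    return f(*args)

def count_down(n, k):
    # CPS: never returns a value of its own, always calls a continuation
    if n == 0:
        return call(k, 'done')
    return call(count_down, n - 1, k)

def run(f, *args):
    # the trampoline that "rewinds" the stack
    thunk = lambda: f(*args)
    while True:
        try:
            return thunk()
        except Rewind as r:
            thunk = r.thunk

print(run(count_down, 10000, lambda v: v))  # prints: done
```

Real Chicken also evacuates live stack-allocated objects to the heap during the rewind; this sketch only shows the control-flow half of the idea.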

[+] kazinator|2 years ago|reply
yield based on delimited continuations in TXR Lisp, showing that unwind-protect works:

  (defun grandkid ()
    (unwind-protect
      (yield-from parent 'in-grandkid)
      (put-line "returning from grandkid")))
 
  (defun kid ()
    (unwind-protect
      (progn
        (yield-from parent 'in-kid)
        (grandkid))
      (put-line "returning from kid")))

  (defun parent ()
    (unwind-protect
      (progn
        (yield-from parent 'in-parent)
        (kid))
      (put-line "returning from parent")))

  (let ((fn (obtain (parent))))
    (prinl 'a)
    (prinl (call fn))
    (prinl 'b)
    (prinl (call fn))
    (prinl 'c)
    (prinl (call fn))
    (prinl 'd)
    (prinl (call fn))
    (prinl 'e))

Run:

  $ txr cont.tl 
  a
  in-parent
  b
  in-kid
  c
  in-grandkid
  d
  returning from grandkid
  returning from kid
  returning from parent
  nil
  e
Each yield captures a new delimited continuation up to the parent prompt. Each (call fn) dispatches a new continuation to continue on to the next yield, or termination.

So with all this back-and-forth re-entry, why do the unwind-protects just go off once? Because of a mechanism that I call "absconding": the answer to the dynamic wind problem.

Absconding is a way of performing a non-local dynamic control transfer without triggering unwinding. It's much like the way, say, the C longjmp is unaware of C++ destructors: the longjmp absconds past them.

With absconding we can get out of the scope where we have resumed a continuation without disturbing the scoped resources which that context needs. Then with the continuation we captured, we can go back there. Everything dynamic is intact. Dynamically scoped variables, established exception handlers, you name it.

The regular function returns are not absconding so they trigger the unwind-protect in the normal way.

Absconding is an elephant gun that should only be used in the implementation of primitives like obtain/yield.

15 second tutorial:

Cleanup yes:

  1> (block foo (unwind-protect (return-from foo 42) (prinl 'cleanup)))
  cleanup
  42
Cleanup no:

  2> (block foo (unwind-protect (sys:abscond-from foo 42) (prinl 'cleanup)))
  42
That's all there is to absconding. yield-from uses it: it captures a new continuation, packages it up, and absconds to the prompt, where it is unpacked and stored in place of the old continuation, so that fn will next dispatch the new one.
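For readers who don't know TXR Lisp: Python generators happen to exhibit the same observable behavior as the example above, if a rough analogy helps. Each yield suspends without running the try/finally cleanup (the unwind-protect analogue); the cleanups fire exactly once, inner to outer, when the generators actually finish.

```python
log = []

def grandkid():
    try:
        yield 'in-grandkid'
    finally:
        log.append('returning from grandkid')

def kid():
    try:
        yield 'in-kid'
        yield from grandkid()
    finally:
        log.append('returning from kid')

def parent():
    try:
        yield 'in-parent'
        yield from kid()
    finally:
        log.append('returning from parent')

for step in parent():
    log.append(step)

print(log)
# ['in-parent', 'in-kid', 'in-grandkid',
#  'returning from grandkid', 'returning from kid', 'returning from parent']
```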
[+] pnut|2 years ago|reply
If I ever start writing statements like "Part the first", please take me out back and put us all out of that misery.