
082349872349872 | 7 months ago

AIUI, up until Scott, λ-calculus had mathematicians saying "yes, it works in practice, but can it ever work in theory?": say we have a universe of discourse D, and we want, per λ-calculus, to have functions D->D (D->D->D, etc.) be part of this universe of discourse. If D is a windowless monad, with a single object, we're ok: there's only 1 arrow (1^1) from 1 object to 1 object, namely the identity, so we just go ahead and identify it with the object. However, as soon as we have two objects in D, we have a problem: there are 4 arrows (2^2) from 2 objects to 2 objects, so even if we attempt to identify objects and functions we don't have enough for all the functions, and the problem only gets worse (N^N) as we have N>2 objects.
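The counting problem above can be made concrete by just enumerating functions from a finite D to itself (a toy sketch, not from the thread; the `n ** n` count is the N^N mentioned above):

```python
# For |D| = n, a function D -> D is a choice of image for each of the
# n elements, so there are n^n of them -- already more than n once n >= 2.
from itertools import product

for n in range(1, 5):
    D = range(n)
    funcs = list(product(D, repeat=n))  # each tuple is one function D -> D
    print(n, len(funcs))                # n, then n^n
```

Already at n = 2 there are 4 functions but only 2 objects, so no identification of objects with functions can cover them all.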

Scott's contribution was to cut down the cardinality of the function space: by restricting it to continuous functions (ie only those functions that have finite approximations, which is to say they don't do anything strange "at infinity") one not only gets what most people would admit is a reasonable model of computation, but λ-calculus then works in theory, as D->D can now fit inside of D.
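One way to make "finite approximations" concrete: a continuous function arises as the limit of a chain of partial functions, Kleene-style. Here's a minimal sketch in Python (the dict representation of partial functions and the factorial example are my own, not from the comment):

```python
# Kleene iteration: the least fixed point of the factorial functional,
# built as a chain of ever-larger finite approximations. Each partial
# function is a dict from int to int; the empty dict is "bottom".

def F(f):
    """One unfolding of the factorial functional applied to a
    partial function f."""
    g = {0: 1}
    for n, v in f.items():
        g[n + 1] = (n + 1) * v
    return g

approx = {}          # bottom: the everywhere-undefined partial function
for _ in range(6):
    approx = F(approx)

print(approx)        # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24, 5: 120}
```

Each iterate is a strictly bigger finite approximation, and their union is the full factorial function; continuity is exactly the guarantee that taking the limit of the chain commutes with applying F.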

What I was trying to explore (and maybe the trouble with asking an LLM about this sort of thing is that they're unlikely to push back hard enough?) is the notion that even given an "angelic Turing machine", one still couldn't compute in a way that an earthly Turing machine couldn't simulate.

Does that make sense?

gsf_emergency_2 | 7 months ago

Grad students Keye Martin & Lemmonier

https://arxiv.org/pdf/2406.07216

They seem to have put the measurement problem right next to Scott continuity -- & left it at that :)

If you want me to look at the details you've got to motivate me by telling me what doesn't make sense to you (but ought to?)

082349872349872 | 7 months ago

in the priority heap... if you give me some pp's I'll bump it up