In computability theory, Kleene's recursion theorems are a pair of fundamental results about the application of computable functions to their own descriptions. The theorems were first proved by Stephen Kleene in 1938 and appear in his 1952 book Introduction to Metamathematics. A related theorem, which constructs fixed points of a computable function, is known as Rogers's theorem and is due to Hartley Rogers, Jr.
The recursion theorems can be applied to construct fixed points of certain operations on computable functions, to generate quines, and to construct functions defined via recursive definitions.
The statement of the theorems refers to an admissible numbering \varphi of the partial recursive functions, such that the function corresponding to index e is \varphi_e.
If F and G are partial functions, F \simeq G means that, for each n, either F(n) and G(n) are both defined and equal, or else F(n) and G(n) are both undefined.
Given a function F, a fixed point of F is an index e such that \varphi_e \simeq \varphi_{F(e)}; note that F(e) need not equal e, only the functions these two indices compute must agree.

Rogers describes the following result as "a simpler version" of Kleene's (second) recursion theorem.

Rogers's fixed-point theorem. If F is a total computable function, it has a fixed point e in the above sense.
This essentially means that if we apply an effective transformation to programs (say, replace instructions such as successor, jump, remove lines), there will always be a program whose behaviour is not altered by the transformation. This theorem can therefore be interpreted in the following manner: “given any effective procedure to transform programs, there is always a program that, when modified by the procedure, does exactly what it did before”, or: “it’s impossible to write a program that changes the extensional behaviour of all programs”.
The proof uses a particular total computable function h, defined as follows. Given a natural number x, the function h outputs the index of the partial computable function that performs the following computation: given an input y, first attempt to compute \varphi_x(x); if that computation returns an output e, then compute \varphi_e(y) and return its value, if any. Thus, for all indices x, if \varphi_x(x) is defined, then \varphi_{h(x)} \simeq \varphi_{\varphi_x(x)}; and if \varphi_x(x) is not defined, then \varphi_{h(x)} is a function that is nowhere defined. The function h can be constructed from the partial computable function g(x,y) that performs the computation just described, together with the s-m-n theorem: for each x, h(x) is the index of a program that computes the function y \mapsto g(x,y).
To complete the proof, let F be any total computable function, and construct h as above. Let e be an index of the composition F \circ h, which is a total computable function. Then \varphi_{h(e)} \simeq \varphi_{\varphi_e(e)} by the definition of h. But, because e is an index of F \circ h, \varphi_e(e) = (F \circ h)(e), and thus \varphi_{\varphi_e(e)} \simeq \varphi_{F(h(e))}. Hence \varphi_{h(e)} \simeq \varphi_{F(h(e))}, and setting n = h(e) gives \varphi_n \simeq \varphi_{F(n)}: n is the desired fixed point.
This proof is a construction of a partial recursive function which implements the Y combinator.
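The construction can be made concrete in a sketch that treats Python source strings as program indices and an exec-based interpreter as the universal function. The names phi, h, and F, and the particular transformation F used for the demonstration, are all illustrative assumptions, not standard notation.

```python
# Sketch of Rogers's fixed-point construction.  Program indices are
# Python source strings defining a one-argument function f, and phi is
# the universal function that interprets them.  All names are illustrative.

def phi(index, arg):
    """Run the program text `index` on `arg`."""
    env = {"phi": phi, "F": F, "h": h}  # programs may call these helpers
    exec(index, env)
    return env["f"](arg)

def h(x):
    """h(x) indexes the program that, on input y, first computes
    phi_x(x) to obtain an index e and then returns phi_e(y).
    Substituting x into the text is an s-m-n-style step."""
    return ("def f(y):\n"
            f"    e = phi({x!r}, {x!r})\n"
            "    return phi(e, y)\n")

def F(index):
    """An arbitrary total transformation of programs (an assumption for
    this demo): F(index) computes y -> len(index) + y."""
    return f"def f(y):\n    return {len(index)} + y\n"

# e indexes the composition F . h, and n = h(e) is the fixed point:
e = "def f(z):\n    return F(h(z))\n"
n = h(e)
print(phi(n, 5) == phi(F(n), 5))  # the two programs agree on input 5
```

Running phi(n, y) unwinds exactly the chain of equalities in the proof: it computes \varphi_e(e) = F(h(e)) = F(n) and then runs that program, so the fixed point's behaviour coincides with that of its transformed version.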
A function F such that \varphi_e \not\simeq \varphi_{F(e)} for every index e is called fixed-point free. The fixed-point theorem shows that no total computable function is fixed-point free, although there are many non-computable fixed-point-free functions.
The second recursion theorem is a generalization of Rogers's theorem with a second input in the function. One informal interpretation of the second recursion theorem is that it is possible to construct self-referential programs; see "Application to quines" below.
The second recursion theorem. For any partial recursive function Q(x,y) there is an index p such that \varphi_p \simeq \lambda y.\,Q(p,y).
The theorem can be proved from Rogers's theorem by letting F(p) be a function such that \varphi_{F(p)}(y) = Q(p,y) (a construction given by the s-m-n theorem). One can then verify that a fixed point of this F is an index p as required. The theorem is constructive in the sense that a fixed computable function maps any index for Q to the index p.
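This derivation can be sketched concretely, again taking Python source strings as indices: phi below is an exec-based universal function, F implements the s-m-n step for a sample Q, and the diagonal construction yields the promised p. The helper names and the particular Q are illustrative assumptions.

```python
# Deriving the second recursion theorem from the fixed-point theorem.
# Indices are Python source strings; phi interprets them.

def phi(index, arg):
    env = {"phi": phi, "F": F, "h": h, "Q": Q}  # helpers visible to programs
    exec(index, env)
    return env["f"](arg)

def Q(x, y):
    # a sample computable two-argument function (an assumption for the
    # demo): it reports the length of the index x alongside the input y
    return (len(x), y)

def F(p):
    # the s-m-n step: F(p) is an index of the function y -> Q(p, y)
    return f"def f(y):\n    return Q({p!r}, y)\n"

def h(x):
    # the helper from the proof of the fixed-point theorem
    return ("def f(y):\n"
            f"    e = phi({x!r}, {x!r})\n"
            "    return phi(e, y)\n")

e = "def f(z):\n    return F(h(z))\n"   # an index of F . h
p = h(e)                                 # the fixed point of F
print(phi(p, 7) == Q(p, 7))             # phi_p(y) agrees with Q(p, y)
```

The index p obtained this way "knows" itself only through the diagonal step \varphi_e(e); no program ever inspects its own text directly.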
Kleene's second recursion theorem and Rogers's theorem can both be proved, rather simply, from each other. However, a direct proof of Kleene's theorem does not make use of a universal program, which means that the theorem holds for certain subrecursive programming systems that do not have a universal program.
A classic example using the second recursion theorem is the function Q(x,y)=x. The corresponding index p yields a computable function that outputs its own index when applied to any value; when expressed as computer programs, such indices are known as quines.
An index p of this kind can be effectively produced from the function Q: applying the function s11 given by the s-m-n theorem to an index of Q yields the required p, and Q here can be replaced by any two-argument function. The resulting program p satisfies \varphi_p(y) \simeq Q(p,y) for every input y.
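Since the original Lisp listing is not reproduced here, the idea can be sketched in Python instead: with source text playing the role of the index, the fixed point p for Q(x,y) = x is a quine. The two lines below form a program whose computed value out is exactly their own source text.

```python
# A two-line quine: `out` equals the source of these two lines.
s = 's = {!r}\nout = s.format(s)'
out = s.format(s)
```

Executing the text stored in out reproduces out itself, mirroring \varphi_p(y) \simeq p for every y.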
Suppose that g and h are total computable functions that are used in a recursive definition for a function f:

f(0,y) \simeq g(y),
f(x+1,y) \simeq h(f(x,y),x,y).
The second recursion theorem can be used to show that such equations define a computable function, where the notion of computability does not have to allow, prima facie, for recursive definitions (for example, it may be defined by μ-recursion, or by Turing machines). This recursive definition can be converted into a computable function
\varphi_F(e,x,y) that expects an index e for the function itself, using that index to simulate recursion:

\varphi_F(e,0,y) \simeq g(y),
\varphi_F(e,x+1,y) \simeq h(\varphi_e(x,y),x,y).
The recursion theorem establishes the existence of a computable function \varphi_f such that \varphi_f(x,y) \simeq \varphi_F(f,x,y). Thus f satisfies the given recursive definition.
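A minimal sketch of this conversion, assuming for illustration that g(y) = y and h(r,x,y) = r + 1 (so that f(x,y) = x + y), with Python source strings as indices and an exec-based universal function phi; all names are illustrative.

```python
# Recursion elimination: the program F_src takes an extra argument e,
# assumed to be an index of the function being defined, and routes the
# recursive call through that index rather than through native recursion.

def phi(index, *args):
    """Universal function: interpret the program text `index`."""
    env = {"phi": phi}
    exec(index, env)
    return env["f"](*args)

F_src = ("def f(e, x, y):\n"
         "    if x == 0:\n"
         "        return y\n"               # g(y) = y
         "    r = phi(e, e, x - 1, y)\n"    # recursive call via the index
         "    return r + 1\n")              # h(r, x, y) = r + 1

def f(x, y):
    # passing F_src its own text realizes the fixed point phi_f
    return phi(F_src, F_src, x, y)

print(f(3, 4))  # prints 7
```

Each recursive call goes through the interpreter phi and the index e, so the definition is meaningful even in a model of computability with no primitive notion of recursion.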
Reflexive, or reflective, programming refers to the use of self-reference in programs. Jones presents a view of the second recursion theorem based on a reflexive language. It is shown that the reflexive language defined is no stronger than a language without reflection (because an interpreter for the reflexive language can be implemented without using reflection); then, it is shown that the recursion theorem is almost trivial in the reflexive language.
While the second recursion theorem is about fixed points of computable functions, the first recursion theorem is related to fixed points determined by enumeration operators, which are a computable analogue of inductive definitions. An enumeration operator is a set of pairs (A,n) where A is a (code for a) finite set of numbers and n is a single natural number. Often, n will be viewed as a code for an ordered pair of natural numbers, particularly when functions are defined via enumeration operators. Enumeration operators are of central importance in the study of enumeration reducibility.
Each enumeration operator Φ determines a function from sets of naturals to sets of naturals given by
\Phi(X) = \{\, n \mid \exists A \subseteq X \; [(A,n) \in \Phi] \,\}.
A fixed point of an enumeration operator Φ is a set F such that Φ(F) = F. The first recursion theorem shows that fixed points can be effectively obtained if the enumeration operator itself is computable.
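As a concrete sketch (with illustrative names), a finite enumeration operator can be stored as a set of pairs and applied to a set directly:

```python
# An enumeration operator as a finite set of pairs (A, n): A is a
# frozenset of naturals (the finite premise) and n a single natural.

def apply_op(op, X):
    """Phi(X): all n such that some pair (A, n) in Phi has A a subset of X."""
    return {n for (A, n) in op if A <= X}

# a small sample operator: 0 needs no premise, 1 needs 0, 2 needs {0, 1}
op = {(frozenset(), 0), (frozenset({0}), 1), (frozenset({0, 1}), 2)}
print(apply_op(op, {0}))  # the premises of 0 and 1 are met
```

Iterating apply_op from the empty set climbs to {0, 1, 2}, the least fixed point of this particular operator.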
First recursion theorem. The following statements hold.
1. For any computable enumeration operator Φ, there is a recursively enumerable set F such that Φ(F) = F and F is the smallest set with this property.
2. For any recursive operator Ψ, there is a partial computable function φ such that Ψ(φ) = φ and φ is the smallest partial computable function with this property.
The first recursion theorem is also called the fixed-point theorem (of recursion theory).[1] There is also a formulation that applies to recursive functionals, as follows:

Let \Phi : \mathcal{F}(\mathbb{N}^k) \to \mathcal{F}(\mathbb{N}^k) be a recursive functional. Then \Phi has a least fixed point f_\Phi : \mathbb{N}^k \to \mathbb{N}, i.e.:
1) \Phi(f_\Phi) = f_\Phi;
2) for every g \in \mathcal{F}(\mathbb{N}^k) such that \Phi(g) = g, it holds that f_\Phi \subseteq g;
3) f_\Phi is computable.
Like the second recursion theorem, the first recursion theorem can be used to obtain functions satisfying systems of recursion equations. To apply the first recursion theorem, the recursion equations must first be recast as a recursive operator.
Consider the recursion equations for the factorial function f:

f(0) = 1,
f(n+1) = (n+1) \cdot f(n).

The corresponding recursive operator Φ will have information that tells how to get to the next value of f from the previous value. However, the recursive operator will actually define the graph of f. First, Φ will contain the pair (\varnothing, (0,1)); this indicates that f(0) is unequivocally 1, so the pair (0,1) is in the graph of f. Next, for each n and m, Φ will contain the pair (\{(n,m)\}, (n+1, (n+1) \cdot m)); this indicates that, if f(n) is m, then f(n+1) is (n+1)m, so that the pair (n+1, (n+1)m) is in the graph of f. Unlike the base case f(0) = 1, the recursive operator requires some information about f(n) before it defines a value of f(n+1).

The first recursion theorem (in particular, part 1) states that there is a set F such that \Phi(F) = F. The set F will consist entirely of ordered pairs of natural numbers, and will be the graph of the factorial function f, as desired.
The restriction to recursion equations that can be recast as recursive operators ensures that the recursion equations actually define a least fixed point. For example, consider the set of recursion equations:

g(0) = 1,
g(n+1) = 1,
g(2n) = 0.

There is no function g satisfying these equations, because they imply g(2) = 1 and also imply g(2) = 0. Thus there is no fixed point g satisfying these recursion equations. It is possible to make an enumeration operator corresponding to these equations, but it will not be a recursive operator.
The proof of part 1 of the first recursion theorem is obtained by iterating the enumeration operator Φ beginning with the empty set. First, a sequence F_k is constructed, for k = 0, 1, \ldots. Let F_0 be the empty set and, inductively, let F_{k+1} = F_k \cup \Phi(F_k). Finally, F is taken to be the union of the sets F_k; the remainder of the proof is a verification that F is recursively enumerable and is the least fixed point of Φ.
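This iteration can be sketched directly for the factorial operator described above (names illustrative); each pass forms F_{k+1} = F_k \cup \Phi(F_k), with a pair (n, m) encoding f(n) = m.

```python
# Iterating the factorial operator from the empty set.

def Phi(F):
    derived = {(0, 1)}  # from the pair (∅, (0, 1))
    # from the pairs ({(n, m)}, (n + 1, (n + 1) * m)):
    derived |= {(n + 1, (n + 1) * m) for (n, m) in F}
    return derived

F = set()
for _ in range(5):
    F = F | Phi(F)

print(dict(sorted(F)))  # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24}
```

Each stage adds exactly one new point of the factorial graph, and the union over all stages is the least fixed point.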
The second part of the first recursion theorem follows from the first part. The assumption that Φ is a recursive operator is used to show that the fixed point of Φ is the graph of a partial function. The key point is that if the fixed point F is not the graph of a function, then there is some k such that Fk is not the graph of a function.
Compared to the second recursion theorem, the first recursion theorem produces a stronger conclusion but only when narrower hypotheses are satisfied. Rogers uses the term weak recursion theorem for the first recursion theorem and strong recursion theorem for the second recursion theorem.
One difference between the first and second recursion theorems is that the fixed points obtained by the first recursion theorem are guaranteed to be least fixed points, while those obtained from the second recursion theorem may not be least fixed points.
A second difference is that the first recursion theorem only applies to systems of equations that can be recast as recursive operators. This restriction is similar to the restriction to continuous operators in the Kleene fixed-point theorem of order theory. The second recursion theorem can be applied to any total recursive function.
In the context of his theory of numberings, Ershov showed that Kleene's recursion theorem holds for any precomplete numbering. A Gödel numbering is a precomplete numbering on the set of computable functions, so the generalized theorem yields the Kleene recursion theorem as a special case.[2]
Given a precomplete numbering \nu, for any partial computable function f of two parameters there exists a total computable function t of one parameter such that

\forall n \in \mathbb{N} :\; \nu \circ f(n, t(n)) = \nu \circ t(n).