Computable number


In mathematics, theoretical computer science and mathematical logic, the computable numbers are the subset of the real numbers consisting of those numbers which can be computed to any desired precision by a finite, terminating algorithm. They can be defined equivalently using recursive functions, Turing machines or the lambda calculus. (In contrast, the reals as a whole require the more powerful axioms of ZF set theory.) The computable numbers appear to be good enough for use in most branches of mathematics.

The uncountability of the reals must therefore be attributed to the presence of uncomputable numbers. The computable numbers can be counted by assigning a Gödel number to each Turing machine / lambda expression / recursive function definition; this gives a mapping from the naturals to the computable reals. Note however that it is not possible to computably enumerate the computable numbers, as this would require us to decide which natural numbers correspond to halting Turing machines, which is an undecidable problem. Because of this fact, the Cantor diagonalization argument cannot be carried out effectively on the computable reals: the diagonal element it constructs is a non-computable number. (Interestingly, we can define this diagonal number in a finite amount of English, such as this paragraph, even though it is uncomputable. The resolution is that the Cantor proof assumes we can order the computable numbers in a list, while no such listing is algorithmically possible in practice.)

Table of contents
1 Formal definition
2 Properties
3 Computing digit strings
4 Measure theory
5 References
6 See also

Formal definition

A real number a is said to be computable if it can be approximated by some algorithm (or Turing machine), in the following sense: given any integer n≥1, the algorithm produces an integer k such that:

(k − 1)/n ≤ a ≤ (k + 1)/n

Or, equivalently, there exists an algorithm which, given any rational error bound ε > 0, produces a rational number r such that

|r − a| ≤ ε

A complex number is called computable if its real and imaginary parts are computable.
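As a concrete illustration of the ε-approximation definition, here is a minimal Python sketch that approximates √2 by bisection over exact rationals (the function name and the bisection strategy are illustrative choices, not part of any standard library):

```python
from fractions import Fraction

def approx_sqrt2(eps: Fraction) -> Fraction:
    """Return a rational r with r <= sqrt(2) <= r + eps, by bisection."""
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid                    # sqrt(2) is in the upper half
        else:
            hi = mid                    # sqrt(2) is in the lower half
    return lo

r = approx_sqrt2(Fraction(1, 10**6))    # a rational within 10^-6 of sqrt(2)
```

Because all arithmetic is on exact rationals, the error bound holds by construction rather than up to floating-point rounding.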


Properties

The computable complex numbers form an algebraically closed field, and arguably this field contains all the numbers we ever need in practice. It contains all algebraic numbers as well as many known transcendental mathematical constants. There are however many real numbers which are not computable: the set of all computable numbers is countable (because the set of algorithms is) while the set of real numbers is not (see Cantor's diagonal argument).

The arithmetical operations on computable numbers are themselves computable. Take addition as example: there exists an algorithm or Turing machine which on input (A,B,ε) produces output r, where A is the description of a Turing machine approximating a (in the sense of the above definition), B is the description of a Turing machine approximating b, and r is an ε approximation of a+b.
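This construction can be sketched in Python by representing a computable number as its ε-approximation function; to approximate a+b within ε, ask each summand for an ε/2 approximation (the names `Approx` and `add_approx` are ours, purely illustrative):

```python
from fractions import Fraction
from typing import Callable

# a computable number, represented by a function: eps -> rational within eps
Approx = Callable[[Fraction], Fraction]

def add_approx(A: Approx, B: Approx) -> Approx:
    """Build an approximator for a + b from approximators for a and b."""
    def approx(eps: Fraction) -> Fraction:
        # an error of eps/2 from each summand gives total error <= eps
        return A(eps / 2) + B(eps / 2)
    return approx

# example: approximators for 1/3 and 2/3, each deliberately off by eps/2
third = lambda eps: Fraction(1, 3) + eps / 2
two_thirds = lambda eps: Fraction(2, 3) - eps / 2
one = add_approx(third, two_thirds)
```

The same pattern (split the error budget among the inputs) works for subtraction and, with a little more care about magnitudes, for multiplication and division.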

However, order relations on computable numbers are not computable. There is no Turing machine which on input A (the description of a Turing machine approximating the number a) outputs "YES" if a>0 and "NO" if a≤0. The reason: suppose the machine described by A keeps outputting 0 as ε approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive.

Every computable number is definable, but not vice versa. An example of a definable, non-computable real number is Chaitin's constant, Ω.

Computing digit strings

Turing's original paper defined computable numbers as follows:

A real number is computable if its digit sequence can be produced by some algorithm or Turing machine. The algorithm takes an integer n≥1 as input and produces the n-th digit of the real number's decimal expansion as output.
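For a specific number such as √2, a digit-producing algorithm in this sense can be written with exact integer arithmetic (a sketch; `math.isqrt` computes integer square roots):

```python
from math import isqrt

def sqrt2_digit(n: int) -> int:
    """Return the n-th digit (n >= 1) after the decimal point of sqrt(2)."""
    # floor(sqrt(2) * 10^n) equals isqrt(2 * 10^(2n)) exactly,
    # since sqrt(2) is irrational and never hits a boundary
    return isqrt(2 * 10 ** (2 * n)) % 10

# sqrt(2) = 1.41421356...
```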

Turing was already aware that this definition is equivalent to the ε-approximation definition given above. The argument proceeds as follows. If a number is computable in Turing's sense, then it is also computable in the ε sense: if n > log10(1/ε), then the first n digits of a provide an ε-approximation of a. For the converse, take an ε-computable real number a and distinguish two cases. If a is rational, then a is also Turing computable, since the digit expansions of rational numbers are eventually periodic and can therefore be produced by simple algorithms. If a is irrational, then to compute its n-th digit, keep computing ever more precise ε-approximations until the n-th digit is certain. This eventually happens, since a is irrational and the ambiguous cases of "zeros forever" or "nines forever" are therefore excluded.
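The irrational case of this converse can be sketched directly in Python: keep tightening the ε-approximation until the interval it pins down no longer straddles a digit boundary (the helper names are illustrative, and √2 stands in for an arbitrary irrational ε-computable number):

```python
from fractions import Fraction
from math import floor

def approx_sqrt2(eps):
    """eps-approximator for sqrt(2): returns r with r <= sqrt(2) <= r + eps."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid <= 2 else (lo, mid)
    return lo

def nth_digit(approx, n):
    """n-th decimal digit of an irrational number, given its eps-approximator."""
    eps = Fraction(1, 10 ** (n + 1))
    while True:
        r = approx(eps)
        lo, hi = r - eps, r + eps            # the number lies in [lo, hi]
        if floor(lo * 10 ** n) == floor(hi * 10 ** n):
            return floor(lo * 10 ** n) % 10  # the digit is now certain
        eps /= 2  # tighten and retry; terminates because the number is irrational
```

For a rational input the loop could run forever near a "999..."/"000..." boundary, which is exactly why the rational case is handled separately in the argument above.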

There is no algorithm which takes as input the description of a Turing machine which produces ε approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition. So while the two definitions are equivalent, they are not "computably equivalent".

While the set of computable numbers is countable, it cannot be enumerated by any algorithm, program or Turing machine. Formally: it is not possible to provide a complete list x1, x2, x3, ... of all computable real numbers and a Turing machine which on input (m, n) produces the n-th digit of xm. This is proved with a slight modification of Cantor's diagonal argument.
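The diagonal step of that modified argument can be sketched as follows, assuming for contradiction a hypothetical machine `digit_of(m, n)` that produces the n-th digit of the m-th listed number:

```python
def diagonal_digit(digit_of, n):
    """n-th digit of a number guaranteed to differ from every listed number.

    Changing digit n of the n-th number, while avoiding the digits 0 and 9,
    sidesteps the 0.0999... = 0.1000... ambiguity.
    """
    return 5 if digit_of(n, n) != 5 else 6
```

The number with these digits differs from entry n at position n, so no complete, computable listing of the computable reals can exist.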

The problem with Turing's definition is this: addition is not computable if we use descriptions of digit-enumerating Turing machines as input and require a digit enumeration as output. The reason is similar to the one described earlier, when talking about order relations: if you want to add two numbers and the first machine keeps outputting the digit 9 and the second machine the digit 0, how long do you wait before deciding that no carry-over to the current digit position is needed?

Measure theory

Standard measure theory breaks down when we work in the world of computable reals, since the Lebesgue outer measure of any countable set, and hence of the computable reals themselves, is zero. We must instead define the outer measure directly by:

m*[a,b] = |b-a| for an interval [a,b] in the computable reals.
This is in contrast to the standard definition:
m[a,b] = |b-a| for any interval [a,b] in the reals
m{a} = 0
m(any countable union of points) = 0.

An interesting question is: do we need uncomputable numbers at all? They arise as a consequence of the ZF axioms as follows:
ZF assumes the existence of the natural numbers, N, and the existence of the power set of every set.
So the power set of the naturals exists, P(N).
We can encode a real number r in binary notation, identifying its n-th binary digit with the presence or absence of n in a member of P(N). This gives a correspondence between P(N) and R.
Note that while some infinite members of P(N) follow a finite rule (N itself, or the even numbers), most do not: they contain an infinite amount of information which cannot be captured by any finite machine. It is these members that form the uncomputables.
So if we use computable foundations such as the lambda calculus instead of ZF, we can build a consistent number system which avoids many of the technical difficulties of ZF.
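The encoding between subsets of N and binary reals can be illustrated for finite prefixes (a toy sketch; the function name is ours):

```python
from fractions import Fraction

def subset_to_real(s, n_bits):
    """Map membership of 0..n_bits-1 in the set s to binary digits after the point."""
    return sum(Fraction(1, 2 ** (k + 1)) for k in range(n_bits) if k in s)

# {0, 2} corresponds to 0.101 in binary, i.e. 1/2 + 1/8 = 5/8
```

An uncomputable real is one whose corresponding subset of N has no finite description, so no program can extend this prefix map to all of its digits.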

Computable measure theory remains an active area of research; no definitive book on the subject has yet been written.


References

Computable numbers were defined independently by Turing, Post and Church. See The Undecidable, ed. Martin Davis, for the original papers.

See also