\documentclass{article}
%%%\documentclass{report}
\usepackage{amsmath,amstext,amsfonts,amsthm,amssymb}
\usepackage{enumerate,eufrak,mathrsfs,euscript}
\renewcommand\baselinestretch{0.95}
\topmargin -0.65in
\oddsidemargin -0.25in
\evensidemargin -0.25in
\textwidth 7.0in
\textheight 9.3in
%%% my math definitions
\def\lnot{\neg}
\def\land{\;\wedge\;}
\def\lor{\;\vee\;}
\def\lthen{\;\rightarrow\;}
\def\liff{\;\leftrightarrow\;}
%%% my environments
\theoremstyle{definition}\newtheorem{exercise}{Exercise}[subsection]
\theoremstyle{remark}\newtheorem*{answer}{Answer}
\theoremstyle{plain}\newtheorem*{axiom}{}
%%% my numbering alterations
\numberwithin{equation}{subsection}
%%% schlipf's stuff follows...
\def\spo{\hspace{1ex}}
\def\eqd{\mathord{\spo\spo=_{\mbox{\footnotesize\it def}}\spo\spo}}
\def\Mthen{\mathord{\spo\spo\Rightarrow\spo\spo}}
\def\Miff{\mathord{\spo\spo\Leftrightarrow\spo\spo}}
\def\A{\mathord{{\mathfrak A}}}
\def\sA{\mathord{^{\A}}}
\def\Cn{\mathord{\mbox{Cn}}}
\def\B{\mathord{{\mathfrak B}}}
\def\G{\mathord{{\cal G}}}
\def\N{\mathord{{\mathfrak N}}}
\def\M{\mathord{{\mathfrak M}}}
\def\Nat{\mathord{{\mathbb N}}}
\newcommand\R{{\mathbb R}}
\newcommand\Z{{\mathbb Z}}
\newcommand\Lang{{\cal L}}
\newcommand\E{\mbox{\em Exp}}
\newcommand\Mod{\mbox{\hspace{1pt}Mod\hspace{1pt}}}
\newcommand\Th{\mbox{\hspace{1pt}Th\hspace{1pt}}}
\newcommand\Con{\mbox{\hspace{1pt}Con\hspace{1pt}}}
\def\AS{\mathord{{\mathbb A}_S}}
\def\AL{\mathord{{\mathbb A}_L}}
\def\AE{\mathord{{\mathbb A}_E}}
\def\sN{\mathord{^{\N}}}
\def\sB{\mathord{^{\B}}}
\def\NS{\mathord{\N_S}}
\def\NL{\mathord{\N_L}}
\def\NA{\mathord{\N_A}}
\def\NM{\mathord{\N_M}}
\def\NE{\mathord{\N_E}}
\newcommand\PAE{{\mathbb P}{\mathbb A}_E}
\def\St{\mathord{{\cal S}t}}
\def\un{\underline}
\def\a{\mathord{\vec{a}}}
\def\app{\mathord{a^{\prime\prime}}}
\def\c{\mathord{\vec{c}}}
\def\n{\mathord{\vec{n}}}
\def\p{\mathord{\vec{p}}}
\def\q{\mathord{\vec{q}}}
\def\Th{\mathord{Th}}
\def\v{\mathord{\vec{v}}}
\def\w{\mathord{\vec{w}}}
\def\x{\mathord{\vec{x}}}
\def\y{\mathord{\vec{y}}}
\def\z{\mathord{\vec{z}}}
\def\tp{\mathord{t^\prime}}
\def\tpp{\mathord{t^{\prime\prime}}}
\def\sp{\mathord{s^\prime}}
\def\bp{\mathord{b^\prime}}
\def\<{\mathord{\langle}}
\def\>{\mathord{\rangle}}
\def\Prime{\mathord{\mbox{prime}}}
\def\lh{\mathord{\mbox{lh}}}
\def\tf{\mathord{\tilde{f}}}
\def\seqno{\mathord{\mbox{seq$\_$no}}}
\def\restrict{\mathord{|\hspace{-4pt}\stackrel{\backslash}{\hspace{2pt}}}}
\def\twotree{\mathord{\mbox{2-Tree}}}
\def\form{\mathord{\mbox{formula}}}
\def\sent{\mathord{\mbox{sentence}}}
\def\branch{\mathord{\mbox{branch}}}
\def\leaf{\mathord{\mbox{leaf}}}
\def\tail{\mathord{\mbox{tail}}}
\def\appendleaf{\mathord{\mbox{append@leaf}}}
\begin{document}
\centerline{\LARGE\bf Outline of a Mathematical Theory of Computation}
\centerline{{\Large by Dana Scott (1970)}}
%%%%%%%%%%%%%%%%%%%%%%%%
%%% Section 0
%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
Scott is attempting to provide a way of understanding higher-level
computer programs. He, as did Church, makes a distinction between
functions as sets of n-tuples and functions as computations. Where
Church went with the latter, Scott argues that the former is
sufficient for {\it understanding} the computer languages in question. \\ \\
{\it The point is that, mathematically speaking, functions are\\
independent of their means of computation and hence are\\
``simpler'' than the explicitly generated, step-by-step evolved\\
sequences of operations on representations.} \footnote{By
`representation', Scott means that which stands for the information
being functionally studied (i.e., data).} (p. 1)\\ \\
Thus, Scott is attempting to provide a way of understanding {\it
computation} that leaves out the ``irrelevancies'' (i.e., the
arbitrary choices involved in dealing with functions according to
their computational definitions) of operational semantics.\\ \\
However, Scott realizes that the approach argued for above is
simply an argument for an approach that accommodates {\it human} understanding of
computation, and that the operational approach must not be ignored
because, as he points out, the machines that the programs of study run
on are not capable of dealing with such an abstract level of
understanding. That is, the computational approach should not be
abandoned, because the machines that we build operate on that lower
level.\\ \\
{\it Therefore, a mathematical semantics, which will represent the \\
first major segment of the complete, rigorous definition of a \\
programming language, must lead naturally to an operational simulation\\
of the abstract entities, which (if done properly) will establish\\
the practicality of the language, and which is necessary for a full\\
presentation.} (p. 2)\\ \\ \\ \\ \\
Considering functions, Scott states that functions defined
mathematically can be known to {\it be} computable without knowing
{\it how} to do the computation. However, ``knowledge'' of a function
not only includes the abstract mathematical definition, but also its
operational definition. That is, we must not simply know that a
function is computable but also how to compute it (given input).\\
\\
The point of this paper is to provide a theory that accounts for the
aforementioned {\it abstractions}. He points out that the operational
approach has been used for some time (compilers are evidence for such
understanding). \\ \\
{\it [T]he essential point: unless there is a prior, generally\\
accepted mathematical language at hand, who is to say whether a\\
proposed implementation is correct?} \footnote{Notice that compilers
do not provide us with such definitions for a particular language, because
there can be many compilers for that language.} (p. 2)\\ \\
He points out that those who write compilers should have a clear
understanding of the language they are writing the compiler for, and Scott's point is that that understanding
should be made explicit. Why? So that such abstract entities can be
``manipulated in the familiar mathematical manner.''\\ \\
{\bf Personal remark}: It seems as though Scott has backed off of his
``essential point.'' That is, the question he raises in making that point
seems to be a definite problem for the operational approach. But later in that paragraph, when he
brings up the point about compiler writers providing the standard (or
at least having some sort of understanding of the standard), he seems to back off the implication
that his mathematical approach will provide the standard for determining
whether a program is correct, and it appears that he is simply willing
to claim that his approach
is just mathematically interesting. That is, his approach doesn't {\it
set} the standard; it only makes the standard (hidden inside the heads of
compiler writers) ``visible,'' and the reason why we
should adopt his approach is only because it is of mathematical
interest. Perhaps not. . . perhaps it is only vague language that I'm
nit-picking on. We'll have to see if this vagueness continues as we
go.\\ \\ \\
%%%%%%%%%%%%%%%%%%
%% Section 1
%%%%%%%%%%%%%%%%%%
\section{The Problem of Self-Application}
When thinking about programming languages in this way, we must have a
new way of thinking about {\it data types} and the {\it functions}
(i.e., mappings) that are allowed from one to another.\\ \\ \\
\\ \\
Since a function can in itself be an infinite object (e.g., a mapping
from integers to integers), we must introduce some idea of {\it
approximation} to deal with the finite nature of the machines (just as we do with real numbers).\\ \\ \\
Also, there exist operationally defined functions for which there is
no mathematical counterpart. In particular, it is ``not unknown'' in
programming languages to allow {\it unrestricted} procedures, which can
very well produce unrestricted procedures as values. I'm assuming that
by `unrestricted' he means a function that is {\it
self-applicative}.\\ \\ \\
However, Scott points out that, to his knowledge, no mathematical
theory has been able to maintain self-application and remain
consistent. This is another goal of the theory that he is proposing
here.\\ \\
A related problem for a mathematical approach to programming language
understanding concerns questions of keeping track of {\it side effects}
and of {\it storage of commands}. Mathematically speaking, what is a
store? Well, it's a mapping, $\sigma$, from locations to contents. That is,
it is a function that assigns to each location, $l \in L$,
its contents $\sigma(l) \in V$ (the set of values).\\ \\
\hspace*{.5in}$\sigma: L \lthen V$\\ \\
What is a side effect? Well, it's a change in {\it state}, $S$. What is
a command? Well, it's a request for a side effect. Thus,
mathematically speaking, a command is the following function\\ \\
\hspace*{.5in}$\gamma: S \lthen S$\\ \\
Can a command be stored? Well, suppose $\sigma$ is the current
state of the store, and suppose $l \in L$ is a location at which the
command is stored. Then $\sigma (l)$ is a command; that is\\ \\
\hspace*{.5in}$\sigma (l): S \lthen S$\\ \\
Hence, $\sigma (l) (\sigma)$ is well defined. Or is it?
Consider the following:\\ \\
A store, $\sigma$, is a function from locations in the machine to
values. Thinking of functions in set-theoretic terms, $\sigma$ is
simply the set of pairs of locations and values (the state of the store)\\ \\
\hspace*{.5in}$\sigma = \{\<l, \sigma(l)\> : l \in L\}$\\ \\
A command, $\gamma$, is a function from one state of the store to
another\\ \\
\hspace*{.5in}$\gamma(\sigma) = \sigma'$\\ \\
If $\sigma$ is the current state of the store and $l$ is the location
where the command, $\gamma$, is stored, then $\sigma(l) =
\gamma$. Thus\\ \\
\hspace*{.5in}$\sigma(l)(\sigma) = \sigma'$\\ \\
Is this not well defined and self-applicative?\\ \\
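Operationally, at least, the picture above is easy to realize. Here is a minimal Python sketch of my own (the location names and the particular command are illustrative assumptions, not Scott's): stores are dictionaries, commands are functions on stores, and a command stored in the store can be applied to the very store it came from.

```python
# A toy model of Scott's store/command picture. Locations are strings;
# a store maps locations to values; a command maps store-state to store-state.

def increment_x(store):
    """A command: returns a new store with location 'x' bumped by 1."""
    new_store = dict(store)
    new_store["x"] = store["x"] + 1
    return new_store

# Store the command itself at location 'cmd' -- a command *in* the store.
sigma = {"x": 0, "cmd": increment_x}

# Self-application: look up the command and apply it to the store it
# came from, i.e. sigma(l)(sigma).
sigma_prime = sigma["cmd"](sigma)

print(sigma_prime["x"])  # 1
```

This runs fine as a program, which is exactly the point: the puzzle is not operational but set-theoretic, since a function that takes itself (as part of its argument) in extension would violate the usual cardinality constraints.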
%%%%%%%%%%%%%%%%%%%%%
%% Section 2
%%%%%%%%%%%%%%%%%%%%%
\section{Data Types and Mappings}
What is a data type? Is it just a set? That is, are objects {\it of} the
same type if they are both in the same set? According to Scott, that is
just too simple:\\ \\
{\it [T]he objects [of the same type] are structured and bear certain relations to one\\
another, so the type is something more than a set.} (p. 6)\\ \\
The data types Scott is referring to are primitive,
general structures (as opposed to data structures) that have to do
with a sense of {\it approximation}. Let $x,y \in D$ be two
elements of the same type; not only may they be different, but one may be
a better approximation of the other. Say\\ \\
\hspace*{.5in}$x \sqsubseteq y$\\ \\
to mean intuitively that $y$ is {\it consistent with} $x$ and is
(possibly) {\it more accurate than} $x$. For short, $x$ {\it
approximates} $y$. Data types should always be provided with such a
relationship.\\ \\ \\ \\
So, what can we say about this relationship?\\ \\
Well, we want to assume that it is {\it reflexive, transitive, and
antisymmetric}, which seems to be an intuitive assumption because it
seems clear that $x \sqsubseteq x$; that if $x \sqsubseteq y$ and $y \sqsubseteq z$, then $x
\sqsubseteq z$; and that if $x \sqsubseteq y$ and $y \sqsubseteq x$,
then $x = y$. Thus, we have\\ \\
{\bf Axiom 1}: A data type is a {\it partially ordered set}.\\ \\
\hspace*{.5in}Why only partial? Well, for example, for any two distinct objects,
$x,y$, of the data type {\it integer}, $x \sqsubseteq
\hspace*{-.13in}/ \spo y$. That is, no integer is a better
approximation of the data type integer than any other element of that
data type (except $\bot$).\\ \\
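To make this ``flat'' ordering concrete for myself, here is a small Python sketch (the sentinel names are my own assumptions): bottom approximates everything, everything approximates top, and distinct integers are incomparable.

```python
# A minimal model of the flat data type of integers under the
# approximation ordering: bottom ⊑ n and n ⊑ top for every integer n,
# but distinct integers are incomparable.

BOTTOM, TOP = "bottom", "top"

def approximates(x, y):
    """x ⊑ y in the flat lattice of integers."""
    return x == BOTTOM or y == TOP or x == y

print(approximates(BOTTOM, 5))  # True: bottom approximates everything
print(approximates(3, 5))       # False: distinct integers are incomparable
print(approximates(5, 5))       # True: the ordering is reflexive
```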
If we have two sets, $D, D'$, with the partial orderings $\sqsubseteq,
\sqsubseteq'$, respectively, and $f: D \lthen D'$ is a mapping from
one set to the other, then for all $x, y \in D$\\ \\
\hspace*{.5in}$x \sqsubseteq y \Mthen f(x) \sqsubseteq' f(y)$\\ \\
Thus, we have\\ \\
{\bf Axiom 2}: Mappings between data types are {\it
monotonic}. \footnote{A monotonic function is an order
preserving function.}\\ \\
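Axiom 2 is easy to test mechanically on a finite example. Below is a sketch of my own (the four-element poset and the two mappings are assumptions for illustration): a poset is given as the set of pairs $(x,y)$ with $x \sqsubseteq y$, and a mapping is monotonic exactly when it carries related pairs to related pairs.

```python
# A tiny generic check of Axiom 2: given a finite poset as a set of
# (x, y) pairs meaning x ⊑ y, test whether a mapping is monotonic.

def is_monotonic(order, f):
    """order: pairs (x, y) with x ⊑ y; f: a mapping on the elements."""
    return all((f(x), f(y)) in order for (x, y) in order)

# Poset: bot ⊑ a ⊑ top and bot ⊑ b ⊑ top, with a and b incomparable.
order = {("bot", "bot"), ("a", "a"), ("b", "b"), ("top", "top"),
         ("bot", "a"), ("bot", "b"), ("bot", "top"),
         ("a", "top"), ("b", "top")}

swap = {"bot": "bot", "a": "b", "b": "a", "top": "top"}
print(is_monotonic(order, swap.get))      # True: swapping a and b preserves ⊑

collapse = {"bot": "a", "a": "bot", "b": "b", "top": "top"}
print(is_monotonic(order, collapse.get))  # False: it inverts bot ⊑ a
```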
As of now, I'm having a difficult time getting this. So, let's
consider this: Think of the objects in the data type as providing
{\it information} about some entity. Thus, $x \sqsubseteq y$ means that $x$
and $y$ are attempting to approximate the same entity (i.e., they're
consistent) and that $y$ provides more information about it.\\ \\
Let's try the following extremely intuitive example: The entity we are trying to
approximate is {\it bachelorhood}, and let $x = $ `maleness' and $y
= $ `maleness and unmarriedness'. Thus, $x \sqsubseteq y$. They're
consistent, and it seems as though $y$ is a better approximation of
bachelorhood.\\ \\
Scott points out that, by looking at data types in this way, we must
account for {\it incomplete} entities, like $x$ in my example (I
suppose it could be argued that $y$ is also incomplete, but let's
assume that it's not).\\ \\
{\it Allowing for partiality of arguments and values has the good\\
effect that our functions become partial too; for even if the\\
arguments are perfect the values may only be partial.} (p.8)\\ \\ \\
\\ \\ \\
%%%%%%%%%%%%%%%%%%%
%% Section 3
%%%%%%%%%%%%%%%%%%%
\section{Completeness and Continuity}
The theory based on Axioms 1 and 2 would be too abstract. Why? Well, not
only is Scott trying to give an abstract mathematical theory of
computation, but he also wants to build a theory that can be
translated into a practical implementation. So. . .\\ \\
Suppose we have the following infinite sequence\\ \\
\hspace*{.5in}$x_0 \sqsubseteq x_1 \sqsubseteq \ldots \sqsubseteq x_n
\sqsubseteq x_{n+1} \sqsubseteq \ldots$\\ \\
then we want to suppose that the sequence is tending towards a
limit. Call the limit $y$\\ \\
\hspace*{.5in}$y = \bigsqcup_{n=0}^{\infty} \spo x_n$\\ \\
This limit is to be taken as the {\it least upper bound} (l.u.b.). We
can think of the limit as the union or join of each of the
approximations. Thus,\\ \\
{\bf Axiom 3}: A data type is a {\it complete lattice} under its
partial ordering. \footnote{A lattice is complete if {\it any} of its
subsets has both a {\it join} (a l.u.b.) and a {\it meet} (a g.l.b.).}\\ \\
They're complete because the fact that every subset of a data type has a
l.u.b. implies the existence of {\it greatest lower bounds} (g.l.b.).\\ \\
So, let $x,y \in D$. It may be that they are both
approximations of the same ``perfect'' entity, or not (if not, then
they're inconsistent). The l.u.b. of $\{x, y\}$ is denoted by\\ \\
\hspace*{.5in}$x \sqcup y \in D$\\ \\
Notice that that subset's l.u.b. is inside $D$. Moreover, not only does
every subset of $D$ have a l.u.b.; so does $D$ itself (denoted by $\top \in
D$ and called the ``top'' of the lattice). Again, it is to be thought of as the union (or join) of all
approximations. It is not that which is being approximated, but rather
an ``over-determined'' element. Thus,\\ \\
\hspace*{.5in}$x \sqcup y = \top$\\ \\
means intuitively that $x$ and $y$ are {\it inconsistent}, as opposed
to incomparable (denoted as $x \sqsubseteq \hspace*{-.13in}/ \spo y$
and $y \sqsubseteq \hspace*{-.13in}/ \spo x$).\\ \\
The l.u.b. of the {\it empty} subset is an element $\bot \in D$
(called the ``bottom'' of the lattice). It is the most
``under-determined'' element of $D$.\\ \\ \\ \\ \\ \\ \\
The equation\\ \\
\hspace*{.5in}$x \sqcap y = \bot$\\ \\
means that $x$ and $y$ are {\it unconnected}, in the sense that there
is no ``overlap'' of information between them.\\ \\ \\ \\ \\
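All of these lattice notions come out concretely in one small model, sketched below under my own assumption that the data type is the powerset of a finite set of ``facts,'' ordered by inclusion (this also recycles my bachelorhood example): join is union, meet is intersection, $\bot$ is the empty set, and $\top$ is the over-determined set of all facts.

```python
# A concrete complete lattice: finite sets of "facts" ordered by
# inclusion. Join is union, meet is intersection; bottom is the empty
# set (no information), top is the set of all facts (over-determined).

FACTS = frozenset({"male", "unmarried", "married"})

def join(x, y):
    return x | y

def meet(x, y):
    return x & y

bottom = frozenset()
top = FACTS

x = frozenset({"male", "unmarried"})
y = frozenset({"male", "married"})

print(join(x, y) == top)    # True: x ⊔ y = ⊤, so x and y are inconsistent
print(meet(x, y))           # their shared information: frozenset({'male'})
print(meet(x, frozenset({"married"})) == bottom)  # True: unconnected
```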
Since we have the notion of limit permitted by data types, we must
re-think our notion of function.\\ \\
{\it If a function is computable in some intuitive sense, then getting\\
out a ``finite'' amount of information about one of its values ought\\
to require putting in only a ``finite'' amount of information about\\
the argument.} (p. 10) \\ \\
Scott claims that we should be able to express this idea in terms of
the notions that we have available.\\ \\
The proper notion of limit is best expressed in terms of {\it directed
sets}. A subset $X \subseteq D$ is {\it directed} if every finite
subset $\{x_0, x_1, \ldots , x_{n-1}\} \subseteq X$ has an
upper bound $y \in X$, so that we have\\ \\
\hspace*{.5in}$x_0 \sqcup x_1 \sqcup \ldots \sqcup x_{n-1} \sqsubseteq
y \spo$\footnote{Notice that a directed set is always non-empty.}\\ \\
The limit of the directed set is the l.u.b., $\bigsqcup X$, and suppose
that we want a finite amount of information about it. Each ``bit'' of
information that we would need is contained in some element of
$X$. Since $X$ is a directed set, any finite amount of information about $\bigsqcup X$ is
in at least one element of $X$. (Yeah, but that element could itself be the join of
infinitely many things, right?)\\ \\ \\ \\ \\
Now, consider a function, $f: D \lthen D'$, and given a limit
$\bigsqcup X$ of a directed subset $X \subseteq D$, ask for a
``finite'' amount of information about $f(\bigsqcup X)$. As was just
intuitively shown, this would require only a ``finite'' amount of
information about $\bigsqcup X$ (i.e., a single element, $x \in
X$). Thus, $f(x)$ should give us what we want\\ \\
\hspace*{.5in}$f(\bigsqcup X) = \bigsqcup \{f(x):x \in X\}$\\ \\
So, we could say that $f$ {\it preserves} the limit and a mapping that
preserves all the limits is called {\it continuous}.\\ \\
{\bf Axiom 4}: Mappings between data types are continuous.\\ \\
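Here is a finite check of the limit-preservation equation, under my own assumptions: the lattice is the powerset lattice from before (join = union), the directed set is an increasing chain of finite sets, and $f(S) = S \cap \mathit{KEEP}$, which preserves all unions and so is continuous.

```python
# Checking f(⊔X) = ⊔{f(x) : x ∈ X} on a powerset lattice, where
# join is union and f(S) = S & KEEP (which preserves all unions).
from functools import reduce

KEEP = frozenset({0, 2, 4})

def f(s):
    return s & KEEP

def lub(sets):
    """Least upper bound in the powerset lattice: the union."""
    return reduce(lambda a, b: a | b, sets, frozenset())

# A directed set: an increasing chain of finite approximations.
chain = [frozenset(range(n)) for n in range(6)]

lhs = f(lub(chain))               # f applied to the limit
rhs = lub(f(x) for x in chain)    # limit of the f-images
print(lhs == rhs)                 # True: f preserves the limit
```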
{\bf Question}: Why does Scott put quotes around the word `finite'
all of the time?\\ \\
\section{Computability}
Again, we are at too high of a level, although some properties of
computable functions have been isolated.\\ \\
{\it The problem is to restrict attention to exactly those data\\
types where the elements can be approximated by ``finite\\
configurations'' representable in machines, thereby also making\\
more precise the concept of a ``finite amount of information.''}\\
(p. 12)\\ \\
GOOD! This is what I was wondering about in the last section. Let's
see what he has to say.\\ \\
Scott's solution to this problem is to take a topological
approach. Any data type satisfying Axioms 1 and 3 can be regarded as a
topological space. To define a topology on a set one needs to say
which subsets are {\it open}. So, let $D$ be a data type. A subset $U
\subseteq D$ is open if the following two conditions are met:\\ \\
\hspace*{.5in}(U$_1$) whenever $x \in U$ and $x \sqsubseteq y$, then
$y \in U$\\ \\
\hspace*{.5in}(U$_2$) whenever $X \subseteq D$ is directed and
$\bigsqcup X \in U$, then $X \cap U \neq \emptyset$\\ \\
Notice that $f: D \lthen D'$ is continuous in the limit preserving
sense iff it is continuous in the topological sense. (Why?)\\ \\ \\ \\ \\
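Conditions (U$_1$) and (U$_2$) can be enumerated by brute force on a small example. The sketch below is my own assumption of a four-element poset; note that for a {\it finite} poset (U$_2$) holds automatically, since a directed set contains its own l.u.b., so only upward-closure (U$_1$) needs checking.

```python
# Enumerating the open sets of a small data type via condition (U1):
# open sets are closed upward under ⊑.
from itertools import combinations

# Poset: bot ⊑ a ⊑ top and bot ⊑ b ⊑ top, with a and b incomparable.
above = {"bot": {"a", "b", "top"}, "a": {"top"}, "b": {"top"}, "top": set()}
elements = sorted(above)

def is_open(u):
    """(U1): whenever x ∈ U and x ⊑ y, then y ∈ U."""
    return all(y in u for x in u for y in above[x])

opens = [set(c) for r in range(len(elements) + 1)
         for c in combinations(elements, r) if is_open(set(c))]
print(len(opens))  # 6: from the empty set up to the whole space
```

Note that $\{a\}$ is not open (it omits $\top$), which matches the intuition that an open set of approximations must be closed under gaining more information.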
If $x,y \in D$, we write\\ \\
\hspace*{.5in}$x \prec y$\\ \\
to mean that $y$ belongs to the topological {\it interior} of the
upper section determined by $x$, that is the interior of the set\\ \\
\hspace*{.5in}$\{x' \in D: x \sqsubseteq x'\}$\\ \\
There are certain data types where certain elements are {\it
isolated}, that is, where $x \prec x$ is true (e.g., $\bot \prec \bot$ and
$\top \prec \top$).\\ \\ \\
We write\\
\hspace*{.5in}$x \preccurlyeq y$\\ \\
to mean that the g.l.b. of the topological interior of the upper
section determined by $x$ is $\sqsubseteq y$. Thus,\\ \\
\hspace*{.5in}$x \prec y, x \preccurlyeq y$, and $x \sqsubseteq y$\\
\\
are successively weaker. (What are the differences?!)\\ \\ \\ \\
We consider the possibility of having a {\it dense} subset of a space
in terms of which all other elements can be found as limits, called a
{\it basis}. A subset $E \subseteq D$ is a basis if it satisfies the
following two conditions\\ \\
\hspace*{.5in}(E$_1$) whenever $e, e' \in E$, then $ e \sqcup e' \in
E$\\ \\
\hspace*{.5in}(E$_2$) for all $x \in D$ we have $x = \bigsqcup \{e \in
E: e \prec x\}$ \footnote{If a basis exists, then the {\it meet}
operation ($x \sqcap y$) is continuous, and not necessarily
otherwise.}\\ \\
Conditions (E$_1$) and (E$_2$) are not strong enough to make data
types ``physical.'' We need the stronger assumption\\ \\
{\bf Axiom 5}: A data type has an {\it effectively given} basis.\\ \\
That is, $E$ must be ``known.'' Given $e,e' \in E$, we have to know
how to find the element $e \sqcup e' \in E$. We have to know how to
decide which of the following are true and/or false\\ \\
\hspace*{.5in}$e \prec e', e \preccurlyeq e'$ and $e \sqsubseteq e'$\\
\\
This would require that $E$ be at most countably infinite and
effectively enumerated, in such a way that the operations and
relationships from above are recursive. (Is there a bit of circularity
here? $E$ must be physically implementable before we can {\it
determine} it to be physically implementable. Maybe it's not
circular, only trivial.)\\ \\ \\ \\ \\
The most important consequence of Axiom 5 is the possibility of
defining what it means for an element of a data type to be
computable. \\ \\
Suppose $x \in D$ and $E$ is the basis. Then (relative to this basis)
$x$ is {\it computable} iff there is an {\it effectively given
sub-sequence}\\ \\
\hspace*{.5in}$\{e'_0, e'_1, e'_2, \ldots, e'_n, \ldots \} \subseteq
E$\\ \\
such that $e'_n \sqsubseteq e'_{n+1}$ for each $n$, and \\ \\
\hspace*{.5in}$x = \bigsqcup_{n=0}^{\infty} \spo e'_n$\\ \\
This means that we must be able to give better and better
approximations to $x$ which converge to $x$ in the limit. This is
essential, because the data type may have uncountably many elements,
while there can only be countably many computable elements.\\ \\ \\
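A sketch of the definition, under assumptions of my own choosing: the ``element'' is the infinite set of even numbers, and the basis elements are its finite initial segments, which form an effectively enumerated increasing chain whose limit (union) is the element itself.

```python
# Computability of an element as an effective increasing chain of
# finite basis elements whose limit is the element. Here the element
# is the infinite set of even numbers.

def basis_element(n):
    """The n-th finite approximation: the first n even numbers."""
    return frozenset(2 * k for k in range(n))

# The chain is increasing: e'_n ⊑ e'_{n+1} for each n (⊑ is ⊆ here).
chain_ok = all(basis_element(n) <= basis_element(n + 1) for n in range(10))
print(chain_ok)  # True

# Any finite amount of information about the limit already appears in
# some element of the chain: e.g. 40 shows up by stage 21.
print(40 in basis_element(25))  # True
```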
\\ \\ \\
%%%%%%%%%%%%%%%%%%%
%% section 5
%%%%%%%%%%%%%%%%%%%
\section{Construction of Data Types}
We must consider now the construction of {\it useful} data types
satisfying the axioms, remembering that the lattice structure is the
most primitive structure on data types.\\ \\
Notice that all finite lattices satisfy the axioms and are sufficient
for practical application. However, many concepts are understood with
infinite structures. Consider the data type, $\mathbb{N}$, for the
integers. The elements of the lattice are $0,1,2,3, \ldots, n, \ldots$
and $\top$ and $\bot$. Notice that the elements, other than $\top$ and
$\bot$, are {\it incomparable} under $\sqsubseteq$. That is, none of
the elements ``provides more information'' than any other
element. However,\\ \\
\hspace*{.5in}$\forall n \in \mathbb{N} (\bot \sqsubseteq n)$\\ \\
In this case, the entire lattice is its own basis. Why?\\ \\ \\ \\
Suppose $D$ and $D'$ are two given data types. There are three
particularly important constructs\\ \\
\hspace*{.5in}$(D \times D'), (D + D')$, and $(D \lthen D')$\\ \\
for obtaining new, ``structured'' data types from given ones. Let $x,
y \in D$ and $x', y' \in D'$\\ \\
\hspace*{.5in}$(D \times D'): \spo \spo \<x, x'\>
\sqsubseteq \<y, y'\>$ iff $x \sqsubseteq y$ and $x' \sqsubseteq'
y'$\\ \\ \\ \\
$(D + D')$ is defined as a ``disjoint'' union of $D$ and $D'$, except
that $\bot = \bot'$ and $\top = \top'$\\ \\ \\ \\
The {\it function space} $(D \lthen D')$ has as elements all the
continuous mappings from $D$ {\it into} $D'$, for which we define\\ \\
\hspace*{.5in}$f \sqsubseteq g$ iff $f(x) \sqsubseteq ' g(x)$ for all
$x \in D$\\ \\ \\ \\ \\
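The coordinate-wise product ordering is the easiest of the three to model directly. A sketch, reusing my flat-integer lattice from earlier (both factors are the same flat lattice here, an assumption for simplicity):

```python
# The coordinate-wise ordering on a product (D × D'), sketched with
# both factors the flat integer lattice.
BOTTOM, TOP = "bottom", "top"

def approx(x, y):
    """x ⊑ y in the flat lattice of integers."""
    return x == BOTTOM or y == TOP or x == y

def approx_pair(p, q):
    """⟨x, x'⟩ ⊑ ⟨y, y'⟩ iff x ⊑ y and x' ⊑' y'."""
    return approx(p[0], q[0]) and approx(p[1], q[1])

print(approx_pair((BOTTOM, 3), (5, 3)))  # True: better in one coordinate
print(approx_pair((5, 3), (3, 5)))       # False: coordinates incomparable
```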
Sums and products can be generalized to more terms and factors, even
infinitely many. For example, $D^n$ can be taken as the set of all
$n$-tuples of elements of $D$, partially ordered in the obvious
coordinate-wise fashion. We can then set\\ \\
\hspace*{.5in}$D^{\star} = D^0 + D^1 + D^2 + \ldots + D^n + \ldots$\\
\\
which represents the data type of all finite {\it lists} of elements
of the given $D$. Then we could do lists of lists of lists, . . . Now
it would seem reasonable that $D^{\infty}$ can be defined such that\\
\\
\hspace*{.5in}$D^{\infty} = D + (D^{\infty})^{\star}$\\ \\ \\
that is to say, that each element of $D^{\infty}$ is either an element
of $D$ or is a list of other elements of $D^{\infty}$.\\ \\ \\ \\ \\
Remember that every continuous (even monotonic) function mapping a
complete lattice into itself has a {\it fixed point}. Thus, applying
this to $D^{\infty}$, for a fixed $a \in D$, the expression $\<a, x\>$
defines a continuous function of $D^{\infty}$ into itself. Consider a
fixed point\\ \\
\hspace*{.5in}$x = \<a, x\>$\\ \\
Thus, $x$ is a list whose first term is $a$ and whose second term is
. . . the list $x$ itself! Thus $x$ is a kind of infinite list\\ \\
\hspace*{.5in}$x = \<a, \<a, \<a, \ldots \>\>\>$ \footnote{Consider
figure 4 on page 19.}\\ \\ \\ \\
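Operationally, such a fixed point can be realized by laziness. A sketch of my own (not Scott's construction, just the intuition): the tail of the pair is hidden behind a thunk, so the ``infinite'' list is only unfolded on demand.

```python
# The fixed point x = ⟨a, x⟩ realized as a lazy pair: the tail is a
# zero-argument function, so the infinite list exists only on demand.

def fix_list(a):
    """Return x satisfying x = (a, x), with the tail delayed."""
    x = (a, lambda: x)   # the lambda closes over x itself
    return x

x = fix_list("a")
head, tail = x[0], x[1]()   # unfolding once gives the list back

print(head)        # a
print(tail is x)   # True: the second term is the list x itself
```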
How does this not fit our usual ideas about data types of lists?\\ \\ \\
One could say that $D^{\infty}$ gives us the topological {\it
completion} of the space of finite lists (How?), and the various
limit points need not be used if one does not care to take advantage
of them (Why?) \\ \\ \\ \\ \\
A second example of the completion idea concerns {\it function
spaces}. Let $D$ be given, and set $D_0 = D$ and\\ \\
\hspace*{.5in}$D_{n+1} = (D_n \lthen D_n)$\\ \\
The spaces $D_n$ are a selection of the ``higher-type'' spaces of
functions of functions of . . . It turns out that there is a natural
way of embedding each $D_n$ successively into the next
$D_{n+1}$. These embeddings make it possible to pass to a {\it limit
space}, $D_{\infty}$ which contains the originally given $D$ and is
such that \\ \\
\hspace*{.5in}$D_{\infty} = (D_{\infty} \lthen D_{\infty})$\\ \\
This space provides a solution to the self-application problem because
each element of $D_{\infty}$ can be regarded as a (continuous)
function {\it on} $D_{\infty}$ {\it into} $D_{\infty}$. And conversely
every continuous function can be represented by an element of
$D_{\infty}$.\\ \\ \\ \\ \\ \\ \\ \\
According to Scott, this is the first known, ``mathematically''
defined model of the $\lambda$-calculus. How?
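For contrast, here is how easily self-application is {\it written} in an untyped setting (a sketch of my own, just the classic $\lambda$-term): what $D_{\infty}$ supplies is a mathematical space in which terms like this denote actual elements, rather than merely programs.

```python
# Untyped self-application, written directly: the λ-term λf. f f.
self_apply = lambda f: f(f)

# Applied to a function that ignores its argument, it terminates fine.
const_42 = lambda _: 42
print(self_apply(const_42))  # 42
```

(Applying `self_apply` to itself, on the other hand, recurses forever, which is the operational shadow of the non-trivial limit structure of $D_{\infty}$.)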
%%%%%%%%%%%%%%%%%%%%%
%% Section 6
%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
We can now provide an answer to the ``storage-of-commands'' problem
mentioned above. Let $L$ be the {\it location} space (finite or
infinite). The space $V$ of {\it values} is to be constructed by the
limiting methods from above. Supposing it to have already been constructed,
the space, $S$, of {\it states of the store} is defined by\\ \\
\hspace*{.5in}$S = (L \lthen V)$\\ \\
The space $C$ of {\it commands} is defined by\\ \\
\hspace*{.5in}$C = (S \lthen S)$\\ \\
The space $P$ of {\it procedures} (with one parameter and {\it side
effects}) is defined by\\ \\
\hspace*{.5in}$P = (V \lthen (S \lthen V \times S))$\\ \\
That is, a procedure is a function which, given first a value of its
argument and next a state of the store, produces a
``computed'' value together with the necessary change of the state of
the store.\\ \\ \\ \\ \\ \\ \\ \\ \\
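A sketch of an element of $P = (V \lthen (S \lthen V \times S))$ in Python, with the concrete procedure and location name being my own illustrative assumptions:

```python
# A procedure in the space P = (V → (S → V × S)): it takes a value,
# then a store-state, and yields a computed value plus the new state.

def add_to_x(v):
    """A one-parameter procedure with a side effect on location 'x'."""
    def run(state):
        new_state = dict(state)
        new_state["x"] = state["x"] + v
        return new_state["x"], new_state   # (value, new store-state)
    return run

state = {"x": 10}
value, state2 = add_to_x(5)(state)

print(value)        # 15: the computed value
print(state2["x"])  # 15: the side effect, in the new state
print(state["x"])   # 10: the original state is untouched
```

Note the curried shape: `add_to_x(5)` is itself an element of $(S \lthen V \times S)$, matching the definition.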
What can those values be? Well, they might be {\it numbers, locations,
lists, commands}, or {\it procedures}. \\ \\ \\ \\ \\ \\
Thus,\\ \\
$V=N+R+L+V^{\star}+C+P \spo \spo$ (i.e., values = integers + reals
+ locations + lists + commands + procedures)\\ \\ \\ \\ \\
{\it Such a space $V$ does exist mathematically, and it provides the\\
values for expressions of a programming language of the kind we have\\
understood only previously in the ``operational way.'' We should at\\
once begin trying to understand these languages mathematically,\\
because we now have the tools to do so.} (p. 21)
\end{document}