\documentclass[11pt]{article}
\usepackage{tikz-cd}
\usepackage{proof-dashed}
\input{macros}
\setlength{\inferLineSkip}{4pt}
\metadata{Martens and Cavallo}{2013/11/04 and 2013/11/06}
\newcommand*{\Void}{\mathsf{0}}
\newcommand*{\Unit}{\mathsf{1}}
\newcommand*{\Bool}{\mathsf{2}}
\newcommand*{\Interval}{I}
\newcommand*{\Izero}{0_I}
\newcommand*{\Ione}{1_I}
\newcommand*{\Iseg}{\mathsf{seg}}
\newcommand*{\ap}{\mathsf{ap}}
\newcommand*{\apd}{\mathsf{apd}}
\newcommand*{\funext}{\mathsf{funext}}
\newcommand*{\isSet}{\mathsf{isSet}}
\title{15-819 Homotopy Type Theory\\Lecture Notes}
\author{Evan Cavallo and Chris Martens}
\date{November 4 and 6, 2013}
\begin{document}
\maketitle
\section{Contents}
These notes cover Robert Harper's lectures on Homotopy Type Theory from
November 4 and 6, 2013. Discussions include the interval type and classical
homotopy theory, classification of certain types as sets,
proof irrelevance, and the embedding of classical logic into constructive logic.
% \bibliographystyle{alpha}
% \bibliography{fp}
\section{The Interval}
\subsection*{Definition}
Homotopy type theory takes full advantage of the latent $\infty$-groupoid structure of ITT's identity types by adding new paths. One way we add paths is via the univalence axiom, which introduces new paths between types. We can also directly postulate the existence of types with higher path structure. We will eventually develop a general theory of these \emph{higher inductive types}. For the moment, we consider a simple example, the \emph{interval type}.
\begin{center}
\begin{tikzpicture}[node distance=2cm, auto]
\node (L) {$\Izero$};
\node (R) [right of=L] {$\Ione$};
\draw (L) to node {\footnotesize $\Iseg$} (R);
\end{tikzpicture}
\end{center}
The interval $\Interval$ is defined inductively with two traditional constructors, $\Izero$ and $\Ione$. We think of these as two endpoints of a continuum of points, analogous to the interval $[0,1]$ of classical analysis. With these points alone, the interval is no different from the type $\Bool$. In order to complete the definition, we also define a path $\Iseg$ which connects the two endpoints. We thus have the following introduction rules:
\begin{equation*}
\infer[\Interval\text{-I-0}]{\ctx \vdash \Izero : \Interval}{}
\hspace{30pt}
\infer[\Interval\text{-I-1}]{\ctx \vdash \Ione : \Interval}{}
\hspace{30pt}
\infer[\Interval\text{-I-\textsf{seg}}]{\ctx \vdash \Iseg : \Id{\Interval}(\Izero,\Ione)}{}
\end{equation*}
In order to find the correct elimination rule for this type, we ask the same question we asked in defining the coproduct: how do we map out of this type? For any type $A$, what is the form of a map $f : \Pi z{:}I. A$? For the sake of simplicity, let's first consider how to define a map $f : I \to A$. We expect the recursor to have the form
\begin{equation*}
\infer{\ctx, x : I \vdash \rec_\Interval[\_.A](x;M;N;?) : A}{\ctx \vdash M : A & \ctx \vdash N : A & ?}
\end{equation*}
with computation rules
\begin{eqnarray*}
\rec_\Interval[\_.A](\Izero;M;N;?) &\jdeq& M \\
\rec_\Interval[\_.A](\Ione;M;N;?) &\jdeq& N
\end{eqnarray*}
So far, this is just the recursor for $\Bool$. To see what additional information we need, notice that for any map $f : I \to A$ we have $\ap_f(\Iseg) : \Id{A}(f(\Izero), f(\Ione))$ -- the values $f(\Izero)$ and $f(\Ione)$ have to be related in some way. In other words, we need to specify the way that $f$ acts on the path $\Iseg$. The full (nondependent) recursor therefore has the form
\begin{equation*}
\infer{\ctx, x : I \vdash \rec_\Interval[\_.A](x;M;N;P) : A}{\ctx \vdash M : A & \ctx \vdash N : A & \ctx \vdash P : \Id{A}(M,N)}
\end{equation*}
with the computation rules
\begin{eqnarray*}
\rec_\Interval[\_.A](\Izero;M;N;P) &\jdeq& M \\
\rec_\Interval[\_.A](\Ione;M;N;P) &\jdeq& N \\
\ap_{\lambda x.\,\rec_\Interval[\_.A](x;M;N;P)}(\Iseg) &\jdeq& P
\end{eqnarray*}
For a dependent function $f : \Pi z{:}I. A$, the values $f(\Izero)$ and $f(\Ione)$ may have different types. Here, we have $\apd_f(\Iseg) : f(\Izero) =_\Iseg^{z.A} f(\Ione)$, so the dependent eliminator has the form
\begin{equation*}
\infer{\ctx, x : I \vdash \rec_\Interval[z.A](x;M;N;P) : A[x/z]}{\ctx \vdash M : A[\Izero/z] & \ctx \vdash N : A[\Ione/z] & \ctx \vdash P : M =_\Iseg^{z.A} N}
\end{equation*}
with the computation rules
\begin{eqnarray*}
\rec_\Interval[z.A](\Izero;M;N;P) &\jdeq& M \\
\rec_\Interval[z.A](\Ione;M;N;P) &\jdeq& N \\
\apd_{\lambda x.\,\rec_\Interval[z.A](x;M;N;P)}(\Iseg) &\jdeq& P
\end{eqnarray*}
One question we should ask ourselves is whether this last computation rule should be definitional. Postulating a definitional equality involving an internally-defined function, $\apd$, is highly unnatural. On the other hand, adding new propositional equalities ruins the computational interpretation of the theory. At this point, we have no satisfactory answer to this question. We will use definitional equality; the HoTT book uses propositional equality. The formal developments in Coq and Agda both use propositional equality, but this is largely an artifact of technical restrictions: it is impossible to add axioms for definitional equalities in these languages.
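As a small worked instance of the dependent eliminator (a standard fact, sketched here rather than taken verbatim from lecture), we can show that every point of $\Interval$ is connected to $\Izero$:
\[
\lambda x{:}I.\ \rec_\Interval[z.\Id{\Interval}(\Izero,z)](x;\ \refl{\Interval}(\Izero);\ \Iseg;\ P)
\ :\ \Pi x{:}I.\ \Id{\Interval}(\Izero,x)
\]
where $P : \refl{\Interval}(\Izero) =_\Iseg^{z.\Id{\Interval}(\Izero,z)} \Iseg$ is obtained from the fact that transport in the family $z.\Id{\Interval}(\Izero,z)$ is post-composition of paths, so that $\Iseg_{*}(\refl{\Interval}(\Izero)) = \refl{\Interval}(\Izero)\cdot\Iseg = \Iseg$.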
\subsection*{Describing Paths With the Interval}
In classical homotopy theory, paths in a space $A$ are defined as continuous mappings $f : I \to A$ where $I$ is the interval $[0,1]$. $f(0)$ is the left endpoint of the path, $f(1)$ the right endpoint, and the function gives a way of traveling continuously from one endpoint to the other. In homotopy type theory, paths are a primitive notion, but we can show that the classical definition is equivalent. For any type $A$, the path space $\Sigma x{:}A. \Sigma y{:}A. \Id{A}(x,y)$ is equivalent to $I \to A$, as shown by the following equivalence:
\begin{align*}
& f : (\Sigma x{:}A. \Sigma y{:}A. \Id{A}(x,y)) \to (I \to A) \\
& f \left<x,\left<y,p\right>\right> \defeq \lambda z. \rec_\Interval[\_.A](z;x;y;p) \\
~ \\
& g : (I \to A) \to (\Sigma x{:}A.\Sigma y{:}A. \Id{A}(x,y)) \\
& g~h \defeq \left< h(\Izero), \left<h(\Ione), \ap_h(\Iseg)\right> \right> \\
~ \\
& \alpha : \Pi s{:} (\Sigma x{:}A.\Sigma y{:}A. \Id{A}(x,y)).~g(f(s)) = s \\
& \alpha~s \defeq \refl{\Sigma x{:}A. \Sigma y{:}A. \Id{A}(x,y)}(s) \\
~ \\
& \beta : \Pi h{:}(I \to A).~f(g(h)) = h \\
& \beta~h \defeq \funext(\lambda x{:}I. \rec_\Interval[z.\, f(g(h))(z) = h(z)](x; \refl{A}(h(\Izero)); \refl{A}(h(\Ione)); \\
& \hspace{210pt} \refl{\Id{A}(h(\Izero),h(\Ione))}(\ap_h(\Iseg))))
\end{align*}
Intuitively, the interval has the shape of a single path, so the image of a function $f : I \to A$ is a path in $A$.
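Note that $\alpha$ can be taken to be reflexivity only because all three computation rules for the recursor, including the rule for $\ap$, are definitional in our presentation. For $s \jdeq \left<x,\left<y,p\right>\right>$ we have, unfolding the definitions,
\begin{eqnarray*}
g(f\left<x,\left<y,p\right>\right>)
&\jdeq& \left< \rec_\Interval[\_.A](\Izero;x;y;p),
\left< \rec_\Interval[\_.A](\Ione;x;y;p),
\ap_{\lambda z.\,\rec_\Interval[\_.A](z;x;y;p)}(\Iseg) \right>\right> \\
&\jdeq& \left<x,\left<y,p\right>\right>
\end{eqnarray*}
If the computation rule for $\ap$ were only propositional, $\alpha$ would instead have to be built from that propositional equality.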
\subsection*{{\sc Funext} from the Interval}
Interestingly, we can prove function extensionality in ITT if we assume the
presence of the interval type. Let $f , g : A \to B$ be two functions and
assume $h : \Pi x{:}A. \Id{B} (f(x), g(x))$; we want to show $\Id{A\to
B}(f,g)$. To do this, we'll define a function $k : I \to (A \to B)$. In
order to do that, we'll first want another function
$\tilde{k} : A \to (I \to B)$.
This function is defined for every $x : A$ by induction on $I$:
\begin{eqnarray*} \tilde{k}(x)(\Izero) &\defeq& f(x) \\ \tilde{k}(x)(\Ione)
&\defeq& g(x) \\ \ap_{\tilde{k}(x)}(\Iseg) &\defeq& h(x) \end{eqnarray*} The
function $k$ is then defined by $k(t)(x) \defeq \tilde{k}(x)(t)$. Observe that
$k(\Izero) \jdeq f$ and $k(\Ione) \jdeq g$. Hence, $\ap_k(\Iseg) : \Id{A
\to B}(f,g)$.
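One way of spelling this out (a sketch, which assumes that the $\eta$ rule for $\Pi$-types holds definitionally) is to define $k$ directly with the recursor and compute the endpoints:
\begin{eqnarray*}
k &\defeq& \lambda t{:}I.\ \lambda x{:}A.\ \rec_\Interval[\_.B](t;\ f(x);\ g(x);\ h(x)) \\
k(\Izero) &\jdeq& \lambda x{:}A.\ f(x) \jdeq f \\
k(\Ione) &\jdeq& \lambda x{:}A.\ g(x) \jdeq g
\end{eqnarray*}
so that $\ap_k(\Iseg)$ indeed has type $\Id{A\to B}(f,g)$.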
\section{ITT is a theory of sets}
Without univalence or higher inductive types, we have no way of constructing paths other than reflexivity. For this reason, we would expect that the types of ITT are \emph{homotopically discrete} -- they have no higher path structure. Recall the definition of $\isSet$:
\[
\isSet(A) \defeq \prod_{x,y:A} \prod_{p,q:\Id{A}(x,y)} p =_{\Id{A}(x,y)} q
\]
We will be able to show that most of the basic types in ITT are sets, and that most of the type constructors preserve the property of being a set. Depending on how we define the universe $\universe$, it may or may not be possible to prove $\universe$ is a set. To prove $\Pi$ preserves sethood, we will need function extensionality, which is not present in pure ITT. However, it is certainly consistent with pure ITT that all types are sets. In HoTT, on the other hand, we will be able to find types which are provably not sets.
\subsection*{Basic constructs}
\begin{itemize}
\item
$\Unit$: By a homework exercise, we know that $\Id{\Unit}(x,y) \simeq \Unit$, so for any $x,y : \Unit$ we have a map $f : \Id{\Unit}(x,y) \to \Unit$ which is an equivalence. Let $x,y : \Unit$ and $p, q : \Id{\Unit}(x,y)$ be given. We know that $f(p) =_\Unit \left<\right>$ and $f(q) =_\Unit \left<\right>$, so $f(p) =_\Unit f(q)$. Then $f^{-1}(f(p)) =_{\Id{\Unit}(x,y)} f^{-1}(f(q))$ by $\ap_{f^{-1}}$, and the properties of inverses give us that $p =_{\Id{\Unit}(x,y)} q$.
\item
$\Void$: Given $x,y : \Void$ and $p, q : \Id{\Void}(x,y)$, we can simply abort with $x$ to prove that $p =_{\Id{\Void}(x,y)} q$.
\item
$\Pi$: We assume function extensionality. To prove that $\Pi x{:}A. B_x$ is a set, we only need to know that $B_x$ is a set for every $x{:}A$. Assume this is true, and let $f, g : \Pi x{:}A. B_x$. We want to show any two $p, q : \Id{\Pi x {:}A. B_x}(f,g)$ are equal. By function extensionality, the type $\Id{\Pi x {:}A. B_x}(f,g)$ is equivalent to $\Pi x{:}A. \Id{B_x}(f(x), g(x))$, so it suffices to prove any two homotopies $h, k : \Pi x{:}A. \Id{B_x}(f(x), g(x))$ are equal. Applying function extensionality again, it is enough to show $h(x) =_{\Id{B_x}(f(x),g(x))} k(x)$ for every $x{:}A$. This follows from our assumption that $B_x$ is a set.
\item
$\Sigma$: Analogously with the product type $A \times B$, it is possible to show that $\Id{\Sigma x{:}A. B_x}(a,b)~\simeq~\Sigma p : (\fst~a = \fst~b).~(\snd~a =^{x.B_x}_p \snd~b)$. Thus, a path in $\Sigma x{:}A. B_x$ is decomposable into a path in $A$ and a path in $B_x$ for some $x$; if $A$ and $B_x$ are sets, all such paths will be equal, so we can show that all paths in $\Sigma x{:}A. B_x$ are equal.
\item
$+$: To prove that $A + B$ is a set, we need to assume $A$ and $B$ are sets. Given $x,y : A + B$, we want to show that any two elements of $\Id{A + B}(x,y)$ are equal. We can do this by a case analysis on $x$ and $y$. If $x \jdeq \inl(a)$ and $y \jdeq \inl(a')$, then $\Id{A + B}(x,y) \simeq \Id{A}(a,a')$, so our theorem follows from the fact that $A$ is a set. The case that $x \jdeq \inr(b)$ and $y \jdeq \inr(b')$ is symmetric. If $x \jdeq \inl(a)$ and $y \jdeq \inr(b)$ (or the reverse), then the space of paths from $x$ to $y$ is empty, so of course any two paths are equal.
\item
$\Nat$: We will later discuss Hedberg's theorem, which shows that any type with decidable equality is a set. We leave it as an exercise for the reader to show that $\Nat$ has decidable equality; a sketch of one standard approach follows this list.
\end{itemize}
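% The name ``code'' and the symbols z, s for zero and successor are ad hoc notation for this sketch.
For the exercise, one standard approach (sketched here; the family we call $\op{code}$ and the symbols $\mathsf{z}$, $\mathsf{s}$ for zero and successor are our own notation) is to characterize the identity types of $\Nat$ by recursion:
\begin{eqnarray*}
\op{code}(\mathsf{z},\mathsf{z}) &\defeq& \Unit \\
\op{code}(\mathsf{z},\mathsf{s}(n)) &\defeq& \Void \\
\op{code}(\mathsf{s}(m),\mathsf{z}) &\defeq& \Void \\
\op{code}(\mathsf{s}(m),\mathsf{s}(n)) &\defeq& \op{code}(m,n)
\end{eqnarray*}
One shows $\Id{\Nat}(m,n) \simeq \op{code}(m,n)$, and then decides $\op{code}(m,n)$ by a double induction on $m$ and $n$, since $\Unit$ and $\Void$ are decidable.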
\subsection*{The universe}
We did not go into much detail about showing that the universe is a set, but
the gist is that we can present it by giving ``codes,'' or abstract syntax
trees, for every type in the universe, in such a way that the codes can be put
in correspondence with the natural numbers. For example, a base type like
$\Nat$ gets an atomic code $\dot{\Nat}$, and a compound type gets a code built
inductively from the codes of its parts. We then give an interpretation
function $T$ mapping codes to types, saying the appropriate thing, like
$T(\dot{\Nat}) = \Nat$ and $T(a\dot{\to}b) = T(a) \to T(b)$.
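Schematically (only a sketch; the precise collection of codes depends on which type formers the universe is closed under), the codes form an inductive type generated by a grammar such as
\[
c ::= \dot{\Void} \mid \dot{\Unit} \mid \dot{\Nat} \mid c_1 \dot{\to} c_2 \mid c_1 \dot{\times} c_2 \mid \cdots
\]
together with the interpretation $T$ sending each code to the type it names. For a simple universe closed only under such first-order formers, the type of codes has decidable equality, and hence is a set by Hedberg's theorem (discussed at the end of these notes).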
\subsection*{Identity types}
We can show that $\Id{A}(x,y)$ is a set if $A$ is a set.
Assumption: $A$ is a set, i.e.\ there is a term $H$ such that
\[
H : \Pi{x,y}{:}A.\Pi{p,q}{:}\Id{A}(x,y).\Id{\Id{A}(x,y)}(p,q)
\]
For the sake of making deeply-nested subscripts on identity types more
readable, let's introduce a few definitions:
\newcommand{\idA}{\op{idA}}
\newcommand{\ididA}{\op{ididA}}
\newcommand{\idididA}{\op{idididA}}
\begin{eqnarray*}
\op{idA}(x,y) &:=& \Id{A}(x,y)\\
\op{ididA}(x,y,r,s) &:=& \Id{\idA(x,y)}(r,s) \\
\op{idididA}(x,y,r,s,\alpha,\beta) &:=& \Id{\ididA(x,y,r,s)}(\alpha,\beta)
\end{eqnarray*}
We need to show that for any $x,y$,
$\Id{A}(x,y)$ is a set, i.e. construct a proof term of type
\[
\Pi{r,s}{:}\idA(x,y).\Pi{\alpha,\beta}{:}\ididA(x,y,r,s).
\idididA(x,y,r,s,\alpha,\beta)
\]
Assume:
\begin{eqnarray*}
u,v &:& A\\
r,s &:& \idA(u,v)\\
\alpha,\beta &:& \ididA(u,v,r,s)
\end{eqnarray*}
Need to construct a term of type $\idididA(u,v,r,s,\alpha,\beta)$.
First, specialize $H$ by defining $H'(q) \defeq H(u,v,r,q)$, so that $H'(q) : \ididA(u,v,r,q)$.
We exploit the functoriality of $H'$ to get
\begin{eqnarray*}
\op{apd}_{H'} &:& \Pi{q,q'}{:}\Id{A}(u,v).
\Pi{\gamma}{:}\Id{-}(q,q').
\Id{-}(\gamma_{*}(H'(q)), H'(q'))\\
\op{apd}_{H'}(r,s,\alpha) &:& \Id{-}(\alpha_{*}(H'(r)), H'(s))\\
\op{apd}_{H'}(r,s,\beta) &:& \Id{-}(\beta_*{}(H'(r)), H'(s))
\end{eqnarray*}
By symmetry and transitivity of identity, we can form a term of type
\[
\Id{-}(\alpha_*(H'(r)), \beta_*(H'(r)))
\]
Since transport in an identity family is given by path composition (see the
sketch below), this yields a term of type
\[
\Id{-}(H'(r)\cdot \alpha, H'(r)\cdot\beta)
\]
The groupoid structure gives a cancellation property (compose both sides with
the inverse of $H'(r)$), so this means $\alpha = \beta$.
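Here is a sketch of the transport step: for the family $q \mapsto \ididA(u,v,r,q)$, transport along $\gamma : \Id{\idA(u,v)}(q,q')$ is post-composition with $\gamma$,
\[
\Pi{\delta}{:}\ididA(u,v,r,q).\ \Id{\ididA(u,v,r,q')}(\gamma_{*}(\delta),\ \delta\cdot\gamma)
\]
which is proved by path induction on $\gamma$: when $\gamma$ is reflexivity, both sides reduce to $\delta$ (up to the unit law for composition).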
It is left as an exercise to the reader to construct this term in formal
notation.
% (XXX The above is sloppy; I'll work on it.)
\section{ITT + UA is {\em not} a set theory}
In other words, in homotopy type theory, not all types are sets. In
particular, $\universe$ is a proper groupoid. There are nontrivial paths
between the elements of $\universe$.
As an example, we can exhibit two distinct paths from the booleans $2$ to
themselves: one based on the identity map $\op{id}$, taking $\op{tt}$ to
$\op{tt}$ and $\op{ff}$ to $\op{ff}$, and the other based on $\op{not}$,
taking $\op{tt}$ to $\op{ff}$ and $\op{ff}$ to $\op{tt}$.
$\op{not}$ and $\op{id}$ are two functions from ${2}$ to ${2}$, each of which
can be shown to be an {\em equivalence} (exercise). Denote by $\mathsf{ua}$ the
half of UA that takes us from equivalences to paths. Then
$\mathsf{ua}(\op{id})$ and
$\mathsf{ua}(\op{not})$ are two {\em paths} from $2$ to $2$.
We can now refute the claim that these paths are equal, i.e.\ realize
\[
\Id{\Id{\universe}(2,2)}(\mathsf{ua}(\op{id}),\mathsf{ua}(\op{not})) \to 0
\]
Suppose we were given such an identification of the two paths. By the
(propositional) computation rule for univalence, transporting $\op{tt}$ along
$\mathsf{ua}(\op{id})$ in the family $X.X$ yields $\op{id}(\op{tt}) \jdeq
\op{tt}$, while transporting along $\mathsf{ua}(\op{not})$ yields
$\op{not}(\op{tt}) \jdeq \op{ff}$. The assumed identification makes the two
transports equal, so we obtain a proof that $\op{tt} =_2 \op{ff}$. This can be
refuted via the path characterization of sum types seen in a previous lecture,
yielding $0$.
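Concretely, writing $p_{*}$ for transport along $p$ in the family $X.X$ (a notational choice for this sketch), the chain of identifications is
\begin{eqnarray*}
\op{tt} &=& \op{id}(\op{tt}) \\
&=& (\mathsf{ua}(\op{id}))_{*}(\op{tt}) \\
&=& (\mathsf{ua}(\op{not}))_{*}(\op{tt}) \\
&=& \op{not}(\op{tt}) \\
&=& \op{ff}
\end{eqnarray*}
where the second and fourth steps use the (propositional) computation rule for univalence and the third uses the assumed path between $\mathsf{ua}(\op{id})$ and $\mathsf{ua}(\op{not})$.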
\section{$n$-types}
To foreshadow what's to come: we will eventually consider $\op{isSet}(A)$ a
special case of the more general $\op{is-}n\op{-type}(A)$, specifically
% \newcommand{\isntype}[1]{#1}
\newcommand{\isntype}[1]{\op{is-}{#1}\op{-type}}
\begin{tabular}{ccc}
$\op{isSet}(A)$ & becomes & $\isntype{0}(A)$ \\
$\op{isGpd}(A)$ & becomes & $\isntype{1}(A)$ \\
$\op{is2Gpd}(A)$ & becomes & $\isntype{2}(A)$ \\
\vdots & & \vdots
\end{tabular}
\bigskip
The types $A$ for which $\op{is-}n\op{-type}(A)$ holds will be called the
$n$-types. Roughly, this means that $n$ levels up we reach a set: the
identities between identities between $\ldots$ ($n$ times) become identified.
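As a preview (the official definition will come later in the course; what follows is only a sketch), the pattern can be captured by a single recursive definition, starting two levels down:
\begin{eqnarray*}
\isntype{(-2)}(A) &\defeq& \op{isContr}(A) \defeq \Sigma{x}{:}A.\ \Pi{y}{:}A.\ \Id{A}(x,y) \\
\isntype{(n+1)}(A) &\defeq& \Pi{x,y}{:}A.\ \isntype{n}(\Id{A}(x,y))
\end{eqnarray*}
so that $\op{isProp}$ (defined below) is equivalent to $\isntype{(-1)}$ and $\op{isSet}$ to $\isntype{0}$.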
But before we start climbing the ladder upward, let's go the opposite
direction and consider (in some sense) $n=-1, -2$, i.e. what happens if we
{\em take away} structure in the sense of differentiation of identity
proofs.
\section{Proof Irrelevance}
So far, we have taken to heart the idea of {\em proof relevance} and seen
that it can be useful for the {\em evidence for a proposition} to matter,
i.e. to treat the proposition as a type and terms inhabiting that type as
useful, meaningful data. For example, the natural numbers form a type
$\Nat$, and different ``{\em proofs}'' of $\Nat$ are different
numbers---so of course we care to differentiate them.
Now we will consider the special case of proof {\em irrelevance}: we can
identify certain propositions whose proofs we {\em do not} wish to
distinguish, i.e.\ for such a type $A$ we consider any $M, N : A$ to be equal.
We will call this property $\op{isProp}$
(corresponding to $\isntype{(-1)}$ in the table above), and formally
we define $\op{isProp}(A)$ to be the type
\[
\Pi{x,y}{:}A.\Id{A}(x,y)
\]
Another word used to describe $A$ with this property is ``subsingleton.''
It is a type with at most one element, up to higher homotopy (i.e. if there
are multiple elements then there are paths between them).
A motivation for considering this type arises in the domain of dependently
typed programming, wherein we want to consider types (propositions) to be
{\em specifications} for code. For example, consider specifying a function
that takes a (possibly infinite) sequence and returns the first index of
the sequence that contains the element $0$. A type giving this
specification might look like
\[
\Pi{s{:}\Nat\to\Nat}.\ \Sigma{i:\Nat}.s(i)=_{\Nat} 0
\]
...except that if we want the function to be {\em total}, we need some
extra information about the input stream, saying that it actually contains
a $0$ element:
\[
\Pi{s{:}(\Sigma{t{:}\Nat\to\Nat}.\Sigma{i{:}\Nat}.t(i) =_{\Nat}0}).\
\Sigma{i:\Nat}.(\pi_1 s)(i)=_{\Nat} 0
\]
But it turns out that we are now asking for too much information from our
input. The above specification has a constant-time algorithm: the input
contains a proof, i.e. a witness that it has a 0 element, which is exactly
what we are supposed to return. The function is
\[
\lambda{s}.\langle \pi_1 (\pi_2 s), \pi_2 (\pi_2 s) \rangle
\]
This is not the function we wanted to specify: we had in mind something
like an inductive traversal of the sequence, stopping when we find the 0
element and returning a tracked index.
The question to ask in resolving this puzzle is: ``How do we suppress information
in a type?'' We would still like to require the input to have a 0 element
without making that information available in computation; that is, we are
interested in only the {\em propositional} content of the spec.
One way of suppressing information in a type is Brouwer's idea of using
double negation, i.e. to put a $\lnot\lnot$ in front of $s$'s type.
(Digression: if the stream is infinite, it's not actually clear that we
would be able to decide whether it contains a 0; we might be worried that
by doubly-negating, we no longer have access to that information.
Markov's Principle, from the Russian school of constructivism, states that
if a Turing machine can't fail to halt, it must halt; i.e. it takes a form
of DNE specialized to Turing machines. Alternatively, we can take the NuPRL
route and specify a bound $k$ for the sequence such that we know we will
find a $0$ if there is one.)
Double negation ``kills computational content'' in the way that we want,
and we can formally state that as the following fact:
$\op{isProp}(\lnot\lnot A)$ for any $A$.
Recall that $\lnot\lnot A$ is defined as $(A\to \zero) \to \zero$, so we need
to show a term inhabiting
\[
\Pi{x,y}{:}(\lnot\lnot A).\Id{\lnot\lnot A}(x,y)
\]
In lecture it was stated that this is a simple proof using $\abort{-}$. If we
have function extensionality available it seems straightforward that in
fact {\em any} negated type $A \to 0$ is a prop:
\[
{f,g}{:}{\lnot C}, x{:}C \vdash \abort{C}(f\ x)
: \Id{0}(f\ x, g\ x)
\]
With function extensionality we can turn this into
\[
{f,g}{:}{\lnot C} \vdash \op{funext}(\lambda{x}.\abort{C}(f\ x))
: \Id{\lnot C}(f, g)
\]
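In particular, taking $C$ to be $\lnot A$ above, $\lnot C$ is exactly $\lnot\lnot A$, so the same construction gives the desired term (a sketch, using the $\abort{-}$ notation as above):
\[
\lambda{x,y}{:}(\lnot\lnot A).\ \op{funext}(\lambda{u}{:}(\lnot A).\ \abort{-}(x\ u))
\ :\ \Pi{x,y}{:}(\lnot\lnot A).\Id{\lnot\lnot A}(x,y)
\]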
% XXX stuck here, work this out.
\subsection{G{\"o}del's Double Negation Translation}
\newcommand{\dnt}[1]{{\parallel}{#1}{\parallel}}
\renewcommand{\implies}{\supset}
Brouwer's insight about double negation led to G{\"o}del's discovery of a
translation embedding classical logic into constructive logic. The idea is
to define $\dnt{-}$ such that if $A$ is provable classically, $\dnt{A}$ is
provable constructively. We can give this translation as:
\begin{eqnarray*}
\dnt{1} &=& 1\\
\dnt{A\land B} &=& \dnt{A} \land \dnt{B}\\
\dnt{0} &=& 0\\
\dnt{A\lor B} &=& \lnot\lnot(\dnt{A} \lor \dnt{B})
\end{eqnarray*}
For implication we have two choices. We can either ``just squash'' the
type, which would be sufficient for information erasure:
\[
\dnt{A\implies B} = \dnt{A} \implies \dnt{B}
\]
...or we can properly embed classical logic with the translation
\[
\dnt{A\implies B} = \dnt{A} \implies \lnot\lnot\dnt{B}
\]
We need the latter definition to recover completeness with respect to classical
logic. Remembering that classical logic can be formulated as ``constructive
logic plus DNE (a double negation elimination rule available in general),'' we
have (up to simplifying the double negation of $0$ arising from the clause for
negation)
\[
\dnt{\lnot\lnot A\implies A} = \lnot\lnot \dnt{A}\implies \dnt{A}
\]
with the ``just squash'' principle, but
\[
\dnt{\lnot\lnot A\implies A} = \lnot\lnot \dnt{A}\implies \lnot\lnot \dnt{A}
\]
with the classical embedding, which is provable constructively (it is just
an instance of the identity).
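As a small worked example, consider the law of the excluded middle. Up to the clause for negation, its translation is $\lnot\lnot(\dnt{A} \lor \lnot\dnt{A})$, which is constructively provable (reading $\lor$ as a sum type):
\[
\lambda{k}{:}\lnot(\dnt{A} \lor \lnot\dnt{A}).\ k\,(\inr(\lambda{a}{:}\dnt{A}.\ k\,(\inl(a))))
\ :\ \lnot\lnot(\dnt{A} \lor \lnot\dnt{A})
\]
This is the formal counterpart of the slogan that classical reasoning becomes constructively valid ``under a double negation.''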
With the interpretation of $\lnot A$ as a {\em continuation} accepting a
term of type $A$, this translation coincides with the
``continuation-passing transform'' for compilers.
% XXX other possible ways of doing information erasure
% quotient by the full relation
% form a subsingleton, the set {* | A}
\section{Hedberg's Theorem}
Finally, we will touch briefly on Hedberg's Theorem.
Hedberg's Theorem is another way to prove something is a set: it states
that a type with {\em decidable equality} is a set. In other words, if
\[
\Pi{x,y}{:}A.\Id{A}(x,y) \lor \lnot\Id{A}(x,y)
\]
then $\op{isSet}(A)$.
Proof sketch: decidable equality implies {\em stable} equality, i.e.\
$\lnot\lnot\Id{A}(x,y) \implies \Id{A}(x,y)$, and stable equality implies
sethood. A sketch of the first step follows.
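% \rec_{+} denotes the coproduct recursor; this notation is ad hoc for this sketch.
For any $x,y{:}A$, given the decision $d$ and a proof $k$ of the double negation, we case on $d$: in the positive case we return the path, and in the negative case $k$ applied to the disproof yields an element of $0$, which we may abort (a sketch, writing $\rec_{+}$ for the coproduct recursor and $\abort{-}$ as before):
\[
\lambda{d}.\ \lambda{k}.\ \rec_{+}[\_.\Id{A}(x,y)](d;\ p.\,p;\ n.\,\abort{-}(k\,n))
\ :\ (\Id{A}(x,y) \lor \lnot\Id{A}(x,y)) \to \lnot\lnot\Id{A}(x,y) \to \Id{A}(x,y)
\]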
\end{document}