% Toggle comments for preamble and topmatter to typeset in ACM style
%\input{preamble-standard}
\input{preamble-acm}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{mathtools}
\usepackage{stmaryrd}
\usepackage{todonotes}
\usepackage{etoolbox}
\usepackage{appendix}
% Packages where the order matters
\usepackage[safeinputenc,natbib=true]{biblatex}
\usepackage{hyperref}
\usepackage{cleveref}
\newcommand{\todoall}[1]{\todo[inline,color=black!30,author=All]{#1}}
\newcommand{\todokcg}[1]{\todo[inline,color=pink!60,author=Katriel]{#1}}
\newcommand{\todompj}[1]{\todo[inline,color=yellow!40,author=Michael]{#1}}
\newcommand{\todomario}[1]{\todo[inline,color=blue!40,author=Mario]{#1}}
\input{notation}
\newif\ifproofs
% Comment out to disable proofs
\proofstrue
\addbibresource{paper.bib}
\begin{document}
%\input{topmatter-standard}
\input{topmatter-acm}
\begin{abstract}
Incremental computation has recently been studied using the concepts of \emph{change
structures} and \emph{derivatives} of programs. The derivative of a program allows updating the output
of a program based on a change to its input.
We generalise change structures to a Cartesian closed category of \emph{change actions},
and study their algebraic properties. We develop change actions for several common structures
in computer science, including directed-complete partial orders and Boolean algebras.
We then show how to compute derivatives of fixpoints, leading to a
generic account of incremental \emph{evaluation} of Datalog,
as well as incremental \emph{updating} of evaluated Datalog programs.
\end{abstract}
\title{Fixing incremental computation}
\subtitle{Derivatives for dcpos, fixpoints, and the semantics of Datalog}
\maketitle
\section{Introduction}
\label{sec:intro}
Consider the following classic Datalog program, which computes the transitive
closure of an edge relation $e$:
\begin{align*}
tc(x, y) &\leftarrow e(x, y)\\
tc(x, y) &\leftarrow e(x, z) \wedge tc(z, y)
\end{align*}
The semantics of Datalog tell us that the denotation of this program is the
least fixpoint of the rule $tc$. Kleene's Theorem tells us that we can reach
this by repeatedly applying the rule, starting from the empty relation. For example, supposing
that $e = \{ (1, 2), (3, 4), (5, 6) \}$:
\begin{center}
\begin{tabular} {|c|c|c|}
\hline
Iteration & Newly deduced facts & Accumulated contents of $tc$ \\
\hline
1 & $\{ (1, 2)\}$ & $\{ (1, 2) \}$\\
2 & $\{ (1,2), (3,4) \}$ & $\{ (1, 2), (3,4) \}$\\
3 & $\{ (1,2), (3,4), (5,6) \}$ & $\{ (1, 2), (3,4), (5,6) \}$\\
4 & $\{ (1,2), (3,4), (5,6) \}$ & $\{ (1, 2), (3,4), (5,6) \}$\\
\hline
\end{tabular}
\end{center}
At this point we have reached the fixpoint, so we are done.
However, this process is quite wasteful. We deduced $(1,2)$ at every iteration,
even though we had already deduced it in the first iteration. In fact, for a
chain of $n$ such edges we will deduce $O(n^2)$ facts along the way.
The standard improvement to this is ``semi-naive'' evaluation, where we transform
the program into a \emph{delta} program that only deduces the new facts at each
iteration, which we gradually accumulate \autocite[See][section
13.1]{abiteboul1995foundations}.
\begin{align*}
\Delta tc_{0}(x, y) &\leftarrow e(x, y)\\
\Delta tc_{i+1}(x, y) &\leftarrow e(x, z) \wedge \Delta tc_{i}(z, y)\\
tc_{0}(x, y) &\leftarrow e(x, y)\\
tc_{i+1}(x, y) &\leftarrow tc_{i}(x,y) \vee \Delta tc_{i+1}(x,y)
\end{align*}
\begin{center}
\begin{tabular} {|c|c|c|}
\hline
Iteration & Newly deduced facts & Accumulated contents of $tc$ \\
\hline
1 & $\{ (1, 2)\}$ & $\{ (1, 2) \}$\\
2 & $\{ (3,4) \}$ & $\{ (1, 2), (3,4) \}$\\
3 & $\{ (5,6) \}$ & $\{ (1, 2), (3,4), (5,6) \}$\\
4 & $\{ \}$ & $\{ (1, 2), (3,4), (5,6) \}$\\
\hline
\end{tabular}
\end{center}
This is much better: we have turned a quadratic computation into a linear one.
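The difference between the two strategies can be seen concretely. The following Python sketch (the function names are ours, purely illustrative, and not part of any Datalog engine) runs both strategies on a chain of edges and counts the facts deduced along the way:

```python
# Naive vs semi-naive evaluation of transitive closure over an edge set e.
# Each function returns the closure and the number of facts deduced in total.

def naive_tc(e):
    tc, deduced = set(), 0
    while True:
        # Re-derive everything from scratch at every iteration.
        new = e | {(x, y) for (x, z) in e for (z2, y) in tc if z == z2}
        deduced += len(new)
        if new <= tc:
            return tc, deduced
        tc |= new

def seminaive_tc(e):
    tc, delta, deduced = set(e), set(e), len(e)
    while delta:
        # Only join against the *new* facts from the previous iteration.
        delta = {(x, y) for (x, z) in e for (z2, y) in delta if z == z2} - tc
        deduced += len(delta)
        tc |= delta
    return tc, deduced

chain = {(i, i + 1) for i in range(10)}   # a chain of 10 edges
tc1, n1 = naive_tc(chain)
tc2, n2 = seminaive_tc(chain)
assert tc1 == tc2    # both reach the same fixpoint
assert n2 < n1       # semi-naive deduces far fewer facts along the way
```

On the chain, the semi-naive strategy deduces each of the 55 closure facts exactly once, while the naive strategy re-deduces the whole accumulated relation at every iteration.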
But the delta rule translation works only for traditional Datalog. It is common to
liberalise the term syntax to support, say, disjunction, existential quantification, and parity-stratified
negation.\footnote{Parity-stratified negation means that recursive calls must
appear under an even number of negations. This ensures that the rule remains
monotone, so the least fixpoint semantics still exists.}
Then we can write programs like the following, where we compute whether all the
nodes in a subtree given by $child$ have some property $p$:
\begin{align*}
treeProperty(x) &\leftarrow p(x) \wedge \neg \exists y . (child(x,y) \wedge \neg treeProperty(y))
\end{align*}
This is essentially a recursion through a \emph{universal} quantifier. We would
like to be able to use semi-naive evaluation for this rule too, but the simple delta
transformation does not produce a correct incremental program, and it is unclear how to extend it (and the
correctness proof) to handle such cases.
This is of more than theoretical interest - the research
in this paper was carried out at Semmle, which
makes heavy use of a commercial Datalog implementation
\autocites{semmleWebsite}{avgustinov2016ql}{sereni2008adding}{schafer2010type}.
Semmle's implementation includes parity-stratified negation, and other non-standard
features, so we are faced with a dilemma: either abandon the new language
features, or abandon incremental computation.
There is a piece of folkloric knowledge in the Datalog community that hints at a
solution: the semi-naive translation of a rule corresponds to the
\emph{derivative} of that rule \autocites{bancilhon1986naive}[section
3.2.2]{bancilhon1986amateur}. The idea of performing incremental computation using derivatives has been
studied recently by \textcite{cai2014changes}, who give an account using
\emph{change structures}. They use this to provide a framework for incrementally evaluating lambda calculus programs.
However, \citeauthor{cai2014changes}'s work isn't directly applicable to Datalog: the tricky part
of Datalog's semantics is the \emph{fixpoint}, and we need some additional theory to tell us how to
handle incremental evaluation of fixpoint computations.\footnote{In fact, we can prove these
results for fixpoint computations over dcpos in general, which opens up the
possibility of using these results in domain theory, which uses fixpoints to
represent recursion in programming language semantics.}
This paper bridges that gap. We start by generalizing change structures to
\emph{change actions} (\cref{sec:changeActions}). Change actions are weaker than change structures, but
have nice categorical properties (\cref{sec:category}), and exist for more structures.
We then show how to compute fixpoints incrementally, and also how to perform
incremental updates of already computed fixpoint expressions, given a change to
the function of which the fixpoint is taken (\cref{sec:fixpoints}).
Finally, we put all this together into a generic account of incremental
computation and update of Datalog programs, showing how to handle
language constructs such as negation, disjunction, and additional well-behaved
extensions (\cref{sec:datalog}). This provides the world's first incremental
evaluation and update mechanism for Datalog that can handle arbitrary additional
term constructs (\cref{sec:extensions}).
\section{Change actions}
\label{sec:changeActions}
The core concept we will work with is a \emph{change action}. A change action is
a set along with a set of \emph{changes} that can be ``applied'' to elements of
the base set using an operator $\cplus$ (pronounced ``smush''). We also require
some structure on the changes themselves: they must form a \emph{monoid action}
on the base set.\footnote{The requirement that the change set be a monoid is convenient but in
fact inessential: given any set with an action on the base set, we can take the
free monoid over the action set to obtain a monoid action.}
\begin{defn}[Change actions]
A \emph{change action} is defined as:
\begin{displaymath}
\cstr{A} \defeq \cstruct{A}{\changes{A}}{\cplus}
\end{displaymath}
where $A$ is a set, $(\changes{A}, \splus, \mzero)$ is a monoid, and $\cplus$ gives a monoid action on $A$.
We will call $A$ the base set, and $\changes{A}$ the change set of the change
action. We may abbreviate $\cstruct{A}{\changes{A}}{\cplus}$ to $\cstr{A}_\cplus$.
\end{defn}
The fact that the change set is a monoid action gives us a reason to think that
change actions are an adequate representation of changes: any set of
transformations on the base set which is closed under composition can be
represented as a monoid action, so we are able to capture all of these as change actions.
The primary motivation for change actions is that they let us define
\emph{derivatives} for functions.
\begin{defn}[Derivatives]
A \emph{derivative} of a function $f: \cstr{A}_\cplus \rightarrow \cstr{B}_\cpluss$ is a function $\derive{f}: A \times \changes{A} \rightarrow
\changes{B}$ such that
\begin{displaymath}
f(a \cplus \change{a}) = f(a) \cpluss \derive{f}(a, \change{a})
\end{displaymath}
A function which has a derivative is called \emph{differentiable}.
\end{defn}
Derivatives need not be unique, in general, so we will speak of ``a''
derivative.\footnote{In several places we will need to pick an arbitrary
derivative for some construction. In general this needs the Axiom of Choice,
but in most practical cases we will want to have a computable derivative
operator for our domain, which alleviates the problem. In addition, thin
change actions (\cref{sec:thin}) have unique derivatives.}
Derivatives capture the essence of incremental computation: given the value of a
function at a point, and a change to that point, they tell you how to compute
the new value of the function.
The choice of the name ``derivative'' is also not a coincidence. While these
derivatives do not look quite like derivatives in real analysis, they \emph{do}
bear a strong resemblance to derivatives in other areas (such as synthetic differential geometry), and
they satisfy the standard chain rule.
\begin{thm}[The Chain Rule]
Let $f: \cstr{A}_\cplus \rightarrow \cstr{B}_\cpluss$, $g: \cstr{B}_\cpluss \rightarrow \cstr{C}_\cplusss$ be differentiable functions. Then $g \circ f$ is also
differentiable, with derivative given by
\begin{displaymath}
\derive{(g \circ f)}(x, \change{x}) = \derive{g}\left(f(x), \derive{f}(x, \change{x})\right)
\end{displaymath}
or, in curried form
\begin{displaymath}
\derive{(g \circ f)}(x)(\change{x}) = \derive{g}(f(x)) \circ \derive{f}(x)
\end{displaymath}
\end{thm}
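The chain rule is easy to check concretely on a monoid change action such as $\cstruct{\ZZ}{\ZZ}{+}$, where changes are integers applied by addition. The following Python sketch (ours, purely illustrative) verifies it for two simple differentiable functions:

```python
# Chain rule check on the monoid change action (Z, Z, +):
# changes are integers, applied to the base set by addition.

f = lambda x: 2 * x
df = lambda x, dx: 2 * dx                  # f(x + dx) = f(x) + 2*dx

g = lambda y: y * y
dg = lambda y, dy: 2 * y * dy + dy * dy    # g(y + dy) = g(y) + dg(y, dy)

# Chain rule: a derivative of g . f is dg applied at f(x) to df(x, dx).
dgf = lambda x, dx: dg(f(x), df(x, dx))

x, dx = 5, 3
assert g(f(x + dx)) == g(f(x)) + dgf(x, dx)
```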
Here are some recurring examples of change actions:
\begin{itemize}
\item $\cstr{A}_\discrete \defeq \cstruct{A}{\emptyset}{\emptyset}$ is the discrete change action on any base set.
\item $\cstr{A}_\Rightarrow \defeq \cstruct{A}{A^A}{\ev}$, where $A^A$ denotes the exponential object and $\ev$ is the evaluation map.
\item $[A]$, the type of lists (or streams) of elements of type $A$, has a
change action over concatenation ($\doubleplus$): $\cstruct{[A]}{[A]}{\doubleplus}$.
\item $\{A\}$, the type of sets, has change actions $\cstruct{\{A\}}{\{A\}}{\cup}$, $\cstruct{\{A\}}{\{A\}}{\cap}$.
\end{itemize}
Indeed, any monoid $(A, \splus)$ can be seen as a change action
$\cstruct{A}{A}{\splus}$, and the final two examples are instances of this. Many practical change actions
can be constructed in this way. In particular, for any change action $\cstruct{A}{\changes{A}}{\cplus}$,
$\cstruct{\changes{A}}{\changes{A}}{\splus}$ is also a change action. This means
that we don't have to do any extra work to talk about higher derivatives of
functions - we can always take $\changes{\changes{A}} = \changes{A}$.
Many other notions in computer science can be naturally understood in terms of change actions, e.g. databases
and database updates, files and diffs, Git repositories and commits, even video compression
algorithms that encode a frame as a series of changes to the previous frame.
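The monoid-action laws and the derivative condition are cheap to check concretely. A minimal Python sketch (names ours) for the list change action $\cstruct{[A]}{[A]}{\doubleplus}$, with the length function as a differentiable map into $\cstruct{\NN}{\NN}{+}$:

```python
# The change action ([A], [A], ++): changes are lists, applied by
# concatenation; the change set is a monoid under concatenation.

apply_change = lambda xs, dxs: xs + dxs   # the action (+)
compose = lambda dxs, dys: dxs + dys      # monoid operation on changes
zero = []                                 # monoid identity

xs, dxs, dys = [1, 2], [3], [4, 5]
# Monoid action laws: identity and compatibility with composition.
assert apply_change(xs, zero) == xs
assert apply_change(apply_change(xs, dxs), dys) == \
       apply_change(xs, compose(dxs, dys))

# len is differentiable into (N, N, +); its derivative ignores the base point.
dlen = lambda xs, dxs: len(dxs)
assert len(apply_change(xs, dxs)) == len(xs) + dlen(xs, dxs)
```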
\subsection{Complete change actions and minus operators}
Complete change actions are an important class of change actions, because they
have changes between \emph{any} two values in the base set.
\begin{defn}[Complete change actions]
A change action is \emph{complete} if for any $a, b \in A$, there is
a change $\change{a} \in \changes{A}$ such that $a \cplus \change{a} = b$.
\end{defn}
Complete change actions have convenient ``minus operators'' that allow us to
compute the difference between two values.
\begin{defn}[Minus operator]
A \emph{minus operator} is a function $\cminus: A \times A \rightarrow \changes{A}$ such that $a \cplus (b \cminus a) = b$.
\end{defn}
\begin{prop}[Completeness equivalences]
Let $\cstr{A}$ be a change action. Then the following are equivalent:
\begin{itemize}
\item $\cstr{A}$ is complete.
\item The monoid action of $\cplus$ on $A$ is transitive.
\item There is a minus operator on $\cstr{A}$.
\item Any function from any change action into $\cstr{A}$ is differentiable.
\end{itemize}
\end{prop}
This last property is of the utmost importance, since we are often concerned with the differentiability
of functions.
\begin{defn}[Minus derivative]
Given a minus operator $\cminus$, we have a derivative for any function $f$,
defined as
\begin{displaymath}
\derive{f}_\cminus(a, \change{a}) \defeq f(a \cplus \change{a}) \cminus f(a)
\end{displaymath}
\end{defn}
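For instance, $\cstruct{\ZZ}{\ZZ}{+}$ is complete, with ordinary subtraction as a minus operator, and the induced minus derivative is exactly the finite difference. A short Python sketch (ours):

```python
# The minus derivative on the complete change action (Z, Z, +):
# b ominus a = b - a, so every function is differentiable, with the
# finite difference as a derivative.

def derive(f):
    return lambda a, da: f(a + da) - f(a)   # f(a + da) ominus f(a)

cube = lambda x: x ** 3
dcube = derive(cube)

a, da = 2, 5
assert cube(a + da) == cube(a) + dcube(a, da)   # the derivative condition
```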
\subsection{Thin change actions}
\label{sec:thin}
There can be multiple changes representing the difference
between two elements. This is true for many change actions, but the change
actions for which there \emph{is} only one such change are particularly
well-behaved, so it's worth naming them.
\begin{defn}[Thin change actions]
A change action is \emph{thin} if whenever $a \cplus \change{a}
= a \cplus \change{b}$ for some $a \in A$, it is the case that $\change{a} = \change{b}$.
\end{defn}
Many change actions are not thin, for example $\cstruct{\NN}{\ZZ}{+}$ (as $0 + (-1)
= 0 + (-2)$ in $\NN$).
\begin{prop}[Thin equivalences]
Let $\cstr{A}$ be a change action. Then the following are equivalent.
\begin{itemize}
\item $\cstr{A}$ is thin.
\item The monoid action of $\cplus$ on $A$ is free.
\item If $\cstr{A}$ has a minus operator $\cminus$, then $(a \cplus \change{a})
\cminus a = \change{a}$.
\end{itemize}
\end{prop}
Thinness gives us uniqueness of derivatives:
\begin{prop}
Let $\cstr{B}$ be a thin change action, and $f: \cstr{A} \rightarrow \cstr{B}$. Then $f$ has at
most one derivative.
Conversely, if $\textrm{id}: \cstr{B} \rightarrow \cstr{B}$ has exactly one derivative, then
$B$ is thin.
\end{prop}
\section{The category of change actions}
\label{sec:category}
We are arguing that change actions provide a good model for incremental
computation in general. That means we want to be able to easily construct change
actions over the wide variety of datatypes that we actually use in programming.
We can do this by showing that the category of change actions has the usual
useful constructions: products, coproducts, and exponentials. These are the
building blocks from which we create our datatypes, and they will also be useful
for our proofs in \cref{sec:fixpoints}: in particular, the \emph{evaluation map}
$\ev$ that we get from exponentials will turn out to be differentiable, and its
derivative has a deep connection to the incremental evaluation of functions (\cref{prop:incrementalization}).
\begin{defn}[Category of change actions]
We define the category $\cat{CAct}$ of change actions. The objects are
change actions and the morphisms are differentiable functions.
\end{defn}
\subsection{Equivalence with PreOrd}
There is a natural preorder on the base set of a change action, given by reachability
under the action:
\begin{defn}[Reachability order]
$a \reachOrder b$ iff there is a $\change{a} \in \changes{A}$ such that $a \cplus
\change{a} = b$.
\end{defn}
We can characterize many of the properties of change actions in terms of the reachability order,
which suggests a connection between $\cat{CAct}$ and the category of preorders, $\cat{PreOrd}$.
\begin{prop}
A function is differentiable iff it is monotone with respect to the
reachability order.
\end{prop}
\begin{corollary}
Two change actions are isomorphic iff their posets under the reachability
order are isomorphic.
\end{corollary}
\begin{corollary}
Any function from a discrete change action or into a complete change
action is differentiable.
\end{corollary}
Indeed, the correspondence between a change action and its reachability preorder gives rise to
a (full and faithful) functor $\reach : \cat{CAct} \rightarrow \cat{PreOrd}$ that acts as the
identity on morphisms.
Conversely, any preorder $\leq$ on some set $A$ induces a change action
$\cstr{A}_\leq \defeq \cstruct{A}{\leq^\star}{\cplus_\star}$.
The action $\cplus_\star$ is defined as the extension of $\cplus_\leq$ to the free
monoid $\leq^\star$, where $\cplus_\leq$ is defined as:
\[
\begin{aligned}
\cplus_\leq &: (A \times \leq) \rightarrow A&\\
a \cplus_\leq (b, c) &\defeq
\begin{cases}
c&\text{ if $a = b$}\\
a&\text{ otherwise}
\end{cases}
\end{aligned}
\]
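The free-monoid extension can be sketched concretely. In the Python sketch below (ours, purely illustrative) changes are sequences of preorder steps $(b, c)$, applied one at a time; we adopt the convention that a step leaves $a$ fixed when $a$ is not its source, which makes the reachability order coincide with the reflexive-transitive closure of $\leq$:

```python
# A preorder on a finite set, generated by a set of pairs (b, c) with b <= c.
# Changes are sequences of such pairs (the free monoid), applied step by step.

def step(a, bc):
    b, c = bc
    return c if a == b else a   # a step only moves its own source

def apply_changes(a, steps):
    for bc in steps:            # free-monoid action: fold over the sequence
        a = step(a, bc)
    return a

# Generators for 1 <= 2 <= 3.
assert apply_changes(1, [(1, 2), (2, 3)]) == 3   # composition by concatenation
assert apply_changes(2, [(1, 2)]) == 2           # an irrelevant step is a no-op
assert apply_changes(1, []) == 1                 # empty sequence is the identity
```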
The mapping to $\cstr{A}_\leq$ gives rise to a (full and faithful) functor
$\direct : \cat{PreOrd} \rightarrow \cat{CAct}$, again acting as the identity on morphisms.
These two functors are in fact enough to give us an equivalence between the categories
$\cat{CAct}$ and $\cat{PreOrd}$.
\begin{thm}[Equivalence of $\cat{CAct}$, $\cat{PreOrd}$]
The functor $\reach$ from $\cat{CAct}$ to $\cat{PreOrd}$ together with the
functor $\direct$ in the opposite direction form an equivalence of categories.
\end{thm}
\ifproofs
\begin{proof}
See \cref{prf:preordEquivalence}.
\end{proof}
\fi
Since $\cat{PreOrd}$ is topological over $\cat{Set}$ \autocite[][Chapter V]{adamek2004abstract},
(and hence complete and co-complete) and also an exponential ideal of $\cat{Cat}$
(and hence Cartesian closed), this gives us a proof of the existence of limits, colimits, and exponentials in $\cat{CAct}$.
\begin{corollary}
The category $\cat{CAct}$ has all limits, colimits and exponential objects.
\end{corollary}
\subsection{Explicit constructions}
Having shown that $\cat{CAct}$ is equivalent to $\cat{PreOrd}$ it may seem
redundant to give explicit recipes for constructing products, coproducts, and exponentials.
However, we can give constructions that are much nicer than the ones
which we get via $\cat{PreOrd}$, which is important for using them in
practical computation.
\begin{prop}[Products]
Let $\cstr{A} = \cstruct{A}{\changes{A}}{\cplus}$ and $\cstr{B} =
\cstruct{B}{\changes{B}}{\cpluss}$ be change actions.
Then $\cstr{A} \times \cstr{B} \defeq \cstruct{A \times B}{\changes{A} \times
\changes{B}}{\cplus \times \cpluss}$ is their categorical product.
\end{prop}
\ifproofs
\begin{proof}
See \cref{prf:products}.
\end{proof}
\fi
\begin{prop}[Coproducts]
Let $\cstr{A} = \cstruct{A}{\changes{A}}{\cplus}$ and $\cstr{B} =
\cstruct{B}{\changes{B}}{\cpluss}$ be change actions.
Then $\cstr{A} + \cstr{B} \defeq \cstruct{A + B}{\changes{A} \times
\changes{B}}{\cplusvee}$ is their categorical coproduct, with $\cplusvee$ defined as:
\begin{align*}
i_1 a \cplusvee (\change{a}, \change{b}) &\defeq \iota_1 (a \cplus \change{a})\\
i_2 b \cplusvee (\change{a}, \change{b}) &\defeq \iota_2 (b \cplus \change{b})
\end{align*}
\end{prop}
\ifproofs
\begin{proof}
See \cref{prf:coproducts}.
\end{proof}
\fi
\begin{prop}[Exponentials]
\label{prop:exponentials}
Let $\cstr{A}_\cplus$ and $\cstr{B}_\cpluss$ be change actions. We will say a map
$\change{f} : (\cstr{A} \rightarrow \cstr{B}) \rightarrow (\cstr{A} \rightarrow \cstr{B})$
is \emph{pointwise $\reachOrder$-increasing} iff, for every $f : \cstr{A} \rightarrow \cstr{B}$
and $a \in \cstr{A}$, it is the case that $f(a) \reachOrder \change{f}(f)(a)$.
We will denote the set of pointwise $\reachOrder$-increasing functions by $\rightarrow_{\reachOrder}$.
\todompj{Why can't we just say ``increasing with respect to the reachability
order lifted to functions''?}
\todomario{We can, but I thought it might be more clear to write it explicitly - feel free
to change it}
Then $\cstruct{\cstr{A} \rightarrow \cstr{B}}
{(\cstr{A} \rightarrow \cstr{B}) \rightarrow_{\reachOrder} (\cstr{A} \rightarrow \cstr{B})}
{\lambda f . \lambda \change{f} . \change{f}(f)}$
is a change action with base
set $\cstr{A} \rightarrow \cstr{B}$, with monoidal structure given by function composition.
Furthermore, it is the exponential object $\cstr{B}^{\cstr{A}}$ in $\cat{CAct}$.
\end{prop}
\ifproofs
\begin{proof}
First, we note that, if $f : \cstr{A} \rightarrow \cstr{B}$, $\change{f} \in \changes{(\cstr{A} \rightarrow \cstr{B})}$
and $a \in \cstr{A}$, since $\change{f}$ is pointwise $\reachOrder$-increasing, $\change{f}(f)(a)$ is reachable
from $f(a)$, and thus there is some $\change{b} \in \changes{B}$ satisfying $f(a) \cplus \change{b} = \change{f}(f)(a)$.
We will often abuse notation and use $\change{f}_\Delta(f)(a)$ to refer to such a change (although, in general, the Axiom
of Choice is necessary to find such a $\change{b}$).
\todomario{this is terrible and I hate it}
\todompj{Agreed}
Furthermore, if $f, g : \cstr{A} \rightarrow \cstr{B}$ and for all $a$ we have $f(a) \reachOrder g(a)$, then there exists
some function change $\change{gf}$ sending $f$ to $g$ (it suffices to pick the change that maps $f$ to $g$ and every other
differentiable map to itself).
We are now ready to proceed with the proof. The evaluation map is, as expected, identical to its definition in $\cat{Set}$:
\begin{align*}
&\ev : (\cstr{A} \rightarrow \cstr{B}) \times \cstr{A} \rightarrow \cstr{B}\\
&\ev((f, a)) \defeq f(a)
\end{align*}
We need first to show that it is differentiable: this is straightforward to check. Indeed, suppose
$a \in A, f : \cstr{A} \rightarrow \cstr{B}, \change{a} \in \changes{A}, \change{f} \in \changes{(\cstr{A} \rightarrow \cstr{B})}$.
Then, for $f'$ an arbitrary derivative of $f$, let:
\begin{align*}
&\derive{\ev} : (((\cstr{A} \rightarrow \cstr{B}) \times \cstr{A}) \times (\changes{(\cstr{A} \rightarrow \cstr{B})} \times \changes{A})) \rightarrow \changes{B}\\
&\derive{\ev}((f, a), (\change{f}, \change{a})) \defeq \derive{f}(a, \change{a}) \cdot \change{f}_\Delta(f)(a \cplus \change{a})
\end{align*}
Then $\derive{\ev}$ is a derivative of the evaluation map $\ev$:
\begin{itemize}
\item[ ]$\ev((f, a) \cplus (\change{f}, \change{a}))$
\item[=]\{ definition of $\ev$\}\\
$\ev(\change{f}(f), a \cplus \change{a})$
\item[=]\{ definition of $\changes{\cstr{A} \rightarrow \cstr{B}}$\}\\
$\change{f}(f)(a \cplus \change{a})$
\item[=]\{ $\change{f}$ is pointwise $\reachOrder$-increasing\}\\
$f(a \cplus \change{a}) \cplus \change{f}_\Delta(f)(a \cplus \change{a})$
\item[=]\{ differentiability of $f$\}\\
$f(a) \cplus [ f'(a, \change{a})\cdot \change{f}_\Delta(f)(a \cplus \change{a}) ]$
\item[=]\{ definition of $\ev$\}\\
$\ev(f, a) \cplus \ev'((f, a), (\change{f}, \change{a}))$
\end{itemize}
It remains to prove that a function $f : \cstr{A} \times \cstr{B} \rightarrow \cstr{C}$ is differentiable iff its curried
version $\curry{f} : \cstr{A} \rightarrow \cstr{C}^{\cstr{B}}$ is.
First, suppose $f$ is differentiable, and let $\change{a} \in \changes{A}$. Then
\begin{itemize}
\item[ ]$\curry{f}(a \cplus \change{a})$
\item[=]\{ definition of $\curry{f}$\}\\
$\lambda b . f(a \cplus \change{a}, b)$
\item[=]\{ differentiability of $f$\}\\
$\lambda b . f(a, b) \cplus \derive{f}((a, b), (\change{a}, \mzero))$
\end{itemize}
We note that $\curry{f}(a \cplus \change{a})$ is differentiable, and $\curry{f}(a)(b) \reachOrder \curry{f}(a \cplus \change{a})(b)$,
hence there is some function change $\change{f}_{a, \change{a}}$ mapping $\curry{f}(a)$ to $\curry{f}(a \cplus \change{a})$. The map
$\lambda a . \lambda \change{a} . \change{f}_{a, \change{a}}$ is, then, a derivative of $\curry{f}$.
Conversely, suppose $\curry{f} : \cstr{A} \rightarrow \cstr{C}^{\cstr{B}}$ is differentiable, let $(a, b) \in \cstr{A} \times \cstr{B}$
and let $(\change{a}, \change{b}) \in \changes{A} \times \changes{B}$. Then:
\begin{itemize}
\item[ ]$f((a, b) \cplus (\change{a}, \change{b}))$
\item[=]\{ definition of $\cplus$ on products\}\\
$f((a \cplus \change{a}, b \cplus \change{b}))$
\item[=]\{ definition of $\curry{f}$\}\\
$\curry{f}(a \cplus \change{a})(b \cplus \change{b})$
\item[=]\{ differentiability of $\curry{f}$\}\\
$(\curry{f}(a) \cplus \derive{(\curry{f})}(a, \change{a}))(b \cplus \change{b})$
\item[=]\{ $\derive{(\curry{f})}(a, \change{a})$ is pointwise $\reachOrder$-increasing \}\\
$\curry{f}(a)(b \cplus \change{b}) \cplus \derive{(\curry{f})}(a, \change{a})_\Delta(\curry{f}(a), b \cplus \change{b})$
\item[=]\{ differentiability of $\curry{f}(a)$\}\\
$\curry{f}(a)(b) \cplus [ \derive{\curry{f}(a)}(b, \change{b}) \cdot \derive{(\curry{f})}(a, \change{a})_\Delta(\curry{f}(a), b \cplus \change{b}) ]$
\item[=]\{ definition of $\curry{f}$\}\\
$f(a, b) \cplus [ \derive{\curry{f}(a)}(b, \change{b}) \cdot \derive{(\curry{f})}(a, \change{a})_\Delta(\curry{f}(a), b \cplus \change{b}) ]$
\end{itemize}
\end{proof}
\fi
The resulting change action is rather unwieldy, and it is hard to perform actual computation with it. Fortunately, in many
cases it is equivalent to a much simpler one.
\begin{lemma}[Pointwise function changes]
Let $\cstr{B}$ be a change action. If $\cplus : B \times \changes{B} \rightarrow B$ is differentiable
with respect to its first argument, then the change action
$\cstruct{\cstr{A} \rightarrow \cstr{B}}{\cstr{A} \rightarrow \cstr{\changes{B}}}{\cplus_\rightarrow}$, with the action
$\cplus_\rightarrow$ and the monoidal structure defined pointwise, is the exponential
object $\cstr{B}^{\cstr{A}}$.
\end{lemma}
\ifproofs
\begin{proof}
We prove first that $\cplus_\rightarrow$ is indeed a valid action, i.e. given functions $f : \cstr{A} \rightarrow \cstr{B}$
and $\change{f} : A \rightarrow \changes{B}$, $f \cplus_\rightarrow \change{f}$ is differentiable.
Letting $a \in A, \change{a} \in \changes{A}$, we have:
\begin{itemize}
\item[ ]$(f \cplus_\rightarrow \change{f})(a \cplus \change{a})$
\item[=]$f (a \cplus \change{a}) \cplus \change{f}(a \cplus \change{a})$
\item[=]$(f (a) \cplus \derive{f}(a, \change{a})) \cplus \change{f}(a \cplus \change{a})$
\item[=]$(f (a) \cplus \change{f}(a \cplus \change{a})) \cplus \partial_1\cplus(f(a), \change{f}(a \cplus \change{a}))(\derive{f}(a, \change{a}))$
\item[=]$(f (a) \cplus \change{f}(a)) \cplus (\derive{\change{f}}(a, \change{a}) \cdot \partial_1\cplus(f(a), \change{f}(a \cplus \change{a}))(\derive{f}(a, \change{a})))$
\end{itemize}
Thus $\lambda a . \lambda \change{a} .\derive{\change{f}}(a, \change{a}) \cdot \partial_1\cplus(f(a), \change{f}(a \cplus \change{a}))(\derive{f}(a, \change{a}))$
is a derivative for $f \cplus_\rightarrow \change{f}$.
In order to show that it is isomorphic to the previous exponential structure, we prove that the induced
reachability order is the same.
\todomario{Nevermind, this is in fact false because the changes need to be differentiable but they may not! Quite a problem.}
\todomario{The question boils down to whether there is always a differentiable derivative, which is tricky.}
\end{proof}
\fi
Derivatives of the evaluation map correspond to incremental evaluation of functions:
\begin{prop}[Incrementalization]
\label{prop:incrementalization}
Let $f: \cstr{A} \rightarrow \cstr{B}$, $a \in A$, $\change{a} \in
\changes{A}$, $\change{f} \in \changes{(A \rightarrow B)}$, and let
$\derive{\ev}$ be a derivative of the evaluation map.
Then
\begin{displaymath}
(f \cplus \change{f})(a \cplus \change{a}) = f(a) \cplus \derive{\ev}((f, a), (\change{f}, \change{a}))
\end{displaymath}
\end{prop}
Conveniently, our proof of \cref{prop:exponentials} gives us actual derivatives for the evaluation map:
\todomario{Redo this!}
\begin{prop}[Derivatives of the evaluation map]
\label{prop:evDerivatives}
Let $f: \cstr{A} \rightarrow \cstr{B}$, $a \in A$, $\change{a} \in
\changes{A}$, $\change{f} \in \changes{(A \rightarrow B)}$.
Then, if $f$ is differentiable,
\begin{displaymath}
\derive{\ev}_1((f, a), (\change{f}, \change{a})) \defeq \derive{f}(a, \change{a}) \splus \change{f}(a \cplus \change{a})
\end{displaymath}
defines a derivative for the evaluation map.
Alternatively, if $f \cplus \change{f}$ is differentiable,
\begin{displaymath}
\derive{\ev}_2((f, a), (\change{f}, \change{a})) \defeq \change{f}(a) \splus \derive{(f \cplus \change{f})}(a, \change{a})
\end{displaymath}
defines a derivative for the evaluation map.
\end{prop}
\ifproofs
\begin{proof}
The first is shown in the proof of \cref{prop:exponentials}, the second is
similar but begins by taking the derivative of $f \cplus \change{f}$.
\end{proof}
\fi
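To make the incrementalization property concrete, the following sketch checks it in the simplest possible setting: the integers acting on themselves, where both the action $\cplus$ and the change monoid $\splus$ are ordinary addition. The helper names (\texttt{cplus}, \texttt{splus}, \texttt{d\_ev\_1}) are ours, introduced only for this illustration.

```python
# A sketch of incrementalization in the additive change action on the
# integers: both the action (cplus) and the monoid (splus) are addition.

def cplus(a, da):
    """a + da: apply a change to a point."""
    return a + da

def splus(da, db):
    """Compose two changes."""
    return da + db

def derive(f):
    """A derivative of f: satisfies f(a + da) == f(a) + f'(a, da)."""
    return lambda a, da: f(cplus(a, da)) - f(a)

def d_ev_1(f, a, df, da):
    """The first derivative of the evaluation map:
    d_ev((f, a), (df, da)) = f'(a, da) (+) df(a + da)."""
    return splus(derive(f)(a, da), df(cplus(a, da)))

# Incrementalization: (f + df)(a + da) == f(a) + d_ev((f, a), (df, da)),
# where a function change df acts pointwise: (f + df)(x) = f(x) + df(x).
f = lambda x: x * x
df = lambda x: 3 * x   # a function change: returns a change at each point
for a in range(-3, 4):
    for da in range(-3, 4):
        lhs = cplus(f(cplus(a, da)), df(cplus(a, da)))
        rhs = cplus(f(a), d_ev_1(f, a, df, da))
        assert lhs == rhs
```

Because $\cplus$ and $\splus$ coincide with addition here, the identity holds for every $f$, not only differentiable ones; the general statement needs the hypotheses of the proposition.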
\subsection{The category of thin change actions}
\begin{prop}
The product $\cstr{A} \times \cstr{B}$ of change actions $\cstr{A}$ and $\cstr{B}$ is thin if and only if both $\cstr{A}$ and $\cstr{B}$ are.
Furthermore, whenever $\cstr{A}$ and $\cstr{B}$ are complete, then so is $\cstr{A} \times \cstr{B}$.
The exponential object $\cstr{A} \Rightarrow \cstr{B}$ of change actions $\cstr{A}$ and $\cstr{B}$ is thin if $\cstr{B}$ is.
Furthermore, whenever $\cstr{B}$ is complete, then so is $\cstr{A} \Rightarrow \cstr{B}$.
\end{prop}
Thus, the category of thin change actions is Cartesian closed as well.
\section{Change actions over other structures}
\label{sec:moreStructures}
We are aiming to work over Boolean algebras with fixpoints, which is where we
will interpret Datalog. We will work up to that gradually, adding power as we
progress from posets, to directed-complete partial orders, and
finally to Boolean algebras.
\subsection{Posets}
If the base set of a change action is a poset, then this gives us a natural
order on the change set.
\begin{defn}[Change order]
For $\change{a}, \change{b} \in \changes{A}$, $\change{a} \changeOrder \change{b}$ iff for all $a \in A$ it is the case that $a \cplus \change{a} \leq a \cplus \change{b}$.
\end{defn}
If the change action is thin, then the order is additionally antisymmetric, and
hence a full partial order.
Having a monotone order on the changes is very useful.
\begin{thm}[Sandwich lemma]
\label{thm:sandwich}
Let $\supderive{f}$ and $\subderive{f}$ be derivatives for $f$, let $\changeOrder$ be a preorder on $\changes{B}$ such that $\cplus$ is monotone with
respect to it, and $g$ be such that
\begin{displaymath}
\supderive{f} \changeOrder g \changeOrder \subderive{f}
\end{displaymath}
Then $g$ is a derivative for $f$.
\end{thm}
If we have minimal and maximal derivatives then this gives us a full
characterization of all the derivatives for a function.
\begin{thm}[Characterization of derivatives]
\label{thm:derivativeCharacterization}
Let $\cstr{A}$ and $\cstr{B}$ be change actions, let
$f: \cstr{A} \rightarrow \cstr{B}$ be a function, and let $\subderiveM{f}$ and
$\supderiveM{f}$ be minimal and maximal derivatives of $f$, respectively.
Then the derivatives of $f$ are precisely
the functions $\derive{f}$ such that
\begin{displaymath}
\subderiveM{f} \changeOrder \derive{f} \changeOrder \supderiveM{f}
\end{displaymath}
\end{thm}
\ifproofs
\begin{proof}
Follows easily from \cref{thm:sandwich} and minimality/maximality.
\end{proof}
\fi
This theorem gives us leeway when trying to pick a derivative: we can compute the
bounds, and they tell us how much ``wiggle room'' we have. This is helpful
because some of the intermediate functions may be much easier to compute than
others, as we will see in \cref{sec:datalogDifferentiability}.
One way to get minimal and maximal derivatives is from minimal and maximal minus
operators (if we are in a complete change action).
\begin{defn}[Minus ordering]
$\cminus_1 \minusOrder \cminus_2$ iff for all $a,b \in A$, $a \cminus_1 b
\changeOrder a \cminus_2 b$.
\end{defn}
This implies an ordering on the corresponding derivatives: if $\cminus_1 \minusOrder \cminus_2$ then
$\derive{f}_{\cminus_1} \changeOrder \derive{f}_{\cminus_2}$,
which gives us a correspondence between minimal (maximal) minus operators and minimal (maximal) derivatives.
\begin{prop}
\label{prop:maximalMinusDerivatives}
If $\cminus$ is a minimal (maximal) minus operator with respect to
$\minusOrder$, then $\derive{f}_\cminus$ is a minimal (maximal) derivative.
\end{prop}
\subsection{Directed-complete partial orders}
Directed-complete partial orders give us the well known notion of
\emph{Scott-continuity}. We define \emph{continuous} change actions,
which are well-behaved with respect to continuity.
\begin{defn}[Continuous change actions]
A change action $\cstr{A}$ is \emph{continuous} if
\begin{itemize}
\item $A$ and $\changes{A}$ are dcpos.
\item $\cplus$ and $\splus$ are Scott-continuous in both arguments.
\end{itemize}
\end{defn}
\begin{corollary}
\label{cor:bottomPlusBottom}
Let $\cstr{A}$ be a continuous change action. Then $\bot_A \cplus
\bot_{\changes{A}} = \bot_A$.
\end{corollary}
\ifproofs
\begin{proof}
$\bot_{\changes{A}} \leq \mzero$, therefore
\begin{displaymath}
\bot_A \leq \bot_A \cplus \bot_{\changes{A}} \leq \bot_A \cplus \mzero = \bot_A
\end{displaymath}
\end{proof}
\fi
We also have the following lemma (which is just a re-statement of a well-known
property of Scott-continuous functions, see e.g. \cite[Lemma~3.2.6]{abramsky1994domain}):
\begin{prop}[Distributivity of limits across arguments]
\label{prop:distributivityLimit}
A function $f : A \times B \rightarrow C$ is continuous iff it is continuous in each variable separately.
\end{prop}
A direct corollary of this property is the following result, reminiscent of a well-known theorem of calculus:
\begin{corollary}[Continuity of differentiation]
\label{cor:diffContinuous}
Let $\cstr{A}$, $\cstr{B}$ be change actions, with $\cstr{B}$ continuous and let $\{f_i\}$ and $\{\derive{f_i}\}$ be
$I$-indexed directed families of functions in $A \rightarrow B$ and $A \times \changes{A} \rightarrow \changes{B}$.
If $\derive{f_i}$ is a derivative of $f_i$ for every $i \in I$, then $\sqcup_{i \in I} \{ \derive{f_i} \}$ is
a derivative of $\sqcup_{i \in I} \{ f_i \}$.
\end{corollary}
\ifproofs
\begin{proof}
It suffices to apply $\cplus$ and \cref{prop:distributivityLimit} to the directed families $\{ f_i(a) \}$ and
$\{ \derive{f_i}(a, \change{a}) \}$.
\end{proof}
\fi
We also state the following additional fixpoint lemma. This is a specialization of
Beki\'c's theorem \autocite[][section 10.1]{winskel1993formal}, but it has a straightforward direct proof.
\begin{prop}[Factoring of fixpoints]
\label{prop:factoringFixpoints}
Let $A$ and $B$ be dcpos, $f : A \rightarrow A$ and $g: A \times B \rightarrow B$ be continuous, and let
\begin{displaymath}
h(a, b) \defeq (f(a), g(a, b))
\end{displaymath}
Then
\begin{displaymath}
\lfp(h) = (\lfp(f), \lfp(g(\lfp(f))))
\end{displaymath}
\end{prop}
\ifproofs
\begin{proof}
See \cref{prf:factoringFixpoints}.
\end{proof}
\fi
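The factoring lemma is easy to check concretely on a finite lattice, where every monotone map reaches its least fixpoint after finitely many Kleene iterations. The following sketch (all names are ours) does so on the powerset of a small set:

```python
# A finite sanity check of the factoring-of-fixpoints lemma on the
# powerset lattice of {0, 1, 2, 3}, ordered by inclusion.

def lfp(f, bottom=frozenset()):
    """Least fixpoint of a monotone f, by Kleene iteration from bottom."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

f = lambda a: a | {1}                        # f : A -> A
g = lambda a, b: b | a | {2}                 # g : A x B -> B
h = lambda ab: (f(ab[0]), g(ab[0], ab[1]))   # h(a, b) = (f(a), g(a, b))

# The lemma: lfp(h) = (lfp(f), lfp(g(lfp(f), -)))
a_star = lfp(f)
expected = (a_star, lfp(lambda b: g(a_star, b)))
assert lfp(h, (frozenset(), frozenset())) == expected
```

The point of the lemma is visible in the code: the coupled fixpoint of $h$ can be computed by first solving for $a$ alone and then solving for $b$ with $a$ held fixed.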
\subsection{Boolean algebras}
Boolean algebras have a complete, continuous change structure.
\begin{prop}
Let $L$ be a Boolean algebra. Define
\begin{displaymath}
\cstr{L}_\superpose \defeq \cstruct{L}{L \times L}{\twist}
\end{displaymath}
where
\begin{displaymath}
a \twist (p, q) \defeq (a \vee p) \wedge \neg q
\end{displaymath}
and the monoid operator is
\begin{displaymath}
(p, q) \splus (r, s) \defeq ((p \wedge \neg q) \vee r, (q \wedge \neg r) \vee s)
\end{displaymath}
Then $\cstr{L}_\superpose$ is a complete, continuous change action on $L$.
\end{prop}
\ifproofs
\begin{proof}
See \cref{prf:lsuperpose}.
\end{proof}
\fi
We can think of $\cstr{L}_\superpose$ as tracking changes as pairs of ``upwards'' and
``downwards'' changes, where the monoid action simply applies both.\footnote{We
can, in fact, make it precise that $\cstr{L}_\superpose$ is an ``upwards''
and ``downwards'' change action glued together, but here it is simpler to
just go directly to the useful change action.}
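For the concrete Boolean algebra of subsets of a finite universe, the change action $\cstr{L}_\superpose$ can be sketched directly: a change is a pair $(p, q)$ of insertions and deletions, applied insertions-first. The operator names below follow the paper; the encoding as Python frozensets is ours.

```python
# The change action L_joinmeet on the powerset Boolean algebra of a
# small universe: changes are (insertions, deletions) pairs.
from itertools import chain, combinations

U = frozenset(range(4))   # the universe; complement of x is U - x

def twist(a, pq):
    """a |>< (p, q) = (a v p) ^ ~q: insert p, then delete q."""
    p, q = pq
    return (a | p) - q

def splus(pq, rs):
    """(p, q) (+) (r, s) = ((p ^ ~q) v r, (q ^ ~r) v s)."""
    (p, q), (r, s) = pq, rs
    return ((p - q) | r, (q - r) | s)

# Check the monoid-action law a |>< (d1 (+) d2) = (a |>< d1) |>< d2
# over a sample of subsets of the universe.
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, n) for n in range(5))]
changes = [(s, t) for s in subsets[:4] for t in subsets[:4]]
for a in subsets[:6]:
    for d1 in changes:
        for d2 in changes:
            assert twist(a, splus(d1, d2)) == twist(twist(a, d1), d2)
```

Note that the composite change first trims $p$ by $q$ and $q$ by $r$; this bookkeeping is exactly what makes $\splus$ compatible with the insert-then-delete order of application.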
Boolean algebras also have concrete definitions for maximal and minimal minus
operators.
\begin{prop}
Let $L$ be a Boolean algebra. Then
\begin{align*}
a \cminus_\bot b &= (a \wedge \neg b, b \wedge \neg a)\\
a \cminus_\top b &= (a, b \wedge \neg a)
\end{align*}
define minimal and maximal minus operators.
\end{prop}
In particular, \cref{thm:derivativeCharacterization} along with
\cref{prop:maximalMinusDerivatives} give us bounds for
all the derivatives on Boolean algebras:
\begin{corollary}
\label{cor:booleanCharacterization}
Let $L$ be a Boolean algebra with the $\cstr{L}_\superpose$ change action, $A$ be
a change action, and $f: A \rightarrow
L$ a function. Then the derivatives of $f$ are precisely those functions
$\derive{f}$ such that
\begin{displaymath}
f(a \cplus \change{a}) \cminus_\bot f(a)
\changeOrder
\derive{f}(a, \change{a})
\changeOrder
f(a \cplus \change{a}) \cminus_\top f(a)
\end{displaymath}
\end{corollary}
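The minus-induced derivatives of \cref{prop:maximalMinusDerivatives} can be exercised concretely. The sketch below (set encoding ours) uses the maximal minus $a \cminus_\top b = (a, b \wedge \neg a)$ to build $\derive{f}_\cminus(a, \change{a}) \defeq f(a \cplus \change{a}) \cminus f(a)$ and checks the derivative condition exhaustively over a small powerset algebra:

```python
# The derivative induced by the maximal minus operator on the powerset
# Boolean algebra of {0, 1, 2}, checked against the derivative condition.
from itertools import chain, combinations

U = frozenset(range(3))
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(U, n) for n in range(4))]

def twist(a, pq):
    p, q = pq
    return (a | p) - q

def minus_top(a, b):
    """a (-)_top b = (a, b ^ ~a): a change taking b to a."""
    return (a, b - a)

def induced_derivative(f):
    """f'(a, da) := f(a + da) (-) f(a), using the maximal minus."""
    return lambda a, da: minus_top(f(twist(a, da)), f(a))

f = lambda x: x & {0, 1}          # an intersection query on sets
df = induced_derivative(f)
for a in subsets:
    for p in subsets:
        for q in subsets:
            da = (p, q)
            # derivative condition: f(a) + f'(a, da) = f(a + da)
            assert twist(f(a), df(a, da)) == f(twist(a, da))
```

Any other minus operator would do; per the corollary, the changes it produces are squeezed between the $\cminus_\bot$ and $\cminus_\top$ bounds in the change order.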
This makes \cref{thm:derivativeCharacterization} actually usable in practice, as
we have concrete definitions for our bounds (which, again, we will make use of in \cref{sec:datalogDifferentiability}).
\section{Fixpoints}
\label{sec:fixpoints}
In directed-complete partial orders we can define a least fixpoint operator $\lfp$ in terms of the
iteration function $\iter$:
\begin{align*}
&\lfp : (A \rightarrow A) \rightarrow A\\
&\lfp \defeq \sqcup_{n \in \NN} \{ \iter_n \}\\
&\iter : (A \rightarrow A) \times \NN \rightarrow A\\
&\iter(f, n) \defeq f^n(\bot)
\end{align*}
The iteration function is going to be the basis for everything in this section:
we can differentiate it with respect to $n$, and this will give us a way to get
to the next iteration incrementally; and we can differentiate it with respect to
$f$, and this will give us a way to get from iterating $f$ to iterating $f
\cplus \change{f}$.\footnote{The sharp-eyed reader may have noticed that we
could also differentiate $\iter$ with respect to the base point (which we have
given as just $\bot$).}
To avoid confusion, we shall write $\iter_f$ when we are holding $f$ constant,
and $\iter_n$ when we are holding $n$ constant.
\subsection{Incremental computation of fixpoints}
The following theorems provide a
generalization of semi-naive evaluation to any differentiable function over a
continuous change action. We will want to apply this to the semantics of
Datalog, for which we will need to differentiate its semantics. We will see the
details of how to do this in \cref{sec:datalogDifferentiability}.
Since we are trying to incrementalize the iterative step, we start by taking the
derivative of $\iter$ with respect to $n$.
\begin{prop}[Derivative of the iteration map with respect to $n$]
Let $\cstr{A}$ be a complete change structure. Then $\iter_f$ is differentiable, and a derivative is given by:
\begin{align*}
&\derive{\iter_f}: \NN \times \changes{\NN} \rightarrow \changes{A}\\
&\derive{\iter_f}(0, m) \defeq \iter_f(m) \cminus \bot\\
&\derive{\iter_f}(n+1, m) \defeq \derive{f}(\iter_f(n), \derive{\iter_f}(n, m))
\end{align*}
\end{prop}
\ifproofs
\begin{proof}
See \cref{prf:iterDerivativesN}.
\end{proof}
\fi
We can then compute $\derive{\iter_f}(n, 1)$ along with $\iter_f(n)$ via mutual recursion.
We want to do this by computing a fixpoint, so we can rewrite it as a recurrence
relation:
\begin{align*}
&\nextiter_f : A \times \changes{A} \rightarrow A \times \changes{A}\\
&\nextiter_f (\bot, \bot) \defeq (\bot, f(\bot) \cminus \bot)\\
&\nextiter_f (a, \change{a}) \defeq (a \cplus \change{a}, \derive{f}(a, \change{a}))
\end{align*}
This has the property that
\begin{align*}
&\nextiter_f^n (\bot, \bot) = (\iter_f(n), \derive{\iter_f}(n, 1))
\end{align*}
\begin{thm}[Incremental computation of least fixpoints]
\label{thm:diffFP}
Let $\cstr{A}$ be a continuous change action, $f: \cstr{A} \rightarrow
\cstr{A}$ be continuous and differentiable.
Then $\lfp(f) = \pi_1(\sqcup_{n \in \NN}(\nextiter_f^n(\bot, \bot)))$.
\end{thm}
\ifproofs
\begin{proof}
See \cref{prf:fixpointIter}.
\end{proof}
\fi
This gives us a way to compute our fixpoint incrementally, by adding successive
changes to our accumulator until we reach the fixpoint.
Note that we have \emph{not} taken the fixpoint of $\nextiter_f$, since it is
not continuous. So while this allows us to compute fixpoints that converge in
finitely many steps, it does not let us handle the infinite case.
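The classic instance of this theorem is semi-naive evaluation of transitive closure. The sketch below uses the monotone change action on sets of edges in which both $\cplus$ and $\splus$ are union, so the rule $f(X) = E \cup (X \mathbin{;} E)$ has derivative $\derive{f}(X, \change{X}) = \change{X} \mathbin{;} E$, and each step of $\nextiter_f$ only re-joins the newly derived edges. All names are ours.

```python
# Semi-naive transitive closure as an instance of incremental fixpoint
# computation, using the union change action on edge sets.

def compose(r, s):
    """Relational composition r ; s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def seminaive_tc(E):
    E = frozenset(E)
    f = lambda X: E | compose(X, E)
    df = lambda X, dX: compose(dX, E)      # a derivative of f
    # nextiter_f(bot, bot) = (bot, f(bot) minus bot); here f({}) minus {} = E
    a, da = frozenset(), E
    while not da <= a:                     # stop once the change adds nothing
        # nextiter_f(a, da) = (a union da, f'(a, da))
        a, da = a | da, frozenset(df(a, da))
    return a

# Compare against naive iteration of f itself.
E = {(1, 2), (2, 3), (3, 4)}
naive = frozenset()
step = lambda X: frozenset(E) | frozenset(compose(X, E))
while step(naive) != naive:
    naive = step(naive)
assert seminaive_tc(E) == naive
```

The payoff is the argument to \texttt{compose} in the loop: only the freshly derived edges \texttt{da} are joined against \texttt{E}, rather than the whole accumulator, which is exactly the semi-naive optimization.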
\subsection{Derivatives of fixpoints}
\label{sec:fixpointDerivatives}
The previous section has shown us how to use derivatives to compute fixpoints
more efficiently, but we might also want to take the derivative of the fixpoint
operator itself. The typical case for this will be where we have some fixpoint
\begin{displaymath}
\fixpoint (\lambda X . F(E, X))
\end{displaymath}
and we now wish to apply a change to $E$ and compute
\begin{displaymath}
\fixpoint (\lambda X . F(E \cplus \change{E}, X))
\end{displaymath}
This amounts to applying a change to the \emph{function} whose fixpoint we are taking.
In Datalog this would allow us to update a recursively defined relation given an
update to one of its non-recursive dependencies, or to the base extensional database.
For example, we might want to take the transitive closure relation
and update it by changing the edge relation $e$.
However, this requires us to have a derivative for the fixpoint operator
$\fixpoint$ with respect to the function which it takes the fixpoint of.
We can get quite close in the general case, but to get a precise
derivative we'll need to do a bit more work.
\begin{defn}
Let $\cstr{A}$ be a change structure, $\fixpoint_A$ a fixpoint operator, and
$\derive{\ev}$ be a derivative of the evaluation map.
Then we define
\begin{align*}
&\adjust : (A \rightarrow A) \times \changes{(A \rightarrow A)} \rightarrow (\changes{A} \rightarrow \changes{A})\\
&\adjust(f, \change{f}) \defeq \lambda\ \change{a} . \derive{\ev}((f,
\fixpoint_A(f)), (\change{f}, \change{a}))\\
&\derive{\fixpoint_A} : (A \rightarrow A) \times \changes{(A \rightarrow A)} \rightarrow \changes{A}\\
&\derive{\fixpoint_A}(f, \change{f}) \defeq \fixpoint_{\changes{A}}(\adjust(f, \change{f}))
\end{align*}
\end{defn}
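The definition of $\adjust$ can be sketched on the transitive-closure example, again in the monotone change action where $\cplus$ and $\splus$ are union and where we take $\derive{\ev}((f, a), (\change{f}, \change{a}))$ to be $\derive{f}(a, \change{a}) \cup \change{f}(a \cup \change{a})$. We update the edge relation $E$ by $\change{E}$ without recomputing the closure from scratch; the check that the pseudo-derivative is exact holds for these particular inputs, not in general. All names are ours.

```python
# Differentiating the fixpoint operator itself: updating a transitive
# closure after a change dE to the edge relation, via lfp(adjust).

def compose(r, s):
    return frozenset((x, z) for (x, y) in r for (y2, z) in s if y == y2)

def lfp(f, bottom=frozenset()):
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

def d_fix(E, dE):
    """Returns (old fixpoint w, lfp(adjust(f, df))) for f(X) = E u (X ; E)."""
    E, dE = frozenset(E), frozenset(dE)
    w = lfp(lambda X: E | compose(X, E))      # the old fixpoint
    df = lambda X: dE | compose(X, dE)        # the function change E -> E u dE
    dfprime = lambda X, dX: compose(dX, E)    # a derivative of f
    # adjust(f, df)(da) = d_ev((f, w), (df, da)) = f'(w, da) u df(w u da)
    adjust = lambda da: dfprime(w, da) | df(w | da)
    return w, lfp(adjust)

E, dE = {(1, 2)}, {(2, 3)}
w, dw = d_fix(E, dE)
E2 = frozenset(E) | frozenset(dE)
new_fix = lfp(lambda X: E2 | compose(X, E2))
assert w | dw == new_fix     # the pseudo-derivative is exact on this input
```

The fixpoint of \texttt{adjust} is computed over \emph{changes}: each round propagates the newly derived edges through the old rule and re-applies the rule change, which is much cheaper than rerunning the whole closure when $\change{E}$ is small.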
\begin{thm}[Pseudo-derivatives of fixpoints]
\label{thm:fixpointDiff}
Let
\begin{itemize}
\item $\cstr{A}$ be a change action
\item $\fixpoint : (\cstr{A} \rightarrow \cstr{A}) \rightarrow \cstr{A}$ be a fixpoint operator
\item $f: \cstr{A} \rightarrow \cstr{A}$ be a differentiable function
\item $\change{f} \in \changes{(A \rightarrow A)}$ be a function change
\item $\derive{\ev}$ be a derivative of the evaluation map
\end{itemize}
Then a change $\change{w} \in \changes{A}$ satisfies
the equation:
\begin{equation}\label{eqn:fixcondition}