# (PART\*) PART 2: Spatial Analysis {-}
# Point Pattern Analysis {#chp11_0}
## Centrography
A very basic form of point pattern analysis involves summary statistics such as the **mean center**, **standard distance** and **standard deviational ellipse**.
```{r echo=FALSE}
knitr::include_graphics("img/centrography.svg")
```
These point pattern analysis techniques were popular before computers were ubiquitous since the hand calculations involved are not too onerous, but such summary statistics are overly concise and hide much of the valuable information in the observed pattern. More powerful methods can be used to explore point patterns. These methods fall into two groups: **density based** approaches and **distance based** approaches.
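As a concrete illustration, the mean center and standard distance can be computed from a set of coordinates with a few lines of base R. This is a minimal sketch; the `x` and `y` vectors are hypothetical coordinates, not objects defined elsewhere in these notes.

```r
# Hypothetical point coordinates
x <- c(2, 3, 5, 6, 8)
y <- c(1, 4, 3, 7, 5)

# Mean center: the average of the x and y coordinates
mc <- c(mean(x), mean(y))

# Standard distance: square root of the average squared
# distance between each point and the mean center
sd_dist <- sqrt(sum((x - mc[1])^2 + (y - mc[2])^2) / length(x))
```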
## Density based analysis
Density based techniques characterize the pattern in terms of its distribution *vis-a-vis* the study area--a **first-order** property of the pattern.
```{block, type="note"}
A *first order* property of a pattern concerns itself with the variation of the observations' density across a study area. For example, the distribution of oaks will vary across a landscape based on underlying soil characteristics (resulting in areas having dense clusters of oaks and other areas not).
```
<br>
In these lecture notes, we'll make a distinction between the **intensity** of a spatial process and the observed **density** of a pattern under study. A point pattern can be thought of as a "realization" of an underlying process whose intensity $\lambda$ is estimated from the observed point pattern's density (which is sometimes denoted as $\widehat{\lambda}$ where the caret $\verb!^!$ is referring to the fact that the observed density is an *estimate* of the underlying process' intensity). Density measurements can be broken down into two categories: **global** and **local**.
### Global density
A basic measure of a pattern's density $\widehat{\lambda}$ is its overall, or **global**, density. This is simply the ratio of observed number of points, $n$, to the study region's surface area, $a$, or:
$$
\begin{equation}
\widehat{\lambda} = \frac{n}{a}
\label{eq:global-density}
\end{equation}
$$
```{r f11-gobal-dens, fig.cap="An example of a point pattern where n = 31 and the study area (a square 10 units on a side) covers 100 square units. The point density is thus 31/100 = 0.31 points per unit area.", fig.height=2.5, fig.width=2.5, echo=FALSE}
library(spatstat)
library(ggplot2)
cell <- 10
win <- owin(xrange=c(0,cell),yrange=c(0,cell))
set.seed(3452)
P <- rMatClust(0.5, 1, 0.5,win=win)
ggplot(aes(x = x, y = y), data = as.data.frame(P)) + theme(legend.position = "none") +
geom_point(data = as.data.frame(P), mapping=aes(x=x, y=y), size=1.5, shape=16, colour="grey20") +
coord_equal()+ xlim(0,10) + ylim(0,10) + labs(x="",y="") + theme(axis.text=element_text(size=8))
```
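In `spatstat`, the global density of a point pattern can be computed directly. The following is a sketch, assuming `P` is a `ppp` object such as the one plotted above:

```r
library(spatstat)

# Number of points divided by the window's area
lambda_hat <- P$n / spatstat.geom::area(P$window)

# Equivalently, using spatstat's built-in function
intensity(P)
```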
### Local density
A point pattern's density can be measured at different locations within the study area. Such an approach helps us assess if the density--and, by extension, the underlying process' local (modeled) intensity $\widehat{\lambda}_i$--is constant across the study area. This can be an important property of the data since it may need to be accounted for when using the distance based analysis tools covered later in this chapter. Several techniques for measuring local density are available; here we will focus on two such methods: *quadrat density* and *kernel density*.
#### Quadrat density
This technique requires that the study area be divided into sub-regions (aka *quadrats*). Then, the point density is computed for each quadrat by dividing the number of points in each quadrat by the quadrat's area. Quadrats can take on many different shapes such as hexagons and triangles; here we use square shaped quadrats to demonstrate the procedure.
```{r f11-quad01, fig.cap="An example of a quadrat count where the study area is divided into four equally sized quadrats whose area is 25 square units each. The density in each quadrat can be computed by dividing the number of points in each quadrat by that quadrat's area.", fig.height=2.2, fig.width=2.2, echo=FALSE}
OP <- par(mar=c(0,0,0,0))
Q.dim <- 2
Q <- quadratcount(P, nx=Q.dim, ny=Q.dim)
A <- spatstat.geom::area(P$window) / Q.dim^2 # Area for each quadrat
plot(Q / A, main="", col="grey20")
plot(P, add = TRUE, pch=20, col=rgb(0,0,0,0.5))
par(OP)
```
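The quadrat counts and densities in the above figure can be reproduced with `spatstat`'s `quadratcount` function. This is a sketch, assuming `P` is the `ppp` object from the earlier figures:

```r
library(spatstat)

# Divide the window into a 2x2 grid of quadrats and count points
Q <- quadratcount(P, nx = 2, ny = 2)

# Convert counts to densities by dividing by each quadrat's area
quadrat_area <- spatstat.geom::area(P$window) / 4
Q / quadrat_area

# intensity() performs the same conversion
intensity(Q)
```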
The choice of quadrat number and quadrat shape can influence the measure of local density and must be made with care. If very small quadrat sizes are used, you risk having many quadrats with no points, which may prove uninformative. If very large quadrat sizes are used, you risk missing subtle changes in the spatial distribution of density, such as the east-west gradient in density values in the above example.
Quadrat regions do not have to take on a uniform pattern across the study area; they can also be defined based on a **covariate**. For example, if it's believed that the underlying point pattern process is driven by elevation, quadrats can be defined by sub-regions such as different ranges of elevation values (labeled 1 through 4 in the right-hand plot of the following example). This can result in quadrats having non-uniform shapes and areas.
```{block, type="note"}
Converting a continuous field into discretized areas is sometimes referred to as **tessellation**. The end product is a **tessellated surface**.
```
<br>
```{r f11-quad02, fig.cap="Example of a covariate. Figure on the left shows the elevation map. Figure on the right shows elevation broken down into four sub-regions (a tessellated surface) for which local density values will be computed.", fig.height=2, fig.width=6.0, echo=FALSE}
elev <- density(P, sigma = 5) *1000
OP <- par(mfrow = c(1,2), mar=c(0,0,0,3))
plot.im(elev, col=grey(50:0/50), main=NULL, las=2)
plot(P, col=rgb(0,0,0,.7), add=T, pch=20)
brk <- c( 260, 286, 301, 318 , 370)
Zcut <- cut(elev, breaks=brk, labels=1:4)
V <- tess(image=Zcut)
plot(V$image, main="", las=2, col=colorRampPalette(c("grey90","grey20"))(V$n))
plot(P, col=rgb(0,0,0,.7), add=T, pch=20)
par(OP)
```
If the local intensity changes across the tessellated covariate, then there is evidence of a dependence between the process that generated the point pattern and the covariate. In our example, sub-regions 1 through 4 have surface areas of `r round(spatstat.geom::tile.areas(V), 2)` map units respectively. To compute these regions' point densities, we simply divide the number of points by the respective area values.
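These steps can be sketched in `spatstat` as follows, assuming `P` is the point pattern and `elev` is an `im` covariate object as in the figures (the break values are illustrative):

```r
library(spatstat)

# Discretize the covariate into four classes (illustrative breaks)
brk  <- c(260, 286, 301, 318, 370)
Zcut <- cut(elev, breaks = brk, labels = 1:4)

# Build a tessellation from the classified image
V <- tess(image = Zcut)

# Count points per tile, then convert to densities
Q <- quadratcount(P, tess = V)
intensity(Q)
```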
```{r f11-quad03, fig.cap="Figure on the left displays the number of points in each elevation sub-region (sub-regions are coded as values ranging from 1 to 4). Figure on the right shows the density of points (number of points divided by the area of the sub-region).", fig.height=2, fig.width=6.0, echo=FALSE}
# Extract centroids
library(sf)
V_stars <- stars::st_as_stars(as.im(V))
V_poly <- st_as_sf(V_stars, merge = TRUE)
V_coord <- st_coordinates(st_centroid(V_poly[ order(V_poly$v), ]))
#cl <- interp.colours(c("lightyellow", "orange" ,"red"), 40)
cl <- colourmap(c("#FFFFDA", "#FFDF5F", "#FF9100", "#FF0400"),
breaks=c(0, 0.31, 0.35, 0.4, 0.4222))
Q2 <- quadratcount(P, tess = V)
OP <- par(mfrow = c(1,2), mar=c(0,0,0,3))
plot(V$image, main="", las=2, col=colorRampPalette(c("grey90","grey20"))(V$n))
plot(P, pch=20, add=T)
text(V_coord, labels=Q2, col = "darkorange")
Q2i <- intensity(Q2)
int <- round(data.frame(Q2i)$Freq, 3)
plot(intensity(Q2, image=TRUE), col = cl, las=2, main=NULL)
text(V_coord, labels=int, cex = 0.8)
#plot(Q2, add=T, show.tiles = FALSE, entries= int, pos = 1, labelPts=T)
par(OP)
```
We can plot the relationship between point density and elevation regions to help assess any dependence between the variables.
```{r f11-quad04, fig.cap="Plot of point density vs elevation regions.", fig.height=2, echo=FALSE, }
ggplot() + aes(y=as.numeric(Q2i), x=1:4) + geom_point() + geom_line() +xlab("Elevation regions") + ylab("Point density")
```
It's important to note that how one _chooses_ to tessellate a surface can have an influence on the resulting density distribution. For example, dividing the elevation into seven sub-regions produces the following density values.
```{r f11-quad05, fig.cap="Same analysis as last figure using different sub-regions. Note the difference in density distribution.", fig.height=2, fig.width=6.0, echo=FALSE}
n_cut <- 7
brk <- quantile(elev, probs=(0:n_cut)/n_cut, type=2)
Zcut <- cut(elev, breaks=brk, labels=1:n_cut)
V <- tess(image=Zcut)
# Extract centroids
V_stars <- stars::st_as_stars(as.im(V))
V_poly <- st_as_sf(V_stars, merge = TRUE)
V_coord <- st_coordinates(st_centroid(V_poly[ order(V_poly$v), ]))
# Generate plots
OP <- par(mfrow = c(1,2), mar=c(0,0,0,3))
Q2 <- quadratcount(P, tess = V)
plot(V$image, main="", las=2, col=colorRampPalette(c("grey90","grey20"))(V$n))
#plot(Q2, show.tiles=FALSE, add=TRUE)
plot(P, pch=20, add=T)
text(V_coord, labels=Q2, col = "darkorange")
Q2i <- intensity(Q2)
int <- round(data.frame(Q2i)$Freq, 3)
cl <- colourmap(interp.colours(c("lightyellow", "orange" ,"red"), n_cut),
breaks=c(0,sort(intensity(Q2))) )
plot(intensity(Q2, image=TRUE), col = cl, las=2, main=NULL)
text(V_coord, labels=int, cex = 0.8, srt=45)
#plot(Q2, add=T, show.tiles = FALSE, entries= int)
par(OP)
```
While the high density in the western part of the study area remains, the density values to the east are no longer consistent across the other three regions.
The quadrat analysis approach has its advantages in that it is easy to compute and interpret; however, it does suffer from the [modifiable areal unit problem](https://mgimond.github.io/Spatial/pitfalls-to-avoid.html#maup) (MAUP), as highlighted in the last two examples. Another density based approach, explored next, that is less susceptible to the MAUP is the kernel density.
#### Kernel density
The kernel density approach is an extension of the quadrat method: Like the quadrat density, the kernel approach computes a localized density for subsets of the study area, but unlike its quadrat density counterpart, the sub-regions overlap one another providing a *moving* sub-region window. This moving window is defined by a **kernel**. The kernel density approach generates a grid of density values whose cell size is smaller than that of the kernel window. Each cell is assigned the density value computed for the kernel window _centered_ on that cell.
A kernel not only defines the shape and size of the window, but it can also *weight* the points following a well defined kernel function. The simplest function is a **basic kernel** where each point in the kernel window is assigned equal weight.
```{r f11-kernel01, fig.cap="An example of a basic 3x3 kernel density map (ArcGIS calls this a point density map) where each point is assigned an equal weight. For example, the cell centered at location x=1.5 and y =7.5 has one point within a 3x3 unit (pixel) region and thus has a local density of 1/9 = 0.11.", fig.height=3.5, echo=FALSE}
#library(raster)
library(terra)
library(grid)
gp.text <- function(sg, nr=1, sfmt="%02.0f",x1,x2,y1,y2,src){
src = as.data.frame(src)
sg.names = names(sg)
sg = crop(sg, c(x1,x2,y1,y2))
names(sg) = sg.names # Renaming the layers is necessary if the brick object is cropped
r.df = as.data.frame(cbind(xyFromCell(sg,1:ncell(sg))))
r.df = cbind(r.df, as.data.frame(sg))
ggplot(aes(x = x, y = y), data = r.df, environment = environment()) +
geom_tile(aes(fill = Point.density)) +
scale_fill_gradient(low = "#F0F9E8",high = "#08589E", name="Basic \nKernel") +
theme(strip.text.x = element_text(size=6, family="mono", face="bold"), legend.position = "right",
axis.text.x=element_text(size = 8), axis.text.y=element_text(size = 8),
plot.margin = unit(c(0, 0, 0, 0), "cm"), legend.key.size = unit(0.5,"cm"), legend.text=element_text(size=8),
legend.title=element_text(size=10)) +
geom_point(data = src, mapping=aes(x=x, y=y), size=2, shape=16, colour="grey20") +
coord_equal() + geom_text(aes(label=sprintf(fmt=sfmt, Point.density)), size=3, colour="#555555")+
scale_x_continuous(breaks=seq(0,10,2)) + scale_y_continuous(breaks=seq(0,10,2)) + labs(x="",y="")
}
fesri <- function(sigma, n=5) {
m <- matrix(nc=n, nr=n)
col <- rep(1:n, n)
row <- rep(1:n, each=n)
x <- col - ceiling(n/2)
y <- row - ceiling(n/2)
# See http://math.stackexchange.com/a/41191
ax <- (x^2 + y^2) / sigma^2
m[cbind(row, col)] = ifelse(abs(ax) < sigma,
(1 - ax)^2,
0)
# sum of weights should add up to 1
m / sum(m)
}
r = rast(nrows=cell,ncols=cell,xmin=0,xmax=cell,ymin=0,ymax=cell)
r[] = 0
Psf <- st_as_sf(P)[-1,]
P.r = rasterize(Psf, r, field=1, fun=sum) # Cell value is assigned number of points
P.r[is.na(P.r[])] = 0
# Point density
d.p = focal(P.r, w=matrix(1,nrow=3,ncol=3), fun=sum, na.rm=T, pad=T, padValue=0)
d.p[] = d.p[] / 9 # Divide by kernel area
#sg = brick(d.p)
sg = d.p
names(sg) = c("Point.density")
gp.text(sg, nr=1, sfmt="%0.2f", x1=0,x2=cell,y1=0,y2=cell,src=P)
```
Some of the most popular **kernel functions** assign weights to points that are inversely proportional to their distances from the kernel window's center. A few such kernel functions follow a *Gaussian* or *quartic*-like distribution function. These functions tend to produce a smoother density map.
```{r f11-kernel02, fig.cap="An example of a kernel function is the 3x3 quartic kernel function where each point in the kernel window is weighted based on its proximity to the kernel's center cell (typically, closer points are weighted more heavily). Kernel functions, like the quartic, tend to generate smoother surfaces.", fig.height=3.5, echo=FALSE}
#library(raster)
library(terra)
#library(maptools)
library(grid)
gp.text <- function(sg, nr=1, sfmt="%02.0f",x1,x2,y1,y2,src){
src = as.data.frame(src)
sg.names = names(sg)
sg = crop(sg, c(x1,x2,y1,y2))
names(sg) = sg.names # Renaming the layers is necessary if the brick object is cropped
r.df = as.data.frame(cbind(xyFromCell(sg,1:ncell(sg))))
r.df = cbind(r.df, as.data.frame(sg))
ggplot(aes(x = x, y = y), data = r.df, environment = environment()) +
geom_tile(aes(fill = Kernel.function)) +
scale_fill_gradient(low = "#F0F9E8",high = "#08589E", name="Kernel \nFunction") +
theme(strip.text.x = element_text(size=6, family="mono", face="bold"), legend.position = "right",
axis.text.x=element_text(size = 8), axis.text.y=element_text(size = 8),
plot.margin = unit(c(0, 0, 0, 0), "cm"), legend.key.size = unit(0.5,"cm"), legend.text=element_text(size=8),
legend.title=element_text(size=10)) +
geom_point(data = src, mapping=aes(x=x, y=y), size=2, shape=16, colour="grey20") +
coord_equal() + geom_text(aes(label=sprintf(fmt=sfmt, Kernel.function)), size=3, colour="#555555")+
scale_x_continuous(breaks=seq(0,10,2)) + scale_y_continuous(breaks=seq(0,10,2)) + labs(x="",y="")
}
fesri <- function(sigma, n=5) {
m <- matrix(nc=n, nr=n)
col <- rep(1:n, n)
row <- rep(1:n, each=n)
x <- col - ceiling(n/2)
y <- row - ceiling(n/2)
# See http://math.stackexchange.com/a/41191
ax <- (x^2 + y^2) / sigma^2
m[cbind(row, col)] = ifelse(abs(ax) < sigma,
(1 - ax)^2,
0)
# sum of weights should add up to 1
m / sum(m)
}
r = rast(nrows=cell,ncols=cell,xmin=0,xmax=cell,ymin=0,ymax=cell)
r[] = 0
Psf <- st_as_sf(P)[-1,]
P.r = rasterize(Psf, r, field=1, fun=sum) # Cell value is assigned number of points
P.r[is.na(P.r[])] = 0
# Kernel function
ff=fesri(1.52,n=5) # An approximation of ESRI's kernel density function
d.k = focal(P.r, w=ff, na.rm=T, pad=T, padValue=0)
#sg = brick(d.k)
sg = d.k
names(sg) = c("Kernel.function")
gp.text(sg, nr=1, sfmt="%0.2f", x1=0,x2=cell,y1=0,y2=cell,src=P)
```
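Smooth kernel density surfaces like the one above can be generated directly with `spatstat`'s `density.ppp` function, which uses a Gaussian kernel by default. This is a sketch, assuming `P` is a `ppp` object; the bandwidth value is illustrative:

```r
library(spatstat)

# Kernel density estimate; sigma controls the kernel bandwidth
# (here an illustrative value of 1 map unit)
K <- density(P, sigma = 1)
plot(K, main = "Kernel density estimate")
```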
#### Kernel Density Adjusted for Covariate
In the previous section, we learned that we could use a covariate, like elevation, to define the sub-regions (quadrats) within which densities were computed.
Here, instead of *dividing* the study region into discrete sub-regions (as was done with quadrat analysis), we create an intensity function that is dependent on the underlying covariate. This function, which we'll denote as $\rho$, can be estimated in one of three different ways-- by *ratio*, *re-weight* and *transform* methods. We will not delve into the differences between these methods, but note that there is more than one way to estimate $\rho$ in the presence of a covariate.
In the following example, the elevation raster is used as the covariate in the $\rho$ function using the _ratio_ method. The right-most plot maps the modeled intensity as a function of elevation.
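The ratio-method estimate can be sketched with `spatstat`'s `rhohat` function, assuming `P` is the point pattern and `elev` the covariate `im` object:

```r
library(spatstat)

# Estimate intensity as a function of the elevation covariate
rho <- rhohat(P, elev, method = "ratio")

# Map the modeled intensity back onto the study area
pred <- predict(rho)
plot(rho)   # rho vs. elevation, with a confidence envelope
plot(pred)  # modeled intensity surface
```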
```{r f11-kernel03, fig.cap="An estimate of $\\rho$ using the ratio method. The figure on the left shows the point distribution superimposed on the elevation layer. The middle figure plots the estimated $\\rho$ as a function of elevation. The envelope shows the 95% confidence interval. The figure on the right shows the modeled density of $\\widehat{\\lambda}$ which is a function of the elevation raster (i.e. $\\widehat{\\lambda}=\\widehat{\\rho}_{elevation}$).", fig.height=2.1, fig.width=8.2, echo=FALSE}
rho <- rhohat(P, elev, method="ratio")
pred <- predict(rho)
cl <- interp.colours(c("lightyellow", "orange" ,"red"), 100)
OP <- par(mfrow=c(1,3), mar=c(2.5,3.5,0.2,4), pty="s" )
plot(rast(elev), col=grey(50:0/50), main="", las=1,cex.axis=0.9,mgp=c(1.5,0.5,0))
plot(P, col=rgb(0,0,0,.7), add=T, pch=20)
plot(rho, main=NULL, las=1, legend=FALSE, do.rug=FALSE, cex.lab=1.2, cex.axis=0.9, mgp=c(1.5,0.5,0))
plot(rast(pred), col=cl, las=1, main="", cex.lab=1.5, cex.axis=0.9,mgp=c(1.5,0.5,0))
par(OP)
```
We can compare the modeled intensity function to the kernel density function of the observed point pattern via a scatter plot.
```{r f11-kernel04, fig.height=2.5, fig.width=2.5, echo=FALSE}
OP <- par(mar=c(3,3,1,1), mgp=c(2.0,0.8,0), cex=0.8)
K1 <- density(P)
K1_vs_pred <- pairs(K1, pred, plot = FALSE)
plot(K1_vs_pred$pred ~ K1_vs_pred$K1, pch=20,
xlab = "Observed intensity",
ylab = "Predicted intensity",
col = rgb(0,0,0,0.1))
abline(a=0, b=1, col = "red")
grid()
par(OP)
```
A red one-to-one diagonal line is added to the plot. While an increase in predicted intensity is accompanied by an increase in observed intensity, the relationship is not linear. This can be explained by the small area covered by the high elevation locations, which results in fewer observation opportunities and thus higher uncertainty for that corner of the study extent. This uncertainty is very apparent in the $\rho$ vs. elevation plot, where the 95% confidence envelope widens at higher elevation values (indicating the greater uncertainty in our estimated $\rho$ at those elevations).
### Modeling intensity as a function of a covariate
So far, we have learned techniques that *describe* the distribution of points across a region of interest. But it is often more interesting to *model* the relationship between the distribution of points and some underlying covariate by defining that relationship mathematically. This can be done by exploring the *changes* in point density as a function of a covariate, however, unlike techniques explored thus far, this approach makes use of a statistical model. One such model is a *Poisson point process* model which can take on the form of:
$$
\begin{equation}
\lambda(i) = e^{\alpha + \beta Z(i)}
\label{eq:density-covariate}
\end{equation}
$$
where $\lambda(i)$ is the modeled intensity at location $i$, $e^{\alpha}$ (the exponent of $\alpha$) is the base intensity when the covariate is *zero* and $e^{\beta}$ is the multiplier by which the intensity increases (or decreases) for each 1 unit increase in the covariate $Z(i)$. This is a form of the *logistic regression* model--popular in the field of statistics. This equation implies that the intensity of the process that led to the observed point pattern is a **loglinear** function of the underlying covariate (i.e. one where the process' intensity increases or decreases exponentially as a function of the covariate). Note that taking the log of both sides of the equation yields the more familiar linear regression model where $\alpha + \beta Z(i)$ is the *linear predictor*.
```{block, type="note"}
Note: The left-hand side of a logistic regression model is often presented as the *probability*, $P$, of occurrence, which is related to $\lambda$ as \$\\lambda=P/(1-P)\$, the *odds* of occurrence. Solving for $P$ gives us \$P = \\lambda/(1 + \\lambda)\$ which yields the following equation:
$$
P(i) = \frac{e^{\alpha + \beta Z(i)}}{1 + e^{\alpha + \beta Z(i)}}
$$
```
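Fitting such a model in `spatstat` is a one-liner with `ppm`. The following is a sketch; `P` is a `ppp` object and `Z` a hypothetical covariate supplied as an `im` object:

```r
library(spatstat)

# Fit a Poisson point process model with loglinear intensity
# lambda(i) = exp(alpha + beta * Z(i))
fit <- ppm(P ~ Z)

# alpha and beta are returned on the log scale
coef(fit)       # alpha (intercept) and beta (slope for Z)
exp(coef(fit))  # base intensity and multiplicative effect of Z
```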
Let's work with the point distribution of Starbucks cafes in the state of Massachusetts. The point pattern clearly exhibits a non-random distribution. It might be helpful to compare this distribution to some underlying covariate such as the population density distribution.
```{r f11-starbucks, fig.cap="Location of Starbucks relative to population density. Note that the classification scheme follows a log scale to more easily differentiate population density values.", message=FALSE, fig.height=3, fig.width=7, echo=FALSE, results='hide'}
# Load Starbucks point locations
S <- st_read("./Data/Starbucks.shp")
# Load population density raster layer
r <- rast("./Data/pop_sqmile.img")
pop <- r
# Display data
plot(log(pop ), grid = FALSE, axes = FALSE)
plot(S$geometry, add = TRUE, pch=20, col=rgb(0,0,0,0.5), cex=1.3)
```
We can fit a Poisson point process model to these data where the modeled intensity takes on the form:
$$
\begin{equation}
Starbucks\ density(i) = e^{\alpha + \beta\ population(i)}
\label{eq:walmart-model}
\end{equation}
$$
The parameters $\alpha$ and $\beta$ are estimated using a method called *maximum likelihood*. Its implementation is not covered here but is widely covered in many statistics textbooks. The index $(i)$ serves as a reminder that both the point density and the population distribution can vary as a function of location $i$.
```{r echo=FALSE}
#options(digits=10)
pop_xy <- xyFromCell(pop, cel = 1:prod(dim(pop)))
pop_z <- pop[]
pop_df <- as.data.frame(cbind(pop_xy, pop_z))
names(pop_df) <- c("x","y","z")
pop_im <- as.im(pop_df)
P2 <- as.ppp(S)
marks(P2) <- NULL
PPM1 <- ppm(P2 ~ pop_im)
a <- round(PPM1$coef[1] , 3)
b <- round(PPM1$coef[2] , 5)
```
The estimated value for $\alpha$ in our example is `r a`. This is interpreted as stating that given a population density of *zero*, the base intensity of the point process is e^`r a`^ or `r sprintf("%g",exp(a))` cafes per square meter (the units are derived from the point's reference system)--a number close to zero (as one would expect). The estimated value for $\beta$ is `r sprintf("%g",b)`. This is interpreted as stating that for every unit increase in the population density derived from the raster, the intensity of the point process increases by a factor of e^`r sprintf("%g",b)`^ or `r sprintf("%g",exp(b))`.
If we are to plot the relationship between density and population, we get:
```{r f11-pppm01,fig.cap="Poisson point process model fitted to the relationship between Starbucks store locations and population density. The model assumes a loglinear relationship. Note that the density is reported in number of stores per map unit area (the map units are in meters).", message=FALSE, fig.height=2.5, fig.width=3.6, echo=FALSE, results='hide'}
PPM1 <- ppm(P2 ~ pop_im)
PPM0 <- ppm(P2 ~ 1)
OP <- par(mar=c(4,4,0,0))
plot(effectfun(PPM1, "pop_im", se.fit=TRUE), main=NULL, cex.axis=0.8,cex.lab=0.8, mgp=c(1.5,0.5,0),
legend=FALSE)
par(OP)
#anova(PPM0, PPM1, test="LR")
```
## Distance based analysis
An alternative to the density based methods explored thus far are the **distance based methods** for pattern analysis whereby the interest lies in how the points are distributed relative to one another (a second-order property of the point pattern) as opposed to how the points are distributed relative to the study extent.
```{block, type="note"}
A *second order* property of a pattern concerns itself with the observations' influence on one another. For example, the distribution of oaks will be influenced by the location of parent trees--where parent oaks are present we would expect dense clusters of oaks to emerge.
```
<br>
Three distance based approaches are covered next: The average nearest neighbor (ANN), the K and L functions, and the pair correlation function.
### Average Nearest Neighbor
An average nearest neighbor (ANN) analysis measures the average distance from each point in the study area to its nearest point. In the following example, the average nearest neighbor for all points is 1.52 units.
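With `spatstat`, the ANN statistic is simply the mean of the nearest neighbor distances returned by `nndist`. This is a sketch, assuming `P` is the `ppp` object used throughout:

```r
library(spatstat)

# Distance from each point to its single nearest neighbor
nn_dist <- nndist(P, k = 1)

# The average nearest neighbor (ANN) statistic
mean(nn_dist)

# nnwhich identifies *which* point is each point's nearest neighbor
nnwhich(P, k = 1)
```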
```{r f11-ppp-dist, fig.cap="Distance between each point and its closest point. For example, the point closest to point 1 is point 9 which is 2.32 map units away.", results="asis", echo=FALSE, message=FALSE, warning=FALSE, fig.height=3.0, fig.width=5}
library(gridExtra)
mytheme <- gridExtra::ttheme_default(
core = list(fg_params=list(vjust=0, x=0.4, cex=0.5,lineheight = 5)),
# gpar.coretext = gpar(fontsize = 1, lineheight = 0.1, cex = 0.1),
colhead = list(fg_params=list(cex = 0.4)),
rowhead = list(fg_params=list(cex = 0.4)))
ANN <- nndist(P, k=1)
NN <- nnwhich(P, k =1)
ddf <- data.frame(From = 1:length(NN), To = NN, Distance = round(ANN,2))
ddf.split <- cbind(ddf[1:10,], ddf[11:20,])
d <- tableGrob(ddf.split,theme = mytheme, rows=NULL)
PP <- ggplot(aes(x = x, y = y), data = as.data.frame(P)) + theme(legend.position = "none") +
geom_point(data = as.data.frame(P), mapping=aes(x=x, y=y), size=1.5, shape=16, colour="grey30") +
coord_equal() + labs(x="",y="") +
geom_text(aes(label=1:P$n),hjust=-0.4, vjust=-0.4, size=2) +
theme(plot.margin = unit(c(0, 0, 0, 0), "cm"), axis.text=element_text(size = 4) ) +
scale_x_continuous(breaks=seq(0,10,2)) + scale_y_continuous(breaks=seq(0,10,2))
grid.arrange(PP,d, ncol=2)
```
An extension of this idea is to plot the ANN values for different order neighbors, that is for the first closest point, then the second closest point, and so forth.
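Computing ANN for successive neighbor orders is a matter of passing a vector of `k` values to `nndist` and averaging each column. A sketch, again assuming `P` is the `ppp` object from the figures:

```r
library(spatstat)

# nndist returns a matrix: one column per neighbor order k
ann <- apply(nndist(P, k = 1:(P$n - 1)), 2, mean)

# ANN as a function of neighbor order
plot(1:(P$n - 1), ann, type = "b",
     xlab = "Neighbor order", ylab = "ANN")
```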
```{r f11-ANN-plot,fig.cap="ANN values for different neighbor order numbers. For example, the ANN for the first closest neighbor is 1.52 units; the ANN for the 2nd closest neighbor is 2.14 map units; and so forth.", fig.height=2, fig.width=3.5, echo=FALSE, message=FALSE, warning=FALSE}
ANN <- apply(nndist(P, k=1:(P$n-1)),2,FUN=mean) # Compute ANN for each order
d.f <- data.frame(Order=1:(P$n-1), ANN=ANN)
ggplot( aes(x=Order, y=ANN), data=d.f) +geom_line() + geom_point(size=1.5, shape=16)+
theme(plot.margin = unit(c(0, 0, 0, 0), "cm"), axis.text=element_text(size = 8),
axis.title = element_text(size=8 ) )
```
The shape of the ANN curve as a function of neighbor order can provide insight into the spatial arrangement of points relative to one another. In the following example, three different point patterns of 20 points are presented.
```{r f11-diff-patterns, fig.cap="Three different point patterns: a single cluster, a dual cluster and a randomly scattered pattern.", fig.height=2, fig.width=4.5, echo=FALSE}
library(spatstat)
win <- owin(c(0,10),c(0,10))
set.seed(12)
x <- rnorm(20, 5,0.3)
set.seed(14)
y <- rnorm(20,5,0.3)
P.cl <- ppp(x,y,window=win)
set.seed(6)
x <- c(rnorm(10, 3,0.5), rnorm(10,6,0.5))
set.seed(34)
y <- c(rnorm(10, 3,0.5), rnorm(10,6,0.5))
P.cl2 <- ppp(x,y,window=win)
set.seed(673)
P.rnd <- rpoint(20, win=win)
ann.cl <- apply(nndist(P.cl, k=1:(P.cl$n-1)),2,FUN=mean)
ann.cl2 <- apply(nndist(P.cl2, k=1:(P.cl2$n-1)),2,FUN=mean)
ann.rnd <- apply(nndist(P.rnd, k=1:(P.rnd$n-1)),2,FUN=mean)
OP <- par(mfrow=c(1,3), mar=c(1,1,1,0))
plot(P.cl, pch=20, main="",cols=rgb(0,0,0,.5))
plot(P.cl2, pch=20, main="",cols=rgb(0,0,0,.5))
plot(P.rnd, pch=20, main="",cols=rgb(0,0,0,.5))
par(OP)
```
Each point pattern offers different ANN vs. neighbor order plots.
```{r f11-diff-ANN-plots, fig.cap="Three different ANN vs. neighbor order plots. The black ANN line is for the first point pattern (single cluster); the blue line is for the second point pattern (double cluster) and the red line is for the third point pattern.", fig.height=2, fig.width=4.0, echo=FALSE}
OP <- par(mar=c(1.4,1.2,1,0) )
plot(1:19, ann.cl, type="b", ylim=c(0, max(ann.cl,ann.rnd, ann.cl2)), cex=.5, pch=16,xaxt="n",yaxt="n",
xlab="", ylab="")
axis(1, seq(0,20,5),cex.axis=.5,xlab="Neighbor order", cex.lab=.5,padj=-3,tck=-0.02)
mtext("Neighbor order", 1, cex=.7, line=0.5)
axis(2, seq(0,round(max(ann.cl,ann.rnd)), length.out=4),cex.axis=.5,xlab="Neighbor order", cex.lab=.5,las=2,
hadj=-1.6, ,tck=-0.02)
mtext("ANN", 3, cex=.7, line=0, at=0)
lines(1:19, ann.rnd, type="b", col="red", cex=.5, pch=16)
lines(1:19, ann.cl2, type="b", col="blue", cex=.5, pch=16)
par(OP)
```
The bottom line (black dotted line) indicates that the cluster (left plot) is tight and that the distances between a point and all other points are very short. This is in stark contrast with the top line (red dotted line), which indicates that the distances between points are much greater. Note that the way we describe these patterns is heavily influenced by the size and shape of the study region. If the region were defined as the smallest rectangle encompassing the cluster of points, the points would no longer look clustered.
```{r f11-diff-ppp02, fig.cap="The same point pattern presented with two different study areas. How differently would you describe the point pattern in both cases?", fig.height=2, fig.width=4.0, echo=FALSE}
set.seed(12)
x <- rnorm(20, 5,0.3)
set.seed(14)
y <- rnorm(20,5,0.3)
P.cl <- ppp(x,y,window=win)
win2 <- owin(range(x), range(y))
P2.cl <- ppp(x,y, window=win2)
OP <- par(mfrow=c(1,2), mar=c(1,1,1,0))
plot(P.cl, pch=20, main="",cols=rgb(0,0,0,.5), cex=0.8)
plot(P2.cl, pch=20, main="",cols=rgb(0,0,0,.5), cex=0.8)
par(OP)
```
An important assumption underlying our interpretation of the ANN results is stationarity of the underlying point process (i.e. that there is no overall drift or trend in the process' intensity). If the point pattern is not stationary, it will be difficult to assess whether the ANN results reflect interactions between the points or some underlying factor that varies as a function of location. Correcting for lack of stationarity when performing hypothesis tests is described in the next chapter.
### K and L functions
#### K function
The average nearest neighbor (ANN) statistic is one of many distance-based point pattern analysis statistics. Another such statistic is the K-function, which summarizes the separation between points across *all* distances. The calculation of K is fairly simple: count the number of points (events) within a set of distance bands of each point, average those counts across all points, then divide by the overall event density. For example, for point $S_1$ we draw circles, each of varying radius $d$, centered on that point. We then count the number of points (events) inside each circle. We repeat this for point $S_2$ and all other points $S_i$. Next, we compute the average number of points in each circle, then divide that number by the overall point density $\hat{\lambda}$ (i.e. total number of events per study area).
<table>
<tr>
<td style="width:30%;">
<img src="img/K_bands.png" width=300>
</td>
<td style="width:350px !important">
<table style="width:75%;">
<tr>
<th>Distance <br>band <br>(km) </th><th> # events <br>from S<sub>1</sub></th><th> # events <br> from S<sub>2</sub> </th><th> # events<br> from S<sub>i</sub> </th><th> K </th>
</tr>
<tr>
<td>10 </td><td> 0 </td><td> 1 </td><td> ... </td><td> 0.012 </td>
</tr><tr>
<td>20 </td><td> 3 </td><td> 5 </td><td> ... </td><td> 0.067 </td>
</tr><tr>
<td>30 </td><td> 9 </td><td> 14 </td><td> ... </td><td> 0.153 </td>
</tr><tr>
<td>40 </td><td> 17 </td><td> 17 </td><td> ... </td><td> 0.269 </td>
</tr><tr>
<td>50 </td><td> 25 </td><td> 23 </td><td> ... </td><td> 0.419 </td>
</tr>
</table>
</td>
</tr>
</table><br>
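The tabulated computation can be sketched with `spatstat`'s `pairdist()` function, which returns the matrix of distances between all pairs of points. This is an illustrative sketch that ignores edge corrections; it assumes `spatstat` is loaded and a `ppp` object named `P`.

```{r eval=FALSE}
library(spatstat)
d       <- pairdist(P)   # inter-point distance matrix
diag(d) <- NA            # a point does not count as its own neighbor
lambda  <- intensity(P)  # overall density (events per unit area)
# Mean number of events within each distance band, scaled by density
K.hat <- sapply(c(10, 20, 30, 40, 50), function(r) {
  mean(rowSums(d <= r, na.rm = TRUE)) / lambda
})
```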
We can then plot K and compare that plot to the plot we would expect if an IRP/CSR process were at play (K~expected~).
```{r f11-K, echo=FALSE, fig.cap="The K-function calculated from the Walmart stores point distribution in MA (shown in black) compared to $K_{expected}$ under the IRP/CSR assumption (shown in red)."}
knitr::include_graphics("img/K_function.svg")
```
$K$ values greater than $K_{expected}$ indicate clustering of points at a given distance band; $K$ values less than $K_{expected}$ indicate dispersion of points at a given distance band. In our example, the stores appear to be more clustered than expected at distances greater than 12 km.
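In practice, $\hat{K}$ and its theoretical value under IRP/CSR are computed with `spatstat`'s `Kest()` function. A minimal sketch, assuming `spatstat` is loaded and a `ppp` object named `P` (such as the Walmart store locations used in this example):

```{r eval=FALSE}
library(spatstat)
K <- Kest(P)          # K-hat, with several edge-correction estimates
plot(K, main = NULL)  # the 'theo' curve is K-expected under CSR
```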
Note that like the ANN analysis, the $K$-function assumes stationarity in the underlying point process (i.e. that there is no overall drift or trend in the process' intensity).
#### L function
One problem with the $K$ function is that its shape tends to curve upward, making it difficult to see small differences between $K$ and $K_{expected}$. A workaround is to transform the values in such a way that the expected values, $K_{expected}$, plot as a horizontal line. The transformation is calculated as follows:
$$
\begin{equation}
L=\sqrt{\dfrac{K(d)}{\pi}}-d
\label{eq:L-function}
\end{equation}
$$
The $\hat{K}$ computed earlier is transformed to the following plot (note how the $K_{expected}$ red line is now perfectly horizontal):
```{r f11-L, echo=FALSE, fig.cap="L-function (a simple transformation of the K-function)."}
knitr::include_graphics("img/L_function.svg")
```
This graph makes it easier to compare $K$ with $K_{expected}$ at lower distance values. Values greater than $0$ indicate clustering, while values less than $0$ indicate dispersion. It appears that Walmart locations are more dispersed than expected under CSR/IRP up to a distance of 12 km but more clustered at distances greater than 12 km.
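`spatstat`'s `Lest()` function computes $\sqrt{K(d)/\pi}$ without subtracting $d$; the subtraction that renders $K_{expected}$ horizontal can be done in the plot formula. A minimal sketch, assuming `spatstat` is loaded and a `ppp` object named `P`:

```{r eval=FALSE}
library(spatstat)
L <- Lest(P)                     # L(r) = sqrt(K(r)/pi)
plot(L, . - r ~ r, main = NULL)  # subtract r so the CSR line is flat at 0
```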
### The Pair Correlation Function $g$
A shortcoming of the $K$ function (and, by extension, the $L$ function) is its cumulative nature: because *all* points up to distance $r$ contribute to $K(r)$, it can be difficult to pinpoint the exact distances at which a point pattern strays from $K_{expected}$. The **pair correlation function**, $g$, is a modified version of the $K$ function where, instead of summing *all* points within a distance $r$, only the points falling within a narrow distance *band* centered on $r$ are summed.
```{r f11-k-g, echo=FALSE, fig.cap="Difference in how the $K$ and $g$ functions aggregate points at distance $r$ ($r$ = 30 km in this example). *All* points *up to* $r$ contribute to $K$ whereas just the points in the *annulus band* at $r$ contribute to $g$."}
knitr::include_graphics("img/K_vs_g_bands.png")
```
The plot of the $g$ function follows.
```{r f11-g, echo=FALSE, fig.cap="$g$-function of the Massachusetts Walmart point data. Its interpretation is similar to that of the $K$ and $L$ functions. Here, we observe distances between stores greater than expected under CSR up to about 5 km. Note that this cutoff is less than the 12 km cutoff observed with the $K$/$L$ functions.", out.width=400}
knitr::include_graphics("img/g_function.svg")
```
If $g(r) = 1$, then the inter-point distances (at and around distance $r$) are consistent with CSR. If $g(r) > 1$, then the points are more clustered than expected under CSR. If $g(r) < 1$, then the points are more dispersed than expected under CSR. Note that $g$ can never be less than 0.
Like its $K$ and ANN counterparts, the $g$-function assumes stationarity in the underlying point process (i.e. that there is no overall drift or trend in the process' intensity).
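The $g$ function is computed with `spatstat`'s `pcf()` function. A minimal sketch, assuming `spatstat` is loaded and a `ppp` object named `P`:

```{r eval=FALSE}
library(spatstat)
g <- pcf(P)           # pair correlation function estimate
plot(g, main = NULL)  # g = 1 under CSR; > 1 clustered; < 1 dispersed
```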
## First and second order effects
The concept of 1^st^ order and 2^nd^ order effects is an important one: it underlies the basic principles of spatial analysis.
```{r f11-1st-2nd-effects, echo=FALSE, fig.cap="Tree distribution can be influenced by 1^st^ order effects such as elevation gradient or spatial distribution of soil characteristics; this, in turn, changes the tree density distribution across the study area. Tree distribution can also be influenced by 2^nd^ order effects such as seed dispersal processes where the process is independent of location and, instead, dependent on the presence of other trees."}
knitr::include_graphics("img/1st_2nd_order_property.png")
```
Density-based measurements such as kernel density estimation look at the *1^st^ order* property of the underlying process. Distance-based measurements such as ANN and the K-function focus on the *2^nd^ order* property of the underlying process.
It's important to note that it is seldom feasible to separate the two effects when analyzing point patterns, hence the importance of relying on *a priori* knowledge of the phenomenon being investigated before drawing any conclusions from the analysis results.