# Numbers {#sec-numbers}
```{r}
#| results: "asis"
#| echo: false
source("_common.R")
status("polishing")
```
## Introduction
In this chapter, you'll learn useful tools for creating and manipulating numeric vectors.
We'll start by going into a little more detail of `count()` before diving into various numeric transformations.
You'll then learn about more general transformations that can be applied to other types of vector, but are often used with numeric vectors.
Then you'll learn about a few more useful summaries and how they can also be used with `mutate()`.
### Prerequisites
This chapter mostly uses functions from base R, which are available without loading any packages.
But we still need the tidyverse because we'll use these base R functions inside of tidyverse functions like `mutate()` and `filter()`.
Like in the last chapter, we'll use real examples from nycflights13, as well as toy examples made with `c()` and `tribble()`.
```{r}
#| label: setup
#| message: false
library(tidyverse)
library(nycflights13)
```
### Counts
It's surprising how much data science you can do with just counts and a little basic arithmetic, so dplyr strives to make counting as easy as possible with `count()`.
This function is great for quick exploration and checks during analysis:
```{r}
flights |> count(dest)
```
(Despite the advice in @sec-workflow-style, we usually put `count()` on a single line because it's usually used at the console for a quick check that a calculation is working as expected.)
If you want to see the most common values add `sort = TRUE`:
```{r}
flights |> count(dest, sort = TRUE)
```
And remember that if you want to see all the values, you can use `|> View()` or `|> print(n = Inf)`.
You can perform the same computation "by hand" with `group_by()`, `summarise()` and `n()`.
This is useful because it allows you to compute other summaries at the same time:
```{r}
flights |>
group_by(dest) |>
summarise(
n = n(),
delay = mean(arr_delay, na.rm = TRUE)
)
```
`n()` is a special summary function that doesn't take any arguments and instead accesses information about the "current" group.
This means that it only works inside dplyr verbs:
```{r}
#| error: true
n()
```
There are a couple of variants of `n()` that you might find useful:
- `n_distinct(x)` counts the number of distinct (unique) values of one or more variables.
For example, we could figure out which destinations are served by the most carriers:
```{r}
flights |>
group_by(dest) |>
summarise(
carriers = n_distinct(carrier)
) |>
arrange(desc(carriers))
```
- A weighted count is a sum.
For example, you could "count" the number of miles each plane flew:
```{r}
flights |>
group_by(tailnum) |>
summarise(miles = sum(distance))
```
Weighted counts are a common problem so `count()` has a `wt` argument that does the same thing:
```{r}
flights |> count(tailnum, wt = distance)
```
- You can count missing values by combining `sum()` and `is.na()`.
In the flights dataset this represents flights that are cancelled:
```{r}
flights |>
group_by(dest) |>
summarise(n_cancelled = sum(is.na(dep_time)))
```
### Exercises
1. How can you use `count()` to count the number of rows with a missing value for a given variable?
2. Expand the following calls to `count()` to instead use `group_by()`, `summarise()`, and `arrange()`:
1. `flights |> count(dest, sort = TRUE)`
2. `flights |> count(tailnum, wt = distance)`
## Numeric transformations
Transformation functions work well with `mutate()` because their output is the same length as the input.
The vast majority of transformation functions are already built into base R.
It's impractical to list them all, so this section will show the most useful ones.
As an example, while R provides all the trigonometric functions that you might dream of, we don't list them here because they're rarely needed for data science.
### Arithmetic and recycling rules
We introduced the basics of arithmetic (`+`, `-`, `*`, `/`, `^`) in [Chapter -@sec-workflow-basics] and have used them a bunch since.
These functions don't need a huge amount of explanation because they do what you learned in grade school.
But we need to briefly talk about the **recycling rules** which determine what happens when the left and right hand sides have different lengths.
This is important for operations like `flights |> mutate(air_time = air_time / 60)` because there are 336,776 numbers on the left of `/` but only one on the right.
R handles mismatched lengths by **recycling**, or repeating, the short vector.
We can see this in operation more easily if we create some vectors outside of a data frame:
```{r}
x <- c(1, 2, 10, 20)
x / 5
# is shorthand for
x / c(5, 5, 5, 5)
```
Generally, you only want to recycle single numbers (i.e. vectors of length 1), but R will recycle any shorter length vector.
It usually (but not always) gives you a warning if the longer vector isn't a multiple of the shorter:
```{r}
x * c(1, 2)
x * c(1, 2, 3)
```
These recycling rules are also applied to logical comparisons (`==`, `<`, `<=`, `>`, `>=`, `!=`) and can lead to a surprising result if you accidentally use `==` instead of `%in%` and the data frame has an unfortunate number of rows.
For example, take this code which attempts to find all flights in January and February:
```{r}
flights |>
filter(month == c(1, 2))
```
The code runs without error, but it doesn't return what you want.
Because of the recycling rules it finds flights in odd numbered rows that departed in January and flights in even numbered rows that departed in February.
And unfortunately there's no warning because `flights` has an even number of rows.
To protect you from this type of silent failure, most tidyverse functions use a stricter form of recycling that only recycles single values.
Unfortunately that doesn't help here, or in many other cases, because the key computation is performed by the base R function `==`, not `filter()`.
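Here's the version that does what we want, using `%in%` so that each value of `month` is checked against the whole set rather than recycled against it:
```{r}
# `%in%` tests membership row by row, so no recycling is involved
flights |>
  filter(month %in% c(1, 2))
```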
### Minimum and maximum
The arithmetic functions work with pairs of variables.
Two closely related functions are `pmin()` and `pmax()`, which when given two or more variables will return the smallest or largest value in each row:
```{r}
df <- tribble(
~x, ~y,
1, 3,
5, 2,
7, NA,
)
df |>
mutate(
min = pmin(x, y, na.rm = TRUE),
max = pmax(x, y, na.rm = TRUE)
)
```
Note that these are different to the summary functions `min()` and `max()` which take multiple observations and return a single value.
You can tell that you've used the wrong form when all the minimums and all the maximums have the same value:
```{r}
df |>
mutate(
min = min(x, y, na.rm = TRUE),
max = max(x, y, na.rm = TRUE)
)
```
### Modular arithmetic
Modular arithmetic is the technical name for the type of math you did before you learned about real numbers, i.e. division that yields a whole number and a remainder.
In R, `%/%` does integer division and `%%` computes the remainder:
```{r}
1:10 %/% 3
1:10 %% 3
```
Modular arithmetic is handy for the flights dataset, because we can use it to unpack the `sched_dep_time` variable into `hour` and `minute`:
```{r}
flights |>
mutate(
hour = sched_dep_time %/% 100,
minute = sched_dep_time %% 100,
.keep = "used"
)
```
We can combine that with the `mean(is.na(x))` trick from @sec-logical-summaries to see how the proportion of cancelled flights varies over the course of the day.
The results are shown in @fig-prop-cancelled.
```{r}
#| label: fig-prop-cancelled
#| fig-cap: >
#| A line plot with scheduled departure hour on the x-axis, and proportion
#| of cancelled flights on the y-axis. Cancellations seem to accumulate
#| over the course of the day until 8pm; very late flights are much
#| less likely to be cancelled.
#| fig-alt: >
#| A line plot showing how proportion of cancelled flights changes over
#| the course of the day. The proportion starts low at around 0.5% at
#| 6am, then steadily increases over the course of the day until peaking
#| at 4% at 7pm. The proportion of cancelled flights then drops rapidly
#| getting down to around 1% by midnight.
flights |>
group_by(hour = sched_dep_time %/% 100) |>
summarise(prop_cancelled = mean(is.na(dep_time)), n = n()) |>
filter(hour > 1) |>
ggplot(aes(hour, prop_cancelled)) +
geom_line(color = "grey50") +
geom_point(aes(size = n))
```
### Logarithms
Logarithms are an incredibly useful transformation for dealing with data that ranges across multiple orders of magnitude.
They also convert exponential growth to linear growth.
For example, take compounding interest --- the amount of money you have at `year + 1` is the amount of money you had at `year` multiplied by the interest rate.
That gives a formula like `money = starting * interest ^ year`:
```{r}
starting <- 100
interest <- 1.05
money <- tibble(
year = 2000 + 1:50,
money = starting * interest^(1:50)
)
```
If you plot this data, you'll get an exponential curve:
```{r}
ggplot(money, aes(year, money)) +
geom_line()
```
Log transforming the y-axis gives a straight line:
```{r}
ggplot(money, aes(year, money)) +
geom_line() +
scale_y_log10()
```
This is a straight line because a little algebra reveals that `log(money) = log(starting) + year * log(interest)`, which matches the pattern for a line, `y = m * x + b`.
This is a useful pattern: if you see a (roughly) straight line after log-transforming the y-axis, you know that there's underlying exponential growth.
If you're log-transforming your data with dplyr you have a choice of three logarithms provided by base R: `log()` (the natural log, base e), `log2()` (base 2), and `log10()` (base 10).
We recommend using `log2()` or `log10()`.
`log2()` is easy to interpret because a difference of 1 on the log scale corresponds to doubling on the original scale and a difference of -1 corresponds to halving, whereas `log10()` is easy to back-transform because (e.g.) 3 on the log10 scale corresponds to 10\^3 = 1000.
The inverse of `log()` is `exp()`; to compute the inverse of `log2()` or `log10()` you'll need to use `2^` or `10^`.
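For example, a quick sketch of these round trips:
```{r}
exp(log(100))   # back to 100
2^log2(8)       # back to 8
10^log10(1000)  # back to 1000
```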
### Rounding {#sec-rounding}
Use `round(x)` to round a number to the nearest integer:
```{r}
round(123.456)
```
You can control the precision of the rounding with the second argument, `digits`.
`round(x, digits)` rounds to the nearest `10^-digits`, so `digits = 2` will round to the nearest 0.01.
This definition is useful because it implies `round(x, -3)` will round to the nearest thousand, which indeed it does:
```{r}
round(123.456, 2) # two digits
round(123.456, 1) # one digit
round(123.456, -1) # round to nearest ten
round(123.456, -2) # round to nearest hundred
```
There's one weirdness with `round()` that seems surprising at first glance:
```{r}
round(c(1.5, 2.5))
```
`round()` uses what's known as "round half to even" or Banker's rounding: if a number is half way between two integers, it will be rounded to the **even** integer.
This is a good strategy because it keeps the rounding unbiased: half of all 0.5s are rounded up, and half are rounded down.
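You can see the alternating pattern by rounding a few consecutive halves:
```{r}
round(c(0.5, 1.5, 2.5, 3.5))  # 0 2 2 4: each half goes to the nearest even integer
```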
`round()` is paired with `floor()` which always rounds down and `ceiling()` which always rounds up:
```{r}
x <- 123.456
floor(x)
ceiling(x)
```
These functions don't have a `digits` argument, so you can instead scale down, round, and then scale back up:
```{r}
# Round down to nearest two digits
floor(x / 0.01) * 0.01
# Round up to nearest two digits
ceiling(x / 0.01) * 0.01
```
You can use the same technique if you want to `round()` to a multiple of some other number:
```{r}
# Round to nearest multiple of 4
round(x / 4) * 4
# Round to nearest 0.25
round(x / 0.25) * 0.25
```
### Cutting numbers into ranges
Use `cut()`[^numbers-1] to break up a numeric vector into discrete buckets:
[^numbers-1]: ggplot2 provides some helpers for common cases in `cut_interval()`, `cut_number()`, and `cut_width()`.
ggplot2 is an admittedly weird place for these functions to live, but they are useful as part of histogram computation and were written before any other parts of the tidyverse existed.
```{r}
x <- c(1, 2, 5, 10, 15, 20)
cut(x, breaks = c(0, 5, 10, 15, 20))
```
The breaks don't need to be evenly spaced:
```{r}
cut(x, breaks = c(0, 5, 10, 100))
```
You can optionally supply your own `labels`.
Note that there should be one fewer label than there are breaks.
```{r}
cut(x,
breaks = c(0, 5, 10, 15, 20),
labels = c("sm", "md", "lg", "xl")
)
```
Any values outside of the range of the breaks will become `NA`:
```{r}
y <- c(NA, -10, 5, 10, 30)
cut(y, breaks = c(0, 5, 10, 15, 20))
```
See the documentation for other useful arguments like `right` and `include.lowest`, which control if the intervals are `[a, b)` or `(a, b]` and if the lowest interval should be `[a, b]`.
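For example, a quick sketch of `right = FALSE`, using the same `x` as above:
```{r}
# Intervals become [a, b), so 5 now falls in [5,10) rather than (0,5],
# and 20 is only kept when include.lowest = TRUE closes the top interval
cut(x, breaks = c(0, 5, 10, 15, 20), right = FALSE)
cut(x, breaks = c(0, 5, 10, 15, 20), right = FALSE, include.lowest = TRUE)
```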
### Cumulative and rolling aggregates
Base R provides `cumsum()`, `cumprod()`, `cummin()`, `cummax()` for running, or cumulative, sums, products, mins and maxes.
dplyr provides `cummean()` for cumulative means.
Cumulative sums tend to come up the most in practice:
```{r}
x <- 1:10
cumsum(x)
```
If you need more complex rolling or sliding aggregates, try the [slider](https://davisvaughan.github.io/slider/) package by Davis Vaughan.
The following example illustrates some of its features.
```{r}
library(slider)
# Same as a cumulative sum
slide_vec(x, sum, .before = Inf)
# Sum the current element and the one before it
slide_vec(x, sum, .before = 1)
# Sum the current element and the two before and after it
slide_vec(x, sum, .before = 2, .after = 2)
# Only compute if the window is complete
slide_vec(x, sum, .before = 2, .after = 2, .complete = TRUE)
```
### Exercises
1. Explain in words what each line of the code used to generate @fig-prop-cancelled does.
2. What trigonometric functions does R provide?
Guess some names and look up the documentation.
Do they use degrees or radians?
3. Currently `dep_time` and `sched_dep_time` are convenient to look at, but hard to compute with because they're not really continuous numbers.
You can see the basic problem in this plot: there's a gap between each hour.
```{r}
flights |>
filter(month == 1, day == 1) |>
ggplot(aes(sched_dep_time, dep_delay)) +
geom_point()
```
Convert them to a more truthful representation of time (either fractional hours or minutes since midnight).
## General transformations
The following sections describe some general transformations which are often used with numeric vectors, but can be applied to all other column types.
### Fill in missing values {#sec-missing-values-numbers}
You can fill in missing values with dplyr's `coalesce()`:
```{r}
x <- c(1, NA, 5, NA, 10)
coalesce(x, 0)
```
`coalesce()` is vectorised, so you can find the non-missing values from a pair of vectors:
```{r}
y <- c(2, 3, 4, NA, 5)
coalesce(x, y)
```
### Ranks
dplyr provides a number of ranking functions inspired by SQL, but you should always start with `dplyr::min_rank()`.
It uses the typical method for dealing with ties, e.g. 1st, 2nd, 2nd, 4th.
```{r}
x <- c(1, 2, 2, 3, 4, NA)
min_rank(x)
```
Note that the smallest values get the lowest ranks; use `desc(x)` to give the largest values the smallest ranks:
```{r}
min_rank(desc(x))
```
If `min_rank()` doesn't do what you need, look at the variants `dplyr::row_number()`, `dplyr::dense_rank()`, `dplyr::percent_rank()`, and `dplyr::cume_dist()`.
See the documentation for details.
```{r}
df <- tibble(x = x)
df |>
mutate(
row_number = row_number(x),
dense_rank = dense_rank(x),
percent_rank = percent_rank(x),
cume_dist = cume_dist(x)
)
```
You can achieve many of the same results by picking the appropriate `ties.method` argument to base R's `rank()`; you'll probably also want to set `na.last = "keep"` to keep `NA`s as `NA`.
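For example, this base R call reproduces `min_rank(x)` from above:
```{r}
rank(x, ties.method = "min", na.last = "keep")
```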
`row_number()` can also be used without any arguments when inside a dplyr verb.
In this case, it'll give the number of the "current" row.
When combined with `%%` or `%/%` this can be a useful tool for dividing data into similarly sized groups:
```{r}
df <- tibble(x = runif(10))
df |>
mutate(
row0 = row_number() - 1,
three_groups = row0 %% 3,
three_in_each_group = row0 %/% 3,
)
```
### Offsets
`dplyr::lead()` and `dplyr::lag()` allow you to refer to the values just before or just after the "current" value.
They return a vector of the same length as the input, padded with `NA`s at the start or end:
```{r}
x <- c(2, 5, 11, 11, 19, 35)
lag(x)
lead(x)
```
- `x - lag(x)` gives you the difference between the current and previous value.
```{r}
x - lag(x)
```
- `x == lag(x)` tells you when the current value is the same as the previous one, so `x != lag(x)` tells you when it changes.
This is often useful combined with the grouping trick described in @sec-groups-from-logical.
```{r}
x == lag(x)
```
You can lead or lag by more than one position by using the second argument, `n`.
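For example:
```{r}
lag(x, n = 2)   # shift two positions back: NA NA 2 5 11 11
lead(x, n = 2)  # shift two positions forward: 11 11 19 35 NA NA
```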
### Exercises
1. Find the 10 most delayed flights using a ranking function.
How do you want to handle ties?
Carefully read the documentation for `min_rank()`.
2. Which plane (`tailnum`) has the worst on-time record?
3. What time of day should you fly if you want to avoid delays as much as possible?
4. What does `flights |> group_by(dest) |> filter(row_number() < 4)` do?
What does `flights |> group_by(dest) |> filter(row_number(dep_delay) < 4)` do?
5. For each destination, compute the total minutes of delay.
For each flight, compute the proportion of the total delay for its destination.
6. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave.
Using `lag()`, explore how the average flight delay for an hour is related to the average delay for the previous hour.
```{r}
#| results: false
flights |>
mutate(hour = dep_time %/% 100) |>
group_by(year, month, day, hour) |>
summarise(
dep_delay = mean(dep_delay, na.rm = TRUE),
n = n(),
.groups = "drop"
) |>
filter(n > 5)
```
7. Look at each destination.
Can you find flights that are suspiciously fast?
(i.e. flights that represent a potential data entry error).
Compute the air time of a flight relative to the shortest flight to that destination.
Which flights were most delayed in the air?
8. Find all destinations that are flown by at least two carriers.
Use those destinations to come up with a relative ranking of the carriers based on their performance for the same destination.
## Summaries
Just using the counts, means, and sums that we've introduced already can get you a long way, but R provides many other useful summary functions.
Here is a selection that you might find useful.
### Center
So far, we've mostly used `mean()` to summarize the center of a vector of values.
Because the mean is the sum divided by the count, it is sensitive to even just a few unusually high or low values.
An alternative is to use the `median()`, which finds a value that lies in the "middle" of the vector, i.e. 50% of the values are above it and 50% are below it.
Depending on the shape of the distribution of the variable you're interested in, mean or median might be a better measure of center.
For example, for symmetric distributions we generally report the mean while for skewed distributions we usually report the median.
@fig-mean-vs-median compares the mean vs. the median daily departure delay.
The median delay is always smaller than the mean delay because flights sometimes leave multiple hours late, but never leave multiple hours early.
```{r}
#| label: fig-mean-vs-median
#| fig-cap: >
#| A scatterplot showing the difference between summarising daily departure
#| delay with the median instead of the mean.
#| fig-alt: >
#| All points fall below a 45° line, meaning that the median delay is
#| always less than the mean delay. Most points are clustered in a
#| dense region of mean [0, 20] and median [0, 5]. As the mean delay
#| increases, the spread of the median also increases. There are two
#| outlying points with mean ~60, median ~50, and mean ~85, median ~55.
flights |>
group_by(year, month, day) |>
summarise(
mean = mean(dep_delay, na.rm = TRUE),
median = median(dep_delay, na.rm = TRUE),
n = n(),
.groups = "drop"
) |>
ggplot(aes(mean, median)) +
geom_abline(slope = 1, intercept = 0, color = "white", linewidth = 2) +
geom_point()
```
You might also wonder about the **mode**, or the most common value.
This is a summary that only works well for very simple cases (which is why you might have learned about it in high school), but it doesn't work well for many real datasets.
If the data is discrete, there may be multiple most common values, and if the data is continuous, there might be no most common value because every value is ever so slightly different.
For these reasons, the mode tends not to be used by statisticians and there's no mode function included in base R[^numbers-2].
[^numbers-2]: The `mode()` function does something quite different!
### Minimum, maximum, and quantiles {#sec-min-max-summary}
What if you're interested in locations other than the center?
`min()` and `max()` will give you the largest and smallest values.
Another powerful tool is `quantile()` which is a generalization of the median: `quantile(x, 0.25)` will find the value of `x` that is greater than 25% of the values, `quantile(x, 0.5)` is equivalent to the median, and `quantile(x, 0.95)` will find a value that's greater than 95% of the values.
For the flights data, you might want to look at the 95% quantile of delays rather than the maximum, because it will ignore the 5% of most delayed flights which can be quite extreme.
```{r}
flights |>
group_by(year, month, day) |>
summarise(
max = max(dep_delay, na.rm = TRUE),
q95 = quantile(dep_delay, 0.95, na.rm = TRUE),
.groups = "drop"
)
```
### Spread
Sometimes you're not so interested in where the bulk of the data lies, but how it is spread out.
Two commonly used summaries are the standard deviation, `sd(x)`, and the inter-quartile range, `IQR()`.
We won't explain `sd()` here since you're probably already familiar with it, but `IQR()` might be new --- it's `quantile(x, 0.75) - quantile(x, 0.25)` and gives you the range that contains the middle 50% of the data.
We can use this to reveal a small oddity in the flights data.
You might expect the spread of the distance between origin and destination to be zero, since airports are always in the same place.
But the code below makes it look like one airport, [EGE](https://en.wikipedia.org/wiki/Eagle_County_Regional_Airport), might have moved.
```{r}
flights |>
group_by(origin, dest) |>
summarise(
distance_iqr = IQR(distance),
n = n(),
.groups = "drop"
) |>
filter(distance_iqr > 0)
```
### Distributions
It's worth remembering that all of the summary statistics described above are a way of reducing the distribution down to a single number.
This means that they're fundamentally reductive, and if you pick the wrong summary, you can easily miss important differences between groups.
That's why it's always a good idea to visualize the distribution before committing to your summary statistics.
@fig-flights-dist shows the overall distribution of departure delays.
The distribution is so skewed that we have to zoom in to see the bulk of the data.
This suggests that the mean is unlikely to be a good summary and we might prefer the median instead.
```{r}
#| label: fig-flights-dist
#| fig-cap: >
#| The distribution of `dep_delay` appears highly skewed to the right in
#| both histograms.
#| fig-subcap: ["Histogram shows the full range of delays.",
#| "Histogram is zoomed in to show delays less than 2 hours."]
#| fig-alt: >
#| Two histograms of `dep_delay`. On the left, it's very hard to see
#| any pattern except that there's a very large spike around zero, the
#| bars rapidly decay in height, and for most of the plot, you can't
#| see any bars because they are too short to see. On the right,
#| where we've discarded delays of greater than two hours, we can
#| see that the spike occurs slightly below zero (i.e. most flights
#| leave a couple of minutes early), but there's still a very steep
#| decay after that.
#| layout-ncol: 2
#| fig-width: 4
#| fig-height: 2
flights |>
ggplot(aes(dep_delay)) +
geom_histogram(binwidth = 15)
flights |>
filter(dep_delay <= 120) |>
ggplot(aes(dep_delay)) +
geom_histogram(binwidth = 5)
```
It's also a good idea to check that distributions for subgroups resemble the whole.
@fig-flights-dist-daily overlays a frequency polygon for each day.
The distributions seem to follow a common pattern, suggesting it's fine to use the same summary for each day.
```{r}
#| label: fig-flights-dist-daily
#| fig-cap: >
#| 365 frequency polygons of `dep_delay`, one for each day. The frequency
#| polygons appear to have the same shape, suggesting that it's reasonable
#| to compare days by looking at just a few summary statistics.
#| fig-alt: >
#| The distribution of `dep_delay` is highly right skewed with a strong
#| peak slightly less than 0. The 365 frequency polygons are mostly
#| overlapping, forming a thick black band.
flights |>
filter(dep_delay < 120) |>
ggplot(aes(dep_delay, group = interaction(day, month))) +
geom_freqpoly(binwidth = 5, alpha = 1/5)
```
Don't be afraid to explore your own custom summaries specifically tailored for the data that you're working with.
In this case, that might mean separately summarizing the flights that left early vs. the flights that left late, or, given that the values are so heavily skewed, trying a log-transformation.
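For example, here's a minimal sketch of the first idea, splitting on the sign of `dep_delay` (the column names are just illustrative):
```{r}
flights |>
  group_by(year, month, day) |>
  summarise(
    # average delay among flights that left early vs. late
    mean_early = mean(dep_delay[dep_delay < 0], na.rm = TRUE),
    mean_late = mean(dep_delay[dep_delay > 0], na.rm = TRUE),
    n = n(),
    .groups = "drop"
  )
```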
Finally, don't forget what you learned in @sec-sample-size: whenever creating numerical summaries, it's a good idea to include the number of observations in each group.
### Positions
There's one final type of summary that's useful for numeric vectors, but also works with every other type of value: extracting a value at a specific position.
You can do this with the base R `[` function, but we're not going to cover it until @sec-vector-subsetting, because it's a very powerful and general function.
For now we'll introduce three specialized functions that you can use to extract values at a specified position: `first(x)`, `last(x)`, and `nth(x, n)`.
For example, we can find the first and last departure for each day:
```{r}
flights |>
group_by(year, month, day) |>
summarise(
first_dep = first(dep_time),
fifth_dep = nth(dep_time, 5),
last_dep = last(dep_time)
)
```
(These functions currently lack an `na.rm` argument but will hopefully be fixed by the time you read this book: <https://github.com/tidyverse/dplyr/issues/6242>).
If you're familiar with `[`, you might wonder if you ever need these functions.
There are two main reasons: the `default` argument and the `order_by` argument.
`default` allows you to set a default value that's used if the requested position doesn't exist, e.g. you're trying to get the 3rd element from a two-element group.
`order_by` lets you locally override the existing ordering of the rows, so you can get the value at a given position in the ordering defined by `order_by`.
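A small sketch of both arguments on a toy vector:
```{r}
vals <- c(10, 20)
nth(vals, 3)                        # NA: there is no third element
nth(vals, 3, default = 0)           # 0 instead of NA
first(vals, order_by = desc(vals))  # 20: first value in the descending ordering
```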
Extracting values at positions is complementary to filtering on ranks.
Filtering gives you all variables, with each observation in a separate row:
```{r}
flights |>
group_by(year, month, day) |>
mutate(r = min_rank(desc(sched_dep_time))) |>
filter(r %in% c(1, max(r)))
```
### With `mutate()`
As the names suggest, the summary functions are typically paired with `summarise()`.
However, because of the recycling rules we discussed in @sec-scalars-and-recycling-rules they can also be usefully paired with `mutate()`, particularly when you want to do some sort of group standardization.
For example (a short sketch follows this list):
- `x / sum(x)` calculates the proportion of a total.
- `(x - mean(x)) / sd(x)` computes a Z-score (standardized to mean 0 and sd 1).
- `x / first(x)` computes an index based on the first observation.
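A minimal sketch with the flights data (the new column names are just illustrative):
```{r}
flights |>
  group_by(dest) |>
  mutate(
    # proportion of each destination's total arrival delay
    prop_delay = arr_delay / sum(arr_delay, na.rm = TRUE),
    # Z-score of arrival delay within each destination
    delay_z = (arr_delay - mean(arr_delay, na.rm = TRUE)) / sd(arr_delay, na.rm = TRUE),
    .keep = "used"
  )
```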
### Exercises
1. Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights.
Consider the following scenarios:
- A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.
- A flight is always 10 minutes late.
- A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.
- 99% of the time a flight is on time. 1% of the time it's 2 hours late.
Which do you think is more important: arrival delay or departure delay?
2. Which destinations show the greatest variation in air speed?
3. Create a plot to further explore the adventures of EGE.
Can you find any evidence that the airport moved locations?