# Tidy data
## Introduction
> "Happy families are all alike; every unhappy family is unhappy in its
> own way." --– Leo Tolstoy
> "Tidy datasets are all alike, but every messy dataset is messy in its
> own way." --– Hadley Wickham
In this chapter, you will learn a consistent way to organise your data in R, an organisation called __tidy data__. Getting your data into this format requires some upfront work, but that work pays off in the long term. Once you have tidy data and the tidy tools provided by packages in the tidyverse, you will spend much less time munging data from one representation to another, allowing you to spend more time on the analytic questions at hand.
This chapter will give you a practical introduction to tidy data and the accompanying tools in the __tidyr__ package. If you'd like to learn more about the underlying theory, you might enjoy the *Tidy Data* paper published in the Journal of Statistical Software, <http://www.jstatsoft.org/v59/i10/paper>.
### Prerequisites
In this chapter we'll focus on tidyr, a package that provides a bunch of tools to help tidy up your messy datasets. tidyr is a member of the core tidyverse.
```{r setup, message = FALSE}
library(tidyverse)
```
## Tidy data
You can represent the same underlying data in multiple ways. The example below shows the same data organised in four different ways. Each dataset shows the same values of four variables *country*, *year*, *population*, and *cases*, but each dataset organises the values in a different way.
```{r}
table1
table2
table3
# Spread across two tibbles
table4a # cases
table4b # population
```
These are all representations of the same underlying data, but they are not equally easy to use. One dataset, the tidy dataset, will be much easier to work with inside the tidyverse.
There are three interrelated rules which make a dataset tidy:
1. Each variable must have its own column.
1. Each observation must have its own row.
1. Each value must have its own cell.
Figure \@ref(fig:tidy-structure) shows the rules visually.
```{r tidy-structure, echo = FALSE, out.width = "100%", fig.cap = "Following three rules makes a dataset tidy: variables are in columns, observations are in rows, and values are in cells."}
knitr::include_graphics("images/tidy-1.png")
```
These three rules are interrelated because it's impossible to only satisfy two of the three. That interrelationship leads to an even simpler set of practical instructions:
1. Put each dataset in a tibble.
1. Put each variable in a column.
In this example, only `table1` is tidy. It's the only representation where each column is a variable.
Why ensure that your data is tidy? There are two main advantages:
1. There's a general advantage to picking one consistent way of storing
data. If you have a consistent data structure, it's easier to learn the
tools that work with it because they have an underlying uniformity.
1. There's a specific advantage to placing variables in columns because
it allows R's vectorised nature to shine. As you learned in
[mutate](#mutate-funs) and [summary functions](#summary-funs), most
built-in R functions work with vectors of values. That makes transforming
tidy data feel particularly natural.
dplyr, ggplot2, and all the other packages in the tidyverse are designed to work with tidy data. Here are a couple of small examples showing how you might work with `table1`.
```{r, out.width = "50%"}
# Compute rate per 10,000
table1 %>%
mutate(rate = cases / population * 10000)
# Compute cases per year
table1 %>%
count(year, wt = cases)
# Visualise changes over time
library(ggplot2)
ggplot(table1, aes(year, cases)) +
geom_line(aes(group = country), colour = "grey50") +
geom_point(aes(colour = country))
```
### Exercises
1. Using prose, describe how the variables and observations are organised in
each of the sample tables.
1. Compute the `rate` for `table2`, and `table4a` + `table4b`.
You will need to perform four operations:
1. Extract the number of TB cases per country per year.
1. Extract the matching population per country per year.
1. Divide cases by population, and multiply by 10000.
1. Store back in the appropriate place.
Which representation is easiest to work with? Which is hardest? Why?
1. Recreate the plot showing change in cases over time using `table2`
instead of `table1`. What do you need to do first?
## Spreading and gathering
The principles of tidy data seem so obvious that you might wonder if you'll ever encounter a dataset that isn't tidy. Unfortunately, however, most data that you will encounter will be untidy. There are two main reasons:
1. Most people aren't familiar with the principles of tidy data, and it's hard
to derive them yourself unless you spend a _lot_ of time working with data.
1. Data is often organised to facilitate some use other than analysis. For
example, data is often organised to make entry as easy as possible.
This means for most real analyses, you'll need to do some tidying. The first step is always to figure out what the variables and observations are. Sometimes this is easy; other times you'll need to consult with the people who originally generated the data.
The second step is to resolve one of two common problems:
1. One variable might be spread across multiple columns.
1. One observation might be scattered across multiple rows.
Typically a dataset will only suffer from one of these problems; it'll only suffer from both if you're really unlucky! To fix these problems, you'll need the two most important functions in tidyr: `gather()` and `spread()`.
### Gathering
A common problem is a dataset where some of the column names are not names of variables, but _values_ of a variable. Take `table4a`: the column names `1999` and `2000` represent values of the `year` variable, and each row represents two observations, not one.
```{r}
table4a
```
To tidy a dataset like this, we need to __gather__ those columns into a new pair of variables. To describe that operation we need three parameters:
* The set of columns that represent values, not variables. In this example,
those are the columns `1999` and `2000`.
* The name of the variable whose values form the column names. I call that
the `key`, and here it is `year`.
* The name of the variable whose values are spread over the cells. I call
that `value`, and here it's the number of `cases`.
Together those parameters generate the call to `gather()`:
```{r}
table4a %>%
gather(`1999`, `2000`, key = "year", value = "cases")
```
The columns to gather are specified with `dplyr::select()` style notation. Here there are only two columns, so we list them individually. Note that "1999" and "2000" are non-syntactic names (because they don't start with a letter) so we have to surround them in backticks. To refresh your memory of the other ways to select columns, see [select](#select).
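Because the specification uses select semantics, there are other, equivalent ways to describe the same two columns. The sketch below is illustrative only (not run, and not part of the original example); it assumes the standard tidyselect helpers that `dplyr::select()` understands:
```{r, eval = FALSE}
# Equivalent, illustrative ways to pick the columns to gather:
# negative selection gathers everything except `country`, and
# num_range() builds the names "1999" and "2000" from a numeric range.
table4a %>%
  gather(-country, key = "year", value = "cases")

table4a %>%
  gather(num_range("", 1999:2000), key = "year", value = "cases")
```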
```{r tidy-gather, echo = FALSE, out.width = "100%", fig.cap = "Gathering `table4` into a tidy form."}
knitr::include_graphics("images/tidy-9.png")
```
In the final result, the gathered columns are dropped, and we get new `key` and `value` columns. Otherwise, the relationships between the original variables are preserved. Visually, this is shown in Figure \@ref(fig:tidy-gather). We can use `gather()` to tidy `table4b` in a similar fashion. The only difference is the variable stored in the cell values:
```{r}
table4b %>%
gather(`1999`, `2000`, key = "year", value = "population")
```
To combine the tidied versions of `table4a` and `table4b` into a single tibble, we need to use `dplyr::left_join()`, which you'll learn about in [relational data].
```{r}
tidy4a <- table4a %>%
gather(`1999`, `2000`, key = "year", value = "cases")
tidy4b <- table4b %>%
gather(`1999`, `2000`, key = "year", value = "population")
left_join(tidy4a, tidy4b)
```
### Spreading
Spreading is the opposite of gathering. You use it when an observation is scattered across multiple rows. For example, take `table2`: an observation is a country in a year, but each observation is spread across two rows.
```{r}
table2
```
To tidy this up, we first analyse the representation in a similar way to `gather()`. This time, however, we only need two parameters:
* The column that contains variable names, the `key` column. Here, it's
`type`.
* The column that contains values from multiple variables, the `value`
column. Here it's `count`.
Once we've figured that out, we can use `spread()`, as shown programmatically below, and visually in Figure \@ref(fig:tidy-spread).
```{r}
table2 %>%
spread(key = type, value = count)
```
```{r tidy-spread, echo = FALSE, out.width = "100%", fig.cap = "Spreading `table2` makes it tidy"}
knitr::include_graphics("images/tidy-8.png")
```
As you might have guessed from the common `key` and `value` arguments, `spread()` and `gather()` are complements. `gather()` makes wide tables narrower and longer; `spread()` makes long tables shorter and wider.
### Exercises
1. Why are `gather()` and `spread()` not perfectly symmetrical?
Carefully consider the following example:
```{r, eval = FALSE}
stocks <- tibble(
year = c(2015, 2015, 2016, 2016),
half = c( 1, 2, 1, 2),
return = c(1.88, 0.59, 0.92, 0.17)
)
stocks %>%
spread(year, return) %>%
gather("year", "return", `2015`:`2016`)
```
(Hint: look at the variable types and think about column _names_.)
Both `spread()` and `gather()` have a `convert` argument. What does it
do?
1. Why does this code fail?
```{r, error = TRUE}
table4a %>%
gather(1999, 2000, key = "year", value = "cases")
```
1. Why does spreading this tibble fail? How could you add a new column to fix
the problem?
```{r}
people <- tribble(
~name, ~key, ~value,
#-----------------|--------|------
"Phillip Woods", "age", 45,
"Phillip Woods", "height", 186,
"Phillip Woods", "age", 50,
"Jessica Cordero", "age", 37,
"Jessica Cordero", "height", 156
)
```
1. Tidy the simple tibble below. Do you need to spread or gather it?
What are the variables?
```{r}
preg <- tribble(
~pregnant, ~male, ~female,
"yes", NA, 10,
"no", 20, 12
)
```
## Separating and uniting
So far you've learned how to tidy `table2` and `table4`, but not `table3`. `table3` has a different problem: we have one column (`rate`) that contains two variables (`cases` and `population`). To fix this problem, we'll need the `separate()` function. You'll also learn about the complement of `separate()`: `unite()`, which you use if a single variable is spread across multiple columns.
### Separate
`separate()` pulls apart one column into multiple columns, by splitting wherever a separator character appears. Take `table3`:
```{r}
table3
```
The `rate` column contains both `cases` and `population` variables, and we need to split it into two variables. `separate()` takes the name of the column to separate, and the names of the columns to separate into, as shown in Figure \@ref(fig:tidy-separate) and the code below.
```{r}
table3 %>%
separate(rate, into = c("cases", "population"))
```
```{r tidy-separate, echo = FALSE, out.width = "75%", fig.cap = "Separating `table3` makes it tidy"}
knitr::include_graphics("images/tidy-17.png")
```
By default, `separate()` will split values wherever it sees a non-alphanumeric character (i.e. a character that isn't a number or letter). For example, in the code above, `separate()` split the values of `rate` at the forward slash characters. If you wish to use a specific character to separate a column, you can pass the character to the `sep` argument of `separate()`. For example, we could rewrite the code above as:
```{r eval = FALSE}
table3 %>%
separate(rate, into = c("cases", "population"), sep = "/")
```
(Formally, `sep` is a regular expression, which you'll learn more about in [strings].)
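As a small, hedged illustration of that point (not from the original text), any regular expression works as `sep`; here the pattern matches a run of non-digit characters, which in `table3` is just the forward slash:
```{r, eval = FALSE}
# sep is treated as a regular expression: split wherever one or more
# non-digit characters appear (equivalent here to sep = "/")
table3 %>%
  separate(rate, into = c("cases", "population"), sep = "[^0-9]+")
```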
Look carefully at the column types: you'll notice that `cases` and `population` are character columns. This is the default behaviour in `separate()`: it leaves the type of the column as is. Here, however, it's not very useful as those really are numbers. We can ask `separate()` to try and convert to better types using `convert = TRUE`:
```{r}
table3 %>%
separate(rate, into = c("cases", "population"), convert = TRUE)
```
You can also pass a vector of integers to `sep`. `separate()` will interpret the integers as positions to split at. Positive values start at 1 on the far-left of the strings; negative values start at -1 on the far-right of the strings. When using integers to separate strings, the length of `sep` should be one less than the number of names in `into`.
You can use this arrangement to separate the last two digits of each year. This makes the data less tidy, but it is useful in other cases, as you'll see in a little bit.
```{r}
table3 %>%
separate(year, into = c("century", "year"), sep = 2)
```
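To see how a *vector* of positions behaves, here is a hypothetical sketch (the `dates` tibble is invented for illustration): two cut points, after the fourth and sixth characters, produce the three columns named in `into`:
```{r, eval = FALSE}
# Hypothetical data: an eight-character date code split at positions 4 and 6,
# so length(sep) == 2 is one less than the three names in `into`
dates <- tibble(code = c("20150131", "20160229"))
dates %>%
  separate(code, into = c("year", "month", "day"), sep = c(4, 6))
```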
### Unite
`unite()` is the inverse of `separate()`: it combines multiple columns into a single column. You'll need it much less frequently than `separate()`, but it's still a useful tool to have in your back pocket.
```{r tidy-unite, echo = FALSE, out.width = "75%", fig.cap = "Uniting `table5` makes it tidy"}
knitr::include_graphics("images/tidy-18.png")
```
We can use `unite()` to rejoin the *century* and *year* columns that we created in the last example. That data is saved as `tidyr::table5`. `unite()` takes a data frame, the name of the new variable to create, and a set of columns to combine, again specified in `dplyr::select()` style:
```{r}
table5 %>%
unite(new, century, year)
```
In this case we also need to use the `sep` argument. The default will place an underscore (`_`) between the values from different columns. Here we don't want any separator so we use `""`:
```{r}
table5 %>%
unite(new, century, year, sep = "")
```
### Exercises
1. What do the `extra` and `fill` arguments do in `separate()`?
Experiment with the various options for the following two toy datasets.
```{r, eval = FALSE}
tibble(x = c("a,b,c", "d,e,f,g", "h,i,j")) %>%
separate(x, c("one", "two", "three"))
tibble(x = c("a,b,c", "d,e", "f,g,i")) %>%
separate(x, c("one", "two", "three"))
```
1. Both `unite()` and `separate()` have a `remove` argument. What does it
do? Why would you set it to `FALSE`?
1. Compare and contrast `separate()` and `extract()`. Why are there
three variations of separation (by position, by separator, and with
groups), but only one unite?
## Missing values
Changing the representation of a dataset brings up an important subtlety of missing values. Surprisingly, a value can be missing in one of two possible ways:
* __Explicitly__, i.e. flagged with `NA`.
* __Implicitly__, i.e. simply not present in the data.
Let's illustrate this idea with a very simple data set:
```{r}
stocks <- tibble(
year = c(2015, 2015, 2015, 2015, 2016, 2016, 2016),
qtr = c( 1, 2, 3, 4, 2, 3, 4),
return = c(1.88, 0.59, 0.35, NA, 0.92, 0.17, 2.66)
)
```
There are two missing values in this dataset:
* The return for the fourth quarter of 2015 is explicitly missing, because
the cell where its value should be instead contains `NA`.
* The return for the first quarter of 2016 is implicitly missing, because it
simply does not appear in the dataset.
One way to think about the difference is with this Zen-like koan: An explicit missing value is the presence of an absence; an implicit missing value is the absence of a presence.
The way that a dataset is represented can make implicit values explicit. For example, we can make the implicit missing value explicit by putting years in the columns:
```{r}
stocks %>%
spread(year, return)
```
Because these explicit missing values may not be important in other representations of the data, you can set `na.rm = TRUE` in `gather()` to turn explicit missing values implicit:
```{r}
stocks %>%
spread(year, return) %>%
gather(year, return, `2015`:`2016`, na.rm = TRUE)
```
Another important tool for making missing values explicit in tidy data is `complete()`:
```{r}
stocks %>%
complete(year, qtr)
```
`complete()` takes a set of columns, and finds all unique combinations. It then ensures the original dataset contains all those values, filling in explicit `NA`s where necessary.
There's one other important tool that you should know for working with missing values. Sometimes when a data source has primarily been used for data entry, missing values indicate that the previous value should be carried forward:
```{r}
treatment <- tribble(
~ person, ~ treatment, ~response,
"Derrick Whitmore", 1, 7,
NA, 2, 10,
NA, 3, 9,
"Katherine Burke", 1, 4
)
```
You can fill in these missing values with `fill()`. It takes a set of columns where you want missing values to be replaced by the most recent non-missing value (sometimes called last observation carried forward).
```{r}
treatment %>%
fill(person)
```
### Exercises
1. Compare and contrast the `fill` arguments to `spread()` and `complete()`.
1. What does the direction argument to `fill()` do?
## Case Study
To finish off the chapter, let's pull together everything you've learned to tackle a realistic data tidying problem. The `tidyr::who` dataset contains tuberculosis (TB) cases broken down by year, country, age, gender, and diagnosis method. The data comes from the *2014 World Health Organization Global Tuberculosis Report*, available at <http://www.who.int/tb/country/data/download/en/>.
There's a wealth of epidemiological information in this dataset, but it's challenging to work with the data in the form that it's provided:
```{r}
who
```
This is a very typical real-life example dataset. It contains redundant columns, odd variable codes, and many missing values. In short, `who` is messy, and we'll need multiple steps to tidy it. Like dplyr, tidyr is designed so that each function does one thing well. That means in real-life situations you'll usually need to string together multiple verbs into a pipeline.
The best place to start is almost always to gather together the columns that are not variables. Let's have a look at what we've got:
* It looks like `country`, `iso2`, and `iso3` are three variables that
redundantly specify the country.
* `year` is clearly also a variable.
* We don't know what all the other columns are yet, but given the structure
in the variable names (e.g. `new_sp_m014`, `new_ep_m014`, `new_ep_f014`)
these are likely to be values, not variables.
So we need to gather together all the columns from `new_sp_m014` to `newrel_f65`. We don't know what those values represent yet, so we'll give them the generic name `"key"`. We know the cells represent the count of cases, so we'll use the variable `cases`. There are a lot of missing values in the current representation, so for now we'll use `na.rm` just so we can focus on the values that are present.
```{r}
who1 <- who %>%
gather(new_sp_m014:newrel_f65, key = "key", value = "cases", na.rm = TRUE)
who1
```
We can get some hint of the structure of the values in the new `key` column by counting them:
```{r}
who1 %>%
count(key)
```
You might be able to parse this out by yourself with a little thought and some experimentation, but luckily we have the data dictionary handy. It tells us:
1. The first three letters of each column denote whether the column
contains new or old cases of TB. In this dataset, each column contains
new cases.
1. The next two letters describe the type of TB:
* `rel` stands for cases of relapse
* `ep` stands for cases of extrapulmonary TB
* `sn` stands for cases of pulmonary TB that could not be diagnosed by
a pulmonary smear (smear negative)
* `sp` stands for cases of pulmonary TB that could be diagnosed by
a pulmonary smear (smear positive)
3. The sixth letter gives the sex of TB patients. The dataset groups
cases by males (`m`) and females (`f`).
4. The remaining numbers give the age group. The dataset groups cases into
seven age groups:
* `014` = 0 -- 14 years old
* `1524` = 15 -- 24 years old
* `2534` = 25 -- 34 years old
* `3544` = 35 -- 44 years old
* `4554` = 45 -- 54 years old
* `5564` = 55 -- 64 years old
* `65` = 65 or older
We need to make a minor fix to the format of the column names: unfortunately the names are slightly inconsistent because instead of `new_rel` we have `newrel` (it's hard to spot this here but if you don't fix it we'll get errors in subsequent steps). You'll learn about `str_replace()` in [strings], but the basic idea is pretty simple: replace the characters "newrel" with "new_rel". This makes all variable names consistent.
```{r}
who2 <- who1 %>%
mutate(key = stringr::str_replace(key, "newrel", "new_rel"))
who2
```
We can separate the values in each code with two passes of `separate()`. The first pass will split the codes at each underscore.
```{r}
who3 <- who2 %>%
separate(key, c("new", "type", "sexage"), sep = "_")
who3
```
Then we might as well drop the `new` column because it's constant in this dataset. While we're dropping columns, let's also drop `iso2` and `iso3` since they're redundant.
```{r}
who3 %>%
count(new)
who4 <- who3 %>%
select(-new, -iso2, -iso3)
```
Next we'll separate `sexage` into `sex` and `age` by splitting after the first character:
```{r}
who5 <- who4 %>%
separate(sexage, c("sex", "age"), sep = 1)
who5
```
The `who` dataset is now tidy!
I've shown you the code a piece at a time, assigning each interim result to a new variable. This typically isn't how you'd work interactively. Instead, you'd gradually build up a complex pipe:
```{r, results = "hide"}
who %>%
gather(key, value, new_sp_m014:newrel_f65, na.rm = TRUE) %>%
mutate(key = stringr::str_replace(key, "newrel", "new_rel")) %>%
separate(key, c("new", "var", "sexage")) %>%
select(-new, -iso2, -iso3) %>%
separate(sexage, c("sex", "age"), sep = 1)
```
### Exercises
1. In this case study I set `na.rm = TRUE` just to make it easier to
check that we had the correct values. Is this reasonable? Think about
how missing values are represented in this dataset. Are there implicit
missing values? What's the difference between an `NA` and zero?
1. What happens if you neglect the `mutate()` step?
(`mutate(key = stringr::str_replace(key, "newrel", "new_rel"))`)
1. I claimed that `iso2` and `iso3` were redundant with `country`.
Confirm this claim.
1. For each country, year, and sex compute the total number of cases of
TB. Make an informative visualisation of the data.
## Non-tidy data
Before we continue on to other topics, it's worth talking briefly about non-tidy data. Earlier in the chapter, I used the pejorative term "messy" to refer to non-tidy data. That's an oversimplification: there are lots of useful and well-founded data structures that are not tidy data. There are two main reasons to use other data structures:
* Alternative representations may have substantial performance or space
advantages.
* Specialised fields have evolved their own conventions for storing data
that may be quite different to the conventions of tidy data.
Either of these reasons means you'll need something other than a tibble (or data frame). If your data does fit naturally into a rectangular structure composed of observations and variables, I think tidy data should be your default choice. But there are good reasons to use other structures; tidy data is not the only way.
If you'd like to learn more about non-tidy data, I'd highly recommend this thoughtful blog post by Jeff Leek: <http://simplystatistics.org/2016/02/17/non-tidy-data/>