# Data transformation {#transform}
## Introduction
Visualization is an important tool for insight generation, but it is rare that you get the data in exactly the form you need. You will often need to create new variables or summaries, rename variables, or reorder observations to make the data easier to work with. You'll learn how to do all that (and more!) in this chapter, which will teach you how to transform your data using the pandas package and a new dataset on flights departing New York City in 2013.
### Prerequisites
In this chapter we're going to focus on how to use the pandas package, the foundational package for data science in Python. We'll illustrate the key ideas using data from the nycflights13 R package, and use Altair to help us understand the data. We will also need two additional Python packages for mathematical and statistical functions: [NumPy](https://numpy.org/) and [SciPy](https://www.scipy.org/scipylib/index.html). Notice that the `from ____ import ____` statement follows the [SciPy guidance](https://docs.scipy.org/doc/scipy/reference/api.html) for importing functions from submodules; we will then call SciPy functions using the `stats.<FUNCTION>` structure.
```{python setup, cache=FALSE, message = FALSE, eval=TRUE}
import pandas as pd
import altair as alt
import numpy as np
from scipy import stats
flights_url = "https://github.com/byuidatascience/data4python4ds/raw/master/data-raw/flights/flights.csv"
flights = pd.read_csv(flights_url)
flights['time_hour'] = pd.to_datetime(flights.time_hour, format = "%Y-%m-%d %H:%M:%S")
```
### nycflights13
To explore the basic data manipulation verbs of pandas, we'll use `flights`. This data frame contains all `r format(nrow(nycflights13::flights), big.mark = ",")` flights that departed from New York City in 2013. The data comes from the US [Bureau of Transportation Statistics](http://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=120&Link=0), and is [documented here](https://github.com/byuidatascience/data4python4ds/blob/master/data.md).
```{python, echo=FALSE}
flights
```
You might notice that this data frame does not print in its entirety like other data frames you might have seen: it shows only the first and last few rows and only the columns that fit on one screen. (To see the whole dataset, you can open the variable view in your interactive Python window and double-click on the `flights` object, which will open the dataset in the VS Code data viewer.)
Using `flights.dtypes` will show you the type of each column:
```{python, echo=FALSE}
flights.dtypes
```
* `int64` stands for integers.
* `float64` stands for doubles, or real numbers.
* `object` stands for character vectors, or strings.
* `datetime64` stands for date-times (a date + a time) and dates. You can read [more about pandas datetime tools](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html)
There are three other common types of variables that aren't used in this dataset but you'll encounter later in the book:
* `bool` stands for logical, vectors that contain only `True` or `False`.
* `category` stands for factors, which pandas uses to represent categorical variables
with fixed possible values.
Using `flights.info()` also provides a printout of the data types along with other useful information about your pandas data frame.
```{python}
flights.info()
```
### pandas data manipulation basics
<!-- https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html -->
<!-- https://www.dataquest.io/blog/pandas-cheat-sheet/ -->
<!-- https://medium.com/dunder-data/minimally-sufficient-pandas-a8e67f2a2428 -->
In this chapter you are going to learn five key pandas functions, or object methods. Object methods are actions an object can perform; for example, a pandas data frame knows how to tell you its shape, and it knows how to concatenate two data frames together. The way we tell an object to do something is the 'dot operator', as in `flights.head()`. We will refer to these object methods as functions or methods. Below are the five methods that allow you to solve the vast majority of your data manipulation challenges:
* Pick observations by their values (`query()`).
* Reorder the rows (`sort_values()`).
* Pick variables by their names (`filter()`).
* Create new variables with functions of existing variables (`assign()`).
* Collapse many values down to a single summary (`groupby()`).
The pandas package can handle all of the same functionality as dplyr in R. You can read the [pandas mapping guide](https://pandas.pydata.org/docs/getting_started/comparison/comparison_with_r.html) and [this Towards Data Science article](https://towardsdatascience.com/tidying-up-pandas-4572bfa38776) for more details on the following brief table.
```{r, echo=FALSE}
library(tidyverse)
dat <- tibble(
  `R dplyr function` = c('`filter()`', '`arrange()`', '`select()`',
                         '`rename()`', '`mutate()`', '`group_by()`',
                         '`summarise()`'),
  `Python pandas function` = c('`query()`', '`sort_values()`',
                               '`filter()` or `loc[]`', '`rename()`',
                               '`assign()` (see note)', '`groupby()`',
                               '`agg()`'))
knitr::kable(dat, caption = "Comparable functions in R (dplyr) and Python (pandas)")
```
**Note:** The `dplyr::mutate()` function works similarly to `assign()` in pandas on data frames. But you cannot use `assign()` on a grouped data frame in pandas the way you would use `dplyr::mutate()` on a grouped object. In that case you would use `transform()`, and even then the functionality is not quite the same.
The `groupby()` changes the scope of each function from operating on the entire dataset to operating on it group-by-group. These functions provide the verbs for a language of data manipulation.
All verbs work similarly:
1. The first argument is a pandas DataFrame.
1. The subsequent methods describe what to do with the data frame.
1. The result is a new data frame.
Together these properties make it easy to chain together multiple simple steps to achieve a complex result. Let's dive in and see how these verbs work.
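Here is a quick preview of what such a chain looks like; don't worry about the details yet, since each of these methods is covered below (a minimal sketch, and the summary name `mean_speed` is just illustrative):
```{python, eval = FALSE}
(flights
    .query('month == 1')                                      # pick rows
    .assign(speed = lambda x: x.distance / x.air_time * 60)   # add a column
    .groupby('dest')                                          # group
    .agg(mean_speed = ('speed', 'mean'))                      # summarise per group
    .sort_values('mean_speed', ascending = False))            # reorder
```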
## Filter rows with `.query()`
`.query()` allows you to subset observations based on their values. It takes a string containing a boolean expression, evaluates that expression for each row, and keeps the rows where the result is `True`. For example, we can select all flights on January 1st with:
```{python}
flights.query('month == 1 & day == 1')
```
The previous expression is equivalent to `flights[(flights.month == 1) & (flights.day == 1)]`
When you run that line of code, pandas executes the filtering operation and returns a new data frame. pandas functions usually don't modify their inputs, so if you want to save the result, you'll need to use the assignment operator, `=`:
```{python}
jan1 = flights.query('month == 1 & day == 1')
```
Interactive Python either prints out the results, or saves them to a variable.
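If you want to both save and see the result, assign it to a variable and then evaluate the variable name on its own line (a small sketch):
```{python, eval = FALSE}
jan1 = flights.query('month == 1 & day == 1')   # saves the result, prints nothing
jan1                                            # evaluating the name prints it
```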
### Comparisons
To use filtering effectively, you have to know how to select the observations that you want using the comparison operators. Python provides the standard suite: `>`, `>=`, `<`, `<=`, `!=` (not equal), and `==` (equal).
When you're starting out with Python, the easiest mistake to make is to use `=` instead of `==` when testing for equality. When this happens you'll get an error:
```{python, error = TRUE}
flights.query('month = 1')
```
There's another common problem you might encounter when using `==`: floating point numbers. The following result might surprise you!
```{python}
np.sqrt(2) ** 2 == 2
1 / 49 * 49 == 1
```
Computers use finite precision arithmetic (they obviously can't store an infinite number of digits!) so remember that every number you see is an approximation. Instead of relying on `==`, use `np.isclose()`:
```{python}
np.isclose(np.sqrt(2) ** 2, 2)
np.isclose(1 / 49 * 49, 1)
```
### Logical operators
Multiple arguments to `query()` are combined with "and": every expression must be true in order for a row to be included in the output. For other types of combinations, you'll need to use Boolean operators yourself: `&` is "and", `|` is "or", and `~` is "not" (inside a `query()` string you can also write `and`, `or`, and `not`). Figure \@ref(fig:bool-ops) shows the complete set of Boolean operations.
```{r bool-ops, echo = FALSE, fig.cap = "Complete set of boolean operations. `x` is the left-hand circle, `y` is the right-hand circle, and the shaded region show which parts each operator selects."}
knitr::include_graphics("diagrams/transform-logical.png")
```
The following code finds all flights that departed in November or December:
```{python, eval = FALSE}
flights.query('month == 11 | month == 12')
```
The order of operations doesn't work like English. You can't write `flights.query('month == (11 | 12)')`, which you might literally translate into "find all flights that departed in November or December". Instead it computes `11 | 12` first: on integers, `|` is a bitwise "or", so that expression evaluates to `15` and the query looks for flights in a month 15 that doesn't exist. This is quite confusing!
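You can see the problem by evaluating the expression on its own:
```{python}
# on integers, | is a bitwise "or", not a logical one
11 | 12
```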
A useful short-hand for this problem is `x in y`. This will select every row where `x` is one of the values in `y`. We could use it to rewrite the code above:
```{python, eval = FALSE}
nov_dec = flights.query('month in [11, 12]')
```
Sometimes you can simplify complicated subsetting by remembering De Morgan's law: `~(x & y)` is the same as `~x | ~y`, and `~(x | y)` is the same as `~x & ~y`. For example, if you wanted to find flights that weren't delayed (on arrival or departure) by more than two hours, you could use either of the following two filters:
```{python, eval = FALSE}
flights.query('~(arr_delay > 120 | dep_delay > 120)')
flights.query('arr_delay <= 120 & dep_delay <= 120')
```
<!-- As well as `&` and `|`, Python also has `&&` and `||`. Don't use them here! You'll learn when you should use them in [conditional execution]. -->
Whenever you start using complicated, multipart expressions in `.query()`, consider making them explicit variables instead. That makes it much easier to check your work. You'll learn how to create new variables shortly.
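For example, here is a minimal sketch of that approach; the column names `big_arr_delay` and `big_dep_delay` are just illustrative:
```{python, eval = FALSE}
# store the pieces of the condition as explicit boolean columns first
flights_check = flights.assign(
    big_arr_delay = lambda x: x.arr_delay > 120,
    big_dep_delay = lambda x: x.dep_delay > 120
)
# then the query reads almost like English
flights_check.query('~(big_arr_delay | big_dep_delay)')
```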
### Missing values
One important feature of pandas in Python that can make comparison tricky is missing values, or `NA`s ("not availables"). `NA` represents an unknown value, so missing values are "contagious": almost any operation involving an unknown value will also be unknown.
```{python}
np.nan + 10
np.nan / 2
```
The most confusing results are the comparisons: they always return `False`. The logic for this result [is explained on Stack Overflow](https://stackoverflow.com/questions/1565164/what-is-the-rationale-for-all-comparisons-returning-false-for-ieee754-nan-values). The [pandas missing data guide](https://pandas.pydata.org/pandas-docs/dev/user_guide/missing_data.html) is a helpful read.
```{python}
np.nan > 5
10 == np.nan
np.nan == np.nan
```
It's easiest to understand why this is true with a bit more context:
```{python}
# Let x be Mary's age. We don't know how old she is.
x = np.nan
# Let y be John's age. We don't know how old he is.
y = np.nan
# Are John and Mary the same age?
x == y
# Illogical comparisons are False.
```
The floating point standard does provide one way to detect `np.nan` values in your code: `np.nan != np.nan` returns `True`. Once again you can [read the rationale for this decision](https://stackoverflow.com/questions/1565164/what-is-the-rationale-for-all-comparisons-returning-false-for-ieee754-nan-values). NumPy also provides `np.isnan()` (and the standard library has `math.isnan()`) to make this check more straightforward in your code.
Pandas uses the `nan` structure in Python to identify __NA__ or 'missing' values. If you want to determine if a value is missing, use `pd.isna()`:
```{python}
pd.isna(x)
```
`.query()` only includes rows where the condition is `True`; it excludes both `False` and missing (`NaN`) values.
```{python}
df = pd.DataFrame({'x': [1, np.nan, 3]})
df.query('x > 1')
```
If you want to preserve missing values, ask for them explicitly using the trick mentioned in the previous paragraph or by using `pd.isna()` with the symbolic reference `@` in your condition:
```{python}
df.query('x != x | x > 1')
df.query('@pd.isna(x) | x > 1')
```
### Exercises
1. Find all flights that
A. Had an arrival delay of two or more hours
B. Flew to Houston (`IAH` or `HOU`)
C. Were operated by United, American, or Delta
D. Departed in summer (July, August, and September)
E. Arrived more than two hours late, but didn't leave late
F. Were delayed by at least an hour, but made up over 30 minutes in flight
G. Departed between midnight and 6am (inclusive)
1. How many flights have a missing `dep_time`? What other variables are
missing? What might these rows represent?
## Arrange or sort rows with `.sort_values()`
`.sort_values()` works similarly to `.query()` except that instead of selecting rows, it changes their order. It takes a column name or a list of column names to order by. If you provide more than one column name, each additional column will be used to break ties in the values of the preceding columns:
```{python}
flights.sort_values(by = ['year', 'month', 'day'])
```
Use the argument `ascending = False` to re-order by a column in descending order:
```{python}
flights.sort_values(by = ['year', 'month', 'day'], ascending = False)
```
Missing values are always sorted at the end:
```{python}
df = pd.DataFrame({'x': [5, 2, np.nan]})
df.sort_values('x')
df.sort_values('x', ascending = False)
```
### Exercises
1. How could you use `.sort_values()` to sort all missing values to the start?
    (Hint: look at the `na_position` argument).
<!-- df.sort_values('x', ascending = False, na_position = "first") -->
1. Sort `flights` to find the most delayed flights. Find the flights that
left earliest.
1. Sort `flights` to find the fastest (highest speed) flights.
1. Which flights travelled the farthest? Which travelled the shortest?
## Select columns with `filter()` or `loc[]` {#select}
It's not uncommon to get datasets with hundreds or even thousands of variables. In this case, the first challenge is often narrowing in on the variables you're actually interested in. `.filter()` allows you to rapidly zoom in on a useful subset using operations based on the names of the variables.
Additionally, `.loc[]` is often used to select columns by many users of pandas. You can read more about the `.loc[]` method in the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html#pandas.DataFrame.loc).
`.filter()` is not terribly useful with the flights data because we only have 19 variables, but you can still get the general idea:
```{python}
# Select columns by name
flights.filter(['year', 'month', 'day'])
# Drop the year and day columns
flights.drop(columns = ['year', 'day'])
```
`loc[]` functions in a similar fashion.
```{python}
# Select columns by name
flights.loc[:, ['year', 'month', 'day']]
# Select all columns between year and day (inclusive)
flights.loc[:, 'year':'day']
# Select all columns except year through day (inclusive)
flights.drop(columns = flights.loc[:, 'year':'day'].columns)
```
There are a number of regular expression helpers you can use within `filter()`:
* `flights.filter(regex = '^sch')`: matches column names that begin with "sch".
* `flights.filter(regex = "time$")`: matches names that end with "time".
* `flights.filter(regex = "_dep_")`: matches names that contain "_dep_".
* `flights.filter(regex = '(.)\\1')`: selects variables that match a regular expression.
This one matches any variables that contain repeated characters. You'll
learn more about regular expressions in [strings].
See [pandas filter documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html) for more details.
Use `rename()` to rename a column or multiple columns.
```{python}
flights.rename(columns = {'year': 'YEAR', 'month':'MONTH'})
```
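`rename()` will also accept a function in place of the dictionary, which is handy when you want to change every column name the same way (a quick sketch):
```{python, eval = FALSE}
# apply a function to every column name
flights.rename(columns = str.upper)
```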
### Exercises
1. Brainstorm as many ways as possible to select `dep_time`, `dep_delay`,
`arr_time`, and `arr_delay` from `flights`.
1. What happens if you include the name of a variable multiple times in
a `filter()` call?
1. Does the result of running the following code surprise you? How does the
    `regex` matching in `filter()` deal with case by default? How can you change that default?
```{python, eval = FALSE}
flights.filter(regex = "TIME")
```
## Add new variables with `.assign()`
Besides selecting sets of existing columns, it's often useful to add new columns that are functions of existing columns. That's the job of `.assign()`.
`.assign()` always adds new columns at the end of your dataset, so we'll start by creating a narrower dataset to make it easy to see the new variables.
```{python, cache=FALSE}
flights_sml = (flights
.filter(regex = "^year$|^month$|^day$|delay$|^distance$|^air_time$"))
(flights_sml
.assign(
gain = lambda x: x.dep_delay - x.arr_delay,
speed = lambda x: x.distance / x.air_time * 60
)
.head())
```
Note that you can refer to columns that you've just created:
```{python}
(flights_sml
.assign(
gain = lambda x: x.dep_delay - x.arr_delay,
hours = lambda x: x.air_time / 60,
gain_per_hour = lambda x: x.gain / x.hours
)
.head())
```
### Useful creation functions {#mutate-funs}
There are many functions for creating new variables that you can use with `.assign()`. The key property is that the function must be vectorised: it must take a vector of values as input, and return a vector with the same number of values as output. Basic arithmetic operators are available in Python without any additional packages, but many mathematical functions like `mean()` and `std()` come from additional packages. Python ships with the `math` and `statistics` modules; however, we recommend the __NumPy__ package for the full suite of mathematical functions, imported with `import numpy as np`. There's no way to list every possible function that you might use, but here's a selection of functions that are frequently useful:
* Arithmetic operators: `+`, `-`, `*`, `/`, `**`. These are all vectorised,
    using so-called "broadcasting" rules: if one operand is a single number, it
    is automatically extended to the length of the other. This is most useful
    in expressions like `air_time / 60`, `hours * 60 + minute`, etc.
    Arithmetic operators are also useful in conjunction with the aggregate
    functions you'll learn about later. For example, `x / np.sum(x)` calculates
    the proportion of a total, and `y - np.mean(y)` computes the difference from
    the mean (see the sketch at the end of this list).
* Modular arithmetic: `//` (integer division) and `%` (remainder), where
`x == y * (x // y) + (x % y)`. Modular arithmetic is a handy tool because
it allows you to break integers up into pieces. For example, in the
flights dataset, you can compute `hour` and `minute` from `dep_time` with:
```{python}
(flights
.filter(['dep_time'])
.assign(
hour = lambda x: x.dep_time // 100,
minute = lambda x: x.dep_time % 100
))
```
* Logs: `np.log()`, `np.log2()`, `np.log10()`. Logarithms are an incredibly useful
transformation for dealing with data that ranges across multiple orders of
magnitude. They also convert multiplicative relationships to additive, a
feature we'll come back to in modelling.
All else being equal, I recommend using `np.log2()` because it's easy to
interpret: a difference of 1 on the log scale corresponds to doubling on
the original scale and a difference of -1 corresponds to halving.
* Offsets: `shift(1)` and `shift(-1)` allow you to refer to leading or lagging
values. This allows you to compute running differences (e.g. `x - x.shift(1)`)
or find when values change (`x != x.shift(1)`). They are most useful in
conjunction with `groupby()`, which you'll learn about shortly.
```{python}
x = pd.Series(np.arange(1,10))
x.shift(1)
x.shift(-1)
```
* Cumulative and rolling aggregates: pandas provides functions for running sums,
products, mins and maxes: `cumsum()`, `cumprod()`, `cummin()`, `cummax()`.
If you need rolling aggregates (i.e. a sum computed over a rolling window),
try the `rolling()` in the pandas package.
```{python}
x
x.cumsum()
x.rolling(2).mean()
```
* Logical comparisons, `<`, `<=`, `>`, `>=`, `!=`, and `==`, which you learned about
earlier. If you're doing a complex sequence of logical operations it's
often a good idea to store the interim values in new variables so you can
check that each step is working as expected.
* Ranking: there are a number of ranking methods, but you should
    start with `rank(method = 'min')`. It does the most usual type of ranking
    (e.g. 1st, 2nd, 2nd, 4th). The default gives the smallest values the smallest
    ranks; use `ascending = False` to give the largest values the smallest ranks.
```{python}
y = pd.Series([1, 2, 2, np.nan, 3, 4])
y.rank(method = 'min')
y.rank(ascending=False, method = 'min')
```
If `method = 'min'` doesn't do what you need, look at the variants
    `method = 'first'` and `method = 'dense'`, or the `pct = True` argument.
See the rank [help page](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rank.html) for more details.
```{python}
y.rank(method = 'first')
y.rank(method = 'dense')
y.rank(pct = True)
```
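To tie back to the arithmetic operators at the start of this list, here is a minimal sketch of computing a proportion of a total and a difference from the mean with `.assign()`; the column names `delay_prop` and `delay_diff` are just illustrative:
```{python, eval = FALSE}
(flights_sml
    .assign(
        delay_prop = lambda x: x.dep_delay / x.dep_delay.sum(),   # share of the total delay
        delay_diff = lambda x: x.dep_delay - x.dep_delay.mean()   # difference from the mean delay
    )
    .head())
```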
### Exercises
1. Currently `dep_time` and `sched_dep_time` are convenient to look at, but
hard to compute with because they're not really continuous numbers.
Convert them to a more convenient representation of number of minutes
since midnight.
1. Compare `air_time` with `arr_time - dep_time`. What do you expect to see?
What do you see? What do you need to do to fix it?
1. Compare `dep_time`, `sched_dep_time`, and `dep_delay`. How would you
expect those three numbers to be related?
1. Find the 10 most delayed flights using a ranking function. How do you want
to handle ties? Carefully read the documentation for `method = 'min'`.
1. What trigonometric functions does __NumPy__ provide?
## Grouped summaries or aggregations with `.agg()`
The last key verb is `.agg()`. It collapses a data frame to a single row:
```{python}
flights.agg({'dep_delay': np.mean})
```
(Pandas aggregate functions ignore `np.nan` values, like `na.rm = TRUE` in R.)
`.agg()` is not terribly useful unless we pair it with `.groupby()`. This changes the unit of analysis from the complete dataset to individual groups. Then, when you use pandas functions on a grouped data frame, they'll be automatically applied "by group". For example, if we applied similar code to a data frame grouped by date, we would get the average delay per date. Note that with `.agg()` we use a tuple to identify the column (first entry) and the function to apply to that column (second entry). This is called [named aggregation](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#named-aggregation) in pandas:
```{python}
by_day = flights.groupby(['year', 'month', 'day'])
by_day.agg(delay = ('dep_delay', np.mean)).reset_index()
```
Note the use of `.reset_index()` to remove the [MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced-hierarchical) that pandas creates. You can read more about the use of `.groupby()` in pandas in the [Group By: split-apply-combine user guide](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html).
Together `.groupby()` and `.agg()` provide one of the tools that you'll use most commonly when working with pandas: grouped summaries. But before we go any further, we need to introduce a structure for pandas code when doing data science work. We structure our code much like 'the pipe', `%>%`, from the tidyverse packages in R.
### Combining multiple operations
Imagine that we want to explore the relationship between the distance and average delay for each location. Using what you know about pandas, you might write code like this:
```{python, fig.width = 6}
by_dest = flights.groupby('dest')
delay = by_dest.agg(
count = ('distance', 'size'),
dist = ('distance', np.mean),
delay = ('arr_delay', np.mean)
)
delay_filter = delay.query('count > 20 & dest != "HNL"')
# It looks like delays increase with distance up to ~750 miles
# and then decrease. Maybe as flights get longer there's more
# ability to make up delays in the air?
chart_base = (alt.Chart(delay_filter)
.encode(
x = 'dist',
y = 'delay'
))
chart = chart_base.mark_point() + chart_base.transform_loess('dist', 'delay').mark_line()
chart.save("screenshots/transform_1.png")
```
```{R, echo=FALSE, fig.align="left"}
knitr::include_graphics("screenshots/transform_1.png")
```
There are three steps to prepare this data:
1. Group flights by destination.
1. Summarise to compute distance, average delay, and number of flights.
1. Filter to remove noisy points and Honolulu airport, which is almost
twice as far away as the next closest airport.
This code is a little frustrating to write because we have to give each intermediate data frame a name, even though we don't care about it. Naming things is hard, so this slows down our analysis.
There's another way to tackle the same problem without the additional objects:
```{python}
delays = (flights
.groupby('dest')
.agg(
count = ('distance', 'size'),
dist = ('distance', np.mean),
delay = ('arr_delay', np.mean)
)
.query('count > 20 & dest != "HNL"'))
```
This focuses on the transformations, not what's being transformed, which makes the code easier to read. You can read it as a series of imperative statements: group, then summarise, then filter. As suggested by this reading, a good way to pronounce `.` when reading pandas code is "then".
You can use the `()` with `.` to rewrite multiple operations in a way that you can read left-to-right, top-to-bottom. We'll use this format frequently from now on because it considerably improves the readability of complex pandas code.
### Missing values
You may have wondered about the `np.nan` values we put into our pandas data frame above. Pandas introduced an experimental `pd.NA` value in version 1.0, but it is not yet standard the way `NA` is in the R language. You can read the full details about [missing data in pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#working-with-missing-data).
Pandas' and NumPy's handling of missing values defaults to the opposite behaviour of R and the Tidyverse. Here are four key defaults when using pandas (illustrated in the short sketch below):
1. When summing data, NA (missing) values are treated as zero.
1. If the data are all NA, the result is 0.
1. Cumulative methods ignore NA values by default, but preserve them in the resulting arrays. To override this behaviour and include missing values, use `skipna=False`.
1. All the `.groupby()` methods exclude missing values in their calculations, as described in the [pandas groupby documentation](https://pandas.pydata.org/pandas-docs/stable/reference/groupby.html).
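A short illustration of these defaults, using a toy series:
```{python}
s = pd.Series([1.0, np.nan, 3.0])
s.sum()                 # the missing value is skipped: 4.0
s.sum(skipna = False)   # nan
s.cumsum()              # the missing value is preserved in the result
```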
In our case, where missing values represent cancelled flights, we could also tackle the problem by first removing the cancelled flights. We'll save this dataset so we can reuse it in the next few examples.
```{python, cache=FALSE}
not_cancelled = flights.dropna(subset = ['dep_delay', 'arr_delay'])
```
### Counts
Whenever you do any aggregation, it's always a good idea to include either a count (`size()`) or a count of non-missing values (`count()`). That way you can check that you're not drawing conclusions based on very small amounts of data. For example, let's look at the planes (identified by their tail number) that have the highest average delays:
```{python, cache=FALSE}
delays = not_cancelled.groupby('tailnum').agg(
delay = ("arr_delay", np.mean)
)
chart = (alt.Chart(delays)
.transform_density(
density = 'delay',
as_ = ['delay', 'density'],
bandwidth=10
)
.encode(
x = 'delay:Q',
y = 'density:Q'
)
.mark_line())
chart.save("screenshots/transform_2.png")
```
```{R, echo=FALSE, fig.align="left"}
knitr::include_graphics("screenshots/transform_2.png")
```
Wow, there are some planes that have an _average_ delay of 5 hours (300 minutes)!
The story is actually a little more nuanced. We can get more insight if we draw a scatterplot of number of flights vs. average delay:
```{python, cache = FALSE}
delays = (not_cancelled
.groupby('tailnum')
.agg(
delay = ("arr_delay", np.mean),
n = ('arr_delay', 'size')
))
chart = (alt.Chart(delays)
.encode(
x = 'n',
y = 'delay'
)
.mark_point(
filled = True,
opacity = 1/10))
chart.save("screenshots/transform_3.png")
```
```{R, echo=FALSE, fig.align="left"}
knitr::include_graphics("screenshots/transform_3.png")
```
Not surprisingly, there is much greater variation in the average delay when there are few flights. The shape of this plot is very characteristic: whenever you plot a mean (or other summary) vs. group size, you'll see that the variation decreases as the sample size increases.
When looking at this sort of plot, it's often useful to filter out the groups with the smallest numbers of observations, so you can see more of the pattern and less of the extreme variation in the smallest groups. This is what the following code does, as well as showing you a handy pattern for simple data frame manipulations only needed for a chart.
```{python}
chart = (alt.Chart(delays.query("n > 25"))
.encode(
x = 'n',
y = 'delay'
)
.mark_point(
filled = True,
opacity = 1/10))
chart.save("screenshots/altair_delays.png")
```
```{R, echo=FALSE, fig.align="left"}
knitr::include_graphics("screenshots/altair_delays.png")
```
There's another common variation of this type of pattern. Let's look at how the average performance of batters in baseball is related to the number of times they're at bat. Here I use data from the __Lahman__ package to compute the batting average (number of hits / number of attempts) of every major league baseball player.
When I plot the skill of the batter (measured by the batting average, `ba`) against the number of opportunities to hit the ball (measured by at bat, `ab`), you see two patterns:
1. As above, the variation in our aggregate decreases as we get more
data points.
2. There's a positive correlation between skill (`ba`) and opportunities to
hit the ball (`ab`). This is because teams control who gets to play,
and obviously they'll pick their best players.
```{python}
# settings for Altair to handle large data
alt.data_transformers.enable('json')
batting_url = "https://github.com/byuidatascience/data4python4ds/raw/master/data-raw/batting/batting.csv"
batting = pd.read_csv(batting_url)
batters = (batting
.groupby('playerID')
.agg(
ab = ("AB", "sum"),
h = ("H", "sum")
)
.assign(ba = lambda x: x.h/x.ab))
chart = (alt.Chart(batters.query('ab > 100'))
.encode(
x = 'ab',
y = 'ba'
)
.mark_point())
chart.save("screenshots/altair_batters.png")
```
```{R, echo=FALSE, fig.align="left"}
knitr::include_graphics("screenshots/altair_batters.png")
```
This also has important implications for ranking. If you naively sort on `ba` in descending order, the people with the best batting averages are clearly lucky, not skilled:
```{python}
batters.sort_values('ba', ascending = False).head(10)
```
You can find a good explanation of this problem at <http://varianceexplained.org/r/empirical_bayes_baseball/> and <http://www.evanmiller.org/how-not-to-sort-by-average-rating.html>.
### Useful summary functions {#summarise-funs}
Just using means, counts, and sums can get you a long way, but NumPy, SciPy, and pandas provide many other useful summary functions (remember we are using the SciPy stats submodule):
* Measures of location: we've used `np.mean()`, but `np.median()` is also
useful. The mean is the sum divided by the length; the median is a value
where 50% of `x` is above it, and 50% is below it.
It's sometimes useful to combine aggregation with logical subsetting.
We haven't talked about this sort of subsetting yet, but you'll learn more
about it in [subsetting].
```{python}
(not_cancelled
.groupby(['year', 'month', 'day'])
.agg(
avg_delay1 = ('arr_delay', np.mean),
avg_delay2 = ('arr_delay', lambda x: np.mean(x[x > 0]))
))
```
* Measures of spread: `np.std()`, `stats.iqr()`, `stats.median_absolute_deviation()`.
    The root mean squared deviation, or standard deviation `np.std()`, is the standard
    measure of spread. The interquartile range `stats.iqr()` and median absolute deviation
    `stats.median_absolute_deviation()` are robust equivalents that may be more useful if
    you have outliers.
```{python}
# Why is distance to some destinations more variable than to others?
(not_cancelled
.groupby(['dest'])
.agg(distance_sd = ('distance', np.std))
.sort_values('distance_sd', ascending = False))
```
* Measures of rank: `np.min()`, `np.quantile()`, `np.max()`. Quantiles
are a generalisation of the median. For example, `np.quantile(x, 0.25)`
will find a value of `x` that is greater than 25% of the values,
and less than the remaining 75%.
```{python}
# When do the first and last flights leave each day?
(not_cancelled
.groupby(['year', 'month', 'day'])
.agg(
first = ('dep_time', np.min),
last = ('dep_time', np.max)
))
```
* Measures of position: `first()`, `nth()`, `last()`. These work
    similarly to `x[0]`, `x[1]`, and `x[-1]`: they pick values by their
    position within each group. For example, we can
    find the first and last departure for each day:
```{python}
# using first and last
(not_cancelled
.groupby(['year', 'month','day'])
.agg(
first_dep = ('dep_time', 'first'),
last_dep = ('dep_time', 'last')
))
```
```{python}
# using position
(not_cancelled
.groupby(['year', 'month','day'])
.agg(
first_dep = ('dep_time', lambda x: list(x)[0]),
last_dep = ('dep_time', lambda x: list(x)[-1])
))
```
<!-- These functions are complementary to filtering on ranks. Filtering gives -->
<!-- you all variables, with each observation in a separate row: -->
<!-- ```{python} -->
<!-- not_cancelled['f'] = not_cancelled.assign( -->
<!-- r = lambda x: (x. -->
<!-- groupby(['year', 'month','day']). -->
<!-- dep_time.agg('rank', method = 'min')) -->
<!-- ).groupby(['year', 'month','day']).r.transform( -->
<!-- lambda x: (x == np.min(x)) | (x == np.max(x)) -->
<!-- ) -->
<!-- not_cancelled.query('f == True').drop(columns = 'f') -->
<!-- # The pandas way to do this -->
<!-- df['min_c'] = df.groupby('A')['C'].transform('min') -->
<!-- df['max_c'] = df.groupby('A')['C'].transform('max') -->
<!-- df.query(' (C == min_c) or (C == max_c) ').filter(['A', 'B', 'C']) -->
<!-- ``` -->
* Counts: You've seen `size()`, which takes no arguments and returns the
    size of the current group. To count the number of missing values, use
    `isnull().sum()`; to count non-missing values, use `count()`. To count
    the number of unique (distinct) values, use `nunique()`.
```{python}
# Which destinations have the most carriers?
(flights
.groupby('dest')
.agg(
carriers_unique = ('carrier', 'nunique'),
carriers_count = ('carrier', 'size'),
missing_time = ('dep_time', lambda x: x.isnull().sum())
))
```
Counts are useful and pandas provides a simple helper if all you want is
a count:
```{python}
not_cancelled['dest'].value_counts()
```
* Counts and proportions of logical values: `sum(x > 10)`, `mean(y == 0)`.
    When used with numeric functions, `True` is converted to 1 and `False` to 0.
    This makes `sum()` and `mean()` very useful: `sum(x)` gives the number of
    `True`s in `x`, and `mean(x)` gives the proportion.
```{python}
# How many flights left before 5am? (these usually indicate delayed
# flights from the previous day)
(not_cancelled
.groupby(['year', 'month','day'])
.agg(n_early = ('dep_time', lambda x: np.sum(x < 500))))
# What proportion of flights are delayed by more than an hour?
(not_cancelled
.groupby(['year', 'month','day'])
.agg(hour_prop = ('arr_delay', lambda x: np.mean(x > 60))))
```
### Grouping by multiple variables
Be careful when progressively rolling up summaries: it's OK for sums and counts, but you need to think about weighting means and variances, and it's not possible to do it exactly for rank-based statistics like the median. In other words, the sum of groupwise sums is the overall sum, but the median of groupwise medians is not the overall median.
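For example, here is a minimal sketch of a rollup that is safe because it only sums counts; the intermediate names `per_day`, `per_month`, and `per_year` are just illustrative:
```{python, eval = FALSE}
per_day = (flights
    .groupby(['year', 'month', 'day'])
    .agg(n_flights = ('dep_time', 'size'))
    .reset_index())
per_month = (per_day
    .groupby(['year', 'month'])
    .agg(n_flights = ('n_flights', 'sum'))
    .reset_index())
per_year = (per_month
    .groupby('year')
    .agg(n_flights = ('n_flights', 'sum'))
    .reset_index())
```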
### Ungrouping (resetting the index)
If you need to remove the grouping and the MultiIndex, use `reset_index()`. This is a rough equivalent to `ungroup()` in R, but it is not the same thing. Notice the column names are no longer in multiple levels.
```{python}
dat = (not_cancelled
.groupby(['year', 'month','day'])
.agg(hour_prop = ('arr_delay', lambda x: np.mean(x > 60))))
dat.head()
dat.reset_index().head()
```
### Exercises
1. Brainstorm at least 5 different ways to assess the typical delay
characteristics of a group of flights. Consider the following scenarios:
* A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of
the time.
* A flight is always 10 minutes late.
* A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of
the time.
* 99% of the time a flight is on time. 1% of the time it's 2 hours late.
Which is more important: arrival delay or departure delay?
1. Our definition of cancelled flights (`dep_delay.isna() | arr_delay.isna()`)
    is slightly suboptimal. Why? Which is the most important column?
1. Look at the number of cancelled flights per day. Is there a pattern?
Is the proportion of cancelled flights related to the average delay?
1. Which carrier has the worst delays? Challenge: can you disentangle the
effects of bad airports vs. bad carriers? Why/why not? (Hint: think about
`flights.groupby(['carrier', 'dest']).agg(n = ('dep_time', 'size'))`)
## Grouped transforms (and filters)
Grouping is most useful in conjunction with `.agg()`, but you can also do convenient operations with `.transform()`. This is a difference between pandas and dplyr: once you create a `.groupby()` object you cannot use `.assign()`, and the closest equivalent is `.transform()`. Following the pandas [groupby guide](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html) on 'split-apply-combine', we assign our transformed variables to the data frame and then perform filters on the full data frame.
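Here is a minimal sketch of the idea: `.transform()` returns a result aligned with the original rows, so a grouped summary can be attached back onto the data frame as a new column (the name `mean_delay` is just illustrative):
```{python, eval = FALSE}
flights_sml['mean_delay'] = (flights_sml
    .groupby(['year', 'month', 'day'])
    .arr_delay
    .transform('mean'))
```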
* Find the worst members of each group:
```{python}
flights_sml['ranks'] = (flights_sml
.groupby(['year', 'month','day']).arr_delay
.rank(ascending = False))
flights_sml.query('ranks < 10').drop(columns = 'ranks')
```
* Find all groups bigger than a threshold:
```{python, cache=FALSE}
popular_dests = (flights
    .assign(n = lambda x: x.groupby('dest').arr_delay.transform('size'))
    .query('n > 365')
    .drop(columns = 'n'))
popular_dests
```
* Standardise to compute per group metrics:
```{python}
(popular_dests
.query('arr_delay > 0')
.assign(
prop_delay = lambda x: x.arr_delay / x.groupby('dest').arr_delay.transform('sum')
)
.filter(['year', 'month', 'day', 'dest', 'arr_delay', 'prop_delay']))
```
### Exercises
1. Which plane (`tailnum`) has the worst on-time record?
1. What time of day should you fly if you want to avoid delays as much
as possible?
1. For each destination, compute the total minutes of delay. For each
flight, compute the proportion of the total delay for its destination.
1. Delays are typically temporally correlated: even once the problem that
caused the initial delay has been resolved, later flights are delayed
to allow earlier flights to leave. Explore how the delay
of a flight is related to the delay of the immediately preceding flight.
1. Look at each destination. Can you find flights that are suspiciously
fast? (i.e. flights that represent a potential data entry error). Compute
the air time of a flight relative to the shortest flight to that destination.
Which flights were most delayed in the air?
1. Find all destinations that are flown by at least two carriers. Use that
information to rank the carriers.
1. For each plane, count the number of flights before the first delay
of greater than 1 hour.