Data import¶
Introduction¶
Working with existing data is a great way to learn the tools of data science, but at some point you want to stop learning and start working with your own data. In this chapter, you'll learn how to use the readr package for reading plain-text rectangular files into R.
This chapter will only scratch the surface of data import, but many of the principles will translate to other forms of data import. The chapter concludes with a few pointers to packages that you might find useful for loading other types of data.
Prerequisites¶
In this chapter, you'll learn how to load flat files in R with the readr package:
library(readr)
Getting started¶
Most of readr's functions are concerned with turning flat files into data frames:
- read_csv() reads comma delimited files, read_csv2() reads semicolon separated files (common in countries where , is used as the decimal place), read_tsv() reads tab delimited files, and read_delim() reads in files with any delimiter.
- read_fwf() reads fixed width files. You can specify fields either by their widths with fwf_widths() or their position with fwf_positions().
- read_table() reads a common variation of fixed width files where columns are separated by white space.
- read_log() reads Apache style log files. (But also check out webreadr, which is built on top of read_log() but provides many more helpful tools.)
These functions all have similar syntax: once you've mastered one, you can use the others with ease. For the rest of this chapter we'll focus on read_csv(). Once you understand read_csv(), it will be straightforward to apply your knowledge to all the other functions in readr.
The first argument to read_csv() is the most important: it's the path to the file to read:
heights <- read_csv("https://raw.githubusercontent.com/hadley/r4ds/main/data/heights.csv")
Rows: 1192 Columns: 6 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," chr (2): sex, race dbl (4): earn, height, ed, age ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
When you run read_csv() it prints out a column specification that gives the name and type of each column. That's an important part of readr, which we'll come back to in [parsing a file].
You can also supply an inline csv file. This is useful for experimenting and creating reproducible examples:
read_csv("a,b,c
1,2,3
4,5,6")
Rows: 2 Columns: 3 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (3): a, b, c ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
a | b | c |
---|---|---|
<dbl> | <dbl> | <dbl> |
1 | 2 | 3 |
4 | 5 | 6 |
In both cases read_csv() uses the first line of the data for the column names, which is a very common convention. There are two cases where you might want to tweak this behaviour:
- Sometimes there are a few lines of metadata at the top of the file. You can use skip = n to skip the first n lines; or use comment = "#" to drop all lines that start with a comment character.
read_csv("The first line of metadata
The second line of metadata
x,y,z
1,2,3", skip = 2)
Rows: 1 Columns: 3 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (3): x, y, z ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
x | y | z |
---|---|---|
<dbl> | <dbl> | <dbl> |
1 | 2 | 3 |
read_csv("# A comment I want to skip
x,y,z
1,2,3", comment = "#")
Rows: 1 Columns: 3 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (3): x, y, z ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
x | y | z |
---|---|---|
<dbl> | <dbl> | <dbl> |
1 | 2 | 3 |
- The data might not have column names. You can use col_names = FALSE to tell read_csv() not to treat the first row as headings, and instead label them sequentially from X1 to Xn:
read_csv("1,2,3\n4,5,6", col_names = FALSE)
Rows: 2 Columns: 3 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (3): X1, X2, X3 ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
X1 | X2 | X3 |
---|---|---|
<dbl> | <dbl> | <dbl> |
1 | 2 | 3 |
4 | 5 | 6 |
Alternatively you can pass col_names a character vector, which will be used as the column names:
read_csv("1,2,3\n4,5,6", col_names = c("x", "y", "z"))
Rows: 2 Columns: 3 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (3): x, y, z ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
x | y | z |
---|---|---|
<dbl> | <dbl> | <dbl> |
1 | 2 | 3 |
4 | 5 | 6 |
Another option that commonly needs tweaking is na: this specifies the value (or values) that are used to represent missing values in your file:
read_csv("a,b,c\n1,2,.", na = ".")
Rows: 1 Columns: 3 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (2): a, b lgl (1): c ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
a | b | c |
---|---|---|
<dbl> | <dbl> | <lgl> |
1 | 2 | NA |
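Because na accepts a character vector, you can list several missing-value codes at once. Here's a small sketch (the codes are made up for illustration):
read_csv("a,b,c\n1,-99,.", na = c(".", "-99"))
Both the "-99" and the "." are read in as NA.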
This is all you need to know to read ~75% of csv files that you'll encounter in practice. You can also easily adapt what you've learned to read tab separated files with read_tsv() and fixed width files with read_fwf(). To read in more challenging files, you'll need to learn more about how readr parses each column, turning strings into the most appropriate type. That's up next.
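For example, here's a quick sketch of the same idea with other delimiters; the inline strings are invented for illustration:
read_tsv("a\tb\tc\n1\t2\t3")
read_delim("a|b|c\n1|2|3", delim = "|")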
Compared to base R¶
If you've used R before, you might wonder why we're not using read.csv(). There are a few good reasons to favour readr functions over the base equivalents:
- They are typically much faster (~10x) than their base equivalents. Long running jobs have a progress bar, so you can see what's happening. If you're looking for raw speed, try data.table::fread(). It doesn't fit quite so well into the tidyverse, but it can be quite a bit faster.
- They produce tibbles, they don't convert character vectors to factors, use row names, or munge the column names. These are common sources of frustration with the base R functions.
- They are more reproducible. Base R functions inherit some behaviour from your operating system and environment variables, so import code that works on your computer might not work on someone else's.
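As a small illustration of the tibble point, here's a sketch comparing the base function with readr on the same inline csv:
str(read.csv(text = "x,y\n1,a\n2,b"))  # base R: a plain data.frame (and, before R 4.0.0, strings became factors)
str(read_csv("x,y\n1,a\n2,b"))         # readr: a tibble, and character columns are never converted to factors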
Exercises¶
1. What function would you use to read a file where fields were separated with "|"?
2. Apart from file, skip, and comment, what other arguments do read_csv() and read_tsv() have in common?
3. What is the most important argument to read_fwf() that we haven't already discussed?
4. Sometimes strings in a csv file contain commas. To prevent them from causing problems they need to be surrounded by a quoting character, like " or '. By convention, read_csv() assumes that the quoting character will be ", and if you want to change it you'll need to use read_delim() instead. What arguments do you need to specify to read the following text into a data frame?
   "x,y\n1,'a,b'"
5. Identify what is wrong with each of the following inline csvs. What happens when you run the code?
   read_csv("a,b\n1,2,3\n4,5,6")
   read_csv("a,b,c\n1,2\n1,2,3,4")
   read_csv("a,b\n\"1")
   read_csv("a,b\n1,2\na,b")
   read_csv("a;b\n1;3")
Parsing a vector¶
Before we get into the details of how readr reads files from disk, we need to take a little detour to talk about the parse_*() functions. These functions take a character vector and return a more specialised vector like a logical, integer, or date:
str(parse_logical(c("TRUE", "FALSE", "NA")))
logi [1:3] TRUE FALSE NA
str(parse_integer(c("1", "2", "3")))
int [1:3] 1 2 3
str(parse_date(c("2010-01-01", "1979-10-14")))
Date[1:2], format: "2010-01-01" "1979-10-14"
These functions are useful in their own right, but are also an important building block for readr. Once you've learned how the individual parsers work in this section, we'll circle back and see how they fit together to parse a complete file in the next section.
Like all functions in the tidyverse, the parse_*() functions are uniform: the first argument is a character vector to parse, and the na argument specifies which strings should be treated as missing:
parse_integer(c("1", "231", ".", "456"), na = ".")
- 1
- 231
- <NA>
- 456
If parsing fails, you'll get a warning:
x <- parse_integer(c("123", "345", "abc", "123.45"))
Warning message:
“2 parsing failures.
row col               expected actual
  3  -- no trailing characters    abc
  4  -- no trailing characters 123.45
”
And the failures will be missing in the output:
x
- 123
- 345
- <NA>
- <NA>
If there are many parsing failures, you'll need to use problems() to get the complete set. This returns a tibble, which you can then manipulate with dplyr.
problems(x)
row | col | expected | actual |
---|---|---|---|
<int> | <int> | <chr> | <chr> |
3 | NA | no trailing characters | abc |
4 | NA | no trailing characters | 123.45 |
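Since problems() returns a tibble, you can summarise the failures with dplyr. A minimal sketch, assuming dplyr is installed:
library(dplyr)
count(problems(x), expected)  # how many failures of each kind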
Using parsers is mostly a matter of understanding what's available and how they deal with different types of input. There are eight particularly important parsers:
- parse_logical() and parse_integer() parse logicals and integers respectively. There's basically nothing that can go wrong with these parsers so I won't describe them here further.
- parse_double() is a strict numeric parser, and parse_number() is a flexible numeric parser. These are more complicated than you might expect because different parts of the world write numbers in different ways.
- parse_character() seems so simple that it shouldn't be necessary. But one complication makes it quite important: character encodings.
- parse_datetime(), parse_date(), and parse_time() allow you to parse various date & time specifications. These are the most complicated because there are so many different ways of writing dates.
The following sections describe the parsers in more detail.
Numbers¶
It seems like it should be straightforward to parse a number, but three problems make it tricky:
- People write numbers differently in different parts of the world. For example, some countries use . in between the integer and fractional parts of a real number, while others use ,.
- Numbers are often surrounded by other characters that provide some context, like "$1000" or "10%".
- Numbers often contain "grouping" characters to make them easier to read, like "1,000,000", and these grouping characters vary around the world.
To address the first problem, readr has the notion of a "locale", an object that specifies parsing options that differ from place to place. When parsing numbers, the most important option is the character you use for the decimal mark. You can override the default value of . by creating a new locale:
parse_double("1.23")
parse_double("1,23", locale = locale(decimal_mark = ","))
readr's default locale is US-centric, because generally R is US-centric (i.e. the documentation of base R is written in American English). An alternative approach would be to try and guess the defaults from your operating system. This is hard to do well, but, more importantly, makes your code fragile: even if it works on your computer, it might fail when you email it to a colleague in another country.
parse_number() addresses the second problem: it ignores non-numeric characters before and after the number. This is particularly useful for currencies and percentages, but also works to extract numbers embedded in text.
parse_number("$100")
parse_number("20%")
parse_number("It cost $123.45")
The final problem is addressed by the combination of parse_number() and the locale, as parse_number() will ignore the "grouping mark":
parse_number("$123,456,789")
# Used in many parts of Europe
parse_number("123.456.789", locale = locale(grouping_mark = "."))
# Used in Switzerland
parse_number("123'456'789", locale = locale(grouping_mark = "'"))
Strings¶
It seems like parse_character() should be really simple - it could just return its input. Unfortunately life isn't so simple, as there are multiple ways to represent the same string. To understand what's going on, we need to dive into the details of how computers represent strings. In R, we can get at the underlying representation of a string using charToRaw():
charToRaw("Hadley")
[1] 48 61 64 6c 65 79
Each hexadecimal number represents a byte of information: 48 is H, 61 is a, and so on. The mapping from hexadecimal number to character is called the encoding, and in this case the encoding is called ASCII. ASCII does a great job of representing English characters, because it's the American Standard Code for Information Interchange.
Things get more complicated for languages other than English. In the early days of computing there were many competing standards for encoding non-English characters, and to correctly interpret a string you need to know both the values and the encoding. For example, two common encodings are Latin1 (aka ISO-8859-1, used for Western European languages) and Latin2 (aka ISO-8859-2, used for Eastern European languages). In Latin1, the byte b1 is "±", but in Latin2, it's "ą"! Fortunately, today there is one standard that is supported almost everywhere: UTF-8. UTF-8 can encode just about every character used by humans today, as well as many extra symbols (like emoji!).
readr uses UTF-8 everywhere: it assumes your data is UTF-8 encoded when you read it, and always uses it when writing. This is a good default, but will fail for data produced by older systems that don't understand UTF-8. If this happens to you, your strings will look weird when you print them. Sometimes just one or two characters might be messed up; other times you'll get complete gibberish. For example:
x1 <- "El Ni\xf1o was particularly bad this year"
x2 <- "\x82\xb1\x82\xf1\x82\xc9\x82\xbf\x82\xcd"
x1
x2
To fix the problem you need to specify the encoding in parse_character():
parse_character(x1, locale = locale(encoding = "Latin1"))
parse_character(x2, locale = locale(encoding = "Shift-JIS"))
How do you find the correct encoding? If you're lucky, it'll be included somewhere in the data documentation. Unfortunately, that's rarely the case, so readr provides guess_encoding() to help you figure it out. It's not foolproof, and it works better when you have lots of text (unlike here), but it's a reasonable place to start. Expect to try a few different encodings before you find the right one.
guess_encoding(charToRaw(x1))
encoding | confidence |
---|---|
<chr> | <dbl> |
ISO-8859-1 | 0.46 |
ISO-8859-9 | 0.23 |
guess_encoding(charToRaw(x2))
encoding | confidence |
---|---|
<chr> | <dbl> |
KOI8-R | 0.42 |
The first argument to guess_encoding() can either be a path to a file, or, as in this case, a raw vector (useful if the strings are already in R).
Encodings are a rich and complex topic, and I've only scratched the surface here. If you'd like to learn more I'd recommend reading the detailed explanation at http://kunststube.net/encoding/.
Factors¶
R uses factors to represent categorical variables that have a known set of possible values. Give parse_factor() a vector of known levels to generate a warning whenever an unexpected value is present:
fruit <- c("apple", "banana")
parse_factor(c("apple", "banana", "bananana"), levels = fruit)
Warning message:
“1 parsing failure.
row col           expected   actual
  3  -- value in level set bananana
”
- apple
- banana
- <NA>
Levels:
- 'apple'
- 'banana'
But if you have many problematic entries, it’s often easier to leave as character vectors and then use the tools you’ll learn about in strings and factors to clean them up.
Dates, date-times, and times¶
You pick between three parsers depending on whether you want a date (the number of days since 1970-01-01), a date-time (the number of seconds since midnight 1970-01-01), or a time (the number of seconds since midnight). When called without any additional arguments:
parse_datetime() expects an ISO8601 date-time. ISO8601 is an international standard in which the components of a date are organised from biggest to smallest: year, month, day, hour, minute, second.
parse_datetime("2010-10-01T2010")
[1] "2010-10-01 20:10:00 UTC"
# If time is omitted, it will be set to midnight
parse_datetime("20101010")
[1] "2010-10-10 UTC"
This is the most important date/time standard, and if you work with dates and times frequently, I recommend reading https://en.wikipedia.org/wiki/ISO_8601
parse_date() expects a four digit year, a - or /, the month, a - or /, then the day:
parse_date("2010-10-01")
parse_time() expects the hour, :, minutes, optionally : and seconds, and an optional am/pm specifier:
library(hms)
parse_time("01:10 am")
parse_time("20:10:01")
01:10:00
20:10:01
Base R doesn't have a great built-in class for time data, so we use the one provided in the hms package.
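If you want to check the underlying representations described above, a quick sketch is to strip the class with unclass() (the attributes that print alongside the number may vary):
unclass(parse_date("2010-10-01"))      # days since 1970-01-01 (14883)
unclass(parse_datetime("1970-01-02"))  # seconds since 1970-01-01 00:00 UTC (86400)
unclass(parse_time("00:01:30"))        # seconds since midnight (90)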
If these defaults don't work for your data you can supply your own date-time format, built up of the following pieces:
Year
- %Y (4 digits).
- %y (2 digits); 00-69 -> 2000-2069, 70-99 -> 1970-1999.

Month
- %m (2 digits).
- %b (abbreviated name, like "Jan").
- %B (full name, "January").

Day
- %d (2 digits).
- %e (optional leading space).

Time
- %H 0-23 hour.
- %I 0-12, must be used with %p.
- %p AM/PM indicator.
- %M minutes.
- %S integer seconds.
- %OS real seconds.
- %Z Time zone (as name, e.g. America/Chicago). Beware abbreviations: if you're American, note that "EST" is a Canadian time zone that does not have daylight savings time. It is not Eastern Standard Time!
- %z (as offset from UTC, e.g. +0800).

Non-digits
- %. skips one non-digit character.
- %* skips any number of non-digits.
The best way to figure out the correct format is to create a few examples in a character vector, and test with one of the parsing functions. For example:
parse_date("01/02/15", "%m/%d/%y")
parse_date("01/02/15", "%d/%m/%y")
parse_date("01/02/15", "%y/%m/%d")
If you're using %b or %B with non-English month names, you'll need to set the lang argument to locale(). See the list of built-in languages in date_names_langs(), or if your language is not already included, create your own with date_names().
parse_date("1 janvier 2015", "%d %B %Y", locale = locale("fr"))
Exercises¶
1. What are the most important arguments to locale()?
2. I didn't discuss the date_format and time_format options to locale(). What do they do? Construct an example that shows when they might be useful.
3. If you live outside the US, create a new locale object that encapsulates the settings for the types of file you read most commonly.
4. What's the difference between read_csv() and read_csv2()?
5. What are the most common encodings used in Europe? What are the most common encodings used in Asia? Do some googling to find out.
6. Generate the correct format string to parse each of the following dates and times:
   d1 <- "January 1, 2010"
   d2 <- "2015-Mar-07"
   d3 <- "06-Jun-2017"
   d4 <- c("August 19 (2015)", "July 1 (2015)")
   d5 <- "12/30/14" # Dec 30, 2014
   t1 <- "1705"
   t2 <- "11:15:10.12 PM"
Parsing a file¶
Now that you've learned how to parse an individual vector, it's time to return to the beginning and explore how readr parses a file. There are two new things that you'll learn about in this section:
- How readr automatically guesses the type of each column.
- How to override the default specification.
Strategy¶
readr uses a heuristic to figure out the type of each column: it reads the first 1000 rows and applies some (moderately conservative) rules to guess each column's type. You can emulate this process with a character vector using guess_parser(), which returns readr's best guess, and parse_guess(), which uses that guess to parse the column:
guess_parser("2010-10-01")
guess_parser("15:01")
guess_parser(c("TRUE", "FALSE"))
guess_parser(c("1", "5", "9"))
guess_parser(c("12,352,561"))
The heuristic tries each of the following types, stopping when it finds a match:
- logical: contains only "F", "T", "FALSE", or "TRUE".
- integer: contains only numeric characters (and -).
- double: contains only valid doubles (including numbers like 4.5e-5).
- number: contains valid doubles with the grouping mark inside.
- time: matches the default time_format.
- date: matches the default date_format.
- date-time: any ISO8601 date.
If none of these rules apply, then the column will stay as a vector of strings.
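For example, a quick sketch of the fall-through case:
guess_parser(c("apple", "banana"))  # no rule matches, so readr leaves the column as "character"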
Problems¶
These defaults don't always work for larger files. There are two basic problems:
- The first thousand rows might be a special case, and readr guesses a type that is not sufficiently general. For example, you might have a column of doubles that only contains integers in the first 1000 rows.
- The column might contain a lot of missing values. If the first 1000 rows contain only NAs, readr will guess that it's a character vector, whereas you probably want to parse it as something more specific.
readr contains a challenging csv that illustrates both of these problems:
challenge <- read_csv(readr_example("challenge.csv"))
Rows: 2000 Columns: 2 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (1): x date (1): y ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
(Note the use of readr_example(), which finds the path to one of the files included with the package.)
readr prints the column specification generated by looking at the first 1000 rows, along with the first few parsing failures (if there are any). It's always a good idea to explicitly pull out the problems(), so you can explore them in more depth:
problems(challenge)
row | col | expected | actual | file |
---|---|---|---|---|
<int> | <int> | <chr> | <chr> | <chr> |
A good strategy is to work column by column until there are no problems remaining. Here the column to watch is x: parsed as an integer it leaves trailing characters after the integer value, which suggests we need to use a double parser instead.
To see this, start by writing out the column specification in your call, with x as an integer:
challenge <- read_csv(
readr_example("challenge.csv"),
col_types = cols(
x = col_integer(),
y = col_character()
)
)
Warning message: “One or more parsing issues, see `problems()` for details”
Then you can tweak the type of the x column:
challenge <- read_csv(
readr_example("challenge.csv"),
col_types = cols(
x = col_double(),
y = col_character()
)
)
That fixes the first problem, but if we look at the last few rows, you'll see that they're dates stored in a character vector:
tail(challenge)
x | y |
---|---|
<dbl> | <chr> |
0.8052743 | 2019-11-21 |
0.1635163 | 2018-03-29 |
0.4719390 | 2014-08-04 |
0.7183186 | 2015-08-16 |
0.2698786 | 2020-02-04 |
0.6082372 | 2019-01-06 |
You can fix that by specifying that y is a date column:
challenge <- read_csv(
readr_example("challenge.csv"),
col_types = cols(
x = col_double(),
y = col_date()
)
)
tail(challenge)
x | y |
---|---|
<dbl> | <date> |
0.8052743 | 2019-11-21 |
0.1635163 | 2018-03-29 |
0.4719390 | 2014-08-04 |
0.7183186 | 2015-08-16 |
0.2698786 | 2020-02-04 |
0.6082372 | 2019-01-06 |
Every parse_xyz() function has a corresponding col_xyz() function. You use parse_xyz() when the data is in a character vector in R already; you use col_xyz() when you want to tell readr how to load the data.
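A small sketch of that pairing, using an invented inline csv:
parse_number("20%")                                     # parse_*(): for strings already in R
read_csv("x\n20%", col_types = cols(x = col_number()))  # col_*(): tells readr how to read the file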
I highly recommend always supplying the col_types argument, building it up from the printout provided by readr. This ensures that you have a consistent, reproducible data import script. If you rely on the default guesses and your data changes, readr will continue to read it in. If you want to be really strict, use stop_for_problems(): that function throws an error and stops your script if there are any parsing problems.
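For example, here's a minimal sketch of a strict import built from the printed specification, reusing the challenge file from above:
challenge <- read_csv(
  readr_example("challenge.csv"),
  col_types = cols(
    x = col_double(),
    y = col_date()
  )
)
stop_for_problems(challenge)  # errors if anything failed to parse, so a bad file can't slip through silently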
Other strategies¶
There are a few other general strategies to help you parse files:
- In this case we just got unlucky, and if we'd looked at just a few more rows, we could have correctly parsed in one shot:
challenge2 <- read_csv(readr_example("challenge.csv"), guess_max = 1001)
challenge2
Rows: 2000 Columns: 2 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (1): x date (1): y ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
x | y |
---|---|
<dbl> | <date> |
404 | NA |
4172 | NA |
3004 | NA |
787 | NA |
37 | NA |
⋮ | ⋮ |
0.16351634 | 2018-03-29 |
0.47193898 | 2014-08-04 |
0.71831865 | 2015-08-16 |
0.26987859 | 2020-02-04 |
0.60823719 | 2019-01-06 |
- Sometimes it's easier to diagnose problems if you just read in all the columns as character vectors:
challenge2 <- read_csv(readr_example("challenge.csv"),
col_types = cols(.default = col_character())
)
This is particularly useful in conjunction with type_convert(), which applies the parsing heuristics to the character columns in a data frame.
df <- tibble::tibble(
x = c("1", "2", "3"),
y = c("1.21", "2.32", "4.56")
)
df
# Note the column types
type_convert(df)
x | y |
---|---|
<chr> | <chr> |
1 | 1.21 |
2 | 2.32 |
3 | 4.56 |
── Column specification ──────────────────────────────────────────────────────── cols( x = col_double(), y = col_double() )
x | y |
---|---|
<dbl> | <dbl> |
1 | 1.21 |
2 | 2.32 |
3 | 4.56 |
- If you're reading a very large file, you might want to set n_max to a smallish number like 10,000 or 100,000. That will speed up iteration while you eliminate common problems.
- If you're having major parsing problems, sometimes it's easier to just read into a character vector of lines with read_lines(), or even a character vector of length 1 with read_file(). Then you can use the string parsing skills you'll learn later to parse more exotic formats. Both ideas are sketched below.
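Here's a sketch of both approaches; the file name is hypothetical:
# Read only the first 10,000 rows while you iterate on the column specification
sample <- read_csv("big-file.csv", n_max = 10000,
                   col_types = cols(.default = col_character()))
# Or inspect the raw text first
read_lines("big-file.csv", n_max = 5)  # the first five lines as a character vector
read_file("big-file.csv")              # the whole file as a length-1 string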
Writing to a file¶
readr also comes with two useful functions for writing data back to disk: write_csv() and write_tsv(). Both functions increase the chances of the output file being read back in correctly by:
- Always encoding strings in UTF-8. If you want to export a csv file to Excel, use write_excel_csv() - this writes a special character (a "byte order mark") at the start of the file which tells Excel that you're using the UTF-8 encoding.
- Saving dates and date-times in ISO8601 format so they are easily parsed elsewhere.
The most important arguments are x (the data frame to save) and path (the location to save it). You can also specify how missing values are written with na, and if you want to append to an existing file.
write_csv(challenge, "challenge.csv")
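A sketch of the optional arguments; the second file name is hypothetical:
write_csv(challenge, "challenge.csv", na = "")            # write missing values as empty fields
write_csv(challenge, "challenge-log.csv", append = TRUE)  # add rows to an existing file instead of overwriting it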
Note that the type information is lost when you save to csv:
challenge
x | y |
---|---|
<dbl> | <date> |
404 | NA |
4172 | NA |
3004 | NA |
787 | NA |
37 | NA |
⋮ | ⋮ |
0.16351634 | 2018-03-29 |
0.47193898 | 2014-08-04 |
0.71831865 | 2015-08-16 |
0.26987859 | 2020-02-04 |
0.60823719 | 2019-01-06 |
write_csv(challenge, "challenge-2.csv")
read_csv("challenge-2.csv")
Rows: 2000 Columns: 2 ── Column specification ──────────────────────────────────────────────────────── Delimiter: "," dbl (1): x date (1): y ℹ Use `spec()` to retrieve the full column specification for this data. ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
x | y |
---|---|
<dbl> | <date> |
404 | NA |
4172 | NA |
3004 | NA |
787 | NA |
37 | NA |
⋮ | ⋮ |
0.16351634 | 2018-03-29 |
0.47193898 | 2014-08-04 |
0.71831865 | 2015-08-16 |
0.26987859 | 2020-02-04 |
0.60823719 | 2019-01-06 |
This makes csvs a little unreliable for caching interim results---you need to recreate the column specification every time you load in. There are two alternatives:
- write_rds() and read_rds() are uniform wrappers around the base functions readRDS() and saveRDS(). These store data in R's custom binary format called RDS:
write_rds(challenge, "challenge.rds")
read_rds("challenge.rds")
x | y |
---|---|
<dbl> | <date> |
404 | NA |
4172 | NA |
3004 | NA |
787 | NA |
37 | NA |
⋮ | ⋮ |
0.16351634 | 2018-03-29 |
0.47193898 | 2014-08-04 |
0.71831865 | 2015-08-16 |
0.26987859 | 2020-02-04 |
0.60823719 | 2019-01-06 |
- The feather package implements a fast binary file format that can be shared across programming languages:
install.packages("feather")
library(feather)
write_feather(challenge, "challenge.feather")
read_feather("challenge.feather")
x | y |
---|---|
<dbl> | <date> |
404 | NA |
4172 | NA |
3004 | NA |
787 | NA |
37 | NA |
⋮ | ⋮ |
0.16351634 | 2018-03-29 |
0.47193898 | 2014-08-04 |
0.71831865 | 2015-08-16 |
0.26987859 | 2020-02-04 |
0.60823719 | 2019-01-06 |
Feather tends to be faster than RDS and is usable outside of R. RDS supports list-columns (which you'll learn about in [many models]), which feather currently does not.
file.remove("challenge-2.csv")
file.remove("challenge.rds")
Other types of data¶
To get other types of data into R, we recommend starting with the tidyverse packages listed below. They're certainly not perfect, but they are a good place to start. For rectangular data:
- haven reads SPSS, Stata, and SAS files.
- readxl reads excel files (both .xls and .xlsx).
- DBI, along with a database specific backend (e.g. RMySQL, RSQLite, RPostgreSQL etc), allows you to run SQL queries against a database and return a data frame (see the sketch below).
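For instance, a minimal sketch of the database workflow with DBI and the RSQLite backend (the file and table names are hypothetical), plus readxl's bundled example workbook:
library(DBI)
con <- dbConnect(RSQLite::SQLite(), "my-db.sqlite")  # hypothetical SQLite database
df  <- dbGetQuery(con, "SELECT * FROM my_table")     # hypothetical table; returns a data frame
dbDisconnect(con)

library(readxl)
read_excel(readxl_example("datasets.xlsx"))  # datasets.xlsx ships with readxl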
For hierarchical data: use jsonlite, by Jeroen Ooms for json, and xml2 for XML.
For more exotic file types, try the R data import/export manual and the rio package.