library(torch)
Central to data ingestion and preprocessing are datasets and data loaders. A dataset is an object that holds the data you want to use, while a data loader is an object that loads data from a dataset and provides a way to access subsets of that data. By using datasets and data loaders you get a clear process for organizing your data and passing it to other components of the torch package, such as model training.
Built into torch are premade datasets that are commonly used in machine learning, such as the MNIST handwriting dataset (mnist_dataset()). Most of the prebuilt datasets relate to image recognition and natural language processing.
Below is an example of how you would use the MNIST dataset with a data loader. First, the mnist_dataset() function is used to create ds, which is a Dataset object (dir is assumed to be a local path where the files will be downloaded). Then a data loader dl is created to query that data. Finally, that data loader is used in a coro::loop() to iterate over batches of that data:
# `dir` is a local path where the MNIST files will be downloaded; any writable directory works
dir <- "./mnist"

# Create a dataset from included data
ds <- mnist_dataset(
  dir,
  download = TRUE,
  transform = function(x) {
    # rescale pixel values to [0, 1) and add a channel dimension
    x <- x$to(dtype = torch_float())/256
    x[newaxis,..]
  }
)

# Create the loader to query the data in batches
dl <- dataloader(ds, batch_size = 32, shuffle = TRUE)

coro::loop(for (b in dl) {
  # use the data from each batch `b` here
  # ...
})
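Before moving on, it can be useful to peek at a single batch outside the loop. The quick check below reuses the iterator interface shown later in this vignette; with batch_size = 32 and the transform above we would expect an image batch of shape 32 x 1 x 28 x 28 and 32 labels:

b <- dl$.iter()$.next()
b[[1]]$shape   # image batch, expected 32 x 1 x 28 x 28
b[[2]]$shape   # labels, expected 32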
See vignettes/examples/mnist-cnn.R for a complete example.
In the more common situation where you have a unique set of data that isn’t included with the package, you’ll need to make a custom Dataset subclass by using the dataset() function. The custom Dataset subclass is an abstract R6 container for the data. It will need to know some information about the particular dataset, such as how to iterate over it.
At a minimum, when using dataset() to create a custom Dataset class you’ll want to define the following:

- name - for convenience, keep track of what type of data it is.
- initialize - a member function defining how to create an object of that class. It can take no parameters, when all objects of that class will be the same, or specific parameters, usually when different objects should hold different data.
- .getitem - this member function is called when the data loader goes to pull a new batch of data. You can include preprocessing in this function if needed. Note that the function will be called extremely frequently, so it’s advantageous to make it fast.
- .length - this returns the amount of data in the dataset, which is helpful for users.

While this may sound complicated, the base logic is only a few steps (a minimal skeleton is sketched just below); the complexity often comes from the data itself and how involved your preprocessing is. Here we show how to create your own Dataset class to train on Allison Horst's penguins.
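Before turning to the penguins, here is a minimal sketch of those four elements for a toy dataset that simply wraps an in-memory tensor (the name toy_dataset is made up for illustration):

# a minimal sketch: a dataset that wraps a single in-memory tensor
toy_dataset <- dataset(
  name = "toy_dataset",
  initialize = function(x) {
    self$x <- x
  },
  .getitem = function(index) {
    self$x[index, ]
  },
  .length = function() {
    self$x$size(1)
  }
)

# usage: wrap a 10 x 3 tensor of random values
ds_toy <- toy_dataset(torch_randn(10, 3))
ds_toy$.length()   # 10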
| Component | Dataset R6 class | Dataset object | DataLoader object | batch |
|---|---|---|---|---|
| Description | Output of dataset(). When calling dataset() it should have at least a name, initialize, .getitem, and .length. Output is a Dataset generator. | Object created by using the custom Dataset generator. Actually stores the data. | Object that queries the Dataset object to pull batches of data. | The subsample of data used for things like model training. |
| Penguin example | penguins_dataset | tuxes | dl | b |
library(palmerpenguins)
library(magrittr)
penguins
In addition, any number of helper functions can be defined.

Here, we assume the penguins have already been loaded, and all preprocessing consists of removing rows with NA values, transforming factors to numeric codes, and converting from R data types to torch tensors.
In .getitem, we essentially decide how this data is going to be used: all variables besides species go into x, the predictor, and species will constitute y, the target. Predictor and target are returned in a list, to be accessed as batch[[1]] and batch[[2]] during training.
penguins_dataset <- dataset(

  name = "penguins_dataset",

  initialize = function() {
    self$data <- self$prepare_penguin_data()
  },

  .getitem = function(index) {
    x <- self$data[index, 2:-1]
    y <- self$data[index, 1]$to(torch_long())

    list(x, y)
  },

  .length = function() {
    self$data$size()[[1]]
  },

  prepare_penguin_data = function() {
    input <- na.omit(penguins)
    # conveniently, the categorical data are already factors
    input$species <- as.numeric(input$species)
    input$island <- as.numeric(input$island)
    input$sex <- as.numeric(input$sex)

    input <- as.matrix(input)
    torch_tensor(input)
  }
)
Let’s create the dataset, query its length, and look at its first item:
tuxes <- penguins_dataset()
tuxes$.length()
tuxes$.getitem(1)
To be able to iterate over tuxes, we need a data loader (we override the default batch size of 1):
dl <- tuxes %>% dataloader(batch_size = 8)
Calling .length() on a data loader (as opposed to a dataset) will return the number of batches we have:
dl$.length()
And we can create an iterator to inspect the first batch:
iter <- dl$.iter()
b <- iter$.next()
b
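A quick shape check confirms that each batch matches what the network defined below will expect: 8 penguins per batch, with 7 predictor columns each:

b[[1]]$shape   # predictors, expected 8 x 7
b[[2]]$shape   # targets, expected 8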
To train a network, we can use coro::loop() to iterate over batches.

Our example network is very simple. (In reality, we would want to treat island as the categorical variable it is, and either one-hot-encode or embed it.)
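As an aside, one-hot encoding island could be done with base R inside prepare_penguin_data(), before the data are converted to a matrix. This is only a sketch of the idea (island_onehot is a made-up name); the resulting three indicator columns would replace the single island column, and the first nn_linear() layer below would then need 9 inputs instead of 7:

# sketch: indicator columns for island instead of a single numeric code
input <- na.omit(penguins)
island_onehot <- model.matrix(~ island - 1, data = input)
head(island_onehot)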
net <- nn_module(
  "PenguinNet",
  initialize = function() {
    self$fc1 <- nn_linear(7, 32)
    self$fc2 <- nn_linear(32, 3)
  },
  forward = function(x) {
    x %>%
      self$fc1() %>%
      nnf_relu() %>%
      self$fc2() %>%
      nnf_log_softmax(dim = 1)
  }
)

model <- net()
We still need an optimizer:
optimizer <- optim_sgd(model$parameters, lr = 0.01)
And we’re ready to train:
for (epoch in 1:10) {

  l <- c()

  coro::loop(for (b in dl) {
    optimizer$zero_grad()
    output <- model(b[[1]])
    loss <- nnf_nll_loss(output, b[[2]])
    loss$backward()
    optimizer$step()
    l <- c(l, loss$item())
  })

  cat(sprintf("Loss at epoch %d: %3f\n", epoch, mean(l)))
}
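Since the network ends in nnf_log_softmax(), exponentiating its output turns it into class probabilities. As a quick check, we can reuse the batch b inspected earlier:

probs <- torch_exp(model(b[[1]]))
probs$shape   # expected 8 x 3: one row of class probabilities per penguin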
Through this example we have trained a deep learning model by using dataset() to define a custom class and then loading it in batches with a data loader. By using a dataset and a data loader we were able to write code that keeps the data preprocessing and setup separate from the model training itself.
When using datasets and data loaders you may find that, under certain conditions, your code runs more slowly than you’d expect. In some situations the overhead of using data loaders and datasets can impact overall performance. This may change over time as the R/C++ integration of torch improves, but for now there are some workarounds:
.getbatch() instead of .getitem()
By default a data loader will use the .getitem() member function to pull each datapoint individually. You can speed this up by switching to .getbatch(), which pulls all the datapoints in a batch at once:
penguins_dataset_batching <- dataset(

  name = "penguins_dataset_batching",

  initialize = function() {
    self$data <- self$prepare_penguin_data()
  },

  # the only change is that this went from .getitem to .getbatch
  .getbatch = function(index) {
    x <- self$data[index, 2:-1]
    y <- self$data[index, 1]$to(torch_long())

    list(x, y)
  },

  .length = function() {
    self$data$size()[[1]]
  },

  prepare_penguin_data = function() {
    input <- na.omit(penguins)
    # conveniently, the categorical data are already factors
    input$species <- as.numeric(input$species)
    input$island <- as.numeric(input$island)
    input$sex <- as.numeric(input$sex)

    input <- as.matrix(input)
    torch_tensor(input)
  }
)
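Using the batching version is otherwise identical; the data loader notices that .getbatch() is defined and calls it with the whole vector of indices for a batch. (The names tuxes_batching and dl_batching are just for illustration.)

tuxes_batching <- penguins_dataset_batching()
dl_batching <- dataloader(tuxes_batching, batch_size = 8)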
In many instances the only change needed is to replace .getitem with .getbatch, since the .getitem function is often already written to handle a vector of indices. In this penguins example the .getitem function used the index to select rows, which works just as well when the index is a vector.
If switching to .getbatch does not provide the benefit you were expecting, you could also remove the dataset entirely and manually pass the data. At this point you are trading readability and convenience for speed.
input <- na.omit(penguins)
# conveniently, the categorical data are already factors
input$species <- as.numeric(input$species)
input$island <- as.numeric(input$island)
input$sex <- as.numeric(input$sex)

input <- as.matrix(input)
input <- torch_tensor(input)

data_x <- input[, 2:-1]
data_y <- input[, 1]$to(torch_long())

batch_size <- 8
num_data_points <- data_y$size(1)
num_batches <- floor(num_data_points/batch_size)

for (epoch in 1:10) {

  # reset the record of per-batch losses for this epoch
  l <- c()

  # rearrange the data each epoch
  permute <- torch_randperm(num_data_points) + 1L
  data_x <- data_x[permute]
  data_y <- data_y[permute]

  # manually loop through the batches
  for (batch_idx in 1:num_batches) {
    # here index is a vector of the indices in the batch
    index <- (batch_size*(batch_idx - 1) + 1):(batch_idx*batch_size)

    x <- data_x[index]
    y <- data_y[index]$to(torch_long())

    optimizer$zero_grad()
    output <- model(x)
    loss <- nnf_nll_loss(output, y)
    loss$backward()
    optimizer$step()
    l <- c(l, loss$item())
  }

  cat(sprintf("Loss at epoch %d: %3f\n", epoch, mean(l)))
}