You can install the sparklyr package from CRAN as follows:
install.packages("sparklyr")
You should also install a local version of Spark for development purposes:
library(sparklyr)
spark_install()
To upgrade to the latest version of sparklyr, run the following command and restart your R session:
install.packages("devtools")
devtools::install_github("sparklyr/sparklyr")
You can connect to both local instances of Spark as well as remote Spark clusters. Here we’ll connect to a local instance of Spark via the spark_connect function:
library(sparklyr)
sc <- spark_connect(master = "local")
The returned Spark connection (sc) provides a remote dplyr data source to the Spark cluster.
For more information on connecting to remote Spark clusters see the Deployment section of the sparklyr website.
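For example, a connection to a cluster managed by YARN might look like the following (a sketch only: the master value, Spark version, and required environment depend on your deployment):

# hypothetical remote connection; assumes SPARK_HOME and the Hadoop/YARN
# configuration are already set on the machine running R
sc <- spark_connect(master = "yarn-client", version = "3.0.0")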
We can now use all of the available dplyr verbs against the tables within the cluster.
We’ll start by copying some datasets from R into the Spark cluster (note that you may need to install the nycflights13 and Lahman packages in order to execute this code):
install.packages(c("nycflights13", "Lahman"))
library(dplyr)
iris_tbl <- copy_to(sc, iris, overwrite = TRUE)
flights_tbl <- copy_to(sc, nycflights13::flights, "flights", overwrite = TRUE)
batting_tbl <- copy_to(sc, Lahman::Batting, "batting", overwrite = TRUE)

src_tbls(sc)
#> [1] "batting" "flights" "iris"
To start with, here’s a simple filtering example:
# filter by departure delay and print the first few records
flights_tbl %>% filter(dep_delay == 2)
#> # Source: spark<?> [?? x 19]
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 542 540 2 923 850
#> 3 2013 1 1 702 700 2 1058 1014
#> 4 2013 1 1 715 713 2 911 850
#> 5 2013 1 1 752 750 2 1025 1029
#> 6 2013 1 1 917 915 2 1206 1211
#> 7 2013 1 1 932 930 2 1219 1225
#> 8 2013 1 1 1028 1026 2 1350 1339
#> 9 2013 1 1 1042 1040 2 1325 1326
#> 10 2013 1 1 1231 1229 2 1523 1529
#> # … with more rows, and 11 more variables: arr_delay <dbl>, carrier <chr>,
#> # flight <int>, tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>,
#> # distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
Introduction to dplyr provides additional dplyr examples you can try. For example, consider the last example from the tutorial, which plots data on flight delays:
delay <- flights_tbl %>%
  group_by(tailnum) %>%
  summarise(count = n(), dist = mean(distance), delay = mean(arr_delay)) %>%
  filter(count > 20, dist < 2000, !is.na(delay)) %>%
  collect()
# plot delays
library(ggplot2)
ggplot(delay, aes(dist, delay)) +
geom_point(aes(size = count), alpha = 1/2) +
geom_smooth() +
scale_size_area(max_size = 2)
#> `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
dplyr window functions are also supported, for example:
batting_tbl %>%
  select(playerID, yearID, teamID, G, AB:H) %>%
  arrange(playerID, yearID, teamID) %>%
  group_by(playerID) %>%
  filter(min_rank(desc(H)) <= 2 & H > 0)
#> # Source: spark<?> [?? x 7]
#> # Groups: playerID
#> # Ordered by: playerID, yearID, teamID
#> playerID yearID teamID G AB R H
#> <chr> <int> <chr> <int> <int> <int> <int>
#> 1 aaronha01 1959 ML1 154 629 116 223
#> 2 aaronha01 1963 ML1 161 631 121 201
#> 3 abbotji01 1999 MIL 20 21 0 2
#> 4 abnersh01 1992 CHA 97 208 21 58
#> 5 abnersh01 1990 SDN 91 184 17 45
#> 6 acklefr01 1963 CHA 2 5 0 1
#> 7 acklefr01 1964 CHA 3 1 0 1
#> 8 acunaro01 2019 ATL 156 626 127 175
#> 9 acunaro01 2018 ATL 111 433 78 127
#> 10 adamecr01 2016 COL 121 225 25 49
#> # … with more rows
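Under the hood, these dplyr verbs are translated to Spark SQL; you can inspect the generated query with show_query() (a quick illustration using the table from above):

# print the Spark SQL that dplyr generates for a pipeline
flights_tbl %>%
  filter(dep_delay == 2) %>%
  show_query()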
For additional documentation on using dplyr with Spark see the dplyr section of the sparklyr website.
It’s also possible to execute SQL queries directly against tables within a Spark cluster. The spark_connection object implements a DBI interface for Spark, so you can use dbGetQuery() to execute SQL and return the result as an R data frame:
library(DBI)
iris_preview <- dbGetQuery(sc, "SELECT * FROM iris LIMIT 10")
iris_preview
#> Sepal_Length Sepal_Width Petal_Length Petal_Width Species
#> 1 5.1 3.5 1.4 0.2 setosa
#> 2 4.9 3.0 1.4 0.2 setosa
#> 3 4.7 3.2 1.3 0.2 setosa
#> 4 4.6 3.1 1.5 0.2 setosa
#> 5 5.0 3.6 1.4 0.2 setosa
#> 6 5.4 3.9 1.7 0.4 setosa
#> 7 4.6 3.4 1.4 0.3 setosa
#> 8 5.0 3.4 1.5 0.2 setosa
#> 9 4.4 2.9 1.4 0.2 setosa
#> 10 4.9 3.1 1.5 0.1 setosa
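Beyond dbGetQuery(), other DBI generics can be useful; the following is a sketch only, assuming the spark_connection implements these common DBI methods (dbGetQuery() is the one shown above, and "mtcars_dbi" is an illustrative table name):

# list the tables registered with the cluster (assumed DBI method)
dbListTables(sc)

# copy a local data frame into Spark under an illustrative name
dbWriteTable(sc, "mtcars_dbi", mtcars)
dbGetQuery(sc, "SELECT count(*) FROM mtcars_dbi")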
You can orchestrate machine learning algorithms in a Spark cluster via the machine learning functions within sparklyr. These functions connect to a set of high-level APIs built on top of DataFrames that help you create and tune machine learning workflows.
Here’s an example where we use ml_linear_regression to fit a linear regression model. We’ll use the built-in mtcars dataset, and see if we can predict a car’s fuel consumption (mpg) based on its weight (wt), and the number of cylinders the engine contains (cyl). We’ll assume in each case that the relationship between mpg and each of our features is linear.
# copy mtcars into spark
mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)

# transform our data set, and then partition into 'training', 'test'
partitions <- mtcars_tbl %>%
  filter(hp >= 100) %>%
  mutate(cyl8 = cyl == 8) %>%
  sdf_partition(training = 0.5, test = 0.5, seed = 1099)

# fit a linear model to the training dataset
fit <- partitions$training %>%
  ml_linear_regression(response = "mpg", features = c("wt", "cyl"))

fit
#> Formula: mpg ~ wt + cyl
#>
#> Coefficients:
#> (Intercept) wt cyl
#> 37.1464554 -4.3408005 -0.5830515
For linear regression models produced by Spark, we can use summary() to learn a bit more about the quality of our fit, and the statistical significance of each of our predictors.
summary(fit)
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.5134 -0.9158 -0.1683 1.1503 2.1534
#>
#> Coefficients:
#> (Intercept) wt cyl
#> 37.1464554 -4.3408005 -0.5830515
#>
#> R-Squared: 0.9428
#> Root Mean Squared Error: 1.409
Spark machine learning supports a wide array of algorithms and feature transformations, and, as illustrated above, it’s easy to chain these functions together with dplyr pipelines; a small sketch follows. To learn more see the machine learning section.
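For instance, a feature transformer can sit directly inside a pipeline (illustrative only: ft_binarizer() is one of sparklyr's ft_* feature transformers, and the heavy column name and threshold here are made up):

# add a 0/1 indicator column for cars heavier than the chosen threshold,
# using a Spark ML feature transformer inside a dplyr pipeline
mtcars_tbl %>%
  select(mpg, wt) %>%
  ft_binarizer("wt", "heavy", threshold = 3.5) %>%
  head()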
You can read and write data in CSV, JSON, and Parquet formats. Data can be stored in HDFS, S3, or on the local filesystem of cluster nodes.
temp_csv <- tempfile(fileext = ".csv")
temp_parquet <- tempfile(fileext = ".parquet")
temp_json <- tempfile(fileext = ".json")
spark_write_csv(iris_tbl, temp_csv)
iris_csv_tbl <- spark_read_csv(sc, "iris_csv", temp_csv)

spark_write_parquet(iris_tbl, temp_parquet)
iris_parquet_tbl <- spark_read_parquet(sc, "iris_parquet", temp_parquet)

spark_write_json(iris_tbl, temp_json)
iris_json_tbl <- spark_read_json(sc, "iris_json", temp_json)

src_tbls(sc)
#> [1] "batting" "flights" "iris" "iris_csv" "iris_json"
#> [6] "iris_parquet" "mtcars"
You can execute arbitrary R code across your cluster using spark_apply(). For example, we can apply rgamma over iris as follows:
spark_apply(iris_tbl, function(data) {
  data[1:4] + rgamma(1, 2)
})
#> # Source: spark<?> [?? x 4]
#> Sepal_Length Sepal_Width Petal_Length Petal_Width
#> <dbl> <dbl> <dbl> <dbl>
#> 1 6.45 4.85 2.75 1.55
#> 2 6.25 4.35 2.75 1.55
#> 3 6.05 4.55 2.65 1.55
#> 4 5.95 4.45 2.85 1.55
#> 5 6.35 4.95 2.75 1.55
#> 6 6.75 5.25 3.05 1.75
#> 7 5.95 4.75 2.75 1.65
#> 8 6.35 4.75 2.85 1.55
#> 9 5.75 4.25 2.75 1.55
#> 10 6.25 4.45 2.85 1.45
#> # … with more rows
You can also group by columns to perform an operation over each group of rows and make use of any package within the closure:
spark_apply(
  iris_tbl,
  function(e) broom::tidy(lm(Petal_Width ~ Petal_Length, e)),
  columns = c("term", "estimate", "std.error", "statistic", "p.value"),
  group_by = "Species"
)
#> # Source: spark<?> [?? x 6]
#> Species term estimate std.error statistic p.value
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 versicolor (Intercept) -0.0843 0.161 -0.525 6.02e- 1
#> 2 versicolor Petal_Length 0.331 0.0375 8.83 1.27e-11
#> 3 virginica (Intercept) 1.14 0.379 2.99 4.34e- 3
#> 4 virginica Petal_Length 0.160 0.0680 2.36 2.25e- 2
#> 5 setosa (Intercept) -0.0482 0.122 -0.396 6.94e- 1
#> 6 setosa Petal_Length 0.201 0.0826 2.44 1.86e- 2
The facilities used internally by sparklyr for its dplyr and machine learning interfaces are available to extension packages. Since Spark is a general purpose cluster computing system, there are many potential applications for extensions (e.g. interfaces to custom machine learning pipelines, interfaces to third-party Spark packages, etc.).
Here’s a simple example that wraps a Spark text file line counting function with an R function:
# write a CSV
tempfile <- tempfile(fileext = ".csv")
write.csv(nycflights13::flights, tempfile, row.names = FALSE, na = "")

# define an R interface to Spark line counting
count_lines <- function(sc, path) {
  spark_context(sc) %>%
    invoke("textFile", path, 1L) %>%
    invoke("count")
}
# call spark to count the lines of the CSV
count_lines(sc, tempfile)
#> [1] 336777
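The same invoke() machinery extends to static methods on arbitrary JVM classes; a minimal sketch:

# call a static Java method through the Spark connection
invoke_static(sc, "java.lang.Math", "hypot", 3, 4)
#> [1] 5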
To learn more about creating extensions see the Extensions section of the sparklyr website.
You can cache a table into memory with:
tbl_cache(sc, "batting")
and unload from memory using:
tbl_uncache(sc, "batting")
You can view the Spark web console using the spark_web() function:
spark_web(sc)
You can show the log using the spark_log() function:
spark_log(sc, n = 10)
#> 22/05/25 15:05:25 INFO BlockManagerInfo: Removed broadcast_84_piece0 on localhost:58163 in memory (size: 9.2 KiB, free: 912.1 MiB)
#> 22/05/25 15:05:25 INFO BlockManagerInfo: Removed broadcast_87_piece0 on localhost:58163 in memory (size: 18.4 KiB, free: 912.1 MiB)
#> 22/05/25 15:05:25 INFO BlockManagerInfo: Removed broadcast_77_piece0 on localhost:58163 in memory (size: 16.7 KiB, free: 912.1 MiB)
#> 22/05/25 15:05:25 INFO Executor: Finished task 0.0 in stage 67.0 (TID 83). 1004 bytes result sent to driver
#> 22/05/25 15:05:25 INFO TaskSetManager: Finished task 0.0 in stage 67.0 (TID 83) in 244 ms on localhost (executor driver) (1/1)
#> 22/05/25 15:05:25 INFO TaskSchedulerImpl: Removed TaskSet 67.0, whose tasks have all completed, from pool
#> 22/05/25 15:05:25 INFO DAGScheduler: ResultStage 67 (count at NativeMethodAccessorImpl.java:0) finished in 0.259 s
#> 22/05/25 15:05:25 INFO DAGScheduler: Job 49 is finished. Cancelling potential speculative or zombie tasks for this job
#> 22/05/25 15:05:25 INFO TaskSchedulerImpl: Killing all running tasks in stage 67: Stage finished
#> 22/05/25 15:05:25 INFO DAGScheduler: Job 49 finished: count at NativeMethodAccessorImpl.java:0, took 0.268655 s
Finally, we disconnect from Spark:
spark_disconnect(sc)
The latest RStudio Preview Release of the RStudio IDE includes integrated support for Spark and the sparklyr package, including tools for creating and managing Spark connections, browsing the tables of Spark DataFrames, and previewing their data.
Once you’ve installed the sparklyr package, you should find a new Spark pane within the IDE. This pane includes a New Connection dialog which can be used to make connections to local or remote Spark instances.
Once you’ve connected to Spark you’ll be able to browse the tables contained within the Spark cluster and preview Spark DataFrames using the standard RStudio data viewer.
You can also connect to Spark through Livy via the New Connection dialog.
The RStudio IDE features for sparklyr are available now as part of the RStudio Preview Release.
rsparkling is a CRAN package from H2O that extends sparklyr to provide an interface into Sparkling Water. For instance, the following example installs, configures and runs h2o.glm:
library(rsparkling)
library(sparklyr)
library(dplyr)
library(h2o)
sc <- spark_connect(master = "local", version = "2.3.2")
mtcars_tbl <- copy_to(sc, mtcars, "mtcars", overwrite = TRUE)

mtcars_h2o <- as_h2o_frame(sc, mtcars_tbl, strict_version_check = FALSE)

mtcars_glm <- h2o.glm(x = c("wt", "cyl"),
                      y = "mpg",
                      training_frame = mtcars_h2o,
                      lambda_search = TRUE)

mtcars_glm
#> Model Details:
#> ==============
#>
#> H2ORegressionModel: glm
#> Model ID: GLM_model_R_1527265202599_1
#> GLM Model: summary
#> family link regularization
#> 1 gaussian identity Elastic Net (alpha = 0.5, lambda = 0.1013 )
#> lambda_search
#> 1 nlambda = 100, lambda.max = 10.132, lambda.min = 0.1013, lambda.1se = -1.0
#> number_of_predictors_total number_of_active_predictors
#> 1 2 2
#> number_of_iterations training_frame
#> 1 100 frame_rdd_31_ad5c4e88ec97eb8ccedae9475ad34e02
#>
#> Coefficients: glm coefficients
#> names coefficients standardized_coefficients
#> 1 Intercept 38.941654 20.090625
#> 2 cyl -1.468783 -2.623132
#> 3 wt -3.034558 -2.969186
#>
#> H2ORegressionMetrics: glm
#> ** Reported on training data. **
#>
#> MSE: 6.017684
#> RMSE: 2.453097
#> MAE: 1.940985
#> RMSLE: 0.1114801
#> Mean Residual Deviance : 6.017684
#> R^2 : 0.8289895
#> Null Deviance :1126.047
#> Null D.o.F. :31
#> Residual Deviance :192.5659
#> Residual D.o.F. :29
#> AIC :156.2425
spark_disconnect(sc)
Livy enables remote connections to Apache Spark clusters. Note, however, that connecting to Spark clusters through Livy is much slower than any other connection method.
Before connecting to Livy, you will need the connection information for an existing service running Livy. Otherwise, to test livy in your local environment, you can install it and run it locally as follows:
livy_install()
livy_service_start()
To connect, use the Livy service address as master and method = "livy" in spark_connect(). Once the connection completes, use sparklyr as usual, for instance:
sc <- spark_connect(master = "http://localhost:8998", method = "livy", version = "3.0.0")
copy_to(sc, iris, overwrite = TRUE)
spark_disconnect(sc)
Once you are done using livy locally, you should stop this service with:
livy_service_stop()
To connect to remote livy clusters that support basic authentication, connect as:
config <- livy_config(username = "<username>", password = "<password>")
sc <- spark_connect(master = "<address>", method = "livy", config = config)
spark_disconnect(sc)
Databricks Connect allows you to connect sparklyr to a remote Databricks cluster. You can install the Databricks Connect Python package and use it to submit Spark jobs written with sparklyr APIs and have them execute remotely on a Databricks cluster instead of in the local Spark session.
To use sparklyr with Databricks Connect, first launch a cluster on Databricks. Then follow these instructions to set up the client: run
databricks-connect configure
and provide the configuration information: the Databricks account URL (of the form https://<account>.cloud.databricks.com), your user token, the cluster ID, and the port (the default port number is 15001).
To configure sparklyr with Databricks Connect, set the following environment variables:
export SPARK_VERSION=2.4.4
Now simply create a Spark connection as follows:
<- system("databricks-connect get-spark-home")
spark_home <- spark_connect(method = "databricks",
sc spark_home = spark_home)
copy_to(sc, iris, overwrite = TRUE)
spark_disconnect(sc)