Rcrawler: Web Crawler and Scraper
Performs parallel web crawling and web scraping. It is designed to crawl, parse and store web pages to produce data that can be used directly in analysis applications. For details see Khalil and Fakir (2017) <doi:10.1016/j.softx.2017.04.004>.
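The main crawling function is Rcrawler(). Below is a minimal usage sketch, assuming the argument names documented in the package help (?Rcrawler); the site URL and XPath patterns are placeholders chosen for illustration, not taken from the package itself.

library(Rcrawler)

# Crawl a site with 4 parallel workers and 4 simultaneous connections.
# Pages are downloaded to a local repository and an index of crawled
# URLs is built for later analysis.
Rcrawler(Website = "https://www.example.com", no_cores = 4, no_conn = 4)

# Optionally scrape specific fields during the crawl by supplying XPath
# patterns (hypothetical patterns and field names shown here).
Rcrawler(Website = "https://www.example.com", no_cores = 4, no_conn = 4,
         ExtractXpathPat = c("//h1", "//article//p"),
         PatternsNames = c("title", "body"))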
Version: 0.1.9-1
Imports: httr, xml2, data.table, foreach, doParallel, parallel, selectr, webdriver, callr, jsonlite
Published: 2018-11-11
Author: Salim Khalil [aut, cre]
Maintainer: Salim Khalil <khalilsalim1 at gmail.com>
BugReports: https://github.com/salimk/Rcrawler/issues
License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)]
URL: https://github.com/salimk/Rcrawler/
NeedsCompilation: no
In views: WebTechnologies
CRAN checks: Rcrawler results
Documentation:
Downloads:
Linking:
Please use the canonical form https://CRAN.R-project.org/package=Rcrawler to link to this page.