JS Crawling via chrome headless #288

Open · wants to merge 26 commits into base: master

Changes from all commits · 26 commits
- 38cb059 · fixed (arnaudmolo, Jan 29, 2019)
- 0762c30 · download chrome version (arnaudmolo, Jan 29, 2019)
- 2f79b72 · Download chromium & chrome driver (arnaudmolo, Jan 30, 2019)
- f9179e8 · Working python 2.7 chrome installer (arnaudmolo, Jan 30, 2019)
- 6b3514b · remove useless changes (arnaudmolo, Jan 30, 2019)
- c7ce4e0 · change environement variable name (arnaudmolo, Jan 30, 2019)
- 1a707b0 · add versions link (arnaudmolo, Jan 30, 2019)
- 383471c · improve downloader script (boogheta, Jan 30, 2019)
- 95cff57 · ensure chromedriver binary is executable (boogheta, Jan 30, 2019)
- 7f3cb40 · make this script an actual developers testcase (boogheta, Jan 30, 2019)
- a550912 · remove old phantom binaries (boogheta, Jan 30, 2019)
- 1e5285d · do not restrict clicks to link (boogheta, Jan 30, 2019)
- 736b8c0 · start preparing paths mess (boogheta, Jan 31, 2019)
- 38a3170 · start migrating phantomJs to Chromium headless (boogheta, Jan 31, 2019)
- 8f17e20 · move JS crawling to crawler directory (for docker compat) (boogheta, Jan 31, 2019)
- 3b00778 · functional call of scrapy spider with headless chrome (boogheta, Jan 31, 2019)
- 2f1daf5 · lightfixes (boogheta, Jan 31, 2019)
- be94807 · install chromium in docker (boogheta, Feb 1, 2019)
- c9203d0 · Merge branch 'master' into grab-chrome (boogheta, Feb 1, 2019)
- 0a3c8a1 · fix chromium install in docker (boogheta, Feb 1, 2019)
- b2dcee2 · actually working chrome in docker (boogheta, Feb 1, 2019)
- 250e4c9 · reactivate phantom options (boogheta, Feb 1, 2019)
- 73dd15b · Merge branch 'master' into grab-chrome (arnaudmolo, Oct 21, 2019)
- 7c49941 · Add fixed versions for chromium & chromium driver (arnaudmolo, Oct 23, 2019)
- 89eda93 · 'Dsiable GPU' makes chromium crash (arnaudmolo, Oct 24, 2019)
- ae14897 · rename all panthomjs (arnaudmolo, Oct 24, 2019)
1 change: 0 additions & 1 deletion .dockerignore
@@ -1,2 +1 @@
doc
bin/hyphe-phantomjs-2.0.0
5 changes: 3 additions & 2 deletions .gitignore
@@ -1,4 +1,5 @@
bin/hyphe-phantomjs*
.vscode/

config/apache2.conf
config/config.json*
config/salt
@@ -24,7 +25,7 @@ hyphe_backend/crawler/hcicrawler/tlds_tree.py
build/
releases/
dependencies/

local-chromium/
#
# Mac files
#
Binary file removed bin/hyphe-phantomjs-2.0.0
8 changes: 0 additions & 8 deletions bin/install_phantom.sh

This file was deleted.

2 changes: 1 addition & 1 deletion bin/recreate_corpus_from_archives.py
@@ -220,7 +220,7 @@ def cli(archive_dir, corpus_name, api_url, filter_discovered, destroy_existing,
print >> sys.stderr, "WARNING: skipping crawl on problematic entity not recreated", c
continue
args = c['crawl_arguments']
res = hyphe_api.crawl.start(old_to_new[c['webentity_id']], args['start_urls'], args['follow_prefixes'], args['nofollow_prefixes'], args['discover_prefixes'], args['max_depth'], args['phantom'], {}, 1, args['cookies'], cid)
res = hyphe_api.crawl.start(old_to_new[c['webentity_id']], args['start_urls'], args['follow_prefixes'], args['nofollow_prefixes'], args['discover_prefixes'], args['max_depth'], args['headless'], {}, 1, args['cookies'], cid)
if 'code' not in res or res['code'] == 'fail':
print >> sys.stderr, 'WARNING: Could not start crawl', c, res

@@ -17,7 +17,7 @@
log_level = logging.DEBUG
webentities_file = 'start_crawling_webentities_from_csv.csv'
depth_crawl = 1
phantom_crawl = False
headless_crawl = False
# Should be of this format : 'http://hyphe.medialab.sciences-po.fr/INSTANCE-api/'
# if you usually access Hyphe from : 'http://hyphe.medialab.sciences-po.fr/INSTANCE/'
api_url = 'url_of_your_own_api'
@@ -54,7 +54,7 @@ def main():
logging.info(result['result'])
# Add the web entity to be crawled, if it STATUS is 'IN' and it has not been crawled before, ie. CRAWLING STATUS is 'UNCRAWLED'
if webentity['STATUS'] == 'IN' and webentity['CRAWLING STATUS'] == 'UNCRAWLED' :
result = hyphe_core.crawl_webentity(webentity['ID'], depth_crawl, phantom_crawl, webentity['STATUS'], 'prefixes', {}, corpus)
result = hyphe_core.crawl_webentity(webentity['ID'], depth_crawl, headless_crawl, webentity['STATUS'], 'prefixes', {}, corpus)
if result['code'] == 'fail' :
logging.error(result['message'])
else :
8 changes: 5 additions & 3 deletions config/config.json.example
@@ -52,12 +52,14 @@
"wp.me",
"ow.ly"
],
"phantom": {
"autoretry": false,
"timeout": 600,
"headless": {
"autoretry": true,
"timeout": 150,
"idle_timeout": 20,
"ajax_timeout": 15,
"whitelist_domains": [
"facebook.com",
"twitter.com"
]
},
"ADMIN_PASSWORD": null,
18 changes: 9 additions & 9 deletions doc/api.md
@@ -268,29 +268,29 @@ The API will always answer as such:
- __`crawl_webentity`:__
+ _`webentity_id`_ (mandatory)
+ _`depth`_ (optional, default: `0`)
+ _`phantom_crawl`_ (optional, default: `false`)
+ _`headless_crawl`_ (optional, default: `false`)
+ _`status`_ (optional, default: `"IN"`)
+ _`phantom_timeouts`_ (optional, default: `{}`)
+ _`headless_timeouts`_ (optional, default: `{}`)
+ _`corpus`_ (optional, default: `"--hyphe--"`)

Schedules a crawl for a `corpus` for an existing WebEntity defined by its `webentity_id` with a specific crawl `depth [int]`.
Optionally use PhantomJS by setting `phantom_crawl` to "true" and adjust specific `phantom_timeouts` as a json object with possible keys `timeout`/`ajax_timeout`/`idle_timeout`.
Optionally use a headless browser by setting `headless_crawl` to "true" and adjust specific `headless_timeouts` as a json object with possible keys `timeout`/`ajax_timeout`/`idle_timeout`.
Sets simultaneously the WebEntity's status to "IN" or optionally to another valid `status` ("undecided"/"out"/"discovered").
Will use the WebEntity's startpages if it has any or use otherwise the `corpus`' "default" `startmode` heuristic as defined in `propose_webentity_startpages` (use `crawl_webentity_with_startmode` to apply a different heuristic, see details in `propose_webentity_startpages`).
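
For illustration, here is a minimal sketch of scheduling such a crawl over Hyphe's JSON-RPC API with the renamed parameters. It assumes a jsonrpclib-style client; the API URL, webentity id and timeout values are placeholders, and the exact method namespace and argument order should be checked against your own install.

```python
# Minimal sketch only: URL, id and timeout values are hypothetical; the
# positional argument order follows the parameter list documented above.
import jsonrpclib

api = jsonrpclib.ServerProxy("http://localhost/hyphe-api/")  # hypothetical URL

res = api.crawl_webentity(
    42,                                                        # webentity_id
    1,                                                         # depth
    True,                                                      # headless_crawl
    "IN",                                                      # status
    {"timeout": 150, "idle_timeout": 20, "ajax_timeout": 15},  # headless_timeouts
    "--hyphe--"                                                # corpus
)
print(res)
```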


- __`crawl_webentity_with_startmode`:__
+ _`webentity_id`_ (mandatory)
+ _`depth`_ (optional, default: `0`)
+ _`phantom_crawl`_ (optional, default: `false`)
+ _`headless_crawl`_ (optional, default: `false`)
+ _`status`_ (optional, default: `"IN"`)
+ _`startmode`_ (optional, default: `"default"`)
+ _`cookies_string`_ (optional, default: `null`)
+ _`phantom_timeouts`_ (optional, default: `{}`)
+ _`headless_timeouts`_ (optional, default: `{}`)
+ _`corpus`_ (optional, default: `"--hyphe--"`)

Schedules a crawl for a `corpus` for an existing WebEntity defined by its `webentity_id` with a specific crawl `depth [int]`.
Optionally use PhantomJS by setting `phantom_crawl` to "true" and adjust specific `phantom_timeouts` as a json object with possible keys `timeout`/`ajax_timeout`/`idle_timeout`.
Optionally use a headless browser by setting `headless_crawl` to "true" and adjust specific `headless_timeouts` as a json object with possible keys `timeout`/`ajax_timeout`/`idle_timeout`.
Sets simultaneously the WebEntity's status to "IN" or optionally to another valid `status` ("undecided"/"out"/"discovered").
Optionally add a known `cookies_string` with auth rights to a protected website.
Optionally define the `startmode` strategy differently to the `corpus` "default one (see details in `propose_webentity_startpages`).
@@ -364,8 +364,8 @@ The API will always answer as such:
+ _`nofollow_prefixes`_ (mandatory)
+ _`follow_redirects`_ (optional, default: `null`)
+ _`depth`_ (optional, default: `0`)
+ _`phantom_crawl`_ (optional, default: `false`)
+ _`phantom_timeouts`_ (optional, default: `{}`)
+ _`headless_crawl`_ (optional, default: `false`)
+ _`headless_timeouts`_ (optional, default: `{}`)
+ _`download_delay`_ (optional, default: `1`)
+ _`cookies_string`_ (optional, default: `null`)
+ _`corpus`_ (optional, default: `"--hyphe--"`)
@@ -375,7 +375,7 @@ The API will always answer as such:
* a list of `follow_prefixes` to know which links to follow
* a list of `nofollow_prefixes` to know which links to avoid
* a `depth` corresponding to the maximum number of clicks done from the start pages
* `phantom_crawl` set to "true" to use PhantomJS for this crawl and optional `phantom_timeouts` as an object with keys among `timeout`/`ajax_timeout`/`idle_timeout`
* `headless_crawl` set to "true" to use a headless browser for this crawl and optional `headless_timeouts` as an object with keys among `timeout`/`ajax_timeout`/`idle_timeout`
* a `download_delay` corresponding to the time in seconds spent between two requests by the crawler.
* a known `cookies_string` with auth rights to a protected website.
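
As a rough illustration, the same kind of JSON-RPC client could call `crawl.start` directly, mirroring the positional order used in `bin/recreate_corpus_from_archives.py` above; every value below is a placeholder and the exact signature may differ on your install.

```python
# Placeholder values throughout; argument order mirrors the call in
# bin/recreate_corpus_from_archives.py and the parameter list above.
import jsonrpclib

api = jsonrpclib.ServerProxy("http://localhost/hyphe-api/")  # hypothetical URL

res = api.crawl.start(
    42,                           # webentity_id
    ["http://example.com/"],      # start urls
    ["http://example.com"],       # follow_prefixes
    [],                           # nofollow_prefixes
    ["bit.ly", "t.co"],           # follow_redirects (discover prefixes)
    1,                            # depth
    True,                         # headless_crawl
    {"timeout": 150},             # headless_timeouts
    1,                            # download_delay
    "",                           # cookies_string
    "--hyphe--"                   # corpus
)
```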

8 changes: 4 additions & 4 deletions doc/config.md
@@ -97,27 +97,27 @@ Typical important options to set depending on your situation are highlighted as
see default values for example, a list of domain names for which the crawler will automatically try to resolve redirections in order to avoid having links shorteners in the middle of the graph of links


- `phantom [object]`: settings for crawl jobs using PhantomJS to simulate a human browsing the webpages, scrolling and clicking on any possible interactive part (still experimental and unstable, do not modify unless you know what you're doing) (unavailable in Docker installs for now)
- `headless [object]`: settings for crawl jobs using a headless browser to simulate a human browsing the webpages, scrolling and clicking on any possible interactive part (still experimental and unstable, do not modify unless you know what you're doing) (unavailable in Docker installs for now)

+ `autoretry [bool]`:

`false` for now, set to `true` to enable auto retry of crawl jobs having apparently failed (depth > 0 & pages found < 3)

+ `timeout [int]`:

usually `600`, the maximum time in seconds PhantomJS is allowed to spend on one single page (10 minutes are required for instance to load all hidden content on big Facebook group pages for instance)
usually `600`, the maximum time in seconds the headless browser is allowed to spend on one single page (10 minutes are required for instance to load all hidden content on big Facebook group pages for instance)

+ `idle_timeout [int]`:

usually `20`, the maximum time in seconds after which PhantomJS will consider the page properly crawled if nothing happened within during that time
usually `20`, the maximum time in seconds after which the headless browser will consider the page properly crawled if nothing happened within during that time

+ `ajax_timeout [int]`:

usually `15`, the maximum time in seconds allowed to any Ajax query performed within a crawled page

+ `whitelist_domains [str array]`:

empty for now, a list of domain names for which the crawler will automatically use PhantomJS (meant for instance in the long term for Facebook, Twitter or Google)
empty for now, a list of domain names for which the crawler will automatically use the headless browser (meant for instance in the long term for Facebook, Twitter or Google)
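
As a rough sketch of how these settings could map onto the headless browser introduced in this PR (a local Chromium driven through chromedriver, per the commit history), the snippet below assumes the crawler uses Selenium; the binary paths are placeholders, and the idle-timeout and whitelist handling implemented inside the crawler are not shown.

```python
# Illustrative only: paths are placeholders and the exact wiring inside
# hcicrawler may differ. --disable-gpu is deliberately omitted, since commit
# 89eda93 notes it makes this Chromium build crash.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

HEADLESS = {"timeout": 600, "idle_timeout": 20, "ajax_timeout": 15}

opts = Options()
opts.add_argument("--headless")
opts.binary_location = "local-chromium/chrome-linux/chrome"   # placeholder path

driver = webdriver.Chrome(executable_path="local-chromium/chromedriver",  # placeholder
                          options=opts)
driver.set_page_load_timeout(HEADLESS["timeout"])     # hard cap on a single page
driver.set_script_timeout(HEADLESS["ajax_timeout"])   # cap on injected async JS
try:
    driver.get("http://example.com/")
finally:
    driver.quit()
```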


- __`ADMIN_PASSWORD [str]`__ (in Docker: __`HYPHE_ADMIN_PASSWORD`__):
13 changes: 2 additions & 11 deletions doc/install.md
@@ -239,15 +239,6 @@ By default the starter will display Hyphe's log in the console using `tail`. You

You can always check all logs in the `log` directory.

__Important:__ Crawling with a headless browser is currently only possible as an advanced option in Hyphe. Do not bother with this section except for advanced use or development.

## Extra) Install [PhantomJS](http://phantomjs.org/) [Unrequired for now]

__Important:__ Crawling with PhantomJS is currently only possible as an advanced option in Hyphe. Do not bother with this section except for advanced use or development.

Hyphe ships with a compiled binary of PhantomJS-2.0 for Ubuntu, unfortunately it is not cross-compatible with other distributions: so when on CentOS or Debian, you should compile your own from sources.

```bash
./bin/install_phantom.sh
```

Note that PhantomJS 1.9.7 is easily downloadable as binary, altough it uses a very outdated version of WebKit and PhantomJS 2+ is required to handle modern websites such as Facebook.
Hyphe ships with a compiled binary of Chromium & Chromedriver for Ubuntu, unfortunately it is not cross-compatible with other distributions: so when on CentOS or Debian, you should compile your own from sources.
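
For reference, here is a rough Python 2.7-style sketch of what the downloader added by this PR might look like, assuming the public Chromium snapshot bucket; the revision, target directory and archive layout below are placeholders rather than the versions actually pinned by the PR.

```python
# Hypothetical sketch (Python 2.7): the revision and paths are placeholders,
# not the versions pinned by this PR.
import os
import urllib
import zipfile

BASE = "https://commondatastorage.googleapis.com/chromium-browser-snapshots/Linux_x64"
REVISION = "706915"        # placeholder snapshot revision
TARGET = "local-chromium"  # matches the new .gitignore entry

def fetch(archive_name):
    if not os.path.isdir(TARGET):
        os.makedirs(TARGET)
    archive = os.path.join(TARGET, archive_name)
    urllib.urlretrieve("%s/%s/%s" % (BASE, REVISION, archive_name), archive)
    with zipfile.ZipFile(archive) as z:
        z.extractall(TARGET)

fetch("chrome-linux.zip")
fetch("chromedriver_linux64.zip")
# zip extraction drops the executable bit, hence the explicit chmod
# (cf. commit 95cff57 "ensure chromedriver binary is executable");
# the path inside the archive may differ by revision
os.chmod(os.path.join(TARGET, "chromedriver_linux64", "chromedriver"), 0o755)
```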
2 changes: 1 addition & 1 deletion docker-entrypoint.py
@@ -53,7 +53,7 @@ def strToBool(string):
if "HYPHE_CREATION_RULES" in environ: setConfig("creationRules", literal_eval(environ["HYPHE_CREATION_RULES"]),configdata)
if "HYPHE_FOLLOW_REDIRECTS" in environ: setConfig("discoverPrefixes", literal_eval(environ["HYPHE_FOLLOW_REDIRECTS"]),configdata)

# TODO: Phantom config
# TODO: Headless config

if "HYPHE_ADMIN_PASSWORD" in environ: setConfig("ADMIN_PASSWORD", environ["HYPHE_ADMIN_PASSWORD"] or None,configdata)
if "HYPHE_OPEN_CORS_API" in environ: setConfig("OPEN_CORS_API", strToBool(environ["HYPHE_OPEN_CORS_API"]),configdata)