dmtroo/resume_scraper

Web scraper for CV extraction

The tool is adapted for www.work.ua (support for other Ukrainian job-search websites is planned).

General info

All CVs with the PRO label (from the pages https://www.work.ua/resumes/?page={1..n}) are downloaded to the "path" directory in the parse_resume method. To increase or decrease the number of pages crawled, change the range in the parse method.
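The page range mentioned above can be sketched as a small helper. This is an illustrative assumption, not code from the repository: the function name and parameters are hypothetical, and the actual spider builds these URLs inside its parse method.

```python
# Hypothetical sketch of how the resume listing URLs could be generated.
# The function name and parameters are illustrative assumptions; the real
# spider constructs these URLs in its parse method.
def resume_page_urls(first_page=1, last_page=5):
    """Return the www.work.ua resume listing URLs for the given page range."""
    return [
        f"https://www.work.ua/resumes/?page={page}"
        for page in range(first_page, last_page + 1)
    ]
```

Widening last_page here corresponds to widening the range inside the spider's parse method.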

How to use?

Install Scrapy and its dependencies from PyPI with:

pip install Scrapy

Note: depending on your operating system, installation may require resolving compilation issues for some Scrapy dependencies, so be sure to check the platform-specific installation notes.

For more details: Scrapy installation guide

To start the crawler:

cd resume_spider
scrapy crawl work_spider

For more details: Spiders — Scrapy 2.7.1 documentation
