This scraper is provided as a public service because Glassdoor doesn't have a public API for searching overviews and collecting data. Glassdoor's TOS prohibits scraping, and I make no representation that your account won't be banned if you use this program. Furthermore, should I be contacted by Glassdoor with a request to remove this repo, I will do so immediately.
Additionally, the program uses a manually designed heuristic to determine the legitimacy and closeness of each result to the search. Please see the developer notes in the Jupyter notebook for more detail. Use at your own discretion, and feel free to fork and improve the code.
Have you ever wanted to scrape company information from a list of company names on Glassdoor, but bemoaned the site's lack of a public API? Worry no more! This script will go through pages and pages of company profiles and scrape lots of data into a tidy JSON file. Pass it a JSON Lines file formatted as in the example, and set a limit to scrape the 25 most conveniently available reviews!
On average it took about 11 seconds to find and collect information per company, so scraping 1,000 companies takes roughly 3 hours during the day. However, I was blocked at search #650, so be careful. After restarting, entering the captcha, and running the scraper from 12 AM to 8 AM, about 5,000 entries were checked. The script's speed also depends on your network connection, so patience is required. 😁
You could definitely use this for social science research using data from companies on Glassdoor! (hint hint, university researchers)
Alternatives like Scrapy and BeautifulSoup aren't able to interact with AJAX-driven pages (typing in values, clicking buttons), so for the sake of simplicity Selenium was used for all scraping. However, using a separate library purely for scraping the collected links, together with parallelization, could speed up the scraper.
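For context, the kind of interaction Selenium enables looks roughly like this. The selector below is a placeholder assumption, not the one glassdoorScraper.py actually uses:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()  # requires ChromeDriver on your PATH
driver.get("https://www.glassdoor.com")

# Type a company name into the search box and submit; BeautifulSoup and
# Scrapy alone cannot perform this kind of page interaction.
search_box = driver.find_element(By.NAME, "search")  # placeholder selector
search_box.send_keys("AAR Corp")
search_box.send_keys(Keys.RETURN)
```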
First, make sure that you're using Python 3.

1. Clone or download this repository.

2. Install virtualenv to set up a virtual environment, if necessary, with:

   ```
   pip install virtualenv
   ```

   Setting up a virtual environment ensures that this program's dependencies don't interfere with the dependencies of other programs you may have. You could run this program without steps 2 and 3, but it is not recommended.

3. Set up and activate a virtual environment by opening a terminal in this directory, then entering:

   ```
   python3 -m venv env
   source env/bin/activate
   ```

4. Run

   ```
   pip3 install -r requirements.txt
   ```

   inside this repo.
- Alternatively, follow the tutorial [here](http://www.ds100.org/fa20/setup/#creating-your-environment) to install and set up conda.
- If you get an error saying "'conda' command wasn't found", troubleshoot with [this](https://towardsdatascience.com/how-to-successfully-install-anaconda-on-a-mac-and-actually-get-it-to-work-53ce18025f97).
- Create and activate a conda environment from the .yaml environment file included in this repository (e.g. `conda env create -f <file>.yaml`, then `conda activate <env-name>`).
- You're ready!
- Install ChromeDriver into your PATH (any standard ChromeDriver setup tutorial will do).
- Go to the bottom of 'glassdoorScraper.py' and enter your Glassdoor username on line 426, replacing the '' after 'username': in the JSON object, and enter your password in the quotes ('') after 'password': in the JSON object (see the sketch after this list).
- If you are using the Jupyter notebook, enter your Glassdoor account username and password in the same manner in cell 39. I HIGHLY recommend you make a dummy Glassdoor account, as you may be blocked.
- If using the Jupyter notebook, run all cells.
- If using the .py file, type

  ```
  python3 glassdoorScraper.py
  ```

  into the command line in this directory.
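To illustrate, the filled-in credentials object would look roughly like the following. Only the 'username' and 'password' keys come from the description above; the variable name and everything else here is an assumption, not the actual code in 'glassdoorScraper.py':

```python
# Illustrative sketch only; replace with your own (ideally dummy) account.
credentials = {
    "username": "dummy_account@example.com",  # goes in the '' after 'username':
    "password": "your_password_here",         # goes in the '' after 'password':
}
```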
Now you need to format the input data, which is a list of the names of the companies you want to find. Look at 'gvkey_salary_company.jsonl' for an example of how to set this up.
Enter your data on each line as a JSON object in the following format:
{"name":"AAR CORP","longname":"AAR Corp","gvkey":"001004","capiq-ticker":"AIR"}
If you don't have the company's gvkey identifier, you can exclude it, but you must enter all other values or the function will not run.
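If it helps, a small helper like this (not part of the repo, just an assumed convenience) could generate the input file; the first record is the example above:

```python
import json

# Each line of the input file is one standalone JSON object (JSON Lines).
companies = [
    {"name": "AAR CORP", "longname": "AAR Corp", "gvkey": "001004", "capiq-ticker": "AIR"},
    # 'gvkey' may be excluded, but all other fields are required.
    {"name": "EXAMPLE CO", "longname": "Example Co", "capiq-ticker": "EXPL"},
]

with open("gvkey_salary_company.jsonl", "w") as f:
    for company in companies:
        f.write(json.dumps(company) + "\n")
```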
taskOne.json - Contains each JSON object you entered in 'gvkey_salary_company.jsonl', augmented with the link to the Glassdoor overview page of that company.
taskTwo.json - Contains each JSON object you entered in 'gvkey_salary_company.jsonl', augmented with the link to the Glassdoor overview page of that company, as well as the information scraped from that page.
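For downstream analysis, something like the following would load the results. Whether each output file holds a single JSON array or one object per line isn't pinned down above, so this sketch tries both:

```python
import json

def load_task_output(path: str) -> list:
    """Load taskOne.json / taskTwo.json whether the file is a JSON array
    or JSON Lines (one object per line)."""
    with open(path) as f:
        text = f.read().strip()
    try:
        data = json.loads(text)
        return data if isinstance(data, list) else [data]
    except json.JSONDecodeError:
        return [json.loads(line) for line in text.splitlines() if line.strip()]

records = load_task_output("taskTwo.json")
```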
- The .py file and Jupyter notebook contain my dummy account username and password. Please delete my dummy user information if distributing this program to other users.
- You may need to raise the 'JSON: Max Items Computed' setting (`json.maxItemsComputed`) to 999999 if viewing the output file in VSCode. This setting controls the maximum number of outline symbols and folding regions computed (limited for performance reasons).
- Most functions are used statically and this project is small, so in the Pythonic spirit all functions are defined at module level without class wrappers.
Task 1 - 18 hours total
- 3 hours: developing the interactive portion & cleaning data.
- 15 hours: developing and testing the legitimacy heuristic (debugging and testing required watching the scraper choose links, which was very slow, and finding enough cases took a while).
Task 2 - 4 hours total
- 4 hours: implementing the task data collection and JSON input/output.
- The 'check_public' method was made to check whether a company is public, but too many publicly traded companies are falsely labeled on Glassdoor, so this function is turned off. Perhaps a company's public status could be determined some other way (see the sketch after this list).
- Sketchy pages with few reviews and no images could be avoided by only accepting a result whose legitimacy score is above a certain minimum.
- Having a non-default image for the company profile could be used as a factor in the company-page legitimacy heuristic.
- Perhaps we could tokenize the words that make up the company name and classify the name into a theme (like 'airplane company' for 'ASA airlines'), then also classify the profile image to ensure the most similar result is found.
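As a rough illustration of the first three ideas, here is a minimal sketch. Every function name, threshold, and weight below is an assumption for illustration, not code from this repo:

```python
def looks_public(record: dict) -> bool:
    """Assumed alternative to 'check_public': treat a record as publicly
    traded when its own input carries a 'capiq-ticker', instead of
    trusting the (often wrong) Glassdoor label."""
    return bool(record.get("capiq-ticker"))

def legitimacy_score(review_count: int, has_custom_image: bool) -> float:
    """Toy legitimacy score; the cutoffs and weights are made up."""
    score = 0.0
    if review_count >= 5:   # assumed cutoff for "few reviews"
        score += 0.5
    if has_custom_image:    # non-default profile image as a factor
        score += 0.5
    return score

MIN_LEGITIMACY = 0.5  # assumed minimum score for accepting a result

def accept_result(review_count: int, has_custom_image: bool) -> bool:
    return legitimacy_score(review_count, has_custom_image) >= MIN_LEGITIMACY
```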