recipe-scrapers

A simple scraping tool for recipe webpages.

Netiquette

If you're using this library to collect large numbers of recipes from the web, please use the software responsibly and try to avoid creating high volumes of network traffic.

Python's standard library provides a robots.txt parser that may be helpful to automatically follow common instructions specified by websites for web crawlers.

Another parser option -- particularly if you find that many web requests from urllib.robotparser are blocked -- is the robotexclusionrulesparser library.
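
For example, here is a minimal sketch of checking robots.txt with urllib.robotparser before fetching (the URL and user agent below are placeholders):

from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://www.allrecipes.com/robots.txt")
robots.read()

user_agent = "Burger Seeker YourName"
url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
if robots.can_fetch(user_agent, url):
    print("robots.txt permits fetching this page")

# Some sites also declare a crawl delay to honour between requests:
delay = robots.crawl_delay(user_agent)  # None if the site doesn't set one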

Getting Started

Start by using Python's built-in package installer, pip, to install the library:

python -m pip install recipe-scrapers

This should produce output about the installation process, with the final line reading: Successfully installed recipe-scrapers-<version-number>.
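
If in doubt, you can verify what was installed with pip:

python -m pip show recipe-scrapers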

To learn what the library can do, you can open a Python interpreter session, and then begin typing -- and/or modifying -- the statements below (on the lines containing the >>> prompt):

Python 3.11.4 (main, ...) on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> from recipe_scrapers import scrape_html
>>> url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
>>> name = input('What is your name, burger seeker?\n')
>>> html = requests.get(url, headers={"User-Agent": f"Burger Seeker {name}"}).content
>>> scraper = scrape_html(html, org_url=url)
>>> help(scraper)

Some Python HTTP clients that you can use to retrieve HTML include requests, httpx, and the urllib.request module from Python's standard library. Please refer to their documentation to find out what options (timeout configuration, proxy support, etc.) are available.
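
For instance, here is a minimal sketch of the burger example above using only the standard library's urllib.request, with an explicit timeout (the User-Agent value is a placeholder):

from urllib.request import Request, urlopen

from recipe_scrapers import scrape_html

url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
request = Request(url, headers={"User-Agent": "Burger Seeker YourName"})
with urlopen(request, timeout=10) as response:  # fail fast on unresponsive hosts
    html = response.read()

scraper = scrape_html(html, org_url=url)
print(scraper.title())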

Scrapers available for:

(*) offline saved files only

Contribute

If you spot a design change (or something else) that stops the scraper from working for a given site - please file an issue as soon as possible.

If you are a programmer, PRs with fixes are warmly welcomed and acknowledged with a virtual beer. You can find documentation on how to develop scrapers here.

If you want a scraper for a new site added

  • Open an Issue providing us the site name, as well as a recipe link from it.

  • If you are a developer and want to code the scraper yourself:

    • If a Recipe Schema is available on the site, you can rely on the schema helpers - see the sketch after this list.

    • Otherwise, scrape the values directly from the HTML - see the commented fallback in the same sketch.

    • Generating a new scraper class:

      python generate.py <ClassName> <URL>
      • ClassName: The name of the new scraper class.
      • URL: The URL of an example recipe from the target site. The content will be stored in test_data to be used with the test class.

      You can find a more detailed guide here.
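
As a rough illustration, a scraper for a site with Schema markup mostly delegates to the schema helpers. The class name, host and the commented HTML fallback below are hypothetical:

from recipe_scrapers._abstract import AbstractScraper


class MySite(AbstractScraper):
    @classmethod
    def host(cls):
        return "mysite.example"  # hypothetical domain

    def title(self):
        return self.schema.title()

    def ingredients(self):
        return self.schema.ingredients()

    def instructions(self):
        return self.schema.instructions()

    def yields(self):
        return self.schema.yields()

    # Without Schema markup, you would instead query the parsed HTML
    # (self.soup is a BeautifulSoup object), e.g.:
    # def title(self):
    #     return self.soup.find("h1").get_text(strip=True)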

For Devs / Contribute

Assuming you have Python 3.9 or newer installed, navigate to the directory where you want this project to live and run the following:

git clone [email protected]:hhursev/recipe-scrapers.git &&
cd recipe-scrapers &&
python -m venv .venv &&
source .venv/bin/activate &&
python -m pip install --upgrade pip &&
pip install -e ".[dev]" &&
pip install pre-commit &&
pre-commit install &&
python -m unittest

If you want to run the unittests for a single, newly developed scraper:

python -m unittest -k <test_file_name>
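
For example, if your new scraper's tests live in a file named test_mysite.py (a hypothetical name), you could run:

python -m unittest -k test_mysite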

FAQ

What if the recipe site I want to extract information from is not listed above?

You can give it a try with the wild_mode option!

If a Recipe Schema is available on the site, it will work just fine.

import requests

from recipe_scrapers import scrape_html

url = 'https://www.feastingathome.com/tomato-risotto/'
name = input('What is your name, risotto sampler?\n')
html = requests.get(url, headers={"User-Agent": f"Risotto Sampler {name}"}).content
scraper = scrape_html(html, org_url=url, wild_mode=True)

scraper.host()
scraper.title()
scraper.total_time()
scraper.image()
scraper.ingredients()
scraper.ingredient_groups()
scraper.instructions()
scraper.instructions_list()
scraper.yields()
scraper.to_json()
scraper.links()
scraper.nutrients()  # not always available
scraper.canonical_url()  # not always available
scraper.equipment()  # not always available
scraper.cooking_method()  # not always available
scraper.keywords()  # not always available
scraper.dietary_restrictions()  # not always available

Notes:

  • scraper.links() returns a list of dictionaries containing all of the <a> tag attributes. The attribute names are the dictionary keys.
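
For illustration, the returned structure is shaped roughly like this (hypothetical values):

[{'href': 'https://example.com/related-recipe', 'rel': ['nofollow']}, ...]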

How do I know if a website has a Recipe Schema?

Run in python shell:

Python 3.11.4 (main, ...) on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from recipe_scrapers import scrape_html
>>> scraper = scrape_html(html=None, org_url='<url of a recipe from the site>', online=True, wild_mode=True)
>>> # if no error is raised - there's schema available:
>>> scraper.title()
>>> scraper.instructions()  # etc.

Special thanks to:

All the contributors who helped improve the package. You are awesome!

https://contrib.rocks/image?repo=hhursev/recipe-scrapers

Test Data Notice

All content in tests/test_data/ is used for limited, non-commercial testing purposes and belongs to its respective copyright holders. See tests/test_data/LICENSE.md for details. If you're a copyright holder with concerns, you can open an issue or contact us privately via the email address on our PyPI page.

Extra:

Do you want to gather recipe data?
Do you have an idea you want to implement?
Check out our "Share a project" wall - it may save you time and spark ideas!