The web crawler frontend is deployed on GitHub Pages and is built with:
* React (Hooks).
* Redux.
* Formik.
* Yup.
* Material-UI.
* Axios.
The web crawler backend is deployed on Heroku. I have intentionally left the backend API open to everyone for educational purposes.
* Java 8.
* Spring (Boot, Security).
* Jsoup HTML parser.
The crawler scans the body of the provided URL's page, collecting further links and counting how many times the specified search term appears on the page. It then follows the discovered links, repeating this procedure down to the specified depth and up to the provided maximum number of pages, as sketched below.
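
Conceptually, the crawl loop looks roughly like the minimal sketch below. It uses Jsoup and assumes a breadth-first traversal; the class and parameter names (`CrawlerSketch`, `maxDepth`, `maxPages`, `term`) are illustrative and do not come from the actual backend code.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class CrawlerSketch {

    // A URL paired with its link distance from the start page.
    private static final class Node {
        final String url;
        final int depth;
        Node(String url, int depth) { this.url = url; this.depth = depth; }
    }

    // Breadth-first crawl: count occurrences of term on each visited page,
    // following links until maxDepth or maxPages is reached.
    public static Map<String, Integer> crawl(String startUrl, String term,
                                             int maxDepth, int maxPages) {
        Map<String, Integer> hits = new HashMap<>();
        Set<String> visited = new HashSet<>();
        Queue<Node> queue = new ArrayDeque<>();
        queue.add(new Node(startUrl, 0));

        while (!queue.isEmpty() && visited.size() < maxPages) {
            Node node = queue.poll();
            if (!visited.add(node.url)) {
                continue; // page already scanned
            }
            try {
                Document doc = Jsoup.connect(node.url).get();
                hits.put(node.url, countOccurrences(doc.body().text(), term));
                if (node.depth < maxDepth) {
                    // Queue every absolute link found in the page body.
                    for (Element link : doc.body().select("a[href]")) {
                        String next = link.absUrl("href");
                        if (!next.isEmpty() && !visited.contains(next)) {
                            queue.add(new Node(next, node.depth + 1));
                        }
                    }
                }
            } catch (Exception e) {
                hits.put(node.url, 0); // unreachable or non-HTML page
            }
        }
        return hits;
    }

    // Naive case-insensitive substring count.
    private static int countOccurrences(String text, String term) {
        String haystack = text.toLowerCase();
        String needle = term.toLowerCase();
        int count = 0;
        int i = haystack.indexOf(needle);
        while (i >= 0) {
            count++;
            i = haystack.indexOf(needle, i + needle.length());
        }
        return count;
    }
}
```

For example, `CrawlerSketch.crawl("https://example.com", "java", 2, 100)` would scan up to 100 pages reachable within two link hops of the start page and report how many times "java" appears on each.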
Peace!