
urllib2.HTTPError: HTTP Error 999: Request denied and 403: Forbidden #1

Closed · FelixVicis opened this issue Dec 28, 2017 · 2 comments

@FelixVicis (Owner)

From: kennyledet#3
Tagging @kn9,


Hello @FelixVicis, I keep getting the errors below when I try your forked Google-EmailScraper.
How can I resolve this, please?

root@Inside:/home/amd/mail# sudo python main.py -query "Manage IT Services" -pages 10 -o ema.csv
Traceback (most recent call last):
File "main.py", line 65, in 
s.go(args.query, args.pages)
File "main.py", line 35, in go
self.scrape(url)
File "main.py", line 43, in scrape
raise e
urllib2.HTTPError: HTTP Error 999: Request denied
root@Inside:/home/amd/mail# sudo python main.py -query "Manage IT Services" -pages 10 -o emaia.csv
Traceback (most recent call last):
File "main.py", line 65, in 
s.go(args.query, args.pages)
File "main.py", line 35, in go
self.scrape(url)
File "main.py", line 43, in scrape
raise e
urllib2.HTTPError: HTTP Error 403: Forbidden
@FelixVicis (Owner, Author)

@kn9 Fixed the problem. It was caused by a lack of HTTP error handling via urllib2.

If you fetch master, your problem should be solved.
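For reference, the fix looks roughly like the sketch below. It assumes the scraper fetches each results page with urllib2; the fetch_page helper name and the User-Agent string are illustrative, not the actual code on master.

```python
# Minimal sketch (not the actual main.py): fetch a results page with urllib2,
# send a browser-like User-Agent, and handle HTTP errors such as
# 999 "Request denied" and 403 "Forbidden" instead of re-raising them.
import urllib2

def fetch_page(url):  # hypothetical helper; names are illustrative
    # Many services reject the default "Python-urllib" agent with 403/999,
    # so a browser-like User-Agent string is commonly sent instead.
    request = urllib2.Request(url, headers={
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36',
    })
    try:
        return urllib2.urlopen(request).read()
    except urllib2.HTTPError as e:
        # Skip the blocked page instead of crashing the whole scrape.
        print 'Skipping %s: HTTP %d %s' % (url, e.code, e.msg)
        return None
    except urllib2.URLError as e:
        print 'Skipping %s: %s' % (url, e.reason)
        return None
```

Catching the HTTPError in scrape() instead of re-raising it (the raise e at main.py line 43 in your traceback) lets the run continue past pages that Google blocks.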

@kn9 commented Jan 5, 2018

OK, thank you. I will give it another try.
