Have you considered using other means to automatically gather cookie data?
There's a Python module called browser_cookie3. For the last several releases I've been using my own script to update the auth data for the scraper.
The only part that needs to be filled in manually is the browser user agent, but I'm certain there's a way to gather that programmatically as well.
It can be done with something like:

```python
import json

import browser_cookie3

auth_filepath = 'auth.json'  # Path to the scraper's auth file

user_agent = 'OH BOY THIS SURE IS A USER AGENT'
# Dict of fields to pull from the cookie data into the auth file
auth_dict = {'auth_id': '', 'sess': '', 'auth_hash': ''}

cj = browser_cookie3.load()  # Loads cookies from Chrome and Firefox
cookie_site = ".onlyfans.com"
if cookie_site not in cj._cookies or '/' not in cj._cookies[cookie_site]:
    print(f'No cookies found for {cookie_site}')
    exit()

of_cj = cj._cookies[cookie_site]["/"]
auth_cookie_dict = {key: cookie.value for key, cookie in of_cj.items() if key in auth_dict}
# Update the original dict instead of just using auth_cookie_dict in case there are missing cookies
auth_dict.update(auth_cookie_dict)
cookie_string = '; '.join(f'{key}={value}' for key, value in auth_dict.items())

with open(auth_filepath, 'r') as json_file:
    auth_file_data = json.load(json_file)
auth_file_data['auth']['cookie'] = cookie_string
auth_file_data['auth']['user_agent'] = user_agent
with open(auth_filepath, 'w') as json_file:
    json.dump(auth_file_data, json_file, indent=2)
```
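As for gathering the user agent programmatically, one option is to ask a locally installed browser for its version and fill in a UA template. This is only a sketch under assumptions: the binary names (`google-chrome`, `chromium`, `chromium-browser`) and the UA template (a desktop Chrome string with a Windows OS token) are my guesses, and the template should match whichever browser and platform the cookies actually came from.

```python
import re
import shutil
import subprocess
from typing import Optional

# Desktop Chrome UA template; the OS token is an assumption and should be
# adjusted to the platform the cookies were exported from.
UA_TEMPLATE = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
               '(KHTML, like Gecko) Chrome/{version} Safari/537.36')


def build_user_agent(version: str) -> str:
    """Fill the Chrome UA template with the given browser version."""
    return UA_TEMPLATE.format(version=version)


def detect_chrome_version() -> Optional[str]:
    """Ask an installed Chrome/Chromium binary for its version string.

    Returns None when no known binary is found on PATH.
    """
    for binary in ('google-chrome', 'chromium', 'chromium-browser'):
        path = shutil.which(binary)
        if path is None:
            continue
        output = subprocess.run([path, '--version'],
                                capture_output=True, text=True).stdout
        match = re.search(r'(\d+(?:\.\d+)+)', output)
        if match:
            return match.group(1)
    return None
```

Then `user_agent = build_user_agent(detect_chrome_version() or '120.0.0.0')` would replace the hardcoded string, with the fallback version being another placeholder.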