
Script not scraping any content #2121

Open
PJD94 opened this issue Aug 21, 2024 · 8 comments
Comments


PJD94 commented Aug 21, 2024

Over the last two days the script has stopped scraping any content for me. The script passes through all stages as normal, but it downloads nothing other than the OnlyFans profile picture before saying "archiving complete", even though there is content to be scraped.
I have checked all settings in my auth.json and config.json and everything looks normal. As mentioned, this worked earlier in the week and nothing has changed since.


Tg00174 commented Aug 23, 2024

Same for me. I think OF changed something.

@Nick05491

Same here.


sb-187 commented Aug 29, 2024

Seems like OF changed the API response. Try this:

In onlyfans.py, change `preview_link = media["preview"]` to `preview_link = link`.

In the apis\onlyfans\classes post / message_models, at the end of link_picker, add:

    if "files" in media:
        files = media["files"]
        fileFull = files["full"]
        link = fileFull["url"]

With this I'm able to download source quality. Previews will be wrong, but I don't care about those.


betoalanis commented Aug 29, 2024

> Seems like OF changed the API response. Try this:
>
> In onlyfans.py, change `preview_link = media["preview"]` to `preview_link = link`.
>
> In the apis\onlyfans\classes post / message_models, at the end of link_picker, add:
>
>     if "files" in media:
>         files = media["files"]
>         fileFull = files["full"]
>         link = fileFull["url"]
>
> With this I'm able to download source quality. Previews will be wrong, but I don't care about those.

Could you double-check, please?

1. I can't find `preview_link` in any file of the whole project.

2. I found `preview_url_picker` in `apis/onlyfans/__init__.py`, but I can't find `link_picker` in the post and message model files or anywhere else in the project.

TIA!


sb-187 commented Aug 29, 2024

> Could you double-check, please?
>
> 1. I can't find `preview_link` in any file of the whole project.
>
> 2. I found `preview_url_picker` in `apis/onlyfans/__init__.py`, but I can't find `link_picker` in the post and message model files or anywhere else in the project.

I just realized I'm on a very old version. The relevant files are probably in https://github.com/UltimaHoarder/UltimaScraperAPI . The structure has changed a lot from the old version and is much less readable to me. I'm not able to find the relevant sections in the current version, so I think the only one who would be able to help is @UltimaHoarder.

@betoalanis

> I just realized I'm on a very old version. The relevant files are probably in https://github.com/UltimaHoarder/UltimaScraperAPI . The structure has changed a lot from the old version and is much less readable to me. I'm not able to find the relevant sections in the current version, so I think the only one who would be able to help is @UltimaHoarder.

Thanks for verifying, though! :D


timbck2 commented Sep 3, 2024

I'm seeing the same thing. Here's the output of `poetry run start_us.py`:

    [2024-09-03 09:55:35] Assigning Job
    [2024-09-03 09:55:35] Archive Completed in 0.0 Minutes
    [2024-09-03 09:55:35] Now exiting


hawktank commented Oct 26, 2024

Same here. Mine logs in and lists all the OF subs, but it crashes when attempting to download, whether all media or specific types:

    [2024-10-26 10:34:12] Assigning Jobs
    Choose Medias: 0 = All | 1 = Images | 2 = Videos | 3 = Audios | 4 = Texts
    0
    Processing Scraped Stories
      0%|          | 0/7 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "H:\My Documents\GitHub\UltimaScraper\start_us.py", line 62, in <module>
        asyncio.run(main())
      File "C:\Users\BUBBA\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "C:\Users\BUBBA\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 649, in run_until_complete
        return future.result()
      File "H:\My Documents\GitHub\UltimaScraper\start_us.py", line 44, in main
        _api = await USR.start(
      File "H:\My Documents\GitHub\UltimaScraper\ultima_scraper\ultima_scraper.py", line 50, in start
        await self.start_datascraper(datascraper)
      File "H:\My Documents\GitHub\UltimaScraper\ultima_scraper\ultima_scraper.py", line 137, in start_datascraper
        await datascraper.datascraper.api.job_manager.process_jobs()
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_api\managers\job_manager\job_manager.py", line 45, in process_jobs
        await asyncio.create_task(self.__worker())
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_api\managers\job_manager\job_manager.py", line 53, in __worker
        await job.task
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_collection\modules\module_streamliner.py", line 202, in prepare_scraper
        await self.process_scraped_content(
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_collection\modules\module_streamliner.py", line 237, in process_scraped_content
        unrefined_set: list[dict[str, Any]] = await tqdm_asyncio.gather(
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\tqdm\asyncio.py", line 79, in gather
        res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\tqdm\asyncio.py", line 79, in <listcomp>
        res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
      File "C:\Users\BUBBA\AppData\Local\Programs\Python\Python310\lib\asyncio\tasks.py", line 571, in _wait_for_one
        return f.result()  # May raise f.exception().
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\tqdm\asyncio.py", line 76, in wrap_awaitable
        return i, await f
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_collection\managers\datascraper_manager\datascrapers\onlyfans.py", line 51, in media_scraper
        content_metadata.resolve_extractor(Extractor(post_result))
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_collection\managers\metadata_manager\metadata_manager.py", line 216, in resolve_extractor
        self.medias: list[MediaMetadata] = result.get_medias(self)
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_collection\managers\metadata_manager\metadata_manager.py", line 147, in get_medias
        main_url = self.item.url_picker(asset_metadata)
      File "C:\Users\BUBBA\AppData\Local\pypoetry\Cache\virtualenvs\ultima-scraper-s7lw0SJI-py3.10\lib\site-packages\ultima_scraper_api\apis\onlyfans\__init__.py", line 39, in url_picker
        source = media_item["source"]
    KeyError: 'source'
      0%|          | 0/7 [00:00<?, ?it/s]

Using Python version 3.10.11
Using poetry v 1.8.4

Tried these troubleshooting steps without a resolution:

- config.json regenerated and configured
- auth.json regenerated and configured
- deleted the pypoetry Cache folder in C:\Users\BUBBA\AppData\Local\pypoetry and regenerated it with python updater.py
- latest UltimaScraper from GitHub
