Refactor upload process #35

Merged
merged 11 commits on Sep 8, 2024
1 change: 0 additions & 1 deletion .github/workflows/docker-image.yml
Original file line number Diff line number Diff line change
@@ -5,7 +5,6 @@ on:
branches:
- master
- develop
- unit3d-searching

env:
REGISTRY: ghcr.io
8 changes: 5 additions & 3 deletions README.md
@@ -8,14 +8,15 @@ A simple tool to take the work out of uploading.
- Generates and Parses MediaInfo/BDInfo.
- Generates and Uploads screenshots.
- Uses srrdb to fix scene filenames
- Can grab descriptions from PTP (automatically on filename match or arg) / BLU (arg)
- Can grab descriptions from PTP/BLU (automatically on filename match or arg) / Aither/LST/OE (with arg)
- Can strip existing screenshots from descriptions to skip screenshot generation and uploading
- Obtains TMDb/IMDb/MAL identifiers.
- Converts absolute to season episode numbering for Anime
- Generates custom .torrents without useless top level folders/nfos.
- Can re-use existing torrents instead of hashing new
- Generates proper name for your upload using Mediainfo/BDInfo and TMDb/IMDb conforming to site rules
- Checks for existing releases already on site
- Uploads to PTP/BLU/BHD/Aither/THR/STC/R4E(limited)/STT/HP/ACM/LCD/LST/NBL/ANT/FL/HUNO/RF/SN/RTF/OTW/FNP/CBR/UTP/HDB/AL/SHRI
- Uploads to PTP/BLU/BHD/Aither/THR/STC/R4E(limited)/STT/HP/ACM/LCD/LST/NBL/ANT/FL/HUNO/RF/SN/RTF/OTW/FNP/CBR/UTP/HDB/AL/SHRI/OE/TL/BHDTV/HDT/JPTV/LT/MTV/PTER/TDC/TTG/UTP
- Adds to your client with fast resume, seeding instantly (rtorrent/qbittorrent/deluge/watch folder)
- ALL WITH MINIMAL INPUT!
- Currently works with .mkv/.mp4/Blu-ray/DVD/HD-DVDs
@@ -33,7 +34,7 @@ Built with updated BDInfoCLI from https://github.com/rokibhasansagar/BDInfoCLI-n
- Also needs MediaInfo and ffmpeg installed on your system
- On Windows systems, ffmpeg must be added to PATH (https://windowsloop.com/install-ffmpeg-windows-10/)
- On linux systems, get it from your favorite package manager
- Clone the repo to your system `git clone https://github.com/Audionut/Upload-Assistant.git`
- Clone the repo to your system `git clone https://github.com/Audionut/Upload-Assistant.git` - or download a zip of the source
- Copy and Rename `data/example-config.py` to `data/config.py`
- Edit `config.py` to use your information (more detailed information in the [wiki](https://github.com/Audionut/Upload-Assistant/wiki))
- tmdb_api (v3) key can be obtained from https://developers.themoviedb.org/3/getting-started/introduction
@@ -50,6 +51,7 @@ Built with updated BDInfoCLI from https://github.com/rokibhasansagar/BDInfoCLI-n
- To update first navigate into the Upload-Assistant directory: `cd Upload-Assistant`
- Run a `git pull` to grab latest updates
- Run `python3 -m pip install --user -U -r requirements.txt` to ensure dependencies are up to date
- Or download a fresh zip and overwrite existing files
## **CLI Usage:**

`python3 upload.py /downloads/path/to/content --args`
14 changes: 8 additions & 6 deletions data/example-config.py
@@ -37,14 +37,15 @@
"TRACKERS": {
# Which trackers do you want to upload to?
# Available tracker: BLU, BHD, AITHER, STC, STT, SN, THR, R4E, HP, ACM, PTP, LCD, LST, PTER, NBL, ANT, MTV, CBR, RTF, HUNO, BHDTV, LT, PTER, TL, TDC, HDT, OE, RF, OTW, FNP, UTP, AL, HDB
# Remove the ones not used to save being asked everytime
# Remove the trackers from the default_trackers list that are not used, to save being asked every time
"default_trackers": "BLU, BHD, AITHER, STC, STT, SN, THR, R4E, HP, ACM, PTP, LCD, LST, PTER, NBL, ANT, MTV, CBR, RTF, HUNO, BHDTV, LT, PTER, TL, TDC, HDT, OE, RF, OTW, FNP, UTP, AL, HDB",

"BLU": {
"useAPI": False, # Set to True if using BLU
"api_key": "BLU api key",
"announce_url": "https://blutopia.cc/announce/customannounceurl",
# "anon" : False
# "anon" : False,
# "modq" : False ## Not working yet
},
"BHD": {
"api_key": "BHD api key",
@@ -71,7 +72,8 @@
"AITHER": {
"api_key": "AITHER api key",
"announce_url": "https://aither.cc/announce/customannounceurl",
# "anon" : False
# "anon" : False,
# "modq" : False ## Not working yet
},
"R4E": {
"api_key": "R4E api key",
@@ -239,9 +241,9 @@
},
},

# enable_search to true will automatically try and find a suitable hash to save having to rehash when creating torrents
# enable_search to True will automatically try and find a suitable hash to save having to rehash when creating torrents
# Should use the qbit API, but will also use the torrent_storage_dir to find suitable hashes
# If you find issue, use the "--debug" command option to print out some related details
# If you find issues, use the "--debug" argument to print out some related details
"TORRENT_CLIENTS": {
# Name your torrent clients here, for example, this example is named "Client1" and is set as default_torrent_client above
# All options relate to the webui, make sure you have the webui secured if it has WAN access
@@ -253,7 +255,7 @@
"qbit_port": "8080",
"qbit_user": "username",
"qbit_pass": "password",
# "torrent_storage_dir": "path/to/BT_backup folder"
# "torrent_storage_dir": "path/to/BT_backup folder" ## use double backslashes on Windows, e.g. "C:\\client\\backup"

# Remote path mapping (docker/etc.) CASE SENSITIVE
# "local_path": "/LocalPath",
14 changes: 14 additions & 0 deletions src/args.py
@@ -46,6 +46,7 @@ def parse(self, args, meta):
parser.add_argument('-blu', '--blu', nargs='*', required=False, help="BLU torrent id/link", type=str)
parser.add_argument('-aither', '--aither', nargs='*', required=False, help="Aither torrent id/link", type=str)
parser.add_argument('-lst', '--lst', nargs='*', required=False, help="LST torrent id/link", type=str)
parser.add_argument('-oe', '--oe', nargs='*', required=False, help="OE torrent id/link", type=str)
parser.add_argument('-hdb', '--hdb', nargs='*', required=False, help="HDB torrent id/link", type=str)
parser.add_argument('-d', '--desc', nargs='*', required=False, help="Custom Description (string)")
parser.add_argument('-pb', '--desclink', nargs='*', required=False, help="Custom Description (link to hastebin/pastebin)")
@@ -165,6 +166,19 @@ def parse(self, args, meta):
console.print('[red]Continuing without --lst')
else:
meta['lst'] = value2
elif key == 'oe':
if value2.startswith('http'):
parsed = urllib.parse.urlparse(value2)
try:
oepath = parsed.path
if oepath.endswith('/'):
oepath = oepath[:-1]
meta['oe'] = oepath.split('/')[-1]
except Exception:
console.print('[red]Unable to parse id from url')
console.print('[red]Continuing without --oe')
else:
meta['oe'] = value2
elif key == 'hdb':
if value2.startswith('http'):
parsed = urllib.parse.urlparse(value2)
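The new `--oe` branch mirrors the existing `--lst`/`--hdb` handling: when the value looks like a URL, the id is taken from the last path segment after stripping any trailing slash; otherwise the value is used as the id directly. A minimal standalone sketch of that extraction (the URL below is made up for illustration):

```python
import urllib.parse

def extract_tracker_id(value: str) -> str:
    """Return the last path segment of a tracker URL, or the raw value as-is."""
    if value.startswith('http'):
        path = urllib.parse.urlparse(value).path
        if path.endswith('/'):
            path = path[:-1]
        return path.split('/')[-1]
    return value

print(extract_tracker_id('https://example.cc/torrents/12345'))  # 12345
print(extract_tracker_id('12345'))                              # 12345
```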
16 changes: 11 additions & 5 deletions src/bbcode.py
@@ -123,7 +123,7 @@ def clean_ptp_description(self, desc, is_disc):
desc = re.sub(r"\[img=[\s\S]*?\]", "", desc, flags=re.IGNORECASE)

# Extract loose images and add to imagelist as dictionaries
loose_images = re.findall(r"(https?:\/\/.*\.(?:png|jpg))", nocomp, flags=re.IGNORECASE)
loose_images = re.findall(r"(https?:\/\/[^\s\[\]]+\.(?:png|jpg))", nocomp, flags=re.IGNORECASE)
if loose_images:
for img_url in loose_images:
image_dict = {
@@ -198,6 +198,12 @@ def clean_unit3d_description(self, desc, site):
# Remove the [img] tag and its contents from the description
desc = re.sub(rf"\[img[^\]]*\]{re.escape(img_url)}\[/img\]", '', desc, flags=re.IGNORECASE)

# Now, remove matching URLs from [URL] tags
for img in imagelist:
img_url = re.escape(img['img_url'])
desc = re.sub(rf"\[URL={img_url}\]\[/URL\]", '', desc, flags=re.IGNORECASE)
desc = re.sub(rf"\[URL={img_url}\]\[img[^\]]*\]{img_url}\[/img\]\[/URL\]", '', desc, flags=re.IGNORECASE)

# Filter out bot images from imagelist
bot_image_urls = [
"https://blutopia.xyz/favicon.ico", # Example bot image URL
@@ -236,10 +242,10 @@ def clean_unit3d_description(self, desc, site):
desc = re.sub(bot_signature_regex, "", desc, flags=re.IGNORECASE | re.VERBOSE)
desc = re.sub(r"\[center\].*Created by L4G's Upload Assistant.*\[\/center\]", "", desc, flags=re.IGNORECASE)

# Ensure no dangling tags and remove extra blank lines
desc = re.sub(r'\n\s*\n', '\n', desc) # Remove multiple consecutive blank lines
desc = re.sub(r'\n\n+', '\n\n', desc) # Ensure no excessive blank lines
desc = desc.strip() # Final cleanup of trailing newlines and spaces
# Remove leftover [img] or [URL] tags in the description
desc = re.sub(r"\[img\][\s\S]*?\[\/img\]", "", desc, flags=re.IGNORECASE)
desc = re.sub(r"\[img=[\s\S]*?\]", "", desc, flags=re.IGNORECASE)
desc = re.sub(r"\[URL=[\s\S]*?\]\[\/URL\]", "", desc, flags=re.IGNORECASE)

# Strip trailing whitespace and newlines:
desc = desc.rstrip()
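The tightened loose-image pattern in `clean_ptp_description` replaces the greedy `.*` with `[^\s\[\]]+`, so a match can no longer span whitespace or BBCode brackets and swallow unrelated text ahead of an image URL. A small before/after comparison (the sample text is made up):

```python
import re

old_pattern = re.compile(r"(https?:\/\/.*\.(?:png|jpg))", re.IGNORECASE)
new_pattern = re.compile(r"(https?:\/\/[^\s\[\]]+\.(?:png|jpg))", re.IGNORECASE)

text = "[url=https://example.com/page]link[/url] shot: https://example.com/shot.png"

# The greedy pattern matches from the first URL all the way to the final ".png",
# capturing the BBCode tag and surrounding text along with it
print(old_pattern.findall(text))
# The anchored pattern captures only the bare image URL
print(new_pattern.findall(text))  # ['https://example.com/shot.png']
```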
30 changes: 21 additions & 9 deletions src/prep.py
@@ -6,6 +6,7 @@
from src.trackers.BLU import BLU
from src.trackers.AITHER import AITHER
from src.trackers.LST import LST
from src.trackers.OE import OE
from src.trackers.HDB import HDB
from src.trackers.COMMON import COMMON

@@ -47,6 +48,7 @@
import aiohttp
from PIL import Image
import io
import sys
except ModuleNotFoundError:
console.print(traceback.print_exc())
console.print('[bold red]Missing Module Found. Please reinstall required dependencies.')
@@ -77,8 +79,7 @@ async def prompt_user_for_confirmation(self, message: str) -> bool:
return True
return False
except EOFError:
console.print("[bold red]Input was interrupted.")
return False
sys.exit(1)

async def check_images_concurrently(self, imagelist):
async def check_and_collect(image_dict):
@@ -155,7 +156,7 @@ async def update_metadata_from_tracker(self, tracker_name, tracker_instance, met
manual_key = f"{tracker_key}_manual"
found_match = False

if tracker_name in ["BLU", "AITHER", "LST"]: # Example for UNIT3D trackers
if tracker_name in ["BLU", "AITHER", "LST", "OE"]:
if meta.get(tracker_key) is not None:
console.print(f"[cyan]{tracker_name} ID found in meta, reusing existing ID: {meta[tracker_key]}[/cyan]")
tracker_data = await COMMON(self.config).unit3d_torrent_info(
@@ -294,10 +295,13 @@ async def handle_image_list(self, meta, tracker_name):
approved_image_hosts = ['ptpimg', 'imgbox']

# Check if the images are already hosted on an approved image host
if all(any(host in img for host in approved_image_hosts) for img in meta['image_list']):
if all(any(host in image['raw_url'] for host in approved_image_hosts) for image in meta['image_list']):
image_list = meta['image_list'] # noqa #F841
else:
console.print("[red]Warning: Some images are not hosted on an MTV approved image host. MTV will fail if you keep these images.")
default_trackers = self.config['TRACKERS'].get('default_trackers', '')
trackers_list = [tracker.strip() for tracker in default_trackers.split(',')]
if 'MTV' in trackers_list or 'MTV' in meta.get('trackers', ''):
console.print("[red]Warning: Some images are not hosted on an MTV approved image host. MTV will fail if you keep these images.")

keep_images = await self.prompt_user_for_confirmation(f"Do you want to keep the images found on {tracker_name}?")
if not keep_images:
Expand Down Expand Up @@ -446,6 +450,8 @@ async def gather_prep(self, meta, mode):
specific_tracker = 'AITHER'
elif meta.get('lst'):
specific_tracker = 'LST'
elif meta.get('oe'):
specific_tracker = 'OE'

# If a specific tracker is found, only process that one
if specific_tracker:
@@ -475,6 +481,12 @@ async def gather_prep(self, meta, mode):
if match:
found_match = True

elif specific_tracker == 'OE' and str(self.config['TRACKERS'].get('OE', {}).get('useAPI')).lower() == "true":
oe = OE(config=self.config)
meta, match = await self.update_metadata_from_tracker('OE', oe, meta, search_term, search_file_folder)
if match:
found_match = True

elif specific_tracker == 'HDB' and str(self.config['TRACKERS'].get('HDB', {}).get('useAPI')).lower() == "true":
hdb = HDB(config=self.config)
meta, match = await self.update_metadata_from_tracker('HDB', hdb, meta, search_term, search_file_folder)
@@ -1324,8 +1336,8 @@ def screenshots(self, path, filename, folder_id, base_dir, meta, num_screens=Non
.global_args('-loglevel', loglevel)
.run(quiet=debug)
)
except Exception:
console.print(traceback.format_exc())
except (KeyboardInterrupt, Exception):
sys.exit(1)

self.optimize_images(image_path)
if os.path.getsize(Path(image_path)) <= 75000:
@@ -1394,8 +1406,8 @@ def optimize_images(self, image):
oxipng.optimize(image, level=6)
else:
oxipng.optimize(image, level=3)
except Exception:
pass
except (KeyboardInterrupt, Exception):
sys.exit(1)
return

"""
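The `handle_image_list` change checks each image dict's `raw_url` field (rather than the dict itself) against the approved hosts, and only prints the MTV warning when MTV is actually among the configured trackers. The host check reduces to the following sketch (the image data is illustrative):

```python
APPROVED_IMAGE_HOSTS = ('ptpimg', 'imgbox')

def all_on_approved_hosts(image_list, approved=APPROVED_IMAGE_HOSTS):
    # True only when every image's raw_url contains one of the approved host names
    return all(any(host in img['raw_url'] for host in approved) for img in image_list)

images = [
    {'raw_url': 'https://ptpimg.me/abc123.png'},
    {'raw_url': 'https://imgbox.com/def456.jpg'},
]
print(all_on_approved_hosts(images))                                               # True
print(all_on_approved_hosts(images + [{'raw_url': 'https://example.com/x.png'}]))  # False
```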
65 changes: 48 additions & 17 deletions src/trackers/AITHER.py
@@ -24,6 +24,7 @@ def __init__(self, config):
self.source_flag = 'Aither'
self.search_url = 'https://aither.cc/api/torrents/filter'
self.upload_url = 'https://aither.cc/api/torrents/upload'
self.torrent_url = 'https://aither.cc/api/torrents/'
self.signature = "\n[center][url=https://aither.cc/forums/topics/1349/posts/24958]Created by L4G's Upload Assistant[/url][/center]"
self.banned_groups = ['4K4U', 'AROMA', 'd3g', 'edge2020', 'EMBER', 'EVO', 'FGT', 'FreetheFish', 'Hi10', 'HiQVE', 'ION10', 'iVy', 'Judas', 'LAMA', 'MeGusta', 'nikt0', 'OEPlus', 'OFT', 'OsC', 'PYC',
'QxR', 'Ralphy', 'RARBG', 'RetroPeeps', 'SAMPA', 'Sicario', 'Silence', 'SkipTT', 'SPDVD', 'STUTTERSHIT', 'SWTYBLZ', 'TAoE', 'TGx', 'Tigole', 'TSP', 'TSPxL', 'VXT', 'Weasley[HONE]',
@@ -37,6 +38,7 @@ async def upload(self, meta):
cat_id = await self.get_cat_id(meta['category'])
type_id = await self.get_type_id(meta['type'])
resolution_id = await self.get_res_id(meta['resolution'])
modq = await self.get_flag(meta, 'modq')
name = await self.edit_name(meta)
if meta['anon'] == 0 and bool(str2bool(str(self.config['TRACKERS'][self.tracker].get('anon', "False")))) is False:
anon = 0
@@ -74,6 +76,7 @@ async def upload(self, meta):
'free': 0,
'doubleup': 0,
'sticky': 0,
'mod_queue_opt_in': modq,
}
headers = {
'User-Agent': f'Upload Assistant/2.1 ({platform.system()} {platform.release()})'
@@ -102,31 +105,59 @@ async def upload(self, meta):
console.print(data)
open_torrent.close()

async def get_flag(self, meta, flag_name):
config_flag = self.config['TRACKERS'][self.tracker].get(flag_name)
if config_flag is not None:
return 1 if config_flag else 0

return 1 if meta.get(flag_name, False) else 0

async def edit_name(self, meta):
aither_name = meta['name']
has_eng_audio = False

# Helper function to check if English audio is present
def has_english_audio(tracks, is_bdmv=False):
for track in tracks:
if is_bdmv and track.get('language') == 'English':
return True
if not is_bdmv and track['@type'] == "Audio":
# Ensure Language is not None and is a string before checking startswith
if isinstance(track.get('Language'), str) and track.get('Language').startswith('en'):
return True
return False

# Helper function to get audio language
def get_audio_lang(tracks, is_bdmv=False):
if is_bdmv:
return tracks[0].get('language', '').upper() if tracks else ""
return tracks[2].get('Language_String', '').upper() if len(tracks) > 2 else ""

if meta['is_disc'] != "BDMV":
with open(f"{meta.get('base_dir')}/tmp/{meta.get('uuid')}/MediaInfo.json", 'r', encoding='utf-8') as f:
mi = json.load(f)
try:
with open(f"{meta.get('base_dir')}/tmp/{meta.get('uuid')}/MediaInfo.json", 'r', encoding='utf-8') as f:
mi = json.load(f)

audio_tracks = mi['media']['track']
has_eng_audio = has_english_audio(audio_tracks)
if not has_eng_audio:
audio_lang = get_audio_lang(audio_tracks)
if audio_lang:
aither_name = aither_name.replace(meta['resolution'], f"{audio_lang} {meta['resolution']}", 1)
except (FileNotFoundError, KeyError, IndexError) as e:
print(f"Error processing MediaInfo: {e}")

for track in mi['media']['track']:
if track['@type'] == "Audio":
if track.get('Language', 'None').startswith('en'):
has_eng_audio = True
if not has_eng_audio:
audio_lang = mi['media']['track'][2].get('Language_String', "").upper()
if audio_lang != "":
aither_name = aither_name.replace(meta['resolution'], f"{audio_lang} {meta['resolution']}", 1)
else:
for audio in meta['bdinfo']['audio']:
if audio['language'] == 'English':
has_eng_audio = True
bdinfo_audio = meta.get('bdinfo', {}).get('audio', [])
has_eng_audio = has_english_audio(bdinfo_audio, is_bdmv=True)
if not has_eng_audio:
audio_lang = meta['bdinfo']['audio'][0]['language'].upper()
if audio_lang != "":
audio_lang = get_audio_lang(bdinfo_audio, is_bdmv=True)
if audio_lang:
aither_name = aither_name.replace(meta['resolution'], f"{audio_lang} {meta['resolution']}", 1)
if meta['category'] == "TV" and meta.get('tv_pack', 0) == 0 and meta.get('episode_title_storage', '').strip() != '' and meta['episode'].strip() != '':

# Handle TV show episode title inclusion
if meta['category'] == "TV" and meta.get('tv_pack', 0) == 0 and meta.get('episode_title_storage', '').strip() and meta['episode'].strip():
aither_name = aither_name.replace(meta['episode'], f"{meta['episode']} {meta['episode_title_storage']}", 1)

return aither_name

async def get_cat_id(self, category_name):
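The new `get_flag` helper gives the tracker's config entry priority over the per-run `meta` flag, falling back to `meta` only when the config leaves the flag unset (`None`). A standalone sketch of that precedence, simplified to take the tracker config dict directly instead of `self.config` (flag values are illustrative):

```python
def get_flag(tracker_config: dict, meta: dict, flag_name: str) -> int:
    # A flag explicitly set in the tracker config (True/False) always wins;
    # otherwise fall back to the per-upload meta flag, defaulting to off
    config_flag = tracker_config.get(flag_name)
    if config_flag is not None:
        return 1 if config_flag else 0
    return 1 if meta.get(flag_name, False) else 0

print(get_flag({'modq': True}, {'modq': False}, 'modq'))  # 1  (config overrides meta)
print(get_flag({}, {'modq': True}, 'modq'))               # 1  (meta fallback)
print(get_flag({}, {}, 'modq'))                           # 0  (default off)
```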