Is your feature request related to a problem? Please describe.
v24 included a plugin manager for scrapers and plugins, as well as settings for plugins. However, there is no equivalent settings support for scrapers. Prior to v24, the general solution was to edit the scraper (either the yaml or a config file) to add the information (API keys, usernames/passwords, etc.). With the new model, that's much more difficult.
Describe the solution you'd like
I'd like to see a new UI and API that work like the plugin infrastructure, but for scrapers: the same sort of configuration block in the scraper yaml file, with the same graphql API available to python/JS scrapers. The values should be available to yaml scrapers as well (probably using the same brace syntax as {title}).
Describe alternatives you've considered
There are two easy (and not mutually exclusive) alternatives:
Keep doing what we're doing. Scruffy's plugin template showed a pretty clever way to create a template config on first use, which users can then edit. I haven't yet tested what happens on uninstall or upgrade (whether the config gets deleted because it lives in the same directory).
Use environment variables to pass information to python scrapers. This one works best for docker users (and probably systemd users), since you can set up the environment variables in the docker compose file or docker command. You could write wrapper scripts for other methods of launching stash.
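The environment-variable alternative might look like this on the scraper side. This is a sketch only: the variable names (`MYSCRAPER_API_KEY`, `MYSCRAPER_USERNAME`) are hypothetical, not names any real scraper reads today.

```python
import os

def read_credentials(env=None):
    """Pull scraper credentials from the environment, as a docker-compose
    file or systemd unit would provide them.

    Variable names are hypothetical -- a real scraper author would pick
    their own.
    """
    if env is None:
        env = os.environ
    api_key = env.get("MYSCRAPER_API_KEY")
    if not api_key:
        # Without real settings support, failing loudly is the best a
        # script-based scraper can do.
        raise RuntimeError("MYSCRAPER_API_KEY is not set")
    return api_key, env.get("MYSCRAPER_USERNAME", "")
```

In docker, these would be set under the service's `environment:` key; for other launch methods you'd need a wrapper script that exports them first.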
Both solutions are serviceable for script-based scrapers, but they make using these scrapers a more advanced scenario (especially when the config is python source). They also leave the file-based scrapers behind.
Additional context
There's been some discussion about this on Discord. I think @DingDongSoLong4 or maybe @Maista6969 may have some additional context or thoughts here.
Thanks for bringing this up, and apologies for the late reply! I've already had some thoughts about this with regard to plugins, but everything below applies equally to scraper settings.
I would love it if we kept iterating on the plugin settings introduced in #4143, specifically:
specifying a default value for a setting
requiring the user to change a setting before the plugin can run
persisting settings across plugin uninstall / reinstall
more setting types, like enums and lists (not crucial but nice to have)
The problem of missing defaults has already popped up in every plugin that uses this, and it's usually worked around in the script code. That's not satisfying, because it's unclear to the user what the default actually is: if the UI defaults booleans to False but the plugin itself assumes True, for example, that's a bad user experience.
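The in-script workaround typically looks something like this (the setting names here are illustrative, not from any real plugin):

```python
# Defaults the plugin must carry itself, because the UI omits unset
# values entirely. Setting names are hypothetical.
DEFAULTS = {
    "createMarkers": True,
    "minDuration": 30,
}

def apply_defaults(settings):
    """Merge user-provided settings over the plugin's own defaults.

    Every plugin script ends up duplicating its defaults like this,
    and the UI has no way to display them -- which is the mismatch
    described above.
    """
    merged = dict(DEFAULTS)
    merged.update(settings or {})
    return merged
```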
Other plugins can require e.g. an API key before they can function at all, and the current feedback in the UI when a plugin fails could also use some work.
First draft style, I'm imagining something like this in the .yml file that defines a plugin / scraper:
```yaml
settings:
  foo:
    displayName: Foo
    description: Foo the baz before xyz?
    type: BOOLEAN
    default: true
  bar:
    displayName: Favorite bar
    type: string
    choices:
      - Cheers
      - Paddy's Pub
      - Ten Forward
  apiKey:
    displayName: Other site API Key
    description: Found in your account settings on OtherSite.com
    type: string
    required: true
```
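With a schema like that, resolution could live in one place in stash instead of in every script. A minimal sketch of the implied semantics (defaults applied, `required` enforced), assuming the draft shape above and not describing any shipped code:

```python
def resolve_settings(spec, user_values):
    """Resolve user-supplied values against a settings spec shaped like
    the draft above: fill in defaults, reject missing required settings.
    """
    resolved = {}
    for name, meta in spec.items():
        if name in user_values:
            resolved[name] = user_values[name]
        elif "default" in meta:
            resolved[name] = meta["default"]
        elif meta.get("required"):
            raise ValueError(f"setting {name!r} is required but not set")
        else:
            resolved[name] = None
    return resolved
```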