At minimum, all this would need to do is check that the provided string is a URL (i.e. urllib's urlparse function doesn't throw an error when trying to parse it). If we wanted to get fancy, we could expose the parsed components and allow regexes against them (e.g. to allow both http and https as the scheme, but nothing else, such as ftp).
There are workarounds one could use, e.g. the regex validator combined with one of the many regexes floating around on the internet, while hoping it actually validates URLs correctly, but that feels like a sketchier solution than it needs to be.
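A minimal sketch of what such a validator might look like, assuming we only want to accept http/https (the `is_valid_url` name and the allowed-scheme set are illustrative, not part of any existing API). Note that parse success alone isn't enough, since urlparse accepts almost any string, so we also require a scheme and a netloc:

```python
from urllib.parse import urlparse

# Assumption for this sketch: only web URLs should pass.
ALLOWED_SCHEMES = {"http", "https"}

def is_valid_url(value: str) -> bool:
    """Return True if value parses as a URL with an allowed scheme."""
    try:
        parts = urlparse(value)
    except ValueError:
        # urlparse can raise ValueError in edge cases,
        # e.g. a malformed IPv6 literal like "http://["
        return False
    # urlparse "succeeds" on nearly anything, so also require
    # a recognized scheme and a non-empty host portion.
    return parts.scheme in ALLOWED_SCHEMES and bool(parts.netloc)
```

So `is_valid_url("https://example.com")` passes, while `is_valid_url("ftp://example.com")` and `is_valid_url("not a url")` are rejected.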
Validating any possible URL without giving the validator any patterns to work with would be a momentous task, since there is a myriad of possible URLs/URIs that one could call valid. And if you end up needing to feed patterns to the validator, it's very close to using regex anyway. As far as I know, urlparse will never throw an error as long as it receives a string as input.
I would definitely like this feature as well, though, so hopefully the maintainers are more creative and optimistic than I am :)
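To illustrate the point above about urlparse not throwing: it happily "parses" strings that are clearly not URLs, returning a result with empty components rather than raising. A quick demonstration:

```python
from urllib.parse import urlparse

# urlparse does not raise for ordinary non-URL strings;
# it just returns a ParseResult with mostly empty fields.
for s in ["not a url", "::::", "http//missing-colon", ""]:
    parts = urlparse(s)
    print(repr(s), "-> scheme:", repr(parts.scheme), "netloc:", repr(parts.netloc))
```

None of these inputs produce an error, which is why "urlparse didn't throw" can't serve as a validity check on its own.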