API documentation
Using Sphinx's sphinx.ext.autodoc plugin, it is possible to auto-generate documentation of a Python module.
Tip
Avoid having in-function-signature type annotations with autodoc by setting the following options:
    # -- Options for autodoc ----------------------------------------------------
    # https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#configuration

    # Automatically extract typehints when specified and place them in
    # descriptions of the relevant function/method.
    autodoc_typehints = "description"

    # Don't show class signature with the class' name.
    autodoc_class_signature = "separated"
Parse (absolute and relative) URLs.

The urlparse module is based upon the following RFC specifications:

RFC 3986 (STD66): "Uniform Resource Identifiers" by T. Berners-Lee, R. Fielding
and L. Masinter, January 2005.

RFC 2732: "Format for Literal IPv6 Addresses in URL's" by R. Hinden, B. Carpenter
and L. Masinter, December 1999.

RFC 2396: "Uniform Resource Identifiers (URI): Generic Syntax" by T. Berners-Lee,
R. Fielding, and L. Masinter, August 1998.

RFC 2368: "The mailto URL scheme" by P. Hoffman, L. Masinter, J. Zawinski, July 1998.

RFC 1808: "Relative Uniform Resource Locators" by R. Fielding, UC Irvine, June 1995.

RFC 1738: "Uniform Resource Locators (URL)" by T. Berners-Lee, L. Masinter,
M. McCahill, December 1994.
RFC 3986 is considered the current standard, and any future changes to the
urlparse module should conform with it. The urlparse module is currently not
entirely compliant with this RFC due to de facto scenarios for parsing, and for
backward compatibility purposes, some parsing quirks from older RFCs are
retained. The test cases in test_urlparse.py provide a good indicator of
parsing behavior.
class urllib.parse.ParseResultBytes(scheme, netloc, path, params, query, fragment)

urllib.parse.parse_qs(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')
Parse a query given as a string argument.
Arguments:

- qs: percent-encoded query string to be parsed
- keep_blank_values: flag indicating whether blank values in
  percent-encoded queries should be treated as blank strings.
  A true value indicates that blanks should be retained as
  blank strings. The default false value indicates that
  blank values are to be ignored and treated as if they were
  not included.
- strict_parsing: flag indicating what to do with parsing errors.
  If false (the default), errors are silently ignored.
  If true, errors raise a ValueError exception.
- encoding and errors: specify how to decode percent-encoded sequences
  into Unicode characters, as accepted by the bytes.decode() method.
- max_num_fields: int. If set, then throws a ValueError if there
  are more than n fields read by parse_qsl().
- separator: str. The symbol to use for separating the query arguments.
  Defaults to &.
Returns a dictionary.
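For illustration, a short doctest-style sketch of typical behaviour (query strings chosen arbitrarily):

    >>> from urllib.parse import parse_qs
    >>> parse_qs('key=val1&key=val2&empty=')
    {'key': ['val1', 'val2']}
    >>> # keep_blank_values=True retains the blank 'empty' field
    >>> parse_qs('key=val1&key=val2&empty=', keep_blank_values=True)
    {'key': ['val1', 'val2'], 'empty': ['']}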
urllib.parse.parse_qsl(qs, keep_blank_values=False, strict_parsing=False, encoding='utf-8', errors='replace', max_num_fields=None, separator='&')
Parse a query given as a string argument.
Arguments:

- qs: percent-encoded query string to be parsed
- keep_blank_values: flag indicating whether blank values in
  percent-encoded queries should be treated as blank strings.
  A true value indicates that blanks should be retained as blank
  strings. The default false value indicates that blank values
  are to be ignored and treated as if they were not included.
- strict_parsing: flag indicating what to do with parsing errors. If
  false (the default), errors are silently ignored. If true,
  errors raise a ValueError exception.
- encoding and errors: specify how to decode percent-encoded sequences
  into Unicode characters, as accepted by the bytes.decode() method.
- max_num_fields: int. If set, then throws a ValueError
  if there are more than n fields read by parse_qsl().
- separator: str. The symbol to use for separating the query arguments.
  Defaults to &.
Returns a list, as G-d intended.
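For example, a brief illustrative session (the list preserves the order of the fields in the query string):

    >>> from urllib.parse import parse_qsl
    >>> parse_qsl('key=val1&key=val2')
    [('key', 'val1'), ('key', 'val2')]
    >>> # a custom separator can be supplied, e.g. ';'
    >>> parse_qsl('a=1;b=2', separator=';')
    [('a', '1'), ('b', '2')]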
urllib.parse.quote(string, safe='/', encoding=None, errors=None)

quote('abc def') -> 'abc%20def'
Each part of a URL, e.g. the path info, the query, etc., has a
different set of reserved characters that must be quoted. The
quote function offers a cautious (not minimal) way to quote a
string for most of these parts.

RFC 3986 Uniform Resource Identifier (URI): Generic Syntax lists
the following (un)reserved characters.

    unreserved  = ALPHA / DIGIT / "-" / "." / "_" / "~"
    reserved    = gen-delims / sub-delims
    gen-delims  = ":" / "/" / "?" / "#" / "[" / "]" / "@"
    sub-delims  = "!" / "$" / "&" / "'" / "(" / ")"
                / "*" / "+" / "," / ";" / "="

Each of the reserved characters is reserved in some component of a URL,
but not necessarily in all of them.

The quote function %-escapes all characters that are neither in the
unreserved chars ("always safe") nor the additional chars set via the
safe arg.

The default for the safe arg is '/'. The character is reserved, but in
typical usage the quote function is being called on a path where the
existing slash characters are to be preserved.

Python 3.7 updates from using RFC 2396 to RFC 3986 to quote URL strings.
Now, "~" is included in the set of unreserved characters.

string and safe may be either str or bytes objects. encoding and errors
must not be specified if string is a bytes object.

The optional encoding and errors parameters specify how to deal with
non-ASCII characters, as accepted by the str.encode method.
By default, encoding='utf-8' (characters are encoded with UTF-8), and
errors='strict' (unsupported characters raise a UnicodeEncodeError).
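As an illustration, the default safe='/' keeps slashes intact, while an explicit safe='' quotes them too:

    >>> from urllib.parse import quote
    >>> quote('/path with spaces/file?.txt')   # '/' is kept by default
    '/path%20with%20spaces/file%3F.txt'
    >>> quote('a&b/c', safe='')                # quote '/' as well
    'a%26b%2Fc'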
urllib.parse.quote_from_bytes(bs, safe='/')
Like quote(), but accepts a bytes object rather than a str, and does
not perform string-to-bytes encoding. It always returns an ASCII string.

quote_from_bytes(b'abc def?') -> 'abc%20def%3f'
urllib.parse.quote_plus(string, safe='', encoding=None, errors=None)
Like quote(), but also replace ' ' with '+', as required for quoting
HTML form values. Plus signs in the original string are escaped unless
they are included in safe. It also does not have safe default to '/'.
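A small illustrative example; note that spaces become '+' and literal plus signs are escaped:

    >>> from urllib.parse import quote_plus
    >>> quote_plus('name=a b+c')
    'name%3Da+b%2Bc'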
urllib.parse.unquote(string, encoding='utf-8', errors='replace')
Replace %xx escapes by their single-character equivalent. The optional
encoding and errors parameters specify how to decode percent-encoded
sequences into Unicode characters, as accepted by the bytes.decode()
method. By default, percent-encoded sequences are decoded with UTF-8,
and invalid sequences are replaced by a placeholder character.

unquote('abc%20def') -> 'abc def'.
urllib.parse.unquote_plus(string, encoding='utf-8', errors='replace')
Like unquote(), but also replace plus signs by spaces, as required for
unquoting HTML form values.

unquote_plus('%7e/abc+def') -> '~/abc def'
urllib.parse.urldefrag(url)
Removes any existing fragment from URL.
Returns a tuple of the defragmented URL and the fragment. If
the URL contained no fragments, the second element is the
empty string.
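For illustration, a brief sketch (on Python 3 the return value is a DefragResult named tuple):

    >>> from urllib.parse import urldefrag
    >>> urldefrag('http://www.example.com/page.html#section')
    DefragResult(url='http://www.example.com/page.html', fragment='section')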
urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None, quote_via=<function quote_plus>)
Encode a dict or sequence of two-element tuples into a URL query string.
If any values in the query arg are sequences and doseq is true, each
sequence element is converted to a separate parameter.

If the query arg is a sequence of two-element tuples, the order of the
parameters in the output will match the order of parameters in the
input.

The components of a query arg may each be either a string or a bytes type.

The safe, encoding, and errors parameters are passed down to the function
specified by quote_via (encoding and errors only if a component is a str).
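A brief illustrative example covering both the plain form and doseq=True (the dicts shown are arbitrary):

    >>> from urllib.parse import urlencode
    >>> urlencode({'name': 'a b', 'page': 2})
    'name=a+b&page=2'
    >>> # with doseq=True, sequence values become repeated parameters
    >>> urlencode({'key': ['v1', 'v2']}, doseq=True)
    'key=v1&key=v2'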
urllib.parse.urljoin(base, url, allow_fragments=True)
Join a base URL and a possibly relative URL to form an absolute
interpretation of the latter.
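For illustration, a short sketch showing a relative reference and a network-path (//...) reference:

    >>> from urllib.parse import urljoin
    >>> urljoin('http://www.example.com/a/b.html', 'c.html')
    'http://www.example.com/a/c.html'
    >>> urljoin('http://www.example.com/a/b.html', '//other.example.com/x')
    'http://other.example.com/x'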
urllib.parse.urlparse(url, scheme='', allow_fragments=True)
Parse a URL into 6 components:

    <scheme>://<netloc>/<path>;<params>?<query>#<fragment>

The result is a named 6-tuple with fields corresponding to the
above. It is either a ParseResult or ParseResultBytes object,
depending on the type of the url parameter.

The username, password, hostname, and port sub-components of netloc
can also be accessed as attributes of the returned object.

The scheme argument provides the default value of the scheme
component when no scheme is found in url.

If allow_fragments is False, no attempt is made to separate the
fragment component from the previous component, which can be either
path or query.

Note that % escapes are not expanded.
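For illustration, a sketch on an arbitrary example URL; note how params is split from the last path segment and how netloc sub-components are exposed as attributes:

    >>> from urllib.parse import urlparse
    >>> p = urlparse('http://user:pwd@www.example.com:8042/over/there;ref?name=ferret#nose')
    >>> (p.scheme, p.netloc, p.path, p.params, p.query, p.fragment)
    ('http', 'user:pwd@www.example.com:8042', '/over/there', 'ref', 'name=ferret', 'nose')
    >>> (p.hostname, p.port)
    ('www.example.com', 8042)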
urllib.parse.urlsplit(url, scheme='', allow_fragments=True)
Parse a URL into 5 components:

    <scheme>://<netloc>/<path>?<query>#<fragment>

The result is a named 5-tuple with fields corresponding to the
above. It is either a SplitResult or SplitResultBytes object,
depending on the type of the url parameter.

The username, password, hostname, and port sub-components of netloc
can also be accessed as attributes of the returned object.

The scheme argument provides the default value of the scheme
component when no scheme is found in url.

If allow_fragments is False, no attempt is made to separate the
fragment component from the previous component, which can be either
path or query.

Note that % escapes are not expanded.
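A brief illustrative example; unlike urlparse(), the ;params part stays attached to the path:

    >>> from urllib.parse import urlsplit
    >>> urlsplit('http://www.example.com:8042/over/there;ref?name=ferret#nose')
    SplitResult(scheme='http', netloc='www.example.com:8042', path='/over/there;ref', query='name=ferret', fragment='nose')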
urllib.parse.urlunparse(components)
Put a parsed URL back together again. This may result in a
slightly different, but equivalent URL, if the URL that was parsed
originally had redundant delimiters, e.g. a ? with an empty query
(the draft states that these are equivalent).
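For illustration, a minimal sketch (the 6-tuple is scheme, netloc, path, params, query, fragment):

    >>> from urllib.parse import urlunparse
    >>> urlunparse(('https', 'www.example.com', '/path', '', 'q=1', ''))
    'https://www.example.com/path?q=1'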
urllib.parse.urlunsplit(components)
Combine the elements of a tuple as returned by urlsplit() into a
complete URL as a string. The data argument can be any five-item iterable.
This may result in a slightly different, but equivalent URL, if the URL that
was parsed originally had unnecessary delimiters (for example, a ? with an
empty query; the RFC states that these are equivalent).
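For illustration, a minimal sketch (the 5-tuple is scheme, netloc, path, query, fragment):

    >>> from urllib.parse import urlunsplit
    >>> urlunsplit(('https', 'www.example.com', '/path', 'q=1', 'frag'))
    'https://www.example.com/path?q=1#frag'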