.venv/lib/python3.9/site-packages/pip/_vendor/README.rst
@@ -0,0 +1,180 @@
================
Vendoring Policy
================

* Vendored libraries **MUST** not be modified except as required to
  successfully vendor them.
* Vendored libraries **MUST** be released copies of libraries available on
  PyPI.
* Vendored libraries **MUST** be available under a license that allows
  them to be integrated into ``pip``, which is released under the MIT license.
* Vendored libraries **MUST** be accompanied with LICENSE files.
* The versions of libraries vendored in pip **MUST** be reflected in
  ``pip/_vendor/vendor.txt`` (see the example below).
* Vendored libraries **MUST** function without any build steps such as ``2to3``
  or compilation of C code; in practice this limits them to single-source
  2.x/3.x, pure-Python code.
* Any modifications made to libraries **MUST** be noted in
  ``pip/_vendor/README.rst`` and their corresponding patches **MUST** be
  included in ``tools/vendoring/patches``.
* Vendored libraries should have corresponding ``vendored()`` entries in
  ``pip/_vendor/__init__.py``.
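
For illustration only (the names and versions below are made up, not pip's
real pin set), ``vendor.txt`` uses the standard pinned-requirements format,
one exactly-pinned release per line::

    CacheControl==0.14.0
    requests==2.32.3
    urllib3==1.26.20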

Rationale
=========

Historically pip has not had any dependencies except for ``setuptools`` itself,
choosing instead to implement any functionality it needed to prevent needing
a dependency. However, starting with pip 1.5, we began to replace code that was
implemented inside of pip with reusable libraries from PyPI. This brought the
typical benefits of reusing libraries instead of reinventing the wheel, such as
higher quality and more battle-tested code, centralization of bug fixes
(particularly security-sensitive ones), and better/more features for less work.

However, there are several issues with having dependencies in the traditional
way (via ``install_requires``) for pip. These issues are:

**Fragility**
   When pip depends on another library to function, then if that library either
   isn't installed or an incompatible version is installed, pip ceases to
   function. This is of course true for all Python applications; however, for
   every application *except* pip the way you fix it is by re-running pip.
   Obviously, when pip can't run, you can't use pip to fix pip, so you're left
   having to manually resolve dependencies and install them by hand.

**Making other libraries uninstallable**
   One of pip's current dependencies is the ``requests`` library, for which pip
   requires a fairly recent version to run. If pip depended on ``requests`` in
   the traditional manner, then we'd either have to maintain compatibility with
   every ``requests`` version that has ever existed (and ever will), OR allow
   pip to render certain versions of ``requests`` uninstallable. (The second
   issue, although technically true for any Python application, is magnified by
   pip's ubiquity; pip is installed by default in Python, in ``pyvenv``, and in
   ``virtualenv``.)

**Security**
   This might seem puzzling at first glance, since vendoring has a tendency to
   complicate updating dependencies for security updates, and that holds true
   for pip. However, given the *other* reasons for avoiding dependencies, the
   alternative is for pip to reinvent the wheel itself. This is what pip did
   historically. It forced pip to re-implement its own HTTPS verification
   routines as a workaround for the Python standard library's lack of SSL
   validation, which resulted in similar bugs in the validation routine in
   ``requests`` and ``urllib3``, except that they had to be discovered and
   fixed independently. Even though we're vendoring, reusing libraries keeps
   pip more secure by relying on the great work of our dependencies, *and*
   allows for faster, easier security fixes by simply pulling in newer
   versions of dependencies.

**Bootstrapping**
   Currently most popular methods of installing pip rely on pip's
   self-contained nature to install pip itself. These tools work by bundling a
   copy of pip, adding it to ``sys.path``, and then executing that copy of pip.
   This is done instead of implementing a "mini installer" (to reduce
   duplication); pip already knows how to install a Python package, and is far
   more battle-tested than any "mini installer" could ever possibly be.

Many downstream redistributors have policies against this kind of bundling, and
instead opt to patch the software they distribute to debundle it and make it
rely on the global versions of the software that they already have packaged
(which may have its own patches applied to it). We (the pip team) would prefer
it if pip was *not* debundled in this manner due to the above reasons and
instead we would prefer it if pip would be left intact as it is now.

In the longer term, if someone has a *portable* solution to the above problems,
other than the bundling method we currently use, that doesn't add additional
problems that are unreasonable, then we would be happy to consider, and possibly
switch to, said method. This solution must function correctly across all of the
situations in which we expect pip to be used, and must not mandate some external
mechanism such as OS packages.


Modifications
=============

* ``setuptools`` is completely stripped to only keep ``pkg_resources``.
* ``pkg_resources`` has been modified to import its dependencies from
  ``pip._vendor``, and to use the vendored copy of ``platformdirs``
  rather than ``appdirs``.
* ``packaging`` has been modified to import its dependencies from
  ``pip._vendor``.
* ``CacheControl`` has been modified to import its dependencies from
  ``pip._vendor``.
* ``requests`` has been modified to import its other dependencies from
  ``pip._vendor`` and to *not* load ``simplejson`` (all platforms) and
  ``pyopenssl`` (Windows).
* ``platformdirs`` has been modified to import its submodules from
  ``pip._vendor.platformdirs``.


Automatic Vendoring
===================

Vendoring is automated via the `vendoring
<https://pypi.org/project/vendoring/>`_ tool from the content of
``pip/_vendor/vendor.txt`` and the different patches in
``tools/vendoring/patches``.
Launch it via ``vendoring sync . -v`` (requires ``vendoring>=0.2.2``).
Tool configuration is done via ``pyproject.toml``.
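
As an illustrative sketch only (the key names come from the ``vendoring``
tool, but the values here are assumptions rather than a copy of pip's real
configuration), the relevant section looks roughly like::

    [tool.vendoring]
    destination = "pip/_vendor/"
    requirements = "pip/_vendor/vendor.txt"
    namespace = "pip._vendor"
    patches-dir = "tools/vendoring/patches"
    protected-files = ["__init__.py", "README.rst", "vendor.txt"]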

To update the vendored library versions, we have a session defined in ``nox``.
The command to upgrade everything is::

    nox -s vendoring -- --upgrade-all --skip urllib3 --skip setuptools

At the time of writing (April 2025) we do not upgrade ``urllib3`` because the
next version is a major upgrade and will be handled as an independent PR. We
also do not upgrade ``setuptools``, because we only rely on ``pkg_resources``,
and tracking every ``setuptools`` change is unnecessary for our needs.


Managing Local Patches
======================

The ``vendoring`` tool automatically applies our local patches, but when
updating, the patches sometimes no longer apply cleanly. In that case, the
update will fail. To resolve this, take the following steps (sketched in
shell form below):

1. Revert any incomplete changes in the revendoring branch, to ensure you have
   a clean starting point.
2. Run the revendoring of the library with a problem again: ``nox -s vendoring
   -- --upgrade <library_name>``.
3. This will fail again, but you will have the original source in your working
   directory. Review the existing patch against the source, and modify the
   patch to reflect the new version of the source. If you ``git add`` the
   changes the vendoring made, you can modify the source to reflect the patch
   file and then generate a new patch with ``git diff``.
4. Now, revert everything *except* the patch file changes. Leave the modified
   patch file unstaged but saved in the working tree.
5. Re-run the vendoring. This time, it should pick up the changed patch file
   and apply it cleanly. The patch file changes will be committed along with
   the revendoring, so the new commit should be ready to test and publish as a
   PR.
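
A minimal sketch of steps 2-5, assuming the failing library is
``cachecontrol`` (the library name and paths are illustrative)::

    nox -s vendoring -- --upgrade cachecontrol    # fails, leaves new sources
    git add pip/_vendor/cachecontrol/             # stage them as the baseline
    # hand-edit the new sources to re-apply what the old patch did, then:
    git diff > tools/vendoring/patches/cachecontrol.patch
    git checkout HEAD -- pip/_vendor/             # revert all but the patch file
    nox -s vendoring -- --upgrade cachecontrol    # should now apply cleanly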


Debundling
==========

As mentioned in the rationale, we, the pip team, would prefer it if pip was not
debundled (other than optionally ``pip/_vendor/requests/cacert.pem``) and that
pip was left intact. However, if you insist on doing so, we have a
semi-supported method (that we don't test in our CI) which requires a bit of
extra work on your end in order to solve the problems described above.

1. Delete everything in ``pip/_vendor/`` **except** for
   ``pip/_vendor/__init__.py`` and ``pip/_vendor/vendor.txt``.
2. Generate wheels for each of pip's dependencies (and any of their
   dependencies) using your patched copies of these libraries. These must be
   placed somewhere on the filesystem that pip can access (``pip/_vendor`` is
   the default assumption).
3. Modify ``pip/_vendor/__init__.py`` so that the ``DEBUNDLED`` variable is
   ``True``.
4. Upon installation, the ``INSTALLER`` file in pip's own ``dist-info``
   directory should be set to something other than ``pip``, so that pip
   can detect that it wasn't installed using itself.
5. *(optional)* If you've placed the wheels in a location other than
   ``pip/_vendor/``, then modify ``pip/_vendor/__init__.py`` so that the
   ``WHEEL_DIR`` variable points to the location you've placed them.
6. *(optional)* Update the ``pip_self_version_check`` logic to use the
   appropriate logic for determining the latest available version of pip and
   prompt the user with the correct upgrade message.

Note that partial debundling is **NOT** supported. You need to prepare wheels
for all dependencies for successful debundling.

.venv/lib/python3.9/site-packages/pip/_vendor/__init__.py
@@ -0,0 +1,117 @@
"""
pip._vendor is for vendoring dependencies of pip to prevent needing pip to
depend on something external.

Files inside of pip._vendor should be considered immutable and should only be
updated to versions from upstream.
"""
from __future__ import absolute_import

import glob
import os.path
import sys

# Downstream redistributors which have debundled our dependencies should also
# patch this value to be true. This will trigger the additional patching
# to cause things like "six" to be available as pip._vendor.six.
DEBUNDLED = False

# By default, look in this directory for a bunch of .whl files which we will
# add to the beginning of sys.path before attempting to import anything. This
# is done to support downstream re-distributors like Debian and Fedora who
# wish to create their own Wheels for our dependencies to aid in debundling.
WHEEL_DIR = os.path.abspath(os.path.dirname(__file__))


# Define a small helper function to alias our vendored modules to the real ones
# if the vendored ones do not exist. The idea for this was taken from
# https://github.com/kennethreitz/requests/pull/2567.
def vendored(modulename):
    vendored_name = "{0}.{1}".format(__name__, modulename)

    try:
        __import__(modulename, globals(), locals(), level=0)
    except ImportError:
        # We can just silently allow import failures to pass here. If we
        # got to this point it means that ``import pip._vendor.whatever``
        # failed and so did ``import whatever``. Since we're importing this
        # upfront in an attempt to alias imports, not erroring here will
        # just mean we get a regular import error whenever pip *actually*
        # tries to import one of these modules to use it, which actually
        # gives us a better error message than we would have otherwise
        # gotten.
        pass
    else:
        sys.modules[vendored_name] = sys.modules[modulename]
        base, head = vendored_name.rsplit(".", 1)
        setattr(sys.modules[base], head, sys.modules[modulename])
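
# For example (illustrative): in a debundled install where the vendored copy
# of "packaging" has been removed, vendored("packaging") imports the system
# "packaging" and registers it as sys.modules["pip._vendor.packaging"], so
# "from pip._vendor import packaging" inside pip keeps working.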


# If we're operating in a debundled setup, then we want to go ahead and trigger
# the aliasing of our vendored libraries as well as looking for wheels to add
# to our sys.path. This will cause all of this code to be a no-op typically
# however downstream redistributors can enable it in a consistent way across
# all platforms.
if DEBUNDLED:
    # Actually look inside of WHEEL_DIR to find .whl files and add them to the
    # front of our sys.path.
    sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path

    # Actually alias all of our vendored dependencies.
    vendored("cachecontrol")
    vendored("certifi")
    vendored("dependency_groups")
    vendored("distlib")
    vendored("distro")
    vendored("packaging")
    vendored("packaging.version")
    vendored("packaging.specifiers")
    vendored("pkg_resources")
    vendored("platformdirs")
    vendored("progress")
    vendored("pyproject_hooks")
    vendored("requests")
    vendored("requests.exceptions")
    vendored("requests.packages")
    vendored("requests.packages.urllib3")
    vendored("requests.packages.urllib3._collections")
    vendored("requests.packages.urllib3.connection")
    vendored("requests.packages.urllib3.connectionpool")
    vendored("requests.packages.urllib3.contrib")
    vendored("requests.packages.urllib3.contrib.ntlmpool")
    vendored("requests.packages.urllib3.contrib.pyopenssl")
    vendored("requests.packages.urllib3.exceptions")
    vendored("requests.packages.urllib3.fields")
    vendored("requests.packages.urllib3.filepost")
    vendored("requests.packages.urllib3.packages")
    vendored("requests.packages.urllib3.packages.ordered_dict")
    vendored("requests.packages.urllib3.packages.six")
    vendored("requests.packages.urllib3.packages.ssl_match_hostname")
    vendored("requests.packages.urllib3.packages.ssl_match_hostname._implementation")
    vendored("requests.packages.urllib3.poolmanager")
    vendored("requests.packages.urllib3.request")
    vendored("requests.packages.urllib3.response")
    vendored("requests.packages.urllib3.util")
    vendored("requests.packages.urllib3.util.connection")
    vendored("requests.packages.urllib3.util.request")
    vendored("requests.packages.urllib3.util.response")
    vendored("requests.packages.urllib3.util.retry")
    vendored("requests.packages.urllib3.util.ssl_")
    vendored("requests.packages.urllib3.util.timeout")
    vendored("requests.packages.urllib3.util.url")
    vendored("resolvelib")
    vendored("rich")
    vendored("rich.console")
    vendored("rich.highlighter")
    vendored("rich.logging")
    vendored("rich.markup")
    vendored("rich.progress")
    vendored("rich.segment")
    vendored("rich.style")
    vendored("rich.text")
    vendored("rich.traceback")
    if sys.version_info < (3, 11):
        vendored("tomli")
    vendored("truststore")
    vendored("urllib3")

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/LICENSE.txt
@@ -0,0 +1,13 @@
Copyright 2012-2021 Eric Larson

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/__init__.py
@@ -0,0 +1,32 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

"""CacheControl import Interface.

Make it easy to import from cachecontrol without long namespaces.
"""

from pip._vendor.cachecontrol.adapter import CacheControlAdapter
from pip._vendor.cachecontrol.controller import CacheController
from pip._vendor.cachecontrol.wrapper import CacheControl

__author__ = "Eric Larson"
__email__ = "eric@ionrock.org"
# pip patch: this won't work when vendored, so just patch it out as it's unused
# (the ``import importlib.metadata`` it relied on is dropped as well, and
# "__version__" is removed from ``__all__`` since it is no longer defined):
# __version__ = importlib.metadata.version("cachecontrol")

__all__ = [
    "__author__",
    "__email__",
    "CacheControlAdapter",
    "CacheController",
    "CacheControl",
]

import logging

logging.getLogger(__name__).addHandler(logging.NullHandler())
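
# Typical use, as an illustrative sketch (not part of the vendored file):
#
#     from pip._vendor import requests
#     from pip._vendor.cachecontrol import CacheControl
#
#     sess = CacheControl(requests.Session())
#     sess.get("https://pypi.org/simple/")  # may be served from cache next time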

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/_cmd.py
@@ -0,0 +1,70 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import logging
from argparse import ArgumentParser
from typing import TYPE_CHECKING

from pip._vendor import requests

from pip._vendor.cachecontrol.adapter import CacheControlAdapter
from pip._vendor.cachecontrol.cache import DictCache
from pip._vendor.cachecontrol.controller import logger

if TYPE_CHECKING:
    from argparse import Namespace

    from pip._vendor.cachecontrol.controller import CacheController


def setup_logging() -> None:
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler()
    logger.addHandler(handler)


def get_session() -> requests.Session:
    adapter = CacheControlAdapter(
        DictCache(), cache_etags=True, serializer=None, heuristic=None
    )
    sess = requests.Session()
    sess.mount("http://", adapter)
    sess.mount("https://", adapter)

    sess.cache_controller = adapter.controller  # type: ignore[attr-defined]
    return sess


def get_args() -> Namespace:
    parser = ArgumentParser()
    parser.add_argument("url", help="The URL to try and cache")
    return parser.parse_args()


def main() -> None:
    args = get_args()
    sess = get_session()

    # Make a request to get a response
    resp = sess.get(args.url)

    # Turn on logging
    setup_logging()

    # Try setting the cache
    cache_controller: CacheController = (
        sess.cache_controller  # type: ignore[attr-defined]
    )
    cache_controller.cache_response(resp.request, resp.raw)

    # Now try to get it
    if cache_controller.cached_request(resp.request):
        print("Cached!")
    else:
        print("Not cached :(")


if __name__ == "__main__":
    main()

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/adapter.py
@@ -0,0 +1,167 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import functools
import weakref
import zlib
from typing import TYPE_CHECKING, Any, Collection, Mapping

from pip._vendor.requests.adapters import HTTPAdapter

from pip._vendor.cachecontrol.cache import DictCache
from pip._vendor.cachecontrol.controller import (
    PERMANENT_REDIRECT_STATUSES,
    CacheController,
)
from pip._vendor.cachecontrol.filewrapper import CallbackFileWrapper

if TYPE_CHECKING:
    from pip._vendor.requests import PreparedRequest, Response
    from pip._vendor.urllib3 import HTTPResponse

    from pip._vendor.cachecontrol.cache import BaseCache
    from pip._vendor.cachecontrol.heuristics import BaseHeuristic
    from pip._vendor.cachecontrol.serialize import Serializer


class CacheControlAdapter(HTTPAdapter):
    invalidating_methods = {"PUT", "PATCH", "DELETE"}

    def __init__(
        self,
        cache: BaseCache | None = None,
        cache_etags: bool = True,
        controller_class: type[CacheController] | None = None,
        serializer: Serializer | None = None,
        heuristic: BaseHeuristic | None = None,
        cacheable_methods: Collection[str] | None = None,
        *args: Any,
        **kw: Any,
    ) -> None:
        super().__init__(*args, **kw)
        self.cache = DictCache() if cache is None else cache
        self.heuristic = heuristic
        self.cacheable_methods = cacheable_methods or ("GET",)

        controller_factory = controller_class or CacheController
        self.controller = controller_factory(
            self.cache, cache_etags=cache_etags, serializer=serializer
        )

    def send(
        self,
        request: PreparedRequest,
        stream: bool = False,
        timeout: None | float | tuple[float, float] | tuple[float, None] = None,
        verify: bool | str = True,
        cert: (None | bytes | str | tuple[bytes | str, bytes | str]) = None,
        proxies: Mapping[str, str] | None = None,
        cacheable_methods: Collection[str] | None = None,
    ) -> Response:
        """
        Send a request. Use the request information to see if it
        exists in the cache and cache the response if we need to and can.
        """
        cacheable = cacheable_methods or self.cacheable_methods
        if request.method in cacheable:
            try:
                cached_response = self.controller.cached_request(request)
            except zlib.error:
                cached_response = None
            if cached_response:
                return self.build_response(request, cached_response, from_cache=True)

            # check for etags and add headers if appropriate
            request.headers.update(self.controller.conditional_headers(request))

        resp = super().send(request, stream, timeout, verify, cert, proxies)

        return resp

    def build_response(  # type: ignore[override]
        self,
        request: PreparedRequest,
        response: HTTPResponse,
        from_cache: bool = False,
        cacheable_methods: Collection[str] | None = None,
    ) -> Response:
        """
        Build a response by making a request or using the cache.

        This will end up calling send and returning a potentially
        cached response.
        """
        cacheable = cacheable_methods or self.cacheable_methods
        if not from_cache and request.method in cacheable:
            # Check for any heuristics that might update headers
            # before trying to cache.
            if self.heuristic:
                response = self.heuristic.apply(response)

            # apply any expiration heuristics
            if response.status == 304:
                # We must have sent an ETag request. This could mean
                # that we've been expired already or that we simply
                # have an etag. In either case, we want to try and
                # update the cache if that is the case.
                cached_response = self.controller.update_cached_response(
                    request, response
                )

                if cached_response is not response:
                    from_cache = True

                # We are done with the server response, read a
                # possible response body (compliant servers will
                # not return one, but we cannot be 100% sure) and
                # release the connection back to the pool.
                response.read(decode_content=False)
                response.release_conn()

                response = cached_response

            # We always cache permanent redirect responses (301 and 308)
            elif int(response.status) in PERMANENT_REDIRECT_STATUSES:
                self.controller.cache_response(request, response)
            else:
                # Wrap the response file with a wrapper that will cache the
                # response when the stream has been consumed.
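                # A weak reference to the response is handed to the callback
                # below; cache_response() checks that the referent is still
                # alive, so an unconsumed streamed response can still be
                # garbage collected instead of being cached.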
                response._fp = CallbackFileWrapper(  # type: ignore[assignment]
                    response._fp,  # type: ignore[arg-type]
                    functools.partial(
                        self.controller.cache_response, request, weakref.ref(response)
                    ),
                )
                if response.chunked:
                    super_update_chunk_length = response.__class__._update_chunk_length

                    def _update_chunk_length(
                        weak_self: weakref.ReferenceType[HTTPResponse],
                    ) -> None:
                        self = weak_self()
                        if self is None:
                            return

                        super_update_chunk_length(self)
                        if self.chunk_left == 0:
                            self._fp._close()  # type: ignore[union-attr]

                    response._update_chunk_length = functools.partial(  # type: ignore[method-assign]
                        _update_chunk_length, weakref.ref(response)
                    )

        resp: Response = super().build_response(request, response)

        # See if we should invalidate the cache.
        if request.method in self.invalidating_methods and resp.ok:
            assert request.url is not None
            cache_url = self.controller.cache_url(request.url)
            self.cache.delete(cache_url)

        # Give the response a from_cache attribute to let people use it
        resp.from_cache = from_cache  # type: ignore[attr-defined]

        return resp

    def close(self) -> None:
        self.cache.close()
        super().close()  # type: ignore[no-untyped-call]

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/cache.py
@@ -0,0 +1,75 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

"""
The cache object API for implementing caches. The default is a thread-safe
in-memory dictionary.
"""

from __future__ import annotations

from threading import Lock
from typing import IO, TYPE_CHECKING, MutableMapping

if TYPE_CHECKING:
    from datetime import datetime


class BaseCache:
    def get(self, key: str) -> bytes | None:
        raise NotImplementedError()

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        raise NotImplementedError()

    def delete(self, key: str) -> None:
        raise NotImplementedError()

    def close(self) -> None:
        pass


class DictCache(BaseCache):
    def __init__(self, init_dict: MutableMapping[str, bytes] | None = None) -> None:
        self.lock = Lock()
        self.data = init_dict or {}

    def get(self, key: str) -> bytes | None:
        return self.data.get(key, None)

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        with self.lock:
            self.data.update({key: value})

    def delete(self, key: str) -> None:
        with self.lock:
            if key in self.data:
                self.data.pop(key)


class SeparateBodyBaseCache(BaseCache):
    """
    In this variant, the body is not stored mixed in with the metadata, but is
    passed in (as a bytes-like object) in a separate call to ``set_body()``.

    That is, the expected interaction pattern is::

        cache.set(key, serialized_metadata)
        cache.set_body(key, body)

    Similarly, the body should be loaded separately via ``get_body()``.
    """

    def set_body(self, key: str, body: bytes) -> None:
        raise NotImplementedError()

    def get_body(self, key: str) -> IO[bytes] | None:
        """
        Return the body as a file-like object.
        """
        raise NotImplementedError()

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/caches/__init__.py
@@ -0,0 +1,8 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

from pip._vendor.cachecontrol.caches.file_cache import FileCache, SeparateBodyFileCache
from pip._vendor.cachecontrol.caches.redis_cache import RedisCache

__all__ = ["FileCache", "SeparateBodyFileCache", "RedisCache"]

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/caches/file_cache.py
@@ -0,0 +1,145 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import hashlib
import os
import tempfile
from textwrap import dedent
from typing import IO, TYPE_CHECKING
from pathlib import Path

from pip._vendor.cachecontrol.cache import BaseCache, SeparateBodyBaseCache
from pip._vendor.cachecontrol.controller import CacheController

if TYPE_CHECKING:
    from datetime import datetime

    from filelock import BaseFileLock


class _FileCacheMixin:
    """Shared implementation for both FileCache variants."""

    def __init__(
        self,
        directory: str | Path,
        forever: bool = False,
        filemode: int = 0o0600,
        dirmode: int = 0o0700,
        lock_class: type[BaseFileLock] | None = None,
    ) -> None:
        try:
            if lock_class is None:
                from filelock import FileLock

                lock_class = FileLock
        except ImportError:
            notice = dedent(
                """
            NOTE: In order to use the FileCache you must have
            filelock installed. You can install it via pip:
              pip install cachecontrol[filecache]
            """
            )
            raise ImportError(notice)

        self.directory = directory
        self.forever = forever
        self.filemode = filemode
        self.dirmode = dirmode
        self.lock_class = lock_class

    @staticmethod
    def encode(x: str) -> str:
        return hashlib.sha224(x.encode()).hexdigest()

    def _fn(self, name: str) -> str:
        # NOTE: This method should not change as some may depend on it.
        # See: https://github.com/ionrock/cachecontrol/issues/63
        hashed = self.encode(name)
        parts = list(hashed[:5]) + [hashed]
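        # e.g. a hash "deadbeef09..." is laid out on disk as
        # <directory>/d/e/a/d/b/deadbeef09..., fanning entries out across
        # five levels of single-character directories.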
        return os.path.join(self.directory, *parts)

    def get(self, key: str) -> bytes | None:
        name = self._fn(key)
        try:
            with open(name, "rb") as fh:
                return fh.read()

        except FileNotFoundError:
            return None

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        name = self._fn(key)
        self._write(name, value)

    def _write(self, path: str, data: bytes) -> None:
        """
        Safely write the data to the given path.
        """
        # Make sure the directory exists
        dirname = os.path.dirname(path)
        os.makedirs(dirname, self.dirmode, exist_ok=True)

        with self.lock_class(path + ".lock"):
            # Write our actual file
            (fd, name) = tempfile.mkstemp(dir=dirname)
            try:
                os.write(fd, data)
            finally:
                os.close(fd)
            os.chmod(name, self.filemode)
            os.replace(name, path)

    def _delete(self, key: str, suffix: str) -> None:
        name = self._fn(key) + suffix
        if not self.forever:
            try:
                os.remove(name)
            except FileNotFoundError:
                pass


class FileCache(_FileCacheMixin, BaseCache):
    """
    Traditional FileCache: body is stored in memory, so not suitable for large
    downloads.
    """

    def delete(self, key: str) -> None:
        self._delete(key, "")


class SeparateBodyFileCache(_FileCacheMixin, SeparateBodyBaseCache):
    """
    Memory-efficient FileCache: body is stored in a separate file, reducing
    peak memory usage.
    """

    def get_body(self, key: str) -> IO[bytes] | None:
        name = self._fn(key) + ".body"
        try:
            return open(name, "rb")
        except FileNotFoundError:
            return None

    def set_body(self, key: str, body: bytes) -> None:
        name = self._fn(key) + ".body"
        self._write(name, body)

    def delete(self, key: str) -> None:
        self._delete(key, "")
        self._delete(key, ".body")


def url_to_file_path(url: str, filecache: FileCache) -> str:
    """Return the file cache path based on the URL.

    This does not ensure the file exists!
    """
    key = CacheController.cache_url(url)
    return filecache._fn(key)

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py
@@ -0,0 +1,48 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations


from datetime import datetime, timezone
from typing import TYPE_CHECKING

from pip._vendor.cachecontrol.cache import BaseCache

if TYPE_CHECKING:
    from redis import Redis


class RedisCache(BaseCache):
    def __init__(self, conn: Redis[bytes]) -> None:
        self.conn = conn

    def get(self, key: str) -> bytes | None:
        return self.conn.get(key)

    def set(
        self, key: str, value: bytes, expires: int | datetime | None = None
    ) -> None:
        if not expires:
            self.conn.set(key, value)
        elif isinstance(expires, datetime):
            now_utc = datetime.now(timezone.utc)
            if expires.tzinfo is None:
                now_utc = now_utc.replace(tzinfo=None)
            delta = expires - now_utc
            self.conn.setex(key, int(delta.total_seconds()), value)
        else:
            self.conn.setex(key, expires, value)

    def delete(self, key: str) -> None:
        self.conn.delete(key)

    def clear(self) -> None:
        """Helper for clearing all the keys in a database. Use with
        caution!"""
        for key in self.conn.keys():
            self.conn.delete(key)

    def close(self) -> None:
        """Redis uses connection pooling, no need to close the connection."""
        pass

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/controller.py
@@ -0,0 +1,511 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

"""
The httplib2 algorithms ported for use with requests.
"""

from __future__ import annotations

import calendar
import logging
import re
import time
import weakref
from email.utils import parsedate_tz
from typing import TYPE_CHECKING, Collection, Mapping

from pip._vendor.requests.structures import CaseInsensitiveDict

from pip._vendor.cachecontrol.cache import DictCache, SeparateBodyBaseCache
from pip._vendor.cachecontrol.serialize import Serializer

if TYPE_CHECKING:
    from typing import Literal

    from pip._vendor.requests import PreparedRequest
    from pip._vendor.urllib3 import HTTPResponse

    from pip._vendor.cachecontrol.cache import BaseCache

logger = logging.getLogger(__name__)

URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")

PERMANENT_REDIRECT_STATUSES = (301, 308)


def parse_uri(uri: str) -> tuple[str, str, str, str, str]:
    """Parses a URI using the regex given in Appendix B of RFC 3986.

    (scheme, authority, path, query, fragment) = parse_uri(uri)
    """
    match = URI.match(uri)
    assert match is not None
    groups = match.groups()
    return (groups[1], groups[3], groups[4], groups[6], groups[8])


class CacheController:
    """An interface to see if a request should be cached or not."""

    def __init__(
        self,
        cache: BaseCache | None = None,
        cache_etags: bool = True,
        serializer: Serializer | None = None,
        status_codes: Collection[int] | None = None,
    ):
        self.cache = DictCache() if cache is None else cache
        self.cache_etags = cache_etags
        self.serializer = serializer or Serializer()
        self.cacheable_status_codes = status_codes or (200, 203, 300, 301, 308)

    @classmethod
    def _urlnorm(cls, uri: str) -> str:
        """Normalize the URL to create a safe key for the cache"""
        (scheme, authority, path, query, fragment) = parse_uri(uri)
        if not scheme or not authority:
            raise Exception("Only absolute URIs are allowed. uri = %s" % uri)

        scheme = scheme.lower()
        authority = authority.lower()

        if not path:
            path = "/"

        # Could do syntax based normalization of the URI before
        # computing the digest. See Section 6.2.2 of Std 66.
        request_uri = query and "?".join([path, query]) or path
        defrag_uri = scheme + "://" + authority + request_uri

        return defrag_uri

    @classmethod
    def cache_url(cls, uri: str) -> str:
        return cls._urlnorm(uri)

    def parse_cache_control(self, headers: Mapping[str, str]) -> dict[str, int | None]:
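        # Illustrative example: a header value of "max-age=3600, no-cache"
        # parses to {"max-age": 3600, "no-cache": None}; unknown directives
        # are dropped with a debug message.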
        known_directives = {
            # https://tools.ietf.org/html/rfc7234#section-5.2
            "max-age": (int, True),
            "max-stale": (int, False),
            "min-fresh": (int, True),
            "no-cache": (None, False),
            "no-store": (None, False),
            "no-transform": (None, False),
            "only-if-cached": (None, False),
            "must-revalidate": (None, False),
            "public": (None, False),
            "private": (None, False),
            "proxy-revalidate": (None, False),
            "s-maxage": (int, True),
        }

        cc_headers = headers.get("cache-control", headers.get("Cache-Control", ""))

        retval: dict[str, int | None] = {}

        for cc_directive in cc_headers.split(","):
            if not cc_directive.strip():
                continue

            parts = cc_directive.split("=", 1)
            directive = parts[0].strip()

            try:
                typ, required = known_directives[directive]
            except KeyError:
                logger.debug("Ignoring unknown cache-control directive: %s", directive)
                continue

            if not typ or not required:
                retval[directive] = None
            if typ:
                try:
                    retval[directive] = typ(parts[1].strip())
                except IndexError:
                    if required:
                        logger.debug(
                            "Missing value for cache-control directive: %s",
                            directive,
                        )
                except ValueError:
                    logger.debug(
                        "Invalid value for cache-control directive %s, must be %s",
                        directive,
                        typ.__name__,
                    )

        return retval

    def _load_from_cache(self, request: PreparedRequest) -> HTTPResponse | None:
        """
        Load a cached response, or return None if it's not available.
        """
        # We do not support caching of partial content: so if the request contains a
        # Range header then we don't want to load anything from the cache.
        if "Range" in request.headers:
            return None

        cache_url = request.url
        assert cache_url is not None
        cache_data = self.cache.get(cache_url)
        if cache_data is None:
            logger.debug("No cache entry available")
            return None

        if isinstance(self.cache, SeparateBodyBaseCache):
            body_file = self.cache.get_body(cache_url)
        else:
            body_file = None

        result = self.serializer.loads(request, cache_data, body_file)
        if result is None:
            logger.warning("Cache entry deserialization failed, entry ignored")
        return result

    def cached_request(self, request: PreparedRequest) -> HTTPResponse | Literal[False]:
        """
        Return a cached response if it exists in the cache, otherwise
        return False.
        """
        assert request.url is not None
        cache_url = self.cache_url(request.url)
        logger.debug('Looking up "%s" in the cache', cache_url)
        cc = self.parse_cache_control(request.headers)

        # Bail out if the request insists on fresh data
        if "no-cache" in cc:
            logger.debug('Request header has "no-cache", cache bypassed')
            return False

        if "max-age" in cc and cc["max-age"] == 0:
            logger.debug('Request header has "max-age" as 0, cache bypassed')
            return False

        # Check whether we can load the response from the cache:
        resp = self._load_from_cache(request)
        if not resp:
            return False

        # If we have a cached permanent redirect, return it immediately. We
        # don't need to test our response for other headers b/c it is
        # intrinsically "cacheable" as it is Permanent.
        #
        # See:
        #   https://tools.ietf.org/html/rfc7231#section-6.4.2
        #
        # Client can try to refresh the value by repeating the request
        # with cache busting headers as usual (ie no-cache).
        if int(resp.status) in PERMANENT_REDIRECT_STATUSES:
            msg = (
                "Returning cached permanent redirect response "
                "(ignoring date and etag information)"
            )
            logger.debug(msg)
            return resp

        headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(resp.headers)
        if not headers or "date" not in headers:
            if "etag" not in headers:
                # Without date or etag, the cached response can never be used
                # and should be deleted.
                logger.debug("Purging cached response: no date or etag")
                self.cache.delete(cache_url)
            logger.debug("Ignoring cached response: no date")
            return False

        now = time.time()
        time_tuple = parsedate_tz(headers["date"])
        assert time_tuple is not None
        date = calendar.timegm(time_tuple[:6])
        current_age = max(0, now - date)
        logger.debug("Current age based on date: %i", current_age)

        # TODO: There is an assumption that the result will be a
        #       urllib3 response object. This may not be best since we
        #       could probably avoid instantiating or constructing the
        #       response until we know we need it.
        resp_cc = self.parse_cache_control(headers)

        # determine freshness
        freshness_lifetime = 0

        # Check the max-age pragma in the cache control header
        max_age = resp_cc.get("max-age")
        if max_age is not None:
            freshness_lifetime = max_age
            logger.debug("Freshness lifetime from max-age: %i", freshness_lifetime)

        # If there isn't a max-age, check for an expires header
        elif "expires" in headers:
            expires = parsedate_tz(headers["expires"])
            if expires is not None:
                expire_time = calendar.timegm(expires[:6]) - date
                freshness_lifetime = max(0, expire_time)
                logger.debug("Freshness lifetime from expires: %i", freshness_lifetime)

        # Determine if we are setting freshness limit in the
        # request. Note, this overrides what was in the response.
        max_age = cc.get("max-age")
        if max_age is not None:
            freshness_lifetime = max_age
            logger.debug(
                "Freshness lifetime from request max-age: %i", freshness_lifetime
            )

        min_fresh = cc.get("min-fresh")
        if min_fresh is not None:
            # adjust our current age by our min fresh
            current_age += min_fresh
            logger.debug("Adjusted current age from min-fresh: %i", current_age)
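
        # Worked example: a response dated 120 seconds ago with
        # "Cache-Control: max-age=3600" and a request carrying "min-fresh=60"
        # yields freshness_lifetime 3600 and current_age 180, so the cached
        # response is considered fresh and returned below.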

        # Return entry if it is fresh enough
        if freshness_lifetime > current_age:
            logger.debug('The response is "fresh", returning cached response')
            logger.debug("%i > %i", freshness_lifetime, current_age)
            return resp

        # we're not fresh. If we don't have an Etag, clear it out
        if "etag" not in headers:
            logger.debug('The cached response is "stale" with no etag, purging')
            self.cache.delete(cache_url)

        # return the original handler
        return False

    def conditional_headers(self, request: PreparedRequest) -> dict[str, str]:
        resp = self._load_from_cache(request)
        new_headers = {}

        if resp:
            headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(resp.headers)

            if "etag" in headers:
                new_headers["If-None-Match"] = headers["ETag"]

            if "last-modified" in headers:
                new_headers["If-Modified-Since"] = headers["Last-Modified"]

        return new_headers

    def _cache_set(
        self,
        cache_url: str,
        request: PreparedRequest,
        response: HTTPResponse,
        body: bytes | None = None,
        expires_time: int | None = None,
    ) -> None:
        """
        Store the data in the cache.
        """
        if isinstance(self.cache, SeparateBodyBaseCache):
            # We pass in the body separately; just put a placeholder empty
            # string in the metadata.
            self.cache.set(
                cache_url,
                self.serializer.dumps(request, response, b""),
                expires=expires_time,
            )
            # body is None can happen when, for example, we're only updating
            # headers, as is the case in update_cached_response().
            if body is not None:
                self.cache.set_body(cache_url, body)
        else:
            self.cache.set(
                cache_url,
                self.serializer.dumps(request, response, body),
                expires=expires_time,
            )

    def cache_response(
        self,
        request: PreparedRequest,
        response_or_ref: HTTPResponse | weakref.ReferenceType[HTTPResponse],
        body: bytes | None = None,
        status_codes: Collection[int] | None = None,
    ) -> None:
        """
        Algorithm for caching requests.

        This assumes a requests Response object.
        """
        if isinstance(response_or_ref, weakref.ReferenceType):
            response = response_or_ref()
            if response is None:
                # The weakref can be None only in case the user made a streamed
                # request, did not consume or close it, and holds no reference
                # to the requests.Response. In that case, we don't want to
                # cache the response.
                return
        else:
            response = response_or_ref

        # From httplib2: Don't cache 206's since we aren't going to
        #                handle byte range requests
        cacheable_status_codes = status_codes or self.cacheable_status_codes
        if response.status not in cacheable_status_codes:
            logger.debug(
                "Status code %s not in %s", response.status, cacheable_status_codes
            )
            return

        response_headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(
            response.headers
        )

        if "date" in response_headers:
            time_tuple = parsedate_tz(response_headers["date"])
            assert time_tuple is not None
            date = calendar.timegm(time_tuple[:6])
        else:
            date = 0

        # If we've been given a body, and our response has a valid
        # Content-Length header, then we can check whether the body we've
        # been given matches the expected size; if it doesn't, we'll just
        # skip trying to cache it.
        if (
            body is not None
            and "content-length" in response_headers
            and response_headers["content-length"].isdigit()
            and int(response_headers["content-length"]) != len(body)
        ):
            return

        cc_req = self.parse_cache_control(request.headers)
        cc = self.parse_cache_control(response_headers)

        assert request.url is not None
        cache_url = self.cache_url(request.url)
        logger.debug('Updating cache with response from "%s"', cache_url)

        # Delete it from the cache if we happen to have it stored there
        no_store = False
        if "no-store" in cc:
            no_store = True
            logger.debug('Response header has "no-store"')
        if "no-store" in cc_req:
            no_store = True
            logger.debug('Request header has "no-store"')
        if no_store and self.cache.get(cache_url):
            logger.debug('Purging existing cache entry to honor "no-store"')
            self.cache.delete(cache_url)
        if no_store:
            return

        # https://tools.ietf.org/html/rfc7234#section-4.1:
        # A Vary header field-value of "*" always fails to match.
        # Storing such a response leads to a deserialization warning
        # during cache lookup and is not allowed to ever be served,
        # so storing it can be avoided.
        if "*" in response_headers.get("vary", ""):
            logger.debug('Response header has "Vary: *"')
            return

        # If we've been given an etag, then keep the response
        if self.cache_etags and "etag" in response_headers:
            expires_time = 0
            if response_headers.get("expires"):
                expires = parsedate_tz(response_headers["expires"])
                if expires is not None:
                    expires_time = calendar.timegm(expires[:6]) - date

            expires_time = max(expires_time, 14 * 86400)

            logger.debug(f"etag object cached for {expires_time} seconds")
            logger.debug("Caching due to etag")
            self._cache_set(cache_url, request, response, body, expires_time)

        # Add to the cache any permanent redirects. We do this before looking
        # at the Date headers.
        elif int(response.status) in PERMANENT_REDIRECT_STATUSES:
            logger.debug("Caching permanent redirect")
            self._cache_set(cache_url, request, response, b"")

        # Add to the cache if the response headers demand it. If there
        # is no date header then we can't do anything about expiring
        # the cache.
        elif "date" in response_headers:
            time_tuple = parsedate_tz(response_headers["date"])
            assert time_tuple is not None
            date = calendar.timegm(time_tuple[:6])
            # cache when there is a max-age > 0
            max_age = cc.get("max-age")
            if max_age is not None and max_age > 0:
                logger.debug("Caching b/c date exists and max-age > 0")
                expires_time = max_age
                self._cache_set(
                    cache_url,
                    request,
                    response,
                    body,
                    expires_time,
                )

            # If the request can expire, it means we should cache it
            # in the meantime.
            elif "expires" in response_headers:
                if response_headers["expires"]:
                    expires = parsedate_tz(response_headers["expires"])
                    if expires is not None:
                        expires_time = calendar.timegm(expires[:6]) - date
                    else:
                        expires_time = None

                    logger.debug(
                        "Caching b/c of expires header. expires in {} seconds".format(
                            expires_time
                        )
                    )
                    self._cache_set(
                        cache_url,
                        request,
                        response,
                        body,
                        expires_time,
                    )

    def update_cached_response(
        self, request: PreparedRequest, response: HTTPResponse
    ) -> HTTPResponse:
        """On a 304 we will get a new set of headers that we want to
        update our cached value with, assuming we have one.

        This should only ever be called when we've sent an ETag and
        gotten a 304 as the response.
        """
        assert request.url is not None
        cache_url = self.cache_url(request.url)
        cached_response = self._load_from_cache(request)

        if not cached_response:
            # we didn't have a cached response
            return response

        # Lets update our headers with the headers from the new request:
        # http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1
        #
        # The server isn't supposed to send headers that would make
        # the cached body invalid. But... just in case, we'll be sure
        # to strip out ones we know that might be problematic due to
        # typical assumptions.
        excluded_headers = ["content-length"]

        cached_response.headers.update(
            {
                k: v
                for k, v in response.headers.items()
                if k.lower() not in excluded_headers
            }
        )

        # we want a 200 b/c we have content via the cache
        cached_response.status = 200

        # update our cache
        self._cache_set(cache_url, request, cached_response)

        return cached_response

.venv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/filewrapper.py
@@ -0,0 +1,121 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import mmap
from tempfile import NamedTemporaryFile
from typing import TYPE_CHECKING, Any, Callable

if TYPE_CHECKING:
    from collections.abc import Buffer
    from http.client import HTTPResponse


class CallbackFileWrapper:
    """
    Small wrapper around a fp object which will tee everything read into a
    buffer, and when that file is closed it will execute a callback with the
    contents of that buffer.

    All attributes are proxied to the underlying file object.

    This class uses members with a double underscore (__) leading prefix so as
    not to accidentally shadow an attribute.

    The data is stored in a temporary file until it is all available. As long
    as the temporary files directory is disk-based (sometimes it's a
    memory-backed ``tmpfs`` on Linux), data will be unloaded to disk if memory
    pressure is high. For small files the disk usually won't be used at all;
    it'll all be in the filesystem memory cache, so there should be no
    performance impact.
    """

    def __init__(
        self, fp: HTTPResponse, callback: Callable[[Buffer], None] | None
    ) -> None:
        self.__buf = NamedTemporaryFile("rb+", delete=True)
        self.__fp = fp
        self.__callback = callback

    def __getattr__(self, name: str) -> Any:
        # The vagaries of garbage collection mean that self.__fp is
        # not always set. Using __getattribute__ with the private
        # (name-mangled) name [0] lets us look up the attribute value and
        # raise an AttributeError when it doesn't exist. This stops
        # getattr from recursing infinitely in the case where
        # self.__fp hasn't been set.
        #
        # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers
        fp = self.__getattribute__("_CallbackFileWrapper__fp")
        return getattr(fp, name)

    def __is_fp_closed(self) -> bool:
        try:
            return self.__fp.fp is None

        except AttributeError:
            pass

        try:
            closed: bool = self.__fp.closed
            return closed

        except AttributeError:
            pass

        # We just don't cache it then.
        # TODO: Add some logging here...
        return False

    def _close(self) -> None:
        result: Buffer
        if self.__callback:
            if self.__buf.tell() == 0:
                # Empty file:
                result = b""
            else:
                # Return the data without actually loading it into memory,
                # relying on Python's buffer API and mmap(). mmap() just gives
                # a view directly into the filesystem's memory cache, so it
                # doesn't result in duplicate memory use.
                self.__buf.seek(0, 0)
                result = memoryview(
                    mmap.mmap(self.__buf.fileno(), 0, access=mmap.ACCESS_READ)
                )
            self.__callback(result)

        # We assign this to None here, because otherwise we can get into
        # really tricky problems where the CPython interpreter deadlocks
        # because the callback is holding a reference to something which
        # has a __del__ method. Setting this to None breaks the cycle
        # and allows the garbage collector to do its thing normally.
        self.__callback = None

        # Closing the temporary file releases memory and frees disk space.
        # Important when caching big files.
        self.__buf.close()

    def read(self, amt: int | None = None) -> bytes:
        data: bytes = self.__fp.read(amt)
        if data:
            # We may be dealing with b'', a sign that things are over:
            # it's passed e.g. after we've already closed self.__buf.
            self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data

    def _safe_read(self, amt: int) -> bytes:
        data: bytes = self.__fp._safe_read(amt)  # type: ignore[attr-defined]
        if amt == 2 and data == b"\r\n":
            # urllib executes this read to toss the CRLF at the end
            # of the chunk.
            return data

        self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data
|
||||
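Editor's note: a minimal usage sketch, not part of the vendored file. The wrapper's contract is simple: wrap a raw urllib3 response, drain it, and the callback receives the complete body once the stream is exhausted. The callback name and URL below are hypothetical.

from pip._vendor import requests

def cache_body(body):  # hypothetical callback
    print(f"caching {len(body)} bytes")

resp = requests.get("https://example.com/", stream=True)
wrapped = CallbackFileWrapper(resp.raw, cache_body)
while wrapped.read(8192):  # read() returns b"" at EOF, after the callback fires
    pass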
@@ -0,0 +1,157 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import calendar
import time
from datetime import datetime, timedelta, timezone
from email.utils import formatdate, parsedate, parsedate_tz
from typing import TYPE_CHECKING, Any, Mapping

if TYPE_CHECKING:
    from pip._vendor.urllib3 import HTTPResponse

TIME_FMT = "%a, %d %b %Y %H:%M:%S GMT"


def expire_after(delta: timedelta, date: datetime | None = None) -> datetime:
    date = date or datetime.now(timezone.utc)
    return date + delta


def datetime_to_header(dt: datetime) -> str:
    return formatdate(calendar.timegm(dt.timetuple()))


class BaseHeuristic:
    def warning(self, response: HTTPResponse) -> str | None:
        """
        Return a valid 1xx warning header value describing the cache
        adjustments.

        The response is provided to allow warnings like 113
        http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need
        to explicitly say the response is over 24 hours old.
        """
        return '110 - "Response is Stale"'

    def update_headers(self, response: HTTPResponse) -> dict[str, str]:
        """Update the response headers with any new headers.

        NOTE: This SHOULD always include some Warning header to
        signify that the response was cached by the client, not
        by way of the provided headers.
        """
        return {}

    def apply(self, response: HTTPResponse) -> HTTPResponse:
        updated_headers = self.update_headers(response)

        if updated_headers:
            response.headers.update(updated_headers)
            warning_header_value = self.warning(response)
            if warning_header_value is not None:
                response.headers.update({"Warning": warning_header_value})

        return response


class OneDayCache(BaseHeuristic):
    """
    Cache the response by providing an expires 1 day in the
    future.
    """

    def update_headers(self, response: HTTPResponse) -> dict[str, str]:
        headers = {}

        if "expires" not in response.headers:
            date = parsedate(response.headers["date"])
            expires = expire_after(
                timedelta(days=1),
                date=datetime(*date[:6], tzinfo=timezone.utc),  # type: ignore[index,misc]
            )
            headers["expires"] = datetime_to_header(expires)
            headers["cache-control"] = "public"
        return headers


class ExpiresAfter(BaseHeuristic):
    """
    Cache **all** requests for a defined time period.
    """

    def __init__(self, **kw: Any) -> None:
        self.delta = timedelta(**kw)

    def update_headers(self, response: HTTPResponse) -> dict[str, str]:
        expires = expire_after(self.delta)
        return {"expires": datetime_to_header(expires), "cache-control": "public"}

    def warning(self, response: HTTPResponse) -> str | None:
        tmpl = "110 - Automatically cached for %s. Response might be stale"
        return tmpl % self.delta


class LastModified(BaseHeuristic):
    """
    If there is no Expires header already, fall back on Last-Modified
    using the heuristic from
    http://tools.ietf.org/html/rfc7234#section-4.2.2
    to calculate a reasonable value.

    Firefox also does something like this per
    https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ
    http://lxr.mozilla.org/mozilla-release/source/netwerk/protocol/http/nsHttpResponseHead.cpp#397
    Unlike Mozilla, we limit this to 24 hours.
    """

    cacheable_by_default_statuses = {
        200,
        203,
        204,
        206,
        300,
        301,
        404,
        405,
        410,
        414,
        501,
    }

    def update_headers(self, resp: HTTPResponse) -> dict[str, str]:
        headers: Mapping[str, str] = resp.headers

        if "expires" in headers:
            return {}

        if "cache-control" in headers and headers["cache-control"] != "public":
            return {}

        if resp.status not in self.cacheable_by_default_statuses:
            return {}

        if "date" not in headers or "last-modified" not in headers:
            return {}

        time_tuple = parsedate_tz(headers["date"])
        assert time_tuple is not None
        date = calendar.timegm(time_tuple[:6])
        last_modified = parsedate(headers["last-modified"])
        if last_modified is None:
            return {}

        now = time.time()
        current_age = max(0, now - date)
        delta = date - calendar.timegm(last_modified)
        freshness_lifetime = max(0, min(delta / 10, 24 * 3600))
        if freshness_lifetime <= current_age:
            return {}

        expires = date + freshness_lifetime
        return {"expires": time.strftime(TIME_FMT, time.gmtime(expires))}

    def warning(self, resp: HTTPResponse) -> str | None:
        return None
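Editor's note: an illustrative sketch, not part of the vendored file. A heuristic is applied directly to a response; `resp` is a hypothetical, already-obtained urllib3 HTTPResponse. ExpiresAfter stamps the response with Expires/Cache-Control plus a Warning header.

heuristic = ExpiresAfter(hours=1)
resp = heuristic.apply(resp)  # hypothetical response object
print(resp.headers["Expires"], resp.headers["Warning"])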
@@ -0,0 +1,146 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

import io
from typing import IO, TYPE_CHECKING, Any, Mapping, cast

from pip._vendor import msgpack
from pip._vendor.requests.structures import CaseInsensitiveDict
from pip._vendor.urllib3 import HTTPResponse

if TYPE_CHECKING:
    from pip._vendor.requests import PreparedRequest


class Serializer:
    serde_version = "4"

    def dumps(
        self,
        request: PreparedRequest,
        response: HTTPResponse,
        body: bytes | None = None,
    ) -> bytes:
        response_headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(
            response.headers
        )

        if body is None:
            # When a body isn't passed in, we'll read the response. We
            # also update the response with a new file handler to be
            # sure it acts as though it was never read.
            body = response.read(decode_content=False)
            response._fp = io.BytesIO(body)  # type: ignore[assignment]
            response.length_remaining = len(body)

        data = {
            "response": {
                "body": body,  # Empty bytestring if body is stored separately
                "headers": {str(k): str(v) for k, v in response.headers.items()},
                "status": response.status,
                "version": response.version,
                "reason": str(response.reason),
                "decode_content": response.decode_content,
            }
        }

        # Construct our vary headers
        data["vary"] = {}
        if "vary" in response_headers:
            varied_headers = response_headers["vary"].split(",")
            for header in varied_headers:
                header = str(header).strip()
                header_value = request.headers.get(header, None)
                if header_value is not None:
                    header_value = str(header_value)
                data["vary"][header] = header_value

        return b",".join([f"cc={self.serde_version}".encode(), self.serialize(data)])

    def serialize(self, data: dict[str, Any]) -> bytes:
        return cast(bytes, msgpack.dumps(data, use_bin_type=True))

    def loads(
        self,
        request: PreparedRequest,
        data: bytes,
        body_file: IO[bytes] | None = None,
    ) -> HTTPResponse | None:
        # Short circuit if we've been given an empty set of data
        if not data:
            return None

        # Previous versions of this library supported other serialization
        # formats, but these have all been removed.
        if not data.startswith(f"cc={self.serde_version},".encode()):
            return None

        data = data[5:]
        return self._loads_v4(request, data, body_file)

    def prepare_response(
        self,
        request: PreparedRequest,
        cached: Mapping[str, Any],
        body_file: IO[bytes] | None = None,
    ) -> HTTPResponse | None:
        """Verify our vary headers match and construct a real urllib3
        HTTPResponse object.
        """
        # Special case the '*' Vary value as it means we cannot actually
        # determine if the cached response is suitable for this request.
        # This case is also handled in the controller code when creating
        # a cache entry, but is left here for backwards compatibility.
        if "*" in cached.get("vary", {}):
            return None

        # Ensure that the Vary headers for the cached response match our
        # request
        for header, value in cached.get("vary", {}).items():
            if request.headers.get(header, None) != value:
                return None

        body_raw = cached["response"].pop("body")

        headers: CaseInsensitiveDict[str] = CaseInsensitiveDict(
            data=cached["response"]["headers"]
        )
        if headers.get("transfer-encoding", "") == "chunked":
            headers.pop("transfer-encoding")

        cached["response"]["headers"] = headers

        try:
            body: IO[bytes]
            if body_file is None:
                body = io.BytesIO(body_raw)
            else:
                body = body_file
        except TypeError:
            # This can happen if cachecontrol serialized to v1 format (pickle)
            # using Python 2. A Python 2 str(byte string) will be unpickled as
            # a Python 3 str (unicode string), which will cause the above to
            # fail with:
            #
            #     TypeError: 'str' does not support the buffer interface
            body = io.BytesIO(body_raw.encode("utf8"))

        # Discard any `strict` parameter serialized by an older version of cachecontrol.
        cached["response"].pop("strict", None)

        return HTTPResponse(body=body, preload_content=False, **cached["response"])

    def _loads_v4(
        self,
        request: PreparedRequest,
        data: bytes,
        body_file: IO[bytes] | None = None,
    ) -> HTTPResponse | None:
        try:
            cached = msgpack.loads(data, raw=False)
        except ValueError:
            return None

        return self.prepare_response(request, cached, body_file)
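Editor's note: an illustrative round trip, not part of the vendored file. `req` and `resp` are hypothetical, already-obtained PreparedRequest/HTTPResponse objects.

serializer = Serializer()
blob = serializer.dumps(req, resp)      # b"cc=4," prefix + msgpack payload
restored = serializer.loads(req, blob)  # HTTPResponse, or None on a format mismatch
assert restored is not None and restored.status == resp.status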
@@ -0,0 +1,43 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0
from __future__ import annotations

from typing import TYPE_CHECKING, Collection

from pip._vendor.cachecontrol.adapter import CacheControlAdapter
from pip._vendor.cachecontrol.cache import DictCache

if TYPE_CHECKING:
    from pip._vendor import requests

    from pip._vendor.cachecontrol.cache import BaseCache
    from pip._vendor.cachecontrol.controller import CacheController
    from pip._vendor.cachecontrol.heuristics import BaseHeuristic
    from pip._vendor.cachecontrol.serialize import Serializer


def CacheControl(
    sess: requests.Session,
    cache: BaseCache | None = None,
    cache_etags: bool = True,
    serializer: Serializer | None = None,
    heuristic: BaseHeuristic | None = None,
    controller_class: type[CacheController] | None = None,
    adapter_class: type[CacheControlAdapter] | None = None,
    cacheable_methods: Collection[str] | None = None,
) -> requests.Session:
    cache = DictCache() if cache is None else cache
    adapter_class = adapter_class or CacheControlAdapter
    adapter = adapter_class(
        cache,
        cache_etags=cache_etags,
        serializer=serializer,
        heuristic=heuristic,
        controller_class=controller_class,
        cacheable_methods=cacheable_methods,
    )
    sess.mount("http://", adapter)
    sess.mount("https://", adapter)

    return sess
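Editor's note: the typical entry point, sketched for illustration (not part of the vendored file): wrap an ordinary requests.Session so that both HTTP schemes go through the caching adapter. The URL is hypothetical.

from pip._vendor import requests
from pip._vendor.cachecontrol import CacheControl

sess = CacheControl(requests.Session())
sess.get("https://example.com/")  # a repeat call may be served from the DictCache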
@@ -0,0 +1,20 @@
This package contains a modified version of ca-bundle.crt:

ca-bundle.crt -- Bundle of CA Root Certificates

This is a bundle of X.509 certificates of public Certificate Authorities
(CA). These were automatically extracted from Mozilla's root certificates
file (certdata.txt). This file can be found in the mozilla source tree:
https://hg.mozilla.org/mozilla-central/file/tip/security/nss/lib/ckfw/builtins/certdata.txt
It contains the certificates in PEM format and therefore
can be directly used with curl / libcurl / php_curl, or with
an Apache+mod_ssl webserver for SSL client authentication.
Just configure this file as the SSLCACertificateFile.

***** BEGIN LICENSE BLOCK *****
This Source Code Form is subject to the terms of the Mozilla Public License,
v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain
one at http://mozilla.org/MPL/2.0/.

***** END LICENSE BLOCK *****
@(#) $RCSfile: certdata.txt,v $ $Revision: 1.80 $ $Date: 2011/11/03 15:11:58 $
@@ -0,0 +1,4 @@
from .core import contents, where

__all__ = ["contents", "where"]
__version__ = "2026.01.04"
@@ -0,0 +1,12 @@
import argparse

from pip._vendor.certifi import contents, where

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--contents", action="store_true")
args = parser.parse_args()

if args.contents:
    print(contents())
else:
    print(where())
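Editor's note: an illustrative use of the module API, not part of the vendored file. The bundle path returned by where() can seed a standard-library SSL context.

import ssl
from pip._vendor.certifi import where

ctx = ssl.create_default_context(cafile=where())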
4468
.venv/lib/python3.9/site-packages/pip/_vendor/certifi/cacert.pem
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,83 @@
"""
certifi.py
~~~~~~~~~~

This module returns the installation location of cacert.pem or its contents.
"""
import atexit
import sys


def exit_cacert_ctx() -> None:
    _CACERT_CTX.__exit__(None, None, None)  # type: ignore[union-attr]


if sys.version_info >= (3, 11):

    from importlib.resources import as_file, files

    _CACERT_CTX = None
    _CACERT_PATH = None

    def where() -> str:
        # This is slightly terrible, but we want to delay extracting the file
        # in cases where we're inside of a zipimport situation until someone
        # actually calls where(), but we don't want to re-extract the file
        # on every call of where(), so we'll do it once then store it in a
        # global variable.
        global _CACERT_CTX
        global _CACERT_PATH
        if _CACERT_PATH is None:
            # This is slightly janky: the importlib.resources API wants you to
            # manage the cleanup of this file, so it doesn't actually return a
            # path, it returns a context manager that will give you the path
            # when you enter it and will do any cleanup when you leave it. In
            # the common case of not needing a temporary file, it will just
            # return the file system location and the __exit__() is a no-op.
            #
            # We also have to hold onto the actual context manager, because
            # it will do the cleanup whenever it gets garbage collected, so
            # we will also store that at the global level as well.
            _CACERT_CTX = as_file(files("pip._vendor.certifi").joinpath("cacert.pem"))
            _CACERT_PATH = str(_CACERT_CTX.__enter__())
            atexit.register(exit_cacert_ctx)

        return _CACERT_PATH

    def contents() -> str:
        return files("pip._vendor.certifi").joinpath("cacert.pem").read_text(encoding="ascii")

else:

    from importlib.resources import path as get_path, read_text

    _CACERT_CTX = None
    _CACERT_PATH = None

    def where() -> str:
        # This is slightly terrible, but we want to delay extracting the
        # file in cases where we're inside of a zipimport situation until
        # someone actually calls where(), but we don't want to re-extract
        # the file on every call of where(), so we'll do it once then store
        # it in a global variable.
        global _CACERT_CTX
        global _CACERT_PATH
        if _CACERT_PATH is None:
            # This is slightly janky: the importlib.resources API wants you
            # to manage the cleanup of this file, so it doesn't actually
            # return a path, it returns a context manager that will give
            # you the path when you enter it and will do any cleanup when
            # you leave it. In the common case of not needing a temporary
            # file, it will just return the file system location and the
            # __exit__() is a no-op.
            #
            # We also have to hold onto the actual context manager, because
            # it will do the cleanup whenever it gets garbage collected, so
            # we will also store that at the global level as well.
            _CACERT_CTX = get_path("pip._vendor.certifi", "cacert.pem")
            _CACERT_PATH = str(_CACERT_CTX.__enter__())
            atexit.register(exit_cacert_ctx)

        return _CACERT_PATH

    def contents() -> str:
        return read_text("pip._vendor.certifi", "cacert.pem", encoding="ascii")
@@ -0,0 +1,9 @@
MIT License

Copyright (c) 2024-present Stephen Rosen <sirosen0@gmail.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,13 @@
from ._implementation import (
    CyclicDependencyError,
    DependencyGroupInclude,
    DependencyGroupResolver,
    resolve,
)

__all__ = (
    "CyclicDependencyError",
    "DependencyGroupInclude",
    "DependencyGroupResolver",
    "resolve",
)
@@ -0,0 +1,65 @@
import argparse
import sys

from ._implementation import resolve
from ._toml_compat import tomllib


def main() -> None:
    if tomllib is None:
        print(
            "Usage error: dependency-groups CLI requires tomli or Python 3.11+",
            file=sys.stderr,
        )
        raise SystemExit(2)

    parser = argparse.ArgumentParser(
        description=(
            "A dependency-groups CLI. Prints out a resolved group, newline-delimited."
        )
    )
    parser.add_argument(
        "GROUP_NAME", nargs="*", help="The dependency group(s) to resolve."
    )
    parser.add_argument(
        "-f",
        "--pyproject-file",
        default="pyproject.toml",
        help="The pyproject.toml file. Defaults to trying in the current directory.",
    )
    parser.add_argument(
        "-o",
        "--output",
        help="An output file. Defaults to stdout.",
    )
    parser.add_argument(
        "-l",
        "--list",
        action="store_true",
        help="List the available dependency groups",
    )
    args = parser.parse_args()

    with open(args.pyproject_file, "rb") as fp:
        pyproject = tomllib.load(fp)

    dependency_groups_raw = pyproject.get("dependency-groups", {})

    if args.list:
        print(*dependency_groups_raw.keys())
        return
    if not args.GROUP_NAME:
        print("A GROUP_NAME is required", file=sys.stderr)
        raise SystemExit(3)

    content = "\n".join(resolve(dependency_groups_raw, *args.GROUP_NAME))

    if args.output is None or args.output == "-":
        print(content)
    else:
        with open(args.output, "w", encoding="utf-8") as fp:
            print(content, file=fp)


if __name__ == "__main__":
    main()
@@ -0,0 +1,209 @@
from __future__ import annotations

import dataclasses
import re
from collections.abc import Mapping

from pip._vendor.packaging.requirements import Requirement


def _normalize_name(name: str) -> str:
    return re.sub(r"[-_.]+", "-", name).lower()


def _normalize_group_names(
    dependency_groups: Mapping[str, str | Mapping[str, str]],
) -> Mapping[str, str | Mapping[str, str]]:
    original_names: dict[str, list[str]] = {}
    normalized_groups = {}

    for group_name, value in dependency_groups.items():
        normed_group_name = _normalize_name(group_name)
        original_names.setdefault(normed_group_name, []).append(group_name)
        normalized_groups[normed_group_name] = value

    errors = []
    for normed_name, names in original_names.items():
        if len(names) > 1:
            errors.append(f"{normed_name} ({', '.join(names)})")
    if errors:
        raise ValueError(f"Duplicate dependency group names: {', '.join(errors)}")

    return normalized_groups


@dataclasses.dataclass
class DependencyGroupInclude:
    include_group: str


class CyclicDependencyError(ValueError):
    """
    An error representing the detection of a cycle.
    """

    def __init__(self, requested_group: str, group: str, include_group: str) -> None:
        self.requested_group = requested_group
        self.group = group
        self.include_group = include_group

        if include_group == group:
            reason = f"{group} includes itself"
        else:
            reason = f"{include_group} -> {group}, {group} -> {include_group}"
        super().__init__(
            "Cyclic dependency group include while resolving "
            f"{requested_group}: {reason}"
        )


class DependencyGroupResolver:
    """
    A resolver for Dependency Group data.

    This class handles caching, name normalization, cycle detection, and other
    parsing requirements. There are only two public methods for exploring the data:
    ``lookup()`` and ``resolve()``.

    :param dependency_groups: A mapping, as provided via pyproject
        ``[dependency-groups]``.
    """

    def __init__(
        self,
        dependency_groups: Mapping[str, str | Mapping[str, str]],
    ) -> None:
        if not isinstance(dependency_groups, Mapping):
            raise TypeError("Dependency Groups table is not a mapping")
        self.dependency_groups = _normalize_group_names(dependency_groups)
        # a map of group names to parsed data
        self._parsed_groups: dict[
            str, tuple[Requirement | DependencyGroupInclude, ...]
        ] = {}
        # a map of group names to their ancestors, used for cycle detection
        self._include_graph_ancestors: dict[str, tuple[str, ...]] = {}
        # a cache of completed resolutions to Requirement lists
        self._resolve_cache: dict[str, tuple[Requirement, ...]] = {}

    def lookup(self, group: str) -> tuple[Requirement | DependencyGroupInclude, ...]:
        """
        Look up a group name, returning the parsed dependency data for that group.
        This will not resolve includes.

        :param group: the name of the group to look up

        :raises ValueError: if the data does not appear to be valid dependency group
            data
        :raises TypeError: if the data is not a string
        :raises LookupError: if group name is absent
        :raises packaging.requirements.InvalidRequirement: if a specifier is not valid
        """
        if not isinstance(group, str):
            raise TypeError("Dependency group name is not a str")
        group = _normalize_name(group)
        return self._parse_group(group)

    def resolve(self, group: str) -> tuple[Requirement, ...]:
        """
        Resolve a dependency group to a list of requirements.

        :param group: the name of the group to resolve

        :raises TypeError: if the inputs appear to be the wrong types
        :raises ValueError: if the data does not appear to be valid dependency group
            data
        :raises LookupError: if group name is absent
        :raises packaging.requirements.InvalidRequirement: if a specifier is not valid
        """
        if not isinstance(group, str):
            raise TypeError("Dependency group name is not a str")
        group = _normalize_name(group)
        return self._resolve(group, group)

    def _parse_group(
        self, group: str
    ) -> tuple[Requirement | DependencyGroupInclude, ...]:
        # short circuit -- never do the work twice
        if group in self._parsed_groups:
            return self._parsed_groups[group]

        if group not in self.dependency_groups:
            raise LookupError(f"Dependency group '{group}' not found")

        raw_group = self.dependency_groups[group]
        if not isinstance(raw_group, list):
            raise TypeError(f"Dependency group '{group}' is not a list")

        elements: list[Requirement | DependencyGroupInclude] = []
        for item in raw_group:
            if isinstance(item, str):
                # packaging.requirements.Requirement parsing ensures that this is a
                # valid PEP 508 Dependency Specifier
                # raises InvalidRequirement on failure
                elements.append(Requirement(item))
            elif isinstance(item, dict):
                if tuple(item.keys()) != ("include-group",):
                    raise ValueError(f"Invalid dependency group item: {item}")

                include_group = next(iter(item.values()))
                elements.append(DependencyGroupInclude(include_group=include_group))
            else:
                raise ValueError(f"Invalid dependency group item: {item}")

        self._parsed_groups[group] = tuple(elements)
        return self._parsed_groups[group]

    def _resolve(self, group: str, requested_group: str) -> tuple[Requirement, ...]:
        """
        This is a helper for cached resolution to strings.

        :param group: The name of the group to resolve.
        :param requested_group: The group which was used in the original, user-facing
            request.
        """
        if group in self._resolve_cache:
            return self._resolve_cache[group]

        parsed = self._parse_group(group)

        resolved_group = []
        for item in parsed:
            if isinstance(item, Requirement):
                resolved_group.append(item)
            elif isinstance(item, DependencyGroupInclude):
                include_group = _normalize_name(item.include_group)
                if include_group in self._include_graph_ancestors.get(group, ()):
                    raise CyclicDependencyError(
                        requested_group, group, item.include_group
                    )
                self._include_graph_ancestors[include_group] = (
                    *self._include_graph_ancestors.get(group, ()),
                    group,
                )
                resolved_group.extend(self._resolve(include_group, requested_group))
            else:  # unreachable
                raise NotImplementedError(
                    f"Invalid dependency group item after parse: {item}"
                )

        self._resolve_cache[group] = tuple(resolved_group)
        return self._resolve_cache[group]


def resolve(
    dependency_groups: Mapping[str, str | Mapping[str, str]], /, *groups: str
) -> tuple[str, ...]:
    """
    Resolve a dependency group to a tuple of requirements, as strings.

    :param dependency_groups: the parsed contents of the ``[dependency-groups]`` table
        from ``pyproject.toml``
    :param groups: the name of the group(s) to resolve

    :raises TypeError: if the inputs appear to be the wrong types
    :raises ValueError: if the data does not appear to be valid dependency group data
    :raises LookupError: if group name is absent
    :raises packaging.requirements.InvalidRequirement: if a specifier is not valid
    """
    resolver = DependencyGroupResolver(dependency_groups)
    return tuple(str(r) for group in groups for r in resolver.resolve(group))
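Editor's note: a small worked example, not part of the vendored file, of the module-level resolve() helper on a hypothetical [dependency-groups] table with an include.

groups = {
    "test": ["pytest", {"include-group": "runtime"}],
    "runtime": ["requests>=2.0"],
}
print(resolve(groups, "test"))  # ('pytest', 'requests>=2.0')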
@@ -0,0 +1,59 @@
from __future__ import annotations

import argparse
import sys

from ._implementation import DependencyGroupResolver
from ._toml_compat import tomllib


def main(*, argv: list[str] | None = None) -> None:
    if tomllib is None:
        print(
            "Usage error: dependency-groups CLI requires tomli or Python 3.11+",
            file=sys.stderr,
        )
        raise SystemExit(2)

    parser = argparse.ArgumentParser(
        description=(
            "Lint Dependency Groups for validity. "
            "This will eagerly load and check all of your Dependency Groups."
        )
    )
    parser.add_argument(
        "-f",
        "--pyproject-file",
        default="pyproject.toml",
        help="The pyproject.toml file. Defaults to trying in the current directory.",
    )
    args = parser.parse_args(argv if argv is not None else sys.argv[1:])

    with open(args.pyproject_file, "rb") as fp:
        pyproject = tomllib.load(fp)
    dependency_groups_raw = pyproject.get("dependency-groups", {})

    errors: list[str] = []
    try:
        resolver = DependencyGroupResolver(dependency_groups_raw)
    except (ValueError, TypeError) as e:
        errors.append(f"{type(e).__name__}: {e}")
    else:
        for groupname in resolver.dependency_groups:
            try:
                resolver.resolve(groupname)
            except (LookupError, ValueError, TypeError) as e:
                errors.append(f"{type(e).__name__}: {e}")

    if errors:
        print("errors encountered while examining dependency groups:")
        for msg in errors:
            print(f"  {msg}")
        sys.exit(1)
    else:
        print("ok")
        sys.exit(0)


if __name__ == "__main__":
    main()
@@ -0,0 +1,62 @@
from __future__ import annotations

import argparse
import subprocess
import sys

from ._implementation import DependencyGroupResolver
from ._toml_compat import tomllib


def _invoke_pip(deps: list[str]) -> None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", *deps])


def main(*, argv: list[str] | None = None) -> None:
    if tomllib is None:
        print(
            "Usage error: dependency-groups CLI requires tomli or Python 3.11+",
            file=sys.stderr,
        )
        raise SystemExit(2)

    parser = argparse.ArgumentParser(description="Install Dependency Groups.")
    parser.add_argument(
        "DEPENDENCY_GROUP", nargs="+", help="The dependency groups to install."
    )
    parser.add_argument(
        "-f",
        "--pyproject-file",
        default="pyproject.toml",
        help="The pyproject.toml file. Defaults to trying in the current directory.",
    )
    args = parser.parse_args(argv if argv is not None else sys.argv[1:])

    with open(args.pyproject_file, "rb") as fp:
        pyproject = tomllib.load(fp)
    dependency_groups_raw = pyproject.get("dependency-groups", {})

    errors: list[str] = []
    resolved: list[str] = []
    try:
        resolver = DependencyGroupResolver(dependency_groups_raw)
    except (ValueError, TypeError) as e:
        errors.append(f"{type(e).__name__}: {e}")
    else:
        for groupname in args.DEPENDENCY_GROUP:
            try:
                resolved.extend(str(r) for r in resolver.resolve(groupname))
            except (LookupError, ValueError, TypeError) as e:
                errors.append(f"{type(e).__name__}: {e}")

    if errors:
        print("errors encountered while examining dependency groups:")
        for msg in errors:
            print(f"  {msg}")
        sys.exit(1)

    _invoke_pip(resolved)


if __name__ == "__main__":
    main()
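Editor's note: illustrative only, not part of the vendored file. The installer entry point accepts an argv override, so it can be driven programmatically; the group name is hypothetical.

main(argv=["test", "-f", "pyproject.toml"])  # resolves "test", then shells out to pip install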
@@ -0,0 +1,9 @@
try:
    import tomllib
except ImportError:
    try:
        from pip._vendor import tomli as tomllib  # type: ignore[no-redef, unused-ignore]
    except ModuleNotFoundError:  # pragma: no cover
        tomllib = None  # type: ignore[assignment, unused-ignore]

__all__ = ("tomllib",)
@@ -0,0 +1,284 @@
A. HISTORY OF THE SOFTWARE
==========================

Python was created in the early 1990s by Guido van Rossum at Stichting
Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
as a successor of a language called ABC.  Guido remains Python's
principal author, although it includes many contributions from others.

In 1995, Guido continued his work on Python at the Corporation for
National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
in Reston, Virginia where he released several versions of the
software.

In May 2000, Guido and the Python core development team moved to
BeOpen.com to form the BeOpen PythonLabs team.  In October of the same
year, the PythonLabs team moved to Digital Creations (now Zope
Corporation, see http://www.zope.com).  In 2001, the Python Software
Foundation (PSF, see http://www.python.org/psf/) was formed, a
non-profit organization created specifically to own Python-related
Intellectual Property.  Zope Corporation is a sponsoring member of
the PSF.

All Python releases are Open Source (see http://www.opensource.org for
the Open Source Definition).  Historically, most, but not all, Python
releases have also been GPL-compatible; the table below summarizes
the various releases.

    Release         Derived     Year        Owner       GPL-
                    from                                compatible? (1)

    0.9.0 thru 1.2              1991-1995   CWI         yes
    1.3 thru 1.5.2  1.2         1995-1999   CNRI        yes
    1.6             1.5.2       2000        CNRI        no
    2.0             1.6         2000        BeOpen.com  no
    1.6.1           1.6         2001        CNRI        yes (2)
    2.1             2.0+1.6.1   2001        PSF         no
    2.0.1           2.0+1.6.1   2001        PSF         yes
    2.1.1           2.1+2.0.1   2001        PSF         yes
    2.2             2.1.1       2001        PSF         yes
    2.1.2           2.1.1       2002        PSF         yes
    2.1.3           2.1.2       2002        PSF         yes
    2.2.1           2.2         2002        PSF         yes
    2.2.2           2.2.1       2002        PSF         yes
    2.2.3           2.2.2       2003        PSF         yes
    2.3             2.2.2       2002-2003   PSF         yes
    2.3.1           2.3         2002-2003   PSF         yes
    2.3.2           2.3.1       2002-2003   PSF         yes
    2.3.3           2.3.2       2002-2003   PSF         yes
    2.3.4           2.3.3       2004        PSF         yes
    2.3.5           2.3.4       2005        PSF         yes
    2.4             2.3         2004        PSF         yes
    2.4.1           2.4         2005        PSF         yes
    2.4.2           2.4.1       2005        PSF         yes
    2.4.3           2.4.2       2006        PSF         yes
    2.4.4           2.4.3       2006        PSF         yes
    2.5             2.4         2006        PSF         yes
    2.5.1           2.5         2007        PSF         yes
    2.5.2           2.5.1       2008        PSF         yes
    2.5.3           2.5.2       2008        PSF         yes
    2.6             2.5         2008        PSF         yes
    2.6.1           2.6         2008        PSF         yes
    2.6.2           2.6.1       2009        PSF         yes
    2.6.3           2.6.2       2009        PSF         yes
    2.6.4           2.6.3       2009        PSF         yes
    2.6.5           2.6.4       2010        PSF         yes
    3.0             2.6         2008        PSF         yes
    3.0.1           3.0         2009        PSF         yes
    3.1             3.0.1       2009        PSF         yes
    3.1.1           3.1         2009        PSF         yes
    3.1.2           3.1         2010        PSF         yes
    3.2             3.1         2010        PSF         yes

Footnotes:

(1) GPL-compatible doesn't mean that we're distributing Python under
    the GPL.  All Python licenses, unlike the GPL, let you distribute
    a modified version without making your changes open source.  The
    GPL-compatible licenses make it possible to combine Python with
    other software that is released under the GPL; the others don't.

(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
    because its license has a choice of law clause.  According to
    CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
    is "not incompatible" with the GPL.

Thanks to the many outside volunteers who have worked under Guido's
direction to make these releases possible.


B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================

PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------

1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.

2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010
Python Software Foundation; All Rights Reserved" are retained in Python alone or
in any derivative version prepared by Licensee.

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.

4. PSF is making Python available to Licensee on an "AS IS"
basis.  PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee.  This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.

8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.


BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------

BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1

1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
Individual or Organization ("Licensee") accessing and otherwise using
this software in source or binary form and its associated
documentation ("the Software").

2. Subject to the terms and conditions of this BeOpen Python License
Agreement, BeOpen hereby grants Licensee a non-exclusive,
royalty-free, world-wide license to reproduce, analyze, test, perform
and/or display publicly, prepare derivative works, distribute, and
otherwise use the Software alone or in any derivative version,
provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.

3. BeOpen is making the Software available to Licensee on an "AS IS"
basis.  BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

5. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

6. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of California, excluding conflict of
law provisions.  Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between BeOpen and Licensee.  This License Agreement does not grant
permission to use BeOpen trademarks or trade names in a trademark
sense to endorse or promote products or services of Licensee, or any
third party.  As an exception, the "BeOpen Python" logos available at
http://www.pythonlabs.com/logos.html may be used according to the
permissions granted on that web page.

7. By copying, installing or otherwise using the software, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.


CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------

1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6.1 software in
source or binary form and its associated documentation.

2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6.1
alone or in any derivative version, provided, however, that CNRI's
License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
1995-2001 Corporation for National Research Initiatives; All Rights
Reserved" are retained in Python 1.6.1 alone or in any derivative
version prepared by Licensee.  Alternately, in lieu of CNRI's License
Agreement, Licensee may substitute the following text (omitting the
quotes): "Python 1.6.1 is made available subject to the terms and
conditions in CNRI's License Agreement.  This Agreement together with
Python 1.6.1 may be located on the Internet using the following
unique, persistent identifier (known as a handle): 1895.22/1013.  This
Agreement may also be obtained from a proxy server on the Internet
using the following URL: http://hdl.handle.net/1895.22/1013".

3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6.1 or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python 1.6.1.

4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
basis.  CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED.  BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.

5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.

7. This License Agreement shall be governed by the federal
intellectual property law of the United States, including without
limitation the federal copyright law, and, to the extent such
U.S. federal law does not apply, by the law of the Commonwealth of
Virginia, excluding Virginia's conflict of law provisions.
Notwithstanding the foregoing, with regard to derivative works based
on Python 1.6.1 that incorporate non-separable material that was
previously distributed under the GNU General Public License (GPL), the
law of the Commonwealth of Virginia shall govern this License
Agreement only as to issues arising under or with respect to
Paragraphs 4, 5, and 7 of this License Agreement.  Nothing in this
License Agreement shall be deemed to create any relationship of
agency, partnership, or joint venture between CNRI and Licensee.  This
License Agreement does not grant permission to use CNRI trademarks or
trade name in a trademark sense to endorse or promote products or
services of Licensee, or any third party.

8. By clicking on the "ACCEPT" button where indicated, or by copying,
installing or otherwise using Python 1.6.1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.

        ACCEPT


CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------

Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands.  All rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.

STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2024 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
import logging

__version__ = '0.4.0'


class DistlibException(Exception):
    pass


try:
    from logging import NullHandler
except ImportError:  # pragma: no cover

    class NullHandler(logging.Handler):

        def handle(self, record):
            pass

        def emit(self, record):
            pass

        def createLock(self):
            self.lock = None


logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
1137
.venv/lib/python3.9/site-packages/pip/_vendor/distlib/compat.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,358 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
#
|
||||
# Copyright (C) 2013-2017 Vinay Sajip.
|
||||
# Licensed to the Python Software Foundation under a contributor agreement.
|
||||
# See LICENSE.txt and CONTRIBUTORS.txt.
|
||||
#
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import bisect
|
||||
import io
|
||||
import logging
|
||||
import os
|
||||
import pkgutil
|
||||
import sys
|
||||
import types
|
||||
import zipimport
|
||||
|
||||
from . import DistlibException
|
||||
from .util import cached_property, get_cache_base, Cache
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
cache = None # created when needed
|
||||
|
||||
|
||||
class ResourceCache(Cache):
|
||||
def __init__(self, base=None):
|
||||
if base is None:
|
||||
# Use native string to avoid issues on 2.x: see Python #20140.
|
||||
base = os.path.join(get_cache_base(), str('resource-cache'))
|
||||
super(ResourceCache, self).__init__(base)
|
||||
|
||||
def is_stale(self, resource, path):
|
||||
"""
|
||||
Is the cache stale for the given resource?
|
||||
|
||||
:param resource: The :class:`Resource` being cached.
|
||||
:param path: The path of the resource in the cache.
|
||||
:return: True if the cache is stale.
|
||||
"""
|
||||
# Cache invalidation is a hard problem :-)
|
||||
return True
|
||||
|
||||
def get(self, resource):
|
||||
"""
|
||||
Get a resource into the cache,
|
||||
|
||||
:param resource: A :class:`Resource` instance.
|
||||
:return: The pathname of the resource in the cache.
|
||||
"""
|
||||
prefix, path = resource.finder.get_cache_info(resource)
|
||||
if prefix is None:
|
||||
result = path
|
||||
else:
|
||||
result = os.path.join(self.base, self.prefix_to_dir(prefix), path)
|
||||
dirname = os.path.dirname(result)
|
||||
if not os.path.isdir(dirname):
|
||||
os.makedirs(dirname)
|
||||
if not os.path.exists(result):
|
||||
stale = True
|
||||
else:
|
||||
stale = self.is_stale(resource, path)
|
||||
if stale:
|
||||
# write the bytes of the resource to the cache location
|
||||
with open(result, 'wb') as f:
|
||||
f.write(resource.bytes)
|
||||
return result
|
||||
|
||||
|
||||
class ResourceBase(object):
|
||||
def __init__(self, finder, name):
|
||||
self.finder = finder
|
||||
self.name = name
|
||||
|
||||
|
||||
class Resource(ResourceBase):
|
||||
"""
|
||||
A class representing an in-package resource, such as a data file. This is
|
||||
not normally instantiated by user code, but rather by a
|
||||
:class:`ResourceFinder` which manages the resource.
|
||||
"""
|
||||
is_container = False # Backwards compatibility
|
||||
|
||||
def as_stream(self):
|
||||
"""
|
||||
Get the resource as a stream.
|
||||
|
||||
This is not a property to make it obvious that it returns a new stream
|
||||
each time.
|
||||
"""
|
||||
return self.finder.get_stream(self)
|
||||
|
||||
@cached_property
|
||||
def file_path(self):
|
||||
global cache
|
||||
if cache is None:
|
||||
cache = ResourceCache()
|
||||
return cache.get(self)
|
||||
|
||||
@cached_property
|
||||
def bytes(self):
|
||||
return self.finder.get_bytes(self)
|
||||
|
||||
@cached_property
|
||||
def size(self):
|
||||
return self.finder.get_size(self)
|
||||
|
||||
|
||||
class ResourceContainer(ResourceBase):
|
||||
is_container = True # Backwards compatibility
|
||||
|
||||
@cached_property
|
||||
def resources(self):
|
||||
return self.finder.get_resources(self)
|
||||
|
||||
|
||||
class ResourceFinder(object):
|
||||
"""
|
||||
Resource finder for file system resources.
|
||||
"""
|
||||
|
||||
if sys.platform.startswith('java'):
|
||||
skipped_extensions = ('.pyc', '.pyo', '.class')
|
||||
else:
|
||||
skipped_extensions = ('.pyc', '.pyo')
|
||||
|
||||
def __init__(self, module):
|
||||
self.module = module
|
||||
self.loader = getattr(module, '__loader__', None)
|
||||
self.base = os.path.dirname(getattr(module, '__file__', ''))
|
||||
|
||||
def _adjust_path(self, path):
|
||||
return os.path.realpath(path)
|
||||
|
||||
def _make_path(self, resource_name):
|
||||
# Issue #50: need to preserve type of path on Python 2.x
|
||||
# like os.path._get_sep
|
||||
if isinstance(resource_name, bytes): # should only happen on 2.x
            sep = b'/'
        else:
            sep = '/'
        parts = resource_name.split(sep)
        parts.insert(0, self.base)
        result = os.path.join(*parts)
        return self._adjust_path(result)

    def _find(self, path):
        return os.path.exists(path)

    def get_cache_info(self, resource):
        return None, resource.path

    def find(self, resource_name):
        path = self._make_path(resource_name)
        if not self._find(path):
            result = None
        else:
            if self._is_directory(path):
                result = ResourceContainer(self, resource_name)
            else:
                result = Resource(self, resource_name)
            result.path = path
        return result

    def get_stream(self, resource):
        return open(resource.path, 'rb')

    def get_bytes(self, resource):
        with open(resource.path, 'rb') as f:
            return f.read()

    def get_size(self, resource):
        return os.path.getsize(resource.path)

    def get_resources(self, resource):
        def allowed(f):
            return (f != '__pycache__' and not
                    f.endswith(self.skipped_extensions))
        return set([f for f in os.listdir(resource.path) if allowed(f)])

    def is_container(self, resource):
        return self._is_directory(resource.path)

    _is_directory = staticmethod(os.path.isdir)

    def iterator(self, resource_name):
        resource = self.find(resource_name)
        if resource is not None:
            todo = [resource]
            while todo:
                resource = todo.pop(0)
                yield resource
                if resource.is_container:
                    rname = resource.name
                    for name in resource.resources:
                        if not rname:
                            new_name = name
                        else:
                            new_name = '/'.join([rname, name])
                        child = self.find(new_name)
                        if child.is_container:
                            todo.append(child)
                        else:
                            yield child


class ZipResourceFinder(ResourceFinder):
    """
    Resource finder for resources in .zip files.
    """
    def __init__(self, module):
        super(ZipResourceFinder, self).__init__(module)
        archive = self.loader.archive
        self.prefix_len = 1 + len(archive)
        # PyPy doesn't have a _files attr on zipimporter, and you can't set one
        if hasattr(self.loader, '_files'):
            self._files = self.loader._files
        else:
            self._files = zipimport._zip_directory_cache[archive]
        self.index = sorted(self._files)

    def _adjust_path(self, path):
        return path

    def _find(self, path):
        path = path[self.prefix_len:]
        if path in self._files:
            result = True
        else:
            if path and path[-1] != os.sep:
                path = path + os.sep
            i = bisect.bisect(self.index, path)
            try:
                result = self.index[i].startswith(path)
            except IndexError:
                result = False
        if not result:
            logger.debug('_find failed: %r %r', path, self.loader.prefix)
        else:
            logger.debug('_find worked: %r %r', path, self.loader.prefix)
        return result

    def get_cache_info(self, resource):
        prefix = self.loader.archive
        path = resource.path[1 + len(prefix):]
        return prefix, path

    def get_bytes(self, resource):
        return self.loader.get_data(resource.path)

    def get_stream(self, resource):
        return io.BytesIO(self.get_bytes(resource))

    def get_size(self, resource):
        path = resource.path[self.prefix_len:]
        return self._files[path][3]

    def get_resources(self, resource):
        path = resource.path[self.prefix_len:]
        if path and path[-1] != os.sep:
            path += os.sep
        plen = len(path)
        result = set()
        i = bisect.bisect(self.index, path)
        while i < len(self.index):
            if not self.index[i].startswith(path):
                break
            s = self.index[i][plen:]
            result.add(s.split(os.sep, 1)[0])   # only immediate children
            i += 1
        return result

    def _is_directory(self, path):
        path = path[self.prefix_len:]
        if path and path[-1] != os.sep:
            path += os.sep
        i = bisect.bisect(self.index, path)
        try:
            result = self.index[i].startswith(path)
        except IndexError:
            result = False
        return result


_finder_registry = {
    type(None): ResourceFinder,
    zipimport.zipimporter: ZipResourceFinder
}

try:
    # In Python 3.6, _frozen_importlib -> _frozen_importlib_external
    try:
        import _frozen_importlib_external as _fi
    except ImportError:
        import _frozen_importlib as _fi
    _finder_registry[_fi.SourceFileLoader] = ResourceFinder
    _finder_registry[_fi.FileFinder] = ResourceFinder
    # See issue #146
    _finder_registry[_fi.SourcelessFileLoader] = ResourceFinder
    del _fi
except (ImportError, AttributeError):
    pass


def register_finder(loader, finder_maker):
    _finder_registry[type(loader)] = finder_maker


_finder_cache = {}


def finder(package):
    """
    Return a resource finder for a package.
    :param package: The name of the package.
    :return: A :class:`ResourceFinder` instance for the package.
    """
    if package in _finder_cache:
        result = _finder_cache[package]
    else:
        if package not in sys.modules:
            __import__(package)
        module = sys.modules[package]
        path = getattr(module, '__path__', None)
        if path is None:
            raise DistlibException('You cannot get a finder for a module, '
                                   'only for a package')
        loader = getattr(module, '__loader__', None)
        finder_maker = _finder_registry.get(type(loader))
        if finder_maker is None:
            raise DistlibException('Unable to locate finder for %r' % package)
        result = finder_maker(module)
        _finder_cache[package] = result
    return result
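A minimal usage sketch for finder(), assuming pip's vendored distlib is importable as pip._vendor.distlib (any importable package that ships data files would do):

# Illustrative only: look up a resource inside an importable package.
# The same code works whether the package lives on disk or in a zip,
# because the registry picks ResourceFinder or ZipResourceFinder for us.
from pip._vendor.distlib.resources import finder

f = finder('pip._vendor.distlib')      # cached per package name
res = f.find('t32.exe')                # a Resource, or None if absent
if res is not None and not res.is_container:
    data = res.bytes                   # raw contents via get_bytes()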

_dummy_module = types.ModuleType(str('__dummy__'))


def finder_for_path(path):
    """
    Return a resource finder for a path, which should represent a container.

    :param path: The path.
    :return: A :class:`ResourceFinder` instance for the path.
    """
    result = None
    # calls any path hooks, gets importer into cache
    pkgutil.get_importer(path)
    loader = sys.path_importer_cache.get(path)
    finder = _finder_registry.get(type(loader))
    if finder:
        module = _dummy_module
        module.__file__ = os.path.join(path, '')
        module.__loader__ = loader
        result = finder(module)
    return result
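A sketch of finder_for_path(), which wraps a bare directory (or zip on sys.path) in a dummy module so the same finder machinery applies; the temp directory and file name here are illustrative:

# Illustrative only: treat a plain directory as a resource container.
import os
import tempfile

from pip._vendor.distlib.resources import finder_for_path

d = tempfile.mkdtemp()
with open(os.path.join(d, 'hello.txt'), 'w') as fp:
    fp.write('hi')
f = finder_for_path(d)        # a ResourceFinder over the dummy module
root = f.find('')             # the directory itself, as a container
print(root.resources)         # {'hello.txt'}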
447
.venv/lib/python3.9/site-packages/pip/_vendor/distlib/scripts.py
Normal file
@@ -0,0 +1,447 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2023 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from io import BytesIO
import logging
import os
import re
import struct
import sys
import time
from zipfile import ZipInfo

from .compat import sysconfig, detect_encoding, ZipFile
from .resources import finder
from .util import (FileOperator, get_export_entry, convert_path, get_executable, get_platform, in_venv)

logger = logging.getLogger(__name__)

_DEFAULT_MANIFEST = '''
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
 <assemblyIdentity version="1.0.0.0"
 processorArchitecture="X86"
 name="%s"
 type="win32"/>

 <!-- Identify the application security requirements. -->
 <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
  <security>
   <requestedPrivileges>
    <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
   </requestedPrivileges>
  </security>
 </trustInfo>
</assembly>'''.strip()

# check if Python is called on the first line with this expression
FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$')
SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*-
import re
import sys
if __name__ == '__main__':
    from %(module)s import %(import_name)s
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(%(func)s())
'''

# Pre-fetch the contents of all executable wrapper stubs.
# This is to address https://github.com/pypa/pip/issues/12666.
# When updating pip, we rename the old pip in place before installing the
# new version. If we try to fetch a wrapper *after* that rename, the finder
# machinery will be confused as the package is no longer available at the
# location where it was imported from. So we load everything into memory in
# advance.

if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'):
    # Issue 31: don't hardcode an absolute package name, but
    # determine it relative to the current package
    DISTLIB_PACKAGE = __name__.rsplit('.', 1)[0]

    WRAPPERS = {
        r.name: r.bytes
        for r in finder(DISTLIB_PACKAGE).iterator("")
        if r.name.endswith(".exe")
    }


def enquote_executable(executable):
    if ' ' in executable:
        # make sure we quote only the executable in case of env
        # for example /usr/bin/env "/dir with spaces/bin/jython"
        # instead of "/usr/bin/env /dir with spaces/bin/jython"
        # otherwise whole
        if executable.startswith('/usr/bin/env '):
            env, _executable = executable.split(' ', 1)
            if ' ' in _executable and not _executable.startswith('"'):
                executable = '%s "%s"' % (env, _executable)
        else:
            if not executable.startswith('"'):
                executable = '"%s"' % executable
    return executable
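A behaviour sketch for enquote_executable(), following the three branches above (space-free paths pass through, /usr/bin/env keeps the env prefix unquoted, anything else is quoted whole):

# Illustrative only; values follow directly from the function above.
assert enquote_executable('/usr/bin/python') == '/usr/bin/python'
assert (enquote_executable('C:\\Program Files\\Python\\python.exe')
        == '"C:\\Program Files\\Python\\python.exe"')
assert (enquote_executable('/usr/bin/env /dir with spaces/bin/jython')
        == '/usr/bin/env "/dir with spaces/bin/jython"')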

# Keep the old name around (for now), as there is at least one project using it!
_enquote_executable = enquote_executable


class ScriptMaker(object):
    """
    A class to copy or create scripts from source scripts or callable
    specifications.
    """
    script_template = SCRIPT_TEMPLATE

    executable = None  # for shebangs

    def __init__(self, source_dir, target_dir, add_launchers=True, dry_run=False, fileop=None):
        self.source_dir = source_dir
        self.target_dir = target_dir
        self.add_launchers = add_launchers
        self.force = False
        self.clobber = False
        # It only makes sense to set mode bits on POSIX.
        self.set_mode = (os.name == 'posix') or (os.name == 'java' and os._name == 'posix')
        self.variants = set(('', 'X.Y'))
        self._fileop = fileop or FileOperator(dry_run)

        self._is_nt = os.name == 'nt' or (os.name == 'java' and os._name == 'nt')
        self.version_info = sys.version_info

    def _get_alternate_executable(self, executable, options):
        if options.get('gui', False) and self._is_nt:  # pragma: no cover
            dn, fn = os.path.split(executable)
            fn = fn.replace('python', 'pythonw')
            executable = os.path.join(dn, fn)
        return executable

    if sys.platform.startswith('java'):  # pragma: no cover

        def _is_shell(self, executable):
            """
            Determine if the specified executable is a script
            (contains a #! line)
            """
            try:
                with open(executable) as fp:
                    return fp.read(2) == '#!'
            except (OSError, IOError):
                logger.warning('Failed to open %s', executable)
                return False

        def _fix_jython_executable(self, executable):
            if self._is_shell(executable):
                # Workaround for Jython is not needed on Linux systems.
                import java

                if java.lang.System.getProperty('os.name') == 'Linux':
                    return executable
            elif executable.lower().endswith('jython.exe'):
                # Use wrapper exe for Jython on Windows
                return executable
            return '/usr/bin/env %s' % executable

    def _build_shebang(self, executable, post_interp):
        """
        Build a shebang line. In the simple case (on Windows, or a shebang line
        which is not too long or contains spaces) use a simple formulation for
        the shebang. Otherwise, use /bin/sh as the executable, with a contrived
        shebang which allows the script to run either under Python or sh, using
        suitable quoting. Thanks to Harald Nordgren for his input.

        See also: http://www.in-ulm.de/~mascheck/various/shebang/#length
                  https://hg.mozilla.org/mozilla-central/file/tip/mach
        """
        if os.name != 'posix':
            simple_shebang = True
        elif getattr(sys, "cross_compiling", False):
            # In a cross-compiling environment, the shebang will likely be a
            # script; this *must* be invoked with the "safe" version of the
            # shebang, or else using os.exec() to run the entry script will
            # fail, raising "OSError 8 [Errno 8] Exec format error".
            simple_shebang = False
        else:
            # Add 3 for '#!' prefix and newline suffix.
            shebang_length = len(executable) + len(post_interp) + 3
            if sys.platform == 'darwin':
                max_shebang_length = 512
            else:
                max_shebang_length = 127
            simple_shebang = ((b' ' not in executable) and (shebang_length <= max_shebang_length))

        if simple_shebang:
            result = b'#!' + executable + post_interp + b'\n'
        else:
            result = b'#!/bin/sh\n'
            result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n'
            result += b"' '''\n"
        return result
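A minimal sketch of what the two branches above produce on POSIX; the interpreter paths are hypothetical:

# Illustrative only.
from pip._vendor.distlib.scripts import ScriptMaker

sm = ScriptMaker(source_dir=None, target_dir=None)
# Short, space-free path -> a plain shebang, e.g. b'#!/usr/bin/python3\n'
print(sm._build_shebang(b'/usr/bin/python3', b''))
# A space in the path forces the /bin/sh trampoline: sh executes the
# 'exec' line, while Python parses lines 2-3 as a harmless ''' string.
print(sm._build_shebang(b'/opt/my python/bin/python3', b''))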

    def _get_shebang(self, encoding, post_interp=b'', options=None):
        enquote = True
        if self.executable:
            executable = self.executable
            enquote = False  # assume this will be taken care of
        elif not sysconfig.is_python_build():
            executable = get_executable()
        elif in_venv():  # pragma: no cover
            executable = os.path.join(sysconfig.get_path('scripts'), 'python%s' % sysconfig.get_config_var('EXE'))
        else:  # pragma: no cover
            if os.name == 'nt':
                # for Python builds from source on Windows, no Python executables with
                # a version suffix are created, so we use python.exe
                executable = os.path.join(sysconfig.get_config_var('BINDIR'),
                                          'python%s' % (sysconfig.get_config_var('EXE')))
            else:
                executable = os.path.join(
                    sysconfig.get_config_var('BINDIR'),
                    'python%s%s' % (sysconfig.get_config_var('VERSION'), sysconfig.get_config_var('EXE')))
        if options:
            executable = self._get_alternate_executable(executable, options)

        if sys.platform.startswith('java'):  # pragma: no cover
            executable = self._fix_jython_executable(executable)

        # Normalise case for Windows - COMMENTED OUT
        # executable = os.path.normcase(executable)
        # N.B. The normalising operation above has been commented out: See
        # issue #124. Although paths in Windows are generally case-insensitive,
        # they aren't always. For example, a path containing a ẞ (which is a
        # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a
        # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by
        # Windows as equivalent in path names.

        # If the user didn't specify an executable, it may be necessary to
        # cater for executable paths with spaces (not uncommon on Windows)
        if enquote:
            executable = enquote_executable(executable)
        # Issue #51: don't use fsencode, since we later try to
        # check that the shebang is decodable using utf-8.
        executable = executable.encode('utf-8')
        # in case of IronPython, play safe and enable frames support
        if (sys.platform == 'cli' and '-X:Frames' not in post_interp and
                '-X:FullFrames' not in post_interp):  # pragma: no cover
            post_interp += b' -X:Frames'
        shebang = self._build_shebang(executable, post_interp)
        # Python parser starts to read a script using UTF-8 until
        # it gets a #coding:xxx cookie. The shebang has to be the
        # first line of a file, the #coding:xxx cookie cannot be
        # written before. So the shebang has to be decodable from
        # UTF-8.
        try:
            shebang.decode('utf-8')
        except UnicodeDecodeError:  # pragma: no cover
            raise ValueError('The shebang (%r) is not decodable from utf-8' % shebang)
        # If the script is encoded to a custom encoding (use a
        # #coding:xxx cookie), the shebang has to be decodable from
        # the script encoding too.
        if encoding != 'utf-8':
            try:
                shebang.decode(encoding)
            except UnicodeDecodeError:  # pragma: no cover
                raise ValueError('The shebang (%r) is not decodable '
                                 'from the script encoding (%r)' % (shebang, encoding))
        return shebang

    def _get_script_text(self, entry):
        return self.script_template % dict(
            module=entry.prefix, import_name=entry.suffix.split('.')[0], func=entry.suffix)

    manifest = _DEFAULT_MANIFEST

    def get_manifest(self, exename):
        base = os.path.basename(exename)
        return self.manifest % base

    def _write_script(self, names, shebang, script_bytes, filenames, ext):
        use_launcher = self.add_launchers and self._is_nt
        if not use_launcher:
            script_bytes = shebang + script_bytes
        else:  # pragma: no cover
            if ext == 'py':
                launcher = self._get_launcher('t')
            else:
                launcher = self._get_launcher('w')
            stream = BytesIO()
            with ZipFile(stream, 'w') as zf:
                source_date_epoch = os.environ.get('SOURCE_DATE_EPOCH')
                if source_date_epoch:
                    date_time = time.gmtime(int(source_date_epoch))[:6]
                    zinfo = ZipInfo(filename='__main__.py', date_time=date_time)
                    zf.writestr(zinfo, script_bytes)
                else:
                    zf.writestr('__main__.py', script_bytes)
            zip_data = stream.getvalue()
            script_bytes = launcher + shebang + zip_data
        for name in names:
            outname = os.path.join(self.target_dir, name)
            if use_launcher:  # pragma: no cover
                n, e = os.path.splitext(outname)
                if e.startswith('.py'):
                    outname = n
                outname = '%s.exe' % outname
                try:
                    self._fileop.write_binary_file(outname, script_bytes)
                except Exception:
                    # Failed writing an executable - it might be in use.
                    logger.warning('Failed to write executable - trying to '
                                   'use .deleteme logic')
                    dfname = '%s.deleteme' % outname
                    if os.path.exists(dfname):
                        os.remove(dfname)  # Not allowed to fail here
                    os.rename(outname, dfname)  # nor here
                    self._fileop.write_binary_file(outname, script_bytes)
                    logger.debug('Able to replace executable using '
                                 '.deleteme logic')
                    try:
                        os.remove(dfname)
                    except Exception:
                        pass  # still in use - ignore error
            else:
                if self._is_nt and not outname.endswith('.' + ext):  # pragma: no cover
                    outname = '%s.%s' % (outname, ext)
                if os.path.exists(outname) and not self.clobber:
                    logger.warning('Skipping existing file %s', outname)
                    continue
                self._fileop.write_binary_file(outname, script_bytes)
                if self.set_mode:
                    self._fileop.set_executable_mode([outname])
            filenames.append(outname)

    variant_separator = '-'

    def get_script_filenames(self, name):
        result = set()
        if '' in self.variants:
            result.add(name)
        if 'X' in self.variants:
            result.add('%s%s' % (name, self.version_info[0]))
        if 'X.Y' in self.variants:
            result.add('%s%s%s.%s' % (name, self.variant_separator, self.version_info[0], self.version_info[1]))
        return result
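A sketch of how the variants set shapes generated script names; note the 'X' variant appends the major version with no separator, while 'X.Y' uses variant_separator:

# Illustrative only; output shown for a hypothetical CPython 3.9.
from pip._vendor.distlib.scripts import ScriptMaker

sm = ScriptMaker(source_dir=None, target_dir=None)
sm.variants = {'', 'X', 'X.Y'}
print(sm.get_script_filenames('foo'))   # {'foo', 'foo3', 'foo-3.9'}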

    def _make_script(self, entry, filenames, options=None):
        post_interp = b''
        if options:
            args = options.get('interpreter_args', [])
            if args:
                args = ' %s' % ' '.join(args)
                post_interp = args.encode('utf-8')
        shebang = self._get_shebang('utf-8', post_interp, options=options)
        script = self._get_script_text(entry).encode('utf-8')
        scriptnames = self.get_script_filenames(entry.name)
        if options and options.get('gui', False):
            ext = 'pyw'
        else:
            ext = 'py'
        self._write_script(scriptnames, shebang, script, filenames, ext)

    def _copy_script(self, script, filenames):
        adjust = False
        script = os.path.join(self.source_dir, convert_path(script))
        outname = os.path.join(self.target_dir, os.path.basename(script))
        if not self.force and not self._fileop.newer(script, outname):
            logger.debug('not copying %s (up-to-date)', script)
            return

        # Always open the file, but ignore failures in dry-run mode --
        # that way, we'll get accurate feedback if we can read the
        # script.
        try:
            f = open(script, 'rb')
        except IOError:  # pragma: no cover
            if not self.dry_run:
                raise
            f = None
        else:
            first_line = f.readline()
            if not first_line:  # pragma: no cover
                logger.warning('%s is an empty file (skipping)', script)
                return

            match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n'))
            if match:
                adjust = True
                post_interp = match.group(1) or b''

        if not adjust:
            if f:
                f.close()
            self._fileop.copy_file(script, outname)
            if self.set_mode:
                self._fileop.set_executable_mode([outname])
            filenames.append(outname)
        else:
            logger.info('copying and adjusting %s -> %s', script, self.target_dir)
            if not self._fileop.dry_run:
                encoding, lines = detect_encoding(f.readline)
                f.seek(0)
                shebang = self._get_shebang(encoding, post_interp)
                if b'pythonw' in first_line:  # pragma: no cover
                    ext = 'pyw'
                else:
                    ext = 'py'
                n = os.path.basename(outname)
                self._write_script([n], shebang, f.read(), filenames, ext)
            if f:
                f.close()

    @property
    def dry_run(self):
        return self._fileop.dry_run

    @dry_run.setter
    def dry_run(self, value):
        self._fileop.dry_run = value

    if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'):  # pragma: no cover
        # Executable launcher support.
        # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/

        def _get_launcher(self, kind):
            if struct.calcsize('P') == 8:  # 64-bit
                bits = '64'
            else:
                bits = '32'
            platform_suffix = '-arm' if get_platform() == 'win-arm64' else ''
            name = '%s%s%s.exe' % (kind, bits, platform_suffix)
            if name not in WRAPPERS:
                msg = ('Unable to find resource %s in package %s' %
                       (name, DISTLIB_PACKAGE))
                raise ValueError(msg)
            return WRAPPERS[name]

    # Public API follows

    def make(self, specification, options=None):
        """
        Make a script.

        :param specification: The specification, which is either a valid export
                              entry specification (to make a script from a
                              callable) or a filename (to make a script by
                              copying from a source location).
        :param options: A dictionary of options controlling script generation.
        :return: A list of all absolute pathnames written to.
        """
        filenames = []
        entry = get_export_entry(specification)
        if entry is None:
            self._copy_script(specification, filenames)
        else:
            self._make_script(entry, filenames, options=options)
        return filenames

    def make_multiple(self, specifications, options=None):
        """
        Take a list of specifications and make scripts from them.

        :param specifications: A list of specifications.
        :return: A list of all absolute pathnames written to.
        """
        filenames = []
        for specification in specifications:
            filenames.extend(self.make(specification, options))
        return filenames
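A usage sketch for the public API; the entry-point specification 'myapp = mypkg.cli:main' and the target directory are hypothetical:

# Illustrative only: generate a console script from an export entry.
from pip._vendor.distlib.scripts import ScriptMaker

maker = ScriptMaker(source_dir=None, target_dir='/tmp/bin')
maker.clobber = True        # overwrite existing files
maker.variants = {''}       # just 'myapp', no versioned copies
written = maker.make('myapp = mypkg.cli:main')
print(written)              # e.g. ['/tmp/bin/myapp']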
BIN
.venv/lib/python3.9/site-packages/pip/_vendor/distlib/t32.exe
Normal file
Binary file not shown.
BIN
.venv/lib/python3.9/site-packages/pip/_vendor/distlib/t64.exe
Normal file
Binary file not shown.
1984
.venv/lib/python3.9/site-packages/pip/_vendor/distlib/util.py
Normal file
File diff suppressed because it is too large
BIN
.venv/lib/python3.9/site-packages/pip/_vendor/distlib/w32.exe
Normal file
Binary file not shown.
BIN
.venv/lib/python3.9/site-packages/pip/_vendor/distlib/w64.exe
Normal file
Binary file not shown.
202
.venv/lib/python3.9/site-packages/pip/_vendor/distro/LICENSE
Normal file
@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@@ -0,0 +1,54 @@
from .distro import (
    NORMALIZED_DISTRO_ID,
    NORMALIZED_LSB_ID,
    NORMALIZED_OS_ID,
    LinuxDistribution,
    __version__,
    build_number,
    codename,
    distro_release_attr,
    distro_release_info,
    id,
    info,
    like,
    linux_distribution,
    lsb_release_attr,
    lsb_release_info,
    major_version,
    minor_version,
    name,
    os_release_attr,
    os_release_info,
    uname_attr,
    uname_info,
    version,
    version_parts,
)

__all__ = [
    "NORMALIZED_DISTRO_ID",
    "NORMALIZED_LSB_ID",
    "NORMALIZED_OS_ID",
    "LinuxDistribution",
    "build_number",
    "codename",
    "distro_release_attr",
    "distro_release_info",
    "id",
    "info",
    "like",
    "linux_distribution",
    "lsb_release_attr",
    "lsb_release_info",
    "major_version",
    "minor_version",
    "name",
    "os_release_attr",
    "os_release_info",
    "uname_attr",
    "uname_info",
    "version",
    "version_parts",
]

__version__ = __version__
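A usage sketch against the re-exported API above; the printed values are examples and depend on the host system:

# Illustrative only.
from pip._vendor import distro

print(distro.id())        # e.g. 'ubuntu' (normalized distro ID)
print(distro.version())   # e.g. '20.04'
print(distro.info())      # dict with id, version, codename, ...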
@@ -0,0 +1,4 @@
from .distro import main

if __name__ == "__main__":
    main()
1403
.venv/lib/python3.9/site-packages/pip/_vendor/distro/distro.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,31 @@
BSD 3-Clause License

Copyright (c) 2013-2025, Kim Davies and contributors.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,45 @@
from .core import (
    IDNABidiError,
    IDNAError,
    InvalidCodepoint,
    InvalidCodepointContext,
    alabel,
    check_bidi,
    check_hyphen_ok,
    check_initial_combiner,
    check_label,
    check_nfc,
    decode,
    encode,
    ulabel,
    uts46_remap,
    valid_contextj,
    valid_contexto,
    valid_label_length,
    valid_string_length,
)
from .intranges import intranges_contain
from .package_data import __version__

__all__ = [
    "__version__",
    "IDNABidiError",
    "IDNAError",
    "InvalidCodepoint",
    "InvalidCodepointContext",
    "alabel",
    "check_bidi",
    "check_hyphen_ok",
    "check_initial_combiner",
    "check_label",
    "check_nfc",
    "decode",
    "encode",
    "intranges_contain",
    "ulabel",
    "uts46_remap",
    "valid_contextj",
    "valid_contexto",
    "valid_label_length",
    "valid_string_length",
]
122
.venv/lib/python3.9/site-packages/pip/_vendor/idna/codec.py
Normal file
@@ -0,0 +1,122 @@
import codecs
import re
from typing import Any, Optional, Tuple

from .core import IDNAError, alabel, decode, encode, ulabel

_unicode_dots_re = re.compile("[\u002e\u3002\uff0e\uff61]")


class Codec(codecs.Codec):
    def encode(self, data: str, errors: str = "strict") -> Tuple[bytes, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return b"", 0

        return encode(data), len(data)

    def decode(self, data: bytes, errors: str = "strict") -> Tuple[str, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return "", 0

        return decode(data), len(data)


class IncrementalEncoder(codecs.BufferedIncrementalEncoder):
    def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[bytes, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return b"", 0

        labels = _unicode_dots_re.split(data)
        trailing_dot = b""
        if labels:
            if not labels[-1]:
                trailing_dot = b"."
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = b"."

        result = []
        size = 0
        for label in labels:
            result.append(alabel(label))
            if size:
                size += 1
            size += len(label)

        # Join with U+002E
        result_bytes = b".".join(result) + trailing_dot
        size += len(trailing_dot)
        return result_bytes, size


class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
    def _buffer_decode(self, data: Any, errors: str, final: bool) -> Tuple[str, int]:
        if errors != "strict":
            raise IDNAError('Unsupported error handling "{}"'.format(errors))

        if not data:
            return ("", 0)

        if not isinstance(data, str):
            data = str(data, "ascii")

        labels = _unicode_dots_re.split(data)
        trailing_dot = ""
        if labels:
            if not labels[-1]:
                trailing_dot = "."
                del labels[-1]
            elif not final:
                # Keep potentially unfinished label until the next call
                del labels[-1]
                if labels:
                    trailing_dot = "."

        result = []
        size = 0
        for label in labels:
            result.append(ulabel(label))
            if size:
                size += 1
            size += len(label)

        result_str = ".".join(result) + trailing_dot
        size += len(trailing_dot)
        return (result_str, size)


class StreamWriter(Codec, codecs.StreamWriter):
    pass


class StreamReader(Codec, codecs.StreamReader):
    pass


def search_function(name: str) -> Optional[codecs.CodecInfo]:
    if name != "idna2008":
        return None
    return codecs.CodecInfo(
        name=name,
        encode=Codec().encode,
        decode=Codec().decode,  # type: ignore
        incrementalencoder=IncrementalEncoder,
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
    )


codecs.register(search_function)
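A sketch of the registered codec in use: importing this module is enough to make the 'idna2008' codec available through the standard codecs machinery.

# Illustrative only.
from pip._vendor.idna import codec  # noqa: F401  (import registers the codec)

print('königsgäßchen'.encode('idna2008'))            # b'xn--knigsgchen-b4a3dun'
print(b'xn--knigsgchen-b4a3dun'.decode('idna2008'))  # 'königsgäßchen'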
15
.venv/lib/python3.9/site-packages/pip/_vendor/idna/compat.py
Normal file
@@ -0,0 +1,15 @@
from typing import Any, Union

from .core import decode, encode


def ToASCII(label: str) -> bytes:
    return encode(label)


def ToUnicode(label: Union[bytes, bytearray]) -> str:
    return decode(label)


def nameprep(s: Any) -> None:
    raise NotImplementedError("IDNA 2008 does not utilise nameprep protocol")
437
.venv/lib/python3.9/site-packages/pip/_vendor/idna/core.py
Normal file
@@ -0,0 +1,437 @@
import bisect
import re
import unicodedata
from typing import Optional, Union

from . import idnadata
from .intranges import intranges_contain

_virama_combining_class = 9
_alabel_prefix = b"xn--"
_unicode_dots_re = re.compile("[\u002e\u3002\uff0e\uff61]")


class IDNAError(UnicodeError):
    """Base exception for all IDNA-encoding related problems"""

    pass


class IDNABidiError(IDNAError):
    """Exception when bidirectional requirements are not satisfied"""

    pass


class InvalidCodepoint(IDNAError):
    """Exception when a disallowed or unallocated codepoint is used"""

    pass


class InvalidCodepointContext(IDNAError):
    """Exception when the codepoint is not valid in the context it is used"""

    pass


def _combining_class(cp: int) -> int:
    v = unicodedata.combining(chr(cp))
    if v == 0:
        if not unicodedata.name(chr(cp)):
            raise ValueError("Unknown character in unicodedata")
    return v


def _is_script(cp: str, script: str) -> bool:
    return intranges_contain(ord(cp), idnadata.scripts[script])


def _punycode(s: str) -> bytes:
    return s.encode("punycode")


def _unot(s: int) -> str:
    return "U+{:04X}".format(s)


def valid_label_length(label: Union[bytes, str]) -> bool:
    if len(label) > 63:
        return False
    return True


def valid_string_length(label: Union[bytes, str], trailing_dot: bool) -> bool:
    if len(label) > (254 if trailing_dot else 253):
        return False
    return True


def check_bidi(label: str, check_ltr: bool = False) -> bool:
    # Bidi rules should only be applied if string contains RTL characters
    bidi_label = False
    for idx, cp in enumerate(label, 1):
        direction = unicodedata.bidirectional(cp)
        if direction == "":
            # String likely comes from a newer version of Unicode
            raise IDNABidiError("Unknown directionality in label {} at position {}".format(repr(label), idx))
        if direction in ["R", "AL", "AN"]:
            bidi_label = True
    if not bidi_label and not check_ltr:
        return True

    # Bidi rule 1
    direction = unicodedata.bidirectional(label[0])
    if direction in ["R", "AL"]:
        rtl = True
    elif direction == "L":
        rtl = False
    else:
        raise IDNABidiError("First codepoint in label {} must be directionality L, R or AL".format(repr(label)))

    valid_ending = False
    number_type: Optional[str] = None
    for idx, cp in enumerate(label, 1):
        direction = unicodedata.bidirectional(cp)

        if rtl:
            # Bidi rule 2
            if direction not in [
                "R",
                "AL",
                "AN",
                "EN",
                "ES",
                "CS",
                "ET",
                "ON",
                "BN",
                "NSM",
            ]:
                raise IDNABidiError("Invalid direction for codepoint at position {} in a right-to-left label".format(idx))
            # Bidi rule 3
            if direction in ["R", "AL", "EN", "AN"]:
                valid_ending = True
            elif direction != "NSM":
                valid_ending = False
            # Bidi rule 4
            if direction in ["AN", "EN"]:
                if not number_type:
                    number_type = direction
                else:
                    if number_type != direction:
                        raise IDNABidiError("Can not mix numeral types in a right-to-left label")
        else:
            # Bidi rule 5
            if direction not in ["L", "EN", "ES", "CS", "ET", "ON", "BN", "NSM"]:
                raise IDNABidiError("Invalid direction for codepoint at position {} in a left-to-right label".format(idx))
            # Bidi rule 6
            if direction in ["L", "EN"]:
                valid_ending = True
            elif direction != "NSM":
                valid_ending = False

    if not valid_ending:
        raise IDNABidiError("Label ends with illegal codepoint directionality")

    return True


def check_initial_combiner(label: str) -> bool:
    if unicodedata.category(label[0])[0] == "M":
        raise IDNAError("Label begins with an illegal combining character")
    return True


def check_hyphen_ok(label: str) -> bool:
    if label[2:4] == "--":
        raise IDNAError("Label has disallowed hyphens in 3rd and 4th position")
    if label[0] == "-" or label[-1] == "-":
        raise IDNAError("Label must not start or end with a hyphen")
    return True


def check_nfc(label: str) -> None:
    if unicodedata.normalize("NFC", label) != label:
        raise IDNAError("Label must be in Normalization Form C")


def valid_contextj(label: str, pos: int) -> bool:
    cp_value = ord(label[pos])

    if cp_value == 0x200C:
        if pos > 0:
            if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
                return True

        ok = False
        for i in range(pos - 1, -1, -1):
            joining_type = idnadata.joining_types.get(ord(label[i]))
            if joining_type == ord("T"):
                continue
            elif joining_type in [ord("L"), ord("D")]:
                ok = True
                break
            else:
                break

        if not ok:
            return False

        ok = False
        for i in range(pos + 1, len(label)):
            joining_type = idnadata.joining_types.get(ord(label[i]))
            if joining_type == ord("T"):
                continue
            elif joining_type in [ord("R"), ord("D")]:
                ok = True
                break
            else:
                break
        return ok

    if cp_value == 0x200D:
        if pos > 0:
            if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
                return True
        return False

    else:
        return False


def valid_contexto(label: str, pos: int, exception: bool = False) -> bool:
    cp_value = ord(label[pos])

    if cp_value == 0x00B7:
        if 0 < pos < len(label) - 1:
            if ord(label[pos - 1]) == 0x006C and ord(label[pos + 1]) == 0x006C:
                return True
        return False

    elif cp_value == 0x0375:
        if pos < len(label) - 1 and len(label) > 1:
            return _is_script(label[pos + 1], "Greek")
        return False

    elif cp_value == 0x05F3 or cp_value == 0x05F4:
        if pos > 0:
            return _is_script(label[pos - 1], "Hebrew")
        return False

    elif cp_value == 0x30FB:
        for cp in label:
            if cp == "\u30fb":
                continue
            if _is_script(cp, "Hiragana") or _is_script(cp, "Katakana") or _is_script(cp, "Han"):
                return True
        return False

    elif 0x660 <= cp_value <= 0x669:
        for cp in label:
            if 0x6F0 <= ord(cp) <= 0x06F9:
                return False
        return True

    elif 0x6F0 <= cp_value <= 0x6F9:
        for cp in label:
            if 0x660 <= ord(cp) <= 0x0669:
                return False
        return True

    return False


def check_label(label: Union[str, bytes, bytearray]) -> None:
    if isinstance(label, (bytes, bytearray)):
        label = label.decode("utf-8")
    if len(label) == 0:
        raise IDNAError("Empty Label")

    check_nfc(label)
    check_hyphen_ok(label)
    check_initial_combiner(label)

    for pos, cp in enumerate(label):
        cp_value = ord(cp)
        if intranges_contain(cp_value, idnadata.codepoint_classes["PVALID"]):
            continue
        elif intranges_contain(cp_value, idnadata.codepoint_classes["CONTEXTJ"]):
            try:
                if not valid_contextj(label, pos):
                    raise InvalidCodepointContext(
                        "Joiner {} not allowed at position {} in {}".format(_unot(cp_value), pos + 1, repr(label))
                    )
            except ValueError:
                raise IDNAError(
                    "Unknown codepoint adjacent to joiner {} at position {} in {}".format(
                        _unot(cp_value), pos + 1, repr(label)
                    )
                )
        elif intranges_contain(cp_value, idnadata.codepoint_classes["CONTEXTO"]):
            if not valid_contexto(label, pos):
                raise InvalidCodepointContext(
                    "Codepoint {} not allowed at position {} in {}".format(_unot(cp_value), pos + 1, repr(label))
                )
        else:
            raise InvalidCodepoint(
                "Codepoint {} at position {} of {} not allowed".format(_unot(cp_value), pos + 1, repr(label))
            )

    check_bidi(label)


def alabel(label: str) -> bytes:
    try:
        label_bytes = label.encode("ascii")
        ulabel(label_bytes)
        if not valid_label_length(label_bytes):
            raise IDNAError("Label too long")
        return label_bytes
    except UnicodeEncodeError:
        pass

    check_label(label)
    label_bytes = _alabel_prefix + _punycode(label)

    if not valid_label_length(label_bytes):
        raise IDNAError("Label too long")

    return label_bytes


def ulabel(label: Union[str, bytes, bytearray]) -> str:
    if not isinstance(label, (bytes, bytearray)):
        try:
            label_bytes = label.encode("ascii")
        except UnicodeEncodeError:
            check_label(label)
            return label
    else:
        label_bytes = bytes(label)

    label_bytes = label_bytes.lower()
    if label_bytes.startswith(_alabel_prefix):
        label_bytes = label_bytes[len(_alabel_prefix) :]
        if not label_bytes:
            raise IDNAError("Malformed A-label, no Punycode eligible content found")
        if label_bytes.decode("ascii")[-1] == "-":
            raise IDNAError("A-label must not end with a hyphen")
    else:
        check_label(label_bytes)
        return label_bytes.decode("ascii")

    try:
        label = label_bytes.decode("punycode")
    except UnicodeError:
        raise IDNAError("Invalid A-label")
    check_label(label)
    return label
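A round-trip sketch for the two label functions above:

# Illustrative only: a U-label to A-label round trip.
from pip._vendor.idna.core import alabel, ulabel

print(alabel('bücher'))           # b'xn--bcher-kva'
print(ulabel(b'xn--bcher-kva'))   # 'bücher'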


def uts46_remap(domain: str, std3_rules: bool = True, transitional: bool = False) -> str:
    """Re-map the characters in the string according to UTS46 processing."""
    from .uts46data import uts46data

    output = ""

    for pos, char in enumerate(domain):
        code_point = ord(char)
        try:
            uts46row = uts46data[code_point if code_point < 256 else bisect.bisect_left(uts46data, (code_point, "Z")) - 1]
            status = uts46row[1]
            replacement: Optional[str] = None
            if len(uts46row) == 3:
                replacement = uts46row[2]
            if (
                status == "V"
                or (status == "D" and not transitional)
                or (status == "3" and not std3_rules and replacement is None)
            ):
                output += char
            elif replacement is not None and (
                status == "M" or (status == "3" and not std3_rules) or (status == "D" and transitional)
            ):
                output += replacement
            elif status != "I":
                raise IndexError()
        except IndexError:
            raise InvalidCodepoint(
                "Codepoint {} not allowed at position {} in {}".format(_unot(code_point), pos + 1, repr(domain))
            )

    return unicodedata.normalize("NFC", output)


def encode(
    s: Union[str, bytes, bytearray],
    strict: bool = False,
    uts46: bool = False,
    std3_rules: bool = False,
    transitional: bool = False,
) -> bytes:
    if not isinstance(s, str):
        try:
            s = str(s, "ascii")
        except UnicodeDecodeError:
            raise IDNAError("should pass a unicode string to the function rather than a byte string.")
    if uts46:
        s = uts46_remap(s, std3_rules, transitional)
    trailing_dot = False
    result = []
    if strict:
        labels = s.split(".")
    else:
        labels = _unicode_dots_re.split(s)
    if not labels or labels == [""]:
        raise IDNAError("Empty domain")
    if labels[-1] == "":
        del labels[-1]
        trailing_dot = True
    for label in labels:
        s = alabel(label)
        if s:
            result.append(s)
        else:
            raise IDNAError("Empty label")
    if trailing_dot:
        result.append(b"")
    s = b".".join(result)
    if not valid_string_length(s, trailing_dot):
        raise IDNAError("Domain too long")
    return s


def decode(
    s: Union[str, bytes, bytearray],
    strict: bool = False,
    uts46: bool = False,
    std3_rules: bool = False,
) -> str:
    try:
        if not isinstance(s, str):
            s = str(s, "ascii")
    except UnicodeDecodeError:
        raise IDNAError("Invalid ASCII in A-label")
    if uts46:
        s = uts46_remap(s, std3_rules, False)
    trailing_dot = False
    result = []
    if not strict:
        labels = _unicode_dots_re.split(s)
    else:
        labels = s.split(".")
    if not labels or labels == [""]:
        raise IDNAError("Empty domain")
    if not labels[-1]:
        del labels[-1]
        trailing_dot = True
    for label in labels:
        s = ulabel(label)
        if s:
            result.append(s)
        else:
            raise IDNAError("Empty label")
    if trailing_dot:
        result.append("")
    return ".".join(result)
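A whole-domain sketch for encode()/decode(), including the uts46 path which case-folds before encoding (plain encode() rejects uppercase, since it is not PVALID in IDNA2008):

# Illustrative only.
from pip._vendor.idna.core import decode, encode

print(encode('bücher.example'))              # b'xn--bcher-kva.example'
print(decode(b'xn--bcher-kva.example'))      # 'bücher.example'
print(encode('Bücher.example', uts46=True))  # remapped, then encoded as above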
4309
.venv/lib/python3.9/site-packages/pip/_vendor/idna/idnadata.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,57 @@
"""
Given a list of integers, made up of (hopefully) a small number of long runs
of consecutive integers, compute a representation of the form
((start1, end1), (start2, end2) ...). Then answer the question "was x present
in the original list?" in time O(log(# runs)).
"""

import bisect
from typing import List, Tuple


def intranges_from_list(list_: List[int]) -> Tuple[int, ...]:
    """Represent a list of integers as a sequence of ranges:
    ((start_0, end_0), (start_1, end_1), ...), such that the original
    integers are exactly those x such that start_i <= x < end_i for some i.

    Ranges are encoded as single integers (start << 32 | end), not as tuples.
    """

    sorted_list = sorted(list_)
    ranges = []
    last_write = -1
    for i in range(len(sorted_list)):
        if i + 1 < len(sorted_list):
            if sorted_list[i] == sorted_list[i + 1] - 1:
                continue
        current_range = sorted_list[last_write + 1 : i + 1]
        ranges.append(_encode_range(current_range[0], current_range[-1] + 1))
        last_write = i

    return tuple(ranges)


def _encode_range(start: int, end: int) -> int:
    return (start << 32) | end


def _decode_range(r: int) -> Tuple[int, int]:
    return (r >> 32), (r & ((1 << 32) - 1))


def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool:
    """Determine if `int_` falls into one of the ranges in `ranges`."""
    tuple_ = _encode_range(int_, 0)
    pos = bisect.bisect_left(ranges, tuple_)
    # we could be immediately ahead of a tuple (start, end)
    # with start <= int_ < end
    if pos > 0:
        left, right = _decode_range(ranges[pos - 1])
        if left <= int_ < right:
            return True
    # or we could be immediately behind a tuple (int_, end)
    if pos < len(ranges):
        left, _ = _decode_range(ranges[pos])
        if left == int_:
            return True
    return False
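A quick sanity check of this encoding (the input list is illustrative):

    ranges = intranges_from_list([1, 2, 3, 10, 11, 12])
    # two runs: [1, 4) and [10, 13), each packed as start << 32 | end
    assert intranges_contain(2, ranges)
    assert intranges_contain(11, ranges)
    assert not intranges_contain(5, ranges)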
@@ -0,0 +1 @@
__version__ = "3.11"
8841
.venv/lib/python3.9/site-packages/pip/_vendor/idna/uts46data.py
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,14 @@
Copyright (C) 2008-2011 INADA Naoki <songofacandy@gmail.com>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@@ -0,0 +1,55 @@
# ruff: noqa: F401
import os

from .exceptions import *  # noqa: F403
from .ext import ExtType, Timestamp

version = (1, 1, 2)
__version__ = "1.1.2"


if os.environ.get("MSGPACK_PUREPYTHON"):
    from .fallback import Packer, Unpacker, unpackb
else:
    try:
        from ._cmsgpack import Packer, Unpacker, unpackb
    except ImportError:
        from .fallback import Packer, Unpacker, unpackb


def pack(o, stream, **kwargs):
    """
    Pack object `o` and write it to `stream`

    See :class:`Packer` for options.
    """
    packer = Packer(**kwargs)
    stream.write(packer.pack(o))


def packb(o, **kwargs):
    """
    Pack object `o` and return packed bytes

    See :class:`Packer` for options.
    """
    return Packer(**kwargs).pack(o)


def unpack(stream, **kwargs):
    """
    Unpack an object from `stream`.

    Raises `ExtraData` when `stream` contains extra bytes.
    See :class:`Unpacker` for options.
    """
    data = stream.read()
    return unpackb(data, **kwargs)


# alias for compatibility to simplejson/marshal/pickle.
load = unpack
loads = unpackb

dump = pack
dumps = packb
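As a quick illustration of this module-level API (a sketch; within pip the vendored copy lives under ``pip._vendor.msgpack``):

    from pip._vendor import msgpack

    payload = msgpack.packb({"name": "pip", "version": 3})
    assert msgpack.unpackb(payload) == {"name": "pip", "version": 3}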
@@ -0,0 +1,48 @@
class UnpackException(Exception):
    """Base class for some exceptions raised while unpacking.

    NOTE: unpack may raise exceptions other than subclasses of
    UnpackException. If you want to catch all errors, catch
    Exception instead.
    """


class BufferFull(UnpackException):
    pass


class OutOfData(UnpackException):
    pass


class FormatError(ValueError, UnpackException):
    """Invalid msgpack format"""


class StackError(ValueError, UnpackException):
    """Too nested"""


# Deprecated. Use ValueError instead
UnpackValueError = ValueError


class ExtraData(UnpackValueError):
    """ExtraData is raised when there is trailing data.

    This exception is raised only during one-shot (not streaming)
    unpacking.
    """

    def __init__(self, unpacked, extra):
        self.unpacked = unpacked
        self.extra = extra

    def __str__(self):
        return "unpack(b) received extra data."


# Deprecated. Use Exception instead to catch all exceptions during packing.
PackException = Exception
PackValueError = ValueError
PackOverflowError = OverflowError
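For instance, one-shot unpacking of a buffer with trailing bytes surfaces both halves of the input (a sketch, using the vendored import path):

    from pip._vendor import msgpack
    from pip._vendor.msgpack.exceptions import ExtraData

    try:
        msgpack.unpackb(b"\x01\x02")  # packed int 1, plus one stray byte
    except ExtraData as e:
        assert e.unpacked == 1 and e.extra == b"\x02"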
170
.venv/lib/python3.9/site-packages/pip/_vendor/msgpack/ext.py
Normal file
@@ -0,0 +1,170 @@
import datetime
import struct
from collections import namedtuple


class ExtType(namedtuple("ExtType", "code data")):
    """ExtType represents ext type in msgpack."""

    def __new__(cls, code, data):
        if not isinstance(code, int):
            raise TypeError("code must be int")
        if not isinstance(data, bytes):
            raise TypeError("data must be bytes")
        if not 0 <= code <= 127:
            raise ValueError("code must be 0~127")
        return super().__new__(cls, code, data)


class Timestamp:
    """Timestamp represents the Timestamp extension type in msgpack.

    When built with Cython, msgpack uses C methods to pack and unpack `Timestamp`.
    When using pure-Python msgpack, :func:`to_bytes` and :func:`from_bytes` are used to pack and
    unpack `Timestamp`.

    This class is immutable: Do not override seconds and nanoseconds.
    """

    __slots__ = ["seconds", "nanoseconds"]

    def __init__(self, seconds, nanoseconds=0):
        """Initialize a Timestamp object.

        :param int seconds:
            Number of seconds since the UNIX epoch (00:00:00 UTC Jan 1 1970, minus leap seconds).
            May be negative.

        :param int nanoseconds:
            Number of nanoseconds to add to `seconds` to get fractional time.
            Maximum is 999_999_999. Default is 0.

        Note: Negative times (before the UNIX epoch) are represented as neg. seconds + pos. ns.
        """
        if not isinstance(seconds, int):
            raise TypeError("seconds must be an integer")
        if not isinstance(nanoseconds, int):
            raise TypeError("nanoseconds must be an integer")
        if not (0 <= nanoseconds < 10**9):
            raise ValueError("nanoseconds must be a non-negative integer not larger than 999999999.")
        self.seconds = seconds
        self.nanoseconds = nanoseconds

    def __repr__(self):
        """String representation of Timestamp."""
        return f"Timestamp(seconds={self.seconds}, nanoseconds={self.nanoseconds})"

    def __eq__(self, other):
        """Check for equality with another Timestamp object"""
        if type(other) is self.__class__:
            return self.seconds == other.seconds and self.nanoseconds == other.nanoseconds
        return False

    def __ne__(self, other):
        """not-equals method (see :func:`__eq__()`)"""
        return not self.__eq__(other)

    def __hash__(self):
        return hash((self.seconds, self.nanoseconds))

    @staticmethod
    def from_bytes(b):
        """Unpack bytes into a `Timestamp` object.

        Used for pure-Python msgpack unpacking.

        :param b: Payload from msgpack ext message with code -1
        :type b: bytes

        :returns: Timestamp object unpacked from msgpack ext payload
        :rtype: Timestamp
        """
        if len(b) == 4:
            seconds = struct.unpack("!L", b)[0]
            nanoseconds = 0
        elif len(b) == 8:
            data64 = struct.unpack("!Q", b)[0]
            seconds = data64 & 0x00000003FFFFFFFF
            nanoseconds = data64 >> 34
        elif len(b) == 12:
            nanoseconds, seconds = struct.unpack("!Iq", b)
        else:
            raise ValueError(
                "Timestamp type can only be created from 32, 64, or 96-bit byte objects"
            )
        return Timestamp(seconds, nanoseconds)

    def to_bytes(self):
        """Pack this Timestamp object into bytes.

        Used for pure-Python msgpack packing.

        :returns data: Payload for EXT message with code -1 (timestamp type)
        :rtype: bytes
        """
        if (self.seconds >> 34) == 0:  # seconds is non-negative and fits in 34 bits
            data64 = self.nanoseconds << 34 | self.seconds
            if data64 & 0xFFFFFFFF00000000 == 0:
                # nanoseconds is zero and seconds < 2**32, so timestamp 32
                data = struct.pack("!L", data64)
            else:
                # timestamp 64
                data = struct.pack("!Q", data64)
        else:
            # timestamp 96
            data = struct.pack("!Iq", self.nanoseconds, self.seconds)
        return data

    @staticmethod
    def from_unix(unix_sec):
        """Create a Timestamp from posix timestamp in seconds.

        :param unix_sec: Posix timestamp in seconds.
        :type unix_sec: int or float
        """
        seconds = int(unix_sec // 1)
        nanoseconds = int((unix_sec % 1) * 10**9)
        return Timestamp(seconds, nanoseconds)

    def to_unix(self):
        """Get the timestamp as a floating-point value.

        :returns: posix timestamp
        :rtype: float
        """
        return self.seconds + self.nanoseconds / 1e9

    @staticmethod
    def from_unix_nano(unix_ns):
        """Create a Timestamp from posix timestamp in nanoseconds.

        :param int unix_ns: Posix timestamp in nanoseconds.
        :rtype: Timestamp
        """
        return Timestamp(*divmod(unix_ns, 10**9))

    def to_unix_nano(self):
        """Get the timestamp as a unixtime in nanoseconds.

        :returns: posix timestamp in nanoseconds
        :rtype: int
        """
        return self.seconds * 10**9 + self.nanoseconds

    def to_datetime(self):
        """Get the timestamp as a UTC datetime.

        :rtype: `datetime.datetime`
        """
        utc = datetime.timezone.utc
        return datetime.datetime.fromtimestamp(0, utc) + datetime.timedelta(
            seconds=self.seconds, microseconds=self.nanoseconds // 1000
        )

    @staticmethod
    def from_datetime(dt):
        """Create a Timestamp from datetime with tzinfo.

        :rtype: Timestamp
        """
        return Timestamp(seconds=int(dt.timestamp()), nanoseconds=dt.microsecond * 1000)
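To make the three wire sizes concrete, a small sketch of how `to_bytes` chooses a layout:

    ts = Timestamp(1)                           # fits in 34 bits, no nanoseconds
    assert ts.to_bytes() == b"\x00\x00\x00\x01" # timestamp 32
    ts = Timestamp(1, nanoseconds=500_000_000)
    assert len(ts.to_bytes()) == 8              # timestamp 64
    ts = Timestamp(-1)                          # pre-epoch, needs signed 64-bit seconds
    assert len(ts.to_bytes()) == 12             # timestamp 96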
@@ -0,0 +1,929 @@
"""Fallback pure Python implementation of msgpack"""

import struct
import sys
from datetime import datetime as _DateTime

if hasattr(sys, "pypy_version_info"):
    from __pypy__ import newlist_hint
    from __pypy__.builders import BytesBuilder

    _USING_STRINGBUILDER = True

    class BytesIO:
        def __init__(self, s=b""):
            if s:
                self.builder = BytesBuilder(len(s))
                self.builder.append(s)
            else:
                self.builder = BytesBuilder()

        def write(self, s):
            if isinstance(s, memoryview):
                s = s.tobytes()
            elif isinstance(s, bytearray):
                s = bytes(s)
            self.builder.append(s)

        def getvalue(self):
            return self.builder.build()

else:
    from io import BytesIO

    _USING_STRINGBUILDER = False

    def newlist_hint(size):
        return []


from .exceptions import BufferFull, ExtraData, FormatError, OutOfData, StackError
from .ext import ExtType, Timestamp

EX_SKIP = 0
EX_CONSTRUCT = 1
EX_READ_ARRAY_HEADER = 2
EX_READ_MAP_HEADER = 3

TYPE_IMMEDIATE = 0
TYPE_ARRAY = 1
TYPE_MAP = 2
TYPE_RAW = 3
TYPE_BIN = 4
TYPE_EXT = 5

DEFAULT_RECURSE_LIMIT = 511


def _check_type_strict(obj, t, type=type, tuple=tuple):
    if type(t) is tuple:
        return type(obj) in t
    else:
        return type(obj) is t


def _get_data_from_buffer(obj):
    view = memoryview(obj)
    if view.itemsize != 1:
        raise ValueError("cannot unpack from multi-byte object")
    return view


def unpackb(packed, **kwargs):
    """
    Unpack an object from `packed`.

    Raises ``ExtraData`` when *packed* contains extra bytes.
    Raises ``ValueError`` when *packed* is incomplete.
    Raises ``FormatError`` when *packed* is not valid msgpack.
    Raises ``StackError`` when *packed* contains too deeply nested data.
    Other exceptions can be raised during unpacking.

    See :class:`Unpacker` for options.
    """
    unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs)
    unpacker.feed(packed)
    try:
        ret = unpacker._unpack()
    except OutOfData:
        raise ValueError("Unpack failed: incomplete input")
    except RecursionError:
        raise StackError
    if unpacker._got_extradata():
        raise ExtraData(ret, unpacker._get_extradata())
    return ret


_NO_FORMAT_USED = ""
_MSGPACK_HEADERS = {
    0xC4: (1, _NO_FORMAT_USED, TYPE_BIN),
    0xC5: (2, ">H", TYPE_BIN),
    0xC6: (4, ">I", TYPE_BIN),
    0xC7: (2, "Bb", TYPE_EXT),
    0xC8: (3, ">Hb", TYPE_EXT),
    0xC9: (5, ">Ib", TYPE_EXT),
    0xCA: (4, ">f"),
    0xCB: (8, ">d"),
    0xCC: (1, _NO_FORMAT_USED),
    0xCD: (2, ">H"),
    0xCE: (4, ">I"),
    0xCF: (8, ">Q"),
    0xD0: (1, "b"),
    0xD1: (2, ">h"),
    0xD2: (4, ">i"),
    0xD3: (8, ">q"),
    0xD4: (1, "b1s", TYPE_EXT),
    0xD5: (2, "b2s", TYPE_EXT),
    0xD6: (4, "b4s", TYPE_EXT),
    0xD7: (8, "b8s", TYPE_EXT),
    0xD8: (16, "b16s", TYPE_EXT),
    0xD9: (1, _NO_FORMAT_USED, TYPE_RAW),
    0xDA: (2, ">H", TYPE_RAW),
    0xDB: (4, ">I", TYPE_RAW),
    0xDC: (2, ">H", TYPE_ARRAY),
    0xDD: (4, ">I", TYPE_ARRAY),
    0xDE: (2, ">H", TYPE_MAP),
    0xDF: (4, ">I", TYPE_MAP),
}
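Reading the table: each entry maps a msgpack type byte to the number of header bytes to reserve, the struct format for the length or value, and, where applicable, the container type. A quick sketch of how one header decodes:

    import struct

    # 0xC5 is bin 16: the next two bytes are a big-endian length.
    size, fmt, typ = _MSGPACK_HEADERS[0xC5]
    assert (size, fmt, typ) == (2, ">H", TYPE_BIN)
    (length,) = struct.unpack_from(fmt, b"\x01\x00", 0)
    assert length == 256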
class Unpacker:
    """Streaming unpacker.

    Arguments:

    :param file_like:
        File-like object having `.read(n)` method.
        If specified, unpacker reads serialized data from it and `.feed()` is not usable.

    :param int read_size:
        Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`)

    :param bool use_list:
        If true, unpack msgpack array to Python list.
        Otherwise, unpack to Python tuple. (default: True)

    :param bool raw:
        If true, unpack msgpack raw to Python bytes.
        Otherwise, unpack to Python str by decoding with UTF-8 encoding (default).

    :param int timestamp:
        Control how timestamp type is unpacked:

            0 - Timestamp
            1 - float (Seconds from the EPOCH)
            2 - int (Nanoseconds from the EPOCH)
            3 - datetime.datetime (UTC).

    :param bool strict_map_key:
        If true (default), only str or bytes are accepted for map (dict) keys.

    :param object_hook:
        When specified, it should be callable.
        Unpacker calls it with a dict argument after unpacking msgpack map.
        (See also simplejson)

    :param object_pairs_hook:
        When specified, it should be callable.
        Unpacker calls it with a list of key-value pairs after unpacking msgpack map.
        (See also simplejson)

    :param str unicode_errors:
        The error handler for decoding unicode. (default: 'strict')
        This option should be used only when you have msgpack data which
        contains invalid UTF-8 strings.

    :param int max_buffer_size:
        Limits size of data waiting to be unpacked. 0 means 2**31-1.
        The default value is 100*1024*1024 (100MiB).
        Raises `BufferFull` exception when it is insufficient.
        You should set this parameter when unpacking data from an untrusted source.

    :param int max_str_len:
        Deprecated, use *max_buffer_size* instead.
        Limits max length of str. (default: max_buffer_size)

    :param int max_bin_len:
        Deprecated, use *max_buffer_size* instead.
        Limits max length of bin. (default: max_buffer_size)

    :param int max_array_len:
        Limits max length of array.
        (default: max_buffer_size)

    :param int max_map_len:
        Limits max length of map.
        (default: max_buffer_size//2)

    :param int max_ext_len:
        Deprecated, use *max_buffer_size* instead.
        Limits max size of ext type. (default: max_buffer_size)

    Example of streaming deserialization from a file-like object::

        unpacker = Unpacker(file_like)
        for o in unpacker:
            process(o)

    Example of streaming deserialization from a socket::

        unpacker = Unpacker()
        while True:
            buf = sock.recv(1024**2)
            if not buf:
                break
            unpacker.feed(buf)
            for o in unpacker:
                process(o)

    Raises ``ExtraData`` when *packed* contains extra bytes.
    Raises ``OutOfData`` when *packed* is incomplete.
    Raises ``FormatError`` when *packed* is not valid msgpack.
    Raises ``StackError`` when *packed* contains too deeply nested data.
    Other exceptions can be raised during unpacking.
    """

    def __init__(
        self,
        file_like=None,
        *,
        read_size=0,
        use_list=True,
        raw=False,
        timestamp=0,
        strict_map_key=True,
        object_hook=None,
        object_pairs_hook=None,
        list_hook=None,
        unicode_errors=None,
        max_buffer_size=100 * 1024 * 1024,
        ext_hook=ExtType,
        max_str_len=-1,
        max_bin_len=-1,
        max_array_len=-1,
        max_map_len=-1,
        max_ext_len=-1,
    ):
        if unicode_errors is None:
            unicode_errors = "strict"

        if file_like is None:
            self._feeding = True
        else:
            if not callable(file_like.read):
                raise TypeError("`file_like.read` must be callable")
            self.file_like = file_like
            self._feeding = False

        #: array of bytes fed.
        self._buffer = bytearray()
        #: Which position we are currently reading.
        self._buff_i = 0

        # When Unpacker is used as an iterable, between the calls to next(),
        # the buffer is not "consumed" completely, for efficiency's sake.
        # Instead, it is done sloppily. To make sure we raise BufferFull at
        # the correct moments, we have to keep track of how sloppy we were.
        # Furthermore, when the buffer is incomplete (that is: in the case
        # we raise an OutOfData) we need to rollback the buffer to the correct
        # state, which _buf_checkpoint records.
        self._buf_checkpoint = 0

        if not max_buffer_size:
            max_buffer_size = 2**31 - 1
        if max_str_len == -1:
            max_str_len = max_buffer_size
        if max_bin_len == -1:
            max_bin_len = max_buffer_size
        if max_array_len == -1:
            max_array_len = max_buffer_size
        if max_map_len == -1:
            max_map_len = max_buffer_size // 2
        if max_ext_len == -1:
            max_ext_len = max_buffer_size

        self._max_buffer_size = max_buffer_size
        if read_size > self._max_buffer_size:
            raise ValueError("read_size must be smaller than max_buffer_size")
        self._read_size = read_size or min(self._max_buffer_size, 16 * 1024)
        self._raw = bool(raw)
        self._strict_map_key = bool(strict_map_key)
        self._unicode_errors = unicode_errors
        self._use_list = use_list
        if not (0 <= timestamp <= 3):
            raise ValueError("timestamp must be 0..3")
        self._timestamp = timestamp
        self._list_hook = list_hook
        self._object_hook = object_hook
        self._object_pairs_hook = object_pairs_hook
        self._ext_hook = ext_hook
        self._max_str_len = max_str_len
        self._max_bin_len = max_bin_len
        self._max_array_len = max_array_len
        self._max_map_len = max_map_len
        self._max_ext_len = max_ext_len
        self._stream_offset = 0

        if list_hook is not None and not callable(list_hook):
            raise TypeError("`list_hook` is not callable")
        if object_hook is not None and not callable(object_hook):
            raise TypeError("`object_hook` is not callable")
        if object_pairs_hook is not None and not callable(object_pairs_hook):
            raise TypeError("`object_pairs_hook` is not callable")
        if object_hook is not None and object_pairs_hook is not None:
            raise TypeError("object_pairs_hook and object_hook are mutually exclusive")
        if not callable(ext_hook):
            raise TypeError("`ext_hook` is not callable")

    def feed(self, next_bytes):
        assert self._feeding
        view = _get_data_from_buffer(next_bytes)
        if len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size:
            raise BufferFull

        # Strip buffer before checkpoint before reading file.
        if self._buf_checkpoint > 0:
            del self._buffer[: self._buf_checkpoint]
            self._buff_i -= self._buf_checkpoint
            self._buf_checkpoint = 0

        # Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython
        self._buffer.extend(view)
        view.release()

    def _consume(self):
        """Gets rid of the used parts of the buffer."""
        self._stream_offset += self._buff_i - self._buf_checkpoint
        self._buf_checkpoint = self._buff_i

    def _got_extradata(self):
        return self._buff_i < len(self._buffer)

    def _get_extradata(self):
        return self._buffer[self._buff_i :]

    def read_bytes(self, n):
        ret = self._read(n, raise_outofdata=False)
        self._consume()
        return ret

    def _read(self, n, raise_outofdata=True):
        # (int) -> bytearray
        self._reserve(n, raise_outofdata=raise_outofdata)
        i = self._buff_i
        ret = self._buffer[i : i + n]
        self._buff_i = i + len(ret)
        return ret

    def _reserve(self, n, raise_outofdata=True):
        remain_bytes = len(self._buffer) - self._buff_i - n

        # Fast path: buffer has n bytes already
        if remain_bytes >= 0:
            return

        if self._feeding:
            self._buff_i = self._buf_checkpoint
            raise OutOfData

        # Strip buffer before checkpoint before reading file.
        if self._buf_checkpoint > 0:
            del self._buffer[: self._buf_checkpoint]
            self._buff_i -= self._buf_checkpoint
            self._buf_checkpoint = 0

        # Read from file
        remain_bytes = -remain_bytes
        if remain_bytes + len(self._buffer) > self._max_buffer_size:
            raise BufferFull
        while remain_bytes > 0:
            to_read_bytes = max(self._read_size, remain_bytes)
            read_data = self.file_like.read(to_read_bytes)
            if not read_data:
                break
            assert isinstance(read_data, bytes)
            self._buffer += read_data
            remain_bytes -= len(read_data)

        if len(self._buffer) < n + self._buff_i and raise_outofdata:
            self._buff_i = 0  # rollback
            raise OutOfData

    def _read_header(self):
        typ = TYPE_IMMEDIATE
        n = 0
        obj = None
        self._reserve(1)
        b = self._buffer[self._buff_i]
        self._buff_i += 1
        if b & 0b10000000 == 0:
            obj = b
        elif b & 0b11100000 == 0b11100000:
            obj = -1 - (b ^ 0xFF)
        elif b & 0b11100000 == 0b10100000:
            n = b & 0b00011111
            typ = TYPE_RAW
            if n > self._max_str_len:
                raise ValueError(f"{n} exceeds max_str_len({self._max_str_len})")
            obj = self._read(n)
        elif b & 0b11110000 == 0b10010000:
            n = b & 0b00001111
            typ = TYPE_ARRAY
            if n > self._max_array_len:
                raise ValueError(f"{n} exceeds max_array_len({self._max_array_len})")
        elif b & 0b11110000 == 0b10000000:
            n = b & 0b00001111
            typ = TYPE_MAP
            if n > self._max_map_len:
                raise ValueError(f"{n} exceeds max_map_len({self._max_map_len})")
        elif b == 0xC0:
            obj = None
        elif b == 0xC2:
            obj = False
        elif b == 0xC3:
            obj = True
        elif 0xC4 <= b <= 0xC6:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            if len(fmt) > 0:
                n = struct.unpack_from(fmt, self._buffer, self._buff_i)[0]
            else:
                n = self._buffer[self._buff_i]
            self._buff_i += size
            if n > self._max_bin_len:
                raise ValueError(f"{n} exceeds max_bin_len({self._max_bin_len})")
            obj = self._read(n)
        elif 0xC7 <= b <= 0xC9:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            L, n = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size
            if L > self._max_ext_len:
                raise ValueError(f"{L} exceeds max_ext_len({self._max_ext_len})")
            obj = self._read(L)
        elif 0xCA <= b <= 0xD3:
            size, fmt = _MSGPACK_HEADERS[b]
            self._reserve(size)
            if len(fmt) > 0:
                obj = struct.unpack_from(fmt, self._buffer, self._buff_i)[0]
            else:
                obj = self._buffer[self._buff_i]
            self._buff_i += size
        elif 0xD4 <= b <= 0xD8:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            if self._max_ext_len < size:
                raise ValueError(f"{size} exceeds max_ext_len({self._max_ext_len})")
            self._reserve(size + 1)
            n, obj = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size + 1
        elif 0xD9 <= b <= 0xDB:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            if len(fmt) > 0:
                (n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
            else:
                n = self._buffer[self._buff_i]
            self._buff_i += size
            if n > self._max_str_len:
                raise ValueError(f"{n} exceeds max_str_len({self._max_str_len})")
            obj = self._read(n)
        elif 0xDC <= b <= 0xDD:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            (n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size
            if n > self._max_array_len:
                raise ValueError(f"{n} exceeds max_array_len({self._max_array_len})")
        elif 0xDE <= b <= 0xDF:
            size, fmt, typ = _MSGPACK_HEADERS[b]
            self._reserve(size)
            (n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
            self._buff_i += size
            if n > self._max_map_len:
                raise ValueError(f"{n} exceeds max_map_len({self._max_map_len})")
        else:
            raise FormatError("Unknown header: 0x%x" % b)
        return typ, n, obj

    def _unpack(self, execute=EX_CONSTRUCT):
        typ, n, obj = self._read_header()

        if execute == EX_READ_ARRAY_HEADER:
            if typ != TYPE_ARRAY:
                raise ValueError("Expected array")
            return n
        if execute == EX_READ_MAP_HEADER:
            if typ != TYPE_MAP:
                raise ValueError("Expected map")
            return n
        # TODO should we eliminate the recursion?
        if typ == TYPE_ARRAY:
            if execute == EX_SKIP:
                for i in range(n):
                    # TODO check whether we need to call `list_hook`
                    self._unpack(EX_SKIP)
                return
            ret = newlist_hint(n)
            for i in range(n):
                ret.append(self._unpack(EX_CONSTRUCT))
            if self._list_hook is not None:
                ret = self._list_hook(ret)
            # TODO is the interaction between `list_hook` and `use_list` ok?
            return ret if self._use_list else tuple(ret)
        if typ == TYPE_MAP:
            if execute == EX_SKIP:
                for i in range(n):
                    # TODO check whether we need to call hooks
                    self._unpack(EX_SKIP)
                    self._unpack(EX_SKIP)
                return
            if self._object_pairs_hook is not None:
                ret = self._object_pairs_hook(
                    (self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT)) for _ in range(n)
                )
            else:
                ret = {}
                for _ in range(n):
                    key = self._unpack(EX_CONSTRUCT)
                    if self._strict_map_key and type(key) not in (str, bytes):
                        raise ValueError("%s is not allowed for map key" % str(type(key)))
                    if isinstance(key, str):
                        key = sys.intern(key)
                    ret[key] = self._unpack(EX_CONSTRUCT)
                if self._object_hook is not None:
                    ret = self._object_hook(ret)
            return ret
        if execute == EX_SKIP:
            return
        if typ == TYPE_RAW:
            if self._raw:
                obj = bytes(obj)
            else:
                obj = obj.decode("utf_8", self._unicode_errors)
            return obj
        if typ == TYPE_BIN:
            return bytes(obj)
        if typ == TYPE_EXT:
            if n == -1:  # timestamp
                ts = Timestamp.from_bytes(bytes(obj))
                if self._timestamp == 1:
                    return ts.to_unix()
                elif self._timestamp == 2:
                    return ts.to_unix_nano()
                elif self._timestamp == 3:
                    return ts.to_datetime()
                else:
                    return ts
            else:
                return self._ext_hook(n, bytes(obj))
        assert typ == TYPE_IMMEDIATE
        return obj

    def __iter__(self):
        return self

    def __next__(self):
        try:
            ret = self._unpack(EX_CONSTRUCT)
            self._consume()
            return ret
        except OutOfData:
            self._consume()
            raise StopIteration
        except RecursionError:
            raise StackError

    next = __next__

    def skip(self):
        self._unpack(EX_SKIP)
        self._consume()

    def unpack(self):
        try:
            ret = self._unpack(EX_CONSTRUCT)
        except RecursionError:
            raise StackError
        self._consume()
        return ret

    def read_array_header(self):
        ret = self._unpack(EX_READ_ARRAY_HEADER)
        self._consume()
        return ret

    def read_map_header(self):
        ret = self._unpack(EX_READ_MAP_HEADER)
        self._consume()
        return ret

    def tell(self):
        return self._stream_offset
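A sketch of the header-level API above, which lets a caller walk a stream without materializing every container (`packb` here is the package-level helper from `msgpack/__init__.py`):

    unpacker = Unpacker()
    unpacker.feed(packb([1, [2, 3]]))
    n = unpacker.read_array_header()  # 2
    first = unpacker.unpack()         # 1
    unpacker.skip()                   # skips the nested [2, 3] without building it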
class Packer:
    """
    MessagePack Packer

    Usage::

        packer = Packer()
        astream.write(packer.pack(a))
        astream.write(packer.pack(b))

    Packer's constructor has some keyword arguments:

    :param default:
        When specified, it should be callable.
        Convert user type to builtin type that Packer supports.
        See also simplejson's document.

    :param bool use_single_float:
        Use single precision float type for float. (default: False)

    :param bool autoreset:
        Reset buffer after each pack and return its content as `bytes`. (default: True).
        If set to false, use `bytes()` to get content and `.reset()` to clear buffer.

    :param bool use_bin_type:
        Use bin type introduced in msgpack spec 2.0 for bytes.
        It also enables str8 type for unicode. (default: True)

    :param bool strict_types:
        If set to true, types will be checked to be exact. Derived classes
        from serializable types will not be serialized and will be
        treated as unsupported type and forwarded to default.
        Additionally tuples will not be serialized as lists.
        This is useful when trying to implement accurate serialization
        for python types.

    :param bool datetime:
        If set to true, datetime with tzinfo is packed into Timestamp type.
        Note that the tzinfo is stripped in the timestamp.
        You can get UTC datetime with `timestamp=3` option of the Unpacker.

    :param str unicode_errors:
        The error handler for encoding unicode. (default: 'strict')
        DO NOT USE THIS!! This option is kept for very specific usage.

    :param int buf_size:
        Internal buffer size. This option is used only for C implementation.
    """

    def __init__(
        self,
        *,
        default=None,
        use_single_float=False,
        autoreset=True,
        use_bin_type=True,
        strict_types=False,
        datetime=False,
        unicode_errors=None,
        buf_size=None,
    ):
        self._strict_types = strict_types
        self._use_float = use_single_float
        self._autoreset = autoreset
        self._use_bin_type = use_bin_type
        self._buffer = BytesIO()
        self._datetime = bool(datetime)
        self._unicode_errors = unicode_errors or "strict"
        if default is not None and not callable(default):
            raise TypeError("default must be callable")
        self._default = default

    def _pack(
        self,
        obj,
        nest_limit=DEFAULT_RECURSE_LIMIT,
        check=isinstance,
        check_type_strict=_check_type_strict,
    ):
        default_used = False
        if self._strict_types:
            check = check_type_strict
            list_types = list
        else:
            list_types = (list, tuple)
        while True:
            if nest_limit < 0:
                raise ValueError("recursion limit exceeded")
            if obj is None:
                return self._buffer.write(b"\xc0")
            if check(obj, bool):
                if obj:
                    return self._buffer.write(b"\xc3")
                return self._buffer.write(b"\xc2")
            if check(obj, int):
                if 0 <= obj < 0x80:
                    return self._buffer.write(struct.pack("B", obj))
                if -0x20 <= obj < 0:
                    return self._buffer.write(struct.pack("b", obj))
                if 0x80 <= obj <= 0xFF:
                    return self._buffer.write(struct.pack("BB", 0xCC, obj))
                if -0x80 <= obj < 0:
                    return self._buffer.write(struct.pack(">Bb", 0xD0, obj))
                if 0xFF < obj <= 0xFFFF:
                    return self._buffer.write(struct.pack(">BH", 0xCD, obj))
                if -0x8000 <= obj < -0x80:
                    return self._buffer.write(struct.pack(">Bh", 0xD1, obj))
                if 0xFFFF < obj <= 0xFFFFFFFF:
                    return self._buffer.write(struct.pack(">BI", 0xCE, obj))
                if -0x80000000 <= obj < -0x8000:
                    return self._buffer.write(struct.pack(">Bi", 0xD2, obj))
                if 0xFFFFFFFF < obj <= 0xFFFFFFFFFFFFFFFF:
                    return self._buffer.write(struct.pack(">BQ", 0xCF, obj))
                if -0x8000000000000000 <= obj < -0x80000000:
                    return self._buffer.write(struct.pack(">Bq", 0xD3, obj))
                if not default_used and self._default is not None:
                    obj = self._default(obj)
                    default_used = True
                    continue
                raise OverflowError("Integer value out of range")
            if check(obj, (bytes, bytearray)):
                n = len(obj)
                if n >= 2**32:
                    raise ValueError("%s is too large" % type(obj).__name__)
                self._pack_bin_header(n)
                return self._buffer.write(obj)
            if check(obj, str):
                obj = obj.encode("utf-8", self._unicode_errors)
                n = len(obj)
                if n >= 2**32:
                    raise ValueError("String is too large")
                self._pack_raw_header(n)
                return self._buffer.write(obj)
            if check(obj, memoryview):
                n = obj.nbytes
                if n >= 2**32:
                    raise ValueError("Memoryview is too large")
                self._pack_bin_header(n)
                return self._buffer.write(obj)
            if check(obj, float):
                if self._use_float:
                    return self._buffer.write(struct.pack(">Bf", 0xCA, obj))
                return self._buffer.write(struct.pack(">Bd", 0xCB, obj))
            if check(obj, (ExtType, Timestamp)):
                if check(obj, Timestamp):
                    code = -1
                    data = obj.to_bytes()
                else:
                    code = obj.code
                    data = obj.data
                assert isinstance(code, int)
                assert isinstance(data, bytes)
                L = len(data)
                if L == 1:
                    self._buffer.write(b"\xd4")
                elif L == 2:
                    self._buffer.write(b"\xd5")
                elif L == 4:
                    self._buffer.write(b"\xd6")
                elif L == 8:
                    self._buffer.write(b"\xd7")
                elif L == 16:
                    self._buffer.write(b"\xd8")
                elif L <= 0xFF:
                    self._buffer.write(struct.pack(">BB", 0xC7, L))
                elif L <= 0xFFFF:
                    self._buffer.write(struct.pack(">BH", 0xC8, L))
                else:
                    self._buffer.write(struct.pack(">BI", 0xC9, L))
                self._buffer.write(struct.pack("b", code))
                self._buffer.write(data)
                return
            if check(obj, list_types):
                n = len(obj)
                self._pack_array_header(n)
                for i in range(n):
                    self._pack(obj[i], nest_limit - 1)
                return
            if check(obj, dict):
                return self._pack_map_pairs(len(obj), obj.items(), nest_limit - 1)

            if self._datetime and check(obj, _DateTime) and obj.tzinfo is not None:
                obj = Timestamp.from_datetime(obj)
                default_used = 1
                continue

            if not default_used and self._default is not None:
                obj = self._default(obj)
                default_used = 1
                continue

            if self._datetime and check(obj, _DateTime):
                raise ValueError(f"Cannot serialize {obj!r} where tzinfo=None")

            raise TypeError(f"Cannot serialize {obj!r}")

    def pack(self, obj):
        try:
            self._pack(obj)
        except:
            self._buffer = BytesIO()  # force reset
            raise
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_map_pairs(self, pairs):
        self._pack_map_pairs(len(pairs), pairs)
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_array_header(self, n):
        if n >= 2**32:
            raise ValueError
        self._pack_array_header(n)
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_map_header(self, n):
        if n >= 2**32:
            raise ValueError
        self._pack_map_header(n)
        if self._autoreset:
            ret = self._buffer.getvalue()
            self._buffer = BytesIO()
            return ret

    def pack_ext_type(self, typecode, data):
        if not isinstance(typecode, int):
            raise TypeError("typecode must have int type.")
        if not 0 <= typecode <= 127:
            raise ValueError("typecode should be 0-127")
        if not isinstance(data, bytes):
            raise TypeError("data must have bytes type")
        L = len(data)
        if L > 0xFFFFFFFF:
            raise ValueError("Too large data")
        if L == 1:
            self._buffer.write(b"\xd4")
        elif L == 2:
            self._buffer.write(b"\xd5")
        elif L == 4:
            self._buffer.write(b"\xd6")
        elif L == 8:
            self._buffer.write(b"\xd7")
        elif L == 16:
            self._buffer.write(b"\xd8")
        elif L <= 0xFF:
            self._buffer.write(b"\xc7" + struct.pack("B", L))
        elif L <= 0xFFFF:
            self._buffer.write(b"\xc8" + struct.pack(">H", L))
        else:
            self._buffer.write(b"\xc9" + struct.pack(">I", L))
        self._buffer.write(struct.pack("B", typecode))
        self._buffer.write(data)

    def _pack_array_header(self, n):
        if n <= 0x0F:
            return self._buffer.write(struct.pack("B", 0x90 + n))
        if n <= 0xFFFF:
            return self._buffer.write(struct.pack(">BH", 0xDC, n))
        if n <= 0xFFFFFFFF:
            return self._buffer.write(struct.pack(">BI", 0xDD, n))
        raise ValueError("Array is too large")

    def _pack_map_header(self, n):
        if n <= 0x0F:
            return self._buffer.write(struct.pack("B", 0x80 + n))
        if n <= 0xFFFF:
            return self._buffer.write(struct.pack(">BH", 0xDE, n))
        if n <= 0xFFFFFFFF:
            return self._buffer.write(struct.pack(">BI", 0xDF, n))
        raise ValueError("Dict is too large")

    def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT):
        self._pack_map_header(n)
        for k, v in pairs:
            self._pack(k, nest_limit - 1)
            self._pack(v, nest_limit - 1)

    def _pack_raw_header(self, n):
        if n <= 0x1F:
            self._buffer.write(struct.pack("B", 0xA0 + n))
        elif self._use_bin_type and n <= 0xFF:
            self._buffer.write(struct.pack(">BB", 0xD9, n))
        elif n <= 0xFFFF:
            self._buffer.write(struct.pack(">BH", 0xDA, n))
        elif n <= 0xFFFFFFFF:
            self._buffer.write(struct.pack(">BI", 0xDB, n))
        else:
            raise ValueError("Raw is too large")

    def _pack_bin_header(self, n):
        if not self._use_bin_type:
            return self._pack_raw_header(n)
        elif n <= 0xFF:
            return self._buffer.write(struct.pack(">BB", 0xC4, n))
        elif n <= 0xFFFF:
            return self._buffer.write(struct.pack(">BH", 0xC5, n))
        elif n <= 0xFFFFFFFF:
            return self._buffer.write(struct.pack(">BI", 0xC6, n))
        else:
            raise ValueError("Bin is too large")

    def bytes(self):
        """Return internal buffer contents as bytes object"""
        return self._buffer.getvalue()

    def reset(self):
        """Reset internal buffer.

        This method is useful only when autoreset=False.
        """
        self._buffer = BytesIO()

    def getbuffer(self):
        """Return view of internal buffer."""
        if _USING_STRINGBUILDER:
            return memoryview(self.bytes())
        else:
            return self._buffer.getbuffer()
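A sketch of the autoreset=False workflow described in the docstring, where the caller drains the internal buffer explicitly:

    packer = Packer(autoreset=False)
    packer.pack([1, 2])
    packer.pack("three")
    blob = packer.bytes()  # both messages, concatenated
    packer.reset()         # clear the internal buffer for reuse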
@@ -0,0 +1,3 @@
This software is made available under the terms of *either* of the licenses
found in LICENSE.APACHE or LICENSE.BSD. Contributions to this software is made
under the terms of *both* these licenses.
@@ -0,0 +1,177 @@

                              Apache License
                        Version 2.0, January 2004
                     http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

   "License" shall mean the terms and conditions for use, reproduction,
   and distribution as defined by Sections 1 through 9 of this document.

   "Licensor" shall mean the copyright owner or entity authorized by
   the copyright owner that is granting the License.

   "Legal Entity" shall mean the union of the acting entity and all
   other entities that control, are controlled by, or are under common
   control with that entity. For the purposes of this definition,
   "control" means (i) the power, direct or indirect, to cause the
   direction or management of such entity, whether by contract or
   otherwise, or (ii) ownership of fifty percent (50%) or more of the
   outstanding shares, or (iii) beneficial ownership of such entity.

   "You" (or "Your") shall mean an individual or Legal Entity
   exercising permissions granted by this License.

   "Source" form shall mean the preferred form for making modifications,
   including but not limited to software source code, documentation
   source, and configuration files.

   "Object" form shall mean any form resulting from mechanical
   transformation or translation of a Source form, including but
   not limited to compiled object code, generated documentation,
   and conversions to other media types.

   "Work" shall mean the work of authorship, whether in Source or
   Object form, made available under the License, as indicated by a
   copyright notice that is included in or attached to the work
   (an example is provided in the Appendix below).

   "Derivative Works" shall mean any work, whether in Source or Object
   form, that is based on (or derived from) the Work and for which the
   editorial revisions, annotations, elaborations, or other modifications
   represent, as a whole, an original work of authorship. For the purposes
   of this License, Derivative Works shall not include works that remain
   separable from, or merely link (or bind by name) to the interfaces of,
   the Work and Derivative Works thereof.

   "Contribution" shall mean any work of authorship, including
   the original version of the Work and any modifications or additions
   to that Work or Derivative Works thereof, that is intentionally
   submitted to Licensor for inclusion in the Work by the copyright owner
   or by an individual or Legal Entity authorized to submit on behalf of
   the copyright owner. For the purposes of this definition, "submitted"
   means any form of electronic, verbal, or written communication sent
   to the Licensor or its representatives, including but not limited to
   communication on electronic mailing lists, source code control systems,
   and issue tracking systems that are managed by, or on behalf of, the
   Licensor for the purpose of discussing and improving the Work, but
   excluding communication that is conspicuously marked or otherwise
   designated in writing by the copyright owner as "Not a Contribution."

   "Contributor" shall mean Licensor and any individual or Legal Entity
   on behalf of whom a Contribution has been received by Licensor and
   subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
   this License, each Contributor hereby grants to You a perpetual,
   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
   copyright license to reproduce, prepare Derivative Works of,
   publicly display, publicly perform, sublicense, and distribute the
   Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
   this License, each Contributor hereby grants to You a perpetual,
   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
   (except as stated in this section) patent license to make, have made,
   use, offer to sell, sell, import, and otherwise transfer the Work,
   where such license applies only to those patent claims licensable
   by such Contributor that are necessarily infringed by their
   Contribution(s) alone or by combination of their Contribution(s)
   with the Work to which such Contribution(s) was submitted. If You
   institute patent litigation against any entity (including a
   cross-claim or counterclaim in a lawsuit) alleging that the Work
   or a Contribution incorporated within the Work constitutes direct
   or contributory patent infringement, then any patent licenses
   granted to You under this License for that Work shall terminate
   as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
   Work or Derivative Works thereof in any medium, with or without
   modifications, and in Source or Object form, provided that You
   meet the following conditions:

   (a) You must give any other recipients of the Work or
       Derivative Works a copy of this License; and

   (b) You must cause any modified files to carry prominent notices
       stating that You changed the files; and

   (c) You must retain, in the Source form of any Derivative Works
       that You distribute, all copyright, patent, trademark, and
       attribution notices from the Source form of the Work,
       excluding those notices that do not pertain to any part of
       the Derivative Works; and

   (d) If the Work includes a "NOTICE" text file as part of its
       distribution, then any Derivative Works that You distribute must
       include a readable copy of the attribution notices contained
       within such NOTICE file, excluding those notices that do not
       pertain to any part of the Derivative Works, in at least one
       of the following places: within a NOTICE text file distributed
       as part of the Derivative Works; within the Source form or
       documentation, if provided along with the Derivative Works; or,
       within a display generated by the Derivative Works, if and
       wherever such third-party notices normally appear. The contents
       of the NOTICE file are for informational purposes only and
       do not modify the License. You may add Your own attribution
       notices within Derivative Works that You distribute, alongside
       or as an addendum to the NOTICE text from the Work, provided
       that such additional attribution notices cannot be construed
       as modifying the License.

   You may add Your own copyright statement to Your modifications and
   may provide additional or different license terms and conditions
   for use, reproduction, or distribution of Your modifications, or
   for any such Derivative Works as a whole, provided Your use,
   reproduction, and distribution of the Work otherwise complies with
   the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
   any Contribution intentionally submitted for inclusion in the Work
   by You to the Licensor shall be under the terms and conditions of
   this License, without any additional terms or conditions.
   Notwithstanding the above, nothing herein shall supersede or modify
   the terms of any separate license agreement you may have executed
   with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
   names, trademarks, service marks, or product names of the Licensor,
   except as required for reasonable and customary use in describing the
   origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
   agreed to in writing, Licensor provides the Work (and each
   Contributor provides its Contributions) on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
   implied, including, without limitation, any warranties or conditions
   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
   PARTICULAR PURPOSE. You are solely responsible for determining the
   appropriateness of using or redistributing the Work and assume any
   risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
   whether in tort (including negligence), contract, or otherwise,
   unless required by applicable law (such as deliberate and grossly
   negligent acts) or agreed to in writing, shall any Contributor be
   liable to You for damages, including any direct, indirect, special,
   incidental, or consequential damages of any character arising as a
   result of this License or out of the use or inability to use the
   Work (including but not limited to damages for loss of goodwill,
   work stoppage, computer failure or malfunction, or any and all
   other commercial damages or losses), even if such Contributor
   has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
   the Work or Derivative Works thereof, You may choose to offer,
   and charge a fee for, acceptance of support, warranty, indemnity,
   or other liability obligations and/or rights consistent with this
   License. However, in accepting such obligations, You may act only
   on Your own behalf and on Your sole responsibility, not on behalf
   of any other Contributor, and only if You agree to indemnify,
   defend, and hold each Contributor harmless for any liability
   incurred by, or claims asserted against, such Contributor by reason
   of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS
@@ -0,0 +1,23 @@
Copyright (c) Donald Stufft and individual contributors.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright notice,
       this list of conditions and the following disclaimer.

    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,15 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

__title__ = "packaging"
__summary__ = "Core utilities for Python packages"
__uri__ = "https://github.com/pypa/packaging"

__version__ = "26.0"

__author__ = "Donald Stufft and individual contributors"
__email__ = "donald@stufft.io"

__license__ = "BSD-2-Clause or Apache-2.0"
__copyright__ = f"2014 {__author__}"
@@ -0,0 +1,108 @@
"""
ELF file parser.

This provides a class ``ELFFile`` that parses an ELF executable in a similar
interface to ``ZipFile``. Only the read interface is implemented.

ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html
"""

from __future__ import annotations

import enum
import os
import struct
from typing import IO


class ELFInvalid(ValueError):
    pass


class EIClass(enum.IntEnum):
    C32 = 1
    C64 = 2


class EIData(enum.IntEnum):
    Lsb = 1
    Msb = 2


class EMachine(enum.IntEnum):
    I386 = 3
    S390 = 22
    Arm = 40
    X8664 = 62
    AArc64 = 183


class ELFFile:
    """
    Representation of an ELF executable.
    """

    def __init__(self, f: IO[bytes]) -> None:
        self._f = f

        try:
            ident = self._read("16B")
        except struct.error as e:
            raise ELFInvalid("unable to parse identification") from e
        magic = bytes(ident[:4])
        if magic != b"\x7fELF":
            raise ELFInvalid(f"invalid magic: {magic!r}")

        self.capacity = ident[4]  # Format for program header (bitness).
        self.encoding = ident[5]  # Data structure encoding (endianness).

        try:
            # e_fmt: Format for program header.
            # p_fmt: Format for section header.
            # p_idx: Indexes to find p_type, p_offset, and p_filesz.
            e_fmt, self._p_fmt, self._p_idx = {
                (1, 1): ("<HHIIIIIHHH", "<IIIIIIII", (0, 1, 4)),  # 32-bit LSB.
                (1, 2): (">HHIIIIIHHH", ">IIIIIIII", (0, 1, 4)),  # 32-bit MSB.
                (2, 1): ("<HHIQQQIHHH", "<IIQQQQQQ", (0, 2, 5)),  # 64-bit LSB.
                (2, 2): (">HHIQQQIHHH", ">IIQQQQQQ", (0, 2, 5)),  # 64-bit MSB.
            }[(self.capacity, self.encoding)]
        except KeyError as e:
            raise ELFInvalid(
                f"unrecognized capacity ({self.capacity}) or encoding ({self.encoding})"
            ) from e

        try:
            (
                _,
                self.machine,  # Architecture type.
                _,
                _,
                self._e_phoff,  # Offset of program header.
                _,
                self.flags,  # Processor-specific flags.
                _,
                self._e_phentsize,  # Size of section.
                self._e_phnum,  # Number of sections.
            ) = self._read(e_fmt)
        except struct.error as e:
            raise ELFInvalid("unable to parse machine and section information") from e

    def _read(self, fmt: str) -> tuple[int, ...]:
        return struct.unpack(fmt, self._f.read(struct.calcsize(fmt)))

    @property
    def interpreter(self) -> str | None:
        """
        The path recorded in the ``PT_INTERP`` section header.
        """
        for index in range(self._e_phnum):
            self._f.seek(self._e_phoff + self._e_phentsize * index)
            try:
                data = self._read(self._p_fmt)
            except struct.error:
                continue
            if data[self._p_idx[0]] != 3:  # Not PT_INTERP.
                continue
            self._f.seek(data[self._p_idx[1]])
            return os.fsdecode(self._f.read(data[self._p_idx[2]])).strip("\0")
        return None
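
# Editor's example (not part of the vendored file): a minimal ELFFile usage
# sketch. "/bin/ls" is an assumed path to some dynamically linked ELF binary;
# non-ELF input makes the constructor raise ELFInvalid.
from pip._vendor.packaging._elffile import ELFFile, ELFInvalid

try:
    with open("/bin/ls", "rb") as f:
        elf = ELFFile(f)
        # capacity/encoding/machine are decoded from the e_ident header above.
        print(elf.capacity, elf.encoding, elf.machine)
        print(elf.interpreter)  # e.g. "/lib64/ld-linux-x86-64.so.2"
except (OSError, ELFInvalid) as exc:
    print("not a readable ELF executable:", exc)
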
@@ -0,0 +1,262 @@
from __future__ import annotations

import collections
import contextlib
import functools
import os
import re
import sys
import warnings
from typing import Generator, Iterator, NamedTuple, Sequence

from ._elffile import EIClass, EIData, ELFFile, EMachine

EF_ARM_ABIMASK = 0xFF000000
EF_ARM_ABI_VER5 = 0x05000000
EF_ARM_ABI_FLOAT_HARD = 0x00000400

_ALLOWED_ARCHS = {
    "x86_64",
    "aarch64",
    "ppc64",
    "ppc64le",
    "s390x",
    "loongarch64",
    "riscv64",
}


# `os.PathLike` not a generic type until Python 3.9, so sticking with `str`
# as the type for `path` until then.
@contextlib.contextmanager
def _parse_elf(path: str) -> Generator[ELFFile | None, None, None]:
    try:
        with open(path, "rb") as f:
            yield ELFFile(f)
    except (OSError, TypeError, ValueError):
        yield None


def _is_linux_armhf(executable: str) -> bool:
    # hard-float ABI can be detected from the ELF header of the running
    # process
    # https://static.docs.arm.com/ihi0044/g/aaelf32.pdf
    with _parse_elf(executable) as f:
        return (
            f is not None
            and f.capacity == EIClass.C32
            and f.encoding == EIData.Lsb
            and f.machine == EMachine.Arm
            and f.flags & EF_ARM_ABIMASK == EF_ARM_ABI_VER5
            and f.flags & EF_ARM_ABI_FLOAT_HARD == EF_ARM_ABI_FLOAT_HARD
        )


def _is_linux_i686(executable: str) -> bool:
    with _parse_elf(executable) as f:
        return (
            f is not None
            and f.capacity == EIClass.C32
            and f.encoding == EIData.Lsb
            and f.machine == EMachine.I386
        )


def _have_compatible_abi(executable: str, archs: Sequence[str]) -> bool:
    if "armv7l" in archs:
        return _is_linux_armhf(executable)
    if "i686" in archs:
        return _is_linux_i686(executable)
    return any(arch in _ALLOWED_ARCHS for arch in archs)


# If glibc ever changes its major version, we need to know what the last
# minor version was, so we can build the complete list of all versions.
# For now, guess what the highest minor version might be, assume it will
# be 50 for testing. Once this actually happens, update the dictionary
# with the actual value.
_LAST_GLIBC_MINOR: dict[int, int] = collections.defaultdict(lambda: 50)


class _GLibCVersion(NamedTuple):
    major: int
    minor: int


def _glibc_version_string_confstr() -> str | None:
    """
    Primary implementation of glibc_version_string using os.confstr.
    """
    # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely
    # to be broken or missing. This strategy is used in the standard library
    # platform module.
    # https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183
    try:
        # Should be a string like "glibc 2.17".
        version_string: str | None = os.confstr("CS_GNU_LIBC_VERSION")
        assert version_string is not None
        _, version = version_string.rsplit()
    except (AssertionError, AttributeError, OSError, ValueError):
        # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)...
        return None
    return version


def _glibc_version_string_ctypes() -> str | None:
    """
    Fallback implementation of glibc_version_string using ctypes.
    """
    try:
        import ctypes  # noqa: PLC0415
    except ImportError:
        return None

    # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen
    # manpage says, "If filename is NULL, then the returned handle is for the
    # main program". This way we can let the linker do the work to figure out
    # which libc our process is actually using.
    #
    # We must also handle the special case where the executable is not a
    # dynamically linked executable. This can occur when using musl libc,
    # for example. In this situation, dlopen() will error, leading to an
    # OSError. Interestingly, at least in the case of musl, there is no
    # errno set on the OSError. The single string argument used to construct
    # OSError comes from libc itself and is therefore not portable to
    # hard code here. In any case, failure to call dlopen() means we
    # can't proceed, so we bail on our attempt.
    try:
        process_namespace = ctypes.CDLL(None)
    except OSError:
        return None

    try:
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except AttributeError:
        # Symbol doesn't exist -> therefore, we are not linked to
        # glibc.
        return None

    # Call gnu_get_libc_version, which returns a string like "2.5"
    gnu_get_libc_version.restype = ctypes.c_char_p
    version_str: str = gnu_get_libc_version()
    # py2 / py3 compatibility:
    if not isinstance(version_str, str):
        version_str = version_str.decode("ascii")

    return version_str


def _glibc_version_string() -> str | None:
    """Returns glibc version string, or None if not using glibc."""
    return _glibc_version_string_confstr() or _glibc_version_string_ctypes()


def _parse_glibc_version(version_str: str) -> _GLibCVersion:
    """Parse glibc version.

    We use a regexp instead of str.split because we want to discard any
    random junk that might come after the minor version -- this might happen
    in patched/forked versions of glibc (e.g. Linaro's version of glibc
    uses version strings like "2.20-2014.11"). See gh-3588.
    """
    m = re.match(r"(?P<major>[0-9]+)\.(?P<minor>[0-9]+)", version_str)
    if not m:
        warnings.warn(
            f"Expected glibc version with 2 components major.minor, got: {version_str}",
            RuntimeWarning,
            stacklevel=2,
        )
        return _GLibCVersion(-1, -1)
    return _GLibCVersion(int(m.group("major")), int(m.group("minor")))


@functools.lru_cache
def _get_glibc_version() -> _GLibCVersion:
    version_str = _glibc_version_string()
    if version_str is None:
        return _GLibCVersion(-1, -1)
    return _parse_glibc_version(version_str)


# From PEP 513, PEP 600
def _is_compatible(arch: str, version: _GLibCVersion) -> bool:
    sys_glibc = _get_glibc_version()
    if sys_glibc < version:
        return False
    # Check for presence of _manylinux module.
    try:
        import _manylinux  # noqa: PLC0415
    except ImportError:
        return True
    if hasattr(_manylinux, "manylinux_compatible"):
        result = _manylinux.manylinux_compatible(version[0], version[1], arch)
        if result is not None:
            return bool(result)
        return True
    if version == _GLibCVersion(2, 5) and hasattr(_manylinux, "manylinux1_compatible"):
        return bool(_manylinux.manylinux1_compatible)
    if version == _GLibCVersion(2, 12) and hasattr(
        _manylinux, "manylinux2010_compatible"
    ):
        return bool(_manylinux.manylinux2010_compatible)
    if version == _GLibCVersion(2, 17) and hasattr(
        _manylinux, "manylinux2014_compatible"
    ):
        return bool(_manylinux.manylinux2014_compatible)
    return True


_LEGACY_MANYLINUX_MAP: dict[_GLibCVersion, str] = {
    # CentOS 7 w/ glibc 2.17 (PEP 599)
    _GLibCVersion(2, 17): "manylinux2014",
    # CentOS 6 w/ glibc 2.12 (PEP 571)
    _GLibCVersion(2, 12): "manylinux2010",
    # CentOS 5 w/ glibc 2.5 (PEP 513)
    _GLibCVersion(2, 5): "manylinux1",
}


def platform_tags(archs: Sequence[str]) -> Iterator[str]:
    """Generate manylinux tags compatible to the current platform.

    :param archs: Sequence of compatible architectures.
        The first one shall be the closest to the actual architecture and be the part of
        platform tag after the ``linux_`` prefix, e.g. ``x86_64``.
        The ``linux_`` prefix is assumed as a prerequisite for the current platform to
        be manylinux-compatible.

    :returns: An iterator of compatible manylinux tags.
    """
    if not _have_compatible_abi(sys.executable, archs):
        return
    # Oldest glibc to be supported regardless of architecture is (2, 17).
    too_old_glibc2 = _GLibCVersion(2, 16)
    if set(archs) & {"x86_64", "i686"}:
        # On x86/i686 also oldest glibc to be supported is (2, 5).
        too_old_glibc2 = _GLibCVersion(2, 4)
    current_glibc = _GLibCVersion(*_get_glibc_version())
    glibc_max_list = [current_glibc]
    # We can assume compatibility across glibc major versions.
    # https://sourceware.org/bugzilla/show_bug.cgi?id=24636
    #
    # Build a list of maximum glibc versions so that we can
    # output the canonical list of all glibc from current_glibc
    # down to too_old_glibc2, including all intermediary versions.
    for glibc_major in range(current_glibc.major - 1, 1, -1):
        glibc_minor = _LAST_GLIBC_MINOR[glibc_major]
        glibc_max_list.append(_GLibCVersion(glibc_major, glibc_minor))
    for arch in archs:
        for glibc_max in glibc_max_list:
            if glibc_max.major == too_old_glibc2.major:
                min_minor = too_old_glibc2.minor
            else:
                # For other glibc major versions oldest supported is (x, 0).
                min_minor = -1
            for glibc_minor in range(glibc_max.minor, min_minor, -1):
                glibc_version = _GLibCVersion(glibc_max.major, glibc_minor)
                if _is_compatible(arch, glibc_version):
                    yield "manylinux_{}_{}_{}".format(*glibc_version, arch)

                # Handle the legacy manylinux1, manylinux2010, manylinux2014 tags.
                if legacy_tag := _LEGACY_MANYLINUX_MAP.get(glibc_version):
                    yield f"{legacy_tag}_{arch}"
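
# Editor's example (not part of the vendored file): _parse_glibc_version is
# deterministic; platform_tags depends on the interpreter this runs under and
# yields nothing on non-glibc systems.
from pip._vendor.packaging._manylinux import _parse_glibc_version, platform_tags

print(_parse_glibc_version("2.20-2014.11"))  # _GLibCVersion(major=2, minor=20)
for tag in platform_tags(["x86_64"]):
    print(tag)  # e.g. manylinux_2_17_x86_64 ... manylinux2014_x86_64
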
@@ -0,0 +1,85 @@
"""PEP 656 support.

This module implements logic to detect if the currently running Python is
linked against musl, and what musl version is used.
"""

from __future__ import annotations

import functools
import re
import subprocess
import sys
from typing import Iterator, NamedTuple, Sequence

from ._elffile import ELFFile


class _MuslVersion(NamedTuple):
    major: int
    minor: int


def _parse_musl_version(output: str) -> _MuslVersion | None:
    lines = [n for n in (n.strip() for n in output.splitlines()) if n]
    if len(lines) < 2 or lines[0][:4] != "musl":
        return None
    m = re.match(r"Version (\d+)\.(\d+)", lines[1])
    if not m:
        return None
    return _MuslVersion(major=int(m.group(1)), minor=int(m.group(2)))


@functools.lru_cache
def _get_musl_version(executable: str) -> _MuslVersion | None:
    """Detect currently-running musl runtime version.

    This is done by checking the specified executable's dynamic linking
    information, and invoking the loader to parse its output for a version
    string. If the loader is musl, the output would be something like::

        musl libc (x86_64)
        Version 1.2.2
        Dynamic Program Loader
    """
    try:
        with open(executable, "rb") as f:
            ld = ELFFile(f).interpreter
    except (OSError, TypeError, ValueError):
        return None
    if ld is None or "musl" not in ld:
        return None
    proc = subprocess.run([ld], check=False, stderr=subprocess.PIPE, text=True)
    return _parse_musl_version(proc.stderr)


def platform_tags(archs: Sequence[str]) -> Iterator[str]:
    """Generate musllinux tags compatible to the current platform.

    :param archs: Sequence of compatible architectures.
        The first one shall be the closest to the actual architecture and be the part of
        platform tag after the ``linux_`` prefix, e.g. ``x86_64``.
        The ``linux_`` prefix is assumed as a prerequisite for the current platform to
        be musllinux-compatible.

    :returns: An iterator of compatible musllinux tags.
    """
    sys_musl = _get_musl_version(sys.executable)
    if sys_musl is None:  # Python not dynamically linked against musl.
        return
    for arch in archs:
        for minor in range(sys_musl.minor, -1, -1):
            yield f"musllinux_{sys_musl.major}_{minor}_{arch}"


if __name__ == "__main__":  # pragma: no cover
    import sysconfig

    plat = sysconfig.get_platform()
    assert plat.startswith("linux-"), "not linux"

    print("plat:", plat)
    print("musl:", _get_musl_version(sys.executable))
    print("tags:", end=" ")
    for t in platform_tags(re.sub(r"[.-]", "_", plat.split("-", 1)[-1])):
        print(t, end="\n      ")
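
# Editor's example (not part of the vendored file): _parse_musl_version on the
# loader output quoted in the _get_musl_version docstring.
from pip._vendor.packaging._musllinux import _parse_musl_version

sample = "musl libc (x86_64)\nVersion 1.2.2\nDynamic Program Loader"
print(_parse_musl_version(sample))  # _MuslVersion(major=1, minor=2)
print(_parse_musl_version("not musl"))  # None
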
@@ -0,0 +1,365 @@
"""Handwritten parser of dependency specifiers.

The docstring for each __parse_* function contains EBNF-inspired grammar representing
the implementation.
"""

from __future__ import annotations

import ast
from typing import List, Literal, NamedTuple, Sequence, Tuple, Union

from ._tokenizer import DEFAULT_RULES, Tokenizer


class Node:
    __slots__ = ("value",)

    def __init__(self, value: str) -> None:
        self.value = value

    def __str__(self) -> str:
        return self.value

    def __repr__(self) -> str:
        return f"<{self.__class__.__name__}({self.value!r})>"

    def serialize(self) -> str:
        raise NotImplementedError


class Variable(Node):
    __slots__ = ()

    def serialize(self) -> str:
        return str(self)


class Value(Node):
    __slots__ = ()

    def serialize(self) -> str:
        return f'"{self}"'


class Op(Node):
    __slots__ = ()

    def serialize(self) -> str:
        return str(self)


MarkerLogical = Literal["and", "or"]
MarkerVar = Union[Variable, Value]
MarkerItem = Tuple[MarkerVar, Op, MarkerVar]
MarkerAtom = Union[MarkerItem, Sequence["MarkerAtom"]]
MarkerList = List[Union["MarkerList", MarkerAtom, MarkerLogical]]


class ParsedRequirement(NamedTuple):
    name: str
    url: str
    extras: list[str]
    specifier: str
    marker: MarkerList | None


# --------------------------------------------------------------------------------------
# Recursive descent parser for dependency specifier
# --------------------------------------------------------------------------------------
def parse_requirement(source: str) -> ParsedRequirement:
    return _parse_requirement(Tokenizer(source, rules=DEFAULT_RULES))


def _parse_requirement(tokenizer: Tokenizer) -> ParsedRequirement:
    """
    requirement = WS? IDENTIFIER WS? extras WS? requirement_details
    """
    tokenizer.consume("WS")

    name_token = tokenizer.expect(
        "IDENTIFIER", expected="package name at the start of dependency specifier"
    )
    name = name_token.text
    tokenizer.consume("WS")

    extras = _parse_extras(tokenizer)
    tokenizer.consume("WS")

    url, specifier, marker = _parse_requirement_details(tokenizer)
    tokenizer.expect("END", expected="end of dependency specifier")

    return ParsedRequirement(name, url, extras, specifier, marker)


def _parse_requirement_details(
    tokenizer: Tokenizer,
) -> tuple[str, str, MarkerList | None]:
    """
    requirement_details = AT URL (WS requirement_marker?)?
                        | specifier WS? (requirement_marker)?
    """

    specifier = ""
    url = ""
    marker = None

    if tokenizer.check("AT"):
        tokenizer.read()
        tokenizer.consume("WS")

        url_start = tokenizer.position
        url = tokenizer.expect("URL", expected="URL after @").text
        if tokenizer.check("END", peek=True):
            return (url, specifier, marker)

        tokenizer.expect("WS", expected="whitespace after URL")

        # The input might end after whitespace.
        if tokenizer.check("END", peek=True):
            return (url, specifier, marker)

        marker = _parse_requirement_marker(
            tokenizer,
            span_start=url_start,
            expected="semicolon (after URL and whitespace)",
        )
    else:
        specifier_start = tokenizer.position
        specifier = _parse_specifier(tokenizer)
        tokenizer.consume("WS")

        if tokenizer.check("END", peek=True):
            return (url, specifier, marker)

        marker = _parse_requirement_marker(
            tokenizer,
            span_start=specifier_start,
            expected=(
                "comma (within version specifier), semicolon (after version specifier)"
                if specifier
                else "semicolon (after name with no version specifier)"
            ),
        )

    return (url, specifier, marker)


def _parse_requirement_marker(
    tokenizer: Tokenizer, *, span_start: int, expected: str
) -> MarkerList:
    """
    requirement_marker = SEMICOLON marker WS?
    """

    if not tokenizer.check("SEMICOLON"):
        tokenizer.raise_syntax_error(
            f"Expected {expected} or end",
            span_start=span_start,
            span_end=None,
        )
    tokenizer.read()

    marker = _parse_marker(tokenizer)
    tokenizer.consume("WS")

    return marker


def _parse_extras(tokenizer: Tokenizer) -> list[str]:
    """
    extras = (LEFT_BRACKET wsp* extras_list? wsp* RIGHT_BRACKET)?
    """
    if not tokenizer.check("LEFT_BRACKET", peek=True):
        return []

    with tokenizer.enclosing_tokens(
        "LEFT_BRACKET",
        "RIGHT_BRACKET",
        around="extras",
    ):
        tokenizer.consume("WS")
        extras = _parse_extras_list(tokenizer)
        tokenizer.consume("WS")

    return extras


def _parse_extras_list(tokenizer: Tokenizer) -> list[str]:
    """
    extras_list = identifier (wsp* ',' wsp* identifier)*
    """
    extras: list[str] = []

    if not tokenizer.check("IDENTIFIER"):
        return extras

    extras.append(tokenizer.read().text)

    while True:
        tokenizer.consume("WS")
        if tokenizer.check("IDENTIFIER", peek=True):
            tokenizer.raise_syntax_error("Expected comma between extra names")
        elif not tokenizer.check("COMMA"):
            break

        tokenizer.read()
        tokenizer.consume("WS")

        extra_token = tokenizer.expect("IDENTIFIER", expected="extra name after comma")
        extras.append(extra_token.text)

    return extras


def _parse_specifier(tokenizer: Tokenizer) -> str:
    """
    specifier = LEFT_PARENTHESIS WS? version_many WS? RIGHT_PARENTHESIS
              | WS? version_many WS?
    """
    with tokenizer.enclosing_tokens(
        "LEFT_PARENTHESIS",
        "RIGHT_PARENTHESIS",
        around="version specifier",
    ):
        tokenizer.consume("WS")
        parsed_specifiers = _parse_version_many(tokenizer)
        tokenizer.consume("WS")

    return parsed_specifiers


def _parse_version_many(tokenizer: Tokenizer) -> str:
    """
    version_many = (SPECIFIER (WS? COMMA WS? SPECIFIER)*)?
    """
    parsed_specifiers = ""
    while tokenizer.check("SPECIFIER"):
        span_start = tokenizer.position
        parsed_specifiers += tokenizer.read().text
        if tokenizer.check("VERSION_PREFIX_TRAIL", peek=True):
            tokenizer.raise_syntax_error(
                ".* suffix can only be used with `==` or `!=` operators",
                span_start=span_start,
                span_end=tokenizer.position + 1,
            )
        if tokenizer.check("VERSION_LOCAL_LABEL_TRAIL", peek=True):
            tokenizer.raise_syntax_error(
                "Local version label can only be used with `==` or `!=` operators",
                span_start=span_start,
                span_end=tokenizer.position,
            )
        tokenizer.consume("WS")
        if not tokenizer.check("COMMA"):
            break
        parsed_specifiers += tokenizer.read().text
        tokenizer.consume("WS")

    return parsed_specifiers


# --------------------------------------------------------------------------------------
# Recursive descent parser for marker expression
# --------------------------------------------------------------------------------------
def parse_marker(source: str) -> MarkerList:
    return _parse_full_marker(Tokenizer(source, rules=DEFAULT_RULES))


def _parse_full_marker(tokenizer: Tokenizer) -> MarkerList:
    retval = _parse_marker(tokenizer)
    tokenizer.expect("END", expected="end of marker expression")
    return retval


def _parse_marker(tokenizer: Tokenizer) -> MarkerList:
    """
    marker = marker_atom (BOOLOP marker_atom)+
    """
    expression = [_parse_marker_atom(tokenizer)]
    while tokenizer.check("BOOLOP"):
        token = tokenizer.read()
        expr_right = _parse_marker_atom(tokenizer)
        expression.extend((token.text, expr_right))
    return expression


def _parse_marker_atom(tokenizer: Tokenizer) -> MarkerAtom:
    """
    marker_atom = WS? LEFT_PARENTHESIS WS? marker WS? RIGHT_PARENTHESIS WS?
                | WS? marker_item WS?
    """

    tokenizer.consume("WS")
    if tokenizer.check("LEFT_PARENTHESIS", peek=True):
        with tokenizer.enclosing_tokens(
            "LEFT_PARENTHESIS",
            "RIGHT_PARENTHESIS",
            around="marker expression",
        ):
            tokenizer.consume("WS")
            marker: MarkerAtom = _parse_marker(tokenizer)
            tokenizer.consume("WS")
    else:
        marker = _parse_marker_item(tokenizer)
    tokenizer.consume("WS")
    return marker


def _parse_marker_item(tokenizer: Tokenizer) -> MarkerItem:
    """
    marker_item = WS? marker_var WS? marker_op WS? marker_var WS?
    """
    tokenizer.consume("WS")
    marker_var_left = _parse_marker_var(tokenizer)
    tokenizer.consume("WS")
    marker_op = _parse_marker_op(tokenizer)
    tokenizer.consume("WS")
    marker_var_right = _parse_marker_var(tokenizer)
    tokenizer.consume("WS")
    return (marker_var_left, marker_op, marker_var_right)


def _parse_marker_var(tokenizer: Tokenizer) -> MarkerVar:  # noqa: RET503
    """
    marker_var = VARIABLE | QUOTED_STRING
    """
    if tokenizer.check("VARIABLE"):
        return process_env_var(tokenizer.read().text.replace(".", "_"))
    elif tokenizer.check("QUOTED_STRING"):
        return process_python_str(tokenizer.read().text)
    else:
        tokenizer.raise_syntax_error(
            message="Expected a marker variable or quoted string"
        )


def process_env_var(env_var: str) -> Variable:
    if env_var in ("platform_python_implementation", "python_implementation"):
        return Variable("platform_python_implementation")
    else:
        return Variable(env_var)


def process_python_str(python_str: str) -> Value:
    value = ast.literal_eval(python_str)
    return Value(str(value))


def _parse_marker_op(tokenizer: Tokenizer) -> Op:
    """
    marker_op = IN | NOT IN | OP
    """
    if tokenizer.check("IN"):
        tokenizer.read()
        return Op("in")
    elif tokenizer.check("NOT"):
        tokenizer.read()
        tokenizer.expect("WS", expected="whitespace after 'not'")
        tokenizer.expect("IN", expected="'in' after 'not'")
        return Op("not in")
    elif tokenizer.check("OP"):
        return Op(tokenizer.read().text)
    else:
        return tokenizer.raise_syntax_error(
            "Expected marker operator, one of <=, <, !=, ==, >=, >, ~=, ===, in, not in"
        )
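
# Editor's example (not part of the vendored file): exercising the two public
# entry points of this parser.
from pip._vendor.packaging._parser import parse_marker, parse_requirement

req = parse_requirement('requests[security] >=2.8.1 ; python_version > "3.8"')
print(req.name, req.extras, req.specifier)  # requests ['security'] >=2.8.1
print(parse_marker('os_name == "posix" and python_version > "3.8"'))
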
@@ -0,0 +1,69 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

import typing


@typing.final
class InfinityType:
    __slots__ = ()

    def __repr__(self) -> str:
        return "Infinity"

    def __hash__(self) -> int:
        return hash(repr(self))

    def __lt__(self, other: object) -> bool:
        return False

    def __le__(self, other: object) -> bool:
        return False

    def __eq__(self, other: object) -> bool:
        return isinstance(other, self.__class__)

    def __gt__(self, other: object) -> bool:
        return True

    def __ge__(self, other: object) -> bool:
        return True

    def __neg__(self: object) -> "NegativeInfinityType":
        return NegativeInfinity


Infinity = InfinityType()


@typing.final
class NegativeInfinityType:
    __slots__ = ()

    def __repr__(self) -> str:
        return "-Infinity"

    def __hash__(self) -> int:
        return hash(repr(self))

    def __lt__(self, other: object) -> bool:
        return True

    def __le__(self, other: object) -> bool:
        return True

    def __eq__(self, other: object) -> bool:
        return isinstance(other, self.__class__)

    def __gt__(self, other: object) -> bool:
        return False

    def __ge__(self, other: object) -> bool:
        return False

    def __neg__(self: object) -> InfinityType:
        return Infinity


NegativeInfinity = NegativeInfinityType()
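
# Editor's example (not part of the vendored file): the sentinels order above
# and below every other value, which is how packaging's version-comparison
# code uses them.
from pip._vendor.packaging._structures import Infinity, NegativeInfinity

print(Infinity > 10**9, NegativeInfinity < -(10**9))  # True True
print(-Infinity is NegativeInfinity, -NegativeInfinity is Infinity)  # True True
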
@@ -0,0 +1,193 @@
from __future__ import annotations

import contextlib
import re
from dataclasses import dataclass
from typing import Generator, Mapping, NoReturn

from .specifiers import Specifier


@dataclass
class Token:
    name: str
    text: str
    position: int


class ParserSyntaxError(Exception):
    """The provided source text could not be parsed correctly."""

    def __init__(
        self,
        message: str,
        *,
        source: str,
        span: tuple[int, int],
    ) -> None:
        self.span = span
        self.message = message
        self.source = source

        super().__init__()

    def __str__(self) -> str:
        marker = " " * self.span[0] + "~" * (self.span[1] - self.span[0]) + "^"
        return f"{self.message}\n    {self.source}\n    {marker}"


DEFAULT_RULES: dict[str, re.Pattern[str]] = {
    "LEFT_PARENTHESIS": re.compile(r"\("),
    "RIGHT_PARENTHESIS": re.compile(r"\)"),
    "LEFT_BRACKET": re.compile(r"\["),
    "RIGHT_BRACKET": re.compile(r"\]"),
    "SEMICOLON": re.compile(r";"),
    "COMMA": re.compile(r","),
    "QUOTED_STRING": re.compile(
        r"""
            (
                ('[^']*')
                |
                ("[^"]*")
            )
        """,
        re.VERBOSE,
    ),
    "OP": re.compile(r"(===|==|~=|!=|<=|>=|<|>)"),
    "BOOLOP": re.compile(r"\b(or|and)\b"),
    "IN": re.compile(r"\bin\b"),
    "NOT": re.compile(r"\bnot\b"),
    "VARIABLE": re.compile(
        r"""
            \b(
                python_version
                |python_full_version
                |os[._]name
                |sys[._]platform
                |platform_(release|system)
                |platform[._](version|machine|python_implementation)
                |python_implementation
                |implementation_(name|version)
                |extras?
                |dependency_groups
            )\b
        """,
        re.VERBOSE,
    ),
    "SPECIFIER": re.compile(
        Specifier._operator_regex_str + Specifier._version_regex_str,
        re.VERBOSE | re.IGNORECASE,
    ),
    "AT": re.compile(r"\@"),
    "URL": re.compile(r"[^ \t]+"),
    "IDENTIFIER": re.compile(r"\b[a-zA-Z0-9][a-zA-Z0-9._-]*\b"),
    "VERSION_PREFIX_TRAIL": re.compile(r"\.\*"),
    "VERSION_LOCAL_LABEL_TRAIL": re.compile(r"\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*"),
    "WS": re.compile(r"[ \t]+"),
    "END": re.compile(r"$"),
}


class Tokenizer:
    """Context-sensitive token parsing.

    Provides methods to examine the input stream to check whether the next token
    matches.
    """

    def __init__(
        self,
        source: str,
        *,
        rules: Mapping[str, re.Pattern[str]],
    ) -> None:
        self.source = source
        self.rules = rules
        self.next_token: Token | None = None
        self.position = 0

    def consume(self, name: str) -> None:
        """Move beyond provided token name, if at current position."""
        if self.check(name):
            self.read()

    def check(self, name: str, *, peek: bool = False) -> bool:
        """Check whether the next token has the provided name.

        By default, if the check succeeds, the token *must* be read before
        another check. If `peek` is set to `True`, the token is not loaded and
        would need to be checked again.
        """
        assert self.next_token is None, (
            f"Cannot check for {name!r}, already have {self.next_token!r}"
        )
        assert name in self.rules, f"Unknown token name: {name!r}"

        expression = self.rules[name]

        match = expression.match(self.source, self.position)
        if match is None:
            return False
        if not peek:
            self.next_token = Token(name, match[0], self.position)
        return True

    def expect(self, name: str, *, expected: str) -> Token:
        """Expect a certain token name next, failing with a syntax error otherwise.

        The token is *not* read.
        """
        if not self.check(name):
            raise self.raise_syntax_error(f"Expected {expected}")
        return self.read()

    def read(self) -> Token:
        """Consume the next token and return it."""
        token = self.next_token
        assert token is not None

        self.position += len(token.text)
        self.next_token = None

        return token

    def raise_syntax_error(
        self,
        message: str,
        *,
        span_start: int | None = None,
        span_end: int | None = None,
    ) -> NoReturn:
        """Raise ParserSyntaxError at the given position."""
        span = (
            self.position if span_start is None else span_start,
            self.position if span_end is None else span_end,
        )
        raise ParserSyntaxError(
            message,
            source=self.source,
            span=span,
        )

    @contextlib.contextmanager
    def enclosing_tokens(
        self, open_token: str, close_token: str, *, around: str
    ) -> Generator[None, None, None]:
        if self.check(open_token):
            open_position = self.position
            self.read()
        else:
            open_position = None

        yield

        if open_position is None:
            return

        if not self.check(close_token):
            self.raise_syntax_error(
                f"Expected matching {close_token} for {open_token}, after {around}",
                span_start=open_position,
            )

        self.read()
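
# Editor's example (not part of the vendored file): stepping a Tokenizer
# through a tiny input with the DEFAULT_RULES defined above.
from pip._vendor.packaging._tokenizer import DEFAULT_RULES, Tokenizer

tokenizer = Tokenizer('name >= "1.0"', rules=DEFAULT_RULES)
print(tokenizer.expect("IDENTIFIER", expected="a name").text)  # name
tokenizer.consume("WS")
print(tokenizer.expect("OP", expected="an operator").text)  # >=
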
@@ -0,0 +1,147 @@
#######################################################################################
#
# Adapted from:
# https://github.com/pypa/hatch/blob/5352e44/backend/src/hatchling/licenses/parse.py
#
# MIT License
#
# Copyright (c) 2017-present Ofek Lev <oss@ofek.dev>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to the following
# conditions:
#
# The above copyright notice and this permission notice shall be included in all copies
# or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
# CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
# OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
#
# With additional allowance of arbitrary `LicenseRef-` identifiers, not just
# `LicenseRef-Public-Domain` and `LicenseRef-Proprietary`.
#
#######################################################################################
from __future__ import annotations

import re
from typing import NewType, cast

from ._spdx import EXCEPTIONS, LICENSES

__all__ = [
    "InvalidLicenseExpression",
    "NormalizedLicenseExpression",
    "canonicalize_license_expression",
]

license_ref_allowed = re.compile("^[A-Za-z0-9.-]*$")

NormalizedLicenseExpression = NewType("NormalizedLicenseExpression", str)


class InvalidLicenseExpression(ValueError):
    """Raised when a license-expression string is invalid

    >>> canonicalize_license_expression("invalid")
    Traceback (most recent call last):
        ...
    packaging.licenses.InvalidLicenseExpression: Invalid license expression: 'invalid'
    """


def canonicalize_license_expression(
    raw_license_expression: str,
) -> NormalizedLicenseExpression:
    if not raw_license_expression:
        message = f"Invalid license expression: {raw_license_expression!r}"
        raise InvalidLicenseExpression(message)

    # Pad any parentheses so tokenization can be achieved by merely splitting on
    # whitespace.
    license_expression = raw_license_expression.replace("(", " ( ").replace(")", " ) ")
    licenseref_prefix = "LicenseRef-"
    license_refs = {
        ref.lower(): "LicenseRef-" + ref[len(licenseref_prefix) :]
        for ref in license_expression.split()
        if ref.lower().startswith(licenseref_prefix.lower())
    }

    # Normalize to lower case so we can look up licenses/exceptions
    # and so boolean operators are Python-compatible.
    license_expression = license_expression.lower()

    tokens = license_expression.split()

    # Rather than implementing a parenthesis/boolean logic parser, create an
    # expression that Python can parse. Everything that is not involved with the
    # grammar itself is replaced with the placeholder `False` and the resultant
    # expression should become a valid Python expression.
    python_tokens = []
    for token in tokens:
        if token not in {"or", "and", "with", "(", ")"}:
            python_tokens.append("False")
        elif token == "with":
            python_tokens.append("or")
        elif (
            token == "("
            and python_tokens
            and python_tokens[-1] not in {"or", "and", "("}
        ) or (token == ")" and python_tokens and python_tokens[-1] == "("):
            message = f"Invalid license expression: {raw_license_expression!r}"
            raise InvalidLicenseExpression(message)
        else:
            python_tokens.append(token)

    python_expression = " ".join(python_tokens)
    try:
        compile(python_expression, "", "eval")
    except SyntaxError:
        message = f"Invalid license expression: {raw_license_expression!r}"
        raise InvalidLicenseExpression(message) from None

    # Take a final pass to check for unknown licenses/exceptions.
    normalized_tokens = []
    for token in tokens:
        if token in {"or", "and", "with", "(", ")"}:
            normalized_tokens.append(token.upper())
            continue

        if normalized_tokens and normalized_tokens[-1] == "WITH":
            if token not in EXCEPTIONS:
                message = f"Unknown license exception: {token!r}"
                raise InvalidLicenseExpression(message)

            normalized_tokens.append(EXCEPTIONS[token]["id"])
        else:
            if token.endswith("+"):
                final_token = token[:-1]
                suffix = "+"
            else:
                final_token = token
                suffix = ""

            if final_token.startswith("licenseref-"):
                if not license_ref_allowed.match(final_token):
                    message = f"Invalid licenseref: {final_token!r}"
                    raise InvalidLicenseExpression(message)
                normalized_tokens.append(license_refs[final_token] + suffix)
            else:
                if final_token not in LICENSES:
                    message = f"Unknown license: {final_token!r}"
                    raise InvalidLicenseExpression(message)
                normalized_tokens.append(LICENSES[final_token]["id"] + suffix)

    normalized_expression = " ".join(normalized_tokens)

    return cast(
        "NormalizedLicenseExpression",
        normalized_expression.replace("( ", "(").replace(" )", ")"),
    )
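
# Editor's example (not part of the vendored file): lowercase SPDX ids are
# normalized to canonical casing, operators are upper-cased, and bad input
# raises InvalidLicenseExpression.
from pip._vendor.packaging.licenses import canonicalize_license_expression

print(canonicalize_license_expression("mit or apache-2.0"))  # MIT OR Apache-2.0
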
@@ -0,0 +1,799 @@

from __future__ import annotations

from typing import TypedDict

class SPDXLicense(TypedDict):
    id: str
    deprecated: bool

class SPDXException(TypedDict):
    id: str
    deprecated: bool


VERSION = '3.27.0'

LICENSES: dict[str, SPDXLicense] = {
    '0bsd': {'id': '0BSD', 'deprecated': False},
    '3d-slicer-1.0': {'id': '3D-Slicer-1.0', 'deprecated': False},
    'aal': {'id': 'AAL', 'deprecated': False},
    'abstyles': {'id': 'Abstyles', 'deprecated': False},
    'adacore-doc': {'id': 'AdaCore-doc', 'deprecated': False},
    'adobe-2006': {'id': 'Adobe-2006', 'deprecated': False},
    'adobe-display-postscript': {'id': 'Adobe-Display-PostScript', 'deprecated': False},
    'adobe-glyph': {'id': 'Adobe-Glyph', 'deprecated': False},
    'adobe-utopia': {'id': 'Adobe-Utopia', 'deprecated': False},
    'adsl': {'id': 'ADSL', 'deprecated': False},
    'afl-1.1': {'id': 'AFL-1.1', 'deprecated': False},
    'afl-1.2': {'id': 'AFL-1.2', 'deprecated': False},
    'afl-2.0': {'id': 'AFL-2.0', 'deprecated': False},
    'afl-2.1': {'id': 'AFL-2.1', 'deprecated': False},
    'afl-3.0': {'id': 'AFL-3.0', 'deprecated': False},
    'afmparse': {'id': 'Afmparse', 'deprecated': False},
    'agpl-1.0': {'id': 'AGPL-1.0', 'deprecated': True},
    'agpl-1.0-only': {'id': 'AGPL-1.0-only', 'deprecated': False},
    'agpl-1.0-or-later': {'id': 'AGPL-1.0-or-later', 'deprecated': False},
    'agpl-3.0': {'id': 'AGPL-3.0', 'deprecated': True},
    'agpl-3.0-only': {'id': 'AGPL-3.0-only', 'deprecated': False},
    'agpl-3.0-or-later': {'id': 'AGPL-3.0-or-later', 'deprecated': False},
    'aladdin': {'id': 'Aladdin', 'deprecated': False},
    'amd-newlib': {'id': 'AMD-newlib', 'deprecated': False},
    'amdplpa': {'id': 'AMDPLPA', 'deprecated': False},
    'aml': {'id': 'AML', 'deprecated': False},
    'aml-glslang': {'id': 'AML-glslang', 'deprecated': False},
    'ampas': {'id': 'AMPAS', 'deprecated': False},
    'antlr-pd': {'id': 'ANTLR-PD', 'deprecated': False},
    'antlr-pd-fallback': {'id': 'ANTLR-PD-fallback', 'deprecated': False},
    'any-osi': {'id': 'any-OSI', 'deprecated': False},
    'any-osi-perl-modules': {'id': 'any-OSI-perl-modules', 'deprecated': False},
    'apache-1.0': {'id': 'Apache-1.0', 'deprecated': False},
    'apache-1.1': {'id': 'Apache-1.1', 'deprecated': False},
    'apache-2.0': {'id': 'Apache-2.0', 'deprecated': False},
    'apafml': {'id': 'APAFML', 'deprecated': False},
    'apl-1.0': {'id': 'APL-1.0', 'deprecated': False},
    'app-s2p': {'id': 'App-s2p', 'deprecated': False},
    'apsl-1.0': {'id': 'APSL-1.0', 'deprecated': False},
    'apsl-1.1': {'id': 'APSL-1.1', 'deprecated': False},
    'apsl-1.2': {'id': 'APSL-1.2', 'deprecated': False},
    'apsl-2.0': {'id': 'APSL-2.0', 'deprecated': False},
    'arphic-1999': {'id': 'Arphic-1999', 'deprecated': False},
    'artistic-1.0': {'id': 'Artistic-1.0', 'deprecated': False},
    'artistic-1.0-cl8': {'id': 'Artistic-1.0-cl8', 'deprecated': False},
    'artistic-1.0-perl': {'id': 'Artistic-1.0-Perl', 'deprecated': False},
    'artistic-2.0': {'id': 'Artistic-2.0', 'deprecated': False},
    'artistic-dist': {'id': 'Artistic-dist', 'deprecated': False},
    'aspell-ru': {'id': 'Aspell-RU', 'deprecated': False},
    'aswf-digital-assets-1.0': {'id': 'ASWF-Digital-Assets-1.0', 'deprecated': False},
    'aswf-digital-assets-1.1': {'id': 'ASWF-Digital-Assets-1.1', 'deprecated': False},
    'baekmuk': {'id': 'Baekmuk', 'deprecated': False},
    'bahyph': {'id': 'Bahyph', 'deprecated': False},
    'barr': {'id': 'Barr', 'deprecated': False},
    'bcrypt-solar-designer': {'id': 'bcrypt-Solar-Designer', 'deprecated': False},
    'beerware': {'id': 'Beerware', 'deprecated': False},
    'bitstream-charter': {'id': 'Bitstream-Charter', 'deprecated': False},
    'bitstream-vera': {'id': 'Bitstream-Vera', 'deprecated': False},
    'bittorrent-1.0': {'id': 'BitTorrent-1.0', 'deprecated': False},
    'bittorrent-1.1': {'id': 'BitTorrent-1.1', 'deprecated': False},
    'blessing': {'id': 'blessing', 'deprecated': False},
    'blueoak-1.0.0': {'id': 'BlueOak-1.0.0', 'deprecated': False},
    'boehm-gc': {'id': 'Boehm-GC', 'deprecated': False},
    'boehm-gc-without-fee': {'id': 'Boehm-GC-without-fee', 'deprecated': False},
    'borceux': {'id': 'Borceux', 'deprecated': False},
    'brian-gladman-2-clause': {'id': 'Brian-Gladman-2-Clause', 'deprecated': False},
    'brian-gladman-3-clause': {'id': 'Brian-Gladman-3-Clause', 'deprecated': False},
    'bsd-1-clause': {'id': 'BSD-1-Clause', 'deprecated': False},
    'bsd-2-clause': {'id': 'BSD-2-Clause', 'deprecated': False},
    'bsd-2-clause-darwin': {'id': 'BSD-2-Clause-Darwin', 'deprecated': False},
    'bsd-2-clause-first-lines': {'id': 'BSD-2-Clause-first-lines', 'deprecated': False},
    'bsd-2-clause-freebsd': {'id': 'BSD-2-Clause-FreeBSD', 'deprecated': True},
    'bsd-2-clause-netbsd': {'id': 'BSD-2-Clause-NetBSD', 'deprecated': True},
    'bsd-2-clause-patent': {'id': 'BSD-2-Clause-Patent', 'deprecated': False},
    'bsd-2-clause-pkgconf-disclaimer': {'id': 'BSD-2-Clause-pkgconf-disclaimer', 'deprecated': False},
    'bsd-2-clause-views': {'id': 'BSD-2-Clause-Views', 'deprecated': False},
    'bsd-3-clause': {'id': 'BSD-3-Clause', 'deprecated': False},
    'bsd-3-clause-acpica': {'id': 'BSD-3-Clause-acpica', 'deprecated': False},
    'bsd-3-clause-attribution': {'id': 'BSD-3-Clause-Attribution', 'deprecated': False},
    'bsd-3-clause-clear': {'id': 'BSD-3-Clause-Clear', 'deprecated': False},
    'bsd-3-clause-flex': {'id': 'BSD-3-Clause-flex', 'deprecated': False},
    'bsd-3-clause-hp': {'id': 'BSD-3-Clause-HP', 'deprecated': False},
    'bsd-3-clause-lbnl': {'id': 'BSD-3-Clause-LBNL', 'deprecated': False},
    'bsd-3-clause-modification': {'id': 'BSD-3-Clause-Modification', 'deprecated': False},
    'bsd-3-clause-no-military-license': {'id': 'BSD-3-Clause-No-Military-License', 'deprecated': False},
    'bsd-3-clause-no-nuclear-license': {'id': 'BSD-3-Clause-No-Nuclear-License', 'deprecated': False},
    'bsd-3-clause-no-nuclear-license-2014': {'id': 'BSD-3-Clause-No-Nuclear-License-2014', 'deprecated': False},
    'bsd-3-clause-no-nuclear-warranty': {'id': 'BSD-3-Clause-No-Nuclear-Warranty', 'deprecated': False},
    'bsd-3-clause-open-mpi': {'id': 'BSD-3-Clause-Open-MPI', 'deprecated': False},
    'bsd-3-clause-sun': {'id': 'BSD-3-Clause-Sun', 'deprecated': False},
    'bsd-4-clause': {'id': 'BSD-4-Clause', 'deprecated': False},
    'bsd-4-clause-shortened': {'id': 'BSD-4-Clause-Shortened', 'deprecated': False},
    'bsd-4-clause-uc': {'id': 'BSD-4-Clause-UC', 'deprecated': False},
    'bsd-4.3reno': {'id': 'BSD-4.3RENO', 'deprecated': False},
    'bsd-4.3tahoe': {'id': 'BSD-4.3TAHOE', 'deprecated': False},
    'bsd-advertising-acknowledgement': {'id': 'BSD-Advertising-Acknowledgement', 'deprecated': False},
    'bsd-attribution-hpnd-disclaimer': {'id': 'BSD-Attribution-HPND-disclaimer', 'deprecated': False},
    'bsd-inferno-nettverk': {'id': 'BSD-Inferno-Nettverk', 'deprecated': False},
    'bsd-protection': {'id': 'BSD-Protection', 'deprecated': False},
    'bsd-source-beginning-file': {'id': 'BSD-Source-beginning-file', 'deprecated': False},
    'bsd-source-code': {'id': 'BSD-Source-Code', 'deprecated': False},
    'bsd-systemics': {'id': 'BSD-Systemics', 'deprecated': False},
    'bsd-systemics-w3works': {'id': 'BSD-Systemics-W3Works', 'deprecated': False},
    'bsl-1.0': {'id': 'BSL-1.0', 'deprecated': False},
    'busl-1.1': {'id': 'BUSL-1.1', 'deprecated': False},
    'bzip2-1.0.5': {'id': 'bzip2-1.0.5', 'deprecated': True},
    'bzip2-1.0.6': {'id': 'bzip2-1.0.6', 'deprecated': False},
    'c-uda-1.0': {'id': 'C-UDA-1.0', 'deprecated': False},
    'cal-1.0': {'id': 'CAL-1.0', 'deprecated': False},
    'cal-1.0-combined-work-exception': {'id': 'CAL-1.0-Combined-Work-Exception', 'deprecated': False},
    'caldera': {'id': 'Caldera', 'deprecated': False},
    'caldera-no-preamble': {'id': 'Caldera-no-preamble', 'deprecated': False},
    'catharon': {'id': 'Catharon', 'deprecated': False},
    'catosl-1.1': {'id': 'CATOSL-1.1', 'deprecated': False},
    'cc-by-1.0': {'id': 'CC-BY-1.0', 'deprecated': False},
    'cc-by-2.0': {'id': 'CC-BY-2.0', 'deprecated': False},
    'cc-by-2.5': {'id': 'CC-BY-2.5', 'deprecated': False},
    'cc-by-2.5-au': {'id': 'CC-BY-2.5-AU', 'deprecated': False},
    'cc-by-3.0': {'id': 'CC-BY-3.0', 'deprecated': False},
    'cc-by-3.0-at': {'id': 'CC-BY-3.0-AT', 'deprecated': False},
    'cc-by-3.0-au': {'id': 'CC-BY-3.0-AU', 'deprecated': False},
    'cc-by-3.0-de': {'id': 'CC-BY-3.0-DE', 'deprecated': False},
    'cc-by-3.0-igo': {'id': 'CC-BY-3.0-IGO', 'deprecated': False},
    'cc-by-3.0-nl': {'id': 'CC-BY-3.0-NL', 'deprecated': False},
    'cc-by-3.0-us': {'id': 'CC-BY-3.0-US', 'deprecated': False},
    'cc-by-4.0': {'id': 'CC-BY-4.0', 'deprecated': False},
    'cc-by-nc-1.0': {'id': 'CC-BY-NC-1.0', 'deprecated': False},
    'cc-by-nc-2.0': {'id': 'CC-BY-NC-2.0', 'deprecated': False},
    'cc-by-nc-2.5': {'id': 'CC-BY-NC-2.5', 'deprecated': False},
    'cc-by-nc-3.0': {'id': 'CC-BY-NC-3.0', 'deprecated': False},
    'cc-by-nc-3.0-de': {'id': 'CC-BY-NC-3.0-DE', 'deprecated': False},
    'cc-by-nc-4.0': {'id': 'CC-BY-NC-4.0', 'deprecated': False},
    'cc-by-nc-nd-1.0': {'id': 'CC-BY-NC-ND-1.0', 'deprecated': False},
    'cc-by-nc-nd-2.0': {'id': 'CC-BY-NC-ND-2.0', 'deprecated': False},
    'cc-by-nc-nd-2.5': {'id': 'CC-BY-NC-ND-2.5', 'deprecated': False},
    'cc-by-nc-nd-3.0': {'id': 'CC-BY-NC-ND-3.0', 'deprecated': False},
    'cc-by-nc-nd-3.0-de': {'id': 'CC-BY-NC-ND-3.0-DE', 'deprecated': False},
    'cc-by-nc-nd-3.0-igo': {'id': 'CC-BY-NC-ND-3.0-IGO', 'deprecated': False},
    'cc-by-nc-nd-4.0': {'id': 'CC-BY-NC-ND-4.0', 'deprecated': False},
    'cc-by-nc-sa-1.0': {'id': 'CC-BY-NC-SA-1.0', 'deprecated': False},
    'cc-by-nc-sa-2.0': {'id': 'CC-BY-NC-SA-2.0', 'deprecated': False},
    'cc-by-nc-sa-2.0-de': {'id': 'CC-BY-NC-SA-2.0-DE', 'deprecated': False},
    'cc-by-nc-sa-2.0-fr': {'id': 'CC-BY-NC-SA-2.0-FR', 'deprecated': False},
    'cc-by-nc-sa-2.0-uk': {'id': 'CC-BY-NC-SA-2.0-UK', 'deprecated': False},
    'cc-by-nc-sa-2.5': {'id': 'CC-BY-NC-SA-2.5', 'deprecated': False},
    'cc-by-nc-sa-3.0': {'id': 'CC-BY-NC-SA-3.0', 'deprecated': False},
    'cc-by-nc-sa-3.0-de': {'id': 'CC-BY-NC-SA-3.0-DE', 'deprecated': False},
    'cc-by-nc-sa-3.0-igo': {'id': 'CC-BY-NC-SA-3.0-IGO', 'deprecated': False},
    'cc-by-nc-sa-4.0': {'id': 'CC-BY-NC-SA-4.0', 'deprecated': False},
    'cc-by-nd-1.0': {'id': 'CC-BY-ND-1.0', 'deprecated': False},
    'cc-by-nd-2.0': {'id': 'CC-BY-ND-2.0', 'deprecated': False},
    'cc-by-nd-2.5': {'id': 'CC-BY-ND-2.5', 'deprecated': False},
    'cc-by-nd-3.0': {'id': 'CC-BY-ND-3.0', 'deprecated': False},
    'cc-by-nd-3.0-de': {'id': 'CC-BY-ND-3.0-DE', 'deprecated': False},
    'cc-by-nd-4.0': {'id': 'CC-BY-ND-4.0', 'deprecated': False},
    'cc-by-sa-1.0': {'id': 'CC-BY-SA-1.0', 'deprecated': False},
    'cc-by-sa-2.0': {'id': 'CC-BY-SA-2.0', 'deprecated': False},
    'cc-by-sa-2.0-uk': {'id': 'CC-BY-SA-2.0-UK', 'deprecated': False},
    'cc-by-sa-2.1-jp': {'id': 'CC-BY-SA-2.1-JP', 'deprecated': False},
    'cc-by-sa-2.5': {'id': 'CC-BY-SA-2.5', 'deprecated': False},
    'cc-by-sa-3.0': {'id': 'CC-BY-SA-3.0', 'deprecated': False},
    'cc-by-sa-3.0-at': {'id': 'CC-BY-SA-3.0-AT', 'deprecated': False},
    'cc-by-sa-3.0-de': {'id': 'CC-BY-SA-3.0-DE', 'deprecated': False},
    'cc-by-sa-3.0-igo': {'id': 'CC-BY-SA-3.0-IGO', 'deprecated': False},
    'cc-by-sa-4.0': {'id': 'CC-BY-SA-4.0', 'deprecated': False},
    'cc-pddc': {'id': 'CC-PDDC', 'deprecated': False},
    'cc-pdm-1.0': {'id': 'CC-PDM-1.0', 'deprecated': False},
    'cc-sa-1.0': {'id': 'CC-SA-1.0', 'deprecated': False},
    'cc0-1.0': {'id': 'CC0-1.0', 'deprecated': False},
    'cddl-1.0': {'id': 'CDDL-1.0', 'deprecated': False},
    'cddl-1.1': {'id': 'CDDL-1.1', 'deprecated': False},
    'cdl-1.0': {'id': 'CDL-1.0', 'deprecated': False},
    'cdla-permissive-1.0': {'id': 'CDLA-Permissive-1.0', 'deprecated': False},
    'cdla-permissive-2.0': {'id': 'CDLA-Permissive-2.0', 'deprecated': False},
    'cdla-sharing-1.0': {'id': 'CDLA-Sharing-1.0', 'deprecated': False},
    'cecill-1.0': {'id': 'CECILL-1.0', 'deprecated': False},
    'cecill-1.1': {'id': 'CECILL-1.1', 'deprecated': False},
    'cecill-2.0': {'id': 'CECILL-2.0', 'deprecated': False},
    'cecill-2.1': {'id': 'CECILL-2.1', 'deprecated': False},
    'cecill-b': {'id': 'CECILL-B', 'deprecated': False},
    'cecill-c': {'id': 'CECILL-C', 'deprecated': False},
    'cern-ohl-1.1': {'id': 'CERN-OHL-1.1', 'deprecated': False},
    'cern-ohl-1.2': {'id': 'CERN-OHL-1.2', 'deprecated': False},
    'cern-ohl-p-2.0': {'id': 'CERN-OHL-P-2.0', 'deprecated': False},
    'cern-ohl-s-2.0': {'id': 'CERN-OHL-S-2.0', 'deprecated': False},
    'cern-ohl-w-2.0': {'id': 'CERN-OHL-W-2.0', 'deprecated': False},
    'cfitsio': {'id': 'CFITSIO', 'deprecated': False},
    'check-cvs': {'id': 'check-cvs', 'deprecated': False},
    'checkmk': {'id': 'checkmk', 'deprecated': False},
    'clartistic': {'id': 'ClArtistic', 'deprecated': False},
    'clips': {'id': 'Clips', 'deprecated': False},
    'cmu-mach': {'id': 'CMU-Mach', 'deprecated': False},
    'cmu-mach-nodoc': {'id': 'CMU-Mach-nodoc', 'deprecated': False},
    'cnri-jython': {'id': 'CNRI-Jython', 'deprecated': False},
    'cnri-python': {'id': 'CNRI-Python', 'deprecated': False},
    'cnri-python-gpl-compatible': {'id': 'CNRI-Python-GPL-Compatible', 'deprecated': False},
    'coil-1.0': {'id': 'COIL-1.0', 'deprecated': False},
    'community-spec-1.0': {'id': 'Community-Spec-1.0', 'deprecated': False},
    'condor-1.1': {'id': 'Condor-1.1', 'deprecated': False},
    'copyleft-next-0.3.0': {'id': 'copyleft-next-0.3.0', 'deprecated': False},
    'copyleft-next-0.3.1': {'id': 'copyleft-next-0.3.1', 'deprecated': False},
    'cornell-lossless-jpeg': {'id': 'Cornell-Lossless-JPEG', 'deprecated': False},
    'cpal-1.0': {'id': 'CPAL-1.0', 'deprecated': False},
    'cpl-1.0': {'id': 'CPL-1.0', 'deprecated': False},
    'cpol-1.02': {'id': 'CPOL-1.02', 'deprecated': False},
    'cronyx': {'id': 'Cronyx', 'deprecated': False},
    'crossword': {'id': 'Crossword', 'deprecated': False},
    'cryptoswift': {'id': 'CryptoSwift', 'deprecated': False},
    'crystalstacker': {'id': 'CrystalStacker', 'deprecated': False},
    'cua-opl-1.0': {'id': 'CUA-OPL-1.0', 'deprecated': False},
    'cube': {'id': 'Cube', 'deprecated': False},
    'curl': {'id': 'curl', 'deprecated': False},
    'cve-tou': {'id': 'cve-tou', 'deprecated': False},
    'd-fsl-1.0': {'id': 'D-FSL-1.0', 'deprecated': False},
    'dec-3-clause': {'id': 'DEC-3-Clause', 'deprecated': False},
    'diffmark': {'id': 'diffmark', 'deprecated': False},
    'dl-de-by-2.0': {'id': 'DL-DE-BY-2.0', 'deprecated': False},
    'dl-de-zero-2.0': {'id': 'DL-DE-ZERO-2.0', 'deprecated': False},
    'doc': {'id': 'DOC', 'deprecated': False},
    'docbook-dtd': {'id': 'DocBook-DTD', 'deprecated': False},
    'docbook-schema': {'id': 'DocBook-Schema', 'deprecated': False},
    'docbook-stylesheet': {'id': 'DocBook-Stylesheet', 'deprecated': False},
    'docbook-xml': {'id': 'DocBook-XML', 'deprecated': False},
    'dotseqn': {'id': 'Dotseqn', 'deprecated': False},
    'drl-1.0': {'id': 'DRL-1.0', 'deprecated': False},
    'drl-1.1': {'id': 'DRL-1.1', 'deprecated': False},
    'dsdp': {'id': 'DSDP', 'deprecated': False},
    'dtoa': {'id': 'dtoa', 'deprecated': False},
    'dvipdfm': {'id': 'dvipdfm', 'deprecated': False},
    'ecl-1.0': {'id': 'ECL-1.0', 'deprecated': False},
    'ecl-2.0': {'id': 'ECL-2.0', 'deprecated': False},
    'ecos-2.0': {'id': 'eCos-2.0', 'deprecated': True},
    'efl-1.0': {'id': 'EFL-1.0', 'deprecated': False},
    'efl-2.0': {'id': 'EFL-2.0', 'deprecated': False},
    'egenix': {'id': 'eGenix', 'deprecated': False},
    'elastic-2.0': {'id': 'Elastic-2.0', 'deprecated': False},
    'entessa': {'id': 'Entessa', 'deprecated': False},
    'epics': {'id': 'EPICS', 'deprecated': False},
    'epl-1.0': {'id': 'EPL-1.0', 'deprecated': False},
    'epl-2.0': {'id': 'EPL-2.0', 'deprecated': False},
    'erlpl-1.1': {'id': 'ErlPL-1.1', 'deprecated': False},
    'etalab-2.0': {'id': 'etalab-2.0', 'deprecated': False},
    'eudatagrid': {'id': 'EUDatagrid', 'deprecated': False},
    'eupl-1.0': {'id': 'EUPL-1.0', 'deprecated': False},
    'eupl-1.1': {'id': 'EUPL-1.1', 'deprecated': False},
    'eupl-1.2': {'id': 'EUPL-1.2', 'deprecated': False},
    'eurosym': {'id': 'Eurosym', 'deprecated': False},
    'fair': {'id': 'Fair', 'deprecated': False},
    'fbm': {'id': 'FBM', 'deprecated': False},
    'fdk-aac': {'id': 'FDK-AAC', 'deprecated': False},
    'ferguson-twofish': {'id': 'Ferguson-Twofish', 'deprecated': False},
    'frameworx-1.0': {'id': 'Frameworx-1.0', 'deprecated': False},
    'freebsd-doc': {'id': 'FreeBSD-DOC', 'deprecated': False},
    'freeimage': {'id': 'FreeImage', 'deprecated': False},
    'fsfap': {'id': 'FSFAP', 'deprecated': False},
    'fsfap-no-warranty-disclaimer': {'id': 'FSFAP-no-warranty-disclaimer', 'deprecated': False},
    'fsful': {'id': 'FSFUL', 'deprecated': False},
    'fsfullr': {'id': 'FSFULLR', 'deprecated': False},
    'fsfullrsd': {'id': 'FSFULLRSD', 'deprecated': False},
    'fsfullrwd': {'id': 'FSFULLRWD', 'deprecated': False},
    'fsl-1.1-alv2': {'id': 'FSL-1.1-ALv2', 'deprecated': False},
    'fsl-1.1-mit': {'id': 'FSL-1.1-MIT', 'deprecated': False},
    'ftl': {'id': 'FTL', 'deprecated': False},
    'furuseth': {'id': 'Furuseth', 'deprecated': False},
    'fwlw': {'id': 'fwlw', 'deprecated': False},
    'game-programming-gems': {'id': 'Game-Programming-Gems', 'deprecated': False},
    'gcr-docs': {'id': 'GCR-docs', 'deprecated': False},
    'gd': {'id': 'GD', 'deprecated': False},
    'generic-xts': {'id': 'generic-xts', 'deprecated': False},
    'gfdl-1.1': {'id': 'GFDL-1.1', 'deprecated': True},
    'gfdl-1.1-invariants-only': {'id': 'GFDL-1.1-invariants-only', 'deprecated': False},
    'gfdl-1.1-invariants-or-later': {'id': 'GFDL-1.1-invariants-or-later', 'deprecated': False},
    'gfdl-1.1-no-invariants-only': {'id': 'GFDL-1.1-no-invariants-only', 'deprecated': False},
    'gfdl-1.1-no-invariants-or-later': {'id': 'GFDL-1.1-no-invariants-or-later', 'deprecated': False},
    'gfdl-1.1-only': {'id': 'GFDL-1.1-only', 'deprecated': False},
    'gfdl-1.1-or-later': {'id': 'GFDL-1.1-or-later', 'deprecated': False},
    'gfdl-1.2': {'id': 'GFDL-1.2', 'deprecated': True},
    'gfdl-1.2-invariants-only': {'id': 'GFDL-1.2-invariants-only', 'deprecated': False},
    'gfdl-1.2-invariants-or-later': {'id': 'GFDL-1.2-invariants-or-later', 'deprecated': False},
    'gfdl-1.2-no-invariants-only': {'id': 'GFDL-1.2-no-invariants-only', 'deprecated': False},
    'gfdl-1.2-no-invariants-or-later': {'id': 'GFDL-1.2-no-invariants-or-later', 'deprecated': False},
    'gfdl-1.2-only': {'id': 'GFDL-1.2-only', 'deprecated': False},
    'gfdl-1.2-or-later': {'id': 'GFDL-1.2-or-later', 'deprecated': False},
    'gfdl-1.3': {'id': 'GFDL-1.3', 'deprecated': True},
    'gfdl-1.3-invariants-only': {'id': 'GFDL-1.3-invariants-only', 'deprecated': False},
    'gfdl-1.3-invariants-or-later': {'id': 'GFDL-1.3-invariants-or-later', 'deprecated': False},
    'gfdl-1.3-no-invariants-only': {'id': 'GFDL-1.3-no-invariants-only', 'deprecated': False},
    'gfdl-1.3-no-invariants-or-later': {'id': 'GFDL-1.3-no-invariants-or-later', 'deprecated': False},
    'gfdl-1.3-only': {'id': 'GFDL-1.3-only', 'deprecated': False},
    'gfdl-1.3-or-later': {'id': 'GFDL-1.3-or-later', 'deprecated': False},
    'giftware': {'id': 'Giftware', 'deprecated': False},
    'gl2ps': {'id': 'GL2PS', 'deprecated': False},
    'glide': {'id': 'Glide', 'deprecated': False},
    'glulxe': {'id': 'Glulxe', 'deprecated': False},
    'glwtpl': {'id': 'GLWTPL', 'deprecated': False},
    'gnuplot': {'id': 'gnuplot', 'deprecated': False},
    'gpl-1.0': {'id': 'GPL-1.0', 'deprecated': True},
    'gpl-1.0+': {'id': 'GPL-1.0+', 'deprecated': True},
    'gpl-1.0-only': {'id': 'GPL-1.0-only', 'deprecated': False},
    'gpl-1.0-or-later': {'id': 'GPL-1.0-or-later', 'deprecated': False},
    'gpl-2.0': {'id': 'GPL-2.0', 'deprecated': True},
    'gpl-2.0+': {'id': 'GPL-2.0+', 'deprecated': True},
    'gpl-2.0-only': {'id': 'GPL-2.0-only', 'deprecated': False},
    'gpl-2.0-or-later': {'id': 'GPL-2.0-or-later', 'deprecated': False},
    'gpl-2.0-with-autoconf-exception': {'id': 'GPL-2.0-with-autoconf-exception', 'deprecated': True},
|
||||
'gpl-2.0-with-bison-exception': {'id': 'GPL-2.0-with-bison-exception', 'deprecated': True},
|
||||
'gpl-2.0-with-classpath-exception': {'id': 'GPL-2.0-with-classpath-exception', 'deprecated': True},
|
||||
'gpl-2.0-with-font-exception': {'id': 'GPL-2.0-with-font-exception', 'deprecated': True},
|
||||
'gpl-2.0-with-gcc-exception': {'id': 'GPL-2.0-with-GCC-exception', 'deprecated': True},
|
||||
'gpl-3.0': {'id': 'GPL-3.0', 'deprecated': True},
|
||||
'gpl-3.0+': {'id': 'GPL-3.0+', 'deprecated': True},
|
||||
'gpl-3.0-only': {'id': 'GPL-3.0-only', 'deprecated': False},
|
||||
'gpl-3.0-or-later': {'id': 'GPL-3.0-or-later', 'deprecated': False},
|
||||
'gpl-3.0-with-autoconf-exception': {'id': 'GPL-3.0-with-autoconf-exception', 'deprecated': True},
|
||||
'gpl-3.0-with-gcc-exception': {'id': 'GPL-3.0-with-GCC-exception', 'deprecated': True},
|
||||
'graphics-gems': {'id': 'Graphics-Gems', 'deprecated': False},
|
||||
'gsoap-1.3b': {'id': 'gSOAP-1.3b', 'deprecated': False},
|
||||
'gtkbook': {'id': 'gtkbook', 'deprecated': False},
|
||||
'gutmann': {'id': 'Gutmann', 'deprecated': False},
|
||||
'haskellreport': {'id': 'HaskellReport', 'deprecated': False},
|
||||
'hdf5': {'id': 'HDF5', 'deprecated': False},
|
||||
'hdparm': {'id': 'hdparm', 'deprecated': False},
|
||||
'hidapi': {'id': 'HIDAPI', 'deprecated': False},
|
||||
'hippocratic-2.1': {'id': 'Hippocratic-2.1', 'deprecated': False},
|
||||
'hp-1986': {'id': 'HP-1986', 'deprecated': False},
|
||||
'hp-1989': {'id': 'HP-1989', 'deprecated': False},
|
||||
'hpnd': {'id': 'HPND', 'deprecated': False},
|
||||
'hpnd-dec': {'id': 'HPND-DEC', 'deprecated': False},
|
||||
'hpnd-doc': {'id': 'HPND-doc', 'deprecated': False},
|
||||
'hpnd-doc-sell': {'id': 'HPND-doc-sell', 'deprecated': False},
|
||||
'hpnd-export-us': {'id': 'HPND-export-US', 'deprecated': False},
|
||||
'hpnd-export-us-acknowledgement': {'id': 'HPND-export-US-acknowledgement', 'deprecated': False},
|
||||
'hpnd-export-us-modify': {'id': 'HPND-export-US-modify', 'deprecated': False},
|
||||
'hpnd-export2-us': {'id': 'HPND-export2-US', 'deprecated': False},
|
||||
'hpnd-fenneberg-livingston': {'id': 'HPND-Fenneberg-Livingston', 'deprecated': False},
|
||||
'hpnd-inria-imag': {'id': 'HPND-INRIA-IMAG', 'deprecated': False},
|
||||
'hpnd-intel': {'id': 'HPND-Intel', 'deprecated': False},
|
||||
'hpnd-kevlin-henney': {'id': 'HPND-Kevlin-Henney', 'deprecated': False},
|
||||
'hpnd-markus-kuhn': {'id': 'HPND-Markus-Kuhn', 'deprecated': False},
|
||||
'hpnd-merchantability-variant': {'id': 'HPND-merchantability-variant', 'deprecated': False},
|
||||
'hpnd-mit-disclaimer': {'id': 'HPND-MIT-disclaimer', 'deprecated': False},
|
||||
'hpnd-netrek': {'id': 'HPND-Netrek', 'deprecated': False},
|
||||
'hpnd-pbmplus': {'id': 'HPND-Pbmplus', 'deprecated': False},
|
||||
'hpnd-sell-mit-disclaimer-xserver': {'id': 'HPND-sell-MIT-disclaimer-xserver', 'deprecated': False},
|
||||
'hpnd-sell-regexpr': {'id': 'HPND-sell-regexpr', 'deprecated': False},
|
||||
'hpnd-sell-variant': {'id': 'HPND-sell-variant', 'deprecated': False},
|
||||
'hpnd-sell-variant-mit-disclaimer': {'id': 'HPND-sell-variant-MIT-disclaimer', 'deprecated': False},
|
||||
'hpnd-sell-variant-mit-disclaimer-rev': {'id': 'HPND-sell-variant-MIT-disclaimer-rev', 'deprecated': False},
|
||||
'hpnd-uc': {'id': 'HPND-UC', 'deprecated': False},
|
||||
'hpnd-uc-export-us': {'id': 'HPND-UC-export-US', 'deprecated': False},
|
||||
'htmltidy': {'id': 'HTMLTIDY', 'deprecated': False},
|
||||
'ibm-pibs': {'id': 'IBM-pibs', 'deprecated': False},
|
||||
'icu': {'id': 'ICU', 'deprecated': False},
|
||||
'iec-code-components-eula': {'id': 'IEC-Code-Components-EULA', 'deprecated': False},
|
||||
'ijg': {'id': 'IJG', 'deprecated': False},
|
||||
'ijg-short': {'id': 'IJG-short', 'deprecated': False},
|
||||
'imagemagick': {'id': 'ImageMagick', 'deprecated': False},
|
||||
'imatix': {'id': 'iMatix', 'deprecated': False},
|
||||
'imlib2': {'id': 'Imlib2', 'deprecated': False},
|
||||
'info-zip': {'id': 'Info-ZIP', 'deprecated': False},
|
||||
'inner-net-2.0': {'id': 'Inner-Net-2.0', 'deprecated': False},
|
||||
'innosetup': {'id': 'InnoSetup', 'deprecated': False},
|
||||
'intel': {'id': 'Intel', 'deprecated': False},
|
||||
'intel-acpi': {'id': 'Intel-ACPI', 'deprecated': False},
|
||||
'interbase-1.0': {'id': 'Interbase-1.0', 'deprecated': False},
|
||||
'ipa': {'id': 'IPA', 'deprecated': False},
|
||||
'ipl-1.0': {'id': 'IPL-1.0', 'deprecated': False},
|
||||
'isc': {'id': 'ISC', 'deprecated': False},
|
||||
'isc-veillard': {'id': 'ISC-Veillard', 'deprecated': False},
|
||||
'jam': {'id': 'Jam', 'deprecated': False},
|
||||
'jasper-2.0': {'id': 'JasPer-2.0', 'deprecated': False},
|
||||
'jove': {'id': 'jove', 'deprecated': False},
|
||||
'jpl-image': {'id': 'JPL-image', 'deprecated': False},
|
||||
'jpnic': {'id': 'JPNIC', 'deprecated': False},
|
||||
'json': {'id': 'JSON', 'deprecated': False},
|
||||
'kastrup': {'id': 'Kastrup', 'deprecated': False},
|
||||
'kazlib': {'id': 'Kazlib', 'deprecated': False},
|
||||
'knuth-ctan': {'id': 'Knuth-CTAN', 'deprecated': False},
|
||||
'lal-1.2': {'id': 'LAL-1.2', 'deprecated': False},
|
||||
'lal-1.3': {'id': 'LAL-1.3', 'deprecated': False},
|
||||
'latex2e': {'id': 'Latex2e', 'deprecated': False},
|
||||
'latex2e-translated-notice': {'id': 'Latex2e-translated-notice', 'deprecated': False},
|
||||
'leptonica': {'id': 'Leptonica', 'deprecated': False},
|
||||
'lgpl-2.0': {'id': 'LGPL-2.0', 'deprecated': True},
|
||||
'lgpl-2.0+': {'id': 'LGPL-2.0+', 'deprecated': True},
|
||||
'lgpl-2.0-only': {'id': 'LGPL-2.0-only', 'deprecated': False},
|
||||
'lgpl-2.0-or-later': {'id': 'LGPL-2.0-or-later', 'deprecated': False},
|
||||
'lgpl-2.1': {'id': 'LGPL-2.1', 'deprecated': True},
|
||||
'lgpl-2.1+': {'id': 'LGPL-2.1+', 'deprecated': True},
|
||||
'lgpl-2.1-only': {'id': 'LGPL-2.1-only', 'deprecated': False},
|
||||
'lgpl-2.1-or-later': {'id': 'LGPL-2.1-or-later', 'deprecated': False},
|
||||
'lgpl-3.0': {'id': 'LGPL-3.0', 'deprecated': True},
|
||||
'lgpl-3.0+': {'id': 'LGPL-3.0+', 'deprecated': True},
|
||||
'lgpl-3.0-only': {'id': 'LGPL-3.0-only', 'deprecated': False},
|
||||
'lgpl-3.0-or-later': {'id': 'LGPL-3.0-or-later', 'deprecated': False},
|
||||
'lgpllr': {'id': 'LGPLLR', 'deprecated': False},
|
||||
'libpng': {'id': 'Libpng', 'deprecated': False},
|
||||
'libpng-1.6.35': {'id': 'libpng-1.6.35', 'deprecated': False},
|
||||
'libpng-2.0': {'id': 'libpng-2.0', 'deprecated': False},
|
||||
'libselinux-1.0': {'id': 'libselinux-1.0', 'deprecated': False},
|
||||
'libtiff': {'id': 'libtiff', 'deprecated': False},
|
||||
'libutil-david-nugent': {'id': 'libutil-David-Nugent', 'deprecated': False},
|
||||
'liliq-p-1.1': {'id': 'LiLiQ-P-1.1', 'deprecated': False},
|
||||
'liliq-r-1.1': {'id': 'LiLiQ-R-1.1', 'deprecated': False},
|
||||
'liliq-rplus-1.1': {'id': 'LiLiQ-Rplus-1.1', 'deprecated': False},
|
||||
'linux-man-pages-1-para': {'id': 'Linux-man-pages-1-para', 'deprecated': False},
|
||||
'linux-man-pages-copyleft': {'id': 'Linux-man-pages-copyleft', 'deprecated': False},
|
||||
'linux-man-pages-copyleft-2-para': {'id': 'Linux-man-pages-copyleft-2-para', 'deprecated': False},
|
||||
'linux-man-pages-copyleft-var': {'id': 'Linux-man-pages-copyleft-var', 'deprecated': False},
|
||||
'linux-openib': {'id': 'Linux-OpenIB', 'deprecated': False},
|
||||
'loop': {'id': 'LOOP', 'deprecated': False},
|
||||
'lpd-document': {'id': 'LPD-document', 'deprecated': False},
|
||||
'lpl-1.0': {'id': 'LPL-1.0', 'deprecated': False},
|
||||
'lpl-1.02': {'id': 'LPL-1.02', 'deprecated': False},
|
||||
'lppl-1.0': {'id': 'LPPL-1.0', 'deprecated': False},
|
||||
'lppl-1.1': {'id': 'LPPL-1.1', 'deprecated': False},
|
||||
'lppl-1.2': {'id': 'LPPL-1.2', 'deprecated': False},
|
||||
'lppl-1.3a': {'id': 'LPPL-1.3a', 'deprecated': False},
|
||||
'lppl-1.3c': {'id': 'LPPL-1.3c', 'deprecated': False},
|
||||
'lsof': {'id': 'lsof', 'deprecated': False},
|
||||
'lucida-bitmap-fonts': {'id': 'Lucida-Bitmap-Fonts', 'deprecated': False},
|
||||
'lzma-sdk-9.11-to-9.20': {'id': 'LZMA-SDK-9.11-to-9.20', 'deprecated': False},
|
||||
'lzma-sdk-9.22': {'id': 'LZMA-SDK-9.22', 'deprecated': False},
|
||||
'mackerras-3-clause': {'id': 'Mackerras-3-Clause', 'deprecated': False},
|
||||
'mackerras-3-clause-acknowledgment': {'id': 'Mackerras-3-Clause-acknowledgment', 'deprecated': False},
|
||||
'magaz': {'id': 'magaz', 'deprecated': False},
|
||||
'mailprio': {'id': 'mailprio', 'deprecated': False},
|
||||
'makeindex': {'id': 'MakeIndex', 'deprecated': False},
|
||||
'man2html': {'id': 'man2html', 'deprecated': False},
|
||||
'martin-birgmeier': {'id': 'Martin-Birgmeier', 'deprecated': False},
|
||||
'mcphee-slideshow': {'id': 'McPhee-slideshow', 'deprecated': False},
|
||||
'metamail': {'id': 'metamail', 'deprecated': False},
|
||||
'minpack': {'id': 'Minpack', 'deprecated': False},
|
||||
'mips': {'id': 'MIPS', 'deprecated': False},
|
||||
'miros': {'id': 'MirOS', 'deprecated': False},
|
||||
'mit': {'id': 'MIT', 'deprecated': False},
|
||||
'mit-0': {'id': 'MIT-0', 'deprecated': False},
|
||||
'mit-advertising': {'id': 'MIT-advertising', 'deprecated': False},
|
||||
'mit-click': {'id': 'MIT-Click', 'deprecated': False},
|
||||
'mit-cmu': {'id': 'MIT-CMU', 'deprecated': False},
|
||||
'mit-enna': {'id': 'MIT-enna', 'deprecated': False},
|
||||
'mit-feh': {'id': 'MIT-feh', 'deprecated': False},
|
||||
'mit-festival': {'id': 'MIT-Festival', 'deprecated': False},
|
||||
'mit-khronos-old': {'id': 'MIT-Khronos-old', 'deprecated': False},
|
||||
'mit-modern-variant': {'id': 'MIT-Modern-Variant', 'deprecated': False},
|
||||
'mit-open-group': {'id': 'MIT-open-group', 'deprecated': False},
|
||||
'mit-testregex': {'id': 'MIT-testregex', 'deprecated': False},
|
||||
'mit-wu': {'id': 'MIT-Wu', 'deprecated': False},
|
||||
'mitnfa': {'id': 'MITNFA', 'deprecated': False},
|
||||
'mmixware': {'id': 'MMIXware', 'deprecated': False},
|
||||
'motosoto': {'id': 'Motosoto', 'deprecated': False},
|
||||
'mpeg-ssg': {'id': 'MPEG-SSG', 'deprecated': False},
|
||||
'mpi-permissive': {'id': 'mpi-permissive', 'deprecated': False},
|
||||
'mpich2': {'id': 'mpich2', 'deprecated': False},
|
||||
'mpl-1.0': {'id': 'MPL-1.0', 'deprecated': False},
|
||||
'mpl-1.1': {'id': 'MPL-1.1', 'deprecated': False},
|
||||
'mpl-2.0': {'id': 'MPL-2.0', 'deprecated': False},
|
||||
'mpl-2.0-no-copyleft-exception': {'id': 'MPL-2.0-no-copyleft-exception', 'deprecated': False},
|
||||
'mplus': {'id': 'mplus', 'deprecated': False},
|
||||
'ms-lpl': {'id': 'MS-LPL', 'deprecated': False},
|
||||
'ms-pl': {'id': 'MS-PL', 'deprecated': False},
|
||||
'ms-rl': {'id': 'MS-RL', 'deprecated': False},
|
||||
'mtll': {'id': 'MTLL', 'deprecated': False},
|
||||
'mulanpsl-1.0': {'id': 'MulanPSL-1.0', 'deprecated': False},
|
||||
'mulanpsl-2.0': {'id': 'MulanPSL-2.0', 'deprecated': False},
|
||||
'multics': {'id': 'Multics', 'deprecated': False},
|
||||
'mup': {'id': 'Mup', 'deprecated': False},
|
||||
'naist-2003': {'id': 'NAIST-2003', 'deprecated': False},
|
||||
'nasa-1.3': {'id': 'NASA-1.3', 'deprecated': False},
|
||||
'naumen': {'id': 'Naumen', 'deprecated': False},
|
||||
'nbpl-1.0': {'id': 'NBPL-1.0', 'deprecated': False},
|
||||
'ncbi-pd': {'id': 'NCBI-PD', 'deprecated': False},
|
||||
'ncgl-uk-2.0': {'id': 'NCGL-UK-2.0', 'deprecated': False},
|
||||
'ncl': {'id': 'NCL', 'deprecated': False},
|
||||
'ncsa': {'id': 'NCSA', 'deprecated': False},
|
||||
'net-snmp': {'id': 'Net-SNMP', 'deprecated': True},
|
||||
'netcdf': {'id': 'NetCDF', 'deprecated': False},
|
||||
'newsletr': {'id': 'Newsletr', 'deprecated': False},
|
||||
'ngpl': {'id': 'NGPL', 'deprecated': False},
|
||||
'ngrep': {'id': 'ngrep', 'deprecated': False},
|
||||
'nicta-1.0': {'id': 'NICTA-1.0', 'deprecated': False},
|
||||
'nist-pd': {'id': 'NIST-PD', 'deprecated': False},
|
||||
'nist-pd-fallback': {'id': 'NIST-PD-fallback', 'deprecated': False},
|
||||
'nist-software': {'id': 'NIST-Software', 'deprecated': False},
|
||||
'nlod-1.0': {'id': 'NLOD-1.0', 'deprecated': False},
|
||||
'nlod-2.0': {'id': 'NLOD-2.0', 'deprecated': False},
|
||||
'nlpl': {'id': 'NLPL', 'deprecated': False},
|
||||
'nokia': {'id': 'Nokia', 'deprecated': False},
|
||||
'nosl': {'id': 'NOSL', 'deprecated': False},
|
||||
'noweb': {'id': 'Noweb', 'deprecated': False},
|
||||
'npl-1.0': {'id': 'NPL-1.0', 'deprecated': False},
|
||||
'npl-1.1': {'id': 'NPL-1.1', 'deprecated': False},
|
||||
'nposl-3.0': {'id': 'NPOSL-3.0', 'deprecated': False},
|
||||
'nrl': {'id': 'NRL', 'deprecated': False},
|
||||
'ntia-pd': {'id': 'NTIA-PD', 'deprecated': False},
|
||||
'ntp': {'id': 'NTP', 'deprecated': False},
|
||||
'ntp-0': {'id': 'NTP-0', 'deprecated': False},
|
||||
'nunit': {'id': 'Nunit', 'deprecated': True},
|
||||
'o-uda-1.0': {'id': 'O-UDA-1.0', 'deprecated': False},
|
||||
'oar': {'id': 'OAR', 'deprecated': False},
|
||||
'occt-pl': {'id': 'OCCT-PL', 'deprecated': False},
|
||||
'oclc-2.0': {'id': 'OCLC-2.0', 'deprecated': False},
|
||||
'odbl-1.0': {'id': 'ODbL-1.0', 'deprecated': False},
|
||||
'odc-by-1.0': {'id': 'ODC-By-1.0', 'deprecated': False},
|
||||
'offis': {'id': 'OFFIS', 'deprecated': False},
|
||||
'ofl-1.0': {'id': 'OFL-1.0', 'deprecated': False},
|
||||
'ofl-1.0-no-rfn': {'id': 'OFL-1.0-no-RFN', 'deprecated': False},
|
||||
'ofl-1.0-rfn': {'id': 'OFL-1.0-RFN', 'deprecated': False},
|
||||
'ofl-1.1': {'id': 'OFL-1.1', 'deprecated': False},
|
||||
'ofl-1.1-no-rfn': {'id': 'OFL-1.1-no-RFN', 'deprecated': False},
|
||||
'ofl-1.1-rfn': {'id': 'OFL-1.1-RFN', 'deprecated': False},
|
||||
'ogc-1.0': {'id': 'OGC-1.0', 'deprecated': False},
|
||||
'ogdl-taiwan-1.0': {'id': 'OGDL-Taiwan-1.0', 'deprecated': False},
|
||||
'ogl-canada-2.0': {'id': 'OGL-Canada-2.0', 'deprecated': False},
|
||||
'ogl-uk-1.0': {'id': 'OGL-UK-1.0', 'deprecated': False},
|
||||
'ogl-uk-2.0': {'id': 'OGL-UK-2.0', 'deprecated': False},
|
||||
'ogl-uk-3.0': {'id': 'OGL-UK-3.0', 'deprecated': False},
|
||||
'ogtsl': {'id': 'OGTSL', 'deprecated': False},
|
||||
'oldap-1.1': {'id': 'OLDAP-1.1', 'deprecated': False},
|
||||
'oldap-1.2': {'id': 'OLDAP-1.2', 'deprecated': False},
|
||||
'oldap-1.3': {'id': 'OLDAP-1.3', 'deprecated': False},
|
||||
'oldap-1.4': {'id': 'OLDAP-1.4', 'deprecated': False},
|
||||
'oldap-2.0': {'id': 'OLDAP-2.0', 'deprecated': False},
|
||||
'oldap-2.0.1': {'id': 'OLDAP-2.0.1', 'deprecated': False},
|
||||
'oldap-2.1': {'id': 'OLDAP-2.1', 'deprecated': False},
|
||||
'oldap-2.2': {'id': 'OLDAP-2.2', 'deprecated': False},
|
||||
'oldap-2.2.1': {'id': 'OLDAP-2.2.1', 'deprecated': False},
|
||||
'oldap-2.2.2': {'id': 'OLDAP-2.2.2', 'deprecated': False},
|
||||
'oldap-2.3': {'id': 'OLDAP-2.3', 'deprecated': False},
|
||||
'oldap-2.4': {'id': 'OLDAP-2.4', 'deprecated': False},
|
||||
'oldap-2.5': {'id': 'OLDAP-2.5', 'deprecated': False},
|
||||
'oldap-2.6': {'id': 'OLDAP-2.6', 'deprecated': False},
|
||||
'oldap-2.7': {'id': 'OLDAP-2.7', 'deprecated': False},
|
||||
'oldap-2.8': {'id': 'OLDAP-2.8', 'deprecated': False},
|
||||
'olfl-1.3': {'id': 'OLFL-1.3', 'deprecated': False},
|
||||
'oml': {'id': 'OML', 'deprecated': False},
|
||||
'openpbs-2.3': {'id': 'OpenPBS-2.3', 'deprecated': False},
|
||||
'openssl': {'id': 'OpenSSL', 'deprecated': False},
|
||||
'openssl-standalone': {'id': 'OpenSSL-standalone', 'deprecated': False},
|
||||
'openvision': {'id': 'OpenVision', 'deprecated': False},
|
||||
'opl-1.0': {'id': 'OPL-1.0', 'deprecated': False},
|
||||
'opl-uk-3.0': {'id': 'OPL-UK-3.0', 'deprecated': False},
|
||||
'opubl-1.0': {'id': 'OPUBL-1.0', 'deprecated': False},
|
||||
'oset-pl-2.1': {'id': 'OSET-PL-2.1', 'deprecated': False},
|
||||
'osl-1.0': {'id': 'OSL-1.0', 'deprecated': False},
|
||||
'osl-1.1': {'id': 'OSL-1.1', 'deprecated': False},
|
||||
'osl-2.0': {'id': 'OSL-2.0', 'deprecated': False},
|
||||
'osl-2.1': {'id': 'OSL-2.1', 'deprecated': False},
|
||||
'osl-3.0': {'id': 'OSL-3.0', 'deprecated': False},
|
||||
'padl': {'id': 'PADL', 'deprecated': False},
|
||||
'parity-6.0.0': {'id': 'Parity-6.0.0', 'deprecated': False},
|
||||
'parity-7.0.0': {'id': 'Parity-7.0.0', 'deprecated': False},
|
||||
'pddl-1.0': {'id': 'PDDL-1.0', 'deprecated': False},
|
||||
'php-3.0': {'id': 'PHP-3.0', 'deprecated': False},
|
||||
'php-3.01': {'id': 'PHP-3.01', 'deprecated': False},
|
||||
'pixar': {'id': 'Pixar', 'deprecated': False},
|
||||
'pkgconf': {'id': 'pkgconf', 'deprecated': False},
|
||||
'plexus': {'id': 'Plexus', 'deprecated': False},
|
||||
'pnmstitch': {'id': 'pnmstitch', 'deprecated': False},
|
||||
'polyform-noncommercial-1.0.0': {'id': 'PolyForm-Noncommercial-1.0.0', 'deprecated': False},
|
||||
'polyform-small-business-1.0.0': {'id': 'PolyForm-Small-Business-1.0.0', 'deprecated': False},
|
||||
'postgresql': {'id': 'PostgreSQL', 'deprecated': False},
|
||||
'ppl': {'id': 'PPL', 'deprecated': False},
|
||||
'psf-2.0': {'id': 'PSF-2.0', 'deprecated': False},
|
||||
'psfrag': {'id': 'psfrag', 'deprecated': False},
|
||||
'psutils': {'id': 'psutils', 'deprecated': False},
|
||||
'python-2.0': {'id': 'Python-2.0', 'deprecated': False},
|
||||
'python-2.0.1': {'id': 'Python-2.0.1', 'deprecated': False},
|
||||
'python-ldap': {'id': 'python-ldap', 'deprecated': False},
|
||||
'qhull': {'id': 'Qhull', 'deprecated': False},
|
||||
'qpl-1.0': {'id': 'QPL-1.0', 'deprecated': False},
|
||||
'qpl-1.0-inria-2004': {'id': 'QPL-1.0-INRIA-2004', 'deprecated': False},
|
||||
'radvd': {'id': 'radvd', 'deprecated': False},
|
||||
'rdisc': {'id': 'Rdisc', 'deprecated': False},
|
||||
'rhecos-1.1': {'id': 'RHeCos-1.1', 'deprecated': False},
|
||||
'rpl-1.1': {'id': 'RPL-1.1', 'deprecated': False},
|
||||
'rpl-1.5': {'id': 'RPL-1.5', 'deprecated': False},
|
||||
'rpsl-1.0': {'id': 'RPSL-1.0', 'deprecated': False},
|
||||
'rsa-md': {'id': 'RSA-MD', 'deprecated': False},
|
||||
'rscpl': {'id': 'RSCPL', 'deprecated': False},
|
||||
'ruby': {'id': 'Ruby', 'deprecated': False},
|
||||
'ruby-pty': {'id': 'Ruby-pty', 'deprecated': False},
|
||||
'sax-pd': {'id': 'SAX-PD', 'deprecated': False},
|
||||
'sax-pd-2.0': {'id': 'SAX-PD-2.0', 'deprecated': False},
|
||||
'saxpath': {'id': 'Saxpath', 'deprecated': False},
|
||||
'scea': {'id': 'SCEA', 'deprecated': False},
|
||||
'schemereport': {'id': 'SchemeReport', 'deprecated': False},
|
||||
'sendmail': {'id': 'Sendmail', 'deprecated': False},
|
||||
'sendmail-8.23': {'id': 'Sendmail-8.23', 'deprecated': False},
|
||||
'sendmail-open-source-1.1': {'id': 'Sendmail-Open-Source-1.1', 'deprecated': False},
|
||||
'sgi-b-1.0': {'id': 'SGI-B-1.0', 'deprecated': False},
|
||||
'sgi-b-1.1': {'id': 'SGI-B-1.1', 'deprecated': False},
|
||||
'sgi-b-2.0': {'id': 'SGI-B-2.0', 'deprecated': False},
|
||||
'sgi-opengl': {'id': 'SGI-OpenGL', 'deprecated': False},
|
||||
'sgp4': {'id': 'SGP4', 'deprecated': False},
|
||||
'shl-0.5': {'id': 'SHL-0.5', 'deprecated': False},
|
||||
'shl-0.51': {'id': 'SHL-0.51', 'deprecated': False},
|
||||
'simpl-2.0': {'id': 'SimPL-2.0', 'deprecated': False},
|
||||
'sissl': {'id': 'SISSL', 'deprecated': False},
|
||||
'sissl-1.2': {'id': 'SISSL-1.2', 'deprecated': False},
|
||||
'sl': {'id': 'SL', 'deprecated': False},
|
||||
'sleepycat': {'id': 'Sleepycat', 'deprecated': False},
|
||||
'smail-gpl': {'id': 'SMAIL-GPL', 'deprecated': False},
|
||||
'smlnj': {'id': 'SMLNJ', 'deprecated': False},
|
||||
'smppl': {'id': 'SMPPL', 'deprecated': False},
|
||||
'snia': {'id': 'SNIA', 'deprecated': False},
|
||||
'snprintf': {'id': 'snprintf', 'deprecated': False},
|
||||
'sofa': {'id': 'SOFA', 'deprecated': False},
|
||||
'softsurfer': {'id': 'softSurfer', 'deprecated': False},
|
||||
'soundex': {'id': 'Soundex', 'deprecated': False},
|
||||
'spencer-86': {'id': 'Spencer-86', 'deprecated': False},
|
||||
'spencer-94': {'id': 'Spencer-94', 'deprecated': False},
|
||||
'spencer-99': {'id': 'Spencer-99', 'deprecated': False},
|
||||
'spl-1.0': {'id': 'SPL-1.0', 'deprecated': False},
|
||||
'ssh-keyscan': {'id': 'ssh-keyscan', 'deprecated': False},
|
||||
'ssh-openssh': {'id': 'SSH-OpenSSH', 'deprecated': False},
|
||||
'ssh-short': {'id': 'SSH-short', 'deprecated': False},
|
||||
'ssleay-standalone': {'id': 'SSLeay-standalone', 'deprecated': False},
|
||||
'sspl-1.0': {'id': 'SSPL-1.0', 'deprecated': False},
|
||||
'standardml-nj': {'id': 'StandardML-NJ', 'deprecated': True},
|
||||
'sugarcrm-1.1.3': {'id': 'SugarCRM-1.1.3', 'deprecated': False},
|
||||
'sul-1.0': {'id': 'SUL-1.0', 'deprecated': False},
|
||||
'sun-ppp': {'id': 'Sun-PPP', 'deprecated': False},
|
||||
'sun-ppp-2000': {'id': 'Sun-PPP-2000', 'deprecated': False},
|
||||
'sunpro': {'id': 'SunPro', 'deprecated': False},
|
||||
'swl': {'id': 'SWL', 'deprecated': False},
|
||||
'swrule': {'id': 'swrule', 'deprecated': False},
|
||||
'symlinks': {'id': 'Symlinks', 'deprecated': False},
|
||||
'tapr-ohl-1.0': {'id': 'TAPR-OHL-1.0', 'deprecated': False},
|
||||
'tcl': {'id': 'TCL', 'deprecated': False},
|
||||
'tcp-wrappers': {'id': 'TCP-wrappers', 'deprecated': False},
|
||||
'termreadkey': {'id': 'TermReadKey', 'deprecated': False},
|
||||
'tgppl-1.0': {'id': 'TGPPL-1.0', 'deprecated': False},
|
||||
'thirdeye': {'id': 'ThirdEye', 'deprecated': False},
|
||||
'threeparttable': {'id': 'threeparttable', 'deprecated': False},
|
||||
'tmate': {'id': 'TMate', 'deprecated': False},
|
||||
'torque-1.1': {'id': 'TORQUE-1.1', 'deprecated': False},
|
||||
'tosl': {'id': 'TOSL', 'deprecated': False},
|
||||
'tpdl': {'id': 'TPDL', 'deprecated': False},
|
||||
'tpl-1.0': {'id': 'TPL-1.0', 'deprecated': False},
|
||||
'trustedqsl': {'id': 'TrustedQSL', 'deprecated': False},
|
||||
'ttwl': {'id': 'TTWL', 'deprecated': False},
|
||||
'ttyp0': {'id': 'TTYP0', 'deprecated': False},
|
||||
'tu-berlin-1.0': {'id': 'TU-Berlin-1.0', 'deprecated': False},
|
||||
'tu-berlin-2.0': {'id': 'TU-Berlin-2.0', 'deprecated': False},
|
||||
'ubuntu-font-1.0': {'id': 'Ubuntu-font-1.0', 'deprecated': False},
|
||||
'ucar': {'id': 'UCAR', 'deprecated': False},
|
||||
'ucl-1.0': {'id': 'UCL-1.0', 'deprecated': False},
|
||||
'ulem': {'id': 'ulem', 'deprecated': False},
|
||||
'umich-merit': {'id': 'UMich-Merit', 'deprecated': False},
|
||||
'unicode-3.0': {'id': 'Unicode-3.0', 'deprecated': False},
|
||||
'unicode-dfs-2015': {'id': 'Unicode-DFS-2015', 'deprecated': False},
|
||||
'unicode-dfs-2016': {'id': 'Unicode-DFS-2016', 'deprecated': False},
|
||||
'unicode-tou': {'id': 'Unicode-TOU', 'deprecated': False},
|
||||
'unixcrypt': {'id': 'UnixCrypt', 'deprecated': False},
|
||||
'unlicense': {'id': 'Unlicense', 'deprecated': False},
|
||||
'unlicense-libtelnet': {'id': 'Unlicense-libtelnet', 'deprecated': False},
|
||||
'unlicense-libwhirlpool': {'id': 'Unlicense-libwhirlpool', 'deprecated': False},
|
||||
'upl-1.0': {'id': 'UPL-1.0', 'deprecated': False},
|
||||
'urt-rle': {'id': 'URT-RLE', 'deprecated': False},
|
||||
'vim': {'id': 'Vim', 'deprecated': False},
|
||||
'vostrom': {'id': 'VOSTROM', 'deprecated': False},
|
||||
'vsl-1.0': {'id': 'VSL-1.0', 'deprecated': False},
|
||||
'w3c': {'id': 'W3C', 'deprecated': False},
|
||||
'w3c-19980720': {'id': 'W3C-19980720', 'deprecated': False},
|
||||
'w3c-20150513': {'id': 'W3C-20150513', 'deprecated': False},
|
||||
'w3m': {'id': 'w3m', 'deprecated': False},
|
||||
'watcom-1.0': {'id': 'Watcom-1.0', 'deprecated': False},
|
||||
'widget-workshop': {'id': 'Widget-Workshop', 'deprecated': False},
|
||||
'wsuipa': {'id': 'Wsuipa', 'deprecated': False},
|
||||
'wtfpl': {'id': 'WTFPL', 'deprecated': False},
|
||||
'wwl': {'id': 'wwl', 'deprecated': False},
|
||||
'wxwindows': {'id': 'wxWindows', 'deprecated': True},
|
||||
'x11': {'id': 'X11', 'deprecated': False},
|
||||
'x11-distribute-modifications-variant': {'id': 'X11-distribute-modifications-variant', 'deprecated': False},
|
||||
'x11-swapped': {'id': 'X11-swapped', 'deprecated': False},
|
||||
'xdebug-1.03': {'id': 'Xdebug-1.03', 'deprecated': False},
|
||||
'xerox': {'id': 'Xerox', 'deprecated': False},
|
||||
'xfig': {'id': 'Xfig', 'deprecated': False},
|
||||
'xfree86-1.1': {'id': 'XFree86-1.1', 'deprecated': False},
|
||||
'xinetd': {'id': 'xinetd', 'deprecated': False},
|
||||
'xkeyboard-config-zinoviev': {'id': 'xkeyboard-config-Zinoviev', 'deprecated': False},
|
||||
'xlock': {'id': 'xlock', 'deprecated': False},
|
||||
'xnet': {'id': 'Xnet', 'deprecated': False},
|
||||
'xpp': {'id': 'xpp', 'deprecated': False},
|
||||
'xskat': {'id': 'XSkat', 'deprecated': False},
|
||||
'xzoom': {'id': 'xzoom', 'deprecated': False},
|
||||
'ypl-1.0': {'id': 'YPL-1.0', 'deprecated': False},
|
||||
'ypl-1.1': {'id': 'YPL-1.1', 'deprecated': False},
|
||||
'zed': {'id': 'Zed', 'deprecated': False},
|
||||
'zeeff': {'id': 'Zeeff', 'deprecated': False},
|
||||
'zend-2.0': {'id': 'Zend-2.0', 'deprecated': False},
|
||||
'zimbra-1.3': {'id': 'Zimbra-1.3', 'deprecated': False},
|
||||
'zimbra-1.4': {'id': 'Zimbra-1.4', 'deprecated': False},
|
||||
'zlib': {'id': 'Zlib', 'deprecated': False},
|
||||
'zlib-acknowledgement': {'id': 'zlib-acknowledgement', 'deprecated': False},
|
||||
'zpl-1.1': {'id': 'ZPL-1.1', 'deprecated': False},
|
||||
'zpl-2.0': {'id': 'ZPL-2.0', 'deprecated': False},
|
||||
'zpl-2.1': {'id': 'ZPL-2.1', 'deprecated': False},
|
||||
}
|
||||
|
||||
EXCEPTIONS: dict[str, SPDXException] = {
    '389-exception': {'id': '389-exception', 'deprecated': False},
    'asterisk-exception': {'id': 'Asterisk-exception', 'deprecated': False},
    'asterisk-linking-protocols-exception': {'id': 'Asterisk-linking-protocols-exception', 'deprecated': False},
    'autoconf-exception-2.0': {'id': 'Autoconf-exception-2.0', 'deprecated': False},
    'autoconf-exception-3.0': {'id': 'Autoconf-exception-3.0', 'deprecated': False},
    'autoconf-exception-generic': {'id': 'Autoconf-exception-generic', 'deprecated': False},
    'autoconf-exception-generic-3.0': {'id': 'Autoconf-exception-generic-3.0', 'deprecated': False},
    'autoconf-exception-macro': {'id': 'Autoconf-exception-macro', 'deprecated': False},
    'bison-exception-1.24': {'id': 'Bison-exception-1.24', 'deprecated': False},
    'bison-exception-2.2': {'id': 'Bison-exception-2.2', 'deprecated': False},
    'bootloader-exception': {'id': 'Bootloader-exception', 'deprecated': False},
    'cgal-linking-exception': {'id': 'CGAL-linking-exception', 'deprecated': False},
    'classpath-exception-2.0': {'id': 'Classpath-exception-2.0', 'deprecated': False},
    'clisp-exception-2.0': {'id': 'CLISP-exception-2.0', 'deprecated': False},
    'cryptsetup-openssl-exception': {'id': 'cryptsetup-OpenSSL-exception', 'deprecated': False},
    'digia-qt-lgpl-exception-1.1': {'id': 'Digia-Qt-LGPL-exception-1.1', 'deprecated': False},
    'digirule-foss-exception': {'id': 'DigiRule-FOSS-exception', 'deprecated': False},
    'ecos-exception-2.0': {'id': 'eCos-exception-2.0', 'deprecated': False},
    'erlang-otp-linking-exception': {'id': 'erlang-otp-linking-exception', 'deprecated': False},
    'fawkes-runtime-exception': {'id': 'Fawkes-Runtime-exception', 'deprecated': False},
    'fltk-exception': {'id': 'FLTK-exception', 'deprecated': False},
    'fmt-exception': {'id': 'fmt-exception', 'deprecated': False},
    'font-exception-2.0': {'id': 'Font-exception-2.0', 'deprecated': False},
    'freertos-exception-2.0': {'id': 'freertos-exception-2.0', 'deprecated': False},
    'gcc-exception-2.0': {'id': 'GCC-exception-2.0', 'deprecated': False},
    'gcc-exception-2.0-note': {'id': 'GCC-exception-2.0-note', 'deprecated': False},
    'gcc-exception-3.1': {'id': 'GCC-exception-3.1', 'deprecated': False},
    'gmsh-exception': {'id': 'Gmsh-exception', 'deprecated': False},
    'gnat-exception': {'id': 'GNAT-exception', 'deprecated': False},
    'gnome-examples-exception': {'id': 'GNOME-examples-exception', 'deprecated': False},
    'gnu-compiler-exception': {'id': 'GNU-compiler-exception', 'deprecated': False},
    'gnu-javamail-exception': {'id': 'gnu-javamail-exception', 'deprecated': False},
    'gpl-3.0-389-ds-base-exception': {'id': 'GPL-3.0-389-ds-base-exception', 'deprecated': False},
    'gpl-3.0-interface-exception': {'id': 'GPL-3.0-interface-exception', 'deprecated': False},
    'gpl-3.0-linking-exception': {'id': 'GPL-3.0-linking-exception', 'deprecated': False},
    'gpl-3.0-linking-source-exception': {'id': 'GPL-3.0-linking-source-exception', 'deprecated': False},
    'gpl-cc-1.0': {'id': 'GPL-CC-1.0', 'deprecated': False},
    'gstreamer-exception-2005': {'id': 'GStreamer-exception-2005', 'deprecated': False},
    'gstreamer-exception-2008': {'id': 'GStreamer-exception-2008', 'deprecated': False},
    'harbour-exception': {'id': 'harbour-exception', 'deprecated': False},
    'i2p-gpl-java-exception': {'id': 'i2p-gpl-java-exception', 'deprecated': False},
    'independent-modules-exception': {'id': 'Independent-modules-exception', 'deprecated': False},
    'kicad-libraries-exception': {'id': 'KiCad-libraries-exception', 'deprecated': False},
    'lgpl-3.0-linking-exception': {'id': 'LGPL-3.0-linking-exception', 'deprecated': False},
    'libpri-openh323-exception': {'id': 'libpri-OpenH323-exception', 'deprecated': False},
    'libtool-exception': {'id': 'Libtool-exception', 'deprecated': False},
    'linux-syscall-note': {'id': 'Linux-syscall-note', 'deprecated': False},
    'llgpl': {'id': 'LLGPL', 'deprecated': False},
    'llvm-exception': {'id': 'LLVM-exception', 'deprecated': False},
    'lzma-exception': {'id': 'LZMA-exception', 'deprecated': False},
    'mif-exception': {'id': 'mif-exception', 'deprecated': False},
    'mxml-exception': {'id': 'mxml-exception', 'deprecated': False},
    'nokia-qt-exception-1.1': {'id': 'Nokia-Qt-exception-1.1', 'deprecated': True},
    'ocaml-lgpl-linking-exception': {'id': 'OCaml-LGPL-linking-exception', 'deprecated': False},
    'occt-exception-1.0': {'id': 'OCCT-exception-1.0', 'deprecated': False},
    'openjdk-assembly-exception-1.0': {'id': 'OpenJDK-assembly-exception-1.0', 'deprecated': False},
    'openvpn-openssl-exception': {'id': 'openvpn-openssl-exception', 'deprecated': False},
    'pcre2-exception': {'id': 'PCRE2-exception', 'deprecated': False},
    'polyparse-exception': {'id': 'polyparse-exception', 'deprecated': False},
    'ps-or-pdf-font-exception-20170817': {'id': 'PS-or-PDF-font-exception-20170817', 'deprecated': False},
    'qpl-1.0-inria-2004-exception': {'id': 'QPL-1.0-INRIA-2004-exception', 'deprecated': False},
    'qt-gpl-exception-1.0': {'id': 'Qt-GPL-exception-1.0', 'deprecated': False},
    'qt-lgpl-exception-1.1': {'id': 'Qt-LGPL-exception-1.1', 'deprecated': False},
    'qwt-exception-1.0': {'id': 'Qwt-exception-1.0', 'deprecated': False},
    'romic-exception': {'id': 'romic-exception', 'deprecated': False},
    'rrdtool-floss-exception-2.0': {'id': 'RRDtool-FLOSS-exception-2.0', 'deprecated': False},
    'sane-exception': {'id': 'SANE-exception', 'deprecated': False},
    'shl-2.0': {'id': 'SHL-2.0', 'deprecated': False},
    'shl-2.1': {'id': 'SHL-2.1', 'deprecated': False},
    'stunnel-exception': {'id': 'stunnel-exception', 'deprecated': False},
    'swi-exception': {'id': 'SWI-exception', 'deprecated': False},
    'swift-exception': {'id': 'Swift-exception', 'deprecated': False},
    'texinfo-exception': {'id': 'Texinfo-exception', 'deprecated': False},
    'u-boot-exception-2.0': {'id': 'u-boot-exception-2.0', 'deprecated': False},
    'ubdl-exception': {'id': 'UBDL-exception', 'deprecated': False},
    'universal-foss-exception-1.0': {'id': 'Universal-FOSS-exception-1.0', 'deprecated': False},
    'vsftpd-openssl-exception': {'id': 'vsftpd-openssl-exception', 'deprecated': False},
    'wxwindows-exception-3.1': {'id': 'WxWindows-exception-3.1', 'deprecated': False},
    'x11vnc-openssl-exception': {'id': 'x11vnc-openssl-exception', 'deprecated': False},
}

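# An illustrative sketch (not part of the vendored file; `canonical_spdx_id`
# is a hypothetical helper): both tables above key each SPDX identifier by
# its lowercased form while the 'id' field keeps the canonical casing, so a
# case-insensitive lookup that recovers the canonical spelling can be as
# simple as:
#
#     def canonical_spdx_id(candidate: str) -> str | None:
#         entry = LICENSES.get(candidate.lower()) or EXCEPTIONS.get(candidate.lower())
#         return entry['id'] if entry else None
#
#     canonical_spdx_id('mit')             # -> 'MIT'
#     canonical_spdx_id('LLVM-Exception')  # -> 'LLVM-exception'

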
@@ -0,0 +1,388 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import annotations

import operator
import os
import platform
import sys
from typing import AbstractSet, Callable, Literal, Mapping, TypedDict, Union, cast

from ._parser import MarkerAtom, MarkerList, Op, Value, Variable
from ._parser import parse_marker as _parse_marker
from ._tokenizer import ParserSyntaxError
from .specifiers import InvalidSpecifier, Specifier
from .utils import canonicalize_name

__all__ = [
    "Environment",
    "EvaluateContext",
    "InvalidMarker",
    "Marker",
    "UndefinedComparison",
    "UndefinedEnvironmentName",
    "default_environment",
]

Operator = Callable[[str, Union[str, AbstractSet[str]]], bool]
EvaluateContext = Literal["metadata", "lock_file", "requirement"]
MARKERS_ALLOWING_SET = {"extras", "dependency_groups"}
MARKERS_REQUIRING_VERSION = {
    "implementation_version",
    "platform_release",
    "python_full_version",
    "python_version",
}


class InvalidMarker(ValueError):
    """
    An invalid marker was found, users should refer to PEP 508.
    """


class UndefinedComparison(ValueError):
    """
    An invalid operation was attempted on a value that doesn't support it.
    """


class UndefinedEnvironmentName(ValueError):
    """
    A name was attempted to be used that does not exist inside of the
    environment.
    """


class Environment(TypedDict):
    implementation_name: str
    """The implementation's identifier, e.g. ``'cpython'``."""

    implementation_version: str
    """
    The implementation's version, e.g. ``'3.13.0a2'`` for CPython 3.13.0a2, or
    ``'7.3.13'`` for PyPy3.10 v7.3.13.
    """

    os_name: str
    """
    The value of :py:data:`os.name`. The name of the operating system dependent module
    imported, e.g. ``'posix'``.
    """

    platform_machine: str
    """
    Returns the machine type, e.g. ``'i386'``.

    An empty string if the value cannot be determined.
    """

    platform_release: str
    """
    The system's release, e.g. ``'2.2.0'`` or ``'NT'``.

    An empty string if the value cannot be determined.
    """

    platform_system: str
    """
    The system/OS name, e.g. ``'Linux'``, ``'Windows'`` or ``'Java'``.

    An empty string if the value cannot be determined.
    """

    platform_version: str
    """
    The system's release version, e.g. ``'#3 on degas'``.

    An empty string if the value cannot be determined.
    """

    python_full_version: str
    """
    The Python version as string ``'major.minor.patchlevel'``.

    Note that unlike the Python :py:data:`sys.version`, this value will always include
    the patchlevel (it defaults to 0).
    """

    platform_python_implementation: str
    """
    A string identifying the Python implementation, e.g. ``'CPython'``.
    """

    python_version: str
    """The Python version as string ``'major.minor'``."""

    sys_platform: str
    """
    This string contains a platform identifier that can be used to append
    platform-specific components to :py:data:`sys.path`, for instance.

    For Unix systems, except on Linux and AIX, this is the lowercased OS name as
    returned by ``uname -s`` with the first part of the version as returned by
    ``uname -r`` appended, e.g. ``'sunos5'`` or ``'freebsd8'``, at the time when Python
    was built.
    """


def _normalize_extras(
    result: MarkerList | MarkerAtom | str,
) -> MarkerList | MarkerAtom | str:
    if not isinstance(result, tuple):
        return result

    lhs, op, rhs = result
    if isinstance(lhs, Variable) and lhs.value == "extra":
        normalized_extra = canonicalize_name(rhs.value)
        rhs = Value(normalized_extra)
    elif isinstance(rhs, Variable) and rhs.value == "extra":
        normalized_extra = canonicalize_name(lhs.value)
        lhs = Value(normalized_extra)
    return lhs, op, rhs


def _normalize_extra_values(results: MarkerList) -> MarkerList:
    """
    Normalize extra values.
    """

    return [_normalize_extras(r) for r in results]


def _format_marker(
    marker: list[str] | MarkerAtom | str, first: bool | None = True
) -> str:
    assert isinstance(marker, (list, tuple, str))

    # Sometimes we have a structure like [[...]] which is a single item list
    # where the single item is itself its own list. In that case we want to skip
    # the rest of this function so that we don't get extraneous () on the
    # outside.
    if (
        isinstance(marker, list)
        and len(marker) == 1
        and isinstance(marker[0], (list, tuple))
    ):
        return _format_marker(marker[0])

    if isinstance(marker, list):
        inner = (_format_marker(m, first=False) for m in marker)
        if first:
            return " ".join(inner)
        else:
            return "(" + " ".join(inner) + ")"
    elif isinstance(marker, tuple):
        return " ".join([m.serialize() for m in marker])
    else:
        return marker


_operators: dict[str, Operator] = {
    "in": lambda lhs, rhs: lhs in rhs,
    "not in": lambda lhs, rhs: lhs not in rhs,
    "<": operator.lt,
    "<=": operator.le,
    "==": operator.eq,
    "!=": operator.ne,
    ">=": operator.ge,
    ">": operator.gt,
}


def _eval_op(lhs: str, op: Op, rhs: str | AbstractSet[str], *, key: str) -> bool:
    op_str = op.serialize()
    if key in MARKERS_REQUIRING_VERSION:
        try:
            spec = Specifier(f"{op_str}{rhs}")
        except InvalidSpecifier:
            pass
        else:
            return spec.contains(lhs, prereleases=True)

    oper: Operator | None = _operators.get(op_str)
    if oper is None:
        raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")

    return oper(lhs, rhs)

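# An illustrative sketch of _eval_op's behaviour (not part of the vendored
# file; the Op(...) construction is an assumption about packaging._parser's
# Node API): for keys in MARKERS_REQUIRING_VERSION the comparison is
# delegated to a PEP 440 Specifier, so values compare as versions rather than
# as strings; everything else falls back to the _operators table.
#
#     _eval_op("3.10.1", Op(">="), "3.10", key="python_version")   # True
#     _eval_op("3.9", Op(">="), "3.10", key="python_version")      # False (version compare)
#     _eval_op("Linux", Op("=="), "Linux", key="platform_system")  # True (string compare)

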
def _normalize(
    lhs: str, rhs: str | AbstractSet[str], key: str
) -> tuple[str, str | AbstractSet[str]]:
    # PEP 685 - Comparison of extra names for optional distribution dependencies
    # https://peps.python.org/pep-0685/
    # > When comparing extra names, tools MUST normalize the names being
    # > compared using the semantics outlined in PEP 503 for names
    if key == "extra":
        assert isinstance(rhs, str), "extra value must be a string"
        # Both sides are normalized at this point already
        return (lhs, rhs)
    if key in MARKERS_ALLOWING_SET:
        if isinstance(rhs, str):  # pragma: no cover
            return (canonicalize_name(lhs), canonicalize_name(rhs))
        else:
            return (canonicalize_name(lhs), {canonicalize_name(v) for v in rhs})

    # other environment markers don't have such standards
    return lhs, rhs


def _evaluate_markers(
    markers: MarkerList, environment: dict[str, str | AbstractSet[str]]
) -> bool:
    groups: list[list[bool]] = [[]]

    for marker in markers:
        if isinstance(marker, list):
            groups[-1].append(_evaluate_markers(marker, environment))
        elif isinstance(marker, tuple):
            lhs, op, rhs = marker

            if isinstance(lhs, Variable):
                environment_key = lhs.value
                lhs_value = environment[environment_key]
                rhs_value = rhs.value
            else:
                lhs_value = lhs.value
                environment_key = rhs.value
                rhs_value = environment[environment_key]

            assert isinstance(lhs_value, str), "lhs must be a string"
            lhs_value, rhs_value = _normalize(lhs_value, rhs_value, key=environment_key)
            groups[-1].append(_eval_op(lhs_value, op, rhs_value, key=environment_key))
        elif marker == "or":
            groups.append([])
        elif marker == "and":
            pass
        else:  # pragma: nocover
            raise TypeError(f"Unexpected marker {marker!r}")

    return any(all(item) for item in groups)

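# An illustrative sketch of the evaluation strategy above (not part of the
# vendored file): the parsed marker list is treated as disjunctive normal
# form. Each "or" starts a new group, "and" keeps appending to the current
# group, and the whole expression is truthy when any group is all-true.
# So for the marker  a and b or c :
#
#     groups == [[eval(a), eval(b)], [eval(c)]]
#     result == (eval(a) and eval(b)) or eval(c)

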
def format_full_version(info: sys._version_info) -> str:
    version = f"{info.major}.{info.minor}.{info.micro}"
    kind = info.releaselevel
    if kind != "final":
        version += kind[0] + str(info.serial)
    return version


def default_environment() -> Environment:
    iver = format_full_version(sys.implementation.version)
    implementation_name = sys.implementation.name
    return {
        "implementation_name": implementation_name,
        "implementation_version": iver,
        "os_name": os.name,
        "platform_machine": platform.machine(),
        "platform_release": platform.release(),
        "platform_system": platform.system(),
        "platform_version": platform.version(),
        "python_full_version": platform.python_version(),
        "platform_python_implementation": platform.python_implementation(),
        "python_version": ".".join(platform.python_version_tuple()[:2]),
        "sys_platform": sys.platform,
    }

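# Illustrative usage (a sketch, assuming pip's vendored import path):
#
#     from pip._vendor.packaging.markers import default_environment
#     env = default_environment()
#     env["python_version"]  # e.g. '3.9'
#     env["sys_platform"]    # e.g. 'linux'

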
class Marker:
    def __init__(self, marker: str) -> None:
        # Note: We create a Marker object without calling this constructor in
        #       packaging.requirements.Requirement. If any additional logic is
        #       added here, make sure to mirror/adapt Requirement.

        # If this fails and throws an error, the repr still expects _markers to
        # be defined.
        self._markers: MarkerList = []

        try:
            self._markers = _normalize_extra_values(_parse_marker(marker))
            # The attribute `_markers` can be described in terms of a recursive type:
            # MarkerList = List[Union[Tuple[Node, ...], str, MarkerList]]
            #
            # For example, the following expression:
            # python_version > "3.6" or (python_version == "3.6" and os_name == "unix")
            #
            # is parsed into:
            # [
            #     (<Variable('python_version')>, <Op('>')>, <Value('3.6')>),
            #     'or',
            #     [
            #         (<Variable('python_version')>, <Op('==')>, <Value('3.6')>),
            #         'and',
            #         (<Variable('os_name')>, <Op('==')>, <Value('unix')>)
            #     ]
            # ]
        except ParserSyntaxError as e:
            raise InvalidMarker(str(e)) from e

    def __str__(self) -> str:
        return _format_marker(self._markers)

    def __repr__(self) -> str:
        return f"<{self.__class__.__name__}('{self}')>"

    def __hash__(self) -> int:
        return hash(str(self))

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, Marker):
            return NotImplemented

        return str(self) == str(other)

    def evaluate(
        self,
        environment: Mapping[str, str | AbstractSet[str]] | None = None,
        context: EvaluateContext = "metadata",
    ) -> bool:
        """Evaluate a marker.

        Return the boolean from evaluating the given marker against the
        environment. environment is an optional argument to override all or
        part of the determined environment. The *context* parameter specifies what
        context the markers are being evaluated for, which influences what markers
        are considered valid. Acceptable values are "metadata" (for core metadata;
        default), "lock_file", and "requirement" (i.e. all other situations).

        The environment is determined from the current Python process.
        """
        current_environment = cast(
            "dict[str, str | AbstractSet[str]]", default_environment()
        )
        if context == "lock_file":
            current_environment.update(
                extras=frozenset(), dependency_groups=frozenset()
            )
        elif context == "metadata":
            current_environment["extra"] = ""

        if environment is not None:
            current_environment.update(environment)
            if "extra" in current_environment:
                # The API used to allow setting extra to None. We need to handle
                # this case for backwards compatibility. Also skip running
                # normalize name if extra is empty.
                extra = cast("str | None", current_environment["extra"])
                current_environment["extra"] = canonicalize_name(extra) if extra else ""

        return _evaluate_markers(
            self._markers, _repair_python_full_version(current_environment)
        )


def _repair_python_full_version(
    env: dict[str, str | AbstractSet[str]],
) -> dict[str, str | AbstractSet[str]]:
    """
    Work around platform.python_version() returning something that is not PEP 440
    compliant for non-tagged Python builds.
    """
    python_full_version = cast("str", env["python_full_version"])
    if python_full_version.endswith("+"):
        env["python_full_version"] = f"{python_full_version}local"
    return env

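# Illustrative usage of Marker (a sketch, assuming pip's vendored import
# path; the results shown follow from the evaluate() logic above):
#
#     from pip._vendor.packaging.markers import Marker
#     m = Marker('python_version >= "3.8" and sys_platform == "linux"')
#     m.evaluate()                           # against the running interpreter
#     m.evaluate({"python_version": "3.7"})  # override part of the environment -> False
#     m.evaluate(context="lock_file")        # extras/dependency_groups default to frozenset()

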
@@ -0,0 +1,978 @@
from __future__ import annotations

import email.feedparser
import email.header
import email.message
import email.parser
import email.policy
import keyword
import pathlib
import sys
import typing
from typing import (
    Any,
    Callable,
    Generic,
    Literal,
    TypedDict,
    cast,
)

from . import licenses, requirements, specifiers, utils
from . import version as version_module

if typing.TYPE_CHECKING:
    from .licenses import NormalizedLicenseExpression

T = typing.TypeVar("T")


if sys.version_info >= (3, 11):  # pragma: no cover
    ExceptionGroup = ExceptionGroup  # noqa: F821
else:  # pragma: no cover

    class ExceptionGroup(Exception):
        """A minimal implementation of :external:exc:`ExceptionGroup` from Python 3.11.

        If :external:exc:`ExceptionGroup` is already defined by Python itself,
        that version is used instead.
        """

        message: str
        exceptions: list[Exception]

        def __init__(self, message: str, exceptions: list[Exception]) -> None:
            self.message = message
            self.exceptions = exceptions

        def __repr__(self) -> str:
            return f"{self.__class__.__name__}({self.message!r}, {self.exceptions!r})"


class InvalidMetadata(ValueError):
    """A metadata field contains invalid data."""

    field: str
    """The name of the field that contains invalid data."""

    def __init__(self, field: str, message: str) -> None:
        self.field = field
        super().__init__(message)


# The RawMetadata class attempts to make as few assumptions about the underlying
# serialization formats as possible. The idea is that as long as a serialization
# format offers some very basic primitives in *some* way then we can support
# serializing to and from that format.
class RawMetadata(TypedDict, total=False):
    """A dictionary of raw core metadata.

    Each field in core metadata maps to a key of this dictionary (when data is
    provided). The key is lower-case and underscores are used instead of dashes
    compared to the equivalent core metadata field. Any core metadata field that
    can be specified multiple times or can hold multiple values in a single
    field has a key with a plural name. See :class:`Metadata` whose attributes
    match the keys of this dictionary.

    Core metadata fields that can be specified multiple times are stored as a
    list or dict depending on which is appropriate for the field. Any fields
    which hold multiple values in a single field are stored as a list.

    """

    # Metadata 1.0 - PEP 241
    metadata_version: str
    name: str
    version: str
    platforms: list[str]
    summary: str
    description: str
    keywords: list[str]
    home_page: str
    author: str
    author_email: str
    license: str

    # Metadata 1.1 - PEP 314
    supported_platforms: list[str]
    download_url: str
    classifiers: list[str]
    requires: list[str]
    provides: list[str]
    obsoletes: list[str]

    # Metadata 1.2 - PEP 345
    maintainer: str
    maintainer_email: str
    requires_dist: list[str]
    provides_dist: list[str]
    obsoletes_dist: list[str]
    requires_python: str
    requires_external: list[str]
    project_urls: dict[str, str]

    # Metadata 2.0
    # PEP 426 attempted to completely revamp the metadata format
    # but got stuck without ever being able to build consensus on
    # it and ultimately ended up withdrawn.
    #
    # However, a number of tools had started emitting METADATA with
    # `2.0` Metadata-Version, so for historical reasons, this version
    # was skipped.

    # Metadata 2.1 - PEP 566
    description_content_type: str
    provides_extra: list[str]

    # Metadata 2.2 - PEP 643
    dynamic: list[str]

    # Metadata 2.3 - PEP 685
    # No new fields were added in PEP 685, just some edge cases were
    # tightened up to provide better interoperability.

    # Metadata 2.4 - PEP 639
    license_expression: str
    license_files: list[str]

    # Metadata 2.5 - PEP 794
    import_names: list[str]
    import_namespaces: list[str]


# 'keywords' is special as it's a string in the core metadata spec, but we
# represent it as a list.
_STRING_FIELDS = {
    "author",
    "author_email",
    "description",
    "description_content_type",
    "download_url",
    "home_page",
    "license",
    "license_expression",
    "maintainer",
    "maintainer_email",
    "metadata_version",
    "name",
    "requires_python",
    "summary",
    "version",
}

_LIST_FIELDS = {
    "classifiers",
    "dynamic",
    "license_files",
    "obsoletes",
    "obsoletes_dist",
    "platforms",
    "provides",
    "provides_dist",
    "provides_extra",
    "requires",
    "requires_dist",
    "requires_external",
    "supported_platforms",
    "import_names",
    "import_namespaces",
}

_DICT_FIELDS = {
    "project_urls",
}


def _parse_keywords(data: str) -> list[str]:
    """Split a string of comma-separated keywords into a list of keywords."""
    return [k.strip() for k in data.split(",")]


def _parse_project_urls(data: list[str]) -> dict[str, str]:
    """Parse a list of label/URL string pairings separated by a comma."""
    urls = {}
    for pair in data:
        # Our logic is slightly tricky here as we want to try and do
        # *something* reasonable with malformed data.
        #
        # The main thing that we have to worry about is data that does
        # not have a ',' at all to split the label from the value. There
        # isn't a singular right answer here, and we will fail validation
        # later on (if the caller is validating) so it doesn't *really*
        # matter, but since the missing value has to be an empty str
        # and our return value is dict[str, str], if we let the key
        # be the missing value, then they'd have multiple '' values that
        # overwrite each other in an accumulating dict.
        #
        # The other potential issue is that it's possible to have the
        # same label multiple times in the metadata, with no solid "right"
        # answer for what to do in that case. As such, we'll do the only
        # thing we can, which is treat the field as unparsable and add it
        # to our list of unparsed fields.
        #
        # TODO: The spec doesn't say anything about whether the keys should
        #       be considered case sensitive or not... logically they should
        #       be case-preserving and case-insensitive, but doing that
        #       would open up more cases where we might have duplicate
        #       entries.
        label, _, url = (s.strip() for s in pair.partition(","))

        if label in urls:
            # The label already exists in our set of urls, so this field
            # is unparsable, and we can just add the whole thing to our
            # unparsable data and stop processing it.
            raise KeyError("duplicate labels in project urls")
        urls[label] = url

    return urls

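# Illustrative behaviour of _parse_project_urls (a sketch, not part of the
# vendored file; example.com URLs are placeholders):
#
#     _parse_project_urls(["Homepage, https://example.com"])
#     # -> {'Homepage': 'https://example.com'}
#     _parse_project_urls(["no-comma-at-all"])
#     # -> {'no-comma-at-all': ''}  (the missing URL becomes an empty string)
#     _parse_project_urls(["Docs, https://a.example", "Docs, https://b.example"])
#     # -> raises KeyError: a duplicate label makes the field unparsable

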
|
||||
|
||||
def _get_payload(msg: email.message.Message, source: bytes | str) -> str:
|
||||
"""Get the body of the message."""
|
||||
# If our source is a str, then our caller has managed encodings for us,
|
||||
# and we don't need to deal with it.
|
||||
if isinstance(source, str):
|
||||
payload = msg.get_payload()
|
||||
assert isinstance(payload, str)
|
||||
return payload
|
||||
# If our source is a bytes, then we're managing the encoding and we need
|
||||
# to deal with it.
|
||||
else:
|
||||
bpayload = msg.get_payload(decode=True)
|
||||
assert isinstance(bpayload, bytes)
|
||||
try:
|
||||
return bpayload.decode("utf8", "strict")
|
||||
except UnicodeDecodeError as exc:
|
||||
raise ValueError("payload in an invalid encoding") from exc
|
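# For illustration, a bytes source is decoded strictly as UTF-8: a body of
# b"caf\xc3\xa9" yields "café", while latin-1 bytes such as b"caf\xe9"
# raise ValueError instead of silently producing mojibake.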
||||
|
||||
|
||||
# The various parse_FORMAT functions here are intended to be as lenient as
|
||||
# possible in their parsing, while still returning a correctly typed
|
||||
# RawMetadata.
|
||||
#
|
||||
# To aid in this, we also generally want to do as little touching of the
|
||||
# data as possible, except where there are possibly some historic holdovers
|
||||
# that make valid data awkward to work with.
|
||||
#
|
||||
# While this is a lower level, intermediate format than our ``Metadata``
|
||||
# class, some light touch ups can make a massive difference in usability.
|
||||
|
||||
# Map METADATA fields to RawMetadata.
|
||||
_EMAIL_TO_RAW_MAPPING = {
|
||||
"author": "author",
|
||||
"author-email": "author_email",
|
||||
"classifier": "classifiers",
|
||||
"description": "description",
|
||||
"description-content-type": "description_content_type",
|
||||
"download-url": "download_url",
|
||||
"dynamic": "dynamic",
|
||||
"home-page": "home_page",
|
||||
"import-name": "import_names",
|
||||
"import-namespace": "import_namespaces",
|
||||
"keywords": "keywords",
|
||||
"license": "license",
|
||||
"license-expression": "license_expression",
|
||||
"license-file": "license_files",
|
||||
"maintainer": "maintainer",
|
||||
"maintainer-email": "maintainer_email",
|
||||
"metadata-version": "metadata_version",
|
||||
"name": "name",
|
||||
"obsoletes": "obsoletes",
|
||||
"obsoletes-dist": "obsoletes_dist",
|
||||
"platform": "platforms",
|
||||
"project-url": "project_urls",
|
||||
"provides": "provides",
|
||||
"provides-dist": "provides_dist",
|
||||
"provides-extra": "provides_extra",
|
||||
"requires": "requires",
|
||||
"requires-dist": "requires_dist",
|
||||
"requires-external": "requires_external",
|
||||
"requires-python": "requires_python",
|
||||
"summary": "summary",
|
||||
"supported-platform": "supported_platforms",
|
||||
"version": "version",
|
||||
}
|
||||
_RAW_TO_EMAIL_MAPPING = {raw: email for email, raw in _EMAIL_TO_RAW_MAPPING.items()}
|
||||
|
||||
|
||||
# This class is for writing RFC822 messages
|
||||
class RFC822Policy(email.policy.EmailPolicy):
|
||||
"""
|
||||
This is :class:`email.policy.EmailPolicy`, but with a simple ``header_store_parse``
|
||||
implementation that handles multi-line values, and some nice defaults.
|
||||
"""
|
||||
|
||||
utf8 = True
|
||||
mangle_from_ = False
|
||||
max_line_length = 0
|
||||
|
||||
def header_store_parse(self, name: str, value: str) -> tuple[str, str]:
|
||||
size = len(name) + 2
|
||||
value = value.replace("\n", "\n" + " " * size)
|
||||
return (name, value)
|
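# For illustration, continuation lines are indented by len(name) + 2 spaces
# so they line up under the value that follows "Name: ":
#
#     >>> RFC822Policy().header_store_parse("Description", "first\nsecond")
#     ('Description', 'first\n             second')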
||||
|
||||
|
||||
# This class is for writing RFC822 messages
|
||||
class RFC822Message(email.message.EmailMessage):
|
||||
"""
|
||||
This is :class:`email.message.EmailMessage` with two small changes: it defaults to
|
||||
our `RFC822Policy`, and it correctly writes unicode when being called
|
||||
with `bytes()`.
|
||||
"""
|
||||
|
||||
def __init__(self) -> None:
|
||||
super().__init__(policy=RFC822Policy())
|
||||
|
||||
def as_bytes(
|
||||
self, unixfrom: bool = False, policy: email.policy.Policy | None = None
|
||||
) -> bytes:
|
||||
"""
|
||||
Return the bytes representation of the message.
|
||||
|
||||
This handles unicode encoding.
|
||||
"""
|
||||
return self.as_string(unixfrom, policy=policy).encode("utf-8")
|
||||
|
||||
|
||||
def parse_email(data: bytes | str) -> tuple[RawMetadata, dict[str, list[str]]]:
|
||||
"""Parse a distribution's metadata stored as email headers (e.g. from ``METADATA``).
|
||||
|
||||
This function returns a two-item tuple of dicts. The first dict is of
|
||||
recognized fields from the core metadata specification. Fields that can be
|
||||
parsed and translated into Python's built-in types are converted
|
||||
appropriately. All other fields are left as-is. Fields that are allowed to
|
||||
appear multiple times are stored as lists.
|
||||
|
||||
The second dict contains all other fields from the metadata. This includes
|
||||
any unrecognized fields. It also includes any fields which are expected to
|
||||
be parsed into a built-in type but were not formatted appropriately. Finally,
|
||||
any fields that are expected to appear only once but are repeated are
|
||||
included in this dict.
|
||||
|
||||
"""
|
||||
raw: dict[str, str | list[str] | dict[str, str]] = {}
|
||||
unparsed: dict[str, list[str]] = {}
|
||||
|
||||
if isinstance(data, str):
|
||||
parsed = email.parser.Parser(policy=email.policy.compat32).parsestr(data)
|
||||
else:
|
||||
parsed = email.parser.BytesParser(policy=email.policy.compat32).parsebytes(data)
|
||||
|
||||
# We have to wrap parsed.keys() in a set, because in the case of multiple
|
||||
# values for a key (a list), the key will appear multiple times in the
|
||||
# list of keys, but we're avoiding that by using get_all().
|
||||
for name_with_case in frozenset(parsed.keys()):
|
||||
# Header names in RFC are case insensitive, so we'll normalize to all
|
||||
# lower case to make comparisons easier.
|
||||
name = name_with_case.lower()
|
||||
|
||||
# We use get_all() here, even for fields that aren't multiple use,
|
||||
# because otherwise someone could have e.g. two Name fields, and we
|
||||
# would just silently ignore it rather than doing something about it.
|
||||
headers = parsed.get_all(name) or []
|
||||
|
||||
# The way the email module works when parsing bytes is that it
|
||||
# unconditionally decodes the bytes as ascii using the surrogateescape
|
||||
# handler. When you pull that data back out (such as with get_all() ),
|
||||
# it looks to see if the str has any surrogate escapes, and if it does
|
||||
# it wraps it in a Header object instead of returning the string.
|
||||
#
|
||||
# As such, we'll look for those Header objects, and fix up the encoding.
|
||||
value = []
|
||||
# Flag if we have run into any issues processing the headers, thus
|
||||
# signalling that the data belongs in 'unparsed'.
|
||||
valid_encoding = True
|
||||
for h in headers:
|
||||
# It's unclear if this can return more types than just a Header or
|
||||
# a str, so we'll just assert here to make sure.
|
||||
assert isinstance(h, (email.header.Header, str))
|
||||
|
||||
# If it's a header object, we need to do our little dance to get
|
||||
# the real data out of it. In cases where there is invalid data
|
||||
# we're going to end up with mojibake, but there's no obvious, good
|
||||
# way around that without reimplementing parts of the Header object
|
||||
# ourselves.
|
||||
#
|
||||
# That should be fine since, if mojibake happens, this key is
|
||||
# going into the unparsed dict anyways.
|
||||
if isinstance(h, email.header.Header):
|
||||
# The Header object stores its data as chunks, and each chunk
|
||||
# can be independently encoded, so we'll need to check each
|
||||
# of them.
|
||||
chunks: list[tuple[bytes, str | None]] = []
|
||||
for binary, _encoding in email.header.decode_header(h):
|
||||
try:
|
||||
binary.decode("utf8", "strict")
|
||||
except UnicodeDecodeError:
|
||||
# Enable mojibake.
|
||||
encoding = "latin1"
|
||||
valid_encoding = False
|
||||
else:
|
||||
encoding = "utf8"
|
||||
chunks.append((binary, encoding))
|
||||
|
||||
# Turn our chunks back into a Header object, then let that
|
||||
# Header object do the right thing to turn them into a
|
||||
# string for us.
|
||||
value.append(str(email.header.make_header(chunks)))
|
||||
# This is already a string, so just add it.
|
||||
else:
|
||||
value.append(h)
|
||||
|
||||
# We've processed all of our values to get them into a list of str,
|
||||
# but we may have mojibake data, in which case this is an unparsed
|
||||
# field.
|
||||
if not valid_encoding:
|
||||
unparsed[name] = value
|
||||
continue
|
||||
|
||||
raw_name = _EMAIL_TO_RAW_MAPPING.get(name)
|
||||
if raw_name is None:
|
||||
# This is a bit of a weird situation, we've encountered a key that
|
||||
# we don't know what it means, so we don't know whether it's meant
|
||||
# to be a list or not.
|
||||
#
|
||||
# Since we can't really tell one way or another, we'll just leave it
|
||||
# as a list, even though it may be a single item list, because that's
|
||||
# what makes the most sense for email headers.
|
||||
unparsed[name] = value
|
||||
continue
|
||||
|
||||
# If this is one of our string fields, then we'll check to see if our
|
||||
# value is a list of a single item. If it is then we'll assume that
|
||||
# it was emitted as a single string, and unwrap the str from inside
|
||||
# the list.
|
||||
#
|
||||
# If it's any other kind of data, then we haven't the faintest clue
|
||||
# what we should parse it as, and we have to just add it to our list
|
||||
# of unparsed stuff.
|
||||
if raw_name in _STRING_FIELDS and len(value) == 1:
|
||||
raw[raw_name] = value[0]
|
||||
# If this is import_names, we need to special case the empty field
|
||||
# case, which converts to an empty list instead of None. We can't let
|
||||
# the empty case slip through, as it will fail validation.
|
||||
elif raw_name == "import_names" and value == [""]:
|
||||
raw[raw_name] = []
|
||||
# If this is one of our list of string fields, then we can just assign
|
||||
# the value, since email *only* has strings, and our get_all() call
|
||||
# above ensures that this is a list.
|
||||
elif raw_name in _LIST_FIELDS:
|
||||
raw[raw_name] = value
|
||||
# Special Case: Keywords
|
||||
# The keywords field is implemented in the metadata spec as a str,
|
||||
# but it conceptually is a list of strings, and is serialized using
|
||||
# ", ".join(keywords), so we'll do some light data massaging to turn
|
||||
# this into what it logically is.
|
||||
elif raw_name == "keywords" and len(value) == 1:
|
||||
raw[raw_name] = _parse_keywords(value[0])
|
||||
# Special Case: Project-URL
|
||||
# The project urls is implemented in the metadata spec as a list of
|
||||
# specially-formatted strings that represent a key and a value, which
|
||||
# is fundamentally a mapping, however the email format doesn't support
|
||||
# mappings in a sane way, so it was crammed into a list of strings
|
||||
# instead.
|
||||
#
|
||||
# We will do a little light data massaging to turn this into a map as
|
||||
# it logically should be.
|
||||
elif raw_name == "project_urls":
|
||||
try:
|
||||
raw[raw_name] = _parse_project_urls(value)
|
||||
except KeyError:
|
||||
unparsed[name] = value
|
||||
# Nothing that we've done has managed to parse this, so it'll just
|
||||
# throw it in our unparsable data and move on.
|
||||
else:
|
||||
unparsed[name] = value
|
||||
|
||||
# We need to support getting the Description from the message payload in
|
||||
# addition to getting it from the headers. This does mean, though, there
|
||||
# is the possibility of it being set both ways, in which case we put both
|
||||
# in 'unparsed' since we don't know which is right.
|
||||
try:
|
||||
payload = _get_payload(parsed, data)
|
||||
except ValueError:
|
||||
unparsed.setdefault("description", []).append(
|
||||
parsed.get_payload(decode=isinstance(data, bytes)) # type: ignore[call-overload]
|
||||
)
|
||||
else:
|
||||
if payload:
|
||||
# Check to see if we've already got a description, if so then both
|
||||
# it, and this body move to unparsable.
|
||||
if "description" in raw:
|
||||
description_header = cast("str", raw.pop("description"))
|
||||
unparsed.setdefault("description", []).extend(
|
||||
[description_header, payload]
|
||||
)
|
||||
elif "description" in unparsed:
|
||||
unparsed["description"].append(payload)
|
||||
else:
|
||||
raw["description"] = payload
|
||||
|
||||
# We need to cast our `raw` to a RawMetadata, because a TypedDict only supports
|
||||
# literal key names, but we're computing our key names on purpose. However, the
|
||||
# way this function is implemented, our `TypedDict` can only have valid key
|
||||
# names.
|
||||
return cast("RawMetadata", raw), unparsed
|
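# For illustration, a minimal METADATA document parses cleanly, with nothing
# left over in the second ("unparsed") dict:
#
#     >>> raw, unparsed = parse_email("Metadata-Version: 2.1\nName: example\nVersion: 1.0\n")
#     >>> raw["name"], raw["version"], unparsed
#     ('example', '1.0', {})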
||||
|
||||
|
||||
_NOT_FOUND = object()
|
||||
|
||||
|
||||
# Keep the two values in sync.
|
||||
_VALID_METADATA_VERSIONS = ["1.0", "1.1", "1.2", "2.1", "2.2", "2.3", "2.4", "2.5"]
|
||||
_MetadataVersion = Literal["1.0", "1.1", "1.2", "2.1", "2.2", "2.3", "2.4", "2.5"]
|
||||
|
||||
_REQUIRED_ATTRS = frozenset(["metadata_version", "name", "version"])
|
||||
|
||||
|
||||
class _Validator(Generic[T]):
|
||||
"""Validate a metadata field.
|
||||
|
||||
All _process_*() methods correspond to a core metadata field. The method is
|
||||
called with the field's raw value. If the raw value is valid it is returned
|
||||
in its "enriched" form (e.g. ``version.Version`` for the ``Version`` field).
|
||||
If the raw value is invalid, :exc:`InvalidMetadata` is raised (with a cause
|
||||
as appropriate).
|
||||
"""
|
||||
|
||||
name: str
|
||||
raw_name: str
|
||||
added: _MetadataVersion
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
added: _MetadataVersion = "1.0",
|
||||
) -> None:
|
||||
self.added = added
|
||||
|
||||
def __set_name__(self, _owner: Metadata, name: str) -> None:
|
||||
self.name = name
|
||||
self.raw_name = _RAW_TO_EMAIL_MAPPING[name]
|
||||
|
||||
def __get__(self, instance: Metadata, _owner: type[Metadata]) -> T:
|
||||
# With Python 3.8+, the caching can be replaced with functools.cached_property().
|
||||
# No need to check the cache as attribute lookup will resolve into the
|
||||
# instance's __dict__ before __get__ is called.
|
||||
cache = instance.__dict__
|
||||
value = instance._raw.get(self.name)
|
||||
|
||||
# To make the _process_* methods easier, we'll check if the value is None
|
||||
# and if this field is NOT a required attribute, and if both of those
|
||||
# things are true, we'll skip the converter. This will mean that the
|
||||
# converters never have to deal with the None union.
|
||||
if self.name in _REQUIRED_ATTRS or value is not None:
|
||||
try:
|
||||
converter: Callable[[Any], T] = getattr(self, f"_process_{self.name}")
|
||||
except AttributeError:
|
||||
pass
|
||||
else:
|
||||
value = converter(value)
|
||||
|
||||
cache[self.name] = value
|
||||
try:
|
||||
del instance._raw[self.name] # type: ignore[misc]
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
return cast("T", value)
|
||||
|
||||
def _invalid_metadata(
|
||||
self, msg: str, cause: Exception | None = None
|
||||
) -> InvalidMetadata:
|
||||
exc = InvalidMetadata(
|
||||
self.raw_name, msg.format_map({"field": repr(self.raw_name)})
|
||||
)
|
||||
exc.__cause__ = cause
|
||||
return exc
|
||||
|
||||
def _process_metadata_version(self, value: str) -> _MetadataVersion:
|
||||
# Implicitly makes Metadata-Version required.
|
||||
if value not in _VALID_METADATA_VERSIONS:
|
||||
raise self._invalid_metadata(f"{value!r} is not a valid metadata version")
|
||||
return cast("_MetadataVersion", value)
|
||||
|
||||
def _process_name(self, value: str) -> str:
|
||||
if not value:
|
||||
raise self._invalid_metadata("{field} is a required field")
|
||||
# Validate the name as a side-effect.
|
||||
try:
|
||||
utils.canonicalize_name(value, validate=True)
|
||||
except utils.InvalidName as exc:
|
||||
raise self._invalid_metadata(
|
||||
f"{value!r} is invalid for {{field}}", cause=exc
|
||||
) from exc
|
||||
else:
|
||||
return value
|
||||
|
||||
def _process_version(self, value: str) -> version_module.Version:
|
||||
if not value:
|
||||
raise self._invalid_metadata("{field} is a required field")
|
||||
try:
|
||||
return version_module.parse(value)
|
||||
except version_module.InvalidVersion as exc:
|
||||
raise self._invalid_metadata(
|
||||
f"{value!r} is invalid for {{field}}", cause=exc
|
||||
) from exc
|
||||
|
||||
def _process_summary(self, value: str) -> str:
|
||||
"""Check the field contains no newlines."""
|
||||
if "\n" in value:
|
||||
raise self._invalid_metadata("{field} must be a single line")
|
||||
return value
|
||||
|
||||
def _process_description_content_type(self, value: str) -> str:
|
||||
content_types = {"text/plain", "text/x-rst", "text/markdown"}
|
||||
message = email.message.EmailMessage()
|
||||
message["content-type"] = value
|
||||
|
||||
content_type, parameters = (
|
||||
# Defaults to `text/plain` if parsing failed.
|
||||
message.get_content_type().lower(),
|
||||
message["content-type"].params,
|
||||
)
|
||||
# Check if content-type is valid or defaulted to `text/plain` and thus was
|
||||
# not parseable.
|
||||
if content_type not in content_types or content_type not in value.lower():
|
||||
raise self._invalid_metadata(
|
||||
f"{{field}} must be one of {list(content_types)}, not {value!r}"
|
||||
)
|
||||
|
||||
charset = parameters.get("charset", "UTF-8")
|
||||
if charset != "UTF-8":
|
||||
raise self._invalid_metadata(
|
||||
f"{{field}} can only specify the UTF-8 charset, not {list(charset)}"
|
||||
)
|
||||
|
||||
markdown_variants = {"GFM", "CommonMark"}
|
||||
variant = parameters.get("variant", "GFM") # Use an acceptable default.
|
||||
if content_type == "text/markdown" and variant not in markdown_variants:
|
||||
raise self._invalid_metadata(
|
||||
f"valid Markdown variants for {{field}} are {list(markdown_variants)}, "
|
||||
f"not {variant!r}",
|
||||
)
|
||||
return value
|
||||
|
||||
def _process_dynamic(self, value: list[str]) -> list[str]:
|
||||
for dynamic_field in map(str.lower, value):
|
||||
if dynamic_field in {"name", "version", "metadata-version"}:
|
||||
raise self._invalid_metadata(
|
||||
f"{dynamic_field!r} is not allowed as a dynamic field"
|
||||
)
|
||||
elif dynamic_field not in _EMAIL_TO_RAW_MAPPING:
|
||||
raise self._invalid_metadata(
|
||||
f"{dynamic_field!r} is not a valid dynamic field"
|
||||
)
|
||||
return list(map(str.lower, value))
|
||||
|
||||
def _process_provides_extra(
|
||||
self,
|
||||
value: list[str],
|
||||
) -> list[utils.NormalizedName]:
|
||||
normalized_names = []
|
||||
try:
|
||||
for name in value:
|
||||
normalized_names.append(utils.canonicalize_name(name, validate=True))
|
||||
except utils.InvalidName as exc:
|
||||
raise self._invalid_metadata(
|
||||
f"{name!r} is invalid for {{field}}", cause=exc
|
||||
) from exc
|
||||
else:
|
||||
return normalized_names
|
||||
|
||||
def _process_requires_python(self, value: str) -> specifiers.SpecifierSet:
|
||||
try:
|
||||
return specifiers.SpecifierSet(value)
|
||||
except specifiers.InvalidSpecifier as exc:
|
||||
raise self._invalid_metadata(
|
||||
f"{value!r} is invalid for {{field}}", cause=exc
|
||||
) from exc
|
||||
|
||||
def _process_requires_dist(
|
||||
self,
|
||||
value: list[str],
|
||||
) -> list[requirements.Requirement]:
|
||||
reqs = []
|
||||
try:
|
||||
for req in value:
|
||||
reqs.append(requirements.Requirement(req))
|
||||
except requirements.InvalidRequirement as exc:
|
||||
raise self._invalid_metadata(
|
||||
f"{req!r} is invalid for {{field}}", cause=exc
|
||||
) from exc
|
||||
else:
|
||||
return reqs
|
||||
|
||||
def _process_license_expression(self, value: str) -> NormalizedLicenseExpression:
|
||||
try:
|
||||
return licenses.canonicalize_license_expression(value)
|
||||
except ValueError as exc:
|
||||
raise self._invalid_metadata(
|
||||
f"{value!r} is invalid for {{field}}", cause=exc
|
||||
) from exc
|
||||
|
||||
def _process_license_files(self, value: list[str]) -> list[str]:
|
||||
paths = []
|
||||
for path in value:
|
||||
if ".." in path:
|
||||
raise self._invalid_metadata(
|
||||
f"{path!r} is invalid for {{field}}, "
|
||||
"parent directory indicators are not allowed"
|
||||
)
|
||||
if "*" in path:
|
||||
raise self._invalid_metadata(
|
||||
f"{path!r} is invalid for {{field}}, paths must be resolved"
|
||||
)
|
||||
if (
|
||||
pathlib.PurePosixPath(path).is_absolute()
|
||||
or pathlib.PureWindowsPath(path).is_absolute()
|
||||
):
|
||||
raise self._invalid_metadata(
|
||||
f"{path!r} is invalid for {{field}}, paths must be relative"
|
||||
)
|
||||
if pathlib.PureWindowsPath(path).as_posix() != path:
|
||||
raise self._invalid_metadata(
|
||||
f"{path!r} is invalid for {{field}}, paths must use '/' delimiter"
|
||||
)
|
||||
paths.append(path)
|
||||
return paths
|
||||
|
||||
def _process_import_names(self, value: list[str]) -> list[str]:
|
||||
for import_name in value:
|
||||
name, semicolon, private = import_name.partition(";")
|
||||
name = name.rstrip()
|
||||
for identifier in name.split("."):
|
||||
if not identifier.isidentifier():
|
||||
raise self._invalid_metadata(
|
||||
f"{name!r} is invalid for {{field}}; "
|
||||
f"{identifier!r} is not a valid identifier"
|
||||
)
|
||||
elif keyword.iskeyword(identifier):
|
||||
raise self._invalid_metadata(
|
||||
f"{name!r} is invalid for {{field}}; "
|
||||
f"{identifier!r} is a keyword"
|
||||
)
|
||||
if semicolon and private.lstrip() != "private":
|
||||
raise self._invalid_metadata(
|
||||
f"{import_name!r} is invalid for {{field}}; "
|
||||
"the only valid option is 'private'"
|
||||
)
|
||||
return value
|
||||
|
||||
_process_import_namespaces = _process_import_names
|
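# For illustration, ["pkg.sub"] is accepted, ["pkg.class"] is rejected
# because "class" is a Python keyword, and ["pkg ; private"] is accepted
# since "private" is the only valid option after the semicolon.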
||||
|
||||
|
||||
class Metadata:
|
||||
"""Representation of distribution metadata.
|
||||
|
||||
Compared to :class:`RawMetadata`, this class provides objects representing
|
||||
metadata fields instead of only using built-in types. Any invalid metadata
|
||||
will cause :exc:`InvalidMetadata` to be raised (with a
|
||||
:py:attr:`~BaseException.__cause__` attribute as appropriate).
|
||||
"""
|
||||
|
||||
_raw: RawMetadata
|
||||
|
||||
@classmethod
|
||||
def from_raw(cls, data: RawMetadata, *, validate: bool = True) -> Metadata:
|
||||
"""Create an instance from :class:`RawMetadata`.
|
||||
|
||||
If *validate* is true, all metadata will be validated. All exceptions
|
||||
related to validation will be gathered and raised as an :class:`ExceptionGroup`.
|
||||
"""
|
||||
ins = cls()
|
||||
ins._raw = data.copy() # Mutations occur due to caching enriched values.
|
||||
|
||||
if validate:
|
||||
exceptions: list[Exception] = []
|
||||
try:
|
||||
metadata_version = ins.metadata_version
|
||||
metadata_age = _VALID_METADATA_VERSIONS.index(metadata_version)
|
||||
except InvalidMetadata as metadata_version_exc:
|
||||
exceptions.append(metadata_version_exc)
|
||||
metadata_version = None
|
||||
|
||||
# Make sure to check both the fields that are present and the required
|
||||
# fields (so their absence can be reported).
|
||||
fields_to_check = frozenset(ins._raw) | _REQUIRED_ATTRS
|
||||
# Remove fields that have already been checked.
|
||||
fields_to_check -= {"metadata_version"}
|
||||
|
||||
for key in fields_to_check:
|
||||
try:
|
||||
if metadata_version:
|
||||
# Can't use getattr() as that triggers descriptor protocol which
|
||||
# will fail due to no value for the instance argument.
|
||||
try:
|
||||
field_metadata_version = cls.__dict__[key].added
|
||||
except KeyError:
|
||||
exc = InvalidMetadata(key, f"unrecognized field: {key!r}")
|
||||
exceptions.append(exc)
|
||||
continue
|
||||
field_age = _VALID_METADATA_VERSIONS.index(
|
||||
field_metadata_version
|
||||
)
|
||||
if field_age > metadata_age:
|
||||
field = _RAW_TO_EMAIL_MAPPING[key]
|
||||
exc = InvalidMetadata(
|
||||
field,
|
||||
f"{field} introduced in metadata version "
|
||||
f"{field_metadata_version}, not {metadata_version}",
|
||||
)
|
||||
exceptions.append(exc)
|
||||
continue
|
||||
getattr(ins, key)
|
||||
except InvalidMetadata as exc:
|
||||
exceptions.append(exc)
|
||||
|
||||
if exceptions:
|
||||
raise ExceptionGroup("invalid metadata", exceptions)
|
||||
|
||||
return ins
|
||||
|
||||
@classmethod
|
||||
def from_email(cls, data: bytes | str, *, validate: bool = True) -> Metadata:
|
||||
"""Parse metadata from email headers.
|
||||
|
||||
If *validate* is true, the metadata will be validated. All exceptions
|
||||
related to validation will be gathered and raised as an :class:`ExceptionGroup`.
|
||||
"""
|
||||
raw, unparsed = parse_email(data)
|
||||
|
||||
if validate:
|
||||
exceptions: list[Exception] = []
|
||||
for unparsed_key in unparsed:
|
||||
if unparsed_key in _EMAIL_TO_RAW_MAPPING:
|
||||
message = f"{unparsed_key!r} has invalid data"
|
||||
else:
|
||||
message = f"unrecognized field: {unparsed_key!r}"
|
||||
exceptions.append(InvalidMetadata(unparsed_key, message))
|
||||
|
||||
if exceptions:
|
||||
raise ExceptionGroup("unparsed", exceptions)
|
||||
|
||||
try:
|
||||
return cls.from_raw(raw, validate=validate)
|
||||
except ExceptionGroup as exc_group:
|
||||
raise ExceptionGroup(
|
||||
"invalid or unparsed metadata", exc_group.exceptions
|
||||
) from None
|
||||
|
||||
metadata_version: _Validator[_MetadataVersion] = _Validator()
|
||||
""":external:ref:`core-metadata-metadata-version`
|
||||
(required; validated to be a valid metadata version)"""
|
||||
# `name` is not normalized/typed to NormalizedName so as to provide access to
|
||||
# the original/raw name.
|
||||
name: _Validator[str] = _Validator()
|
||||
""":external:ref:`core-metadata-name`
|
||||
(required; validated using :func:`~packaging.utils.canonicalize_name` and its
|
||||
*validate* parameter)"""
|
||||
version: _Validator[version_module.Version] = _Validator()
|
||||
""":external:ref:`core-metadata-version` (required)"""
|
||||
dynamic: _Validator[list[str] | None] = _Validator(
|
||||
added="2.2",
|
||||
)
|
||||
""":external:ref:`core-metadata-dynamic`
|
||||
(validated against core metadata field names and lowercased)"""
|
||||
platforms: _Validator[list[str] | None] = _Validator()
|
||||
""":external:ref:`core-metadata-platform`"""
|
||||
supported_platforms: _Validator[list[str] | None] = _Validator(added="1.1")
|
||||
""":external:ref:`core-metadata-supported-platform`"""
|
||||
summary: _Validator[str | None] = _Validator()
|
||||
""":external:ref:`core-metadata-summary` (validated to contain no newlines)"""
|
||||
description: _Validator[str | None] = _Validator() # TODO 2.1: can be in body
|
||||
""":external:ref:`core-metadata-description`"""
|
||||
description_content_type: _Validator[str | None] = _Validator(added="2.1")
|
||||
""":external:ref:`core-metadata-description-content-type` (validated)"""
|
||||
keywords: _Validator[list[str] | None] = _Validator()
|
||||
""":external:ref:`core-metadata-keywords`"""
|
||||
home_page: _Validator[str | None] = _Validator()
|
||||
""":external:ref:`core-metadata-home-page`"""
|
||||
download_url: _Validator[str | None] = _Validator(added="1.1")
|
||||
""":external:ref:`core-metadata-download-url`"""
|
||||
author: _Validator[str | None] = _Validator()
|
||||
""":external:ref:`core-metadata-author`"""
|
||||
author_email: _Validator[str | None] = _Validator()
|
||||
""":external:ref:`core-metadata-author-email`"""
|
||||
maintainer: _Validator[str | None] = _Validator(added="1.2")
|
||||
""":external:ref:`core-metadata-maintainer`"""
|
||||
maintainer_email: _Validator[str | None] = _Validator(added="1.2")
|
||||
""":external:ref:`core-metadata-maintainer-email`"""
|
||||
license: _Validator[str | None] = _Validator()
|
||||
""":external:ref:`core-metadata-license`"""
|
||||
license_expression: _Validator[NormalizedLicenseExpression | None] = _Validator(
|
||||
added="2.4"
|
||||
)
|
||||
""":external:ref:`core-metadata-license-expression`"""
|
||||
license_files: _Validator[list[str] | None] = _Validator(added="2.4")
|
||||
""":external:ref:`core-metadata-license-file`"""
|
||||
classifiers: _Validator[list[str] | None] = _Validator(added="1.1")
|
||||
""":external:ref:`core-metadata-classifier`"""
|
||||
requires_dist: _Validator[list[requirements.Requirement] | None] = _Validator(
|
||||
added="1.2"
|
||||
)
|
||||
""":external:ref:`core-metadata-requires-dist`"""
|
||||
requires_python: _Validator[specifiers.SpecifierSet | None] = _Validator(
|
||||
added="1.2"
|
||||
)
|
||||
""":external:ref:`core-metadata-requires-python`"""
|
||||
# Because `Requires-External` allows for non-PEP 440 version specifiers, we
|
||||
# don't do any processing on the values.
|
||||
requires_external: _Validator[list[str] | None] = _Validator(added="1.2")
|
||||
""":external:ref:`core-metadata-requires-external`"""
|
||||
project_urls: _Validator[dict[str, str] | None] = _Validator(added="1.2")
|
||||
""":external:ref:`core-metadata-project-url`"""
|
||||
# PEP 685 lets us raise an error if an extra doesn't pass `Name` validation
|
||||
# regardless of metadata version.
|
||||
provides_extra: _Validator[list[utils.NormalizedName] | None] = _Validator(
|
||||
added="2.1",
|
||||
)
|
||||
""":external:ref:`core-metadata-provides-extra`"""
|
||||
provides_dist: _Validator[list[str] | None] = _Validator(added="1.2")
|
||||
""":external:ref:`core-metadata-provides-dist`"""
|
||||
obsoletes_dist: _Validator[list[str] | None] = _Validator(added="1.2")
|
||||
""":external:ref:`core-metadata-obsoletes-dist`"""
|
||||
import_names: _Validator[list[str] | None] = _Validator(added="2.5")
|
||||
""":external:ref:`core-metadata-import-name`"""
|
||||
import_namespaces: _Validator[list[str] | None] = _Validator(added="2.5")
|
||||
""":external:ref:`core-metadata-import-namespace`"""
|
||||
requires: _Validator[list[str] | None] = _Validator(added="1.1")
|
||||
"""``Requires`` (deprecated)"""
|
||||
provides: _Validator[list[str] | None] = _Validator(added="1.1")
|
||||
"""``Provides`` (deprecated)"""
|
||||
obsoletes: _Validator[list[str] | None] = _Validator(added="1.1")
|
||||
"""``Obsoletes`` (deprecated)"""
|
||||
|
||||
def as_rfc822(self) -> RFC822Message:
|
||||
"""
|
||||
Return an RFC822 message with the metadata.
|
||||
"""
|
||||
message = RFC822Message()
|
||||
self._write_metadata(message)
|
||||
return message
|
||||
|
||||
def _write_metadata(self, message: RFC822Message) -> None:
|
||||
"""
|
||||
Write the metadata into the given RFC822 message.
|
||||
"""
|
||||
for name, validator in self.__class__.__dict__.items():
|
||||
if isinstance(validator, _Validator) and name != "description":
|
||||
value = getattr(self, name)
|
||||
email_name = _RAW_TO_EMAIL_MAPPING[name]
|
||||
if value is not None:
|
||||
if email_name == "project-url":
|
||||
for label, url in value.items():
|
||||
message[email_name] = f"{label}, {url}"
|
||||
elif email_name == "keywords":
|
||||
message[email_name] = ",".join(value)
|
||||
elif email_name == "import-name" and value == []:
|
||||
message[email_name] = ""
|
||||
elif isinstance(value, list):
|
||||
for item in value:
|
||||
message[email_name] = str(item)
|
||||
else:
|
||||
message[email_name] = str(value)
|
||||
|
||||
# The description is a special case because it is in the body of the message.
|
||||
if self.description is not None:
|
||||
message.set_payload(self.description)
|
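# For illustration, the enriched Metadata API wraps the raw parse:
#
#     >>> meta = Metadata.from_email("Metadata-Version: 2.1\nName: example\nVersion: 1.0\n")
#     >>> str(meta.version), meta.name
#     ('1.0', 'example')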
||||
@@ -0,0 +1,635 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import dataclasses
|
||||
import logging
|
||||
import re
|
||||
from collections.abc import Mapping, Sequence
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Any,
|
||||
Callable,
|
||||
Protocol,
|
||||
TypeVar,
|
||||
)
|
||||
|
||||
from .markers import Marker
|
||||
from .specifiers import SpecifierSet
|
||||
from .utils import NormalizedName, is_normalized_name
|
||||
from .version import Version
|
||||
|
||||
if TYPE_CHECKING: # pragma: no cover
|
||||
from pathlib import Path
|
||||
|
||||
from typing_extensions import Self
|
||||
|
||||
_logger = logging.getLogger(__name__)
|
||||
|
||||
__all__ = [
|
||||
"Package",
|
||||
"PackageArchive",
|
||||
"PackageDirectory",
|
||||
"PackageSdist",
|
||||
"PackageVcs",
|
||||
"PackageWheel",
|
||||
"Pylock",
|
||||
"PylockUnsupportedVersionError",
|
||||
"PylockValidationError",
|
||||
"is_valid_pylock_path",
|
||||
]
|
||||
|
||||
_T = TypeVar("_T")
|
||||
_T2 = TypeVar("_T2")
|
||||
|
||||
|
||||
class _FromMappingProtocol(Protocol): # pragma: no cover
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self: ...
|
||||
|
||||
|
||||
_FromMappingProtocolT = TypeVar("_FromMappingProtocolT", bound=_FromMappingProtocol)
|
||||
|
||||
|
||||
_PYLOCK_FILE_NAME_RE = re.compile(r"^pylock\.([^.]+)\.toml$")
|
||||
|
||||
|
||||
def is_valid_pylock_path(path: Path) -> bool:
|
||||
"""Check if the given path is a valid pylock file path."""
|
||||
return path.name == "pylock.toml" or bool(_PYLOCK_FILE_NAME_RE.match(path.name))
|
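# For illustration, both the bare name and the single-component named form
# are accepted:
#
#     >>> from pathlib import Path
#     >>> is_valid_pylock_path(Path("pylock.toml"))
#     True
#     >>> is_valid_pylock_path(Path("pylock.dev.toml"))
#     True
#     >>> is_valid_pylock_path(Path("mylock.toml"))
#     False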
||||
|
||||
|
||||
def _toml_key(key: str) -> str:
|
||||
return key.replace("_", "-")
|
||||
|
||||
|
||||
def _toml_value(key: str, value: Any) -> Any: # noqa: ANN401
|
||||
if isinstance(value, (Version, Marker, SpecifierSet)):
|
||||
return str(value)
|
||||
if isinstance(value, Sequence) and key == "environments":
|
||||
return [str(v) for v in value]
|
||||
return value
|
||||
|
||||
|
||||
def _toml_dict_factory(data: list[tuple[str, Any]]) -> dict[str, Any]:
|
||||
return {
|
||||
_toml_key(key): _toml_value(key, value)
|
||||
for key, value in data
|
||||
if value is not None
|
||||
}
|
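# For illustration, keys are converted to TOML style, rich values are
# stringified, and None values are dropped:
#
#     >>> _toml_dict_factory([("lock_version", Version("1.0")), ("created_by", "pip"), ("extras", None)])
#     {'lock-version': '1.0', 'created-by': 'pip'}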
||||
|
||||
|
||||
def _get(d: Mapping[str, Any], expected_type: type[_T], key: str) -> _T | None:
|
||||
"""Get a value from the dictionary and verify it's the expected type."""
|
||||
if (value := d.get(key)) is None:
|
||||
return None
|
||||
if not isinstance(value, expected_type):
|
||||
raise PylockValidationError(
|
||||
f"Unexpected type {type(value).__name__} "
|
||||
f"(expected {expected_type.__name__})",
|
||||
context=key,
|
||||
)
|
||||
return value
|
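# For illustration, _get returns None for a missing key, returns the value
# when the type matches, and raises PylockValidationError on a mismatch:
#
#     >>> _get({"path": "src"}, str, "path")
#     'src'
#     >>> _get({}, str, "path") is None
#     True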
||||
|
||||
|
||||
def _get_required(d: Mapping[str, Any], expected_type: type[_T], key: str) -> _T:
|
||||
"""Get a required value from the dictionary and verify it's the expected type."""
|
||||
if (value := _get(d, expected_type, key)) is None:
|
||||
raise _PylockRequiredKeyError(key)
|
||||
return value
|
||||
|
||||
|
||||
def _get_sequence(
|
||||
d: Mapping[str, Any], expected_item_type: type[_T], key: str
|
||||
) -> Sequence[_T] | None:
|
||||
"""Get a list value from the dictionary and verify it's the expected items type."""
|
||||
if (value := _get(d, Sequence, key)) is None: # type: ignore[type-abstract]
|
||||
return None
|
||||
if isinstance(value, (str, bytes)):
|
||||
# special case: str and bytes are Sequences, but we want to reject them
|
||||
raise PylockValidationError(
|
||||
f"Unexpected type {type(value).__name__} (expected Sequence)",
|
||||
context=key,
|
||||
)
|
||||
for i, item in enumerate(value):
|
||||
if not isinstance(item, expected_item_type):
|
||||
raise PylockValidationError(
|
||||
f"Unexpected type {type(item).__name__} "
|
||||
f"(expected {expected_item_type.__name__})",
|
||||
context=f"{key}[{i}]",
|
||||
)
|
||||
return value
|
||||
|
||||
|
||||
def _get_as(
|
||||
d: Mapping[str, Any],
|
||||
expected_type: type[_T],
|
||||
target_type: Callable[[_T], _T2],
|
||||
key: str,
|
||||
) -> _T2 | None:
|
||||
"""Get a value from the dictionary, verify it's the expected type,
|
||||
and convert to the target type.
|
||||
|
||||
This assumes the target_type constructor accepts the value.
|
||||
"""
|
||||
if (value := _get(d, expected_type, key)) is None:
|
||||
return None
|
||||
try:
|
||||
return target_type(value)
|
||||
except Exception as e:
|
||||
raise PylockValidationError(e, context=key) from e
|
||||
|
||||
|
||||
def _get_required_as(
|
||||
d: Mapping[str, Any],
|
||||
expected_type: type[_T],
|
||||
target_type: Callable[[_T], _T2],
|
||||
key: str,
|
||||
) -> _T2:
|
||||
"""Get a required value from the dict, verify it's the expected type,
|
||||
and convert to the target type."""
|
||||
if (value := _get_as(d, expected_type, target_type, key)) is None:
|
||||
raise _PylockRequiredKeyError(key)
|
||||
return value
|
||||
|
||||
|
||||
def _get_sequence_as(
|
||||
d: Mapping[str, Any],
|
||||
expected_item_type: type[_T],
|
||||
target_item_type: Callable[[_T], _T2],
|
||||
key: str,
|
||||
) -> list[_T2] | None:
|
||||
"""Get list value from dictionary and verify expected items type."""
|
||||
if (value := _get_sequence(d, expected_item_type, key)) is None:
|
||||
return None
|
||||
result = []
|
||||
try:
|
||||
for item in value:
|
||||
typed_item = target_item_type(item)
|
||||
result.append(typed_item)
|
||||
except Exception as e:
|
||||
raise PylockValidationError(e, context=f"{key}[{len(result)}]") from e
|
||||
return result
|
||||
|
||||
|
||||
def _get_object(
|
||||
d: Mapping[str, Any], target_type: type[_FromMappingProtocolT], key: str
|
||||
) -> _FromMappingProtocolT | None:
|
||||
"""Get a dictionary value from the dictionary and convert it to a dataclass."""
|
||||
if (value := _get(d, Mapping, key)) is None: # type: ignore[type-abstract]
|
||||
return None
|
||||
try:
|
||||
return target_type._from_dict(value)
|
||||
except Exception as e:
|
||||
raise PylockValidationError(e, context=key) from e
|
||||
|
||||
|
||||
def _get_sequence_of_objects(
|
||||
d: Mapping[str, Any], target_item_type: type[_FromMappingProtocolT], key: str
|
||||
) -> list[_FromMappingProtocolT] | None:
|
||||
"""Get a list value from the dictionary and convert its items to a dataclass."""
|
||||
if (value := _get_sequence(d, Mapping, key)) is None: # type: ignore[type-abstract]
|
||||
return None
|
||||
result: list[_FromMappingProtocolT] = []
|
||||
try:
|
||||
for item in value:
|
||||
typed_item = target_item_type._from_dict(item)
|
||||
result.append(typed_item)
|
||||
except Exception as e:
|
||||
raise PylockValidationError(e, context=f"{key}[{len(result)}]") from e
|
||||
return result
|
||||
|
||||
|
||||
def _get_required_sequence_of_objects(
|
||||
d: Mapping[str, Any], target_item_type: type[_FromMappingProtocolT], key: str
|
||||
) -> Sequence[_FromMappingProtocolT]:
|
||||
"""Get a required list value from the dictionary and convert its items to a
|
||||
dataclass."""
|
||||
if (result := _get_sequence_of_objects(d, target_item_type, key)) is None:
|
||||
raise _PylockRequiredKeyError(key)
|
||||
return result
|
||||
|
||||
|
||||
def _validate_normalized_name(name: str) -> NormalizedName:
|
||||
"""Validate that a string is a NormalizedName."""
|
||||
if not is_normalized_name(name):
|
||||
raise PylockValidationError(f"Name {name!r} is not normalized")
|
||||
return NormalizedName(name)
|
||||
|
||||
|
||||
def _validate_path_url(path: str | None, url: str | None) -> None:
|
||||
if not path and not url:
|
||||
raise PylockValidationError("path or url must be provided")
|
||||
|
||||
|
||||
def _validate_hashes(hashes: Mapping[str, Any]) -> Mapping[str, Any]:
|
||||
if not hashes:
|
||||
raise PylockValidationError("At least one hash must be provided")
|
||||
if not all(isinstance(hash_val, str) for hash_val in hashes.values()):
|
||||
raise PylockValidationError("Hash values must be strings")
|
||||
return hashes
|
||||
|
||||
|
||||
class PylockValidationError(Exception):
|
||||
"""Raised when when input data is not spec-compliant."""
|
||||
|
||||
context: str | None = None
|
||||
message: str
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
cause: str | Exception,
|
||||
*,
|
||||
context: str | None = None,
|
||||
) -> None:
|
||||
if isinstance(cause, PylockValidationError):
|
||||
if cause.context:
|
||||
self.context = (
|
||||
f"{context}.{cause.context}" if context else cause.context
|
||||
)
|
||||
else:
|
||||
self.context = context
|
||||
self.message = cause.message
|
||||
else:
|
||||
self.context = context
|
||||
self.message = str(cause)
|
||||
|
||||
def __str__(self) -> str:
|
||||
if self.context:
|
||||
return f"{self.message} in {self.context!r}"
|
||||
return self.message
|
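# For illustration, contexts accumulate as errors propagate upward: a missing
# "name" key inside packages[0] stringifies as
# "Missing required value in 'packages[0].name'".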
||||
|
||||
|
||||
class _PylockRequiredKeyError(PylockValidationError):
|
||||
def __init__(self, key: str) -> None:
|
||||
super().__init__("Missing required value", context=key)
|
||||
|
||||
|
||||
class PylockUnsupportedVersionError(PylockValidationError):
|
||||
"""Raised when encountering an unsupported `lock_version`."""
|
||||
|
||||
|
||||
@dataclass(frozen=True, init=False)
|
||||
class PackageVcs:
|
||||
type: str
|
||||
url: str | None = None
|
||||
path: str | None = None
|
||||
requested_revision: str | None = None
|
||||
commit_id: str # type: ignore[misc]
|
||||
subdirectory: str | None = None
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
type: str,
|
||||
url: str | None = None,
|
||||
path: str | None = None,
|
||||
requested_revision: str | None = None,
|
||||
commit_id: str,
|
||||
subdirectory: str | None = None,
|
||||
) -> None:
|
||||
# In Python 3.10+ make dataclass kw_only=True and remove __init__
|
||||
object.__setattr__(self, "type", type)
|
||||
object.__setattr__(self, "url", url)
|
||||
object.__setattr__(self, "path", path)
|
||||
object.__setattr__(self, "requested_revision", requested_revision)
|
||||
object.__setattr__(self, "commit_id", commit_id)
|
||||
object.__setattr__(self, "subdirectory", subdirectory)
|
||||
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self:
|
||||
package_vcs = cls(
|
||||
type=_get_required(d, str, "type"),
|
||||
url=_get(d, str, "url"),
|
||||
path=_get(d, str, "path"),
|
||||
requested_revision=_get(d, str, "requested-revision"),
|
||||
commit_id=_get_required(d, str, "commit-id"),
|
||||
subdirectory=_get(d, str, "subdirectory"),
|
||||
)
|
||||
_validate_path_url(package_vcs.path, package_vcs.url)
|
||||
return package_vcs
|
||||
|
||||
|
||||
@dataclass(frozen=True, init=False)
|
||||
class PackageDirectory:
|
||||
path: str
|
||||
editable: bool | None = None
|
||||
subdirectory: str | None = None
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
path: str,
|
||||
editable: bool | None = None,
|
||||
subdirectory: str | None = None,
|
||||
) -> None:
|
||||
# In Python 3.10+ make dataclass kw_only=True and remove __init__
|
||||
object.__setattr__(self, "path", path)
|
||||
object.__setattr__(self, "editable", editable)
|
||||
object.__setattr__(self, "subdirectory", subdirectory)
|
||||
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self:
|
||||
return cls(
|
||||
path=_get_required(d, str, "path"),
|
||||
editable=_get(d, bool, "editable"),
|
||||
subdirectory=_get(d, str, "subdirectory"),
|
||||
)
|
||||
|
||||
|
||||
@dataclass(frozen=True, init=False)
|
||||
class PackageArchive:
|
||||
url: str | None = None
|
||||
path: str | None = None
|
||||
size: int | None = None
|
||||
upload_time: datetime | None = None
|
||||
hashes: Mapping[str, str] # type: ignore[misc]
|
||||
subdirectory: str | None = None
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
url: str | None = None,
|
||||
path: str | None = None,
|
||||
size: int | None = None,
|
||||
upload_time: datetime | None = None,
|
||||
hashes: Mapping[str, str],
|
||||
subdirectory: str | None = None,
|
||||
) -> None:
|
||||
# In Python 3.10+ make dataclass kw_only=True and remove __init__
|
||||
object.__setattr__(self, "url", url)
|
||||
object.__setattr__(self, "path", path)
|
||||
object.__setattr__(self, "size", size)
|
||||
object.__setattr__(self, "upload_time", upload_time)
|
||||
object.__setattr__(self, "hashes", hashes)
|
||||
object.__setattr__(self, "subdirectory", subdirectory)
|
||||
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self:
|
||||
package_archive = cls(
|
||||
url=_get(d, str, "url"),
|
||||
path=_get(d, str, "path"),
|
||||
size=_get(d, int, "size"),
|
||||
upload_time=_get(d, datetime, "upload-time"),
|
||||
hashes=_get_required_as(d, Mapping, _validate_hashes, "hashes"), # type: ignore[type-abstract]
|
||||
subdirectory=_get(d, str, "subdirectory"),
|
||||
)
|
||||
_validate_path_url(package_archive.path, package_archive.url)
|
||||
return package_archive
|
||||
|
||||
|
||||
@dataclass(frozen=True, init=False)
|
||||
class PackageSdist:
|
||||
name: str | None = None
|
||||
upload_time: datetime | None = None
|
||||
url: str | None = None
|
||||
path: str | None = None
|
||||
size: int | None = None
|
||||
hashes: Mapping[str, str] # type: ignore[misc]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
name: str | None = None,
|
||||
upload_time: datetime | None = None,
|
||||
url: str | None = None,
|
||||
path: str | None = None,
|
||||
size: int | None = None,
|
||||
hashes: Mapping[str, str],
|
||||
) -> None:
|
||||
# In Python 3.10+ make dataclass kw_only=True and remove __init__
|
||||
object.__setattr__(self, "name", name)
|
||||
object.__setattr__(self, "upload_time", upload_time)
|
||||
object.__setattr__(self, "url", url)
|
||||
object.__setattr__(self, "path", path)
|
||||
object.__setattr__(self, "size", size)
|
||||
object.__setattr__(self, "hashes", hashes)
|
||||
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self:
|
||||
package_sdist = cls(
|
||||
name=_get(d, str, "name"),
|
||||
upload_time=_get(d, datetime, "upload-time"),
|
||||
url=_get(d, str, "url"),
|
||||
path=_get(d, str, "path"),
|
||||
size=_get(d, int, "size"),
|
||||
hashes=_get_required_as(d, Mapping, _validate_hashes, "hashes"), # type: ignore[type-abstract]
|
||||
)
|
||||
_validate_path_url(package_sdist.path, package_sdist.url)
|
||||
return package_sdist
|
||||
|
||||
|
||||
@dataclass(frozen=True, init=False)
|
||||
class PackageWheel:
|
||||
name: str | None = None
|
||||
upload_time: datetime | None = None
|
||||
url: str | None = None
|
||||
path: str | None = None
|
||||
size: int | None = None
|
||||
hashes: Mapping[str, str] # type: ignore[misc]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
name: str | None = None,
|
||||
upload_time: datetime | None = None,
|
||||
url: str | None = None,
|
||||
path: str | None = None,
|
||||
size: int | None = None,
|
||||
hashes: Mapping[str, str],
|
||||
) -> None:
|
||||
# In Python 3.10+ make dataclass kw_only=True and remove __init__
|
||||
object.__setattr__(self, "name", name)
|
||||
object.__setattr__(self, "upload_time", upload_time)
|
||||
object.__setattr__(self, "url", url)
|
||||
object.__setattr__(self, "path", path)
|
||||
object.__setattr__(self, "size", size)
|
||||
object.__setattr__(self, "hashes", hashes)
|
||||
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self:
|
||||
package_wheel = cls(
|
||||
name=_get(d, str, "name"),
|
||||
upload_time=_get(d, datetime, "upload-time"),
|
||||
url=_get(d, str, "url"),
|
||||
path=_get(d, str, "path"),
|
||||
size=_get(d, int, "size"),
|
||||
hashes=_get_required_as(d, Mapping, _validate_hashes, "hashes"), # type: ignore[type-abstract]
|
||||
)
|
||||
_validate_path_url(package_wheel.path, package_wheel.url)
|
||||
return package_wheel
|
||||
|
||||
|
||||
@dataclass(frozen=True, init=False)
|
||||
class Package:
|
||||
name: NormalizedName
|
||||
version: Version | None = None
|
||||
marker: Marker | None = None
|
||||
requires_python: SpecifierSet | None = None
|
||||
dependencies: Sequence[Mapping[str, Any]] | None = None
|
||||
vcs: PackageVcs | None = None
|
||||
directory: PackageDirectory | None = None
|
||||
archive: PackageArchive | None = None
|
||||
index: str | None = None
|
||||
sdist: PackageSdist | None = None
|
||||
wheels: Sequence[PackageWheel] | None = None
|
||||
attestation_identities: Sequence[Mapping[str, Any]] | None = None
|
||||
tool: Mapping[str, Any] | None = None
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
name: NormalizedName,
|
||||
version: Version | None = None,
|
||||
marker: Marker | None = None,
|
||||
requires_python: SpecifierSet | None = None,
|
||||
dependencies: Sequence[Mapping[str, Any]] | None = None,
|
||||
vcs: PackageVcs | None = None,
|
||||
directory: PackageDirectory | None = None,
|
||||
archive: PackageArchive | None = None,
|
||||
index: str | None = None,
|
||||
sdist: PackageSdist | None = None,
|
||||
wheels: Sequence[PackageWheel] | None = None,
|
||||
attestation_identities: Sequence[Mapping[str, Any]] | None = None,
|
||||
tool: Mapping[str, Any] | None = None,
|
||||
) -> None:
|
||||
# In Python 3.10+ make dataclass kw_only=True and remove __init__
|
||||
object.__setattr__(self, "name", name)
|
||||
object.__setattr__(self, "version", version)
|
||||
object.__setattr__(self, "marker", marker)
|
||||
object.__setattr__(self, "requires_python", requires_python)
|
||||
object.__setattr__(self, "dependencies", dependencies)
|
||||
object.__setattr__(self, "vcs", vcs)
|
||||
object.__setattr__(self, "directory", directory)
|
||||
object.__setattr__(self, "archive", archive)
|
||||
object.__setattr__(self, "index", index)
|
||||
object.__setattr__(self, "sdist", sdist)
|
||||
object.__setattr__(self, "wheels", wheels)
|
||||
object.__setattr__(self, "attestation_identities", attestation_identities)
|
||||
object.__setattr__(self, "tool", tool)
|
||||
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self:
|
||||
package = cls(
|
||||
name=_get_required_as(d, str, _validate_normalized_name, "name"),
|
||||
version=_get_as(d, str, Version, "version"),
|
||||
requires_python=_get_as(d, str, SpecifierSet, "requires-python"),
|
||||
dependencies=_get_sequence(d, Mapping, "dependencies"), # type: ignore[type-abstract]
|
||||
marker=_get_as(d, str, Marker, "marker"),
|
||||
vcs=_get_object(d, PackageVcs, "vcs"),
|
||||
directory=_get_object(d, PackageDirectory, "directory"),
|
||||
archive=_get_object(d, PackageArchive, "archive"),
|
||||
index=_get(d, str, "index"),
|
||||
sdist=_get_object(d, PackageSdist, "sdist"),
|
||||
wheels=_get_sequence_of_objects(d, PackageWheel, "wheels"),
|
||||
attestation_identities=_get_sequence(d, Mapping, "attestation-identities"), # type: ignore[type-abstract]
|
||||
tool=_get(d, Mapping, "tool"), # type: ignore[type-abstract]
|
||||
)
|
||||
distributions = bool(package.sdist) + len(package.wheels or [])
|
||||
direct_urls = (
|
||||
bool(package.vcs) + bool(package.directory) + bool(package.archive)
|
||||
)
|
||||
if distributions > 0 and direct_urls > 0:
|
||||
raise PylockValidationError(
|
||||
"None of vcs, directory, archive must be set if sdist or wheels are set"
|
||||
)
|
||||
if distributions == 0 and direct_urls != 1:
|
||||
raise PylockValidationError(
|
||||
"Exactly one of vcs, directory, archive must be set "
|
||||
"if sdist and wheels are not set"
|
||||
)
|
||||
try:
|
||||
for i, attestation_identity in enumerate( # noqa: B007
|
||||
package.attestation_identities or []
|
||||
):
|
||||
_get_required(attestation_identity, str, "kind")
|
||||
except Exception as e:
|
||||
raise PylockValidationError(
|
||||
e, context=f"attestation-identities[{i}]"
|
||||
) from e
|
||||
return package
|
||||
|
||||
@property
|
||||
def is_direct(self) -> bool:
|
||||
return not (self.sdist or self.wheels)
|
||||
|
||||
|
||||
@dataclass(frozen=True, init=False)
|
||||
class Pylock:
|
||||
"""A class representing a pylock file."""
|
||||
|
||||
lock_version: Version
|
||||
environments: Sequence[Marker] | None = None
|
||||
requires_python: SpecifierSet | None = None
|
||||
extras: Sequence[NormalizedName] | None = None
|
||||
dependency_groups: Sequence[str] | None = None
|
||||
default_groups: Sequence[str] | None = None
|
||||
created_by: str # type: ignore[misc]
|
||||
packages: Sequence[Package] # type: ignore[misc]
|
||||
tool: Mapping[str, Any] | None = None
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
lock_version: Version,
|
||||
environments: Sequence[Marker] | None = None,
|
||||
requires_python: SpecifierSet | None = None,
|
||||
extras: Sequence[NormalizedName] | None = None,
|
||||
dependency_groups: Sequence[str] | None = None,
|
||||
default_groups: Sequence[str] | None = None,
|
||||
created_by: str,
|
||||
packages: Sequence[Package],
|
||||
tool: Mapping[str, Any] | None = None,
|
||||
) -> None:
|
||||
# In Python 3.10+ make dataclass kw_only=True and remove __init__
|
||||
object.__setattr__(self, "lock_version", lock_version)
|
||||
object.__setattr__(self, "environments", environments)
|
||||
object.__setattr__(self, "requires_python", requires_python)
|
||||
object.__setattr__(self, "extras", extras)
|
||||
object.__setattr__(self, "dependency_groups", dependency_groups)
|
||||
object.__setattr__(self, "default_groups", default_groups)
|
||||
object.__setattr__(self, "created_by", created_by)
|
||||
object.__setattr__(self, "packages", packages)
|
||||
object.__setattr__(self, "tool", tool)
|
||||
|
||||
@classmethod
|
||||
def _from_dict(cls, d: Mapping[str, Any]) -> Self:
|
||||
pylock = cls(
|
||||
lock_version=_get_required_as(d, str, Version, "lock-version"),
|
||||
environments=_get_sequence_as(d, str, Marker, "environments"),
|
||||
extras=_get_sequence_as(d, str, _validate_normalized_name, "extras"),
|
||||
dependency_groups=_get_sequence(d, str, "dependency-groups"),
|
||||
default_groups=_get_sequence(d, str, "default-groups"),
|
||||
created_by=_get_required(d, str, "created-by"),
|
||||
requires_python=_get_as(d, str, SpecifierSet, "requires-python"),
|
||||
packages=_get_required_sequence_of_objects(d, Package, "packages"),
|
||||
tool=_get(d, Mapping, "tool"), # type: ignore[type-abstract]
|
||||
)
|
||||
if not Version("1") <= pylock.lock_version < Version("2"):
|
||||
raise PylockUnsupportedVersionError(
|
||||
f"pylock version {pylock.lock_version} is not supported"
|
||||
)
|
||||
if pylock.lock_version > Version("1.0"):
|
||||
_logger.warning(
|
||||
"pylock minor version %s is not supported", pylock.lock_version
|
||||
)
|
||||
return pylock
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, d: Mapping[str, Any], /) -> Self:
|
||||
"""Create and validate a Pylock instance from a TOML dictionary.
|
||||
|
||||
Raises :class:`PylockValidationError` if the input data is not
|
||||
spec-compliant.
|
||||
"""
|
||||
return cls._from_dict(d)
|
||||
|
||||
def to_dict(self) -> Mapping[str, Any]:
|
||||
"""Convert the Pylock instance to a TOML dictionary."""
|
||||
return dataclasses.asdict(self, dict_factory=_toml_dict_factory)
|
||||
|
||||
def validate(self) -> None:
|
||||
"""Validate the Pylock instance against the specification.
|
||||
|
||||
Raises :class:`PylockValidationError` otherwise."""
|
||||
self.from_dict(self.to_dict())
|
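# For illustration, assuming a pylock.toml file on disk, the stdlib TOML
# parser (tomllib, Python 3.11+) pairs naturally with Pylock.from_dict:
#
#     >>> import tomllib
#     >>> with open("pylock.toml", "rb") as f:
#     ...     pylock = Pylock.from_dict(tomllib.load(f))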
||||
@@ -0,0 +1,86 @@
|
||||
# This file is dual licensed under the terms of the Apache License, Version
|
||||
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
|
||||
# for complete details.
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Iterator
|
||||
|
||||
from ._parser import parse_requirement as _parse_requirement
|
||||
from ._tokenizer import ParserSyntaxError
|
||||
from .markers import Marker, _normalize_extra_values
|
||||
from .specifiers import SpecifierSet
|
||||
from .utils import canonicalize_name
|
||||
|
||||
|
||||
class InvalidRequirement(ValueError):
|
||||
"""
|
||||
An invalid requirement was found; users should refer to PEP 508.
|
||||
"""
|
||||
|
||||
|
||||
class Requirement:
|
||||
"""Parse a requirement.
|
||||
|
||||
Parse a given requirement string into its parts, such as name, specifier,
|
||||
URL, and extras. Raises InvalidRequirement on a badly-formed requirement
|
||||
string.
|
||||
"""
|
||||
|
||||
# TODO: Can we test whether something is contained within a requirement?
|
||||
# If so how do we do that? Do we need to test against the _name_ of
|
||||
# the thing as well as the version? What about the markers?
|
||||
# TODO: Can we normalize the name and extra name?
|
||||
|
||||
def __init__(self, requirement_string: str) -> None:
|
||||
try:
|
||||
parsed = _parse_requirement(requirement_string)
|
||||
except ParserSyntaxError as e:
|
||||
raise InvalidRequirement(str(e)) from e
|
||||
|
||||
self.name: str = parsed.name
|
||||
self.url: str | None = parsed.url or None
|
||||
self.extras: set[str] = set(parsed.extras or [])
|
||||
self.specifier: SpecifierSet = SpecifierSet(parsed.specifier)
|
||||
self.marker: Marker | None = None
|
||||
if parsed.marker is not None:
|
||||
self.marker = Marker.__new__(Marker)
|
||||
self.marker._markers = _normalize_extra_values(parsed.marker)
|
||||
|
||||
def _iter_parts(self, name: str) -> Iterator[str]:
|
||||
yield name
|
||||
|
||||
if self.extras:
|
||||
formatted_extras = ",".join(sorted(self.extras))
|
||||
yield f"[{formatted_extras}]"
|
||||
|
||||
if self.specifier:
|
||||
yield str(self.specifier)
|
||||
|
||||
if self.url:
|
||||
yield f" @ {self.url}"
|
||||
if self.marker:
|
||||
yield " "
|
||||
|
||||
if self.marker:
|
||||
yield f"; {self.marker}"
|
||||
|
||||
def __str__(self) -> str:
|
||||
return "".join(self._iter_parts(self.name))
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return f"<{self.__class__.__name__}('{self}')>"
|
||||
|
||||
def __hash__(self) -> int:
|
||||
return hash(tuple(self._iter_parts(canonicalize_name(self.name))))
|
||||
|
||||
def __eq__(self, other: object) -> bool:
|
||||
if not isinstance(other, Requirement):
|
||||
return NotImplemented
|
||||
|
||||
return (
|
||||
canonicalize_name(self.name) == canonicalize_name(other.name)
|
||||
and self.extras == other.extras
|
||||
and self.specifier == other.specifier
|
||||
and self.url == other.url
|
||||
and self.marker == other.marker
|
||||
)
|
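
# Editorial note (not part of the vendored module): a doctest-style sketch of
# the Requirement API defined above; the requirement string is an arbitrary
# illustrative example.
#
#     >>> req = Requirement('requests[security] >= 2.8.1; python_version < "3.13"')
#     >>> req.name, sorted(req.extras)
#     ('requests', ['security'])
#     >>> str(req.specifier)
#     '>=2.8.1'
#     >>> req.marker is not None
#     True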
File diff suppressed because it is too large
651
.venv/lib/python3.9/site-packages/pip/_vendor/packaging/tags.py
Normal file
@@ -0,0 +1,651 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import annotations

import logging
import platform
import re
import struct
import subprocess
import sys
import sysconfig
from importlib.machinery import EXTENSION_SUFFIXES
from typing import (
    Any,
    Iterable,
    Iterator,
    Sequence,
    Tuple,
    cast,
)

from . import _manylinux, _musllinux

logger = logging.getLogger(__name__)

PythonVersion = Sequence[int]
AppleVersion = Tuple[int, int]

INTERPRETER_SHORT_NAMES: dict[str, str] = {
    "python": "py",  # Generic.
    "cpython": "cp",
    "pypy": "pp",
    "ironpython": "ip",
    "jython": "jy",
}


_32_BIT_INTERPRETER = struct.calcsize("P") == 4


class Tag:
    """
    A representation of the tag triple for a wheel.

    Instances are considered immutable and thus are hashable. Equality checking
    is also supported.
    """

    __slots__ = ["_abi", "_hash", "_interpreter", "_platform"]

    def __init__(self, interpreter: str, abi: str, platform: str) -> None:
        self._interpreter = interpreter.lower()
        self._abi = abi.lower()
        self._platform = platform.lower()
        # The __hash__ of every single element in a Set[Tag] will be evaluated each time
        # that a set calls its `.disjoint()` method, which may be called hundreds of
        # times when scanning a page of links for packages with tags matching that
        # Set[Tag]. Pre-computing the value here produces significant speedups for
        # downstream consumers.
        self._hash = hash((self._interpreter, self._abi, self._platform))

    @property
    def interpreter(self) -> str:
        return self._interpreter

    @property
    def abi(self) -> str:
        return self._abi

    @property
    def platform(self) -> str:
        return self._platform

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, Tag):
            return NotImplemented

        return (
            (self._hash == other._hash)  # Short-circuit ASAP for perf reasons.
            and (self._platform == other._platform)
            and (self._abi == other._abi)
            and (self._interpreter == other._interpreter)
        )

    def __hash__(self) -> int:
        return self._hash

    def __str__(self) -> str:
        return f"{self._interpreter}-{self._abi}-{self._platform}"

    def __repr__(self) -> str:
        return f"<{self} @ {id(self)}>"

    def __setstate__(self, state: tuple[None, dict[str, Any]]) -> None:
        # The cached _hash is wrong when unpickling.
        _, slots = state
        for k, v in slots.items():
            setattr(self, k, v)
        self._hash = hash((self._interpreter, self._abi, self._platform))


def parse_tag(tag: str) -> frozenset[Tag]:
    """
    Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances.

    Returning a set is required due to the possibility that the tag is a
    compressed tag set.
    """
    tags = set()
    interpreters, abis, platforms = tag.split("-")
    for interpreter in interpreters.split("."):
        for abi in abis.split("."):
            for platform_ in platforms.split("."):
                tags.add(Tag(interpreter, abi, platform_))
    return frozenset(tags)
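
# Editorial note (not part of the vendored module): parse_tag expands a
# compressed tag set into its individual Tag triples, e.g.:
#
#     >>> sorted(str(t) for t in parse_tag("cp310.cp311-abi3-manylinux_2_17_x86_64"))
#     ['cp310-abi3-manylinux_2_17_x86_64', 'cp311-abi3-manylinux_2_17_x86_64']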

def _get_config_var(name: str, warn: bool = False) -> int | str | None:
    value: int | str | None = sysconfig.get_config_var(name)
    if value is None and warn:
        logger.debug(
            "Config variable '%s' is unset, Python ABI tag may be incorrect", name
        )
    return value


def _normalize_string(string: str) -> str:
    return string.replace(".", "_").replace("-", "_").replace(" ", "_")


def _is_threaded_cpython(abis: list[str]) -> bool:
    """
    Determine if the ABI corresponds to a threaded (`--disable-gil`) build.

    The threaded builds are indicated by a "t" in the abiflags.
    """
    if len(abis) == 0:
        return False
    # expect e.g., cp313
    m = re.match(r"cp\d+(.*)", abis[0])
    if not m:
        return False
    abiflags = m.group(1)
    return "t" in abiflags


def _abi3_applies(python_version: PythonVersion, threading: bool) -> bool:
    """
    Determine if the Python version supports abi3.

    PEP 384 was first implemented in Python 3.2. The threaded (`--disable-gil`)
    builds do not support abi3.
    """
    return len(python_version) > 1 and tuple(python_version) >= (3, 2) and not threading


def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> list[str]:
    py_version = tuple(py_version)  # To allow for version comparison.
    abis = []
    version = _version_nodot(py_version[:2])
    threading = debug = pymalloc = ucs4 = ""
    with_debug = _get_config_var("Py_DEBUG", warn)
    has_refcount = hasattr(sys, "gettotalrefcount")
    # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
    # extension modules is the best option.
    # https://github.com/pypa/pip/issues/3383#issuecomment-173267692
    has_ext = "_d.pyd" in EXTENSION_SUFFIXES
    if with_debug or (with_debug is None and (has_refcount or has_ext)):
        debug = "d"
    if py_version >= (3, 13) and _get_config_var("Py_GIL_DISABLED", warn):
        threading = "t"
    if py_version < (3, 8):
        with_pymalloc = _get_config_var("WITH_PYMALLOC", warn)
        if with_pymalloc or with_pymalloc is None:
            pymalloc = "m"
        if py_version < (3, 3):
            unicode_size = _get_config_var("Py_UNICODE_SIZE", warn)
            if unicode_size == 4 or (
                unicode_size is None and sys.maxunicode == 0x10FFFF
            ):
                ucs4 = "u"
    elif debug:
        # Debug builds can also load "normal" extension modules.
        # We can also assume no UCS-4 or pymalloc requirement.
        abis.append(f"cp{version}{threading}")
    abis.insert(0, f"cp{version}{threading}{debug}{pymalloc}{ucs4}")
    return abis


def cpython_tags(
    python_version: PythonVersion | None = None,
    abis: Iterable[str] | None = None,
    platforms: Iterable[str] | None = None,
    *,
    warn: bool = False,
) -> Iterator[Tag]:
    """
    Yields the tags for a CPython interpreter.

    The tags consist of:
    - cp<python_version>-<abi>-<platform>
    - cp<python_version>-abi3-<platform>
    - cp<python_version>-none-<platform>
    - cp<less than python_version>-abi3-<platform>  # Older Python versions down to 3.2.

    If python_version only specifies a major version then user-provided ABIs and
    the 'none' ABI tag will be used.

    If 'abi3' or 'none' are specified in 'abis' then they will be yielded at
    their normal position and not at the beginning.
    """
    if not python_version:
        python_version = sys.version_info[:2]

    interpreter = f"cp{_version_nodot(python_version[:2])}"

    if abis is None:
        abis = _cpython_abis(python_version, warn) if len(python_version) > 1 else []
    abis = list(abis)
    # 'abi3' and 'none' are explicitly handled later.
    for explicit_abi in ("abi3", "none"):
        try:
            abis.remove(explicit_abi)
        except ValueError:  # noqa: PERF203
            pass

    platforms = list(platforms or platform_tags())
    for abi in abis:
        for platform_ in platforms:
            yield Tag(interpreter, abi, platform_)

    threading = _is_threaded_cpython(abis)
    use_abi3 = _abi3_applies(python_version, threading)
    if use_abi3:
        yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms)
    yield from (Tag(interpreter, "none", platform_) for platform_ in platforms)

    if use_abi3:
        for minor_version in range(python_version[1] - 1, 1, -1):
            for platform_ in platforms:
                version = _version_nodot((python_version[0], minor_version))
                interpreter = f"cp{version}"
                yield Tag(interpreter, "abi3", platform_)


def _generic_abi() -> list[str]:
    """
    Return the ABI tag based on EXT_SUFFIX.
    """
    # The following are examples of `EXT_SUFFIX`.
    # We want to keep the parts which are related to the ABI and remove the
    # parts which are related to the platform:
    # - linux:   '.cpython-310-x86_64-linux-gnu.so' => cp310
    # - mac:     '.cpython-310-darwin.so'           => cp310
    # - win:     '.cp310-win_amd64.pyd'             => cp310
    # - win:     '.pyd'                             => cp37 (uses _cpython_abis())
    # - pypy:    '.pypy38-pp73-x86_64-linux-gnu.so' => pypy38_pp73
    # - graalpy: '.graalpy-38-native-x86_64-darwin.dylib'
    #                                               => graalpy_38_native

    ext_suffix = _get_config_var("EXT_SUFFIX", warn=True)
    if not isinstance(ext_suffix, str) or ext_suffix[0] != ".":
        raise SystemError("invalid sysconfig.get_config_var('EXT_SUFFIX')")
    parts = ext_suffix.split(".")
    if len(parts) < 3:
        # CPython3.7 and earlier uses ".pyd" on Windows.
        return _cpython_abis(sys.version_info[:2])
    soabi = parts[1]
    if soabi.startswith("cpython"):
        # non-windows
        abi = "cp" + soabi.split("-")[1]
    elif soabi.startswith("cp"):
        # windows
        abi = soabi.split("-")[0]
    elif soabi.startswith("pypy"):
        abi = "-".join(soabi.split("-")[:2])
    elif soabi.startswith("graalpy"):
        abi = "-".join(soabi.split("-")[:3])
    elif soabi:
        # pyston, ironpython, others?
        abi = soabi
    else:
        return []
    return [_normalize_string(abi)]


def generic_tags(
    interpreter: str | None = None,
    abis: Iterable[str] | None = None,
    platforms: Iterable[str] | None = None,
    *,
    warn: bool = False,
) -> Iterator[Tag]:
    """
    Yields the tags for a generic interpreter.

    The tags consist of:
    - <interpreter>-<abi>-<platform>

    The "none" ABI will be added if it was not explicitly provided.
    """
    if not interpreter:
        interp_name = interpreter_name()
        interp_version = interpreter_version(warn=warn)
        interpreter = f"{interp_name}{interp_version}"
    abis = _generic_abi() if abis is None else list(abis)
    platforms = list(platforms or platform_tags())
    if "none" not in abis:
        abis.append("none")
    for abi in abis:
        for platform_ in platforms:
            yield Tag(interpreter, abi, platform_)


def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]:
    """
    Yields Python versions in descending order.

    After the latest version, the major-only version will be yielded, and then
    all previous versions of that major version.
    """
    if len(py_version) > 1:
        yield f"py{_version_nodot(py_version[:2])}"
    yield f"py{py_version[0]}"
    if len(py_version) > 1:
        for minor in range(py_version[1] - 1, -1, -1):
            yield f"py{_version_nodot((py_version[0], minor))}"


def compatible_tags(
    python_version: PythonVersion | None = None,
    interpreter: str | None = None,
    platforms: Iterable[str] | None = None,
) -> Iterator[Tag]:
    """
    Yields the sequence of tags that are compatible with a specific version of Python.

    The tags consist of:
    - py*-none-<platform>
    - <interpreter>-none-any  # ... if `interpreter` is provided.
    - py*-none-any
    """
    if not python_version:
        python_version = sys.version_info[:2]
    platforms = list(platforms or platform_tags())
    for version in _py_interpreter_range(python_version):
        for platform_ in platforms:
            yield Tag(version, "none", platform_)
    if interpreter:
        yield Tag(interpreter, "none", "any")
    for version in _py_interpreter_range(python_version):
        yield Tag(version, "none", "any")


def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str:
    if not is_32bit:
        return arch

    if arch.startswith("ppc"):
        return "ppc"

    return "i386"


def _mac_binary_formats(version: AppleVersion, cpu_arch: str) -> list[str]:
    formats = [cpu_arch]
    if cpu_arch == "x86_64":
        if version < (10, 4):
            return []
        formats.extend(["intel", "fat64", "fat32"])

    elif cpu_arch == "i386":
        if version < (10, 4):
            return []
        formats.extend(["intel", "fat32", "fat"])

    elif cpu_arch == "ppc64":
        # TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
        if version > (10, 5) or version < (10, 4):
            return []
        formats.append("fat64")

    elif cpu_arch == "ppc":
        if version > (10, 6):
            return []
        formats.extend(["fat32", "fat"])

    if cpu_arch in {"arm64", "x86_64"}:
        formats.append("universal2")

    if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}:
        formats.append("universal")

    return formats


def mac_platforms(
    version: AppleVersion | None = None, arch: str | None = None
) -> Iterator[str]:
    """
    Yields the platform tags for a macOS system.

    The `version` parameter is a two-item tuple specifying the macOS version to
    generate platform tags for. The `arch` parameter is the CPU architecture to
    generate platform tags for. Both parameters default to the appropriate value
    for the current system.
    """
    version_str, _, cpu_arch = platform.mac_ver()
    if version is None:
        version = cast("AppleVersion", tuple(map(int, version_str.split(".")[:2])))
        if version == (10, 16):
            # When built against an older macOS SDK, Python will report macOS 10.16
            # instead of the real version.
            version_str = subprocess.run(
                [
                    sys.executable,
                    "-sS",
                    "-c",
                    "import platform; print(platform.mac_ver()[0])",
                ],
                check=True,
                env={"SYSTEM_VERSION_COMPAT": "0"},
                stdout=subprocess.PIPE,
                text=True,
            ).stdout
            version = cast("AppleVersion", tuple(map(int, version_str.split(".")[:2])))

    if arch is None:
        arch = _mac_arch(cpu_arch)

    if (10, 0) <= version < (11, 0):
        # Prior to Mac OS 11, each yearly release of Mac OS bumped the
        # "minor" version number. The major version was always 10.
        major_version = 10
        for minor_version in range(version[1], -1, -1):
            compat_version = major_version, minor_version
            binary_formats = _mac_binary_formats(compat_version, arch)
            for binary_format in binary_formats:
                yield f"macosx_{major_version}_{minor_version}_{binary_format}"

    if version >= (11, 0):
        # Starting with Mac OS 11, each yearly release bumps the major version
        # number. The minor versions are now the midyear updates.
        minor_version = 0
        for major_version in range(version[0], 10, -1):
            compat_version = major_version, minor_version
            binary_formats = _mac_binary_formats(compat_version, arch)
            for binary_format in binary_formats:
                yield f"macosx_{major_version}_{minor_version}_{binary_format}"

    if version >= (11, 0):
        # Mac OS 11 on x86_64 is compatible with binaries from previous releases.
        # Arm64 support was introduced in 11.0, so no Arm binaries from previous
        # releases exist.
        #
        # However, the "universal2" binary format can have a
        # macOS version earlier than 11.0 when the x86_64 part of the binary supports
        # that version of macOS.
        major_version = 10
        if arch == "x86_64":
            for minor_version in range(16, 3, -1):
                compat_version = major_version, minor_version
                binary_formats = _mac_binary_formats(compat_version, arch)
                for binary_format in binary_formats:
                    yield f"macosx_{major_version}_{minor_version}_{binary_format}"
        else:
            for minor_version in range(16, 3, -1):
                compat_version = major_version, minor_version
                binary_format = "universal2"
                yield f"macosx_{major_version}_{minor_version}_{binary_format}"


def ios_platforms(
    version: AppleVersion | None = None, multiarch: str | None = None
) -> Iterator[str]:
    """
    Yields the platform tags for an iOS system.

    :param version: A two-item tuple specifying the iOS version to generate
        platform tags for. Defaults to the current iOS version.
    :param multiarch: The CPU architecture+ABI to generate platform tags for -
        (the value used by `sys.implementation._multiarch` e.g.,
        `arm64_iphoneos` or `x86_64_iphonesimulator`). Defaults to the current
        multiarch value.
    """
    if version is None:
        # if iOS is the current platform, ios_ver *must* be defined. However,
        # it won't exist for CPython versions before 3.13, which causes a mypy
        # error.
        _, release, _, _ = platform.ios_ver()  # type: ignore[attr-defined, unused-ignore]
        version = cast("AppleVersion", tuple(map(int, release.split(".")[:2])))

    if multiarch is None:
        multiarch = sys.implementation._multiarch
    multiarch = multiarch.replace("-", "_")

    ios_platform_template = "ios_{major}_{minor}_{multiarch}"

    # Consider any iOS major.minor version from the version requested, down to
    # 12.0. 12.0 is the first iOS version that is known to have enough features
    # to support CPython. Consider every possible minor release up to X.9. The
    # highest the minor has ever gone is 8 (14.8 and 15.8) but having some extra
    # candidates that won't ever match doesn't really hurt, and it saves us from
    # having to keep an explicit list of known iOS versions in the code. Return
    # the results in descending order of version number.

    # If the requested major version is less than 12, there won't be any matches.
    if version[0] < 12:
        return

    # Consider the actual X.Y version that was requested.
    yield ios_platform_template.format(
        major=version[0], minor=version[1], multiarch=multiarch
    )

    # Consider every minor version from X.0 to the minor version prior to the
    # version requested by the platform.
    for minor in range(version[1] - 1, -1, -1):
        yield ios_platform_template.format(
            major=version[0], minor=minor, multiarch=multiarch
        )

    for major in range(version[0] - 1, 11, -1):
        for minor in range(9, -1, -1):
            yield ios_platform_template.format(
                major=major, minor=minor, multiarch=multiarch
            )


def android_platforms(
    api_level: int | None = None, abi: str | None = None
) -> Iterator[str]:
    """
    Yields the :attr:`~Tag.platform` tags for Android. If this function is invoked on
    non-Android platforms, the ``api_level`` and ``abi`` arguments are required.

    :param int api_level: The maximum `API level
        <https://developer.android.com/tools/releases/platforms>`__ to return. Defaults
        to the current system's version, as returned by ``platform.android_ver``.
    :param str abi: The `Android ABI <https://developer.android.com/ndk/guides/abis>`__,
        e.g. ``arm64_v8a``. Defaults to the current system's ABI, as returned by
        ``sysconfig.get_platform``. Hyphens and periods will be replaced with
        underscores.
    """
    if platform.system() != "Android" and (api_level is None or abi is None):
        raise TypeError(
            "on non-Android platforms, the api_level and abi arguments are required"
        )

    if api_level is None:
        # Python 3.13 was the first version to return platform.system() == "Android",
        # and also the first version to define platform.android_ver().
        api_level = platform.android_ver().api_level  # type: ignore[attr-defined]

    if abi is None:
        abi = sysconfig.get_platform().split("-")[-1]
    abi = _normalize_string(abi)

    # 16 is the minimum API level known to have enough features to support CPython
    # without major patching. Yield every API level from the maximum down to the
    # minimum, inclusive.
    min_api_level = 16
    for ver in range(api_level, min_api_level - 1, -1):
        yield f"android_{ver}_{abi}"
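
# Editorial note (not part of the vendored module): on non-Android hosts both
# arguments must be supplied, and the ABI is normalized as documented, e.g.:
#
#     >>> list(android_platforms(api_level=18, abi="arm64-v8a"))
#     ['android_18_arm64_v8a', 'android_17_arm64_v8a', 'android_16_arm64_v8a']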

def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]:
    linux = _normalize_string(sysconfig.get_platform())
    if not linux.startswith("linux_"):
        # we should never be here, just yield the sysconfig one and return
        yield linux
        return
    if is_32bit:
        if linux == "linux_x86_64":
            linux = "linux_i686"
        elif linux == "linux_aarch64":
            linux = "linux_armv8l"
    _, arch = linux.split("_", 1)
    archs = {"armv8l": ["armv8l", "armv7l"]}.get(arch, [arch])
    yield from _manylinux.platform_tags(archs)
    yield from _musllinux.platform_tags(archs)
    for arch in archs:
        yield f"linux_{arch}"


def _generic_platforms() -> Iterator[str]:
    yield _normalize_string(sysconfig.get_platform())


def platform_tags() -> Iterator[str]:
    """
    Provides the platform tags for this installation.
    """
    if platform.system() == "Darwin":
        return mac_platforms()
    elif platform.system() == "iOS":
        return ios_platforms()
    elif platform.system() == "Android":
        return android_platforms()
    elif platform.system() == "Linux":
        return _linux_platforms()
    else:
        return _generic_platforms()


def interpreter_name() -> str:
    """
    Returns the name of the running interpreter.

    Some implementations have a reserved, two-letter abbreviation which will
    be returned when appropriate.
    """
    name = sys.implementation.name
    return INTERPRETER_SHORT_NAMES.get(name) or name


def interpreter_version(*, warn: bool = False) -> str:
    """
    Returns the version of the running interpreter.
    """
    version = _get_config_var("py_version_nodot", warn=warn)
    return str(version) if version else _version_nodot(sys.version_info[:2])


def _version_nodot(version: PythonVersion) -> str:
    return "".join(map(str, version))


def sys_tags(*, warn: bool = False) -> Iterator[Tag]:
    """
    Returns the sequence of tag triples for the running interpreter.

    The order of the sequence corresponds to priority order for the
    interpreter, from most to least important.
    """

    interp_name = interpreter_name()
    if interp_name == "cp":
        yield from cpython_tags(warn=warn)
    else:
        yield from generic_tags()

    if interp_name == "pp":
        interp = "pp3"
    elif interp_name == "cp":
        interp = "cp" + interpreter_version(warn=warn)
    else:
        interp = None
    yield from compatible_tags(interpreter=interp)
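
# Editorial note (not part of the vendored module): a sketch of how a caller
# might rank candidate wheels with sys_tags(). Output is platform-dependent,
# so no concrete values are shown; the helper below is hypothetical.
#
#     supported = {tag: rank for rank, tag in enumerate(sys_tags())}
#     def wheel_rank(file_tags):
#         # Lower rank wins; assumes at least one of the wheel's tags matches.
#         return min(supported[t] for t in file_tags if t in supported)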
158
.venv/lib/python3.9/site-packages/pip/_vendor/packaging/utils.py
Normal file
@@ -0,0 +1,158 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from __future__ import annotations

import re
from typing import NewType, Tuple, Union, cast

from .tags import Tag, parse_tag
from .version import InvalidVersion, Version, _TrimmedRelease

BuildTag = Union[Tuple[()], Tuple[int, str]]
NormalizedName = NewType("NormalizedName", str)


class InvalidName(ValueError):
    """
    An invalid distribution name; users should refer to the packaging user guide.
    """


class InvalidWheelFilename(ValueError):
    """
    An invalid wheel filename was found, users should refer to PEP 427.
    """


class InvalidSdistFilename(ValueError):
    """
    An invalid sdist filename was found, users should refer to the packaging user guide.
    """


# Core metadata spec for `Name`
_validate_regex = re.compile(r"[A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9]", re.IGNORECASE)
_normalized_regex = re.compile(r"[a-z0-9]|[a-z0-9]([a-z0-9-](?!--))*[a-z0-9]")
# PEP 427: The build number must start with a digit.
_build_tag_regex = re.compile(r"(\d+)(.*)")


def canonicalize_name(name: str, *, validate: bool = False) -> NormalizedName:
    if validate and not _validate_regex.fullmatch(name):
        raise InvalidName(f"name is invalid: {name!r}")
    # Ensure all ``.`` and ``_`` are ``-``
    # Emulates ``re.sub(r"[-_.]+", "-", name).lower()`` from PEP 503
    # Much faster than re, and even faster than str.translate
    value = name.lower().replace("_", "-").replace(".", "-")
    # Condense repeats (faster than regex)
    while "--" in value:
        value = value.replace("--", "-")
    return cast("NormalizedName", value)
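
# Editorial note (not part of the vendored module): the PEP 503 normalization
# above lower-cases the name and collapses runs of ".", "_" and "-" into a
# single "-", e.g.:
#
#     >>> canonicalize_name("Foo.Bar__baz")
#     'foo-bar-baz'
#     >>> canonicalize_name("pip")
#     'pip'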

def is_normalized_name(name: str) -> bool:
    return _normalized_regex.fullmatch(name) is not None


def canonicalize_version(
    version: Version | str, *, strip_trailing_zero: bool = True
) -> str:
    """
    Return a canonical form of a version as a string.

    >>> canonicalize_version('1.0.1')
    '1.0.1'

    Per PEP 625, versions may have multiple canonical forms, differing
    only by trailing zeros.

    >>> canonicalize_version('1.0.0')
    '1'
    >>> canonicalize_version('1.0.0', strip_trailing_zero=False)
    '1.0.0'

    Invalid versions are returned unaltered.

    >>> canonicalize_version('foo bar baz')
    'foo bar baz'
    """
    if isinstance(version, str):
        try:
            version = Version(version)
        except InvalidVersion:
            return str(version)
    return str(_TrimmedRelease(version) if strip_trailing_zero else version)


def parse_wheel_filename(
    filename: str,
) -> tuple[NormalizedName, Version, BuildTag, frozenset[Tag]]:
    if not filename.endswith(".whl"):
        raise InvalidWheelFilename(
            f"Invalid wheel filename (extension must be '.whl'): {filename!r}"
        )

    filename = filename[:-4]
    dashes = filename.count("-")
    if dashes not in (4, 5):
        raise InvalidWheelFilename(
            f"Invalid wheel filename (wrong number of parts): {filename!r}"
        )

    parts = filename.split("-", dashes - 2)
    name_part = parts[0]
    # See PEP 427 for the rules on escaping the project name.
    if "__" in name_part or re.match(r"^[\w\d._]*$", name_part, re.UNICODE) is None:
        raise InvalidWheelFilename(f"Invalid project name: {filename!r}")
    name = canonicalize_name(name_part)

    try:
        version = Version(parts[1])
    except InvalidVersion as e:
        raise InvalidWheelFilename(
            f"Invalid wheel filename (invalid version): {filename!r}"
        ) from e

    if dashes == 5:
        build_part = parts[2]
        build_match = _build_tag_regex.match(build_part)
        if build_match is None:
            raise InvalidWheelFilename(
                f"Invalid build number: {build_part} in {filename!r}"
            )
        build = cast("BuildTag", (int(build_match.group(1)), build_match.group(2)))
    else:
        build = ()
    tags = parse_tag(parts[-1])
    return (name, version, build, tags)
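
# Editorial note (not part of the vendored module): a sketch of the
# name/version/build/tags decomposition performed above; the filename is an
# arbitrary illustrative example.
#
#     >>> name, version, build, tags = parse_wheel_filename("pip-25.0-py3-none-any.whl")
#     >>> name, str(version), build, sorted(str(t) for t in tags)
#     ('pip', '25.0', (), ['py3-none-any'])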

def parse_sdist_filename(filename: str) -> tuple[NormalizedName, Version]:
    if filename.endswith(".tar.gz"):
        file_stem = filename[: -len(".tar.gz")]
    elif filename.endswith(".zip"):
        file_stem = filename[: -len(".zip")]
    else:
        raise InvalidSdistFilename(
            f"Invalid sdist filename (extension must be '.tar.gz' or '.zip'):"
            f" {filename!r}"
        )

    # We are requiring a PEP 440 version, which cannot contain dashes,
    # so we split on the last dash.
    name_part, sep, version_part = file_stem.rpartition("-")
    if not sep:
        raise InvalidSdistFilename(f"Invalid sdist filename: {filename!r}")

    name = canonicalize_name(name_part)

    try:
        version = Version(version_part)
    except InvalidVersion as e:
        raise InvalidSdistFilename(
            f"Invalid sdist filename (invalid version): {filename!r}"
        ) from e

    return (name, version)
@@ -0,0 +1,792 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
"""
.. testsetup::

    from pip._vendor.packaging.version import parse, Version
"""

from __future__ import annotations

import re
import sys
import typing
from typing import (
    Any,
    Callable,
    Literal,
    NamedTuple,
    SupportsInt,
    Tuple,
    TypedDict,
    Union,
)

from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType

if typing.TYPE_CHECKING:
    from typing_extensions import Self, Unpack

if sys.version_info >= (3, 13):  # pragma: no cover
    from warnings import deprecated as _deprecated
elif typing.TYPE_CHECKING:
    from typing_extensions import deprecated as _deprecated
else:  # pragma: no cover
    import functools
    import warnings

    def _deprecated(message: str) -> object:
        def decorator(func: object) -> object:
            @functools.wraps(func)
            def wrapper(*args: object, **kwargs: object) -> object:
                warnings.warn(
                    message,
                    category=DeprecationWarning,
                    stacklevel=2,
                )
                return func(*args, **kwargs)

            return wrapper

        return decorator


_LETTER_NORMALIZATION = {
    "alpha": "a",
    "beta": "b",
    "c": "rc",
    "pre": "rc",
    "preview": "rc",
    "rev": "post",
    "r": "post",
}

__all__ = ["VERSION_PATTERN", "InvalidVersion", "Version", "parse"]

LocalType = Tuple[Union[int, str], ...]

CmpPrePostDevType = Union[InfinityType, NegativeInfinityType, Tuple[str, int]]
CmpLocalType = Union[
    NegativeInfinityType,
    Tuple[Union[Tuple[int, str], Tuple[NegativeInfinityType, Union[int, str]]], ...],
]
CmpKey = Tuple[
    int,
    Tuple[int, ...],
    CmpPrePostDevType,
    CmpPrePostDevType,
    CmpPrePostDevType,
    CmpLocalType,
]
VersionComparisonMethod = Callable[[CmpKey, CmpKey], bool]


class _VersionReplace(TypedDict, total=False):
    epoch: int | None
    release: tuple[int, ...] | None
    pre: tuple[Literal["a", "b", "rc"], int] | None
    post: int | None
    dev: int | None
    local: str | None


def parse(version: str) -> Version:
    """Parse the given version string.

    >>> parse('1.0.dev1')
    <Version('1.0.dev1')>

    :param version: The version string to parse.
    :raises InvalidVersion: When the version string is not a valid version.
    """
    return Version(version)


class InvalidVersion(ValueError):
    """Raised when a version string is not a valid version.

    >>> Version("invalid")
    Traceback (most recent call last):
        ...
    packaging.version.InvalidVersion: Invalid version: 'invalid'
    """


class _BaseVersion:
    __slots__ = ()

    # This can also be a normal member (see the packaging_legacy package);
    # we are just requiring it to be readable. Actually defining a property
    # has runtime effect on subclasses, so it's typing only.
    if typing.TYPE_CHECKING:

        @property
        def _key(self) -> tuple[Any, ...]: ...

    def __hash__(self) -> int:
        return hash(self._key)

    # Please keep the duplicated `isinstance` check
    # in the six comparisons hereunder
    # unless you find a way to avoid adding overhead function calls.
    def __lt__(self, other: _BaseVersion) -> bool:
        if not isinstance(other, _BaseVersion):
            return NotImplemented

        return self._key < other._key

    def __le__(self, other: _BaseVersion) -> bool:
        if not isinstance(other, _BaseVersion):
            return NotImplemented

        return self._key <= other._key

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, _BaseVersion):
            return NotImplemented

        return self._key == other._key

    def __ge__(self, other: _BaseVersion) -> bool:
        if not isinstance(other, _BaseVersion):
            return NotImplemented

        return self._key >= other._key

    def __gt__(self, other: _BaseVersion) -> bool:
        if not isinstance(other, _BaseVersion):
            return NotImplemented

        return self._key > other._key

    def __ne__(self, other: object) -> bool:
        if not isinstance(other, _BaseVersion):
            return NotImplemented

        return self._key != other._key


# Deliberately not anchored to the start and end of the string, to make it
# easier for 3rd party code to reuse

# Note that ++ doesn't behave identically on CPython and PyPy, so not using it here
_VERSION_PATTERN = r"""
    v?+                                           # optional leading v
    (?:
        (?:(?P<epoch>[0-9]+)!)?+                  # epoch
        (?P<release>[0-9]+(?:\.[0-9]+)*+)         # release segment
        (?P<pre>                                  # pre-release
            [._-]?+
            (?P<pre_l>alpha|a|beta|b|preview|pre|c|rc)
            [._-]?+
            (?P<pre_n>[0-9]+)?
        )?+
        (?P<post>                                 # post release
            (?:-(?P<post_n1>[0-9]+))
            |
            (?:
                [._-]?
                (?P<post_l>post|rev|r)
                [._-]?
                (?P<post_n2>[0-9]+)?
            )
        )?+
        (?P<dev>                                  # dev release
            [._-]?+
            (?P<dev_l>dev)
            [._-]?+
            (?P<dev_n>[0-9]+)?
        )?+
    )
    (?:\+
        (?P<local>                                # local version
            [a-z0-9]+
            (?:[._-][a-z0-9]+)*+
        )
    )?+
"""

_VERSION_PATTERN_OLD = _VERSION_PATTERN.replace("*+", "*").replace("?+", "?")

# Possessive qualifiers were added in Python 3.11.
# CPython 3.11.0-3.11.4 had a bug: https://github.com/python/cpython/pull/107795
# Older PyPy also had a bug.
VERSION_PATTERN = (
    _VERSION_PATTERN_OLD
    if (sys.implementation.name == "cpython" and sys.version_info < (3, 11, 5))
    or (sys.implementation.name == "pypy" and sys.version_info < (3, 11, 13))
    or sys.version_info < (3, 11)
    else _VERSION_PATTERN
)
"""
A string containing the regular expression used to match a valid version.

The pattern is not anchored at either end, and is intended for embedding in larger
expressions (for example, matching a version number as part of a file name). The
regular expression should be compiled with the ``re.VERBOSE`` and ``re.IGNORECASE``
flags set.

:meta hide-value:
"""
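
# Editorial note (not part of the vendored module): since VERSION_PATTERN is
# unanchored, embed it in a larger expression and compile it with the flags
# named in the docstring above, e.g.:
#
#     _candidate = re.compile(
#         r"^\s*" + VERSION_PATTERN + r"\s*$",
#         re.VERBOSE | re.IGNORECASE,
#     )
#     assert _candidate.match("1.0.post1") is not None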

# Validation pattern for local version in replace()
_LOCAL_PATTERN = re.compile(r"[a-z0-9]+(?:[._-][a-z0-9]+)*", re.IGNORECASE)


def _validate_epoch(value: object, /) -> int:
    epoch = value or 0
    if isinstance(epoch, int) and epoch >= 0:
        return epoch
    msg = f"epoch must be non-negative integer, got {epoch}"
    raise InvalidVersion(msg)


def _validate_release(value: object, /) -> tuple[int, ...]:
    release = (0,) if value is None else value
    if (
        isinstance(release, tuple)
        and len(release) > 0
        and all(isinstance(i, int) and i >= 0 for i in release)
    ):
        return release
    msg = f"release must be a non-empty tuple of non-negative integers, got {release}"
    raise InvalidVersion(msg)


def _validate_pre(value: object, /) -> tuple[Literal["a", "b", "rc"], int] | None:
    if value is None:
        return value
    if (
        isinstance(value, tuple)
        and len(value) == 2
        and value[0] in ("a", "b", "rc")
        and isinstance(value[1], int)
        and value[1] >= 0
    ):
        return value
    msg = f"pre must be a tuple of ('a'|'b'|'rc', non-negative int), got {value}"
    raise InvalidVersion(msg)


def _validate_post(value: object, /) -> tuple[Literal["post"], int] | None:
    if value is None:
        return value
    if isinstance(value, int) and value >= 0:
        return ("post", value)
    msg = f"post must be non-negative integer, got {value}"
    raise InvalidVersion(msg)


def _validate_dev(value: object, /) -> tuple[Literal["dev"], int] | None:
    if value is None:
        return value
    if isinstance(value, int) and value >= 0:
        return ("dev", value)
    msg = f"dev must be non-negative integer, got {value}"
    raise InvalidVersion(msg)


def _validate_local(value: object, /) -> LocalType | None:
    if value is None:
        return value
    if isinstance(value, str) and _LOCAL_PATTERN.fullmatch(value):
        return _parse_local_version(value)
    msg = f"local must be a valid version string, got {value!r}"
    raise InvalidVersion(msg)


# Backward compatibility for internals before 26.0. Do not use.
class _Version(NamedTuple):
    epoch: int
    release: tuple[int, ...]
    dev: tuple[str, int] | None
    pre: tuple[str, int] | None
    post: tuple[str, int] | None
    local: LocalType | None


class Version(_BaseVersion):
    """This class abstracts handling of a project's versions.

    A :class:`Version` instance is comparison aware and can be compared and
    sorted using the standard Python interfaces.

    >>> v1 = Version("1.0a5")
    >>> v2 = Version("1.0")
    >>> v1
    <Version('1.0a5')>
    >>> v2
    <Version('1.0')>
    >>> v1 < v2
    True
    >>> v1 == v2
    False
    >>> v1 > v2
    False
    >>> v1 >= v2
    False
    >>> v1 <= v2
    True
    """

    __slots__ = ("_dev", "_epoch", "_key_cache", "_local", "_post", "_pre", "_release")
    __match_args__ = ("_str",)

    _regex = re.compile(r"\s*" + VERSION_PATTERN + r"\s*", re.VERBOSE | re.IGNORECASE)

    _epoch: int
    _release: tuple[int, ...]
    _dev: tuple[str, int] | None
    _pre: tuple[str, int] | None
    _post: tuple[str, int] | None
    _local: LocalType | None

    _key_cache: CmpKey | None

    def __init__(self, version: str) -> None:
        """Initialize a Version object.

        :param version:
            The string representation of a version which will be parsed and normalized
            before use.
        :raises InvalidVersion:
            If the ``version`` does not conform to PEP 440 in any way then this
            exception will be raised.
        """
        # Validate the version and parse it into pieces
        match = self._regex.fullmatch(version)
        if not match:
            raise InvalidVersion(f"Invalid version: {version!r}")
        self._epoch = int(match.group("epoch")) if match.group("epoch") else 0
        self._release = tuple(map(int, match.group("release").split(".")))
        self._pre = _parse_letter_version(match.group("pre_l"), match.group("pre_n"))
        self._post = _parse_letter_version(
            match.group("post_l"), match.group("post_n1") or match.group("post_n2")
        )
        self._dev = _parse_letter_version(match.group("dev_l"), match.group("dev_n"))
        self._local = _parse_local_version(match.group("local"))

        # Key which will be used for sorting
        self._key_cache = None

    def __replace__(self, **kwargs: Unpack[_VersionReplace]) -> Self:
        epoch = _validate_epoch(kwargs["epoch"]) if "epoch" in kwargs else self._epoch
        release = (
            _validate_release(kwargs["release"])
            if "release" in kwargs
            else self._release
        )
        pre = _validate_pre(kwargs["pre"]) if "pre" in kwargs else self._pre
        post = _validate_post(kwargs["post"]) if "post" in kwargs else self._post
        dev = _validate_dev(kwargs["dev"]) if "dev" in kwargs else self._dev
        local = _validate_local(kwargs["local"]) if "local" in kwargs else self._local

        if (
            epoch == self._epoch
            and release == self._release
            and pre == self._pre
            and post == self._post
            and dev == self._dev
            and local == self._local
        ):
            return self

        new_version = self.__class__.__new__(self.__class__)
        new_version._key_cache = None
        new_version._epoch = epoch
        new_version._release = release
        new_version._pre = pre
        new_version._post = post
        new_version._dev = dev
        new_version._local = local

        return new_version

    @property
    def _key(self) -> CmpKey:
        if self._key_cache is None:
            self._key_cache = _cmpkey(
                self._epoch,
                self._release,
                self._pre,
                self._post,
                self._dev,
                self._local,
            )
        return self._key_cache

    @property
    @_deprecated("Version._version is private and will be removed soon")
    def _version(self) -> _Version:
        return _Version(
            self._epoch, self._release, self._dev, self._pre, self._post, self._local
        )

    @_version.setter
    @_deprecated("Version._version is private and will be removed soon")
    def _version(self, value: _Version) -> None:
        self._epoch = value.epoch
        self._release = value.release
        self._dev = value.dev
        self._pre = value.pre
        self._post = value.post
        self._local = value.local
        self._key_cache = None

    def __repr__(self) -> str:
        """A representation of the Version that shows all internal state.

        >>> Version('1.0.0')
        <Version('1.0.0')>
        """
        return f"<Version('{self}')>"

    def __str__(self) -> str:
        """A string representation of the version that can be round-tripped.

        >>> str(Version("1.0a5"))
        '1.0a5'
        """
        # This is a hot function, so not calling self.base_version
        version = ".".join(map(str, self.release))

        # Epoch
        if self.epoch:
            version = f"{self.epoch}!{version}"

        # Pre-release
        if self.pre is not None:
            version += "".join(map(str, self.pre))

        # Post-release
        if self.post is not None:
            version += f".post{self.post}"

        # Development release
        if self.dev is not None:
            version += f".dev{self.dev}"

        # Local version segment
        if self.local is not None:
            version += f"+{self.local}"

        return version

    @property
    def _str(self) -> str:
        """Internal property for match_args"""
        return str(self)

    @property
    def epoch(self) -> int:
        """The epoch of the version.

        >>> Version("2.0.0").epoch
        0
        >>> Version("1!2.0.0").epoch
        1
        """
        return self._epoch

    @property
    def release(self) -> tuple[int, ...]:
        """The components of the "release" segment of the version.

        >>> Version("1.2.3").release
        (1, 2, 3)
        >>> Version("2.0.0").release
        (2, 0, 0)
        >>> Version("1!2.0.0.post0").release
        (2, 0, 0)

        Includes trailing zeroes but not the epoch or any pre-release / development /
        post-release suffixes.
        """
        return self._release

    @property
    def pre(self) -> tuple[str, int] | None:
        """The pre-release segment of the version.

        >>> print(Version("1.2.3").pre)
        None
        >>> Version("1.2.3a1").pre
        ('a', 1)
        >>> Version("1.2.3b1").pre
        ('b', 1)
        >>> Version("1.2.3rc1").pre
        ('rc', 1)
        """
        return self._pre

    @property
    def post(self) -> int | None:
        """The post-release number of the version.

        >>> print(Version("1.2.3").post)
        None
        >>> Version("1.2.3.post1").post
        1
        """
        return self._post[1] if self._post else None

    @property
    def dev(self) -> int | None:
        """The development number of the version.

        >>> print(Version("1.2.3").dev)
        None
        >>> Version("1.2.3.dev1").dev
        1
        """
        return self._dev[1] if self._dev else None

    @property
    def local(self) -> str | None:
        """The local version segment of the version.

        >>> print(Version("1.2.3").local)
        None
        >>> Version("1.2.3+abc").local
        'abc'
        """
        if self._local:
            return ".".join(str(x) for x in self._local)
        else:
            return None

    @property
    def public(self) -> str:
        """The public portion of the version.

        >>> Version("1.2.3").public
        '1.2.3'
        >>> Version("1.2.3+abc").public
        '1.2.3'
        >>> Version("1!1.2.3dev1+abc").public
        '1!1.2.3.dev1'
        """
        return str(self).split("+", 1)[0]

    @property
    def base_version(self) -> str:
        """The "base version" of the version.

        >>> Version("1.2.3").base_version
        '1.2.3'
        >>> Version("1.2.3+abc").base_version
        '1.2.3'
        >>> Version("1!1.2.3dev1+abc").base_version
        '1!1.2.3'

        The "base version" is the public version of the project without any pre or post
        release markers.
        """
        release_segment = ".".join(map(str, self.release))
        return f"{self.epoch}!{release_segment}" if self.epoch else release_segment

    @property
    def is_prerelease(self) -> bool:
        """Whether this version is a pre-release.

        >>> Version("1.2.3").is_prerelease
        False
        >>> Version("1.2.3a1").is_prerelease
        True
        >>> Version("1.2.3b1").is_prerelease
        True
        >>> Version("1.2.3rc1").is_prerelease
        True
        >>> Version("1.2.3dev1").is_prerelease
        True
        """
        return self.dev is not None or self.pre is not None

    @property
    def is_postrelease(self) -> bool:
        """Whether this version is a post-release.

        >>> Version("1.2.3").is_postrelease
        False
        >>> Version("1.2.3.post1").is_postrelease
        True
        """
        return self.post is not None

    @property
    def is_devrelease(self) -> bool:
        """Whether this version is a development release.

        >>> Version("1.2.3").is_devrelease
        False
        >>> Version("1.2.3.dev1").is_devrelease
        True
        """
        return self.dev is not None

    @property
    def major(self) -> int:
        """The first item of :attr:`release` or ``0`` if unavailable.

        >>> Version("1.2.3").major
        1
        """
        return self.release[0] if len(self.release) >= 1 else 0

    @property
    def minor(self) -> int:
        """The second item of :attr:`release` or ``0`` if unavailable.

        >>> Version("1.2.3").minor
        2
        >>> Version("1").minor
        0
        """
        return self.release[1] if len(self.release) >= 2 else 0

    @property
    def micro(self) -> int:
        """The third item of :attr:`release` or ``0`` if unavailable.

        >>> Version("1.2.3").micro
        3
        >>> Version("1").micro
        0
        """
        return self.release[2] if len(self.release) >= 3 else 0


class _TrimmedRelease(Version):
    __slots__ = ()

    def __init__(self, version: str | Version) -> None:
        if isinstance(version, Version):
            self._epoch = version._epoch
            self._release = version._release
            self._dev = version._dev
            self._pre = version._pre
            self._post = version._post
            self._local = version._local
            self._key_cache = version._key_cache
            return
        super().__init__(version)  # pragma: no cover

    @property
    def release(self) -> tuple[int, ...]:
        """
        Release segment without any trailing zeros.

        >>> _TrimmedRelease('1.0.0').release
        (1,)
        >>> _TrimmedRelease('0.0').release
        (0,)
        """
        # This leaves one 0.
        rel = super().release
        len_release = len(rel)
        i = len_release
        while i > 1 and rel[i - 1] == 0:
            i -= 1
        return rel if i == len_release else rel[:i]


def _parse_letter_version(
    letter: str | None, number: str | bytes | SupportsInt | None
) -> tuple[str, int] | None:
    if letter:
        # We normalize any letters to their lower case form
        letter = letter.lower()

        # We consider some words to be alternate spellings of other words and
        # in those cases we want to normalize the spellings to our preferred
        # spelling.
        letter = _LETTER_NORMALIZATION.get(letter, letter)

        # We consider there to be an implicit 0 in a pre-release if there is
        # not a numeral associated with it.
        return letter, int(number or 0)

    if number:
        # We assume if we are given a number, but we are not given a letter
        # then this is using the implicit post release syntax (e.g. 1.0-1)
        return "post", int(number)

    return None


_local_version_separators = re.compile(r"[\._-]")


def _parse_local_version(local: str | None) -> LocalType | None:
    """
    Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
    """
    if local is not None:
        return tuple(
            part.lower() if not part.isdigit() else int(part)
            for part in _local_version_separators.split(local)
        )
    return None


def _cmpkey(
    epoch: int,
    release: tuple[int, ...],
    pre: tuple[str, int] | None,
    post: tuple[str, int] | None,
    dev: tuple[str, int] | None,
    local: LocalType | None,
) -> CmpKey:
    # When we compare a release version, we want to compare it with all of the
    # trailing zeros removed. We will use this for our sorting key.
    len_release = len(release)
    i = len_release
    while i and release[i - 1] == 0:
        i -= 1
    _release = release if i == len_release else release[:i]

    # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
    # We'll do this by abusing the pre segment, but we _only_ want to do this
    # if there is not a pre or a post segment. If we have one of those then
    # the normal sorting rules will handle this case correctly.
    if pre is None and post is None and dev is not None:
        _pre: CmpPrePostDevType = NegativeInfinity
    # Versions without a pre-release (except as noted above) should sort after
    # those with one.
    elif pre is None:
        _pre = Infinity
    else:
        _pre = pre

    # Versions without a post segment should sort before those with one.
    if post is None:
        _post: CmpPrePostDevType = NegativeInfinity

    else:
        _post = post

    # Versions without a development segment should sort after those with one.
    if dev is None:
        _dev: CmpPrePostDevType = Infinity

    else:
        _dev = dev

    if local is None:
        # Versions without a local segment should sort before those with one.
        _local: CmpLocalType = NegativeInfinity
    else:
        # Versions with a local segment need that segment parsed to implement
        # the sorting rules in PEP440.
        # - Alpha numeric segments sort before numeric segments
        # - Alpha numeric segments sort lexicographically
        # - Numeric segments sort numerically
        # - Shorter versions sort before longer versions when the prefixes
        #   match exactly
        _local = tuple(
            (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
        )

    return epoch, _release, _pre, _post, _dev, _local
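
# Editorial note (not part of the vendored module): the key built by _cmpkey
# yields the PEP 440 ordering, including the dev-before-pre special case:
#
#     >>> [str(v) for v in sorted(Version(s) for s in ["1.0", "1.0.post1", "1.0a1", "1.0.dev0"])]
#     ['1.0.dev0', '1.0a1', '1.0', '1.0.post1']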
@@ -0,0 +1,17 @@
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
File diff suppressed because it is too large
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2010-202x The platformdirs developers

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
||||
@@ -0,0 +1,631 @@
"""
Utilities for determining application-specific dirs.

See <https://github.com/platformdirs/platformdirs> for details and usage.

"""

from __future__ import annotations

import os
import sys
from typing import TYPE_CHECKING

from .api import PlatformDirsABC
from .version import __version__
from .version import __version_tuple__ as __version_info__

if TYPE_CHECKING:
    from pathlib import Path
    from typing import Literal

if sys.platform == "win32":
    from pip._vendor.platformdirs.windows import Windows as _Result
elif sys.platform == "darwin":
    from pip._vendor.platformdirs.macos import MacOS as _Result
else:
    from pip._vendor.platformdirs.unix import Unix as _Result


def _set_platform_dir_class() -> type[PlatformDirsABC]:
    if os.getenv("ANDROID_DATA") == "/data" and os.getenv("ANDROID_ROOT") == "/system":
        if os.getenv("SHELL") or os.getenv("PREFIX"):
            return _Result

        from pip._vendor.platformdirs.android import _android_folder  # noqa: PLC0415

        if _android_folder() is not None:
            from pip._vendor.platformdirs.android import Android  # noqa: PLC0415

            return Android  # return to avoid redefinition of a result

    return _Result


if TYPE_CHECKING:
    # Work around mypy issue: https://github.com/python/mypy/issues/10962
    PlatformDirs = _Result
else:
    PlatformDirs = _set_platform_dir_class()  #: Currently active platform
AppDirs = PlatformDirs  #: Backwards compatibility with appdirs


def user_data_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_data_dir


def site_data_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data directory shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_data_dir


def user_config_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_config_dir


def site_config_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config directory shared by the users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_config_dir


def user_cache_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_cache_dir


def site_cache_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache directory shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_cache_dir


def user_state_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: state directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_state_dir


def user_log_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: log directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_log_dir


def user_documents_dir() -> str:
    """:returns: documents directory tied to the user"""
    return PlatformDirs().user_documents_dir


def user_downloads_dir() -> str:
    """:returns: downloads directory tied to the user"""
    return PlatformDirs().user_downloads_dir


def user_pictures_dir() -> str:
    """:returns: pictures directory tied to the user"""
    return PlatformDirs().user_pictures_dir


def user_videos_dir() -> str:
    """:returns: videos directory tied to the user"""
    return PlatformDirs().user_videos_dir


def user_music_dir() -> str:
    """:returns: music directory tied to the user"""
    return PlatformDirs().user_music_dir


def user_desktop_dir() -> str:
    """:returns: desktop directory tied to the user"""
    return PlatformDirs().user_desktop_dir


def user_runtime_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime directory tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_runtime_dir


def site_runtime_dir(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> str:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime directory shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_runtime_dir


def user_data_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_data_path


def site_data_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: data path shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_data_path


def user_config_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_config_path


def site_config_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    multipath: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param multipath: See `multipath <platformdirs.api.PlatformDirsABC.multipath>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: config path shared by the users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        multipath=multipath,
        ensure_exists=ensure_exists,
    ).site_config_path


def site_cache_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache path shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_cache_path


def user_cache_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: cache path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_cache_path


def user_state_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    roaming: bool = False,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param roaming: See `roaming <platformdirs.api.PlatformDirsABC.roaming>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: state path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        roaming=roaming,
        ensure_exists=ensure_exists,
    ).user_state_path


def user_log_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: log path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_log_path


def user_documents_path() -> Path:
    """:returns: documents path tied to the user"""
    return PlatformDirs().user_documents_path


def user_downloads_path() -> Path:
    """:returns: downloads path tied to the user"""
    return PlatformDirs().user_downloads_path


def user_pictures_path() -> Path:
    """:returns: pictures path tied to the user"""
    return PlatformDirs().user_pictures_path


def user_videos_path() -> Path:
    """:returns: videos path tied to the user"""
    return PlatformDirs().user_videos_path


def user_music_path() -> Path:
    """:returns: music path tied to the user"""
    return PlatformDirs().user_music_path


def user_desktop_path() -> Path:
    """:returns: desktop path tied to the user"""
    return PlatformDirs().user_desktop_path


def user_runtime_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime path tied to the user
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).user_runtime_path


def site_runtime_path(
    appname: str | None = None,
    appauthor: str | Literal[False] | None = None,
    version: str | None = None,
    opinion: bool = True,  # noqa: FBT001, FBT002
    ensure_exists: bool = False,  # noqa: FBT001, FBT002
) -> Path:
    """
    :param appname: See `appname <platformdirs.api.PlatformDirsABC.appname>`.
    :param appauthor: See `appauthor <platformdirs.api.PlatformDirsABC.appauthor>`.
    :param version: See `version <platformdirs.api.PlatformDirsABC.version>`.
    :param opinion: See `opinion <platformdirs.api.PlatformDirsABC.opinion>`.
    :param ensure_exists: See `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
    :returns: runtime path shared by users
    """
    return PlatformDirs(
        appname=appname,
        appauthor=appauthor,
        version=version,
        opinion=opinion,
        ensure_exists=ensure_exists,
    ).site_runtime_path


__all__ = [
    "AppDirs",
    "PlatformDirs",
    "PlatformDirsABC",
    "__version__",
    "__version_info__",
    "site_cache_dir",
    "site_cache_path",
    "site_config_dir",
    "site_config_path",
    "site_data_dir",
    "site_data_path",
    "site_runtime_dir",
    "site_runtime_path",
    "user_cache_dir",
    "user_cache_path",
    "user_config_dir",
    "user_config_path",
    "user_data_dir",
    "user_data_path",
    "user_desktop_dir",
    "user_desktop_path",
    "user_documents_dir",
    "user_documents_path",
    "user_downloads_dir",
    "user_downloads_path",
    "user_log_dir",
    "user_log_path",
    "user_music_dir",
    "user_music_path",
    "user_pictures_dir",
    "user_pictures_path",
    "user_runtime_dir",
    "user_runtime_path",
    "user_state_dir",
    "user_state_path",
    "user_videos_dir",
    "user_videos_path",
]
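
Every module-level helper above simply instantiates the platform-appropriate
``PlatformDirs`` class and reads one property, so repeated calls with the same
arguments yield the same path. A minimal usage sketch, assuming the standalone
``platformdirs`` distribution from PyPI (pip reaches this module as
``pip._vendor.platformdirs``; "MyApp" and "MyCompany" are placeholder names)::

    from platformdirs import user_cache_dir, user_config_dir

    print(user_config_dir("MyApp", "MyCompany", version="1.0"))
    print(user_cache_dir("MyApp", "MyCompany", version="1.0"))
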
@@ -0,0 +1,55 @@
"""Main entry point."""

from __future__ import annotations

from pip._vendor.platformdirs import PlatformDirs, __version__

PROPS = (
    "user_data_dir",
    "user_config_dir",
    "user_cache_dir",
    "user_state_dir",
    "user_log_dir",
    "user_documents_dir",
    "user_downloads_dir",
    "user_pictures_dir",
    "user_videos_dir",
    "user_music_dir",
    "user_runtime_dir",
    "site_data_dir",
    "site_config_dir",
    "site_cache_dir",
    "site_runtime_dir",
)


def main() -> None:
    """Run the main entry point."""
    app_name = "MyApp"
    app_author = "MyCompany"

    print(f"-- platformdirs {__version__} --")  # noqa: T201

    print("-- app dirs (with optional 'version')")  # noqa: T201
    dirs = PlatformDirs(app_name, app_author, version="1.0")
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201

    print("\n-- app dirs (without optional 'version')")  # noqa: T201
    dirs = PlatformDirs(app_name, app_author)
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201

    print("\n-- app dirs (without optional 'appauthor')")  # noqa: T201
    dirs = PlatformDirs(app_name)
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201

    print("\n-- app dirs (with disabled 'appauthor')")  # noqa: T201
    dirs = PlatformDirs(app_name, appauthor=False)
    for prop in PROPS:
        print(f"{prop}: {getattr(dirs, prop)}")  # noqa: T201


if __name__ == "__main__":
    main()
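
This ``__main__`` module makes the package executable. A hedged sketch of
invoking it programmatically, equivalent to running ``python -m platformdirs``
(or ``python -m pip._vendor.platformdirs`` for the vendored copy)::

    import runpy

    # runpy resolves a package name to its __main__ submodule, so this runs
    # main() above and prints every directory for the demo "MyApp" app.
    runpy.run_module("pip._vendor.platformdirs", run_name="__main__")
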
@@ -0,0 +1,249 @@
"""Android."""

from __future__ import annotations

import os
import re
import sys
from functools import lru_cache
from typing import TYPE_CHECKING, cast

from .api import PlatformDirsABC


class Android(PlatformDirsABC):
    """
    Follows the guidance `from here <https://android.stackexchange.com/a/216132>`_.

    Makes use of the `appname <platformdirs.api.PlatformDirsABC.appname>`, `version
    <platformdirs.api.PlatformDirsABC.version>`, `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.

    """

    @property
    def user_data_dir(self) -> str:
        """:return: data directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/files/<AppName>``"""
        return self._append_app_name_and_version(cast("str", _android_folder()), "files")

    @property
    def site_data_dir(self) -> str:
        """:return: data directory shared by users, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_config_dir(self) -> str:
        """
        :return: config directory tied to the user, e.g. \
        ``/data/user/<userid>/<packagename>/shared_prefs/<AppName>``
        """
        return self._append_app_name_and_version(cast("str", _android_folder()), "shared_prefs")

    @property
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users, same as `user_config_dir`"""
        return self.user_config_dir

    @property
    def user_cache_dir(self) -> str:
        """:return: cache directory tied to the user, e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>``"""
        return self._append_app_name_and_version(cast("str", _android_folder()), "cache")

    @property
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users, same as `user_cache_dir`"""
        return self.user_cache_dir

    @property
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_log_dir(self) -> str:
        """
        :return: log directory tied to the user, same as `user_cache_dir` if not opinionated else ``log`` in it,
          e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/log``
        """
        path = self.user_cache_dir
        if self.opinion:
            path = os.path.join(path, "log")  # noqa: PTH118
        return path

    @property
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user e.g. ``/storage/emulated/0/Documents``"""
        return _android_documents_folder()

    @property
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user e.g. ``/storage/emulated/0/Downloads``"""
        return _android_downloads_folder()

    @property
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user e.g. ``/storage/emulated/0/Pictures``"""
        return _android_pictures_folder()

    @property
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user e.g. ``/storage/emulated/0/DCIM/Camera``"""
        return _android_videos_folder()

    @property
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user e.g. ``/storage/emulated/0/Music``"""
        return _android_music_folder()

    @property
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user e.g. ``/storage/emulated/0/Desktop``"""
        return "/storage/emulated/0/Desktop"

    @property
    def user_runtime_dir(self) -> str:
        """
        :return: runtime directory tied to the user, same as `user_cache_dir` if not opinionated else ``tmp`` in it,
          e.g. ``/data/user/<userid>/<packagename>/cache/<AppName>/tmp``
        """
        path = self.user_cache_dir
        if self.opinion:
            path = os.path.join(path, "tmp")  # noqa: PTH118
        return path

    @property
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users, same as `user_runtime_dir`"""
        return self.user_runtime_dir


@lru_cache(maxsize=1)
def _android_folder() -> str | None:  # noqa: C901
    """:return: base folder for the Android OS or None if it cannot be found"""
    result: str | None = None
    # The type checker isn't happy with our "import android", so skip it when
    # type checking; see https://stackoverflow.com/a/61394121
    if not TYPE_CHECKING:
        try:
            # First try to get a path to the android app using python4android (if available)...
            from android import mActivity  # noqa: PLC0415

            context = cast("android.content.Context", mActivity.getApplicationContext())  # noqa: F821
            result = context.getFilesDir().getParentFile().getAbsolutePath()
        except Exception:  # noqa: BLE001
            result = None
    if result is None:
        try:
            # ...and fall back to using plain pyjnius, if python4android isn't available or doesn't deliver any useful
            # result...
            from jnius import autoclass  # noqa: PLC0415

            context = autoclass("android.content.Context")
            result = context.getFilesDir().getParentFile().getAbsolutePath()
        except Exception:  # noqa: BLE001
            result = None
    if result is None:
        # ...and if that fails too, find an android folder by looking at the paths on sys.path.
        # Warning: this only works for apps installed under /data, not adopted storage etc.
        pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files")
        for path in sys.path:
            if pattern.match(path):
                result = path.split("/files")[0]
                break
        else:
            result = None
    if result is None:
        # One last try: find an android folder by looking at the paths on sys.path, taking adopted storage paths
        # into account.
        pattern = re.compile(r"/mnt/expand/[a-fA-F0-9-]{36}/(data|user/\d+)/(.+)/files")
        for path in sys.path:
            if pattern.match(path):
                result = path.split("/files")[0]
                break
        else:
            result = None
    return result


@lru_cache(maxsize=1)
def _android_documents_folder() -> str:
    """:return: documents folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        documents_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOCUMENTS).getAbsolutePath()
    except Exception:  # noqa: BLE001
        documents_dir = "/storage/emulated/0/Documents"

    return documents_dir


@lru_cache(maxsize=1)
def _android_downloads_folder() -> str:
    """:return: downloads folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        downloads_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DOWNLOADS).getAbsolutePath()
    except Exception:  # noqa: BLE001
        downloads_dir = "/storage/emulated/0/Downloads"

    return downloads_dir


@lru_cache(maxsize=1)
def _android_pictures_folder() -> str:
    """:return: pictures folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        pictures_dir: str = context.getExternalFilesDir(environment.DIRECTORY_PICTURES).getAbsolutePath()
    except Exception:  # noqa: BLE001
        pictures_dir = "/storage/emulated/0/Pictures"

    return pictures_dir


@lru_cache(maxsize=1)
def _android_videos_folder() -> str:
    """:return: videos folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        videos_dir: str = context.getExternalFilesDir(environment.DIRECTORY_DCIM).getAbsolutePath()
    except Exception:  # noqa: BLE001
        videos_dir = "/storage/emulated/0/DCIM/Camera"

    return videos_dir


@lru_cache(maxsize=1)
def _android_music_folder() -> str:
    """:return: music folder for the Android OS"""
    # Get directories with pyjnius
    try:
        from jnius import autoclass  # noqa: PLC0415

        context = autoclass("android.content.Context")
        environment = autoclass("android.os.Environment")
        music_dir: str = context.getExternalFilesDir(environment.DIRECTORY_MUSIC).getAbsolutePath()
    except Exception:  # noqa: BLE001
        music_dir = "/storage/emulated/0/Music"

    return music_dir


__all__ = [
    "Android",
]
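
Each helper above follows the same shape: probe the Java API through pyjnius,
fall back to a conventional path on any failure, and memoise the answer with
``lru_cache(maxsize=1)`` so the JNI round-trip happens at most once per
process. A minimal sketch of that pattern in isolation (``_first_of`` is a
hypothetical name, not part of platformdirs)::

    from functools import lru_cache

    @lru_cache(maxsize=1)
    def _first_of(*candidates):
        # Try each zero-argument strategy in order and cache the first
        # non-None answer, swallowing failures like the helpers above.
        for candidate in candidates:
            try:
                value = candidate()
            except Exception:
                continue
            if value is not None:
                return value
        return None
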
@@ -0,0 +1,299 @@
"""Base API."""

from __future__ import annotations

import os
from abc import ABC, abstractmethod
from pathlib import Path
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from collections.abc import Iterator
    from typing import Literal


class PlatformDirsABC(ABC):  # noqa: PLR0904
    """Abstract base class for platform directories."""

    def __init__(  # noqa: PLR0913, PLR0917
        self,
        appname: str | None = None,
        appauthor: str | Literal[False] | None = None,
        version: str | None = None,
        roaming: bool = False,  # noqa: FBT001, FBT002
        multipath: bool = False,  # noqa: FBT001, FBT002
        opinion: bool = True,  # noqa: FBT001, FBT002
        ensure_exists: bool = False,  # noqa: FBT001, FBT002
    ) -> None:
        """
        Create a new platform directory.

        :param appname: See `appname`.
        :param appauthor: See `appauthor`.
        :param version: See `version`.
        :param roaming: See `roaming`.
        :param multipath: See `multipath`.
        :param opinion: See `opinion`.
        :param ensure_exists: See `ensure_exists`.

        """
        self.appname = appname  #: The name of the application.
        self.appauthor = appauthor
        """
        The name of the app author or distributing body for this application.

        Typically, it is the owning company name. Defaults to `appname`. You may pass ``False`` to disable it.

        """
        self.version = version
        """
        An optional version path element to append to the path.

        You might want to use this if you want multiple versions of your app to be able to run independently. If used,
        this would typically be ``<major>.<minor>``.

        """
        self.roaming = roaming
        """
        Whether to use the roaming appdata directory on Windows.

        That means that for users on a Windows network setup for roaming profiles, this user data will be synced on
        login (see
        `here <https://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>`_).

        """
        self.multipath = multipath
        """
        An optional parameter which indicates that the entire list of data dirs should be returned.

        By default, only the first item is returned.

        """
        self.opinion = opinion  #: A flag indicating whether to use opinionated values.
        self.ensure_exists = ensure_exists
        """
        Optionally create the directory (and any missing parents) upon access if it does not exist.

        By default, no directories are created.

        """

    def _append_app_name_and_version(self, *base: str) -> str:
        params = list(base[1:])
        if self.appname:
            params.append(self.appname)
            if self.version:
                params.append(self.version)
        path = os.path.join(base[0], *params)  # noqa: PTH118
        self._optionally_create_directory(path)
        return path

    def _optionally_create_directory(self, path: str) -> None:
        if self.ensure_exists:
            Path(path).mkdir(parents=True, exist_ok=True)

    def _first_item_as_path_if_multipath(self, directory: str) -> Path:
        if self.multipath:
            # If multipath is True, the first path is returned.
            directory = directory.partition(os.pathsep)[0]
        return Path(directory)

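    # Illustrative note (editorial, not part of the vendored file): with
    # multipath=True a *_dir property may return "a:b" joined by os.pathsep;
    # the helper above keeps only "a" for the *_path variants, since a
    # pathsep-joined string cannot be represented as a single Path.
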
    @property
    @abstractmethod
    def user_data_dir(self) -> str:
        """:return: data directory tied to the user"""

    @property
    @abstractmethod
    def site_data_dir(self) -> str:
        """:return: data directory shared by users"""

    @property
    @abstractmethod
    def user_config_dir(self) -> str:
        """:return: config directory tied to the user"""

    @property
    @abstractmethod
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users"""

    @property
    @abstractmethod
    def user_cache_dir(self) -> str:
        """:return: cache directory tied to the user"""

    @property
    @abstractmethod
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users"""

    @property
    @abstractmethod
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user"""

    @property
    @abstractmethod
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user"""

    @property
    @abstractmethod
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user"""

    @property
    @abstractmethod
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user"""

    @property
    @abstractmethod
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user"""

    @property
    @abstractmethod
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user"""

    @property
    @abstractmethod
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user"""

    @property
    @abstractmethod
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user"""

    @property
    @abstractmethod
    def user_runtime_dir(self) -> str:
        """:return: runtime directory tied to the user"""

    @property
    @abstractmethod
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users"""

    @property
    def user_data_path(self) -> Path:
        """:return: data path tied to the user"""
        return Path(self.user_data_dir)

    @property
    def site_data_path(self) -> Path:
        """:return: data path shared by users"""
        return Path(self.site_data_dir)

    @property
    def user_config_path(self) -> Path:
        """:return: config path tied to the user"""
        return Path(self.user_config_dir)

    @property
    def site_config_path(self) -> Path:
        """:return: config path shared by the users"""
        return Path(self.site_config_dir)

    @property
    def user_cache_path(self) -> Path:
        """:return: cache path tied to the user"""
        return Path(self.user_cache_dir)

    @property
    def site_cache_path(self) -> Path:
        """:return: cache path shared by users"""
        return Path(self.site_cache_dir)

    @property
    def user_state_path(self) -> Path:
        """:return: state path tied to the user"""
        return Path(self.user_state_dir)

    @property
    def user_log_path(self) -> Path:
        """:return: log path tied to the user"""
        return Path(self.user_log_dir)

    @property
    def user_documents_path(self) -> Path:
        """:return: documents path tied to the user"""
        return Path(self.user_documents_dir)

    @property
    def user_downloads_path(self) -> Path:
        """:return: downloads path tied to the user"""
        return Path(self.user_downloads_dir)

    @property
    def user_pictures_path(self) -> Path:
        """:return: pictures path tied to the user"""
        return Path(self.user_pictures_dir)

    @property
    def user_videos_path(self) -> Path:
        """:return: videos path tied to the user"""
        return Path(self.user_videos_dir)

    @property
    def user_music_path(self) -> Path:
        """:return: music path tied to the user"""
        return Path(self.user_music_dir)

    @property
    def user_desktop_path(self) -> Path:
        """:return: desktop path tied to the user"""
        return Path(self.user_desktop_dir)

    @property
    def user_runtime_path(self) -> Path:
        """:return: runtime path tied to the user"""
        return Path(self.user_runtime_dir)

    @property
    def site_runtime_path(self) -> Path:
        """:return: runtime path shared by users"""
        return Path(self.site_runtime_dir)

    def iter_config_dirs(self) -> Iterator[str]:
        """:yield: all user and site configuration directories."""
        yield self.user_config_dir
        yield self.site_config_dir

    def iter_data_dirs(self) -> Iterator[str]:
        """:yield: all user and site data directories."""
        yield self.user_data_dir
        yield self.site_data_dir

    def iter_cache_dirs(self) -> Iterator[str]:
        """:yield: all user and site cache directories."""
        yield self.user_cache_dir
        yield self.site_cache_dir

    def iter_runtime_dirs(self) -> Iterator[str]:
        """:yield: all user and site runtime directories."""
        yield self.user_runtime_dir
        yield self.site_runtime_dir

    def iter_config_paths(self) -> Iterator[Path]:
        """:yield: all user and site configuration paths."""
        for path in self.iter_config_dirs():
            yield Path(path)

    def iter_data_paths(self) -> Iterator[Path]:
        """:yield: all user and site data paths."""
        for path in self.iter_data_dirs():
            yield Path(path)

    def iter_cache_paths(self) -> Iterator[Path]:
        """:yield: all user and site cache paths."""
        for path in self.iter_cache_dirs():
            yield Path(path)

    def iter_runtime_paths(self) -> Iterator[Path]:
        """:yield: all user and site runtime paths."""
        for path in self.iter_runtime_dirs():
            yield Path(path)
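
The ``iter_*`` methods above give callers an ordered walk over user and site
locations. A minimal usage sketch, assuming the standalone ``platformdirs``
distribution ("MyApp", "MyCompany", and ``settings.toml`` are placeholder
names)::

    from platformdirs import PlatformDirs

    dirs = PlatformDirs("MyApp", "MyCompany")
    # User config is yielded before site config, so per-user settings win.
    for path in dirs.iter_config_paths():
        candidate = path / "settings.toml"
        if candidate.exists():
            print(f"loading {candidate}")
            break
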
@@ -0,0 +1,146 @@
"""macOS."""

from __future__ import annotations

import os.path
import sys
from typing import TYPE_CHECKING

from .api import PlatformDirsABC

if TYPE_CHECKING:
    from pathlib import Path


class MacOS(PlatformDirsABC):
    """
    Platform directories for the macOS operating system.

    Follows the guidance from
    `Apple documentation <https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/MacOSXDirectories/MacOSXDirectories.html>`_.
    Makes use of the `appname <platformdirs.api.PlatformDirsABC.appname>`,
    `version <platformdirs.api.PlatformDirsABC.version>`,
    `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.

    """

    @property
    def user_data_dir(self) -> str:
        """:return: data directory tied to the user, e.g. ``~/Library/Application Support/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Application Support"))  # noqa: PTH111

    @property
    def site_data_dir(self) -> str:
        """
        :return: data directory shared by users, e.g. ``/Library/Application Support/$appname/$version``.
          If we're using a Python binary managed by `Homebrew <https://brew.sh>`_, the directory
          will be under the Homebrew prefix, e.g. ``$homebrew_prefix/share/$appname/$version``.
          If `multipath <platformdirs.api.PlatformDirsABC.multipath>` is enabled, and we're in Homebrew,
          the response is a multi-path string separated by ":", e.g.
          ``$homebrew_prefix/share/$appname/$version:/Library/Application Support/$appname/$version``
        """
        is_homebrew = "/opt/python" in sys.prefix
        homebrew_prefix = sys.prefix.split("/opt/python")[0] if is_homebrew else ""
        path_list = [self._append_app_name_and_version(f"{homebrew_prefix}/share")] if is_homebrew else []
        path_list.append(self._append_app_name_and_version("/Library/Application Support"))
        if self.multipath:
            return os.pathsep.join(path_list)
        return path_list[0]

    @property
    def site_data_path(self) -> Path:
        """:return: data path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_data_dir)

    @property
    def user_config_dir(self) -> str:
        """:return: config directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def site_config_dir(self) -> str:
        """:return: config directory shared by the users, same as `site_data_dir`"""
        return self.site_data_dir

    @property
    def user_cache_dir(self) -> str:
        """:return: cache directory tied to the user, e.g. ``~/Library/Caches/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Caches"))  # noqa: PTH111

    @property
    def site_cache_dir(self) -> str:
        """
        :return: cache directory shared by users, e.g. ``/Library/Caches/$appname/$version``.
          If we're using a Python binary managed by `Homebrew <https://brew.sh>`_, the directory
          will be under the Homebrew prefix, e.g. ``$homebrew_prefix/var/cache/$appname/$version``.
          If `multipath <platformdirs.api.PlatformDirsABC.multipath>` is enabled, and we're in Homebrew,
          the response is a multi-path string separated by ":", e.g.
          ``$homebrew_prefix/var/cache/$appname/$version:/Library/Caches/$appname/$version``
        """
        is_homebrew = "/opt/python" in sys.prefix
        homebrew_prefix = sys.prefix.split("/opt/python")[0] if is_homebrew else ""
        path_list = [self._append_app_name_and_version(f"{homebrew_prefix}/var/cache")] if is_homebrew else []
        path_list.append(self._append_app_name_and_version("/Library/Caches"))
        if self.multipath:
            return os.pathsep.join(path_list)
        return path_list[0]

    @property
    def site_cache_path(self) -> Path:
        """:return: cache path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_cache_dir)

    @property
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user, e.g. ``~/Library/Logs/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Logs"))  # noqa: PTH111

    @property
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user, e.g. ``~/Documents``"""
        return os.path.expanduser("~/Documents")  # noqa: PTH111

    @property
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user, e.g. ``~/Downloads``"""
        return os.path.expanduser("~/Downloads")  # noqa: PTH111

    @property
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user, e.g. ``~/Pictures``"""
        return os.path.expanduser("~/Pictures")  # noqa: PTH111

    @property
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user, e.g. ``~/Movies``"""
        return os.path.expanduser("~/Movies")  # noqa: PTH111

    @property
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user, e.g. ``~/Music``"""
        return os.path.expanduser("~/Music")  # noqa: PTH111

    @property
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user, e.g. ``~/Desktop``"""
        return os.path.expanduser("~/Desktop")  # noqa: PTH111

    @property
    def user_runtime_dir(self) -> str:
        """:return: runtime directory tied to the user, e.g. ``~/Library/Caches/TemporaryItems/$appname/$version``"""
        return self._append_app_name_and_version(os.path.expanduser("~/Library/Caches/TemporaryItems"))  # noqa: PTH111

    @property
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users, same as `user_runtime_dir`"""
        return self.user_runtime_dir


__all__ = [
    "MacOS",
]
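
A minimal sketch of the Homebrew-aware ``multipath`` behaviour above (macOS
only; assumes the standalone ``platformdirs`` distribution and a placeholder
"MyApp" name)::

    from platformdirs import PlatformDirs

    dirs = PlatformDirs("MyApp", multipath=True)
    # On a Homebrew-managed Python this may print two os.pathsep-separated
    # entries; otherwise just /Library/Application Support/MyApp.
    print(dirs.site_data_dir)
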
@@ -0,0 +1,272 @@
"""Unix."""

from __future__ import annotations

import os
import sys
from configparser import ConfigParser
from pathlib import Path
from typing import TYPE_CHECKING, NoReturn

from .api import PlatformDirsABC

if TYPE_CHECKING:
    from collections.abc import Iterator

if sys.platform == "win32":

    def getuid() -> NoReturn:
        msg = "should only be used on Unix"
        raise RuntimeError(msg)

else:
    from os import getuid


class Unix(PlatformDirsABC):  # noqa: PLR0904
    """
    On Unix/Linux, we follow the `XDG Basedir Spec <https://specifications.freedesktop.org/basedir-spec/basedir-spec-
    latest.html>`_.

    The spec allows overriding directories with environment variables. The examples shown are the default values,
    alongside the name of the environment variable that overrides them. Makes use of the `appname
    <platformdirs.api.PlatformDirsABC.appname>`, `version <platformdirs.api.PlatformDirsABC.version>`, `multipath
    <platformdirs.api.PlatformDirsABC.multipath>`, `opinion <platformdirs.api.PlatformDirsABC.opinion>`, `ensure_exists
    <platformdirs.api.PlatformDirsABC.ensure_exists>`.

    """

    @property
    def user_data_dir(self) -> str:
        """
        :return: data directory tied to the user, e.g. ``~/.local/share/$appname/$version`` or
         ``$XDG_DATA_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_DATA_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.local/share")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def _site_data_dirs(self) -> list[str]:
        path = os.environ.get("XDG_DATA_DIRS", "")
        if not path.strip():
            path = f"/usr/local/share{os.pathsep}/usr/share"
        return [self._append_app_name_and_version(p) for p in path.split(os.pathsep)]

    @property
    def site_data_dir(self) -> str:
        """
        :return: data directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>` is
         enabled and ``XDG_DATA_DIRS`` is set to multiple paths, the response is also multiple paths separated by
         the OS path separator), e.g. ``/usr/local/share/$appname/$version`` or ``/usr/share/$appname/$version``
        """
        # XDG default for $XDG_DATA_DIRS; only first, if multipath is False
        dirs = self._site_data_dirs
        if not self.multipath:
            return dirs[0]
        return os.pathsep.join(dirs)

    @property
    def user_config_dir(self) -> str:
        """
        :return: config directory tied to the user, e.g. ``~/.config/$appname/$version`` or
         ``$XDG_CONFIG_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_CONFIG_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.config")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def _site_config_dirs(self) -> list[str]:
        path = os.environ.get("XDG_CONFIG_DIRS", "")
        if not path.strip():
            path = "/etc/xdg"
        return [self._append_app_name_and_version(p) for p in path.split(os.pathsep)]

    @property
    def site_config_dir(self) -> str:
        """
        :return: config directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>`
         is enabled and ``XDG_CONFIG_DIRS`` is set to multiple paths, the response is also multiple paths separated
         by the OS path separator), e.g. ``/etc/xdg/$appname/$version``
        """
        # XDG default for $XDG_CONFIG_DIRS; only first, if multipath is False
        dirs = self._site_config_dirs
        if not self.multipath:
            return dirs[0]
        return os.pathsep.join(dirs)

    @property
    def user_cache_dir(self) -> str:
        """
        :return: cache directory tied to the user, e.g. ``~/.cache/$appname/$version`` or
         ``$XDG_CACHE_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_CACHE_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.cache")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users, e.g. ``/var/cache/$appname/$version``"""
        return self._append_app_name_and_version("/var/cache")

    @property
    def user_state_dir(self) -> str:
        """
        :return: state directory tied to the user, e.g. ``~/.local/state/$appname/$version`` or
         ``$XDG_STATE_HOME/$appname/$version``
        """
        path = os.environ.get("XDG_STATE_HOME", "")
        if not path.strip():
            path = os.path.expanduser("~/.local/state")  # noqa: PTH111
        return self._append_app_name_and_version(path)

    @property
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user, same as `user_state_dir` if not opinionated else ``log`` in it"""
        path = self.user_state_dir
        if self.opinion:
            path = os.path.join(path, "log")  # noqa: PTH118
            self._optionally_create_directory(path)
        return path

@property
|
||||
def user_documents_dir(self) -> str:
|
||||
""":return: documents directory tied to the user, e.g. ``~/Documents``"""
|
||||
return _get_user_media_dir("XDG_DOCUMENTS_DIR", "~/Documents")
|
||||
|
||||
@property
|
||||
def user_downloads_dir(self) -> str:
|
||||
""":return: downloads directory tied to the user, e.g. ``~/Downloads``"""
|
||||
return _get_user_media_dir("XDG_DOWNLOAD_DIR", "~/Downloads")
|
||||
|
||||
@property
|
||||
def user_pictures_dir(self) -> str:
|
||||
""":return: pictures directory tied to the user, e.g. ``~/Pictures``"""
|
||||
return _get_user_media_dir("XDG_PICTURES_DIR", "~/Pictures")
|
||||
|
||||
@property
|
||||
def user_videos_dir(self) -> str:
|
||||
""":return: videos directory tied to the user, e.g. ``~/Videos``"""
|
||||
return _get_user_media_dir("XDG_VIDEOS_DIR", "~/Videos")
|
||||
|
||||
@property
|
||||
def user_music_dir(self) -> str:
|
||||
""":return: music directory tied to the user, e.g. ``~/Music``"""
|
||||
return _get_user_media_dir("XDG_MUSIC_DIR", "~/Music")
|
||||
|
||||
@property
|
||||
def user_desktop_dir(self) -> str:
|
||||
""":return: desktop directory tied to the user, e.g. ``~/Desktop``"""
|
||||
return _get_user_media_dir("XDG_DESKTOP_DIR", "~/Desktop")

    @property
    def user_runtime_dir(self) -> str:
        """
        :return: runtime directory tied to the user, e.g. ``/run/user/$(id -u)/$appname/$version`` or
         ``$XDG_RUNTIME_DIR/$appname/$version``.

         If ``$XDG_RUNTIME_DIR`` is not set, then on FreeBSD/OpenBSD/NetBSD this returns
         ``/var/run/user/$(id -u)/$appname/$version`` if that directory exists, and
         ``/tmp/runtime-$(id -u)/$appname/$version`` otherwise.
        """
        path = os.environ.get("XDG_RUNTIME_DIR", "")
        if not path.strip():
            if sys.platform.startswith(("freebsd", "openbsd", "netbsd")):
                path = f"/var/run/user/{getuid()}"
                if not Path(path).exists():
                    path = f"/tmp/runtime-{getuid()}"  # noqa: S108
            else:
                path = f"/run/user/{getuid()}"
        return self._append_app_name_and_version(path)

    @property
    def site_runtime_dir(self) -> str:
        """
        :return: runtime directory shared by users, e.g. ``/run/$appname/$version`` or \
         ``$XDG_RUNTIME_DIR/$appname/$version``.

         Note that this behaves almost exactly like `user_runtime_dir` if ``$XDG_RUNTIME_DIR`` is set, but will
         fall back to paths associated to the root user instead of a regular logged-in user if it's not set.

         If you wish to ensure that a logged-in root user path is returned e.g. ``/run/user/0``, use
         `user_runtime_dir` instead.

         For FreeBSD/OpenBSD/NetBSD, it returns ``/var/run/$appname/$version`` if ``$XDG_RUNTIME_DIR`` is not set.
        """
        path = os.environ.get("XDG_RUNTIME_DIR", "")
        if not path.strip():
            if sys.platform.startswith(("freebsd", "openbsd", "netbsd")):
                path = "/var/run"
            else:
                path = "/run"
        return self._append_app_name_and_version(path)

    @property
    def site_data_path(self) -> Path:
        """:return: data path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_data_dir)

    @property
    def site_config_path(self) -> Path:
        """:return: config path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_config_dir)

    @property
    def site_cache_path(self) -> Path:
        """:return: cache path shared by users. Only return the first item, even if ``multipath`` is set to ``True``"""
        return self._first_item_as_path_if_multipath(self.site_cache_dir)

    def iter_config_dirs(self) -> Iterator[str]:
        """:yield: all user and site configuration directories."""
        yield self.user_config_dir
        yield from self._site_config_dirs

    def iter_data_dirs(self) -> Iterator[str]:
        """:yield: all user and site data directories."""
        yield self.user_data_dir
        yield from self._site_data_dirs


def _get_user_media_dir(env_var: str, fallback_tilde_path: str) -> str:
    media_dir = _get_user_dirs_folder(env_var)
    if media_dir is None:
        media_dir = os.environ.get(env_var, "").strip()
        if not media_dir:
            media_dir = os.path.expanduser(fallback_tilde_path)  # noqa: PTH111

    return media_dir


def _get_user_dirs_folder(key: str) -> str | None:
    """
    Return directory from user-dirs.dirs config file.

    See https://freedesktop.org/wiki/Software/xdg-user-dirs/.

    """
    user_dirs_config_path = Path(Unix().user_config_dir) / "user-dirs.dirs"
    if user_dirs_config_path.exists():
        parser = ConfigParser()

        with user_dirs_config_path.open() as stream:
            # Add fake section header, so ConfigParser doesn't complain
            parser.read_string(f"[top]\n{stream.read()}")

        if key not in parser["top"]:
            return None

        path = parser["top"][key].strip('"')
        # Handle relative home paths
        return path.replace("$HOME", os.path.expanduser("~"))  # noqa: PTH111

    return None


__all__ = [
    "Unix",
]
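
A minimal usage sketch for the class above (illustrative, not part of the vendored file; "myapp" and the override values are invented). Each property re-reads the environment on access, so XDG overrides take effect immediately:

    # Hypothetical demo of the Unix resolver; assumes pip's vendored module path.
    import os
    from pip._vendor.platformdirs.unix import Unix

    dirs = Unix(appname="myapp", version="1.0", multipath=True)
    print(dirs.user_data_dir)   # ~/.local/share/myapp/1.0 while XDG_DATA_HOME is unset

    os.environ["XDG_DATA_HOME"] = "/srv/data"
    print(dirs.user_data_dir)   # /srv/data/myapp/1.0 -- the environment override wins

    os.environ["XDG_DATA_DIRS"] = "/opt/share:/usr/share"
    print(dirs.site_data_dir)   # /opt/share/myapp/1.0:/usr/share/myapp/1.0 (multipath=True)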
@@ -0,0 +1,34 @@
# file generated by setuptools-scm
# don't change, don't track in version control

__all__ = [
    "__version__",
    "__version_tuple__",
    "version",
    "version_tuple",
    "__commit_id__",
    "commit_id",
]

TYPE_CHECKING = False
if TYPE_CHECKING:
    from typing import Tuple
    from typing import Union

    VERSION_TUPLE = Tuple[Union[int, str], ...]
    COMMIT_ID = Union[str, None]
else:
    VERSION_TUPLE = object
    COMMIT_ID = object

version: str
__version__: str
__version_tuple__: VERSION_TUPLE
version_tuple: VERSION_TUPLE
commit_id: COMMIT_ID
__commit_id__: COMMIT_ID

__version__ = version = '4.5.1'
__version_tuple__ = version_tuple = (4, 5, 1)

__commit_id__ = commit_id = None
@@ -0,0 +1,278 @@
"""Windows."""

from __future__ import annotations

import os
import sys
from functools import lru_cache
from typing import TYPE_CHECKING

from .api import PlatformDirsABC

if TYPE_CHECKING:
    from collections.abc import Callable


class Windows(PlatformDirsABC):
    """
    `MSDN on where to store app data files <https://learn.microsoft.com/en-us/windows/win32/shell/knownfolderid>`_.

    Makes use of the `appname <platformdirs.api.PlatformDirsABC.appname>`, `appauthor
    <platformdirs.api.PlatformDirsABC.appauthor>`, `version <platformdirs.api.PlatformDirsABC.version>`, `roaming
    <platformdirs.api.PlatformDirsABC.roaming>`, `opinion <platformdirs.api.PlatformDirsABC.opinion>` and
    `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>` attributes.

    """

    @property
    def user_data_dir(self) -> str:
        """
        :return: data directory tied to the user, e.g.
         ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname`` (not roaming) or
         ``%USERPROFILE%\\AppData\\Roaming\\$appauthor\\$appname`` (roaming)
        """
        const = "CSIDL_APPDATA" if self.roaming else "CSIDL_LOCAL_APPDATA"
        path = os.path.normpath(get_win_folder(const))
        return self._append_parts(path)

    def _append_parts(self, path: str, *, opinion_value: str | None = None) -> str:
        params = []
        if self.appname:
            if self.appauthor is not False:
                author = self.appauthor or self.appname
                params.append(author)
            params.append(self.appname)
        if opinion_value is not None and self.opinion:
            params.append(opinion_value)
        if self.version:
            params.append(self.version)
        path = os.path.join(path, *params)  # noqa: PTH118
        self._optionally_create_directory(path)
        return path

    @property
    def site_data_dir(self) -> str:
        """:return: data directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname``"""
        path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA"))
        return self._append_parts(path)

    @property
    def user_config_dir(self) -> str:
        """:return: config directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def site_config_dir(self) -> str:
        """:return: config directory shared by users, same as `site_data_dir`"""
        return self.site_data_dir

    @property
    def user_cache_dir(self) -> str:
        """
        :return: cache directory tied to the user (if opinionated, with a ``Cache`` folder inside ``$appname``), e.g.
         ``%USERPROFILE%\\AppData\\Local\\$appauthor\\$appname\\Cache\\$version``
        """
        path = os.path.normpath(get_win_folder("CSIDL_LOCAL_APPDATA"))
        return self._append_parts(path, opinion_value="Cache")

    @property
    def site_cache_dir(self) -> str:
        """:return: cache directory shared by users, e.g. ``C:\\ProgramData\\$appauthor\\$appname\\Cache\\$version``"""
        path = os.path.normpath(get_win_folder("CSIDL_COMMON_APPDATA"))
        return self._append_parts(path, opinion_value="Cache")

    @property
    def user_state_dir(self) -> str:
        """:return: state directory tied to the user, same as `user_data_dir`"""
        return self.user_data_dir

    @property
    def user_log_dir(self) -> str:
        """:return: log directory tied to the user, same as `user_data_dir` if not opinionated else ``Logs`` in it"""
        path = self.user_data_dir
        if self.opinion:
            path = os.path.join(path, "Logs")  # noqa: PTH118
            self._optionally_create_directory(path)
        return path

    @property
    def user_documents_dir(self) -> str:
        """:return: documents directory tied to the user, e.g. ``%USERPROFILE%\\Documents``"""
        return os.path.normpath(get_win_folder("CSIDL_PERSONAL"))

    @property
    def user_downloads_dir(self) -> str:
        """:return: downloads directory tied to the user, e.g. ``%USERPROFILE%\\Downloads``"""
        return os.path.normpath(get_win_folder("CSIDL_DOWNLOADS"))

    @property
    def user_pictures_dir(self) -> str:
        """:return: pictures directory tied to the user, e.g. ``%USERPROFILE%\\Pictures``"""
        return os.path.normpath(get_win_folder("CSIDL_MYPICTURES"))

    @property
    def user_videos_dir(self) -> str:
        """:return: videos directory tied to the user, e.g. ``%USERPROFILE%\\Videos``"""
        return os.path.normpath(get_win_folder("CSIDL_MYVIDEO"))

    @property
    def user_music_dir(self) -> str:
        """:return: music directory tied to the user, e.g. ``%USERPROFILE%\\Music``"""
        return os.path.normpath(get_win_folder("CSIDL_MYMUSIC"))

    @property
    def user_desktop_dir(self) -> str:
        """:return: desktop directory tied to the user, e.g. ``%USERPROFILE%\\Desktop``"""
        return os.path.normpath(get_win_folder("CSIDL_DESKTOPDIRECTORY"))

    @property
    def user_runtime_dir(self) -> str:
        """
        :return: runtime directory tied to the user, e.g.
         ``%USERPROFILE%\\AppData\\Local\\Temp\\$appauthor\\$appname``
        """
        path = os.path.normpath(os.path.join(get_win_folder("CSIDL_LOCAL_APPDATA"), "Temp"))  # noqa: PTH118
        return self._append_parts(path)

    @property
    def site_runtime_dir(self) -> str:
        """:return: runtime directory shared by users, same as `user_runtime_dir`"""
        return self.user_runtime_dir


def get_win_folder_from_env_vars(csidl_name: str) -> str:
    """Get folder from environment variables."""
    result = get_win_folder_if_csidl_name_not_env_var(csidl_name)
    if result is not None:
        return result

    env_var_name = {
        "CSIDL_APPDATA": "APPDATA",
        "CSIDL_COMMON_APPDATA": "ALLUSERSPROFILE",
        "CSIDL_LOCAL_APPDATA": "LOCALAPPDATA",
    }.get(csidl_name)
    if env_var_name is None:
        msg = f"Unknown CSIDL name: {csidl_name}"
        raise ValueError(msg)
    result = os.environ.get(env_var_name)
    if result is None:
        msg = f"Unset environment variable: {env_var_name}"
        raise ValueError(msg)
    return result


def get_win_folder_if_csidl_name_not_env_var(csidl_name: str) -> str | None:
    """Get a folder for a CSIDL name that does not exist as an environment variable."""
    if csidl_name == "CSIDL_PERSONAL":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Documents")  # noqa: PTH118

    if csidl_name == "CSIDL_DOWNLOADS":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Downloads")  # noqa: PTH118

    if csidl_name == "CSIDL_MYPICTURES":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Pictures")  # noqa: PTH118

    if csidl_name == "CSIDL_MYVIDEO":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Videos")  # noqa: PTH118

    if csidl_name == "CSIDL_MYMUSIC":
        return os.path.join(os.path.normpath(os.environ["USERPROFILE"]), "Music")  # noqa: PTH118
    return None


def get_win_folder_from_registry(csidl_name: str) -> str:
    """
    Get folder from the registry.

    This is a fallback technique at best. I'm not sure if using the registry for these guarantees us the correct
    answer for all CSIDL_* names.

    """
    machine_names = {
        "CSIDL_COMMON_APPDATA",
    }
    shell_folder_name = {
        "CSIDL_APPDATA": "AppData",
        "CSIDL_COMMON_APPDATA": "Common AppData",
        "CSIDL_LOCAL_APPDATA": "Local AppData",
        "CSIDL_PERSONAL": "Personal",
        "CSIDL_DOWNLOADS": "{374DE290-123F-4565-9164-39C4925E467B}",
        "CSIDL_MYPICTURES": "My Pictures",
        "CSIDL_MYVIDEO": "My Video",
        "CSIDL_MYMUSIC": "My Music",
    }.get(csidl_name)
    if shell_folder_name is None:
        msg = f"Unknown CSIDL name: {csidl_name}"
        raise ValueError(msg)
    if sys.platform != "win32":  # only needed for mypy type checker to know that this code runs only on Windows
        raise NotImplementedError
    import winreg  # noqa: PLC0415

    # Use HKEY_LOCAL_MACHINE for system-wide folders, HKEY_CURRENT_USER for user-specific folders
    hkey = winreg.HKEY_LOCAL_MACHINE if csidl_name in machine_names else winreg.HKEY_CURRENT_USER

    key = winreg.OpenKey(hkey, r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders")
    directory, _ = winreg.QueryValueEx(key, shell_folder_name)
    return str(directory)


def get_win_folder_via_ctypes(csidl_name: str) -> str:
    """Get folder with ctypes."""
    # There is no 'CSIDL_DOWNLOADS'.
    # Use 'CSIDL_PROFILE' (40) and append the default folder 'Downloads' instead.
    # https://learn.microsoft.com/en-us/windows/win32/shell/knownfolderid

    import ctypes  # noqa: PLC0415

    csidl_const = {
        "CSIDL_APPDATA": 26,
        "CSIDL_COMMON_APPDATA": 35,
        "CSIDL_LOCAL_APPDATA": 28,
        "CSIDL_PERSONAL": 5,
        "CSIDL_MYPICTURES": 39,
        "CSIDL_MYVIDEO": 14,
        "CSIDL_MYMUSIC": 13,
        "CSIDL_DOWNLOADS": 40,
        "CSIDL_DESKTOPDIRECTORY": 16,
    }.get(csidl_name)
    if csidl_const is None:
        msg = f"Unknown CSIDL name: {csidl_name}"
        raise ValueError(msg)

    buf = ctypes.create_unicode_buffer(1024)
    windll = getattr(ctypes, "windll")  # noqa: B009 # using getattr to avoid false positive with mypy type checker
    windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)

    # Downgrade to short path name if it has high-bit chars.
    if any(ord(c) > 255 for c in buf):  # noqa: PLR2004
        buf2 = ctypes.create_unicode_buffer(1024)
        if windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
            buf = buf2

    if csidl_name == "CSIDL_DOWNLOADS":
        return os.path.join(buf.value, "Downloads")  # noqa: PTH118

    return buf.value


def _pick_get_win_folder() -> Callable[[str], str]:
    try:
        import ctypes  # noqa: PLC0415
    except ImportError:
        pass
    else:
        if hasattr(ctypes, "windll"):
            return get_win_folder_via_ctypes
    try:
        import winreg  # noqa: PLC0415, F401
    except ImportError:
        return get_win_folder_from_env_vars
    else:
        return get_win_folder_from_registry


get_win_folder = lru_cache(maxsize=None)(_pick_get_win_folder())

__all__ = [
    "Windows",
]
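
A minimal sketch of the pieces above working together (illustrative, not part of the vendored file; "MyApp"/"Acme" are invented). `_pick_get_win_folder` selects a strategy once at import time -- ctypes when `ctypes.windll` exists (a real Windows interpreter), else the registry, else environment variables -- and `lru_cache` memoizes each CSIDL lookup:

    # Hypothetical demo; output paths depend on the local Windows profile.
    from pip._vendor.platformdirs.windows import Windows, get_win_folder

    dirs = Windows(appname="MyApp", appauthor="Acme", version="1.0", roaming=True)
    print(dirs.user_data_dir)    # e.g. C:\Users\me\AppData\Roaming\Acme\MyApp\1.0
    print(dirs.user_cache_dir)   # e.g. C:\Users\me\AppData\Local\Acme\MyApp\Cache\1.0

    # The low-level resolver accepts a CSIDL name directly:
    print(get_win_folder("CSIDL_LOCAL_APPDATA"))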
@@ -0,0 +1,25 @@
Copyright (c) 2006-2022 by the respective authors (see AUTHORS file).
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in the
  documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,82 @@
"""
    Pygments
    ~~~~~~~~

    Pygments is a syntax highlighting package written in Python.

    It is a generic syntax highlighter for general use in all kinds of software
    such as forum systems, wikis or other applications that need to prettify
    source code. Highlights are:

    * a wide range of common languages and markup formats is supported
    * special attention is paid to details, increasing quality by a fair amount
    * support for new languages and formats is added easily
    * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
      formats that PIL supports, and ANSI sequences
    * it is usable as a command-line tool and as a library
    * ... and it highlights even Brainfuck!

    The `Pygments master branch`_ is installable with ``easy_install Pygments==dev``.

    .. _Pygments master branch:
       https://github.com/pygments/pygments/archive/master.zip#egg=Pygments-dev

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
from io import StringIO, BytesIO

__version__ = '2.19.2'
__docformat__ = 'restructuredtext'

__all__ = ['lex', 'format', 'highlight']


def lex(code, lexer):
    """
    Lex `code` with the `lexer` (must be a `Lexer` instance)
    and return an iterable of tokens. Currently, this only calls
    `lexer.get_tokens()`.
    """
    try:
        return lexer.get_tokens(code)
    except TypeError:
        # Heuristic to catch a common mistake.
        from pip._vendor.pygments.lexer import RegexLexer
        if isinstance(lexer, type) and issubclass(lexer, RegexLexer):
            raise TypeError('lex() argument must be a lexer instance, '
                            'not a class')
        raise


def format(tokens, formatter, outfile=None):  # pylint: disable=redefined-builtin
    """
    Format ``tokens`` (an iterable of tokens) with the formatter ``formatter``
    (a `Formatter` instance).

    If ``outfile`` is given and a valid file object (an object with a
    ``write`` method), the result will be written to it, otherwise it
    is returned as a string.
    """
    try:
        if not outfile:
            realoutfile = getattr(formatter, 'encoding', None) and BytesIO() or StringIO()
            formatter.format(tokens, realoutfile)
            return realoutfile.getvalue()
        else:
            formatter.format(tokens, outfile)
    except TypeError:
        # Heuristic to catch a common mistake.
        from pip._vendor.pygments.formatter import Formatter
        if isinstance(formatter, type) and issubclass(formatter, Formatter):
            raise TypeError('format() argument must be a formatter instance, '
                            'not a class')
        raise


def highlight(code, lexer, formatter, outfile=None):
    """
    This is the most high-level highlighting function. It combines `lex` and
    `format` in one function.
    """
return format(lex(code, lexer), formatter, outfile)
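
A short usage sketch for the three functions above (illustrative, not part of the vendored file): `highlight` simply feeds `lex` output into `format`, returning a string when no `outfile` is passed:

    # Hypothetical demo of highlight(); the code snippet is invented.
    from pip._vendor.pygments import highlight
    from pip._vendor.pygments.lexers import PythonLexer
    from pip._vendor.pygments.formatters import HtmlFormatter

    html = highlight("print('hi')", PythonLexer(), HtmlFormatter())
    print(html)  # an HTML <div class="highlight"> block, returned as a string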
@@ -0,0 +1,17 @@
"""
    pygments.__main__
    ~~~~~~~~~~~~~~~~~

    Main entry point for ``python -m pygments``.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import sys
from pip._vendor.pygments.cmdline import main

try:
    sys.exit(main(sys.argv))
except KeyboardInterrupt:
sys.exit(1)
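
For reference, a typical invocation of this entry point (hedged: the plain ``python -m pygments`` form applies to a standalone Pygments install; inside pip the module lives under ``pip._vendor.pygments``):

    # e.g. render a file with the Python lexer to ANSI terminal output
    python -m pygments -l python -f terminal some_script.py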
@@ -0,0 +1,70 @@
"""
    pygments.console
    ~~~~~~~~~~~~~~~~

    Format colored console output.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

esc = "\x1b["

codes = {}
codes[""] = ""
codes["reset"] = esc + "39;49;00m"

codes["bold"] = esc + "01m"
codes["faint"] = esc + "02m"
codes["standout"] = esc + "03m"
codes["underline"] = esc + "04m"
codes["blink"] = esc + "05m"
codes["overline"] = esc + "06m"

dark_colors = ["black", "red", "green", "yellow", "blue",
               "magenta", "cyan", "gray"]
light_colors = ["brightblack", "brightred", "brightgreen", "brightyellow", "brightblue",
                "brightmagenta", "brightcyan", "white"]

x = 30
for dark, light in zip(dark_colors, light_colors):
    codes[dark] = esc + "%im" % x
    codes[light] = esc + "%im" % (60 + x)
    x += 1

del dark, light, x

codes["white"] = codes["bold"]


def reset_color():
    return codes["reset"]


def colorize(color_key, text):
    return codes[color_key] + text + codes["reset"]


def ansiformat(attr, text):
    """
    Format ``text`` with a color and/or some attributes::

        color       normal color
        *color*     bold color
        _color_     underlined color
        +color+     blinking color
    """
    result = []
    if attr[:1] == attr[-1:] == '+':
        result.append(codes['blink'])
        attr = attr[1:-1]
    if attr[:1] == attr[-1:] == '*':
        result.append(codes['bold'])
        attr = attr[1:-1]
    if attr[:1] == attr[-1:] == '_':
        result.append(codes['underline'])
        attr = attr[1:-1]
    result.append(codes[attr])
    result.append(text)
    result.append(codes['reset'])
return ''.join(result)
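
A quick illustrative check of the helpers above (not part of the vendored file): `colorize` wraps text in a single color code, while `ansiformat` strips the `+`/`*`/`_` marker pairs off the attribute string and stacks the matching codes:

    # Hypothetical demo; the output only renders on an ANSI-capable terminal.
    from pip._vendor.pygments.console import ansiformat, colorize

    print(colorize("red", "error:") + " something failed")
    print(ansiformat("*_yellow_*", "bold underlined yellow"))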
@@ -0,0 +1,70 @@
"""
    pygments.filter
    ~~~~~~~~~~~~~~~

    Module that implements the default filter.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""


def apply_filters(stream, filters, lexer=None):
    """
    Use this method to apply an iterable of filters to
    a stream. If lexer is given it's forwarded to the
    filter, otherwise the filter receives `None`.
    """
    def _apply(filter_, stream):
        yield from filter_.filter(lexer, stream)
    for filter_ in filters:
        stream = _apply(filter_, stream)
return stream
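
Each loop iteration above wraps the previous stream in a fresh generator, so filters compose lazily; nothing runs until the caller iterates. A minimal sketch with a made-up uppercasing filter (illustrative, not part of the vendored file):

    # Hypothetical demo: chain one filter over a hand-built token stream.
    from pip._vendor.pygments.filter import Filter, apply_filters
    from pip._vendor.pygments.token import Text

    class Upper(Filter):
        def filter(self, lexer, stream):
            for ttype, value in stream:
                yield ttype, value.upper()

    tokens = [(Text, "hello "), (Text, "world")]
    print(list(apply_filters(tokens, [Upper()])))
    # [(Token.Text, 'HELLO '), (Token.Text, 'WORLD')]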


def simplefilter(f):
    """
    Decorator that converts a function into a filter::

        @simplefilter
        def lowercase(self, lexer, stream, options):
            for ttype, value in stream:
                yield ttype, value.lower()
    """
    return type(f.__name__, (FunctionFilter,), {
        '__module__': getattr(f, '__module__'),
        '__doc__': f.__doc__,
        'function': f,
    })


class Filter:
    """
    Default filter. Subclass this class or use the `simplefilter`
    decorator to create own filters.
    """

    def __init__(self, **options):
        self.options = options

    def filter(self, lexer, stream):
        raise NotImplementedError()


class FunctionFilter(Filter):
    """
    Abstract class used by `simplefilter` to create simple
    function filters on the fly. The `simplefilter` decorator
    automatically creates subclasses of this class for
    functions passed to it.
    """
    function = None

    def __init__(self, **options):
        if not hasattr(self, 'function'):
            raise TypeError(f'{self.__class__.__name__!r} used without bound function')
        Filter.__init__(self, **options)

    def filter(self, lexer, stream):
        # pylint: disable=not-callable
yield from self.function(lexer, stream, self.options)
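
A minimal sketch of the decorator in use (illustrative, not part of the vendored file; the filter name is invented). `simplefilter` returns a `FunctionFilter` subclass, so the decorated name must be instantiated before use; the bound `function` attribute is what `filter()` forwards to above:

    # Hypothetical demo of @simplefilter.
    from pip._vendor.pygments.filter import simplefilter

    @simplefilter
    def lowercase(self, lexer, stream, options):
        for ttype, value in stream:
            yield ttype, value.lower()

    filt = lowercase()  # 'lowercase' is now a Filter subclass, not a function
    print(list(filt.filter(None, [(None, "ABC")])))  # [(None, 'abc')]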