
Mini Shell

Directory : /proc/self/root/opt/alt/python36/lib/python3.6/site-packages/pip/_internal/index/
Current File : //proc/self/root/opt/alt/python36/lib/python3.6/site-packages/pip/_internal/index/collector.pyo

collector.pyo is the compiled bytecode of pip's pip/_internal/index/collector.py. Only the docstrings and symbol names embedded in the dump are readable; they are summarised below in source order.

Module docstring: "The main purpose of this module is to expose LinkCollector.collect_links()."

The module imports cgi, functools, itertools, logging, mimetypes, os and re from the standard library, OrderedDict from collections, the vendored html5lib and requests packages (plus unescape from pip._vendor.distlib.compat and RetryError/SSLError from requests.exceptions), urllib's parse and request modules, and pip internals: NetworkConnectionError, Link, SearchScope, raise_for_status, lru_cache, ARCHIVE_EXTENSIONS, pairwise, redact_auth_from_url, MYPY_CHECK_RUNNING, path_to_url, url_to_path, is_url and vcs. Under MYPY_CHECK_RUNNING it also imports optparse.Values, typing names, xml.etree.ElementTree, Response and PipSession for annotations, defining HTMLElement and ResponseHeaders aliases. A module-level logger is created with logging.getLogger(__name__).
_match_vcs_scheme(url): Look for VCS schemes in the URL. Returns the matched VCS scheme, or None if there's no match. The lower-cased URL is compared against each entry of vcs.schemes, and the scheme must be followed by one of "+:" to count as a match.

_is_url_like_archive(url): Return whether the URL looks like an archive, i.e. whether the Link's filename ends with one of the known ARCHIVE_EXTENSIONS.

_NotHTML(Exception): raised when a response that should be HTML is not; it carries the offending content_type and a request_desc describing the request (the HTTP method).
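A minimal sketch of those two URL checks; the scheme and extension lists are stand-ins, since only the names vcs.schemes and ARCHIVE_EXTENSIONS are visible in the dump:

from typing import Optional

# Stand-ins for pip's vcs.schemes and ARCHIVE_EXTENSIONS (assumed values).
VCS_SCHEMES = ["bzr", "git", "hg", "svn"]
ARCHIVE_EXTENSIONS = (".zip", ".tar.gz", ".tar.bz2", ".whl")


def match_vcs_scheme(url: str) -> Optional[str]:
    """Return the matched VCS scheme, or None if there's no match."""
    for scheme in VCS_SCHEMES:
        # Require '+' or ':' right after the scheme so that e.g. "github.com/..."
        # is not mistaken for a "git" URL.
        if url.lower().startswith(scheme) and url[len(scheme)] in "+:":
            return scheme
    return None


def is_url_like_archive(url: str) -> bool:
    """Return whether the URL's filename component looks like an archive."""
    filename = url.rstrip("/").rsplit("/", 1)[-1]
    return filename.endswith(ARCHIVE_EXTENSIONS)


print(match_vcs_scheme("git+https://github.com/pypa/pip.git"))   # git
print(is_url_like_archive("https://example.org/pkg-1.0.tar.gz"))  # True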
_ensure_html_header(response): Check the Content-Type header to ensure the response contains HTML. Raises `_NotHTML` if the content type is not text/html.

_NotHTTP(Exception): raised when a URL is not an http or https URL and so cannot be probed with a HEAD request.

_ensure_html_response(url, session): Send a HEAD request to the URL, and ensure the response contains HTML. Raises `_NotHTTP` if the URL is not available for a HEAD request, or `_NotHTML` if the content type is not text/html. The HEAD is sent with allow_redirects=True, checked with raise_for_status, and then passed to _ensure_html_header.

_get_html_response(url, session): Access an HTML page with GET, and return the response.

This consists of three parts:

1. If the URL looks suspiciously like an archive, send a HEAD first to check the Content-Type is HTML, to avoid downloading a large file. Raise `_NotHTTP` if the content type cannot be determined, or `_NotHTML` if it is not HTML.
2. Actually perform the request. Raise HTTP exceptions on network failures.
3. Check the Content-Type header to make sure we got HTML, and raise `_NotHTML` otherwise.

The fetch is logged as "Getting page %s" and the GET carries "Accept: text/html" and "Cache-Control: max-age=0" headers.
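A sketch of that three-step fetch written against the requests API; the exception classes and helpers mirror the names above, and the caller supplies the session (pip uses its own PipSession):

import logging
from urllib.parse import urlsplit

import requests

logger = logging.getLogger(__name__)


class NotHTTP(Exception):
    """The URL cannot be checked with a HEAD request (not http/https)."""


class NotHTML(Exception):
    """The response was not text/html."""

    def __init__(self, content_type: str, request_desc: str) -> None:
        super().__init__(content_type, request_desc)
        self.content_type = content_type
        self.request_desc = request_desc


def ensure_html_header(response: requests.Response) -> None:
    content_type = response.headers.get("Content-Type", "")
    if not content_type.lower().startswith("text/html"):
        raise NotHTML(content_type, response.request.method)


def ensure_html_response(url: str, session: requests.Session) -> None:
    if urlsplit(url).scheme not in {"http", "https"}:
        raise NotHTTP()
    resp = session.head(url, allow_redirects=True)
    resp.raise_for_status()
    ensure_html_header(resp)


def get_html_response(url: str, session: requests.Session) -> requests.Response:
    if is_url_like_archive(url):            # sketched above
        ensure_html_response(url, session)  # 1. cheap HEAD probe first
    logger.debug("Getting page %s", url)
    resp = session.get(url, headers={"Accept": "text/html",
                                     "Cache-Control": "max-age=0"})
    resp.raise_for_status()                 # 2. surface HTTP errors
    ensure_html_header(resp)                # 3. confirm we really got HTML
    return resp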


_get_encoding_from_headers(headers): Determine if we have any encoding information in our headers. The Content-Type header is parsed with cgi.parse_header and, if it carries a charset parameter, that charset is returned; otherwise the result is None.
_determine_base_url(document, page_url): Determine the HTML document's base URL.

This looks for a ``<base>`` tag in the HTML document. If present, its href attribute denotes the base URL of anchor tags in the document. If there is no such tag (or if it does not have a valid href attribute), the HTML file's URL is used as the base URL.

:param document: An HTML document representation. The current implementation expects the result of ``html5lib.parse()``.
:param page_url: The URL of the HTML document.

The implementation iterates over document.findall(".//base") and returns the first non-None href.
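The same lookup works on any ElementTree-style tree, so a self-contained sketch can use xml.etree directly:

from xml.etree import ElementTree


def determine_base_url(document: ElementTree.Element, page_url: str) -> str:
    """Prefer the href of a <base> tag; fall back to the page's own URL."""
    for base in document.findall(".//base"):
        href = base.get("href")
        if href is not None:
            return href
    return page_url


doc = ElementTree.fromstring(
    '<html><head><base href="https://files.example.org/"/></head><body/></html>'
)
print(determine_base_url(doc, "https://example.org/simple/pip/"))
# -> https://files.example.org/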

_clean_url_path_part(part): Clean a "part" of a URL path (i.e. after splitting on "@" characters), by unquoting it and quoting it again with urllib.

_clean_file_url_path(part): Clean the first part of a URL path that corresponds to a local filesystem path (i.e. the first part after splitting on "@" characters), using url2pathname and pathname2url so the local path survives the round trip.

_reserved_chars_re: a case-insensitive regular expression, re.compile("(@|%2F)", re.IGNORECASE), marking the separators that must not be re-encoded.

_clean_url_path(path, is_local_path): Clean the path portion of a URL. The path is split on the reserved separators, each ordinary chunk is cleaned with the appropriate function above (the file variant when is_local_path is set), the separators are re-appended upper-cased, and the pieces are joined back together.
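A sketch of that cleaning step for the non-local case, quoting each chunk without double-quoting what is already escaped and leaving the reserved "@" / "%2F" separators in place:

import re
from urllib.parse import quote, unquote

_reserved_chars_re = re.compile("(@|%2F)", re.IGNORECASE)


def clean_url_path(path: str) -> str:
    """Percent-quote a URL path while preserving '@' and '%2F' separators."""
    parts = _reserved_chars_re.split(path)
    cleaned = []
    # parts alternates ordinary text and reserved separators; pad the separator
    # list with '' so every chunk gets a (possibly empty) separator after it.
    for to_clean, reserved in zip(parts[::2], parts[1::2] + [""]):
        # unquote-then-quote keeps an existing %20 from becoming %2520.
        cleaned.append(quote(unquote(to_clean)))
        cleaned.append(reserved.upper())
    return "".join(cleaned)


print(clean_url_path("/a b/c%20d@files%2fe"))  # /a%20b/c%20d@files%2Fe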
_clean_link(url): Make sure a link is fully quoted. For example, if ' ' occurs in the URL, it will be replaced with "%20", and without double-quoting other characters. The URL is split with urlparse, its path is run through _clean_url_path (treated as a local path when the URL has no netloc, as with file:/// URLs), and the result is reassembled with urlunparse.

_create_link_from_element(anchor, page_url, base_url): Convert an anchor element in a simple repository page to a Link. Anchors without an href yield None; otherwise the href is joined against the base URL with urljoin and cleaned, the data-requires-python attribute (unescaped) becomes requires_python, the data-yanked attribute becomes yanked_reason, and page_url is recorded as comes_from.

CacheablePageContent(page): a thin wrapper around an HTMLPage whose equality and hash are based on page.url, so a page can serve as a cache key.
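A sketch of the anchor-to-link conversion; the Link class is replaced by a plain dict here, since only the attribute names are recoverable from the dump:

from typing import Optional
from urllib.parse import urljoin
from xml.etree import ElementTree


def create_link_from_element(anchor: ElementTree.Element,
                             page_url: str,
                             base_url: str) -> Optional[dict]:
    """Turn an <a> element from a simple-index page into a link record."""
    href = anchor.get("href")
    if not href:
        return None
    url = urljoin(base_url, href)          # resolve relative hrefs
    return {
        "url": url,
        "comes_from": page_url,
        "requires_python": anchor.get("data-requires-python"),  # e.g. ">=3.6"
        "yanked_reason": anchor.get("data-yanked"),              # non-None marks a yanked release
    }


a = ElementTree.fromstring('<a href="pkg-1.0.tar.gz" data-requires-python="&gt;=3.6">pkg</a>')
print(create_link_from_element(a, "https://example.org/simple/pkg/",
                               "https://example.org/simple/pkg/"))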
with_cached_html_pages(fn): a decorator applied to parse_links. Given a function that parses an Iterable[Link] from an HTMLPage, cache the function's result (keyed by CacheablePageContent), unless the HTMLPage `page` has `page.cache_link_parsing == False`. The cache is an unbounded functools.lru_cache (maxsize=None) and the wrapper materialises the parsed links into a list before caching.

parse_links(page): Parse an HTML document, and yield its anchor elements as Link objects. The page content is parsed with html5lib.parse(page.content, transport_encoding=page.encoding, namespaceHTMLElements=False); the base URL comes from _determine_base_url, and each ".//a" element is converted with _create_link_from_element, skipping those that return None.
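A sketch of the parsing loop; html5lib is assumed to be installed (pip vendors it), and the base-URL and anchor helpers are the ones sketched above:

import html5lib  # pip install html5lib


def parse_links(content: bytes, encoding: str, page_url: str):
    """Yield one link record per usable <a> element on a simple-index page."""
    document = html5lib.parse(
        content,
        transport_encoding=encoding,
        namespaceHTMLElements=False,   # keep plain tag names so ".//a" matches
    )
    base_url = determine_base_url(document, page_url)   # sketched earlier
    for anchor in document.findall(".//a"):
        link = create_link_from_element(anchor, page_url=page_url, base_url=base_url)
        if link is not None:
            yield link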
HTMLPage: Represents one page, along with its URL. The constructor stores the raw page content together with:

:param encoding: the encoding to decode the given content.
:param url: the URL from which the HTML was downloaded.
:param cache_link_parsing: whether links parsed from this page's url should be cached. PyPI index urls should have this set to False, for example.

str(page) returns the page URL with any embedded credentials redacted (redact_auth_from_url).

_handle_get_page_fail(link, reason, meth=None): log "Could not fetch URL %s: %s - skipping" for the given link and reason, via logger.debug unless another logging method is passed in.

_make_html_page(response, cache_link_parsing=True): build an HTMLPage from a response, taking the encoding from _get_encoding_from_headers(response.headers) and the URL from the response itself.
_get_html_page(link, session=None): fetch one project page and return an HTMLPage, or None if the page should be skipped. A missing session raises TypeError ("_get_html_page() missing 1 required keyword argument: 'session'"). The link URL is stripped of any #fragment, then:

- URLs with a VCS scheme are skipped with the warning "Cannot look at %s URL %s because it does not support lookup as web pages."
- file: URLs that point at a directory get "index.html" appended ("# file: URL is directory, getting %s").
- otherwise _get_html_response(url, session) is called and the response is turned into a page with _make_html_page, honouring the link's cache_link_parsing flag.
- failures are logged and swallowed: _NotHTTP gives "Skipping page %s because it looks like an archive, and cannot be checked by a HTTP HEAD request."; _NotHTML gives "Skipping page %s because the %s request got Content-Type: %s. The only supported Content-Type is text/html"; NetworkConnectionError and RetryError go through _handle_get_page_fail; SSLError is reported as "There was a problem confirming the ssl certificate: ..." at info level; requests.ConnectionError becomes "connection error: {}" and requests.Timeout becomes "timed out".
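A condensed sketch of that dispatch-and-fallback flow; it reuses the helpers sketched above and trims the exception set to the recoverable ones:

import logging
import os
from urllib.parse import urlparse
from urllib.request import url2pathname

import requests

logger = logging.getLogger(__name__)


def get_html_page(url: str, session: requests.Session):
    """Return the fetched page response, or None when the URL is skipped."""
    url = url.split("#", 1)[0]                      # drop any fragment
    if match_vcs_scheme(url):                       # sketched earlier
        logger.warning("Cannot look at %s: VCS URLs are not web pages.", url)
        return None
    parsed = urlparse(url)
    if parsed.scheme == "file" and os.path.isdir(url2pathname(parsed.path)):
        url = url.rstrip("/") + "/index.html"       # directory index, as in the dump
        logger.debug("file: URL is directory, getting %s", url)
    try:
        return get_html_response(url, session)      # sketched earlier
    except (NotHTTP, NotHTML) as exc:
        logger.warning("Skipping page %s: %s", url, exc)
    except (requests.ConnectionError, requests.Timeout) as exc:
        logger.debug("Could not fetch URL %s: %s - skipping", url, exc)
    return None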
_remove_duplicate_links(links): Return a list of links, with duplicates removed and ordering preserved (an OrderedDict.fromkeys pass).

group_locations(locations, expand_dir=False): Divide a list of locations into two groups: "files" (archives) and "urls."

:return: A pair of lists (files, urls).

Each location is checked with os.path.exists and for a "file:" prefix. A nested sort_path helper converts a path to a file: URL and, based on mimetypes.guess_type(url, strict=False), appends text/html entries to urls and everything else to files. Local directories are expanded with os.listdir when expand_dir is set, kept as URLs when they were given as file: URLs, and otherwise skipped with "Path '%s' is ignored: it is a directory."; local files go through sort_path; locations that exist but are neither give "Url '%s' is ignored: it is neither a file nor a directory."; remote URLs (is_url) go straight to urls; anything else is dropped with "Url '%s' is ignored. It is either a non-existing path or lacks a specific scheme."
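A compact sketch of that split; path_to_url is approximated with pathlib's as_uri, logging is replaced by print, and file: URL inputs are not handled:

import mimetypes
import os
from pathlib import Path
from typing import List, Tuple


def group_locations(locations: List[str], expand_dir: bool = False) -> Tuple[List[str], List[str]]:
    """Split locations into (files, urls): archives vs. pages to scrape."""
    files: List[str] = []
    urls: List[str] = []

    def sort_path(path: str) -> None:
        url = Path(path).resolve().as_uri()
        if mimetypes.guess_type(url, strict=False)[0] == "text/html":
            urls.append(url)        # an index page to crawl
        else:
            files.append(url)       # treat as a downloadable archive

    for location in locations:
        if os.path.isdir(location):
            if expand_dir:
                for item in os.listdir(location):
                    sort_path(os.path.join(location, item))
            else:
                print("Path %r is ignored: it is a directory." % location)
        elif os.path.isfile(location):
            sort_path(location)
        elif location.startswith(("http://", "https://")):
            urls.append(location)
        else:
            print("Location %r is ignored: not a file, directory, or URL." % location)
    return files, urls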
CollectedLinks: Encapsulates the return value of a call to LinkCollector.collect_links().

The return value includes both URLs to project pages containing package links, as well as individual package Link objects collected from other sources.

This info is stored separately as:

(1) links from the configured file locations,
(2) links from the configured find_links, and
(3) urls to HTML project pages, as described by the PEP 503 simple repository API.

Its constructor simply stores the three collections:

:param files: Links from file locations.
:param find_links: Links from find_links.
:param project_urls: URLs to HTML project pages, as described by the PEP 503 simple repository API.
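As a container it maps naturally onto a small dataclass; Link is left as a plain string here because the real class lives in pip._internal.models.link:

from dataclasses import dataclass, field
from typing import List


@dataclass
class CollectedLinks:
    """Return value of collect_links(): three separately-sourced groups of links."""
    files: List[str] = field(default_factory=list)         # links from file locations
    find_links: List[str] = field(default_factory=list)    # links from --find-links
    project_urls: List[str] = field(default_factory=list)  # PEP 503 project page URLs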
LinkCollector: Responsible for collecting Link objects from all configured locations, making network requests as needed. The class's main method is its collect_links() method. Instances store the session and the search_scope they are given.

LinkCollector.create(session, options, suppress_no_index=False): a classmethod that builds a collector from parsed command-line options.

:param session: The Session to use to make requests.
:param suppress_no_index: Whether to ignore the --no-index option when constructing the SearchScope object.

With --no-index in effect (and not suppressed), the index URLs are dropped and logged as "Ignoring indexes: %s" with each URL redacted; the remaining index_urls and options.find_links are combined into a SearchScope via SearchScope.create, and the collector is built from that scope and the session.

find_links: a property returning search_scope.find_links.

fetch_page(location): Fetch an HTML page containing package links, by calling _get_html_page(location, session=self.session).
collect_links(project_name): Find all available links for the given project name.

:return: All the Link objects (unfiltered), as a CollectedLinks object.

The index URLs are expanded to per-project locations with search_scope.get_index_urls_locations(project_name) and split with group_locations, as are the find_links entries (with expand_dir=True). File locations become Link objects, the raw find_links values become Links marked as coming from "-f", and the candidate page URLs (index locations first, created with cache_link_parsing=False, then find_links page locations) are filtered with session.is_secure_origin() and deduplicated with _remove_duplicate_links. The chosen locations are logged as "{} location(s) to search for versions of {}:" followed by one "* {}" line each, and the method returns CollectedLinks(files=file_links, find_links=find_link_links, project_urls=url_locations).
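A usage-level sketch wiring the pieces above together; LinkCollector.create and PipSession are pip internals, so a plain requests session and an explicit URL list stand in for them, and the pages are fetched and parsed immediately, whereas the original merely returns the page URLs for the caller to fetch:

import requests


def collect_links(project_name: str, index_urls, find_links, session):
    """Gather candidate page URLs and archive links for one project."""
    # PEP 503-style project pages: <index>/<project-name>/
    project_pages = [url.rstrip("/") + "/" + project_name + "/" for url in index_urls]
    file_links, extra_pages = group_locations(find_links, expand_dir=True)  # sketched earlier
    page_urls = list(dict.fromkeys(project_pages + extra_pages))            # dedupe, keep order

    print("%d location(s) to search for versions of %s:" % (len(page_urls), project_name))
    for url in page_urls:
        print("* %s" % url)

    collected = {"files": file_links, "find_links": list(find_links), "links": []}
    for url in page_urls:
        response = get_html_page(url, session)                              # sketched earlier
        if response is None:
            continue
        encoding = get_encoding_from_headers(response.headers)
        collected["links"].extend(parse_links(response.content, encoding, url))
    return collected


# Example:
# with requests.Session() as session:
#     result = collect_links("pip", ["https://pypi.org/simple"], [], session)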

The dump closes with the module's constant and name tables, which repeat the imported module names listed above (cgi, functools, itertools, logging, mimetypes, os, re, collections, the pip._vendor and pip._internal modules, optparse, typing, xml.etree.ElementTree) together with the internal names of the compiled functions and classes.
