mirror of https://git.FreeBSD.org/ports.git synced 2025-01-13 07:34:50 +00:00

www/py-scrapy: Update to 2.5.0

Changelog:	https://docs.scrapy.org/en/latest/news.html?highlight=news#scrapy-2-5-0-2021-04-06

This also removes/disables the experimental HTTP/2 support; at the moment
there are issues between Twisted and h2 that break scrapy.

PR:		256259
Approved by:	maintainer timeout (skreuzer)
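The Twisted/h2 coupling mentioned above comes in through pip's "extras" syntax; as a minimal stdlib-only sketch (illustrative, not part of the commit), the requirement string this update drops from setup.py breaks down as:

```python
import re

# 'Twisted[http2]>=17.9.0' is the requirement removed in this update:
# a name, an optional-extras list in brackets, and a version specifier.
req = "Twisted[http2]>=17.9.0"
m = re.match(r"(?P<name>[\w.-]+)\[(?P<extras>[^\]]+)\](?P<spec>.*)$", req)
print(m.group("name"))    # Twisted
print(m.group("extras"))  # http2  (this extra is what pulls in h2)
print(m.group("spec"))    # >=17.9.0
```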
Danilo G. Baio 2021-06-26 13:43:12 -03:00
parent b8bf1cfd41
commit a9ed9dd7a3
4 changed files with 43 additions and 20 deletions

www/py-scrapy/Makefile

@@ -1,8 +1,7 @@
 # Created by: Qing Feng <qingfeng@douban.com>
 PORTNAME=	Scrapy
-DISTVERSION=	1.6.0
-PORTREVISION=	2
+DISTVERSION=	2.5.0
 CATEGORIES=	www python
 MASTER_SITES=	CHEESESHOP
 PKGNAMEPREFIX=	${PYTHON_PKGNAMEPREFIX}
@@ -13,23 +12,28 @@ COMMENT=	High level scraping and web crawling framework
 LICENSE=	BSD3CLAUSE
 LICENSE_FILE=	${WRKSRC}/LICENSE
-RUN_DEPENDS=	${PYTHON_PKGNAMEPREFIX}twisted>=13.1.0:devel/py-twisted@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}lxml>0:devel/py-lxml@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}sqlite3>0:databases/py-sqlite3@${PY_FLAVOR} \
+RUN_DEPENDS=	${PYTHON_PKGNAMEPREFIX}cryptography>=2.0:security/py-cryptography@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}cssselect>=0.9.1:www/py-cssselect@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}itemloaders>=1.0.1:devel/py-itemloaders@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}parsel>=1.5:textproc/py-parsel@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}openssl>=16.2.0:security/py-openssl@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}queuelib>=1.4.2:sysutils/py-queuelib@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}service_identity>=16.0.0:security/py-service_identity@${PY_FLAVOR} \
 		${PYTHON_PKGNAMEPREFIX}w3lib>=1.17.0:www/py-w3lib@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}cssselect>=0.9:www/py-cssselect@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}queuelib>0:sysutils/py-queuelib@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}pydispatcher>=2.0.5:devel/py-pydispatcher@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}service_identity>0:security/py-service_identity@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}six>=1.5.2:devel/py-six@${PY_FLAVOR} \
-		${PYTHON_PKGNAMEPREFIX}parsel>=1.5:textproc/py-parsel@${PY_FLAVOR}
+		${PYTHON_PKGNAMEPREFIX}zope.interface>=4.1.3:devel/py-zope.interface@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}protego>=0.1.15:www/py-protego@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}itemadapter>=0.1.0:devel/py-itemadapter@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}lxml>=3.5.0:devel/py-lxml@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pydispatcher>=2.0.5:devel/py-pydispatcher@${PY_FLAVOR}
 USES=		python:3.6+
 USE_PYTHON=	distutils concurrent autoplist
-OPTIONS_DEFINE=	SSL
-OPTIONS_DEFAULT=	SSL
 NO_ARCH=	yes
-SSL_RUN_DEPENDS=	${PYTHON_PKGNAMEPREFIX}openssl>0:security/py-openssl@${PY_FLAVOR}
+# Remove experimental HTTP/2 support, issues with Twisted and h2
+post-extract:
+	@${RM} -r ${WRKSRC}/scrapy/core/http2
+	@${RM} ${WRKSRC}/scrapy/core/downloader/handlers/http2.py
 .include <bsd.port.mk>
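The post-extract target above amounts to deleting the two HTTP/2 paths from the extracted source; a standalone sketch of the same two commands against a scratch tree (paths taken from the Makefile, with a temp directory standing in for ${WRKSRC}):

```shell
# Recreate the relevant layout in a scratch directory standing in for WRKSRC.
WRKSRC=$(mktemp -d)
mkdir -p "$WRKSRC/scrapy/core/http2" "$WRKSRC/scrapy/core/downloader/handlers"
touch "$WRKSRC/scrapy/core/downloader/handlers/http2.py"

# The two deletions performed by post-extract:
rm -r "$WRKSRC/scrapy/core/http2"
rm "$WRKSRC/scrapy/core/downloader/handlers/http2.py"
```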

www/py-scrapy/distinfo

@@ -1,3 +1,3 @@
-TIMESTAMP = 1549229032
-SHA256 (Scrapy-1.6.0.tar.gz) = 558dfd10ac53cb324ecd7eefd3eac412161c7507c082b01b0bcd2c6e2e9f0766
-SIZE (Scrapy-1.6.0.tar.gz) = 926576
+TIMESTAMP = 1622321274
+SHA256 (Scrapy-2.5.0.tar.gz) = 0a68ed41f7173679f160c4cef2db05288548c21e7164170552adae8b13cefaab
+SIZE (Scrapy-2.5.0.tar.gz) = 1071824
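The SHA256 line in distinfo is a plain digest of the distfile; a sketch of that check using stdlib hashlib (the helper name is mine, not the framework's):

```python
import hashlib

def sha256_of(path):
    """Hex SHA256 of a file, read in chunks like a fetch-time checksum pass."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

A distfile whose digest differs from the SHA256 line in distinfo fails the checksum stage.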

www/py-scrapy/files/patch-setup.py

@@ -0,0 +1,19 @@
+# Remove experimental HTTP/2 support, issues with Twisted and h2
+--- setup.py.orig	2021-04-06 14:48:02 UTC
++++ setup.py
+@@ -19,7 +19,6 @@ def has_environment_marker_platform_impl_support():
+ install_requires = [
+-    'Twisted[http2]>=17.9.0',
+     'cryptography>=2.0',
+     'cssselect>=0.9.1',
+     'itemloaders>=1.0.1',
+@@ -31,7 +30,6 @@ install_requires = [
+     'zope.interface>=4.1.3',
+     'protego>=0.1.15',
+     'itemadapter>=0.1.0',
+-    'h2>=3.0,<4.0',
+ ]
+ extras_require = {}
+ cpython_dependencies = [
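The remaining install_requires entries (and the RUN_DEPENDS lines in the Makefile) all use ">=" floors; a toy comparison (illustrative only, not how pip or the ports framework actually parse versions) shows how such a floor behaves:

```python
def parse(v):
    # Split a dotted version into an integer tuple: "2.5.0" -> (2, 5, 0).
    return tuple(int(part) for part in v.split("."))

def satisfies_floor(version, minimum):
    # A ">=" floor holds when the version tuple compares greater-or-equal.
    return parse(version) >= parse(minimum)

print(satisfies_floor("2.5.0", "2.0"))  # True
print(satisfies_floor("0.9", "0.9.1"))  # False: 0.9 predates the 0.9.1 floor
```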

www/py-scrapy/pkg-descr

@@ -1,5 +1,5 @@
-Scrapy is a high level scraping and web crawling framework for writing
-spiders to crawl and parse web pages for all kinds of purposes, from
-information retrieval to monitoring or testing web sites.
+Scrapy is a fast high-level web crawling and web scraping framework, used to
+crawl websites and extract structured data from their pages. It can be used for
+a wide range of purposes, from data mining to monitoring and automated testing.
 
 WWW: https://scrapy.org/