Scrapy-Splash
Splash is a JavaScript rendering service: a lightweight web browser with an HTTP API, implemented in Python 3 using Twisted and Qt5. The Qt reactor makes the service fully asynchronous, allowing WebKit concurrency through the Qt main loop.
Some of Splash's features:
- Process multiple web pages in parallel
- Get the HTML source or take screenshots
- Turn off images or apply Adblock Plus rules to speed up rendering
- Execute custom JavaScript in the page context
- Control the page rendering process with Lua scripts
- Develop Splash Lua scripts in Splash-Jupyter notebooks
- Get detailed rendering information in HAR format
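Because Splash exposes everything over a plain HTTP API, a page can be rendered without Scrapy at all. A minimal sketch of calling the `render.html` endpoint with only the standard library; the host and port (`localhost:8050`) are assumptions matching the default Docker setup below, not something fixed by Splash itself:

```python
from urllib.parse import urlencode

# Assumed local Splash instance; adjust host/port to your deployment.
SPLASH = "http://localhost:8050"

def render_url(page_url, wait=2.0):
    """Build a GET URL for Splash's render.html endpoint."""
    params = urlencode({"url": page_url, "wait": wait})
    return f"{SPLASH}/render.html?{params}"

api_url = render_url("https://example.com", wait=1.5)
print(api_url)
# Fetching this URL (e.g. with urllib.request.urlopen) returns the
# JavaScript-rendered HTML instead of the raw source.
```

Scrapy-Splash is essentially a convenience layer over this API: it builds such requests for you and wires the responses back into Scrapy.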
1. Installing Splash
Installing Scrapy-Splash has two parts: the Splash service itself, which we install and run through Docker (the running service renders JavaScript pages through its HTTP interface), and the scrapy-splash Python library, which lets Scrapy use the Splash service. Below we install everything in three steps:
1. Install Docker
(Docker installation is skipped here; see the official Docker documentation.)
2. Install the Splash service
```
docker pull scrapinghub/splash
docker run -d -p 8050:8050 scrapinghub/splash
```
3. Install the scrapy-splash Python package
```
pip3 install scrapy-splash
```
2. Using Scrapy-Splash
1. Add the configuration to settings.py
```python
# Enable the Splash spider middleware
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
# Enable the Splash downloader middlewares
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
# Splash server address
SPLASH_URL = "http://192.168.31.111:8050/"
# Splash-aware deduplication filter
DUPEFILTER_CLASS = "scrapy_splash.SplashAwareDupeFilter"
# Enable the HTTP cache
HTTPCACHE_ENABLED = True
# Cache expiration time (0 = never expire)
HTTPCACHE_EXPIRATION_SECS = 0
# Cache storage directory
HTTPCACHE_DIR = 'httpcache'
# HTTP status codes to exclude from the cache
HTTPCACHE_IGNORE_HTTP_CODES = []
# Finally, a Splash-aware cache storage backend
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
```
2. spider.py
```python
import scrapy
from scrapy_splash import SplashRequest


class JdSpiderSpider(scrapy.Spider):
    name = 'jd_spider'
    allowed_domains = ['jd.com']
    start_urls = ['https://www.baidu.com']

    def start_requests(self):
        # Lua script executed by Splash's 'run' endpoint:
        # disable private mode, set a desktop user agent,
        # load the page, wait for JS, and return the rendered HTML.
        splash_args = {"lua_source": """
            --splash.response_body_enabled = true
            splash.private_mode_enabled = false
            splash:set_user_agent("Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36")
            assert(splash:go("https://item.jd.com/5089239.html"))
            splash:wait(3)
            return {html = splash:html()}
        """}
        # yield SplashRequest("https://item.jd.com/5089239.html", endpoint='run',
        #                     args=splash_args, callback=self.onSave)
        yield SplashRequest("https://item.jd.com/35674728065.html", endpoint='run',
                            args=splash_args, callback=self.onSave)

    def onSave(self, response):
        value = response.xpath('//span[@class="p-price"]//text()').extract()
        print(value)

    def parse(self, response):
        pass
```

The signature of SplashRequest:

```python
def SplashRequest(url=None, callback=None, method='GET', endpoint='render.html',
                  args=None, splash_url=None, slot_policy=SlotPolicy.PER_DOMAIN,
                  splash_headers=None, dont_process_response=False,
                  dont_send_headers=False, magic_response=True,
                  session_id='default', http_status_from_error_code=True,
                  cache_args=None, meta=None, **kwargs):
```

- url: same as url in scrapy.Request, i.e. the URL of the page to crawl
- headers: same as headers in scrapy.Request
- cookies: same as cookies in scrapy.Request
- args: parameters passed to Splash, such as wait (wait time), timeout, images (whether to load images: 0 disables, 1 allows), proxy, e.g. `args={'wait': 5, 'lua_source': source, 'proxy': 'http://proxy_ip:proxy_port'}`
- endpoint: the Splash service endpoint; defaults to 'render.html', the JS page-rendering service
- splash_url: the Splash server address; defaults to None, meaning the `SPLASH_URL = 'http://localhost:8050'` setting in settings.py is used
- method: the request method

The signature of SplashFormRequest:

```python
def SplashFormRequest(url=None, callback=None, method=None,
                      formdata=None, body=None, **kwargs):
```

- body: the request body
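Under the hood, the 'run' endpoint used by the spider above receives a JSON POST body containing `lua_source` plus any extra args. A sketch of building such a payload with only the standard library; the Lua script and the example URL are illustrative, and the field names follow the Splash HTTP API:

```python
import json

# Illustrative Lua script for the /run endpoint: /run wraps the script
# in main() and exposes request parameters through the 'args' table.
lua_source = """
assert(splash:go(args.url))
splash:wait(args.wait)
return {html = splash:html()}
"""

# JSON body that would be POSTed to http://<splash-host>:8050/run
payload = json.dumps({
    "lua_source": lua_source,
    "url": "https://example.com",
    "wait": 1.0,
})
print(payload)
```

SplashRequest builds an equivalent payload from its `args` dict, which is why `lua_source` appears as just another key in `splash_args` above.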