The text and images in this article are sourced from the internet and are for learning and exchange only; they have no commercial use. Copyright belongs to the original author. If there is any problem, please contact us promptly so we can deal with it.
Author: 人走茶凉
1. Fields to scrape: there aren't many; we only need three, and the "content" field has to be scraped from each answer's detail page.
2. Page analysis
The Zhihu Explore section is a typical Ajax-loaded page. Open the page, right-click and choose Inspect, switch to the Network tab, and click XHR; in this state, every entry that appears on refresh is loaded via Ajax.
Keep scrolling down the page and you can see new Ajax-loaded entries appearing continuously.
The params of the Ajax request are shown below. In my earlier Baidu image downloader crawler we worked by constructing the params ourselves; I tried the same approach here but got a 404, so this time we work from the Ajax URLs directly.
These are the URLs the Ajax requests load:
https://www.zhihu.com/node/ExploreAnswerListV2?params=%7B%22offset%22%3A10%2C%22type%22%3A%22day%22%7D
https://www.zhihu.com/node/ExploreAnswerListV2?params=%7B%22offset%22%3A15%2C%22type%22%3A%22day%22%7D
Comparing the two URLs, we can see that only one parameter changes: the offset value. So all we need to do is vary that parameter.
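The params value is just a URL-encoded JSON object. A quick check with the standard library (a sketch, nothing project-specific) confirms that offset is the only moving part:

from urllib.parse import unquote
import json

# One of the params values copied from the Ajax URL above
raw = "%7B%22offset%22%3A10%2C%22type%22%3A%22day%22%7D"
print(json.loads(unquote(raw)))  # {'offset': 10, 'type': 'day'}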
While analyzing the page we also found that the Ajax loading is capped at 40 pages; the Ajax URL of the last page is
https://www.zhihu.com/node/ExploreAnswerListV2?params=%7B%22offset%22%3A199%2C%22type%22%3A%22day%22%7D
That completes the analysis of the Ajax request URL. The fields themselves need no further analysis: they live in plain static HTML, and XPath is enough to extract them.
1. items
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class ZhihufaxianItem(scrapy.Item):
    # title
    title = scrapy.Field()
    # author
    author = scrapy.Field()
    # content (filled in on the detail page)
    content = scrapy.Field()
This declares the fields we will scrape.
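As a usage note: scrapy.Item instances behave like dicts with a fixed key set, so a typo in a field name fails loudly. A minimal illustration (hypothetical values):

item = ZhihufaxianItem()
item["title"] = "some title"  # fine: a declared field
# item["tags"] = []           # would raise KeyError: item does not support field: tags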
2. settings
# -*- coding: utf-8 -*-
# Scrapy settings for zhihufaxian project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'zhihufaxian'
SPIDER_MODULES = ['zhihufaxian.spiders']
NEWSPIDER_MODULE = 'zhihufaxian.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'zhihufaxian.middlewares.ZhihufaxianSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'zhihufaxian.middlewares.ZhihufaxianDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'zhihufaxian.pipelines.ZhihufaxianPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Enable the ITEM_PIPELINES section and set USER_AGENT; nothing else needs to change.
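The post never shows pipelines.py, so here is a minimal sketch of what ZhihufaxianPipeline might look like, assuming we just want each item written to a JSON-lines file (the filename zhihu.jl is my own choice):

import json

class ZhihufaxianPipeline:
    def open_spider(self, spider):
        # Open the output file once when the spider starts
        self.file = open("zhihu.jl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # dict(item) works because scrapy.Item is dict-like
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.file.close()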
3. spider
# -*- coding: utf-8 -*-
import scrapy
from zhihufaxian.items import ZhihufaxianItem


class ZhfxSpider(scrapy.Spider):
    name = 'zhfx'
    allowed_domains = ['zhihu.com']
    start_urls = ['http://zhihu.com/']

    # The Zhihu Explore section only Ajax-loads 40 pages
    def start_requests(self):
        base_url = "https://www.zhihu.com/node/ExploreAnswerListV2?"
        for page in range(1, 41):
            if page < 40:
                params = "params=%7B%22offset%22%3A" + str(page * 5) + "%2C%22type%22%3A%22day%22%7D"
            else:
                # The last page uses offset 199
                params = "params=%7B%22offset%22%3A" + str(199) + "%2C%22type%22%3A%22day%22%7D"
            url = base_url + params
            yield scrapy.Request(
                url=url,
                callback=self.parse
            )

    def parse(self, response):
        answers = response.xpath("//body/div")
        for li in answers:
            item = ZhihufaxianItem()
            # title
            item["title"] = "".join(li.xpath(".//h2/a/text()").getall())
            item["title"] = item["title"].replace("\n", "")
            # author
            item["author"] = "".join(li.xpath(".//div[@class='zm-item-answer-author-info']/span[1]/span[1]/a/text()").getall())
            item["author"] = item["author"].replace("\n", "")
            details_url = "".join(li.xpath(".//div[@class='zh-summary summary clearfix']/a/@href").getall())
            details_url = "https://www.zhihu.com" + details_url
            yield scrapy.Request(
                url=details_url,
                callback=self.details,
                meta={"item": item}
            )

    # Get the content field from the detail page
    def details(self, response):
        item = response.meta["item"]
        item["content"] = "".join(response.xpath("//div[@class='RichContent-inner']/span/p/text()").getall())
        # Hand the finished item to the pipeline
        yield item
First, the start_requests method constructs the complete URL for each page. The responses are handed to parse for extraction, and finally we follow into the detail page to pick up the content field.
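Inside the project directory the spider would normally be launched with scrapy crawl zhfx. Alternatively, a small runner script (a sketch; it assumes the spider file is zhihufaxian/spiders/zhfx.py) can start it from Python:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from zhihufaxian.spiders.zhfx import ZhfxSpider  # assumed module path

# Load the project's settings.py so the pipeline and USER_AGENT apply
process = CrawlerProcess(get_project_settings())
process.crawl(ZhfxSpider)
process.start()  # blocks until the crawl finishes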
Zhihu is a good site to practice on: the content its Ajax calls return is complete HTML rather than JSON, which spares us the trouble of parsing JSON. We can scrape it directly.