Scrapy Learning (Part 2): Getting Started
With the foundation laid in the previous two posts, we can start crawling the information we're interested in from the web. Since I haven't yet learned how to simulate a login, I'll start with sites like Douban that don't require logging in.
My development environment is Win7 + PyCharm + Python 3.5 + MongoDB.
The crawler's goal is the basic information of every book under Douban's Japanese Literature tag.
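If Scrapy isn't installed yet, it can be pulled in from PyPI first (pymongo, used later for MongoDB, gets installed in a later step):
pip install scrapy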
scrapy startproject douban
Then move into the douban directory:
scrapy genspider book book.douban.com
This generates a BookSpider template in the spiders directory.
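For reference, the generated template looks roughly like this (a sketch; the exact boilerplate differs slightly between Scrapy versions):

```python
# spiders/book.py: roughly what `scrapy genspider book book.douban.com` produces
import scrapy


class BookSpider(scrapy.Spider):
    name = 'book'
    allowed_domains = ['book.douban.com']
    start_urls = ['http://book.douban.com/']

    def parse(self, response):
        pass
```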
In items.py, define the data model we need:
```python
class BookItem(scrapy.Item):
    book_name = scrapy.Field()
    book_star = scrapy.Field()
    book_pl = scrapy.Field()
    book_author = scrapy.Field()
    book_publish = scrapy.Field()
    book_date = scrapy.Field()
    book_price = scrapy.Field()
```
Visit Douban's Japanese Literature tag page and put its URL into start_urls. Then, with the help of Chrome's developer tools, you can see that each book sits in a ul#subject-list > li.subject-item DOM node.
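Before writing parse, the top of the spider might look roughly like this (a sketch; the exact tag URL is my assumption based on Douban's /tag/&lt;tag-name&gt; URL pattern, and the import of BookItem assumes the default "douban" project layout):

```python
import scrapy
from scrapy.selector import Selector

from douban.items import BookItem  # assumes the default "douban" project layout


class BookSpider(scrapy.Spider):
    name = 'book'
    allowed_domains = ['book.douban.com']
    # assumed tag-page URL, following Douban's /tag/<tag-name> pattern
    start_urls = ['https://book.douban.com/tag/日本文学']
```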
```python
class BookSpider(scrapy.Spider):
    ...

    def parse(self, response):
        sel = Selector(response)
        book_list = sel.css('#subject_list > ul > li')
        for book in book_list:
            item = BookItem()
            item['book_name'] = book.xpath('div[@class="info"]/h2/a/text()').extract()[0].strip()
            item['book_star'] = book.xpath("div[@class='info']/div[2]/span[@class='rating_nums']/text()").extract()[0].strip()
            item['book_pl'] = book.xpath("div[@class='info']/div[2]/span[@class='pl']/text()").extract()[0].strip()
            pub = book.xpath('div[@class="info"]/div[@class="pub"]/text()').extract()[0].strip().split('/')
            item['book_price'] = pub.pop()
            item['book_date'] = pub.pop()
            item['book_publish'] = pub.pop()
            item['book_author'] = '/'.join(pub)
            yield item
```
Test whether the code works:
scrapy crawl book -o items.json
Strangely, items.json turns out to be empty. Looking back at the DEBUG output in the console:
2017-02-04 16:15:38 [scrapy.core.engine] INFO: Spider opened
2017-02-04 16:15:38 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-02-04 16:15:38 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-02-04 16:15:39 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://book.douban.com/robot... (referer: None)
2017-02-04 16:15:39 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://book.douban.com/tag/%... (referer: None)
The pages come back with status code 403. That's because the server detected a crawler and refused our requests.
We can set USER_AGENT in settings to disguise the crawler as a browser:
USER_AGENT = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"
Try again, and items.json now has data. A closer look, though, shows only the first page's results. If we want to crawl everything, we need to grab the next page's URL automatically after finishing the current page, and keep going until all the data has been fetched.
So we rework the spider.
```python
    ...
    def parse(self, response):
        sel = Selector(response)
        book_list = sel.css('#subject_list > ul > li')
        for book in book_list:
            item = BookItem()
            try:
                item['book_name'] = book.xpath('div[@class="info"]/h2/a/text()').extract()[0].strip()
                item['book_star'] = book.xpath("div[@class='info']/div[2]/span[@class='rating_nums']/text()").extract()[0].strip()
                item['book_pl'] = book.xpath("div[@class='info']/div[2]/span[@class='pl']/text()").extract()[0].strip()
                pub = book.xpath('div[@class="info"]/div[@class="pub"]/text()').extract()[0].strip().split('/')
                item['book_price'] = pub.pop()
                item['book_date'] = pub.pop()
                item['book_publish'] = pub.pop()
                item['book_author'] = '/'.join(pub)
                yield item
            except:
                pass
        nextPage = sel.xpath('//div[@id="subject_list"]/div[@class="paginator"]/span[@class="next"]/a/@href').extract()[0].strip()
        if nextPage:
            next_url = 'https://book.douban.com' + nextPage
            yield scrapy.http.Request(next_url, callback=self.parse)
```
Here scrapy.http.Request calls back into parse. The try...except is there because Douban's book entries are not formatted consistently; when a record is malformed, we simply discard it.
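As a side note, a slightly more defensive version of the pagination step could use extract_first() and response.urljoin(), which avoids an IndexError on the last page. This is just a sketch of the same logic:

```python
# Sketch: same pagination logic without manual string concatenation
next_page = sel.xpath('//span[@class="next"]/a/@href').extract_first()
if next_page:
    yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
```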
Generally speaking, if the crawler runs too fast, the site will start refusing our requests, so we set a download delay in settings and turn off cookies:
DOWNLOAD_DELAY = 2
COOKIES_ENABLED = False
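Optionally, Scrapy's built-in AutoThrottle extension can adjust the delay dynamically instead of using a fixed value; a minimal sketch of the relevant settings:

```python
# Optional: let Scrapy adapt the download delay to the server's responsiveness
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 2
```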
Alternatively, we can rotate between different browser UAs or IP addresses to get around the site's blocking.
Below, switching the UA is used as the example.
In middlewares.py, write a middleware that randomly swaps the UA; every request passes through it. When process_request returns None, Scrapy carries on to the remaining middlewares.
```python
import random


class RandomUserAgent(object):
    """Downloader middleware that picks a random User-Agent for every request."""

    def __init__(self, agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        # Read the USER_AGENTS list from settings.py
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', random.choice(self.agents))
```
Then enable it in settings:
```python
DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.RandomUserAgent': 1,
}
...
USER_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    ...
]
```
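If the random UA ever seems to be overridden, Scrapy's built-in UserAgentMiddleware can also be disabled explicitly (an optional tweak, not strictly required for the setup above):

```python
DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.RandomUserAgent': 1,
    # optional: disable the default User-Agent middleware
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
```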
Run the program again, and it clearly goes a lot faster.
Next, we save the data to a database for persistence (MongoDB is used here as the example; saving to another database works the same way).
This part of the processing is written in pipelines. Before that, we first need to install the driver for connecting to the database:
pip install pymongo
Then add the configuration to settings:
```python
# MongoDB configuration
MONGODB_SERVER = 'localhost'
MONGODB_PORT = 27017
MONGODB_DB = 'douban'
MONGODB_COLLECTION = "book"
```
```python
from pymongo import MongoClient
from scrapy.conf import settings  # legacy settings access used by older Scrapy tutorials


class MongoDBPipeline(object):
    def __init__(self):
        # Connect to MongoDB using the values defined in settings.py
        connection = MongoClient(
            host=settings['MONGODB_SERVER'],
            port=settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        # insert_one replaces the deprecated pymongo insert()
        self.collection.insert_one(dict(item))
        spider.logger.debug("Book added to MongoDB database!")
        return item
```
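For the pipeline to run, it also has to be enabled in settings; assuming the default project layout (douban/pipelines.py), that would look like this:

```python
ITEM_PIPELINES = {
    'douban.pipelines.MongoDBPipeline': 300,
}
```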
To save the DEBUG output that the console prints while the project runs into a log file, just set this in settings:
LOG_FILE = "logs/book.log"
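If the file gets too noisy, the log level can be raised as well (LOG_LEVEL is a standard Scrapy setting):

```python
LOG_LEVEL = 'INFO'  # keep only INFO and above in the log file
```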
Project source code: Douban book crawler