Scrapy Crawler Framework in Practice (1): Crawling Stock Data

Introduction

Goal: fetch the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges.
Output: save the results to a file.
Approach: the Scrapy crawler framework
Language: Python 3.5
The principle behind crawling stock information was already covered in the previous blog post, so it is not repeated here; refer to that post if needed. This article focuses on how to implement the project with the Scrapy framework.

How It Works

The Scrapy framework is shown in the figure below:

[Figure: Scrapy framework architecture]

We mainly need to do two things:
(1) write a spider in the framework that follows links and parses pages;
(2) write a pipeline that processes the parsed stock data and stores it in a file.

Writing the Code

Steps:
(1) Create a project and generate the spider template
Open a cmd window, change to the directory where you want the project to live, and run scrapy startproject BaiduStocks; this creates a new project named BaiduStocks in that directory. Then run cd BaiduStocks to enter the project, and scrapy genspider stocks baidu.com to generate a spider. You will now find a stocks.py file under the spiders/ directory, as shown below:

[Figure: the generated stocks.py under the spiders/ directory]
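
For reference, the three commands from this step, exactly as typed at the prompt:

scrapy startproject BaiduStocks
cd BaiduStocks
scrapy genspider stocks baidu.com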

(2) Write the spider: configure stocks.py, changing how returned pages are parsed and how crawl requests for newly found URLs are generated
Open stocks.py; the generated template looks like this:

# -*- coding: utf-8 -*-
import scrapy


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass

Modify the code above as follows:

# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = "stocks"
    # Entry page: the Eastmoney stock list, which links to every stock's quote page
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        # Walk every <a href="..."> on the list page and keep only the links
        # whose URL contains a stock code such as sh600000 or sz000001
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r"[s][hz]\d{6}", href)[0]
                # Build the Baidu Gupiao quote page URL for this stock and request it
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except:
                # No stock code in this link; skip it
                continue

    def parse_stock(self, response):
        # Parse one stock's quote page into a dict of field name -> value
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            # The field name sits between '>' and '</dt>'
            key = re.findall(r'>.*</dt>', keyList[i])[0][1:-5]
            try:
                # The field value sits before '</dd>'; fall back to '--' when it is missing
                val = re.findall(r'\d+\.?.*</dd>', valueList[i])[0][0:-5]
            except:
                val = '--'
            infoDict[key] = val

        # Combine the stock code and the displayed name into a single field
        infoDict.update(
            {'股票名称': re.findall(r'\s.*\(', name)[0].split()[0] + \
             re.findall(r'\>.*\<', name)[0][1:-1]})
        yield infoDict
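
To see what the regular expression in parse() does, here is a minimal standalone sketch. The href value is a made-up example of the sh/sz-prefixed links found on the Eastmoney list page; only the regex and the URL construction are taken from the spider above.

import re

# Hypothetical href used only for illustration; real values come from the list page's <a> tags
href = 'http://quote.eastmoney.com/sh600519.html'

# Same pattern as in parse(): the letter 's', then 'h' or 'z', then six digits
stock = re.findall(r"[s][hz]\d{6}", href)[0]
print(stock)   # sh600519

# Same URL construction as in parse()
url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
print(url)     # https://gupiao.baidu.com/stock/sh600519.html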

(3) Configure pipelines.py and define the class that handles scraped items
Open pipelines.py; the generated template looks like this:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item

Modify the code above as follows:

# -*- coding: utf-8 -*-
 
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
 
 
class BaidustocksPipeline(object):
    def process_item(self, item, spider):
        return item

# Each pipeline class can define the three methods below
class BaidustocksInfoPipeline(object):
    # Called when the spider is opened; open the output file here
    def open_spider(self, spider):
        self.f = open('BaiduStockInfo.txt', 'w')
    # Called when the spider is closed or finishes; close the output file here
    def close_spider(self, spider):
        self.f.close()
    # Called once for every scraped item; this is the core method of a pipeline
    def process_item(self, item, spider):
        try:
            # Write each item as one line of text
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except:
            pass
        return item
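
The pipeline above writes each item as a Python dict literal into a plain text file. If machine-readable output is preferred, a small variant can write one JSON object per line instead. This is only a sketch, not part of the original project, and the class name BaidustocksJsonPipeline is made up here; if you use it, register it in ITEM_PIPELINES just like the text pipeline.

# -*- coding: utf-8 -*-
import json


class BaidustocksJsonPipeline(object):
    def open_spider(self, spider):
        # Explicit UTF-8 encoding so the Chinese field names are stored correctly
        self.f = open('BaiduStockInfo.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        # One JSON object per line; ensure_ascii=False keeps the Chinese text readable
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item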

(4) Modify settings.py so the framework can find the class we wrote in pipelines.py
Add the following to settings.py:

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'BaiduStocks.pipelines.BaidustocksInfoPipeline': 300,
}
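
The value 300 is the pipeline's order: when several pipelines are registered, Scrapy runs them from the lowest number to the highest (the conventional range is 0 to 1000). With a single pipeline the exact number does not matter. If you also enabled the JSON variant sketched earlier, it would need its own entry in this dict with its own order value.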

At this point the program is complete.

(5) Run the program
At the command line, run: scrapy crawl stocks
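
Run the command from the project root, the directory that contains scrapy.cfg; when the crawl finishes, the results are in BaiduStockInfo.txt in that same directory. As an aside, Scrapy's built-in feed export can also dump the yielded items directly, without any custom pipeline, if that is all you need:

scrapy crawl stocks -o BaiduStockInfo.json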