I had previously scraped data with Selenium and Requests, but it felt slow, so after going through the Scrapy tutorial I decided to give this framework a try.
1. Objective: scrape second-hand housing listings for Chengdu from Lianjia, mainly the community name, the surroundings, the floor, the price and so on, and write this information into MySQL.
2. Environment: Scrapy 1.5.1 + Python 3.6
3. Create the project: run the following command in the project path: scrapy startproject LianJiaScrapy
4. Project layout (run.py was added by hand; it starts the Scrapy project from inside Eclipse, which makes debugging easier):
The files are:
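This is the standard layout produced by scrapy startproject, plus run.py; the exact position of run.py (next to scrapy.cfg here) is an assumption:

LianJiaScrapy/
├── scrapy.cfg              # deploy configuration
├── run.py                  # added by hand, starts the spider from the IDE (step 11)
└── LianJiaScrapy/
    ├── __init__.py
    ├── items.py            # item definitions (step 6)
    ├── middlewares.py
    ├── pipelines.py        # MySQL pipeline (step 8)
    ├── settings.py         # project settings (step 9)
    └── spiders/
        ├── __init__.py
        └── lianjia_spider.py   # the spider created in step 5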
5. Create the main spider file: open cmd in the project's top-level directory and run: scrapy genspider lianjia_spider cd.lianjia.com. A new lianjia_spider.py now appears under the spiders directory.
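For reference, the skeleton genspider generates looks roughly like this (the class name and placeholder start_urls come from the default template and get replaced in the following steps):

# -*- coding: utf-8 -*-
import scrapy


class LianjiaSpiderSpider(scrapy.Spider):
    name = 'lianjia_spider'
    allowed_domains = ['cd.lianjia.com']
    start_urls = ['http://cd.lianjia.com/']

    def parse(self, response):
        pass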
6. items.py:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
from scrapy import Field, Item
class ScrapylianjiaItem(Item):
    '''
    houseName: name of the community / development
    description: description of the house
    floor: floor information
    positionIcon: district the house belongs to
    followInfo: follow count and posting time of the listing
    subway: whether it is close to a subway station
    taxfree: whether the house is tax-exempt
    haskey: whether the house can be viewed at any time
    totalPrice: total price
    unitPrice: unit price
    '''
    houseName = Field()
    description = Field()
    floor = Field()
    positionIcon = Field()
    followInfo = Field()
    subway = Field()
    taxfree = Field()
    haskey = Field()
    totalPrice = Field()
    unitPrice = Field()
7. The spider file lianjia_spider.py:
# -*- coding: utf-8 -*-
'''
Created on 2018-08-23
@author: zww
'''
import scrapy
import random
import time

from LianJiaScrapy.items import ScrapylianjiaItem


class LianJiaSpider(scrapy.Spider):
    name = "Lianjia"
    start_urls = [
        "https://cd.lianjia.com/ershoufang/pg1/",
    ]

    def parse(self, response):
        # Base URL used to assemble the next page to crawl
        init_url = 'https://cd.lianjia.com/ershoufang/pg'
        # The listings live under //li[@class="clear LOGCLICKDATA"], 30 per page
        sels = response.xpath('//li[@class="clear LOGCLICKDATA"]')
        # Grab all 30 values of each field in one go
        houseName_list = sels.xpath(
            '//div[@class="houseInfo"]/a/text()').extract()
        description_list = sels.xpath(
            '//div[@class="houseInfo"]/text()').extract()
        floor_list = sels.xpath(
            '//div[@class="positionInfo"]/text()').extract()
        positionIcon_list = sels.xpath(
            '//div[@class="positionInfo"]/a/text()').extract()
        followInfo_list = sels.xpath(
            '//div[@class="followInfo"]/text()').extract()
        subway_list = sels.xpath('//span[@class="subway"]/text()').extract()
        taxfree_list = sels.xpath('//span[@class="taxfree"]/text()').extract()
        haskey_list = sels.xpath('//span[@class="haskey"]/text()').extract()
        totalPrice_list = sels.xpath(
            '//div[@class="totalPrice"]/span/text()').extract()
        unitPrice_list = sels.xpath(
            '//div[@class="unitPrice"]/span/text()').extract()
        # Map the scraped data onto the fields defined in items.py
        i = 0
        for sel in sels:
            item = ScrapylianjiaItem()
            item['houseName'] = houseName_list[i].strip()
            item['description'] = description_list[i].strip()
            item['floor'] = floor_list[i].strip()
            item['positionIcon'] = positionIcon_list[i].strip()
            item['followInfo'] = followInfo_list[i].strip()
            item['subway'] = subway_list[i].strip()
            item['taxfree'] = taxfree_list[i].strip()
            item['haskey'] = haskey_list[i].strip()
            item['totalPrice'] = totalPrice_list[i].strip()
            item['unitPrice'] = unitPrice_list[i].strip()
            i += 1
            yield item
        # Get the current page number; the attribute comes back like {"totalPage":100,"curPage":98}
        has_next_page = sels.xpath(
            '//div[@class="page-box fr"]/div[1]/@page-data').extract()[0]
        # The value is a str; convert it to a dict and read the curPage field
        to_dict = eval(has_next_page)
        current_page = to_dict['curPage']
        # Lianjia only exposes 100 pages of results, so stop once page 100 has been crawled
        if current_page != 100:
            next_page = current_page + 1
            url = ''.join([init_url, str(next_page), '/'])
            print('starting to crawl url:', url)
            # Sleep for a random interval to avoid getting the IP banned
            time.sleep(round(random.uniform(1, 2), 2))
            yield scrapy.Request(url, callback=self.parse)
        else:
            print('scrapy done!')
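One caveat about the code above: the ten extract() calls use page-wide XPaths and are stitched together by list index, which misaligns the rows as soon as one listing lacks a field (for example a house without a subway tag). A more defensive variant, shown here only as a sketch and not as the original code, queries each field relative to the individual li node and falls back to an empty string:

    def parse(self, response):
        for sel in response.xpath('//li[@class="clear LOGCLICKDATA"]'):
            item = ScrapylianjiaItem()
            # Relative XPaths ('.//') keep each field tied to its own listing;
            # extract_first() returns None instead of shifting indexes when a node is missing.
            item['houseName'] = (sel.xpath('.//div[@class="houseInfo"]/a/text()').extract_first() or '').strip()
            item['description'] = (sel.xpath('.//div[@class="houseInfo"]/text()').extract_first() or '').strip()
            item['subway'] = (sel.xpath('.//span[@class="subway"]/text()').extract_first() or '').strip()
            # ...the remaining fields follow the same pattern
            yield item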
8. The data-handling file pipelines.py:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql
from scrapy.utils.project import get_project_settings


class LianjiascrapyPipeline(object):

    InsertSql = '''insert into scrapy_lianjia
        (houseName,description,floor,followInfo,haskey,
        positionIcon,subway,taxfree,totalPrice,unitPrice)
        values('{houseName}','{description}','{floor}','{followInfo}',
        '{haskey}','{positionIcon}','{subway}','{taxfree}','{totalPrice}','{unitPrice}')'''

    def __init__(self):
        self.settings = get_project_settings()
        # Connect to the database using the settings defined in step 9
        self.connect = pymysql.connect(
            host=self.settings.get('MYSQL_HOST'),
            port=self.settings.get('MYSQL_PORT'),
            db=self.settings.get('MYSQL_DBNAME'),
            user=self.settings.get('MYSQL_USER'),
            passwd=self.settings.get('MYSQL_PASSWD'),
            charset='utf8',
            use_unicode=True)
        # All inserts go through this cursor
        self.cursor = self.connect.cursor()

    def process_item(self, item, spider):
        sqltext = self.InsertSql.format(
            houseName=item['houseName'], description=item['description'],
            floor=item['floor'], followInfo=item['followInfo'],
            haskey=item['haskey'], positionIcon=item['positionIcon'],
            subway=item['subway'], taxfree=item['taxfree'],
            totalPrice=item['totalPrice'], unitPrice=item['unitPrice'])
        try:
            self.cursor.execute(sqltext)
            self.connect.commit()
        except Exception as e:
            print('Insert failed:', e)
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.connect.close()
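A note on the pipeline above: building the INSERT with str.format breaks on values that contain single quotes and is open to SQL injection. A hedged alternative, meant as a drop-in replacement for InsertSql and process_item in the same class rather than the original code, lets pymysql bind the parameters:

    InsertSql = '''insert into scrapy_lianjia
        (houseName, description, floor, followInfo, haskey,
         positionIcon, subway, taxfree, totalPrice, unitPrice)
        values (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'''

    def process_item(self, item, spider):
        values = (item['houseName'], item['description'], item['floor'],
                  item['followInfo'], item['haskey'], item['positionIcon'],
                  item['subway'], item['taxfree'], item['totalPrice'],
                  item['unitPrice'])
        try:
            # pymysql escapes every value before substituting it into the query
            self.cursor.execute(self.InsertSql, values)
            self.connect.commit()
        except Exception as e:
            self.connect.rollback()
            print('Insert failed:', e)
        return item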
9. To use the pipeline, it has to be registered in settings.py:
ITEM_PIPELINES = {
'LianJiaScrapy.pipelines.LianjiascrapyPipeline': 300,
}
# MySQL connection settings:
MYSQL_HOST = 'localhost'
MYSQL_DBNAME = 'test_scrapy'
MYSQL_USER = 'your database account'
MYSQL_PASSWD = 'your database password'
MYSQL_PORT = 3306
# Default request headers for the crawler
DEFAULT_REQUEST_HEADERS = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'zh-CN,zh;q=0.9',
'Cache-Control': 'max-age=0',
'Connection': 'keep-alive',
'Cookie': 'your cookie here',
'Host': 'cd.lianjia.com',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'}
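As an aside, the time.sleep call in the spider can be replaced by Scrapy's own request throttling. A minimal sketch of the relevant settings (the values are just examples, not what the original project used):

ROBOTSTXT_OBEY = False              # skip robots.txt filtering for this crawl
DOWNLOAD_DELAY = 1.5                # base delay between requests, in seconds
RANDOMIZE_DOWNLOAD_DELAY = True     # vary the delay between 0.5x and 1.5x DOWNLOAD_DELAY
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # keep requests to lianjia.com strictly sequential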
10. Create the table in the MySQL database test_scrapy:
CREATE TABLE `scrapy_lianjia` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`houseName` varchar(255) DEFAULT NULL COMMENT 'community name',
`description` varchar(255) DEFAULT NULL COMMENT 'description of the house',
`floor` varchar(255) DEFAULT NULL COMMENT 'floor information',
`followInfo` varchar(255) DEFAULT NULL COMMENT 'follow count and posting time of the listing',
`haskey` varchar(255) DEFAULT NULL COMMENT 'viewing requirements',
`positionIcon` varchar(255) DEFAULT NULL COMMENT 'district the house belongs to',
`subway` varchar(255) DEFAULT NULL COMMENT 'whether it is close to a subway station',
`taxfree` varchar(255) DEFAULT NULL COMMENT 'tax status',
`totalPrice` varchar(11) DEFAULT NULL COMMENT 'total price',
`unitPrice` varchar(255) DEFAULT NULL COMMENT 'unit price',
PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=3001 DEFAULT CHARSET=utf8;
11. Run the spider:
You can run it straight from cmd with: scrapy crawl Lianjia
Since I needed to debug while writing the script, I added run.py, which lets the project be run directly or under the debugger.
My run.py file:
# -*- coding: utf-8 -*-
'''
Created on 2018-08-23
@author: zww
'''
from scrapy import cmdline

name = 'Lianjia'
cmd = 'scrapy crawl {0}'.format(name)
# Either of the two calls below works; Python 2.7 and 3.6 seem to behave
# slightly differently here, and on 2.7 the second form needs the arguments
# split on spaces.
cmdline.execute(cmd.split())
# cmdline.execute(['scrapy', 'crawl', name])
12. The crawl in progress:
13. The results: