Compared with BOSS直聘, the data on 51JOB is quite a bit easier to work with. As before, start by defining the item in items.py:
import scrapy


class PositionViewItem(scrapy.Item):
    # define the fields for your item here like:
    name: scrapy.Field = scrapy.Field()             # position name
    salary: scrapy.Field = scrapy.Field()           # salary
    education: scrapy.Field = scrapy.Field()        # education requirement
    experience: scrapy.Field = scrapy.Field()       # experience requirement
    jobjd: scrapy.Field = scrapy.Field()            # job ID
    district: scrapy.Field = scrapy.Field()         # district
    category: scrapy.Field = scrapy.Field()         # industry category
    scale: scrapy.Field = scrapy.Field()            # company size
    corporation: scrapy.Field = scrapy.Field()      # company name
    url: scrapy.Field = scrapy.Field()              # position URL
    createtime: scrapy.Field = scrapy.Field()       # publish time
    posistiondemand: scrapy.Field = scrapy.Field()  # job responsibilities
    cortype: scrapy.Field = scrapy.Field()          # company type
Then, as before, take the URL of a nationwide search for data-analysis positions as the start URL, and remember to fake a request header:
name: str = 'job51Analysis'
url: str = 'https://search.51job.com/list/000000,000000,0000,00,9,99,%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590,2,1.html?lang=c&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&ord_field=0&dibiaoid=0&line=&welfare='
headers: Dict = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
    'Referer': 'https://mkt.51job.com/tg/sem/pz_2018.html?from=baidupz'
}

def start_requests(self) -> Request:
    yield Request(self.url, headers=self.headers)
Just pass the predefined headers into the Request as an argument.
The default callback parse is used here as well (I was lazy and didn't define a custom one for this throwaway use):
if response.status == 200:
    PositionInfos: selector.SelectorList = response.selector.xpath(r'//div[@class="el"]')
How do we get the information for a single position? First use XPath to select the list of position nodes, then run a second selection against each element of that list to pull out the fields.
for positioninfo in PositionInfos:  # iterate over the SelectorList we just extracted
    pvi = PositionViewItem()
    pvi['name']: str = ''.join(positioninfo.xpath(r'p[@class="t1 "]/span/a/text()').extract()).strip()
    pvi['salary']: str = ''.join(positioninfo.xpath(r'span[@class="t4"]/text()').extract())
    pvi['createtime']: str = ''.join(positioninfo.xpath(r'span[@class="t5"]/text()').extract())
    pvi['district']: str = ''.join(positioninfo.xpath(r'span[@class="t3"]/text()').extract())
    pvi['corporation']: str = ''.join(positioninfo.xpath(r'span[@class="t2"]/a/text()').extract()).strip()
    pvi['url']: str = ''.join(positioninfo.xpath(r'p[@class="t1 "]/span/a/@href').extract())
Because one level of search results on 51JOB does not show the full position information, you have to click through to the detail page. So here we grab the URL of each position's detail page and hand it off for second-level processing:
# handle the second-level page
if len(pvi['url']) > 0:
    request: Request = Request(pvi['url'], callback=self.positiondetailparse, headers=self.headers)
    request.meta['positionViewItem'] = pvi
    yield request
The code above uses a custom callback function to handle the second-level page. It also sets the meta attribute on the Request, which can be used to pass data along with the request (Scrapy carries meta internally and attaches it to the response rather than sending it to the server, but it's still best not to stuff anything too large in there). That way the item instance we passed in can be retrieved inside positiondetailparse.
def positiondetailparse(self, response) -> PositionViewItem:
    if response.status == 200:
        pvi: PositionViewItem = response.meta['positionViewItem']
        pvi['posistiondemand']: str = ''.join(response.selector.xpath(r'//div[@class="bmsg job_msg inbox"]//p/text()').extract()).strip()
        pvi['cortype']: str = ''.join(response.selector.xpath(r'//div[@class="com_tag"]/p[@class="at"][1]/@title').extract()).strip()  # note: XPath positions start at 1
        pvi['scale']: str = ''.join(response.selector.xpath(r'//div[@class="com_tag"]/p[@class="at"][2]/@title').extract()).strip()
        pvi['category']: str = ''.join(response.selector.xpath(r'//div[@class="com_tag"]/p[@class="at"][3]/@title').extract())
        pvi['education']: str = ''.join(response.selector.xpath(r'//p[@class="msg ltype"]/text()[3]').extract()).strip()
        yield pvi
When parsing the detail page, keep in mind that element positions in XPath selectors are numbered from 1, not 0.
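A minimal sketch to confirm this, using a made-up HTML fragment shaped like the com_tag block above (the title values are placeholders, not real data):

from scrapy.selector import Selector

# toy fragment shaped like the com_tag block; titles are placeholders
body = '<div class="com_tag"><p class="at" title="first"/><p class="at" title="second"/></div>'
sel = Selector(text=body)
print(sel.xpath('//p[@class="at"][1]/@title').get())  # 'first' -- [1] is the first match
print(sel.xpath('//p[@class="at"][0]/@title').get())  # None    -- [0] matches nothing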
Once all the fields are filled in, yield the pvi to the pipeline for processing and storage.
After the positions on the current page have been scraped, we naturally also need the next page. Back in parse:
nexturl = ''.join(response.selector.xpath(r'//li[@class="bk"][2]/a/@href').extract())
print(nexturl)
if nexturl:
    # nexturl = urljoin(self.url, ''.join(nexturl))
    print(nexturl)
    yield Request(nexturl, headers=self.headers)
If no callback argument is given, the parse method is called by default, which is what lets it keep parsing the next page.
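In other words, the yield above behaves the same as spelling the callback out explicitly:

yield Request(nexturl, callback=self.parse, headers=self.headers)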
Finally, add the code that handles the item data in pipelines.py. Here I chose to store the data in a CSV file.
import os
import csv


class LearningPipeline(object):
    def __init__(self):
        self.file = open('51job.csv', 'a+', encoding='utf-8', newline='')
        self.writer = csv.writer(self.file, dialect='excel')

    def process_item(self, item, spider):
        if item['name']:
            self.writer.writerow([item['name'], item['salary'], item['district'],
                                  item['createtime'], item['education'], item['posistiondemand'],
                                  item['corporation'], item['cortype'], item['scale'], item['category']])
        return item

    def close_spider(self, spider):
        self.file.close()
The file is opened in the initializer, and process_item is the standard method for handling items: it is called once for every item that gets returned.
close_spider runs when the spider is closed; closing the file there is all that's needed.
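One thing the snippet above doesn't show: the pipeline only runs if it is enabled in settings.py. A minimal sketch, assuming the Scrapy project package is named learning (swap in your own project's dotted path):

# settings.py -- the 'learning' package name is an assumption; use your project's module path
ITEM_PIPELINES = {
    'learning.pipelines.LearningPipeline': 300,
}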
Note that the resulting CSV shows garbled Chinese when opened directly in Excel (Excel doesn't recognize BOM-less UTF-8). I opened the file in Notepad and re-saved a copy as ANSI, and the garbled characters disappeared.
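An alternative sketch that skips the Notepad step: open the file with the utf-8-sig codec so a BOM is written and Excel detects the encoding on its own (this would replace the open call in the pipeline's __init__ shown earlier):

# in LearningPipeline.__init__ -- utf-8-sig prepends a BOM that Excel understands
self.file = open('51job.csv', 'a+', encoding='utf-8-sig', newline='')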
That's it; the spider is ready to run (scrapy crawl job51Analysis from the project directory). It's only a very basic, very simple crawler, kept here as a note to myself.