I've recently been learning data analysis, so I tried scraping the job postings from these two sites as material to analyze. I ran into quite a few pitfalls along the way, so I'm writing them down here.
For the framework I went with Scrapy, which is fairly simple; I created two spider files, one for each site.
Let's look at BOSS直聘 first:
I searched online for plenty of BOSS直聘 examples and figured it would be easy, just a matter of faking a logged-in request header... but once I got into it, it turned out to be nothing of the sort.
Following the usual routine, first define the data to be scraped in items.py:
    import scrapy

    class PositionViewItem(scrapy.Item):
        # define the fields for your item here like:
        name: scrapy.Field = scrapy.Field()             # job title
        salary: scrapy.Field = scrapy.Field()           # salary
        education: scrapy.Field = scrapy.Field()        # education requirement
        experience: scrapy.Field = scrapy.Field()       # experience requirement
        jobjd: scrapy.Field = scrapy.Field()            # job ID
        district: scrapy.Field = scrapy.Field()         # district
        category: scrapy.Field = scrapy.Field()         # industry category
        scale: scrapy.Field = scrapy.Field()            # company size
        corporation: scrapy.Field = scrapy.Field()      # company name
        url: scrapy.Field = scrapy.Field()              # job posting URL
        createtime: scrapy.Field = scrapy.Field()       # publish time
        posistiondemand: scrapy.Field = scrapy.Field()  # job responsibilities
        cortype: scrapy.Field = scrapy.Field()          # company type
That is the Item definition: work out which values you need, and for now simply declare each one as a plain scrapy.Field().
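As a quick illustration of how the Item is used later (the field values here are made up):

    pvi = PositionViewItem()
    pvi['name'] = '数据分析师'
    pvi['salary'] = '15k-25k'
    print(pvi['name'])  # fields behave like dict keys
    # pvi['foo'] = 1    # KeyError: only declared fields are allowed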
    name: str = 'DA'
    # The start URL is the BOSS直聘 search results page, querying
    # "数据分析" (data analysis) nationwide
    url: str = 'https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=10'
    # cookies
    cookies: Dict = {
        "__zp_stoken__": "bf79ElaZ4z7IK5JruWAX5j256l7CJf3k7Ag2A9mrsSPN%2FnLgjChK0LguCrB%2FtIEFMKdnysNhr4ilqIicjeHkCsCpBQ%3D%3D"
    }
    # request headers
    headers: Dict = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
        'Referer': 'https://www.zhipin.com/web/common/security-check.html?seed=6gkgYHovIokVntQcwXUH9KW3%2FbEZsqfeaoCctIp1rE8%3D&name=f2d51032&ts=1571623520634&callbackUrl=%2Fjob_detail%2F%3Fquery%3D%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590%26city%3D100010000%26industry%3D%26position%3D&srcReferer=https%3A%2F%2Fwww.zhipin.com%2Fjob_detail%2F%3Fquery%3D%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590%26city%3D100010000%26industry%3D%26position%3D'
    }
With the common parameters in place, define a start_requests method to kick off the crawl from the start URL:
    def start_requests(self) -> Request:
        # Yield the initial Request and let it use the default callback:
        # the first argument is the url defined above, followed by the
        # request headers and the cookies.
        yield Request(self.url, headers=self.headers, cookies=self.cookies)
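For context, here is how those pieces fit together in the spider file; a minimal sketch assuming the attributes shown above (the class name BossSpider is my own, and the cookie/header values are abbreviated):

    import scrapy
    from typing import Dict
    from urllib.parse import urljoin  # used later when following the next-page link
    from scrapy.http import Request

    class BossSpider(scrapy.Spider):
        name: str = 'DA'
        url: str = 'https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=10'
        cookies: Dict = {"__zp_stoken__": "..."}                # same value as shown above
        headers: Dict = {'User-Agent': '...', 'Referer': '...'} # same values as shown above

        def start_requests(self) -> Request:
            yield Request(self.url, headers=self.headers, cookies=self.cookies)

        # parse(self, response) is defined next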
Scrapy's default callback is parse, so define a parse method that receives the response and picks it apart with XPath:
    def parse(self, response) -> None:
        if response.status == 200:
            PositionInfos: selector.SelectorList = response.selector.xpath(r'//div[@class="job-primary"]')
            for positioninfo in PositionInfos:
                pvi = PositionViewItem()
                pvi['name'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/div[@class="job-title"]/text()').extract())
                pvi['salary'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/span[@class="red"]/text()').extract())
                pvi['education'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[2])
                pvi['experience'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[1])
                pvi['district'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[0])
                pvi['corporation'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/h3[@class="name"]/a/text()').extract())
                pvi['category'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[0])
                try:
                    pvi['scale'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[2])
                except IndexError:
                    pvi['scale'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[1])
                pvi['url'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/@href').extract())
                yield pvi
            nexturl = response.selector.xpath(r'//a[@ka="page-next"]/@href').extract()
            if nexturl:
                nexturl = urljoin(self.url, ''.join(nexturl))
                print(nexturl)
                yield Request(nexturl, headers=self.headers, cookies=self.cookies, callback=self.parse)
The .extract() that follows an XPath selector returns a list of every element the selector matched. If nothing matches, the list is empty, so indexing into it (as in the .extract()[2] calls above) raises an error instead of returning an empty value!
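To make that behavior concrete, a small standalone sketch (the HTML snippet is made up):

    from scrapy.selector import Selector

    sel = Selector(text='<div><p>hello</p></div>')
    print(sel.xpath('//p/text()').extract())           # ['hello']
    print(sel.xpath('//span/text()').extract())        # [] -- no match gives an empty list
    print(sel.xpath('//span/text()').extract_first())  # None -- the safe single-value variant
    # sel.xpath('//span/text()').extract()[0]          # IndexError: list index out of range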
yield pvi hands the populated Item over to the pipelines, where the scraped data can conveniently be processed further.
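For illustration, a minimal pipeline sketch; the class name and the cleanup step are hypothetical, and the class would still need to be enabled under ITEM_PIPELINES in settings.py:

    class PositionViewPipeline:
        def process_item(self, item, spider):
            # hypothetical cleanup: trim stray whitespace from the salary text
            if item.get('salary'):
                item['salary'] = item['salary'].strip()
            return item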
After nexturl = response.selector.xpath(r'//a[@ka="page-next"]/@href').extract() grabs the next-page link, it has to be merged with the source URL using urljoin from urllib.parse, because the scraped link is not a complete URL but a fragment like /c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2. The joining rule works like this:
    url = 'http://ip/'
    path = 'api/user/login'
    urljoin(url, path)  # -> 'http://ip/api/user/login'
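A quick runnable check of that rule, using the start URL and the scraped fragment from above:

    from urllib.parse import urljoin

    base = 'https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=10'
    nexturl = '/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2'
    print(urljoin(base, nexturl))
    # https://www.zhipin.com/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2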
I assumed that would do it. But running the spider with scrapy crawl <spider name> returned no data; every request was 302-redirected straight to a security-check page.
Open Fiddler and inspect the request flow:
You can see the whole query flow is faithfully reproduced: the address is requested once, the response redirects to the security-check page, and then it switches back to the results page. It looks fine, but closer inspection shows that the __zp_stoken__ value in the cookies has changed:
So the picture is clear: after the security-check call a fresh token is written back, and subsequent requests are validated against this latest token. It appears to be encrypted and written back by a piece of JavaScript. An expert on Zhihu has written up how to decrypt it, but I don't know front-end well enough, so I gave up...
The write-up is here: https://zhuanlan.zhihu.com/p/83235220
The token can only be obtained by refreshing manually, and it usually survives only a few requests before expiring and needing to be fetched again. Then again, manual crawling only gets you about 10 pages anyway; beyond that nothing is shown without logging in, so it hardly matters.
I later tried simulating the browser with Selenium, and that failed too.
All in all, not much of a success; I can't recommend this approach for now...