Overview
Because of the pandemic, everyone is stuck at home. If you are a fresh graduate, job hunting may feel a bit confusing right now, so I pulled some recruitment listings from the web for you to use.
Project summary
The project itself is still quite simple; the tricky part is mainly fetching the data. For the overall workflow, see my earlier post 几十行代码批量下载高清壁纸 爬虫入门实战 (a beginner scraping walkthrough on batch-downloading HD wallpapers in a few dozen lines of code).
Selected code
This time the code is split into just two parts.
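Both snippets below use index_url (the homepage of the recruitment site) and headers (a request-header dict), and the original post does not show where they are defined. Here is a minimal sketch of that setup, with a placeholder URL and a generic browser User-Agent standing in for the real values:

import requests
from lxml import etree

# Hypothetical setup -- not shown in the original post.
# index_url should point at the real homepage of the recruitment site being scraped.
index_url = "https://example.com/"  # placeholder, replace with the actual homepage
headers = {
    # A browser-like User-Agent so the request is less likely to be rejected.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0 Safari/537.36",
}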
1. Get the list of URLs for each category from the homepage
# 1. Get the URL list for each category
index_data = requests.get(index_url, headers=headers).content.decode('gbk', 'ignore')
tree = etree.HTML(index_data)
# The first 33 <a> tags inside <ul class="s_clear"> are the category links
second_data = tree.xpath(".//ul[@class='s_clear']/li/a/@href")[0:33]
major_name = tree.xpath(".//ul[@class='s_clear']/li/a/text()")[0:33]
major_url = []
for one_third_url in second_data:
    # Split the raw href and keep the piece that will be requested later
    x = str(one_third_url).split(".", 2)[1]
    major_url.append(x)
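As a quick check that is not in the original post, you could pair each category name with its cleaned URL and print the first few entries, to confirm that the XPath expressions and the split() post-processing behave as expected:

# Hypothetical sanity check: inspect the first few (name, url) pairs.
for name, url in list(zip(major_name, major_url))[:5]:
    print(name, "->", url)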
2. Get the job listings under each category page
# 2. Get the jobs under each category page
for i in range(len(major_url)):
    print(major_url[i])
    print(major_name[i])
    # Fetch the category page and parse it the same way as the homepage
    major_job_page = requests.get(major_url[i], headers=headers).content.decode('gbk', 'ignore')
    major_job_page_tree = etree.HTML(major_job_page)
    # Job title, detail-page link, and posting date for every item in the hot-job list
    job_list_title = major_job_page_tree.xpath(".//div[@class='hotJobList']/div/ul/li/a/text()")
    job_list_url = major_job_page_tree.xpath(".//div[@class='hotJobList']/div/ul/li/a/@href")
    job_list_date = major_job_page_tree.xpath(".//div[@class='hotJobList']/div/ul/li/span/text()")
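The original code stops after extracting the titles, links, and dates and does not show how they are saved. A minimal sketch of one way to persist them, assuming one CSV file per category; the csv usage and filename scheme are my own additions. Add `import csv` alongside the other imports and append the following at the end of the loop body:

    # Hypothetical persistence step (my addition): write this category's jobs
    # to its own CSV file at the end of each loop iteration.
    with open(major_name[i] + ".csv", "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(["title", "url", "date"])
        writer.writerows(zip(job_list_title, job_list_url, job_list_date))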
Results