Scraping job listings from Zhilian Zhaopin (with specified search criteria)

#!/usr/bin/env python
#coding:utf-8

import sys
reload(sys)
sys.setdefaultencoding('utf-8')

import urllib2
from bs4 import BeautifulSoup

# Browser User-Agent so the site does not reject the request
user_agent = 'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36'
header = {'User-Agent': user_agent}

# Search results for "数据分析" jobs in 南昌; the page number is appended below
url = 'http://sou.zhaopin.com/jobs/searchresult.ashx?jl=%E5%8D%97%E6%98%8C&kw=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&sm=0&p='

# Collect the detail-page URL of every job on result pages 1 and 2
url_list = []
i = 1
while i < 3:
    full_url = url + str(i)
    request = urllib2.Request(full_url, headers=header)
    response = urllib2.urlopen(request)
    soup = BeautifulSoup(response, 'lxml', from_encoding='utf-8')
    # <td class="zwmc" style="width: 250px;"> holds each job-title link
    links = soup.find_all('td', class_='zwmc')
    for link in links:
        new_url = link.find('a')['href']
        print new_url
        url_list.append(new_url)
    i += 1
print url_list

# Visit each detail page and write the fields we want to work.txt
filename = open('work.txt', 'w')
while len(url_list) != 0:
    new_url = url_list.pop()
    request = urllib2.Request(new_url, headers=header)
    response = urllib2.urlopen(request)
    soup = BeautifulSoup(response, 'lxml', from_encoding='utf-8')
    # <div class="inner-left fl"> <h1>商品专员/数据分析员</h1> -- the job title
    title = soup.find('div', class_="inner-left fl").find('h1')
    # <ul class="terminal-ul clearfix"> -- salary, location, requirements
    clearfix = soup.find('ul', class_="terminal-ul clearfix")
    # <div class="tab-inner-cont"> -- the job description
    cont = soup.find('div', class_="tab-inner-cont")

    filename.write(new_url + '\n')
    filename.write(title.get_text())
    filename.write(clearfix.get_text())
    filename.write(cont.get_text())
filename.close()
print url_list
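The link-extraction step can be checked offline against a small inline snippet, without hitting the site. A Python 3 sketch (the markup below is a hypothetical stand-in for Zhilian's result table, not the real page):

```python
from bs4 import BeautifulSoup

# Hypothetical sample mimicking the <td class="zwmc"> result rows
html = '''
<table>
  <tr><td class="zwmc"><a href="http://example.com/job/1">Job 1</a></td></tr>
  <tr><td class="zwmc"><a href="http://example.com/job/2">Job 2</a></td></tr>
</table>
'''

soup = BeautifulSoup(html, 'html.parser')
# Same extraction logic as the scraper: each zwmc cell's first <a> href
urls = [td.find('a')['href'] for td in soup.find_all('td', class_='zwmc')]
print(urls)  # ['http://example.com/job/1', 'http://example.com/job/2']
```

Testing the selector this way makes it easy to see whether a missing result is the site's markup changing or a bug in the loop.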

Shortcomings:

1. The page-fetching code could be factored out and reused; I didn't do that here. Mostly laziness.

2. Still no object-oriented programming (I tried during the day, got stuck in a few places, and gave up).

3. The data isn't saved in the format I actually wanted.

4. Duplicates may be scraped, because a list is used instead of a set.

5. The detail pages are scraped starting from the last item, which isn't great either.
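The duplicate and ordering issues above can both be fixed with a set for membership checks plus a plain front-to-back loop instead of pop(). A minimal Python 3 sketch with made-up URLs:

```python
# Hypothetical scraped URLs, including one duplicate
url_list = ['http://example.com/job/1', 'http://example.com/job/2',
            'http://example.com/job/1', 'http://example.com/job/3']

seen = set()       # set membership tests are O(1), unlike scanning a list
deduped = []
for u in url_list:
    if u not in seen:
        seen.add(u)
        deduped.append(u)  # keeps first-seen order, which matches page order

# Iterating front to back replaces pop(), which walked the list backwards
for u in deduped:
    print(u)
```

Keeping a separate `seen` set (rather than converting the whole list to a set) preserves the order the jobs appeared on the results pages.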

 

How is it that the list of shortcomings keeps getting longer? I've added several already, (⊙﹏⊙)b. Fine, I'll stop here; if I keep going I'll lose all confidence!

 

Still, on the whole it accomplishes what I set out to do: scrape the URL of each job listing, then use those URLs to fetch the information I want.

A bit of progress; at the very least the code has gotten a little longer.
