Python web crawlers

1. Crawling static HTML pages

This is the easiest case: when you right-click -> View Page Source, all the information you want to download is already visible right there, so you only need to grab the page source directly. The code is as follows:

# Simple open web
import urllib2
print urllib2.urlopen('http://stockrt.github.com').read()
# With password?
import urllib
opener = urllib.FancyURLopener()
print opener.open('http://user:password@stockrt.github.com').read()
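
Once you have the raw source, you usually extract just the pieces you care about. A minimal sketch, assuming you simply want every href target on the page (the regular expression here is deliberately crude and purely illustrative; use a real HTML parser for anything serious):

# Extract all link targets from the downloaded source
import re
import urllib2
html = urllib2.urlopen('http://stockrt.github.com').read()
for link in re.findall(r'href="([^"]+)"', html):
    print link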

 

2. Content loaded dynamically as you scroll

Some pages do not show everything when they first open; instead, more content is loaded dynamically as you scroll. To crawl pages like this you need to find the URL that triggers the dynamic loading. The usual approach: right-click -> Inspect Element -> Network.

Look for the request fired when you scroll, work out which parameters in the URL change on each scroll, and then assemble the corresponding URLs in your code.
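
For example, if the Network panel shows that each scroll simply fires the same request with an incrementing page parameter, you can build those URLs yourself. A minimal sketch, assuming a hypothetical endpoint http://example.com/ajax/list?page=N that returns a JSON list:

import json
import urllib2
# Hypothetical endpoint spotted in the Network panel; replace with the real one
base_url = 'http://example.com/ajax/list?page=%d'
for page in range(1, 4):
    resp = urllib2.urlopen(base_url % page).read()
    data = json.loads(resp)  # many such endpoints return JSON
    print 'page %d: %d items' % (page, len(data))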

 

3. Using mechanize to simulate a browser

Sometimes you will find that the methods above fail: what you download doesn't match the page content, and a lot of it is missing. In that case you need to masquerade as a browser and mimic browser behavior by instantiating a browser in a Python script or on the command line.

Simulating a browser:

import mechanize
import cookielib
# Browser
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Follow refresh 0 but don't hang on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
# Want debugging messages?
#br.set_debug_http(True)
#br.set_debug_redirects(True)
#br.set_debug_responses(True)
# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]

You now have a browser instance, the br object. With it you can open a page, using code along these lines:

# Open some site, let's pick a random one, the first that pops in mind:
r = br.open('http://google.com')
html = r.read()
# Show the source
print html
# or
print br.response().read()
# Show the html title
print br.title()
# Show the response headers
print r.info()
# or
print br.response().info()
# Show the available forms
for f in br.forms():
    print f
# Select the first (index zero) form
br.select_form(nr=0)
# Let's search
br.form['q']='weekend codes'
br.submit()
print br.response().read()
# Looking at some results in link format
for l in br.links(url_regex='stockrt'):
    print l

If the site you are visiting requires authentication (HTTP Basic Auth), then:

# If the protected site didn't receive the authentication data you would
# end up with a 401 error in your face
br.add_password('http://safe-site.domain', 'username', 'password')
br.open('http://safe-site.domain')

Because a Cookie Jar was set up earlier, you don't have to manage the site's login session yourself, i.e. the case where you have to POST a username and password.
In that situation the site usually asks your browser to store a session cookie so you don't have to log in again and again, which is why your cookies end up containing that field.
Storing and resending that session cookie is all handled by the Cookie Jar. Neat, huh?
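
If the site instead uses an ordinary login form, you can fill it in and submit it with the same br object; the Cookie Jar then keeps the resulting session cookie for all later requests. A minimal sketch, assuming a hypothetical login page whose form fields are named username and password:

# Hypothetical login form; the URL and field names depend on the actual site
br.open('http://safe-site.domain/login')
br.select_form(nr=0)
br.form['username'] = 'joe'
br.form['password'] = 'password'
br.submit()
# Later requests reuse the session cookie stored in the Cookie Jar
print br.response().read()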
You can also manage your browser history:

# Testing presence of link (if the link is not found you would have to
# handle a LinkNotFoundError exception)
br.find_link(text='Weekend codes')
# Actually clicking the link
req = br.click_link(text='Weekend codes')
br.open(req)
print br.response().read()
print br.geturl()
# Back
br.back()
print br.response().read()
print br.geturl()

Downloading a file:

# Download
f = br.retrieve('http://www.google.com.br/intl/pt-BR_br/images/logo.gif')[0]
print f
fh = open(f)
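
retrieve() saves the file under a temporary name and returns a (filename, headers) tuple, so f above is just a temporary path. If you want to keep the file under a name of your own, copy it somewhere. A minimal sketch, assuming you want it saved as logo.gif in the current directory:

import shutil
# f is the temporary path returned by br.retrieve() above
shutil.copy(f, 'logo.gif')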

Setting an HTTP proxy:

# Proxy and user/password
br.set_proxies({"http": "joe:password@myproxy.example.com:3128"})
# Proxy
br.set_proxies({"http": "myproxy.example.com:3128"})
# Proxy password
br.add_proxy_password("joe", "password")