A Simple Crawler in Python 3.x: Scraping Qiushibaike

1. Python version: 3.5, using the urllib library

2. Target: the first page of Qiushibaike's 24-hour hot jokes (URL: http://www.qiushibaike.com/hot/page/1)

3. Matching is done with regular expressions, via the re library

4. Python 2's urllib and urllib2 were merged into a single urllib package in Python 3, split into urllib.request, urllib.error, and urllib.parse (a quick mapping sketch follows below)
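
As a rough illustration of that reorganization, here is a small mapping sketch; the module and function names are standard-library facts, while the query parameters are invented purely for the example:

# Python 2                          ->  Python 3
# urllib2.Request(url, headers=h)   ->  urllib.request.Request(url, headers=h)
# urllib2.urlopen(req)              ->  urllib.request.urlopen(req)
# urllib2.URLError                  ->  urllib.error.URLError
# urllib.urlencode(params)          ->  urllib.parse.urlencode(params)
import urllib.request
import urllib.parse

query = urllib.parse.urlencode({'page': 1})   # e.g. 'page=1'
req = urllib.request.Request('http://www.qiushibaike.com/hot/page/1',
                             headers={'User-Agent': 'Mozilla/5.0'})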

# -*- coding:utf-8 -*-
# Scrape the first page of Qiushibaike's 24-hour hot jokes
# (username, content, vote count, comment count)
import re
import urllib.request
from urllib.error import URLError

url = 'http://www.qiushibaike.com/hot/page/1'
# Send a browser-like User-Agent header so the site does not reject the request
h = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36',
}
try:
    req = urllib.request.Request(url, headers=h)
    response = urllib.request.urlopen(req)
    content = response.read().decode('utf-8')
    # Regex matching Qiushibaike's page markup at the time of writing; the groups
    # capture username, joke content, vote count and label, comment count and label
    pattern = re.compile(
        '<div class="author clearfix">.*?<h2>(.*?)</h2>.*?<div class="content">(.*?)</div>.*?<div class="stats"'
        '.*?i class="number">(.*?)</i>(.*?)</span>.*?<span class="dash">.*?i class="number">(.*?)</i>(.*?)</a>',
        re.S
    )
    items = re.findall(pattern, content)
    # Skip items whose content contains an image
    for item in items:
        img = re.search('img', item[1])
        if not img:
            print(item[0], item[1], item[2], item[3], item[4], item[5])

except URLError as e:
    print('error', e.reason)
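
Because Qiushibaike's markup changes over time, the demo below runs the same pattern against a hard-coded HTML fragment (invented here to mimic the structure the regex expects), so you can see what each capture group returns without hitting the live site:

import re

sample = '''<div class="author clearfix"><h2>someuser</h2></div>
<div class="content">An example joke body</div>
<div class="stats"><span><i class="number">123</i> funny</span>
<span class="dash">·</span><a><i class="number">45</i> comments</a>'''

pattern = re.compile(
    '<div class="author clearfix">.*?<h2>(.*?)</h2>.*?<div class="content">(.*?)</div>.*?<div class="stats"'
    '.*?i class="number">(.*?)</i>(.*?)</span>.*?<span class="dash">.*?i class="number">(.*?)</i>(.*?)</a>',
    re.S
)
for user, body, votes, vote_label, comments, comment_label in re.findall(pattern, sample):
    print(user, body.strip(), votes, vote_label.strip(), comments, comment_label.strip())
# prints: someuser An example joke body 123 funny 45 comments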

Note: this article was written after reading a reference blog post, then modified until it ran.
