[Crawler] Let's Crawl Zhihu Together

Zhihu's content can only be viewed after logging in, so unlike the previous examples, here we have to submit login credentials to the server.

First, try fetching the Zhihu login page:

import requests

def getHtmlText(url):
    try:
        r = requests.get(url)
        r.encoding = 'utf-8'
        return r.text
    except:
        return ''

url = 'https://www.zhihu.com/'
getHtmlText(url)
'<html><body><h1>500 Server Error</h1>\nAn internal server error occured.\n</body></html>\n'
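The bare `except` above silently swallows network errors, and an HTTP 500 is not even an exception to `requests` — the body is returned as if everything worked. A variant that surfaces failures explicitly might look like this (a sketch; `get_html_text` is a hypothetical name, not from the original):

```python
import requests

def get_html_text(url):
    """Fetch a page, making HTTP and network failures visible."""
    try:
        r = requests.get(url, timeout=10)
        r.raise_for_status()  # a 500 response raises requests.HTTPError here
        r.encoding = 'utf-8'
        return r.text
    except requests.RequestException as e:
        print('request failed:', e)
        return ''
```

With this version the 500 shows up as a printed error instead of being returned as page text.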

This produces a 500 Server Error. The fix is to pass `headers={...}` and change the User-Agent to a browser string:

def getHtmlText(url):
    headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'}
    try:
        r = requests.get(url, headers = headers)
        r.encoding = 'utf-8'
        return r.text
    except:
        return ''

 

On the Zhihu login page, open Chrome DevTools with F12 and check "Preserve log" in the Network tab, so the requests captured before the redirect are not cleared by the newly loaded page. Enter your account and password, click log in, and you can see the form data that needs to be submitted.

 

Version one:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import requests
import re
import time
from PIL import Image
from bs4 import BeautifulSoup
import json

# login page URL
login_url = 'http://www.zhihu.com/#signin'

session = requests.session()

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.82 Safari/537.36',
}

content = session.get(login_url, headers=headers).content
soup = BeautifulSoup(content, 'html.parser')


# read the hidden _xsrf token out of the login page
def getxsrf():
    return soup.find('input', attrs={'name': "_xsrf"})['value']


# fetch the captcha image and ask the user to type it in
def get_captcha():
    t = str(int(time.time() * 1000))
    captcha_url = 'http://www.zhihu.com/captcha.gif?r=' + t + "&type=login"
    r = session.get(captcha_url, headers=headers)
    with open('captcha.jpg', 'wb') as f:
        f.write(r.content)
    im = Image.open('captcha.jpg')
    im.show()
    im.close()
    captcha = input("please input the captcha\n>")
    return captcha


def isLogin():
    # check whether we are logged in by requesting the profile settings page
    url = "https://www.zhihu.com/settings/profile"
    login_code = session.get(url, headers=headers, allow_redirects=False).status_code
    return login_code == 200


def login(secret, account):
    # decide from the account format whether it is a phone number
    if re.match(r"^1\d{10}$", account):
        print("Logging in with a phone number\n")
        post_url = 'http://www.zhihu.com/login/phone_num'
        postdata = {
            '_xsrf': getxsrf(),
            'password': secret,
            'remember_me': 'true',
            'phone_num': account,
        }
    else:
        print("Logging in with an email address\n")
        post_url = 'http://www.zhihu.com/login/email'
        postdata = {
            '_xsrf': getxsrf(),
            'password': secret,
            'remember_me': 'true',
            'email': account,
        }
    try:
        # first attempt: log in without a captcha
        login_page = session.post(post_url, data=postdata, headers=headers)
        login_code = login_page.text
        print(login_page.status_code)
        print(login_code)
    except:
        # a captcha is required before the login can succeed
        postdata["captcha"] = get_captcha()
        login_page = session.post(post_url, data=postdata, headers=headers)
        login_code = json.loads(login_page.text)
        print(login_code['msg'])


if __name__ == '__main__':

    if isLogin():
        print('You are already logged in')
    else:
        account = input('Please enter your username\n>  ')
        secret = input("Please enter your password\n>  ")
        login(secret, account)

Known issue: if the captcha is of the "click the inverted characters in the image" or slider variety, the login fails and we are bounced back to the login page. Reportedly a captcha-solving service can handle this.

 

Version two (from a Zhihu user):

Open the page you want to scrape, send a request, and inspect the headers it needs. The important ones are 'User-Agent' and 'Cookie'; add them accordingly:

import requests
import re

url = 'https://www.zhihu.com/question/22591304/followers'
# fill in the values copied from your browser's request headers
headers = {
    'User-Agent': '',
    'Cookie': '',
}

page = requests.get(url, headers=headers).text
imgs = re.findall(r'<img src="(.*?)_m.jpg', page)

Inspect `imgs` to see the matched image URLs.
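The regex can be sanity-checked against a small HTML fragment before running it on a live page (the sample markup and URLs below are made up for illustration):

```python
import re

# hypothetical snippet mimicking the follower-list markup
page = ('<img src="https://pic1.example.com/abc123_m.jpg">'
        '<img src="https://pic2.example.com/def456_m.jpg">')

# capture everything before the "_m.jpg" thumbnail suffix
imgs = re.findall(r'<img src="(.*?)_m\.jpg', page)
print(imgs)
```

The non-greedy `(.*?)` stops at the first `_m.jpg`, so each match is the image URL minus its thumbnail suffix.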

 

Here is a snippet that scrapes Zhihu avatars:

# -*- coding: utf-8 -*-
# py3.6
import requests
import urllib.request
import re
import random
from time import sleep

def main():
    url = 'https://www.zhihu.com/question/22591304/followers'
    # fill in the values copied from your browser's request headers
    headers = {'User-Agent': '', 'Cookie': ''}

    i = 1
    for x in range(20, 40, 20):
        data = {'start': '0',
                'offset': str(x),
                '_xsrf': '2e65c02ceeaaa1ac16d193415cf8d5be'}

        page = requests.post(url, headers=headers, data=data, timeout=50).text
        # extract image URLs from the returned JSON with a regex;
        # dropping the _m suffix gives the full-size image
        imgs = re.findall(r'<img src=\\"(.*?)_m.jpg', page)
        for img in imgs:
            try:
                # strip the escaping backslashes
                img = img.replace('\\', '')
                pic = img + '.jpg'
                # storage path and file name
                path = 'd:\\zhihu\\' + str(i) + '.jpg'
                # download the image
                urllib.request.urlretrieve(pic, path)
                print('downloaded image no. ' + str(i))
                i += 1
                # sleep to avoid an IP ban for crawling too fast
                sleep(random.uniform(0.5, 1))
            except:
                pass
        sleep(random.uniform(0.5, 1))

if __name__ == '__main__':
    main()

  

It seems the `.text` returned by get and post differ: the post response escapes `"` as `\"`, while the get response does not.
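That difference is most likely just JSON string escaping: the POST endpoint returns a JSON body whose HTML payload is a string value, and quotes inside a JSON string must be escaped. A quick demonstration (the HTML fragment is made up):

```python
import json

# an HTML fragment embedded as a string value in a JSON response
payload = json.dumps({'html': '<img src="a_m.jpg">'})
print(payload)
```

The quotes inside the value come out as `\"`, which is why the regex in version two has to match `\\"` and strip backslashes afterwards, while the plain-HTML get response needs no such handling.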

Also, the '_xsrf' value in `data` does not appear under Form Data in F12, yet it is indispensable. Is it hidden somewhere? Do we need a packet-capture tool?
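No capture tool should be needed: the token is typically embedded in the page itself as a hidden `<input>`, which is exactly what version one's `getxsrf()` reads with BeautifulSoup. A regex works too; a sketch with a made-up page fragment and token value:

```python
import re

# hypothetical login-page fragment carrying the hidden CSRF token
html = '<form><input type="hidden" name="_xsrf" value="abcd1234"/></form>'

# pull the value attribute of the _xsrf input
xsrf = re.search(r'name="_xsrf" value="([^"]+)"', html).group(1)
print(xsrf)
```

Because the field is hidden, it is submitted with the form without ever being typed, which is why it is easy to miss when eyeballing the visible form.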