Platform: Mac
Site: renren.com (web)
I've been practicing crawler logins lately. One approach is to dig through the page's JS files, parse them to work out the cookie information, and keep the session that way. But login pages these days all have captchas, and worst of all, almost none of the values in the request's form data are sent unencrypted; my JS is weak, so cracking that was out of the question. So I turned to the less-used selenium module, driving Chrome, to simulate the login and grab the cookies.
import time, random

import requests
from selenium import webdriver
from urllib import request
from lxml import etree

driver = webdriver.Chrome(executable_path=r'/Applications/Google Chrome.app/chromedriver')
driver.get('http://www.renren.com/PLogin.do')
time.sleep(2)

driver.find_element_by_id('email').clear()
driver.find_element_by_id('email').send_keys('myusername')     # enter the username
driver.find_element_by_id('password').clear()
driver.find_element_by_id('password').send_keys('mypassword')  # enter the password

# Download the captcha image so it can be read manually (or sent to a solving service)
img_url = 'http://icode.renren.com/getcode.do?t=web_login&rnd=' + str(random.random())
request.urlretrieve(img_url, 'renren_yzm.jpg')
try:
    driver.find_element_by_id('icode').clear()
    img_res = input('Enter the captcha: ')  # type it in by hand, or hand the image to a captcha-solving service
    driver.find_element_by_id('icode').send_keys(img_res)
except Exception:
    pass  # no captcha field on this login attempt

driver.find_element_by_id('autoLogin').click()  # tick "stay logged in"
driver.find_element_by_id('login').click()      # submit the login form
time.sleep(3)

cookie_items = driver.get_cookies()  # read the cookies from the browser session
post = {}                            # collect them as name -> value
for cookie in cookie_items:
    post[cookie['name']] = cookie['value']
print(post['t'])  # 't' is the cookie Renren needs to keep the login alive
driver.quit()     # close selenium

# ------------------------------------------------------------
url = 'http://www.renren.com/265025131/profile'
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36',
    'Cookie': 't=' + post['t'],
}
response = requests.get(url, headers=headers)
print('-' * 50)
html = etree.HTML(response.text)
title = html.xpath('//title/text()')
print('Page title fetched:', title)
print(response.url)
Summary: simulating the login with selenium and grabbing the cookies took hardly any time. But I took it for granted that reaching a Renren profile page required sending every cookie I'd captured, and wasted many hours on that assumption; in the end, keeping only the 't' cookie was enough to stay logged in. So keep testing constantly, it matters.
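As the summary notes, finding the minimal cookie set was trial and error; a safer general fallback (a sketch I'm adding, not part of the original post) is to carry over every cookie from driver.get_cookies() instead of hand-picking 't'. The helper below joins the list-of-dicts shape Selenium returns into a single Cookie header string; the sample data is made up for illustration.

```python
def cookies_to_header(cookie_items):
    """Join Selenium's get_cookies() output (a list of dicts with
    'name' and 'value' keys) into one Cookie header string."""
    return '; '.join(f"{c['name']}={c['value']}" for c in cookie_items)

# Hypothetical sample in the shape driver.get_cookies() returns
sample = [
    {'name': 't', 'value': 'abc123', 'domain': '.renren.com'},
    {'name': 'id', 'value': '265025131', 'domain': '.renren.com'},
]
header = cookies_to_header(sample)
print(header)  # t=abc123; id=265025131
```

The resulting string can be passed as headers={'Cookie': header} to requests.get, so the request carries exactly what the browser session held; once that works, cookies can be removed one by one to find the minimal set, which is how the author arrived at 't'.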