A crawler is a program we use in place of a human to read and collect information from websites in bulk. Anti-crawling is its adversary: the site operator's effort to block non-human collection of that information. The two have been locked in a back-and-forth arms race, yet so far most websites can still be scraped without much trouble.
A crawler's way around anti-crawling measures is to convince the server, as far as possible, that it is not an automated program. That means the crawler has to disguise itself as a browser when it visits the site, which greatly lowers the odds of being blocked. So how do we disguise the crawler as a browser?
For example, keep a pool of real browser User-Agent strings and Referer values to draw from:
```python
user_agent_list = [
    "Opera/9.80 (X11; Linux i686; U; hu) Presto/2.9.168 Version/11.50",
    "Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (X11; Linux i686; U; es-ES) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/5.0 Opera 11.11",
    "Opera/9.80 (X11; Linux x86_64; U; bg) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.0; U; en) Presto/2.8.99 Version/11.10",
    "Opera/9.80 (Windows NT 5.1; U; zh-tw) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.1; Opera Tablet/15165; U; en) Presto/2.8.149 Version/11.1",
    "Opera/9.80 (X11; Linux x86_64; U; Ubuntu/10.10 (maverick); pl) Presto/2.7.62 Version/11.01",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0",
    "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
    "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
    "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
    "Opera/12.80 (Windows NT 5.1; U; en) Presto/2.10.289 Version/12.02",
    "Opera/9.80 (Windows NT 6.1; U; es-ES) Presto/2.9.181 Version/12.00",
    "Opera/9.80 (Windows NT 5.1; U; zh-sg) Presto/2.9.181 Version/12.00",
    "Opera/12.0(Windows NT 5.2;U;en)Presto/22.9.168 Version/12.00",
    "Opera/12.0(Windows NT 5.1;U;en)Presto/22.9.168 Version/12.00",
    "Mozilla/5.0 (Windows NT 5.1) Gecko/20100101 Firefox/14.0 Opera/12.0",
    "Opera/9.80 (Windows NT 6.1; WOW64; U; pt) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Windows NT 6.0; U; pl) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; de) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Windows NT 5.1; U; en) Presto/2.9.168 Version/11.51",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; de) Opera 11.51",
    "Opera/9.80 (X11; Linux x86_64; U; fr) Presto/2.9.168 Version/11.50",
]
referer_list = ["https://www.test.com/", "https://www.baidu.com/"]
```
Then pick a random index, so that each request uses a randomly chosen User-Agent and Referer (note: if you crawl multiple pages in a loop, it's best to wait a few seconds after each page before continuing, to ease the load on the server):
```python
import random
import time

import lxml.html
import requests


def get_randam(data):
    """Return a random valid index into data."""
    return random.randint(0, len(data) - 1)


def crawl():
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,'
                  'image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Host': 'test.com',
        'Referer': 'https://test.com/',
    }
    # Swap in a random User-Agent and Referer for this request.
    random_index = get_randam(user_agent_list)
    headers['User-Agent'] = user_agent_list[random_index]
    random_index_01 = get_randam(referer_list)
    headers['Referer'] = referer_list[random_index_01]

    session = requests.session()
    url = "https://www.test.com/"
    html_data = session.get(url, headers=headers, timeout=180)
    html_data.raise_for_status()
    html_data.encoding = 'utf-8-sig'
    data = html_data.text
    data_doc = lxml.html.document_fromstring(data)
    # ... parse, extract, and store the page data here ...
    time.sleep(random.randint(3, 5))
```
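If you loop over several pages, the same pattern extends naturally: re-randomize the identity headers for every page and sleep between requests. A minimal sketch reusing the `user_agent_list` and `referer_list` defined above (the page URLs are hypothetical placeholders):

```python
import random
import time

import requests

# Hypothetical page URLs for illustration only.
page_urls = [
    "https://www.test.com/page/1",
    "https://www.test.com/page/2",
    "https://www.test.com/page/3",
]

for url in page_urls:
    headers = {
        'Accept-Language': 'zh-CN,zh;q=0.9',
        # random.choice is a shorthand for indexing with get_randam above
        'User-Agent': random.choice(user_agent_list),
        'Referer': random.choice(referer_list),
    }
    resp = requests.get(url, headers=headers, timeout=180)
    resp.raise_for_status()
    # ... parse and store resp.text here ...
    time.sleep(random.randint(3, 5))  # pause a few seconds between pages
```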
Based on how anonymous they are, proxy IPs fall into the following four categories:

- Transparent proxy: forwards your request but also passes your real IP along (e.g., in X-Forwarded-For), so the target site sees both the proxy and you.
- Anonymous proxy: hides your real IP, but the target can still tell a proxy is being used.
- Distorting proxy: reveals that a proxy is in use and reports a deliberately false IP.
- High-anonymity (elite) proxy: the target sees only the proxy's IP and cannot tell a proxy is involved at all.
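A practical way to check which category a given proxy really belongs to is to send a request through it to a header-echo service and see whether your real IP leaks into headers like X-Forwarded-For or Via. A minimal sketch, assuming httpbin.org is reachable and using a placeholder proxy address:

```python
import requests

# Hypothetical proxy address; substitute a live one from your provider.
proxies = {"http": "http://117.30.113.248:9999"}

# httpbin echoes back the headers it received, so a transparent or
# anonymous proxy will show up as X-Forwarded-For / Via entries here.
r = requests.get("http://httpbin.org/headers", proxies=proxies, timeout=30)
print(r.json()["headers"])

# Compare the exit IP the target sees with your own IP.
print(requests.get("http://httpbin.org/ip", proxies=proxies, timeout=30).json())
```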
Below I use a free high-anonymity proxy IP for crawling:
```python
# Free proxy IPs from: https://www.xicidaili.com/nn
import requests

proxies = {
    "http": "http://117.30.113.248:9999",
    "https": "https://120.83.120.157:9999",
}

r = requests.get("https://www.baidu.com", proxies=proxies)
r.raise_for_status()
r.encoding = 'utf-8-sig'
print(r.text)
```
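Free high-anonymity proxies die constantly, so in practice it helps to keep a small pool and rotate to another proxy when one fails. A hedged sketch of that idea; the pool entries are placeholders, not guaranteed live servers:

```python
import random

import requests

# Placeholder pool; in practice you would refresh this from a proxy list page.
proxy_pool = [
    "http://117.30.113.248:9999",
    "http://120.83.120.157:9999",
]

def get_with_proxy(url, retries=3):
    """Try the request through random proxies, moving on when one fails."""
    last_err = None
    for _ in range(retries):
        proxy = random.choice(proxy_pool)
        try:
            r = requests.get(url,
                             proxies={"http": proxy, "https": proxy},
                             timeout=10)
            r.raise_for_status()
            return r
        except requests.RequestException as err:
            last_err = err  # proxy is likely dead or blocked; try another
    raise last_err

# print(get_with_proxy("https://www.baidu.com").text)
```

For small jobs, random choice plus a retry loop is usually enough; for anything serious you would validate proxies in the background and drop dead ones from the pool.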
Note: a pitfall from experience. I once mistakenly set the keys in proxies to uppercase HTTP/HTTPS, which made the requests bypass the proxy entirely. It took me a few months to notice, and it still makes my scalp crawl.
I also used to write crawlers that scraped Amazon, but before long they would be identified as bots and redirected to a robot-check page, i.e., an image captcha, whose only purpose is to verify whether a human is actually visiting the site.
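One defensive pattern is to sniff each response for that robot-check page and back off (ideally rotating User-Agent and proxy, as shown earlier) instead of hammering the site. A rough sketch; the marker strings are my assumptions about what the interstitial contains, not something Amazon documents:

```python
import random
import time

import requests

# Assumed marker strings for recognizing the captcha interstitial.
ROBOT_MARKERS = ("Robot Check", "captcha", "api-services-support@amazon.com")

def fetch_with_backoff(session, url, headers, max_attempts=3):
    """Fetch a page; if a robot-check page is detected, wait and retry."""
    for attempt in range(1, max_attempts + 1):
        resp = session.get(url, headers=headers, timeout=60)
        if not any(marker in resp.text for marker in ROBOT_MARKERS):
            return resp
        # Looks like a captcha page: back off with jitter and try again,
        # ideally after switching User-Agent / proxy.
        time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("still hitting the robot check after retries: " + url)
```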