I've been learning Python recently, but there was one regular expression I just couldn't figure out, so I took the most naive approach and wrote a Baidu search crawler myself in only 16 lines of code.
First, install the required packages:
```
pip3 install bs4
pip3 install requests
```
Once they're installed, enter:
```python
import requests
from bs4 import BeautifulSoup
```
Run it with F5; if no error is raised, the installation succeeded.
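If you'd like a more explicit check than "no error on F5", here is a minimal sketch that prints the installed versions (both packages expose a `__version__` attribute):

```python
import requests
import bs4

# Printing the versions confirms the imports resolve to the installed packages.
print(requests.__version__)
print(bs4.__version__)
```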
Open a browser and go to 'www.baidu.com' to reach Baidu, then search for anything; I'll use 'python' as the example here.
You can see that the URL of a Baidu search looks like:

```
https://www.baidu.com/s?ie=utf-8&f=8&rsv_bp=1&tn=baidu&wd=python****
```
which can ultimately be simplified to:

```
https://www.baidu.com/s?wd=python
```
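As an aside, rather than concatenating the keyword into the URL yourself, you can let requests build the query string; a sketch of the equivalent request, with the advantage that non-ASCII keywords get URL-encoded automatically:

```python
import requests

# requests assembles and percent-encodes the query string from params,
# so a Chinese keyword would work just as well as 'python' here.
resp = requests.get('https://www.baidu.com/s', params={'wd': 'python'})
print(resp.url)  # https://www.baidu.com/s?wd=python
```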
So the first step is to try fetching the HTML of the search results page:
```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.baidu.com/s?wd=' + 'python'
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.16 Safari/537.36",
}
html = requests.get(url, headers=headers).text
print(html)
```
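The snippet above assumes the request succeeds; a slightly more defensive variant (still using only standard requests features) checks the HTTP status and guards against mis-detected encodings:

```python
resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()                  # raise on 4xx/5xx responses
resp.encoding = resp.apparent_encoding   # avoid garbled Chinese text
html = resp.text
```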
You can read through the crawled data directly, or compare it against the page structure in Chrome's F12 developer tools; I'll use Chrome's F12 here as the example.
You can see that each search result sits in a div tag whose class is 'result c-container'.
First, create the BeautifulSoup object that we'll use to filter the page:
```python
soup = BeautifulSoup(html, 'html.parser')
```
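'html.parser' is the parser from the standard library; if the third-party lxml package is installed, passing 'lxml' instead is usually faster. A quick sanity check that parsing worked:

```python
# The <title> of the result page should mention the keyword,
# e.g. something like 'python_百度搜索' (exact text depends on Baidu).
print(soup.title.text)
```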
Use a for loop to find all the div tags whose class is 'result c-container':
```python
for div in soup.find_all('div', class_="result c-container"):
    print(div)
```
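To see how many results were matched before looping over them, you can count them first (reusing the soup object from above):

```python
results = soup.find_all('div', class_="result c-container")
print(len(results))  # typically around 10 organic results per page
```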
Then use another for loop to find the h3 tags inside each of them:
```python
for div in soup.find_all('div', class_="result c-container"):
    # print(div)  # commented out to make the output easier to check
    for h3 in div.find_all('h3'):
        print(h3.text)
```
Finally, pull out the title and the link (the a tag):
```python
for div in soup.find_all('div', class_="result c-container"):
    # print(div)
    for h3 in div.find_all('h3'):
        # print(h3.text)
        for a in h3.find_all('a'):
            print(a.text, ' url:', a['href'])
```
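One caveat: `a['href']` raises a KeyError when an a tag has no href attribute. If you run into that, `a.get('href')` returns None instead, so a slightly more defensive inner loop would be:

```python
for a in h3.find_all('a'):
    href = a.get('href')   # None instead of KeyError when href is missing
    if href:
        print(a.text, ' url:', href)
```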
With that, we've filtered out the ads, Baidu Baike cards, and so on, since they don't use the 'result c-container' class.
The complete code is as follows:
```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.baidu.com/s?wd=' + 'python'
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.16 Safari/537.36",
}
html = requests.get(url, headers=headers).text
print(html)

soup = BeautifulSoup(html, 'html.parser')
for div in soup.find_all('div', class_="result c-container"):
    # print(div)
    for h3 in div.find_all('h3'):
        # print(h3.text)
        for a in h3.find_all('a'):
            print(a.text, ' url:', a['href'])

# To write the crawled HTML to a file as well, add these two lines:
# with open(r'C:/爬虫/百度.txt', 'w', encoding='utf-8') as wr:
#     wr.write(html)
```
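To make this reusable, here is a sketch that wraps the same logic in a function. `baidu_search` is a name I made up, and the `pn` parameter is my assumption about Baidu's pagination offset (10 results per page); Baidu may change its markup or parameters at any time:

```python
import requests
from bs4 import BeautifulSoup

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.16 Safari/537.36",
}

def baidu_search(keyword, page=0):
    """Return (title, url) pairs from one page of Baidu results.

    pn is assumed to be Baidu's result offset (10 per page).
    """
    resp = requests.get('https://www.baidu.com/s',
                        params={'wd': keyword, 'pn': 10 * page},
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, 'html.parser')
    pairs = []
    for div in soup.find_all('div', class_="result c-container"):
        for h3 in div.find_all('h3'):
            for a in h3.find_all('a'):
                pairs.append((a.text, a.get('href')))
    return pairs

if __name__ == '__main__':
    for title, url in baidu_search('python'):
        print(title, ' url:', url)
```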