Stock-data focused crawler: AttributeError: 'NoneType' object has no attribute 'find_all'

Today, while crawling stock data following the steps in the video, the console printed the following error:

name = stockInfo.find_all(attrs={'class':'bets-name'})[0]
AttributeError: 'NoneType' object has no attribute 'find_all'

A quick Baidu search shows that plenty of people run into this same error, but I could not find a working fix online.

So I am sharing my own approach here. I hope it helps, and comments and suggestions are welcome.

The code that produces the error:

import requests
from bs4 import BeautifulSoup
import re
import traceback



def getHTMLText(url,code='utf-8'):    # fetch a page and return its text ('' on any failure)
    try:
        r = requests.get(url)
        r.raise_for_status()
        r.encoding = code
        return r.text
    except:
        return ""


def getStockList(lst,stockURL):    # collect stock codes like sh600000 / sz000001 from the listing page
    html = getHTMLText(stockURL,'GB2312')
    soup = BeautifulSoup(html,'html.parser')
    a = soup.find_all('a')
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r"[s][hz]\d{6}",href)[0])
        except:
            continue


def getStockInfo(lst,stockURL,fpath):    # fetch each stock page, parse its fields, and append them to fpath
    count = 0
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html,'html.parser')
            stockInfo = soup.find('div',attrs={'class':'stock-bets'})   # returns None when the page has no such div
            name = stockInfo.find_all(attrs={'class':'bets-name'})[0]   # AttributeError is raised here when stockInfo is None
            infoDict.update({'股票名称': name.text.split()[0]})
            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                key = keyList[i].text
                val = valueList[i].text
                infoDict[key] = val
            with open(fpath,'a',encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
                count = count + 1
                print("\r当前进度:{:.2f}%".format(count * 100 / len(lst)),end="")
        except:
            count = count + 1
            print("\r当前进度:{:.2f}%".format(count * 100 / len(lst)), end="")
            traceback.print_exc()
            continue


def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'D://BaiduStockInfo.txt'
    slist = []
    getStockList(slist,stock_list_url)
    getStockInfo(slist,stock_info_url,output_file)
main()

Error analysis: an AttributeError, meaning the 'NoneType' object has no attribute 'find_all'. I have already covered two errors of this kind in earlier posts, and this one is the same story.

Why the error occurs: the preceding line, stockInfo = soup.find('div', attrs={'class': 'stock-bets'}), does not always return a Tag. Adding print(type(stockInfo)) shows two types in the output, <class 'bs4.element.Tag'> and <class 'NoneType'>; the <class 'NoneType'> cases (pages where the div is not found) are what trigger the error.
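A minimal, self-contained sketch of the same behavior (the HTML snippets below are made up for illustration, not taken from the real page): BeautifulSoup's find() returns a Tag when a matching element exists and None when it does not.

from bs4 import BeautifulSoup

# Hypothetical page fragments, only to show the two possible return types of find().
html_with_div = "<div class='stock-bets'><span class='bets-name'>示例股票</span></div>"
html_without_div = "<html><body><p>no stock data on this page</p></body></html>"

found = BeautifulSoup(html_with_div, 'html.parser').find('div', attrs={'class': 'stock-bets'})
missing = BeautifulSoup(html_without_div, 'html.parser').find('div', attrs={'class': 'stock-bets'})

print(type(found))    # <class 'bs4.element.Tag'>
print(type(missing))  # <class 'NoneType'>
# missing.find_all(attrs={'class': 'bets-name'})  # would raise the same AttributeError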


Solution:

Now that the cause is clear, the fix can target it directly.

My approach is to filter out the <class 'NoneType'> values, because that empty type is the root of the error.

Add an if check right after the line stockInfo = soup.find('div', attrs={'class': 'stock-bets'}) and use isinstance() to filter out the None values.

That is:

if isinstance(stockInfo,bs4.element.Tag):

Then indent the code that follows so it sits inside the if block. Note that in isinstance(), the second argument, bs4.element.Tag, is the type that the first argument, stockInfo, must match; to use it you need to import the bs4 module at the top of the file, i.e. import bs4.
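An equivalent alternative (my own variation, not from the video) is to bail out early when find() returns None instead of checking the type. Below is a sketch with the per-page parsing pulled into a hypothetical helper, parse_stock_page():

from bs4 import BeautifulSoup

def parse_stock_page(html):
    # Return an info dict for one stock page, or None when the page has no stock-bets div.
    soup = BeautifulSoup(html, 'html.parser')
    stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
    if stockInfo is None:          # same effect as the isinstance() check above
        return None
    infoDict = {}
    name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
    infoDict['股票名称'] = name.text.split()[0]
    for dt, dd in zip(stockInfo.find_all('dt'), stockInfo.find_all('dd')):
        infoDict[dt.text] = dd.text
    return infoDict

The caller can then simply skip a stock (continue) whenever parse_stock_page() returns None.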


The modified code:

import requests
from bs4 import BeautifulSoup
import re
import traceback
import bs4   # Note 1: import the bs4 module (bs4.element.Tag is used in the isinstance() check)


def getHTMLText(url,code='utf-8'):    # fetch a page and return its text ('' on any failure)
    try:
        r = requests.get(url)
        r.raise_for_status()
        r.encoding = code
        return r.text
    except:
        return ""


def getStockList(lst,stockURL):    # collect stock codes like sh600000 / sz000001 from the listing page
    html = getHTMLText(stockURL,'GB2312')
    soup = BeautifulSoup(html,'html.parser')
    a = soup.find_all('a')
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r"[s][hz]\d{6}",href)[0])
        except:
            continue


def getStockInfo(lst,stockURL,fpath):    # fetch each stock page, parse its fields, and append them to fpath
    count = 0
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html,'html.parser')
            stockInfo = soup.find('div',attrs={'class':'stock-bets'})
            if isinstance(stockInfo,bs4.element.Tag):   # Note 2: add this if check and indent the code below it
                name = stockInfo.find_all(attrs={'class':'bets-name'})[0]
                infoDict.update({'股票名称': name.text.split()[0]})
                keyList = stockInfo.find_all('dt')
                valueList = stockInfo.find_all('dd')
                for i in range(len(keyList)):
                    key = keyList[i].text
                    val = valueList[i].text
                    infoDict[key] = val
                with open(fpath,'a',encoding='utf-8') as f:
                    f.write(str(infoDict) + '\n')
                    count = count + 1
                    print("\r当前进度:{:.2f}%".format(count * 100 / len(lst)),end="")
        except:
            count = count + 1
            print("\r当前进度:{:.2f}%".format(count * 100 / len(lst)), end="")
            traceback.print_exc()
            continue


def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'D://BaiduStockInfo.txt'
    slist = []
    getStockList(slist,stock_list_url)
    getStockInfo(slist,stock_info_url,output_file)
main()