Basic usage of BeautifulSoup in Python

  Official documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/

  Reference: http://www.javashuo.com/article/p-bhmtmues-cu.html

  

  What is BeautifulSoup?

    BeautifulSoup is an HTML/XML parser written in Python. It handles malformed markup gracefully and builds a parse tree, and it provides simple, commonly used operations for navigating, searching, and modifying that tree.
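
  For example, here is a minimal sketch of how it handles malformed markup (assuming the beautifulsoup4 and lxml packages are installed):

    from bs4 import BeautifulSoup

    # Two unclosed <p> tags: BeautifulSoup still builds a complete parse tree,
    # closing both tags and wrapping them in <html><body>...</body></html>.
    broken = "<p>first paragraph<p>second paragraph"
    soup = BeautifulSoup(broken, "lxml")
    print(soup.prettify())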

 

  The following test example briefly demonstrates how to use BeautifulSoup.

  

    from bs4 import BeautifulSoup  # required import; the lxml parser must also be installed

    def beautifulSoup_test(self):
        html_doc = """
        <html><head><title>The Dormouse's story</title></head>
        <body>
        <p class="title"><b>The Dormouse's story</b></p>

        <p class="story">Once upon a time there were three little sisters; and their names were
        <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
        <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
        <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
        <div class="text" id="div1">test</div>
        and they lived at the bottom of a well.</p>

        <p class="story">...</p>

        """
        # soup is the BeautifulSoup object built by parsing the string above
        soup = BeautifulSoup(html_doc,'lxml')

        # Gets the <title> tag
        print(soup.title)
        # Output: <title>The Dormouse's story</title>

        # soup.p returns the first <p> tag in the document; to get every <p>, use find_all.
        # find_all returns a list that you can iterate over to get each match in turn.
        print(soup.p)
        print(soup.find_all('p'))


        # find() returns only the first matching tag (or None), whereas find_all() returns a list.
        print(soup.find(id='link3'))
        # Output: <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

        # get_text() returns the text content; it works on the soup object and on any tag obtained from it.
        print(soup.get_text())
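
        # get_text() also takes optional arguments: separator joins the text
        # pieces with the given string, and strip=True trims surrounding whitespace.
        print(soup.get_text(separator=" ", strip=True))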

        aitems = soup.find_all('a')
        # Get the link (href) and id of each <a> tag
        for item in aitems:
            print(item["href"],item["id"])
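
        # item["href"] raises a KeyError if the attribute is missing; tag.get()
        # returns None instead, which is safer for optional attributes.
        print(soup.div.get("href"))  # the <div> has no href, so this prints None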

        # 1. Search by CSS class
        print(soup.find_all("a", class_="sister"))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
        # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

        print(soup.select("p.title"))
        # Output: [<p class="title"><b>The Dormouse's story</b></p>]
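
        # select() accepts most CSS selectors (id, child combinators, ...), and
        # select_one() returns only the first match instead of a list.
        print(soup.select("a#link2"))      # [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
        print(soup.select("p.story > a"))  # the three sister links
        print(soup.select_one("p.title"))  # <p class="title"><b>The Dormouse's story</b></p>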

        # 2. Search by attributes
        print(soup.find_all("a", attrs={"class": "sister"}))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
        # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

        # 3. Search by text
        print(soup.find_all(text="Elsie"))
        # Output: ['Elsie']

        print(soup.find_all(text=["Tillie", "Elsie", "Lacie"]))
        # Output: ['Elsie', 'Lacie', 'Tillie']
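
        # The text filter also accepts a regular expression
        # (the import would normally go at the top of the file):
        import re
        print(soup.find_all(text=re.compile("Dormouse")))
        # Output: ["The Dormouse's story", "The Dormouse's story"]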


        # 4. Limit the number of results
        print(soup.find_all("a", limit=2))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

        print(soup.find_all(id="link2"))
        # Output: [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

        print(soup.find_all(id=True))
        # Output: [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
        # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
        # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>,
        # <div class="text" id="div1">test</div>]
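
  As noted above, BeautifulSoup can also navigate and modify the parse tree. A minimal sketch, continuing with the same soup object (standard bs4 calls):

        # Navigating: move between related tags
        print(soup.title.string)              # The Dormouse's story
        print(soup.title.parent.name)         # head
        print(soup.a.find_next_sibling("a"))  # the second <a> tag (Lacie)

        # Modifying the tree
        soup.div.string = "modified"          # replace the div's text
        new_a = soup.new_tag("a", href="http://example.com/new", id="link4")
        new_a.string = "New"
        soup.p.append(new_a)                  # append the new link to the first <p>
        print(soup.div)                       # <div class="text" id="div1">modified</div>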