Basic usage of XPath for Python web scraping

Reposted from: https://www.cnblogs.com/lei0213/p/7506130.html

1. Introduction

  XPath is a language for finding information in XML documents. It can be used to traverse elements and attributes in an XML document. XPath is a major component of the W3C XSLT standard, and both XQuery and XPointer are built on top of XPath expressions.


2. Installation

pip3 install lxml
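
A quick way to confirm the installation worked, as a minimal check:

from lxml import etree

# Prints the installed lxml version tuple if the import succeeds
print(etree.LXML_VERSION)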

 

3. Usage

  1) Import

from lxml import etree

  2) Basic usage

from lxml import etree

wb_data = """
        <div>
            <ul>
                 <li class="item-0"><a href="link1.html">first item</a></li>
                 <li class="item-1"><a href="link2.html">second item</a></li>
                 <li class="item-inactive"><a href="link3.html">third item</a></li>
                 <li class="item-1"><a href="link4.html">fourth item</a></li>
                 <li class="item-0"><a href="link5.html">fifth item</a>
             </ul>
         </div>
        """
html = etree.HTML(wb_data)
print(html)
result = etree.tostring(html)
print(result.decode("utf-8"))

  As the output below shows, the printed html is simply a Python object. etree.tostring(html) serializes it back to HTML and completes the missing tags along the way.

<Element html at 0x39e58f0>
<html><body><div>
            <ul>
                 <li class="item-0"><a href="link1.html">first item</a></li>
                 <li class="item-1"><a href="link2.html">second item</a></li>
                 <li class="item-inactive"><a href="link3.html">third item</a></li>
                 <li class="item-1"><a href="link4.html">fourth item</a></li>
                 <li class="item-0"><a href="link5.html">fifth item</a>
             </li></ul>
         </div>
        </body></html>

  3) Get the content of a tag (basic usage). Note: to get the full content of the a tags, do not add a trailing slash after a, otherwise an error is raised.

  Approach 1

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a')
print(html)
for i in html_data:
    print(i.text)


<Element html at 0x12fe4b8>
first item
second item
third item
fourth item
fifth item

  Approach 2 (simply append /text() to the tag whose content you want)

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a/text()')
print(html)
for i in html_data:
    print(i)

<Element html at 0x138e4b8>
first item
second item
third item
fourth item
fifth item

  4) Open and read an HTML file

# Use parse to open an HTML file
html = etree.parse('test.html')
html_data = html.xpath('//*')
# The result is a list, so it needs to be iterated
print(html_data)
for i in html_data:
    print(i.text)

  

html = etree.parse('test.html')
html_data = etree.tostring(html, pretty_print=True)
res = html_data.decode('utf-8')
print(res)

Prints:
<div>
     <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html">third item</a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
</div>
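
  One caveat: etree.parse expects well-formed XML by default, so parsing a messy real-world HTML file can fail. A small sketch of passing an explicit HTML parser instead (same test.html as above):

from lxml import etree

# HTMLParser tolerates unclosed or mismatched tags, unlike the default XML parser
parser = etree.HTMLParser()
html = etree.parse('test.html', parser)
print(etree.tostring(html, pretty_print=True).decode('utf-8'))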

  5) Print the attributes of the a tags under the specified path (by iterating over the results you can get an attribute's value or look up a tag's content)

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a/@href')
for i in html_data:
    print(i)

Prints:
link1.html
link2.html
link3.html
link4.html
link5.html

  6) As we have seen, xpath gives us back Element objects, so if we want the actual content we still need to iterate over the returned list.

  Find, via an absolute path, the content of the a tag whose href attribute equals link2.html.

html = etree.HTML(wb_data)
html_data = html.xpath('/html/body/div/ul/li/a[@href="link2.html"]/text()')
print(html_data)
for i in html_data:
    print(i)

Prints:
['second item']
second item

  7) All the queries above used absolute paths (each one starts from the root). Now let's use relative paths, for example to find the content of the a tags under every li tag.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a/text()')
print(html_data)
for i in html_data:
    print(i)

Prints:
['first item', 'second item', 'third item', 'fourth item', 'fifth item']
first item
second item
third item
fourth item
fifth item

  8) Above we used an absolute path (a single /) to get the href attribute values of all the a tags. Now let's do the same with a relative path: get the href attribute of the a tags under the li tags. Note that a double slash is used after the a tag here (//@href); a single slash (/@href) works just as well, as the sketch after the output shows.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a//@href')
print(html_data)
for i in html_data:
    print(i)

Prints:
['link1.html', 'link2.html', 'link3.html', 'link4.html', 'link5.html']
link1.html
link2.html
link3.html
link4.html
link5.html
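
  For comparison, a minimal sketch showing that the single-slash form selects the same href attributes on this document:

from lxml import etree

html = etree.HTML(wb_data)
# /@href selects the href attribute directly on each matched a tag
print(html.xpath('//li/a/@href'))
# ['link1.html', 'link2.html', 'link3.html', 'link4.html', 'link5.html']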

  9) Querying a specific attribute with a relative path works the same way as with an absolute path.

html = etree.HTML(wb_data)
html_data = html.xpath('//li/a[@href="link2.html"]')
print(html_data)
for i in html_data:
    print(i.text)

Prints:
[<Element a at 0x216e468>]
second item
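
  Because this query returns Element objects, their attributes can also be read directly with .get() or .attrib, for example:

from lxml import etree

html = etree.HTML(wb_data)
for a in html.xpath('//li/a[@href="link2.html"]'):
    print(a.text)         # second item
    print(a.get('href'))  # link2.html
    print(a.attrib)       # {'href': 'link2.html'}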

  10) Find the content of the a tag inside the last li tag

html = etree.HTML(wb_data)
html_data = html.xpath('//li[last()]/a/text()')
print(html_data)
for i in html_data:
    print(i)

Prints:
['fifth item']
fifth item

  11) Find the content of the a tag inside the second-to-last li tag

html = etree.HTML(wb_data)
html_data = html.xpath('//li[last()-1]/a/text()')
print(html_data)
for i in html_data:
    print(i)

Prints:
['fourth item']
fourth item
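
  Indexing from the front works the same way; a minimal sketch using a numeric index and position() (XPath positions start at 1):

from lxml import etree

html = etree.HTML(wb_data)
print(html.xpath('//li[1]/a/text()'))             # ['first item']
print(html.xpath('//li[position()<3]/a/text()'))  # ['first item', 'second item']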

  12) If you need the XPath of a particular tag on a live page, you can copy it from the browser's developer tools (right-click the element, then Copy > Copy XPath), which yields something like:

  //*[@id="kw"] 

  Explanation: this uses a relative path to match any tag whose id attribute equals "kw".
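
  As a minimal sketch, here is how such a copied expression could be used with lxml; the page fragment and its name attribute below are made up for illustration:

from lxml import etree

# Hypothetical page fragment standing in for a real page that contains id="kw"
page = '<form><input type="text" id="kw" name="wd"></form>'
doc = etree.HTML(page)
print(doc.xpath('//*[@id="kw"]/@name'))  # ['wd']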

 

 

Finally, the same XPath syntax can be used with Scrapy's Selector. The commented-out queries below show common patterns (attribute filters, contains, starts-with, re:test, extract and extract_first):
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse
html = """<!DOCTYPE html>
<html>
    <head lang="en">
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <ul>
            <li class="item-"><a id='i1' href="link.html">first item</a></li>
            <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
            <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
        </ul>
        <div><a href="llink2.html">second item</a></div>
    </body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)

# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)
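
For instance, running two of those queries looks like this (reusing the html string defined above, and assuming Scrapy is installed):

from scrapy.selector import Selector
from scrapy.http import HtmlResponse

response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
# contains() matches every a whose href contains "link"
print(Selector(response=response).xpath('//a[contains(@href, "link")]/@href').extract())
# ['link.html', 'llink.html', 'llink2.html', 'llink2.html']
print(Selector(response=response).xpath('//body/ul/li/a/@href').extract_first())
# link.html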