I had long struggled to find good music resources. Python crawlers are said to be remarkably powerful, and since I have a bit of Python background, I decided to write a script to scrape some mp3 files. Without further ado, let's look at the result first.
/home/roland/PycharmProjects/Main.py
please input the artist name: billie eilish

Process finished with exit code 0
After running the script you are prompted for the artist to crawl; type a favorite artist name and press Enter (Chinese artist names are supported). Nothing is printed to the log; instead the mp3 files are saved directly under the save_path directory, as shown in the figure below:
The Python version used here is 3.6. In theory any 3.x version will run the script as-is; there is no need to install an extra requests-style library, since only the standard urllib is used.
Code analysis
The request is a POST. Taking Chrome as an example, press F12 to open the developer console and follow the figure below to locate the Form Data; that is exactly what we need. Of course, not every request carries a data payload. The point of doing this is to imitate the way a browser accesses the page.
Next, add the required fields to the payload one by one. Here pages and content are dynamic variables (the original site loads results asynchronously via ajax), and content is the artist being searched for.
def resemble_data(content, index):
    data = {}
    data['types'] = 'search'
    data['count'] = '30'
    data['source'] = 'netease'
    data['pages'] = index
    data['name'] = content
    data = urllib.parse.urlencode(data).encode('utf-8')
    return data
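As a quick sanity check (this call is just an illustration, not part of the original script), printing the return value shows the urlencoded bytes that get POSTed; in CPython 3.6+ dicts preserve insertion order, so the payload should look roughly like this:

print(resemble_data('billie eilish', '1'))
# roughly: b'types=search&count=30&source=netease&pages=1&name=billie+eilish'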
In addition we need the User-Agent; the corresponding code is:
opener.addheaders = [('User-Agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36')]
Where do we get this value? We can simply copy it from the browser's developer console. The goal is to disguise Python's request as an ordinary browser visit.
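As an aside, the same header can also be attached to a single request through urllib.request.Request instead of a globally installed opener; the snippet below is only a minimal sketch of that alternative (the script itself sticks with the global opener):

import urllib.request

ua = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
# per-request header: no opener installation needed
req = urllib.request.Request('http://www.gequdaquan.net/gqss/api.php', headers={'User-Agent': ua})
response = urllib.request.urlopen(req)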
# set proxy agent
proxy_support = urllib.request.ProxyHandler({'http': '119.6.144.73:81'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
# set proxy agent
Here we also set a proxy IP to get around the server's anti-crawler mechanism (an IP that visits too frequently is likely to be treated as a crawler rather than a visitor). This is only an example; we could crawl proxy IP addresses and port numbers so that more IPs share the traffic, reducing the chance of being flagged as a crawler.
Below is an example of crawling proxy IPs (feel free to skip it if you are not interested):
import urllib.request
import urllib.parse
import re

url = 'http://31f.cn/'
head = {}
head['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
# attach the header so the request looks like a browser visit
request = urllib.request.Request(url, headers=head)
response = urllib.request.urlopen(request)
html_document = response.read().decode('utf-8')
pattern_ip = re.compile(r'<td>(\d+\.\d+\.\d+\.\d+)</td>[\s\S]*?<td>(\d{2,4})</td>')
ip_list = pattern_ip.findall(html_document)
print(len(ip_list))
for item in ip_list:
    print("IP address: %s  port: %s" % (item[0], item[1]))
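Building on that, here is a minimal sketch of how the crawled (ip, port) pairs could be rotated across requests; the helper name and the random selection are my own assumptions for illustration, not part of the original script:

import random
import urllib.request

def open_with_random_proxy(target_url, ip_list):
    # pick one of the crawled proxies at random and route this request through it
    ip, port = random.choice(ip_list)
    proxy_support = urllib.request.ProxyHandler({'http': '%s:%s' % (ip, port)})
    opener = urllib.request.build_opener(proxy_support)
    return opener.open(target_url)

Free proxies die quickly, so real code would need to retry with another entry whenever one fails.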
Back to the music download: the response here actually returns a link to the music file, in a format something like xxxxuuid.mp3. We rename the default uuid.mp3 to the song's name followed by .mp3, then write the file to disk in binary mode.
data = {}
data['types'] = 'url'
data['id'] = id
data['source'] = 'netease'
data = urllib.parse.urlencode(data).encode('utf-8')
response = urllib.request.urlopen(url, data)
music_url_str = response.read().decode('utf-8')
music_url = pattern.findall(music_url_str)
result = urllib.request.urlopen(music_url[0])
file = open(save_path + name + '.mp3', 'wb')
file.write(result.read())
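To make the regex step concrete, here is a hypothetical response body (the exact shape of the API's JSONP reply is an assumption for illustration only) and how the pattern http.*?mp3 pulls the link out of it:

import re

# hypothetical JSONP reply; the field names and host are made up for the example
sample = 'jQuery111307210973120745481_1533280033798({"url": "http://example.com/xxxx-uuid.mp3", "br": 320000})'
pattern = re.compile('http.*?mp3')
print(pattern.findall(sample))  # ['http://example.com/xxxx-uuid.mp3']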
As for the Request URL, it can be obtained here (of course, this is only an example; this URL is not the one the example actually uses):
Below is the complete code. Just change the music save path save_path = '/home/roland/Spider/Img/' to your own path and it will work.
import urllib.request
import urllib.parse
import json
import re


def resemble_data(content, index):
    # form data for one page of search results (mirrors the Form Data seen in the browser)
    data = {}
    data['types'] = 'search'
    data['count'] = '30'
    data['source'] = 'netease'
    data['pages'] = index
    data['name'] = content
    data = urllib.parse.urlencode(data).encode('utf-8')
    return data


def request_music(url, content):
    # set proxy agent
    proxy_support = urllib.request.ProxyHandler({'http': '119.6.144.73:81'})
    opener = urllib.request.build_opener(proxy_support)
    opener.addheaders = [('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36')]
    urllib.request.install_opener(opener)
    # set proxy agent
    total = []
    # capture the JSON array inside the jQuery...(...) JSONP wrapper
    pattern = re.compile(r'\(([\s\S]*)\)')
    for i in range(1, 10):
        data = resemble_data(content, str(i))
        response = urllib.request.urlopen(url, data)
        result = response.read().decode('unicode_escape')
        json_result = pattern.findall(result)
        total.append(json_result)
    return total


def save_music_file(id, name):
    save_path = '/home/roland/Spider/Img/'
    pattern = re.compile('http.*?mp3')
    url = 'http://www.gequdaquan.net/gqss/api.php?callback=jQuery111307210973120745481_1533280033798'
    # ask the API for the track's real mp3 link
    data = {}
    data['types'] = 'url'
    data['id'] = id
    data['source'] = 'netease'
    data = urllib.parse.urlencode(data).encode('utf-8')
    response = urllib.request.urlopen(url, data)
    music_url_str = response.read().decode('utf-8')
    music_url = pattern.findall(music_url_str)
    # download the mp3 and save it under the song's name
    result = urllib.request.urlopen(music_url[0])
    file = open(save_path + name + '.mp3', 'wb')
    file.write(result.read())
    file.flush()
    file.close()


def main():
    url = 'http://www.gequdaquan.net/gqss/api.php?callback=jQuery11130967955054499249_1533275477385'
    content = input('please input the artist name:')
    result = request_music(url, content)
    for group in result[0]:
        target = json.loads(group)
        for item in target:
            save_music_file(str(item['id']), str(item['name']))


main()
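One caveat about the complete code: if the API returns no playable link for a track, music_url[0] raises an IndexError and the whole run stops. A small defensive tweak (my addition, not part of the original) is to wrap the per-song call inside main() in a try/except so a bad track is simply skipped:

        for item in target:
            try:
                save_music_file(str(item['id']), str(item['name']))
            except Exception as e:
                # skip tracks whose download link cannot be resolved
                print('skip %s: %s' % (item['name'], e))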