A small Baidu Images crawler

I've just started learning web scraping, so I wrote a Baidu Images crawler as practice.

Environment: Python 3.6 (install the third-party library requests first).

What it does: given a keyword, it downloads 240 images related to that keyword from Baidu Images into the local folder d:\百度图片\<keyword>\.

Baidu Images loads results via asynchronous Ajax requests: apart from the first batch of images, everything loaded as you scroll down is fetched asynchronously from the server. The information for these images can be found in Baidu's acjson interface; the requests show up under XHR in the browser's developer tools.
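Each scroll triggers one such acjson request, where `pn` is the offset of the first result in the batch and each batch holds 30 images. A minimal sketch of the paging scheme, using the parameter names visible in the developer tools:

```python
# Sketch of the acjson paging scheme: each pull-down loads one batch
# of 30 results, and "pn" is the offset of the first result in that
# batch. Parameter names come from the XHR requests visible in the
# browser's developer tools.
def acjson_params(keyword, offset):
    return {
        "ipn": "rj",
        "tn": "resultjson_com",
        "word": keyword,
        "pn": str(offset),
    }

# Eight batches of 30 cover offsets 30..240, i.e. 240 images in total.
batches = [acjson_params("cat", pn) for pn in range(30, 270, 30)]
print(len(batches) * 30)  # 240
```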

Here is the code:

import requests
import re
import os


def get_page_url(url, param):
    response = requests.get(url, params=param)
    response.encoding = 'utf-8'
    return response.text


def parse_page(text):
    # Match image URLs with a regular expression
    pattern = re.compile('"middleURL":"(.*?)",')
    url_list = re.findall(pattern, text)
    return url_list


def run(keyword, path):
    url = "https://image.baidu.com/search/acjson"
    i = 0
    for j in range(30, 270, 30):
        params = {"ipn": "rj", "tn": "resultjson_com",
                  "word": keyword, "pn": str(j)}
        html = get_page_url(url, params)
        lists = parse_page(html)
        print(lists)
        for item in lists:
            try:
                img_data = requests.get(item, timeout=10).content
                with open(path + "/" + str(i) + ".jpg", "wb") as f:
                    f.write(img_data)
                i = i + 1
            except requests.exceptions.ConnectionError:
                print('can not download')
                continue


def make_dir(keyword):
    path = "D:/百度图片/" + keyword
    if not os.path.exists(path):
        os.makedirs(path)
    else:
        print(path + ' already exists')
    return path


def main():
    keyword = input("input keyword about images you want to download: ")
    path = make_dir(keyword)
    run(keyword, path)


if __name__ == '__main__':
    main()
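Since the acjson response is JSON, a more robust alternative to the regular expression is to parse the body with the json module (or `response.json()`) and read the `middleURL` field directly. This is a sketch under the assumption that the image objects live in a top-level `data` array, as observed in the developer tools; the real response occasionally contains escape sequences that are not valid JSON, so keeping the regex as a fallback can still be useful:

```python
import json


def extract_middle_urls(body):
    # Assumes the acjson body is JSON with a top-level "data" list whose
    # entries may carry a "middleURL" field (the field the regex targets).
    data = json.loads(body)
    return [item["middleURL"]
            for item in data.get("data", [])
            if item.get("middleURL")]


# A toy body mimicking the assumed response shape:
sample = '{"data": [{"middleURL": "https://example.com/a.jpg"}, {}]}'
print(extract_middle_urls(sample))  # ['https://example.com/a.jpg']
```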