I originally planned to write a crawler for Weibo follower lists, to count how many zombie followers (paid shills) and big-V accounts a given Weibo account has.
Only after finishing the crawler did I discover that Weibo now shows just the first 5 pages of a follower list... haha, fine. It's frustrating — similarly, Taobao only exposes the first 100 pages of reviews.
Here is the crawler code:
```python
import requests
import re

tmpt_url = 'https://weibo.com/p/1005051678105910/follow?page=%d#Pl_Official_HisRelation__59'

def get_data(tmpt_url):
    urllist = [tmpt_url % i for i in range(1, 6)]  # only the first 5 pages are visible
    user_id = []       # follower IDs
    user_name = []     # follower names
    user_follow = []   # how many accounts each follower follows
    user_fans = []     # each follower's own follower count
    user_address = []  # each follower's location
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Connection': 'keep-alive',
        'Cookie': 'copy this from your own logged-in browser; omitted here for privacy',
        'Host': 'weibo.com',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0',
    }
    for url in urllist:
        html = requests.get(url, headers=headers).text
        # The follower list is embedded as escaped HTML inside a script block,
        # hence the doubled backslashes in the patterns.
        user_id.extend(re.findall(r'<a class=\\"S_txt1\\" target=\\"_blank\\" usercard=\\"id=(\d+)&refer_flag=\d+_\\" href=\\"\\/\S+\?refer_flag=\d+_\\" >\S+<\\/a>', html))
        user_name.extend(re.findall(r'<a class=\\"S_txt1\\" target=\\"_blank\\" usercard=\\"id=\d+&refer_flag=\d+_\\" href=\\"\\/\S+\?refer_flag=\d+_\\" >(\S+)<\\/a>', html))
        user_follow.extend(re.findall(r'关注 <em class=\\"count\\"><a target=\\"_blank\\" href=\\"\\/\d+\\/follow\\" >(\d+)<\\/a>', html))
        user_fans.extend(re.findall(r'粉丝<em class=\\"count\\"><a target=\\"_blank\\" href=\\"\\/\d+\\/fans\?current=fans\\" >(\d+)<\\/a>', html))
        user_address.extend(re.findall(r'<em class=\\"tit S_txt2\\">地址<\\/em><span>(\S+\s?\S+?)<\\/span>\\r\\n\\t\\t\\t\\t\\t<\\/div>', html))
    print('user_id', user_id)
    print('user_name', user_name)
    print('user_follow', user_follow)
    print('user_fans', user_fans)
    print('user_address', user_address)

get_data(tmpt_url)
```
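Once the five parallel lists are filled, it is convenient to zip them into one record per follower. This is a minimal sketch of my own, not part of the original script; note the count fields come back from the regexes as strings, so they are converted to `int` here:

```python
def build_records(user_id, user_name, user_follow, user_fans, user_address):
    """Combine the five parallel lists into one dict per follower.

    Assumes the lists are aligned index-by-index, i.e. each regex
    matched once per follower on every page.
    """
    records = []
    for uid, name, follow, fans, addr in zip(
            user_id, user_name, user_follow, user_fans, user_address):
        records.append({
            'id': uid,
            'name': name,
            'follow': int(follow),  # regex captures are strings
            'fans': int(fans),
            'address': addr,
        })
    return records
```

If any one regex misses a match on some profile card, the lists fall out of alignment, so in practice it is safer to parse each follower's card as a unit rather than run five independent regexes over the whole page.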
The URL above is Sun Li's Weibo account.
Below is the information crawled from the first 5 pages of her follower list: follower ID, follower name, each follower's following count, each follower's own follower count, and each follower's location.
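Even with only 5 pages of data, the original goal — estimating the share of zombie followers and big Vs — can be sketched with a simple heuristic on the extracted counts. The thresholds below are arbitrary guesses of mine, not anything Weibo publishes: an account that follows many people but is followed by almost no one is a likely zombie, while one with a very large follower count is a likely big V.

```python
def classify_follower(follow_count, fans_count,
                      zombie_max_fans=5, zombie_min_follow=100,
                      big_v_min_fans=100_000):
    """Crudely classify one follower account.

    Thresholds are illustrative assumptions, not official criteria:
    - 'zombie': follows many accounts but has almost no followers
    - 'big_v':  has a very large follower count
    - 'normal': everything else
    """
    if fans_count <= zombie_max_fans and follow_count >= zombie_min_follow:
        return 'zombie'
    if fans_count >= big_v_min_fans:
        return 'big_v'
    return 'normal'
```

Running this over the `user_follow`/`user_fans` pairs scraped above would give a rough breakdown, though with only 5 pages (about 100 followers) the sample is far too small to say anything about the full follower base.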