A Web Crawler Project

   A simple crawler for the articles in the Jianshu collection "@IT·互联网" (IT & Internet): after scraping, the text is segmented with jieba to generate a word cloud for analysis.

 

2. Implementation:

   Step 1: open Jianshu and go to the @IT·互联网 collection.

   The collection URL: https://www.jianshu.com/c/V2CqjW?utm_medium=index-collections&utm_source=desktopapp

   Looking at the page, we can see that the articles are not split into numbered pages; instead, the next batch is loaded by JavaScript as you scroll down.

   In the browser developer tools we can see that every time we scroll to the bottom of the page, one more request is fired, and these requests follow a clear pattern.

  They all have the form https://www.jianshu.com/c/V2CqjW?order_by=added_at&page={}; only the number in the page parameter changes. To confirm that this is the page number we need, we can visit one of these links directly in the browser.

  Now that we have the URL pattern, we can write a loop over the pages.
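For example, list-page URLs can be generated from the observed pattern like this (just a sketch of the loop; the real page count is computed further down):

```python
# build list-page URLs from the observed pattern
base = 'https://www.jianshu.com/c/V2CqjW?order_by=added_at&page={}'
urls = [base.format(i) for i in range(1, 4)]
print(urls[0])  # https://www.jianshu.com/c/V2CqjW?order_by=added_at&page=1
```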

  However, we don't know how many pages there are. Looking at the page and its source, we can see that the collection header shows how many articles it has collected, so we only need to divide the total article count by the number of articles per list page to get the total page count. The count is easy to locate in the page source.

  We can then write the following code to get the number of pages.

  Note: because of Jianshu's anti-scraping checks, a bare requests.get(url) call will not return the page source, so we have to send a browser User-Agent along with the request:

headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
    }
def getPageN():
    # landing page of the collection; its header shows the total article count
    url = 'https://www.jianshu.com/c/V2CqjW?utm_medium=index-collections&utm_source=desktop'
    resp = requests.get(url, headers=headers)
    html_content = resp.text                     # page HTML
    soup = BeautifulSoup(html_content, 'lxml')   # parse it
    info = soup.select('.info')[0].text
    # pull the number out of the '收录了N篇文章' phrase
    pagenumber = int(info[info.find('收录了'):].split()[0].lstrip('收录了').rstrip('篇文章'))
    a = len(soup.find_all('a', class_='title'))  # articles per list page
    page = pagenumber // a + 1                   # approximate page count
    return page
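As a side note, lstrip/rstrip strip character *sets* rather than prefixes, so the parsing above is fragile. A more robust sketch, assuming the .info text contains a phrase like "收录了2000篇文章" (the sample text and the per-page count of 9 below are made up for illustration), extracts the count with a regex and uses true ceiling division:

```python
import math
import re

info = '... 收录了2000篇文章 ...'            # hypothetical .info text
m = re.search(r'收录了(\d+)篇文章', info)    # capture the article count
total = int(m.group(1))
per_page = 9                                 # hypothetical articles per list page
pages = math.ceil(total / per_page)          # 2000 / 9 rounds up to 223
print(total, pages)
```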

 

  Step 2: extract all the articles from one list page.

  From the page source we can see that each article's link is an <a class="title"> element whose href holds the article's relative path.

 Looping over these elements then gives us the links to every article on the page.

 

def getListPage(pageUrl):
    res = requests.get(pageUrl, headers=headers)
    html_content = res.text
    soup = BeautifulSoup(html_content, 'lxml')

    newslist = []
    for a in soup.find_all('a', class_='title'):   # one <a class="title"> per article
        newsUrl = "https://www.jianshu.com" + a.attrs['href']
        newslist.append(getNewsDetail(newsUrl))

    return newslist
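String concatenation works here because the hrefs are absolute paths on the same site; urllib.parse.urljoin is a slightly safer way to join a site root with an href (the path /p/abc123 below is a made-up example):

```python
from urllib.parse import urljoin

# joins a base URL with a relative or absolute-path href
print(urljoin('https://www.jianshu.com', '/p/abc123'))
# https://www.jianshu.com/p/abc123
```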

  Step 3: fetch an article's full content and parse out its fields.

def getNewsDetail(newsUrl):   # full details of one article
    resd = requests.get(newsUrl, headers=headers)
    html_content = resd.text
    soupd = BeautifulSoup(html_content, 'lxml')

    news = {}
    news['标题'] = soupd.select('.title')[0].text                   # title
    news['作者'] = soupd.select('.name')[0].text                    # author
    news['时间'] = datetime.strptime(soupd.select('.publish-time')[0].text.rstrip('*'), '%Y.%m.%d %H:%M')
    news['字数'] = soupd.select('.wordage')[0].text.lstrip('字数 ')  # word count
    # news['内容'] = soupd.select('.show-content-free')[0].text.strip()
    news['链接'] = newsUrl                                          # article URL
    content = soupd.select('.show-content-free')[0].text.strip()
    writeNewsDetail(content)
    return news
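The publish-time parsing strips a trailing "*" and then parses a "%Y.%m.%d %H:%M" timestamp; with a made-up sample string it works like this:

```python
from datetime import datetime

raw = '2018.04.30 12:30*'                    # hypothetical .publish-time text
t = datetime.strptime(raw.rstrip('*'), '%Y.%m.%d %H:%M')
print(t)  # 2018-04-30 12:30:00
```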

  At this point, the basic scraping work is done.

3. Saving the data as a text file:

def writeNewsDetail(content):
    f = open('content.txt', 'a', encoding='utf-8')   # append each article's text
    f.write(content)
    f.close()
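An equivalent sketch using a with statement, which closes the file even if the write raises an exception (the function name and the temp-file path here are just for the demo):

```python
import os
import tempfile

def write_news_detail(content, path):
    # append-mode write; the file is closed automatically
    with open(path, 'a', encoding='utf-8') as f:
        f.write(content)

path = os.path.join(tempfile.gettempdir(), 'content_demo.txt')
open(path, 'w', encoding='utf-8').close()    # start from an empty file
write_news_detail('第一篇', path)
write_news_detail('第二篇', path)
with open(path, encoding='utf-8') as f:
    print(f.read())                          # 第一篇第二篇
```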

And generating an Excel spreadsheet:

import pandas
df = pandas.DataFrame(newstotal)
df.to_excel('简书数据.xlsx')
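newstotal is a list of dicts, one per article, so pandas turns it straight into a table whose columns are the dict keys (the two rows below are made-up examples in the same shape getNewsDetail returns):

```python
import pandas

newstotal = [
    {'标题': '文章A', '字数': '1200', '链接': 'https://www.jianshu.com/p/aaa'},
    {'标题': '文章B', '字数': '800', '链接': 'https://www.jianshu.com/p/bbb'},
]
df = pandas.DataFrame(newstotal)         # columns come from the dict keys
print(list(df.columns))                  # ['标题', '字数', '链接']
# df.to_excel('简书数据.xlsx', index=False) would then write the spreadsheet
```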

 

4. Generating the word cloud:

file = codecs.open('content.txt', 'r', 'utf-8')
image = np.array(Image.open('ditu.jpg'))               # mask image for the cloud's shape
font = r'C:\Windows\Fonts\AdobeHeitiStd-Regular.otf'   # a font with Chinese glyphs
word = file.read()
# strip English letters, digits, and punctuation, keeping the Chinese text
resultword = re.sub("[A-Za-z0-9\[\`\~\!\@\#\$\^\&\*\(\)\=\|\{\}\'\:\;\'\,\[\]\.\<\>\/\?\~\!\@\#\\\&\*\%]", "", word)
wordlist_after_jieba = jieba.cut(resultword, cut_all=True)

wl_space_split = " ".join(wordlist_after_jieba)

# set the stop words
stopwords = set(STOPWORDS)
stopwords.add("一个")
my_wordcloud = WordCloud(font_path=font, mask=image, stopwords=stopwords,
                         background_color='white', max_words=2000,
                         max_font_size=100, random_state=50).generate(wl_space_split)
# color the cloud from the mask image
image_colors = ImageColorGenerator(image)
# my_wordcloud.recolor(color_func=image_colors)
# display the generated cloud
plt.imshow(my_wordcloud)
plt.axis("off")
plt.show()
# save the image; this only runs once the plot window is closed,
# so interrupting the program will not save the file
my_wordcloud.to_file('result.jpg')
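The long character class above enumerates the ASCII letters, digits, and punctuation to delete; an alternative sketch keeps only CJK characters and whitespace by matching everything else (the sample string is made up):

```python
import re

text = 'Python3 爬虫, 生成词云!'
# delete anything that is not a CJK character or whitespace
cleaned = re.sub(r'[^\u4e00-\u9fff\s]', '', text).strip()
print(cleaned)   # 爬虫 生成词云
```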

The generated word cloud image:

 

 Full code listing:

import re
import requests
import pandas
from bs4 import BeautifulSoup 
from datetime import datetime
import jieba
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
import codecs
import numpy as np
from PIL import Image

headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
    }

def writeNewsDetail(content):
    f = open('content.txt', 'a', encoding='utf-8')   # append each article's text
    f.write(content)
    f.close()

def getNewsDetail(newsUrl):   # full details of one article
    resd = requests.get(newsUrl, headers=headers)
    html_content = resd.text
    soupd = BeautifulSoup(html_content, 'lxml')

    news = {}
    news['标题'] = soupd.select('.title')[0].text                   # title
    news['作者'] = soupd.select('.name')[0].text                    # author
    news['时间'] = datetime.strptime(soupd.select('.publish-time')[0].text.rstrip('*'), '%Y.%m.%d %H:%M')
    news['字数'] = soupd.select('.wordage')[0].text.lstrip('字数 ')  # word count
    # news['内容'] = soupd.select('.show-content-free')[0].text.strip()
    news['链接'] = newsUrl                                          # article URL
    content = soupd.select('.show-content-free')[0].text.strip()
    writeNewsDetail(content)
    return news

def getListPage(pageUrl):
    res = requests.get(pageUrl, headers=headers)
    html_content = res.text
    soup = BeautifulSoup(html_content, 'lxml')

    newslist = []
    for a in soup.find_all('a', class_='title'):   # one <a class="title"> per article
        newsUrl = "https://www.jianshu.com" + a.attrs['href']
        newslist.append(getNewsDetail(newsUrl))

    return newslist


def getPageN():
    # landing page of the collection; its header shows the total article count
    url = 'https://www.jianshu.com/c/V2CqjW?utm_medium=index-collections&utm_source=desktop'
    resp = requests.get(url, headers=headers)
    html_content = resp.text                     # page HTML
    soup = BeautifulSoup(html_content, 'lxml')   # parse it
    info = soup.select('.info')[0].text
    # pull the number out of the '收录了N篇文章' phrase
    pagenumber = int(info[info.find('收录了'):].split()[0].lstrip('收录了').rstrip('篇文章'))
    a = len(soup.find_all('a', class_='title'))  # articles per list page
    page = pagenumber // a + 1                   # approximate page count
    return page

newstotal = []
firstPageUrl = 'https://www.jianshu.com/c/V2CqjW?utm_medium=index-collections&utm_source=desktop'
newstotal.extend(getListPage(firstPageUrl))
n = getPageN()             # use the computed page count rather than a hard-coded limit
for i in range(2, n + 1):
    listPageUrl = 'https://www.jianshu.com/c/V2CqjW?order_by=added_at&page={}'.format(i)
    newstotal.extend(getListPage(listPageUrl))

df = pandas.DataFrame(newstotal)
df.to_excel('简书数据.xlsx')

file = codecs.open('content.txt', 'r', 'utf-8')
image = np.array(Image.open('ditu.jpg'))               # mask image for the cloud's shape
font = r'C:\Windows\Fonts\AdobeHeitiStd-Regular.otf'   # a font with Chinese glyphs
word = file.read()
# strip English letters, digits, and punctuation, keeping the Chinese text
resultword = re.sub("[A-Za-z0-9\[\`\~\!\@\#\$\^\&\*\(\)\=\|\{\}\'\:\;\'\,\[\]\.\<\>\/\?\~\!\@\#\\\&\*\%]", "", word)
wordlist_after_jieba = jieba.cut(resultword, cut_all=True)

wl_space_split = " ".join(wordlist_after_jieba)

# set the stop words
stopwords = set(STOPWORDS)
stopwords.add("一个")
my_wordcloud = WordCloud(font_path=font, mask=image, stopwords=stopwords,
                         background_color='white', max_words=2000,
                         max_font_size=100, random_state=50).generate(wl_space_split)
# color the cloud from the mask image
image_colors = ImageColorGenerator(image)
# my_wordcloud.recolor(color_func=image_colors)
# display the generated cloud
plt.imshow(my_wordcloud)
plt.axis("off")
plt.show()
# save the image; this only runs once the plot window is closed,
# so interrupting the program will not save the file
my_wordcloud.to_file('result.jpg')