Inspect the page source
import requests
import lxml.html

DOWNLOAD_URL = 'http://movie.douban.com/top250/'

html = requests.get(DOWNLOAD_URL).text
tree = lxml.html.fromstring(html)
Inspect the site's HTML structure.
We can see that every movie on the page sits under an ol tag, and each li tag holds the content for one movie.
Use an XPath expression to grab that ol tag:
movies = tree.xpath("//ol[@class='grid_view']/li")
Loop over each li inside the ol to pull out the information for a single movie.
Take the movie title as an example:
for movie in movies:
    # both <span class="title"> nodes under this li: the Chinese title
    # and, when present, the original title
    titles = movie.xpath("descendant::span[@class='title']")
    name = ''
    for span in titles:
        name += span.text.strip()
    name = ' '.join(name.replace('/', '').split())  # clean up the data
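The cleaning step above can be isolated as a small pure function, which makes it easy to test without a live page. The name `clean_name` and the sample strings are my own, not from the original source:

```python
def clean_name(title_texts):
    """Join the texts of the <span class="title"> nodes and normalize them,
    mirroring the cleaning above: strip each part, drop '/', and collapse
    runs of whitespace into single spaces."""
    name = ''.join(t.strip() for t in title_texts)
    return ' '.join(name.replace('/', '').split())
```

For instance, the two title spans of a Top 250 entry such as `['肖申克的救赎', ' / The Shawshank Redemption']` collapse into a single space-separated name with the slash removed.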
The other fields are extracted the same way; see the full source.
Check the “后页” (next page) link to jump to the next page:
next_page = DOWNLOAD_URL + tree.xpath("//span[@class='next']/a/@href")[0]
If it returns None, all pages have been fetched.
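The “returns None” behaviour implies the lookup is wrapped in a small helper. A minimal sketch, assuming the function name and taking the href list produced by the XPath above, so that the last page (where the list is empty) yields None instead of raising an IndexError:

```python
DOWNLOAD_URL = 'http://movie.douban.com/top250/'

def next_page_url(hrefs):
    """hrefs is the result of tree.xpath("//span[@class='next']/a/@href").
    Return the absolute URL of the next page, or None on the last page,
    where the '后页' link carries no href and the list is empty."""
    if not hrefs:
        return None
    return DOWNLOAD_URL + hrefs[0]
```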
Create the CSV file:
import csv

writer = csv.writer(open('movies.csv', 'w', newline='', encoding='utf-8'))
fields = ('rank', 'name', 'score', 'country', 'year',
          'category', 'votes', 'douban_url')
writer.writerow(fields)
The rest is covered in the full source.
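Putting the header and the parsed rows together, the CSV step can be sketched as one helper. The function name and the with-block are my own additions; the original keeps a writer open across pages instead:

```python
import csv

FIELDS = ('rank', 'name', 'score', 'country', 'year',
          'category', 'votes', 'douban_url')

def save_csv(path, rows):
    """Write the header plus one row per movie; each row is a tuple
    in the same order as FIELDS."""
    with open(path, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        writer.writerows(rows)
```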
import pymysql

db = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                     passwd=PWD, db='douban', charset='utf8')
cur = db.cursor()

sql = "INSERT INTO test(rank, NAME, score, country, year, " \
      "category, votes, douban_url) values(%s,%s,%s,%s,%s,%s,%s,%s)"
try:
    cur.executemany(sql, movies_info)
    db.commit()
except Exception as e:
    print("Error:", e)
    db.rollback()
All of the above fits in under 80 lines of Python. Pretty simple, right? (`・ω・´)