A few days ago my post "My First Encounter with Scrapy: Theory + Hands-on Scrapy Primer" even made the front-page recommendations, yet its view count still hasn't broken a hundred. Rough.
Probably it just wasn't written well. Still, if even one person reads it, that's encouragement. Keep writing, keep accumulating, and one day become a pro. Cheering myself on.
Back to business. This time we'll use Scrapy to crawl the essence of Zhihu's Python topic, right down to the answerers' avatars. Nothing gets left behind, not a blade of grass.
Honestly, I have no idea what to do with the avatars once they're downloaded; the real goal is to practice the following two things:
- Crawling images with Scrapy and renaming them (i.e. overriding ImagesPipeline)
- Handling multiple Items with multiple Pipelines in Scrapy
Getting to the Zhihu topic from Baidu is a winding road, so just use my direct link: https://www.zhihu.com/topic/19552832/top-answers
Write code? Not yet. First create a Scrapy project: once the skeleton is in place, we can build on it. Run these commands in a terminal one by one.
# create the project
scrapy startproject zhihu
# enter the project directory
cd zhihu
# create the spider file: python_zhihu is the spider name, zhihu.com the allowed crawl domain
scrapy genspider python_zhihu zhihu.com
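Those commands should leave you with the standard Scrapy skeleton, roughly:

zhihu/
    scrapy.cfg
    zhihu/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            python_zhihu.py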
1. Analyze the page and find where the data lives
1.1 Is the data loaded asynchronously?
Data either sits in the HTML itself or is loaded asynchronously. This topic's essence page has no pagination bar, and the URL doesn't change as you scroll down to load more, so I concluded the page loads asynchronously. Open the developer tools and look through the XHR requests.
1.2 What does the data's structure look like?
Right after refreshing, the requests that show up carry nothing useful, but as the page scrolls down and loads more, a useful one appears: as shown below, it returns exactly the articles' data in JSON format. This is the Preview tab, which makes it easy to inspect the structure of the returned data.
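For reference, each useful entry in that response is shaped roughly like this (a trimmed sketch covering only the fields we end up using; the real response carries many more):

{
    "data": [
        {
            "target": {
                "id": ...,
                "title": "...",
                "url": "...",
                "content": "...",
                "voteup_count": ...,
                "comment_count": ...,
                "author": {
                    "name": "...",
                    "user_type": "people",
                    "url_token": "...",
                    "avatar_url": "..."
                }
            }
        }
    ]
}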
At this point we can write the Item code: analyze the JSON data and define the fields accordingly.
I won't paste the code here, just a screenshot; this code really has to be typed yourself for it to stick. If anything is unclear, feel free to discuss it, and if you still can't get it working (there's bound to be some small detail off), I'll post the source.
One thing to stress: in the avatar Item, image_urls and images are the default field names of Scrapy's built-in images pipeline, so avoid renaming them. The extra field image_name is only there so we can override ImagesPipeline and rename the downloaded files; more on that later.
1.3 And the request URL?
Knowing the JSON above gets us one step closer to crawling; now we need to know how to request that data. Check the request's Headers tab; screenshot below.
Did you spot the letters "api" in the URL? Calling the API directly is the most convenient way to fetch the data: the offset and limit parameters at the end of the URL control which page is returned and how many entries it holds.
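To make that concrete, a minimal sketch (api_url is just shorthand here; the real essence URL's long include parameter is elided as ...):

api_url = ('https://www.zhihu.com/api/v4/topics/19552832/feeds/essence'
           '?include=...&offset={}&limit=10')
first_page = api_url.format(0)    # entries 1-10
second_page = api_url.format(10)  # entries 11-20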
2. Build the Spider and crawl the data
2.1 Parse the response and extract the data
Unlike a hand-rolled crawler, Scrapy's downloader has already fetched the page for us; all that's left is to parse the response.
There are two try-except blocks here (sketched below), mainly to solve the following two problems.
Problem 1: not every entry in the returned JSON is an article. I didn't dig deeper, so I'm not sure where the unrelated entries come from.
Problem 2: some users answered anonymously or have since deleted their accounts, so there is no profile link; during the crawl those get set to '未知用戶' (unknown user).
Since each JSON response isn't purely article content, this logic at least guarantees that every entry that really is an article gets crawled.
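Reduced to just those two try-except blocks, the skeleton of parse looks like this (the full spider with every field is at the end of the post):

def parse(self, response):
    for data in json.loads(response.body)['data']:
        try:
            item = ZhihuItem()
            # Problem 1: non-article entries are missing these keys and
            # fall through to the outer except below
            item['title'] = data['target']['title']
            try:
                item['author_url'] = ('https://www.zhihu.com/'
                                      + data['target']['author']['user_type']
                                      + '/' + data['target']['author']['url_token'])
            except Exception:
                # Problem 2: anonymous or deleted account, no profile link
                item['author_url'] = '未知用戶'
            yield item
        except Exception:
            # not an article; skip it
            pass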
2.2 Problems found while testing
A quick aside: to run an initial test, I had already configured the pipelines and settings files (I won't go into the test itself; the point here is the problems that surfaced).
Crawling the text fields went off without a hitch, but the built-in ImagesPipeline threw the following two errors, both solved below.
Problem 1: ModuleNotFoundError: No module named 'scrapy.contrib'
Solution:
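scrapy.contrib was deprecated in Scrapy 1.0 and removed in later releases, so import the images pipeline from its current location:

# old path, gone from modern Scrapy:
# from scrapy.contrib.pipeline.images import ImagesPipeline
# current path (the one used in the Pipelines source below):
from scrapy.pipelines.images import ImagesPipeline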
Problem 2: raise ValueError('Missing scheme in request url: %s' % self._url)
Solution:
The avatar URL originally came back as a plain String (as in the figure below), but image_urls requires a list, so convert it.
Change it to:
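This is the fix as it ends up in the spider:

# before: a bare string makes the images pipeline iterate it character by
# character, and a lone character has no URL scheme, hence the error
# imageItem['image_urls'] = data['target']['author']['avatar_url']
# after: wrap the single URL in a list
imageItem['image_urls'] = [data['target']['author']['avatar_url']]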
Also, once the images come down there's one more flaw: Scrapy's built-in images pipeline names each downloaded file after the SHA1 hash of its URL, which isn't pretty, so we override its ImagesPipeline to change the file names.
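The renaming itself boils down to overriding file_path; the full version appears in the Pipelines source at the end:

class UserImagePipeline(ImagesPipeline):
    def file_path(self, request, response=None, info=None):
        # name the file after the author carried in request.meta,
        # instead of the default SHA1 hash of the URL
        return request.meta['name'] + '.jpg'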
2.3 Configuring pipelines and settings
The previous step detoured into testing; this one gets back on track and walks through the pipelines and settings configuration, so nobody is left confused.
The pipelines file defines two pipelines, each handling one of the two items in Items.
Also, Scrapy hands every item to every pipeline indiscriminately, so a pipeline can't tell which items it is supposed to process, and running it as-is raises errors. After receiving an item, each pipeline therefore has to check it with isinstance() to see whether the type matches, as sketched below.
Worth mentioning: the function names inside these pipelines are Scrapy's defaults; we are only overriding them, so the names must not be changed.
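In miniature, every handler starts with that type check (trimmed from the full Pipelines source below):

def process_item(self, item, spider):
    if isinstance(item, ZhihuItem):
        ...  # only the text item is handled here
    # hand the item on either way so the other pipeline still sees it
    return item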
Once the settings file is configured, the project can truly run end to end.
The integer after each pipeline in ITEM_PIPELINES is its priority, or distance if you like: the smaller the value, the shorter the distance and the earlier it runs.
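Concretely, from the settings file at the end:

ITEM_PIPELINES = {
    'zhihu.pipelines.ZhihuPipeline': 300,     # larger value, runs second
    'zhihu.pipelines.UserImagePipeline': 10,  # smaller value, runs first
}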
2.4 How do we request the next page?
With the items, spider, pipelines, and settings files configured, the project is complete, but so far it only crawls a single page of data. So we still need to add code to the spider's parse function that requests the following pages.
Here's a glimpse of the final result, just to put everyone at ease: this code really does run.
And now, the self-amusement segment.
Q: Seriously, no source code? Your post is all screenshots; that's hard to borrow from.
A: Seriously, not posting it. The annotations in the screenshots are perfectly clear; typing it yourself leaves a deeper impression, and studying the screenshots while thinking it through teaches you more.
Q: Admit it, you do want to post the source; why else plant such huge bold text here?
A: Fine, you caught me. I was worried I hadn't expressed some details well and everyone would end up with no data crawled.
Q: No, no, you must have yet another motive. Crawling is all about the approach: every page is structured differently, and learning one script only means you know that one script. Learning to analyze pages yourself is the real way.
A: A kindred spirit! You really do get me. Honestly, I just wanted the folks who leave at the first sight of "no source code" to leave early, so they don't miss out on studying posts that do have it.
Q: Hahaha, get on with it then. I knew you were up to something, so I skipped straight to the end without reading the middle.
A: Argh, foiled again. Well, for the students who study properly, here is the full source after all, so they can dig into it and not trip over any detail. Enjoy.
Spider
# -*- coding: utf-8 -*-
import scrapy
import json
from ..items import ZhihuItem, ZhihuUserImageItem
class PythonZhihuSpider(scrapy.Spider):
name = 'python_zhihu'
start_urls = ['https://www.zhihu.com/api/v4/topics/19552832/feeds/essence?include=data%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Danswer)%5D.target.content%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%3Bdata%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Danswer)%5D.target.is_normal%2Ccomment_count%2Cvoteup_count%2Ccontent%2Crelevant_info%2Cexcerpt.author.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Darticle)%5D.target.content%2Cvoteup_count%2Ccomment_count%2Cvoting%2Cauthor.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Dpeople)%5D.target.answer_count%2Carticles_count%2Cgender%2Cfollower_count%2Cis_followed%2Cis_following%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Danswer)%5D.target.annotation_detail%2Ccontent%2Chermes_label%2Cis_labeled%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%3Bdata%5B%3F(target.type%3Danswer)%5D.target.author.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Darticle)%5D.target.annotation_detail%2Ccontent%2Chermes_label%2Cis_labeled%2Cauthor.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Dquestion)%5D.target.annotation_detail%2Ccomment_count%3B&offset=0&limit=10']
def parse(self, response):
        datas = json.loads(response.body)['data']
        for data in datas:
            # fresh items on every iteration: avatar downloads are asynchronous,
            # so reusing a single instance would let later iterations overwrite
            # an item that is still in flight
            item = ZhihuItem()
            imageItem = ZhihuUserImageItem()
try:
item['id'] = data['target']['id']
item['title'] = data['target']['title']
item['url'] = data['target']['url']
item['content'] = data['target']['content']
item['voteup_count'] = data['target']['voteup_count']
item['comment_count'] = data['target']['comment_count']
item['author_name'] = data['target']['author']['name']
try:
                    # profile URL, e.g. https://www.zhihu.com/people/<url_token>
                    item['author_url'] = 'https://www.zhihu.com/' + data['target']['author']['user_type'] + '/' + data['target']['author']['url_token']
                except Exception as e:
                    # anonymous or deleted account: no profile link (Problem 2)
                    item['author_url'] = '未知用戶'
yield item
imageItem['image_urls'] = [data['target']['author']['avatar_url']]
imageItem['image_name'] = data['target']['author']['name']
yield imageItem
            except Exception as e:
                # this entry isn't an article (Problem 1); skip it
                pass
        # crawl the following pages
url = 'https://www.zhihu.com/api/v4/topics/19552832/feeds/essence?include=data%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Danswer)%5D.target.content%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%3Bdata%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Danswer)%5D.target.is_normal%2Ccomment_count%2Cvoteup_count%2Ccontent%2Crelevant_info%2Cexcerpt.author.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Darticle)%5D.target.content%2Cvoteup_count%2Ccomment_count%2Cvoting%2Cauthor.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Dtopic_sticky_module)%5D.target.data%5B%3F(target.type%3Dpeople)%5D.target.answer_count%2Carticles_count%2Cgender%2Cfollower_count%2Cis_followed%2Cis_following%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Danswer)%5D.target.annotation_detail%2Ccontent%2Chermes_label%2Cis_labeled%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%3Bdata%5B%3F(target.type%3Danswer)%5D.target.author.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Darticle)%5D.target.annotation_detail%2Ccontent%2Chermes_label%2Cis_labeled%2Cauthor.badge%5B%3F(type%3Dbest_answerer)%5D.topics%3Bdata%5B%3F(target.type%3Dquestion)%5D.target.annotation_detail%2Ccomment_count%3B&limit=10&offset={}'
        page = 10  # how many extra pages to request
        # offsets 5, 15, 25, ...; the duplicates re-yielded on every parse call
        # are dropped by Scrapy's built-in duplicate request filter
        for i in range(5, 15 + 10*int(page), 10):
            yield scrapy.Request(url=url.format(i), callback=self.parse)
Items
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class ZhihuItem(scrapy.Item):
id = scrapy.Field()
title = scrapy.Field()
url = scrapy.Field()
content = scrapy.Field()
voteup_count = scrapy.Field()
comment_count = scrapy.Field()
author_name = scrapy.Field()
author_url = scrapy.Field()
pass
# Item for crawling user avatars
class ZhihuUserImageItem(scrapy.Item):
image_urls = scrapy.Field()
images = scrapy.Field()
image_name = scrapy.Field()
pass
Pipelines
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy import Request
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline
import csv
import re
from .items import ZhihuItem
from .items import ZhihuUserImageItem
class ZhihuPipeline(object):
def __init__(self):
file = './data.csv'
self.file = open(file, 'a+', encoding="utf-8", newline='')
self.writer = csv.writer(self.file, dialect="excel")
    def process_item(self, item, spider):
        # handle only the text item; other item types pass straight through
        if isinstance(item, ZhihuItem):
            item['content'] = re.sub('<.*?>', '', re.sub('</p>', '\n', item['content']))
            self.writer.writerow([item['id'], item['title'], item['url'], item['content'],
                                  item['voteup_count'], item['comment_count'], item['author_name'], item['author_url']])
            print('entered the text pipeline')
        # return the item unconditionally so nothing gets swallowed here
        return item
def close_spider(self, spider):
self.file.close()
class UserImagePipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # only the avatar item carries image URLs; other items trigger no downloads
        if isinstance(item, ZhihuUserImageItem):
            for image_url in item['image_urls']:
                # carry the author's name along so file_path can use it
                yield Request(image_url, meta={'name': item['image_name']})
    def item_completed(self, results, item, info):
        if isinstance(item, ZhihuUserImageItem):
            image_path = [x['path'] for ok, x in results if ok]
            if not image_path:
                raise DropItem('Item contains no images')
            print('entered the image pipeline')
        # return the item either way: this pipeline runs first, so swallowing
        # non-image items here would keep them from reaching ZhihuPipeline
        return item
    def file_path(self, request, response=None, info=None):
        # name the image after the author instead of the URL's SHA1 hash
        image_name = request.meta['name']
        filename = image_name + '.jpg'
        return filename
Settings
# -*- coding: utf-8 -*-
# Scrapy settings for zhihu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'zhihu'
SPIDER_MODULES = ['zhihu.spiders']
NEWSPIDER_MODULE = 'zhihu.spiders'
LOG_LEVEL = 'ERROR'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'zhihu (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'zhihu.pipelines.ZhihuPipeline': 300,
'zhihu.pipelines.UserImagePipeline': 10,
}
# directory where downloaded images are stored
IMAGES_STORE = 'images'
# don't re-download images fetched within the last 90 days
IMAGES_EXPIRES = 90