Using the Scrapy framework with Spynner to scrape pages that load data via JS/AJAX, and extracting page information (with a WeChat official account article list as the example)

http://doc.okbase.net/kevinflynn/archive/163892.html

Web pages to be scraped fall into roughly these categories:

1. Static pages

2. Dynamic pages (pages whose data is loaded dynamically via JS/AJAX)

3. Pages that require a simulated login before they can be scraped

4. Encrypted pages

 

Solutions and approaches for 3 and 4 will be covered in later blog posts.

For now, only the solutions and approaches for 1 and 2:

I. Static pages

      There are many, many ways to fetch and parse static pages! Both Java and Python provide plenty of toolkits and frameworks: Java has HttpClient, HtmlUnit, Jsoup, HtmlParser, and so on; Python has urllib, urllib2, BeautifulSoup, Scrapy, and so on. I won't go into detail here; there is plenty of material online.
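As a minimal sketch of the static case, here is link extraction done with nothing but the Python standard library (the HTML string is made up for illustration; in practice the document would come from an HTTP fetch):

```python
from html.parser import HTMLParser

# Collect every <a href="..."> value from an HTML document
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

html = '<html><body><a href="/page1">One</a> <a href="/page2">Two</a></body></html>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/page1', '/page2']
```

Libraries like BeautifulSoup or Jsoup wrap exactly this kind of tag-walking behind a friendlier query API.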

 

II. Dynamic pages

      For scraping purposes, dynamic pages are those that fetch their data via JS/AJAX after the initial load. There are two ways to collect that data:

      1. Use a packet-capture tool to analyze the JS/AJAX requests, then simulate those requests to fetch the data they load.

      2. Drive a browser engine, grab the page source after it has finished loading, and then parse that source.
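To illustrate approach 1: once a capture tool has shown you the endpoint a page calls, you rebuild that request yourself, usually copying the query parameters and the headers the browser sent. A sketch with the standard library (the endpoint URL and parameter names here are made up; only the openid value comes from this article's example page):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical AJAX endpoint discovered with a packet-capture tool
base = 'http://example.com/api/articles'
params = {'openid': 'oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ', 'page': 1}
url = base + '?' + urlencode(params)

# Replaying the request usually also means copying the browser's headers,
# since many endpoints check them before answering:
req = Request(url, headers={
    'User-Agent': 'Mozilla/5.0',
    'X-Requested-With': 'XMLHttpRequest',  # how the browser marks AJAX calls
})
print(req.full_url)
# urlopen(req) would then return the raw payload, typically JSON,
# to be parsed with json.loads(...)
```

The payoff of this approach is speed (no browser engine to start up); the cost is that the analysis must be redone whenever the site changes its endpoints.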

      Anyone who works on crawlers has to know JavaScript. There is plenty of learning material online, so I won't cover it here; I mention it only for completeness.

Java also has several toolkits for driving a browser engine, but they are not today's focus. Today's focus is what the title says: using the Scrapy framework with Spynner to scrape pages that load data via JS/AJAX and extracting page information (with a WeChat official account article list as the example).

 

Before using Scrapy and Spynner you need to set up the environment. I'm new to Python, and I struggled with this on my Mac for a very long time; just as I was about to go crazy, it finally worked, at the cost of quite a few brain cells. A hard-won victory! One lesson I drew from it: when the tools ask you to install something, just install it.

 

Start......

1. Create a project for scraping the WeChat official account article list:

scrapy startproject weixin

 

2. Create a spider file under the spiders directory:

vim weixinlist.py

    and write the following code into it:

from weixin.items import WeixinItem
import sys
sys.path.insert(0, '..')
import scrapy
from scrapy import Spider

class MySpider(Spider):
    name = 'weixinlist'
    allowed_domains = []
    start_urls = [
        'http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ',
    ]
    download_delay = 1
    print('start init....')

    def parse(self, response):
        sel = scrapy.Selector(response)
        print('hello,world!')
        print(response)
        print(sel)
        # each article entry lives in a <div class="txt-box"><h4><a ...> block
        entries = sel.xpath('//div[@class="txt-box"]/h4')
        items = []
        for single in entries:
            data = WeixinItem()
            title = single.xpath('a/text()').extract()
            link = single.xpath('a/@href').extract()
            data['title'] = title
            data['link'] = link
            if len(title) > 0:
                print(title[0].encode('utf-8'))
                print(link)
            items.append(data)
        # return the collected items so Scrapy can pass them on
        return items
 

 

3. Add the WeixinItem class to items.py:

 

import scrapy


class WeixinItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()

 

 

4. In the same directory as items.py, create a downloader middleware file, downloadwebkit.py, and write the following code into it:

import spynner
import pyquery
from scrapy.http import HtmlResponse

class WebkitDownloaderTest(object):
    def process_request(self, request, spider):
        # optionally restrict rendering to the spiders listed in settings:
        # if spider.name in settings.WEBKIT_DOWNLOADER:
        #     if type(request) is not FormRequest:
        browser = spynner.Browser()
        browser.create_webview()
        browser.set_html_parser(pyquery.PyQuery)
        browser.load(request.url, 20)
        try:
            # give JS/AJAX up to 10 more seconds to finish loading
            browser.wait_load(10)
        except Exception:
            pass
        string = browser.html
        string = string.encode('utf-8')
        renderedBody = str(string)
        # returning a response here makes Scrapy skip its own download
        return HtmlResponse(request.url, body=renderedBody)

 

 

   This code drives the browser engine and returns the page source after the page has finished loading.
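The hook this relies on is Scrapy's downloader-middleware contract: when process_request returns a Response object, Scrapy uses that response and never performs the normal network download for the request. A toy model of that contract, with all class and function names made up for illustration:

```python
# Toy model of Scrapy's downloader-middleware short-circuit behavior

class FakeResponse:
    def __init__(self, url, body):
        self.url, self.body = url, body

class RenderMiddleware:
    def process_request(self, request, spider):
        # stand-in for "render the page in a browser engine"
        return FakeResponse(request, body='<html>rendered</html>')

def download(request, middlewares):
    for mw in middlewares:
        response = mw.process_request(request, spider=None)
        if response is not None:
            return response  # short-circuit: the real download is skipped
    return FakeResponse(request, body='<html>raw, un-rendered</html>')

resp = download('http://example.com', [RenderMiddleware()])
print(resp.body)  # the rendered body, not the raw one
```

This is why, in the log below, the page comes back with the JS-loaded article list already present: the WebKit middleware answered before Scrapy's default downloader ever ran.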

5. Configure settings.py to declare the downloader middleware.

    Add the following at the bottom:

# which spiders should use the WebKit downloader middleware
WEBKIT_DOWNLOADER = ['weixinlist']

DOWNLOADER_MIDDLEWARES = {
    'weixin.downloadwebkit.WebkitDownloaderTest': 543,
}

import os
os.environ["DISPLAY"] = ":0"

 

 

 

6. Run the program:

    Run this command:

scrapy crawl weixinlist

    The output:

kevinflynndeMacBook-Pro:spiders kevinflynn$ scrapy crawl weixinlist
start init....
2015-07-28 21:13:55 [scrapy] INFO: Scrapy 1.0.1 started (bot: weixin)
2015-07-28 21:13:55 [scrapy] INFO: Optional features available: ssl, http11
2015-07-28 21:13:55 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'weixin.spiders', 'SPIDER_MODULES': ['weixin.spiders'], 'BOT_NAME': 'weixin'}
2015-07-28 21:13:55 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named service_identity'.  Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied.  Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.

2015-07-28 21:13:55 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-07-28 21:13:55 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, WebkitDownloaderTest, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-07-28 21:13:55 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-07-28 21:13:55 [scrapy] INFO: Enabled item pipelines: 
2015-07-28 21:13:55 [scrapy] INFO: Spider opened
2015-07-28 21:13:55 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-07-28 21:13:55 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
QFont::setPixelSize: Pixel size <= 0 (0)
2015-07-28 21:14:08 [scrapy] DEBUG: Crawled (200) <GET http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ> (referer: None)
hello,world!
<200 http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ>
<Selector xpath=None data=u'<html><head><meta http-equiv="X-UA-Compa'>
互聯網協議入門
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=210032701&idx=1&sn=6b1fc2bc5d4eb0f87513751e4ccf610c&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
自己動手寫貝葉斯分類器給圖書分類
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=210013947&idx=1&sn=1f36ba5794e22d0fb94a9900230e74ca&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
不當免費技術支持的10種方法
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209998175&idx=1&sn=216106034a3b4afea6e67f813ce1971f&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
以 Python 爲實例,介紹貝葉斯理論
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209998175&idx=2&sn=2f3dee873d7350dfe9546ab4a9323c05&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
我從騰訊那“偷了”3000萬QQ用戶數據,出了份很有趣的...
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209980651&idx=1&sn=11fd40a2dee5132b0de8d4c79a97dac2&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
如何用 Spark 快速開發應用?
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209820653&idx=2&sn=23712b78d82fb412e960c6aa1e361dd3&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
一起來寫個簡單的解釋器(1)
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209797651&idx=1&sn=15073e27080e6b637c8d24b6bb815417&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
那個直接在機器碼中改 Bug 的傢伙
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209762756&idx=1&sn=04ae1bc3a366d358f474ac3e9a85fb60&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
把一個庫開源,你該做些什麼
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209762756&idx=2&sn=0ac961ffd82ead6078a60f25fed3c2c4&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
程序員的困境
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209696436&idx=1&sn=8cb55b03c8b95586ba4498c64fa54513&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
2015-07-28 21:14:08 [scrapy] INFO: Closing spider (finished)
2015-07-28 21:14:08 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/response_bytes': 131181,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 7, 28, 13, 14, 8, 958071),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'log_count/WARNING': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2015, 7, 28, 13, 13, 55, 688111)}
2015-07-28 21:14:08 [scrapy] INFO: Spider closed (finished)
QThread: Destroyed while thread is still running
kevinflynndeMacBook-Pro:spiders kevinflynn$ 

 

 

    

 
