Scrapy, a Powerful Crawler Framework: Part 2: Running a Spider with runspider

In the previous article we used scrapy shell to interactively fetch a Web page's title. This article continues with the same simple example to show how to run a spider application under the Scrapy framework.

Spider example code

liumiaocn:scrapy liumiao$ ls
myspider.py
liumiaocn:scrapy liumiao$ cat myspider.py 
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://scrapy.org/']

    def parse(self, response):
        for title in response.css('title'):
            yield {'title': title.get()}
liumiaocn:scrapy liumiao$ 

The code is very simple: it fetches the title of https://scrapy.org/. yield is ordinary Python, and CSS-based extraction is basic HTML knowledge, so only two points deserve attention:

  • After import scrapy, the spider class inherits from scrapy.Spider.
  • The method that extracts the data is named parse, which is the default convention (a variant that extracts only the title text is sketched below).
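
As written, title.get() returns the entire <title> element, markup included. If only the text is wanted, Scrapy's ::text CSS pseudo-element can be used instead; the following is a minimal sketch of that variant (otherwise equivalent to the original spider):

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://scrapy.org/']

    def parse(self, response):
        # '::text' selects only the text node inside <title>,
        # so get() returns the bare string rather than the full tag
        yield {'title': response.css('title::text').get()}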

Creating a project vs. the self-contained approach

Normally, using the Scrapy framework means creating a project and then creating and running spiders inside it; later articles will cover this in more detail (the typical project commands are sketched below for comparison). However, Scrapy also provides a simplified (self-contained) way to run a spider that requires no project at all.
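
For reference, the project-based workflow usually looks like the following (myproject is just an illustrative project name; startproject, genspider, and crawl are standard Scrapy subcommands):

scrapy startproject myproject        # generate the project skeleton
cd myproject
scrapy genspider myspider scrapy.org # generate a spider template inside the project
scrapy crawl myspider                # run the spider by its name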

Running the spider

The following command runs a spider in the self-contained way:

Command to execute: scrapy runspider <spider file name>
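
runspider also accepts the usual output options; for instance, -o writes the scraped items to a feed file (titles.json is just an illustrative file name):

scrapy runspider myspider.py -o titles.json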

Example run

liumiaocn:scrapy liumiao$ scrapy runspider myspider.py 
2020-03-28 06:53:16 [scrapy.utils.log] INFO: Scrapy 2.0.1 started (bot: scrapybot)
2020-03-28 06:53:16 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 20.3.0, Python 3.7.5 (default, Nov  1 2019, 02:16:32) - [Clang 11.0.0 (clang-1100.0.33.8)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Darwin-19.2.0-x86_64-i386-64bit
2020-03-28 06:53:16 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-03-28 06:53:16 [scrapy.crawler] INFO: Overridden settings:
{'SPIDER_LOADER_WARN_ONLY': True}
2020-03-28 06:53:16 [scrapy.extensions.telnet] INFO: Telnet Password: aeb340a45dd4aacb
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-03-28 06:53:16 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-03-28 06:53:16 [scrapy.core.engine] INFO: Spider opened
2020-03-28 06:53:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-03-28 06:53:16 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-03-28 06:53:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapy.org/> (referer: None)
2020-03-28 06:53:17 [scrapy.core.scraper] DEBUG: Scraped from <200 https://scrapy.org/>
{'title': '<title>Scrapy | A Fast and Powerful Scraping and Web Crawling Framework</title>'}
2020-03-28 06:53:17 [scrapy.core.engine] INFO: Closing spider (finished)
2020-03-28 06:53:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 210,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 15374,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.853152,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 3, 27, 22, 53, 17, 680433),
 'item_scraped_count': 1,
 'log_count/DEBUG': 2,
 'log_count/INFO': 10,
 'memusage/max': 50212864,
 'memusage/startup': 50212864,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 3, 27, 22, 53, 16, 827281)}
2020-03-28 06:53:17 [scrapy.core.engine] INFO: Spider closed (finished)
liumiaocn:scrapy liumiao$ 

In the output, we can see the following line:

{'title': '<title>Scrapy | A Fast and Powerful Scraping and Web Crawling Framework</title>'}

This shows that fetching the page title in this self-contained way has succeeded.
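
Incidentally, the spider can also be launched from Python rather than from the scrapy command line: Scrapy's CrawlerProcess offers another project-free option. The following is a minimal, self-contained sketch:

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://scrapy.org/']

    def parse(self, response):
        for title in response.css('title'):
            yield {'title': title.get()}

# CrawlerProcess starts Twisted's reactor and runs the crawl,
# playing the same role that 'scrapy runspider' plays on the CLI
process = CrawlerProcess()
process.crawl(MySpider)
process.start()  # blocks until the crawl finishes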
