Scrapy Notes (10) - Dynamically Configured Spiders

Quite often we need to crawl data from several websites, for example collecting news from multiple sites and storing the articles in the same database table. Do we have to define a separate Spider class for every website? We do not: by maintaining a rule table (or a rule configuration file) we can add or modify crawl rules dynamically, and crawl new sites without changing any program code.

To do this we can no longer rely on the scrapy crawl test style of command used earlier; instead we need to run the Scrapy spider programmatically, as described in the official documentation.

Running Scrapy from a Script

Scrapy's core API lets you start a crawl programmatically, replacing the traditional scrapy crawl way of launching it.

Scrapy is built on top of the Twisted asynchronous networking framework, so your script has to run inside the Twisted reactor.

The first option is the scrapy.crawler.CrawlerProcess class. It starts a Twisted reactor for you and configures logging and shutdown handlers; all Scrapy commands use this class internally.

run.py

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


class MySpider(scrapy.Spider):
    # Your spider definition
    ...


process = CrawlerProcess(get_project_settings())

process.crawl(MySpider)
process.start()  # the script will block here until the crawling is finished

Then you can run the script directly:

python run.py

Another, more powerful class is scrapy.crawler.CrawlerRunner, and it is the recommended one:

run.py

from twisted.internet import reactor
import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})
runner = CrawlerRunner()

d = runner.crawl(MySpider)
d.addBoth(lambda _: reactor.stop())
reactor.run() # the script will block here until the crawling is finished

Running Multiple Spiders in the Same Process

By default, every invocation of the scrapy crawl command creates a new process. With the core API, however, we can run multiple spiders simultaneously in a single process:

import scrapy
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...

configure_logging()
runner = CrawlerRunner()
runner.crawl(MySpider1)
runner.crawl(MySpider2)
d = runner.join()
d.addBoth(lambda _: reactor.stop())

reactor.run() # the script will block here until all crawling jobs are finished
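
The two crawls above run concurrently inside the same reactor. If you instead need the spiders to run one after another, the deferreds returned by runner.crawl() can be chained, for example like this sketch, which reuses the MySpider1/MySpider2 definitions from above:

from twisted.internet import defer, reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

# MySpider1 and MySpider2 defined as above

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    # wait for MySpider1 to finish before starting MySpider2
    yield runner.crawl(MySpider1)
    yield runner.crawl(MySpider2)
    reactor.stop()

crawl()
reactor.run()  # the script will block here until the last crawl call is finished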

Defining the Rule Table

Back to the task at hand. With the script-based startup covered above, we can start building the dynamically configured crawler. The requirement is this: crawl the news articles we need from two different websites and store them in the articles table.

First we define the rule table and the article table. Because the spider class is created dynamically from these rules, maintaining the rule table is all we will need to do from now on. I use the SQLAlchemy framework here to map the database.

models.py

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
"""
Topic: database model entities
Desc :
"""
import datetime

from sqlalchemy.engine.url import URL
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine, Column, Integer, String, Text, DateTime
from coolscrapy.settings import DATABASE

Base = declarative_base()

class ArticleRule(Base):
    """Custom article crawl rule"""
    __tablename__ = 'article_rule'

    id = Column(Integer, primary_key=True)
    # rule name
    name = Column(String(30))
    # allowed domains, comma separated
    allow_domains = Column(String(100))
    # start URLs, comma separated
    start_urls = Column(String(100))
    # xpath of the "next page" link
    next_page = Column(String(100))
    # regular expression (substring) that article links must match
    allow_url = Column(String(200))
    # xpath of the region to extract article links from
    extract_from = Column(String(200))
    # xpath of the article title
    title_xpath = Column(String(100))
    # xpath of the article body
    body_xpath = Column(Text)
    # xpath of the publish time
    publish_time_xpath = Column(String(30))
    # article source site
    source_site = Column(String(30))
    # whether the rule is enabled
    enable = Column(Integer)


class Article(Base):
    """Article entity"""
    __tablename__ = 'articles'

    id = Column(Integer, primary_key=True)
    url = Column(String(100))
    title = Column(String(100))
    body = Column(Text)
    publish_time = Column(String(30))
    source_site = Column(String(30))
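
The pipeline and the startup script below also call db_connect() and create_news_table(), which live in this same models.py but are not shown in the article. A minimal sketch of what they might look like, assuming DATABASE in settings.py is a dict of keyword arguments for sqlalchemy.engine.url.URL (drivername, host, port, username, password, database):

def db_connect():
    """Create a database engine from the DATABASE dict in settings.py."""
    # on SQLAlchemy 1.4+ use URL.create(**DATABASE) instead
    return create_engine(URL(**DATABASE))


def create_news_table(engine):
    """Create the article_rule and articles tables if they do not exist yet."""
    Base.metadata.create_all(engine)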

Defining the Article Item

This one is simple and needs no further explanation.

items.py

import scrapy


class Article(scrapy.Item):
    title = scrapy.Field()
    url = scrapy.Field()
    body = scrapy.Field()
    publish_time = scrapy.Field()
    source_site = scrapy.Field()

Defining the ArticleSpider

Next we define the spider that crawls the articles. It is initialized with a rule record (an ArticleRule row) and extracts the data according to the xpath expressions stored in that rule.

from coolscrapy.utils import parse_text
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from coolscrapy.items import Article


class ArticleSpider(CrawlSpider):
    name = "article"

    def __init__(self, rule):
        self.rule = rule
        self.name = rule.name
        self.allowed_domains = rule.allow_domains.split(",")
        self.start_urls = rule.start_urls.split(",")
        rule_list = []
        # add the rule for following the `next page` link
        if rule.next_page:
            rule_list.append(Rule(LinkExtractor(restrict_xpaths=rule.next_page)))
        # add the rule that extracts article links
        rule_list.append(Rule(LinkExtractor(
            allow=[rule.allow_url],
            restrict_xpaths=[rule.extract_from]),
            callback='parse_item'))
        self.rules = tuple(rule_list)
        super(ArticleSpider, self).__init__()  # called last so CrawlSpider can compile self.rules

    def parse_item(self, response):
        self.log('Hi, this is an article page! %s' % response.url)

        article = Article()
        article["url"] = response.url

        title = response.xpath(self.rule.title_xpath).extract()
        article["title"] = parse_text(title, self.rule.name, 'title')

        body = response.xpath(self.rule.body_xpath).extract()
        article["body"] = parse_text(body, self.rule.name, 'body')

        publish_time = response.xpath(self.rule.publish_time_xpath).extract()
        article["publish_time"] = parse_text(publish_time, self.rule.name, 'publish_time')

        article["source_site"] = self.rule.source_site

        return article

Note that start_urls, rules, and the other attributes are set as instance attributes, all initialized from the rule object passed in, and the extraction expressions used in parse_item also come from that rule object.
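
parse_text, imported from coolscrapy.utils, is not shown in the article either. Judging from how it is called, it takes the list returned by extract() plus the rule name and the field name; a minimal sketch under that assumption, with the rule and field names used only for logging:

# coolscrapy/utils.py (hypothetical sketch)
import logging


def parse_text(extracted, rule_name, field):
    """Join the strings returned by extract() and strip whitespace,
    warning when nothing was matched for this rule/field."""
    text = ''.join(extracted).strip()
    if not text:
        logging.warning('rule %s: nothing extracted for %s', rule_name, field)
    return text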

Writing a Pipeline to Store Data in the Database

We again use SQLAlchemy to store the article item data in the database.

pipelines.py

from contextlib import contextmanager

from sqlalchemy.orm import sessionmaker
from coolscrapy.models import db_connect, create_news_table, Article


@contextmanager
def session_scope(Session):
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()


class ArticleDataBasePipeline(object):
    """Save articles to the database"""

    def __init__(self):
        engine = db_connect()
        create_news_table(engine)
        self.Session = sessionmaker(bind=engine)

    def open_spider(self, spider):
        """This method is called when the spider is opened."""
        pass

    def process_item(self, item, spider):
        a = Article(url=item["url"],
                    title=item["title"].encode("utf-8"),
                    publish_time=item["publish_time"].encode("utf-8"),
                    body=item["body"].encode("utf-8"),
                    source_site=item["source_site"].encode("utf-8"))
        with session_scope(self.Session) as session:
            session.add(a)
        return item

    def close_spider(self, spider):
        pass
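
For Scrapy to actually call this pipeline, it has to be enabled in the project settings. Assuming the class lives in coolscrapy/pipelines.py, the entry would look something like this (the priority value 300 is an arbitrary choice):

# settings.py (excerpt)
ITEM_PIPELINES = {
    'coolscrapy.pipelines.ArticleDataBasePipeline': 300,
}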

Modifying the run.py Startup Script

A few small changes to the run.py above give us the startup script for our article crawler.

run.py

import logging
from spiders.article_spider import ArticleSpider
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from scrapy.utils.log import configure_logging
from coolscrapy.models import db_connect
from coolscrapy.models import ArticleRule
from sqlalchemy.orm import sessionmaker

if __name__ == '__main__':
    settings = get_project_settings()
    configure_logging(settings)
    db = db_connect()
    Session = sessionmaker(bind=db)
    session = Session()
    rules = session.query(ArticleRule).filter(ArticleRule.enable == 1).all()
    session.close()
    runner = CrawlerRunner(settings)

    for rule in rules:
        # stop reactor when spider closes
        # runner.signals.connect(spider_closing, signal=signals.spider_closed)
        runner.crawl(ArticleSpider, rule=rule)

    d = runner.join()
    d.addBoth(lambda _: reactor.stop())

    # blocks process so always keep as the last statement
    reactor.run()
    logging.info('all finished.')

And that's it. We can now add rules for hundreds or even thousands of websites to the ArticleRule table and crawl them all without adding a single line of code. You could of course build a web front end to maintain the ArticleRule table, and the rules do not have to live in a database at all; a configuration file would work just as well.
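
To make the rule table concrete, here is a hypothetical example of adding one rule; the domain, URLs, and xpath expressions are made up purely for illustration:

from sqlalchemy.orm import sessionmaker
from coolscrapy.models import db_connect, ArticleRule

Session = sessionmaker(bind=db_connect())
session = Session()
session.add(ArticleRule(
    name='example-news',                              # hypothetical rule
    allow_domains='news.example.com',
    start_urls='http://news.example.com/latest',
    next_page='//a[@class="next"]',
    allow_url=r'/article/\d+',
    extract_from='//div[@class="article-list"]',
    title_xpath='//h1/text()',
    body_xpath='//div[@class="content"]//text()',
    publish_time_xpath='//span[@class="time"]/text()',
    source_site='Example News',
    enable=1,
))
session.commit()
session.close()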

The complete project source code for this article is available on GitHub: https://github.com/yidao620c/core-scrapy
