Python Crawler: From Newbie to Immortal, Part 5 ----- First Look at Scrapy (Scraping Movie Heaven Data)

   First look at Scrapy

  1. Install Scrapy
  2. Create a Scrapy project
  3. Scrape movie information from Movie Heaven (dy2018.com)
  4. Write the data into MongoDB
  5. View the stored data with Robo 3T

Installing Scrapy

Install it with the command pip install scrapy
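After installing, a quick sanity check from Python confirms the package is importable (nothing project-specific is assumed here):

import scrapy

# print the installed Scrapy version to confirm the install succeeded
print(scrapy.__version__)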

Creating a Scrapy project

1. The basic Scrapy workflow

2. cd into the folder where the project should live and run scrapy startproject "project name"   ------>    then create the spider

3. Once the project is generated, open it in PyCharm; it contains the following files (a typical layout is sketched after this list)

movie.py: the spider code      entrypoint.py: entry point for running/debugging in the IDE      items.py: defines the fields to collect      pipelines.py: stores the data      settings.py: project settings
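For reference, a freshly generated project typically looks roughly like the layout below; the project name Movie_Bana matches the settings shown further down, and entrypoint.py is the file we add by hand:

Movie_Bana/
    scrapy.cfg            # project config file
    entrypoint.py         # added manually, runs the spider from the IDE
    Movie_Bana/
        __init__.py
        items.py          # field definitions
        middlewares.py
        pipelines.py      # data storage
        settings.py       # project settings
        spiders/
            __init__.py
            movie.py      # the spider itself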

Scraping movie information from Movie Heaven

1. Inspect the page source to find the relevant information

For each movie category, the URL pattern is obvious (only the trailing number changes): https://www.dy2018.com/3/

  

Within each category, the next-page URL is also easy to spot, e.g. https://www.dy2018.com/6/index_2.html

Within each category page, the movie name, date, score, genre, director and so on can be located clearly in the source; grab them with XPath and then clean up the extracted strings (a small URL-building sketch follows).
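As a minimal sketch of the URL rules described above (the category id and page number here are just example values):

BASE = 'https://www.dy2018.com'

def page_url(type_id, page):
    # page 1 is /<type_id>/index.html, later pages are index_2.html, index_3.html, ...
    if page == 1:
        return '{}/{}/index.html'.format(BASE, type_id)
    return '{}/{}/index_{}.html'.format(BASE, type_id, page)

print(page_url(6, 2))  # https://www.dy2018.com/6/index_2.html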

2. Source code:

Create a new file, entrypoint.py

from scrapy.cmdline import execute

# Run this file from the IDE to start the crawler (equivalent to running scrapy crawl movie from the project root)
execute(['scrapy','crawl','movie'])

items.py

import scrapy

class MovieBanaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # score
    score = scrapy.Field()
    # genre
    type = scrapy.Field()
    # movie name
    name = scrapy.Field()
    # release date
    date = scrapy.Field()
    # director
    director = scrapy.Field()

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for Movie_Bana project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Movie_Bana'

SPIDER_MODULES = ['Movie_Bana.spiders']
NEWSPIDER_MODULE = 'Movie_Bana.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'Movie_Bana (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Movie_Bana.middlewares.MovieBanaSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'Movie_Bana.middlewares.MovieBanaDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
# Priority (any value from 1-1000; the lower the number, the higher the component's priority)
ITEM_PIPELINES = {
    'Movie_Bana.pipelines.MovieBanaPipeline': 1,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings

# With the settings below, Scrapy caches your requests: when a request is repeated and a cached copy exists,
# the cached document is returned instead of hitting the site again. This speeds up local debugging and
# reduces the load on the website.
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MongoDB connection settings: host / port / database name / collection name
MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'movies'
MONGODB_DOCNAME = 'movie_collection'

movie.py

# -*- coding: utf-8 -*-
import scrapy
from lxml import etree
from scrapy.http import Request  # standalone Request class, used when following up a URL
from Movie_Bana.items import MovieBanaItem  # import the item fields defined above


class MovieSpider(scrapy.Spider):
    name = 'movie'  # spider name
    allowed_domains = ['dy2018.com']  # only URLs under allowed_domains are followed; anything else is ignored
    start_url = 'https://www.dy2018.com/'
    end_url = '.html'

    # build and request the URL of every movie category
    def start_requests(self):
        for i in range(21):
            url = self.start_url + str(i) + '/'
            # use the imported Request class to follow the URL (the returned response
            # is passed to self.parse as an argument; that is the callback)
            yield Request(url, self.parse)
            # yield Request: requests a new URL; the second argument is the callback
            # that will handle the returned response

    # build the URLs of every page within a category; parse receives the responses
    # from the requests issued above
    def parse(self, response):
        '''
        # xml = etree.HTML(response.text)
        # extract the category name
        types = xml.xpath('//div[@class="title_all"]/h1/font/text()')[0]
        # split on '>' and strip spaces to pull out the value
        movie_type = types.split('>')[1].strip()
        print(movie_type)
        # get the total number of pages in this category
        # max_num = xml.xpath('//div[@class="x"]/p/select//option//text()')
        # num = len(max_num)
        '''
        # there are many pages; only crawl 10 of them for testing
        num = 10
        for i in range(1, int(num) + 1):
            if i == 1:  # first page
                url = response.url + 'index' + self.end_url
                yield Request(url, self.get_data)
            else:
                url = response.url + 'index_' + str(i) + self.end_url
                yield Request(url, self.get_data)

                # yield Request(url, self.get_data, meta={'type': movie_type})
                # the meta dict is Scrapy's way of passing extra data along;
                # it forwards values collected here to the next callback

    # extract and clean the data
    def get_data(self, response):
        xml = etree.HTML(response.text)
        item = MovieBanaItem()
        # name
        names = xml.xpath('//div[@class="co_content8"]/ul//table//tr[2]//td[2]/b/a[2]/text()')
        name_list = []
        for name in names:
            name_list.append('《' + name.split('《')[1].split('》')[0] + '》')
        item["name"] = name_list
        # extra data passed along can be read back via response.meta['...']
        # item["type"] = str(response.meta['type'])
        # type
        types = xml.xpath('//div[@class="co_content8"]/ul//table//tr[4]//td/p[2]/text()')
        type_list = []
        for i in types:
            type_list.append((i.replace("\r\n◎類型:", "").strip().split("◎")[0]).replace("\r\n", "").strip())
        item["type"] = type_list
        # date
        dates = xml.xpath('//div[@class="co_content8"]/ul//table//tr[3]//td[2]/font[1]/text()')
        dates_list = []
        for date in dates:
            dates_list.append(date.split(":")[1].strip())
        item["date"] = dates_list
        # score
        scores = xml.xpath('//div[@class="co_content8"]/ul//table//tr[3]//td[2]/font[2]/text()')
        scores_list = []
        for score in scores:
            scores_list.append(score.split(": ")[1].strip())
        item["score"] = scores_list
        # director
        directors = xml.xpath('//div[@class="co_content8"]/ul//table//tr[4]//td/p[1]/text()')
        directors_list = []
        for director in directors:
            directors_list.append(director.split('◎')[3].split(":")[1].replace("\r\n", ""))
        item["director"] = directors_list
        # return the item so the pipeline can start processing the data
        return item


 pipelines.py  

from Movie_Bana.items import MovieBanaItem
from scrapy.utils.project import get_project_settings  # read settings.py
import pymongo


class MovieBanaPipeline(object):
    def __init__(self):
        settings = get_project_settings()
        host = settings['MONGODB_HOST']
        port = settings['MONGODB_PORT']
        dbName = settings['MONGODB_DBNAME']
        # create the connection
        client = pymongo.MongoClient(host=host, port=port)
        # get the database
        db = client[dbName]
        # get the collection
        self.collection = db[settings['MONGODB_DOCNAME']]

    def process_item(self, item, spider):
        if isinstance(item, MovieBanaItem):
            bookInfo = dict(item)
            self.collection.insert_one(bookInfo)
            print(self.collection)
        # remember to return the item so later pipelines can keep processing it
        return item

Viewing the stored data with Robo 3T
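Robo 3T is a GUI client; if you prefer checking from Python instead, a minimal pymongo query like the sketch below does the same job, assuming the host/port/database/collection names configured in settings.py above:

import pymongo

# connect with the same parameters as settings.py
client = pymongo.MongoClient(host='127.0.0.1', port=27017)
collection = client['movies']['movie_collection']

# number of stored documents, plus one sample document
print(collection.count_documents({}))
print(collection.find_one())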

 

 

 

Reference: https://cuiqingcai.com/3472.html
