Scraping Douban Group Data with Scrapy (Part 2)

How to make a Spider in Scrapy crawl Douban group pages automatically.

1. Import CrawlSpider, another predefined spider that ships with Scrapy:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

2. Define a new class, GroupSpider, based on CrawlSpider, and add the appropriate crawl rules:

class GroupSpider(CrawlSpider):
    name = "Group"
    allowed_domains = ["douban.com"]
    start_urls = [
        "http://www.douban.com/group/explore?tag=%E8%B4%AD%E7%89%A9",
        "http://www.douban.com/group/explore?tag=%E7%94%9F%E6%B4%BB",
        "http://www.douban.com/group/explore?tag=%E7%A4%BE%E4%BC%9A",
        "http://www.douban.com/group/explore?tag=%E8%89%BA%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E5%AD%A6%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E6%83%85%E6%84%9F",
        "http://www.douban.com/group/explore?tag=%E9%97%B2%E8%81%8A",
        "http://www.douban.com/group/explore?tag=%E5%85%B4%E8%B6%A3"
    ]
 
    rules = [
        Rule(SgmlLinkExtractor(allow=('/group/[^/]+/$', )), callback='parse_group_home_page', process_request='add_cookie'),
        Rule(SgmlLinkExtractor(allow=('/group/explore\?tag', )), follow=True, process_request='add_cookie'),
    ]

start_urls predefines all of Douban's group category pages; the spider sets out from these pages to discover groups.

The rules definition is the most important part of a CrawlSpider. It can be read as: when the spider encounters a certain type of page, this is how that page should be handled.

For example, the following rule handles pages whose URL ends in /group/XXXX/, using parse_group_home_page as the callback, and calls add_cookie before each request is sent to attach cookie information.

Rule(SgmlLinkExtractor(allow=('/group/[^/]+/$', )), callback='parse_group_home_page', process_request='add_cookie'),

As another example, the following rule fetches matching pages and automatically follows the links found on them for further crawling, but does not otherwise process the page content (no callback is set).

Rule(SgmlLinkExtractor(allow=('/group/explore\?tag', )), follow=True, process_request='add_cookie'),

How to Add Cookies

Define the following function, and add process_request='add_cookie' to the Rule definitions as shown above.

def add_cookie(self, request):
    # Request.replace() returns a new Request rather than modifying the
    # original in place, so the replaced request must be returned.
    return request.replace(cookies=[
        {'name': 'COOKIE_NAME', 'value': 'VALUE', 'domain': '.douban.com', 'path': '/'},
    ])
Most sites keep the user's session information in client-side cookies, so attaching cookie data lets the spider impersonate a logged-in user while fetching data.
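Rather than hard-coding cookie values, one practical option is to export cookies from a logged-in browser session (in the Netscape cookies.txt format) and load them with the standard library. Below is a minimal sketch, assuming such an export file exists; the helper name load_browser_cookies is my own:

import cookielib

def load_browser_cookies(path):
    # Parse a Netscape/Mozilla-format cookies.txt export into the list
    # of dicts that request.replace(cookies=...) expects.
    jar = cookielib.MozillaCookieJar(path)
    jar.load()
    return [{'name': c.name, 'value': c.value,
             'domain': c.domain, 'path': c.path}
            for c in jar]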

How to Keep the Spider from Being Banned

First, try attaching a logged-in user's cookies when fetching pages. Even if the pages you scrape are public, sending cookies may keep the spider from being blocked at the application layer. I have not actually verified this, but it certainly does no harm.

Second, even as an authorized user, your IP may be banned if you request pages too frequently, so you generally want the spider to pause for 1-2 seconds between requests.

Finally, configure the User-Agent, rotating among different User-Agent strings where possible (see the middleware sketch after the settings below).

In the Scrapy project's settings.py, add the following settings:

DOWNLOAD_DELAY = 2
RANDOMIZE_DOWNLOAD_DELAY = True
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5'
COOKIES_ENABLED = True
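
Note that USER_AGENT above sets a single static string. To actually rotate User-Agents, one option is a small downloader middleware; the following is a rough sketch (the USER_AGENTS pool and the douban.middlewares / RotateUserAgentMiddleware names are my own placeholders, not from the original project):

import random

USER_AGENTS = [
    # Fill this pool with real browser User-Agent strings.
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
]

class RotateUserAgentMiddleware(object):
    # Downloader middleware: pick a random User-Agent for each request.
    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(USER_AGENTS)

Then enable it in settings.py:

DOWNLOADER_MIDDLEWARES = {
    'douban.middlewares.RotateUserAgentMiddleware': 400,
}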

================

That completes the spider for crawling Douban group pages. Next, you can follow the same pattern to define a Spider that scrapes data from the group discussion pages, then turn the spiders loose and let them crawl. Have fun! The full spider code is listed below.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from douban.items import DoubanItem
import re
 
class GroupSpider(CrawlSpider):
    name = "Group"
    allowed_domains = ["douban.com"]
    start_urls = [
        "http://www.douban.com/group/explore?tag=%E8%B4%AD%E7%89%A9",
        "http://www.douban.com/group/explore?tag=%E7%94%9F%E6%B4%BB",
        "http://www.douban.com/group/explore?tag=%E7%A4%BE%E4%BC%9A",
        "http://www.douban.com/group/explore?tag=%E8%89%BA%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E5%AD%A6%E6%9C%AF",
        "http://www.douban.com/group/explore?tag=%E6%83%85%E6%84%9F",
        "http://www.douban.com/group/explore?tag=%E9%97%B2%E8%81%8A",
        "http://www.douban.com/group/explore?tag=%E5%85%B4%E8%B6%A3"
    ]
 
    rules = [
        Rule(SgmlLinkExtractor(allow=('/group/[^/]+/$', )), callback='parse_group_home_page', process_request='add_cookie'),
    #   Rule(SgmlLinkExtractor(allow=('/group/[^/]+/discussion\?start\=(\d{1,4})$', )), callback='parse_group_topic_list', process_request='add_cookie'),
        Rule(SgmlLinkExtractor(allow=('/group/explore\?tag', )), follow=True, process_request='add_cookie'),
    ]
 
    def __get_id_from_group_url(self, url):
        # Extract the group id from a URL like http://www.douban.com/group/<id>/
        m = re.search("^http://www.douban.com/group/([^/]+)/$", url)
        if m:
            return m.group(1)
        else:
            return 0
 
 
 
    def add_cookie(self, request):
        # Request.replace() returns a new Request rather than modifying
        # the original, so the result must be returned. Cookie values
        # are elided here; fill in your own as shown earlier.
        return request.replace(cookies=[
        ])
 
    def parse_group_topic_list(self, response):
        self.log("Fetch group topic list page: %s" % response.url)
        pass
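        # One possible way to fill in this stub (my own sketch, not part
        # of the original post): extract topic links from the discussion
        # list page. The XPath is illustrative and must be verified
        # against the actual page markup.
        #
        # hxs = HtmlXPathSelector(response)
        # for url in hxs.select('//td[@class="title"]/a/@href').extract():
        #     self.log("Found topic: %s" % url)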
 
 
    def parse_group_home_page(self, response):
 
        self.log("Fetch group home page: %s" % response.url)
 
        hxs = HtmlXPathSelector(response)
        item = DoubanItem()
 
        #get group name
        item['groupName'] = hxs.select('//h1/text()').re("^\s+(.*)\s+$")[0]
 
        #get group id 
        item['groupURL'] = response.url
        groupid = self.__get_id_from_group_url(response.url)
 
        #get group members number
        members_url = "http://www.douban.com/group/%s/members" % groupid
        members_text = hxs.select('//a[contains(@href, "%s")]/text()' % members_url).re("\((\d+)\)")
        item['totalNumber'] = members_text[0]
 
        #get relative groups
        item['RelativeGroups'] = []
        groups = hxs.select('//div[contains(@class, "group-list-item")]')
        for group in groups:
            url = group.select('div[contains(@class, "title")]/a/@href').extract()[0]
            item['RelativeGroups'].append(url)
        #item['RelativeGroups'] = ','.join(relative_groups)
        return item
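
For completeness: the DoubanItem imported from douban.items is not shown in the post. Inferring from the fields the spider assigns, a minimal declaration would look roughly like this:

# douban/items.py -- inferred sketch
from scrapy.item import Item, Field

class DoubanItem(Item):
    groupName = Field()
    groupURL = Field()
    totalNumber = Field()
    RelativeGroups = Field()

With that in place, the spider can be started with scrapy crawl Group.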


