【Scrapy Study Notes】Adding an IP Proxy


Disclaimer: for technical exchange only. Do not use this for illegal purposes; this blog bears no responsibility for any losses caused by misuse.

Adding an IP proxy simply means setting the value of the proxy key on each request.
The free proxies here come from the 66ip.cn free proxy site, which is genuinely handy: a single request returns as many IPs as you ask for (the API URL appears in the code below).
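The parsing step the middleware below performs, namely turning the API's raw "ip:port" lines into proxy URLs, can be isolated as a small helper. This is a simplified sketch of the same idea; the sample input in the docstring is made up, not a real 66ip.cn response:

```python
def parse_proxies(text):
    """Turn raw 'ip:port' lines into https:// proxy URLs, skipping blank lines.

    Example input: "182.35.87.118:9999\n\n120.25.252.232:8118\n"
    """
    proxies = []
    for line in text.splitlines():
        line = line.strip()
        if line:
            proxies.append('https://' + line)
    return proxies
```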

Only the middlewares.py file in the Scrapy project needs to change. Without further ado, the code:

from scrapy import signals
import requests
import parsel  # web-page parsing library, similar to BeautifulSoup but considerably more powerful

class ipdailiDownloaderMiddleware(object):
    def __init__(self):
        self.proxy = self.get_ip  # cached list of proxy URLs
        self.flag = True          # whether we are still searching for a working proxy
        self.w = 0                # index of the next proxy to try

    # @property turns the method below into an attribute-style accessor
    @property
    def get_ip(self):
        # the getnum parameter in the URL is the number of IPs to fetch
        url = 'http://www.66ip.cn/nmtq.php?getnum=200&isp=0&anonymoustype=3&area=0&proxytype=1&api=66ip'
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}
        r = requests.get(url, headers=headers)
        response = parsel.Selector(r.text)
        ips = []
        hehe = response.xpath('.//script/../text()').getall()
        for ip in hehe:
            temp = ip.strip()
            if temp != '':
                ips.append('https://' + temp)
        return ips

    # called before every request is sent
    def process_request(self, request, spider):
        if self.flag and self.w < len(self.proxy):
            # still searching: try the next proxy in the list
            proxy = self.proxy[self.w]
            print('this ip is :', proxy)
            request.meta['proxy'] = proxy
            self.w += 1
        else:
            # a proxy already worked (or the list is exhausted): reuse the last one tried
            if self.w == 0:
                proxy = self.proxy[self.w]
            else:
                proxy = self.proxy[self.w - 1]
            print('this ip is :', proxy)
            request.meta['proxy'] = proxy

    # called on every response before it reaches the spider
    def process_response(self, request, response, spider):
        if response.status == 200:
            self.flag = False
            return response  # the proxy worked: pass the response through
        else:
            # non-200 status: set flag back to True, fetch a fresh batch of IPs, and retry
            self.flag = True
            self.proxy = self.get_ip
            self.w = 0
            return request  # returning the request reschedules it for download
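A common alternative to walking the list in order is to pick a random proxy for each request. The minimal sketch below is illustrative, not part of the original middleware; the class name and the list passed to it are assumptions:

```python
import random

class RandomProxyMiddleware:
    """Assign a randomly chosen proxy to each outgoing request."""

    def __init__(self, proxies):
        self.proxies = list(proxies)

    def process_request(self, request, spider):
        # Scrapy's downloader reads request.meta['proxy'] when sending the request
        request.meta['proxy'] = random.choice(self.proxies)
```

Random choice spreads traffic across the pool, at the cost of losing the "stick with a proxy once it works" behaviour of the middleware above.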

Next, define a spider file ip.py to verify that the free proxy IPs can actually be used; this part needs little explanation. The code:

import scrapy

class IpSpider(scrapy.Spider):
    name = 'ip'
    allowed_domains = ['baidu.com']
    start_urls = ['https://www.baidu.com']

    def parse(self, response):
        print(response.status)
        yield scrapy.Request(
            self.start_urls[0],
            dont_filter=True
        )
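Note that Baidu returns 200 to almost any client, so a 200 status alone only shows the request went through the proxy without error. A stricter check is to ask an IP-echo endpoint which address the server actually sees (httpbin.org/ip is an assumption here, not mentioned in the original post). In this sketch the fetch function is injected so the parsing can be exercised without network access:

```python
import json

def origin_ip(fetch, url='https://httpbin.org/ip'):
    """Return the client IP reported by a JSON echo endpoint.

    `fetch` is any callable mapping a URL to a JSON response body,
    e.g. lambda u: requests.get(u, proxies={...}).text
    """
    return json.loads(fetch(url))['origin']
```

If the returned address matches the proxy rather than your own IP, the proxy is genuinely in use.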

Finally, don't forget to enable the proxy middleware defined above by adding the following to settings.py:

DOWNLOADER_MIDDLEWARES = {
    'hehe.middlewares.ipdailiDownloaderMiddleware': 543,
}

Then open cmd, change into the project's root directory (the one containing scrapy.cfg), and run scrapy crawl ip.

The output looks like this:

this ip is : https://182.35.87.118:9999
this ip is : https://120.25.252.232:8118
this ip is : https://117.95.162.79:9999
this ip is : https://175.42.123.108:9999
this ip is : https://113.120.38.141:9999
this ip is : https://120.25.252.18:8118
this ip is : https://49.51.193.134:1080
200
this ip is : https://49.51.193.134:1080
200
(the same proxy line and 200 status repeat for every subsequent request until the run is stopped)

Process finished with exit code -1

Once the run reached https://49.51.193.134:1080, the status code 200 came back, which means the Baidu homepage is being fetched through the added IP proxy, and every subsequent request reuses that same IP.

Final words

Well, that's it for this one; I don't know when the next post will come. I'll keep learning, though probably not more Scrapy, since I feel I've covered most of it by now. Really only Scrapy-redis is left, and even that I've half learned: I've watched tutorial videos online and roughly know how to use it, I just haven't applied it in a real project, because I have no use for it at the moment. Knowing how it works is enough; if I ever truly need it, I'll learn it properly then. I've also scraped plenty of other sites, with lots of examples and the code still around, but those were written before I learned Scrapy, using requests and selenium, and I don't feel like re-analyzing those pages now. Maybe I'll write about them some other time.
