Web Crawler Learning (1) --- Scraping Download Links from Dianying Tiantang (dy2018.com)

Python learners are welcome to join the learning and discussion QQ group: 667279387
Web crawler learning series:
Web Crawler Learning (1) --- Scraping download links from Dianying Tiantang
Web Crawler Learning (2) --- Scraping app info from the 360 app market

This is implemented with Python 3.5, mainly using three libraries: requests, BeautifulSoup, and eventlet.
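If they are not installed yet, all three can be installed with pip (note that the PyPI package for BeautifulSoup is named beautifulsoup4):

pip install requests beautifulsoup4 eventlet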

1. Parsing a single movie's detail page
Take this URL as an example: http://www.dy2018.com/i/98477.html. We want to extract the movie's title and its download address, so first open the page and inspect its HTML source.

The markup containing the movie title:

<div class="title_all"><h1>2017年歐美7.0分科幻片《猩球崛起3:終極之戰》HD中英雙字</h1></div>

The markup containing the download address:

 <td style="WORD-WRAP: break-word" bgcolor="#fdfddf"><a href="ftp://d:[email protected]:12311/[電影天堂www.dy2018.com]猩球崛起3:終極之戰HD中英雙字.rmvb">ftp://d:[email protected]:12311/[電影天堂www.dy2018.com]猩球崛起3:終極之戰HD中英雙字.rmvb</a></td>

Getting the title and download link of a single movie:

import re
import requests
from bs4 import BeautifulSoup

url = 'http://www.dy2018.com/i/98477.html'

headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0'}


def get_one_film_detail(url):
    #print("one_film doing:%s" % url)
    r = requests.get(url, headers=headers)
    # print(r.text.encode(r.encoding).decode('gbk'))
    bsObj = BeautifulSoup(r.content.decode('gbk','ignore'), "html.parser")  # the site is GBK-encoded, so decode manually and ignore undecodable bytes
    td = bsObj.find('td', attrs={'style': 'WORD-WRAP: break-word'})
    if td is None:  # some detail pages use a different layout; return None when the download cell is missing
        return None, None
    url_a = td.find('a')
    url_a = url_a.string
    title = bsObj.find('h1')
    title = title.string
    # title = re.findall(r'[^《》]+', title)[1]  # uncomment to keep only the film name; for this page it would be: 猩球崛起3:終極之戰
    return title, url_a

print (get_one_film_detail(url))
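
The commented-out regex above is what strips the year/genre description and keeps only the film name between the 《 》 brackets; a minimal standalone sketch of what it does:

import re

full_title = '2017年歐美7.0分科幻片《猩球崛起3:終極之戰》HD中英雙字'
# split on the 《 》 brackets; the middle chunk is the film name itself
print(re.findall(r'[^《》]+', full_title)[1])  # -> 猩球崛起3:終極之戰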

2. Parsing all movie links on a list page
First open a movie list page, for example: http://www.dy2018.com/2/index_2.html
Looking at its source, every movie entry starts with the markup below, followed by a short introduction. We don't care about the introduction, only about the snippet that links to the movie's detail page; the goal is to collect the detail-page URLs of every movie listed on the page.

<td height="26">
    <b>
        <a class=ulink href='/html/gndy/dyzz/'>[最新電影]</a>
        <a href="/i/98256.html" class="ulink" title="2017年印度7.1分動作片《巴霍巴利王(下):終結》BD中英雙字">2017年印度7.1分動作片《巴霍巴利王(下):終結》BD中英雙字</a>
    </b>
</td>
import re
import requests
from bs4 import BeautifulSoup

page_url = 'http://www.dy2018.com/2/index_22.html'

headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0'}


def get_one_page_urls(page_url):
    #print("one_page doing:%s" % page_url)
    urls = []
    base_url = "http://www.dy2018.com"
    r = requests.get(page_url, headers=headers)
    bsObj = BeautifulSoup(r.content, "html.parser")
    url_all = bsObj.find_all('a', attrs={'class': "ulink", 'title': re.compile('.*')})
    for a_url in url_all:
        a_url = a_url.get('href')
        a_url = base_url + a_url
        urls.append(a_url)
    return urls

print (get_one_page_urls(page_url))
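
Note that the category link ([最新電影]) also carries class="ulink" but has no title attribute, which is why the find_all filter also requires a title; only real movie entries are kept. Based on the snippet above, the printed result should look roughly like this (abridged, actual IDs depend on the page):

['http://www.dy2018.com/i/98256.html', 'http://www.dy2018.com/i/98477.html', ...]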

3. Crawling multiple pages
With the two building blocks above, we can crawl data across many list pages.

import eventlet
import re
import time
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0'}


def get_one_film_detail(url):
    print("one_film doing:%s" % url)
    r = requests.get(url, headers=headers)
    # print(r.text.encode(r.encoding).decode('gbk'))
    bsObj = BeautifulSoup(r.content.decode('gbk','ignore'), "html.parser")
    td = bsObj.find('td', attrs={'style': re.compile('.*')})
    if td is None:
        return None, None
    url_a = td.find('a')
    url_a = url_a.string
    title = bsObj.find('h1')
    title = title.string
    # title = re.findall(r'[^《》]+', title)[1]
    return title, url_a


def get_one_page_urls(page_url):
    print("one_page doing:%s" % page_url)
    urls = []
    base_url = "http://www.dy2018.com"
    r = requests.get(page_url, headers=headers)
    bsObj = BeautifulSoup(r.content, "html.parser")
    url_all = bsObj.find_all('a', attrs={'class': "ulink", 'title': re.compile('.*')})
    for a_url in url_all:
        a_url = a_url.get('href')
        a_url = base_url + a_url
        urls.append(a_url)
    return urls
    # print(r.text.encode(r.encoding).decode('gbk'))


pool = eventlet.GreenPool()
f = open("download.txt", "w")
start = time.time()

for i in range(2, 100):
    page_url = 'http://www.dy2018.com/2/index_%s.html' % i
    for title, url_a in pool.imap(get_one_film_detail, get_one_page_urls(page_url)):
        # print("titel:%s,download url:%s"%(title,url_a))
        f.write("%s:%s\n\n" % (title, url_a))
f.close()
end = time.time()
print('total time cost:')
print(end - start)

The code above gets disconnected by the server after crawling 20-odd pages, probably because the green threads make it crawl too fast. The error looks like this:

 raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
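
One way to work around this kind of disconnect, besides the session-based rewrite below, is to slow down and retry failed requests. A minimal sketch (get_with_retry is just an illustrative helper, not part of the original code):

import time
import requests

def get_with_retry(url, headers, retries=3, delay=2):
    # retry a failed request a few times, pausing between attempts,
    # so one dropped connection does not kill the whole crawl
    for attempt in range(retries):
        try:
            return requests.get(url, headers=headers, timeout=10)
        except requests.exceptions.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(delay)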

The optimized code:

import eventlet
import re
import time
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0'}

def get_one_film_detail(urls):
    req = requests.session()  # use a session here so the TCP connection to the server is reused, cutting down the number of new connections
    req.headers.update(headers)
    for url in urls:
        print("one_film doing:%s" % url)
        r = req.get(url)
        # print(r.text.encode(r.encoding).decode('gbk'))
        bsObj = BeautifulSoup(r.content.decode('gbk','ignore'), "html.parser")
        td = bsObj.find('td', attrs={'style': re.compile('.*')})
        if td is None:
            continue
        url_a = td.find('a')
        if url_a is None:
            continue
        url_a = url_a.string
        title = bsObj.find('h1')
        title = title.string
        # title = re.findall(r'[^《》]+', title)[1]
        f = open("download.txt", "a")
        f.write("%s:%s\n\n" % (title, url_a))



def get_one_page_urls(page_url):
    print("one_page doing:%s" % page_url)
    urls = []
    base_url = "http://www.dy2018.com"
    r = requests.get(page_url, headers=headers)
    bsObj = BeautifulSoup(r.content, "html.parser")
    url_all = bsObj.find_all('a', attrs={'class': "ulink", 'title': re.compile('.*')})
    for a_url in url_all:
        a_url = a_url.get('href')
        a_url = base_url + a_url
        urls.append(a_url)
    return urls
    # print(r.text.encode(r.encoding).decode('gbk'))


pool = eventlet.GreenPool()
start = time.time()
page_urls = ['http://www.dy2018.com/2/']
for i in range(2, 100):
    page_url = 'http://www.dy2018.com/2/index_%s.html' % i
    page_urls.append(page_url)

for urls in pool.imap(get_one_page_urls, page_urls):
    get_one_film_detail(urls)

end = time.time()
print('total time cost:')
print(end - start)

After this optimization, the list pages are crawled concurrently with green threads to collect all the detail-page links, and each movie's download address is then fetched in a single thread over a reused session. Collecting the download links for the action movies on Dianying Tiantang took in total:

total time cost:
304.44839310646057

The collected results are written to download.txt, one "title:download link" entry per movie.

If the source code was useful to you, please leave a comment on the blog to say thanks~
Python enthusiasts are welcome to join the learning and discussion QQ group: 667279387
