Scraping Lagou Job Listings with a Selenium-Driven Browser

I originally wanted to scrape Lagou's job listings with the requests library, but after inspecting the site I found that the listings are loaded through Ajax requests. In other words, the HTML source returned for the page doesn't contain that data, so requests alone can't get at it. I then turned to Selenium to drive a real browser instead, and that works....
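For reference, here is a minimal sketch (the search URL is the same one used in the demo below; the User-Agent header is just illustrative) showing that a plain requests fetch of the list page does not include the job cards:

    import requests

    # Hedged check: fetch the static HTML of the search page directly.
    # The job cards are filled in later by Ajax, so the class used for the
    # job links below ("position_link") is not expected to appear here.
    url = ('https://www.lagou.com/jobs/list_python?city=%E5%85%A8%E5%9B%BD'
           '&cl=false&fromSearch=true&labelWords=&suginput=')
    resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    print('position_link' in resp.text)   # likely False: the listings are not in the static source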

Design approach:

1. First, look at the structure of the site:

Each position in the list can then be clicked, and clicking through opens that position's detail page.

2. Functional design:

So the plan is to first fetch the list of positions, and then use each position's detail-page URL from that list to collect its information.

3. Function skeleton:

    def parse_list_page(self, source):
        '''Parse the detail-page URL of every position on one list page.'''
        pass

    def get_detail_page(self, url):
        '''Fetch the page source of a single position's detail page.'''
        pass

    def parse_detail_page(self, source):
        '''Print or save the information of a single position.'''
        pass

4. Demo:

from selenium import webdriver
from lxml import etree
import re
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
class LagouSpider(object):
    driver_path = r'D:\ChormDriveer\chrome\chromedriver.exe'

    def __init__(self):
        # Load the ChromeDriver.
        self.driver = webdriver.Chrome(executable_path=LagouSpider.driver_path)
        self.url = 'https://www.lagou.com/jobs/list_python?city=%E5%85%A8%E5%9B%BD&cl=false&fromSearch=true&labelWords=&suginput='
        # Used to store the scraped positions.
        self.positions = []
    def run(self):
        # Open the search page in the automated browser.
        self.driver.get(self.url)
        while True:
            # Grab the rendered source of the current list page and parse it.
            source = self.driver.page_source
            self.parse_list_page(source)
            # Wait until the pager has been rendered before looking for the "next page" button.
            WebDriverWait(driver=self.driver, timeout=10).until(
                EC.presence_of_element_located((By.XPATH, "//div[@class='pager_container']/span[last()]"))
            )
            # Locate the "next page" button.
            next_btn = self.driver.find_element_by_xpath("//div[@class='pager_container']/span[last()]")
            # On the last page the button is disabled: stop paging and end the crawl.
            if "pager_next_disabled" in next_btn.get_attribute("class"):
                break
            else:
                next_btn.click()
                # Give the Ajax-loaded next page a moment to render before re-reading page_source.
                time.sleep(1)

    def parse_list_page(self, source):
        # Extract every position's detail-page URL from the list page.
        html = etree.HTML(source)
        links = html.xpath("//a[@class='position_link']/@href")
        for link in links:
            self.get_detail_page(link)
            time.sleep(1)

    def get_detail_page(self, url):
        # Open the detail page in a new browser tab.
        self.driver.execute_script("window.open('%s')" % url)
        # Switch to the new tab and wait briefly so the page can finish loading.
        self.driver.switch_to.window(self.driver.window_handles[1])
        time.sleep(1)
        source = self.driver.page_source
        self.parse_detail_page(source)
        # Close the detail tab and switch back to the list page.
        self.driver.close()
        self.driver.switch_to.window(self.driver.window_handles[0])

    def parse_detail_page(self, source):
        # Pull out the fields of interest with XPath and clean them up.
        html = etree.HTML(source)
        position_name = html.xpath("//span[@class='name']/text()")[0]
        job_request_spans = html.xpath("//dd[@class='job_request']//span")
        salary = job_request_spans[0].xpath('.//text()')[0].strip()
        city = job_request_spans[1].xpath(".//text()")[0].strip()
        city = re.sub(r"[\s/]", "", city)
        work_years = job_request_spans[2].xpath(".//text()")[0].strip()
        work_years = re.sub(r"[\s/]", "", work_years)
        education = job_request_spans[3].xpath(".//text()")[0].strip()
        education = re.sub(r"[\s/]", "", education)
        desc = "".join(html.xpath("//dd[@class='job_bt']//text()")).strip()
        company_name = html.xpath("//h4[@class='company']/text()")[0].strip()
        position = {
            'name': position_name,
            'company_name': company_name,
            'salary': salary,
            'city': city,
            'work_years': work_years,
            'education': education,
            'desc': desc
        }
        self.positions.append(position)
        print(position)
        print('=' * 40)
if __name__ == '__main__':
    spider = LagouSpider()
    spider.run()
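
As written, parse_detail_page prints each position and also appends it to self.positions, so the results can be persisted after the crawl. A minimal follow-up sketch (the csv module and the file name lagou_positions.csv are my own choices, not part of the original script) for saving the collected list once run() returns:

    import csv

    # Assumes spider.run() has finished and spider.positions was filled by parse_detail_page.
    fields = ['name', 'company_name', 'salary', 'city', 'work_years', 'education', 'desc']
    with open('lagou_positions.csv', 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(spider.positions)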

5. I won't paste the run output here; the script has been tested and works as expected....
