A Small Data Analysis Exercise

View the source code:

Git repo: https://github.com/champion-yang/dataAnalysis

Task requirements and completion status:

  • 1. Crawl data from a specified job site, extract the useful fields, and save them as a JSON file.
    Status: bossZhipin
    The Scrapy framework is used to crawl job listings from Boss Zhipin and save them to bossData.json; see the bossZhipin folder for the code.

  • 2. Set the POST request parameters and assign the returned information to the variable response.
    Status: postReq.py
    Uses the requests and json packages; the request headers, payload, and URL are passed into the requests call. Pay attention to the request format (form data vs. JSON body).

  • 3. Convert the extracted data to JSON format and assign it to a variable.
    Status: dataToJson.py
    Uses json, requests, and BeautifulSoup to crawl the novel 狂神 from biquw.com, collecting each chapter's title and link, converting them to JSON, and assigning the result to the variable jsonObj.

  • 4. Create a JSON file with a with statement and write the JSON data into it using the json module.
    Status: withFunBuildJson.py
    Takes the JSON data from step 3, encodes it to bytes with encode(), and writes it to build.json.

OK, straight to the code!
1.
The Scrapy framework is used here; below are the spider code and the pipeline file. First, the spider:

# -*- coding: utf-8 -*-
import scrapy

from bossZhipin.items import BosszhipinItem

class BossSpider(scrapy.Spider):
    name = 'boss'
    allowed_domains = ['zhipin.com']

    # page counter and URL pieces used for pagination
    offset = 1
    url = 'https://www.zhipin.com/c101010100-p100109/?page='
    start_urls = [url + str(offset)]
    url1 = 'https://www.zhipin.com'

    def parse(self, response):
        # each job posting sits in a div with class 'job-primary'
        for each in response.xpath("//div[@class='job-primary']"):
            item = BosszhipinItem()
            item['company'] = each.xpath("./div[@class='info-company']/div/h3/a/text()").extract()[0]
            item['company_link'] = self.url1 + each.xpath("./div[@class='info-company']/div/h3/a/@href").extract()[0]
            item['position'] = each.xpath("./div[@class='info-primary']/h3/a/div[@class='job-title']/text()").extract()[0]
            item['wages'] = each.xpath("./div[@class='info-primary']/h3/a/span[@class]/text()").extract()[0]
            item['place'] = each.xpath("./div[@class='info-primary']/p/text()").extract()[0]
            item['experience'] = each.xpath("./div[@class='info-primary']/p/text()").extract()[1]
            # follow the company link to collect the company details
            yield scrapy.Request(item['company_link'], meta={'item': item}, callback=self.get_company_info)
        # crawl the first 10 result pages; the request must be yielded inside the
        # if, otherwise the last page would be requested over and over
        if self.offset < 10:
            self.offset += 1
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
        
    def get_company_info(self, response):
        item = response.meta['item']
        # company description and number of open positions on the company page
        company_infos = response.xpath("//div[@id='main']/div[3]/div/div[2]/div/div[1]/div/text()").extract()
        position_nums = response.xpath("//div[@id='main']/div[1]/div/div[1]/div[1]/span[1]/a/b/text()").extract()
        for position_num, company_info in zip(position_nums, company_infos):
            item['position_num'] = position_num
            item['company_info'] = company_info
            print(item['position_num'], item['company_info'])
            yield item

The pipeline file (pipelines.py):

import json

# json.dumps serializes each item to a JSON string before it is written to disk
class BosszhipinPipeline(object):
    def __init__(self):
        # open the output file in binary mode so utf-8 bytes can be written
        self.filename = open('bossData.json', 'wb')

    def process_item(self, item, spider):
        # save each scraped item as one JSON object per line
        text = json.dumps(dict(item), ensure_ascii=False) + '\n'
        self.filename.write(text.encode('utf-8'))
        return item

    def close_spider(self, spider):
        print('spider closed')
        self.filename.close()
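
The spider imports BosszhipinItem, whose definition is not shown above. Here is a minimal sketch of what bossZhipin/items.py presumably looks like, inferred from the fields the spider fills (the actual file is in the repo):

# bossZhipin/items.py -- sketch inferred from the fields used in the spider
import scrapy

class BosszhipinItem(scrapy.Item):
    company = scrapy.Field()
    company_link = scrapy.Field()
    position = scrapy.Field()
    wages = scrapy.Field()
    place = scrapy.Field()
    experience = scrapy.Field()
    position_num = scrapy.Field()
    company_info = scrapy.Field()

For the pipeline to run it also has to be registered in settings.py, e.g. ITEM_PIPELINES = {'bossZhipin.pipelines.BosszhipinPipeline': 300} (300 is just a conventional priority value). The spider is then started with scrapy crawl boss, which writes bossData.json through the pipeline.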

Sample output (screenshot omitted).

2.
The POST parameters include the request URL, the request payload, and the request headers:
import requests,json
url = "http://xsxxgk.huaibei.gov.cn/site/label/8888?IsAjax=1&dataType=html&_=0.27182235250895626"
data ={
    "siteId":"4704161",
    "pageSize":"15",
    "pageIndex":"4",
    "action":"list",
    "isDate":"true",
    "dateFormat":"yyyy-MM-dd",
    "length":"46",
    "organId":"33",
    "type":"4",
    "catId":"3827899",
    "cId":"",
    "result":"暫無相關信息",
    "labelName":"publicInfoList",
    "file":"/xsxxgk/publicInfoList-xs"
}
headers = {
    "Accept":"text/html, */*; q=0.01","Accept-Encoding":"gzip, deflate",
    "Accept-Language":"zh-CN,zh;q=0.9,en;q=0.8",
    "Connection":"keep-alive",
    "Content-Length":"253",
    "Content-Type":"application/x-www-form-urlencoded; charset=UTF-8",
    "Cookie":"SHIROJSESSIONID=f30feb26-6495-4287-a5a6-27bbd76bf960",
    "Host":"xsxxgk.huaibei.gov.cn",
    "Origin":"http",
    "Referer":"http",
    "User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36",
    "X-Requested-With":"XMLHttpRequest"
}
response = requests.post(url=url, data=data, headers=headers)
# response = requests.post(url=url, data=json.dumps(data), headers=headers)  # note whether the endpoint expects form data or a JSON body
print(response.text)
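
The difference between the two calls above is how the body is encoded. A minimal sketch, using httpbin.org/post purely as an illustrative echo endpoint (it is not part of the original exercise):

import requests

payload = {"pageIndex": "4", "pageSize": "15"}

# form-encoded body: Content-Type becomes application/x-www-form-urlencoded
r1 = requests.post("https://httpbin.org/post", data=payload)

# JSON body: requests serializes the dict and sets Content-Type to application/json
r2 = requests.post("https://httpbin.org/post", json=payload)

print(r1.request.headers["Content-Type"])  # application/x-www-form-urlencoded
print(r2.request.headers["Content-Type"])  # application/json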

3&4.

import json
import requests
from bs4 import BeautifulSoup

# fetch every chapter title and chapter URL
if __name__ == '__main__':
    # index page of the novel
    target = 'http://www.biquw.com/book/7627/'
    req = requests.get(url=target)
    # raw html document
    html = req.text
    # parse the html (passing an explicit parser avoids a BeautifulSoup warning)
    div_bf = BeautifulSoup(html, 'html.parser')
    # the chapter list sits in the div with class 'book_list'
    div = div_bf.find_all('div', class_='book_list')
    a_list = div[0].select('ul > li > a') if div else []
    titleList1 = []
    titleList2 = []
    for each in a_list:
        # chapter title and absolute link
        titleList1.append(each.string)
        titleList2.append(target + each.get('href'))
    # build the dict once, after the loop
    d = dict(zip(titleList1, titleList2))
    # ensure_ascii=False keeps the Chinese titles readable in the file
    jsonObj = json.dumps(d, ensure_ascii=False).encode('utf-8')
    print(type(jsonObj))
    with open('build.json', 'wb') as f:
        f.write(jsonObj)
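
To confirm that the written file is valid JSON, it can be read straight back. A minimal check, assuming build.json was produced by the script above:

import json

# read the file back and parse it; utf-8 matches the encoding used when writing
with open('build.json', 'r', encoding='utf-8') as f:
    chapters = json.load(f)

# chapters maps each chapter title to its URL
print(len(chapters), 'chapters loaded')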
        