JSON Data Optimization Approaches

For business reasons we need to send on the order of 100 MB of data from a Java backend to the frontend for rendering, and the transfer from the server to the frontend cannot afford an extra 0.5 seconds.

The solutions I found online did not help much, so I am sharing my own optimization experience here.

Contents

1. Tomcat compression mechanism

2. Java filter compression

3. MessagePack (msgpack) compression

4. Ajax polling

5. Response format: collection of objects

6. Response format: arrays


1. Tomcat compression mechanism

Tomcat ships with a built-in compression mechanism that can gzip the response. It typically shrinks the payload to roughly 20% of its original size, which helps a lot with bandwidth, but it does not help with speed: with a large payload (around 100 MB) the end-to-end transfer did not get noticeably faster and in fact took longer, because compressing and decompressing the data also takes time. This option was therefore rejected.
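For reference, it is enabled through the compression attributes on the HTTP connector in server.xml. The snippet below is a minimal illustrative sketch, not the exact settings from my project; on Tomcat 8.5+ the last attribute is spelled compressibleMimeType:

<!-- server.xml: enable Tomcat's built-in gzip compression on the HTTP connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           compression="on"
           compressionMinSize="2048"
           compressableMimeType="application/json,text/html,text/plain,text/css,text/javascript"/>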

Reference: https://www.cnblogs.com/DDgougou/p/8675504.html

2. Java filter compression

Java filter compression works essentially the same way as Tomcat's built-in mechanism; it also uses gzip, and in my tests the results were about the same as with Tomcat. The three classes below are the response wrapper, the gzip output stream, and the filter itself.

// GZIPResponseWrapper.java: response wrapper that swaps the normal output stream for a gzip-compressing one
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;

public class GZIPResponseWrapper extends HttpServletResponseWrapper {
    protected HttpServletResponse origResponse = null;
    protected ServletOutputStream stream = null;
    protected PrintWriter writer = null;

    public GZIPResponseWrapper(HttpServletResponse response) {
        super(response);
        origResponse = response;
    }

    public ServletOutputStream createOutputStream() throws IOException {
        return (new GZIPResponseStream(origResponse));
    }

    public void finishResponse() {
        try {
            if (writer != null) {
                writer.close();
            } else {
                if (stream != null) {
                    stream.close();
                }
            }
        } catch (IOException e) {
            // ignore: nothing useful can be done if closing the stream fails
        }
    }

    @Override
    public void flushBuffer() throws IOException {
        // Guard against a flush before any output has been requested
        if (stream != null) {
            stream.flush();
        }
    }

    @Override
    public ServletOutputStream getOutputStream() throws IOException {
        if (writer != null) {
            throw new IllegalStateException("getWriter() has already been called!");
        }

        if (stream == null)
            stream = createOutputStream();
        return (stream);
    }

    @Override
    public PrintWriter getWriter() throws IOException {
        if (writer != null) {
            return (writer);
        }

        if (stream != null) {
            throw new IllegalStateException("getOutputStream() has already been called!");
        }

        stream = createOutputStream();
        writer = new PrintWriter(new OutputStreamWriter(stream, "UTF-8"));
        return (writer);
    }

    @Override
    public void setContentLength(int length) {
        // Intentionally ignored: the real Content-Length is only known after
        // compression and is set in GZIPResponseStream.close()
    }
}

// GZIPResponseStream.java: buffers the compressed bytes and writes them with the correct headers on close()
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletResponse;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GZIPResponseStream extends ServletOutputStream {
    protected ByteArrayOutputStream baos = null;
    protected GZIPOutputStream gzipstream = null;
    protected boolean closed = false;
    protected HttpServletResponse response = null;
    protected ServletOutputStream output = null;

    public GZIPResponseStream(HttpServletResponse response) throws IOException {
        super();
        closed = false;
        this.response = response;
        this.output = response.getOutputStream();
        baos = new ByteArrayOutputStream();
        gzipstream = new GZIPOutputStream(baos);
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            throw new IOException("This output stream has already been closed");
        }
        gzipstream.finish();

        byte[] bytes = baos.toByteArray();

        // Headers must be set before the compressed body is written out
        response.addHeader("Content-Length", Integer.toString(bytes.length));
        response.addHeader("Content-Encoding", "gzip");
        output.write(bytes);
        output.flush();
        output.close();
        closed = true;
    }

    @Override
    public void flush() throws IOException {
        if (closed) {
            throw new IOException("Cannot flush a closed output stream");
        }
        gzipstream.flush();
    }

    @Override
    public void write(int b) throws IOException {
        if (closed) {
            throw new IOException("Cannot write to a closed output stream");
        }
        gzipstream.write((byte)b);
    }

    public void write(byte b[]) throws IOException {
        write(b, 0, b.length);
    }

    public void write(byte b[], int off, int len) throws IOException {
        if (closed) {
            throw new IOException("Cannot write to a closed output stream");
        }
        gzipstream.write(b, off, len);
    }

    public boolean closed() {
        return (this.closed);
    }

    public void reset() {
        //noop
    }

    @Override
    public boolean isReady() {
        return false;
    }

    @Override
    public void setWriteListener(WriteListener writeListener) {

    }
}

// GZIPFilter.java: applies the gzip wrapper whenever the client advertises gzip support
package com.spdb.web.base;

import java.io.*;
import java.util.zip.GZIPOutputStream;
import javax.servlet.*;
import javax.servlet.http.*;


/**
 * Compresses the response body returned by the Spring MVC layer.
 * @Author junwei
 * @Date 17:49 2020/3/26
 **/
public class GZIPFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (req instanceof HttpServletRequest) {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            String ae = request.getHeader("accept-encoding");
            if (ae != null && ae.indexOf("gzip") != -1) {
                GZIPResponseWrapper wrappedResponse = new GZIPResponseWrapper(response);
                chain.doFilter(req, wrappedResponse);
                wrappedResponse.finishResponse();
                return;
            }
            chain.doFilter(req, res);
        } else {
            // Non-HTTP requests are passed through untouched
            chain.doFilter(req, res);
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
        // noop
    }

    @Override
    public void destroy() {
        // noop
    }
}

Finally, register the filter in web.xml (com.spdb.web.base should be replaced with the package path used in your own project):

	<filter>
		<filter-name>GZIPFilter</filter-name>
		<filter-class>com.spdb.web.base.GZIPFilter</filter-class>
	</filter>

	<filter-mapping>
		<filter-name>GZIPFilter</filter-name>
		<url-pattern>/*</url-pattern>
	</filter-mapping>

Reference: https://blog.csdn.net/cafebar123/article/details/80037589?depth_1-utm_source=distribute.pc_relevant.none-task&utm_source=distribute.pc_relevant.none-task

3. MessagePack (msgpack) compression

MessagePack is a very good serialization/compression library with its own compact binary encoding and bindings for most programming languages. After a series of tests, however, I found that the Java and JS libraries I was using did not compress and decompress compatibly, so the two sides could not exchange data, and I gave up on it.
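For completeness, this is roughly what the Java side of the test looked like: a minimal sketch assuming the msgpack-java core library (org.msgpack.core), with made-up field values for illustration:

import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import org.msgpack.core.MessageUnpacker;

public class MsgpackDemo {
    public static void main(String[] args) throws Exception {
        // Pack a small record into MessagePack's binary format
        MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
        packer.packString("FI-SW-01");   // productid
        packer.packString("Koi");        // productname
        packer.packDouble(10.00);        // unitcost
        packer.close();
        byte[] bytes = packer.toByteArray();

        // Unpack it again; on the frontend this step would be done by a JS msgpack library
        MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(bytes);
        System.out.println(unpacker.unpackString());
        System.out.println(unpacker.unpackString());
        System.out.println(unpacker.unpackDouble());
        unpacker.close();
    }
}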

Reference: https://msgpack.org/

4. Ajax polling

Ajax polling was something I tested more as an idea: split one request into several smaller requests, and keep requesting until all of the data has been delivered.

The advantage is that the first batches can be rendered immediately; the downside is that the total load time gets longer, and both the backend and the frontend Ajax logic have to be fairly robust, since a whole series of async/sync ordering issues must be handled. After a few tests it did not prove practical, so I dropped it as well.
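To make the idea concrete, here is a minimal sketch of what the server side could look like: a hypothetical servlet that returns one slice of the result per request, driven by offset/limit parameters. The parameter names and the fabricated rows are assumptions for illustration, not code from the project:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical paged endpoint: the client polls it repeatedly,
// increasing "offset" until fewer than "limit" rows come back.
public class PagedDataServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        int offset = Integer.parseInt(req.getParameter("offset"));
        int limit = Integer.parseInt(req.getParameter("limit"));

        // Stand-in for the real query layer: fabricate "limit" rows starting at "offset"
        List<String> rows = new ArrayList<>();
        for (int i = offset; i < offset + limit && i < 100000; i++) {
            rows.add("\"row-" + i + "\"");
        }

        resp.setContentType("application/json;charset=UTF-8");
        resp.getWriter().write("[" + String.join(",", rows) + "]");
    }
}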

Reference: https://www.cnblogs.com/YangJieCheng/p/8367586.html

5. Response format: collection of objects

By this point it was clear that compression is really just a way of reducing the amount of data, so the next question was how to reduce it at the source. Cutting down the number of records in the collection is not an option, but the keys of the objects inside the collection can be optimized.

Take a normal paged JSON collection as returned to the frontend, shown below. Measuring the transfer size by the length of the string, the snippet comes to nearly 150 characters:

{"total":28,"rows":[
    {"productid":"FI-SW-01","productname":"Koi","unitcost":10.00,"status":"P","listprice":36.50,"attr1":"Large","itemid":"EST-1"},

After simplifying the keys, the same snippet is under 100 characters, about 30% less data, and the transfer speed improves accordingly because there is simply less to send:

{"total":28,"rows":[ {"A":"FI-SW-01","B":"Koi","C":10.00,"D":"P","E":36.50,"F":"Large","G":"EST-1"},

In practice this kind of key simplification usually speeds things up by 30%-50%. The drawback is that the fields are no longer self-describing: without an agreed-upon mapping, you cannot tell what each value represents.
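One way to implement such a mapping without changing the business code is to declare the short names on the DTO itself. This is a minimal sketch assuming Jackson is the JSON serializer; the class name and one-letter aliases mirror the example above and are not from the original project:

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ItemRow {
    // Jackson serializes each field under its one-letter alias,
    // so the wire format matches the shortened JSON shown above.
    @JsonProperty("A") public String productid;
    @JsonProperty("B") public String productname;
    @JsonProperty("C") public double unitcost;
    @JsonProperty("D") public String status;
    @JsonProperty("E") public double listprice;
    @JsonProperty("F") public String attr1;
    @JsonProperty("G") public String itemid;

    public static void main(String[] args) throws Exception {
        ItemRow row = new ItemRow();
        row.productid = "FI-SW-01";
        row.productname = "Koi";
        row.unitcost = 10.00;
        row.status = "P";
        row.listprice = 36.50;
        row.attr1 = "Large";
        row.itemid = "EST-1";
        // Prints something like: {"A":"FI-SW-01","B":"Koi","C":10.0,"D":"P","E":36.5,"F":"Large","G":"EST-1"}
        System.out.println(new ObjectMapper().writeValueAsString(row));
    }
}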

6. Response format: arrays

Building on option five: if the keys can be simplified, the structure can go one step further and become a plain array, which removes the keys entirely.

The snippet is now only about 50 characters, roughly a third of the original 150, and the transfer speed improved by about 60%-70%. The drawback is the same, only worse: a bare array is even harder to interpret than simplified keys, so the frontend and backend must agree up front on how the positions map to fields for rendering.

[[28],["FI-SW-01","Koi",10.00,"P",36.50,"Large","EST-1"]]
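A minimal sketch of how the backend could produce this shape, again assuming Jackson. Modelling the payload as a list of Object[] (a count row first, then data rows) is my own assumption for illustration, not the project's actual code:

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.ArrayList;
import java.util.List;

public class ArrayFormatDemo {
    public static void main(String[] args) throws Exception {
        List<Object[]> payload = new ArrayList<>();
        // First element carries the total row count, the rest are data rows
        payload.add(new Object[]{28});
        payload.add(new Object[]{"FI-SW-01", "Koi", 10.00, "P", 36.50, "Large", "EST-1"});

        // Prints something like: [[28],["FI-SW-01","Koi",10.0,"P",36.5,"Large","EST-1"]]
        System.out.println(new ObjectMapper().writeValueAsString(payload));
    }
}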

In the end I adopted option six for rendering the data. The transfer time dropped considerably, although the implementation did become somewhat more involved.
