POJ1521

/**
 * @file POJ1521.cpp
 * @author your name ([email protected])
 * @brief 
 * @version 0.1
 * @date 2019-11-16
 * 
Entropy

Time Limit: 1000MS
Memory Limit: 10000K
Total Submissions: 12341
Accepted: 4311

Description
An entropy encoder is a data encoding method that achieves lossless data compression by encoding a message with "wasted" 
or "extra" information removed. In other words, entropy encoding removes information that was not necessary in the first
place to accurately encode the message. A high degree of entropy implies a message with a great deal of wasted information;
English text encoded in ASCII is an example of a message type that has very high entropy. Already compressed messages, 
such as JPEG graphics or ZIP archives, have very little entropy and do not benefit from further attempts at entropy encoding. 

English text encoded in ASCII has a high degree of entropy because all characters are encoded using the same number of bits,
eight. It is a known fact that the letters E, L, N, R, S and T occur at a considerably higher frequency than do most other 
letters in English text. If a way could be found to encode just these letters with four bits, then the new encoding would be 
smaller, would contain all the original information, and would have less entropy. ASCII uses a fixed number of bits for a 
reason, however: it’s easy, since one is always dealing with a fixed number of bits to represent each possible glyph or 
character. How would an encoding scheme that used four bits for the above letters be able to distinguish between the 
four-bit codes and eight-bit codes? This seemingly difficult problem is solved using what is known as a "prefix-free 
variable-length" encoding. 

In such an encoding, any number of bits can be used to represent any glyph, and glyphs not present in the message are simply not
encoded. However, in order to be able to recover the information, no bit pattern that encodes a glyph is allowed to be the prefix
of any other encoding bit pattern. This allows the encoded bitstream to be read bit by bit, and whenever a set of bits is 
encountered that represents a glyph, that glyph can be decoded. If the prefix-free constraint was not enforced, then such a 
decoding would be impossible. 

Consider the text "AAAAABCD". Using ASCII, encoding this would require 64 bits. If, instead, we encode "A" with the bit pattern
"00", "B" with "01", "C" with "10", and "D" with "11" then we can encode this text in only 16 bits; the resulting bit pattern 
would be "0000000000011011". This is still a fixed-length encoding, however; we’re using two bits per glyph instead of eight. 
Since the glyph "A" occurs with greater frequency, could we do better by encoding it with fewer bits? In fact we can, 
but in order to maintain a prefix-free encoding, some of the other bit patterns will become longer than two bits. 
An optimal encoding is to encode "A" with "0", "B" with "10", "C" with "110", and "D" with "111". 
(This is clearly not the only optimal encoding, as it is obvious that the encodings for B, C and D could be interchanged 
freely for any given encoding without increasing the size of the final encoded message.) Using this encoding, the message 
encodes in only 13 bits to "0000010110111", a compression ratio of 4.9 to 1 (that is, each bit in the final encoded message 
represents as much information as did 4.9 bits in the original encoding). Read through this bit pattern from left to right 
and you’ll see that the prefix-free encoding makes it simple to decode this into the original text even though the codes 
have varying bit lengths. 

As a second example, consider the text "THE CAT IN THE HAT". In this text, the letter "T" and the space character both occur 
with the highest frequency, so they will clearly have the shortest encoding bit patterns in an optimal encoding. 
The letters "C", "I" and "N" only occur once, however, so they will have the longest codes. 

There are many possible sets of prefix-free variable-length bit patterns that would yield the optimal encoding, that is, 
that would allow the text to be encoded in the fewest number of bits. One such optimal encoding is to encode spaces 
with "00", "A" with "100", "C" with "1110", "E" with "1111", "H" with "110", "I" with "1010", "N" with "1011" and "T" with "01".
The optimal encoding therefore requires only 51 bits compared to the 144 that would be necessary to encode the message 
with 8-bit ASCII encoding, a compression ratio of 2.8 to 1. 

Input
The input file will contain a list of text strings, one per line. The text strings will consist only of uppercase alphanumeric
characters and underscores (which are used in place of spaces). The end of the input will be signalled by a line containing 
only the word “END” as the text string. This line should not be processed.

Output
For each text string in the input, output the length in bits of the 8-bit ASCII encoding, the length in bits of an optimal 
prefix-free variable-length encoding, and the compression ratio accurate to one decimal point.

Sample Input
AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output
64 13 4.9
144 51 2.8

 * @copyright Copyright (c) 2019
 * 
 */

/**
 * @brief Greedy algorithm; Huffman coding; optimal binary tree; priority_queue;
 * 
 */

#include <cstdio>       // printf
#include <queue>
#include <functional>
#include <algorithm>
#include <vector>
#include <iostream>
#include <string>

using namespace std;

template <typename T> void print_queue(T& q){
    while(!q.empty()){
        std::cout<<q.top()<<" ";
        q.pop();
    }
    std::cout<<std::endl;
}

int main(){
    string s;
    while(getline(cin,s) && s!="END"){
        // Sort the string so identical characters become adjacent, then count each character's occurrences.
        std::sort(s.begin(),s.end());
        // cout<<s<<endl;
        priority_queue<int,vector<int>,greater<int>> q; // using greater as the Compare parameter makes top() return the smallest element, i.e. a min-heap
        int cnt = 1;    // counts occurrences of the character currently being scanned
        for(string::size_type i = 0; i < s.length(); ++i){
            if(s[i]!=s[i+1]){       // s[s.length()] is '\0' (valid since C++11), so the final run is always flushed
                q.push(cnt);        // the current character differs from the next: the run is complete, push its count into the priority queue
                cnt = 1;            // reset the counter to start counting a new character
            }else{
                ++cnt;              // same character as the next one: extend the current run
            }
        }
        // print_queue(q);
        int leng = 0;
        // Degenerate case: the input contains only one kind of character,
        // so each occurrence takes one bit and the total length equals its count.
        if(q.size()==1)
            leng = q.top();

        while(q.size()!=1){         // when merging finishes, the queue holds a single element: the root of the Huffman tree
            int min_1 = q.top();
            q.pop();                // take the two lowest-weight trees out of the Huffman forest
            int min_2 = q.top();
            q.pop();
            q.push(min_1+min_2);    // merge them into one tree whose weight is the sum of the two, and put it back into the forest
            leng += (min_1+min_2);  // why this accumulates the total encoded length is not obvious:
            // a character's share of the encoded length is (its frequency) x (its depth in the Huffman tree).
            // Suppose a character occurs w times and its leaf ends up at depth h in the final tree
            // (it may start as a single-node tree or become a leaf of a merged tree).
            // During construction, the subtree containing that leaf is popped, merged, and re-pushed exactly h times,
            // so the statement "leng += (min_1+min_2);" includes its frequency w exactly h times, contributing w*h.
            // The same argument applies to every character, so the merging loop
            // computes the total encoded length as a side effect.
        }
        // cout<<leng<<endl;
        // cout<<8.0*s.length()/leng<<endl;
        printf("%d %d %.1f\n",(int)(8*s.length()),leng,8.0*s.length()/leng); // cast: s.length() is size_t, which does not match %d on 64-bit platforms
    }
}


 
