courses:cs211:winter2018:journals:thetfordt:chapter4 [2018/03/12 00:17] – thetfordt
This leads us to the bottom-up approach, known as Huffman codes. Huffman developed his algorithm, and proved its optimality, by constructing the prefix tree from the bottom up. Suppose we are given an alphabet S with given frequencies. If S has two letters, then we encode one letter using 0 and the other letter using 1 (the base case). Otherwise, we let y* and z* be the two lowest-frequency letters. We form a new alphabet S' by deleting y* and z* and replacing them with a new letter w of frequency f(y*)+f(z*). We then recursively construct a prefix code for S', with tree T', and define a prefix code for S as follows: start with T', take the leaf labeled w, and add two children below it labeled y* and z*. We can see that this algorithm addresses exactly the kind of recursive problem the chapter began by describing. The book then follows the logic of the algorithm by proving its optimality: the Huffman code for a given alphabet achieves the minimum average number of bits per letter of any prefix code.
The book then walks through the implementation of the algorithm.
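The recursive merging step above can be sketched in code. This is a minimal Python sketch, not the book's own implementation: it realizes the recursion iteratively with a heap, repeatedly merging the two lowest-frequency letters y* and z* into a combined letter w of frequency f(y*)+f(z*), then reading the codewords off the resulting tree.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code from a {letter: frequency} map, bottom up.

    Each heap entry is (frequency, tiebreaker, tree), where a tree is
    either a letter (a leaf) or a (left, right) pair (an internal node).
    """
    # Degenerate case: a single letter just gets the codeword "0".
    if len(freqs) == 1:
        (letter,) = freqs
        return {letter: "0"}

    heap = [(f, i, letter) for i, (letter, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        fy, _, y = heapq.heappop(heap)  # lowest-frequency letter y*
        fz, _, z = heapq.heappop(heap)  # second-lowest letter z*
        # The merged node plays the role of the new letter w.
        heapq.heappush(heap, (fy + fz, counter, (y, z)))
        counter += 1

    # Walk the finished tree: a left edge appends '0', a right edge '1'.
    code = {}

    def assign(tree, prefix):
        if isinstance(tree, tuple):
            assign(tree[0], prefix + "0")
            assign(tree[1], prefix + "1")
        else:
            code[tree] = prefix

    assign(heap[0][2], "")
    return code
```

Because every letter sits at a leaf, no codeword is a prefix of another, which is exactly the prefix-code property the chapter requires.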
In the section mentioning some extensions of Huffman codes, the idea of arithmetic encoding really jumped out at me. I would have thought that a series of black and white pixels would have to be represented by 0's and 1's for black and white. However, I had not considered that you could represent certain common patterns of whites and blacks using prefix codes.
