While I understand why this stands alone as a section, I do feel as if it could have been shrunk enough to be added as an "
| + | |||
| + | ===== 4.8: Huffman Codes and Data Compression | ||
| + | |||
Given that computers encode information in bits, they need a means of encoding different alphabets into strings of 0s and 1s. There are a number of viable ways of doing this, but we want an encoding that uses, on average, the fewest bits. This was especially important in the past, when memory was hard to come by, and it is where data compression comes in: compression algorithms take files as input and compress them down to much smaller sizes. It is still important, however, to reduce the size of things up front. Morse code serves as a good example of this, since producing common letters takes fewer input characters. Variable-length encodings, while easily disambiguated in Morse code by pausing between letters, are a problem in binary, where there are only two symbols and no natural separator. To avoid ambiguity, we would need to ensure that no letter's encoding is a prefix of any other letter's encoding; encodings with this property are called prefix codes.
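
To make the prefix property concrete, here is a small sketch of my own (not from the book); the code table is made up, but no codeword in it is a prefix of another, so a bit string decodes unambiguously left to right with no separators:

<code python>
# A made-up prefix code: no codeword is a prefix of another, so a
# bit string can be decoded left to right with no separators.
CODE = {"e": "0", "t": "10", "a": "110", "n": "111"}
DECODE = {bits: ch for ch, bits in CODE.items()}

def decode(bits):
    """Emit a letter as soon as the buffered bits match a codeword."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:  # unique match, guaranteed by the prefix property
            out.append(DECODE[buf])
            buf = ""
    return "".join(out)

print(decode("110101110"))  # -> "atne"
</code>
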
We ultimately end up with Huffman's algorithm, a greedy algorithm that builds an optimal prefix code by repeatedly merging the two lowest-frequency letters into a single meta-letter and recursing on the smaller alphabet.
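
As a rough sketch of that greedy merging (my own Python, using a heap-based formulation rather than the book's pseudocode; the function name is mine):

<code python>
import heapq
from collections import Counter

def huffman_code(text):
    """Build a prefix code by repeatedly merging the two
    lowest-frequency subtrees (Huffman's greedy algorithm)."""
    freq = Counter(text)
    # Heap entries are (frequency, tiebreaker, tree); a tree is either
    # a single character or a (left, right) pair of subtrees.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (t1, t2)))
        next_id += 1
    # Read codewords off the tree: '0' for left edges, '1' for right.
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix or "0"  # one-letter alphabet edge case
    walk(heap[0][2], "")
    return code

print(huffman_code("aaaabbc"))  # frequent letters get shorter codewords
</code>
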
| + | |||
This section seemed particularly long and contained a lot of proofs, which can be difficult to read. Overall, I give it a 5/10.
