Also mentioned in lectures was the concept of a Huffman code, which is a method of lossless data compression (reducing the amount of data needed to store or transmit information). Recall that a Huffman code can be extracted from a Huffman tree. One basic algorithm for building a Huffman tree is:

1. Start with an empty set T
2. Add all the symbols to set T (each symbol starts as a single-node tree)
3. While there are multiple "trees" in the set:
   a. Remove the lowest-probability tree from set T (breaking ties by preferring the smaller tree), and call this tree A
   b. Remove the lowest-probability tree from set T (breaking ties by preferring the smaller tree), and call this tree B
   c. Make a new tree C by joining A and B, and set the probability of C to p(A) + p(B)
   d. Add the new tree C to set T
4. Return the only tree left in set T as the Huffman tree

(Note: we will cover algorithm concepts in more detail in later classes; for now you only need to know the algorithm itself. A sketch implementation is given after the exercise at the end of this section.)

As with entropy, a worked example of building a Huffman tree is discussed in lectures. For example, one possible Huffman tree corresponding to the system shown earlier (with entropy of 2.171 bits) would be:

{{/Labs/01/Images/fig1.svg}}

The Huffman coding can then be extracted from this tree by walking the path from the root to each symbol:

<table>
<tr> <th>Symbol</th> <th>P(Symbol)</th> <th>Code</th> <th>|Code|</th> </tr>
<tr> <td>A</td> <td>0.2</td> <td>11</td> <td>2</td> </tr>
<tr> <td>B</td> <td>0.1</td> <td>100</td> <td>3</td> </tr>
<tr> <td>C</td> <td>0.3</td> <td>00</td> <td>2</td> </tr>
<tr> <td>D</td> <td>0.3</td> <td>01</td> <td>2</td> </tr>
<tr> <td>E</td> <td>0.1</td> <td>101</td> <td>3</td> </tr>
</table>

On average, the code length in this system is 0.2 * 2 + 0.1 * 3 + 0.3 * 2 + 0.3 * 2 + 0.1 * 3 = 2.2 bits. Recall that the entropy of this system is 2.171 bits, so the efficiency of this Huffman coding is 2.171/2.200 = 98.7%.

Contrast this with a naïve encoding of our symbols, using three bits per symbol:

<table>
<tr> <th>Symbol</th> <th>P(Symbol)</th> <th>Code</th> <th>|Code|</th> </tr>
<tr> <td>A</td> <td>0.2</td> <td>000</td> <td>3</td> </tr>
<tr> <td>B</td> <td>0.1</td> <td>001</td> <td>3</td> </tr>
<tr> <td>C</td> <td>0.3</td> <td>010</td> <td>3</td> </tr>
<tr> <td>D</td> <td>0.3</td> <td>011</td> <td>3</td> </tr>
<tr> <td>E</td> <td>0.1</td> <td>100</td> <td>3</td> </tr>
</table>

The code length here is 3 bits for every symbol, so the efficiency of this system is 2.171/3.000 = 72.4%. In other words, the Huffman coding is more than 25 percentage points more efficient than the naïve encoding.

**Exercise**

Compute the Huffman tree of the second example from the previous section (the one with entropy of 1.875 bits). Use this tree to extract the Huffman coding, and then compute its efficiency. In doing so, what do you note about the lengths of the codes, both overall and relative to the probabilities of the symbols that they represent? Draw your tree now and fill in the table below.

<table>
<tr> <th>Symbol</th> <th>P(Symbol)</th> <th>Code</th> <th>|Code|</th> </tr>
<tr> <td>A</td> <td>0.0625</td> <td></td> <td></td> </tr>
<tr> <td>B</td> <td>0.25</td> <td></td> <td></td> </tr>
<tr> <td>C</td> <td>0.5</td> <td></td> <td></td> </tr>
<tr> <td>D</td> <td>0.0625</td> <td></td> <td></td> </tr>
<tr> <td>E</td> <td>0.125</td> <td></td> <td></td> </tr>
</table>
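To make the tree-building algorithm concrete, here is a minimal Python sketch of the same procedure, applied to the worked example above (not to the exercise). The names `Node`, `build_tree`, and `extract_codes` are our own for illustration, not part of any provided library, and ties are broken here by insertion order rather than by tree size, so the exact codes may differ from the table above even though the average code length (and hence the efficiency) comes out the same.

```python
import heapq
import itertools
import math

class Node:
    """A (sub)tree: leaves carry a symbol, internal nodes carry two children."""
    def __init__(self, probability, symbol=None, left=None, right=None):
        self.probability = probability
        self.symbol = symbol    # set only for leaf nodes
        self.left = left
        self.right = right

def build_tree(probabilities):
    """Build a Huffman tree from a dict mapping symbol -> probability."""
    counter = itertools.count()  # tie-breaker: keeps heapq from comparing Node objects
    heap = [(p, next(counter), Node(p, symbol=s)) for s, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:                         # step 3: while multiple trees remain
        p_a, _, a = heapq.heappop(heap)          # 3a: lowest-probability tree A
        p_b, _, b = heapq.heappop(heap)          # 3b: next-lowest tree B
        c = Node(p_a + p_b, left=a, right=b)     # 3c: join A and B; p(C) = p(A) + p(B)
        heapq.heappush(heap, (c.probability, next(counter), c))  # 3d: add C back to T
    return heap[0][2]                            # step 4: the only remaining tree

def extract_codes(node, prefix=""):
    """Walk the tree: a left edge appends '0', a right edge appends '1'."""
    if node.symbol is not None:
        return {node.symbol: prefix or "0"}      # degenerate one-symbol system gets "0"
    codes = {}
    codes.update(extract_codes(node.left, prefix + "0"))
    codes.update(extract_codes(node.right, prefix + "1"))
    return codes

if __name__ == "__main__":
    # The worked example above (entropy 2.171 bits).
    probs = {"A": 0.2, "B": 0.1, "C": 0.3, "D": 0.3, "E": 0.1}
    codes = extract_codes(build_tree(probs))
    avg_len = sum(p * len(codes[s]) for s, p in probs.items())
    entropy = -sum(p * math.log2(p) for p in probs.values())
    print(codes)                                        # exact codes depend on tie-breaking
    print(f"average code length = {avg_len:.3f} bits")  # 2.200 bits for this system
    print(f"efficiency = {entropy / avg_len:.1%}")      # roughly 98.7%
```

Running the sketch should report an average code length of 2.2 bits and an efficiency of roughly 98.7%, matching the hand calculation above. You may find it useful for checking your answer to the exercise, but draw the tree by hand first.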