Huffman Code

In subject area: Mathematics

Huffman code is defined as a prefix code generated from a weighted binary tree, where each symbol at a leaf is assigned a unique binary representation based on the path taken from the root to the leaf.

AI generated definition based on: Fundamental Data Compression, 2006

Chapters and Articles

You might find these chapters and articles relevant to this topic.

6.1.2.1 Huffman Coding

An important class of prefix codes is the class of Huffman codes [14]. The key idea behind the Huffman code is to represent a symbol from a source alphabet by a sequence of bits whose length is proportional to the amount of information conveyed by the symbol under consideration, that is, Lk ≅ −log(pk). Clearly, the Huffman code requires knowledge of the source statistics and represents the DMS statistics by a simpler approximation. The Huffman encoding algorithm can be summarized as follows:

1.

List the symbols of the source in decreasing order of probability of occurrence. The two symbols with the lowest probabilities are assigned a 0 and a 1.

2.

The two symbols with the lowest probabilities are combined into a new symbol (super-symbol) whose probability is the sum of the probabilities of the individual symbols it contains. The super-symbol is placed in the list of the next stage according to its combined probability.

3.

The procedure is repeated until we are left with only two symbols to which we assign bits 0 and 1.

By reading out the bits assigned along the path from each symbol back to the root, we obtain the codeword assigned to that symbol. The Huffman procedure applies not only to binary codes but also to nonbinary codes.
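As an illustration, the following is a minimal Python sketch of the binary version of this procedure (the example below instead uses a ternary code alphabet); the function name and the use of a heap are implementation choices, not part of the text:

    import heapq
    import itertools

    def huffman_codes(probabilities):
        """Binary Huffman procedure: repeatedly combine the two least probable
        (super-)symbols and assign them a 1 and a 0. Returns one bit string per symbol."""
        counter = itertools.count()            # tie-breaker so heapq never compares lists
        # each heap entry: (probability, tiebreak, indices of symbols in this super-symbol)
        heap = [(p, next(counter), [i]) for i, p in enumerate(probabilities)]
        heapq.heapify(heap)
        codes = ["" for _ in probabilities]
        while len(heap) > 1:
            p1, _, group1 = heapq.heappop(heap)    # lowest probability
            p2, _, group2 = heapq.heappop(heap)    # second lowest
            for i in group1:                       # prepend bits: we build from leaf to root
                codes[i] = "1" + codes[i]
            for i in group2:
                codes[i] = "0" + codes[i]
            heapq.heappush(heap, (p1 + p2, next(counter), group1 + group2))
        return codes

    # The eight-symbol DMS from the example below, here encoded with a binary
    # (rather than ternary) code alphabet; prints one binary codeword per symbol.
    print(huffman_codes([0.28, 0.18, 0.15, 0.13, 0.10, 0.07, 0.05, 0.04]))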

Example. A DMS has an alphabet of eight symbols whose probabilities of occurrence are as follows:

Symbols: s1 s2 s3 s4 s5 s6 s7 s8

Probabilities: 0.28 0.18 0.15 0.13 0.10 0.07 0.05 0.04

We wish to design the Huffman code for this source, placing each super-symbol as high as possible in the list and assuming ternary transmission with symbols {0, 1, 2}. The Huffman procedure for this ternary code is summarized in Fig. 6.1.

Figure 6.1. Huffman procedure (left) and corresponding ternary code (right).

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780128219829000095

Observation

1.

Huffman or Shannon-Fano codes are prefix codes (Section 2.3.3) which are uniquely decodable.

2.

There may be a number of Huffman codes, for two reasons:

(a)

There are two ways to assign a 0 or 1 to an edge of the tree. In Figure 4.1, we have chosen to assign 0 to the left edge and 1 to the right. However, it is possible to assign 0 to the right and 1 to the left. This would make no difference to the compression ratio.

(b)

There are a number of different ways to insert a combined item into the frequency (or probability) table. This leads to different binary trees. We have chosen in the same example to:

i.

make the item at the higher position the left child

ii.

insert the combined item on the frequency table at the highest possible position.

3.

For a canonical minimum-variance code, the differences among the lengths of the codewords turn out to be the minimum possible.

4.

The frequency table can be replaced by a probability table. In fact, it can be replaced by any approximate statistical data at the cost of losing some compression ratio. For example, we can apply a probability table derived from a typical text file in English to any source data.

5.

When the alphabet is small, a fixed length (less than 8 bits) code can also be used to save bits.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780750663106500076

4.3 Optimal Huffman codes

Huffman codes are optimal when the probabilities of the source symbols are all negative powers of two. Examples of negative powers of two are 1/2, 1/4, 1/8, etc.

The conclusion can be drawn from the following justification.

Suppose that the lengths of the Huffman codewords are L = (l1, l2, …, ln) for a source P = (p1, p2, …, pn), where n is the size of the alphabet.

Using a variable-length code that assigns lj bits to symbol sj, the average length of the codewords (in bits) is:

l̄ = Σ_{j=1}^n lj pj = l1p1 + l2p2 + … + lnpn

The entropy of the source is:

H = Σ_{j=1}^n pj log(1/pj) = p1 log(1/p1) + p2 log(1/p2) + … + pn log(1/pn)

As we know from Section 2.4.2, a code is optimal if the average length of the codewords equals the entropy of the source.

Let

Σ_{j=1}^n lj pj = Σ_{j=1}^n pj log2(1/pj)

and notice

Σ_{j=1}^n lj pj = Σ_{j=1}^n pj lj

This equation holds if and only if lj = −log2 pj for all j = 1, 2, …, n, because lj has to be an integer (in bits). Since the length lj has to be an integer (in bits) for Huffman codes, −log2 pj has to be an integer, too. Of course, −log2 pj cannot be an integer unless pj is a negative power of 2, for all j = 1, 2, …, n.

In other words, this can only happen when all the probabilities are negative powers of 2, since lj has to be an integer (in bits). For example, for a source P = (1/2, 1/4, 1/8, 1/8), the Huffman code is optimal.
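As a quick numeric check of this claim, the following Python snippet (an illustrative sketch, not part of the text) evaluates the dyadic source P = (1/2, 1/4, 1/8, 1/8):

    import math

    # Codeword lengths l_j = -log2 p_j are exactly the integers 1, 2, 3, 3 (for example
    # the prefix code 0, 10, 110, 111), so the average length equals the entropy.
    P = [1/2, 1/4, 1/8, 1/8]
    lengths = [-math.log2(p) for p in P]              # 1.0, 2.0, 3.0, 3.0 -- all integers
    avg_len = sum(l * p for l, p in zip(lengths, P))  # 1.75 bits
    entropy = sum(p * math.log2(1 / p) for p in P)    # 1.75 bits
    print(lengths, avg_len, entropy)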

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780750663106500076

II.B.4 Huffman Codes

The optimal worst-case encoding length L̂ can be easily achieved: simply map every element of χ to a unique binary string of length ⌈log2 |χ|⌉.
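For illustration, a small Python sketch of this fixed-length mapping (the function name is hypothetical):

    import math

    def fixed_length_code(alphabet):
        """Assign each element of a finite alphabet a binary string of ceil(log2 |alphabet|) bits."""
        L = math.ceil(math.log2(len(alphabet)))
        return {x: format(i, "0{}b".format(L)) for i, x in enumerate(alphabet)}

    print(fixed_length_code("abcde"))   # 5 symbols -> 3-bit codewords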

In 1952, Huffman discovered an encoding scheme that achieves L̄. Huffman codes are constructed recursively. The Huffman code for the 1-element probability distribution P1 = (1) consists of the empty codeword. The Huffman code of an n-element probability distribution Pn = (p1, p2, …, pn−2, pn−1, pn), where, without loss of generality, pi ≥ pi+1, is constructed from the Huffman code of the (n − 1)-element probability distribution Pn−1 = (p1, p2, …, pn−2, pn−1 + pn) as follows. The codewords of p1, …, pn−2 in Pn are the same as their codewords in Pn−1; the codeword of pn−1 in Pn is the codeword of pn−1 + pn in Pn−1 followed by 0; and the codeword of pn in Pn is the codeword of pn−1 + pn in Pn−1 followed by 1.

For example, the Huffman code for the probability distribution P4 = (0.45, 0.25, 0.2, 0.1) is constructed as follows. We first combine the two smallest probabilities to obtain the probability distribution (0.45, 0.25, 0.3) which we reorder to get P3 = (0.45, 0.3, 0.25). Again, combining the two smallest probabilities and reordering, we obtain P2 = (0.55, 0.45). Finally, combining the two probabilities we obtain P1 = (1). Next, we retrace the steps and construct the code. The codeword of the probability 1 in P1 is the empty string. Since 1 = 0.55 + 0.45, the codewords for 0.55 and 0.45 in P2 are 0 and 1, respectively. In P3, the codeword of 0.45 remains 1, and, since 0.55 = 0.3 + 0.25, the codeword for 0.3 is 00 and that of 0.25 is 01. Finally, in P4, the codewords of 0.45 and 0.25 remain 1 and 01 as in P3, and, since 0.3 = 0.2 + 0.1, the codeword of 0.2 is 000, and that of 0.1 is 001. The Huffman code for (0.45, 0.25, 0.2, 0.1) is therefore (1, 01, 000, 001). Figure 4 illustrates this construction.

FIGURE 4. Huffman code for (0.45, 0.25, 0.2, 0.1).
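The recursive construction just described can be sketched in Python as follows (an illustrative implementation, not the source's own code; ties are broken by Python's sort, which may yield a different but equally valid Huffman code):

    def huffman_code(probs):
        """Recursive Huffman construction: merge the two smallest probabilities,
        solve the reduced distribution, then split the merged codeword with 0/1.
        Returns codewords in the same order as the input probabilities."""
        n = len(probs)
        if n == 1:
            return [""]                            # the 1-element code is the empty codeword
        order = sorted(range(n), key=lambda k: probs[k])
        j, i = order[0], order[1]                  # j: smallest, i: second smallest
        keep = [k for k in range(n) if k != j]     # reduced distribution drops symbol j
        reduced = [probs[i] + probs[j] if k == i else probs[k] for k in keep]
        reduced_codes = huffman_code(reduced)
        codes = [""] * n
        for pos, k in enumerate(keep):
            codes[k] = reduced_codes[pos]
        codes[j] = codes[i] + "1"                  # p_n: merged codeword followed by 1
        codes[i] = codes[i] + "0"                  # p_{n-1}: merged codeword followed by 0
        return codes

    print(huffman_code([0.45, 0.25, 0.2, 0.1]))    # -> ['1', '01', '000', '001'], as in the text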

Huffman codes are defined only over finite support sets and require a priori knowledge of the underlying probability distribution. These constraints limit their applicability. For example, when encoding text files, we often do not know the underlying probability distribution, and since the files are unbounded in length, the support set—the set of all possible files—is infinite. Hence Huffman codes cannot be used for both reasons. The codes described in the next sections address both issues.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B0122274105003379

Example 4.6

If the size of the alphabet set is smaller than or equal to 32, we can use 5 bits to encode each character. This would give a saving percentage of

(8 × 32 − 5 × 32) / (8 × 32) = 37.5%
6.

Huffman codes are fragile to decode: a single bit error can corrupt the decoding of the entire file.

7.

The average codeword length of the Huffman code for a source is greater than or equal to the entropy of the source and less than the entropy plus 1 (Theorem 2.2).

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780750663106500076

4.1 Static Huffman coding

Huffman coding is a successful compression method used originally for text compression. In any text, some characters occur far more frequently than others. For example, in English text, the letters E, A, O, T are normally used much more frequently than J, Q, X.

Huffman's idea is, instead of using a fixed-length code such as 8-bit extended ASCII or EBCDIC for each symbol, to represent a frequently occurring character in a source with a shorter codeword and a less frequently occurring one with a longer codeword. Hence the total number of bits of this representation is significantly reduced for a source of symbols with different frequencies. The number of bits required per symbol is reduced on average.

Compression

In order to understand the problem, we first look at some examples of source texts.

Example 4.1

Consider the string BILL BEATS BEN. For convenience, we ignore the two spaces.

The frequency of each symbol is:

Symbol:    B  I  L  E  A  T  S  N
Frequency: 3  1  2  2  1  1  1  1

Sort the list by frequency:

Symbol:    B  L  E  I  A  T  S  N
Frequency: 3  2  2  1  1  1  1  1

This source consists of symbols from an alphabet (B, L, E, I, A, T, S, N) with the occurrence frequencies (3, 2, 2, 1, 1, 1, 1, 1). We want to assign a variable-length prefix code to the alphabet, i.e. one codeword, of some length, for each symbol.

The input of the compression algorithm is a string of text. The output of the algorithm is the string of binary bits that encodes the input string. The problem contains three subproblems:

1.

Read input string

2.

Interpret each input symbol

3.

Output the codeword for each input symbol.

The first and the last subproblems are easy. For the first subproblem we only need a data structure that allows access to each symbol one after another. Suppose the prefix code is C = (c1, c2, …, cn), where n = 8 in this example. For the last subproblem, we only need a means of finding the corresponding codeword for each symbol and outputting it.

So we focus on the second subproblem which is how to derive the code C.

We write the description of the problem:

Main subproblem: Derive an optimal or suboptimal prefix code.

Input: An alphabet and a frequency table.

Output: A prefix code C such that the average length of the codewords is as short as possible.

Modelling is fairly easy if we follow the statistical model in Chapter 2. The alphabet of a source is S = (s1, s2, …, sn), which is associated with a probability distribution P = (p1, p2, …, pn). Note that a frequency table can be easily converted to a probability table. For example, in the previous example a frequency table (3, 2, 2, 1, 1, 1, 1, 1) is given for the alphabet (B, L, E, I, A, T, S, N). The total frequency is 3 + 2 + 2 + 1 + 1 + 1 + 1 + 1 = 12. The probability of each symbol is the ratio of its frequency to the total frequency. We then have the probability distribution (3/12, 2/12, 2/12, 1/12, 1/12, 1/12, 1/12, 1/12) for prediction of the source in the future.
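A small Python sketch of this frequency-to-probability conversion (illustrative only):

    from collections import Counter

    # Frequency table for the symbols of "BILLBEATSBEN", turned into a probability
    # table by dividing each frequency by the total.
    freq = Counter("BILLBEATSBEN")                    # {'B': 3, 'L': 2, 'E': 2, 'I': 1, ...}
    total = sum(freq.values())                        # 12
    prob = {s: f / total for s, f in freq.items()}    # e.g. B -> 3/12 = 0.25
    print(prob)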

We now consider how to construct a prefix code in which short codewords are assigned to frequent symbols and long codewords to rare symbols. Recall from Section 2.3.3 that any prefix code can be represented by a 0–1 tree where all the symbols are at leaves and the codeword for each symbol consists of the collection of the 0s and 1s on the path from the root to that leaf (Figure 2.5). The short codewords are at leaves closer to the root and the long codewords at deeper leaves (Figure 2.8). If we have such a binary tree for the alphabet, we have the prefix code for the source.

Suppose the prefix code is C = (c1, c2, …, cn) with lengths L = (l1, l2, …, ln) respectively.

Our problem of deriving a prefix code becomes a problem of how to construct a 0–1 tree so that

1.

All the symbols are leaves

2.

If pj > pi, then lj ≤ li, for all i, j = 1, …, n

3.

The two longest codewords are identical except for the last bit.

For example, symbol B has a higher frequency than L, therefore the codeword for B should be no longer than the codeword for L.

The longest codewords should be assigned to the more rare symbols which are the last two symbols in our sorted list:

Symbol:    B  L  E  I  A  T  S  N
Frequency: 3  2  2  1  1  1  1  1

If the codeword for S is 0000, then the codeword for N should be 0001.

There are two approaches to constructing a binary tree: one starts from the leaves and builds the tree from the bottom up to the root. This ‘bottom-up’ approach is used in Huffman encoding. The other starts from the root and works down to the leaves. This ‘top-down’ approach is used in Shannon–Fano encoding (Section 4.2).

4.1.1 Huffman approach

We first look at Huffman's ‘bottom-up’ approach. Here we begin with a list of symbols as the tree leaves. The symbols are repeatedly combined with other symbols or subtrees, two items at a time, to form new subtrees. The subtrees grow in size by combination on each iteration until the final combination before reaching the root.

In order to easily find the two items with the smallest frequency, we maintain a sorted list of items in descending order. With minor changes, the method also works if an ascending order list is maintained.

Figure 4.1 shows how the tree is built from the leaves to the root step by step. It carries out the following steps in each iteration:

Figure 4.1. Building a Huffman tree

1.

Combine the last two items which have the minimum frequencies or probabilities on the list and replace them by a combined item.

2.

The combined item, which represents a subtree, is placed according to its combined frequency on the sorted list.

For example, in Figure 4.1(1), the two symbols S and N (shaded), which have the smallest frequencies, are combined to form a new combined item SN with frequency 2, the sum of the frequencies of the singleton symbols S and N. The combined item SN is then inserted at the second position in Figure 4.1(2) to maintain the sorted order of the list.

Note that there may be more than one possible position. For example, in Figure 4.1(2), SN with a frequency of 2 could also be inserted immediately before symbol I, or before E. In such cases, we always place the newly combined item at the highest possible position to avoid it being combined again too soon. So SN is placed before L.

Generalising from the example, we derive the following algorithm for building the tree:

Building the binary tree

Sort the alphabet S = (s1, s2, …, sn) in descending order according to the associated probability distribution P = (p1, p2, …, pn). Each si represents the root of a subtree. Repeat the following until there is only one composite symbol in S:

1:

If there is only one symbol, the tree consists of the root, which is also the leaf. Otherwise, take the two symbols si and sj in the alphabet that have the lowest probabilities pi and pj.

2:

Remove si and sj from the alphabet and add a new combined symbol (si, sj) with probability pi + pj. The new symbol represents the root of a subtree. Now the alphabet contains one fewer symbol than before.

3:

Insert the new symbol (si, sj) at the highest possible position such that the alphabet remains in descending order.

Generating the prefix code

Once we have the binary tree, it is easy to assign a 0 to the left branch and a 1 to the right branch for each internal node of the tree as in Figure 4.2. The 0–1 values marked next to the edges are usually called the weights of the tree. A tree with these 0–1 labels is called a weighted tree. The weighted binary tree derived in this way is called a Huffman tree.

Figure 4.2. A Huffman tree

We then, for each symbol at a leaf, collect the 0 or 1 bit while traversing each tree path from the root to the leaf. When we reach a leaf, the collection of the 0s and 1s forms the prefix code for the symbol at that leaf. The codes derived in this way are called Huffman codes.

For example, the collection of the 0s and 1s from the root to the leaf for symbol E is first a left branch 0, then a right branch 1 and finally a left branch 0. Therefore the codeword for symbol E is 010. Traversing in this way for all the leaves, we derive the prefix code (10 001 010 011 110 111 0000 0001) for the whole alphabet (B, L, E, I, A, T, S, N) respectively. A prefix code generated in this way is called a Huffman code.

4.1.2 Huffman compression algorithm

We first outline the ideas of the Huffman compression algorithm, leaving out the details.

Algorithm 4.1 Huffman encoding ideas
1: Build a binary tree where the leaves of the tree are the symbols in the alphabet.
2: The edges of the tree are labelled by a 0 or 1.
3: Derive the Huffman code from the Huffman tree.

This algorithm is easy to understand. In fact, the labelling of the 0s and 1s does not have to wait until the entire Huffman tree has been constructed. A 0 or 1 can be assigned as soon as two items are combined, beginning from the least significant bit of each codeword.

We now add details and derive an algorithm as follows.

Algorithm 4.2 Huffman encoding
INPUT: a sorted list of one-node binary trees (t1, t2, …, tn) for alphabet (s1, …, sn) with frequencies (w1, …, wn)
OUTPUT: a Huffman code with n codewords
1: initialise a list of one-node binary trees (t1, t2, …, tn) with weights (w1, w2, …, wn) respectively
2: for k = 1; k < n; k = k + 1 do
3: take two trees ti and tj with minimal weights (wi ≤ wj)
4: t ← merge(ti, tj) with weight w ← wi + wj, where left_child(t) ← ti and right_child(t) ← tj
5: edge(t, ti) ← 0; edge(t, tj) ← 1
6: end for
7: output every path from the root of t to a leaf, where path_i consists of the consecutive edges from the root to leaf_i for si
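A runnable Python version of Algorithm 4.2 might look like the following sketch (not the book's own code). The tie-breaking and the left/right convention here differ from the 'highest possible position' rule above, so the codewords may differ from those in Figure 4.2, but the result is still a valid Huffman code with the same average length:

    import heapq
    import itertools

    def huffman_encode_table(frequencies):
        """frequencies: dict symbol -> weight. Returns dict symbol -> codeword."""
        counter = itertools.count()                # unique tie-breaker for the heap
        # a leaf is (symbol, None, None); an internal node is (None, left, right)
        heap = [(w, next(counter), (s, None, None)) for s, w in frequencies.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            w_i, _, t_i = heapq.heappop(heap)      # two subtrees with minimal weights
            w_j, _, t_j = heapq.heappop(heap)
            heapq.heappush(heap, (w_i + w_j, next(counter), (None, t_i, t_j)))
        _, _, root = heap[0]

        codes = {}
        def walk(node, path):
            symbol, left, right = node
            if symbol is not None:                 # leaf: the path is the codeword
                codes[symbol] = path or "0"        # degenerate 1-symbol alphabet
                return
            walk(left, path + "0")                 # edge to left child labelled 0
            walk(right, path + "1")                # edge to right child labelled 1
        walk(root, "")
        return codes

    # The BILL BEATS BEN example: frequencies (3, 2, 2, 1, 1, 1, 1, 1)
    print(huffman_encode_table({"B": 3, "L": 2, "E": 2, "I": 1, "A": 1, "T": 1, "S": 1, "N": 1}))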

Figure 4.3 shows an example of how this practical approach works step by step.

Figure 4.3. Deriving a Huffman code

Canonical and minimum-variance Huffman coding

We have followed the two ‘rules’ below as standard practice during the derivation of a Huffman tree in this section:

1.

A newly created item is placed at the highest possible position in the alphabet list while keeping the list sorted.

2.

When combining two items, the one higher up on the list is assigned 0 and the one lower down 1.

The Huffman code derived from a process that follows these rules is called a canonical and minimum-variance code. The code is regarded as standard and the length difference among the codewords is kept to the minimum. Huffman coding that follows these rules is called canonical and minimum-variance Huffman coding.

Note that the canonical and minimum-variance Huffman code is not necessarily unique for a given alphabet with an associated probability distribution, because there may be more than one way to sort the alphabet list. For example, the alphabet (B, L, E, I, A, T, S, N) with the frequencies (3, 2, 2, 1, 1, 1, 1, 1) may be sorted in several ways. Figure 4.4 shows two different canonical and minimum-variance Huffman trees for the same source, one based on (B, L, E, I, A, T, S, N) and the other on (B, L, E, I, A, S, T, N), the only difference being the positions of symbols T and S (see the highlighted symbols in both lists).

Figure 4.4. Two canonical and minimum-variance trees

4.1.3 Huffman decompression algorithm

The decompression algorithm ‘walks’ down from the root of the Huffman tree, following the coded bits, until a leaf is reached; the symbol at that leaf is then output.

Example 4.2

Decode the sequence 00000100001 using the Huffman tree in Figure 4.2.

Figure 4.5 shows the first seven steps of decoding the symbols S and E. The decoder reads the 0s or 1s bit by bit. The ‘current’ bit is shaded in the sequence to be decompressed at each step. The edge chosen by the decompression algorithm is marked with a bold line. For example, in step (1), starting from the root of the Huffman tree, we move along the left branch one edge down to the left child since a bit 0 is read. In step (2), we move along the left branch again to the left child since a bit 0 is read, and so on. When we reach a leaf, as in step (4), the symbol (the bold ‘S’) at the leaf is output. The process then restarts from the root in step (5) and continues until step (7), when another leaf is reached and the symbol ‘E’ is output.

Figure 4.5. Huffman decompression process

The decoding process ends when EOF is reached for the entire string.

We now outline the ideas of Huffman decoding.

Algorithm 4.3 Huffman decoding ideas
1:Read the coded message bit by bit. Starting from the root, we traverse one edge down the tree to a child according to the bit value. If the current bit read is 0 we move to the left child, otherwise, to the right child.
2:Repeat this process until we reach a leaf. If we reach a leaf, we will decode one character and restart the traversal from the root.
3:Repeat this read-and-move procedure until the end of the message.

Adding more details, we have the following algorithm:

Algorithm 4.4 Huffman decoding
INPUT: a Huffman tree and a 0-1 bit string of encoded message
OUTPUT: decoded string
1: initialise p ← root
2: while not EOF do
3: read next bit b
4: if b = 0 then
5: p ← left_child(p)
6: else
7: p ← right_child(p)
8: end if
9: if p is a leaf then
10: output the symbol at the leaf
11: p ← root
12: end if
13: end while
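A minimal Python sketch of this decoder (illustrative only), assuming the Huffman tree is given as nested pairs where a leaf is a symbol string and an internal node is a (left, right) pair:

    def huffman_decode(root, bits):
        """Walk down the tree bit by bit; emit a symbol and restart at the root on each leaf."""
        out = []
        p = root
        for b in bits:
            p = p[0] if b == "0" else p[1]     # 0 -> left child, 1 -> right child
            if isinstance(p, str):             # reached a leaf
                out.append(p)
                p = root
        return "".join(out)

    # This nested-tuple layout is only illustrative; it encodes S=0000, N=0001, L=001,
    # E=010, I=011, B=10, A=110, T=111, matching the code used with Figure 4.2.
    tree = (((("S", "N"), "L"), ("E", "I")), ("B", ("A", "T")))
    print(huffman_decode(tree, "00000100001"))   # -> "SEN"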
Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780750663106500076

4.5 Extended Huffman coding

One problem with Huffman codes is that they meet the entropy bound only when all the probabilities are negative powers of 2. What happens if the alphabet is binary, e.g. S = (a, b)? The only optimal case is when P = (pa, pb) with pa = 1/2 and pb = 1/2. Hence, Huffman codes can be bad.

Example 4.7

Consider a situation when pa = 0.8 and pb = 0.2

Solution

Since Huffman coding needs at least 1 bit per symbol to encode the input, the Huffman codewords are 1 bit per symbol on average:

l̄ = 1 × 0.8 + 1 × 0.2 = 1 bit

However, the entropy of the distribution is

H(P) = −(0.8 log2 0.8 + 0.2 log2 0.2) ≈ 0.72 bit

The efficiency of the code is

H(P) / l̄ = 0.72 / 1 = 72%

This gives a gap of 1 − 0.72 = 0.28 bit. The performance of the Huffman encoding algorithm is, therefore, 0.28/1 = 28% worse than optimal in this case.

The idea of extended Huffman coding is to encode a sequence of source symbols instead of individual symbols. The alphabet size of the source is artificially increased in order to improve the code efficiency. For example, instead of assigning a codeword to every individual symbol for a source alphabet, we derive a codeword for every two symbols.

The following example shows how to achieve this:

Example 4.8

Create a new alphabet S′ = (aa, ab, ba, bb) extended from S = (a, b). Let aa be A, ab be B, ba be C and bb be D. We now have an extended alphabet S′ = (A, B, C, D). Each symbol in the alphabet S′ is a combination of two symbols from the original alphabet S. The size of the alphabet S′ increases to 22 = 4.

Suppose the symbols ‘a’ and ‘b’ occur independently. The probability distribution for the extended alphabet S′ can then be calculated as follows:

pA = pa × pa = 0.64
pB = pa × pb = 0.16
pC = pb × pa = 0.16
pD = pb × pb = 0.04

We then follow the normal static Huffman encoding algorithm (Section 4.1.2) to derive the Huffman code for S′.

The canonical minimum-variance code for S′ is (0, 11, 100, 101), for A, B, C, D respectively. The average length is 1.56 bits for two symbols.

The output then becomes 1.56/2 = 0.78 bit per original symbol. The efficiency of the code has been increased to 0.72/0.78 ≈ 92%. This is only (0.78 − 0.72)/0.78 ≈ 8% worse than optimal.
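The numbers in this example can be checked with a short Python sketch (illustrative only):

    import math

    # Extend the binary alphabet (a, b) with p_a = 0.8, p_b = 0.2 to pairs, use the
    # code (A=0, B=11, C=100, D=101), and compare the per-symbol cost with the entropy.
    p = {"a": 0.8, "b": 0.2}
    entropy = -sum(q * math.log2(q) for q in p.values())           # ~0.72 bit/symbol

    pair_prob = {x + y: p[x] * p[y] for x in p for y in p}          # aa, ab, ba, bb
    code_len = {"aa": 1, "ab": 2, "ba": 3, "bb": 3}                 # lengths of 0, 11, 100, 101
    avg_pair = sum(pair_prob[s] * code_len[s] for s in pair_prob)   # 1.56 bits per pair
    per_symbol = avg_pair / 2                                       # 0.78 bit per original symbol
    # Efficiency evaluates to roughly 0.93; the text's ≈ 92% uses the rounded entropy 0.72.
    print(entropy, avg_pair, per_symbol, entropy / per_symbol)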

This method is supported by Shannon's fundamental theorem of discrete noiseless coding:

Theorem 4.1

For a source S with entropy H(S), it is possible to assign codewords to sequences of m letters of the source so that the prefix condition is satisfied and the average codeword length l̄m (per block of m source symbols) satisfies

H(S) ≤ l̄m / m < H(S) + 1/m
Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780750663106500076

Summary

Statistical models and a heuristic approach give rise to the celebrated static Huffman and Shannon-Fano algorithms. The Huffman algorithm takes a bottom-up approach while Shannon-Fano takes a top-down approach. Implementation issues make Huffman codes more popular than Shannon-Fano codes. Maintaining two tables may improve the efficiency of the Huffman encoding algorithm. However, Huffman codes can give poor compression performance when the alphabet is small and the probability distribution of the source is skewed. In this case, extending the small alphabet and encoding the source in small groups of symbols may improve the overall compression.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780750663106500076

Exercises

E4.1

Derive a Huffman code for the string AAABEDBBTGGG.

E4.2

Derive a Shannon-Fano code for the same string.

E4.3

Provide an example to show step by step how the Huffman decoding algorithm works.

E4.4

Provide a similar example for the Shannon-Fano decoding algorithm.

E4.5

Given an alphabet S = (A, B, C, D, E, F, G, H) of symbols with the probabilities 0.25, 0.2, 0.2, 0.18, 0.09, 0.05, 0.02, 0.01 respectively in the input, construct a canonical minimum-variance Huffman code for the symbols.

E4.6

Construct a canonical minimum-variance code for the alphabet A, B, C, D with probabilities 0.4, 0.3, 0.2 and 0.1 respectively. If the coded output is 101000001011, what was the input?

E4.7

Given an alphabet (a, b) with pa = 1/5 and pb = 4/5, derive a canonical minimum-variance Huffman code and compute:

(a)

the expected average length of the Huffman code

(b)

the entropy of the Huffman code.

E4.8

Following the Shannon-Fano code in Example 4.4, decode 0010001110100 step by step.

E4.9

Given a binary alphabet (X, Y) with pX = 0.8 and pY= 0.2, derive a Huffman code and determine the average code length if we group three symbols at a time.

E4.10

Explain with an example how to improve the entropy of a code by grouping the alphabet.

E4.11

Derive step by step a canonical minimum-variance Huffman code for alphabet (A, B, C, D, E, F), given the probabilities below:

Symbol   Probability
A        0.3
B        0.2
C        0.2
D        0.1
E        0.1
F        0.1

Compare the average length of the Huffman code to the optimal length derived from the entropy distribution. Specify the unit of the codeword lengths used.

Hint: log10 2 ≈ 0.3; log10 0.3 ≈ −0.52; log10 0.2 ≈ −0.7; log10 0.1 = −1.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780750663106500076