What does this mean (interpretation)? "As can be easily seen, only number systems with bases 2^n have an integral number of bits per symbol, and can thus be efficiently represented in binary form." (Adoption of the Octal Number System)

What type of proof would be feasible? Is a nonconstructive proof possible? I'm just looking for a direction, because I want to learn how to do this myself.

Again, I'm not asking anyone to do my assignment for me, just hint me at a general direction of how to prove this.

Thanks to anyone who responds.



A bit (binary digit) is a base 2 symbol, so it follows that only number systems with base 2 will have an integral number of bits per symbol.

But you could extend that statement to any base, base 3, 10 or 16 for example. There's nothing special about using base 2.



I don't understand how one would calculate the number of bits per symbol.

x := integer exponent

k := base, where k = 2^x

base^n = number of combinations (patterns)

(2^x)^n = 2^(x*n) combinations?

k = 2^x:

 k | x

 2 | 1

 4 | 2

 8 | 3

16 | 4

32 | 5

rate = bits / symbols...?

rate = n / 2^x

and the rate depends on the number of digits; there are k symbols (the base) to choose from.

k^n / k = k^(n-1)

I mean, I can take the statement at face value: "so it follows that only number systems with base 2 will have an integral number of bits per symbol."

But I'm trying to see how you arrive at that conclusion.



kbw wrote:

"But you could extend that statement to any base, base 3, 10 or 16 for example. There's nothing special about using base 2."

Don't confuse the OP. He's working with bit representations on binary computer systems.

Thought experiments:

(1)

unary: digits = { 0 }, of which there is exactly one.

There is no number of bits that can hold exactly one value.

(One bit holds two values, which is too many.)

(2)

binary: digits = { 0, 1 }, of which there are exactly two.

A single bit can hold exactly two values.

(3)

ternary: digits = { 0, 1, 2 }, of which there are exactly three.

There is no number of bits that can hold exactly three values.

(One bit holds two values, which is too few; two bits holds four values, which is too many.)

(4)

quaternary: digits = { 0, 1, 2, 3 }, of which there are exactly four.

Two bits can hold exactly four values.

(5)

quinary: digits = { 0, 1, 2, 3, 4 }, of which there are exactly five.

There is no number of bits that can hold exactly five values.

(Two bits holds four values, which is too few; three bits holds eight values, which is too many.)

etc.

Consider:

n bits holds exactly 2^n values.

n-ary has exactly n digits.

You should be able to put this together.

Hope this helps.
