Decimal to Binary conversion (16-bit)

I need help with a C++ programming problem; for some reason I can't even think of a way to start on it. Any help would be greatly appreciated. Also, I'm not allowed to use any existing functions or packages to perform the conversions; it all has to be written out by hand.

Program requirements:

16-bit binary number to/from decimal integer (unsigned)

16-bit binary number to/from hexadecimal integer (unsigned)

Decimal to/from 16-bit binary 2's complement signed integer number

This problem is worded somewhat strangely. I'll try to explain without confusing you even more.

Computers have a concept of numbers. They do not really have a concept of binary<->decimal<->hexadecimal. To a computer, the number ten is simply the number ten.

However, in English, we have multiple different ways to illustrate the number ten. Here are a few:


ten   (textual)
10   (decimal)
0x0A   (hexadecimal)
00001010  (binary)
012  (octal)
X  (roman numeral)

etc


All of these things represent the exact same number: the number ten. The key difference here is that they're all represented in text.

Conversely, the computer does not typically represent numbers in text form... therefore it has only one way to represent the number.





So this problem is not so much a problem about converting a number base... it's about parsing a string of text, building a number from it, then building a new string of text that illustrates the same number in a different form.

So any such conversion is:

text->number->text


So to start, you'll want a function to convert text->number. (Actually you will want several text->number functions since, as previously mentioned, the text can take several different forms, and you'll probably want a different function for each form.)

int decimalTextToInt( std::string text )  // text would contain the string "10" to represent ten
{
    // ... convert the "10" string to the number 10, and return that number
}

int binarySigned16TextToInt( std::string text )  // text would contain the string
           //   "1111111111111111" to represent the number 'negative one'
{
    // ... convert the string to the number -1, and return that number
}

// etc 



You might want to choose simpler names, but hopefully you get the idea.
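For example, here's a bare-bones sketch of how those two could look (no input validation, and assuming the binary string is exactly 16 characters of '0'/'1'):

#include <string>

int decimalTextToInt( std::string text )
{
    int result = 0;
    for( char c : text )
        result = (result * 10) + (c - '0');  // shift left one decimal place, add the new digit
    return result;
}

int binarySigned16TextToInt( std::string text )
{
    int result = 0;
    for( char c : text )
        result = (result * 2) + (c - '0');   // same idea, but in base 2
    if( result & 0x8000 )                    // high bit set -> the number is negative
        result -= 0x10000;                   // reinterpret as 16-bit 2's complement
    return result;
}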


From there, you just do the reverse, and have a series of functions to convert a number into different types of text.
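For instance, a possible sketch for the binary direction (the name is just an example):

#include <string>

std::string intToBinary16Text( int number )
{
    unsigned bits = static_cast<unsigned>(number) & 0xFFFFu;  // keep only the low 16 bits
    std::string text;
    for( int bit = 15; bit >= 0; --bit )                      // high bit first
        text += ( (bits >> bit) & 1u ) ? '1' : '0';
    return text;
}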
Okay, thanks for your help Disch, everything you said makes perfect sense. If you could help me figure out how to get the 2's complement of a binary number, I'd be grateful.

Getting the 1's complement is a cinch; however, I'm struggling with adding the 1 to get the 2's complement. I can't seem to get the right number every time.

Thanks again for your help.
Digits in binary have a "weight" of 2^n, where n is the bit number.

So...
bit n ->    weight
___________________
bit 0 ->  2^0  -> 1
bit 1 ->  2^1  -> 2
bit 2 ->  2^2  -> 4
bit 3 ->  2^3  -> 8
...
bit 7 ->  2^7  -> 128



You can calculate the value of any given binary number by summing the weight of the bits that are set.

Example:

00100100
^      ^
|      |
|      bit 0
bit 7

Here, bits 2 and 5 are set

Therefore the value is:
2^2 + 2^5
 4  +  32
    36
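In code, that weighted sum could look something like this (a rough sketch with a made-up name; assumes the string contains only '0' and '1'):

#include <string>

int binaryTextToUnsigned( const std::string& text )
{
    int value = 0;
    int bit = 0;
    // walk from the rightmost character (bit 0) to the leftmost
    for( int i = static_cast<int>(text.size()) - 1; i >= 0; --i, ++bit )
        if( text[i] == '1' )
            value += (1 << bit);   // add this bit's weight: 2^bit
    return value;
}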



Signed 2's complement numbers work the exact same way. The only difference is that the weight of the highest bit is negative. So, with a 16-bit number, the highest bit is bit 15, and it has a weight of -(2^15).

Another example (8-bit to keep it simple, but the same concept applies to 16-bit, just use bit 15 as the high bit instead of bit 7):

10000010
^      ^
|      |
|      bit 0
bit 7

Here, bits 1 and 7 are set

Therefore the value is:
2^1 + -(2^7)
 2  -  128
    -126
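In code, the only change from an unsigned weighted sum is the weight of the top bit. For a 16-bit value, a possible sketch (same job as the earlier binarySigned16TextToInt, just written with explicit weights):

#include <string>

int binarySigned16ToInt( const std::string& text )   // assumes exactly 16 characters
{
    int value = 0;
    for( int bit = 0; bit < 16; ++bit )
        if( text[15 - bit] == '1' )                  // rightmost character is bit 0
            value += (bit == 15) ? -(1 << 15)        // bit 15 weighs -(2^15)
                                 :  (1 << bit);      // every other bit weighs 2^bit
    return value;
}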





-------------------------------------



Alternatively (and likely more simply), if you have an unsigned 16-bit number and you want to make it signed, there's some bit magic you can do:

int foo = some_unsigned_16_bit_number;

// make it signed:
foo = (foo ^ 0x8000) - 0x8000;

// now it's signed! 



EDIT:

If you want an explanation of how that bit magic works...

XORing with 0x8000 toggles bit 15.

So if the number is positive, bit 15 is clear, which means the ^ operator will set it. Then we immediately subtract bit 15's weight (0x8000), so the value remains unchanged.

But if the number is negative, bit 15 is set, which means the ^ operator will clear it. At that point, subtracting 0x8000 effectively removes the bit's original positive weight and replaces it with a negative weight.
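If you want to convince yourself, here's a quick test of the trick (values picked arbitrarily):

#include <iostream>

int main()
{
    int pos = 0x0002;   // 16-bit pattern for 2
    int neg = 0xFFFF;   // 16-bit pattern for -1 in 2's complement

    std::cout << ((pos ^ 0x8000) - 0x8000) << '\n';   // prints 2
    std::cout << ((neg ^ 0x8000) - 0x8000) << '\n';   // prints -1
}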