How to convert data in char array to hex?

Or, the C++20 way:

#include <iostream>
#include <iterator>
#include <format>

int main()
{
	constexpr char sample[] {"This sample 12! @#$9 stream of characters."};

	std::cout << std::format("Size: {}\n{:-<22}\n{: >5}{: >5}{: >5}{: >6}\n{:-<22}\n", std::size(sample), '-', "POS", "DEC", "HEX", "CHAR", '-');

	for (size_t position {}; const unsigned char ch : sample)
		std::cout << std::format("{0:5}{1:5}{1:5x}{2:>5c}", position++, static_cast<unsigned>(ch), ch) << '\n';
}
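For comparison, a rough pre-C++20 sketch of the same table using iostream manipulators (header row omitted for brevity; column widths chosen to roughly match the format string above):

#include <iostream>
#include <iomanip>
#include <iterator>

int main()
{
	constexpr char sample[] {"This sample 12! @#$9 stream of characters."};

	std::cout << "Size: " << std::size(sample) << '\n';

	std::size_t position {};
	for (const unsigned char ch : sample)
	{
		std::cout << std::dec << std::setw(5) << position++                 // POS
		          << std::setw(5) << static_cast<unsigned>(ch)              // DEC
		          << std::hex << std::setw(5) << static_cast<unsigned>(ch)  // HEX
		          << std::setw(5) << ch << '\n';                            // CHAR
	}
}

Note that the stream base is sticky, so std::dec is reapplied at the start of every row after std::hex was used for the HEX column.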

I need to see if I understood this correctly...

When I print out the chars in my array, after casting to int...

like this for example, range loop:
cout << (int) mychararray[i]

I get some decimal values and some hex values. I wonder about this because I thought int only printed out base-10 decimal values. But if I understand correctly, the hex representation of the binary is printed instead when the binary value is >= the max (256) for 8 bits?
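For reference, a minimal sketch (the array contents here are just made-up sample characters) of what streaming (int) values actually prints, and that hex appears only when explicitly requested:

#include <iostream>

int main()
{
	char mychararray[] {'A', '0', ' ', '~'};

	for (char c : mychararray)
		std::cout << (int) c << ' ';              // decimal by default: 65 48 32 126
	std::cout << '\n';

	for (char c : mychararray)
		std::cout << std::hex << (int) c << ' ';  // hex only when asked: 41 30 20 7e
	std::cout << '\n';
}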
you get a hex value. 0-9, 10 is a, 11 is b, ... there are no decimal values, but hex and decimal share 10 of the 16 digits when printed in text.
the hex and binary of the integer are the same value, yes.
the hex, decimal, binary, and every other way you can print the number are now sizeof(int)*CHAR_BIT bits long instead of sizeof(char)*CHAR_BIT bits long; that is typically 32 vs 8.
for negative numbers in 2's complement, which is the typical computer approach (though not on all computers), the bits outside the 8 bits of a char may be set, making the number very large if viewed as unsigned or printed in hex (it still represents the same value). So again, -4 is -4 whether int or char, but for the int 30 bits are set, while for the char only 6 are. Printed bit by bit, or in hex (which is really just a compact way to print binary), you can see this. **

** each hex digit converts directly to 4 bits, and hex is often used as shorthand for 'binary' when printing the full bits is not required.
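As a concrete sketch of that point (assuming an 8-bit char and a 32-bit two's complement int), printing -4 through a signed char and through an int:

#include <iostream>

int main()
{
	signed char sc = -4;
	int         i  = -4;

	// Same value, but the int has more bits set once sign-extended.
	std::cout << std::hex
	          << (unsigned)(unsigned char)sc << '\n'   // fc        (8 bits, 6 of them set)
	          << (unsigned)i << '\n';                  // fffffffc  (32 bits, 30 of them set)
}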
if you promote a negative value to a larger signed type, some of the high-order bits get set in the process of keeping it negative (sign extension), and that is what makes it a gigantic value in your case.


Do you mean like this? For example:


8 bits, with a 1 in the leading (sign) bit and some bits set after it, so this is negative because I used a signed array:
- - - - - - - -
1 x x x x x x x

When converted to int, does it become like this?

... x x x x x x x x x x x x 1 x x x x x x x

The 1 is no longer in the leading position, so is that why I got the large positive number?
No. When cast to int, a signed char with value
1xxxxxxx
becomes
11111111 11111111 11111111 1xxxxxxx
because that's how that negative value is represented in larger types.
For example, (signed char)-1 is represented as
11111111
(signed char)-2 is represented as
11111110
and so on. Likewise, (int)-1 and (int)-2 are represented as
11111111 11111111 11111111 11111111
and
11111111 11111111 11111111 11111110
respectively. You can think of it like this: the representation is the unsigned number that, when you add the negative value's magnitude (the value with its sign inverted), wraps around to zero. So 11111111 11111111 11111111 11111111 (4294967295) represents (int)-1, because when you add 1 to that value the number loops back around to zero.
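A small sketch of those bit patterns using std::bitset (assuming an 8-bit char and a 32-bit int):

#include <bitset>
#include <iostream>

int main()
{
	signed char c = -2;
	int         i = -2;

	// Same value -2, shown at 8 bits and at 32 bits.
	std::cout << std::bitset<8>(static_cast<unsigned char>(c)) << '\n'   // 11111110
	          << std::bitset<32>(static_cast<unsigned>(i))     << '\n';  // 11111111111111111111111111111110
}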
there are no decimal values


Now you have confused me. The hex representations of the values stored in my array are 16 and 20, but the output shows the decimal versions of 16 and 20, which are 22 and 32.
Let's back up a bit. When you said this:
like this for example, range loop:
cout << (int) mychararray[i]

I get some decimal values and some hex values.
What did you mean by "some decimal values and some hex values"? What was the actual output verbatim, and what do you interpret it to mean?

Now you have confused me. The hex representations of the values stored in my array are 16 and 20, but the output shows the decimal versions of 16 and 20, which are 22 and 32.


if you have hex 20, which is decimal 32, those are just two of an infinite number of ways to write the same 'magnitude'. The computer understands some of these because humans programmed it to (it understands binary, octal, decimal, and hex, as well as scientific notation and some other formats) for ease of use. So you can print that magnitude on the screen in any of many formats, and you can also type the magnitude into the computer (for a cin statement, for example, or directly in your code) in any of many ways. It does not matter: the computer will translate whatever format you give it into binary deep in the circuitry.

the computer never stores a value 'in hex' or 'in decimal' unless it does so as a *text string*, which, to the math circuits on the cpu, is gibberish. It can't understand those, and needs to run a (rather slow) algorithm over the text to get it back into binary. It does that because the programmer told it to; you effectively say "hey computer, these bytes need to run through this function to produce an integer you can use".

you want to think in terms of that magnitude, not how it looks on screen (which you can control, if you are the programmer). It takes a while to get there. At times hex, decimal, or even binary may make the concept you are working on easier to visualize, so use whichever one makes sense, but the value is not stored in any of those formats.
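A minimal sketch of that idea: one stored value, shown in several text formats, then parsed back from hex text:

#include <iostream>
#include <string>

int main()
{
	int value = 32;                  // one magnitude, held in binary internally

	std::cout << std::dec << value << '\n'    // 32
	          << std::hex << value << '\n'    // 20
	          << std::oct << value << '\n';   // 40

	// Text -> number: the "rather slow algorithm over the text" mentioned above.
	int parsed = std::stoi("20", nullptr, 16);   // hex text "20" becomes the value 32
	std::cout << std::dec << parsed << '\n';     // 32
}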