General advanced programming question (not specifically related to C++)

For over a year now I've wondered... well, let me put it this way: you know when you type a letter (like the letter z, not a letter to your friend), how does the computer know what the letter z is supposed to look like? How does the computer know that the letter z isn't supposed to look like a Japanese character, or anything else for that matter?

P.S. I hope I made my question clear because I don't know if I can make it any clearer.

@mods and admins

Sorry if I put this thread in the wrong subforum. I didn't see a better one to put it in.
Maybe the lounge would've been better, but I wasn't sure.
There are different character encodings, such as ASCII and Unicode. ASCII can only represent 128 characters, while Unicode has codes for over 110,000. Each character corresponds to a number.
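
To make "each character corresponds to a number" concrete, here's a minimal C++ sketch (assuming an ASCII-compatible execution character set, which nearly every modern system uses):

#include <iostream>

int main()
{
    char c = 'z';

    // A char is just a small integer. In ASCII (whose values
    // Unicode reuses for this range), 'z' is the number 122.
    std::cout << "character: " << c << '\n';
    std::cout << "code:      " << static_cast<int>(c) << '\n';
}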
In a computer, a character is made up of several parts:
1. The "semantics" of the character: for example, minus and dash may look the same, despite meaning different things in different context. The semantics is what gives a character identity.
2. The encoding of the character: what C programmers think of when they talk about characters. This is how a computer represents and stores text data. Minus and dash are represented using the same number in ASCII.
3. The glyph: the graphical shape of the character.
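
To illustrate points 1 and 2, here is a small sketch (again assuming an ASCII-compatible character set): one stored number, two possible meanings.

#include <iostream>

int main()
{
    // Encoding: ASCII has a single code, 45, that serves for
    // both "minus" and "dash"; the semantics are left to context.
    std::cout << "'-' is stored as code " << static_cast<int>('-') << '\n';

    // Unicode gives the two meanings separate identities even
    // though the glyphs look almost identical: U+002D HYPHEN-MINUS
    // versus U+2212 MINUS SIGN.
}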

Displaying text means transforming a string of character codes into a string of glyphs. Ultimately, this is done using a glyph table that was prepared by hand by a typographer. The glyph table can have many forms: it could be an array of bitmaps in the graphics card or printer, or a table of vector images in a font file that has to be rasterized (converted to a bitmap) to be displayed. TrueType is an example of the latter, while text-only screen modes and line printers are examples of the former.
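
As a toy version of the bitmap form, here is a sketch that renders a hand-made 8x8 glyph for 'Z' to the console. The bit pattern is invented for this example, not taken from any real font, but text-mode character generators work on the same principle:

#include <cstdint>
#include <iostream>

int main()
{
    // A hand-drawn 8x8 bitmap glyph for 'Z': one byte per row,
    // one bit per pixel, like one entry in a glyph table.
    const std::uint8_t glyph_Z[8] = {
        0b11111110,
        0b00000110,
        0b00001100,
        0b00011000,
        0b00110000,
        0b01100000,
        0b11111110,
        0b00000000
    };

    for (std::uint8_t row : glyph_Z) {
        // Scan the bits left to right, drawing '#' for a set
        // pixel and a space for a clear one.
        for (int bit = 7; bit >= 0; --bit)
            std::cout << (((row >> bit) & 1) ? '#' : ' ');
        std::cout << '\n';
    }
}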
helios wrote:
Displaying text means transforming a string of character codes into a string of glyphs. [...]


Thanks, that explains it. Man, I'd love to see that done. I have one more question: how was the computer told what the colors of the rainbow look like?
Simply put, most computer applications, even some that do handle color, are not aware that color is a thing. For example, you can multiply a number by 2 without knowing whether you just made something twice as loud or twice as bright.
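
Here's a sketch of that idea: the same function could be making a sound twice as loud or a pixel twice as bright, and nothing in it knows which.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Doubles every byte, clamping at 255. Whether the bytes are
// audio samples (twice as loud) or pixel values (twice as
// bright) is meaning the program never sees.
void double_values(std::vector<std::uint8_t>& data)
{
    for (std::uint8_t& v : data)
        v = static_cast<std::uint8_t>(std::min(255, v * 2));
}

int main()
{
    // Could be three audio samples, or one RGB pixel.
    std::vector<std::uint8_t> data = { 10, 100, 200 };
    double_values(data);
    for (int v : data)
        std::cout << v << ' ';   // prints: 20 200 255
    std::cout << '\n';
}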
letter pressed -> keyboard matrix -> scancode sent to the keyboard controller (on X11 you can watch these codes with xev) -> the OS keymap translates the scancode -> code for the letter Z -> that code is looked up in the font map -> a block of pixels (bytes) is sent to the graphics display service, which draws the character on the screen. Color is displayed by mixing the red, green, and blue (RGB) components of each pixel to simulate colors for our eyes. The more bits per pixel, the more colors.
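Here's a toy sketch of the first two steps of that chain: a keymap lookup from scancode to character code. Scancode 44 really is Z in the classic PC scancode set 1, but treat all the values here as illustrative.

#include <iostream>
#include <map>

int main()
{
    // A toy keymap: hardware scancode -> character code. Real
    // systems use bigger tables and also handle modifiers,
    // layouts, key-release events, and so on.
    const std::map<int, char> keymap = {
        { 44, 'z' },
        { 30, 'a' },
        { 31, 's' }
    };

    int scancode = 44;               // "letter pressed"
    char c = keymap.at(scancode);    // "translate via the keymap"

    // From here the character code would index the font map to
    // fetch pixels, as in the earlier bitmap example.
    std::cout << "scancode " << scancode << " -> '" << c
              << "' (character code " << static_cast<int>(c) << ")\n";
}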

How color works: the bits of a pixel are blended to simulate color.

http://webstyleguide.com/wsg2/graphics/displays.html

More on color and greyscale:

http://www710.univ-lyon1.fr/~jciehl/Public/OpenGL_PG/ch05.html

How pixels work to display letters and such:

http://www710.univ-lyon1.fr/~jciehl/Public/OpenGL_PG/ch09.html