Question:
How did programmers first tell computers how each letter is shaped? (Read the description before answering.)
Tyklon
2015-06-14 03:55:54 UTC
I get how each character has a binary code, and how one could program from that point on. I know how binary code works, and that programming with characters is really just a way to program with binary faster. But with just binary code, how did programmers shape each letter? Obviously they didn't *at first* just type a binary number and have a letter appear. How did they program the SHAPE of each letter without any programs to do so? Or the shape of any on-screen character, for that matter?

I have never been able to find an answer to this. Everyone just gives the history of binary code or how it works. That is not what I am asking.
Four answers:
?
2015-06-14 04:48:17 UTC
The very earliest computers used teletypes and chain printers, so the shape of each letter was engraved on a piece of metal type that was physically pressed against a carbon ribbon and onto a piece of paper. The computer didn't need to know the shape.



In the 1970s, "glass" teletypes/Visual Display Units (VDUs) became common. These replaced the noisy, slow paper teletypes with a faster equivalent that displayed characters on a small CRT. The actual character shapes were built into the device, so again the computer didn't need to know them. Within the device there was a Read-Only Memory (ROM) that stored each letter shape as a matrix of, say, 8x8 bits - that is, 8 bytes to store the shape of each character. Each byte represented one row of pixels, with a 1 bit being ON and a 0 bit being OFF. When the device displayed a character, it looked up the correct place in the ROM and read out which bits to turn on and off. This was the beginning of bit-mapped fonts.
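To make that more concrete, here is a small C sketch of the idea (my own illustration, not an actual character-ROM dump - the 'A' pattern and the terminal output are just stand-ins for the ROM contents and the CRT): each glyph is eight bytes, one per row, and each bit in a byte says whether that pixel is lit.

#include <stdio.h>
#include <stdint.h>

/* One glyph = 8 bytes, one byte per row of an 8x8 cell.
   Bit 7 is the leftmost pixel, bit 0 the rightmost.
   This 'A' pattern is only an illustration, not a real ROM dump. */
static const uint8_t glyph_A[8] = {
    0x18,  /* ...XX... */
    0x24,  /* ..X..X.. */
    0x42,  /* .X....X. */
    0x42,  /* .X....X. */
    0x7E,  /* .XXXXXX. */
    0x42,  /* .X....X. */
    0x42,  /* .X....X. */
    0x00   /* ........ */
};

int main(void)
{
    /* Walk the 8 rows; within each row, test the 8 bits from
       left (bit 7) to right (bit 0) and print '#' for ON, '.' for OFF. */
    for (int row = 0; row < 8; row++) {
        for (int col = 0; col < 8; col++) {
            int on = (glyph_A[row] >> (7 - col)) & 1;
            putchar(on ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}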



As technology advanced, people wanted more control over what they saw on the screen and, for businesses, over what they printed out. This led to the development of "font" files. Initially these were extensions of the simple bit-mapped fonts used in the VDUs. Each character might be stored as a matrix of bits, sometimes as large as 300 x 300 bits for 1"-high (72 point) characters - and that's for just ONE character. You'd need a different font file for each point size you were going to use, plus separate files for the display and the printer. Obviously this ate up storage, so there was a lot of effort to produce font files that kept all the detail of the characters in a much smaller space and that could easily be scaled anywhere from 8 point to 80 point. Several viable formats were produced, such as Adobe's PostScript fonts and TrueType, which was developed by Apple and later adopted by Microsoft.
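To see why this ate up storage, here is a quick back-of-the-envelope calculation in C (the glyph count and number of point sizes are my own illustrative assumptions, not figures from the answer):

#include <stdio.h>

int main(void)
{
    /* Rough sizing for a 300x300 one-bit-per-pixel glyph (72 pt at 300 dpi). */
    long bits_per_glyph  = 300L * 300L;          /* 90,000 bits            */
    long bytes_per_glyph = bits_per_glyph / 8;   /* 11,250 bytes           */
    long glyphs          = 95;                   /* printable ASCII set    */
    long sizes           = 10;                   /* e.g. a range of sizes  */

    printf("bytes per glyph : %ld\n", bytes_per_glyph);
    printf("one point size  : %ld bytes\n", bytes_per_glyph * glyphs);
    printf("ten point sizes : %ld bytes\n", bytes_per_glyph * glyphs * sizes);
    return 0;
}

That works out to roughly 11 KB per character, about 1 MB per point size, and over 10 MB for ten sizes of a single typeface - a great deal of storage for the machines of the time.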



A TrueType font file contains a description of each character as a set of mathematical curves (its outline), plus a further series of "hints" to deal with special situations. However, no matter how it's stored, the description is eventually converted into a block of pixels that are set to an appropriate colour to display the character against a background.
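To illustrate the "mathematical curves" part, here is a rough C sketch (mine, with made-up control points - it is not code from any real font engine): TrueType outlines are built from straight lines and quadratic Bezier curves, each curve defined by two on-curve points and one control point, and a rasteriser samples curves like this before filling the enclosed area with pixels.

#include <stdio.h>

/* A 2-D point in font units. */
typedef struct { double x, y; } Point;

/* Evaluate one quadratic Bezier curve B(t) = (1-t)^2*p0 + 2t(1-t)*c + t^2*p1,
   the curve primitive TrueType outlines are built from. */
static Point quad_bezier(Point p0, Point c, Point p1, double t)
{
    double u = 1.0 - t;
    Point r;
    r.x = u * u * p0.x + 2.0 * u * t * c.x + t * t * p1.x;
    r.y = u * u * p0.y + 2.0 * u * t * c.y + t * t * p1.y;
    return r;
}

int main(void)
{
    /* Made-up control points standing in for one segment of a glyph outline. */
    Point p0 = {0.0, 0.0}, c = {50.0, 100.0}, p1 = {100.0, 0.0};

    /* Sample the curve; a real rasteriser would turn samples like these
       into scanline crossings and fill between them. */
    for (int i = 0; i <= 10; i++) {
        double t = i / 10.0;
        Point p = quad_bezier(p0, c, p1, t);
        printf("t=%.1f  (%6.1f, %6.1f)\n", t, p.x, p.y);
    }
    return 0;
}

A real renderer walks every curve in the outline this way, finds where the outline crosses each row of pixels, and fills between the crossings.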
Chris
2015-06-14 04:29:20 UTC
The first computers that had graphical displays, i.e. pixel-based ones, used bitmap fonts. Each letter was a small two-color image. The image representing an "A" was assigned to ASCII code 65, and the computer was told to display that image whenever a file contained an "A".

Shaping was probably done by drawing a big capital A on quad paper and feeding the pixels into the computer one by one. It's also entirely possible to write, in binary, a GUI-based program to design the letters directly on the computer.
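To show what that ASCII-code-to-image mapping amounts to, here is a minimal C sketch (my own, with a deliberately tiny made-up "font" covering only 'A' and 'B'): the character code is used as an index into a table of bitmaps, and displaying a string is just copying out the selected bitmaps row by row.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* A toy "font" covering only 'A' (ASCII 65) and 'B' (ASCII 66):
   each glyph is 8 rows of 8 one-bit pixels. The patterns are made up. */
static const uint8_t font[2][8] = {
    {0x18, 0x24, 0x42, 0x7E, 0x42, 0x42, 0x42, 0x00},  /* 'A' */
    {0x7C, 0x42, 0x42, 0x7C, 0x42, 0x42, 0x7C, 0x00},  /* 'B' */
};

/* Print a string by looking up each character code in the font table. */
static void draw_string(const char *s)
{
    size_t len = strlen(s);
    for (int row = 0; row < 8; row++) {              /* one pixel row at a time */
        for (size_t i = 0; i < len; i++) {
            int idx = s[i] - 'A';                    /* 65 -> slot 0, 66 -> slot 1 */
            if (idx < 0 || idx > 1) idx = 0;         /* toy font: fall back to 'A' */
            for (int col = 0; col < 8; col++)
                putchar((font[idx][row] >> (7 - col)) & 1 ? '#' : '.');
            putchar(' ');
        }
        putchar('\n');
    }
}

int main(void)
{
    draw_string("ABBA");
    return 0;
}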
2015-06-14 05:03:30 UTC
Programmers don't do that. That's the work of a font designer. Chris is right on target with the assumptions about quad paper (old school) and software assistance (pretty much needed to make a modern font).



Each character (called a "glyph") is drawn as a black-and-white bitmapped image with something like 1000x1000 resolution. This is what tells the computer "how the letter is shaped."



In older, raster-type fonts, this image would be down-sampled to each font size to be used. Newer font formats like TrueType instead use a compact description of how to draw the boundary of each character smoothly. That description takes much less space than the original megabit image and is also easy to resize. Edge-detection algorithms find the edges of the glyph and convert them into a sequence of steps for drawing the outline smoothly.
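As a toy illustration of the down-sampling step (my own sketch, using a 16x16 "master" image in place of a realistic 1000x1000 one): each pixel of the small glyph simply picks the corresponding spot in the big hand-drawn image.

#include <stdio.h>

#define SRC 16   /* side of the high-resolution master image */
#define DST 8    /* side of the smaller screen-size bitmap   */

/* Nearest-neighbour down-sampling: each destination pixel just picks
   the source pixel that lies at the corresponding position. */
static void downsample(const int src[SRC][SRC], int dst[DST][DST])
{
    for (int y = 0; y < DST; y++)
        for (int x = 0; x < DST; x++)
            dst[y][x] = src[y * SRC / DST][x * SRC / DST];
}

int main(void)
{
    int master[SRC][SRC] = {0};
    int small[DST][DST];

    /* Fake "master glyph": a thick diagonal stroke, standing in for the
       large hand-drawn bitmap described above. */
    for (int y = 0; y < SRC; y++)
        for (int x = 0; x < SRC; x++)
            if (x >= y - 1 && x <= y + 1)
                master[y][x] = 1;

    downsample(master, small);

    for (int y = 0; y < DST; y++) {
        for (int x = 0; x < DST; x++)
            putchar(small[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}

Nearest-neighbour sampling like this is the crudest option, which is one reason small bitmap sizes were often touched up by hand.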



There are additional details, like tweaks at small font sizes to compensate for the fact that our eyes don't perceive detail linearly, and "anti-aliasing" to make diagonal boundaries look more like smooth curves than staircases at coarse resolutions. But that's the idea.
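To illustrate just the anti-aliasing part (a toy sketch of one common approach - supersampling - not of how any particular renderer works): draw the shape on a finer grid, then average each block of fine-grid samples into a single grey level, so a diagonal edge becomes a short ramp of greys instead of a hard staircase.

#include <stdio.h>

#define HI 16          /* high-resolution (supersampled) grid */
#define LO 8           /* final pixel grid                    */
#define S  (HI / LO)   /* samples per final pixel, per axis   */

int main(void)
{
    int hires[HI][HI];

    /* Fake shape at high resolution: everything below the diagonal is "ink". */
    for (int y = 0; y < HI; y++)
        for (int x = 0; x < HI; x++)
            hires[y][x] = (x < y) ? 1 : 0;

    /* Anti-aliasing by box filtering: each final pixel's grey level is the
       fraction of its S x S high-resolution samples that were covered. */
    static const char shades[] = " .:-=+*#";   /* 8 grey levels, light to dark */
    for (int y = 0; y < LO; y++) {
        for (int x = 0; x < LO; x++) {
            int covered = 0;
            for (int dy = 0; dy < S; dy++)
                for (int dx = 0; dx < S; dx++)
                    covered += hires[y * S + dy][x * S + dx];
            int level = covered * 7 / (S * S);  /* map coverage to 0..7 */
            putchar(shades[level]);
        }
        putchar('\n');
    }
    return 0;
}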



The study of all this is called "typography", and there's been a lot of interesting work done.
?
2015-06-14 04:16:46 UTC
Well, it's pretty easy, really. A computer is connected to a screen and has the ability to turn pixels on and off by sending signals to it. For each letter in the alphabet, one can simply pre-define a pixel pattern to turn on in order to display that letter.
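A minimal C sketch of that idea (my own, with made-up dimensions and a made-up 'A' pattern): the "screen" is just a two-dimensional array of pixels, and displaying a letter means copying its pre-defined pattern into that array at some position.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define WIDTH  24
#define HEIGHT 10

static uint8_t screen[HEIGHT][WIDTH];   /* 1 = pixel on, 0 = pixel off */

/* Pre-defined 8x8 pattern for 'A' (illustrative, not from a real font). */
static const uint8_t pattern_A[8] = {0x18,0x24,0x42,0x7E,0x42,0x42,0x42,0x00};

/* "Send signals" to the screen: copy the pattern in at position (x, y). */
static void put_glyph(const uint8_t g[8], int x, int y)
{
    for (int row = 0; row < 8; row++)
        for (int col = 0; col < 8; col++)
            screen[y + row][x + col] = (g[row] >> (7 - col)) & 1;
}

int main(void)
{
    memset(screen, 0, sizeof screen);
    put_glyph(pattern_A, 2, 1);          /* draw one 'A' near the top left */
    put_glyph(pattern_A, 12, 1);         /* and a second copy further right */

    for (int y = 0; y < HEIGHT; y++) {   /* dump the framebuffer as text */
        for (int x = 0; x < WIDTH; x++)
            putchar(screen[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}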


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.