Your answer was right 10 years ago, but things have changed since then. The majority of programs these days use Unicode (usually encoded as UTF-16 or UTF-8) instead of ASCII. As you point out above, each character in ASCII has a value from 0 to 255, which is fine for US English but starts to fall apart when you get to German, Swedish, and similar languages, and breaks utterly when you get to Japanese, Korean, Hebrew, Arabic, and other languages that use completely different character sets.
Windows uses Unicode internally. It's similar to ASCII in many ways, but instead of using 8 bits with values from 0 to 255, its basic unit is 16 bits, with values from 0 to 65535, which is enough to cover most languages in a single unit. (Worth noting that ASCII is technically only 0 to 127; most programs that "use ASCII" actually use extended ASCII, which goes from 0 to 255 and adds European accented characters like ö and ô, plus symbols like ƒ.)
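Here's a quick sketch in Python (just illustrative, not Windows-specific) showing that each character is really just a number, and what it looks like as 16-bit units:

```python
# Each character maps to a numeric code point; Windows stores text
# as a sequence of 16-bit units (shown here as little-endian bytes).
for ch in "Aö漢":
    print(ch, ord(ch), ch.encode("utf-16-le").hex(" "))
# A 65 41 00
# ö 246 f6 00
# 漢 28450 22 6f
```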
The internet mostly uses UTF-8, which is more closely related to ASCII but also more complex. Byte values from 0 to 127 are plain ASCII... in fact, any pure-ASCII file is already valid UTF-8. But byte values from 128 to 255 mark multi-byte sequences: a single character can take 2, 3, or even 4 bytes (ASCII uses 1, Windows' 16-bit units use 2). It gets complicated fast, but UTF-8 can represent every Unicode character, including the ones that don't fit in a single 16-bit value.
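A short Python example makes the byte counts concrete (the characters are just arbitrary examples):

```python
# UTF-8 uses 1 byte for ASCII and 2-4 bytes for everything else.
for ch in ("A", "ö", "漢", "😀"):
    b = ch.encode("utf-8")
    print(ch, f"U+{ord(ch):04X}", len(b), "byte(s):", b.hex(" "))
# A U+0041 1 byte(s): 41
# ö U+00F6 2 byte(s): c3 b6
# 漢 U+6F22 3 byte(s): e6 bc a2
# 😀 U+1F600 4 byte(s): f0 9f 98 80
```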
Strictly speaking, the encoding Windows uses today is called UTF-16, which is something of a cross between the original 16-bit scheme and UTF-8: most characters fit in one 16-bit unit, but characters above 65535 are split across two units (a "surrogate pair"). You can read more details about either UTF-8 or UTF-16 on Wikipedia. Just to confuse things a little further, there's an older family of "multibyte" character sets (Shift-JIS for Japanese, for example) based on ASCII and similar in some ways to UTF-8, although more limited. Few programs use those anymore.
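Here's a small Python illustration of a surrogate pair; the emoji is just an example of a character above 65535:

```python
# U+1F600 doesn't fit in one 16-bit unit, so UTF-16 encodes it
# as two units: the "surrogate pair" d83d + de00.
b = "😀".encode("utf-16-be")
units = [b[i:i + 2].hex() for i in range(0, len(b), 2)]
print(units)  # ['d83d', 'de00'] - one character, two 16-bit units
```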
As for keyboards, although the mechanics you've described hold for most keyboards (some do things differently internally), the translation from the matrix of wires and contacts to a keystroke is handled on the device itself, so the computer doesn't need to worry about it. The computer receives a scan code, which is basically a number identifying which physical key was pressed. Depending on the layout you've configured (international keyboards put keys in different places, and even within the US there are a few common layouts, such as QWERTY and Dvorak), the operating system then translates that scan code into a character.
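To make that concrete, here's a simplified sketch. The scan code values are from PC scan code set 1 (the start of the top letter row); the two layout tables and the translate() helper are illustrative, not how any real driver is written:

```python
# The keyboard reports WHERE a key is (a scan code); the OS decides
# WHAT character that position means, based on the active layout.
QWERTY = {0x10: "q", 0x11: "w", 0x12: "e", 0x13: "r", 0x14: "t"}
DVORAK = {0x10: "'", 0x11: ",", 0x12: ".", 0x13: "p", 0x14: "y"}

def translate(scan_code, layout):
    # Hypothetical helper: look up the character for this key position.
    return layout.get(scan_code, "?")

print(translate(0x10, QWERTY))  # q  - same physical key...
print(translate(0x10, DVORAK))  # '  - ...different character
```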
Probably more information than you wanted, but I hope it points you in the right direction. In summary, nearly all programs on Windows use Unicode, and internet/e-mail traffic is nearly all UTF-8 these days. Both are more complex than ASCII but support international characters far better, and since the computing world is far bigger than the United States, better international support is a good thing. (For those of us who prefer US English, know that the core of these character sets is still the American ASCII standard, so English text still gets a bit of a leg up: it uses the smallest, simplest encodings.)
Good luck.