I think the thrust of your question (and I may be wrong) is how the fundamental interface of a computer is created (or was created; it is no longer necessary to start from scratch).
The first element is the BIOS (Basic Input/Output System), which acts as a kind of "bootstrap" (think of the straps on a boot and the phrase "pulling yourself up by your bootstraps"...this is also the origin of the word "boot" for turning on a computer). Wikipedia defines it like this:
"BIOS refers to the firmware code run by an IBM compatible PC when first powered on. The primary function of the BIOS is to prepare the machine so other software programs stored on various media (such as hard drives, floppies, and CDs) can load, execute, and assume control of the PC[1]. This process is known as booting up."
Basically, the BIOS is software written in the chip's native machine language that allows other software to interact with the hardware. This requires a bit of explanation. Each "chip" (like an x86 chip, which is what all PCs use) has a set of basic instructions that it can execute, typically a few hundred at most. Everything the computer does boils down to some combination of these instructions (actually it boils down even further to a stream of 1's and 0's, called binary, but the instructions translate directly to those 1's and 0's, so the distinction doesn't matter here). I should add at this point that this is called "firmware," and it is stored in small, semi-permanent memory on the motherboard that the chip is installed on.
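To make the "instructions translate directly to 1's and 0's" idea concrete, here is a toy sketch. The mnemonics, opcodes, and layout below are invented for illustration (this is NOT the real x86 encoding, which is far more complicated):

```cpp
#include <cstdint>
#include <map>
#include <string>

// Toy encoder: maps made-up mnemonics to invented 4-bit opcodes, then
// packs opcode + register number into one 8-bit "instruction". It only
// illustrates the idea that every instruction the chip understands is
// ultimately a fixed pattern of bits.
uint8_t encode(const std::string& mnemonic, uint8_t reg) {
    static const std::map<std::string, uint8_t> opcodes = {
        {"LOAD", 0x1}, {"STORE", 0x2}, {"ADD", 0x3}, {"JUMP", 0x4}
    };
    return static_cast<uint8_t>((opcodes.at(mnemonic) << 4) | (reg & 0x0F));
}

// Render an encoded instruction as the 1's and 0's the hardware sees.
std::string to_bits(uint8_t byte) {
    std::string s;
    for (int i = 7; i >= 0; --i)
        s += ((byte >> i) & 1) ? '1' : '0';
    return s;
}
```

So in this toy scheme, "LOAD register 2" would become the bit pattern 00010010: the first four bits say "LOAD" and the last four say "register 2".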
Now in the first days of computing (before PCs, or any microcomputer for that matter), people would just write their own bootstrap loaders, as it was relatively simple and most interaction with the machine was pretty direct. It is vastly more complex today, so we won't worry about that part, because it has already been done for you. This is what you referred to as "read-only software" in your question. On to the "second" part of your question...
To "write" to a blank computer, you'll need a second layer: what we call an operating system. These range from really simple, small programs (indeed, before mass storage in the form of hard drives became affordable, most OS's were designed to boot from small removable media like floppy disks) to huge ones (Windows Vista). The basic purpose of an operating system is to provide a set of services that let people and programs interact with the machine (i.e., make it do stuff). This ranges from simple (a command-line interface, where everything you want to do has to be typed in as a command) to complex (a modern GUI, or Graphical User Interface, where everything can be accessed by clicking and moving windows, etc....like Windows). To write an application like MS Word, you would write a program in a language like C++ and compile it with a compiler written for your operating system (so it can use the specific services the OS provides), producing something the machine can understand.
Luckily you won't have to do this; it has all been done for you, and it is **pretty** standardized at this point. The stuff we have right now for basic operations is already well optimized, so writing your own from scratch would be a massive waste of time, and it would probably suck. As for the more complex stuff, that varies too, but everything you see today is highly evolved (Windows, Linux, etc.) with a lot of variation, and what gets improved is relatively high level (that is, the stuff on top of the basic stuff).
Anyway, I hope this helped...there is a lot more to this, so you might want to explore Wikipedia or even get a book so you can learn more about it.
-IR
To answer your last "additional question" about typing in 1's and 0's on a keyboard:
That is not entirely accurate. The "instructions" I was referring to are built right into the chip, or "CPU" (http://en.wikipedia.org/wiki/X86), like an Intel Pentium 4, which uses an INSTRUCTION SET that is standard for that platform, i.e. the "language" of that chip. So the chip you are most likely using right now uses the x86 instruction set. There are other chips (Motorola chips, IBM chips, etc.) that use different instruction sets. Each of the instructions understood by the chip is really an abstraction, in that it translates to a stream of bits (1's and 0's) that tell the CPU what to do. An example of an instruction might be "LOAD," meaning to load some piece of data (like an ADDRESS) into a register (a register is a piece of fast memory that lives on the CPU itself), or "SUM," meaning to add the contents of two registers together.
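The LOAD/SUM picture above can be sketched as a toy CPU in a few lines. The names and layout here are invented for illustration; real instruction sets (x86 and the rest) are far richer:

```cpp
#include <array>
#include <cstdint>

// A toy CPU with four registers, modeling the LOAD and SUM
// instructions described above.
struct ToyCpu {
    std::array<int64_t, 4> regs{};  // registers: small, fast on-chip storage

    // LOAD: put a value into a register.
    void load(int reg, int64_t value) { regs[reg] = value; }

    // SUM: add the contents of two registers, result into the first.
    void sum(int dst, int src) { regs[dst] += regs[src]; }
};
```

Running `load(0, 2); load(1, 3); sum(0, 1);` leaves 5 in register 0; on a real chip, each of those calls would be one binary-encoded instruction that the CPU fetches and executes.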
Essentially, the BIOS is a series of these instructions that ties all the components of a computer together, from the RAM to the CPU to the video card, so that additional software can use those resources to run more complex applications. The reason the BIOS has to be so small is that it has to fit in flash memory on the motherboard itself (think of the memory in an iPod nano...very similar), and it has to load quickly (you wouldn't want your computer to take 30 minutes to boot). An interesting fact: on the first computers (which were room-sized or larger because they used vacuum tubes, not transistors), people just plugged cables into different places on the machine to turn things "on" or "off" to represent data or add two numbers. As computers got smaller and also more complex, humans could no longer physically interact with the system that way.
The details of all of this are too complicated to fully (or even partially) attempt to explain here, except to say this:
Computers speak in binary. Binary is simply another way of counting; it is what is known as a "number system". For instance, we learn from the first grade (or earlier) how to count in tens: after every ten counts, the digits roll over and we carry into a new place. For this reason, the standard system humans use to count is called decimal, or the decimal system ("dec" is the prefix for 10). Many believe this came from humans having 10 fingers, so that when they counted, it was by tens (think of keeping track of things by counting on your fingers...). So it is easier, and some would argue more natural, for us to understand numbers this way. It turns out, however, that computers think more naturally in terms of 2's, hence binary ("bi" is the prefix for 2), because a transistor (the basic building block of modern electronics) has TWO "states," on or off. So all a CPU knows how to do is turn something off or turn something on. Put a number of these transistors together (a Pentium 4 has around 50 million of them) and billions upon trillions of unique states are possible, making it possible to represent different things with a stream of 1's and 0's. This is what ALL computer programs boil down to, but to make it easier, we have made ourselves abstractions (things that represent other, more complex things in a way that is more natural to humans) like assembly language (the instructions on a chip) or a high-level programming language (like C++), so we can manipulate the computer without having to type in those 1's and 0's. Hope this helps...
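If you want to see the decimal-to-binary relationship for yourself, here is a small sketch (the helper function is something I'm inventing here for illustration): you repeatedly divide by 2 and collect the remainders, the same way carrying works in decimal but with only two digits.

```cpp
#include <string>

// Convert a non-negative number to its binary digits, illustrating
// that binary is "just another way of counting": divide by 2
// repeatedly and collect the remainders, most significant bit first.
std::string to_binary(unsigned n) {
    if (n == 0) return "0";
    std::string bits;
    while (n > 0) {
        bits.insert(bits.begin(), static_cast<char>('0' + (n % 2)));
        n /= 2;
    }
    return bits;
}
```

So decimal 10 comes out as "1010", and decimal 255 as "11111111": the same quantities, just counted in twos instead of tens.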
-IR