The reasons ARE technical. There certainly is history, too. But without the technical reasons, the history would have been much different. So it essentially comes down to some early technical design decisions which turned out to be beneficial for vector processing machines (which became commercially available in the late 1970s and spread through the 1980s and beyond.)
The ONE thing that no one seems to have hit on (because I guess none of them actually USE FORTRAN for numerical processing themselves) is that FORTRAN makes certain contractual guarantees which permit compilers to optimize the resulting code better for both scalar and vector processing. Note that I'm not saying that somehow FORTRAN compilers use magical methods that C compilers mysteriously do not have access to. I'm saying that the language itself makes statements about the code you are permitted to write that are different from what a C coder is permitted to write. And these differences matter when it comes to optimization.
I'll provide a single example. You can find more, if you need them.
In FORTRAN, you cannot pass two arrays as separate arguments if those arrays overlap anywhere (and either is written to) -- the standard makes doing so illegal, so the compiler is entitled to assume its arguments never alias. In C, you can legally pass an overlapping array as a 2nd parameter, for example. In FORTRAN, you cannot. This simple guarantee alone allows ready vectorizing on VLIW machines, pipelined machines, or systems with parallel functional units. A C compiler, because it has no such guarantee, cannot generate such code for the function. It has to assume that the memory regions "might" overlap and generate appropriately conservative code.
This difference was one important reason for the history you see. It started early, was found to be valuable as optimization technology advanced by leaps and bounds during the 1980's (see Bulldog: A Compiler for VLIW Architectures, 1985, by Dr. Ellis), and was only recently headed off at the pass, so to speak, with the "restrict" keyword in C. There are other reasons, both historical and real, that still cause FORTRAN to be preferred. But the gap is diminishing somewhat.
A great deal of optimization effort for the highest performance computers (vector processing, VLIW, and "transputer" array style as represented by Intel and NVIDIA high end computers) has already been plowed into FORTRAN. If you want to get the most out of the fastest, you use FORTRAN (and/or mix it with assembly.) You won't find elsewhere the ability to move code across basic-block edges to fill functional units, recognize DRAM refresh cycle boundaries for aligning data sets, etc. It may happen in special cases here and there with other languages. But if you have a high end supercomputer to sell, you are porting a high performance optimizing FORTRAN to it first. You won't care about the other languages until later. Obviously, if you are porting such code, you will depend on that first-out compiler, too. And if you are developing new code, you will use the compiler you can first lay hands on and be pretty sure about it being solid.