High Performance Computing (also known as supercomputing) is the process of solving large-scale problems that require substantially more computational power, bandwidth, or storage than current workstations and desktop systems can provide. Currently, most HPC systems are able to perform in excess of one trillion floating point operations per second (one teraFLOPS). In comparison, the average desktop PC (e.g. a Pentium 4 processor at 3 GHz) can only perform about six billion floating point operations per second (six gigaFLOPS).
The latest systems fielded at the ARL DSRC perform in excess of 10 teraFLOPS each. In addition to speed, high performance computing systems also have orders of magnitude more memory and storage space available than desktop systems.
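The scale of that gap can be made concrete with a back-of-the-envelope calculation using the figures quoted above (illustrative only; real application speedup depends heavily on the workload):

```python
# Rough peak-rate comparison using the figures cited above.
desktop_flops = 6e9   # ~6 gigaFLOPS: a Pentium 4 at 3 GHz, as cited
hpc_flops = 10e12     # ~10 teraFLOPS: one ARL DSRC system, as cited

speedup = hpc_flops / desktop_flops
print(f"Peak ratio: roughly {speedup:,.0f}x")  # Peak ratio: roughly 1,667x
```

In other words, a computation that occupies such a desktop for a day could, in the ideal case, finish on one of these systems in under a minute.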
"High-Performance Computing," or HPC, is the application of "supercomputers" to computational problems that are either too large for standard computers or would take too long to solve on them. A desktop computer generally has a single processing chip, commonly called a CPU. An HPC system, on the other hand, is essentially a network of nodes, each of which contains one or more processing chips, as well as its own memory.