Computer architecture

Computer architecture is the theory behind the design of a computer. In the same way as a building architect sets the principles and goals of a building project as the basis for the draftsman's plans, so too, a computer architect sets out the computer architecture as a basis for the actual design specifications.

The term has several distinct usages.

Table of contents
1 Design goals
2 Notable tendencies
3 See also

Design goals

The most common goals in computer architecture revolve around the tradeoff between cost and performance (i.e. speed), although other considerations, such as size, weight, and power consumption, may also be factors.

1. Cost

Generally cost is held constant, determined by either system or commercial requirements, and speed and storage capacity are adjusted to meet the cost target.

2. Performance

Computer retailers describe the performance of their machines in terms of clock speed (usually in MHz or GHz), the number of cycles per second of the CPU's main clock. This metric is somewhat misleading, however: a machine with a higher clock rate does not necessarily perform better. Modern CPUs can execute multiple instructions per clock cycle, which dramatically speeds up a program. Other factors also influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.
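The point about clock rate can be made concrete with the classic CPU-time model, in which execution time equals instruction count times average cycles per instruction (CPI), divided by clock rate. The machines and numbers below are hypothetical, chosen only to illustrate the effect:

```python
# Classic CPU-time model: time = instructions * CPI / clock_rate.
# All figures here are illustrative assumptions, not measurements.

def cpu_time(instructions, cpi, clock_hz):
    """Seconds to execute a program: instruction count times average
    cycles per instruction, divided by clock cycles per second."""
    return instructions * cpi / clock_hz

program = 1_000_000_000  # one billion instructions

# Machine A: faster 3 GHz clock, but averages 2 cycles per instruction.
time_a = cpu_time(program, cpi=2.0, clock_hz=3e9)

# Machine B: slower 2 GHz clock, but superscalar, averaging 0.5 CPI.
time_b = cpu_time(program, cpi=0.5, clock_hz=2e9)

print(f"A: {time_a:.3f} s, B: {time_b:.3f} s")  # B wins despite the slower clock
```

Here machine B finishes the same program faster even though its clock is a third slower, because it retires more instructions per cycle.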

But there are also different kinds of speed. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g. the moment a disk drive finishes moving some data). This number is affected by a very wide range of design choices; for example, adding cache usually makes latency worse (slower) but makes other things faster. Computers that control machinery usually need low interrupt latencies, because the machine cannot, will not, or should not wait. For example, computer-controlled anti-lock brakes should not wait for the computer to finish whatever it is doing; they should brake. Low latencies can often be had very inexpensively.

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it may not help one choose a computer, because the measured machines often split on different measures. For example, one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers have been known to add special features to their products, whether in hardware or software, that permit a specific benchmark to execute quickly but offer no similar advantage to other, more general computational tasks. Naïve users are apt to be unaware of such deceptive tricks.
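The core of benchmarking, timing a fixed set of test workloads and comparing the results, can be sketched in a few lines. The two workloads below are toy stand-ins invented for illustration, not any real benchmark suite:

```python
# A minimal sketch of benchmarking: time several test workloads and
# report the best run of each. The workloads are hypothetical toys.
import time

def workload_arith():
    # Stand-in for an arithmetic-heavy test program.
    return sum(i * i for i in range(100_000))

def workload_strings():
    # Stand-in for a string/memory-heavy test program.
    return "".join(str(i) for i in range(10_000))

def benchmark(workloads, repeats=3):
    """Return the best (minimum) wall-clock time per workload, in seconds."""
    results = {}
    for fn in workloads:
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        results[fn.__name__] = min(times)  # best run filters out system noise
    return results

scores = benchmark([workload_arith, workload_strings])
for name, seconds in scores.items():
    print(f"{name}: {seconds * 1000:.2f} ms")
```

Real benchmark suites differ mainly in scale and rigor: many more programs, controlled machine state, and statistical treatment of the runs, but the measure-and-compare structure is the same.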

The general scheme of optimization is to find the costs of the different parts of the computer. In a balanced computer system, the data rate is constant for all parts of the system, and cost is allocated proportionally to ensure this. The exact form of the computer system will depend on the constraints and goals it was optimized for.
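The balancing idea can be sketched numerically: give every stage of the system the same sustained data rate, then see how the budget divides. The cost-per-throughput figures below are assumptions invented for illustration:

```python
# A sketch of cost allocation in a balanced system. The dollars-per-MB/s
# figures are hypothetical, chosen only to show the proportions.
cost_per_mbps = {"cpu_bus": 0.05, "memory": 0.20, "disk_io": 1.50}

target_rate = 500  # MB/s, the data rate every part must sustain

allocation = {part: unit_cost * target_rate
              for part, unit_cost in cost_per_mbps.items()}
total = sum(allocation.values())

for part, cost in allocation.items():
    print(f"{part}: ${cost:.2f} ({cost / total:.0%} of budget)")
```

With these assumed figures, the part that is most expensive per unit of bandwidth (here disk I/O) dominates the cost of keeping the system balanced, which is why real designs often relax the balance where the constraint allows it.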

Notable tendencies

A notable approach that potentially breaks the structural limits of conventional processor architectures is configurable computing. Here the compiler generates configurations for runtime-reconfigurable Field Programmable Gate Arrays (FPGAs): for the lifetime of an object, the configurable logic directly implements the calculations to be performed. Since all such objects can potentially operate in parallel on streaming data, this has been described as the ultimate parallel processing architecture. (See http://www.jhdl.org/) Configurable computing can be grouped under computing in memory, an approach inspired by the function of the brain, in which the processor and the memory eventually cannot be distinguished from each other.

See also