Computers have become an integral part of our daily lives. They power everything from smartphones to hospital systems and have shaped society to such an extent that many people simply couldn’t live without the hardware and software that defines the digital world. 

Despite this, the majority of people still have little idea how computers work or what role hardware and software play in powering the modern technologies we use today.

Behind the sleek screens and intricate interfaces, computer architecture defines the fundamental components and processes that make our computers tick.

This article delves deep into the meaning of computer architecture, exploring its four types, structure, and how it forms the backbone of much of the technology we use today.

What is computer architecture?

Computer architecture is a specification describing how computer software and hardware connect and interact to create a computer system. It determines the structure and function of a computer and the technologies it is compatible with – from the central processing unit (CPU) to memory, input/output devices, and storage units.

Understanding the meaning of computer architecture is crucial for both computer scientists and enthusiasts, as it forms the basis for designing innovative and efficient computing solutions. It also helps programmers write software that can take full advantage of a computer's capabilities, allowing them to create everything from web applications to large language models (LLMs).

These design decisions can have a huge influence on factors like a computer’s processing speed, energy efficiency, and overall system performance.

Computer scientists must build a computer with the same principles in mind as laying the foundations of a physical structure. The three main pillars they must consider are:

  1. System design. This makes up the physical structure of a computer, including all hardware parts such as the CPU, data processors, multiprocessors, memory controllers, and direct memory access.
  2. Instruction set architecture (ISA). This is the interface between a computer's hardware and its software, defining the CPU's functions and capabilities, the instructions available to programmers, supported data formats, and processor register types.
  3. Microarchitecture. This defines how the instruction set is implemented in a particular processor, covering the data paths and the data processing and storage elements, together with related computer organisation techniques.

The four types of computer architecture

Despite the rapid advancement of computing, the fundamentals of computer architecture remain the same. There are four main types of computer architecture: Von Neumann architecture, Harvard architecture, Modified Harvard architecture, and the RISC & CISC architectures.

1. Von Neumann architecture 

Named after mathematician and computer scientist John von Neumann, the Von Neumann architecture features a single memory space for both data and instructions, which are fetched and executed sequentially. This means that programs and data are stored in the same memory, allowing for flexible and easy modification of programs.

However, because instructions and data share a single memory, the CPU cannot fetch an instruction and access data at the same time, and instructions are executed one at a time. This is known as the Von Neumann bottleneck. To address it, modern CPUs employ techniques like caching and pipelining to improve efficiency.
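To make the idea concrete, here is a minimal sketch of a Von Neumann-style machine in Python. The encoding and the opcodes (LOAD, ADD, STORE, HALT) are hypothetical and exist purely for illustration; the point is that instructions and data sit in the same memory, and the CPU makes one sequential access per step.

```python
# A minimal sketch of a Von Neumann-style machine: instructions and data
# share one memory, and the CPU fetches and executes one instruction per step.
# The instruction encoding here is hypothetical, purely for illustration.

memory = [
    ("LOAD", 8),     # address 0: load memory[8] into the accumulator
    ("ADD", 9),      # address 1: add memory[9] to the accumulator
    ("STORE", 10),   # address 2: store the accumulator to memory[10]
    ("HALT", None),  # address 3: stop
    None, None, None, None,  # addresses 4-7: unused
    5,               # address 8: data
    7,               # address 9: data
    0,               # address 10: the result goes here
]

pc = 0           # program counter
accumulator = 0  # single working register

while True:
    opcode, operand = memory[pc]  # fetch: instructions and data compete
    pc += 1                       # for the same memory, one access at a time
    if opcode == "LOAD":
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(memory[10])  # 12
```

Because the program itself lives in ordinary memory, it can be modified like any other data, which is exactly what makes the stored-program model so flexible.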

Diagram showing Von Neumann architecture (Source: ResearchGate)

Still, the Von Neumann architecture remains highly relevant and influential in computer design. Its stored-program concept, keeping both instructions and data in the same memory, allows for flexible program execution and underpins most general-purpose computers built since.

2. Harvard architecture

Unlike the von Neumann architecture, where instructions and data share the same memory and data paths, the Harvard architecture uses separate storage units and dedicated pathways for instructions and data. This allows for simultaneous access to instructions and data, potentially improving performance.

By having separate pathways, the CPU can fetch instructions and access data at the same time, without waiting for each other, leading to faster program execution, especially for tasks that involve a lot of data movement.

Separate memory units can be optimized for their specific purposes. For example, instruction memory might be read-only, while data memory might be optimized for fast read/write operations.
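For contrast, here is the same hypothetical program restructured in the Harvard style: instructions and data now live in two separate memories with their own pathways, so an instruction fetch never competes with a data access.

```python
# A minimal sketch of the Harvard idea: instructions and data live in
# separate memories with separate access paths. Encoding is hypothetical.

instruction_memory = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
data_memory = [5, 7, 0]  # data has its own, independent address space

pc, accumulator = 0, 0
while True:
    opcode, operand = instruction_memory[pc]  # instruction fetch path...
    pc += 1
    if opcode == "LOAD":
        accumulator = data_memory[operand]    # ...never contends with the data path
    elif opcode == "ADD":
        accumulator += data_memory[operand]
    elif opcode == "STORE":
        data_memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(data_memory[2])  # 12
```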

Diagram showing Harvard architecture

Still, implementing separate storage and pathways is more complex than the von Neumann design, and the additional memory units can increase the overall cost of the system.

3. Modified Harvard Architecture

A Modified Harvard Architecture is a hybrid type of computer architecture that combines features of both the classic Harvard architecture and the more widely used von Neumann architecture. 

Like a true Harvard architecture, a modified Harvard architecture utilizes separate caches for instructions and data. These caches are much faster than main memory, so frequently accessed instructions and data can be retrieved quickly.

However, unlike the pure Harvard architecture where instructions and data have completely separate physical memory units, a modified Harvard architecture keeps instructions and data in the same main memory. 

Diagram showing modified Harvard architecture

This combination allows for simultaneous access to instructions and data, boosting performance over a standard von Neumann architecture with a single cache. Compared to a true Harvard architecture with separate memory units, the unified memory simplifies the design and reduces costs. 

Many processors you'll find in computers today use a modified Harvard architecture with separate instruction and data caches.
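Here is a rough sketch of the modified Harvard idea, again with a hypothetical encoding: main memory is unified, but small, separate instruction and data caches sit in front of it, giving Harvard-style parallel access whenever the caches hit.

```python
# A minimal sketch of a modified Harvard design: one unified main memory,
# but separate (tiny) instruction and data caches. All details hypothetical.

main_memory = {0: ("LOAD", 8), 1: ("ADD", 9), 2: ("STORE", 10),
               3: ("HALT", None), 8: 5, 9: 7, 10: 0}
i_cache, d_cache = {}, {}  # separate caches allow Harvard-style parallel access

def fetch_instruction(addr):
    if addr not in i_cache:            # miss: go to the shared main memory
        i_cache[addr] = main_memory[addr]
    return i_cache[addr]

def read_data(addr):
    if addr not in d_cache:
        d_cache[addr] = main_memory[addr]
    return d_cache[addr]

pc, accumulator = 0, 0
while True:
    opcode, operand = fetch_instruction(pc)
    pc += 1
    if opcode == "LOAD":
        accumulator = read_data(operand)
    elif opcode == "ADD":
        accumulator += read_data(operand)
    elif opcode == "STORE":
        main_memory[operand] = accumulator  # write through to main memory
        d_cache.pop(operand, None)          # keep the data cache consistent
    elif opcode == "HALT":
        break

print(main_memory[10])  # 12
```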

4. RISC & CISC Architectures

RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing) are two different architectures for computer processors that determine how they handle instructions. 

RISC processors are designed with a set of basic, well-defined instructions that are typically fixed-length and easy for the processor to decode and execute quickly. The emphasis in RISC is on designing the hardware to execute simple instructions efficiently, leading to faster clock speeds and potentially lower power consumption. Examples of RISC processors include ARM processors commonly found in smartphones and tablets, and MIPS processors used in some embedded systems.

Diagram comparing RISC and CISC architectures

CISC processors, however, offer a wider range of instructions, including some very complex ones that can perform multiple operations in a single instruction. This can be more concise for programmers, but such instructions can take the processor longer to decode and execute.

The goal of CISC is to provide a comprehensive set of instructions to handle a wide range of tasks, potentially reducing the number of instructions a programmer needs to write. Examples of CISC processors include Intel’s x86 processors, which are used in most personal computers, and the Motorola 68000 family, which was used in older Apple computers.
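The contrast is easiest to see with the same operation written both ways. In this hypothetical example, a CISC-style machine computes memory[C] = memory[A] + memory[B] with a single memory-to-memory instruction, while a RISC-style machine uses four simple, register-based instructions.

```python
# An illustrative (hypothetical) contrast: the same operation,
# memory[C] = memory[A] + memory[B], expressed in two styles.

# CISC style: one complex instruction does a load, an add, and a store.
cisc_program = [("ADDM", "A", "B", "C")]

# RISC style: several simple, fixed-format instructions; arithmetic uses
# registers only, and memory is touched solely by explicit loads and stores.
risc_program = [
    ("LOAD",  "r1", "A"),
    ("LOAD",  "r2", "B"),
    ("ADD",   "r3", "r1", "r2"),
    ("STORE", "r3", "C"),
]

# Execute the CISC version: one instruction, more work per instruction.
memory = {"A": 5, "B": 7, "C": 0}
op, a, b, c = cisc_program[0]
if op == "ADDM":
    memory[c] = memory[a] + memory[b]
print(memory["C"])  # 12

# Execute the RISC version: four simple steps, each easy to decode.
memory = {"A": 5, "B": 7, "C": 0}
registers = {}
for inst in risc_program:
    if inst[0] == "LOAD":
        registers[inst[1]] = memory[inst[2]]
    elif inst[0] == "ADD":
        registers[inst[1]] = registers[inst[2]] + registers[inst[3]]
    elif inst[0] == "STORE":
        memory[inst[2]] = registers[inst[1]]
print(memory["C"])  # 12 -- same result either way
```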

Components of Computer Architecture 

While computer architectures can differ greatly depending on the purpose of the computer, several key components generally contribute to its structure. These include:

  1. Central Processing Unit (CPU) - Often referred to as the "brain" of the computer, the CPU executes instructions, performs calculations, and manages data. Its architecture dictates factors such as instruction set, clock speed, and cache hierarchy, all of which significantly impact overall system performance.
     
  2. Memory Hierarchy - This includes various types of memory, such as cache memory, random access memory (RAM), and storage devices. The memory hierarchy plays a crucial role in optimizing data access times, as data moves between different levels of memory based on their proximity to the CPU and the frequency of access.
     
  3. Input/Output (I/O) System - The I/O system enables communication between the computer and external devices, such as keyboards, monitors, and storage devices. It involves designing efficient data transfer mechanisms to ensure smooth interaction and data exchange.
     
  4. Storage Architecture - This deals with how data is stored and retrieved from storage devices like hard drives, solid-state drives (SSDs), and optical drives. Efficient storage architectures ensure data integrity, availability, and fast access times.
     
  5. Instruction Pipelining - Modern CPUs employ pipelining, a technique that breaks down instruction execution into multiple stages. This allows the CPU to process multiple instructions simultaneously, resulting in improved throughput (see the first sketch after this list).
     
  6. Parallel Processing - This involves dividing a task into smaller subtasks and executing them concurrently, often on multiple cores or processors. Parallel processing significantly accelerates computations, making it key to tasks like simulations, video rendering, and machine learning (see the second sketch after this list).
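As a first sketch, the toy simulation below models a hypothetical three-stage pipeline (fetch, decode, execute). Because up to three instructions overlap, five instructions complete in seven cycles rather than fifteen sequential stage-steps.

```python
# A minimal sketch of instruction pipelining: a hypothetical 3-stage
# pipeline (fetch, decode, execute) where up to three instructions overlap.

instructions = ["i1", "i2", "i3", "i4", "i5"]
stages = {"fetch": None, "decode": None, "execute": None}
pending = list(instructions)
cycle = 0

while pending or stages["fetch"] or stages["decode"]:
    # Each cycle, every instruction advances one stage down the pipeline.
    stages["execute"] = stages["decode"]
    stages["decode"] = stages["fetch"]
    stages["fetch"] = pending.pop(0) if pending else None
    cycle += 1
    print(f"cycle {cycle}: {stages}")

# Five instructions finish in 7 cycles (5 + 3 - 1) instead of the
# 15 stage-steps a strictly sequential machine would need.
```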
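And as a second sketch, the example below divides a computation into four independent chunks and runs them concurrently on separate cores; the workload and the chunking scheme are arbitrary choices for illustration.

```python
# A minimal sketch of parallel processing: split a task into independent
# subtasks and run them concurrently on multiple cores.

from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Sum the squares of one slice of the data -- an independent subtask."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # four interleaved subtasks
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # subtasks run in parallel
    print(total)  # equals sum(x*x for x in data), computed on 4 cores
```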

All of the above parts are connected through a system bus consisting of the address bus, data bus, and control bus. The diagram below is an example of this structure:

Diagram depicting the structure of a basic computer architecture with a uniprocessor CPU

The future of computer architecture

While the fundamentals of computer architecture remain unchanged, recent advancements in computing have led to the development of specialised architectures tailored to specific tasks. 

Graphics Processing Units (GPUs), for instance, are designed to handle the complex calculations required for rendering graphics and simulations. They are often found in systems built for graphics-heavy applications like video editing or gaming.

The rise of GPUs could also drive the adoption of heterogeneous computing, which combines different types of processors (CPUs, GPUs, and specialized hardware) in a single system. Each processor can be optimized for specific tasks, leading to better overall performance and efficiency.

With recent advancements in AI, there’s also the possibility of computer architecture evolving towards neuromorphic computing, a type of computer architecture inspired by the human brain that uses artificial neurons and synapses to process information. Neuromorphic computing holds promise for applications in artificial intelligence and machine learning that deal with complex patterns and relationships.

As the boundaries of classical computing are pushed to their limits, quantum computing will also likely define the future of computer architecture. While still in its early stages, quantum computing utilizes the principles of quantum mechanics to perform certain calculations that are intractable for traditional computers. It has the potential to revolutionize fields like materials science, drug discovery, and cryptography.

The rise of quantum computing will not only lead to a leap in processing power but also reimagine computer architecture and infrastructure as we know it. 

Hybrid architectures that combine classical and quantum components are also predicted to emerge, capitalising on the strengths of both worlds while working around the current limitations of quantum hardware.

As the field continues to evolve, we can expect even more groundbreaking innovations that shape the way we interact with computers and the capabilities they offer.