What is Memory Interleaving? & Advantages

Chandrakishor Gupta

What is Memory Interleaving?

Memory interleaving spreads memory addresses uniformly across memory banks to compensate for the relatively slow speed of dynamic random-access memory (DRAM) or core memory. Contiguous reads and writes then use the memory banks in turn, which improves memory throughput because less time is spent waiting for a bank to become ready for the next operation.

It differs from multi-channel memory architectures in that it does not introduce additional channels between the main memory and the memory controller. Nevertheless, channel interleaving is also feasible, as demonstrated by Freescale i.MX6 CPUs, which support interleaving between two channels. In memory interleaving, memory addresses are assigned to the memory banks in turn.

Example of Memory Interleaving

Memory interleaving is an abstraction technique that divides memory into a number of modules so that consecutive words in the address space are placed in distinct modules.

Assume we have four 256-byte memory banks. The block-oriented approach (no interleaving) allocates virtual addresses 0 to 255 to the first bank and 256 to 511 to the second. With memory interleaving, however, virtual address 0 maps to the first memory bank, virtual address 1 to the second, virtual address 2 to the third, virtual address 3 to the fourth, and virtual address 4 back to the first memory bank again.

As a result, the CPU can move on to another bank without waiting for the previous access to complete; the memory banks take turns supplying data.

In the preceding four-bank example, the data at virtual addresses 0, 1, 2, and 3 can be accessed concurrently because the addresses lie in distinct memory banks. As a result, we do not have to wait for one data retrieval to finish before starting the next.

A memory system with n banks is said to be n-way interleaved. A two-way interleaved system, for example, still has two physical banks of DRAM, but logically it appears to have a single bank of memory that is twice as large.

In an interleaved layout with two memory banks, the first long word of bank 0 is followed by the first long word of bank 1, then the second long word of bank 0, then the second long word of bank 1, and so on.
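
To make this mapping concrete, here is a minimal C sketch of the two schemes from the example above, assuming four 256-byte banks; the helper names (block_bank, il_bank, and so on) are purely illustrative.

#include <stdio.h>

#define NUM_BANKS 4      /* four banks, as in the example above */
#define BANK_SIZE 256    /* 256 bytes per bank                  */

/* Block-oriented (no interleaving): each bank holds a contiguous 256-byte range. */
static unsigned block_bank(unsigned addr)   { return addr / BANK_SIZE; }
static unsigned block_offset(unsigned addr) { return addr % BANK_SIZE; }

/* Interleaved: consecutive addresses rotate through the banks. */
static unsigned il_bank(unsigned addr)   { return addr % NUM_BANKS; }
static unsigned il_offset(unsigned addr) { return addr / NUM_BANKS; }

int main(void) {
    for (unsigned addr = 0; addr < 6; addr++) {
        printf("addr %u -> block-oriented: bank %u offset %3u | interleaved: bank %u offset %u\n",
               addr, block_bank(addr), block_offset(addr), il_bank(addr), il_offset(addr));
    }
    /* Interleaved addresses 0..3 land in four different banks and can be
       fetched concurrently; address 4 wraps back to bank 0. */
    return 0;
}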

Why do we use Memory Interleaving?

When the processor requests data from main memory, a block (chunk) of data is transferred to the cache and then passed to the processor, so whenever a cache miss occurs the data must be retrieved from main memory. However, main memory is much slower than cache memory. Memory interleaving is used to improve the effective access time of main memory.

For example, with four modules we can access all four concurrently, gaining parallelism: the lower bits of the address select the module, while the higher bits select the word within the module. This approach makes efficient use of memory.

Types of Memory Interleaving

There are two types of memory interleaving:

High order interleaving:

In high-order interleaving, the most significant bits of the memory address determine which memory bank contains a given location. In low-order interleaving, by contrast, the memory bank is determined by the least significant bits of the address.

Within each bank, the location is selected by the least significant bits of the address. One issue is that successive addresses fall on the same chip, so the memory cycle time limits the maximum rate of data transfer. High-order interleaving is also known as memory banking.

Low order interleaving: 

In low-order interleaving, the least significant bits select the memory bank (module). Consecutive memory addresses therefore fall in different memory modules, allowing memory accesses to be overlapped and the effective access rate to exceed what a single module's cycle time would allow.
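
The difference between the two types comes down to which address bits select the bank. The C sketch below assumes a power-of-two number of banks and a 10-bit address space chosen only for illustration; the function names are hypothetical.

#include <stdio.h>

#define BANK_BITS 2                   /* 4 banks -> 2 bank-select bits       */
#define NUM_BANKS (1u << BANK_BITS)
#define ADDR_BITS 10                  /* 1 KiB address space for this sketch */

/* Low-order interleaving: the least significant bits pick the bank,
   so consecutive addresses land in different banks. */
static unsigned low_order_bank(unsigned addr)  { return addr & (NUM_BANKS - 1u); }

/* High-order interleaving (memory banking): the most significant bits pick
   the bank, so consecutive addresses stay in the same bank. */
static unsigned high_order_bank(unsigned addr) { return addr >> (ADDR_BITS - BANK_BITS); }

int main(void) {
    for (unsigned addr = 0; addr < 4; addr++)
        printf("addr %u: low-order bank %u, high-order bank %u\n",
               addr, low_order_bank(addr), high_order_bank(addr));
    return 0;
}

With low-order selection the four consecutive addresses fall in four different banks and can be accessed in parallel, while with high-order selection they all fall in bank 0.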

Benefits of Memory Interleaving

An instruction pipeline may demand an instruction and its operands from main memory at the same time, which is not achievable with a conventional single-module memory. Similarly, an arithmetic pipeline may need two operands to be fetched from main memory simultaneously. Memory interleaving is used to solve this problem.

  • It enables concurrent access to multiple memory modules. The modular memory approach lets the CPU start a memory access in one module while other modules are still busy with earlier reads or writes, so interleaved memory can service requests regardless of the status of the other modules.
  • Memory interleaving makes a system more responsive and faster than non-interleaved memory. Overlapping memory accesses also reduces CPU waiting time and increases throughput, which is especially beneficial in systems that use pipelining and vector processing.
  • Consecutive memory addresses are spread across different memory modules in an interleaved memory. With byte-addressable 4-way memory interleaving, for example, if byte 0 is in the first module, byte 1 will be in the second module, byte 2 in the third, byte 3 in the fourth, and so on.
  • In an n-way interleaved memory, main memory is divided into n banks, and the system can access n operands or instructions from n separate banks at the same time. This can reduce the effective memory access time by a factor roughly proportional to the number of banks; memory location i is found in bank i mod n. A rough timing sketch of this overlap follows this list.
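
As a rough illustration of the speed-up claimed above, the sketch below models how consecutive reads overlap across n banks. The 100 ns bank cycle time and 25 ns issue interval are made-up numbers, and the model ignores real-world effects such as bus contention and refresh; it is a back-of-the-envelope estimate, not a simulation of any particular memory system.

#include <stdio.h>

#define CYCLE_NS 100   /* assumed time for one bank to complete an access       */
#define BUS_NS    25   /* assumed minimum interval between issuing two accesses */

static unsigned max_u(unsigned a, unsigned b) { return a > b ? a : b; }

/* Approximate time to read `count` consecutive words from an n-way
   interleaved memory, assuming the reads rotate across the n banks. */
static unsigned interleaved_time(unsigned count, unsigned n_banks) {
    if (count == 0)
        return 0;
    /* Each bank is busy for CYCLE_NS, so with n banks a new consecutive
       access can start every CYCLE_NS / n (but never faster than the bus). */
    unsigned interval = max_u(BUS_NS, CYCLE_NS / n_banks);
    return (count - 1) * interval + CYCLE_NS;
}

int main(void) {
    unsigned count = 8;
    for (unsigned n = 1; n <= 4; n *= 2)
        printf("%u consecutive reads, %u-way interleaved: about %u ns\n",
               count, n, interleaved_time(count, n));
    return 0;
}

With these assumed numbers, 8 consecutive reads drop from roughly 800 ns with a single bank to about 450 ns with 2 banks and 275 ns with 4 banks, in line with a speed-up roughly proportional to the number of banks.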

Interleaving DRAM

Main memory is often made up of a collection of DRAM memory chips, with multiple chips clustered together to form a memory bank. It is therefore possible to organize these memory banks so that they are interleaved using a memory controller that supports interleaving.

DRAM data is stored in units of pages. Each DRAM bank contains a row buffer that acts as a cache for one page of the bank, and a page must first be loaded into the row buffer before it can be read. If an access hits the page already held in the row buffer, it completes with the smallest memory access latency, in one memory cycle. On a row-buffer miss, also known as a row-buffer conflict, the access takes longer because the new page must first be loaded into the row buffer before it can be read.

Row-buffer misses occur when successive requests target distinct memory pages in the same bank. A row-buffer conflict therefore adds significant latency to a memory access, whereas accesses to separate banks can be performed in parallel with high throughput.
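
The row-buffer behavior described above can be sketched with a tiny model in C. The page size, bank count, latencies, and page-granular bank mapping used here are illustrative assumptions, not parameters of any specific DRAM device.

#include <stdio.h>

#define NUM_BANKS  4
#define PAGE_BYTES 1024      /* assumed DRAM page (row) size   */
#define HIT_NS     20        /* assumed row-buffer hit latency */
#define MISS_NS    60        /* assumed row-buffer miss latency */

/* One open page per bank, mirroring the row buffer described above
   (-1 means no page is open yet). */
static long open_row[NUM_BANKS] = { -1, -1, -1, -1 };

/* Latency of one access, with banks interleaved at page granularity. */
static unsigned dram_access(unsigned addr) {
    unsigned bank = (addr / PAGE_BYTES) % NUM_BANKS;  /* which bank              */
    long     row  = addr / (PAGE_BYTES * NUM_BANKS);  /* which page of that bank */
    if (open_row[bank] == row)
        return HIT_NS;              /* row-buffer hit: page already open         */
    open_row[bank] = row;           /* row-buffer miss/conflict: open new page   */
    return MISS_NS;
}

int main(void) {
    unsigned addrs[] = { 0, 64, 1024, 4096, 128 };
    for (unsigned i = 0; i < sizeof addrs / sizeof addrs[0]; i++)
        printf("access to addr %4u: %u ns\n", addrs[i], dram_access(addrs[i]));
    return 0;
}

In this run, addresses 0 and 64 share a page in bank 0 (the first access is a miss, the second a hit), 1024 goes to a different bank, and 4096 maps back to bank 0 on a different page, evicting the open page so that the later access to 128 misses again: a row-buffer conflict.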

In traditional layouts, each memory bank is assigned a contiguous block of memory addresses, which is very straightforward for the memory controller and matches the performance of interleaving under entirely random access patterns.

However, because of locality of reference, memory reads are rarely random, and optimizing for accesses that are close together yields significantly higher performance in interleaved layouts.

The manner in which memory is addressed has no effect on the access time for memory locations that are already cached, only on memory locations that must be retrieved from DRAM.

Frequently Asked Questions

What is memory interleaving?

Memory Interleaving is a concept in computing that compensates for the comparatively poor performance of dynamic random-access memory (DRAM) or core memory by uniformly distributing memory addresses across memory banks.
