Cache Memory Mapping Techniques | Direct Mapping in Cache Memory


Definition: Cache memory mapping is the process of transferring the data needed by the CPU from primary (main) memory into the cache memory.


Several cache mapping techniques are used in computer architecture: direct mapping, associative (fully associative) mapping, and set-associative mapping.

Direct mapping is the simplest mapping technique: every block of primary memory can map to only one possible cache line.


In direct mapping, each memory block is assigned to one particular line in the cache. If that line is already occupied when a fresh block needs to be loaded, the previous block is overwritten. The address is divided into two fields, an index field and a tag field; the tag field is stored in the cache along with the data.

A = B mod C

where

A = cache line number

B = main memory block number

C = number of lines in the cache
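The formula above can be sketched in a few lines of Python; the cache size here is an illustrative assumption, not a value from the article.

```python
# Direct mapping: each memory block has exactly one possible cache line,
# fixed by a modulo. NUM_CACHE_LINES is a hypothetical size for illustration.
NUM_CACHE_LINES = 128

def cache_line(block_number: int) -> int:
    """Return the only cache line that can hold this memory block."""
    return block_number % NUM_CACHE_LINES

# Blocks 5, 133, and 261 all compete for the same line (5),
# which is the source of conflict misses in direct mapping.
print(cache_line(5), cache_line(133), cache_line(261))  # → 5 5 5
```

Note that distinct blocks sharing a line is exactly why the tag field is needed: the line number alone cannot tell which block currently occupies the line.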

The working of a direct-mapped cache can be split into the following steps. When the CPU issues a memory request:

  • The line-number field of the address selects the specific line of the cache.
  • The tag field of the CPU address is compared with the tag stored in that line.
  • If the two tags match, a cache hit occurs and the needed word is found in the cache.
  • If the tags do not match, a cache miss occurs.
  • On a miss, the needed word is fetched from primary memory.
  • Finally, the fetched word is stored in the cache, and its tag replaces the previous tag in that line.
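The steps above can be sketched as a minimal direct-mapped lookup; the cache size and address split are illustrative assumptions.

```python
# Minimal sketch of a direct-mapped cache lookup, assuming the address
# splits into a tag field and a line-number field (sizes are illustrative).
NUM_LINES = 8          # lines in the cache
LINE_BITS = 3          # log2(NUM_LINES), width of the line-number field

# Each line holds (valid, tag, data); all lines start empty.
cache = [{"valid": False, "tag": None, "data": None} for _ in range(NUM_LINES)]

def access(block_number: int, memory):
    line_no = block_number % NUM_LINES      # line-number field selects the line
    tag = block_number >> LINE_BITS         # remaining bits form the tag
    line = cache[line_no]
    if line["valid"] and line["tag"] == tag:
        return line["data"], "hit"
    # Miss: fetch the block from main memory and replace the old tag/data.
    line.update(valid=True, tag=tag, data=memory[block_number])
    return line["data"], "miss"

memory = {n: f"block-{n}" for n in range(64)}
print(access(5, memory))    # miss: first touch
print(access(5, memory))    # hit: same block, same tag
print(access(13, memory))   # miss: 13 also maps to line 5, evicts block 5
```

The last access shows the weakness of direct mapping: blocks 5 and 13 both map to line 5, so they evict each other even though other lines are free.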

Associative mapping is the most flexible technique: both the contents and the addresses of memory words are stored in an associative memory. Any block can be placed in any line of the cache. The word-id bits in the block identify the required word, which makes it possible to place any word anywhere in the cache. For these reasons, associative mapping is considered the fastest and most flexible technique.
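A fully associative lookup can be sketched as follows; the capacity and eviction policy here are illustrative assumptions (hardware compares all tags in parallel, which a Python loop or dict lookup only imitates).

```python
# Fully associative sketch: a block may sit in any line, so a lookup must
# check the tag against every occupied line. Hardware does this in parallel;
# a dict keyed by block number imitates that single-step comparison.
CAPACITY = 4  # illustrative number of cache lines
cache = {}    # block number (tag) -> data

def access(block_number: int, memory):
    if block_number in cache:               # tag matched some line
        return cache[block_number], "hit"
    if len(cache) >= CAPACITY:
        # Cache full: evict the oldest-inserted entry (a simple policy;
        # real caches might use LRU, FIFO, or random replacement).
        cache.pop(next(iter(cache)))
    cache[block_number] = memory[block_number]
    return cache[block_number], "miss"

memory = {n: f"word-{n}" for n in range(16)}
print(access(7, memory))   # miss: first touch
print(access(7, memory))   # hit: found by tag, regardless of position
```

Because any block can occupy any line, there are no conflict misses; the cost is the need to compare against every line and to run a replacement algorithm when the cache is full.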

Associative Mapping

Set-associative mapping is a combination of the direct and associative cache mapping techniques.


This mapping addresses the main drawback of direct mapping: the possible thrashing when several blocks compete for the same line. Cache lines are grouped together into sets, and each memory block maps to a set rather than to a single line. Set-associative mapping therefore allows several memory words with the same index address to be present in the cache at the same time.

Set Associative Mapping

Within a set, a block can map to any line.

For example, using 6 bits for the tag gives 2^6 = 64 possible tags.

Memory address: the blocks present in the cache are split into 64 sets, with two blocks in each set (2-way set associative).
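The example above can be sketched directly; the set count and associativity mirror the 64-set, 2-way figures in the example.

```python
# Set-associative sketch: the block number selects a set, and the block
# may occupy any of the WAYS lines inside that set.
NUM_SETS = 64   # matches the 64-set example above
WAYS = 2        # two blocks (lines) per set

def set_index(block_number: int) -> int:
    """Return the set this memory block maps to."""
    return block_number % NUM_SETS

# Blocks 3 and 67 share set 3, but can coexist in its two ways,
# avoiding the mutual eviction a direct-mapped cache would suffer.
print(set_index(3), set_index(67))  # → 3 3
```

This is the compromise set-associative mapping makes: the set is chosen directly (cheap indexing), while the search within the set is associative (limited flexibility).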

Some considerations about fully associative mapping:

  1. Any block of primary memory can map to any freely available cache line.
  2. It is more flexible compared to direct mapping.
  3. A replacement algorithm is required when the cache is full.

In fully associative mapping, determining whether a block of primary memory is present in the cache requires comparing the tag bits of the memory address against the tags of every cache line; these comparisons are performed in parallel.

For a cache with 8,192 lines: in a 2-way set-associative organization, the cache has 4,096 sets containing two lines each.

In a 4-way set-associative organization, the same cache has 2,048 sets containing four lines each (8,192 lines / 4).

In a 16-way set-associative cache, it has 512 sets containing 16 lines each.
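The arithmetic behind these figures is simply sets = total lines / associativity, which can be checked in a few lines:

```python
# Number of sets = total cache lines / associativity (ways).
# 8,192 lines matches the cache discussed above.
TOTAL_LINES = 8192

for ways in (2, 4, 16):
    print(f"{ways}-way: {TOTAL_LINES // ways} sets of {ways} lines each")
```

Increasing the associativity shrinks the number of sets but gives each memory block more candidate lines, trading indexing simplicity for fewer conflict misses.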

An N-way set-associative cache helps to decrease conflicts by providing N lines in each set where the data mapped to that set may be placed.
