The TLB is part of the CPU’s MMU and acts as a cache for page table entries. When a virtual address needs to be translated, the MMU checks the TLB first. If the corresponding page table entry is found there (a hit), the physical address is returned immediately and the memory access can proceed. If the entry is not in the TLB (a miss), the MMU must perform a full page table walk, which is slower because it involves accessing main memory.
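
As a rough sketch of this lookup order, a software model might look like the code below. The names here (translate(), page_table_walk(), the tlb array, the 64-entry size) are made up for illustration, and the page table walk is reduced to a placeholder; real hardware performs all of this inside the MMU.

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT  12   /* assuming 4 KiB pages */
    #define TLB_ENTRIES 64   /* assumed TLB size */

    typedef struct {
        uint64_t vpn;        /* virtual page number */
        uint64_t pfn;        /* physical frame number */
        bool     valid;
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Placeholder for the slow path: a real walk traverses the multi-level
     * page tables in main memory. Identity-maps pages here for brevity. */
    static uint64_t page_table_walk(uint64_t vpn)
    {
        return vpn;
    }

    /* Try the TLB first; fall back to a page table walk on a miss. */
    static uint64_t translate(uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;
        uint64_t offset = vaddr & ((1ull << PAGE_SHIFT) - 1);

        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn)          /* TLB hit */
                return (tlb[i].pfn << PAGE_SHIFT) | offset;
        }
        uint64_t pfn = page_table_walk(vpn);                /* TLB miss */
        return (pfn << PAGE_SHIFT) | offset;
    }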

The TLB significantly reduces average translation time because memory accesses are frequent and, thanks to program locality, a program tends to access the same pages repeatedly. When a TLB miss occurs, the translation found in the page table is also installed in the TLB to speed up future accesses to the same page.
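
Continuing the sketch above (same hypothetical tlb array), the miss path would also install the translation it just fetched. The round-robin victim choice below is an assumption; real TLBs use hardware replacement policies such as pseudo-LRU.

    static unsigned next_victim;   /* simple round-robin replacement pointer */

    /* Install a translation fetched by a page table walk so that future
     * accesses to the same page hit in the TLB. */
    static void tlb_fill(uint64_t vpn, uint64_t pfn)
    {
        tlb_entry_t *e = &tlb[next_victim];
        next_victim = (next_victim + 1) % TLB_ENTRIES;

        e->vpn   = vpn;
        e->pfn   = pfn;
        e->valid = true;
    }

    /* The miss path in translate() then becomes:
     *     uint64_t pfn = page_table_walk(vpn);
     *     tlb_fill(vpn, pfn);    // cache it for future accesses
     *     return (pfn << PAGE_SHIFT) | offset;
     */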

To maintain consistency, the TLB must be updated whenever the page tables change, such as when pages are swapped in or out of memory. In practice this means stale entries must be invalidated: on most architectures the operating system does this explicitly after modifying a page table entry, while the hardware takes care of refilling the TLB on later misses.
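
In the same software model, invalidating a stale entry when a mapping changes could look like the function below; on real hardware the operating system typically does this with a dedicated instruction (invlpg on x86, for example) or by flushing the whole TLB.

    /* Drop the cached translation for one page after the OS has changed or
     * removed its mapping (for example when the page is swapped out). */
    static void tlb_invalidate(uint64_t vpn)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn)
                tlb[i].valid = false;
        }
    }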

Caching

  • Store a small number of page table mappings
  • Fast (one order of magnitude faster than a page table walk)
  • All entries searched in parallel
    • TLB hit
    • TLB miss
  • TLB data must be flushed (see the sketch after this list)
    • When a virtual mapping changes
    • On a context switch
      • Expensive
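
A full flush, as on a context switch to a different address space, simply drops every entry in this model (assuming no address-space identifiers, which many CPUs provide precisely to avoid this cost). The flush itself is cheap; the expense comes afterwards, because the new process misses in the TLB and pays for page table walks until the TLB warms up again.

    /* Invalidate every entry. In the model here this stands in for the TLB
     * flush that accompanies a context switch when translations from the
     * previous address space can no longer be trusted. */
    static void tlb_flush_all(void)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)
            tlb[i].valid = false;
    }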