A cache is a temporary memory that allows faster access to data that is accessed frequently or for short periods of time.
Cache memory is a temporary store that enables faster access to information and reduces the number of accesses to a slower storage medium. This caching usually runs in the background, unnoticed by users. A cache can be implemented as a hardware or a software function.
A hardware cache accesses data more quickly than the memory where the data is stored long-term. Software caches do not have their own hardware memory devices, but use storage that is already present in the system.
An example is the web browser's cache, in which data already retrieved from a server, such as images, is stored temporarily.
Typical applications of a cache memory include processors, hard disk storage and web browsers. How efficiently a cache works depends on how often the requested data is actually found in it (the hit rate). Because its size is limited, intelligent storage management and eviction policies are required – for example, Most Recently Used (MRU), Least Frequently Used (LFU) or Least Recently Used (LRU).
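The LRU policy mentioned above can be sketched in a few lines. This is a minimal illustration, not a production implementation: when the capacity is exhausted, the entry that was accessed longest ago is evicted.

```python
from collections import OrderedDict

# Minimal LRU cache sketch: an OrderedDict remembers insertion order,
# so the first entry is always the least recently used one.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                     # cache miss
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
```

LFU would instead track an access counter per entry and evict the entry with the lowest count; the surrounding structure stays the same.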
What is the purpose of a cache memory?
The goals of a cache memory are:
- Reduction of access times to frequently required data
- Reduction in the number of accesses to the primary medium
- Relief of interfaces and bus systems
- Reduction of the network load
- Minimization of transmission errors
- Reduction of the execution time of applications
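The first two goals can be made concrete with a short sketch: a counter stands in for measuring the load on the primary medium, and `slow_read` is a hypothetical stand-in for an expensive backend access.

```python
# Sketch: a cache reduces the number of accesses to the slow primary
# medium. backend_reads counts how often the slow path is actually taken.
backend_reads = 0
cache = {}

def slow_read(key):
    global backend_reads
    backend_reads += 1          # every call here hits the primary medium
    return key.upper()

def cached_read(key):
    if key not in cache:        # cache miss: fall back to the slow medium
        cache[key] = slow_read(key)
    return cache[key]           # cache hit: served from fast local storage

for _ in range(5):
    cached_read("sensor_a")     # five requests for the same data
```

After the loop, `backend_reads` is 1: only the first request reached the primary medium, the other four were served from the cache.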
Types of Cache Memory
Cache memory is categorized into what are called tiers – or levels. These describe how close and accessible it is to the microprocessor. There are three general levels of cache:
The L1 cache (also called primary cache) is very fast but relatively small, and is usually embedded in the processor chip as CPU cache.
The L2 cache (secondary cache) is often larger than the L1 cache. It may be embedded in the CPU, or it may sit on a separate chip or co-processor with a dedicated high-speed bus connecting it to the CPU, so that traffic on the main system bus does not slow it down.
The L3 cache is a specialized memory designed to improve the performance of L1 and L2. L1 and L2 are significantly faster than L3, although L3 is typically still about twice as fast as DRAM. In multi-core processors, each core can have a dedicated L1 and L2 cache while sharing a common L3 cache. When data or an instruction is found in the L3 cache, it is usually promoted to a higher cache level.
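The lookup-and-promote behavior across levels can be illustrated with a toy model. Sizes, addresses and the naive FIFO eviction here are purely illustrative; real hardware uses set-associative caches and cache lines.

```python
# Toy two-level cache hierarchy: a tiny, fast "L1" in front of a larger
# "L2", with main memory as the backing store. On an L2 hit the entry is
# promoted into L1, mirroring how data found in a lower cache level is
# raised to a higher one.
main_memory = {addr: addr * 10 for addr in range(100)}
l1, l2 = {}, {}
L1_SIZE, L2_SIZE = 2, 4

def promote(level, size, addr, value):
    if len(level) >= size:
        level.pop(next(iter(level)))        # naive FIFO eviction
    level[addr] = value

def read(addr):
    if addr in l1:
        return l1[addr], "L1 hit"           # fastest path
    if addr in l2:
        promote(l1, L1_SIZE, addr, l2[addr])  # raise to a higher level
        return l2[addr], "L2 hit"
    value = main_memory[addr]               # slowest path: main memory
    promote(l2, L2_SIZE, addr, value)
    promote(l1, L1_SIZE, addr, value)
    return value, "miss"
```

Reading an address twice turns a miss into an L1 hit; once L1 fills up and evicts it, a later read is still served from L2 rather than main memory.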
While in the past L1, L2 and L3 caches were created with combined processor and motherboard components, the trend now is to consolidate all three levels of memory caching on the CPU itself. Therefore, the main method of increasing cache size has shifted from buying a specific motherboard with different chipsets and bus architectures to buying a CPU with the right amount of integrated L1, L2 and L3 cache.
Contrary to popular belief, implementing flash or more dynamic RAM (DRAM) in a system will not increase cache memory. This can lead to confusion because the terms memory caching and cache memory are often used interchangeably.
The cache provides faster access to data that you regularly retrieve.
Using a cache is therefore particularly worthwhile where access time has a significant effect on overall performance. This is the case, for example, with the processor cache of most (scalar) microprocessors, but not with vector computers, where access time plays a subordinate role and caches therefore provide little or no benefit.