8.3 Address Space and the Cache
The address space is divided into six partitions, and the upper address bits (A31–A29) determine how the cache is accessed. Table 8.2 lists the partitions and their cache operations. For more information on address spaces, see section 7, Bus State Controller. Note that the cache area and the cache-through area occupy the same address space.
Table 8.2   Address Space and Cache Operation

Addresses
A31–A29  Partition                      Cache Operation
000      Cache area                     Cache is used when the CE bit in CCR is 1.
001      Cache-through area             Cache is not used.
010      Associative purge area         Cache line of the specified address is purged (disabled).
011      Address array read/write area  Cache address array is accessed directly.
110      Data array read/write area     Cache data array is accessed directly.
111      I/O area                       Cache is not used.
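As an illustration of this decoding, the following C sketch maps the A31–A29 field of a 32-bit address to the partitions of table 8.2. The function name and the handling of combinations not listed in the table are assumptions made for illustration, not part of the hardware.

#include <stdint.h>
#include <stdio.h>

/* Sketch: classify an address by its A31-A29 bits as in table 8.2. */
static const char *cache_partition(uint32_t addr)
{
    switch (addr >> 29) {               /* A31-A29 */
    case 0x0: return "cache area";
    case 0x1: return "cache-through area";
    case 0x2: return "associative purge area";
    case 0x3: return "address array read/write area";
    case 0x6: return "data array read/write area";
    case 0x7: return "I/O area";
    default:  return "not listed in table 8.2";
    }
}

int main(void)
{
    /* A31-A29 = 001, so this address falls in the cache-through area. */
    printf("%s\n", cache_partition(0x20001000));
    return 0;
}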
8.4 Cache Operation
8.4.1 Cache Reads
This section describes cache operation when the cache is enabled and the CPU performs a read.
One of the 64 entries is selected by the entry address part of the address that the CPU outputs on
the cache address bus. The tag addresses of ways 0 through 3 in that entry are compared with the
tag address part of the address output from the CPU. A match with the tag address of a way is
called a cache hit. In normal operation the tag addresses of the ways differ from one another, so at
most one way will match. When no way's tag address matches, the access is a cache miss. An
entry whose valid bit is 0 never produces a match.
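The hit test described above can be sketched in C as follows. The field boundaries (byte address in A3–A0, entry address in A9–A4, tag address in the upper bits, i.e. a 16-byte line) and all names are assumptions made for illustration; this section only states that there are 64 entries and ways 0 through 3.

#include <stdbool.h>
#include <stdint.h>

#define NUM_ENTRIES 64
#define NUM_WAYS    4

/* One address-array (tag) entry per way and per entry address. */
struct tag_entry {
    uint32_t tag;       /* stored tag address */
    bool     valid;     /* valid bit (V)      */
};

static struct tag_entry address_array[NUM_WAYS][NUM_ENTRIES];

/* Returns the matching way on a cache hit, or -1 on a cache miss. */
static int cache_lookup(uint32_t addr)
{
    uint32_t entry = (addr >> 4) & 0x3F;   /* entry address part (assumed A9-A4) */
    uint32_t tag   = addr >> 10;           /* tag address part (assumed)         */

    for (int way = 0; way < NUM_WAYS; way++) {
        /* Entries whose valid bit is 0 never match. */
        if (address_array[way][entry].valid &&
            address_array[way][entry].tag == tag)
            return way;                     /* cache hit  */
    }
    return -1;                              /* cache miss */
}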
When a cache hit occurs, data is read from the data array of the matching way, using the entry
address, the byte address within the line, and the access data size, and is then sent to the CPU. The
address output on the cache address bus is calculated in the CPU's instruction execution stage, and
the result of the read is written back in the CPU's write-back stage. The cache address bus and
cache data bus both operate as pipelines in concert with the CPU's pipeline structure. Address
comparison through data read takes one cycle; because the address and data buses are pipelined,
consecutive reads can be performed every cycle with no wait cycles.
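Continuing the same sketch, a hit causes the line of the matching way to be read and the requested bytes to be returned according to the byte address within the line and the access size. The 16-byte line and the big-endian byte assembly are assumptions made for illustration; alignment checking is omitted.

#include <stdint.h>

#define LINE_SIZE 16

/* Data array: one 16-byte line per way and per entry address (assumed sizes). */
static uint8_t data_array[4][64][LINE_SIZE];

/* Read 1, 2, or 4 bytes from the line of the way that hit. */
static uint32_t cache_read(int way, uint32_t addr, int size)
{
    uint32_t entry  = (addr >> 4) & 0x3F;      /* entry address part (assumed)  */
    uint32_t offset = addr & (LINE_SIZE - 1);  /* byte address within the line  */
    uint32_t value  = 0;

    /* Assemble the requested bytes most significant first (assumed big-endian). */
    for (int i = 0; i < size; i++)
        value = (value << 8) | data_array[way][entry][offset + i];
    return value;
}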