16. Branch Target Cache
The C8051F12x and C8051F13x device families incorporate a 63 x 4-byte branch target cache with a 4-byte prefetch engine. Because the access time of the FLASH memory is 40 ns, and the minimum instruction time is 10 ns (C8051F120/1/2/3 and C8051F130/1/2/3) or 20 ns (C8051F124/5/6/7), the branch target cache and prefetch engine are necessary for full-speed code execution. Instructions are read from FLASH memory four bytes at a time by the prefetch engine, and given to the CIP-51 processor core to execute. When running linear code (code without any jumps or branches), the prefetch engine alone allows instructions to be executed at full speed. When a code branch occurs, a search is performed for the branch target (destination address) in the cache. If the branch target information is found in the cache (called a “cache hit”), the instruction data is read from the cache and immediately returned to the CIP-51 with no delay in code execution. If the branch target is not found in the cache (called a “cache miss”), the processor may be stalled for up to four clock cycles while the next set of four instructions is retrieved from FLASH memory. Each time a cache miss occurs, the requested instruction data is written to the cache if allowed by the current cache settings. A data flow diagram of the interaction between the CIP-51 and the Branch Target Cache and Prefetch Engine is shown in Figure 16.1.
[Figure omitted: block diagram showing FLASH Memory, Prefetch Engine, Branch Target Cache, and CIP-51, with Instruction Address and Instruction Data paths.]
Figure 16.1. Branch Target Cache Data Flow
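
The timing claim above can be checked with simple arithmetic. The figures used below come directly from the text (40 ns FLASH access time, 10 ns minimum instruction time, four bytes per prefetch); treating the worst case as one code byte consumed per 10 ns clock is an assumption made for this sketch, which is illustrative only and not device firmware.

    /* Back-of-the-envelope check (illustrative only): can a 4-byte prefetch
     * keep up with linear code on the 100 MIPS devices? */
    #include <stdio.h>

    int main(void)
    {
        const double flash_access_ns   = 40.0; /* one 4-byte FLASH read       */
        const double min_instr_time_ns = 10.0; /* shortest CIP-51 instruction */
        const double bytes_per_fetch   = 4.0;

        /* Supply rate of the prefetch engine vs. peak demand of the core,
         * assuming the core never consumes more than one byte per clock.    */
        double supply_bytes_per_ns = bytes_per_fetch / flash_access_ns; /* 0.10 */
        double demand_bytes_per_ns = 1.0 / min_instr_time_ns;           /* 0.10 */

        printf("supply %.2f B/ns, demand %.2f B/ns -> %s\n",
               supply_bytes_per_ns, demand_bytes_per_ns,
               supply_bytes_per_ns >= demand_bytes_per_ns ? "keeps up" : "stalls");
        return 0;
    }

Under these assumptions the prefetch engine supplies bytes exactly as fast as the core can consume them, which is why linear code runs at full speed while branches require the cache.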
16.1. Cache and Prefetch Operation
The branch target cache maintains two sets of memory locations: “slots” and “tags”. A slot is where the cached instruction data from FLASH is stored. Each slot holds four consecutive code bytes. A tag contains the 15 most significant bits of the corresponding FLASH address for each four-byte slot. Thus, instruction data is always cached along four-byte boundaries in code space. A tag also contains a “valid bit”, which indicates whether a cache location contains valid instruction data. A special cache location (called the linear tag and slot) is reserved for use by the prefetch engine. The cache organization is shown in Figure 16.2. Each time a FLASH read is requested, the address is compared with all valid cache tag locations (including the linear tag). If any of the tag locations match the requested address, the data from that slot is immediately provided to the CIP-51. If the requested address matches a location that is currently being read by the prefetch engine, the CIP-51 will be stalled until the read is complete. If a match is not found, the current prefetch operation is abandoned, and a new prefetch operation is initiated for the requested instruction data. When the prefetch operation is finished, the CIP-51 begins executing the instructions that were retrieved, and the prefetch engine begins reading the next four-byte word from FLASH memory. If the newly fetched data also meets the criteria necessary to be cached, it will be written to the cache in the slot indicated by the current replacement algorithm.
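
As an illustration of the organization just described, a minimal C sketch is given below. The type, constant, and function names (tag_slot_t, btc_lookup, and so on) are invented for the sketch; only the counts, field widths, and lookup behavior are taken from the text. This is a software model for clarity, not a description of the hardware implementation or any register interface.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_SLOTS  63u   /* general-purpose tag/slot pairs               */
    #define SLOT_BYTES 4u    /* each slot holds four consecutive code bytes  */

    /* One tag/slot pair: valid bit, 15-bit tag, four bytes of instruction data. */
    typedef struct {
        bool     valid;
        uint16_t tag;               /* 15 MSBs of the slot's FLASH address    */
        uint8_t  data[SLOT_BYTES];
    } tag_slot_t;

    typedef struct {
        tag_slot_t slots[NUM_SLOTS];
        tag_slot_t linear;          /* linear tag/slot reserved for the
                                       prefetch engine                        */
    } branch_target_cache_t;

    /* Map a code address to its tag: data is cached on four-byte boundaries,
     * so the two least significant address bits select the byte within a slot. */
    static inline uint16_t addr_to_tag(uint32_t addr)
    {
        return (uint16_t)((addr >> 2) & 0x7FFF);
    }

    /* Compare an address against all valid tag locations, including the
     * linear tag. Returns the matching slot, or NULL on a cache miss (in
     * which case a new prefetch would be started for this address). */
    static const tag_slot_t *btc_lookup(const branch_target_cache_t *c,
                                        uint32_t addr)
    {
        uint16_t tag = addr_to_tag(addr);
        for (unsigned i = 0; i < NUM_SLOTS; i++) {
            if (c->slots[i].valid && c->slots[i].tag == tag)
                return &c->slots[i];
        }
        if (c->linear.valid && c->linear.tag == tag)
            return &c->linear;
        return NULL;
    }

On a miss, the model would fill the slot chosen by the replacement algorithm (not shown here) with the newly fetched four-byte word and mark its valid bit, mirroring the behavior described above.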