Patent application title: Mechanism to Update the Status of In-Flight Cache Coherence In a Multi-Level Cache Hierarchy
Raguram Damodaran (Plano, TX, US)
Naveen Bhoria (Plano, TX, US)
Krishna C. Gurram (Allen, TX, US)
TEXAS INSTRUMENTS INCORPORATED
IPC8 Class: G06F 12/08
Class name: Caching; multiple caches; hierarchical caches
Publication date: 2012-08-02
Patent application number: 20120198165
Separate buffers store snoop writes and direct memory access writes. A multiplexer selects one of these for input to a FIFO buffer. The FIFO buffer is split into multiple FIFOs including: a command FIFO; an address FIFO; and a write data FIFO. Each snoop command is compared with an allocated line set and way and deleted on a match to avoid data corruption. Each snoop command is also compared with a victim address. If the snoop address matches the victim address, logic redirects the snoop command to a victim buffer and the snoop write is completed in the victim buffer.
1. A data processing system comprising: a central processing unit executing program instructions to manipulate data; a first level data cache connected to said central processing unit temporarily storing in a plurality of cache lines data for manipulation by said central processing unit, each cache line including a tag indicating a valid and a dirty status of said data stored therein; a second level cache connected to said first level cache temporarily storing in a plurality of cache lines data for manipulation by said central processing unit, said second level cache supplying a snoop write to said first level cache if a received write has an address that may be stored in said first level cache; a snoop first-in-first-out buffer having an input receiving snoop writes from said second level cache and an output compared with said tag for each cache line in said first level cache to determine if data at an address corresponding to an address of said snoop write is stored in said first level cache; an allocated line address buffer storing addresses corresponding to cache lines allocated by said first level cache; a kill logic unit comparing addresses in said snoop first-in-first-out buffer with addresses stored in said allocated line address buffer and deleting a snoop first-in-first-out buffer entry upon a match; and a victim address buffer storing addresses of cache lines selected to be evicted from said first level cache and supplying said snoop write data to be written into a victim buffer.
2. The data processing system of claim 1, wherein: said snoop first-in-first-out buffer includes a command first-in-first-out buffer; an address first-in-first-out buffer; and a write data first-in-first-out buffer.
3. The data processing system of claim 1, further comprising: a direct memory access unit connected to said central processing unit controlling data transfer, said direct memory access unit operating under control of said central processing unit to control data transfers including transferring data into said second level directly addressable memory; a snoop buffer temporarily storing snoop writes; a direct memory access buffer storing direct memory access writes; a multiplexer having a first input connected to said snoop buffer, a second input connected to said direct memory access buffer and an output connected to said input of said snoop first-in-first-out buffer, said multiplexer selecting one of said inputs to output.
CLAIM OF PRIORITY
 This application claims priority under 35 U.S.C. 119(e)(1) to U.S. Provisional Application No. 61/387,283 filed Sep. 28, 2010.
TECHNICAL FIELD OF THE INVENTION
 The technical field of this invention is cache for digital data processors.
BACKGROUND OF THE INVENTION
 In prior art data processing systems of the type having a multi-level cache to which this invention is applicable, the level two cache controller to level one data cache (L1D) controller snoop interface operated at a lower clock frequency than the central processing unit. There was a need in the art to raise the interface frequency to the central processing unit clock frequency and to reduce the interface width from 256 bits to 64 bits. With the prior snoop architecture these changes would reduce the snoop bandwidth drastically because the level one cache controller could not accept more than one snoop at a time.
SUMMARY OF THE INVENTION
 Separate buffers store snoop writes and direct memory access writes. A multiplexer selects one of these for input to a FIFO buffer. The FIFO buffer is split into multiple FIFOs including: a command FIFO; an address FIFO; and a write data FIFO. Each snoop command is compared with an allocated line set and way and deleted on a match to avoid data corruption. Each snoop command is also compared with a victim address. If the snoop address matches the victim address, logic redirects the snoop command to a victim buffer. The snoop write is completed in the victim buffer.
BRIEF DESCRIPTION OF THE DRAWINGS
 These and other aspects of this invention are illustrated in the drawings, in which:
 FIG. 1 illustrates the organization of a typical digital signal processor to which this invention is applicable (prior art);
 FIG. 2 illustrates details of a very long instruction word digital signal processor core suitable for use in FIG. 1 (prior art);
 FIG. 3 illustrates the pipeline stages of the very long instruction word digital signal processor core illustrated in FIG. 2 (prior art);
 FIG. 4 illustrates the instruction syntax of the very long instruction word digital signal processor core illustrated in FIG. 2 (prior art);
 FIG. 5 illustrates a computing system including a local memory arbiter according to an embodiment of the invention;
 FIG. 6 is a further view of the digital signal processor system of this invention showing various cache controllers;
 FIG. 7 illustrates the prior art interface between the level two memory controller and the level one memory controller; and
 FIG. 8 illustrates the interface between the level two memory controller and the level one memory controller according to this invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
 FIG. 1 illustrates the organization of a typical digital signal processor system 100 to which this invention is applicable (prior art). Digital signal processor system 100 includes central processing unit core 110. Central processing unit core 110 includes the data processing portion of digital signal processor system 100. Central processing unit core 110 could be constructed as known in the art and would typically include a register file, an integer arithmetic logic unit, an integer multiplier and program flow control units. An example of an appropriate central processing unit core is described below in conjunction with FIGS. 2 to 4.
 Digital signal processor system 100 includes a number of cache memories. FIG. 1 illustrates a pair of first level caches. Level one instruction cache (L1I) 121 stores instructions used by central processing unit core 110. Central processing unit core 110 first attempts to access any instruction from level one instruction cache 121. Level one data cache (L1D) 123 stores data used by central processing unit core 110. Central processing unit core 110 first attempts to access any required data from level one data cache 123. The two level one caches are backed by a level two unified cache (L2) 130. In the event of a cache miss to level one instruction cache 121 or to level one data cache 123, the requested instruction or data is sought from level two unified cache 130. If the requested instruction or data is stored in level two unified cache 130, then it is supplied to the requesting level one cache for supply to central processing unit core 110. As is known in the art, the requested instruction or data may be simultaneously supplied to both the requesting cache and central processing unit core 110 to speed use.
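 The lookup order described above can be summarized in a short software sketch. The following C model is purely illustrative: the toy direct-mapped cache structure, its size and the function names are assumptions, not part of the described hardware. It shows a read that checks the level one cache, then the level two cache, then external memory, filling the caches on the way back so the data reaches both the requesting cache and the central processing unit.

```c
/* Minimal, illustrative software model of the L1/L2 lookup order described
 * above.  The toy caches and function names are hypothetical; the real
 * controllers (DMC, PMC, UMC) are hardware, not software. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINES 16                      /* toy cache: 16 direct-mapped entries */

typedef struct {
    bool     valid[LINES];
    uint32_t tag[LINES];
    uint32_t data[LINES];
} cache_t;

/* Stand-in for external memory 161. */
static uint32_t external_memory_read(uint32_t addr) { return addr ^ 0xA5A5A5A5u; }

static bool lookup(cache_t *c, uint32_t addr, uint32_t *data)
{
    uint32_t set = addr % LINES;
    if (c->valid[set] && c->tag[set] == addr) { *data = c->data[set]; return true; }
    return false;
}

static void fill(cache_t *c, uint32_t addr, uint32_t data)
{
    uint32_t set = addr % LINES;
    c->valid[set] = true; c->tag[set] = addr; c->data[set] = data;
}

/* CPU read: try L1, then L2, then external memory, filling caches on the
 * way back so the data is supplied to both the requesting cache and the CPU. */
static uint32_t cpu_read(cache_t *l1, cache_t *l2, uint32_t addr)
{
    uint32_t data;
    if (lookup(l1, addr, &data)) return data;   /* L1 hit */
    if (!lookup(l2, addr, &data)) {             /* L2 miss: fetch from memory */
        data = external_memory_read(addr);
        fill(l2, addr, data);
    }
    fill(l1, addr, data);                       /* fill requesting L1 cache */
    return data;                                /* supply to CPU core */
}

int main(void)
{
    cache_t l1 = {{false}}, l2 = {{false}};
    printf("first read:  0x%08x\n", cpu_read(&l1, &l2, 0x1000)); /* misses both levels */
    printf("second read: 0x%08x\n", cpu_read(&l1, &l2, 0x1000)); /* hits in L1 */
    return 0;
}
```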
 Level two unified cache 130 is further coupled to higher level memory systems. Digital signal processor system 100 may be a part of a multiprocessor system. The other processors of the multiprocessor system are coupled to level two unified cache 130 via a transfer request bus 141 and a data transfer bus 143. A direct memory access unit 150 provides the connection of digital signal processor system 100 to external memory 161 and external peripherals 169.
 FIG. 1 illustrates several data/instruction movements within the digital signal processor system 100. These include: (1) instructions move from L2 cache 130 to L1I cache 121 to fill in response to a L1I cache miss; (2) data moves from L2 cache 130 to L1D cache 123 to fill in response to a L1D cache miss; (3) data moves from L1D cache 123 to L2 cache 130 in response to a write miss in L1D cache 123, in response to a L1D cache 123 victim eviction and in response to a snoop from L2 cache 130; (4) data moves from external memory 161 to L2 cache 130 to fill in response to a L2 cache miss or a direct memory access (DMA) data transfer into L2 cache 130; (5) data moves from L2 cache 130 to external memory 161 in response to a L2 cache victim eviction or writeback and in response to a DMA transfer out of L2 cache 130; (6) data moves from peripherals 169 to L2 cache 130 in response to a DMA transfer into L2 cache 130; and (7) data moves from L2 cache 130 to peripherals 169 in response to a DMA transfer out of L2 cache 130.
 FIG. 2 is a block diagram illustrating details of a digital signal processor integrated circuit 200 suitable but not essential for use in this invention (prior art). The digital signal processor integrated circuit 200 includes central processing unit 1, which is a 32-bit eight-way VLIW pipelined processor. Central processing unit 1 is coupled to level one instruction cache 121 included in digital signal processor integrated circuit 200. Digital signal processor integrated circuit 200 also includes level one data cache 123. Digital signal processor integrated circuit 200 also includes peripherals 4 to 9. These peripherals preferably include an external memory interface (EMIF) 4 and a direct memory access (DMA) controller 5. External memory interface (EMIF) 4 preferably supports access to synchronous and asynchronous SRAM and synchronous DRAM. Direct memory access (DMA) controller 5 preferably provides 2-channel auto-boot loading direct memory access. These peripherals include power-down logic 6. Power-down logic 6 preferably can halt central processing unit activity, peripheral activity, and phase lock loop (PLL) clock synchronization activity to reduce power consumption. These peripherals also include host ports 7, serial ports 8 and programmable timers 9.
 Central processing unit 1 has a 32-bit, byte addressable address space. Internal memory on the same integrated circuit is preferably organized in a data space including level one data cache 123 and a program space including level one instruction cache 121. When off-chip memory is used, preferably these two spaces are unified into a single memory space via the external memory interface (EMIF) 4.
 Level one data cache 123 may be internally accessed by central processing unit 1 via two internal ports 3a and 3b. Each internal port 3a and 3b preferably has 32 bits of data and a 32-bit byte address reach. Level one instruction cache 121 may be internally accessed by central processing unit 1 via a single port 2a. Port 2a of level one instruction cache 121 preferably has an instruction-fetch width of 256 bits and a 30-bit word (four bytes) address, equivalent to a 32-bit byte address.
 Central processing unit 1 includes program fetch unit 10, instruction dispatch unit 11, instruction decode unit 12 and two data paths 20 and 30. First data path 20 includes four functional units designated L1 unit 22, S1 unit 23, M1 unit 24 and D1 unit 25 and 16 32-bit A registers forming register file 21. Second data path 30 likewise includes four functional units designated L2 unit 32, S2 unit 33, M2 unit 34 and D2 unit 35 and 16 32-bit B registers forming register file 31. The functional units of each data path access the corresponding register file for their operands. There are two cross paths 27 and 37 permitting access to one register in the opposite register file each pipeline stage. Central processing unit 1 includes control registers 13, control logic 14, test logic 15, emulation logic 16 and interrupt logic 17.
 Program fetch unit 10, instruction dispatch unit 11 and instruction decode unit 12 recall instructions from level one instruction cache 121 and deliver up to eight 32-bit instructions to the functional units every instruction cycle. Processing occurs simultaneously in each of the two data paths 20 and 30. As previously described each data path has four corresponding functional units (L, S, M and D) and a corresponding register file containing 16 32-bit registers. Each functional unit is controlled by a 32-bit instruction. The data paths are further described below. A control register file 13 provides the means to configure and control various processor operations.
 FIG. 3 illustrates the pipeline stages 300 of digital signal processor core 110 (prior art). These pipeline stages are divided into three groups: fetch group 310; decode group 320; and execute group 330. All instructions in the instruction set flow through the fetch, decode, and execute stages of the pipeline. Fetch group 310 has four phases for all instructions, and decode group 320 has two phases for all instructions. Execute group 330 requires a varying number of phases depending on the type of instruction.
 The fetch phases of the fetch group 310 are: Program address generate phase 311 (PG); Program address send phase 312 (PS); Program access ready wait stage 313 (PW); and Program fetch packet receive stage 314 (PR). Digital signal processor core 110 uses a fetch packet (FP) of eight instructions. All eight of the instructions proceed through fetch group 310 together. During PG phase 311, the program address is generated in program fetch unit 10. During PS phase 312, this program address is sent to memory. During PW phase 313, the memory read occurs. Finally during PR phase 314, the fetch packet is received at CPU 1.
 The decode phases of decode group 320 are: Instruction dispatch (DP) 321; and Instruction decode (DC) 322. During the DP phase 321, the fetch packets are split into execute packets. Execute packets consist of one or more instructions which are coded to execute in parallel. Also during DP phase 321, the instructions in an execute packet are assigned to the appropriate functional units. During DC phase 322, the source registers, destination registers and associated paths are decoded for the execution of the instructions in the respective functional units.
 The execute phases of the execute group 330 are: Execute 1 (E1) 331; Execute 2 (E2) 332; Execute 3 (E3) 333; Execute 4 (E4) 334; and Execute 5 (E5) 335. Different types of instructions require different numbers of these phases to complete. These phases of the pipeline play an important role in understanding the device state at CPU cycle boundaries.
 During E1 phase 331, the conditions for the instructions are evaluated and operands are read for all instruction types. For load and store instructions, address generation is performed and address modifications are written to a register file. For branch instructions, the branch fetch packet in PG phase 311 is affected. For all single-cycle instructions, the results are written to a register file. All single-cycle instructions complete during the E1 phase 331.
 During the E2 phase 332, for load instructions, the address is sent to memory. For store instructions, the address and data are sent to memory. Single-cycle instructions that saturate results set the SAT bit in the control status register (CSR) if saturation occurs. For single cycle 16 by 16 multiply instructions, the results are written to a register file. For M unit non-multiply instructions, the results are written to a register file. All ordinary multiply unit instructions complete during E2 phase 332.
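 As an illustrative aside, the saturation behavior mentioned above (the result clamped to the representable range and the SAT bit of the control status register set) can be modeled roughly as follows. This is a generic sketch of saturating signed addition, not the exact semantics of any particular instruction.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative model of a saturating signed 32-bit addition: the result is
 * clamped and a flag (standing in for the SAT bit of the control status
 * register) records that saturation occurred. */
static int32_t saturating_add(int32_t a, int32_t b, bool *sat)
{
    int64_t wide = (int64_t)a + (int64_t)b;
    if (wide > INT32_MAX) { *sat = true; return INT32_MAX; }
    if (wide < INT32_MIN) { *sat = true; return INT32_MIN; }
    return (int32_t)wide;
}

int main(void)
{
    bool sat = false;
    int32_t r = saturating_add(INT32_MAX, 1, &sat);   /* overflows, so it saturates */
    printf("result=0x%08x SAT=%d\n", (uint32_t)r, (int)sat);
    return 0;
}
```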
 During E3 phase 333, data memory accesses are performed. Any multiply instruction that saturates results sets the SAT bit in the control status register (CSR) if saturation occurs. Store instructions complete during the E3 phase 333.
 During E4 phase 334, for load instructions, data is brought to the CPU boundary. For multiply extension instructions, the results are written to a register file. Multiply extension instructions complete during the E4 phase 334.
 During E5 phase 335, load instructions write data into a register. Load instructions complete during the E5 phase 335.
 FIG. 4 illustrates an example of the instruction coding of instructions used by digital signal processor core 110 (prior art). Each instruction consists of 32 bits and controls the operation of one of the eight functional units. The bit fields are defined as follows. The creg field (bits 29 to 31) is the conditional register field. These bits identify whether the instruction is conditional and identify the predicate register. The z bit (bit 28) indicates whether the predication is based upon zero or not zero in the predicate register. If z=1, the test is for equality with zero. If z=0, the test is for nonzero. The case of creg=0 and z=0 is treated as always true to allow unconditional instruction execution. The creg field is encoded in the instruction opcode as shown in Table 1.
TABLE 1

  Conditional Register   creg            z
                         31   30   29   28
  Unconditional           0    0    0    0
  Reserved                0    0    0    1
  B0                      0    0    1    z
  B1                      0    1    0    z
  B2                      0    1    1    z
  A1                      1    0    0    z
  A2                      1    0    1    z
  A0                      1    1    0    z
  Reserved                1    1    1    x

 Note that "z" in the z bit column refers to the zero/not zero comparison selection noted above and "x" is a don't care state. This coding can only specify a subset of the 32 registers in each register file as predicate registers. This selection was made to preserve bits in the instruction coding.
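 Table 1 can also be restated as a small decoding routine. The following C sketch uses the bit positions given above (creg in bits 29 to 31, z in bit 28); the helper names and printed strings are illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

/* Predicate register selected by each creg encoding of Table 1. */
static const char *creg_names[8] = {
    "none", "B0", "B1", "B2", "A1", "A2", "A0", "reserved"
};

/* Decode the creg field (bits 29 to 31) and z bit (bit 28) per Table 1. */
static void decode_condition(uint32_t insn)
{
    uint32_t creg = (insn >> 29) & 0x7;   /* bits 31:29 */
    uint32_t z    = (insn >> 28) & 0x1;   /* bit 28 */

    if (creg == 0)
        printf(z ? "reserved encoding\n" : "unconditional execution\n");
    else if (creg == 7)
        printf("reserved encoding\n");
    else
        printf("execute if predicate register %s %s zero\n",
               creg_names[creg], z ? "equals" : "does not equal");
}

int main(void)
{
    decode_condition(0x00000000u);               /* creg=0, z=0: unconditional */
    decode_condition((2u << 29) | (1u << 28));   /* creg=2 (B1), z=1: execute if B1 equals zero */
    return 0;
}
```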
 The dst field (bits 23 to 27) specifies one of the 32 registers in the corresponding register file as the destination of the instruction results.
 The src2 field (bits 18 to 22) specifies one of the 32 registers in the corresponding register file as the second source operand.
 The src1/cst field (bits 13 to 17) has several meanings depending on the instruction opcode field (bits 3 to 12). The first meaning specifies one of the 32 registers of the corresponding register file as the first operand. The second meaning is a 5-bit immediate constant. Depending on the instruction type, this is treated as an unsigned integer and zero extended to 32 bits or is treated as a signed integer and sign extended to 32 bits. Lastly, this field can specify one of the 32 registers in the opposite register file if the instruction invokes one of the register file cross paths 27 or 37.
 The opcode field (bits 3 to 12) specifies the type of instruction and designates appropriate instruction options. A detailed explanation of this field is beyond the scope of this invention except for the instruction options detailed below.
 The s bit (bit 1) designates the data path 20 or 30. If s=0, then data path 20 is selected. This limits the functional unit to L1 unit 22, S1 unit 23, M1 unit 24 and D1 unit 25 and the corresponding register file A 21. Similarly, s=1 selects data path 30, limiting the functional unit to L2 unit 32, S2 unit 33, M2 unit 34 and D2 unit 35 and the corresponding register file B 31.
 The p bit (bit 0) marks the execute packets. The p-bit determines whether the instruction executes in parallel with the following instruction. The p-bits are scanned from lower to higher address. If p=1 for the current instruction, then the next instruction executes in parallel with the current instruction. If p=0 for the current instruction, then the next instruction executes in the cycle after the current instruction. All instructions executing in parallel constitute an execute packet. An execute packet can contain up to eight instructions. Each instruction in an execute packet must use a different functional unit.
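 The p-bit scan described above can be expressed as a short loop. The following C sketch is illustrative (the array-based fetch packet and the function name are assumptions); it groups the eight instructions of a fetch packet into execute packets by reading bit 0 of each instruction word from lower to higher address.

```c
#include <stdint.h>
#include <stdio.h>

#define FETCH_PACKET_WORDS 8

/* Group a fetch packet of eight 32-bit instructions into execute packets by
 * scanning the p bit (bit 0) from lower to higher address: p=1 means the next
 * instruction executes in parallel with the current one. */
static void print_execute_packets(const uint32_t fp[FETCH_PACKET_WORDS])
{
    int start = 0;
    for (int i = 0; i < FETCH_PACKET_WORDS; i++) {
        int p = fp[i] & 0x1;
        if (p == 0 || i == FETCH_PACKET_WORDS - 1) {
            printf("execute packet: instructions %d..%d (%d in parallel)\n",
                   start, i, i - start + 1);
            start = i + 1;
        }
    }
}

int main(void)
{
    /* p bits: 1 1 0 | 1 0 | 0 | 1 0  ->  packets of 3, 2, 1 and 2 instructions */
    uint32_t fp[FETCH_PACKET_WORDS] = { 1, 1, 0, 1, 0, 0, 1, 0 };
    print_execute_packets(fp);
    return 0;
}
```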
 FIG. 5 is a block diagram illustrating a computing system including a local memory arbiter according to an embodiment of the invention. FIG. 5 illustrates system on a chip (SoC) 500. SoC 500 includes one or more DSP cores 510, SRAM/Caches 520 and shared memory 530. SoC 500 is preferably formed on a common semiconductor substrate. These elements can also be implemented in separate substrates, circuit boards and packages. For example shared memory 530 could be implemented in a separate semiconductor substrate. FIG. 5 illustrates four DSP cores 510, but SoC 500 may include fewer or more DSP cores 510.
 Each DSP core 510 preferably includes a level one data cache such as L1 SRAM/cache 512. In the preferred embodiment each L1 SRAM/cache 512 may be configured with selected amounts of memory directly accessible by the corresponding DSP core 510 (SRAM) and data cache. Each DSP core 510 has a corresponding level two combined cache L2 SRAM/cache 520. As with L1 SRAM/cache 512, each L2 SRAM/cache 520 is preferably configurable with selected amounts of directly accessible memory (SRAM) and data cache. Each L2 SRAM/cache 520 includes a prefetch unit 522. Each prefetch unit 522 prefetches data for the corresponding L2 SRAM/cache 520 based upon anticipating the needs of the corresponding DSP core 510. Each DSP core 510 is further coupled to shared memory 530. Shared memory 530 is usually slower and typically less expensive memory than L2 SRAM/cache 520 or L1 SRAM/cache 512. Shared memory 530 typically stores program and data information shared between the DSP cores 510.
 In various embodiments, each DSP core 510 includes a corresponding local memory arbiter 524 for reordering memory commands in accordance with a set of reordering rules. Each local memory arbiter 524 arbitrates and schedules memory requests from differing streams at a local level before sending the memory requests to central memory arbiter 534. A local memory arbiter 524 may arbitrate between more than one DSP core 510. Central memory arbiter 534 controls memory accesses for shared memory 530 that are generated by differing DSP cores 510 that do not share a common local memory arbiter 524.
 FIG. 6 is a further view of the digital signal processor system 100 of this invention. CPU 110 is bidirectionally connected to L1I cache 121 and L1D cache 123. L1I cache 121 and L1D cache 123 are shown together because they are at the same level in the memory hierarchy. These level one caches are bidirectionally connected to L2 130. L2 cache 130 is in turn bidirectionally connected to external memory 161 and peripherals 169. External memory 161 and peripherals 169 are shown together because they are at the same level in the memory hierarchy. Data transfers into and out of L1D cache 123 are controlled by data memory controller (DMC) 610. Data transfers into and out of L1I cache 121 are controlled by program memory controller (PMC) 620. Data transfers into and out of L2 130, including both cache and directly addressable memory (SRAM), are controlled by unified memory controller (UMC) 630.
 FIG. 7 illustrates the prior art interface between UMC 630 and DMC 610. Snoop and DMA signals from UMC 630 supply pipeline buffers 711 and 712. As shown in FIG. 7, either buffer 711 or 712 can store snoop or DMA signals. Pipeline buffers 711 and 712 store data across the boundary between the Execute 2 (E2) 332 phase and the Execute 3 (E3) 333 phase. Multiplexer 721 selects the data stored in one of buffers 711 and 712 for output to the snoop input of DMC 610. Multiplexer 722 selects the data stored in one of buffers 711 and 712 for output to the DMA input of DMC 610. Each of the data paths illustrated in FIG. 7 is 256 bits wide.
 FIG. 8 illustrates the interface between UMC 630 and DMC 610 according to this invention. Buffers 811 and 812 correspond substantially to buffers 711 and 712 except that buffers 811 and 812 store data across the boundary between the Execute 1 (E1) 331 phase and the Execute 2 (E2) 332 phase. This change enables this interface to run at the clock frequency of the central processing unit. The interface width is also reduced from 256 bits to 64 bits. At the end of FIFO 831, register 841 holds the FIFO elements before supply to the data cache.
 The L1D snoop architecture illustrated in FIG. 8 is improved over the prior art to accept multiple snoop commands. Multiplexer 821 selects the data stored in one of buffers 811 and 812 for output to the input of snoop FIFO 831. Thus direct memory access and snoop accesses are merged into a single eight entry deep FIFO 831 and share the same pipeline inside DMC 610. DMC 610 can now have up to 10 pending SNP/DMA commands without stalling the external/L2 interface.
 The FIFO 831 DMA/SNP command packet is split into multiple FIFOs. These include: a command FIFO; an address FIFO; and a write data FIFO. Each command in the command FIFO can either: get committed to L1D cache; hit a victim; or hit an allocated line. Each snoop command is compared with an L2 allocated line (set and way) stored in L2W_ADDR register 842. If the snoop address matches the allocated line set and way in L2W_ADDR register 842, SNP kill logic 832 kills the SNP command to avoid data corruption. DMC 610 can kill any number of SNP commands sitting at any level inside DMC 610 (E1, FIFO, E2). SNP kill logic 832 is encapsulated within the command FIFO. Self acknowledge (ACK) logic implemented inside the command FIFO flushes out a SNP command which needs to be dropped. A synchronization protocol keeps the different pipelines of FIFO 831 in sync in case a SNP command is dropped.
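 A software model can make the kill path concrete. The sketch below is illustrative only: the structure names, the address-to-set/way mapping and the kill flag are assumptions, but the flow follows the description above, with the command, address and write data FIFOs kept in lock-step and any queued snoop whose address matches the allocated line recorded in an L2W_ADDR-style register marked for deletion so the self acknowledge logic can later flush it.

```c
#include <stdbool.h>
#include <stdint.h>

#define FIFO_DEPTH 8   /* eight entry deep FIFO, as in FIFO 831 */

typedef enum { CMD_NONE, CMD_SNOOP, CMD_DMA } cmd_kind_t;

/* One slot spread across the parallel command/address/write-data FIFOs.
 * A 'killed' flag models the SNP kill: the entry stays in place so the
 * pipelines remain in sync, and the self-ACK logic drops it at dispatch. */
typedef struct {
    cmd_kind_t cmd[FIFO_DEPTH];
    uint32_t   addr[FIFO_DEPTH];
    uint64_t   wdata[FIFO_DEPTH];
    bool       killed[FIFO_DEPTH];
    int        head, count;
} snoop_fifo_t;

/* Model of L2W_ADDR register 842: the set and way currently being allocated. */
typedef struct { bool valid; uint32_t set, way; } alloc_reg_t;

/* Hypothetical address-to-set/way mapping, for the illustration only. */
static void addr_to_set_way(uint32_t addr, uint32_t *set, uint32_t *way)
{
    *set = (addr >> 6) & 0x3F;
    *way = (addr >> 12) & 0x1;
}

/* Compare every queued snoop with the allocated line (set and way) and mark
 * matching entries as killed to avoid corrupting the line being allocated. */
static void snp_kill_scan(snoop_fifo_t *f, const alloc_reg_t *l2w_addr)
{
    if (!l2w_addr->valid)
        return;
    for (int i = 0; i < f->count; i++) {
        int slot = (f->head + i) % FIFO_DEPTH;
        uint32_t set, way;
        if (f->cmd[slot] != CMD_SNOOP || f->killed[slot])
            continue;
        addr_to_set_way(f->addr[slot], &set, &way);
        if (set == l2w_addr->set && way == l2w_addr->way)
            f->killed[slot] = true;   /* self-ACK logic will flush this entry */
    }
}
```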
 Each snoop command is also compared with a victim address stored in VCT_ADDR register 843. If the snoop address matches the victim address in VCT_ADDR register 843, SNP hit victim logic 833 redirects the SNP command to a victim buffer. The snoop write is completed in the victim buffer.
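 In the same spirit, the decision made for the entry at the head of the FIFO can be sketched as follows: a killed entry is flushed, an entry whose address matches the VCT_ADDR-style victim address is completed in the victim buffer, and any other entry is committed to L1D cache. The line-size mask and the names below are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { DISPATCH_DROP, DISPATCH_TO_VICTIM_BUFFER, DISPATCH_TO_L1D } dispatch_t;

/* Decide where the snoop write at the head of the FIFO goes.  'killed' comes
 * from the kill scan described above; 'vct_addr_valid'/'vct_addr' model
 * VCT_ADDR register 843.  Comparing at 64-byte line granularity is assumed. */
static dispatch_t dispatch_snoop(bool killed, uint32_t snoop_addr,
                                 bool vct_addr_valid, uint32_t vct_addr)
{
    const uint32_t LINE_MASK = ~0x3Fu;             /* assumed 64-byte cache lines */

    if (killed)
        return DISPATCH_DROP;                      /* flushed by self-ACK logic */
    if (vct_addr_valid &&
        (snoop_addr & LINE_MASK) == (vct_addr & LINE_MASK))
        return DISPATCH_TO_VICTIM_BUFFER;          /* complete write in victim buffer */
    return DISPATCH_TO_L1D;                        /* commit to L1D cache */
}
```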
 Since the command accept signal for SNP and DMA is now pipelined, there is a need to predict the command accept signal for future commands. The command accept signal for DMA and SNP is predicted based on FIFO status, bandwidth management logic and the commands currently present in the L1D E1 pipeline. There can now be 10 commands pending inside DMC 610. The effective priority logic is enhanced to take into account the priorities of all pending transactions. The effective priority of all pending DMA/SNP commands is used to arbitrate with CPU 110 traffic in case of bank stalls.
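 The accept prediction can likewise be sketched as a function of FIFO occupancy and the commands already in flight. The FIFO depth of eight and the limit of ten pending commands come from the description above; the split between pipeline stages and the arithmetic below are assumed simplifications.

```c
#include <stdbool.h>

#define FIFO_DEPTH          8   /* entries in FIFO 831 */
#define MAX_PENDING        10   /* up to 10 pending SNP/DMA commands in DMC 610 */

/* Predict whether the next SNP/DMA command can be accepted without stalling
 * the L2 interface, based on FIFO occupancy, commands already in the L1D E1
 * pipeline, and a bandwidth-management gate.  Illustrative simplification. */
static bool predict_command_accept(int fifo_count, int pipeline_pending,
                                   bool bandwidth_ok)
{
    int pending = fifo_count + pipeline_pending;
    return bandwidth_ok &&
           fifo_count < FIFO_DEPTH &&
           pending < MAX_PENDING;
}
```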
 This solution allows DMC 610 to accept multiple snoop commands and provides a unique way to drop any number of pending snoop commands within DMC 610. Using this solution, the snoop interface speed-up is achieved without any loss of snoop throughput.