Patent application number | Description | Published |
20090327646 | Minimizing TLB Comparison Size - In one embodiment, a system comprises one or more registers configured to store a plurality of values that identify a virtual address space (collectively a tag), a translation lookaside buffer (TLB), and a control unit coupled to the TLB and the one or more registers. The control unit is configured to detect whether or not the tag has changed and, in response to a change in the tag, map the changed tag to an identifier having fewer bits than the total number of bits in the tag, and provide the identifier to the TLB. The TLB is configured to detect a hit/miss in response to the identifier. A similar method is also contemplated. | 12-31-2009 |
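The tag-to-identifier mapping described above can be sketched as a small table that hands out short IDs to recently seen context tags. The table size and the LRU reuse policy here are illustrative assumptions, not details from the application:

```python
from collections import OrderedDict

class TagMapper:
    """Maps a wide virtual-address-space tag to a short identifier.

    Illustrative sketch: an LRU table of 2**id_bits entries; the real
    hardware's policy for reusing identifiers is not specified here.
    """
    def __init__(self, id_bits=3):
        self.capacity = 1 << id_bits
        self.table = OrderedDict()           # tag -> short identifier
        self.free = list(range(self.capacity))

    def lookup(self, tag):
        if tag in self.table:
            self.table.move_to_end(tag)      # mark most recently used
            return self.table[tag]
        if not self.free:                    # evict least recently used
            _, freed = self.table.popitem(last=False)
            self.free.append(freed)
        ident = self.free.pop()
        self.table[tag] = ident
        return ident

m = TagMapper(id_bits=2)                     # 4 short IDs for wide tags
a = m.lookup(0xDEAD_BEEF_0000)
b = m.lookup(0xCAFE_F00D_0000)
```

The TLB then compares only the short identifier on each access, which is the comparison-size saving the abstract claims.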
20110078425 | BRANCH PREDICTION MECHANISM FOR PREDICTING INDIRECT BRANCH TARGETS - A multithreaded microprocessor includes an instruction fetch unit that may fetch and maintain a plurality of instructions belonging to one or more threads and one or more execution units that may concurrently execute the one or more threads. The instruction fetch unit includes a target branch prediction unit that may provide a predicted branch target address in response to receiving an instruction fetch address of a current indirect branch instruction. The branch prediction unit includes a primary storage and a control unit. The storage includes a plurality of entries, and each entry may store a predicted branch target address corresponding to a previous indirect branch instruction. The control unit may generate an index value for accessing the storage using a portion of the instruction fetch address of the current indirect branch instruction, and branch direction history information associated with a currently executing thread of the one or more threads. | 03-31-2011 |
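The index generation described above can be sketched as a target table addressed by a hash of fetch-address bits and per-thread direction history. The hash (XOR), table size, and history length are illustrative assumptions:

```python
class IndirectTargetPredictor:
    """Sketch of a history-hashed indirect branch target table.

    Index = low PC bits XOR per-thread branch-direction history; the
    exact hash and table geometry are illustrative assumptions.
    """
    def __init__(self, index_bits=8):
        self.mask = (1 << index_bits) - 1
        self.table = [0] * (1 << index_bits)   # predicted target addresses
        self.history = {}                      # thread id -> direction bits

    def _index(self, pc, tid):
        return (pc ^ self.history.get(tid, 0)) & self.mask

    def predict(self, pc, tid):
        return self.table[self._index(pc, tid)]

    def update(self, pc, tid, actual_target):
        self.table[self._index(pc, tid)] = actual_target

    def record_direction(self, tid, taken):
        h = self.history.get(tid, 0)
        self.history[tid] = ((h << 1) | int(taken)) & self.mask

p = IndirectTargetPredictor()
p.record_direction(tid=0, taken=True)
p.update(pc=0x4000, tid=0, actual_target=0x5000)
```

Folding per-thread history into the index lets the same indirect branch map to different entries depending on the path taken, which is what makes indirect targets predictable at all.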
20110087866 | PERCEPTRON-BASED BRANCH PREDICTION MECHANISM FOR PREDICTING CONDITIONAL BRANCH INSTRUCTIONS ON A MULTITHREADED PROCESSOR - A multithreaded microprocessor includes an instruction fetch unit including a perceptron-based conditional branch prediction unit configured to provide, for each of one or more concurrently executing threads, a direction branch prediction. The conditional branch prediction unit includes a plurality of storages each including a plurality of entries. Each entry may be configured to store one or more prediction values. Each prediction value of a given storage may correspond to at least one conditional branch instruction in a cache line. The conditional branch prediction unit may generate a separate index value for accessing each storage by generating a first index value for accessing a first storage by combining one or more portions of a received instruction fetch address, and generating each other index value for accessing the other storages by combining the first index value with a different portion of direction branch history information. | 04-14-2011 |
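The perceptron mechanism named above follows a well-known scheme: predict from the sign of a dot product between signed weights and the direction history, and train when mispredicted or when the output magnitude is below a threshold. History length and threshold below are illustrative assumptions:

```python
class PerceptronPredictor:
    """Sketch of a perceptron conditional-branch predictor.

    Prediction is the sign of (bias + weights . history); sizes and the
    training threshold are illustrative, not taken from the application.
    """
    def __init__(self, history_len=8, threshold=16):
        self.weights = [0] * (history_len + 1)   # weights[0] is the bias
        self.history = [1] * history_len         # +1 taken, -1 not taken
        self.threshold = threshold

    def _output(self):
        return self.weights[0] + sum(
            w * h for w, h in zip(self.weights[1:], self.history))

    def predict(self):
        return self._output() >= 0               # True => predict taken

    def train(self, taken):
        y = self._output()
        t = 1 if taken else -1
        if (y >= 0) != taken or abs(y) <= self.threshold:
            self.weights[0] += t
            for i, h in enumerate(self.history):
                self.weights[i + 1] += t * h
        self.history = self.history[1:] + [t]    # shift in the outcome

p = PerceptronPredictor()
for _ in range(20):          # an always-taken branch is learned quickly
    p.train(taken=True)
```

The multiple storages in the abstract correspond to weight tables indexed by different slices of the history, a detail this single-table sketch flattens for brevity.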
20120137077 | MISS BUFFER FOR A MULTI-THREADED PROCESSOR - A multi-threaded processor configured to allocate entries in a buffer for instruction cache misses is disclosed. Entries in the buffer may store thread state information for a corresponding instruction cache miss for one of a plurality of threads executable by the processor. The buffer may include dedicated entries and dynamically allocable entries, where the dedicated entries are reserved for a subset of the plurality of threads and the dynamically allocable entries are allocable to a group of two or more of the plurality of threads. In one embodiment, the dedicated entries are dedicated for use by a single thread and the dynamically allocable entries are allocable to any of the plurality of threads. The buffer may store two or more entries for a given thread at a given time. In some embodiments, the buffer may help ensure none of the plurality of threads experiences starvation with respect to instruction fetches. | 05-31-2012 |
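The dedicated-plus-dynamic allocation policy above can be sketched directly; the sizes here (one dedicated entry per thread, a small shared pool) are illustrative assumptions:

```python
class MissBuffer:
    """Sketch of a miss buffer with per-thread dedicated entries plus a
    shared, dynamically allocable pool; sizes are illustrative."""
    def __init__(self, num_threads=4, shared_entries=4):
        self.dedicated_free = {t: 1 for t in range(num_threads)}  # 1 each
        self.shared_free = shared_entries
        self.entries = []                # (thread id, miss state) records

    def allocate(self, tid, miss_state):
        if self.dedicated_free.get(tid, 0) > 0:
            self.dedicated_free[tid] -= 1
        elif self.shared_free > 0:
            self.shared_free -= 1
        else:
            return False                 # buffer full for this thread
        self.entries.append((tid, miss_state))
        return True

buf = MissBuffer(num_threads=2, shared_entries=1)
ok1 = buf.allocate(0, "miss@0x100")      # uses thread 0's dedicated entry
ok2 = buf.allocate(0, "miss@0x140")      # spills into the shared pool
ok3 = buf.allocate(0, "miss@0x180")      # rejected: no space for thread 0
```

Because a thread can never consume another thread's dedicated entry, every thread always has at least one slot available, which is the starvation-avoidance property the abstract mentions.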
20120216020 | INSTRUCTION SUPPORT FOR PERFORMING STREAM CIPHER - Techniques relating to a processor that provides instruction-level support for a stream cipher are disclosed. In one embodiment, the processor supports a first instruction executable to perform an alpha multiplication, an alpha division, and an exclusive-OR operation using a result of the alpha multiplication and a result of the alpha division. In one embodiment, the processor supports a second instruction executable to perform a modular addition of a value R1 and a value S, and to perform a first exclusive-OR operation on a result of the modular addition and a value R2. In one embodiment, the processor supports a third instruction executable to perform a substitution-box (S-Box) operation on a value R1 to produce a value R2′, and to perform a modular addition using a value R2 to produce a value R1′. | 08-23-2012 |
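The second instruction's semantics follow directly from the abstract: add R1 and S modulo the word size, then XOR the sum with R2. The 32-bit word width below is an illustrative assumption:

```python
MASK32 = 0xFFFFFFFF   # 32-bit word size is an illustrative assumption

def stream_cipher_mix(r1, s, r2):
    """Sketch of the second instruction described above: modular addition
    of R1 and S, then an exclusive-OR of that sum with R2."""
    return ((r1 + s) & MASK32) ^ r2

out = stream_cipher_mix(0xFFFFFFFF, 0x00000002, 0x0000FF00)
```

Fusing the add and XOR into one instruction saves an issue slot per round of the cipher's state update, which is the point of the instruction-level support.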
20120233441 | MULTI-THREADED INSTRUCTION BUFFER DESIGN - An instruction buffer for a processor configured to execute multiple threads is disclosed. The instruction buffer is configured to receive instructions from a fetch unit and provide instructions to a selection unit. The instruction buffer includes one or more memory arrays comprising a plurality of entries configured to store instructions and/or other information (e.g., program counter addresses). One or more indicators are maintained by the processor and correspond to the plurality of threads. The one or more indicators are usable such that for instructions received by the instruction buffer, one or more of the plurality of entries of a memory array can be determined as a write destination for the received instructions, and for instructions to be read from the instruction buffer (and sent to a selection unit), one or more entries can be determined as the correct source location from which to read. | 09-13-2012 |
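The indicator-driven entry selection above can be approximated by per-thread FIFO regions with their own read/write state. Modeling each region as a bounded FIFO is an illustrative simplification of the memory-array indicators:

```python
from collections import deque

class InstructionBuffer:
    """Sketch: per-thread FIFO regions standing in for the
    indicator-selected entries; geometry is an illustrative assumption."""
    def __init__(self, num_threads=4, entries_per_thread=8):
        self.limit = entries_per_thread
        self.fifos = {t: deque() for t in range(num_threads)}

    def write(self, tid, instr):         # from the fetch unit
        if len(self.fifos[tid]) >= self.limit:
            return False                 # thread's region is full
        self.fifos[tid].append(instr)
        return True

    def read(self, tid):                 # toward the selection unit
        return self.fifos[tid].popleft() if self.fifos[tid] else None

buf = InstructionBuffer(num_threads=2, entries_per_thread=2)
buf.write(0, "add r1, r2")
buf.write(0, "ld  r3, [r4]")
full = buf.write(0, "nop")               # region full, write rejected
```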
20120233442 | RETURN ADDRESS PREDICTION IN MULTITHREADED PROCESSORS - Techniques and structures are disclosed relating to predicting return addresses in multithreaded processors. In one embodiment, a processor is disclosed that includes a return address prediction unit. The return address prediction unit is configured to store return addresses for different ones of a plurality of threads executable on the processor. The return address prediction unit is configured to receive a request for a predicted return address for one of the plurality of threads. The request includes an identification of the requesting thread. The return address prediction unit is configured to provide the predicted return address to the requesting thread. In some embodiments, the return address prediction unit is configured to store the return addresses in a memory that has a plurality of dedicated portions. In some embodiments, the return address prediction unit is configured to store the return addresses in a memory that has dynamically allocable entries. | 09-13-2012 |
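The dedicated-portions variant above amounts to one return-address stack per thread, selected by the thread identifier carried in the request. The stack depth here is an illustrative assumption:

```python
class ReturnAddressPredictor:
    """Sketch of per-thread return-address stacks (the 'dedicated
    portions' variant); depth is an illustrative assumption."""
    def __init__(self, num_threads=4, depth=8):
        self.stacks = {t: [] for t in range(num_threads)}
        self.depth = depth

    def push(self, tid, return_addr):    # on a call instruction
        stack = self.stacks[tid]
        if len(stack) == self.depth:
            stack.pop(0)                 # oldest entry is overwritten
        stack.append(return_addr)

    def predict(self, tid):              # request carries the thread id
        stack = self.stacks[tid]
        return stack.pop() if stack else None

rap = ReturnAddressPredictor()
rap.push(tid=0, return_addr=0x1004)
rap.push(tid=1, return_addr=0x2008)      # threads do not interfere
```

The dynamically allocable variant would instead share one pool of entries across threads, trading isolation for capacity.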
20120290817 | BRANCH TARGET STORAGE AND RETRIEVAL IN AN OUT-OF-ORDER PROCESSOR - A processor configured to facilitate transfer and storage of predicted targets for control transfer instructions (CTIs). In certain embodiments, the processor may be multithreaded and support storage of predicted targets for multiple threads. In some embodiments, a CTI branch target may be stored by one element of a processor and a tag may indicate the location of the stored target. The tag may be associated with the CTI rather than associating the complete target address with the CTI. When the CTI reaches an execution stage of the processor, the tag may be used to retrieve the predicted target address. In some embodiments using a tag to retrieve a predicted target, CTI instructions from different processor threads may be interleaved without affecting retrieval of predicted targets. | 11-15-2012 |
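The tag scheme above can be sketched as a small target table: the front end files the full predicted address and attaches only the short index to the CTI, and the execute stage uses that index to retrieve it. The table size and round-robin tag assignment are illustrative assumptions:

```python
class TargetStore:
    """Sketch of tag-based predicted-target storage: the CTI carries a
    small tag instead of the full target address. Geometry and the
    round-robin tag policy are illustrative assumptions."""
    def __init__(self, entries=16):
        self.table = [None] * entries
        self.next_tag = 0

    def store(self, predicted_target):
        tag = self.next_tag
        self.table[tag] = predicted_target
        self.next_tag = (tag + 1) % len(self.table)
        return tag                       # carried with the CTI downstream

    def retrieve(self, tag):             # at the execute stage
        return self.table[tag]

ts = TargetStore()
tag_a = ts.store(0x4_0000)               # thread A's branch
tag_b = ts.store(0x8_0000)               # thread B's branch, interleaved
```

Because each in-flight CTI owns a distinct tag, interleaving CTIs from different threads cannot mix up their predicted targets, matching the abstract's claim.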
20120290820 | SUPPRESSION OF CONTROL TRANSFER INSTRUCTIONS ON INCORRECT SPECULATIVE EXECUTION PATHS - Techniques are disclosed relating to a processor that is configured to execute control transfer instructions (CTIs). In some embodiments, the processor includes a mechanism that suppresses results of mispredicted younger CTIs on a speculative execution path. This mechanism permits the branch predictor to maintain its fidelity and eliminates spurious flushes of the pipeline. In one embodiment, a misprediction bit is used to indicate that a misprediction has occurred, and CTIs younger than the mispredicted CTI are suppressed. In some embodiments, the processor may be configured to execute instruction streams from multiple threads. Each thread may include a misprediction indication. CTIs in each thread may execute in program order with respect to other CTIs of the thread, while instructions other than CTIs may execute out of program order. | 11-15-2012 |
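The misprediction-bit mechanism above can be sketched as a per-thread flag checked by each CTI in program order; once set, younger CTIs neither update predictor state nor trigger further flushes. All names below are illustrative:

```python
def execute_ctis(ctis, predict, actual):
    """Sketch of suppressing younger CTIs after a misprediction: once the
    thread's misprediction bit is set, later CTIs on the wrong path are
    ignored. Returns the list of CTIs whose results were kept."""
    mispredicted = False                 # the per-thread misprediction bit
    kept = []
    for cti in ctis:                     # CTIs execute in program order
        if mispredicted:
            continue                     # suppress this younger CTI
        if predict[cti] != actual[cti]:
            mispredicted = True          # first misprediction sets the bit
        kept.append(cti)                 # the mispredicting CTI itself
    return kept                          # still resolves (and flushes once)

predict = {"b1": "taken", "b2": "taken", "b3": "not-taken"}
actual = {"b1": "taken", "b2": "not-taken", "b3": "taken"}
kept = execute_ctis(["b1", "b2", "b3"], predict, actual)
```

Here b3 executes on b2's wrong path, so its (meaningless) outcome is discarded rather than corrupting the predictor or forcing a second flush.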
20120290821 | LOW-LATENCY BRANCH TARGET CACHE - Techniques and structures are disclosed relating to a branch target cache (BTC) in a processor. In one embodiment, the BTC is usable to predict whether a control transfer instruction is to be taken, and, if applicable, a target address for the instruction. The BTC may operate in conjunction with a delayed branch predictor (DBP) that is more accurate but slower than the BTC. If the BTC indicates that a control transfer instruction is predicted to be taken, the processor begins to fetch instructions at the target address indicated by the BTC, but may discard those instructions if the DBP subsequently determines that the control transfer instruction was predicted incorrectly. Branch prediction information output from the BTC and the DBP may be used to update the branch target cache for subsequent predictions. In various embodiments, the BTC may simultaneously store entries for multiple processor threads, and may be fully associative. | 11-15-2012 |
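The fast/slow interplay above can be sketched as: fetch proceeds on the BTC's quick answer, then the delayed predictor's answer either confirms it or forces a redirect and a BTC update. All names and the dictionary-backed BTC are illustrative assumptions:

```python
def fetch_with_btc(pc, btc, delayed_predict):
    """Sketch of BTC-first fetch with delayed-predictor override.

    btc maps pc -> (taken?, target); delayed_predict(pc) returns the
    slower, more accurate (taken?, target). Both are illustrative stand-ins.
    Returns (final fetch address, whether speculative fetch was discarded).
    """
    fast = btc.get(pc)                       # quick, possibly absent/wrong
    fetched = fast[1] if fast and fast[0] else pc + 4
    slow_taken, slow_target = delayed_predict(pc)
    expected = slow_target if slow_taken else pc + 4
    if fetched != expected:
        btc[pc] = (slow_taken, slow_target)  # train BTC for next time
        return expected, True                # discard speculative fetch
    return fetched, False

btc = {0x100: (True, 0x200)}
addr, flushed = fetch_with_btc(0x100, btc, lambda pc: (True, 0x200))
addr2, flushed2 = fetch_with_btc(0x104, btc, lambda pc: (True, 0x300))
```

The payoff is latency hiding: when the BTC agrees with the delayed predictor, the fetch pipeline never stalls waiting for the accurate prediction.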
20120297167 | EFFICIENT CALL RETURN STACK TECHNIQUE - A processor, method, and medium for implementing a call return stack within a pipelined processor. A stack head register is used to store a copy of the top entry of the call return stack, and the stack head register is accessed by the instruction fetch unit on each fetch cycle. If a fetched instruction is decoded as a return instruction, the speculatively read address from the stack head register is utilized as a target address to fetch subsequent instructions, and the address at the second entry from the top of the call return stack is written to the stack head register. | 11-22-2012 |
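The stack-head-register scheme above can be sketched as a stack plus a mirror register: the register is read speculatively every fetch cycle, and a return pops the stack and refills the register from the new top entry. The unbounded stack is an illustrative simplification:

```python
class CallReturnStack:
    """Sketch of the stack-head-register scheme: the register mirrors the
    top stack entry so the fetch unit can read it every cycle without
    accessing the stack itself. The unbounded stack is a simplification."""
    def __init__(self):
        self.stack = []                  # self.stack[0] is the top entry
        self.head = None                 # register mirroring the top entry

    def on_call(self, return_addr):
        self.stack.insert(0, return_addr)
        self.head = return_addr

    def on_return(self):
        target = self.head               # speculatively read head register
        if self.stack:
            self.stack.pop(0)
        self.head = self.stack[0] if self.stack else None  # refill head
        return target

crs = CallReturnStack()
crs.on_call(0x1004)
crs.on_call(0x2008)
```

Keeping the top entry in a register means a return's target is available without a stack-array read in the critical fetch path, which is the efficiency claim.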
20130019080 | DYNAMIC SIZING OF TRANSLATION LOOKASIDE BUFFER FOR POWER REDUCTION (inventors: Gideon N. Levinsky, Cedar Park, TX; Manish K. Shah, Austin, TX) - Methods and mechanisms for operating a translation lookaside buffer (TLB). The TLB includes a plurality of segments, each segment including one or more entries. A control unit is coupled to the TLB. The control unit is configured to determine utilization of segments and dynamically disable segments in response to determining that segments are under-utilized. The control unit is also configured to dynamically enable segments responsive to determining a given number of segments are over-utilized. | 01-17-2013 |
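The utilization-driven sizing above can be sketched with hit counters per segment and two watermarks: disable segments below the low mark, re-enable one when all active segments exceed the high mark. Both thresholds are illustrative assumptions:

```python
class SegmentedTLB:
    """Sketch of utilization-driven TLB segment gating; the watermark
    thresholds and counter-reset policy are illustrative assumptions."""
    def __init__(self, num_segments=4, low=2, high=10):
        self.enabled = [True] * num_segments
        self.hits = [0] * num_segments
        self.low, self.high = low, high

    def record_hit(self, seg):
        if self.enabled[seg]:
            self.hits[seg] += 1

    def resize(self):                    # run once per sampling interval
        active = [i for i, e in enumerate(self.enabled) if e]
        if all(self.hits[i] >= self.high for i in active):
            for i, e in enumerate(self.enabled):   # re-enable a segment
                if not e:
                    self.enabled[i] = True
                    break
        else:
            for i in active[1:]:                   # keep at least one on
                if self.hits[i] < self.low:
                    self.enabled[i] = False        # gate it off for power
        self.hits = [0] * len(self.hits)

tlb = SegmentedTLB()
for _ in range(12):
    tlb.record_hit(0)                    # only segment 0 sees traffic
tlb.resize()                             # idle segments get disabled
```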
20130138888 | STORING A TARGET ADDRESS OF A CONTROL TRANSFER INSTRUCTION IN AN INSTRUCTION FIELD - A control transfer instruction (CTI), such as a branch, jump, etc., may have an offset value for a control transfer that is to be performed. The offset value may be usable to compute a target address for the CTI (e.g., the address of a next instruction to be executed for a thread or instruction stream). The offset may be specified relative to a program counter. In response to detecting a specified offset value, the CTI may be modified to include at least a portion of a computed target address. Information indicating this modification has been performed may be stored, for example, in a pre-decode bit. In some cases, CTI modification may be performed only when a target address is a “near” target, rather than a “far” target. Modifying CTIs as described herein may eliminate redundant address calculations and produce a savings of power and/or time in some embodiments. | 05-30-2013 |
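The rewrite described above can be sketched as: compute the PC-relative target once, and if it is a near target that fits the instruction's offset field, store it back in place of the offset and set a pre-decode bit. The field width and near/far test are illustrative assumptions:

```python
def fold_target(pc, offset, field_bits=22):
    """Sketch of pre-computing a PC-relative branch target and storing it
    in the CTI's offset field when it fits (a 'near' target). Field width
    and the near/far test are illustrative assumptions.
    Returns (new field value, pre-decode bit marking the rewrite)."""
    target = pc + offset
    if target < (1 << field_bits):       # near target: fits in the field
        return target, True              # store absolute target, set bit
    return offset, False                 # far target: leave offset as-is

field, rewritten = fold_target(pc=0x1000, offset=0x40)
```

Once the bit is set, later fetches of the same CTI read the target directly from the instruction field instead of repeating the PC + offset addition, the power/time saving the abstract claims.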