Patent application number | Description | Published |
---|---|---|
20080198743 | DATA FLOW CONTROL FOR SIMULTANEOUS PACKET RECEPTION - Embodiments of the present invention provide methods, a module, and a system for calculating a credit limit for an interface capable of receiving multiple packets simultaneously. Generally, the multiple packets are simultaneously received at an interface on a second device, each packet being one of a plurality of packet types, and a flow control credit limit to be transmitted to a first device is adjusted based on the combination of packet types of the simultaneously received packets. | 08-21-2008 |
20080288780 | LOW-LATENCY DATA DECRYPTION INTERFACE - Methods and apparatus for reducing the impact of latency associated with decrypting encrypted data are provided. Rather than wait until an entire packet of encrypted data is validated (e.g., by checking for data transfer errors), the encrypted data may be pipelined to a decryption engine as it is received, thus allowing decryption to begin prior to validation. In some cases, the decryption engine may be notified of data transfer errors detected during the validation process, in order to prevent reporting false security violations. | 11-20-2008 |
20090077433 | SELF-HEALING LINK SEQUENCE COUNTS WITHIN A CIRCULAR BUFFER - Methods and apparatus that allow recovery in the event that sequence counts used on receive and transmit sides of a communications link become out of sync are provided. In response to receiving a packet with an expected sequence count from a receiving device, a transmitting device may adjust pointers into a transmit buffer allowing the transmitting device to begin transmitting packets with the sequence count expected by the receiving device. | 03-19-2009 |
20090109996 | Network on Chip - A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers. | 04-30-2009 |
20090125574 | Software Pipelining On a Network On Chip - Memory sharing in a software pipeline on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, including segmenting a computer software application into stages of a software pipeline, the software pipeline comprising one or more paths of execution; allocating memory to be shared among at least two stages including creating a smart pointer, the smart pointer including data elements for determining when the shared memory can be deallocated; determining, in dependence upon the data elements for determining when the shared memory can be deallocated, that the shared memory can be deallocated; and deallocating the shared memory. | 05-14-2009 |
20090125703 | Context Switching on a Network On Chip - Data processing on a network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, with each IP block also adapted to the network by a low latency, high bandwidth application messaging interconnect comprising an inbox and an outbox, each IP block also including a stack normally used for context switching, access to the stack being slower than access to the outbox, and each IP block further including a processor supporting a plurality of threads of execution, the processor configured to save, upon a context switch, a context of a current thread of execution in memory locations in a memory array in the outbox instead of the stack and to lock the memory locations in which the context was saved. | 05-14-2009 |
20090135739 | Network On Chip With Partitions - A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, with the network organized into partitions, each partition including at least one IP block, each partition assigned exclusive access to a separate physical memory address space, and with one or more applications executing on one or more of the partitions. | 05-28-2009 |
20090138567 | Network on chip with partitions - A design structure embodied in a machine readable medium is provided. Embodiments of the design structure include a network on chip (‘NOC’), the NOC comprising: integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, each memory communications controller controlling communication between an IP block and memory, and each network interface controller controlling inter-IP block communications through routers; the network organized into partitions, each partition including at least one IP block, each partition assigned exclusive access to a separate physical memory address space; and one or more applications executing on one or more of the partitions. | 05-28-2009 |
20090144564 | DATA ENCRYPTION INTERFACE FOR REDUCING ENCRYPT LATENCY IMPACT ON STANDARD TRAFFIC - Methods and apparatus that may be utilized in systems to reduce the impact of latency associated with encrypting data on non-encrypted data are provided. Secure and non-secure data may be routed independently. Thus, non-secure data may be forwarded on (e.g., to targeted write buffers), without waiting for previously sent secure data to be encrypted. As a result, non-secure data may be made available for subsequent processing much earlier than in conventional systems utilizing a common data path for both secure and non-secure data. | 06-04-2009 |
20090201302 | Graphics Rendering On A Network On Chip - Graphics rendering on a network on chip (‘NOC’) including receiving, in the geometry processor, a representation of an object to be rendered; converting, by the geometry processor, the representation of the object to two dimensional primitives; sending, by the geometry processor, the primitives to the plurality of scan converters; converting, by the scan converters, the primitives to fragments, each fragment comprising one or more portions of a pixel; for each fragment: selecting, by the scan converter for the fragment in dependence upon sorting rules, a pixel processor to process the fragment; sending, by the scan converter to the pixel processor, the fragment; and processing, by the pixel processor, the fragment to produce pixels for an image. | 08-13-2009 |
20090210883 | Network On Chip Low Latency, High Bandwidth Application Messaging Interconnect - Data processing on a network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, with each IP block also adapted to the network by a low latency, high bandwidth application messaging interconnect comprising an inbox and an outbox. | 08-20-2009 |
20090260013 | Computer Processors With Plural, Pipelined Hardware Threads Of Execution - Computer processors and methods of operation of computer processors that include a plurality of pipelined hardware threads of execution, each thread including a plurality of computer program instructions; an instruction decoder that determines dependencies and latencies among instructions of a thread; and an instruction dispatcher that arbitrates, in the presence of resource contention and in accordance with the dependencies and latencies, priorities for dispatch of instructions from the plurality of threads of execution. | 10-15-2009 |
20090276572 | Memory Management Among Levels of Cache in a Memory Hierarchy - Methods, apparatus, and product for memory management among levels of cache in a memory hierarchy in a computer with a processor operatively coupled through two or more levels of cache to a main random access memory, caches closer to the processor in the hierarchy characterized as higher in the hierarchy, including: identifying a line in a first cache that is preferably retained in the first cache, the first cache backed up by at least one cache lower in the memory hierarchy, the lower cache implementing an LRU-type cache line replacement policy; and updating LRU information for the lower cache to indicate that the line has been recently accessed. | 11-05-2009 |
20090282211 | Network On Chip With Partitions - Data processing with a network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, including: organizing the network into partitions; assigning all IP blocks of a partition a partition identifier (‘partition ID’) that uniquely identifies for an IP block a particular partition in which the IP block is included; establishing one or more permissions tables associating partition IDs with sources and destinations of data communications on the NOC, each record in the permissions tables representing a restriction on data communications on the NOC; executing one or more applications on one or more of the partitions, including transmitting data communications messages among IP blocks and between IP blocks and memory, each data communications message including a partition ID of a sender of the data communications message; and controlling data communications among the partitions in dependence upon the permissions tables and the partition IDs. | 11-12-2009 |
20090282222 | Dynamic Virtual Software Pipelining On A Network On Chip - A NOC for dynamic virtual software pipelining including IP blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, the NOC also including: a computer software application segmented into stages, each stage comprising a flexibly configurable module of computer program instructions identified by a stage ID, each stage assigned to a thread of execution on an IP block; and each stage executing on a thread of execution on an IP block, including a first stage executing on an IP block, producing output data and sending by the first stage the produced output data to a second stage, the output data including control information for the next stage and payload data; and the second stage consuming the produced output data in dependence upon the control information. | 11-12-2009 |
20090282226 | Context Switching On A Network On Chip - A network on chip (‘NOC’) that includes IP blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to the network by an application messaging interconnect including an inbox and an outbox, one or more of the IP blocks including computer processors supporting a plurality of threads, the NOC also including an inbox and outbox controller configured to set pointers to the inbox and outbox, respectively, that identify valid message data for a current thread; and software running in the current thread that, upon a context switch to a new thread, is configured to: save the pointer values for the current thread, and reset the pointer values to identify valid message data for the new thread, where the inbox and outbox controller are further configured to retain the valid message data for the current thread in the boxes until a subsequent context switch back to the current thread. | 11-12-2009 |
20090282227 | Monitoring Software Pipeline Performance On A Network On Chip - Software pipelining on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, each memory communications controller controlling communication between an IP block and memory, and each network interface controller controlling inter-IP block communications through routers. Embodiments of the present invention include implementing a software pipeline on the NOC, including segmenting a computer software application into stages, each stage comprising a flexibly configurable module of computer program instructions identified by a stage ID; executing each stage of the software pipeline on a thread of execution on an IP block; monitoring software pipeline performance in real time; and reconfiguring the software pipeline, dynamically, in real time, and in dependence upon the monitored software pipeline performance. | 11-12-2009 |
20090282419 | Ordered And Unordered Network-Addressed Message Control With Embedded DMA Commands For A Network On Chip - Data processing on a network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, network interface controllers, and network-addressed message controllers, with each IP block adapted to a router through a memory communications controller, a network-addressed message controller, and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, with each IP block also adapted to the network by a low latency, high bandwidth application messaging interconnect comprising an inbox and an outbox. | 11-12-2009 |
20100091787 | DIRECT INTER-THREAD COMMUNICATION BUFFER THAT SUPPORTS SOFTWARE CONTROLLED ARBITRARY VECTOR OPERAND SELECTION IN A DENSELY THREADED NETWORK ON A CHIP - A computer-implemented method, system and computer program product for retrieving arbitrarily aligned vector operands within a highly threaded Network On a Chip (NOC) processor are presented. Multiple nodes in a NOC are able to access a single Compressed Direct Interthread Communication Buffer (CDICB), which contains a misaligned but compacted set of operands. Using information from a Special Purpose Register (SPR) within the NOC, each node is able to selectively extract one or more operands from the CDICB for use in an execution unit within that node. Output from the execution unit is then sent to the CDICB to update the compacted set of operands. | 04-15-2010 |
20100100707 | DATA STRUCTURE FOR CONTROLLING AN ALGORITHM PERFORMED ON A UNIT OF WORK IN A HIGHLY THREADED NETWORK ON A CHIP - A computer-implemented method, system and computer program product for controlling an algorithm that is performed on a unit of work in a subsequent software pipeline stage in a Network On a Chip (NOC) is presented. In one embodiment, the method executes a first operation in a first node of the NOC. The first node generates a payload, and then loads that payload into a message. The message with the payload is transmitted to a nanokernel that controls a second node in the NOC. The nanokernel calls an algorithm that is needed by a second operation in the second node of the NOC, which uses the algorithm to execute the second operation. | 04-22-2010 |
20100100770 | SOFTWARE DEBUGGER FOR PACKETS IN A NETWORK ON A CHIP - A breakpoint packet is dispatched to a Network On A Chip (NOC). The breakpoint packet instructs one or more specified nodes on the NOC to place the specified node, or a core or hardware thread within the specified node, into “single step” mode, in order to enable debugging of a work packet that is dispatched to that node. | 04-22-2010 |
20100100934 | SECURITY METHODOLOGY TO PREVENT USER FROM COMPROMISING THROUGHPUT IN A HIGHLY THREADED NETWORK ON A CHIP PROCESSOR - A computer-implemented method, system and computer program product for preventing an untrusted work unit message from compromising throughput in a highly threaded Network On a Chip (NOC) processor are presented. A security message, which is associated with the untrusted work unit message, directs other resources within the NOC to operate in a secure mode while a specified node, within the NOC, executes instructions from the work unit message in a less privileged non-secure mode. Thus, throughput within the NOC is uncompromised, since resources other than the specified node are protected from the untrusted work unit message. | 04-22-2010 |
20100106940 | Processing Unit With Operand Vector Multiplexer Sequence Control - Operand vector multiplexer sequence control is used in a vector-based execution unit to control the shuffling of data elements in operand vectors used by a sequence of vector instructions processed by the vector-based execution unit. A swizzle sequence instruction is defined in an instruction set for the vector-based execution unit and is used to selectively apply a sequence of vector data element shuffle orders to one or more operand vectors to be used by the associated sequence of vector instructions. As a result, when a common sequence of data element shuffle orders is used frequently for a sequence of vector instructions, a single swizzle sequence instruction may be used to select the desired sequence of custom data element ordering for each of the vector instructions in the sequence. | 04-29-2010 |
20100188402 | User-Defined Non-Visible Geometry Featuring Ray Filtering - A method, system and computer program product for managing secondary rays during ray-tracing are presented. A non-visible unidirectional ray tracing object logically surrounds a user-selected virtual object in a computer generated illustration. This unidirectional ray tracing object prevents secondary tracing rays from emanating from the user-selected virtual object during ray tracing. | 07-29-2010 |
20100189111 | STREAMING DIRECT INTER-THREAD COMMUNICATION BUFFER PACKETS THAT SUPPORT HARDWARE CONTROLLED ARBITRARY VECTOR OPERAND ALIGNMENT IN A DENSELY THREADED NETWORK ON A CHIP - A computer-implemented method, system and computer program product for arbitrarily aligning vector operands, which are transmitted in inter-thread communication buffer packets within a highly threaded Network On a Chip (NOC) processor, are presented. A set of multiplexers in a node in the NOC realigns and extracts data word aggregations from an incoming compressed inter-thread communication buffer packet. The extracted data word aggregations are used as operands by an execution unit within the node. | 07-29-2010 |
20100191940 | SINGLE STEP MODE IN A SOFTWARE PIPELINE WITHIN A HIGHLY THREADED NETWORK ON A CHIP MICROPROCESSOR - A hardware thread is selectively forced to single step the execution of software instructions from a work packet granule. A “single step” packet is associated with a work packet granule. The work packet granule, with the associated “single step” packet, is dispatched as an appended work packet granule to a preselected hardware thread in a processor core, which, in one embodiment, is located at a node in a Network On a Chip (NOC). The work packet granule then executes in a single step mode until completion. | 07-29-2010 |
20100192014 | PSEUDO RANDOM PROCESS STATE REGISTER FOR FAST RANDOM PROCESS TEST GENERATION - A method, system and computer program product are presented for providing pseudo-random input test data to a test program. A seed value is generated and stored in a seed register. Using the seed value as an input, a pseudo-random input test value is generated by a Linear Feedback Shift Register (LFSR), and stored in a general purpose register (GPR) within a processor core. Using the pseudo-random input test value from the GPR, a test program is executed within the processor core. | 07-29-2010 |
20100199067 | Split Vector Loads and Stores with Stride Separated Words - A method, system and computer program product are presented for causing a parallel load/store of stride-separated words from a data vector using different memory chips in a computer. | 08-05-2010 |
20100238169 | Physical Rendering With Textured Bounding Volume Primitive Mapping - A circuit arrangement, program product and method utilize a textured bounding volume to reduce the overhead associated with generating and using an Accelerated Data Structure (ADS) in connection with physical rendering. In particular, a subset of the primitives in a scene may be mapped to surfaces of a bounding volume to generate textures on such surfaces that can be used during physical rendering. By doing so, the primitives that are mapped to the bounding volume surfaces may be omitted from the ADS to reduce the processing overhead associated with both generating the ADS and using the ADS during physical rendering, and furthermore, in many instances the size of the ADS may be reduced, thus reducing the memory footprint of the ADS, and often improving cache hit rates and reducing memory bandwidth. | 09-23-2010 |
20100239185 | Accelerated Data Structure Optimization Based Upon View Orientation - A circuit arrangement, program product and method utilize the known view orientation for an image frame to be rendered to optimize the generation and/or use of an Accelerated Data Structure (ADS) used in physical rendering-based image processing. In particular, it has been found that while geometry primitives that are not within a view orientation generally cannot be culled from a scene when a physical rendering technique such as ray tracing is performed, those primitives nonetheless have a smaller impact on the resulting image frame, and as a result, less processing resources can be applied to such primitives, leaving greater processing resources available for processing those primitives that are located within the view orientation, and thereby improving overall rendering performance. | 09-23-2010 |
20100239186 | Accelerated Data Structure Positioning Based Upon View Orientation - A circuit arrangement, program product and method utilize the known view orientation for an image frame to be rendered to reposition an Accelerated Data Structure (ADS) used during rendering to optimize the generation and/or use of the ADS, e.g., by transforming a scene from which an image frame is rendered to orient the scene relative to the view orientation prior to generating the ADS. A scene may be transformed, for example, to orient the view orientation within a single octant of the scene, with additional processing resources assigned to that octant to ensure sufficient processing resources are devoted to processing the primitives within the view orientation. | 09-23-2010 |
20100269123 | Performance Event Triggering Through Direct Interthread Communication On a Network On Chip - Performance event triggering through direct interthread communication (‘DITC’) on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, including enabling performance event monitoring in a selected set of IP blocks distributed throughout the NOC, each IP block within the selected set of IP blocks having one or more event counters; collecting performance results from the one or more event counters; and returning performance results from the one or more event counters to a destination repository, the returning being initiated by a triggering event occurring within the NOC. | 10-21-2010 |
20110047355 | Offset Based Register Address Indexing - A circuit arrangement and method support offset based register address indexing, wherein register addresses to be used by an instruction are calculated using offsets to the full target register address, and the offsets are contained in the instruction and occupy less instruction space than the full address widths. An instruction may include at least one offset value that identifies a register address. During decoding of the instruction, an offset and a full target address are retrieved from the instruction, and then a register address is calculated by addition of the offset to the full target address. | 02-24-2011 |
20110283090 | Instruction Addressing Using Register Address Sequence Detection - A circuit arrangement and method support efficient indexing into large register files by utilizing register address sequence detection, wherein register addresses to be used by an instruction are produced by concatenating a portion of the address that is contained in the instruction with another portion that is speculatively produced by sequence detection logic. The portion of the correct full address that is not contained in the instruction is stored in a software accessible special purpose register. If the end of a particular sequence of addresses is detected by the sequence detection logic, the invention speculatively assumes that the next address in the sequence will be used. Since only a portion of each full address is stored in the instruction, register addresses occupy less instruction space than their full widths. An instruction may include at least one address portion that identifies a register address. | 11-17-2011 |
20110285709 | Allocating Resources Based On A Performance Statistic - A method includes rendering an object of a three-dimensional image via a pixel shader based on a render context data structure associated with the object. The method includes measuring a performance statistic associated with rendering the object. The method also includes storing the performance statistic in the render context data structure associated with the object. The performance statistic is accessible to a host interface processor to determine whether to allocate a second pixel shader to render the object in a subsequent three-dimensional image. | 11-24-2011 |
20110285710 | Parallelized Ray Tracing - A method includes assigning a priority to a ray data structure of a plurality of ray data structures based on one or more priorities. The ray data structure includes properties of a ray to be traced from an illumination source in a three-dimensional image. The method includes identifying a portion of the three-dimensional image through which the ray passes. The method also includes identifying a slave processing element associated with the portion of the three-dimensional image. The method further includes sending the ray data structure to the slave processing element. | 11-24-2011 |
20110289485 | Software Trace Collection and Analysis Utilizing Direct Interthread Communication On A Network On Chip - Collecting and analyzing trace data while in a software debug mode through direct interthread communication (‘DITC’) on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, including enabling the collection of software debug information in a selected set of IP blocks distributed through the NOC, each IP block within the selected set of IP blocks having a set of trace data; collecting software debugging information via the set of trace data; communicating the set of trace data to a destination repository; and analyzing the set of trace data at the destination repository. | 11-24-2011 |
20110292063 | ROLLING TEXTURE CONTEXT DATA STRUCTURE FOR MAINTAINING TEXTURE DATA IN A MULTITHREADED IMAGE PROCESSING PIPELINE - A multithreaded rendering software pipeline architecture utilizes a rolling texture context data structure to store multiple texture contexts that are associated with different textures that are being processed in the software pipeline. Each texture context stores state data for a particular texture, and facilitates the access to texture data by multiple, parallel stages in a software pipeline. In addition, texture contexts are capable of being “rolled”, or copied to enable different stages of a rendering pipeline that require different state data for a particular texture to separately access the texture data independently from one another, and without the necessity for stalling the pipeline to ensure synchronization of shared texture data among the stages of the pipeline. | 12-01-2011 |
20110316855 | Parallelized Streaming Accelerated Data Structure Generation - A method includes receiving at a master processing element primitive data that includes properties of a primitive. The method includes partially traversing a spatial data structure that represents a three-dimensional image to identify an internal node of the spatial data structure. The internal node represents a portion of the three-dimensional image. The method also includes selecting a slave processing element from a plurality of slave processing elements. The selected processing element is associated with the internal node. The method further includes sending the primitive data to the selected slave processing element to traverse a portion of the spatial data structure to identify a leaf node of the spatial data structure. | 12-29-2011 |
20110316864 | MULTITHREADED SOFTWARE RENDERING PIPELINE WITH DYNAMIC PERFORMANCE-BASED REALLOCATION OF RASTER THREADS - A multithreaded rendering software pipeline architecture dynamically reallocates regions of an image space to raster threads based upon performance data collected by the raster threads. The reallocation of the regions typically includes resizing the regions assigned to particular raster threads and/or reassigning regions to different raster threads to better balance the relative workloads of the raster threads. | 12-29-2011 |
20110317712 | Recovering Data From A Plurality of Packets - A method includes receiving a plurality of packets at an integrated processor block of a network on a chip device. The plurality of packets includes a first packet that includes an indication of a start of data associated with a pixel shader application. The method includes recovering the data from the plurality of packets. The method also includes storing the recovered data in a dedicated packet collection memory within the network on the chip device. The method further includes retaining the data stored in the dedicated packet collection memory during an interruption event. Upon completion of the interruption event, the method includes copying packets stored in the dedicated packet collection memory prior to the interruption event to an inbox of the network on the chip device for processing. | 12-29-2011 |
20110320719 | PROPAGATING SHARED STATE CHANGES TO MULTIPLE THREADS WITHIN A MULTITHREADED PROCESSING ENVIRONMENT - A circuit arrangement and method make state changes to shared state data in a highly multithreaded environment by propagating or streaming the changes to multiple parallel hardware threads of execution in the multithreaded environment using an on-chip communications network and without attempting to access any copy of the shared state data in a shared memory to which the parallel threads of execution are also coupled. Through the use of an on-chip communications network, changes to the shared state data may be communicated quickly and efficiently to multiple threads of execution, enabling those threads to locally update their local copies of the shared state. Furthermore, by avoiding attempts to access a shared memory, the interface to the shared memory is not overloaded with concurrent access attempts, thus preserving memory bandwidth for other activities and reducing memory latency. Particularly for larger shared states, propagating the changes, rather than an entire shared state, further improves performance by reducing the amount of data communicated over the on-chip communications network. | 12-29-2011 |
20110320724 | DMA-BASED ACCELERATION OF COMMAND PUSH BUFFER BETWEEN HOST AND TARGET DEVICES - Direct Memory Access (DMA) is used in connection with passing commands between a host device and a target device coupled via a push buffer. Commands passed to a push buffer by a host device may be accumulated by the host device prior to forwarding the commands to the push buffer, such that DMA may be used to collectively pass a block of commands to the push buffer. In addition, a host device may utilize DMA to pass command parameters for commands to a command buffer that is accessible by the target device but is separate from the push buffer, with the commands that are passed to the push buffer including pointers to the associated command parameters in the command buffer. | 12-29-2011 |
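A minimal sketch of the accumulate-then-flush scheme above (the class and method names are assumptions, not the patent's API): the host batches commands locally and flushes them to the push buffer in one block, while bulky parameters go to a separate command buffer and the pushed commands carry only pointers (here, indices) into it.

```python
# Sketch: batched command push with parameters held in a separate
# command buffer referenced by pointer.

class Host:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.pending = []         # commands accumulated on the host
        self.push_buffer = []     # shared with the target device
        self.command_buffer = []  # parameter storage, also target-visible

    def issue(self, opcode, params):
        ptr = len(self.command_buffer)
        self.command_buffer.append(params)     # stands in for a DMA of params
        self.pending.append((opcode, ptr))     # command references its params
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        self.push_buffer.extend(self.pending)  # one collective DMA transfer
        self.pending = []

host = Host(batch_size=2)
host.issue("DRAW", {"verts": 3})
host.issue("BLIT", {"w": 64, "h": 64})   # second command triggers the flush
```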
20110320771 | INSTRUCTION UNIT WITH INSTRUCTION BUFFER PIPELINE BYPASS - A circuit arrangement and method selectively bypass an instruction buffer for selected instructions so that bypassed instructions can be dispatched without having to first pass through the instruction buffer. Thus, for example, in the case that an instruction buffer is partially or completely flushed as a result of an instruction redirect (e.g., due to a branch mispredict), instructions can be forwarded to subsequent stages in an instruction unit and/or to one or more execution units without the latency associated with passing through the instruction buffer. | 12-29-2011 |
20110321049 | Programmable Integrated Processor Blocks - An integrated processor block of a network on a chip is programmable to perform a first function. The integrated processor block includes an inbox to receive incoming packets from other integrated processor blocks of the network on a chip, an outbox to send outgoing packets to the other integrated processor blocks, an on-chip memory, and a memory management unit to enable access to the on-chip memory. | 12-29-2011 |
20110321057 | MULTITHREADED PHYSICS ENGINE WITH PREDICTIVE LOAD BALANCING - A circuit arrangement and method utilize predictive load balancing to allocate the workload among hardware threads in a multithreaded physics engine. The predictive load balancing is based at least in part upon the detection of predicted future collisions between objects in a scene, such that the reallocation of respective loads of a plurality of hardware threads may be initiated prior to detection of the actual collisions, thereby increasing the likelihood that hardware threads will be optimally allocated when the actual collisions occur. | 12-29-2011 |
20120144253 | HARD MEMORY ARRAY FAILURE RECOVERY UTILIZING LOCKING STRUCTURE - A technique for managing hard failures in a memory system employing a locking structure is disclosed. An error count is maintained for units of memory within the memory system. When the error count indicates a hard failure, the unit of memory is locked out from further use. An arbitrary set of error counters is assigned to record errors resulting from access to the units of memory. Embodiments of the present invention advantageously enable a system to continue reliable operation even after one or more internal hard memory failures. Other embodiments advantageously enable manufacturers to salvage partially failed devices and deploy the devices as having a lower-performance specification rather than discarding the devices, as would otherwise be indicated by conventional practice. | 06-07-2012 |
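The error-count lockout can be sketched as follows (the threshold value and class structure are invented for illustration): each memory unit accrues an error count, and once the count crosses a hard-failure threshold the unit is locked out from further use.

```python
# Sketch: per-unit error counters with lockout past a hard-failure threshold.

HARD_FAILURE_THRESHOLD = 3   # assumed value for the example

class MemoryUnitTracker:
    def __init__(self, num_units):
        self.error_counts = [0] * num_units
        self.locked = [False] * num_units

    def record_error(self, unit):
        """Count an error against a unit; lock it out past the threshold."""
        if self.locked[unit]:
            return
        self.error_counts[unit] += 1
        if self.error_counts[unit] >= HARD_FAILURE_THRESHOLD:
            self.locked[unit] = True

    def usable_units(self):
        return [i for i, is_locked in enumerate(self.locked) if not is_locked]

tracker = MemoryUnitTracker(4)
for _ in range(3):
    tracker.record_error(2)   # unit 2 fails repeatedly and gets locked out
```

The remaining units stay usable, which is how a partially failed device can still be deployed at a reduced specification.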
20120176364 | REUSE OF STATIC IMAGE DATA FROM PRIOR IMAGE FRAMES TO REDUCE RASTERIZATION REQUIREMENTS - An apparatus, program product and method reuse static image data generated during rasterization of static geometry to reduce the processing overhead associated with rasterizing subsequent image frames. In particular, static image data generated during one frame may be reused in a subsequent image frame such that the subsequent image frame is generated without having to re-rasterize the static geometry from the scene, i.e., with only the dynamic geometry rasterized. The resulting image frame includes dynamic image data generated as a result of rasterizing the dynamic geometry during that image frame, and static image data generated as a result of rasterizing the static geometry during a prior image frame. | 07-12-2012 |
20120179899 | UPGRADEABLE PROCESSOR ENABLING HARDWARE LICENSING - A technique for programming a configurable co-processor in a processing system is disclosed. The configurable co-processor includes field programmable logic and is configured using a pre-generated co-processor image. The technique involves enabling a user application to program the configurable co-processor with certain application-specific hardware based processing functions. One advantage of the present invention is that application-specific hardware design optimizations may be implemented to improve application performance after hardware for the processing system has been manufactured. | 07-12-2012 |
20120192202 | Context Switching On A Network On Chip - A network on chip (NOC) that includes IP blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to the network by an application messaging interconnect including an inbox and an outbox, one or more of the IP blocks including computer processors supporting a plurality of threads, the NOC also including an inbox and outbox controller configured to set pointers to the inbox and outbox, respectively, that identify valid message data for a current thread; and software running in the current thread that, upon a context switch to a new thread, is configured to: save the pointer values for the current thread, and reset the pointer values to identify valid message data for the new thread, where the inbox and outbox controller are further configured to retain the valid message data for the current thread in the boxes until the context switches again to the current thread. | 07-26-2012 |
20120209944 | Software Pipelining On A Network On Chip - Memory sharing in a software pipeline on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, including segmenting a computer software application into stages of a software pipeline, the software pipeline comprising one or more paths of execution; allocating memory to be shared among at least two stages including creating a smart pointer, the smart pointer including data elements for determining when the shared memory can be deallocated; determining, in dependence upon the data elements for determining when the shared memory can be deallocated, that the shared memory can be deallocated; and deallocating the shared memory. | 08-16-2012 |
20120221711 | REGULAR EXPRESSION SEARCHES UTILIZING GENERAL PURPOSE PROCESSORS ON A NETWORK INTERCONNECT - A first hardware node in a network interconnect receives a data packet from a network. The first hardware node examines the data packet for a regular expression. In response to the first hardware node failing to identify the regular expression in the data packet, the data packet is forwarded to a second hardware node in the network interconnect for further examination of the data packet in order to search for the regular expression in the data packet. | 08-30-2012 |
20120254548 | ALLOCATING CACHE FOR USE AS A DEDICATED LOCAL STORAGE - A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage. | 10-04-2012 |
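The lock/unlock behavior above can be modeled in a few lines (semantics heavily simplified, class and method names invented): locked lines join the dedicated local storage and are simply excluded from eviction.

```python
# Sketch: cache lines locked into dedicated local storage are never
# candidates for eviction; unlocking returns a line to the evictable pool.

class DataCache:
    def __init__(self):
        self.lines = {}   # addr -> {"data": ..., "locked": bool}

    def load_and_lock(self, addr, data):
        """Load data and lock the line into the dedicated local storage."""
        self.lines[addr] = {"data": data, "locked": True}

    def unlock(self, addr):
        """Deallocate the line from the local storage (make it evictable)."""
        self.lines[addr]["locked"] = False

    def evictable(self):
        return [a for a, line in self.lines.items() if not line["locked"]]

cache = DataCache()
cache.load_and_lock(0x100, "scratch")
cache.load_and_lock(0x200, "temp")
cache.unlock(0x200)   # only 0x200 may now be evicted
```

Note the abstract's further point, not modeled here, that coherence with backing memory is not maintained for locked lines.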
20120254877 | TRANSFERRING ARCHITECTED STATE BETWEEN CORES - A method and apparatus for transferring architected state bypasses system memory by directly transmitting architected state between processor cores over a dedicated interconnect. The transfer may be performed by state transfer interface circuitry with or without software interaction. The architected state for a thread may be transferred from a first processing core to a second processing core when the state transfer interface circuitry detects an error that prevents proper execution of the thread corresponding to the architected state. A program instruction may be used to initiate the transfer of the architected state for the thread to one or more other threads in order to parallelize execution of the thread or perform load balancing between multiple processor cores by distributing processing of multiple threads. | 10-04-2012 |
20120260252 | SCHEDULING SOFTWARE THREAD EXECUTION - A computer-implemented method, system, and/or computer program product schedules execution of software threads. A first software thread is executed together with a second software thread as a first software thread pair. A first content of at least one performance counter, resulting from executing the first software thread pair together, is stored. The first software thread is then executed with a third software thread as a second software thread pair, and the resulting second content of the performance counter(s) is stored. An identification is made of a most efficient software thread pair from the first and second software thread pairs. Upon receiving a request to re-execute the first software thread, the first software thread is selectively matched with either the second software thread or the third software thread for execution based on whether the first software thread pair or the second software thread pair has been identified as the most efficient software thread pair. | 10-11-2012 |
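The pairing policy reduces to a small lookup (all counter values and thread names here are invented): record a performance-counter reading for each candidate pair, then re-pair a thread with whichever partner produced the most efficient run — lower is taken to mean better in this sketch.

```python
# Sketch: pick the most efficient co-scheduled partner for a thread from
# stored performance-counter readings.

pair_counters = {}

def record_pair(thread_a, thread_b, counter_value):
    """Store a counter reading for an unordered thread pair."""
    pair_counters[frozenset((thread_a, thread_b))] = counter_value

def best_partner(thread):
    """Return the partner from the most efficient recorded pairing."""
    candidates = {p: v for p, v in pair_counters.items() if thread in p}
    best_pair = min(candidates, key=candidates.get)
    (partner,) = best_pair - {thread}
    return partner

record_pair("T1", "T2", 850)   # e.g. cycles consumed while co-scheduled
record_pair("T1", "T3", 620)   # the more efficient pairing
```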
20130044117 | VECTOR REGISTER FILE CACHING OF CONTEXT DATA STRUCTURE FOR MAINTAINING STATE DATA IN A MULTITHREADED IMAGE PROCESSING PIPELINE - Frequently accessed state data used in a multithreaded graphics processing architecture is cached within a vector register file of a processing unit to optimize accesses to the state data and minimize memory bus utilization associated therewith. A processing unit may include a fixed point execution unit as well as a vector floating point execution unit, and a vector register file utilized by the vector floating point execution unit may be used to cache state data used by the fixed point execution unit and transferred as needed into the general purpose registers accessible by the fixed point execution unit, thereby reducing the need to repeatedly retrieve and write back the state data from and to an L1 or lower level cache accessed by the fixed point execution unit. | 02-21-2013 |
20130046518 | MULTITHREADED PHYSICS ENGINE WITH IMPULSE PROPAGATION - A circuit arrangement and method implement impulse propagation in a multithreaded physics engine by assigning ownership of objects in a scene to individual threads and propagating impulses between objects that are in contact with one another by passing inter-thread impulse messages between the threads that own the contacting objects, while locally propagating impulses through objects using the threads to which such objects are assigned. | 02-21-2013 |
20130086329 | ALLOCATING CACHE FOR USE AS A DEDICATED LOCAL STORAGE - A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage. | 04-04-2013 |
20130097411 | TRANSFERRING ARCHITECTED STATE BETWEEN CORES - A method and apparatus for transferring architected state bypasses system memory by directly transmitting architected state between processor cores over a dedicated interconnect. The transfer may be performed by state transfer interface circuitry with or without software interaction. The architected state for a thread may be transferred from a first processing core to a second processing core when the state transfer interface circuitry detects an error that prevents proper execution of the thread corresponding to the architected state. A program instruction may be used to initiate the transfer of the architected state for the thread to one or more other threads in order to parallelize execution of the thread or perform load balancing between multiple processor cores by distributing processing of multiple threads. | 04-18-2013 |
20130111190 | OPERATIONAL CODE EXPANSION IN RESPONSE TO SUCCESSIVE TARGET ADDRESS DETECTION | 05-02-2013 |
20130138918 | DIRECT INTERTHREAD COMMUNICATION DATAPORT PACK/UNPACK AND LOAD/SAVE - A circuit arrangement, method, and program product for compressing and decompressing data in a node of a system including a plurality of nodes interconnected via an on-chip network. Compressed data may be received and stored at an input buffer of a node, and in parallel with moving the compressed data to an execution register of the node, decompression logic of the node may decompress the data to generate uncompressed data, such that uncompressed data is stored in the execution register for utilization by an execution unit of the node. Uncompressed data may be output by the execution unit into the execution register, and in parallel with moving the uncompressed data to an output buffer of the node connected to the on-chip network, compression logic may compress the uncompressed data to generate compressed data, such that compressed data is stored at the output buffer. | 05-30-2013 |
20130145128 | PROCESSING CORE WITH PROGRAMMABLE MICROCODE UNIT - A method and circuit arrangement utilize a programmable microcode unit that is capable of being programmed via software to modify the instruction sequences output by the microcode unit in response to microcode instructions issued to the microcode unit. Among other benefits, a programmable microcode unit consistent with the invention enables customization of a processor design to handle specific applications or tasks, as well as to support specific hardware configurations such as specific execution units. In addition, a programmable microcode unit may be updatable, e.g., to correct bugs or faults found in previous instruction sequences supported by the unit. | 06-06-2013 |
20130159668 | PREDECODE LOGIC FOR AUTOVECTORIZING SCALAR INSTRUCTIONS IN AN INSTRUCTION BUFFER - A circuit arrangement, method, and program product for substituting a plurality of scalar instructions in an instruction stream with a functionally equivalent vector instruction for execution by a vector execution unit. Predecode logic is coupled to an instruction buffer which stores instructions in an instruction stream to be executed by the vector execution unit. The predecode logic analyzes the instructions passing through the instruction buffer to identify a plurality of scalar instructions that may be replaced by a vector instruction in the instruction stream. The predecode logic may generate the functionally equivalent vector instruction based on the plurality of scalar instructions, and the functionally equivalent vector instruction may be substituted into the instruction stream, such that the vector execution unit executes the vector instruction in lieu of the plurality of scalar instructions. | 06-20-2013 |
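A highly simplified model of the predecode substitution follows (the instruction encoding, opcode names, and vector width are all invented): a run of identical scalar operations on consecutive buffer slots is replaced by one functionally equivalent vector operation.

```python
# Sketch: replace four consecutive identical scalar ops with one vector op.

VECTOR_WIDTH = 4   # assumed width for the example

def autovectorize(instr_buffer):
    out, i = [], 0
    while i < len(instr_buffer):
        window = instr_buffer[i:i + VECTOR_WIDTH]
        opcodes = {op for op, _ in window}
        if len(window) == VECTOR_WIDTH and opcodes == {"fadd"}:
            regs = [r for _, r in window]
            out.append(("vfadd", regs))     # substitute the vector form
            i += VECTOR_WIDTH
        else:
            out.append(window[0])           # pass the scalar op through
            i += 1
    return out

stream = [("fadd", 0), ("fadd", 1), ("fadd", 2), ("fadd", 3), ("fmul", 4)]
```

Real predecode logic would also have to verify register dependences before substituting, which this sketch omits.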
20130159669 | LOW LATENCY VARIABLE TRANSFER NETWORK FOR FINE GRAINED PARALLELISM OF VIRTUAL THREADS ACROSS MULTIPLE HARDWARE THREADS - A method and circuit arrangement utilize a low latency variable transfer network between the register files of multiple processing cores in a multi-core processor chip to support fine grained parallelism of virtual threads across multiple hardware threads. The communication of a variable over the variable transfer network may be initiated by a move from a local register in a register file of a source processing core to a variable register that is allocated to a destination hardware thread in a destination processing core, so that the destination hardware thread can then move the variable from the variable register to a local register in the destination processing core. | 06-20-2013 |
20130159674 | INSTRUCTION PREDICATION USING INSTRUCTION FILTERING - A method and circuit arrangement for selectively predicating instructions in an instruction stream based upon a predication filter criteria defined by a predication filter, which describes types or patterns of instructions that should be predicated. Predication logic compares a respective instruction of an instruction stream to predication filter criteria to determine whether the respective instruction matches the predication filter criteria, and the respective instruction is selectively predicated based on whether the respective instruction matches the predication filter criteria. | 06-20-2013 |
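The filter-matching step can be sketched minimally (the filter format and field names are assumptions): an instruction is predicated only if it matches the filter's criteria, here just an opcode-type set.

```python
# Sketch: predicate only those instructions that match a filter criterion.

def matches_filter(instr, predication_filter):
    return instr["opcode"] in predication_filter["opcodes"]

def apply_predication(stream, predication_filter):
    for instr in stream:
        instr["predicated"] = matches_filter(instr, predication_filter)
    return stream

flt = {"opcodes": {"load", "store"}}   # e.g. predicate only memory ops
stream = apply_predication(
    [{"opcode": "load"}, {"opcode": "add"}], flt)
```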
20130159675 | INSTRUCTION PREDICATION USING UNUSED DATAPATH FACILITIES - A method and circuit arrangement for selectively predicating an instruction in an instruction stream based upon a value corresponding to a predication register address indicated by a portion of an operand associated with the instruction. A first compare instruction in an instruction stream stores a compare result at a register address of a predication register. The register address of the predication register is stored in a portion of an operand associated with a second instruction, and during decoding the second instruction, the predication register is accessed to determine a value stored at the register address of the predication register, and the second instruction is selectively predicated based on the value stored at the register address of the predication register. | 06-20-2013 |
20130159676 | INSTRUCTION SET ARCHITECTURE WITH EXTENDED REGISTER ADDRESSING - A method and circuit arrangement selectively repurpose bits from a primary opcode portion of an instruction for use in decoding one or more operands for the instruction. Decode logic of a processor, for example, may be placed in a predetermined mode that decodes a primary opcode for an instruction that is different from that specified in the primary opcode portion of the instruction, and then utilize one or more bits in the primary opcode portion to decode one or more operands for the instruction. By doing so, additional space is freed up in the instruction to support a larger register file and/or additional instruction types, e.g., as specified by a secondary or extended opcode. | 06-20-2013 |
20130159799 | MULTI-CORE PROCESSOR WITH INTERNAL VOTING-BASED BUILT IN SELF TEST (BIST) - A method and circuit arrangement utilize scan logic disposed on a multi-core processor integrated circuit device or chip to perform internal voting-based built in self test (BIST) of the chip. Test patterns are generated internally on the chip and communicated to the scan chains within multiple processing cores on the chip. Test results output by the scan chains are compared with one another on the chip, and majority voting is used to identify outlier test results that are indicative of a faulty processing core. A bit position in a faulty test result may be used to identify a faulty latch in a scan chain and/or a faulty functional unit in the faulty processing core, and a faulty processing core and/or a faulty functional unit may be automatically disabled in response to the testing. | 06-20-2013 |
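The majority vote at the heart of this scheme is simple to sketch (the result values are invented test data): identical scan patterns should yield identical results across cores, so any outlier result marks a faulty core.

```python
# Sketch: majority voting over per-core scan results to find outliers.
from collections import Counter

def find_faulty_cores(scan_results):
    """Return indices of cores whose result disagrees with the majority."""
    majority, _ = Counter(scan_results).most_common(1)[0]
    return [i for i, result in enumerate(scan_results) if result != majority]

results = ["0xBEEF", "0xBEEF", "0xDEAD", "0xBEEF"]   # core 2 is the outlier
```

As the abstract notes, comparing which bit positions differ would further narrow the fault to a specific latch or functional unit.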
20130160026 | INDIRECT INTER-THREAD COMMUNICATION USING A SHARED POOL OF INBOXES - A circuit arrangement, method, and program product for communicating data between hardware threads of a network on a chip processing unit utilizes shared inboxes to communicate data to pools of hardware threads. The hardware threads in the pools receive data packets from the shared inboxes in response to issuing work requests to an associated shared inbox. Data packets include a source identifier corresponding to a hardware thread from which the data packet was generated, and the shared inboxes may manage data packet distribution to associated hardware threads based on the source identifier of each data packet. A shared inbox may also manage workload distribution and uneven workload lengths by communicating data packets to hardware threads associated with the shared inbox in response to receiving work requests from associated hardware threads. | 06-20-2013 |
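The pull-based distribution can be sketched as a work queue (all structure and names invented): threads in a pool issue work requests to a shared inbox, which hands out queued packets, each tagged with its source thread's identifier.

```python
# Sketch: a shared inbox that distributes source-tagged packets to pool
# threads on demand (work-request pull model).
from collections import deque

class SharedInbox:
    def __init__(self):
        self.packets = deque()

    def deliver(self, source_id, payload):
        """Enqueue a packet tagged with the sending thread's identifier."""
        self.packets.append({"source": source_id, "payload": payload})

    def work_request(self):
        """Called by a pool thread; returns the next packet, if any."""
        return self.packets.popleft() if self.packets else None

inbox = SharedInbox()
inbox.deliver(source_id=7, payload="vertex batch")
pkt = inbox.work_request()
```

Because threads pull work only when ready, uneven workload lengths balance out naturally, which is the load-distribution property the abstract describes.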
20130160114 | INTER-THREAD COMMUNICATION WITH SOFTWARE SECURITY - A circuit arrangement and method utilize a process context translation data structure in connection with an on-chip network of a processor chip to implement secure inter-thread communication between hardware threads in the processor chip. The process context translation data structure maps processes to inter-thread communication hardware resources, e.g., the inbox and/or outbox buffers of a NOC processor, such that a user process is only allowed to access the inter-thread communication hardware resources that it has been granted access to, and typically with only certain types of authorized access types. Moreover, a hypervisor or supervisor may manage the process context translation data structure to grant or deny access rights to user processes such that, once those rights are established in the data structure, user processes are permitted to perform inter-thread communications without requiring context switches to a hypervisor or supervisor in order to handle the communications. | 06-20-2013 |
20130173860 | NEAR NEIGHBOR DATA CACHE SHARING - Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip. | 07-04-2013 |
20130173861 | NEAR NEIGHBOR DATA CACHE SHARING - Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip. | 07-04-2013 |
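The neighbor-lookup path described in the two entries above reduces to a three-step probe (simplified, with no coherence handling; all names are invented): on a local miss, check the neighboring core's cache before forwarding the request to the next cache level.

```python
# Sketch: probe local cache, then neighbor's cache, then the next level.

def lookup(addr, local_cache, neighbor_cache, next_level):
    if addr in local_cache:
        return local_cache[addr], "local"
    if addr in neighbor_cache:              # neighbor hit avoids next level
        return neighbor_cache[addr], "neighbor"
    return next_level[addr], "next_level"   # ordinary miss path

local = {0x10: "A"}
neighbor = {0x20: "B"}
l2 = {0x30: "C"}
```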
20130173863 | Memory Management Among Levels Of Cache In A Memory Hierarchy - Methods, apparatus, and product for memory management among levels of cache in a memory hierarchy in a computer with a processor operatively coupled through two or more levels of cache to a main random access memory, caches closer to the processor in the hierarchy characterized as higher in the hierarchy, including: identifying a line in a first cache that is preferably retained in the first cache, the first cache backed up by at least one cache lower in the memory hierarchy, the lower cache implementing an LRU-type cache line replacement policy; and updating LRU information for the lower cache to indicate that the line has been recently accessed. | 07-04-2013 |
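The retention hint amounts to an artificial LRU "touch" (the list-based LRU model here is a simplification): marking a line recently used in the lower cache makes the LRU replacement policy keep backing storage for a line the upper cache prefers to retain.

```python
# Sketch: touch a line's LRU position so the lower cache retains it.

def touch(lru_order, line):
    """Move `line` to the most-recently-used end of the LRU list."""
    lru_order.remove(line)
    lru_order.append(line)
    return lru_order

def evict_victim(lru_order):
    return lru_order[0]   # least recently used line is the victim

lower_cache_lru = ["X", "Y", "Z"]   # "X" would be evicted next
touch(lower_cache_lru, "X")         # hint: the upper cache wants "X" kept
```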
20130173967 | HARD MEMORY ARRAY FAILURE RECOVERY UTILIZING LOCKING STRUCTURE - A technique for managing hard failures in a memory system employing a locking structure is disclosed. An error count is maintained for units of memory within the memory system. When the error count indicates a hard failure, the unit of memory is locked out from further use. An arbitrary set of error counters is assigned to record errors resulting from access to the units of memory. Embodiments of the present invention advantageously enable a system to continue reliable operation even after one or more internal hard memory failures. Other embodiments advantageously enable manufacturers to salvage partially failed devices and deploy the devices as having a lower-performance specification rather than discarding the devices, as would otherwise be indicated by conventional practice. | 07-04-2013 |
20130185542 | EXTERNAL AUXILIARY EXECUTION UNIT INTERFACE TO OFF-CHIP AUXILIARY EXECUTION UNIT - An external Auxiliary Execution Unit (AXU) interface is provided between a processing core disposed in a first programmable chip and an off-chip AXU disposed in a second programmable chip to integrate the AXU with an issue unit, a fixed point execution unit, and optionally other functional units in the processing core. The external AXU interface enables the issue unit to issue instructions to the AXU in much the same manner as the issue unit would be able to issue instructions to an AXU that was disposed on the same chip. By doing so, the AXU on the second programmable chip can be designed, tested and verified independent of the processing core on the first programmable chip, thereby enabling a common processing core, which has been designed, tested, and verified, to be used in connection with multiple different AXU designs. | 07-18-2013 |
20130185704 | PROVIDING PERFORMANCE TUNED VERSIONS OF COMPILED CODE TO A CPU IN A SYSTEM OF HETEROGENEOUS CORES - A compiler may optimize source code and any referenced libraries to execute on a plurality of different processor architecture implementations. For example, if a compute node has three different types of processors with three different architecture implementations, the compiler may compile the source code and generate three versions of object code where each version is optimized for one of the three different processor types. After compiling the source code, the resultant executable code may contain the necessary information for selecting between the three versions. For example, when a program loader assigns the executable code to the processor, the system determines the processor's type and ensures only the optimized version that corresponds to that type is executed. Thus, the operating system is free to assign the executable code to any processor based on, for example, the current status of the processor (i.e., whether its CPU is being fully utilized) and still enjoy the benefits of executing code that is optimized for whichever processor is assigned the executable code. | 07-18-2013 |
20130185705 | PROVIDING PERFORMANCE TUNED VERSIONS OF COMPILED CODE TO A CPU IN A SYSTEM OF HETEROGENEOUS CORES - A compiler may optimize source code and any referenced libraries to execute on a plurality of different processor architecture implementations. For example, if a compute node has three different types of processors with three different architecture implementations, the compiler may compile the source code and generate three versions of object code where each version is optimized for one of the three different processor types. After compiling the source code, the resultant executable code may contain the necessary information for selecting between the three versions. For example, when a program loader assigns the executable code to the processor, the system determines the processor's type and ensures only the optimized version that corresponds to that type is executed. Thus, the operating system is free to assign the executable code to any of the different types of processors. | 07-18-2013 |
20130191600 | COMBINED CACHE INJECT AND LOCK OPERATION - A circuit arrangement and method utilize cache injection logic to perform a cache inject and lock operation to inject a cache line in a cache memory and automatically lock the cache line in the cache memory in parallel with communication of the cache line to a main memory. The cache injection logic may additionally limit the maximum number of locked cache lines that may be stored in the cache memory, e.g., by aborting a cache inject and lock operation, injecting the cache line without locking, or unlocking and/or evicting another cache line in the cache memory. | 07-25-2013 |
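One of the fallback behaviors the abstract lists — injecting without locking when the locked-line limit is reached — can be sketched directly (the limit value and class structure are invented):

```python
# Sketch: inject-and-lock demoted to a plain inject when the locked-line
# limit would be exceeded.

MAX_LOCKED = 2   # assumed cap for the example

class Cache:
    def __init__(self):
        self.lines = {}   # addr -> (data, locked)

    def locked_count(self):
        return sum(1 for _, locked in self.lines.values() if locked)

    def inject_and_lock(self, addr, data):
        """Inject a line; lock it only if under the locked-line limit."""
        lock = self.locked_count() < MAX_LOCKED
        self.lines[addr] = (data, lock)
        return lock

cache = Cache()
r1 = cache.inject_and_lock(0x0, "a")
r2 = cache.inject_and_lock(0x4, "b")
r3 = cache.inject_and_lock(0x8, "c")   # limit reached: injected unlocked
```

The abstract's other options at the limit — aborting the operation or unlocking/evicting an existing line — would replace the demotion branch here.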
20130191649 | MEMORY ADDRESS TRANSLATION-BASED DATA ENCRYPTION/COMPRESSION - A method and circuit arrangement selectively stream data to an encryption or compression engine based upon encryption and/or compression-related page attributes stored in a memory address translation data structure such as an Effective To Real Translation (ERAT) or Translation Lookaside Buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a memory access request for data in a memory page, such that attributes associated with the memory page in the data structure may be used to control whether data is encrypted/decrypted and/or compressed/decompressed in association with handling the memory access request. | 07-25-2013 |
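The attribute-driven routing can be sketched with a toy translation table (the table layout and flag names are assumptions): each page entry carries encryption/compression flags, and the access path streams data through the corresponding engine only when the flag is set.

```python
# Sketch: per-page attributes in a translation structure select which
# engines a memory access is streamed through.

PAGE_TABLE = {
    0x1000: {"frame": 0xA000, "encrypt": True,  "compress": False},
    0x2000: {"frame": 0xB000, "encrypt": False, "compress": True},
}

def engines_for(virtual_page):
    """Return the engines a request to this page must pass through."""
    entry = PAGE_TABLE[virtual_page]
    engines = []
    if entry["encrypt"]:
        engines.append("encryption_engine")
    if entry["compress"]:
        engines.append("compression_engine")
    return engines
```

Since the translation entry is consulted on every access anyway, the flags ride along at no extra lookup cost — the point of placing them in an ERAT/TLB-like structure.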
20130191651 | MEMORY ADDRESS TRANSLATION-BASED DATA ENCRYPTION WITH INTEGRATED ENCRYPTION ENGINE - A method and circuit arrangement utilize an integrated encryption engine within a processing core of a multi-core processor to perform encryption operations, i.e., encryption and decryption of secure data, in connection with memory access requests that access such data. The integrated encryption engine is utilized in combination with a memory address translation data structure such as an Effective To Real Translation (ERAT) or Translation Lookaside Buffer (TLB) that is augmented with encryption-related page attributes to indicate whether pages of memory identified in the data structure are encrypted such that secure data associated with a memory access request in the processing core may be selectively streamed to the integrated encryption engine based upon the encryption-related page attribute for the memory page associated with the memory access request. | 07-25-2013 |
20130191824 | VIRTUALIZATION SUPPORT FOR BRANCH PREDICTION LOGIC ENABLE/DISABLE - A hypervisor and one or more guest operating systems resident in a data processing system and hosted by the hypervisor are configured to selectively enable or disable branch prediction logic through separate hypervisor-mode and guest-mode instructions. By doing so, different branch prediction strategies may be employed for different operating systems and user applications hosted thereby to provide finer grained optimization of the branch prediction logic for different operating scenarios. | 07-25-2013 |
20130191825 | VIRTUALIZATION SUPPORT FOR SAVING AND RESTORING BRANCH PREDICTION LOGIC STATES - A hypervisor and one or more programs, e.g., guest operating systems and/or user processes or applications hosted by the hypervisor, are configured to selectively save and restore the state of branch prediction logic through separate hypervisor-mode and guest-mode and/or user-mode instructions. By doing so, different branch prediction strategies may be employed for different operating systems and user applications hosted thereby to provide finer grained optimization of the branch prediction logic. | 07-25-2013 |