Class / Patent application number | Description | Number of patent applications / Date published |
712201000 | Data flow based system | 15 |
20090119484 | Method and Apparatus for Implementing Digital Logic Circuitry - Disclosed is a method of generating digital control parameters for implementing digital logic circuitry comprising functional nodes with at least one input or at least one output and connections indicating interconnections between said functional nodes, wherein said digital logic circuitry comprises a first path streamed by successive tokens and a second path streamed by said tokens. The method comprises determining a necessary relative throughput for data flow to said paths; assigning buffers to one of said paths to balance throughput of said paths; removing assigned buffers until said necessary relative throughput is obtained with a minimized number of buffers; and generating digital control parameters for implementing said digital logic circuitry comprising said minimized number of buffers. An apparatus, a computer-implemented digital logic circuitry, a Data Flow Machine, methods and computer program products are also disclosed. | 05-07-2009 |
20100199070 | PROGRAMMABLE FILTER PROCESSOR - A programmable filter processor which is adaptable to different filtering algorithms, a plurality of different software algorithms being executable, the programmable filter processor including a logic unit which includes a plurality of pipeline stages; a first memory in which the software algorithms are stored; a second memory in which raw data and parameters for the different filter algorithms are stored; and an address generating unit which is controllable via a program counter, the address generating unit being developed to generate control commands for the second memory and the logic unit. | 08-05-2010 |
20100199071 | DATA PROCESSING APPARATUS AND IMAGE PROCESSING APPARATUS - A data processing apparatus in which pipeline processing is performed comprises a control unit that controls a data processing sequence, a first processing unit that begins first data processing by inputting data on the basis of a start signal, outputs data subjected to the first data processing, and outputs a completion signal to the control unit after completing the first data processing, and a second processing unit that begins second data processing by inputting the data subjected to the first data processing on the basis of a start signal, outputs data subjected to the second data processing, and outputs a completion signal to the control unit after completing the second data processing. The control unit outputs a following start signal to the first processing unit and the second processing unit upon reception of the completion signal of the first data processing and the second data processing respectively. | 08-05-2010 |
20110060890 | STREAM DATA GENERATING METHOD, STREAM DATA GENERATING DEVICE AND A RECORDING MEDIUM STORING STREAM DATA GENERATING PROGRAM - A stream data generating method for a computer system for generating stream data having time information applied thereto in a time series order and processing the generated stream data on the basis of a registered query. The computer system includes a storage for storing therein query information indicative of a plurality of sorts of constituent elements forming stream data corresponding to the query on the basis of the query and a stream definition indicative of the plurality of constituent elements; a data generator for generating and transmitting stream data; and a stream data processor for processing the stream data transmitted from the data generator. The data generator reduces the quantity of stream data to be transmitted to the stream data processor on the basis of the query information. | 03-10-2011 |
20110320771 | INSTRUCTION UNIT WITH INSTRUCTION BUFFER PIPELINE BYPASS - A circuit arrangement and method selectively bypass an instruction buffer for selected instructions so that bypassed instructions can be dispatched without having to first pass through the instruction buffer. Thus, for example, in the case that an instruction buffer is partially or completely flushed as a result of an instruction redirect (e.g., due to a branch mispredict), instructions can be forwarded to subsequent stages in an instruction unit and/or to one or more execution units without the latency associated with passing through the instruction buffer. | 12-29-2011 |
20120216019 | METHOD OF, AND APPARATUS FOR, STREAM SCHEDULING IN PARALLEL PIPELINED HARDWARE - Embodiments of methods of generating a hardware design for a pipelined parallel stream processor are provided. An embodiment of the method comprises defining, on a computing device, a processing operation designating processes to be implemented in hardware as part of said pipelined parallel stream processor; defining, on a computing device, a graph representing said processing operation as a parallel structure in the time domain as a function of clock cycles, said graph comprising at least one data path to be implemented as a hardware design for said pipelined parallel stream processor and comprising a plurality of branches configured to enable data values to be streamed therethrough, the branches of the or each data path being represented as comprising at least one input, at least one output, at least one discrete object corresponding directly to a hardware element to be implemented in hardware as part of said pipelined parallel stream processor, the or each discrete object being operable to execute a function for one or more clock cycles and having a predefined latency associated therewith, said predefined latency representing the time required for said hardware element to execute said function; said data values propagating through said data path from the at least one input to the at least one output as a function of increasing clock cycle; defining, on a computing device, the at least one data path and associated latencies of said graph as a set of algebraic linear inequalities; solving, on a computing device, said set of linear inequalities; optimising, on a computing device, the at least one data path in said graph using said solved linear inequalities to produce an optimised graph; and utilising, on a computing device, said optimised graph to define an optimised hardware design for implementation in hardware as said pipelined parallel stream processor. | 08-23-2012 |
20130246737 | SIMD Compare Instruction Using Permute Logic for Distributed Register Files - Mechanisms, in a data processing system comprising a single instruction multiple data (SIMD) processor, for performing a data dependency check operation on vector element values of at least two input vector registers are provided. Two calls to a simd-check instruction are performed, one with input vector registers having a first order and one with the input vector registers having a different order. The simd-check instruction performs comparisons to determine if any data dependencies are present. Results of the two calls to the simd-check instruction are obtained and used to determine if any data dependencies are present in the at least two input vector registers. Based on the results, the SIMD processor may perform various operations. | 09-19-2013 |
20130290674 | Modeling Structured SIMD Control Flow Constructs in an Explicit SIMD Language - Constructs may express SIMD control flow that can be efficiently implemented on a SIMD machine with support for SIMD control flow. The execution semantics of the constructs serve as a functional specification for an emulation implementation on the central processing unit (CPU), a non-SIMD machine, using a conventional C++ compiler such as GCC or Microsoft Visual C++ without any modification to the conventional compiler in some embodiments. | 10-31-2013 |
20140013080 | METHOD AND SYSTEM ADAPTED FOR CONVERTING SOFTWARE CONSTRUCTS INTO RESOURCES FOR IMPLEMENTATION BY A DYNAMICALLY RECONFIGURABLE PROCESSOR - A method and system are provided for deriving a resultant software code from an originating ordered list of instructions that does not include overlapping branch logic. The method may include deriving a plurality of unordered software constructs from a sequence of processor instructions; associating software constructs in accordance with an original logic of the sequence of processor instructions; determining and resolving memory precedence conflicts within the associated plurality of software constructs; resolving forward branch logic structures into conditional logic constructs; resolving back branch logic structures into loop logic constructs; and/or applying the plurality of unordered software constructs in a programming operation by parallel execution logic circuitry. The resultant plurality of unordered software constructs may be converted into programming for reconfigurable logic, computers or processors, and may also be conveyed by means of a computer network or an electronics communications network. | 01-09-2014 |
20140013081 | EFFICIENTLY IMPLEMENTING A PLURALITY OF FINITE STATE MACHINES - An approach for processing data by a pipeline of a single hardware-implemented virtual multiple instance finite state machine (VMI FSM) is presented. Based on a current state and context of an FSM instance, an input token selected from multiple input tokens to enter a pipeline of the VMI FSM, and a status of an environment, a new state of the FSM instance is determined and an output token is determined. The input token includes a reference to the FSM instance. In one embodiment, the reference is an InfiniBand QP number. After a receipt by the pipeline of the first input token and prior to determining the new state of the FSM instance and determining the output token, a logic circuit selects a second input token to enter the pipeline. The second input token includes a reference to a second FSM instance. | 01-09-2014 |
20140181472 | SCALABLE COMPUTE FABRIC - A method and apparatus for providing a scalable compute fabric are provided herein. The method includes determining a workflow for processing by the scalable compute fabric, wherein the workflow is based on an instruction set. A pipeline is configured dynamically for processing the workflow, and the workflow is executed using the pipeline. | 06-26-2014 |
20150074376 | System and Method for an Asynchronous Processor with Assisted Token - Embodiments are provided for an asynchronous processor using master and assisted tokens. In an embodiment, an apparatus for an asynchronous processor comprises a memory to cache a plurality of instructions, a feedback engine to decode the instructions from the memory, and a plurality of XUs coupled to the feedback engine and arranged in a token ring architecture. Each one of the XUs is configured to receive an instruction of the instructions from the feedback engine, and receive a master token associated with a resource and further receive an assisted token for the master token. Upon determining that the assisted token and the master token are received in an abnormal order, the XU is configured to detect an operation status for the instruction in association with the assisted token, and upon determining a needed action in accordance with the operation status and the assisted token, perform the needed action. | 03-12-2015 |
20150347149 | AUTOMATED DECOMPOSITION FOR MIXED INTEGER LINEAR PROGRAMS WITH EMBEDDED NETWORKS REQUIRING MINIMAL SYNTAX - An apparatus includes a communications component to receive computer-executable query instructions to solve a MILP problem, the query instructions including a first expression conveying an objective function and side constraint that define a master problem of the MILP problem, a second expression conveying a mapping of graph data to a graph, and a third expression conveying a selection of a graph-based algorithm to solve a subproblem of the MILP problem; a subproblem component to replace the third expression with a fourth expression during decomposition of the MILP problem, the fourth expression including instructions to implement the graph-based algorithm to solve the subproblem; and an execution control component to perform iterations of solving the MILP problem that include executing the first expression to derive a solution to the master problem; and executing the fourth expression to derive a solution to the subproblem based on the mapping and the master problem solution. | 12-03-2015 |
20160077834 | EXECUTION FLOW PROTECTION IN MICROCONTROLLERS - An execution flow protection module ( | 03-17-2016 |
20160179519 | Multi-Phased and Multi-Threaded Program Execution Based on SIMD Ratio | 06-23-2016 |
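A minimal sketch of the buffer-minimization step described in application 20090119484, not the claimed method itself: it assumes the relationship between buffer count and throughput is supplied as a callable, and the fork/join numbers in the usage example are invented.

from typing import Callable

def minimize_balancing_buffers(initial_buffers: int,
                               required_throughput: float,
                               throughput_of: Callable[[int], float]) -> int:
    """Start from a buffer assignment that fully balances the two paths and
    remove buffers one at a time while the necessary relative throughput is
    still achieved, returning the minimized buffer count."""
    buffers = initial_buffers
    while buffers > 0 and throughput_of(buffers - 1) >= required_throughput:
        buffers -= 1
    return buffers

# Toy usage (invented numbers): a fork/join whose branches differ by 4 cycles,
# with throughput assumed to degrade linearly once fewer than 4 buffers remain.
latency_gap = 4
toy_model = lambda b: min(1.0, (b + 1) / (latency_gap + 1))
print(minimize_balancing_buffers(latency_gap, 0.8, toy_model))   # -> 3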
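A sketch of the start/completion handshake in application 20100199071. The Stage class, the "done" string, and the example processing functions are invented; the point is only that the control unit issues the next start signals after it has received the completion signals from both processing units.

class Stage:
    """A processing unit: begins work on a start signal, reports completion."""
    def __init__(self, fn):
        self.fn = fn
        self.out = None

    def start(self, data):
        self.out = self.fn(data)   # perform this stage's data processing
        return "done"              # completion signal back to the control unit

def run_two_stage_pipeline(items, stage1, stage2):
    results, pending = [], None          # pending: stage-1 output awaiting stage 2
    for item in items + [None]:          # one extra cycle to drain the pipeline
        expected = []
        if item is not None:
            expected.append(stage1.start(item))      # start signal to stage 1
        if pending is not None:
            expected.append(stage2.start(pending))   # start signal to stage 2
            results.append(stage2.out)
        # the control unit waits for every completion signal before continuing
        assert all(sig == "done" for sig in expected)
        pending = stage1.out if item is not None else None
    return results

print(run_two_stage_pipeline([1, 2, 3],
                             Stage(lambda x: x + 10),    # first data processing
                             Stage(lambda x: x * 2)))    # second data processing
# -> [22, 24, 26]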
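The reduction step in application 20110060890 amounts to the data generator sending only the constituent elements named in the query information. The sketch below illustrates that idea with invented field names, records, and timestamping.

import time

def generate_stream(records, query_fields):
    """Attach time information and keep only the fields the query refers to."""
    for record in records:
        reduced = {k: v for k, v in record.items() if k in query_fields}
        reduced["ts"] = time.time()      # time information, in time-series order
        yield reduced

raw = [{"sensor": "a", "value": 1.2, "debug": "x"},
       {"sensor": "b", "value": 3.4, "debug": "y"}]
query_info = {"sensor", "value"}         # elements the registered query uses
for item in generate_stream(raw, query_info):
    print(item)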
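A sketch of the bypass idea in application 20110320771, with invented dispatch details: when the instruction buffer is empty (for instance right after a flush on a branch mispredict) and the dispatch stage is ready, a newly fetched instruction is forwarded directly rather than first passing through the buffer.

from collections import deque

class InstructionUnit:
    def __init__(self):
        self.buffer = deque()
        self.dispatched = []

    def fetch(self, insn, dispatch_ready):
        if dispatch_ready and not self.buffer:
            self.dispatched.append(insn)   # bypass: skip the instruction buffer
        else:
            self.buffer.append(insn)       # normal path through the buffer

    def dispatch_from_buffer(self):
        if self.buffer:
            self.dispatched.append(self.buffer.popleft())

iu = InstructionUnit()
iu.fetch("i0", dispatch_ready=False)   # queued in the buffer
iu.fetch("i1", dispatch_ready=True)    # buffer not empty, so still queued
iu.buffer.clear()                      # instruction redirect flushes the buffer
iu.fetch("i2", dispatch_ready=True)    # bypasses the now-empty buffer
print(iu.dispatched, list(iu.buffer))  # -> ['i2'] []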
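Application 20120216019 formulates the data path and its latencies as a set of algebraic linear inequalities. For an acyclic graph, one standard way to obtain a minimal solution of such inequalities is a longest-path (ASAP) pass, sketched below; the node names, edges, and latencies are invented.

from collections import defaultdict, deque

def asap_schedule(latency, edges):
    """Componentwise-minimal solution of S[v] >= S[u] + latency[u] on a DAG."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    start = {n: 0 for n in latency}
    ready = deque(n for n in latency if indeg[n] == 0)
    while ready:
        u = ready.popleft()
        for v in succ[u]:
            start[v] = max(start[v], start[u] + latency[u])
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return start

def edge_slack(latency, edges, start):
    """Slack per edge, i.e. the buffering needed to balance that edge."""
    return {(u, v): start[v] - (start[u] + latency[u]) for u, v in edges}

# Invented example: 'src' fans out to a 5-cycle multiplier and a 1-cycle adder
# that rejoin at 'out'; the short branch needs 4 cycles of buffering.
lat = {"src": 0, "mul": 5, "add": 1, "out": 1}
edges = [("src", "mul"), ("src", "add"), ("mul", "out"), ("add", "out")]
sched = asap_schedule(lat, edges)
print(sched)                          # -> {'src': 0, 'mul': 0, 'add': 0, 'out': 5}
print(edge_slack(lat, edges, sched))  # ('add', 'out') has slack 4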
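One conventional way to emulate a structured SIMD "if/else" on a scalar, non-SIMD machine is per-lane masking, sketched here; the function and the example values are invented and do not reproduce the constructs defined by application 20130290674.

def simd_if_else(mask, then_fn, else_fn, lanes):
    """Apply then_fn to lanes whose mask bit is set and else_fn to the rest,
    mirroring how a SIMD machine disables inactive lanes inside an 'if'."""
    return [then_fn(x) if m else else_fn(x) for x, m in zip(lanes, mask)]

values = [3, -1, 7, -5]
mask = [v >= 0 for v in values]     # the SIMD condition, one bit per lane
print(simd_if_else(mask, lambda x: x * 2, lambda x: -x, values))  # -> [6, 1, 14, 5]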
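A sketch of the virtual-multiple-instance idea in application 20140013081: a single transition function serves many FSM instances, and each input token carries a reference to the instance it belongs to (the application's example of such a reference is an InfiniBand QP number). The states, events, and output tokens below are invented.

# Shared transition table: (current state, event) -> (new state, output token)
TRANSITIONS = {
    ("idle",   "open"):  ("active", "ack"),
    ("active", "data"):  ("active", "ack"),
    ("active", "close"): ("idle",   "fin"),
}

def step(instance_states, token):
    """Advance the FSM instance referenced by the token; return the output token."""
    inst, event = token["instance"], token["event"]
    state = instance_states.get(inst, "idle")
    new_state, output = TRANSITIONS.get((state, event), (state, "nop"))
    instance_states[inst] = new_state
    return output

states = {}
stream = [{"instance": 7, "event": "open"},
          {"instance": 9, "event": "open"},
          {"instance": 7, "event": "close"}]
print([step(states, t) for t in stream], states)
# -> ['ack', 'ack', 'fin'] {7: 'idle', 9: 'active'}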