Class / Patent application number | Description | Number of patent applications / Date published |
710310000 | Buffer or queue control | 83 |
20080263254 | Method and System For Adjusting Bus Frequency And Link Width On A Bus - A computer system that includes a host bus connected between a processor and a Northbridge chipset. The Northbridge chipset monitors the host bus and adjusts the host bus frequency and bus link width according to monitored traffic conditions on the host bus. | 10-23-2008 |
20080288707 | METHOD FOR CONTROLLING THE ACTIVE DATA INTERFACE WHEN MULTIPLE INTERFACES ARE AVAILABLE - Systems and methods are provided for controlling which of multiple data interfaces in an electronic device is used for communication with another electronic device so as to minimize disruption of the user experience. In one embodiment, a switch may be provided that is configured to maintain the data stream through a presently used data interface even when other data interfaces become physically connected or available for data transfer. Benefits of unused, but nevertheless connected data interfaces may be received by the electronic device without initiating a transfer of the communication duties between interfaces. | 11-20-2008 |
20080307146 | STRUCTURE FOR DYNAMICALLY SCALABLE QUEUES FOR PERFORMANCE DRIVEN PCI EXPRESS MEMORY TRAFFIC - A method, computer system, and PCI Express device/protocol for a design structure that enables high performance IO data transfers for multiple, different IO configurations, which include variable packet sizes and/or variable/different numbers of transactions on the IO link. PCI Express protocol is enhanced to support utilization of counters and dynamically variable queue sizes. In addition to the standard queue entries, several (or a selected number of) dynamically changeable queue entries are provided/reserved and a dynamic queue modification (DQM) utility is provided within the enhanced PCI Express protocol to monitor ongoing, current data transfer and manage when the size(s) of the queue entries are modified (increased or decreased) based on current data traffic transmitting on the PCI Express IO link. The enhanced PCI Express protocol provides an equilibrium point at which many large data packets are transferred efficiently, while imposing a limit on the number of each size of packets outstanding. | 12-11-2008 |
20080307147 | COMPUTER SYSTEM BUS BRIDGE - A bus bridge between a high speed computer processor bus and a high speed output bus. The preferred embodiment is a bus bridge between a GPUL bus for a GPUL PowerPC microprocessor from International Business Machines Corporation (IBM) and an output high speed interface (MPI). Another preferred embodiment is a bus bridge in a bus transceiver on a multi-chip module. | 12-11-2008 |
20090006705 | Hub for Supporting High Capacity Memory Subsystem - A high-capacity memory subsystem architecture utilizes multiple memory modules arranged in one or more clusters, each attached to a respective hub which in turn is attached to a memory controller. Within a cluster, data is interleaved so that each data access command accesses all modules of the cluster. The hub communicates with the memory modules at a lower bus frequency, but the distributing of data among multiple modules enables the cluster to maintain the composite data rate of the memory-controller-to-hub bus. Preferably, the memory system employs buffered memory chips having dual-mode operation, one of which supports a cluster configuration in which data is interleaved and the communications buses operate at reduced bus width and/or reduced bus frequency to match the level of interleaving. | 01-01-2009 |
20090037636 | DATA FLUSH METHODS - A bridge capable of preventing data inconsistency without degrading system performance is provided, in which a buffering unit comprises a plurality of buffers, a first master device outputs a flush request to flush the buffering unit, and a flush request control circuit records the flushed buffer(s) in the buffering unit when receiving the flush request and outputs a flush acknowledge signal to indicate to the first master device that the buffering unit has been flushed when all the plurality of buffers have been flushed once after the flush request has been received. | 02-05-2009 |
20090043940 | Reconstructing Transaction Order Using Clump Tags - A method and system for enforcing ordering rules for transactions are presented. The method and system generates transaction clump tags for each transaction before the transactions are stored in various type specific transaction queues. A transaction clump tag decoding unit decodes the transaction clump tag to recover temporal information regarding the transaction to avoid violations of the ordering rules. | 02-12-2009 |
20090089477 | DEADLOCK AVOIDANCE IN A BUS FABRIC - Circuits, apparatus, and methods for avoiding deadlock conditions in a bus fabric. One exemplary embodiment provides an address decoder for determining whether a received posted request is a peer-to-peer request. If it is, the posted request is sent as a non-posted request. A limit on the number of pending non-posted requests is maintained and not exceeded, such that deadlock is avoided. Another exemplary embodiment provides an arbiter that tracks a number of pending posted requests. When the number of pending posted requests reaches a predetermined or programmable level, a Block Peer-to-Peer signal is sent to the arbiter's clients, again avoiding deadlock. | 04-02-2009 |
20090119439 | STRUCTURE COMPATIBLE WITH I2C BUS AND SYSTEM MANAGEMENT BUS AND TIMING BUFFERING APPARATUS THEREOF - A structure compatible with I2C bus and system management (SM) bus is provided. The structure includes a first device having an I2C bus interface, a second device having a SM bus interface, and a timing buffering apparatus connected between the I2C bus interface and the SM bus interface. The timing buffering apparatus provides a time delay when the first device sends data to the second device so as to meet the requirement of the second device to data holding time. | 05-07-2009 |
20090172239 | Method and Device for Coupling at Least Two Independent Bus Systems - There is described a method for coupling at least two independent bus systems and to a suitable device for carrying out said method, a cycle time T | 07-02-2009 |
20090177831 | ROUTE AWARE SERIAL ADVANCED TECHNOLOGY ATTACHMENT (SATA) SWITCH - An embodiment of the present invention is disclosed to include a SATA Switch allowing for access by two hosts to a single-port SATA device. Further disclosed are embodiments for reducing the delay and complexity of the SATA Switch. | 07-09-2009 |
20090248942 | Posted Memory Write Verification - A method for verifying the proper communication of data packets from an initiator device on a PCIe data bus to a target device on the data bus. A target-specific counter on the initiator is synchronized to an initiator-specific counter on the target with the same value. The initiator writes the value of the target-specific counter into the tag field of the packet header, and also writes an identifier of the initiator into the header. Then the initiator sends the packet to the target on the PCIe data bus. Upon receipt of the packet, the target reads the identifier and checks the value against the appropriate initiator-specific counter on the target. When the value is not equal to the initiator-specific counter on the target, then it generates an error message. An additional memory write with specific data is posted from the initiator to the target. A memory read is posted of the additional memory write location from the initiator to the target. The operation of the initiator is continued when a good status and data matching the additional write data is returned from the target, and operation is halted when an error status is returned or the returned data does not match the additional write data. | 10-01-2009 |
20090276558 | LANE MERGING - A buffer is associated with each of a plurality of data lanes of a multi-lane serial data bus. Data words are timed through the buffers of active ones of the data lanes. Words timed through buffers of active data lanes are merged onto a parallel bus such that data words from each of the active data lanes are merged onto the parallel bus in a pre-defined repeating sequence of data lanes. This approach allows other, non-active, data lanes to remain in a power conservation state. | 11-05-2009 |
20100005214 | ENHANCING BUS EFFICIENCY IN A MEMORY SYSTEM - A communication interface device, system, method, and design structure for enhancing bus efficiency and utilization in a memory system. The communication interface device includes a first bus interface to communicate on a high-speed bus, a second bus interface to communicate on a lower-speed bus, and clock ratio logic configurable to support multiple clock ratios between the high-speed bus and the lower-speed bus. The clock ratio logic reduces a high-speed clock frequency received at the first bus interface and outputs a reduced ratio of the high-speed clock frequency on the lower-speed bus via the second bus interface supporting variable frame sizes. | 01-07-2010 |
20100011145 | Dynamic Storage Resources - A storage server in a distributed content storage and access system provides a mechanism for dynamically establishing storage resources, such as buffers, with specified semantic models. For example, the semantic models support distributed control of single buffering and double buffering during a content transfer that makes use of the buffer for intermediate storage. In some examples, a method includes examining characteristics associated with a desired transfer of data, such as a unit of content, and then selecting characteristics of a first storage resource based on results of the examining. The desired transfer of the data is then effected using the first storage resource element. | 01-14-2010 |
20100070672 | METHOD AND SYSTEM FOR PROCESSING WIRELESS DIGITAL MULTIMEDIA - Multimedia from a source can be wirelessly transmitted in a 60 GHz system to a display. To support rapid reads of encryption, EDID, and other data written into a slave at the display by a master at the source in accordance with I | 03-18-2010 |
20100077125 | SEMICONDUCTOR MEMORY DEVICE - Disclosed is a semiconductor memory device that includes a selector for selectively loading read inversion information and write inversion information on an inversion bus, the inversion bus for transferring the inversion information loaded by the selector, a plurality of read inversion units for reflecting the inversion information from the inversion bus to output data, and a plurality of write inversion units for reflecting the inversion information from the inversion bus to input data. | 03-25-2010 |
20100082872 | METHOD AND APPARATUS TO OBTAIN CODE DATA FOR USB DEVICE - A method and apparatus are provided that include creating an image of a page descriptor at a universal serial bus (USB) device, transferring the image of the page descriptor to a main memory, modifying a schedule list in a main memory based on the transferred image, identifying an active transaction in the modified schedule list, and providing code data to the USB device from the main memory based on the identified active transaction. | 04-01-2010 |
20100115172 | BRIDGE DEVICE HAVING A VIRTUAL PAGE BUFFER - A composite memory device including discrete memory devices and a bridge device for controlling the discrete memory devices. The bridge device has a virtual page buffer corresponding to each discrete memory device for storing read data from the discrete memory device, or write data from an external device. The virtual page buffer is configurable to have a size up to the maximum physical size of the page buffer of a discrete memory device. The page buffer is then logically divided into page segments, where each page segment corresponds in size to the configured virtual page buffer size. By storing read or write data in the virtual page buffer, both the discrete memory device and the external device can operate to provide or receive data at different data rates to maximize the performance of both devices. | 05-06-2010 |
20100131692 | BUS BRIDGE APPARATUS AND BUS BRIDGE SYSTEM - A bus bridge is connected between a general-purpose first bus and a second bus on which an interruption signal is transmitted using a packet. The bus bridge includes a plurality of reception buffers and a control section. The control section controllably switches the read order of the read responses and the requests based on the order of reception of the read responses and the requests after recognizing reception of an interruption assert signal packet transferred by the second bus and before recognizing reception of an interruption de-assert signal packet transferred by the second bus. | 05-27-2010 |
20100153611 | SYSTEM AND METHOD FOR HIGH PERFORMANCE SYNCHRONOUS DRAM MEMORY CONTROLLER - The disclosed system and method enhances performance of pipelined data transactions involving FIFO buffers by implementing a transaction length indicator in a transaction header. The length indicator in the header is formed by components coupled to a memory controller through FIFO buffers. The memory controller uses the length indicator to execute pipelined data transfers at relatively high speeds without causing additional inadvertent shifts or indexes in the FIFO buffer being read. The system and method can be applied to any memory type in general, and avoids the use of additional control signals or added complexity or size in the memory controller. | 06-17-2010 |
20100211714 | METHOD, SYSTEM, AND APPARATUS FOR TRANSFERRING DATA BETWEEN SYSTEM MEMORY AND INPUT/OUTPUT BUSSES - Transferring data between system memory and input/output busses involves determining, via a request buffer, a memory-mapped, input/output (I/O) read request targeted for a first-in-first-out (FIFO) I/O device. The read request is targeted to a request address in a prefetchable memory space corresponding to the I/O device. It is determined whether the request address corresponds to an expected address in the prefetchable memory space. The expected address is determined based on one or more previous read requests targeted to the prefetchable memory space. The read request is reordered in the request buffer if the request address does not correspond to the expected address. The read request is fulfilled if the address corresponds to the expected address. | 08-19-2010 |
20100306440 | SYSTEM AND METHOD FOR SERIAL INTERFACE TOPOLOGIES - A system and method for serial interface topologies is disclosed. A serial interface topology includes a replication device configured to receive control information from a controller interface. The replication device is configured to transmit two or more copies of substantially replicated control information to a device control interface. A data interface is configured to provide differential, point-to-point communication of data with the device controller interface. | 12-02-2010 |
20100306441 | DATA TRANSFER APPARATUS AND DATA TRANSFER METHOD - A data transfer apparatus for transferring data between a system bus and a local bus at a high speed is provided. A bus bridge | 12-02-2010 |
20100312941 | NETWORK INTERFACE DEVICE WITH FLOW-ORIENTED BUS INTERFACE - A network interface device includes a bus interface that communicates over a bus with a host processor and memory, and a network interface that sends and receive data packets carrying data over a packet network. A protocol processor conveys the data between the network interface and the memory via the bus interface while performing protocol offload processing on the data packets in accordance with multiple different application flows. The bus interface queues the data for transmission over the bus in a plurality of queues that are respectively assigned to the different application flows, and transmits the data over the bus according to the queues. | 12-09-2010 |
20100332715 | VEHICLE SYSTEM MONITORING AND COMMUNICATIONS ARCHITECTURE - Systems, methods and devices are provided that allow more efficient transfer and processing of sensor information in a hierarchical data system. The system provides for a plurality of component area managers (CAM), each of the CAMs being in operable communication with at least one of a plurality of transducers that monitors a phenomenon of a component and in operable communication with a data bus. A CAM comprises a processor in operable communication with the at least one transducer of the plurality of transducers, wherein the first processor is configured to record data generated by the at least one transducer of the plurality of transducers, to reduce the recorded data, and to place the reduced data on the data bus. The system also includes a transducer selection module controlled by the first processor by which the first processor selects one of the plurality of transducers to record and a rolling buffer in operable communication with the first processor and in operable communication with the at least one transducer by which to record the data generated by the at least one transducer of the plurality in a first-in-first-out manner. | 12-30-2010 |
20110016251 | MULTI-PROCESSOR SYSTEM AND DYNAMIC POWER SAVING METHOD THEREOF - A multi-processor system and a dynamic power saving method thereof are provided. The multi-processor system includes a plurality of processors and a chipset. Each of the processors has a plurality of standard bus request pins and a specific bus request pin, and the standard bus request pins of each processor are alternately connected to the standard bus request pins of other processors respectively. The chipset is coupled to the specific bus request pins of the processors for detecting a control request signal on the specific bus request pins. When the chipset detects the control request signal, the chipset turns on an input buffer connected with the processors so that the processors can access data through the input buffer. When the chipset does not detect the control request signal, the chipset turns off the input buffer. | 01-20-2011 |
20110087820 | QUEUE SHARING AND RECONFIGURATION IN PCI EXPRESS LINKS - In one embodiment an electronic device comprises at least one processor, at least one PCI express link, a virtual channel/sub-link flow control module, and a memory module communicatively connected to the one or more processors and comprising logic instructions which, when executed on the one or more processors configure the one or more processors to determine, in an electrical device, whether a virtual channel/sub-link is inactive, and in response to a determination that at least one virtual channel/sub-link is inactive, reallocate queue space from the at least one inactive channel to at least one active channel. | 04-14-2011 |
20110093639 | Secure Communications Between and Verification of Authorized CAN Devices - Encrypted encoding and decoding of identification data of CAN bus devices for communications therebetween provides deterrence of theft and unauthorized access of these secure CAN bus devices. Each one of the CAN bus devices is considered a “node” on the CAN bus for communications purposes. By using a unique encryption code stored in each of the “authorized” CAN bus devices, unauthorized CAN bus nodes will not be able to communicate with the authorized, e.g., secure, CAN bus nodes functioning in a CAN system. | 04-21-2011 |
20110093640 | Universal Serial Bus Host Controller and Control Method Thereof - A USB host controller is provided. The USB host controller is capable of communicating with multiple USB apparatuses having endpoints and sends a request to a first endpoint. The USB host controller includes a first storage and a first control unit. The first control unit stores endpoint information from the first endpoint into the first storage when the first endpoint issues an unready transaction packet in response to the request. The unready transaction packet indicates that the first endpoint is not ready. | 04-21-2011 |
20110289253 | INTERCONNECTION METHOD AND DEVICE, FOR EXAMPLE FOR SYSTEMS-ON-CHIP - Transactions of the request/response type between a first circuit module and a second circuit module operating with incompatible protocols or interfaces envisage organizing a queue of memory locations for storing transaction information items and transaction identifiers associated with said transactions, and implementing the transactions via operations of reading/writing of the locations in the queue, mapping information for management of the queue onto the transaction identifiers. | 11-24-2011 |
20110296073 | TIME ALIGNING CIRCUIT AND TIME ALIGNING METHOD FOR ALIGNING DATA TRANSMISSION TIMING OF A PLURALITY OF LANES - A time aligning circuit includes a plurality of buffers, a plurality of delay selectors, a plurality of adjustment symbol generators, and a controller. Each buffer receives an ordered set on a corresponding lane. Each delay selector delays an output of the ordered set of the corresponding buffer. Each adjustment symbol generator outputs an adjustment symbol or the output received from the corresponding delay selector according to an adjustment control signal. When an initial symbol of a designated delay selector is detected but initial symbols of other delay selectors are not received yet, the controller generates the delay control signal to the designated delay selector and generates the adjustment control signal to control a designated adjustment symbol generator corresponding to the designated delay selector in order to output one adjustment symbol until initial symbols of all delay selectors are detected. | 12-01-2011 |
20120179852 | ONE-WAY BUS BRIDGE - A one-way bus bridge pair that transfers secure data in one direction, the bus bridge pair including a transmitting bus bridge, a receiving bus bridge, and a link. The link can connect the transmitting bus bridge and receiving bus bridge. The transmitting bus bridge may be arranged not to receive any data from the receiving bus bridge, and the receiving bus bridge may be arranged not to send any data to the transmitting bus bridge. | 07-12-2012 |
20130042044 | BRIDGE, SYSTEM AND THE METHOD FOR PRE-FETCHING AND DISCARDING DATA THEREOF - A bridge system includes a request device, connected to a first bus; a target device, connected to a second bus; and a bridge, communicated with the first bus and the second bus, and the bridge has a buffer, wherein when the request device asks the bridge for reading data of a target address from the target device, a transaction is started, and the bridge asks the target device to transfer data of the target address and following addresses, and then the target device retrieves and transfers the data of the target address and following addresses to the bridge, where it is stored in the buffer and then transferred to the request device in turn, and wherein as the amount of transferred data to the request device reaches a threshold, the bridge continuously asks for data of a following address of the target device before the transaction is finished. | 02-14-2013 |
20130054864 | BUFFER MANAGEMENT USING FREELIST BUFFERS - A device includes a link interface circuit, a first plurality of allocated buffers, and a second plurality of non-allocated buffers. The link interface circuit is operable to communicate over a communications link using a plurality of virtual channels. A different subset of the plurality of allocated buffers is allocated to each of the virtual channels. The non-allocated buffers are not allocated to a particular virtual channel. The link interface circuit is operable to receive a first transaction over the communications link and assign the first transaction to one of the allocated buffers or one of the non-allocated buffers. | 02-28-2013 |
20130103875 | CPU INTERCONNECT DEVICE - The present disclosure provides a CPU interconnect device, the CPU interconnect device connects with a first CPU, which includes a quick path interconnect QPI interface and a serial deserial SerDes interface, the quick path interconnect QPI interface receives serial QPI data sent from a CPU, converts the received serial QPI data into parallel QPI data, and outputs the parallel QPI data to the serial deserial SerDes interface; the serial deserial SerDes interface converts the parallel QPI data output by the QPI interface into high-speed serial SerDes data and then sends the high-speed serial SerDes data to another CPU interconnect device connected with another CPU. This addresses the defects of poor scalability, long data transmission delay, and high cost in existing interconnect systems among CPUs. | 04-25-2013 |
20130132635 | DIGITAL SIGNAL TRANSCEIVER SYSTEMS - A system includes a first digital signal transceiver having a first interface, a second digital signal transceiver having a second interface, and a communication bus coupled between the first interface and the second interface. The communication bus is operable for establishing communication between the first digital signal transceiver and the second digital signal transceiver, and the communication is serial and asynchronous. | 05-23-2013 |
20130151747 | CO-PROCESSING ACCELERATION METHOD, APPARATUS, AND SYSTEM - An embodiment of the present invention discloses a co-processing acceleration method, including: receiving a co-processing request message which is sent by a compute node in a computer system and carries address information of to-be-processed data; according to the co-processing request message, obtaining the to-be-processed data, and storing the to-be-processed data in a public buffer card; and allocating the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing. An added public buffer card is used as a public data buffer channel between a hard disk and each co-processor card of a computer system, and to-be-processed data does not need to be transferred by a memory of the compute node, which avoids the overhead of transmitting the data through the memory of the compute node, and thereby breaks through a bottleneck of memory delay and bandwidth, and increases a co-processing speed. | 06-13-2013 |
20130159591 | VERIFYING DATA RECEIVED OUT-OF-ORDER FROM A BUS - In an embodiment, load transactions are issued to a bus. The load transactions are stalled if the bus cannot accept additional load transactions, and the load transactions are restarted after the bus can accept the additional load transactions. Responses are received from the bus to the load transactions out-of-order from an order that the load transactions were sent to the bus. The responses comprise data and index values that indicate an order that the load transactions were received by the bus. The data is compared in the order that the load transactions were received by the bus against expected data in the order that the load transactions were sent to the bus. | 06-20-2013 |
20130185472 | TECHNIQUES FOR IMPROVING THROUGHPUT AND PERFORMANCE OF A DISTRIBUTED INTERCONNECT PERIPHERAL BUS - A method for accelerating execution of read operations in a distributed interconnect peripheral bus is provided. The method comprises generating a first number of speculative read requests addressed to an address space related to a last read request served on the bus; sending the speculative read requests to a root component connected to the bus; receiving a second number of read completion messages from the root component of the bus; and sending a read completion message, out of the received read completion messages, to the endpoint component only if the read completion message is respective of a real read request or a valid speculative read request out of the speculative read requests, wherein a real read request is issued by the endpoint component. | 07-18-2013 |
20130246682 | OUT-OF-ORDER EXECUTION OF BUS TRANSACTIONS - A slave-interface unit for use with a system-on-a-chip bus (such as an AXI bus) executes received transactions out-of-order while accounting for groups of in-order transactions. | 09-19-2013 |
20130254450 | Interface Device and Method for Consistently Exchanging Data - An interface device for exchanging data between a first bus system and a second bus system, wherein an input/output device is connectable to the second bus system and includes an addressable slot and an addressable subslot for output or acceptance of input/output data to optimize the consistent exchange of the data between the bus systems. A data transfer device including a transfer memory is connected via the control device and a list storage device in which a data structure for addressing the data for the input/output device is stored, and wherein the data structure is predetermined for a plurality of subslots in a telegram format of the telegrams of the first bus system. | 09-26-2013 |
20130262734 | MODULAR SCALABLE PCI-EXPRESS IMPLEMENTATION - In some embodiments a functional PCI Express port includes first buffers and an idle PCI Express port includes second buffers. One or more of the second buffers are accessed by the functional PCI Express port. Other embodiments are described and claimed. | 10-03-2013 |
20130268712 | GENERAL INPUT/OUTPUT ARCHITECTURE, PROTOCOL AND RELATED METHODS TO IMPLEMENT FLOW CONTROL - An enhanced general input/output communication architecture, protocol and related methods are presented. | 10-10-2013 |
20140019665 | OPTIMIZED BUFFER PLACEMENT BASED ON TIMING AND CAPACITANCE ASSERTIONS - Optimized buffer placement is provided based on timing and capacitance assertions in a functional chip unit including a single source and multiple macros, each having a sink. Placement of the source and macros with the sinks is pre-designed and buffers are placed in branches connecting the source with the multiple sinks. An estimated slack is calculated for each branch, the branches are arranged according to the calculated slack, decoupling buffers are inserted in all branches except the most critical branch(es), the most critical branch(es) are globally routed and slew conditions are fixed within this branch, and at least one next branch is globally routed and slew conditions are fixed therein. | 01-16-2014 |
20140047155 | MEMORY MODULE THREADING WITH STAGGERED DATA TRANSFERS - A method of transferring data between a memory controller and at least one memory module via a primary data bus having a primary data bus width is disclosed. The method includes accessing a first one of a memory device group via a corresponding data bus path in response to a threaded memory request from the memory controller. The accessing results in data groups collectively forming a first data thread transferred across a corresponding secondary data bus path. Transfer of the first data thread across the primary data bus width is carried out over a first time interval, while using less than the primary data transfer continuous throughput during that first time interval. During the first time interval, at least one data group from a second data thread is transferred on the primary data bus. | 02-13-2014 |
20140095760 | Generic and multi-role controller structure for data and communication exchanges - This generic and multi-role data and communication exchange controller structure is characterized in that it assumes the form of a single component further including means forming a generic data and communication exchange controller, associated with at least a means forming a data storage/exchange buffer; a means forming multiple connection interfaces to several data production/consumption units, one connection interface being associated with one data production/consumption unit; and a means forming multiple connection interfaces with several external data communication buses, one connection interface being associated with one external data communication bus. | 04-03-2014 |
20140101355 | VIRTUALIZED COMMUNICATION SOCKETS FOR MULTI-FLOW ACCESS TO MESSAGE CHANNEL INFRASTRUCTURE WITHIN CPU - A message channel optimization method and system enables multi-flow access to the message channel infrastructure within a CPU of a processor-based system. A user (pcode) employs a virtual channel to submit message channel transactions, with the message channel driver processing the transaction “behind the scenes”. The message channel driver thus allows the user to continue processing without having to block other transactions from being processed. Each transaction will be processed, either immediately or at some future time, by the message channel driver. The message channel optimization method and system are useful for tasks involving message channel transactions as well as non-message channel transactions. | 04-10-2014 |
20140143471 | FLEXIBLE CONTROL MECHANISM FOR STORE GATHERING IN A WRITE BUFFER - A store gathering policy is enabled or disabled at a data processing device. A store gathering policy to be implemented by a store buffer can be selected from a plurality of store gathering policies. For example, the plurality of store gathering policies can be constrained or unconstrained. A store gathering policy can be enabled by a user programmable storage location. A specific store gathering policy can be specified by a user programmable storage location. A store gathering policy can be determined based upon an attribute of a store request, such as based upon a destination address. | 05-22-2014 |
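The gathering behavior described in the abstract above can be sketched as a small model: adjacent byte stores to the same aligned word are merged into one buffer entry when gathering is enabled. This is an illustrative sketch only, not the patented implementation; the class, the 4-byte gather window, and all names are invented for the example.

```python
# Hypothetical model of store gathering in a write buffer. The gather
# window (one 4-byte aligned word) and all names are illustrative.
WORD = 4

class WriteBuffer:
    def __init__(self, gathering_enabled=True):
        self.gathering_enabled = gathering_enabled
        self.entries = []  # list of (word_addr, {offset: byte})

    def store(self, addr, byte):
        word_addr, off = addr // WORD * WORD, addr % WORD
        if self.gathering_enabled:
            for entry_addr, bytes_ in self.entries:
                if entry_addr == word_addr:
                    bytes_[off] = byte  # gather into the existing entry
                    return
        self.entries.append((word_addr, {off: byte}))

buf = WriteBuffer(gathering_enabled=True)
for a, b in [(0x100, 0xAA), (0x101, 0xBB), (0x102, 0xCC)]:
    buf.store(a, b)
print(len(buf.entries))  # three byte stores gathered into one entry
```

With `gathering_enabled=False` the same three stores occupy three entries, which corresponds to the abstract's point that the policy can be switched per device or per store attribute.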
20140173162 | Command Queue for Communications Bus - Performing transactions on a bus by first generating a sequence of commands by an initiator module and queuing the sequence of commands in a queue module. A first one of the sequence of commands is sent from the queue module via the bus to a target module. The queue module is paused while waiting for a response via the bus from the target module; however, the initiator may continue processing another task. The queue module repeatedly sends a next command via the bus to the target module and waits for a response via the bus from the target module until a last one of the sequence of commands is sent to the target module. The queue module provides only a single acknowledgement to the initiator module after the sequence of commands has been transferred to the target module. | 06-19-2014 |
20140173163 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD OF INFORMATION PROCESSING APPARATUS AND APPARATUS - An information processing apparatus includes a first processor, a second processor, a switch configured to relay a packet transmitted between the first processor and the second processor, a first output buffer corresponding to the first processor and being configured to store therein a first packet from the first processor and being addressed to the second processor and received through the switch, a first input buffer corresponding to the first processor, and a first selector configured to select one of a first path and a second path, based on a free space of the first output buffer. When the first packet is input, the first path is configured to output the first packet from the first processor to the switch through the first input buffer and the second path is configured to output the first packet from the first processor to the switch not through the first input buffer. | 06-19-2014 |
20140181349 | PER-SOURCE ORDERING - Systems and methods for maintaining an order of read and write transactions for each source through a bridge in a bus fabric. The bridge provides a connection from a first bus to a second bus within the bus fabric. The first bus has a single path for read and write transactions and the second bus has separate paths for read and write transactions. The bridge maintains a pair of counters for each source in a SoC to track the numbers of outstanding read and write transactions. The bridge prevents a read transaction from being forwarded to the second bus if the corresponding write counter is non-zero, and the bridge prevents a write transaction from being forwarded to the second bus if the corresponding read counter is non-zero. | 06-26-2014 |
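The per-source counter scheme in the abstract above can be modeled in a few lines: the bridge blocks a read from a source while that source has outstanding writes, and vice versa, while traffic from other sources is unaffected. This is a sketch under invented names, not the patented bridge logic.

```python
# Illustrative model of per-source ordering counters at a bus bridge.
class Bridge:
    def __init__(self):
        self.reads = {}   # outstanding reads per source
        self.writes = {}  # outstanding writes per source

    def forward(self, source, kind):
        # A read is held while the source's write counter is non-zero,
        # and a write is held while its read counter is non-zero.
        blocker = self.writes if kind == "read" else self.reads
        if blocker.get(source, 0) != 0:
            return False
        counter = self.reads if kind == "read" else self.writes
        counter[source] = counter.get(source, 0) + 1
        return True

    def complete(self, source, kind):
        counter = self.reads if kind == "read" else self.writes
        counter[source] -= 1

b = Bridge()
b.forward("A", "write")
print(b.forward("A", "read"))  # held: source A has an outstanding write
```

Once `complete("A", "write")` is called, the pending read from source A may be forwarded, preserving per-source order across the split read/write paths of the second bus.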
20140189187 | METHOD TO INTEGRATE ARM ECOSYSTEM IPS INTO PCI-BASED INTERCONNECT - Methods and apparatus for integrating ARM-based IPs in computer systems employing PCI-based fabrics. A PCI-based fabric is operatively coupled to an ARM-based ecosystem employing an ARM-based fabric such as OCP, AHB, or BVCI via a corresponding fabric-to-fabric bridge. Transactions between IP operatively coupled to the PCI-based fabric and IP in the ARM-based ecosystem are facilitated by applying applicable ordering and conversion operations via the fabric-to-fabric bridge and/or fabrics. For example, posted writes originating from IP coupled to the PCI-based fabric are converted to non-posted writes and serialized via the fabric-to-fabric bridge and forwarded to the ARM-based ecosystem. | 07-03-2014 |
20140189188 | METHODS AND APPARATUS FOR BRIDGED DATA TRANSMISSION AND PROTOCOL TRANSLATION IN A HIGH-SPEED SERIALIZED DATA SYSTEM - An apparatus for transmitting data across a high-speed serial bus includes an IEEE 802.3-compliant PHY having a GMII interface; an IEEE 1394-compliant PHY in communication with the IEEE 802.3-compliant PHY via a switch; the switch determining whether data transmission is to be routed to the IEEE 802.3-compliant PHY or the IEEE 1394-compliant PHY; a first connection, the first connection for transmitting data between a device and the IEEE 802.3-compliant PHY; and a second connection, the second connection for transmitting data between a device and the IEEE 1394-compliant PHY. | 07-03-2014 |
20140237152 | Folded Memory Modules - A memory module comprises a data interface including a plurality of data lines and a plurality of configurable switches coupled between the data interface and a data path to one or more memories. The effective width of the memory module can be configured by enabling or disabling different subsets of the configurable switches. The configurable switches may be controlled by manual switches, by a buffer on the memory module, by an external memory controller, or by the memories on the memory module. | 08-21-2014 |
20140250252 | First-in First-Out (FIFO) Modular Memory Structure - A modular first-in first-out circuit including at least three non-addressable memory blocks forming a data pipeline is disclosed. At least two of the memory blocks include a data storage structure for receiving input data from a global data bus and a control logic structure including logic for determining whether data should be added to the data storage structure from the global data bus and whether any data within the data storage structure should be transferred to the output of the memory block. The data storage structure of the at least two memory blocks includes a first data input for selectively receiving data from the global data bus and a second data input for selectively receiving data from a previous memory block in the modular first-in first-out circuit. | 09-04-2014 |
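The two data inputs described in that abstract can be illustrated with a behavioral model: pushed data lands in the deepest empty block (global-bus input), and on a pop each block pulls from its predecessor (second input). This is a simplified software sketch with invented names, not the patented circuit.

```python
# Behavioral sketch of a modular FIFO built from non-addressable
# one-entry blocks; all names are illustrative.
class Block:
    def __init__(self):
        self.data = None  # one-entry storage

class ModularFifo:
    def __init__(self, depth=3):
        self.blocks = [Block() for _ in range(depth)]

    def push(self, value):
        # Data ripples toward the output, landing in the deepest empty block
        # (the "first data input" from the global bus).
        for blk in reversed(self.blocks):
            if blk.data is None:
                blk.data = value
                return True
        return False  # full

    def pop(self):
        out = self.blocks[-1].data
        # Each block pulls from its predecessor (the "second data input").
        for i in range(len(self.blocks) - 1, 0, -1):
            self.blocks[i].data = self.blocks[i - 1].data
        self.blocks[0].data = None
        return out
```

Pushing 1, 2, 3 and popping three times returns 1, 2, 3, showing FIFO order is preserved without any addressable read/write pointers.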
20140281102 | PATTERN-BASED SERVICE BUS ARCHITECTURE USING ACTIVITY-ORIENTED SERVICES - A pattern-based service bus includes a plurality of bus endpoints, a bus-hosted service, and a bus storage component. The plurality of bus endpoints interact with bus participants external to the pattern-based service bus, wherein each of the plurality of bus endpoints are identified by a unique address, and type of interaction to be provided by the bus endpoint. The bus-hosted service implements patterns that define allowed interactions between each of the plurality of bus endpoints and the bus-hosted service, wherein the implemented patterns can be utilized by the plurality of bus endpoints to interact with the bus-hosted service. The bus storage component interacts with the bus-hosted service to store information relevant to operation of the pattern-based service bus. | 09-18-2014 |
20140281103 | INFORMATION PROCESSING APPARATUS AND METHOD FOR GENERATING COUPLING INFORMATION - A processing apparatus includes a memory, and a processor coupled to the memory and configured to acquire first data that indicates a correspondence relationship between a first address given to a first adapter of a first device and a first bus number given to a first bus coupled to the first adapter, acquire second data that indicates a correspondence relationship between a second address given to a second adapter of the first device and a second bus number given to a second bus coupled to the second adapter, acquire third data that indicates a correspondence relationship between the first address and a port number given to a port of a second device, the port being coupled to the first adapter with the first bus, and when the second bus number is identical to the first bus number, generate fourth data that indicates that the second adapter is coupled to the port. | 09-18-2014 |
20140289442 | BUTTON SIGNALING FOR APPARATUS STATE CONTROL - A keypad circuit provides a signal corresponding to an actuation of a button. A query is presented to confirm intent to place an apparatus in a low-power state in response to the signal. The apparatus assumes a low-power state in response to a confirmation. The apparatus is also configured to assume an active state from a low-power state in response to the signal. Point-of-sale terminals and other apparatus can be controlled and operated accordingly. | 09-25-2014 |
20140289443 | Inter-Bus Communication Interface Device - There is provided an inter-bus communication interface device capable of efficiently transferring data between a plurality of devices connected to different buses. When communication data is transmitted, a first device writes the communication data into a buffer, whereas when communication control information is transmitted, the first device writes the communication control information into a register. A control circuit passes the communication data stored in the buffer to a second device, and passes the communication control information stored in the register to the second device. | 09-25-2014 |
20140372656 | DATA PROCESSING DEVICE, SEMICONDUCTOR EXTERNAL VIEW INSPECTION DEVICE, AND DATA VOLUME INCREASE ALLEVIATION METHOD - Provided is a data processing device with which, when temporary network congestion occurs, it is possible to avoid a buffer overflow and sustain processing. When a request for retransmission of the same data with respect to a processor element from a buffer occurs continuously for a prescribed number of iterations, the data processing device according to the present invention determines that a buffer overflow may occur, and suppresses an increase in the volume of data accumulated in the buffer. | 12-18-2014 |
20140379954 | RELAY DEVICE, COMMUNICATION HARNESS, AND COMMUNICATION SYSTEM - Provided are a relay device where a routing table can be replaced in order to accommodate diverse relay variations and the storage capacity of the routing table can be saved, a communication harness including the relay device and a communication system including the relay device. In the relay device, a routing program for a CPU to execute relay processing and a routing table for identifying the relay destination by the relay processing are stored in an information rewritable flash memory. The routing table includes a first table that manages whether the CAN message is to be relayed or not, a second table storing the received CAN message, a third table storing the CAN message to be transmitted (relayed), a fourth table that manages the signal storage position, and a fifth table that manages the number of effective records in the second to fourth tables. | 12-25-2014 |
20150012681 | Common Idle State, Active State And Credit Management For An Interface - In one embodiment, the present invention includes a method for entering a credit initialization state of an agent state machine of an agent coupled to a fabric to initialize credits in a transaction credit tracker of the fabric. This tracker tracks credits for transaction queues of a first channel of the agent for a given transaction type. The agent may then assert a credit initialization signal to cause credits to be stored in the transaction credit tracker corresponding to the number of the transaction queues of the first channel of the agent for the first transaction type. Other embodiments are described and claimed. | 01-08-2015 |
20150074319 | UNIVERSAL SPI (SERIAL PERIPHERAL INTERFACE) - A Universal SPI Interface is provided that is compatible, without the need for additional interface logic or software, with the SPI bus, existing DSA and other serial busses similar to (but not directly compatible with) the SPI bus, and parallel busses requiring compatibility with 74xx164-type signaling. In an additional aspect, a reduced-pincount Universal SPI Interface is provided that provides the same universal interface, but using fewer external output pins. The Universal SPI Interface includes multiple latches, buffers, and in an alternative embodiment, a multiplexer, configured together such that a Universal SPI interface is provided that can be readily reconfigured using only input signals to provide compatibility across multiple bus interfaces. | 03-12-2015 |
20150127873 | SEMICONDUCTOR DEVICE AND MEMORY SYSTEM INCLUDING THE SAME - A semiconductor device and a memory system including the same are disclosed, which relate to a technology for reducing a toggle current of a global input output (GIO) of a semiconductor device configured to use a data bus inversion (DBI) scheme. The semiconductor device includes: a local input/output (LIO) line driver configured to perform inversion or non-inversion of data of a global input/output (GIO) line according to a control signal, and to output the inversion or non-inversion result to the LIO line; and an inversion processor configured to combine an inversion control signal and mat information, and output the control signal for controlling inversion or non-inversion of data to the LIO line driver. | 05-07-2015 |
20150301961 | HAZARD CHECKING CONTROL WITHIN INTERCONNECT CIRCUITRY - A system-on-chip integrated circuit | 10-22-2015 |
20150317266 | CONFIGURABLE PERIPHERAL COMPONENT INTERCONNECT EXPRESS (PCIe) CONTROLLER - In a system on a chip, multiple PCIe controllers may be present in which each PCIe controller may be configured to route input data to either itself or to another PCIe controller based on a priority level of the input data. Similarly, each PCIe controller may be configured to route output data by way of its own PCIe link or that of another PCIe controller based on a scheduling order which may be based on a priority level of the buffer in which the output data is stored. In this manner, multiple PCIe controllers which, in a first mode, are capable of operating independently from each other can be configured, in a second mode, to provide multiple channels for a single PCIe link, in which each channel may correspond to a different priority level. | 11-05-2015 |
20150324134 | SYSTEM AND METHOD FOR CACHING SOLID STATE DEVICE READ REQUEST RESULTS - Techniques for caching results of a read request to a solid state device are disclosed. In some embodiments, the techniques may be realized as a method for caching solid state device read request results comprising receiving, at a solid state device, a data request from a host device communicatively coupled to the solid state device, and retrieving, using a controller of the solid state device, a compressed data chunk from the solid state device in response to the data request. The techniques may further include decompressing the compressed data chunk, returning, to the host device, a block of the data chunk responsive to the data request, and caching one or more additional blocks of the data chunk in a data buffer for subsequent read requests. | 11-12-2015 |
20150356033 | SYSTEM AND METHOD FOR EFFICIENT PROCESSING OF QUEUED READ COMMANDS IN A MEMORY SYSTEM - A solid state drive (SSD) storage system includes a memory controller, host interface, memory channels and solid state memories as storage elements. The completion status of sub-commands of individual read commands is monitored and used to determine an optimal selection for returning data for individual read commands. The completion of a read command may be dependent on the completion of multiple individual memory accesses at various times. The queueing of multiple read commands which may proceed in parallel or out of order causes interleaving of multiple memory accesses from different commands to individual memories. A system and method is disclosed which enables the selection, firstly of completed read commands, independent of the order they were queued and, secondly, of partially completed read commands which are most likely to complete with the least interruption or delay, for data transfer, which in turn improves the efficiency of the data transfer interface. | 12-10-2015 |
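The selection rule in the abstract above lends itself to a short sketch: prefer any fully completed read command regardless of queue order, otherwise pick the partially completed command with the fewest outstanding sub-commands. This is an illustrative model under invented names, not the patented controller.

```python
# Illustrative model of selecting queued read commands for data transfer.
class ReadCommand:
    def __init__(self, cmd_id, num_subcommands):
        self.cmd_id = cmd_id
        self.remaining = num_subcommands  # outstanding memory accesses

def select_for_transfer(queue):
    # First choice: any fully completed command, independent of queue order.
    done = [c for c in queue if c.remaining == 0]
    if done:
        return done[0]
    # Otherwise: the partially completed command with the fewest
    # outstanding sub-commands, i.e. most likely to complete soonest.
    return min(queue, key=lambda c: c.remaining)

queue = [ReadCommand(0, 3), ReadCommand(1, 1), ReadCommand(2, 5)]
print(select_for_transfer(queue).cmd_id)  # command 1: fewest outstanding
```

As sub-command completions arrive (decrementing `remaining`), repeated calls to `select_for_transfer` naturally favor completed commands over the queue's original order, which is the efficiency point the abstract makes.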
20150370535 | METHOD AND APPARATUS FOR HANDLING INCOMING DATA FRAMES - A method and apparatus for handling incoming data frames within a network interface controller. The network interface controller comprises at least one controller component operably coupled to at least one memory element. The at least one controller component is arranged to identify a next available buffer pointer from a pool of buffer pointers stored within a first area of memory within the at least one memory element, receive an indication that a start of a data frame has been received via a network interface, and allocate the identified next available buffer pointer to the data frame. | 12-24-2015 |
20150370706 | METHOD AND APPARATUS FOR CACHE MEMORY DATA PROCESSING - Apparatus and methods are disclosed that enable the allocation of a cache portion of a memory buffer to be utilized by an on-cache function controller (OFC) to execute processing functions on “main line” data. A particular method may include receiving, at a memory buffer, a request from a memory controller for allocation of a cache portion of the memory buffer. The method may also include acquiring, by an on-cache function controller (OFC) of the memory buffer, the requested cache portion of the memory buffer. The method may further include executing, by the OFC, a processing function on data stored at the cache portion of the memory buffer. | 12-24-2015 |
20160004659 | BUS CONTROLLER, DATA FORWARDING SYSTEM, AND METHOD FOR CONTROLLING BUSES - The first buffers forward data from the first device to the respective corresponding second devices through the respective buses while the second buffers forward data from the respective corresponding second devices to the first device through the respective buses. In response to a simultaneous data transmission request to simultaneously transmit data from the first device to the second devices, the switch controller switches the first buffer into a data-forwarding enable state, and switches the second buffer into a data-forwarding disable state, for simultaneous data transmission from the first device to the plurality of the second devices. The pseudo-response generator generates pseudo-response signals acting as a plurality of response signals that the second devices transmit to the first device as a result of the simultaneous data transmission, and transmits the plurality of the pseudo-response signals to the first device. This configuration achieves simultaneous access to multiple devices. | 01-07-2016 |
20160077991 | BRIDGING STRONGLY ORDERED WRITE TRANSACTIONS TO DEVICES IN WEAKLY ORDERED DOMAINS, AND RELATED APPARATUSES, METHODS, AND COMPUTER-READABLE MEDIA - Bridging strongly ordered write transactions to devices in weakly ordered domains, and related apparatuses, methods, and computer-readable media are disclosed. In one aspect, a host bridge device is configured to receive strongly ordered write transactions from one or more strongly ordered producer devices. The host bridge device issues the strongly ordered write transactions to one or more consumer devices within a weakly ordered domain. The host bridge device detects a first write transaction that is not accepted by a first consumer device of the one or more consumer devices. For each of one or more write transactions issued subsequent to the first write transaction and accepted by a respective consumer device, the host bridge device sends a cancellation message to the respective consumer device. The host bridge device replays the first write transaction and the one or more write transactions that were issued subsequent to the first write transaction. | 03-17-2016 |
20160103718 | METHOD AND APPARATUS FOR MESSAGE INTERACTIVE PROCESSING - Provided are a message interaction processing method and device. The method includes: a first buffer with a preset size is allocated for a Central Processing Unit (CPU) and/or a chip; and message interaction is performed between the CPU and the chip through the first buffer, wherein the first buffer is used for storing at least two messages. The disclosure solves the problem that frequent switching between states causes high resource overhead and low message transmission efficiency when there is heavy message interaction between the CPU and the chip, and remarkably improves message sending and receiving efficiency and the performance of network equipment. | 04-14-2016 |
20160124889 | ASYNCHRONOUS FIFO BUFFER WITH JOHNSON CODE WRITE POINTER - An asynchronous data transfer system includes a bus interface unit (BIU), a FIFO write logic module, a write pointer synchronizer, a write pointer validator, a FIFO read logic module, and an asynchronous FIFO buffer. The FIFO buffer receives a variable size data from the BIU and stores the variable size data at a write address. The FIFO write logic module generates a write pointer by encoding the write address using a Johnson code. The FIFO read logic module receives a synchronized write pointer at the asynchronous clock domain and generates a read address signal when the synchronized write pointer is a valid Johnson code format. The FIFO buffer transfers the variable size data to a processor based on the read address signal. | 05-05-2016 |
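The Johnson-code pointer validation described in that abstract can be sketched with a small model. A Johnson (twisted-ring) counter changes exactly one bit per step, so a pointer sampled across clock domains can be rejected as invalid if it has more than one adjacent-bit transition. The functions below are an illustrative sketch, not the patented logic.

```python
# Illustrative model of a Johnson-coded FIFO write pointer.
def johnson_next(code, width):
    # One step of a Johnson counter: shift left and feed the inverted MSB
    # back into the LSB, so exactly one bit changes per step.
    msb = (code >> (width - 1)) & 1
    return ((code << 1) & ((1 << width) - 1)) | (msb ^ 1)

def is_valid_johnson(code, width):
    # Valid Johnson codewords have the form 0...01...1 or 1...10...0,
    # i.e. at most one transition scanning the bits from LSB to MSB.
    bits = [(code >> i) & 1 for i in range(width)]
    transitions = sum(bits[i] != bits[i + 1] for i in range(width - 1))
    return transitions <= 1

code = 0
for _ in range(6):
    print(format(code, "03b"), is_valid_johnson(code, 3))
    code = johnson_next(code, 3)
```

An n-bit Johnson counter cycles through 2n of the 2^n possible codes; the remaining codes (e.g. `101` for n = 3) fail the check, which is what lets the read side discard a metastable or corrupted synchronized pointer.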
20160124891 | METHODS AND CIRCUITS FOR DEADLOCK AVOIDANCE - A system is disclosed that includes a first communication circuit that communicates data over a first data port using a first communication protocol. The system also includes a second communication circuit that communicates data over a second data port using a second communication protocol. The second communication protocol processes read and write requests in an order that the read and write requests are received. A bridge circuit is configured to communicate data between the first data port of the first communication circuit and the second data port of the second communication circuit. The bridge circuit is configured to communicate non-posted writes to the second communication circuit via a buffer circuit and communicate posted writes to the second communication circuit via a communication path that bypasses the buffer circuit. | 05-05-2016 |
20160139880 | Bypass FIFO for Multiple Virtual Channels - A group of low-level FIFOs may be logically bound together to form a super-FIFO. The super-FIFO may treat each low-level FIFO as a storage location. The super-FIFO may enable a push to (or a pop from) every low-level FIFO, simultaneously. The super-FIFO may enable a virtual channel (VC) to use the super-FIFO, bypassing a VC FIFO for the VC, removing several cycles of latency otherwise needed for enqueuing and dequeuing messages in the VC FIFO. In addition, the super-FIFO may enable bypassing of an arbiter, further reducing latency by avoiding a penalty of the arbiter. | 05-19-2016 |
20160140061 | MANAGING BUFFERED COMMUNICATION BETWEEN CORES - Communicating among multiple sets of multiples cores includes: buffering messages in first buffer associated with a first set of multiple cores; buffering messages in a second buffer associated with a second set of multiple cores; and transferring messages over communication circuitry from cores not in the first set to the first buffer, and to transferring messages from cores not in the second set to the second buffer. A first core of the first set sends messages corresponding to multiple types of instructions to a second core of the second set through the communication circuitry. The second buffer is large enough to store a maximum number of instructions of a second type that are allowed to be outstanding from cores in the first set at the same time, and still have enough storage space for one or more instructions of a first type. | 05-19-2016 |
20160140070 | NETWORK TRAFFIC PROCESSING - As disclosed herein a method, executed by a computer, for providing improved multi-protocol traffic processing includes receiving a data packet, determining if a big processor is activated, deactivating a little processor and activating the big processor if the big processor is not activated and an overflow queue is full, and deactivating the big processor and activating the little processor if the big processor is activated and a current throughput for the big processor is below a first threshold or a sustained throughput for the big processor remains below a second threshold. The big and little processors may be co-located on a single integrated circuit. An overflow queue, managed with a token bucket algorithm, may be used to enable the little processor to handle short burst of data packet traffic. A computer program product and an apparatus corresponding to the described method are also disclosed herein. | 05-19-2016 |
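The switchover rule in the abstract above can be condensed into a short model: the little processor absorbs short bursts via the overflow queue, the big processor wakes only when the queue fills, and it is put back to sleep when throughput drops below a threshold. The queue size, threshold, and all names are invented for illustration; the abstract's token-bucket management of the queue is omitted for brevity.

```python
# Hedged sketch of big/little processor switching for packet traffic;
# parameters and names are illustrative, not from the patent.
class TrafficProcessor:
    def __init__(self, overflow_capacity=8, low_threshold=100):
        self.big_active = False
        self.overflow = []                  # overflow queue for the little core
        self.overflow_capacity = overflow_capacity
        self.low_threshold = low_threshold  # packets/s below which big core idles

    def on_packet(self, current_throughput):
        if not self.big_active:
            if len(self.overflow) >= self.overflow_capacity:
                self.big_active = True      # burst exceeded: wake the big core
                self.overflow.clear()       # big core drains the backlog
            else:
                self.overflow.append(1)     # little core absorbs the burst
        elif current_throughput < self.low_threshold:
            self.big_active = False         # sustained low load: back to little
```

The overflow queue is what distinguishes a short burst (handled entirely by the little core) from a sustained load increase (which justifies the power cost of activating the big core).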
20160154749 | COMPUTER HAVING BUFFERING CIRCUIT FOR HARD DISK DRIVE | 06-02-2016 |
20160179710 | PHYSICAL INTERFACE FOR A SERIAL INTERCONNECT | 06-23-2016 |
20160196221 | HARDWARE ACCELERATOR AND CHIP | 07-07-2016 |
20180024948 | BAD COLUMN MANAGEMENT WITH DATA SHUFFLE IN PIPELINE | 01-25-2018 |