Class / Patent application number | Description | Number of patent applications / Date published |
710053000 | Alternately filling or emptying buffers | 49 |
20080307127 | Method for Managing Under-Runs and a Device Having Under-Run Management Capabilities - A method for managing under-runs and a device having under-run management capabilities. The method includes retrieving packets from multiple buffers, monitoring a state of the multiple buffers, and determining whether an under-run associated with a transmission attempt of a certain information frame from a certain buffer occurs; if an under-run occurs, requesting a certain information frame transmitter to transmit predefined packets while ignoring packets that are retrieved from the certain buffer, until a last packet of the information frame is retrieved from the certain buffer; and notifying a processor that an under-run occurred after at least one predefined packet was transmitted; wherein each buffer out of the multiple buffers is adapted to store a fraction of a maximal sized information frame. | 12-11-2008 |
20090019195 | INTEGRATED CIRCUIT, MEMORY MODULE AND SYSTEM - An integrated circuit comprises a first data interface configured to be coupled to a first memory device, a second data interface configured to be coupled to a second memory device, a first control interface configured to be coupled to the first memory device, and a second control interface configured to be coupled to the second memory device. The control interfaces are arranged between the first data interface and the second data interface or the data interfaces are arranged between the first control interface and the second control interface. | 01-15-2009 |
20090031058 | Methods and Apparatuses for Flushing Write-Combined Data From A Buffer - Methods and apparatuses for flushing write-combined data from a buffer within a memory to an input/output (I/O) device. | 01-29-2009 |
20090113085 | FLUSHING WRITE BUFFERS - A first node causes flushing of data units stored in a write buffer of a second node to a memory of the second node. While using a pin-based approach, the central processing unit (CPU) of the first node may activate a first pin coupled to a second pin of the second node that may cause a sequence of operations to flush the write buffer. While using a control-register based approach, the CPU or the memory controller hub (MCH) may configure the control register using an inter-node path such as the SMBus or a data transfer path that may cause a sequence of operations to flush the write buffer. While using an in-band flush mechanism, the CPU may send a message over the data transfer path after transferring the data units that may cause a sequence of operations to flush the write buffer. | 04-30-2009 |
20090248921 | DATA PROCESSING METHOD AND DATA PROCESSING DEVICE - A data processing method includes: sequentially receiving data transmitted by radio communication; sequentially accumulating the received data in a buffer memory in which data is not yet accumulated; starting reading the data from the buffer memory in an accumulating order when the amount of data accumulated in the buffer memory exceeds a first critical value; adding supplementary data to the data read from the buffer memory until the amount of data accumulated in the buffer memory reaches a second critical value larger than the first critical value after starting reading the data from the buffer memory; and reading the data from the buffer memory without adding the supplementary data when the amount of data accumulated in the buffer memory reaches the second critical value during the reading of the data from the buffer memory. | 10-01-2009 |
20090271544 | APPARATUS AND METHOD FOR WRITING DATA TO RECORDING MEDIUM - A distance calculating unit calculates a distance from a current position on a tape to the end of the tape. A command processing unit receives a write command. If the distance is small, a determining unit sets a usable capacity of a buffer to be equal to a maximum capacity of the buffer. If the distance is large, the determining unit sets the usable capacity of the buffer according to the distance. If a capacity for data indicated by the write command is less than or equal to a difference between the usable capacity and current usage of the buffer, a buffer managing unit stores the data in the buffer. When the command processing unit receives a write FM command, the buffer managing unit reads the data from the buffer, updates the current usage, and a channel input/output unit writes the data to the tape. | 10-29-2009 |
20090276550 | SERIAL LINK BUFFER FILL-LEVEL COMPENSATION USING MULTI-PURPOSE START OF PROTOCOL DATA UNIT TIMING CHARACTERS - Embodiments of the invention provide improved timing compensation for a bidirectional serial link in order to relax accuracy requirements of clock sources used for the link. Fill levels of receiver buffers at either ends of the link are used to determine a particular type of start of PDU (SOP) character sequence to use when forming a PDU for transmission over the link. When a given type of SOP character sequence is present in a PDU received at one end of the link, a next PDU to be transmitted from the same end of the link is delayed by a predetermined amount of time to allow the receiver buffer at the other end of the link to decrease its fill level before receiving the next PDU. | 11-05-2009 |
20100077112 | DATA STORAGE MANAGEMENT SYSTEM AND METHOD THEREOF - The present disclosure provides a management system and method for data storage. The storage management system comprises a plurality of data output units, a plurality of buffers, a memory, and a processor. The plurality of buffers correspond to the data output units, are coupled to the data output units respectively, and are configured for storing data outputted from the data output units temporarily. The processor comprises a selecting module, a memory apportioning module, a copy module, and a writing module. The selecting module is electrically coupled to the plurality of buffers, and is configured for selecting a buffer from the plurality of buffers as a combined buffer. The memory apportioning module is electrically coupled to the combined buffer, and is configured for leaving out a memory paragraph in the combined buffer. The copy module is electrically coupled to the plurality of buffers, and configured for copying the data in the rest of the buffers into the memory paragraph of the combined buffer. The writing module is electrically coupled to the combined buffer and the memory, and configured for writing the data stored in the combined buffer into the memory. | 03-25-2010 |
20100169518 | Semiconductor memory device - A semiconductor memory device includes a plurality of output buffer units connected to a plurality of terminals. Each of the output buffer units includes a first high speed data output (HSDO) buffer adapted to buffer even-numbered data of a corresponding data row among a plurality of data rows and to output the even-numbered data to a corresponding terminal among the plurality of terminals, a second HSDO buffer adapted to buffer odd-numbered data of the corresponding data row and to output the odd-numbered data to the corresponding terminal, and a buffer selector adapted to select and activate the first HSDO buffer and/or the second HSDO buffer in response to a corresponding control signal out of at least one control signal during a HSDO test. | 07-01-2010 |
20100199001 | SUBSTRATE PROCESSING SYSTEM - Provided is a substrate processing system configured to provide proper data. The substrate processing system comprises a substrate processing apparatus comprising a plurality of components, a controller configured to control the substrate processing apparatus by setting a sequence prescribing time and components, and a collection unit configured to collect data from the components. The collection unit is configured to match data collected from the components via the controller with data collected directly from the components. | 08-05-2010 |
20100205331 | Non-Volatile Memory That Includes An Internal Data Source - The present disclosure includes systems and techniques relating to a non-volatile memory that includes an internal data source. In some implementations, a device includes a buffer, a memory cell array, and processing circuitry coupled with the buffer and the memory cell array, and configured to selectively fill the buffer with auxiliary data from the internal data source specified by the controller and user data received from an external source, in response to instructions from the controller. | 08-12-2010 |
20100268854 | SYSTEM AND METHOD FOR UTILIZING PERIPHERAL FIRST-IN-FIRST-OUT (FIFO) RESOURCES - A system and method for sharing peripheral first-in-first-out (FIFO) resources is disclosed. In one embodiment, a system for utilizing peripheral FIFO resources includes a processor, a first peripheral FIFO controller and a second peripheral FIFO controller coupled to the processor for controlling buffering of first data and second data associated with the processor respectively. Further, the system includes a merge module coupled to the first peripheral FIFO controller and the second peripheral FIFO controller for merging a first FIFO channel associated with the first peripheral FIFO controller and a second FIFO channel associated with the second peripheral FIFO controller based on an operational state of the first FIFO channel and an operational state of the second FIFO channel respectively. Also, the system includes a first FIFO and a second FIFO coupled to the merge module via the first FIFO channel and the second FIFO channel respectively. | 10-21-2010 |
20110040905 | EFFICIENT BUFFERED READING WITH A PLUG-IN FOR INPUT BUFFER SIZE DETERMINATION - A method of buffered reading of data is provided. A read request for data is received by a buffered reader, and in response to the read request, a main memory input buffer is partially filled with the data by the buffered reader to a predetermined amount that is less than a fill capacity of the input buffer. Corresponding computer system and program products are also provided. | 02-17-2011 |
20110119414 | APPARATUS AND METHOD FOR SORTING ITEMS - An apparatus for sorting items has a buffer device with a multiplicity of buffer storage locations, filled by a loading device, and an intermediate store with a multiplicity of intermediate storage locations. The intermediate store and the buffer device are arranged such that items stored at a buffer storage location can be transferred into an intermediate storage location. The intermediate storage locations are movable at a relative speed with respect to the buffer storage locations and are suitable for receiving more than one item, for presorting. The buffer device is arranged over the intermediate store such that an item located in a buffer storage location can fall into an intermediate storage location. The apparatus has a multiplicity of collecting containers, arranged under the intermediate store, which are at rest during the sorting operation and are filled during the sorting operation with items contained in the intermediate storage locations. | 05-19-2011 |
20110179200 | ACCESS BUFFER - The disclosed embodiments relate to a system for controlling accesses to one or more memory devices. This system includes one or more write queues configured to store entries for write requests, wherein a given entry for a write request includes an address and write data to be written to the address. The system also includes a search mechanism configured to receive a read request which includes an address, and to search the one or more write queues for an entry with a matching address. If a matching address is found in an entry in a write queue, the search mechanism is configured to retrieve the write data from the entry and to cancel the associated write request, whereby the read request can be satisfied without accessing the one or more memory devices. | 07-21-2011 |
20110191508 | Low-Contention Update Buffer Queuing For Large Systems - A method for queuing thread update buffers to enhance garbage collection. The method includes providing a global update buffer queue and a global array with slots for storing pointers to filled update buffers. The method includes with an application thread writing to the update buffer and, when filled, attempting to write the pointer for the update buffer to the global array. The array slot may be selected randomly or by use of a hash function. When the writing fails due to a non-null slot, the method includes operating the application thread to add the filled update buffer to the global update buffer queue. The method includes, with a garbage collector thread, inspecting the global array for non-null entries and, upon locating a pointer, claiming the filled update buffer. The method includes using the garbage collector thread to claim and process buffers added to the global update buffer queue. | 08-04-2011 |
20110271017 | EFFICIENT NON-TRANSACTIONAL WRITE BARRIERS FOR STRONG ATOMICITY - A method and apparatus for providing optimized strong atomicity operations for non-transactional writes is herein described. Locks are acquired upon initial non-transactional writes to memory locations. The locks are maintained until an event is detected resulting in the release of the locks. As a result, in the intermediary period between acquiring and releasing the locks, any subsequent writes to memory locations that are locked are accelerated through non-execution of lock acquire operations. | 11-03-2011 |
20110320651 | Buffering of a data stream - A data processing apparatus is provided comprising a buffer for buffering data contained in a data stream generated by a data stream generator and received by a data stream receptor. Buffer occupancy tracking circuitry is provided and configured to maintain a high buffer utilisation value providing an indication of a high buffer occupation level for a given time period during utilisation of the buffer. Alternatively, in an apparatus where the buffer is implemented in dedicated memory, the buffer occupancy tracking circuitry is configured to store a programmable buffer size limit controlling a maximum allowable buffer storage capacity. | 12-29-2011 |
20120059958 | SYSTEM AND METHOD FOR A HIERARCHICAL BUFFER SYSTEM FOR A SHARED DATA BUS - The present invention provides a system and method for controlling data entries in a hierarchical buffer system. The system includes an integrated circuit device comprising: a memory core, a shared data bus, and a plurality of 1st tier buffers that receive data from the memory. The system further includes a 2nd tier transfer buffer that delivers the data onto the shared data bus with pre-determined timing. The present invention can also be viewed as providing methods for controlling moving data entries in a hierarchical buffer system. The method includes managing the buffers to allow data to flow from a plurality of 1st tier buffers through a 2nd tier transfer buffer, and delivering the data onto a shared data bus with pre-determined timing. | 03-08-2012 |
20120084469 | USB TRANSACTION TRANSLATOR AND A BULK TRANSACTION METHOD - The present invention is directed to a universal serial bus (USB) transaction translator and an associated IN/OUT bulk transaction method. A device interface is coupled to a device via a device bus, and a host interface is coupled to a host via a host bus, wherein the host USB version is higher than the device USB version. At least two buffers configured to store data are disposed between the device interface and the host interface. A controller stores the data in the buffers alternately. In a bulk-IN transaction, before the host sends an IN packet, the controller pre-fetches data and stores the data in the buffers until all the buffers are full or a requested data length has been achieved; the pre-fetched data are then sent to the host after the host sends the IN packet. In a bulk-OUT transaction, the controller stores the data sent from the host in the buffers, and the data are then post-written to the device. | 04-05-2012 |
20120102243 | METHOD FOR THE RECOVERY OF A CLOCK AND SYSTEM FOR THE TRANSMISSION OF DATA BETWEEN DATA MEMORIES BY REMOTE DIRECT MEMORY ACCESS AND NETWORK STATION SET UP TO OPERATE IN THE METHOD AS A TRANSMITTING OR, RESPECTIVELY, RECEIVING STATION - In the method, data are transmitted between a first memory allocated to a source computer and a second memory allocated to a target computer via a network by remote direct memory access. On the source computer side, a predetermined number of directly consecutive transmission buffers is selected from a continuous buffer memory area and transmitted in a single RDMA transmission process to the target computer. On the target computer side, an RDMA data transfer is executed over the entire continuous buffer memory area, followed by a buffer sequence procedure. The buffer sequence procedure causes the received buffers to be supplied to the target application in the transmitted sequence. | 04-26-2012 |
20120131240 | SLIDING WRITE WINDOW MECHANISM FOR WRITING DATA - Various embodiments writing data are provided. In one embodiment, the data arranged in a plurality of write intervals is loaded into a plurality of buffers, the totality of the plurality of buffers configured as a sliding write window mechanism adapted for movement to accommodate the write intervals. The data may reach the storage system out of a sequential order, and by loading it appropriately into the said buffers the data is ordered sequentially before it is written to the storage media. When a commencing section of the sliding write window is filled up with written data, this section is flushed to the storage media, and the window slides forward, to accommodate further data written by the writers. The writers are synchronized with the interval reflected by the current position of the sliding write window, and they send data to be written only where this data fits into the current interval of the window. | 05-24-2012 |
20120203942 | DATA PROCESSING APPARATUS - A data processing apparatus may include a data acquisition unit, a buffer unit that includes a plurality of division buffers, a valid data area determination unit that calculates an area of valid data, a buffer state management unit that manages whether or not the data is stored in the division buffer, a data write control unit that writes data of a unit of the storage capacity of the division buffer, which at least includes data indicated to be valid data by the valid data information within the data, to the division buffer in which no data is stored, the division buffer being selected based on the management information, and a data read control unit that reads data indicated to be valid data by the valid data information from the division buffer in which data is stored, the division buffer being selected based on the management information. | 08-09-2012 |
20120260011 | EFFICIENT BUFFERED READING WITH A PLUG-IN FOR INPUT BUFFER SIZE DETERMINATION - A method of buffered reading of data is provided. A read request for data is received by a buffered reader, and in response to the read request, a main memory input buffer is partially filled with the data by the buffered reader to a predetermined amount that is less than a fill capacity of the input buffer. Corresponding computer system and program products are also provided. | 10-11-2012 |
20120265908 | SERVER AND METHOD FOR BUFFERING MONITORED DATA - A method for buffering monitored data received from a monitoring device. The received monitored data is buffered into a buffer area, and all of the monitored data from the buffer area is stored to a database server when a current count of data in the buffer area equals a predetermined recycling count N. An address of the received monitored data is recorded in a data list. When a monitoring server receives a request for monitored data from a client server, the required one or more items of monitored data are read from the buffer area and sent to the client server. | 10-18-2012 |
20120278517 | ASSEMBLY AND A METHOD OF RECEIVING AND STORING DATA WHILE SAVING BANDWIDTH BY CONTROLLING UPDATING OF FILL LEVELS OF QUEUES - An assembly in which a number of receivers receive packets for storing in queues in a storage, and a means de-queues data from the storage. A controller determines addresses for the storage, the address being determined on the basis of at least a fill level of the queue(s), where information relating to de-queued addresses is only read out when the fill level(s) exceed a limit, so as not to spend bandwidth on this information before it is required. | 11-01-2012 |
20130086286 | INTER-PROCESSOR COMMUNICATION APPARATUS AND METHOD - Inter-processor communication (IPC) apparatus and a method for providing communication between two processors having a shared memory, the IPC apparatus including an arbitrated bus coupling the processors to one another and to the memory, a buffer in the shared memory associated with each processor, and at least one pair of First In First Out hardware units (FIFOs) coupled to each processor, the FIFOs holding pointers to addresses in the buffer associated with that processor, wherein a first of the pair of FIFOs (an empty buffer FIFO) is configured to hold pointers to empty portions of the buffer while the second of the pair of FIFOs (a message FIFO) is configured to hold pointers to portions of the buffer having data therein. | 04-04-2013 |
20130097344 | Circuit with memory and support for host accesses of storage drive memory - A circuit including a first memory and a processor. The processor is configured to receive data from a host device and transfer the data from the circuit to a storage drive. The processor is configured to receive the data back from the storage drive when a second memory in the storage drive does not have available space for the data, and prior to the data being transferred from the second memory to a third memory in the storage drive. The processor is configured to: store the data received from the storage drive in the first memory or transfer the data received from the storage drive back to the host device; and based on a request received from the storage drive, transfer the data from the first memory or the host device back to the storage drive. The request indicates that space is available in the second memory for the data. | 04-18-2013 |
20130132620 | USB REDIRECTION FOR WRITE STREAMS - Methods and systems for conducting a transaction between a USB device and a virtual USB device driver are provided. A client USB manager stores in a buffer one or more data packets associated with the virtual USB device driver. The client USB manager dequeues one of the one or more data packets from the buffer. The client USB manager transmits the dequeued data packet to the USB device for processing. The client USB manager re-fills completed data packets from the buffer and queues the data packets for transmitting to the USB device without waiting for the virtual USB device driver. | 05-23-2013 |
20130219089 | Communication Processing Device that Stores Communication Data in Buffers, Image Forming Apparatus, and Method of Communication Processing - A communication processing device includes a communication data processing circuit that (i) issues an access request for a buffer specified by a descriptor, among a plurality of buffers in a first memory, and (ii) outputs a predetermined switching permission signal at a time when a data access for one of the plurality of buffers is completed. The communication processing device also includes a second memory and a transmission destination switching circuit. The second memory includes a plurality of alternative buffers corresponding to the plurality of buffers. The transmission destination switching circuit switches a transmission destination of the access request from one of the plurality of buffers in the first memory to one of the plurality of alternative buffers in the second memory, based on the switching permission signal. | 08-22-2013 |
20130318260 | DATA TRANSFER DEVICE - A data transfer unit includes: a collection-interval storage unit that stores therein a collection interval set by the host computer; a data collection unit that reads a collection interval stored in the collection-interval storage unit and collects device data at the read collection interval; a transfer-interval storage unit that stores therein a transfer interval set by the host computer, which is equal to or larger than the collection interval; a ring buffer that accumulates and stores therein device data that are collected by the data collection unit and have not been transferred to the host computer by a data transferring unit; and a data transferring unit that reads a transfer interval stored in the transfer-interval storage unit and collectively transfers device data accumulated and stored in the ring buffer to the host computer at the read transfer interval. | 11-28-2013 |
20130346649 | METHOD OF TRANSMITTING DATA AND COMMUNICATION DEVICE - A method of transmitting data is described comprising selecting a transmission mode from at least a first and a second transmission mode, wherein according to the first transmission mode data is transmitted in at least two first time periods using first communication resources wherein the at least two first time periods are separated by a first time interval, wherein according to the second transmission mode data is transmitted in at least two second time periods using second communication resources wherein the at least two second time periods are separated by a second time interval, and wherein the first time interval is longer than the second and the first communication resources allow the transmission of a higher amount of data than the second communication resources; and transmitting data according to the selected transmission mode. | 12-26-2013 |
20140019650 | Multi-Write Bit-Fill FIFO - Various embodiments of the present invention are related to memory buffers, and in particular to a multi-write bit-fill FIFO to which multiple addresses may be written simultaneously and which fills in bit spaces as data blocks are written. | 01-16-2014 |
20140068118 | BIT STREAM PROCESSING DEVICE FOR PIPE-LINED DECODING AND MULTIMEDIA DEVICE INCLUDING THE SAME - A bit stream processing device may include a virtual division memory, a stream shift buffer, a decoder circuit, and a controller. The virtual division memory may be divided into a plurality of group memory regions configured to store a plurality of stream groups in the respective group memory regions and to output a memory bit stream. The stream groups may be included in an input bit stream. The stream shift buffer is configured to receive and store the memory bit stream and output a buffer bit stream. The decoder circuit is configured to perform a decoding operation on the buffer bit stream from the stream shift buffer. The controller is configured to control operations of the virtual division memory, the stream shift buffer, and the decoder circuit. | 03-06-2014 |
20140075059 | Waveform Accumulation and Storage in Alternating Memory Banks - System and method for hardware implemented accumulation of waveform data. A digitizer is provided that includes first and second memory banks. A first waveform is stored in chunks alternating between successive buffers in the first and second memory banks, and concurrently, the first and second chunks may be transferred to first and second FIFOs, respectively, which may be accumulated with respective first and second chunks of a second waveform into the first and second memory banks. This process may be repeated for respective successive pairs of the first and second waveforms, where the first and second memory banks and FIFOs are used in an alternating manner, and further, to accumulate additional waveforms, where previously stored (and accumulated) waveform data are accumulated chunkwise with successive additional waveform data, and where at least some of the accumulation is performed concurrently with waveform data transfers to and from the memory banks and FIFOs. | 03-13-2014 |
20140095744 | DATA TRANSFER DEVICE AND METHOD - A transfer control circuit stores data in a FIFO memory, outputs data in the FIFO memory in response to a data request signal, and outputs a state signal in accordance with an amount of stored data in the FIFO memory. An output data generating unit outputs image data having a horizontal image size in accordance with a horizontal count value and a horizontal synchronizing signal, and thereafter, outputs blank data. When the state signal indicates that the FIFO memory is in an "EMPTY" or "MODERATE" storage state, a blank control unit outputs a blank addition signal until the FIFO memory changes to a "FULL" storage state. | 04-03-2014 |
20140095745 | BUFFER DEVICE, BUFFER CONTROL DEVICE, AND BUFFER CONTROL METHOD - A buffer device includes a plurality of input ports, a plurality of first in, first out (FIFO) buffers on which information input from the plurality of input ports is written, respectively, and at least one output port, an input switch unit that writes input information on a write target buffer selected from a predetermined buffer group based on information indicating a write position in the buffers of the buffer group and switches the write target buffer to another buffer in the buffer group according to the information indicating the write position, when the information is input from the input ports assigned to the predetermined buffer group among the plurality of buffers, and an output controller that reads information from a read target buffer selected based on information indicating a read position in the buffers of the buffer group and outputs the read information to the output port. | 04-03-2014 |
20140115201 | Signal Order-Preserving Method and Apparatus - Embodiments of the present invention relate to a signal order-preserving method and apparatus. When data of a request signal that comes from a corresponding first upstream device is written into a first first input first output (FIFO) memory, invalid data is written into a second FIFO memory corresponding to a second upstream device in a same clock cycle; and the data of the request signal is read from the first FIFO memory, the invalid data is read from the second FIFO memory, the invalid data is discarded, and the data of the request signal is conveyed to a downstream device. Through the signal order-preserving method and apparatus in the embodiments of the present invention, the coupling extent between devices on which there is an order-preserving requirement is reduced while signal order-preserving is achieved. | 04-24-2014 |
20140129745 | ASYMMETRIC FIFO MEMORY - A First-in First-out (FIFO) memory comprising a latch array and a RAM array, the latch array being assigned higher priority to receive data than the RAM array. Incoming data are pushed into the latch array while the latch array has vacancies. Upon the latch array becoming empty, incoming data are pushed into the RAM array during a spill-over period. The RAM array may comprise two spill regions with only one active to receive data at a spill-over period. The allocation of data among the latch array and the spill regions of the RAM array can be transparent to external logic. | 05-08-2014 |
20140195702 | METHOD OF OPERATING DATA COMPRESSION CIRCUIT AND DEVICES TO PERFORM THE SAME - A method of operating a data compression circuit includes receiving and storing a plurality of data blocks until a cache is full and writing the data blocks that have been stored in the cache to a buffer memory when the cache is full. The method also includes performing forced literal/literal encoding on each of the data blocks regardless of repetitiveness of each data block when the cache is full. | 07-10-2014 |
20140237145 | DUAL-BUFFER SERIALIZATION AND CONSUMPTION OF VARIABLE-LENGTH DATA RECORDS PRODUCED BY MULTIPLE PARALLEL THREADS - Under control of the consumer, it is determined that a first buffer is empty and that a second buffer contains data; a first compare-double-and-swap operation within a spin loop is executed to swap a double pointer of the first buffer and a double pointer of the second buffer, wherein responsive to the executing of the operation the consumer drains the second buffer, and wherein the executing of the operation directs the at least one producer to fill the first buffer; and it is determined that the first buffer and the second buffer are empty and the consumer waits for a notification from one of i) the at least one producer and ii) a timer. Under control of the at least one producer, a second compare-double-and-swap operation within a spin loop is executed to atomically locate the first buffer and update the double pointer of the first buffer. | 08-21-2014 |
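The dual-buffer swap above can be sketched as follows (a toy single-process model with illustrative names; an ordinary lock stands in for the compare-double-and-swap instruction, and list references stand in for the double pointers):

```python
import threading

class DualBuffer:
    """Toy dual-buffer serializer: producers append to the fill buffer,
    the consumer swaps the two buffer references and drains the other."""
    def __init__(self):
        self.fill, self.drain = [], []
        self._lock = threading.Lock()

    def produce(self, record):
        with self._lock:                 # producer locates the fill buffer
            self.fill.append(record)

    def consume(self):
        with self._lock:                 # consumer swaps the two "pointers"
            if not self.drain and self.fill:
                self.fill, self.drain = self.drain, self.fill
            drained, self.drain = self.drain, []
        return drained
```

In the patented design the swap is a lock-free compare-double-and-swap inside a spin loop, so producers and the consumer contend only on that one atomic operation rather than on a shared lock.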
20140281059 | ARITHMETIC PROCESSING APPARATUS AND CONTROL METHOD OF ARITHMETIC PROCESSING APPARATUS - In a multicore system in which a plurality of CPUs each including a cache memory share one main memory, a write buffer having a plurality of stages of buffers, each holding data to be written to the main memory and the address of the write destination, is provided between the cache memory and the main memory. At the time of a write from the cache memory to the write buffer, the address of the write destination is compared with the addresses stored in the buffers; when any buffer holds a matching address, the data is overwritten into that buffer and the buffer is logically moved to the last stage. | 09-18-2014 |
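The coalescing behavior above maps naturally onto an ordered mapping from address to data (a sketch with illustrative names; Python's `OrderedDict` models the multi-stage hardware buffer):

```python
from collections import OrderedDict

class WriteBuffer:
    """Toy coalescing write buffer: a write to an address already
    buffered overwrites that entry and moves it to the last stage."""
    def __init__(self, stages):
        self.stages = stages
        self.entries = OrderedDict()     # address -> data, oldest first

    def write(self, addr, data):
        if addr in self.entries:
            self.entries[addr] = data            # overwrite in place
            self.entries.move_to_end(addr)       # logically move to last stage
        else:
            if len(self.entries) == self.stages:
                # Evict the oldest entry (written to main memory
                # in the real design; simply discarded in this model).
                self.entries.popitem(last=False)
            self.entries[addr] = data
```

Moving a rewritten entry to the last stage delays its write-back, giving further writes to the same address more chance to coalesce before the entry drains to main memory.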
20140281060 | LOW-CONTENTION UPDATE BUFFER QUEUING FOR LARGE SYSTEMS - A method for queuing thread update buffers to enhance garbage collection. The method includes providing a global update buffer queue and a global array with slots for storing pointers to filled update buffers. The method includes, with an application thread, writing to the update buffer and, when it is filled, attempting to write the pointer for the update buffer to the global array. The array slot may be selected randomly or by use of a hash function. When the writing fails due to a non-null slot, the method includes operating the application thread to add the filled update buffer to the global update buffer queue. The method includes, with a garbage collector thread, inspecting the global array for non-null entries and, upon locating a pointer, claiming the filled update buffer. The method includes using the garbage collector thread to claim and process buffers added to the global update buffer queue. | 09-18-2014 |
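A single-threaded sketch of this publish/collect protocol (illustrative names and sizes; in the real design the slot write is an atomic compare-and-swap and the two roles run on separate threads):

```python
import random
from collections import deque

SLOTS = 8
global_array = [None] * SLOTS           # slots for filled-buffer pointers
overflow_queue = deque()                # fallback when a slot is occupied

def publish(filled_buffer):
    """Application thread: try one randomly chosen slot; on a non-null
    slot, fall back to the global update buffer queue."""
    i = random.randrange(SLOTS)
    if global_array[i] is None:
        global_array[i] = filled_buffer  # an atomic CAS in the real design
    else:
        overflow_queue.append(filled_buffer)

def collect():
    """Garbage-collector thread: claim buffers from non-null slots,
    then drain the overflow queue."""
    claimed = [b for b in global_array if b is not None]
    global_array[:] = [None] * SLOTS
    claimed.extend(overflow_queue)
    overflow_queue.clear()
    return claimed
```

Scattering publications across random slots keeps most threads off any shared queue lock; the queue absorbs only the collisions, which is the low-contention property the title refers to.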
20140359175 | RE-TIMING SAMPLED DATA - For re-timing sampled data, input data samples at an input data rate are stored in a FIFO buffer and output at an output data rate according to an output clock that is locked to the input data rate in dependence on a loop-filtered measure of the fill level of that buffer. The frequency of the output clock is additionally controlled by an estimate of the input data rate. | 12-04-2014 |
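One control step of such a loop can be sketched as a low-pass filter on the fill-level error, nudging the output rate around the input-rate estimate (illustrative names and gains, not the patent's implementation):

```python
def retime_step(fill_level, target, filtered, rate_estimate,
                alpha=0.1, gain=0.01):
    """One re-timing control step: low-pass filter the buffer fill-level
    error and derive the output rate from the input-rate estimate plus
    the filtered correction."""
    filtered = (1 - alpha) * filtered + alpha * (fill_level - target)
    output_rate = rate_estimate + gain * filtered
    return output_rate, filtered
```

Feeding in the input-rate estimate directly gives the loop a good operating point immediately, so the fill-level term only has to correct the small residual drift.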
20150019766 | BUFFER MEMORY RESERVATION TECHNIQUES FOR USE WITH A NAND FLASH MEMORY - This disclosure provides examples of circuits, devices, systems, and methods for managing a buffer memory. Regions of the buffer memory are dynamically reserved, responsive to a read/write request. Where the read/write request includes a plurality of data transfer requests, following completion of a data transfer request, the reserved buffer space may be recycled for use in a further data transfer request or for other purposes. During fulfillment of a read request, a buffer region is reserved from a larger buffer pool for a time period significantly smaller than the time required to execute a sense operation associated with the read request. The reserved buffer region may be reused for unrelated processes during execution of the sense operation. | 01-15-2015 |
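The reserve/recycle cycle described above can be modeled with a simple free list (illustrative names; the patent concerns NAND controller hardware, where the long sense operation is what makes brief reservations worthwhile):

```python
from collections import deque

class BufferPool:
    """Toy buffer pool: regions are reserved briefly per data transfer
    and recycled for unrelated use in between."""
    def __init__(self, n):
        self.free = deque(range(n))     # indices of free buffer regions

    def reserve(self):
        return self.free.popleft()      # reserve a region from the pool

    def recycle(self, region):
        self.free.append(region)        # return it for reuse

pool = BufferPool(4)
region = pool.reserve()      # held only for the data transfer itself
# ... transfer one chunk of the read request through `region` ...
pool.recycle(region)         # reusable while the slow sense op continues
```

Because the reservation lasts far less time than the sense operation, the same physical region can serve many concurrent requests instead of being pinned for the full read.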
20150019767 | SEMICONDUCTOR MEMORY DEVICE HAVING DATA COMPRESSION TEST CIRCUIT - A semiconductor memory device includes a data transmission unit configured to transmit first input data to only a first global line driver or to the first global line driver and a second global line driver in response to a test signal, and a transmission element configured to transmit second input data only to the second global line driver in response to the test signal. | 01-15-2015 |
20160034188 | INPUT/OUTPUT INTERCEPTOR WITH INTELLIGENT FLUSH CONTROL LOGIC - Inventive aspects include an input/output (I/O) interceptor logic section having an I/O interface coupled with a storage stack. The I/O interface can intercept write I/Os, read I/Os, and flush requests from an application. A temporary write holding buffer can store the write I/Os. A re-order logic section can change an order of the write I/Os, and combine the re-ordered write I/Os into a combined write I/O. An intelligent flush control logic section can receive the flush requests from the I/O interface, communicate write I/O completion of the write I/Os to the application without the write I/Os having been written to a non-volatile storage device, and cause the combined write I/O to be written to the non-volatile storage device responsive to at least one of a predefined Nth flush request from among the plurality of flush requests, a threshold amount of data being accumulated, or an expiration of a predefined time period. | 02-04-2016 |
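The three flush triggers above (every Nth flush request, a byte threshold, a timeout) can be sketched in one small class (illustrative names and policy parameters, not the patent's implementation; re-ordering is omitted):

```python
import time

class FlushController:
    """Toy flush policy: acknowledge writes immediately, then emit the
    combined write on every Nth flush request, a byte threshold, or a
    timeout, whichever comes first."""
    def __init__(self, every_nth, max_bytes, max_age_s):
        self.every_nth, self.max_bytes, self.max_age_s = every_nth, max_bytes, max_age_s
        self.buffer, self.flush_count = [], 0
        self.first_write = None

    def write(self, data):
        if not self.buffer:
            self.first_write = time.monotonic()
        self.buffer.append(data)
        return "completed"               # acked before reaching storage

    def flush_request(self):
        """Return the combined write to persist, or None to defer."""
        self.flush_count += 1
        aged = (self.first_write is not None
                and time.monotonic() - self.first_write >= self.max_age_s)
        if (self.flush_count % self.every_nth == 0
                or sum(len(d) for d in self.buffer) >= self.max_bytes
                or aged):
            combined, self.buffer = b"".join(self.buffer), []
            return combined              # single combined write to storage
        return None
```

Acknowledging completion before persistence is what makes the scheme fast, and also why the abstract pairs it with explicit flush-trigger bounds: the window of unpersisted data is capped by N, the byte threshold, and the timeout.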
20160077798 | IN-MEMORY BUFFER SERVICE - A capture service running on an application server receives events from a client application running on an application server to be stored in a data store and stores the events in an in-memory bounded buffer on the application server, the in-memory bounded buffer comprising a plurality of single-threaded segments, the capture service to write events to each segment in parallel. The in-memory bounded buffer provides a notification to a buffer flush regulator when the number of events stored in the in-memory bounded buffer reaches a predefined limit. The in-memory bounded buffer receives, from a consumer executor service, a request to flush the events in the in-memory bounded buffer. The consumer executor service consumes the events in the in-memory bounded buffer using a dynamically sized thread pool of consumer threads to read the segments of the bounded buffer in parallel, wherein consuming the events comprises writing the events directly to the data store. | 03-17-2016 |
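A single-threaded sketch of the segmented bounded buffer (illustrative names; in the described service, capture and consumption run on separate thread pools, one per segment):

```python
import queue

class BoundedBuffer:
    """Toy in-memory bounded buffer: fixed single-threaded segments,
    with a notification once the stored-event count reaches a limit."""
    def __init__(self, segments, limit):
        self.segments = [queue.Queue() for _ in range(segments)]
        self.limit, self.count = limit, 0

    def capture(self, event, segment):
        """Store an event; True means 'notify the buffer flush regulator'."""
        self.segments[segment].put(event)
        self.count += 1
        return self.count >= self.limit

    def flush(self):
        """Drain every segment (done by parallel consumer threads in the
        real design) and hand the events to the data store writer."""
        events = []
        for seg in self.segments:
            while not seg.empty():
                events.append(seg.get())
        self.count = 0
        return events
```

Keeping each segment single-threaded removes per-event locking on the hot capture path; parallelism comes from the number of segments rather than from contended shared state.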
20160117148 | DATA TRANSMITTER APPARATUS AND METHOD FOR DATA COMMUNICATION USING THE SAME - Disclosed are a data transmission apparatus and a data communication method using the same. The data transmission apparatus includes a buffer manager configured to generate a transmission buffer pool including a plurality of buffers each having a size corresponding to a size of a transmission packet and manage buffer position information and buffer use status information of the plurality of buffers; a data processor configured to divide data into data blocks each having a predetermined size; and a data transmitter configured to convert each of the data blocks received from the data processor into a plurality of transmission packets and request the buffer manager to allocate a number of buffers corresponding to the number of transmission packets. | 04-28-2016 |
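The division-and-allocation flow above can be sketched as follows (illustrative names and a toy 4-byte packet size; the real apparatus also tracks buffer position information for transmission):

```python
PACKET_SIZE = 4  # assumed packet size for this sketch

class BufferManager:
    """Toy buffer manager: a pool of packet-sized buffers with
    use-status bookkeeping, allocated in groups on request."""
    def __init__(self, pool_size):
        self.in_use = [False] * pool_size   # buffer use status information

    def allocate(self, n):
        """Claim n free buffers; return their indices, or None if the
        pool cannot satisfy the request."""
        free = [i for i, used in enumerate(self.in_use) if not used][:n]
        if len(free) < n:
            return None
        for i in free:
            self.in_use[i] = True
        return free

def packetize(block):
    """Split one data block into packet-sized transmission packets."""
    return [block[i:i + PACKET_SIZE] for i in range(0, len(block), PACKET_SIZE)]
```

The transmitter would call `packetize` on each block and then request `allocate(len(packets))`, matching the abstract's one-buffer-per-packet allocation.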