Class / Patent application number | Description | Number of patent applications / Date published |
711150000 | Simultaneous access regulation | 42 |
20090043971 | DATA INTEGRITY FOR DATA STORAGE DEVICES SHARED BY MULTIPLE HOSTS VIA A NETWORK - Access by multiple hosts, such as computers, to a data storage device by way of a network while maintaining data integrity. In one embodiment, a method for accessing the storage device includes acquiring a resource “lock” that provides exclusive access to one of the hosts at a time. In another embodiment, the file systems of a first and second host provide file system attributes stored in a storage device to provide mutually exclusive access for each host to free blocks of the device. In another embodiment, a networked system contains a first host having exclusive direct access to a storage device over a digital network. A second host requiring access to the storage device communicates with the first host by way of the digital network. File access requests generated by the second host are transferred by a redirection filter driver within the second host to the first host. | 02-12-2009 |
20090313441 | ACCESS CONTROL DEVICE, ACCESS CONTROL INTEGRATED CIRCUIT, AND ACCESS CONTROL METHOD - In a device in which a master that requires access at a predetermined rate and a processor that requires responsiveness to an access request both access a shared memory, responsiveness to the processor's access request is improved over conventional technologies while the master's access at the predetermined rate is guaranteed. When the master has a resource available for accessing the shared memory, the master accesses the shared memory at the predetermined rate or above. When access is executed at the predetermined rate or above, the processor accesses the shared memory by using a resource that was originally allocated to the master. | 12-17-2009 |
20100023706 | COEXISTENCE OF ADVANCED HARDWARE SYNCHRONIZATION AND GLOBAL LOCKS - A computer-implemented method and article of manufacture are disclosed for enabling computer programs utilizing hardware transactional memory to safely interact with code utilizing traditional locks. A thread executing on a processor of a plurality of processors in a shared-memory system may initiate transactional execution of a section of code, which includes a plurality of access operations to the shared memory, including one or more accesses to locations protected by a lock. Before executing any operation that accesses a location protected by the lock, the thread reads the value of the lock as part of the transaction, and proceeds only if the lock is not held. If the lock is acquired by another thread during transactional execution, the processor detects this acquisition, aborts the transaction, and attempts to re-execute it. | 01-28-2010 |
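The lock-checking pattern in the abstract above can be sketched in software. This is a minimal emulation under stated assumptions: `GlobalLock`, `run_transaction`, and the retry limit are illustrative names and parameters, not taken from the application, and a real implementation would rely on hardware conflict detection rather than polling.

```python
import threading

class GlobalLock:
    """A traditional global lock whose state a transaction can read."""
    def __init__(self):
        self._held = False
        self._mutex = threading.Lock()

    def acquire(self):
        with self._mutex:
            if self._held:
                return False
            self._held = True
            return True

    def release(self):
        with self._mutex:
            self._held = False

    def is_held(self):
        with self._mutex:
            return self._held

def run_transaction(lock, body, max_retries=10):
    """Read the lock as part of the transaction and proceed only if it
    is not held; a held lock forces an abort-and-retry."""
    for _ in range(max_retries):
        if lock.is_held():      # lock value read inside the transaction
            continue            # lock held: abort and re-execute
        return body()           # lock free: transaction commits
    raise RuntimeError("transaction could not complete")
```

A transaction therefore succeeds immediately when the lock is free, and keeps aborting while another thread holds it.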
20100146219 | MEMORY ACCESS DEVICE INCLUDING MULTIPLE PROCESSORS - Provided is a memory access device including multiple processors accessing a specific memory. The memory access device includes first and second processors, first and second transaction controllers, a memory access switch, and a memory controller. The first and second transaction controllers are connected respectively to the first and second processors. The memory access switch is connected to the first and second transaction controllers. The memory controller is connected to the memory access switch to control a memory device. Herein, if the first and second processors simultaneously access the memory device, the second processor stores an address or data in the second transaction controller while the first processor is accessing the memory device. Accordingly, the memory access device enables multiple processors, which are to simultaneously access a specific memory, to perform other operations during the standby time taken to access the specific memory. | 06-10-2010 |
20100161913 | PORTABLE ELECTRONIC DEVICE - In an IC card, an operating system manages the access order of each channel for each file using a channel management table. An application controls access to each file based on the access order managed in the channel management table. The channel management table stores, as an access order, an order that each logical channel has set a file in a current state. If current setting by a specific logical channel is canceled, a table updating function deletes the logical channel from the channel management table and moves up the access order of each logical channel next to the deleted logical channel. | 06-24-2010 |
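The channel-management-table update described above (delete the canceled logical channel, then move up the access order of every later channel) can be sketched as follows; the dict-based table mapping channel to 1-based access order is an assumed representation, not the application's actual layout.

```python
def cancel_current(table, channel):
    """table maps logical channel -> access order (1-based).
    Canceling a channel's current setting deletes its entry and moves
    up the access order of each logical channel behind it."""
    gone = table.pop(channel, None)
    if gone is not None:
        for ch, order in table.items():
            if order > gone:
                table[ch] = order - 1   # move up one position
    return table
```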
20100325371 | SYSTEMS AND METHODS FOR WEB LOGGING OF TRACE DATA IN A MULTI-CORE SYSTEM - A method and system for generating a web log that includes transaction entries from transaction queues of one or more cores of a multi-core system. A transaction queue is maintained for each core so that either a packet engine or web logging client executing on the core can write transaction entries to the transaction queue. In some embodiments, a timestamp value obtained from a synchronized timestamp variable can be assigned to the transaction entries. When a new transaction entry is added to the transaction queue, the earliest transaction entry is removed from the transaction queue and added to a heap. Periodically the earliest entry in the heap is removed from the heap and written to a web log. When an entry is removed from the heap, the earliest entry in a transaction queue corresponding to the removed entry is removed from the transaction queue and added to the heap. | 12-23-2010 |
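The heap-based merging of per-core transaction queues described above is, in essence, a k-way merge ordered by timestamp. A minimal sketch (queue entries reduced to bare timestamps; real entries would carry log data alongside the timestamp):

```python
import heapq

def merge_transaction_queues(queues):
    """k-way merge mirroring the abstract's scheme: the earliest entry
    of each queue sits in a heap; removing an entry from the heap pulls
    the next entry from the same queue into the heap."""
    heap = []
    for qi, q in enumerate(queues):
        if q:
            heapq.heappush(heap, (q[0], qi, 0))
    log = []
    while heap:
        ts, qi, idx = heapq.heappop(heap)   # earliest entry overall
        log.append(ts)                      # write it to the web log
        if idx + 1 < len(queues[qi]):       # refill from the same queue
            heapq.heappush(heap, (queues[qi][idx + 1], qi, idx + 1))
    return log
```

Because each queue is already in timestamp order, the heap only ever needs one entry per queue to produce a globally ordered log.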
20110113203 | Reservation Required Transactions - A method for performing a transaction including a transaction head and a transaction tail includes executing the transaction head, including executing at least one memory reserve instruction to reserve a transactional memory location that is accessed in the transaction, and executing the transaction tail, wherein the transaction cannot be aborted due to a data race on that transactional memory location while executing the transaction tail, and wherein data of memory write operations to the transactional memory location is committed without being buffered. | 05-12-2011 |
20110119454 | DISPLAY SYSTEM FOR SIMULTANEOUS DISPLAYING OF WINDOWS GENERATED BY MULTIPLE WINDOW SYSTEMS BELONGING TO THE SAME COMPUTER PLATFORM - A display system for simultaneous displaying of windows generated by a plurality of window systems belonging to the same desktop or laptop platform includes a master computer device with its display device and at least one slave computer device, a shared memory, an input means and an output means, as described herein. Each of the master computer device and the at least one slave computer device has a corresponding window system. The shared memory is coupled to the computer devices and is accessible by the master computer device and the at least one slave computer device. The input means receives multiple windows simultaneously generated by the window systems of the master computer device and the at least one slave computer device. The output means generates the multiple windows for the display device of the master computer device. In support of these operations, the master computer device and the at least one slave computer device simultaneously read and write window data stored in the shared memory. | 05-19-2011 |
20110138134 | Software Transactional Memory for Dynamically Sizable Shared Data Structures - We propose a new form of software transactional memory (STM) designed to support dynamic-sized data structures, and we describe a novel non-blocking implementation. The non-blocking property we consider is obstruction-freedom. Obstruction-freedom is weaker than lock-freedom; as a result, it admits substantially simpler and more efficient implementations. An interesting feature of our obstruction-free STM implementation is its ability to use modular contention managers to ensure progress in practice. | 06-09-2011 |
20110145515 | Method for modifying a shared data queue and processor configured to implement same - According to one exemplary embodiment, a method for modifying a shared data queue accessible by a plurality of processors comprises receiving an instruction from one of the processors to produce a modification to the shared data queue, running a microcode program in response to the instruction, to attempt to produce the modification, and generating a final datum to signify whether the modification to the shared data queue has occurred. In one embodiment, the modification comprises enqueuing data, and running the microcode program includes checking writability of a write pointer of the shared data queue, checking writability of a data field designated by the write pointer, locking the write pointer and checking the old value of its lock bit with atomicity, writing the data to the data field and incrementing the write pointer by the size of the data, and unlocking the write pointer. | 06-16-2011 |
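The enqueue microcode described above (lock the write pointer, check writability, write the data, advance the pointer, unlock, and return a final datum signifying success) can be sketched behaviorally. `SharedQueue` and its fields are illustrative; a `threading.Lock` stands in for the abstract's atomic lock-bit check.

```python
import threading

class SharedQueue:
    """Toy model of the enqueue path of a shared data queue."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.write_ptr = 0
        self._ptr_lock = threading.Lock()   # stands in for the lock bit

    def enqueue(self, datum):
        # "lock the write pointer and check the old value of its lock bit
        # with atomicity" -> non-blocking acquire reports the old state
        if not self._ptr_lock.acquire(blocking=False):
            return False                    # final datum: modification failed
        try:
            if self.write_ptr >= len(self.buf):
                return False                # designated data field not writable
            self.buf[self.write_ptr] = datum   # write data to the data field
            self.write_ptr += 1                # increment by the data size
            return True                     # final datum: modification occurred
        finally:
            self._ptr_lock.release()        # unlock the write pointer
```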
20110161603 | MEMORY TRANSACTION GROUPING - Various technologies and techniques are described for providing a transaction grouping feature for use in programs operating under a transactional memory system. The transaction grouping feature is operable to allow transaction groups to be created that contain related transactions. The transaction groups are used to enhance performance and/or operation of the programs. Different locking and versioning mechanisms can be used with different transaction groups. When running transactions, a hardware transactional memory execution mechanism can be used for one transaction group while a software transactional memory execution mechanism is used for another transaction group. | 06-30-2011 |
20110225375 | Concurrent Execution of Critical Sections by Eliding Ownership of Locks - One embodiment of the present invention provides a system that facilitates avoiding locks by speculatively executing critical sections of code. During operation, the system allows a process to speculatively execute a critical section of code within a program without first acquiring a lock associated with the critical section. If the process subsequently completes the critical section without encountering an interfering data access from another process, the system commits changes made during the speculative execution, and resumes normal non-speculative execution of the program past the critical section. Otherwise, if an interfering data access from another process is encountered during execution of the critical section, the system discards changes made during the speculative execution, and attempts to re-execute the critical section. | 09-15-2011 |
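The speculate-commit-or-retry flow above can be sketched in software. This is a minimal emulation under stated assumptions: a version counter stands in for the hardware's interfering-access detection, and `ElidedLock` and the attempt limit are illustrative, not from the patent.

```python
import threading

class ElidedLock:
    """Speculative critical-section execution without first taking the lock."""
    def __init__(self):
        self.version = 0            # bumped by any interfering writer
        self.lock = threading.Lock()

    def critical(self, body, max_attempts=5):
        for _ in range(max_attempts):
            start = self.version    # snapshot at speculation start
            result = body()         # execute without acquiring the lock
            if self.version == start:
                return result       # no interference: commit and move on
            # interference detected: discard and re-execute
        with self.lock:             # repeated conflicts: take the real lock
            return body()
```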
20110246727 | System and Method for Tracking References to Shared Objects Using Byte-Addressable Per-Thread Reference Counters - The system described herein may track references to a shared object by concurrently executing threads using a reference tracking data structure that includes an owner field and an array of byte-addressable per-thread entries, each including a per-thread reference counter and a per-thread counter lock. Slotted threads assigned to a given array entry may increment or decrement the per-thread reference counter in that entry in response to referencing or dereferencing the shared object. Unslotted threads may increment or decrement a shared unslotted reference counter. A thread may update the data structure and/or examine it to determine whether the number of references to the shared object is zero or non-zero using a blocking-optimistic or a non-blocking mechanism. A checking thread may acquire ownership of the data structure, obtain an instantaneous snapshot of all counters, and return a value indicating whether the number of references to the shared object is zero or non-zero. | 10-06-2011 |
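The counter layout described above can be sketched as follows. This is a simplified model: slotted threads touch only their own array entry (so no per-entry lock contention is modeled here), unslotted threads share one locked counter, and a checker sums a snapshot; names are illustrative.

```python
import threading

class RefTracker:
    """Per-thread reference counters plus a shared unslotted counter."""
    def __init__(self, slots):
        self.counters = [0] * slots       # one entry per slotted thread
        self.unslotted = 0                # shared unslotted reference counter
        self._unslotted_lock = threading.Lock()

    def ref(self, slot=None):
        if slot is None:                  # unslotted thread
            with self._unslotted_lock:
                self.unslotted += 1
        else:                             # slotted: only this thread's entry
            self.counters[slot] += 1

    def unref(self, slot=None):
        if slot is None:
            with self._unslotted_lock:
                self.unslotted -= 1
        else:
            self.counters[slot] -= 1

    def is_zero(self):
        """Checker: snapshot all counters and report zero/non-zero."""
        return sum(self.counters) + self.unslotted == 0
```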
20110252204 | SHARED SINGLE ACCESS MEMORY WITH MANAGEMENT OF MULTIPLE PARALLEL REQUESTS - A memory is used by concurrent threads in a multithreaded processor. Any addressable storage location is accessible by any of the concurrent threads, but only one location at a time is accessible. The memory is coupled to parallel processing engines that generate a group of parallel memory access requests, each specifying a target address that might be the same or different for different requests. Serialization logic selects one of the target addresses and determines which of the requests specify the selected target address. All such requests are allowed to proceed in parallel, while other requests are deferred. Deferred requests may be regenerated and processed through the serialization logic so that a group of requests can be satisfied by accessing each different target address in the group exactly once. | 10-13-2011 |
20120059997 | APPARATUS AND METHOD FOR DETECTING DATA RACE - An apparatus and method for detecting a data race of a multithread system is provided. A thread may be divided into an open sub region or a closed sub region according to a vector clock and an execution state. In order to detect a data race before the execution is terminated, when an open sub region is converted to a closed sub region, a memory access event corresponding to the closed sub region is investigated and a memory access event having no parallel relation with an open sub region is deleted among memory access events having been subject to the investigation. | 03-08-2012 |
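The core test underlying vector-clock race detection (the abstract's sub-region bookkeeping builds on it) can be sketched as follows. This shows only the basic happened-before check on two accesses to one location, not the patent's open/closed sub-region machinery; the tuple encoding is an assumption.

```python
def happens_before(vc_a, vc_b):
    """vc_a happens before vc_b iff it is <= componentwise and differs."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def is_race(access_a, access_b):
    """Each access is (vector_clock, is_write). Two accesses to the same
    location race iff they are concurrent (ordered in neither direction)
    and at least one is a write."""
    (vc_a, w_a), (vc_b, w_b) = access_a, access_b
    concurrent = (not happens_before(vc_a, vc_b)
                  and not happens_before(vc_b, vc_a))
    return concurrent and (w_a or w_b)
```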
20120151154 | LATENCY MANAGEMENT SYSTEM AND METHOD FOR MULTIPROCESSOR SYSTEM - A latency management apparatus and method are provided. A latency management apparatus for a multiprocessor system having a plurality of processors and a shared memory, in which the shared memory and each of the processors are configured to generate a delayed signal, includes a delayed signal detector configured to detect the generated delayed signal, and one or more latency managers configured to manage an operation latency of any one of the processors upon detection of the delayed signal. | 06-14-2012 |
20120191922 | OBJECT SYNCHRONIZATION IN SHARED OBJECT SPACE - A shared object space in a computer system provides synchronized access to data objects accessible to a plurality of concurrently running applications in the computer system. The shared object space is allocated a portion of memory of the computer system and concurrently running applications are able to connect to the shared object space. The shared object space restricts simultaneous access to data objects by the concurrently running applications by associating locks with the data objects. | 07-26-2012 |
20120272013 | DATA ACCESS SYSTEM WITH AT LEAST MULTIPLE CONFIGURABLE CHIP SELECT SIGNALS TRANSMITTED TO DIFFERENT MEMORY RANKS AND RELATED DATA ACCESS METHOD THEREOF - A data access system includes a memory controller, a first memory rank, a second memory rank, a first chip select bus coupled between the memory controller and the first memory rank, a second chip select bus coupled between the memory controller and the second memory rank, a group of shared buses shared by the first and second memory ranks and coupled between the memory controller and each of the first and second memory ranks, a first group of dedicated buses dedicated to the first memory rank and coupled between the memory controller and the first memory rank, and a second group of dedicated buses dedicated to the second memory rank and coupled between the memory controller and the second memory rank. | 10-25-2012 |
20120297149 | METHOD AND DEVICE FOR MULTITHREAD TO ACCESS MULTIPLE COPIES - A method and a device for multithreaded access to multiple copies. The method includes: when multiple threads of a process are distributed to different nodes, creating a thread page directory table whose content is the same as that of a process page directory table of the process, where each thread page directory table includes a special entry which points to specific data and a common entry other than the special entry, each thread corresponds to a thread page directory table, and the specific data is data with multiple copies at different nodes; and when each thread is scheduled and the special entry in the thread page directory table of that thread does not point to the specific data stored in the node where the thread is located, modifying, based on a physical address of the specific data, the special entry to point to the specific data. | 11-22-2012 |
20130097391 | Concurrent Execution of Critical Sections by Eliding Ownership of Locks - Critical sections of multi-threaded programs, normally protected by locks providing access by only one thread, are speculatively executed concurrently by multiple threads with elision of the lock acquisition and release. Upon completion of the speculative execution without actual conflict, as may be identified using standard cache protocols, the speculative execution is committed; otherwise the speculative execution is squashed. Speculative execution with elision of the lock acquisition allows a greater degree of parallel execution in multi-threaded programs with aggressive lock usage. | 04-18-2013 |
20130151794 | MEMORY CONTROLLER AND MEMORY CONTROL METHOD - Provided is a memory controller that manages memory access requests between a processor and a memory. In response to receiving two or more memory access requests for the same area of memory, the memory controller is configured to stall and sequentially process the memory access requests. | 06-13-2013 |
20130173866 | Optimized Approach to Parallelize Writing to a Shared Memory Resource - Reducing contention between processes or tasks that are trying to access shared resources is described herein. According to embodiments of the invention, a method of writing a set of data associated with a task to a memory resource is provided. The method includes calculating the amount of memory required to write the data to the memory resource and updating an expected end marker to reflect the amount of memory required to write the data to the memory resource. A flag is then set to an incomplete state, and the data is written to the memory resource. The flag can be set to a complete state and an end marker is updated. The end marker indicates the end of the data stored in the memory resource. | 07-04-2013 |
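The reserve-then-write scheme above can be sketched behaviorally: a writer claims a region by advancing the expected-end marker under a short lock, writes outside the lock (so writers proceed in parallel), then flips its flag to complete and updates the end marker. `SharedLog` and its fields are illustrative names under those assumptions, not the patent's structures.

```python
import threading

class SharedLog:
    """Parallel writers into one shared buffer via reserved regions."""
    def __init__(self, size):
        self.buf = [None] * size
        self.expected_end = 0     # end of all *reserved* data
        self.end = 0              # end of all *completed* data
        self.flags = {}           # start offset -> "incomplete"/"complete"
        self._reserve = threading.Lock()

    def write(self, data):
        with self._reserve:                     # only reservation is serialized
            start = self.expected_end
            self.expected_end += len(data)      # update expected end marker
        self.flags[start] = "incomplete"        # flag set to incomplete state
        self.buf[start:start + len(data)] = data  # write outside the lock
        self.flags[start] = "complete"          # flag set to complete state
        self.end = max(self.end, start + len(data))  # end of stored data
        return start
```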
20130185524 | METHOD AND DEVICE FOR DETECTING A RACE CONDITION - A method for detecting a race condition, comprising storing a seed value to a first global variable D, and detecting a race condition when a second global variable A does not equal a first predefined value V | 07-18-2013 |
20130254495 | MEMORY SYSTEM - A memory system includes a memory controller, and first through fourth memory modules. The first memory module is directly connected to the memory controller through a first memory bus and exchanges first data with the memory controller through the first memory bus. The second memory module is directly connected to the memory controller through a second memory bus and exchanges second data with the memory controller through the second memory bus. The third memory module is connected to the first memory module through a third memory bus and exchanges the first data with the memory controller through the first and third memory buses. The fourth memory module is connected to the second memory module through a fourth memory bus and exchanges the second data with the memory controller through the second and fourth memory buses. | 09-26-2013 |
20130304996 | METHOD AND SYSTEM FOR RUN TIME DETECTION OF SHARED MEMORY DATA ACCESS HAZARDS - A system and method for detecting shared memory hazards are disclosed. The method includes, for a unit of hardware operating on a block of threads, mapping a plurality of shared memory locations assigned to the unit to a tracking table. The tracking table comprises an initialization bit as well as access type information, collectively called the state tracking bits for each shared memory location. The method also includes, for an instruction of a program within a barrier region, identifying a second access to a location in shared memory within a block of threads executed by the hardware unit. The second access is identified based on a status of the state tracking bits. The method also includes determining a hazard based on a first type of access and a second type of access to the shared memory location. Information related to the first access is provided in the table. | 11-14-2013 |
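The state-tracking-bits idea above can be sketched per location: the first access to a shared-memory location records its type (setting the init bit), and a second access of a conflicting type reports the hazard. The dict-based table and the string access types are assumed encodings for illustration.

```python
def check_access(table, addr, access_type):
    """table maps shared-memory address -> last access type ("read"/"write").
    Returns the hazard raised by this access, or None."""
    prev = table.get(addr)
    table[addr] = access_type       # record this access in the table
    if prev is None:
        return None                 # first access: init bit now set
    if prev == "write" and access_type == "read":
        return "read-after-write"
    if prev == "read" and access_type == "write":
        return "write-after-read"
    if prev == "write" and access_type == "write":
        return "write-after-write"
    return None                     # read after read is not a hazard
```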
20140059301 | EXECUTING PARALLEL OPERATIONS TO INCREASE DATA ACCESS PERFORMANCE - Techniques are described for increasing data access performance for a memory device. In various embodiments, a scheduler/controller is configured to manage data as it is read from or written to a memory. Read or write access is increased by partitioning a memory into a group of sub-blocks, associating a parity block with the sub-blocks, and accessing the sub-blocks to read data as needed. Write access is increased by including a latency cache that stores data associated with a read command. Once a read-modify-write command is received, the data stored in the latency cache is used to update the parity block. In a memory without a parity block, write access is increased by adding one or more spare memory blocks to provide additional memory locations for performing write operations to the same memory block in parallel. | 02-27-2014 |
20140075129 | SYSTEMS AND METHODS EXCHANGING DATA BETWEEN PROCESSORS THROUGH CONCURRENT SHARED MEMORY - A method and apparatus for matching parent processor address translations to media processors' address translations and providing concurrent memory access to a plurality of media processors through separate translation table information. In particular, a page directory for a given media application is copied to a media processor's page directory when the media application allocates memory that is to be shared by a media application running on the parent processor and media processors. | 03-13-2014 |
20140089607 | INPUT/OUTPUT TRAFFIC BACKPRESSURE PREDICTION - According to one aspect of the present disclosure, a method and technique for input/output traffic backpressure prediction is disclosed. The method includes: performing a plurality of memory transactions; determining, for each memory transaction, a traffic value corresponding to a time for performing the respective memory transactions; responsive to determining the traffic value for a respective memory transaction, determining a median value based on the determined traffic values; determining whether successive median values are incrementing; and responsive to a quantity of successively incrementing median values exceeding a threshold, indicating a prediction of a backpressure condition. | 03-27-2014 |
20140115278 | MEMORY ARCHITECTURE - According to one example embodiment, an arbiter is disclosed to mediate memory access requests from a plurality of processing elements. If two or more processing elements try to access data within the same word in a single memory bank, the arbiter permits some or all of the processing elements to access the word. If two or more processing elements try to access different data words in the same memory bank, the lowest-ordered processing element is granted access and the others are stalled. | 04-24-2014 |
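The arbitration rule above can be sketched behaviorally: requests for the same word in a bank all proceed together, while requests for different words in the same bank are won by the lowest-ordered processing element. The bank-mapping function and `words_per_bank` are assumptions for illustration.

```python
def arbitrate(requests, words_per_bank=256):
    """requests: word addresses indexed by processing element.
    Returns (granted, stalled) lists of processing-element indices."""
    granted, stalled = [], []
    winner_word = {}    # bank -> word address granted this cycle
    for pe, addr in enumerate(requests):
        bank = addr // words_per_bank
        if bank not in winner_word:
            winner_word[bank] = addr        # lowest-ordered PE claims the bank
            granted.append(pe)
        elif winner_word[bank] == addr:
            granted.append(pe)              # same word: access in parallel
        else:
            stalled.append(pe)              # different word, same bank: stall
    return granted, stalled
```

Stalled elements would re-issue their requests on the next cycle until every distinct target word in the group has been accessed.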
20140164718 | METHODS AND APPARATUS FOR SHARING MEMORY BETWEEN MULTIPLE PROCESSES OF A VIRTUAL MACHINE - Methods and apparatus for sharing memory between multiple processes of a virtual machine are disclosed. A hypervisor associates a plurality of guest user memory regions with a first domain and assigns each associated user process an address space identifier to protect the different user memory regions from the different user processes. In addition, the hypervisor associates a global kernel memory region with a second domain. The global kernel region is reserved for the operating system of the virtual machine and is not accessible to the user processes, because the user processes do not have access rights to memory regions associated with the second domain. The hypervisor also associates a global shared memory region with a third domain. The hypervisor allows user processes associated with the third domain to access the global shared region. Using this global shared memory region, different user processes within a virtual machine may share data without the need to swap the shared data in and out of each process's respective user region of memory. | 06-12-2014 |
20140325162 | MEMORY DEVICE AND METHOD FOR HIGH SPEED AND RELIABILITY - A memory device is provided with an instruction decoding unit, a control and logic unit, a first memory, and a second memory. The instruction decoding unit serves to decode an inputted instruction and produce a decoding signal. The control and logic unit serves to produce a control signal based on the decoding signal. The first memory has a first memory array and a first page buffer, and the second memory has a second memory array and a second page buffer. When the inputted instruction is a preset instruction, the preset instruction is used to simultaneously execute data access on the first memory and access the backup data on the second memory based on the same data. | 10-30-2014 |
20150127914 | SEMICONDUCTOR MEMORY DEVICE, MEMORY SYSTEM AND METHOD OF OPERATING THE SAME - A memory system including a plurality of memory chips is provided. The memory system includes a first memory chip and a second memory chip that share a data bus and become active by a chip enable signal, and a controller transmitting multi-chip select commands to the first and second memory chips. The first memory chip, in response to the first multi-chip select command, receives a first operation request transmitted by the controller through the data bus, and the second memory chip, in response to the second multi-chip select command, receives a second operation request transmitted by the controller through the data bus before the first memory chip operates according to the first operation request. | 05-07-2015 |
20150309725 | Shared Memory Controller and Method of Using Same - Disclosed herein are a shared memory controller and a method of controlling a shared memory. An embodiment method of controlling a shared memory includes concurrently scanning-in a plurality of read/write commands for respective transactions. Each of the plurality of read/write commands includes respective addresses and respective priorities. Additionally, each of the respective transactions is divisible into at least one beat and at least one of the respective transactions is divisible into multiple beats. The method also includes dividing the plurality of read/write commands into respective beat-level read/write commands and concurrently arbitrating the respective beat-level read/write commands according to the respective addresses and the respective priorities. Concurrently arbitrating yields respective sequences of beat-level read/write commands corresponding to the respective addresses. The method further includes concurrently dispatching the respective sequences of beat-level read/write commands to the shared memory, thereby accessing the shared memory. | 10-29-2015 |
20150355851 | DYNAMIC SELECTION OF MEMORY MANAGEMENT ALGORITHM - A data processing system | 12-10-2015 |
20150363352 | PULSE-LATCH BASED BUS DESIGN FOR INCREASED BANDWIDTH - A memory bus comprising a plurality of latches arranged sequentially between a source node and a destination node of a channel of the memory bus; and a pulse generator. The pulse generator is operable to generate a sequence of pulses, each sequential pulse to be simultaneously received by the plurality of latches. A pulse is generated for each edge of a clock signal. A first latch of the plurality of latches is operable to pass on a first data sample while a first pulse is received by the first latch of the plurality of latches. A second latch of the plurality of latches is operable to pass on a second data sample towards the first latch of the plurality of latches while the first pulse is simultaneously received by the first and second latches of the plurality of latches. | 12-17-2015 |
20150370506 | HINT INSTRUCTION FOR MANAGING TRANSACTIONAL ABORTS IN TRANSACTIONAL MEMORY COMPUTING ENVIRONMENTS - When executed, a transaction-hint instruction specifies a transaction-count-to-completion (CTC) value for a transaction. The CTC value indicates how far a transaction is from completion. The CTC may be a number of instructions to completion or an amount of time to completion. The CTC value is adjusted as the transaction progresses. When a disruptive event associated with inducing transactional aborts, such as an interrupt or a conflicting memory access, is identified while processing the transaction, processing of the disruptive event is deferred if the adjusted CTC value satisfies deferral criteria. If the adjusted CTC value does not satisfy deferral criteria, the transaction is aborted and the disruptive event is processed. | 12-24-2015 |
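The deferral policy above can be sketched as a small state machine: the CTC value from the hint instruction is adjusted as the transaction progresses, and a disruptive event is deferred only when the adjusted value satisfies the deferral criteria. The instruction-count interpretation of CTC and the threshold value are illustrative assumptions.

```python
class TransactionHint:
    """Tracks a transaction's count-to-completion (CTC) value."""
    def __init__(self, ctc):
        self.ctc = ctc                      # CTC from the hint instruction

    def step(self, n=1):
        """Adjust the CTC value as the transaction progresses."""
        self.ctc = max(0, self.ctc - n)

    def on_disruption(self, threshold=50):
        """Defer the disruptive event if the transaction is near
        completion; otherwise abort and process the event."""
        return "defer" if self.ctc <= threshold else "abort"
```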
20150370613 | MEMORY TRANSACTION HAVING IMPLICIT ORDERING EFFECTS - In at least some embodiments, a processor core executes a code segment including a memory transaction and non-transactional memory access instructions preceding the memory transaction in program order. The memory transaction includes at least an initiating instruction, a transactional memory access instruction, and a terminating instruction. The initiating instruction has an implicit barrier that imparts the effect of ordering execution of the transactional memory access instruction within the memory transaction with respect to the non-transactional memory access instructions preceding the memory transaction in program order. Executing the code segment includes executing the transactional memory access instruction within the memory transaction concurrently with at least one of the non-transactional memory access instructions preceding the memory transaction in program order and enforcing the barrier implicit in the initiating instruction following execution of the initiating instruction. | 12-24-2015 |
20150378939 | MEMORY MECHANISM FOR PROVIDING SEMAPHORE FUNCTIONALITY IN MULTI-MASTER PROCESSING ENVIRONMENT - A memory mechanism for providing semaphore functionality in a multi-master processing environment is disclosed. An exemplary memory unit includes a memory controller that manages access to a shared memory. The memory controller includes a semaphore context monitor associated with each master having access to the shared memory. A semaphore context monitor associated with a semaphore-capable master is activated by the semaphore-capable master (for example, by exclusive request signal(s) received by memory controller from semaphore-capable master). A semaphore context monitor associated with a non-semaphore-capable master is activated by the memory controller (for example, by exclusive request signal(s) generated by the memory controller). The memory controller can include a semaphore address command mechanism configured to derive a semaphore command from a memory access request received from the non-semaphore-capable master and activate the semaphore context monitor when the semaphore command specifies exclusive access. | 12-31-2015 |
20160154600 | Automatic Mutual Exclusion | 06-02-2016 |
20160179708 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM | 06-23-2016 |
20160188499 | TIGHTLY-COUPLED DISTRIBUTED UNCORE COHERENT FABRIC - Selected portions of an uncore fabric of a system-on-a-chip (SoC) or other embedded system are divided into two independent pipelines. Each pipeline operates independently of the other pipeline, and each accesses only one-half of the system memory, such as even or odd addresses in an interleaved memory. However, the two pipelines are tightly coupled to maintain coherency of the fabric. Coupling may be accomplished, for example, by a shared clock that is one-half of the base clock cycle for the fabric. Each incoming address may be processed by a deterministic hash, assigned to one of the pipelines, processed through memory, and then passed to a credit return. | 06-30-2016 |
20160378382 | ADDRESS PROBING FOR TRANSACTION - Embodiments relate to address probing for a transaction. An aspect includes determining, before starting execution of a transaction, a plurality of addresses that will be used by the transaction during execution. Another aspect includes probing each address of the plurality of addresses to determine whether any of the plurality of addresses has an address conflict. Yet another aspect includes, based on determining that none of the plurality of addresses has an address conflict, starting execution of the transaction. | 12-29-2016 |
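The probe-before-start aspect above can be sketched as a simple pre-check: every address the transaction will use is probed, and execution begins only if no probe finds a conflict. `conflict_set`, standing in for addresses currently owned by other transactions, is an illustrative assumption.

```python
def try_start_transaction(addresses, conflict_set):
    """Probe each address the transaction will use before starting it."""
    for addr in addresses:
        if addr in conflict_set:    # probe detected an address conflict
            return False            # do not start the transaction
    return True                     # all probes clean: start execution
```

Probing up front avoids wasting work on a transaction that would abort partway through on a predictable conflict.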