Class / Patent application number | Description | Number of patent applications / Date published |
710200000 | ACCESS LOCKING | 81 |
20080215784 | REALTIME-SAFE READ COPY UPDATE WITH PER-PROCESSOR READ/WRITE LOCKS - A technique for realtime-safe detection of a grace period for deferring the destruction of a shared data element until pre-existing references to the data element have been removed. A per-processor read/write lock is established for each of one or more processors. When reading a shared data element at a processor, the processor's read/write lock is acquired for reading, the shared data element is referenced, and the read/write lock that was acquired for reading is released. When starting a new grace period, all of the read/write locks are acquired for writing, a new grace period is started, and all of the read/write locks are released. | 09-04-2008 |
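The per-processor read/write lock scheme in the abstract above can be sketched in a few lines. This is a simplified illustration, not the patent's implementation: plain mutexes stand in for per-processor read/write locks, and the class and method names are invented for the example.

```python
import threading

class PerCPURWLocks:
    """Sketch: each 'processor' gets its own lock; a reader touches only
    its own lock, while starting a grace period acquires every lock."""

    def __init__(self, num_cpus):
        # One lock per processor stands in for the per-CPU read/write locks.
        self.locks = [threading.Lock() for _ in range(num_cpus)]

    def read(self, cpu, fn):
        # Reader side: acquire this CPU's lock, reference the shared
        # data element, then release.
        with self.locks[cpu]:
            return fn()

    def synchronize(self):
        # Grace period: holding all per-CPU locks for writing guarantees
        # every pre-existing reader has finished, so deferred destruction
        # of the shared data element is now safe.
        for lk in self.locks:
            lk.acquire()
        for lk in self.locks:
            lk.release()
```

Because a reader only ever touches its own processor's lock, the read path stays cheap and contention-free until a writer needs a grace period.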
20080222331 | Computer-Implemented System And Method For Lock Handling - Computer-implemented systems and methods for handling access to one or more resources. Executable entities that are running substantially concurrently provide access requests to an operating system (OS). One or more traps of the OS are avoided to improve resource accessing performance through use of information stored in a shared locking mechanism. The shared locking mechanism indicates the overall state of the locking process, such as the number of processes waiting to retrieve data from a resource and/or whether a writer process is waiting to access the resource. | 09-11-2008 |
20080244134 | Multiprocessor System and Method for Processing Memory Access - When receiving a write message, an input/output controller issues a write request message to a home processor node that holds the corresponding data in a memory. A memory controller of the processor node having received the write request message performs consistency processing based on the status of the corresponding data stored in a directory and controls a write permission message to reach the input/output controller having issued the write request message. The input/output controller of the input/output node having received the write permission message issues, as the write message, an update message to the home processor node. The memory controller of the processor node having received the update message updates the data in a main storage part. In the processing described above, when receiving a plurality of write messages from input/output devices, the input/output controller issues a write request message regardless of the progress of the preceding write message, and issues its write message only after the write message of the preceding write has been issued. | 10-02-2008 |
20080276025 | LOCK INFERENCE FOR ATOMIC SECTIONS - Locks which protect data structures used within atomic sections of concurrent programs are inferred from atomic sections and acquired in a manner to avoid deadlock. Locks may be inferred by expression correspondence using a backward inter-procedural analysis of an atomic section. Locks may be sorted according to a total order and acquired early in an atomic section to prevent deadlock. Multiple granularity of locks are determined and employed. Fine grained locks may be inferred and acquired to reduce contention. Coarse grained locks may be determined and substituted for fine grained locks when necessary for unbounded locations or to reduce the number of finer grained locks. | 11-06-2008 |
20080288691 | METHOD AND APPARATUS OF LOCK TRANSACTIONS PROCESSING IN SINGLE OR MULTI-CORE PROCESSOR - The present invention relates to a method and apparatus of lock transactions processing in a single or multi-core processor. An embodiment of the present invention is a processor with one or more processing cores, an address arbitrator, where one or more processing cores are configured to submit a lock transaction request to the address arbitrator corresponding to a specific instruction in response to the execution of the specific instruction. The lock transaction request includes a lock variable address asserted on an address bus. The processor further includes a lock controller for performing lock transaction processing in response to the lock transaction request, and notifying processing result to the processing core from which the lock transaction request was sent. The processor further includes a switching device, coupled to the address arbitrator and the lock controller, for identifying the lock transaction request and notifying the lock transaction request to the lock controller. | 11-20-2008 |
20080307138 | METHOD AND SYSTEM FOR LOCKING RESOURCES IN A DISTRIBUTED ENVIRONMENT - A method and system that creates and maintains lock properties for a resource or object in a distributed environment. The lock properties provide other client computer systems limited availability to the locked resource. Limited availability relates to being able to only read, write or delete the resource, or any combination thereof. Additionally, these lock properties allow other client computer systems to simultaneously hold or share equivalent locks. Other lock properties relate to advisory or mandatory status for the lock. Advisory locks may be honored or ignored by other client computer systems. | 12-11-2008 |
20090049218 | RETRIEVING LOCK ATTENTION DATA - Provided are techniques for retrieving lock attention data. A group of attention connection paths configured to transmit lock attention interrupts and lock attention data between the host and the control unit are identified. A lock attention interrupt is received from the control unit. In response to receiving the lock attention interrupt, a connection path from the group of attention connection paths is selected and lock attention data is retrieved from the control unit using the selected connection path. | 02-19-2009 |
20090164682 | Livelock Resolution - A mechanism is provided for resolving livelock conditions in a multiple processor data processing system. When a bus unit detects a timeout condition, or potential timeout condition, the bus unit activates a livelock resolution request signal. A livelock resolution unit receives livelock resolution requests from the bus units and signals an attention to a control processor. The control processor performs actions to attempt to resolve the livelock condition. Once a bus unit that issued a livelock resolution request has managed to successfully issue its command, it deactivates its livelock resolution request. If all livelock resolution request signals are deactivated, then the control processor instructs the bus and all bus units to resume normal activity. On the other hand, if the control processor determines that a predetermined amount of time passes without any progress being made, it determines that a hang condition has occurred. | 06-25-2009 |
20090198849 | Memory Lock Mechanism for a Multiprocessor System - A memory lock mechanism within a multi-processor system is disclosed. A lock control section is initially assigned to a data block within a system memory of the multiprocessor system. In response to a request for accessing the data block by a processing unit within the multiprocessor system, a determination is made by a memory controller whether or not the lock control section of the data block has been set. If the lock control section of the data block has been set, the request for accessing the data block is denied. Otherwise, if the lock control section of the data block has not been set, the lock control section of the data block is set, and the request for accessing the data block is allowed. | 08-06-2009 |
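The lock-control-section check described above reduces to a simple conditional: deny the access request if the block's lock section is set, otherwise set it and allow access. A minimal sketch, with invented names and a mutex modeling the memory controller's serialization:

```python
import threading

class LockControlledMemory:
    """Sketch of a per-data-block lock control section."""

    def __init__(self, num_blocks):
        self.lock_bits = [False] * num_blocks        # lock control sections
        self.mutex = threading.Lock()                # memory-controller serialization

    def request_access(self, block):
        with self.mutex:
            if self.lock_bits[block]:
                return False                         # section set: deny the request
            self.lock_bits[block] = True             # set the section and allow
            return True

    def release(self, block):
        with self.mutex:
            self.lock_bits[block] = False
```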
20090222607 | DOCUMENT MANAGEMENT SYSTEM, DOCUMENT MANAGEMENT METHOD, PROGRAM AND STORAGE MEDIUM - In a document management system according to the present invention, which is used for registering and managing a document in a database of a relational database server, a judgment is made as to whether or not the capacity of the database has reached a predetermined limit. When it is judged that the limit has been reached, an identifier indicating an editing-inhibited state is added to the database to inhibit all editing actions to the database, thereby providing a user-friendly system. | 09-03-2009 |
20090240860 | Lock Mechanism to Enable Atomic Updates to Shared Memory - A system and method for locking and unlocking access to a shared memory for atomic operations provides immediate feedback indicating whether or not the lock was successful. Read data is returned to the requestor with the lock status. The lock status may be changed concurrently when locking during a read or unlocking during a write. Therefore, it is not necessary to check the lock status as a separate transaction prior to or during a read-modify-write operation. Additionally, a lock or unlock may be explicitly specified for each atomic memory operation. Therefore, lock operations are not performed for operations that do not modify the contents of a memory location. | 09-24-2009 |
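The key idea above — the read returns its data together with the lock status, so no separate status-check transaction is needed — can be sketched as follows. The cell and method names are illustrative assumptions, not the patent's interface:

```python
import threading

class AtomicCell:
    """Sketch: locking during a read returns (data, got_lock) atomically."""

    def __init__(self, value):
        self.value = value
        self.locked = False
        self._m = threading.Lock()

    def read_lock(self):
        # The lock status comes back with the read data in one step.
        with self._m:
            if self.locked:
                return self.value, False
            self.locked = True
            return self.value, True

    def write_unlock(self, value):
        # Unlocking happens concurrently with the write-back.
        with self._m:
            self.value = value
            self.locked = False

    def read(self):
        # Plain read: no lock taken for operations that do not modify.
        with self._m:
            return self.value
```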
20090319710 | DEVICE AND METHOD FOR LOCKING TOUCH SCREEN - An electronic device includes a central processing unit (CPU), a touch screen, a locking module, and a locking button. The locking module is configured for storing a locking program. When the touch screen is forced into an unlocked status, operation of the locking button is capable of causing the CPU to instruct the locking module to activate the locking program to lock the touch screen. | 12-24-2009 |
20100057965 | Extension of Lock Discipline Violation Detection for Lock Wait Patterns - A computer usable medium including computer usable program code for detection of potential shared memory access deadlocks. The code determines, when a process waits on a first shared memory access lock, if the process holds locks other than the first lock. If so, then the code issues a warning about potential deadlock. | 03-04-2010 |
20100100655 | MANAGEMENT OF CLUSTER-WIDE RESOURCES WITH SHARED VARIABLES - The number of concurrent systems locks supported on a Sysplex is limited. Since persistent system locks may not be released for a long time, the limit may be reached resulting in outage periods. Access to resources may be managed through shared variables across a cluster of computing systems. Processes running on the cluster can use shared variables that are either exclusive or non-exclusive. An exclusive shared variable associates a resource with a process that has exclusive control of the resource. Since each exclusive shared variable is unique across the cluster, another application cannot create a second exclusive shared variable to control the resource. There is no limit on the number of exclusive shared variables that can be created on a cluster. Using exclusive shared variables instead of persistent system locks can prevent a system from reaching the limit of concurrent system locks while allowing processes exclusive use of resources. | 04-22-2010 |
20100191884 | METHOD FOR REPLICATING LOCKS IN A DATA REPLICATION ENGINE - An automated method is provided of replicating a locking protocol in a database environment for performing I/O operations wherein the database environment includes a plurality of databases. A locking protocol is performed that includes one or more explicit locking operations on objects in a first database of the database environment. The one or more explicit locking operations are replicated in one or more other databases in the database environment. At least some of the explicit locking operations are performed asynchronously with respect to the explicit locking operations performed in the first database. I/O operations are performed at the first database of the database environment that are associated with the one or more explicit locking operations implemented in the first database. | 07-29-2010 |
20100241774 | SCALABLE READER-WRITER LOCK - A reader-writer lock is provided that scales to accommodate multiple readers without contention. The lock comprises a hierarchical C-SNZI (Conditioned Scalable Non-Zero Indicator) structure that scales with the number of readers seeking simultaneous acquisition of the lock. All readers that have joined the C-SNZI structure share concurrent acquisition, and additional readers may continue to join until the structure is disabled. The lock may be disabled by a writer, at which time subsequent readers will wait (e.g., in a wait queue) until the lock is again available. The C-SNZI structure may be implemented in a lockword or in reader entries within a wait queue. If implemented in reader entries of a wait queue, the lockword may be omitted, and new readers arriving at the queue may be able to join an existing reader entry even if the reader entry is not at the tail of the queue. | 09-23-2010 |
20100250809 | SYNCHRONIZATION MECHANISMS BASED ON COUNTERS - A method and apparatus to maintain a plurality of counters to synchronize a plurality of requests for a lock independent of interlocks are described. The plurality of counters include a lock counter and an unlock counter. The requests wait in a wait queue maintained separately from the counters without direct access between the counters and the wait queue. The lock counter indicates a cumulative number of lock requests to acquire the lock. The unlock counter indicates a cumulative number of unlock requests to release the lock acquired. One or more requests waiting for the lock are selected according to the counters to be granted with the lock when the lock is released. A request corresponds to a task performing synchronized operations when granted with the lock. | 09-30-2010 |
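The two-counter mechanism above — a cumulative lock counter and a cumulative unlock counter, with waiters granted the lock as the unlock count catches up — is essentially a ticket lock. A minimal sketch under that reading (a `Condition` stands in for the separately maintained wait queue; names are illustrative):

```python
import threading

class CounterLock:
    """Sketch: synchronization via a lock counter and an unlock counter."""

    def __init__(self):
        self.lock_count = 0      # cumulative lock requests (tickets issued)
        self.unlock_count = 0    # cumulative unlock requests (tickets served)
        self._cv = threading.Condition()

    def acquire(self):
        with self._cv:
            ticket = self.lock_count
            self.lock_count += 1
            # Wait until the unlock counter reaches this request's ticket.
            self._cv.wait_for(lambda: self.unlock_count == ticket)

    def release(self):
        with self._cv:
            self.unlock_count += 1
            self._cv.notify_all()
```

A side effect of the counter ordering is FIFO fairness: requests are granted in exactly the order their tickets were issued.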
20100274937 | PROVIDING LOCK-BASED ACCESS TO NODES IN A CONCURRENT LINKED LIST - A method of providing lock-based access to nodes in a concurrent linked list includes providing a plurality of striped lock objects. Each striped lock object is configured to lock at least one of the nodes in the concurrent linked list. An index is computed based on a value stored in a first node to be accessed in the concurrent linked list. A first one of the striped lock objects is identified based on the computed index. The first striped lock object is acquired, thereby locking and providing protected access to the first node. | 10-28-2010 |
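The striped-lock indexing step above can be sketched directly: compute an index from the node's value and map it onto a fixed pool of lock objects. Hash-modulo is an assumed indexing scheme here, and the names are invented for the example:

```python
import threading

NUM_STRIPES = 8
stripes = [threading.Lock() for _ in range(NUM_STRIPES)]

def stripe_for(value):
    # Compute an index from the node's value and pick the striped lock
    # object that covers it.
    return stripes[hash(value) % NUM_STRIPES]

def locked_update(node, fn):
    # Acquiring the stripe locks this node (and any others sharing the
    # stripe) without one lock per node or one lock for the whole list.
    with stripe_for(node["value"]):
        return fn(node)
```

Striping trades a little false sharing (unrelated nodes mapping to the same stripe) for a bounded number of lock objects regardless of list length.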
20100306432 | COMPUTER-IMPLEMENTED MULTI-RESOURCE SHARED LOCK - In one embodiment of a computer-implemented system, comprising a plurality of computer entities and multiple resources, one of the computer entities may request a multi-resource lock to one of the multiple resources; the one resource determines whether a resource lock is available at the one resource and, if so, the one resource communicates with all peer resources to determine whether a resource lock is available; if the peer resources indicate a resource lock is available, all of the resources are locked to the requesting computer entity, and the one resource communicates the lock of the resources to the requesting computer entity; and if any resource indicates contention for the multi-resource lock, the one resource communicates the contention to the requesting computer entity, and the requesting computer entity backs off the multi-resource lock request and, after a random time interval, repeats the request. | 12-02-2010 |
20100312936 | INTERLOCKING INPUT/OUTPUTS ON A VIRTUAL LOGIC UNIT NUMBER - In one embodiment, a solution is provided wherein a lock client sends lock requests to a lock manager upon receipt of an input/output (I/O) and receives back a lock grant. At some point later, the lock client may send a lock release. The lock manager, upon receipt of a lock release from a lock client, removes a first lock request corresponding to the lock release from a lock grant queue corresponding to the lock manager. Then, for each dependency queue lock request in a dependency queue corresponding to the first lock request, the lock manager may determine whether the dependency queue lock request conflicts with a second lock request in the lock grant queue, and then may process the dependency queue lock request according to whether the dependency queue lock request conflicts with a second lock request in the lock grant queue. | 12-09-2010 |
20110161539 | OPPORTUNISTIC USE OF LOCK MECHANISM TO REDUCE WAITING TIME OF THREADS TO ACCESS A SHARED RESOURCE - Embodiments of the invention provide a method, apparatus and computer program product for enabling a thread to acquire a lock associated with a shared resource, when a locking mechanism is used therewith, wherein each embodiment reduces waiting time and enhances efficiency in using the shared resource. One embodiment is associated with a plurality of processors, which includes two or more processors that each provides a specified thread to access a shared resource. The shared resource can only be accessed by one thread at a given time, a locking mechanism enables a first one of the specified threads to access the shared resource while each of the other specified threads is retained in a waiting queue, and a second one of the specified threads occupies a position of highest priority in the queue. The method includes the step of identifying a time period between a time when the first specified thread releases access to the shared resource, and a later time when the second specified thread becomes enabled to access the shared resource. Responsive to an additional thread that is not one of the specified threads being provided by a processor to access the shared resource during the identified time period, it is determined whether a first prespecified criterion pertaining to the specified threads retained in the queue has been met. Responsive to the first criterion being met, the method determines whether a second prespecified criterion has been met, wherein the second criterion is that the number of specified threads in the queue has not decreased since a specified prior time. Responsive to the second criterion being met, the method then decides whether to enable the additional thread to access the shared resource before the second specified thread accesses the resource. | 06-30-2011 |
20110161540 | HARDWARE SUPPORTED HIGH PERFORMANCE LOCK SCHEMA - A method and apparatus for lock allocation control. When a processor core acquires a lock, other processor cores do not need to constantly poll memory to check whether the required lock is released. Instead, other processor cores will be in sleep state and the next processor core needed will be selectively woken up based on predetermined rule, such that an out-of-order lock contention procedure is turned into an in-order lock allocation procedure. By selectively waking up a processor core that is in sleep state, the method and apparatus can avoid occupying a large amount of bus bandwidth, can avoid cache misses, and can save power consumption of chip. | 06-30-2011 |
20110225335 | USING A DUAL MODE READER WRITER LOCK - A method, system, and computer usable program product for using a dual mode reader writer lock (DML). A contention condition is detected in the use of a lock in a data processing system, the lock being used for managing read and write access to a resource in the data processing system. A determination of the data structure used for implementing the lock is made. If the data structure is a data structure of a reader writer lock (RWL), the data structure is transitioned to a second data structure suitable for implementing the DML. A determination is made whether the DML has been expanded. If the DML is not expanded, the DML is expanded such that the data structure includes an original lock and a set of expanded locks. The original lock and each expanded lock in the set of expanded locks forms an element of the DML. | 09-15-2011 |
20110246694 | MULTI-PROCESSOR SYSTEM AND LOCK ARBITRATION METHOD THEREOF - A multi-processor system of the present invention comprises a plurality of processors each configured to lock a shared resource and process a task; each of the processors including a lock wait information storage unit for storing lock wait information indicating whether or not the processor is waiting for acquirement of a lock of the shared resource; and a lock acquirement priority information storage unit for storing lock acquirement priority information indicating a priority according to which the shared resource is acquired; and each of the processors being configured to acquire the lock of the shared resource based on the lock wait information and the lock acquirement priority information. | 10-06-2011 |
20110296069 | Fabric Based Lock Manager Service - A replicated finite state machine lock service facilitates resource sharing in a distributed system. A lock request from a client identifies a resource and a lock-mode, and requests a leaseless lock on the resource. The service uses client instance identifiers to categorize requests as duplicate, stale, abandoned, or actionable. A lock may be abandoned when a client holding the lock goes down. After a per-client abandonment timer expires, the lock service may treat any exclusive lock granted to the client as abandoned, and treat any non-exclusive lock granted to the client as unlocked. The service tries to notify a lock-holding client if another client requests the same lock, and treats the lock as abandoned if the notification attempt fails. An abandoned read lock is granted to a different client on request. An abandoned write lock is granted or refused depending on whether the requesting client accepts abandoned write locks. | 12-01-2011 |
20110320661 | DIAGNOSE INSTRUCTION FOR SERIALIZING PROCESSING - A system serialization capability is provided to facilitate processing in those environments that allow multiple processors to update the same resources. The system serialization capability is used to facilitate processing in a multi-processing environment in which guests and hosts use locks to provide serialization. The system serialization capability includes a diagnose instruction which is issued after the host acquires a lock, eliminating the need for the guest to acquire the lock. | 12-29-2011 |
20120059963 | Adaptive Locking of Retained Resources in a Distributed Database Processing Environment - System, method, computer program product embodiments and combinations and sub-combinations thereof for adaptive locking of retained resources in a distributed database processing environment are provided. An embodiment includes identifying a locking priority for at least a portion of a buffer pool, determining lock requests based upon the identified locking priority, and granting locks for the lock requests. | 03-08-2012 |
20120089760 | Increasing Functionality Of A Reader-Writer Lock - In one embodiment, the present invention includes a method for accessing a shared memory associated with a reader-writer lock according to a first concurrency mode, dynamically changing from the first concurrency mode to a second concurrency mode, and accessing the shared memory according to the second concurrency mode. In this way, concurrency modes can be adaptively changed based on system conditions. Other embodiments are described and claimed. | 04-12-2012 |
20120151110 | PROCESS-SAFE READ/WRITE LOCKS - In one embodiment, a non-transitory processor-readable medium stores code representing instructions that when executed cause a processor to obtain a first mutual exclusion object. The first mutual exclusion object can be a write mutual exclusion object associated with a shared resource. The code can further represent instructions that when executed cause the processor to obtain a second mutual exclusion object associated with an object manager module and define a read event object with a name conforming to a predetermined format. The code can further represent instructions that when executed cause the processor to release the second mutual exclusion object, release the first mutual exclusion object, read at least a portion of the shared resource, obtain the second mutual exclusion object, destroy the read event object and release the second mutual exclusion object. | 06-14-2012 |
20120191892 | COMPONENT-SPECIFIC DISCLAIMABLE LOCKS - Systems and methods of protecting a shared resource in a multi-threaded execution environment in which threads are permitted to transfer control between different software components, for any of which a disclaimable lock having a plurality of orderable locks can be identified. Back out activity can be tracked among a plurality of threads with respect to the disclaimable lock and the shared resource, and reclamation activity among the plurality of threads may be ordered with respect to the disclaimable lock and the shared resource. | 07-26-2012 |
20120198111 | MANAGING A RESOURCE LOCK - Controlling access to a resource by a plurality of resource requesters is disclosed. The resource lock operates in a contention efficient (heavyweight) operating mode, and in response to a request from a resource requester to acquire the resource lock, a count of a total number of acquisitions of the resource lock in the contention efficient operating mode is incremented. In response to access to the resource not being contended by more than one resource requester, a count of a number of uncontended acquisitions of the resource lock in the contention efficient operating mode is incremented, and a contention rate is calculated as the number of uncontended acquisitions in the contention efficient operating mode divided by the total number of acquisitions in the contention efficient operating mode. In response to the contention rate meeting a threshold contention rate, the resource lock is changed to a non-contention efficient (lightweight) operating mode. | 08-02-2012 |
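The counting and threshold logic described above can be sketched as a small bookkeeping class. This is an illustration of the described rule only; the threshold, the minimum-sample guard, and all names are assumptions added for the example:

```python
class AdaptiveLockStats:
    """Sketch: switch to a lightweight mode once the uncontended
    acquisition rate in heavyweight mode meets a threshold."""

    def __init__(self, threshold=0.9, min_samples=100):
        self.threshold = threshold
        self.min_samples = min_samples   # assumed guard against early flips
        self.total = 0                   # acquisitions in heavyweight mode
        self.uncontended = 0             # of those, how many saw no contention
        self.mode = "heavyweight"

    def record_acquisition(self, contended):
        if self.mode != "heavyweight":
            return
        self.total += 1
        if not contended:
            self.uncontended += 1
        # Contention rate = uncontended acquisitions / total acquisitions.
        rate = self.uncontended / self.total
        if self.total >= self.min_samples and rate >= self.threshold:
            self.mode = "lightweight"
```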
20120210031 | COMPUTER-IMPLEMENTED MULTI-RESOURCE SHARED LOCK - In one embodiment of a computer-implemented system, comprising a plurality of computer entities and multiple resources, one of the computer entities may request a multi-resource lock to one of the multiple resources; the one resource determines whether a resource lock is available at the one resource and, if so, the one resource communicates with all peer resources to determine whether a resource lock is available; if the peer resources indicate a resource lock is available, all of the resources are locked to the requesting computer entity, and the one resource communicates the lock of the resources to the requesting computer entity; and if any resource indicates contention for the multi-resource lock, the one resource communicates the contention to the requesting computer entity, and the requesting computer entity backs off the multi-resource lock request and, after a random time interval, repeats the request. | 08-16-2012 |
20120290754 | Scheduling Virtual Interfaces - A mechanism is provided for scheduling virtual interfaces having at least one virtual interface scheduler, a virtual interface context cache and a pipeline with a number of processing units. The virtual interface scheduler is configured to send a lock request for a respective virtual interface to the virtual interface context cache. The virtual interface context cache is configured to lock a virtual interface context of the respective virtual interface and to send a lock token to the virtual interface scheduler in dependence on said lock request. The virtual interface context cache is further configured to hold a current lock token for the respective virtual interface context and to unlock the virtual interface context, if a lock token of an unlock request received from the pipeline matches the held current lock token. | 11-15-2012 |
20130007322 | Hardware Enabled Lock Mediation - A tangible storage medium and data processing system build a runtime environment of a system. A profile manager receives a service request containing a profile identifier. The profile identifier specifies a required version of at least one software component. The profile manager identifies a complete installation of the software component, and at least one delta file. The profile manager dynamically constructs a classpath for the required version by preferentially utilizing files from the at least one delta file followed by files from the complete installation. The runtime environment is then built utilizing the classpath. | 01-03-2013 |
20130007323 | Hardware Enabled Lock Mediation - A computer implemented method for control access to a contested resource. When a lock acquisition request is received from a virtual machine, the partition management firmware determines whether the lock acquisition request is received within a preemption period of a time slice allocated to the virtual machine. If the lock acquisition request is received within the preemption period, the partition management firmware ends the time slice early, and performs a context switch. | 01-03-2013 |
20130007324 | POWER MANAGEMENT MODULE FOR USB DEVICES - A system and method of managing power of a multi-function USB device suspends the device in response to receipt of a request to suspend from a USB host; assigns respective device functions to indefinite, locked or unlocked states; allows the device to resume if data or requests for host attention are pending at a given function that is in the unlocked state, and assigns the given function to the locked state; and otherwise maintains the suspend even if data are pending at one or more functions that are in the locked state. | 01-03-2013 |
20130013833 | LOCK WAIT TIME REDUCTION IN A DISTRIBUTED PROCESSING ENVIRONMENT - Aspects of the present invention reduce a lock wait time in a distributed processing environment. A plurality of wait-for dependencies between a first plurality of transactions and a second plurality of transactions in a distributed processing environment is identified. The first plurality of transactions waits for the second plurality of transactions to release a plurality of locks on a plurality of shared resources. An amount of time the first plurality of transactions will wait for the second plurality of transactions in the distributed processing environment is determined based on the plurality of wait-for dependencies between the first plurality of transactions and the second plurality of transactions. Historical transaction data related to the plurality of wait-for dependencies between the first plurality of transactions and the second plurality of transactions is analyzed. The amount of time the first plurality of transactions will wait for the second plurality of transactions is reduced based on the historical transaction data. | 01-10-2013 |
20130042039 | DEADLOCK PREVENTION - Methods, systems, and computer-readable media with executable instructions stored thereon for preventing deadlocks are provided. An inter-device mutex (IDM) can be locked for a first client. An error message can be sent to a second client in response to a received first lock command from the second client while the IDM is locked for the first client. A number of second lock commands from the second client while the IDM is locked for the first client can be received. The IDM can be unlocked for the first client in response to an unlock command received from the first client. The IDM can be locked for the second client in response to a received third lock command from the second client, wherein the third lock command is received subsequent to unlocking the IDM for the first client. | 02-14-2013 |
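The inter-device mutex (IDM) behaviour above — return an error to a second client's lock command instead of blocking it while the first client holds the lock — can be sketched as follows. Class and return-value names are invented for the example:

```python
import threading

class InterDeviceMutex:
    """Sketch: lock commands fail with an error rather than block,
    so a waiting client can never sit inside a deadlock cycle."""

    def __init__(self):
        self.owner = None
        self._m = threading.Lock()   # serializes lock/unlock commands

    def lock(self, client):
        with self._m:
            if self.owner is None:
                self.owner = client
                return "locked"
            return "error"           # held by another client: report, don't wait

    def unlock(self, client):
        with self._m:
            if self.owner == client:
                self.owner = None
                return "unlocked"
            return "error"
```

A client receiving `"error"` is expected to retry its lock command later, matching the repeated second lock commands described in the abstract.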
20130046910 | METHOD FOR MANAGING A PROCESSOR, LOCK CONTENTION MANAGEMENT APPARATUS, AND COMPUTER SYSTEM - A method for managing a processor includes: obtaining an online request of a processor of a computer system; collecting lock contention information of the computer system if a lock contention status flag indicates a non-lock thrashing status; determining whether the computer system is in a lock thrashing status according to the lock contention information; and accepting the online request if it is determined that the computer system is in a non-lock thrashing status. By using the management method according to embodiments of the present application, processor performance degradation and a waste of idle processor resources that are caused by the case that the computer system is in a lock thrashing status are prevented, thereby improving utilization efficiency of processor resources and promoting overall performance of the computer system. | 02-21-2013 |
20130080672 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR ACCESS CONTROL - An access control system for controlling access to a resources group including multiple computer accessible resources, the system including: a lock, configured to selectively deny a request of a process to access the resource when the resource is locked; and a global lock, configured to grant to the process exclusive access to add a pending-task entry into a resource-associated data structure associated with the resource; wherein the global lock has to be acquired by any process whose request to access any resource of the resources group for performing any task was denied, in order for access thereto for performing the respective task to be granted; wherein the lock is further configured to selectively grant, following the adding of the pending-task entry into the resource-associated data structure, exclusive access to the resource for performing a task associated with the pending-task entry upon a releasing of the resource-associated lock. | 03-28-2013 |
20130111089 | Time Limited Lock Ownership | 05-02-2013 |
20130132627 | System and Method for Implementing Locks Shared Between Kernel and User Space - An apparatus comprising one or more processors configured to implement a plurality of operations for an operating system (OS) platform including a kernel and a user application, one or more resource blocks shared by the kernel and the user application, and one or more locks shared by the kernel and the user application that correspond to the shared resource blocks, wherein the user application is configured to synchronize accesses to the shared resource blocks between a user thread and a kernel thread by directly accessing the locks without using a system call to the kernel. | 05-23-2013 |
20130205057 | EXCLUSIVE CONTROL METHOD OF RESOURCE AND EXCLUSIVE CONTROLLER OF RESOURCE - Under the circumstances that a lock object which performs a restriction control on an exclusive use of a sharable resource is granting a second information processor a right of prior use of the sharable resource over a first information processor, a time length of exclusive use during which the sharable resource is exclusively used by the second information processor is measured when an attempt to acquire the right of prior use requested by the first information processor for the lock object fails, and at least two standby operations are set, the at least two standby operations being carried out by the first information processor until the right of prior use of the sharable resource granted to the second information processor is no longer valid, and the time length of exclusive use is compared to a decision threshold value preset for evaluation of the time length of exclusive use so that one of the standby operations suitable for a comparison result is selected. | 08-08-2013 |
20130282943 | APPARATUS AND METHODS FOR A TAMPER RESISTANT BUS FOR SECURE LOCK BIT TRANSFER - A tamper-resistant bus architecture for secure lock bit transfer in an integrated circuit includes a nonvolatile memory having an n-bit storage region for storing encoded lock bits. A plurality of read access circuits are coupled to the nonvolatile memory. An n-bit tamper-resistant bus is coupled to the read access circuits. A decoder is coupled to the tamper-resistant bus. A k-bit decoded lock signal bus is coupled to the decoder. A controller is coupled to the k-bit decoded lock signal bus. | 10-24-2013 |
20130290583 | System and Method for NUMA-Aware Locking Using Lock Cohorts - The system and methods described herein may be used to implement NUMA-aware locks that employ lock cohorting. These lock cohorting techniques may reduce the rate of lock migration by relaxing the order in which the lock schedules the execution of critical code sections by various threads, allowing lock ownership to remain resident on a single NUMA node longer than under strict FIFO ordering, thus reducing coherence traffic and improving aggregate performance. A NUMA-aware cohort lock may include a global shared lock that is thread-oblivious, and multiple node-level locks that provide cohort detection. The lock may be constructed from non-NUMA-aware components (e.g., spin-locks or queue locks) that are modified to provide thread-obliviousness and/or cohort detection. Lock ownership may be passed from one thread that holds the lock to another thread executing on the same NUMA node without releasing the global shared lock. | 10-31-2013 |
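The cohort-lock structure described above — a thread-oblivious global lock plus per-node locks, with hand-off to a waiting cohort peer instead of releasing the global lock — can be sketched in simplified single-process form. The class and the waiter-counting scheme are illustrative assumptions, not the patent's claimed implementation; a real cohort lock would also bound consecutive hand-offs to avoid starving other nodes.

```python
import threading

class CohortLock:
    """Simplified sketch of a NUMA-aware cohort lock.

    Node IDs are assumed to be supplied by the caller; a mutex stands in
    for the atomic operations a real implementation would use.
    """
    def __init__(self, num_nodes):
        self.global_lock = threading.Lock()              # thread-oblivious
        self.node_locks = [threading.Lock() for _ in range(num_nodes)]
        self.waiters = [0] * num_nodes                   # cohort detection
        self.passed = [False] * num_nodes                # global lock handed off?
        self._meta = threading.Lock()                    # protects counters

    def acquire(self, node):
        with self._meta:
            self.waiters[node] += 1
        self.node_locks[node].acquire()
        with self._meta:
            self.waiters[node] -= 1
            handed_off = self.passed[node]
            self.passed[node] = False
        if not handed_off:
            # No cohort peer passed us ownership: take the global lock.
            self.global_lock.acquire()

    def release(self, node):
        with self._meta:
            if self.waiters[node] > 0:
                # A cohort peer on this node is waiting: hand off the node
                # lock while the global lock stays held (no lock migration).
                self.passed[node] = True
                self.node_locks[node].release()
                return
        self.global_lock.release()
        self.node_locks[node].release()
```

The hand-off path is what keeps lock ownership resident on one NUMA node: as long as local waiters exist, the global lock is never released and no coherence traffic crosses nodes.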
20130290584 | SEQUENCE-BASED PROCESS LOCKING - Methods, apparatuses, and computer readable media for scheduling operations in a hardware apparatus. A method includes receiving a lock request corresponding to a requested action, and registering a lock corresponding to and in response to the lock request. Registering the lock includes assigning the registered lock a sequence number. The method includes selecting a current lock based on the sequence number. The method includes permitting the requested action to be performed when the current lock corresponds to the registered lock and the registered lock has been requested. The method includes clearing the registered lock. | 10-31-2013 |
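The register/select/permit/clear cycle above can be sketched with a monotonically increasing sequence counter; the class and method names are illustrative, not taken from the patent.

```python
import itertools

class SequenceLock:
    """Sketch of sequence-based locking: each lock request is registered
    with a sequence number, and an action is permitted only while its
    registered lock is the current one."""
    def __init__(self):
        self._seq = itertools.count()
        self.current = 0

    def register(self):
        # Registering a lock assigns it the next sequence number.
        return next(self._seq)

    def may_run(self, seq):
        # The requested action is permitted only for the current lock.
        return seq == self.current

    def clear(self, seq):
        assert seq == self.current
        self.current += 1            # the next registered lock becomes current

lock = SequenceLock()
a, b = lock.register(), lock.register()
print(lock.may_run(b))   # False: b must wait until a's lock is cleared
lock.clear(a)
print(lock.may_run(b))   # True
```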
20130339560 | LOCK CONTROL APPARATUS AND LOCK CONTROL METHOD - A lock control apparatus includes a control unit that controls acquisition of a lock for using a shared resource shared among a plurality of tasks by a task according to first lock information that indicates whether to permit the tasks to acquire the lock, and a determining unit that determines whether there is a conflict of requests for acquisition of the lock by the tasks, wherein when the determining unit determines that there is a conflict of requests for acquisition of the lock, the control unit controls acquisition of the lock by the tasks according to second lock information that indicates whether to permit acquisition of the lock when there is a conflict. | 12-19-2013 |
20130346660 | USB DEVICE CONTROL USING ENDPOINT TYPE DETECTION DURING ENUMERATION - Described herein are embodiments of USB device control using endpoint type detection during enumeration. An apparatus configured for USB device control using endpoint type detection during enumeration may include a host controller configured to selectively disable enumeration of a USB device based at least in part on an endpoint type of the USB device. The apparatus may include a management engine configured to store in the host controller a USB lock policy defining endpoint types disallowed to be enumerated by the apparatus. Other embodiments may be described and/or claimed. | 12-26-2013 |
20140040519 | ACTIVE LOCK INFORMATION MAINTENANCE AND RETRIEVAL - Technologies related to active lock information maintenance and retrieval are generally described. In some examples, a computing device may be configured to maintain active lock information including lock identifiers for active locks, lock access identifiers corresponding to a number of times a lock has been placed and/or released, and/or lock owner identifiers corresponding to threads placing locks. The computing device may provide an active lock information system configured to return active lock information including some or all of the lock identifiers for active locks, lock access identifiers, and/or lock owner identifiers in response to active lock information requests. | 02-06-2014 |
20140059261 | NOVEL LOCK LEASING METHOD FOR SOLVING DEADLOCK - A method for resolving deadlock in a multi-threaded computing system using a novel lock lease is disclosed. A first thread leases a lock held by the first thread to a second thread different from the first thread. The leasing transfers control of the lock to the second thread while the first thread retains ownership of the lock. To lease the lock: (1) the second thread applies for the lease from the first thread; (2) the first thread grants the lease; (3) the first thread waits for the second thread to complete a task; (4) the second thread terminates the lease; (5) the first thread confirms termination of the lease. The first thread receives control of the lock back from the second thread after the second thread has finished using resources controlled by the lock. The second thread also can sublease the lock to a third thread. | 02-27-2014 |
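The lease protocol above — control of the lock moves to the lessee while ownership stays with the lessor — can be modeled by tracking owner and controller separately. This is a single-process sketch under assumed names; the five protocol steps map onto the `lease` and `end_lease` calls, and subleasing would repeat the same transfer from the current controller.

```python
import threading

class LeasableLock:
    """Sketch of a leasable lock: ownership and control are distinct."""
    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None        # the thread that holds the lock
        self.controller = None   # the thread currently allowed to use it

    def acquire(self, who):
        self._lock.acquire()
        self.owner = self.controller = who

    def lease(self, to):
        # Steps (1)-(2): the lessee applied and the owner grants the lease;
        # control transfers, ownership does not.
        assert self.controller == self.owner
        self.controller = to

    def end_lease(self, who):
        # Steps (4)-(5): the lessee terminates the lease and the owner,
        # which was waiting (step 3), confirms and regains control.
        assert self.controller == who
        self.controller = self.owner

    def release(self, who):
        assert who == self.owner == self.controller
        self.owner = self.controller = None
        self._lock.release()
```

Because the owner never releases the underlying lock during the lease, no third thread can acquire it in between, which is what breaks the circular-wait condition of the deadlock.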
20140068127 | SHARED LOCKING MECHANISM FOR STORAGE CENTRIC LEASES - A computing device receives a request from a host for a shared lock on a resource. The computing device obtains an exclusive lock on the resource using a locking data structure that is stored on the storage domain. The computing device subsequently obtains a shared lock on the resource for the host by writing a flag to the locking data structure, wherein the flag indicates that the host has the shared lock on the resource. The computing device then releases the exclusive lock on the resource. | 03-06-2014 |
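The three-step sequence above (take the exclusive lock, write the host's flag, release the exclusive lock) can be sketched with a dictionary standing in for the storage-resident locking data structure; the class and field names are assumptions for illustration.

```python
class StorageLease:
    """Sketch of layering shared per-host locks on a storage-resident
    locking structure guarded by an exclusive lock."""
    def __init__(self):
        self.exclusive_owner = None
        self.shared_flags = {}           # host id -> host holds a shared lock

    def acquire_shared(self, host, manager="manager"):
        assert self.exclusive_owner is None
        self.exclusive_owner = manager   # step 1: obtain the exclusive lock
        self.shared_flags[host] = True   # step 2: flag the host's shared lock
        self.exclusive_owner = None      # step 3: release the exclusive lock

    def release_shared(self, host):
        self.shared_flags.pop(host, None)
```

Serializing the flag write behind the exclusive lock is what makes concurrent shared-lock requests from different hosts safe even though the structure lives on shared storage.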
20140089545 | LEASED LOCK IN ACTIVE-ACTIVE HIGH AVAILABILITY DAS SYSTEMS - A method and system for IO processing in a storage system is disclosed. In accordance with the present disclosure, a controller may take long term “lease” of a portion (e.g., an LBA range) of a virtual disk of a RAID system and then utilize local locks for IOs directed to the leased portion. The method and system in accordance with the present disclosure eliminates inter-controller communication for the majority of IOs and improves the overall performance for a High Availability Active-Active DAS RAID system. | 03-27-2014 |
20140115213 | TIERED LOCKING OF RESOURCES - In an embodiment, a lock command is received from a thread that specifies a resource. If tier status in a nodal lock indicates the nodal lock is currently owned, an identifier of the thread is added to a nodal waiters list, and if the thread's lock wait indicator indicates that the thread owns the nodal lock, then a successful completion status is returned for the lock command to the thread after waiting until a next tier wait indicator in the nodal lock indicates that any thread owns a global lock on the resource. If the tier status indicates no thread holds the nodal lock, the tier status is changed to indicate the nodal lock is owned, and if a global waiters and holder list is empty, an identifier of a node at which the thread executes is added to the global waiters and holder list. | 04-24-2014 |
20140115214 | BITMAP LOCKING USING A NODAL LOCK - In an embodiment, in response to a request from a producer thread to set a bit in a global bitmap, a nodal lock is obtained on a nodal bitmap at a node at which the producer thread executes. A determination is made whether a corresponding bit in a pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending, the corresponding bit in the pending clear bitmap is cleared. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that the clear of the bit in the global bitmap is not pending, a corresponding bit in a pending set bitmap in the nodal bitmap is set. | 04-24-2014 |
20140115215 | TIERED LOCKING OF RESOURCES - In an embodiment, a lock command is received from a thread that specifies a resource. If tier status in a nodal lock indicates the nodal lock is currently owned, an identifier of the thread is added to a nodal waiters list, and if the thread's lock wait indicator indicates that the thread owns the nodal lock, then a successful completion status is returned for the lock command to the thread after waiting until a next tier wait indicator in the nodal lock indicates that any thread owns a global lock on the resource. If the tier status indicates no thread holds the nodal lock, the tier status is changed to indicate the nodal lock is owned, and if a global waiters and holder list is empty, an identifier of a node at which the thread executes is added to the global waiters and holder list. | 04-24-2014 |
20140115216 | BITMAP LOCKING USING A NODAL LOCK - In an embodiment, in response to a request from a producer thread to set a bit in a global bitmap, a nodal lock is obtained on a nodal bitmap at a node at which the producer thread executes. A determination is made whether a corresponding bit in a pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that a clear of the bit in the global bitmap is pending, the corresponding bit in the pending clear bitmap is cleared. If the corresponding bit in the pending clear bitmap in the nodal bitmap indicates that the clear of the bit in the global bitmap is not pending, a corresponding bit in a pending set bitmap in the nodal bitmap is set. | 04-24-2014 |
20140143465 | Offloading Input/Output (I/O) Completion Operations - A mechanism is provided for offloading an input/output (I/O) completion operation. Responsive to a second processor identifying that a flag has been set by a first processor requesting assistance in completing an I/O operation, the second processor copies an I/O response from a first I/O response data structure associated with the first processor to a second I/O response data structure associated with the second processor. The second processor deletes the I/O response from the first I/O response data structure, clears the flag, and processes the I/O operation by addressing the I/O response in the second I/O response data structure. Responsive to completing the I/O operation, the second processor deletes the I/O response from the second I/O response data structure. | 05-22-2014 |
20140149621 | Switching a Locking Mode of an Object in a Multi-Thread Program - A mechanism is provided for switching a locking mode of an object in a multi-thread program. The mechanism acquires, during execution of the program, access information related to accesses to the object by a plurality of threads. The object supports a single-level locking mode and a multi-level locking mode. The single-level locking mode is a mode capable of locking the object. The multi-level locking mode is a mode capable of locking the object and fields in the object respectively. The mechanism switches the locking mode of the object between the single-level locking mode and the multi-level locking mode based on the access information. | 05-29-2014 |
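The mode-switch decision above could be driven by access information such as how concentrated field accesses are; the metric and threshold below are illustrative assumptions, not the patent's actual criterion.

```python
def choose_locking_mode(field_accesses, threshold=0.75):
    """Sketch of the single-level vs. multi-level decision: if threads
    mostly hit one field, a single object-level lock suffices; if accesses
    spread across fields, per-field locking reduces contention."""
    total = sum(field_accesses.values())
    if total == 0:
        return "single-level"
    hottest = max(field_accesses.values()) / total
    return "single-level" if hottest >= threshold else "multi-level"

print(choose_locking_mode({"x": 90, "y": 10}))  # single-level
print(choose_locking_mode({"x": 50, "y": 50}))  # multi-level
```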
20140181341 | SYSTEM AND METHOD TO RESET A LOCK INDICATION - An apparatus includes a first core processor, a second core processor, and a lock register coupled to the first core processor and to the second core processor. The apparatus further includes a shared structure responsive to the first core processor and to the second core processor. The shared structure is responsive to an unlock instruction issued by either the first core processor or the second core processor to send a signal to the lock register to reset a lock indication in the lock register. | 06-26-2014 |
20140181342 | PRIORITIZED LOCK REQUESTS TO REDUCE BLOCKING - A method includes requesting a lock on a resource. The request for the lock on the resource is specified as a low priority non-blocking request that does not block one or more other requests such that one or more other requests can request a lock on the resource and obtain the lock on the resource in priority to the low priority non-blocking request. Based on the low priority request, the method includes maintaining the low priority request in a non-blocking fashion until a predetermined condition occurs. As a result of the predetermined condition occurring, the method includes handling the low priority request such that it is no longer treated as a low priority non-blocking request. Embodiments may further include a kill request which kills any operations on the resource, aborts any transactions having a lock on the resource, and locks the resource. | 06-26-2014 |
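The low-priority non-blocking behavior above can be sketched with two waiter queues: a low-priority request never blocks normal requests, and normal requests are granted in priority to it. The queue model and names are assumptions for illustration; the abstract's kill-request semantics are omitted.

```python
class PriorityLock:
    """Sketch of prioritized lock requests with a low-priority,
    non-blocking queue."""
    def __init__(self):
        self.holder = None
        self.normal_waiters = []
        self.low_waiters = []

    def request(self, who, low_priority=False):
        # Grant immediately only when the lock is free and no normal
        # request is pending; otherwise queue without blocking others.
        if self.holder is None and not self.normal_waiters:
            self.holder = who
            return True
        queue = self.low_waiters if low_priority else self.normal_waiters
        queue.append(who)
        return False

    def release(self):
        # Normal requests obtain the lock in priority to low-priority ones.
        if self.normal_waiters:
            self.holder = self.normal_waiters.pop(0)
        elif self.low_waiters:
            self.holder = self.low_waiters.pop(0)
        else:
            self.holder = None
```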
20140207987 | MULTIPROCESSOR SYSTEM WITH MULTIPLE CONCURRENT MODES OF EXECUTION - A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (IDs) are allocated to speculative threads from a pool of available numbers. The pool is divided into domains, with each domain being assigned to a mode of speculation. Modes of speculation include TM, TLS, and rollback. Allocation of the IDs is carried out with respect to a central state table and using hardware pointers. The IDs are used for writing different versions of speculative results in different ways of a set in a cache memory. | 07-24-2014 |
20140223058 | TERMINAL DEVICE, PROCESSING METHOD, AND PROGRAM THEREOF - A terminal device includes: an application processor that processes a started application program; an operation lock determination section that determines whether to start operation lock while processing the application program; and an operation lock processor that determines whether input operation information matches operation lock information in a case where the operation lock determination section determines to start the operation lock, the operation lock information indicating an operation to be restricted while processing the started application program, the operation lock processor restricting an operation corresponding with the input operation information in a case where the input operation information matches the operation lock information. | 08-07-2014 |
20140244876 | DATA PROCESSING LOCK SIGNAL TRANSMISSION - In accordance with one aspect of the present description, a node of the distributed computing system has multiple communication paths to a data processing resource lock which controls access to shared resources, for example. In this manner, at least one redundant communication path is provided between a node and a data processing resource lock to facilitate reliable transmission of data processing resource lock signals between the node and the data processing resource lock. Other features and aspects may be realized, depending upon the particular application. | 08-28-2014 |
20140250248 | METHOD FOR MANAGING A PROCESSOR, LOCK CONTENTION MANAGEMENT APPARATUS, AND COMPUTER SYSTEM - A method for managing a processor includes: obtaining an online request of a processor of a computer system; collecting lock contention information of the computer system if a lock contention status flag indicates a non-lock thrashing status; determining whether the computer system is in a lock thrashing status according to the lock contention information; and accepting the online request if it is determined that the computer system is in a non-lock thrashing status. By using the management method according to embodiments of the present application, processor performance degradation and a waste of idle processor resources that are caused by the case that the computer system is in a lock thrashing status are prevented, thereby improving utilization efficiency of processor resources and promoting overall performance of the computer system. | 09-04-2014 |
20140281085 | METHOD, APPARATUS, SYSTEM FOR HYBRID LANE STALLING OR NO-LOCK BUS ARCHITECTURES - A method, apparatus, and system to recover a clock for a bus comprising: to assign a master lane, to lock non-master lanes to the master lane, to fill the master lane during data inactivity, to idle the non-master lanes during data inactivity, to maintain clock for the master lane, and to recover the clock for the non-master lanes from the master lane. A method, apparatus, and system to transmit and receive serial data with an unsynchronized clock comprising: to transmit data in a bit stream, the data having multiple bit redundancy, to receive the data in the bit stream, to sample a value of the data in the bit stream, to use voting on the value of the data in the bit stream, and to determine a correct logic state for the data from the voting. | 09-18-2014 |
20140310438 | Semaphore with Timeout and Lock-Free Fast Path for Message Passing Architectures - The exemplary embodiments describe systems and methods for utilizing a semaphore with timeout and lock-free path for message passing architectures. One embodiment is related to a method comprising receiving a request from a client to access an object, the object including a plurality of resources, placing the request in a lock-free pend queue of a semaphore, manipulating a count of the semaphore based on an availability of at least one of the plurality of resources, and determining whether the client can use a fast path to the object. | 10-16-2014 |
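The fast-path-plus-timeout behavior above can be sketched with a counting semaphore: when a resource is free the count is decremented directly (the fast path), otherwise the caller pends, optionally with a timeout. Python's condition variable stands in for the lock-free pend queue of the original design; the names are illustrative.

```python
import threading

class FastPathSemaphore:
    """Sketch of a semaphore with a fast path and a timed slow path."""
    def __init__(self, count):
        self._count = count
        self._cond = threading.Condition()

    def take(self, timeout=None):
        with self._cond:
            if self._count > 0:
                self._count -= 1          # fast path: resource available
                return True
            # Slow path: pend until a resource is given back or we time out.
            ok = self._cond.wait_for(lambda: self._count > 0, timeout)
            if ok:
                self._count -= 1
            return ok                     # False means the timeout expired

    def give(self):
        with self._cond:
            self._count += 1
            self._cond.notify()
```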
20150032927 | APPARATUS, ELECTRONIC DEVICES AND METHODS ASSOCIATED WITH AN OPERATIVE TRANSITION FROM A FIRST INTERFACE TO A SECOND INTERFACE - Subject matter disclosed herein relates to an apparatus comprising memory and a controller, such as a controller which determines block locking states in association with operative transitions between two or more interfaces that share at least one block of memory. The apparatus may support single channel or multi-channel memory access, write protection state logic, or various interface priority schemes. | 01-29-2015 |
20150052273 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN CONTROL PROGRAM FOR INFORMATION PROCESSING APPARATUS - An information processing system includes: an information processing apparatus; and a terminal device configured to communicate with the information processing apparatus using a connection established between the information processing apparatus and the terminal device. The information processing apparatus notifies the terminal device of scheduled time of release of the connection, and the terminal device determines whether or not current time has passed the scheduled time notified from the information processing apparatus at the time of transmitting a request to the information processing apparatus and, in a case where the current time is determined to have passed the scheduled time, before transmitting the request to the information processing apparatus, transmits a connection request for establishing a connection with the information processing apparatus to the information processing apparatus. | 02-19-2015 |
20150058509 | ELECTRONIC APPARATUS AND PORT CONTROL METHOD - A method of an embodiment enables locking of downstream ports of a USB hub controller in an electronic apparatus. The method includes the determination step, the assertion step and the lock step. The determination step determines, with a BIOS, whether a lock setting has been made on each of the downstream ports. The assertion step performs, with the BIOS, assertion control for resetting the USB hub controller. The lock step performs, with the BIOS, lock control during the assertion control based on whether the lock setting has been made. | 02-26-2015 |
20150106542 | LOCK MANAGEMENT SYSTEM, LOCK MANAGEMENT METHOD AND LOCK MANAGEMENT PROGRAM - Provided is a lock management system, a lock management method and a lock management program whereby lock acquisition and release processes can be carried out at high speed. | 04-16-2015 |
20150113190 | Processing Concurrency in a Network Device - A processing unit of a packet processing node initiates a transaction with an accelerator engine to trigger the accelerator engine for performing a processing operation with respect to a packet, and triggers the accelerator engine to perform the processing operation. The processing unit attempts to retrieve a result of the processing operation from a memory location to which a result is to be written. It is determined whether the result has been written to the memory location, and when it is determined that the result has not yet been written to the memory location, the processing unit is locked until at least a portion of the result is written to the memory location. | 04-23-2015 |
20150317191 | SYSTEM AND METHOD FOR SUPPORTING AN ADAPTIVE SELF-TUNING LOCKING MECHANISM IN A TRANSACTIONAL MIDDLEWARE MACHINE ENVIRONMENT - A system and method can support an adaptive self-tuning locking mechanism in a transactional middleware machine environment. The system allows each process in a plurality of processes to perform one or more test-and-set (TAS) operations in order to obtain a lock for data in a shared memory. Then, the system can obtain a spin failed rate for a current tuning period, wherein a spin failure happens when a process fails to obtain the lock after performing a maximum number of rounds of TAS operations that are allowed. Furthermore, the system can adaptively configuring a spin count for a next tuning period based on the obtained spin failure rate, wherein the spin count specifies the maximum number of rounds of TAS operations that are allowed for the next tuning period. | 11-05-2015 |
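The per-period tuning step above can be sketched as a feedback rule on the spin count: raise it when the spin failure rate of the current period is high, lower it when almost no spins fail. The target rate, scaling factor, and bounds are illustrative assumptions, not values from the patent.

```python
def tune_spin_count(spin_count, failure_rate, target=0.05,
                    factor=2, min_spin=16, max_spin=4096):
    """Sketch of adaptive spin-count tuning for a TAS lock: the returned
    value is the maximum number of TAS rounds allowed next tuning period."""
    if failure_rate > target:
        # Too many acquirers exhausted their rounds: spin longer.
        spin_count = min(spin_count * factor, max_spin)
    elif failure_rate < target / 2:
        # Hardly any spin failures: spinning less wastes fewer cycles.
        spin_count = max(spin_count // factor, min_spin)
    return spin_count

print(tune_spin_count(64, 0.20))  # 128: high failure rate -> spin longer
print(tune_spin_count(64, 0.01))  # 32: low failure rate -> spin less
```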
20150355953 | Low Overhead Contention-Based Switching Between Ticket Lock And Queued Lock - A technique for low overhead contention-based switching between ticket locking and queued locking to access shared data may include establishing a ticket lock, establishing a queue lock, operating in ticket lock mode using the ticket lock to access the shared data during periods of relatively low data contention, and operating in queue lock mode using the queue lock to access the shared data during periods of relatively high data contention. | 12-10-2015 |
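The ticket-lock half of the scheme above has a compact classic form: each acquirer takes the next ticket and spins until the serving counter matches it, giving strict FIFO ordering. This sketch uses a mutex in place of an atomic fetch-and-add; the contention-based switch to a queued (MCS-style) lock is not shown.

```python
import itertools
import threading

class TicketLock:
    """Minimal ticket lock sketch with FIFO ordering."""
    def __init__(self):
        self._next = itertools.count()
        self.now_serving = 0
        self._mutex = threading.Lock()   # stands in for atomic fetch-add

    def acquire(self):
        with self._mutex:
            my_ticket = next(self._next)
        while self.now_serving != my_ticket:
            pass                          # spin; a real lock would pause/yield

    def release(self):
        self.now_serving += 1             # admit the next ticket holder
```

Under heavy contention every waiter spins on the one `now_serving` word, which is exactly the coherence-traffic problem that motivates switching to a queued lock, where each waiter spins on its own queue node instead.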
20150363243 | ADAPTIVE PROCESS FOR DATA SHARING WITH SELECTION OF LOCK ELISION AND LOCKING - In a Hardware Lock Elision (HLE) Environment, predictively determining whether an HLE transaction should actually acquire a lock and execute non-transactionally, is provided. Included is, based on encountering an HLE lock-acquire instruction, determining, based on an HLE predictor, whether to elide the lock and proceed as an HLE transaction or to acquire the lock and proceed as a non-transaction; based on the HLE predictor predicting to elide, setting the address of the lock as a read-set of the transaction, and suppressing any write by the lock-acquire instruction to the lock and proceeding in HLE transactional execution mode until an xrelease instruction is encountered wherein the xrelease instruction releases the lock or the HLE transaction encounters a transactional conflict; and based on the HLE predictor predicting not-to-elide, treating the HLE lock-acquire instruction as a non-HLE lock-acquire instruction, and proceeding in non-transactional mode. | 12-17-2015 |
20160077891 | HIGH PERFORMANCE LOCKS - Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component. | 03-17-2016 |
20160132364 | LOCK MANAGEMENT METHOD AND SYSTEM, METHOD AND APPARATUS FOR CONFIGURING LOCK MANAGEMENT SYSTEM - A lock management method and system, and a method and an apparatus for configuring a lock management system is provided. A corresponding level of a lock management system is set for each service execution node according to the number of service execution nodes included in a distributed system, the number of system instances on all service execution nodes, the number of handling processes on all the service execution nodes, and a delay of access of each service execution node to a central control node of the distributed system. At least one lock manager is allocated to each service execution node separately according to the level, which is corresponding to each service execution node, of the lock management system. A lock level context is configured for each lock manager, where the lock level context is used to determine an adjacent lock manager of each lock manager. | 05-12-2016 |
20160139967 | CONCURRENT COMPUTING WITH REDUCED LOCKING REQUIREMENTS FOR SHARED DATA - Where data are shared by multiple computer processing threads, modifying the data by determining whether modifying data associated with a first computer processing thread violates a constraint associated with the data, and responsive to determining that modifying the data associated with the computer processing thread violates the constraint associated with the data, using the data associated with the first computer processing thread to modify the data shared by the multiple computer processing threads that includes the first computer processing thread, where the constraint associated with the data associated with the first computer processing thread represents a portion of a tolerance value that is associated with the data shared by the multiple computer processing threads and that is divided among multiple constraints, where each of the constraints is associated with a different one of the multiple computer processing threads. | 05-19-2016 |
20160139968 | Autonomous Instrument Concurrency Management - An Autonomous Concurrency Management (ACM) subsystem enables test instruments (operating as servers) to reliably and efficiently handle a variety of seamless multi-device-under-test (multi-DUT) scenarios and with minimal cooperation from the original equipment manufacturer (OEM) client software (e.g. test plans, hardware abstraction layer, etc.). Concurrency capability is built directly into the test instruments. Making the instrument based concurrency autonomous means the OEM software code base need not be specifically implemented for concurrency, potentially saving thousands of lines of OEM software code. To support basic concurrency scenarios where clients asynchronously share the instrument, as well as advanced concurrency scenarios such as a broadcast scenario, the ACM includes software lock, client separator, client rendezvous, and client observer functionality. An instrument ACM subsystem simplifies the problem from the client's perspective by moving the complexity to the lowest software layer, the RF (test) instrument. | 05-19-2016 |
20160162341 | READER-WRITER LOCK - A method and system for implementing a reader-writer lock having a write lock requested by a thread is disclosed. The reader-writer lock is structured to have counters and a flag. The counters use an atomic process to count read locks held or outstanding read lock requests. The flag identifies a counter and is configured to distinguish between counters. A read lock is prepared, acquired, and released. The atomic process is used and the flag or flagged counter is polled. A write lock is prepared, acquired, and released. | 06-09-2016 |
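The counter-and-flag structure above can be sketched with two reader counters and a flag selecting the active one: readers increment whichever counter the flag points at, and a writer flips the flag and then polls the previously selected counter until it drains. This is a simplified single-process model under assumed names, with a mutex standing in for the atomic process the abstract describes.

```python
import threading

class TwoCounterRWLock:
    """Sketch of a flag-and-counters reader-writer lock."""
    def __init__(self):
        self.counts = [0, 0]             # read locks held, per counter
        self.flag = 0                    # identifies the active counter
        self._mutex = threading.Lock()   # stands in for the atomic process
        self._writer = threading.Lock()  # serializes writers

    def read_acquire(self):
        with self._mutex:
            idx = self.flag
            self.counts[idx] += 1
        return idx                       # reader remembers its counter

    def read_release(self, idx):
        with self._mutex:
            self.counts[idx] -= 1

    def write_acquire(self):
        self._writer.acquire()
        with self._mutex:
            old = self.flag
            self.flag = 1 - old          # new readers use the other counter
        while True:                      # poll the flagged counter until
            with self._mutex:            # pre-existing readers drain
                if self.counts[old] == 0:
                    return

    def write_release(self):
        self._writer.release()
```

Flipping the flag lets new readers proceed on the fresh counter while the writer waits only for readers that started before it, which is the prepare/acquire/release split the abstract outlines.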
20160378573 | Scalable RCU Callback Offloading - In order to scale Read-Copy Update (RCU) callback offloading from no-callbacks (No-CBs) CPUs, a set of RCU callback offload kernel threads (rcuo kthreads) may be spawned and each may be assigned to one of the No-CBs CPUs to invoke RCU callbacks generated by workloads running on the No-CBs CPUs at CPUs that are not No-CBs CPUs. Groups of the rcuo kthreads may be established, with each rcuo kthread group having one leader kthread and one or more follower rcuo kthreads. The leader rcuo kthreads may be periodically awakened without waking up the follower kthreads when an RCU grace period ends and an RCU callback needs to be invoked, or when a new RCU callback arrives and a new RCU grace period needs to be started. The leader rcuo kthreads may periodically awaken their associated follower rcuo kthreads for which the leader rcuo kthreads have sole responsibility to wake. | 12-29-2016 |
20170235696 | CHASSIS WITH LOCK MECHANISM | 08-17-2017 |