Patent application title: Processor-Implemented Systems And Methods For Event Handling
Inventors:
Charles Scott Shorb (Cary, NC, US)
IPC8 Class: AG06F946FI
USPC Class:
718/100
Class name: Electrical computers and digital processing systems: virtual machine task or process management or task management/control (task management or control)
Publication date: 2011-01-27
Patent application number: 20110023032
Abstract:
Systems and methods are provided for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution. A processor-implemented system and method can include a wait data structure which stores event conditions in order to determine when the thread should continue execution. Event objects, executing on one or more data processors, allow for thread synchronization. A pointer is stored with respect to a wait data structure in order to provide visibility of event conditions to the event objects. The thread continues execution when the stored event conditions are satisfied.
Claims:
1. A processor-implemented system for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution, said system comprising: a wait data structure, stored on a computer-readable medium, for storing event conditions which determine when the thread continues execution; event objects, executing on one or more data processors, that allow for thread synchronization; wherein a registration process provides a notification mechanism for a set of event objects that are associated with the one or more events upon which the thread waits to occur before continuing execution; wherein the notification includes the set of event objects being provided with visibility of the event conditions stored in the wait data structure; wherein a pointer is stored with respect to the wait data structure to provide visibility of the event conditions to the event objects; wherein the thread continues execution when the stored event conditions are satisfied.
2. The system of claim 1, wherein synchronization of the threads is performed without use of an operating system lock with respect to the events.
3. The system of claim 1, wherein the event objects have a post method for setting the event status and the post code of an event.
4. The system of claim 1, wherein the event objects have a clear method for clearing status of an event.
5. The system of claim 1, wherein the event objects have a get code method for obtaining the post code associated with an event.
6. The system of claim 1, wherein the event objects have a wait method for use by the thread to ensure that one or more event conditions have been satisfied before continuing execution.
7. The system of claim 1, wherein an operating system operates on a processing device that contains the thread; wherein the thread waits on multiple events whose count is larger than that supported by the wait semantics provided by the operating system.
8. The system of claim 1, wherein the thread operates in a multi-threaded environment provided by one or more data processing devices.
9. The system of claim 8, wherein the one or more data processing devices include a single multi-threaded object server or a networked set of multi-threaded object servers.
10. The system of claim 1, wherein the thread's execution involves accessing a computer-based resource.
11. The system of claim 10, wherein the accessing of the computer-based resource includes accessing a buffer on one or more data processing devices.
12. The system of claim 1, wherein the events are multiple asynchronous events.
13. The system of claim 1 further comprising an event data structure, stored on the computer-readable medium, for maintaining event state information and event status information for an event object.
14. The system of claim 13, wherein the event state information includes whether an event has been posted or cleared.
15. The system of claim 13, wherein the event status information includes post code information.
16. The system of claim 1, wherein the event objects have a wait method for use by the thread to ensure that one or more event conditions have been satisfied before continuing execution; wherein the wait method is provided with an array of pointers to the wait data structure which is opaque.
17. The system of claim 16, wherein, during registration, the wait method steps through each of the events and atomically stores a pointer back to the opaque wait data structure.
18. The system of claim 17, wherein, when the wait method is signaled, the wait method then de-registers each event which was registered with a wait data structure pointer.
19. A processor-implemented method for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution, said method comprising: storing on a computer-readable medium a wait data structure for storing event conditions which determine when the thread continues execution; executing event objects on one or more data processors to allow for thread synchronization; wherein a registration process provides a notification mechanism for a set of event objects that are associated with the one or more events upon which the thread waits to occur before continuing execution; wherein the notification includes the set of event objects being provided with visibility of the event conditions stored in the wait data structure; wherein a pointer is stored with respect to the wait data structure to provide visibility of the event conditions to the event objects; wherein the thread continues execution when the stored event conditions are satisfied.
20. Computer-readable storage medium or mediums encoded with instructions that cause a computer to perform a method for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution, said method comprising: storing a wait data structure for storing event conditions which determine when the thread continues execution; executing event objects to allow for thread synchronization; wherein a registration process provides a notification mechanism for a set of event objects that are associated with the one or more events upon which the thread waits to occur before continuing execution; wherein the notification includes the set of event objects being provided with visibility of the event conditions stored in the wait data structure; wherein a pointer is stored with respect to the wait data structure to provide visibility of the event conditions to the event objects; wherein the thread continues execution when the stored event conditions are satisfied.
Description:
TECHNICAL FIELD
[0001]This document relates generally to processor-implemented systems and methods for multi-threaded environments and more particularly to processor-implemented systems and methods for event handling within a multi-threaded environment.
BACKGROUND
[0002]Many operating systems support multiple concurrent threads of execution. In certain situations, threads must wait on particular events to occur before continuing their execution. However, operating systems generally limit the number of events that can be waited upon by the threads. Still further, the number of posted events required to complete a wait is typically limited to either one event or all of the waited-upon events.
[0003]For example, operating systems provide native event abstractions, such as the CreateEvent( ) routine in Windows and condition variables in Unix. Each of these is limited both in the number of events that can be simultaneously waited on and in that the events are waited on in an all-or-nothing fashion.
SUMMARY
[0004]In accordance with the teachings provided herein, systems and methods for operation upon data processing devices are provided for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution. As an example, a processor-implemented system and method can include a wait data structure, stored on a computer-readable medium, which stores event conditions in order to determine when the thread should continue execution. Event objects, executing on one or more data processors, allow for thread synchronization. A pointer is stored with respect to a wait data structure in order to provide visibility of event conditions to the event objects. The thread continues execution when the stored event conditions are satisfied.
[0005]As another example, a processor-implemented system and method can be configured for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution. A wait data structure, stored on a computer-readable medium, stores event conditions which determine when the thread continues execution. Event objects, executing on one or more data processors, allow for thread synchronization. A registration process provides a notification mechanism for a set of event objects that are associated with the one or more events upon which the thread waits to occur before continuing execution. The notification includes the set of event objects being provided with visibility of the event conditions stored in the wait data structure. A pointer is stored with respect to a wait data structure in order to provide visibility of the event conditions to the set of event objects. The thread continues execution when the stored event conditions are satisfied.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006]FIG. 1 is a block diagram depicting a processor-implemented system for handling one or more threads executing within a multi-threaded environment.
[0007]FIGS. 2 and 3 are block diagrams depicting examples of an event object for performing thread and event synchronization.
[0008]FIGS. 4-6 depict an example of an operational scenario, which involves wait processing, registration, and post-processing.
[0009]FIGS. 7 and 8 depict an example of event and wait structures.
[0010]FIGS. 9A-13 are state diagrams depicting the handling of unlimited events during a wait method without use of operating system locks.
DETAILED DESCRIPTION
[0011]FIG. 1 depicts at 30 a processor-implemented system that handles one or more threads 40 executing within a multi-threaded environment 50. One or more processing device(s) 60 provide the multi-threaded environment 50 for the threads 40. The processing device(s) 60 can include many different types of computing platforms, such as a single multi-threaded object server or a networked set of multi-threaded object servers. The processing device(s) 60 support multiple, concurrent requests for various computer-based resources.
[0012]During a thread's execution, a thread (e.g., thread 42) may have to wait on multiple asynchronous events 70 before it can continue its processing. A thread may have to wait for a variety of reasons (i.e., wait conditions). For example, a thread may be waiting on a series of buffers to be filled before processing (e.g., waiting on slower I/O). When a percentage of the buffers have been filled, the computation can begin processing while the I/O continues to use the buffers that are yet to be filled. In this example, the event handling system 30 can be configured so that any number of events 70 (e.g., buffer-related events) must be posted in order to satisfy the wait condition. To handle synchronization of threads 40 with respect to such events 70, the event handling system 30 includes event objects 80.
[0013]FIG. 2 depicts at 100 an example of an event object for performing thread and event synchronization. In this example, the event object 100 is an operating system level object which allows for synchronization of threads by using such methods 110 as: Post( ) 120, Clear( ) 130, GetCode( ) 140, and Wait( ) 150. The event object 100 can also be configured to maintain internal information in event structure(s) 160, such as state information (e.g., posted or cleared) and, when in the posted state, status or Post Code information.
[0014]More specifically with respect to the methods 110 of the event object 100, the Post method 120 of the event object 100 can be configured to check the status of the event (which information is stored in the structure(s) 160 of the event object 100). If it is not already posted, the Post method 120 sets the internal state to a POSTED value and updates the event's Post Code to that provided in the Post( ) method 120. Any thread attempting to wait (or currently waiting) on this event (which is associated with the event object 100) will have the condition met for this single event.
[0015]The Clear method 130 of the event object 100 operates to clear the status of the event. The Clear method's processing is the reverse of the Post operation. With respect to the GetCode method 140, if the status of the event is posted, the Post Code can be obtained using the GetCode method 140. This ensures that the event is in the posted state and that the code associated with the Post( ) call is returned.
[0016]Threads use the Wait method 150 of the event object 100 to ensure that one or more wait conditions have been met before continued processing. An event handling system 30 can allow for waiting on any number of events instead of one or all events. This is effected by management of the visibility of each component in the event/wait pairing, such as by providing the Wait method 150 with an array of pointers to wait structure(s) 200 (e.g., opaque wait structure(s)) as shown in FIG. 3.
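To make the shape of this interface concrete, the following C sketch shows one possible rendering of the event API described above. The type names and exact signatures are illustrative assumptions rather than the actual interface; Wait( ) takes an array of event pointers, the number of posts required, and a timeout, mirroring the any-number-of-events semantics.

```c
/* Hypothetical C sketch of the event API described in FIGS. 2-3.
 * Names and signatures are illustrative assumptions, not the actual API. */
#include <stddef.h>

typedef struct Event Event;            /* opaque event object (FIG. 2)    */

void Post(Event *ev, int post_code);   /* mark event posted, store code   */
void Clear(Event *ev);                 /* reverse of Post                 */
int  GetCode(Event *ev);               /* post code, if currently posted  */

/* Block until at least post_count of the wait_count events have been
 * posted, or until the timeout (in milliseconds) expires. */
int Wait(size_t wait_count, Event **wait_array,
         size_t post_count, long timeout_ms);
```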
[0017]As an example of utilization of the wait structure(s) 200, FIGS. 4-6 provide an operational scenario, which involves wait processing, registration, and post-processing. FIG. 4 depicts a wait processing scenario wherein an array of event pointers 300 is used by the Wait method with event objects 80 in order to handle events. In the array 300, a first pointer is associated with a first event object, a second pointer is associated with a second event object, and so on. A wait structure 200 is associated with a thread 42 which is waiting on multiple events.
[0018]FIG. 5 depicts the registration phase of this operational scenario. In the registration phase, the wait operation steps through each of the events and atomically stores a pointer back to the wait structure 200 as shown at 320. At the same time that the pointer is stored, the status of the event is obtained. If the number of posted events is greater than the number of events to be waited upon, then the process can be short circuited and de-registration processing can begin.
[0019]If all of the events register the wait structure pointer, then the waiting thread 42 enters into the OS wait processing. Threads posting registered event object(s) 80 update a counter in the wait structure 200. This counter tracks the number of events that need to be posted before the "woEvent" (i.e., the Event API provided by a commercially available operating system) for the waiting thread 42 can be posted. (The prefix "wo" has been generally added herein to more clearly indicate that an operating system routine is involved (instead of the WAIT and POST routines that are part of the innovative aspects disclosed herein).) If the count goes to zero, then negotiations are made to determine which thread should post the woEvent. Only one of the waiting thread 42 or an active thread posting the event object 80 will succeed in marking the wait structure 200 as having the responsibility of posting the woEvent. If the waiting thread 42 has this responsibility, no wait on the woEvent is necessary. Otherwise, the waiting thread 42 will be required to perform a wait on the woEvent; and the thread posting the registered event object 80 will post the woEvent.
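As a minimal sketch of the poster-side half of that negotiation, assume (purely for illustration) that the wait structure's counter and signal bit are packed into one atomic word as (count << 1) | S; the waiting thread's side and the real field layout are not shown here.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Poster-side sketch: decrement the count of posts still required and,
 * if this post brought it to zero, race for the S bit. Whoever wins the
 * S bit takes responsibility for posting the OS-level woEvent. */
static bool post_side_notify(atomic_uint *wait_stat)
{
    unsigned old = atomic_fetch_sub(wait_stat, 1u << 1);   /* count-- */
    if ((old >> 1) != 1)                /* other posts still outstanding */
        return false;

    unsigned expected = 0;              /* count == 0 and S == 0          */
    return atomic_compare_exchange_strong(wait_stat, &expected, 1u);
}
```

If post_side_notify( ) returns true, the posting thread would go on to post the woEvent; otherwise either more posts are still needed or the other party has already claimed that responsibility.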
[0020]Once signaled, the waiting thread 42 then de-registers each event which was registered with a wait structure pointer. It may be the case that a post or clear operation is occurring during this de-registration processing. The event status bits are used to detect this. A separate de-registration count is used to ensure that post and clear events having visibility of the wait structure are accounted for in this operational scenario.
[0021]FIG. 6 depicts the post-processing phase of this operational scenario. In this phase, post-processing attempts to change the status of the event to a posted state. At the same time the event state is updated to reflect a Posted Status, and any registered wait structure pointers are also obtained. If there is a valid wait structure pointer, then the counter in the wait structure 200 is updated. If this counter is zero and the wait signal bit can be obtained, then the posting thread 400 has the responsibility of posting the woEvent. It can also be the case that the post operation to update the wait counter occurs at the same time as a wait de-registration operation. The post operation will be notified of this by the wait structure pointer in its status field being zeroed out on return from updating the wait structure counter. At this point the post operation will use the wait structure de-registration counter as described above.
[0022]FIGS. 7 and 8 provide an example of event and wait structures that were discussed above. FIG. 7 depicts an example definition of event structure(s) 160. Generally, event objects have a user level structure of two atomic integer values. Atomic integers can be used to guarantee visibility of updates between threads without locking.
[0023]More specifically within the event structure(s) 160, Event.Status 500 determines the current state of the event. It is broken into two separate `fields` within the atomic integer value: A pointer to a wait structure (which is 4-byte aligned in this example) and one or more status bits. The status bits are used to determine the current posted state of an event; bitA determines the Posted Status, bitB protects against a multiple poster race condition.
[0024]When an event is in the posted state, the Event.Value field 510 of the event structure(s) 160 reflects the user-given value of the event. This value can be queried by an Event GetValue( ) method.
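A C sketch of such an event structure is shown below; it assumes the wait-structure pointer is at least 4-byte aligned so that its two low bits can carry the status, and the specific bit assignments are illustrative.

```c
#include <stdatomic.h>
#include <stdint.h>

#define EVT_STATUS_MASK ((uintptr_t)0x3)   /* two low status bits          */
#define EVT_BIT_A       ((uintptr_t)0x2)   /* Posted Status (bitA)         */
#define EVT_BIT_B       ((uintptr_t)0x1)   /* multiple-poster guard (bitB) */

typedef struct {
    /* Event.Status 500: Waitp (a 4-byte aligned pointer) packed with the
     * status bits in one atomic word, so waiters and posters observe a
     * consistent snapshot without locking. */
    atomic_uintptr_t status;
    /* Event.Value 510: user-given value, meaningful while posted. */
    atomic_int value;
} Event;

static inline uintptr_t evt_bits(uintptr_t s)  { return s & EVT_STATUS_MASK; }
static inline void     *evt_waitp(uintptr_t s) { return (void *)(s & ~EVT_STATUS_MASK); }
```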
[0025]FIG. 8 depicts an example definition of wait structure(s) 200, which are used to process Wait( ) requests. The wait structure(s) 200 pointed to by Waitp of Event.Status 500 in the event structure(s) 160 can be implemented in multiple ways, such as: an atomic structure within Wait( ); embedded within each thread context structure; or as a WaitCount field within the Thread Context. In essence, each wait structure has a count field and a separate wait object (which may be implied within a thread context).
[0026]More specifically within the wait structure(s) 200, Wait.WaitStat 600 determines the current state of the wait object. It is broken into two separate `fields` within the atomic integer value: a count before the WaitObject should be signaled and a status bit S which determines if the WaitObject has already been signaled.
[0027]Wait.DeRegCnt 610 is used for processing the de-registration of the Waitp after the wait operation conditions have been met. This can be accomplished in different ways, such as by using an additional status bit in Wait.WaitStat 600 to differentiate the current operational mode.
[0028]Also within the wait structure(s) 200, Wait.WaitObject 620 is the underlying wait object that the thread calling Wait( ) uses to wait until it has been signaled.
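Correspondingly, a wait structure might be sketched as follows. The (count << 1) | S packing matches the earlier poster-side sketch, and the use of a POSIX semaphore as the underlying WaitObject is an assumption for illustration only.

```c
#include <stdatomic.h>
#include <semaphore.h>

typedef struct {
    /* Wait.WaitStat 600: count of posts still required plus an S bit that
     * records whether WaitObject has already been signaled; packed here
     * as (count << 1) | S. */
    atomic_uint wait_stat;

    /* Wait.DeRegCnt 610: post/clear operations that still hold visibility
     * of this structure during de-registration. */
    atomic_int dereg_cnt;

    /* Wait.WaitObject 620: the OS-level object the thread calling Wait( )
     * blocks on until it is signaled. */
    sem_t wait_object;
} WaitStruct;
```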
[0029]As a further illustration of these structures, FIGS. 9A-13 provide another operational scenario in the form of state diagrams for handling unlimited events during a wait( ) method without use of operating system locks. The operational scenario illustrates an example implementation which allows threads to wait on multiple events whose count may be larger than the operating system provided wait API allows (example: Windows® limits the event count to 64). The event handling system can also be configured to wait on any number of events instead of a One-or-All model. The implementation depicted in the state diagrams accomplishes this with minimal resources (e.g., it minimizes operating system interaction). For example, the implementation reduces the number of required operating system calls/overhead, such as eliminating the need for both an operating system lock and an operating system event for each event structure. As another example, the implementation also eliminates the need for an operating system post/clear event for each Post( ) and Clear( ) operation to each event, as well as provides for a thread to wait for an unlimited (within hardware/operating system limits) number of wait events. Additionally, the operational scenario illustrates that no operating system lock is required for individual event object processing or wait object processing. This is also accomplished without using SPIN-LOCK semantics (which would diminish thread scaling performance).
[0030]FIGS. 9A-9B depict at 700 an operational scenario with respect to posting an event, which includes a registration phase 720, a phase 730 which involves waiting thread processing, and a waiting thread visibility de-registration phase 740. In this operational scenario, the post method is structured as shown at 710: Post(Event, Value). Posting an event allows for setting the internal value and ensuring that a thread waiting on the event will be awakened if its wait condition has been met. Within the registration phase 720, State P0 resolves the race condition with any other Post or Clear request. Progression to state P1 is only accomplished if the status bits can be successfully changed from cleared (00) to a processing state value (01).
[0031]State P1 sets the event's value to the value expressed in the Post call. State P1 resolves the race condition between a Wait request and a Post request. If the event status for the wait pointer remains null while the status is changed from processing (01) to a posted state (10), then the Post request has been completed. If a pointer to a wait structure has been successfully registered in the event status, then processing proceeds to state P2.
[0032]Within the waiting thread processing phase 730, State P2 decrements the wait structure's status counter. If this decrement operation reduced the count portion of the wait status to zero, then the thread's wait condition has been met and processing proceeds to state P2.1. If the resulting count is non-zero, then the wait condition has not been met and processing proceeds to state P3.
[0033]State P2.1 resolves the race condition between other post requests and the wait request. If the S bit of the wait structure WaitStat field is successfully changed from clear (0) to signaled (1), then the process proceeds to state P2.2 to signal the waiting thread; otherwise the bit will be in the signaled (1) state and processing continues at state P3.
[0034]State P2.2 sets the signal flag to true for future post (after status has been updated to POST).
[0035]State P3 resolves the race condition between post processing and wait deregistration. This is accomplished by capturing the value of the wait pointer in the event status while the status bits are changed from wait (11) to posted (10).
[0036]State P4 ensures that the waiting thread is signaled if necessary.
[0037]Within the de-registration phase 740, State P5 determines the next processing requirement. If the transition to a posted state P3 results in a valid Wait Pointer, then the post operation is complete. If the Wait Pointer is null, visibility processing continues at state P6.
[0038]State P6 completes the resolution of the race condition between post processing and wait deregistration processing. This is done by decrementing the wait structure pointer DeRegCnt field. If the result of this field is non-zero, the event processing is complete. If it is zero, wait processing requires another post, and then processing proceeds to state P6.1. State P6.1 posts the wait event, signaling the thread that all event processing threads with visibility have completed their operations.
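As a rough illustration of the registration-phase transitions only (states P0 and P1), the sketch below moves the packed status word from cleared (00) to processing (01), publishes the value, and then attempts the transition to posted (10), which succeeds only while no wait pointer is registered. The bit encoding follows the earlier sketches, and the wait-side states P2-P6 are deliberately omitted, so this is not a complete Post( ) implementation.

```c
#include <stdatomic.h>
#include <stdint.h>

enum { BITS = 0x3, POSTED = 0x2 /* 10 */, PROCESSING = 0x1 /* 01 */ };

/* Sketch of Post( ) states P0-P1 only. 'status' packs the wait pointer
 * with the two status bits and 'value' holds the post code, as in the
 * earlier event-structure sketch. */
static void post_sketch(atomic_uintptr_t *status, atomic_int *value, int code)
{
    uintptr_t s = atomic_load(status);

    /* P0: cleared (00) -> processing (01), preserving any registered waitp. */
    for (;;) {
        if ((s & BITS) != 0)                  /* already posted or in flight */
            return;
        if (atomic_compare_exchange_weak(status, &s, s | PROCESSING))
            break;
    }

    /* P1: publish the post code, then processing (01) -> posted (10);
     * this CAS succeeds only while no wait pointer is registered. */
    atomic_store(value, code);
    uintptr_t expected = (uintptr_t)PROCESSING;
    if (atomic_compare_exchange_strong(status, &expected, (uintptr_t)POSTED))
        return;                               /* no waiter: post is complete */

    /* Otherwise 'expected' now carries a registered wait structure pointer
     * and the waiting-thread processing of states P2 onward would follow. */
}
```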
[0039]FIGS. 10A-10B depict at 800 an operational scenario with respect to clearing an event, which includes a registration phase 820, a phase 830 which involves waiting thread processing, and a waiting thread visibility de-registration phase 840. In this operational scenario, the clear method is structured as shown at 810: Clear(Event). Clearing an event is the act of setting the state of the event to be cleared (or unposted). It also updates any wait processing state to include one less posted event (if necessary).
[0040]Within the registration phase 820, State C0 resolves the race condition with any other Post or Clear request. Progression to state C1 is only accomplished if the status bits can be successfully changed from posted (10) to a processing state value (01).
[0041]State C1 resolves the race condition between a Wait request and a Clear request. If the event status for the wait pointer remains null while the status is changed from processing (01) to a cleared state (00) then the Clear request has been completed. If a pointer to a wait structure has been successfully registered in the event status, the process proceeds to state C2.
[0042]Within the waiting thread processing phase 830, State C2 increments the wait structure's status counter. It also resolves the race condition between this clear request and a wait request. If the event status for the wait pointer remains non-null while the status bits can be changed from wait thread processing (11) to cleared (00), then the clear operation is complete. Otherwise the wait pointer will be null and processing continues at state C3.
[0043]Within the de-registration phase 840, State C3 completes the resolution of the race condition between clear processing and wait deregistration processing. This is done by decrementing the wait structure pointer DeRegCnt field. If the result of this field is non-zero, the event processing is complete. If it is zero, wait processing requires another post and processing proceeds to state C3.1 wherein the wait event is posted.
[0044]FIG. 11 depicts at 900 an operational scenario with respect to obtaining an event's internal value. In this operational scenario, the Get Value method is structured as shown at 910: GetValue(Event). Getting an event value involves ensuring that the event is in a posted state and returning the value.
[0045]Within the operational scenario 900, State G0 validates the current state of the event. Progression to state G1 is only accomplished if the status bits are equal to the posted state value (10). Any other value for these status bits returns an `event cleared` value.
[0046]State G1 returns the current value of the event. Race condition protection can be added in this operational scenario via the status bits. However, it is noted that the race condition resolves itself from the perspective that the posted value is considered to be the value of the last posted state.
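A corresponding sketch of this GetValue path, reusing the packed status word from the earlier sketches (the EVENT_CLEARED return value is an illustrative assumption):

```c
#include <stdatomic.h>
#include <stdint.h>

#define EVENT_CLEARED (-1)     /* illustrative 'event cleared' return value */

/* Sketch of GetValue states G0-G1: return the posted value only if the
 * status bits currently read as posted (10). */
static int get_value_sketch(atomic_uintptr_t *status, atomic_int *value)
{
    uintptr_t s = atomic_load(status);
    if ((s & 0x3) != 0x2)          /* G0: not in the posted (10) state */
        return EVENT_CLEARED;
    return atomic_load(value);     /* G1: value from the last post     */
}
```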
[0047]FIGS. 12 and 13 provide an operational scenario with respect to waiting on event(s), which includes a registration phase 1020, a signaled processing phase 1030, and an event deregistration phase 1040. In this operational scenario, the wait method is structured as shown at 1010: Wait(WaitCount, WaitArray, PostCount, Timeout). Threads use the wait process to wait on any number of events to be in a posted state. This provides the capability to wait on any number of events from one to WaitCount, as in the usage sketch below.
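A usage sketch of that capability, using the hypothetical Wait( ) declaration from the earlier API sketch (event creation and teardown are assumed and not shown):

```c
#include <stddef.h>

typedef struct Event Event;
int Wait(size_t wait_count, Event **wait_array,
         size_t post_count, long timeout_ms);

/* Block until any 2 of the 3 events have been posted, or until a
 * 5-second timeout expires; the parameter values are illustrative. */
static int wait_for_two_of_three(Event *a, Event *b, Event *c)
{
    Event *events[3] = { a, b, c };
    return Wait(3, events, 2, 5000);   /* WaitCount=3, PostCount=2 */
}
```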
[0048]Within the registration phase 1020, State W0 sets up the internal Wait structure status and initializes internal variables. If the parameter WaitCount is zero (i.e., a thread does not want to wait on any events), then the wait processing proceeds directly to state W2.1.
[0049]State W1 begins the process of setting the event status wait ptr to point back to the address of this thread's wait structure. The status is set and progress continues at state W1.1. This process continues for each of the events until all the events specified in WaitArray have been thus registered.
[0050]State W1.1 keeps track of how many events are already in the posted state. If the state is truly posted (10), then processing proceeds to state W1.2. Any other event state processing will return to state W1 for the next pass.
[0051]State W1.2 increments the current posted count (pCnt). At this point there is a chance to short circuit the event registration process (though it is not required). The conditions of the wait operation are considered met when the number of posted events is equal to PostCount. This can be done with a check at state W1.2 (pCnt==PostCount). It is noted that this check may not be completely accurate, as events could have been cleared during wait processing. A complete solution can use the equation: ((pCnt-Waitp->WaitStat.WaitCount)>=PostCount). In either case, a short circuit method can be used for early termination of the event registration phase.
[0052]Within the signaled processing phase 1030, State W2 determines if the current wait condition has been met. Note that precedence has been given to not having to wait, in that State W0 initializes the wait status as already signaled. State W2 reverses this assumption if a wait is necessary. The number of posted events (pCnt) is subtracted from PostCount. This remaining value is then added to the wait status count field. If this field is less than or equal to zero, then processing proceeds to state W3. If it is greater than zero and the wait count status bit can be successfully cleared, then processing proceeds to state W2.1.
[0053]State W2.1 waits on the operating system wait object, clearing it after the wait has been completed. Processing continues at state W3.
[0054]State W3 begins the event deregistration phase 1040. If there are no events to deregister, the wait has completed. If there are events, then internal variables are initialized and processing continues at state W3.1.
[0055]State W3.1 is the reverse of the registration processing loop begun at state W1. For each event that was registered, the Waitp needs to be removed from the event status waitp field. If there are more events to deregister, then processing proceeds to state W3.2. If the count of deregistered events equals the count of registered events, then processing proceeds to state W4.
[0056]State W3.2 initializes internal variables and then processes deregistration of the waitp for the current event. This resolves the race condition between Post, Clear, and Wait. If the waitp field is nulled while the status bits are active (11), then processing proceeds to state W3.3. If the waitp field can be nulled with any other status (00, 01, 10), then processing proceeds for the next event at state W3.1.
[0057]State W3.3 tracks the number of visibility race conditions. The visibility counter (vCnt) is incremented and processing proceeds to the next loop at state W3.1.
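A minimal sketch of the W3.2/W3.3 step for a single event, again assuming the packed status word from the earlier sketches: the wait pointer is atomically nulled while the status bits are preserved, and the return value indicates whether a racing post or clear still had visibility (bits 11), which would contribute one to vCnt.

```c
#include <stdatomic.h>
#include <stdint.h>

/* W3.2/W3.3 sketch: null out this event's registered wait pointer, keep
 * the two status bits, and report whether the bits were active (11) at
 * the moment of de-registration. */
static int deregister_event_sketch(atomic_uintptr_t *status)
{
    uintptr_t s = atomic_load(status);
    for (;;) {
        uintptr_t cleared = s & (uintptr_t)0x3;  /* keep bits, drop waitp */
        if (atomic_compare_exchange_weak(status, &s, cleared))
            return ((s & 0x3) == 0x3) ? 1 : 0;   /* W3.3: add 1 to vCnt   */
        /* s was refreshed by the failed CAS; retry with the new value.  */
    }
}
```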
[0058]State W4 determines if visibility processing is required (i.e., race conditions were encountered at state W3.2). If vCnt is zero, then wait processing is complete. If vCnt is non-zero, processing continues to state W5.
[0059]State W5 finalizes the deregistration race condition processing. It adds the number of visibility events (vCnt) to the Waitp DeRegCnt field. If this is zero, all visibility events are accounted for and wait processing is complete. If this is non-zero, a final wait is performed on the OS wait object. After this wait completes, the wait operation is complete.
[0060]It should be understood that similar to the other processing flows described herein, the processing flow of this operational scenario may be altered, modified, removed and/or augmented and still achieve the desired outcome.
[0061]While examples have been used to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention, the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Accordingly, the examples disclosed herein are to be considered non-limiting. As an illustration, the systems and methods may be implemented on various types of computer architectures, such as for example on a single general purpose computer or workstation, or on a networked system, or in a client-server configuration, or in an application service provider configuration.
[0062]It is further noted that the systems and methods may include data signals conveyed via networks (e.g., local area network, wide area network, internet, combinations thereof, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication with one or more data processing devices. The data signals can carry any or all of the data disclosed herein that is provided to or from a device.
[0063]Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein.
[0064]The systems' and methods' data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
[0065]The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein.
[0066]The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
[0067]It should be understood that as used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of "and" and "or" include both the conjunctive and disjunctive and may be used interchangeably unless the context expressly dictates otherwise; the phrase "exclusive or" may be used to indicate a situation where only the disjunctive meaning may apply.
Claims:
1. A processor-implemented system for synchronization of a thread, wherein
the thread waits for one or more events to occur before continuing
execution, said system comprising:wait data structure, stored on a
computer-readable medium, for storing event conditions which determine
when the thread continues execution;event objects, executing on one or
more data processors, that allow for thread synchronization;wherein a
registration process provides a notification mechanism for a set of event
objects that are associated with the one or more events upon which the
thread waits to occur before continuing execution;wherein the
notification includes the set of event objects being provided with
visibility of the event conditions stored in the wait data
structure;wherein a pointer is stored with respect to the wait data
structure to provide visibility of the event conditions to the event
objects;wherein the thread continues execution when the stored event
conditions are satisfied.
2. The system of claim 1, wherein synchronization of the threads is performed without use of an operating system lock with respect to the events.
3. The system of claim 1, wherein the event objects have a post method for setting the event status and the post code of an event.
4. The system of claim 1, wherein the event objects have a clear method for clearing status of an event.
5. The system of claim 1, wherein the event objects have a get code method for obtaining the post code associated with an event.
6. The system of claim 1, wherein the event objects have a wait method for use by the thread to ensure that one or more event conditions have been satisfied before continuing execution.
7. The system of claim 1, wherein an operating system operates on a processing device that contains the thread; wherein the thread waits on multiple events whose count is larger than wait semantics provided by the operating system.
8. The system of claim 1, wherein the thread operates in a multi-threaded environment provided by one or more data processing devices.
9. The system of claim 8, wherein the one or more data processing devices include a single multi-threaded object server or a networked set of multi-threaded object servers.
10. The system of claim 1, wherein the thread's execution involves accessing a computer-based resource.
11. The system of claim 10, wherein the accessing of the computer-based resource includes accessing a buffer on one or more data processing devices.
12. The system of claim 1, wherein the events are multiple asynchronous events.
13. The system of claim 1 further comprising an event data structure, stored on the computer-readable medium, for maintaining event state information and event status information for an event object.
14. The system of claim 13, wherein the event state information includes whether an event has been posted or cleared.
15. The system of claim 13, wherein the event status information includes post code information.
16. The system of claim 1, wherein the event objects have a wait method for use by the thread to ensure that one or more event conditions have been satisfied before continuing execution;wherein the wait method is provided with an array of pointers to the wait data structure which is opaque.
17. The system of claim 16, wherein, during registration, the wait method steps through each of the events and atomically stores a pointer back to the opaque wait data structure.
18. The system of claim 17, wherein, when the wait method is signaled, the wait method then de-registers each event which was registered with a wait data structure pointer.
19. A processor-implemented method for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution, said method comprising:storing on a computer-readable medium a wait data structure for storing event conditions which determine when the thread continues execution;executing event objects on one or more data processors to allow for thread synchronization;wherein a registration process provides a notification mechanism for a set of event objects that are associated with the one or more events upon which the thread waits to occur before continuing execution;wherein the notification includes the set of event objects being provided with visibility of the event conditions stored in the wait data structure;wherein a pointer is stored with respect to the wait data structure to provide visibility of the event conditions to the event objects;wherein the thread continues execution when the stored event conditions are satisfied.
20. Computer-readable storage medium or mediums encoded with instructions that cause a computer to perform a method for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution, said method comprising:storing a wait data structure for storing event conditions which determine when the thread continues execution;executing event objects to allow for thread synchronization;wherein a registration process provides a notification mechanism for a set of event objects that are associated with the one or more events upon which the thread waits to occur before continuing execution;wherein the notification includes the set of event objects being provided with visibility of the event conditions stored in the wait data structure;wherein a pointer is stored with respect to the wait data structure to provide visibility of the event conditions to the event objects;wherein the thread continues execution when the stored event conditions are satisfied.
Description:
TECHNICAL FIELD
[0001]This document relates generally to processor-implemented systems and methods for multi-threaded environments and more particularly to processor-implemented systems and methods for event handling within a multi-threaded environment.
BACKGROUND
[0002]Many operating systems support multiple concurrent threads of execution. In certain situations, threads must wait on particular events to occur before continuing their execution. However, operating systems generally limit the number of events that can be waited upon by the threads. Still further, the number of events posted to trigger a wait complete is limited to one or all events posted.
[0003]For example, operating systems provide native event abstractions, such as the CreateEvent( ) routine in Windows and condition variables in Unix. Each of these has a limitation both in the number of events that can be simultaneously waited on as well as a limitation that the events are waited on in an all or nothing fashion.
SUMMARY
[0004]In accordance with the teachings provided herein, systems and methods for operation upon data processing devices are provided for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution. As an example, a processor-implemented system and method can include a wait data structure, stored on computer-readable medium, which stores event conditions in order to determine when the thread should continue execution. Event objects, executing on one or more data processors, allow for thread synchronization. A pointer is stored with respect to a wait data structure in order to provide visibility of event conditions to the set of event objects. The thread continues execution when the stored event conditions are satisfied.
[0005]As another example, a processor-implemented system and method can be configured for synchronization of a thread, wherein the thread waits for one or more events to occur before continuing execution. A wait data structure, stored on computer-readable medium, stores event conditions which determine when the thread continues execution. Event objects, executing on one or more data processors, allow for thread synchronization. A registration process provides a notification mechanism for a set of event objects that are associated with the one or more events upon which the thread waits to occur before continuing execution. The notification includes the set of event objects being provided with visibility of the event conditions stored in the wait data structure. A pointer is stored with respect to a wait data structure in order to provide visibility of the event conditions to the set of event objects. The thread continues execution when the stored event conditions are satisfied.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006]FIG. 1 is a block diagram depicting a processor-implemented system for handling one or more threads executing within a multi-threaded environment.
[0007]FIGS. 2 and 3 are block diagrams depicting examples of an event object for performing thread and event synchronization.
[0008]FIGS. 4-6 depict an example of an operational scenario, which involves wait processing, registration, and post-processing.
[0009]FIGS. 7 and 8 depict an example of event and wait structures.
[0010]FIGS. 9A-13 are state diagrams depicting the handling of unlimited events during a wait method without use of operating system locks.
DETAILED DESCRIPTION
[0011]FIG. 1 depicts at 30 a processor-implemented system that handles one or more threads 40 executing within a multi-threaded environment 50. One or more processing device(s) 60 provide the multi-threaded environment 50 for the threads 40. The processing device(s) 60 can include many different types of computing platforms, such as a single multi-threaded object server or a networked set of multi-threaded object servers. The processing device(s) 60 support multiple, concurrent requests for various computer-based resources.
[0012]During a thread's execution, a thread (e.g., thread 42) may have to wait on multiple asynchronous events 70 before it can continue its processing. A thread may have to wait for a variety of reasons (i.e., wait conditions). For example, a thread may be waiting on a series of buffers to be full before processing (e.g., waiting on slower I/O). When a percentage of the buffers have been filled, the computation can begin processing while the I/O continues to use the buffers that are yet to be filled. In this example, the event handling system 30 can be configured to allow for any number of events 70 (e.g., buffer-related events) to have to be posted in order to satisfy the wait condition. To handle synchronization of threads 40 with respect to such events 70, the event handling system 30 includes event objects 80.
[0013]FIG. 2 depicts at 100 an example of an event object for performing thread and event synchronization. In this example, the event object 100 is an operating system level object which allows for synchronization of threads by using such methods 110 as: Post( ) 120, Clear( ) 130, GetCode( ) 140, and Wait( ) 150. The event object 100 also can be configured to maintain internal pieces of information in event structure(s) 160, such as state information (e.g., posted or cleared) and, when in the posted state, status or Post Code information.
[0014]More specifically with respect to the methods 110 of the event object 100, the Post method 120 of the event object 100 can be configured to check the status of the event (which information is stored in the structure(s) 160 of the event object 100). If it is not already posted, the Post method 120 sets the internal state to a POSTED value and updates the event's Post Code to that provided in the Post( ) method 120. Any thread attempting to wait (or is currently waiting) on this event (which is associated with the event object 100) will have the condition met for this single event.
[0015]The Clear method 130 of the event object 100 operates to clear the status of the event. The Clear method's processing is the reverse of the Post operation. With respect to the GetCode method 140, if the status of the event is posted, the Post Code can be obtained using the GetCode method 140. This ensures that the event is in the posted state and that the code associated with the Post( ) call is returned.
[0016]Threads use the Wait method 150 of the event object 100 to ensure that one or more wait conditions have been met before continued processing. An event handling system 30 can allow for waiting on any number of events instead of one or all events. This is effected by management of the visibility of each component in the event/wait pairing, such as by providing the Wait method 150 with an array of pointers to wait structure(s) 200 (e.g., opaque wait structure(s)) as shown in FIG. 3.
[0017]As an example of utilization of the wait structure(s) 200, FIGS. 4-6 provide an operational scenario, which involves wait processing, registration, and post-processing. FIG. 4 depicts a wait processing scenario wherein an array of event pointers 300 are used by the Wait method with event objects 80 in order to handle events. In the array 300, a first pointer is associated with a first event object, a second pointer is associated with a second event, and so on. A wait structure(s) 200 is associated with a thread 42 which is waiting on multiple events.
[0018]FIG. 5 depicts the registration phase of this operational scenario. In the registration phase, the wait operation steps through each of the events and atomically stores a pointer back to the wait structure 200 as shown at 320. At the same time that the pointer is stored, the status of the event is obtained. If the number of posted events is greater than the number of events to be waited upon, then the process can be short circuited and de-registration processing can begin.
[0019]If all of the events register the wait structure pointer, then the waiting thread 42 enters into the OS wait processing. Threads posting registered event object(s) 80 update a counter in the wait structure 200. This counter tracks the number of events that need to be posted before the "woEvent" (i.e., the Event API provided by a commercially available operating system) for the waiting thread 42 can be posted. (The prefix "wo" has been generally added herein to more clearly indicate that an operating system routine is involved (instead of the WAIT and POST routines that are part of the innovative aspects disclosed herein).) If the count goes to zero, then negotiations are made to determine who should post the woEvent. Only one of the waiting thread 42 or an active thread posting the event object 80 will succeed marking the wait structure 200 for the responsibility of posting the woEvent. If the waiting thread 42 has this responsibility, no wait on the woEvent is necessary. Otherwise, the waiting thread 42 will be required to perform a wait on the woEvent; and the thread posting the registered event object 80 will post woEvent.
[0020]Once signaled, the waiting thread 42 then de-registers each event which was registered with a wait structure pointer. It may be the case that a post or clear operation is occurring during this de-registration processing. The event status bits are used to detect this. A separate de-registration count is used to ensure that post and clear events having visibility of the wait structure are accounted for in this operational scenario.
[0021]FIG. 6 depicts the post-processing phase of this operational scenario. In this phase, post-processing attempts to change the status of the event to a posted state. At the same time the event state is updated to reflect a Posted Status, and any registered wait structure pointers are also obtained. If there is a valid wait structure pointer, then the counter in the wait structure 200 is updated. If this counter is zero and the wait signal bit can be obtained, then the posting thread 400 has the responsibility of posting the woEvent. It can also be the case that the post operation to update the wait counter occurs at the same time as a wait de-registration operation. The post operation will be notified of this by the wait structure pointer in its status field being zeroed out on return from updating the wait structure counter. At this point the post operation will use the wait structure de-registration counter as described above.
[0022]FIGS. 7 and 8 provide an example of event and wait structures that were discussed above. FIG. 7 depicts an example definition of event structure(s) 160. Generally, event objects have a user level structure of two atomic integer values. Atomic integers can be used to guarantee visibility of updates between threads without locking.
[0023]More specifically within the event structure(s) 160, Event.Status 500 determines the current state of the event. It is broken into two separate `fields` within the atomic integer value: A pointer to a wait structure (which is 4-byte aligned in this example) and one or more status bits. The status bits are used to determine the current posted state of an event; bitA determines the Posted Status, bitB protects against a multiple poster race condition.
[0024]When an event is in the posted state, the Event.Value value 510 of the event structure(s) 160 reflects the user given value of the event. This value can be queried by an Event GetValue( ) method.
[0025]FIG. 8 depicts an example definition of wait structure(s) 200, which are used to process Wait( ) requests. The wait structure(s) 200 pointed to by Waitp of Event.Status 500 in the event structure(s) 160 can be implemented in multiple ways, such as in the following ways: an atomic structure within Wait( ); embedded within each thread context structure; as a WaitCount field within the Thread Context, etc. In essence, each wait structure has a count field and a separate wait object (which may be implied within a thread context).
[0026]More specifically within the wait structure(s) 200, Wait.WaitStat 600 determines the current state of the wait object. It is broken into two separate `fields` within the atomic integer value: a count before the WaitObject should be signaled and a status bit S which determines if the WaitObject has already been signaled.
[0027]Wait.DeRegCnt 610 is used for processing the de-registration of the Waitp after the wait operation conditions have been met. This can be accomplished in different ways, such as by differentiating the current mode with an additional status bit in Wait.WaitStat 600 to determine the current operational mode.
[0028]Also within the wait structure(s) 200, Wait.WaitObject 620 is the underlying wait object that the thread calling Wait( ) uses to wait until it has been signaled.
[0029]As a further illustration of these structures, FIGS. 9-13 provide another operational scenario in the form of state diagrams for handling unlimited events during a wait( ) method without use of operating system locks. The operational scenario illustrates an example implementation which allows threads to wait on multiple events whose count may be larger than the operating system provided wait API (example: Windows® limits the event count to 64). The event handling system can also be configured to wait on any number events instead of a One-or-All model. The implementation depicted in the state diagrams accomplishes this with a minimal of resources (e.g., minimizes operating system interaction). For example, the implementation reduces the number of required operating system calls/overhead, such as eliminating the need for both an operating system lock and operating system event for each event structure. As another example, the implementation also eliminates the need for an operating system post/clear event for each Post( ) and Clear( ) operation to each event as well as provides for a thread to wait for an unlimited (within hardware/operating system limits) number of wait events. Additionally, the operational scenario illustrates that no operating system lock is required for individual event object processing or wait object processing. This is also accomplished without using a SPIN-LOCK semantics (which diminishes thread scaling performance).
[0030]FIGS. 9A-9B depict at 700 an operational scenario with respect to posting an event and includes a registration phase 720, a phase 730 which involves waiting thread processing, and a waiting thread visibility de-registration phase 740. In this operational scenario, the post method is structured as shown at 710: Post (Event, Value). Posting an event allows for setting the internal value and ensuring that a thread waiting on the event will be awakened if its wait condition has been met. Within the registration phase 720, State P0 resolves the race condition with any other Post or Clear request. Progression to state P1 is only accomplished if the status bits can be successfully changed from cleared (00) to a processing state value (01).
[0031]State P1 sets the events value to the value expressed in the Post call. State P1 resolves the race condition between a Wait request and a Post request. If the event status for the wait pointer remains null while the status is changed from processing (01) to a posted state (10) then the Post request has been completed. If a pointer to a wait structure has been successfully registered in the event status, then processing proceeds to state P2.
[0032]Within the waiting thread processing phase 730, State P2 decrements the wait structure's status counter. If this decrement operation reduced the count portion of the wait status to zero then the thread's wait condition has been met; processing proceeds at state P2.1. If this decrement operation is non-zero then the wait condition has not been met and processing proceeds to state P3.
[0033]State P2.1 resolves the race condition between other post requests and the wait request. If the S bit of the wait structure WaitStat field is successfully changed from clear (0) to signaled (1), then the process proceeds to state P2.2 to signal the waiting thread; otherwise the bit will be in the signaled (1) state and processing continues at state P3.
[0034]State P2.2 sets the signal flag to true so that the waiting thread is signaled later, at state P4, after the event status has been updated to the posted state.
[0035]State P3 resolves the race condition between post processing and wait deregistration. This is accomplished by capturing the value of the wait pointer in the event status while the status bits are changed from wait (11) to posted (10).
[0036]State P4 ensures that the waiting thread is signaled if necessary.
[0037]Within the de-registration phase 740, State P5 determines the next processing requirement. If the transition to the posted state at state P3 resulted in a valid Wait Pointer, then the post operation is complete. If the Wait Pointer is null, visibility processing continues at state P6.
[0038]State P6 completes the resolution of the race condition between post processing and wait deregistration processing. This is done by decrementing the wait structure pointer DeRegCnt field. If the result of this field is non-zero, the event processing is complete. If it is zero, wait processing requires another post, and then processing proceeds to state P6.1. State P6.1 posts the wait event, signaling the thread that all event processing threads with visibility have completed their operations.
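Continuing the illustrative sketch (same Event, Wait, and helper definitions as above), the waiting thread processing phase 730 and de-registration phase 740 of the post operation, states P2 through P6.1, might be read as follows.

    void notifyWaiter(Event& ev, Wait& w) {
        bool signalWaiter = false;                        // the flag set at state P2.2

        // State P2: decrement the count field of the wait structure's WaitStat.
        int64_t ws = w.waitStat.fetch_sub(ONE_EVENT) - ONE_EVENT;
        if (countOf(ws) == 0) {
            // State P2.1: only the poster that flips S from clear (0) to signaled (1)
            // takes responsibility for waking the waiting thread.
            for (;;) {
                int64_t cur = w.waitStat.load();
                if (cur & S_BIT) break;                   // already signaled; continue at P3
                if (w.waitStat.compare_exchange_weak(cur, cur | S_BIT)) {
                    signalWaiter = true;                  // State P2.2
                    break;
                }
            }
        }

        // State P3: capture the wait pointer while the status bits move from wait (11)
        // to posted (10), resolving the race with wait de-registration.
        uintptr_t s = ev.status.load();
        Wait* captured;
        do {
            captured = waitPtr(s);
        } while (!ev.status.compare_exchange_weak(s, pack(captured, POSTED)));

        // State P4: signal the waiting thread only after the status has been updated.
        if (signalWaiter)
            w.waitObject.post();

        // States P5/P6/P6.1: a null captured pointer means the waiter de-registered while
        // this poster still had visibility; the last participant through DeRegCnt posts once more.
        if (captured == nullptr && w.deRegCnt.fetch_sub(1) == 1)
            w.waitObject.post();
    }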
[0039]FIGS. 10A-10B depict at 800 an operational scenario with respect to clearing an event, which includes a registration phase 820, a phase 830 which involves waiting thread processing, and a waiting thread visibility de-registration phase 840. In this operational scenario, the clear method is structured as shown at 810: Clear(Event). Clearing an event is the act of setting the state of the event to be cleared (or unposted). It also updates any wait processing state to reflect one fewer posted event (if necessary).
[0040]Within the registration phase 820, State C0 resolves the race condition with any other Post or Clear request. Progression to state C1 is only accomplished if the status bits can be successfully changed from posted (10) to a processing state value (01).
[0041]State C1 resolves the race condition between a Wait request and a Clear request. If the event status for the wait pointer remains null while the status is changed from processing (01) to a cleared state (00) then the Clear request has been completed. If a pointer to a wait structure has been successfully registered in the event status, the process proceeds to state C2.
[0042]Within the waiting thread processing phase 830, State C2 increments the wait structure's status counter. It also resolves the race condition between this clear request and a wait request. If the event status for the wait pointer remains non-null while the status bits can be changed from wait thread processing (11) to cleared (00), then the clear operation is complete. Otherwise the wait pointer will be null and processing continues at state C3.
[0043]Within the de-registration phase 840, State C3 completes the resolution of the race condition between clear processing and wait deregistration processing. This is done by decrementing the wait structure pointer DeRegCnt field. If the result of this field is non-zero, the event processing is complete. If it is zero, wait processing requires another post and processing proceeds to state C3.1 wherein the wait event is posted.
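Under the same assumptions, the clear operation (states C0 through C3.1) might be sketched as follows; the handling of an event that is not currently posted is simplified to an early return, and this remains an illustrative reading rather than the patented implementation.

    void Clear(Event& ev) {
        // State C0: only a posted event is cleared; move posted (10) to processing (01),
        // keeping any registered wait pointer.
        for (;;) {
            uintptr_t s = ev.status.load();
            if (stateOf(s) != POSTED) return;             // nothing to clear (simplification)
            if (ev.status.compare_exchange_weak(s, pack(waitPtr(s), PROCESSING)))
                break;
        }

        // State C1: resolve the race between a Wait request and this Clear request.
        for (;;) {
            uintptr_t s = ev.status.load();
            Wait* w = waitPtr(s);
            if (w == nullptr) {
                if (ev.status.compare_exchange_weak(s, CLEARED))
                    return;                               // no waiter registered: Clear complete
            } else if (ev.status.compare_exchange_weak(s, pack(w, WAITING))) {
                // State C2: give the waiter back one required post, then capture the wait
                // pointer while the status bits drop from wait processing (11) to cleared (00).
                w->waitStat.fetch_add(ONE_EVENT);
                uintptr_t cur = ev.status.load();
                Wait* captured;
                do {
                    captured = waitPtr(cur);
                } while (!ev.status.compare_exchange_weak(cur, pack(captured, CLEARED)));
                if (captured != nullptr)
                    return;                               // clear operation complete
                // States C3/C3.1: the waiter de-registered during processing; the last
                // participant through DeRegCnt posts the wait object.
                if (w->deRegCnt.fetch_sub(1) == 1)
                    w->waitObject.post();
                return;
            }
        }
    }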
[0044]FIG. 11 depicts at 900 an operational scenario with respect to obtaining an event's internal value. In this operational scenario, the Get Value method is structured as shown at 910: GetValue(Event). Getting an event's value entails ensuring that the event is in a posted state and returning the value.
[0045]Within the operational scenario 900, State G0 validates the current state of the event. Progression to state G1 is only accomplished if the status bits are equal to the posted state value (10). Any other value for these status bits returns an `event cleared` value.
[0046]State G1 returns the current value of the event. Race condition protection can be added in this operational scenario via the status bits. However, it is noted that the race condition resolves itself from the perspective that the posted value is considered to be the value of the last posted state.
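Under the same assumptions, states G0 and G1 might be sketched as follows; the EVENT_CLEARED sentinel is an assumption of the sketch, standing in for the "event cleared" return value mentioned above.

    constexpr intptr_t EVENT_CLEARED = -1;                // assumed "event cleared" value

    intptr_t GetValue(const Event& ev) {
        // State G0: only a posted event (status bits 10) yields its internal value.
        if (stateOf(ev.status.load()) != POSTED)
            return EVENT_CLEARED;
        // State G1: return the value of the last posted state.
        return ev.value.load();
    }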
[0047]FIGS. 12 and 13 provide an operational scenario with respect to waiting on event(s), which includes a registration phase 1020, a signaled processing phase 1030, and an event deregistration phase 1040. In this operational scenario, the wait method is structured as shown at 1010: Wait(WaitCount, WaitArray, PostCount, Timeout). Threads use the wait process to wait on any number of events to be in a posted state. This provides the capability to wait on any number of events from one to WaitCount.
[0048]Within the registration phase 1020, State W0 sets up the internal Wait structure status and initializes internal variables. If the parameter WaitCount is zero (i.e., a thread does not want to wait on any events), then the wait processing proceeds directly to state W2.1.
[0049]State W1 begins the process of setting the event status wait ptr to point back to the address of this thread's wait structure. The status is set and progress continues at state W1.1. This process continues for each of the events until all the events specified in WaitArray have been thus registered.
[0050]State W1.1 keeps track of how many events are already in the posted state. If the state is truly posted (10), then processing proceeds to state W1.2. For any other event state, processing returns to state W1 for the next pass.
[0051]State W1.2 increments the current posted count (pCnt). At this point there is a chance to short-circuit the event registration process (though this is not required). The conditions of the wait operation are considered met when the number of posted events is equal to PostCount. This can be done with a check at state W1.2 (pCnt == PostCount). It is noted that this check may not be completely accurate, as events could have been cleared during wait processing. A complete solution can use the equation: ((pCnt - Waitp->WaitStat.WaitCount) >= PostCount). In either case, a short-circuit method can be used for early termination of the event registration phase.
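Under the same assumptions, the registration loop of states W1, W1.1, and W1.2, including the optional short-circuit check, might be sketched as the following helper; it is used by the WaitFor sketch given after the description of states W4 and W5 below, and its names and signature are illustrative only.

    // Returns how many events were registered; pCnt reports how many were already posted.
    int registerEvents(Wait& w, Event* const waitArray[], int waitCount, int postCount, int& pCnt) {
        pCnt = 0;
        int registered = 0;
        while (registered < waitCount) {
            Event& ev = *waitArray[registered];
            // State W1: point the event's wait pointer back at this thread's wait structure,
            // preserving whatever status bits are currently set.
            uintptr_t s = ev.status.load();
            while (!ev.status.compare_exchange_weak(s, pack(&w, stateOf(s)))) {
            }
            ++registered;
            // State W1.1: count events that were already in the posted state.
            if (stateOf(s) == POSTED) {
                ++pCnt;                                   // State W1.2
                // Optional short circuit: ((pCnt - Waitp->WaitStat.WaitCount) >= PostCount).
                if (pCnt - countOf(w.waitStat.load()) >= postCount)
                    break;
            }
        }
        return registered;
    }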
[0052]Within the signaled processing phase 1030, State W2 determines if the current wait condition has been met. Note that precedence has been given to the thread that is not waiting, in that State W0 initializes the wait status as already signaled. State W2 reverses this assumption if a wait is necessary. The number of posted events (pCnt) is subtracted from PostCount. This remaining value is then added to the wait status count field. If this field is less than or equal to zero, then processing proceeds to state W3. If it is greater than zero and the wait count status bit can be successfully cleared, then processing proceeds to state W2.1.
[0053]State W2.1 waits on the operating system wait object, clearing it after the wait has been completed. Processing continues at state W3.
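Under the same assumptions, states W2 and W2.1 might be sketched as the following helper, which adds the remaining post count to WaitStat and, only if the wait condition is still unmet, clears the S bit set up at state W0 and blocks on the wait object; it too is used by the WaitFor sketch below.

    void waitIfNeeded(Wait& w, int postCount, int pCnt) {
        int64_t needed = int64_t(postCount - pCnt);
        // State W2: add the remaining value to the count field of WaitStat.
        int64_t ws = w.waitStat.fetch_add(needed * ONE_EVENT) + needed * ONE_EVENT;
        if (countOf(ws) <= 0) return;                     // wait condition already met
        for (;;) {
            int64_t cur = w.waitStat.load();
            if (countOf(cur) <= 0) return;                // a poster satisfied it meanwhile
            // Reverse the "already signaled" assumption made at W0 by clearing the S bit.
            if (w.waitStat.compare_exchange_weak(cur, cur & ~S_BIT)) {
                // State W2.1: block on the operating system wait object, then clear it so
                // it can be reused for the final visibility wait at state W5.
                w.waitObject.wait();
                w.waitObject.clear();
                return;
            }
        }
    }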
[0054]State W3 begins the event deregistration phase 1040. If there are no events to deregister, the wait has completed. If there are events, then internal variables are initialized and processing continues at state W3.1.
[0055]State W3.1 is the reverse of the registration processing loop begun at state W1. For each event that was registered, the Waitp needs to be removed from the event status waitp field. If there are more events to deregister, then processing proceeds to state W3.2. If the count of deregistered events equals the count of registered events, then processing proceeds to state W4.
[0056]State W3.2 initializes internal variables and then processes deregistration of the waitp for the current event. This resolves the race condition between Post, Clear, and Wait. If the waitp field is nulled while the status bits are active (11), then processing proceeds to state W3.3. If the waitp field can be nulled with any other status (00, 01, 10), then processing proceeds for the next event at state W3.1.
[0057]State W3.3 tracks the number of visibility race conditions. The visibility counter (vCnt) is incremented and processing proceeds to the next loop at state W3.1.
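Under the same assumptions, the de-registration loop of states W3.1 through W3.3 might be sketched as the following helper, which removes the wait pointer from each registered event and returns the visibility count (vCnt) of removals that raced with an in-flight Post or Clear.

    int deregisterEvents(Event* const waitArray[], int registered) {
        int vCnt = 0;
        for (int i = 0; i < registered; ++i) {
            Event& ev = *waitArray[i];
            // State W3.2: null the waitp field while capturing the current status bits.
            uintptr_t s = ev.status.load();
            while (!ev.status.compare_exchange_weak(s, stateOf(s))) {
            }
            if (stateOf(s) == WAITING)                    // active (11): a Post/Clear has visibility
                ++vCnt;                                   // State W3.3
        }
        return vCnt;
    }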
[0058]State W4 determines if visibility processing is required (i.e., race conditions were encountered at state W3.2). If vCnt is zero, then wait processing is complete. If vCnt is non-zero, processing continues to state W5.
[0059]State W5 finalizes the deregistration race condition processing. It adds the number of visibility events (vCnt) to the Waitp DeRegCnt field. If this is zero, all visibility events are accounted for and wait processing is complete. If this is non-zero, a final wait is performed on the OS wait object. After this wait completes, the wait operation is complete.
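Finally, still under the same assumptions, a wait routine assembling states W0 through W5 from the helpers sketched above might look roughly as follows; the Timeout parameter of the Wait method is omitted, and the routine name and argument order are illustrative only.

    bool WaitFor(int waitCount, Event* const waitArray[], int postCount) {
        // State W0: set up the wait structure; the status starts out "already signaled"
        // (S = 1, count = 0) so that a thread whose condition is already met never blocks.
        Wait w;
        w.waitStat.store(S_BIT);

        if (waitCount == 0) {
            // The description proceeds directly to state W2.1 for a WaitCount of zero; with
            // no registered events and the Timeout parameter omitted, this sketch just returns.
            return true;
        }

        // States W1-W1.2: register with each event and count those already posted.
        int pCnt = 0;
        int registered = registerEvents(w, waitArray, waitCount, postCount, pCnt);

        // States W2/W2.1: block on the wait object only if the condition is still unmet.
        waitIfNeeded(w, postCount, pCnt);

        // States W3-W3.3: remove the wait pointer from every registered event.
        int vCnt = deregisterEvents(waitArray, registered);

        // States W4/W5: if any de-registration raced with an in-flight Post or Clear, wait
        // until every participant with visibility has checked out through DeRegCnt.
        if (vCnt != 0 && w.deRegCnt.fetch_add(vCnt) + vCnt != 0)
            w.waitObject.wait();

        return true;
    }

As an illustration of how these sketches fit together, one thread might call WaitFor(3, events, 2) to block until any two of three events have been posted by other threads calling Post( ); again, this is a reading of the state diagrams offered for illustration only.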
[0060]It should be understood that similar to the other processing flows described herein, the processing flow of this operational scenario may be altered, modified, removed and/or augmented and still achieve the desired outcome.
[0061]While examples have been used to disclose the invention, including the best mode, and also to enable any person skilled in the art to make and use the invention, the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Accordingly, the examples disclosed herein are to be considered non-limiting. As an illustration, the systems and methods may be implemented on various types of computer architectures, such as for example on a single general purpose computer or workstation, or on a networked system, or in a client-server configuration, or in an application service provider configuration.
[0062]It is further noted that the systems and methods may include data signals conveyed via networks (e.g., local area network, wide area network, internet, combinations thereof, etc.), fiber optic medium, carrier waves, wireless networks, etc. for communication with one or more data processing devices. The data signals can carry any or all of the data disclosed herein that is provided to or from a device.
[0063]Additionally, the methods and systems described herein may be implemented on many different types of processing devices by program code comprising program instructions that are executable by the device processing subsystem. The software program instructions may include source code, object code, machine code, or any other stored data that is operable to cause a processing system to perform the methods and operations described herein.
[0064]The systems' and methods' data (e.g., associations, mappings, data input, data output, intermediate data results, final data results, etc.) may be stored and implemented in one or more different types of computer-implemented data stores, such as different types of storage devices and programming constructs (e.g., RAM, ROM, Flash memory, flat files, databases, programming data structures, programming variables, IF-THEN (or similar type) statement constructs, etc.). It is noted that data structures describe formats for use in organizing and storing data in databases, programs, memory, or other computer-readable media for use by a computer program.
[0065]The systems and methods may be provided on many different types of computer-readable media including computer storage mechanisms (e.g., CD-ROM, diskette, RAM, flash memory, computer's hard drive, etc.) that contain instructions (e.g., software) for use in execution by a processor to perform the methods' operations and implement the systems described herein.
[0066]The computer components, software modules, functions, data stores and data structures described herein may be connected directly or indirectly to each other in order to allow the flow of data needed for their operations. It is also noted that a module or processor includes but is not limited to a unit of code that performs a software operation, and can be implemented for example as a subroutine unit of code, or as a software function unit of code, or as an object (as in an object-oriented paradigm), or as an applet, or in a computer script language, or as another type of computer code. The software components and/or functionality may be located on a single computer or distributed across multiple computers depending upon the situation at hand.
[0067]It should be understood that as used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. Finally, as used in the description herein and throughout the claims that follow, the meanings of "and" and "or" include both the conjunctive and disjunctive and may be used interchangeably unless the context expressly dictates otherwise; the phrase "exclusive or" may be used to indicate situation where only the disjunctive meaning may apply.