Patent application number | Description | Published |
20080243965 | COOPERATIVE DLL UNLOAD - Loading and unloading a plurality of libraries on a computing device having a loader lock and internal and external counts for each library in the plurality of libraries is disclosed. The libraries assume an initialize state, followed by an initialized state, a pending unload state, and an unload state according to when the internal and external counts are incremented and decremented. When in the pending unload state, functions of the library that require acquiring the loader lock exit, the internal count is decremented by one, and the loader lock is released. Prior to entering the pending unload state, a library may be placed into a reloadable state. A library in the reloadable state may be reloaded upon request until a timer times out. When the timer times out, the library in the reloadable state transitions into the pending unload state. | 10-02-2008 |
20080244550 | DYNAMIC DLL CYCLE RESOLUTION - Deterministically resolving cycles in a library tree is disclosed. Resolving cycles supports certain processes such as safe library initialization. Cycles in the library tree are identified; at least one soft link in each identified cycle is identified; and the at least one soft link in each identified cycle is broken. If a cycle has no soft links, notification is provided indicating that the cycle cannot be broken. Identifying at least one soft link in each identified cycle comprises, for each link in the cycle, determining the dependent and supporting libraries; and determining if one or more functions in the supporting library are required for initializing the dependent library. | 10-02-2008 |
20080244551 | PARALLEL DLL TREE INITIALIZATION - A parallel processing method and apparatus for initializing libraries is disclosed. Libraries for an application are identified, an initialization order for the libraries is determined, and the libraries are initialized in asynchronous stages. The initialization order is determined by forming a library tree of the libraries' references and determining a load order for the references according to the levels of the references in the library tree. The asynchronous stages comprise a loading stage that includes a load queue, a snapping stage that includes a snap queue, and an initializing stage that includes an initialize queue. | 10-02-2008 |
20100031254 | Efficient detection and response to spin waits in multi-processor virtual machines - Various aspects are disclosed herein for attenuating spin waiting in a virtual machine environment comprising a plurality of virtual machines and virtual processors. Selected virtual processors can be given time slice extensions in order to prevent such virtual processors from becoming de-scheduled (and hence causing other virtual processors to have to spin wait). Selected virtual processors can also be expressly scheduled so that they can be given higher priority to resources, resulting in reduced spin waits for other virtual processors waiting on such selected virtual processors. Finally, various spin wait detection techniques can be incorporated into the time slice extension and express scheduling mechanisms, in order to identify potential and existing spin waiting scenarios. | 02-04-2010 |
20100332721 | OPERATING SYSTEM VIRTUAL MEMORY MANAGEMENT FOR HARDWARE TRANSACTIONAL MEMORY - Operating system virtual memory management for hardware transactional memory. A method may be performed in a computing environment where an application running on a first hardware thread has been in a hardware transaction, with transactional memory hardware state in cache entries correlated by memory hardware when data is read from or written to data cache entries. The data cache entries are correlated to physical addresses in a first physical page mapped from a first virtual page in a virtual memory page table. The method includes an operating system deciding to unmap the first virtual page. As a result, the operating system removes the mapping of the first virtual page to the first physical page from the virtual memory page table. As a result, the operating system performs an action to discard transactional memory hardware state for at least the first physical page. Embodiments may further suspend hardware transactions in kernel mode. Embodiments may further perform soft page fault handling without aborting a hardware transaction, resuming the hardware transaction upon return to user mode, and even successfully committing the hardware transaction. | 12-30-2010 |
20110145552 | Handling Operating System (OS) Transitions In An Unbounded Transactional Memory (UTM) Mode - In one embodiment, the present invention includes a method for receiving control in a kernel mode via a ring transition from a user thread during execution of an unbounded transactional memory (UTM) transaction, updating a state of a transaction status register (TSR) associated with the user thread and storing the TSR with a context of the user thread, and later restoring the context during a transition from the kernel mode to the user thread. In this way, the UTM transaction may continue on resumption of the user thread. Other embodiments are described and claimed. | 06-16-2011 |
20110154378 | API NAMESPACE VIRTUALIZATION - A computer operating system with a map that relates API namespaces to components that implement interface contracts for the namespaces. When an API namespace is to be used, a loader within the operating system uses the map to determine which components to load. An application can reference an API namespace in the same way as it references a dynamically linked library, but the implementation of the interface contract for the API namespace is not tied to a single file or to a static collection of files. The map may identify versions of the API namespace or values of runtime parameters that may be used to select appropriate files to implement an interface contract in scenarios that may depend on factors such as hardware in the execution environment, a version of the API namespace against which an application was developed or the application accessing the API namespace. | 06-23-2011 |
20110214128 | ONE-TIME INITIALIZATION - Aspects of the present invention are directed at providing safe and efficient ways for a program to perform a one-time initialization of a data item in a multi-threaded environment. In accordance with one embodiment, a method is provided that allows a program to perform a synchronized initialization of a data item that may be accessed by multiple threads. More specifically, the method includes receiving a request to initialize the data item from a current thread. In response to receiving the request, the method determines whether the current thread is the first thread to attempt to initialize the data item. If the current thread is the first thread to attempt to initialize the data item, the method enforces mutual exclusion and blocks other attempts to initialize the data item made by concurrent threads. Then, the current thread is allowed to execute program code provided by the program to initialize the data item. | 09-01-2011 |
20110219379 | ONE-TIME INITIALIZATION - Aspects of the present invention are directed at providing safe and efficient ways for a program to perform a one-time initialization of a data item in a multi-threaded environment. In accordance with one embodiment, a method is provided that allows a program to perform a synchronized initialization of a data item that may be accessed by multiple threads. More specifically, the method includes receiving a request to initialize the data item from a current thread. In response to receiving the request, the method determines whether the current thread is the first thread to attempt to initialize the data item. If the current thread is the first thread to attempt to initialize the data item, the method enforces mutual exclusion and blocks other attempts to initialize the data item made by concurrent threads. Then, the current thread is allowed to execute program code provided by the program to initialize the data item. | 09-08-2011 |
20120284485 | OPERATING SYSTEM VIRTUAL MEMORY MANAGEMENT FOR HARDWARE TRANSACTIONAL MEMORY - Operating system virtual memory management for hardware transactional memory. A system includes an operating system deciding to unmap a first virtual page. As a result, the operating system removes the mapping of the first virtual page to the first physical page from the virtual memory page table. As a result, the operating system performs an action to discard transactional memory hardware state for at least the first physical page. Embodiments may further suspend hardware transactions in kernel mode. Embodiments may further perform soft page fault handling without aborting a hardware transaction, resuming the hardware transaction upon return to user mode, and even successfully committing the hardware transaction. | 11-08-2012 |
20150039869 | Handling Operating System (OS) Transitions In An Unbounded Transactional Memory (UTM) Mode - In one embodiment, the present invention includes a method for receiving control in a kernel mode via a ring transition from a user thread during execution of an unbounded transactional memory (UTM) transaction, updating a state of a transaction status register (TSR) associated with the user thread and storing the TSR with a context of the user thread, and later restoring the context during a transition from the kernel mode to the user thread. In this way, the UTM transaction may continue on resumption of the user thread. Other embodiments are described and claimed. | 02-05-2015 |
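The one-time initialization scheme of 20110214128 and 20110219379 above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the class name `OneTimeInit` and its methods are hypothetical, and real operating systems expose the same pattern through primitives such as Win32 `InitOnceExecuteOnce` or C++ `std::call_once`. The first thread to request initialization runs the program-supplied callback; concurrent requesters are blocked until it completes, so the callback runs exactly once.

```python
import threading

class OneTimeInit:
    """Hypothetical sketch: first caller runs the program-supplied
    initializer; concurrent callers block until it completes."""

    def __init__(self):
        self._lock = threading.Lock()
        self._done = threading.Event()
        self._claimed = False
        self._value = None

    def get(self, initializer):
        if self._done.is_set():          # fast path: already initialized
            return self._value
        run_init = False
        with self._lock:
            if not self._claimed:        # current thread is the first
                self._claimed = True
                run_init = True
        if run_init:
            self._value = initializer()  # program-provided init code
            self._done.set()             # release the waiters
        else:
            self._done.wait()            # concurrent attempts block here
        return self._value
```

Every caller receives the same initialized value, and the initializer body executes once regardless of how many threads race on the first request.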
Patent application number | Description | Published |
20130067475 | MANAGING PROCESSES WITHIN SUSPEND STATES AND EXECUTION STATES - One or more techniques and/or systems are provided for suspending logically related processes associated with an application, determining whether to resume a suspended process based upon a wake policy, and/or managing an application state of an application, such as timer and/or system message data. That is, logically related processes associated with an application, such as child processes, may be identified and suspended based upon logical relationships between the processes (e.g., a logical container hierarchy may be traversed to identify logically related processes). A suspended process may be resumed based upon a wake policy. For example, a suspended process may be resumed based upon an inter-process communication call policy that may be triggered by an application attempting to communicate with the suspended process. Application data may be managed while an application is suspended so that the application may be resumed in a current and/or relevant state. | 03-14-2013 |
20130067490 | MANAGING PROCESSES WITHIN SUSPEND STATES AND EXECUTION STATES - One or more techniques and/or systems are provided for suspending logically related processes associated with an application, determining whether to resume a suspended process based upon one or more wake policies, and/or managing an application state of an application, such as timer and/or system message data. That is, logically related processes associated with an application, such as child processes, may be identified and suspended based upon logical relationships between the processes (e.g., a logical container hierarchy may be traversed to identify logically related processes). A suspended process may be resumed based upon a set of wake policies. For example, a suspended process may be resumed based upon an inter-process communication call policy that may be triggered by an application attempting to communicate with the suspended process. Application data may be managed while an application is suspended so that the application may be resumed in a current and/or relevant state. | 03-14-2013 |
20130067495 | MANAGING PROCESSES WITHIN SUSPEND STATES AND EXECUTION STATES - One or more techniques and/or systems are provided for suspending logically related processes associated with an application, determining whether to resume a suspended process based upon one or more wake policies, and/or managing an application state of an application, such as timer and/or system message data. That is, logically related processes associated with an application, such as child processes, may be identified and suspended based upon logical relationships between the processes (e.g., a logical container hierarchy may be traversed to identify logically related processes). A suspended process may be resumed based upon a set of wake policies. For example, a suspended process may be resumed based upon an inter-process communication call policy that may be triggered by an application attempting to communicate with the suspended process. Application data may be managed while an application is suspended so that the application may be resumed in a current and/or relevant state. | 03-14-2013 |
20130191541 | BACKGROUND TASK RESOURCE CONTROL - Among other things, one or more techniques and/or systems are provided for controlling resource access for background tasks. For example, a background task created by an application may utilize a resource (e.g., CPU cycles, bandwidth usage, etc.) by consuming resource allotment units from an application resource pool. Once the application resource pool is exhausted, the background task is generally restricted from utilizing the resource. However, the background task may also utilize global resource allotment units from a global resource pool shared by a plurality of applications to access the resource. Once the global resource pool is exhausted, unless the background task is a guaranteed background task which can consume resources regardless of resource allotment states of resource pools, the background task may be restricted from utilizing the resource until global resource allotment units within the global resource pool and/or resource allotment units within the application resource pool are replenished. | 07-25-2013 |
20140366045 | DYNAMIC MANAGEMENT OF COMPOSABLE API SETS - Systems and methods for composing a dynamic runtime API set schema employing a base API set schema and a set of API set schema extensions are disclosed. A base API set schema may be loaded into system memory at boot time with an associated set of host base binaries. A set of API set schema extensions binaries may also be loaded into system memory at boot time. At a second time, the API set schema extensions may be merged into the base API set schema on a dynamic as-needed basis. | 12-11-2014 |
20140372356 | PREDICTIVE PRE-LAUNCH FOR APPLICATIONS - Systems and methods of pre-launching applications in a computer system, said applications being likely to be activated by a user from a terminated and/or suspended process state, are disclosed. The pre-launching of an application may be based on the assessed probability of the application being activated—as well as the level of availability of system resources to effect such pre-launching. Applications may be pre-launched based on these and other conditions/considerations, designed to improve the user's experience of a quick launch of applications in the background. Several prediction models are presented to provide a good estimate of the likelihood of an application being activated by a user. Such prediction models may comprise an adaptive predictor (based on past application usage situations) and/or a switch rate predictor (based on historic data of an application being switched and, possibly, having a decay rate applied to such switch rate measure). | 12-18-2014 |
20140373021 | Assigning and Scheduling Threads for Multiple Prioritized Queues - An operating system provides a pool of worker threads servicing multiple queues of requests at different priority levels. A concurrency controller limits the number of currently executing threads. The system tracks the number of currently executing threads above each priority level, and preempts operations of lower priority worker threads in favor of higher priority worker threads. A system can have multiple pools of worker threads, with each pool having its own priority queues and concurrency controller. A thread also can change its priority mid-operation. If a thread becomes lower priority and is currently active, then steps are taken to ensure priority inversion does not occur. In particular, the current thread for the now lower priority item can be preempted by a thread for a higher priority item and the preempted item is placed in the lower priority queue. | 12-18-2014 |
20140373032 | PREFETCHING CONTENT FOR SERVICE-CONNECTED APPLICATIONS - Systems and methods of pre-fetching data for applications in a computer system that are terminated or suspended and may be pre-launched by the computer system are disclosed. The applications may employ data that is remote from the computer system and available from a third party content resource. A method for pre-fetching such remote data comprises associating a set of applications with such data and/or its location; determining a set of pre-fetching conditions; determining which applications may be pre-fetched; and pre-fetching the data if the pre-fetch conditions meet a desired pre-fetch policy. A predictive module or technique may be used to identify those applications which may be pre-launched. The present system may comprise a pre-fetch success module capable of measuring the success data for a current pre-fetch and associating such success data with an application to improve future pre-fetches. | 12-18-2014 |
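The resource-allotment model of 20130191541 above lends itself to a small sketch. The class and method names below are illustrative, not from the patent text: a background task first draws units from its application's pool, falls back to a shared global pool when that is exhausted, and is restricted once both are empty, unless it is a guaranteed task, which consumes resources regardless of pool state.

```python
class ResourceController:
    """Illustrative sketch of per-application and shared global
    resource allotment pools for background tasks."""

    def __init__(self, app_allotment, global_allotment):
        self._app_allotment = app_allotment        # per-app refill level
        self._global_allotment = global_allotment  # global refill level
        self.app_pools = {}                        # app id -> remaining units
        self.global_pool = global_allotment

    def try_consume(self, app, units, guaranteed=False):
        """Charge a background task's resource use against the pools."""
        if guaranteed:                     # guaranteed tasks ignore pool state
            return True
        remaining = self.app_pools.setdefault(app, self._app_allotment)
        if remaining >= units:             # application pool first
            self.app_pools[app] = remaining - units
            return True
        if self.global_pool >= units:      # then the shared global pool
            self.global_pool -= units
            return True
        return False                       # restricted until replenished

    def replenish(self):
        """Periodic refill of both the application and global pools."""
        self.app_pools = {a: self._app_allotment for a in self.app_pools}
        self.global_pool = self._global_allotment
```

A task denied by `try_consume` would be throttled or deferred until `replenish` runs, mirroring the abstract's "restricted ... until ... resource allotment units ... are replenished".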
Patent application number | Description | Published |
20080288750 | Small barrier with local spinning - A barrier with local spinning. The barrier is described as a barrier object having a bit vector embedded as a pointer. If the vector bit is zero, the object functions as a counter; if the vector bit is one, the object operates as a pointer to a stack. The object includes the total number of threads required to rendezvous at the barrier to trigger release of the threads. The object points to a stack block list that describes each thread that has arrived at the barrier. Arriving at the barrier involves reading the top stack block, pushing onto the list a stack block for the thread that just arrived, decrementing the thread count, and spinning on corresponding local memory locations or timing out and blocking. When the last thread arrives at the barrier, the barrier is reset and all threads at the barrier are awakened for the start of the next process. | 11-20-2008 |
20090187784 | FAIR AND DYNAMIC CENTRAL PROCESSING UNIT SCHEDULING - Embodiments that facilitate the fair and dynamic distribution of central processing unit (CPU) time are disclosed. In accordance with one embodiment, a method includes organizing one or more processes into one or more groups. The method further includes allocating a CPU time interval for each group. The allocation of a CPU time interval for each group is accomplished by equally distributing a CPU cycle based on the number of groups. The method also includes adjusting the allocated CPU time intervals based on a change in the quantity of the one or more groups. | 07-23-2009 |
20090249094 | POWER-AWARE THREAD SCHEDULING AND DYNAMIC USE OF PROCESSORS - Techniques and apparatuses for providing power-aware thread scheduling and dynamic use of processors are disclosed. In some aspects, a multi-core system is monitored to determine core activity. The core activity may be compared to a power policy that balances a power savings plan with a performance plan. One or more of the cores may be parked in response to the comparison to reduce power consumption by the multi-core system. In additional aspects, the power-aware scheduling may be performed during a predetermined interval to dynamically park or unpark cores. Further aspects include adjusting the power state of unparked cores in response to the comparison of the core activity and power policy. | 10-01-2009 |
20100017581 | LOW OVERHEAD ATOMIC MEMORY OPERATIONS - Embodiments that provide low-overhead restricted memory transactions are disclosed. In accordance with one embodiment, the method includes providing one or more references to processor-specific data that corresponds to a first processor. The method further includes detecting an interrupt to the first processor when the interrupt indicates modification of the one or more references to the processor-specific data during the execution of one or more instructions. The method also includes taking remedial action on the one or more instructions when the interrupt is detected. | 01-21-2010 |
20100083261 | INTELLIGENT CONTEXT MIGRATION FOR USER MODE SCHEDULING - Embodiments for performing directed switches between user mode schedulable (UMS) thread and primary threads are disclosed. In accordance with one embodiment, a primary thread user portion is switched to a UMS thread user portion so that the UMS thread user portion is executed in user mode via the primary thread user portion. The primary thread is then transferred into kernel mode via an implicit switch. A kernel portion of the UMS thread is then executed in kernel mode using the context information of a primary thread kernel portion. | 04-01-2010 |
20100083275 | TRANSPARENT USER MODE SCHEDULING ON TRADITIONAL THREADING SYSTEMS - Embodiments for performing cooperative user mode scheduling between user mode schedulable (UMS) threads and primary threads are disclosed. In accordance with one embodiment, an asynchronous procedure call (APC) is received on a kernel portion of a user mode schedulable (UMS) thread. The status of the UMS thread as it is being processed in a multi-processor environment is determined. Based on the determined status, the APC is processed on the UMS thread. | 04-01-2010 |
20100251250 | LOCK-FREE SCHEDULER WITH PRIORITY SUPPORT - Techniques for implementing a lock-free scheduler with ordering support are described herein. In addition to the foregoing, other aspects are described in the claims, drawings, and text forming a part of the present disclosure. It can be appreciated by one of skill in the art that one or more various aspects of the disclosure may include but are not limited to circuitry and/or programming for effecting the herein-referenced aspects of the present disclosure; the circuitry and/or programming can be virtually any combination of hardware, software, and/or firmware configured to effect the herein-referenced aspects depending upon the design choices of the system designer. | 09-30-2010 |
20110307730 | Power-Aware Thread Scheduling and Dynamic Use of Processors - Techniques and apparatuses for providing power-aware thread scheduling and dynamic use of processors are disclosed. In some aspects, a multi-core system is monitored to determine core activity. The core activity may be compared to a power policy that balances a power savings plan with a performance plan. One or more of the cores may be parked in response to the comparison to reduce power consumption by the multi-core system. In additional aspects, the power-aware scheduling may be performed during a predetermined interval to dynamically park or unpark cores. Further aspects include adjusting the power state of unparked cores in response to the comparison of the core activity and power policy. | 12-15-2011 |
20120265947 | LIGHTWEIGHT RANDOM MEMORY ALLOCATION - In response to a memory allocation request received from an application thread, a random number is obtained (e.g., from a random number list previously populated with multiple random numbers). A starting location in at least a portion of a bitmap associated with a region including multiple blocks of the memory is determined based on the random number. A portion of the bitmap is scanned, beginning at the starting location, to identify a location in the bitmap corresponding to an available block of the multiple blocks, and an indication of this available block is returned to the application thread. | 10-18-2012 |
20120291033 | THREAD-RELATED ACTIONS BASED ON HISTORICAL THREAD BEHAVIORS - Various embodiments provide techniques for managing threads based on a thread history. In at least some embodiments, a behavior associated with currently existing threads is observed and a thread-related action is performed. A result of the thread-related action with respect to the currently existing threads, resources associated with the currently existing threads (e.g., hardware and/or data resources), and/or other threads, is then observed. A thread history is recorded (e.g., as part of a thread history database) that includes the behavior associated with the currently existing threads, the thread related action that was performed, and the result of the thread-related action. The thread history can include information about multiple different thread behaviors and can be referenced to determine whether to perform thread-related actions in response to other observed thread behaviors. | 11-15-2012 |
20130067494 | Resuming Applications and/or Exempting Applications from Suspension - Only a particular number of applications on a computing device are active at any given time, with applications that are not active being suspended. A policy is applied to determine when an application is to be suspended. However, an operating system component can have a particular application be exempted from being suspended (e.g., due to an operation being performed by the application). Additionally, an operating system component can have an application that has been suspended resumed (e.g., due to a desire of another application to communicate with the suspended application). | 03-14-2013 |
20140359774 | Protecting Anti-Malware Processes - Anti-malware process protection techniques are described. In one or more implementations, an anti-malware process is launched. The anti-malware process is verified based at least in part on an anti-malware driver that contains certificates which contain an identity that is signed with a trusted certificate from a verified source. After the anti-malware process is verified, the anti-malware process may be assigned a protection level, and an administrative user may be prevented from altering the anti-malware process. | 12-04-2014 |
20140359775 | Protecting Anti-Malware Processes - Anti-malware process protection techniques are described. In one or more implementations, an anti-malware driver is signed using a hash that identifies a manufacturer of the anti-malware driver. The anti-malware driver is then provided to a computing device. The anti-malware driver may be assigned a protection level based on an agreement between the anti-malware manufacturer and an operating system manufacturer, and this protection level affects the operation of the anti-malware program on the computing device. | 12-04-2014 |
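The lightweight random memory allocation of 20120265947 above (and its continuation, 20140195767 below) can be sketched directly from the abstract: derive a starting location in the bitmap from a random number, scan for an available block, mark it, and return its index. One assumption in this sketch: the abstract scans "a portion of the bitmap", while the function below scans the whole bitmap with wraparound for simplicity.

```python
import random

def allocate_block(bitmap, rng=random):
    """Scan a region's allocation bitmap from a random starting
    location (with wraparound); mark and return the index of the
    first available block, or None if the region is full."""
    n = len(bitmap)
    start = rng.randrange(n)       # starting location from a random number
    for offset in range(n):
        i = (start + offset) % n
        if not bitmap[i]:          # False = block available
            bitmap[i] = True       # mark the block as allocated
            return i
    return None                    # no available block in the region
```

Randomizing the scan start spreads allocations across the region cheaply, rather than clustering them at the low end as a fixed-origin first-fit scan would.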
Patent application number | Description | Published |
20130061249 | DECOUPLING BACKGROUND WORK AND FOREGROUND WORK - Systems, methods, and apparatus for separately loading and managing foreground work and background work of an application. In some embodiments, a method is provided for use by an operating system executing on at least one computer. The operating system may identify at least one foreground component and at least one background component of an application, and may load the at least one foreground component for execution separately from the at least one background component. For example, the operating system may execute the at least one foreground component without executing the at least one background component. In some further embodiments, the operating system may use a specification associated with the application to identify at least one piece of computer executable code implementing the at least one background component. | 03-07-2013 |
20130061251 | EVENT AGGREGATION FOR BACKGROUND WORK EXECUTION - Systems, methods, and apparatus for separately managing foreground work and background work. In some embodiments, an operating system may identify at least one foreground component and at least one background component of a same application or different applications, and may manage the execution of the components differently. For example, the operating system may receive a request that at least one background component of an application be executed in response to at least one event. In response to detecting an occurrence of the at least one event, the operating system may determine whether at least one first condition set by the application is satisfied and whether at least one second condition set by the operating system is satisfied, and may execute the at least one background component when it is determined that the at least one first and second conditions are satisfied following the occurrence of the at least one event. | 03-07-2013 |
20130159662 | Working Set Swapping Using a Sequentially Ordered Swap File - Techniques described enable efficient swapping of memory pages to and from a working set of pages for a process through the use of large writes and reads of pages to and from sequentially ordered locations in secondary storage. When writing pages from a working set of a process into secondary storage, the pages may be written into reserved, contiguous locations in a dedicated swap file according to a virtual address order or other order. Such writing into sequentially ordered locations enables reading in of clusters of pages in large, sequential blocks of memory, providing for more efficient read operations to return pages to physical memory. | 06-20-2013 |
20130268938 | TRANSPARENT USER MODE SCHEDULING ON TRADITIONAL THREADING SYSTEMS - Embodiments for performing cooperative user mode scheduling between user mode schedulable (UMS) threads and primary threads are disclosed. In accordance with one embodiment, an asynchronous procedure call (APC) is received on a kernel portion of a user mode schedulable (UMS) thread. The status of the UMS thread as it is being processed in a multi-processor environment is determined. Based on the determined status, the APC is processed on the UMS thread. | 10-10-2013 |
20140040917 | Resuming Applications and/or Exempting Applications from Suspension - Only a particular number of applications on a computing device are active at any given time, with applications that are not active being suspended. A policy is applied to determine when an application is to be suspended. However, an operating system component can have a particular application be exempted from being suspended (e.g., due to an operation being performed by the application). Additionally, an operating system component can have an application that has been suspended resumed (e.g., due to a desire of another application to communicate with the suspended application). | 02-06-2014 |
20140123151 | APPLICATION PRIORITIZATION - Among other things, one or more techniques and/or systems are provided for application prioritization. For example, an operating system of a computing device may contemporaneously host one or more applications, which may compete for computing resources, such as CPU cycles, I/O operations, memory access, and/or network bandwidth. Accordingly, an application (e.g., a background task or service) may be placed within a de-prioritized operating mode during launch and/or during execution, which may result in the application receiving a relatively lower priority when competing with applications placed within a standard operating mode for access to computing resources. In this way, an application placed within a standard operating mode (e.g., a foreground application currently interacted with by a user) may have priority to computing resources over the de-prioritized application, such that the application within the standard operating mode may provide enhanced performance based upon having priority to computing resources. | 05-01-2014 |
20140195767 | Lightweight Random Memory Allocation - In response to a memory allocation request received from an application thread, a random number is obtained (e.g., from a random number list previously populated with multiple random numbers). A starting location in at least a portion of a bitmap associated with a region including multiple blocks of the memory is determined based on the random number. A portion of the bitmap is scanned, beginning at the starting location, to identify a location in the bitmap corresponding to an available block of the multiple blocks, and an indication of this available block is returned to the application thread. | 07-10-2014 |
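The allocation steps in this abstract (obtain a random number from a pre-populated list, use it as a starting location in the bitmap, scan for an available block, return its indication) map directly onto a small sketch. The class name and sizes are assumptions for illustration:

```python
import random

class BitmapAllocator:
    """Toy region allocator: one bit per block; 0 = free, 1 = allocated."""

    def __init__(self, num_blocks, seed=None):
        self.bitmap = [0] * num_blocks
        rng = random.Random(seed)
        # Pre-populate a list with multiple random numbers, as described.
        self.random_list = [rng.randrange(num_blocks) for _ in range(64)]
        self.next_random = 0

    def allocate(self):
        """Handle an allocation request from an application thread."""
        # Obtain a random number from the previously populated list.
        start = self.random_list[self.next_random % len(self.random_list)]
        self.next_random += 1
        # Scan the bitmap beginning at the random starting location,
        # wrapping around, to find an available block.
        n = len(self.bitmap)
        for offset in range(n):
            idx = (start + offset) % n
            if self.bitmap[idx] == 0:
                self.bitmap[idx] = 1
                return idx          # indication of the available block
        return None                  # region exhausted
```

Randomizing the starting location spreads allocations across the region cheaply, without maintaining free lists.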
20140351552 | WORKING SET SWAPPING USING A SEQUENTIALLY ORDERED SWAP FILE - Techniques described enable efficient swapping of memory pages to and from a working set of pages for a process through the use of large writes and reads of pages to and from sequentially ordered locations in secondary storage. When writing pages from a working set of a process into secondary storage, the pages may be written into reserved, contiguous locations in a dedicated swap file according to a virtual address order or other order. Such writing into sequentially ordered locations enables reading in of clusters of pages in large, sequential blocks of memory, providing for more efficient read operations to return pages to physical memory. | 11-27-2014 |
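A minimal sketch of the layout idea above: pages from a working set are written in virtual-address order into reserved, contiguous locations of a dedicated swap file, so a later read of a cluster touches sequential locations. The function names and the dict-based "file" are illustrative assumptions:

```python
def swap_out(working_set, swap_file, reserved_offset):
    """Write working-set pages into reserved, contiguous swap-file slots,
    ordered by virtual address."""
    pages = sorted(working_set.items())          # (virtual_address, data)
    layout = {}
    for slot, (vaddr, data) in enumerate(pages):
        location = reserved_offset + slot
        swap_file[location] = data
        layout[vaddr] = location                 # remember where each page went
    return layout

def swap_in_cluster(layout, swap_file, vaddrs):
    """Read back a cluster of pages; because locations are sequential,
    a single large block read would suffice on real storage."""
    locations = sorted(layout[v] for v in vaddrs)
    return [swap_file[loc] for loc in locations]
```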
20140380058 | Process Authentication and Resource Permissions - The techniques and systems described herein present various implementations of a model for authenticating processes for execution and specifying and enforcing permission restrictions on system resources for processes and users. In some implementations, a binary file for an application, program, or process may be augmented to include a digital signature encrypted with a key such that an operating system may subsequently authenticate the digital signature. Once the binary file has been authenticated, the operating system may create a process and tag the process with metadata indicating the type of permissions that are allowed for the process. The metadata may correspond to a particular access level for specifying resource permissions. | 12-25-2014 |
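The flow described above (authenticate a signed binary, then create a process tagged with permission metadata) can be sketched with a keyed hash standing in for the real public-key digital signature. The access-level label and function names are hypothetical:

```python
import hmac
import hashlib

def sign_binary(binary: bytes, key: bytes) -> bytes:
    """Augment a binary file with a keyed signature (a stand-in here for
    a real encrypted digital signature)."""
    return binary + hmac.new(key, binary, hashlib.sha256).digest()

def authenticate_and_tag(signed: bytes, key: bytes, access_level: str):
    """Authenticate the signature; on success, 'create' a process tagged
    with metadata indicating its allowed permissions."""
    body, sig = signed[:-32], signed[-32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None  # authentication failed; no process is created
    return {"image": body, "metadata": {"access_level": access_level}}
```

The metadata tag is what a resource-permission check would later consult when the process requests access to a system resource.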
20150082305 | VIRTUAL SECURE MODE FOR VIRTUAL MACHINES - A virtual machine manager (e.g., hypervisor) implements a virtual secure mode that makes multiple different virtual trust levels available to virtual processors of a virtual machine. Different memory access protections (such as the ability to read, write, and/or execute memory) can be associated with different portions of memory (e.g., memory pages) for each virtual trust level. The virtual trust levels are organized as a hierarchy with a higher level virtual trust level being more privileged than a lower virtual trust level, and programs running in the higher virtual trust level being able to change memory access protections of a lower virtual trust level. The number of virtual trust levels can vary, and can vary for different virtual machines as well as for different virtual processors in the same virtual machine. | 03-19-2015 |
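The hierarchy described above (higher virtual trust levels may change the memory access protections of lower ones, per portion of memory) reduces to a small access-control sketch. The class name and page-granular model are illustrative assumptions:

```python
class VirtualSecureMode:
    """Toy model: per-page {'r','w','x'} protections, kept separately for
    each virtual trust level (VTL). Higher VTL = more privileged."""

    def __init__(self, num_vtls=2):
        # protections[vtl][page] -> set of allowed accesses
        self.protections = {vtl: {} for vtl in range(num_vtls)}

    def set_protection(self, caller_vtl, target_vtl, page, perms):
        # A program may change its own VTL's protections or a lower VTL's,
        # never a higher VTL's.
        if target_vtl > caller_vtl:
            raise PermissionError("cannot modify a higher virtual trust level")
        self.protections[target_vtl][page] = set(perms)

    def can_access(self, vtl, page, perm):
        return perm in self.protections[vtl].get(page, set())
```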
20140359632 | EFFICIENT PRIORITY-AWARE THREAD SCHEDULING - A priority-based scheduling and execution of threads may enable the completion of higher-priority tasks above lower-priority tasks. Occasionally, a high-priority thread may request a resource that has already been reserved by a lower-priority thread, and the higher-priority thread may be blocked until the lower-priority thread relinquishes the reservation. Such prioritization may be acceptable if the lower-priority thread is able to execute comparatively unimpeded, but in some scenarios, the lower-priority thread may execute at a lower priority than a third thread that also has a lower priority than the high-priority thread. In this scenario, the third thread is effectively but incorrectly prioritized above the high-priority thread. Instead, upon detecting this scenario, the device may temporarily elevate the priority of the lower-priority thread over the priority of the third thread until the lower-priority thread relinquishes the resource, thereby reducing the waiting period of the high-priority thread for the requested resource. | 12-04-2014 |
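The temporary elevation described above is the classic priority-inheritance pattern, sketched here with hypothetical names. The `Thread` class models only the two priorities the abstract distinguishes (assigned and effective):

```python
class Thread:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority   # originally assigned priority
        self.priority = priority        # effective (possibly elevated) priority

def block_on_resource(waiter, holder):
    """A high-priority waiter blocks on a resource reserved by a
    lower-priority holder: temporarily elevate the holder to the waiter's
    priority so no intermediate-priority third thread runs ahead of it."""
    if holder.priority < waiter.priority:
        holder.priority = waiter.priority

def release_resource(holder):
    """When the holder relinquishes the resource, it reverts to its
    base priority."""
    holder.priority = holder.base_priority
```

With the boost in place, the third thread can no longer be effectively (but incorrectly) prioritized above the high-priority thread, shortening the high-priority thread's wait for the resource.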
20140373027 | APPLICATION LIFETIME MANAGEMENT - One or more techniques and/or systems are provided for facilitating lifetime management of dynamically created child applications and/or for managing dependencies between a set of applications of an application package. In an example, a parent application may dynamically create a child application. A child lifetime of the child application may be managed independently and/or individually from lifetimes of other applications with which the child application does not have a dependency relationship. In another example, an application within an application package may be identified as a dependency application that may provide functionality depended upon by another application, such as a first application, within the application package. A dependency lifetime of the dependency application may be managed according to a first lifetime of the first application. In this way, lifetimes (e.g., initialization, execution, suspension, termination, etc.) of applications may be managed to take into account dynamically created child applications and/or dependency relationships. | 12-18-2014 |
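The dependency-tracking half of this abstract (a dependency application's lifetime follows the lifetimes of the applications that depend on it) can be sketched as follows; the class and the single-dependency model are illustrative assumptions:

```python
class LifetimeManager:
    """Toy lifetime manager for applications in an application package."""

    def __init__(self):
        self.running = set()
        self.depends_on = {}   # dependent app -> its dependency application

    def launch(self, app, dependency=None):
        self.running.add(app)
        if dependency is not None:
            self.depends_on[app] = dependency
            # The dependency application is kept alive with its dependent.
            self.running.add(dependency)

    def terminate(self, app):
        self.running.discard(app)
        # A dependency application ends only once no running application
        # still depends on it; apps without a dependency relationship are
        # managed independently and are unaffected.
        for dep in set(self.depends_on.values()):
            still_needed = any(a in self.running and d == dep
                               for a, d in self.depends_on.items())
            if not still_needed:
                self.running.discard(dep)
```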