20th week of 2016 patent application highlights part 46 |
Patent application number | Title | Published |
20160139872 | Method for Controlling Voice Talk of Portable Terminal - A generated voice talk file is transmitted to a receiver mobile terminal when the touchscreen of the sender mobile terminal is touched off, and the receiver mobile terminal performs the voice talk by reproducing the file, so that voice call quality is improved and the ‘No signal’ phenomenon, delayed connection, and noise caused by the network state may be minimized. A voice input area occupying the majority of the touchscreen determines the start and end times of the voice talk file. The method displays a voice input area on a touchscreen; generates a voice talk file corresponding to a voice signal of a user when the voice input area is touched on; and stops generation of the voice talk file and transmits it to a user-preset phone number of a receiver mobile terminal when the voice input area is touched off. | 2016-05-19 |
20160139873 | Audio Content Auditioning by Playback Device - A first playback device plays first audio content in synchrony with at least one second playback device. One or more commands is received that define a duration of time. The first playback device stops playback of the first audio content based on the received one or more commands. After stopping playback of the first audio content, second audio content is played for the duration of time defined by the one or more commands. After playing the second audio content for the duration of time, the first playback device resumes playback of the first audio content in synchrony with the at least one second playback device. | 2016-05-19 |
20160139874 | Audio Content Auditioning by Playback Device - A processor and memory comprising instructions stored therein executable by the processor to display a first media item available for playback, wherein the first media item is associated at least with a preview option, a playback option, and a queue option. A first input associated with the first media item is received. The first input is determined to correspond only to the preview option. Responsive to determining that the first input corresponds only to the preview option, causing at least a portion of the first media item to be played. | 2016-05-19 |
20160139875 | Software Application and Zones - Embodiments described herein relate to a software application that is configured to operate as an add-on software component to audio-playback software on a playback device of a media playback system. One embodiment may involve transmitting, to a computing device, data indicating that a first add-on component installed on a playback device is active; receiving, from the computing device, a command to activate a second add-on component installed on a playback device; in response to receiving the command, activating the second add-on component on the playback device; and causing playback of audio using at least the second add-on component. | 2016-05-19 |
20160139876 | METHODS AND APPARATUS FOR VOICE-CONTROLLED ACCESS AND DISPLAY OF ELECTRONIC CHARTS ONBOARD AN AIRCRAFT - A method for accessing electronic charts stored on an aircraft is provided. The method receives, via an onboard avionics system, location data for the aircraft; receives a set of speech data via a user interface of the aircraft; identifies one or more applicable electronic charts, based on the received location data and the received set of speech data, wherein the electronic charts stored on the aircraft comprise at least the one or more applicable electronic charts; and presents, via an aircraft display, a first one of the one or more applicable electronic charts. | 2016-05-19 |
20160139877 | VOICE-CONTROLLED DISPLAY DEVICE AND METHOD OF VOICE CONTROL OF DISPLAY DEVICE - A voice-controlled display device is provided in which the user's inputted speech is compared with identification voice data assigned to each execution unit area on a screen displayed through a display unit; if identification voice data corresponding to the user's speech exists, an execution signal is generated for the execution unit area to which that identification voice data is assigned. This resolves the inconvenience of the user having to learn the voice commands stored in a database and brings the convenience and intuitive simplicity of the conventional touchscreen user experience (UX) to voice control. A method of voice control of the display device is also provided. | 2016-05-19 |
20160139878 | VOICE INTERFACE FOR VIRTUAL AREA INTERACTION - Examples of systems and methods for voice-based navigation in one or more virtual areas that define respective persistent virtual communication contexts are described. These examples enable communicants to use voice commands to, for example, search for communication opportunities in the different virtual communication contexts, enter specific ones of the virtual communication contexts, and bring other communicants into specific ones of the virtual communication contexts. In this way, these examples allow communicants to exploit the communication opportunities that are available in virtual areas, even when hands-based or visual methods of interfacing with the virtual areas are not available. | 2016-05-19 |
20160139879 | HIGH PERFORMANCE SHIFTER CIRCUIT - An improved shifter design for high-speed data processors is described. The shifter may include a first stage, in which the input bits are shifted by increments of N bits where N>1, followed by a second stage, in which all bits are shifted by a residual amount. A pre-shift may be removed from an input to the shifter and replaced by a shift adder at the second stage to further increase the speed of the shifter. | 2016-05-19 |
20160139880 | Bypass FIFO for Multiple Virtual Channels - A group of low-level FIFOs may be logically bound together to form a super-FIFO. The super-FIFO may treat each low-level FIFO as a storage location. The super-FIFO may enable a push to (or a pop from) every low-level FIFO, simultaneously. The super-FIFO may enable a virtual channel (VC) to use the super-FIFO, bypassing a VC FIFO for the VC, removing several cycles of latency otherwise needed for enqueuing and dequeuing messages in the VC FIFO. In addition, the super-FIFO may enable bypassing of an arbiter, further reducing latency by avoiding a penalty of the arbiter. | 2016-05-19 |
20160139881 | ACCURACY-CONSERVING FLOATING-POINT VALUE AGGREGATION - A method for enhancing an accuracy of a sum of a plurality of floating-point numbers. The method receives a floating-point number and generates a plurality of provisional numbers with a value of zero. The method further generates a surjective map from the values of an exponent and a sign of a mantissa to the provisional numbers in the plurality of provisional numbers. The method further maps a value of the exponent and the sign of the mantissa to a first provisional number with the surjective map. The method further generates a test number from the first provisional number and if the test number exceeds a limit, modifies a second provisional number by using at least part of the test number. The method further equates the first provisional number to the test number if the test number does not exceed the limit. The method further sums the plurality of provisional numbers. | 2016-05-19 |
20160139882 | ACCURACY-CONSERVING FLOATING-POINT VALUE AGGREGATION - A method for enhancing an accuracy of a sum of a plurality of floating-point numbers. The method receives a floating-point number and generates a plurality of provisional numbers with a value of zero. The method further generates a surjective map from the values of an exponent and a sign of a mantissa to the provisional numbers in the plurality of provisional numbers. The method further maps a value of the exponent and the sign of the mantissa to a first provisional number with the surjective map. The method further generates a test number from the first provisional number and if the test number exceeds a limit, modifies a second provisional number by using at least part of the test number. The method further equates the first provisional number to the test number if the test number does not exceed the limit. The method further sums the plurality of provisional numbers. | 2016-05-19 |
20160139883 | DISTRIBUTING RESOURCE REQUESTS IN A COMPUTING SYSTEM - In an embodiment, a method includes, in a hardware processor, producing, by a block of hardware logic resources, a constrained randomly generated or pseudo-randomly generated number (CRGN) based on a bit mask stored in a register memory. | 2016-05-19 |
20160139884 | SYSTEMS AND METHODS EMPLOYING UNIQUE DEVICE FOR GENERATING RANDOM SIGNALS AND METERING AND ADDRESSING, E.G., UNUSUAL DEVIATIONS IN SAID RANDOM SIGNALS - According to some embodiments, a system comprises a generator of a truly random signal connected to an input and feedback device for the purpose of providing a user with real-time feedback on the random signal. The user observes a representation of the signal during an external physical event for the purpose of finding a correlation between the random output and what happens during the event. In some examples, the system is preferably designed such that it is shielded from all classically known forces such as gravity, physical pressure, motion, electromagnetic fields, humidity, etc., and/or such classical forces are factored out of the process as much as possible. The system is thus designed to be selectively responsive to signals from living creatures, in particular humans. | 2016-05-19 |
20160139885 | SYSTEMS AND METHODS FOR SCALING A CLOUD INFRASTRUCTURE - A method for scaling a cloud infrastructure, comprises receiving at least one of resource-level metrics and application-level metrics, estimating parameters of at least one application based on the received metrics, automatically and dynamically determining directives for scaling application deployment based on the estimated parameters, and providing the directives to a cloud service provider to execute the scaling. | 2016-05-19 |
20160139886 | METHOD OF EXCHANGING DATA DESCRIPTIVE OF TECHNICAL INSTALLATIONS - A method of exchanging data descriptive of technical installations between at least two applications is provided, a said application being able to provide and/or to consume data according to an associated application data model, the method allowing interoperability between applications by virtue of an exchange of the data expressed and formalized through at least one chosen standard having an associated data model and associated exchange formats, implemented by a programmable device. For at least one application, the method includes the integration (T | 2016-05-19 |
20160139887 | CODE GENERATOR FOR PROGRAMMABLE NETWORK DEVICES - A processing network including a plurality of lookup and decision engines (LDEs) each having one or more configuration registers and a plurality of on-chip routers forming a matrix for routing the data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further including an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code. | 2016-05-19 |
20160139888 | AUTOMATED APP GENERATION SYSTEM - An app development platform that provides a unique way for anyone, without any programming skills, to build and deploy apps in 5 easy steps. Typically the entire end-to-end 5-step process, from app conceptualization to app deployment, can be done in 5 days or less. The platform is a Model-Driven platform that guides anyone with no programming skills through 5 steps: 1) define the process; 2) enhance the screens; 3) build integration logic; 4) publish and manage; and 5) analyze user behavior. | 2016-05-19 |
20160139889 | MANAGING REUSABLE ARTIFACTS USING PLACEHOLDERS - Arrangements described herein relate to managing reusable artifacts. Responsive to receiving a request to create a placeholder for a reusable artifact representing a reusable unit, the placeholder for an appropriate version of the reusable artifact is created within a container and a first parameter identifying the reusable artifact is assigned to an artifact property of the placeholder. Responsive to receiving a request to pin a particular version of the reusable artifact to the placeholder, a version property of the placeholder can be updated to set the version, wherein the request to pin the particular version of the reusable artifact to the placeholder is implemented by assigning a value to a parameter in the placeholder or adding a pin property into the placeholder. The particular version can be maintained as the set version of the reusable artifact regardless of whether new artifact versions are created for the reusable artifact. | 2016-05-19 |
20160139890 | MANAGING REUSABLE ARTIFACTS USING PLACEHOLDERS - Arrangements described herein relate to managing reusable artifacts. Responsive to receiving a request to create a placeholder for a reusable artifact representing a reusable unit, the placeholder for an appropriate version of the reusable artifact is created within a container and a first parameter identifying the reusable artifact is assigned to an artifact property of the placeholder. Responsive to receiving a request to pin a particular version of the reusable artifact to the placeholder, a version property of the placeholder can be updated to set the version, wherein the request to pin the particular version of the reusable artifact to the placeholder is implemented by assigning a value to a parameter in the placeholder or adding a pin property into the placeholder. The particular version can be maintained as the set version of the reusable artifact regardless of whether new artifact versions are created for the reusable artifact. | 2016-05-19 |
20160139891 | COMPILER ARCHITECTURE FOR PROGRAMMABLE APPLICATION SPECIFIC INTEGRATED CIRCUIT BASED NETWORK DEVICES - A processing network including a plurality of lookup and decision engines (LDEs) each having one or more configuration registers and a plurality of on-chip routers forming a matrix for routing the data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further including an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code. | 2016-05-19 |
20160139892 | PARSER ENGINE PROGRAMMING TOOL FOR PROGRAMMABLE NETWORK DEVICES - A parser engine programming tool configured to receive an input file representing a directly connected cyclical graph or tree of decision points for parsing a range of incoming packet headers, automatically generate all possible paths within the graph and thereby the associated possible headers, and convert the determined paths/headers into a proper format for programming memory of a parser engine to parse the determined headers (represented by the paths). | 2016-05-19 |
20160139893 | CODE PROCESSOR TO BUILD ORTHOGONAL EXECUTION BLOCKS FOR PROGRAMMABLE NETWORK DEVICES - A processing network including a plurality of lookup and decision engines (LDEs) each having one or more configuration registers and a plurality of on-chip routers forming a matrix for routing the data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further including an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code. | 2016-05-19 |
20160139894 | METHOD FOR CONSTRUCTING A GRAPH DATA STRUCTURE - The disclosure relates to a method for constructing a graph data structure as an intermediate representation of source code for a compiler configured for compiling the source code into executable machine code running on a processor of a computer system, wherein program operations of the source code are represented in an object-oriented programming language by objects of classes that form a hierarchy growing from a base node class of the graph data structure, the method comprising: producing new nodes of the graph data structure by calling factory methods associated with existing nodes of the graph data structure based on a factory method design pattern implemented in the nodes of the graph data structure, wherein the nodes of the graph data structure are identified by symbols; and using the symbols as proxies of the nodes of the graph data structure according to a proxy design pattern. | 2016-05-19 |
20160139895 | SYSTEM AND METHOD FOR PROVIDING AND EXECUTING A DOMAIN-SPECIFIC LANGUAGE FOR CLOUD SERVICES INFRASTRUCTURE - A system and method for providing and executing a domain-specific programming language for cloud services infrastructure is provided. The system may be used to integrate references to external entities, such as cloud service compute instances, directly into a domain-specific programming language, allowing developers to easily integrate cloud services directly using the domain-specific programming language. Using a domain-specific programming language, references to external entities (not in memory) may be used as variables. Using the domain-specific programming language described herein, lexical scoping may be mapped onto collections of entities that are not a native part of the language. In order to facilitate these and other benefits, the system may maintain state information for all references and shared variables across program boundaries. The system may make the state information accessible via a state information service that understands the language features of the domain-specific programming language. | 2016-05-19 |
20160139896 | ALGORITHM TO DERIVE LOGIC EXPRESSION TO SELECT EXECUTION BLOCKS FOR PROGRAMMABLE NETWORK DEVICES - A processing network including a plurality of lookup and decision engines (LDEs) each having one or more configuration registers and a plurality of on-chip routers forming a matrix for routing the data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further including an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code. | 2016-05-19 |
20160139897 | LOOP VECTORIZATION METHODS AND APPARATUS - Loop vectorization methods and apparatus are disclosed. An example method includes prior to executing an original loop having iterations, analyzing, via a processor, the iterations of the original loop, identifying a dependency between a first one of the iterations of the original loop and a second one of the iterations of the original loop, after identifying the dependency, vectorizing a first group of the iterations of the original loop based on the identified dependency to form a vectorization loop, and setting a dynamic adjustment value of the vectorization loop based on the identified dependency. | 2016-05-19 |
20160139898 | ALGORITHM TO ACHIEVE OPTIMAL LAYOUT OF INSTRUCTION TABLES FOR PROGRAMMABLE NETWORK DEVICES - A processing network including a plurality of lookup and decision engines (LDEs) each having one or more configuration registers and a plurality of on-chip routers forming a matrix for routing the data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further including an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code. | 2016-05-19 |
20160139899 | OPTIMIZING INTERMEDIATE REPRESENTATION OF SCRIPT CODE FOR FAST PATH EXECUTION - Disclosed here are methods, systems, paradigms and structures for optimizing intermediate representation (IR) of a script code for fast path execution. A fast path is typically a path that handles most commonly occurring tasks more efficiently than less commonly occurring ones which are handled by slow paths. The less commonly occurring tasks may include uncommon cases, error handling, and other anomalies. The IR includes checkpoints which evaluate to two possible values resulting in either a fast path or slow path execution. The IR is optimized for fast path execution by regenerating a checkpoint as a labeled checkpoint. The code in the portion of the IR following the checkpoint is optimized assuming the checkpoint evaluates to a value resulting in fast path. The code for handling situations where the checkpoint evaluates to a value resulting in slow path is transferred to a portion of the IR identified by the label. | 2016-05-19 |
20160139900 | ALGORITHM TO ACHIEVE OPTIMAL LAYOUT OF DECISION LOGIC ELEMENTS FOR PROGRAMMABLE NETWORK DEVICES - A processing network including a plurality of lookup and decision engines (LDEs) each having one or more configuration registers and a plurality of on-chip routers forming a matrix for routing the data between the LDEs, wherein each of the on-chip routers is communicatively coupled with one or more of the LDEs. The processing network further including an LDE compiler stored on a memory and communicatively coupled with each of the LDEs, wherein the LDE compiler is configured to generate values based on input source code that when programmed into the configuration registers of the LDEs cause the LDEs to implement the functionality defined by the input source code. | 2016-05-19 |
20160139901 | SYSTEMS, METHODS, AND COMPUTER PROGRAMS FOR PERFORMING RUNTIME AUTO PARALLELIZATION OF APPLICATION CODE - Systems, methods, and computer programs are disclosed for performing runtime auto-parallelization of application code. One embodiment of such a method comprises receiving application code to be executed in a multi-processor system. The application code comprises an injected code cost computation expression for at least one loop in the application code defining a serial workload for processing the loop. A runtime profitability check of the loop is performed based on the injected code cost computation expression to determine whether the serial workload can be profitably parallelized. If the serial workload can be profitably parallelized, the loop is executed in parallel using two or more processors in the multi-processor system. | 2016-05-19 |
20160139902 | AUGMENTED DEPLOYMENT SPECIFICATION FOR SOFTWARE COMPLIANCE - A method of augmenting a deployment specification for a software application to determine a level of compliance of the application with a compliance characteristic, the deployment specification being suitable for identifying a resource required to execute the software application in a virtualised computing environment, the method comprising: receiving a definition of the compliance characteristic as a set of compliance criteria concerning the resource, wherein satisfaction of the compliance criteria during execution of the software application is suitable for determining the level of compliance of the software application with the compliance characteristic; selecting at least one software component from a library of components based on the definition of the compliance characteristic, the software component being operable to determine a state of satisfaction of at least a subset of the set of criteria for the compliance characteristic; and modifying the deployment specification to identify the at least one selected software component such that, on execution of the application, the level of compliance of the application with the compliance characteristic is determined. | 2016-05-19 |
20160139903 | HEALTHCARE AS A SERVICE - DOWNLOADABLE ENTERPRISE APPLICATION - An application as a service provided in a secure environment. A sandbox in a user's computing environment may be created. An application may be downloaded to the user's computing environment to run within the sandbox. Data sources associated with the user's computing environment may be searched and connectivity established with data registry of the data sources based on data description received with the application. The application may be run within the sandbox using the established connectivity. Metering may be performed to monitor usage of the application at the user's computing environment. | 2016-05-19 |
20160139904 | METHOD FOR DOWNLOADING A PROGRAM - A method for downloading a program having at least one file from at least one service terminal to a user terminal is disclosed. The method includes the following steps of segmenting the at least one file into a plurality of blocks, and arranging the plurality of blocks according to a particular read order of the program, wherein the plurality of blocks include a first block and a second block; transmitting the first block to the user terminal; and executing the first block at the user terminal before the second block is transmitted to the user terminal. | 2016-05-19 |
20160139905 | APPLICATION MATCHING METHOD FOR MOBILE DEVICE AND ACCESSORY DEVICE - This invention relates to an application matching method for a mobile device and an accessory device. A plurality of applications are installed on the mobile device. When the mobile device is connected to the accessory device, the accessory device can receive an application list from the mobile device. The application list is a list of the applications installed on the mobile device. Thus, the accessory device is able to install or reload some or all of the applications installed on the mobile device according to the application list. | 2016-05-19 |
20160139906 | DEPLOYING AN APPLICATION ACROSS MULTIPLE DEPLOYMENT ENVIRONMENTS - Disclosed examples to configure an application for deployment involve displaying a user-selectable control in a user interface. A selected state and an unselected state of the user-selectable control distinguish between whether different components of the application are to be deployed in a same cloud and whether the different components of the application are to be deployed in separate clouds. When the user-selectable control indicates that the different components of the application are to be deployed in the separate clouds, a first one of the different components is bound to a first cloud and a second one of the different components is bound to a second cloud in an application deployment profile. When the user-selectable control indicates that the different components of the application are to be deployed in the same cloud, the different components of the application are bound to the same cloud in the application deployment profile. | 2016-05-19 |
20160139907 | ELECTRONIC DEVICE AND METHOD FOR CONTROLLING ELECTRONIC DEVICE - A mobile phone includes a display, a memory unit and a deletion control unit. The deletion control unit is configured to, when a tap operation is performed on a deletion icon attached to a start-up icon, extract a deletion candidate application relevant to the deletion target application on which the tap operation has been performed, based on information in an application information table stored in the memory unit. The deletion control unit is configured to delete the deletion target application and to delete the extracted deletion candidate application. | 2016-05-19 |
20160139908 | CONSTRUCTING VIRTUAL IMAGES FOR INTERDEPENDENT APPLICATIONS - A technique for automating the construction of multiple virtual images with interdependencies includes creating a first virtual image (VI) instance by extending a first base VI based on input values associated with the first base VI and software bundles associated with the first base VI. A second VI instance is created by extending a second base VI based on input values associated with the second base VI, instance values received from the first VI instance, dependency instance values received from a deployed dependency instance, and software bundles associated with the second base VI. The first and second VI instances are then deployed to designated machines in a cloud environment for execution. | 2016-05-19 |
20160139909 | DELTA PATCH PROCESS - The present disclosure describes methods, systems, and computer program products for providing a delta patch process. One computer-implemented method includes receiving, from a customer site, a request for a software patch, wherein the request comprises a note number and a customer software component version, wherein the customer software component version includes a customer workspace globally unique identifier (GUID) and a customer integration sequence number (ISN), selecting a correction package (CP) based on the note number and the customer workspace GUID, wherein the CP comprises a maximum ISN, determining that the CP is uncontained in the customer software component version based on the maximum ISN and the customer software component version, and transmitting, to the customer site, the uncontained CP. | 2016-05-19 |
20160139910 | POLICY-DRIVEN MANAGEMENT OF APPLICATION TRAFFIC FOR PROVIDING SERVICES TO CLOUD-BASED APPLICATIONS - Policy-driven management of application traffic is provided for services to cloud-based applications. A steering policy, a set of rules, is generated for a deployment from a current code environment to one or more replicated code environments that differ in some key respect. The steering policy can guide steering decisions between the current and updated code environments. A steering server uses the steering policy to decide whether to send service requests to the current code environment or the updated code environment. Feedback concerning actual steering decisions made by the steering server is received (e.g., performance metrics). The steering policy is automatically adjusted in response to the feedback. | 2016-05-19 |
20160139911 | EXTERNAL PLATFORM EXTENSIONS IN A MULTI-TENANT ENVIRONMENT - Methods and systems are described for allowing third party developers to add extensions to a cloud service provider's software as a service (SaaS) services by editing an ‘empty’ config file according to a schema provided by the cloud service provider to form a delta file and then merging the delta file with an internal, full version of the config file. The full config file is then used to initialize and instantiate objects upon a restart of the cloud provider's services. | 2016-05-19 |
20160139912 | SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM - A system includes processing circuitry and a database which stores target data to be accessed by a plurality of information processing apparatuses, each of which executes a software program that issues a request to access the target data, the request including version information regarding the software program which issued the request, the software program being updated from a first version to a second version. The processing circuitry is configured to: update the database by transferring data from a first database relating to the first version to a second database relating to the second version; receive the request from one of the plurality of information processing apparatuses during the updating of the database; and determine a transmission destination of the received request based on the version information included in the received request and on whether or not the target data has been transferred from the first database to the second database. | 2016-05-19 |
20160139913 | SYSTEMS AND METHODS FOR INCREMENTAL SOFTWARE DEPLOYMENT - Methods and systems for facilitating incremental software deployment are disclosed. For example, a method can include receiving a command to deploy a second version of software to a computing system for execution on the computing system. In response to the command, differences between the second version of the software and a first version of the software being executed on the computing system are determined. Code changes to be made to the first version of the software to produce the second version of the software are determined based on the differences. The code changes to be made to the first version of the software are transmitted to the computing system. | 2016-05-19 |
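A minimal sketch of the diff step, comparing content digests per file and shipping only the changes. The function name and the dict-of-files model are illustrative assumptions, not the patented mechanism.

```python
import hashlib

def file_digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def compute_changes(deployed: dict, target: dict) -> dict:
    """Return only the files that must change to turn `deployed` into `target`.

    Both arguments map file paths to file contents (bytes). The result maps
    each changed or new path to its new content, and each deleted path to None.
    """
    changes = {}
    for path, content in target.items():
        if path not in deployed or file_digest(deployed[path]) != file_digest(content):
            changes[path] = content        # new or modified file
    for path in deployed:
        if path not in target:
            changes[path] = None           # file removed in the target version
    return changes
```

Only the `changes` mapping would be transmitted to the computing system, rather than the whole second version.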
20160139914 | CONTEXTUAL-BASED LOCALIZATION BASED ON MANUAL TESTING - Example embodiments relate to contextual-based localization based on manual testing. A system may recreate, based on code of an application and user action data, how a user interacts with the application. The user action data may indicate how the user interacts with the application while manually testing the application. The system may detect screen states in the code based on the recreation. The screen states may be associated with screens displayed to the user while the user interacts with the application. The system may create, for each of the screen states, a translation package that includes a screen shot related to the particular screen state and a reduced properties file that includes a portion of a properties file that is related to a portion of the code that is associated with the particular screen state. The properties file may include information that can be used to localize the code. | 2016-05-19 |
20160139915 | EVALUATING SOFTWARE COMPLIANCE - A software compliance assessment apparatus for determining a level of compliance of a software application in execution in a virtualised computing environment, the apparatus comprising: an identifier component operable to identify resources instantiated for execution of the application; a retriever component operable to retrieve a compliance characteristic for the application, the compliance characteristic being retrieved based on the identified resources, and the compliance characteristic having associated a compliance criterion based on a formal parameter; a selector component operable to select a software component for providing an actual parameter corresponding to the formal parameter, the actual parameter being based on data concerning at least one of the resources; an evaluator component operable to evaluate the compliance criterion using the actual parameter; and a detector component operable to detect a change to one or more of the resources, wherein the identifier component, selector component and evaluator component are operable in response to a determination by the detector component that one or more resources is changed, wherein the selector component selects the software component based on an identification of one or more data items that the software component is operable to provide. | 2016-05-19 |
20160139916 | Build Deployment Automation for Information Technology Management - A computer-executable mechanism captures code modifications for a computer-executable process from a development environment into build packages that may be deployed onto specified target environments with trace, audit, code compliance and rollback options from one single web portal. The mechanism supports build package code changes from different sources, automated test of the resulting build packages, and phantom source control of all packaged code base to reduce the burden on developers to manually source control code. The computer-executable mechanism supports a portal web server for building and deploying build packages to render user responses to configurable actions that may be passed on to a job sequencer to execute a series of jobs. A computer-executable roll-back mechanism takes a snapshot of the target environment prior to deployment of a build package so that a complete release rollback or an incremental release rollback may occur as needed. | 2016-05-19 |
20160139917 | INCREMENTAL SOURCE CODE ANALYSIS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a full set of analysis artifacts after incremental analysis of a source code base. One of the methods includes receiving a first full set of analysis artifacts and an incremental set of analysis artifacts for a project. An initial keep graph that is initially equivalent to the full build graph is generated. Any source code file or analysis artifact nodes that also occur in the incremental build graph are removed from the keep graph. Analysis artifacts for source code files in the full build graph that do not occur in the keep graph are deleted from the first full set of analysis artifacts. The analysis artifacts represented by nodes in the incremental build graph are copied into the first full set of analysis artifacts to generate a second full set of analysis artifacts for the project. | 2016-05-19 |
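The keep-graph bookkeeping can be approximated with plain sets; real build graphs carry edges between source files and artifacts, which this sketch collapses into node names. All identifiers here are illustrative.

```python
def merge_incremental(full_artifacts: dict, full_graph: set,
                      incr_artifacts: dict, incr_graph: set) -> dict:
    """Rebuild a full artifact set after an incremental analysis.

    The keep graph starts as the full build graph and loses every node
    that reappears in the incremental build graph; artifacts outside the
    keep graph are dropped, then the fresh incremental artifacts are
    copied in to form the second full set.
    """
    keep = full_graph - incr_graph
    merged = {name: art for name, art in full_artifacts.items() if name in keep}
    merged.update(incr_artifacts)     # copy incremental artifacts in
    return merged
```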
20160139918 | Performing Rounding Operations Responsive To An Instruction - In one embodiment, the present invention includes a method for receiving a rounding instruction and an immediate value in a processor, determining if a rounding mode override indicator of the immediate value is active, and if so executing a rounding operation on a source operand in a floating point unit of the processor responsive to the rounding instruction and according to a rounding mode set forth in the immediate operand. Other embodiments are described and claimed. | 2016-05-19 |
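In software terms, the described immediate decoding might look like the following. The bit layout (override flag in bit 2, mode in bits 0-1) and the mode encodings are illustrative assumptions, not the application's actual instruction format.

```python
import math

# Rounding modes a control register might encode (encodings are illustrative).
MODES = {
    0b00: math.floor,   # round down
    0b01: math.ceil,    # round up
    0b10: math.trunc,   # round toward zero
    0b11: round,        # round to nearest, ties to even
}

def execute_round(value: float, imm: int, register_mode: int) -> int:
    """If the override bit of the immediate is active, round according to
    the mode set forth in the immediate; otherwise use the register mode."""
    override = (imm >> 2) & 1
    mode = imm & 0b11 if override else register_mode
    return MODES[mode](value)
```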
20160139919 | Machine Level Instructions to Compute a 3D Z-Curve Index from 3D Coordinates - In one embodiment, a processor includes 32-bit and 64-bit machine level instructions to compute a 3D Z-curve Index. A processor decode unit is configured to decode a z-curve ordering instruction having three source operands, each operand associated with one of a first, second, or third coordinate and a processor execution unit is configured to execute the decoded instruction before outputting the 3D Z-curve index to a location specified by a destination operand. | 2016-05-19 |
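The 3D Z-curve (Morton) index such an instruction computes is a three-way bit interleave. Below is a common magic-number software implementation for 10-bit coordinates; the instruction itself would perform this in hardware, and the helper names are illustrative.

```python
def part1by2(n: int) -> int:
    """Spread the low 10 bits of n so each bit lands every third position."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8))  & 0x0300F00F
    n = (n | (n << 4))  & 0x030C30C3
    n = (n | (n << 2))  & 0x09249249
    return n

def morton3d(x: int, y: int, z: int) -> int:
    """Interleave the bits of x, y, z into a single 3D Z-curve index."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```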
20160139920 | CARRY CHAIN FOR SIMD OPERATIONS - Examples of a carry chain for performing an operation on operands each including elements of a selectable size is provided. Advantageously, the carry chain adapts to elements of different sizes. The carry chain determines a mask based on a selected size of an element. The carry chain selects, based on the mask, whether to carry a partial result of an operation performed on corresponding first portions of a first operand and a second operand into a next operation. The next operation is performed on corresponding second portions of the first operand and the second operand, and, based on the selection, the partial result of the operation. The carry chain stores, in a memory, a result formed from outputs of the operation and the next operation. | 2016-05-19 |
20160139921 | VECTOR INSTRUCTION TO COMPUTE COORDINATE OF NEXT POINT IN A Z-ORDER CURVE - In one embodiment, a processor includes machine level instructions to compute a next point in a Z-order curve of a specified dimension for a specified coordinate. A processor decode unit is configured to decode an instruction having a source and immediate operands including a first z-curve index, the specified dimension and the specified coordinate. A processor execution unit is configured to execute the decoded instruction to compute the coordinate of the next point by incrementing the coordinate value associated with the specified coordinate to generate a second z-curve index including the incremented coordinate. | 2016-05-19 |
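Computing the next point along one dimension does not require fully decoding the index: a known bit trick lets the carry ripple only through the chosen coordinate's interleaved bits. A software analogue, with illustrative parameter names:

```python
def morton_increment(code: int, dim: int, ndims: int, bits: int = 10) -> int:
    """Increment one coordinate embedded in a Morton index without decoding.

    `dim` selects which interleaved coordinate to bump (0 .. ndims-1).
    Force every bit NOT belonging to `dim` to 1, add 1 at the coordinate's
    lowest bit so the carry ripples only through its own bits, then stitch
    the untouched dimensions back in.
    """
    mask = 0
    for i in range(bits):
        mask |= 1 << (i * ndims + dim)   # bit positions owned by `dim`
    carried = ((code | ~mask) + (1 << dim)) & mask
    return carried | (code & ~mask)
```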
20160139922 | CONTEXT SENSITIVE BARRIERS IN DATA PROCESSING - Apparatus for data processing and a method of data processing are provided, according to which the processing circuitry of the apparatus can access a memory system and execute data processing instructions in one context of multiple contexts which it supports. When the processing circuitry executes a barrier instruction, the resulting access ordering constraint may be limited to being enforced for accesses which have been initiated by the processing circuitry when operating in an identified context, which may for example be the context in which the barrier instruction has been executed. This provides a separation between the operation of the processing circuitry in its multiple possible contexts and in particular prevents delays in the completion of the access ordering constraint (for example, relating to accesses to high-latency regions of memory) from affecting the timing sensitivities of other contexts. | 2016-05-19 |
20160139923 | LOAD REGISTER ON CONDITION IMMEDIATE OR IMMEDIATE INSTRUCTION - A data processor comprising a plurality of registers, and instruction execution circuitry having an associated instruction set, wherein the instruction set includes an instruction specifying at least a mask operand, a register operand and an immediate value operand, and the instruction execution circuitry, in response to an instance of the instruction, determines a Boolean value based on the mask operand and sets a respective one of a plurality of registers specified by the register operand of the instance to a value of the immediate value operand if the Boolean value is true. The instruction execution circuitry, in response to the instance of the instruction, may set the respective one of the plurality of registers specified by the register operand of the instance to zero if the Boolean value is false. | 2016-05-19 |
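One plausible reading of the mask-to-Boolean step, modeled on condition-code-mask instructions; the 4-bit mask indexed by a condition code is an assumption, as the abstract does not specify how the Boolean is derived.

```python
def load_on_condition(registers: list, cond_code: int, mask: int,
                      reg: int, imm: int) -> None:
    """Set registers[reg] to the immediate if the mask-derived Boolean is
    true, and to zero otherwise, matching the two behaviors the abstract
    lists. Here the 4-bit mask is indexed by the current condition code
    (an assumed encoding, shown for illustration only).
    """
    boolean = (mask >> (3 - cond_code)) & 1
    registers[reg] = imm if boolean else 0
```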
20160139924 | Machine Level Instructions to Compute a 4D Z-Curve Index from 4D Coordinates - In one embodiment, a processor includes 32-bit and 64-bit machine level instructions to compute a 4D Z-curve Index. A processor decode unit is configured to decode a z-curve ordering instruction having four source operands, each operand associated with one of a first, second, third, or fourth coordinate, and a processor execution unit is configured to execute the decoded instruction before outputting the 4D Z-curve index to a location specified by a destination operand. | 2016-05-19 |
20160139925 | TECHNIQUES FOR IDENTIFYING INSTRUCTIONS FOR DECODE-TIME INSTRUCTION OPTIMIZATION GROUPING IN VIEW OF CACHE BOUNDARIES - A technique for processing instructions includes examining instructions in an instruction stream of a processor to determine properties of the instructions. The properties indicate whether the instructions may belong in an instruction sequence subject to decode-time instruction optimization (DTIO). Whether the properties of multiple ones of the instructions are compatible for inclusion within an instruction sequence of a same group is determined. The instructions with compatible ones of the properties are grouped into a first instruction group. The instructions of the first instruction group are decoded subsequent to formation of the first instruction group. Whether the first instruction group actually includes a DTIO sequence is verified based on the decoding. Based on the verifying, DTIO is performed on the instructions of the first instruction group or is not performed on the instructions of the first instruction group. | 2016-05-19 |
20160139926 | INSTRUCTION GROUP FORMATION TECHNIQUES FOR DECODE-TIME INSTRUCTION OPTIMIZATION BASED ON FEEDBACK - A technique of processing instructions for execution by a processor includes determining whether a first property of a first instruction and a second property of a second instruction are compatible. The first instruction and the second instruction are grouped in an instruction group in response to the first and second properties being compatible and a feedback value generated by a feedback function indicating the instruction group has been historically beneficial with respect to a benefit metric of the processor. Group formation for the first and second instructions is performed according to another criteria, in response to the first and second properties being incompatible or the feedback value indicating the grouping of the first and second instructions has not been historically beneficial. | 2016-05-19 |
20160139927 | IDENTIFYING INSTRUCTIONS FOR DECODE-TIME INSTRUCTION OPTIMIZATION GROUPING IN VIEW OF CACHE BOUNDARIES - A technique for processing instructions includes examining instructions in an instruction stream of a processor to determine properties of the instructions. The properties indicate whether the instructions may belong in an instruction sequence subject to decode-time instruction optimization (DTIO). Whether the properties of multiple ones of the instructions are compatible for inclusion within an instruction sequence of a same group is determined. The instructions with compatible ones of the properties are grouped into a first instruction group. The instructions of the first instruction group are decoded subsequent to formation of the first instruction group. Whether the first instruction group actually includes a DTIO sequence is verified based on the decoding. Based on the verifying, DTIO is performed on the instructions of the first instruction group or is not performed on the instructions of the first instruction group. | 2016-05-19 |
20160139928 | TECHNIQUES FOR INSTRUCTION GROUP FORMATION FOR DECODE-TIME INSTRUCTION OPTIMIZATION BASED ON FEEDBACK - A technique of processing instructions for execution by a processor includes determining whether a first property of a first instruction and a second property of a second instruction are compatible. The first instruction and the second instruction are grouped in an instruction group in response to the first and second properties being compatible and a feedback value generated by a feedback function indicating the instruction group has been historically beneficial with respect to a benefit metric of the processor. Group formation for the first and second instructions is performed according to another criteria, in response to the first and second properties being incompatible or the feedback value indicating the grouping of the first and second instructions has not been historically beneficial. | 2016-05-19 |
20160139929 | THREE-DIMENSIONAL MORTON COORDINATE CONVERSION PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor includes a plurality of packed data registers, a decode unit, and an execution unit. The decode unit is to decode a three-dimensional (3D) Morton coordinate conversion instruction. The 3D Morton coordinate conversion instruction to indicate a source packed data operand that is to include a plurality of 3D Morton coordinates, and to indicate one or more destination storage locations. The execution unit is coupled with the packed data registers and the decode unit. The execution unit, in response to the decode unit decoding the 3D Morton coordinate conversion instruction, is to store one or more result packed data operands in the one or more destination storage locations. The one or more result packed data operands are to include a plurality of sets of three 3D coordinates. Each of the sets of the three 3D coordinates is to correspond to a different one of the 3D Morton coordinates. | 2016-05-19 |
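The conversion this instruction performs, from a Morton index back to (x, y, z), is the inverse bit gather. A scalar sketch for 10-bit coordinates follows; the packed form in the abstract would apply this per SIMD lane, and the helper names are illustrative.

```python
def compact1by2(n: int) -> int:
    """Inverse of bit spreading: gather every third bit back together."""
    n &= 0x09249249
    n = (n ^ (n >> 2))  & 0x030C30C3
    n = (n ^ (n >> 4))  & 0x0300F00F
    n = (n ^ (n >> 8))  & 0x030000FF
    n = (n ^ (n >> 16)) & 0x000003FF
    return n

def decode_morton3d(code: int) -> tuple:
    """Recover (x, y, z) from a 3D Morton coordinate."""
    return compact1by2(code), compact1by2(code >> 1), compact1by2(code >> 2)
```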
20160139930 | FOUR-DIMENSIONAL MORTON COORDINATE CONVERSION PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor includes packed data registers, a decode unit, and an execution unit. The decode unit is to decode a four-dimensional (4D) Morton coordinate conversion instruction. The 4D Morton coordinate conversion instruction is to indicate a source packed data operand that is to include a plurality of 4D Morton coordinates, and is to indicate one or more destination storage locations. The execution unit is coupled with the packed data registers and the decode unit. The execution unit, in response to the decode unit decoding the 4D Morton coordinate conversion instruction, is to store one or more result packed data operands in the one or more destination storage locations. The one or more result packed data operands are to include a plurality of sets of four 4D coordinates. Each of the sets of the four 4D coordinates is to correspond to a different one of the 4D Morton coordinates. | 2016-05-19 |
20160139931 | MORTON COORDINATE ADJUSTMENT PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor includes a decode unit to decode an instruction that is to indicate a source packed data operand to include Morton coordinates, a dimensionality of a multi-dimensional space having points that the Morton coordinates are to be mapped to, a given dimension of the multi-dimensional space, and a destination. The execution unit is coupled with the decode unit. The execution unit, in response to the decode unit decoding the instruction, stores a result packed data operand in the destination. The result operand is to include Morton coordinates that are each to correspond to a different one of the Morton coordinates of the source operand. The Morton coordinates of the result operand are to be mapped to points in the multi-dimensional space that differ from the points that the corresponding Morton coordinates of the source operand are to be mapped to by a fixed change in the given dimension. | 2016-05-19 |
20160139932 | MANAGING HISTORY INFORMATION FOR BRANCH PREDICTION - Branch history information characterizes results of branch instructions previously executed by a processor. A count is stored of a number of consecutive branch instructions previously executed by the processor whose results all indicate a not taken branch. In a first pipeline stage, a predicted branch result is provided based on at least a portion of the branch history information, and one or more of the branch history information, and the count, is updated based on the predicted branch result. In a second pipeline stage an actual branch result is provided based on an executed branch instruction, and the branch history information is updated based on the actual branch result. If the predicted branch result indicates a taken branch, the branch history information is updated based on the count, and if the predicted branch result indicates a not taken branch, the count is updated but not the branch history information. | 2016-05-19 |
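The deferred-update scheme can be sketched as a history register plus a run-length counter. This is a simplified model: the field width and the exact fold-in policy are assumptions for illustration.

```python
class BranchHistory:
    """Track a global history register plus a run length of not-taken branches.

    Mirrors the scheme in the abstract: a predicted taken branch folds the
    pending not-taken count into the history; a predicted not-taken branch
    only bumps the count, deferring the history update.
    """
    def __init__(self, bits: int = 16):
        self.bits = bits
        self.history = 0          # 1 = taken, 0 = not taken, newest in LSB
        self.not_taken_count = 0  # consecutive predicted-not-taken branches

    def on_prediction(self, predicted_taken: bool) -> None:
        if predicted_taken:
            # Flush the deferred not-taken run, then record the taken branch.
            shift = self.not_taken_count + 1
            self.history = ((self.history << shift) | 1) & ((1 << self.bits) - 1)
            self.not_taken_count = 0
        else:
            self.not_taken_count += 1
```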
20160139933 | PROVIDING LOOP-INVARIANT VALUE PREDICTION USING A PREDICTED VALUES TABLE, AND RELATED APPARATUSES, METHODS, AND COMPUTER-READABLE MEDIA - Providing loop-invariant value prediction using a predicted values table, and related apparatuses, methods, and computer-readable media are disclosed. In one aspect, an apparatus comprising an instruction processing circuit is provided. The instruction processing circuit is configured to detect a loop body in an instruction stream, and to detect a value-generating instruction within the loop body. The instruction processing circuit determines whether an attribute of the value-generating instruction matches an entry of a predicted values table. If the attribute of the value-generating instruction is determined to be present in the entry of the predicted values table, the instruction processing circuit further determines whether a counter of the entry exceeds an iteration threshold. Responsive to determining that the counter of the entry exceeds the iteration threshold, the instruction processing circuit provides a predicted value in the entry of the predicted values table for execution of at least one dependent instruction. | 2016-05-19 |
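A toy model of the predicted values table and its iteration threshold; the attribute key and list-based entries are illustrative, as a real table would be indexed by an instruction address or a similar hardware attribute.

```python
class PredictedValuesTable:
    """Supply a predicted value for a loop-invariant instruction once it has
    produced the same value for more than `threshold` consecutive iterations."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.table = {}               # instruction attribute -> [value, counter]

    def observe(self, attr, value) -> None:
        entry = self.table.get(attr)
        if entry and entry[0] == value:
            entry[1] += 1             # same value again: strengthen confidence
        else:
            self.table[attr] = [value, 0]   # new or changed value: reset

    def predict(self, attr):
        entry = self.table.get(attr)
        if entry and entry[1] > self.threshold:
            return entry[0]           # confident: forward the predicted value
        return None                   # not confident: no prediction
```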
20160139934 | HARDWARE INSTRUCTION SET TO REPLACE A PLURALITY OF ATOMIC OPERATIONS WITH A SINGLE ATOMIC OPERATION - Systems and methods may process a single atomic operation. An instruction set may be generated to replace a plurality of atomic operations with a single atomic operation. The instruction set may include an accumulation instruction to compute a prefix sum for a plurality of initial values associated with a plurality of processing lanes to generate a plurality of accumulated values. The instruction set may also include a broadcast instruction to return a pre-existing value to be added with each of the plurality of accumulated values to generate a plurality of intermediate accumulated values. In one example, a graphics processor may execute the instruction set to process the single atomic operation. | 2016-05-19 |
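The two-instruction pattern, prefix-sum the lanes and then perform one atomic add plus a broadcast of the pre-existing value, can be sketched sequentially as follows. The function and variable names are assumptions, and the dict stands in for shared memory.

```python
from itertools import accumulate

def single_atomic_update(shared: dict, key: str, lane_values: list) -> list:
    """Replace N per-lane atomic adds with a single one.

    Prefix-sum the lane values (the accumulation instruction), do one
    atomic fetch-and-add of the total, then broadcast the pre-existing
    value and add it to each accumulated value to form the intermediate
    accumulated values, as the abstract describes.
    """
    prefix = list(accumulate(lane_values))   # accumulated values per lane
    old = shared.get(key, 0)                 # the single atomic fetch...
    shared[key] = old + prefix[-1]           # ...and add of the total
    return [old + p for p in prefix]         # broadcast + per-lane add
```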
20160139935 | LIVE INITIALIZATION OF A BOOT DEVICE - Embodiments of the present invention are provided that include executing, by a processor, a software stack received from a first boot image, and retrieving and executing, by the processor, a second software stack. A writeable boot device such as a storage device with a removable medium is detected, and the second software stack is saved by replacing, on the writeable boot device, the first boot image with a second boot image comprising the second software stack. The second software stack is saved upon detecting the boot device having no boot image. | 2016-05-19 |
20160139936 | Method and apparatus for multi-mode mobile computing devices and peripherals - Embodiments of a method and apparatus are described for operating a mobile computing device in different modes using different operating systems. An apparatus may comprise, for example, a memory operative to store multiple operating systems, a processor operative to execute the multiple operating systems, and an operating system management module operative to select a first operating system when the mobile computing device is in a first mode, or a second operating system when the mobile computing device is in a second mode and is coupled to one or more external devices. Other embodiments are described and claimed. | 2016-05-19 |
20160139937 | INTERFACING SYSTEMS AND METHODS - Systems and methods may replace and/or enhance green screens. A green screen may be replaced and/or enhanced by receiving green screen data, determining a modification to be applied to the green screen data, generating a user interface screen including the green screen data and the modification, and causing the user interface screen to be displayed on a display. | 2016-05-19 |
20160139938 | ENFORCING SOFTWARE COMPLIANCE - An apparatus for enforcing a compliance requirement for a software application in execution in a virtualised computing environment, the apparatus comprising: an identifier component operable to identify a resource instantiated for execution of the application; a retriever component operable to retrieve a compliance characteristic for the application, the compliance characteristic being retrieved based on the identified resource and having associated a compliance criterion based on a formal parameter, the compliance criterion defining a set of compliant resource states; a first selector component operable to select a software component for providing an actual parameter corresponding to the formal parameter, the actual parameter being based on data concerning the resource; an evaluator component operable to evaluate the compliance criterion using the actual parameter; an application modifier component operable to, in response to a determination that the resource is outside the set of compliant resource states, the determination being based on the evaluation of the compliance criterion, modify the software application to a modified software application having associated a resource with a state belonging to the set of compliant resource states; and a detector component operable to detect a change to one or more of the resources, wherein the identifier component, selector component and evaluator component are operable in response to a determination by the detector component that one or more resources is changed, and wherein the selector selects the software component based on an identification of one or more data items that the software component is operable to provide. | 2016-05-19 |
20160139939 | SYSTEM AND METHOD TO CHAIN DISTRIBUTED APPLICATIONS IN A NETWORK ENVIRONMENT - A method is provided in one example embodiment and may include communicating information between a plurality of network function virtualized (NFV) based applications; and creating at least one service chain using at least two of the plurality of NFV-based applications based on the information communicated between the plurality NFV based applications. In some instances, the information can be communicated using border gateway protocol (BGP) exchanges between the NFV-based applications. In some instances, the information can include at least one of: next-hop address information for one or more ingress points of a particular NFV-based application; one or more capabilities by which a particular NFV-based application can receive data on one or more ingress points; and a method by which one or more egress points of a previous NFV-based application in a particular service chain is to perform load balancing for a subsequent NFV-based application in the particular service chain. | 2016-05-19 |
20160139940 | SYSTEMS AND METHODS FOR CREATING VIRTUAL MACHINE - A system for creating virtual machines, adapted to a virtual management platform, includes a configuration module, a selection module, a determination module and a distribution module. The configuration module creates a plurality of virtual sections according to the resource specifications. The selection module selects one of the virtual sections according to the customized specifications. The determination module creates a virtual section-setting profile according to the customized specifications. The determination module further calculates the quantity of the remaining resources of the virtual section, and determines whether the quantity of the customized specifications will exceed the remaining resources or not. When the quantity of the customized specifications does not exceed the quantity of the remaining resources, the distribution module creates a virtual machine in the virtual section according to the virtual section-setting profile. | 2016-05-19 |
20160139941 | METHOD AND SYSTEM FOR SORTING AND BUCKETIZING ALERTS IN A VIRTUALIZATION ENVIRONMENT - An architecture sorts and bucketizes alerts in a virtualization environment. A plurality of alerts associated with virtual machines in the virtualization environment is received. A plurality of attributes is identified for the virtual machines, and a plurality of buckets defined for each attribute, into which the received alerts are assigned. The buckets for each attribute are then sorted. The attributes may also be sorted based upon the distribution of alerts in the buckets of the attribute, allowing a system administrator or other personnel to more easily determine which attributes of the virtual machines are correlated with the received alerts, in order to identify potential causes and solutions for the alerts in the virtualization environment. | 2016-05-19 |
20160139942 | VIRTUAL MACHINE INPUT/OUTPUT THREAD MANAGEMENT - A method performed by a physical computing system includes detecting an interrupt signal sent to a virtual processor being managed by the hypervisor, creating a map between the virtual processor and an Input/Output (I/O) thread associated with the interrupt signal, determining that the virtual processor is idle, finding the I/O thread associated with the idle virtual processor based on the map, and moving the I/O thread associated with the idle virtual processor up in a processing queue, the processing queue being for processes to be executed on a physical processor. | 2016-05-19 |
20160139943 | VIRTUAL MACHINE CLUSTER BACKUP - Embodiments are directed to backing up a virtual machine cluster and to determining virtual machine node ownership prior to backing up a virtual machine cluster. In one scenario, a computer system determines which virtual machines nodes are part of the virtual machine cluster, determines which shared storage resources are part of the virtual machine cluster and determines which virtual machine nodes own the shared storage resources. The computer system then indicates to the virtual machine node owners that at least one specified application is to be quiesced over the nodes of the virtual machine cluster, such that a consistent, cluster-wide checkpoint can be created. The computer system further creates a cluster-wide checkpoint which includes a checkpoint for each virtual machine in the virtual machine cluster. | 2016-05-19 |
20160139944 | Method and Apparatus for Combined Hardware/Software VM Migration - A method and apparatus are provided for migrating one or more hardware devices. | 2016-05-19 |
20160139945 | TECHNIQUES FOR CONSTRUCTING VIRTUAL IMAGES FOR INTERDEPENDENT APPLICATIONS - A technique for automating the construction of multiple virtual images with interdependencies includes creating a first virtual image (VI) instance by extending a first base VI based on input values associated with the first base VI and software bundles associated with the first base VI. A second VI instance is created by extending a second base VI based on input values associated with the second base VI, instance values received from the first VI instance, dependency instance values received from a deployed dependency instance, and software bundles associated with the second base VI. The first and second VI instances are then deployed to designated machines in a cloud environment for execution. | 2016-05-19 |
20160139946 | WORKLOAD-AWARE LOAD BALANCING TO MINIMIZE SCHEDULED DOWNTIME DURING MAINTENANCE OF HOST OR HYPERVISOR OF A VIRTUALIZED COMPUTING SYSTEM - A computer-implemented method for computing an optimal plan for maximizing availability of the workload balancing of a virtual computing device, in the event of maintenance of the virtual computing device, is provided. The computer-implemented method comprises determining a workload placement plan that migrates a plurality of virtual machines of the virtual computing device to at least one location of a plurality of hypervisors. The computer-implemented method further comprises receiving input parameters for computing the workload placement plan for migrating the plurality of virtual machines. The computer-implemented method further comprises determining the workload placement plan that forms the basis for migrating the plurality of virtual machines, within the virtual computing device, for maximizing operating objectives of the virtual computing device. | 2016-05-19 |
20160139947 | SYSTEM AND METHOD FOR AUTOMATICALLY LAUNCHING VIRTUAL MACHINES BASED ON ATTENDANCE - Certain aspects of the present disclosure relate to a virtual machine (VM) control system, which includes a VM controller. For a plurality of employees, the VM controller registers each employee by assigning an employee ID, and stores registration information in an attendance database. The VM controller also associates one or more VMs to each employee, and stores VM association information between the VMs and the employees in an employee ID database. The VM controller transmits polling inquiries periodically to the attendance database to retrieve employee presence events of the employees. For each employee, the employee presence events include an ingress event and an egress event. When the ingress event is detected and the associated VM is off, the VM controller launches the associated VM. When the egress event is detected and the associated VM is on, the VM controller shuts down the associated VM. | 2016-05-19 |
20160139948 | Dynamic Resource Configuration Based on Context - Aspects of the disclosure allocate shares of processing resources or other physical resources among virtual machines (VMs) operating as, for example, virtual desktops on a plurality of host computing devices. Allocations of resources are adjusted based on the user activity, VM activity, and/or application activity detected by an agent executing on each VM. Allocated shares may be boosted, unboosted, or normalized, depending on the type and duration of detected activity, by a resource allocation manager executing on a management server. | 2016-05-19 |
20160139949 | VIRTUAL MACHINE RESOURCE MANAGEMENT SYSTEM AND METHOD THEREOF - Implementations of the present disclosure provide a virtual machine resource management system and method thereof. According to one implementation, a request for service provisioning is received and at least one virtual machine associated with the request is created. When a determination has been made that the allocated virtual resources have exceeded a threshold value, a virtual machine is modified based on an associated life cycle stage priority or service information. | 2016-05-19 |
20160139950 | SHARING RESOURCES IN A MULTI-CONTEXT COMPUTING SYSTEM - In an embodiment, a method of providing quality of service (QoS) to at least one resource of a hardware processor includes providing, in a memory of the hardware processor, a context including at least one quality of service parameter and allocating access to the at least one resource of the hardware processor based on the quality of service parameter of the context, a device identifier, a virtual machine identifier, and the context. | 2016-05-19 |
20160139951 | SERVICE CLEAN-UP - Versions of a service not reachable by a set of service requestors that use the service are removed. Multiple, different versions of a service are stored, along with metadata associated with the multiple, different versions of the service. The metadata is examined to determine one or more of the multiple, different versions of the service that are not reachable by the set of service requestors that use the service. Those versions are deleted. | 2016-05-19 |
20160139952 | Throttle Control on Cloud-based Computing Tasks - Systems and methods for throttle control on cloud-based computing tasks are provided. An example method includes obtaining a service request from a first user of a plurality of users of the computer system; and, in accordance with a first determination that placing the service request in a service queue associated with the first user would not cause an enqueue counter associated with the first user to be exceeded, causing the service request to be placed in the service queue to await execution. The method also includes, after the service request is placed in the service queue, in accordance with a second determination that executing the service request would not cause a dequeue counter associated with the first user to be exceeded, causing the service request to be executed. | 2016-05-19 |
20160139953 | PREFERENTIAL CPU UTILIZATION FOR TASKS - In a computing storage environment having multiple processor devices, lists of Task Control Blocks (TCBs) are maintained in a processor-specific manner, such that each of the multiple processor devices is assigned a local TCB list. | 2016-05-19 |
20160139954 | QUIESCE HANDLING IN MULTITHREADED ENVIRONMENTS - Methods and apparatuses for performing a quiesce operation in a multithread environment are provided. A processor receives a first thread quiesce request from a first thread executing on the processor. The processor sends a first processor quiesce request to a system controller to initiate a quiesce operation. The processor performs one or more operations of the first thread based, at least in part, on receiving a response from the system controller. | 2016-05-19 |
20160139955 | QUIESCE HANDLING IN MULTITHREADED ENVIRONMENTS - Methods and apparatuses for performing a quiesce operation in a multithread environment are provided. A processor receives a first thread quiesce request from a first thread executing on the processor. The processor sends a first processor quiesce request to a system controller to initiate a quiesce operation. The processor performs one or more operations of the first thread based, at least in part, on receiving a response from the system controller. | 2016-05-19 |
20160139956 | MONITORING OVERTIME OF TASKS - A computer system monitors the execution time of each of a plurality of tasks over a plurality of time periods. The system receives a first input that selects a particular time period from the plurality of time periods, and further monitors the execution time of the plurality of tasks in the selected time period. The system receives a second input that selects a particular task from the particular time period, and monitors the execution time of the particular task in the particular time period. | 2016-05-19 |
20160139957 | METHOD AND SYSTEM FOR SCHEDULING VIRTUAL MACHINES IN INTEGRATED VIRTUAL MACHINE CLUSTERS - A method for scheduling virtual machines in a virtual machine cluster includes obtaining a filename of a target virtual machine when a user requests to start the target virtual machine; querying, based on the filename of the target virtual machine, a storage module or a database to identify one or more nodes where copies of the target virtual machine are located; selecting, from the identified one or more nodes, the node with the highest score as the target node having a copy of the target virtual machine; and running the copy of the target virtual machine on the selected target node. | 2016-05-19 |
20160139958 | ASSIGNING LEVELS OF POOLS OF RESOURCES TO A SUPER PROCESS HAVING SUB-PROCESSES - Provided are a computer program product, system, and method for assigning levels of pools of resources in an operating system to a super process having sub-processes. A plurality of first level pools of resources are reserved in the operating system for first level processes to perform a first level operation and invoke at least one second level process to perform a second level operation. A plurality of second level pools of resources are reserved in the operating system for second level processes. One of the second level pools of resources assigned to one of the second level processes is released and made available for assignment to another second level process when the second level process completes the second level operation for which it was invoked. | 2016-05-19 |
20160139959 | INFORMATION PROCESSING SYSTEM, METHOD AND MEDIUM - An information processing system includes: a memory configured to store job requests, each of which is to be assigned to one of a set of computing resources based on a priority determined by an allocation ratio assigned to each of a plurality of users; and processing circuitry configured to: assign a first job request to one of the computing resources; determine, when the first job request is assigned, a decrease degree of the priority of a first user corresponding to the first job request, based on the allocation ratio of the first user and the allocation ratios of other users whose job requests are stored in the memory; modify the priority of the first user based on the determined decrease degree; and assign a second job request to one of the computing resources based on the modified priority of the first user and the priorities of the remaining users. | 2016-05-19 |
20160139960 | SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT - A system for parallel processing tasks by allocating the use of exclusive locks to process critical sections of a task. The system includes storing update information that is updated in response to acquisition and release of an exclusive lock. When processing a task which includes a critical section containing code affecting execution of the other task, an exclusive execution unit acquires an exclusive lock prior to processing the critical section. When the section has been processed successfully, the lock is released and update information updated. Meanwhile a second task, whose critical section does not contain code affecting execution of the other task may run in parallel, without acquiring an exclusive lock, via a nonexclusive execution unit. The nonexclusive execution unit determines that the second critical section has successfully completed if the update information has not changed during processing of the second critical section. | 2016-05-19 |
20160139961 | EVENT SUMMARY MODE FOR TRACING SYSTEMS - Resource requirements are reduced in an instrumented process tracing system, for a process having a top instrumented process and a nested hierarchy of instrumented sub-processes. A computer receives a plurality of instrumented process data from the top process and the sub-processes, each datum including a process identifier, a process type, a top process identifier, and a process completion elapsed time. Based on the computer determining that the process identifier and the top process identifier in the datum received are equivalent: if the process completion elapsed time in the datum received is determined to be less than a threshold value, the computer writes a summary of the plurality of instrumented process data to a data store; and if the process completion elapsed time is determined to not be less than the threshold value, the computer writes the plurality of instrumented process data to the data store. | 2016-05-19 |
20160139962 | MIGRATING A VM IN RESPONSE TO AN ACCESS ATTEMPT BY THE VM TO A SHARED MEMORY PAGE THAT HAS BEEN MIGRATED - A hypervisor of a source host receives a request to migrate a group of virtual machines from the source host to a destination host. The hypervisor of the source host determines that a first virtual machine being migrated to the destination host shares a memory space on the source host with a second virtual machine on the source host. Upon receiving a request from the second virtual machine on the source host to access a first memory page of the shared memory space on the source host that has been migrated to the destination host, the hypervisor of the source host initiates migration of the second virtual machine to the destination host. | 2016-05-19 |
20160139963 | VIRTUAL COMPUTING POWER MANAGEMENT - As disclosed herein, a method, executed by a computer, includes comparing a current power consumption profile for a computing task with an historical power consumption profile, receiving a request for a computing resource, granting the request if the historical power consumption profile does not suggest a pending peak in the current power consumption profile or the historical power consumption profile indicates persistent consumption at a higher power level, and denying the request for the computing resource if the historical power consumption profile suggests a pending peak in the current power consumption profile and the historical power consumption profile indicates temporary consumption at the higher power level. Denying the request may include initiating an allocation timeout and subsequently ending the allocation timeout in response to a drop in a power consumption below a selected level. A computer system and computer program product corresponding to the method are also disclosed herein. | 2016-05-19 |
20160139964 | Energy Efficient Multi-Cluster System and Its Operations - A multi-cluster system having processor cores of different energy efficiency characteristics is configured to operate with high efficiency such that performance and power requirements can be satisfied. The system includes multiple processor cores in a hierarchy of groups. The hierarchy of groups includes: multiple level-1 groups, each level-1 group including one or more of processor cores having identical energy efficiency characteristics, and each level-1 group configured to be assigned tasks by a level-1 scheduler; one or more level-2 groups, each level-2 group including respective level-1 groups, the processor cores in different level-1 groups of the same level-2 group having different energy efficiency characteristics, and each level-2 group configured to be assigned tasks by a respective level-2 scheduler; and a level-3 group including the one or more level-2 groups and configured to be assigned tasks by a level-3 scheduler. | 2016-05-19 |
20160139965 | METHOD AND APPARATUS FOR A HIERARCHICAL SYNCHRONIZATION BARRIER IN A MULTI-NODE SYSTEM - A hierarchical barrier synchronization of cores and nodes on a multiprocessor system, in one aspect, may include providing, by each of a plurality of threads on a chip, an input bit signal to a respective bit in a register in response to reaching a barrier; determining whether all of the plurality of threads reached the barrier by electrically tying bits of the register together and “AND”ing the input bit signals; determining whether only on-chip synchronization is needed or whether inter-node synchronization is needed; in response to determining that all of the plurality of threads on the chip reached the barrier, notifying the plurality of threads on the chip if it is determined that only on-chip synchronization is needed; and after all of the plurality of threads on the chip have reached the barrier, communicating the synchronization signal outside of the chip if it is determined that inter-node synchronization is needed. | 2016-05-19 |
20160139966 | ALMOST FAIR BUSY LOCK - Managing exclusive control of a shareable resource includes publishing a claim non-atomically to a lock by a thread that is next to own the lock in an ordered set of threads that have requested to own the lock. The claim includes a structure capable of being read and written only in a single memory access. A determination is made of whether the next owning thread has been pre-empted. Responsive to the determination, the next owning thread acquires the lock if it has not been pre-empted and retries acquisition of the lock if it has been pre-empted. Responsive to the next owning thread being pre-empted, a subsequent owning thread acquires the lock unfairly and atomically modifies the lock so that the next lock owner can determine that it has been pre-empted. | 2016-05-19 |
20160139967 | CONCURRENT COMPUTING WITH REDUCED LOCKING REQUIREMENTS FOR SHARED DATA - Where data are shared by multiple computer processing threads, a determination is made of whether modifying data associated with a first computer processing thread violates a constraint associated with those data. Responsive to determining that the modification violates the constraint, the data associated with the first computer processing thread are used to modify the data shared by the multiple computer processing threads, which include the first computer processing thread. The constraint associated with the first thread's data represents a portion of a tolerance value that is associated with the shared data and is divided among multiple constraints, each of which is associated with a different one of the multiple computer processing threads. | 2016-05-19 |
20160139968 | Autonomous Instrument Concurrency Management - An Autonomous Concurrency Management (ACM) subsystem enables test instruments (operating as servers) to reliably and efficiently handle a variety of seamless multi-device-under-test (multi-DUT) scenarios and with minimal cooperation from the original equipment manufacturer (OEM) client software (e.g. test plans, hardware abstraction layer, etc.). Concurrency capability is built directly into the test instruments. Making the instrument based concurrency autonomous means the OEM software code base need not be specifically implemented for concurrency, potentially saving thousands of lines of OEM software code. To support basic concurrency scenarios where clients asynchronously share the instrument, as well as advanced concurrency scenarios such as a broadcast scenario, the ACM includes software lock, client separator, client rendezvous, and client observer functionality. An instrument ACM subsystem simplifies the problem from the client's perspective by moving the complexity to the lowest software layer, the RF (test) instrument. | 2016-05-19 |
20160139969 | IN-MEMORY APPROACH TO EXTEND SEMANTIC EVENT PROCESSING WITH DOMAIN INSIGHTS - A method, medium, and system to receive an event stream, the event stream including a plurality of events, the events being semantically modeled; receive domain insights specifying a relationship between two events, the domain insights being semantically modeled and defined by a specified time limit and a comparison of event attributes using the specified time limit with a logical operator; retrieve stored representations of events referenced in the received domain insights; process the event stream, the received domain insights, and the retrieved stored events to produce a temporal processing result; and store the temporal processing result. | 2016-05-19 |
20160139970 | NETWORK TRAFFIC PROCESSING - As disclosed herein a method, executed by a computer, for providing improved multi-protocol traffic processing includes receiving a data packet, determining if a big processor is activated, deactivating a little processor and activating the big processor if the big processor is not activated and an overflow queue is full, and deactivating the big processor and activating the little processor if the big processor is activated and a current throughput for the big processor is below a first threshold or a sustained throughput for the big processor remains below a second threshold. The big and little processors may be co-located on a single integrated circuit. An overflow queue, managed with a token bucket algorithm, may be used to enable the little processor to handle short bursts of data packet traffic. A computer program product and an apparatus corresponding to the described method are also disclosed herein. | 2016-05-19 |
20160139971 | SYSTEM AND METHOD FOR TAGGING AND TRACKING EVENTS OF AN APPLICATION PLATFORM - A system and method for providing delegated metric tools within a partially closed communication platform that includes receiving a tag identifier linked to at least a first identified platform interaction in the communication platform; associating the tag identifier with at least one logged event of an account associated with the first identified platform interaction; defining a tracking resource with at least one tag identifier; measuring platform interactions tracked by a tracking resource; and providing access to measured platform interactions through an application. | 2016-05-19 |
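Several of the abstracts above describe mechanisms concrete enough to sketch in code. The attendance-driven VM control rule of application 20160139947 reduces to a small decision on presence events; the sketch below is a minimal illustration with hypothetical names, not the filing's actual implementation.

```python
def handle_presence_event(event, vm_is_on):
    """Decide the VM controller action for one employee presence event.

    Sketch of the rule in application 20160139947: launch the associated
    VM on an ingress event if it is off; shut it down on an egress event
    if it is on. Function and parameter names are illustrative.
    """
    if event == "ingress" and not vm_is_on:
        return "launch"
    if event == "egress" and vm_is_on:
        return "shutdown"
    return "no-op"   # event does not change the VM's state
```

A redundant ingress (employee already present, VM already running) falls through to the no-op branch, which matches the abstract's conditioning on the VM's current on/off state.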
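The two-stage throttle of application 20160139952, with its enqueue counter and dequeue counter per user, can likewise be sketched as a small class. Names and limits here are assumptions for illustration only; the filing does not specify this structure.

```python
from collections import deque

class ThrottledQueue:
    """Per-user throttle sketch after application 20160139952: an enqueue
    limit caps how many requests may wait in the user's service queue,
    and a dequeue limit caps how many may execute at once."""

    def __init__(self, max_enqueued, max_executing):
        self.max_enqueued = max_enqueued
        self.max_executing = max_executing
        self.queue = deque()
        self.executing = 0

    def submit(self, request):
        # First determination: would enqueueing exceed the enqueue counter?
        if len(self.queue) >= self.max_enqueued:
            return False          # rejected; caller may retry later
        self.queue.append(request)
        return True

    def try_execute(self):
        # Second determination: would executing exceed the dequeue counter?
        if not self.queue or self.executing >= self.max_executing:
            return None
        self.executing += 1
        return self.queue.popleft()

    def finish(self):
        self.executing -= 1       # request completed; free an execution slot
```

Separating the two checks lets a burst of submissions be absorbed by the queue while execution concurrency stays bounded, which is the point of keeping the counters distinct.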
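Application 20160139960's scheme of update information bumped on lock acquisition and release, with non-conflicting sections validating that the information did not change, resembles the well-known seqlock pattern. The following is a sketch under that assumption, with invented names; it is not the patent's code.

```python
import threading

class SeqGuard:
    """Sketch of the scheme in application 20160139960: exclusive sections
    take a lock and bump a version counter on acquire and release;
    sections known not to conflict run lock-free and succeed only if the
    counter did not change while they ran (a seqlock-style validation)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0

    def run_exclusive(self, critical_section):
        with self._lock:          # exclusive execution unit
            self.version += 1     # update info on acquisition (now odd)
            result = critical_section()
            self.version += 1     # update info on release (even again)
        return result

    def run_optimistic(self, critical_section):
        # nonexclusive execution unit: retry until a clean, stable read
        while True:
            start = self.version
            result = critical_section()
            if self.version == start and start % 2 == 0:
                return result     # no exclusive section ran meanwhile
```

An odd version means an exclusive section is in progress, so the optimistic path also rejects results read mid-update, not just results overlapped by a completed update.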
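Finally, the big/little switching of application 20160139970 hinges on a token-bucket-managed overflow queue: the little processor absorbs short bursts while tokens last, and a full overflow queue triggers the big processor. The routing sketch below uses assumed parameter names and thresholds, not values from the filing.

```python
class TokenBucket:
    """Standard token bucket: tokens refill each tick up to capacity,
    and a packet may be handled immediately only if a token is available."""

    def __init__(self, capacity, refill_per_tick):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_tick

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.refill)

    def try_consume(self, n=1):
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

def route_packet(bucket, overflow, max_overflow):
    """Sketch of the policy in application 20160139970: serve on the
    little processor while tokens last, queue short bursts in the
    overflow queue, and wake the big processor when the queue fills."""
    if bucket.try_consume():
        return "little"
    overflow.append("pkt")
    if len(overflow) >= max_overflow:
        return "big"              # sustained load: activate big processor
    return "queued"               # short burst: absorb in overflow queue
```

Because the queue only fills when arrivals outpace the refill rate for a while, the big processor is activated on sustained load rather than on every momentary spike.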