52nd week of 2015 patent application highlights part 45 |
Patent application number | Title | Published |
20150370532 | SYSTEM FOR CONTROLLING SOUND WAVE BASED TOUCH RECOGNITION INTERFACE - A system for controlling a sound wave-based touch recognition interface includes: an auditory user interface (AUI) pad configured to transfer a sound wave, a sound wave recognition member configured to convert the sound wave transferred from the AUI pad into an electrical signal, and an actuator member attached to one side of the AUI pad and configured to be deformed in response to the electrical signal converted by the sound wave recognition member. | 2015-12-24 |
20150370533 | Solar tablet verbal - This invention is a first. Nothing exists like this tablet | 2015-12-24 |
20150370534 | MANAGING DEVICE, MANAGEMENT METHOD, RECORDING MEDIUM, AND PROGRAM - A managing device ( | 2015-12-24 |
20150370535 | METHOD AND APPARATUS FOR HANDLING INCOMING DATA FRAMES - A method and apparatus for handling incoming data frames within a network interface controller. The network interface controller comprises at least one controller component operably coupled to at least one memory element. The at least one controller component is arranged to identify a next available buffer pointer from a pool of buffer pointers stored within a first area of memory within the at least one memory element, receive an indication that a start of a data frame has been received via a network interface, and allocate the identified next available buffer pointer to the data frame. | 2015-12-24 |
20150370536 | FORMATTING FLOATING POINT NUMBERS - Flexible high-speed generation and formatting of application-specified strings in floating point and related formats is available through table-based base conversion which may be integrated with custom formatting, and through printf-style functionality based on separate control string parsing and specialized format command sequence execution. | 2015-12-24 |
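The table-based base conversion behind 20150370536 is easiest to see in miniature: once a float is split into an exact integer mantissa and a binary exponent, decimal digits fall out of repeated multiply-by-ten on exact integers. The sketch below is illustrative only (the function name and digit loop are not the patented method):

```python
import math

def float_to_decimal_digits(x, ndigits=10):
    """Sketch: split a non-negative float into an integer part and
    decimal fraction digits using exact integer arithmetic."""
    mantissa, exponent = math.frexp(x)       # x = mantissa * 2**exponent
    frac_int = int(mantissa * (1 << 53))     # exact 53-bit integer mantissa
    scale = 53 - exponent                    # x = frac_int / 2**scale
    int_part = frac_int >> scale
    frac = frac_int - (int_part << scale)
    digits = []
    for _ in range(ndigits):                 # one decimal digit per iteration
        frac *= 10
        digits.append(frac >> scale)
        frac -= digits[-1] << scale
    return int_part, digits
```

For example, `float_to_decimal_digits(3.25, 4)` yields `(3, [2, 5, 0, 0])`, i.e. "3.2500".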
20150370537 | High Efficiency Computer Floating Point Multiplier Unit - A high-power-efficiency multiplier combines a standard floating-point multiplier with a power-of-two multiplier that performs multiplications by shifting operations without the need for floating-point multiplication circuitry. By selectively steering some operands to this power-of-two multiplier, substantial power savings may be realized. In one embodiment, multiplicands may be modified to work with the power-of-two multiplier introducing low errors that may be accommodated in pixel calculations. | 2015-12-24 |
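The operand steering described in 20150370537 can be illustrated in software: when one operand is an exact power of two, the product needs only an exponent adjustment (a shift in hardware), not a full mantissa multiply. A behavioral sketch, with hypothetical function names:

```python
import math

def is_power_of_two(x):
    """True if the float x is exactly +/- 2**k for some integer k."""
    m, _ = math.frexp(abs(x))        # frexp gives x = m * 2**e, 0.5 <= m < 1
    return m == 0.5 and x != 0.0

def multiply(a, b):
    """Route power-of-two operands to a cheap exponent-add path;
    fall back to the full multiply otherwise."""
    if is_power_of_two(b):
        _, k = math.frexp(abs(b))
        sign = -1.0 if b < 0 else 1.0
        return sign * math.ldexp(a, k - 1)   # adjust exponent, no multiply
    return a * b
```

In hardware the cheap path saves power because the mantissa datapath stays idle; the abstract's note about modifying multiplicands refers to rounding nearby values onto this path at the cost of small, pixel-tolerable errors.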
20150370538 | HTML5 GRAPH LAYOUT FOR APPLICATION TOPOLOGY - A user may create a blueprint that specifies an application's architecture, including virtual machines (VM) nodes, software services and application components within the VM nodes. To do so, the user manipulates a graphical user interface (GUI) rendered by a Scalable Vector Graphics (SVG) layout engine. The SVG layout engine parses declarative layout configurations and translates the declarative layout into SVG elements that visually represent the blueprint. The SVG layout engine dynamically calculates absolute positions and sizes of child elements based on the declarative layout. | 2015-12-24 |
20150370539 | TRANSITIVE RELATIONSHIP IN MODEL DIAGRAM WITH ELEMENTS AND RELATIONSHIPS - Depicting a UML (unified modeling language) model by: (i) receiving data model data corresponding to a data model including: (a) a plurality of entity nodes, (b) a plurality of transitive relationship links, with each transitive relationship link directly linking two entity nodes, and (c) a plurality of non-transitive relationship links, with each non-transitive relationship link directly linking two entity nodes; and (ii) presenting a presented portion of the data model, with the presented portion including at least one transitive relationship link(s) and at least one non-transitive relationship link(s). The presentation of the presented portion includes at least one of the following features: (i) transitive relationship link(s) of the presented portion are presented in a different manner than the non-transitive relationship link(s) of the presented portion; and/or (ii) at least one connection path is represented as a multiple link path. | 2015-12-24 |
20150370540 | Method of developing an application for execution in a workflow management system and apparatus to assist with generation of an application for execution in a workflow management system - This disclosure provides techniques for facilitating workflow design and modification in a workflow management system. In one embodiment, software provides a design interface to an application developer to streamline transition definition and associated conditions and other parameters between phases of a workflow (and related rework for workflow modification), without requiring substantial manual recoding. The workflow management system accepts consequent data as metadata, which the system uses to enforce both state and required transition conditions to regulate how end-users interact with a database. The metadata is invoked during workflow execution in a manner tied to any desired condition and, thus, desired context. | 2015-12-24 |
20150370541 | VERIFICATION OF A MODEL OF A GUI-BASED APPLICATION - A method may include receiving a model of a graphical user interface (GUI) based application that includes a plurality of paths. The method may further include determining one or more paths of the plurality of paths that each include a pattern that satisfies a rule-pattern. The rule-pattern may be based on potential inaccuracies in the model as indicated by the pattern. The method may additionally include verifying whether the model is consistent with the GUI-based application. The verification may be based on a prioritization of a determination of whether the one or more paths are consistent with the GUI-based application. The prioritization of the one or more paths may be based on the one or more paths each including the pattern. | 2015-12-24 |
20150370542 | DRAG-AND-DROP FUNCTIONALITY FOR SCALABLE VECTOR GRAPHICS - A graphical user interface (GUI) engine receives an input event associated with a drag-and-drop action, determines a Scalable Vector Graphics (SVG) element that relates to the input event, and causes an anchor element to be attached to the SVG element, such as by wrapping the SVG element with the anchor element. Attaching an anchor element to an SVG element and defining the anchor element as “draggable” enables web browsers to perform drag-and-drop actions with SVG elements in a uniform and predictable manner. In one example use case, an SVG element may be wrapped with an anchor element when the SVG element is selected and dragged by a user, enabling an accurate representation of the SVG element to be displayed while the user is performing the drag-and-drop action. | 2015-12-24 |
20150370543 | DRIVER PROGRAM GENERATING APPARATUS, DRIVER PROGRAM GENERATING METHOD, DRIVER PROGRAM GENERATING PROGRAM, AND DRIVER PROGRAM - A driver program generating apparatus includes: an access unit configured to access a definition database including a plurality of pieces of UI definition data and a plurality of pieces of command definition data, and first and second association information; a program acquiring unit configured to acquire a driver program including at least a part of the plurality of pieces of UI definition data and at least a part of the plurality of pieces of command definition data as a program before change; a change unit configured to generate a program after change acquired by changing the UI definition data and the command definition data included in the acquired program before change according to a user's direction; and a correction unit configured to generate a correction program by changing a part of a plurality of pieces of command definition data included in the program after change. | 2015-12-24 |
20150370544 | METHODS FOR FACILITATING PERSISTENT STORAGE OF IN-MEMORY DATABASES AND DEVICES THEREOF - A method, non-transitory computer readable medium, and application host computing device that parses assembly language code to identify a transaction block including an assignment to a memory location, the assembly language code associated with an application and output by a compiler. The assembly language code is modified to insert an invocation of a plurality of functions collectively configured to facilitate persistent storage of one or more data updates associated with the assignment at run-time. The assembly language code is assembled to generate object code and the object code is linked with at least a run-time library including a definition for each of the plurality of inserted functions to generate an executable file for the application. | 2015-12-24 |
20150370545 | OBJECT STORAGE AND SYNCHRONIZATION HOOKS FOR OCCASIONALLY-CONNECTED DEVICES - A system may include an application programming interface (API) layer, a cache layer, and an object storage/access layer. The API layer may expose an interface to store a business object and an interface to retrieve the business object, and may transmit a request to store the business object and a request to retrieve the business object, and the cache layer may cache the business object and transmit the request to store the business object and the request to retrieve the business object. The object storage/access layer may receive the request to store the business object and, in response to the request to store the business object, to invoke a serialization method exposed by the business object to store the data associated with the object in a data structure. The object storage/access layer may also receive the request to retrieve the business object and, in response to the request to retrieve the business object, to invoke a deserialization method of the business object to deserialize the data associated with the business object in the data structure. | 2015-12-24 |
20150370546 | METHOD AND APPARATUS FOR GENERATING DATA DISTRIBUTION SERVICE APPLICATION - Provided herein are a method and apparatus for generating a data distribution service application, the method including syntax-analyzing an IDL (interface description language) file; determining a topic model to be used in the data distribution service application based on a result of the syntax-analyzing of the IDL file; receiving QoS information and determining a QoS model by a QoS (quality of service) modeler; determining a DDS application model based on the topic model and QoS model by a DDS (data distribution service) application modeler; and generating a source code based on the topic model, QoS model and DDS application model. | 2015-12-24 |
20150370547 | PROGRAM EDITING DEVICE, PROGRAM EDITING METHOD AND PROGRAM EDITING PROGRAM - A command code extraction part extracts, from among a plurality of command codes included in an instrument control program to be executed by a CPU unit and an input/output unit, a command code that is the same as an extraction target code indicated in an extraction target code list, as an extracted code. A sub-control program creation part creates, as a sub-control program to be executed by the input/output unit, a program including the extracted code that has been extracted. A main control program creation part creates, as a main control program to be executed by the CPU unit, a program which is obtained by removing from the instrument control program, the extracted code that has been extracted. | 2015-12-24 |
20150370548 | Automated Mobile Application Publishing - Systems, devices and techniques are disclosed for publishing multiple versions of an application to an application market via an application programming interface. The application programming interface may be configured to allow automated uploads of multiple versions of an application without requiring individual uploads of each version. A developer-associated party may provide the multiple versions of the application via the application programming interface. The multiple versions of the application may be provided via the application programming interface and not an application market interface. The multiple versions may be published to different sets of users. | 2015-12-24 |
20150370549 | SYSTEM AND METHOD FOR SUPPORTING DEPLOYMENT IN A MULTITENANT APPLICATION SERVER ENVIRONMENT - In accordance with an embodiment, described herein is a system and method for supporting deployment in an application server environment. A resource, for example an application or library, can be deployed to different resource groups in different partitions in a domain, to a resource group template referenced by the different resource groups, or to a domain-level resource group. One or more additional deployment operations can be performed on a deployed resource by a partition administrator or a system administrator. A deployment API can be provided to enable a plurality of deployment clients to perform the deployment operations, and can be used to derive partition information and target information for the deployment operations when the information is not provided by a partition administrator. Different deployment scopes are defined to allow a same resource to be deployed in different partitions of a domain and outside any partition in the domain. | 2015-12-24 |
20150370550 | VIRTUAL SOFTWARE APPLICATION DEPLOYMENT CONFIGURATIONS - Configuration items for a software application can be automatically and/or manually discovered, and the application can be packaged to form a virtual application package. A deployment configuration can include settings for the configuration items. The deployment configuration can be set after packaging the software application. For example, a selected configuration item in the deployment configuration may be changed in response to user input. The virtual application package can be deployed to instantiate the application one or more times, and the deployment configuration can be applied in the instantiated application. | 2015-12-24 |
20150370551 | Dynamic Update of Applications as Code is Checked-In - Software receives a message from a client device requesting an update check for an app deployed on the client device. The message includes a version number for the app. The software determines that a count of messages requesting an update check for the app exceeds a specified number. The software obtains an executable for the app from an app database, using the received version number. The software generates a dependency analysis by scanning the executable. The dependency analysis includes a version number for at least one dependent code module. The software determines that the app is updatable by comparing the version number in the dependency analysis with a version number for source code for the dependent code module. The software creates an updated app using newer source code for the dependent code module, using a developer specification as to compilation type, and transmits the updated app to the client device. | 2015-12-24 |
20150370552 | SUBSCRIBER DEFINED DYNAMIC EVENTING - A computer-implemented method of modifying execution behavior of a programmatic unit of source code is provided. The method includes loading the programmatic unit of source code and determining whether at least one customization is defined for the programmatic unit. The at least one customization is selectively executed based on whether a prerequisite of the customization is satisfied. | 2015-12-24 |
20150370553 | WRAPPING COMPUTER SOFTWARE APPLICATIONS - Wrapping a computer software application by unpackaging the computer software application into constituent components including a data file that includes a listing of any of the components, modifying the data file to include a reference to a library, where the library is configured to cause communications between the computer software application and a computer operating system to be intercepted and processed by instructions within the library when the computer software application is executed by a computer, and repackaging the computer software application to include the library and any of the components listed in the modified data file. | 2015-12-24 |
20150370554 | PROVIDING CODE CHANGE JOB SETS OF DIFFERENT SIZES TO VALIDATORS - Examples disclosed herein relate to providing code change job sets of different sizes to validators. Examples include placing a plurality of jobs in a queue, each job including at least one code change requested to be committed to shared code. Examples further include providing job sets of different sizes to a plurality of validators, each of the job sets comprising a consecutive group of one or more of the jobs in the queue at a given time and beginning with the job at the front of the queue at the given time. | 2015-12-24 |
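The queueing scheme of 20150370554 — consecutive job groups of different sizes, each beginning at the front of the queue at a given time — can be sketched in a few lines (all names here are hypothetical):

```python
from collections import deque

def job_sets_for_validators(queue, sizes):
    """Give each validator a consecutive set of jobs starting at the
    front of the queue; the set size differs per validator."""
    jobs = list(queue)
    return {validator: jobs[:size] for validator, size in sizes.items()}

# Four pending code changes; a quick validator takes one job,
# a thorough validator takes a larger consecutive prefix.
q = deque(["change-1", "change-2", "change-3", "change-4"])
sets = job_sets_for_validators(q, {"fast-validator": 1, "full-validator": 3})
```

Overlapping prefixes mean a failure found by the fast validator invalidates the larger set early, while the larger set amortizes validation cost across several changes.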
20150370555 | COMPOSITING DELTAS WHEN MERGING ARTIFACTS IN A VERSION CONTROL SYSTEM - Embodiments of the present invention address deficiencies of the art in respect to merging artifacts in a version control system and provide a novel and non-obvious method, system and computer program product for compositing deltas when merging artifacts in a version control system. In one embodiment, a method for compositing deltas for artifacts can be provided. The method can include generating deltas for a contributor artifact of an ancestor artifact, identifying interrelated ones of the deltas and grouping the interrelated ones of the deltas into a composited set of deltas. The method further can include rendering the composited set of deltas in a hierarchical view of a compare view for a version control data processing system in a development platform. | 2015-12-24 |
20150370556 | ESTABLISHING SUBSYSTEM BOUNDARIES BASED ON CALL FLOW GRAPH TOPOLOGY - According to one exemplary embodiment, a method for establishing subsystem boundaries is provided. The method may include receiving an input program having a plurality of subroutines and at least one inter-subroutine call. The method may include generating a graph having a plurality of nodes and at least one edge, wherein the at least one edge includes a first end connected to a first node and a second end connected to a second node. The method may include assigning an edge weight to the at least one edge wherein the edge weight is based on a number of second ends received by the second node. The method may include determining, based on the assigned edge weight, a distance value between each pair of nodes. The method may include generating a grouping of nodes based on the determined distance value between each pair of nodes. | 2015-12-24 |
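The edge-weighting and grouping steps of 20150370556 can be approximated as: weight each call edge by the callee's in-degree (heavily called utility routines bind their callers less tightly), then cluster nodes joined by low-weight edges. This union-find sketch is an illustration under those assumptions, not the claimed method:

```python
def edge_weights(calls):
    """calls: list of (caller, callee) edges from a call-flow graph.
    An edge's weight is the callee's in-degree."""
    indegree = {}
    for _, callee in calls:
        indegree[callee] = indegree.get(callee, 0) + 1
    return {(a, b): indegree[b] for a, b in calls}

def group_subsystems(calls, cutoff=2):
    """Merge nodes joined by edges below the cutoff into subsystems."""
    weights = edge_weights(calls)
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for (a, b), w in weights.items():
        if w < cutoff:                      # tightly coupled: same subsystem
            parent[find(a)] = find(b)
    nodes = {n for edge in calls for n in edge}
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())
```

A shared helper called from everywhere ends up in its own group, while a routine with a single caller stays with that caller.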
20150370557 | FLOATING POINT EXECUTION UNIT FOR CALCULATING PACKED SUM OF ABSOLUTE DIFFERENCES - A method provides support for packed sum of absolute difference operations in a floating point execution unit, e.g., a scalar or vector floating point execution unit. Existing adders in a floating point execution unit may be utilized along with minimal additional logic in the floating point execution unit to support efficient execution of a fixed point packed sum of absolute differences instruction within the floating point execution unit, often eliminating the need for a separate vector fixed point execution unit in a processor architecture, and thereby leading to less logic and circuit area, lower power consumption and lower cost. | 2015-12-24 |
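The fixed-point operation at the heart of 20150370557 is simple to state in software, which is why reusing the adders already present in a floating-point unit for it is attractive:

```python
def packed_sad(a_bytes, b_bytes):
    """Packed sum of absolute differences over byte lanes — the
    fixed-point operation the patent maps onto existing FP adders."""
    return sum(abs(a - b) for a, b in zip(a_bytes, b_bytes))
```

For two 4-byte lanes such as `[1, 5, 250, 0]` and `[3, 5, 245, 7]`, the per-lane differences 2, 0, 5 and 7 sum to 14. SAD is the inner loop of video motion estimation, so executing it without a separate fixed-point vector unit saves both area and power.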
20150370558 | RELOCATION OF INSTRUCTIONS THAT USE RELATIVE ADDRESSING - Relocation of instructions that use relative addressing. Metadata relating to an instruction that uses relative addressing to access data and is to be relocated is stored prior to relocation. Based on relocating the instruction from one memory location to another memory location, a determination is made of an address to be used to access the data by the instruction. The determining is based on at least one of the metadata or an address of the another memory location. The instruction is executed at the another memory location, and the determined address is used to access the data. | 2015-12-24 |
20150370559 | ENDIAN-MODE-INDEPENDENT MEMORY ACCESS IN A BI-ENDIAN-MODE PROCESSOR ARCHITECTURE - Embodiments relate to vector processors. An aspect includes endian-mode-sensitive memory instructions for a vector processor. One embodiment includes a computer-implemented method for copying data between a vector register that includes byte elements 0 to S and a memory that is byte addressable. The computer-implemented method includes obtaining a vector instruction by a processor in a computer. The processor determines that the vector instruction is a memory access instruction specifying the vector register and a memory address. In response to the determination that the instruction is a memory access instruction, and independent of a current global endian mode setting that is selectable in the processor, the processor executes the memory access instruction by copying the byte data between the memory and the vector register so that the byte element n of the vector register corresponds to the memory address+n for n=0 to S. | 2015-12-24 |
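The copy rule of 20150370559 — byte element n of the register always corresponds to memory address + n, regardless of the global endian mode — reduces to the following behavioral sketch (not hardware):

```python
def load_vector(memory, addr, size):
    """Byte element n of the register comes from memory[addr + n],
    for n = 0..size-1, independent of any endian mode."""
    return [memory[addr + n] for n in range(size)]

def store_vector(memory, addr, register):
    """Byte element n of the register goes to memory[addr + n]."""
    for n, byte in enumerate(register):
        memory[addr + n] = byte
```

Because the mapping never consults the endian setting, code that moves whole vectors produces identical memory images in either mode, which is the point of the instruction.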
20150370560 | METHODS FOR ENFORCING CONTROL FLOW OF A COMPUTER PROGRAM - One aspect of the invention provides a method of controlling execution of a computer program. The method comprises the following runtime steps: parsing code to identify one or more indirect branches; creating a branch ID data structure that maps an indirect branch location to a branch ID, which is the indirect branch's equivalence class ID; creating a target ID data structure that maps a code address to a target ID, which is an equivalence class ID to which the address belongs; and prior to execution of an indirect branch including a return instruction located at an address: obtaining the branch ID associated with the return address from the branch ID data structure; obtaining the target ID associated with an actual return address for the indirect branch from the target ID data structure; and comparing the branch ID and the target ID. | 2015-12-24 |
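The runtime check in 20150370560 boils down to comparing two equivalence-class IDs: one looked up from the branch site, one from the actual target. A sketch with hypothetical tables keyed by address:

```python
def check_indirect_branch(branch_ids, target_ids, branch_addr, target_addr):
    """Allow an indirect branch only when the branch site and its
    actual target belong to the same equivalence class."""
    bid = branch_ids.get(branch_addr)    # branch ID data structure
    tid = target_ids.get(target_addr)    # target ID data structure
    if bid is None or tid is None or bid != tid:
        raise RuntimeError("control-flow violation")
    return True
```

With `branch_ids = {0x400A: 1}` and `target_ids = {0x4100: 1, 0x4200: 2}`, a return through `0x400A` to `0x4100` passes, while a hijacked return to `0x4200` is rejected — the essence of coarse-grained control-flow integrity.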
20150370561 | SKIP INSTRUCTION TO SKIP A NUMBER OF INSTRUCTIONS ON A PREDICATE - A pipelined run-to-completion processor executes a conditional skip instruction. If a predicate condition as specified by a predicate code field of the skip instruction is true, then the skip instruction causes execution of a number of instructions following the skip instruction to be “skipped”. The number of instructions to be skipped is specified by a skip count field of the skip instruction. In some examples, the skip instruction includes a “flag don't touch” bit. If this bit is set, then neither the skip instruction nor any of the skipped instructions can change the values of the flags. Both the skip instruction and following instructions to be skipped are decoded one by one in sequence and pass through the processor pipeline, but the execution stage is prevented from carrying out the instruction operation of a following instruction if the predicate condition of the skip instruction was true. | 2015-12-24 |
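The skip semantics of 20150370561 — skipped instructions still flow through decode but are not executed — can be modeled with a tiny interpreter (the instruction encoding here is invented for illustration):

```python
def run(program, flags):
    """('skip', predicate, count) skips the next `count` instructions
    when predicate(flags) is true; other instructions are (name, fn)
    pairs whose fn mutates `state`."""
    state, pc, skip = {}, 0, 0
    while pc < len(program):
        instr = program[pc]
        pc += 1                      # every instruction is still "decoded"
        if skip:
            skip -= 1                # ...but its operation is suppressed
            continue
        if instr[0] == "skip":
            _, predicate, count = instr
            if predicate(flags):
                skip = count
        else:
            _, fn = instr
            fn(state)
    return state
```

With the zero flag set, a `skip 2` suppresses the two following writes and only the third executes; with it clear, all three execute. The patent's "flag don't touch" bit would additionally freeze the flags across the skipped region.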
20150370562 | EFFICIENT CONDITIONAL INSTRUCTION HAVING COMPANION LOAD PREDICATE BITS INSTRUCTION - A pipelined run-to-completion processor can decode three instructions in three consecutive clock cycles, and can also execute the instructions in three consecutive clock cycles. The first instruction causes the ALU to generate a value which is then loaded due to execution of the first instruction into a register of a register file. The second instruction accesses the register and loads the value into predicate bits in a register file read stage. The predicate bits are loaded in the very next clock cycle following the clock cycle in which the second instruction was decoded. The third instruction is a conditional instruction that uses the values of the predicate bits as a predicate code to determine a predicate function. If a predicate condition (as determined by the predicate function as applied to flags) is true then an instruction operation of the third instruction is carried out, otherwise it is not carried out. | 2015-12-24 |
20150370563 | MULTI-PROCESSOR SYSTEM HAVING TRIPWIRE DATA MERGING AND COLLISION DETECTION - An integrated circuit includes a pool of processors and a Tripwire Data Merging and Collision Detection Circuit (TDMCDC). Each processor has a special tripwire bus port. Execution of a novel tripwire instruction causes the processor to output a tripwire value onto its tripwire bus port. Each respective tripwire bus port is coupled to a corresponding respective one of a plurality of tripwire bus inputs of the TDMCDC. The TDMCDC receives tripwire values from the processors and communicates them onto a consolidated tripwire bus. From the consolidated bus the values are communicated out of the integrated circuit and to a debug station. If more than one processor outputs a valid tripwire value at a given time, then the TDMCDC asserts a collision bit signal that is communicated along with the tripwire value. Receiving tripwire values onto the debug station facilitates use of the debug station in monitoring and debugging processor code. | 2015-12-24 |
20150370564 | APPARATUS AND METHOD FOR ADDING A PROGRAMMABLE SHORT DELAY - Described is an integrated circuit (IC) comprising: a processor; and a plurality of registers coupled to the processor, wherein the processor is to select one of the registers of the plurality to stall execution of an instruction by a predetermined time. | 2015-12-24 |
20150370565 | DATA PROCESSING DEVICE AND METHOD, AND PROCESSOR UNIT OF SAME - A processor unit ( | 2015-12-24 |
20150370566 | INSTRUCTION SET ARCHITECTURE WITH OPCODE LOOKUP USING MEMORY ATTRIBUTE - A method decodes instructions based in part on one or more decode-related attributes stored in a memory address translation data structure such as an Effective To Real Translation (ERAT) or Translation Lookaside Buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a decode of an instruction stored in a page of memory, such that one or more attributes associated with the page in the data structure may be used to control how that instruction is decoded. | 2015-12-24 |
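The decode-related page attribute of 20150370566 can be mimicked with a per-page lookup that selects among decode tables, the way an ERAT/TLB entry would steer the decoder (the two modes and opcode maps below are hypothetical):

```python
PAGE_SIZE = 4096

# Hypothetical decode tables selected by a per-page attribute.
DECODE_TABLES = {
    "base":     {0x01: "add",  0x02: "sub"},
    "extended": {0x01: "vadd", 0x02: "vsub"},
}

def decode(page_attrs, address, opcode):
    """Look up the decode mode stored with the instruction's page
    (defaulting to 'base'), then decode the opcode in that mode."""
    mode = page_attrs.get(address // PAGE_SIZE, "base")
    return DECODE_TABLES[mode][opcode]
```

Tagging pages rather than instructions lets the same opcode byte mean different things in different code regions without widening the instruction encoding.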
20150370567 | METHOD AND APPARATUS FOR PERFORMANCE EFFICIENT ISA VIRTUALIZATION USING DYNAMIC PARTIAL BINARY TRANSLATION - Methods, apparatus and systems for virtualization of a native instruction set are disclosed. Embodiments include a processor core executing the native instructions and a second core, or alternatively only the second processor core consuming less power while executing a second instruction set that excludes portions of the native instruction set. The second core's decoder detects invalid opcodes of the second instruction set. A microcode layer disassembler determines if opcodes should be translated. A translation runtime environment identifies an executable region containing an invalid opcode, other invalid opcodes and interjacent valid opcodes of the second instruction set. An analysis unit determines an initial machine state prior to execution of the invalid opcode. A partial translation of the executable region that includes encapsulations of the translations of invalid opcodes and state recoveries of the machine states is generated and saved to a translation cache memory. | 2015-12-24 |
20150370568 | INTEGRATED CIRCUIT PROCESSOR AND METHOD OF OPERATING AN INTEGRATED CIRCUIT PROCESSOR - A processor includes an instruction pipeline. The pipeline can be operated alternatively in a multi-thread mode and in a single-thread mode. In the multi-thread mode, the instruction pipeline processes multiple threads in an interleaved or simultaneous manner. In the single-thread mode, the pipeline processes a single thread. The instruction pipeline comprises multiple functional units, each of which is reserved for one thread among the multiple threads when the pipeline is in the multi-thread mode and reserved for one context layer among multiple context layers when the instruction pipeline is in the single-thread mode. A method of operating a processor is also disclosed. | 2015-12-24 |
20150370569 | INSTRUCTION PROCESSING SYSTEM AND METHOD - An instruction processing system is provided. The system includes a central processing unit (CPU), an m number of memory devices and an instruction control unit. The CPU is capable of being coupled to the m number of memory devices. Further, the CPU is configured to execute one or more instructions of the executable instructions. The m number of memory devices with different access speeds are configured to store the instructions, where m is a natural number greater than 1. The instruction control unit is configured to, based on a track address of a target instruction of a branch instruction stored in a track table, control a memory with a lower speed to provide the instruction for a memory with a higher speed. | 2015-12-24 |
20150370570 | Computer Processor Employing Temporal Addressing For Storage Of Transient Operands - A computer processor including a plurality of storage elements logically organized as a fixed length queue referenced by logical temporal addresses. The fixed length queue operates over multiple cycles to temporarily store operands referenced by at least one instruction utilizing the logical temporal addresses. A plurality of functional units performs operations over the multiple cycles, wherein the operations produce and access operands stored in the logical fixed length queue. Operands can be added to the front of the logical fixed length queue according to the temporal order that operands are produced by the functional units, and operands can drop from the end of the logical fixed length queue as operands are added to the front of the fixed length queue. A plurality of operands produced by the plurality of functional units (possibly with different latencies in producing such operands) can be added to the logical fixed length queue in a single cycle. A plurality of operands operated on by the functional units can be accessed from the logical fixed length queue in a single cycle. | 2015-12-24 |
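The temporal addressing of 20150370571's neighbor, 20150370570, behaves like a fixed-length queue addressed by operand age: address 0 is the most recently produced operand, 1 the one before it, and the oldest operands fall off the end as new results are pushed. A behavioral sketch (class and method names are illustrative):

```python
from collections import deque

class TemporalQueue:
    """Fixed-length operand queue addressed by age: read(0) is the
    newest operand; old operands drop off as new ones are pushed."""
    def __init__(self, length):
        self.slots = deque(maxlen=length)   # oldest entry evicted on overflow
    def push(self, operand):
        """A functional unit produces a result onto the front."""
        self.slots.appendleft(operand)
    def read(self, temporal_addr):
        """An instruction references an operand by its age."""
        return self.slots[temporal_addr]
```

Because operands are named by when they were produced rather than where they live, no register allocator is needed for short-lived values; a result simply ages out once nothing downstream references it.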
20150370571 | PROCESSOR HAVING A TRIPWIRE BUS PORT AND EXECUTING A TRIPWIRE INSTRUCTION - A pipelined run-to-completion processor has a special tripwire bus port and executes a novel tripwire instruction. Execution of the tripwire instruction causes the processor to output a tripwire value onto the port during a clock cycle when the tripwire instruction is being executed. A first multi-bit value of the tripwire value is data that is output from registers, and/or flags, and/or pointers, and/or data values stored in the pipeline. A field of the tripwire instruction specifies what particular stored values will be output as the first multi-bit value. A second multi-bit value of the tripwire value is a number that identifies the particular processor that output the tripwire value. The processor has a TE enable/disable control bit. This bit is programmable by a special instruction to disable all tripwire instructions. If disabled, a tripwire instruction is fetched and decoded but does not cause the output of a tripwire value. | 2015-12-24 |
20150370572 | Multi-User Processor System for Processing Information - This multi-user processor system for processing information, of the type including a data exchange engine ( | 2015-12-24 |
20150370573 | SPECULATIVE FINISH OF INSTRUCTION EXECUTION IN A PROCESSOR CORE - In a processor core, high latency operations are tracked in entries of a data structure associated with an execution unit of the processor core. In the execution unit, execution of an instruction dependent on a high latency operation tracked by an entry of the data structure is speculatively finished prior to completion of the high latency operation. Speculatively finishing the instruction includes reporting an identifier of the entry to completion logic of the processor core and removing the instruction from an execution pipeline of the execution unit. The completion logic records dependence of the instruction on the high latency operation and commits execution results of the instruction to an architected state of the processor only after successful completion of the high latency operation. | 2015-12-24 |
20150370574 | REPLICATING LOGIC BLOCKS TO ENABLE INCREASED THROUGHPUT - A datapath pipeline which uses replicated logic blocks to increase the throughput of the pipeline is described. In an embodiment, the pipeline, or a part thereof, comprises a number of parallel logic paths each comprising the same logic. Input register stages at the start of each logic path are enabled in turn on successive clock cycles such that data is read into each logic path in turn and the logic in the different paths operates out of phase. The output of the logic paths is read into one or more output register stages and the logic paths are combined using a multiplexer which selects an output from one of the logic paths on any clock cycle. Various optimization techniques are described and in various examples, register retiming may also be used. In various examples, the datapath pipeline is within a processor. | 2015-12-24 |
20150370575 | LICENSE MANAGEMENT USING A BASIC INPUT/OUTPUT SYSTEM (BIOS) - Methods and systems for license management using a basic input/output system (BIOS) may involve performing license activation, monitoring, and enforcement. The BIOS may store license information to manage licenses for hardware and/or software components of an information handling system. License management by the BIOS may include monitoring a system clock of the information handling system for changes, in order to prevent tampering with license durations. | 2015-12-24 |
20150370576 | Method to Facilitate Rapid Deployment and Redeployment of an Information Handling System - An information handling system includes a processor, a Unified Extensible Firmware Interface (UEFI) boot volume, and a memory including UEFI code and a setup module. The UEFI code is executable by the processor to boot the information handling system, determine if the UEFI boot volume includes a setup data file, and launch the setup module in response to determining that the UEFI boot volume includes the setup data file. The setup module is executable by the processor to read first information from the setup data file, and set a first configuration setting of the information handling system based upon the first information. | 2015-12-24 |
20150370577 | REMOTELY EXECUTING OPERATIONS OF AN APPLICATION USING A SCHEMA THAT PROVIDES FOR EXECUTABLE SCRIPTS IN A NODAL HIERARCHY - A schema is provided that logically represents a nodal hierarchy relating to execution of an application. The hierarchy includes multiple nodes, including one or more category nodes and one or more content nodes. An executable script is provided with the schema. The script may be associated with at least one node of the hierarchy. Each of multiple user inputs from the computing device are processed using the schema. The individual user inputs may be selective of nodes of the hierarchy. In response to processing each of multiple user inputs, user interface content is provided to the computing device. The user interface content for each user input corresponds to one of (i) one or more nodes, or (ii) a script content, generated as an output of an executed script that is associated with a selected node. | 2015-12-24 |
20150370578 | METHOD FOR ACCESSING MULTIPLE INTERNAL REGISTERS OF A SERVER - A method is provided for facilitating access by an external user to the internal registers of a server including: transmitting access commands originating from the external user to a service processor using a communication protocol directly understandable by the service processor which accesses the internal registers using one or more access protocols, and automatically transforming command lines issued by the user into access commands in the communication protocol using one or more service modules which associate at least the corresponding addresses of the internal registers with the names of the internal registers supplied by the external user. On the occasion of a user-commanded access by the service processor to the internal registers, the service processor is responsible for managing a possible risk of collision with a monitoring access to the internal registers for the purposes of updating a copy of the status of the internal registers. | 2015-12-24 |
20150370579 | MODULAR SPACE VEHICLE BOARDS, CONTROL SOFTWARE, REPROGRAMMING, AND FAILURE RECOVERY - A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure. | 2015-12-24 |
20150370580 | CONFIGURATION CONTROLLER FOR AND A METHOD OF CONTROLLING A CONFIGURATION OF A CIRCUITRY - A configuration controller for and a method of controlling a configuration of a circuitry are provided. The configuration controller comprises an input, a selection checker, a data selector and an output. The input receives an input configuration selection signal which is encoded according to a specific encoding scheme. The selection checker checks the correctness of the received input configuration selection signal and provides to the data selector a selection signal which indicates a specific configuration selection if the input configuration selection signal is correct, or indicates a default configuration selection if the input configuration selection signal is incorrect according to the specific encoding scheme. The data selector selects configuration data from its internal configuration data storage in accordance with the selection signal and provides the selected configuration data to the output. | 2015-12-24 |
20150370581 | Common System Services for Managing Configuration and Other Runtime Settings of Applications - A method for managing settings of applications. A request from an application to store runtime settings currently being used by the application is identified. In response to identifying the request, the runtime settings are then stored in a repository of runtime settings. In one or more examples, the application is running on an operating system on a computer system, and the request is communicated through a common system service of the operating system. | 2015-12-24 |
20150370582 | AT LEAST ONE USER SPACE RESIDENT INTERFACE BETWEEN AT LEAST ONE USER SPACE RESIDENT VIRTUAL APPLIANCE AND AT LEAST ONE VIRTUAL DATA PLANE - In an embodiment, circuitry may be provided that may execute at least one interface process in a user space of a host. The host, in operation, also may have a kernel space. The at least one process may provide at least one interface, at least in part, between at least one virtual appliance and at least one virtual data plane. The at least one virtual data plane may facilitate communication between at least one physical device and the at least one virtual appliance via the at least one interface. The at least one physical device may appear to the at least one virtual appliance, when the at least one virtual appliance communicates with the at least one physical device via the at least one interface, as at least one local device. The at least one virtual appliance and the at least one interface may be resident in the user space. | 2015-12-24 |
20150370583 | SYSTEM AND METHOD FOR SIMULATING VIRTUAL MACHINE (VM) PLACEMENT IN VIRTUAL DATACENTERS - A placement simulator is used for testing a placement engine in a virtual machine environment. The placement simulator includes a simulation controller, an event manager, and an inventory manager. The simulation controller receives input data for a simulated datacenter. The event manager invokes event handlers for a sequence of events from the input data. The inventory manager stores states of inventory objects to simulate deployment of virtual infrastructure resources by the placement engine based on the sequence of the events. | 2015-12-24 |
20150370584 | COMPUTER SYSTEM AND PROGRAM - Provided is a computer system in which computer environments | 2015-12-24 |
20150370585 | DYNAMIC CODE INJECTION - Embodiments of the present invention disclose an approach for inserting code into a running thread of execution. A computer sets a first set of bits to a first value, wherein the first value indicates that a first set of instructions should be inserted onto a stack. The computer executes a second set of instructions associated with a first safepoint, wherein the second set of instructions comprises one or more instructions to determine if the first set of bits is set to the first value. The computer determines that the first set of bits is set to the first value, and the computer inserts the first set of instructions onto the stack. | 2015-12-24 |
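The safepoint-driven injection in 20150370585 can be pictured with a small sketch. All names here (`Thread`, `INJECT_REQUESTED`) are illustrative, and plain Python stands in for the runtime-level flag bits, safepoint code, and thread stack described in the abstract:

```python
# Illustrative sketch of 20150370585: a flag set by one thread tells a
# running thread, at its next safepoint, to splice an extra instruction
# set onto its own stack of pending work. Not the patented implementation.

INJECT_REQUESTED = 0x1  # the "first value" of the first set of bits

class Thread:
    def __init__(self):
        self.flags = 0      # first set of bits
        self.pending = []   # instructions waiting to be injected
        self.stack = []     # stack of instruction sets to execute

    def request_injection(self, instructions):
        """Another thread asks for `instructions` to run at the next safepoint."""
        self.pending = instructions
        self.flags |= INJECT_REQUESTED

    def safepoint(self):
        """Second instruction set: runs at each safepoint and checks the bits."""
        if self.flags & INJECT_REQUESTED:
            self.stack.append(self.pending)   # insert onto the stack
            self.flags &= ~INJECT_REQUESTED

    def run_to_next_safepoint(self):
        self.safepoint()
        results = []
        while self.stack:                     # drain injected instruction sets
            for instr in self.stack.pop():
                results.append(instr())
        return results

t = Thread()
t.request_injection([lambda: "injected"])
print(t.run_to_next_safepoint())  # ['injected']
```

The key property the abstract describes survives even in this toy form: the running thread only ever mutates its own stack, and it does so at a well-defined point (the safepoint), so no mid-instruction state is disturbed.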
20150370586 | LOCAL SERVICE CHAINING WITH VIRTUAL MACHINES AND VIRTUALIZED CONTAINERS IN SOFTWARE DEFINED NETWORKING - Methods, software, and apparatus for implementing local service chaining (LSC) with virtual machines (VMs) or virtualized containers in Software Defined Networking (SDN). In one aspect a method is implemented on a compute platform including a plurality of VMs or containers, each including a virtual network interface controller (vNIC) communicatively coupled to a virtual switch in an SDN. LSCs are implemented via a plurality of virtual network appliances hosted by the plurality of VMs or containers. Each LSC comprises a sequence (chain) of services performed by virtual network appliances defined for the LSC. In connection with performing the chain of services, packet data is forwarded between VMs or containers using a cut-through mechanism under which packet data is directly written to receive (Rx) buffers on the vNICs in a manner that bypasses the virtual switch. LSC indicia (e.g., through LSC tags) and flow tables are used to inform each virtual network appliance and/or its host VM or container of the next vNIC Rx buffer or Rx port to which packet data is to be written. | 2015-12-24 |
20150370587 | COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN OUTPUTTING PROGRAM, OUTPUT APPARATUS AND OUTPUTTING METHOD - A computer is caused to execute a process including acquiring operating information relating to an operation situation within a predetermined period of a virtual machine operating on an information processing apparatus from a management machine that performs acquisition of the operating information and activation control of the virtual machine, and outputting, when a first period within which operating information of the virtual machine is not acquired is included in the predetermined period, operation actual results of the virtual machine within the first period based on operating information of the management machine within the first period and operating information of the virtual machine acquired at at least one of the timings preceding and succeeding the first period. | 2015-12-24 |
20150370588 | SELECTING OPTIMAL HYPERVISOR PLATFORMS THAT SATISFY APPLICATION WORKLOAD REQUIREMENTS - A method, system and computer program product for selecting hypervisor platforms that are best suited to process application workloads. Attribute requirements for an application workload, such as high CPU capacity, high power and low cost, are received. A ranking algorithm is then applied to a list of pools of compute nodes to identify an ordered list of pools of compute nodes that are best suited for satisfying the attribute requirements of the application workload by comparing hypervisor characteristics of the pools of compute nodes with the attribute requirements of the application workload. Each pool of compute nodes runs on a particular hypervisor platform which has a unique combination of characteristics that correspond to a combination of a set of attribute requirements (e.g., medium CPU/memory/disk capacity; high CPU and memory performance). In this manner, the hypervisor platforms that are best suited for satisfying the application workload requirements are identified. | 2015-12-24 |
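A ranking step like the one in 20150370588 — ordering pools of compute nodes by how well their hypervisor characteristics satisfy a workload's attribute requirements — can be sketched as follows. The attribute names, level scale, and scoring rule are assumptions for illustration, not the patented algorithm:

```python
# Hypothetical sketch of ranking compute-node pools against workload
# attribute requirements (20150370588). Levels and scoring are assumed.

LEVELS = {"low": 0, "medium": 1, "high": 2}

def rank_pools(pools, requirements):
    """Return pools ordered best-first by attribute match."""
    def score(pool):
        # one point per attribute whose level meets or exceeds the requirement
        return sum(
            LEVELS[pool["traits"].get(attr, "low")] >= LEVELS[want]
            for attr, want in requirements.items()
        )
    return sorted(pools, key=score, reverse=True)

pools = [
    {"name": "pool-a", "traits": {"cpu": "medium", "power": "low"}},
    {"name": "pool-b", "traits": {"cpu": "high", "power": "high"}},
]
reqs = {"cpu": "high", "power": "high"}
print([p["name"] for p in rank_pools(pools, reqs)])  # ['pool-b', 'pool-a']
```

A real placement engine would weight attributes (cost usually runs opposite to capacity), but the shape of the computation — compare pool characteristics against requirements, then sort — is the same.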
20150370589 | CACHING GRAPHICS OPERATION OUTPUTS - Exemplary methods, apparatuses, and systems receive a first instruction set from a first virtual machine (VM), the first instruction set including a request to perform an operation on an input. A first identifier is generated based upon the operation and the input. The first identifier is mapped to a stored copy of the input, the operation, and an output resulting from a processor performing the operation. In response to receiving a second instruction set from a second VM, a second identifier is generated based upon the input and operation received within the second instruction set. In response to determining that the second identifier matches the stored first identifier, it is further determined that the input and operation of the first instruction set matches the input and operation of the second instruction set. A copy of the stored output is returned to the second VM. | 2015-12-24 |
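The cross-VM result cache of 20150370589 is, at its core, memoization keyed by an identifier derived from the operation and its input. A minimal sketch, assuming a hash-based identifier (the patent does not specify one):

```python
# Minimal sketch of 20150370589's caching scheme: an identifier is generated
# from (operation, input); a matching identifier means the stored output can
# be returned without re-running the operation. Names are illustrative.

import hashlib

class OpCache:
    def __init__(self):
        self.store = {}   # identifier -> (operation, input, output)

    @staticmethod
    def identifier(operation, data):
        # identifier generated based upon the operation and the input
        return hashlib.sha256((operation + "|" + data).encode()).hexdigest()

    def execute(self, operation, data, run):
        key = self.identifier(operation, data)
        if key in self.store:             # second request matches the first
            return self.store[key][2]     # return a copy of the stored output
        output = run(data)                # processor performs the operation
        self.store[key] = (operation, data, output)
        return output

cache = OpCache()
calls = []
def run(d):
    calls.append(d)
    return d.upper()

r1 = cache.execute("blur", "frame-0", run)   # computed
r2 = cache.execute("blur", "frame-0", run)   # served from cache
print(r1, r2, len(calls))  # FRAME-0 FRAME-0 1
```

The interesting part in the patent is that the two requests come from different VMs; the cache sits below both, so identical graphics work submitted by separate guests is executed once.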
20150370590 | HYPERVISOR CONTEXT SWITCHING USING A TRAMPOLINE SCHEME IN PROCESSORS HAVING MORE THAN TWO HIERARCHICAL PRIVILEGE LEVELS - In a virtualized computer system operable in more than two hierarchical privilege levels, components of a hypervisor, which include a virtual machine kernel and virtual machine monitors (VMMs), are assigned to different privilege levels. The virtual machine kernel operates at a low privilege level to be able to exploit certain features provided by the low privilege level, and the VMMs operate at a high privilege level to support execution of virtual machines. Upon determining that a context switch from the virtual machine kernel to a VMM is to be performed, the computer system exits the low privilege level, and enters the high privilege level to execute a trampoline that supports context switches to VMMs, such as state changes, and then the VMM. The trampoline is deactivated after execution control is switched to the VMM. | 2015-12-24 |
20150370591 | HYPERVISOR CONTEXT SWITCHING USING A REDIRECTION EXCEPTION VECTOR IN PROCESSORS HAVING MORE THAN TWO HIERARCHICAL PRIVILEGE LEVELS - In a virtualized computer system operable in more than two hierarchical privilege levels, components of a hypervisor, which include a virtual machine kernel and virtual machine monitors (VMMs), are assigned to different privilege levels. The virtual machine kernel operates at a low privilege level to be able to exploit certain features provided by the low privilege level, and the VMMs operate at a high privilege level to support execution of virtual machines. Upon determining that a context switch from the virtual machine kernel to a VMM is to be performed, the computer system exits the low privilege level, and enters the high privilege level to execute a trampoline that supports context switches to VMMs, such as state changes, and then the VMM. The trampoline is deactivated after execution control is switched to the VMM. | 2015-12-24 |
20150370592 | HYPERVISOR CONTEXT SWITCHING USING TLB TAGS IN PROCESSORS HAVING MORE THAN TWO HIERARCHICAL PRIVILEGE LEVELS - In a virtualized computer system operable in more than two hierarchical privilege levels, components of a hypervisor, which include a virtual machine kernel and virtual machine monitors (VMMs), are assigned to different privilege levels. The virtual machine kernel operates at a low privilege level to be able to exploit certain features provided by the low privilege level, and the VMMs operate at a high privilege level to support execution of virtual machines. Upon determining that a context switch from the virtual machine kernel to a VMM is to be performed, the computer system exits the low privilege level, and enters the high privilege level to execute a trampoline that supports context switches to VMMs, such as state changes, and then the VMM. The trampoline is deactivated after execution control is switched to the VMM. | 2015-12-24 |
20150370593 | SYSTEM CONSTRUCTION DEVICE AND SYSTEM CONSTRUCTION METHOD - In the case of constructing systems having configurations different from each other by using a virtual machine including a common component, a binary file of the virtual machine depending on the plural systems to be constructed is generated efficiently. The system construction device | 2015-12-24 |
20150370594 | OPTIMIZING RUNTIME PERFORMANCE OF AN APPLICATION WORKLOAD BY MINIMIZING NETWORK INPUT/OUTPUT COMMUNICATIONS BETWEEN VIRTUAL MACHINES ON DIFFERENT CLOUDS IN A HYBRID CLOUD TOPOLOGY DURING CLOUD BURSTING - A method, system and computer program product for optimizing runtime performance of an application workload. Network input/output (I/O) operations between virtual machines of a pattern of virtual machines servicing the application workload in a private cloud are measured over a period of time and depicted in a histogram. A score is generated for each virtual machine or group of virtual machines in the pattern of virtual machines based on which range in the ranges of I/O operations per second (IOPS) depicted in the histogram has the largest sample size and the number of virtual machines in the same pattern that are allowed to be in the public cloud. In this manner, the runtime performance of the application workload is improved by minimizing the network input/output communications between the two cloud environments by migrating those virtual machine(s) or group(s) of virtual machines with a score that exceeds a threshold value. | 2015-12-24 |
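The scoring idea in 20150370594 — bucket measured inter-VM IOPS into histogram ranges, then score each candidate by its dominant range and the public-cloud head-room — can be sketched roughly as below. The bucket width and the scoring formula are assumptions; the patent only says the score depends on the largest-sample-size range and the number of VMs allowed in the public cloud:

```python
# Hedged sketch of 20150370594's histogram-based bursting score.
# Bucketing and the scoring formula are illustrative assumptions.

from collections import Counter

def histogram_mode_range(iops_samples, bucket=100):
    """Return the IOPS range (bucket start) with the largest sample size."""
    counts = Counter((s // bucket) * bucket for s in iops_samples)
    return max(counts, key=counts.get)

def score(iops_samples, allowed_in_public_cloud, bucket=100):
    # lower dominant IOPS and more public-cloud head-room -> higher score,
    # i.e. a better candidate to migrate during cloud bursting
    dominant = histogram_mode_range(iops_samples, bucket)
    return allowed_in_public_cloud / (1 + dominant)

samples = [30, 40, 45, 250, 260]        # most samples fall in the 0-99 range
print(histogram_mode_range(samples))    # 0
print(score(samples, allowed_in_public_cloud=4))  # 4.0
```

A VM whose traffic mostly sits in a low-IOPS range chats little with its peers, so moving it across the private/public boundary costs little — which is exactly why its score rises above the migration threshold first.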
20150370595 | IMPLEMENTING DYNAMIC VIRTUALIZATION OF AN SRIOV CAPABLE SAS ADAPTER - A method, system and computer program product are provided for implementing dynamic virtualization of a Single Root Input/Output Virtualization (SRIOV) capable Serial Attached SCSI (SAS) adapter. The SRIOV SAS adapter includes a plurality of virtual functions (VFs). Each individual Host Bus Adapter (HBA) resource is enabled to be explicitly assigned to a virtual function (VF); and each VF being enabled to be assigned to a system partition. Multiple VFs are enabled to be assigned to a single system partition. | 2015-12-24 |
20150370596 | SYSTEM AND METHOD FOR LIVE MIGRATION OF A VIRTUALIZED NETWORKING STACK - A method and apparatus are provided in which a source and target perform bidirectional forwarding of traffic while a migration guest is being transferred from the source to the target. In some examples, the migration guest is exposed to the impending migration and takes an action in response. A virtual network programming controller informs other devices in the network of the change, such that those devices may communicate directly with the migration guest on the target host. According to some examples, an “other” virtual network device in communication with the controller and the target host facilitates the seamless migration. In such examples, the forwarding may be performed only until the other virtual machine receives an incoming packet from the target host, and then the other virtual machine resumes communication with the migration guest on the target host. | 2015-12-24 |
20150370597 | INFERRING PERIODS OF NON-USE OF A WEARABLE DEVICE - A wearable computing device is described that predicts, based on movement detected, over time, by the wearable computing device, one or more future periods of time during which the wearable computing device will not be used. Responsive to determining that the wearable computing device is not being used at a current time, the wearable computing device determines whether the current time coincides with at least one period of time from the one or more future periods of time. Responsive to determining that the current time coincides with the at least one period of time, the wearable computing device performs an operation. | 2015-12-24 |
20150370598 | COMMON SYSTEM SERVICES FOR MANAGING CONFIGURATION AND OTHER RUNTIME SETTINGS OF APPLICATIONS - Managing settings of applications is provided. A request from an application to store runtime settings, currently being used by the application, is identified by a processor executing program instructions for managing settings of applications. In response to identifying the request, the runtime settings are then stored in a repository of runtime settings. In one or more examples, the application is running on an operating system on a computer system, and the request is communicated through a common system service of the operating system. | 2015-12-24 |
20150370599 | PROCESSING TASKS IN A DISTRIBUTED SYSTEM - Embodiments of the present application relate to a method, apparatus, and system for processing a task in a distributed system. The method includes, in response to being triggered to start a task and before processing the task, determining, by a task processor in a distributed system of a plurality of task processors, a vital status of the task. In the event that the vital status of the task is set to alive, determining not to process the task, and in the event that the vital status of the task is set to dead, updating the vital status of the task so as to be set to alive, processing the task, and in response to completing the processing of the task, updating the vital status of the task to dead. | 2015-12-24 |
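The alive/dead vital-status handshake in 20150370599 is a claim protocol: among several task processors, only the one that atomically flips the status from dead to alive processes the task. A sketch under assumed names, with a local lock standing in for the distributed store's atomic update:

```python
# Illustrative sketch of 20150370599's vital-status protocol. A threading
# lock stands in for the atomicity a real distributed store would provide.

import threading

class TaskRegistry:
    def __init__(self):
        self.status = {}              # task_id -> "alive" | "dead"
        self.lock = threading.Lock()

    def try_claim(self, task_id):
        """Atomically flip dead -> alive; return True if this processor won."""
        with self.lock:
            if self.status.get(task_id, "dead") == "alive":
                return False          # another processor is already on it
            self.status[task_id] = "alive"
            return True

    def finish(self, task_id):
        with self.lock:
            self.status[task_id] = "dead"

def process(registry, task_id, work):
    if not registry.try_claim(task_id):
        return None                   # vital status alive: do not process
    try:
        return work()
    finally:
        registry.finish(task_id)      # back to dead on completion

reg = TaskRegistry()
print(process(reg, "t1", lambda: "done"))  # done
print(reg.status["t1"])                    # dead
```

The check-then-set must be one atomic step; if it were two, two processors triggered at once could both see "dead" and both run the task, which is precisely the duplication the status flag exists to prevent.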
20150370600 | SYSTEM HAVING OPERATION QUEUES CORRESPONDING TO OPERATION EXECUTION TIME - A system and method for prioritized queues is provided. A plurality of queues are organized to enable long-running operations to be directed to a long-running operation queue, while faster operations are directed to a non-long-running operation queue. When an operation request is received, a determination is made whether it is a long-running operation, and, if so, the operation is placed in a long-running operation queue. When the processor core that is executing long-running operations is ready for the next operation, it removes an operation from the long-running operation queue and processes the operation. | 2015-12-24 |
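The dual-queue dispatch of 20150370600 keeps fast operations from queuing behind slow ones. A minimal sketch, where the classification of which operations count as long-running is an assumption for the example:

```python
# Sketch of 20150370600's split queues: long-running operations go to their
# own queue, served by a dedicated core, so fast operations are never stuck
# behind them. The LONG_RUNNING set is an assumed classifier.

from collections import deque

class OpQueues:
    LONG_RUNNING = {"snapshot", "rebuild"}   # assumed classification

    def __init__(self):
        self.long_q = deque()
        self.fast_q = deque()

    def submit(self, op):
        # on receipt, decide whether the operation is long-running
        (self.long_q if op in self.LONG_RUNNING else self.fast_q).append(op)

    def next_long(self):
        """Called by the core dedicated to long-running operations."""
        return self.long_q.popleft() if self.long_q else None

    def next_fast(self):
        return self.fast_q.popleft() if self.fast_q else None

q = OpQueues()
for op in ["read", "snapshot", "write"]:
    q.submit(op)
print(q.next_fast(), q.next_long(), q.next_fast())  # read snapshot write
```

With a single shared queue, the "read" and "write" above would sit behind "snapshot"; splitting by expected execution time is what buys the latency win.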
20150370601 | OPTIMIZING RUNTIME PERFORMANCE OF AN APPLICATION WORKLOAD BY MINIMIZING NETWORK INPUT/OUTPUT COMMUNICATIONS BETWEEN VIRTUAL MACHINES ON DIFFERENT CLOUDS IN A HYBRID CLOUD TOPOLOGY DURING CLOUD BURSTING - A method, system and computer program product for optimizing runtime performance of an application workload. Network input/output (I/O) operations between virtual machines of a pattern of virtual machines servicing the application workload in a private cloud are measured over a period of time and depicted in a histogram. A score is generated for each virtual machine or group of virtual machines in the pattern of virtual machines based on which range in the ranges of I/O operations per second (IOPS) depicted in the histogram has the largest sample size and the number of virtual machines in the same pattern that are allowed to be in the public cloud. In this manner, the runtime performance of the application workload is improved by minimizing the network input/output communications between the two cloud environments by migrating those virtual machine(s) or group(s) of virtual machines with a score that exceeds a threshold value. | 2015-12-24 |
20150370602 | Time Critical Tasks Scheduling - A method and system for scheduling a time critical task. The system may include a processing unit, a hardware assist scheduler, and a memory coupled to both the processing unit and the hardware assist scheduler. The method may include receiving timing information for executing the time critical task, the time critical task executing program instructions via a thread on a core of a processing unit and scheduling the time critical task based on the received timing information. The method may further include programming a lateness timer, waiting for a wakeup time to obtain and notifying the processing unit of the scheduling. Additionally, the method may include executing, on the core of the processing unit, the time critical task in accordance with the scheduling, monitoring the lateness timer, and asserting a thread execution interrupt in response to the lateness timer expiring, thereby suspending execution of the time critical task. | 2015-12-24 |
20150370603 | DYNAMIC PARALLEL DISTRIBUTED JOB CONFIGURATION IN A SHARED-RESOURCE ENVIRONMENT - Dynamically adjusting the parameters of a parallel, distributed job in response to changes to the status of the job cluster. Includes beginning execution of a job in a cluster, receiving cluster status information, determining a job performance impact of the cluster status, reconfiguring job parameters based on the performance impact, and continuing execution of the job using the updated configuration. Dynamically requesting a change to the resources of the job cluster for a parallel, distributed job in response to changes in job status. Includes beginning execution of a job in a cluster, receiving job status information, determining a job performance impact, requesting a changed allocation of cluster resources based on the determined job performance impact, reconfiguring one or more job parameters based on the changed allocation, and continuing execution of the job using the updated configuration. | 2015-12-24 |
20150370604 | INFORMATION PROCESSING DEVICE AND METHOD - An information processing device comprising a processor that selects, from among a plurality of data processing sections that subject data blocks to a predetermined process, a data processing section to which a first data block group with first identification information based on the data blocks is allocated, and divides, when a workload placed on the data processing section exceeds a first threshold, the first data block group allocated to the data processing section into a plurality of second data block groups with second identification information based on the data blocks, and selects, from among the plurality of data processing sections, data processing sections to which the plurality of second data block groups are allocated. | 2015-12-24 |
20150370605 | Resource Sharing Using Process Delay - Methods and systems that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process, without impacting function, are provided, including a method of processing in a processor. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments. | 2015-12-24 |
20150370606 | METHOD FOR PRIORITIZING TASKS QUEUED AT A SERVER SYSTEM - An algorithm for assigning priorities to tasks queued for processing by users based on how heavily each task's user used the system resources in the past, including the number of tasks queued by the user in the past, the volume of these tasks, and the amount of processor time used. In the OCR context, the tasks are graphic files placed on servers and chosen for processing in accordance with the assigned priorities. | 2015-12-24 |
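The history-weighted priority of 20150370606 can be sketched numerically. The weights below, and the inverse-load formula, are assumptions for illustration; the abstract only says priority falls with the user's past task count, task volume, and processor time:

```python
# Hedged sketch of 20150370606's priority assignment: the more a user has
# loaded the OCR servers before, the lower the priority of that user's
# queued graphic files. Weights and formula are illustrative assumptions.

def priority(tasks_queued, volume_mb, cpu_seconds,
             w_tasks=1.0, w_volume=0.1, w_cpu=0.5):
    """Higher value = served sooner; heavy past users sink in the queue."""
    past_load = w_tasks * tasks_queued + w_volume * volume_mb + w_cpu * cpu_seconds
    return 1.0 / (1.0 + past_load)

def pick_next(queue, history):
    # choose the queued graphic file whose owner has the lightest history;
    # history maps user -> (tasks_queued, volume_mb, cpu_seconds)
    return max(queue, key=lambda job: priority(*history[job["user"]]))

history = {"alice": (2, 10.0, 1.0), "bob": (50, 900.0, 120.0)}
queue = [{"user": "bob", "file": "b.png"}, {"user": "alice", "file": "a.png"}]
print(pick_next(queue, history)["file"])  # a.png
```

The effect is fairness over time: a light user's occasional file jumps ahead of the thousandth file from a bulk submitter, without any explicit per-user quota.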
20150370607 | BLUEPRINT-DRIVEN ENVIRONMENT TEMPLATE CREATION IN A VIRTUAL INFRASTRUCTURE - A system for blueprint-driven environment template creation in a virtual infrastructure comprises a processor and a memory. The processor is configured to receive a blueprint, receive an environment template configuration, and build an environment template using the blueprint and the environment template configuration. The environment template is for provisioning an environment. The environment is for deploying an application. The memory is coupled to the processor and is configured to provide the processor with instructions. | 2015-12-24 |
20150370608 | SYSTEM AND METHOD FOR PARTITION TEMPLATES IN A MULTITENANT APPLICATION SERVER ENVIRONMENT - In accordance with an embodiment, described herein is a system and method for supporting the use of partition templates in a multitenant application server environment. A partition template, including a partition configurator and/or attributes, can be used to configure partitions deployed to a domain using that partition template. When a request is received to create a new partition, a selected partition template is determined. The partition configurator of that partition template is then used to configure and deploy the partition to the domain at a corresponding virtual target, which in turn is associated with a target system (e.g., a computer server, or a cluster). A plurality of partition templates can be provided, wherein each partition template can include its own partition configurator and/or attributes that can be used to configure partitions deployed to the domain using that partition template, including different configuration attributes for each partition template. | 2015-12-24 |
20150370609 | THREAD SCHEDULING ACROSS HETEROGENEOUS PROCESSING ELEMENTS WITH RESOURCE MAPPING - A method for scheduling processes of a workload on a plurality of hardware threads configured in a plurality of processing elements of a multithreading parallel computing system for processing thereby. Process dimensions for each process are determined based on processing attributes associated with each process, and a place and route algorithm is utilized to map the processes to a processor space representative of the processing resources of the computing system based at least in part on the process dimensions to thereby distribute the processes of the workload. | 2015-12-24 |
20150370610 | FLEXIBLE DEPLOYMENT AND MIGRATION OF VIRTUAL MACHINES - Virtual machines in a computer system cluster, or cloud environment, require access to their assigned storage resources connected to the virtual machines via storage area networks (SAN). Such virtual machines may be independent from associated physical servers in the computer system cluster on which they are deployed. These virtual machines may dynamically migrate among assigned physical servers while maintaining access to their connected storage resources both from the source physical server and the target physical server during the migration. | 2015-12-24 |
20150370611 | FLEXIBLE DEPLOYMENT AND MIGRATION OF VIRTUAL MACHINES - Virtual machines in a computer system cluster, or cloud environment, require access to their assigned storage resources connected to the virtual machines via storage area networks (SAN). Such virtual machines may be independent from associated physical servers in the computer system cluster on which they are deployed. These virtual machines may dynamically migrate among assigned physical servers while maintaining access to their connected storage resources both from the source physical server and the target physical server during the migration. | 2015-12-24 |
20150370612 | SYSTEM AND METHOD FOR SYNCHRONIZATION USING DEVICE PAIRINGS WITH DOCKING STATIONS - A system and method for synchronizing information from an information handling system is disclosed. The method includes identifying a device within a pre-determined range of a docking station, the device operable to communicate with the docking station. The method also includes pairing with the device, determining a user of the device, and predicting, based upon past activity by the user at another docking station, content to be launched on the device. The method further includes presenting the content for launching on the device, receiving a selection of the content, and launching the content on the device. | 2015-12-24 |
20150370613 | MEMORY TRANSACTION HAVING IMPLICIT ORDERING EFFECTS - In at least some embodiments, a processor core executes a code segment including a memory transaction and non-transactional memory access instructions preceding the memory transaction in program order. The memory transaction includes at least an initiating instruction, a transactional memory access instruction, and a terminating instruction. The initiating instruction has an implicit barrier that imparts the effect of ordering execution of the transactional memory access instruction within the memory transaction with respect to the non-transactional memory access instructions preceding the memory transaction in program order. Executing the code segment includes executing the transactional memory access instruction within the memory transaction concurrently with at least one of the non-transactional memory access instructions preceding the memory transaction in program order and enforcing the barrier implicit in the initiating instruction following execution of the initiating instruction. | 2015-12-24 |
20150370614 | SUPPORTING ATOMIC ACCUMULATION WITH AN ADDRESSABLE ACCUMULATOR - Atomically accumulating memory updates in a computer system configured with an accumulator that is memory mapped. The accumulator includes an accumulator memory and an accumulator queue and is configured to communicatively couple to a processor. Included is receiving from the processor, by the accumulator, an accumulation request. The accumulation request includes an accumulation operation identifier and data. Based on determining, by the accumulator, that the accumulator can immediately process the request, immediately processing the request. Processing the request includes atomically updating a value in the accumulator memory, by the accumulator, based on the operation identifier and data of the accumulation request. Based on determining, by the accumulator, that the accumulator is actively processing another accumulation request, queuing, by the accumulator, the accumulation request for later processing. Further included is signaling the processor, by the accumulator, the completion of the accumulation request. | 2015-12-24 |
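The accumulator described above is hardware, but its request/queue/complete flow can be modeled in software. A minimal sketch, assuming an illustrative operation set and a single-threaded model in which requests are queued and then drained one atomic read-modify-write at a time (all names are hypothetical):

```python
from collections import deque

class Accumulator:
    """Software model of an addressable accumulator: requests carry an
    operation identifier and data, are queued if the accumulator is
    busy, and completion is signaled back after each atomic update."""

    OPS = {"add": lambda a, b: a + b, "max": max}

    def __init__(self, initial=0):
        self.memory = initial        # the accumulator memory
        self.queue = deque()         # the accumulator queue
        self.completed = []          # completion signals to the processor

    def request(self, op_id, data):
        # receive an accumulation request (operation identifier + data)
        self.queue.append((op_id, data))

    def step(self):
        # process one queued request: atomic update, then signal completion
        if not self.queue:
            return False
        op_id, data = self.queue.popleft()
        self.memory = self.OPS[op_id](self.memory, data)
        self.completed.append((op_id, data))
        return True
```

In the patented device the update would be atomic in hardware; here the single-threaded `step` stands in for that guarantee.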
20150370615 | METHODS AND APPARATUS FOR USING SMART ENVIRONMENT DEVICES VIA APPLICATION PROGRAM INTERFACES - In one embodiment, a tangible, non-transitory computer-readable media stores computer instructions. The computer instructions, when executed by a processor, are configured to send one or more requests including an access token to retrieve, access, view, subscribe, or modify data elements of a data model representative of one or more smart environments. The access token is associated with at least an application programming interface (API) client or API client device and one or more scopes granted to the API client or API client device. The one or more scopes provide one or more access rights to one or more of the data elements of the data model defined by a hierarchical position of the data elements in the data model represented by a respective path to the data elements. | 2015-12-24 |
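The scope mechanism above grants access by hierarchical position, with each data element identified by a path. A minimal sketch of how such a check might work, assuming path-prefix semantics and illustrative path strings (the path syntax and function name are assumptions, not the actual data model):

```python
def has_access(granted_scopes, element_path):
    """Return True if any granted scope covers the data element, where a
    scope grants rights to the subtree rooted at its hierarchical path,
    e.g. scope "/structures/1" covers "/structures/1/thermostats/t1"."""
    parts = element_path.strip("/").split("/")
    for scope in granted_scopes:
        sparts = scope.strip("/").split("/")
        if parts[:len(sparts)] == sparts:  # scope path is a prefix
            return True
    return False
```

An API client's access token would carry the granted scopes; each retrieve/subscribe/modify request would be checked against them this way.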
20150370616 | METHOD AND SYSTEM FOR RECOMMENDING COMPUTER PRODUCTS ON THE BASIS OF OBSERVED USAGE PATTERNS OF A COMPUTATIONAL DEVICE OF KNOWN CONFIGURATION - A computer implemented method and system for locally or remotely monitoring any system process or user action on a specific computer device, recording data that summarizes any patterns in usage, and displaying recommendations to the user of actions that could be taken, and products that could be purchased, that would in any way better the overall user experience. Possibilities for such a system are quite broad; some examples include identifying out-of-date software which poses both security and system performance issues, and identifying the “type of user” and suggesting alternate, more appropriate pieces of software and hardware that would better suit the user, for example suggesting a more powerful graphics processor for a serious gamer, an upgrade in RAM for a high-intensity analyst, an easier-to-use software suite for a casual user, an upgraded battery for a smart phone when the battery is often low, a reminder to restart the device occasionally for system updates, etc. Such a system could enhance the general user experience for a wide variety of users on a wide variety of devices. | 2015-12-24 |
20150370617 | SYSTEM AND METHODS FOR LAUNCHING AN APPLICATION ON AN ELECTRONIC DEVICE - A method and system are provided for operating an electronic device to launch an application. The method includes detecting an event associated with the application and, when an indicator of user presence is detected within a predetermined period of time after the event, launching the application. The method may further include loading the application in the background upon detecting the event if the application has not been loaded, wherein launching the application includes bringing the application to the foreground. The method may further include, when an indicator of user presence is not detected within the predetermined period of time after the event, closing the application loaded in the background. | 2015-12-24 |
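The launch policy above has a simple shape: load in the background on the event, then launch or close depending on whether user presence is detected within the window. A minimal sketch with plain numeric timestamps standing in for sensor readings (the function and its parameters are illustrative assumptions):

```python
def handle_event(presence_times, event_time, window):
    """On an application event, the app is loaded in the background; if
    an indicator of user presence falls within `window` time units after
    the event, the app is launched (brought to the foreground),
    otherwise the background-loaded app is closed."""
    state = "background"             # app loaded in background on the event
    for t in presence_times:
        if event_time <= t <= event_time + window:
            state = "foreground"     # presence within the window: launch
            break
    if state != "foreground":
        state = "closed"             # no presence in time: close the app
    return state
```

A real device would drive this from sensors (camera, touch, proximity) and a timer rather than a precomputed list of presence times.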
20150370618 | RECORDING UNSTRUCTURED EVENTS IN CONTEXT - Recording an unstructured event in context can include detecting a first occurrence indicative of a start to the unstructured event utilizing a sensing feature of a client device. Upon detection of the first occurrence, device activity data associated with the client device is tracked. A second occurrence indicative of an end to the unstructured event is detected utilizing the sensing feature. Following detection of the second occurrence, an event object for the unstructured event spanning between the first and second occurrences is presented. The event object includes or is otherwise associated with the device activity data tracked during the timespan. | 2015-12-24 |
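The start-occurrence / track / end-occurrence flow above maps naturally onto a small recorder object. A minimal sketch in which the sensing feature is abstracted into explicit start/end calls (class and field names are illustrative, not from the patent):

```python
class EventRecorder:
    """Tracks device activity between a first occurrence (start of an
    unstructured event) and a second occurrence (its end), then emits
    an event object carrying the activity tracked in between."""

    def __init__(self):
        self.tracking = False
        self.activity = []

    def start_occurrence(self, time):
        self.tracking = True
        self.start = time
        self.activity = []           # begin tracking device activity data

    def record_activity(self, item):
        if self.tracking:            # activity outside the event is ignored
            self.activity.append(item)

    def end_occurrence(self, time):
        self.tracking = False
        # the event object spans the two occurrences and includes the
        # device activity data tracked during that timespan
        return {"start": self.start, "end": time,
                "activity": list(self.activity)}
```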
20150370619 | MANAGEMENT SYSTEM FOR MANAGING COMPUTER SYSTEM AND MANAGEMENT METHOD THEREOF - Provided is a management system managing a computer system including apparatuses to be monitored. The management system holds configuration information on the computer system, analysis rules, and plan execution effect rules. The analysis rules each associate a causal event that may occur in the computer system with derivative events that may occur as effects of the causal event, and define the causal event and the derivative events in terms of types of components in the computer system. The plan execution effect rules each indicate types of components that may be affected by a computer system configuration change and the specifics of the effects. The management system identifies a first event that may occur when a first plan changing the computer system configuration is executed, using the plan execution effect rules and the configuration information, and identifies the range affected by the first event using the analysis rules and the configuration information. | 2015-12-24 |
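The two rule sets above chain together: a plan-execution-effect rule yields the first event, and an analysis rule expands it into derivative events, which configuration information then resolves to concrete components. A minimal sketch, assuming very simplified rule shapes (dicts keyed by plan and event type; all rule contents are hypothetical):

```python
def affected_range(plan, effect_rules, analysis_rules, config):
    """Identify the first event a plan may cause (via plan execution
    effect rules) and the range it affects (via analysis rules plus
    configuration information mapping types to concrete components)."""
    causal = effect_rules[plan]               # first event (a component type)
    derived = analysis_rules.get(causal, [])  # derivative event types
    return {t: config.get(t, []) for t in [causal] + derived}
```

The real rules associate events with component types far more richly; this only illustrates the lookup-and-expand structure.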
20150370620 | SYSTEMS AND METHODS FOR MANAGING NAVIGATION AMONG APPLICATIONS - Systems and methods are provided for managing navigation among applications installed on an electronic device. According to certain aspects, an electronic device receives ( | 2015-12-24 |
20150370621 | METHODS AND APPARATUS FOR USING SMART ENVIRONMENT DEVICES VIA APPLICATION PROGRAM INTERFACES - Systems and methods disclosed herein relate to providing a message to an application programming interface (API). The message includes a request for data from a data model, a submission of data to the data model, or both, and a host selection between a representational state transfer (REST) host and a subscription-based application programming interface (API) host, wherein the REST host receives REST-based messages and the subscription-based API host receives messages in accordance with a standard of the subscription-based API host. The request for data, the submission of data, or both are configured to create, delete, or modify, or any combination thereof, data related to a smart-device environment structure, a thermostat, or a hazard detector, or any combination thereof, stored in a data model accessible by the API. | 2015-12-24 |
20150370622 | SYSTEM VERIFICATION OF INTERACTIVE SCREENSHOTS AND LOG FILES BETWEEN CLIENT SYSTEMS AND SERVER SYSTEMS WITHIN A NETWORK COMPUTING ENVIRONMENT - A computer-implemented method for system performance verification is provided. The computer-implemented method includes invoking an integrated system tool to perform system performance verification of a client system. The computer-implemented method further includes monitoring administrative actions within an interface of the client system of an administrative device during the system performance verification. The computer-implemented method further includes recording screenshots of the monitored administrative actions, wherein the recorded screenshots are recorded to administrative log files of the administrative device. The computer-implemented method further includes transmitting the recorded screenshots to a storage location of system log files, wherein the recorded screenshots are associated with appropriate system log files for performing diagnosis of system performance verification of the client system. | 2015-12-24 |
20150370623 | MONITORING APPARATUS, MONITORING METHOD, AND RECORDING MEDIUM - A monitoring apparatus is configured to execute: obtaining pieces of information about access in a first time slot by specifying the first time slot as one including access to a first apparatus in the system for which a response time is long; selecting a feature common to the access in the first time slot from the obtained pieces of information about the access in the first time slot; first extracting, from pieces of information about access in a given period of time, pieces of information about access that has the common feature selected as a feature common to the access in the first time slot; and first generating a first graph which shows changes in response time over time, based on the extracted pieces of information about the access having the feature common to the access in the first time slot. | 2015-12-24 |
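The flow above (find a slow time slot, pick a feature common to its accesses, extract all matching accesses, plot response time over time) can be sketched briefly. This assumes illustrative access records with `slot`, `time`, `url`, and `rt` fields, and uses the URL as the candidate common feature; the real feature selection is more general:

```python
def slow_slot_series(accesses, threshold):
    """Specify a time slot containing an access whose response time
    exceeds `threshold`, select a feature common to the accesses in
    that slot (here: a shared URL), then extract all accesses with
    that feature and return (time, response_time) pairs for graphing."""
    slow = next(a for a in accesses if a["rt"] > threshold)
    in_slot = [a for a in accesses if a["slot"] == slow["slot"]]
    urls = {a["url"] for a in in_slot}
    if len(urls) != 1:
        return None                  # no single common feature in this sketch
    common = urls.pop()
    series = sorted((a["time"], a["rt"])
                    for a in accesses if a["url"] == common)
    return common, series
```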
20150370624 | INFORMATION PROCESSING APPARATUS AND FAULT DIAGNOSIS METHOD - An information processing apparatus and a fault diagnosis method for monitoring signals relating to the start of a CPU to determine that a failure has occurred, in a case where a predetermined signal is not output within a predetermined time period after the output of a predetermined signal, and to determine the failure type based on the signal states at the time of the occurrence of the failure to display information corresponding to the failure type. | 2015-12-24 |
20150370625 | MONITORING SYSTEM AND MONITORING METHOD - A monitoring system includes positional information indicating positions at which failures occurring in the computers are displayed on a screen image, event information indicating failures which occur in the computers, the times at which the failures occurred, and the statuses of troubleshooting of the failures, an image creation part for creating a screen image indicating failures which occurred by the end time and have not been removed at the current time, based on the positional information and the event information, and a display part for displaying the created screen image. | 2015-12-24 |
20150370626 | RECORDING MEDIUM STORING A DATA MANAGEMENT PROGRAM, DATA MANAGEMENT APPARATUS AND DATA MANAGEMENT METHOD - A data management apparatus obtains first operation information about a specified operation from an information processing apparatus; specifies second operation information that matches a registered operation pattern from the first operation information, by utilizing operation pattern information that includes correspondence information between the specified operation and the registered operation pattern, the operation pattern information being stored in a first memory; obtains a second log from a first log of the information processing apparatus, the second log corresponding to time periods in which the specified operations that are not permitted by the registered operation pattern are done, the first log being stored in a second memory; and specifies a time period of a log extracted on the basis of a performance value indicated by the second log. | 2015-12-24 |
20150370627 | MANAGEMENT SYSTEM, PLAN GENERATION METHOD, PLAN GENERATION PROGRAM - A management system that generates a plan which is a countermeasure against an event occurring in a computer system includes: a plan generating unit configured to generate a plan according to the event; and an indicator generating unit configured to generate, as a performance change evaluation indicator of the plan, information on a change in performance of a resource of the computer system which can occur due to a process executed by another subject different from the subject of the plan when the plan generated by the plan generating unit is executed. | 2015-12-24 |
20150370628 | EMPLOYING INTERMEDIARY STRUCTURES FOR FACILITATING ACCESS TO SECURE MEMORY - The present application is directed to employing intermediary structures for facilitating access to secure memory. A secure driver (SD) may be loaded into the device to reserve at least a section of memory in the device as a secure page cache (SPC). The SPC may protect application data from being accessed by other active applications in the device. Potential race conditions may be avoided through the use of a linear address manager (LAM) that maps linear addresses (LAs) in an application page table (PT) to page slots in the SPC. The SD may also facilitate error handling in the device by reconfiguring VEs that would otherwise be ignored by the OS. | 2015-12-24 |
20150370629 | STORAGE DEVICE INCLUDING NONVOLATILE MEMORY AND MEMORY CONTROLLER AND OPERATING METHOD OF STORAGE DEVICE - An operating method of a storage device includes reading data from a nonvolatile memory using first read parameters and second read parameters and collecting read histories associated with a plurality of read operations. First histories and second histories are determined from the collected read histories. The second read parameters are adjusted according to the first histories, and the first read parameters are adjusted according to the second histories. The read histories include information on read voltages used to perform the read operations, and the first histories and the second histories are determined from the collected read histories according to the number of read voltages having the same level. | 2015-12-24 |
20150370630 | FLASH MEMORY CONTROL APPARATUS UTILIZING BUFFER TO TEMPORARILY STORING VALID DATA STORED IN STORAGE PLANE, AND CONTROL SYSTEM AND CONTROL METHOD THEREOF - A flash memory controlling apparatus includes a data read/write interface and a controller. The data read/write interface is arranged to couple a first flash memory and a second flash memory, wherein the first flash memory includes a first storage plane and a first buffer, and the second flash memory includes a second storage plane and a second buffer. When the read/write interface couples the first flash memory and the second flash memory, the controller is arranged to temporarily store a plurality of valid data stored in the first storage plane into the second buffer. After an erase cycle is performed on the first storage plane, the controller further programs the plurality of valid data temporarily stored in the second buffer into the first storage plane. | 2015-12-24 |
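The control flow above (stage valid data from one plane into the *other* chip's buffer, erase the plane, program the valid data back) can be illustrated with a simple in-memory model. A minimal sketch, assuming planes and buffers are plain lists with `None` marking an invalid, erasable page (all names and the list model are illustrative):

```python
def erase_with_remote_buffer(plane_a, buffer_b):
    """Model of the controller's sequence: valid pages from the first
    flash memory's storage plane are temporarily stored in the second
    flash memory's buffer, the plane is erased, and the valid pages
    are programmed back into the plane."""
    # temporarily store valid data from plane A into chip B's buffer
    buffer_b[:] = [page for page in plane_a if page is not None]
    # perform the erase cycle on plane A
    plane_a[:] = [None] * len(plane_a)
    # program the staged valid data back into plane A
    for i, page in enumerate(buffer_b):
        plane_a[i] = page
    return plane_a
```

Using a different chip's buffer as the staging area is what lets the first chip's own buffer stay free during the erase; the sketch only shows the data movement, not the timing benefit.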
20150370631 | WRITE MAPPING TO MITIGATE HARD ERRORS VIA SOFT-DECISION DECODING - An apparatus having mapping and interface circuits. The mapping circuit (i) generates a coded item by mapping write unit bits using a modulation or recursion of past-seen bits, and (ii) calculates a particular state to program into a nonvolatile memory cell. The interface circuit programs the cell at the particular state. Two normal cell states are treated as at least four refined states. The particular state is one of the refined states. A mapping to the refined states mitigates programming write misplacement that shifts an analog voltage of the cell from the particular state to an erroneous state. The erroneous state corresponds to a readily observable illegal or atypical write sequence, and results in a modified soft decision from that calculated based on the normal states only. A voltage swing between the particular state and the erroneous state is less than between the normal states. | 2015-12-24 |