25th week of 2019 patent application highlights part 50 |
Patent application number | Title | Published |
20190187947 | SMARTPAD WINDOW MANAGEMENT - A multi-display device is adapted to be dockable or otherwise associatable with an additional device. In accordance with one exemplary embodiment, the multi-display device is dockable with a smartpad. The exemplary smartpad can include a screen, a touch sensitive display, a configurable area, a gesture capture region(s) and a camera. The smartpad can also include a port adapted to receive the device. The exemplary smartpad is able to cooperate with the device such that information displayable on the device is also displayable on the smartpad. Furthermore, any one or more of the functions on the device are extendable to the smartpad, with the smartpad capable of acting as an input/output interface or extension of the device. Therefore, for example, information from one or more of the displays on the multi-screen device is displayable on the smartpad. | 2019-06-20 |
20190187948 | METHOD AND SYSTEM FOR MANAGING ACCESS OF FUNCTIONS IN A MULTI-FUNCTIONAL PRINTER - A method and device are provided for managing access of functions in a Multi-Functional Printer (MFP) by an access manager. The access manager receives information regarding occurrence of at least one error in at least one functional unit of the MFP, and identifies one or more functions operably dependent on the at least one functional unit and one or more functions operably independent of the at least one functional unit, based on a pre-defined master error list. The access manager divides a display screen of the MFP into a plurality of display portions, where a first display portion of the plurality of display portions displays the one or more functions operably independent of the at least one functional unit and a second display portion of the plurality of display portions displays information associated with the at least one error that has occurred in the at least one functional unit of the MFP. | 2019-06-20 |
20190187949 | NON-TRANSITORY COMPUTER READABLE MEDIUM WITH PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING TERMINAL - A non-transitory computer-readable recording medium has computer readable instructions stored thereon, which when executed, cause an information processing terminal that includes a display and at least one processor, to execute a method. The method includes displaying a screen of a talk room that displays one or more contents transmitted and received in a group including the information processing terminal and another information processing terminal; displaying a list of objects corresponding to information input by a user; displaying on the screen of the talk room an object selected by the user from among the objects included in the list; transmitting the object to said another information processing terminal; and displaying, on the screen of the talk room, an extended function corresponding to the object in chronological order as a content in the talk room. | 2019-06-20 |
20190187950 | ELECTRONIC DEVICE - An electronic device comprising: a first sensor which detects motion; a second sensor which detects proximity; and a controller which sets power of the electronic device ON when the electronic device is in a stand-by state, the first sensor detects motion, and the second sensor detects proximity. | 2019-06-20 |
20190187951 | PARAMETER SETTING DEVICE AND METHOD IN SIGNAL PROCESSING APPARATUS - A setting device includes: a manual-operator operable by a user for adjusting a parameter value; a mode selector for selecting a temporary operation mode that is a mode for temporarily operating the manual-operator; a memory; and a controller. The controller performs storage control for, in association with an adjusting operation executed via the manual-operator while the temporary operation mode is selected, storing into the memory a pre-change parameter value as a change history and return control for returning the adjusted parameter value of the manual-operator to the pre-change parameter value based on the change history in response to ending of the temporary operation mode. Because only the parameter value adjusted during the temporary operation mode can be returned to the pre-change parameter value, the inventive setting device can easily, quickly, and accurately return the parameter value, temporarily changed during the temporary operation mode, to the pre-change parameter value. | 2019-06-20 |
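The storage control and return control described in 20190187951 can be sketched in a few lines of Python. `ParamPanel` and its method names are invented for illustration and do not come from the application; the key points are that only the first pre-change value per parameter is recorded while the temporary mode is active, and that ending the mode restores every recorded value.

```python
# Illustrative sketch (invented names) of the temporary-operation-mode
# idea: record pre-change values while the mode is on; restore on exit.

class ParamPanel:
    def __init__(self):
        self.params = {}       # parameter name -> current value
        self.history = {}      # pre-change values recorded in temp mode
        self.temp_mode = False

    def set_temp_mode(self, on):
        if on:
            self.temp_mode = True
            self.history = {}
        else:
            # Return control: restore every value changed during the mode.
            for name, old in self.history.items():
                self.params[name] = old
            self.history = {}
            self.temp_mode = False

    def adjust(self, name, value):
        # Storage control: remember only the first pre-change value.
        if self.temp_mode and name not in self.history:
            self.history[name] = self.params.get(name)
        self.params[name] = value
```

Because the history is populated only while `temp_mode` is set, adjustments made outside the mode are untouched by the restore, matching the abstract's claim that only values changed during the mode are returned.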
20190187952 | DEVICE INCLUDING A DIGITAL ASSISTANT FOR PERSONALIZED SPEECH PLAYBACK AND METHOD OF USING SAME - A method and device for reviewing audio content are provided. The method includes using a digital assistant on a device to review audio content (e.g., recorded audio information and/or text converted to speech) in a preferred tone and/or at a preferred rate. The digital assistant can also provide video, images, and/or web links during playback of the audio information to further assist a listener. | 2019-06-20 |
20190187953 | INFORMATION PROCESSING APPARATUS, SPEECH RECOGNITION SYSTEM, AND INFORMATION PROCESSING METHOD - An information processing apparatus includes: a speech obtainer which obtains speech of a user; a first controller which, when the first controller recognizes that the speech obtained by the speech obtainer is a first activation word, outputs a speech signal corresponding to the first activation word; and a second controller. In a first speech transmission process in which the speech signal of the speech obtained by the speech obtainer is transmitted to a VPA cloud server, the first controller determines whether to output a speech signal corresponding to a second activation word to the second controller based on a predetermined priority level when the first controller recognizes that the speech obtained by the speech obtainer indicates the second activation word for causing the second controller to start a second speech transmission process. | 2019-06-20 |
20190187954 | Content Discovery - A method, apparatus and computer program code is provided. The method comprises: causing display of a virtual object at a first position in virtual space, the virtual object having a visual position and an aural position at the first position; processing positional audio data based on the aural position of the virtual object being at the first position; causing positional audio to be output to a user based on the processed positional audio data; changing the aural position of the virtual object from the first position to a second position in the virtual space, while maintaining the visual position of the virtual object at the first position; further processing positional audio data based on the aural position of the virtual object being at the second position; and causing positional audio to be output to the user based on the further processed positional audio data, while maintaining the visual position of the virtual object at the first position. | 2019-06-20 |
20190187955 | SYSTEMS AND METHODS FOR COMMENT RANKING USING NEURAL EMBEDDINGS - Systems, methods, and non-transitory computer readable media are configured to generate, an embedding for a post. The post can correspond to an entity. An embedding for a comment in a set of comments can be generated. The comments in the set can be responsive to the post. The embedding for the post can be updated. The updating can be based on the embedding for the post and the embedding for the comment. Subsequently, a rank for the comment in the set of comments can be determined. | 2019-06-20 |
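The flow in 20190187955 — score each comment against the post embedding, then fold the comment back into the post embedding before scoring the next — can be illustrated with toy vectors. This is not the application's actual model; cosine similarity and the running-mean update are stand-ins chosen for the sketch, and all function names are invented.

```python
# Toy embedding-based comment ranking: score, then update the post
# embedding with each comment embedding (assumed running-mean update).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return sum(a * a for a in u) ** 0.5

def cosine(u, v):
    denom = norm(u) * norm(v)
    return dot(u, v) / denom if denom else 0.0

def rank_comments(post_emb, comment_embs):
    """Return comment indices sorted by descending similarity score."""
    post = list(post_emb)
    scores = []
    for c in comment_embs:
        scores.append(cosine(post, c))
        # Fold the comment into the post embedding before the next score.
        post = [(p + x) / 2 for p, x in zip(post, c)]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

Updating the post embedding between scores means later comments are judged against a representation that already reflects the discussion so far, which is the ordering-sensitive behaviour the abstract describes.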
20190187956 | METHOD AND APPARATUS FOR USE IN THE DESIGN AND MANUFACTURE OF INTEGRATED CIRCUITS - A method and apparatus are provided for manufacturing integrated circuits performing invariant integer division x/d. A desired rounding mode is provided and an integer triple (a,b,k) for this rounding mode is derived. Furthermore, a set of conditions for the rounding mode is derived. An RTL representation is then derived using the integer triple. From this a hardware layout can be derived and an integrated circuit manufactured with the derived hardware layout. When the integer triple is derived, a minimum value of k for the desired rounding mode and set of conditions is also derived. | 2019-06-20 |
20190187957 | COMPACT BIT GENERATOR - A bit generator includes a sampler and a voltage controlled oscillator (VCO) powered by a supply voltage. The sampler outputs a non-deterministic bit series which is generated by sampling an output of the VCO. The randomness of the non-deterministic bit series depends on inherent background noise and/or inherent clock jitter. Optionally, the bit generator does not include noise source circuitry. | 2019-06-20 |
20190187958 | EXTRACTING MOBILE APPLICATION WORKFLOW FROM DESIGN FILES - A workflow extraction method, system, and computer program product include analyzing, for each of the design screens, a relatability of one design screen to a previously analyzed design screen in the database and generating a tag that represents a workflow and creating a database linking the tag to a sequence of design screens from a transition graph that details how to move from one of the design screens to another. | 2019-06-20 |
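The database in 20190187958 links a workflow tag to a screen sequence derived from a transition graph. A minimal way to picture that is a shortest-path walk over an adjacency dict; the BFS traversal, graph shape, and all names below are assumptions for illustration, not details from the application.

```python
# Toy transition graph: screen name -> list of reachable screens.
# A workflow tag maps to the BFS path between its start and end screens.
from collections import deque

def sequence_for(graph, start, end):
    """Shortest sequence of design screens from start to end, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def build_database(graph, workflows):
    """Link each workflow tag to its screen sequence."""
    return {tag: sequence_for(graph, s, e) for tag, (s, e) in workflows.items()}
```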
20190187959 | SYSTEM, METHOD, AND RECORDING MEDIUM FOR VALIDATING COMPUTER DOCUMENTATION - A computer-implemented computer documentation validation method, the method comprising: manipulating a user interface of an operating system by taking control of a user input device to execute a command of a computer software documentation on behalf of the user; and outputting an error code when a failure results from the executed command. | 2019-06-20 |
20190187960 | REDUCING MEMORY USAGE IN SOFTWARE APPLICATIONS - Embodiments of the present disclosure pertain to reducing memory usage in software applications. In one embodiment, the present disclosure includes a computer implemented method comprising constructing a dynamic HTML component in a document object model by executing first source code of a scripting language, generating a static HTML component clone of the dynamic HTML component by executing second source code of the scripting language, replacing the dynamic HTML component with the static HTML component in the document object model by executing third source code of the scripting language, decoupling the dynamic HTML component by executing fourth source code of the scripting language, and deleting the dynamic HTML component from memory using a garbage collection process in a scripting engine. | 2019-06-20 |
20190187961 | CHATBOT BUILDER USER INTERFACE - A method for providing a bot builder user interface by a bot builder user interface providing apparatus includes providing a developer device with a bot builder user interface (UI) for producing a chatbot; if at least one sentence is input from the developer device, providing multiple parameters including attribute information regarding respective words included in the at least one sentence; and receiving, from the developer device, grouping information regarding two or more parameters selected from the multiple parameters, wherein the chatbot produced by the developer device is accessible by a user device connecting with a chatbot service server, and if at least one of the two or more grouped parameters is extracted from a sentence of a chat message input by the user device, the chatbot executes a predetermined instruction with reference to the extracted parameter. | 2019-06-20 |
20190187962 | Spreadsheet-Based Software Application Development - Aspects described herein may be used with local spreadsheet applications, web, and/or cloud-based spreadsheet solutions, to create complex custom software applications. Spreadsheets themselves lack the conceptual framework to be used as a platform tool to build custom or complex software applications. Using the methods and systems described herein using low-code/no-code techniques, a designer can create custom and/or complex software applications using one or more spreadsheets as the underlying blueprints for the software application. The resultant software application may be static/read-only, or may be interactive to allow users to dynamically add, delete, edit, or otherwise amend application data, e.g., via one or more online web pages or via a mobile application. Data transfer may be one-way or bi-directional between the blueprint spreadsheets and the resultant software application, thereby allowing amended data to be transferred from the software application back into spreadsheet form. | 2019-06-20 |
20190187963 | MEMORY ACCESS OPTIMISATION USING PER-LAYER COMPUTATIONAL MAPPING AND MEMORY ALLOCATION FOR CNN APPLICATION - A method of configuring a System on Chip to execute a CNN process comprising CNN layers, the method comprising, for each schedule: determining memory access amount information describing how many memory accesses are required; expressing the memory access amount information as relationships describing reusability of data; combining the relationships with a cost of writing and reading from external memory, to form memory access information; determining a memory allocation for on-chip memory of the SoC for the input FMs and the output FMs; and determining, dependent upon the memory access information and the memory allocation for each schedule: a schedule which minimises the memory access information of external memory access for the CNN layer of the CNN process; and a memory allocation associated with the determined schedule. | 2019-06-20 |
20190187964 | Method and Apparatus for Compiler Driven Bank Conflict Avoidance - Systems, apparatuses, and methods for converting computer program source code from a first high level language to a functionally equivalent executable program code. Source code in a first high level language is analyzed by a code compilation tool. In response to identifying a potential bank conflict in a multi-bank register file, operands of one or more instructions are remapped such that they map to different physical banks of the multi-bank register file. Identifying a potential bank conflict comprises one or more of identifying an intra-instruction bank conflict, an inter-instruction bank conflict, and identifying a multi-word operand with a potential bank conflict. | 2019-06-20 |
20190187965 | Reduced Memory Consumption of Compiler-Transformed Asynchronous Methods - An asynchronous method is implemented in a manner that reduces the amount of runtime overhead needed to execute the asynchronous method. The data elements needed to suspend an asynchronous method to await completion of an asynchronous operation, to resume the asynchronous method at a resumption point, and to provide a completion status of the caller of the asynchronous method are consolidated into one or two reusable objects. An asynchronous method may be associated with a distinct object pool of reusable objects. The size of a pool and the total size of all pools can be configured statically or dynamically based on runtime conditions. | 2019-06-20 |
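The consolidation in 20190187965 — one reusable object holding the resumption point and completion status, drawn from a size-capped per-method pool — can be sketched with a plain object pool. The class names, the fields inside the state object, and the pool API are invented for this sketch; the application does not prescribe them.

```python
# Invented-name sketch of a capped pool of reusable continuation-state
# objects, so suspending/resuming does not allocate on every call.

class StateBox:
    """Consolidated state: resumption point plus completion status."""
    def reset(self):
        self.resume_at = None
        self.completed = False
        return self

class Pool:
    def __init__(self, max_size):
        self.max_size = max_size   # could be tuned at runtime, per the abstract
        self._free = []

    def acquire(self):
        return (self._free.pop() if self._free else StateBox()).reset()

    def release(self, box):
        # Drop the object instead of growing past the configured cap.
        if len(self._free) < self.max_size:
            self._free.append(box)
```

Capping the pool is what makes the total size configurable: releases beyond `max_size` simply let the object be garbage-collected rather than retained.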
20190187966 | DYNAMICALLY REPLACING A CALL TO A SOFTWARE LIBRARY WITH A CALL TO AN ACCELERATOR - A computer program includes calls to a software library. A virtual function table is built that includes the calls to the software library in the computer program. A programmable device includes one or more currently-implemented accelerators. The available accelerators that are currently-implemented are determined. The calls in the software library that correspond to a currently-implemented accelerator are determined. One or more calls to the software library in the virtual function table are replaced with one or more corresponding calls to a corresponding currently-implemented accelerator. When a call in the software library could be implemented in a new accelerator, an accelerator image for the new accelerator is dynamically generated. The accelerator image is then deployed to create the new accelerator. One or more calls to the software library in the virtual function table are replaced with one or more corresponding calls to the new accelerator. | 2019-06-20 |
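The virtual-function-table swap in 20190187966 is, at its core, replacing one entry in a dispatch table with an accelerator-backed callable. The sketch below models that with a plain dict; the function names and the return-value convention are invented, and a real system would of course dispatch to hardware rather than another Python function.

```python
# Dict-based stand-in for a virtual function table: library entry
# points map to callables, and an entry is swapped when an
# accelerator-backed implementation becomes available.

def soft_fft(x):
    return ("software", x)       # placeholder software-library path

def accel_fft(x):
    return ("accelerator", x)    # placeholder accelerator-backed path

vtable = {"fft": soft_fft}

def call(name, arg):
    return vtable[name](arg)

def deploy_accelerator(name, impl):
    # Replace the software-library call with the accelerator call.
    vtable[name] = impl
```

Because callers always go through `call`, the swap is transparent to them, which is the point of routing library calls through the table in the first place.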
20190187967 | GENERATION OF DYNAMIC SOFTWARE MODELS USING INPUT MAPPING WITH FEATURE DEFINITIONS - A system and method for facilitating construction of and/or adaptation of a dynamic software model. One embodiment provides for generating software models by mapping user selections to one or more model features as specified by feature definitions. An initial software model is used to obtain the user selections. Artifacts are associated with the initial software model according to the selections by mapping the selections to model features according to previously determined feature definitions. | 2019-06-20 |
20190187968 | DISTRIBUTION AND EXECUTION OF INSTRUCTIONS IN A DISTRIBUTED COMPUTING ENVIRONMENT - Methods and apparatus for distribution and execution of instructions in a distributed computing environment are disclosed. An example method includes requesting, by executing an instruction with a processor within a deployment environment, a package supporting execution of a second instruction from a management endpoint, loading, by executing an instruction with the processor, a first component of the package in a command cache, the first component including a third instruction to implement a plugin framework, causing, by executing an instruction with the processor, a second component of the package to be stored in an instruction cache, the instruction cache located outside the deployment environment, the second component including a fourth instruction, and executing the first component from the command cache. | 2019-06-20 |
20190187969 | METHOD FOR VIRTUALIZING SOFTWARE APPLICATIONS - A method for virtualizing software applications. The method comprises initializing a virtual environment created by a virtual engine executed over a computer; creating a new data file; launching an installation process of a software application to be virtualized, wherein the installation process runs in the virtual environment; during the installation process, capturing data writes to a file system of the computer's operating system; and saving the data writes to the new data file. | 2019-06-20 |
20190187970 | METHOD OF UPDATING FIRMWARE OF CLOSED STORAGE DEVICE - A method of updating firmware of closed storage device includes the steps of connecting an electronic device to a closed storage device having built-in first and second memories and bootstrap loader, and the first memory storing a first application that is set by the bootstrap loader as a default boot loader; the electronic device downloading a second application having a different version from the first application and setting the first memory to a locked state; the electronic device transmitting the second application to the second memory via the bootstrap loader and the second memory is updated when the second application is written thereinto; and the bootstrap loader setting the second application as the boot loader. The two applications of different versions in the closed storage device are updated alternately, and the old application can still be used as the boot loader when the update of the other application has failed. | 2019-06-20 |
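The alternating-slot behaviour in 20190187970 is essentially an A/B update scheme: write the inactive slot, switch the boot pointer only on success, and fall back to the old slot on failure. The toy model below invents the slot layout, the `write_ok` flag, and all names purely to show that failure leaves the old application bootable.

```python
# Toy A/B-style model of the dual-application update: the inactive
# slot is written, and the boot pointer moves only if the write worked.

class ClosedStorage:
    def __init__(self):
        self.slots = ["app-v1", None]   # two firmware slots
        self.boot_index = 0             # bootstrap loader's default choice

    def update(self, new_app, write_ok=True):
        target = 1 - self.boot_index    # always write the inactive slot
        if write_ok:
            self.slots[target] = new_app
            self.boot_index = target    # bootstrap loader switches over
        # On failure the active slot is untouched and stays bootable.

    def boot(self):
        return self.slots[self.boot_index]
```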
20190187971 | METHOD AND SYSTEM FOR PROVIDING SECURE OVER-THE-AIR VEHICLE UPDATES - Embodiments of the present disclosure are directed to methods and systems for providing secure over-the-air firmware updates to one or more vehicles. More specifically, the present disclosure describes applying to firmware images distributed to one or more vehicles encryption that is unique to each update version. The encryption is also unique to each vehicle receiving the update. Embodiments of the present disclosure can also include determining and verifying the integrity of an available OTA firmware update prior to authorizing installation of the firmware update in a vehicle. | 2019-06-20 |
20190187972 | CLIENT TERMINAL, INFORMATION PROCESSING SYSTEM, AND FIRMWARE UPDATE NOTIFICATION METHOD - A client terminal, an information processing system and a firmware update notification method. The information processing system includes the client terminal, an electronic device, and a server that provides the firmware for the electronic device. The firmware update notification method includes acquiring version information of the firmware installed on the electronic device and the latest firmware available on the server, comparing the version of the firmware installed on the electronic device with the version of the latest firmware available on the server, determining whether the latest firmware to update the firmware installed on the electronic device is available on the server, providing notice that the latest firmware to update the firmware installed on the electronic device is available on the server and acquiring the latest firmware from the server to update the firmware installed on the electronic device based on an inputted instruction. | 2019-06-20 |
20190187973 | METHOD AND SYSTEM FOR UPDATING LEGACY SOFTWARE - A method includes analyzing operational code to determine identifiers used within the operational code. The method further includes grouping like identifiers based on a relational aspect of the identifiers. The method further includes, for one or more identifier groups, determining potential feature(s) of the identifier group(s). The method further includes testing the potential feature(s) based on a corresponding feature test suite to produce feedback regarding meaningfulness of the potential feature(s). The method further comprises, when the meaningfulness is above a threshold, adding the potential feature(s) to a feature set. The method further includes, when the meaningfulness is at or below the threshold, adjusting analysis parameter(s), grouping parameter(s), feature parameter(s), and/or testing parameter(s). | 2019-06-20 |
20190187974 | SYSTEM AND METHOD FOR DOWNGRADING APPLICATIONS - Disclosed herein is a technique for downgrading applications to placeholder applications in order to free up storage space in a user device. Based on a variety of heuristics, a number of installed applications are identified as candidates for a downgrade. The downgrading of the identified applications involves creating a placeholder application for each of the identified applications. The identified applications are temporarily deleted while keeping the user data associated with the applications intact and the placeholder applications are installed. | 2019-06-20 |
20190187975 | System and Method for Providing Automatic Firmware Update Management - A method for updating firmware of cable modems while optimizing management resources in a network comprising a web application, a network collector, more than one cable modem, and one or more servers. The method includes the web application receiving an update firmware policy, the policy defined by a list of cable modems to have their firmware updated and a Uniform Resource Identifier (URI) pointing to a file within a server in the network, and the web application adding a policy with this information to a policies table. The network collector polls a database engine for a new policy and computes a list of cable modems to have their firmware updated, and the network collector sends a command to a cable modem to update to a new firmware, wherein the new firmware is specified by the URI. | 2019-06-20 |
20190187976 | METHOD FOR UPDATING SOFTWARE FOR VEHICLE AND THE VEHICLE USING OF THE SAME - A method for updating software for a vehicle by receiving difference information from an external device in a certain space, and a vehicle to which the method is applied, are provided. The method may include determining whether necessary difference map data of the vehicle is present based on information received from a parking lot server in a limited space such as a parking lot, and receiving necessary difference information related to map data of the vehicle from a vehicle in which the difference information is stored. | 2019-06-20 |
20190187977 | DUAL BOOT OPERATING SYSTEM INSTALLATION USING MULTIPLE REDUNDANT DRIVES - The disclosure herein describes installing operating system (OS) software of a computing device with multiple redundant drives. A first drive is removed from a redundant drive array mirroring the first drive and a second drive, the drives including a first OS. The first drive is formatted to remove the first OS and include a plurality of partitions. Installation data is mounted on an installation partition of the first drive, the installation data configured to install a second OS. A bootloader component is updated to include an installation option for the second OS. The second OS is then installed on the plurality of partitions of the first drive based on the installation data. The plurality of partitions are configured as multiple virtual redundant drives with respect to the second OS, whereby the computing device is enabled to boot to either the first OS or the second OS. | 2019-06-20 |
20190187978 | SOFTWARE VERSION SYNCHRONIZATION FOR AVIONICS SYSTEMS - An assembly for an aircraft according to an example of the present disclosure includes, among other things, a control module including a processor and a local memory that stores a first instance of operational software executable by the processor and that relates to functionality of the control module to selectively control a vehicle system, and a backplane memory device coupled to the control module by a common backplane. The backplane memory device includes shadow memory that stores a second instance of the operational software. A method of synchronizing an assembly is also disclosed. | 2019-06-20 |
20190187979 | DYNAMIC ACCELERATOR GENERATION AND DEPLOYMENT - A code portion in a computer program is identified that will be improved from being deployed to a hardware accelerator to enhance the run-time performance of the computer program. An accelerator catalog includes a listing of currently-implemented accelerators, along with available resources on one or more programmable devices. When the catalog does not include the needed accelerator, the available resources are determined from the catalog, and when the available resources are insufficient to deploy the needed accelerator, one or more of the existing accelerators is cast out of the programmable device according to specified ranking criteria to make room for the needed accelerator. The needed accelerator image is dynamically generated and deployed, the identified code portion of the computer program is replaced with a call to the deployed hardware accelerator, the newly-generated accelerator is stored in the catalog, and the available resources data in the catalog is updated. | 2019-06-20 |
20190187980 | VERSION CONTROL OF APPLICATIONS - An application development system allows developers of software systems to manage infrastructure resources during the development and testing process. The application development system allows users to define application containers that comprise components including source code, binaries, and virtual databases used for the application. An application container can be associated with policies that control various aspects of the actions taken using the application container including constraints and access control. The application development system enforces the policies for actions taken by users for the application containers. The encapsulation of policies with the application containers allows users of the application containers to take actions including creating virtual databases, provisioning virtual databases, and the like without requiring system administrators to manage resource issues. | 2019-06-20 |
20190187981 | REGISTRY FOR MAPPING NAMES TO COMPONENT INSTANCES USING CONFIGURABLE BINDINGS AND POINTER DEFINITIONS - The disclosed embodiments relate to a system that facilitates developing applications in a component-based software development environment. This system provides an execution environment comprising instances of application components and a registry that maps names to instances of application components. Upon receiving a call to register a mapping between a name and an instance of an application component, the system updates the registry to include an entry for the mapping. Moreover, upon receiving a call to be notified about registry changes for a name, the system updates the registry to send a notification to a caller when a registry change occurs for the name. | 2019-06-20 |
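The two registry calls in 20190187981 — register a name-to-instance mapping, and subscribe to changes for a name — fit in a handful of lines. The class and method names below are invented, and the callback signature is an assumption; the sketch shows only the notify-on-change contract the abstract describes.

```python
# Minimal name -> component-instance registry with change notification.

class Registry:
    def __init__(self):
        self._bindings = {}   # name -> component instance
        self._watchers = {}   # name -> list of callbacks

    def register(self, name, instance):
        self._bindings[name] = instance
        for cb in self._watchers.get(name, []):
            cb(name, instance)          # notify on every registry change

    def lookup(self, name):
        return self._bindings.get(name)

    def watch(self, name, callback):
        # Caller asks to be notified about registry changes for a name.
        self._watchers.setdefault(name, []).append(callback)
```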
20190187982 | MANAGED MULTI-CONTAINER BUILDS - Techniques for managing multi-container builds are described herein. A software build task description specifies a build environment and the build environment specifies a set of parameters for building a version of a software object. A container is instantiated that corresponds to the build environment and build commands are sent to the container. As the container completes the build command, it sends a response that is used to determine a second command to send to the container. A status of the software build task is provided based at least in part on the response. | 2019-06-20 |
20190187983 | NEURAL PROCESSING ACCELERATOR - A system for calculating. A scratch memory is connected to a plurality of configurable processing elements by a communication fabric including a plurality of configurable nodes. The scratch memory sends out a plurality of streams of data words. Each data word is either a configuration word used to set the configuration of a node or of a processing element, or a data word carrying an operand or a result of a calculation. Each processing element performs operations according to its current configuration and returns the results to the communication fabric, which conveys them back to the scratch memory. | 2019-06-20 |
20190187984 | SYSTEM MEMORY CONTROLLER WITH ATOMIC OPERATIONS - Methods and systems for use on a memory controller are disclosed which provide atomic compute operations of any size using an asynchronous, pipelined message-passing interface between clients and the memory controller. | 2019-06-20 |
20190187985 | Storage Organization for Transposing a Matrix Using a Streaming Engine - Software instructions are executed on a processor within a computer system to configure a streaming engine to operate in either a linear mode or a transpose mode. A stream of addresses is generated using an address generator, in which the stream of addresses includes consecutive nested loop iterations for at least a first loop and a second loop. While in the linear mode, the first loop is treated as an inner loop. While in the transpose mode, the second loop is treated as the inner loop. A matrix can be fetched from memory in the linear mode to provide row-wise vectors. A matrix can be fetched from the memory in the transpose mode to provide column-wise vectors. Local storage on the streaming engine is organized as sectors based on the number of rows in the matrix to allow overlapping transposition processing and to minimize memory accesses. | 2019-06-20 |
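The loop-swap that distinguishes the two modes in 20190187985 is easy to see in an address-stream generator: linear mode keeps the column loop innermost (row-wise vectors), transpose mode keeps the row loop innermost (column-wise vectors). Row-major addressing and the function name are assumptions of this sketch.

```python
# Address streams for a rows x cols row-major matrix at `base`, with
# `elem_size` bytes per element, in linear vs transpose mode.

def address_stream(base, rows, cols, elem_size, transpose=False):
    addrs = []
    if not transpose:
        for r in range(rows):           # outer loop
            for c in range(cols):       # inner loop -> row-wise vectors
                addrs.append(base + (r * cols + c) * elem_size)
    else:
        for c in range(cols):           # loop nesting swapped
            for r in range(rows):       # inner loop -> column-wise vectors
                addrs.append(base + (r * cols + c) * elem_size)
    return addrs
```

Note that transpose mode strides by a full row (`cols * elem_size`) between consecutive fetches, which is why the sector-organized local storage described in the abstract matters for keeping memory accesses down.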
20190187986 | Transposing a Matrix Using a Streaming Engine - Software instructions are executed on a processor within a computer system to configure a streaming engine to operate in either a linear mode or a transpose mode. A stream of addresses is generated using an address generator, in which the stream of addresses includes consecutive nested loop iterations for at least a first loop and a second loop. While in the linear mode, the first loop is treated as an inner loop. While in the transpose mode, the second loop is treated as the inner loop. A matrix can be fetched from memory in the linear mode to provide row-wise vectors. A matrix can be fetched from the memory in the transpose mode to provide column-wise vectors. | 2019-06-20 |
20190187987 | AUTOMATION OF SEQUENCES OF ACTIONS - Traditional manual macro-recorders may not work under a dynamically changing operating environment. Technical solutions are disclosed to automatically generate macros to increase productivity. After a new sequence of actions is detected, the system will prompt the user with the information of an existing macro if the existing macro contains a similar sequence. Otherwise, the system will attempt to automatically generate a new macro based on the sequence of actions. | 2019-06-20 |
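Matching a newly detected action sequence against stored macros, as in 20190187987, needs some similarity measure; the application does not specify one, so the sketch below uses `difflib.SequenceMatcher` purely as a stand-in, with an invented threshold and invented names.

```python
# Find the stored macro most similar to a newly detected action
# sequence; difflib's ratio (0..1) is an assumed similarity metric.
import difflib

def find_similar_macro(new_seq, macros, threshold=0.75):
    """Return (name, score) of the best match at or above threshold, else None."""
    best = None
    for name, seq in macros.items():
        score = difflib.SequenceMatcher(None, new_seq, seq).ratio()
        if score >= threshold and (best is None or score > best[1]):
            best = (name, score)
    return best
```

On a hit the system would prompt the user with the existing macro's information; on a miss it would fall through to generating a new macro from the detected sequence, per the abstract.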
20190187988 | PROCESSOR LOAD USING A BIT VECTOR TO CALCULATE EFFECTIVE ADDRESS - Circuitry may be configured to identify a particular element position of a bit vector stored in a register, where a value of the element occupying the particular element position matches a first predetermined value, and determine an address value dependent upon the particular element position of the bit vector and a base address. The circuitry may be further configured to load data from a memory dependent upon the address value. | 2019-06-20 |
20190187989 | LOAD REGISTER ON CONDITION IMMEDIATE INSTRUCTION - A data processor comprising a plurality of registers, and instruction execution circuitry having an associated instruction set, wherein the instruction set includes an instruction specifying at least a mask operand, a register operand and an immediate value operand, and the instruction execution circuitry, in response to an instance of the instruction, determines a Boolean value based on the mask operand and sets a respective one of a plurality of registers specified by the register operand of the instance to a value of the immediate value operand if the Boolean value is true. The instruction execution circuitry, in response to the instance of the instruction, may set the respective one of the plurality of registers specified by the register operand of the instance to zero if the Boolean value is false. | 2019-06-20 |
20190187990 | SYSTEM AND METHOD FOR A LIGHTWEIGHT FENCING OPERATION - A system and method for a lightweight fence is described. In particular, micro-operations including a fencing micro-operation are dispatched to a load queue. The fencing micro-operation allows micro-operations younger than the fencing micro-operation to execute, where the micro-operations are related to a type of fencing micro-operation. The fencing micro-operation is executed if the fencing micro-operation is the oldest memory access micro-operation, where the oldest memory access micro-operation is related to the type of fencing micro-operation. The fencing micro-operation determines whether micro-operations younger than the fencing micro-operation have load ordering violations and if load ordering violations are detected, the fencing micro-operation signals the retire queue that instructions younger than the fencing micro-operation should be flushed. The instructions to be flushed should include all micro-operations with load ordering violations. | 2019-06-20 |
20190187991 | DEVICE, DATA-PROCESSING CHAIN AND CONTEXT-SWITCHING METHOD - This data-processing device includes a unit for processing data, a storage memory and a buffer-memory device configured to contain a first group of data relative to a first context and exchange data between the processing unit and the first group of data. The buffer-memory device is further configured to contain a second group of data relative to a second context and, upon reception of a context-switching instruction, exchange data between the processing unit and the second group of data, in place of the first group of data. The data-processing device further includes a context-switching device configured to emit the context-switching instruction, select a group of data recorded in the storage memory, copy the first group of data to the storage memory and copy the selected group of data to the buffer-memory device. | 2019-06-20 |
20190187992 | PRIORITIZED INSTRUCTIONS IN AN INSTRUCTION COMPLETION TABLE OF A SIMULTANEOUS MULTITHREADING PROCESSOR - Implementations are disclosed for a simultaneous multithreading processor configured to execute a plurality of threads. In one implementation, the simultaneous multithreading processor is configured to select a first thread of the plurality of threads according to a predefined scheme, and access an instruction completion table to determine whether the first thread is eligible to have a first instruction prioritized. Responsive to determining that the first thread is eligible to have the first instruction prioritized, the simultaneous multithreading processor is further configured to execute the first instruction of the first thread using a dedicated prioritization resource. | 2019-06-20 |
20190187993 | FINISH STATUS REPORTING FOR A SIMULTANEOUS MULTITHREADING PROCESSOR USING AN INSTRUCTION COMPLETION TABLE - A simultaneous multithreading processor and related method of operating are disclosed. The method comprises dispatching portions of a first instruction to be executed by a respective plurality of execution units of the processor; receiving, at an instruction completion table of the processor, respective finish reports responsive to execution of the portions of the first instruction; determining, using the received finish reports, that all of the portions of the first instruction have been executed; and updating the instruction completion table to indicate that the first instruction is ready for completion. | 2019-06-20 |
20190187994 | SEQUENCE VERIFICATION - A method of monitoring execution in an execution environment of an operation, for example a cryptographic operation, comprising a sequence of instructions, is disclosed. Instructions sent in the sequence from a main processor to one or more auxiliary processors, for example cryptographic processors, to execute the operation are monitored and the sequence of instructions is verified using verification information. The method comprises enabling output from the execution environment of a result of the operation in response to a successful verification of the sequence, or generating a verification failure signal in response to a failed verification of the sequence. | 2019-06-20 |
20190187995 | ASYNCHRONOUS FLUSH AND RESTORE OF DISTRIBUTED HISTORY BUFFER - Techniques are disclosed for performing a flush and restore of a history buffer (HB) in a processing unit. One technique includes identifying one or more entries of the HB to restore to a register file in the processing unit. For each of the one or more HB entries, a determination is made whether to send the HB entry to the register file via a first restore bus or via a second restore bus, different from the first restore bus, based on contents of the HB entry. Each of the one or more HB entries is then sent to the register file via one of the first restore bus or the second restore bus, based on the determination. | 2019-06-20 |
20190187996 | OPERATING ON DATA STREAMS USING CHAINED HARDWARE INSTRUCTIONS - A method for accessing and using a hardware acceleration circuit in a computer system is disclosed. The computer system may receive a single call to a particular library function that is implemented by a hardware acceleration circuit included in the computer system. A plurality of chained hardware instructions is generated in response to the single call, wherein the plurality of chained hardware instructions is based on different ones of a plurality of flags and a plurality of data streams specified by the single call. The computer system may send the plurality of chained hardware instructions to the hardware acceleration circuit for execution. | 2019-06-20 |
20190187997 | SYSTEMS AND METHODS FOR OPTIMIZED CLUSTER RESOURCE UTILIZATION - Systems and methods for optimizing cluster resource utilization are disclosed. In one embodiment, in an information processing apparatus comprising at least one computer processor, a method for optimizing cluster resource utilization may include: (1) retrieving cluster usage information for at least one cluster resource in a multi-tenant environment; (2) determining tenant usage for the cluster resource for each of a plurality of tenants; (3) determining a tenant resource commitment for the cluster resource for each tenant; and (4) presenting tenant usage and tenant resource commitment for each resource. | 2019-06-20 |
20190187998 | INFORMATION PROCESSING DEVICE AND METHOD OF CONTROLLING COMPUTERS - An information processing device includes a processor that calculates a first energy consumption of a first computer during a boot latency when the first computer has already booted and is not executing any job. The boot latency is time taken for boot of a second computer scheduled to execute a second job with the first computer after a first job executed by the first computer. The processor calculates a second energy consumption of the second computer during a waiting time when the second computer has already booted and is not executing any job. The waiting time is obtained by subtracting the boot latency from a time difference between a scheduled end time of the first job and a present time. The processor powers on the second computer when power of the second computer is off and when the second energy consumption becomes equal to or less than a threshold value. | 2019-06-20 |
20190187999 | RADIO NODE DEVICE AND BACKHAUL CONNECTION METHOD THEREOF - A radio node device executes a backhaul connection method. The radio node device receives a radio access network issued configuration message requesting multi-connectivity capability of the relay node device. The radio node device provides two wireless communication channels in parallel as a part of a wireless backhaul channel to a radio access network entity in response to the configuration message. The relay node device serves as an intermediate node in the wireless backhaul channel. The relay node device performs route selection for the wireless backhaul channel based on metrics of relay nodes. | 2019-06-20 |
20190188000 | Method for Preloading Application, Computer Readable Storage Medium, and Terminal Device - A method for preloading an application, a storage medium, and a terminal device are provided. The method includes the following. In response to a target application being detected to be closed, current state feature information of a terminal device is acquired. The current state feature information is input into a random forest prediction model corresponding to the target application, where the random forest prediction model is generated based on a usage regularity of the target application corresponding to historical state feature information of the terminal device. Whether to preload the target application is determined according to a prediction result of the random forest prediction model. | 2019-06-20 |
20190188001 | PSEUDO-RANDOM LOGICAL TO PHYSICAL CORE ASSIGNMENT AT BOOT FOR AGE AVERAGING - A computing device includes a processor having a plurality of cores, a core translation component, and a core assignment component. The core translation component provides a set of registers, one register for each core of the multiple processor cores. The core assignment component includes components to provide a core index to each of the registers of the core translation component according to a core assignment scheme during processor initialization. Process instructions from an operating system are transferred to a respective core based on the core indices. | 2019-06-20 |
20190188002 | System, Device, Method, and Computer-Readable Recording Medium - A device identifies an information processing terminal and transmits a device identifier associated with the user to the information processing terminal. A server receives the device identifier from the information processing terminal and then transmits device metadata associated with the device identifier to the information processing terminal. The information processing terminal extracts an app identifier from the device metadata and transmits a distribution request for a device app including the extracted app identifier to the server. | 2019-06-20 |
20190188003 | ADAPTER CONFIGURATION OVER OUT OF BAND MANAGEMENT NETWORK - Examples described herein include receiving contact information of a management controller for a host device and querying the management controller for a supported data network adapter over a management network. In response to a determination that the host device comprises a supported data network adapter, identifying information of a storage array is transmitted to the management controller. Examples also include receiving a unique identifier of a storage volume associated with the storage array and configuring, over the management network, the supported data network adapter to boot from the storage volume over a data network that is out of band from the management network. | 2019-06-20 |
20190188004 | SOFTWARE APPLICATION DYNAMIC LINGUISTIC TRANSLATION SYSTEM AND METHODS - Aspects of the present disclosure relate to text and/or image translation computing systems, and in particular, text and image processing of user-interface elements during run-time of a software application. Code is injected into an application binary file. During execution of the application the injected code executes to identify user-interface elements defined within the application and extracts various textual aspects, such as text strings, from the user-interface elements. The system translates the extracted text strings into a desired language and modifies the user-interface element to include the translated text. | 2019-06-20 |
20190188005 | Method for Preloading Application, Storage Medium, and Terminal Device - A method for preloading an application, a storage medium, and a terminal device are provided. The method includes the following. First status feature information of a terminal device is acquired in response to an application-preloading-prediction event being detected to be triggered. The first status feature information is compared with a plurality of pre-collected samples of a sample set. The plurality of pre-collected samples include second status feature information of the terminal device in a preset sampling period, and each sample of the plurality of pre-collected samples corresponds to a sample tag indicating a next application to be launched. A target application to be launched is predicted according to a comparison result. The target application is preloaded. | 2019-06-20 |
20190188006 | ADAPTER CONFIGURATION - Aspects of the application relate to configuring an adapter. Code of the adapter is received and dependencies from the code are determined, wherein at least one of the dependencies includes library code and a version of the library code. A control flow graph is derived from the code and the dependencies. A type of the adapter is determined to specify how the adapter processes messages. The method further comprises determining at least one implementation of at least one adapter task of the adapter based on the control flow graph. When the determined implementation is not annotated in the code or the control flow graph, the method includes annotating the control flow graph to specify the implementation. A configuration GUI is generated based on the annotated control flow graph and the adapter type. A configuration task may be performed on the adapter according to input received via the configuration GUI. | 2019-06-20 |
20190188007 | Method for Preloading Application, Storage Medium, and Terminal Device - A method for preloading an application, a storage medium, and a terminal device are provided. The method includes the following. Current state feature information of the terminal device is acquired, when an application preloading prediction event is detected to be triggered. The current state feature information is input into a plurality of decision tree prediction models each corresponding to an application in a preset application set, where each of the decision tree prediction models is generated based on a usage regularity of an associated application corresponding to historical state feature information of the terminal device. A target application to be initiated is predicted according to output results of the decision tree prediction models, and then the target application is preloaded. | 2019-06-20 |
20190188008 | METHODS AND DEVICES FOR THE AUTOMATIC CONFIGURATION OF AN EXCHANGE FIELD DEVICE IN A PROCESS CONTROL SYSTEM - A method provides for the automatic configuration of a field device in a process control system. The process control system comprises one or more field devices and a configuration provider. The field devices and the configuration provider are communicatively coupled via a communication system. The method comprises: the automatic retrieval and provision of configuration data of the field devices via the configuration provider; the automatic recognition of the field device via the configuration provider after the removal of a second field device from the process control system and the replacement with the field device; and the automatic configuration of the field device using the provisioned configuration data of the second field device via the configuration provider. The method further comprises the recurrent sending of an introducing message from the configuration provider to the field devices to introduce the configuration provider to the field devices. The method further comprises the registration of the field devices with the configuration provider. | 2019-06-20 |
20190188009 | DYNAMIC CONFIGURATION OF A MULTIPROCESSOR SYSTEM - A multiprocessor system includes multiple processors configured to run applications, and a dynamic configuration system operating independently on one or more of the multiple processors. The dynamic configuration system is configured to automatically incorporate new processors into the multiprocessor system for communication with one or more of the multiple processors. The dynamic configuration system automatically reconfigures the multiprocessor system in real time so that at least one application normally run on one or more of the multiple processors instead runs on one or more of the automatically incorporated new processors. | 2019-06-20 |
20190188010 | Remote Component Loader - Methods, systems, computer-readable media, and apparatuses may provide for the creation and management of applications with dependencies. An application executing via a client application on a computing device may require a dependency, such as a software module, that is unavailable at the computing device. The application may be compiled with a remote loader module. Based on determining the dependency is unavailable at the computing device, the remote loader module may send information about the dependency to a server, which may provide instructions for retrieving the dependency. The application may then, via the remote loader and based on the instructions, request the dependency. The server may locate the dependency or generate it based on capabilities of the computing device and send the dependency to the application. The application may execute with the received dependency. | 2019-06-20 |
20190188011 | TIRE PRESSURE MONITORING UNIT WITH EXPANDABLE PROGRAM LIBRARY AND METHOD FOR SUPPLEMENTING A PROGRAM LIBRARY OF A TIRE PRESSURE MONITORING UNIT - A tire pressure monitoring unit includes a pressure sensor, a temperature sensor, a transmitter for wireless transmission of HF signals carrying pressure and temperature data, a receiver for receiving wireless LF control signals, a microcontroller containing a program memory and a data storage device containing a library of control programs to control the measurement and transmission activity of the tire pressure monitoring unit. The microcontroller selects from this library, on the basis of control signals that are received, a control program and then writes it into its program memory. When a loading program is activated by a control signal, it causes the microcontroller to transfer into the data storage device an additional control program that is received by the receiver. A method includes adding, to a library of control programs in a data storage device connected to a microcontroller of a tire pressure monitoring unit, an additional control program. | 2019-06-20 |
20190188012 | METHOD, DEVICE, TERMINAL AND STORAGE MEDIUM FOR PROCESSING APPLICATION - The present application discloses a method, a device, a terminal and a storage medium for processing an application, and relates to the field of application processing technology, the method includes: acquiring status data, where the status data indicates a running status of the terminal; determining, according to the status data, a target application to be preloaded; displaying an application window corresponding to the target application in a virtual screen, where a display area corresponding to the virtual screen is located outside a display area corresponding to the physical screen; and migrating the application window corresponding to the target application to the physical screen in response to receiving a running instruction of the target application. | 2019-06-20 |
20190188013 | Suggesting Actions Based on Machine Learning - This document describes techniques for suggesting actions based on machine learning. These techniques determine a task that a user desires to perform, and present a user interface through which to perform the task. To determine this task, the techniques can analyze content displayed on the user device or analyze contexts of the user and user device. With this determined task, the techniques determine an action that may assist the user in performing the task. This action is further determined to be performable through analysis of functionalities of an application, which may or may not be executing or installed on the user device. With some subset of the application's functionalities determined, the techniques present the subset of functionalities via the user interface. By so doing, the techniques enable a user to complete a task more easily, quickly, or using fewer computing resources. | 2019-06-20 |
20190188014 | VIRTUAL APPLIANCES - Example implementations relate to virtual appliances. In an example, a processor-based appliance abstraction engine exposes a programming interface for accessing undifferentiated resources of a computing environment irrespective of the type of the computing environment. Computing environment types may include physical infrastructure, virtual infrastructure, or cloud infrastructure. The appliance abstraction engine discovers available resources of the computing environment and creates a virtual appliance by configuring the discovered available resources of the computing environment according to capabilities defined in a specification and by populating the computing environment with artifacts for a computing platform defined in the specification. | 2019-06-20 |
20190188015 | NATIVE EXECUTION BRIDGE FOR SANDBOXED SCRIPTING LANGUAGES - Techniques herein include receiving, at a scripting language component of a native execution bridge, a request to execute one or more scripting language commands, and sending the commands from the scripting language component to a native execution component of the native execution bridge for determination, based at least in part on a security policy, whether to execute the one or more scripting language commands as corresponding native commands outside the scripting language component. In response to determining to execute the commands, the commands are translated into one or more natively executable commands and are executed. In some embodiments, the scripting language component determines, based on a security policy, whether commands are permissible, and only if they are, forwarding those commands to the native execution component for translation and execution. | 2019-06-20 |
20190188016 | DEPLOYING A VIRTUAL MACHINE IN A COMPUTING ENVIRONMENT - A method and associated system. In response to a request to deploy a virtual machine, a virtual machine resource usage pattern having attributes matching a subset of attributes in an ordered sequence of attributes is selected from at least one virtual machine resource usage pattern stored in a virtual machine resource usage pattern library, based on an ordering of the attributes in the ordered sequence of attributes, wherein the virtual machine resource usage pattern library stores usage patterns for virtual machines previously deployed. A node on which the virtual machine is to be deployed is selected, based on the selected virtual machine resource usage pattern, and additionally based on either available resources of the plurality of nodes or predicted runtime resource requirements of the virtual machine to be deployed. The virtual machine is configured for being deployed on the selected node. The virtual machine is deployed on the selected node. | 2019-06-20 |
20190188017 | CONVERTING VIRTUAL VOLUMES IN PLACE - A technique includes changing a configuration setting of a virtual volume of data stored in a storage system. The technique includes converting data of the virtual volume in place to reflect the changing of the configuration setting. | 2019-06-20 |
20190188018 | NODE IN CLUSTER MEMBERSHIP MANAGEMENT PROTOCOL - A method for a node to become a member of a cluster includes, when the node is in an initialization state, refraining from starting any service for the cluster, rejecting any reconfiguration request from a coordinator of the cluster, and determining if a local copy of a member list is out-of-date. When the local member list is up-to-date, the method includes advancing to an observer state or a participant state depending on whether the node is in the member list. When the local copy of the member list is out-of-date, the method includes waiting to receive the member list, updating the local member list to be equal to the member list, persisting the local member list, recording the local member list as up-to-date, and advancing to an observer state or a participant state depending on whether the node is in the member list. | 2019-06-20 |
20190188019 | GRADUAL CREATION PROCESS OF SERVER VIRTUAL MACHINES - An example method for the gradual creation process of server virtual machines includes a virtualization manager locking a virtual machine template, saving a configuration of a virtual machine, locking the virtual machine, and directing a worker host to create a volume. The worker host creates the volume, and the virtualization manager unlocks the virtual machine. A destination host executes the virtual machine, and the worker host merges the volume with a disk of the virtual machine template. | 2019-06-20 |
20190188020 | SYSTEMS AND METHODS FOR ADAPTIVE ACCESS OF MEMORY NAMESPACES - In accordance with embodiments of the present disclosure, an information handling system may include a memory subsystem and a processor subsystem communicatively coupled to the memory subsystem and configured to execute a hypervisor, wherein the hypervisor is configured to host a plurality of virtual machines and host an interface to the memory subsystem, wherein the interface is configured to maintain a data structure for mapping at least one namespace instantiated within the memory subsystem to a plurality of access modes for accessing the at least one namespace from the processor subsystem. | 2019-06-20 |
20190188021 | VIRTUAL COMPUTING SYSTEMS INCLUDING IP ADDRESS ASSIGNMENT USING EXPRESSION EVALUATION - Examples described herein may include virtualized environments having multiple computing nodes accessing a storage pool. User interfaces are described which may allow a user to enter one or more IP address generation formulas for various components of computing nodes. Examples of systems described herein may evaluate the IP address generation formula(s) to generate a set of IP addresses that may be assigned to computing nodes in the system. This may advantageously allow for systematic and efficient assigning of IP addresses across large numbers of computing nodes. | 2019-06-20 |
20190188022 | Virtual Redundancy for Active-Standby Cloud Applications - Virtual redundancy for active-standby cloud applications is disclosed herein. A virtual machine (“VM”) placement scheduling system is disclosed herein. The system can compute, for each standby VM of a plurality of available standby VMs, a minimum required placement overlap delta to meet an entitlement assurance rate (“EAR”) threshold. The system can compute a minimum number of available VM slots for activating each standby VM to meet the EAR threshold. For each standby VM of a given application, the system can filter out any server of a plurality of servers that does not meet criteria. If a given server meets the criteria, the system can add the given server to a candidate list; sort, in descending order, the candidate list by the minimum required placement overlap delta and the number of available virtual machine slots; and select, from the candidate list of servers, a candidate server from atop the candidate list. | 2019-06-20 |
20190188023 | METHOD FOR DATA CENTER STORAGE EVALUATION FRAMEWORK SIMULATION - A method for simulating a data center is provided, along with a non-transitory computer-readable storage medium having recorded thereon a computer program for executing the method. The method includes storing at least one hardware configuration file and at least one functional description file of a data center to be simulated in a configuration file application; generating a simulation program of the data center using the at least one hardware configuration file and the at least one functional description file by a data center storage evaluation framework (DCEF) application; and executing a flow-based simulation on the simulation program generated by the DCEF application by a simulator. | 2019-06-20 |
20190188024 | VIRTUAL MACHINE HOT MIGRATION METHOD AND APPARATUS, AND SYSTEM - This application discloses a virtual machine hot migration method performed by a virtual machine hot migration apparatus in a cloud computing system including a plurality of hosts, each host including a plurality of virtual machines. The apparatus obtains a load of each host, determines a host whose load exceeds a preset threshold as a source host, determines a to-be-hot-migrated target virtual machine in the source host; and controls the target virtual machine to be hot-migrated from the source host to a target host. According to the solutions provided in the embodiments of this application, when a load of a host is excessively high, a redundantly configured virtual machine on the host is hot-migrated to another host, thereby improving the resource utilization rate of the host while ensuring continued use by the user. | 2019-06-20 |
20190188025 | PROVISION OF INPUT/OUTPUT CLASSIFICATION IN A STORAGE SYSTEM - Embodiments of the present disclosure are directed towards techniques and configurations for an apparatus configured to provide I/O classification information in a distributed cloud storage system, in accordance with some embodiments. In one embodiment, the apparatus may include a partition scanner, to scan an image of a virtual disk associated with the storage system, to determine one or more partitions associated with the virtual disk; a file system scanner coupled with the partition scanner, to identify file systems associated with the determined partitions, to access files stored in the identified file systems; and an I/O classifier coupled with the file system scanner, to generate I/O classification information associated with the accessed files. The I/O classification information provides characteristics of input-output operations performed on the virtual disk. Other embodiments may be described and/or claimed. | 2019-06-20 |
20190188026 | AGILE VM LOAD BALANCING THROUGH MICRO-CHECKPOINTING AND MULTI-ARCHITECTURE EMULATION - Methods and systems for agile load balancing include detecting an increased load for a first primary virtual machine (VM) on a first node that has a plurality of additional primary VMs running on a processor; deactivating one or more of the additional primary VMs, reducing said one or more deactivated VMs to a secondary state, to free resources at the first node for the first primary VM; and activating secondary VMs, located at one or more additional nodes, that correspond to the one or more deactivated VMs, raising said secondary VMs to a primary state. Activation and deactivation through micro-checkpointing may involve nodes of different CPU architectures during transient periods of peak load. | 2019-06-20 |
20190188027 | NETWORK RECONFIGURATION IN HYPERVISOR-AGNOSTIC DISASTER RECOVERY SCENARIOS - Systems for restarting a virtual machine in a disaster recovery scenario where a network configuration differs between the failed system and the recovery system. A method commences upon identifying a disaster recovery plan for restarting a virtual machine from a first system on a second system (e.g., a recovery system). A configuration for providing network access at the second system through an adapter present in the second system is stored at a location accessible to the second system. The virtual machine is restarted at the second system upon detection of a failure event at the first system. | 2019-06-20 |
20190188028 | PARAVIRTUALIZED ACCESS FOR DEVICE ASSIGNMENT BY BAR EXTENSION - A hypervisor associates a combined register space with a virtual device to be presented to a guest operating system of a virtual machine, the combined register space comprising a default register space and an additional register space. Responsive to detecting an access of the additional register space by the guest operating system of the virtual machine, the hypervisor performs an operation on behalf of the virtual machine, the operation pertaining to the access of the additional register space. | 2019-06-20 |
20190188029 | NON-REPUDIABLE TRANSACTION PROTOCOL - A non-repudiable transaction protocol system includes a memory, at least one processor in communication with the memory, an operating system executing on the at least one processor, a resource manager configured to manage a storage system, and a transaction manager. The transaction manager is configured to provide NRO-W evidence of a work request from a client to the resource manager and provide NRR-W evidence to the client that the resource manager has completed initial work for the work request. Additionally, the transaction manager is configured to provide NRO-C evidence to the resource manager that the client requested completion of the initial work and NRR-C evidence to the client that the resource manager promised to execute the completion. Each of the NRO-W evidence, the NRR-W evidence, the NRO-C evidence, and the NRR-C evidence is exchanged to prevent either the client or the resource manager from gaining an advantage. | 2019-06-20 |
20190188030 | TERMINAL BACKGROUND APPLICATION MANAGEMENT METHOD AND APPARATUS - Embodiments of the present disclosure disclose a terminal background application management method, including: detecting a running status of each of applications running in the background of a terminal; selecting a first target application from the applications, where a running status of the first target application is a preset running status; and allocating a first processing resource to the first target application, where the first processing resource is greater than a second processing resource that is pre-allocated to the applications in the background of the terminal. | 2019-06-20 |
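The allocation step described above can be illustrated with a small sketch. Everything concrete here is assumed for illustration: the preset running status, the share values, and the `boost_app` helper are not taken from the patent, which leaves those details unspecified.

```python
# Hypothetical sketch: grant the first background app in a preset running
# status a processing-resource share larger than the pre-allocated default.
DEFAULT_SHARE = 0.05   # second processing resource (assumed units)
BOOST_SHARE = 0.20     # first processing resource, greater than the default

def boost_app(apps, preset_status="playing_audio"):
    """apps: list of (name, status) pairs. Returns a name -> share mapping."""
    shares = {name: DEFAULT_SHARE for name, _ in apps}
    for name, status in apps:
        if status == preset_status:   # first target application
            shares[name] = BOOST_SHARE
            break                     # only the first match is boosted
    return shares

apps = [("mail", "idle"), ("music", "playing_audio"), ("maps", "idle")]
print(boost_app(apps))  # {'mail': 0.05, 'music': 0.2, 'maps': 0.05}
```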
20190188031 | PRIORITIZING I/O OPERATIONS - A computer-implemented method according to one embodiment includes identifying an input/output (I/O) operation to be implemented within a distributed computing environment, where the distributed computing environment executes a plurality of different jobs, determining information associated with the I/O operation indicating that the I/O operation is associated with a recovery of one of the plurality of different jobs, and assigning an implementation priority to the I/O operation, based on the information associated with the I/O operation. | 2019-06-20 |
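The prioritization step above reduces to tagging recovery-related I/O and ordering the queue by that tag. A minimal sketch, assuming a lower number means sooner implementation; the flag name and priority values are illustrative, not from the patent.

```python
# Hypothetical sketch: recovery I/O jumps ahead of ordinary job I/O.
def assign_priority(io_op):
    """io_op: dict with an optional 'is_recovery' flag.
    Returns a numeric implementation priority (lower = sooner, assumed)."""
    return 0 if io_op.get("is_recovery") else 10

queue = [{"id": 1}, {"id": 2, "is_recovery": True}, {"id": 3}]
queue.sort(key=assign_priority)        # stable sort keeps ties in order
print([op["id"] for op in queue])      # recovery I/O first: [2, 1, 3]
```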
20190188032 | THREAD INTERRUPT OFFLOAD RE-PRIORITIZATION - A computing system is provided and includes first and second computing resources defined, during system initialization, as first kernel threads and a second kernel thread with which the first kernel threads are operably associated, a memory manager and a re-prioritization controller. The memory manager is configured to handle a portion of pending input/output (I/O) operations at an interrupt level and to offload a remainder of the pending I/O operations to the first kernel threads according to an offload condition whereby the offloaded I/O operations are queued according to a first scheme. The re-prioritization controller is configured to transfer a portion of the offloaded I/O operations from the first kernel threads to the second kernel thread according to a transfer condition whereby the transferred I/O operations are re-prioritized according to a second scheme. | 2019-06-20 |
20190188033 | MANAGEMENT APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - A management apparatus includes a memory; and a processor configured to: store, into the memory for each of a plurality of groups, configuration information and performance information for each of a plurality of information processing apparatuses acquired from a host OS executed by each of the plurality of groups to which the plurality of information processing apparatuses belong and each of which provides a given service; select, based on the configuration information and the performance information, a movement target virtual machine for each of the plurality of groups; determine, when the movement target virtual machine is moved to a different group, whether a performance value of a service corresponding to the movement target virtual machine satisfies a reference value; and, when it is determined that the performance value satisfies the reference value, determine to move the movement target virtual machine from the target group to the different group. | 2019-06-20 |
20190188034 | THREAD POOL AND TASK QUEUING METHOD AND SYSTEM - A system includes a memory, at least one processor in communication with the memory, a counter, and a scheduler. The scheduler is configured to classify a thread of a pool of threads, as a first classification, as either available or unavailable. Additionally, the scheduler is configured to classify the pool of threads, based on a counter value from the counter, as a second classification: less than a configured core thread pool size; equal to or larger than the configured core thread pool size but less than a maximum core thread pool size; or equal to or larger than the maximum core thread pool size. The scheduler is also configured to classify a resistance factor, as a third classification, as either within a limit or outside the limit, and to schedule a work order based on at least one of the classifications. | 2019-06-20 |
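The three classifications above can be sketched as plain predicates feeding a scheduling decision. The thresholds, the resistance-factor semantics, and the policy in `schedule` are all assumptions for illustration; the patent does not specify them.

```python
# Hypothetical sketch of the three classifications and one possible policy.
CORE_SIZE = 4          # configured core thread pool size (assumed)
MAX_SIZE = 8           # maximum core thread pool size (assumed)
RESISTANCE_LIMIT = 1.5 # resistance-factor limit (assumed)

def classify(thread_busy, counter_value, resistance_factor):
    first = "unavailable" if thread_busy else "available"
    if counter_value < CORE_SIZE:
        second = "below_core"
    elif counter_value < MAX_SIZE:
        second = "core_to_max"
    else:
        second = "at_or_above_max"
    third = "within_limit" if resistance_factor <= RESISTANCE_LIMIT else "outside_limit"
    return first, second, third

def schedule(thread_busy, counter_value, resistance_factor):
    first, second, third = classify(thread_busy, counter_value, resistance_factor)
    # assumed policy: run on a free thread, grow the pool if allowed, else queue
    if first == "available":
        return "run_on_existing_thread"
    if second != "at_or_above_max" and third == "within_limit":
        return "spawn_thread_and_run"
    return "queue_work_order"

print(schedule(False, 3, 1.0))  # run_on_existing_thread
print(schedule(True, 8, 1.0))   # queue_work_order
```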
20190188035 | SIMPLIFYING DATA MAPPING IN COMPLEX FLOWS BY DEFINING SCHEMAS AT CONVERGENCE POINTS IN A WORKFLOW - A computer-implemented method comprising: receiving, by a computing device, user input defining a workflow; receiving, by the computing device, information defining schemas at convergence points in the workflow; determining, by the computing device, a set of mapping parameters at outputs of nodes of the workflow based on the schemas; receiving, by the computing device, input values to the mapping parameters; storing, by the computing device, the input values to the mapping parameters in a structure corresponding to the schemas; and executing, by the computing device, the workflow based on the input values to the mapping parameters, wherein the executing includes invoking one or more applications residing on one or more application servers through application programming interface (API) calls. | 2019-06-20 |
20190188036 | COMPUTER SYSTEM AND PROGRAM MIGRATION METHOD - Provided are a computer system and a program migration method capable of properly migrating programs between different computers. A first computer calculates a migration priority for each of a plurality of programs based on information indicating weighting relative to usage of hardware resources, and on operation information of hardware resources in the first computer when each of the plurality of programs is executed. Based on hardware resource expansion schedule information, which defines the hardware resources of a second computer in each of a plurality of migration phases, the first computer determines the migration feasibility of a program on the hardware resources used in each migration phase and decides the migration phase for migrating each of the plurality of programs, proceeding from the first migration phase onward in order of the calculated migration priority. | 2019-06-20 |
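One plausible reading of the priority calculation above is a weighted sum of each program's observed resource usage. The weight values, resource names, and the `migration_priority` helper are assumptions for the sketch, not details from the patent.

```python
# Hypothetical sketch: migration priority as weighted resource utilization.
WEIGHTS = {"cpu": 0.5, "memory": 0.3, "io": 0.2}  # assumed weighting info

def migration_priority(usage):
    """usage: resource name -> normalized utilization in [0, 1]."""
    return sum(WEIGHTS[r] * usage.get(r, 0.0) for r in WEIGHTS)

programs = {
    "db":  {"cpu": 0.9, "memory": 0.8, "io": 0.5},
    "web": {"cpu": 0.4, "memory": 0.2, "io": 0.1},
}
# higher priority migrates first
order = sorted(programs, key=lambda p: migration_priority(programs[p]), reverse=True)
print(order)  # ['db', 'web']
```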
20190188037 | PIPELINE TASK VERIFICATION FOR A DATA PROCESSING PLATFORM - A pipeline task verification method and system is disclosed, and may use one or more processors. The method may comprise providing a data processing pipeline specification, wherein the data processing pipeline specification defines a plurality of data elements of a data processing pipeline. The method may further comprise identifying from the data processing pipeline specification one or more tasks defining a relationship between a first data element and a second data element. The method may further comprise receiving for a given task one or more data processing elements intended to receive the first data element and to produce the second data element. The method may further comprise verifying that the received one or more data processing elements receive the first data element and produce the second data element according to the defined relationship. | 2019-06-20 |
20190188038 | Cascading of Graph Streaming Processors - Methods, systems and apparatuses for graph stream processing are disclosed. One apparatus includes a cascade of graph streaming processors, wherein each of the graph streaming processors includes a processor array and a graph streaming processor scheduler. The cascade of graph streaming processors further includes a plurality of shared command buffers, wherein each shared command buffer includes a buffer address, a write pointer, and a read pointer, wherein for each of the plurality of shared command buffers a first graph streaming processor writes commands to the shared command buffer as indicated by the write pointer of the shared command buffer and a second graph streaming processor reads commands from the shared command buffer as indicated by the read pointer, wherein at least one graph streaming processor scheduler operates to manage the write pointer and the read pointer to avoid overwriting unused commands of the shared command buffer. | 2019-06-20 |
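The write-pointer/read-pointer management described above behaves like a bounded ring buffer in which writes that would clobber unread commands are refused. A minimal model under that assumption; the class name, capacity, and refusal-on-full behavior are illustrative choices, not the patent's design.

```python
# Illustrative ring-buffer model of a shared command buffer.
class SharedCommandBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.write_ptr = 0   # next slot the producing processor writes
        self.read_ptr = 0    # next slot the consuming processor reads

    def write(self, cmd):
        if self.write_ptr - self.read_ptr >= len(self.buf):
            return False     # full: refusing avoids overwriting unread commands
        self.buf[self.write_ptr % len(self.buf)] = cmd
        self.write_ptr += 1
        return True

    def read(self):
        if self.read_ptr == self.write_ptr:
            return None      # empty
        cmd = self.buf[self.read_ptr % len(self.buf)]
        self.read_ptr += 1
        return cmd

# upstream processor writes; downstream processor reads
b = SharedCommandBuffer(2)
assert b.write("c0") and b.write("c1")
print(b.write("c2"))  # False: buffer full, unread commands preserved
print(b.read())       # c0
print(b.write("c2"))  # True: the read freed a slot
```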
20190188039 | SYSTEM AND METHOD FOR MANAGING SYSTEM MEMORY INTEGRITY IN SUSPENDED ELECTRONIC CONTROL UNITS - A system for controlling a subsystem of a vehicle includes a memory, a first processor, and a second processor. The first processor allocates a portion of the memory upon booting to perform operations to control the subsystem and generates an indication when an amount of memory used from the allocated portion of the memory is greater than or equal to a threshold. The first processor monitors times when the vehicle is turned on and off and determines a time period during which the vehicle remains turned off. After the vehicle is turned off, the first processor enters a power save mode. The memory and the second processor continue to receive power. During the time period, on receiving the indication, the second processor wakes up the first processor, which performs a reboot operation, reallocates the memory, and reenters the power save mode. The memory continues to receive power. | 2019-06-20 |
20190188040 | MULTI-CONSTRAINT DYNAMIC RESOURCE MANAGER - An arrangement is illustrated in which a flash controller is provided with a multi-constraint dynamic resource manager module configured to control both software and hardware clients. The arrangement also provides memory and an interface for connecting the controller to a host. | 2019-06-20 |
20190188041 | METHOD AND APPARATUS FOR IMPLEMENTING HARDWARE RESOURCE ALLOCATION, AND STORAGE MEDIUM - Aspects of the present disclosure provide a method and an apparatus for implementing hardware resource allocation. For example, the apparatus includes processing circuitry. The processing circuitry obtains a first value that is indicative of an allocable resource quantity of a hardware resource in a computing device. The processing circuitry also receives a second value that is indicative of a resource quantity of the hardware resource requested by a user, and then determines whether the second value is greater than the first value. When the second value is determined to be less than or equal to the first value, the processing circuitry requests the computing device to allocate the requested quantity of the hardware resource to the user, and subtracts the second value from the first value to update the allocable resource quantity of the hardware resource in the computing device. | 2019-06-20 |
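The compare-then-subtract flow above can be sketched in a few lines. The `allocate` helper and its return shape are illustrative assumptions; the patent describes the check and the bookkeeping, not an API.

```python
# Hypothetical sketch of the check-and-subtract allocation flow.
def allocate(allocable, requested):
    """allocable: first value; requested: second value.
    Returns (granted, updated_allocable)."""
    if requested > allocable:
        return False, allocable          # refuse over-allocation
    return True, allocable - requested   # grant and update the quantity

granted, remaining = allocate(allocable=16, requested=4)
print(granted, remaining)   # True 12
granted, remaining = allocate(remaining, 20)
print(granted, remaining)   # False 12
```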
20190188042 | System and Method of Image Analyses - An image analysis system can schedule a plurality of different inputted video streams for real-time image analysis in different time segments, respectively, so as to maximize usage of the limited resources of an image analysis module of the image analysis system. Due to its limited resources, the image analysis module is unable to perform real-time image analysis on all of the plurality of video streams at the same time. | 2019-06-20 |
20190188043 | DYNAMIC TASK ALLOCATION AND NODE RECONFIGURATION IN MESH NETWORK - A system for allocating tasks within a moving multi-hop mesh network includes a processor operatively coupled to memory. The processor is configured to implement the steps of: sending a bid request from a first network node to two or more other network nodes for computing a task, wherein the first network node has a first geographical location relative to a first geographical location of the two or more other network nodes; in response to the first network node receiving a bid from at least two of the two or more other network nodes for computing the task, predicting a second geographical location for each of the at least two of the two or more other network nodes relative to a second geographical location of the first network node based on the time when the task will be completed; predicting a total task completion time for the at least two of the two or more other network nodes; comparing the total task completion time predicted for the at least two of the two or more other network nodes to generate a winning bid; and allocating the task to the winning bid. | 2019-06-20 |
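The bid-comparison step above can be sketched as minimizing a predicted total completion time. The cost model (compute time plus a distance-proportional transfer cost derived from the predicted positions) and all names are assumptions for illustration; the patent does not give the prediction formula.

```python
# Hypothetical sketch: pick the winning bid by lowest predicted total time.
import math

def predicted_time(bid, own_future_pos):
    """bid: dict with the node's predicted position and compute time."""
    dx = bid["pos"][0] - own_future_pos[0]
    dy = bid["pos"][1] - own_future_pos[1]
    transfer = math.hypot(dx, dy) * 0.1   # assumed cost per unit distance
    return bid["compute_s"] + transfer

def winning_bid(bids, own_future_pos):
    return min(bids, key=lambda b: predicted_time(b, own_future_pos))

bids = [
    {"node": "n2", "pos": (0.0, 10.0), "compute_s": 5.0},
    {"node": "n3", "pos": (0.0, 1.0),  "compute_s": 5.5},
]
# n2: 5.0 + 1.0 = 6.0; n3: 5.5 + 0.1 = 5.6, so n3 wins despite slower compute
print(winning_bid(bids, own_future_pos=(0.0, 0.0))["node"])  # n3
```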
20190188044 | METHOD AND DEVICE FOR ALLOCATING RESOURCES IN A SYSTEM - Provided are a device and method for allocating system resources. In one example, the method includes identifying resources that are available from a plurality of devices included in a system, allocating available resources of the plurality of devices to a plurality of components operating in the system, the allocating comprising reserving a set of resources from the plurality of devices in the system for each respective component, from among the plurality of components, based on operating requirements included in the metadata of the respective component, and managing the system based on the allocated resources. By allocating resources to components executing in the system, in advance, and preventing other components from consuming those resources, the system can operate with improved stability. | 2019-06-20 |
20190188045 | METHOD FOR PROCESSING DATA AND PROGRAMMABLE LOGIC CONTROLLER - In a method for processing data on a programmable logic controller, a priority with a predetermined priority level is assigned to at least one parallel processing section of a program of a master-processor core of a control task. Respective priority levels are inserted into a data structure as the respective master-processor core arrives at the parallel processing section. A parallel-processor core examines whether entries are present in the data structure and processes partial tasks from the work package of the master-processor core whose priority level ranks first among the entries. A real-time condition of the control task is met by setting the execution times of the programs for the master-processor core so that the master-processor core is capable of processing the partial tasks from the work packages without being supported by the parallel-processor core. The master-processor core further processes partial tasks not processed by the at least one parallel-processor core. | 2019-06-20 |
20190188046 | BLOCKCHAIN INTEGRATION FOR SCALABLE DISTRIBUTED COMPUTATIONS - An apparatus is configured to initiate distributed computations across a plurality of data processing clusters associated with respective data zones, to utilize local processing results of at least a subset of the distributed computations from respective ones of the data processing clusters to generate global processing results, and to update at least one distributed ledger maintained by one or more of the plurality of data processing clusters to incorporate one or more blocks each characterizing at least a portion of the distributed computations. Each of at least a subset of the data processing clusters is configured to process data from a data source of the corresponding data zone using one or more local computations of that data processing cluster to generate at least a portion of the local processing results. At least one of the data processing clusters is configured to apply one or more global computations to one or more of the local processing results to generate at least a portion of the global processing results. | 2019-06-20 |