30th week of 2020 patent application highlights part 44 |
Patent application number | Title | Published |
20200233652 | SYSTEM AND METHOD FOR DETERMINING DYNAMIC DEPENDENCIES FOR ENTERPRISE IT CHANGE MANAGEMENT, SIMULATION AND ROLLOUT - A method of managing a selected change to an IT (information technology) system comprises obtaining an inventory of all components available to the IT system, assigning each of the components in the inventory to a component class, identifying relationships between the components in the IT system, defining attributes of the relationships, generating a complete dependency mapping of the components of the IT system based on the relationships, cross-relationships and constraints, simulating the selected change within a processor of the IT system to one or more of the IT components using the dependency mapping to generate a change impact analysis, and automatically updating impacted IT components via at least one authenticating agent executing on the IT system. | 2020-07-23 |
20200233653 | PROGRAM UPDATING METHOD - Provided is a technique that allows a user to decide the timing for updating a program that controls the operation of a device installed in a vehicle. The method of updating includes a first step of determining whether or not storage of an update program for updating the program is complete, a second step of notifying that the storage is complete when a result of the determination in the first step is affirmative, and a third step of, after the second step, starting updating of the program by means of the stored update program after a predetermined instruction. | 2020-07-23 |
20200233654 | INFORMATION DISTRIBUTION SYSTEM AND IN-VEHICLE DEVICE - A server communicates with an in-vehicle terminal, a communication terminal, or a vehicle communication device. Upon receiving configuration information of in-vehicle terminal software and/or configuration information of vehicle software from any of these information sources, the server generates, based on the received information and identification information identifying the communication path used by the information source, at least one of: software to be distributed to the information source, which includes update information of the vehicle software or of the in-vehicle terminal software; a list of vehicle software to be updated; or a list of in-vehicle terminal software to be updated, the lists covering information excluded from the software to be distributed. The server then sends at least one of the generated software to be distributed, the list of vehicle software to be updated, or the list of in-vehicle terminal software to be updated to the information source. | 2020-07-23 |
20200233655 | SOFTWARE DISTRIBUTION SYSTEM AND SOFTWARE DISTRIBUTION METHOD - A software distribution system includes an acquiring unit configured to acquire, from vehicles, error information about an in-vehicle software component having an error; a holding unit configured to hold acquired error information in association with vehicle information of the vehicle from which the error information is acquired; an identifying unit configured to identify an information element common to a plurality of pieces of vehicle information associated with the same error information; and a distribution unit configured to distribute a correction file for correcting the error in the in-vehicle software component to a vehicle group having the identified information element. | 2020-07-23 |
20200233656 | SYSTEMS AND METHODS FOR VERSIONING A CLOUD ENVIRONMENT FOR A DEVICE - Disclosed embodiments describe systems and methods for versioning a cloud environment for a device. A versioning system can store a snapshot of a first version of an environment of a device for use with a cloud provider of a plurality of cloud providers. The environment can include one or more resource template files and one or more deployment application programming interfaces (APIs) for the cloud provider. The versioning system can receive a request to automatically deploy a second version of the environment for the device. A snapshot of the second version of the environment can include at least one second resource template file different from the one or more resource template files of the snapshot of the first version of the environment. The versioning system can automatically deploy the second version of the environment responsive to the request. | 2020-07-23 |
20200233657 | BUILDING MANAGEMENT SYSTEM WITH PLUG AND PLAY DEVICE REGISTRATION AND CONFIGURATION - A building management system includes a system manager and a cloud-based data platform. The system manager is configured to identify building equipment and generate a reported network tree listing the building equipment. The cloud-based data platform is configured to receive the reported network tree from the system manager, generate a list of bound properties of the building equipment based on the reported network tree, and create timeseries for the bound properties of the building equipment. The system manager is configured to detect a change of value (COV) of a bound property listed in the list of bound properties and post a sample of the bound property to the timeseries in response to detecting the COV of the bound property. | 2020-07-23 |
20200233658 | POWER TOOL SYSTEM AND UPGRADING METHOD FOR THE SAME - A power tool system includes a power tool and a cloud server configured to receive an upgrading file for upgrading the power tool. The power tool is adapted for wireless communication with the cloud server and includes a motor, a driving module for driving the motor, a control module for outputting a control signal to the driving module, and an IoT module for establishing a wireless communication link between the power tool and the cloud server. The IoT module, the driving module, and the control module share a bus, and the upgrading file is simultaneously distributed to the control module and/or the driving module through the bus. | 2020-07-23 |
20200233659 | METHOD OF UPDATING FIRMWARE AND DEVICE FOR USING THE METHOD - An operation method of a server for updating firmware includes: generating a first delta file including a plurality of blocks based on a plurality of update areas included in a first version firmware; generating a second delta file by repositioning the plurality of blocks included in the first delta file such that a plurality of unit blocks are generated by grouping control blocks, difference blocks, and extra blocks, each of which corresponds to the plurality of update areas, respectively; generating a plurality of swap blocks based on extra blocks among the plurality of blocks; and generating a third delta file by adding the generated plurality of swap blocks to the second delta file. | 2020-07-23 |
20200233660 | DISTRIBUTED PARALLEL BUILD SYSTEM - This document describes, among other things, systems and methods for managing distributed parallel builds. A computer-implemented method to manage parallel builds, comprises identifying one or more software components in a software project, wherein each software component includes an executable binary file; determining a build configuration for each software component, wherein the build configuration includes a mapping from each software component to one or more build servers; and building each software component using the mapped one or more build servers in the corresponding build configuration, wherein the building includes compiling one or more source files associated with each software component to one or more object files, by distributing the one or more source files to one or more compilation machines. | 2020-07-23 |
20200233661 | ANNOTATIONS FOR PARALLELIZATION OF USER-DEFINED FUNCTIONS WITH FLEXIBLE PARTITIONING - Annotations can be placed in source code to indicate properties for user-defined functions. A wide variety of properties can be implemented to provide information that can be leveraged when constructing a query execution plan for the user-defined function and associated core database relational operations. A flexible range of permitted partition arrangements can be specified via the annotations. Other supported properties include expected sorting and grouping arrangements, ensured post-conditions, and behavior of the user-defined function. | 2020-07-23 |
20200233662 | SOFTWARE PORTFOLIO MANAGEMENT SYSTEM AND METHOD - A software-based product development portfolio management system and method that may be implemented using a software as a service (SaaS) model that allows users (based on access rights) to: create and update valid project plans using integrated management tools and techniques; view near-real-time project data and metrics; enable lean project management; send messages to other users via system alerts and/or e-mails and receive messages/alerts from other SPM System users; input data; establish and change organizational governance guidelines; and approve, conditionally approve, or reject decisions. | 2020-07-23 |
20200233663 | VECTOR PROCESSING UNIT - A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit. | 2020-07-23 |
20200233664 | EFFICIENT RANGE-BASED MEMORY WRITEBACK TO IMPROVE HOST TO DEVICE COMMUNICATION FOR OPTIMAL POWER AND PERFORMANCE - Method and apparatus for efficient range-based memory writeback is described herein. One embodiment of an apparatus includes a system memory, a plurality of hardware processor cores each of which includes a first cache, a decoder circuitry to decode an instruction having fields for a first memory address and a range indicator, and an execution circuitry to execute the decoded instruction. Together, the first memory address and the range indicator define a contiguous region in the system memory that includes one or more cache lines. An execution of the decoded instruction causes any instances of the one or more cache lines in the first cache to be invalidated. Additionally, any invalidated instances of the one or more cache lines that are dirty are to be stored to system memory. | 2020-07-23 |
20200233665 | SYSTEMS, METHODS, AND APPARATUS FOR MATRIX MOVE - Detailed herein are embodiments of systems, processors, and methods for matrix move. For example, a processor is described comprising decode circuitry to decode an instruction having fields for an opcode, a source matrix operand identifier, and a destination matrix operand identifier; and execution circuitry to execute the decoded instruction to move each data element of the identified source matrix operand to a corresponding data element position of the identified destination matrix operand. | 2020-07-23 |
20200233666 | SYSTEMS, METHODS, AND APPARATUSES FOR TILE STORE - Embodiments detailed herein relate to matrix operations. In particular, the storing of a matrix (tile) to memory. For example, support for a store instruction is described in at least a form of decode circuitry to decode an instruction having fields for an opcode, a source matrix operand identifier, and destination memory information, and execution circuitry to execute the decoded instruction to store each data element of configured rows of the identified source matrix operand to memory based on the destination memory information. | 2020-07-23 |
20200233667 | SYSTEMS, METHODS, AND APPARATUSES FOR TILE MATRIX MULTIPLICATION AND ACCUMULATION - Embodiments detailed herein relate to matrix operations. In particular, matrix (tile) multiply accumulate and negated matrix (tile) multiply accumulate are discussed. For example, some embodiments detail decode circuitry to decode an instruction having fields for an opcode, an identifier for a first source matrix operand, an identifier of a second source matrix operand, and an identifier for a source/destination matrix operand; and execution circuitry to execute the decoded instruction to multiply the identified first source matrix operand by the identified second source matrix operand, add a result of the multiplication to the identified source/destination matrix operand, store a result of the addition in the identified source/destination matrix operand, and zero unconfigured columns of the identified source/destination matrix operand. | 2020-07-23 |
20200233668 | SYSTEMS AND METHODS FOR CONTROLLING MACHINE OPERATIONS - Systems and methods for controlling machine operations are provided. A number of data entries are organized into a stack. Each data entry includes a type, a flag, a length, and a value or pointer entry. For each data entry in the stack, the type of data is determined from the type entry, the presence of an address or value is determined by the respective flag entry, and a length of the address or value is determined from the respective length entry. The data to be utilized or an address for the same at the electronic storage area is provided at the respective value or pointer entry. | 2020-07-23 |
20200233669 | PROCESSOR SYSTEM AND MULTIPROCESSOR SYSTEM - A processor system | 2020-07-23 |
20200233670 | DOUBLE LOAD INSTRUCTION - A processor comprising an execution unit, memory and one or more register files. The execution unit is configured to execute instances of machine code instructions from an instruction set. The types of instruction defined in the instruction set include a double-load instruction for loading from the memory to at least one of the one or more register files. The execution unit is configured so as, when the load instruction is executed, to perform a first load operation strided by a fixed stride, and a second load operation strided by a variable stride, the variable stride being specified in a variable stride register in one of the one or more register files. | 2020-07-23 |
20200233671 | PARALLELIZATION OF NUMERIC OPTIMIZERS - A method for parallelization of a numeric optimizer includes detecting an initialization of a numeric optimization process of a given function. The method computes a vector-distance between an input vector and a first neighbor vector of a set of neighbor vectors. The method predicts, using the computed vector-distance, a subset of the set of neighbor vectors. The method pre-computes, in a parallel processing system, a set of evaluation values in parallel, each evaluation value corresponding to one of the subset of the set of neighbor vectors. The method detects a computation request from the numeric optimization process, the computation request involving at least one of the set of evaluation values. The method supplies, in response to receiving the computation request, and without performing a computation of the computation request, a parallelly pre-computed evaluation value from the set of evaluation values to the numeric optimization process. | 2020-07-23 |
20200233672 | BRANCH PREDICTOR - An apparatus comprises processing circuitry to perform data processing in response to instructions; and a branch predictor to predict a branch outcome for a given branch instruction as one of taken and not-taken, based on branch prediction state information indexed based on at least one property of the given branch instruction. In a static branch prediction mode of operation, the branch predictor predicts the branch outcome based on static values of the branch prediction state information set independent of actual branch outcomes of branch instructions which are executed by the processing circuitry while in the static branch prediction mode. The static values of the branch prediction state information are programmable. | 2020-07-23 |
20200233673 | POWER-SAVING MECHANISM FOR MEMORY SUB-SYSTEM IN PIPELINED PROCESSOR - A pipelined processor for carrying out pipeline processing of instructions, which undergo a plurality of stages, is provided. The pipelined processor includes: a memory-activation indicator and a memory controller. The memory-activation indicator stores content information that indicates whether to activate a first volatile memory and/or a second volatile memory while performing a current instruction. The memory controller is arranged for controlling activation of the first volatile memory and/or the second volatile memory in a specific stage of the plurality of stages of the current instruction according to the content information stored in the memory-activation indicator. | 2020-07-23 |
20200233674 | AUTOMATICALLY CONFIGURING BOOT ORDER IN RECOVERY OPERATIONS - Systems and methods for automatically generating a boot sequence. A multiple virtual machine computing environment is analyzed to generate a boot sequence that is used during a recovery operation. The boot sequence may be based on applications and application types running on the virtual machines, a network configuration and network traffic, and on manual boots of virtual machines. The boot sequence prioritizes the order in which the virtual machines are booted in the recovery site. | 2020-07-23 |
20200233675 | SCALABLE SOFTWARE RESOURCE LOADER - Embodiments of the present disclosure relate to loading software resources for execution by a software application. Other embodiments may be described and/or claimed. | 2020-07-23 |
20200233676 | BIOS MANAGEMENT DEVICE, BIOS MANAGEMENT SYSTEM, BIOS MANAGEMENT METHOD, AND BIOS MANAGEMENT PROGRAM-STORED RECORDING MEDIUM - A BIOS management device includes: a storage unit storing original BIOS information used as original information of BIOS information referred to by an information processing device when the BIOS information is stored in the information processing device; an operation unit executing, on the BIOS information and the original BIOS information, operation processing that varies each time the information processing device is activated; a comparison unit comparing a first result of the operation processing executed on the BIOS information with a second result of the operation processing executed on the original BIOS information; and a control unit controlling the information processing device in such a way as to execute the BIOS information and thereby complete activation, when the first and second results match each other, whereby BIOS robustness against illicit alteration is strengthened. | 2020-07-23 |
20200233677 | Dynamically-Updatable Deep Transactional Monitoring Systems and Methods - Provided herein are systems, methods, and computer program products for dynamically-updatable deep transactional monitoring of running applications in real time. A method for monitoring a target software application operates by injecting a software engine into a new thread within a target process of the target software application. The method then retrieves a monitoring script and initiates execution of the monitoring script within the software engine. The monitoring script determines the addresses of one or more functions and calls to those functions, and inserts a trampoline call within the one or more functions. The trampoline saves the execution state of the target process and calls a corresponding monitoring function that retrieves data associated with the target process. The method then restores the execution state of the target process and resumes execution of the target function. | 2020-07-23 |
20200233678 | EXTENSION POINTS FOR WEB-BASED APPLICATIONS AND SERVICES - A web-based application is executable on one or more computing devices, where execution of the web-based application involves invocation of at least one extension point. The one or more computing devices are configured to: (i) receive, by the web-based application and from a client device, a request for web-based content; (ii) receive, by an extension point service, a call to a particular extension point, where the particular extension point is related to the web-based content, (iii) request and receive, by the extension point service and in communication with a database, one or more implementations corresponding to the particular extension point, and (iv) transmit, by the extension point service and in response to the call to the particular extension point, one or more user-defined plugin scripts included in the one or more implementations, output from which is incorporated in the web-based content as displayed by the client device. | 2020-07-23 |
20200233679 | SOFTWARE APPLICATION OPTIMIZATION - Embodiments of the present disclosure relate to software optimization by identifying unused/obsolete components of a software application. Other embodiments may be described and/or claimed. | 2020-07-23 |
20200233680 | SMART BUILDING AUTOMATION SYSTEM WITH DIGITAL SIGNAGE - One or more non-transitory computer-readable storage mediums having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to receive, from a sensor, identifying information associated with an individual, query a database to retrieve context information corresponding to the individual, the database comprising a number of entities corresponding to two or more of spaces, equipment, people, or events and a number of relationships between the entities, the context information determined based on the entities and relationships of the database, determine a purpose of the individual based on the context information, dynamically generate a user interface element for a display based on the purpose of the individual, and control the display to display the user interface element. | 2020-07-23 |
20200233681 | COMPUTER-GENERATED REALITY PLATFORM - The present disclosure relates to providing a computer-generated reality (CGR) platform for generating CGR environments including virtual and augmented reality environments. In some embodiments, the platform includes an operating-system-level (OS-level) process that simulates and renders content in the CGR environment, and one or more application-level processes that provide information related to the content to be simulated and rendered to the OS-level process. | 2020-07-23 |
20200233682 | VIRTUAL DESKTOP INFRASTRUCTURE MANAGEMENT - A new approach to virtual desktop infrastructure management is described. In one example, a master virtual machine is configured to form a master image. The master virtual machine and master image are also modified to incorporate a service that performs an enrollment call to an endpoint manager associated with a virtual desktop infrastructure. One or more virtual machines are instantiated using the master image. When one of the virtual machines is booted and a user logs on, the service is invoked or executed and performs the enrollment call. The enrollment call leads to the enrollment of the virtual machine with the endpoint manager. During and after enrollment, the endpoint manager can configure the virtual machine based on one or more management policies. The management policies can be tailored in various cases, such as depending upon the credentials used to log on to the virtual machine. | 2020-07-23 |
20200233683 | CONTEXTUAL VIRTUAL ASSISTANT COMMUNICATIONS - A method comprises a computer-implemented virtual assistant (VA) determining user information to communicate to a recipient, receiving VA input information, and determining a recipient context. In response to determining the recipient context, the VA composes a preferred presentation based on information attributes and recipient attributes. The VA outputs the user information according to the preferred presentation. A virtual assistant system comprises personal information, a preferred presentation, and a computer-implemented VA. The VA is configured to compose user information and to receive VA input information. In response to receiving the VA input information, the VA determines a recipient context and composes the preferred presentation based on the user information, recipient context, information attributes, and recipient attributes. The VA outputs the user information according to the preferred presentation. A computer program product can embody the method. | 2020-07-23 |
20200233684 | RUNNING APPLICATIONS ON A COMPUTING DEVICE - A method of running an application on a computing device, in which the computing device runs a first operating system | 2020-07-23 |
20200233685 | METHOD FOR ENHANCING PRODUCTS WITH CLOUD ANALYTICS - A method is provided to enhance a virtualized infrastructure at a customer's premise with a cloud analytics service. The method includes receiving a request for an expert use case regarding expertise about an object in the virtualized infrastructure and performing an expertise cycle on the expert use case, which includes retrieving a manifest for the expert use case from a cloud analytics site remote from the customer's premise, collecting telemetry data from the virtualized infrastructure based on the manifest, uploading the collected telemetry data to the cloud analytics site, and retrieving an expertise result for the expert use case from the cloud analytics site. The method further includes communicating the expertise result about the object to the customer and changing a configuration of the object. | 2020-07-23 |
20200233686 | DYNAMIC DISCOVERY OF INTERNAL KERNEL FUNCTIONS AND GLOBAL DATA - A method is provided for a hypervisor to dynamically discover internal address information of a guest kernel on a virtual machine. The method includes locating a kernel exported system call or function in an image of the guest kernel in guest memory of the virtual machine, disassembling machine code of the kernel exported system call or function in the image into assembly code, detecting a pattern from memory references in the assembly code, and, after detecting the pattern, determining the internal address information of the guest kernel from the assembly code. | 2020-07-23 |
20200233687 | USER SPACE PCI DEVICE EMULATION FOR PEER PROCESSES - A system and method include receiving, at a host device, a request from a virtual machine to communicate with an emulated device. The host device establishes a socket connection between an emulator and the emulated device and communicates input-output messages via the socket connection from the virtual machine to the emulated device where the input-output messages use a virtual function input/output (VFIO) message protocol. | 2020-07-23 |
20200233688 | HARDWARE PLACEMENT AND MAINTENANCE SCHEDULING IN HIGH AVAILABILITY SYSTEMS - A method of organizing computer resources includes receiving a specification defining a plurality of quiescence groups of independent component instances for each of at least two services, and performing a first load balancing of the quiescence groups across a plurality of physical servers to define a plurality of supergroups while assigning each of the physical servers across the supergroups. | 2020-07-23 |
20200233689 | METHODS AND SYSTEMS FOR SECURELY AND EFFICIENTLY CLUSTERING DISTRIBUTED PROCESSES USING A CONSISTENT DATABASE - Certain embodiments described herein are directed to methods and systems for adding one or more nodes to a first cluster including a first node in a computer system. A method performed by the first node comprises receiving a first request from a second node to join the first cluster. The method also comprises retrieving a first cluster configuration associated with the first cluster from a distributed database through a first database server (DBS) and creating a second cluster configuration using the first cluster configuration and information received from the second node as part of the request. The method further comprises populating a first one or more local trust stores of a first one or more processes executing on the first node with a second one or more security certificates of a second one or more processes executing on the second node. The method further comprises writing the second cluster configuration to the distributed database and returning the second cluster configuration to the second node. | 2020-07-23 |
20200233690 | SYSTEMS AND METHODS FOR RECOMMENDING OPTIMIZED VIRTUAL-MACHINE CONFIGURATIONS - An example method is provided for recommending VM configurations, including one or more servers upon which one or more VMs can run. A user wishing to run these VMs can request a recommendation for an appropriate server or set of servers. The user can indicate a category corresponding to the type of workload that pertains to the VMs. The system can receive the request and identify a pool of servers available to the user. Using industry specifications and benchmarks, the system can classify the available servers into multiple categories. Within those categories, similar servers can be clustered and then ranked based on their levels of optimization. The sorted results can be displayed to the user, who can select a particular server (or group of servers) and customize the deployment as needed. This process allows a user to identify and select an optimized setup quickly and accurately. | 2020-07-23 |
20200233691 | CONTAINERIZED MANAGEMENT SERVICES WITH HIGH AVAILABILITY - In one example, a management service may be deployed in a first container. Further, a shadow service corresponding to the management service may be generated in the first container. Furthermore, network traffic may be routed to an active one of the management service and the shadow service, via a watchdog service in the first container, to provide high availability at a service level. | 2020-07-23 |
20200233692 | SYSTEM AND METHOD FOR MANAGING A MONITORING AGENT IN AN OPERATING SYSTEM OF A VIRTUAL COMPUTING INSTANCE - A system and method for managing a monitoring agent in an operating system of a virtual computing instance uses a monitoring agent lifecycle service of the monitoring agent that is started as part of a startup process of the operating system of the virtual computing instance. When needed, a monitoring agent core of the monitoring agent is downloaded and installed from an external service to the virtual computing instance by the monitoring agent lifecycle service so that a monitoring operation of the virtual computing instance is performed by the monitoring agent core. | 2020-07-23 |
20200233693 | Maintaining High Availability During Network Partitions for Virtual Machines Stored on Distributed Object-Based Storage - Techniques are disclosed for maintaining high availability (HA) for virtual machines (VMs) running on host systems of a host cluster, where each host system executes a HA module in a plurality of HA modules and a storage module in a plurality of storage modules, where the host cluster aggregates, via the plurality of storage modules, locally-attached storage resources of the host systems to provide an object store, where persistent data for the VMs is stored as per-VM storage objects across the locally-attached storage resources comprising the object store, and where a failure causes the plurality of storage modules to observe a network partition in the host cluster that the plurality of HA modules do not. In one embodiment, a host system in the host cluster executing a first HA module invokes an API exposed by the plurality of storage modules for persisting metadata for a VM to the object store. If the API is not processed successfully, the host system: (1) identifies a subset of second HA modules in the plurality of HA modules; (2) issues an accessibility query for the VM to the subset of second HA modules in parallel, the accessibility query being configured to determine whether the VM is accessible to the respective host systems of the subset of second HA modules; and (3) if at least one second HA module in the subset indicates that the VM is accessible to its respective host system, transmits a command to the at least one second HA module to invoke the API on its respective host system. | 2020-07-23 |
20200233694 | METHODS, MEDIUMS, AND SYSTEMS FOR PROVISIONING APPLICATION SERVICES - Exemplary embodiments relate to techniques for improving startup times of cloud-based virtual servers in response to a spike in service usage (although other applications are contemplated and described). According to some embodiments, in response to a request to provision a new virtual server in a cluster, high-priority services (e.g., those that enable the server to respond to system health checks or that support an application providing the service) are started while lower-priority services are delayed. In some embodiments, prior to receiving such a request, a new server may be started and then hibernated to create a “hot spare.” When the request is received, the hot spare may be taken out of hibernation to quickly bring the hot spare online. It is contemplated that the delayed-startup and hot spare embodiments may be used together to further improve performance. | 2020-07-23 |
20200233695 | MULTI-LINE/MULTI-STATE VIRTUALIZED OAM TRANSPONDER - Novel tools and techniques might provide for implementing applications management, based at least in part on operations, administration, and management (“OAM”) information. A host computing system might comprise a dedicated OAM management agent. While normal application frame flow might be sent or received by VMs running on the host computing system, OAM frame flow might be sent or received by the OAM management agent, which might also serve as an OAM frame generator. Alternatively, or additionally, based on a determination that at least one OAM frame has changed (in response to a change in address of far-end and/or near-end OAM server functions), the OAM management agent might update a list associating the at least one OAM frame that has changed with corresponding at least one VM of the one or more VMs, without restarting any of the at least one VM, the OAM management agent, and/or the host computing system. | 2020-07-23 |
20200233696 | Real Time User Matching Using Purchasing Behavior - A system and a method are disclosed for disambiguating between anonymous users associated with multiple transactions. In an embodiment, a system receives information about a transaction and generates a profile of the anonymous user associated with the transaction. In response to another transaction, the system compares the transactions to determine whether the same anonymous user is associated with both transactions. Upon determining that the same anonymous user is associated with both transactions, the system adds information about the new transaction to the profile of the anonymous user. | 2020-07-23 |
20200233697 | Systems and Methods for Transaction Tracing Within an IT Environment - A system for tracing transactions includes a system mapping engine configured to generate a multi-tier control point map based on source code and transaction data of one or more source systems; and a tracing engine configured to trace transactions across the one or more source systems based on the multi-tier control point map. The multi-tier control point map provides end-to-end transaction traceability. | 2020-07-23 |
20200233698 | CLIENT CONTROLLED TRANSACTION PROCESSING INVOLVING A PLURALITY OF PARTICIPANTS - Methods and systems are provided for client controlled transaction processing. The method may be carried out at a transaction server, and include: receiving a transaction request from a transaction initiator and allocating a transaction identifier to the transaction; receiving notification of the number of jobs to be completed in the transaction; maintaining a transaction status indicating the current status of the transaction; receiving job status updates from one or more participants processing the jobs included in the transaction and updating a transaction record reflecting the status of each of the jobs included in the transaction; updating the transaction status when required based on the job status updates of the jobs included in the transaction; and receiving and responding to transaction status polling to provide a current transaction status, where the transaction status polling originates from the transaction initiator and the participants processing the jobs. | 2020-07-23 |
20200233699 | PLATFORM-BASED CHANGE MANAGEMENT - A technique for change management comprising receiving a proposed schedule associated with a change request, determining whether the proposed schedule conforms with preexisting schedule restrictions and does not conflict with a schedule of a preexisting change order, in response to determining that the proposed schedule does not conform with the preexisting schedule restriction or conflicts with the schedule of a preexisting change order, determining a set of potential schedules which do conform with the preexisting schedule restriction and do not conflict with the schedule of a preexisting change order, and providing the set of potential schedules for output, receiving a schedule selected from among the set of potential schedules, storing the change request and schedule in a database table, receiving, from a remote client device hosting an external application, an update task request associated with the change request, and updating a change task database table based on the update task request. | 2020-07-23 |
20200233700 | TECHNIQUES FOR BEHAVIORAL PAIRING IN A TASK ASSIGNMENT SYSTEM - Techniques for behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task assignment system comprising determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, availability of at least one task in a queue; querying, based on at least one variable pertaining to the at least one task, at least one data source for information associated with the at least one task; receiving, from the at least one data source, the information associated with the at least one task; and pairing, based at least in part on the information associated with the at least one task, the at least one task to an agent in the task assignment system. | 2020-07-23 |
20200233701 | MANAGING EXECUTION OF DATA PROCESSING JOBS IN A VIRTUAL COMPUTING ENVIRONMENT - A device may receive a job request associated with a data processing job, including job timing data specifying a time at which the data processing job is to be executed by a virtual computing environment. The device may receive user data associated with the job request and validate the data processing job based on the user data. In addition, the device may identify a priority associated with the data processing job, based on the user data and the job timing data. The device may provide, to a job queue, job data that corresponds to the data processing job, and monitor the virtual computing environment to determine when virtual resources are available. The device may also determine, based on the monitoring, that a virtual resource is available and, based on the determination and the priority, provide the virtual resource with data that causes execution of the data processing job. | 2020-07-23 |
20200233702 | CONTROL APPARATUS AND COMPUTER READABLE MEDIUM - A microcontroller ( | 2020-07-23 |
20200233703 | METHODS AND ARRANGEMENTS FOR AUTOMATED IMPROVING OF QUALITY OF SERVICE OF A DATA CENTER - An automated improving of quality of service of a data center. Transients of a power grid fed to a power supply unit are monitored by a probe. Information on transients is provided across an interface to a server of the data center. Based on characteristics of the transients, a reliability of the data center is subjected to automated updating. A request for migration of workload requiring a higher reliability than the updated reliability can be sent to a central management. When the central management has identified another data center that can meet the required reliability, the central management migrates or relocates the workload to the other data center. | 2020-07-23 |
20200233704 | MULTI-CORE PROCESSOR IN STORAGE SYSTEM EXECUTING DEDICATED POLLING THREAD FOR INCREASED CORE AVAILABILITY - At least one processor of a storage system comprises a plurality of cores and is configured to execute a first thread on a first core of the plurality of cores. The first thread polls at least one interface for an indication of data and, responsive to a detection of an indication of data, processes the data. Responsive to the first thread having no remaining data to be processed, the first thread suspends execution on the first core. The at least one processor is further configured to execute a second thread of a second type on a second core of the plurality of cores. The second thread polls the at least one interface for an indication of data to be processed by the first thread. Responsive to a detection of an indication of data, the second thread causes the first thread to resume execution on the first core. | 2020-07-23 |
20200233705 | MULTI-CORE PROCESSOR IN STORAGE SYSTEM EXECUTING DYNAMIC THREAD FOR INCREASED CORE AVAILABILITY - At least one processor of a storage system comprises a plurality of cores and is configured to execute a first thread in a plurality of modes of operation. When operating in a first mode of operation, the first thread polls at least one interface of the storage system for data to be processed. Responsive to detecting the data, the first thread processes the data. Responsive to having no remaining data to be processed, the first thread suspends execution on the first core if another thread is executing on a second core and operating in a second mode of operation. When operating in the second mode of operation, the first thread polls at least one interface associated with a second thread executing on a second core and operating in the first mode of operation for data to be processed. Responsive to detecting the data, the first thread causes the second thread to resume execution. | 2020-07-23 |
20200233706 | DISTRIBUTED JOB SCHEDULER WITH INTELLIGENT JOB SPLITTING - Methods and systems for improving the performance of a distributed job scheduler by dynamically splitting and distributing the work of a single job into parallelizable tasks that are executed among multiple nodes in a cluster are described. The distributed job scheduler may split a job into a plurality of tasks and assign the tasks to nodes within the cluster based on a time remaining to complete the job, an estimated time to complete the job, and a number of identified healthy nodes within the cluster. The distributed job scheduler may monitor job progress over time and adjust (e.g., increase) the number of nodes used to execute the plurality of tasks if the time remaining to complete the job falls below a threshold amount of time or if the time remaining to complete the job minus the estimated time to complete the job falls below the threshold amount of time. | 2020-07-23 |
20200233707 | PROCESS DISCOVERY AND AUTOMATIC ROBOTIC SCRIPTS GENERATION FOR DISTRIBUTED COMPUTING RESOURCES - Techniques for process discovery and automatic generation of robotic scripts for distributed computing resources are disclosed. In one embodiment, at least one automatable process step associated with an activity performed while interacting with at least one application may be determined. The at least one automatable process step may be segregated into multiple tasks based on parallel executable tasks and sequentially executable tasks. Different types of distributed computing resources may be determined to execute the multiple tasks based on the segregation. A modified process flow corresponding to the at least one automatable process step may be automatically generated based on the segregated multiple tasks and the different types of the distributed computing resources. Further, a robotic script based on the modified flow of the at least one automatable process step may be automatically generated. The robotic script may be executed to perform the activity. | 2020-07-23 |
20200233708 | POST PROVISIONING OPERATION MANAGEMENT IN CLOUD ENVIRONMENT - An example method to manage post provisioning operations of a virtual computing instance in a heterogeneous cloud environment is disclosed. The virtual computing instance may be provisioned by a first management entity and configured to receive a command from a second management entity. The method includes defining the instance with a dynamic type by the first management entity and repeatedly finding the dynamic type with one or more finder workflows to determine whether the virtual computing instance is terminated based on the command from the second management entity. In response to not finding the dynamic type within the heterogeneous cloud environment, the method further includes creating a catalog item for the virtual computing instance in a common service catalog and managing one or more resources allocated for the virtual computing instance based on the created catalog item. | 2020-07-23 |
20200233709 | CASCADING JOB SCHEDULING IN GUESTS - Cascading job scheduling in guests is disclosed. For example, first, second, third, and fourth nodes, each execute respective first, second, third, and fourth pluralities of guests each of which executes respective first, second, third, and fourth pluralities of jobs. A scheduler executes on a processor to receive a current capacity update of the first node. A respective quantity of jobs executing on each of the first, second, third, and fourth nodes is tracked. A first, second, third, and fourth estimated capacity of the respective first, second, third, and fourth nodes is calculated. The first, second, third, and fourth nodes are ranked in a list based on the respective estimated capacities. A request to execute a job is received. The first, second, and third nodes are selected as a schedulable set based on the list. A schedulable set notice and the job are sent to the first node to be executed. | 2020-07-23 |
20200233710 | PLATFORM FOR HIERARCHY COOPERATIVE COMPUTING - A system for hierarchical cooperative computing is provided, comprising a vector definition service configured to receive a user-submitted request, and compile the request into a vector; a rules engine configured to retrieve the vector from the vector definition service, and evaluate the vector for appropriateness; a parametric evaluator configured to parameterize the vector, and generate at least a run from the parameterized vector; and an optimizer configured to retrieve the run from the parametric evaluator, and determine an optimal plan for executing the user-submitted request. | 2020-07-23 |
20200233711 | System and Method of Providing System Jobs Within a Compute Environment - The disclosure relates to systems, methods and computer-readable media for using system jobs for performing actions outside the constraints of batch compute jobs submitted to a compute environment such as a cluster or a grid. The method for modifying a compute environment from a system job includes associating a system job to a queuable object, triggering the system job based on an event and performing arbitrary actions on resources outside of compute nodes in the compute environment. The queuable objects include objects such as batch compute jobs or job reservations. The events that trigger the system job may be time driven, such as ten minutes prior to completion of the batch compute job, or dependent on other actions associated with other system jobs. The system jobs may be utilized also to perform rolling maintenance on a node by node basis. | 2020-07-23 |
20200233712 | Data Processing Method, Apparatus, Storage Medium, Processor, and System - Data processing method, apparatus, storage medium, processor, and system are disclosed. The method includes determining a resource to be expanded in a resource group of a container service, wherein the resource group includes a plurality of different types of resources associated with the container service, and the resource to be expanded is a resource that fails to meet a scheduling requirement among a plurality of different types of resources during resource scheduling; and performing a capacity expansion for the resource to be expanded. The present disclosure solves the technical problem of wasting resources due to a capacity expansion of all resources in a resource group during a capacity expansion of a resource. | 2020-07-23 |
20200233713 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR MANAGING MEMORIES OF COMPUTING RESOURCES - A method, apparatus and computer program product for managing memories of computing resources is disclosed. In the method, a computing task processed by a first computing resource in a group of computing resources is determined. In response to a second memory of a second computing resource other than the first computing resource in the group of computing resources being allocated to the computing task, a second access speed with which the first computing resource accesses the second memory is determined. A target computing resource is selected from the group of computing resources based on an access speed with which the first computing resource accesses a target memory of the target computing resource, where the access speed is higher than the second access speed. At least one part of data in the second memory is migrated to the target memory. | 2020-07-23 |
20200233714 | Memory Pool Allocation for a Multi-Core System - An apparatus includes processing cores, memory blocks, a connection between each processing core and each memory block, a chip selection circuit, and chip selection circuit busses between the chip selection circuit and each of the memory blocks. Each memory block includes a data port and a memory check port. The chip selection circuit is configured to enable writing data from a highest priority core through respective data ports of the memory blocks. The chip selection circuit is further configured to enable writing data from other cores through respective memory check ports of the memory blocks. | 2020-07-23 |
20200233715 | DYNAMICALLY PROVISIONING PHYSICAL HOSTS IN A HYPERCONVERGED INFRASTRUCTURE BASED ON CLUSTER PRIORITY - Techniques for dynamically provisioning and/or deprovisioning physical hosts in a hyperconverged infrastructure based on cluster priority in hyperconverged infrastructures are disclosed. In one embodiment, a user maps physical hosts in a host pool to respective clusters in the hyperconverged infrastructure. Further, the user sets one or more resource utilization threshold limits for each cluster. A management cluster then periodically obtains resource utilization data at a cluster level for each cluster. The management cluster then dynamically provisions and/or deprovisions one or more physical hosts to one or more clusters in the hyperconverged infrastructure using the mapped physical hosts in the host pool based on the obtained resource utilization data and the set one or more resource utilization threshold limits. | 2020-07-23 |
20200233716 | DETERMINING WHETHER TO PROCESS A HOST REQUEST USING A MACHINE LEARNING MODULE - Provided are a computer program product, system, and method for determining whether to process a host request using a machine learning module. Information that relates to at least one of running tasks, mail queue messages related to host requests, Input/Output (I/O) request processing, and a host request received from the host system is provided to a machine learning module. An output representing a processing load in a system is received from the machine learning module. The output is used to determine whether to process the host request. | 2020-07-23 |
20200233717 | TECHNOLOGIES FOR HYBRID FIELD-PROGRAMMABLE GATE ARRAY/APPLICATION-SPECIFIC INTEGRATED CIRCUIT CODE ACCELERATION - Technologies for hybrid acceleration of code include a computing device ( | 2020-07-23 |
20200233718 | COMPUTE NODES WITHIN RECONFIGURABLE COMPUTING CLUSTERS - Reconfigurable computing clusters, compute nodes within reconfigurable computing clusters, and methods of operating a reconfigurable computing cluster are disclosed. A reconfigurable computing cluster includes an optical circuit switch, and a plurality of computing assets, each of the plurality of computing assets connected to the optical circuit switch by two or more bidirectional fiber optic communications paths. | 2020-07-23 |
20200233719 | CLOUD INFRASTRUCTURE SERVICE AND MAINTENANCE - In one aspect, the present approach provides functionality to allow a customer to rename a client instance utilized by the customer without having to provision a new instance. In such an implementation, data may be kept or maintained within the renamed instance. In a further aspect, a virtual internet protocol (VIP) address may be migrated to address load conditions. In accordance with aspects of the approach, multiple VIPs and the instances using the VIPs may be migrated at one time and without downtime to the customer. | 2020-07-23 |
20200233720 | Resource Control Stack Based System for Multiple Domain Presentation of Cloud Computing Resource Control - A multi-layer resource control stack based system may generate an availability indication for multiple domains supported by the resource control stack and send the indication to a client node. The client node may respond with a selection of a domain. The client node may also indicate a compute resource to be managed by the resource control stack. In response to the selection from the client node, the resource control stack may initiate a virtual representation of the domain. The client node may interact with the virtual representation to receive recommendations, utilization data, and control information relevant to the compute resource and within a subject area associated with the domain. | 2020-07-23 |
20200233721 | ELASTIC DATA PARTITIONING OF A DATABASE - A database entry may be stored in a container in a database table corresponding with a partition key. The partition key may be determined by applying one or more partition rules to one or more data values associated with the database entry. The database entry may be an instance of one of a plurality of data object definitions associated with database entries in the database. Each of the data object definitions may identify a respective one or more data fields included within an instance of the data object definition. | 2020-07-23 |
20200233722 | METHOD FOR AUDITING A VIRTUALISED RESOURCE DEPLOYED IN A CLOUD COMPUTING NETWORK - A method of auditing at least one virtualized resource deployed in a cloud computing network, implemented by an administration device in respect of the at least one resource, able to administer virtual network functions, the virtual infrastructure or the network services. The method includes: storing a set of rules of the audit which are associated with the at least one virtualized resource; receiving from the at least one virtualized resource a message including an item of information about an event arising on the virtualized resource; correlating the item of information received with the set of stored rules; and if the correlation is positive, sending, to a recording device, a command message for writing at least one datum linked to the item of information received in a data register associated with the at least one virtualized resource. | 2020-07-23 |
20200233723 | CONSOLIDATION OF IDENTICAL VIRTUAL MACHINES ON HOST COMPUTING SYSTEMS TO ENABLE PAGE SHARING - In one example, configuration data and resource utilization data associated with a plurality of virtual machines in a data center may be retrieved. Further, a cluster analysis may be performed on the configuration data and the resource utilization data to generate a plurality of clusters. Each cluster may include identical virtual machines from the plurality of virtual machines. Furthermore, for each cluster, the identical virtual machines in a cluster may be consolidated to execute in a host computing system such that physical memory pages are shared by the consolidated identical virtual machines in the cluster. | 2020-07-23 |
20200233724 | WORKLOAD PLACEMENT IN A CLUSTER COMPUTING ENVIRONMENT USING MACHINE LEARNING - A method for allocating a workload to a cluster machine of a plurality of cluster machines which are part of a computer cluster operating in a cluster computing environment, includes the step of collecting values from hardware performance counters of each of the cluster machines while the cluster machines are running different workloads. A value of a hardware performance counter from a system which executed the workload to be allocated in isolation and the values from the hardware performance counters of each of the cluster machines which are running the different workloads are used as input to a machine learning algorithm trained to provide as output in each case a prediction of a performance of the workload on each of the cluster machines which are running the different workloads. The cluster machine is selected for placement of the workload based on the predictions. | 2020-07-23 |
20200233725 | DYNAMIC LOAD-BALANCE OF MULTI-CHANNEL REQUESTS - A multi-tenant load balancing system that includes an artificial intelligence based algorithm to dynamically route requests from one or more channels to an agent best suited to process the request. The AI based algorithm routes the request based on a company's business goals, agent attributes, and channel attributes. The AI based algorithm also predicts agent availability. | 2020-07-23 |
20200233726 | DATA PROCESSING SYSTEMS - A data processing system including a data processor which is operable to execute programs to perform data processing operations and in which execution threads executing a program to perform data processing operations may be grouped together into thread groups. The data processor comprises a cross-lane permutation circuit which is operable to perform processing for cross-lane instructions which require data to be permuted (copied or moved) between the threads of a thread group. The cross-lane permutation circuit has plural data lanes between which data may be permuted (moved or copied). The number of data lanes is fewer than the number of threads in a thread group. | 2020-07-23 |
20200233727 | TASK MANAGEMENT DEVICE AND TASK MANAGEMENT METHOD - In a task management device, an acquisition unit is configured to acquire vehicle information from a vehicle. A task management unit is configured to generate instruction information on priorities of a plurality of tasks executed by an in-vehicle multimedia device based on the vehicle information. A communication unit is configured to transmit to the multimedia device the instruction information for executing the task. The task management unit is configured to derive the priorities of the plurality of tasks based on the vehicle information, and to generate the instruction information on the derived priorities of the tasks. | 2020-07-23 |
20200233728 | Clustering and Monitoring System - Disclosed herein are system, method, and computer program product embodiments for providing clustering and monitoring functionality. An embodiment operates by determining that an application programming interface (API) call has been made from a first application to a second application. Metric data regarding a performance of one or more computing devices responsive to the determined API call is received. The received metric data associated with the determined API call is clustered into one of a plurality of predetermined clusters associated with the performance of the one or more computing devices responsive to one or more previous API calls. A notification indicating a system state of the one or more computing devices is determined based on the clustering, and provided. | 2020-07-23 |
20200233729 | MANAGING APPLICATIONS FOR POWER CONSERVATION - Embodiments of the present application relate to a method, apparatus, and system for waking up an app. The method includes adding an application (app) to a wake-up alarm group comprising a plurality of apps, adjusting a plurality of alarm wake-up times corresponding to the plurality of apps, wherein the plurality of alarm wake-up times corresponding to the plurality of apps are adjusted to be consistent, and waking up the plurality of apps belonging to the wake-up alarm group according to the adjusted alarm wake-up times corresponding to the plurality of apps belonging to the wake-up alarm group. | 2020-07-23 |
20200233730 | SYSTEMS AND METHODS FOR FILESYSTEM-BASED COMPUTER APPLICATION COMMUNICATION - A method of filesystem-based communication of computer applications is provided. The method is implemented using a filesystem communications interface (FCI) computer device coupled to a first computer and a second computer on which computer applications are installed. The method includes mounting file systems on the first computer and second computer by installing communications interface drivers, receiving a data transfer command that includes a data unit from the first computer, identifying that the data transfer command corresponds to a filesystem-based data transfer protocol, generating another data transfer command by converting the first data transfer command into a first network-based data transfer protocol, receiving the data unit from the first computer, and transmitting, using the second communications interface driver, the data unit to the second computer application using a third data transfer command. | 2020-07-23 |
20200233731 | ADAPTIVE IN-APPLICATION MESSAGING - A device implementing a system for in-application messaging includes a processor configured to receive, from a server, a message and a rule, the rule specifying a condition to be satisfied prior to displaying the message in association with an application, the condition corresponding to user interaction with respect to the application. The at least one processor is further configured to store the message in local memory, and determine that user activity performed with respect to the application satisfies the condition corresponding to the user interaction. The at least one processor is further configured to, in response to the determining, retrieve the message from the local memory, and display the message in association with the application. | 2020-07-23 |
20200233732 | HELPING A HARDWARE ACCELERATOR USING SOFTWARE - An accelerator helper monitors pending calls for a first accelerator, and when the accelerator is too busy, the accelerator helper sends a new call to the first accelerator to a software routine instead of to the first accelerator. The software routine processes the new call in parallel with the first accelerator processing a previous call. When the accelerator is not too busy, the accelerator helper sends to the first accelerator the new call to the first accelerator. The determination of when the accelerator is too busy can be whether a number of pending calls for the first accelerator exceeds a predetermined threshold. The accelerator helper speeds up execution of calls to the first accelerator by executing some calls to the accelerator in a software routine when the first accelerator has too many calls pending. | 2020-07-23 |
20200233733 | USING A CLIENT TO MANAGE REMOTE MACHINE LEARNING JOBS - Methods, apparatuses, and systems for a web services provider to interact with a client on remote job execution. For example, a web services provider may receive a job command, from an interactive programming environment of a client, applicable to a job for a machine learning algorithm on a web services provider system, process the job command using at least one of a training instance and an inference instance, and provide metrics and log data during the processing of the job to the interactive programming environment. | 2020-07-23 |
20200233734 | WAIT-AND-SEE CANDIDATE IDENTIFICATION APPARATUS, WAIT-AND-SEE CANDIDATE IDENTIFICATION METHOD, AND COMPUTER READABLE MEDIUM - A constituent similarity calculation unit determines, for each attribute, whether a comparison source element that is a constituent of a monitored system in which a subject fault occurred and a comparison target element that is another constituent of the monitored system match, wherein the subject fault is a fault requiring no handling among faults that have occurred in the monitored system; and calculates a configuration similarity for the comparison target element using the attributes determined to match and a contribution assigned to each attribute. A candidate identification unit identifies a wait-and-see candidate, that is, a constituent that is a candidate for requiring no handling when the subject fault has occurred, on the basis of the calculated configuration similarity. | 2020-07-23 |
20200233735 | PERFORMANCE ANOMALY DETECTION - Embodiments facilitating performance anomaly detection are described. A computer-implemented method comprises: detecting, by a device operatively coupled to one or more processing units, based on monitoring data of a plurality of performance metrics of a monitored device, at least one trend within the monitoring data of the respective performance metrics; removing, by the device, the at least one trend from the monitoring data of the respective performance metrics to generate modified data of the respective performance metrics; and detecting, by the device, a performance anomaly based on the modified data of the respective performance metrics and a behavior clustering model comprising at least one steady state. | 2020-07-23 |
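The detrend-then-detect pipeline in the anomaly-detection abstract above can be sketched in a few lines. This is an assumed simplification: a linear least-squares trend stands in for the unspecified trend model, and the "behavior clustering model comprising at least one steady state" is reduced to a single zero-centered cluster of a given radius:

```python
def remove_linear_trend(samples):
    """Least-squares linear detrend of a 1-D performance-metric series."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
            / sum((x - mean_x) ** 2 for x in xs)
    # Modified data: monitoring data with the fitted trend removed.
    return [y - slope * (x - mean_x) - mean_y for x, y in zip(xs, samples)]

def is_anomalous(residuals, steady_state_radius):
    """Flag an anomaly when any detrended sample leaves the steady-state
    cluster (modelled here as a ball of the given radius around zero)."""
    return any(abs(r) > steady_state_radius for r in residuals)
```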
20200233736 | ENABLING SYMPTOM VERIFICATION - Systems, products and methods for enabling symptom verification. Verifying a symptom may include eliminating repeated symptom definitions or eliminating symptoms having low accuracy. A computer system enables verification of a symptom including a rule for detecting a set of events related to a given problem. The computer system includes a symptom database which stores the symptom, a specimen database which stores a specimen including a set of events detected according to a rule of a certain symptom, and an analysis unit which analyzes the specimen stored in the specimen database using a new symptom in order to determine whether to add the new symptom to the symptom database. The present disclosure also includes a method and a computer program for enabling verification of a symptom including a rule for detecting a set of events related to a given problem. | 2020-07-23 |
20200233737 | SYSTEM AND METHOD OF RESOLUTION PREDICTION FOR MULTIFUNCTION PERIPHERAL FAILURES - A system and method for predicting device failures and generating proposed resolutions for such errors when they occur includes receiving device status data from each of a plurality of identified multifunction peripherals into a memory. Service history data for each of the multifunction peripherals is stored, the service history data including data corresponding to a plurality of data patterns associated with prior device failures, stored in association with resolutions implemented to address such failures. Patterns are detected in received device status data. Device failure is predicted for at least one identified multifunction peripheral in accordance with detected patterns and service history data. The predicted device failure is reported along with at least one proposed resolution to address the predicted failure. | 2020-07-23 |
20200233738 | MANAGEMENT OF A FAULT CONDITION IN A COMPUTING SYSTEM - Systems, apparatuses, and/or methods may manage a fault condition in a computing system. An apparatus may dynamically publish a message over a publisher-subscriber system and dynamically subscribe to a message over the publisher-subscriber system, wherein at least one message may be used to address a fault condition in the computing system. The apparatus may predict a fault condition in a high performance computing (HPC) system, communicate fault information to a user, monitor health of the HPC system, respond to the fault condition in the HPC system, recover from the fault condition in the HPC system, maintain a rule for a fault management component, and/or communicate the fault information over the publisher-subscriber system in real-time. Messages may also be aggregated to minimize fault information traffic. The publisher-subscriber system may facilitate dynamic and/or real-time coordinated, integrated (e.g., system-wide), and/or scalable fault management. | 2020-07-23 |
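The publish-subscribe fault bus with message aggregation described above could look roughly like the following. Everything here (the `FaultBus` name, topic strings, the batch shape) is a hypothetical sketch, not the patented design:

```python
from collections import defaultdict

class FaultBus:
    """Minimal publish-subscribe bus for fault messages, with aggregation
    to reduce fault-information traffic."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

    def publish_aggregated(self, topic, messages):
        """Deliver many fault messages as one batch to minimize traffic."""
        self.publish(topic, {"count": len(messages), "faults": list(messages)})
```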
20200233739 | MEMORY SYSTEMS COMPRISING NON-VOLATILE MEMORY DEVICES - A memory system includes a non-volatile memory device and controller circuitry. The non-volatile memory device includes an array of memory cells that includes memory blocks and pages. Each separate memory block includes a separate, respective set of one or more pages. The controller circuitry is configured to control an operation of the non-volatile memory device. The controller circuitry includes processing circuitry configured to perform a recovery operation for the non-volatile memory device in response to a determination that a specific event has occurred at the memory system during a program operation of the non-volatile memory device. The recovery operation includes determining status information associated with a first group including at least one page, determining a quantity of a set of pages included in a second group based on the status information, and programming dummy data for one or more pages of the set of pages included in the second group. | 2020-07-23 |
20200233740 | SYSTEM AND METHOD FOR THE DYNAMIC ANALYSIS OF EVENT DATA - Disclosed is a system and method for the analysis of event data that enables analysts to create user-specified datasets in a dynamic fashion. Performance, equipment and system safety, reliability, and significant event analysis utilizes failure or performance data that are composed in part of time-based records. These data identify the temporal occurrence of performance changes that may necessitate scheduled or unscheduled intervention such as maintenance events, trades, purchases, or other actions to take advantage of, mitigate, or compensate for the observed changes. The criteria used to prompt a failure or performance record can range from complete loss of function to subtle changes in performance parameters that are known to be precursors of more severe events. These specific criteria can be tailored to any particular application; the invention is relevant to this type of data taxonomy and can be applied across all areas in which event data may be collected. | 2020-07-23 |
20200233741 | CHANNEL MODULATION FOR A MEMORY DEVICE - Methods, systems, and devices for channel modulation for a memory device are described. A system may include a memory device and a host device coupled with the memory device. The system may be configured to communicate a first signal modulated using a first modulation scheme and communicate a second signal that is based on the first signal and that is modulated using a second modulation scheme. The first modulation scheme may include a first quantity of voltage levels that span a first range of voltages, and the second modulation scheme may include a second quantity of voltage levels that span a second range of voltages different than (e.g., smaller than) the first range of voltages. The first signal may include write data carried over a data channel, and the second signal may include error detection information based on the write data that is carried over an error detection channel. | 2020-07-23 |
20200233742 | TOUCH INSTRUCTION - An apparatus comprises data processing circuitry for processing data in one of a plurality of operating states, an instruction decoder for decoding instructions, and error checking circuitry for performing error checking operations. In response to a touch instruction being decoded by the instruction decoder, an error checking operation is performed on selected architectural state. The architectural state is architecturally inaccessible to the operating state. As a result of the touch instruction, the architectural state remains unchanged, at least when no error is detected. | 2020-07-23 |
20200233743 | ERROR CORRECTION CODE MEMORY DEVICE AND CODEWORD ACCESSING METHOD THEREOF - The codeword accessing method includes: receiving write data with M message bits; generating parity information with N-M bits based on an error correction algorithm and the M message bits, where N and M are positive integers; transforming the M message bits and the parity information into a scrambled codeword with N bits by a scrambling operation, where the scrambled codeword contains only a part of the M message bits; and writing the scrambled codeword into a memory device. | 2020-07-23 |
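The encode-scramble-write flow above can be illustrated end to end. This sketch makes strong simplifying assumptions: a single even-parity bit stands in for the unspecified error correction algorithm, and an 8-bit LFSR with arbitrary taps stands in for the scrambling operation:

```python
def lfsr_stream(seed: int, length: int):
    """Pseudo-random bit stream from a small 8-bit LFSR (illustrative taps)."""
    state = seed & 0xFF or 1
    bits = []
    for _ in range(length):
        bits.append(state & 1)
        fb = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 4)) & 1
        state = (state >> 1) | (fb << 7)
    return bits

def encode_scrambled(message_bits, seed=0xA5):
    """Append a parity bit (stand-in for real ECC parity of N-M bits), then
    scramble the whole N-bit codeword by XOR with the LFSR stream."""
    parity = sum(message_bits) % 2
    codeword = message_bits + [parity]
    stream = lfsr_stream(seed, len(codeword))
    return [b ^ s for b, s in zip(codeword, stream)]

def decode_scrambled(scrambled, seed=0xA5):
    """Descramble with the same stream, then verify parity before returning."""
    stream = lfsr_stream(seed, len(scrambled))
    codeword = [b ^ s for b, s in zip(scrambled, stream)]
    message, parity = codeword[:-1], codeword[-1]
    assert sum(message) % 2 == parity, "parity check failed"
    return message
```

After scrambling, the stored codeword no longer exposes the message bits directly, matching the abstract's point that it "contains only a part of the M message bits".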
20200233744 | System and Methods for Diagnosing and Repairing a Smart Mobile Device by Disabling Components - The present invention relates to computerized (“smart”) mobile electronic devices and more particularly, to a system and methods of diagnosing and repairing malfunctions in smart mobile electronic devices, including a diagnostic process that utilizes decisions based on Big Data that holds information of multiple devices and offers a “disable components” (i.e., turn-off components) solution in order to overcome the problem without flashing a firmware or doing a factory-reset. | 2020-07-23 |
20200233745 | FLASH MEMORY APPARATUS AND STORAGE MANAGEMENT METHOD FOR FLASH MEMORY - A flash memory method includes: classifying data into a plurality of groups of data; respectively executing error code encoding to generate first corresponding parity check codes, and storing the groups of data and the first corresponding parity check codes into a flash memory module as first blocks; reading out the groups of data from the first blocks; executing error correction and a de-randomize operation upon the read-out data to generate de-randomized data; executing a randomize operation upon the de-randomized data according to a set of seeds to generate randomized data; performing error code encoding upon the randomized data to generate a second corresponding parity check code; and storing the randomized data and the second corresponding parity check code into the flash memory module as a second block. A cell of a first block is used for storing data of a first bit number, which is different from a second bit number corresponding to a cell of a second block. | 2020-07-23 |
20200233746 | SYSTEMS, METHODS, AND APPARATUSES FOR STACKED MEMORY - Embodiments of the invention are generally directed to systems, methods, and apparatuses for hybrid memory. In one embodiment, a hybrid memory may include a package substrate. The hybrid memory may also include a hybrid memory buffer chip attached to a first side of the package substrate, with high speed input/output (HSIO) logic supporting an HSIO interface with a processor. The hybrid memory also includes packet processing logic to support a packet processing protocol on the HSIO interface. Additionally, the hybrid memory has one or more memory tiles that are vertically stacked on the hybrid memory buffer. | 2020-07-23 |
20200233747 | AUTOMATIC RESTARTING AND RECONFIGURATION OF PHYSICS-BASED MODELS IN EVENT OF MODEL FAILURE - A simulation model recovery method, system, and computer program product include: initiating a simulation model; during operation of the model, periodically writing a solution space of the model to a checkpoint restart file; during operation of the model, periodically writing diagnostic information on model progression to a log file; detecting a failure of the model; based on the log of the model, determining a time of the failure; based on the model outputs and restart files, determining a period of numerical instability preceding the failure; selecting a checkpoint of the model preceding the period of the numerical instability; based on the numerical instability and diagnostic information in log files, modifying a configuration of the model; and restarting the model based on the selected checkpoint and the modified configuration. | 2020-07-23 |
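The checkpoint-selection step in the recovery method above is the most mechanical part and can be sketched directly. The record layout (`{"time": ...}`) is an assumption for illustration:

```python
def select_restart_checkpoint(checkpoints, instability_start):
    """Pick the newest checkpoint written before the numerical instability
    began, so the restarted run does not replay the unstable period."""
    earlier = [c for c in checkpoints if c["time"] < instability_start]
    if not earlier:
        raise ValueError("no checkpoint precedes the instability")
    return max(earlier, key=lambda c: c["time"])
```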
20200233748 | MANAGEMENT METHOD, STRUCTURE MONITORING DEVICE, AND STRUCTURE MONITORING SYSTEM - Structure monitoring software includes measuring software, arithmetic software, and communication software. The measuring software collects an output signal from an inertial sensor, stores a result of collection into a first storage unit, and outputs the result of collection to the arithmetic software. The arithmetic software computes the result of collection received from the measuring software, stores a result of computation into a second storage unit, and outputs the result of computation to the communication software. The communication software stores the result of computation received from the arithmetic software into a third storage unit and transmits the result of computation to outside. Management software determines whether each of the measuring software, the arithmetic software, and the communication software is operating normally or not, and terminates and restarts the software that is not operating normally. | 2020-07-23 |
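The management software in the structure-monitoring abstract above is essentially a watchdog over three software stages. A minimal heartbeat-based sketch, with all names and the heartbeat mechanism assumed (the patent does not say how "operating normally" is determined):

```python
import time

class Module:
    """One software stage (measuring / arithmetic / communication)."""
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.restarts = 0

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

class ManagementSoftware:
    """Terminates and restarts any module whose heartbeat has gone stale."""
    def __init__(self, modules, timeout: float):
        self.modules = modules
        self.timeout = timeout

    def check(self, now=None):
        now = time.monotonic() if now is None else now
        restarted = []
        for m in self.modules:
            if now - m.last_heartbeat > self.timeout:  # not operating normally
                m.restarts += 1                        # terminate + restart
                m.last_heartbeat = now
                restarted.append(m.name)
        return restarted
```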
20200233749 | ERROR HANDLING TOOL - An apparatus includes a memory and a hardware processor. The memory stores a plurality of reprocessing rules. The processor receives a request message from a user device. The processor communicates a second request to a first resource and a third request to a second resource. The processor determines that a response to the second request was not received within a first timeout. The processor increases the first timeout. The processor communicates the second request to the first resource after increasing the first timeout, receives a response to the second request, and determines that a response to the third request was not received. The processor increases a reconnect parameter for the second resource. The processor communicates the third request to the second resource after increasing the reconnect parameter, receives a response to the third request, generates a response message to the request message, and communicates the response message. | 2020-07-23 |
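The grow-the-timeout-and-retry behavior in the error handling tool above is a familiar pattern and can be sketched as follows. The multiplicative growth factor is an assumption; the abstract only says the timeout (or reconnect parameter) is "increased" per a reprocessing rule:

```python
class ResourceClient:
    """Retries a request against a resource, enlarging the timeout each
    time no response arrives, per a simple reprocessing rule."""
    def __init__(self, send, timeout: float, backoff: float = 2.0):
        self.send = send          # callable(request, timeout) -> response or None
        self.timeout = timeout
        self.backoff = backoff    # hypothetical growth factor

    def request(self, req, max_attempts: int = 5):
        for _ in range(max_attempts):
            response = self.send(req, self.timeout)
            if response is not None:
                return response
            self.timeout *= self.backoff  # reprocessing rule: enlarge timeout
        raise TimeoutError(f"no response after {max_attempts} attempts")
```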
20200233750 | SYSTEM AND METHOD TO IMPLEMENT AUTOMATED APPLICATION CONSISTENT VIRTUAL MACHINE IMAGE BACKUP - A method for performing backup operations includes selecting an application executing on a virtual machine (VM) to quiesce, generating, using a pre-snapshot template for the application, a pre-snapshot script for the application, generating a snapshot of the virtual machine after the pre-snapshot script has executed on the VM, and initiating a backup operation for the VM using the snapshot. | 2020-07-23 |
20200233751 | INDEXING A RELATIONSHIP STRUCTURE OF A FILESYSTEM - One or more storage locations of file inodes in a data source to be backed up are identified. Filesystem metadata information is extracted from the one or more identified storage locations. At least one item of the extracted filesystem metadata information includes a reference to a parent inode. The extracted filesystem metadata information is stored in a data structure. The contents of the data structure are analyzed to index a relationship structure of file system contents of the data source. | 2020-07-23 |
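The parent-inode references described in the last abstract are what let a backup system rebuild full paths without walking the live filesystem. A minimal Python sketch, with the record layout (`inode`, `parent`, `name` keys) assumed for illustration:

```python
def index_paths(inodes):
    """Rebuild full paths from (inode, parent inode, name) metadata records
    extracted from a data source being backed up."""
    by_id = {rec["inode"]: rec for rec in inodes}

    def path_of(inode_id):
        rec = by_id[inode_id]
        if rec["parent"] is None:          # root inode has no parent reference
            return "/"
        parent_path = path_of(rec["parent"])
        return parent_path.rstrip("/") + "/" + rec["name"]

    # The relationship structure of the filesystem, indexed by inode number.
    return {rec["inode"]: path_of(rec["inode"]) for rec in inodes}
```

A production indexer would memoize `path_of` and guard against cycles; this sketch only shows how parent references chain into paths.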