34th week of 2022 patent application highlights part 48 |
Patent application number | Title and abstract | Published |
20220269495 | APPLICATION DEPLOYMENT IN A COMPUTING ENVIRONMENT - In an approach, a processor, in response to detecting a new custom resource (CR) file: requests the computing environment to deploy a plurality of function deployment components in the computing environment, where: the CR file indicates information of a plurality of functions of an application; the plurality of function deployment components request the computing environment to deploy a plurality of function components in the computing environment; and the plurality of function components execute the plurality of functions of the application; determines that each of the plurality of function components has been deployed in the computing environment; and in response to determining that each of the plurality of function components has been deployed in the computing environment, requests the computing environment to delete each of the plurality of deployed function deployment components. | 2022-08-25 |
20220269496 | REMOTE SYSTEM MONITORING AND FIRMWARE-OVER-THE-AIR UPGRADE OF ELECTROSURGICAL UNIT - Systems and methods for monitoring an electrosurgical unit (ESU), analyzing ESU system data, predicting future ESU maintenance, and updating the ESU using firmware over-the-air (FOTA). | 2022-08-25 |
20220269497 | SYSTEM AND METHOD USING NATURAL LANGUAGE PROCESSING TO SYNTHESIZE AND BUILD INFRASTRUCTURE PLATFORMS - Embodiments of the invention are directed to a system, method, or computer program product structured for synthesizing and building infrastructure platforms. In some embodiments, the system is structured for performing a natural language synthesis of a proposed upgrade to existing infrastructure platform(s), where the natural language synthesis includes analyzing, using natural language processing, code of the proposed upgrade; generating a trust score indicating a predicted likelihood of success from results of the natural language synthesis; in response to the trust score being above a threshold, identifying, using natural language processing, inactive code in the platform(s); generating a build automation script for deploying the proposed upgrade to create upgraded infrastructure platform(s) that exclude the inactive code; executing the build automation script; capturing data from the build automation script execution; and using the result of the build automation script execution and the captured data to refine the natural language synthesis. | 2022-08-25 |
20220269498 | SYSTEM AND METHOD USING NATURAL LANGUAGE PROCESSING TO SYNTHESIZE AND BUILD INFRASTRUCTURE PLATFORMS - Embodiments of the invention are directed to a system, method, or computer program product structured for synthesizing and building infrastructure platforms. In some embodiments, the system is structured for performing a natural language synthesis of a proposed upgrade to existing infrastructure platform(s), where the natural language synthesis includes analyzing, using natural language processing, code of the proposed upgrade; generating a trust score indicating a predicted likelihood of success from results of the natural language synthesis; in response to the trust score being above a threshold, identifying, using natural language processing, inactive code in the platform(s); generating a build automation script for deploying the proposed upgrade to create upgraded infrastructure platform(s) that exclude the inactive code; executing the build automation script; capturing data from the build automation script execution; and using the result of the build automation script execution and the captured data to refine the natural language synthesis. | 2022-08-25 |
20220269499 | PROVIDING TAILORED SOFTWARE UPDATE RELEVANCE INFORMATION FOR DEPLOYED SOFTWARE - A recommendation system can be configured to provide tailored software update relevance information for deployed software. The recommendation engine can obtain running state information for a current version of software running on a device, as well as build data for each of the current version of the software and a new version of the software. The recommendation engine can obtain software version difference information based on the build data and determine, based on at least the software version difference information and the running state information, a number of functions in the current version of software that are directly impacted by the new version. The recommendation engine can cause relevance information derived from this determination to be displayed on a computing device, and/or the recommendation engine can automatically cause an update to the new version of the software to be applied or rejected based on the determination. | 2022-08-25 |
20220269500 | Device Decision to Download Software Update - Various embodiments that pertain to device software are described. A decision can be made by a device on whether the device should download an update for device software, such as a software patch. When the device decides that it should download the update, the device can download the appropriate update. In one example, the update can be downloaded by way of a patch portal that communicates with a patch database. The device can request the patch for the software and in response the device can be provided access to the patch by way of the patch portal. | 2022-08-25 |
20220269501 | FPGA DYNAMIC RECONFIGURATION METHOD, APPARATUS, DEVICE AND READABLE STORAGE MEDIUM - A field programmable gate array (FPGA) dynamic reconfiguration method, apparatus, device and readable storage medium are provided. The technical solution includes: performing board support package (BSP) flat compilation on a target project to obtain a static region; performing BSP generation and reconfiguration information compilation on the target project to obtain static information; revising the static region using the static information to obtain reconfiguration compilation version projects that meet timing and correspond to different reconfiguration compilation parameters, respectively; importing a preset heterogeneous acceleration kernel to the reconfiguration compilation version projects and then performing static compilation to obtain clock frequencies corresponding to the reconfiguration compilation version projects, respectively; and determining a target reconfiguration compilation version project with a clock frequency meeting performance requirements using the clock frequencies, and obtaining a dynamic reconfiguration compilation version project file. The dynamic reconfiguration compilation version project file obtained in this technical solution ensures that the static region can meet the timing, and also enables an operating clock of the heterogeneous acceleration kernel to meet the performance requirements for heterogeneous acceleration. | 2022-08-25 |
20220269502 | IMPROVEMENT PROPOSING DEVICE AND IMPROVEMENT PROPOSING METHOD - Even when a single refactoring operation cannot establish a target software structure, an appropriate combination of refactoring operations establishes the target software structure. An improvement proposing device includes: a structure comparator to output, as an improvement object, a difference between a first software structure and a second software structure different in software structure from the first software structure; and an improvement plan examining unit to examine an improvement plan for each improvement portion in the improvement object, the improvement plan being a method for bringing the first software structure closer to the second software structure. | 2022-08-25 |
20220269503 | METHODS AND SYSTEMS FOR AUTO CREATION OF SOFTWARE COMPONENT REFERENCE GUIDE FROM MULTIPLE INFORMATION SOURCES - Systems and methods for automatically creating a software component reference guide from multiple information sources are disclosed. In one aspect, the method includes receiving, from a user, a request for reference guides for a software component and to view corresponding results, identifying the software component, identifying different sources of information for the software component, generating introductory information of the software component, generating technology details of the software component, generating frequently asked questions (FAQs) and their related solutions associated with the software component, training a catalog of natural language terms related to the software component, and providing, based on the trained catalog, the introductory information, technology details, and FAQs to the user. | 2022-08-25 |
20220269504 | CLIENT-SIDE ENRICHMENT AND TRANSFORMATION VIA DYNAMIC LOGIC FOR ANALYTICS - Described are systems and methods for client-side enrichment and transformation via dynamic logic for analytics across various platforms for improved performance, features, and uses. Analytics data collected in client applications is transformed and enriched before being sent to the downstream pipeline using native code and logic bundled into the core application code. The additional logic specific to manipulation of analytics may be unbundled from client-side application code and still be executed on-device to achieve the same result. The logic may be written in a single language, such as JavaScript, and run across all clients including web browsers and mobile operating systems. | 2022-08-25 |
20220269505 | CLIENT-SIDE ENRICHMENT AND TRANSFORMATION VIA DYNAMIC LOGIC FOR ANALYTICS - Described are systems and methods for client-side enrichment and transformation via dynamic logic for analytics across various platforms for improved performance, features, and uses. Analytics data collected in client applications is transformed and enriched before being sent to the downstream pipeline using native code and logic bundled into the core application code. The additional logic specific to manipulation of analytics may be unbundled from client-side application code and still be executed on-device to achieve the same result. The logic may be written in a single language, such as JavaScript, and run across all clients including web browsers and mobile operating systems. | 2022-08-25 |
20220269506 | Multiplier-Accumulator Processing Pipelines and Processing Component, and Methods of Operating Same - An integrated circuit including a plurality of processing components to process image data of a plurality of image frames, wherein each image frame includes a plurality of stages. Each processing component includes a plurality of execution pipelines, wherein each pipeline includes a plurality of multiplier-accumulator circuits configurable to perform multiply and accumulate operations using image data and filter weights, wherein: (i) a first processing component is configurable to process all of the data associated with a first plurality of stages of each image frame, and (ii) a second processing component of the plurality of processing components is configurable to process all of the data associated with a second plurality of stages of each image frame. The first and second processing components process data associated with the first and second plurality of stages, respectively, of a first image frame concurrently. | 2022-08-25 |
20220269507 | SYSTEMS AND METHODS FOR EMULATING A PROCESSOR - In an example, a machine learning (ML) processor emulator can be configured to emulate a legacy processor for emulating a legacy program. The emulator environment can include virtual registers storing operand data on which an operation is to be performed based on a respective instruction from instruction data representative of the legacy program. The ML processor emulator includes a processor ensemble engine that includes ML modules, each generated by a different ML algorithm, and a voting engine. Each ML module can be configured to emulate an instruction set of a processor and process the operand data according to the operation of the respective instruction to produce a set of candidate result data. The voting engine can be configured to identify a subset of candidate result data from the set of candidate result data and provide output data with content similar to the subset of candidate result data. | 2022-08-25 |
20220269508 | METHODS AND SYSTEMS FOR NESTED STREAM PREFETCHING FOR GENERAL PURPOSE CENTRAL PROCESSING UNITS - A method and hardware system to remove the overhead caused by having stream handling instructions in nested loops. Where code contains inner loops, nested in outer loops, a compiler pass identifies qualified nested streams and generates ISA specific instructions for transferring stream information linking an inner loop stream with an outer loop stream, to hardware components of a co-designed prefetcher. The hardware components include a frontend able to decode and execute instructions for a stream linking information transfer mechanism, a stream engine unit with a streams configuration table (SCT) having a field for allowing a subordinate stream to stay pending for values from its master stream, and a stream prefetch manager with buffers for storing values of current elements of a master stream, and with a nested streams control unit for reconfiguring and iterating the streams. | 2022-08-25 |
20220269509 | GENERATING AND EXECUTING A CONTROL FLOW - Examples of the present disclosure provide apparatuses and methods related to generating and executing a control flow. An example apparatus can include a first device configured to generate control flow instructions, and a second device including an array of memory cells, an execution unit to execute the control flow instructions, and a controller configured to control an execution of the control flow instructions on data stored in the array. | 2022-08-25 |
20220269510 | SYSTEM AND METHOD FOR PROVIDING PERSISTENT COMPANION SOFTWARE IN AN INFORMATION HANDLING SYSTEM - An information handling system includes a device, a driver associated with the device, and a BIOS. The device provides first information associated with a first function and second information associated with a companion application. The BIOS receives the first and second information. The BIOS includes a procedure to implement the first function, but lacks a procedure to implement the second function. The BIOS sends the second information to the driver. If the driver determines that the companion application is instantiated on the information handling system, the driver directs the second information to the companion application; if the driver determines that the companion application is not instantiated on the information handling system, the driver accesses a network to install the companion application and directs the second information to the companion application. | 2022-08-25 |
20220269511 | OPERATING SYSTEM PARTITIONING OF DIFFERENT USERS FOR SINGLE-USER APPLICATIONS - Examples described herein generally relate to a computer device including a memory, and at least one processor configured to partition application files for multiple users of the computer device. The computer device creates a per-user location for a first user when installing an application package to an installation location. The application package includes a plurality of files for an application that are read-only for the first user. The computer device projects, via one or more filter drivers, installed package files from the installation location into the per-user location. The computer device receives a modification to the plurality of files for the application projected into the per-user location. The computer device writes at least one modified file into the per-user location. The computer device loads, during execution of the application by the first user, the at least one modified file from the per-user location for the first user. | 2022-08-25 |
20220269512 | SYNCING SETTINGS ACROSS INCOMPATIBLE OPERATING SYSTEMS - Example aspects include techniques for syncing configuration settings between incompatible operating systems. These techniques may include determining, via a first application, system-wide configuration information associated with a host system configuration parameter and a first configuration value of the host operating system, and transmitting a synchronization notification to a second application executing on a guest operating system, wherein the synchronization notification corresponds to the system-wide configuration information. In addition, the techniques may include configuring a guest system configuration parameter to a second configuration value based on the synchronization notification, and executing a third application on the guest operating system based on the second configuration value. | 2022-08-25 |
20220269513 | Serial NAND Flash With XIP Capability - Based on power on of an electronic device, a location of first data in a NAND flash memory of the electronic device is determined. The first data is transmitted to a shadow RAM of the electronic device, and the first data is output from the shadow RAM to a host device of the electronic device through a serial peripheral interface (SPI) when the host device accesses the location of the first data in the NAND flash memory. | 2022-08-25 |
20220269514 | MODIFYING READABLE AND FOCUSABLE ELEMENTS ON A PAGE DURING EXECUTION OF AUTOMATED SCRIPTS - A device may initiate an automated script to perform one or more interactions with a browser application and identify a first element in a page rendered by the browser application that satisfies one or more accessibility criteria, wherein the first element may include text that is readable by a screen reader application and/or an attribute that causes the first element to be navigable using a keyboard. The device may modify the first element to be inaccessible to the screen reader application and the keyboard and insert, into the page, a second element including text that is readable by the screen reader application to describe the one or more interactions that the automated script is performing. The device may restore the page to an original state based on determining that the automated script has finished executing. | 2022-08-25 |
20220269515 | Dynamic Interface Layout Method and Device - A dynamic interface layout method includes that a width of a screen of an electronic device is divided into a plurality of columns. The electronic device displays a first interface on the screen. After detecting an interface refresh signal, the electronic device obtains a first column quantity corresponding to a width of a second interface to be displayed after refreshing. The first column quantity is a quantity of columns included in the width of the second interface. The electronic device determines a second column quantity according to a layout rule corresponding to a first element on the second interface. The second column quantity is a quantity of columns included in a width of the first element. The electronic device displays the second interface on the screen. | 2022-08-25 |
20220269516 | TRANSITIONING APPLICATION WINDOWS BETWEEN LOCAL AND REMOTE DESKTOPS - The disclosure provides for transitioning application windows between local and remote desktops. Example implementations include opening a first file with a first application to generate a first application window on a first desktop window on a user display; based at least on a trigger event for transitioning the first application window from the first desktop window to a second desktop window, determining whether a second application is available for the second desktop window to produce a version of the first application window; and based at least on the second application being available: transferring the first file across a network to become a second file; and opening the second file with the second application to generate a second application window on the second desktop window, the second application window replacing the first application window on the user display. The transition may go either direction. | 2022-08-25 |
20220269517 | ADAPTABLE WARNINGS AND FEEDBACK - Co-browsing sessions between a customer and an agent are conducted in real-time and provide an opportunity for the customer to receive assistance or guidance from the agent while performing operations online, such as completing a web form. Systems monitor the inputs provided by the customer. If an error is detected, such as entering confidential information in a non-confidential field, the customer is warned and the agent may be blocked from seeing the data. However, this may be due to a mistake. As provided herein, if a customer tells the agent that the entry is correct, a server may process the comment to the agent and automatically remove the warning, allow the customer to continue typing in the field, allow the agent to see the content provided, and/or automatically update the logic to reduce the chances of erroneous error detection in subsequent encounters with similar data. | 2022-08-25 |
20220269518 | APPLICATION DEPLOYMENT USING REDUCED OVERHEAD BYTECODE - A system includes a memory, a processor in communication with the memory, and a recorder. The recorder is configured to obtain a proxy for each respective real object. Each respective real object is related to a respective service. The recorder is also configured to record a sequence of each invocation on each respective proxy and generate an intermediate representation of an application that is configured to invoke the sequence of each invocation on each real object associated with each respective proxy. | 2022-08-25 |
20220269519 | SYSTEM AND METHOD FOR THE ORCHESTRATION OF PLUGINS ON THE COMMAND LINE - The present disclosure describes systems and methods for a command line interface with artificial intelligence integration. Embodiments of the disclosure provide a command line orchestration component (e.g., including a reinforcement learning model) that provides a generic command line interface environment (e.g., that researchers can interface using a simple sense-act application programming interface (API)). For instance, a command line orchestration component receives commands (e.g., text input) from a user via a command line interface, and the command line orchestration component can identify command line plugins and candidate responses from the command line plugins. Further, the command line orchestration component may select a response from the candidate responses based on user preferences, user characteristics, etc., thus providing a generic command line interface environment for various users (e.g., including artificial intelligence developers and researchers). | 2022-08-25 |
20220269520 | FACTOR IDENTIFICATION METHOD AND INFORMATION PROCESSING DEVICE - A non-transitory computer-readable recording medium stores a program for causing a computer to execute a factor identification process, the factor identification process includes detecting an occurrence time point when a system call of a host operating system (OS) has occurred, acquiring switching operation information that enables an environment switching time point to be identified, the environment switching time point being a time point when an environmental process has switched, the environmental process implementing a software execution environment which is in operation on the host OS and is isolated from the host OS, identifying, based on the switching operation information, a first environmental process which is in operation on the host OS at the occurrence time point, and outputting the first environmental process in association with the system call. | 2022-08-25 |
20220269521 | MEMORY PAGE COPYING FOR VIRTUAL MACHINE MIGRATION - Systems and methods of the disclosure include: identifying, by a destination host computer system, a first memory page residing in a memory of the destination host computer system; transmitting, by the destination host computer system, at least a part of the first memory page to a source host computer system; receiving, by the destination host computer system, a confirmation from the source host computer system that the first memory page matches a second memory page associated with a virtual machine to be migrated from the source host computer system to the destination host computer system; and associating, by the destination host computer system, the first memory page with the virtual machine. | 2022-08-25 |
20220269522 | MEMORY OVER-COMMIT SUPPORT FOR LIVE MIGRATION OF VIRTUAL MACHINES - Systems and methods for providing memory over-commit support for live migration of virtual machines (VMs). In one implementation, a processing device of a source host computer system may identify a host page cache associated with a VM undergoing live migration from the source to a destination host computer system. The host page cache comprises a first plurality of memory pages associated with the VM. The processing device may transmit, from the source to the destination, at least a part of the host page cache. The processing device may discard the part of the host page cache. The processing device may read into the host page cache one or more memory pages of a second plurality of memory pages associated with the VM. The processing device may transmit, from the source to the destination, the one or more memory pages stored by the host page cache. | 2022-08-25 |
20220269523 | SYSTEM AND METHOD OF CODE EXECUTION AT A VIRTUAL MACHINE ALLOWING FOR EXTENDIBILITY AND MONITORING OF CUSTOMIZED APPLICATIONS AND SERVICES - A processing system allows external systems to customize and extend services without increasing system intricacy. The processing platform maintains cloud containers that support virtual machines for external systems. An external system provides code for execution on a virtual machine that is supported by a cloud container. Cloud containers provide a boundary for executing code such that the processing platform may limit types of code an external system can run at a cloud container. The external system code can provide new services or may build upon existing public services, and external systems may designate their services as being available to other external systems by publishing the access information in a global application programming interface (API) maintained by the processing platform. Since the external systems submit instructions for execution within their assigned cloud containers, the services and applications are developed without affecting the underlying functionality of the processing platform. | 2022-08-25 |
20220269524 | METHOD AND APPARATUS FOR SECURE DATA ACCESS DURING MACHINE LEARNING TRAINING - At least a method and an apparatus are presented for secure access of shared data for machine learning training. In one embodiment, a virtual machine is created based on a virtual machine environment type input, wherein the virtual machine permits access to one or more training data sets for training a machine learning system if the virtual machine environment type input indicates access to data enabled mode, and wherein the virtual machine prohibits the access to the one or more training data sets for training the machine learning system if the virtual machine environment type input indicates access to data disabled mode. | 2022-08-25 |
20220269525 | METHOD FOR OPERATING A MICROCONTROLLER - A method for operating a microcontroller. The microcontroller includes a plurality of resources, a plurality of virtual machines being executed in the microcontroller, a coordination unit being superordinate to the plurality of virtual machines. Access information concerning accesses of the plurality of virtual machines to the plurality of resources is stored in the coordination unit. In the event that one of the virtual machines requests a reset of one of the resources, the coordination unit checks on the basis of the access information, which of the virtual machines are accessing this resource. The coordination unit determines on the basis of this check, whether the resource will be reset or whether a substitute measure will be taken. | 2022-08-25 |
20220269526 | ENABLING RESTORATION OF QUBITS FOLLOWING QUANTUM PROCESS TERMINATION - Enabling restoration of qubits following quantum process termination is disclosed. In one example, a quantum restore service, executing on a processor device of a quantum computing device, detects an exit request corresponding to a quantum process associated with one or more qubits. The quantum restore service obtains metadata, including an identification of the quantum process (such as a quantum process identifier (ID), a quantum process name, and/or a Quantum Assembly Language (QASM) file descriptor) and an identification of each qubit. The quantum restore service then maintains the qubits in association with the identification of the quantum process based on the metadata after termination of the quantum process. In some examples, the quantum restore service may allocate a logical partition, associate the logical partition with the quantum process, and then associate the qubits with the logical partition. In this manner, the qubits may be preserved after the quantum process has terminated. | 2022-08-25 |
20220269527 | Task Repacking - A method of repacking tasks in a graphics pipeline includes, in response to a task reaching a checkpoint in a program, determining if the task is eligible for repacking. If the task is eligible for repacking, the task is de-scheduled and it is determined whether repacking conditions are satisfied. In the event that the repacking conditions are satisfied, the method looks for a pair of compatible and non-conflicting tasks at the checkpoint. If such a pair of tasks are found, one or more instances are transferred between the pair of tasks. | 2022-08-25 |
20220269528 | SYSTEM, METHOD AND APPARATUS FOR INTELLIGENT HETEROGENEOUS COMPUTATION - The disclosed systems and methods for intelligent heterogeneous computation are directed to receiving monitoring data and a set of training data, wherein the monitoring data includes an occupancy rate of a preprocessed data queue and a utilization factor of accelerating devices, generating a resource computation job list in accordance with the monitoring data, forwarding jobs, in the resource computation job list to be executed on a central processing unit (CPU), to a CPU worker queue, forwarding control messages to the CPU worker queue, wherein the control messages are associated with jobs in the resource computation job list to be executed on the accelerating devices, and executing, by the accelerating devices, jobs in the resource computation job list to be executed on the accelerating devices. | 2022-08-25 |
20220269529 | TASK COMPLETION THROUGH INTER-APPLICATION COMMUNICATION - Among other things, one or more techniques and/or systems for facilitating task completion through inter-application communication and/or for registering a target application for contextually aware task execution are provided. That is, a current application may display content comprising an entity (e.g., a mapping application may display a restaurant entity). One or more actions capable of being performed on the entity may be exposed (e.g., a reserve table action). Responsive to selection of an action, one or more target applications capable of performing the action on the entity may be presented. Responsive to selection of a target application, contextual information for the entity and/or the action may be passed to the target application so that the target application may be launched in a contextually relevant state to facilitate completion of a task. For example, a dining application may be launched to a table reservation form for the restaurant entity. | 2022-08-25 |
20220269530 | INTELLIGENT CONTAINERIZATION PLATFORM FOR OPTIMIZING CLUSTER UTILIZATION - Embodiments of the present invention provide a system for intelligently optimizing the utilization of clusters. The system is configured to continuously gather real-time hardware telemetric data associated with one or more entity systems via a hardware telemetric device, continuously convert the real-time hardware telemetric data into a first color coded representation, receive one or more tasks associated with one or more entity applications, queue the one or more tasks associated with the one or more entity applications, determine hardware requirements associated with the one or more tasks, determine one or more attributes associated with the one or more tasks, convert the hardware requirements and the one or more attributes of the one or more tasks into a second color coded representation, and allocate the one or more tasks to the one or more entity systems based on the first color coded representation and the second color coded representation. | 2022-08-25 |
20220269531 | Optimization of Workload Scheduling in a Distributed Shared Resource Environment - An artificial intelligence (AI) platform to support optimization of workload scheduling in a distributed computing environment. Unstructured data corresponding to one or more application artifacts related to a workload in the distributed computing environment is leveraged. Natural language processing (NLP) is applied to the unstructured data to identify one or more host requirements corresponding to the application artifacts. One or more hosts in the computing environment compatible with the identified host requirements are selectively identified and compatibility between the application artifacts and the identified hosts is assessed. The workload is selectively scheduled responsive to the selective host identification based on the assessed compatibility. The scheduled workload is selectively executed on at least one of the selectively identified hosts responsive to the assessed workload compatibility. | 2022-08-25 |
20220269532 | INTEGRATED MULTI-PROVIDER COMPUTE PLATFORM - The present invention includes embodiments of systems and methods for addressing the interdependencies that result from integrating the computing resources of multiple hardware and software providers. The integrated, multi-provider cloud-based platform of the present invention employs abstraction layers for communicating with and integrating the resources of multiple back-end hardware providers, multiple software providers and multiple license servers. These abstraction layers and associated functionality free users not only from having to implement and configure provider-specific protocols, but also from having to address interdependencies among selected hardware, software and license servers on a job-level basis or at other levels of granularity. | 2022-08-25 |
20220269533 | STORAGE MEDIUM, JOB PREDICTION SYSTEM, AND JOB PREDICTION METHOD - A storage medium storing a job prediction program that causes a computer to execute a process including: extracting a first job that has a similar topic distribution to a prediction target job from a plurality of past jobs based on a first topic model trained with information regarding a plurality of jobs; extracting a second job that has a similar topic distribution to the prediction target job from the plurality of past jobs based on a second topic model trained with information regarding a job of which the data input/output amount is equal to or more than a predetermined value, the job being a part of the plurality of jobs of which information is used to train the first topic model; and outputting the data input/output amount of the first job or the second job. | 2022-08-25 |
20220269534 | Time-Multiplexed use of Reconfigurable Hardware - A method for executing applications in a system comprising general hardware and reconfigurable hardware includes accessing a first execution file comprising metadata storing a first priority indicator associated with a first application, and a second execution file comprising metadata storing a second priority indicator associated with a second application. In an example, use of the reconfigurable hardware is interleaved between the first application and the second application, and the interleaving is scheduled to take into account (i) workload of the reconfigurable hardware and (ii) the first priority indicator and the second priority indicator associated with the first application and the second application, respectively. In an example, when the reconfigurable hardware is used by one of the first and second applications, the general hardware is used by another of the first and second applications. | 2022-08-25 |
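As a rough illustration of the interleaving idea in this abstract, the sketch below assigns time slots on a shared reconfigurable device to two applications in proportion to their priority indicators, skipping slots where the device's own workload keeps it busy. The weighted-credit rule and all names are illustrative assumptions, not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    priority: int  # higher value = larger share of reconfigurable-hardware time

def interleave(app_a, app_b, slots, workload_busy=None):
    """Assign each time slot to one app, weighted by priority.

    workload_busy: optional set of slot indices where the reconfigurable
    hardware is already occupied; those slots are left to general hardware
    (represented as None here)."""
    workload_busy = workload_busy or set()
    schedule = []
    credit_a = credit_b = 0
    for slot in range(slots):
        if slot in workload_busy:
            schedule.append(None)
            continue
        # accumulate credit proportional to priority, grant the slot to the leader
        credit_a += app_a.priority
        credit_b += app_b.priority
        if credit_a >= credit_b:
            schedule.append(app_a.name)
            credit_a -= app_a.priority + app_b.priority
        else:
            schedule.append(app_b.name)
            credit_b -= app_a.priority + app_b.priority
    return schedule
```

With priorities 2:1 over six slots, app A receives four slots and app B two, matching the intended ratio.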
20220269535 | ENFORCING CENTRAL PROCESSING UNIT QUALITY OF SERVICE GUARANTEES WHEN SERVICING ACCELERATOR REQUESTS - Systems, apparatuses, and methods for enforcing processor quality of service guarantees when servicing system service requests (SSRs) are disclosed. A system includes a first processor executing an operating system and a second processor executing an application which generates SSRs for the first processor to service. The first processor monitors the number of cycles spent servicing SSRs over a previous time interval, and if this number of cycles is above a threshold, the first processor starts delaying the servicing of subsequent SSRs. In one implementation, if the previous delay was non-zero, the first processor increases the delay used in the servicing of subsequent SSRs. If the number of cycles is less than or equal to the threshold, then the first processor services SSRs without delay. As the delay is increased, the second processor begins to stall and its SSR generation rate falls, reducing the load on the first processor. | 2022-08-25 |
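The delay-escalation loop described here is easy to sketch: at the end of each interval, compare cycles spent servicing SSRs against a threshold and either escalate or clear the delay. A minimal Python model, with all names and the linear escalation step as illustrative assumptions:

```python
class SsrThrottle:
    """Track cycles spent servicing SSRs per interval; delay when over budget."""

    def __init__(self, cycle_threshold, delay_step):
        self.cycle_threshold = cycle_threshold
        self.delay_step = delay_step
        self.delay = 0  # delay applied to subsequent SSR servicing

    def end_of_interval(self, cycles_spent):
        if cycles_spent > self.cycle_threshold:
            # over budget: start delaying, or escalate a non-zero delay
            self.delay = self.delay + self.delay_step if self.delay else self.delay_step
        else:
            # within budget: service SSRs without delay again
            self.delay = 0
        return self.delay
```

As the returned delay grows, the requesting processor stalls and its SSR rate falls, which is the feedback loop the abstract relies on.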
20220269536 | MULTI-QUEUE MULTI-CLUSTER TASK SCHEDULING METHOD AND SYSTEM - The present disclosure provides a multi-queue multi-cluster task scheduling method and system, and relates to the technical field of cloud computing. The method includes: constructing a training data set; training and optimizing a plurality of parallel deep neural networks (DNN) by using the training data set to obtain a plurality of trained and optimized parallel DNNs; setting a reward function, where the reward function minimizes the sum of a task delay and energy consumption by adjusting a reward value proportion of the task delay and a reward value proportion of the energy consumption; inputting a to-be-scheduled state space into the plurality of trained and optimized parallel DNNs to obtain a plurality of to-be-scheduled action decisions; determining an optimal action decision among the plurality of to-be-scheduled action decisions based on the reward function for output; and scheduling the plurality of task attribute groups to a plurality of clusters based on the optimal action decision. In the present disclosure, an optimal scheduling strategy can be generated by using task delay and energy consumption minimization as an optimization objective of a cloud system. | 2022-08-25 |
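The reward function in this abstract trades off task delay against energy consumption via adjustable proportions. A minimal sketch of selecting the best action decision under such a weighted reward; the specific weights and tuple layout are illustrative assumptions:

```python
def reward(delay, energy, w_delay=0.5, w_energy=0.5):
    """Higher reward for lower weighted sum of task delay and energy."""
    return -(w_delay * delay + w_energy * energy)

def best_decision(decisions):
    """decisions: list of (name, predicted_delay, predicted_energy) tuples,
    e.g. one candidate per parallel DNN. Returns the name maximizing reward."""
    return max(decisions, key=lambda d: reward(d[1], d[2]))[0]
```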
20220269537 | ARTIFICIAL INTELLIGENCE (AI) WORKLOAD SHARING SYSTEM AND METHOD OF USING THE SAME - According to one illustrative, non-limiting embodiment, a first IHS may include computer-executable instructions for performing at least one artificial intelligence (AI) service to optimize a performance of the first IHS. In response to determining that an AI workload of the AI service exceeds a specified threshold, the first IHS selects a second IHS to perform at least a portion of the AI workload, and transmits the at least one portion of the AI workload to the second IHS. When a processed AI workload is received from the second IHS, the first IHS applies one or more profile recommendations included in the processed AI workload. | 2022-08-25 |
20220269538 | HIERARCHICAL WORKLOAD ALLOCATION IN A STORAGE SYSTEM - A method for hierarchical workload allocation in a storage system, the method may include determining to reallocate a compute workload of a current compute core of the storage system; wherein the current compute core is responsible for executing a workload allocation unit that comprises one or more first type shards; and reallocating the compute workload by (a) maintaining the responsibility of the current compute core for executing the workload allocation unit, and (b) reallocating at least one first type shard of the one or more first type shards to a new workload allocation unit that is allocated to a new compute core of new compute cores. | 2022-08-25 |
20220269539 | REDISTRIBUTING UPDATE RESOURCES DURING UPDATE CAMPAIGNS - Disclosed are various embodiments for controlling the amount of active updates that can occur during a given time on devices that are associated with tenants (e.g., organizations) and subtenants (e.g., sub-organizations) in a multi-tenant environment. In particular, each tenant and subtenant is assigned a throttle corresponding to different update parameters (e.g., a number of devices executing an active update, an amount of data to be downloaded during a campaign, a time for completing the update campaign, etc.). When an update campaign is established, the update campaign can define the different devices that are to be updated. In some situations, the number of active updates required may exceed the allotted resources for a given subtenant. When a subtenant requires more resources than it is assigned to complete the update, the subtenant can borrow resources defined by the update parameters from a subtenant peer that has a surplus. | 2022-08-25 |
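The borrowing mechanism can be sketched as moving slack in a per-subtenant limit from peers with surplus to the requester. This is a toy model under assumed names; the real update parameters and borrowing policy are not specified beyond the abstract.

```python
def borrow_update_slots(subtenants, requester, needed):
    """subtenants: name -> {"limit": assigned active-update slots, "active": in use}.

    Returns True if `requester` can run `needed` more active updates,
    borrowing limit from peers whose surplus (limit - active) allows it."""
    me = subtenants[requester]
    deficit = needed - (me["limit"] - me["active"])
    if deficit <= 0:
        return True  # enough headroom already
    for name, peer in subtenants.items():
        if name == requester or deficit == 0:
            continue
        surplus = peer["limit"] - peer["active"]
        take = min(surplus, deficit)
        if take > 0:
            # transfer slack from the peer's throttle to the requester's
            peer["limit"] -= take
            me["limit"] += take
            deficit -= take
    return deficit == 0
```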
20220269540 | NVMe POLICY-BASED I/O QUEUE ALLOCATION - A multi-function NVMe subsystem includes a plurality of primary controllers, and a plurality of queue resources. The multi-function NVMe subsystem also includes a plurality of policies with each different policy of the plurality of policies differently dictating how the plurality of queue resources is divided amongst different primary controllers of the plurality of primary controllers. | 2022-08-25 |
20220269541 | METHODS FOR MANAGING STORAGE QUOTA ASSIGNMENT IN A DISTRIBUTED SYSTEM AND DEVICES THEREOF - Methods, non-transitory machine readable media, and computing devices that more efficiently and effectively manage storage quota enforcement are disclosed. With this technology, a quota ticket comprising a tally generation number (TGN) and a local allowed usage amount (AUA) are obtained. The local AUA comprises a portion of a global AUA associated with a quota rule. The local AUA is increased following receipt of another portion of the global AUA in a response from a cluster peer, when another TGN in the response matches the TGN and the local AUA is insufficient to execute a received storage operation associated with the quota rule. The local AUA is decreased by an amount corresponding to, and following execution of, the storage operation, when the increased local AUA is sufficient to execute the storage operation. | 2022-08-25 |
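The ticket mechanism here combines a generation check with a local slice of a global budget. A minimal sketch, with class and method names as illustrative assumptions:

```python
class QuotaTicket:
    """Local slice of a global allowed-usage amount (AUA), keyed by a
    tally generation number (TGN) so stale peer grants are ignored."""

    def __init__(self, tgn, local_aua):
        self.tgn = tgn
        self.local_aua = local_aua

    def absorb_peer_grant(self, peer_tgn, amount):
        # only accept the peer's portion if the generation numbers match
        if peer_tgn == self.tgn:
            self.local_aua += amount
            return True
        return False

    def try_execute(self, cost):
        # execute the storage operation only if the local AUA covers it,
        # then decrease the local AUA by the amount consumed
        if self.local_aua >= cost:
            self.local_aua -= cost
            return True
        return False
```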
20220269542 | MANAGEMENT OF A COMPUTING DEVICE USAGE PROFILE - Methods, systems, and apparatuses related to management of a computing device usage profile are described. The usage profile can be a usage profile of a computing device. Characteristics of workloads executed by a computing device can be monitored to determine whether performance of the computing device can be optimized by execution of an updated usage profile. Responsive to a determination that the performance of the computing device can be improved by execution of an updated usage profile, the updated usage profile can be received by the computing device and executed thereon. | 2022-08-25 |
20220269543 | SYSTEMS AND METHODS FOR OPTIMIZING PREBOOT TELEMETRY EFFICIENCY - An information handling system may include a processor and a basic input/output system configured to identify, test, and/or initialize information handling resources of the information handling system, and further configured to predict a volume of incoming telemetry data collected by a preboot driver of the basic input/output system and based on the volume predicted, manage storage of the telemetry data among memory associated with the basic input/output system. | 2022-08-25 |
20220269544 | COMPUTER SYSTEM WITH PROCESSING CIRCUIT THAT WRITES DATA TO BE PROCESSED BY PROGRAM CODE EXECUTED ON PROCESSOR INTO EMBEDDED MEMORY INSIDE PROCESSOR - A computer system includes a processor and a processing circuit. The processor has an embedded memory. The processing circuit is arranged to perform a write operation for writing a first write data into the embedded memory included in the processor. The processor is arranged to load and execute a program code to perform a read operation for reading the first write data from the embedded memory included in the processor. | 2022-08-25 |
20220269545 | COMPUTER-READABLE RECORDING MEDIUM STORING DATA PROCESSING PROGRAM, DATA PROCESSING METHOD, AND DATA PROCESSING SYSTEM - A data processing system configured to perform processing including: each time receiving data that includes time information, inputting the received data to each of a first processing system of a switching source and a second processing system of a switching destination; comparing the time information of the data with switching time set as time to switch a processing system in the first processing system; outputting a processing result of processing that uses the data from the first processing system in a case where time indicated by the time information is before the switching time; comparing the time information of the data with the switching time in the second processing system; and outputting the processing result of processing that uses the data from the second processing system in a case where the time indicated by the time information is the switching time or time after the switching time. | 2022-08-25 |
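The routing rule in this abstract reduces to a timestamp comparison against a configured switching time. A minimal sketch, with record shape and function name as illustrative assumptions:

```python
def route(records, switch_time):
    """Send records with timestamp before switch_time to the switching-source
    system, and records at or after switch_time to the switching-destination
    system, mirroring the before/at-or-after split in the abstract."""
    source_out, destination_out = [], []
    for ts, payload in records:
        (source_out if ts < switch_time else destination_out).append(payload)
    return source_out, destination_out
```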
20220269546 | CONTROL DEVICE, METHOD, PROGRAM, AND VEHICLE - A control device that controls an operation of a plurality of applications includes: a first acquisition unit that acquires a message transmitted from the applications and a message received by the applications; a storage unit that stores a priority that at least sets a priority order for a process of the message related to the plurality of applications; and an arbitration unit that arbitrates an order of an encryption process of the message acquired by the first acquisition unit based on the priority stored by the storage unit. | 2022-08-25 |
20220269547 | MANAGING VIRTUAL MACHINE MEMORY BALLOON USING TIME SERIES PREDICTIVE DATA - A virtual machine's (VM's) usage of a resource over a first time period may be monitored to determine a load pattern for the VM. A time series analysis of the load pattern may be performed to generate a predictive resource usage model, the predictive resource usage model indicating one or more predicted variations in the usage of the resource by the VM over a second time period. A predicted resource usage of the VM at a future time that is within the second time period may be determined based, at least in part, on the predictive resource usage model. An amount of the resource to allocate to the VM at a current time may be determined based, at least in part, on the predicted resource usage of the VM at the future time and the actual resource usage of the VM at the current time. | 2022-08-25 |
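The allocation rule here combines a predicted future usage with the actual current usage. A toy sketch with a moving average standing in for the time-series model; the headroom factor and all names are illustrative assumptions:

```python
def moving_average_forecast(history, window=3):
    """Toy stand-in for the predictive resource usage model: forecast the
    next usage value as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def target_allocation(actual_now, predicted_future, headroom=1.1):
    """Allocate for the larger of current and predicted usage, plus a safety
    margin, so the balloon does not reclaim memory the VM is about to need."""
    return max(actual_now, predicted_future) * headroom
```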
20220269548 | PROFILING AND PERFORMANCE MONITORING OF DISTRIBUTED COMPUTATIONAL PIPELINES - Apparatuses, systems, and techniques to collect performance data for one or more computations tasks executed by a plurality of nodes of a computational pipeline and enable optimization of distribution of task execution among the plurality of nodes. | 2022-08-25 |
20220269549 | VALIDATING POLICIES AND DATA IN API AUTHORIZATION SYSTEM - Some embodiments provide a method for distributing a set of parameters associated with policies for authorizing Application Programming Interface (API) calls to an application. For a previously stored hierarchical first document that comprises a first set of elements in a first hierarchical structure, the method receives a hierarchical update second document that comprises a second set of elements in a second hierarchical structure corresponding to the first hierarchical structure, wherein at least a subset of elements in the first and the second documents correspond to the set of parameters for evaluating API calls. The method receives a first set of hash values for elements of the first document that are not specified in the second document, and generates a second set of hash values for a set of elements specified in the second document. The method generates an overall hash for the second document by using the received first set of hash values and the generated second set of hash values. The method uses the overall hash to validate a signature from an entity that is authorized to specify the set of parameters. | 2022-08-25 |
20220269550 | DELAYED PROCESSING FOR ELECTRONIC DATA MESSAGES IN A DISTRIBUTED COMPUTER SYSTEM - A distributed computer system is provided. The distributed computer system includes at least one sequencer computing node and at least one matcher computing node. Electronic data messages are sequenced by the sequencer and sent to at least one matcher computing node. The matcher computing node receives the electronic data messages and a reference value from an external computing source. New electronic data messages are put into a pending list before they can be acted upon by the matcher. A timer is started based on a comparison of the reference value (or a calculation based thereon) to at least one attribute or value of a new electronic data message. When the timer expires, the electronic data message is moved from the pending list to another list, where it is eligible to be matched against other, contra-side electronic data messages. | 2022-08-25 |
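The pending-list mechanism can be modeled with a priority queue of release times. The sketch below uses a toy rule (messages whose value exceeds the reference wait longer); the actual delay calculation and all names are illustrative assumptions:

```python
import heapq

class PendingList:
    """Hold new messages until a per-message timer expires, then release
    them to the matching-eligible list."""

    def __init__(self):
        self._heap = []  # (release_time, message_id)

    def add(self, message, now, reference_value):
        # toy rule: a message whose value exceeds the external reference
        # value waits two ticks; otherwise one tick
        delay = 2 if message["value"] > reference_value else 1
        heapq.heappush(self._heap, (now + delay, message["id"]))

    def release_due(self, now):
        """Move every message whose timer has expired off the pending list."""
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[1])
        return due
```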
20220269551 | GROUPING REQUESTS TO REDUCE INTER-PROCESS COMMUNICATION IN MEMORY SYSTEMS - A memory system having a set of media, a plurality of inter-process communication channels, and a controller configured to run a plurality of processes that communicate with each other using inter-process communication messages transmitted via the plurality of inter-process communication channels, in response to requests from a host system to store data in the media or retrieve data from the media. The memory system has a message manager that examines requests from the host system, identifies a plurality of combinable requests, generates a combined request, and provides the combined request to the plurality of processes as a substitute of the plurality of combinable requests. | 2022-08-25 |
20220269552 | RESOLVING DATA LOCATION FOR QUERIES IN A MULTI-SYSTEM INSTANCE LANDSCAPE - The present disclosure involves systems, software, and computer implemented methods for resolving data location for queries in a multi-system instance landscape. One example method includes receiving a request for data for at least one entity that includes a qualified identifier that includes a system tenant qualifier and a local identifier. The system tenant qualifier identifies a system tenant in a multi-system tenant landscape and the local identifier identifies an entity instance of an entity in the system tenant. A routing policy table configured for the multi-system tenant landscape is identified and a cell is located in the routing policy table that corresponds to the entity and the system tenant. A routing policy is determined for routing the request based on the cell. The routing policy is used to determine a target system tenant to which to route the request and the request is provided to the target system tenant. | 2022-08-25 |
20220269553 | Memory Evaluation Method and Apparatus - A memory evaluation method includes determining a health degree evaluation model indicating a relationship in which a health degree of a memory changes with at least one health degree influencing factor of the memory; obtaining at least one running parameter value corresponding to each of the at least one health degree influencing factor; separately matching the at least one running parameter value corresponding to each health degree influencing factor to the health degree evaluation model, to obtain the health degree of the memory; and outputting health degree indication information indicating whether the memory needs to be replaced. | 2022-08-25 |
20220269554 | CLUSTERING OF STRUCTURED LOG DATA BY KEY SCHEMA - Clustering structured log data by key schema includes receiving a raw log message. At least a portion of the raw log message comprises structured machine data including a set of key-value pairs. It further includes receiving a map of keys to values. It further includes using the received map of keys to values to determine a key schema of the structured machine data. The key schema is associated with a corresponding cluster. It further includes associating the raw log message with the cluster corresponding to the determined key schema. | 2022-08-25 |
20220269555 | USAGE-BANDED ANOMALY DETECTION AND REMEDIATION UTILIZING ANALYSIS OF SYSTEM METRICS - A method comprises collecting a set of data from an information processing system, wherein the set of data represents one or more system metrics associated with the information processing system. The method tags data values in the collected set of data with usage bands selected from a plurality of predefined usage bands, wherein each usage band represents a unique range of values within which data values in the set of data can be categorized. Further, the tagged data values are segregated into at least a first set and a second set based on the usage bands. A first anomaly detection algorithm is applied to the first set and a second anomaly detection algorithm is applied to the second set to generate anomaly data sets. The anomaly data sets are mapped back to the collected set of data to identify one or more specific anomalies. | 2022-08-25 |
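The tag-then-segregate steps can be sketched directly: assign each metric value a predefined band, then split the tagged values into sets that would feed different anomaly detectors. Band names and ranges below are illustrative assumptions:

```python
def tag_with_bands(values, bands):
    """bands: list of (name, low, high) half-open ranges covering the metric.
    Tags each value with the first band it falls into."""
    tagged = []
    for v in values:
        for name, low, high in bands:
            if low <= v < high:
                tagged.append((name, v))
                break
    return tagged

def segregate(tagged, first_set_bands):
    """Split tagged values into two sets by band membership, each of which
    would then get its own anomaly detection algorithm."""
    first = [v for band, v in tagged if band in first_set_bands]
    second = [v for band, v in tagged if band not in first_set_bands]
    return first, second
```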
20220269556 | Third-Party Software Isolation Using An Actor-Based Model - A software framework for implementation in the performance of automated robotic workflows imparts a hierarchical communications command structure, utilizing an actor-based model to run driver software isolated from scheduling software, by instantiating a message-based abstraction layer that acts as an intermediary between the scheduling software and the third-party driver software. The actor-based model is used within the message-based abstraction layer to isolate the third-party software controlling third-party instruments from scheduling software, where such scheduling software and third-party instruments are operating on a common computing platform. This framework prevents scheduling applications from entering an error state or crashing when the third-party software component crashes, and allows the scheduling software to restart the third-party software to continue with the processes controlled by the scheduling software, without interruption to the automated workflow environment. | 2022-08-25 |
20220269557 | CORRUPTED DATA MANAGEMENT IN A SYSTEM OF SERVICES - A system for poisoned data management includes an interface and a processor. The interface is configured to receive an indication of poisoned data in a published event. The processor is configured to mark the poisoned data in a data graph; mark in the data graph a set of downstream nodes as poisoned; and store the data graph. | 2022-08-25 |
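Marking downstream nodes as poisoned is a reachability traversal over the data graph. A minimal breadth-first sketch, with the adjacency-dict representation as an illustrative assumption:

```python
from collections import deque

def mark_poisoned(graph, source):
    """graph: node -> list of downstream nodes.

    Marks the node carrying the poisoned data and every node reachable
    downstream of it; upstream nodes are untouched."""
    poisoned = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in poisoned:
                poisoned.add(child)
                queue.append(child)
    return poisoned
```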
20220269558 | SYSTEM AND METHODS FOR HARDWARE-SOFTWARE COOPERATIVE PIPELINE ERROR DETECTION - An error reporting system utilizes a parity checker to receive data results from execution of an original instruction and a parity bit for the data. A decoder receives an error correcting code (ECC) for data resulting from execution of a shadow instruction of the original instruction, and data error correction is initiated on the original instruction result on condition of a mismatch between the parity bit and the original instruction result, and the decoder asserting a correctable error in the original instruction result. | 2022-08-25 |
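The trigger condition in this abstract is a conjunction: the parity check on the original result fails, and the shadow-instruction ECC path reports a correctable error. A toy sketch of that decision, with function names as illustrative assumptions:

```python
def parity_bit(value):
    """Even parity over the bits of an integer result."""
    return bin(value).count("1") % 2

def should_correct(original_result, stored_parity, shadow_says_correctable):
    """Initiate data error correction only when the parity check mismatches
    AND the decoder for the shadow instruction's ECC asserts a correctable
    error in the original result."""
    mismatch = parity_bit(original_result) != stored_parity
    return mismatch and shadow_says_correctable
```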
20220269559 | MULTI-PAGE PARITY PROTECTION WITH POWER LOSS HANDLING - A variety of applications can include use of parity groups in a memory system with the parity groups arranged for data protection of the memory system. Each parity group can be structured with multiple data pages in which to write data and a parity page in which to write parity data generated from the data written in the multiple data pages. Each data page of a parity group can have storage capacity to include metadata of data written to the data page. Information can be added to the metadata of a data page with the information identifying an asynchronous power loss status of data pages that precede the data page in an order of writing data to the data pages of the parity group. The information can be used in re-construction of data in the parity group following an uncorrectable error correction code error in writing to the parity group. | 2022-08-25 |
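An XOR-based parity group like the one described lets any single lost data page be rebuilt from the survivors plus the parity page. A minimal sketch (the patent's metadata and power-loss bookkeeping are omitted; names are illustrative):

```python
def parity_page(data_pages):
    """XOR all data pages together, byte by byte, to form the parity page."""
    out = bytearray(len(data_pages[0]))
    for page in data_pages:
        for i, b in enumerate(page):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving_pages, parity):
    """Rebuild one lost page: XOR of the survivors and the parity page,
    since XOR-ing a page with itself cancels out."""
    return parity_page(list(surviving_pages) + [parity])
```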
20220269560 | APPARATUS AND METHOD FOR USING AN ERROR CORRECTION CODE IN A MEMORY SYSTEM - Error correction code apparatuses and memory systems are disclosed. The apparatus may include an encoder configured to: generate a first result by multiplying bits of the data by a first matrix; divide parity bits into a first parity group, obtained by multiplying the first result by a second matrix, and a second parity group, obtained by an exclusive OR operation of the first result and the first parity group, based on a plurality of polynomials determined from the second matrix; multiply the first result and the second matrix to generate one or more first parity bits in the first parity group; perform an exclusive OR operation on the first result and the first parity group to generate one or more second parity bits in the second parity group; and generate a codeword comprising the data bits and the parity bits. | 2022-08-25 |
20220269561 | STORAGE CONTROLLER, OPERATION METHOD THEREOF - A storage controller and an operating method of the storage controller are provided. The storage controller includes processing circuitry configured to read sub-stripe data from each of a plurality of non-volatile memory devices connected with a RAID (Redundant Array of Inexpensive Disks), check error information of at least one of the sub-stripe data, and perform a RAID recovery operation in response to the at least one of the sub-stripe data having an uncorrectable error, and a RAID memory which stores calculation results of the RAID recovery operation. | 2022-08-25 |
20220269562 | Utilizing Integrity Information in a Vast Storage System - A method includes receiving a data retrieval request. A plurality of identifiers are determined in accordance with the data retrieval request. Integrity information is generated based on determining the plurality of identifiers by performing a cyclic redundancy check. Stored integrity information corresponding to the data retrieval request is compared with the integrity information, where the stored integrity information was previously generated by performing the cyclic redundancy check. When the stored integrity information compares unfavorably with the integrity information, corruption associated with the plurality of identifiers is determined. | 2022-08-25 |
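The comparison step in this abstract reduces to regenerating a CRC over the retrieved identifiers and checking it against the stored value. A minimal sketch using Python's standard `zlib.crc32`; the canonical-ordering rule and names are illustrative assumptions:

```python
import zlib

def integrity_info(identifiers):
    """CRC over the identifiers, joined in a canonical (sorted) order so the
    result is independent of retrieval order."""
    blob = "\x00".join(sorted(identifiers)).encode()
    return zlib.crc32(blob)

def is_corrupted(identifiers, stored_crc):
    """Corruption is indicated when the freshly generated integrity
    information compares unfavorably with the stored integrity information."""
    return integrity_info(identifiers) != stored_crc
```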
20220269563 | SAFE-STATING A SYSTEM INTERCONNECT WITHIN A DATA PROCESSING SYSTEM - A data processing system includes a system interconnect, a first master, and a bridge circuit. The bridge circuit is coupled between the first master and the system interconnect. The bridge circuit is configured to, in response to occurrence of an error in the first master, isolate the first master from the system interconnect, wherein the isolating by the bridge circuit is performed while the first master has one or more outstanding issued write commands to the system interconnect which have not been completed. The bridge circuit is further configured to, after isolating the first master from the system interconnect, complete the one or more outstanding issued write commands while the first master remains isolated from the system interconnect. | 2022-08-25 |
20220269564 | PROCESSING NODE MANAGEMENT METHOD, CONFIGURATION METHOD, AND RELATED APPARATUS - This application discloses a processing node management method and apparatus, a device, and a storage medium, which belong to the field of cloud technologies and big data. The method includes: obtaining, in response to an abnormal processing node in a processing node cluster being detected, abnormal status information of the abnormal processing node; in response to the abnormal status information satisfying a condition, enabling an auxiliary node outside the processing node cluster to replace the abnormal processing node; adjusting an execution policy of the data processing task in response to the auxiliary node being enabled; distributing data processing sub-tasks to the auxiliary node and remaining processing nodes based on the execution policy; and transmitting corresponding task execution instructions to the auxiliary node and the remaining processing nodes, the task execution instructions being used for instructing the auxiliary node and the remaining processing nodes to perform the corresponding data processing sub-tasks. | 2022-08-25 |
20220269565 | METHODS AND SYSTEMS FOR PREVENTING HANGUP IN A POST ROUTINE FROM FAULTY BIOS SETTINGS - A system and method for preventing a hangup after initiation of a watchdog timeout in a computer system. A start-up routine is run via a basic input output system (BIOS). The routine applies settings for hardware components. It is determined if a watchdog timer triggered a restart from timing out when the start-up routine ran previously. The system checks a database storing settings for each of the hardware components for a proper setting for the hardware components if the watchdog timer triggered the restart. The system applies the settings from the database for the hardware components to avoid another hangup. | 2022-08-25 |
20220269566 | SYSTEMS, METHODS, AND DEVICES FOR FAULT RESILIENT STORAGE - A method of operating a storage device may include determining a fault condition of the storage device, selecting a fault resilient mode based on the fault condition of the storage device, and operating the storage device in the selected fault resilient mode. The selected fault resilient mode may include one of a power cycle mode, a reformat mode, a reduced capacity read-only mode, a reduced capacity mode, a reduced performance mode, a read-only mode, a partial read-only mode, a temporary read-only mode, a temporary partial read-only mode, or a vulnerable mode. The storage device may be configured to perform a namespace capacity management command received from the host. The namespace capacity management command may include a resize subcommand and/or a zero-size namespace subcommand. The storage device may report the selected fault resilient mode to a host. | 2022-08-25 |
20220269567 | SCALE-OUT STORAGE SYSTEM AND STORAGE CONTROL METHOD - A scale-out storage system includes a plurality of computer nodes each of which has a memory and a processor, and a storage apparatus. The computer nodes have one or more redundancy groups each of which is a group for metadata protection. Each of the one or more redundancy groups includes two or more of the computer nodes including a primary node being a primary computer node and a secondary node being a secondary computer node, and a failover is performed from the primary node to the secondary node. The memory of the primary node has stored therein metadata related to the redundancy group and to be accessed for control. The metadata is redundantly stored in the memory of the primary node and the memory of the secondary node. | 2022-08-25 |
20220269568 | BLOCKCHAIN-BASED DATA SNAPSHOT METHOD AND APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM - A blockchain-based data snapshot method, performed by a consensus node in a blockchain network, includes: obtaining a snapshot trigger instruction and a trigger moment of the snapshot trigger instruction; performing snapshot processing on one or more transaction blocks in a ledger at the trigger moment, to obtain a snapshot account state of a transaction account, the snapshot account state being account data related to transaction data in the transaction blocks; obtaining a write-ahead logging (WAL) log of a target block, the target block being a block with a highest block height in the transaction blocks, the WAL log including a log of account data corresponding to the target block; and correcting dirty data in the snapshot account state according to the WAL log, to obtain a corrected snapshot account state, the dirty data being generated based on incomplete transaction data in the target block included in the ledger. | 2022-08-25 |
20220269569 | I/O to Unpinned Memory Supporting Memory Overcommit and Live Migration of Virtual Machines - Systems and methods of error handling in a network interface card (NIC) are provided. For a data packet destined for a local virtual machine (VM), if the NIC cannot determine a valid translation memory address for a virtual memory address in a buffer descriptor from a receive queue of the VM, the NIC can retrieve a backup buffer descriptor from a hypervisor queue, and store the packet in a host memory location indicated by an address in the backup buffer descriptor. For a transmission request from a local VM, if the NIC cannot determine a valid translated address for a virtual memory address in the packet descriptor from a transmit queue of the VM, the NIC can send a message to a hypervisor backup queue, and generate and transmit a data packet based on data in a memory page reallocated by the hypervisor. | 2022-08-25 |
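The receive-path fallback described here — try to translate the VM's buffer address, and on failure fall back to a backup descriptor from the hypervisor queue — reduces to a simple control-flow pattern. This sketch abstracts far away from the NIC hardware; the callable `translate` and the list-backed hypervisor queue are stand-ins, not the patent's mechanisms.

```python
def resolve_receive_buffer(translate, vm_descriptor, hypervisor_queue):
    """Return a host memory address for an incoming packet: the VM's
    translated buffer address when valid, else a backup address popped
    from the hypervisor's backup descriptor queue."""
    addr = translate(vm_descriptor)
    if addr is not None:
        return addr
    return hypervisor_queue.pop(0)  # backup buffer descriptor
```

The transmit path in the abstract follows an analogous shape (message to a hypervisor backup queue, then transmission from a reallocated page).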
20220269570 | USING A STREAM OF SOURCE SYSTEM STORAGE CHANGES TO UPDATE A CONTINUOUS DATA PROTECTION-ENABLED HOT STANDBY - A stream of source system storage changes associated with an object are received at a backup system from a source system. The source system storage changes associated with the object are provided to a remote data recovery system. The remote data recovery system is configured to store the provided source system storage changes associated with the object. The backup system is utilized to generate one or more reference restoration points based on the stream of source system storage changes associated with the object. | 2022-08-25 |
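Generating reference restoration points from a stream of storage changes can be sketched as folding the changes into a cumulative object state and periodically snapshotting it. The fixed `interval` trigger is an assumption for illustration; the abstract does not say what schedules a restoration point.

```python
def build_reference_restoration_points(changes, interval=3):
    """Fold a stream of (key, value) storage changes into a cumulative
    object state, recording a reference restoration point (a copy of
    the state) every `interval` changes."""
    state = {}
    points = []
    for i, (key, value) in enumerate(changes, start=1):
        state[key] = value
        if i % interval == 0:
            points.append(dict(state))  # immutable snapshot of state so far
    return points
```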
20220269571 | VIRTUAL MACHINE CONFIGURATION UPDATE TECHNIQUE IN A DISASTER RECOVERY ENVIRONMENT - A resource matching technique between a primary site and one or more secondary sites accommodates a configuration update of a virtual machine (VM) in a disaster recovery (DR) environment. The resource matching technique determines whether a proposed resource configuration update or change to a primary VM running at the primary site is permissible on a secondary VM configured for failover operation at a secondary site in the event of failure of the primary VM. The technique continuously monitors the availability of resources at each secondary site and enables negotiation between the primary and secondary sites of the proposed configuration change based on corresponding indications of resource availability. The resources may include generic resources (e.g., memory, storage capacity and CPU processing capacity) and specialized resources (e.g., GPU types and/or models). | 2022-08-25 |
20220269572 | STORAGE DEVICE AND METHOD FOR OPERATING STORAGE DEVICE - A storage device includes an integrity checking module that checks the integrity of data stored at a first host memory buffer (HMB) address of an HMB in a host coupled to the storage device, and an HMB mapping module that maps, if the integrity checking module determines the data to be corrupted, the first HMB address to a second address. | 2022-08-25 |
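The check-then-remap behavior in this abstract can be illustrated with a CRC as the integrity check. The CRC choice, the dict-backed HMB, and the free-address list are all assumptions for the sketch; the patent does not disclose the integrity mechanism.

```python
import zlib

def check_and_remap(hmb, addr, expected_crc, free_addrs):
    """If the data at `addr` fails a CRC-32 integrity check, remap the
    HMB address to a fresh one from `free_addrs` (an illustrative
    stand-in for the integrity checking and HMB mapping modules)."""
    data = hmb[addr]
    if zlib.crc32(data) == expected_crc:
        return addr  # data intact; keep the original mapping
    return free_addrs.pop(0)  # corrupted; map to a second address
```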
20220269573 | AUTOMATED QUERY RETRY IN A DATABASE SYSTEM - Techniques for automated query retry in a database platform include assigning by at least one hardware processor a first execution of a query directed to database data to a first execution node of a plurality of execution nodes of an execution platform. The first execution node uses a first set of configurations during the first execution. The techniques further include determining that the first execution of the query by the first execution node results in a failed execution. The query is transferred to a second execution node of the plurality of execution nodes. A second execution of the query at the second execution node is caused. The second execution node uses a second set of configurations during the second execution. A cause of the failed execution at the first execution node is determined based on a result of the second execution of the query at the second execution node. | 2022-08-25 |
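The retry-and-attribute logic in this abstract — rerun the failed query on a second node with a different configuration, then use the retry's outcome to infer the cause — can be sketched as follows. Nodes are modeled here as callables returning `(ok, result)`, and the three classification labels are hypothetical names, not terms from the patent.

```python
def retry_and_classify(query, first_node, second_node):
    """Execute `query` on `first_node`; on failure, retry on
    `second_node` and classify the original failure from the
    retry's outcome."""
    ok1, _ = first_node(query)
    if ok1:
        return "no_retry_needed"
    ok2, _ = second_node(query)
    # If the retry succeeds under a different configuration, the failure
    # is attributed to the first node/configuration; if it also fails,
    # the query itself is the likely cause.
    return "node_specific_failure" if ok2 else "query_error"
```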
20220269574 | SYSTEMS AND METHODS FOR MITIGATING FAILURE TO ENTER STANDBY MODE - An information handling system may include a processor, a visual indicator, and a management controller communicatively coupled to the processor and the visual indicator and configured to perform out of band management of the information handling system, the management controller further configured to, responsive to receiving an indication from the processor that the information handling system is attempting to enter a standby mode and prior to the information handling system entering the standby mode, cause the visual indicator to generate a visual indication that the information handling system is attempting to enter the standby mode. | 2022-08-25 |
20220269575 | Optimization of Power and Computational Density of a Data Center - Techniques for optimizing power and computational density of data centers are described. According to various embodiments, a benchmark test is performed by a computer data center system. Thereafter, transaction information and power consumption information associated with the performance of the benchmark test are accessed. An efficiency metric value is then generated based on the transaction information and the power consumption information. In some implementations, the efficiency metric value indicates a number of transactions executed via the computer data center system during a specific time period per unit of power consumed in executing the transactions during the specific time period. The generated efficiency metric value is then compared to a target threshold value. Thereafter, a performance summary report indicating the generated efficiency metric value, and indicating a result of the comparison of the generated efficiency metric value to the target value, is generated. | 2022-08-25 |
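The efficiency metric this abstract describes — transactions executed per unit of power consumed over the same period — and its comparison against a target threshold amount to a small calculation. The kWh unit below is an assumption; the abstract only says "per unit of power".

```python
def efficiency_metric(transactions_executed: int, power_consumed_kwh: float) -> float:
    """Transactions executed per unit of power consumed during the
    same measurement period."""
    if power_consumed_kwh <= 0:
        raise ValueError("power consumed must be positive")
    return transactions_executed / power_consumed_kwh

def meets_target(metric_value: float, target: float) -> bool:
    """Compare the generated efficiency metric to the target threshold."""
    return metric_value >= target
```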
20220269576 | METHOD AND SYSTEM FOR PROVIDING APPLICATION HOSTING BENCHMARKS - A method for providing predictive cost and performance analytics to facilitate benchmarking of an application host is disclosed. The method includes receiving, via a graphical user interface, an input, the input including a request to benchmark a networked environment to host an application; retrieving, from a repository based on the input, a data storage object that corresponds to the application, the data storage object including a deployment artifact and a performance script; simulating, based on the retrieved data storage object, deployment of the application in the networked environment; collecting, from the networked environment, a result of the simulation, the result including a metric that corresponds to the application; and determining, by using a model, predicted implementation information that corresponds to the application based on the result. | 2022-08-25 |
20220269577 | Data-Center Management using Machine Learning - A method for data-center management includes, in a data center including multiple components, monitoring a plurality of performance measures of the components. A set of composite metrics is automatically defined, each composite metric including a respective weighted combination of two or more performance measures from among the performance measures. Baseline values are established for the composite metrics. An anomalous deviation is detected of one or more of the composite metrics from the respective baseline values. | 2022-08-25 |
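The composite metric defined in this abstract — a weighted combination of two or more performance measures — and the baseline-deviation check can be sketched directly. The symmetric tolerance band used for anomaly detection is an assumption; the abstract does not specify how a deviation is deemed anomalous.

```python
def composite_metric(measures: dict, weights: dict) -> float:
    """Weighted combination of named performance measures."""
    return sum(weights[name] * measures[name] for name in weights)

def is_anomalous(value: float, baseline: float, tolerance: float) -> bool:
    """Flag a deviation from the established baseline beyond a
    tolerance band (illustrative criterion)."""
    return abs(value - baseline) > tolerance
```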
20220269578 | Measurement of Parallelism in Multicore Processors - A method of logging thread parallelism data includes executing a plurality of threads at a multicore processor associated with an operating system to perform symmetrical multiprocessing. The method also includes tracking, at a logging subsystem of the operating system, an accumulated runtime associated with each thread combination of the plurality of threads during execution of the plurality of threads. The accumulated runtime of a particular thread combination increases while the particular thread combination is running on the multicore processor in parallel. The method also includes generating, at the logging subsystem, logging data indicating the accumulated runtime for each thread combination. The method further includes outputting the logging data. The logging data is usable to increase thread parallelism at the multicore processor. | 2022-08-25 |
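The accounting rule here — accumulated runtime per thread *combination*, increasing while that exact combination runs in parallel — can be sketched by keying totals on the frozen set of concurrently running thread IDs. The `(running_thread_ids, duration)` sample format is an assumption for illustration; a real logging subsystem would derive these intervals from scheduler events.

```python
from collections import defaultdict

def accumulate_parallel_runtime(samples):
    """Sum observed runtime per thread combination. Each sample is
    (running_thread_ids, duration); the key is the frozen set of
    threads running in parallel during that interval."""
    totals = defaultdict(float)
    for running, duration in samples:
        totals[frozenset(running)] += duration
    return dict(totals)
```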
20220269579 | PERFORMANCE METRIC MONITORING AND FEEDBACK SYSTEM - Disclosed herein are various embodiments for a performance metric monitoring and feedback system. An embodiment operates by determining a first performance metric for a first application available to a plurality of users of the first application operating first user devices. A second performance metric for a second application available to a plurality of users of the second application operating second user devices is determined. A real-time performance of both the first application and the second application is monitored across the set of computing devices. A user interface is generated that simultaneously displays both the first performance metric for the first application and the second performance metric for the second application, accessible to one or more members of the organization in accordance with the display format of the metric template. | 2022-08-25 |
20220269580 | METHODS AND SYSTEMS FOR ASSESSING FUNCTIONAL VALIDATION OF SOFTWARE COMPONENTS COMPARING SOURCE CODE AND FEATURE DOCUMENTATION - Systems and methods for automatic validation of software components comparing source code and feature documentation are provided herein. An exemplary method includes assessing functional validation of software components by comparing the claimed features of software components against the features extracted from their source code. To avoid the issue of having to try unverified software components, the present disclosure provides a solution that uses machine learning to extract the claimed features of a software component and the actual features implemented in its source code. Then, for evaluating the software components, the disclosed solution compares the claimed features against the features extracted from the source code to give a validated score to the developer, so that the developer can easily decide on choosing the validated software. | 2022-08-25 |
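The comparison at the heart of this abstract — claimed features versus features found in source code — can be reduced to a set overlap score once both feature lists have been extracted (the abstract attributes the extraction itself to machine learning). Treating the score as the fraction of claimed features found implemented is an assumption of this sketch.

```python
def validation_score(claimed_features, implemented_features):
    """Fraction of a component's claimed features that are actually
    found among the features extracted from its source code."""
    claimed = set(claimed_features)
    if not claimed:
        return 1.0  # nothing claimed, nothing to contradict
    return len(claimed & set(implemented_features)) / len(claimed)
```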
20220269581 | MEMORY CHECK METHOD, MEMORY CHECK DEVICE, AND MEMORY CHECK SYSTEM - A memory check method, a memory check device and a memory check system are disclosed. The method includes the following. A debug file is generated according to a source code, where the debug file carries symbol information related to a description message in the source code. Memory data generated by a memory storage device in execution of a firmware is received. The debug file is loaded to automatically analyze the memory data. In addition, an analysis result is presented by an application program interface, where the analysis result reflects a status of the firmware with assistance of the symbol information. | 2022-08-25 |
20220269582 | METHOD AND SYSTEM FOR SYNCHRONOUS DEVELOPMENT AND TESTING OF LIVE, MULTI-TENANT MICROSERVICES BASED SAAS SYSTEMS - The present disclosure provides techniques for configuring and provisioning a tenant for testing microservices in a multi-tenant instance. Code is committed for a modified microservice, and a configuration is received for a production tenant of the multi-tenant instance. The configuration is updated to include a reference to the updated microservice, and then provided to a provisioner that provisions a test tenant based on the configuration. The microservices for the test tenant are compared with versions in a code version management system and updated, then a reference to the test tenant is provided to a developer to test the modified microservice. The test tenant may be deprovisioned after a predetermined amount of time, by a command of the developer, or other automated method. | 2022-08-25 |
20220269583 | SYSTEMS AND METHODS FOR PROGRAM CODE DEFECT AND ACCEPTABILITY FOR USE DETERMINATION - A code development engine can be programmed to evaluate build code that can be representative of program code at an instance of time during or after a software development of the program code to identify and correct coding errors in the build code. A code run-time simulation engine can be programmed to simulate the build code in a modeled program code environment for the program code to identify and correct coding failures in the build code. A build code output module can be programmed to evaluate the build code to determine whether the build code is acceptable for use in a program code environment based on a level of acceptable risk for the build code in response to the coding error and/or coding failure being corrected in the build code. | 2022-08-25 |
20220269584 | METHOD AND SYSTEM FOR VERIFYING RESULTING BEHAVIOR OF GRAPH QUERY LANGUAGE - Disclosed are a method and system for verifying a resulting behavior of a graph query language. A lexical parser and a preset lexical rule are added to a BDD test framework, to construct an improved BDD test framework; a first Gherkin text in a graph query language standard is migrated to be a second Gherkin text in a graph query language; a first parsing result of the second Gherkin text is acquired from the improved BDD test framework; a first return result from a graph database is acquired, a comparison result between the first parsing result and the first return result is acquired, wherein the graph query language meets the graph query language standard if the first parsing result and the first return result are the same. The present disclosure solves the problem that the BDD test framework cannot recognize complex Gherkin texts, thereby facilitating development of graph databases. | 2022-08-25 |
20220269585 | Computer Card for Testing In-Vehicle Software - A computer card for testing a software of an in-vehicle computer, the computer card comprising: a computing unit comprising at least a system on a chip (SoC) for the in-vehicle computer, the computing unit being adapted to run the software to be tested, and a field-programmable gate array (FPGA) connected to the computing unit for feeding the software to be tested with environment and/or driving data (EDD) and for recovering detection and/or behavior data (DBD) from the software to be tested. | 2022-08-25 |
20220269586 | SYSTEMS AND METHODS FOR AUTOMATING TEST AND VALIDITY - A test automation system is provided that enables "codeless" code test generation and execution. Various embodiments allow users to create automation tests, set variables in test scripts, set validation criteria, etc., all without having to write code for the operation being tested. In some examples, the system is configured to provide access to mobile device emulation based on selection of or from any number of mobile devices. By automatically defining a suite of tests that can be run on a mobile device population, automated testing can improve validation of any developed software and functionality, and identification of test failures, over many existing approaches. Once the codeless tests are created on the system, they can be scheduled to run repeatedly, periodically, or aperiodically, all without supervision. Any errors can be communicated to the user, with recommendations to resolve or re-test, among other options. | 2022-08-25 |
20220269587 | AUTOMATED BROWSER TESTING ASSERTION ON NATIVE FILE FORMATS - Embodiments provide systems and methods for performing automated browser testing on different native file types. A preview version of each received file can be generated and rendered in an output file type. Generating the preview version can be performed by a preview application executed by the testing system, and rendering the preview version of the first file can be performed by a browser application executed by the testing system. The output file type can be different from the received file type. For example, the received file type can be a native file type of a first application different from the browser, and the output file type comprises a HyperText Markup Language (HTML) file type. A test can be executed on the rendered preview version based on one or more assertions on the first file. | 2022-08-25 |
20220269588 | SAFETY VERIFICATION SYSTEM FOR ARTIFICIAL INTELLIGENCE SYSTEM, SAFETY VERIFICATION METHOD, AND SAFETY VERIFICATION PROGRAM - An effective system for verifying safety of an artificial intelligence system includes a feature quantity information accepting unit which accepts feature quantity information that includes values of plural feature quantities, assumed to be those used in an artificial intelligence system, in each of plural first test data used for a test for verifying safety of the artificial intelligence system; and a judgment unit which judges a first combination, that is, a combination not included in the plural first test data, among the combinations of values that the plural feature quantities may take, or a second combination, with which plural correct analysis results that should be derived by the artificial intelligence system are associated, among the combinations of the values that the plural feature quantities may take. | 2022-08-25 |
20220269589 | AUTOMATICALLY GENERATING DATASETS BY PROCESSING COLLABORATION FORUMS USING ARTIFICIAL INTELLIGENCE TECHNIQUES - Methods, systems, and computer program products for automatically generating datasets by processing collaboration forums using artificial intelligence techniques are provided herein. A computer-implemented method includes obtaining conversational data from collaboration forum sources; classifying, using a first set of artificial intelligence techniques, at least a portion of the conversational data into categories based on designated applications; extracting information, pertaining to test case-related issues, from at least a portion of the classified data; verifying at least a portion of the extracted information by analyzing, using a second set of artificial intelligence techniques, portions of the conversational data attributed to multiple entities and related to the extracted information; generating, using the verified information, one or more datasets related to at least one of the test case-related issues for at least one of the designated applications; and performing at least one automated action based on the one or more generated datasets. | 2022-08-25 |
20220269590 | METHODS, SYSTEMS, AND MEDIA FOR GENERATING TEST AUTHORIZATION FOR FINANCIAL TRANSACTIONS - The present disclosure is directed to systems, media, and methods of generating test authorization for financial transactions. One or more computing devices generate an initial data set corresponding to a financial transaction. Alterations to one or more fields of information included in the initial data set are made responsive to instructions received via a user interface. Responsive to the alterations, the one or more computing devices: convert the test data set into a binary file, deserialize the binary file, and generate a transaction file for the financial transaction based on the deserialized test data set. | 2022-08-25 |
20220269591 | GENERATING SYNTHETIC TEST CASES FOR NETWORK FUZZ TESTING - A system for generating synthetic test cases for fuzz testing. One example includes an electronic processor. The electronic processor is configured to pre-process training data, use the training data to train a discriminator DNN to evaluate a test case to determine whether the test case is likely to expose a software vulnerability, and use the discriminator DNN to train a generator DNN to generate a test case that is likely to expose a software vulnerability. The electronic processor uses the discriminator DNN to train the generator DNN by determining whether a test case generated by the generator DNN is likely to expose a software vulnerability and sending a determination of whether the test case generated by the generator DNN is likely to expose a software vulnerability to the generator DNN. The electronic processor is further configured to, when the generator DNN is trained, generate one or more test cases. | 2022-08-25 |
20220269592 | Mocking Robotic Process Automation (RPA) Activities For Workflow Testing - A robot design interface comprises tools for testing a robotic process automation (RPA) workflow. Some embodiments automatically generate a mock workflow comprising a duplicate of the original workflow wherein a set of RPA activities are replaced with substitute activities for testing purposes. Some embodiments expose an intuitive interface co-displaying the substitute activities in parallel to their respective original activities and enabling a user to configure various mock parameters. Testing is then carried out on the mock workflow. | 2022-08-25 |
20220269593 | AUTOMATIC GENERATION OF INTEGRATED TEST PROCEDURES USING SYSTEM TEST PROCEDURES - A method for automatic generation of integrated test procedures using system test procedures includes generating a system test case for each system model of a plurality of system models. The method also includes automatically generating an integrated test harness including a group of interacting system models of the plurality of system models. An output signal from one or more of the interacting system models is an input signal to one or more other interacting system models. The method additionally includes automatically generating an integrated test case for each system model in the integrated test harness and automatically running the integrated test case using an integrated test procedure. The method further includes generating an integrated test procedure coverage report in response to running the integrated test case. | 2022-08-25 |
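The integrated test harness this abstract describes — a group of interacting system models where one model's output signal is another's input — can be sketched as a pipeline of model callables. The linear chaining below is a simplifying assumption; the patent allows arbitrary signal fan-out between models.

```python
def run_integrated_harness(models, initial_inputs):
    """Run interacting system models in order, feeding each model's
    output signal as the input signal to the next, and record a trace
    usable for an integrated test procedure coverage report."""
    signal = initial_inputs
    trace = []
    for name, model in models:
        signal = model(signal)
        trace.append((name, signal))  # per-model output for coverage
    return signal, trace
```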
20220269594 | Automating Identification of Test Cases for Library Suggestion Models - A method, system, and apparatus are disclosed for adding library models to a library knowledge base by defining a template for a library configuration file that conveys information about each library model, custom inputs and code snippets to facilitate library comparison operations, and education content for the library model, where the library configuration file template may be automatically filled by populating selected data fields in the template with information identifying the library model, scraping documentation pages to extract test cases, and then scraping test case code to extract the test case input parameters for input to an input/output matching engine to evaluate a repository of code snippets and identify a set of functionally similar code snippets for inclusion one or more data fields in the template. | 2022-08-25 |