51st week of 2021 patent application highlights part 47 |
Patent application number | Title | Published |
20210397416 | Generating a Pseudo-Code from a Text Summarization Based on a Convolutional Neural Network - Aspects of the disclosure relate to generating a pseudo-code from a text summarization based on a convolutional neural network. A computing platform may receive, by a computing device, a first document comprising text in a natural language different from English. Subsequently, the computing platform may translate, based on a neural machine translation model, the first document to a second document comprising text in English. Then, the computing platform may generate an attention-based convolutional neural network (CNN) for the second document. Then, the computing platform may extract, by applying the attention-based CNN, an abstractive summary of the second document. Subsequently, the computing platform may generate, based on the abstractive summary, a flowchart. Then, the computing platform may generate, based on the flowchart, a pseudo-code. Subsequently, the computing platform may display, via an interactive graphical user interface, the flowchart and the pseudo-code. | 2021-12-23 |
20210397417 | DYNAMIC VALIDATION FRAMEWORK EXTENSION - A programming language framework may be enhanced to provide for dynamic validation. Dynamic validation allows the validator function for any variable to be selected at runtime rather than statically declared at programming-time. Instead of annotating a variable with an annotation that refers to a specific validator function or constraint type, programmers can annotate a variable with an annotation that indicates that the validator function will be selected dynamically at runtime. When a runtime instance of the variable is created, the programming language framework may identify the dynamic validation annotation on the variable, and then use the runtime values in the variable to determine which validator function(s) should be used. | 2021-12-23 |
20210397418 | UTILIZING NATURAL LANGUAGE UNDERSTANDING AND MACHINE LEARNING TO GENERATE AN APPLICATION - A device may receive user input data identifying a canvas, API documents, or tagged assets for an application to be generated, a requirements document for the application, and asset data identifying reusable assets and components for the application. The device may process the user input data, the requirements document, and the asset data, with a first model, to extract entity data and intent classification data. The device may parse the API documents to generate structured data identifying API endpoints, a request API model, and a response API model. The device may process the structured data to generate an API layer. The device may process the canvas to identify UI objects and to map the UI objects to UI elements. The device may generate code for the application based on the asset data, the entity data, the intent classification data, the API layer, and the UI elements. | 2021-12-23 |
20210397419 | APPLICATION SCREEN DISPLAY PROGRAM INSTALLING METHOD - There is provided an application screen display program implementation method for executing application software to display a screen using an information processing apparatus. Each record of a master table, which controls the display of the display elements configuring a screen and the transaction data input to and output from those display elements, has a field for holding an index of an array, and is associated with an index of an array in the source code of an execution program. As a result, the man-hours required in application development to change the display screen are reduced. | 2021-12-23 |
20210397420 | Spreadsheet-Based Software Application Development - Aspects described herein may be used with local spreadsheet applications, web, and/or cloud-based spreadsheet solutions, to create complex custom software applications. Spreadsheets themselves lack the conceptual framework to be used as a platform tool to build custom or complex software applications. Using the methods and systems described herein using low-code/no-code techniques, a designer can create custom and/or complex software applications using one or more spreadsheets as the underlying blueprints for the software application. The resultant software application may be static/read-only, or may be interactive to allow users to dynamically add, delete, edit, or otherwise amend application data, e.g., via one or more online web pages or via a mobile application. Data transfer may be one-way or bi-directional between the blueprint spreadsheets and the resultant software application, thereby allowing amended data to be transferred from the software application back into spreadsheet form. | 2021-12-23 |
20210397421 | SOFTWARE CODE VECTORIZATION CONVERTER - A code converter uses machine learning to determine conflicts and redundancies in software code. Generally, the code converter uses machine learning to convert software code into vectors that represent the code. These vectors may then be compared with other vectors to determine similarities between code. The similarities may be used to detect conflicts and/or redundancies created during the development process (e.g., when a developer attempts to change the code). | 2021-12-23 |
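The vectorize-and-compare idea behind this and the two following related applications can be illustrated with a toy bag-of-tokens model; the actual converter uses machine learning, so the tokenizer and cosine score below are only an illustrative stand-in.

```python
# Toy sketch (not the patented model): represent code as bag-of-token
# vectors and compare them with cosine similarity; a high score between
# two snippets suggests a redundancy or conflict worth flagging.
import math
import re
from collections import Counter

def vectorize(code):
    """Turn source text into a sparse token-count vector."""
    tokens = re.findall(r"[A-Za-z_]\w*", code)
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

v1 = vectorize("def add(a, b): return a + b")
v2 = vectorize("def add(x, y): return x + y")
sim = cosine(v1, v2)  # shared structure raises the score
```

A learned embedding would also score the renamed-variable pair as near-identical; the token-count version only credits the literally shared names.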
20210397422 | SOFTWARE CODE CONVERTER FOR RESOLVING CONFLICTS DURING CODE DEVELOPMENT - A code converter uses machine learning to determine conflicts and redundancies in software code. Generally, the code converter uses machine learning to convert software code into vectors that represent the code. These vectors may then be compared with other vectors to determine similarities between code. The similarities may be used to detect conflicts and/or redundancies created during the development process (e.g., when a developer attempts to change the code). | 2021-12-23 |
20210397423 | SOFTWARE CODE CONVERTER FOR RESOLVING REDUNDANCY DURING CODE DEVELOPMENT - A code converter uses machine learning to determine conflicts and redundancies in software code. Generally, the code converter uses machine learning to convert software code into vectors that represent the code. These vectors may then be compared with other vectors to determine similarities between code. The similarities may be used to detect conflicts and/or redundancies created during the development process (e.g., when a developer attempts to change the code). | 2021-12-23 |
20210397424 | NON-TRANSITORY COMPUTER-READABLE MEDIUM, FILE OUTPUT METHOD AND FILE OUTPUT DEVICE - A non-transitory computer-readable medium having stored therein a program for causing a computer to execute a process, the process includes detecting a conflict between a first library and a second library in a first program based on a first definition file indicating that the first program depends on the first library and the second library among a plurality of libraries, generating a logical formula indicating that the first program depends on the first library and does not depend on the second library, and outputting a second definition file indicating that the first program depends on the first library and does not depend on the second library when the logical formula is determined to be satisfiable. | 2021-12-23 |
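The satisfiability check described above can be sketched with a brute-force truth-table search; the library names and the way the formula is encoded below are illustrative assumptions.

```python
# Minimal sketch (assumed encoding, brute-force SAT check): a dependency
# state is a truth assignment over libraries, and the requirement
# "depends on lib1 and not on lib2" is a propositional formula.
from itertools import product

def satisfiable(formula, variables):
    """Check every truth assignment; fine for a handful of libraries."""
    for values in product([False, True], repeat=len(variables)):
        if formula(dict(zip(variables, values))):
            return True
    return False

libs = ["lib1", "lib2"]
# The program must keep lib1 and drop the conflicting lib2.
formula = lambda a: a["lib1"] and not a["lib2"]

if satisfiable(formula, libs):
    # Emit the second definition file only when the formula holds.
    new_definition = {"depends": ["lib1"], "excludes": ["lib2"]}
```

A real dependency solver would hand the formula to a SAT solver rather than enumerate assignments, but the gate is the same: output the new definition file only if the formula is satisfiable.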
20210397425 | Systems and Methods for Performing Binary Translation - Systems and methods for performing binary translation include a system that is capable of translating binaries written for use in a source execution environment to binaries compatible with a target execution environment. Consistent with some embodiments, a binary translation system includes a system service and a runtime code module that exists in an application memory address space. The binary translation system translates object-level binaries corresponding to executables, linkers, libraries, and the like and stores the translation in a translation cache that is cryptographically secured to ensure that only a system having a specific key is able to access the translations. If the application or application binary has been modified since the translation was performed, the system service will ensure that the translation is removed from the cache, a new translation is performed, and all threads accessing that translation are updated to the new translation. | 2021-12-23 |
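The cryptographically secured translation cache can be sketched as a digest-keyed store sealed with an HMAC; the key handling and cache layout below are simplifications, not the system service's actual design.

```python
# Sketch of a keyed translation cache (hypothetical layout): entries are
# indexed by a digest of the source binary, so a modified binary simply
# misses, and each entry is sealed with an HMAC so only a holder of the
# key can trust a stored translation.
import hashlib
import hmac

KEY = b"per-system-secret"
cache = {}  # digest of source binary -> (translation, mac)

def digest(binary):
    return hashlib.sha256(binary).hexdigest()

def store(binary, translation):
    mac = hmac.new(KEY, translation, hashlib.sha256).digest()
    cache[digest(binary)] = (translation, mac)

def lookup(binary):
    entry = cache.get(digest(binary))
    if entry is None:
        return None  # modified binary hashes differently: forces retranslation
    translation, mac = entry
    expected = hmac.new(KEY, translation, hashlib.sha256).digest()
    return translation if hmac.compare_digest(mac, expected) else None

store(b"\x90\x90source", b"translated-code")
```

Content addressing gives the invalidation behavior in the abstract for free: any modification to the application binary changes the digest, so the stale translation is never returned.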
20210397426 | EFFICIENT DEPENDENCY MANAGEMENT FOR SOFTWARE DEVELOPMENT ENVIRONMENTS - A dependency management system preserves and installs software project environments. The dependency management system stores snapshots of project environments on client devices which include one or more software packages. A project environment snapshot includes a dependency graph representing the dependencies of the project environment. The dependency management system may manage a project environment on a client device and update a dependency graph to reflect the dependencies of currently installed packages. As part of the updating of the dependency graph, the dependency management system may automatically resolve dependency conflicts resulting from one or more dependencies of the project environment. The dependency management system further provides project environment snapshots to client systems for installing corresponding project environments. | 2021-12-23 |
20210397427 | TRAINING AN AGENT-BASED HEALTHCARE ASSISTANT MODEL - Systems and methods for training an agent-based assistant model are provided. In embodiments, a method includes: obtaining biometric data of a user from a software application utilizing an assistant model that determines functions of the software application; filtering the biometric data based on predetermined categories, thereby extracting select biometric data; training a first version of the assistant model based on the select biometric data, thereby generating an updated assistant model; generating a summary of changes including changes to the first version of the assistant model that occurred during the training; and sending the summary of changes to a remote federated learning server, wherein the federated learning server trains a general version of the assistant model based on the summary of changes and other summaries of changes received from computing devices of other users, thereby generating an updated general version of the assistant model. | 2021-12-23 |
20210397428 | DEPLOYING SOFTWARE UPDATES IN COORDINATION WITH END-USER PRODUCTIVITY - Software updates can be deployed in end user devices in coordination with end-user productivity. A system monitoring engine can be employed on end user devices to compile productivity impact data from which heat maps may be created. An optimal deployment detection engine can employ the heat maps to create or maintain period-based groupings. When software updates are available, the optimal deployment detection engine can employ the period-based groupings to create optimal deployment plans specific to the end user devices. The installation of the software updates can then be performed on each end user device in accordance with that end user device's optimal deployment plan. | 2021-12-23 |
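A minimal sketch of turning a productivity heat map into a deployment window, assuming a 24-bucket hourly activity profile per device (the data shape is an assumption, not the patent's format):

```python
# Toy sketch: each device's heat map is a list of 24 productivity-impact
# scores, one per hour; the quietest hour becomes that device's
# update-deployment window.
def optimal_hour(heatmap):
    """Return the hour with the lowest productivity impact."""
    return min(range(24), key=lambda h: heatmap[h])

# A device busy during working hours and idle overnight.
heatmap = [1, 0, 0, 2, 3, 5, 8, 9, 9, 9, 8, 7,
           8, 9, 9, 8, 7, 6, 4, 3, 2, 2, 1, 1]
window = optimal_hour(heatmap)  # first minimal bucket wins ties
```

Because the grouping is computed per device, two devices with different work patterns naturally land in different deployment plans.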
20210397429 | LIVE UPDATES OF STATEFUL COMPONENTS - Methods, systems, and devices supporting live updates for stateful software components are described. A computing system may implement live updating for patching stateful software components. A device may execute a first set of requests at a first version of a software component deployed to a container, where the software component may be a stateful component associated with an in-memory state managed by the container. The device may receive a software patch that includes a second version of the software component from a user device, deploy the second version of the software component to the container, and route a second set of requests to the second version of the software component. The device may update the in-memory state of the software component based on the first version of the software component and the second version of the software component to maintain accurate state information across versions during the patching process. | 2021-12-23 |
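The version-routing and state-migration flow might look like the following toy container model; the class and function names are hypothetical, not the described system's API.

```python
# Sketch of a live patch (hypothetical container model): new requests
# are routed to the new component version while the in-memory state
# held by the container is migrated across versions.
class Container:
    def __init__(self, component, state):
        self.component = component
        self.state = state  # in-memory state managed by the container

    def handle(self, request):
        return self.component(self.state, request)

    def patch(self, new_component, migrate):
        self.state = migrate(self.state)   # keep state across versions
        self.component = new_component     # route new requests here

def v1(state, req):
    state["count"] += 1
    return f"v1:{state['count']}"

def v2(state, req):
    state["count"] += 1
    return f"v2:{state['count']}:{state['since']}"

c = Container(v1, {"count": 0})
c.handle("a")                              # served by v1
c.patch(v2, lambda s: {**s, "since": "v1"})
out = c.handle("b")                        # served by v2 with migrated state
```

The key property the abstract claims is visible in the sketch: the request counter survives the patch, so state stays accurate across versions.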
20210397430 | SECURE TRANSPORT SOFTWARE UPDATE - An example operation includes one or more of receiving a software update at a first component in a target transport, parsing the software update by a second component in the target transport into a first portion of critical updates and a second portion of non-critical updates, verifying the first portion, by the second component, based on a source of the software update, running, by the second component, the verified first portion with a dedicated process on the target transport for a pre-set period of time, and responsive to positive results over the period of time, running the verified first portion with other processes on the target transport. | 2021-12-23 |
20210397431 | EXECUTION OF TRANSPORT SOFTWARE UPDATE - An example operation includes one or more of receiving an authorization code for a software update by a transport component, executing the software update on the transport component, responsive to a successful execution of the software update, generating a validation code by the transport component, and running the software update on other transport components based on the validation code. | 2021-12-23 |
20210397432 | TECHNIQUES FOR FIRMWARE UPDATES WITH ACCESSORIES - Techniques are provided for updating firmware of an accessory device. An accessory development kit of the accessory device can communicate with an accessory update daemon using a home management daemon of a controller device. Based on a firmware update policy of the accessory device, the accessory update daemon will check for firmware updates. When firmware updates are available, the accessory update daemon can instruct the home management daemon to stage the update. The home management daemon will notify the accessory development kit to be in a stage mode. The accessory update daemon will download the firmware update and send the firmware update to the accessory development kit of the accessory device using an interface for the secure channel provided by the home management daemon. The accessory device can be a third party accessory device that does not have its own firmware updating application. | 2021-12-23 |
20210397433 | ON-BOARD UPDATE DEVICE, UPDATE PROCESSING PROGRAM, PROGRAM UPDATE METHOD, AND ON-BOARD UPDATE SYSTEM - An on-board update device that acquires an update program transmitted from an external server located outside a vehicle, and performs processing to update a program for an on-board ECU provided in the vehicle, the on-board update device including a control unit that controls transmission of the update program to the on-board ECU, wherein, when the transmission is to be resumed after an interruption of the transmission, if the identification information of the on-board ECU is different from the identification information before the interruption of the transmission, the control unit determines that the on-board ECU has been replaced during the interruption of the transmission, and performs predetermined processing. | 2021-12-23 |
20210397434 | METHOD FOR MANAGING EQUIPMENT IN ORDER TO UPDATE A FIRMWARE - A method is described for remotely managing a piece of network connection equipment in order to deploy a firmware. The method includes generating connection data of the equipment to the network over a predetermined period of time, determining, on a remote management server, a time slot, specific to the equipment, for downloading the firmware depending on the generated connection data, transmitting, to the equipment, information relating to the time slot specific to this equipment and to an address of a download server, sending, from the equipment, a request for downloading the firmware to the download server, sending, from the download server, firmware download data to the equipment, and downloading the firmware during the time slot specific to the equipment. | 2021-12-23 |
20210397435 | TECHNIQUES FOR FIRMWARE UPDATES WITH ACCESSORIES - Techniques are provided for updating firmware of an accessory device. An accessory development kit of the accessory device can communicate with an accessory update daemon using a home management daemon of a controller device. Based at least in part on a firmware update policy of the accessory device, the accessory update daemon will check for firmware updates. When firmware updates are available, the accessory update daemon can instruct the home management daemon to stage the update. The home management daemon will notify the accessory development kit to be in a stage mode. The accessory update daemon will download the firmware update and send the firmware update to the accessory development kit of the accessory device using an interface for the secure channel provided by the home management daemon. The accessory device can be a third party accessory device that does not have its own firmware updating application. | 2021-12-23 |
20210397436 | TECHNIQUES FOR FIRMWARE UPDATES WITH ACCESSORIES - Techniques are provided for updating firmware of an accessory device. An accessory development kit of the accessory device can communicate with an accessory update daemon using a home management daemon of a controller device. Based at least in part on a firmware update policy of the accessory device, the accessory update daemon will check for firmware updates. When firmware updates are available, the accessory update daemon can instruct the home management daemon to stage the update. The home management daemon will notify the accessory development kit to be in a stage mode. The accessory update daemon will download the firmware update and send the firmware update to the accessory development kit of the accessory device using an interface for the secure channel provided by the home management daemon. The accessory device can be a third party accessory device that does not have its own firmware updating application. | 2021-12-23 |
20210397437 | System For Determining Availability Of Software Update For Automation Apparatus - A system for determining an availability of a software update for an automation apparatus comprises a reader configured to wirelessly read data from the automation apparatus that is at least partly powered off, a processing circuitry configured to determine, based on the read data, the availability of the software update for the automation apparatus, and a database configured to store at least an update for the software. The processing circuitry is further configured, in response to an available software update, to enable the automation apparatus to receive the update for the software. | 2021-12-23 |
20210397438 | REMOTE DETECTION OF DEVICE UPDATES - A method comprising: identifying, by a cloud server, a set of local area networks (LANs) associated with the cloud server, based on a similarity parameter with respect to an end device connected within each of the LANs; forming a communications network comprising all of the LANs in the set; detecting, by at least a subset of the LANs, a download file received by the respective end devices in each of the LANs in the subset; calculating an update event likelihood score with respect to the download file, based, at least in part, on a plurality of parameters associated with the download file; and issuing, by at least one of the LANs in the subset, a notification that the download file is associated with an update event affecting all of the end devices in each of the LANs, when the update event likelihood score exceeds a specified threshold. | 2021-12-23 |
20210397439 | AUTOMATIC PROBABILISTIC UPGRADE OF TENANT DEVICES - In one example of the technology, device information associated with a device upgrade and a plurality of devices includes risk parameters including values associated with a minimum health value that is associated with a minimum acceptable number of healthy devices among the plurality of devices and a confidence value associated with a minimum acceptable probability that the number of healthy devices among the plurality of devices is at least as great as the minimum health value; and, for each device, a success probability value that is associated with a probability that the device will be healthy after the device upgrade is performed on the device. A Poisson binomial distribution is iteratively used to determine a set of devices among the plurality of devices for which the largest possible number of devices are included in the set of devices while meeting the risk parameters. The set of devices is then upgraded. | 2021-12-23 |
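The Poisson binomial selection loop can be sketched with a small dynamic program. Note that the sketch phrases the risk parameters as a maximum acceptable failure count rather than the abstract's minimum-healthy count (equivalent framings for a fixed set size), and the greedy most-reliable-first order is an assumption.

```python
# Sketch of probabilistic upgrade selection: grow the upgrade set while
# the Poisson-binomial probability of staying within the failure budget
# remains above the confidence threshold.
def prob_at_most_failures(ps, max_failures):
    """P(failures <= max_failures) for independent success probs ps,
    computed with the standard Poisson-binomial dynamic program."""
    dist = [1.0]  # dist[j] = P(j failures so far)
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            new[j] += q * p            # success: failure count unchanged
            new[j + 1] += q * (1 - p)  # failure
        dist = new
    return sum(dist[: max_failures + 1])

def pick_devices(success_probs, max_failures, confidence):
    """Greedily add the most reliable devices while risk limits hold."""
    chosen = []
    for p in sorted(success_probs, reverse=True):
        if prob_at_most_failures(chosen + [p], max_failures) >= confidence:
            chosen.append(p)
    return chosen
```

With a zero-failure budget and 90% confidence, a 0.99 and a 0.95 device make the cut (joint success 0.9405) but adding a 0.5 device would drop the probability below threshold, so it is deferred.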
20210397440 | SYSTEM AND METHOD FOR INTELLIGENT POWER MANAGEMENT OF FIRMWARE UPDATES - An information handling system includes power inputs, a battery, a BMC, a memory, and a processor. The BMC determines a first indication as to which of the power inputs are coupled to associated power sources, and a first duration of time that the battery can provide power to the information handling system. The memory stores a firmware element. The processor receives a firmware update that includes a second indication as to which of the power inputs are to be coupled to their associated power sources as a condition for saving the firmware update, and that includes a second duration of time that it is expected to take to save the firmware update. The information handling system saves the firmware update to the memory when the first indication matches the second indication and when the first duration of time is greater than the second duration of time. | 2021-12-23 |
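The gating check reduces to two comparisons; the field names below are hypothetical, and the subset test is one simple reading of "the first indication matches the second indication".

```python
# Toy sketch of the power-management gate: save the firmware update only
# if the attached power inputs cover what the update requires and the
# battery outlasts the expected save time.
def can_save_update(attached_inputs, battery_minutes, update):
    inputs_ok = update["required_inputs"].issubset(attached_inputs)
    time_ok = battery_minutes > update["save_minutes"]
    return inputs_ok and time_ok

update = {"required_inputs": {"ac_adapter"}, "save_minutes": 4}
ok = can_save_update({"ac_adapter", "usb_c"}, battery_minutes=30,
                     update=update)
```

The same call with a two-minute battery runtime, or without the AC adapter attached, would refuse the save rather than risk a brick mid-write.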
20210397441 | FIRMWARE UPDATING SYSTEM AND METHOD - A firmware updating system and method are provided. The firmware updating method includes configuring a host to digitally sign a firmware to be updated, and configuring an electronic device to perform an authorization verification on an update tool, such that only an update tool that passes the verification has update permission. The update tool uses an encryption algorithm to encrypt the firmware to be updated, which includes a digital signature. After the encryption is completed, the host sends the encrypted firmware to the electronic device through the update tool. The electronic device then uses a decryption algorithm to decrypt the received firmware to obtain the firmware to be updated, including the digital signature, and writes the firmware to be updated into a firmware storage area to be updated. The electronic device then verifies the digital signature in the firmware to be updated. | 2021-12-23 |
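The sign-then-encrypt round trip can be sketched with standard-library primitives; a single-byte XOR stands in for the real cipher and HMAC-SHA256 for the digital signature, so this is a toy model of the flow, not the patented scheme.

```python
# Toy sketch of the firmware update flow: the host signs, the signed
# image is encrypted for transport, and the device decrypts and verifies
# the signature before writing the firmware to its storage area.
import hashlib
import hmac

SIGN_KEY = b"host-signing-key"
ENC_KEY = b"\x5a"  # toy single-byte XOR "cipher" key

def xor(data):
    return bytes(b ^ ENC_KEY[0] for b in data)

def host_prepare(firmware):
    signature = hmac.new(SIGN_KEY, firmware, hashlib.sha256).digest()
    return xor(firmware + signature)       # encrypt the signed image

def device_install(blob):
    plain = xor(blob)                      # decrypt
    firmware, signature = plain[:-32], plain[-32:]
    expected = hmac.new(SIGN_KEY, firmware, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("signature check failed")
    return firmware                        # safe to write to storage
```

Flipping a single bit in the transported blob makes `device_install` raise rather than install the tampered image.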
20210397442 | Software Patch Comparison - A method includes, receiving a first version of a software patch for an application. The method further includes receiving a second version of the software patch, the second version being associated with an upstream version of the application. The method further includes, comparing the first version of the software patch with the second version of the software patch, the comparing accounting for differences between the first version of the software patch and the second version of the software patch that result from differences between the application and the upstream version of the application. The method further includes, in response to comparing, tagging the first version of the software patch as a match when there are no differences other than the differences between the first version of the software patch and the second version of the software patch that result from differences between the application and the upstream version of the application. | 2021-12-23 |
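The comparison that discounts upstream/downstream differences might be sketched as a normalize-then-diff step; the known-difference set below is a hypothetical stand-in for however those application-level differences are actually identified.

```python
# Sketch of patch comparison: strip the lines known to differ between
# the downstream application and its upstream counterpart, then diff;
# the patches are tagged a match only if no real divergence remains.
import difflib

# Hypothetical lines attributable to the app/upstream difference.
KNOWN_APP_DIFFS = {"-import vendor_shim", "+import vendor_shim"}

def normalize(patch_lines):
    return [l for l in patch_lines if l not in KNOWN_APP_DIFFS]

def patches_match(patch_a, patch_b):
    a, b = normalize(patch_a), normalize(patch_b)
    return list(difflib.unified_diff(a, b)) == []

downstream = ["+import vendor_shim", "+fix = True"]
upstream = ["+fix = True"]
tag = "match" if patches_match(downstream, upstream) else "differs"
```

Here the extra `vendor_shim` import is attributable to the downstream application, so the two patch versions are still tagged as a match.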
20210397443 | SOFTWARE UPDATE APPARATUS, MASTER, OTA MASTER, AND NETWORK SYSTEM - A master includes: a communication module configured to request a download of update data from a center; a first storage device configured to store the update data obtained by the download; and one or more processors configured to, at a time of execution of the download, check at least one of a free space size of the first storage device or a free space size of a second storage device of each of one or more update-target in-vehicle devices among a plurality of in-vehicle devices connected through an in-vehicle network, and perform control such that, based on the update data, update software is installed, or installed and activated, in the one or more update-target in-vehicle devices, wherein the communication module is configured to request the download of the update data from the center based on the free space size. | 2021-12-23 |
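The free-space gate before requesting the download reduces to a couple of comparisons; the sizes and data shapes below are illustrative only.

```python
# Toy sketch of the OTA master's free-space check: request the download
# only when the master's own storage and every update-target ECU have
# room for the update data.
def may_download(update_size, master_free, ecu_free_sizes):
    return master_free >= update_size and all(
        free >= update_size for free in ecu_free_sizes
    )

ok = may_download(update_size=120, master_free=500,
                  ecu_free_sizes=[200, 150])
```

If any single target ECU lacks space, the request to the center is withheld rather than stranding a half-distributable download on the master.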
20210397444 | OPPORTUNISTIC SOFTWARE UPDATES DURING SELECT OPERATIONAL MODES - Disclosed embodiments relate to opportunistically updating Electronic Control Unit (ECU) software in a vehicle. Operations may include receiving, at a controller in a vehicle, a wireless transmission indicating a need to update software running on at least one ECU in the vehicle; monitoring an operational status of the vehicle to determine whether the vehicle is in a first mode of operation in which an ECU software update is prohibited; delaying the ECU software update while the vehicle is in the first mode of operation; continuing to monitor the operational status of the vehicle to determine whether the vehicle is in a second mode of operation in which the ECU software update is permitted; and enabling updating of the at least one ECU with the delayed ECU software update when it is determined that the vehicle is in the second mode of operation. | 2021-12-23 |
20210397445 | IMPLEMENTING A DISTRIBUTED REGISTER TO VERIFY THE STABILITY OF AN APPLICATION SERVER ENVIRONMENT FOR SOFTWARE DEVELOPMENT AND TESTING - A distributed register/ledger in a distributed trust computing network is the basis for determining the stability of a computing environment. Based on whether the determined computing environment stability meets predetermined thresholds, decisions are made on whether to allow computing code to be tested within the computing environment or to allow edits/changes to computing code to be checked in to the code repository. | 2021-12-23 |
20210397446 | Dependency Lock In CICD Pipelines - Deployment of a modified service affects the functioning of other services that make use of the service. To address the problems that deployment of modified executable code can cause in other services, a dependency lock is placed on candidate code to prevent deployment until tests on the client services are successfully completed. Developers of client services that rely on a supplier service are enabled to place a dependency lock on the service. As a result, deployment of the supplier service is only allowed when tests of the client services complete successfully. The administrator of the service being deployed may control which other users are able to add dependency locks without giving those users other permissions such as the ability to modify the source code of the service, the ability to deploy the service, and the like. | 2021-12-23 |
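The lock check before deployment can be sketched as a lookup over client test results; the service names and data shapes below are hypothetical.

```python
# Sketch of a dependency lock gate: a supplier service may be deployed
# only when every client service that placed a lock on it has passing
# tests; failing clients are reported as blockers.
def may_deploy(service, locks, test_results):
    blockers = [client for client in locks.get(service, [])
                if not test_results.get(client, False)]
    return (len(blockers) == 0, blockers)

# Two client services hold dependency locks on the supplier.
locks = {"payments-api": ["checkout-ui", "billing-batch"]}
results = {"checkout-ui": True, "billing-batch": False}
allowed, blockers = may_deploy("payments-api", locks, results)
```

Once `billing-batch` passes its tests, the same call returns an empty blocker list and the deployment proceeds.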
20210397447 | AUTOMATED COMPLIANCE AND TESTING FRAMEWORK FOR SOFTWARE DEVELOPMENT - A system for enforcing compliance and testing for software development, comprising an indexing service configured to create a dataset by processing and indexing source code of a project by a developer, perform a code audit on the indexed source code, store results from the code audit in the dataset, gather additional information relating to the provided project, store the additional information in the dataset, and store the dataset into memory; and a monitoring service configured to continuously monitor the project for source code changes and make changes to the dataset as needed. The system further comprises an enforcement module to automatically verify code and other media related to the software development process by ensuring obligations from a rules database are met; where a compliance check cannot be automated, the check is forwarded to an appropriate authority, the manually reviewed compliance check is received back, and automated recommendations for compliance adherence are then produced and implemented. | 2021-12-23 |
20210397448 | EXTENDED MEMORY COMPONENT - Systems, apparatuses, and methods related to extended memory microcode components for performing extended memory operations are described. An example apparatus can include a plurality of computing devices. Each of the computing devices can include a processing unit and a memory array. The example apparatus can include a plurality of microcode components coupled to each of the plurality of computing devices and each comprise a set of microcode instructions. The example apparatus can further include a communication subsystem coupled to a host and to each of the plurality of computing devices. Each of the plurality of computing devices can be configured to receive a request from the host, retrieve at least one of the set of microcode instructions, transfer a command and the at least one of the set of microcode instructions, and receive a result of performing the operation. | 2021-12-23 |
20210397449 | ROBUST, EFFICIENT MULTIPROCESSOR-COPROCESSOR INTERFACE - Systems and methods for an efficient and robust multiprocessor-coprocessor interface that may be used between a streaming multiprocessor and an acceleration coprocessor in a GPU are provided. According to an example implementation, in order to perform an acceleration of a particular operation using the coprocessor, the multiprocessor: issues a series of write instructions to write input data for the operation into coprocessor-accessible storage locations, issues an operation instruction to cause the coprocessor to execute the particular operation; and then issues a series of read instructions to read result data of the operation from coprocessor-accessible storage locations to multiprocessor-accessible storage locations. | 2021-12-23 |
20210397450 | DEVICES AND METHODS FOR PARALLELIZED RECURSIVE BLOCK DECODING - A decoder for determining an estimate of a vector of information symbols carried by a signal received through a transmission channel represented by a channel matrix is provided. The decoder includes a block division unit configured to divide the vector of information symbols into two or more sub-vectors, each sub-vector being associated with a block level; two or more processors configured to determine, in parallel, candidate sub-vectors and to store the candidate sub-vectors in a first stack. Each processor is configured to determine at least a candidate sub-vector by applying a symbol estimation algorithm and to store each candidate sub-vector with a decoding metric and the block level associated with the candidate sub-vector. The decoding metric is lower than or equal to a decoding metric threshold. A processor among the two or more processors is configured to determine at least a candidate vector from candidate sub-vectors stored in the first stack, the candidate vector being associated with a cumulated decoding metric and to update the decoding metric threshold from the cumulated decoding metric. | 2021-12-23 |
20210397451 | STREAMING ENGINE WITH CACHE-LIKE STREAM DATA STORAGE AND LIFETIME TRACKING - A streaming engine employed in a digital data processor specifies a fixed read only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores the data elements next to be supplied to functional units for use as operands. The streaming engine fetches stream data ahead of use by the central processing unit core into a stream buffer constructed like a cache. The stream buffer cache includes plural cache lines, each including tag bits, at least one valid bit, and data bits. Cache lines are allocated to store newly fetched stream data. Cache lines are deallocated upon consumption of the data by a central processing unit core functional unit. Instructions preferably include operand fields with a first subset of codings corresponding to registers, a stream read only operand coding, and a stream read and advance operand coding. | 2021-12-23 |
20210397452 | VIRTUAL 3-WAY DECOUPLED PREDICTION AND FETCH - A unified queue configured to perform decoupled prediction and fetch operations, and related apparatuses, systems, methods, and computer-readable media, is disclosed. The unified queue has a plurality of entries, where each entry is configured to store information associated with at least one instruction, and where the information comprises an identifier portion, a prediction information portion, and a tag information portion. The unified queue is configured to update the prediction information portion of each entry responsive to a prediction block, and to update the tag information portion of each entry responsive to a tag and TLB block. The prediction information may be updated more than once, and the unified queue is configured to take corrective action where a later prediction conflicts with an earlier prediction. | 2021-12-23 |
20210397453 | REUSING FETCHED, FLUSHED INSTRUCTIONS AFTER AN INSTRUCTION PIPELINE FLUSH IN RESPONSE TO A HAZARD IN A PROCESSOR TO REDUCE INSTRUCTION RE-FETCHING - Reusing fetched, flushed instructions after an instruction pipeline flush in response to a hazard in a processor to reduce instruction re-fetching is disclosed. An instruction processing circuit is configured to detect fetched performance degrading instructions (PDIs) in a pre-execution stage in an instruction pipeline that may cause a precise interrupt that would cause flushing of the instruction pipeline. In response to detecting a PDI in an instruction pipeline, the instruction processing circuit is configured to capture the fetched PDI and/or its successor, younger fetched instructions that are processed in the instruction pipeline behind the PDI, in a pipeline refill circuit. If a later execution of the PDI in the instruction pipeline causes a flush of the instruction pipeline, the instruction processing circuit can inject the fetched PDI and/or its younger instructions previously captured from the pipeline refill circuit into the instruction pipeline to be processed without such instructions being re-fetched. | 2021-12-23 |
20210397454 | INSTRUCTION TO VECTORIZE LOOPS WITH BACKWARD CROSS-ITERATION DEPENDENCIES - Methods and apparatus relating to techniques for vectorizing loops with backward cross-iteration dependencies are described. In an embodiment, execution of one or more instructions resolves a cross-iteration dependency of one or more operations of a loop. The execution of the one or more instructions resolves the cross-iteration dependency of the one or more operations based at least in part on one or more distance count computations to a preceding iteration of the loop. Other embodiments are also disclosed and claimed. | 2021-12-23 |
20210397455 | PREDICTION USING INSTRUCTION CORRELATION - A data processing apparatus is provided, which is able to provide predictions for hard to predict instructions. Prediction circuitry generates predictions relating to predictable instructions in a stream, where the prediction circuitry comprises storage circuitry to store, in respect of each of the predictable instructions, a reference to a set of monitored instructions in the stream to be used for generating predictions for the predictable instructions. Processing circuitry receives the predictions from the prediction circuitry and executes the predictable instructions in the stream using the predictions. Programmable instruction correlation parameter storage circuitry stores a given correlation parameter between a given predictable instruction in the stream and a subset of the set of monitored instructions of the given predictable instruction, to assist the prediction circuitry in generating the predictions. If the programmable instruction correlation parameter storage circuitry is currently storing the given correlation parameter, the prediction circuitry generates a given prediction relating to the given predictable instruction based on the subset of the set of monitored instructions indicated in the programmable instruction correlation parameter storage circuitry. Otherwise the prediction circuitry generates the given prediction relating to the given predictable instruction based on the set of monitored instructions indicated in the storage circuitry. | 2021-12-23 |
20210397456 | SYSTEMS, METHODS, AND DEVICES FOR QUEUE AVAILABILITY MONITORING - A method may include determining, with a queue availability module, that an entry is available in a queue, asserting a bit in a register based on determining that an entry is available in the queue, determining, with a processor, that the bit is asserted, and processing, with the processor, the entry in the queue based on determining that the bit is asserted. The method may further include storing the register in a tightly coupled memory associated with the processor. The method may further include storing the queue in the tightly coupled memory. The method may further include determining, with the queue availability module, that an entry is available in a second queue, and asserting a second bit in the register based on determining that an entry is available in the second queue. The method may further include finding the first bit in the register using a find first instruction. | 2021-12-23 |
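The availability mechanism in 20210397456 — a register bit per queue, polled with a "find first" instruction — can be pictured with a minimal Python sketch. This is an illustration of the general technique, not the patented implementation; the class and method names are invented for this example.

```python
# Illustrative sketch: a queue-availability register in which each bit flags a
# non-empty queue, and the lowest-numbered ready queue is found with the
# equivalent of a "find first set" instruction.

class QueueAvailability:
    def __init__(self, num_queues):
        self.register = 0                      # one bit per queue
        self.queues = [[] for _ in range(num_queues)]

    def enqueue(self, queue_index, entry):
        self.queues[queue_index].append(entry)
        self.register |= 1 << queue_index      # assert the bit for this queue

    def find_first(self):
        # Emulate a find-first-set instruction: x & -x isolates the lowest set bit.
        if self.register == 0:
            return None
        return (self.register & -self.register).bit_length() - 1

    def process_one(self):
        idx = self.find_first()
        if idx is None:
            return None
        entry = self.queues[idx].pop(0)
        if not self.queues[idx]:
            self.register &= ~(1 << idx)       # deassert once the queue drains
        return idx, entry
```

In hardware the register would live in tightly coupled memory next to the processor, as the abstract notes; the sketch only shows the assert/find-first/deassert cycle.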
20210397457 | ISOLATING APPLICATIONS AT THE EDGE - Disclosed herein are enhancements for deploying applications in an edge system of a communication network. In one implementation, a runtime environment identifies a request from a Hypertext Transfer Protocol (HTTP) accelerator service to be processed by an application. In response to the request, the runtime environment may identify an isolation resource to support the request, initiate execution of code for the application, and pass context to the code. Once initiated, the runtime environment may copy data from the artifact to the isolation resource using the context and return control to the HTTP accelerator service upon executing the code. | 2021-12-23 |
20210397458 | SYSTEM AND METHOD OF UTILIZING PLATFORM APPLICATIONS WITH INFORMATION HANDLING SYSTEMS - In one or more embodiments, one or more systems and/or one or more methods may: register a subroutine configured to store multiple addresses of a volatile memory medium (VMM) of an information handling system (IHS); for each IHS initialization executable/OS executable pair of multiple IHS initialization executable/OS executable pairs: retrieve, from a first non-volatile memory medium (NVMM), an IHS initialization executable of the IHS initialization executable/OS executable pair; copy, by the IHS initialization executable, an OS executable of the IHS initialization executable/OS executable pair from the first NVMM to the VMM; call, by the IHS initialization executable, the subroutine; store, by the subroutine, an address associated with the OS executable via a data structure stored by the VMM; and copy, by a first OS executable, the OS executable from the VMM to a second NVMM based at least on the address associated with the OS executable. | 2021-12-23 |
20210397459 | SERVER WITH SETUP MENU FOR THE BIOS SETTINGS - A server is provided. The server includes a Basic Input/Output System (BIOS) memory, a storage device, and a processing unit. The BIOS memory stores a BIOS code, and the BIOS code provides a BIOS setup menu and a saving option in the BIOS setup menu for setting information of a plurality of BIOS setup items. The processing unit is coupled to the BIOS memory and the storage device. The processing unit executes the BIOS code during a power-on self-test (POST) process of the server. When executing the saving option, the processing unit stores the setting information of the plurality of BIOS setup items into the BIOS memory and the storage device, and the processing unit also stores a designated file name into the storage device, the designated file name corresponding to the setting information of the plurality of BIOS setup items that is stored into the storage device. | 2021-12-23 |
20210397460 | DUAL MODE HARDWARE RESET - Systems and methods are disclosed, including selectively providing one of a first reset or a second reset to transition a storage system from a low power mode to an operational power mode in response to a hardware reset signal and a value of a control bit on the storage system. | 2021-12-23 |
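The selection logic in 20210397460 reduces to a two-input decision: the reset line and a control bit together pick one of two reset behaviours. A minimal sketch, with invented names standing in for the storage system's actual reset modes:

```python
# Illustrative sketch: select between two reset behaviours based on the
# hardware reset signal and a control bit. The mode names are placeholders.

def handle_hw_reset(reset_signal, control_bit):
    """Return which reset to apply when the hardware reset line is asserted."""
    if not reset_signal:
        return None                    # no reset requested
    # The control bit stored on the device chooses between the two resets.
    return "full-reset" if control_bit else "partial-reset"
```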
20210397461 | PRIORITIZING THE PRE-LOADING OF APPLICATIONS WITH A CONSTRAINED MEMORY BUDGET USING CONTEXTUAL INFORMATION - Embodiments of systems and methods for prioritizing the pre-loading of applications with a constrained memory budget using contextual information are described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to: collect user context information and system context information, detect a triggering event based upon the user context information and the system context information, identify a memory budget for pre-loading one or more applications, and select the one or more applications with one or more settings configured to maintain a memory usage for the pre-loading below the memory budget. | 2021-12-23 |
20210397462 | METHOD AND SYSTEM FOR AUTOMATIC SELECTION OF PROCESS AUTOMATION RPA TOOL AND BOT - The present invention discloses a method and a system for automatic selection of a Robotic Process Automation (RPA) tool and BOT. The method comprises receiving input data associated with a process to be executed, selecting an RPA tool from a plurality of RPA tools for process execution based on the input data and historical process data, wherein the selection is performed by calculating information gain for each parameter of the input data and computing probability for each type of RPA tool based on the information gain and the historical process data, identifying one or more BOTs from a plurality of BOTs based on the selected RPA tool, historical BOT data and the input data for the process execution, and executing the identified one or more BOTs on one or more devices based on selection of the one or more devices from an available plurality of devices. | 2021-12-23 |
20210397463 | DECLARATIVELY DEFINED USER INTERFACE TIMELINE VIEWS - A device implementing a system to render user interface timeline views for display of dynamic application content includes a processor configured to retrieve a data structure corresponding to user interfaces of an application associated with respective times, and at least one declaratively defined user interface element. The processor is further configured to determine whether a rendering cost of a plurality of the user interfaces complies with an update budget of the application, where the rendering cost includes interpreting the at least one declaratively defined user interface element for the respective times. When the rendering cost is determined to comply, the processor is further configured to render the plurality of the user interfaces in advance of the respective times associated with the plurality of the user interfaces. The processor is further configured to display at least one of the rendered plurality of the user interfaces based on a current time. | 2021-12-23 |
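The budget test in 20210397463 — only pre-render timeline views whose combined rendering cost fits the application's update budget — can be sketched as a simple prefix-sum check. This is a schematic illustration, not Apple's implementation; the function and parameter names are invented.

```python
# Illustrative sketch: decide how many upcoming timeline views can be
# rendered in advance without exceeding the application's update budget.

def views_within_budget(view_costs, update_budget):
    """Return how many views (ordered by their associated times) fit the budget.

    view_costs: per-view rendering cost, e.g. the cost of interpreting the
    declaratively defined UI elements for each view's time.
    """
    spent = 0.0
    count = 0
    for cost in view_costs:
        if spent + cost > update_budget:
            break                      # rendering this view would blow the budget
        spent += cost
        count += 1
    return count
```

When the check fails for the full set, the system would fall back to rendering fewer views ahead of time and picking the one matching the current time at display.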
20210397464 | TRANSITIONING APPLICATION WINDOWS BETWEEN LOCAL AND REMOTE DESKTOPS - The disclosure provides for transitioning application windows between local and remote desktops. Example implementations include opening a first file with a first application to generate a first application window on a first desktop window on a user display; based at least on a trigger event for transitioning the first application window from the first desktop window to a second desktop window, determining whether a second application is available for the second desktop window to produce a version of the first application window; and based at least on the second application being available: transferring the first file across a network to become a second file; and opening the second file with the second application to generate a second application window on the second desktop window, the second application window replacing the first application window on the user display. The transition may go either direction. | 2021-12-23 |
20210397465 | CONTAINER-AS-A-SERVICE (CAAS) CONTROLLER FOR MONITORING CLUSTERS AND IMPLEMENTING AUTOSCALING POLICIES - Embodiments described herein are generally directed to a controller of a managed container service that facilitates autoscaling based on bare-metal machines available within a private cloud. According to an example, a CaaS controller of a managed container service monitors a metric of a cluster deployed on behalf of a customer within a container orchestration system. Responsive to a scaling event being identified for the cluster based on the monitoring and an autoscaling policy associated with the cluster, a BMaaS provider associated with the private cloud may be caused to create an inventory of bare-metal machines available within the private cloud. Finally, a bare-metal machine is identified to be added to the cluster by selecting among the bare-metal machines based on the autoscaling policy, the inventory and a best fit algorithm configured in accordance with a policy established by or on behalf of the customer. | 2021-12-23 |
20210397466 | CONTAINER-AS-A-SERVICE (CAAS) CONTROLLER FOR SELECTING A BARE-METAL MACHINE OF A PRIVATE CLOUD FOR A CLUSTER OF A MANAGED CONTAINER SERVICE - Embodiments described herein are generally directed to a controller of a managed container service that facilitates selection among bare metal machines available within a private cloud. According to an example, a request is received by a Container-as-a-Service controller from a CaaS portal to create a cluster based at least in part on resources of a private cloud of a customer of a managed container service. An inventory of bare-metal machines available within the private cloud is received from a Bare-Metal-as-a-Service (BMaaS) provider associated with the private cloud. A particular bare metal machine is identified for the cluster by selecting among the available bare-metal machines based on cluster information associated with the request, the inventory, and a best fit algorithm configured in accordance with a policy established by the customer. | 2021-12-23 |
20210397467 | NETWORK TRANSPARENCY ON VIRTUAL MACHINES USING SOCKET IMPERSONATION - A system includes a hypervisor, a virtual machine (VM), and a host system. The VM includes a kernel and an application and the VM is in communication with the hypervisor. The host system includes a memory and one or more processors, where the one or more processors are in communication with the memory. The host system hosts the VM and the hypervisor. The one or more processors are configured to perform creating, via the kernel, a first socket accessible to the application. A second socket in communication with an endpoint is created at the host system. A virtual communication channel between the hypervisor and the kernel of the VM connects the first socket to the hypervisor. The hypervisor is configured to transmit inputs/outputs (I/Os) received from the application through the virtual channel to the endpoint via the second socket. | 2021-12-23 |
20210397468 | SYSTEMS AND METHODS TO DECREASE THE SIZE OF A COMPOUND VIRTUAL APPLIANCE FILE - An application is provided as a compound virtual appliance having components to be hosted by virtual machines. Each component includes a set of virtual machine disks. Partial versions of the components are created by removing from each component each virtual machine disk determined to be a duplicate of a virtual machine disk of another component. A compact version of the compound virtual appliance is created by packing together the partial versions of the components and a single copy of each virtual machine disk having been determined to be a duplicate. The compact compound virtual appliance is deployed to a customer site. At the customer site, a complete version of the compound virtual appliance is reconstructed by adding back the single copy of each virtual machine disk having been determined to be a duplicate into each component having had the duplicate virtual machine disk removed. | 2021-12-23 |
20210397469 | SYSTEMS AND METHODS FOR COMPUTING A SUCCESS PROBABILITY OF A SESSION LAUNCH USING STOCHASTIC AUTOMATA - Described embodiments provide systems and methods for a management service using virtual delivery agent measurement metrics to determine the probability of the virtual delivery agent successfully launching a connection to a virtual application and desktop service. A probability mass function is implemented to determine the correlation between the measurement metrics over time, and the probability mass function distribution is mapped to states in a linear Markov chain such that the probability of the virtual delivery agent successfully launching a connection to a virtual application and desktop service is based on the current state of the Markov chain. | 2021-12-23 |
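The shape of the model in 20210397469 — a linear Markov chain whose state determines a launch-success probability — can be sketched numerically. This is an assumed toy model for illustration only: the transition probabilities, the per-state success values, and the function names are all invented, not taken from the filing.

```python
# Illustrative sketch: a linear Markov chain over metric-derived states.
# Mass moves one state up with probability p_up, one state down with p_down,
# and otherwise stays put; the ends of the chain absorb overflow.

def step(dist, p_up, p_down):
    """One transition of the linear chain applied to a state distribution."""
    n = len(dist)
    out = [0.0] * n
    stay = 1.0 - p_up - p_down
    for i, mass in enumerate(dist):
        out[i] += mass * stay
        out[min(i + 1, n - 1)] += mass * p_up
        out[max(i - 1, 0)] += mass * p_down
    return out

def launch_success_probability(dist, success_by_state, steps, p_up=0.3, p_down=0.1):
    """Evolve the chain, then weight each state's success value by its mass."""
    for _ in range(steps):
        dist = step(dist, p_up, p_down)
    return sum(mass * s for mass, s in zip(dist, success_by_state))
```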
20210397470 | METHOD TO ORGANIZE VIRTUAL MACHINE TEMPLATES FOR FAST APPLICATION PROVISIONING - Virtualized computing instances, such as virtual machines, in a virtualized computing environment are provisioned using a tree-based template structure. The tree-based template structure includes a base node and multiple nodes linked to the base node. Each of the multiple nodes includes at least one component that represents a delta relative to the base node. By matching the requirements and role of a virtualized computing instance to be provisioned with the content(s) of a particular node, the particular node can be selected for cloning/creating the virtualized computing instance. | 2021-12-23 |
20210397471 | METHODS AND APPARATUS TO MANAGE HEAT IN A CENTRAL PROCESSING UNIT - Methods, apparatus, systems, and articles of manufacture to manage heat in a CPU are disclosed. An example apparatus includes a metric collection agent to output a metric representative of a property of the central processing unit including a first physical core and a second physical core, the first physical core and the second physical core mapped to first and second logical cores by a map. A policy processor is to evaluate the metric to determine whether to change the map to remap at least one of the first and second logical cores relative to the first and second physical cores to move a process between the first and second physical cores to adjust the property, the moving of the process between the physical cores being transparent to an application/OS layer. A mapping controller is responsive to the policy processor to change the map. | 2021-12-23 |
20210397472 | SYSTEM AND METHODS FOR PROVISIONING DIFFERENT VERSIONS OF A VIRTUAL APPLICATION - A computing device may include a memory and a processor cooperating with the memory and configured to provide first and second application layers that include different versions of a virtual application accessible by a client device. The first and second versions of the virtual application may interoperate with application libraries of different application layers. | 2021-12-23 |
20210397473 | METHOD OF CREATING HIGH AVAILABILITY FOR SINGLE POINT NETWORK GATEWAY USING CONTAINERS - Methods and apparatus consistent with the present disclosure may be used in environments where multiple different virtual sets of program instructions are executed by shared computing resources when different processes are performed in a virtual computing environment. Methods consistent with the present disclosure may be used to provide a form of redundancy that does not require two physically distinct computers. Such methods may use a set of physical hardware components and two or more sets of synchronized virtual gateway software. Architectural features of physical hardware components included in an apparatus consistent with the present disclosure may be abstracted from sets of virtual program code when one virtual software process backs up another virtual software process at the apparatus. | 2021-12-23 |
20210397474 | PREDICTIVE SCHEDULED BACKUP SYSTEM AND METHOD - Embodiments for predictive scheduling of backups in a data protection system by initiating a first backup job in a series of scheduled consecutive backup jobs, wherein a second backup job is allowed to begin only after the first backup job is finished and not active, detecting whether or not the first backup job is still active when a second job is to start, and if so, estimating an amount of additional time required to finish the first backup job. The second backup job is then rescheduled to start at least at the end of the additional time. The estimated amount of additional time is determined using a throughput to target storage device parameter. This parameter is periodically checked to determine if there is a change to the estimated amount of additional time, and if so, the estimated time is recalculated based on the changed parameter. | 2021-12-23 |
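The rescheduling arithmetic in 20210397474 boils down to one estimate: remaining work divided by the measured throughput to the target storage device, added to the second job's start time. A minimal sketch with invented function names (the filing does not specify this interface):

```python
# Illustrative sketch: push a scheduled backup job past the estimated finish
# time of a still-active predecessor, using throughput-to-target-storage.

def estimate_remaining_seconds(bytes_remaining, throughput_bytes_per_sec):
    """Estimate how much longer the active job needs at its current throughput."""
    if throughput_bytes_per_sec <= 0:
        raise ValueError("throughput must be positive")
    return bytes_remaining / throughput_bytes_per_sec

def reschedule_second_job(scheduled_start, first_job_active,
                          bytes_remaining, throughput_bytes_per_sec):
    """Return the (possibly delayed) start time for the second backup job."""
    if not first_job_active:
        return scheduled_start          # predecessor finished; start on time
    return scheduled_start + estimate_remaining_seconds(
        bytes_remaining, throughput_bytes_per_sec)
```

Since the abstract notes the throughput parameter is rechecked periodically, a real scheduler would re-run this calculation whenever the measured throughput changes.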
20210397475 | ADAPTIVE CPU USAGE MECHANISM FOR NETWORKING SYSTEM IN A VIRTUAL ENVIRONMENT - Methods and apparatus consistent with the present disclosure may be used in environments where multiple different virtual sets of program instructions are executed by shared computing resources. These methods may allow actions associated with a first set of virtual software to be paused to allow a second set of virtual software to be executed by the shared computing resources. In certain instances, methods and apparatus consistent with the present disclosure may manage the operation of one or more sets of virtual software at a point in time. Apparatus consistent with the present disclosure may include a memory and one or more processors that execute instructions out of the memory. At certain points in time, a processor of a computing system may pause a virtual process while allowing instructions associated with another virtual process to be executed. | 2021-12-23 |
20210397476 | POWER-PERFORMANCE BASED SYSTEM MANAGEMENT - A method comprises receiving a workload for a computer system; sweeping at least one parameter of the computer system while executing the workload; monitoring one or more characteristics of the computer system while sweeping the at least one parameter, the one or more characteristics including total power consumption of the computer system; generating a power profile for the workload that indicates a respective selected value for the at least one parameter based on analysis of the monitored total power consumption of the computer system while sweeping the at least one parameter; and executing the workload based on the respective selected value of the at least one parameter. | 2021-12-23 |
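The sweep-and-select loop in 20210397476 can be sketched in a few lines: run the workload at each parameter value, record the monitored total power, and keep the value the analysis prefers. This sketch assumes the simplest possible analysis (pick the lowest-power value); the filing's actual power-performance analysis is not specified here.

```python
# Illustrative sketch: sweep one system parameter while a workload runs,
# build a power profile, and select the value with the lowest measured power.

def build_power_profile(param_values, measure_power):
    """measure_power(value) runs the workload at that parameter value and
    returns the monitored total power consumption (a stand-in callable here)."""
    profile = {value: measure_power(value) for value in param_values}
    selected = min(profile, key=profile.get)   # simplest selection rule
    return profile, selected
```

For example, sweeping a frequency-like knob against a synthetic power curve:

```python
profile, best = build_power_profile([1, 2, 3, 4, 5], lambda v: (v - 3) ** 2 + 10)
```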
20210397477 | MACHINE LEARNING DEVICE - A machine learning device is provided in a vehicle able to supply electric power to an outside, and includes a processor configured to perform processing relating to training a machine learning model used in the vehicle. The processor is configured to lower an electric power consumption amount in the processing relating to training when acquiring disaster information compared with when not acquiring the disaster information. | 2021-12-23 |
20210397478 | RESOURCE-USAGE NOTIFICATION FRAMEWORK IN A DISTRIBUTED COMPUTING ENVIRONMENT - A resource-usage notification framework can be implemented for distributed computing environments. For example, a system can determine the resource usage of a software application in a distributed computing environment. The system can determine if the resource usage is within a predefined range of a predefined resource-consumption limit. If so, the system can generate an event notification and transmit the event notification to the software application. The software application can receive the event notification and perform a mitigation operation in response. The mitigation operation can be configured to prevent the resource usage from exceeding the predefined resource-consumption limit or to mitigate an impact of the resource usage exceeding the predefined resource-consumption limit. | 2021-12-23 |
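The trigger condition in 20210397478 — notify the application when its usage enters a predefined range below the consumption limit, or exceeds it — is a straightforward band check. A minimal sketch with invented event names:

```python
# Illustrative sketch: generate an event notification when resource usage
# nears or crosses a predefined resource-consumption limit.

def check_resource_usage(usage, limit, warning_margin):
    """Return an event dict when usage warrants notifying the application."""
    if usage >= limit:
        return {"event": "limit-exceeded", "usage": usage, "limit": limit}
    if usage >= limit - warning_margin:
        # usage is within the predefined range below the limit
        return {"event": "approaching-limit", "usage": usage, "limit": limit}
    return None                         # no notification needed
```

On receiving the "approaching-limit" event the application would run its mitigation operation, e.g. shedding load before the limit is reached.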
20210397479 | RESPONDING TO APPLICATION DEMAND IN A SYSTEM THAT USES PROGRAMMABLE LOGIC COMPONENTS - Systems and methods involve receiving requests to execute different processing tasks in a data processing system including first and second manycore processor units each having a processing unit and a programmable logic component, causing the tasks to be performed in different instances on the first processing unit and the first programmable logic component of the first manycore processor unit and on the second processing unit and the second programmable logic component of the second manycore processor unit including, in a particular instance, causing a particular task to be performed locally on the first programmable logic component based at least on a mapping consideration, and in another instance, partially reconfiguring the second programmable logic component to perform another task responsive to determining that the second programmable logic component is not already configured to perform the task. | 2021-12-23 |
20210397480 | CLUSTER RESOURCE MANAGEMENT USING ADAPTIVE MEMORY DEMAND - Various examples are disclosed for cluster resource management using adaptive memory demands. In some examples, a local memory estimate is determined for a workload. The local memory estimate is determined using a memory reclamation parameter for the workload executed by a current host of the workload. A destination memory estimate is also determined for the workload. The destination memory estimate is determined using a full memory estimate unreduced by memory reclamation parameters. The workload is executed using a host that is selected in view of an analysis that uses the local memory estimate for the current host and the destination memory estimate for at least one destination host. | 2021-12-23 |
20210397481 | ACCELERATOR, METHOD OF OPERATING THE SAME, AND ELECTRONIC DEVICE INCLUDING THE SAME - A processor-implemented accelerator method includes: reading, from a memory, an instruction to be executed in an accelerator; reading, from the memory, input data based on the instruction; and performing, on the input data and a parameter value included in the instruction, an inference task corresponding to the instruction. | 2021-12-23 |
20210397482 | METHODS AND SYSTEMS FOR BUILDING PREDICTIVE DATA MODELS - Embodiments provide methods and systems for automating configuration and administration of resources (both hardware and software) which are used to perform data modelling tasks. According to embodiments, a method for data modelling includes receiving an object associated with a data modelling task at a model building platform; fetching, by the model building platform, a job template corresponding to the object and filling the job template with control information; running, by the data modelling platform, a job from the job template to inform a Kubernetes service which training nodegroup resource to use to perform the data modelling task and to provide one or more interfaces to a training container to be used to perform the data modelling task; scheduling, by the data modelling platform, the data modelling task on the training nodegroup resource; and receiving, by the data modelling platform, model metrics associated with a plurality of models which were evaluated as part of the data modelling task and outputting information associated with the received model metrics. | 2021-12-23 |
20210397483 | EVALUATION DEVICE, EVALUATION METHOD AND EVALUATION PROGRAM - A performance influence involved in system transition is evaluated in consideration of a timer set for each processing section. An evaluation device | 2021-12-23 |
20210397484 | CONFIGURABLE LOGIC PLATFORM WITH RECONFIGURABLE PROCESSING CIRCUITRY - A configurable logic platform may include a physical interconnect for connecting to a processing system, first and second reconfigurable logic regions, a configuration port for applying configuration data to the first and second reconfigurable logic regions, and a reconfiguration logic function accessible via transactions of the physical interconnect, the reconfiguration logic function providing restricted access to the configuration port from the physical interconnect. The platform may include a first interface function providing an interface to the first reconfigurable logic region and a second interface function providing an interface to the second reconfigurable logic region. The first and second interface functions may allow information to be transmitted over the physical interconnect and prevent the respective reconfigurable logic region from directly accessing the physical interconnect. The platform may include logic configured to apportion bandwidth of the physical interconnect among the interface functions. | 2021-12-23 |
20210397485 | DISTRIBUTED STORAGE SYSTEM AND REBALANCING PROCESSING METHOD - In a distributed storage system, a volume classifier classifies a plurality of volumes into a plurality of groups on the basis of a fluctuation cycle of a load in each volume, a processor (a resource classifier) calculates a total load obtained by summing the loads of the plurality of volumes on the same node within a group at each time and calculates a group load on the basis of a peak of the total load, and the processor of one node (a rebalancer) calculates the group load on a movement destination node in a case where a volume as a movement candidate in rebalancing that moves the volume between nodes is moved from a movement source node to the movement destination node, determines a volume to be moved in the rebalancing and a movement destination volume on the basis of the calculated group load on the movement destination node, and performs the rebalancing. | 2021-12-23 |
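The load model in 20210397485 is concrete enough to sketch: the group load on a node is the peak of the per-time sums of its co-located volumes' loads, and a candidate move is evaluated by recomputing that peak as if the volume were on the destination. Function names here are invented for illustration.

```python
# Illustrative sketch: group load as the peak of summed per-time volume loads,
# and the "what if this volume moved here" check used during rebalancing.

def total_load_peak(volume_loads):
    """volume_loads: list of per-time load series, one per co-located volume."""
    if not volume_loads:
        return 0
    totals = [sum(values) for values in zip(*volume_loads)]  # total load per time
    return max(totals)                                       # peak drives the group load

def load_if_moved(destination_loads, candidate_load):
    """Group load on the destination node if the candidate volume moved there."""
    return total_load_peak(destination_loads + [candidate_load])
```

Note how two volumes whose peaks fall at different times ([1, 2] and [2, 1]) sum to a flat [3, 3]: co-locating anti-correlated fluctuation cycles keeps the peak low, which is exactly what classifying volumes by fluctuation cycle enables.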
20210397486 | REROUTING RESOURCES FOR MANAGEMENT PLATFORMS - Systems, methods and computer-readable media are provided for receiving, from one or more first computing devices associated with a first entity, an indication to suspend an event for an outbound resource that is associated with a first resource indicator that is stored in a first data file in a first resource management software. A second indication to suspend the event is determined to be received from one or more second devices associated with a second entity utilizing a second resource management software. The second indication to suspend the event indicates a modification to both the first data file associated with the first resource management software and a second data file associated with the second resource management software. Based on determining to suspend the event, the outbound resource is rerouted and the first resource management software is instructed to modify the first resource indicator to correspond to a modification to a second resource indicator associated with an inbound resource stored in the second data file. | 2021-12-23 |
20210397487 | MULTITHREADED ROUTE PROCESSING FOR ROUTING INFORMATION DISPLAY - In some examples, a main thread of a plurality of execution threads executing on a plurality of processing cores of at least one hardware-based processor of a network device may receive a request for information associated with network routes that meet one or more criteria. Each of the plurality of execution threads may process a respective routing information partition to generate respective displayable information associated with a respective subset of the network routes that meets the one or more criteria. The main thread may generate consolidated displayable information associated with the network routes that meet the one or more criteria based on the respective displayable information generated by each of the plurality of execution threads. The main thread may output the consolidated displayable information associated with the network routes that meet the one or more criteria for display at a display device. | 2021-12-23 |
20210397488 | TIME-DIVISION MULTIPLEXING METHOD AND CIRCUIT FOR CONCURRENT ACCESS TO A COMPUTER RESOURCE - The invention relates to a method implemented by computer for arbitration between computer programs seeking to access a shared resource concurrently and each transmitting an access request. The method performs time-division multiple access according to which the time is divided into time slots, each of which is allocated to a critical program for access to the shared resource, each time slot comprising a plurality of time units. The method exploits a processing slack associated with each critical program in order to delay a processing deadline for an access request transmitted by the critical program. The method comprises, for each time unit, a step of selecting a waiting access request and a step of determining authorization for immediate processing of the selected access request. This determining operation comprises, for a time unit which does not correspond to the beginning of a time slot, when the critical program to which the next time slot is allocated has not issued the selected request, authorization for the immediate processing of the selected request if the processing slack of the critical program to which the next time slot is allocated is greater than a threshold. | 2021-12-23 |
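The authorization rule in 20210397488 can be sketched as a single decision function; the slot-boundary convention and parameter names here are illustrative assumptions, not the claimed circuit.

```python
def authorize_immediate(time_unit, slot_length, next_slot_owner,
                        request_owner, slack, threshold):
    """Decide whether the selected waiting request may be processed now.

    Hypothetical simplification: at a slot boundary the request proceeds;
    mid-slot, a request from the next slot's owner proceeds, and any other
    request proceeds only if the next slot owner's processing slack (in
    time units) exceeds the threshold, so its deadline can safely slip.
    """
    at_slot_start = (time_unit % slot_length == 0)
    if at_slot_start:
        return True
    if request_owner == next_slot_owner:
        return True
    # Foreign request mid-slot: borrow time only against sufficient slack.
    return slack[next_slot_owner] > threshold
```

With enough slack, a non-critical request is served early instead of idling until the next slot boundary.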
20210397489 | METHODS AND SYSTEMS FOR APPLICATION PROGRAM INTERFACE CALL MANAGEMENT - Disclosed are systems and methods for application program interface (API) call management. For example, a method may include obtaining API call information for one or more API endpoints, the API call information including a number of API calls to the one or more API endpoints; obtaining resource utilization (RU) information, the RU information including project RU information for one or more projects; analyzing the API call information and the RU information to obtain API cost information, the API cost information including cost per API call for the one or more API endpoints; and managing subsequent API calls to the one or more API endpoints based on the cost per API call. | 2021-12-23 |
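The cost computation in 20210397489 reduces to dividing resource utilization by call volume per endpoint; the throttling policy below is a hypothetical example of "managing subsequent API calls", not the patent's mechanism.

```python
def cost_per_call(call_counts, resource_units):
    """Per-endpoint API cost: resource units consumed divided by call count."""
    return {ep: resource_units[ep] / n for ep, n in call_counts.items() if n}

def should_throttle(endpoint, costs, budget_per_call):
    # Hypothetical management policy: throttle any endpoint whose
    # per-call cost exceeds a configured budget.
    return costs.get(endpoint, 0.0) > budget_per_call

calls = {"/search": 1000, "/report": 10}    # API call information
rus = {"/search": 500.0, "/report": 400.0}  # resource utilization information
costs = cost_per_call(calls, rus)
```

A rarely called but expensive endpoint (`/report` at 40 RU/call) is flagged even though its total usage is lower than the cheap, busy one.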
20210397490 | SYSTEMS AND METHODS TO COMPUTE A RISK METRIC ON A NETWORK OF PROCESSING NODES - Methods and systems for computing a risk metric on a network of processing nodes are disclosed. The method includes receiving a plurality of events at a plurality of processing nodes. The method further includes, at a first processing node, processing a first event and a known instance of a second event to determine whether the first event matches the known instance of the second event. The method further includes, in response to determining that the first event does not match the known instance of the second event, terminating the processing without generating an output, and generating a first output event having a resulting probability computed based on a confidence value of the first event and a first probabilistic value of a first missing event, or in response to determining that the first event matches the known instance of the second event, generating the first output event having the resulting probability computed based on the confidence value of the first event. | 2021-12-23 |
20210397491 | Providing Access to Related Content in Media Presentations - In some implementations, a user device may access a media presentation that includes metadata describing related content item(s). The user device viewing the media presentation is allowed to access content related to portions of the media presentation at times appropriate for the particular related content item(s). The related content item(s) may be provided automatically or based on user input triggering download or copying of a particular related content item, such as to a clipboard of the user device. A computing device may generate a media presentation that includes related content item(s) as metadata in some implementations. The media presentation may be generated by recording a live presentation, assembling one or more media portions, and/or obtaining a complete media presentation, and then modifying the media presentation to add the related content item(s) and to indicate when, while the media presentation is presented on a user device, access to the related content item(s) is allowed. | 2021-12-23 |
20210397492 | ESTABLISHMENT OF QUEUE BETWEEN THREADS IN USER SPACE - In embodiments of the present disclosure, there is provided a solution for establishing queues between threads in a user space. After creating a first thread on a first application and creating a second thread and a third thread on a second application, a socket connection between the first application and the second application is established in the user space of the operating system. Then, a first queue is established between the first thread and the second thread, while a second different queue is established between the first thread and the third thread. Embodiments of the present disclosure can avoid lock-based queue sharing by setting a separate queue for each pair of threads. Thus, the lockless queue mechanism according to embodiments of the present disclosure can improve the performance of the operating system significantly. | 2021-12-23 |
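The per-pair queue idea in 20210397492 can be sketched with one dedicated queue per (sender, receiver) thread pair, so no queue ever has more than one producer and one consumer. The class and names are hypothetical; in CPython, `deque.append` and `deque.popleft` are individually atomic, which makes a per-pair deque a reasonable stand-in for a lockless SPSC queue.

```python
from collections import deque

class PairQueues:
    """Registry handing out a dedicated queue for each (sender, receiver) pair."""
    def __init__(self):
        self._queues = {}

    def queue_for(self, sender, receiver):
        # One queue per ordered thread pair: T1->T2 and T1->T3 never share,
        # so no lock-based queue sharing is needed.
        return self._queues.setdefault((sender, receiver), deque())

q = PairQueues()
q.queue_for("T1", "T2").append("msg-a")  # first thread pair's queue
q.queue_for("T1", "T3").append("msg-b")  # separate queue for the second pair
```

Contrast this with a single shared queue, where every enqueue/dequeue from three or more threads would need mutual exclusion.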
20210397493 | METHOD AND SYSTEM FOR PROCESSING A STREAM OF INCOMING MESSAGES SENT FROM A SPECIFIC INPUT MESSAGE SOURCE AND VALIDATING EACH INCOMING MESSAGE OF THAT STREAM BEFORE SENDING THEM TO A SPECIFIC TARGET SYSTEM - Methods and systems are provided for processing a stream of incoming messages sent from a specific input message source and validating each incoming message of that stream before sending them to a specific target system. | 2021-12-23 |
20210397494 | FUNCTIONAL TUNING FOR CLOUD BASED APPLICATIONS AND CONNECTED CLIENTS - A cloud computing system including a cloud-based system in communication with a client system including an application gateway that receives, from a client application, a request for services associated with a workload and a plurality of cloud-based application services, the plurality of cloud-based application services in operable communication with the application gateway. The system also includes a cloud-based tuning service in operable communication with the cloud-based application services and the application gateway, the cloud-based tuning service identifies a set of application requirements needed to fulfill the request, the cloud-based tuning service coordinating with a client-based tuning service and the application gateway to assign selected application services to fulfill the request, wherein the assignment of selected application services includes assigning at least a portion of the services associated with the workload to the client system. | 2021-12-23 |
20210397495 | PREDICTING AND REDUCING HARDWARE RELATED OUTAGES - Disclosed here is a system to automatically predict and reduce hardware related outages. The system can obtain a performance indicator associated with a wireless telecommunication network including a system performance indicator or an application log, along with a machine learning model trained to predict and resolve a hardware error based on the performance indicator. The machine learning model can detect an anomaly associated with the performance indicator by detecting an infrequent occurrence in the performance indicator. The machine learning model can determine whether the anomaly is similar to a prior anomaly indicating a prior hardware error. Upon determining that the anomaly is similar to the prior hardware error, the machine learning model can predict an occurrence of the hardware error. | 2021-12-23 |
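The anomaly step in 20210397495 — "detecting an infrequent occurrence in the performance indicator" — can be sketched as a simple rarity test over observed history. This frequency threshold is a hypothetical stand-in for the trained machine learning model in the abstract.

```python
from collections import Counter

def is_anomalous(history, value, rarity_threshold=0.05):
    """Flag a performance-indicator value as anomalous when it occurs
    rarely in the observed history (illustrative 5% rarity cutoff)."""
    counts = Counter(history)
    return counts[value] / len(history) < rarity_threshold

# Hypothetical application-log indicators: one infrequent occurrence.
history = ["ok"] * 99 + ["disk_timeout"]
```

A production system would compare the detected anomaly against prior anomalies tied to known hardware errors before predicting an outage.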
20210397496 | STORAGE DEVICE BLOCK-LEVEL FAILURE PREDICTION-BASED DATA PLACEMENT - A method for data placement in a storage device including one or more blocks and a controller, the method including: receiving, by the controller of the storage device, a request to write data; determining, by the controller, a data status of the data; calculating, by the controller, one or more vulnerability factors of the one or more blocks; determining, by the controller, one or more block statuses of the one or more blocks based on the one or more vulnerability factors; selecting, by the controller, a target block from the one or more blocks based on the data status and the one or more block statuses; and writing, by the controller, the data to the target block. | 2021-12-23 |
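The selection step in 20210397496 can be sketched as matching a data status against per-block vulnerability factors. The policy below (short-lived "hot" data steered to more vulnerable blocks so long-lived data stays on healthy ones, blocks above a cutoff retired) is a hypothetical assumption; the patent does not disclose a specific rule here.

```python
def select_target_block(blocks, data_is_hot, retire_cutoff=0.9):
    """Pick a target block from (block_id, vulnerability_factor) pairs.

    Hypothetical policy: blocks at or above the retirement cutoff are
    skipped; hot data goes to the most vulnerable remaining block, cold
    data to the least vulnerable one.
    """
    healthy = [(bid, v) for bid, v in blocks if v < retire_cutoff]
    pick = max if data_is_hot else min
    return pick(healthy, key=lambda b: b[1])[0]

# Illustrative vulnerability factors computed by the controller.
blocks = [("b0", 0.2), ("b1", 0.7), ("b2", 0.95)]
```

Block `b2` is never selected because its vulnerability factor predicts imminent failure.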
20210397497 | INTELLIGENT NETWORK OPERATION PLATFORM FOR NETWORK FAULT MITIGATION - Embodiments of the present disclosure provide systems, methods, and computer-readable storage media that leverage artificial intelligence and machine learning to identify, diagnose, and mitigate occurrences of network faults or incidents within a network. Historical network incidents may be used to generate a model that may be used to evaluate real-time occurring network incidents, such as to identify a cause of the network incident. Clustering algorithms may be used to identify portions of the model that share similarities with a network incident and then actions taken to resolve similar network incidents in the past may be identified and proposed as candidate actions that may be executed to resolve the cause of the network incident. Execution of the candidate actions may be performed under control of a user or automatically based on execution criteria and the configuration of the fault mitigation system. | 2021-12-23 |
20210397498 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND PROGRAM - An information processing apparatus | 2021-12-23 |
20210397499 | METHOD TO EFFICIENTLY EVALUATE A LOG PATTERN - A non-transitory computer-readable medium stores instructions readable and executable by at least one electronic processor | 2021-12-23 |
20210397500 | AUTOMATED ROOT-CAUSE ANALYSIS FOR DISTRIBUTED SYSTEMS USING TRACING-DATA - A system for identifying root cause of anomalies in execution of an application comprising a plurality of operations is provided. The system comprises a preprocessing module configured to receive tracing data comprising a plurality of tracing spans each documenting, for a corresponding operation of the application, a plurality of properties and corresponding values, a signal splitting module configured to group the plurality of tracing spans in a plurality of groups such that each of the plurality of groups comprises operations with identical properties and corresponding values, an anomaly detection module configured to determine anomalous operations for each of the plurality of tracing data spans, a scoring module configured to calculate a plurality of anomaly scores each indicating a level of anomaly within each of the plurality of groups, and a root cause identification module configured to analyze the anomaly scores and identify the root cause of the detected anomalies according to the analysis. | 2021-12-23 |
20210397501 | SYSTEM AND METHOD FOR UNSUPERVISED PREDICTION OF MACHINE FAILURES - A system and method for unsupervised prediction of machine failures. The method includes monitoring sensory inputs related to at least one machine; analyzing, via at least unsupervised machine learning, the monitored sensory inputs, wherein the output of the unsupervised machine learning includes at least one indicator; identifying, based on the at least one indicator, at least one pattern; and determining, based on the at least one pattern and the monitored sensory inputs, at least one machine failure prediction. | 2021-12-23 |
20210397502 | METHOD AND SYSTEM FOR FAULT COLLECTION AND REACTION IN SYSTEM-ON-CHIP - A fault collection and reaction system on a system-on-chip (SoC) includes a plurality of reaction cores assigned to a plurality of applications being executed by a plurality of processor cores on the SoC, at least one look-up table (LUT), and a controller. The at least one LUT stores therein a first mapping between the plurality of reaction cores and corresponding plurality of domain identifiers, and a second mapping between a plurality of faults and a set of reaction combinations. The controller receives a fault indication and a first domain identifier in response to occurrence of a first fault and selects from the plurality of reaction cores, a first reaction core mapped to the first domain identifier, and from the set of reaction combinations, a first reaction combination mapped to the first fault. The first reaction core responds to the fault indication with a reaction based on the selected reaction combination. | 2021-12-23 |
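The two look-up tables in 20210397502 map cleanly onto dictionaries: one from domain identifier to reaction core, one from fault to reaction combination. The table contents below are invented for illustration; only the two-LUT dispatch structure comes from the abstract.

```python
# First mapping: domain identifier -> assigned reaction core.
REACTION_CORE_LUT = {0x01: "core-A", 0x02: "core-B"}

# Second mapping: fault -> reaction combination.
REACTION_LUT = {
    "ecc_fault": ("log", "reset_ip"),
    "clock_fault": ("log", "failover"),
}

def dispatch_fault(fault_id, domain_id):
    """Controller step: on a fault indication, select the reaction core
    mapped to the domain identifier and the reaction combination mapped
    to the fault; the core then executes that combination."""
    core = REACTION_CORE_LUT[domain_id]
    reactions = REACTION_LUT[fault_id]
    return core, reactions
```

Keeping both mappings in LUTs lets the safety policy change per application without modifying the controller logic.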
20210397503 | SYSTEM AND METHOD OF RESOLUTION PREDICTION FOR MULTIFUNCTION PERIPHERAL FAILURES - A system and method for predicting device failures and generating proposed resolutions for such an error when it occurs includes receiving device status data from each of a plurality of identified multifunction peripherals into a memory. Service history data for each of the multifunction peripherals is stored, the service history data including data corresponding to a plurality of data patterns associated with prior device failures associatively with resolutions implemented to address such failures. Patterns are detected in received device status data. Device failure is predicted for at least one identified multifunction peripheral in accordance with detected patterns and service history data. The predicted device failure is reported along with at least one proposed resolution to address a device error predicted by the predictive device failure data. | 2021-12-23 |
20210397504 | LOG TRANSMISSION CONTROLLER - A log transmission controller includes a log acquirer, a priority storage, an update instruction acquirer, a priority updater and a transmitter. The log acquirer acquires a log indicating respective states of electronic control units connected to the log transmission controller, which is equipped in a moving object. The priority storage stores priority information indicating a priority for transmitting the log to a server, which is disposed at exterior of the moving object. The update instruction acquirer acquires an update instruction, which is generated by an update instructor equipped in the moving object, for instructing to update the priority information stored in the priority storage. The priority updater updates the priority information based on the update instruction. The transmitter transmits the log to the server based on the priority indicated by the updated priority information. | 2021-12-23 |
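The flow in 20210397504 — stored per-source priorities, an update instruction that rewrites them, and transmission ordered by the current priorities — can be sketched as below. The class, ECU names, and priority values are hypothetical.

```python
class LogTransmitter:
    """Sketch: logs carry a per-ECU priority that an update instruction
    can change at runtime; transmission follows the current priorities."""

    def __init__(self, priorities):
        self.priorities = dict(priorities)  # ECU name -> priority (higher first)

    def update_priority(self, ecu, priority):
        """Apply an update instruction to the stored priority information."""
        self.priorities[ecu] = priority

    def transmit_order(self, logs):
        """Return (ecu, message) pairs in descending priority, stable within ties."""
        ranked = [(-self.priorities[ecu], i, ecu, msg)
                  for i, (ecu, msg) in enumerate(logs)]
        return [(ecu, msg) for _, _, ecu, msg in sorted(ranked)]

tx = LogTransmitter({"brake_ecu": 5, "infotainment": 1})
```

An update instruction raising an ECU's priority immediately reorders subsequent transmissions to the server.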
20210397505 | Stressed Epwr To Reduce Product Level DPPM/UBER - The present disclosure generally relates to identifying read failures that enhanced post-write read (EPWR) would normally miss. After the last logical word line has been written, additional stress is added to each word line. More specifically, the gate bias channel pass read voltage for all unselected word lines is increased, the gate bias on dummy and selected gate word lines is increased, the gate bias on the selected word line is increased, and a pulse read occurs. The increasing and reading occurs for each word line. Thereafter, EPWR occurs. Due to the increasing and reading for every word line, additional read failures are discovered than would otherwise be discovered with EPWR alone. | 2021-12-23 |
20210397506 | LOCALIZATION OF POTENTIAL ISSUES TO OBJECTS - In some examples, a system identifies a potential issue based on comparing measurement data acquired at different hierarchical levels of a computing environment. Within a hierarchical level of the different hierarchical levels, the system determines, based on measurement data acquired for objects in the hierarchical level, whether the potential issue is localized to a subset of the objects. | 2021-12-23 |
20210397507 | CROSS-COMPONENT HEALTH MONITORING AND IMPROVED REPAIR FOR SELF-HEALING PLATFORMS - Systems, apparatuses and methods may provide for technology that detects a successful boot of a first firmware component in a computing system, receives a signal from a second firmware component in the computing system, and detects an incompatibility of the first firmware component with respect to the second firmware component based on the signal. In one example, only the first firmware component is repaired in response to the incompatibility. | 2021-12-23 |
20210397508 | LOCALIZATION OF POTENTIAL ISSUES TO OBJECTS - In some examples, a system identifies a potential issue based on comparing measurement data acquired at different hierarchical levels of a computing environment. Within a hierarchical level of the different hierarchical levels, the system determines, based on measurement data acquired for objects in the hierarchical level, whether the potential issue is localized to a subset of the objects. | 2021-12-23 |
20210397509 | GRANULAR ERROR REPORTING ON MULTI-PASS PROGRAMMING OF NON-VOLATILE MEMORY - A system includes a memory component to, upon completion of second pass programming in response to a multi-pass programming command, write flag bits within a group of memory cells programmed by the multi-pass programming command. A processing device, operatively coupled to the memory component, is to perform multi-pass programming of the group of memory cells in association with a logical address. Upon receipt of a read request, the processing device is to determine that a second logical address within the read request does not match the logical address associated with data stored at a physical address of the group of memory cells. The processing device is further to determine a number of first values within the plurality of flag bits, and in response to the number of first values not satisfying a threshold criterion, report, to a host computing device, an uncorrectable data error. | 2021-12-23 |
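The decision in 20210397509 can be sketched as a small classifier: on a logical-address mismatch, count the flag bits written after the second programming pass; too few set bits suggests the pass was interrupted, so an uncorrectable data error is reported. The function name, return labels, and threshold value are illustrative assumptions.

```python
def classify_read_error(requested_lba, stored_lba, flag_bits, threshold=3):
    """Granular error reporting sketch for multi-pass programming.

    flag_bits: bits the memory component wrote upon completing the
    second programming pass (1 = written). Threshold of 3 is invented.
    """
    if requested_lba == stored_lba:
        return "ok"
    # Address mismatch: check whether second-pass programming completed.
    if sum(flag_bits) < threshold:
        return "uncorrectable_data_error"   # reported to the host
    return "logical_address_mismatch"       # flags intact; data is stale, not torn
```

The flag-bit count lets the device distinguish an interrupted program operation from an ordinary stale-mapping mismatch.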
20210397510 | Managing Open Blocks in Memory Systems - Systems, methods, and apparatus including computer-readable mediums for managing open blocks in memory systems such as NAND flash memory devices are provided. In one aspect, a memory system includes a memory and a memory controller. The memory includes multiple blocks each having a plurality of word lines. The memory controller is coupled to the memory and configured to: evaluate a read disturbance level of an open block, the open block having one or more programmed word lines and one or more blank word lines, and in response to determining that the read disturbance level of the open block is beyond a threshold level, manage each memory cell in at least one of the blank word lines to have a smaller data storing capacity than each memory cell in at least one of the one or more programmed word lines so as to reduce impact of read disturbance. | 2021-12-23 |
20210397511 | NVM ENDURANCE GROUP CONTROLLER USING SHARED RESOURCE ARCHITECTURE - A method and apparatus for allocation of back-end (BE) logic resources between NVM sets. When a controller detects that an NVM set is in an idle state, it deallocates the BE logic from the originally assigned NVM set and provides the BE logic resource to another NVM set. An NVM set controller matrix maps interconnections between the BE logic resource and the new NVM set to enable use of the BE logic resource and the new NVM set. When a new command arrives for the originally assigned NVM set, the BE logic resource is re-allocated to the originally assigned NVM set. | 2021-12-23 |
20210397512 | MEMORY SYSTEM AND METHOD - According to one embodiment, a controller executes a first operation. The first operation includes reading a plurality of data units from a nonvolatile memory and executing a process on the read plurality of data units. The process includes an inverse conversion of a conversion applied to the plurality of data units and first decoding using the plurality of data units on which the inverse conversion has been executed. The controller acquires first information from one of the plurality of data units on which the first operation has been executed. The controller compares the acquired first information with an expected value of the first information and re-executes the first operation when the acquired first information and the expected value are not equal to each other. | 2021-12-23 |
20210397513 | MEMORY, MEMORY SYSTEM, AND OPERATION METHOD OF MEMORY - A memory includes: a downlink error correction circuit suitable for correcting an error in data transferred from a memory controller based on a downlink error correction code transferred from the memory controller to produce an error-corrected data; a memory error correction code generation circuit suitable for generating a memory error correction code based on the error-corrected data obtained by the downlink error correction circuit; an error injection circuit suitable for injecting an error into at least one among the error-corrected data obtained by the downlink error correction circuit and the memory error correction code when an uncorrected error in the data transferred from the memory controller is detected by the downlink error correction circuit; and a memory core suitable for storing the data and the memory error correction code transferred from the error injection circuit. | 2021-12-23 |
20210397514 | ERROR CHECK CODE (ECC) DECODER AND MEMORY SYSTEM INCLUDING ECC DECODER - An error check code (ECC) decoder includes a buffer, a data converter and a decoding circuit. The buffer stores a plurality of read pages read from a plurality of multi-level cells connected to a same wordline. The data converter adjusts reliability parameters of read bits of the plurality of read pages based on state-bit mapping information and the plurality of read pages to generate a plurality of ECC input data respectively corresponding to the plurality of read pages. The state-bit mapping information indicates mapping relationships between states and bits stored in the plurality of multi-level cells. The decoding circuit performs an ECC decoding operation with respect to the plurality of read pages based on the plurality of ECC input data. An error correction probability is increased by adjusting the reliability parameters of read bits based on the state-bit mapping information. | 2021-12-23 |
20210397515 | MEMORY AND OPERATION METHOD OF MEMORY - A method for operating a memory includes: reading data and an error correction code from a memory core; correcting an error of the read data based on the read error correction code to produce error-corrected data; generating new data by replacing a portion of the error-corrected data with write data, the portion becoming a write data portion; generating a new error correction code based on the new data; and writing the write data portion of the new data and the new error correction code into the memory core. | 2021-12-23 |
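The read-modify-write flow in 20210397515 can be sketched end to end with a toy code: read the stored word and its code, check it, splice the write data into the error-corrected data, generate a new code, and write both back. Even parity stands in for the real ECC (it can only detect single-bit errors, not correct them); all names and data are illustrative.

```python
def parity(bits):
    """Toy one-bit 'error correction code': even parity over the word."""
    return sum(bits) % 2

def read_modify_write(core_data, core_ecc, write_data, offset):
    """Partial-write flow: check the word read from the memory core,
    replace the write data portion, and return the new data plus the
    new code to be written back."""
    assert parity(core_data) == core_ecc, "error detected in read data"
    new_data = list(core_data)
    new_data[offset:offset + len(write_data)] = write_data  # write data portion
    return new_data, parity(new_data)

stored = [1, 0, 1, 1, 0, 0, 1, 0]  # word held in the memory core
new_data, new_ecc = read_modify_write(stored, parity(stored), [0, 1], 2)
```

The code must be regenerated over the whole new word: reusing the old code after changing even one data bit would make every subsequent read look like an error.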