07th week of 2022 patent application highlights part 39 |
Patent application number | Title | Published |
20220050649 | IMAGE FORMING APPARATUS AND CONTROL METHOD - In a case where the printing of a plurality of copies based on an original image is to be executed by causing a feeding stage which is set for each of the plurality of copies to perform feeding, a feed operation is controlled so as to stop feeding from a feeding stage which serves as a feed target when a sheet attribute designated in a job and a sheet attribute set in the feeding stage are different from each other. | 2022-02-17 |
20220050650 | IMAGE PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - An image processing apparatus configured to perform processing according to a job includes a detection unit configured to detect a change in job information which is information related to the job, and a generation unit configured to generate a log related to the processing of the job. In a case where the job includes a plurality of pages and the change in the job information is not detected by the detection unit, the generation unit generates one log with the job as a unit; in a case where the change in the job information is detected by the detection unit, the generation unit generates a log even during the job based on the job information before the change. | 2022-02-17 |
20220050651 | DISPLAY SWITCHING METHOD FOR SMART DISPLAY TERMINAL, DEVICE, EQUIPMENT AND STORAGE MEDIUM - The present disclosure provides a display switching method, device and equipment for an intelligent display terminal and a storage medium, and belongs to the technical field of artificial intelligence. The method comprises: receiving a display switch request sent by a first intelligent display terminal, wherein the display switch request comprises purpose information of the display switch and display content information, the display content information being the content information displayed by the first intelligent display terminal; determining a target intelligent display terminal according to the purpose information of the display switch; and according to the display content information, sending a data stream corresponding to the display content information to the target intelligent display terminal, wherein the data stream corresponding to the display content information is used for enabling the target intelligent display terminal to display the display content of the first intelligent display terminal. By switching the display content of the first intelligent display terminal to the target intelligent display terminal for display, seamless handover of the display content between different intelligent display terminals is realized, and the user experience is improved. | 2022-02-17 |
20220050652 | ELECTRONIC APPARATUS AND METHOD FOR OUTPUTTING IMAGE THEREOF - An electronic device and a method of controlling the same are provided. The electronic device includes a housing foldable about a first axis; a foldable display foldable about a second axis parallel to the first axis; a folding sensor circuit configured to detect a display state of at least one of the housing or the foldable display; a memory storing instructions; and a processor configured to execute the instructions stored in the memory to: determine a position of a virtual light source with respect to the foldable display based on context information associated with the electronic device; render an image based on the display state detected by the folding sensor circuit and the position of the virtual light source; and control the foldable display to display the rendered image. | 2022-02-17 |
20220050653 | COUPLING DISPLAY DEVICE AND TILED DISPLAY DEVICE HAVING THE SAME - A tiled display device includes element display devices, each including an element display panel which defines a display area and a panel driver disposed on a seam area surrounding the display area, and a coupling display device disposed on the element display devices and coupled to the element display devices at neighboring seam areas. The coupling display device includes a boundary display panel on the neighboring seam areas, a coupler which couples the boundary display panel to the element display panels to provide a coupling panel structure, a boundary driver which drives the boundary display panel and a signal control center which drives the coupling panel structure. | 2022-02-17 |
20220050654 | HEAD-UP DISPLAY SYSTEM - An HUD system | 2022-02-17 |
20220050655 | INTERACTIVE EXERCISE APPARATUS - An interactive exercise apparatus for allowing a user to invite a friend to join an exercise class includes a mirror display device, a communication module and a control unit. The mirror display device has a mirror configured to reflect an image of the user and a display device configured to display video content which includes an instructor image demonstrating an exercise in the exercise class. The communication module is configured to interconnect with another interactive exercise apparatus of the friend via a network. The control unit is configured to control display content and is operable to control the mirror display device to display the instructor image and a real-time image of the friend to the user. Specifically, the instructor image, the image of the friend and the image of the user reflected by the mirror are shown simultaneously on the mirror display device during the exercise class. | 2022-02-17 |
20220050656 | METHOD FOR DISPLAYING UI COMPONENT AND ELECTRONIC DEVICE - In a method for displaying a user interface (UI) component of a source device on a destination device, at least one function control of a first application of the source device may be combined into a UI component, and the UI component is displayed on the destination device. In addition, an operation may be performed on the function control on the destination device, and an operation result may be transmitted to the source device for execution. | 2022-02-17 |
20220050657 | COMMUNICATION DEVICE AUDIO TRANSMISSION MODIFICATION - A method can include obtaining audio data corresponding to a user of a communication device. The communication device can be configured to transmit the audio data. The method can further include obtaining proximity data indicating a user distance between the user and the communication device. The method can further include determining that the user distance exceeds a threshold distance. The method can further include determining, based at least in part on the audio data, an activity status of the user. The method can further include determining that the activity status is an inactive status. The method can further include modifying, in response to the determining that the threshold distance is exceeded and that the activity status is the inactive status, a transmission of the audio data from the communication device. | 2022-02-17 |
20220050658 | COMMUNICATION DEVICE AUDIO TRANSMISSION MODIFICATION - A computer-implemented method can include obtaining activity data corresponding to an environment where a communication device is located. The activity data can include audio data. The communication device can be configured to transmit the audio data. The method can further include identifying, based at least in part on the activity data, a potential audio disruption. The method can further include determining, based at least in part on the audio data, an activity status of a user of the communication device. The method can further include determining that the activity status is an inactive status. The method can further include modifying, in response to both the identifying the potential audio disruption and the determining that the activity status is the inactive status, a transmission of the audio data from the communication device. | 2022-02-17 |
20220050659 | VIRTUAL SOUND ENGINEER SYSTEM AND METHOD - A virtual sound engineer system includes a mobile device having a processor connected to an interface, the processor being configured to initiate a first remote access digital mixing session to remotely access a first digital mixing console communicatively coupled to a first plurality of peripheral devices disposed at a first location, initiate a second remote access digital mixing session to remotely access a second digital mixing console communicatively coupled to a second plurality of peripheral devices disposed at a second location different from the first, such that remotely accessing the first console and the second console includes adjusting sound output by at least one of the peripheral devices of the first console and at least one of the peripheral devices of the second console and at least a portion of the first mixing session and at least a portion of the second mixing session occur concurrently. | 2022-02-17 |
20220050660 | Stereo Playback Configuration and Control - An example method includes, based on an adjustment to a first displayed volume control, instructing the first playback device to adjust playback volume level; based on an adjustment to a second displayed volume control, instructing the second playback device to adjust playback volume level; after sending the commands, instructing the first and/or second playback device to process an audio stream into a first and/or second channel and to reproduce a respective one of the first and second channel, wherein the grouped first and second playback devices provide multi-channel sound; and based on an adjustment to a third displayed volume control, instructing the first and/or second playback device to adjust a group volume level for both the first and second playback devices. | 2022-02-17 |
20220050661 | ANALYZING GRAPHICAL USER INTERFACES TO FACILITATE AUTOMATIC INTERACTION - Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent. | 2022-02-17 |
20220050662 | SCALABLE INPUT/OUTPUT SYSTEM AND TECHNIQUES TO TRANSMIT DATA BETWEEN DOMAINS WITHOUT A CENTRAL PROCESSOR - An apparatus for managing input/output (I/O) data may include a streaming I/O controller to receive data from a load/store domain component and output the data as first streaming data of a first data type comprising a first data movement type and first data format type. The apparatus may also include at least one accelerator coupled to the streaming I/O controller to receive the first streaming data, transform the first streaming data to second streaming data having a second data type different than the first data type, and output the second streaming data. In addition, the apparatus may include a streaming interconnect to conduct the second data to a peer device configured to receive data of the second data type. | 2022-02-17 |
20220050663 | SYSTEM AND METHOD FOR IMPROVING LOAD BALANCING IN LARGE DATABASE MANAGEMENT SYSTEM - A method for execution, by a first intermediate node of a plurality of nodes in a database management system, includes receiving a message, where the first intermediate node is limited to communication with a subset of nodes of the plurality of nodes, where the message: includes data that is being sent in accordance with a routing path, is a first size, and indicates a next node of the routing path, and where the subset of nodes includes the next node. The method continues by generating a revised message, wherein the revised message includes the data and has a second size. The method continues by determining whether there is at least one additional intermediate node after the next node in the routing path. When there is, the method continues by determining an optimal route for forwarding the revised message via a node of the subset of nodes, and sending the revised message to that node. | 2022-02-17 |
20220050664 | SYSTEMS, METHODS, AND DEVICES FOR THE SORTING OF DIGITAL LISTS - Systems, methods, and devices for the sorting of digital lists based on the binary components of their elements. | 2022-02-17 |
20220050665 | METHOD AND SYSTEM FOR PROCESSING FLOATING POINT NUMBERS - A method and system for processing a set of ‘k’ floating point numbers to perform addition and/or subtraction is disclosed. Each floating-point number comprises a mantissa (m | 2022-02-17 |
20220050666 | ACCELERATION TECHNIQUES FOR GRAPH ANALYSIS PROGRAMS - Source code of a graph analysis program expressed in a platform-independent language which supports linear algebra primitives is obtained. An executable version of the program is generated, which includes an invocation of a function of a parallel programming library optimized for a particular hardware platform. A result of executing the program is stored. | 2022-02-17 |
20220050667 | User Interface Design Update Automation - Techniques are disclosed relating to determining a similarity of components of a current webpage to different UI components for use in automatically generating an updated webpage. A computer system may receive information specifying a current webpage, including a particular current UI component and information specifying a plurality of different UI components for an updated webpage. The computer system may identify one or more characteristics of the particular current UI component. The computer system may determine, based on the identified one or more characteristics, a similarity of ones of the plurality of different UI components to the particular current UI component. The computer system may select, based on the determining, a particular different UI component from the plurality of different UI components for use, in the updated webpage, for the particular current UI component. Such techniques may advantageously improve user experience by automatically providing up-to-date user interfaces. | 2022-02-17 |
20220050668 | Method for Executing Program Components on a Control Unit, a Computer-Readable Storage Medium, a Control Unit and a System - A method for executing program components on a control unit includes receiving a first program unit and a second program unit; producing a first proxy definition and a second proxy definition, wherein a proxy definition stipulates access to at least one function and/or a memory area of a program unit, wherein the first proxy definition is associated with the first program unit and the second proxy definition is associated with the second program unit; compiling the first program unit and the second program unit to produce a first program component, a second program component, a first proxy component and a second proxy component; and executing the first program component and the second program component on a control unit, wherein the first program component calls and/or uses at least one function of the second program component by using the first proxy component and the second proxy component. | 2022-02-17 |
20220050669 | REPRESENTING ASYNCHRONOUS STATE MACHINE IN INTERMEDIATE CODE - Representing asynchronous functionality in intermediate code, and then having the runtime compiler, rather than the source code language compiler, declare the corresponding asynchronous state machine. This allows the size of the intermediate code to be smaller thereby facilitating more efficient delivery of the code to end users. Furthermore, the runtime compiler can now use its optimization capability to optimize performance of the asynchronous functionality specific to the actual environment in which the asynchronous work will operate. | 2022-02-17 |
20220050670 | TENANT DECLARATIVE DEPLOYMENTS - A compute container system may support logical partitions for various single tenant systems. These logical partitions may be referred to as logical single-tenant system stacks. An operator or release manager for a logical partition may identify a declarative deployment file defining a deployment configuration for one or more of a plurality of logical single-tenant system stacks. The operator may determine a deployment schedule for implementing one or more system updates for the plurality of logical single-tenant system stacks based on the declarative deployment file and implement the system updates based on the determined deployment schedule. | 2022-02-17 |
20220050671 | DEPLOYING FIRMWARE UPDATES - Firmware updates can be packaged in a manner that enables a firmware update utility to be executed to provide control functionality for the deployment of the firmware updates while leveraging an operating system provided update framework to deliver the firmware updates to the pre-boot environment. Accordingly, control over the deployment of the firmware updates is provided without the difficulties and security risks of employing a custom kernel-mode driver to deliver the firmware updates. | 2022-02-17 |
20220050672 | APPARATUS AND METHOD OF UPDATING SOFTWARE OF A VEHICLE CLUSTER - An apparatus and a method of updating cluster software use a universal serial bus (USB) terminal. The method includes connecting a USB memory to a USB socket of the USB terminal; determining, by a head unit, whether a cluster software update file is present in the USB memory; when the cluster software update file is present, changing the USB host to the cluster; and receiving, by the cluster, data for the update from the USB memory and updating the software of the cluster. | 2022-02-17 |
20220050673 | MANAGEMENT SYSTEM AND CONTROL METHOD THEREOF - A system comprising: a manager apparatus that manages a device; and an information processing apparatus that functions as an agent that performs communication via a network with the device based on an instruction of the manager apparatus, wherein the manager apparatus transmits an instruction of a device operation to the agent, wherein the information processing apparatus, as a function of the agent, when an update of software of a device has been instructed as a device operation from the manager apparatus, transmits to that device an update request, which includes URL information that indicates a reverse proxy which operates in the information processing apparatus, and wherein by the device performing transmission of data in response to the update request to the URL information that indicates the reverse proxy, that data is transferred to the manager apparatus via the information processing apparatus. | 2022-02-17 |
20220050674 | TENANT DECLARATIVE DEPLOYMENTS WITH RELEASE STAGGERING - A method that includes identifying a declarative deployment file defining a deployment configuration for multiple logical single-tenant system stacks supported by a compute container system, where the deployment configuration includes a set of deployment criteria and a failure threshold. The method may further include determining, based on the set of deployment criteria, a set of deployment groups for implementing one or more system updates, where the set of deployment groups includes a first deployment group and the first deployment group includes a first set of logical single-tenant system stacks from the multiple logical single-tenant system stacks supported by the compute container system. The method may further include implementing the one or more system updates for the set of deployment groups based on the failure threshold. | 2022-02-17 |
20220050675 | Representing Source Code as Implicit Configuration Items - Persistent storage may contain: (i) an explicit configuration item table with entries of explicit configuration items representing hardware devices and executable software applications deployed on the hardware devices, (ii) an implicit configuration item table with entries of implicit configuration items representing units of source code, wherein at least some of the executable software applications are compiled versions of the units of source code, and (iii) an implicit relationship table associating pairs of the configuration items. One or more processors may be configured to receive information related to a particular unit of source code; write, to the implicit configuration item table, at least some of the information as an implicit configuration item; determine that the implicit configuration item has one or more identifying attributes in common with an explicit configuration item; and write, to the implicit relationship table, a new entry associating the implicit configuration item and the explicit configuration item. | 2022-02-17 |
20220050676 | Multiprocessor Programming Toolkit for Design Reuse - Techniques for specifying and implementing a software application targeted for execution on a multiprocessor array (MPA). The MPA may include a plurality of processing elements, supporting memory, and a high bandwidth interconnection network (IN), communicatively coupling the plurality of processing elements and supporting memory. In some embodiments, software code may specify one or more cell definitions that include: program instructions executable to perform a function and one or more language constructs. The software code may further instantiate first, second, and third cell instances, each of which is an instantiation of one of the one or more cell definitions, where the instantiation includes configuration of the one or more language constructs such that: the first and second cell instances communicate via respective communication ports and the first and second cell instances are included in the third cell instance. | 2022-02-17 |
20220050677 | METHODS AND SYSTEMS FOR MONITORING CONTRIBUTOR PERFORMANCE FOR SOURCE CODE PROGRAMMING PROJECTS - Methods and systems for monitoring contributor performance for source code programming projects in order to increase the velocity of workflow and the efficiency of project teams. In particular, the methods and systems record the particular type of issue that is tagged for a given contribution, if any, and monitor the amount of programming time of the contributor that is required to resolve the issue. The programming time required to resolve the issue, the type of issue, and/or other characteristics of contributors are then used to generate real-time recommendations related to the performance of the contributor relative to the project team. | 2022-02-17 |
20220050678 | Systems, Apparatuses, And Methods For Fused Multiply Add - Embodiments of systems, apparatuses, and methods for fused multiply add. In some embodiments, a decoder decodes a single instruction having an opcode, a destination field representing a destination operand, and fields for a first, second, and third packed data source operand, wherein packed data elements of the first and second packed data source operand are of a first, different size than a second size of packed data elements of the third packed data operand. Execution circuitry then executes the decoded single instruction to perform, for each packed data element position of the destination operand, a multiplication of M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position of the third packed data source, an addition of the results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position of the destination corresponding to the packed data element position of the third packed data source, wherein M is equal to the full-sized packed data element divided by N. | 2022-02-17 |
20220050679 | HANDLING AND FUSING LOAD INSTRUCTIONS IN A PROCESSOR - A system, processor, and/or technique configured to: determine whether two or more load instructions are fusible for execution in a load store unit as a fused load instruction; in response to determining that two or more load instructions are fusible, transmit information to process the two or more fusible load instructions into a single entry of an issue queue; issue the information to process the two or more fusible load instructions from the single entry in the issue queue as a fused load instruction to the load store unit using a single issue port of the issue queue, wherein the fused load instruction contains the information to process the two or more fusible load instructions; execute the fused load instruction in the load store unit; and write back data obtained by executing the fused load instruction simultaneously to multiple entries in the register file. | 2022-02-17 |
20220050680 | TRACKING LOAD AND STORE INSTRUCTIONS AND ADDRESSES IN AN OUT-OF-ORDER PROCESSOR - A computer system, processor, and/or load-store unit has a data cache for storing data, the data cache having a plurality of entries to store the data, each data cache entry addressed by a row and a Way, each data cache row having a plurality of the data cache Ways; a first Address Directory organized and arranged the same as the data cache where each first Address Directory entry is addressed by a row and a Way where each row has a plurality of Ways; a store reorder queue for tracking the store instructions; and a load reorder queue for tracking load instructions. Each of the load and store reorder queues has a Way bit field, preferably less than six bits, for identifying the data cache Way and/or a first Address Directory Way where the Way bit field acts as a proxy for a larger address, e.g. a real page number. | 2022-02-17 |
20220050681 | TRACKING LOAD AND STORE INSTRUCTIONS AND ADDRESSES IN AN OUT-OF-ORDER PROCESSOR - A computer system, processor, and/or load-store unit has a data cache for storing data, the data cache having a plurality of entries to store the data, each data cache entry addressed by a row and a Way, each data cache row having a plurality of the data cache Ways; a first Address Directory organized and arranged the same as the data cache where each first Address Directory entry is addressed by a row and a Way where each row has a plurality of Ways; a store reorder queue for tracking the store instructions; and a load reorder queue for tracking load instructions. Each of the load and store reorder queues has a Way bit field, preferably less than six bits, for identifying the data cache Way and/or a first Address Directory Way where the Way bit field acts as a proxy for a larger address, e.g. a real page number. | 2022-02-17 |
20220050682 | INSTRUCTION HANDLING FOR ACCUMULATION OF REGISTER RESULTS IN A MICROPROCESSOR - A computer system, processor, and method for processing information is disclosed that includes at least one computer processor; a main register file associated with the at least one processor, the main register file having a plurality of entries for storing data, one or more write ports to write data to the main register file entries, and one or more read ports to read data from the main register file entries; one or more execution units including a dense math execution unit; and at least one accumulator register file having a plurality of entries for storing data. The results of the dense math execution unit in an aspect are written to the accumulator register file, preferably to the same accumulator register file entry multiple times, and the data from the accumulator register file is written to the main register file. | 2022-02-17 |
20220050683 | APPARATUSES, METHODS, AND SYSTEMS FOR NEURAL NETWORKS - Methods and apparatuses relating to processing neural networks are described. In one embodiment, an apparatus to process a neural network includes a plurality of fully connected layer chips coupled by an interconnect; a plurality of convolutional layer chips each coupled by an interconnect to a respective fully connected layer chip of the plurality of fully connected layer chips and each of the plurality of fully connected layer chips and the plurality of convolutional layer chips including an interconnect to couple each of a forward propagation compute intensive tile, a back propagation compute intensive tile, and a weight gradient compute intensive tile of a column of compute intensive tiles between a first memory intensive tile and a second memory intensive tile. | 2022-02-17 |
20220050684 | PROGRAM COUNTER (PC)-RELATIVE LOAD AND STORE ADDRESSING - Load store addressing can include a processor, which fuses two consecutive instructions determined to be prefix instructions and treats the two instructions as a single fused instruction. The prefix instruction of the fused instruction is auto-finished at dispatch time in an issue unit of the processor. A suffix instruction of the fused instruction and its fields and the prefix instruction's fields are issued from an issue queue of the issue unit, wherein an opcode of the suffix instruction is issued to a load store unit of the processor, and fields of the fused instruction are issued to the execution unit of the processor. The execution unit forms operands of the suffix instruction, at least one operand formed based on a current instruction address of the single fused instruction. The load store unit executes the suffix instruction using the operands formed by the execution unit. | 2022-02-17 |
20220050685 | Memory Systems and Memory Control Methods - Memory systems and memory control methods are described. According to one aspect, a memory system includes a plurality of memory cells individually configured to store data, program memory configured to store a plurality of first executable instructions which are ordered according to a first instruction sequence and a plurality of second executable instructions which are ordered according to a second instruction sequence, substitution circuitry configured to replace one of the first executable instructions with a substitute executable instruction, and a control unit configured to execute the first and second executable instructions to control reading and writing of the data with respect to the memory, wherein the control unit is configured to execute the first executable instructions according to the first instruction sequence, to execute the substitute executable instruction after the execution of the first executable instructions, and to execute the second executable instructions according to the second instruction sequence as a result of execution of the substitute executable instruction. | 2022-02-17 |
20220050686 | INSTRUCTION DRIVEN DYNAMIC CLOCK MANAGEMENT FOR DEEP PIPELINE AND OUT-OF-ORDER OPERATION OF MICROPROCESSOR USING ON-CHIP CRITICAL PATH MESSENGER AND ELASTIC PIPELINE CLOCKING - Systems and/or methods can include techniques to exploit dynamic timing slack on the chip. By using a special clock generator, the clock period can be shrunk as needed at every cycle. The clock period is determined during operation by checking “critical path messengers” to indicate how much dynamic timing slack exists. Elastic pipeline timing can also be introduced to redistribute timing among pipeline stages to bring further benefits. | 2022-02-17 |
20220050687 | METHOD OF BOOTING ELECTRONIC DEVICE AND ELECTRONIC DEVICE CONTROL SYSTEM, METHODS OF OPERATING AND CONTROLLING ELECTRONIC DEVICE, ELECTRONIC DEVICE, CONTROL TERMINAL, AND ELECTRONIC DEVICE CONTROL SYSTEM - A method of booting an electronic device, methods of operating and controlling an electronic device, an electronic device, a control terminal, and an electronic device control system. The method of booting the electronic device includes: displaying a first graphic code on a screen of the electronic device, the first graphic code corresponding to a preset booting instruction; scanning, by a first control terminal, the first graphic code, and transmitting a booting request of the electronic device to a second control terminal according to the first graphic code; generating, by the second control terminal, a booting command of the electronic device according to the booting request, and transmitting the booting command to the electronic device; and determining, by the electronic device, a turning-on time length of the electronic device according to a matching result between the booting command and the preset booting instruction. | 2022-02-17 |
20220050688 | DEVICE STATE DATA LOADING ONTO RFID CHIP - In one aspect, a device may include at least one processor and storage accessible to the at least one processor. The storage may include instructions executable by the at least one processor to determine a device state such as a device error. The instructions may also be executable to, responsive to the determination, load data related to the device state onto a radio-frequency identification (RFID) chip or other RFID element. | 2022-02-17 |
20220050689 | CONDITIONAL FIRMWARE IMAGES - In an example implementation according to aspects of the present disclosure, a method, system, and computing device are provided for supporting conditional firmware images. A set of conditions may be received, wherein the conditions are based on characteristics of a computer. A set of indications corresponding to displayable images may be received. The set of conditions may be compared against a system clock of the computer. Responsive to the comparing, a section of non-volatile memory may be set to one of the set of indications. One of the displayable images corresponding to the section may be displayed. | 2022-02-17 |
20220050690 | SYSTEM AND METHOD FOR UPDATING HOST OPERATING SYSTEM DRIVERS FROM A MANAGEMENT CONTROLLER - An information handling system includes a basic input/output system (BIOS) that performs a firmware boot operation. During the firmware boot operation, the BIOS determines whether a driver pack management controller setting is enabled within a baseboard management controller of the information handling system. In response to the driver pack management controller setting being enabled, the BIOS copies a binary utility from the baseboard management controller to a system memory, and creates an operating system specific platform binary table to point to the binary utility on the baseboard management controller. In response to the operating system being initialized, a processor invokes the binary utility, mounts a memory partition of the baseboard management controller as a virtual drive of the operating system, and executes the operating system specific binary stage under a fixed globally unique identifier to install a driver pack. | 2022-02-17 |
20220050691 | ELECTRONIC APPARATUS, CONTROL METHOD, AND RECORDING MEDIUM WITH PROGRAM RECORDED THEREON - An electronic apparatus includes a first communicator that performs communication with a computer body based on a first communications standard, a second communicator that performs communication with the computer body based on a second communications standard, a determination processor that classifies and determines a category of an operating system based on a first communications parameter acquired from the first communicator, a setting processor that sets an operation mode corresponding to the category of the operating system as determined by the determination processor, and a controller that controls, based on the communication with the computer body performed by the second communicator, a specified function according to the operation mode set by the setting processor. | 2022-02-17 |
20220050692 | Graphic Rendering Method and Electronic Device - A graphic rendering method includes detecting, by an electronic device, a user input event when displaying a first graphic, generating a graphic rendering instruction based on the input event, where the graphic rendering instruction includes a rendering element of a target graphic, re-rendering a first area that is in the first graphic and that needs to be re-rendered when rendering the target graphic based on the rendering element of the target graphic, synthesizing a re-rendered first area and a second area to obtain the target graphic, where the second area is an area that is in the first graphic and that does not need to be re-rendered, and displaying the target graphic. | 2022-02-17 |
20220050693 | DETERMINE STEP POSITION TO OFFER USER ASSISTANCE ON AN AUGMENTED REALITY SYSTEM - According to one embodiment, a method, computer system, and computer program product for providing support to a user within an augmented reality environment based on an emotional state of the user is provided. The present invention may include: based on data gathered from an augmented reality device, determining a step of a series of steps the user is currently performing; identifying the current emotional state of the user based at least in part on data gathered from the augmented reality device; responsive to determining that the current emotional state of the user is frustrated, performing one or more actions to reduce the frustration of the user; and responsive to determining that the current emotional state of the user is frustrated or confused, performing one or more support actions to assist the user in completing the step based on the current emotional state of the user. | 2022-02-17 |
20220050694 | METHOD, SYSTEM AND APPARATUS ASSISTING A USER OF A VIRTUAL ENVIRONMENT - The disclosed systems and methods are directed to assisting a user of a virtual environment, the method comprising: tracking and storing user interactions of the user with a user interface associated with the virtual environment, the user interactions being associated with the user attempting to perform a task in the virtual environment; performing a background analysis of the user interactions, the background analysis comprising: inputting one or more of the tracked and stored user interactions to a machine learning algorithm (MLA) having been previously trained to identify sequence patterns of user interactions; outputting, by the MLA, one or more sequence patterns of user interactions to be associated with the tracked and stored user interactions; determining that the user requires assistance to complete the task; and operating an assistance module to guide the user in completing the task. | 2022-02-17 |
20220050695 | SIMPLISTIC MACHINE LEARNING MODEL GENERATION TOOL FOR PREDICTIVE DATA ANALYTICS - Systems and methods for predictive data analytics are provided. A method comprises generating a guided user interface (GUI) that guides one or more user operations on the user interface, including: obtaining, from a database, a dataset including a plurality of data objects; determining one or more characteristics associated with a first data object of the plurality of data objects; identifying a subset of the dataset based at least in part on the one or more characteristics; selecting at least one machine learning algorithm; training a machine learning (ML) model with respect to the first data object using the subset of the dataset and the at least one machine learning algorithm to generate a trained ML model; and implementing the trained ML model with respect to the first data object in a cloud server to enable distributing the trained ML model to a plurality of client devices via a network. | 2022-02-17 |
20220050696 | Auto-completion for Gesture-input in Assistant Systems - In one embodiment, a method includes receiving an initial input in a first modality from a first user from a client system associated with the first user, determining one or more intents corresponding to the initial input by an intent-understanding module, generating one or more candidate continuation-inputs based on the one or more intents, where the one or more candidate continuation-inputs are in one or more candidate modalities, respectively, and wherein the candidate modalities are different from the first modality, and sending instructions for presenting one or more suggested inputs corresponding to one or more of the candidate continuation-inputs to the client system. | 2022-02-17 |
20220050697 | DATA DRIVEN COMPUTER USER EMULATION - Whether testing intrusion detection systems, conducting training exercises, or creating data sets to be used by the broader cybersecurity community, realistic user behavior is a desirable component of a cyber-range. Existing methods either rely on network level data or replay recorded user actions to approximate real users in a network. Probabilistic models can be fit to actual user data (sequences of application usage) collected from endpoints. Once trained to the user's behavioral data, these models can generate novel sequences of actions from the same distribution as the training data. These sequences of actions can be fed to emulator software via configuration files, which replicate those behaviors on end devices. The models are platform agnostic and can generate behavior data for any emulation software package. In some embodiments, a latent variable is added to faithfully capture and leverage time-of-day trends. | 2022-02-17 |
20220050698 | INTELLIGENT CONNECTION PLACEMENTS ON SR-IOV VIRTUAL FUNCTIONS - In an approach to intelligent connection placement across multiple logical ports, a mapping table for a virtual machine is created. A connection request to connect a local port to a port on a peer device is received. Whether an entry exists in the mapping table for the port on the peer device is determined. Responsive to determining that an entry exists in the mapping table for the port on the peer device, whether a virtual function exists for the port on the peer device in the mapping table for the same physical function is determined. A virtual function is selected from the mapping table to connect the local port to the port on the peer device. | 2022-02-17 |
20220050699 | WORKSPACE RESILIENCY WITH MULTI-FEED STATUS RESOURCE CACHING - A client device includes resource caches, and a processor coupled to the resource caches. The processor receives resources from different resource feeds, and caches user interfaces (UI) of the resources from the different resource feeds, with at least one resource feed having a resource cache separate from the resource cache of the other resource feeds. Statuses of the resource feeds are determined, with at least one status indicating availability of the at least one resource feed having the separate resource cache. UI elements from the separate resource cache are retrieved for display in response to the at least one resource feed associated with the separate resource cache not being available. | 2022-02-17 |
20220050700 | VIRTUAL BOND FOR EFFICIENT NETWORKING OF VIRTUAL MACHINES - A packet is received by a first virtual machine supported by a host system from a second virtual machine via a shared memory device that is accessible to a plurality of virtual machines supported by the host system. The first virtual machine determines that the second virtual machine is supported by the host system in view of receiving the packet via the shared memory device. Identification information associated with the second virtual machine is stored in a virtual bond data structure, wherein the identification information associated with the second virtual machine being present in the virtual bond data structure causes the first virtual machine to transmit a subsequent packet to the second virtual machine via the shared memory device. | 2022-02-17 |
20220050701 | OPEN-CHANNEL STORAGE DEVICE MANAGEMENT WITH FTL ON VIRTUAL MACHINE - Embodiments of the disclosure provide systems and methods for accessing a storage device of a host machine. The method can include: receiving, via a first guest flash translation layer (FTL) instance, a first request for accessing the storage device from a first virtual machine running on a host machine, wherein the first request comprises a first physical address of the storage device; transmitting, via the first FTL instance, the first request to a host FTL driver; converting, via the host FTL driver, the first request into a first hardware command; transmitting, via the host FTL driver, the first hardware command to the storage device; and executing, via the storage device, the first hardware command. | 2022-02-17 |
20220050702 | VIRTUALIZATION FOR AUDIO CAPTURE - Captured audio data is provided by a streaming interface to a multimedia application (e.g., a game, voice chat app or a recording app) via a virtual audio driver in accordance with some embodiments. The virtual audio driver is a software module that provides an interface between virtual audio hardware of a virtualized computing environment (e.g., a virtual machine or a remote machine) and the multimedia application, allowing the multimedia application to interact with the audio hardware using application program interfaces (APIs) and other software resources. | 2022-02-17 |
20220050703 | AUTONOMOUS COMPUTER SYSTEM DEPLOYMENT TO A VIRTUALIZED ENVIRONMENT - The invention improves the use of computers as a tool by autonomously creating virtual computing systems in virtualized or cloud environments. An apparatus or product for autonomously deploying a computer system to a virtual environment, and a method using same that includes modeling a computer system in a modeling language to generate a design model. Processes may also include parsing the design model to generate a set of scripts comprising the virtual system components. The virtual system components may be deployed by executing the scripts. The system may validate that the deployed virtual system components are compliant with the design model. The system may be designed in a modeling language such as Systems Modeling Language (SysML) or Unified Modeling Language (UML). Processes may modify system settings to configure, secure, and harden the system components. | 2022-02-17 |
20220050704 | SYMBOL MAPPING SYSTEM FOR CONCURRENT KERNEL AND USER SPACE DEBUGGING OF A VIRTUAL MACHINE - A method is provided comprising: monitoring, by a symbol context manager, context switch events that are generated in a virtual machine, and updating a symbol space map based on the context switch events; receiving, by the symbol context manager, a request to provide a symbol space of the virtual machine, the request being generated by a symbol database interface in response to a symbol query that is received at the symbol database interface from a debugger that is debugging the virtual machine, the symbol query being associated with a symbol that is part of the symbol space; and providing, by the symbol context manager, an indication of the symbol space of the virtual machine, the indication of the symbol space being provided based on the symbol space map. | 2022-02-17 |
20220050705 | METHODS AND SYSTEMS FOR INSTANTIATING AND TRANSPARENTLY MIGRATING EXECUTING CONTAINERIZED PROCESSES - A method for instantiating and transparently migrating executing containerized processes includes receiving, by a container engine executing on a first machine, an instruction to instantiate a container image on the first machine. The container engine transmits, to a modified container runtime process, executing on the first machine, the instruction to instantiate the container image on the first machine. The modified container runtime process generates, on the first machine, a shim process representing the instantiated container image. The shim process forwards the instruction to an agent executing on a second machine, via a proxy connected to the agent via a network connection. The agent directs instantiation of the container image as a containerized process. A scheduler component executing on the first machine determines to migrate the containerized process to a third machine. The scheduler component directs migration of the containerized process to the third machine, during execution of the containerized process. | 2022-02-17 |
20220050706 | COMPUTING SYSTEM WITH DUAL VDA REGISTRATION AND RELATED METHODS - A computing system may be in communication with client computing devices. The computing system may include a cloud infrastructure, an offline cache, and a VDA configured to concurrently have a first registration with the cloud infrastructure, and a second registration with the offline cache, and provide corresponding virtual desktop instances for the client computing devices based upon either the first registration or the second registration. The offline cache may be configured to broker local resources for the virtual desktop instances when the cloud infrastructure is unavailable. The VDA may be configured to transition, with no transition delay, to the offline cache using the second registration when the cloud infrastructure is unavailable. | 2022-02-17 |
20220050707 | NETWORK COMMAND COALESCING ON GPUs - An approach is provided for coalescing network commands in a GPU that implements a SIMT architecture. Compatible next network operations from different threads are coalesced into a single network command packet. This reduces the number of network command packets generated and issued by threads, thereby increasing efficiency and improving throughput. The approach is applicable to any number of threads and any thread organization methodology, such as wavefronts, warps, etc. | 2022-02-17 |
20220050708 | INVOKING FUNCTIONS OF AGENTS VIA DIGITAL ASSISTANT APPLICATIONS USING ADDRESS TEMPLATES - Systems and methods of invoking functions of agents via digital assistant applications are provided. Each action-inventory can have an address template for an action by an agent. The address template can include a portion having an input variable used to execute the action. A data processing system can parse an input audio signal from a client device to identify a request and a parameter to be executed by the agent. The data processing system can select an action-inventory for the action corresponding to the request. The data processing system can generate, using the address template, an address. The address can include a substring having the parameter used to control execution of the action. The data processing system can direct an action data structure including the address to the agent to cause the agent to execute the action and to provide output for presentation. | 2022-02-17 |
20220050709 | NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, EVALUATION FUNCTION GENERATION METHOD, AND INFORMATION PROCESSING APPARATUS - A non-transitory computer-readable storage medium storing a program that causes a processor included in an apparatus to execute a process. The process includes specifying a first moving distance in which each of a plurality of mobile objects moves from one task to another task for each of multiple task pairs, based on layout information expressing a movement route in which the plurality of mobile objects is movable, initial position information indicating an initial position of the plurality of mobile objects, and task information indicating a start position and an end position of a plurality of tasks, specifying a second moving distance of the plurality of mobile objects from the start position to the end position for the plurality of tasks, based on the layout information and the task information, and generating, based on the first moving distance and the second moving distance, an evaluation function of an Ising model. | 2022-02-17 |
20220050710 | JOB PROCESSING IN QUANTUM COMPUTING ENABLED CLOUD ENVIRONMENTS - A compatibility is ascertained between a configuration of a quantum processor (q-processor) of a quantum cloud compute node (QCCN) in a quantum cloud environment (QCE) and an operation requested in a first instruction in a portion (q-portion) of a job submitted to the QCE, the QCE including the QCCN and a conventional compute node (CCN), the CCN including a conventional processor configured for binary computations. In response to the ascertaining, a quantum instruction (q-instruction) is constructed corresponding to the first instruction. The q-instruction is executed using the q-processor of the QCCN to produce a quantum output signal (q-signal). The q-signal is transformed into a corresponding quantum computing result (q-result). A final result is returned to a submitting system that submitted the job, wherein the final result comprises the q-result. | 2022-02-17 |
20220050711 | SYSTEMS AND METHODS TO ORCHESTRATE INFRASTRUCTURE INSTALLATION OF A HYBRID SYSTEM - Methods and apparatus to orchestrate infrastructure installation of a hybrid system are disclosed. An example apparatus includes a first virtual appliance including a management endpoint. The first virtual appliance is to organize tasks to be executed to install a computing infrastructure. The example apparatus includes a first component server to execute tasks. The component server includes a management agent to communicate with the management endpoint to receive a task to be executed to install the computing infrastructure. The first virtual appliance is to associate a role with the first component server and to determine whether the first component server satisfies a prerequisite associated with the role. The first virtual appliance is to facilitate addressing an error when the first component server is determined not to satisfy the prerequisite. | 2022-02-17 |
20220050712 | DISTRIBUTED STREAMING SYSTEM SUPPORTING REAL-TIME SLIDING WINDOWS - In various embodiments, a process for providing a distributed streaming system supporting real-time sliding windows includes receiving a stream of events at a plurality of distributed nodes and routing the events into topic groupings. The process includes using one or more events in at least one of the topic groupings to determine one or more metrics of events with at least one window and an event reservoir including by: tracking, in a volatile memory of the event reservoir, beginning and ending events within the at least one window; and tracking, in a persistent storage of the event reservoir, all events associated with tasks assigned to a respective node. The process includes updating the one or more metrics based on one or more previous values of the one or more metrics as a new event is added or an existing event is expired from the at least one window. | 2022-02-17 |
20220050713 | Distributed Job Scheduling System - A method includes receiving a request to perform a job from a second computing device, where the job includes one or more steps to be completed in a period, and where the request includes a job description for the job, storing the job description into a data store, retrieving a step description corresponding to one of the steps of the job to be performed from the data store, where each of the steps is performed by a corresponding worker system, sending the commands to the communication endpoint for the corresponding worker system, receiving a status update comprising results for the commands from the corresponding worker system, and storing the status update to the data store. | 2022-02-17 |
20220050714 | POWER AWARE SCHEDULING - A method includes, by a scheduling controller, receiving from a user a request for an application to be executed by a computing system associated with a data center, wherein the application includes a plurality of tasks, and wherein the request includes an estimated execution time corresponding to an estimated amount of real-world time that the tasks will be actively running on the computing system to fully execute the application. The method includes receiving from the user a service level objective indicating a target percentage of a total amount of real-world time that the tasks will be actively running on the computing system and generating, in response to determining that the job can be completed according to the service level objective and the estimated execution time, a notification indicating acceptance of the job. | 2022-02-17 |
20220050715 | SYSTEMS AND METHODS CONFIGURED TO ENABLE AN OPERATING SYSTEM FOR CONNECTED COMPUTING THAT SUPPORTS USER USE OF SUITABLE TO USER PURPOSE RESOURCES SOURCED FROM ONE OR MORE RESOURCE ECOSPHERES - Systems and methods for purposeful computing are disclosed that, among other things, include enabling an operating system for connected computing configured for identification, evaluation, selection, and/or use of suitable to user purposes' resources to produce outcomes optimized to such purposes' fulfillment. Such resources populate a distributed resource ecosphere and have associated attributes that inform regarding resource suitability. | 2022-02-17 |
20220050716 | VIRTUAL MACHINE PLACEMENT METHOD AND VIRTUAL MACHINE PLACEMENT DEVICE IMPLEMENTING THE SAME - A method of placing a virtual machine may include: calculating virtual machine workload information, which is a load of the physical machine applied by the virtual machine through a predetermined method of calculating a workload on the basis of information stored in storage and related to operations of the first physical machine and the second physical machine; calculating initial predicted virtual load information related to an initial predicted virtual load related to a load of the physical machine predicted due to the virtual machine for a predetermined period on the basis of the virtual machine workload; calculating search placement information related to a search placement which is a placement of the virtual machine on the first physical machine and the second physical machine through a predetermined method of calculating a search placement; and placing the virtual machine on the physical machine again on the basis of the selected search placement. | 2022-02-17 |
20220050717 | METHODS AND SYSTEMS FOR BALANCING LOADS IN DISTRIBUTED COMPUTER NETWORKS FOR COMPUTER PROCESSING REQUESTS WITH VARIABLE RULE SETS AND DYNAMIC PROCESSING LOADS - Methods and systems are described for balancing loads in distributed computer networks for computer processing requests with variable rule sets and dynamic processing loads. The methods and systems may include determining an initial allocation of the plurality of processing requests to the plurality of available domains that has a lowest initial sum excess processing load. The methods and systems may then retrieve an updated estimated processing load for at least one of the plurality of processing requests and determine a secondary allocation of the plurality of processing requests to the plurality of available domains. | 2022-02-17 |
20220050718 | SCALABILITY ADVISOR - Systems and methods for estimating the scalability of applications in high performance computing and distributed computing environments and for configuring applications based on those estimates are disclosed. A model is disclosed that provides an estimate of the scalability behavior of an application based on basic parameters and a small number of runs on bare metal and cloud systems. The system may also be configured to use the estimated performance to recommend optimal configurations based on different policies, including best performance, lowest cost, and best performance per cost. | 2022-02-17 |
20220050719 | AUTO-SCALING GROUP MANAGEMENT METHOD AND APPARATUS - An auto-scaling group management method is provided, including: receiving an auto-scaling group configuration message, the auto-scaling group configuration message including level information of an auto-scaling group, initialization configuration information of a compute instance of the auto-scaling group, and first auto-scaling policy information of the auto-scaling group; creating the compute instance of the auto-scaling group based on the initialization configuration information in a service server included in a level indicated by the level information of the auto-scaling group; and performing an operation on the compute instance of the auto-scaling group based on the auto-scaling policy information of the auto-scaling group. In the foregoing technical solution, a deployed auto-scaling group may support cross-layer deployment or operation. | 2022-02-17 |
20220050720 | SCALABLE OPERATORS FOR AUTOMATIC MANAGEMENT OF WORKLOADS IN HYBRID CLOUD ENVIRONMENTS - A computer-implemented method for managing one or more operations of a workload includes selecting a resource type for workload management on a platform. One or more operations of the selected resource to be managed are identified. A reconciliation time for execution of each of the identified operations is determined. A reconciliation period between two consecutive reconciliations is determined for each of the identified operations. A minimum number of processes for workload management of a given set of the operations on resources is calculated, and the determined minimum number of processes is deployed to manage the workload. | 2022-02-17 |
20220050721 | RESOURCE INTEGRATION SYSTEM AND RESOURCE INTEGRATION METHOD - A resource integration method includes the following steps: a receiving module receives access information from a guest operating system on the host device; the access information is used to determine whether a frame rate is lower than a frame rate threshold; when the frame rate is determined to be lower than the frame rate threshold, an external resource request signal is transmitted to the receiving module; after the receiving module receives the external resource request signal, a resource management module (which is located in a bridge module) selects an optimal external device from a specific category (among a plurality of categories in a candidate list), and a calculation operation or a storage operation corresponding to the specific category is transmitted to the optimal external device for calculation or storage by the bridge module. | 2022-02-17 |
20220050722 | MEMORY POOL MANAGEMENT - Examples described herein relate to providing an interface to an operating system (OS) to create different memory pool classes to allocate to one or more processes and allocate a memory pool class with a process of the one or more processes. In some examples, a memory pool class of the different memory pool classes defines a mixture of memory devices in at least one memory pool available for access by the one or more processes. In some examples, memory devices are associated with multiple memory pool classes to provide multiple different categories of memory resource capabilities. | 2022-02-17 |
20220050723 | LIGHTWEIGHT REMOTE PROCESS EXECUTION - The present disclosure involves systems, software, and computer implemented methods for remotely executing binaries in a containerized computing environment using a lightweight inter-process communication (IPC) protocol and UNIX domain sockets. One example method includes establishing, in a shared computing image comprising a plurality of containers, a listening UNIX domain socket, where the listening UNIX domain socket is shared between all containers in the shared computing image. A request to execute a binary in a target container is received, at the target container, from a client container using the listening UNIX domain socket. A worker service is generated in the target container. The worker service executes the binary in the target container. A return exit code associated with the executed binary is received and sent to the client container using the UNIX domain socket. | 2022-02-17 |
20220050724 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND INFRASTRUCTURE - An information processing system performs load distribution across a first infrastructure and a second infrastructure having mutually different billing systems. The first infrastructure pairs the first infrastructure with the second infrastructure, and generates a key pair including a private key used in the first infrastructure and a public key used in the second infrastructure, transfers, to the second infrastructure, an application definition input to a container cluster in the first infrastructure, by using the private key, and analyzes the application definition input to the container cluster in the first infrastructure to generate a load balancer definition and a reverse proxy definition. The second infrastructure analyzes the transferred application definition to generate a service definition. | 2022-02-17 |
20220050725 | METHOD FOR MANAGING COMPUTING CAPACITIES IN A NETWORK WITH MOBILE PARTICIPANTS - Technologies and techniques for a mobile end device to offload computing from the mobile end device to at least one edge computer and/or at least one cloud computer. Resource information may be obtained from the at least one edge computer and/or at least one cloud computer. Application information may be obtained from at least one system application in the mobile end device, and a computing capacity may be assigned for the at least one system application in the mobile end device to the at least one edge computer and/or the at least one cloud computer. | 2022-02-17 |
20220050726 | ADVANCED RESOURCE LINK BINDING MANAGEMENT - A link binding chain is disclosed that enables multiple hops of link bindings to be cascaded to form a chain of link bindings. The binding chain can be leveraged when a one-hop link binding is infeasible or fails to be established. Dynamic binding method switching is disclosed for updating the binding method after a link binding has been established, such that a more suitable or efficient link binding method can be selected to adapt to the changing environment. Methods for broker-assisted link binding are disclosed to facilitate link binding functionalities between a source resource and a destination resource that are connected through a binding broker. | 2022-02-17 |
20220050727 | INFORMATION PROCESSING APPARATUS - An information processing apparatus that is one embodiment of the present invention detects execution of software in any of a host environment and one or more virtual environments, and acquires discrimination information indicating that a detected environment is a first environment and first name information indicating a name of the software in a name space of the first environment. The information processing apparatus acquires, based on the discrimination information, second name information indicating a name of the first environment in a name space of a second environment. The information processing apparatus converts, based on the second name information, the first name information into third name information indicating a name of the software in the name space of the second environment. The information processing apparatus acquires, based on the third name information, information on the software from an accessible resource. | 2022-02-17 |
20220050728 | DYNAMIC DATA DRIVEN ORCHESTRATION OF WORKLOADS - According to aspects of the present disclosure, systems, methods, and computer program products are provided for dynamic workload orchestration based on data complexity, performing the following operations: (i) receiving a workload for orchestration; (ii) computing complexity scores for respective portions of the workload, where the complexity scores are computed based at least on parameters describing data associated with the portions of the workload; and (iii) using an orchestration engine to assign the portions of the workload to corresponding compute resources based on their respective complexity scores. | 2022-02-17 |
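The score-then-assign loop described above can be sketched as follows. The scoring formula, the resource names, and the "cheapest eligible resource" policy are all assumptions made for illustration; the patent does not specify them.

```python
from typing import Dict, List, Tuple

# Hypothetical complexity score: grows with data volume (rows) and with
# structural complexity (columns). Weights are invented for the example.
def complexity_score(portion: Dict) -> float:
    return portion["rows"] * 0.001 + portion["columns"] * 0.5

def orchestrate(portions: List[Dict],
                resources: List[Tuple[str, float]]) -> Dict[str, str]:
    """Assign each portion to the least-capable resource whose capacity
    still covers its complexity score (falling back to the largest)."""
    assignments = {}
    for p in sorted(portions, key=complexity_score, reverse=True):
        score = complexity_score(p)
        eligible = [r for r in resources if r[1] >= score]
        chosen = (min(eligible, key=lambda r: r[1]) if eligible
                  else max(resources, key=lambda r: r[1]))
        assignments[p["name"]] = chosen[0]
    return assignments

jobs = [{"name": "etl", "rows": 10_000, "columns": 40},    # score 30.0
        {"name": "report", "rows": 500, "columns": 10}]    # score 5.5
pools = [("small-pool", 10.0), ("large-pool", 100.0)]
plan = orchestrate(jobs, pools)
```

The heavy "etl" portion lands on the large pool while the light "report" portion stays on the small one, which is the data-driven placement the abstract describes.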
20220050729 | Clustering Processes Using Traffic Data - Methods, apparatus, and systems are disclosed for clustering processes for use by a cloud platform. Process clustering may include receiving traffic data transmitted and received between each pair of processes in a set of processes. A matrix may be generated based on the traffic data, the matrix including a row and a column for each process in the set of processes. The matrix may be hierarchically clustered based on the traffic data, the hierarchical clustering outputting a plurality of clusters, each cluster including one or more processes in the set of processes. The plurality of clusters may then be merged into a set of merged clusters of processes. | 2022-02-17 |
20220050730 | METHOD AND APPARATUS FOR CLOUD SERVICE PROVIDER EVENTS GENERATION AND MANAGEMENT - Various methods, apparatuses/systems, and media for automatic generation and management of cloud service provider events are provided. A service provider computing device defines a maturity level of an event; publishes an event schema associated with the maturity level of the event; and transmits the event to an event platform that is configured to provide infrastructure for event production and consumption. The event platform validates the event based on the event schema; calculates a validation score for the event upon validation of the event; and publishes the validation score on a website. A consumer computing device consumes the published event from the event platform. | 2022-02-17 |
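The validate-and-score step in the abstract above can be illustrated with a toy schema check. The schema fields, the type-based check, and the "fraction of fields valid" scoring are assumptions invented for this sketch; the patent does not define the score.

```python
# Hypothetical published event schema: required field names mapped to the
# expected Python type of each value. Field names are illustrative.
SCHEMA = {"event_id": str, "timestamp": int, "maturity_level": str}

def validation_score(event: dict) -> float:
    """Score an event as the fraction of schema fields that are present
    with the expected type (1.0 = fully schema-conformant)."""
    ok = sum(1 for field, ftype in SCHEMA.items()
             if isinstance(event.get(field), ftype))
    return ok / len(SCHEMA)

good_event = {"event_id": "e-1", "timestamp": 1644969600,
              "maturity_level": "GA"}
score = validation_score(good_event)          # all three fields conform
```

An event missing fields would score proportionally lower, giving consumers a quick signal about how mature a provider's event stream is before they subscribe.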
20220050731 | UNIVARIATE DENSITY ESTIMATION METHOD - A method for use with a computing device. The method may include receiving a data set including a plurality of univariate data points and determining a target kernel bandwidth for a kernel density estimator (KDE). Determining the target kernel bandwidth may include computing a plurality of sample KDEs and selecting the target kernel bandwidth based on the sample KDEs. The method may further include computing the KDE for the data set using the target kernel bandwidth. For one or more tail regions of the data set, the method may further include computing one or more respective tail extensions. The method may further include computing and outputting a renormalized piecewise density estimator that, in each tail region, equals a renormalization of the respective tail extension for that tail region, and, outside the one or more tail regions, equals a renormalization of the KDE. | 2022-02-17 |
20220050732 | APPLICATION INFRASTRUCTURE CONFIGURATION BASED ON ANNOTATED API SCHEMAS - An infrastructure management system automatically determines a configuration of infrastructure services for the execution of applications that best satisfies predefined target criteria based on receiving annotated application programming interface (API) schemas associated with the applications. The system extracts information from customized annotations in a received API schema, sets up an API gateway with an existing configuration of infrastructure services, and logs requests received at this existing configuration via the gateway. The system generates a set of alternate configurations based on the extracted information, simulates execution of a set of logged requests to determine a set of valid configurations, and subsequently selects a new configuration that satisfies threshold predefined target criteria. The system may update the existing configuration to the new configuration without interrupting application services. | 2022-02-17 |
20220050733 | COMPONENT FAILURE PREDICTION - A method comprises retrieving operating conditions data comprising operational details of one or more components in at least one computing environment. Component replacement data and no fault found (NFF) data of the computing environment are also retrieved. The component replacement data comprises details about components that have been replaced in the computing environment. The NFF data comprises details about components incorrectly identified as having failed in the computing environment and symptoms leading to the incorrect identifications. The method also comprises generating a first mapping between given ones of the operational details and given ones of the replaced components, and generating a second mapping between given ones of the incorrectly identified components and given ones of the symptoms using one or more machine learning algorithms. Using the first and second mappings, at least one failed component is predicted based on one or more symptoms identified in a received support case. | 2022-02-17 |
20220050734 | DETECTING PAGE FAULT TRAFFIC - Methods, systems, and devices for detecting page fault traffic are described. A memory device may execute a self-learning algorithm to determine a priority size for read requests, such as a maximum readahead window size or other size related to page faults in a memory system. The memory device may determine the priority size based at least in part on tracking how many read requests are received for different sizes of sets of data. Once the priority size is determined, the memory device may detect subsequent read requests for sets of data having the priority size, and the memory device may prioritize or otherwise optimize the execution of such read requests. | 2022-02-17 |
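The self-learning step above amounts to counting request sizes and promoting the dominant one. A minimal sketch, with the class name and "most frequent size wins" rule invented for illustration:

```python
from collections import Counter

# Hypothetical sketch: track how many read requests arrive for each
# transfer size; the most frequently requested size becomes the "priority
# size" (e.g., a readahead window tuned to page-fault traffic).
class ReadSizeTracker:
    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def observe(self, size_bytes: int) -> None:
        self.counts[size_bytes] += 1

    def priority_size(self) -> int:
        # The dominant request size is treated as page-fault traffic
        # worth prioritizing.
        return self.counts.most_common(1)[0][0]

tracker = ReadSizeTracker()
for size in [4096, 4096, 65536, 4096, 16384]:
    tracker.observe(size)
```

Here 4 KiB requests dominate, so subsequent 4 KiB reads would be the ones the device fast-paths.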
20220050735 | REGRESSION-BASED CALIBRATION AND SCANNING OF DATA UNITS - Read operations can be performed to read data stored at a data block. Parameters reflective of a separation between a pair of programming distributions associated with the data block can be determined based on the plurality of read operations. A read request to read the data stored at the data block can be received. In response to receiving the read request, a read operation can be performed to read the data stored at the data block based on the parameters that are reflective of the separation between the pair of programming distributions associated with the data block. | 2022-02-17 |
20220050736 | SYSTEM AND METHOD FOR DATA-DRIVEN ANALYTICAL REDUNDANCY RELATIONSHIPS GENERATION FOR EARLY FAULT DETECTION AND ISOLATION WITH LIMITED DATA - Example implementations described herein involve a new data-driven analytical redundancy relationship (ARR) generation for fault detection and isolation. The proposed solution uses historical data during normal operation to extract the data-driven ARRs among sensor measurements, and then uses them for fault detection and isolation. The proposed solution thereby does not need to rely on the system model, can detect and isolate more faults than traditional data-driven methods, can work when the system is not fully observable, and does not rely on a vast amount of historical fault data, which can save on memory storage or database storage. The proposed solution can thereby be practical in many real cases where there are data limitations. | 2022-02-17 |
20220050737 | INTERNAL SIGNAL MONITORING CIRCUIT - Disclosed herein is an apparatus that includes a first circuit configured to measure a first time period from a first active edge of one of plurality of internal signals to a second active edge of one of the plurality of internal signals, and a second circuit configured to compare the first time period with a second time period to generate an alert signal. | 2022-02-17 |
20220050738 | SYSTEM AND METHOD FOR RESOLVING ERROR MESSAGES IN AN ERROR MESSAGE REPOSITORY - A method for managing error messages includes obtaining, by a message resolution manager, a plurality of error messages, performing an error message consecutive deduplication on the plurality of error messages to obtain a plurality of deduplicated error messages, generating a plurality of message sequences using the plurality of deduplicated error messages, applying a message sequence frequency algorithm to the plurality of message sequences to obtain a high severity message sequence list, and initiating an error message resolution on at least one message sequence specified in the high severity message sequence list. | 2022-02-17 |
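Two of the steps above — consecutive deduplication and sequence-frequency analysis — can be sketched directly. The window length, the frequency threshold, and the message names are assumptions made for the example:

```python
from collections import Counter
from itertools import groupby

# Hypothetical sketch: collapse consecutive duplicate error messages, then
# count fixed-length message sequences so frequent (candidate high-severity)
# sequences can be surfaced for resolution.
def dedup_consecutive(messages):
    return [msg for msg, _ in groupby(messages)]

def frequent_sequences(messages, length=2, min_count=2):
    windows = [tuple(messages[i:i + length])
               for i in range(len(messages) - length + 1)]
    return [seq for seq, n in Counter(windows).items() if n >= min_count]

log = ["disk_full", "disk_full", "io_error",
       "disk_full", "io_error", "io_error"]
deduped = dedup_consecutive(log)     # consecutive repeats collapsed
hot = frequent_sequences(deduped)    # sequences seen at least twice
```

The repeated `disk_full → io_error` pattern survives deduplication and shows up as the frequent sequence, which is the kind of entry the abstract's high-severity list would contain.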
20220050739 | SEMICONDUCTOR DEVICE - Forming a semiconductor device includes forming a first conductive line on a substrate, forming a memory cell including a switching device and a data storage element on the first conductive line, and forming a second conductive line on the memory cell. Forming the switching device includes forming a first semiconductor layer, forming a first doped region by injecting a n-type impurity into the first semiconductor layer, forming a second semiconductor layer thicker than the first semiconductor layer, on the first semiconductor layer having the first doped region, forming a second doped region by injecting a p-type impurity into an upper region of the second semiconductor layer, and forming a P-N diode by performing a heat treatment process to diffuse the n-type impurity and the p-type impurity in the first doped region and the second doped region to form a P-N junction of the P-N diode in the second semiconductor layer. | 2022-02-17 |
20220050740 | Method and Apparatus for Memory Error Detection - A system with multiple processing domains sharing a memory resource accessed via a shared memory controller detects a memory error. As data is written to the shared memory resource, each processing domain generates a diagnostic code as a function of the data, the memory address for the data, and of a unique identifier corresponding to the processing domain. The diagnostic code is stored with the data for verification when the data is read back. As the data is read back, the processing domain separates the diagnostic code from the data being read and generates another diagnostic code in the same manner as the original diagnostic code. The other diagnostic code is compared to the initial diagnostic code. If both diagnostic codes are the same, the processing domain can be confident that the data read from the shared memory resource is the same as the data that was originally written. | 2022-02-17 |
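The write-then-verify scheme above can be sketched with CRC-32 standing in for the patent's unspecified diagnostic function; the seeding of the CRC with address and domain identifier is an assumption made for illustration:

```python
import zlib

# Hypothetical sketch: each processing domain derives a diagnostic code from
# the data, the memory address, and the domain's unique identifier, stores it
# with the data, and re-derives it on read-back. CRC-32 is a stand-in.
def diagnostic_code(data: bytes, address: int, domain_id: int) -> int:
    seed = address.to_bytes(8, "little") + domain_id.to_bytes(4, "little")
    return zlib.crc32(seed + data)

memory = {}  # toy stand-in for the shared memory resource

def domain_write(address: int, data: bytes, domain_id: int) -> None:
    memory[address] = (data, diagnostic_code(data, address, domain_id))

def domain_read(address: int, domain_id: int) -> bytes:
    data, stored = memory[address]
    if diagnostic_code(data, address, domain_id) != stored:
        raise ValueError("memory error detected")  # codes disagree
    return data

domain_write(0x1000, b"payload", domain_id=7)
ok = domain_read(0x1000, domain_id=7)   # codes match: data verified
```

Because the domain identifier is folded into the code, a read by the wrong domain (or data silently corrupted in the shared memory) fails verification, which is exactly the cross-domain protection the abstract describes.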
20220050741 | APPARATUS AND METHOD FOR SHARING DATA IN A DATA PROCESSING SYSTEM - A controller is coupled to a non-volatile memory device and a host. The controller is configured to perform a cyclic redundancy check on map data associated with user data stored in the memory device, generate an encryption code based on a logical address included in the map data, generate encrypted data through a logical operation on the encryption code and the map data, and transmit the encrypted data to the host. | 2022-02-17 |
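The abstract's pipeline — CRC on map data, an encryption code derived from the logical address, and a logical operation combining the two — can be sketched with XOR as the logical operation. The keystream expansion and field sizes are assumptions invented for the example:

```python
import zlib
from itertools import cycle

# Hypothetical sketch: derive a keystream ("encryption code") from the
# logical address and XOR it with map data before sharing it with the host.
def crc_ok(map_data: bytes, expected_crc: int) -> bool:
    return zlib.crc32(map_data) == expected_crc

def encryption_code(logical_address: int, length: int) -> bytes:
    # Expand the 32-bit logical address into a repeating keystream.
    key = logical_address.to_bytes(4, "little")
    return bytes(b for b, _ in zip(cycle(key), range(length)))

def xor_with_code(map_data: bytes, logical_address: int) -> bytes:
    code = encryption_code(logical_address, len(map_data))
    return bytes(d ^ k for d, k in zip(map_data, code))

map_data = b"L2P-entry"                 # toy logical-to-physical map entry
crc = zlib.crc32(map_data)              # integrity check before encryption
lba = 0x2A5C
sealed = xor_with_code(map_data, lba)   # what the controller transmits
restored = xor_with_code(sealed, lba)   # XOR is its own inverse
```

Only a party that knows the logical address can reconstruct the map data, while the CRC lets the controller confirm the map data was intact before it was sealed.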
20220050742 | ELECTRONIC SYSTEM FOR PERFORMING ERROR CORRECTION OPERATION - An electronic system includes a controller configured to output a clock, a command, and an address, and configured to receive and transmit data. The electronic system also includes a semiconductor device including an error calculation circuit. The semiconductor device is configured to generate, by the error calculation circuit, a parity including information on an error included in transfer data generated from the data, in a write operation initiated by the command, and to generate, by the error calculation circuit, a syndrome including information on an error included in transfer data generated from internal data, in a read operation initiated by the command. | 2022-02-17 |
20220050743 | MODIFYING CONDITIONS FOR MEMORY DEVICE ERROR CORRECTION OPERATIONS - A first error rating for a first memory access operation performed for data stored at a memory device operating at a first state is determined. In response to a determination that the first error rating satisfies a first error rating condition associated with the first state of the memory device, a first error correction operation is performed at the memory device. A change of the state of the memory device from the first state to a second state is detected. A second error rating condition associated with the memory device is determined based on the second state of the memory device. A second error rating is determined for a second memory access operation performed at the memory device. In response to a determination that the second error rating satisfies the second error rating condition, a second error correction operation is performed at the memory device. | 2022-02-17 |
20220050744 | ERROR CACHING TECHNIQUES FOR IMPROVED ERROR CORRECTION IN A MEMORY DEVICE - Methods, systems, and devices for error caching techniques for improved error correction in a memory device are described. An apparatus, such as a memory device, may use an error cache to store indications of memory cells identified as defective and may augment an error correction procedure using the stored indications. If one or more errors are detected in data read from the memory array, the apparatus may check the error cache, and if a bit of the data is indicated as being associated with a defective cell, the bit may be inverted. After such inversion, the data may be checked for errors again. If the inversion corrects an error, the resulting data may be error-free or may include a reduced quantity of errors that may be correctable using an error correction scheme. | 2022-02-17 |
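The invert-and-recheck flow above can be illustrated with a toy even-parity check standing in for the device's real error detection; the parity scheme, the data layout, and the function names are assumptions made for the sketch:

```python
# Hypothetical sketch: bits read from cells flagged as defective in the
# error cache are inverted, and the data is re-checked (here with a simple
# even-parity rule) before escalating to the full ECC scheme.
def parity_ok(bits) -> bool:
    return sum(bits) % 2 == 0   # stored data assumed to have even weight

def correct_with_error_cache(bits, defective_positions):
    if parity_ok(bits):
        return list(bits)                # no error detected
    repaired = list(bits)
    for pos in defective_positions:      # invert bits from known-bad cells
        repaired[pos] ^= 1
        if parity_ok(repaired):
            return repaired              # inversion cleared the error
    return None                          # fall back to the full ECC scheme

stored = [1, 0, 1, 1, 0, 1]                  # even weight: parity holds
corrupted = list(stored)
corrupted[2] ^= 1                            # cell 2 flips its bit
error_cache = {2}                            # cell 2 is known defective
result = correct_with_error_cache(corrupted, error_cache)
```

Flipping the cached bit position restores the original word without invoking the heavier ECC path, matching the abstract's "inversion corrects the error" case.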
20220050745 | ADAPTIVE PARITY TECHNIQUES FOR A MEMORY DEVICE - Methods, systems, and devices for adaptive parity techniques for a memory device are described. An apparatus, such as a memory device, may use one or more error correction code (ECC) schemes, an error cache, or both to support access operations. The memory device may receive a command from a host device to read or write data. If the error cache includes an entry for the data, the memory device may read or write the data using a first ECC scheme. If the error cache does not include an entry for the data, the memory device may read or write the data without using an ECC scheme or using a second ECC scheme different than the first ECC scheme. | 2022-02-17 |
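The cache-driven scheme selection above reduces to a lookup per access; the scheme names and the "stronger scheme for cached addresses" policy are assumptions invented for this sketch:

```python
# Hypothetical sketch: pick an ECC scheme per access based on whether the
# error cache already holds an entry for the target address. Addresses with
# known errors get the first (stronger) scheme; clean addresses get a
# lighter second scheme, trading parity overhead for throughput.
def choose_scheme(address: int, error_cache: set) -> str:
    return "strong-ecc" if address in error_cache else "light-ecc"

cache = {0x40, 0x80}                     # addresses with cached error entries
plan = {addr: choose_scheme(addr, cache) for addr in (0x40, 0x44, 0x80)}
```

Only the two cached addresses pay for the stronger code, which is the adaptive trade-off the abstract describes.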
20220050746 | PARTITIONED MEMORY HAVING ERROR DETECTION CAPABILITY - A memory component comprises a cyclic buffer partition portion and a snapshot partition portion. In response to receiving a signal that a trigger event has occurred, a processing device included in the memory component performs an error correction operation on a portion of data stored in the cyclic buffer partition portion, copies the data stored in the cyclic buffer partition portion to the snapshot partition portion in response to the error correction operation being successful, and sends the data stored in the cyclic buffer partition portion to a processing device operatively coupled to the memory component in response to the error correction operation not being successful. | 2022-02-17 |
20220050747 | USING OVER PROVISIONING SPACE FOR SELECTIVELY STORING BLOCK PARITY - Methods and apparatus for storing parity bits in an available over provisioning (OP) space to recover data lost from an entire memory block. For example, a data storage device may receive data from a host device, write the data to a block, and generate a corresponding block parity. The device may then determine a bit error rate (BER) of the block and an average programming duration for the data written to the block, calculate a probability of the block becoming defective based on the BER and the average programming duration, and compare the probability of the block to a set of probabilities respectively corresponding to a set of worst-performing blocks in a NVM. Thereafter, the device may write the block parity to an available over provisioning (OP) space in the NVM responsive to the probability of the block being greater than any probability in the set of probabilities. | 2022-02-17 |
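The decision step above can be sketched numerically. The weighting inside the probability estimate is entirely invented for illustration; the patent only says the probability is derived from the BER and the average programming duration:

```python
# Hypothetical sketch: estimate a block's defect probability from its bit
# error rate and average programming time, then keep its parity in
# over-provisioning space only if it looks worse than at least one of the
# currently tracked worst-performing blocks.
def defect_probability(ber: float, avg_program_ms: float) -> float:
    # Illustrative weighting: higher error rates and slower programming
    # both raise the risk; clamp to a valid probability.
    return min(1.0, ber * 1000 + avg_program_ms / 100)

def should_store_parity(block_prob: float, worst_block_probs) -> bool:
    return any(block_prob > p for p in worst_block_probs)

worst = [0.30, 0.42, 0.55]   # probabilities of the tracked worst blocks
prob = defect_probability(ber=2e-4, avg_program_ms=15)   # 0.2 + 0.15
store = should_store_parity(prob, worst)
```

Since 0.35 exceeds the weakest tracked block's 0.30, this block's parity would be written to OP space so its data remains recoverable if the whole block later fails.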
20220050748 | SEMICONDUCTOR MEMORY DEVICES AND METHODS OF OPERATING SEMICONDUCTOR MEMORY DEVICES - A semiconductor memory device includes a memory cell array, an error correction code (ECC) engine, a scrubbing control circuit and a control logic circuit. The memory cell array includes memory cell rows, each of which includes volatile memory cells. The scrubbing control circuit generates scrubbing addresses for performing a normal scrubbing operation on the memory cell rows with a first period based on refresh row addresses for refreshing the memory cell rows. The control logic circuit controls the ECC engine and the scrubbing control circuit to distribute a scrubbing operation on weak codewords dynamically within the refresh operation such that a dynamic allocated scrubbing (DAS) operation is performed with a second period smaller than the first period. An error bit is detected in each of the weak codewords during the normal scrubbing operation or a normal read operation on at least one of the memory cell rows. | 2022-02-17 |