30th week of 2022 patent application highlights part 43 |
Patent application number | Title | Published |
20220236955 | SYSTEM BEHAVIOR PROFILING-BASED DYNAMIC COMPETENCY ANALYSIS - In some examples, system behavior profiling-based dynamic competency analysis may include identifying a plurality of software generation entities that have contributed to a module of a system, and generating an index to associate each software generation entity of the plurality of software generation entities with the module. Execution links may be extracted from execution traces of the system, and an execution competency list may be generated. A dynamic competency score may be generated for each software generation entity for the system, and an overall dynamic competency score and a combined competency score may be determined. A software generation entity role may be obtained for a new application, and a software generation entity of the plurality of software generation entities may be identified to perform the software generation entity role. Development of the new application may be implemented using the identified software generation entity. | 2022-07-28 |
20220236956 | SOFTWARE CREATING DEVICE, SOFTWARE CREATING METHOD, AND PROGRAM - A software creating device and the like can save labor when creating software. A software creating device can create software for controlling equipment such as a certification photograph machine. The software creating device includes, for example: a storage part for storing a plurality of basic modules for executing each of a plurality of processes; and a software creating part for employing the basic modules to perform deep reinforcement learning to create, by a combination of the basic modules, software for consecutively performing the plurality of processes in equipment such as a certification photograph machine. | 2022-07-28 |
20220236957 | DYNAMIC APPLICATION BUILDER FOR MULTIDIMENSIONAL DATABASE ENVIRONMENTS - Systems and methods for generating custom applications for querying a multidimensional database of a target platform include, responsive to receiving a custom application request, an application definition is discovered based on data received from one or more sources. The application definition indicates target outputs of the custom application, influencers for each of the target outputs that correspond to members of one or more first dimensions of the multidimensional database, and granularity definitions relative to second dimensions of the multidimensional database for each influencer. Mutually exclusive groups each including two or more target outputs are generated by applying a weighting algorithm to the application definition, and resource-efficient machine written code is dynamically generated based on the groupings and the results of the weighting algorithm. The machine written code is compiled into an application package, which is then deployed to the target platform for execution on the multidimensional database. | 2022-07-28 |
20220236958 | SYSTEMS AND METHODS FOR CREATING SOFTWARE FROM LIBRARY AND CUSTOM COMPONENTS - Methods and systems are disclosed that automate and institutionalize many aspects of the process of creating software. Embodiments automate aspects of pricing, software creation, and delivery using a manufacturing-styled approach to development that reuses existing code and other existing software design features. | 2022-07-28 |
20220236959 | SYSTEMS AND METHODS FOR CREATING SOFTWARE FROM LIBRARY AND CUSTOM COMPONENTS - Methods and systems are disclosed that automate and institutionalize many aspects of the process of creating software. Embodiments automate aspects of pricing, software creation, and delivery using a manufacturing-styled approach to development that reuses existing code and other existing software design features. | 2022-07-28 |
20220236960 | SYSTEMS AND METHODS FOR CREATING SOFTWARE FROM LIBRARY AND CUSTOM COMPONENTS - Methods and systems are disclosed that automate and institutionalize many aspects of the process of creating software. Embodiments automate aspects of pricing, software creation, and delivery using a manufacturing-styled approach to development that reuses existing code and other existing software design features. | 2022-07-28 |
20220236961 | SYSTEMS AND METHODS FOR CREATING SOFTWARE FROM LIBRARY AND CUSTOM COMPONENTS - Methods and systems are disclosed that automate and institutionalize many aspects of the process of creating software. Embodiments automate aspects of pricing, software creation, and delivery using a manufacturing-styled approach to development that reuses existing code and other existing software design features. | 2022-07-28 |
20220236962 | SYSTEMS AND METHODS FOR CREATING SOFTWARE FROM LIBRARY AND CUSTOM COMPONENTS - Methods and systems are disclosed that automate and institutionalize many aspects of the process of creating software. Embodiments automate aspects of pricing, software creation, and delivery using a manufacturing-styled approach to development that reuses existing code and other existing software design features. | 2022-07-28 |
20220236963 | CONFIGURABLE MULTI-INPUT WEB FORMS FOR CODE EDITORS - Various techniques and systems are described herein for providing a multi-input form using a code editor. In various examples, an application programming interface (API) of the code editor used to generate software extensions may be identified. A library may be imported into the API. The library may be configured to implement WebView content within the code editor. One or more commands may be generated using at least one object defined in the library. The one or more commands may define a plurality of input fields to be displayed in the WebView content to a user invoking the API. In various examples, an invocation of the API may be received and the WebView content comprising the plurality of input fields may be displayed. | 2022-07-28 |
20220236964 | SEMANTIC CODE SEARCH BASED ON AUGMENTED PROGRAMMING LANGUAGE CORPUS - A method may include obtaining machine-readable source code. The method may include parsing the source code for one or more code descriptions and identifying a section of the source code corresponding to each of the code descriptions. The method may include determining a description-code pair including a first element representing the code description and a second element representing the section of the source code corresponding to the code description. The method may include generating an augmented programming language corpus based on the description-code pair, the one or more code descriptions, and the source code. The method may include receiving a natural language search query for source-code recommendations, identifying source code from the augmented programming language corpus responsive to the natural language search query, and responding to the natural language search query with the identified source code. | 2022-07-28 |
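The corpus-construction step in application 20220236964 — pairing each code description with its corresponding source section — can be sketched for Python docstrings using the standard `ast` module. All names here (`extract_pairs`, `search`, the naive keyword matcher standing in for the natural-language query) are illustrative assumptions, not the patent's actual method.

```python
import ast
import textwrap

def extract_pairs(source: str):
    """Pair each function docstring (the code description) with the
    source text of the function it documents."""
    tree = ast.parse(source)
    pairs = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc:
                # get_source_segment recovers the exact source section
                pairs.append((doc, ast.get_source_segment(source, node)))
    return pairs

def search(pairs, query: str):
    """Naive keyword overlap standing in for semantic query matching:
    return source sections whose description shares a word with the query."""
    words = set(query.lower().split())
    return [code for desc, code in pairs
            if words & set(desc.lower().split())]

corpus = textwrap.dedent('''
    def add(a, b):
        """Return the sum of two numbers."""
        return a + b
''')
pairs = extract_pairs(corpus)
```

A real system would embed the description-code pairs rather than match keywords, but the pairing structure is the same.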
20220236965 | AUTO-GENERATING INTERACTIVE WORKFLOW USER INTERFACES FOR SIMULATED SYSTEMS - Systems, computer program products, and computer-implemented methods for generating interactive graphical user interfaces, software-based workflows, and data integrations using catalogs of workflow applications and auto-generation of aspects of the workflows. A method of the disclosure may include accessing one or more data stores that store: information indicative of one or more data sources, information indicative of one or more data object types, information indicative of one or more applications, and information indicative of compatibilities between the one or more data object types and the one or more applications; receiving a first user input indicating an association between a first data source and a first data object type; and based on the compatibilities and the indicated association, automatically populating each of the one or more applications that is compatible with the first data object type with data from the first data source, wherein populating includes generating interactive graphical user interfaces. | 2022-07-28 |
20220236966 | COLLABORATIVE VISUAL PROGRAMMING ENVIRONMENT WITH CUMULATIVE LEARNING USING A DEEP FUSION REASONING ENGINE - In one embodiment, a device obtains data models and workflow logic for a visual programming environment. The device constructs, based on the data models and workflow logic for the visual programming environment, a metamodel that comprises a knowledge graph. The device makes, using the metamodel, an evaluation of an interaction between a user and the visual programming environment. The device provides, based on the evaluation, visualization data to a user interface of the visual programming environment. | 2022-07-28 |
20220236967 | VIRTUAL KEYBOARD FOR WRITING PROGRAMMING CODES IN ELECTRONIC DEVICE - Machine learning-based methods and systems for facilitating a virtual keyboard for software coding on mobile computing devices. The present disclosure describes a code analysis platform that facilitates coding by enabling the user to write program code on a mobile computing device via a virtual keyboard that replaces the existing default keyboard when the user is required to type a software program. | 2022-07-28 |
20220236968 | OPTIMIZED DATA RESOLUTION FOR WEB COMPONENTS - An abstract data graph may be constructed at a server. The abstract data graph may include nodes and links between nodes and may represent computer programming instructions for generating a graphical user interface at a client machine. At least some of the links may represent dependency relationships between portions of the graphical user interface. The abstract data graph may be resolved at the client machine to identify data items, which may be retrieved from the server and used to render the graphical user interface. | 2022-07-28 |
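The dependency-resolution step in application 20220236968 — resolving an abstract data graph at the client to decide which data items to fetch — amounts to a depth-first traversal that visits a node's dependencies before the node itself. This is an illustrative sketch under assumed node names; the patent does not specify the graph representation.

```python
def resolve(graph, node, resolved=None):
    """Depth-first resolution: each node's dependencies are appended to
    the order before the node itself, yielding the fetch order for the
    data items backing a graphical user interface."""
    if resolved is None:
        resolved = []
    for dep in graph.get(node, []):
        if dep not in resolved:
            resolve(graph, dep, resolved)
    if node not in resolved:
        resolved.append(node)
    return resolved

# A page depends on a table and a chart; both depend on one dataset,
# so the shared dataset is fetched exactly once, first.
ui_graph = {"page": ["table", "chart"], "table": ["dataset"], "chart": ["dataset"]}
order = resolve(ui_graph, "page")
```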
20220236969 | NON-TRANSITORY COMPUTER-READABLE MEDIUM AND CLASS GENERATION METHOD - This disclosure relates to a non-transitory computer-readable recording medium storing a class generation program that causes a computer to execute a process. The process includes a step S | 2022-07-28 |
20220236970 | PROGRAM, INFORMATION CONVERSION DEVICE, AND INFORMATION CONVERSION METHOD - A program causes a computer to serve as an information conversion device that is equipped with at least one of (A)-(E): (A) a replication necessity analysis processor specifying a location where an instruction referred to from phi functions present in one basic block is present and inserting an inter-register transfer instruction therein; (B) an intra-loop constant analysis processor specifying a closed path in which the references of the phi functions are circulated and inserting the inter-register transfer instruction therein; (C) an inter-instruction dependency analysis processor specifying a location where data dependency is present between instructions, which are reference destinations of the phi functions, and inserting the inter-register transfer instruction thereat; (D) an identical instruction reference analysis processor specifying, in a plurality of execution paths, a location where the phi functions referring to a result of the identical instruction before branching are present and inserting the inter-register transfer instruction therein; and (E) a spill-out effectiveness analysis processor storing a parameter value present in loop processing and targeted by the inter-register transfer instruction in a storage element other than a general-purpose register before start of the loop processing, loading the value after end of the loop processing, and deleting the inter-register transfer instruction. | 2022-07-28 |
20220236971 | ADAPTING EXISTING SOURCE CODE SNIPPETS TO NEW CONTEXTS - Implementations are described herein for adapting existing source code snippets to new contexts. In various implementations, a command may be detected to incorporate an existing source code snippet into destination source code. An embedding may be generated based on the existing source code snippet, e.g., by processing the existing source code snippet using an encoder. The destination source code may be processed to identify one or more decoder constraints. Subject to the one or more decoder constraints, the embedding may be processed using a decoder to generate a new version of the existing source code snippet that is adapted to the destination source code. | 2022-07-28 |
20220236972 | CONFLICT RESOLUTION FOR DEVICE-DRIVEN MANAGEMENT - Disclosed are various embodiments for resolving conflicts between workflows in a workflow processing system. A plurality of workflows stored in a workflow queue are evaluated to identify a common dependency of the plurality of workflows. Then, a version hierarchy is created for the common dependency of the plurality of workflows, the version hierarchy identifying multiple versions of the common dependency. In response to execution of a first one of the plurality of workflows stored in the workflow queue, the version hierarchy can be evaluated to identify the most recent version of the common dependency. Then, installation of the most recent version of the common dependency can be initiated. | 2022-07-28 |
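The version-hierarchy idea in application 20220236972 — several queued workflows sharing one dependency at different versions, with only the most recent installed — can be modeled in a few lines. The workflow-queue structure and field names below are assumptions for illustration.

```python
def latest_version(workflows, dependency):
    """Collect every requested version of the shared dependency and
    return the highest, comparing numerically component by component."""
    versions = [w["deps"][dependency] for w in workflows
                if dependency in w["deps"]]
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

queue = [
    {"name": "wf-a", "deps": {"agent": "1.9.0"}},
    {"name": "wf-b", "deps": {"agent": "1.10.2"}},
    {"name": "wf-c", "deps": {"other": "2.0.0"}},
]
pick = latest_version(queue, "agent")
```

The tuple comparison matters: a plain string comparison would wrongly rank "1.9.0" above "1.10.2".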
20220236973 | AUTOMATED SOFTWARE UPGRADE DOWNLOAD CONTROL BASED ON DEVICE ISSUE ANALYSIS - An apparatus comprises a processing device configured to detect that a given software upgrade is available for a given computing device, to identify other computing devices on which the given software upgrade has been installed that exhibit at least a threshold level of similarity to the given computing device, and to determine whether any issues were encountered on the other computing devices as a result of the given software upgrade. The processing device is also configured to generate a recommendation as to whether to initiate download of the given software upgrade on the given computing device based at least in part on whether any issues were encountered on the other computing devices as a result of the given software upgrade, and to initiate download of the given software upgrade on the given computing device based at least in part on the generated recommendation. | 2022-07-28 |
20220236974 | REVERTING MERGES ACROSS BRANCHES - A system for safely reverting merges across branches in version control systems where shared history cannot be rewritten is described. A computer-implemented method, comprising: identifying a first merge base at a trunk, the first merge base merging with a branch of the trunk; identifying, at the branch, a second merge base, subsequent to the first merge base, the second merge base merging with the trunk; forming a merge base patch branch from the branch at the second merge base, the merge base patch branch including a copy of the first merge base; merging the merge base patch branch with the trunk; and merging the merge base patch branch with the branch. | 2022-07-28 |
20220236975 | OPTIMIZED COMPILATION OF PIPELINES FOR CONTINUOUS DELIVERY OF SERVICES ON DATACENTERS CONFIGURED IN CLOUD PLATFORMS - Computing systems, for example, multi-tenant systems deploy software artifacts in data centers created in a cloud platform using a cloud platform infrastructure language that is cloud platform independent. The system receives pipeline templates including templating expressions that can be substituted with values for generating pipelines. A pipeline can be executed to perform a set of actions associated with continuous delivery of a software artifact. The system stores sets of partially hydrated pipeline templates. The partially hydrated pipeline templates can be compiled into executable pipelines associated with services configured on a datacenter of a cloud platform. The system stores different versions of pipeline templates as deployment packages. The system stores version pointers that identify specific deployment packages that are selected when a software release is deployed. The version pointers allow the deployment package to be updated in case of roll back or for deploying more recent changes. | 2022-07-28 |
20220236976 | VERSIONING OF PIPELINE TEMPLATES FOR CONTINUOUS DELIVERY OF SERVICES ON DATACENTERS CONFIGURED IN CLOUD PLATFORMS - Computing systems, for example, multi-tenant systems deploy software artifacts in data centers created in a cloud platform using a cloud platform infrastructure language that is cloud platform independent. The system receives pipeline templates including templating expressions that can be substituted with values for generating pipelines. A pipeline can be executed to perform a set of actions associated with continuous delivery of a software artifact. The system stores sets of partially hydrated pipeline templates. The partially hydrated pipeline templates can be compiled into executable pipelines associated with services configured on a datacenter of a cloud platform. The system stores different versions of pipeline templates as deployment packages. The system stores version pointers that identify specific deployment packages that are selected when a software release is deployed. The version pointers allow the deployment package to be updated in case of roll back or for deploying more recent changes. | 2022-07-28 |
20220236977 | SYSTEM AND METHOD FOR ANALYZING FIRST PARTY DATA FROM ONE OR MORE SOFTWARE TOOLS - Technologies for retrieving first party data from external sources and generating an entity execution score is provided. The disclosed techniques include joining a cloud-based software application, signing into the cloud-based software application, and requesting certain company related information. The disclosed techniques may further comprise the steps of selecting a company industry vertical, selecting a company funding stage, and providing relevant licensed software tools to be selected. The disclosed techniques may further comprise selecting at least one licensed software tool, integrating the at least one licensed software tool into the cloud-based software application, and initiating a return of data related to a key performance indicator using a company's own, or first party data from the at least one licensed software tool. An execution score is algorithmically computed based in part on at least one key performance indicator data. | 2022-07-28 |
20220236978 | MICRO-SERVICE MANAGEMENT SYSTEM AND DEPLOYMENT METHOD, AND RELATED DEVICE - Embodiments of the present disclosure may provide a microservice management system, device, and apparatus. The system may include a microservice deployment device, a plurality of computing resource pools, and a target service chain. The target service chain may include at least one target microservice entity device, which may be from at least one of the plurality of computing resource pools. The microservice deployment device may be configured to obtain service processing information of the target service chain, generate deployment update configuration information according to the service processing information of the target service chain, and adjust a deployment position of each of the at least one target microservice entity device on the target service chain according to the deployment update configuration information. | 2022-07-28 |
20220236979 | APPLICATION PATCHING USING VARIABLE-SIZED UNITS - A method, system and non-transitory computer readable instructions for application patching comprising concatenating compressed data, uncompressed data, or a mixture of compressed and uncompressed data into a continuous data set and dividing the continuous data set into variable sized data chunks; compressing each of the variable sized data chunks and dividing each of the variable sized data chunks into fixed size data blocks; and encrypting the fixed size data blocks to generate encrypted fixed size data blocks and storing the encrypted fixed size data blocks or sending the encrypted fixed size data blocks over a network. | 2022-07-28 |
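The pipeline in application 20220236979 — concatenate, cut into variable-sized chunks, compress each chunk, split into fixed-size blocks — can be sketched as below. The chunking policy, block size, and padding are arbitrary illustrative choices (the encryption stage is omitted); `zlib` stands in for whichever codec the patent contemplates.

```python
import zlib

BLOCK_SIZE = 16  # fixed block size; a real system would pick a cipher-friendly size

def build_blocks(pieces, chunk_size=32):
    data = b"".join(pieces)                      # continuous data set
    chunks = [data[i:i + chunk_size]             # variable-sized chunks
              for i in range(0, len(data), chunk_size)]
    blocks = []
    for chunk in chunks:
        comp = zlib.compress(chunk)              # compress each chunk
        for j in range(0, len(comp), BLOCK_SIZE):
            block = comp[j:j + BLOCK_SIZE]
            blocks.append(block.ljust(BLOCK_SIZE, b"\x00"))  # pad to fixed size
    return blocks

blocks = build_blocks([b"header", b"payload" * 10])
```

Each resulting block has a uniform size, which is what makes the subsequent per-block encryption and transfer step straightforward.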
20220236980 | APPLICATION FUNCTION CONSOLIDATION RECOMMENDATION - By analyzing execution of a set of transactions by an application, a set of actual code execution paths of the application are determined. From the set of actual code execution paths, a set of predicted execution paths of the application are predicted using an execution prediction model. The set of predicted execution paths includes the set of actual code execution paths. By determining that paths in the set of predicted execution paths have above a threshold similarity to each other, a cluster of predicted execution paths is identified. The cluster of predicted execution paths is recommended, using a recommendation model, for implementation as a single execution path in a revised version of the application. | 2022-07-28 |
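The clustering step in application 20220236980 — grouping predicted execution paths whose similarity exceeds a threshold — could look like the following. The Jaccard measure is an assumption; the abstract names only "above a threshold similarity", not a specific metric.

```python
def jaccard(a, b):
    """Similarity of two execution paths viewed as sets of visited steps."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster(paths, threshold=0.5):
    """Greedily place each path into the first cluster where it exceeds
    the similarity threshold against every existing member."""
    clusters = []
    for path in paths:
        for group in clusters:
            if all(jaccard(path, member) > threshold for member in group):
                group.append(path)
                break
        else:
            clusters.append([path])
    return clusters

paths = [["a", "b", "c"], ["a", "b", "c", "d"], ["x", "y"]]
groups = cluster(paths)
```

Each multi-member cluster is then a candidate for consolidation into a single execution path.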
20220236981 | Code Conflict Resolution System and Method, Apparatus, Device, and Medium - This application provides a code conflict resolution system, including a local apparatus, a service apparatus, and a remote apparatus. The local apparatus is configured to: perform resolution on a conflict field generated by code files of a plurality of versions, and send a conflict resolution result to the service apparatus. The conflict field includes at least one conflict block, and the conflict resolution result includes at least one of a resolution result of a local resolvable conflict block and an identifier of a local irresolvable conflict block. The remote apparatus is configured to: obtain the conflict resolution result from the service apparatus, generate a collaborative processing window based on the conflict resolution result, and receive a result of processing the conflict resolution result by a remote user based on the collaborative processing window, so as to improve conflict resolution quality and conflict resolution efficiency. | 2022-07-28 |
20220236982 | SOFTWARE DEVELOPMENT DEVICE AND SOFTWARE DEVELOPMENT PROGRAM - An environment includes various freely settable restrictions for a program executed by an edge device or the like. A software development device generates an object code from a source code and includes an evaluation module for extracting restrictions set in a source code and evaluating whether or not the source code conforms to the restrictions within an application range of the extracted restrictions. A generator module generates an object code so as to conform to the restrictions. | 2022-07-28 |
20220236983 | Computer Implementation Method for Software Architecture Analysis and Software Architecture Analysis Apparatus - An architecture analysis apparatus (106) receives an architecture model input by a user, and analyzes a plurality of modules included in the architecture model to obtain a call relationship between the modules. Then, the plurality of modules are allocated to different layers based on an architecture setting entered by the user, whether a call relationship between the plurality of modules complies with a call rule is detected, and a detection result is displayed to the user by using an interface. | 2022-07-28 |
20220236984 | METHOD FOR IDENTIFYING OPEN-SOURCE SOFTWARE COMPONENTS AT THE SOURCE-CODE LEVEL - According to some exemplary embodiments of the present disclosure, a method for identifying open source software (OSS) components using a processor of a computing device is disclosed. The method for identifying open source software (OSS) components may include: constructing a component database by performing redundancy elimination for each of a plurality of open source software; and identifying a component of target software by using the component database. | 2022-07-28 |
20220236985 | ARITHMETIC OPERATION DEVICE AND ARITHMETIC OPERATION METHOD - An arithmetic operation device causes a convolution arithmetic unit to perform a convolution arithmetic operation between a filter and target data corresponding to a size of the filter in each of a plurality of convolution layers constituting a neural network. The arithmetic operation device includes: a bit reduction unit that reduces a bit string corresponding to a first bit number from a least significant bit of the target data and reduces a bit string corresponding to a second bit number from a least significant bit of a weight that is an element of the filter for each convolution layer; and a bit addition unit that adds a bit string corresponding to a third bit number obtained by adding the first bit number and the second bit number to a least significant bit of a convolution arithmetic operation result output from the convolution arithmetic unit by inputting the target data and the weight after being reduced by the bit reduction unit to the convolution arithmetic unit. | 2022-07-28 |
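The bit-reduction arithmetic in application 20220236985 can be modeled numerically: drop the low bits of the data and of the weight before multiplying, then shift the accumulated product left by the sum of the two bit counts to restore scale. This is a toy scalar model, not the described hardware; parameter names are illustrative.

```python
def reduced_mac(data, weights, first_bits, second_bits):
    """Multiply-accumulate with reduced-precision operands, then restore
    scale by re-adding the dropped bit positions to the result."""
    acc = 0
    for x, w in zip(data, weights):
        acc += (x >> first_bits) * (w >> second_bits)   # bit reduction stage
    return acc << (first_bits + second_bits)            # bit addition stage

# For operands whose low bits are zero, the reduced result is exact.
exact = sum(x * w for x, w in zip([512, 1024], [256, 128]))
approx = reduced_mac([512, 1024], [256, 128], 2, 3)
```

In general the truncation introduces a bounded error; the example uses operands with zero low bits so the model can be checked exactly.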
20220236986 | Multiplier-Accumulator Processing Pipelines and Processing Component, and Methods of Operating Same - An integrated circuit including a plurality of processing components to process image data of a plurality of image frames, wherein each image frame includes a plurality of stages. Each processing component includes a plurality of execution pipelines, wherein each pipeline includes a plurality of multiplier-accumulator circuits configurable to perform multiply and accumulate operations using image data and filter weights, wherein: (i) a first processing component is configured to process all of the data associated with a first plurality of stages of each image frame, and (ii) a second processing component of the plurality of processing components is configured to process all of the data associated with a second plurality of stages of each image frame. The first and second processing component processes data associated with the first and second plurality of stages, respectively, of a first image frame concurrently. | 2022-07-28 |
20220236987 | METHOD FOR OPTIMIZING PERFORMANCE OF ALGORITHM USING PRECISION SCALING - This application relates to a method for optimizing algorithm performance using precision scaling, wherein the method according to an embodiment of present invention comprises obtaining a number of iterations of a unit operation according to precisions of the algorithm including the unit operation that is repeatedly performed, wherein the precisions include a first precision and a second precision, and the number of iterations include a first number of iterations corresponding to the first precision and a second number of iterations corresponding to the second precision; inspecting available precisions of a device on which the algorithm is to be executed, wherein the available precisions include a first available precision corresponding to the first precision and a second available precision corresponding to the second precision; determining an optimal precision by repeatedly performing the unit operation corresponding to an initial operation of the algorithm using the inspected available precision; and repeatedly performing the unit operation corresponding to a remaining operation of the algorithm with the optimal precision. | 2022-07-28 |
20220236988 | MASK OPERATION METHOD FOR EXPLICIT INDEPENDENT MASK REGISTER IN GPU - Provided is a mask operation method for an explicit independent mask register in a GPU. The method comprises: each GPU hardware thread being able to access respective eight 128-bit-wide independent mask registers, which are recorded as $m | 2022-07-28 |
20220236989 | SYSTEMS, METHODS, AND APPARATUS FOR MATRIX MOVE - Detailed herein are embodiment systems, processors, and methods for matrix move. For example, a processor comprising decode circuitry to decode an instruction having fields for an opcode, a source matrix operand identifier, and a destination matrix operand identifier; and execution circuitry to execute the decoded instruction to move each data element of the identified source matrix operand to corresponding data element position of the identified destination matrix operand is described. | 2022-07-28 |
20220236990 | AN APPARATUS AND METHOD FOR SPECULATIVELY VECTORISING PROGRAM CODE - An apparatus and method are provided for speculatively vectorising program code. The apparatus includes processing circuitry for executing program code, the program code including an identified code region comprising at least a plurality of speculative vector memory access instructions. Execution of each speculative vector memory access instruction is employed to perform speculative vectorisation of a series of scalar memory access operations using a plurality of lanes of processing. Tracking storage is used to maintain, for each speculative vector memory access instruction, tracking information providing an indication of a memory address being accessed within each lane. Checking circuitry then references the tracking information during execution of the identified code region by the processing circuitry, in order to detect any inter lane memory hazard resulting from the execution of the plurality of speculative vector memory access instructions. | 2022-07-28 |
20220236991 | APPARATUS AND METHOD FOR VECTOR HORIZONTAL ADD OF SIGNED/UNSIGNED WORDS AND DOUBLEWORDS - An apparatus and method for performing a packed horizontal addition of words and doublewords. One embodiment of a processor includes a decoder to decode a packed horizontal add instruction which includes an opcode and one or more operands used to identify a plurality of packed words; a source register to store a plurality of packed words; execution circuitry to execute the decoded instruction, and a destination register to store a final result as a packed result word in a designated data element position. The execution circuitry includes operand selection circuitry to identify first and second packed words from the source register in accordance with the operands and opcode; adder circuitry to add the two packed words to generate a temporary sum; a temporary storage of at least 17 bits to store the temporary sum; and saturation circuitry to saturate the temporary sum if necessary to generate the final result. | 2022-07-28 |
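The datapath in application 20220236991 — sum adjacent packed 16-bit words into a wider temporary, then saturate back to 16 bits — can be modeled in pure Python. This is a behavioral stand-in for the described circuitry, not its implementation.

```python
INT16_MIN, INT16_MAX = -0x8000, 0x7FFF

def saturate16(v):
    """Clamp a (conceptually 17-bit) temporary sum into signed 16 bits."""
    return max(INT16_MIN, min(INT16_MAX, v))

def phadd_words(words):
    """Horizontally add each adjacent pair of packed words with saturation."""
    return [saturate16(words[i] + words[i + 1])
            for i in range(0, len(words), 2)]

# 30000 + 10000 overflows int16 and saturates high; -20000 + -20000 saturates low.
result = phadd_words([30000, 10000, -20000, -20000, 5, 7])
```

The 17-bit temporary in the claim is exactly what is needed to hold the worst-case sum of two 16-bit values before saturation.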
20220236992 | RISC-V BRANCH PREDICTION METHOD, DEVICE, ELECTRONIC DEVICE AND STORAGE MEDIUM - A RISC-V branch prediction method and device, an electronic device and a computer readable storage medium are provided. In addition to the prior art, the remaining jump count of the jump instruction is acquired, and the single jump step length (which is not fixed at 1) is calculated from the difference in remaining jump counts between two consecutive jumps. Whether the target jump instruction has executed its last jump can then be judged from the single jump step length of a jump instruction combined with the real-time remaining jump count, so as to determine the number of jumps that need to be executed subsequently according to the judgment result. | 2022-07-28 |
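The step-length inference in application 20220236992 reduces to simple arithmetic: the step is the drop in the remaining-jump counter between two consecutive jumps, and dividing the current count by that step estimates how many jumps are left (zero meaning the last jump has executed). Function and field names below are assumptions for illustration.

```python
def jumps_left(prev_remaining, curr_remaining):
    """Infer the single jump step length from two consecutive readings
    of the remaining-jump counter, then estimate the jumps remaining."""
    step = prev_remaining - curr_remaining   # single jump step length, not fixed at 1
    return curr_remaining // step if step > 0 else curr_remaining

# Counter observed going 12 -> 9: step length 3, so 3 more jumps remain.
left = jumps_left(12, 9)
```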
20220236993 | FETCH STAGE HANDLING OF INDIRECT JUMPS IN A PROCESSOR PIPELINE - Systems and methods are disclosed for fetch stage handling of indirect jumps in a processor pipeline. For example, a method includes detecting a sequence of instructions fetched by a processor core, wherein the sequence of instructions includes a first instruction, with a result that depends on an immediate field of the first instruction and a program counter value, followed by a second instruction that is an indirect jump instruction; responsive to detection of the sequence of instructions, preventing an indirect jump target predictor circuit from generating a target address prediction for the second instruction; and, responsive to detection of the sequence of instructions, determining a target address for the second instruction before the first instruction is issued to an execution stage of a pipeline of the processor core. | 2022-07-28 |
20220236994 | Filtering Method and System of Parallel Computing Results - A method and system for filtering a parallel computing result. The method comprises: simultaneously generating an input value of a first valid position (FVP) of each fragment; simultaneously calculating the input value of the FVP of each fragment to obtain an output result corresponding to each FVP input value; sequentially selecting the output result of the second to S-th fragments according to the output result of the FVP of each fragment; and filtering the parallel computing result to finally obtain the correct parallel computing result. The parallel filtering approach changes the original serial filtering computation into parallel computation over S fragments. The computing time is only 1/S of the original time, which can meet the timing requirement of the parallel computing while improving computing efficiency. | 2022-07-28 |
20220236995 | APPARATUSES AND METHODS FOR ORDERING BITS IN A MEMORY DEVICE - Systems, apparatuses, and methods for organizing bits in a memory device are described. In a number of embodiments, an apparatus can include an array of memory cells, a data interface, a multiplexer coupled between the array of memory cells and the data interface, and a controller coupled to the array of memory cells, the controller configured to cause the apparatus to latch bits associated with a row of memory cells in the array in a number of sense amplifiers in a prefetch operation and send the bits from the sense amplifiers, through a multiplexer, to a data interface, which may include or be referred to as DQs. The bits may be sent to the DQs in a particular order that may correspond to a particular matrix configuration and may thus facilitate or reduce the complexity of arithmetic operations performed on the data. | 2022-07-28 |
20220236996 | DUAL-SYSTEM DEVICE AND METHOD FOR DISPLAYING APPLICATION THEREOF, AND STORAGE MEDIUM - A dual-system device and a method for displaying an application thereof, and a storage medium are provided. The dual-system device is configured with a first operating system and a second operating system, a first daemon is configured in the first operating system, and a second daemon is configured in the second operating system. The method includes: completing registration of a first application and generating a first registration list in the first daemon, the first application being an application in the first operating system; acquiring the first registration list of the first operating system from the first daemon by using the second daemon; and displaying the first application in the second operating system based on the first registration list. | 2022-07-28 |
20220236997 | INFORMATION PROCESSING DEVICE AND INDUSTRIAL ROBOT - An information processing device includes a first information processing part having a first calculation part and a second information processing part having a second calculation part, which communicate with each other. The first information processing part includes a first communication part and a first data storage part. The first calculation part is configured to execute a first communication device driver, a first periodic communication application, a first non-periodic communication application, and a first data processing application. The first data processing application integrates data read from a transmission periodic data list stored in the first data storage part with data read from a transmission non-periodic data list stored in the first data storage part to produce transmission integrated data, and the transmission integrated data are transmitted from the first communication part through execution of the first communication device driver. | 2022-07-28 |
20220236998 | SYSTEMS AND METHODS FOR BOOTSTRAP MANAGEMENT - The present disclosure is directed to techniques for bootstrap management. A method includes: upon an initial launch of an application on a client device, fetching, from a server and using a native component of the application, content for loading a web component of the application on the client device; determining whether a bootstrap management mode is enabled on the client device; and responsive to the bootstrap management mode being enabled and in response to the web component being launched: receiving, at the native component and from the web component, a manifest and a request for bootstrapping resources; caching, by the native component, the manifest from the web component; fetching, from the server and using the native component, the bootstrapping resources requested by the web component; caching, by the native component, the fetched bootstrapping resources in the memory; and providing, by the native component, the fetched bootstrapping resources to the web component. | 2022-07-28 |
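The caching flow in this abstract can be pictured with a toy native component — the class, method, and server interface below are invented for illustration: when the bootstrap management mode is on, the native component caches the web component's manifest and the fetched bootstrapping resources before handing them back.

```python
class NativeComponent:
    """Hypothetical native-side component of a hybrid app (names assumed)."""

    def __init__(self, server):
        self.server = server  # anything with a fetch(name) method
        self.cache = {}       # in-memory cache of manifest + resources

    def handle_bootstrap_request(self, manifest, resource_names):
        # Cache the manifest received from the web component.
        self.cache["manifest"] = manifest
        # Fetch the requested bootstrapping resources from the server.
        resources = {name: self.server.fetch(name) for name in resource_names}
        # Cache the fetched resources in memory for subsequent launches.
        self.cache.update(resources)
        # Provide the fetched resources back to the web component.
        return resources
```

On a later launch the web component could be served straight from `self.cache`, skipping the server round trip.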
20220236999 | UNIFIED WAY TO TRACK USER CONFIGURATION ON A LIVE SYSTEM - A method of remediating configurations of a plurality of system services running in each of a plurality of hosts, wherein each of the hosts is configured with a virtualization software for supporting execution of virtual machines therein, includes the steps of: retrieving actual configurations of the system services, wherein the actual configurations are stored in accordance with a configuration schema of the system services and include a user configuration, which is a configuration initiated by the user, and a system configuration, which is a configuration initiated by the host in response to the user configuration; retrieving desired configurations of the system services from a desired configuration file; comparing each of the actual configurations with a corresponding one of the desired configurations; and upon determining that at least one actual configuration, which is not a system configuration, is different from a corresponding one of the desired configurations, replacing the at least one actual configuration with the corresponding desired configuration. | 2022-07-28 |
20220237000 | MANAGING CONFIGURATIONS OF SYSTEM SERVICES RUNNING IN A CLUSTER OF HOSTS - A method of managing configurations of a plurality of system services, including a first system service and a second system service, in each of a plurality of hosts, wherein each of the hosts is configured with a virtualization software for supporting execution of virtual machines therein includes steps of: upon receiving an application programming interface (API) call to apply configurations of the system services defined in a desired configuration file to the system services, parsing the desired configuration file to identify a first configuration for the first system service and a second configuration for the second system service, and storing the first and second configurations in accordance with a configuration schema defined for the first and second system services, wherein the first system service executes with the stored first configuration applied thereto and the second system service executes with the stored second configuration applied thereto. | 2022-07-28 |
20220237001 | DISPLAY DEVICE AND DISPLAY METHOD THEREOF - A display device includes a first processor, a second processor, and a display module. The first processor is configured to: acquire version information of the second processor upon completion of startup of the second processor; determine version information of the corresponding first processor based on version information of the second processor; load a second configuration file corresponding to the version information of the first processor to output a second display screen associated with the second configuration file to the display module. The display module is configured to display the second display screen. | 2022-07-28 |
20220237002 | Scheduling of Application Preloading - A user device includes an output device and one or more processors. The one or more processors are configured to run an Operating System (OS), to query a component of the OS that possesses information indicative of a user application that the user is currently expected to access, and to preload the user application in a background mode that is unnoticeable on the output device. | 2022-07-28 |
20220237003 | ENTERPRISE PROCESS GRAPHS FOR REPRESENTING RPA DATA - Systems and methods for generating an enterprise process graph are provided. Sets of process data relating to an implementation of RPA (robotic process automation) acquired using a plurality of discovery techniques is received. An enterprise process graph representing the implementation of RPA is generated based on the received sets of process data. | 2022-07-28 |
20220237004 | TARGETING FUNCTIONALITY FOR INTEGRATING AN RPA BOT WITH AN APPLICATION - Disclosed herein are systems and methods for robotic-process-automation technology that trains botflows to successfully interact with application software, determine relevant sections of the application, and derive pertinent data from those sections. Such technology creates botflows that navigate various aspects of the application's environment to display or obtain additional data, as needed by the user. The botflows, after being trained to perform such actions, will effectively carry out such actions even after a minor change or update to the application. | 2022-07-28 |
20220237005 | Systems and Methods for Robotic Process Automation of Mobile Platforms - In some embodiments, a robotic process automation (RPA) design application provides a user-friendly graphical user interface that unifies the design of automation activities performed on desktop computers with the design of automation activities performed on mobile computing devices such as smartphones and wearable computers. Some embodiments connect to a model device acting as a substitute for an actual automation target device (e.g. smartphone of specific make and model) and display a model GUI mirroring the output of the respective model device. Some embodiments further enable the user to design an automation workflow by directly interacting with the model GUI. | 2022-07-28 |
20220237006 | SIMULATION FOR ALTERNATIVE COMMUNICATION - A host computing system includes an applications layer containing one or more user applications that perform I/O operations, an access methods layer that communicates with the applications layer, an I/O drivers layer that communicates with the access methods layer, and an SSCH simulation layer that communicates with the I/O drivers layer and that simulates a Fibre Channel connection that is accessed by applications in the applications layer. The host computing system may also include a TCP/IP stack layer that communicates with the SSCH simulation layer to provide TCP/IP communication for the host computing system. TCP/IP communication provided by the TCP/IP stack layer may be separate from any dedicated TCP/IP communication provided by the host. The host computing system may be coupled to a TCP/IP network. A cloud storage may be coupled to the network to communicate with the host computing system. | 2022-07-28 |
20220237007 | SUPERVISORY DEVICE WITH DEPLOYED INDEPENDENT APPLICATION CONTAINERS FOR AUTOMATION CONTROL PROGRAMS - A system and method for supervisory and control support in an industrial automation system, including a supervisory device with a software stack having a host operating system and a plurality of independent application containers. Each container includes a modular application platform associated with a base functionality for the supervisory device and a guest operating system layer integrated with the modular application platform according to a system integration. A one-time integration of system dependencies is executed during development of the container. The independent application containers are portable for direct deployment in an operating system of a type different than that of the host operating system and can run unchanged without requiring any change to component artifacts. | 2022-07-28 |
20220237008 | EMBEDDED COMPUTATION INSTRUCTION SET OPTIMIZATION - The technology disclosed herein pertains to a system and method for providing optimization of embedded computation instruction set (CIS), the method including downloading the CIS to a computational storage device (CSD), committing the CIS to a program slot in a computational storage processor of the CSD, simulating execution of the CIS at the committed slot to generate static analysis of one or more registers of the CIS to determine ranges of values that the one or more registers can take through a lifecycle of the CIS, demoting one or more of the registers to lower size registers, and generating a native instruction set from the CIS based on the register demotions. | 2022-07-28 |
20220237009 | COMMUNICATION APPARATUS, COMMUNICATION SYSTEM, NOTIFICATION METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a communication apparatus includes a task and a notification unit. The task stores, in a storage unit, notification information to be notified to a virtual machine as a notification destination via a virtual machine monitor after execution of predetermined processing. The notification unit collectively notifies the virtual machine monitor of a plurality of pieces of notification information stored in the storage unit. | 2022-07-28 |
20220237010 | EXECUTING CONTAINERIZED APPLICATIONS USING PARTIALLY DOWNLOADED CONTAINER IMAGE FILES - Executing containerized applications using partially downloaded container image files is disclosed herein. In some examples, a client computing device transmits a request for a container image file for a containerized application to a repository computing device. The container image file includes a plurality of essential files that are required to begin execution of the containerized application, as well as a plurality of non-essential files. The plurality of essential files and the plurality of non-essential files are received by the client computing device from the repository computing device. Subsequent to the client computing device receiving the plurality of essential files, and concurrently with the client computing device receiving the plurality of non-essential files, the client computing device begins execution of the containerized application. In this manner, execution of the containerized application can begin as soon as the files essential for execution have been received by the client computing device. | 2022-07-28 |
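A toy model of the partial-download idea above — the function signature, the blocking `fetch` callable, and the `start_app` callback are all assumptions: execution starts as soon as the essential files have arrived, while the non-essential files keep downloading on a background thread.

```python
import threading

def run_with_partial_image(essential, non_essential, fetch, start_app):
    """Hypothetical sketch: begin executing a containerized app once the
    essential files are present, concurrently with the remaining download."""
    files = {}
    # Block only on the files required to begin execution.
    for name in essential:
        files[name] = fetch(name)

    done = threading.Event()

    def fetch_rest():
        # Non-essential files arrive concurrently with app execution.
        for name in non_essential:
            files[name] = fetch(name)
        done.set()

    threading.Thread(target=fetch_rest, daemon=True).start()
    start_app(files)  # execution begins before the full image is present
    done.wait()       # here we just wait so the test can see the full image
    return files
```

A real runtime would fault in missing layers on access rather than wait for `done`; the sketch only shows the overlap of execution and download.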
20220237011 | OPTIMIZED DATA RESOLUTION FOR WEB COMPONENTS - An abstract data graph may be constructed at a server. The abstract data graph may include nodes and links between nodes and may represent computer programming instructions for generating a graphical user interface at a client machine. At least some of the links may represent dependency relationships between portions of the graphical user interface. The abstract data graph may be resolved at the client machine to identify data items, which may be retrieved from the server and used to render the graphical user interface. | 2022-07-28 |
20220237012 | OPTIMIZED DATA RESOLUTION FOR WEB COMPONENTS - An abstract data graph may be constructed at a server. The abstract data graph may include nodes and links between nodes and may represent computer programming instructions for generating a graphical user interface at a client machine. At least some of the links may represent dependency relationships between portions of the graphical user interface. The abstract data graph may be resolved at the client machine to identify data items, which may be retrieved from the server and used to render the graphical user interface. | 2022-07-28 |
20220237013 | MANAGING DOWNTIME TO NETWORKING MANAGERS DURING CONFIGURATION UPDATES IN CLOUD COMPUTING ENVIRONMENTS - Described herein are systems and methods that manage configuration updates for networking manager virtual machines. In one example, a method includes identifying an update for at least one networking manager virtual machine. In response to identifying the update, the method notifies a daemon on the host with the networking manager virtual machine to establish a channel with a control plane agent to receive communications in place of the networking manager virtual machine. The method further identifies when the configuration modification is complete for the networking manager virtual machine and notifies the daemon on the host to break the channel with the control plane agent. | 2022-07-28 |
20220237014 | NETWORK FUNCTION PLACEMENT IN VGPU-ENABLED ENVIRONMENTS - Disclosed are aspects of network function placement in virtual graphics processing unit (vGPU)-enabled environments. In one example a network function request is associated with a network function. A scheduler selects a vGPU-enabled GPU to handle the network function request. The vGPU-enabled GPU is selected in consideration of a network function memory requirement or a network function IO requirement. The network function request is processed using an instance of the network function within a virtual machine that is executed using the selected vGPU-enabled GPU. | 2022-07-28 |
20220237015 | METHOD AND SYSTEM FOR COLLECTING USER INFORMATION ACCORDING TO PROVIDING VIRTUAL DESKTOP INFRASTRUCTURE SERVICE - Collecting user information according to providing a virtual desktop infrastructure (VDI) service is disclosed. A user information collection system includes a service provisioning manager configured to manage provisioning of a VDI service provided from a VDI service provider, a charging manager configured to manage charging information according to a use of the VDI service, a policy manager configured to manage a policy for the VDI service, a user manager configured to manage information of the user, a VDI service lifecycle manager configured to manage a lifecycle of the VDI service, and a multi-tenant connection manager configured to manage connection infrastructure information between the VDI service provider and a cloud environment (or external software). | 2022-07-28 |
20220237016 | APPARATUS FOR DETERMINING RESOURCE MIGRATION SCHEDULE - One or more storage devices store resource migration schedule information including a plurality of records. Each of the plurality of records indicating a migration source node and a migration destination node of each of one or more resources. One or more processors are configured to determine a priority of each of the plurality of records such that a record having locality after migration has a higher priority than a priority of a record without locality after migration. The locality is determined based on whether a virtual machine and a volume associated with each other in advance exist in the same node. The one or more processors are configured to determine a migration schedule of each of the plurality of records based on the priority of each of the plurality of records. | 2022-07-28 |
20220237017 | DISTRIBUTED AND ASSOCIATIVE CONTAINER PLATFORM SYSTEM - Provided is a distributed and associative container platform system which offers flexible movement of services and infinite extension of computing resources by interconnecting multiple regionally distributed container platforms and enhancing security. | 2022-07-28 |
20220237018 | ISOLATED PHYSICAL NETWORKS FOR NETWORK FUNCTION VIRTUALIZATION - A method includes, with a Virtual Network Function (VNF) component associated with a VNF, communicating with an access network over a first physical network connected to a first physical network interface of a physical machine associated with the VNF component. The method further includes, with the VNF component, communicating with a core network over a second physical network connected to a second physical network interface of the physical machine, the second network being isolated from the first network. | 2022-07-28 |
20220237019 | MANAGING INTERACTIONS BETWEEN STATEFUL PROCESSES - Systems, methods and computer program products for managing interactions between stateful processes are disclosed. A plurality of stateful processes are assembled in a cluster. Each of the stateful processes includes at least one computing object that is configured to interface with other stateful processes in the cluster. The cluster of stateful processes is used to perform a computing operation. | 2022-07-28 |
20220237020 | SELF-SCHEDULING THREADS IN A PROGRAMMABLE ATOMIC UNIT - Devices and techniques for self-scheduling threads in a programmable atomic unit are described herein. When it is determined that an instruction will not complete within a threshold prior to insertion into a pipeline of the processor, a thread identifier (ID) can be passed with the instruction. Here, the thread ID corresponds to a thread of the instruction. When a response to completion of the instruction is received that includes the thread ID, the thread is rescheduled using the thread ID in the response. | 2022-07-28 |
20220237021 | SYSTEMS AND METHODS OF TELEMETRY DIAGNOSTICS - Systems and methods are provided for executing a workflow based on a received alert notification, wherein the workflow includes one or more tasks to be executed by a workflow processor. The workflow is validated when it is determined that each task of the workflow is executable without failure. A job may be generated based on the validated workflow, and a state object in a state engine may be generated to be used by the job for processing by the workflow processor. Each task of the state object may be iterated to complete the workflow, and data may be transmitted in response to the alert notification based on the completed workflow. | 2022-07-28 |
20220237022 | SYSTEM AND METHOD FOR CONTROLLED SHARING OF CONSUMABLE RESOURCES IN A COMPUTER CLUSTER - In one embodiment, a method includes empirically analyzing, by a computer cluster comprising a plurality of computers, a set of active reservations and a current set of consumable resources belonging to a class of consumable resources. Each active reservation is of a managed task type and comprises a group of one or more tasks requiring access to a consumable resource of the class. The method further includes, based on the empirical analysis, clocking the set of active reservations each clocking cycle. The method also includes, responsive to the clocking, sorting, by the computer cluster, a priority queue of the set of active reservations. | 2022-07-28 |
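The clock-then-sort loop in this abstract might look like the following sketch — the `wait`/`units` fields and the priority rule are assumptions, since the abstract does not specify how the queue is ordered:

```python
def clock_reservations(reservations):
    """Hypothetical sketch: one clocking cycle over active reservations,
    followed by re-sorting the priority queue."""
    # Clocking: every active reservation accrues one cycle of wait time.
    for r in reservations:
        r["wait"] += 1
    # Sort the priority queue: highest accumulated wait per requested
    # resource unit first, with id as a deterministic tie-breaker.
    return sorted(reservations,
                  key=lambda r: (-r["wait"] / r["units"], r["id"]))
```

Each cycle a dispatcher would grant resources from the front of the returned queue, so long-starved small reservations eventually overtake large recent ones.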
20220237023 | SYSTEM AND METHOD OF UTILIZING PLATFORM APPLICATIONS WITH INFORMATION HANDLING SYSTEMS - In one or more embodiments, one or more systems and/or one or more methods may: register a subroutine configured to store multiple network resource addresses via a volatile memory medium; for each information handling system (IHS) initialization executable of multiple IHS initialization executables: retrieve, from a non-volatile memory medium, the IHS initialization executable; execute the IHS initialization executable via an environment associated with IHS firmware; call, by the IHS initialization executable, the subroutine; and store, by the subroutine, a network resource address associated with an operating system (OS) executable via command line arguments, where the command line arguments are stored via a data structure in the volatile memory medium; and for each network resource address of the command line arguments: retrieve, based at least on the network resource address, an OS executable associated with the network resource address from another IHS via a network. | 2022-07-28 |
20220237024 | DIAGONAL AUTOSCALING OF SERVERLESS COMPUTING PROCESSES FOR REDUCED DOWNTIME - Methods and systems for scaling computing processes within a serverless computing environment are provided. In one embodiment, a method is provided that includes receiving a request to execute a computing process in the serverless computing environment. A first node may be created within the serverless computing environment to execute the computing process. A first amount of computing resources may be assigned to the first node. It may be determined later that the first amount of computing resources are not sufficient to implement the first node. A second amount of computing resources may be determined with a vertical autoscaling process and a second node may be created within the serverless computing environment using a horizontal autoscaling process. The second node may be assigned the second amount of computing resources. The computing process may then be executed using both the first and second nodes within the serverless computing environment. | 2022-07-28 |
20220237025 | ACTIVE BUILD MIGRATION IN CONTINUOUS INTEGRATION ENVIRONMENTS - The technology disclosed herein enables migrating a software build job from a first computing node to a second computing node. An example method may comprise detecting, by a processor, a first software build job executing on a first computing node; detecting a second software build job in a waiting state; determining that the first computing node is capable of executing the second software build job; responsive to determining that a second computing node is capable of executing the first software build job, migrating the first software build job to the second node; and executing the second software build job on the first node. | 2022-07-28 |
20220237026 | VOLATILE MEMORY ACQUISITION - Aspects of the present disclosure relate to volatile memory acquisition using live migration of an execution environment. In examples, a virtualization manager controls execution of an execution environment at a virtualization host. The virtualization manager may enable live migration of the execution environment, such that the execution environment may be migrated to another virtualization host (or “migration target”) for continued execution. Accordingly, such functionality may be used to capture a memory image at a migration target, after which the execution environment continues executing at the original virtualization host. The memory image may be analyzed to identify the presence of malware and/or to generate a list of processes that were executing at the time of the capture. Such aspects may enable capturing a substantially accurate and consistent memory image of the volatile memory of the execution environment without indicating, inadvertently or otherwise, that a capture is occurring to processes executing therein. | 2022-07-28 |
20220237027 | TASK DELEGATION AND COOPERATION FOR AUTOMATED ASSISTANTS - Task delegation and cooperation for automated assistants is presented. A method comprises receiving, at a centralized support center that is in contact with a plurality of automated assistants including a first automated assistant and a second automated assistant, a request to perform a task on behalf of an individual, formulating, at the centralized support center, the task as a plurality of sub-tasks including a first sub-task and a second sub-task, delegating, at the centralized support center, the first sub-task to the first automated assistant, based on a determination at the centralized support center that the first automated assistant is capable of performing the first sub-task, and delegating, at the centralized support center, the second sub-task to the second automated assistant, based on a determination at the centralized support center that the second automated assistant is capable of performing the second sub-task. | 2022-07-28 |
20220237028 | Shared Control Bus for Graphics Processors - Techniques are disclosed relating to a shared control bus for communicating between primary control circuitry and multiple distributed graphics processor units. In some embodiments, a set of multiple processor units includes first and second graphics processors, where the first and second graphics processors are coupled to access graphics data via respective memory interfaces. A shared workload distribution bus is used to transmit control data that specifies graphics work distribution to the multiple graphics processing units. The shared workload distribution bus may be arranged in a chain topology, e.g., to connect the workload distribution circuitry to the first graphics processor and connect the first graphics processor to the second graphics processor such that the workload distribution circuitry communicates with the second graphics processor via the shared workload distribution bus connection to the first graphics processor. Disclosed techniques may facilitate graphics work distribution for a scalable number of processors. | 2022-07-28 |
20220237029 | BATTERY MANAGEMENT SYSTEM AND CONTROLLING METHOD THEREOF - A battery management system in which each of a plurality of battery management systems performs an individual task and transmits results of the tasks to a master battery management system wirelessly, the battery management system including: a task information storage unit including a list of tasks performed by each of the plurality of battery management systems, the performance time, performance cycle, and work priority of each task included in the list of tasks, and the communication priority among the plurality of battery management systems, a schedule determination unit configured to determine a work schedule on the basis of data stored in the task information storage unit, and a priority changing unit configured to adjust the work priority of a task based on the work schedule determined by the schedule determination unit, wherein the schedule determination unit is further configured to adjust the work schedule according to the adjusted work priority. | 2022-07-28 |
20220237030 | BROWSER-BASED PROCESSING OF DATA - In some implementations, a user interface for an application is displayed using a web browser instance on a client device. An input is received to present data on the user interface in a particular view. In response to the input, a first web worker thread corresponding to the web browser instance obtains data from a server, and executes first library routines to store the data in local storage at the client device. A second web worker thread, which corresponds to the web browser instance and the user interface, accesses the data from the local storage by using one or more second library routines, and processes the data to convert to a presentation format corresponding to the particular view. The second web worker thread stores the processed data in the local storage by using one or more third library routines, and provides the processed data for display on the user interface. | 2022-07-28 |
20220237031 | CAPACITY MIDDLEWARE SYSTEM TO MAKE CAPACITY FLUID AMONG KUBERNETES CLUSTERS TO INCREASE RESOURCE UTILIZATION - This invention makes capacity fluid among multiple Kubernetes clusters maintained by an organization by introducing a system and method, named capacity middleware, that shrinks and grows clusters based on their resource requirements. The capacity middleware runs on the Management Cluster alongside an API controlling the clusters. It assigns priority-related annotations to objects of the Cluster resource, a no-preemption quota annotation to objects of MachineDeployment specifying the number of resources for each cluster, and a valid-capacity annotation (capacityValidated), set to false by default, on objects of the Machine resource, which the capacity middleware uses as a signal to respond to these objects. The capacity middleware iteratively checks and frees or assigns resources according to the difference between the required capacity and the available capacity of each cluster: a positive difference indicates additionally required resources, whereas a negative difference suggests that resources can be preempted. | 2022-07-28 |
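The difference-based decision at the end of this abstract can be sketched directly — the field names and the returned action labels are assumptions: required minus available capacity per cluster, with a positive difference meaning the cluster must grow and a negative one meaning its resources are candidates for preemption.

```python
def plan_capacity(clusters):
    """Hypothetical sketch of the capacity middleware's per-cluster decision.

    clusters maps cluster name -> {"required": int, "available": int}.
    """
    actions = {}
    for name, c in clusters.items():
        diff = c["required"] - c["available"]
        if diff > 0:
            actions[name] = ("grow", diff)       # additionally required resources
        elif diff < 0:
            actions[name] = ("preempt", -diff)   # surplus that can be freed
        else:
            actions[name] = ("hold", 0)
    return actions
```

A controller loop would then move the "preempt" surplus of one cluster to satisfy another cluster's "grow" demand, making capacity fluid across the fleet.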
20220237032 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR DEPLOYING VISUAL RESOURCE - The present disclosure relates to a method, a device, and a program product for deploying a visual resource. In one method, a resource requirement of a vision application for the visual resource in a network system is acquired. Based on the resource requirement, the visual resource which will be called by the vision application is predicted. Based on processing capabilities of various edge devices and the visual resource in the network system, an edge device located near a terminal device in the network system is identified, wherein the terminal device is configured to run the vision application. Based on a time requirement in the resource requirement, the visual resource is deployed to the edge device. Further, a corresponding device and a corresponding program product are provided. | 2022-07-28 |
20220237033 | TECHNOLOGIES FOR DATA MIGRATION BETWEEN EDGE ACCELERATORS HOSTED ON DIFFERENT EDGE LOCATIONS - Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices. | 2022-07-28 |
20220237034 | AUTO-SCALING FOR ALLOCATION OF CLOUD SERVICE RESOURCES IN APPLICATION DEPLOYMENTS - Described embodiments provide systems and methods of allocating cloud resources for application deployments. A resource allocator may identify a first metric indicating usage of cloud resources by clients in a first release environment for an application update. The resource allocator may generate, using the first metric, a resource capacity model for predicting usage of the cloud resources by clients in a second release environment for the application update. The resource allocator may determine, using the resource capacity model, a second metric predicting the usage of the cloud resources by the clients in the second release environment. The resource allocator may generate instructions to set an allocation of the cloud resources for performing deployment of the application update to the second release environment based on the second metric. | 2022-07-28 |
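One simple form such a resource capacity model could take is linear scaling of per-client usage observed in the first environment to the expected client count of the second, plus headroom. This is a minimal sketch under that assumption; the abstract does not specify the model, and the names and headroom factor are hypothetical:

```python
def predict_allocation(first_env_usage: float, first_env_clients: int,
                       second_env_clients: int, headroom: float = 1.2) -> float:
    """Predict the second-environment allocation (the 'second metric') by
    scaling observed per-client usage to the new client count with headroom."""
    per_client = first_env_usage / first_env_clients
    return per_client * second_env_clients * headroom
```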
20220237035 | SYSTEM FOR ELECTRONIC IDENTIFICATION OF ATTRIBUTES FOR PERFORMING MAINTENANCE, MONITORING, AND DISTRIBUTION OF DESIGNATED RESOURCE ASSETS - Embodiments of the present invention provide a system for electronic identification of attributes for performing maintenance, monitoring, and distribution of designated resource assets. In particular, the system may be configured to extract one or more legacy resources from a data repository of an entity system associated with an entity, wherein the legacy resources are in a first format, convert the one or more legacy resources from the first format to a second format, process the one or more legacy resources via one or more machine learning models, identify one or more attributes based on processing the one or more legacy resources via the one or more machine learning models, and implement one or more actions based on the one or more attributes. | 2022-07-28 |
20220237036 | SYSTEM AND METHOD FOR OPERATION ANALYSIS - An information handling system for obtaining composed information handling systems includes resource set components and a system control processor. The system control processor makes an identification, based on monitoring of a resource set component of the resource set components, of an operation event; in response to the identification: makes a determination that the operation event is not immediately remediable based on the monitoring of the resource set component; in response to the determination: modifies the monitoring of the resource set component to obtain refined operation data for the resource set component; and performs an action set, based on the refined operation data, to modify operation of the resource set component. | 2022-07-28 |
20220237037 | Executing A Big Data Analytics Pipeline Using Shared Storage Resources - Executing a big data analytics pipeline in a storage system that includes compute resources and shared storage resources, including: receiving, from a data producer, a dataset; storing, within the storage system, the dataset; allocating processing resources to an analytics application; and executing the analytics application on the processing resources, including ingesting the dataset from the storage system. | 2022-07-28 |
20220237038 | RESOURCE ALLOCATION CONTROL DEVICE, COMPUTER SYSTEM, AND RESOURCE ALLOCATION CONTROL METHOD - A management node controls the amount of hardware resources of storage nodes allocated to the software of distributed data stores executed by the storage nodes. The management node includes a disk device that stores a performance model indicating the correspondence between an amount of hardware resources and the performance achievable by hardware of that resource amount, and a central processing unit (CPU) connected to the disk device. The CPU receives a target performance for the distributed data stores, determines the hardware resource amount required to achieve the target performance based on the performance model, and allocates hardware of the determined resource amount to the programs of the distributed data stores. | 2022-07-28 |
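The lookup the abstract describes (performance model in, target performance and required resource amount out) could be as simple as scanning a table of modeled (resource amount, achievable performance) pairs for the smallest amount that meets the target. A minimal sketch under that assumption, with hypothetical names and table shape:

```python
def required_resources(performance_model: list, target_perf: float):
    """performance_model: (resource_amount, achievable_performance) pairs,
    sorted by resource amount. Return the smallest amount whose modeled
    performance meets the target, or None if the target is unachievable."""
    for amount, perf in performance_model:
        if perf >= target_perf:
            return amount
    return None
```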
20220237039 | Throttle Memory as a Service based on Connectivity Bandwidth - Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory of the lender device over a communication connection between the lender device and the computing device. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device to the computing device. The computing device can throttle data communications used by memory regions in accessing the amount of memory over the communication connection according to the criticality levels of the contents stored in the memory regions. | 2022-07-28 |
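One way to realize criticality-based throttling of borrowed-memory traffic is to apportion the connection's bandwidth among memory regions in proportion to each region's criticality level. The proportional-share rule, names, and units below are illustrative assumptions, not details from the application:

```python
def throttle_shares(regions: dict, total_bw: float) -> dict:
    """Split link bandwidth (e.g. MB/s) among memory regions in proportion
    to the criticality level of the contents stored in each region."""
    total_criticality = sum(regions.values())
    return {name: total_bw * level / total_criticality
            for name, level in regions.items()}
```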
20220237040 | ACCELERATOR RESOURCE MANAGEMENT METHOD AND APPARATUS - An accelerator resource management method and apparatus are disclosed. The accelerator resource management method includes receiving a task request for a neural network-related task and a resource scheduling policy for the neural network-related task, obtaining information on a current resource utilization status of an accelerator cluster comprising a plurality of accelerators, in response to the task request, and allocating an accelerator resource for performing the task based on a utility of a resource allocation that is based on the resource scheduling policy and the information. | 2022-07-28 |
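The abstract's allocation step, an accelerator chosen from current cluster utilization under a scheduling policy, might look like the sketch below. The two policy names ("pack" and "spread") and the utilization representation are invented for illustration; the application does not name its policies:

```python
def allocate_accelerator(cluster_util: dict, policy: str):
    """cluster_util maps accelerator id -> current utilization in [0, 1].
    'pack' prefers the busiest accelerator that still has spare capacity;
    'spread' prefers the least-utilized one. Returns None if all are full."""
    free = {a: u for a, u in cluster_util.items() if u < 1.0}
    if not free:
        return None
    choose = max if policy == "pack" else min
    return choose(free, key=free.get)
```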
20220237041 | PARALLEL PROCESSING SYSTEM PERFORMING IN-MEMORY PROCESSING - A parallel processing system includes a host and a memory device. The host includes a central processing unit configured to process processing-in-memory (PIM) requests generated in a plurality of threads for in-memory processing, and a memory controller configured to generate a PIM command corresponding to the PIM request. The memory device includes a plurality of computing cores, each including a bank and a computing circuit. The memory device is configured to perform in-memory processing in one of the plurality of computing cores according to the PIM command. The host allocates the plurality of computing cores to the plurality of threads, and PIM commands of each thread are processed using the computing core allocated to that thread. | 2022-07-28 |
20220237042 | RESOURCE PRE-FETCH USING AGE THRESHOLD - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for identifying a set of resources in response to crawling multiple webpages that use at least one resource in the set. For each resource in the set, a system determines an age of the resource using a timestamp for the resource. The system determines a pre-fetch measure of the resource based on the age of the resource and usage information that describes use of the resource at a webpage. The system selects a first resource from the set based on the pre-fetch measure and determines whether a respective age of the selected first resource exceeds a threshold age. The system generates an index entry for a pre-fetch index. The index entry includes a command to pre-fetch the first resource based on a determination that the respective age of the first resource exceeds the threshold age. | 2022-07-28 |
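The selection logic in the abstract (score each resource by a pre-fetch measure combining age and usage, pick the top-scoring one, and index it only if its age exceeds the threshold) can be sketched as follows; the specific measure (age times usage count) and the entry format are assumptions for illustration:

```python
def prefetch_entry(resources: list, threshold_age: float, now: float):
    """resources: (url, timestamp, usage_count) tuples gathered while crawling.
    Select the resource with the highest pre-fetch measure; emit an index
    entry only if its age exceeds the threshold age."""
    best = max(resources, key=lambda r: (now - r[1]) * r[2])
    url, timestamp, _usage = best
    if now - timestamp > threshold_age:
        return {"command": "prefetch", "resource": url}
    return None
```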
20220237043 | METHOD AND SYSTEM FOR LEARNING TO TEMPORAL ALIGN SIGNALS WITH INTERSPERSED OUTLIERS - A method is provided. The method includes receiving a first signal and a second signal, generating a first sequence based on the first signal by embedding at least one feature in the first signal, generating a second sequence based on the second signal by embedding at least one feature in the second signal, determining a minimum cost path of aligning the first sequence with the second sequence based on the embedded at least one feature in the first sequence and the embedded at least one feature in the second sequence, and aligning the first sequence with the second sequence based on the determined minimum cost path. | 2022-07-28 |
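Minimum-cost temporal alignment of two feature sequences is classically computed with dynamic programming in the style of dynamic time warping; the sketch below shows that standard recurrence over scalar features as a stand-in for the embedded features in the abstract (the abstract's actual cost function and outlier handling are not specified here):

```python
def min_alignment_cost(seq_a: list, seq_b: list) -> float:
    """DTW-style dynamic program: cumulative cost of the cheapest monotonic
    alignment path between two sequences, using |a - b| as local distance."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])   # local feature distance
            cost[i][j] = d + min(cost[i - 1][j],     # advance first sequence
                                 cost[i][j - 1],     # advance second sequence
                                 cost[i - 1][j - 1]) # advance both (match)
    return cost[n][m]
```

Backtracking through the same table recovers the minimum-cost path itself, which is what the final alignment step would use.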
20220237044 | DYNAMIC CLIENT/SERVER SELECTION FOR MACHINE LEARNING EXECUTION - Apparatuses, methods, systems, and program products are disclosed for dynamic client/server selection for machine learning execution. An apparatus includes a processor and a memory that stores code executable by the processor. The code is executable by the processor to receive a request at a first device to execute a machine learning workload for the first device, dynamically determine at least one characteristic of the first device that is related to execution of the machine learning workload, dynamically determine at least one characteristic of a second device that is related to execution of the machine learning workload, and select one of the first and second devices to execute the machine learning workload in response to the at least one characteristic of the selected one of the first and second devices being more suitable for execution of the machine learning workload than another of the first and second devices. | 2022-07-28 |
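The final selection step, picking whichever device's characteristics are more suitable for the workload, reduces to comparing a suitability score per device. The characteristics and weights below are hypothetical examples; the application does not enumerate them:

```python
def select_device(first: dict, second: dict) -> str:
    """Return which device should run the ML workload, scoring each by
    assumed characteristics: free memory (MB) and idle compute (GFLOPS)."""
    def suitability(dev):
        return dev["free_mem_mb"] * 0.5 + dev["idle_gflops"] * 10.0
    return "first" if suitability(first) >= suitability(second) else "second"
```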
20220237045 | METHOD, DEVICE, AND PROGRAM PRODUCT FOR MANAGING COMPUTING SYSTEM - A method includes: acquiring a set of operations to be performed on multiple computing units in the computing system; determining, based on the set of operations, the state of the multiple computing units, and an allocation model, an allocation action for allocating the set of operations to the multiple computing units and a reward for the allocation action, wherein the allocation model describes an association relationship among a set of operations, the state of multiple computing units, the allocation action for allocating the set of operations to the multiple computing units, and the reward for the allocation action; receiving an adjustment for the reward in response to determining that a match degree between the reward for the allocation action and a performance index of the computing system after the allocation action is performed satisfies a predetermined condition; and generating, based on the adjustment, training data for updating the allocation model. | 2022-07-28 |
20220237046 | SYSTEM AND METHOD FOR ACCESS MANAGEMENT FOR APPLICATIONS - A system and method for access management for applications is disclosed. The system and method includes at least: initializing, at execution time of an application code, a scan of actions performed by the application code on resources of a cloud computing environment; identifying an existing set of permissions for the resources; identifying one or more accessed permissions by the application code based on the actions performed by the application code on the resources; generating a new set of permissions for accessing the resources based on the identifying the existing set of permissions and the one or more accessed permissions; transmitting the new set of permissions to a database for storage and later retrieval; and applying the new set of permissions to the resources when the application code is executed in a production environment. | 2022-07-28 |
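A natural reading of the permission-generation step is least privilege: keep only the permissions that the scan actually observed the application exercising. This intersection rule is an assumption, since the abstract says only that the new set is generated "based on" the existing and accessed sets:

```python
def minimize_permissions(existing: set, accessed: set) -> set:
    """Right-size a grant: retain only permissions both currently granted
    and actually exercised by the application code during the scan."""
    return existing & accessed
```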
20220237047 | FORECAST OF RESOURCES FOR UNPRECEDENTED WORKLOADS - One or more processors receive resource type and capability information and activity information of workloads of a domain. A first model is generated and trained to map the resource information to the activity information of domain workloads. The activity information is decomposed into a first set of activity core elements (ACEs). The one or more processors generate a second model, wherein the second model is trained to predict a set of resource types and resource capabilities of the respective resource types, based on an input of the first set of ACEs decomposed from the activity information of the workloads of the domain. The one or more processors receive a second set of ACEs that are decomposed from activities associated with an unprecedented workload, and the one or more processors generate a predicted set of resources to perform the second set of ACEs. | 2022-07-28 |
20220237048 | AFFINITY AND ANTI-AFFINITY FOR SETS OF RESOURCES AND SETS OF DOMAINS IN A VIRTUALIZED AND CLUSTERED COMPUTER SYSTEM - An example method of placing resources in domains in a virtualized computing system is described. A host cluster includes a virtualization layer executing on hardware platforms of the hosts. The method includes: determining, at a virtualization management server, definitions of the domains and resource groups, each of the domains including a plurality of placement targets, each of the resource groups including a plurality of the resources; receiving, at the virtualization management server from the user, affinity/anti-affinity rules that control placement of the resource groups within the domains; and placing, by the virtualization management server, the resource groups within the domains based on the affinity/anti-affinity rules. | 2022-07-28 |
20220237049 | AFFINITY AND ANTI-AFFINITY WITH CONSTRAINTS FOR SETS OF RESOURCES AND SETS OF DOMAINS IN A VIRTUALIZED AND CLUSTERED COMPUTER SYSTEM - An example method of placing resources in domains of a virtualized computing system is described. A host cluster includes a virtualization layer executing on hardware platforms of the hosts. The method includes: determining, at a virtualization management server, definitions of the domains and resource groups, each of the domains including a plurality of placement targets, each of the resource groups including a plurality of the resources; receiving, at the virtualization management server from the user, affinity/anti-affinity rules that control placement of the resource groups within the domains; receiving, at the virtualization management server from the user, constraints that further control placement of the resource groups within the domains; and placing, by the virtualization management server, the resource groups within the domains based on the affinity/anti-affinity rules and the constraints. | 2022-07-28 |
20220237050 | SYSTEM AND METHOD FOR MANAGEMENT OF COMPOSED SYSTEMS USING OPERATION DATA - A composed system manager for managing operation of composed information handling systems includes storage for storing telemetry models for the composed information handling systems and a telemetry manager. The telemetry manager makes a determination that a composed information handling system of the composed information handling systems has been instantiated; in response to the determination, identifies resource set components allocated to the composed information handling system; generates a telemetry model of the telemetry models for the composed information handling system based on the resource set components; and configures the resource set components based on the telemetry model to aggregate operation data generated by the resource set components. | 2022-07-28 |
20220237051 | METHOD AND SYSTEM FOR PROVIDING COMPOSABLE INFRASTRUCTURE CAPABILITIES - A system control processor manager uses composed information handling systems that utilize resource sets of information handling systems and an infrastructure manager. The infrastructure manager obtains a composition request for a composed information handling system; allocates a portion of resource sets to the composed information handling system using a telemetry data map; makes a determination that at least one of the portion of the allocated resource sets is hosted by an information handling system that does not include a physical system control processor; and in response to the determination: provides the information handling system with access to a system control processor without adding any physical system control processors to the information handling system; and directs access requests, by entities hosted by the information handling system and directed to the portion of the allocated resource sets, through the system control processor. | 2022-07-28 |
20220237052 | DEPLOYING MICROSERVICES INTO VIRTUALIZED COMPUTING SYSTEMS - Methods, systems and computer program products for configuring microservices platforms in one or more computing clusters. In one of the computing clusters, a request to instantiate a microservice platform is received, wherein the request is received in a computing cluster having a first node and a second node, and wherein the first node and second node comprise a first virtualized storage controller and a second virtualized storage controller, respectively. The storage controllers each manage their respective storage pools comprising local storage devices. A first microservice manager is deployed on the first node and a second microservice manager is deployed on the second node. The first virtualized storage controller on the first node performs storage management operations for a first microservice instantiated by the first microservice manager, and the second virtualized storage controller on the second node performs storage management operations for a second microservice instantiated by the second microservice manager. | 2022-07-28 |
20220237053 | METHOD AND SYSTEM FOR USING DEFINED COMPUTING ENTITIES - A system and method use a defined entity type that describes a data structure of a defined computing entity and at least one behavior of the defined computing entity, based on user input information. The at least one behavior of the defined computing entity is defined by associating at least one interface with the defined entity type, where the at least one interface represents a reference entity type with a collection of behavior information. An operation is then executed on the defined computing entity according to the at least one behavior of the defined computing entity. | 2022-07-28 |
20220237054 | DYNAMIC PERSONALIZED API ASSEMBLY - Methods, computer readable media, and devices for dynamic personalized API assembly are provided. One method may include receiving a data query from a client by a CDN, parsing the data query to generate a modified data query, transmitting the modified data query to an origin server, receiving a query response from the origin server, generating a modified query response based on the query response, and sending the modified query response to the client. Another method may include receiving an API call by an origin server, generating an API response by creating a payload file and adding markup directives indicating whether content is cacheable, and transmitting the API response. | 2022-07-28 |