52nd week of 2021 patent application highlights part 52 |
Patent application number | Title | Published |
20210405973 | TRUE RANDOM NUMBER GENERATOR BASED ON PERIOD JITTER - A true random number generator (TRNG) for generating a sequence of random numbers of bits is disclosed. The TRNG includes a TRNG cell configured to generate a sequence of bits logically alternating with a mean frequency and with substantially random period jitter; a period monitor configured to generate a first sequence of random bits based on a set of periods of the sequence of logically alternating bits; and a sampling circuit configured to sample the first sequence of random bits in response to a sampling clock to generate a second sequence of random bits. | 2021-12-30 |
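As a rough illustration of the entry above (not the patented circuit), the two stages can be sketched in Python: a first bit sequence derived from the parity of jittery oscillator periods, then a sampling stage that keeps every Nth bit. The period values and jitter model are purely hypothetical.

```python
import random

random.seed(1)
mean_period_ps = 1000
# Simulate measured oscillator periods: a nominal period plus Gaussian
# jitter (stand-in for the substantially random period jitter).
periods = [mean_period_ps + round(random.gauss(0, 5)) for _ in range(16)]

def period_bits(periods):
    # First random bit sequence: the parity of each measured period,
    # which is dominated by the random jitter component.
    return [p & 1 for p in periods]

def sample(bits, every=2):
    # Sampling circuit: keep every Nth bit of the first sequence, as
    # driven by a slower sampling clock.
    return bits[::every]

raw = period_bits(periods)
out = sample(raw)
```

A real design would measure periods in hardware and whiten the output; this sketch only shows the jitter-to-bits-to-sampled-bits flow.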
20210405974 | MATRIX TRANSPOSE AND MULTIPLY - Embodiments for a matrix transpose and multiply operation are disclosed. In an embodiment, a processor includes a decoder and execution circuitry. The decoder is to decode an instruction having a format including an opcode field to specify an opcode, a first destination operand field to specify a destination matrix location, a first source operand field to specify a first source matrix location, and a second source operand field to specify a second source matrix location. The execution circuitry is to, in response to the decoded instruction, transpose the first source matrix to generate a transposed first source matrix, perform a matrix multiplication using the transposed first source matrix and the second source matrix to generate a result, and store the result in a destination matrix location. | 2021-12-30 |
20210405975 | MAINTENANCE AND COMMISSIONING - An industrial integrated development environment (IDE) supports commissioning features that facilitate intelligent deployment of an automation system project to appropriate industrial devices (e.g., industrial controllers, drives, HMI terminals, etc.). In some embodiments, the industrial IDE system can generate validation checklists that can be used during commissioning to validate the system and manage project validation sign-off procedures. After commissioning of the system, the IDE system can also support a number of runtime monitoring features, including monitoring the automation system during operation and providing assistance with regard to detecting, predicting, and correcting maintenance issues. | 2021-12-30 |
20210405976 | SYSTEM AND METHOD FOR AUTOMATED SOFTWARE ENGINEERING - Systems and methods for automated software engineering are disclosed. A particular embodiment is configured to: establish a data connection with a software code repository; provide a collection of autonomous computer programs or bots configured to automatically perform a specific software development life cycle (SDLC) task; use a first bot of the collection of bots to perform an automatic code review of a software module from the software code repository; use a second bot of the collection of bots to perform automatic unit testing of the software module from the software code repository; and use a third bot of the collection of bots to perform an automatic deployment of the software module from the software code repository. A health engine module can monitor the execution of the other software modules and capture execution metrics. Any of the bots in the bot collection can be machine learning models trained using training data. | 2021-12-30 |
20210405977 | METHOD OF CONVERTING SOURCE CODE IN A UNIVERSAL HYBRID PROGRAMMING ENVIRONMENT - A method and system for providing a hybrid block and text-based programming environment. The hybrid block and text-based programming environment provides a software development tool suitable for users of different programming skill levels to write and understand code. The hybrid programming environment enables a user to view and edit source code through multiple graphical representative displays of the source code in a manner not previously achievable. Each of the graphical representative displays is linked to a particular programming view that has a unique set of rules related to the functionality of the displayed graphical elements to enable the more comprehensive functionality. The graphical representative displays provide a tool to educate novice programmers as they become more proficient and assist in the transition between block-based and textual representations. | 2021-12-30 |
20210405978 | METHOD AND SYSTEM FOR CONFIGURING PROCESSES OF SOFTWARE APPLICATIONS USING ACTIVITY FRAGMENTS - A method for creating processes in a software application. The method includes obtaining an activity fragment. The activity fragment includes an activity fragment name and an activity fragment configuration. The method further includes obtaining a process specification specifying an activity, and obtaining activity configuration instructions. The activity configuration instructions specify inclusion of the activity fragment in the activity. The method also includes building, based on the process specification, a process. Building the process includes associating the activity fragment with the activity. | 2021-12-30 |
20210405979 | FAST COMPILING SOURCE CODE WITHOUT DEPENDENCIES - Techniques for ultra-fast software compilation of source code are provided. A compiler receives software code and may divide it into code sections. A map of ordered nodes may be generated, such that each node in the map may include a code section and the order of the nodes indicates an execution order of the software code. Each code section may be compiled into an executable object in parallel and independently from other code sections. A binary executable may be generated by linking executable objects generated from the code sections. The methodology differs significantly from existing source code compilation techniques: conventional compilers build executables sequentially, whereas the embodiments divide the source code into multiple smaller code sections and compile them individually and in parallel. Compiling code sections in parallel improves compilation times by orders of magnitude over conventional techniques. | 2021-12-30 |
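The compile-in-parallel, link-in-order scheme above can be sketched with Python's standard thread pool; the "compilation" and "linking" steps here are stand-ins, and the section names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def compile_section(section):
    # Stand-in for compiling one code section to an executable object.
    # Sections have no dependencies on each other, which is what makes
    # compiling them in parallel safe.
    return f"obj({section})"

def build(sections):
    # Compile every section in parallel; pool.map preserves input
    # order, so the objects come back in the execution order recorded
    # by the node map and can be "linked" sequentially.
    with ThreadPoolExecutor() as pool:
        objects = list(pool.map(compile_section, sections))
    return " + ".join(objects)  # linking step

binary = build(["init", "main", "teardown"])
# binary == "obj(init) + obj(main) + obj(teardown)"
```

The key property is that only the cheap linking step is ordered; all the expensive per-section work runs concurrently.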
20210405980 | LONG METHOD AUTOFIX ENGINE - A method and apparatus are disclosed for eliminating overlong source code segments (e.g., methods) by evaluating input source code segments for a plurality of predetermined code metric values to identify a first long code segment based on predetermined code metric values for output and storage in a codefix issue queue, applying multiple extraction algorithms to the first long code segment to generate a second code segment that is semantically equivalent to and shorter than the first long code segment; and then generating a fixed codegraph representation of the software program using the second code segment to replace the first long code segment. | 2021-12-30 |
20210405981 | OFFLOADING SERVER AND OFFLOADING PROGRAM - An offloading server includes: a data transfer designation section configured to analyze reference relationships of variables used in loop statements in an application and designate, for data that can be transferred outside a loop, a data transfer using an explicit directive that explicitly specifies a data transfer outside the loop; a parallel processing designation section configured to identify loop statements in the application and specify a directive specifying application of parallel processing by an accelerator and perform compilation for each of the loop statements; and a parallel processing pattern creation section configured to exclude loop statements causing a compilation error from loop statements to be offloaded and create a plurality of parallel processing patterns each of which specifies whether to perform parallel processing for each of the loop statements not causing a compilation error. | 2021-12-30 |
20210405982 | BYTECODE TRANSFORMATIONS USING VIRTUAL ARTIFACTS - Methods and systems for transforming bytecodes using virtual artifacts are disclosed. In one aspect, a method is provided that includes receiving a build request to convert source code into a first bytecode. A first virtual artifact may be identified within the source code and it may be determined that a local repository does not store the first virtual artifact. A real artifact that corresponds to the first virtual artifact may be retrieved from a centralized repository. A bytecode transformation may be applied to the real artifact to generate a second bytecode and the second bytecode may be added to the first bytecode. | 2021-12-30 |
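The resolution flow in the entry above (check the local repository, fall back to the centralized one, then transform) can be sketched as follows; the repositories, artifact names, and the string-based "transformation" are all illustrative placeholders.

```python
LOCAL_REPO = {}                                      # artifact name -> real artifact
CENTRAL_REPO = {"logging-lib": "real:logging-lib"}   # hypothetical central store

def resolve(virtual_artifact):
    # Prefer the local repository; when it does not store the virtual
    # artifact, retrieve the corresponding real artifact from the
    # centralized repository and cache it locally.
    if virtual_artifact in LOCAL_REPO:
        return LOCAL_REPO[virtual_artifact]
    real = CENTRAL_REPO[virtual_artifact]
    LOCAL_REPO[virtual_artifact] = real
    return real

def transform(real_artifact):
    # Stand-in for the bytecode transformation applied to the real
    # artifact before it is merged into the build output.
    return real_artifact.replace("real:", "bytecode:")

extra_bytecode = transform(resolve("logging-lib"))
# extra_bytecode == "bytecode:logging-lib"
```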
20210405983 | SMART CONTRACT MAPPING TO TRUSTED HARDWARE IN A DATA CONFIDENCE FABRIC - Mapping applications or smart contracts to a data confidence fabric. A smart contract is deployed and executed in a data confidence fabric based on trust requirements of the smart contract. The trust requirements are mapped to the nodes of the data confidence fabric. A ledger is created on the identified nodes and the application is deployed to and run on the identified nodes. | 2021-12-30 |
20210405984 | Computer Model Management System - Techniques are disclosed relating to a method that includes receiving, by a particular computer system included in an enterprise computer system, deployment instructions, from a user, for a particular version of a machine-learning model. One or more versions of the machine-learning model may be stored in a database. The particular computer system may select, based on the deployment instructions, a destination within the enterprise computer system for deploying the particular version. The selected destination may provide access to a particular data set. The particular computer system may schedule a deployment of the particular version from the database to the selected destination. The deployed version of the machine-learning model may operate on the particular data set. Performance data associated with operation of the deployed version of the machine-learning model is collected. | 2021-12-30 |
20210405985 | Computer-Automated Software Release and Deployment Architecture - A method includes storing sets of deployment parameters. A first set of deployment parameters specifies deployment of a first application to a first environment. The method includes, in response to receiving a pointer to an executable form of the first application, storing the pointer as part of the first set. The method includes generating release objects, each identifying a specific version of deployment parameters. The method includes assigning the release objects to the plurality of environments. The method includes deploying the release objects to the assigned environments. A first release object corresponds to the first application and identifies a specified version of the first set. The method includes, subsequent to the first release object being assigned to the first environment, configuring the first environment according to the specified version of the first set, copying the pointed-to executable form to the first environment, and initiating execution of the copied executable. | 2021-12-30 |
20210405986 | ULTRA-FAST INSTALL AND UPDATE OF AN OPERATING SYSTEM - Techniques for an ultra-fast installation of a new operating system are provided. Conventional dependencies are ignored in a way that allows multiple components to be installed at once, even when certain components traditionally could not be installed until one or more other components had successfully completed their installation. An operating system executing on a computing device receives a container with files that collectively include a new operating system and a definition with the locations for the files in memory. An uninstalled state may be assigned to each file. Each file may be moved from the container to the location specified in the definition in parallel and independently of other files. After each file is moved, each file may be switched from the uninstalled state to the installed state. The new operating system may be transitioned from an uninstalled state to an installed state once files are switched to installed states. | 2021-12-30 |
20210405987 | DYNAMIC DRIVER SELECTION BASED ON FIRMWARE FOR A HARDWARE COMPONENT - An apparatus for dynamic driver selection based on firmware for a hardware component includes a processor and a memory that stores program code executable by the processor to perform operations including identifying hardware components installed on a device prior to installing an operating system. The operations include determining a level for firmware installed on one or more hardware components of the identified hardware components. The operations include determining a level for a device driver available to the operating system for communicating with the hardware component. The operations include in response to determining that the available device driver level is not compliant with the firmware level for the hardware component, dynamically retrieving, from a repository of device drivers, a device driver that has a level that is compliant with the firmware level. The operations include installing the compliant device driver on the device during installation of the operating system. | 2021-12-30 |
20210405988 | SYSTEM AND METHOD FOR AUTOMATIC DEPLOYMENT OF A CLOUD ENVIRONMENT - A method for the rapid, automatic, and adaptative deployment of a cloud environment that is secure, that adapts to different hardware architectures, network architectures, cloud services, technologies, and user needs, and that requires minimal user input. Configuration data may be generated for a collection of software components, which may include user inputs and randomly generated data. This data may be stored in a configuration database that is updated as deployment proceeds. Available hardware such as servers, storage, and networks may be discovered automatically and added to the configuration database. An initial software component may be deployed to coordinate subsequent steps, and then additional software components may be deployed in a sequence that considers dependencies. Software components may be organized into deployment groups. Users may select subsets of the components to deploy. The deployed cloud environment may be tested and validated automatically. | 2021-12-30 |
20210405989 | METHOD FOR TRANSFERRING A SOFTWARE INSTALLATION PROCEDURE ONTO A MEDICAL DEVICE - A method is for transferring a software installation procedure onto a medical device. In an embodiment, the method includes providing an image on a transmit system via a computing unit of the transmit system, the image including a visual depiction based on the installation procedure; optically transferring the image from the transmit system onto the medical device; and determining the installation procedure on the medical device based upon the visual depiction of the image via a computing unit of the medical device. | 2021-12-30 |
20210405990 | METHOD, DEVICE, AND STORAGE MEDIUM FOR DEPLOYING MACHINE LEARNING MODEL - Embodiments of the present disclosure relate to a method, a device, and a storage medium for deploying a machine learning model. The method includes: determining, at a first computing device, a configuration of a second computing device, wherein computing power of the first computing device is greater than that of the second computing device and the configuration of the second computing device indicates at least a processor architecture of the second computing device; acquiring a program code of a trained machine learning model corresponding to the configuration of the second computing device, wherein the program code is adapted to the processor architecture; and providing the program code of the machine learning model to the second computing device, for deploying the machine learning model on the second computing device. | 2021-12-30 |
20210405991 | AUTOMATIC UPDATE SCHEDULER SYSTEMS AND PROCESSES - A computer-implemented method includes: registering, by a computer device, a device to a network; collecting, by the computer device, device data from the device through the network; compiling, by the computer device, training data from the collected device data; training, by the computer device, a machine learning model using the training data; predicting, by the computer device and using the machine learning model, a time when the device will be in an inactive system state; and automatically scheduling, by the computer device and based on the predicting, an application of an update for the time when the device is in the inactive system state. | 2021-12-30 |
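A toy version of the scheduling logic above: the claimed system trains a machine learning model on collected device data, but as an illustration, this stand-in simply picks the hour of day with the fewest observed activity events. The telemetry values are invented.

```python
from collections import Counter

def predict_inactive_hour(activity_log):
    # activity_log: hours (0-23) at which device activity was observed.
    # A real system would train an ML model on this data; this stand-in
    # picks the hour with the fewest observations (ties break low).
    counts = Counter({h: 0 for h in range(24)})
    counts.update(activity_log)
    return min(counts, key=lambda h: (counts[h], h))

def schedule_update(activity_log):
    # Automatically schedule the update for the predicted inactive time.
    return {"update_at_hour": predict_inactive_hour(activity_log)}

log = [9, 9, 10, 14, 14, 14, 20]   # hypothetical activity telemetry
plan = schedule_update(log)
# plan == {"update_at_hour": 0}
```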
20210405992 | Managed Rooms Operational Maintenance - A device-implemented method for providing managed services is described herein. An example method includes determining each managed device of a plurality of managed devices available in a shared space, and determining health data for the shared space. The health data describes properties of each managed device of the plurality of managed devices, including update information, performance information, configuration information, and security information. The method further includes identifying a problem with at least one of the plurality of managed devices, the problem being identifiable based on analysis of the health data for the shared space. The method also includes coordinating operational maintenance for the at least one of the plurality of managed devices to alleviate the problem. Coordinating operational maintenance may include performing at least two of a software update, a configuration change, and uninstalling software. | 2021-12-30 |
20210405993 | Method of Delivering and Updating Software on Peripheral Devices Connected to Set-Top Boxes, IoT-Hubs, or Gateways - Disclosed are an apparatus and method for securely delivering and updating software on a peripheral device in an area network. Software for a peripheral device is obtained from an entity responsible for the functionality of the peripheral device. The software is validated for functionality and integrity, and it is then encrypted at the headend of a network infrastructure which securely delivers the software to a processor responsible for controlling the interface of the area network. The processor decrypts the validated software, and it delivers the validated software to a peripheral device on the area network. The validated software is executed on the peripheral device, such that the peripheral device executes an authentic version of the software from the entity responsible for the functionality of the peripheral device. | 2021-12-30 |
20210405994 | SERVER, UPDATE MANAGING METHOD, NON-TRANSITORY STORAGE MEDIUM, SOFTWARE UPDATING DEVICE, CENTER, AND OTA MASTER - A server includes: a first storage device storing prerequisite condition information including one or more prerequisite conditions to be satisfied by a vehicle when updating of software of an electronic control unit installed in the vehicle is executed; and one or more processors configured to transmit the prerequisite condition information to the vehicle based on a request from the vehicle. | 2021-12-30 |
20210405995 | ENTERPRISE FIRMWARE MANAGEMENT - Examples described herein include systems and methods for managing firmware versions of user devices that are enrolled in an enterprise mobility management system. The system can include a management server that sends profiles to enrolled devices, causing those devices to restrict further firmware updates and register with a firmware server. The management server can retrieve available firmware versions and display those in a console. An administrator can select target firmware versions in the console. The management server can then cause the enrolled devices to update to the target firmware versions. This can include sending a call from the management server to the firmware server, causing an automatic update. It can also include sending a command from the management server to an enrolled device, causing the enrolled device to prompt a user prior to requesting a firmware update. This can allow an administrator to prevent user devices from installing firmware updates that could expose the enterprise to security risks or negatively impact operation of enterprise applications. | 2021-12-30 |
20210405996 | SERVER, MANAGING METHOD, NON-TRANSITORY STORAGE MEDIUM, SOFTWARE UPDATING DEVICE, CENTER, AND OVER-THE-AIR MASTER - A server configured to communicate with a vehicle includes: a storage device configured to store usage information, in which settings information of software executed by at least one of a plurality of electronic control units installed in the vehicle is correlated with user identification information that identifies a user of the vehicle; and one or more processors configured to receive, from a software updating device that is one of the electronic control units, the user identification information specified by the software updating device, and transmit settings information correlated with the user identification information to the software updating device based on the user identification information which is specified and the usage information. | 2021-12-30 |
20210405997 | PROCESSING FRAMEWORK FOR IN-SYSTEM PROGRAMMING IN A CONTAINERIZED ENVIRONMENT - The present disclosure relates to computer-implemented methods, software, and systems for lifecycle processing of declarative artifacts. Declarative artifacts defining a target state for application content related to a software application are read. When running, the software application includes runtime artifacts executing in a containerized environment. Model definition objects for processing during runtime of the software application based on the declarative artifacts are created and stored in a model repository at a container associated with the software application. The model repository is scanned as well as the runtime artifacts executing as part of the software application in the containerized runtime environment to identify a model definition object from the model repository for processing at runtime of the software application. An operation related to a runtime artifact to run as part of the running software application at the containerized runtime environment is executed based on input from the identified model definition object. | 2021-12-30 |
20210405998 | ELEMENT DETECTION - Provided herein are systems and methods for providing digital guidance in an underlying computer application. In one exemplary implementation, a method includes setting a rule or rules, in a computing device, in advance of digital guidance content creation, for detecting, upon later playback of the content, page elements of the underlying computer application that are associated with the content. The exemplary method further includes recording, in the computing device, steps of the digital guidance content as the steps are created by a content author, and automatically applying, in the computing device, the previously set rule or rules for detecting page elements, and thereby assigning strong attributes to the page elements. The method further includes saving, in the computing device, the content steps along with the strong attributes of the page elements associated with the content steps. | 2021-12-30 |
20210405999 | CONTEXT-AWARE UNDO-REDO SERVICE OF AN APPLICATION DEVELOPMENT PLATFORM - A computing device is disclosed herein. The computing device includes a memory that stores processor executable instructions for an application development platform and a context-aware undo-redo service of the application development platform. The computing device includes a processor that executes the processor executable instructions to cause the computing device to receive a first invocation of an undo operation with respect to environment variables on screens. The computing device further navigates, according to an active context, to a configuration screen of the screens to make the configuration screen visible in response to the first invocation. The configuration screen shows a portion of the environment variables. The computing device also receives a second invocation of the undo operation and executes the undo operation in response to the second invocation to reverse changes to the portion of the environment variables shown by the configuration screen while the configuration screen is visible. | 2021-12-30 |
20210406000 | REDUCED PROCESSING LOADS VIA SELECTIVE VALIDATION SPECIFICATIONS - Disclosed are embodiments for reducing processing requirements in complex build environments. Complex build environments frequently perform multiple builds per day; in some cases, multiple builds occur in parallel. Some of these builds succeed and some fail. Moreover, a definition of success or failure of a build can vary across individual engineers or teams of engineers. In a complex build environment that is rapidly generating multiple build results simultaneously, identifying which builds are appropriate for use can be difficult. Many teams solve this problem by increasing the frequency of builds to rapidly detect any problems with documents recently checked into a document repository. However, this relatively high frequency of builds can impose large processing and/or cost burdens on an organization. By providing sophisticated methods of extracting validation information from existing builds, the disclosed embodiments reduce processing requirements and improve the efficiency of enterprise build environments. | 2021-12-30 |
20210406001 | METHOD AND SYSTEM FOR FAST BUILDING AND TESTING SOFTWARE - Example methods are provided for performing fast building and testing a software suite with multiple software components. In one example, the method may include obtaining a changed code file, identifying a software component of the software suite impacted by the changed code file, and instructing to generate a software component build based on the software component but without other software components of the software suite. Before completing generating the software component build, the method may also include selecting a software suite build. The method further includes instructing to prepare a testbed based on the software suite build and instructing to test the software component build on the testbed. | 2021-12-30 |
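The core of the entry above, mapping a changed code file to the single impacted component so only that component is rebuilt, can be sketched like this; the directory-to-component map and file names are hypothetical.

```python
# Hypothetical mapping from source directories to suite components.
COMPONENT_MAP = {
    "net/": "networking",
    "ui/": "frontend",
    "db/": "storage",
}

def impacted_component(changed_file):
    # Identify which component of the software suite the changed file
    # belongs to, so only that component needs a fresh build.
    for prefix, component in COMPONENT_MAP.items():
        if changed_file.startswith(prefix):
            return component
    return None

def plan_build(changed_file):
    component = impacted_component(changed_file)
    # Build just the impacted component; the rest of the suite comes
    # from a previously selected full suite build used as the testbed.
    return {"build": component, "reuse_suite_build": True}

plan = plan_build("net/socket.c")
# plan == {"build": "networking", "reuse_suite_build": True}
```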
20210406002 | CLIENT-SIDE ENRICHMENT AND TRANSFORMATION VIA DYNAMIC LOGIC FOR ANALYTICS - Described are systems and methods for client-side enrichment and transformation via dynamic logic for analytics across various platforms for improved performance, features, and uses. Analytics data collected in client applications is transformed and enriched before being sent to the downstream pipeline using native code and logic bundled into the core application code. The additional logic specific to manipulation of analytics may be unbundled from client-side application code and still be executed on-device to achieve the same result. The logic may be written in a single language, such as JavaScript, and run across all clients including web browsers and mobile operating systems. | 2021-12-30 |
20210406003 | META-INDEXING, SEARCH, COMPLIANCE, AND TEST FRAMEWORK FOR SOFTWARE DEVELOPMENT USING SMART CONTRACTS - A system and method for meta-indexing, search, compliance, and test framework for software development using smart contracts is provided, comprising an indexing service configured to create a dataset by processing and indexing source code of a project provided by a developer, perform a code audit on the indexed source code, store results from the code audit in the dataset, gather additional information relating to the provided project, store the additional information in the dataset, and store the dataset into memory; and a monitoring service configured to continuously monitor the project for at least source code changes and make changes to the dataset as needed. Additionally, a smart contract authority creates and enforces smart contracts for every transaction taking place upon the software, essentially mandating and guaranteeing the security and authenticity of the software during the software's development and use. | 2021-12-30 |
20210406004 | SYSTEM AND METHOD FOR IMPLEMENTING A CODE AUDIT TOOL - An embodiment of the present invention is directed to a code audit tool that intelligently analyzes and profiles code, such as Python code, based on a variety of previously unmeasured factors and metrics including a set of software dimensions, such as Algorithmic Complexities; Software Sizing Metrics; Anti-Pattern Implementations; Maintainability Metrics; Dependency Mappings; Runtime Metrics; Testing Metrics; and Security Metrics. Once this analysis is complete, a standardized report card or other scoring interface may be generated. This may include analytical findings as well as suggestions and recommend steps so that developers can make informed decisions, enhance their code bases and improve the score assigned to their code. | 2021-12-30 |
20210406005 | COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR MEASURING, ESTIMATING, AND MANAGING ECONOMIC OUTCOMES AND TECHNICAL DEBT IN SOFTWARE SYSTEMS AND PROJECTS - An interrelated set of tools and methods is disclosed for: (1) measuring the relationship between software source code attributes (such as code quality, design quality, test quality, and complexity metrics) and software economics outcome metrics (such as maintainability, agility, and cost) experienced by development and maintenance organizations, (2) using this information to project or estimate the level of technical debt in a software codebase, (3) using this information to estimate the financial value of efforts focused on improving the codebase (such as rewriting or refactoring), and (4) using this information to help manage a software development effort over its lifetime so as to improve software economics, business outcomes, and technical debt while doing so. | 2021-12-30 |
20210406006 | SYSTEMS AND METHODS FOR FIRMWARE-BASED USER AWARENESS ARBITRATION - A method may include, in an operating system, implementing a sensor hub in firmware of a platform controller hub of an information handling system, the sensor hub configured to implement a plurality of sensor physical microdrivers, each of the plurality of sensor physical microdrivers corresponding to a respective sensor of a plurality of sensors and configured to communicate a signal representing a physical quantity sensed by the respective sensor; a plurality of algorithm microdrivers implemented as virtual microdrivers, each of the plurality of algorithm microdrivers corresponding to a respective sensor physical microdriver of the plurality of sensor physical microdrivers; and a user-awareness arbitration microdriver implemented as a virtual microdriver and configured to receive an arbitration policy for user awareness detection, receive sensor information from the plurality of algorithm microdrivers, and based on the arbitration policy, apply arbitration logic to the sensor information to determine a user awareness. | 2021-12-30 |
20210406007 | GENERATING OPTIMIZED MICROCODE INSTRUCTIONS FOR DYNAMIC PROGRAMMING BASED ON IDEMPOTENT SEMIRING OPERATIONS - In one embodiment, a method is provided. The method includes determining whether a set of algorithmic operations can be represented using an algebraic formulation. The method also includes generating a sequence of idempotent semiring operations based on the set of algorithmic operations in response to determining that the set of algorithmic operations can be represented using the algebraic formulation. The sequence of idempotent semiring operations are part of an algebraic idempotent semiring, represent the algebraic formulation, and comprise one or more of an associative, commutative pick operation that forms an abelian monoid and an associative tally operation that forms a monoid and distributes over the pick operation. The method also includes generating a sequence of microcode instructions based on the sequence of idempotent semiring operations, wherein the sequence of microcode instructions carries out the sequence of idempotent semiring operations. | 2021-12-30 |
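A concrete instance of the pick/tally structure described above is the tropical (min, +) semiring, where pick = min (commutative and idempotent) and tally = + (associative and distributing over min). Dynamic programming over it, here a Bellman-Ford shortest-path sketch, illustrates the kind of computation the claimed microcode generation targets; the graph is invented.

```python
import math

# Tropical (min, +) semiring: "pick" = min, "tally" = +.
PICK = min

def tally(a, b):
    return a + b

def shortest_paths(edges, n, src):
    # Bellman-Ford expressed as repeated semiring operations:
    # dist[v] = pick(dist[v], tally(dist[u], w)) for each edge (u, v, w).
    dist = [math.inf] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            dist[v] = PICK(dist[v], tally(dist[u], w))
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2)]
d = shortest_paths(edges, 3, 0)
# d == [0, 3, 1]
```

Because pick is idempotent, re-applying an update never changes a converged result, which is the property that makes such recurrences amenable to aggressive reordering and hardware scheduling.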
20210406008 | SAFETY SUPERVISED GENERAL PURPOSE COMPUTING DEVICES - A computing device including a plurality of sensors, a system-on-module, a safety microcontroller and a plurality of communication interfaces communicatively coupling the system-on-module, the safety microcontroller and the plurality of sensors together. The system-on-module can include an integrated interconnection of a plurality of different types of cores and one or more different types of memory. The system-on-module can be configured to control operation and/or performance of a system. The safety microcontroller can be configured to provide safety supervision of the system-on-module. | 2021-12-30 |
20210406009 | APPARATUS FOR OPTIMIZED MICROCODE INSTRUCTIONS FOR DYNAMIC PROGRAMMING BASED ON IDEMPOTENT SEMIRING OPERATIONS - In one embodiment, a method is provided. The method includes determining whether a set of algorithmic operations can be represented using an algebraic formulation. The method also includes generating a sequence of idempotent semiring operations based on the set of algorithmic operations in response to determining that the set of algorithmic operations can be represented using the algebraic formulation. The sequence of idempotent semiring operations is part of an algebraic idempotent semiring, represents the algebraic formulation, and comprises one or more of an associative, commutative pick operation that forms an abelian monoid and an associative tally operation that forms a monoid and distributes over the pick operation. The method also includes generating a sequence of microcode instructions based on the sequence of idempotent semiring operations, wherein the sequence of microcode instructions carries out the sequence of idempotent semiring operations. | 2021-12-30 |
20210406010 | PROCESSOR AND CONTROL METHOD FOR PROCESSOR - A processor having a systolic array that can perform operations efficiently is provided. The processor includes multiple processing cores aligned in a matrix, and each of the processing cores includes an arithmetic unit array including multiple arithmetic units that can form a systolic array. Each of the processing cores includes a first memory that stores first data, a second memory that stores second data, a first multiplexer that connects a first input for receiving the first data at the arithmetic unit array to an output of the first memory in the processing core or an output of the arithmetic unit array in an adjacent processing core, and a second multiplexer that connects a second input for receiving the second data at the arithmetic unit array to an output of the second memory in the processing core or an output of the arithmetic unit array in an adjacent processing core. | 2021-12-30 |
20210406011 | Systems, Apparatuses, And Methods For Fused Multiply Add - Embodiments of systems, apparatuses, and methods for fused multiple add. In some embodiments, a decoder decodes a single instruction having an opcode, a destination field representing a destination operand, and fields for a first, second, and third packed data source operand, wherein packed data elements of the first and second packed data source operand are of a first, different size than a second size of packed data elements of the third packed data operand. Execution circuitry then executes the decoded single instruction to perform, for each packed data element position of the destination operand, a multiplication of a M N-sized packed data elements from the first and second packed data sources that correspond to a packed data element position of the third packed data source, add of results from these multiplications to a full-sized packed data element of a packed data element position of the third packed data source, and storage of the addition result in a packed data element position destination corresponding to the packed data element position of the third packed data source, wherein M is equal to the full-sized packed data element divided by N. | 2021-12-30 |
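The mixed-width multiply-accumulate described in the abstract above resembles a dot-product-accumulate step. The following Python model is an illustrative sketch, not the claimed hardware: `m` narrow elements of the two sources are multiplied pairwise and their products are added to one full-sized element of the third source. Function and parameter names are invented for the example, and element widths are abstracted away.

```python
def fused_multiply_add(src1, src2, src3, m):
    """For each full-sized element position i of src3, multiply the m
    corresponding narrow elements of src1/src2, sum the products, and add
    the sum to src3[i] (a software model; m = full width / narrow width)."""
    assert len(src1) == len(src2) == m * len(src3)
    dest = []
    for i, acc in enumerate(src3):
        for j in range(m):
            acc += src1[i * m + j] * src2[i * m + j]
        dest.append(acc)
    return dest
```

For instance, with 8-bit sources accumulating into 32-bit elements, m would be 4, so each destination element absorbs four products per instruction.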
20210406012 | LOADING AND STORING MATRIX DATA WITH DATATYPE CONVERSION - Embodiments for loading and storing matrix data with datatype conversion are disclosed. In an embodiment, a processor includes a decoder and execution circuitry. The decoder is to decode an instruction having a format including an opcode field to specify an opcode, a first destination operand field to specify a first destination matrix location, and a first source operand field to specify a first source matrix location. The execution circuitry is to, in response to the decoded instruction, convert data elements from a plurality of source element locations of a first source matrix specified by the first source matrix location from a first datatype to a second datatype to generate a plurality of converted data elements and to store each of the plurality of converted data elements in one of a plurality of destination element locations in a first destination matrix specified by the first destination matrix location. | 2021-12-30 |
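A small software model may clarify the load/store-with-conversion described above: each source element is converted from one datatype to another and stored at the corresponding destination element location. The Python sketch below is illustrative only; `to_int8_sat` is a hypothetical conversion standing in for whatever datatype pair the hardware actually supports.

```python
def convert_and_store(src, convert):
    """Model of a load-with-conversion: apply `convert` to every source
    element and store the result at the same row/column of the destination."""
    return [[convert(elem) for elem in row] for row in src]

# Example conversion (assumed for illustration): saturating float -> int8.
def to_int8_sat(x):
    return max(-128, min(127, int(round(x))))
```

Folding the conversion into the load or store avoids a separate pass over the matrix, which matters when the compute datatype (e.g. a narrow format) differs from the storage datatype.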
20210406013 | PROCESSING DEVICE, INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD - A processing device includes at least one memory, and at least one processor configured to receive a packet including a plurality of instructions, context information indicating an execution state of the plurality of instructions, and data to be processed by the plurality of instructions, execute at least a part of the plurality of instructions based on the context information, and transmit the packet to another processing device, after executing the at least the part of the plurality of instructions by the processing device, to cause at least a part of remaining instructions among the plurality of instructions to be executed, based on the context information, the remaining instructions not having been executed by the processing device. | 2021-12-30 |
20210406014 | CACHE MANAGEMENT OPERATIONS USING STREAMING ENGINE - A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction. | 2021-12-30 |
20210406015 | Execution or Write Mask Generation for Data Selection in a Multi-Threaded, Self-Scheduling Reconfigurable Computing Fabric - Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an asynchronous packet network having a plurality of data transmission lines forming a data path transmitting operand data; a synchronous mesh communication network; a plurality of configurable circuits arranged in an array, each configurable circuit of the plurality of configurable circuits coupled to the asynchronous packet network and to the synchronous mesh communication network, each configurable circuit of the plurality of configurable circuits adapted to perform a plurality of computations; each configurable circuit of the plurality of configurable circuits comprising: a memory storing operand data; and an execution or write mask generator adapted to generate an execution mask or a write mask identifying valid bits or bytes transmitted on the data path or stored in the memory for a current or next computation. | 2021-12-30 |
20210406016 | MATRIX DATA SCATTER AND GATHER BY ROW - Embodiments for gathering and scattering matrix data by row are disclosed. In an embodiment, a processor includes a storage matrix, a decoder, and execution circuitry. The decoder is to decode an instruction having a format including an opcode field to specify an opcode and a first operand field to specify a set of irregularly spaced memory locations. The execution circuitry is to, in response to the decoded instruction, calculate a set of addresses corresponding to the set of irregularly spaced memory locations and transfer a set of rows of data between the storage and the set of irregularly spaced memory locations. | 2021-12-30 |
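The row-wise gather/scatter described above can be modeled in a few lines: compute an address per row, then move a row's worth of consecutive elements at each address. This Python sketch treats memory as a flat list and is an assumption-laden illustration, not the claimed circuitry; `gather_rows` and `scatter_rows` are invented names.

```python
def gather_rows(memory, addresses, row_len):
    """Gather: read row_len consecutive elements starting at each
    (irregularly spaced) address, producing one matrix row per address."""
    return [memory[a:a + row_len] for a in addresses]

def scatter_rows(memory, addresses, rows):
    """Scatter: write each matrix row back to its irregularly spaced address."""
    for a, row in zip(addresses, rows):
        memory[a:a + len(row)] = row
```

The useful property is that within each row the access is contiguous; only the row starting addresses are irregular, so a single instruction can cover a whole sparse row set.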
20210406017 | METHOD FOR CONTROL-FLOW INTEGRITY PROTECTION, APPARATUS, DEVICE AND STORAGE MEDIUM - Embodiments of the present disclosure provide a method for control-flow integrity protection, including: changing preset bits of all legal target addresses of a current indirect branch instruction in a control flow of a program to be protected to be same; and rewriting preset bits of a current target address of the current indirect branch instruction to be same as the preset bits of the legal target addresses, so that the program to be protected terminates when the current target address is tampered with. By changing the preset bits of all the legal target addresses of the current indirect branch instruction to be same and rewriting the preset bits of the current target address to be consistent with the preset bits of the legal target addresses, traditional label comparison is replaced by the preset bit overlap operation, reducing performance overhead and improving attack defense efficiency. | 2021-12-30 |
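The preset-bit mechanism above can be illustrated as address masking: legal targets are laid out so a fixed group of address bits equals a chosen value, and those same bits of every computed target are overwritten before the branch. In this hypothetical Python sketch the low four bits and the value 0x8 are arbitrary choices made for illustration; the real bit positions and values are implementation-specific.

```python
MASK = 0xF   # assumed: the low 4 bits serve as the preset bits
LABEL = 0x8  # assumed: the common value given to all legal targets

def make_legal(addr):
    """Place a legal indirect-branch target so its preset bits equal LABEL."""
    return (addr & ~MASK) | LABEL

def harden_target(addr):
    """Rewrite the preset bits of a computed target before branching.
    A legal target is unchanged; a tampered address is redirected away
    from the attacker's location, so the program faults rather than
    transferring control there."""
    return (addr & ~MASK) | LABEL
```

Because the check is a mask-and-or rather than a label load and compare, it costs a couple of ALU operations per indirect branch, which is the performance argument the abstract makes.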
20210406018 | APPARATUSES, METHODS, AND SYSTEMS FOR INSTRUCTIONS FOR MOVING DATA BETWEEN TILES OF A MATRIX OPERATIONS ACCELERATOR AND VECTOR REGISTERS - Systems, methods, and apparatuses relating to one or more instructions that utilize direct paths for loading data into a tile from a vector register and/or storing data from a tile into a vector register are described. In one embodiment, a system includes a matrix operations accelerator circuit comprising a two-dimensional grid of processing elements, a plurality of registers that represents a two-dimensional matrix coupled to the two-dimensional grid of processing elements, and a coupling to a cache; and a hardware processor core comprising: a vector register, a decoder to decode a single instruction into a decoded single instruction, the single instruction including a first field that identifies the two-dimensional matrix, a second field that identifies a set of elements of the two-dimensional matrix, and a third field that identifies the vector register, and an execution circuit to execute the decoded single instruction to cause a store of the set of elements from the plurality of registers that represents the two-dimensional matrix into the vector register by a coupling of the hardware processor core to the matrix operations accelerator circuit that is separate from the coupling to the cache. | 2021-12-30 |
20210406019 | APPARATUSES, METHODS, AND SYSTEMS FOR INSTRUCTIONS FOR OPERATING SYSTEM TRANSPARENT INSTRUCTION STATE MANAGEMENT OF NEW INSTRUCTIONS FOR APPLICATION THREADS - Systems, methods, and apparatuses relating to an instruction for operating system transparent instruction state management of new instructions for application threads are described. In one embodiment, a hardware processor includes a decoder to decode a single instruction into a decoded single instruction, and an execution circuit to execute the decoded single instruction to cause a context switch from a current state to a state comprising additional state data that is not supported by an execution environment of an operating system that executes on the hardware processor. | 2021-12-30 |
20210406020 | PROCESSOR WITH HARDWARE SUPPORTED MEMORY BUFFER OVERFLOW DETECTION - A processor with fault-generating circuitry responsive to detecting that a processor write is to a stack location that is write-protected, such as a stack location used for storing a return address. | 2021-12-30 |
20210406021 | DUAL DATA STREAMS SHARING DUAL LEVEL TWO CACHE ACCESS PORTS TO MAXIMIZE BANDWIDTH UTILIZATION - A streaming engine employed in a digital data processor specifies fixed first and second read-only data streams. A corresponding stream address generator produces the addresses of data elements of the two streams. Corresponding stream head registers store the data elements next to be supplied to functional units for use as operands. The two streams share two memory ports. A toggling preference of stream to port ensures fair allocation. The arbiters permit one stream to borrow the other's interface when the other interface is idle. Thus one stream may issue two memory requests, one from each memory port, if the other stream is idle. This spreads the bandwidth demand for each stream across both interfaces, ensuring neither interface becomes a bottleneck. | 2021-12-30 |
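The toggling-preference-with-borrowing policy can be sketched as a tiny arbiter function. This Python model is an illustrative guess at the policy the abstract describes, not the actual arbiter logic: the "port A"/"port B" grant tuple and the return convention are invented for the example.

```python
def arbitrate(pref, req0, req1):
    """One arbitration cycle for two streams sharing two memory ports.
    pref: stream currently preferred (0 or 1); reqN: True if stream N
    has a request. Returns ((grant for port A, grant for port B), next pref).
    """
    if req0 and req1:
        # Both streams active: the preferred stream takes port A, the other
        # takes port B, and the preference toggles to keep allocation fair.
        return (pref, 1 - pref), 1 - pref
    if req0:
        return (0, 0), pref   # stream 0 borrows the idle stream's port
    if req1:
        return (1, 1), pref   # stream 1 borrows the idle stream's port
    return (None, None), pref
```

The borrowing cases are where the bandwidth win comes from: a lone active stream issues two requests per cycle instead of one.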
20210406022 | SYSTEM, APPARATUS AND METHOD FOR FINE-GRAIN ADDRESS SPACE SELECTION IN A PROCESSOR - In one embodiment, a processor comprises: a first configuration register to store a pointer to a process address space identifier (PASID) table; and an execution circuit coupled to the first configuration register. The execution circuit, in response to a first instruction, is to obtain command data from a first location identified in a source operand of the first instruction, obtain a PASID table handle from the command data, access a first entry of the PASID table using the pointer from the first configuration register and the PASID table handle to obtain a PASID value, insert the PASID value into the command data, and send the command data to a device coupled to the processor. Other embodiments are described and claimed. | 2021-12-30 |
20210406023 | PARALLEL SLICE PROCESSOR HAVING A RECIRCULATING LOAD-STORE QUEUE FOR FAST DEALLOCATION OF ISSUE QUEUE ENTRIES - An execution unit circuit for use in a processor core provides efficient use of area and energy by reducing the per-entry storage requirement of a load-store unit issue queue. The execution unit circuit includes a recirculation queue that stores the effective address of the load and store operations and the values to be stored by the store operations. A queue control logic controls the recirculation queue and issue queue so that after the effective address of a load or store operation has been computed, the effective address of the load operation or the store operation is written to the recirculation queue and the operation is removed from the issue queue, so that address operands and other values that were in the issue queue entry no longer require storage. When a load or store operation is rejected by the cache unit, it is subsequently reissued from the recirculation queue. | 2021-12-30 |
20210406024 | INSTRUCTION ADDRESS TRANSLATION AND INSTRUCTION PREFETCH ENGINE - Techniques for performing instruction fetch operations are provided. The techniques include determining instruction addresses for a primary branch prediction path; requesting that a level 0 translation lookaside buffer (“TLB”) caches address translations for the primary branch prediction path; determining either or both of alternate control flow path instruction addresses and lookahead control flow path instruction addresses; and requesting that either the level 0 TLB or an alternative level TLB caches address translations for either or both of the alternate control flow path instruction addresses and the lookahead control flow path instruction addresses. | 2021-12-30 |
20210406025 | METHOD OF AND SYSTEM FOR GENERATING A RANK-ORDERED INSTRUCTION SET USING A RANKING PROCESS - A system for generating rank-ordered instruction sets includes at least a computing device, wherein the at least a computing device is configured to generate a first rank-ordered list of instructions, wherein generating further comprises receiving a plurality of user objectives, determine, using a first ranking process and a plurality of objectives, a rank-ordered objective set, identify, using a first machine-learning process and the rank-ordered objective set, an instruction set including a plurality of instructions, wherein the plurality of instructions includes an instruction for addressing each objective of the plurality of objectives, generate, using a second ranking process and a first plurality of instructions, the first rank-ordered list of instructions for addressing the rank-ordered objective set, provide the rank-ordered instruction set to a user device, receive, from the user device, a plurality of user data, and generate, using the plurality of user data, a second rank-ordered list of instructions. | 2021-12-30 |
20210406026 | COALESCING ADJACENT GATHER/SCATTER OPERATIONS - According to one embodiment, a processor includes an instruction decoder to decode a first instruction to gather data elements from memory, the first instruction having a first operand specifying a first storage location and a second operand specifying a first memory address storing a plurality of data elements. The processor further includes an execution unit coupled to the instruction decoder, in response to the first instruction, to read a contiguous first and second of the data elements from a memory location based on the first memory address indicated by the second operand, and to store the first data element in a first entry of the first storage location and the second data element in a second entry of a second storage location corresponding to the first entry of the first storage location. | 2021-12-30 |
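The coalescing idea — detecting that gather indices are adjacent and replacing per-element accesses with one contiguous read — can be modeled simply. The Python sketch below is an illustration under invented names; real hardware would coalesce at cache-line or element-pair granularity rather than by list slicing.

```python
def gather(memory, base, indices):
    """Gather that coalesces: when the indices are consecutive, a single
    contiguous read (one slice) services every element; otherwise fall
    back to one access per element."""
    if indices == list(range(indices[0], indices[0] + len(indices))):
        start = base + indices[0]
        return memory[start:start + len(indices)]   # one coalesced access
    return [memory[base + i] for i in indices]      # non-adjacent fallback
```

The payoff is fewer memory transactions for the common case where a "gather" is really a strided or dense access in disguise.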
20210406027 | ADVANCED PROCESSOR ARCHITECTURE - The invention relates to a method for processing instructions out-of-order on a processor comprising an arrangement of execution units. The inventive method comprises looking up operand sources in a Register Positioning Table and setting operand input references of the instruction to be issued accordingly, checking for an Execution Unit (EXU) available for receiving a new instruction, and issuing the instruction to the available Execution Unit and entering a reference of the result register addressed by the instruction to be issued to the Execution Unit into the Register Positioning Table (RPT). | 2021-12-30 |
20210406028 | Systems and Methods for Policy Execution Processing - A system and method of processing instructions may comprise an application processing domain (APD) and a metadata processing domain (MTD). The APD may comprise an application processor executing instructions and providing related information to the MTD. The MTD may comprise a tag processing unit (TPU) having a cache of policy-based rules enforced by the MTD. The TPU may determine, based on policies being enforced and metadata tags and operands associated with the instructions, that the instructions are allowed to execute (i.e., are valid). The TPU may write, if the instructions are valid, the metadata tags to a queue. The queue may (i) receive operation output information from the application processing domain, (ii) receive, from the TPU, the metadata tags, (iii) output, responsive to receiving the metadata tags, resulting information indicative of the operation output information and the metadata tags; and (iv) permit the resulting information to be written to memory. | 2021-12-30 |
20210406029 | PROGRAMMING LANGUAGE TRIGGER MECHANISMS FOR PARALLEL ASYNCHRONOUS ENUMERATIONS - Embodiments described herein are directed to a programming language trigger mechanism. The trigger mechanism is a small piece of code that a software developer utilizes in a computer program. The trigger mechanism enables computing operations or tasks to be performed asynchronously and in a parallel fashion. In particular, logic (e.g., operations or tasks) associated with the trigger mechanism are provided to a plurality of resources for processing in parallel. Each resource asynchronously processes the task provided thereto and asynchronously provides the result. The results are asynchronously returned as an enumeration. The enumeration enables the software developer to enumerate through the returned elements as a simple stream of results as they are calculated. | 2021-12-30 |
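The pattern above — launching work in parallel and enumerating results as they complete — maps naturally onto an asynchronous generator. The sketch below uses Python's asyncio rather than the (unnamed) language of the patent, so everything here, including `parallel_enumerate` and `work`, is an illustrative analogue and not the patented trigger mechanism.

```python
import asyncio

async def parallel_enumerate(tasks):
    """Run the given coroutines in parallel and yield each result as soon
    as it completes, as a simple asynchronous stream (completion order)."""
    futures = [asyncio.ensure_future(t) for t in tasks]
    for fut in asyncio.as_completed(futures):
        yield await fut

async def work(n, delay):
    await asyncio.sleep(delay)  # stand-in for a real asynchronous task
    return n * n

async def main():
    results = []
    # Results arrive as they are calculated, not in submission order.
    async for r in parallel_enumerate([work(3, 0.05), work(2, 0.01)]):
        results.append(r)
    return results
```

The consumer sees a flat stream of results and never has to manage the underlying tasks, which is the developer-facing simplification the abstract describes.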
20210406030 | COMPUTER SYSTEM USING A PLURALITY OF SINGLE INSTRUCTION MULTIPLE DATA (SIMD) ENGINES FOR EFFICIENT MATRIX OPERATIONS - A computer system including a plurality of SIMD engines and a corresponding plurality of output register sets. Operand A register file stores one or more Operand A values, each including a plurality of operand words. Operand B register file stores one or more Operand B values, each including a plurality of operand words. Operand A distribution circuit receives an Operand A value from the Operand A register file, and selectively routes one or more of the operand words of the received Operand A value to create a plurality of input Operand A values, which are selectively routed to the SIMD engines. Operand B distribution circuit receives one or more Operand B values from the Operand B register file, and selectively routes one or more of the operand words of the Operand B value(s) to create a plurality of input Operand B values, which are selectively routed to the SIMD engines. | 2021-12-30 |
20210406031 | SIMD Operand Permutation with Selection from among Multiple Registers - Techniques are disclosed relating to operand routing among SIMD pipelines. In some embodiments, an apparatus includes a set of multiple hardware pipelines configured to execute a single-instruction multiple-data (SIMD) instruction for multiple threads in parallel, wherein the instruction specifies first and second architectural registers. In some embodiments, the pipelines include execution circuitry configured to perform operations using one or more pipeline stages of the pipeline. In some embodiments, the pipelines include routing circuitry configured to select, based on the instruction, a first input operand for the execution circuitry from among: a value from the first architectural register from thread-specific storage for another pipeline and a value from the second architectural register from thread-specific storage for a thread assigned to another pipeline. In some embodiments, the routing circuitry may support a shift and fill instruction that facilitates storage of an arbitrary portion of a graphics frame in one or more registers. | 2021-12-30 |
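The shift-and-fill selection from two registers mentioned above can be modeled as slicing the concatenation of the two register values. This Python sketch abstracts away lanes, threads, and per-pipeline storage; it only illustrates the data movement, and `shift_and_fill` is an invented name.

```python
def shift_and_fill(reg_a, reg_b, shift):
    """Model of a shift-and-fill permutation across SIMD lanes: lanes are
    shifted left by `shift`, and the vacated lanes are filled from a second
    register, as if the two registers were one double-width register."""
    combined = reg_a + reg_b
    return combined[shift:shift + len(reg_a)]
```

This is the operation that lets a sliding window over a graphics frame straddle two registers without extra moves.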
20210406032 | COMPLEX COMPUTING DEVICE, COMPLEX COMPUTING METHOD, ARTIFICIAL INTELLIGENCE CHIP AND ELECTRONIC APPARATUS - The present application discloses a complex computing device, a complex computing method, an artificial intelligence chip and an electronic apparatus, and relates to a field of artificial intelligence chips. One of the solutions includes: an input interface receives complex computing instructions and arbitrates each complex computing instruction to a corresponding computing component respectively, according to the computing types in the respective complex computing instructions; each computing component is connected to the input interface, acquires a source operand from a complex computing instruction to perform complex computing, and generates a computing result instruction to feed back to an output interface; the output interface arbitrates the computing result in each computing result instruction to the corresponding instruction source respectively, according to the instruction source identifier in each computing result instruction. | 2021-12-30 |
20210406033 | METHOD FOR RUNNING APPLETS, AND ELECTRONIC DEVICE - The embodiment of the present disclosure provides a method and an apparatus for running applets, an electronic device and a storage medium. The method includes obtaining uniform resource identifier (URI) information of the applet based on an opening request when the opening request of the applet is received, wherein a format of the URI comprises a protocol name of a target boot protocol and content corresponding to the protocol, and the target boot protocol is a universal boot protocol applied to a plurality of host applications when opening the applet; analyzing the URI information based on grammatical format description rules of the target boot protocol, to obtain a first parameter and a second parameter; obtaining an execution file package of the applet based on the first parameter; and rendering corresponding page resources in the execution file package based on the second parameter. Therefore, the opening of the applet across clients and servers may be unified with the pre-defined target boot protocol, which provides an important prerequisite for the multi-terminal operation of the applet and ensures that the applet has multi-terminal operation capabilities. | 2021-12-30 |
20210406034 | METHODS AND APPARATUS FOR BOOT TIME REDUCTION IN A PROCESSOR AND PROGRAMMABLE LOGIC DEVICE ENVIRONMENT - Methods and apparatus for boot time reduction in a processor and programmable logic device environment are disclosed. An example apparatus includes a multicore processor including a first core and a second core. A bootstrap processor is to initialize the first core into a standby mode and initialize the second core into a non-standby mode. A programmable logic device is to be programmed with instructions to be executed by the programmable logic device by the second core via a first connection initialized by the second core. The bootstrap processor is to, upon completion of the programming of the programmable logic device, initialize a data connection between the programmable logic device and the second core. | 2021-12-30 |
20210406035 | SYSTEMS AND METHODS FOR AUTOMATICALLY UPDATING COMPUTE RESOURCES - Systems and methods for automatically removing and replacing outdated compute resources in a cluster. The systems and methods include a configurable monitoring system that is configured to detect outdated compute resources and trigger a cycling process to automatically replace the detected outdated compute resources with new compute resources. The disclosed systems and methods safely rotate a group of compute resources by identifying and detaching outdated compute resources, waiting until the outdated compute resources have been drained of pending jobs scheduled on these resources, waiting until replacement compute resources have started and then cordoning, draining, deleting and terminating the outdated compute resources. | 2021-12-30 |
20210406036 | DATA PROCESSING METHOD, APPARATUS, AND MEDIUM - Aspects of the present disclosure can provide a data processing method, apparatus, and medium. The data processing method is applied to a terminal and can include obtaining data-to-be-processed through an integrated circuit on the terminal, sending the data-to-be-processed to an application processor of the terminal, and processing, by the application processor, the data-to-be-processed to generate result data. | 2021-12-30 |
20210406037 | FIELD DEVICE CONFIGURATION TOOL - A field device configuration tool includes: a user interface; a processing unit; and a memory. The processing unit controls the user interface to present information for a parameter of a field device on a display of the user interface. The user interface enables a user of the field device to input a semantic identification ID for the parameter of the field device. The processing unit maps the semantic identification ID to the parameter for the field device as a mapping. The processing unit saves the mapping between the semantic ID and the parameter in the memory as a saved mapping. | 2021-12-30 |
20210406038 | DEVICE AND METHOD FOR COMPUTER-AIDED PROCESSING OF DATA - A device and a method for computer-aided processing of data are provided. | 2021-12-30 |
20210406039 | MANAGED CONTROL PLANE SERVICE - At a managed control plane service, end-user application programming interfaces (APIs) of an application to be implemented at a provider network are determined. A set of common operational requirements of the application, to be fulfilled without obtaining program code for the requirements, are identified. In response to an invocation of an end-user API of the application, computations are performed at a resource selected by the managed control plane service, and one or more tasks to satisfy a common operational requirement are initiated by the managed control plane service. | 2021-12-30 |
20210406040 | CREATING DEFAULT LAYOUT CONSTRAINTS FOR GRAPHICAL INTERFACES - In some implementations, a method of generating a constraint-based adaptive graphical user interface (GUI) from a static GUI design includes, obtaining a static GUI that includes a plurality of views, identifying a root view and a child view of the static GUI, applying one or more constraints to the child view based on a spatial relation of the child view to borders of the root view, determining that the child view is not fully constrained, in response to determining that the child view is not fully constrained, applying one or more additional constraints to the child view based on a spatial distance between the child view and an additional view that is a neighbor of the child view, and generating the constraint-based adaptive GUI in one or more sizes that differ from a size of the static GUI based on the one or more constraints. | 2021-12-30 |
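Deriving default edge constraints from a child view's spatial relation to the root's borders, as described above, amounts to measuring the four gaps. The Python sketch below is a simplified illustration (views modeled as `(x, y, w, h)` tuples, constraint names invented); the patented method additionally falls back to neighbor-based spacing constraints when a view is still not fully constrained.

```python
def constrain(root, child):
    """Derive default edge constraints for a child view from its spatial
    relation to the root view's borders. Views are (x, y, w, h) tuples."""
    rx, ry, rw, rh = root
    cx, cy, cw, ch = child
    return {
        "leading":  cx - rx,                  # gap to root's left edge
        "top":      cy - ry,                  # gap to root's top edge
        "trailing": (rx + rw) - (cx + cw),    # gap to root's right edge
        "bottom":   (ry + rh) - (cy + ch),    # gap to root's bottom edge
    }
```

A constraint solver can then hold a subset of these gaps fixed while the root is resized, which is what turns a static mockup into an adaptive layout.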
20210406041 | Analytics Dashboards for Critical Event Management Software Systems, and Related Software - Analytics dashboards for critical event management systems that include artificial-intelligence (AI) functionalities, and related software. AI functionalities disclosed include pattern recognition and predictive modeling. One or more pattern-recognition algorithms can be used, for example, to identify patterns or other groupings within stored critical events, which can then be used to improve response performance and/or to inform the generation of predictive models. One or more predictive-modeling algorithms can be used to generate one or more predictive models that can then be used, for example, to make predictions about newly arriving critical events that can then be used, among other things, to provide optimal response performance and allow users to efficiently and effectively manage responses to critical events. These and other features are described in detail. | 2021-12-30 |
20210406042 | DESKTOP ENABLING OF WEB DOCUMENTS - Systems and methods for interacting with a web-based document using a desktop-based application, wherein the application includes a web content renderer and is configured to appear as an application native to the operating system, using the native graphical user interface for selecting a web-based document for the application to open, retrieving the contents of the document from the URL associated with the document, displaying, using the application, the contents of the retrieved document using the graphical user interface, and enabling, using the application, a user to edit the contents of the retrieved document using the graphical user interface. | 2021-12-30 |
20210406043 | TERMINAL DEVICE, SCREEN DISPLAY SYSTEM, DISPLAY METHOD, AND PROGRAM - A terminal device used by a user includes a display, a memory, and a processor configured to infer a layout of a screen optimized for the user and to display the screen with the inferred layout on the display. | 2021-12-30 |
20210406044 | DYNAMIC ACTIONABLE NOTIFICATIONS - Systems and methods for using dynamic actionable notifications are disclosed. The method includes: receiving, at a client device, a dynamic actionable notification associated with an event at a remote server, the dynamic actionable notification including one or more action items associated with the event; detecting user interaction with the dynamic actionable notification; retrieving current status of the one or more action items from the remote server; displaying one or more actionable graphical elements in a user interface of the dynamic actionable notification based on the retrieved current status of the one or more action items. | 2021-12-30 |
20210406045 | SYSTEM FOR DATA AGGREGATION AND ANALYSIS OF DATA FROM A PLURALITY OF DATA SOURCES - An interactive user interface for receiving and displaying data is described. The interactive user interface may display data sets from a plurality of external applications and/or data sources. Received data sets may be compiled to form an interactive graphical unit, also called a “card,” that may be displayed in a format based upon that of the native external application of the received data sets. Cards may be grouped with other cards. A card may include a link which allows users to access the native external application of the card to make any desired modifications or changes. | 2021-12-30 |
20210406046 | METHOD AND SYSTEM FOR ASYNCHRONOUS NOTIFICATIONS FOR USERS IN CONTEXTUAL INTERACTIVE SYSTEMS - A terminal server of a virtual assistant system for proactively triggering notifications is disclosed. The terminal server is configured to: receive data indicative of a change of a service related state associated with a user of at least one terminal client; generate accordingly a close-ended type question; instruct a transmission of the close-ended type question to the at least one terminal client; in response to a retransmission request, received from the at least one terminal client in relation to the transmission: not perform the close-ended type question, access a storage of the service related state to generate accordingly a new close-ended type question, instruct a transmission of the new close-ended type question to the at least one terminal client, analyze a closed type answer provided by the at least one terminal client, and instruct transmission of a current response to the answer provided by the user. | 2021-12-30 |
20210406047 | SYSTEM AND METHOD FOR AUTOMATIC SEGMENTATION OF DIGITAL GUIDANCE CONTENT - Provided herein are systems and methods for providing digital guidance in an underlying computer application. In one exemplary implementation, a method includes recording, in a computing device, steps of digital guidance content as the steps are created by a content author. The exemplary method also includes automatically segmenting, in the computing device, the digital guidance content as it is being created such that the digital guidance content is only associated with segments of the underlying computer application where the content is relevant. The exemplary method further includes making the digital guidance content available for playback to an end user on a computing device only when the end user is in a segment of the underlying computer application that is relevant to the digital guidance content. | 2021-12-30 |
20210406048 | BUILDING AND MANAGING COHESIVE INTERACTION FOR VIRTUAL ASSISTANTS - A method includes receiving data comprising a plurality of requests and a plurality of responses to the requests. The requests and the responses are associated with a virtual assistant programmed to address the plurality of requests. In the method, a machine learning (ML) classifier is used to partition the requests into a plurality of partitions corresponding to a plurality of request types. An interface for a user is generated to display a subset of the requests corresponding to at least one partition of the plurality of partitions and to display a response corresponding to the subset of the plurality of requests, wherein the response is based on one or more of the plurality of responses. The interface is configured to permit editing of the response by the user. The method also includes processing the response edited by the user, and transmitting the edited response to the virtual assistant. | 2021-12-30 |
20210406049 | FACILITATING MESSAGE COMPOSITION BASED ON ABSENT CONTEXT - Methods, computer systems, computer-storage media, and graphical user interfaces are provided for facilitating message composition, according to embodiments of the present invention. In one embodiment, message data associated with a message being composed is obtained. The message data is analyzed to determine a message type indicating a type of message and a message context representation representing a context provided within the message being composed. Context representations representing expected contexts associated with the message type of the message are identified. Thereafter, an absent context missing in the message being composed is determined based on a comparison of the message context representation with the set of context representations. A recommendation related to the absent context can be provided, for example, for display via a user interface. | 2021-12-30 |
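The absent-context comparison in the abstract above can be sketched as a set difference between expected and present context representations. This is a minimal illustration with invented names (`EXPECTED_CONTEXTS`, the message types, and the context labels are all assumptions, not from the filing):

```python
# Hypothetical mapping: message type -> context representations expected
# for that type. The types and labels here are invented for illustration.
EXPECTED_CONTEXTS = {
    "meeting_invite": {"time", "location", "attendees"},
    "task_request": {"deadline", "task_description"},
}

def absent_contexts(message_type, present_contexts):
    """Return the expected contexts missing from the message being composed."""
    expected = EXPECTED_CONTEXTS.get(message_type, set())
    return expected - set(present_contexts)

def recommendation(message_type, present_contexts):
    """Build a display string for the absent contexts, or None if none."""
    missing = absent_contexts(message_type, present_contexts)
    if not missing:
        return None
    return "Consider adding: " + ", ".join(sorted(missing))
```

A real system would derive the present contexts from message analysis rather than receive them as a set, but the comparison step reduces to this shape.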
20210406050 | TECHNIQUES TO DECREASE A LIVE MIGRATION TIME FOR A VIRTUAL MACHINE - Examples may include techniques to decrease a live migration time for a virtual machine (VM). Examples include selecting data to copy or not copy during a live migration of the VM from a source host server to a destination host server. | 2021-12-30 |
20210406051 | NESTED VIRTUAL MACHINE SUPPORT FOR HYPERVISORS OF ENCRYPTED STATE VIRTUAL MACHINES - A method includes creating, by a hypervisor executing on a processing device, a first virtual machine nested within a second virtual machine. The method further includes identifying a context of the second virtual machine and providing, to a context of the first virtual machine, a parent context pointer indicating the context of the second virtual machine. | 2021-12-30 |
20210406052 | FAST DISASTER RECOVERY IN THE CLOUD FOR VIRTUAL MACHINES WITH SIGNIFICANT MEDIA CONTENT - One example method includes processing a file of a VM to create a decreased quality file that is a version of the file, storing the decreased quality file at a recovery site along with metadata indicating a file type and path name for the decreased quality file, in response to a disaster recovery request, creating a partial user VM at the recovery site, and the partial user VM includes the decreased quality file, and with the partial user VM, serving the decreased quality file to a user in response to a request from the user. | 2021-12-30 |
20210406053 | RIGHTSIZING VIRTUAL MACHINE DEPLOYMENTS IN A CLOUD COMPUTING ENVIRONMENT - The present disclosure relates to systems, methods, and computer readable media for rightsizing virtual machine deployments on a cloud computing system. For example, systems disclosed herein may predict utilization of resources for a customer deployment and determine a desired goal state including a deployment of virtual machines having rightsized specifications that align more closely with the predicted utilization. Systems disclosed herein may utilize the goal state in view of the deployment data, policies, and other information to determine an action plan including deployment actions for transitioning a current state of a customer deployment to the goal state. By rightsizing virtual machine deployments, systems described herein may affect more efficient utilization of cloud computing resources and decrease costs associated with over-allocation of cloud computing resources. | 2021-12-30 |
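The goal-state selection described above can be sketched as picking the smallest catalog size that covers predicted utilization plus headroom, then emitting transition actions. The size catalog, headroom factor, and action names below are assumptions for illustration, not the patent's actual policy:

```python
# Assumed size catalog: (name, vCPUs, memory GiB). Not a real SKU list.
VM_SIZES = [
    ("small", 2, 8),
    ("medium", 4, 16),
    ("large", 8, 32),
]

def goal_size(predicted_cpu, predicted_mem_gib, headroom=1.2):
    """Smallest size whose capacity covers predicted utilization with headroom."""
    for name, cpus, mem in VM_SIZES:
        if cpus >= predicted_cpu * headroom and mem >= predicted_mem_gib * headroom:
            return name
    return VM_SIZES[-1][0]  # fall back to the largest size

def action_plan(current, predicted_cpu, predicted_mem_gib):
    """Deployment actions transitioning the current state to the goal state."""
    goal = goal_size(predicted_cpu, predicted_mem_gib)
    if goal == current:
        return []
    return [("deploy", goal), ("migrate_workload", current, goal), ("retire", current)]
```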
20210406054 | SAFE ENTROPY SOURCE FOR ENCRYPTED VIRTUAL MACHINES - Systems and methods for ensuring that data received from a virtual device is random are provided. A processing device may be used to generate, by a virtual device executing on a hypervisor, data intended for a virtual machine (VM) having a guest memory that includes one or more encrypted pages and one or more unencrypted pages. Data written to an encrypted page of the guest memory by the VM is encrypted using an encryption key assigned to the VM and information read from the encrypted page by the VM is decrypted using the encryption key. The hypervisor may write the data to the encrypted page, wherein the data is not encrypted by the encryption key assigned to the VM because it is written by the hypervisor. The VM reads the data from the encrypted page as randomized data because it cannot be properly decrypted by the encryption key. | 2021-12-30 |
20210406055 | SYSTEM, APPARATUS AND METHOD FOR ENABLING FINE-GRAIN QUALITY OF SERVICE OR RATE CONTROL FOR WORK SUBMISSIONS - In one embodiment, a processor comprises: a first configuration register to store quality of service (QoS) information for a process address space identifier (PASID) value associated with a first process; and an execution circuit coupled to the first configuration register, where the execution circuit, in response to a first instruction, is to obtain command data from a first location identified in a source operand of the first instruction, insert the QoS information and the PASID value into the command data, and send a request comprising the command data to a device coupled to the processor, to enable the device to use the QoS information of a plurality of requests to manage sharing between a plurality of processes. Other embodiments are described and claimed. | 2021-12-30 |
20210406056 | Technology For Moving Data Between Virtual Machines Without Copies - A processor comprises a core, a cache, and a ZCM manager in communication with the core and the cache. In response to an access request from a first software component, wherein the access request involves a memory address within a cache line, the ZCM manager is to (a) compare an OTAG associated with the memory address against a first ITAG for the first software component, (b) if the OTAG matches the first ITAG, complete the access request, and (c) if the OTAG does not match the first ITAG, abort the access request. Also, in response to a send request from the first software component, the ZCM manager is to change the OTAG associated with the memory address to match a second ITAG for a second software component. Other embodiments are described and claimed. | 2021-12-30 |
20210406057 | SUPPORT FOR ENCRYPTED MEMORY IN NESTED VIRTUAL MACHINES - A method includes receiving a memory access request comprising a first memory address and translating the first memory address to a second memory address using a first page table associated with the first virtual machine. The first page table indicates whether the memory of the first virtual machine is encrypted. The method further includes determining that the first virtual machine is nested within a second virtual machine and translating the second memory address to a third memory address using a second page table associated with the second virtual machine. The second page table indicates whether the memory of the second virtual machine is encrypted. | 2021-12-30 |
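The two-level translation in the abstract above can be illustrated with dict-based page tables: the first address goes through the nested VM's table, and the result goes through the enclosing VM's table, with each table carrying an encryption flag. The 4 KiB page size and table shape are assumptions for the sketch:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def translate(addr, page_table):
    """Translate through a table mapping page number -> (page number, encrypted)."""
    page, offset = divmod(addr, PAGE_SIZE)
    out_page, encrypted = page_table[page]
    return out_page * PAGE_SIZE + offset, encrypted

def nested_translate(first_addr, nested_vm_table, parent_vm_table):
    """First address -> second (nested VM table) -> third (parent VM table)."""
    second_addr, nested_encrypted = translate(first_addr, nested_vm_table)
    third_addr, parent_encrypted = translate(second_addr, parent_vm_table)
    return third_addr, nested_encrypted, parent_encrypted
```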
20210406058 | ATOMIC GROUPS FOR CONFIGURING HCI SYSTEMS - An information handling system may include at least one processor, and a non-transitory memory coupled to the at least one processor. The information handling system may be configured to execute a configuration procedure to set up a plurality of information handling resources of the information handling system, and wherein the configuration procedure includes a plurality of logical groups related to different types of configuration. Each logical group may include one or more atomic groups, each atomic group including a plurality of logically related atomic operations. In response to a failure of a particular atomic operation of a particular atomic group, the information handling system may be configured to roll back the particular atomic operation and allow the configuration procedure to be restarted at a beginning of the particular atomic group. | 2021-12-30 |
20210406059 | TRANSACTION SCHEDULING FOR A USER DATA CACHE BY ASSESSING UPDATE CRITERIA - Transaction scheduling is described for a user data cache by assessing update criteria. In one example, an event-records memory stores a list of events, each corresponding to performance of a transaction at a remote resource for a user. The memory has criteria for each event and a criterion value for each criterion-and-event combination. An event manager assesses criteria for each event by performing an operation on the stored criterion value for each criterion-and-event combination, assigning a score for each criterion-and-event combination, and compiling the assigned scores to generate a composite score for each event. The events are ordered based on the respective composite scores and executed in the ordered sequence by performing a corresponding transaction at the remote resource. Updated criterion values are stored for executed events. | 2021-12-30 |
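The score-and-compile step above reduces to a weighted sum per event followed by a sort. The weighted-sum compilation and the criterion names below are assumptions; the filing only says "performing an operation" on each criterion value:

```python
def composite_score(criteria, weights):
    """Compile per-criterion scores into one composite score for an event."""
    return sum(weights[name] * value for name, value in criteria.items())

def order_events(events, weights):
    """Order events by descending composite score for execution."""
    return sorted(
        events,
        key=lambda event: composite_score(event["criteria"], weights),
        reverse=True,
    )
```

An executor would then walk the ordered list, perform each transaction at the remote resource, and write the updated criterion values back to the event-records memory.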
20210406060 | Technology For Optimizing Hybrid Processor Utilization - A data processing system comprises a hybrid processor comprising a big TPU and a small TPU. At least one of the TPUs comprises an LP of a processing core that supports SMT. The hybrid processor further comprises hardware feedback circuitry. A machine-readable medium in the data processing system comprises instructions which, when executed, enable an OS in the data processing system to collect (a) processor topology data from the hybrid processor and (b) hardware feedback for at least one of the TPUs from the hardware feedback circuitry. The instructions also enable the OS to respond to a determination that a thread is ready to be scheduled by utilizing (a) an OP setting for the ready thread, (b) the processor topology data, and (c) the hardware feedback to make a scheduling determination for the ready thread. Other embodiments are described and claimed. | 2021-12-30 |
20210406061 | SIGNALING TIMEOUT AND COMPLETE DATA INPUTS IN CLOUD WORKFLOWS - There is included a method and apparatus comprising computer code configured to cause a processor or processors to perform obtaining an input of at least one of a task and a workflow, setting a timeout for the input of the at least one of the task and the workflow, determining whether the at least one of the task and the workflow observes a lack of data of the input for a duration equal to the timeout, determining, in response to determining that the at least one of the task and the workflow observed the lack of data of the input for the duration equal to the timeout, an unavailability of further data of the input, applying an update to the at least one of the task and the workflow based on determining the unavailability, and processing the at least one of the task and the workflow. | 2021-12-30 |
20210406062 | Application Start Method and Apparatus - An application start method includes obtaining a target application list when a size of free memory is greater than a preset threshold. The target application list is used to store one or more application identifiers of one or more applications whose memory is released. The application start method further includes starting, in the background, a process of an application identified in the target application list. | 2021-12-30 |
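The method above is a simple gate-and-restart loop: compare free memory against the preset threshold, and if it passes, background-start every application in the target list. A minimal sketch, with the threshold value and callback shape invented for illustration:

```python
PRESET_THRESHOLD_MIB = 512  # assumed value; the abstract does not give one

def start_released_apps(free_memory_mib, target_app_list, start_in_background):
    """Start (in the background) apps whose memory was previously released.

    start_in_background is a callback standing in for the OS process launcher.
    Returns the list of app identifiers that were started.
    """
    if free_memory_mib <= PRESET_THRESHOLD_MIB:
        return []  # not enough free memory; do nothing
    started = []
    for app_id in target_app_list:
        start_in_background(app_id)
        started.append(app_id)
    return started
```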
20210406063 | SYSTEM AND METHOD OF UTILIZING PLATFORM APPLICATIONS WITH INFORMATION HANDLING SYSTEMS - In one or more embodiments, one or more systems and/or one or more methods may: register a subroutine configured to store multiple network resource addresses via a volatile memory medium; for each information handling system (IHS) initialization executable of multiple IHS initialization executables: retrieve, from a non-volatile memory medium, the IHS initialization executable; execute the IHS initialization executable via an environment associated with IHS firmware; call, by the IHS initialization executable, the subroutine; and store, by the subroutine, a network resource address associated with an operating system (OS) executable via command line arguments, where the command line arguments are stored via a data structure in the volatile memory medium; and for each network resource address of the command line arguments: retrieve, based at least on the network resource address, an OS executable associated with the network resource address from another IHS via a network. | 2021-12-30 |
20210406064 | SYSTEMS AND METHODS FOR ASYNCHRONOUS JOB SCHEDULING AMONG A PLURALITY OF MANAGED INFORMATION HANDLING SYSTEMS - A method may include, at a management module configured to manage a plurality of information handling systems: receiving administrator preferences for a job to be scheduled at each of the plurality of information handling systems, based on the administrator preferences, assigning for each of the plurality of information handling systems a respective time slot for performing the job at such information handling system, in order to avoid or minimize overlap among the respective time slots, and creating for each of the plurality of information handling systems a respective job request for performing the job at such information handling system, the job request including a scheduled time for execution of the job based on the respective time slot of such information handling system. | 2021-12-30 |
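The slot assignment described above can be sketched as dealing out consecutive, non-overlapping slots inside a maintenance window, wrapping around when systems outnumber slots (the abstract asks to "avoid or minimize" overlap). The window/slot parameters and wrap-around policy are assumptions:

```python
def assign_slots(system_ids, window_start_min, slot_minutes, window_minutes):
    """Map each managed system to a scheduled start (minutes since midnight).

    Slots are handed out in order; when there are more systems than slots
    in the window, assignment wraps around, minimizing rather than
    eliminating overlap.
    """
    slots_available = window_minutes // slot_minutes
    schedule = {}
    for i, system_id in enumerate(system_ids):
        slot = i % slots_available  # wrap around when slots run out
        schedule[system_id] = window_start_min + slot * slot_minutes
    return schedule
```

Each resulting time would then be embedded in the per-system job request as its scheduled execution time.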
20210406065 | SYSTEMS AND METHODS FOR IMPROVING SCHEDULING OF TASK OFFLOADING WITHIN A VEHICLE - Systems, methods, and other embodiments described herein relate to improving scheduling of computing tasks in a mobile environment for a vehicle. In one embodiment, a method includes receiving an offloading request associated with a computing task from the vehicle, wherein the offloading request includes context information and a task descriptor related to the computing task. The method also includes scheduling the computing task to execute on a server if the context information and the task descriptor satisfy criteria for using computing resources associated with the server for the vehicle. The method also includes partitioning the computing task into subtasks if the context information satisfies the criteria. A machine learning module may decide partitions of the computing task according to the context information. The method also includes sending a scheduling signal including a scheduling message to the vehicle and the scheduling message includes scheduling information and task partition information associated with offloading the subtasks. | 2021-12-30 |
20210406066 | END-TO-END QUALITY OF SERVICE MECHANISM FOR STORAGE SYSTEM USING PRIORITIZED THREAD QUEUES - At least one processing device comprises a processor and a memory coupled to the processor. The at least one processing device is configured to associate different classes of service with respective threads of one or more applications executing on at least one of a plurality of processing cores of a storage system, to configure different sets of prioritized thread queues for respective ones of the different classes of service, to enqueue particular ones of the threads associated with particular ones of the classes of service in corresponding ones of the prioritized thread queues, and to implement different dequeuing policies for selecting particular ones of the enqueued threads from the different sets of prioritized thread queues based at least in part on the different classes of service. The at least one processing device illustratively comprises at least a subset of the plurality of processing cores of the storage system. | 2021-12-30 |
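The per-class queues and dequeuing policies above can be illustrated with one simple policy, strict priority (serve the highest non-empty class first). The class names and the choice of strict priority are assumptions; the filing covers "different dequeuing policies" generally:

```python
from collections import deque

class PrioritizedThreadQueues:
    """Per-class-of-service thread queues with a strict-priority dequeue."""

    def __init__(self, classes):
        # classes listed from highest to lowest priority
        self.order = list(classes)
        self.queues = {cos: deque() for cos in classes}

    def enqueue(self, cos, thread_id):
        """Place a thread in the queue for its class of service."""
        self.queues[cos].append(thread_id)

    def dequeue(self):
        """Strict-priority policy: pick from the highest non-empty class."""
        for cos in self.order:
            if self.queues[cos]:
                return self.queues[cos].popleft()
        return None  # nothing enqueued
```

A weighted-round-robin dequeue would be an equally valid policy under the same queue structure; the abstract's point is that the policy is selectable per set of queues.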
20210406067 | DISTRIBUTED STORAGE METHOD, ELECTRONIC APPARATUS AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - The present disclosure provides a distributed storage method, involving the technical fields of computer and cloud computing, and including: reading and sending data to an external shuffle service in response to a request of a task from a driver thread; modifying a state of the task to a waiting-for-completion state after finishing sending the data to the external shuffle service; and sending the waiting-for-completion state to the driver thread, to cause the driver thread to release an executor thread corresponding to the task. The distributed storage method can reduce the waste of the resources of the executor thread and improve the efficiency of task operations. The present disclosure also provides an electronic apparatus, and a non-transitory computer-readable storage medium. | 2021-12-30 |
20210406068 | METHOD AND SYSTEM FOR STREAM COMPUTATION BASED ON DIRECTED ACYCLIC GRAPH (DAG) INTERACTION - A stream computing method and apparatus based on directed acyclic graph (DAG) interaction is provided. A stream computing method based on DAG interaction includes the following steps: generating first DAG job stream description information according to a first DAG node graph composed of DAG nodes belonging to a first type set; converting the first DAG job stream description information into second DAG job stream description information by converting the DAG nodes belonging to the first type set into DAG nodes belonging to a second type set suitable for a FLINK engine; encapsulating the second DAG job stream description information into a DAG execution package, the DAG execution package comprising the second DAG job stream description information and an arithmetic logic of nodes associated with the second DAG job stream description information; and sending the DAG execution package to a job running cluster. | 2021-12-30 |
20210406069 | CONFIGURABLE SCHEDULER FOR GRAPH PROCESSING ON MULTI-PROCESSOR COMPUTING SYSTEMS - Systems and methods are disclosed for scheduling code in a multiprocessor system. Code is partitioned into code blocks by a compiler. The compiler schedules execution of code blocks in nodes. The nodes are connected in a directed acyclic graph with a top node, a terminal node, and a plurality of intermediate nodes. Execution of the top node is initiated by the compiler. After executing at least one instance of the top node, an instruction in the code block indicates to the scheduler to initiate at least one intermediary node. The scheduler schedules a thread for execution of the intermediary node. The data for the nodes resides in a plurality of data buffers; the index to the data buffer is stored in a command buffer. | 2021-12-30 |
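The top-to-terminal node ordering above is essentially topological scheduling on the DAG: a node becomes runnable once all its predecessors have run. A minimal sketch with an assumed `{node: [successors]}` representation (the buffer indexing via command buffers is omitted):

```python
from collections import deque

def schedule(edges, top):
    """Return an execution order for a DAG given as {node: [successors]}.

    Starts from the top node; a node is appended to the ready queue once
    every incoming edge has been consumed (Kahn's algorithm).
    """
    indegree = {top: 0}
    for node, succs in edges.items():
        indegree.setdefault(node, 0)
        for succ in succs:
            indegree[succ] = indegree.get(succ, 0) + 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for succ in edges.get(node, []):
            indegree[succ] -= 1
            if indegree[succ] == 0:
                ready.append(succ)  # all predecessors done; now runnable
    return order
```

In the patented system the "initiate" instruction inside a running code block plays the role of decrementing a successor's readiness count, with the scheduler spawning a thread per newly ready node.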
20210406070 | Managing Storage Device Compute Operations - Example storage systems, storage devices, and methods provide novel management of storage device compute operations using intermediate results, such as approximate or partial results, to optimize processing flow. An example system has a storage medium and a storage controller coupled to the storage medium that is configured to evaluate a processing capability of a storage device and determine, based on the processing capability, that only a portion of a multi-stage compute operation is completable within a requested processing timeframe. The storage processor may further be configured to determine and provide an intermediate result, which may include an approximation or a partial result of the multi-stage compute operation. The intermediate result may be used by a client to manage its own processing while it awaits a final processing result. | 2021-12-30 |
20210406071 | MANAGED INTEGRATION OF CONSTITUENT SERVICES OF MULTI-SERVICE APPLICATIONS - At a managed control plane service, constituent services and operational requirements of an application are identified. In response to an end-user request directed to the application, contents of an inter-service request are generated at a resource selected by the managed control plane service for a first constituent service, and a response to the message is generated at another resource selected for a second constituent service. Tasks to be performed for the operational requirements are initiated by the managed control plane service. | 2021-12-30 |
20210406072 | INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD - The disclosed system specifies, based on measurement results of communication times taken for accessing a plurality of external databases, a relation between the communication times taken for accessing the plurality of external databases, calculates, when accepting an instruction to execute processing using at least one of the plurality of external databases, a processing load when accessing the at least one of the external databases, based on the relation between the communication times, and controls access to data included in the at least one of the external databases according to the calculated processing load. | 2021-12-30 |