30th week of 2021 patent application highlights part 47 |
Patent application number | Title | Published |
20210232354 | PROCESS DEFERRAL SYSTEM AND METHODS USING A SERVICE PROVIDER - A processing system includes a service provider, such as a printing device, that receives requests to process tasks or jobs from service requesters. At times, the service provider will not be available for processing and will generate deferral responses to the service requesters. The deferral responses include a condition to be met before processing requests can be resent from the service requesters. After the condition is met, the processing requests are resent to the service provider to be fulfilled. | 2021-07-29 |
20210232355 | IMAGE FORMING APPARATUS, METHOD FOR CONTROLLING IMAGE FORMING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An image forming apparatus includes a storage unit storing sheets each including a wireless tag, a printer, a communication device, and a controller for controlling the printer to convey a first sheet, acquiring first strengths of signals from a first tag on the first sheet during the conveyance of the first sheet, and second strengths of signals from tags on sheets not being conveyed. Based on the acquired strengths, the controller determines a threshold value for distinguishing between sheets being conveyed and not being conveyed. The controller then controls the communication device to write information to a tag on a sheet being conveyed based on the threshold value. | 2021-07-29 |
20210232356 | PRINTING SYSTEM AND NONTRANSITORY STORAGE MEDIUM STORING PROGRAM READABLE BY MOBILE TERMINAL - A printing system includes an information processing apparatus, a mobile terminal, and a printer. The information processing apparatus is configured to transmit job identification information and address information to the mobile terminal, and transmit a printing job to the printer when receiving a transmission request for the printing job identified by the job identification information from the printer. The mobile terminal is configured to transmit the job identification information and the address information to a selected execution printer. The printer is configured to transmit a transmission request for transmitting the printing job to the information processing apparatus specified by the received address information when receiving the job identification information and the address information from the mobile terminal, and execute printing processing in response to receiving the transmission request. | 2021-07-29 |
20210232357 | MULTICOMPONENT CONTENT SYSTEM INTEGRATING PHYSICAL STRUCTURES WITH DIGITAL MEDIA - The disclosure includes a multi-media book system. An embodiment of the system can include a book system. The book system includes both an open and closed configuration. The book system includes a cover device and a book device. The book device is configured to fit inside of the cover device. Both the cover device and the book device are configured to function like a book and can include pages. These pages can be coupled to a binding or spine of the book device, separate from the cover device. The cover device includes a cover interior surface and a cover exterior surface. The book device includes a book interior surface and a book exterior surface. The book exterior surface is configured to mate with the cover interior surface via a magnetic coupling. | 2021-07-29 |
20210232358 | SYSTEMS FOR GENERATING AUDIO SIGNALS AND ASSOCIATED METHODS - The present disclosure is directed to methods, devices, and systems for playing audio signals associated with an electric vehicle. The method includes, for example, (1) determining a speed of the electric vehicle; (2) receiving, from a memory, a plurality of sound frequency characteristics corresponding to the determined speed of the electric vehicle; and (3) generating an audio signal segment corresponding to the received sound frequency characteristics by a speaker of the electric vehicle. The sound frequency characteristics include a plurality of segments. Each of the segments includes an amplitude of a number of frequency characteristics in a sound produced by a powertrain assembly (e.g., an electric motor) in a speed range. | 2021-07-29 |
20210232359 | SPATIAL MANAGEMENT OF AUDIO - The present disclosure generally relates to user interfaces for managing spatial audio. Some exemplary techniques include user interfaces for transitioning between visual elements. Some exemplary techniques include user interfaces for previewing audio. Some exemplary techniques include user interfaces for discovering music. Some exemplary techniques include user interfaces for managing headphone transparency. Some exemplary techniques include user interfaces for manipulating multiple audio streams of an audio source. | 2021-07-29 |
20210232360 | TRANSMISSION CONTROL FOR AUDIO DEVICE USING AUXILIARY SIGNALS - An apparatus and method of transmission control for an audio device. The audio device uses sources other than the microphone to determine nuisance, and uses this to calculate a gain as well as to make the transmit decision. Using the gain results in a more nuanced nuisance mitigation than using the transmit decision on its own. | 2021-07-29 |
20210232361 | CONVERSATIONAL VIRTUAL HEALTHCARE ASSISTANT - A conversation user interface enables patients to better understand their healthcare by integrating diagnosis, treatment, medication management, and payment, through a system that uses a virtual assistant to engage in conversation with the patient. The conversation user interface conveys a visual representation of a conversation between the virtual assistant and the patient. An identity of the patient, including preferences and medical records, is maintained throughout all interactions so that each aspect of this integrated system has access to the same information. The conversation user interface allows the patient to interact with the virtual assistant using natural language commands to receive information and complete tasks related to his or her healthcare. | 2021-07-29 |
20210232362 | REFINEMENT OF VOICE QUERY INTERPRETATION - A system for refinement of a voice query interpretation interprets a voice query received at a voice-enabled device to identify commands responsive to the voice query for execution at the voice-enabled device, and enables refinement of the interpretation of the voice query through a graphical user interface generated and displayed at a GUI-capable device. The graphical user interface includes a set of selectable options relating to the voice query and identifying a refinement of the interpretation of the voice query to enable control and/or adjustment of commands to be executed by the voice-enabled device. For example, if one of the selectable options is selected, then a command associated with the selected option is identified and executed by the voice-enabled device. | 2021-07-29 |
20210232363 | COGNITIVE AND INTERACTIVE SENSOR BASED SMART HOME SOLUTION - Systems and methods for smart sensors are provided. A smart sensor includes: a case; a power adapter configured to be plugged directly into an electrical outlet; a computer processor; a microphone; a speaker; a camera; at least one sensor; a control switch; a sync button; a USB port; and a memory storing: an operating system; a voice control module; a peer interaction module; a remote interaction module; and a cognitive module. In embodiments, the power adapter includes prongs that extend from a back side of the case, and the microphone, the speaker, the camera, and the at least one sensor are on a front side of the case opposite the back side of the case. | 2021-07-29 |
20210232364 | SYSTEMS AND METHODS FOR VARIABLE BANDWIDTH ANNEALING - A filter multiplexer for variable bandwidth annealing selection is described. The filter multiplexer has multiple pathways, where each pathway comprises a switch and a filter. Each filter has a different cutoff frequency from the other filters. Switches may be cryogenic switches. Each pathway may be communicatively coupled to an external annealing line. Upon receiving a problem, an annealing bandwidth can be selected, set or configured via the multiplexer to operate a quantum processor with a desired annealing schedule. The multiplexer may be used for calibration of a quantum processor by performing a calibration with a large annealing bandwidth, then calibrating the quantum processor by iterating through all available annealing bandwidths from the multiplexer. | 2021-07-29 |
20210232365 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING PROGRAM - An information processing device includes: a memory; and a processor coupled to the memory, the processor being configured to: sort stream data buffered in units of wraps of a sequential recording medium, in a column order and a time order of the stream data, as primary data to be written into a primary wrap of the sequential recording medium; and control writing of the sorted primary data into the primary wrap, wherein the sorting of the stream data is configured to sort secondary data to be written into a secondary wrap that follows the primary wrap, in a reverse order of the column order and in the time order, and wherein the controlling of the primary data is configured to control writing of the sorted secondary data into the secondary wrap. | 2021-07-29 |
20210232366 | DYNAMIC DIRECTIONAL ROUNDING - A method, computer readable medium, and system are disclosed for rounding floating point values. Dynamic directional rounding is a rounding technique for floating point operations. A floating point operation (addition, subtraction, multiplication, etc.) is performed on an operand to compute a floating point result. A sign (positive or negative) of the operand is identified. In one embodiment, the sign determines a direction in which the floating point result is rounded (towards negative or positive infinity). When used for updating parameters of a neural network during backpropagation, dynamic directional rounding ensures that rounding is performed in the direction of the gradient. | 2021-07-29 |
20210232367 | REAL TIME CONFIGURATION OF MULTIPLE TRUE RANDOM NUMBER GENERATOR SOURCES FOR OPTIMIZED ENTROPY GENERATION - A computer-implemented method for generating one or more random numbers includes configuring a mapper to feed inputs of a random number generation system using a subset of noise sources from multiple noise sources. The random number generation system generates a random number based on the inputs. The method further includes evaluating the subset of noise sources and detecting that a first noise source from the subset of noise sources has degraded in quality. The method further includes evaluating a second noise source from the available noise sources, the second noise source not being in the subset of noise sources. In response to the second noise source satisfying a predetermined threshold criterion, the first noise source is replaced with the second in the subset of noise sources for providing random bit streams to facilitate generating the random number by the random number generation system. | 2021-07-29 |
20210232368 | ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF - An electronic apparatus and a control method of the electronic apparatus is provided. The method includes acquiring source code written in a programming language, identifying a structure including a function pointer from the source code, identifying a plurality of initialized variables as a plurality of first variables among variables of the function pointer included in the identified structure, and modifying the source code by changing an indirect call using an unmodifiable variable among the plurality of first variables to a direct call. | 2021-07-29 |
20210232369 | Development Environment for Real-Time Dataflow Programming Language - A dataflow programming language can be used to express reactive dataflow programs that can be used in pattern-driven real-time data analysis. One or more tools are provided for the dataflow programming language for checking syntactic and semantic correctness, checking logical correctness, debugging, translation of source code into a secure, portable format (e.g., packaged code), translation of source code (or packaged code) into platform-specific code, batch-mode interpretation, interactive interpretation, simulation and visualization of the dataflow environment, remote execution, monitoring, or any combination of these. These tools embody a method of developing, debugging, and deploying a dataflow graph device. | 2021-07-29 |
20210232370 | CONTROL SYSTEM, CONTROLLING METHOD FOR CONTROL SYSTEM, AND PROGRAM FOR CONTROL SYSTEM - A control system including a control device and a development supporting device for developing a plurality of programming languages executed in the control device, wherein the development supporting device includes an input unit that inputs source codes of the plurality of different programming languages, a mapping information producing unit that performs mapping of shared variables selected in the source codes, respectively, and that produces shared variable mapping information, and a transmit unit that transmits source codes and shared variable mapping information to the control device, wherein the control device includes a program executing unit that executes programs described by source codes, and a shared variable processing unit that processes each of mapped shared variables as common shared variables based on shared variable mapping information. | 2021-07-29 |
20210232371 | COMPOSITION ENABLEMENT FOR PARTNER AND CUSTOMER EXTENSIBILITY OF INVERSION OF CONTROL OBJECTS - The present disclosure provides techniques for composition enablement for extensibility of a system. The techniques include delivering an interface to a first downstream provider, where the interface includes a bean implementation format. Then performing at least one of: (i) receiving a provider level (POL) selection from the first downstream provider, where the POL selection corresponds to a POL stored in an extender, and delivering a first bean implementation to the first downstream provider based on the POL, and (ii) receiving a constructed bean implementation from the downstream provider, determining a POL of the downstream provider, and storing the constructed bean implementation in the extender at the determined POL. | 2021-07-29 |
20210232372 | METHOD AND COMPUTER PROGRAM PRODUCT FOR AN UI SOFTWARE APPLICATION - A computer program product comprising computer-readable instructions that, when executed in a computer system including one or more computers, cause the computer system to generate or update a user interface of a software application, the computer program product including programmably interconnected objects, said objects including one or more model objects, one or more view objects, and one or more controller objects, wherein each model object is interconnected for data exchange with one or more view objects and/or with one or more controller objects; and each model object includes one or more sub-model objects including hierarchically structured data and representing a state of the user interface; and each view object is associated with at least one model object or at least one sub-model object and configured to generate the user interface or update the user interface in accordance with a change of the state. | 2021-07-29 |
20210232373 | Integrated System for Designing a User Interface - The present disclosure is directed to systems and methods for creating a design of a collection. For example, the method may include providing a single sign-on process over a communications network for enabling a user to access a design environment. The method may include, in response to the user being validated, accessing a user profile associated with the user. The method may include identifying, based on the user profile, a first plurality of user interface (UI) features for designing a UI. The method may include receiving a request to create a collection for designing the UI. The method may include, in response to creating the collection, receiving, from the user, a design for the collection including a selection of UI features from among the first plurality of UI features. The method may include storing the design of the collection in a repository, the design being accessible as a template. | 2021-07-29 |
20210232374 | Integrated System for Designing a User Interface - The present disclosure is directed to systems and methods for determining which UI features from the gallery of UI features to incorporate in a design environment. For example, the method may include generating a gallery of user interface (UI) features based on a machine learning model trained to analyze usage of different UI features from among a plurality of UI features to identify usage patterns of the different UI features. The method may include receiving user feedback analyzing the gallery of UI features. The method may include determining, based on a combination of the user feedback and the machine learning model, which UI features from the gallery of UI features to incorporate in a design environment. The method may include providing the determined UI features in the design environment accessed over a communications network via a single sign-on process. | 2021-07-29 |
20210232375 | Integrated System for Designing a User Interface - The present disclosure is directed to systems and methods for deploying a prototype of a user interface. For example, the method may include providing a single sign-on process over a communications network for enabling a user to access a design environment. The method may also include providing the design environment to the user for designing the UI. The method may also include deploying, via the design environment, the prototype of the UI to an instance from among a plurality of instances. Each of the plurality of instances may be associated with a different stage in a development process for designing the UI. Deploying the prototype of the UI may include transitioning the prototype of the UI from a first stage to a second stage of the development process. | 2021-07-29 |
20210232376 | VECTORIZED REPRESENTATION METHOD OF SOFTWARE SOURCE CODE - The invention provides a vectorized representation method of a software source code. The vectorized representation method is an AST-based neural network which is a hierarchical vector representation method comprising the following implementation steps: step 1-1, converting an original software source code into an AST at the lowest layer, and then further dividing the AST according to source code statements to acquire a smaller statement tree sequence, wherein statement trees in the statement tree sequence are different in sequence, and the statement tree sequence is consistent with an original statement sequence; step 1-2, encoding the statement trees into statement vectors e | 2021-07-29 |
20210232377 | ENCODING DEPENDENCIES IN CALL GRAPHS - A method for modifying a call graph may include identifying, in source code, a first call site including a first predicate and a call from a first function to a second function. The first call site may correspond to a first edge of the call graph. The first edge may connect a first node corresponding to the first function and a second node corresponding to the second function. The method may further include modifying the call graph by labelling the first edge with a first encoding of the first predicate, and identifying, in the source code, a second call site including a second predicate and a call from a third function to the first function. The method may further include in response to determining that the first predicate is unsatisfied, modifying the call graph by labelling the second edge with a second encoding of a violation of the first predicate. | 2021-07-29 |
20210232378 | PROGRAM CALLING, ELECTRONIC DEVICE, AND STORAGE MEDIUM - The present disclosure provides a program calling method. The method includes: loading dependency relationship data in binary data of a program into a runtime environment in response to calling to a first element in an execution process of the program, where the dependency relationship data includes an index key and an index value; obtaining the index value from the dependency relationship data, the index value corresponding to a second element, the first element depending on the second element; and calling the second element corresponding to the index value. | 2021-07-29 |
20210232379 | SYSTEMS AND METHODS FOR SCALABLE HIERARCHICAL POLYHEDRAL COMPILATION - A system for compiling programs for execution thereof using a hierarchical processing system having two or more levels of memory hierarchy can perform memory-level-specific optimizations, without exceeding a specified maximum compilation time. To this end, the compiler system employs a polyhedral model and limits the dimensions of a polyhedral program representation that is processed by the compiler at each level using a focalization operator that temporarily reduces one or more dimensions of the polyhedral representation. Semantic correctness is provided via a defocalization operator that can restore all polyhedral dimensions that had been temporarily removed. | 2021-07-29 |
20210232380 | METHOD AND APPARATUS FOR OPERATING A MOBILE APPLICATION STORE - A method and apparatus for operating a mobile application store. In an exemplary embodiment, the method includes executing a first mobile application store that operates in an environment of an operating system of the apparatus; receiving user input to select a mobile application in the store to run; connecting to a second mobile application store operable only in an environment of an operating system of a mobile device; obtaining user account information for authenticating a connection to the second mobile application store; requesting for the mobile application to be downloaded to the apparatus from the second mobile application store; downloading and installing the mobile application; and instructing an emulator installed on the apparatus to run the mobile application. The application selected to run is configured to run only in the environment of the operating system of the mobile device. | 2021-07-29 |
20210232381 | DELTA FILE WITH REVERSING DATA - A processing system is configured to process instructions in a delta file, received at an input of the processing system, to generate a target file from a source file and to regenerate the source file from the target file. The delta file comprises copy instructions and reversing data. The copy instructions instruct the processing system to include one or more copy strings from the source file in the target file. The reversing data is received as part of the delta file and is used to regenerate all of the source file that is outside the one or more copy strings. The processing system is configured to generate the target file from the source file by reading the copy strings from the source file and including them in the target file. The processing system is further configured to regenerate the source file from the target file by reading the copy strings from the target file and including them in the regenerated source file and using the reversing data to include, in the regenerated source file, all of the source file that is outside the one or more copy strings. | 2021-07-29 |
20210232382 | VEHICULAR SOFTWARE UPDATE APPARATUS - A vehicular software update apparatus is used in a vehicle to update software stored in an electronic control unit mounted on the vehicle. The vehicular software update apparatus includes a low power communication device configured to perform wide-area wireless communication with low power consumption. The low power communication device is operated in an update confirmation state, including a state where neither power generation in the vehicle nor power supplying to the vehicle is performed, and is caused to download software update information, which is information necessary to update the software, when the software update information is provided by a server. | 2021-07-29 |
20210232383 | VIRTUAL MACHINE UPDATE WHILE KEEPING DEVICES ATTACHED TO THE VIRTUAL MACHINE - A computing system running a host operating system and a virtual machine (VM). The computing system includes at least one device that is directly assigned to the VM. The computing system is configured to execute one or more first VM components and one or more second VM components. The one or more first VM components are configured to manage the one or more second VM components via one or more identification pointers. While the one or more second VM components remain loaded in a system memory, and the directly assigned device remains attached to the VM and remains configured to communicate with the one or more second VM components, the one or more first VM components are shut down and restored. | 2021-07-29 |
20210232384 | System and Method for Runtime Capsule Firmware Update with Low-Latency Software SMIs - Systems and methods for performing flash updates during runtime are discussed. More particularly, the amount of secure memory required to prevent tampering during the update process is limited by storing hashes of logical blocks of the update image in secure memory after initial validation while storing the update image in non-secure RAM or another non-secure memory location. Additionally, disruptions to the computing platform are limited by dividing the logical blocks into smaller progress units to minimize the amount of time spent in the secure operating environment performing the update. | 2021-07-29 |
20210232385 | Power Safe Offline Download - The present disclosure generally relates to using a single firmware slot in a slower boot media while temporarily leveraging high speed media and dual boot designs to allow booting into a cached copy of firmware to guarantee power safety while writing the single firmware slot on the slower boot media. The device boots up with original firmware stored in a first non-volatile memory device when powered on. The device then checks a second non-volatile memory device for new firmware. If there is new firmware stored in the second non-volatile memory device, the device loads the new firmware into a volatile memory device and reboots with the new firmware. The device then writes the new firmware to the firmware slot of the first non-volatile memory device. If the device experiences a power cycle while writing the new firmware, the device can reboot with a cached copy of the new firmware. | 2021-07-29 |
20210232386 | PROJECT VISUALIZATIONS - In some examples, a system represents tasks of a project as feature nodes of a force-directed graph, and connects, in the force-directed graph, sub-feature nodes representing sub-features associated by links to the feature nodes in the force-directed graph. The system sets a size of each respective sub-feature node of the sub-feature nodes based on an amount of resource usage expended on a respective sub-feature represented by the respective sub-feature node. The system causes display of the force-directed graph, and collapses or expands a portion of the force-directed graph responsive to user interaction with the force-directed graph. | 2021-07-29 |
20210232387 | LINKING COPIED CODE - Aspects of the invention include receiving, by a processor, a request to copy a code from a source file and receiving, by the processor, a request to paste the code into a destination file. Aspects also include creating, by the processor based at least in part on the request to paste the code, an entry in a database, the entry having an identification of the source file, an identification of the destination file, a location of the code in the source file, and a location of the code in the destination file. | 2021-07-29 |
20210232388 | SOFTWARE PIPELINE CONFIGURATION - In certain embodiments, a software pipeline (“pipeline”) is configured by the use of gates for progressing an application from one stage to another (e.g., from a development stage to a production stage). A configuration file having a set of attribute values that is descriptive of an application, and a gate mapping file having information associated with the gates to be invoked for different combinations of attribute values are obtained. The configuration file is processed using the gate mapping file to determine a set of gates to be invoked for progressing the application in the pipeline based on the attribute values of the application. The set of gates are invoked to cause a corresponding set of software routines to be executed for progressing the application. | 2021-07-29 |
20210232389 | METHODS AND SYSTEMS FOR GENERATING APPLICATION BUILD RECOMMENDATIONS - Methods and systems for managing an online application database and application search. Search queries for applications are received from users. Unfulfilled queries are stored in memory. The platform identifies one or more application features based on the search queries within the stored unfulfilled queries, and generates an application build recommendation specifying the one or more application features. The application build recommendation is output to one or more developer accounts. If a new application is received, the platform may determine whether the new application contains features that sufficiently correspond to the features in one of the application build recommendations. User accounts that submitted the unfulfilled queries that served as the basis for the matching application build recommendation may be notified of the availability of the new application. | 2021-07-29 |
20210232390 | MICROSERVICE DECOMPOSITION STRATEGY OF MONOLITHIC APPLICATIONS - Systems and techniques that facilitate automated recommendation of microservice decomposition strategies for monolithic applications are provided. In various embodiments, a community detection component can detect a disjoint code cluster in a monolithic application based on a code property graph characterizing the monolithic application. In various aspects, the code property graph can be based on a temporal code evolution of the monolithic application. In various embodiments, a topic modeling component can identify a functional purpose of the disjoint code cluster based on a business document corpus corresponding to the monolithic application. In various embodiments, a microservices component can recommend a microservice to replace the disjoint code cluster based on the functional purpose. | 2021-07-29 |
20210232391 | System and Method for the Delivery of Software Documentation - A system and method for simplifying the creation of documentation, and especially software documentation. A software application, referred to as a source metadata tagger and document compiler is used to add metadata to a final output document. This metadata contains identifiers that are associated with various source files. In this way, the system can easily determine which source file is being reviewed and/or flagged by the reviewer. This information can be used by the ticketing/notification system to create a work item for the appropriate developer or development group. This is vastly simpler than the current system, where human intervention is required to determine which source file is being flagged. | 2021-07-29 |
20210232392 | SOFTWARE ANALYSIS DEVICE, SOFTWARE ANALYSIS METHOD, AND SOFTWARE ANALYSIS PROGRAM - A software analysis device capable of analyzing dependency between software components more comprehensively and with higher accuracy than a conventional technology is provided. The software analysis device comprises: a first analyzing unit that statically analyzes a structure of a source code of software and analyzes dependency between objects of the software; and a second analyzing unit that executes a program indicated by the source code to acquire first information regarding an operation of the objects and analyzes dependency between the objects based on the first information. The software analysis device analyzes dependency between the objects based on an analysis result of the first analyzing unit and an analysis result of the second analyzing unit. | 2021-07-29 |
20210232394 | DATA FLOW PROCESSING METHOD AND RELATED DEVICE - The present disclosure relates to data flow processing methods and devices. One example method includes obtaining a dependency relationship and an execution sequence of operating a data flow by a plurality of processing units, generating synchronization logic based on the dependency relationship and the execution sequence, and inserting the synchronization logic into an operation pipeline of each of the plurality of processing units to generate executable code. | 2021-07-29 |
20210232395 | METHODS AND DEVICES FOR HARDWARE CHARACTERIZATION OF COMPUTING DEVICES - A machine characterization device for determining one or more machine characterization parameters of a computing device depending on a machine signature determined from sets of timing measurements associated with at least one machine characterization instruction executed by one or more processors comprised in the computing device using at least two machine configurations. A machine configuration comprises a sequence of two or more machine configuration instructions defining an order of execution of one or more instructions by the one or more processors. | 2021-07-29 |
20210232396 | APPARATUS AND METHOD FOR INHIBITING INSTRUCTION MANIPULATION - An apparatus and method are provided for inhibiting instruction manipulation. The apparatus has execution circuitry for performing data processing operations in response to a sequence of instructions from an instruction set, and decoder circuitry for decoding each instruction in the sequence in order to generate control signals for the execution circuitry. Each instruction comprises a plurality of instruction bits, and the decoder circuitry is arranged to perform a decode operation on each instruction to determine from the value of each instruction bit, and knowledge of the instruction set, the control signals to be issued to the execution circuitry in response to that instruction. An input path to the decoder circuitry comprises a set of wires over which the instruction bits of each instruction are provided. Scrambling circuitry is used to perform a scrambling function on each instruction using a secret scrambling key, such that the wire within the set of wires over which any given instruction bit is provided to the decoder circuitry is dependent on the secret scrambling key. The decode operation performed by the decoder circuitry is then adapted to incorporate a descrambling function using the secret scrambling key to reverse the effect of the scrambling function. As a result, independent of which wire any given instruction bit is provided on, the decode operation is arranged when decoding a given instruction to correctly interpret each instruction bit of that given instruction, based on knowledge of the instruction set, in order to determine from the value of each instruction bit the control signals to be issued to the execution circuitry in response to that given instruction. | 2021-07-29 |
20210232397 | MASK PATTERNS GENERATED IN MEMORY FROM SEED VECTORS - The present disclosure includes apparatuses and methods related to mask patterns generated in memory from seed vectors. An example method includes performing operations on a plurality of data units of a seed vector and generating, by performance of the operations, a vector element in a mask pattern. | 2021-07-29 |
20210232398 | SYSTEMS AND METHODS FOR MINIMIZING FREQUENCY OF GARBAGE COLLECTION BY DEDUPLICATION OF VARIABLES - An information handling system may include a processor and a program of instructions embodied in non-transitory computer-readable media and configured to, when read and executed by the processor: in response to a request to write a variable to a solid state device, store the variable to a memory location of the solid state device, the variable including variable data and a variable status indicative of a validity of the variable data, the variable status having a plurality of bits wherein each of the plurality of bits is set to an initial value; and in response to a request to modify the variable, modify the variable status by changing one of the plurality of bits from the initial value to a logical complement of the initial value to change the validity of the variable data. The validity of the variable data may be based on whether an even number or odd number of the plurality of bits are equal to the complement of the initial value. | 2021-07-29 |
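The parity-based validity scheme in 20210232398 can be sketched in a few lines. Everything below (status width, initial value, helper names) is illustrative rather than taken from the patent; the point is that flipping a single bit toggles validity, so a variable can be invalidated and re-validated with one-way bit writes only, deferring flash erase cycles and thus garbage collection.

```python
# Sketch of the parity-bit validity scheme (illustrative names/values).
# A variable's status word starts with all bits at the initial value
# (here, all 1s). Each modification flips one bit to its complement.
# The data is treated as valid when an EVEN number of bits have been
# flipped and invalid when ODD -- so toggling validity never needs an
# erase, only 1->0 writes, which NAND flash can do in place.

STATUS_BITS = 8
INITIAL_STATUS = (1 << STATUS_BITS) - 1  # 0b11111111

def flipped_count(status: int) -> int:
    """Number of bits changed from the initial value (1 -> 0)."""
    return STATUS_BITS - bin(status).count("1")

def is_valid(status: int) -> bool:
    """Valid when an even number of bits have been flipped."""
    return flipped_count(status) % 2 == 0

def mark_modified(status: int) -> int:
    """Flip the lowest still-initial bit (a single 1->0 write)."""
    for i in range(STATUS_BITS):
        if status & (1 << i):
            return status & ~(1 << i)
    raise ValueError("status word exhausted")

s = INITIAL_STATUS
assert is_valid(s)            # freshly written: valid
s = mark_modified(s)          # one flip -> odd -> invalid
assert not is_valid(s)
s = mark_modified(s)          # two flips -> even -> valid again
assert is_valid(s)
```

Each modification consumes one status bit, so the status word width bounds how many validity toggles fit before an erase is finally needed.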
20210232399 | Method, System, and Computer Program Product for Dynamically Assigning an Inference Request to a CPU or GPU - A method for dynamically assigning an inference request is disclosed. A method for dynamically assigning an inference request may include determining at least one model to process an inference request on a plurality of computing platforms, the plurality of computing platforms including at least one Central Processing Unit (CPU) and at least one Graphics Processing Unit (GPU), obtaining, with at least one processor, profile information of the at least one model, the profile information including measured characteristics of the at least one model, dynamically determining a selected computing platform from between the at least one CPU and the at least one GPU for responding to the inference request based on an optimized objective associated with a status of the computing platform and the profile information, and routing, with at least one processor, the inference request to the selected computing platform. A system and computer program product are also disclosed. | 2021-07-29 |
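The CPU-vs-GPU routing decision in 20210232399 reduces to minimizing an objective over platform status and model profile. The sketch below is a hypothetical dispatcher, not the patented method: the field names and the load-scaled-latency objective are invented for illustration.

```python
# Illustrative inference-request router: pick the platform minimizing
# profiled per-request latency scaled by current load. All keys and
# the objective itself are assumptions, not taken from the patent.

def select_platform(profile: dict, status: dict) -> str:
    """profile: measured latencies {'cpu_ms': float, 'gpu_ms': float}.
    status: current load per platform {'cpu': 0..1, 'gpu': 0..1}.
    Returns the platform with the lowest load-adjusted latency."""
    cpu_cost = profile["cpu_ms"] * (1.0 + status["cpu"])
    gpu_cost = profile["gpu_ms"] * (1.0 + status["gpu"])
    return "cpu" if cpu_cost <= gpu_cost else "gpu"

# A model that is only slightly faster on the GPU may be routed to the
# CPU when the GPU is saturated.
profile = {"cpu_ms": 2.0, "gpu_ms": 1.5}
assert select_platform(profile, {"cpu": 0.1, "gpu": 0.9}) == "cpu"
assert select_platform(profile, {"cpu": 0.9, "gpu": 0.1}) == "gpu"
```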
20210232400 | BRANCH PREDICTOR - A branch predictor provides a predicted branch instruction outcome for a current block of at least one instruction. The branch predictor comprises branch prediction tables to store branch prediction entries providing branch prediction information; lookup circuitry to perform, based on indexing information associated with the current block, a table lookup in a looked up subset of the branch prediction tables; and prediction generating circuitry to generate the predicted branch instruction outcome for the current block based on the branch prediction information in the branch prediction entries looked up in the looked up subset of branch prediction tables. The looked up subset of branch prediction tables is selected based on lookup filtering information obtained for the current block. Lookups to tables other than the looked up subset are suppressed. | 2021-07-29 |
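The lookup-filtering idea in 20210232400 can be modeled in software: a per-block filter selects which prediction tables are consulted, and lookups to the rest are suppressed. The table layout, bitmask encoding, and majority-vote combiner below are all invented for illustration.

```python
# Minimal model of branch-prediction lookup filtering (structure
# invented for illustration). filter_mask bit i set => table i is in
# the looked-up subset; other tables are never accessed, saving their
# lookup cost. The prediction is a majority vote over found entries.

def predict(tables, index, filter_mask, lookups):
    """tables: list of dicts mapping index -> taken (True) / not (False).
    lookups: list recording which tables were actually accessed."""
    votes = []
    for i, table in enumerate(tables):
        if not (filter_mask >> i) & 1:
            continue                       # lookup suppressed
        lookups.append(i)
        if index in table:
            votes.append(table[index])
    if not votes:
        return True                        # static default: predict taken
    return sum(votes) * 2 >= len(votes)    # majority vote, ties -> taken

tables = [{7: True}, {7: False}, {}, {7: True}]
accessed = []
assert predict(tables, 7, 0b1011, accessed) is True  # tables 0,1,3: 2-1 taken
assert accessed == [0, 1, 3]               # table 2 was never looked up
```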
20210232401 | PREDICATED LOOPING ON MULTI-PROCESSORS FOR SINGLE PROGRAM MULTIPLE DATA (SPMD) PROGRAMS - Single Program, Multiple Data (SPMD) parallel processing of SPMD instructions can be generated among processors assigned to a task in a plurality of threads. The SPMD parallel processing can be increased in speed by performing predicated looping with the SPMD instructions in an activated SPMD mode of operation over a non-SPMD mode. Execution of overhead instructions is removed from the SPMD instructions associated with a thread in order to only execute the loop body of a loop associated with a data element of a data set in an enhanced Zero Loop Overhead (ZOL) device. | 2021-07-29 |
20210232402 | METHOD FOR VECTORIZING HEAPSORT USING HORIZONTAL AGGREGATION SIMD INSTRUCTIONS - Techniques are provided for vectorizing Heapsort. A K-heap is used as the underlying data structure for indexing values being sorted. The K-heap is vectorized by storing values in a contiguous memory array containing a beginning-most side and end-most side. The vectorized Heapsort utilizes horizontal aggregation SIMD instructions for comparisons, shuffling, and moving data. Thus, the number of comparisons required in order to find the maximum or minimum key value within a single node of the K-heap is reduced resulting in faster retrieval operations. | 2021-07-29 |
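The contiguous K-heap layout from 20210232402 is easy to show in scalar code. In the sketch below, the hot step — finding the smallest of a node's K children, which all sit in one contiguous block — is the comparison the abstract accelerates with horizontal-aggregation SIMD instructions; here it is a plain `min()` for clarity, and the value of K is an arbitrary choice.

```python
# K-ary min-heap over a flat (contiguous) list, scalar stand-in for the
# vectorized Heapsort. The min() over a node's contiguous K-child block
# is exactly what a horizontal-min SIMD instruction would replace.

K = 4  # children per node; would match the SIMD lane count

def sift_down(heap, i):
    n = len(heap)
    while True:
        first_child = K * i + 1
        if first_child >= n:
            return
        block = heap[first_child:first_child + K]   # contiguous children
        j = first_child + block.index(min(block))   # "horizontal min"
        if heap[j] >= heap[i]:
            return
        heap[i], heap[j] = heap[j], heap[i]
        i = j

def heapsort(values):
    heap = list(values)
    for i in range(len(heap) // K, -1, -1):  # heapify bottom-up
        sift_down(heap, i)
    out = []
    while heap:
        out.append(heap[0])                  # pop the minimum
        heap[0] = heap[-1]
        heap.pop()
        if heap:
            sift_down(heap, 0)
    return out

assert heapsort([5, 1, 4, 2, 3]) == [1, 2, 3, 4, 5]
```

With K equal to the SIMD width, each sift-down level costs one vector comparison instead of K scalar ones, which is the speedup the abstract claims for retrieval operations.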
20210232403 | SYSTEM AND METHOD FOR MANAGING COMPONENT UPDATES - An asset includes physical computing resources and a physical computing resources manager. The physical computing resources manager obtains a power management update for a physical computing resource of the physical computing resources of the asset; in response to obtaining the power management update: obtains, using an out-of-band manager, a power management descriptor for the asset; updates the power management descriptor based on the power management update; stages the power management descriptor at a location; and performs a low resource consumption reboot using the location to implement the power management update. | 2021-07-29 |
20210232404 | SYSTEM AND METHOD FOR MANAGING DEVICES DURING REBOOT - An asset includes a physical computing resource. The physical computing resource is directly used by a virtual entity. The asset also includes a resource manager. The resource manager disconnects the virtual entity from the physical computing resource during a low resource consumption reboot of the asset until the low resource consumption reboot of the asset is complete. The resource manager also directly connects the virtual entity to the physical computing resource after the low resource consumption reboot of the asset. | 2021-07-29 |
20210232405 | CIRCUIT AND REGISTER TO PREVENT EXECUTABLE CODE ACCESS - Certain aspects provide a computing system including a first CPU configured to load executable code for the computing system. The computing system further includes a first memory configured to store the executable code at an address range of the first memory. The first memory is local to a second CPU. The computing system further includes a one-time programmable or read-only register configured to store an indication of the address range. The computing system further includes a circuit configured to: determine if a first memory address associated with a first memory access command is in the address range based on the indication of the address range stored in the register; when the first memory address is in the address range, refrain from sending the first command to the first memory; and when the first memory address is not in the address range, send the first command to the first memory. | 2021-07-29 |
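The guard logic in 20210232405 is a simple address-range filter, modeled below in software. The range value, half-open interval convention, and function names are illustrative stand-ins for the one-time-programmable register and circuit.

```python
# Toy model of the executable-code guard: a read-only register holds
# the protected address range; the filter drops any access whose
# address falls inside it and forwards everything else to memory.
# Range and names are illustrative.

PROTECTED = (0x1000, 0x2000)  # [start, end) held in the OTP/RO register

def filter_access(addr: int, memory_log: list) -> bool:
    """Return True if the access was forwarded to memory."""
    start, end = PROTECTED
    if start <= addr < end:
        return False              # refrain from sending the command
    memory_log.append(addr)       # send the command to the memory
    return True

log = []
assert filter_access(0x0FFF, log)       # below the range: forwarded
assert not filter_access(0x1800, log)   # inside the range: blocked
assert filter_access(0x2000, log)       # at/after end: forwarded
assert log == [0x0FFF, 0x2000]
```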
20210232406 | METHOD AND SYSTEM FOR BOOT TIME OPTIMIZATION OF EMBEDDED MULTIPROCESSOR SYSTEMS - An embedded multiprocessor system is provided that includes a multiprocessor system on a chip (SOC), a memory coupled to the multiprocessor SOC, the memory storing application software partitioned into an initial boot stage and at least one additional boot stage, and a secondary boot loader configured to boot load the initial boot stage on at least one processor of the multiprocessor SOC, wherein the initial boot stage begins executing and flow of data from the initial boot stage to the at least one additional boot stage is disabled, wherein the application software is configured to boot load a second boot stage of the at least one additional boot stage on at least one other processor of the multiprocessor SOC and to enable flow of data between the initial boot stage and the second boot stage. | 2021-07-29 |
20210232407 | METHOD AND SYSTEM FOR COMPRESSING APPLICATION DATA FOR OPERATIONS ON MULTI-CORE SYSTEMS - A system and method to compress application control data, such as weights for a layer of a convolutional neural network, is disclosed. A multi-core system for executing at least one layer of the convolutional neural network includes a storage device storing a compressed weight matrix of a set of weights of the at least one layer of the convolutional network and a decompression matrix. The compressed weight matrix is formed by matrix factorization and quantization of a floating point value of each weight to a floating point format. A decompression module is operable to obtain an approximation of the weight values by decompressing the compressed weight matrix through the decompression matrix. A plurality of cores executes the at least one layer of the convolutional neural network with the approximation of weight values to produce an inference output. | 2021-07-29 |
20210232408 | Simulated Visual Hierarchy While Facilitating Cross-Extension Communication - To provide a hierarchical visual paradigm while maintaining the communication advantages of sibling extensions, a visual hierarchy simulation extension generates and maintains placeholders in a visually hierarchical manner, with the visual positioning of such placeholders informing the visual positioning of overlays of frames hosting the visual output of sibling extensions. Such a visual hierarchy simulation extension is utilized to layout and establish a desired visual hierarchy. One or more modules of computer-executable instructions are invoked to provide the relevant functionality, including the obtaining of the visual positioning of placeholders, the relevant visual translation between the visual positioning of placeholders and the visual overlaying of corresponding frames, the generation and movement of the corresponding frames, and the instantiation of extension content within the corresponding frames. The visual hierarchy simulation extension is hosted independently from the one or more modules. | 2021-07-29 |
20210232409 | DYNAMIC RESIZING OF A PORTION OF A VIRTUAL DEVICE USER INTERFACE - In certain embodiments, a change to a display resolution (or other display configuration) to be used at a physical device may be effectuated without the need to reboot a virtual device associated with the physical device. In some embodiments, a display resolution for a portion of a virtual device user interface of a virtual device is determined based on display configuration information corresponding to a first physical device (e.g., a display resolution of the first physical device). The portion of the virtual device user interface is configured based on the determined display resolution, and the configured portion is sent to the first physical device. In some embodiments, in response to obtaining second display configuration information from a second physical device, the portion of the virtual device user interface is resized (e.g., without rebooting the virtual device), and the resized portion is sent to the second physical device. | 2021-07-29 |
20210232410 | SYSTEMS AND METHODS FOR EVALUATING AND UPDATING DEPRECATED PRODUCTS - Methods and systems provide tools for evaluating the impact of component deprecations within a datacenter formed from a plurality of IHSs. Upon receiving a notification of a deprecated component, configuration files that invoke the deprecated component and are in use within the datacenter are identified. Estimates are generated for the resources that would be required to replace references to the deprecated component within the identified configuration files. Estimates may be generated based on compilation errors, test suite failures and historical error repair data. A tree is generated of the dependencies on the deprecated component within the identified configuration files. Based on characteristics of the dependency tree and also based on the resource estimates for replacing references to the deprecated component, a risk level is generated for the deprecated component. The risk level may be generated for individual IHSs, groups of IHSs, or an entire datacenter. | 2021-07-29 |
20210232411 | SECURE CONFIGURATION CORRECTIONS USING ARTIFICIAL INTELLIGENCE - Methods and systems for detecting and responding to erroneous application configurations are presented. In one embodiment, a method is provided that includes receiving a configuration for an application and receiving execution metrics for the application. The configuration and the execution metrics may be compared to a knowledge base of reference configurations and reference execution metrics and a particular reference configuration may be identified from the knowledge base that corresponds to the configuration. The particular reference configuration may represent an erroneous configuration of the application that needs to be corrected. A configuration correction may then be identified based on the particular reference configuration. | 2021-07-29 |
20210232412 | TOUCHED HOME - The subject disclosure relates to automatically configuring a device based on a set of policies. In an aspect, disclosed is a system comprising an identification component that identifies a device requiring setup based on a set of identification data. In another aspect, the system includes a comparison component that compares the set of identification data to a set of reference data stored in a reference database. In yet another aspect, the system can include a transmission component that transmits a set of policy data to the device based on a subset of reference data determined to correspond to the device. | 2021-07-29 |
20210232413 | THIRD PARTY EXECUTABLE ASSET BUNDLE DEPLOYMENT - Technologies are provided for generating executable asset bundles using a plug-in module loaded in an integrated development environment (IDE). The IDE can be used to create and edit source code assets and three-dimensional (3D) model assets that can be compiled into an executable program. The plug-in module can be used to generate an executable asset bundle based on a subset of the source code assets. Optionally, the executable asset bundle can include a subset of the 3D model assets. The IDE can be used to generate an executable program based on the remaining source code assets and 3D model assets. The executable program and the executable asset bundle can be distributed separately. The executable program can be executed by a client computing device and used to load the executable asset bundle on the client device. Loading the executable asset bundle can comprise downloading it from a remote server. | 2021-07-29 |
20210232414 | AGENT DEVICE, AGENT SYSTEM, AND RECORDING MEDIUM - An agent device receives input information that is input by the user, in a case in which the input information is a question from the user, executes inference processing on the input information to infer an intent of the question in order to acquire a response to the question based on the intent, in a case in which a plurality of the responses are acquired, provides the notification device with option information that includes the plurality of responses as options, in a case in which new input information is received, determines whether the new input information is information requiring the inference processing or is selection information relating to a selection result from selection of the options, and in a case in which the new input information is the selection information, provides the notification device with response information regarding the response associated with the selection result without executing the inference processing. | 2021-07-29 |
20210232415 | STATEFUL VIRTUAL COMPUTE SYSTEM - A system for providing a stateful virtual compute system is provided. The system may be configured to maintain a plurality of virtual machine instances. The system may be further configured to receive a request to execute a program code and select a virtual machine instance to execute the program code on the selected virtual machine instance. The system may further associate the selected virtual machine instance with shared resources and allow program codes executed in the selected virtual machine instance to access the shared resources. | 2021-07-29 |
20210232416 | EXTENSION APPLICATION MECHANISMS THROUGH INTRA-PROCESS OPERATION SYSTEMS - The present disclosure relates to computer-implemented methods, software, and systems for providing extension application mechanisms. Memory is allocated for a virtual environment to run in an address space of an application that is to be extended with extension logic in a secure manner. The virtual environment is configured for execution of commands related to an extension functionality of the application. A virtual processor for an execution of a command of the commands is initialized at the virtual environment. The virtual processor is operable to manage one or more guest operating systems (OS). A first guest OS is loaded at the allocated memory and application logic of the extension functionality is copied into the allocated memory. The virtual environment is started to execute the first guest OS and the application logic of the extension functionality in relation to associated data of the application in the allocated memory. | 2021-07-29 |
20210232417 | PACKET HANDLING BASED ON MULTIPROCESSOR ARCHITECTURE CONFIGURATION - Example methods and systems for packet handling based on a multiprocessor architecture configuration are provided. One example method may comprise: in response to receiving a first ingress packet that requires processing by a first virtual central processing unit (VCPU) running on the first node, steering the first ingress packet towards a first receive (RX) queue and performing local memory access on the first node to access the first ingress packet from the first RX queue. The method may also comprise: in response to receiving a second ingress packet that requires processing by a second VCPU running on the second node, steering the second ingress packet towards a second RX queue and performing local memory access on the second node to access the second ingress packet from the second RX queue. | 2021-07-29 |
20210232418 | GLOBAL CACHE FOR CONTAINER IMAGES IN A CLUSTERED CONTAINER HOST SYSTEM - Container images are managed in a clustered container host system with a shared storage device. Hosts of the system include a virtualization software layer that supports execution of virtual machines (VMs) in the hosts, and one or more VMs have implemented therein a container engine that supports execution of containers within the respective VMs. Deploying a container in a first VM includes creating a virtual disk in the storage device, storing a container image in the virtual disk, mounting the virtual disk to the first VM, and updating a metadata cache to associate the container image to the virtual disk. Deploying the container in a second VM executed in a host different from a host in which the first VM is executed, includes checking the metadata cache to determine that the container image is stored in the virtual disk, and mounting the virtual disk to the second VM. | 2021-07-29 |
20210232419 | CANARY PROCESS FOR GRACEFUL WORKLOAD EVICTION - Memory shortage is detected in a clustered container host system so that workloads can be shut down gracefully. A method of managing memory in a virtual machine (VM) in which containers are executed, includes the steps of: monitoring a dummy process that runs in the VM concurrently with the containers, the dummy process being configured to be terminated by an operating system of the VM under a low memory condition before any other processes running in the VM; upon detecting that the dummy process has been terminated, selecting one of the containers to be terminated; and terminating processes of the selected container. | 2021-07-29 |
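The canary ("dummy") process pattern in 20210232419 can be sketched in userspace. In the sketch below, the idle child, the polling loop, and the evict-last-container policy are all assumptions for illustration; on Linux, the real mechanism for making the kernel's OOM killer reap the canary first would be raising its `oom_score_adj`, which is omitted here.

```python
# Userspace sketch of the canary-process idea: spawn a throwaway child
# that the OOM killer would reap first, poll it, and treat its death as
# the low-memory signal that triggers graceful eviction of a workload.
# Names and the eviction policy are illustrative.
import subprocess

def spawn_canary() -> subprocess.Popen:
    # An idle child doing nothing; in the real system its OOM priority
    # would be raised so the kernel kills it before any container.
    return subprocess.Popen(["sleep", "3600"])

def check_canary(canary, containers, evict):
    """If the canary has been reaped, gracefully evict one container
    (policy here: the most recently started) and respawn the canary."""
    if canary.poll() is None:
        return canary                  # still alive: memory is fine
    evict(containers.pop())            # graceful shutdown of a workload
    return spawn_canary()

containers = ["db", "cache", "batch-job"]
evicted = []
canary = spawn_canary()
canary.terminate(); canary.wait()      # simulate the OOM killer
canary = check_canary(canary, containers, evicted.append)
assert evicted == ["batch-job"]
assert containers == ["db", "cache"]
canary.terminate(); canary.wait()      # clean up the respawned canary
```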
20210232420 | RESTORING THE STATE OF PAUSED VIRTUAL MACHINE ENVIRONMENTS WITH EXTERNAL ATTACHED VOLUMES - A system receives a pause request to pause a virtual environment that includes one or more virtual machines, each respective virtual machine having a mounting point connected to at least one corresponding block level storage volume. The system builds a model and a dependency graph of one or more components in the virtual environment. The system stores the model, the dependency graph and tags a snapshot of each corresponding block level storage volume. The system stops the one or more components in accordance with dependency logic of the dependency graph and stops the one or more virtual machines. The system builds the virtual environment and restarts the virtual machines in response to a resume request. | 2021-07-29 |
20210232421 | IMPLEMENTING ERASURE CODING WITH PERSISTENT MEMORY - A computer-implemented method according to one embodiment includes receiving a request to perform a transaction in persistent memory at a first node; implementing the transaction within a volatile transaction cache at the first node; determining parity data for the transaction at the first node; sending the parity data from the first node to a parity node; and transferring results of the transaction from the volatile transaction cache to the persistent memory at the first node. | 2021-07-29 |
20210232422 | METHOD AND APPARATUS FOR PREDICTING AND SCHEDULING COPY INSTRUCTION FOR SOFTWARE PIPELINED LOOPS - A method for scheduling instructions for execution on a computer system includes scanning a plurality of loop instructions that are modulo scheduled to identify a first instruction and a second instruction that both utilize a register of the computer system upon execution of the plurality of instructions. The loop has a first initiation interval. The first instruction defines a first value of the register in a first iteration of the loop and the second instruction redefines the value of the register to a second value in a subsequent iteration of the loop prior to a use of the first value in the first iteration of the loop. A copy instruction is inserted in the loop instructions to copy the first value prior to execution of the second instruction. A schedule is determined after the insertion of the one or more copy instructions, giving a second initiation interval. | 2021-07-29 |
20210232423 | PROCESS DEFERRAL SYSTEM AND METHODS USING TOKENS - A processing system includes a service provider, such as a printing device, that receives requests to process tasks or jobs from service requesters. At times, the service provider will not be available for processing and will generate deferral responses to the service requesters. The deferral responses include a condition to be met before processing requests can be resent from the service requesters. After the condition is met, the processing requests are resent to the service provider to be fulfilled. | 2021-07-29 |
20210232424 | PROCESS DEFERRAL SYSTEM AND METHODS USING A SERVICE REQUESTER - A processing system includes a service provider, such as a printing device, that receives requests to process tasks or jobs from service requesters. At times, the service provider will not be available for processing and will generate deferral responses to the service requesters. The deferral responses include a condition to be met before processing requests can be resent from the service requesters. After the condition is met, the processing requests are resent to the service provider to be fulfilled. | 2021-07-29 |
20210232425 | PROCESS PRIORITIZATION FOR INFORMATION HANDLING SYSTEMS - An information handling system may determine that a first process of a list of processes is a top-ranked process and may adjust one or more settings of the information handling system associated with the first process. The information handling system may monitor performance parameters of the information handling system following the adjustment of the settings. Based on monitoring the performance parameters, the information handling system may determine that a performance score of the information handling system is below a threshold performance score and may reduce a ranking of the first process based on the determination. The ranking of the first process may be reduced such that a second process becomes a new top-ranked process. The information handling system may then adjust one or more settings associated with the second process. | 2021-07-29 |
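The demote-and-retune loop in 20210232425 is straightforward to model. In the sketch below, the threshold, score values, and rotate-to-the-back demotion policy are invented; the abstract only says the top-ranked process is demoted when the performance score falls below a threshold.

```python
# Sketch of the ranked-process tuning loop (threshold and policy are
# made up). Settings are applied for the current top-ranked process;
# if the measured performance score then drops below the threshold,
# that process is demoted so the runner-up becomes the new top-ranked
# process and the system is retuned for it.

SCORE_THRESHOLD = 0.7

def retune(ranking, measure, boost):
    """ranking: process names, highest priority first (mutated in place).
    measure(p): performance score observed with p's settings applied.
    boost(p): apply the system settings associated with process p."""
    top = ranking[0]
    boost(top)
    if measure(top) < SCORE_THRESHOLD:
        ranking.append(ranking.pop(0))   # demote the top process
        boost(ranking[0])                # tune for the new leader
    return ranking[0]

applied = []
scores = {"video-encode": 0.4, "browser": 0.9}
top = retune(["video-encode", "browser"], scores.get, applied.append)
assert top == "browser"                  # low score demoted the encoder
assert applied == ["video-encode", "browser"]
```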
20210232426 | APPARATUS, METHOD, AND SYSTEM FOR ENSURING QUALITY OF SERVICE FOR MULTI-THREADING PROCESSOR CORES - A simultaneous multi-threading (SMT) processor core capable of thread-based biasing with respect to execution resources. The SMT processor includes priority controller circuitry to determine a thread priority value for each of a plurality of threads to be executed by the SMT processor core and to generate a priority vector comprising the thread priority value of each of the plurality of threads. The SMT processor further includes thread selector circuitry to make execution cycle assignments of a pipeline by assigning to each of the plurality of threads a portion of the pipeline's execution cycles based on each thread's priority value in the priority vector. The thread selector circuitry is further to select, from the plurality of threads, tasks to be processed by the pipeline based on the execution cycle assignments. | 2021-07-29 |
20210232427 | MANAGING THROUGHPUT FAIRNESS AND QUALITY OF SERVICE IN FILE SYSTEMS - Embodiments are directed to managing file systems over a network. Jobs may be provided to a storage computer in a file system. Control models may be associated with the jobs. Scores may be generated based on the control models. Each job may be associated with a score provided by its associated control model. And, each job that may be behind its corresponding schedule may be associated with a higher score value than each other job that may be either on its corresponding other schedule or ahead of its corresponding other schedule. Commands may be selected for execution based on the commands being associated with a job that may be associated with the higher score value that may be greater than score values associated with other jobs. The jobs may be ranked based on the updated scores. Subsequent commands may be selected and executed based on the ranking of the jobs. | 2021-07-29 |
20210232428 | Systems and Methods for Dynamic Load Distribution in a Multi-Tier Distributed Platform - A controller provides dynamic load distribution in a multi-tier distributed platform. The controller may receive a request at a first Point-of-Presence (“PoP”) with a first set of resources. The first PoP may be part of a distributed platform with several distributed PoPs at different network locations. The controller may classify the requested task with a priority, may determine resource availability, and may dynamically distribute the request by (i) providing the request to the first set of resources in response to classifying the task with a high first priority, and determining the availability of the first set of resources to be less than a threshold, and (ii) providing the request to a second PoP in response to classifying the task with a lower second priority, and determining the availability of the first set of resources to be less than the threshold. | 2021-07-29 |
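The two-branch dispatch rule in 20210232428 maps directly to a small function. The priority labels and availability threshold below are invented for illustration; the structure — keep high-priority work local even under scarcity, shed low-priority work to a peer PoP — follows the abstract.

```python
# Toy version of the PoP dispatch rule (labels and threshold invented).
# A high-priority task stays on the first PoP even when its resources
# are scarce; a low-priority task is shed to a second PoP in that case.

AVAILABILITY_THRESHOLD = 0.3

def dispatch(priority: str, first_pop_availability: float) -> str:
    if first_pop_availability >= AVAILABILITY_THRESHOLD:
        return "first-pop"                 # plenty of room: serve locally
    # Resources below threshold: keep only high-priority work local.
    return "first-pop" if priority == "high" else "second-pop"

assert dispatch("high", 0.1) == "first-pop"    # scarce, but high priority
assert dispatch("low", 0.1) == "second-pop"    # scarce: shed to peer PoP
assert dispatch("low", 0.8) == "first-pop"     # ample: serve locally
```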
20210232429 | METHOD, APPARATUS, AND TERMINAL FOR ACCELERATING COLD STARTUP OF AN APPLICATION - This application provides a method and an apparatus for accelerating cold startup of an application. The method includes, after identifying an event that instructs an operating system of a terminal to cold start up an application, obtaining, from a plurality of dimensions, current status information related to the cold startup of the application, where the current status information includes a hardware configuration of the terminal, current load of the operating system of the terminal, resource overheads for cold starting up the application, and duration corresponding to each of a plurality of tasks in a process of cold starting up the application. The method also includes determining, by analyzing the current status information, a plurality of objects that need to be optimized in the current process of cold starting up the application; and then obtaining, based on the determined objects. | 2021-07-29 |
20210232430 | PROCESSOR ZERO OVERHEAD TASK SCHEDULING - A method for scheduling tasks on a processor includes detecting, in a task selection device communicatively coupled to the processor, a condition of each of a plurality of components of a computer system comprising the processor, determining a plurality of tasks that can be next executed on the processor based on the condition of each of the plurality of components, transmitting a signal to an arbiter of the task selection device that the plurality of tasks can be executed, determining, at the arbiter, a next task to be executed on the processor, storing, by the task selection device, the entry point address of the next task to be executed on the processor, and transferring, by the processor, execution to the stored entry point address of the next task to be executed. | 2021-07-29 |
20210232431 | AUTOMATED OPERATING SYSTEM PATCHING USING AUTO SCALING GROUP AND PERSISTENT VOLUMES - Systems and methods for updating an Operating System (OS) using cloud-based resources are described. A server computing system enables an auto-scaling group (ASG) to launch one or more instances based on a first machine image. The first machine image is associated with a first Operating System (OS). The ASG is associated with a stateful service and configured with a resource tag having a value similar to a value assigned to the stateful service. The computer system receives a second machine image associated with a second OS generated based on the first OS. The computer system enables the ASG to terminate the one or more instances launched based on the first machine image and to launch one or more instances based on the second machine image. The instances launched based on the first machine image and based on the second machine image are associated with persistent volumes. | 2021-07-29 |
20210232432 | RESERVATION-BASED HIGH-PERFORMANCE COMPUTING SYSTEM AND METHOD - A method includes communicatively coupling a shared computing resource to core computing resources associated with a first project. The core computing resources associated with the first project are configured to use the shared computing resource to perform data processing operations associated with the first project. The method also includes reassigning the shared computing resource to a second project by (i) powering down the shared computing resource, (ii) disconnecting the shared computing resource from the core computing resources associated with the first project, (iii) communicatively coupling the shared computing resource to core computing resources associated with the second project, and (iv) powering up the shared computing resource. The core computing resources associated with the second project are configured to use the shared computing resource to perform data processing operations associated with the second project. The shared computing resource lacks non-volatile memory to store data related to the first and second projects. | 2021-07-29 |
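The four reassignment steps (power down, disconnect, reconnect, power up) can be sketched as a simple procedure. This is an illustrative abstraction only, not the patented system; the sets standing in for "core computing resources" and the log entries are hypothetical:

```python
# Illustrative sketch of reassigning a shared computing resource:
# (i) power down, (ii) disconnect from the first project's resources,
# (iii) couple to the second project's resources, (iv) power up.

def reassign(shared, from_project, to_project, log):
    log.append(f"power_down:{shared}")   # (i)
    from_project.remove(shared)          # (ii) disconnect from project 1
    to_project.add(shared)               # (iii) couple to project 2
    log.append(f"power_up:{shared}")     # (iv)

log = []
proj_a, proj_b = {"gpu0"}, set()
reassign("gpu0", proj_a, proj_b, log)
```

Because the shared resource holds no non-volatile state (per the abstract), the power cycle between projects leaves no first-project data behind.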
20210232433 | OPERATIONS COST AWARE RESOURCE ALLOCATION OPTIMIZATION SYSTEMS AND METHODS - Optimization systems and methods to generate optimized resource allocations are disclosed. To generate an optimized resource allocation, a system or method accesses a relationship model defining a statistical relationship between operations data values and performance data values. The relationship model also includes a confidence score indicating a degree of confidence in the statistical relationship between the operations data values and the performance data values for each of a plurality of the operations data values. Using the relationship model, an operations value is selected to achieve an optimal value of the performance data values. The selected operations value is selected from a set of operations data values for which the corresponding confidence score exceeds a specified threshold, and the optimal value of the performance data represents an optimum of the performance data values that are mapped, by the relationship model, to the operations data values in the set. | 2021-07-29 |
20210232434 | SYSTEMS AND METHODS FOR DETERMINING TARGET ALLOCATION PARAMETERS FOR INITIATING TARGETED COMMUNICATIONS IN COMPLEX COMPUTING NETWORKS - This disclosure is directed to systems and methods for determining target allocation parameters for initiating targeted communications in complex computing networks, which may be associated with the allocation of allocatables in execution events over a period of time. The systems and methods may include receiving a desired allocation; determining a first available allocation at a first time; generating allocation information for a second period comprising the first time; determining a second available allocation at a second time; determining a remaining available allocation, based on the allocation information and the second available allocation; and determining one or more target allocation parameters for initiating a targeted communication to a computing device after the second time. | 2021-07-29 |
20210232435 | TILE SUBSYSTEM AND METHOD FOR AUTOMATED DATA FLOW AND DATA PROCESSING WITHIN AN INTEGRATED CIRCUIT ARCHITECTURE - A system and method for a computing tile of a multi-tiled integrated circuit includes a plurality of distinct tile computing circuits, wherein each of the plurality of distinct tile computing circuits is configured to receive fixed-length instructions; a token-informed task scheduler that: tracks one or more of a plurality of distinct tokens emitted by one or more of the plurality of distinct tile computing circuits; and selects a distinct computation task of a plurality of distinct computation tasks based on the tracking; and a work queue buffer that: contains a plurality of distinct fixed-length instructions, wherein each one of the fixed-length instructions is associated with one of the plurality of distinct computation tasks; and transmits one of the plurality of distinct fixed-length instructions to one or more of the plurality of distinct tile computing circuits based on the selection of the distinct computation task by the token-informed task scheduler. | 2021-07-29 |
20210232436 | MULTI-STAGE IOPS ALLOCATION - Systems and methods for policy-based apportionment of input/output operations (IOPS) in computing systems. Embodiments access a policy that specifies IOPS limits. Two or more virtual machines that are associated with the policy and two or more nodes that host those virtual machines are identified. In a first allocation stage, an inter-node policy manager prescribes an initial IOPS limit to the two or more nodes. The allocation amounts sent to the nodes depend at least in part on performance capabilities of respective nodes. In a second allocation stage, for each node that had received a limit amount, that amount is apportioned to the sets of virtual machines that execute on respective host nodes. Each node of the two or more nodes invokes its own node-local IOPS monitoring. Each node reports IOPS usage data to the inter-node policy manager, which in turn adjusts the node-level IOPS apportionments based on the node-level usage. | 2021-07-29 |
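The two allocation stages in this abstract can be illustrated numerically. This sketch is not the patented mechanism; the capability weights and the even per-VM split in stage two are simplifying assumptions, and all names are hypothetical:

```python
# Illustrative two-stage IOPS apportionment: a policy limit is first
# split across nodes in proportion to node performance capability, then
# each node's share is split across its locally hosted VMs.

def stage1_node_limits(policy_limit, node_capabilities):
    """Stage 1: inter-node split, weighted by node capability."""
    total = sum(node_capabilities.values())
    return {n: policy_limit * cap // total
            for n, cap in node_capabilities.items()}

def stage2_vm_limits(node_limit, vm_names):
    """Stage 2: node-local split across the VMs on that node."""
    share = node_limit // len(vm_names)
    return {vm: share for vm in vm_names}

node_limits = stage1_node_limits(10_000, {"node_a": 3, "node_b": 1})
vm_limits_a = stage2_vm_limits(node_limits["node_a"], ["vm1", "vm2", "vm3"])
```

In the full scheme the abstract describes, each node would also report IOPS usage back so the inter-node policy manager can adjust these node-level amounts over time.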
20210232437 | DATA PROCESSING METHOD AND APPARATUS, AND COMPUTING DEVICE - This disclosure provides a data processing method, including: receiving, by a first computing device, a first packet sent by a second computing device, where the first computing device is configured to assist the second computing device in performing service processing, the first computing device is a computing device in a heterogeneous resource pool, the first computing device communicates with the second computing device through a network, the heterogeneous resource pool includes at least one first computing device, and the first packet includes an instruction used to request the first computing device to process to-be-processed data; processing, by the first computing device, the to-be-processed data based on the instruction; and sending, by the first computing device, a second packet to the second computing device, where the second packet includes a processing result of the to-be-processed data. | 2021-07-29 |
20210232438 | SERVERLESS LIFECYCLE MANAGEMENT DISPATCHER - A method, in a serverless life-cycle management (LCM) dispatcher, and an associated serverless LCM dispatcher for implementing a workload in a virtualization network. The method comprises receiving a workload trigger comprising an indication of a first workload, obtaining a description of the first workload from a workload description database based on the indication of the first workload, and categorising, based on the description and the workload trigger, the first workload as either a non-LCM workload capable of being implemented with no LCM routines, or an LCM workload capable of being implemented using LCM routines. Furthermore, responsive to categorising the first workload as an LCM workload, the method comprises determining an LCM capability level for implementing the first workload, identifying an LCM component capable of providing the LCM capability level, and transmitting an implementation request to the LCM component to implement the first workload. | 2021-07-29 |
20210232439 | DYNAMIC WORKLOAD MIGRATION TO EDGE STATIONS - One example method, which may be performed at an end device configured to communicate with an edge station, includes listening for a broadcast signal from the edge station, joining a broadcast channel, receiving edge station information, selecting an edge station, transmitting a manifest to the selected edge station, receiving route information from the selected edge station, accessing a container identified in the route information, and issuing a call to the selected edge station to execute an application workload on the container. | 2021-07-29 |
20210232440 | EXECUTION OF FUNCTIONS BY CLUSTERS OF COMPUTING NODES - Example techniques for execution of functions by clusters of computing nodes are described. In an example, if a cluster does not have resources available for executing a function for handling a service request, the cluster may request another cluster for executing the function. A result of execution of the function may be received by the cluster and used for handling the service request. | 2021-07-29 |
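The delegation pattern in this abstract can be sketched with a toy cluster model. This is illustrative only, not the patented technique; the slot-counting and single-peer topology are hypothetical simplifications:

```python
# Illustrative sketch: a cluster without free resources forwards a
# function to a peer cluster and uses the returned result to handle
# the original service request.

class Cluster:
    def __init__(self, name, free_slots, peer=None):
        self.name, self.free_slots, self.peer = name, free_slots, peer

    def handle(self, fn, *args):
        if self.free_slots > 0:
            self.free_slots -= 1
            return fn(*args)                    # execute locally
        if self.peer is not None:
            return self.peer.handle(fn, *args)  # request peer cluster
        raise RuntimeError("no capacity available in any cluster")

peer = Cluster("east", free_slots=1)
busy = Cluster("west", free_slots=0, peer=peer)
result = busy.handle(lambda x: x * 2, 21)  # executed on the peer
```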
20210232441 | DATA RACE DETECTION WITH PER-THREAD MEMORY PROTECTION - Data race detection in multi-threaded programs can be achieved by leveraging per-thread memory protection technology in conjunction with a custom dynamic memory allocator to protect shared memory objects with unique memory protection keys, allowing data races to be turned into inter-thread memory access violations. In various embodiments, threads acquire or release the keys used for accessing protected memory objects at the entry and exit points of critical sections within the program. An attempt by a thread to access a protected memory object within a critical section without the associated key triggers a protection fault, which may be indicative of a data race. | 2021-07-29 |
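The key-based access model in this abstract can be simulated at a high level. This sketch only models the *logic* of key-gated access; real systems of this kind rely on hardware per-thread memory protection keys, and all class and key names here are hypothetical:

```python
# Illustrative simulation: a shared object is guarded by a protection
# key that a thread acquires at critical-section entry. Accessing the
# object without holding the key raises a fault, which surfaces a
# potential data race instead of a silent racy access.

class ProtectionFault(Exception):
    """Raised on access without the object's key (possible data race)."""

class ProtectedObject:
    def __init__(self, key, value=0):
        self.key, self.value = key, value

    def read(self, held_keys):
        if self.key not in held_keys:
            raise ProtectionFault("access without key: possible data race")
        return self.value

obj = ProtectedObject(key="lock_A", value=7)
```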
20210232442 | MOVEABLE DISTRIBUTED SYNCHRONIZATION OBJECTS - A resource sharing method, system, and computer program product in a distributed computing environment includes, in response to a first condition, determining a first node on which an access rate of a synchronization object is greatest, storing the synchronization object on the first node for use in synchronizing access to a resource, and, in response to a second condition, determining a second node on which an access rate of the synchronization object is greatest, and relocating the synchronization object from a storage on the first node to a storage on the second node. | 2021-07-29 |
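The relocation rule in this abstract can be sketched briefly. This is an illustrative model, not the patented system; node names and access-rate figures are hypothetical:

```python
# Illustrative sketch: a synchronization object migrates to whichever
# node currently shows the greatest access rate for it.

def busiest_node(access_rates):
    """Node with the greatest access rate for the sync object."""
    return max(access_rates, key=access_rates.get)

class SyncObject:
    def __init__(self, home):
        self.home = home  # node currently storing the object

    def maybe_relocate(self, access_rates):
        target = busiest_node(access_rates)
        if target != self.home:
            self.home = target  # relocate storage to the busier node
        return self.home

lock = SyncObject(home="node1")
```

Keeping the object on its hottest node cuts cross-node traffic for the common case, at the cost of a one-time relocation when access patterns shift.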
20210232443 | Virtualised Gateways - A system comprising a gateway for interfacing external data sources with one or more accelerators. The gateway comprises a plurality of virtual gateways, each of which is configured to stream data from the external data sources to one or more associated accelerators. The plurality of virtual gateways are each configured to stream data from external data sources so that the data is received at an associated accelerator in response to a synchronisation point being obtained by a synchronisation zone. Each of the virtual gateways is assigned a virtual ID so that, when data is received at the gateway, the data can be delivered to the appropriate virtual gateway. | 2021-07-29 |
20210232444 | SWITCH EVENT ORDERING - Examples disclosed herein relate to a method comprising detecting a plurality of changes in a database, wherein the database is used to configure a switch operating traffic on a network. The method may include determining that a subset of the plurality of changes are to be deferred before being used to configure the switch, wherein each change in the subset has a potential dependency with at least one other change in the subset. The method may also include iterating through each change in the subset. The iteration may include confirming that a target change has a dependency with another change in the subset, resolving the dependency and transmitting the target change to an object manager for configuration of the switch. | 2021-07-29 |
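The deferred-change iteration in this abstract amounts to emitting changes in dependency order before handing them to the object manager. The sketch below shows that ordering step only; it is not the patented mechanism, and the change names and dependency map are hypothetical:

```python
# Illustrative sketch: database changes that may depend on one another
# are deferred, then emitted so every change follows the changes it
# depends on, before being sent to the switch's object manager.

def order_changes(changes, deps):
    """Return changes in an order that resolves their dependencies.

    `deps[c]` lists the changes that must be applied before `c`.
    """
    ordered, done = [], set()

    def visit(c):
        if c in done:
            return
        for d in deps.get(c, []):
            visit(d)                 # resolve the dependency first
        done.add(c)
        ordered.append(c)            # then transmit this change

    for c in changes:
        visit(c)
    return ordered

# e.g. a VLAN must exist before a port can be assigned to it
result = order_changes(["assign_port", "create_vlan"],
                       {"assign_port": ["create_vlan"]})
```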
20210232445 | NOTIFICATION APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A notification apparatus includes memory and a processor. The processor is configured to: store history of user operations in the memory; and, in response to detection of a first operation that does not satisfy a condition, notify a first user who has performed the first operation of information on a second user who has performed a second operation satisfying the condition, based on the history stored in the memory. | 2021-07-29 |
20210232446 | APPLICATION PROGRAM INTERFACE MANAGER LOG VERBOSITIES - A server includes an application programming interface (API) manager; a request evaluation module to, when executed by a processor, evaluate a request received from a client device at the API manager; a log rules registry to maintain a number of log verbosity rules; and a log verbosity adjustment module to, when executed by the processor, apply the log verbosity rules to the request and adjust the verbosity of a generated log to be written by the API. | 2021-07-29 |
20210232447 | METHOD FOR MANAGING MULTIPLE OPERATING SYSTEMS IN A TERMINAL - The disclosure provides a method for managing multiple operating systems in a terminal. The terminal includes multiple operating systems and a management system. The management system is configured to manage the multiple operating systems and includes a cross-system application database. The method includes: when a first operating system in the multiple operating systems runs in a foreground and a second operating system in the multiple operating systems runs in a background, if the second operating system receives a first message of a first application in the second operating system, sending, by the second operating system, a notification message to the management system; storing, by the management system, the notification message into the cross-system application database; and listening, by the first operating system, on the cross-system application database, and outputting a prompt of the first message when the notification message is obtained through the listening. | 2021-07-29 |
20210232448 | CLIPBOARD CONTROL METHOD AND SYSTEM BASED ON MOBILE TERMINAL - The present disclosure provides a clipboard control method and system based on a mobile terminal. The method includes: configuring a clipboard, including setting an allowable number of copy-content items in the clipboard and a survival time for each piece of copy content, so that the clipboard can save multiple pieces of copy content at the same time; receiving an operation instruction from a user to copy multiple pieces of content, processing the copy content, and saving the multiple pieces of copy content in the clipboard; and, when pasting is needed, receiving an operation instruction from the user selecting a paste function, displaying the saved pieces of copy content sequentially according to the selected paste function, and pasting a specified piece of content as needed. | 2021-07-29 |
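The bounded, TTL-governed clipboard this abstract describes can be sketched in a few lines. This is an illustrative model only, not the patented design; the class name, capacity, and survival time are hypothetical:

```python
# Illustrative multi-item clipboard: a bounded history where each copied
# item carries a survival time, and expired items are dropped before the
# user is shown the paste choices.
import time

class Clipboard:
    def __init__(self, max_items, ttl_seconds):
        self.max_items, self.ttl = max_items, ttl_seconds
        self.items = []  # newest first: (text, copied_at)

    def copy(self, text, now=None):
        now = time.time() if now is None else now
        self.items.insert(0, (text, now))
        del self.items[self.max_items:]   # enforce allowable item count

    def paste_choices(self, now=None):
        now = time.time() if now is None else now
        # drop items whose survival time has elapsed
        self.items = [(t, ts) for t, ts in self.items if now - ts < self.ttl]
        return [t for t, _ in self.items]

cb = Clipboard(max_items=3, ttl_seconds=60)
```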
20210232449 | ASYNCHRONOUS DIGITAL PROTOCOL GENERATOR FOR ONE-WAY COMMUNICATION STREAMS - In an aspect of the present disclosure, an ad hoc protocol generation system is disclosed. The ad hoc protocol generation system may include a first processor configured to receive a function and a target device ID. The ad hoc protocol generation system may include a second processor in electronic communication with the first processor. The first processor may be programmed to transmit a synchronous signal to the second processor. The synchronous signal may include a payload containing an asynchronous protocol definition selected based on the target device ID. The payload may further contain a pass-through payload comprising the function. The second processor may be programmed to receive the synchronous signal and transmit an asynchronous signal to a target device. The asynchronous signal may be formatted according to the asynchronous protocol definition. The asynchronous signal may contain the pass-through payload. | 2021-07-29 |
20210232450 | MANAGING UNCORRECTABLE USER DATA - A technique for managing user data in a storage system includes accessing RAID metadata to identify user data that the storage system backs with broken RAID arrays. The technique further includes marking metadata that points to at least some of that user data to identify such user data as uncorrectable. | 2021-07-29 |
20210232451 | SYSTEMS AND METHODS FOR ERROR RECOVERY - Embodiments of the present disclosure include an error recovery method comprising detecting a computing error, restarting a first artificial intelligence processor of a plurality of artificial intelligence processors processing a data set, and loading a model in the artificial intelligence processor, wherein the model corresponds to a same model processed by the plurality of artificial intelligence processors during a previous processing iteration by the plurality of artificial intelligence processors on data from the data set. | 2021-07-29 |
20210232452 | Method and System For Managing Memory Device - The subject technology provides for managing a data storage system. A data operation error for a data operation initiated in a first non-volatile memory die of a plurality of non-volatile memory die in the data storage system is detected. An error count for an error type of the data operation error for the first non-volatile memory die is incremented. It is determined that the incremented error count satisfies a first threshold value for the error type of the data operation error. The first non-volatile memory die is marked for exclusion from subsequent data operations. | 2021-07-29 |
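The count-and-exclude logic in this abstract can be illustrated directly. This sketch is not the patented firmware; the error-type names and threshold values are hypothetical:

```python
# Illustrative sketch: per-die error counters keyed by error type; a
# die whose count reaches the threshold for that error type is marked
# for exclusion from subsequent data operations.
from collections import defaultdict

class DieHealth:
    def __init__(self, thresholds):
        self.thresholds = thresholds    # e.g. {"program_fail": 2}
        self.counts = defaultdict(int)  # (die, error_type) -> count
        self.excluded = set()

    def record_error(self, die, error_type):
        self.counts[(die, error_type)] += 1
        limit = self.thresholds.get(error_type, float("inf"))
        if self.counts[(die, error_type)] >= limit:
            self.excluded.add(die)      # exclude die from future ops

    def usable(self, die):
        return die not in self.excluded

health = DieHealth({"program_fail": 2})
```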
20210232453 | INDEXING AND RECOVERING ENCODED BLOCKCHAIN DATA - Disclosed herein are computer-implemented methods, computer-implemented systems, and non-transitory, computer-readable media, to index blockchain data for storage. One computer-implemented method includes generating one or more encoded blocks by executing error correction coding (ECC) on one or more blocks of a blockchain. Each of the one or more encoded blocks are divided into a plurality of datasets. An index is provided for the one or more encoded blocks, where the index is used to index each dataset of the plurality of datasets to a blockchain node at which a respective dataset is stored. | 2021-07-29 |
20210232454 | METHOD OF ENCODING DATA - Techniques for encoding data are described herein. The method includes receiving a block payload at a physical layer to be transmitted via a data bus. The method includes establishing a block header comprising an arrangement of bits, the block header defining two block header types, wherein a Hamming distance between block header types is at least four. | 2021-07-29 |
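The header-distance property in this abstract is easy to demonstrate. The specific bit patterns below are hypothetical examples chosen to satisfy the distance requirement, not patterns from the patent:

```python
# Illustrative sketch: two block-header bit patterns whose Hamming
# distance is at least four, so small numbers of bit errors cannot
# silently turn one header type into the other.

def hamming_distance(a, b):
    """Number of differing bits between two equal-width bit patterns."""
    return bin(a ^ b).count("1")

DATA_HEADER = 0b0001     # header type 1 (hypothetical pattern)
CONTROL_HEADER = 0b1110  # header type 2: distance 4 from type 1
```

With distance four, any one- to three-bit corruption of a header yields a pattern matching neither type, so the error is detectable rather than misread as the other header type.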