36th week of 2022 patent application highlights part 42 |
Patent application number | Title | Published |
20220283799 | CENTER, UPDATE MANAGEMENT METHOD, AND NON-TRANSITORY STORAGE MEDIUM - A center is configured to communicate with an OTA master configured to control software update of an electronic control unit installed in a vehicle. The center includes one or more processors configured to: receive a notification indicating progress of software update processing of the electronic control unit from the OTA master; manage an update status indicating a processing state of the software update processing in the vehicle based on the notification received by the one or more processors; and when the one or more processors receive a third notification following a first notification, set the update status based on the third notification, the third notification being different from a second notification scheduled to be received following the first notification. | 2022-09-08 |
20220283800 | METHOD AND SYSTEM FOR SOFTWARE APPLICATION OPTIMIZATION USING NATURAL LANGUAGE-BASED QUERIES - A method for software application optimization using natural language-based queries. The method includes obtaining a user-provided query. The user-provided query includes a constraint to be used for an identification of an application element that matches the constraint, from a set of application elements of a software application. The user-provided query is a string that includes a human language sentence. The method further includes deriving a formalized query from the user-provided query by translating the user-provided query into a syntactic construct of segmented sentence elements and obtaining the application element that matches the constraint. Obtaining the application element that matches the constraint includes deriving a pattern representation of the user-provided query from the formalized query and identifying the application element that matches the pattern representation of the user-provided query from the plurality of application elements. | 2022-09-08 |
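The flow described in 20220283800 — segment a human-language constraint into a formalized pattern, then match application elements against it — can be sketched in miniature as follows. The grammar, the element model, and every name here are illustrative assumptions, not the filing's actual method:

```python
import re

# Hypothetical application elements with made-up attributes.
ELEMENTS = [
    {"name": "login_handler", "calls": 120, "avg_ms": 850},
    {"name": "search_index", "calls": 40, "avg_ms": 95},
]

def formalize(query):
    """Segment 'find elements with <attr> over <n>' into an (attribute, threshold) pattern."""
    m = re.match(r"find elements with (\w+) over (\d+)", query.lower())
    return {"attr": m.group(1), "min": int(m.group(2))} if m else None

def match_elements(pattern, elements):
    """Identify the elements whose attribute exceeds the constraint's threshold."""
    return [e["name"] for e in elements if e.get(pattern["attr"], 0) > pattern["min"]]

pattern = formalize("Find elements with avg_ms over 500")
slow = match_elements(pattern, ELEMENTS)  # ["login_handler"]
```

A real system would need a far richer grammar than this single regular expression; the point is only the two-stage pipeline of formalization followed by pattern matching.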
20220283801 | PIPELINE RELEASE VALIDATION - A pipeline (e.g., a DevOps or DevSecOps pipeline) may include utilities corresponding to stages within the pipeline. A device may execute the pipeline on a version of a codebase, where the version of the codebase is associated with an immutable identifier of a version control management system. The device may generate metadata for one or more of the utilities of the pipeline based on executing the pipeline on the version of the codebase. The device may store the metadata at a database, where the immutable identifier is designated as a primary key for the stored metadata. The device may verify the metadata at one or more gates of the pipeline based on a comparison of the stored metadata to a set of policy information associated with the one or more gates. | 2022-09-08 |
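The storage scheme in 20220283801 — utility metadata keyed by an immutable version-control identifier, then verified at gates against policy — might be sketched like this; `record`, `verify_gate`, and the policy shape are assumptions for illustration only:

```python
# Stand-in for the database: immutable commit id -> {utility name: metadata}.
metadata_db = {}

def record(commit_id, utility, metadata):
    """Store a utility's metadata under the immutable identifier (the primary key)."""
    metadata_db.setdefault(commit_id, {})[utility] = metadata

def verify_gate(commit_id, policy):
    """A gate passes only if every utility's stored metadata satisfies its policy check."""
    stored = metadata_db.get(commit_id, {})
    checks_pass = all(policy.get(util, lambda m: False)(meta) for util, meta in stored.items())
    return checks_pass and bool(stored)  # no stored metadata means the gate cannot pass

record("a1b2c3", "sast-scan", {"critical_findings": 0})
record("a1b2c3", "unit-tests", {"pass_rate": 1.0})

policy = {
    "sast-scan": lambda m: m["critical_findings"] == 0,
    "unit-tests": lambda m: m["pass_rate"] >= 0.95,
}
passed = verify_gate("a1b2c3", policy)  # True: both utilities satisfy the gate policy
```

Using the immutable identifier as the primary key means every gate evaluation is anchored to exactly one verifiable snapshot of the codebase.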
20220283802 | AUTOMATION OF TASK IDENTIFICATION IN A SOFTWARE LIFECYCLE - A system and method for automation of task identification and control in a software lifecycle. Software context for a software asset is extracted from context repositories of the software asset during software development and operation, the extracted context data is matched to relevant tasks in a knowledge database to select tasks for the software asset, and task prioritization and orchestration are presented in a prioritized task list during a software lifecycle. | 2022-09-08 |
20220283803 | LANGUAGE AGNOSTIC CODE CLASSIFICATION - A system may include a computer processor and a repository configured to store a first code fragment including language features represented in a first programming language, and a second code fragment including language features represented in a second programming language. The system may further include a universal code fragment classifier, executing on the computer processor and configured to generate a first universal abstract syntax tree for the first code fragment and a second universal abstract syntax tree for the second code fragment, generate, using a graph embedding model, first vectors for the first universal abstract syntax tree and second vectors for the second universal abstract syntax tree, and classify, by executing an abstract syntax tree classifier on the first vectors and the second vectors, the first code fragment as a first code category and the second code fragment as a second code category. | 2022-09-08 |
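A toy version of the idea in 20220283803: language-specific syntax-tree node kinds are mapped into one shared "universal" vocabulary, fragments are embedded as vectors over that vocabulary, and a classifier assigns categories. The node mappings, the bag-of-node-kinds embedding, and the nearest-centroid classifier below are all simplifying assumptions, not the patent's models:

```python
# Map language-specific AST node kinds into a shared universal vocabulary.
UNIVERSAL = {
    # Python kinds           # Java kinds
    "FunctionDef": "FUNC",   "MethodDeclaration": "FUNC",
    "For": "LOOP",           "ForStatement": "LOOP",
    "If": "BRANCH",          "IfStatement": "BRANCH",
}
VOCAB = ["FUNC", "LOOP", "BRANCH"]

def to_vector(node_kinds):
    """Embed a fragment as counts over universal node kinds (a crude graph embedding stand-in)."""
    universal = [UNIVERSAL.get(k) for k in node_kinds]
    return [universal.count(v) for v in VOCAB]

def classify(vec, centroids):
    """Nearest-centroid stand-in for the abstract syntax tree classifier."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(vec, centroids[c]))

centroids = {"iterative": [1, 2, 0], "conditional": [1, 0, 2]}
py_fragment = ["FunctionDef", "For", "For"]                          # loop-heavy Python code
java_fragment = ["MethodDeclaration", "IfStatement", "IfStatement"]  # branch-heavy Java code
label_py = classify(to_vector(py_fragment), centroids)
label_java = classify(to_vector(java_fragment), centroids)
```

Because both fragments pass through the same universal vocabulary, one classifier can categorize code regardless of the source language.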
20220283804 | METHODS AND SYSTEMS FOR MONITORING CONTRIBUTORS TO SOFTWARE PLATFORM DEVELOPMENT - Methods and systems for a platform development version control system for monitoring contributors to software platform development. The methods and systems generate data analytics on contributors to software platform development using group affiliations as listed in a group directory (e.g., a corporate directory for an entity providing the software platform) as a common organizing factor. For example, by organizing the methods and systems according to the group affiliations, the methods and systems may generate data analytics on contributions of contributors within those groups, irrespective of whether or not the group members are working on the same project. The methods and systems may then provide recommendations and graphical representations based on the data analytics. | 2022-09-08 |
20220283805 | MICRO-FRONTEND AS A SERVICE - Embodiments disclosed are directed to a system that performs steps to transmit, to a client device, a host application for storage on a browser of the client device. The host application is used to facilitate loading of a micro-frontend application onto the browser at runtime of the host application, for integration with and use in conjunction with the host application. The system also receives, from the host application, a request to load the micro-frontend application onto the browser. Based on receiving the request, a manifest file is accessed indicating a version of the micro-frontend application to be loaded onto the browser. The micro-frontend application is retrieved based on the version indicated in the manifest file and transmitted to the host application for loading onto the browser. | 2022-09-08 |
20220283806 | PROCESSING-IN-MEMORY DEVICE HAVING A PLURALITY OF GLOBAL BUFFERS AND PROCESSING-IN-MEMORY SYSTEM INCLUDING THE SAME - A processing-in-memory (PIM) device includes a plurality of multiplication and accumulation (MAC) operators configured to perform MAC arithmetic operations using weight data and vector data to generate and output MAC result data. The PIM device also includes a first global buffer and a second global buffer configured to alternately perform a vector data provision operation of providing the vector data to the plurality of MAC operators and a MAC result data storage operation of storing the MAC result data. | 2022-09-08 |
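The buffer alternation in 20220283806 is a classic ping-pong arrangement: while one global buffer provides vector data to the MAC operators, the other stores the MAC results, and the roles swap. A minimal behavioral model of that alternation (not the hardware) might look like:

```python
def mac(weights, vector):
    """One multiply-accumulate operation: dot product of a weight row and the vector."""
    return sum(w * v for w, v in zip(weights, vector))

def run_pim(weight_rows, vectors):
    """Drive the MAC operators; buffers A and B alternate provision and storage roles."""
    buffers = {"A": [], "B": []}
    provider, store = "A", "B"
    results = []
    for vec in vectors:
        buffers[provider].append(vec)            # stage vector data in the providing buffer
        staged = buffers[provider].pop(0)        # vector data provision operation
        buffers[store].append([mac(row, staged) for row in weight_rows])  # MAC result storage
        results.extend(buffers[store])           # drain stored results
        buffers[store].clear()
        provider, store = store, provider        # the two buffers swap roles each pass
    return results

out = run_pim([[1, 0], [0, 1]], [[2, 3], [4, 5]])  # identity weights: [[2, 3], [4, 5]]
```

In hardware the point of the alternation is overlap — provisioning the next vector while the previous result is stored — which a sequential model like this can only hint at.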
20220283807 | SYSTEM AND METHOD FOR THE GENERATION AND STORAGE OF EXECUTION TRACING INFORMATION - A system and method for the storage, within one or more virtual execution context registers, execution tracing information indicative of process/code flow within a processor system. This stored information can include a time stamp, information indicative of where the instruction pointer of the system was pointing prior to any process discontinuity, information indicative of where the instruction pointer of the system was pointing after any process discontinuity, and the number of times a specific instruction or sub-process is executed during a particular process. The data collected and stored can be utilized within such a system for the identification and analysis of code interrupts and profile-guided optimization. | 2022-09-08 |
20220283808 | SYSTEM AND METHOD FOR THE DETECTION OF PROCESSING HOT-SPOTS - A system and method for the storage, within one or more virtual execution context registers, tracing information indicative of process/code flow within a processor system. This stored information can include a time stamp, information indicative of where the instruction pointer of the system was pointing prior to any process discontinuity, information indicative of where the instruction pointer of the system was pointing after any process discontinuity, and the number of times a specific instruction or sub-process is executed during a particular process. The data collected and stored can be utilized within such a system for the identification and analysis of processing hot-spots. | 2022-09-08 |
20220283809 | Converting a Stream of Data Using a Lookaside Buffer - A stream of data is accessed from a memory system by an autonomous memory access engine, converted on the fly by the memory access engine, and then presented to a processor for data processing. A portion of a lookup table (LUT) containing converted data elements is preloaded into a lookaside buffer associated with the memory access engine. As the stream of data elements is fetched from the memory system each data element in the stream of data elements is replaced with a respective converted data element obtained from the LUT in the lookaside buffer according to a content of each data element to thereby form a stream of converted data elements. The stream of converted data elements is then propagated from the memory access engine to a data processor. | 2022-09-08 |
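The conversion step in 20220283809 is content-indexed table lookup: a portion of a LUT is preloaded into the lookaside buffer, and each fetched element is replaced by the entry its own value selects. A sketch, with an assumed squaring table as the conversion:

```python
# Full conversion table; squaring is an arbitrary stand-in for the real conversion.
FULL_LUT = {i: i * i for i in range(256)}

def preload(lut, keys):
    """Preload only the portion of the LUT that the upcoming stream will need."""
    return {k: lut[k] for k in keys}

def convert_stream(stream, lookaside):
    """Replace each element with its converted value, indexed by the element's content."""
    return [lookaside[elem] for elem in stream]

stream = [3, 7, 3, 2]
lookaside = preload(FULL_LUT, set(stream))       # the lookaside buffer holds 3 entries
converted = convert_stream(stream, lookaside)    # [9, 49, 9, 4]
```

The benefit claimed in the abstract is that this substitution happens in the memory access engine, so the processor only ever sees already-converted data.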
20220283810 | Method and Apparatus for Vector Based Matrix Multiplication - A method is provided that includes performing, by a processor in response to a vector matrix multiply instruction, multiplying an m×n matrix (A matrix) and a n×p matrix (B matrix) to generate elements of an m×p matrix (R matrix), and storing the elements of the R matrix in a storage location specified by the vector matrix multiply instruction. | 2022-09-08 |
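The semantics of the instruction in 20220283810 reduce to a standard m×n by n×p matrix product producing an m×p result. A reference sketch of that computation:

```python
def matmul(a, b):
    """Multiply an m x n matrix A by an n x p matrix B into an m x p result R."""
    m, n, p = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

r = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```

The filing's contribution is performing this as a single vector instruction with a result storage location encoded in the instruction itself, not the arithmetic, which is the ordinary triple loop above.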
20220283811 | LOOP BUFFERING EMPLOYING LOOP CHARACTERISTIC PREDICTION IN A PROCESSOR FOR OPTIMIZING LOOP BUFFER PERFORMANCE - Methods and apparatus for providing loop buffering employing loop iteration and exit branch prediction in a processor for optimizing loop buffer performance are disclosed herein. A loop buffer circuit in the processor can be configured to predict the number of iterations that a detected loop in an instruction stream will be executed before the loop is exited, to reduce or avoid under- or over-iterating loop replay. The loop buffer circuit can also be configured to predict the loop exit branch of the detected loop to predict the exact number of full iterations of the loop to be replayed and what instructions to replay for the last partial iteration of the loop, to further reduce or avoid under- or over-iterating loop replay. The loop buffer circuit can also be configured to predict the exit target address of the loop to provide the starting address for resuming the fetching of new instructions following the loop exit. | 2022-09-08 |
20220283812 | SYSTEM AND METHOD FOR SHARED REGISTER CONTENT INFORMATION - A system and method for the provision of a shared register within a virtual processor base/virtual execution context arrangement. The disclosed arrangement utilizes chiplets comprising core logic and defined instruction sets. The chiplets are adapted to operate in conjunction with one or more active execution contexts to enable the execution of particular processes. In particular, the shared register space is created within the same physical memory utilized to support execution contexts. | 2022-09-08 |
20220283813 | FLEXIBLE RETURN AND EVENT DELIVERY - Techniques for flexible return and event delivery are described. As an example, an exemplary apparatus includes decoder circuitry to decode a single instruction, the single instruction to include a field for an opcode; and execution circuitry to execute the decoded single instruction according to the opcode to cause a return from an event handler while staying in a most privileged level and establish a return context that was in effect before event delivery. | 2022-09-08 |
20220283814 | PROCESSOR, PROCESSOR OPERATION METHOD AND ELECTRONIC DEVICE COMPRISING SAME - Disclosed are a processor, a processor operation method and an electronic device comprising same. The disclosed processor operation method comprises the steps of: identifying an instruction for instructing the execution of a first operation and address information of an operand corresponding to the instruction; and executing the instruction on the basis of whether or not the address information of the operand satisfies a predetermined condition. In the step of executing the instruction, a second operation configured for the instruction is executed for the operand if the address information of the operand satisfies the predetermined condition, and the first operation is executed for the operand if the address information of the operand does not satisfy the predetermined condition. | 2022-09-08 |
20220283815 | SYSTEM AND METHOD FOR SECURELY DEBUGGING ACROSS MULTIPLE EXECUTION CONTEXTS - A system and method for a virtual processor base/virtual execution context arrangement. The disclosed arrangement utilizes chiplets comprising core logic and defined instruction sets. The chiplets are adapted to operate in conjunction with one or more active execution contexts to enable the execution of particular processes. In particular, the defined instruction sets include instructions for processor debugging. The system and method support the compartmentalization of such debugging instructions so as to provide enhanced processor and process security. | 2022-09-08 |
20220283816 | REUSING FETCHED, FLUSHED INSTRUCTIONS AFTER AN INSTRUCTION PIPELINE FLUSH IN RESPONSE TO A HAZARD IN A PROCESSOR TO REDUCE INSTRUCTION RE-FETCHING - Reusing fetched, flushed instructions after an instruction pipeline flush in response to a hazard in a processor to reduce instruction re-fetching is disclosed. An instruction processing circuit is configured to detect fetched performance degrading instructions (PDIs) in a pre-execution stage in an instruction pipeline that may cause a precise interrupt that would cause flushing of the instruction pipeline. In response to detecting a PDI in an instruction pipeline, the instruction processing circuit is configured to capture the fetched PDI and/or its successor, younger fetched instructions that are processed in the instruction pipeline behind the PDI, in a pipeline refill circuit. If a later execution of the PDI in the instruction pipeline causes a flush of the instruction pipeline, the instruction processing circuit can inject the fetched PDI and/or its younger instructions previously captured from the pipeline refill circuit into the instruction pipeline to be processed without such instructions being re-fetched. | 2022-09-08 |
20220283817 | SYSTEM AND METHOD FOR THE CREATION AND PROVISION OF EXECUTION VIRTUAL CONTEXT INFORMATION - A system and method for virtual processor customization based upon the particular workload placed upon the virtual processor by one or more execution contexts within a given program or process. The customization serves to optimize the virtual processor architecture based upon a determination as to the size and/or type of virtual execution registers optimally suited for supporting a given execution context. This results in a time-variant processor architecture comprised of a virtual processor base and a virtual execution context. | 2022-09-08 |
20220283818 | HEXADECIMAL FLOATING POINT MULTIPLY AND ADD INSTRUCTION - An instruction to perform an operation selected from a plurality of operations configured for the instruction is executed. The executing includes determining a value of a selected operand of the instruction. The determining the value is based on a control of the instruction and includes reading the selected operand of the instruction from a selected operand location to obtain the value of the selected operand, based on the control having a first value, and using a predetermined value as the value of the selected operand, based on the control having a second value. The value and another selected operand of the instruction are multiplied to obtain a product. An arithmetic operation is performed using the product and a chosen operand of the instruction to obtain an intermediate result. A result from the intermediate result is obtained and placed in a selected location. | 2022-09-08 |
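The operand-selection control described in 20220283818 can be modeled simply: a control value decides whether the selected operand is read from its location or replaced with a predetermined value, before the multiply and the subsequent arithmetic operation. The predetermined value of 1.0 and all names below are assumptions; the filing does not fix them:

```python
PREDETERMINED = 1.0  # assumed predetermined value; the abstract leaves it unspecified

def fused_multiply_add(operand_loc, other, chosen, control, registers):
    """Model of the instruction: select a value by control, multiply, then add."""
    # Control with the first value: read the selected operand from its location.
    # Control with the second value: substitute the predetermined value instead.
    value = registers[operand_loc] if control == 0 else PREDETERMINED
    product = value * other        # multiply the value with another selected operand
    return product + chosen        # arithmetic operation on the intermediate product

regs = {"f2": 3.0}
with_read = fused_multiply_add("f2", 2.0, 0.5, control=0, registers=regs)     # 3*2 + 0.5 = 6.5
with_default = fused_multiply_add("f2", 2.0, 0.5, control=1, registers=regs)  # 1*2 + 0.5 = 2.5
```

With a predetermined value of 1.0, the second form degenerates to a plain add of `other` and `chosen`, which suggests why such a control is useful: one opcode can cover several related operations.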
20220283819 | PROCESSOR BRANCH PREDICTION CIRCUIT EMPLOYING BACK-INVALIDATION OF PREDICTION CACHE ENTRIES BASED ON DECODED BRANCH INSTRUCTIONS AND RELATED METHODS - A processor branch prediction circuit employs back-invalidation of prediction cache entries based on decoded branch instructions. The execution information of a previously executed branch instruction is obtained from a prediction cache entry and compared to generated decode information in an instruction decode circuit. Execution information of branch instructions stored in the prediction cache entry is updated in response to a mismatch of the execution information and the decode information of the branch instruction. Existing branch prediction circuits invalidate prediction cache entries of a block of instructions when the block of instructions is invalidated in an instruction cache. As a result, valid branch instruction execution information may be unnecessarily discarded. Updating prediction cache entries in response to a mismatch of the execution information and the decode information of the branch instruction maintains the execution information in the prediction cache. | 2022-09-08 |
20220283820 | DATA PARALLELISM IN DISTRIBUTED TRAINING OF ARTIFICIAL INTELLIGENCE MODELS - Methods, systems, apparatuses, and computer program products are described herein that enable execution of a large AI model on a memory-constrained target device that is communicatively connected to a parameter server, which stores a master copy of the AI model. The AI model may be dissected into smaller portions (e.g., layers or sub-layers), and each portion may be executed as efficiently as possible on the target device. After execution of one portion of the AI model is finished, another portion of the AI model may be downloaded and executed at the target device. To improve efficiency, the input samples may be divided into microbatches, and a plurality of microbatches executing in sequential order may form a minibatch. The size of the group of microbatches or minibatch can be adjusted to reduce the communication overhead. Multi-level parallel parameters reduction may be performed at the parameter server and the target device. | 2022-09-08 |
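The microbatch/minibatch split in 20220283820 follows the familiar gradient-accumulation pattern: samples are divided into microbatches executed in sequence, and their gradients are combined into one parameter update per minibatch. A one-parameter toy model (the model, loss, and sizes are illustrative, not the patent's):

```python
def chunk(samples, size):
    """Divide input samples into microbatches of the given size."""
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def train_minibatch(weight, minibatch, micro_size, lr=0.1):
    """Execute microbatches in sequence, then apply one update for the whole minibatch."""
    grads = []
    for microbatch in chunk(minibatch, micro_size):
        # Gradient of mean squared error for the model y = w*x against the target y = 2x.
        g = sum(2 * (weight * x - 2 * x) * x for x in microbatch) / len(microbatch)
        grads.append(g)                      # accumulate per-microbatch gradients
    weight -= lr * sum(grads) / len(grads)   # one parameter update per minibatch
    return weight

w = train_minibatch(0.0, [1.0, 2.0, 3.0, 4.0], micro_size=2)  # moves w toward 2.0
```

Only one microbatch's activations need to be live at a time, which is what lets a memory-constrained target device execute a model it could not fit in one pass; tuning `micro_size` trades memory against the communication overhead the abstract mentions.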
20220283821 | SYSTEMS AND METHODS FOR RESOURCE ISOLATION FOR NETWORK BOOT USING VIRTUAL MACHINE MONITOR - An information handling system may include a processor and a basic input/output system configured to be the first code executed by the processor when the information handling system is booted and configured to initialize components of the information handling system into a known state, the basic input/output system further configured to implement a virtual machine monitor, the virtual machine monitor configured to isolate resources of the information handling system allocated to a network boot process of the information handling system from other resources of the information handling system allocated to other components of the basic input/output system. | 2022-09-08 |
20220283822 | STATE MACHINE PROCESSING METHOD, STATE PROCESSING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM - Provided are a state machine processing method, a state processing method, an electronic device, and a storage medium, which relate to artificial intelligence fields such as computer technologies and the Internet of Vehicles. The method includes the following. A global state and an individual state are determined from all states of a state processing object; individual states are classified to obtain at least one local-state set; a sub-state machine is constructed for each of the at least one local-state set, where each sub-state machine manages an individual state in one of the at least one local-state set; and a layered finite state machine is constructed according to the sub-state machine and the global state. | 2022-09-08 |
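The layered construction in 20220283822 can be sketched directly: individual states are grouped into local-state sets, each set gets its own sub-state machine, and a top layer combines the sub-machines with the global state. The vehicle-flavored states and class names below are illustrative assumptions:

```python
class SubStateMachine:
    """Manages the individual states of one local-state set."""
    def __init__(self, states, initial):
        self.states, self.state = states, initial

    def transition(self, target):
        if target in self.states:  # transitions are confined to this set's states
            self.state = target

class LayeredStateMachine:
    """Top layer: one global state plus one sub-state machine per local-state set."""
    def __init__(self, global_state, local_sets):
        self.global_state = global_state
        self.subs = {name: SubStateMachine(states, states[0])
                     for name, states in local_sets.items()}

    def transition(self, layer, target):
        self.subs[layer].transition(target)

fsm = LayeredStateMachine(
    global_state="powered_on",
    local_sets={"media": ["idle", "playing", "paused"],
                "climate": ["off", "cooling", "heating"]},
)
fsm.transition("media", "playing")  # only the media sub-machine changes state
```

The payoff of the layering is isolation: each sub-state machine evolves independently, so adding a local state never enlarges the other layers' transition logic.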
20220283823 | DYNAMIC PLUGIN MANAGEMENT FOR SYSTEM HEALTH - The disclosure provides an approach for providing an extendable system health management framework in a network. Embodiments include receiving, by a manager, a system health plugin. Embodiments include determining, by the manager, an association between the system health plugin and a host in the network based on the host satisfying one or more conditions. Embodiments include providing, by the manager, the system health plugin to the host for installation in a system health agent on the host. Embodiments include receiving, by the manager, from the host, status information for the system health plugin. | 2022-09-08 |
20220283824 | PROCESSING SYSTEM AND PROCESSING METHOD FOR PERFORMING EMPHASIS PROCESS ON BUTTON OBJECT OF USER INTERFACE - A processing system and a processing method for a user interface are provided. The processing method includes a learning phase and an application phase. After a specific model is established in the learning phase, user interface elements with specific meanings, such as closing and rejection, can be automatically found in the application phase for performing an emphasis process. | 2022-09-08 |
20220283825 | Electronic Device and Driving Method Therefor, Driving module, and Computer-Readable Storage Medium - A driving method for an electronic device is provided. The electronic device includes a touch display panel, and a display surface of the touch display panel is divided into multiple application launching regions. The driving method includes: determining a movement trajectory of a touch point that at least partially overlaps the appearance identifier of an application program; and in a case where an end point of the movement trajectory is located in one of the multiple application launching regions, displaying an application interface of the application program in the application launching region in which the end point of the movement trajectory is located. | 2022-09-08 |
20220283826 | ARTIFICIAL INTELLIGENCE (AI) SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING BROWSER ACTIONS USING GRAPH NEURAL NETWORKS - For one embodiment of the present disclosure, an artificial intelligence (AI) system and method are disclosed herein for automatically generating browser actions using graph neural networks. A computer implemented method includes receiving, with an artificial intelligence (AI) agent, an input including a high-level natural language request or task or a text request or task, and in response to the input, automatically obtaining, with the AI agent, an html graph for a web application that is associated with the input. The method further includes automatically obtaining an appropriate domain specific semantic graph (DSG) in response to obtaining the html graph for the web application and based on a known set of DSGs and automatically generating, with a graph neural network (GNN), a labeled html graph in response to providing the html graph and the appropriate DSG to the GNN. | 2022-09-08 |
20220283827 | DETERMINING SEQUENCES OF INTERACTIONS, PROCESS EXTRACTION, AND ROBOT GENERATION USING ARTIFICIAL INTELLIGENCE / MACHINE LEARNING MODELS - Use of artificial intelligence (AI)/machine learning (ML) models is disclosed to determine sequences of user interactions with computing systems, extract common processes, and generate robotic process automation (RPA) robots. The AI/ML model may be trained to recognize matching n-grams of user interactions and/or a beneficial end state. Recorded real user interactions may be analyzed, and matching sequences may be implemented as corresponding activities in an RPA workflow. | 2022-09-08 |
20220283828 | APPLICATION SHARING METHOD, ELECTRONIC DEVICE AND COMPUTER READABLE STORAGE MEDIUM - An application sharing method, an electronic device and a computer readable storage medium are provided. The method includes: in a case that a first electronic device is connected to a second electronic device, displaying a running interface of a target application in a virtual screen; and sharing the running interface of the target application displayed in the virtual screen to the second electronic device. | 2022-09-08 |
20220283829 | SYSTEM FOR IMPLEMENTING AUTO DIDACTIC CONTENT GENERATION USING REINFORCEMENT LEARNING - Systems, computer program products, and methods are described herein for implementing auto didactic content generation using reinforcement learning. The present invention is configured to retrieve a user interaction portfolio of a user associated with a completion of a first task; determine one or more interaction requirements associated with the first task; determine an interaction score associated with the user; determine a target interaction score associated with the first task; determine that the interaction score associated with the user is less than the target interaction score; electronically receive, from a knowledge repository, a first video file demonstrating the one or more interaction requirements; generate a modified first video file; and transmit control signals configured to cause a computing device of the user to display the modified first video file to the user. | 2022-09-08 |
20220283830 | MANAGING VIRTUAL APPLICATION PERFORMANCE IN A VIRTUAL COMPUTING ENVIRONMENT - Systems and methods of managing virtual application performance in a virtual computing environment are provided. A system determines an application interaction score based on corresponding application interaction factors associated with sessions. The system determines the application interaction score for each virtual application accessed during each of the sessions. The system generates an aggregated application interaction score for each of the sessions based at least on combining the application interaction score for each of the virtual applications accessed during a corresponding session. The system performs an action based at least in part on the aggregated application interaction score to improve performance of a virtual application accessed via the virtual computing environment. | 2022-09-08 |
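The scoring pipeline in 20220283830 can be illustrated with a small sketch: each virtual application in a session gets an interaction score from weighted interaction factors, and the session's aggregated score combines the per-application scores. The factor names, weights, and mean-based aggregation are assumptions, not the patent's formulas:

```python
# Assumed interaction factors and weights; negative weights penalize slowness.
FACTOR_WEIGHTS = {"logon_time": -0.2, "latency": -0.5, "responsiveness": 0.8}

def interaction_score(factors):
    """Score one virtual application from its corresponding interaction factors."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

def aggregate_session(apps):
    """Aggregated application interaction score for one session (mean of app scores)."""
    scores = [interaction_score(f) for f in apps.values()]
    return sum(scores) / len(scores)

session = {
    "browser": {"latency": 1.0, "responsiveness": 5.0},  # score 3.5
    "editor": {"latency": 2.0, "responsiveness": 4.0},   # score 2.2
}
score = aggregate_session(session)  # (3.5 + 2.2) / 2 = 2.85
```

A management action (the abstract's final step) would then trigger when `score` crosses some threshold, for example reallocating resources to the worst-scoring session.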
20220283831 | ACTION RECIPES FOR A CROWDSOURCED DIGITAL ASSISTANT SYSTEM - Embodiments of the present invention are directed to action recipes for a crowdsourced digital assistant. Users can define an action recipe by recording a set of inputs across one or more applications, by providing multiple sub-commands in a single on-the-fly command, by providing one or more associated commands, or otherwise. An action recipe dataset is generated, and stored and indexed on a user device and/or on an action cloud server. As such, any user can invoke an action recipe by providing an associated command to a crowdsourced digital assistant application on a user device. The crowdsourced digital assistant searches for a matching command on the user device and/or the action cloud server, and if a match is located, the corresponding action recipe dataset is accessed, and the crowdsourced digital assistant emulates the actions in the action recipe on the user device. | 2022-09-08 |
20220283832 | DATA IO AND SERVICE ON DIFFERENT PODS OF A RIC - To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high speed interfaces between these machines. Some or all of these interfaces operate in non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high speed IO between the E2 nodes and the xApps. | 2022-09-08 |
20220283833 | SPP SERVER, VIRTUAL MACHINE CONNECTION CONTROL SYSTEM, SPP SERVER CONNECTION CONTROL METHOD AND PROGRAM - The SPP server | 2022-09-08 |
20220283834 | SYSTEM AND METHOD TO MONITOR AND MANAGE A PASSTHROUGH DEVICE - An information handling system includes a service module that may detect an action performed on a passthrough device, invoke an application programming interface on a hypervisor, receive a response to the action on the passthrough device from the hypervisor, and push management information to a management controller. The hypervisor may detect the passthrough device, proxy an operating system call associated with the action to a guest operating system of the virtual machine over the application programming interface, and transmit the response received from the guest operating system to the service module. The guest operating system may echo the operating system call on a virtual machine, and proxy the response to the operating system call to the hypervisor. | 2022-09-08 |
20220283835 | LIBRARY BASED VIRTUAL MACHINE MIGRATION - A method includes determining to migrate a guest VM from a source host to a destination host and, in response to determining to migrate the guest VM, determining that a page of the guest VM matches a page of a VM image of a plurality of VM images in a VM library associated with the source host. The method further includes forwarding an identifier of the page of the VM image to the destination host, the destination host to retrieve, in view of the identifier, the page of the VM image from a second VM library associated with the destination host to instantiate the guest VM at the destination host. | 2022-09-08 |
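The page-matching idea in 20220283835 separates a migration into two streams: pages that match a page of a library VM image are sent as identifiers only (the destination pulls them from its own library copy), while unmatched pages are transferred raw. Content hashing below stands in for however the filing actually compares pages, and all names are illustrative:

```python
import hashlib

def page_id(page):
    """Content-derived identifier for a guest memory page."""
    return hashlib.sha256(page).hexdigest()

def plan_migration(guest_pages, library_ids):
    """Split guest pages into library identifiers to forward vs raw pages to copy."""
    forwarded, copied = [], []
    for page in guest_pages:
        pid = page_id(page)
        if pid in library_ids:
            forwarded.append(pid)   # destination retrieves this page from its VM library
        else:
            copied.append(page)     # no library match: transfer the raw page
    return forwarded, copied

library = {page_id(b"base-os-page"), page_id(b"kernel-page")}
fwd, raw = plan_migration([b"base-os-page", b"app-data-page"], library)
```

For guests built from common images, most pages fall into the forwarded stream, so the bytes actually moved between hosts shrink to the guest's unique pages plus a list of short identifiers.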
20220283836 | DYNAMIC CONFIGURATION OF VIRTUAL OBJECTS - The disclosure provides an approach for the dynamic configuration of virtualized objects. A virtual object may be associated with a desired state defining a first plurality of resources for allocating to the virtual object. The first plurality of resources correspond to one or more resource types. Techniques include determining that each of a plurality of hosts does not have sufficient available resources to allocate the first plurality of resources to the virtual object according to the desired state. Techniques include selecting a first host of the plurality of hosts to run the virtual object. Techniques include allocating a second plurality of resources to the virtual object from the first host, wherein the second plurality of resources is less than the first plurality of resources, and running the virtual object in the first host. | 2022-09-08 |
20220283837 | SMARTNIC BASED VIRTUAL SPLITTER ENSURING MICROSECOND LATENCIES - A data protection system includes a splitter configured to reduce latencies when splitting writes in a computing environment. The splitter captures a write and adds metadata to augment the write with virtual related information. The augmented data is provided to a smartNIC while the write is then processed in the IO stack. The smartNIC may have a volume only visible to the splitter. The smartNIC also includes processing power that allows data protection operations to be performed at the smartNIC rather than with the processing resources of the host. | 2022-09-08 |
20220283838 | SYSTEM AND METHOD ENABLING SOFTWARE-CONTROLLED PROCESSOR CUSTOMIZATION FOR WORKLOAD OPTIMIZATION - A system and method for virtual processor customization based upon the particular workload placed upon the virtual processor by one or more execution contexts within a given program or process. The customization serves to optimize the virtual processor architecture based upon a determination as to the size and/or type of virtual execution registers optimally suited for supporting a given execution context. This results in a time-variant processor architecture which not only provides optimized computational attributes, but also affords a high degree of inherent process security. | 2022-09-08 |
20220283839 | DIRECT ACCESS TO HARDWARE ACCELERATOR IN AN O-RAN SYSTEM - Some embodiments provide various methods for offloading operations in an O-RAN (Open Radio Access Network) onto control plane (CP) or edge applications that execute on host computers with hardware accelerators in software defined datacenters (SDDCs). At the CP or edge application operating on a machine executing on a host computer with a hardware accelerator, the method of some embodiments receives data, from an O-RAN E2 unit, to perform an operation. The method uses a driver of the machine to communicate directly with the hardware accelerator to direct the hardware accelerator to perform a set of computations associated with the operation. This driver allows the communication with the hardware accelerator to bypass an intervening set of drivers executing on the host computer between the machine's driver and the hardware accelerator. Through this driver, the application in some embodiments receives the computation results, which it then provides to one or more O-RAN components (e.g., to the E2 unit that provided the data, another E2 unit or another control plane or edge application). | 2022-09-08 |
20220283840 | CONFIGURING DIRECT ACCESS TO HARDWARE ACCELERATOR IN AN O-RAN SYSTEM - Some embodiments provide various methods for offloading operations in an O-RAN (Open Radio Access Network) onto control plane (CP) or edge applications that execute on host computers with hardware accelerators in software defined datacenters (SDDCs). At the CP or edge application operating on a machine executing on a host computer with a hardware accelerator, the method of some embodiments receives data, from an O-RAN E2 unit, to perform an operation. The method uses a driver of the machine to communicate directly with the hardware accelerator to direct the hardware accelerator to perform a set of computations associated with the operation. This driver allows the communication with the hardware accelerator to bypass an intervening set of drivers executing on the host computer between the machine's driver and the hardware accelerator. Through this driver, the application in some embodiments receives the computation results, which it then provides to one or more O-RAN components (e.g., to the E2 unit that provided the data, another E2 unit or another control plane or edge application). | 2022-09-08 |
20220283841 | USING HYPERVISOR TO PROVIDE VIRTUAL HARDWARE ACCELERATORS IN AN O-RAN SYSTEM - Some embodiments provide various methods for offloading operations in an O-RAN (Open Radio Access Network) onto control plane (CP) or edge applications that execute on host computers with hardware accelerators in software defined datacenters (SDDCs). At the CP or edge application operating on a machine executing on a host computer with a hardware accelerator, the method of some embodiments receives data, from an O-RAN E2 unit, to perform an operation. The method uses a driver of the machine to communicate directly with the hardware accelerator to direct the hardware accelerator to perform a set of computations associated with the operation. This driver allows the communication with the hardware accelerator to bypass an intervening set of drivers executing on the host computer between the machine's driver and the hardware accelerator. Through this driver, the application in some embodiments receives the computation results, which it then provides to one or more O-RAN components (e.g., to the E2 unit that provided the data, another E2 unit or another control plane or edge application). | 2022-09-08 |
20220283842 | RUNNING SERVICES IN SDL OF A RIC - To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high speed interfaces between these machines. Some or all of these interfaces operate in non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high speed IO between the E2 nodes and the xApps. | 2022-09-08 |
20220283843 | RIC WITH A DEDICATED IO THREAD AND MULTIPLE DATA PROCESSING THREADS - To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high speed interfaces between these machines. Some or all of these interfaces operate in non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high speed IO between the E2 nodes and the xApps. | 2022-09-08 |
20220283844 | APPARATUS AND METHOD FOR PORT MAPPING OF VIRTUAL MACHINES IN CLOUD INFRASTRUCTURE - A method and apparatus for mapping a virtual machine in a cloud infrastructure to a network port. The method includes: obtaining, by a smart monitoring device distinct from the virtual machine and a network appliance, a first location of the virtual machine in the cloud infrastructure; mapping, by the smart monitoring device, the first location of the virtual machine to a first source port in the network appliance and a first destination port in the network appliance; based on a determination that the virtual machine is not at the first location, obtaining, by the smart monitoring device, a second location of the virtual machine in the cloud infrastructure; and based on the determination that the virtual machine is not at the first location, mapping, by the smart monitoring device, the second location of the virtual machine to a second source port in the network appliance and a second destination port in the network appliance. | 2022-09-08 |
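The port-mapping abstract above boils down to: a monitoring component maps a VM's current location to a source/destination port pair, and re-maps to a fresh pair when the VM is observed at a new location. A toy sketch under that reading follows; the port-allocation rule and data shapes are assumptions for illustration only.

```python
# Illustrative sketch of re-mapping a VM's cloud location to network-appliance
# ports when the VM moves; the sequential port-assignment rule is an assumption.

class PortMapper:
    def __init__(self):
        self.mapping = {}  # vm_id -> (location, src_port, dst_port)
        self._next_port = 10000

    def _allocate_pair(self):
        src, dst = self._next_port, self._next_port + 1
        self._next_port += 2
        return src, dst

    def observe(self, vm_id, location):
        """Map (or re-map) the VM to a source/destination port pair."""
        entry = self.mapping.get(vm_id)
        if entry is None or entry[0] != location:  # VM is new or has moved
            src, dst = self._allocate_pair()
            self.mapping[vm_id] = (location, src, dst)
        return self.mapping[vm_id]
```

Observing the same location twice keeps the existing ports; only a changed location triggers the second mapping the abstract describes.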
20220283845 | SUGGESTION PRESENTATION METHOD AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM - Provided is an improvement suggestion presentation method implemented by a computer, including acquiring first parameters relating to an operation status of a first system, acquiring second parameters relating to operation statuses of second systems, identifying a distribution of each of the second parameters, calculating, for each of the first parameters, a difference between one of the first parameters and the distribution of a third parameter of the second parameters, the third parameter being of the same type as the one of the first parameters, identifying, from among the first parameters, a resource parameter indicating an amount of allocation of a resource that improves the operation status of the first system, based on the differences, and presenting the resource parameter identified. | 2022-09-08 |
20220283846 | POD DEPLOYMENT METHOD AND APPARATUS - This application relates to the field of cloud computing technologies, and discloses a pod deployment method and apparatus. A master node receives an instruction for deploying one or more scheduling domains, where the instruction includes a quantity of each type of resources occupied by each of the scheduling domains; selects a worker node for deploying the one or more scheduling domains, and sends, to the worker node, the instruction for deploying the one or more scheduling domains, where the instruction includes the quantity of each type of resources occupied by the scheduling domain; and sends, to the worker node based on association information of a scheduling domain, an instruction for deploying one or more pods, where the instruction includes a quantity of pods, a quantity of containers included in each pod, and an identifier of the scheduling domain to which resources used by the one or more pods belong. | 2022-09-08 |
20220283847 | TASK DISPATCH - Apparatuses and methods are disclosed for performing data processing operations in main processing circuitry and delegating certain tasks to auxiliary processing circuitry. User-specified instructions executed by the main processing circuitry comprise a task dispatch specification specifying an indication of the auxiliary processing circuitry and multiple data words defining a delegated task comprising at least one virtual address indicator. In response to the task dispatch specification the main processing circuitry performs virtual-to-physical address translation with respect to the at least one virtual address indicator to derive at least one physical address indicator, and issues a task dispatch memory write transaction to the auxiliary processing circuitry that comprises the indication of the auxiliary processing circuitry and the multiple data words, wherein the at least one virtual address indicator in the multiple data words is substituted by the at least one physical address indicator. | 2022-09-08 |
20220283848 | ENABLING MODERN STANDBY FOR UNSUPPORTED APPLICATIONS - Modern Standby is enabled for unsupported applications. An enabler driver can be included on a system that supports Modern Standby and can be configured to detect when applications are loaded on the system. When an unsupported application is loaded, the enabler driver can interface with an enabler service to determine whether the unsupported application is Modern Standby capable. If so, the enabler driver can add the unsupported application to a throttle job object that the operating system uses to determine which applications should remain active during Modern Standby. In instances where an application is deployed in a container, an enabler container service can be leveraged to determine whether the containerized application is Modern Standby capable. If so, the enabler driver can add the container to the throttle job object. | 2022-09-08 |
20220283849 | ENABLING WORKERS TO SWAP BETWEEN MOBILE DEVICES - A method for identifying a second device by a first device for establishing a communication between the first device and the second device is described here. The method includes receiving, by a processor of a first device, a voice command from a worker in a workplace. In an example, the method comprises pausing, by the processor, a workflow operation executing on the first device. The method further comprises performing, by the processor, a voice recognition to analyze the voice command of the worker. The method includes activating, by the processor, a communication module of the first device based on the voice recognition, to identify a second device in proximity to the first device. The method includes terminating, by the processor, a connection between the first device and a wearable electronic device, and thus terminating, by the processor, a second connection of the first device with the second device. | 2022-09-08 |
20220283850 | DATA TRANSFER SCHEDULING FOR HARDWARE ACCELERATOR - A computing device, including a processor configured to perform data transfer scheduling for a hardware accelerator including a plurality of processing areas. Performing data transfer scheduling may include receiving a plurality of data transfer instructions that encode requests to transfer data to respective processing areas. Performing data transfer scheduling may further include identifying a plurality of transfer path conflicts between the data transfer instructions. Performing data transfer scheduling may further include sorting the data transfer instructions into a plurality of transfer instruction subsets. Within each transfer instruction subset, none of the data transfer instructions have transfer path conflicts. For each transfer instruction subset, performing data transfer scheduling may further include conveying the data transfer instructions included in that transfer instruction subset to the hardware accelerator. The data transfer instructions may be conveyed in a plurality of sequential data transfer phases that correspond to the transfer instruction subsets. | 2022-09-08 |
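The scheduling abstract above (sorting transfer instructions into subsets with no internal path conflicts, then issuing the subsets as sequential phases) is essentially a greedy grouping problem. A minimal sketch follows; the conflict test used here (two transfers conflict when they target the same processing area) is a stand-in assumption, not the patent's conflict definition.

```python
# Sketch of splitting data transfer instructions into conflict-free phases,
# akin to greedy graph coloring. Conflict detection is a simplified stand-in.

def schedule_transfers(transfers):
    """transfers: list of (instruction_id, processing_area). Returns phases,
    each phase being a list of instructions with no mutual conflicts."""
    phases = []
    for inst, area in transfers:
        # Place each instruction in the first phase with no path conflict.
        for phase in phases:
            if all(area != other_area for _, other_area in phase):
                phase.append((inst, area))
                break
        else:
            phases.append([(inst, area)])  # open a new sequential phase
    return phases
```

Each returned phase corresponds to one of the abstract's "sequential data transfer phases": everything inside a phase can be conveyed to the accelerator concurrently.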
20220283851 | ELECTRONIC DEVICE INCLUDING ACCELERATORS OF HETEROGENEOUS HARDWARE TYPES - An electronic device includes: a host processor configured to control an operation of the electronic device; accelerators of heterogeneous hardware types configured to exchange data with each other through direct communication; and a control unit configured to convert a command received from the host processor, based on a type of each of the accelerators and transfer a result of the converting to a corresponding accelerator among the accelerators. | 2022-09-08 |
20220283852 | DYNAMIC MODELER - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for dynamically modeling a page using dynamic data. One of the methods includes obtaining, from a user device associated with a first resource of a dynamic modeling system, a dynamic final event comprising data representing a transaction of the dynamic modeling system; generating, by a rule monitor of the dynamic modeling system and using the data representing the transaction, a task chain for the plurality of tasks, comprising: generating a plurality of tasks in the task chain, and determining, for each task in the task chain, one or more criteria for executing the task; and for each task of the plurality of tasks: determining, by the rule monitor, that the one or more criteria for executing the task are satisfied; and in response to determining that the one or more criteria are satisfied, executing the task. | 2022-09-08 |
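The dynamic-modeler abstract above describes generating a task chain from a transaction event, attaching execution criteria to each task, and executing only tasks whose criteria are satisfied. The sketch below illustrates that control flow; the specific tasks and criteria are invented placeholders, not anything from the patent.

```python
# Sketch of a rule monitor building a task chain from a transaction and
# executing each task only when its criteria are met. Task names and criteria
# are hypothetical examples.

def build_task_chain(transaction):
    return [
        {"name": "validate", "criteria": lambda tx: "amount" in tx},
        {"name": "post", "criteria": lambda tx: tx.get("amount", 0) > 0},
        {"name": "notify", "criteria": lambda tx: tx.get("notify", False)},
    ]

def run_chain(transaction):
    executed = []
    for task in build_task_chain(transaction):
        if task["criteria"](transaction):  # the rule monitor's criteria check
            executed.append(task["name"])
    return executed
```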
20220283853 | ANALYSIS SYSTEM, ANALYSIS METHOD, AND ANALYSIS PROGRAM - An analysis system includes processing circuitry configured to extract each running process and each thread in each process from data that records a state of a memory of an analysis object apparatus, acquire an object belonging to the process or the thread having been extracted, and specify a same object belonging to a plurality of processes or a plurality of threads among objects acquired and associate the plurality of processes or the plurality of threads to which the same object belongs. | 2022-09-08 |
20220283854 | DETERMINING A JOB GROUP STATUS BASED ON A RELATIONSHIP BETWEEN A GENERATION COUNTER VALUE AND A TICKET VALUE FOR SCHEDULING THE JOB GROUP FOR EXECUTION - A job scheduler system includes one or more hardware processors, a memory including a job group queue stored in the memory, and a job scheduler engine configured to create a first job group in the job group queue, the first job group includes a generation counter having an initial value, receive a first request to steal the first job group, determine a state of the first job group based at least in part on the generation counter, the state indicating that the first job group is available to steal, based on the determining the state of the first job group, atomically increment the generation counter, thereby making the first job group unavailable for stealing, and alter an execution order of the first job group ahead of at least one other job group in the job group queue. | 2022-09-08 |
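The generation-counter/ticket handshake in the abstract above is a classic optimistic-claim pattern: a would-be thief records the counter as a ticket, then claims the group only if the counter is still equal to that ticket, incrementing it atomically on success. A simplified sketch follows; a `threading.Lock` stands in for the atomic compare-and-swap, and the API names are assumptions.

```python
# Sketch of stealing a job group via a generation counter: the increment is
# what makes the group unavailable to any other thief holding a stale ticket.
import threading

class JobGroup:
    def __init__(self):
        self.generation = 0
        self._lock = threading.Lock()  # stands in for an atomic CAS

    def ticket(self):
        return self.generation  # value a would-be thief observed

    def try_steal(self, ticket):
        """Succeeds only if no one has stolen the group since `ticket` was taken."""
        with self._lock:
            if self.generation == ticket:
                self.generation += 1  # group now unavailable for stealing
                return True
            return False
```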
20220283855 | METHOD FOR CONTROLLING WEARABLE DEVICE, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM - A method for controlling a wearable device is provided. The wearable device runs a first system and a second system, the first system having a higher power consumption than the second system. The method includes: obtaining a scheduled start time; after the start time is reached, obtaining user behavior data; and when the user behavior data satisfies a preset system switching condition, switching the system running on the wearable device to the second system. | 2022-09-08 |
20220283856 | SUSTAINABILITY AND EFFICIENCY AS A SERVICE - Minimizing an energy use of virtual machines at one or more information handling systems, including receiving a plurality of computing tasks, each task associated with an energy efficiency indicator; positioning each of the tasks within a task queue indicating an order of execution of the tasks based on the energy efficiency indicator for each task; identifying a plurality of virtual machines, each virtual machine associated with a thermal efficiency indicator based on a historical energy usage of the virtual machine; sorting the virtual machines to identify a distribution of the virtual machines based on the thermal efficiency indicator of the respective virtual machines; allocating the virtual machines to execute the tasks based on i) the distribution of the virtual machines and ii) the task queue; and executing the tasks by the virtual machines based on the allocation. | 2022-09-08 |
20220283857 | ENABLING DYNAMIC MOBILE DEVICE USAGE SUGGESTIONS - Apparatuses, methods, systems, and program products are disclosed for dynamic mobile device resource management. An apparatus includes a processor and a memory that stores code executable by the processor. The code is executable by the processor to determine a current usage rate of a constrained resource of an electronic device based on a user's activity on the electronic device, forecast a remaining utility of the constrained resource based on the current usage rate and historical usage of the electronic device, and take an action related to the constrained resource in response to the forecasted remaining utility of the constrained resource satisfying a utility threshold. | 2022-09-08 |
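The forecasting step in the abstract above (blend current usage rate with historical usage, project remaining utility, act when a threshold is crossed) can be sketched in a few lines. The blend weight, units, and threshold below are illustrative assumptions, not the patented method.

```python
# Sketch: forecast a constrained resource's remaining utility from current
# and historical drain rates, then take an action on a utility threshold.

def forecast_remaining_hours(level, current_rate, historical_rate, weight=0.5):
    """Blend current and historical drain rates (units per hour)."""
    rate = weight * current_rate + (1 - weight) * historical_rate
    return level / rate if rate > 0 else float("inf")

def suggest_action(level, current_rate, historical_rate, threshold_hours=2.0):
    hours = forecast_remaining_hours(level, current_rate, historical_rate)
    return "warn_user" if hours < threshold_hours else "no_action"
```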
20220283858 | SERVERLESS RUNTIME CONTAINER ALLOCATION - A method, system, and computer program product for implementing automated serverless runtime container allocation is provided. The method includes defining a number of runtime containers and associated characteristics required for each worker node of a plurality of worker nodes for execution of a specified workload. The specified workload is dispatched to the plurality of worker nodes and a specified portion of the specified workload is assigned to each worker node. An application executing a universal runtime container that includes potential application runtimes and associated supported software versions within a layered modifiable format is generated and unused layers are removed from the universal runtime container. The specified workload is executed via the universal runtime container and a set of available universal runtime containers is refilled on an associated worker node. | 2022-09-08 |
20220283859 | METHOD AND DEVICE FOR PROCESSING DATA - A computer-implemented method for processing data for applications in the field of cloud computing and/or edge computing, for vehicles. The method includes: providing multiple computing services using at least two different hardware resources, and using the multiple computing services. | 2022-09-08 |
20220283860 | GUARANTEED QUALITY OF SERVICE IN CLOUD COMPUTING ENVIRONMENTS - Systems, methods, apparatuses, and computer-readable media for guaranteed quality of service (QoS) in cloud computing environments. A workload related to an immutable log describing a transaction may be received. A determination is made based on the immutable log that a first compute node stores at least one data element to process the transaction. Utilization levels of computing resources of the first compute node may be determined. Utilization levels of links connecting the first compute node to the fabric may be determined. A determination may be made, based on the utilization levels, that processing the workload on the first compute node satisfies one or more QoS parameters specified in a service level agreement (SLA). The workload may be scheduled for processing on the first compute node based on the determination that processing the workload on the first compute node satisfies the one or more QoS parameters specified in the SLA. | 2022-09-08 |
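The QoS-scheduling decision in the abstract above reduces to: prefer the node that already stores the transaction's data, but only place the workload there if its compute and link utilization still satisfy the SLA. A toy sketch follows; the field names, fractional utilization model, and fallback behavior are assumptions for illustration.

```python
# Sketch of an SLA-gated placement decision. A node is eligible only if its
# CPU and fabric-link utilization leave headroom per the SLA's QoS limits.

def satisfies_sla(node, sla):
    """node: {'name': str, 'cpu_util': float, 'link_util': float} (0..1)."""
    return (node["cpu_util"] <= sla["max_cpu_util"]
            and node["link_util"] <= sla["max_link_util"])

def schedule(workload, data_node, fallback_nodes, sla):
    # Prefer the node that already stores the transaction's data.
    if satisfies_sla(data_node, sla):
        return data_node["name"]
    for node in fallback_nodes:
        if satisfies_sla(node, sla):
            return node["name"]
    return None  # defer: no node can honor the SLA right now
```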
20220283861 | Routing Log-Based Information - Routing log-based information between production servers and logging servers is disclosed. A log entry for a logging server is generated at a production server. A shard identifier is computed for a shard associated with the logging server based on application of a hashing algorithm to properties associated with the production server. The hashing algorithm and properties are selected to prevent or minimize the likelihood of computing of the same shard identifier by another production server for the same shard associated with the logging server. The log entry is transmitted to the shard associated with the logging server. A determination is made that the logging server has malfunctioned by detecting that the log entry transmitted to the shard is absent. In response, another shard identifier is computed for another shard of another logging server, and a subsequent log entry from the production server is transmitted to that other shard of the other logging server. No load balancers are used by the routing system. | 2022-09-08 |
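The shard-selection step above (hash stable production-server properties to a shard identifier, and compute a different identifier after a malfunction is detected) can be sketched with a salted hash. The use of SHA-256 and an `attempt` salt are assumptions; the patent does not specify a hash function.

```python
# Sketch: derive a shard id from server properties; bumping `attempt`
# deterministically reroutes to a (likely different) shard after a detected
# logging-server malfunction.
import hashlib

def shard_id(server_props, num_shards, attempt=0):
    """server_props: stable identifying strings for the production server."""
    key = "|".join(server_props) + f"|{attempt}"
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards
```

Because every production server computes its shard locally and deterministically, no load balancer is needed, matching the abstract's final claim.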
20220283862 | INVOCATION CALLS ENABLING APPLICATIONS TO INITIATE REACTIVE ACTIONS IN RESPONSE TO RESOURCE STATE CHANGES OF ASSETS - An apparatus comprises a processing device configured to register one or more applications to receive resource state change invocation calls from one or more assets of an information technology infrastructure, to detect resource state changes for the one or more assets of the information technology infrastructure, and to provide, from a given one of the one or more assets of the information technology infrastructure to a given one of the one or more applications, a given resource state change invocation call responsive to detecting one or more resource state changes for the given asset. The processing device is also configured to receive, from the given application, an instruction to initiate one or more reactive actions based at least in part on the detected one or more resource state changes for the given asset, and to apply at least one of the one or more reactive actions to the given asset. | 2022-09-08 |
20220283863 | RESPONDING TO APPLICATION DEMAND IN A SYSTEM THAT USES PROGRAMMABLE LOGIC COMPONENTS - Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation. | 2022-09-08 |
20220283864 | ADVANCED MEMORY TRACKING FOR KERNEL MODE DRIVERS - Described are examples for tracking memory usage of a driver. A memory allocation request related to the driver to allocate a portion of memory for the driver can be traced in a kernel mode of an operating system. One or more associated allocation parameters can be recorded, and an allocation history of the driver over a period of time can be reported during execution of the driver and based on the one or more allocation parameters indicated by the memory allocation request. | 2022-09-08 |
20220283865 | ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF - An electronic apparatus includes: a memory; a storage; and a processor, wherein: the electronic apparatus is configured to execute a plurality of processes as data of the plurality of processes is loaded into the memory based on execution of at least one program stored in the storage, the processor is configured to: identify a function currently running among a plurality of functions providable by the electronic apparatus, and based on a relationship between the plurality of processes and the identified function, terminate at least one process among the plurality of running processes, and allow a storage area of the memory loaded with the data of the terminated process to be available for another process. | 2022-09-08 |
20220283866 | JOB TARGET ALIASING IN DISAGGREGATED COMPUTING SYSTEMS - Deployment of arrangements of physical computing components coupled over a communication fabric are presented herein. In one example, a method includes presenting, to a workload manager, a target machine capable of receiving execution jobs from the workload manager. The target machine has a network state and comprises a selection of computing components. The method also includes receiving a job issued by the workload manager that is directed to the target machine. Based on properties of the job, the method includes determining resource requirements for handling the job, forming a composed machine comprising physical computing components that support the resource requirements of the job, transferring the network state of the target machine to the composed machine and indicating the network state of the composed machine to the workload manager, and initiating execution of the job on the composed machine. | 2022-09-08 |
20220283867 | MANAGEMENT OF A SCALABLE POOL OF WORKSTATION INSTANCES - Various embodiments of the present application set forth a computer-implemented method comprising receiving, from a client, a request for a workstation instance having a first configuration, in response to the request, generating a first workstation pool associated with the first configuration, wherein the first workstation pool includes at least two unassigned workstation instances having the first configuration, and assigning at least a first workstation instance included in the at least two unassigned workstation instances to the client. | 2022-09-08 |
20220283868 | ACCELERATOR CONTROL SYSTEM AND ACCELERATOR CONTROL METHOD - In an accelerator control system ( | 2022-09-08 |
20220283869 | Resource Scheduling Method, Apparatus and System - The embodiments of the present disclosure disclose a resource scheduling method, apparatus and system. The method comprises: receiving, by a management node, a Pod creation request from a user, wherein the Pod creation request comprises: requirements of each container for each type of shareable resources; selecting, by the management node, a node for a Pod object to be created, and allocating each type of shareable resources of each container to the Pod object according to shareable resource information of a shareable device in the selected node; and binding the Pod object, the selected node and allocated resources, and storing the Pod object bound to the selected node and the allocated resources. The embodiments of the present disclosure introduce a management and scheduling mechanism for shared resources, thereby saving resources and increasing the resource utilization. | 2022-09-08 |
20220283870 | FORECASTING AND REPORTING AVAILABLE ACCESS TIMES OF PHYSICAL RESOURCES - An approach is provided for forecasting and reporting available access times of a physical resource. Availability data of a physical resource may be determined from historical data of the physical resource, counter data of the physical resource, reservation data of the physical resource, constraint data of the physical resource, and criteria data of the physical resource, or any combination thereof. The availability data may be displayed on client computing devices upon request. | 2022-09-08 |
20220283871 | Multi-Account Cloud Service Usage Package Sharing Method and Apparatus, and Related Device - This application provides a multi-account cloud service usage package sharing method and apparatus, and a related device. The method includes: receiving a sharing policy of a first usage package of a first account and a sharing policy of a second usage package of a second account; generating a sharing execution plan according to the sharing policy of the first usage package and the sharing policy of the second usage package; sorting a use record of a usage package of the first account and a use record of a usage package of the second account according to the sharing execution plan, to generate a to-be-deducted queue; and deducting the first usage package and the second usage package according to the to-be-deducted queue. | 2022-09-08 |
20220283872 | CONTAINER SERVICE MANAGEMENT METHOD AND APPARATUS - A container service management method and apparatus, to integrate a container service and a container service management function into an NFV MANO system. The method includes: receiving, by a container service management entity, a creation request for a container service, where the creation request is used to request to create a specified container service, and the creation request carries a first management policy for managing a lifecycle of the specified container service; creating, by the container service management entity, the specified container service in response to the creation request; and managing, by the container service management entity, the lifecycle of the specified container service according to the first management policy. | 2022-09-08 |
20220283873 | VM MEMORY RECLAMATION BY BUFFERING HYPERVISOR-SWAPPED PAGES - In some aspects, a non-transitory computer readable storage medium includes instructions stored thereon that, when executed by a processor, cause the processor to detect that system software is proceeding to swap memory content of a virtual machine (VM) from memory to storage, wherein the memory is allocated to the VM; buffer the memory content; and perform alternative memory reclamation of the memory. | 2022-09-08 |
20220283874 | Automatic Identification of Computer Agents for Throttling - Computer agents can be throttled individually. In an example, when a computer agent completes a work item, the computer agent reports this to a central component that maintains a vote value for that agent and that increases the respective vote value based on the completed work item. When the central component determines that system performance is sufficiently diminished, central component can throttle the performance of those computer agents having respective vote values above a predetermined threshold value. | 2022-09-08 |
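The throttling abstract above is compact enough to sketch directly: agents report completed work items to a central component that accumulates a per-agent vote value, and when system performance is degraded, agents above a threshold are throttled. Threshold and weighting below are illustrative assumptions.

```python
# Sketch of per-agent vote accounting and threshold-based throttling.

class ThrottleCoordinator:
    def __init__(self, threshold=10):
        self.votes = {}
        self.threshold = threshold

    def report_completion(self, agent_id, weight=1):
        # Each completed work item increases the agent's vote value.
        self.votes[agent_id] = self.votes.get(agent_id, 0) + weight

    def agents_to_throttle(self, system_degraded):
        if not system_degraded:
            return []
        return sorted(a for a, v in self.votes.items() if v > self.threshold)
```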
20220283875 | STORAGE SYSTEM, RESOURCE CONTROL METHOD, AND RECORDING MEDIUM - Resources of physical servers are appropriately allocated. In a storage system including one or more physical servers, one or more protocol VMs to which resources of the physical servers are allocated and which perform processing related to a protocol of a file storage through a frontend network with a client and one or more filesystem VMs which perform processing related to management of a file in the file storage are formed in the physical server. The physical server acquires load information regarding loads of the protocol VMs and the filesystem VMs and controls the allocation of the resources of the physical servers to the protocol VMs and the filesystem VMs based on the load information. | 2022-09-08 |
20220283876 | DYNAMIC RESOURCE ALLOCATION FOR EFFICIENT PARALLEL PROCESSING OF DATA STREAM SLICES - A method for processing slices of a data stream in parallel by different workers includes receiving events of the data stream and forwarding the events to respective ones of the workers for updating respective states of the respective workers and for outputting results of data processing of the events. The states comprise hierarchically grouped state variables. At least one of the workers checks whether it is in a terminable state by checking that state variables that are owned by the worker in a current state of the worker have initial values. | 2022-09-08 |
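The terminability check described above (a worker is terminable when every state variable it owns in its current state still holds its initial value) maps naturally onto a recursive comparison over the hierarchically grouped variables. The nested-dict representation below is a simplifying assumption about how that hierarchy is stored.

```python
# Sketch: a worker is in a terminable state iff all state variables it owns,
# across the hierarchical grouping, still equal their initial values.

def is_terminable(owned_state, initial_state):
    """Both arguments are nested dicts of hierarchically grouped variables."""
    for key, value in owned_state.items():
        init = initial_state.get(key)
        if isinstance(value, dict):
            if not is_terminable(value, init or {}):
                return False
        elif value != init:
            return False
    return True
```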
20220283877 | SURROGATE PROCESS CREATION TECHNIQUE FOR HIGH PROCESS-PER-SERVER SCENARIOS - A system and method for launching parallel processes on a server configured to process a number of parallel processes. A request is received from a parallel application to start a number of parallel processes. In response to this request a launcher creates a surrogate. The surrogate inherits communications channels from the launcher. The surrogate then executes activities related to the launch of the parallel processes, and then launches the parallel processes. Once the parallel processes are launched, the surrogate is terminated. | 2022-09-08 |
20220283878 | DEPENDENCY-BASED DATA ROUTING FOR DISTRIBUTED COMPUTING - A data router receives data from a data source and stores the data in a buffer of the data router. The data router analyzes the data in the buffer to identify the data source. The data router uses a routing map to identify a destination for the data based on the data source and streams the data from the buffer to the destination. | 2022-09-08 |
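The buffer-and-route flow above can be sketched with a routing map keyed by data source; the map contents, record layout, and source-identification step are assumptions, not the patented implementation.

```python
class DataRouter:
    def __init__(self, routing_map):
        self.routing_map = routing_map  # data source name -> destination
        self.buffer = []

    def receive(self, data):
        # Store incoming data in the router's buffer
        self.buffer.append(data)

    def route(self):
        # Analyze buffered data to identify each source, then look up
        # the destination in the routing map and stream the payload out
        routed = []
        for data in self.buffer:
            source = data["source"]
            dest = self.routing_map[source]
            routed.append((dest, data["payload"]))
        self.buffer.clear()
        return routed

router = DataRouter({"sensor-a": "analytics", "sensor-b": "archive"})
router.receive({"source": "sensor-a", "payload": 1})
print(router.route())  # [('analytics', 1)]
```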
20220283879 | RESILIENT ADAPTIVE BIASED LOCKING IN MULTI-THREAD CONCURRENT PROGRAM EXECUTION - A computer-implemented method and system for resilient adaptive biased locking. The method includes adding, in a system including an adaptive lock reservation scheme having a learning state, a component comprising a per-class counter that counts, collectively, a number of learning failures and a number of revocation failures. An embodiment includes initializing the per-class counter upon loading a class with a predetermined value representing at least one of a maximum number of learning failures and cancellation instances associated with the class. An embodiment includes initializing, based on a determination of an operational state of the per-class counter for an object transitioning from one of the learning state and a biased state to a flatlock state, a lock word of the object directly to the flatlock state while bypassing the biased state. | 2022-09-08 |
20220283880 | SYSTEMS AND METHODS FOR ENABLING CONCURRENT APPLICATIONS TO PERFORM EXTREME WIDEBAND DIGITAL SIGNAL PROCESSING WITH MULTICHANNEL COHERENCY - A method for digital signal processing of sensor data includes receiving digitized samples of sensor signals via a network connection; converting the digitized samples into a standardized format; storing the converted digitized samples in a shared memory data structure in memory of a single instruction multiple data (SIMD) processor; and providing zero-copy read access to the converted digitized samples stored in the shared memory data structure to a plurality of applications. | 2022-09-08 |
20220283881 | METHODS AND APPARATUS FOR DATA PIPELINES BETWEEN CLOUD COMPUTING PLATFORMS - Methods, apparatus, systems and articles of manufacture are disclosed to establish a data pipeline between cloud computing platforms. An example apparatus includes at least one memory, machine readable instructions in the apparatus, and processor circuitry to execute the machine readable instructions to at least extract a data producer name from data, the data to be provided from a data producer to a data consumer, identify a buffer identifier based on a mapping of the data producer name to the buffer identifier, cause transmission of the data to a buffer associated with the buffer identifier, and cause transmission of the data from the buffer to the data consumer based on an association between the buffer identifier and a data consumer name, the data consumer name corresponding to the data consumer. | 2022-09-08 |
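The producer-to-buffer-to-consumer mapping described above can be sketched as two lookup tables; the service names, buffer identifiers, and synchronous delivery are illustrative assumptions (the patent targets pipelines between cloud platforms, not in-process dictionaries).

```python
producer_to_buffer = {"orders-service": "buf-1"}   # producer name -> buffer id
buffer_to_consumer = {"buf-1": "billing-service"}  # buffer id -> consumer name
buffers = {"buf-1": []}

def send(data, producer_name):
    buf_id = producer_to_buffer[producer_name]  # map producer name -> buffer id
    buffers[buf_id].append(data)                # transmit data to that buffer
    consumer = buffer_to_consumer[buf_id]       # buffer id -> consumer name
    return consumer, buffers[buf_id].pop(0)     # deliver from buffer to consumer

print(send({"order": 42}, "orders-service"))
# ('billing-service', {'order': 42})
```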
20220283882 | DATA IO AND SERVICE ON DIFFERENT PODS OF A RIC - To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high-speed interfaces between these machines. Some or all of these interfaces operate in a non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high-speed IO between the E2 nodes and the xApps. | 2022-09-08 |
20220283883 | DISTRIBUTED PROCESSING IN A MESSAGING PLATFORM - A method for distributed processing involves receiving a graph (G) of targets and of influencers, with each influencer related to at least one target, receiving an action graph of actions performed by one or more of the influencers, and key partitioning G across shards. The method further involves transposing the first graph (G) to obtain a first transposed graph (G | 2022-09-08 |
20220283884 | DATA PROCESSING SYSTEM, DATA PROCESSING APPARATUS, AND RECORDING MEDIUM - A data processing apparatus ( | 2022-09-08 |
20220283885 | APPLICATION PROGRAMMING INTERFACE COMPATIBILITY - A system, comprising a memory and a processor, where the processor is in communication with the memory, is configured to receive a request to determine a compatibility of a first version of an application programming interface (API) with a second version of the API. Next, a model of the first version of the API and a model of the second version of the API are retrieved. Each of the models is parsed to determine a first set of functionality of the first version of the API and a second set of functionality of the second version of the API. The first set of functionality is mapped to the second set of functionality to determine differences between the first set of functionality and the second set of functionality. The compatibility of the first version of the API with the second version of the API is determined based on the differences. | 2022-09-08 |
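The mapping-and-diff step above can be sketched by treating each API model as a set of (name, signature) pairs; real API models (e.g. OpenAPI documents) would need a proper parser, and the "compatible if nothing was removed" rule is an assumed simplification.

```python
def diff_functionality(v1_funcs, v2_funcs):
    removed = v1_funcs - v2_funcs  # present in v1, missing or changed in v2
    added = v2_funcs - v1_funcs    # new or changed in v2
    return removed, added

def is_backward_compatible(v1_funcs, v2_funcs):
    removed, _ = diff_functionality(v1_funcs, v2_funcs)
    return not removed  # compatible only if nothing from v1 was lost

v1 = {("get_user", "(id: int)"), ("list_users", "()")}
v2 = {("get_user", "(id: int)"), ("list_users", "(page: int)")}
print(is_backward_compatible(v1, v2))  # False: list_users() signature changed
```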
20220283886 | SYSTEMS AND METHODS FOR FETCHING, SECURING, AND CONTROLLING PRIVATE CREDENTIALS OVER A DISPARATE COMMUNICATION NETWORK WITHOUT RECOMPILING SOURCE CODE - Methods and systems use a client application that resides on a client device (e.g., comprising private and secure access credentials). Using the client application, a user may authorize and/or accept a plurality of configuration files. Upon acceptance, the client application is capable of connecting to any interface predefined in the plurality of configuration files (e.g., corresponding to a predetermined list of network systems). The configuration files may be downloaded from a package repository or may be included during software installation. | 2022-09-08 |
20220283887 | SYSTEM AND METHOD FOR AUTOMATICALLY MONITORING AND DIAGNOSING USER EXPERIENCE PROBLEMS - The following relates generally to diagnosing problems with websites. In some embodiments, a webpage interaction processor receives a list of potential user experience problems. The webpage interaction processor then extracts click data from the website, and processes the extracted click data into grams. Subsequently, an analytics engine is trained based on the processed click data. The trained analytics engine may then diagnose the website's problem as one of the potential user experience problems from the received list. In some embodiments, the process is entirely automated. | 2022-09-08 |
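The click-data-to-grams step above can be sketched as an n-gram transform over an ordered click sequence; the gram size and the click-event names are assumptions, not details from the patent.

```python
def to_grams(clicks, n=2):
    """Turn an ordered click sequence into overlapping n-grams."""
    return [tuple(clicks[i:i + n]) for i in range(len(clicks) - n + 1)]

clicks = ["home", "search", "cart", "checkout"]
print(to_grams(clicks))
# [('home', 'search'), ('search', 'cart'), ('cart', 'checkout')]
```

Grams like these preserve the order of user actions, which is what lets a trained model recognize problem patterns (e.g. repeated back-and-forth between two pages) rather than just page frequencies.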
20220283888 | SYSTEM-ON-CHIP AND METHOD OF OPERATING THE SAME - A system-on-chip is provided. The system-on-chip includes a system bus, a plurality of IP units connected to the system bus, a processor unit including a plurality of cores configured to control the plurality of IP units via the system bus, a monitoring unit configured to monitor a state of the processor unit, and an error detection unit configured to operate as a master device for the plurality of IP units and monitor a register in which error information indicating whether an error has occurred in each of the plurality of IP units is stored. | 2022-09-08 |
20220283889 | FAILURE PROBABILITY EVALUATION DEVICE AND FAILURE PROBABILITY EVALUATION METHOD - Provided are: a technique for improving failure probability evaluation precision, even when the center and tails of an occurrence frequency distribution for stress or strength, or a physical quantity associated with stress or strength, such as load, for example, do not conform to the same probability distribution, by precisely estimating a tail probability density function; and a high-precision failure probability evaluation device. Accordingly, a failure probability evaluation device is provided, comprising: a probability density estimation function estimation unit comprising a storage unit for storing a failure model, which computes the probability of failure in a mechanical system, and a probability variable occurrence frequency distribution used in the failure model, a tail estimation unit for estimating a probability density function for the tails of the occurrence frequency distribution on the basis of an extreme-value statistical model, a center estimation unit for estimating a probability density function for the parts of the occurrence frequency distribution other than the tails, and a connection unit for using the probability density function for the tails and the probability density function for the parts other than the tails to estimate an overall probability density function for the occurrence frequency distribution; and a failure probability computation unit for computing the probability of failure in the mechanical system on the basis of the overall probability density function and the failure model. | 2022-09-08 |
20220283890 | METHOD AND SYSTEM FOR PREDICTING USER INVOLVEMENT REQUIREMENTS FOR UPGRADE FAILURES - A method for managing upgrades of components of clients includes obtaining an upgrade failure prediction request associated with a client of the clients, and in response to obtaining the upgrade failure prediction request: obtaining live data associated with the client, matching the live data with a training data cluster, selecting relevant features associated with processed training data of the training data cluster, generating an upgrade failure prediction using the live data associated with the relevant features and a prediction model, making a determination that the upgrade failure prediction indicates that an action is required, and based on the determination, initiating performance of the action. | 2022-09-08 |
20220283891 | SYSTEMS AND METHODS TO IDENTIFY PRODUCTION INCIDENTS AND PROVIDE AUTOMATED PREVENTIVE AND CORRECTIVE MEASURES - Various methods, apparatuses/systems, and media for identifying production incidents and implementing automated preventive and corrective measures are disclosed. A processor automatically triggers, in response to a generated incident of a job/process/host failure, a self-healing service. The processor identifies the application to which the generated event belongs by accessing a database that stores the application and host details; fetches the functional identification (ID) of the application from the database; identifies the type of job failure or service degradation; automatically executes, by utilizing predefined micro services, the steps required for mitigation; records, in response to executing, the outcome of the mitigation in the database along with the output at each stage of execution; evaluates the outcome of the mitigation by executing health checks using micro services to determine whether the failed job or process or host is healthy; and closes the incident based on a healthy determination. | 2022-09-08 |
20220283892 | SYSTEM AND METHOD FOR DEBUGGING MICROCONTROLLER USING LOW-BANDWIDTH REAL-TIME TRACE - The present disclosure relates to a system for real-time debugging of a microcontroller. The system includes a microcontroller configured in an embedded device to execute a set of instructions, the microcontroller including a counter unit that generates a set of values for the executed set of instructions. An on-chip debugger (OCD) fetches a selective set of data packets of the set of instructions from the microcontroller. An encoder encodes the selective set of data packets and stores the encoded set of data packets in a storage unit, wherein the encoding compresses the data to a minimal information size such that an external debugger unit (EDU) receives the encoded set of data packets through the external interface. | 2022-09-08 |
20220283893 | System and Method for Modular Construction of Executable Programs Having Self-Contained Program Elements - A method for performing a fault tolerant automated sequence of computer implemented tasks including: presenting for selection by a user a plurality of pre-programmed elements, each pre-programmed element being independently executable relative to each other pre-programmed element; receiving from the user a selection of one or more of the pre-programmed elements and a sequence for performing each pre-programmed element to form an exemplary routine; creating an instance of the exemplary routine, the instance of the exemplary routine including an instance of each of the selected pre-programmed elements arranged for performance in accordance with the sequence and configured to perform tasks defined by the pre-programmed elements and the sequence; initiating implementation of the instance of the exemplary routine by initiating performance of the instances of the pre-programmed elements in accordance with the sequence; and executing each instance of the pre-programmed elements according to the sequence. | 2022-09-08 |
20220283894 | DETECTION METHOD AND SYSTEM APPLIED TO INTELLIGENT ANALYSIS AND DETECTION FOR FLASH, INTELLIGENT TERMINAL, AND COMPUTER-READABLE STORAGE MEDIUM - A detection method, an intelligent terminal, and a computer-readable storage medium are provided. The detection method includes: obtaining a column set, a page set, and a block set, and presetting a bad column set, a bad page set, an error threshold, and an initial bad block template; alternately obtaining bad page elements and bad column elements from the block set in sequence based on the error threshold, and alternately updating the bad page set and the bad column set in sequence; based on the bad column sets and the bad page sets corresponding to different error thresholds, updating the error threshold, obtaining a final column set from the bad column sets, and obtaining a final page set from the bad page sets; and obtaining a final bad block template. The present invention can reduce the impact of a bad page on a subsequent operation of selecting a bad column element. | 2022-09-08 |
20220283895 | Recovering a Container Storage System - Recovery of a container storage provider, including: storing, within a first database, configuration information related to the container storage provider; storing, within a second database, the configuration information; and responsive to detecting that one or more components associated with the container storage provider have become unavailable, creating a replacement component using configuration information contained in the second database. | 2022-09-08 |
20220283896 | RANDOM NUMBER GENERATOR - The present disclosure relates to a circuit for testing a random number generator adapted to delivering a series of random bits and comprising at least one test unit configured to detect a defect in the series of random bits, said test circuit being adapted to verifying whether, after the detection of a first defect by the test unit, the number of random bits, generated by the random number generator without the detection of a second defect by said test unit, is smaller than a first threshold. | 2022-09-08 |
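The check described above (counting how many bits are generated between a first and a second defect detection and comparing against a threshold) can be sketched as follows. The defect criterion used here, a run of identical bits, and both threshold values are assumed stand-ins; the patent does not specify the test unit's internals.

```python
RUN_LIMIT = 8          # assumed defect criterion: a run of 8 identical bits
FIRST_THRESHOLD = 100  # the "first threshold" of the abstract (assumed value)

def defect_positions(bits, run_limit=RUN_LIMIT):
    """Yield the indices at which a run of identical bits reaches run_limit."""
    run = 1
    for i in range(1, len(bits)):
        run = run + 1 if bits[i] == bits[i - 1] else 1
        if run == run_limit:
            yield i

def defects_too_close(bits):
    positions = list(defect_positions(bits))
    if len(positions) < 2:
        return False
    # Number of bits generated between the first and second defect detection
    return (positions[1] - positions[0]) < FIRST_THRESHOLD

bits = [0, 1] * 50 + [1] * 8 + [0, 1] * 10 + [0] * 8
print(defects_too_close(bits))  # True: only 29 bits between the two defects
```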
20220283897 | VERIFYING METHOD FOR ECC CIRCUIT OF SRAM - A verifying method for an error checking and correcting (ECC) circuit of a static random-access memory (SRAM) includes inputting original data into an error-correcting-and-coding procedure to output first data; obtaining second data according to an error-injecting mask; performing a bit operation on the first data and the second data to obtain third data; writing the third data into a test target area in the SRAM as fourth data; reading the fourth data from the test target area; inputting the fourth data into an error-correcting-and-decoding procedure to output fifth data and an error message; and obtaining a verification result according to the fifth data, the original data, the error message, and the second data. | 2022-09-08 |
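The verification flow above (encode, inject errors via a mask, write, read back, decode, compare) can be sketched end to end. A Hamming(7,4) single-error-correcting code stands in for the SRAM's actual ECC, and the mask, address, and data values are illustrative assumptions.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

def hamming74_decode(c):
    """Return (corrected data bits, error position; 0 means no error)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # position of the flipped bit, if any
    corrected = c[:]
    if syndrome:
        corrected[syndrome - 1] ^= 1
    return [corrected[2], corrected[4], corrected[5], corrected[6]], syndrome

original = [1, 0, 1, 1]                         # original data
first = hamming74_encode(original)              # first data (encoded)
second = [0, 0, 0, 1, 0, 0, 0]                  # error-injecting mask (second data)
third = [a ^ b for a, b in zip(first, second)]  # bit operation (XOR) -> third data
sram = {0x00: third}                            # write into the test target area
fourth = sram[0x00]                             # read back -> fourth data
fifth, error = hamming74_decode(fourth)         # decode -> fifth data + error message
assert fifth == original and error == 4         # verification result: ECC fixed bit 4
```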
20220283898 | ERROR CODE CALCULATION ON SENSING CIRCUITRY - Examples of the present disclosure provide apparatuses and methods for error code calculation. The apparatus can include an array of memory cells that are coupled to sense lines. The apparatus can include a controller configured to control sensing circuitry, coupled to the sense lines, to perform a number of operations without transferring data via input/output (I/O) lines. The sensing circuitry can be controlled to calculate an error code for data stored in the array of memory cells and compare the error code with an initial error code for the data to determine whether the data has been modified. | 2022-09-08 |
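The compare step described above can be sketched as follows. The patent's point is that the code is computed in the sensing circuitry without any I/O transfer; this host-side Python sketch only illustrates the compare logic, with CRC-32 as an assumed error code.

```python
import zlib

def error_code(data: bytes) -> int:
    # CRC-32 as an assumed stand-in for the in-array error code
    return zlib.crc32(data)

stored = b"row of memory cells"
initial_code = error_code(stored)  # code computed when the data was written

# Later: recompute the code and compare to detect modification
assert error_code(stored) == initial_code                   # unmodified
assert error_code(b"row of memory cellz") != initial_code   # modified
```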