25th week of 2022 patent application highlights part 58 |
Patent application number | Title | Published |
20220197782 | OFFLINE DEBUGGING METHOD - The present invention discloses an offline debugging method, comprising: S01: obtaining interfaces which require return values in test flows of the test device; S02: setting the return value corresponding to each of the interfaces which require the return values, adding M debugging strategies and determining a debugging strategy required to be started; S03: compiling a configuration file comprising the M debugging strategies into an executable file required by the target platform; S04: setting up a virtual machine to fit the target platform, and transferring the executable file to the virtual machine; S05: invoking the test flow, returning the return value set by the debugging strategy corresponding to the interface which requires the return value, and obtaining a debugging result correspondingly. Therefore, the present invention solves the problem that debugging relies on hardware devices in semiconductor automation testing, so as to reduce the complexity and difficulty of debugging a test device. | 2022-06-23 |
20220197783 | SOFTWARE APPLICATION COMPONENT TESTING - Aspects of the present invention disclose a method, computer program product, and system for performing testing on a portion of an application. The method includes one or more processors identifying a test configuration for testing an application. The application comprises a plurality of components. The test configuration includes an indication to test at least one component of the application. The method further includes one or more processors testing the indicated at least one component of the application. The method further includes one or more processors determining a validation result of testing the indicated at least one component of the application. | 2022-06-23 |
20220197784 | METHOD AND SYSTEM FOR GUARANTEEING GAME QUALITY BY USING ARTIFICIAL INTELLIGENCE AGENT - A method of guaranteeing game quality by using an artificial intelligence (AI) agent is provided. The method includes extracting an item list (hereinafter referred to as an inspection item list) for inspecting quality of a target game, extracting and storing log data corresponding to a test performance result for each item of the inspection item list, performing imitation learning of an AI agent model on the basis of the stored log data, performing an automatic test for inspecting quality of the target game by using the AI agent model on which the imitation learning is completed, and automatically recording a bug and an error detected by the AI agent model. | 2022-06-23 |
20220197785 | METHOD FOR MODIFYING BASIC INPUT/OUTPUT SYSTEM OF SERVER - A method to be implemented by the server includes steps of: during a power-on self-test, determining whether a storage device is communicatively connected to the server; when it is determined that a storage device is communicatively connected to the server, determining whether the storage device stores a script file having a preset filename; and when it is determined that the storage device stores a script file having the preset filename, performing a process of modifying the BIOS based on the script file. | 2022-06-23 |
20220197786 | Data Processing Method and Apparatus, Electronic Device, and Storage Medium - This application discloses a data processing method and apparatus, an electronic device, and a storage medium. When execution is performed at an operation layer of a neural network model, based on a pre-stored buffer allocation relationship, a first address range for cyclic addressing is set for a first buffer corresponding to input data and a second address range for cyclic addressing is set for a second buffer corresponding to an output result. Subsequently, cyclic addressing can be performed in the first buffer based on the first address range for cyclic addressing, to read the input data for the operation layer; and cyclic addressing can be performed in the second buffer based on the second address range for cyclic addressing, to write the output result of the operation layer into the second buffer. In this way, efficiency of buffer utilization can be effectively improved, and further operation efficiency for the model is improved. | 2022-06-23 |
20220197787 | DATA TIERING IN HETEROGENEOUS MEMORY SYSTEM - A heterogeneous memory system includes a memory device including first and second memories and a controller including a cache. The controller identifies memory access addresses among addresses for memory regions of the memory device; tracks, for a set period, a number of memory accesses for each memory access address; classifies each memory access address as a frequently accessed address or a normally accessed address based on the number of memory accesses in the set period; and allocates the first memory for frequently accessed data associated with the frequently accessed address and the second memory for normal data associated with the normally accessed address. | 2022-06-23 |
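The tiering decision above is essentially a counting-and-thresholding loop over one monitoring period. A minimal Python sketch of that classification step, where the threshold value, the tier labels, and the trace are illustrative assumptions rather than anything specified in the application:

```python
from collections import Counter

ACCESS_THRESHOLD = 8  # hypothetical cutoff for "frequently accessed"

def classify_addresses(accesses):
    """Count accesses per address over one set period and classify each
    address for placement in the first (fast) or second (normal) memory."""
    counts = Counter(accesses)
    return {addr: "first" if n >= ACCESS_THRESHOLD else "second"
            for addr, n in counts.items()}

# One monitoring period of accesses (addresses are illustrative).
trace = [0x1000] * 12 + [0x2000] * 3 + [0x1000, 0x3000]
print(classify_addresses(trace))
# -> {4096: 'first', 8192: 'second', 12288: 'second'}
```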
20220197788 | DATA STORAGE DEVICE WITH SPARE BLOCKS FOR REPLACING BAD BLOCK IN SUPER BLOCK AND OPERATING METHOD THEREOF - A data storage device includes a memory device and a controller. The memory device includes a plurality of planes, wherein each of the planes includes two or more memory blocks. The controller is configured to control an operation of the memory device. The controller is further configured to generate a first super block as a super block including two or more way-interleavable memory blocks among the plurality of memory blocks of the plurality of planes, determine whether each of the memory blocks included in the first super block is a bad block, retrieve a spare block for replacing a first memory block determined as a bad block, in the plurality of planes; and generate a second replacing super block as a super block in which the first memory block is replaced with a second memory block positioned in a plane which does not have the first memory block, when all spare blocks of a plane including the first memory block are used. | 2022-06-23 |
20220197789 | ADJUSTMENT OF GARBAGE COLLECTION PARAMETERS IN A STORAGE SYSTEM - A system, method, and machine-readable storage medium for performing garbage collection in a distributed storage system are provided. In some embodiments, an efficiency level of a garbage collection process is monitored. The garbage collection process may include removal of one or more data blocks of a set of data blocks that is referenced by a set of content identifiers. The set of slice services and the set of data blocks may reside in a cluster, and a set of probabilistic filters (e.g., Bloom filters) may indicate whether the set of data blocks is in-use. At least one parameter of a probabilistic filter of the set of probabilistic filters may be adjusted (e.g., increased or reduced) if the efficiency level is below the efficiency threshold. Garbage collection may be performed on the set of data blocks in accordance with the set of probabilistic filters. | 2022-06-23 |
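The probabilistic filters named in this abstract are Bloom filters, whose defining trade-off is that membership tests can return false positives but never false negatives, so a dead block may be kept unnecessarily but an in-use block is never reclaimed. A sketch of that in-use check, with the filter size and hash count standing in for the tunable parameters the abstract describes adjusting (the hashing scheme here is an assumption):

```python
import hashlib

class BloomFilter:
    """In-use filter: may report false positives (a dead block is kept),
    never false negatives (an in-use block is never reclaimed)."""
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits         # tunable parameters of the kind the
        self.num_hashes = num_hashes  # abstract describes adjusting
        self.bits = 0

    def _positions(self, block_id):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{block_id}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, block_id):
        for p in self._positions(block_id):
            self.bits |= 1 << p

    def maybe_in_use(self, block_id):
        return all((self.bits >> p) & 1 for p in self._positions(block_id))

in_use = BloomFilter()
for block in ("blk-1", "blk-7"):  # blocks still referenced by content IDs
    in_use.add(block)

# Garbage-collect only blocks the filter reports as not in use.
candidates = ["blk-1", "blk-3", "blk-7", "blk-9"]
print([b for b in candidates if not in_use.maybe_in_use(b)])
# -> almost certainly ['blk-3', 'blk-9']; false positives are possible
```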
20220197790 | VALID DATA IDENTIFICATION FOR GARBAGE COLLECTION - Methods, systems, and devices for valid data identification for garbage collection are described. In connection with writing data to a block of memory cells, a memory system may identify a portion of a logical address space that includes a logical address for the data. The memory system may set a bit of a bitmap, which may indicate that the block includes data having a logical address within a portion of the logical address space corresponding to the bit. The logical address space may be divided into any quantity of portions, each corresponding to a different subset of a logical-to-physical (L2P) table. | 2022-06-23 |
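A rough Python model of the bitmap described above, assuming a hypothetical 1 MiB logical address space split into eight portions; during garbage collection, only the L2P subsets whose bits are set need to be scanned for the block's valid data:

```python
NUM_PORTIONS = 8
SPACE_SIZE = 1 << 20                    # hypothetical logical address space
PORTION_SIZE = SPACE_SIZE // NUM_PORTIONS

class Block:
    def __init__(self):
        self.bitmap = 0                 # bit i set => block holds data whose
                                        # logical address falls in portion i
    def record_write(self, logical_addr):
        self.bitmap |= 1 << (logical_addr // PORTION_SIZE)

    def portions_to_scan(self):
        """Garbage collection consults only the L2P subsets whose bit is
        set when identifying this block's valid data."""
        return [i for i in range(NUM_PORTIONS) if (self.bitmap >> i) & 1]

blk = Block()
blk.record_write(0x00123)               # logical address in portion 0
blk.record_write(0xF0000)               # logical address in portion 7
print(blk.portions_to_scan())           # -> [0, 7]
```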
20220197791 | INSERT OPERATION - An apparatus comprises memory access circuitry to access a memory system; a plurality of memory mapped registers, including at least an insert register and a producer pointer register; and control circuitry to perform an insert operation in response to receipt of an insert request from a requester device sharing access to the memory system. The insert request specifies an address mapped to the insert register and an indication of a payload. The insert operation includes controlling the memory access circuitry to write the payload to a location in the memory system selected based on a producer pointer value stored in the producer pointer register, and updating the producer pointer register to increment the producer pointer value. | 2022-06-23 |
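Functionally, the insert operation behaves like a hardware-managed producer side of a queue: a single store to the memory-mapped insert register both writes the payload and advances the producer pointer. A toy model under those assumptions (the register offset and queue length are invented for illustration):

```python
class InsertRegisterDevice:
    """A store to the memory-mapped insert register appends the payload at
    the producer pointer, then increments the pointer."""
    INSERT_REG = 0x00                      # offset mapped to insert register

    def __init__(self, queue_len=8):
        self.queue = [None] * queue_len    # queue region in the memory system
        self.producer_ptr = 0              # producer pointer register

    def mmio_write(self, reg_offset, payload):
        if reg_offset == self.INSERT_REG:  # the insert request
            slot = self.producer_ptr % len(self.queue)
            self.queue[slot] = payload     # write payload to selected location
            self.producer_ptr += 1         # update producer pointer register

dev = InsertRegisterDevice()
dev.mmio_write(InsertRegisterDevice.INSERT_REG, "msg-A")
dev.mmio_write(InsertRegisterDevice.INSERT_REG, "msg-B")
print(dev.queue[:2], dev.producer_ptr)     # -> ['msg-A', 'msg-B'] 2
```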
20220197792 | RANDOM SEED GENERATING CIRCUIT OF MEMORY SYSTEM - A random seed generating circuit of a memory system includes a first address generating circuit, a second address generating circuit, a table circuit and a seed generating circuit. The first address generating circuit generates an initial address based on target page information. The second address generating circuit generates a plurality of table addresses based on the target page information and a plurality of partial addresses, which are divided from the initial address. The table circuit outputs, from a plurality of tables, a plurality of table values respectively corresponding to the plurality of table addresses. The seed generating circuit generates a random seed based on the plurality of table values. | 2022-06-23 |
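The structure described above lends itself to a small sketch: derive an initial address from the page information, split it into partial addresses, use each partial address to index one table, and combine the table outputs into the seed. The table contents, widths, and the address-generation formula below are all illustrative assumptions:

```python
import random

TABLE_COUNT = 4                    # number of tables / partial addresses
PARTIAL_BITS = 4                   # width of each partial address
TABLE_SIZE = 1 << PARTIAL_BITS

# Stand-in table contents; a real circuit would hold fixed hardware tables.
_rng = random.Random(42)
TABLES = [[_rng.getrandbits(16) for _ in range(TABLE_SIZE)]
          for _ in range(TABLE_COUNT)]

def random_seed(target_page):
    # First address generator: derive a 16-bit initial address from the
    # target page information (the mixing constant is an assumption).
    initial_address = (target_page * 2654435761) & 0xFFFF
    seed = 0
    for t in range(TABLE_COUNT):
        # Second address generator: divide the initial address into partials.
        partial = (initial_address >> (PARTIAL_BITS * t)) & (TABLE_SIZE - 1)
        seed ^= TABLES[t][partial]     # combine the table outputs
    return seed

print(hex(random_seed(0)), hex(random_seed(1)))  # page-dependent seeds
```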
20220197793 | COMPRESSED CACHE MEMORY WITH DECOMPRESS ON FAULT - An embodiment of an integrated circuit may comprise, coupled to a core, a hardware decompression accelerator, a compressed cache, a processor communicatively coupled to the hardware decompression accelerator and the compressed cache, and memory communicatively coupled to the processor, wherein the memory stores microcode instructions which, when executed by the processor, cause the processor to store a first address to a decompression work descriptor, retrieve a second address where a compressed page is stored in the compressed cache from the decompression work descriptor at the first address in response to an indication of a page fault, and send instructions to the hardware decompression accelerator to decompress the compressed page at the second address. Other embodiments are disclosed and claimed. | 2022-06-23 |
20220197794 | DYNAMIC SHARED CACHE PARTITION FOR WORKLOAD WITH LARGE CODE FOOTPRINT - An embodiment of an integrated circuit may comprise a core, a first level core cache memory coupled to the core, a shared core cache memory coupled to the core, a first cache controller coupled to the core and communicatively coupled to the first level core cache memory, a second cache controller coupled to the core and communicatively coupled to the shared core cache memory, and circuitry coupled to the core and communicatively coupled to the first cache controller and the second cache controller to determine if a workload has a large code footprint, and, if so determined, partition N ways of the shared core cache memory into first and second chunks of ways with the first chunk of M ways reserved for code cache lines from the workload and the second chunk of N minus M ways reserved for data cache lines from the workload, where N and M are positive integer values and N minus M is greater than zero. Other embodiments are disclosed and claimed. | 2022-06-23 |
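The partition itself reduces to choosing M and handing out two disjoint way masks. A minimal sketch, where the policy for picking M (here, a fixed fraction of the ways) is an assumption, since the abstract only requires that both chunks are non-empty:

```python
def partition_ways(n_ways, code_fraction=0.25):
    """Split N ways into a code chunk of M ways and a data chunk of N - M
    ways, returned as allocation masks. Clamping keeps 0 < M < N."""
    m = max(1, min(n_ways - 1, round(n_ways * code_fraction)))
    code_mask = (1 << m) - 1                      # ways 0 .. M-1: code lines
    data_mask = ((1 << n_ways) - 1) ^ code_mask   # ways M .. N-1: data lines
    return code_mask, data_mask

code, data = partition_ways(16)
print(f"code ways: {code:016b}")   # -> 0000000000001111 (M = 4)
print(f"data ways: {data:016b}")   # -> 1111111111110000 (N - M = 12)
```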
20220197795 | WRITE DATA FOR BIN RESYNCHRONIZATION AFTER POWER LOSS - A system includes a memory device and a processing device, operatively coupled to the memory device, the processing device to perform operations comprising: measuring one of a temporal voltage shift or a read bit error rate of fixed data stored in the memory device in response to detecting a power on of the memory device, the fixed data having been programmed in response to detecting a power loss; estimating an amount of time for which the memory device was powered off based on results of the measuring; and in response to the amount of time satisfying a threshold criterion, updating a value for a temporal voltage shift of a block family based on the amount of time. | 2022-06-23 |
20220197796 | MULTI-CACHE BASED DIGITAL OUTPUT GENERATION - Multi-cache-based digital output generation is provided. A system receives data objects that include fields from a remote data source. The system sorts the data objects based on a field to generate a sorted data set. The system cleans the sorted data set to generate a clean data set based on a policy. The system receives a request for a type of digital output based on the data objects received from the data source and loads a portion of the clean data set to a first level cache. The system selects a machine learning model configured for the type of digital output, and loads a primary cache with a subset of fields stored in the first level cache selected based on the machine learning model. The system generates, based on the first level cache being complete, digital output corresponding to the type of digital output from data in the primary cache. | 2022-06-23 |
20220197797 | DYNAMIC INCLUSIVE LAST LEVEL CACHE - An embodiment of an integrated circuit may comprise a core, and a cache controller coupled to the core, the cache controller including circuitry to identify data from a working set for dynamic inclusion in a next level cache based on an amount of re-use of the next level cache, send a shared copy of the identified data to a requesting core of one or more processor cores, and maintain a copy of the identified data in the next level cache. Other embodiments are disclosed and claimed. | 2022-06-23 |
20220197798 | SINGLE RE-USE PROCESSOR CACHE POLICY - An embodiment of an integrated circuit may comprise a core, and a cache controller coupled to the core, the cache controller including circuitry to identify single re-use data evicted from a core cache, and retain the identified single re-use data in a next level cache based on an overall re-use of the next level cache. Other embodiments are disclosed and claimed. | 2022-06-23 |
20220197799 | Instruction and Micro-Architecture Support for Decompression on Core - Methods and apparatus relating to an instruction and/or micro-architecture support for decompression on core are described. In an embodiment, decode circuitry decodes a decompression instruction into a first micro operation and a second micro operation. The first micro operation causes one or more load operations to fetch data into one or more cachelines of a cache of a processor core. Decompression Engine (DE) circuitry decompresses the fetched data from the one or more cachelines of the cache of the processor core in response to the second micro operation. Other embodiments are also disclosed and claimed. | 2022-06-23 |
20220197800 | SYSTEM AND METHODS TO PROVIDE HIERARCHICAL OPEN SECTORING AND VARIABLE SECTOR SIZE FOR CACHE OPERATIONS - Graphics processors of the present design provide hierarchical open sectors and variable cache sizes for cache operations. In one embodiment, a graphics processor comprises a cache memory having a hierarchical open sector design including a first hierarchy of upper and lower regions with each region including a second hierarchy of sectors. A cache controller is configured to initially open a first sector of the lower region, to receive a memory request that does not match an address in the first sector, and to open a second sector of the lower region. | 2022-06-23 |
20220197801 | DATA PROCESSING APPARATUS AND DATA ACCESSING CIRCUIT - A data processing apparatus including a memory circuit and a data accessing circuit is provided, in which the memory circuit includes multiple cache ways configured to store data. In response to a first logic state of an enabling signal, if a tag of an address of an access requirement is the same as a corresponding tag of the multiple cache ways, the data accessing circuit determines that a cache hit occurs. In response to a second logic state of the enabling signal, if the address is within one or more predetermined address intervals specified by the data accessing circuit, the data accessing circuit determines that the cache hit occurs, and if the address is outside the one or more predetermined address intervals, the data accessing circuit determines that a cache miss occurs. | 2022-06-23 |
20220197802 | SYSTEMS AND METHODS FOR MAINTAINING CACHE COHERENCY - Cache coherency of a global address space of a cache can be maintained with one or more tier control units (TCUs). The global address space of the cache may be shared by multiple domains. Domains may include multiple controllers and a local interconnect operatively coupling the controllers to the cache. The local interconnect of each domain may maintain a cache coherency of a local address space of the cache shared by the controllers of the domain. The one or more TCUs may be operatively coupled to the local interconnects of the domains to maintain the cache coherency of the global address space. | 2022-06-23 |
20220197803 | SYSTEM, APPARATUS AND METHOD FOR PROVIDING A PLACEHOLDER STATE IN A CACHE MEMORY - In one embodiment, a system includes an input/output (I/O) domain and a compute domain. The I/O domain includes an I/O agent and an I/O domain caching agent. The compute domain includes a compute domain caching agent and a compute domain cache hierarchy. The I/O agent issues an ownership request to the compute domain caching agent to obtain ownership of a cache line in the compute domain cache hierarchy. In response to the ownership request, the compute domain caching agent places the cache line in the compute domain cache hierarchy in a placeholder state. The placeholder state reserves the cache line for performance of a write operation by the I/O agent. The compute domain caching agent writes data received from the I/O agent to the cache line in the compute domain cache hierarchy and transitions the state of the cache line out of the placeholder state. | 2022-06-23 |
20220197804 | METHODS AND APPARATUSES INVOLVING RADAR SYSTEM DATA PATHS - Exemplary aspects for a specific example concern a radar system having sensor circuitry including multiple radar sensors to provide sensor data via multiple virtual channels and multiple data types, a memory circuit with memory buffers, and a bus-interface circuit to control bus interconnects for bus communications involving a radar signal transmitter and the memory circuit. Radar signals are received and processed, via data acquisition path circuitry in multiple circuit paths and via streams of data in response to and to accommodate the operations of the sensor circuitry. A master controller conveys data, via the bus-interface circuit, to the buffers for the sensor data, and generates selectable-type transactions to be linked in selected ones of the buffers, in response to the data provided from the sensor circuitry and based on the sensor data being provided via different ones of the multiple virtual channels and of the multiple data types. | 2022-06-23 |
20220197805 | PAGE FAULT MANAGEMENT TECHNOLOGIES - Examples described herein relate to at least one processor and circuitry, when operational, to: in connection with a request from a device to copy data to a destination memory address: based on a page fault, copy the data to a backup page and, after determination of a virtual-to-physical address translation, copy the data from the backup page to a destination page identified by the physical address. In some examples, copying the data to the backup page is based on a page fault and an indication that a target buffer for the data is at or above a threshold level of fullness. In some examples, copying the data to the backup page includes receiving the physical address of the backup page from the device and copying the data from the device to the backup page based on identification of the backup page. | 2022-06-23 |
20220197806 | HIGH SPEED MEMORY SYSTEM INTEGRATION - Embodiments disclosed herein include memory architectures with stacked memory dies. In an embodiment, an electronic device comprises a base die and an array of memory dies over and electrically coupled to the base die. In an embodiment, the array of memory dies comprise caches. In an embodiment, a compute die is over and electrically coupled to the array of memory dies. In an embodiment, the compute die comprises a plurality of execution units. | 2022-06-23 |
20220197807 | LATENCY-AWARE PREFETCH BUFFER - An apparatus configured to provide latency-aware prefetching, and related systems, methods, and computer-readable media, are disclosed. The apparatus comprises a prefetch buffer comprising at least a first entry, and the first entry comprises a memory operation prefetch request portion storing a first previous memory operation prefetch request. The apparatus further comprises a prefetch buffer replacement circuit, which is configured to select an entry of the prefetch buffer storing a previous memory operation prefetch request for replacement with a subsequent memory operation prefetch request, and to replace the previous memory operation prefetch request in the selected entry with the subsequent memory operation prefetch request. | 2022-06-23 |
20220197808 | SYSTEM, APPARATUS AND METHOD FOR PREFETCHING PHYSICAL PAGES IN A PROCESSOR - In one embodiment, a processor includes: one or more execution circuits to execute instructions; a stream prediction circuit coupled to the one or more execution circuits, the stream prediction circuit to receive demand requests for information and, based at least in part on the demand requests, generate a page prefetch hint for a first page; and a prefetcher circuit to generate first prefetch requests each for a cache line, the stream prediction circuit decoupled from the prefetcher circuit. Other embodiments are described and claimed. | 2022-06-23 |
20220197809 | HARDWARE CONFIGURATION SELECTION USING MACHINE LEARNING MODEL - Techniques for identifying a hardware configuration for operation are disclosed. The techniques include applying feature measurements to a trained model; obtaining output values from the trained model, the output values corresponding to different hardware configurations; and operating according to the output values, wherein the output values include one of a certainty score, a ranking, or a regression value. | 2022-06-23 |
20220197810 | CALCULATOR AND CALCULATION METHOD - A calculator includes a processing core and a cache. The cache includes a data memory that holds data transferred from a main memory and a cache controller that controls transfer of data between the main memory and the data memory. The cache controller is configured to calculate, upon occurrence of a cache miss, a cycle count required for arithmetic processing on one unit amount of data based on a cache miss occurrence interval and a required memory access latency, and update a prefetch distance based on the calculated cycle count and the memory access latency, the prefetch distance indicating a relative distance on the main memory between a location from which the one unit amount of data is transferred from the main memory due to the cache miss and a location from which a next one unit amount of data is to be prefetched. | 2022-06-23 |
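The abstract does not give the exact update formula, but one plausible reading is: estimate the compute cycles per unit of data from the miss interval and the memory latency, then set the distance so the latency is fully overlapped. A worked sketch under that assumption:

```python
import math

def estimate_compute_cycles(miss_interval_cycles, memory_latency_cycles):
    """Cycles of arithmetic per unit of data: the observed interval between
    misses minus the stall spent waiting on memory (an assumed estimate)."""
    return max(1, miss_interval_cycles - memory_latency_cycles)

def prefetch_distance(miss_interval_cycles, memory_latency_cycles):
    work = estimate_compute_cycles(miss_interval_cycles, memory_latency_cycles)
    # Fetch far enough ahead that memory latency hides behind computation.
    return math.ceil(memory_latency_cycles / work)

# A 500-cycle miss interval with 300 cycles of memory latency leaves 200
# cycles of compute per unit, so prefetch ceil(300 / 200) = 2 units ahead.
print(prefetch_distance(500, 300))  # -> 2
```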
20220197811 | METHOD AND APPARATUS FOR REPLACING DATA FROM NEAR TO FAR MEMORY OVER A SLOW INTERCONNECT FOR OVERSUBSCRIBED IRREGULAR APPLICATIONS - A data management method wherein a working set is distributed between near and far memories includes migrating first data from the far to the near memory according to a prefetcher algorithm. The first data (a subset of the working set) is maintained in the near memory in data structures according to predetermined semantics of the prefetcher that dictate that certain of the first data is prefetched when a first function evaluates as true. The method further includes detecting that the near memory has reached capacity, and in response, adaptively migrating a portion of the first data out of the near and into the far memory according to an eviction algorithm that is based on the set of prefetcher semantics such that certain of the portion of the first data is evicted when a second function evaluates as true, wherein the second function equals the inverse of the first function. | 2022-06-23 |
20220197812 | SYSTEMS AND METHODS FOR EFFICIENT DATA BUFFERING - In one embodiment, a system may include a memory unit, a first processing unit configured to write data into a memory region of the memory unit, a second processing unit configured to read data from the memory region, a first control unit configured to control the first processing unit's access to the memory unit, and a second control unit configured to control the second processing unit's access to the memory unit. The first control unit may be configured to obtain, from the second control unit, a first memory address associated with a data reading process of the second processing unit, receive a write request from the first processing unit, the write request having an associated second memory address, and write data into the memory region based on the write request in response to a determination that the second memory address falls outside of the guarded reading region. | 2022-06-23 |
20220197813 | APPLICATION PROGRAMMING INTERFACE FOR FINE GRAINED LOW LATENCY DECOMPRESSION WITHIN PROCESSOR CORE - Methods and apparatus relating to techniques for increasing per core memory bandwidth by using forget store operations are described. In an embodiment, a cache stores a buffer. Execution circuitry executes an instruction. The instruction causes one or more cachelines in the cache to be marked based on a start address for the buffer and a size of the buffer. A marked cacheline in the cache is to be prevented from being written back to memory. Other embodiments are also disclosed and claimed. | 2022-06-23 |
20220197814 | PER-PROCESS RE-CONFIGURABLE CACHES - The disclosed embodiments relate to per-process configuration caches in storage devices. A method is disclosed comprising initiating a new process, the new process associated with a process context; configuring a region in a memory device, the region associated with the process context, wherein the configuring comprises setting one or more cache parameters that modify operation of the memory device; and mapping the process context to the region of the memory device. | 2022-06-23 |
20220197815 | RECOVERY OF LOGICAL-TO-PHYSICAL TABLE INFORMATION FOR A MEMORY DEVICE - Methods, systems, and devices for recovery of logical-to-physical (L2P) table information for a memory device are described. A memory system may detect an error in one or more pointers of the L2P table using an error detecting code, where the error is uncorrectable using the code. The memory system may determine a set of candidate codewords for the set of bits, where each of the candidate codewords includes one or more corresponding candidate pointers, and check whether a candidate codeword is correct based on whether a logical address corresponding to a candidate pointer of the candidate codeword matches a logical address stored as metadata for a set of data at a physical address pointed to by the candidate pointer. The memory system may limit the set of candidate codewords or order the candidate codewords for evaluation to reduce a latency associated with identifying a correct candidate codeword. | 2022-06-23 |
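The validation step described above can be sketched compactly: walk the (possibly reordered) candidate pointers and accept the first one whose pointed-to metadata records the expected logical address. The toy medium and function names below are hypothetical:

```python
def recover_pointer(candidate_pointers, metadata_logical_addr, expected_la):
    """Evaluate candidates in order; a candidate pointer is correct when the
    logical address stored as metadata at the physical address it points to
    matches the logical address the L2P entry should map."""
    for phys in candidate_pointers:
        if metadata_logical_addr(phys) == expected_la:
            return phys
    return None                         # no candidate validated

# Toy medium: physical address -> logical address written as metadata.
medium_metadata = {0x10: 7, 0x11: 42, 0x12: 7}

# The code flagged the entry for logical address 42; the candidate list is
# ordered (per the abstract) to cut the expected validation latency.
candidates = [0x11, 0x10, 0x12]
print(hex(recover_pointer(candidates, medium_metadata.get, 42)))  # -> 0x11
```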
20220197816 | COMPRESSED CACHE MEMORY WITH PARALLEL DECOMPRESS ON FAULT - An embodiment of an integrated circuit may comprise, coupled to a core, hardware decompression accelerators, a compressed cache, a processor communicatively coupled to the hardware decompression accelerators and the compressed cache, and memory communicatively coupled to the processor, wherein the memory stores microcode instructions that, when executed by the processor, cause the processor to load a page table entry in response to an indication of a page fault, determine if the page table entry indicates that the page is to be decompressed on fault, and, if so determined, modify a first decompression work descriptor at a first address and a second decompression work descriptor at a second address based on information from the page table entry, and generate a first enqueue transaction to the hardware decompression accelerators with the first address of the first decompression work descriptor and a second enqueue transaction to the hardware decompression accelerators with the second address of the second decompression work descriptor. Other embodiments are disclosed and claimed. | 2022-06-23 |
20220197817 | MEMORY SYSTEM AND METHOD FOR CONTROLLING NONVOLATILE MEMORY - According to one embodiment, when a read request received from a host includes a first identifier indicative of a first region, a memory system obtains a logical address from the received read request, obtains a physical address corresponding to the obtained logical address from a logical-to-physical address translation table which manages mapping between logical addresses and physical addresses of the first region, and reads data from the first region, based on the obtained physical address. When the received read request includes a second identifier indicative of a second region, the memory system obtains physical address information from the read request, and reads data from the second region, based on the obtained physical address information. | 2022-06-23 |
20220197818 | METHOD AND APPARATUS FOR PERFORMING OPERATIONS TO NAMESPACES OF A FLASH MEMORY DEVICE - The invention introduces a method for performing operations to namespaces of a flash memory device, by a processing unit of a storage device, at least including the steps: receiving a cross-namespace data-movement command from a host, requesting to move user data of a first logical address of a first namespace to a second logical address of a second namespace; cutting first physical address information corresponding to the first logical address of a first logical-physical mapping table corresponding to the first namespace; and storing the first physical address information in an entry corresponding to a second logical address of a second logical-physical mapping table corresponding to the second namespace. | 2022-06-23 |
20220197819 | DYNAMIC LOAD BALANCING FOR POOLED MEMORY - Examples described herein relate to a memory controller to allocate an address range for a process among multiple memory pools based on service level parameters associated with the address range and performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device. | 2022-06-23 |
20220197820 | ISOLATED PERFORMANCE DOMAINS IN A MEMORY SYSTEM - A computing system having memory components, including first memory and second memory. The computing system further includes a processing device, operatively coupled with the memory components, to: store a memory allocation value in association with a context of executing instructions; execute a set of instructions in the context; allocate, for execution of the set of instructions in the context, an amount of memory, including an amount of the first memory and an amount of the second memory; and access the amount of the second memory via the amount of the first memory during the execution of the set of instructions in the context. | 2022-06-23 |
20220197821 | DEVICE, SYSTEM AND METHOD FOR SELECTIVELY DROPPING SOFTWARE PREFETCH INSTRUCTIONS - Techniques and mechanisms for providing information to determine whether a software prefetch instruction is to be executed. In an embodiment, one or more entries of a translation lookaside buffer (TLB) each include a respective value which indicates whether, according to one or more criteria, corresponding data has been sufficiently utilized. Insufficiently utilized data is indicated in a TLB entry with an identifier of an executed instruction to prefetch the corresponding data. An eviction of the TLB entry results in the creation of an entry in a registry of prefetch instructions. The entry in the registry includes the identifier of the executed prefetch instruction, and a value indicating a number of times that one or more future prefetch instructions are to be dropped. In another embodiment, execution of a subsequent prefetch instruction—which also corresponds to the identifier—is prevented based on the registry entry. | 2022-06-23 |
20220197822 | 64-BIT VIRTUAL ADDRESSES HAVING METADATA BIT(S) AND CANONICALITY CHECK THAT DOES NOT FAIL DUE TO NON-CANONICAL VALUES OF METADATA BIT(S) - Techniques to allow use of metadata in unused bits of virtual addresses are described. A processor of an aspect includes a decode circuit to decode a memory access instruction. The instruction to indicate one or more memory address operands that are to have address generation information and metadata. An execution circuit coupled with the decode circuit to generate a 64-bit virtual address based on the one or more memory address operands. The 64-bit virtual address having a bit | 2022-06-23 |
20220197823 | METHOD AND APPARATUS FOR TTL-BASED CACHE MANAGEMENT USING REINFORCEMENT LEARNING - A method and an apparatus are provided for managing a cache for storing content by: determining popularity of the content based on content requests received during a current time slot for the content; transmitting information about the popularity of the content to a time-to-live (TTL) controller and receiving, from the TTL controller, TTL values for each popularity level determined by the TTL controller based on the information about the popularity; and managing the content based on the TTL values for each popularity level. | 2022-06-23 |
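A minimal cache model following this idea: request counts drive a popularity level, and the level selects the TTL applied on a miss. The two-level popularity rule and TTL values below are placeholder assumptions; in the abstract they come from the TTL controller, which may itself be trained with reinforcement learning:

```python
import time

class TTLCache:
    """Content cache whose per-item TTL is chosen by popularity level."""
    def __init__(self, ttl_per_level):
        self.ttl_per_level = ttl_per_level  # {popularity level: seconds}
        self.store = {}                     # name -> (expiry, content)
        self.requests = {}                  # request counts (current slot)

    def popularity_level(self, name):
        return "high" if self.requests.get(name, 0) >= 10 else "low"

    def get(self, name, fetch):
        self.requests[name] = self.requests.get(name, 0) + 1
        entry = self.store.get(name)
        if entry and entry[0] > time.monotonic():
            return entry[1]                 # cache hit before expiry
        content = fetch(name)               # miss: fetch, then cache with
        ttl = self.ttl_per_level[self.popularity_level(name)]
        self.store[name] = (time.monotonic() + ttl, content)  # level's TTL
        return content

cache = TTLCache({"high": 300.0, "low": 10.0})
print(cache.get("video-1", lambda name: f"<content of {name}>"))
```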
20220197824 | Elastic resource management in a network switch - An elastic memory system that may include memory banks, clients that are configured to obtain access requests associated with input addresses; first address converters that are configured to convert the input addresses to intermediate addresses within a linear address space; address scramblers that are configured to convert the intermediate addresses to physical addresses while balancing a load between the memory banks; atomic operation units; an interconnect that is configured to receive modified access requests that are associated with the physical addresses, and send the modified access requests downstream, wherein atomic modified access requests are sent to the atomic operation units; wherein the atomic operation units are configured to execute the atomic modified access requests; wherein the memory banks are configured to respond to the atomic modified access requests and to non-atomic modified access requests. | 2022-06-23 |
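The address scrambler's job, spreading intermediate addresses across banks so that strided traffic does not pile onto one bank, can be illustrated with a simple XOR-folding hash (one common approach, not necessarily the one claimed):

```python
from collections import Counter

NUM_BANKS = 8

def scramble(intermediate_addr):
    """Map a linear intermediate address to (bank, offset), XOR-folding
    higher address bits into the bank index so strided patterns spread."""
    bank = (intermediate_addr
            ^ (intermediate_addr >> 3)
            ^ (intermediate_addr >> 6)) % NUM_BANKS
    return bank, intermediate_addr // NUM_BANKS

# A stride-8 pattern would hit only bank 0 with plain modulo interleaving;
# after scrambling, the 128 accesses land 16 per bank.
load = Counter(scramble(a)[0] for a in range(0, 1024, 8))
print(sorted(load.items()))  # -> [(0, 16), (1, 16), ..., (7, 16)]
```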
20220197825 | SYSTEM, METHOD AND APPARATUS FOR TOTAL STORAGE ENCRYPTION - The disclosed embodiments are generally directed to inline encryption of data at line speed at a chip interposed between two memory components. The inline encryption may be implemented at a System-on-Chip ("SOC" or "SoC"). The memory components may comprise Non-Volatile Memory express (NVMe) and a dynamic random access memory (DRAM). An exemplary device includes an SOC to communicate with NVMe circuitry to provide direct memory access (DMA) to an external memory component. The SOC may include: a cryptographic controller circuitry; a cryptographic memory circuitry in communication with the cryptographic controller, the cryptographic memory circuitry configured to store instructions to encrypt or decrypt data transmitted through the SOC; and an encryption engine in communication with the crypto controller circuitry, the encryption engine configured to encrypt or decrypt data according to instructions stored at the crypto memory circuitry. Other embodiments are also disclosed and claimed. | 2022-06-23 |
20220197826 | METHOD AND APPARATUS FOR PROTECTING A MEMORY FROM A WRITE ATTACK - A method and apparatus of protecting a memory from a write attack includes dividing a cacheline of memory into a plurality of sub-blocks. A codeword is generated from at least one sub-block of the plurality of sub-blocks and a complement of the at least one sub-block. One of the generated codewords is selected, wherein the selected codeword is used for storage in memory. | 2022-06-23 |
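The abstract leaves the selection criterion between a sub-block's codeword and its complement's unstated; a common motivation is to bound how many cells a hostile write pattern can flip. The sketch below picks whichever encoding changes fewer bits relative to what is already stored, which is purely an assumed policy:

```python
def bit_diff(a, b):
    return bin(a ^ b).count("1")

def encode_subblock(new_value, stored_word, width=16):
    """Return (tag, word): store the sub-block or its complement, tagged so
    it can be decoded. The rule below, minimizing bits flipped relative to
    the currently stored word, is an assumed selection policy."""
    complement = new_value ^ ((1 << width) - 1)
    if bit_diff(new_value, stored_word) <= bit_diff(complement, stored_word):
        return 0, new_value                  # tag 0: stored as-is
    return 1, complement                     # tag 1: stored complemented

def decode_subblock(tag, word, width=16):
    return word ^ ((1 << width) - 1) if tag else word

tag, word = encode_subblock(0xFFFE, stored_word=0x0000)
print(tag, hex(word))                    # -> 1 0x1 (one bit written, not 15)
print(hex(decode_subblock(tag, word)))   # -> 0xfffe
```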
20220197827 | METHOD AND SYSTEM FOR MEMORY ATTACK MITIGATION - A method and system for memory attack mitigation in a memory device includes receiving, at a memory controller, an allocation of a page in memory. One or more device controllers detects an aggressor-victim set within the memory. Based upon the detection, an address of the allocated page is identified for further action. | 2022-06-23 |
20220197828 | METHOD OF PROTECTING A SYSTEM SUCH AS A MICROCONTROLLER, AND CORRESPONDING SYSTEM - A system includes a processing unit, a memory configured to store at least one first group of instructions and one second group of instructions for execution by the processing unit, the processing unit being configured to sequentially extract from the memory instructions of the first group and instructions of the second group for their execution. The system also includes a controller including a first auxiliary memory configured to store a protection criterion, a comparator configured to compare the storage address of each extracted instruction with the protection criterion, and a control circuit configured to, in response to the storage address meeting the protection criterion, trigger a protection mechanism including at least one prohibition for the processing unit to execute again at least one portion of the instructions of the first group, during the execution of the instructions of the second group. | 2022-06-23 |
20220197829 | HIGH CAPACITY HIDDEN MEMORY - An embodiment of an apparatus may include a processor, memory communicatively coupled to the processor, and circuitry communicatively coupled to the processor and the memory, the circuitry to manage a portion of the memory as hidden memory outside a range of physical memory accessible by user applications, and control access to the hidden memory from the processor with hidden page tables. Other embodiments are disclosed and claimed. | 2022-06-23 |
20220197830 | MANAGING LOCK COORDINATOR REBALANCE IN DISTRIBUTED FILE SYSTEMS - Managing lock coordinator rebalance in distributed file systems is provided herein. A node device of a cluster of node devices can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise determining an occurrence of a group change within a cluster of node devices and executing a probe function based on the occurrence of the group change. Further, the operations can comprise reasserting first locks of a group of locks based on a result of the probe function indicating reassertion of the first locks. Second locks of the group of locks, other than the first locks, are not reasserted based on the result of the probe function. The cluster of node devices can operate as a distributed file system. | 2022-06-23 |
20220197831 | SYSTEM AND METHOD FOR FACILITATING EFFICIENT HOST MEMORY ACCESS FROM A NETWORK INTERFACE CONTROLLER (NIC) - A network interface controller (NIC) capable of efficient memory access is provided. The NIC can be equipped with an operation logic block, a signaling logic block, and a tracking logic block. The operation logic block can maintain an operation group associated with packets requesting an operation on a memory segment of a host device of the NIC. The signaling logic block can determine whether a packet associated with the operation group has arrived at or departed from the NIC. Furthermore, the tracking logic block can determine that a request for releasing the memory segment has been issued. The tracking logic block can then determine whether at least one packet associated with the operation group is under processing in the NIC. If no packet associated with the operation group is under processing in the NIC, the tracking logic block can notify the host device that the memory segment can be released. | 2022-06-23 |
20220197832 | DISTRIBUTION OF DATA AND MEMORY TIMING PARAMETERS ACROSS MEMORY MODULES BASED ON MEMORY ACCESS PATTERNS - A processor distributes memory timing parameters and data among different memory modules based upon memory access patterns. The memory access patterns indicate different types, or classes, of data for an executing workload, with each class associated with different memory access characteristics, such as different row buffer hit rate levels, different frequencies of access, different criticalities, and the like. The processor assigns each memory module to a data class and sets the memory timing parameters for each memory module according to the module's assigned data class, thereby tailoring the memory timing parameters for efficient access of the corresponding data. | 2022-06-23 |
20220197833 | ENABLING DEVICES WITH ENHANCED PERSISTENT MEMORY REGION ACCESS - A host command is received to configure a system to have a configuration designating an interface standard for exposing a storage element and a persistent memory region (PMR). The storage element is visible to a first protocol of the interface standard and the PMR is visible to a second protocol of the interface standard. The storage element is implemented on a first memory device of the system including a non-volatile memory device and the PMR is implemented on a second memory device of the system. The system is configured in accordance with the configuration. | 2022-06-23 |
20220197834 | DATA TRANSMISSION METHOD FOR CONVOLUTION OPERATION, FETCHER, AND CONVOLUTION OPERATION APPARATUS - A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer of an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data. | 2022-06-23 |
20220197835 | DATA STORAGE DEVICE WITH AN EXCLUSIVE CHANNEL FOR FLAG CHECKING OF READ DATA, AND NON-VOLATILE MEMORY CONTROL METHOD - A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to the system memory and, accordingly, asserts a flag in the system memory. Through a flag reading channel provided by an interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a data reading channel provided by the interconnect bus, without being delayed by the status checking of the flag. The interconnect bus further provides a flag writing channel and a data writing channel. | 2022-06-23 |
20220197836 | DATA STORAGE DEVICE WITH AN EXCLUSIVE CHANNEL FOR FLAG CHECKING OF READ DATA, AND NON-VOLATILE MEMORY CONTROL METHOD - A non-volatile memory control technology. In response to a read command, a non-volatile memory interface controller temporarily stores data read from a non-volatile memory to a system memory and, accordingly, asserts a flag in the system memory. Through a write channel provided by the interconnect bus, the host bridge controller confirms that the flag is asserted to correctly read the data from the system memory. A master computing unit reads the system memory through a read channel provided by the interconnect bus, without being delayed by the status checking of the flag. The host bridge controller executes a data detection command or a preset vendor command to issue a write request for programming data in a virtual address, to trigger a handshake between the host bridge controller and the system memory through the write channel. During the handshake, flag checking is achieved. | 2022-06-23 |
20220197837 | JUST-IN-TIME (JIT) SCHEDULER FOR MEMORY SUBSYSTEMS - The present disclosure describes a just-in-time (JIT) scheduling system and method for memory sub-systems. In one embodiment, a system receives a request to perform a memory operation using a hardware resource associated with a memory device. The system identifies a traffic class corresponding to the memory operation. The system determines a number of available quality of service (QoS) credits for the traffic class during a current scheduling time frame. The system determines a number of QoS credits associated with a type of the memory operation. Responsive to determining the number of QoS credits associated with the type of the memory operation is less than the number of available QoS credits, the system submits the memory operation to be processed at a memory device. | 2022-06-23 |
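The admission test in this abstract is a per-frame credit budget: an operation is submitted only while its traffic class's remaining credits exceed the operation's cost. A small sketch with invented credit and cost tables:

```python
class JitScheduler:
    """Admit a memory operation only while its traffic class has enough QoS
    credits left in the current scheduling time frame."""
    def __init__(self, credits_per_frame, cost_per_op_type):
        self.credits_per_frame = credits_per_frame  # {traffic class: credits}
        self.cost_per_op_type = cost_per_op_type    # {op type: credit cost}
        self.used = {tc: 0 for tc in credits_per_frame}

    def new_frame(self):
        self.used = {tc: 0 for tc in self.used}     # credits replenish

    def try_submit(self, traffic_class, op_type):
        cost = self.cost_per_op_type[op_type]
        available = (self.credits_per_frame[traffic_class]
                     - self.used[traffic_class])
        if cost < available:      # per the abstract: cost less than available
            self.used[traffic_class] += cost
            return True           # submit to the memory device now
        return False              # out of credits: wait for the next frame

sched = JitScheduler({"high": 100, "low": 20}, {"read": 5, "write": 15})
print(sched.try_submit("low", "write"))  # -> True  (15 < 20 available)
print(sched.try_submit("low", "write"))  # -> False (only 5 credits left)
```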
20220197838 | SYSTEM AND METHOD FOR FACILITATING EFFICIENT EVENT NOTIFICATION MANAGEMENT FOR A NETWORK INTERFACE CONTROLLER (NIC) - A network interface controller (NIC) capable of efficient event management is provided. The NIC can be equipped with a host interface, a first memory device, and an event management module. During operation, the host interface can couple the NIC to a host device. The event management module can identify an event associated with an event queue stored in a second memory device of the host device. The event management module can insert, into a buffer, an event notification associated with the event. The buffer can be associated with the event queue and stored in the first memory device. If the buffer has met a release criterion, the event management module can insert, via the host interface, the aggregated event notifications into the event queue. | 2022-06-23 |
20220197839 | ELECTRONIC DEVICE - An electronic device includes a core circuit and a detecting circuit. The core circuit receives a first clock signal and a second clock signal that are different. The core circuit generates a first working state and a second working state respectively according to the first clock signal and the second clock signal. The detecting circuit detects a relationship between the first working state and the second working state to generate a reset signal. The reset signal is configured to reset the relationship between the first working state and the second working state to an initial corresponding relationship, and reduce an influence of electromagnetic interference on the electronic device. | 2022-06-23 |
20220197840 | SYSTEM DIRECT MEMORY ACCESS ENGINE OFFLOAD - Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffer, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer. | 2022-06-23 |
20220197841 | COMMUNICATION CONTROL DEVICE, COMMUNICATION CONTROL METHOD, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - A communication control device according to an embodiment includes one or more hardware processors functioning as a transmission control unit and a communication unit. The transmission control unit performs control of transmission of messages by opening and closing a gate based on transmission permission information. The transmission permission information is generated based on gate control information including a plurality of entries for determining whether to open a plurality of gates corresponding to a plurality of queues. The transmission permission information indicates an amount of transmittable messages in a period corresponding to one or more continuous entries. The communication unit transmits and receives messages in accordance with the control of the transmission control unit. | 2022-06-23 |
20220197842 | DYNAMIC USB-C MODE SELECTION OSPM POLICY METHOD AND APPARATUS - A scheme to enhance USB-C port policy by dynamically entering an optimal USB-C alternate mode with an informed feedback mechanism to OSPM which influences the USB-C port DPM. In some embodiments, when a USB4 device is connected to a port, the scheme parses the alternate modes and power characteristics from the class descriptor information of the enumerated device. In some embodiments, the parsed information is provided as feedback to the OSPM, which instructs the USB-C/PD DPM to enter or switch to a mode that meets the policy criteria of the OS configuration in a dynamic command control from the OS. In some embodiments, the USB-C DPM dynamically chooses to enter an optimal mode based on the power and thermal conditions information available in the embedded controller and notifies the OS about the changes. As such, the OS is aware of the USB operation mode. | 2022-06-23 |
20220197843 | SYSTEMS AND METHODS FOR SINGLE-WIRE MULTI-PROTOCOL DISCOVERY AND ASSIGNMENT TO PROTOCOL-AWARE PURPOSE-BUILT ENGINES - A method may be provided for a system having a logic device interfaced between a management controller and a plurality of subsystems, wherein the logic device includes a plurality of purpose-built engines, each purpose-built engine configured to perform single-wire communication with one or more subsystems in accordance with a particular protocol associated with such purpose-built engine and a purpose-built engine group switch interfaced between the plurality of purpose-built engines and a plurality of connectors for communicatively coupling the plurality of subsystems to the logic device. The method may include establishing, with a purpose-built engine group switch, a plurality of communication routes based on one or more switch control signals, wherein each route of the plurality of communication routes is established between a respective purpose-built engine and a respective connector. The method may also include monitoring all possible one-wire communication paths between the purpose-built engines and the subsystems for announcements of protocol types. The method may further include, in response to such monitoring, communicating the switch control signals to the purpose-built engine group switch in accordance with supported communications protocols of individual purpose-built engines of the plurality of purpose-built engines. | 2022-06-23 |
20220197844 | BOOTSTRAPPING CIRCUIT, SAMPLING APPARATUSES, RECEIVER, BASE STATION, MOBILE DEVICE AND METHOD OF OPERATING A BOOTSTRAPPING CIRCUIT - A bootstrapping circuit for a semiconductor switch is provided. The bootstrapping circuit includes a capacitor, a first node for coupling to an input node of the semiconductor switch, and a second node for coupling to a control node of the semiconductor switch. Further, the bootstrapping circuit includes a switch circuit configured to selectively couple the capacitor to a charge source while the semiconductor switch is open and to selectively close a conductive path between the first node and the second node for closing the semiconductor switch. The conductive path includes the capacitor. The bootstrapping circuit additionally includes charge injection circuitry configured to inject charge into the conductive path before, while or after the conductive path is closed by the switch circuit. | 2022-06-23 |
20220197845 | SYSTEM AND METHOD FOR FACILITATING OPERATION MANAGEMENT IN A NETWORK INTERFACE CONTROLLER (NIC) FOR ACCELERATORS - A network interface controller (NIC) capable of efficient operation management for host accelerators is provided. The NIC can be equipped with a host interface and triggering logic block. During operation, the host interface can couple the NIC to a host device. The triggering logic block can obtain, via the host interface from the host device, an operation associated with an accelerator of the host device. The triggering logic block can determine whether a triggering condition has been satisfied for the operation based on an indicator received from the accelerator. If the triggering condition has been satisfied, the triggering logic block can obtain a piece of data generated from the accelerator from a memory location and execute the operation using the piece of data. | 2022-06-23 |
20220197846 | MULTI-DIE INTEGRATED CIRCUIT WITH DATA PROCESSING ENGINE ARRAY - An integrated circuit includes an interposer, a first die coupled to the interposer, a second die coupled to the interposer, and a third die coupled to the interposer and having a plurality of die interfaces. The first die includes a first data processing engine (DPE) array having a first plurality of DPEs and a first DPE interface coupled to the first plurality of DPEs therein. The second die includes a second DPE array having a second plurality of DPEs and a second DPE interface coupled to the second plurality of DPEs therein. The first DPE interface of the first die is configured to communicate with a first die interface of the plurality of die interfaces via the interposer. The second DPE interface of the second die is configured to communicate with a second die interface of the plurality of die interfaces via the interposer. | 2022-06-23 |
20220197847 | TECHNIQUES TO SUPPORT MULTIPLE INTERCONNECT PROTOCOLS FOR AN INTERCONNECT - Embodiments may be generally directed to apparatuses, systems, methods, and techniques to detect a message to communicate via an interconnect coupled with a device capable of communication via a plurality of interconnect protocols, the plurality of interconnect protocols comprising a non-coherent interconnect protocol, a coherent interconnect protocol, and a memory interconnect protocol. Embodiments also include determining an interconnect protocol of the plurality of interconnect protocols to communicate the message via the interconnect based on the message, and providing the message to a multi-protocol multiplexer coupled with the interconnect, the multi-protocol multiplexer to communicate the message utilizing the interconnect protocol via the interconnect with the device. | 2022-06-23 |
20220197848 | SYSTEMS AND METHODS FOR EXPANDING MEMORY ACCESS - A system and device for expanding accessible memory of a processor is provided. An interposer is coupled to the processor and a memory module. The interposer is coupled to a first connection and a second connection. The interposer includes a memory controller circuit. The memory controller circuit receives signals from the processor, using the first connection, and transmits the received signals to the memory module, using the second connection. The interposer expands memory access without requiring a second processor. | 2022-06-23 |
20220197849 | HIGH SPEED ON DIE SHARED BUS FOR MULTI-CHANNEL COMMUNICATION - A shared bus for inter-channel communication comprising two or more channels having signal processing elements such that each channel is configured to receive and process an incoming channel specific signal. A sequence generator is configured to generate a test sequence suitable for testing the signal processing elements of a channel. An error checker is configured to error check incoming channel specific signals. A shared bus connects to the two or more channels to communicate an incoming channel specific signal to the error checker and communicate the test sequence to the signal processing elements of a channel. One or more pull up resistors and/or termination resistors connect to the shared bus. The bus may comprise a clock signal path and a data signal path. The test sequence may be a pseudo-random bit sequence. The bus interface comprises an open collector current mode logic driver in cascode arrangement. | 2022-06-23 |
20220197850 | SYSTEM ARCHITECTURE TO DIRECTLY SYNCHRONIZE TIME-BASE BETWEEN ARM GENERIC TIMERS AND PCIE PTM PROTOCOL - A system timer bus used by the processor elements in an ARM-based system on a chip (SoC) is driven using a Precision Time Measurement (PTM) value. This allows the processor elements to be synchronized to the PCIe ports that use PTM. When two SoCs are connected using PCIe links, this example allows the processor elements in both SoCs to be synchronized. As the processor elements are synchronized, associated tasks on the two SoCs are synchronized, so that overall operations are synchronized. | 2022-06-23 |
20220197851 | SYSTEMS AND METHODS FOR MULTI-ARCHITECTURE COMPUTING - Disclosed herein are systems and methods for multi-architecture computing. For example, in some embodiments, a computing system may include: a processor system including at least one first processor core having a first instruction set architecture (ISA); a memory device coupled to the processor system, wherein the memory device has stored thereon a first binary representation of a program for the first ISA; and control logic to suspend execution of the program by the at least one first processor core and cause at least one second processor core to resume execution of the program, wherein the at least one second processor core has a second ISA different from the first ISA; wherein the program is to generate data having an in-memory representation compatible with both the first ISA and the second ISA. | 2022-06-23 |
20220197852 | Circuits And Methods For Coherent Writing To Host Systems - A circuit system includes slow running logic circuitry that generates write data and a write command for a write request. The circuit system also includes fast running logic circuitry that receives the write data and the write command from the slow running logic circuitry. The fast running logic circuitry stores the write data and the write command. A host system generates a write response in response to receiving the write command from the fast running logic circuitry. The host system sends the write response to the fast running logic circuitry. The fast running logic circuitry sends the write data to the host system in response to receiving the write response from the host system before providing the write response to the slow running logic circuitry. | 2022-06-23 |
20220197853 | Central Processing Unit - A central processing unit which achieves increased processing speed is provided. In a CPU constituted of a RISC architecture, a program counter which indicates an address in an instruction memory and a general-purpose register which is designated as an operand in an instruction to be decoded by an instruction decoder are constituted of asynchronous storage elements. | 2022-06-23 |
20220197854 | Reconfigurable System-On-Chip - A system-on-chip comprises: a first sub-circuit having a defined interface and a defined fixed-hardware functionality; a second reconfigurable sub-circuit being signal-connected via the interface to the first sub-circuit; and one or more terminals. The second sub-circuit is configured as an interface circuit between the terminals and the first sub-circuit. The first sub-circuit and the second sub-circuit are split into a plurality of individual first and second circuit blocks. At least one of said first circuit blocks is signal-connected via signal connections, each running through one or more of the second circuit blocks, to one or more other first circuit blocks or one or more of the terminals. One or more of said signal connections are reconfigurable, by the respective one or more second circuit blocks pertaining to the respective signal connection. The SOC is reconfigurable before or during its operation by reconfiguring at least one of said second circuit blocks. | 2022-06-23 |
20220197855 | MICRO-NETWORK-ON-CHIP AND MICROSECTOR INFRASTRUCTURE - Systems and methods described herein may relate to data transactions involving a microsector architecture. Control circuitry may organize transactions to and from the microsector architecture to, for example, enable direct addressing transactions as well as batch transactions across multiple microsectors. A data path disposed between programmable logic circuitry of a column of microsectors and a column of row controllers may form a micro-network-on-chip used by a network-on-chip to interface with the programmable logic circuitry. | 2022-06-23 |
20220197856 | System, Apparatus And Method For Dynamically Configuring One Or More Hardware Resources Of A Processor - In one embodiment, a processor includes: at least one configuration register to store configuration information for a hardware resource including a control circuit to configure the hardware resource based at least in part on the configuration information; a performance monitor to maintain performance information during execution of an application on the processor; and a controller coupled to the at least one configuration register. The controller may dynamically provide the configuration information to the at least one configuration register based at least in part on the performance information, and the control circuit is to adjust a performance tuning of the hardware resource according to the configuration information. Other embodiments are described and claimed. | 2022-06-23 |
20220197857 | DATA EXCHANGE PATHWAYS BETWEEN PAIRS OF PROCESSING UNITS IN COLUMNS IN A COMPUTER - A time deterministic computer is architected so that exchange code compiled for one set of tiles, e.g., a column, can be reused on other sets. The computer comprises: a plurality of processing units each having an input interface with a set of input wires, and an output interface with a set of output wires; a switching fabric connected to each of the processing units by the respective set of output wires and connectable to each of the processing units by the respective input wires via switching circuitry controllable by its associated processing unit; the processing units arranged in columns, each column having a base processing unit proximate the switching fabric and multiple processing units one adjacent the other in respective positions in the direction of the column. | 2022-06-23 |
20220197858 | DYNAMIC ALLOCATION OF ARITHMETIC LOGIC UNITS FOR VECTORIZED OPERATIONS - A system includes a processing device that includes a vector arithmetic logic unit comprising a plurality of arithmetic logic units (ALUs), and a first processor core operatively coupled to the vector arithmetic logic unit, the processing device to receive a first vector instruction from the first processor core, wherein the first vector instruction specifies at least one first input vector having a first vector length, identify a first subset of the ALUs in view of the first vector length and one or more allocation criteria, execute, using the first subset of the set of ALUs, one or more first ALU operations specified by the first vector instruction, wherein the vector arithmetic logic unit executes the first ALU operations in parallel with one or more second ALU operations specified by a second vector instruction received from a second processor core. | 2022-06-23 |
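The allocation step described above can be illustrated with a small Python sketch: a pool of ALUs hands out a subset sized to the vector length, so instructions from different cores can execute on disjoint subsets in parallel. The class, the lowest-id-first criterion, and the lane count are all assumptions for illustration.

```python
# Illustrative ALU-subset allocation; identifiers and policy are hypothetical.
class VectorALUPool:
    def __init__(self, num_alus: int, lanes_per_alu: int = 4):
        self.lanes_per_alu = lanes_per_alu
        self.free = set(range(num_alus))        # ALUs not currently executing

    def allocate(self, vector_length: int) -> list[int]:
        """Return ALU ids covering vector_length elements, or raise if busy."""
        needed = -(-vector_length // self.lanes_per_alu)   # ceiling division
        if needed > len(self.free):
            raise RuntimeError("not enough free ALUs; instruction must wait")
        subset = sorted(self.free)[:needed]     # simple lowest-id-first criterion
        self.free -= set(subset)
        return subset

    def release(self, subset: list[int]) -> None:
        self.free |= set(subset)                # ALUs become available again

pool = VectorALUPool(num_alus=8)
core0 = pool.allocate(vector_length=12)   # 3 ALUs for core 0's instruction
core1 = pool.allocate(vector_length=8)    # 2 ALUs run in parallel for core 1
```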
20220197859 | SCALABLE MCTP INFRASTRUCTURE - Methods and apparatus for scalable MCTP infrastructure. A system is split into independent MCTP domains, wherein each MCTP domain uses Endpoint Identifiers (EIDs) for endpoint devices within the MCTP domain in a manner similar to conventional MCTP operations. A new class of MCTP devices (referred to as Domain Controllers) is provided to enable inter-domain communication and communication with global devices. Global traffic originators or receivers like a BMC (Baseboard Management Controller), Infrastructure Processing Unit (IPU), Smart NIC (Network Interface Card), Debugger, or PRoT (Platform Root of Trust) discover and establish two-way communication through the Domain Controllers to any of the devices in the target domain(s). The Domain Controllers are configured to implement tunneled connections between global devices and domain endpoint devices. The tunneled connections may employ encapsulated messages with outer and inner headers and/or augmented MCTP messages with repurposed fields used to store source and destination EIDs. | 2022-06-23 |
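The outer/inner-header encapsulation can be sketched in a few lines of Python. Note this byte layout is invented for illustration; it is not the actual MCTP/DMTF wire format or the patent's framing.

```python
# Simplified tunneling sketch: an outer header routes between domains, the
# inner header carries the per-domain EIDs. Field layout is illustrative only.
import struct

OUTER = struct.Struct("!BB")      # source domain id, destination domain id
INNER = struct.Struct("!BB")      # source EID, destination EID within the domain

def encapsulate(src_domain, dst_domain, src_eid, dst_eid, payload: bytes) -> bytes:
    return OUTER.pack(src_domain, dst_domain) + INNER.pack(src_eid, dst_eid) + payload

def domain_controller_route(frame: bytes, local_domain: int) -> tuple[int, bytes]:
    """Strip the outer header once the frame reaches the target domain."""
    src_dom, dst_dom = OUTER.unpack_from(frame)
    if dst_dom != local_domain:
        return dst_dom, frame                 # forward toward the right controller
    return dst_dom, frame[OUTER.size:]        # deliver inner MCTP message locally

frame = encapsulate(src_domain=0, dst_domain=2, src_eid=9, dst_eid=17, payload=b"\x01")
```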
20220197860 | HYBRID SNAPSHOT OF A GLOBAL NAMESPACE - A method of generating a hybrid snapshot includes receiving a request to generate a snapshot of a distributed file system and identifying a first storage resource of the distributed file system and a second storage resource of the distributed file system based on the request. The method further includes generating the snapshot of the distributed file system, the snapshot including a data-full snapshot of the first storage resource and a data-less snapshot of the second storage resource. | 2022-06-23 |
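A hybrid snapshot mixes two capture modes in one object: one resource's blocks are copied (data-full), the other is recorded by identity only (data-less). The sketch below is a minimal Python illustration under that reading; the structures and field names are hypothetical.

```python
# Minimal hybrid-snapshot sketch; all names are illustrative.
import copy, time

def take_hybrid_snapshot(full_resource: dict, dataless_resource: dict) -> dict:
    return {
        "taken_at": time.time(),
        "data_full": {                         # deep copy: blocks travel with the snapshot
            "name": full_resource["name"],
            "blocks": copy.deepcopy(full_resource["blocks"]),
        },
        "data_less": {                         # record identity + generation, not the data
            "name": dataless_resource["name"],
            "generation": dataless_resource["generation"],
        },
    }

fast_tier = {"name": "nvme-vol", "blocks": {0: b"abc", 1: b"def"}}
object_tier = {"name": "s3-bucket", "generation": 42}
snap = take_hybrid_snapshot(fast_tier, object_tier)
```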
20220197861 | SYSTEM AND METHOD FOR REDUCING READ AMPLIFICATION OF ARCHIVAL STORAGE USING PROACTIVE CONSOLIDATION - System and method for managing snapshots of storage objects in a storage system use a consolidation operation to reduce read amplification for stored snapshots of a storage object that are stored in log segments in the storage system according to a log-structured file system as storage service objects. The consolidation operation involves identifying target log segments among the log segments that include live blocks that are associated with the latest snapshot of the storage object and determining the number of the live blocks included in each of the target log segments. Based on the number of the live blocks in each of the target log segments, candidate consolidation log segments are determined from the target log segments. The live blocks in the candidate consolidation log segments are then consolidated to new log segments, which are uploaded to the storage system as new storage service objects. | 2022-06-23 |
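The consolidation pass lends itself to a short sketch: segments holding only a few live blocks of the latest snapshot are the ones that amplify reads, so their survivors get repacked densely into new segments. This Python version assumes simple dict/set stand-ins for segment metadata; the threshold policy is an illustrative choice.

```python
# Illustrative consolidation of sparsely live log segments.
def consolidate(segments: dict[str, dict[int, bytes]],
                live_blocks: set[int],
                threshold: int,
                max_segment_blocks: int = 4) -> list[dict[int, bytes]]:
    # Target segments: contain at least one live block of the latest snapshot.
    targets = {sid: {b: d for b, d in seg.items() if b in live_blocks}
               for sid, seg in segments.items()}
    targets = {sid: live for sid, live in targets.items() if live}
    # Candidates: segments with few live blocks -- the read-amplification cost.
    candidates = [sid for sid, live in targets.items() if len(live) <= threshold]
    # Pack the surviving live blocks densely into new segments.
    new_segments, current = [], {}
    for sid in candidates:
        for block_id, data in targets[sid].items():
            current[block_id] = data
            if len(current) == max_segment_blocks:
                new_segments.append(current)
                current = {}
    if current:
        new_segments.append(current)
    return new_segments   # uploaded as new storage service objects
```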
20220197862 | JOURNALING APPARATUS AND METHOD IN A NON-VOLATILE MEMORY SYSTEM - A memory system includes a memory device including memory blocks, and a controller configured to generate a result indicative of whether a number of free memory blocks satisfies a reference after the beginning of garbage collection for the memory device, selectively perform a journaling operation for a request based on the result, and program data, collected by the garbage collection, in the memory device. | 2022-06-23 |
20220197863 | GENERATION AND DISTRIBUTION OF TECHNICAL MODEL VIEWPOINTS - A method includes storing technical models in a network-accessible model repository. Each technical model is labeled with descriptive metadata and comprises one or more model views labeled with functional metadata. A request is received from a stakeholder device, the request specifying descriptive attributes and functional attributes applied to an associated stakeholder. Technical model(s) are retrieved based on the descriptive metadata labelling the retrieved technical models being determined to satisfy the descriptive attributes included in the request. For each retrieved technical model, one or more model views are compiled based on the functional metadata labelling the compiled model view being determined to satisfy the functional attributes included in the request. The stakeholder device is provided secure access to a specification package that comprises one or more viewpoints for the retrieved technical model(s), each viewpoint comprising one or more of the compiled model views for each of the retrieved technical models. | 2022-06-23 |
20220197864 | FILE STORAGE AND COMPUTER SYSTEM - In a file storage that is coupled to a cloud storage storing data and manages a file, the cloud storage compresses and stores the data, and the file storage includes a processor. When data of a part of a file held in the file storage is updated, the processor is configured to compress updated update part data so that the update part data is in a compressed state and transmit the update part data in the compressed state to the cloud storage, and cause the cloud storage to replace the updated part of the file with the update part data and to store a range including the update part data in a compressed state in the cloud storage. | 2022-06-23 |
20220197865 | TECHNIQUES FOR SERVING ARCHIVED ELECTRONIC MAIL - A system for providing user access to electronic mail includes an email client and an email server. The email client receives and communicates a user interaction with an email message. The email server receives the communication and determines whether the email message is stored in a live database or in backup storage. Upon determining that the email message is stored in backup storage, the email server performs a message exchange with a backup storage system to perform the user-requested action. | 2022-06-23 |
20220197866 | METHOD AND DEVICE TO AUTOMATICALLY IDENTIFY THEMES AND BASED THEREON DERIVE PATH DESIGNATOR PROXY INDICIA - Methods, devices and computer program products are provided that, under control of one or more processors, perform resource theme identification (RTI) automatically by: accessing an active resource that includes a path designator (PD) element that includes at least a portion of a path designator for a resource; analyzing the active resource to identify a text element, an audio element and/or an image element; analyzing the text/image element utilizing an RTI algorithm, that applies at least one of natural language understanding (NLU) or image recognition (IR), to identify the one or more themes; deriving proxy indicia based on the theme(s); substituting, into the active resource, the proxy indicia for the path designator to present the proxy indicia in place of the path designator, the proxy indicia linked to the path designator; and displaying the active resource including the proxy indicia. | 2022-06-23 |
20220197867 | IGNORE OBJECTS FROM SYNCHRONIZING TO CONTENT MANAGEMENT SYSTEM - Techniques and systems are provided for utilizing an ignore file in a content management system. For example, a process can detect an ignore file stored in a synchronized directory of a client device. The process can access the ignore file and interpret rules stored therein that prohibit any objects matching the rule from being synchronized with a content management server. During indexing of a synchronized directory, the process can apply the rules from the ignore file and mark objects that match a rule in the ignore file by writing an attribute to the object that identifies it as an ignored object that is exempt from synchronization with the content management server. | 2022-06-23 |
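The rule-matching and attribute-marking step maps naturally onto a short Python sketch, assuming gitignore-like glob rules; the attribute name "ignored" is an invented stand-in for whatever the product writes.

```python
# Sketch of the ignore-rule pass using glob matching.
import fnmatch

def load_rules(ignore_file_text: str) -> list[str]:
    return [line.strip() for line in ignore_file_text.splitlines()
            if line.strip() and not line.startswith("#")]

def index_directory(paths: list[str], rules: list[str]) -> dict[str, dict]:
    index = {}
    for path in paths:
        attrs = {}
        if any(fnmatch.fnmatch(path, rule) for rule in rules):
            attrs["ignored"] = True        # exempt from sync with the server
        index[path] = attrs
    return index

rules = load_rules("*.tmp\nbuild/*\n# comments are skipped\n")
index = index_directory(["notes.txt", "cache.tmp", "build/out.bin"], rules)
# -> only "notes.txt" syncs; the other two carry the ignored attribute
```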
20220197868 | DATA COMPRESSION USING DICTIONARIES - Data units of a dataset may be compressed by clustering the data units into clusters, selecting a reference unit for each unit cluster, and compressing data units of each unit cluster using the reference unit of the unit cluster as a dictionary. The computational efficiency of the clustering algorithm may be improved by not applying it to data units themselves, but rather to hash values of the data units, where the hash values have a much smaller size than the data units. The hash function may be a locality-sensitive hash (LSH) function. The reference unit of a cluster may be determined in any of a variety of ways, for example, by selecting a centroid or exemplar of the cluster. Clusters, including their references values, may be indexed in a cluster index (e.g., a Faiss index), which may be searched to assign future added or modified data units to clusters. | 2022-06-23 |
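This scheme is easy to demonstrate end to end in Python: bucket units by a cheap locality-sensitive signature, take one member of each bucket as the reference, and compress the rest against it using zlib's preset-dictionary support (`zdict`). The `signature` function below is a crude stand-in for a real LSH such as MinHash, and no Faiss-style index is shown.

```python
# Dictionary compression via clustering; signature() is a toy LSH stand-in.
import zlib
from collections import defaultdict

def signature(unit: bytes, shingle: int = 4, keep: int = 8) -> tuple:
    shingles = {unit[i:i + shingle] for i in range(len(unit) - shingle + 1)}
    return tuple(sorted(hash(s) for s in shingles)[:keep])

def compress_dataset(units: list[bytes]):
    clusters = defaultdict(list)
    for u in units:
        clusters[signature(u)].append(u)      # cluster on hashes, not raw units
    out = []
    for members in clusters.values():
        reference = members[0]                # exemplar serves as the dictionary
        for u in members:
            c = zlib.compressobj(zdict=reference)
            out.append((reference, c.compress(u) + c.flush()))
    return out

def decompress(reference: bytes, blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=reference)
    return d.decompress(blob) + d.flush()
```

Compressing against a similar reference lets the DEFLATE back-references hit the dictionary, so near-duplicate units shrink far more than they would compressed in isolation.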
20220197869 | Widget Synchronization in Accordance with Synchronization Preferences - Improved techniques and apparatus for managing data between a host device (e.g., host computer) and a client device. The data being managed can, for example, pertain to portable computer programs, such as widgets. The managing of the data thus can involve transfer of portable computer programs (e.g., widgets) between the host device and the client device. In one embodiment, the transfer of portable computer programs between a host device and a client device can be referred to as synchronization. | 2022-06-23 |
20220197870 | Scenario Execution System, Log Management Device, Log Recording Method, And Program - A scenario execution system includes a scenario execution terminal, a log management device, and a storage device. The scenario execution terminal includes a scenario execution unit and an event notification unit. The scenario execution unit executes a scenario file having a scenario indicating a procedure of operations in the scenario execution terminal described therein. The event notification unit notifies the log management device of information on an event generated by the scenario execution unit executing the scenario file. The log management device includes a log recording unit that receives the information on the event from the scenario execution terminal and records log data having the received information on the event set therein on the storage device. The storage device adds, to the log data, information uniquely generated on the basis of log data recorded before such log data and stores resultant log data. | 2022-06-23 |
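The "information uniquely generated on the basis of log data recorded before" reads as a hash chain, which makes tampering or reordering detectable. Here is a minimal sketch under that assumption; field names are illustrative.

```python
# Tamper-evident log records via a SHA-256 hash chain.
import hashlib, json

def append_log(log: list[dict], event: dict) -> None:
    prev_digest = log[-1]["digest"] if log else "0" * 64
    record = {"event": event, "prev": prev_digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify_chain(log: list[dict]) -> bool:
    prev = "0" * 64
    for rec in log:
        body = {"event": rec["event"], "prev": rec["prev"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev"] != prev or rec["digest"] != hashlib.sha256(payload).hexdigest():
            return False                      # record altered or reordered
        prev = rec["digest"]
    return True

log: list[dict] = []
append_log(log, {"step": "open scenario"})
append_log(log, {"step": "click run"})
assert verify_chain(log)
```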
20220197871 | Self-Healing Infrastructure for a Dual-Database System - A database system could include a first database engine, a second database engine, and a replication engine. The database system could also include processors configured to perform operations. The operations could involve obtaining indicators that are respectively associated with performance issues that can occur in the database system, each indicator defining one or more conditions that, when satisfied, cause the indicator to become active. The operations could also involve obtaining mappings between: (i) at least some of the indicators, and (ii) remediation subroutines. The operations could additionally involve receiving operational data related to the first database engine, the second database engine, or the replication engine; determining, based on the operational data and the conditions defined by the indicators, that a particular indicator is active; determining, based on the mappings, that the particular indicator has an associated remediation subroutine; and executing the associated remediation subroutine. | 2022-06-23 |
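The indicator-to-remediation mapping is essentially a rules engine; a compact Python sketch follows. The conditions and subroutines shown are invented examples, not the product's actual catalog, and the mapping is deliberately partial to show the lookup step.

```python
# Hypothetical self-healing loop: conditions activate indicators, indicators
# map to remediation subroutines.
from typing import Callable

indicators: dict[str, Callable[[dict], bool]] = {
    "replication_lag": lambda m: m.get("lag_seconds", 0) > 300,
    "primary_disk_full": lambda m: m.get("disk_pct", 0) > 95,
}

remediations: dict[str, Callable[[], None]] = {
    "replication_lag": lambda: print("restarting replication engine"),
    "primary_disk_full": lambda: print("rotating logs and purging temp tables"),
}

def self_heal(operational_data: dict) -> None:
    for name, condition in indicators.items():
        if condition(operational_data):          # indicator becomes active
            fix = remediations.get(name)         # mapping may be partial
            if fix is not None:
                fix()                            # execute remediation subroutine

self_heal({"lag_seconds": 900, "disk_pct": 40})  # triggers one remediation
```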
20220197872 | SHARE REPLICATION BETWEEN REMOTE DEPLOYMENTS - Provided herein are systems and methods for an efficient method of replicating share objects to remote deployments. For example, the method may include generating a global representation of a share object of a first database account located in a first region. The share object includes grant metadata associated with a set of objects of a database located in the first region and associated with the first database account. The method may further include, in response to a database refresh command received from a second database account associated with a database replica located in a second region, replicating the set of objects of the database to the database replica. The method may further include, in response to a share refresh command received from the second database account, replicating the grant metadata to a share object replica located in the second region. | 2022-06-23 |
20220197873 | PAGE SPLIT DETECTION AND AFFINITY IN QUERY PROCESSING PUSHDOWNS - Methods for page split detection and affinity in query processing pushdowns are performed by systems and devices. Page servers perform pushdown operations based on specific, and specifically formatted or generated, information, instructions, and data provided thereto from a compute node. Page servers also determine that page splits have occurred during reading of data pages maintained by page servers during pushdown operations, and also during fulfillment of compute node data requests. To detect that a data page has split, page servers utilize information from a compute node about the expected next data page, which is compared to the next data page in the page server's page index. A mismatch in this comparison indicates that the data page was split. Compute nodes and page servers store and maintain off-row data generated during data operations via page-affinity considerations, where the off-row data is stored at the same page server as the data. | 2022-06-23 |
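The split check itself is a one-line comparison; the sketch below shows it in context, with the compute node's hint compared against the page server's index while walking a page chain. The structures are simplified stand-ins.

```python
# Illustrative page-split detection while scanning a linked chain of pages.
def read_pages(page_index: dict[int, int], start: int, expected_next: dict[int, int]):
    """page_index maps page id -> actual next page id (-1 terminates the chain)."""
    page = start
    while page is not None and page in page_index:
        hinted = expected_next.get(page)
        actual = page_index[page]
        if hinted is not None and hinted != actual:
            raise RuntimeError(f"page {page} split: expected next {hinted}, found {actual}")
        yield page
        page = actual if actual != -1 else None

# The compute node's plan said page 7 is followed by 9, but the server now has 8:
index = {7: 8, 8: 9, 9: -1}
try:
    list(read_pages(index, start=7, expected_next={7: 9}))
except RuntimeError as e:
    print(e)   # page 7 split: expected next 9, found 8
```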
20220197874 | EFFICIENT STORAGE OF KEY-VALUE DATA WITH SCHEMA INTEGRATION - Methods, apparatus, and processor-readable storage media for efficient storage of key-value data with schema integration are provided herein. An example computer-implemented method includes obtaining a metrics data message associated with a product, wherein the metrics data message has a first format and comprises a schema version and a type of the product; identifying one of a plurality of schema definitions for the metrics data message based at least in part on the schema version and the type of the product; converting the metrics data message into a second format based on the identified schema definition, wherein the second format removes at least some redundant data from the metrics data message; and storing the converted metrics data message in a metrics database. | 2022-06-23 |
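A small sketch can show what the format conversion buys: once the (version, product-type) pair selects a schema, the message can be re-encoded positionally, dropping the repeated keys. The schema table and field names below are invented for illustration.

```python
# Hypothetical schema-driven conversion to a compact second format.
SCHEMAS = {
    ("1.2", "storage-array"): ["capacity_gb", "iops", "firmware"],
}

def convert(message: dict) -> tuple:
    key = (message["schema_version"], message["product_type"])
    fields = SCHEMAS[key]                      # KeyError signals an unknown schema
    # Second format: positional values only; field names live in the schema,
    # so per-message redundancy (repeated keys) is removed before storage.
    return key + tuple(message["payload"][f] for f in fields)

msg = {"schema_version": "1.2", "product_type": "storage-array",
       "payload": {"capacity_gb": 512, "iops": 90000, "firmware": "4.1"}}
row = convert(msg)   # ('1.2', 'storage-array', 512, 90000, '4.1') -> metrics database
```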
20220197875 | COMPUTER-IMPLEMENTED METHOD FOR STORING UNLIMITED AMOUNT OF DATA AS A MIND MAP IN RELATIONAL DATABASE SYSTEMS - A computer implemented method for creating and managing a database system comprising data structures for storing, in a memory, data and relations between the data, the method comprising the steps of creating a mind map structure wherein each node of the mind map represents a set in the first data structure and each branch represents a relation in the fifth data structure of the database in which there are defined six data structures that hold all information relating to tables, records and relations, namely: a first data structure comprising a definition of at least one data set, a second data structure comprising definitions of properties of objects, a third data structure comprising definitions of objects, a fourth data structure comprising definitions of properties of each object, a fifth data structure comprising definitions of relations and a sixth data structure for storing definitions of relations between objects. | 2022-06-23 |
20220197876 | DEVICE MANAGEMENT - Various example embodiments for supporting management of a communication device are presented. Various example embodiments for supporting management of a communication device based on a management model may be configured to support management of the communication device based on a management model that includes multiple data models configured to model objects representing elements of the communication device. Various example embodiments for supporting management of a communication device based on a management model may be configured to support management of the communication device based on a management model that includes multiple data models configured to support, for a given object representing an element of the communication device, association of sets of instance data generated for the object based on the multiple data models such that the communication device knows to apply the sets of instance data to the underlying operational object instance for the given object representing the element of the communication device. | 2022-06-23 |
20220197877 | DATA SIMULATION FOR REGRESSION ANALYSIS - A simulated dataset is queried for regression by validating a structured query language (SQL) statement, determining a pattern type of the SQL statement, reconstructing the SQL statement according to a predetermined process for the pattern type, creating a mutated SQL statement for querying a simulated dataset, and validating the mutated SQL statement. The simulated dataset is based on a confidential dataset having the confidential elements removed or replaced. | 2022-06-23 |
20220197878 | Compressed Read and Write Operations via Deduplication - Systems, apparatuses, and methods for implementing a collapsed stack are disclosed. A parallel processor includes a plurality of compute units for executing wavefronts of a given application. Each compute unit includes multiple single-instruction, multiple-data (SIMD) units. When the work-items executing on the execution lanes of a SIMD unit are writing data values to a stack, many of the data values are repeated. In these cases, when the lanes are pushing duplicate data values to the stack, a control unit deduplicates the duplicate data values and stores the deduplicated data values. The control unit then generates a control word that maps the deduplicated data values to execution lanes and stores the control word in association with the stored data values. When the stored data values are restored, the control word is used to determine which lanes receive which values of the stored data values. | 2022-06-23 |
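The push-side deduplication and control word are straightforward to sketch: store each distinct lane value once, and record per lane which stored slot it came from so a later pop can fan the values back out. Names are illustrative.

```python
# Sketch of stack push/pop with value deduplication and a control word.
def dedup_push(lane_values: list[int]) -> tuple[list[int], list[int]]:
    uniques: list[int] = []
    control_word: list[int] = []
    for v in lane_values:
        if v not in uniques:
            uniques.append(v)                  # store each value once
        control_word.append(uniques.index(v))  # lane -> slot in stored values
    return uniques, control_word

def dedup_pop(uniques: list[int], control_word: list[int]) -> list[int]:
    return [uniques[slot] for slot in control_word]   # fan values back to lanes

lanes = [5, 5, 5, 9, 9, 5, 7, 7]               # 8 lanes, only 3 distinct values
stored, cw = dedup_push(lanes)                 # stored=[5,9,7], cw=[0,0,0,1,1,0,2,2]
assert dedup_pop(stored, cw) == lanes
```

With many lanes pushing duplicates, the stack holds the short `stored` list plus a compact control word instead of one full value per lane.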
20220197879 | METHODS AND SYSTEMS FOR AGGREGATING AND QUERYING LOG MESSAGES - Methods and systems described herein are directed to aggregating and querying log messages. Methods and systems determine event types of log messages generated by event sources of the distributed computing system. The event types are aggregated into aggregated records for a shortest time unit and event types are aggregated into aggregated records for longer time units based on the aggregated records associated with the shortest time unit. In response to a query regarding occurrences of an event type in a query time interval, the query time interval is split into subintervals with time lengths that range from the shortest time unit to a longest time unit that lie within the query time interval. The method determines a total event count of occurrences of the event type in the query time interval based on the aggregated records with time stamps in the subintervals. The event count in the query time interval may be used to detect abnormal behavior of the event sources. | 2022-06-23 |
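A two-level Python sketch makes the hierarchy concrete: minute-level counts roll up into hour records, and a query is answered from whole-hour records where they fit, falling back to minute records at the edges. The time units and structures are simplified stand-ins for the described aggregation.

```python
# Minute/hour aggregation and subinterval query answering.
from collections import Counter

def aggregate(events: list[tuple[int, str]]):
    """events: (epoch_minute, event_type). Returns minute and hour records."""
    minutes: dict[int, Counter] = {}
    for minute, etype in events:
        minutes.setdefault(minute, Counter())[etype] += 1
    hours: dict[int, Counter] = {}
    for minute, counts in minutes.items():     # longer unit built from shorter one
        hours.setdefault(minute // 60, Counter()).update(counts)
    return minutes, hours

def count_in_interval(etype, start_min, end_min, minutes, hours) -> int:
    total, m = 0, start_min
    while m < end_min:
        if m % 60 == 0 and m + 60 <= end_min:  # a whole hour fits: use hour record
            total += hours.get(m // 60, Counter())[etype]
            m += 60
        else:                                  # edges: fall back to minute records
            total += minutes.get(m, Counter())[etype]
            m += 1
    return total
```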
20220197880 | DATA MODEL AND DATA SERVICE FOR CONTENT MANAGEMENT SYSTEM - The disclosed technology addresses the need in the art for a content management system that can be highly flexible to the needs of its subjects. The present technology permits any object to be shared by providing a robust and flexible access control list mechanism. The present technology utilizes a highly efficient data structure that both minimizes the amount of information that needs to be written into any database and allows for fast reads and writes of information from authoritative tables that are a source of truth for the content management system, while allowing for maintenance of indexes containing more refined data that allow for efficient retrieval of certain information that would normally need to be calculated when it is needed. | 2022-06-23 |
20220197881 | MULTIPATH VERIFICATION OF DATA TRANSFORMS IN A SYSTEM OF SYSTEMS - Processing circuitry is configured to obtain a data structure that defines a plurality of conversions of data between pairs of fields; perform a search to identify a plurality of paths from a source node of the data structure to a destination node of the data structure, wherein the source node corresponds to a first field of the fields and the destination node corresponds to a second field of the fields; convert, for each path of the plurality of paths, transforms represented by corresponding edges of the path to a sequence of transforms that conform to a solver format; process the sequence of transforms for each path to determine whether all paths of the plurality of paths are equivalent up to an equivalence relation; and output an indication of whether all paths of the plurality of paths are equivalent up to an equivalence relation. | 2022-06-23 |
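The multipath idea can be sketched compactly: enumerate conversion paths through the graph, compose each path's transforms, and compare their behavior. Note the abstract's solver-format equivalence proof is approximated here by testing on sample inputs; a real system would discharge it with an SMT solver.

```python
# Enumerate all transform paths between two fields and compare their results.
def all_paths(graph: dict[str, dict[str, callable]], src: str, dst: str, seen=()):
    if src == dst:
        yield []
        return
    for nxt, fn in graph.get(src, {}).items():
        if nxt not in seen:                    # avoid cycles
            for rest in all_paths(graph, nxt, dst, seen + (src,)):
                yield [fn] + rest

def paths_equivalent(graph, src, dst, samples) -> bool:
    results = []
    for path in all_paths(graph, src, dst):
        values = list(samples)
        for fn in path:                        # compose transforms along the path
            values = [fn(v) for v in values]
        results.append(values)
    return all(r == results[0] for r in results)

graph = {
    "celsius": {"kelvin": lambda c: c + 273.15, "fahrenheit": lambda c: c * 9 / 5 + 32},
    "fahrenheit": {"kelvin": lambda f: (f - 32) * 5 / 9 + 273.15},
}
print(paths_equivalent(graph, "celsius", "kelvin", [0.0, 100.0]))  # True
```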