18th week of 2020 patent application highlights part 52 |
Patent application number | Title | Published |
20200133837 | MEMORY CONTROLLER AND MEMORY SYSTEM - A memory controller for preventing the storage area of a flash memory from being reduced is provided. The memory controller, which controls access to a flash memory based on a command provided from a host system, includes: a processor, a RAM (random access memory), and a mask ROM (read only memory) in which a first firmware is written, wherein the memory controller is configured to: perform a search for a second firmware written in the flash memory based on the first firmware at a start-up time; and write a third firmware provided from the host system in the RAM when the second firmware is not found through the search and perform an initialization based on the third firmware written in the RAM. | 2020-04-30 |
20200133838 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device includes a memory device and a controller. The memory device includes a plurality of planes, wherein each of the planes includes two or more memory blocks. The controller is configured to control an operation of the memory device. The controller is further configured to generate a first super block as a super block including two or more way-interleavable memory blocks among the plurality of memory blocks of the plurality of planes, determine whether each of the memory blocks included in the first super block is a bad block, retrieve a spare block for replacing a first memory block determined as a bad block, in the plurality of planes; and generate a second replacing super block as a super block in which the first memory block is replaced with a second memory block positioned in a plane which does not have the first memory block, when all spare blocks of a plane including the first memory block are used. | 2020-04-30 |
20200133839 | READ QUALITY OF SERVICE FOR NON-VOLATILE MEMORY - A method and apparatus to reduce read latency and improve read quality of service (Read QoS) for non-volatile memory, such as NAND array in a NAND device. For read commands that collide with an in-progress program array operation targeting the same program locations in a NAND array, the in-progress program is suspended and the controller allows the read command to read from the internal NAND buffer instead of waiting for the in-progress program to complete. For read commands queued during an in-progress program that is processing pre-reads in preparation for a program array operation, pre-read bypass allows the reads to be serviced between the pre-reads and before the program's array operation starts. In this manner, read commands can be serviced without suspending the in-progress program. Allowing internal NAND buffer reads and enabling pre-read bypass reduces read latency and improves Read QoS. | 2020-04-30 |
20200133840 | DATA PROCESSING METHOD AND APPARATUS, AND FLASH DEVICE - A method for adjusting over provisioning space and a flash device are provided. The flash device includes user storage space for storing user data and over provisioning space for garbage collection within the flash device. The flash device receives an operation instruction, and then performs an operation on user data stored in the user storage space based on the operation instruction. Further, the flash device identifies a changed size of user data after performing the operation. Based on the changed size of data, a target adjustment parameter is identified. Further, the flash device adjusts the capacity of the over provisioning space according to the target adjustment parameter. According to the method, the over provisioning ratio can be dynamically adjusted, thereby prolonging the life of the flash device. | 2020-04-30 |
20200133841 | SCALABLE GARBAGE COLLECTION - A method of scalable garbage collection includes receiving an indication to perform a garbage collection process on a section of a database of a storage array comprising a plurality of storage devices. The method further includes determining, by a processing device of a storage array controller of the storage array, whether the section corresponds to any check-pointed data set. The method further includes, if the section does not correspond to any check-pointed data set: performing the garbage collection process on the section. The method further includes, if the section does correspond to a check-pointed data set: performing, by the processing device, a scalable garbage collection process on the section. | 2020-04-30 |
20200133842 | EFFICIENTLY PURGING NON-ACTIVE BLOCKS IN NVM REGIONS USING VIRTBLOCK ARRAYS - Techniques for efficiently purging non-active blocks in an NVM region of an NVM device using virtblocks are provided. In one set of embodiments, a host system can maintain, in the NVM device, a pointer entry (i.e., virtblock entry) for each allocated data block of the NVM region, where page table entries of the NVM region that refer to the allocated data block include pointers to the pointer entry, and where the pointer entry includes a pointer to the allocated data block. The host system can further determine that a subset of the allocated data blocks of the NVM region are non-active blocks and can purge the non-active blocks from the NVM device to a mass storage device, where the purging comprises updating the pointer entry for each non-active block to point to a storage location of the non-active block on the mass storage device. | 2020-04-30 |
20200133843 | PERIODIC FLUSH IN MEMORY COMPONENT THAT IS USING GREEDY GARBAGE COLLECTION - An amount of valid data for each data block of multiple data blocks stored at a first memory is determined. An operation to write valid data of a particular data block from the first memory to a second memory is performed based on the amount of valid data for each data block. A determination is made that a threshold condition associated with when valid data of the data blocks was written to the first memory has been satisfied. In response to determining that the threshold condition has been satisfied, the operation to write valid data of the data blocks from the first memory to the second memory is performed based on when the valid data was written to the first memory. | 2020-04-30 |
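
The selection policy in this abstract combines two triggers: greedy migration by least valid data, and an age-based flush once valid data has lingered in the first memory too long. A minimal Python sketch of that policy, with the block structure and threshold invented for illustration:

```python
import time

class Block:
    def __init__(self, block_id, valid_bytes, written_at):
        self.block_id = block_id
        self.valid_bytes = valid_bytes   # amount of valid data in the block
        self.written_at = written_at     # when its valid data was written

def pick_blocks_to_migrate(blocks, now, max_age_seconds):
    """Pick source blocks to copy from the first memory to the second.

    Greedy mode: fewest valid bytes first (cheapest to migrate).
    Periodic-flush mode: if any block's data is older than the
    threshold, age wins and the oldest blocks go first.
    """
    stale = [b for b in blocks if now - b.written_at > max_age_seconds]
    if stale:  # threshold condition satisfied: flush by age
        return sorted(stale, key=lambda b: b.written_at)
    # otherwise plain greedy garbage collection by valid-data amount
    return sorted(blocks, key=lambda b: b.valid_bytes)

# toy usage: block 2 is stale, so it is flushed despite greedy order
now = time.time()
blocks = [Block(1, 4096, now - 10), Block(2, 512, now - 5000), Block(3, 2048, now - 60)]
print([b.block_id for b in pick_blocks_to_migrate(blocks, now, max_age_seconds=3600)])  # [2]
```
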
20200133844 | DATA MERGE METHOD, MEMORY STORAGE DEVICE AND MEMORY CONTROL CIRCUIT UNIT - A data merge method for a rewritable non-volatile memory module including a plurality of physical units is provided according to an exemplary embodiment of the disclosure. The method includes: obtaining a first logical distance value between a first physical unit and a second physical unit among the physical units, and the first logical distance value reflects a logical dispersion degree between at least one first logical unit mapped by the first physical unit and at least one second logical unit mapped by the second physical unit; and performing a data merge operation according to the first logical distance value, so as to copy valid data from a source node to a recycling node. | 2020-04-30 |
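
The abstract leaves the exact dispersion metric open; one plausible reading is to score how tightly the logical units mapped by two physical units cluster together. A hedged Python sketch under that assumption:

```python
def logical_distance(logical_units_a, logical_units_b):
    """One plausible dispersion metric: the span of logical-unit
    indices covered by the two physical units, relative to how many
    distinct logical units they actually hold. A small value means
    the two physical units hold logically adjacent data and are good
    candidates to merge into the same recycling node."""
    combined = sorted(set(logical_units_a) | set(logical_units_b))
    if len(combined) <= 1:
        return 0.0
    span = combined[-1] - combined[0] + 1
    return span / len(combined)

# physical unit A maps logical units 10..13, B maps 14..17: tightly clustered
print(logical_distance([10, 11, 12, 13], [14, 15, 16, 17]))  # 1.0 (dense)
print(logical_distance([10, 11], [900, 901]))                # 223.0 (dispersed)
```
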
20200133845 | STORAGE DEVICE, METHOD AND NON-VOLATILE MEMORY DEVICE PERFORMING GARBAGE COLLECTION USING ESTIMATED NUMBER OF VALID PAGES - Garbage collection is performed according to an estimated number of valid pages. A storage device estimates a valid page count at a future time based on a valid page count at each of past time steps and a present time step using a neural network model and selects a victim block that undergoes the garbage collection from memory blocks based on an estimated valid page count. A memory block having the lowest estimated valid page count, or one whose estimated valid page count tends to remain constant, is selected as the victim block, while a memory block whose estimated valid page count tends to decrease is excluded from selection as the victim block. | 2020-04-30 |
20200133846 | EFFICIENTLY PURGING NON-ACTIVE BLOCKS IN NVM REGIONS USING POINTER ELIMINATION - Techniques for efficiently purging non-active blocks in an NVM region of an NVM device using pointer elimination are provided. In one set of embodiments, a host system can, for each level 1 (L1) page table entry of each snapshot of the NVM region, determine whether a data block of the NVM region that is pointed to by the L1 page table entry is a non-active block, and if the data block is a non-active block, remove a pointer to the data block in the L1 page table entry and reduce a reference count parameter associated with the data block by 1. If the reference count parameter has reached zero at this point, the host system can purge the data block from the NVM device to the mass storage device. | 2020-04-30 |
20200133847 | EFFICIENTLY PURGING NON-ACTIVE BLOCKS IN NVM REGIONS WHILE PRESERVING LARGE PAGES - Techniques for efficiently purging non-active blocks in an NVM region of an NVM device while preserving large pages are provided. In one set of embodiments, a host system can receive a write request with respect to a data block of the NVM region, where the data block is referred to by a snapshot of the NVM region and was originally allocated as part of a large page. The host system can further allocate a new data block in the NVM region, copy contents of the data block to the new data block, and update the data block with write data associated with the write request. The host system can then update a level 1 (L1) page table entry of the NVM region's running point to point to the original data block. | 2020-04-30 |
20200133848 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR MANAGING STORAGE SPACE - Techniques involve managing a storage space. In response to receiving an allocation request for allocating a storage space, a storage space size and a slice size are obtained. A first storage system and a second storage system are selected from multiple storage systems, the first storage system and the second storage system include a first storage device group and a second storage device group respectively, and the first storage device group does not overlap the second storage device group. A first slice group and a second slice group are obtained from the first storage system and the second storage system respectively, on the basis of the size of the storage space and the size of the slice. A user storage system is built at least on the basis of the first slice group and the second slice group, so as to respond to the allocation request. | 2020-04-30 |
20200133849 | ERROR-CHECKING IN NAMESPACES ON STORAGE DEVICES USING A NAMESPACE TABLE AND METADATA - Systems and methods for storing and validating namespace metadata are disclosed. An exemplary system includes a memory component and a processing device identifying a namespace identifier associated with a first write instruction from a host process and combining the namespace identifier with a namespace offset included in the first write instruction to form a logical address. The logical address is translated into a physical address and included in a second write instruction along with data to be written and the physical address. The second write instruction is sent to a memory component causing the data to be written at the physical address, and the logical address to be stored as metadata associated with the data. The logical address may be translated using a namespace table and one or more translation tables, where the namespace table has entries including a starting location and size of a namespace in a translation table. | 2020-04-30 |
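
As a rough model of the two-table translation this abstract describes, where a namespace table maps a namespace ID to a window in a translation table (all names and layouts here are illustrative, not the patent's):

```python
class NamespaceTranslator:
    """Hypothetical model: a namespace table maps a namespace ID to a
    (start, size) window in a flat translation table, and the
    translation table maps the resulting logical address to a
    physical address."""

    def __init__(self):
        self.namespace_table = {}    # ns_id -> (start_slot, size_in_slots)
        self.translation_table = {}  # logical address -> physical address

    def add_namespace(self, ns_id, start, size):
        self.namespace_table[ns_id] = (start, size)

    def logical_address(self, ns_id, ns_offset):
        start, size = self.namespace_table[ns_id]
        if not 0 <= ns_offset < size:
            raise ValueError("offset outside namespace")
        return start + ns_offset  # namespace ID + offset -> logical address

    def write(self, ns_id, ns_offset, physical):
        la = self.logical_address(ns_id, ns_offset)
        self.translation_table[la] = physical
        # the logical address would travel with the data as metadata,
        # letting the device cross-check namespace ownership on read
        return la

t = NamespaceTranslator()
t.add_namespace(ns_id=1, start=1000, size=256)
print(t.write(ns_id=1, ns_offset=5, physical=0xABCDE))  # logical address 1005
```
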
20200133850 | Apparatus and Method to Access a Memory Location - A method for accessing two memory locations in two different memory arrays based on a single address string includes determining three sets of address bits. A first set of address bits is common to the addresses of wordlines that correspond to the memory locations in the two memory arrays. A second set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a first memory location in a first memory array. A third set of address bits concatenated with the first set of address bits provides the address of the wordline that corresponds to a second memory location in a second memory array. The method includes populating the single address string with the three sets of address bits and may be performed by an address data processing unit. | 2020-04-30 |
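
The bit-slicing can be illustrated with ordinary integer arithmetic. A sketch assuming one possible packing order of the three bit sets (common bits in the least significant positions, which the patent does not specify):

```python
def build_addresses(single_address, common_bits, a_bits, b_bits):
    """Recover two wordline addresses from one packed address string.

    Layout assumption (for illustration only): the packed value holds,
    from least significant upward, the common bits, then the bits
    unique to array A, then the bits unique to array B.
    """
    common = single_address & ((1 << common_bits) - 1)
    a_part = (single_address >> common_bits) & ((1 << a_bits) - 1)
    b_part = (single_address >> (common_bits + a_bits)) & ((1 << b_bits) - 1)
    addr_a = (a_part << common_bits) | common  # second set ++ first set
    addr_b = (b_part << common_bits) | common  # third set ++ first set
    return addr_a, addr_b

packed = (0b101 << 7) | (0b01 << 5) | 0b11010   # b=101, a=01, common=11010
print([bin(x) for x in build_addresses(packed, common_bits=5, a_bits=2, b_bits=3)])
# ['0b111010', '0b10111010']
```
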
20200133851 | CIRCUITRY AND METHOD - Circuitry comprises memory circuitry providing a plurality of memory locations; | 2020-04-30 |
20200133852 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR MANAGING STORAGE SYSTEM - Techniques involve managing a storage system. A target storage device is selected from multiple storage devices associated with the storage system in response to respective wear degrees of the multiple storage devices being higher than a first predetermined threshold. Regarding multiple extents in the multiple storage devices, respective access loads of the multiple extents are determined. A source extent is selected from multiple extents residing on storage devices other than the target storage device, on the basis of the respective access loads of the multiple extents. Data in the source extent are moved to the target storage device. Various storage devices in a resource pool may be prevented from reaching the end of life at close times, and further data loss may be avoided. | 2020-04-30 |
20200133853 | METHOD AND SYSTEM FOR DYNAMIC MEMORY MANAGEMENT IN A USER EQUIPMENT (UE) - A method of dynamic memory management in a user equipment (UE) is provided. The method includes receiving, by the UE, transport block size (TBS) information, from a base station (BS), associated with a data packet to be transmitted by the BS to the UE; identifying, by the UE, a plurality of empty bins and a size of each of the plurality of empty bins in a memory of the UE; detecting, by the UE, a presence of one or more empty bins, among the plurality of empty bins in the memory, with a size of each of the one or more empty bins greater than the TBS of the data packet; and allocating, by the UE, a smallest size empty bin, with a size greater than the TBS of the data packet, among the one or more empty bins to the data packet. | 2020-04-30 |
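
The allocation rule here is best-fit restricted to bins strictly larger than the transport block size. A minimal sketch:

```python
def allocate_bin(empty_bins, tbs):
    """Best-fit pick per the abstract: among empty bins large enough
    to hold the data packet, choose the smallest. Returns the chosen
    (index, size) or None if no bin fits.

    empty_bins: list of bin sizes in bytes; tbs: transport block size."""
    candidates = [(size, i) for i, size in enumerate(empty_bins) if size > tbs]
    if not candidates:
        return None
    size, i = min(candidates)
    return i, size

print(allocate_bin([4096, 1024, 2048, 512], tbs=900))  # -> (1, 1024)
```
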
20200133854 | NEURAL NETWORK SYSTEM INCLUDING DATA MOVING CONTROLLER - Provided is a neural network system for processing data transferred from an external memory. The neural network system includes an internal memory storing input data transferred from the external memory, an operator performing a multidimensional matrix operation by using the input data of the internal memory and transferring a result of the multidimensional matrix operation as output data to the internal memory, and a data moving controller controlling an exchange of the input data or the output data between the external memory and the internal memory. The data moving controller reorders a dimension order with respect to an access address of the external memory to generate an access address of the internal memory, for the multidimensional matrix operation. | 2020-04-30 |
20200133855 | ACCESSING QUEUE DATA - A method and apparatus for accessing queue data are provided. According to the method, a double-layer circular queue is constructed, where the double-layer circular queue includes one or more inner-layer circular queues established based on an array, and the one or more inner-layer circular queues constitute an outer-layer circular queue of the double-layer circular queue based on a linked list. A management pointer of the outer-layer circular queue is set. Data accessing is performed on the inner-layer circular queues by using the management pointer. | 2020-04-30 |
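
A toy Python model of the double-layer structure: fixed-size array-backed inner rings, with management pointers walking the outer circle (modeled here with an indexed list rather than a linked list):

```python
class InnerRing:
    """Fixed-capacity ring built on an array (a preallocated list)."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = self.tail = self.count = 0

    def push(self, item):
        if self.count == len(self.buf):
            return False
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        if self.count == 0:
            return None
        item, self.buf[self.head] = self.buf[self.head], None
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

class DoubleLayerQueue:
    """Outer layer: a circle of inner rings; management pointers track
    which ring is being written (w) and which is being read (r)."""
    def __init__(self, rings=2, capacity=4):
        self.rings = [InnerRing(capacity) for _ in range(rings)]
        self.w = self.r = 0  # management pointers into the outer circle

    def push(self, item):
        for _ in range(len(self.rings)):       # walk the outer circle
            if self.rings[self.w].push(item):
                return True
            self.w = (self.w + 1) % len(self.rings)
        return False                           # every ring is full

    def pop(self):
        for _ in range(len(self.rings)):
            item = self.rings[self.r].pop()
            if item is not None:
                return item
            self.r = (self.r + 1) % len(self.rings)
        return None

q = DoubleLayerQueue()
for i in range(6):
    q.push(i)
print([q.pop() for _ in range(6)])  # [0, 1, 2, 3, 4, 5]
```
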
20200133856 | USING A MACHINE LEARNING MODULE TO PERFORM DESTAGES OF TRACKS WITH HOLES IN A STORAGE SYSTEM - In response to an end of track access for a track in a cache, a determination is made as to whether the track has modified data and whether the track has one or more holes. In response to determining that the track has modified data and the track has one or more holes, an input on a plurality of attributes of a computing environment in which the track is processed is provided to a machine learning module to produce an output value. A determination is made as to whether the output value indicates whether one or more holes are to be filled in the track. In response to determining that the output value indicates that one or more holes are to be filled in the track, the track is staged to the cache from a storage drive. | 2020-04-30 |
20200133857 | INCREASING PERFORMANCE OF WRITE THROUGHPUT USING MACHINE LEARNING - Techniques for processing data may include: determining a first amount denoting an amount of write pending data stored in cache to be redirected through storage class memory (SCM) when destaging cached write pending data from the cache; performing first processing that destages write pending data from the cache, the first processing including: selecting, in accordance with the first amount, a first portion of write pending data that is destaged from the cache and stored in the SCM and a second portion of write pending data that is destaged directly from the cache and stored on one or more physical storage devices providing back-end non-volatile physical storage; and subsequent to storing the first portion of write pending data to the SCM, transferring the first portion of write pending data from the SCM to the one or more physical storage devices providing back-end non-volatile physical storage. | 2020-04-30 |
20200133858 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR STORING DATA - Techniques store data. Such techniques involve: in response to receiving a request for writing data to a file, determining a type of the file; determining a compression property of the data based on the determined type; determining a storage area corresponding to the data to be written in a storage device for storing the file; and storing the data into the storage area, based on the compression property. Such techniques can determine, based on a type of a file involved in an input/output (I/O) request, a compression property of the data for the I/O request, and further determine whether a data compression operation is to be performed prior to storing the data. Accordingly, the techniques can avoid an unnecessary compression operation while reducing the storage space required for storing data as much as possible, thereby improving performance of the system. | 2020-04-30 |
20200133859 | In-Memory Dataflow Execution with Dynamic Placement of Cache Operations and Action Execution Ordering - A dataflow execution environment is provided with dynamic placement of cache operations and action execution ordering. An exemplary method comprises: obtaining a current cache placement plan for a dataflow comprised of multiple operations and a corresponding current cache gain estimate; selecting an action to execute from a plurality of remaining dataflow actions based on a predefined policy; executing one or more operations in a lineage of the selected action and estimating an error as a difference in an observed execution time and an estimated execution time given by a cost model; obtaining an alternative cache placement plan for the dataflow following the execution in conjunction with a predefined new plan determination criteria being satisfied and a corresponding alternative cache gain estimate; implementing the alternative cache placement plan in conjunction with a predefined new plan implementation criteria being satisfied; and selecting a next action to execute from a plurality of remaining actions in the dataflow based on a predefined policy. | 2020-04-30 |
20200133860 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR VALIDATING CACHE FILE - Embodiments of the present disclosure provide a method, device and computer program product for validating a cache file. In an embodiment, a reference cache file associated with the backed up data is divided into a plurality of reference segments. Reference check information is generated for the respective reference segments of the plurality of reference segments, and the generated reference check information is stored. In response to the initiating of a backup job, the stored reference check information is used to validate the cache file. | 2020-04-30 |
20200133861 | METHODS AND SYSTEMS FOR MANAGING SYNONYMS IN VIRTUALLY INDEXED PHYSICALLY TAGGED CACHES - Methods and systems for managing synonyms in VIPT caches are disclosed. A method includes tracking lines of a copied cache using a directory, examining a specified bit of a virtual address that is associated with a load request and determining its status and making an entry in one of a plurality of parts of the directory based on the status of the specified bit of the virtual address that is examined. The method further includes updating one of, and invalidating the other of, a cache line that is associated with the virtual address that is stored in a first index of the copied cache, and a cache line that is associated with a synonym of the virtual address that is stored at a second index of the copied cache, upon receiving a request to update a physical address associated with the virtual address. | 2020-04-30 |
20200133862 | ASYMMETRIC MEMORY TAG ACCESS AND DESIGN - Various aspects are described herein. In some aspects, the disclosure provides techniques for accessing tag information in a memory line. The techniques include determining an operation to perform on at least one memory line of a memory. The techniques further include performing the operation by accessing only a portion of the at least one memory line, wherein the only the portion of the at least one memory line comprises one or more flag bits that are independently accessible from remaining bits of the at least one memory line. | 2020-04-30 |
20200133863 | CORRELATED ADDRESSES AND PREFETCHING - An apparatus is provided that includes cache circuitry that comprises a plurality of cache lines. The cache circuitry treats one or more of the cache lines as trace lines each having correlated addresses and each being tagged by a trigger address. Prefetch circuitry causes data at the correlated addresses stored in the trace lines to be prefetched. | 2020-04-30 |
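
A toy model of the mechanism: a trace line keyed by a trigger address stores the addresses that followed it, and a later hit on the trigger prefetches them all. Names and history length are illustrative:

```python
class TraceLinePrefetcher:
    """Trace lines are tagged by a trigger address and hold the
    correlated addresses that historically followed it; seeing the
    trigger again prefetches all of them."""
    def __init__(self):
        self.trace_lines = {}  # trigger address -> correlated addresses
        self.last_addrs = []
        self.prefetched = set()

    def access(self, addr, history_len=3):
        if addr in self.trace_lines:                 # trigger hit
            self.prefetched.update(self.trace_lines[addr])
        self.last_addrs.append(addr)
        if len(self.last_addrs) > history_len + 1:
            self.last_addrs.pop(0)
        if len(self.last_addrs) == history_len + 1:  # record a trace line
            trigger, correlated = self.last_addrs[0], self.last_addrs[1:]
            self.trace_lines[trigger] = list(correlated)

p = TraceLinePrefetcher()
for a in [0x10, 0x80, 0x44, 0x200]:   # learn: 0x10 -> 0x80, 0x44, 0x200
    p.access(a)
p.access(0x10)                        # trigger seen again
print(sorted(hex(a) for a in p.prefetched))  # ['0x200', '0x44', '0x80']
```
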
20200133864 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MANAGING I/O OPERATION - Techniques manage an input/output (I/O) operation. Such techniques involve estimating a first storage area in a storage device to be accessed by an upcoming random I/O operation, first data being stored in the estimated first storage area. Such techniques further involve, before the random I/O operation is executed, pre-fetching the first data from the first storage area into a cache associated with the storage device. Such techniques enable implementation of the cache pre-fetch for random I/O operations, thereby effectively improving the performance of data access. | 2020-04-30 |
20200133865 | CACHE MAINTENANCE OPERATIONS IN A DATA PROCESSING SYSTEM - An interconnect system and method of operating the system are disclosed. A master device has access to a cache and a slave device has an associated data storage device for long-term storage of data items. The master device can initiate a cache maintenance operation in the interconnect system with respect to a data item temporarily stored in the cache causing action to be taken by the slave device with respect to storage of the data item in the data storage device. For long latency operations the master device can issue a separated cache maintenance request specifying the data item and the slave device. In response an intermediate device signals an acknowledgment response indicating that it has taken on responsibility for completion of the cache maintenance operation and issues the separated cache maintenance request to the slave device. The slave device signals the acknowledgement response to the intermediate device and on completion of the cache maintenance operation with respect to the data item stored in the data storage device signals a completion response to the master device. | 2020-04-30 |
20200133866 | BYTE SELECT CACHE COMPRESSION - The disclosure herein provides techniques for designing cache compression algorithms that control how data in caches are compressed. The techniques generate a custom “byte select algorithm” by applying repeated transforms applied to an initial compression algorithm until a set of suitability criteria is met. The suitability criteria include that the “cost” is below a threshold and that a metadata constraint is met. The “cost” is the number of blocks that can be compressed by an algorithm as compared with the “ideal” algorithm. The metadata constraint is the number of bits required for metadata. | 2020-04-30 |
20200133867 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR PROVIDING CACHE SERVICE - Techniques provide cache service in a storage system. Such techniques involve a storage cell pool, a cache and an underlying storage system. The storage cell pool includes multiple storage cells, a storage cell among the multiple storage cells being mapped to a physical address in the underlying storage system via an address mapping of the storage system. Specifically, an access request for target data at a virtual address in the storage cell pool is received, and the type of the access request is determined. The access request is served with the cache on the basis of the determined type, where the cache is used to cache data according to a format of a storage cell in the storage cell pool. The cache directly stores data in various storage cells in the pool that is visible to users, so that response speed for the access request may be increased. | 2020-04-30 |
20200133868 | SHARED LOADS AT COMPUTE UNITS OF A PROCESSOR - A processor reduces bus bandwidth consumption by employing a shared load scheme, whereby each shared load retrieves data for multiple compute units (CUs) of a processor. Each CU in a specified group monitors a bus for load accesses directed to a cache shared by the multiple CUs. In response to identifying a load access on the bus, a CU determines if the load access is a shared load access for its share group. In response to identifying a shared load access for its share group, the CU allocates an entry of a private cache associated with the CU for data responsive to the shared load access. The CU then monitors the bus for the data targeted by the shared load. In response to identifying the targeted data on the bus, the CU stores the data at the allocated entry of the private cache. | 2020-04-30 |
20200133869 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR DATA STORAGE - Techniques perform data storage. Such techniques may involve writing metadata to a first cache of a first processor, the metadata indicating allocation of a storage resource to user data. Such techniques may further involve determining an address range of the metadata in the first cache. Such techniques may further involve copying only data stored in the address range in the first cache to a second cache of a second processor. Accordingly, the data transmission volume between two processors is reduced, which helps to improve the overall performance of a storage system. | 2020-04-30 |
20200133870 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR CACHE MANAGEMENT - Techniques perform cache management. Such techniques involve: obtaining a first cache page of the cache to be flushed, the first cache page being associated with a target storage block in a storage device; determining from the cache a set of target cache pages to be flushed, each of the set of target cache pages being associated with the target storage block; and writing data in the first cache page and data in each of the set of target cache pages into the target storage block simultaneously. | 2020-04-30 |
20200133871 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR DATA WRITING - Techniques perform data writing. Such techniques involve: in response to receiving a first write request, searching a cache for a target address associated with the first write request; in response to a miss of the target address in the cache, determining a page usage rate in the cache; and in response to determining that the page usage rate exceeds an upper threshold, performing the first write request with a first available page in the cache. The first available page is reclaimed, independent of a refresh cycle of the cache, in response to completion of the first write request. | 2020-04-30 |
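
A sketch of the threshold behavior above, with invented page bookkeeping and a flush to the backing store elided; the point is that a page used for a write past the upper threshold is given back as soon as the write completes, rather than waiting for the cache's refresh cycle:

```python
class WriteCache:
    """On a write miss, check page usage; above the upper threshold,
    use an available page and reclaim it immediately after the write
    completes instead of at the next refresh cycle."""
    def __init__(self, total_pages, upper_threshold=0.8):
        self.total_pages = total_pages
        self.used_pages = 0
        self.upper_threshold = upper_threshold
        self.pages = {}  # target address -> cached page

    def write(self, addr, data):
        if addr in self.pages:           # hit: overwrite in place
            self.pages[addr] = data
            return
        usage = self.used_pages / self.total_pages
        self.used_pages += 1
        self.pages[addr] = data
        if usage > self.upper_threshold:
            # write completed: reclaim right away, independent of the
            # refresh cycle (flush to backing store elided here)
            del self.pages[addr]
            self.used_pages -= 1

c = WriteCache(total_pages=10)
for i in range(10):
    c.write(i, b"x")
print(c.used_pages)  # 9: the write past the 80% mark reclaimed its page
```
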
20200133872 | INCREASED PARALLELIZATION EFFICIENCY IN TIERING ENVIRONMENTS - A computer-implemented method, according to one embodiment, includes: identifying block addresses which are associated with a given object, and combining the block addresses to a first set in response to determining that at least one token is currently issued on one or more of the identified block addresses. A first portion of the block addresses is transitioned to a second set, where the first portion includes ones of the block addresses determined as having a token currently issued thereon. Moreover, a second portion of the block addresses is divided into equal chunks, where the second portion includes the block addresses remaining in the first set. The chunks in the first set are allocated across two or more parallelization units. Furthermore, the block addresses in the second set are divided into equal chunks, and the chunks in the second set are allocated to at least one dedicated parallelization unit. | 2020-04-30 |
20200133873 | SYNCHRONIZED ACCESS TO DATA IN SHARED MEMORY BY PROTECTING THE LOAD TARGET ADDRESS OF A LOAD-RESERVE INSTRUCTION - A data processing system includes multiple processing units all having access to a shared memory. A processing unit includes a processor core that executes memory access instructions including a load-type instruction. Execution of the load-type instruction generates a corresponding request that specifies a target address. The processing unit further includes a read-claim state machine that, responsive to receipt of the request, protects the load target address against access by any conflicting memory access request during a protection interval following servicing of the request. | 2020-04-30 |
20200133874 | MANAGED NVM ADAPTIVE CACHE MANAGEMENT - Disclosed in some examples are memory devices which feature customizable Single Level Cell (SLC) and Multiple Level Cell (MLC) configurations. The configuration (e.g., the size and position) of the SLC cache may have an impact on power consumption, speed, and other performance of the memory device. An operating system of an electronic device to which the memory device is installed may wish to achieve different performance of the device based upon certain conditions detectable by the operating system. In this way, the performance of the memory device can be customized by the operating system through adjustments of the performance characteristics of the SLC cache. | 2020-04-30 |
20200133875 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR MANAGING DATA ACCESS - In response to receiving a read request for target data, an external address of the target data is obtained from the read request, the external address being an address not mapped to the storage system; hit information of the target data in a cache of the storage system is determined based on the external address; and based on the hit information, one of the external address and an internal address is selected for providing the target data. The internal address is determined based on the external address and a mapping relationship. This shortens the data access path, accelerates the response to the data access request, and allows the cache to prefetch data more efficiently. | 2020-04-30 |
20200133876 | DISAGGREGATED COMPUTING ARCHITECTURE USING DEVICE PASS-THROUGH WHEREIN INDEPENDENT PHYSICAL ADDRESS SPACES BETWEEN SYSTEM NODES ARE IMPLEMENTED IN A SINGLE EXECUTION ENVIRONMENT - The present disclosure relates to a disaggregated computing architecture comprising: a first compute node ( | 2020-04-30 |
20200133877 | MAPPING ENTRY INVALIDATION - A memory access system may include a first memory address translator, a second memory address translator and a mapping entry invalidator. The first memory address translator translates a first virtual address in a first protocol of a memory access request to a second virtual address in a second protocol and tracks memory access request completions. The second memory address translator is to translate the second virtual address to a physical address of a memory. The mapping entry invalidator requests invalidation of a first mapping entry of the first memory address translator, and requests invalidation of a corresponding second mapping entry of the second memory address translator following invalidation of the first mapping entry and based upon the tracked memory access request completions. | 2020-04-30 |
20200133878 | SECURE MEMORY ACCESS IN A VIRTUALIZED COMPUTING ENVIRONMENT - A processor supports secure memory access in a virtualized computing environment by employing requestor identifiers at bus devices (such as a graphics processing unit) to identify the virtual machine associated with each memory access request. The virtualized computing environment uses the requestor identifiers to control access to different regions of system memory, ensuring that each VM accesses only those regions of memory that the VM is allowed to access. The virtualized computing environment thereby supports efficient memory access by the bus devices while ensuring that the different regions of memory are protected from unauthorized access. | 2020-04-30 |
20200133879 | MEMORY SYSTEM AND METHOD FOR CONTROLLING NONVOLATILE MEMORY - According to one embodiment, when a read request received from a host includes a first identifier indicative of a first region, a memory system obtains a logical address from the received read request, obtains a physical address corresponding to the obtained logical address from a logical-to-physical address translation table which manages mapping between logical addresses and physical addresses of the first region, and reads data from the first region, based on the obtained physical address. When the received read request includes a second identifier indicative of a second region, the memory system obtains physical address information from the read request, and reads data from the second region, based on the obtained physical address information. | 2020-04-30 |
20200133880 | SELECTIVELY ENABLED RESULT LOOKASIDE BUFFER - A processing system selectively enables and disables a result lookaside buffer (RLB) based on a hit rate tracked by a counter, thereby reducing power consumption for lookups at the result lookaside buffer during periods of low hit rates and improving the overall hit rate for the result lookaside buffer. A controller increments the counter in the event of a hit at the RLB and decrements the counter in the event of a miss at the RLB. If the value of the counter falls below a threshold value, the processing system temporarily disables the RLB for a programmable period of time. After the period of time, the processing system re-enables the RLB and resets the counter to an initial value. | 2020-04-30 |
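
The enable/disable policy is a small state machine around a saturating counter. A sketch with illustrative threshold, initial value, and window length:

```python
class ResultLookasideBuffer:
    """Hit-rate-gated RLB: the counter moves up on hits and down on
    misses; dipping below the threshold disables lookups for a
    programmable window, after which the RLB re-enables with the
    counter reset to its initial value."""
    def __init__(self, threshold=4, initial=8, disable_window=100):
        self.counter = self.initial = initial
        self.threshold = threshold
        self.disable_window = disable_window
        self.disabled_until = -1
        self.cache = {}

    def lookup(self, key, now):
        if now < self.disabled_until:
            return None                  # disabled: no lookup power spent
        if self.disabled_until != -1 and now >= self.disabled_until:
            self.counter = self.initial  # re-enable and reset counter
            self.disabled_until = -1
        if key in self.cache:
            self.counter += 1            # hit: increment
            return self.cache[key]
        self.counter -= 1                # miss: decrement
        if self.counter < self.threshold:
            self.disabled_until = now + self.disable_window
        return None

rlb = ResultLookasideBuffer()
rlb.cache[42] = "result"
print(rlb.lookup(42, now=0))  # hit -> "result"
print(rlb.lookup(7, now=1))   # miss -> None, counter decremented
```
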
20200133881 | METHODS AND SYSTEMS FOR OPTIMIZED TRANSLATION LOOKASIDE BUFFER (TLB) LOOKUPS FOR VARIABLE PAGE SIZES - A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform lookup in either one of the first TLB array and the second TLB array and the second portion is used for performing lookup in other one of the first TLB array and the second TLB array. | 2020-04-30 |
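
The key idea is deriving two candidate TLB indices from one virtual address, one per assumed page size, so both arrays can be probed in the same cycle. A sketch with illustrative page sizes (4 KiB and 2 MiB) and index width, which the abstract does not specify:

```python
def split_virtual_address(va, small_page_bits=12, large_page_bits=21, index_bits=6):
    """Produce two candidate indices from one virtual address: the
    first portion assumes the small page size, the second portion
    assumes the large page size."""
    small_index = (va >> small_page_bits) & ((1 << index_bits) - 1)
    large_index = (va >> large_page_bits) & ((1 << index_bits) - 1)
    return small_index, large_index

va = 0x7F3A_2B5C_1000
small_idx, large_idx = split_virtual_address(va)
# one index probes the first TLB array, the other probes the second,
# concurrently; whichever entry's tag matches provides the translation
print(small_idx, large_idx)
```
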
20200133882 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes: a memory device storing host data provided from a host; and a memory controller managing and transferring the host data between the host and the memory device, wherein the memory controller comprises: a write buffer temporarily storing the host data to be transferred to the memory device; a buffer monitoring device checking a usage amount of the write buffer during a predetermined period; a buffer usage comparing device generating a flush control signal based on a usage amount comparison result by comparing the usage amount checked during a current period corresponding to the predetermined period with the usage amount checked during a previous period corresponding to the predetermined period; and a first flush device transferring the host data temporarily stored in the write buffer to the memory device in response to the flush control signal. | 2020-04-30 |
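
The comparator's rule is not pinned down by the abstract; a plausible reading is "flush when usage stops falling period over period". A sketch under that assumption:

```python
class FlushController:
    """Period-over-period comparison: sample write-buffer usage once
    per predetermined period and raise the flush control signal when
    usage is not falling (one plausible comparison rule; the patent
    leaves the exact rule to the comparator)."""
    def __init__(self):
        self.prev_usage = None

    def end_of_period(self, current_usage):
        flush = self.prev_usage is not None and current_usage >= self.prev_usage
        self.prev_usage = current_usage
        return flush  # True acts as the flush control signal

fc = FlushController()
print(fc.end_of_period(10))  # False: no previous period to compare with
print(fc.end_of_period(14))  # True: usage grew, flush the write buffer
print(fc.end_of_period(3))   # False: usage dropped after the flush
```
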
20200133883 | Asynchronous Tracking for High-Frequency and High-Volume Storage - Asynchronous file tracking may include a first process that adds files to a cache and that generates different instances of a tracking file to track the files as they are entered into the cache. A second process, executing on the device, asynchronously accesses one or more instances of the tracking file at a different rate than the first process generates the tracking file instances. The second process may update a record of cached files based on a set of entries from each of the different instances of the tracking file accessed by the second process. Each set of entries may identify a different set of files that are cached by the device. The second process may then purge one or more cached files that satisfy eviction criteria while the first process continues to asynchronously add files to the cache and create new instances to track the newly cached files. | 2020-04-30 |
20200133884 | NVRAM SYSTEM MEMORY WITH MEMORY SIDE CACHE THAT FAVORS WRITTEN TO ITEMS AND/OR INCLUDES REGIONS WITH CUSTOMIZED TEMPERATURE INDUCED SPEED SETTINGS - An apparatus is described. The apparatus includes a memory controller to interface with a memory side cache and an NVRAM system memory. The memory controller has logic circuitry to favor items cached in the memory side cache that are expected to be written to above items cached in the memory side cache that are expected to only be read from. | 2020-04-30 |
20200133885 | DYNAMIC MEMORY PROTECTION - Presented herein are methods and systems for adjusting code files to apply memory protection for dynamic memory regions supporting run-time dynamic allocation of memory blocks. The code file(s), comprising a plurality of routines, are created for execution by one or more processors using the dynamic memory. Adjusting the code file(s) comprises analyzing the code file(s) to identify exploitation vulnerable routine(s) and adding a memory integrity code segment configured to detect, upon execution completion of each vulnerable routine, a write operation that extends from the memory space of one or more of a subset of most recently allocated blocks in the dynamic memory into the memory space of an adjacent block, using marker(s) inserted in the dynamic memory at the boundary(ies) of each of the subset's blocks. In runtime, in case the write operation is detected, the memory integrity code segment causes the processor(s) to initiate one or more predefined actions. | 2020-04-30 |
20200133886 | EFFICIENT CODING IN A STORAGE SYSTEM - A method for efficient name coding in a storage system is provided. The method includes identifying common prefixes, common suffixes, and midsections of a plurality of strings in the storage system, and writing the common prefixes, midsections and common suffixes to a string table in the storage system. The method includes encoding each string of the plurality of strings according to the positions in the string table of the prefix, midsection, and suffix of the string, and writing the encoding of each string of the plurality of strings to memory in the storage system. | 2020-04-30 |
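
A minimal sketch of the coding scheme, under the simplifying assumption of one shared prefix and suffix per batch of strings:

```python
import os

def encode_strings(strings):
    """Find a common prefix and suffix across the batch, store all
    three pieces (prefix, midsection, suffix) in a string table, and
    encode each string as three table positions."""
    prefix = os.path.commonprefix(strings)
    suffix = os.path.commonprefix([s[::-1] for s in strings])[::-1]
    table, index = [], {}

    def intern(piece):
        if piece not in index:
            index[piece] = len(table)
            table.append(piece)
        return index[piece]

    p, s = intern(prefix), intern(suffix)
    codes = [(p, intern(t[len(prefix):len(t) - len(suffix)]), s) for t in strings]
    return table, codes

def decode(table, code):
    p, m, s = code
    return table[p] + table[m] + table[s]

table, codes = encode_strings(["vol1_snap_a.img", "vol1_snap_b.img"])
print(table)                   # ['vol1_snap_', '.img', 'a', 'b']
print(decode(table, codes[0])) # 'vol1_snap_a.img'
```
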
20200133887 | SECURING DATA LOGS IN MEMORY DEVICES - An apparatus including non-volatile memory to store a forensic key and data, the data received from a host computing system. A processing device is coupled to the non-volatile memory and is to: allow writing the data, by the host computing system, to a region of the non-volatile memory; in response to a lock signal received from the host computing system, assert a lock on the region of the non-volatile memory, the lock to cause a restriction on access to the region of the non-volatile memory by an external device; and provide unrestricted access, by the external device, to the region of the non-volatile memory in response to verification of the forensic key received from the external device. | 2020-04-30 |
20200133888 | APPARATUS AND METHOD FOR HANDLING PAGE PROTECTION FAULTS IN A COMPUTING SYSTEM - Method and apparatus for handling page protection faults in combination particularly with the dynamic conversion of binary code executable by a one computing platform into binary code executed instead by another computing platform. In one exemplary aspect, a page protection fault handling unit is used to detect memory accesses, to check page protection information relevant to the detected access by examining the contents of a page descriptor store, and to selectively allow the access or pass on page protection fault information in accordance with the page protection information. | 2020-04-30 |
20200133889 | APPARATUS AND METHOD FOR HANDLING PAGE PROTECTION FAULTS IN A COMPUTING SYSTEM - Method and apparatus for handling page protection faults in combination particularly with the dynamic conversion of binary code executable by a one computing platform into binary code executed instead by another computing platform. In one exemplary aspect, a page protection fault handling unit is used to detect memory accesses, to check page protection information relevant to the detected access by examining the contents of a page descriptor store, and to selectively allow the access or pass on page protection fault information in accordance with the page protection information. | 2020-04-30 |
20200133890 | CONTROL ARRANGEMENT FOR A COFFEE MACHINE - A control arrangement for a coffee machine is provided and comprises a central unit having a main control unit and a plurality of peripheral units/components. Each peripheral unit/component is connected to the central unit by means of a “smart” connector, which is coded and which can provide information relating to the unit/component connected thereto to the main control unit. In order to allow information to be transferred, the central unit comprises a master communication device, each peripheral unit/component is provided with a slave communication device, and a communication line is provided for connecting the master communication device to the slave communication devices. The transferred information is unambiguously associated with the unit/component and may comprise counters, historical information, performance data and the like. | 2020-04-30 |
20200133891 | WIRELESS COMMUNICATION PROTOCOL HAVING A PREDETERMINED REPORT RATE - In some embodiments a transceiver is configured to wirelessly transfer data between a host computing device and one or more peripheral devices over a communication path using a communication data construct comprising a packet structure arranged in a repetitive communication structure. The repetitive communication structure can include a transmit time window within which the host transmits data to the one or more connected peripheral devices and a receive time window within which the host receives data from the one or more connected peripheral devices. A duration of the receive time window is set based on a predetermined communication report rate between the host computing device and the one or more connected peripheral devices. A new peripheral device is added as a connected peripheral device when the new peripheral device transmits a request to the host to be added as a connected peripheral device and the receive time window has time available to add the new peripheral device. | 2020-04-30 |
20200133892 | EMULATED ENDPOINT CONFIGURATION - Techniques for emulating a configuration space by a peripheral device may include receiving an access request, determining that the access request is for an emulated configuration space of the peripheral device, and retrieving an emulated configuration from an emulated configuration space. The access request can then be serviced by using the emulated configuration. | 2020-04-30 |
20200133893 | SYSTEM AND METHOD FOR INDIVIDUAL ADDRESSING - In one embodiment, a system includes a bus interface including a first processor, an indirect address storage storing a number of indirect addresses, and a direct address storage storing a number of direct addresses. The system also includes a number of devices connected to the bus interface and configured to analyze data. Each device of the number of devices includes a state machine engine. The bus interface is configured to receive a command from a second processor and to transmit an address for loading into the state machine engine of at least one device of the number of devices. The address includes a first address from the number of indirect addresses or a second address from the number of direct addresses. | 2020-04-30 |
20200133894 | TRAY FOR AVIONICS BAY COMPRISING A RECORDING DEVICE, ASSOCIATED AVIONICS BAY AND AIRCRAFT - A tray for an avionics bay, comprising a body and a recording device rigidly connected to each other in order to reduce the space requirement of acquisition systems on board an aircraft and dedicated to the prediction of failures. The recording device comprises a first input/output port to be connected to the avionics bay, a second input/output port to be connected to an item of electrical equipment, a data bus for routing signals between the first and the second input/output port, a collection member configured for acquiring at least some of the signals routed by the data bus between the first and the second input/output port, and a memory configured to store the signals acquired by the collection member. | 2020-04-30 |
20200133895 | DETERMINING MULTIPLE VIRTUAL HOST PORTS ON A SAME PHYSICAL HOST PORT - Multiple virtual host ports corresponding to a same physical host port may be determined by or on behalf of a storage system, for example, in response to logging the one or more virtual host ports into the storage system. For one or more virtual host ports, it may be determined whether the virtual host port is connected to a same fabric port as another virtual host port, where a fabric port is a port of a fabric configured to connect to a virtual host port. If two virtual host ports are determined to be connected to a same fabric port, it may be concluded that the two virtual host ports correspond to (e.g., share) a same physical host port. One or more actions may be taken on a storage network based at least in part on a determination that two virtual host ports are sharing a same physical host port. | 2020-04-30 |
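
The inference step reduces to grouping logged-in virtual ports by the fabric port they arrived on. A sketch with illustrative identifiers:

```python
from collections import defaultdict

def group_by_physical_port(logins):
    """Apply the abstract's inference: virtual host ports logged in
    through the same fabric port are concluded to share one physical
    host port. `logins` maps virtual port name -> fabric port ID
    (names here are illustrative)."""
    shared = defaultdict(list)
    for virtual_port, fabric_port in logins.items():
        shared[fabric_port].append(virtual_port)
    # any fabric port with more than one virtual port implies a
    # shared physical host port, on which actions may be taken
    return {fp: vps for fp, vps in shared.items() if len(vps) > 1}

logins = {
    "vport-A": "fabric-port-7",
    "vport-B": "fabric-port-7",  # same fabric port as vport-A
    "vport-C": "fabric-port-9",
}
print(group_by_physical_port(logins))  # {'fabric-port-7': ['vport-A', 'vport-B']}
```
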
20200133896 | Method and Apparatus for Redundant Array of Independent Drives Parity Quality of Service Improvements - An information handling system includes a host configured to write a non-volatile memory express (NVMe) command on a memory submission queue slot. The NVMe command includes a pre-fetch command and a non-completion command. A controller uses the pre-fetch command to monitor read operations, and to place on hold an execution of the monitored read operations and an issuance of an interrupt in response to the non-completion command. | 2020-04-30 |
20200133897 | PROVIDING INFORMATION FOR A CONTROLLER MEMORY BUFFER ELASTICITY STATUS OF A MEMORY SUB-SYSTEM TO A HOST SYSTEM - An indication of a capacity of a CMB elasticity buffer and an indication of a throughput of one or more memory components associated with the CMB elasticity buffer can be received. An amount of time for data at the CMB elasticity buffer to be transmitted to one or more memory components can be determined based on the capacity of the CMB elasticity buffer and the throughput of the one or more memory components. Write data can be transmitted from a host system to the CMB elasticity buffer based on the determined amount of time for data at the CMB elasticity buffer to be transmitted to the one or more memory components. | 2020-04-30 |
20200133898 | Artificial Intelligence-Enabled Management of Storage Media Access - The present disclosure describes apparatuses and methods for artificial intelligence-enabled management of storage media. In some aspects, a media access manager of a storage media system receives, from a host system, host input/output commands (I/Os) for access to storage media of the storage media system. The media access manager provides information describing the host I/Os to an artificial intelligence engine and receives, from the artificial intelligence engine, a prediction of host system behavior with respect to subsequent access of the storage media. The media access manager then schedules, based on the prediction of host system behavior, the host I/Os for access to the storage media of the storage system. By so doing, the host I/Os may be scheduled to optimize host system access of the storage media, such as to avoid conflict with internal I/Os of the storage system or preempt various thresholds based on upcoming idle time. | 2020-04-30 |
20200133899 | LOAD REDUCED NONVOLATILE MEMORY INTERFACE - A storage circuit includes a buffer coupled between the storage controller and the nonvolatile memory devices. The circuit includes one or more groups of nonvolatile memory (NVM) devices, a storage controller to control access to the NVM device, and the buffer. The buffer is coupled between the storage controller and the NVM devices. The buffer is to re-drive signals on a bus between the NVM devices and the storage controller, including synchronizing the signals to a clock signal for the signals. The circuit can include a data buffer, a command buffer, or both. | 2020-04-30 |
20200133900 | MIXING RESTARTABLE AND NON-RESTARTABLE REQUESTS WITH PERFORMANCE ENHANCEMENTS - A computer-implemented method includes setting a respective flag in a first buffer of a hardware accelerator. The first buffer includes the respective flag of the first buffer, and a second buffer of the hardware accelerator includes a respective flag of the second buffer. A hardware state of the hardware accelerator is maintained in the first buffer, based on the respective flag of the first buffer being set. A first request directed to the hardware accelerator is received. It is determined that the first buffer has the respective flag set. The first request is passed to the hardware accelerator, where passing the first request includes passing to the hardware accelerator a pointer to the first buffer, based on the first buffer having the respective flag set. | 2020-04-30 |
20200133901 | COMMUNICATION DEVICE, COMMUNICATION METHOD, PROGRAM, AND COMMUNICATION SYSTEM - To perform communication more reliably and efficiently. | 2020-04-30 |
20200133902 | TRANSLATION CIRCUITRY FOR AN INTERCONNECTION IN A SEMICONDUCTOR PACKAGE - Systems and method include one or more die coupled to an interposer. The interposer includes interconnection circuitry configured to electrically connect the one or more die together via the interposer. The interposer also includes translation circuitry configured to translate communications as they pass through the interposer. For instance, in the interposer, the translation circuitry translates communications, in the interposer, from a first protocol of a first die of the one or more die to a second protocol of a second die of the one or more die. | 2020-04-30 |
20200133903 | SYSTEMS AND METHODS FOR COMBINING MULTIPLE MEMORY CHANNELS - Systems, apparatus and methods are provided to combine multiple channels in a multi-channel memory controller to save area and reduce power and cost. An apparatus may comprise a first memory controller configured to access a first channel using a first protocol, a second memory controller configured to access a second channel using a second protocol that is different from the first protocol and a physical interface coupled to the first memory controller and a second memory controller. The physical interface may comprise a set of pins for an address and command bus shared by the first memory controller and the second memory controller for the first memory controller and the second memory controller to send address or command to respective channels by time division multiplexing. | 2020-04-30 |
20200133904 | COMMUNICATION DEVICE, COMMUNICATION METHOD, PROGRAM, AND COMMUNICATION SYSTEM - To perform communication more reliably and efficiently. | 2020-04-30 |
20200133905 | MEMORY REQUEST MANAGEMENT SYSTEM - A memory request management system may include a memory device and a memory controller. The memory controller may include a read queue, a write queue, an arbitration circuit, a read credit allocation circuit, and a write credit allocation circuit. The read queue and write queue may store corresponding requests from request streams. The arbitration circuit may send requests from the read queue and write queue to the memory device based on locations of addresses indicated by the requests. The read credit allocation circuit may send an indication of an available read credit to a request stream in response to a read request from the request stream being sent from the read queue to the memory device. The write credit allocation circuit may send an indication of an available write credit to a request stream in response to a write request from the request stream being stored at the write queue. | 2020-04-30 |
20200133906 | Self-Configuring Peripheral Module - A peripheral module of a programmable controller and method for operating the peripheral module, wherein in a calibration mode a base voltage value is supplied by the peripheral module to a terminal via a switching device, the supply potential is changed at a start time by the peripheral module to a modified value and a response time at which an expected change occurs is acquired, and a valid time interval is ascertained by the peripheral module utilizing the start time and the response time. | 2020-04-30 |
20200133907 | INTERPOSER SYSTEMS FOR INFORMATION HANDLING SYSTEMS - A computing apparatus including a printed circuit board (PCB) including a first central processing unit (CPU) socket and additional CPU socket(s); a CPU coupled to the first CPU socket; a base interposer coupled to the additional CPU socket(s); and one or more devices connected to the base interposer, wherein the base interposer provides a connection between the CPU and the one or more devices. | 2020-04-30 |
20200133908 | TYPE-C INTERFACE CONTROLLING CIRCUIT, CONTROLLING METHOD AND MOBILE TERMINAL - The present disclosure provides a Type-C interface controlling circuit, a controlling method, and a mobile terminal, wherein the Type-C interface controlling circuit includes: a Type-C interface, a first transmission module, a second transmission module, a switching module, and a detection module. The first end of the detection module is connected to the Type-C interface for detecting a connection state of the Type-C interface, and the second end of the detection module is connected to the switching module, and the detection module controls a connection relationship between the first end of the switching module and the second end of the switching module according to the connection state. | 2020-04-30 |
20200133909 | WRITES TO MULTIPLE MEMORY DESTINATIONS - Examples described herein relate to configuring a target network interface to recognize packets that are to be written directly from the network interface to multiple memory destinations. A packet can include an identifier that a portion of the packet is to be written to multiple memory devices at specific addresses. The packet is validated to determine if the target network interface is permitted to directly copy the portion of the packet to memory of the target. The target network interface can perform a direct copy to multiple memory locations of a portion of the packet. | 2020-04-30 |
20200133910 | PROGRAMMED DELAY FOR RFFE BUS TRIGGERS - Systems, methods, and apparatus for improving bus latency are described. A data communication apparatus has an interface circuit adapted to couple the apparatus to a first serial bus, a clock source configured to provide a clock signal, and a trigger handler. The interface circuit may be configured to receive trigger configuration information in a first transaction conducted over a serial bus, and receive a trigger actuation command from a bus master coupled to the serial bus. The trigger handler may be configured to delay a trigger actuation signal for a delay duration defined by the trigger configuration information, and provide the trigger actuation signal after the delay duration has expired. The trigger actuation signal may be generated in response to the trigger actuation command. | 2020-04-30 |
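A minimal sketch of the programmed-delay behavior, with threading.Timer standing in for the hardware delay circuit; the TriggerHandler API, trigger IDs, and delay units are hypothetical.

```python
import threading

class TriggerHandler:
    """Hypothetical API: configuration supplies a per-trigger delay, and
    the actuation command starts a timer that fires the actuation signal
    once the programmed delay expires."""

    def __init__(self):
        self.delays = {}                       # trigger id -> delay in seconds

    def configure(self, trigger_id, delay_s):
        """First transaction: store the trigger configuration information."""
        self.delays[trigger_id] = delay_s

    def actuate(self, trigger_id, on_fire):
        """Trigger actuation command: emit the signal after the delay."""
        timer = threading.Timer(self.delays[trigger_id], on_fire)
        timer.start()
        return timer

# Usage: program a 5 ms delay, then actuate and wait for the signal.
handler = TriggerHandler()
handler.configure(trigger_id=0x0A, delay_s=0.005)
handler.actuate(0x0A, lambda: print("trigger actuation signal")).join()
```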
20200133911 | MEMORY LOG RETRIEVAL AND PROVISIONING SYSTEM - A memory log retrieval and provisioning system includes a server device that is coupled to a support system via a network. The server device includes a memory device having at least one memory log. A memory log retrieval and provisioning subsystem is coupled to the memory device, and determines that a memory log retrieval event has occurred in the server device. In response to determining that the memory log retrieval event has occurred, the memory log retrieval and provisioning subsystem automatically retrieves the at least one memory log from the memory device without receiving user instructions subsequent to detecting the memory log retrieval event. The memory log retrieval and provisioning subsystem then automatically transmits the at least one memory log through the network to the support system without receiving user instructions subsequent to automatically retrieving the at least one memory log. | 2020-04-30 |
20200133912 | DEVICE MANAGEMENT MESSAGING PROTOCOL PROXY - Embodiments provide a proxy between device management messaging protocols that are used to manage devices that are I2C bus endpoints coupled to a remote access controller. A map is generated of the detected I2C bus endpoints. Mapped I2C bus endpoints that support PLDM (Platform Level Data Model) messaging are identified. Next, the mapped I2C bus endpoints that do not correspond to an identified PLDM endpoint are presumed to be IPMI (Intelligent Platform Management Interface) endpoints and are mapped accordingly. A virtual PLDM endpoint is created for each of the presumed IPMI I2C bus endpoints. The remote access controller is configured for use of PLDM messaging with the virtual PLDM endpoints such that these PLDM messages are translated by the proxy to equivalent IPMI commands and transmitted to the IPMI endpoints. The proxy similarly converts IPMI messages from the IPMI endpoints to equivalent PLDM messages, which are provided to the remote access controller via the virtual PLDM endpoints. | 2020-04-30 |
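One way to picture the proxy is as a translation table in front of the presumed-IPMI endpoints; every message field below is a placeholder rather than a real PLDM or IPMI encoding, and the probe/send callables are assumptions.

```python
class PldmIpmiProxy:
    """Endpoints that fail a PLDM discovery probe are presumed to speak
    IPMI and get a virtual PLDM endpoint backed by translation. All
    message fields are placeholders, not real PLDM/IPMI encodings."""

    def __init__(self, i2c_endpoints, probe_pldm):
        self.native_pldm = {ep for ep in i2c_endpoints if probe_pldm(ep)}
        self.virtual_pldm = {ep: ep for ep in i2c_endpoints
                             if ep not in self.native_pldm}

    def send_pldm(self, endpoint, pldm_msg, pldm_send, ipmi_send):
        if endpoint in self.native_pldm:
            return pldm_send(endpoint, pldm_msg)   # forward unchanged
        # Translate the PLDM request into an equivalent IPMI command...
        ipmi_cmd = {"cmd": pldm_msg["op"], "data": pldm_msg["payload"]}
        ipmi_rsp = ipmi_send(self.virtual_pldm[endpoint], ipmi_cmd)
        # ...and convert the IPMI response back into a PLDM reply.
        return {"op": pldm_msg["op"], "payload": ipmi_rsp["data"]}
```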
20200133913 | DISAGGREGATED COMPUTER SYSTEM - A computer system includes a processor and a memory. The processor is located on a first circuit board having a first connector. The memory is located on a second circuit board having a second connector. The first circuit board and the second circuit board are physically separated from each other but connect to each other through the first and second connectors. The processor and the memory communicate with each other using a differential signaling scheme. | 2020-04-30 |
20200133914 | Synchronization in a Multi-Tile, Multi-Chip Processing Arrangement - A method of operating a system comprising multiple processor tiles divided into a plurality of domains wherein within each domain the tiles are connected to one another via a respective instance of a time-deterministic interconnect and between domains the tiles are connected to one another via a non-time-deterministic interconnect. The method comprises: performing a compute stage, then performing a respective internal barrier synchronization within each domain, then performing an internal exchange phase within each domain, then performing an external barrier synchronization to synchronize between different domains, then performing an external exchange phase between the domains. | 2020-04-30 |
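The two-level compute/sync/exchange sequence reads naturally as bulk-synchronous parallelism; the sketch below uses Python threads as stand-in tiles and threading.Barrier for both barrier levels (the domain sizes and stub phases are invented).

```python
import threading

# Toy domain layout: two chips, four tiles each (sizes are invented).
DOMAINS = {"chip0": 4, "chip1": 4}
internal = {d: threading.Barrier(n) for d, n in DOMAINS.items()}
external = threading.Barrier(sum(DOMAINS.values()))

def compute(domain, rank):
    pass  # stand-in for the compute stage

def exchange_internal(domain, rank):
    pass  # stand-in for the time-deterministic internal exchange

def exchange_external(domain, rank):
    pass  # stand-in for the non-time-deterministic external exchange

def tile(domain, rank):
    compute(domain, rank)
    internal[domain].wait()          # internal barrier within the domain
    exchange_internal(domain, rank)  # internal exchange phase
    external.wait()                  # external barrier across all domains
    exchange_external(domain, rank)  # external exchange phase

threads = [threading.Thread(target=tile, args=(d, r))
           for d, n in DOMAINS.items() for r in range(n)]
for t in threads: t.start()
for t in threads: t.join()
```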
20200133915 | Relational Database Conversion and Purge - A computer-implemented method, system, and media are provided to convert relational database files hosted on a client server to operating database files. The operating database files are transferred using the FTP protocol to a remote archival server. A relational database is created from the transferred operating database files on the remote archival server. | 2020-04-30 |
20200133916 | METHOD, DEVICE AND COMPUTER READABLE MEDIUM FOR ACCESSING FILES - Embodiments of the present disclosure provide a method, device and computer readable medium for accessing a file. The method described herein comprises: receiving, in a virtual file system on a client, a request for opening a file in the virtual file system from an application, the request comprising a path for the file; determining whether the file has been opened successfully at the client; in response to determining that the file fails to be opened at the client, searching a first cache of the virtual file system for the path, the first cache being configured to store paths for files that fail to be opened at the client; and in response to success in finding the path in the first cache, returning an indication of failure in opening the file to the application. | 2020-04-30 |
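A minimal sketch of the first cache, assuming an LRU eviction policy the abstract does not specify: failed opens populate the cache, and subsequent opens of the same path fail fast without reaching the underlying store.

```python
from collections import OrderedDict

class NegativePathCache:
    """Remembers paths of files that recently failed to open. The LRU
    eviction and capacity are guesses; the abstract only says the cache
    stores paths of files that fail to be opened at the client."""

    def __init__(self, capacity=1024):
        self._failed = OrderedDict()
        self._capacity = capacity

    def __contains__(self, path):
        return path in self._failed

    def record_failure(self, path):
        self._failed[path] = True
        self._failed.move_to_end(path)
        if len(self._failed) > self._capacity:
            self._failed.popitem(last=False)     # drop the oldest entry

def vfs_open(path, cache, real_open):
    if path in cache:
        raise FileNotFoundError(path)            # fast failure from the cache
    try:
        return real_open(path)
    except FileNotFoundError:
        cache.record_failure(path)               # remember the failing path
        raise
```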
20200133917 | METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR MANAGING DATA REPLICATION - Embodiments of the present disclosure provide a method, a device and a computer program product for managing data replication. According to example implementations of the present disclosure, a replication policy model associated with data replication of a source device can be obtained, which is determined based on historical status information of the source device and a historical replication policy corresponding to the historical status information; current status information of the source device is determined, wherein the current status information indicates status information associated with pending data replication of the source device; and a target replication policy is determined based on the replication policy model and the current status information, which indicates a replication policy to be applied for performing the pending data replication. Therefore, the replication policy can be adjusted automatically based on the status of the source device, enabling a more efficient and intelligent data replication of the source device. | 2020-04-30 |
20200133918 | DRIVE RECORDER OPERATION SYSTEM, DRIVE RECORDER, OPERATION METHOD, AND RECORDING MEDIUM FOR OPERATION - A drive recorder operation system includes a drive recorder and an operation server. The drive recorder generates index information respectively corresponding to a plurality of video files, sends the index information to the operation server, and prohibits a video file from being overwritten based on an overwrite prohibition command from the operation server. The operation server receives an instruction designating index information and sends an overwrite prohibition command including the designated index information to the drive recorder. | 2020-04-30 |
20200133919 | METHOD, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT FOR SNAPSHOT REPLICATION - Techniques involve: in response to a first session for asynchronous snapshot replication being established between a first source device and a destination device, determining whether the first source device and the destination device have a common baseline snapshot. Such techniques further involve: in response to determining absence of the baseline snapshot, determining whether initial synchronization from a second source device to the destination device is completed. Such techniques further involve: replicating, based on a result of the determining, at least one user snapshot at the first source device to the destination device. Accordingly, duplicated user snapshots at the destination device are significantly reduced. The snapshot management and space utilization of the destination device are also improved. | 2020-04-30 |
20200133920 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR DELETING SNAPSHOTS - Techniques perform snapshot deletion. Such techniques involve: determining an object associated with a to-be-executed snapshot deletion request in a snapshot deletion request list of a storage system; in response to the object being included in a predefined set of objects, determining information associated with the to-be-executed snapshot deletion request, the information including at least one of: a number of snapshot deletion requests in the snapshot deletion request list which correspond to snapshots associated with the object, and a waiting time of the to-be-executed snapshot deletion request in the snapshot deletion request list; determining, based on the information and from the snapshot deletion request list, a set of snapshot deletion requests associated with the object; and deleting snapshots corresponding to snapshot deletion requests in the set. Accordingly, the performance of snapshot deletion operations may be improved without any impact on other service on the storage system. | 2020-04-30 |
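The grouping logic might look like the sketch below, where both triggers the abstract mentions (the count of queued requests for the object and the waiting time of the pending request) gate batch formation; the threshold values and record fields are invented.

```python
from collections import defaultdict

def batch_deletions(request_list, hot_objects, now, max_wait_s=30.0, min_batch=8):
    """Group queued snapshot-deletion requests by object; for objects in
    the predefined set, emit a batch when enough requests have piled up
    or the oldest one has waited too long. Thresholds and the request
    fields ('object', 'enqueued_at') are invented for illustration."""
    by_object = defaultdict(list)
    for req in request_list:
        by_object[req["object"]].append(req)
    batches = []
    for obj, reqs in by_object.items():
        if obj not in hot_objects:
            continue
        oldest_wait = max(now - r["enqueued_at"] for r in reqs)
        if len(reqs) >= min_batch or oldest_wait >= max_wait_s:
            batches.append((obj, reqs))   # delete these snapshots together
    return batches
```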
20200133921 | METHOD AND APPARATUS FOR SHARING INFORMATION RECORDED ON BLOCKCHAIN BASED ON ANCHORING - A method of sharing information on the basis of anchoring, and an anchoring device supporting the same, are provided. One of the methods comprises acquiring anchoring information, including first field information permitted for sharing, from a target transaction record recorded in a first blockchain, and recording the acquired anchoring information in a second blockchain. | 2020-04-30 |
20200133922 | OFFLINE CAPABILITIES FOR LIVE APPLICATIONS IN A CLOUD COLLABORATION PLATFORM - Disclosed herein are system, method, and computer program product embodiments for providing offline capabilities to customizable live applications in a cloud collaboration platform. The cloud collaboration platform may provide offline functions and a data application programming interface to devices connecting to the cloud collaboration platform. The offline capabilities allow devices to store data related to documents and customizable live applications in a local cache. The offline capabilities retrieve data from and store modifications to data within the local cache. The cloud collaboration platform may subsequently process the changes and determine if conflicts arise, resolving conflicts where appropriate and possible. The cloud collaboration platform may then determine a final state for a record, return the final state to the devices, and update the local caches. | 2020-04-30 |
20200133923 | TECHNIQUES FOR IMPROVING STORAGE SPACE EFFICIENCY WITH VARIABLE COMPRESSION SIZE UNIT - Techniques for processing a data set may comprise: performing first processing that forms a first compression unit, wherein the first compression unit includes data chunks, including a first data chunk having a first entropy value less than an entropy threshold, the first processing including: receiving a second data chunk; determining, in accordance with criteria, whether to add the second data chunk to the first compression unit; and responsive to determining to add the second data chunk to the first compression unit, adding the second data chunk to the first compression unit; and compressing the first compression unit as a single compressible unit. The second chunk may be added if its entropy value is less than the entropy threshold and if the entropy values of the first and second chunks are similar. The second chunk may also be added if the resulting compression unit provides sufficient storage/compression benefit. | 2020-04-30 |
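Reading "entropy value" as per-byte Shannon entropy, a sketch of the add criteria could look like this; the 6.0 bits/byte threshold and the similarity gap are invented numbers.

```python
import math
from collections import Counter

def entropy(chunk: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for constant data, up to 8.0)."""
    n = len(chunk)
    return -sum(c / n * math.log2(c / n) for c in Counter(chunk).values())

def maybe_add(unit, chunk, threshold=6.0, max_gap=0.5):
    """Apply the criteria: below the entropy threshold, and similar to the
    entropy of the first chunk already anchoring the unit."""
    e = entropy(chunk)
    if e >= threshold:
        return False                 # too random to compress well
    if unit and abs(e - entropy(unit[0])) > max_gap:
        return False                 # entropy not similar to the first chunk
    unit.append(chunk)
    return True

unit = []
maybe_add(unit, b"aaaabbbbccccdddd" * 64)   # low entropy (2.0 bits/byte): accepted
```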
20200133924 | IMPORTING AND EXPORTING CIRCUIT LAYOUTS - A computer-implemented method includes executing, using a computer, a process including a main thread that receives a layout file. The layout file includes a first plurality of tags and compressed information blocks. Each tag of the first plurality is associated with a compressed information block. The method further includes decompressing the compressed information blocks using sub-threads and thereby obtaining decompressed information blocks. The sub-threads are created by the main thread, and each sub-thread corresponds to a compressed information block. The decompressed information blocks are combined into decompressed layout information. The decompressed layout information is partitioned, and each partition is provided to a node of a distributed computing system for performing layout correction. Multiple result files, each in a compressed format, are obtained from the distributed computing system, and the result files are combined into a single result file without decompressing and re-compressing the results from the distributed computing system. | 2020-04-30 |
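The main-thread/sub-thread split can be sketched with a thread pool, using zlib as a stand-in for the unspecified compression codec and eliding the tag parsing of the layout file itself.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decompress_layout(blocks):
    """`blocks` is a list of (tag, compressed_bytes) pairs the main thread
    parsed from the layout file; each block is decompressed by its own
    worker, then the results are combined in tag order."""
    with ThreadPoolExecutor(max_workers=max(1, len(blocks))) as pool:
        futures = [(tag, pool.submit(zlib.decompress, data))
                   for tag, data in blocks]
    return b"".join(f.result() for _, f in sorted(futures, key=lambda tf: tf[0]))

# Toy usage with two tagged blocks.
blocks = [(0, zlib.compress(b"HEADER;")), (1, zlib.compress(b"CELL A;"))]
assert decompress_layout(blocks) == b"HEADER;CELL A;"
```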
20200133925 | INTERNET OF THINGS ARCHITECTURE WITH A CLOUD-BASED INTEGRATION PLATFORM - An Internet of Things architecture includes a gateway, a first dongle, a first wireless device, and a cloud platform. The gateway includes a first serial port. The first dongle is received by the first serial port, and is configured to communicate using a first communication technology. The first wireless device is configured to communicate a first coded message to the first dongle using the first communication technology. The cloud platform includes a network abstraction layer that includes a first communication technology module configured to receive the first coded message associated with the first communication technology and output a first decoded message. | 2020-04-30 |
20200133926 | SYSTEM AND METHOD FOR IMPLEMENTING NATIVE CONTRACT ON BLOCKCHAIN - A computer-implemented method for implementing a native contract on a blockchain comprises: obtaining combined bytecode associated with a blockchain contract, wherein the combined bytecode comprises an indicator representing a type of the blockchain contract; determining the type of the blockchain contract based at least on the indicator; and executing the blockchain contract based on the determined type of the blockchain contract. | 2020-04-30 |
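The indicator-based dispatch is easy to picture; the one-byte tag position and the tag values below are invented, since the abstract does not define the combined-bytecode layout.

```python
# Invented type tags; the real indicator format is not given in the abstract.
NATIVE, VIRTUAL_MACHINE = 0x01, 0x02

def execute_contract(combined_bytecode: bytes):
    indicator, body = combined_bytecode[0], combined_bytecode[1:]
    if indicator == NATIVE:
        return run_native(body)          # dispatch to a built-in implementation
    if indicator == VIRTUAL_MACHINE:
        return run_in_vm(body)           # interpret as ordinary VM bytecode
    raise ValueError(f"unknown contract type {indicator:#x}")

def run_native(body):
    return ("native", body)              # stub

def run_in_vm(body):
    return ("vm", body)                  # stub

assert execute_contract(bytes([NATIVE]) + b"\x60\x00")[0] == "native"
```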
20200133927 | Systems and Methods for Increasing Database Access Concurrency - The various embodiments described herein include methods, devices, and systems for reading and writing data from a database table. In one aspect, a method of reading and writing data from a database table includes: (1) initiating a write transaction to write data to a first non-key column of a row of the database table, the database table having a plurality of rows, each row comprising a primary key and a plurality of non-key columns; (2) locking the first non-key column; and (3) in accordance with a determination that a second non-key column of the row is not locked, initiating a read transaction to read data from the second non-key column, where initiation of the read transaction occurs prior to completion of the write transaction. | 2020-04-30 |
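A plausible model (not the patented implementation) is a lock per non-key column, so a write to one column does not block a read of a different column of the same row:

```python
import threading

class ColumnLockedRow:
    """Per-column locking sketch: writers lock only the non-key column
    they modify, so a reader of another column can start before the
    write transaction completes."""

    def __init__(self, primary_key, columns):
        self.primary_key = primary_key
        self.columns = dict(columns)
        self.locks = {name: threading.Lock() for name in columns}

    def write(self, column, value):
        with self.locks[column]:          # lock only this column
            self.columns[column] = value

    def read(self, column):
        if self.locks[column].locked():   # racy check; fine for a sketch
            raise RuntimeError("column is locked by a writer")
        return self.columns[column]

row = ColumnLockedRow("user:1", {"name": "a", "email": "b"})
row.write("name", "alice")        # locks only "name"
print(row.read("email"))          # may proceed while "name" is being written
```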
20200133928 | DEDUPLICATING DATA AT SUB-BLOCK GRANULARITY - A technique for performing data deduplication operates at sub-block granularity by searching a deduplication database for a match between a candidate sub-block of a candidate block and a target sub-block of a previously-stored target block. When a match is found, the technique identifies a duplicate range shared between the candidate block and the target block and effects persistent storage of the duplicate range by configuring mapping metadata of the candidate block so that it points to the duplicate range in the target block. | 2020-04-30 |
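A sketch of the matching step, assuming a fixed sub-block size (the abstract leaves the granularity open): candidate sub-blocks are looked up in the deduplication database, and a hit is grown forward byte by byte into the duplicate range that mapping metadata would reference.

```python
import hashlib

SUB = 1024  # sub-block size in bytes (invented; the real granularity is not given)

def sub_hashes(block: bytes):
    return [hashlib.sha256(block[i:i + SUB]).digest()
            for i in range(0, len(block), SUB)]

def find_duplicate_range(candidate: bytes, dedup_db: dict):
    """dedup_db maps sub-block digest -> (target block bytes, sub-block index).
    On a hit, grow the match forward to find the full duplicate range."""
    for i, digest in enumerate(sub_hashes(candidate)):
        if digest not in dedup_db:
            continue
        target, j = dedup_db[digest]
        c0, t0, length = i * SUB, j * SUB, 0
        while (c0 + length < len(candidate) and t0 + length < len(target)
               and candidate[c0 + length] == target[t0 + length]):
            length += 1
        return c0, t0, length     # candidate offset, target offset, run length
    return None

# Toy usage: index a target block, then find the shared range in a candidate.
target = bytes(i % 251 for i in range(4096))            # 4 KiB target block
db = {h: (target, j) for j, h in enumerate(sub_hashes(target))}
print(find_duplicate_range(target[1024:3072] + b"\x00" * 64, db))
# -> (0, 1024, 2048): a 2 KiB duplicate range to map instead of store
```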
20200133929 | INTELLIGENT DATA QUALITY - Examples of an intelligent data quality application are described. In an example, the system receives a data quality requirement from a user. The system obtains target data from a plurality of data sources. The system implements an artificial intelligence component to sort the target data into a data cascade. The data cascade may include a plurality of attributes associated with the data quality requirement. The system may evaluate the data cascade to identify a data pattern model for each of the attributes. The system may implement a first cognitive learning operation to determine a mapping context from the data cascade and a conversion rule from the data pattern model. The system may establish a data harmonization model corresponding to the data quality requirement by performing a second cognitive learning operation. The system may generate a data cleansing result corresponding to the data quality requirement. | 2020-04-30 |
20200133930 | INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM - An information processing apparatus ( | 2020-04-30 |
20200133931 | LOCATION-BASED RECOMMENDATIONS USING NEAREST NEIGHBORS IN A LOCALITY SENSITIVE HASHING (LSH) INDEX - Software for a website hosting short-text services creates an index of buckets for locality sensitive hashing (LSH). The software stores the index in an in-memory database of key-value pairs. The software creates, on a mobile device, a cache backed by the in-memory database. The software then uses a short text to create a query embedding. The software maps the query embedding to corresponding buckets in the index and determines which of the corresponding buckets are nearest neighbors to the query embedding using a similarity measure. The software displays location types associated with each of the buckets that are nearest neighbors in a view in a graphical user interface (GUI) on the mobile device and receives a user selection as to one of the location types. The software then displays the entities for the selected location type in a GUI view on the mobile device. | 2020-04-30 |
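Taking random-hyperplane LSH as one common instantiation (the abstract does not name the hash family), the bucket lookup and similarity ranking might look like this; the dimensions, plane count, and index layout are assumptions.

```python
import random

DIM, PLANES = 8, 6
rng = random.Random(0)
planes = [[rng.gauss(0, 1) for _ in range(DIM)] for _ in range(PLANES)]

def lsh_key(vec):
    """Random-hyperplane LSH: one sign bit per plane yields the bucket key."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

def nearest_location_types(index, query_vec, k=3):
    """Map the query embedding to its bucket, then rank that bucket's
    entries (location type plus centroid) by cosine similarity."""
    entries = index.get(lsh_key(query_vec), [])
    ranked = sorted(entries, key=lambda e: -cosine(e["centroid"], query_vec))
    return [e["type"] for e in ranked[:k]]
```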
20200133932 | AUTOMATIC GENERATION OF DATA FOUNDATION FRAGMENTS - A system, method, and computer-readable medium, including creating at least one data foundation table, each of the at least one data foundation tables being created for each of one or more set tables in a database based on information stored in a first set container relying on the one or more set tables; linking at least one of the created data foundation tables to a customer table in the database, the created data foundation table being linked to the customer table based on a primary key for the customer table; and storing all of the created data foundation tables in a dedicated data structure hosted by the first set container. | 2020-04-30 |
20200133933 | AUGMENTATION PLAYBACK - A system and method including receiving a request to perform an operation relying on sets-related tables of a semantic layer universe; injecting, in response to the received request, persisted Data Foundation (DF) objects stored in a dedicated data structure of a first set container into the in-memory representation of the semantic layer universe, each of the DF objects being automatically created based on the sets-related tables of the semantic layer universe; injecting, by the processor and in response to the received request, persisted business layer (BL) objects stored in a dedicated data structure of the first set container into the in-memory representation of the semantic layer universe, each of the BL objects being automatically created based on the sets-related tables of the semantic layer universe; and executing the operation on the augmented semantic layer universe, including using the injected DF objects and the injected BL objects. | 2020-04-30 |
20200133934 | COMPRESSED ROW STATE INFORMATION - In one aspect, there is provided a method. The method may include accessing a multi-version concurrency control block providing row state for a block of rows in a table of a database, the multi-version concurrency control block including a header portion and a data portion, the header portion including a type indicator indicating whether all of the rows of the block are visible to a plurality of threads at a database management system or invisible to the plurality of threads at the database management system. Related systems, methods, and articles of manufacture are also disclosed. | 2020-04-30 |
20200133935 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR COMPARING FILES - The present disclosure provides a method, a device, and a computer program product for comparing a first segment of a first file with a second segment of a second file. In one embodiment, the method includes determining a set of data blocks of the first file associated with the first segment and a set of data blocks of the second file associated with the second segment, obtaining first mapping information for the data blocks in the set of data blocks of the first file and second mapping information for the data blocks in the set of data blocks of the second file, and determining a difference between the first segment and the second segment based on the first mapping information and the second mapping information. | 2020-04-30 |
20200133936 | COUNTING METHOD, COUNTER AND STORAGE MEDIUM - A counting method, a counter, and a storage medium are provided, wherein the method includes: detecting whether a count application source exists; obtaining a quantity of counts caused by a current count application source in unit time and server-related parameters, if a count application source exists; determining a mode in which the current count application source updates a database according to the quantity of counts or the server-related parameters, the mode including a real-time mode and a high-performance mode; accumulating the counts caused by the current count application source running in the server to obtain an accumulated value, if the current count application source updates the database in the high-performance mode; and updating the accumulated value of the current count application source to the database according to a preset frequency. | 2020-04-30 |
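The two update modes can be sketched as a counter that writes through for low-rate sources and accumulates plus flushes for high-rate ones; the rate threshold and the flush trigger below are invented.

```python
import threading

class DualModeCounter:
    """Low-rate sources update the database on every count (real-time
    mode); high-rate sources accumulate in memory and are flushed at a
    preset frequency (high-performance mode). The threshold is invented."""

    def __init__(self, db_update, rate_threshold=100):
        self.db_update = db_update            # callable(source, delta)
        self.rate_threshold = rate_threshold
        self.accumulated = {}
        self.lock = threading.Lock()

    def count(self, source, counts_per_second):
        if counts_per_second < self.rate_threshold:
            self.db_update(source, 1)         # real-time mode: write through
        else:
            with self.lock:                   # high-performance mode: accumulate
                self.accumulated[source] = self.accumulated.get(source, 0) + 1

    def flush(self):
        """Called at the preset frequency, e.g. from a periodic timer."""
        with self.lock:
            pending, self.accumulated = self.accumulated, {}
        for source, delta in pending.items():
            self.db_update(source, delta)
```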