Patents - stay tuned to the technology



5th week of 2021 patent application highlights part 47
Patent application number | Title | Published
20210034475METHOD AND SYSTEM FOR OFFLOADING A CONTINUOUS HEALTH-CHECK AND RECONSTRUCTION OF DATA IN A NON-ACCELERATOR POOL - A method for managing data includes identifying, by a compute acceleration device (CAD), a first chunk to be health-checked using storage metadata, generating a first chunk identifier using the first chunk, making a determination that the first chunk identifier does not match a second chunk identifier stored in the storage metadata, and in response to the determination: obtaining a plurality of chunks associated with the first chunk, regenerating the first chunk using the plurality of chunks to generate a new first chunk, storing the new first chunk in a data node, wherein the CAD is executing in the data node, updating the storage metadata based on storage of the new first chunk to obtain updated storage metadata, and sending a copy of the updated storage metadata to at least one other CAD in a second data node.2021-02-04
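A minimal sketch of the health-check-and-rebuild flow this abstract describes. The SHA-256 chunk identifier and the XOR-parity reconstruction are illustrative assumptions; the abstract does not name a particular hash or erasure code:

```python
import hashlib

def chunk_id(chunk: bytes) -> str:
    # Illustrative chunk identifier: a SHA-256 digest of the chunk contents.
    return hashlib.sha256(chunk).hexdigest()

def health_check(chunk: bytes, stored_id: str, related_chunks: list) -> bytes:
    """Return the chunk if healthy, else regenerate it from related chunks.

    XOR parity stands in here for whatever erasure code the storage system
    actually uses: the related chunks XOR together to the original chunk.
    """
    if chunk_id(chunk) == stored_id:
        return chunk  # identifiers match: the chunk is healthy
    # Mismatch detected: regenerate the chunk from its related chunks.
    rebuilt = bytes(len(related_chunks[0]))
    for peer in related_chunks:
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, peer))
    return rebuilt
```

After rebuilding, the storage metadata would be updated and propagated to peer devices, which this sketch omits.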
20210034476CHANGE-PROTECTED DATABASE SYSTEM - A request to update an original data value in a first row in a database table in a database system is received. An updated data value is written to a second row in a staging table in the database system. The updated data value corresponds with the original data value. The first row includes a database table key, which is also included in the second row. The original data value in the database table is replaced with a corresponding replacement value, which is determined based on a value replacement update function that takes as input the updated data value. The staging table maintains a record value for reversing the update to the database table.2021-02-04
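The staging-table update above can be sketched as follows; the dict-based tables and the `replace_fn` hook are hypothetical stand-ins for the database table, the staging table, and the value replacement update function:

```python
def protected_update(table, staging, key, column, new_value, replace_fn):
    """Route an update through the staging table.

    The staging row shares the database table key, records the updated
    value, and keeps the original value so the update can be reversed.
    The base table receives a replacement value computed by replace_fn
    from the updated value.
    """
    staging[key] = {column: new_value, "record": table[key][column]}
    table[key][column] = replace_fn(new_value)

def reverse_update(table, staging, key, column):
    # Restore the original value kept in the staging table.
    table[key][column] = staging.pop(key)["record"]
```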
20210034477TRANSACTION RECOVERY FROM A FAILURE ASSOCIATED WITH A DATABASE SERVER - In some examples, a system sends a transaction to a database server to cause storing of data of the transaction in a cache of the database server, where the data in the cache is for inclusion in a backup of data from the database server to a remote data store (e.g., the backup may be in a cloud and may be a snapshot). The system detects a failure associated with the database server, and in response to detecting the failure, requests, from the database server or a replacement database server, transaction information of at least one transaction that was successfully applied to the remote data store, the transaction information based on the backup of data. The system causes replay of one or more transactions to recover data at the database server or the replacement database server, to perform recovery of the database server or the replacement database server to a current state.2021-02-04
20210034478MEMORY CONTROLLER AND OPERATING METHOD THEREOF - A memory controller capable of detecting a code having an error among codes stored in a Read Only Memory (ROM) controls a memory device. The memory controller includes: a code memory for storing codes used to perform an operation; a code executor for executing the codes stored in the code memory to perform the operation; a debug controller for setting a suspend code address for suspending the execution of the codes used to perform the operation; an initialization controller for controlling an initialization operation of at least one of the debug controller and the code executor; and an interfacing component for receiving a suspend code setting request corresponding to an operation of setting the suspend code address and providing the received suspend code setting request to the debug controller.2021-02-04
20210034479ROBOT APPLICATION MANAGEMENT DEVICE, SYSTEM, METHOD AND PROGRAM - A robot application is executed by executing a plurality of kinds of virtual containers in cooperation with each other. To this end, a robot application management device (2021-02-04
20210034480FAILURE SHIELD - An example graphics system can include a first portion including a graphics driver and graphics hardware and a second portion communicatively coupled to the first portion. The second portion can include a display system communicatively coupled to a GUI application and a shim layer to shield the second portion from failure responsive to failure of the first portion.2021-02-04
20210034481METHODS, SYSTEMS, AND COMPUTER READABLE STORAGE DEVICES FOR MANAGING FAULTS IN A VIRTUAL MACHINE NETWORK - Faults are managed in a virtual machine network. Failure of operation of a virtual machine among a plurality of different types of virtual machines operating in the virtual machine network is detected. The virtual machine network operates on network elements connected by transport mechanisms. A cause of the failure of the operation of the virtual machine is determined, and recovery of the virtual machine is initiated based on the determined cause of the failure.2021-02-04
20210034482STORAGE SYSTEM - A storage system includes a first storage controller including a plurality of main storage media and one or more processor cores, and a second storage controller including a plurality of main storage media and one or more processor cores and performing communication with the first storage controller. Storage areas of the main storage media in the first storage controller are allocated to an address map. In response to the occurrence of failures in one or more main storage media of the main storage media of the first storage controller, the first storage controller performs restarting to reallocate the storage areas of the main storage media, excluding the one or more main storage media having caused the failures, to an address map that is reduced relative to before the occurrence of the failures. The second storage controller continues operating during the restarting of the first storage controller.2021-02-04
20210034483COMPUTER DUPLICATION AND CONFIGURATION MANAGEMENT SYSTEMS AND METHODS - In part, the disclosure relates to systems and methods to rapidly copy the computer operating system, drivers and applications from a source computer to a target computer using a duplication engine. Once the copy is complete the source computer will resume execution, and the target computer will first alter its configuration (also referred to as a role or personality) and then resume execution conforming to its new configuration as indicated by a profile stored in protected or specialized memory. The profile can be a value, a file, or other memory structure and is protected in the sense that the profile (and/or the region of memory where it is stored) must not be overwritten by a state transfer from the source computer to the target computer.2021-02-04
20210034484MULTI-PAGE OPTIMIZATION OF ASYNCHRONOUS DATA REPLICATION TRANSFER - A method is used in managing asynchronous replication. The method receives a multi-page replication request in conjunction with a replication process between a first storage system and a second storage system, where the first storage system comprises a plurality of storage devices and the second storage system comprises a plurality of storage devices. The method determines that at least one replication condition meets a threshold. In response, the method optimizes the multi-page replication request to optimize the replication process.2021-02-04
20210034485Mitigating Real Node Failure of a Doubly Mapped Redundant Array of Independent Nodes - Mitigating the effects of a real node failure in a doubly mapped redundant array of independent nodes, e.g., doubly mapped cluster is disclosed. In response to a change in an accessibility to data stored on an extent of a real storage device of a real node of a real cluster, wherein the extent of the real storage device corresponds to a portion of a mapped storage device of a mapped node of a doubly mapped cluster, substituting a reserved extent of a real storage device for the extent of the real storage device. The substituting the reserved extent of the real storage device can correspond to a change in a topology of the doubly mapped cluster, wherein the change in the topology comprises replacing the portion of the mapped storage device with a substitute portion of a mapped storage device that corresponds to the replacement extent of the real storage device. The changed topology can enable writing of data to the substituted portion of a mapped storage device that can cause writing of corresponding data to the reserved extent of the real storage device.2021-02-04
20210034486CLIENT-ASSISTED PHASE-BASED MEDIA SCRUBBING - A technique of receiving a write transaction directed to a group of memory parcels of a memory device from a client source. The technique determines a state of a first indicator used to indicate which one of two data structures contains a newer mapping of the group of memory parcels, while the other data structure contains an older mapping of the group of memory parcels. The technique determines a state of a second indicator used to indicate which one of the two data structures is in current use for the group of memory parcels and compares the states of the two indicators. When a data structure in current use does not contain the newer mapping, the technique changes the state of the second indicator to the state of the first indicator. The technique writes content of the write transaction to storage locations based on the newer mapping.2021-02-04
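The two tracking structures and the two indicator flags described above might look like this toy model; the class and attribute names are invented for illustration:

```python
class ParcelMapper:
    """Toy model of the two-indicator scheme: two mapping tables per group
    of memory parcels, one flag naming the table with the newer mapping,
    one naming the table in current use."""

    def __init__(self, mapping_a, mapping_b, newer=0, current=0):
        self.tables = [mapping_a, mapping_b]
        self.newer = newer      # which table holds the newer mapping
        self.current = current  # which table is in current use

    def write(self, storage, parcel, payload):
        # If the table in current use does not hold the newer mapping,
        # switch the current-use indicator to match the newer indicator.
        if self.current != self.newer:
            self.current = self.newer
        location = self.tables[self.current][parcel]
        storage[location] = payload  # write based on the newer mapping
        return location
```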
20210034487HARDWARE AND DRIVER VALIDATION - Compatibility testing systems and methods are disclosed that provide scalable validation testing of systems and devices. In examples, systems and devices are identified to provide fundamental information about driver operations and driver extensions functionality. The identification allows systems and devices having particular similarities to be grouped in object groups. Compatibility tests are tagged as corresponding to the identifiable systems, devices, and/or object groups, and the compatibility testing systems and methods map test sets specifically tailored to systems and devices as identified by their driver operations and driver extensions functionality. The tailored test sets include tests that ensure compatibility, and through optimized test-to-device target mapping, an optimal testing set is discovered and scheduled to run. Strategically controlling the amount of testing distributed and executed increases compatibility testing speed and scalability.2021-02-04
20210034488INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND PROGRAM - An information processing apparatus generates a graph that represents an action of a program. On the graph, an edge represents action contents of a process in an event. Further, two nodes connected by the edge respectively represent a subject and an object of the event. The information processing apparatus outputs the generated graph. Further, the information processing apparatus also alters the generated graph. When an index value of an event satisfies a first predetermined condition which index value is based on the number of occurrences or the frequency of occurrences of the event, the information processing apparatus alters the graph with respect to an edge representing the event.2021-02-04
20210034489Physical Execution Monitor - A method of monitoring execution of computer instructions includes receiving data items representing real-time measurements of side-channel information emanating from execution of computer instructions, each data item forming a value of a corresponding dimension of a side-channel information vector, receiving, for two or more of the dimensions of the side-channel vector, classifiers that assign the corresponding values of a side-channel vector to classes, and classifying the data items in accordance with the received classifiers, wherein an orthogonal distance of the data item from the classifier indicates a confidence value of the classification, generating a combined confidence value for the side-channel information vector, and outputting a signal if a confidence value indicates affiliation to a selected one of the two classes with a predetermined probability. The method conducts a self-test by generating a combined confidence value to ensure correct outputting of the confidence value.2021-02-04
20210034490DISTRIBUTED LEDGER FOR TRACKING A CONDITION OF AN ELECTRONIC DEVICE - Systems and methods are described herein for determining a current condition of an electronic device configured to communicate over a wireless network. For example, the method can include collecting units of condition data of the electronic device. In some instances, each unit of condition data is generated by a sensor of the electronic device that performs an ongoing measurement of a parameter of the electronic device or detects an occurrence of an event that affects the condition of the electronic device. The electronic device provides the units of condition data over a network to a manager node. The units of condition data are then stored in a distributed ledger. The network node can determine a current condition of the electronic device based on the units of condition data stored in the distributed ledger.2021-02-04
20210034491SYSTEM FOR ENVIRONMENTAL IMPACT - The system and method may receive transaction data for a financial account associated with a user during a first time period and a second time period. A first environmental impact score for the transaction data associated with the user in the first time period may be determined and a second environmental impact score for the transaction data associated with the user in the second time period may also be determined. The first environmental impact score and the second environmental impact score may be compared. The system and method may determine whether there has been a change from the first environmental impact score to the second environmental impact score. In response to a determination that the second environmental impact score is less than the first environmental impact score; a bonus score may be determined for the user.2021-02-04
20210034492Optimization of Power and Computational Density of a Data Center - Techniques for optimizing power and computational density of data centers are described. According to various embodiments, a benchmark test is performed by a computer data center system. Thereafter, transaction information and power consumption information associated with the performance of the benchmark test are accessed. A service efficiency metric value is then generated based on the transaction information and the power consumption information, the service efficiency metric value indicating a number of transactions executed via the computer data center system during a specific time period per unit of power consumed in executing the transactions during the specific time period. The generated service efficiency metric value is then compared to a target threshold value. Thereafter, a performance summary report indicating the generated service efficiency metric value, and indicating a result of the comparison of the generated service efficiency metric value to the target value, is generated.2021-02-04
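The service efficiency metric above reduces to simple arithmetic: transactions executed per unit of power consumed over the same period. A hedged sketch, where the kWh unit and the summary-report fields are assumed details not fixed by the abstract:

```python
def service_efficiency(transactions: int, energy_kwh: float) -> float:
    """Transactions executed per unit of power consumed in the same period."""
    if energy_kwh <= 0:
        raise ValueError("energy consumed must be positive")
    return transactions / energy_kwh

def performance_summary(transactions: int, energy_kwh: float,
                        target: float) -> dict:
    # Compare the generated metric against the target threshold value
    # and report both the value and the comparison result.
    metric = service_efficiency(transactions, energy_kwh)
    return {"service_efficiency": metric, "meets_target": metric >= target}
```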
20210034493SYSTEMS AND METHODS FOR USER ANALYSIS - A method for user mining is provided. The method may include obtaining a plurality of first feature vectors of a plurality of positive samples and a plurality of second feature vectors of a plurality of negative samples, and generating a plurality of expanded first feature vectors and expanded second feature vectors based on the plurality of first feature vectors and second feature vectors. Each first feature vector may include first feature information that describes a plurality of features of a corresponding positive sample. Each second feature vector may include second feature information that describes a plurality of features of a corresponding negative sample. The method may further include determining one or more core features related to the plurality of positive samples among the plurality of features corresponding to the plurality of first feature vectors based on a trained binary model.2021-02-04
20210034494METHOD AND SYSTEM FOR COUNTERING CAPACITY SHORTAGES ON STORAGE SYSTEMS - A method and system for countering capacity shortages on storage systems. Specifically, the method and system disclosed herein entail proactively performing countermeasures directed to freeing-up storage capacity on storage systems. The countermeasures may be deployed based on forecasts projecting the future consumption of storage capacity on the storage systems.2021-02-04
20210034495DYNAMICALLY UPDATING DEVICE HEALTH SCORES AND WEIGHTING FACTORS - Disclosed is a computer implemented method to adjust device health weighting factors, the method comprising, determine a set of monitored devices including a first monitored device. The method comprises, determining a set of parameters, wherein each parameter is associated with one operating metric of each of the monitored devices. The method comprises, receiving a set of usage data, including a usage history for each parameter. The method further comprises, performing trend analysis on the set of usage data configured to identify a relative influence of each parameter on the set of monitored devices. The method also comprises, generating a set of weighting factors based on the trend analysis, and wherein each parameter in the set of parameters is associated with a weighting factor, and calculating a health score for the first monitored device, wherein the calculation is based on the set of weighting factors.2021-02-04
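One plausible reading of the final step above is a normalized weighted sum of per-parameter metrics; the normalization and the 0..1 metric range are assumptions, not stated in the abstract:

```python
def health_score(metrics: dict, weights: dict) -> float:
    """Weighted health score for one monitored device.

    metrics: parameter name -> operating metric, assumed normalized to 0..1.
    weights: parameter name -> weighting factor derived from trend analysis.
    """
    total = sum(weights[name] for name in metrics)
    # Scale each metric by its trend-derived weighting factor,
    # then normalize by the total weight.
    return sum(metrics[name] * weights[name] for name in metrics) / total
```

Regenerating the weighting factors after each trend-analysis pass would dynamically update the score, per the title.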
20210034496AUDITING-AS-A-SERVICE - Auditing information is captured from a processing stack of an invoked application. An annotation customized for that invocation context is processed to filter and/or add additional auditing information available from the processing stack. The customized auditing information is then sent to a destination based on a processing context of the invoked application when the invoked application completes processing. In an embodiment, the customized auditing information is housed in a data store and an interface is provided for customized query processing, report processing, event processing, and notification processing.2021-02-04
20210034497LOG RECORD ANALYSIS BASED ON LOG RECORD TEMPLATES - Log record analysis based on log record templates is disclosed. A plurality of log records that comprise log data generated by one or more logging entities over a period of time is accessed. Each log record of the plurality of log records corresponds to one log record template of a plurality of different log record templates. The log records are analyzed to determine a particular log record template of the plurality of different log record templates to which a majority of the plurality of log records corresponds. An action is taken that is at least partially based on the particular log record template.2021-02-04
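Determining the template to which the majority of records corresponds is a counting problem; a sketch using Python's `collections.Counter`, with `template_of` as a hypothetical record-to-template mapping (real systems derive templates by parsing, which this sketch does not attempt):

```python
from collections import Counter

def dominant_template(log_records, template_of):
    """Return the log record template matched by the most records.

    template_of maps a raw record to its template id; the first-word
    stand-in used in the test below is purely illustrative.
    """
    counts = Counter(template_of(record) for record in log_records)
    template, _count = counts.most_common(1)[0]
    return template
```

An action (e.g., an alert) would then be taken based at least partially on the returned template.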
20210034498COUNTERMEASURE IMPLEMENTATION FOR PROCESSING STAGE DEVICES - According to examples, an apparatus may include a first processing stage device to process data, a second processing stage device to process the processed data, and a control device. The control device may access first performance information of the first processing stage device, determine, from the accessed first performance information, an operational state of the first processing stage device, and determine whether a countermeasure is to be taken based on the determined operational state of the first processing stage device, in which the countermeasure is to improve the operational state of the first processing stage device. The control device may also, based on a determination that a countermeasure is to be taken, output an instruction for the countermeasure to be implemented.2021-02-04
20210034499MULTI-CORE I/O TRACE ANALYSIS - Improved mechanisms and techniques for recording and aggregating trace information from multiple computing modules of a storage system may be provided. On a storage system having multiple computing modules, where each computing module has multiple processing cores, processing cores may record trace information for I/O operations in dedicated local memory—i.e., memory in the same computing module as the processing core that is dedicated to the computing module. One of the processing cores may be configured to aggregate trace information from across multiple computing modules into its dedicated local memory by accessing trace information from the dedicated local memories of the other computing modules in addition to its own. The aggregated information in one dedicated local memory then may be analyzed for functionality and/or performance and additional action taken based on the analysis.2021-02-04
20210034500ARTIFICIAL INTELLIGENCE ENABLED OUTPUT SPACE EXPLORATION FOR GUIDED TEST CASE GENERATION - A method for testing software applications in a system under test (SUT) includes building a reference model of the SUT comprising a computer-based neural network, training the reference model using input data and corresponding output data generated by the SUT, selecting an output value within a domain of possible output values of the SUT representing an output that is not represented in the output data used to train the reference model, applying the selected output value to the reference model and tracing the selected output through the reference model to identify test input values that, when input to the reference model, produce the selected output value, and using the identified test input values to test the system under test.2021-02-04
20210034501CODE OPTIMIZATION FOR CONNECTED MANAGED RUNTIME ENVIRONMENTS - A first instance of a managed runtime environment is provided. An optimized version of the code unit and a corresponding set of one or more speculative assumptions are received at the first instance of the managed runtime environment, wherein the optimized version of the code unit produces the same logical results as the code unit unless at least one of the set of one or more speculative assumptions is not true, and wherein the optimized version of the code unit and the corresponding set of one or more speculative assumptions are generated by an entity that is different from the first instance of the managed runtime environment. The optimized version of the code unit is executed at the first instance of the managed runtime environment. Whether the set of one or more speculative assumptions hold true is monitored at the first instance of the managed runtime environment.2021-02-04
20210034502DATA VERIFICATION SYSTEM - Aspects of the disclosure provide for a computer program product comprising a computer readable medium having program instructions embodied therewith, the program instructions executable by a processor to generate a set of scenarios corresponding to a test data set and depending on a selected data analysis model, determine a value for each point in time over a defined time interval and an exposure profile that is a continuous time representation of each value determined for each point in time, determine a risk envelope desired for the scenarios, determine a test statistic defining a fraction of the defined time interval that the exposure profile is outside the risk envelope, determine a cumulative distribution of the test statistic, the cumulative distribution having a critical value corresponding to a defined probability of accuracy of the data analysis model, and validate the data analysis model based on the critical value and the test statistic.2021-02-04
20210034503A METHOD OF ACCESSING METADATA WHEN DEBUGGING A PROGRAM TO BE EXECUTED ON PROCESSING CIRCUITRY - A technique is provided for accessing metadata when debugging a program to be executed on processing circuitry. The processing circuitry operates on data formed of data granules having associated metadata items. A method of operating a debugger is provided that comprises controlling the performance of metadata access operations when the debugger decides to access a specified number of metadata items. In particular, the specified number is such that the metadata access operation needs to be performed by the processing circuitry multiple times in order to access the specified number of metadata items. Upon deciding to access a specified number of metadata items, the debugger issues at least one command to cause the processing circuitry to perform a plurality of instances of the metadata access operation in order to access at least a subset of the specified number of metadata items. The number of metadata items accessed by each instance of the metadata access operation is non-deterministic by the debugger from the metadata access operation. However, the at least one command is such that the plurality of instances of the metadata access operation are performed by the processing circuitry without the debugger interrogating the processing circuitry between each instance of the metadata access operation to determine progress in the number of metadata items accessed. Such an approach can significantly improve the efficiency of performing such accesses to metadata items under debugger control.2021-02-04
20210034504VALIDATION OF INSPECTION SOFTWARE - Systems and methods for validating an inspection software are provided. A dataset which defines properties for a three-dimensional component having at least one known defect is obtained. The three-dimensional component is inspected with the inspection software using the dataset to obtain inspection results for the three-dimensional component. The inspection results are compared to reference results for the three-dimensional component. When the inspection results correspond to the reference results within a predetermined threshold, a signal indicative of validation of the inspection software is issued.2021-02-04
20210034505MODIFIED EXECUTABLES - Example implementations relate to testing an original executable. In an example, the original executable is received at a network device. A modified executable is generated by replacing calls in the original executable to production application programming interfaces (APIs) with calls to mock APIs. The modified executable is executed on the network device. Information associated with execution of the modified executable on the network device is recorded for post-execution analysis.2021-02-04
20210034506TESTING A COMPUTER PROGRAM - Examples described herein relate to testing a computer program. In an example, a first set of factors to be considered in a first test plan for testing a computer program may be selected. A first set of test data may be assigned to the first set of factors. A first test may be performed on the computer program, based on the first set of factors, considering the first set of test data. The first test results obtained in response to performing the first test on the computer program may be analyzed. In response to analyzing the first test results, at least one of the first test plan, the first set of factors, and the first set of test data for testing the computer program may be updated.2021-02-04
20210034507SYSTEMS AND METHODS FOR AUTOMATED INVOCATION OF ACCESSIBILITY VALIDATIONS IN ACCESSIBILITY SCRIPTS - Systems and methods for automated invocation of accessibility validations in accessibility scripts are disclosed. According to one embodiment, in an information processing apparatus comprising at least one computer processor, an automated accessibility test program performs the following: (1) invoking an automated test program; (2) invoking the automated accessibility test program in the automated test program; (3) loading a webpage to be validated; (4) identifying at least one interactive webpage element on the webpage; (5) causing the automated accessibility test program to validate the interactive webpage element; (6) storing a result of the validation; and (7) performing an action validation on the interactive webpage element.2021-02-04
20210034508SYSTEMS AND METHODS FOR TESTING ONLINE USE-CASE SCENARIOS IN A STAGING ENVIRONMENT - Methods and systems are presented for automatically configuring a staging environment to facilitate testing of online use-case scenarios for an online service provider. In response to receiving a request to test an online use-case scenario, a user account configuration may be derived from the use-case scenario. Account data for creating a user account is generated based on the user account configuration. The account data is inserted into a database of the staging environment to create the user account within the staging environment. A workflow associated with the online use-case scenario is automatically performed based on the newly generated user account in the staging environment. One or more defects observed while performing the workflow is reported to a user.2021-02-04
20210034509ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF - An electronic apparatus is provided. The electronic apparatus according to an embodiment includes a memory configured to store computer executable instructions, and a processor configured to, by executing the computer executable instructions, based on a request for executing a program being received and an available capacity of a first area of the memory to be allocated to the program being insufficient, swap out page data stored in the first area to a second area of the memory, wherein the processor is further configured to swap out the page data partially or entirely based on an attribute of the page data.2021-02-04
20210034510SYSTEM AND METHOD FOR REDUCED LOCK CONTENTION AND IMPROVED PERFORMANCE - A method, computer program product, and computer system for setting a preferred alignment value to a size of an address space mapped by one or more root pages. An allocation request may be received for the address space. A binary buddy allocation scheme may be executed to allocate an extent for the allocation request based upon, at least in part, the preferred alignment value.2021-02-04
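A binary buddy scheme rounds each request up to a power of two, so an allocated extent is naturally aligned to its own size; combining that with a preferred alignment value (e.g., the address span mapped by a root page) might look like this sketch, with function names invented for illustration:

```python
def buddy_size(request: int) -> int:
    """Round a request up to the next power of two, as a binary buddy
    allocator does; a buddy extent is naturally aligned to its size."""
    size = 1
    while size < request:
        size <<= 1
    return size

def allocate(free_offset: int, request: int, preferred_alignment: int):
    """Place a buddy-sized extent at or after free_offset, aligned to at
    least the preferred alignment value. Returns (start, size)."""
    size = buddy_size(request)
    align = max(size, preferred_alignment)
    # Round the free offset up to the chosen alignment boundary.
    start = (free_offset + align - 1) // align * align
    return start, size
```

Aligning extents to the root-page span keeps concurrent allocations under different root pages, which is one way the claimed lock-contention reduction could arise.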
20210034511VIRTUAL ADDRESS SPACE DUMP IN A COMPUTER SYSTEM - A method, computer system, and computer program product for operating a computer system to carry out a data dump of a data image of memory contents. Computer operations are temporarily suspended to service the dump request in order to dump the volatile memory contents required for the data image and to generate a record of the non-volatile memory pages which need to be dumped. Computer operations are then resumed under supervision of a monitoring process which screens access requests to the non-volatile memory against the dump record. A request relating to a page contained in the dump record is acted upon by writing the contents of that page to the dump storage space, so the page contents are dumped before they are modified. The dump record is continually updated to keep track of what is still outstanding to complete the dump until such time as the dump is complete.2021-02-04
20210034512MEMORY CONTROLLER AND METHOD OF OPERATING THE SAME - The memory controller controls at least one memory device including a plurality of stream storage areas. The memory controller comprises a buffer, a write history manager, a write controller, and a garbage collection controller. The buffer stores write data. The write history manager stores write count values for each of the plurality of stream storage areas and generates write history information indicating a write operation frequency for each of the plurality of stream storage areas based on the write count values. The write controller controls the at least one memory device to store the write data provided from the buffer. The garbage collection controller controls the at least one memory device to perform a garbage collection operation on a target stream storage area selected from among the plurality of stream storage areas based on the write history information.2021-02-04
20210034513STORAGE DEVICE AND OPERATING METHOD THEREOF - A storage device includes a nonvolatile memory device that includes a first area, a second area, and a third area, and a controller that receives a write command and first data from a host device, preferentially writes the first data in the first area or the second area rather than the third area when the first data is associated with a turbo write, and writes the first data in the first area, the second area, or the third area when the first data is associated with a normal write. The controller moves second data between the first area, the second area, and the third area based on a policy received from the host device.2021-02-04
20210034514STORAGE DEVICE - A storage device includes a nonvolatile memory device including a first region and a second region, and a controller that receives a first operation command including move attribute information and a first logical block address from an external host device and moves first data corresponding to the first logical block address from the first region to the second region in response to the received first operation command, and when the first operation command does not include the move attribute information, the controller performs a first operation corresponding to the first operation command.2021-02-04
20210034515SCRUBBER DRIVEN WEAR LEVELING IN OUT OF PLACE MEDIA TRANSLATION - A process for wear-leveling in a memory subsystem, in which references to invalidated chunks and a write count for each invalidated chunk are received by a wear-leveling manager. The wear-leveling manager orders the received references to the invalidated chunks of the memory subsystem in a tracking structure based on the write count of each of the invalidated chunks, and provides a reference to at least one of the invalidated chunks based on the ordering from the tracking structure to a write scheduler to service a write request, wherein the memory subsystem is wear-leveled by biasing the order of the invalidated chunks to prioritize low write count chunks.2021-02-04
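The ordering step above maps naturally onto a min-heap keyed by write count. The sketch below is illustrative only (not the claimed implementation); the class and method names are assumptions.

```python
import heapq

class WearLevelTracker:
    """Orders invalidated chunks so the write scheduler reuses
    the least-worn (lowest write count) chunk first."""

    def __init__(self):
        self.heap = []   # (write_count, chunk_ref) pairs

    def record_invalidated(self, chunk_ref, write_count):
        # references arrive from the scrubber with their write counts
        heapq.heappush(self.heap, (write_count, chunk_ref))

    def next_chunk_for_write(self):
        # bias writes toward the lowest-write-count invalidated chunk
        if not self.heap:
            return None
        return heapq.heappop(self.heap)[1]
```

Serving writes from the low-count end of the heap is what levels wear: heavily written chunks sit near the bottom and are reused last.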
20210034516SYSTEM AND METHOD FOR IMPROVING WRITE PERFORMANCE FOR LOG STRUCTURED STORAGE SYSTEMS - A method, computer program product, and computer system for identifying, by a computing device, a list of objects containing a plurality of physical layer blocks (PLBs). One or more next PLBs of the plurality of PLBs may be allocated from a selected free object of the list of objects. One or more additional free objects from the list of objects may be generated. Garbage collection may be performed between an inactive object of the plurality of objects and the selected free object.2021-02-04
20210034517METHOD, APPARATUS, DEVICE AND COMPUTER-READABLE STORAGE MEDIUM FOR STORAGE MANAGEMENT - Example embodiments of the present disclosure provide a method, an apparatus, a device and a computer-readable storage medium for storage management. The method for storage management includes: obtaining an available channel mode of a plurality of channels in a memory of a data processing system, the available channel mode indicating availabilities of the plurality of channels, and each of the plurality of channels being associated with a set of addresses in the memory; obtaining a channel data-granularity of the plurality of channels, the channel data-granularity indicating a size of a data block that can be carried on each channel; obtaining a target address of data to be transmitted in the memory; and determining a translated address corresponding to the target address based on the available channel mode and the channel data-granularity.2021-02-04
20210034518SYSTEM AND METHOD FOR BALANCE LOCALIZATION AND PRIORITY OF PAGES TO FLUSH IN A SEQUENTIAL LOG - A method, computer program product, and computer system for staging writes into a log in chronological order, wherein each write may have a log record of a plurality of log records describing data of the write. The log record may be organized into a bucket of a plurality of buckets associated with a range of a plurality of ranges within a backing store, wherein each bucket of the plurality of buckets may include two respective keys. The log record of the plurality of log records may be flushed from the bucket of the plurality of buckets to the backing store at a location and in an order determined based upon, at least in part, the two keys included with the bucket.2021-02-04
20210034519HOST CACHE COHERENCY WHEN READING DATA - When a read request for a data portion is received from an application executing on a host, the host may determine whether the data portion is in host cache, and if so, whether the logical storage unit of the data portion is shared by another host system. If there is another host system sharing the logical storage unit, a latest version stored on the storage system may be determined and compared to the version stored in the host cache. If the version in the host cache is the same as the latest version stored on the storage system, the data portion may be retrieved from the host cache. If the version in the host cache is not the latest version stored on the storage system, the data portion may be retrieved from the storage system, and the host cache may be updated with the latest version of the data portion.2021-02-04
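The version-comparison flow above can be sketched as a small read path. This is an illustrative sketch under assumed interfaces (the `FakeStorage` class, the version counter, and the cache dictionary are all inventions for illustration, not the patented mechanism).

```python
class FakeStorage:
    """Stand-in storage system that tracks a version per address."""
    def __init__(self):
        self.blocks = {}   # addr -> (data, version)

    def write(self, addr, data):
        _, v = self.blocks.get(addr, (None, 0))
        self.blocks[addr] = (data, v + 1)

    def version(self, addr):
        return self.blocks[addr][1]

    def read(self, addr):
        return self.blocks[addr]


def read_data_portion(addr, host_cache, storage, lun_shared):
    entry = host_cache.get(addr)
    if entry is not None:
        if not lun_shared:
            return entry["data"]            # sole owner: cache is authoritative
        if entry["version"] == storage.version(addr):
            return entry["data"]            # cached copy matches latest version
    # miss or stale: fetch from storage and refresh the host cache
    data, version = storage.read(addr)
    host_cache[addr] = {"data": data, "version": version}
    return data
```

Note the asymmetry the abstract implies: the version check (a round trip to the storage system) is only paid when the logical storage unit is shared.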
20210034520TECHNIQUES FOR REDUCING SIZE OF LOG RECORDS - Techniques for processing I/O operations include: receiving a write I/O operation that writes first data to a target location, wherein the target location is represented as a logical device and offset within a logical address space of the logical device; storing a log record for the write I/O operation in a log file; and performing first processing of the log record. The log record includes log data, comprising the first data, and a log descriptor. The log descriptor includes a target logical address for the target location in a file system logical address space. The log descriptor includes a first value denoting the binary logarithm of an extent size of the logical device. The first processing includes flushing the log record from the log file to store the first data of the log record on an extent of physical storage provisioned for the logical device.2021-02-04
20210034521DATA DEFINED CACHES FOR SPECULATIVE AND NORMAL EXECUTIONS - A cache system, having: a first cache; a second cache; a configurable data bit; and a logic circuit coupled to a processor to control the caches based on the configurable bit. When the configurable bit is in a first state, the logic circuit is configured to: implement commands for accessing a memory system via the first cache, when an execution type is a first type; and implement commands for accessing the memory system via the second cache, when the execution type is a second type. When the configurable data bit is in a second state, the logic circuit is configured to: implement commands for accessing the memory system via the second cache, when the execution type is the first type; and implement commands for accessing the memory system via the first cache, when the execution type is the second type.2021-02-04
20210034522STORAGE DEVICE AND OPERATING METHOD THEREOF - A storage device for outputting a program completion response before a program operation is completed includes a buffer memory for storing data from a host, a memory device for storing data from the buffer memory, and a memory controller for controlling the buffer memory and the memory device. The buffer memory stores the data according to mapping information. The memory controller includes a response controller and a mapping controller. The response controller outputs a remapping request for changing mapping in the buffer memory when the data and a corresponding storage request are received, and outputs a storage completion response when a remapping operation is completed. The mapping controller outputs, based on the remapping request, the mapping information on usable storage areas except an unusable area, by performing a remapping operation that changes the area of the buffer memory in which the data is stored from a usable storage area to the unusable area.2021-02-04
20210034523FAULT TOLERANT SYSTEMS AND METHODS FOR CACHE FLUSH COORDINATION - In part, the disclosure relates to a method of performing a checkpoint process in an active-active computer system including a first node and a second node, wherein each node includes an active checkpoint cache, flush cache, and data storage. In various embodiments, flush operations are coordinated between nodes. The method includes receiving a request for a checkpoint operation at the first node; pausing activity at the first node; notifying the second node of the impending checkpoint operation; performing the checkpoint operation, wherein data associated with the checkpoint operation includes the active checkpoint cache and the flush cache; merging the active checkpoint cache into the flush cache; and resuming activity at the first node. The method may also include each node informing the other node of the completion of cache flush operations.2021-02-04
20210034524STACKED MEMORY DEVICE SYSTEM INTERCONNECT DIRECTORY-BASED CACHE COHERENCE METHODOLOGY - A system includes a plurality of host processors and a plurality of hybrid memory cube (HMC) devices configured as a distributed shared memory for the host processors. An HMC device includes a plurality of integrated circuit memory die including at least a first memory die arranged on top of a second memory die, and at least a portion of the memory of the memory die is mapped to include at least a portion of a memory coherence directory; and a logic base die including at least one memory controller configured to manage three-dimensional (3D) access to memory of the plurality of memory die by at least one second device, and logic circuitry configured to implement a memory coherence protocol for data stored in the memory of the plurality of memory die.2021-02-04
20210034525PROGRAMMABLE BROADCAST ADDRESS - A method for initializing functional blocks on an electronic chip includes writing a programmable broadcast address to one or more functional blocks in a broadcast group; setting the one or more functional blocks in the broadcast group to a broadcast enable mode; writing one or more transactions to the programmable broadcast address; and disabling the broadcast enable mode.2021-02-04
20210034526Network Interface Device - A network interface device comprises a programmable interface configured to provide a device interface with at least one bus between the network interface device and a host device. The programmable interface is programmable to support a plurality of different types of device interface.2021-02-04
20210034527APPLICATION AWARE SOC MEMORY CACHE PARTITIONING - Systems, apparatuses, and methods for dynamically partitioning a memory cache among a plurality of agents are described. A system includes a plurality of agents, a communication fabric, a memory cache, and a lower-level memory. The partitioning of the memory cache for the active data streams of the agents is dynamically adjusted to reduce memory bandwidth and increase power savings across a wide range of applications. A memory cache driver monitors activations and characteristics of the data streams of the system. When a change is detected, the memory cache driver dynamically updates the memory cache allocation policy and quotas for the agents. The quotas specify how much of the memory cache each agent is allowed to use. The updates are communicated to the memory cache controller to enforce the new policy and enforce the new quotas for the various agents accessing the memory.2021-02-04
20210034528DYNAMICALLY ADJUSTING PREFETCH DEPTH - Disclosed is a computer implemented method and system to dynamically adjust prefetch depth, the method comprising, identifying a first prefetch stream, wherein the first prefetch stream is identified in a prefetch request queue (PRQ), and wherein the first prefetch stream includes a first prefetch depth. The method also comprises determining a number of inflight prefetches, and comparing, a number of prefetch machines against the number of inflight prefetches, wherein each of the prefetch machines is configured to monitor one prefetch request. The method further includes adjusting, in response to the comparing, the first prefetch depth of the first prefetch stream.2021-02-04
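The depth-adjustment comparison above (inflight prefetches versus available prefetch machines) can be expressed as a simple heuristic. This is an illustrative sketch, not the claimed method; the half-occupancy threshold and the depth bounds are assumptions for illustration.

```python
def adjust_depth(depth, inflight, n_machines, min_depth=1, max_depth=8):
    """Adjust a stream's prefetch depth from machine occupancy.

    If every prefetch machine is busy, the stream is outrunning the
    hardware's ability to track requests, so back off. If machines sit
    mostly idle, there is headroom to prefetch deeper.
    """
    if inflight >= n_machines:
        return max(min_depth, depth - 1)
    if inflight < n_machines // 2:
        return min(max_depth, depth + 1)
    return depth
```

In a real prefetcher this decision would run per stream in the prefetch request queue (PRQ), as the abstract describes.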
20210034529DYNAMICALLY ADJUSTING PREFETCH DEPTH - Disclosed is a computer implemented method to dynamically adjust prefetch depth, the method comprising sending, to a first prefetch machine, a first prefetch request configured to fetch a first data address from a first stream at a first depth to a lower level cache. The method also comprises sending, to a second prefetcher, a second prefetch request configured to fetch the first data address from the first stream at a second depth to a highest-level cache. The method further comprises determining the first data address is not in the lower level cache, determining, that the first prefetch request is in the first prefetch machine, and determining, in response to the first prefetch request being in the first prefetch machine, that the first stream is at steady state. The method comprises adjusting, in response to determining that the first stream is at steady state, the first depth.2021-02-04
20210034530FAST CACHE LOADING WITH ZERO FILL - A processor system includes a processor core, a cache, a cache controller, and a cache assist controller. The processor core issues a read/write command for reading data from or writing data to a memory. The processor core also outputs an address range specifying addresses for which the cache assist controller can return zero fill, e.g., an address range for the read/write command. The cache controller transmits a cache request to the cache assist controller based on the read/write command. The cache assist controller receives the address range output by the processor core and compares the address range to the cache request. If a memory address in the cache request falls within the address range, the cache assist controller returns a string of zeroes, rather than fetching and returning data stored at the memory address.2021-02-04
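The zero-fill check above reduces to a range comparison on the requested address. The sketch below is illustrative only; the function names, the 64-byte line size, and the callback interface are assumptions, not the patented design.

```python
def cache_assist_read(addr, zero_fill_range, fetch_from_memory):
    """Return a zero cache line for addresses the core declared as zero-fill,
    avoiding a memory fetch; otherwise fall through to the fetch path."""
    lo, hi = zero_fill_range
    if lo <= addr < hi:
        # address falls inside the declared range: return zeroes, no fetch
        return bytes(64)
    return fetch_from_memory(addr)
```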
20210034531CACHE WITH SET ASSOCIATIVITY HAVING DATA DEFINED CACHE SETS - A cache system, having: a first cache set; a second cache set; and a logic circuit coupled to a processor to control the caches based on at least respective first and second registers. When a connection to an address bus receives a memory address from the processor, the logic circuit is configured to: generate a set index from at least the address; and determine whether the generated set index matches with a content stored in the first register or with a content stored in the second register. And, the logic circuit is configured to implement a command via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register.2021-02-04
20210034532DATA STORAGE DEVICE, DATA PROCESSING SYSTEM, AND OPERATING METHOD OF DATA STORAGE DEVICE - A data storage device may include a controller configured to generate an ID based on a name and a version of an application transmitted from a host device together with a logical address, and generate an L2P map list for each application based on the ID; and a nonvolatile memory apparatus including a plurality of map blocks configured to store map data for each ID.2021-02-04
20210034533MANAGING WRITE ACCESS TO DATA STORAGE DEVICES FOR SPONTANEOUS DE-STAGING OF CACHE - Writes to one or more physical storage devices may be blocked after a certain storage consumption threshold (WBT) for each physical storage device. A WBT for certain designated physical storage devices may be applied in addition to, or as an alternative to, determining and applying a user-defined background task mode threshold (UBTT) for certain designated physical storage devices. In some embodiments, the WBT and UBTT for a physical storage device designated for spontaneous de-staging may be a same threshold value. Write blocking management may include, for each designated physical storage device, blocking any writes to the designated physical storage device after a WBT for the designated physical storage device has been reached, and restoring (e.g., unblocking) writes to the designated physical storage device after storage consumption on the physical storage device has been reduced to a storage consumption threshold (WRT) lower than the WBT.2021-02-04
20210034534SYSTEM AND METHOD FOR DUAL NODE PARALLEL FLUSH - A method, computer program product, and computer system for identifying a first node that has written a first page of a plurality of pages to be flushed. A second node that has written a second page of the plurality of pages to be flushed may be identified. It may be determined whether the first page of the plurality of pages is to be flushed by one of the first node and the second node and whether the second page of the plurality of pages is to be flushed by one of the first node and the second node based upon, at least in part, one or more factors. The first node may allocate the first page of the plurality of pages and the second page of the plurality of pages to be flushed in parallel by one of the first node and the second node based upon, at least in part, the one or more factors.2021-02-04
20210034535DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device may include: a nonvolatile memory configured to store L2P (Logical to Physical) map data and user data; and a controller configured to determine whether read commands which are sequentially transferred from a host device correspond to a backward sequential read, increase a backward sequential read count when the read commands correspond to a backward sequential read, set a pre-read start logical block address (LBA) and a length according to a preset condition when the backward sequential read count is equal to or greater than a reference value, and load an L2P map of the corresponding LBA and user data corresponding to the L2P map from the nonvolatile memory in advance.2021-02-04
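The backward-sequential detection above can be sketched as a small state machine over incoming LBAs. This is an illustrative sketch; the class name, the threshold, and the pre-read length are assumptions rather than the claimed values.

```python
class BackwardReadDetector:
    """Detects backward sequential reads and suggests a pre-read range."""

    def __init__(self, threshold=3, pre_read_len=8):
        self.last_lba = None
        self.count = 0                  # backward sequential read count
        self.threshold = threshold      # reference value from the abstract
        self.pre_read_len = pre_read_len

    def on_read(self, lba, length=1):
        pre_read = None
        if self.last_lba is not None and lba + length == self.last_lba:
            # this read ends exactly where the previous one began: backward
            self.count += 1
            if self.count >= self.threshold:
                # suggest pre-loading the L2P map and data just below this read
                start = max(0, lba - self.pre_read_len)
                pre_read = (start, lba - start)
        else:
            self.count = 0
        self.last_lba = lba
        return pre_read
```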
20210034536STORAGE DEVICE, MEMORY SYSTEM COMPRISING THE SAME, AND OPERATING METHOD THEREOF - A memory system includes a storage device including a nonvolatile memory device and a storage controller configured to control the nonvolatile memory device, and a host that accesses the storage device. The storage device transfers map data, in which a physical address of the nonvolatile memory device and a logical address provided from the host are mapped, to the host depending on a request of the host. The host stores and manages the transferred map data as map cache data. The map cache data are managed depending on a priority that is determined based on a corresponding area of the nonvolatile memory device.2021-02-04
20210034537COMMAND RESULT CACHING FOR BUILDING APPLICATION CONTAINER IMAGES - Implementations of the disclosure provide systems and methods for receiving, by a processing device, a request for an application image. A sequence of commands associated with the application image and a value of a parameter associated with the sequence of commands are received. Responsive to determining that the sequence of commands has been previously executed with the value of the parameter, the processing device retrieves, from a cache, a result of executing the sequence with the value of the parameter. The application image is built using the retrieved result of executing the sequence.2021-02-04
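The caching decision above amounts to memoization keyed on the command sequence plus the parameter value. The sketch below is illustrative (the `BuildCache` class, the SHA-256 keying, and the `execute` callback are assumptions for illustration, not the disclosed implementation).

```python
import hashlib

class BuildCache:
    """Caches the result of executing a command sequence with a parameter."""

    def __init__(self):
        self.store = {}

    def _key(self, commands, param_value):
        # hash the ordered command sequence together with the parameter value
        h = hashlib.sha256()
        for cmd in commands:
            h.update(cmd.encode())
            h.update(b"\0")             # separator so command boundaries matter
        h.update(repr(param_value).encode())
        return h.hexdigest()

    def build(self, commands, param_value, execute):
        key = self._key(commands, param_value)
        if key in self.store:
            return self.store[key]      # cache hit: reuse the prior result
        result = execute(commands, param_value)
        self.store[key] = result
        return result
```

Changing either the command sequence or the parameter value yields a new key, which is why a parameter change forces a rebuild while a repeated request does not.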
20210034538VOLATILE READ CACHE IN A CONTENT ADDRESSABLE STORAGE SYSTEM - A distributed storage system comprises a first module and a second module. The first module processes read requests for an address range, to send to the second module. The first module receives an address associated with a read request for a data page stored on the second module. A method includes searching a table on the first module for a content-based signature of the data page based on the address, and providing the data page from a read cache of the first module if the content-based signature is in the read cache, where content-based signatures in the table are associated with the address range.2021-02-04
20210034539MEMORY-AWARE PRE-FETCHING AND CACHE BYPASSING SYSTEMS AND METHODS - Systems, apparatuses, and methods related to memory management are described. For example, these may include a first memory level including memory pages in a memory array, a second memory level including a cache, a pre-fetch buffer, or both, and a memory controller that determines state information associated with a memory page in the memory array targeted by a memory access request. The state information may include a first parameter indicative of a current activation state of the memory page and a second parameter indicative of statistical likelihood (e.g., confidence) that a subsequent memory access request will target the memory page. The memory controller may disable storage of data associated with the memory page in the second memory level when the first parameter associated with the memory page indicates that the memory page is activated and the second parameter associated with the memory page is greater than or equal to a threshold.2021-02-04
20210034540AVOID CACHE LOOKUP FOR COLD CACHE - Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive, in a read/modify/write (RMW) pipeline, a cache access request from a requestor, wherein the cache request comprises a cache set identifier associated with requested data in the cache set, determine whether the cache set associated with the cache set identifier is in an inaccessible or invalid state, and in response to a determination that the cache set is in an inaccessible state or an invalid state, to terminate the cache access request. Other embodiments are also disclosed and claimed.2021-02-04
20210034541MEMORY SYSTEM, MEMORY CONTROL DEVICE, AND MEMORY CONTROL METHOD - A memory system includes a memory that includes a buffer region having a plurality of buffer storage regions, each buffer storage region including a plurality of buffer memory cells, the plurality of buffer memory cells being cells storing data of 1 bit or a plurality of bits in units of the buffer storage regions, and a first storage area having a plurality of first storage regions, each including a plurality of first memory cells storing data of a plurality of bits; and a control circuit that changes at least one buffer storage region in which data is written to at least one first storage region, and changes at least one free first storage region into at least one buffer storage region to replace the changed at least one buffer storage region.2021-02-04
20210034542MULTIPLYING DATA STORAGE DEVICE READ THROUGHPUT - A data storage system includes a logical space having logical block addresses (LBAs) divided into non-overlapping LBA ranges, and a physical space having pairs of physical bands. The system also includes a map in which first successive alternate LBAs of each different one of the non-overlapping LBA ranges are mapped to successive adjacent physical blocks of a first physical band of each different pair of the pairs of physical bands, and second successive alternate LBAs of each different one of the non-overlapping LBA ranges are mapped to successive adjacent physical blocks of a second physical band of each different pair of the pairs of physical bands. A controller employs the map to concurrently read data from a first physical block of the first physical band of one pair of physical bands and from a first physical block of the second physical band of the same pair of physical bands.2021-02-04
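The alternating LBA-to-band mapping above comes down to simple arithmetic on the LBA. The sketch below is illustrative only; the function name and the tuple return shape are assumptions, but the interleave (even offsets to the first band, odd offsets to the second) follows the mapping the abstract describes.

```python
def lba_to_physical(lba, range_size):
    """Map an LBA to (band_pair, band_within_pair, block_within_band).

    Successive alternate LBAs of a range land on adjacent blocks of the
    two bands of one pair, so two consecutive LBAs can be read from the
    two bands concurrently.
    """
    pair = lba // range_size    # which non-overlapping LBA range / band pair
    off = lba % range_size      # offset within that range
    band = off % 2              # even offsets -> first band, odd -> second
    block = off // 2            # alternate LBAs map to adjacent blocks
    return pair, band, block
```

With this layout, LBAs 0 and 1 resolve to block 0 of the two bands of pair 0, which is the concurrent-read case the controller exploits.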
20210034543HASH-BASED ONE-LEVEL MAPPING FOR STORAGE CLUSTERS - A method comprising: storing, in a memory, a mapping tree that is implemented by using an array of mapping pages, the mapping tree having a depth of D, wherein D is an integer greater than or equal to 0; receiving a write request that is associated with a first type-1 address; storing, in a storage device, data associated with the write request, the data associated with the write request being stored in the storage device based on a first type-2 address; generating a map entry that maps the first type-1 address to the first type-2 address; calculating a first hash digest of the first type-1 address; and storing the map entry in a first mapping page.2021-02-04
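The one-level lookup above hinges on the hash digest of the type-1 address selecting a mapping page directly. The sketch below is illustrative; the choice of SHA-256, the 8-byte address width, and the modulo page selection are assumptions for illustration rather than the claimed scheme.

```python
import hashlib

def mapping_page_index(type1_address, num_pages):
    """Pick a mapping page from the hash digest of a type-1 address.

    Because the digest alone selects the page, the lookup is one level:
    no tree descent is needed regardless of the mapping tree's depth D.
    """
    digest = hashlib.sha256(type1_address.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:8], "little") % num_pages
```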
20210034544HARDWARE FOR SPLIT DATA TRANSLATION LOOKASIDE BUFFERS - Systems, methods, and apparatuses relating to hardware for split data translation lookaside buffers. In one embodiment, a processor includes a decode circuit to decode instructions into decoded instructions, an execution circuit to execute the decoded instructions, and a memory circuit comprising a load data translation lookaside buffer circuit and a store data translation lookaside buffer circuit separate and distinct from the load data translation lookaside buffer circuit, wherein the memory circuit sends a memory access request of the instructions to the load data translation lookaside buffer circuit when the memory access request is a load data request and to the store data translation lookaside buffer circuit when the memory access request is a store data request to determine a physical address for a virtual address of the memory access request.2021-02-04
20210034545REDUCING IMPACT OF CONTEXT SWITCHES THROUGH DYNAMIC MEMORY-MAPPING OVERALLOCATION - A method including: receiving, via a processor, established upper bounds for dynamic structures in a multi-tenant system; creating, via the processor, arrays comprising related memory-management unit (MMU) mappings to be placed together; and placing the dynamic structures within the arrays, the placing comprising for each array: skipping an element of the array based on determining that placing a dynamic structure in that element would cause the array to become overcommitted and result in a layout where accessing all elements would impose a translation lookaside buffer (TLB) replacement action; and scanning for an array-start entry by placing the start of a first element at an address from which an entire array can be placed without TLB contention, and accessing, via the processor, all non-skipped elements without incurring TLB replacements.2021-02-04
20210034546TRANSPARENT ENCRYPTION - There is disclosed a computing apparatus, including: a memory; a memory encryption controller to encrypt at least a region of the memory; and a network interface to communicatively couple the computing apparatus to a remote host; wherein the memory encryption controller is configured to send an encrypted packet decryptable via an encryption key directly from the memory to the remote host via the network interface, bypassing a network protocol stack.2021-02-04
20210034547SECURING MEMORY USING PROTECTED MEMORY REGIONS - In exemplary aspects described herein, system memory is secured using protected memory regions. Portions of a system memory are assigned to endpoint devices, such as peripheral component interconnect express (PCIe) compliant devices. The portions of the system memory can include protected memory regions. The protected memory regions of the system memory assigned to each of the endpoint devices are configured to control access thereto using device identifiers and/or process identifiers, such as a process address space ID (PASID). When a transaction request is received by a device, the memory included in that request is used to determine whether it corresponds to a protected memory region. If so, the transaction request is executed if the identifiers in the request match the identifiers for which access is allowed to that protected memory region.2021-02-04
20210034548IMPLEMENTING MANAGEMENT COMMANDS UTILIZING AN IN-BAND INTERFACE - A computer-implemented method according to one embodiment includes receiving, at a peripheral device via an in-band interface, a predetermined command; determining, by the peripheral device, a predetermined identifier within the predetermined command; and implementing, by the peripheral device, parameter data associated with the predetermined identifier, in response to the determining.2021-02-04
20210034549DECLARATIVE TRANSACTIONAL COMMUNICATIONS WITH A PERIPHERAL DEVICE VIA A LOW-POWER BUS - The disclosed techniques enable a software program to communicate with a peripheral device (e.g., a sensor), via a low-level communication protocol such as the I2C bus.2021-02-04
20210034550EXPANDER I/O MODULE DISCOVERY AND MANAGEMENT SYSTEM - An expander I/O module discovery/management system includes a secondary system chassis housing an expander I/O module coupled to a server device. The server device identifies the secondary system chassis and an expander I/O module port utilized by that server device, and then generates and transmits an expander I/O module reporting communication identifying the secondary system chassis and the expander I/O module port. A primary system chassis houses a switching I/O module coupled to the expander I/O module. The switching I/O module receives the expander I/O module reporting communication and determines that the secondary system chassis identified in the expander I/O module reporting communication is different than the primary system chassis. In response, the switching I/O module assigns a virtual slot to the expander I/O module, and assigns a virtual port associated with the virtual slot to the expander I/O module port identified in the expander I/O module reporting communication.2021-02-04
20210034551CONTROL DEVICE AND ADJUSTMENT METHOD - A control device is used to adjust an output voltage of a voltage generator, and includes a master circuit, a slave circuit, and a power-scaling control circuit. The master circuit is coupled to a first bus. The slave circuit is coupled to a second bus. In a normal mode, the first and second buses are connected to each other via the power-scaling control circuit, the master circuit accesses the slave circuit via the first and second buses. In an adjustment mode, the power-scaling control circuit controls the master circuit to stop accessing the slave circuit, and the power-scaling control circuit adjusts the output voltage. When the master circuit sends a trigger signal, the power-scaling control circuit enters the adjustment mode. When the master circuit does not send the trigger signal, the power-scaling control circuit enters the normal mode.2021-02-04
20210034552STORAGE SYSTEM WITH SUBMISSION QUEUE SELECTION UTILIZING APPLICATION AND SUBMISSION QUEUE PRIORITY - A host device comprises a plurality of communication adapters and is configured to communicate with a storage system. Each communication adapter comprises a plurality of input-output (IO) submission queues each having a submission queue priority class. A multi-path input-output (MPIO) driver is configured to deliver IO operations to the storage system over the network. The MPIO driver obtains an IO operation that targets a given logical volume of the storage system and determines a process tag value associated with the obtained IO operation. A mapping between the determined process tag value and a given submission queue priority class is determined and IO submission queues are identified as having the given submission queue priority class based at least in part on the mapping. A target IO submission queue is selected from the identified IO submission queues and the IO operation is dispatched to the selected target IO submission queue.2021-02-04
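The queue-selection flow above (process tag, priority-class mapping, then a target queue) can be sketched as follows. This is an illustrative sketch; the dictionary-based queue representation, the least-loaded tiebreak, and the fallback behavior are assumptions for illustration, not the MPIO driver's actual logic.

```python
def select_submission_queue(io_tag, tag_to_class, queues):
    """Pick a target IO submission queue for an IO operation.

    io_tag: process tag value determined from the IO operation
    tag_to_class: mapping from process tag to submission queue priority class
    queues: list of dicts with 'id', 'priority_class', and current 'depth'
    """
    wanted = tag_to_class.get(io_tag, "low")
    # identify the submission queues having the mapped priority class
    candidates = [q for q in queues if q["priority_class"] == wanted]
    if not candidates:
        candidates = queues                      # fall back to any queue
    # among matching queues, dispatch to the least-loaded one
    return min(candidates, key=lambda q: q["depth"])["id"]
```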
20210034553PROFILE-BASED MEMORY OPERATION - Various embodiments described herein provide for operation of a memory sub-system based on a profile (also referred to herein as an operational profile) that causes the memory sub-system to have a specific set of operational characteristics. Additionally, some embodiments can provide dynamic switching between profiles based on a set of conditions being satisfied, such as current time of day or detection of a particular data input/output (I/O) pattern with respect to the memory sub-system.2021-02-04
20210034554METHODS FOR PERFORMING MULTIPLE MEMORY OPERATIONS IN RESPONSE TO A SINGLE COMMAND AND MEMORY DEVICES AND SYSTEMS EMPLOYING THE SAME - Memory devices, memory systems, and methods of operating memory devices and systems are disclosed in which a single command can trigger a memory device to perform multiple operations, such as a single refresh command that triggers the memory device to both perform a refresh command and to perform a mode register read. One such memory device comprises a memory, a mode register, and circuitry configured, in response to receiving a command to perform a refresh operation at the memory, to perform the refresh operation at the memory, and to perform a read of the mode register. The memory can be a first memory portion, the memory device can comprise a second memory portion, and the circuitry can be further configured, in response to the command, to provide on-die termination at the second memory portion of the memory system during at least a portion of the read of the mode register.2021-02-04
20210034555MULTI-LEVEL DATA CACHE AND STORAGE ON A MEMORY BUS - This invention provides a system having a processor assembly interconnected to a memory bus and a memory-storage combine interconnected to the memory bus. The memory-storage combine is adapted to allow access, through the memory bus, to a combination of random access memory (RAM) based data storage and non-volatile mass data storage. A controller is arranged to address both the RAM based data storage and the non-volatile mass data storage as part of a unified address space in the manner of RAM.2021-02-04
20210034556SYSTEM AND METHOD FOR REGULATING HOST IOs AND INTERNAL BACKGROUND OPERATIONS IN A STORAGE SYSTEM - A method, computer program product, and computer system for monitoring host IO latency. It may be identified that a rate of the host IO latency is at one of a plurality of levels. At least one of a rate of background IOs and a rate of host IOs may be regulated based upon, at least in part, the rate of the host IO latency being at the one of the plurality of levels.2021-02-04
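The level-based regulation described above can be illustrated with a small sketch: as measured host IO latency crosses assumed level boundaries, the share of the IO budget allowed to background operations shrinks. The boundary values, the per-level shares, and all names below are invented for illustration and are not taken from the application.

```python
# Illustrative sketch: throttle background IO as host latency rises
# through discrete levels. All thresholds/shares are assumed values.
LATENCY_LEVELS_MS = [1, 5, 20]               # assumed level boundaries
BACKGROUND_SHARE = [0.50, 0.25, 0.10, 0.0]   # allowed background share per level

def latency_level(latency_ms: float) -> int:
    """Return which of the plurality of levels the latency falls in."""
    for level, bound in enumerate(LATENCY_LEVELS_MS):
        if latency_ms < bound:
            return level
    return len(LATENCY_LEVELS_MS)

def regulate(latency_ms: float, total_iops: int):
    """Split a total IOPS budget between host IOs and background IOs."""
    share = BACKGROUND_SHARE[latency_level(latency_ms)]
    background = int(total_iops * share)
    return total_iops - background, background

host, bg = regulate(8.0, 1000)   # 8 ms falls in level 2 -> 10% background
```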
20210034557INTELLIGENT CONTROLLER AND SENSOR NETWORK BUS, SYSTEM AND METHOD INCLUDING A DYNAMIC BANDWIDTH ALLOCATION MECHANISM - A machine automation system for controlling and operating an automated machine. The system includes a controller and sensor bus including a central processing core and a multi-medium transmission intranet for implementing a dynamic burst to broadcast transmission scheme where messages are burst from nodes to the central processing core and broadcast from the central processing core to all of the nodes.2021-02-04
20210034558INTERRUPT SYSTEM FOR RISC-V ARCHITECTURE - An interrupt system for RISC-V architecture includes an original register in a CLIC, a pushmcause register, a pushmepc register, an interrupt response register, and an mtvt2 register; the pushmcause register is used to store the value of the mcause register on a stack by means of an instruction; the pushmepc register is used to store the value of the mepc register on a stack by means of an instruction; the interrupt response register is used to respond to a non-vectored interrupt request issued by a CLIC by means of an instruction, obtain an interrupt subroutine entry address, and modify a global interrupt enable; and the mtvt2 register is used to store a base address of a non-vectored interrupt in a CLIC mode.2021-02-04
20210034559Packet Processing Device and Packet Processing Method - A packet processing device includes: a line adapter configured to receive packets from a communication line; a packet combining unit configured to generate a combined packet by combining a plurality of packets received from the communication line; a packet memory configured to store packets received from the communication line; and a combined packet transferring unit configured to DMA transfer the combined packet generated by the packet combining unit to the packet memory. The combined packet transferring unit writes information of an address of first data of each packet inside the combined packet on the packet memory into a descriptor that is a data area on a memory set in advance.2021-02-04
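The combining step described above, where the combined packet carries several received packets and a descriptor records the address of the first data byte of each packet within the combined packet, can be sketched as below. This is a minimal model, not the device's DMA logic; the function name and the flat address arithmetic are assumptions.

```python
# Hedged sketch of packet combining: concatenate received packets into one
# buffer and record, in a descriptor, the address of each packet's first
# byte within that buffer (names and address model are assumed).
def combine_packets(packets, base_addr: int):
    """Return (combined_buffer, descriptor); descriptor[i] is the memory
    address of the first data byte of packets[i] inside the buffer."""
    combined = bytearray()
    descriptor = []
    for pkt in packets:
        descriptor.append(base_addr + len(combined))  # address of first byte
        combined += pkt
    return bytes(combined), descriptor

buf, desc = combine_packets([b"aaaa", b"bb", b"ccc"], base_addr=0x1000)
```

With the descriptor in hand, a consumer can later locate each original packet inside the combined DMA buffer without re-parsing packet headers.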
20210034560PERSISTENT KERNEL FOR GRAPHICS PROCESSING UNIT DIRECT MEMORY ACCESS NETWORK PACKET PROCESSING - A graphics processing unit may, in accordance with a kernel, determine that at least a first packet is written to a memory buffer of the graphics processing unit by a network interface card via a direct memory access, process the at least the first packet in accordance with the kernel, and provide a first notification to a central processing unit that the at least the first packet is processed in accordance with the kernel. The graphics processing unit may further determine that at least a second packet is written to the memory buffer by the network interface card via the direct memory access, process the at least the second packet in accordance with the kernel, where the kernel comprises a persistent kernel, and provide a second notification to the central processing unit that the at least the second packet is processed in accordance with the kernel.2021-02-04
20210034561MESSAGES BASED ON INPUT/OUTPUT DEVICE SIGNALS TO VIRTUAL COMPUTERS - A computer-readable medium may store machine-readable instructions for execution by a processor. There may be a connection between the processor and a virtual computer. The processor may establish a first data channel between the processor and the virtual computer based on the connection between the processor and the virtual computer. The connection may comprise a second data channel to transfer input/output (I/O) data between the processor and the virtual computer. The processor may receive an input signal from an I/O device coupled to the processor. The processor may provide an input message to the virtual computer via the first data channel, the input message based on the input signal.2021-02-04
20210034562INFORMATION INPUT DEVICE, METHOD, AND PROGRAM - An information input device includes: a communication interface configured to communicate with each of a first external apparatus that operates using a first operating system and a second external apparatus that operates using a second operating system; and a controller configured to operate in a first mode corresponding to a first driver used by the first external apparatus when transferring data to the first external apparatus, and operate in a second mode corresponding to a second driver different from the first driver and used by the second external apparatus when transferring data to the second external apparatus.2021-02-04
20210034563METHODS AND APPARATUS FOR AN INTERFACE - Various embodiments of the present technology may provide methods and apparatus for an interface. The interface may be configured to detect a hot unplug condition based on a first output voltage at an output terminal of a first buffer circuit and a second output voltage at an output terminal of a second buffer circuit, wherein the first and second buffer circuits receive a common input. The interface may further detect the hot unplug condition based on a difference of a peak magnitude of the first output voltage and a peak magnitude of the second output voltage.2021-02-04
20210034564INTELLIGENT CONTROLLER AND SENSOR NETWORK BUS, SYSTEM AND METHOD INCLUDING MULTI-LAYER PLATFORM SECURITY ARCHITECTURE - A machine automation system for controlling and operating an automated machine. The system includes a controller and sensor bus including a central processing core and a multi-medium transmission intranet for implementing a dynamic burst to broadcast transmission scheme where messages are burst from nodes to the central processing core and broadcast from the central processing core to all of the nodes.2021-02-04
20210034565RECALIBRATION OF PHY CIRCUITRY FOR THE PCI EXPRESS (PIPE) INTERFACE BASED ON USING A MESSAGE BUS INTERFACE - An interface couples a controller to a physical layer (PHY) block, where the interface includes a set of data pins comprising transmit data pins to send data to the PHY block and receive data pins to receive data from the PHY block. The interface further includes a particular set of pins to implement a message bus interface, where the controller is to send a write command to the PHY block over the message bus interface to write a value to at least one particular bit of a PHY message bus register, bits of the PHY message bus register are mapped to a set of control and status signals, and the particular bit is mapped to a recalibration request signal to request that the PHY block perform a recalibration.2021-02-04
20210034566Memory Network Processor - A multi-processor system with processing elements, interspersed memory, and primary and secondary interconnection networks optimized for high performance and low power dissipation is disclosed. In the secondary network multiple message routing nodes are arranged in an interspersed fashion with multiple processors. A given message routing node may receive messages from other message nodes, and relay the received messages to destination message routing nodes using relative offsets included in the messages. The relative offset may specify a number of message nodes from the message node that originated a message to a destination message node.2021-02-04
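The relative-offset relaying described in the abstract above can be illustrated with a deliberately simplified model: a one-dimensional chain of message nodes, where each hop moves the message one node toward the destination and shrinks the remaining offset toward zero. The real system is a 2-D interspersed network; the chain topology and all names here are assumptions made only to show the offset mechanism.

```python
# Minimal sketch (1-D chain topology assumed) of relaying by relative
# offset: the offset counts nodes from the originator to the destination.
def relay(nodes, origin: int, offset: int, payload: str) -> int:
    """Deliver payload from nodes[origin] to the node `offset` hops away;
    a negative offset moves in the other direction."""
    current = origin
    step = 1 if offset > 0 else -1
    remaining = offset
    while remaining != 0:
        current += step          # forward one hop toward the destination
        remaining -= step        # remaining offset shrinks at each hop
    nodes[current].append(payload)
    return current

nodes = [[] for _ in range(8)]
dest = relay(nodes, origin=2, offset=3, payload="msg")
```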
20210034567MULTI-PORT MEMORY ARCHITECTURE FOR A SYSTOLIC ARRAY - A memory architecture and a processing unit that incorporates the memory architecture and a systolic array. The memory architecture includes: memory array(s) with multi-port (MP) memory cells; first wordlines connected to the cells in each row; and, depending upon the embodiment, second wordlines connected to diagonals of cells or diagonals of sets of cells. Data from a data input matrix is written to the memory cells during first port write operations using the first wordlines and read out from the memory cells during second port read operations using the second wordlines. Due to the diagonal orientation of the second wordlines and due to additional features (e.g., additional rows of memory cells that store static zero data values or read data mask generators that generate read data masks), data read from the memory architecture and input directly into a systolic array is in the proper order, as specified by a data setup matrix.2021-02-04
20210034568PROCESSOR AND CONTROL METHOD THEREOF - A processor is provided. The processor includes a plurality of processing elements arranged in a matrix form, and a controller configured to: control the plurality of processing elements during a plurality of cycles to process target data; control first processing elements so that each first processing element operates on data provided from adjacent first processing elements and its input first element; input each of second elements included in a second row among the plurality of elements to second processing elements arranged in the second row among the plurality of processing elements; and control the second processing elements so that each second processing element operates on data provided from adjacent second processing elements and its input second element, and on data provided from the adjacent first processing elements in the same column among the first processing elements together with pre-stored operation data.2021-02-04
20210034569METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MANAGING APPLICATION SYSTEM - Techniques manage application systems in an application environment. The application environment includes a first application system, a second application system and a third application system. First snapshot information of a first group of snapshots of the first application system is obtained, the first application system being in active state. Second snapshot information of a second group of snapshots of the second application system is obtained, the second application system being in standby state. It is determined whether the second application system and the third application system have a common snapshot based on the first snapshot information and the second snapshot information. Data is synchronized to the third application system depending on whether the second application system and the third application system have a common snapshot. Overheads required during data synchronization may be reduced as far as possible, and the efficiency of data synchronization may be improved.2021-02-04
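The decision described above, syncing incrementally when a common snapshot exists and otherwise falling back to a full transfer, can be sketched as below. Snapshot IDs as strings, chronological list ordering, and the function names are all assumptions; the sketch only shows why finding a common snapshot reduces synchronization overhead.

```python
# Minimal sketch (snapshot ID model assumed): pick the newest snapshot
# present in both systems' snapshot information, and fall back to a full
# copy when no common snapshot exists.
def latest_common_snapshot(first_info, second_info):
    """Both lists are assumed to be in chronological order."""
    second_set = set(second_info)
    common = [s for s in first_info if s in second_set]
    return common[-1] if common else None

def sync_plan(first_info, second_info):
    """Incremental sync from the common base if one exists, else full copy."""
    base = latest_common_snapshot(first_info, second_info)
    return ("incremental", base) if base else ("full_copy", None)

plan = sync_plan(["s1", "s2", "s3"], ["s1", "s2"])
```

An incremental plan only needs to ship the changes made after the common base snapshot, which is the overhead reduction the abstract refers to.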
20210034570METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MANAGING SNAPSHOT IN APPLICATION ENVIRONMENT - Techniques manage snapshots in an application environment. The application environment includes a first application system and a second application system. A group of snapshots of the first application system are identified in a fracture state where synchronous communication between the first application system and the second application system is paused. A group of snapshot differences between two successive snapshots in the group of snapshots are obtained, the group of snapshots being arranged in chronological order that the group of snapshots are generated. The group of snapshot differences are transmitted from the first application system to the second application system in response to determining the synchronous communication between the first application system and the second application system is resumed. Accordingly, snapshots in the application environment can be managed more effectively, and further data synchronization between the first application system and the second application system may be realized.2021-02-04
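The diff-and-replay flow above, computing differences between successive snapshots in chronological order and transmitting them once communication resumes, can be sketched with a toy data model. Snapshots are modeled as dicts of block-to-value; deletions and the real on-disk format are ignored, and all names are assumptions.

```python
# Illustrative sketch (dict-of-blocks snapshot model assumed; deletions
# are not handled): pairwise diffs of successive snapshots, applied in
# order at the standby side after communication is resumed.
def snapshot_diffs(snapshots):
    """Diff between each pair of successive snapshots, oldest first."""
    diffs = []
    for older, newer in zip(snapshots, snapshots[1:]):
        diffs.append({k: v for k, v in newer.items() if older.get(k) != v})
    return diffs

def apply_diffs(base, diffs):
    """Replay the transmitted diffs over the standby system's base state."""
    state = dict(base)
    for d in diffs:
        state.update(d)
    return state

snaps = [{"a": 1}, {"a": 1, "b": 2}, {"a": 9, "b": 2}]
diffs = snapshot_diffs(snaps)
resynced = apply_diffs(snaps[0], diffs)
```

Because each diff only carries the blocks that changed between two successive snapshots, the transfer after a fracture is proportional to the changes made, not to the full dataset.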
20210034571TRANSACTION LOG INDEX GENERATION IN AN ENTERPRISE BACKUP SYSTEM - Certain embodiments disclosed herein reduce or eliminate a communication bottleneck at the storage manager by reducing communication with the storage manager while maintaining functionality of an information management system. In some implementations, operations performed as part of a backup process may be stored in transaction logs. These transaction logs may include information about a transaction performed between the client computing system and the network storage that hosts the backup of the client computing system. The transaction logs may be provided to a secondary storage system that can be used to form a backup index. The backup index may be used to facilitate accessing the data stored at the network storage. Advantageously, generating the transaction logs and separating the generation of the backup index from the backup process can reduce resource usage during performance of the backup and speed up the backup process while further reducing interaction with the storage manager.2021-02-04
20210034572SHARING COLLECTIONS WITH EXTERNAL TEAMS - The disclosed technology provides for sharing of collections between teams from external entities. The present technology allows administrators of an entity to manage what teams from their entity can be exposed outside of the entity and to manage how their entity is viewed by external partners. Sharing between teams provides benefits of easier sharing whereby it is not necessary to share a collection individually with all users of a team. It also provides a more logical sharing paradigm where collaboration is otherwise thought of between two partner entities and not specific employees of those entities. Sharing between teams allows an administrator to manage the user accounts associated with the team so that as team members come and go, all current team members will have access to projects in which the team is involved. Additionally, established teams can be configured to enjoy the full collaborative benefits of the content management system.2021-02-04
20210034573SYSTEM AND METHOD FOR PARALLEL FLUSHING WITH BUCKETIZED DATA - A method, computer program product, and computer system for organizing a plurality of log records into a plurality of buckets, wherein each bucket is associated with a range of a plurality of ranges within a backing store. A bucket of the plurality of buckets from which a portion of the log records of the plurality of log records are to be flushed may be selected. The portion of the log records may be organized into parallel flush jobs. The portion of the log records may be flushed to the backing store in parallel.2021-02-04
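The bucketized parallel flush described above can be sketched in two steps: group log records into buckets by which backing-store range they touch, then flush one selected bucket's records as parallel jobs. The fixed range width, the thread-pool parallelism, and the dict-backed store are all assumptions made for illustration.

```python
# Hedged sketch (range scheme and store model assumed): bucketize log
# records by backing-store range, then flush one bucket in parallel jobs.
from concurrent.futures import ThreadPoolExecutor

RANGE_SIZE = 1024  # assumed width of each backing-store range

def bucketize(records):
    """Map each (address, payload) log record to the bucket for its range."""
    buckets = {}
    for addr, payload in records:
        buckets.setdefault(addr // RANGE_SIZE, []).append((addr, payload))
    return buckets

def flush_bucket(bucket, backing_store):
    """Flush one bucket's records as parallel jobs (here: a thread pool)."""
    def job(rec):
        addr, payload = rec
        backing_store[addr] = payload
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(job, bucket))
    return len(bucket)

store = {}
buckets = bucketize([(10, b"x"), (2000, b"y"), (40, b"z")])
flushed = flush_bucket(buckets[0], store)
```

Grouping by range before flushing means the parallel jobs within a bucket touch a bounded region of the backing store, which is what makes flushing them concurrently safe to reason about.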
20210034574SYSTEMS AND METHODS FOR VERIFYING PERFORMANCE OF A MODIFICATION REQUEST IN A DATABASE SYSTEM - Provided are systems and methods for verifying, in a database system, that a modification request to events data is completed. The method marks a modification request as verifying and implements a search strategy that searches for unmodified events data (the stragglers) in the least expensive query scope first, then keeps expanding the scope of the query until at least one unmodified events data item is found (a straggler), which is marked as a fail. This strategy includes (i) beginning at the lowest scope search, (ii) searching a database first, continuing to expand the search scope as high as it can go without a fail, and (iii) only when it has finished searching the database without a fail, searching a search engine in the same way. When the searches are done and no fails have been marked, the method marks the request as done.2021-02-04
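The expanding-scope verification above can be sketched as a simple loop: walk the database scopes from cheapest to widest, then the search-engine scopes in the same way, and mark a fail on the first straggler found. The record shape (a `modified` flag) and every name below are assumptions; the sketch only shows the search ordering.

```python
# Sketch of expanding-scope straggler search (scopes and record shape
# assumed): database scopes are checked fully before search-engine scopes,
# each ordered from least to most expensive.
def verify(request_id: int, db_scopes, engine_scopes) -> str:
    """Each scope is a list of event records; a record is a straggler if
    it still lacks the requested modification."""
    for scope in db_scopes + engine_scopes:   # database first, then engine
        for event in scope:
            if not event.get("modified", False):
                return "fail"                 # straggler found: mark fail
    return "done"                             # no stragglers in any scope

status = verify(
    request_id=1,
    db_scopes=[[{"modified": True}], [{"modified": True}]],
    engine_scopes=[[{"modified": False}]],
)
```

Checking the cheap scopes first means a straggler left behind by an incomplete modification is usually caught before any expensive wide-scope or search-engine query has to run.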