Patent application number | Description | Published |
20100293353 | TASK QUEUING IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a method of assigning tasks to queues of a processing core. Tasks are assigned to a queue by sending, by a source processing core, a new task having a task identifier. A destination processing core receives the new task and determines whether another task having the same identifier exists in any of the queues corresponding to the destination processing core. If another task with the same identifier as the new task exists, the destination processing core assigns the new task to the queue containing a task with the same identifier as the new task. If no task with the same identifier as the new task exists in the queues, the destination processing core assigns the new task to the queue having the fewest tasks. The source processing core writes the new task to the assigned queue. The destination processing core executes the tasks in its queues. | 11-18-2010 |
20120002546 | MULTICASTING TRAFFIC MANAGER IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a method of processing packets of a network processor. One or more tasks are generated corresponding to received packets associated with one or more data flows. A traffic manager receives a task corresponding to a data flow, the task provided by a processing module of the network processor. The traffic manager determines whether the received task corresponds to a unicast data flow or a multicast data flow. If the received task corresponds to a multicast data flow, the traffic manager determines, based on identifiers corresponding to the task, an address of launch data stored in launch data tables in a shared memory, and reads the launch data. Based on the identifiers and the read launch data, two or more output tasks are generated corresponding to the multicast data flow, and the two or more output tasks are added at the tail end of a scheduling queue. | 01-05-2012 |
20120020210 | BYTE-ACCURATE SCHEDULING IN A NETWORK PROCESSOR - Described embodiments provide for scheduling packets for transmission by a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager enqueues the received task in the associated queue, the queue having a corresponding parent scheduler at each of one or more next levels of the scheduling hierarchy up to the root scheduler. Each scheduler determines one or more tasks to schedule from a given queue based on a default packet size of the packet corresponding to the task. The corresponding packet data is read from a shared memory, and, at each corresponding parent scheduler up to the root scheduler, an actual size of the packet data is updated. Scheduling weights of each corresponding parent scheduler are updated based on the actual size of the packet data. | 01-26-2012 |
20120020223 | PACKET SCHEDULING WITH GUARANTEED MINIMUM RATE IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide rate setting for nodes of a scheduling hierarchy of a network processor. The scheduling hierarchy is a tree structure having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A traffic manager queues received tasks in a queue of the scheduling hierarchy associated with a data flow of the task. The queue has a parent scheduler at each level of the hierarchy up to the root scheduler. A scheduler selects a child node for transmission based on a number of arbitration credits in an arbitration credit bucket of each child. An arbitration credit value is determined for each child by maintaining a time stamp value corresponding to a time value of a previous selection of the child node and determining an elapsed time value based on the time stamp value and a current time value, scaled by a scaling factor. | 01-26-2012 |
20120020249 | PACKET DRAINING FROM A SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for controlling a state of each node in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A traffic manager enqueues received tasks in a queue of the scheduling hierarchy associated with a data flow. The traffic manager maintains scheduling data structures for each node in the scheduling hierarchy. The scheduling data structures include a backpressure indicator and a timer indicator. If the backpressure indicator is set, the traffic manager sets the node as unavailable for scheduling and removes the node from the scheduling hierarchy. If the timer indicator is set, the traffic manager sets the node as unavailable for scheduling. Otherwise, if neither the backpressure indicator nor the timer indicator is set, the traffic manager sets the node as available for scheduling. | 01-26-2012 |
20120020250 | SHARED TASK PARAMETERS IN A SCHEDULER OF A NETWORK PROCESSOR - Described embodiments provide sharing data between nodes in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets, each task having a shared parameter ID. The traffic manager determines the shared parameter ID value of the received task and queues the received task in a queue of the scheduling hierarchy. The queue has a scheduler level M and a parent scheduler at each of M-1 levels in the scheduling hierarchy. The traffic manager determines a shared parameter ID value of the queue. The traffic manager loads, from a shared memory to a corresponding level one cache, one or more shared parameter values corresponding to at least one of the determined shared parameter ID value of the received task and the determined shared parameter ID value of the queue. | 01-26-2012 |
20120020251 | MODULARIZED SCHEDULING ENGINE FOR TRAFFIC MANAGEMENT IN A NETWORK PROCESSOR - Described embodiments provide for scheduling packets for transmission by a network processor. A traffic manager generates a scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A finite state machine (FSM) enqueues the received task in the associated queue. The queue has a corresponding scheduler level M, with a corresponding parent scheduler at each of M−1 levels in the scheduling hierarchy, where M is a positive integer less than or equal to N. Nodes at each of the N scheduling levels exchange messages only with one node at a relative next higher level and with one or more nodes at a relative next lower level. Each node in the scheduling hierarchy updates corresponding statistics and control indicators based on messages received from the node at the next higher level and the one or more nodes at the next lower level. | 01-26-2012 |
20120020366 | PACKET DRAINING FROM A SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for restructuring a scheduling hierarchy of a network processor having a plurality of processing modules and a shared memory. The scheduling hierarchy schedules packets for transmission. The network processor generates tasks corresponding to each received packet associated with a data flow. A traffic manager receives tasks provided by one of the processing modules and determines a queue of the scheduling hierarchy corresponding to the task. The queue has a parent scheduler at each of one or more next levels of the scheduling hierarchy up to a root scheduler, forming a branch of the hierarchy. The traffic manager determines if the queue and one or more of the parent schedulers of the branch should be restructured. If so, the traffic manager drops subsequently received tasks for the branch, drains all tasks of the branch, and removes the corresponding nodes of the branch from the scheduling hierarchy. | 01-26-2012 |
20120020367 | SPECULATIVE TASK READING IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for scheduling packets for transmission by a network processor. The network processor generates tasks corresponding to received packets associated with a data flow. A traffic manager of the network processor receives tasks provided by a processing module of the network processor and generates a tree scheduling hierarchy having one or more scheduling levels. Each received task is queued in a queue of the scheduling hierarchy associated with the received task, the queue having a corresponding parent scheduler in each level of the scheduling hierarchy, forming a branch of the scheduling hierarchy. A parent scheduler selects a child node to transmit a task. A task read module determines a thread corresponding to the selected child node to read corresponding packet data from a shared memory. The traffic manager forms one or more output tasks for transmission based on the packet data corresponding to the thread. | 01-26-2012 |
20120020368 | DYNAMIC UPDATING OF SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for dynamically controlling a scheduling rate of each node in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A traffic manager enqueues received tasks in a queue of the scheduling hierarchy associated with a data flow. The queue has a parent scheduler at each level of the hierarchy up to the root scheduler. The traffic manager maintains one or more scheduling data structures for each node in the scheduling hierarchy. If the traffic manager receives a rate reduction request corresponding to a given node of the scheduling hierarchy, the traffic manager updates one or more indicators in the scheduling data structure corresponding to the given node and removes the given node from the scheduling hierarchy, thereby reducing the scheduling rate of the node. | 01-26-2012 |
20120020369 | SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for dynamically constructing a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager queues the received task in the associated queue, the queue having a corresponding parent scheduler at each of one or more next levels of the scheduling hierarchy up to the root scheduler. A parent scheduler selects, starting at the root scheduler and iteratively repeating at each of the corresponding N scheduling levels until a queue is selected, a child node to transmit at least one task. The traffic manager forms output packets for transmission based on the at least one task from the selected queue. | 01-26-2012 |
20120020370 | ROOT SCHEDULING ALGORITHM IN A NETWORK PROCESSOR - Described embodiments provide for arbitrating between nodes of a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager queues the received task in an associated queue of the scheduling hierarchy. The root scheduler performs smooth deficit weighted round robin (SDWRR) arbitration between each child node of the root scheduler. The SDWRR arbitration includes checking one or more status indicators of each child node of the given scheduler and selecting, based on the status indicators, a first active child node of the scheduler and updating the one or more status indicators corresponding to the selected child node. Thus, a task is scheduled for transmission by the traffic manager every cycle of the network processor. | 01-26-2012 |
20120020371 | MULTITHREADED, SUPERSCALAR SCHEDULING IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments schedule packets for transmission by a network processor. A traffic manager generates a scheduling hierarchy having a root scheduler and N levels. The network processor generates tasks corresponding to received packets. The traffic manager enqueues tasks in an associated queue. The queue has a corresponding level M, with a corresponding parent scheduler at each of M−1 levels in the scheduling hierarchy, where M is less than or equal to N. In a single scheduling cycle, a parent scheduler selects a child node to transmit one or more tasks, and the child node responds whether the scheduling is accepted, and if so, with a number of tasks for scheduling. Starting at the parent scheduler and iteratively repeating at each level until reaching the root scheduler, statistics corresponding to the selected node are updated. Output packets corresponding to the scheduled tasks are transmitted, thereby achieving a superscalar task scheduling throughput. | 01-26-2012 |
20120023498 | LOCAL MESSAGING IN A SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for queuing tasks in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager performs a task enqueue operation for the task. The task enqueue operation includes adding the received task to an associated queue of the scheduling hierarchy, where the queue is associated with a data flow of the received task. The queue has a corresponding scheduler level M, where M is a positive integer less than or equal to N. Starting at the queue and iteratively repeating at each scheduling level until reaching the root scheduler, each node in the scheduling hierarchy maintains an actual count of tasks corresponding to the node. Each node communicates a capped task count to a corresponding parent scheduler at a relative next scheduler level. | 01-26-2012 |
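To make the queue-assignment policy of application 20100293353 above concrete, here is a minimal Python sketch: a new task joins the queue that already holds a task with the same identifier, and otherwise joins the queue with the fewest tasks. The function and variable names are illustrative assumptions, not taken from the patent.

```python
from collections import deque

def assign_task(queues, task_id):
    """Return the queue that should receive the task with task_id."""
    # Prefer a queue that already holds a task with the same identifier,
    # so tasks of the same flow line up behind one another in order.
    for q in queues:
        if task_id in q:
            return q
    # Otherwise balance load: pick the queue with the fewest tasks.
    return min(queues, key=len)

queues = [deque([1, 4]), deque([2]), deque()]
assign_task(queues, 4).append(4)  # joins the queue already holding ID 4
assign_task(queues, 9).append(9)  # no match: joins the shortest (empty) queue
print([list(q) for q in queues])  # [[1, 4, 4], [2], [9]]
```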
Patent application number | Description | Published |
20090158001 | ACCESSING CONTROL AND STATUS REGISTER (CSR) - A system may comprise one or more source agents, target agents, and a plurality of directory agents, which may determine the target agent to which one or more transactions generated by the source agents are to be sent. A controller may identify one of a plurality of directory agents to process the transactions. The directory agent may determine the control and status registers of the target agents to which the transaction is to be sent. The target agent may complete the transaction after receiving the transaction from the directory agent. The directory agents may store a memory map to resolve the target agent to which the transactions are to be sent. The directory-based distributed CSR access may provide scalability to an ever-increasing number of heterogeneous agents in the system. | 06-18-2009 |
20100332801 | Adaptively Handling Remote Atomic Execution - In one embodiment, a method includes receiving an instruction for decoding in a processor core and dynamically handling the instruction with one of multiple behaviors based on whether contention is predicted. If no contention is predicted, the instruction is executed in the core, and if contention is predicted, data associated with the instruction is marshaled and sent to a selected remote agent for execution. Other embodiments are described and claimed. | 12-30-2010 |
20140149651 | Providing Extended Cache Replacement State Information - In an embodiment, a processor includes a decode logic to receive and decode a first memory access instruction to store data in a cache memory with a replacement state indicator of a first level, and to send the decoded first memory access instruction to a control logic. In turn, the control logic is to store the data in a first way of a first set of the cache memory and to store the replacement state indicator of the first level in a metadata field of the first way responsive to the decoded first memory access instruction. Other embodiments are described and claimed. | 05-29-2014 |
20140156896 | ADVANCED PROGRAMMABLE INTERRUPT CONTROLLER IDENTIFIER (APIC ID) ASSIGNMENT FOR A MULTI-CORE PROCESSING UNIT - Following a restart or a reboot of a system that includes a multi-core processor, the multi-core processor may assign each active and eligible core a unique advanced programmable interrupt controller (APIC) identifier (ID). Initialization logic may detect a state of each of the plurality of processing cores as active or inactive. The initialization logic may detect an attribute of each of the plurality of processing cores as eligible to be assigned an APIC ID or as ineligible to be assigned the APIC ID. | 06-05-2014 |
20140164705 | PREFETCH WITH REQUEST FOR OWNERSHIP WITHOUT DATA - A method performed by a processor is described. The method includes executing an instruction. The instruction has an address as an operand. The executing of the instruction includes sending a signal to cache coherence protocol logic of the processor. In response to the signal, the cache coherence protocol logic issues a request for ownership of a cache line at the address. The cache line is not in a cache of the processor. The request for ownership also indicates that the cache line is not to be sent to the processor. | 06-12-2014 |
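The APIC ID assignment of application 20140156896 above reduces to a simple scan: each core that is both active and eligible receives the next sequential identifier. The sketch below is a hedged illustration; the core-record layout and flag names are assumptions.

```python
def assign_apic_ids(cores):
    """cores: list of dicts with 'active' and 'eligible' flags (assumed layout)."""
    next_id = 0
    for core in cores:
        if core["active"] and core["eligible"]:
            core["apic_id"] = next_id  # unique ID used to route interrupts
            next_id += 1
        else:
            core["apic_id"] = None     # inactive or ineligible: no APIC ID

cores = [
    {"active": True,  "eligible": True},
    {"active": False, "eligible": True},  # parked core is skipped
    {"active": True,  "eligible": True},
]
assign_apic_ids(cores)
print([c["apic_id"] for c in cores])  # [0, None, 1]
```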
Patent application number | Description | Published |
20120137278 | GENERATING A CUSTOMIZED SET OF TASKS FOR MIGRATION OF A DEPLOYED SOFTWARE SOLUTION - A migration set list generator specifies a physical topology of a deployed software solution, wherein the software solution comprises a plurality of software components and data associated with said plurality of software components integrated into a single entity. The migration set list generator specifies at least one solution change to the deployed software solution to meet at least one business requirement and specifies at least one migration strategy for migrating the deployed software solution, wherein the at least one migration strategy comprises at least one of a product level strategy specified for a particular software component of the plurality of software components and at least one solution level strategy specified for the plurality of software components, wherein the product level strategy overrides the solution level strategy for the particular software component. The migration set list generator generates a plurality of migration tasks for making the at least one solution change to the deployed software solution specified in the physical topology based on the at least one migration strategy and generates a recommended physical topology yielded for the software solution if the physical topology is updated according to the plurality of migration tasks. | 05-31-2012 |
20130014097 | GENERATING A CUSTOMIZED SET OF TASKS FOR MIGRATION OF A DEPLOYED SOFTWARE SOLUTION - A migration set list generator specifies a physical topology of a deployed software solution, wherein the software solution comprises software components and data associated with the software components integrated into a single entity. The migration set list generator specifies at least one solution change to the deployed software solution to meet at least one business requirement and specifies at least one migration strategy for migrating the deployed software solution. The migration set list generator generates migration tasks for making the at least one solution change to the deployed software solution specified in the physical topology based on the at least one migration strategy and generates a recommended physical topology yielded for the software solution if the physical topology is updated according to the migration tasks. | 01-10-2013 |
20130179560 | Tracking Changes to Data Within Various Data Repositories - A computer system retrieves from data repositories change information indicating changes to entries. Each data repository is associated with a corresponding interface and at least two data repositories are associated with different interfaces, and at least one data repository lacks tracking of changes to entries stored therein. The change information retrieved from the data repository is stored within a storage unit. The stored information includes identification of each repository entry change without storage of the changed entry. Requests are processed to provide change information for entries within the data repositories, wherein processing the change information request for one of the entries includes retrieving from the storage unit the identification of the repository entry change for the one of the entries. Embodiments of the present invention further include a method and computer program product for tracking changes within data repositories in substantially the same manner described above. | 07-11-2013 |
20140007070 | Managing Software Product Lifecycle Across Multiple Operating System Platforms | 01-02-2014 |
20150052510 | GENERATING A CUSTOMIZED SET OF TASKS FOR MIGRATION OF A DEPLOYED SOFTWARE SOLUTION - A migration set list generator specifies a physical topology of a deployed software solution, wherein the software solution comprises software components and data associated with the software components, integrated into a single entity. The migration set list generator specifies at least one solution change to the deployed software solution to meet at least one business requirement and specifies at least one migration strategy for migrating the deployed software solution. The migration set list generator generates migration tasks for making the at least one solution change to the deployed software solution specified in the physical topology based on the at least one migration strategy and generates a recommended physical topology yielded for the software solution if the physical topology is updated according to the migration tasks. | 02-19-2015 |
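The strategy-resolution rule shared by the migration applications above (e.g., 20120137278) is that a product-level strategy, when specified for a particular component, overrides the solution-level strategy. A minimal sketch, assuming a simple dictionary representation of the strategies:

```python
def resolve_strategy(component, product_strategies, solution_strategy):
    """Pick the migration strategy that applies to one software component."""
    # A product-level entry, when present, wins over the solution default.
    return product_strategies.get(component, solution_strategy)

solution_strategy = "in-place-upgrade"                       # solution level
product_strategies = {"database": "side-by-side-migration"}  # product level

for component in ("web-server", "database"):
    print(component, "->",
          resolve_strategy(component, product_strategies, solution_strategy))
# web-server -> in-place-upgrade
# database -> side-by-side-migration
```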
Patent application number | Description | Published |
20110077924 | METHODS AND SYSTEMS FOR MITIGATING DRILLING VIBRATIONS - Methods and systems of reducing drilling vibrations include generating a vibration performance index using at least one frequency-domain model having a velocity-dependent friction relationship. The vibration performance index may be used to aid in the design or manufacture of a drill tool assembly. Additionally or alternatively, the vibration performance index may inform drilling operations to reduce vibrations. | 03-31-2011 |
20110214878 | Methods and Systems For Modeling, Designing, and Conducting Drilling Operations That Consider Vibrations - A method and apparatus associated with the production of hydrocarbons is disclosed. The method, which relates to modeling and operation of drilling equipment, includes constructing one or more surrogates for at least a portion of a bottom hole assembly (BHA) and calculating performance results from each of the one or more surrogates. The calculated results of the modeling may include one or more vibration performance indices that characterize the BHA vibration performance of the surrogates for operating parameters and boundary conditions, which may be substantially the same as conditions to be used, being used, or previously used in drilling operations. The selected BHA surrogate may then be utilized in a well construction operation and thus associated with the production of hydrocarbons. | 09-08-2011 |
20120123757 | Methods to Estimate Downhole Drilling Vibration Indices From Surface Measurement - Method to estimate severity of downhole vibration for a wellbore drill tool assembly, comprising: identifying a dataset comprising selected drill tool assembly parameters; selecting a reference level of downhole vibration index for the drill tool assembly; identifying a surface drilling parameter and calculating a reference surface vibration attribute for the selected reference level of downhole vibration index; determining a surface parameter vibration attribute derived from at least one surface measurement or observation obtained in a drilling operation, the determined surface parameter vibration attribute corresponding to the identified surface drilling parameter; and estimating a downhole vibration index severity indicator by evaluating the determined surface parameter vibration attribute with respect to the identified reference surface vibration attribute. | 05-17-2012 |
20120130693 | Methods to Estimate Downhole Drilling Vibration Amplitude From Surface Measurement - Method to estimate severity of downhole vibration for a drill tool assembly, including: identifying a dataset comprising selected drill tool assembly parameters; selecting a reference level of downhole vibration amplitude for the drill tool assembly; identifying a surface drilling parameter and calculating a reference surface vibration attribute for the selected reference level of downhole vibration amplitude; determining a surface parameter vibration attribute derived from at least one surface measurement or observation obtained in a drilling operation, the determined surface parameter vibration attribute corresponding to the identified surface drilling parameter; and estimating a downhole vibration severity indicator by evaluating the determined surface parameter vibration attribute with respect to the identified reference surface vibration attribute. | 05-24-2012 |
20130213638 | Methods of Using Nano-Particles In Wellbore Operations - Methods for heating a material within a wellbore using nano-particles such as carbon nano-tubes. The material may be a flowable material such as cement, drilling mud, an acidizing fluid, or other material. Generally the methods comprise placing the flowable material in proximity to a radial wall of a wellbore. The methods also include running an energy generator into the wellbore. In one aspect, energizing the nano-particles in the filter cake causes the nano-particles to be activated, and increases a temperature within the flowable material to a temperature that is greater than an initial circulation temperature of the flowable material. Activating the energy generator may also assist in curing the flowable material in situ. | 08-22-2013 |
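Applications 20120123757 and 20120130693 above both estimate downhole vibration severity by evaluating a measured surface vibration attribute against a reference attribute computed for a chosen reference vibration level. The sketch below treats that evaluation as a simple ratio, which is an assumption; the abstracts do not specify the functional form.

```python
def downhole_severity_indicator(measured_attr, reference_attr,
                                reference_level=1.0):
    """Scale the reference vibration level by measured/reference (assumed form)."""
    return reference_level * (measured_attr / reference_attr)

# A surface torque fluctuation of 6 kN*m against a 4 kN*m reference that
# was calculated for a downhole vibration index of 1.0:
print(downhole_severity_indicator(6.0, 4.0))  # 1.5 -> above the reference level
```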
Patent application number | Description | Published |
20130097369 | APPARATUS, SYSTEM, AND METHOD FOR AUTO-COMMIT MEMORY MANAGEMENT - An apparatus, system, and method are disclosed for auto-commit memory management. The method includes receiving an auto-commit request from a client, such as a barrier request or a checkpoint request. The auto-commit request is associated with an auto-commit buffer of a non-volatile recording device. The method includes issuing a serializing instruction that flushes data from a processor complex to the auto-commit buffer. The method includes determining completion of the serializing instruction flushing the data to the auto-commit buffer. | 04-18-2013 |
20130185475 | SYSTEMS AND METHODS FOR CACHE PROFILING - A cache module leverages a logical address space and storage metadata of a storage module (e.g., virtual storage module) to cache data of a backing store. The cache module maintains access metadata to track access characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not currently in the cache. The access metadata may be separate from the storage metadata maintained by the storage module. The cache module may calculate a performance metric of the cache based on profiling metadata, which may include portions of the access metadata. The cache module may determine predictive performance metrics of different cache configurations. An optimal cache configuration may be identified based on the predictive performance metrics. | 07-18-2013 |
20130185488 | SYSTEMS AND METHODS FOR COOPERATIVE CACHE MANAGEMENT - A cache module leverages storage metadata to cache data of a backing store on a non-volatile storage device. The cache module maintains access metadata pertaining to access characteristics of logical identifiers in the logical address space, including access characteristics of un-cached logical identifiers (e.g., logical identifiers associated with data that is not stored on the non-volatile storage device). The access metadata may be separate and/or distinct from the storage metadata. The cache module determines whether to admit data into the cache and/or evict data from the cache using the access metadata. A storage module may provide eviction candidates to the cache module. The cache module may select candidates for eviction. The storage module may leverage the eviction candidates to improve the performance of storage recovery and/or grooming operations. | 07-18-2013 |
20130185508 | SYSTEMS AND METHODS FOR MANAGING CACHE ADMISSION - A cache layer leverages a logical address space and storage metadata of a storage layer (e.g., virtual storage layer) to cache data of a backing store. The cache layer maintains access metadata to track data characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not in the cache. The access metadata may be separate and distinct from the storage metadata maintained by the storage layer. The cache layer determines whether to admit data into the cache using the access metadata. Data may be admitted into the cache when the data satisfies cache admission criteria, which may include an access threshold and/or a sequentiality metric. Time-ordered history of the access metadata is used to identify important/useful blocks in the logical address space of the backing store that would be beneficial to cache. | 07-18-2013 |
20130212321 | Apparatus, System, and Method for Auto-Commit Memory Management - Apparatuses, systems, methods, and computer program products are disclosed. A method includes receiving a request to copy data from a first location to a second location. The data may be associated with an identifier known to a client that initiated the request. One of the locations may include an auto-commit buffer of a non-volatile device. An auto-commit buffer may be configured to commit stored data from the auto-commit buffer to a non-volatile medium of a non-volatile device in response to a restart event. A method includes copying the data from the first location to the second location. A method includes preserving the identifier known to the client and an association between the identifier and a location of the data at the second location such that the client can retrieve the data based on the identifier known to the client. | 08-15-2013 |
20130275391 | Data Expiry in a Non-Volatile Device - Apparatuses, systems, and methods are disclosed for data expiry. A method includes examining metadata associated with data in a non-volatile recording medium. A method includes expiring data from a non-volatile recording medium in response to metadata indicating that an expiration period for the data has been satisfied. | 10-17-2013 |
20130275656 | APPARATUS, SYSTEM, AND METHOD FOR KEY-VALUE POOL IDENTIFIER ENCODING - Apparatuses, systems, and methods are disclosed for a key-value store. A method includes encoding a key of a key-value pair into a logical address of a sparse logical address space for a non-volatile medium. A method includes mapping a logical address to a physical location in the non-volatile medium. A method includes storing a value of a key-value pair at a physical location. | 10-17-2013 |
20130332660 | Hybrid Checkpointed Memory - Apparatuses, systems, methods, and computer program products are disclosed for hybrid checkpointed memory. A method includes referencing data of a range of virtual memory of a host. The referenced data is already stored by a non-volatile medium. A method includes writing, to a non-volatile medium, data of a range of virtual memory that is not stored by the non-volatile medium. A method includes providing access to data of a range of virtual memory from a non-volatile medium using a persistent identifier associated with referenced data and written data. | 12-12-2013 |
20140089264 | SNAPSHOTS FOR A NON-VOLATILE DEVICE - Apparatuses, systems, and methods are disclosed for snapshots of a non-volatile device. A method includes writing data in a sequential log structure for a non-volatile device. A method includes marking a point, in a sequential log structure, for a snapshot of data. A method includes preserving a logical-to-physical mapping for a snapshot based on a marked point and a temporal order for data in a sequential log structure. | 03-27-2014 |
20140089265 | Time Sequence Data Management - An apparatus, system, and method are disclosed for data management. The method includes writing data in a sequential log structure. The method also includes receiving a time sequence request from a client. The method further includes servicing the time sequence request based on a temporal order of the data in the sequential log structure. | 03-27-2014 |
20140156965 | ADVANCED GROOMER FOR STORAGE ARRAY - Techniques are disclosed relating to reclaiming data on recording media. In one embodiment, an apparatus has a solid-state memory array including a plurality of blocks. The solid-state memory array may implement a cache for one or more storage devices. Respective operational effects are determined relating to reclaiming ones of the plurality of blocks. One of the plurality of blocks is selected as a candidate for reclamation based on the determined operational effects, and the selected block is reclaimed. In some embodiments, the determined operational effects for a given block indicate a number of write operations to be performed to reclaim the given block. In some embodiments, operational effects are determined based on criteria relating to assigned quality-of-service levels. In some embodiments, operational effects are determined based on information relating to virtual storage units. | 06-05-2014 |
20140325115 | Conditional Iteration for a Non-Volatile Device - Apparatuses, systems, methods, and computer program products are disclosed for conditional iteration. A method includes receiving a request comprising a condition. A method includes checking an address mapping structure for entries satisfying a condition for a request. A method includes providing a result for a request based on one or more entries satisfying a condition for a request. | 10-30-2014 |
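The data-expiry check of application 20130275391 above can be sketched as a scan over per-block metadata, expiring any block whose expiration period has elapsed. Field names and the epoch-seconds clock in this Python sketch are illustrative assumptions:

```python
import time

def expired_keys(metadata, now=None):
    """Yield keys whose write time plus expiry period has passed."""
    now = time.time() if now is None else now
    for key, meta in metadata.items():
        if now - meta["written_at"] >= meta["expiry_period"]:
            yield key  # candidate for reclamation during grooming

metadata = {
    "blk-1": {"written_at": 1000.0, "expiry_period": 60.0},
    "blk-2": {"written_at": 1050.0, "expiry_period": 60.0},
}
print(list(expired_keys(metadata, now=1070.0)))  # ['blk-1']
```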
Patent application number | Description | Published |
20140095775 | SYSTEMS AND METHODS FOR CACHE ENDURANCE - A cache and/or storage module may be configured to reduce write amplification in a cache storage. Cache layer write amplification (CLWA) may occur due to an over-permissive admission policy. The cache module may be configured to reduce CLWA by configuring admission policies to avoid unnecessary writes. Admission policies may be predicated on access and/or sequentiality metrics. Flash layer write amplification (FLWA) may arise due to the write-once properties of the storage medium. FLWA may be reduced by delegating cache eviction functionality to the underlying storage layer. The cache and storage layers may be configured to communicate coordination information, which may be leveraged to improve the performance of cache and/or storage operations. | 04-03-2014 |
20140195480 | PERSISTENT MEMORY MANAGEMENT - Apparatuses, systems, methods, and computer program products are disclosed for persistent memory management. Persistent memory management may include providing a persistent data structure stored at least partially in volatile memory configured to ensure persistence of the data structure in a non-volatile memory medium. Persistent memory management may include replicating a persistent data structure in volatile memory buffers of at least two non-volatile storage devices. Persistent memory management may include preserving a snapshot copy of data in association with completion of a barrier operation for the data. Persistent memory management may include determining which interface of a plurality of supported interfaces is to be used to flush data from a processor complex. | 07-10-2014 |
20140195564 | PERSISTENT DATA STRUCTURES - Apparatuses, systems, methods, and computer program products are disclosed for a persistent data structure. A method includes associating a logical identifier with a data structure. A method includes writing data of a data structure to a first region of a volatile memory module. A volatile memory module may be configured to ensure that data is preserved in response to a trigger. A method includes copying data of a data structure from a volatile memory module to a non-volatile storage medium such that the data of the data structure remains associated with a logical identifier. | 07-10-2014 |
20140281260 | ESTIMATING ACCESS FREQUENCY STATISTICS FOR STORAGE DEVICE - Techniques are disclosed relating to determining statistics associated with the storage of data on a medium. In one embodiment, a computing system maintains a management statistic for a storage device, and uses the management statistic as a proxy for a workload statistic for a storage block within the storage device. In some embodiments, the storage block is a first storage block included within a second storage block of the storage device. In one embodiment, the management statistic is a timestamp indicative of when a write operation was performed for the second storage block; the workload statistic is a write frequency of the first storage block. In one embodiment, the management statistic is a number of read operations performed for the second storage block; the using includes deriving, based on the number of read operations, a read frequency for the first storage block as the workload statistic. | 09-18-2014 |
20140281307 | HANDLING SNAPSHOT INFORMATION FOR A STORAGE DEVICE - Techniques are disclosed relating to handling snapshot data for a storage device. In one embodiment, a computing system maintains information that indicates the state of data associated with an application at a particular point in time. In this embodiment, the computing system assigns an epoch number to a current epoch, where the current epoch is an interval between the particular point in time and a future point in time. In this embodiment, the computing system writes, during the current epoch, a block of data to the storage device. In this embodiment, the writing the block of data includes storing the epoch number with the block of data. | 09-18-2014 |
20140310499 | SYSTEMS, METHODS AND INTERFACES FOR DATA VIRTUALIZATION - A data services module performs log storage operations in response to requests by storing data on one or more storage devices, and appending information pertaining to the requests to a separate metadata log. A log order of the metadata log may correspond to an order in which the requests were received, regardless of the order in which data of the requests are written to the storage devices. The requests may correspond to identifiers of a logical address space. The data services module implements an any-to-any translation layer configured to map identifiers of the logical address space to the stored data. The virtualization module may include a metadata management module configured to checkpoint the translation layer metadata by, inter alia, appending aggregate, checkpoint entries to the metadata log. The data services module may leverage the translation layer between the logical identifiers and underlying storage locations to efficiently implement logical manipulation operations. | 10-16-2014 |
20150039577 | SYSTEMS AND METHODS FOR ATOMIC STORAGE OPERATIONS - An atomic storage module may be configured to implement an atomic storage operation directed to a first set of identifiers in reference to a second, different set of identifiers. In response to completing the atomic storage operation, the atomic storage module may move the corresponding data to the first, target set of identifiers. The move operation may comprise modifying a logical interface of the data. The move operation may further include storing persistent metadata configured to bind the data to the first set of identifiers. | 02-05-2015 |
20150113326 | SYSTEMS AND METHODS FOR DISTRIBUTED ATOMIC STORAGE OPERATIONS - An aggregation module combines a plurality of logical address spaces to form a conglomerated address space. The logical address spaces comprising the conglomerated address space may correspond to different respective storage modules and/or storage devices. An atomic aggregation module coordinates atomic storage operations within the conglomerated address space, and which span multiple storage modules. The aggregation module may identify the storage modules used to implement the atomic storage request, assign a sequence indicator to the atomic storage request, and issue atomic storage requests (sub-requests) to the storage modules. The storage modules may be configured to store a completion tag comprising the sequence indicator upon completing the sub-requests issued thereto. The aggregation module may identify incomplete atomic storage requests based on the completion information stored on the storage modules. | 04-23-2015 |
20150134926 | SYSTEMS AND METHODS FOR LOG COORDINATION - A storage module may be configured to perform log storage operations on a storage log maintained on a non-volatile storage medium. An I/O client may utilize storage services of the storage module to maintain an upper-level log. The storage module may be configured to coordinate log storage and/or management operations between the storage log and the upper-level log. The coordination may include adapting a segment size of the logs to reduce write amplification. The coordination may further include coordinating validity information between log layers, adapting log grooming operations to reduce storage recovery overhead, defragmenting upper-level log data within the storage address space, preventing fragmentation of upper-level log data, and so on. The storage module may coordinate log operations by use of log coordination messages communicated between log layers. | 05-14-2015 |
20160070652 | GENERALIZED STORAGE VIRTUALIZATION INTERFACE - A storage system implements a sparse, thinly provisioned logical-to-physical translation layer. The storage system may perform operations to modify logical-to-physical mappings, including creating, removing, and/or modifying any-to-any and/or many-to-one mappings between logical identifiers and stored data (logical manipulation operations). The storage system records persistent metadata to render the logical manipulation (LM) operations persistent and crash-safe. The storage system may provide access to LM functionality through a generalized LM interface. Clients may leverage the LM interface to efficiently implement higher-level functionality and/or offload LM operations to the storage system. | 03-10-2016 |
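The epoch tagging of application 20140281307 above stores the current epoch number alongside each block written during that epoch, so recovery code can order writes relative to snapshot points. A minimal sketch, with an assumed in-memory layout:

```python
class SnapshotStore:
    def __init__(self):
        self.epoch = 0    # number of the current epoch
        self.blocks = {}  # address -> (epoch, data)

    def take_snapshot(self):
        """Close the current epoch; later writes belong to a new one."""
        self.epoch += 1

    def write(self, address, data):
        # The epoch number is stored with the block, so recovery can tell
        # which writes happened before or after each snapshot point.
        self.blocks[address] = (self.epoch, data)

store = SnapshotStore()
store.write(0, b"old")
store.take_snapshot()
store.write(1, b"new")
print(store.blocks)  # {0: (0, b'old'), 1: (1, b'new')}
```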
Patent application number | Description | Published |
20130290945 | SYSTEM AND METHOD FOR PERFORMING AN IN-SERVICE SOFTWARE UPGRADE IN NON-REDUNDANT SYSTEMS - An information handling system is provided. The information handling system includes one or more devices coupled together to route information between the one or more devices and other devices coupled thereto based on routing information stored in the one or more devices. The one or more devices includes a routing processor, one or more line cards coupled to the routing processor, the one or more line cards receiving the routing information from the routing processor for routing data packets to a destination, and a memory coupled to the routing processor. The routing processor is configured to create an active image having a current state of the routing information and create a standby image having the current state of the routing information, wherein the standby image requests the current state of the routing information from the active image using a key that is calculated using a portion of the routing information. | 10-31-2013 |
20140119371 | SYSTEMS AND METHODS FOR STACKING FIBRE CHANNEL SWITCHES WITH FIBRE CHANNEL OVER ETHERNET STACKING LINKS - An information handling system is provided. The information handling system includes systems and methods for expanding the port count in a single Fibre Channel domain by adding modular Fibre Channel switches. Such a system includes a system enclosure that contains a plurality of Fibre Channel modules configured to send and receive Fibre Channel packets, the Fibre Channel modules providing a plurality of Fibre Channel ports and a switch processor coupled to the plurality of Fibre Channel ports and to a plurality of Ethernet ports. The switch processor is configured to apply a stacking header to Fibre Channel packets for transmission from one of the plurality of Ethernet ports over a stacking link to another switch processor in another system enclosure. | 05-01-2014 |
20140359185 | SYSTEMS AND METHODS FOR ADAPTIVE INTERRUPT COALESCING IN A CONVERGED NETWORK - An information handling system is provided. The information handling system includes an information handling device having one or more processors in communication with a network interface card. The network interface card includes one or more interfaces for receiving frames when the information handling device is coupled to an external network device. The device also includes a memory that is in communication with the one or more processors and stores a classification matrix. The classification matrix is used to generate a current interrupt throttling rate from a plurality of candidate interrupt throttling rates that are applied to the received frames according to at least two properties of each frame of the received frames. A method for providing adaptive interrupt coalescing is also provided. | 12-04-2014 |
20150106651 | SYSTEM AND METHOD FOR PERFORMING AN IN-SERVICE SOFTWARE UPGRADE IN NON-REDUNDANT SYSTEMS - An information handling system is provided. The information handling system includes one or more devices coupled together to route information between the one or more devices and other devices coupled thereto based on routing information stored in the one or more devices. The one or more devices includes a routing processor, one or more line cards coupled to the routing processor, the one or more line cards receiving the routing information from the routing processor for routing data packets to a destination, and a memory coupled to the routing processor. The routing processor is configured to create an active image having a current state of the routing information and create a standby image having the current state of the routing information, wherein the standby image requests the current state of the routing information from the active image using a key that is calculated using a portion of the routing information. | 04-16-2015 |
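The adaptive interrupt coalescing of application 20140359185 above selects a throttling rate from a matrix indexed by at least two properties of each received frame. In the hedged sketch below, the chosen properties (traffic class and frame size) and the rate values are assumptions for illustration:

```python
# Rows: traffic class (0 = storage, 1 = LAN); columns: frame-size bucket.
# Values: how long interrupts are held back, in nanoseconds (assumed).
THROTTLE_MATRIX = [
    [0, 20_000],        # storage traffic: little or no coalescing
    [50_000, 100_000],  # LAN traffic tolerates more coalescing
]

def current_throttle_rate(traffic_class, frame_len):
    """Select an interrupt throttling rate for one received frame."""
    size_bucket = 0 if frame_len < 512 else 1
    return THROTTLE_MATRIX[traffic_class][size_bucket]

print(current_throttle_rate(0, 128))   # 0      -> latency-sensitive storage
print(current_throttle_rate(1, 1500))  # 100000 -> bulk LAN traffic
```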