Patent application number | Description | Published |
20080222076 | Enabling Instant Productivity Functionality on Information Handling Systems - A method for enabling instant on access of productivity content on an information handling system. The method includes providing the information handling system with a first operating system partition and a second operating system partition wherein the first operating system partition is a main operating system partition and the second operating system partition is an instant on operating system partition, storing a synchronization management module within the first operating system partition, storing a productivity module within the second operating system partition, synchronizing productivity content stored on the first operating system partition with the productivity module via the synchronization management module, and accessing the synchronized productivity content via the productivity module of the second operating system partition. | 09-11-2008 |
20100306418 | Methods and System for Configuring a Peripheral Device with an Information Handling System - A method for configuring a peripheral device in communication with an information handling system (IHS) is disclosed, wherein the method includes receiving visual data associated with the peripheral device and mapping configuration data to the peripheral device based on the visual data. The method further includes utilizing the configuration data to configure the peripheral device in communication with the IHS. An information handling system (IHS) in communication with an image capturing device is further disclosed including a storage device operable to store a database, the database configured to store a standard image of a peripheral device, wherein the standard image is associated with configuration data for the peripheral device. The system further includes a memory coupled to the storage device and a processor to receive visual data associated with the peripheral device from the image capturing device. The processor is operable to execute a software application configured to match the visual data with the standard image to configure the peripheral device based on the configuration data. | 12-02-2010 |
20110178886 | System and Method for Manufacturing and Personalizing Computing Devices - A system, method, and computer-readable medium are disclosed for separating the installation of an operating system from the fulfillment, installation, and entitlement of other digital assets. Information associated with the purchase of a system and digital assets to be processed by the system is received, including the system's unique system identifier. The unique system identifier is associated with the digital assets to generate digital assets entitlement data. A personalization agent installed on the system determines the system's unique system identifier and automatically downloads the purchased digital assets, which comprises an operating system (OS), and their associated digital assets entitlement data. Once downloaded, the personalization agent uses the digital assets entitlement data to first install the OS according to any restrictions imposed by the OS manufacturer, and then the remaining digital assets, on the system, which is then entitled to process the installed digital assets. | 07-21-2011 |
20110178887 | System and Method for Separation of Software Purchase from Fulfillment - A system, method, and computer-readable medium are disclosed for separating the purchase of digital assets from their fulfillment. Information associated with the purchase of a system and digital assets to be processed by the system is received, including the system's unique system identifier. The unique system identifier is associated with the digital assets to generate digital assets entitlement data. A personalization agent installed on the system determines the system's unique system identifier and automatically downloads the purchased digital assets and their associated digital assets entitlement data. Once downloaded, the personalization agent uses the digital assets entitlement data to install the purchased digital assets on the system, thereby entitling the system to process the installed digital assets. | 07-21-2011 |
20110178888 | System and Method for Entitling Digital Assets - A system, method, and computer-readable medium are disclosed for managing the entitlement of digital assets. System identifier data associated with a target system, including its unique system identifier, is received, along with digital assets selection data corresponding to digital assets data to be processed by the target system. The system identifier data is processed with the digital assets selection data to generate digital assets entitlement data. The digital assets data and the digital assets entitlement data is then provided to a personalization agent associated with the target system. In turn, the personalization agent processes the digital assets entitlement data and the digital assets data for installation on the target system, thereby entitling the system to process the installed digital assets data. | 07-21-2011 |
20110191476 | System and Method for Migration of Digital Assets - A system, method, and computer-readable medium are disclosed for automatically migrating entitled digital assets from a source system to a target system. A first personalization agent is installed on the target system. A first set of digital assets entitlement data is provided along with an associated first set of digital assets data, which is then installed on the target system by the first personalization agent. A second set of digital assets entitlement data associated with a second set of digital assets data installed on a source system is determined by a second personalization agent. The second set of digital assets entitlement data is disassociated from the second set of system identifier data and then associated with the first set of system identifier data. The second set of digital assets is then installed on the target system by the first personalization agent. | 08-04-2011 |
20110191765 | System and Method for Self-Provisioning of Virtual Images - A system, method, and computer-readable medium are disclosed for automatically provisioning a virtual image on a target system. A service operating system comprising a virtual machine monitor and a personalization agent is installed on a target system. A set of digital assets entitlement data is provided along with an associated set of digital assets data contained in a virtual software image, which is then installed on the target system by the personalization agent. | 08-04-2011 |
20110191863 | System and Method for Identifying Systems and Replacing Components - A system, method, and computer-readable medium are disclosed for managing a system's entitlement to digital assets when the system's components are replaced. A unique system identifier, comprising the unique identifiers of predetermined system components, is associated with digital assets data to generate digital assets entitlement data, which in turn entitles the system to process the digital assets data. The digital assets entitlement is perpetuated when a first unique system component identifier is replaced with a second unique system component identifier. | 08-04-2011 |
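The entitlement-perpetuation idea in the abstract above can be sketched briefly. Purely as an illustration (the patent does not specify the construction), assume the unique system identifier is a digest over the sorted component identifiers, and that an entitlement record re-derives that digest when a component is swapped:

```python
import hashlib


def system_identifier(component_ids):
    """Derive a unique system identifier from sorted component identifiers."""
    return hashlib.sha256("|".join(sorted(component_ids)).encode()).hexdigest()


class EntitlementRecord:
    """Binds a digital asset to a system identifier built from components."""

    def __init__(self, asset_id, component_ids):
        self.asset_id = asset_id
        self.component_ids = list(component_ids)
        self.system_id = system_identifier(self.component_ids)

    def replace_component(self, old_id, new_id):
        """Perpetuate the entitlement when one component identifier changes."""
        self.component_ids = [new_id if c == old_id else c
                              for c in self.component_ids]
        self.system_id = system_identifier(self.component_ids)

    def is_entitled(self, component_ids):
        """True if the presented components reproduce the bound identifier."""
        return system_identifier(component_ids) == self.system_id
```

In this toy model, replacing a disk re-binds the record so the new component set remains entitled while the old set no longer matches.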
20110231281 | System and Method for Handling Software Activation in Entitlement - A system, method, and computer-readable medium are disclosed for separating the purchase of digital assets from their fulfillment and activation. Digital assets purchase information comprising digital assets identifier information and activation key data, and system identifier information comprising system identifier data, is received. The purchase information and the system identifier information are processed to generate digital assets activation request data, which is then processed by the provider of the digital assets to generate digital assets activation data. Associated digital assets data is provided with the digital assets activation data and then processed with the purchase transaction data to generate digital assets entitlement data. A personalization agent associated with a target system automatically downloads the purchased digital assets and associated digital assets entitlement data, which is used to install the digital assets, thereby entitling the system to process the installed digital assets. | 09-22-2011 |
20110289350 | Restoration of an Image Backup Using Information on Other Information Handling Systems - A backup and restoration process which first attempts to recover information blocks from locally connected information handling systems executing a backup/restore service before looking to the slower access cloud store to recover data blocks. | 11-24-2011 |
20120209736 | System and Method for Handling Software Activation in Entitlement - A system, method, and computer-readable medium are disclosed for separating the purchase of digital assets from their fulfillment and activation. Digital assets purchase information comprising digital assets identifier information and activation key data, and system identifier information comprising system identifier data, is received. The purchase information and the system identifier information are processed to generate digital assets activation request data, which is then processed by the provider of the digital assets to generate digital assets activation data. Associated digital assets data is provided with the digital assets activation data and then processed with the purchase transaction data to generate digital assets entitlement data. A personalization agent associated with a target system automatically downloads the purchased digital assets and associated digital assets entitlement data, which is used to install the digital assets, thereby entitling the system to process the installed digital assets. | 08-16-2012 |
20120233706 | System and Method for Secure Licensing for an Information Handling System - Systems and methods for reducing problems and disadvantages associated with traditional approaches to secure licensing for an information handling system are provided. In accordance with additional embodiments of the present disclosure, a method may include: (i) booting an information handling system to an operating system stored on a memory of a secure licensing device coupled to a port of the information handling system; (ii) establishing a secure wireless network connection between the secure licensing device and a licensing server; (iii) retrieving information regarding one or more hardware components of the information handling system; (iv) retrieving a license key for a software program associated with the information handling system from the licensing server; (v) generating a unique marker binding the license key to the one or more hardware components; and (vi) storing the unique marker on the information handling system. | 09-13-2012 |
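The "unique marker" in step (v) above is left open by the abstract; one plausible construction, shown here only as a sketch, is a keyed digest over the license key and the hardware component identifiers, with the key material standing in for a secret held by the secure licensing device:

```python
import hashlib
import hmac


def generate_marker(license_key, hardware_ids, secret=b"device-secret"):
    """Bind a license key to specific hardware via a keyed digest.

    `secret` is a placeholder for key material the secure licensing
    device would hold; it is an assumption, not part of the abstract."""
    payload = (license_key + "|" + "|".join(sorted(hardware_ids))).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()


def verify_marker(marker, license_key, hardware_ids, secret=b"device-secret"):
    """Recompute the marker from current hardware; a mismatch indicates
    the license key is being presented on different components."""
    return hmac.compare_digest(
        marker, generate_marker(license_key, hardware_ids, secret))
```

Verification is order-insensitive because the component identifiers are sorted before hashing, so enumerating hardware in a different order does not invalidate the marker.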
20130151583 | Automatic And Dynamic Information Handling System Personalization - Information handling systems personalized by addition of a physical component, such as a lid having an emblem that attaches to a portable information handling system, have software associated with the physical component automatically applied by interacting with an entitlement network location. An identifier associated with the physical component is automatically read by the information handling system and sent to the entitlement network location to retrieve entitlements for the use of software at the information handling system. | 06-13-2013 |
20130326634 | Simple Product Purchase for Multisystem Accounts - A system and method are disclosed for managing the assignment of digital goods licenses. A user selects digital goods to be assigned to a group of target systems, followed by the retrieval of digital goods entitlement records for each system in the group. Systems that are not entitled to the selected digital goods are removed, along with any systems that already have the digital goods installed. If an insufficient number of licenses are available for the remaining systems in the group, then the number of required licenses is determined, followed by their procurement. The available and procured digital goods licenses are then respectively assigned to each system in the group. | 12-05-2013 |
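The filtering-and-procurement workflow described in the abstract above maps naturally onto simple set logic. A minimal sketch, with hypothetical inputs (the abstract does not define the data shapes):

```python
def assign_licenses(targets, entitled, installed, available_licenses):
    """Filter a target group and assign licenses, procuring any shortfall.

    targets: system ids the user selected
    entitled: system ids whose records permit the digital goods
    installed: system ids that already have the goods installed
    available_licenses: count of licenses currently on hand

    Returns (assignments, procured): the systems receiving a license,
    and how many additional licenses had to be procured."""
    # Remove systems that are not entitled or already have the goods.
    remaining = [s for s in targets if s in entitled and s not in installed]
    # Procure only what is needed beyond the available pool.
    procured = max(0, len(remaining) - available_licenses)
    assignments = {s: "license" for s in remaining}
    return assignments, procured
```

With four selected systems of which one is unentitled and one already has the goods, two licenses are needed; if only one is on hand, exactly one more is procured.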
20130339501 | Automated Digital Migration - A system, method, and computer-readable medium are disclosed for performing automated, peer-to-peer migrations of entitled digital assets. A first identifier corresponding to a source system, and a first set of entitlement data corresponding to a set of digital assets installed on the source system, are processed to generate a first set of entitlements entitling the source system to use the set of digital assets. The first identifier is then cross-referenced to a second identifier corresponding to a target system. A migration request and the second identifier are received from the target system, which are then processed to initiate the migration of the digital assets from the source system to the target system. The second identifier and the first set of entitlement data are subsequently processed to generate a second set of digital asset entitlements entitling the target system to use the set of digital assets. | 12-19-2013 |
20140019584 | Acceleration of Cloud-Based Migration/Backup Through Pre-Population - A system, method, and computer-readable medium are disclosed for performing automated, cloud-based migrations of entitled digital assets. A set of entitlement data corresponding to a set of digital assets installed on a source system is processed with a set of digital asset source data to generate an equivalent set of digital assets. A first identifier associated with the source system is then cross-referenced to a second identifier associated with a target system. The second identifier and the set of entitlement data are processed to generate a second set of digital asset entitlements entitling the target system to use the set of equivalent digital assets. A migration request and the second identifier are then processed to provide the set of equivalent digital assets to the target system. | 01-16-2014 |
20140020105 | Distributing Software Images with Mixed Licensing - A system, method, and computer-readable medium are disclosed for managing the licensing of digital assets associated with a system image. A system image is processed to identify digital asset identification information, which in turn is processed to determine whether its associated digital asset is available for licensing from a system manufacturer. If the digital asset could not be identified, or if it is unavailable for licensing from the system manufacturer, then it is marked as “custom.” Otherwise, it is marked as “available” and presented to a system purchaser. Any digital assets that the system purchaser elects to license are marked as “license” and all other digital assets are marked as “custom.” Digital assets marked as “license” are then licensed from the system manufacturer. Both licensed and “custom” digital assets are installed and their corresponding licenses are applied to the system image, which in turn is applied to the target system. | 01-16-2014 |
20140047101 | Method for Personalized Shopping Recommendations - A system, method, and computer-readable medium are disclosed for providing personalized recommendations based upon a user's system profile and usage. A personalized recommendation system receives a first set of input data and a second set of input data, the first set of input data comprising traditional recommendation input data and the second set of input data comprising recommendation input data associated with the profile and usage of a user's system. The first and second sets of input data are then processed to generate and provide a personalized recommendation. | 02-13-2014 |
20140047133 | Method and System for Late Binding of Features - A system, method, and computer-readable medium are disclosed for entitling the implementation of a feature associated with a device after it is manufactured. A feature entitlement management system receives a device's unique identifier, which is then processed to determine which features associated with the device are available for implementation. Once determined, the available features are provided to the user of the device, who in turn selects a feature for implementation. A feature entitlement is then generated by performing late binding entitlement operations to associate the selected feature's corresponding entitlement data with the device's unique identifier. The resulting feature entitlement is then processed to implement the selected feature. | 02-13-2014 |
20140059236 | Process for Peer-To-Peer Download of Software Installer - A system, method, and computer-readable medium are disclosed for performing automated, peer-to-peer migrations of entitled digital assets. A first identifier corresponding to a source system, and a first set of entitlement data corresponding to a set of digital assets installed on the source system, are processed to generate a first set of entitlements entitling the source system to use the set of digital assets. The first identifier is then cross-referenced to a second identifier corresponding to a target system. A migration request and the second identifier are received from the target system, which are then processed to initiate the migration of the digital assets from the source system to the target system. The second identifier and the first set of entitlement data are subsequently processed to generate a second set of digital asset entitlements entitling the target system to use the set of digital assets. | 02-27-2014 |
20140068481 | Rich User Experience in Purchasing and Assignment - A system, method, and computer-readable medium are disclosed for assisting a user in performing a drag-and-drop operation within a graphical user interface (GUI). Drag-and-drop assistance operations are initiated by the selection of a source graphical object within a GUI. Association data corresponding to the selected source graphical object is processed to identify associated target graphical objects. Once the associated target graphical objects have been identified, visual indication data is processed to generate a visual cue, which is then displayed within the GUI and used to assist the user in performing a drag-and-drop operation. | 03-06-2014 |
20140108098 | SYSTEM AND METHOD FOR OPTIMIZING ENTITLEMENTS OF DIGITAL ASSETS - In accordance with embodiments of the present disclosure, an information handling system for managing the entitlement of digital assets may include a storage medium and a processor. The processor may be configured to receive digital asset usage information regarding usage of a digital asset within an enterprise. The processor may also be configured to receive entitlement information regarding existing entitlements for usage of the digital asset within the enterprise. The processor may further be configured to receive available entitlement information regarding entitlements other than the existing entitlements that may be acquired for usage of the digital asset within the enterprise. The processor may additionally be configured to determine, based on a comparison of the digital asset usage information to the available entitlement information, whether acquisition of entitlements other than the existing entitlements is more cost efficient. | 04-17-2014 |
20140108588 | System and Method for Migration of Digital Assets Leveraging Data Protection - In accordance with embodiments of the present disclosure, an information handling system may include a storage medium and a processor. The storage medium may be configured to store data comprising backup data associated with a source system. The processor may be configured to migrate the data from the storage medium to a target system. The processor may further be configured to during migration of the data from the storage medium to the target system, receive additional data comprising additional backup data associated with the source system and store the additional data to the storage medium. The processor may also be configured to migrate the additional data to the target system. | 04-17-2014 |
20140108593 | System and Method for Migration of Digital Assets - In accordance with embodiments of the present disclosure, an information handling system for migrating digital assets may include a storage medium and a processor. The storage medium may be configured to store information regarding digital assets to be migrated from a source system to a target system. The processor may be configured to inventory digital assets present on the source system, including inventorying information regarding each digital asset. The processor may further be configured to assign each digital asset a priority based on the information regarding each digital asset. The processor may also be configured to transfer digital assets to the target system based on the priorities of the digital assets. | 04-17-2014 |
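The inventory-then-prioritize flow in the abstract above can be sketched in a few lines. The scoring below is an illustrative heuristic only (recently used, smaller assets first); the abstract leaves the priority criteria open, and the field names are hypothetical:

```python
def plan_migration(assets):
    """Order digital assets for transfer by an assumed priority heuristic.

    assets: list of dicts with hypothetical fields 'name',
    'last_used_days', and 'size_mb'. Lower last_used_days (recently
    used) wins first, then smaller size as a tiebreaker."""
    return sorted(assets, key=lambda a: (a["last_used_days"], a["size_mb"]))
```

Given an inventory of mail (used today), documents (two days ago), and a large photo archive (a month ago), the sketch transfers mail first and the archive last.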
20140108616 | SYSTEM AND METHOD FOR ENTITLING DIGITAL ASSETS - In accordance with embodiments of the present disclosure, an information handling system for managing the entitlement of digital assets, may include a storage medium comprising a catalog of digital assets and a processor. The processor may be configured to communicate from the information handling system identities of one or more of the digital assets in the catalog to a user at a target system remote from the information handling system. The processor may also be configured to receive from the user a request for a particular digital asset to be entitled for use on the target system. The processor may further be configured to responsive to approval of the request, cause the digital asset to be downloaded to the target system and entitle the digital asset for use on the target system. | 04-17-2014 |
20140108657 | SYSTEM AND METHOD FOR MANAGING ENTITLEMENT OF DIGITAL ASSETS - In accordance with embodiments of the present disclosure, an information handling system for managing the entitlement of digital assets may include a storage medium and a processor. The storage medium may include entitlement data associated with one or more digital assets. The processor may be configured to receive a plurality of entitlements for a digital asset from a parent device. The processor may also be configured to bind the digital asset and the plurality of entitlements to the information handling system. The processor may additionally be configured to allocate the plurality of entitlements among a plurality of child devices. The processor may further be configured to bind the plurality of entitlements to the plurality of child devices in accordance with the entitlements. | 04-17-2014 |
20140114783 | SYSTEM AND METHOD FOR MIGRATION OF DIGITAL ASSETS - An information handling system may include a storage medium and a processor. The storage medium may comprise a repository of source system identifier data, target system identifier data, digital assets data, and digital assets entitlement data. The processor may be configured to, based on at least one of the source system identifier data, target system identifier data, digital assets data, and digital assets entitlement data, determine whether an entitlement for a digital asset is transferable from a source system to a target system. The processor may further be configured to responsive to determining the entitlement for the digital asset is not transferable, present a user with a plurality of options regarding the digital asset. The processor may also be configured to, based at least on a response of the user, acquire a new or modified entitlement for the digital asset for use on the target system. | 04-24-2014 |
20140115290 | SYSTEM AND METHOD FOR MIGRATION OF DIGITAL ASSETS - In accordance with the present disclosure, an information handling system for migrating digital assets may include a storage medium and a processor. The storage medium may be configured to store information regarding digital assets to be migrated from a source system to a target system. The processor may be configured to, for each of one or more digital assets of the source system, determine if the digital asset is a candidate for migration to a cloud storage provider. The processor may also be configured to, for each digital asset determined to be a candidate for migration to the cloud storage provider, determine if a user desires to migrate the digital asset to the cloud storage provider. The processor may further be configured to, for each digital asset the user desires to migrate to the cloud storage provider, transfer the digital asset from the source system to the cloud storage provider. | 04-24-2014 |
20140131437 | Automatic and Dynamic Information Handling System Personalization - Information handling systems personalized by addition of a physical component, such as a lid having an emblem that attaches to a portable information handling system, have software associated with the physical component automatically applied by interacting with an entitlement network location. An identifier associated with the physical component is automatically read by the information handling system and sent to the entitlement network location to retrieve entitlements for the use of software at the information handling system. | 05-15-2014 |
20140180928 | System and Method for Handling Software Activation in Entitlement - A system, method, and computer-readable medium are disclosed for separating the purchase of digital assets from their fulfillment and activation. Digital assets purchase information comprising digital assets identifier information and activation key data, and system identifier information comprising system identifier data, is received. The purchase information and the system identifier information are processed to generate digital assets activation request data, which is then processed by the provider of the digital assets to generate digital assets activation data. Associated digital assets data is provided with the digital assets activation data and then processed with the purchase transaction data to generate digital assets entitlement data. A personalization agent associated with a target system automatically downloads the purchased digital assets and associated digital assets entitlement data, which is used to install the digital assets, thereby entitling the system to process the installed digital assets. | 06-26-2014 |
20140317057 | SYSTEMS AND METHODS FOR DIGITAL FULFILLMENT OF SYSTEM IMAGES - In accordance with embodiments of the present disclosure, an information handling system for deploying a target image to a particular target system may include a storage medium and a processor communicatively coupled to the storage medium. The processor may be configured to receive one or more target images and store the one or more target images to the storage medium, receive unique system identifiers for each of one or more target systems and store the unique system identifiers to the storage medium, generate one or more entitlements binding each of the one or more target systems to a respective target image of the one or more target images based on the one or more target images and the unique system identifiers, and deploy a target image having an entitlement for the particular target system. | 10-23-2014 |
20140330934 | SYSTEMS AND METHODS FOR DIGITAL FULFILLMENT OF STREAMING APPLICATIONS - In accordance with embodiments of the present disclosure, an information handling system for deployment of a streaming application to a streaming application environment comprising the information handling system and one or more target systems may include computer-readable media for storing a library of one or more sequenced applications and entitlement data associated with the one or more sequenced applications and a processor communicatively coupled to the computer-readable media. The processor may be configured to communicate a query for an entitlement to the sequenced application to a digital assets entitlement system server, responsive to a determination that an entitlement exists for the streaming application environment to the sequenced application, receive the sequenced application from the digital assets entitlement system server, and deploy and provision the sequenced application to the one or more target systems via application streaming. | 11-06-2014 |
20150222589 | SYSTEMS AND METHODS FOR RESOLUTION OF UNIFORM RESOURCE LOCATORS IN A LOCAL NETWORK - In accordance with embodiments of the present disclosure, a method for resolving a uniform resource locator may include receiving, at a router, a uniform resource locator from a client information handling system within a local network of the router. The method may also include processing, by the router, the uniform resource locator to determine if the uniform resource locator includes a local domain name of a local information handling system within the local network. The method may further include resolving, by the router, a unique address associated with the uniform resource locator and the local information handling system responsive to determining that the uniform resource locator includes the local domain name of the local information handling system, wherein such resolving is performed without resort to a domain name service external to the local network. | 08-06-2015 |
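The router-side resolution logic in the abstract above amounts to a local-first lookup. A minimal sketch, assuming a `.local` suffix marks local domain names (the abstract does not fix the convention) and with `external_dns` standing in for the upstream resolver:

```python
from urllib.parse import urlparse

LOCAL_SUFFIX = ".local"  # assumed local-domain convention, not from the abstract


def resolve(url, local_hosts, external_dns):
    """Resolve a URL's host at the router, preferring the local table.

    local_hosts maps local domain names to addresses inside the network;
    external_dns is a callable consulted only when the host is not a
    known local domain name."""
    host = urlparse(url).hostname
    if host and host.endswith(LOCAL_SUFFIX) and host in local_hosts:
        # Resolved entirely within the local network, no external DNS.
        return local_hosts[host]
    return external_dns(host)
```

A request for `http://printer.local/status` resolves from the router's own table; any other host falls through to the external resolver.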
20150278293 | ASYNCHRONOUS IMAGE REPOSITORY FUNCTIONALITY - Embodiments of methods for asynchronous image repository functionality are presented. In an embodiment, a method includes storing user data in a data storage device that is local to a user interface device, storing a copy of the user data to a storage location that is remote from the user interface device, performing a service for a user of the user interface device using the copy of the user data stored to the storage location, and communicating information associated with the service back to the user interface device. Additionally, the stored data image may be scanned directly for malicious software. In a further embodiment, the method may include providing an inventory of the software stored in the image. | 10-01-2015 |
Patent application number | Description | Published |
20120311269 | NON-UNIFORM MEMORY-AWARE CACHE MANAGEMENT - An apparatus is disclosed for caching memory data in a computer system with multiple system memories. The apparatus comprises a data cache for caching memory data. The apparatus is configured to determine a retention priority for a cache block stored in the data cache. The retention priority is based on a performance characteristic of a system memory from which the cache block is cached. | 12-06-2012 |
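The abstract above ties a cache block's retention priority to a performance characteristic of its backing memory. As one illustrative reading (the scaling rule is an assumption, not stated in the abstract), priority could grow with the cost of refetching the block from its source memory:

```python
def retention_priority(base_priority, memory_latency_ns, fastest_latency_ns):
    """Scale a cache block's retention priority by its refetch cost.

    Blocks cached from slower memories get proportionally higher
    retention priority. Linear scaling is an assumed heuristic; the
    abstract only says priority depends on a performance characteristic
    of the memory the block was cached from."""
    return base_priority * (memory_latency_ns / fastest_latency_ns)
```

Under this sketch, a block backed by a 200 ns memory is retained twice as strongly as one backed by a 100 ns memory, all else equal.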
20130159818 | Unified Data Masking, Data Poisoning, and Data Bus Inversion Signaling - Provided herein is a method and system for providing and analyzing unified data signaling that includes setting, or analyzing, a state of a single indicator signal; generating, or analyzing, a data pattern of a plurality of data bits; and signaling, or determining, based on the state of the single indicator signal and the pattern of the plurality of data bits, that data bus inversion has been applied to the plurality of data bits or that the plurality of data bits is poisoned. | 06-20-2013 |
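The receive-side decode implied by the abstract above can be sketched as a small state check: the single indicator plus the data pattern together distinguish raw data, inverted data, and poison. The reserved all-ones pattern standing for poison is an assumption for illustration; the abstract does not fix the encoding:

```python
RESERVED_POISON = [1] * 8  # assumed reserved pattern; the abstract leaves it open


def decode(indicator, bits):
    """Interpret one data burst under unified DBI/poison signaling.

    Returns ('raw', bits) when the indicator is clear,
    ('poisoned', None) when the indicator is set and the bits match the
    reserved pattern, or ('inverted', original_bits) when the indicator
    is set and data bus inversion must be undone."""
    if not indicator:
        return ("raw", bits)
    if bits == RESERVED_POISON:
        return ("poisoned", None)
    return ("inverted", [b ^ 1 for b in bits])
```

One indicator wire thus carries two distinct meanings, disambiguated by the data pattern itself, which is the space saving the unified scheme is after.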
20140122801 | MEMORY CONTROLLER WITH INTER-CORE INTERFERENCE DETECTION - Embodiments are described for a method for controlling access to memory in a processor-based system comprising monitoring a number of interference events, such as bank contentions, bus contentions, row-buffer conflicts, and increased write-to-read turnaround time caused by a first core in the processor-based system that causes a delay in access to the memory by a second core in the processor-based system; deriving a control signal based on the number of interference events; and transmitting the control signal to one or more resources of the processor-based system to reduce the number of interference events from an original number of interference events. | 05-01-2014 |
20140173210 | MULTI-CORE PROCESSING DEVICE WITH INVALIDATION CACHE TAGS AND METHODS - A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags. | 06-19-2014 |
20140173225 | REDUCING MEMORY ACCESS TIME IN PARALLEL PROCESSORS - Apparatus, computer readable medium, and method of servicing memory requests are presented. A first plurality of memory requests are associated together, wherein each of the first plurality of memory requests is generated by a corresponding one of a first plurality of processors, and wherein each of the first plurality of processors is executing a first same instruction. A second plurality of memory requests are associated together, wherein each of the second plurality of memory requests is generated by a corresponding one of a second plurality of processors, and wherein each of the second plurality of processors is executing a second same instruction. A determination is made to service the first plurality of memory requests before the second plurality of memory requests and the first plurality of memory requests is serviced before the second plurality of memory requests. | 06-19-2014 |
20140177347 | INTER-ROW DATA TRANSFER IN MEMORY DEVICES - A method and apparatus for inter-row data transfer in memory devices is described. Data transfer from one physical location in a memory device to another is achieved without engaging the external input/output pins on the memory device. In an example method, a memory device is responsive to a row transfer (RT) command which includes a source row identifier and a target row identifier. The memory device activates a source row and stores source row data in a row buffer, latches the target row identifier into the memory device, activates a word line of a target row to prepare for a write operation, and stores the source row data from the row buffer into the target row. | 06-26-2014 |
20140177362 | Memory Interface Supporting Both ECC and Per-Byte Data Masking - A memory and a method of storing data in a memory are provided. The memory comprises a memory block comprising data bits and additional bits. The memory includes logic which, when receiving a first command, writes data into the data bits of the memory block, wherein the data is masked according to a first input. The logic, in response to a second command, writes data into the data bits of the memory block and writes a second input into the additional bits of the memory block. | 06-26-2014 |
20140181411 | PROCESSING DEVICE WITH INDEPENDENTLY ACTIVATABLE WORKING MEMORY BANK AND METHODS - A data processing device is provided that includes an array of working memory banks and an associated processing engine. The working memory bank array is configured with at least one independently activatable memory bank. A dirty data counter (DDC) is associated with the independently activatable memory bank and is configured to reflect a count of dirty data migrated from the independently activatable memory bank upon selective deactivation of the independently activatable memory bank. The DDC is configured to selectively decrement the count of dirty data upon the reactivation of the independently activatable memory bank in connection with a transient state. In the transient state, each dirty data access by the processing engine to the reactivated memory bank is also conducted with respect to another memory bank of the array. Upon a condition that dirty data is found in the other memory bank, the count of dirty data is decremented. | 06-26-2014 |
20140181415 | PREFETCHING FUNCTIONALITY ON A LOGIC DIE STACKED WITH MEMORY - Prefetching functionality on a logic die stacked with memory is described herein. A device includes a logic chip stacked with a memory chip. The logic chip includes a control block, an in-stack prefetch request handler and a memory controller. The control block receives memory requests from an external source and determines availability of the requested data in the in-stack prefetch request handler. If the data is available, the control block sends the requested data to the external source. If the data is not available, the control block obtains the requested data via the memory controller. The in-stack prefetch request handler includes a prefetch controller, a prefetcher and a prefetch buffer. The prefetcher monitors the memory requests and based on observed patterns, issues additional prefetch requests to the memory controller. | 06-26-2014 |
20150261472 | MEMORY INTERFACE SUPPORTING BOTH ECC AND PER-BYTE DATA MASKING - A memory and a method of storing data in a memory are provided. The memory comprises a memory block comprising data bits and additional bits. The memory includes logic which, when receiving a first command, writes data into the data bits of the memory block, wherein the data is masked according to a first input. The logic, in response to a second command, writes data into the data bits of the memory block and writes a second input into the additional bits of the memory block. | 09-17-2015 |
20130159812 | MEMORY ARCHITECTURE FOR READ-MODIFY-WRITE OPERATIONS - According to one embodiment, a memory architecture implemented method is provided, where the memory architecture includes a logic chip and one or more memory chips on a single die, and where the method comprises: reading values of data from the one or more memory chips to the logic chip, where the one or more memory chips and the logic chip are on a single die; modifying, via the logic chip on the single die, the values of data; and writing, from the logic chip to the one or more memory chips, the modified values of data. | 06-20-2013 |
20130262780 | Apparatus and Method for Fast Cache Shutdown - An apparatus and method to enable a fast cache shutdown is disclosed. In one embodiment, a cache subsystem includes a cache memory and a cache controller coupled to the cache memory. The cache controller is configured to, upon restoring power to the cache subsystem, inhibit writing of modified data exclusively into the cache memory. | 10-03-2013 |
20140040532 | STACKED MEMORY DEVICE WITH HELPER PROCESSOR - A processing system comprises one or more processor devices and other system components coupled to a stacked memory device having a set of stacked memory layers and a set of one or more logic layers. The set of logic layers implements a helper processor that executes instructions to perform tasks in response to a task request from the processor devices or otherwise on behalf of the other processor devices. The set of logic layers also includes a memory interface coupled to memory cell circuitry implemented in the set of stacked memory layers and coupleable to the processor devices. The memory interface operates to perform memory accesses for the processor devices and for the helper processor. By virtue of the helper processor's tight integration with the stacked memory layers, the helper processor may perform certain memory-intensive operations more efficiently than could be performed by the external processor devices. | 02-06-2014 |
20140040698 | STACKED MEMORY DEVICE WITH METADATA MANAGEMENT - A processing system comprises one or more processor devices and other system components coupled to a stacked memory device having a set of stacked memory layers and a set of one or more logic layers. The set of logic layers implements a metadata manager that offloads metadata management from the other system components. The set of logic layers also includes a memory interface coupled to memory cell circuitry implemented in the set of stacked memory layers and coupleable to the devices external to the stacked memory device. The memory interface operates to perform memory accesses for the external devices and for the metadata manager. By virtue of the metadata manager's tight integration with the stacked memory layers, the metadata manager may perform certain memory-intensive metadata management operations more efficiently than could be performed by the external devices. | 02-06-2014 |
20140068304 | METHOD AND APPARATUS FOR POWER REDUCTION DURING LANE DIVERGENCE - A method and device for reducing power during an instruction lane divergence includes idling an inactive execution lane during the lane divergence. | 03-06-2014 |
20140089699 | POWER MANAGEMENT SYSTEM AND METHOD FOR A PROCESSOR - The present disclosure relates to a method and apparatus for dynamically controlling power consumption by at least one processor. A power management method includes monitoring, by power control logic of the at least one processor, performance data associated with each of a plurality of executions of a repetitive workload by the at least one processor. The method includes adjusting, by the power control logic following an execution of the repetitive workload, an operating frequency of at least one of a compute unit and a memory controller upon a determination that the at least one processor is at least one of compute-bound and memory-bound based on monitored performance data associated with the execution of the repetitive workload. | 03-27-2014 |
20140136870 | TRACKING MEMORY BANK UTILITY AND COST FOR INTELLIGENT SHUTDOWN DECISIONS - A device receives an indication that a memory bank is to be powered down, and determines, based on receiving the indication, shutdown scores corresponding to powered up memory banks. Each shutdown score is based on a shutdown metric associated with powering down a powered up memory bank. The device may power down a selected memory bank based on the shutdown scores. | 05-15-2014 |
20140136873 | TRACKING MEMORY BANK UTILITY AND COST FOR INTELLIGENT POWER UP DECISIONS - A device receives an indication that a memory bank is to be powered up, and determines, based on receiving the indication, power scores corresponding to powered down memory banks. Each power score corresponds to a power metric associated with powering up a powered down memory bank. The device powers up a selected memory bank based on the plurality of power scores. | 05-15-2014 |
20140143492 | Using Predictions for Store-to-Load Forwarding - The described embodiments include a core that uses predictions for store-to-load forwarding. In the described embodiments, the core comprises a load-store unit, a store buffer, and a prediction mechanism. During operation, the prediction mechanism generates a prediction that a load will be satisfied using data forwarded from the store buffer because the load loads data from a memory location in a stack. Based on the prediction, the load-store unit first sends a request for the data to the store buffer in an attempt to satisfy the load using data forwarded from the store buffer. If data is returned from the store buffer, the load is satisfied using the data. However, if the attempt to satisfy the load using data forwarded from the store buffer is unsuccessful, the load-store unit then separately sends a request for the data to a cache to satisfy the load. | 05-22-2014 |
20140143493 | Bypassing a Cache when Handling Memory Requests - The described embodiments include a computing device that handles memory requests. In some embodiments, when a memory request is to be sent to a cache in the computing device or to be bypassed to a next lower level of a memory hierarchy in the computing device based on expected memory request resolution times, a bypass mechanism is configured to send the memory request to the cache or bypass the memory request to the next lower level of the memory hierarchy. | 05-22-2014 |
20140143495 | METHODS AND APPARATUS FOR SOFT-PARTITIONING OF A DATA CACHE FOR STACK DATA - A method of partitioning a data cache comprising a plurality of sets, the plurality of sets comprising a plurality of ways, is provided. Responsive to a stack data request, the method stores a cache line associated with the stack data in one of a plurality of designated ways of the data cache, wherein the plurality of designated ways is configured to store all requested stack data. | 05-22-2014 |
20140143498 | METHODS AND APPARATUS FOR FILTERING STACK DATA WITHIN A CACHE MEMORY HIERARCHY - A method of storing stack data in a cache hierarchy is provided. The cache hierarchy comprises a data cache and a stack filter cache. Responsive to a request to access a stack data block, the method stores the stack data block in the stack filter cache, wherein the stack filter cache is configured to store any requested stack data block. | 05-22-2014 |
20140143499 | METHODS AND APPARATUS FOR DATA CACHE WAY PREDICTION BASED ON CLASSIFICATION AS STACK DATA - A method of way prediction for a data cache having a plurality of ways is provided. Responsive to an instruction to access a stack data block, the method accesses identifying information associated with a plurality of most recently accessed ways of a data cache to determine whether the stack data block resides in one of the plurality of most recently accessed ways of the data cache, wherein the identifying information is accessed from a subset of an array of identifying information corresponding to the plurality of most recently accessed ways; and when the stack data block resides in one of the plurality of most recently accessed ways of the data cache, the method accesses the stack data block from the data cache. | 05-22-2014 |
20140149710 | CREATING SIMD EFFICIENT CODE BY TRANSFERRING REGISTER STATE THROUGH COMMON MEMORY - Methods, media, and computing systems are provided. The method includes, the media are configured for, and the computing system includes a processor with control logic for allocating memory for storing a plurality of local register states for work items to be executed in single instruction multiple data hardware and for repacking wavefronts that include work items associated with a program instruction responsive to a conditional statement. The repacking is configured to create repacked wavefronts that include at least one of a wavefront containing work items that all pass the conditional statement and a wavefront containing work items that all fail the conditional statement. | 05-29-2014 |
20140156941 | Tracking Non-Native Content in Caches - The described embodiments include a cache with a plurality of banks that includes a cache controller. In these embodiments, the cache controller determines a value representing non-native cache blocks stored in at least one bank in the cache, wherein a cache block is non-native to a bank when a home for the cache block is in a predetermined location relative to the bank. Then, based on the value representing non-native cache blocks stored in the at least one bank, the cache controller determines at least one bank in the cache to be transitioned from a first power mode to a second power mode. Next, the cache controller transitions the determined at least one bank in the cache from the first power mode to the second power mode. | 06-05-2014 |
20140156975 | Redundant Threading for Improved Reliability - In some embodiments, a method for improving reliability in a processor is provided. The method can include replicating input data for first and second lanes of a processor, the first and second lanes being located in a same cluster of the processor and the first and second lanes each generating a respective value associated with an instruction to be executed in the respective lane, and responsive to a determination that the generated values do not match, providing an indication that the generated values do not match. | 06-05-2014 |
20140164708 | SPILL DATA MANAGEMENT - A processor discards spill data from a memory hierarchy after the final access to the spill data has been performed by a compiled program executing at the processor. In some embodiments, the final access is determined based on a special-purpose load instruction configured for this purpose. In some embodiments the determination is made based on the location of a stack pointer indicating that a method of the executing program has returned, so that data of the returned method that remains in the stack frame is no longer to be accessed. Because the spill data is discarded after the final access, it is not transferred through the memory hierarchy. | 06-12-2014 |
20140173378 | PARITY DATA MANAGEMENT FOR A MEMORY ARCHITECTURE - A processor system as presented herein includes a processor core, cache memory coupled to the processor core, a memory controller coupled to the cache memory, and a system memory component coupled to the memory controller. The system memory component includes a plurality of independent memory channels configured to store data blocks, wherein the memory controller controls the storing of parity bits in at least one of the plurality of independent memory channels. In some implementations, the system memory is realized as a die-stacked memory component. | 06-19-2014 |
20140173379 | DIRTY CACHELINE DUPLICATION - A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory. | 06-19-2014 |
20140181410 | MANAGEMENT OF CACHE SIZE - In response to a processor core exiting a low-power state, a cache is set to a minimum size so that fewer than all of the cache's entries are available to store data, thus reducing the cache's power consumption. Over time, the size of the cache can be increased to account for heightened processor activity, thus ensuring that processing efficiency is not significantly impacted by a reduced cache size. In some embodiments, the cache size is increased based on a measured processor performance metric, such as an eviction rate of the cache. In some embodiments, the cache size is increased at regular intervals until a maximum size is reached. | 06-26-2014 |
20140181412 | MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES - A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache and one or more sources for memory requests. In response to receiving a request to allocate data of a first type, a cache controller allocates the data in the cache responsive to determining a limit of an amount of data of the first type permitted in the cache is not reached. The controller maintains an amount and location information of the data of the first type stored in the cache. Additionally, the cache may be partitioned with each partition designated for storing data of a given type. Allocation of data of the first type is dependent at least upon the availability of a first partition and a limit of an amount of data of the first type in a second partition. | 06-26-2014 |
20140181414 | MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES - A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, wherein a first bank is powered down. In response to a write request to a second bank for data indicated to be stored in the powered down first bank, the cache controller determines a respective bypass condition for the data. If the bypass condition exceeds a threshold, then the cache controller invalidates any copy of the data stored in the second bank. If the bypass condition does not exceed the threshold, then the cache controller stores the data with a clean state in the second bank. The cache controller writes the data in a lower-level memory for both cases. | 06-26-2014 |
20140181421 | PROCESSING ENGINE FOR COMPLEX ATOMIC OPERATIONS - A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect is to couple to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores. | 06-26-2014 |
20140181427 | Compound Memory Operations in a Logic Layer of a Stacked Memory - Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry/functionality is placed on the logic layer to implement functionality to perform various data movement and address calculation operations. This functionality would allow compound memory operations—a single request communicated to the memory that characterizes the accesses and movement of many data items. This eliminates the performance and power overheads associated with communicating address and control information on a fine-grain, per-data-item basis from a host processor (or other device) to the memory. This approach also provides better visibility of macro-level memory access patterns to the memory system and may enable additional optimizations in scheduling memory accesses. | 06-26-2014 |
20140181453 | Processor with Host and Slave Operating Modes Stacked with Memory - A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode. | 06-26-2014 |
20140181457 | Write Endurance Management Techniques in the Logic Layer of a Stacked Memory - A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the one or more memory layers. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices to internal memory locations. | 06-26-2014 |
20140181458 | DIE-STACKED MEMORY DEVICE PROVIDING DATA TRANSLATION - A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like. Due to the tight integration of the logic dies and the memory dies, the data translation controller can perform data translation operations with higher bandwidth and lower latency and power consumption compared to operations performed by devices external to the die-stacked memory device. | 06-26-2014 |
20140181467 | HIGH LEVEL SOFTWARE EXECUTION MASK OVERRIDE - Methods, media, and computer systems are provided. The method includes, the media includes control logic for, and the computer system includes a processor with control logic for overriding an execution mask of SIMD hardware to enable at least one of a plurality of lanes of the SIMD hardware. Overriding the execution mask is responsive to a data parallel computation and a diverged control flow of a workgroup. | 06-26-2014 |
20140181483 | Computation Memory Operations in a Logic Layer of a Stacked Memory - Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry/functionality is placed on the logic layer to implement functionality to perform various computation operations. This functionality would be desired where performing the operations locally near the memory devices would allow increased performance and/or power efficiency by avoiding transmission of data across the interface to the host processor. | 06-26-2014 |
20140223445 | Selecting a Resource from a Set of Resources for Performing an Operation - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation. | 08-07-2014 |
20140372711 | SCHEDULING MEMORY ACCESSES USING AN EFFICIENT ROW BURST VALUE - A memory accessing agent includes a memory access generating circuit and a memory controller. The memory access generating circuit is adapted to generate multiple memory accesses in a first ordered arrangement. The memory controller is coupled to the memory access generating circuit and has an output port, for providing the multiple memory accesses to the output port in a second ordered arrangement based on the memory accesses and characteristics of an external memory. The memory controller determines the second ordered arrangement by calculating an efficient row burst value and interrupting multiple row-hit requests to schedule a row-miss request based on the efficient row burst value. | 12-18-2014 |
20140376320 | SPARE MEMORY EXTERNAL TO PROTECTED MEMORY - A memory subsystem employs spare memory cells external to one or more memory devices. In some embodiments, a processing system uses the spare memory cells to replace individual selected cells at the protected memory, whereby the selected cells are replaced on a cell-by-cell basis, rather than exclusively on a row-by-row, column-by-column, or block-by-block basis. This allows faulty memory cells to be replaced efficiently, thereby improving memory reliability and manufacturing yields, without requiring large blocks of spare memory cells. | 12-25-2014 |
20150016172 | QUERY OPERATIONS FOR STACKED-DIE MEMORY DEVICE - An integrated circuit (IC) package includes a stacked-die memory device. The stacked-die memory device includes a set of one or more stacked memory dies implementing memory cell circuitry. The stacked-die memory device further includes a set of one or more logic dies electrically coupled to the memory cell circuitry. The set of one or more logic dies includes a query controller and a memory controller. The memory controller is coupleable to at least one device external to the stacked-die memory device. The query controller is to perform a query operation on data stored in the memory cell circuitry responsive to a query command received from the external device. | 01-15-2015 |
20150019813 | MEMORY HIERARCHY USING ROW-BASED COMPRESSION - A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory. | 01-15-2015 |
20150019834 | MEMORY HIERARCHY USING PAGE-BASED COMPRESSION - A system includes a device coupleable to a first memory. The device includes a second memory to cache data from the first memory. The second memory is to store a set of compressed pages of the first memory and a set of page descriptors. Each compressed page includes a set of compressed data blocks. Each page descriptor represents a corresponding page and includes a set of location identifiers that identify the locations of the compressed data blocks of the corresponding page in the second memory. The device further includes compression logic to compress data blocks of a page to be stored to the second memory and decompression logic to decompress compressed data blocks of a page accessed from the second memory. | 01-15-2015 |
20150026511 | PARTITIONABLE DATA BUS - A method and a system are provided for partitioning a system data bus. The method can include partitioning off a portion of a system data bus that includes one or more faulty bits to form a partitioned data bus. Further, the method includes transferring data over the partitioned data bus to compensate for data loss due to the one or more faulty bits in the system data bus. | 01-22-2015 |
20150100758 | DATA PROCESSOR AND METHOD OF LANE REALIGNMENT - A data processor includes a register file divided into at least a first portion and a second portion for storing data. A single instruction, multiple data (SIMD) unit is also divided into at least a first lane and a second lane. The first and second lanes of the SIMD unit correspond respectively to the first and second portions of the register file. Furthermore, each lane of the SIMD unit is capable of data processing. The data processor also includes a realignment element in communication with the register file and the SIMD unit. The realignment element is configured to selectively realign conveyance of data between the first portion of the register file and the first lane of the SIMD unit to the second lane of the SIMD unit. | 04-09-2015 |
20150199126 | PAGE MIGRATION IN A 3D STACKED HYBRID MEMORY - A die-stacked hybrid memory device implements a first set of one or more memory dies implementing first memory cell circuitry of a first memory architecture type and a second set of one or more memory dies implementing second memory cell circuitry of a second memory architecture type different than the first memory architecture type. The die-stacked hybrid memory device further includes a set of one or more logic dies electrically coupled to the first and second sets of one or more memory dies, the set of one or more logic dies comprising a memory interface and a page migration manager, the memory interface coupleable to a device external to the die-stacked hybrid memory device, and the page migration manager to transfer memory pages between the first set of one or more memory dies and the second set of one or more memory dies. | 07-16-2015 |
20150293845 | MULTI-LEVEL MEMORY HIERARCHY - Described is a system and method for a multi-level memory hierarchy. Each level is based on different attributes including, for example, power, capacity, bandwidth, reliability, and volatility. In some embodiments, the different levels of the memory hierarchy may use an on-chip stacked dynamic random access memory (providing fast, high-bandwidth, low-energy access to data) and an off-chip non-volatile random access memory (providing low-power, high-capacity storage) in order to provide higher-capacity, lower-power, and higher-bandwidth performance. The multi-level memory may present a unified interface to a processor so that specific memory hardware and software implementation details are hidden. The multi-level memory enables the illusion of a single-level memory that satisfies multiple conflicting constraints. A comparator receives a memory address from the processor, processes the address, and reads from or writes to the appropriate memory level. In some embodiments, the memory architecture is visible to the software stack to optimize memory utilization. | 10-15-2015 |
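The comparator-based routing described in application 20150293845 can be illustrated with a minimal sketch. This is not taken from the patent itself: the class names, level names, and sizes below are invented for illustration, assuming a simple flat address space split across two backing levels behind one unified interface.

```python
class MemoryLevel:
    """One level of the hierarchy (e.g. stacked DRAM or off-chip NVRAM)."""
    def __init__(self, name, size):
        self.name = name
        self.cells = [0] * size

class MultiLevelMemory:
    """Unified interface that hides the backing levels from the processor."""
    def __init__(self, levels):
        self.levels = levels  # ordered list of MemoryLevel

    def _route(self, addr):
        # Comparator: compare the address against each level's boundary
        # until it falls in range, then translate to a level-local offset.
        base = 0
        for level in self.levels:
            if addr < base + len(level.cells):
                return level, addr - base
            base += len(level.cells)
        raise IndexError("address out of range")

    def write(self, addr, value):
        level, offset = self._route(addr)
        level.cells[offset] = value
        return level.name  # returned only to make the routing visible

    def read(self, addr):
        level, offset = self._route(addr)
        return level.cells[offset]

# Example: a small fast "stacked-dram" level ahead of a larger "nvram" level.
mem = MultiLevelMemory([MemoryLevel("stacked-dram", 4),
                        MemoryLevel("nvram", 12)])
```

A caller simply issues `mem.read(addr)` / `mem.write(addr, value)`; which physical level services the access is decided entirely by the comparator, matching the abstract's "illusion of a single-level memory."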
20080302543 | Expandable packer system - The expandable casing packing element systems for cased and open-hole wellbores include an expandable casing member having a sealing device comprising a sealing element disposed between at least two retainer rings. In one embodiment, both retainer rings have flat cross-sections and the sealing element is forced radially outward by the expansion of the expandable casing against the two retainer rings such that the sealing element protrudes outwardly beyond the retainer rings and engages the wall of a wellbore in three locations. In another embodiment, both of the two retainer rings include flares that extend outwardly from the body of the expandable casing to which they are attached. As the expandable casing is expanded, the flares are forced inward to compress the sealing element which is then extruded radially outward through a gap between the two retainer rings to engage and seal off the wellbore. | 12-11-2008 |
20090205840 | EXPANDABLE DOWNHOLE ACTUATOR, METHOD OF MAKING AND METHOD OF ACTUATING - Disclosed herein is a downhole actuator. The actuator includes a discontinuous tubular configured to restrict longitudinal expansion while contracting longitudinally in response to radial expansion. | 08-20-2009 |
20100078180 | Expandable packer system - The expandable casing packing element systems for cased and open-hole wellbores include an expandable casing member having a sealing device comprising a sealing element disposed between at least two retainer rings. In one embodiment, both retainer rings have flat cross-sections and the sealing element is forced radially outward by the expansion of the expandable casing against the two retainer rings such that the sealing element protrudes outwardly beyond the retainer rings and engages the wall of a wellbore in three locations. In another embodiment, both of the two retainer rings include flares that extend outwardly from the body of the expandable casing to which they are attached. As the expandable casing is expanded, the flares are forced inward to compress the sealing element which is then extruded radially outward through a gap between the two retainer rings to engage and seal off the wellbore. | 04-01-2010 |
20110037230 | Expandable packer system - The expandable casing packing element systems for cased and open-hole wellbores include an expandable casing member having a sealing device comprising a sealing element disposed between at least two retainer rings. The retainer rings have flat cross-sections and the sealing element is forced radially outward by the expansion of the expandable casing against the two retainer rings such that the sealing element protrudes outwardly beyond the retainer rings and engages the wall of a wellbore in three locations. The retainer rings can also include flares that extend outwardly from the body of the expandable casing to which they are attached. As the expandable casing is expanded, the flares are forced inward to compress the sealing element which is then extruded radially outward through a gap between the two retainer rings to engage and seal off the wellbore. | 02-17-2011 |
20110114336 | Apparatus and Methods for Multi-Layer Wellbore Construction - In aspects, the present disclosure provides a monobore wellbore construction apparatus and method, which in one embodiment may include a series of overlapping expandable liner sections. In one aspect, the overlapping liner sections may be expanded and pressed to provide no gaps along the length of the liner system. In another aspect, the liner sections may include centralizers and/or circumferential seals that provide sealing functions and spaces between the overlapping liner sections. The liner sections may be lined with a suitable sealing material, including an epoxy, or may be filled with cement or another desired material. | 05-19-2011 |
20120061097 | Pump Down Liner Expansion Method - A string to be expanded is run in with a running string that supports a swage assembly. The running string is secured to the existing tubular and the top of the string to be expanded is sealed around the supported running string. The pressure applied to the annular space above the seal drives the liner over the swage. A cement shoe is affixed to the lower end of the string that is expanded after becoming detached from the running string assembly. When the expanded liner bottoms on a support, generally the hole bottom, the cement is delivered through the shoe and the expansion of the top of the string into a recess of the string above continues. The swage assembly with the seal and the anchor are then recovered as the running string is removed during the process of expanding the top of the expanded string into the lower end recess of the existing string already in the wellbore. | 03-15-2012 |
20120085549 | Pump Down Swage Expansion Method - The tubular string to be expanded is run in on a running string. The swage assembly has a seal from the running string to the existing tubular and the top of the tubular string to be expanded also has a similar seal against the existing tubular. Annulus pressure around the running string drives the swage assembly to support the expanded tubular to the existing tubular and to continue expansion to the end of the tubular. Cementing then takes place followed by reconfiguring the swage assembly to engage the liner hanger seal with the result being a monobore connection in a single trip including the cementing. | 04-12-2012 |
20120211221 | Annulus Mounted Potential Energy Driven Setting Tool - An actuator and method for setting a subterranean tool uses an externally mounted actuator on a tubular string that is operably engaged to the tool to be actuated. At the desired location for actuation a signal is given to a valve assembly. The opening of the valve releases the pressurized compressible fluid against a floating piston. The piston drives viscous fluid ahead of itself through the now open valve that in turn drives an actuating piston whose movement sets the tool. The triggering mechanism to open the valve can be a variety of methods including an acoustic signal, a vibration signal, a change in magnetic field, or elastic deformation of the tubular wall adjacent the valve assembly. | 08-23-2012 |
20120279725 | Expandable Tubular Centralizer - A centralizer device that is set by expansion of an inner expandable tubular member. The centralizer is made up of either one or a plurality of centralizer bands which radially surround an inner, radially expandable tubular member. Each of the centralizer bands includes one or more expansion sections which buckle under tension to form a loop. Generally semicircular, opposing tension straps are affixed to opposite end portions of the expansion section. The tension straps underlie the expansion section and radially surround the inner tubular member in a close-fitting, snug relation. A closure mechanism is used to secure each centralizer band to the inner tubular member. | 11-08-2012 |
20130020092 | Remote Manipulation and Control of Subterranean Tools - A subterranean tool that is self-contained for actuation can be run into a desired location on an automatic set mode controlled by a timer. If a problem develops in getting the tool to the desired location in time, a magnetic field created by permanent or electro-magnets can be brought to bear on the tool to stop the timer before the tool actuates. Once the tool is subsequently positioned at the desired location, another magnetic field can be brought to bear near the tool to set it. Alternatively, the tool can be run to the desired location without activation with the timer and then the magnetic field can be brought to the tool to set it. The magnetic field can be lowered to the tool with wireline or can be dropped or pumped past the tool to actuate the tool. Optionally the field can be generated from within an object that ultimately lands on a seat to provide a backup way to set the tool using tubing pressure. | 01-24-2013 |
20130048308 | Integrated Continuous Liner Expansion Method - An additional string is run through an existing string using a running string. With the strings overlapping an upper inflatable secures them together leaving gaps. The upper inflatable creates an upper expanded zone where the swage assembly is then built. The swage assembly has a seal and upon pressure being applied between the upper inflatable and the seal the swage assembly releases the running string and is pushed to expand the additional string until tagging a cement shoe. The running string is rejoined to the swage assembly and after cementing a lower inflatable is deployed to make a bell and to set an external packer if used. If there is an external packer the shoe releases and on the way out of the hole the upper inflatable sets a seal in the lap. | 02-28-2013 |
20130056228 | Annular Seal for Expanded Pipe with One Way Flow Feature - The seal has a base ring that expands with the underlying supporting tubular. Extending from the base ring is a pleated structure with segments folded over each other so that the run in shape is small and up against the supporting tubular for run in. The pleated segments can have internal stiffeners that also add a bias radially outwardly when the structure is freed to move in that direction. A retaining band keeps the assembly retracted until tubular expansion defeats the band to allow the unitary structure to move out radially to the wellbore or surrounding tubular. The pleated portion unfolds and spans outwardly from the base ring to retain pressure differential in one direction while allowing fluid flow in the opposite direction. The assembly can be attached to a swage device so that pressure from above into the set seal can drive one or more swage members to expand a tubular. | 03-07-2013 |
20130098634 | MONOBORE EXPANSION SYSTEM - ANCHORED LINER - Methods for forming a wellbore may include placing an upper section of a second liner inside a lower section of a parent liner; positioning an upper sealing member and a lower sealing member in the wellbore to form a pressure chamber; and expanding the second liner using the pressure chamber. The sealing members move axially relative to one another and the second liner has an inner bore that is hydraulically isolated from the pressure chamber. A related apparatus may include upper and lower sealing members that cooperate to form a pressure chamber that is hydraulically isolated from an inner bore of the second liner. A work string may include the sealing members, a connector that extends through the pressure chamber and the second liner, and an expander. The expander expands the second liner in response to the axial separation of the sealing members. | 04-25-2013 |
20130299169 | One Trip Casing or Liner Directional Drilling With Expansion and Cementing - A tubular string is advanced with a bottom hole assembly as the hole is drilled and reamed in a desired direction with the aid of directional drilling equipment adjacent the bit. When the advanced tubular forms the desired lap to the existing tubular, the assembly can be configured to cement the tubular and expansion can then be accomplished to fill the annular space and enhance the cement bonding. The expansion equipment can create a bottom bell on the expanded tubular and expand the top end into a bell of the existing tubular so that a monobore is created as the process is repeated with each added string. Numerous variations are contemplated for each single trip including but not limited to the direction of expansion, whether cementing or expansion occurs first, reforming folded tubing in the hole as well as the nature of the expansion tool and pressure control when drilling. | 11-14-2013 |
20130312982 | THERMAL RELEASE MECHANISM FOR DOWNHOLE TOOLS - A release mechanism for use in setting a downhole tool comprises two connectors releasably connected to one another. One of the connectors includes a material having a coefficient of thermal expansion that is different from a material included in the second connector. The difference in the coefficients of thermal expansion causes one of the connectors to expand more than the other connector when heat is applied to one or both of the connectors. As a result of the greater expansion of one of the connectors, the connectors release from each other. Upon release, an actuator within the downhole tool is permitted to move and cause actuation or setting of the downhole tool. | 11-28-2013 |
20140131953 | Self-energized Seal or Centralizer and Associated Setting and Retraction Mechanism - A self-energizing structure can function as a centralizer or as a seal when allowed to spring out after a retainer is moved away from an overlying position for run in to protect the structure. Segments extend from a common base ring and are radially offset during run in. Alternating segments have landing surfaces on opposed ends such that on release of the structure the intervening segments land on such surfaces to form a cohesive single layer with all segments circumferentially aligned and against a surrounding tubular or the borehole wall. The structure is held retracted with a bi-directionally movable sleeve operable in a variety of ways from the surface. Internally the sleeve has splines to push the segments with the landing surfaces back so that the structure can collapse back into the sleeve for removal. Structures can be stacked and used as centralizers with alternating segments removed. | 05-15-2014 |
20140144653 | Annulus Mounted Potential Energy Driven Setting Tool - An actuator and method for setting a subterranean tool uses an externally mounted actuator on a tubular string that is operably engaged to the tool to be actuated. At the desired location for actuation a signal is given to a valve assembly. The opening of the valve releases the pressurized compressible fluid against a floating piston. The piston drives viscous fluid ahead of itself through the now open valve that in turn drives an actuating piston whose movement sets the tool. The triggering mechanism to open the valve can be a variety of methods including an acoustic signal, a vibration signal, a change in magnetic field, or elastic deformation of the tubular wall adjacent the valve assembly. | 05-29-2014 |
20150152707 | Compliant Seal for Irregular Casing - A setting assembly for a packer seal features a series of peripherally mounted rod pistons that abut the seal to be set by advancing the seal relative to a tapered surface. When parts of the seal engage an inner tubular wall before other parts of the seal the continuation of application of hydraulic pressure to the pistons moves parts of the seal that have yet to make contact with the tubular wall further relative to the ramp so that plastic deformation of the seal assembly can occur to allow portions thereof to move radially further outwardly to seal in the region where the radius of the tubular is enlarged. When hydraulic pressure is applied to the pistons in an opposite direction a lock mechanism is defeated and the c-ring or scroll reverts to a smaller shape optionally aided by a garter spring so that the packer can be selectively retrieved. | 06-04-2015 |
20150300097 | Magnetic Switch and Uses Thereof in Wellbores - In one aspect, an apparatus for use in a wellbore is disclosed that in one non-limiting embodiment includes a string for placement in the wellbore and a switch on an outside of the string, wherein the switch includes a plurality of magnetic elements that provide a continuous electrical path when the plurality of magnetic elements are aligned by an externally applied magnetic field. In one embodiment, the switch includes a channel that houses the plurality of magnetic elements that remain unaligned until the magnetic elements are aligned by the externally applied magnetic field. | 10-22-2015 |
20150354350 | Downhole Vibratory Communication System and Method - Systems and methods for controlling one or more downhole tools. A vibratory signal is produced by interaction between an actuation profile and a contact profile. In response to the vibratory signal, a controller actuates one or more downhole tools. | 12-10-2015 |