Patent application number | Description | Published |
20090029582 | Tamper-Evident Connector - Embodiments of a tamper-evident connector are disclosed which may optionally be used in a trusted computing environment. In an exemplary embodiment, a tamper-evident connection includes a mate-once engaging assembly to be provided with a first component, the mate-once engaging assembly including a foldable portion. The tamper-evident connection also includes a receiving chamber to be provided with a second component, the mate-once engaging assembly fitting in the receiving chamber to physically secure the first component to the second component, the foldable portion of the mate-once engaging assembly unfolding during removal of the mate-once engaging assembly from the receiving chamber to provide evidence of tampering when the first component has been removed from the second component. Optionally, the first component is a Trusted Platform Module (TPM) and the second component is a system board. | 01-29-2009 |
20100081311 | Tamper-Evident Connector - Embodiments of a tamper-evident connector are disclosed which may optionally be used in a trusted computing environment. In an exemplary embodiment, a tamper-evident connection includes a mate-once engaging assembly to be provided with a first component, the mate-once engaging assembly including a foldable portion. The tamper-evident connection also includes a receiving chamber to be provided with a second component, the mate-once engaging assembly fitting in the receiving chamber to physically secure the first component to the second component, the foldable portion of the mate-once engaging assembly unfolding during removal of the mate-once engaging assembly from the receiving chamber to provide evidence of tampering when the first component has been removed from the second component. Optionally, the first component is a Trusted Platform Module (TPM) and the second component is a system board. | 04-01-2010 |
20110205078 | COMPONENT INSTALLATION GUIDANCE - In accordance with embodiments, a system includes a plurality of component slots and at least one indicator associated with each of said slots. The system also includes a controller coupled to the indicators. The indicators selectively provide installation guidance of components into said slots based on signals from the controller. | 08-25-2011 |
20120030493 | Power Capping System And Method - A system, and a corresponding method, for temporarily capping power consumption includes a mechanism for determining total power consumption by a number of components, a mechanism for disconnecting and reconnecting power to one or more of the components, and a mechanism for determining when to disconnect and reconnect power to the components. | 02-02-2012 |
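The capping mechanism in 20120030493 can be sketched as a small control loop. The following Python simulation is illustrative only: it assumes components report their draw in watts and that the controller sheds the largest loads first; all names and figures are hypothetical, not taken from the application.

```python
# Illustrative power-capping loop: measure total draw, disconnect
# components while over the cap, reconnect any that fit the headroom.

components = {"node-a": 180, "node-b": 220, "node-c": 150}  # watts, hypothetical
connected = set(components)
CAP_WATTS = 400

def total_draw():
    return sum(components[c] for c in connected)

def enforce_cap():
    # Disconnect the highest-draw components until under the cap.
    for name in sorted(connected, key=components.get, reverse=True):
        if total_draw() <= CAP_WATTS:
            break
        connected.discard(name)
    # Reconnect disconnected components that now fit within the headroom.
    for name in sorted(components, key=components.get):
        if name not in connected and total_draw() + components[name] <= CAP_WATTS:
            connected.add(name)

enforce_cap()
print(sorted(connected), total_draw())  # ['node-a', 'node-c'] 330
```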
20120074783 | PASSIVE IMPEDANCE MATCHING - Methods, devices, and systems for passive impedance matching are provided. An example of a method of passive impedance matching includes providing a substantially equivalent impedance between a source and a load for three single-phase power supplies via a geometry of a busbar. The busbar can be coupled to the three single-phase power supplies as the source and coupled to a plurality of electronic machines as the load. | 03-29-2012 |
20120079299 | Enclosure Power Controller - A system and method for controlling power consumption is described herein. A computer system includes an enclosure. The enclosure is configured to contain a plurality of removable compute nodes. The enclosure includes a power controller configured to individually control an amount of power consumed by each of the plurality of removable compute nodes. The power controller provides a plurality of power control signals. Each power control signal is provided to and controls the power consumption of one of the plurality of removable compute nodes. | 03-29-2012 |
20120123597 | ENCLOSURE AIRFLOW CONTROLLER - A system and method for controlling airflow in a computer system enclosure are described herein. A computer system includes an enclosure and an airflow controller. The enclosure is configured to contain a plurality of compute nodes. The airflow controller is configured to control a flow of air provided to each of the plurality of compute nodes. The airflow controller is configured to receive unsolicited requests for airflow from each of the plurality of compute nodes. The airflow controller is further configured to control airflow provided to a given compute node based on the unsolicited airflow requests received from a subset of the compute nodes, including the given node. The subset of the compute nodes is assigned to a same airflow zone by the airflow controller. | 05-17-2012 |
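One plausible reading of the zone policy in 20120123597 is that every node in a zone receives the largest airflow requested within that zone. The sketch below assumes exactly that; the zone assignments and duty-cycle numbers are invented for illustration.

```python
# Illustrative zone-based airflow policy: each compute node sends an
# unsolicited airflow request; the controller grants every node in a
# zone the largest request seen in that zone.

zone_of = {"node1": "A", "node2": "A", "node3": "B"}   # hypothetical zones
requests = {"node1": 30, "node2": 55, "node3": 40}     # requested fan duty, %

def airflow_for(node):
    zone = zone_of[node]
    peers = [n for n in zone_of if zone_of[n] == zone]
    return max(requests[p] for p in peers)

for n in zone_of:
    print(n, airflow_for(n))  # node1 55, node2 55, node3 40
```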
20120131249 | METHODS AND SYSTEMS FOR AN INTERPOSER BOARD - In accordance with at least some embodiments, a system includes an aggregator backplane coupled to a plurality of fans and power supplies and configured to consolidate control and monitoring for the plurality of fans and power supplies. The system also includes a plurality of compute nodes coupled to the aggregator backplane, wherein each compute node selectively communicates with the aggregator backplane via a corresponding interposer board. Each interposer board is configured to translate information passed between its corresponding compute node and the aggregator backplane. | 05-24-2012 |
20140351470 | METHODS AND SYSTEMS FOR AN INTERPOSER BOARD - In accordance with at least some embodiments, a system includes an aggregator backplane coupled to a plurality of fans and power supplies and configured to consolidate control and monitoring for the plurality of fans and power supplies. The system also includes a plurality of compute nodes coupled to the aggregator backplane, wherein each compute node selectively communicates with the aggregator backplane via a corresponding interposer board. Each interposer board is configured to translate information passed between its corresponding compute node and the aggregator backplane. | 11-27-2014 |
20140380070 | VOLTAGE REGULATOR CONTROL SYSTEM - A processor power management system and method are disclosed. The system includes a voltage regulator control system that is communicatively coupled to each of a plurality of processors. The voltage regulator control system is to generate a processor voltage that is provided to each of the plurality of processors and to control a magnitude of the processor voltage based on receiving power management request signals that are provided from each of the plurality of processors. | 12-25-2014 |
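Because one rail in 20140380070 feeds every processor, a natural control rule is to honor the most demanding request. That "max of requests" policy is an assumption of this sketch, not something the abstract states.

```python
# Illustrative shared-rail control rule: the regulator sets the rail to
# the highest voltage any processor has requested.

requests_mv = {"cpu0": 950, "cpu1": 1100, "cpu2": 1000}  # hypothetical, millivolts

def rail_setpoint(requests):
    return max(requests.values())

print(rail_setpoint(requests_mv))  # 1100
```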
20150241486 | CONTROLLER TO DETERMINE A RISK OF THERMAL DAMAGE BASED ON CURRENT MEASUREMENTS - Examples disclose a system with an input current sensing circuit to measure an input current of a current carrier. Additionally, the examples disclose an output sensing circuit to measure an output current from the current carrier. Further, the examples disclose a controller to determine a power loss associated with the current carrier based on the input and the output current measurements. The power loss indicates a risk of a thermal damage to the current carrier. | 08-27-2015 |
20150241940 | ENCLOSURE POWER CONTROLLER - A system and method for controlling power consumption is described herein. A computer system includes an enclosure. The enclosure is configured to contain a plurality of removable compute nodes. The enclosure includes a power controller configured to individually control an amount of power consumed by each of the plurality of removable compute nodes. The power controller provides a plurality of power control signals. Each power control signal is provided to and controls the power consumption of one of the plurality of removable compute nodes. | 08-27-2015 |
20160036650 | MICROCONTROLLER AT A CARTRIDGE OF A CHASSIS - A method for monitoring computing resources of a cartridge is described herein. The method may include receiving, via a microcontroller of the cartridge, data associated with the computing components of the cartridge. The method may include providing, via the microcontroller, the data gathered by the monitoring to a management controller that is remote from the cartridge. The method may include analyzing, via the management controller, the data received. The method may include communicating, via the management controller, operational signals based on the analysis to the microcontroller in response to the data received. | 02-04-2016 |
20160048184 | SHARING FIRMWARE AMONG AGENTS IN A COMPUTING NODE - Sharing firmware among a plurality of agents including a plurality of central processing units (CPUs) on a node is described. In an example, a computing node includes: a bus; a non-volatile memory, coupled to the bus, to store firmware for the plurality of agents; a power sequencer to implement a power-up sequence for the plurality of CPUs; a plurality of power control state machines respectively controlling states of the plurality of CPUs based on output of the power sequencer; and a bus controller to selectively couple the plurality of agents to the non-volatile memory based on state of the plurality of power control state machines. | 02-18-2016 |
20090300211 | REDUCING IDLE TIME DUE TO ACKNOWLEDGEMENT PACKET DELAY - Mechanisms for reducing the idle time of a computing device due to delays in transmitting/receiving acknowledgement packets are provided. A first data amount corresponding to a window size for a communication connection is determined. A second data amount, in excess of the first data amount, which may be transmitted with the first data amount, is calculated. The first and second data amounts are then transmitted from the sender to the receiver. The first data amount is provided to the receiver in a receive buffer of the receiver. The second data amount is maintained in a switch port buffer of a switch port without being provided to the receive buffer. The second data amount is transmitted from the switch port buffer to the receive buffer in response to the switch port detecting an acknowledgement packet from the receiver. | 12-03-2009 |
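The split in 20090300211 is straightforward to illustrate: the first amount matches the receiver's window and is delivered immediately, while the overshoot is parked at the switch port until the ACK passes through. The buffer sizes below are assumptions for the sketch.

```python
# Illustrative send planning: window-sized data goes straight to the
# receive buffer; the excess waits in the switch port buffer and is
# released the moment the switch detects the receiver's ACK.

WINDOW = 64 * 1024       # receiver's advertised window, bytes (assumed)
SWITCH_BUF = 16 * 1024   # spare switch-port buffering (assumed)

def plan_send(total_bytes):
    first = min(total_bytes, WINDOW)
    second = min(total_bytes - first, SWITCH_BUF)  # parked at the switch
    return first, second

first, second = plan_send(100 * 1024)
print(first, second)  # 65536 16384: 16 KiB is in flight as soon as the ACK arrives
```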
20100318666 | EXPEDITING ADAPTER FAILOVER - Expediting adapter failover may minimize network downtime and preserve network performance. Embodiments may comprise copying a primary adapter memory of a failing primary adapter to a standby adapter memory of a standby adapter. Copying the memory may expedite TCP/IP offload adapter failover by maintaining TCP/IP stack and connection information. In several embodiments, Copy Logic may copy primary adapter memory to standby adapter memory. In some embodiments, Detect Logic may monitor primary adapter viability and may initiate failover. In additional embodiments, Assess Logic may assess whether the IO bus is operative, permitting Direct Logic to copy adapter memory via, e.g., DMA. In other embodiments, Packet Logic may fragment primary adapter memory into network packets sent through the network to the standby adapter, where Unpack Logic may unpack them into memory. | 12-16-2010 |
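The path choice in 20100318666 reduces to one branch: copy directly while the IO bus works, otherwise packetize the memory over the network. A minimal sketch, with an assumed packet payload size:

```python
# Illustrative copy-path selection: direct (DMA-style) transfer when the
# IO bus is operative, otherwise fragment the adapter memory into
# network-sized packets bound for the standby adapter.

MTU = 1500  # assumed payload size per packet

def copy_adapter_memory(io_bus_ok: bool, memory: bytes):
    if io_bus_ok:
        return [memory]  # one direct transfer over the bus
    return [memory[i:i + MTU] for i in range(0, len(memory), MTU)]

pieces = copy_adapter_memory(io_bus_ok=False, memory=b"\x00" * 4000)
print(len(pieces), [len(p) for p in pieces])  # 3 [1500, 1500, 1000]
```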
20110153931 | HYBRID STORAGE SUBSYSTEM WITH MIXED PLACEMENT OF FILE CONTENTS - A storage subsystem combining solid state drive (SSD) and hard disk drive (HDD) technologies provides low access latency and low complexity. Separate free lists are maintained for the SSD and the HDD, and blocks of file system data are stored uniquely on either the SSD or the HDD. When a read access is made to the subsystem, if the block is present on the SSD, the data is returned, but if the block is present on the HDD, it is migrated to the SSD and the block on the HDD is returned to the HDD free list. On a write access, if the block is present in either the SSD or the HDD, the block is overwritten, but if the block is not present in the subsystem, the block is written to the HDD. | 06-23-2011 |
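The placement rules in 20110153931 are concrete enough to simulate directly. In this sketch, free-list bookkeeping is reduced to a set of slot ids (slot == block id for brevity); everything else follows the abstract's read/write rules.

```python
# Illustrative mixed-placement policy: each block lives on exactly one
# device; reads promote HDD-resident blocks to the SSD and free the HDD
# copy; writes overwrite in place, or land on the HDD if the block is new.

ssd, hdd = {}, {}     # block -> data
hdd_free = set()      # freed HDD slots (slot == block id for brevity)

def read(block):
    if block in ssd:
        return ssd[block]       # SSD hit: just return the data
    data = hdd.pop(block)       # HDD hit: migrate the block to the SSD
    hdd_free.add(block)         # the HDD copy's slot returns to the free list
    ssd[block] = data
    return data

def write(block, data):
    if block in ssd:
        ssd[block] = data       # overwrite wherever the block already lives
    elif block in hdd:
        hdd[block] = data
    else:
        hdd[block] = data       # blocks new to the subsystem start on the HDD
        hdd_free.discard(block)

write(7, b"cold")
print(read(7), 7 in ssd, 7 in hdd)  # b'cold' True False
```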
20110274120 | NETWORK DATA PACKET FRAGMENTATION AND REASSEMBLY METHOD - The method determines whether a particular jumbo data packet benefits from fragmentation and reassembly management during communication through a network or networks. The method determines the best communication path, including path partners, between a sending information handling system (IHS) and a receiving IHS for the jumbo packet. A packet manager determines the maximum transmission unit (MTU) size for each path partner or switch in the communication path including the sending and receiving IHSs. The method provides transfer of the jumbo packets intact between those path partner switches of the communication path exhibiting MTUs sized for jumbo or larger packet transfer. The method provides fragmentation of jumbo packets into multiple normal packets for transfer between switches exhibiting normal packet MTU sizes. The packet manager reassembles multiple normal packets back into jumbo packets for those network devices, including the receiving IHS, capable of managing jumbo packets. | 11-10-2011 |
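The per-hop decision in 20110274120 can be sketched as a pass over the path's MTUs. The 9000/1500-byte sizes are common jumbo/normal values assumed for illustration, not figures from the application.

```python
# Illustrative per-hop plan: a jumbo packet travels intact across hops
# whose MTU admits it, is fragmented into normal packets elsewhere, and
# is reassembled for jumbo-capable receivers.

import math

JUMBO, NORMAL = 9000, 1500        # bytes; assumed sizes
path_mtus = [9000, 1500, 9000]    # MTU of each hop on the chosen path

def plan(path):
    return ["intact" if mtu >= JUMBO else "fragment" for mtu in path]

print(plan(path_mtus))                 # ['intact', 'fragment', 'intact']
print(math.ceil(JUMBO / NORMAL))       # 6 normal packets per fragmented jumbo
```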
20120096240 | Application Performance with Support for Re-Initiating Unconfirmed Software-Initiated Threads in Hardware - A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine. | 04-19-2012 |
20120096241 | Performance of Emerging Applications in a Virtualized Environment Using Transient Instruction Streams - A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache. | 04-19-2012 |
20120179873 | Performance of Emerging Applications in a Virtualized Environment Using Transient Instruction Streams - A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache. | 07-12-2012 |
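The fill rule described in the two abstracts above (20120096241 and 20120179873) amounts to a three-way decision. A minimal sketch, assuming an L1/L2 hierarchy:

```python
# Illustrative fill rule: transient fetches stay out of lower-level
# caches; non-transient misses fill L2 directly when L2 is inclusive of
# the L1 I-cache, and via L1 cast-outs when it is not.

def caches_to_fill(transient: bool, l2_inclusive: bool):
    if transient:
        return ["L1"]             # don't pollute the lower levels
    if l2_inclusive:
        return ["L1", "L2"]       # miss data written to both on fill
    return ["L1"]                 # L2 is filled later by L1 cast-outs

print(caches_to_fill(transient=True, l2_inclusive=True))    # ['L1']
print(caches_to_fill(transient=False, l2_inclusive=True))   # ['L1', 'L2']
```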
20120180052 | Application Performance with Support for Re-Initiating Unconfirmed Software-Initiated Threads in Hardware - A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine. | 07-12-2012 |
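The dispatch-fault flow in 20120180052 (and 20120096240 above) can be sketched as a two-step allocation. The per-core engine counts are invented for the example.

```python
# Illustrative flow: a prefetch stream request that finds no free engine
# on its core raises the dispatch-fault bit in the SPR; the OS then looks
# for a core with a free engine and re-initiates the stream there.

engines_free = {"core0": 0, "core1": 2}   # free prefetch engines per core
spr_fault = {"core0": False, "core1": False}

def initiate_stream(core):
    if engines_free[core] > 0:
        engines_free[core] -= 1
        return core                       # stream runs on the requesting core
    spr_fault[core] = True                # Prefetch Stream Dispatch Fault bit
    for other, free in engines_free.items():
        if free > 0:                      # OS migrates the request here
            engines_free[other] -= 1
            return other
    return None                           # no engine anywhere

print(initiate_stream("core0"), spr_fault["core0"])  # core1 True
```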
20120198121 | METHOD AND APPARATUS FOR MINIMIZING CACHE CONFLICT MISSES - A method for minimizing cache conflict misses is disclosed. A translation table capable of facilitating the translation of a virtual address to a real address during a cache access is provided. The translation table includes multiple entries, and each entry of the translation table includes a page number field and a hash value field. A hash value is generated from a first group of bits within a virtual address, and the hash value is stored in the hash value field of an entry within the translation table. In response to a match on the entry within the translation table during a cache access, the hash value of the matched entry is retrieved from the translation table, and the hash value is concatenated with a second group of bits within the virtual address to form a set of indexing bits to index into a cache set. | 08-02-2012 |
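The index computation in 20120198121 is easy to work through with concrete field widths. The widths, page size, and hash function below are assumptions chosen for a 256-set cache with 64-byte lines, not values from the application.

```python
# Illustrative set-index computation: a hash of the virtual page number is
# stored in the translation entry at fill time, then concatenated with
# lower virtual-address bits to pick the cache set on each access.

HASH_BITS, LOW_BITS = 4, 4          # 8 index bits -> 256 sets (assumed geometry)

def hash_upper(va):
    page = va >> 12                 # 4 KiB pages assumed
    h, x = 0, page
    while x:
        h ^= x & 0xF                # fold the page number into 4 bits
        x >>= 4
    return h

def set_index(va, stored_hash):
    low = (va >> 6) & 0xF           # bits just above the 64-byte line offset
    return (stored_hash << LOW_BITS) | low

va = 0x7F3A2B40
h = hash_upper(va)                  # computed once at translation fill time
print(hex(h), hex(set_index(va, h)))  # 0x3 0x3d
```

Storing the hash in the translation entry means the set index is ready as soon as the translation hits, so the extra hashing adds no latency to the cache access itself.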
20120198187 | Technique for preserving memory affinity in a non-uniform memory access data processing system - Techniques for preserving memory affinity in a computer system are disclosed. In response to a request for memory access to a page within a memory affinity domain, a determination is made if the request is initiated by a processor associated with the memory affinity domain. If the request is not initiated by a processor associated with the memory affinity domain, a determination is made if there is a page ID match with an entry within a page migration tracking module associated with the memory affinity domain. If there is no page ID match, an entry is selected within the page migration tracking module to be updated with a new page ID and a new memory affinity ID. If there is a page ID match, another determination is made as to whether there is a memory affinity ID match with the entry having the matching page ID. If there is no memory affinity ID match, the entry is updated with a new memory affinity ID; and if there is a memory affinity ID match, an access counter of the entry is incremented. | 08-02-2012 |
20120203878 | Method for Changing Ethernet MTU Size on Demand with No Data Loss - A method and system for substantially avoiding loss of data and enabling continuing connection to the application during an MTU size changing operation in an active network computing device. Logic is added to the device driver, which logic provides several enhancements to the MTU size changing operation/process. Among these enhancements are: (1) logic for temporarily pausing the data coming in from the linked partner while changing the MTU size; (2) logic for returning a “device busy” status to higher-protocol transmit requests during the MTU size changing process. This second logic prevents the application from issuing new requests until the busy signal is removed; and (3) logic for enabling resumption of both flows when the MTU size change is completed. With this new logic, the device driver/adapter does not have any transmit and receive packets to process for a short period of time, while the MTU size change is ongoing. | 08-09-2012 |
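The three enhancements in 20120203878 form a small state machine: pause, busy out, resume. The sketch below models that sequence; class and state names are hypothetical.

```python
# Illustrative MTU-change sequence: pause the link partner, report
# "device busy" to upper-protocol transmits while the change is in
# flight, then resume both flows with no data lost.

class Nic:
    def __init__(self, mtu=1500):
        self.mtu = mtu
        self.changing = False

    def transmit(self, frame):
        if self.changing:
            return "EBUSY"          # upper layers hold and retry; nothing is dropped
        return f"sent {len(frame)} bytes"

    def change_mtu(self, new_mtu):
        self.changing = True        # 1) pause inbound flow, busy out transmits
        # ... adapter hardware would be reprogrammed here ...
        self.mtu = new_mtu
        self.changing = False       # 2) resume both directions

nic = Nic()
nic.changing = True
print(nic.transmit(b"x" * 100))           # EBUSY while a change is in flight
nic.changing = False
nic.change_mtu(9000)
print(nic.mtu, nic.transmit(b"x" * 100))  # 9000 sent 100 bytes
```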
20120221812 | METHOD FOR PRESERVING MEMORY AFFINITY IN A NON-UNIFORM MEMORY ACCESS DATA PROCESSING SYSTEM - A method for preserving memory affinity in a computer system is disclosed. The method reduces and sometimes eliminates memory affinity loss due to process migration by restoring the proper memory affinity through dynamic page migration. The memory affinity access patterns of individual pages are tracked continuously. If a particular page is found almost always to be accessed from a particular remote access affinity domain a certain number of times, without any intervening requests from other access affinity domains, the page will migrate to that particular remote affinity domain so that subsequent memory accesses become local memory accesses. As a result, the proper pages are migrated to increase memory affinity. | 08-30-2012 |
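The migration rule in 20120221812 (and the tracking-module entries of 20120198187 above) reduces to a per-page streak counter. The threshold below is an assumption; the abstract only says "a certain number of times".

```python
# Illustrative tracking rule: migrate a page once it has been accessed
# MIGRATE_AFTER consecutive times from the same remote affinity domain,
# with no intervening access from any other domain.

MIGRATE_AFTER = 8
track = {}   # page -> [affinity_domain, consecutive_count]

def record_access(page, domain, home_domain):
    if domain == home_domain:
        track.pop(page, None)         # a local access resets the streak
        return None
    entry = track.setdefault(page, [domain, 0])
    if entry[0] != domain:            # a different remote domain restarts it
        entry[0], entry[1] = domain, 0
    entry[1] += 1
    if entry[1] >= MIGRATE_AFTER:
        del track[page]
        return domain                 # caller migrates the page to this domain
    return None

for _ in range(8):
    dest = record_access(page=42, domain=3, home_domain=0)
print(dest)  # 3: page 42 should move to domain 3
```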
20130111135 | VARIABLE CACHE LINE SIZE MANAGEMENT | 05-02-2013 |
20130111136 | VARIABLE CACHE LINE SIZE MANAGEMENT | 05-02-2013 |
20130151784 | DYNAMIC PRIORITIZATION OF CACHE ACCESS - Some embodiments of the inventive subject matter are directed to determining that a memory access request results in a cache miss and determining an amount of cache resources used to service cache misses within a past period in response to determining that the memory access request results in the cache miss. Some embodiments are further directed to determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed a threshold. In some embodiments, the threshold corresponds to reservation of a given amount of cache resources for potential cache hits. Some embodiments are further directed to rejecting the memory access request in response to determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed the threshold. | 06-13-2013 |
20130151788 | DYNAMIC PRIORITIZATION OF CACHE ACCESS - Some embodiments of the inventive subject matter are directed to a cache comprising a tracking unit and cache state machines. In some embodiments, the tracking unit is configured to track an amount of cache resources used to service cache misses within a past period. In some embodiments, each of the cache state machines is configured to, determine whether a memory access request results in a cache miss or cache hit, and in response to a cache miss for a memory access request, query the tracking unit for the amount of cache resources used to service cache misses within the past period. In some embodiments, the each of the cache state machines is configured to service the memory access request based, at least in part, on the amount of cache resources used to service the cache misses within the past period according to the tracking unit. | 06-13-2013 |
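The admission gate described in 20130151784 and 20130151788 can be sketched as a sliding-window budget. All numbers below are assumptions; the abstracts specify the mechanism, not the sizes.

```python
# Illustrative miss-admission gate: a tracking unit counts the cache
# resources consumed by misses in a sliding window; a new miss is
# rejected (to be retried) if servicing it would push that usage past a
# threshold that reserves resources for potential hits.

from collections import deque

WINDOW, TOTAL, RESERVED_FOR_HITS = 100, 32, 8  # cycles, buffers, buffers
miss_events = deque()                          # (cycle, buffers_used)

def miss_usage(now):
    while miss_events and miss_events[0][0] <= now - WINDOW:
        miss_events.popleft()                  # age out old events
    return sum(used for _, used in miss_events)

def admit_miss(now, cost=1):
    if miss_usage(now) + cost > TOTAL - RESERVED_FOR_HITS:
        return False                           # reject: keep headroom for hits
    miss_events.append((now, cost))
    return True

admitted = [admit_miss(now=t) for t in range(30)]
print(admitted.count(True), admitted.count(False))  # 24 6
```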
20130218892 | HYBRID STORAGE SUBSYSTEM WITH MIXED PLACEMENT OF FILE CONTENTS - A storage subsystem combining solid state drive (SSD) and hard disk drive (HDD) technologies provides low access latency and low complexity. Separate free lists are maintained for the SSD and the HDD, and blocks of file system data are stored uniquely on either the SSD or the HDD. When a read access is made to the subsystem, if the block is present on the SSD, the data is returned, but if the block is present on the HDD, it is migrated to the SSD and the block on the HDD is returned to the HDD free list. On a write access, if the block is present in either the SSD or the HDD, the block is overwritten, but if the block is not present in the subsystem, the block is written to the HDD. | 08-22-2013 |
20140095791 | PERFORMANCE-DRIVEN CACHE LINE MEMORY ACCESS - According to one aspect of the present disclosure a system and technique for performance-driven cache line memory access is disclosed. The system includes: a processor, a cache hierarchy coupled to the processor, and a memory coupled to the cache hierarchy. The system also includes logic executable to, responsive to receiving a request for a cache line: divide the request into a plurality of cache subline requests, wherein at least one of the cache subline requests comprises a high priority data request and at least one of the cache subline requests comprises a low priority data request; service the high priority data request; and delay servicing of the low priority data request until a low priority condition has been satisfied. | 04-03-2014 |
20140095796 | PERFORMANCE-DRIVEN CACHE LINE MEMORY ACCESS - According to one aspect of the present disclosure, a method and technique for performance-driven cache line memory access is disclosed. The method includes: receiving, by a memory controller of a data processing system, a request for a cache line; dividing the request into a plurality of cache subline requests, wherein at least one of the cache subline requests comprises a high priority data request and at least one of the cache subline requests comprises a low priority data request; servicing the high priority data request; and delaying servicing of the low priority data request until a low priority condition has been satisfied. | 04-03-2014 |
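The request split in 20140095791 and 20140095796 is mechanical once line and subline sizes are fixed. The 128/32-byte geometry below is an assumption for the sketch.

```python
# Illustrative subline split: a full-line request becomes subline
# requests; the critical (demanded) subline is serviced immediately and
# the rest are deferred until a low-priority condition is met.

LINE, SUBLINE = 128, 32  # bytes; assumed geometry

def split_request(line_addr, critical_offset):
    sublines = [line_addr + off for off in range(0, LINE, SUBLINE)]
    critical = line_addr + (critical_offset // SUBLINE) * SUBLINE
    high = [critical]                              # service at once
    low = [s for s in sublines if s != critical]   # defer until the bus is quiet
    return high, low

high, low = split_request(line_addr=0x1000, critical_offset=70)
print([hex(a) for a in high], [hex(a) for a in low])
# ['0x1040'] ['0x1000', '0x1020', '0x1060']
```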
20140258642 | DYNAMIC PRIORITIZATION OF CACHE ACCESS - Some embodiments of the inventive subject matter are directed to operations that include determining that an access request to a computer memory results in a cache miss. In some examples, the operations further include determining an amount of cache resources used to service additional cache misses that occurred within a period prior to the cache miss. Furthermore, in some examples, the operations further include servicing the access request to the computer memory based, at least in part, on the amount of the cache resources used to service the additional cache misses within the period prior to the cache miss. | 09-11-2014 |
20080232349 | Optimization of Network Adapter Utilization in EtherChannel Environment - Method, system and computer program product for transferring data in a data processing system network. A method for transferring data in a data processing system network according to the invention includes determining an adapter among a plurality of adapters that has the lowest transmit latency, and assigning data to be transferred to the adapter determined to have the lowest transmit latency. The data to be transferred is then transferred by the assigned adapter. The present invention utilizes network adapters to transfer data in a more efficient manner. | 09-25-2008 |
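The assignment rule in 20080232349 is a one-liner in practice: pick the EtherChannel member with the lowest current transmit latency. The adapter names and latency samples below are hypothetical.

```python
# Illustrative adapter selection: hand the next transfer to the adapter
# with the lowest measured transmit latency.

adapters = {"ent0": 42e-6, "ent1": 17e-6, "ent2": 88e-6}  # seconds, hypothetical

def pick_adapter(latencies):
    return min(latencies, key=latencies.get)

print(pick_adapter(adapters))  # ent1
```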
20090199216 | MULTI-LEVEL DRIVER CONFIGURATION - A method, medium and implementing processing system are provided in which the Operating System (OS) driver is divided into two parts, viz. an upper level OS driver and a lower level OS driver. The lower level OS driver sets up the adapter hardware and any adapter hardware work-around. The upper level OS driver is interfaced to the OS communication stack and each can be compiled separately. The upper OS driver is compiled and shipped with the OS to make sure it is compatible with the OS communication stack. The lower OS driver, in an exemplary embodiment, is compiled and stored in an adapter flash memory. The OS dynamically combines the upper and lower OS drivers together during the load time. | 08-06-2009 |
20120066688 | PROCESSOR THREAD LOAD BALANCING MANAGER - An operating system of an information handling system (IHS) determines a process tree of data sharing threads in an application that the IHS executes. A load balancing manager assigns a home processor to each thread of the executing application process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the executing application within the home processor of a processor group. If ready or run queues of a prospective poaching processor are empty, the load balancing manager may move or poach a thread or threads from the home processor ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the information handling system (IHS). | 03-15-2012 |
20120204188 | PROCESSOR THREAD LOAD BALANCING MANAGER - A processor thread load balancing manager employs an operating system of an information handling system (IHS) that determines a process tree of data sharing threads in an application that the IHS executes. The load balancing manager assigns a home processor to each thread of the executing application process tree and dispatches the process tree to the home processor. The load balancing manager determines whether a particular poaching processor of a virtual or real processor group is available to execute threads of the executing application within the home processor of a processor group. If ready or run queues of a prospective poaching processor are empty, the load balancing manager may move or poach a thread or threads from the home processor ready queue to the ready queue of the prospective poaching processor. The poaching processor executes the poached threads to provide load balancing to the information handling system (IHS). | 08-09-2012 |
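The poaching condition in 20120066688 and 20120204188 is explicit: both queues of the prospective poacher must be empty before it may take work. A minimal sketch with invented queue contents:

```python
# Illustrative poaching step: an idle processor (empty ready and run
# queues) takes threads from the home processor's ready queue.

from collections import deque

queues = {
    "home":    {"ready": deque(["t1", "t2", "t3"]), "run": deque(["t0"])},
    "poacher": {"ready": deque(), "run": deque()},
}

def try_poach(poacher, home, batch=1):
    p = queues[poacher]
    if p["ready"] or p["run"]:
        return []                   # only an idle processor may poach
    taken = []
    for _ in range(min(batch, len(queues[home]["ready"]))):
        thread = queues[home]["ready"].pop()   # steal from the tail
        p["ready"].append(thread)
        taken.append(thread)
    return taken

print(try_poach("poacher", "home"))  # ['t3']
```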
20120215982 | Partial Line Cache Write Injector for Direct Memory Access Write - A cache within a computer system receives a partial write request and identifies a cache hit of a cache line. The cache line corresponds to the partial write request and includes existing data. In turn, the cache receives partial write data and merges the partial write data with the existing data into the cache line. In one embodiment, the existing data is “modified” or “dirty.” In another embodiment, the existing data is “shared.” In this embodiment, the cache changes the state of the cache line to indicate the storing of the partial write data into the cache line. | 08-23-2012 |
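The merge in 20120215982 is byte-level: the partial write lands inside the existing line, and a shared line is promoted to record the change. The sketch uses MESI-style state names, which the abstract only partially spells out.

```python
# Illustrative partial-write injection: merge the DMA payload into the
# existing cache-line data and promote a Shared line to Modified to
# record the injection.

class CacheLine:
    def __init__(self, data, state):
        self.data = bytearray(data)
        self.state = state                  # 'M' (modified), 'S' (shared), ...

def inject_partial_write(line, offset, payload):
    line.data[offset:offset + len(payload)] = payload  # merge; rest is untouched
    if line.state == "S":
        line.state = "M"                    # mark the line changed
    return line

line = CacheLine(b"\x00" * 8, state="S")
inject_partial_write(line, offset=2, payload=b"\xAB\xCD")
print(line.data.hex(), line.state)          # 0000abcd00000000 M
```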