Patent application number | Description | Published |
20090105808 | Rapid Exchange Stent Delivery System and Associated Components - A rapid exchange stent delivery catheter includes an inner tubular member having a proximal portion, a distal portion, a stent holding portion located adjacent the distal portion of the inner member, and a guide wire lumen extending from a proximal guide wire opening disposed distal of the proximal portion of the inner member to a distal guide wire opening disposed at a distal end of the inner member. The proximal guide wire opening has a first length. An outer tubular member is slidably disposed about the inner member. The outer member has a proximal portion, a distal portion, and a guide wire opening disposed distal of the proximal portion of the outer member. The guide wire opening of the outer member has a second length that is shorter than the first length, and a guide wire ramp extends into, and is movable along, the first length. | 04-23-2009 |
20090312702 | BIFURCATION CATHETER ASSEMBLY WITH DISTALLY MOUNTED SIDE BALLOON AND METHODS - A catheter assembly and related methods directed to a main balloon and a side balloon, wherein the side balloon is coupled in fluid communication with the main balloon at a location distal of the side balloon. In one example, a side inflation member couples the side balloon in fluid communication to the main balloon at a distal end portion of the main balloon. A side catheter branch of the catheter assembly, which defines a side guidewire lumen, can be operatively mounted to the side balloon at a side balloon connection point to help maintain alignment of the side catheter branch relative to the side balloon. | 12-17-2009 |
20100030316 | BIFURCATION CATHETER DUAL BALLOON BOND AND METHODS - A catheter assembly includes a main catheter branch having a catheter shaft and a distal end portion. A main balloon and a side balloon are positioned at the distal end portion of the catheter shaft. The main balloon includes a distal waist portion at a distal end thereof and a proximal waist portion at a proximal end thereof. The side balloon includes an inflatable portion, a proximal waist portion, and a distal waist portion, wherein the proximal and distal waist portions define a side inflation lumen. The proximal waist portion of the side balloon and the proximal waist portion of the main balloon are secured to the distal end portion of the catheter shaft at a single bond or connection point to create a proximal balloon joint, wherein the main inflation lumen is in fluid communication with the main balloon and the side inflation lumen. | 02-04-2010 |
20100036477 | STENT EDGE PROTECTION AND METHODS - A catheter assembly and related methods directed to stent edge protection for an edge of a stent. One example stent edge protect member is positioned with a distal end portion of the stent edge protect member arranged proximal of and adjacent to a proximal end of the stent. The stent edge protect member defines an outer surface that transitions from an outer surface of the stent at the proximal end of the stent to an outer surface of the catheter branch on which the stent edge protect member is positioned. The stent edge protect member can be positioned on a single catheter branch or multiple catheter branches. | 02-11-2010 |
20120209367 | BIFURCATION CATHETER DUAL BALLOON BOND AND METHODS - A catheter assembly includes a main catheter branch having a catheter shaft and a distal end portion. A main balloon and a side balloon are positioned at the distal end portion of the catheter shaft. The main balloon includes a distal waist portion at a distal end thereof and a proximal waist portion at a proximal end thereof. The side balloon includes an inflatable portion, a proximal waist portion, and a distal waist portion, wherein the proximal and distal waist portions define a side inflation lumen. The proximal waist portion of the side balloon and the proximal waist portion of the main balloon are secured to the distal end portion of the catheter shaft at a single bond or connection point to create a proximal balloon joint, wherein the main inflation lumen is in fluid communication with the main balloon and the side inflation lumen. | 08-16-2012 |
Patent application number | Description | Published |
20110145130 | GLOBAL ELECTRONIC TRADING SYSTEM - Methods, systems, and computer readable media for facilitating trading two items (L,Q) from the group of items comprising commodities and financial instruments. At least two agents ( | 06-16-2011 |
20120041865 | TRADING USING INTERMEDIATE ENTITIES - Systems and methods for facilitating trades between two trading entities are disclosed. A computer system may match a bid of a first trading entity for an item with an offer of a second trading entity for the item. The first and second trading entities may each have a credit relationship with a third trading entity. In response to the matching, the computer system may record indications of trades of the item between the first trading entity and the third trading entity, and between the third trading entity and another trading entity such as the second trading entity. The trades may be booked back-to-back such that a net position to the third trading entity in the item is zero. When the first and second trading entities are connected by a plurality of intermediate entities, the computer system may record indications of one or more additional trades of the item. | 02-16-2012 |
20120041866 | TRADING SYSTEM WITH INDIVIDUALIZED ORDER BOOKS - Systems and methods for electronic trading are disclosed. A trading system may store information indicative of limits on trading of items between trading entities, including an entity that is a non-credit extending entity. The computer system may then determine respective order books for at least two trading entities, where the order books include dealable bids and offers that have been individualized using stored trading limits. The stored trading limits may in some cases include different limits for different items (which may be different foreign currency pairs, in one embodiment). In other instances, trading limits may be indicative of a net position that a trading entity is permitted to take in an item. Bids and offers may be individualized based on different costs associated with different trading entities. | 02-16-2012 |
20120041867 | APPLICATION PROGRAMMING INTERFACE FOR TRADING SYSTEM - A trading system with an application programming interface (API) is disclosed. The API includes a set of routines executable to permit client computer systems to automatically make and take orders for items. The API can permit, for example, machine-to-machine communication that automatically posts an order to the trading system or automatically hits an order that has previously been posted to the trading system. The API can also permit a variety of other functions, including reformatting limit order books. The trading system may also implement a graphical user interface (GUI). In one embodiment, the items may be foreign exchange instruments. | 02-16-2012 |
20120041868 | AGGREGATION OF TRADING ORDERS - Systems and methods for generating limit order books are disclosed. A computer system may receive, from a plurality of trading entities, orders that are specified using a machine-to-machine communication protocol. The computer system may select two or more of the received orders, including orders from at least two different ones of the plurality of trading entities, and then generate a limit order book that includes the selected orders. The computer system may then convey the limit order book to a graphical user interface of a trader. In one embodiment, the orders may be for foreign exchange instruments. | 02-16-2012 |
20120041869 | AUTOMATED TRADING - Systems and methods for automated trading are disclosed. In one embodiment, a computer system may execute program instructions to generate dealable prices at which a first trading entity is willing to buy and/or sell an item. The system may then communicate the generated prices from the computer system to a trading system, causing the trading system to post the communicated prices. In another embodiment, the computer system may execute program instructions to determine to hit dealable prices for items posted to the trading system. For example, these actions may be performed for spot trades of a foreign currency pair, without requiring the use of a graphical user interface. In other embodiments, the computer system may use received prices to automatically generate a pricing forecast. | 02-16-2012 |
20120041894 | REQUESTS FOR QUOTES FROM INDIRECT CREDIT LINES - Systems and methods for processing requests for quotes (RFQs) are disclosed. A computer system may receive a request for quote from a first trading entity. The computer system may then obtain quotes responsive to the RFQ. The obtained quotes include a quote from a second trading entity that has an indirect credit relationship to the first trading entity. The computer system then provides at least one of the obtained quotes to the first trading entity. The request for quote may be for a spot trade of a foreign exchange item, for example. | 02-16-2012 |
20150332399 | APPLICATION PROGRAMMING INTERFACE FOR TRADING SYSTEM - A trading system with an application programming interface (API) is disclosed. The API includes a set of routines executable to permit client computer systems to automatically make and take orders for items. The API can permit, for example, machine-to-machine communication that automatically posts an order to the trading system or automatically hits an order that has previously been posted to the trading system. The API can also permit a variety of other functions, including reformatting limit order books. The trading system may also implement a graphical user interface (GUI). In one embodiment, the items may be foreign exchange instruments. | 11-19-2015 |
20150379638 | TRADING SYSTEM WITH INDIVIDUALIZED ORDER BOOKS - Systems and methods for electronic trading are disclosed. A trading system may store information indicative of limits on trading of items between trading entities, including an entity that is a non-credit extending entity. The computer system may then determine respective order books for at least two trading entities, where the order books include dealable bids and offers that have been individualized using stored trading limits. The stored trading limits may in some cases include different limits for different items (which may be different foreign currency pairs, in one embodiment). In other instances, trading limits may be indicative of a net position that a trading entity is permitted to take in an item. Bids and offers may be individualized based on different costs associated with different trading entities. | 12-31-2015 |
20150379639 | TRADING USING INTERMEDIATE ENTITIES - Systems and methods for facilitating trades between two trading entities are disclosed. A computer system may match a bid of a first trading entity for an item with an offer of a second trading entity for the item. The first and second trading entities may each have a credit relationship with a third trading entity. In response to the matching, the computer system may record indications of trades of the item between the first trading entity and the third trading entity, and between the third trading entity and another trading entity such as the second trading entity. The trades may be booked back-to-back such that a net position to the third trading entity in the item is zero. When the first and second trading entities are connected by a plurality of intermediate entities, the computer system may record indications of one or more additional trades of the item. | 12-31-2015 |
20160048916 | AGGREGATION OF TRADING ORDERS - Systems and methods for generating limit order books are disclosed. A computer system may receive, from a plurality of trading entities, orders that are specified using a machine-to-machine communication protocol. The computer system may select two or more of the received orders, including orders from at least two different ones of the plurality of trading entities, and then generate a limit order book that includes the selected orders. The computer system may then convey the limit order book to a graphical user interface of a trader. In one embodiment, the orders may be for foreign exchange instruments. | 02-18-2016 |
20160048917 | AUTOMATED TRADING - Systems and methods for automated trading are disclosed. In one embodiment, a computer system may execute program instructions to generate dealable prices at which a first trading entity is willing to buy and/or sell an item. The system may then communicate the generated prices from the computer system to a trading system, causing the trading system to post the communicated prices. In another embodiment, the computer system may execute program instructions to determine to hit dealable prices for items posted to the trading system. For example, these actions may be performed for spot trades of a foreign currency pair, without requiring the use of a graphical user interface. In other embodiments, the computer system may use received prices to automatically generate a pricing forecast. | 02-18-2016 |
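The back-to-back booking described in the intermediated-trading abstracts above, where a matched trade is recorded as two trades through a third entity so that entity's net position in the item is zero, can be sketched as follows. This is an illustrative sketch only: the entity names, the `EUR/USD` item, and the dict-based trade records are assumptions, not details from the applications.

```python
def book_back_to_back(buyer, seller, intermediary, item, qty, price):
    """Record a matched trade as two back-to-back trades through an
    intermediary, so the intermediary's net position in the item is zero."""
    # The buyer buys from the intermediary; the intermediary buys from the seller.
    return [
        {"buyer": buyer, "seller": intermediary, "item": item, "qty": qty, "price": price},
        {"buyer": intermediary, "seller": seller, "item": item, "qty": qty, "price": price},
    ]

def net_position(trades, entity, item):
    """Sum an entity's bought quantity minus sold quantity for one item."""
    pos = 0
    for t in trades:
        if t["item"] != item:
            continue
        if t["buyer"] == entity:
            pos += t["qty"]
        if t["seller"] == entity:
            pos -= t["qty"]
    return pos

trades = book_back_to_back("A", "B", "PrimeBank", "EUR/USD", 1_000_000, 1.0850)
assert net_position(trades, "PrimeBank", "EUR/USD") == 0  # intermediary is flat
```

With a chain of intermediate entities, the same pattern would simply repeat once per hop, leaving every intermediary flat.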
Patent application number | Description | Published |
20080314044 | HEAT SHIELDS FOR USE IN COMBUSTORS - A combustor includes an inner liner; an outer liner circumscribing the inner liner and forming a combustion chamber with the inner liner; a combustor dome coupled to the inner and outer liners; and a plurality of heat shields coupled to the combustor dome. Each of the heat shields includes a heat shield plate defined by a first edge facing the inner liner and a second edge facing the outer liner; and a plurality of baffles extending from the heat shield plate. Each of the plurality of baffles includes two ribs and a connection portion connecting the two ribs to form a closed portion and an opposite open portion. The open portion of each of the plurality of baffles faces the first edge or the second edge. | 12-25-2008 |
20100162712 | QUENCH JET ARRANGEMENT FOR ANNULAR RICH-QUENCH-LEAN GAS TURBINE COMBUSTORS - A combustor for a turbine engine includes an outer liner having a first group of air admission holes and defining a plurality of outer liner regions. The combustor further includes an inner liner circumscribed by the outer liner and forming a combustion chamber therebetween, the inner liner having a second group of air admission holes and defining a plurality of inner liner regions. The combustor further includes a plurality of fuel injectors extending into the combustion chamber and configured to deliver an air-fuel mixture to the combustion chamber, each of the plurality of fuel injectors being associated with one of the outer liner regions and one of the inner liner regions. The first group within a respective outer liner region includes air admission holes that circumferentially alternate between approximately a first size and approximately a second size, the first size being different than the second size. | 07-01-2010 |
20100212324 | DUAL WALLED COMBUSTORS WITH IMPINGEMENT COOLED IGNITERS - A combustor for a gas turbine engine includes an inner liner and an outer liner circumscribing the inner liner and forming a combustion chamber with the inner liner. The outer liner is a dual walled liner with a first wall and a second wall. The combustor includes a fuel igniter comprising a tip portion configured to ignite an air and fuel mixture in the combustion chamber and an igniter tube positioning the fuel igniter relative to the combustion chamber. The igniter tube includes a plurality of holes configured to direct cooling air toward the tip portion of the fuel igniter. | 08-26-2010 |
20100218503 | PLUNGED HOLE ARRANGEMENT FOR ANNULAR RICH-QUENCH-LEAN GAS TURBINE COMBUSTORS - A combustor may include an outer liner having a first row and a second row of circumferentially distributed air admission holes. The second row of the outer liner may be downstream of the first row of the outer liner, and the air admission holes of the first row of the outer liner may be larger than the air admission holes of the second row of the outer liner. An inner liner is circumscribed by the outer liner and has a third and fourth row of circumferentially distributed air admission holes. The air admission holes of the third row of the inner liner may be larger than the air admission holes of the fourth row of the inner liner. The inner and outer liners may form a combustion chamber, and at least a portion of the air admission holes of the first, second, third, or fourth rows may be plunged. | 09-02-2010 |
20100218504 | ANNULAR RICH-QUENCH-LEAN GAS TURBINE COMBUSTORS WITH PLUNGED HOLES - A combustor may include an outer liner having a first group of air admission holes and defining a plurality of outer liner regions. The combustor may further include an inner liner having a second group of air admission holes and defining a plurality of inner liner regions. The first group of air admission holes within a respective outer liner region may include a first plunged air admission hole approximately axially aligned with the respective fuel injector, a second plunged air admission hole approximately on the outer boundary line between the respective outer liner region and a first adjacent outer liner region, the first air admission hole being downstream of the second air admission hole, and a third plunged air admission hole approximately on the outer boundary line between the respective outer liner region and a second adjacent outer liner region. | 09-02-2010 |
20110185739 | GAS TURBINE COMBUSTORS WITH DUAL WALLED LINERS - A combustor for a turbine engine includes a hot wall and a cold wall forming a dual walled liner and a liner cavity with the hot wall. The cold wall defines a plurality of impingement cooling holes configured to deliver an impingement cooling flow. A first downstream end terminates the hot wall and is configured to receive the impingement cooling flow from the plurality of impingement cooling holes, and a second downstream end terminates the cold wall and is longer in a generally downstream direction than the first downstream end. A combustion chamber is formed with the dual walled liner, and the liner cavity faces an opposite side of the hot wall relative to the combustion chamber. The combustion chamber has a longitudinal axis and is configured to receive an air-fuel mixture in the generally downstream direction along the longitudinal axis. | 08-04-2011 |
20110219774 | CIRCUMFERENTIALLY VARIED QUENCH JET ARRANGEMENT FOR GAS TURBINE COMBUSTORS - A combustor for a turbine engine is provided. The combustor includes a first liner; a second liner positioned relative to the first liner to form a combustion chamber therebetween, the combustion chamber configured to receive a fuel-air mixture; an igniter positioned relative to the combustion chamber and configured to ignite the fuel-air mixture; a first group of air admission holes positioned in the first liner and forming a regular circumferential pattern around the first liner; and a second group of air admission holes positioned in the first liner at a first circumferential position corresponding to the igniter, the second group of air admission holes departing from the regular circumferential pattern. | 09-15-2011 |
20150113993 | GAS TURBINE ENGINES HAVING FUEL INJECTOR SHROUDS WITH INTERIOR RIBS - A fuel injector assembly includes a fuel injector and a fuel injector shroud housing the fuel injector. The fuel injector includes a body and a nozzle coupled to the body. The fuel injector shroud includes a swirler device defining a center opening proximate to the nozzle of the fuel injector and a plurality of swirler holes surrounding the center opening, a body section with an air inlet configured to admit a flow of air into the fuel injector shroud and a dome section defining a mount for securing the swirler device to the body section, and at least one interior rib positioned on an interior surface of the dome section configured to direct the flow of air to the swirler holes of the swirler device such that the flow of air exiting through the swirler device is mixed with a flow of fuel exiting the nozzle. | 04-30-2015 |
Patent application number | Description | Published |
20110320700 | Concurrent Refresh In Cache Memory - Concurrent refresh in a cache memory includes calculating a refresh time interval at a centralized refresh controller, the centralized refresh controller being common to all cache memory banks of the cache memory, transmitting a starting time of the refresh time interval to a bank controller, the bank controller being local to, and associated with, only one cache memory bank of the cache memory, sampling a continuous refresh status indicative of a number of refreshes necessary to maintain data within the cache memory bank associated with the bank controller, requesting a gap in a processing pipeline of the cache memory to facilitate the number of refreshes necessary, receiving a refresh grant in response to the requesting, and transmitting an encoded refresh command to the bank controller, the encoded refresh command indicating a number of refresh operations granted to the cache memory bank associated with the bank controller. | 12-29-2011 |
20110320701 | OPTIMIZING EDRAM REFRESH RATES IN A HIGH PERFORMANCE CACHE ARCHITECTURE - Optimizing refresh request transmission rates in a high performance cache comprising: a refresh requestor configured to transmit a refresh request to a cache memory at a first refresh rate, the first refresh rate comprising an interval, the interval comprising receiving a plurality of first signals, the first refresh rate corresponding to a maximum refresh rate, and a refresh counter operatively coupled to the refresh requestor and configured to reset in response to receiving a second signal, increment in response to receiving each of a plurality of refresh requests from the refresh requestor, and reset and transmit a current count to the refresh requestor in response to receiving a third signal, wherein the refresh requestor is configured to transmit a refresh request at a second refresh rate, in response to receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold. | 12-29-2011 |
20110320729 | CACHE BANK MODELING WITH VARIABLE ACCESS AND BUSY TIMES - Various embodiments of the present invention manage access to a cache memory. In one embodiment, a set of cache bank availability vectors are generated based on a current set of cache access requests currently operating on a set of cache banks and at least a variable busy time of a cache memory that includes the set of cache banks. The set of cache bank availability vectors indicate an availability of the set of cache banks. A set of cache access requests for accessing a set of given cache banks within the set of cache banks is received. At least one cache access request in the set of cache access requests is selected to access a given cache bank based on the cache bank availability vectors associated with the given cache bank and the set of access request parameters associated with the at least one cache access request that has been selected. | 12-29-2011 |
20110320730 | NON-BLOCKING DATA MOVE DESIGN - A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to the one of the buffer regions of the cache according to an instruction of the processor. The data block is stored from the one of the buffer regions of the cache to the memory. | 12-29-2011 |
20110320778 | CENTRALIZED SERIALIZATION OF REQUESTS IN A MULTIPROCESSOR SYSTEM - Serializing instructions in a multiprocessor system includes receiving a plurality of processor requests at a central point in the multiprocessor system. Each of the plurality of processor requests includes a needs register having a requestor needs switch and a resource needs switch. The method also includes establishing a tail switch indicating the presence of the plurality of processor requests at the central point, establishing a sequential order of the plurality of processor requests, and processing the plurality of processor requests at the central point in the sequential order. | 12-29-2011 |
20110320862 | Edram Macro Disablement in Cache Memory - Embedded dynamic random access memory (EDRAM) macro disablement in a cache memory includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros, the EDRAM macro being associated with one of the at least three rows, iteratively testing each line of the EDRAM macro, the testing including attempting at least one write operation at each line of the EDRAM macro, determining if an error occurred during the testing, and disabling write operations for an entire row of EDRAM macros associated with the EDRAM macro based on the determining. | 12-29-2011 |
20120210070 | NON-BLOCKING DATA MOVE DESIGN - A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to the one of the buffer regions of the cache according to an instruction of the processor. The data block is stored from the one of the buffer regions of the cache to the memory. | 08-16-2012 |
20120278548 | OPTIMIZING EDRAM REFRESH RATES IN A HIGH PERFORMANCE CACHE ARCHITECTURE - Optimizing EDRAM refresh rates in a high performance cache architecture. An aspect of the invention includes receiving a plurality of first signals. A refresh request is transmitted via a refresh requestor to a cache memory at a first refresh rate which includes an interval, including a subset of the first signals. The first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset based on receiving a second signal. The refresh counter is incremented after receiving each of a number of refresh requests. A current count is transmitted from a refresh counter to the refresh requestor based on receiving a third signal. The refresh request is transmitted at a second refresh rate, which is less than the first refresh rate. The refresh request is transmitted based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold. | 11-01-2012 |
20130042144 | EDRAM MACRO DISABLEMENT IN CACHE MEMORY - A computer implemented method of embedded dynamic random access memory (EDRAM) macro disablement. The method includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros, the EDRAM macro being associated with one of the at least three rows. Each line of the EDRAM macro is iteratively tested, the testing including attempting at least one write operation at each line of the EDRAM macro. It is determined that an error occurred during the testing. Write operations for an entire row of EDRAM macros associated with the EDRAM macro are disabled based on the determining. | 02-14-2013 |
20130061001 | SYSTEM REFRESH IN CACHE MEMORY - System refresh in a cache memory that includes generating a refresh time period (RTIM) pulse at a centralized refresh controller of the cache memory and activating a refresh request at the centralized refresh controller based on generating the RTIM pulse. The refresh request is associated with a single cache memory bank of the cache memory. A refresh grant is received and transmitted to a bank controller. The bank controller is associated with and localized at the single cache memory bank of the cache memory. | 03-07-2013 |
20130339608 | MULTILEVEL CACHE HIERARCHY FOR FINDING A CACHE LINE ON A REMOTE NODE - Embodiments relate to accessing a cache line on a multi-level cache system having a system memory. Based on a request for exclusive ownership of a specific cache line at the local node, requests are concurrently sent to the system memory and remote nodes of the plurality of nodes for the specific cache line by the local node. The specific cache line is found in a specific remote node. The specific remote node is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node. Based on the specific node having the specific cache line in a ghost state, any subsequent fetch request initiated for the specific cache line from the specific node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes. | 12-19-2013 |
20130339609 | MULTILEVEL CACHE HIERARCHY FOR FINDING A CACHE LINE ON A REMOTE NODE - Embodiments relate to accessing a cache line on a multi-level cache system having a system memory. Based on a request for exclusive ownership of a specific cache line at the local node, requests are concurrently sent to the system memory and remote nodes of the plurality of nodes for the specific cache line by the local node. The specific cache line is found in a specific remote node. The specific remote node is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node. Based on the specific node having the specific cache line in a ghost state, any subsequent fetch request initiated for the specific cache line from the specific node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes. | 12-19-2013 |
20130339613 | STORING DATA IN A SYSTEM MEMORY FOR A SUBSEQUENT CACHE FLUSH - Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the specific corresponding cache entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified. | 12-19-2013 |
20130339623 | CACHE COHERENCY PROTOCOL FOR ALLOWING PARALLEL DATA FETCHES AND EVICTION TO THE SAME ADDRESSABLE INDEX - A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class. | 12-19-2013 |
20130339785 | DYNAMIC CACHE CORRECTION MECHANISM TO ALLOW CONSTANT ACCESS TO ADDRESSABLE INDEX - A technique is provided for a cache. A cache controller accesses a set in a congruence class and determines that the set contains corrupted data based on an error being found. The cache controller determines that a delete parameter for taking the set offline is met and determines that a number of currently offline sets in the congruence class is higher than an allowable offline number threshold. The cache controller determines not to take the set in which the error was found offline based on determining that the number of currently offline sets in the congruence class is higher than the allowable offline number threshold. | 12-19-2013 |
20140082289 | STORING DATA IN A SYSTEM MEMORY FOR A SUBSEQUENT CACHE FLUSH - Embodiments relate to storing data to a system memory. An aspect includes accessing, by a stepper engine, successive entries of a cache directory having a plurality of directory entries, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state indicating that it is modified. A store operation is performed to send a copy of the cache entry corresponding to the specific directory entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified. | 03-20-2014 |
20140095926 | DYNAMIC CACHE CORRECTION MECHANISM TO ALLOW CONSTANT ACCESS TO ADDRESSABLE INDEX - A technique is provided for a cache. A cache controller accesses a set in a congruence class and determines that the set contains corrupted data based on an error being found. The cache controller determines that a delete parameter for taking the set offline is met and determines that a number of currently offline sets in the congruence class is higher than an allowable offline number threshold. The cache controller determines not to take the set in which the error was found offline based on determining that the number of currently offline sets in the congruence class is higher than the allowable offline number threshold. | 04-03-2014 |
20140258621 | NON-DATA INCLUSIVE COHERENT (NIC) DIRECTORY FOR CACHE - Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of a first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache. | 09-11-2014 |
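The NIC-directory install path can be sketched in a few lines: when the highest-level cache evicts a line that a lower-level cache still owns, and the NIC directory has room, the NIC directory records the address only (no data), and the evicted entry is invalidated. Function and variable names here are assumptions for illustration.

```python
# Hypothetical sketch of the NIC-directory eviction flow. The NIC
# directory is "non-data inclusive": it installs only the address
# of the evicted line, while the lower-level cache keeps the data.

def handle_eviction(address, nic_directory, nic_capacity,
                    lower_level_owned, highest_level_cache):
    """On eviction from the highest-level cache, install the
    address in the NIC directory if there is room and a
    lower-level cache owns the line; then invalidate the entry."""
    if len(nic_directory) < nic_capacity and lower_level_owned:
        nic_directory.add(address)        # address-only entry, no data
    highest_level_cache.discard(address)  # invalidate evicted entry

nic = set()
l_highest = {0x40, 0x80}  # addresses resident in the top-level cache
handle_eviction(0x40, nic, nic_capacity=8,
                lower_level_owned=True, highest_level_cache=l_highest)
```

Because the NIC directory tracks only addresses, coherence for lower-level-owned lines survives the eviction without duplicating their data at the top level.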
20150058569 | NON-DATA INCLUSIVE COHERENT (NIC) DIRECTORY FOR CACHE - Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of a first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache. | 02-26-2015 |
20160110287 | GRANTING EXCLUSIVE CACHE ACCESS USING LOCALITY CACHE COHERENCY STATE - A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line. The determination is independent of transmission of information relating to the cache line from one or more other nodes of one or more other regions of nodes. | 04-21-2016 |
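The latency saving in this abstract comes from a local-only check: if the locality state of a peer in the same region shows that peer has access to the line, ownership is already confined to the region, so exclusivity can be granted without waiting on remote regions. A minimal sketch, assuming hypothetical node records and a `locality_state` map; none of these names come from the application.

```python
# Illustrative locality fast path: grant exclusive access based
# solely on the locality cache coherency state of a local peer,
# with no messages to nodes in other regions.

def grant_exclusive(requester, peer, locality_state):
    """Return True when the requester may take the line exclusively
    using only the local peer's locality state."""
    same_region = requester["region"] == peer["region"]
    # The locality state is specific to the peer node: if that peer
    # has access, the line's ownership is confined to this region
    # and no remote-region confirmation is needed.
    return same_region and locality_state[peer["id"]] == "has_access"

node_a = {"id": "A", "region": 0}   # requesting node
node_b = {"id": "B", "region": 0}   # its local peer
state = {"B": "has_access"}         # locality state for the peer
granted = grant_exclusive(node_a, node_b, state)  # local fast path
```

When the peer's state does not show access, the facility falls back to the normal (slower) protocol involving the other regions, which this sketch omits.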
20160110288 | GRANTING EXCLUSIVE CACHE ACCESS USING LOCALITY CACHE COHERENCY STATE - A cache coherency management facility to reduce latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the node and the another node are local to one another as defined by predetermined criteria. The node requesting exclusive access checks a locality cache coherency state of the another node, the locality cache coherency state being specific to the another node and indicating whether the another node has access to the cache line. Based on the checking indicating that the another node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line. The determination is independent of transmission of information relating to the cache line from one or more other nodes of one or more other regions of nodes. | 04-21-2016 |