Patent application number | Description | Published |
--- | --- | --- |
20080270572 | Scalable computing apparatus - Disclosed are scalable computing pods that may be embodied in trailers, storage containers, or other portable structures that optimize computing, power, cooling and building infrastructure. The pods integrate required power and cooling infrastructure to provide a standalone turnkey computing solution. A user connects the pod to utility AC power and a data pipe. The scalable computing pods utilize liquid cooling, eliminate coolant conversions, and eliminate unnecessary power conversion to drastically improve efficiency. | 10-30-2008 |
20090094418 | SYSTEM AND METHOD FOR ACHIEVING CACHE COHERENCY WITHIN MULTIPROCESSOR COMPUTER SYSTEM - An embodiment of a multiprocessor computer system comprises main memory, a remote processor capable of accessing the main memory, a remote cache device operative to store accesses by said remote processor to said main memory, and a filter tag cache device associated with the main memory. The filter tag cache device is operative to store information relating to remote ownership of data in the main memory, including ownership by the remote processor. The filter tag cache device is operative to selectively invalidate filter tag cache entries when space is required in the filter tag cache device for new cache entries. The remote cache device is responsive to events indicating that a cache entry has low value to the remote processor to send a hint to the filter tag cache device. The filter tag cache device is responsive to a hint in selecting a filter tag cache entry to invalidate. | 04-09-2009 |
20110129225 | Optical Polymorphic Computer Systems - Embodiments of the present invention are directed to high-bandwidth, low-latency optical fabrics for broadcasting between nodes. In one embodiment, an optical fabric includes an optical communication path optically coupled to a broadcasting node and optically coupled to one or more broadcast receiving nodes. The optical fabric also includes a first optical element optically coupled to the optical communication path and configured to broadcast an optical signal generated by the broadcasting node onto the optical communication path, and one or more optical elements optically coupled to the optical communication path and configured to divert a portion of the broadcast optical signal onto each of the one or more receiving nodes. | 06-02-2011 |
20110179423 | MANAGING LATENCIES IN A MULTIPROCESSOR INTERCONNECT - In a computing system having a plurality of transaction source nodes issuing transactions into a switching fabric, an underserviced node notifies source nodes in the system that it needs additional system bandwidth to timely complete an ongoing transaction. The notified nodes continue to process already started transactions to completion, but stop the introduction of new traffic into the fabric until such time as the underserviced node indicates that it has progressed to a preselected point. | 07-21-2011 |
20110280513 | OPTICAL CONNECTOR INTERCONNECTION SYSTEM AND METHOD - A method for connecting adjacent computing board devices. A source computing board may be provided. An optical engine attaches to the source computing board. A plurality of source optical connectors couples to the optical engine. A first optical connector may be positioned at a location on the source computing board for a first preset type of computing component on an adjacent computing board. A second optical connector may be positioned at a fixed coordinate related to the first optical connector on the source computing board. | 11-17-2011 |
20120020663 | BUS-BASED SCALABLE OPTICAL FABRICS - Various embodiments of the present invention are directed to arrangements of multiple optical buses to create scalable optical interconnect fabrics for computer systems. | 01-26-2012 |
20130286825 | FEED-FORWARD ARBITRATION - Feed-forward arbitration is disclosed. An example method of feed-forward arbitration includes determining an aggregated measure of urgency of packets waiting in a queue. The method also includes sending the aggregated measure to switching node arbiters along the path that an urgent packet will take, biasing those arbiters in favor of the packet and thereby reducing backpressure along its path. | 10-31-2013 |
20140250274 | MAPPING PERSISTENT STORAGE - A computer apparatus and related method to access storage are provided. In one aspect, a controller maps an address range of a data block of storage into an accessible memory address range of at least one of a plurality of processors. In a further aspect, the controller ensures that copies of the data block cached in a plurality of memories by a plurality of processors are consistent. | 09-04-2014 |
20150081982 | SHIELDING A MEMORY DEVICE - A method of shielding a memory device is disclosed. | 03-19-2015 |
20150370721 | MAPPING MECHANISM FOR LARGE SHARED ADDRESS SPACES - The present disclosure provides techniques for mapping large shared address spaces in a computing system. A method includes creating a physical address map for each node in a computing system. Each physical address map maps the memory of a node. Each physical address map is copied to a single address map to form a global address map that maps all memory of the computing system. The global address map is shared with all nodes in the computing system. | 12-24-2015 |
20160050027 | OPTICAL CONNECTOR INTERCONNECTION SYSTEM AND METHOD - A method for connecting adjacent computing board devices. A source computing board may be provided. An optical engine attaches to the source computing board. A plurality of source optical connectors couples to the optical engine. A first optical connector may be positioned at a location on the source computing board for a first preset type of computing component on an adjacent computing board. A second optical connector may be positioned at a fixed coordinate related to the first optical connector on the source computing board. | 02-18-2016 |
20160054944 | EXTERNAL MEMORY CONTROLLER - A computing system is disclosed herein. The computing system includes a computing node and a remote memory node coupled to the computing node via a system fabric. The computing node includes a plurality of processors and a master memory controller. The master memory controller is external to the plurality of processors. The master memory controller routes requests corresponding to requests from the plurality of processors across the system fabric to the remote memory node and returns a response. | 02-25-2016 |
20160077985 | MULTI-MODE AGENT - According to an example, a multi-mode agent may include a processor interconnect (PI) interface to receive data from a processor and to selectively route the data to a node controller logic block, a central switch, or an optical interface based on one of a plurality of modes of operation of the multi-mode agent. The modes of operation may include a glueless mode, where the PI interface is to route the data directly to the optical interface, bypassing the node controller logic block and the central switch; a switched glueless mode, where the PI interface is to route the data directly to the central switch for routing to the optical interface, bypassing the node controller logic block; and a glued mode, where the PI interface is to route the data directly to the node controller logic block for routing to the central switch and further to the optical interface. | 03-17-2016 |
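
The three modes of operation summarized in application 20160077985 directly above amount to a simple routing decision at the PI interface. The C sketch below is only an illustration of that decision under assumed names (`agent_mode`, `route_from_pi`, and the destination labels are hypothetical), not the patented implementation.

```c
/* Hypothetical sketch of the three routing modes described in
 * application 20160077985; names and types are illustrative only. */
#include <stdio.h>

enum agent_mode   { GLUELESS, SWITCHED_GLUELESS, GLUED };
enum destination  { NODE_CONTROLLER, CENTRAL_SWITCH, OPTICAL_INTERFACE };

/* Decide the first hop for data arriving on the processor interconnect
 * (PI) interface, based on the agent's configured mode of operation. */
static enum destination route_from_pi(enum agent_mode mode)
{
    switch (mode) {
    case GLUELESS:
        /* Bypass the node controller logic block and the central
         * switch; hand the data straight to the optical interface. */
        return OPTICAL_INTERFACE;
    case SWITCHED_GLUELESS:
        /* Bypass the node controller logic block; the central switch
         * forwards the data on to the optical interface. */
        return CENTRAL_SWITCH;
    case GLUED:
    default:
        /* The node controller logic block handles the data first,
         * then forwards it to the central switch and on to the
         * optical interface. */
        return NODE_CONTROLLER;
    }
}

int main(void)
{
    const char *names[] = { "node controller", "central switch",
                            "optical interface" };
    printf("glueless first hop: %s\n", names[route_from_pi(GLUELESS)]);
    return 0;
}
```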
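
Similarly, application 20150370721 above describes copying each node's physical address map into one global address map that is then shared by every node. A minimal sketch of how such a map might be assembled is shown below; the `map_entry` structure, the node count, and the memory sizes are assumptions for illustration, not details from the filing.

```c
/* Hypothetical sketch of building a global address map from per-node
 * physical address maps, in the spirit of application 20150370721. */
#include <stdint.h>
#include <stdio.h>

#define MAX_NODES 4

struct map_entry {
    int      node_id;   /* node that owns the region                  */
    uint64_t base;      /* start of the region in the global space    */
    uint64_t length;    /* size of the region in bytes                */
};

int main(void)
{
    /* Assumed per-node memory sizes (what each node's own physical
     * address map would report). */
    uint64_t node_mem[MAX_NODES] = { 1ULL << 34, 1ULL << 34,
                                     1ULL << 35, 1ULL << 33 };
    struct map_entry global_map[MAX_NODES];

    /* Concatenate the per-node maps into one global map that every
     * node then shares, so any node can translate a global address
     * back to a (node, local offset) pair. */
    uint64_t next_base = 0;
    for (int n = 0; n < MAX_NODES; n++) {
        global_map[n].node_id = n;
        global_map[n].base    = next_base;
        global_map[n].length  = node_mem[n];
        next_base            += node_mem[n];
    }

    for (int n = 0; n < MAX_NODES; n++)
        printf("node %d: [0x%016llx, 0x%016llx)\n", n,
               (unsigned long long)global_map[n].base,
               (unsigned long long)(global_map[n].base +
                                    global_map[n].length));
    return 0;
}
```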
Patent application number | Description | Published |
--- | --- | --- |
20080270708 | System and Method for Achieving Cache Coherency Within Multiprocessor Computer System - A system and method are disclosed for achieving cache coherency in a multiprocessor computer system having a plurality of sockets with processing devices and memory controllers and a plurality of memory blocks. In at least some embodiments, the system includes a plurality of node controllers capable of being respectively coupled to the respective sockets of the multiprocessor computer, a plurality of caching devices respectively coupled to the respective node controllers, and a fabric coupling the respective node controllers, by which cache line request signals can be communicated between the respective node controllers. Cache coherency is achieved notwithstanding the cache line request signals communicated between the respective node controllers due at least in part to communications between the node controllers and the respective caching devices to which the node controllers are coupled. In at least some embodiments, the caching devices track remote cache line ownership for processor and/or input/output hub caches. | 10-30-2008 |
20080270743 | System and Method for Achieving Enhanced Memory Access Capabilities - A computer system, related components such as a processor agent, and related method are disclosed. In at least one embodiment, the computer system includes a first core, at least one memory device including a first memory segment, and a first memory controller coupled to the first memory segment. Further, the computer system includes a fabric and at least one processor agent coupled at least indirectly to the first core and the first memory segment, and also coupled to the fabric. A first memory request of the first core in relation to a first memory location within the first memory segment proceeds to the first memory controller by way of the at least one processor agent and the fabric. | 10-30-2008 |
20110274391 | SYSTEM AND METHODS FOR ROUTING OPTICAL SIGNALS | 11-10-2011 |
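
Applications 20090094418 and 20080270708 listed above both track remote ownership of memory blocks and let a remote cache send a low-value hint that guides which filter tag entry to invalidate when space is needed. The toy sketch below illustrates that interaction; the table size, fields, and eviction preference are assumptions rather than the claimed design.

```c
/* Hypothetical sketch of a filter tag cache that records remote
 * ownership of memory blocks and prefers to evict entries the remote
 * cache has hinted are of low value. Sizes, fields, and the eviction
 * order are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FILTER_ENTRIES 8

struct filter_tag {
    bool     valid;
    bool     low_value_hint;   /* set when the remote cache hints    */
    int      remote_owner;     /* id of the owning remote processor  */
    uint64_t block_addr;       /* address of the owned memory block  */
};

static struct filter_tag filter[FILTER_ENTRIES];

/* Remote cache signals that it no longer values this block. */
static void receive_hint(uint64_t block_addr)
{
    for (int i = 0; i < FILTER_ENTRIES; i++)
        if (filter[i].valid && filter[i].block_addr == block_addr)
            filter[i].low_value_hint = true;
}

/* Record a new remote ownership; prefer a free slot, then an entry
 * the remote cache has hinted is of low value. */
static void record_ownership(int owner, uint64_t block_addr)
{
    int victim = -1;
    for (int i = 0; i < FILTER_ENTRIES; i++) {
        if (!filter[i].valid) { victim = i; break; }   /* free slot   */
        if (filter[i].low_value_hint && victim < 0)
            victim = i;                                /* hinted entry */
    }
    if (victim < 0)
        victim = 0;   /* no free or hinted entry: fall back to index 0 */
    filter[victim] = (struct filter_tag){ true, false, owner, block_addr };
}

int main(void)
{
    record_ownership(1, 0x1000);
    receive_hint(0x1000);
    record_ownership(2, 0x2000);
    printf("entry 0 owner: %d\n", filter[0].remote_owner);
    return 0;
}
```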