25th week of 2014 patent application highlights part 80
Patent application number | Title and abstract | Published |
20140173164 | Providing A Load/Store Communication Protocol With A Low Power Physical Unit - In one embodiment, a converged protocol stack can be used to unify communications from a first communication protocol to a second communication protocol to provide for data transfer across a physical interconnect. This stack can be incorporated in an apparatus that includes a protocol stack for a first communication protocol including transaction and link layers, and a physical (PHY) unit coupled to the protocol stack to provide communication between the apparatus and a device coupled to the apparatus via a physical link. This PHY unit may include a physical unit circuit according to the second communication protocol. Other embodiments are described and claimed. | 2014-06-19 |
20140173165 | Expander for Loop Architectures - An expander for a device architecture, such as a SAS-compatible expander for a SAS architecture, is configured to follow a set of discovery rules that are applied following detection of a discovery-triggering event, such as system power up or reset. According to one of the discovery rules, the expander waits until after a specified duration following the detected discovery-triggering event before passing on, to any other expanders, any requests to check the status of their discovery processing. Using appropriate values for the specified durations for different expanders, the discovery procedure will be performed without any infinite-messaging problems, even when the device architecture has a loop. | 2014-06-19 |
20140173166 | REDUCTION OF IDLE POWER IN A COMMUNICATION PORT - Techniques for reducing idle power consumption of a port are described herein. An example method includes determining device presence using a pull-down resistor disposed in a downstream port. The method also includes initiating a low power state of a link between the downstream port and an upstream device. The method also includes disabling the pull-down resistor in response to initiating the low power state. | 2014-06-19 |
20140173167 | PCI EXPRESS SWITCH AND COMPUTER SYSTEM USING THE SAME - Disclosed herein are a PCI Express switch and a computer system using the switch, which do not require a separate switch device for communication between computers, and enable a switch to be mounted in each PCI Express (PCIe) device, thus enabling main memory to be shared between the computers. The PCI Express switch is employed in a computer system, and includes a downstream port for transmitting a packet, and an upstream port for receiving the packet, wherein the downstream port and the upstream port are directly connected to another computer system. The present invention has a structure which enables the memory of other computers to be accessed by changing only the structure of a switch within a computer. Accordingly, there is an advantage in that the memory of other computers can be directly accessed without requiring a separate switch device or complicated software for a connection between computers. | 2014-06-19 |
20140173168 | SAS EXPANDER BASED PERSISTENT CONNECTIONS - A network device comprising a first attach point, a second attach point, a switch and persistent connection logic is provided. The first attach point may connect the network device to a first link, and the second attach point may connect the network device to a second link. The switch may connect the first attach point to the second attach point. The persistent connection logic may create a persistent connection between a first network element and a second network element, where the persistent connection comprises the network device, the first link and the second link. The network device may also implement a non-persistent connection between two network elements, where the non-persistent connection may comprise the network device. | 2014-06-19 |
20140173169 | CONTROLLING ACCESS TO GROUPS OF MEMORY PAGES IN A VIRTUALIZED ENVIRONMENT - Embodiments of an invention for controlling access to groups of memory pages in a virtualized environment are disclosed. In one embodiment, a processor includes a virtualization unit and a memory management unit. The virtualization unit is to transfer control of the processor to a virtual machine. The memory management unit is to perform, in response to an attempt to execute on the virtual machine an instruction stored on a first page, a page walk through a paging structure to find a second page and to allow access to the second page without exiting the virtual machine based at least in part on a bit being set in a leaf level entry corresponding to the second page in the paging structure and a corresponding bit being set in each entry corresponding to the first page in each level of the paging structure. | 2014-06-19 |
20140173170 | MULTIPLE SUBARRAY MEMORY ACCESS - A multiple subarray-access memory system is disclosed. The system includes a plurality of memory chips, each including a plurality of subarrays, and a memory controller in communication with the memory chips, the memory controller to receive a memory fetch width (“MFW”) instruction during an operating system start-up and, responsive to the MFW instruction, to fix a quantity of the subarrays that will be activated in response to memory access requests. | 2014-06-19 |
20140173171 | System and Method to Create a Non-Volatile Bootable RAM Disk - A manufacturing testing system includes an information handling system, a RAM memory device including a reserved physical RAM address space, a non-volatile bootable disk, and a header for the reserved physical RAM address space. The header may include a non-volatile bootable disk signature, a start physical address, a length of reserved space, and a processor. | 2014-06-19 |
20140173172 | SYSTEM AND METHOD TO UPDATE READ VOLTAGES IN A NON-VOLATILE MEMORY IN RESPONSE TO TRACKING DATA - A method includes reading a representation of tracking data from at least a portion of a non-volatile memory. The method further includes adjusting a read voltage based on a comparison between the number of bits in the tracking data and the count of bits in the representation of the tracking data. | 2014-06-19 |
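The entry above (20140173172) adjusts a read voltage according to how the bit count of the read-back tracking data deviates from the bit count of the tracking data as written. A minimal sketch of that idea; the function name, step size, and adjustment direction are illustrative assumptions, not details from the filing:

```python
def adjust_read_voltage(read_voltage_mv, expected_ones, observed_ones, step_mv=25):
    """Nudge the read voltage based on a bit-count comparison of tracking data.

    expected_ones: number of 1-bits in the tracking pattern as written.
    observed_ones: number of 1-bits in the representation read back from memory.
    If more cells read as 1 than were written, the threshold is assumed to be
    too low and is raised; if fewer, it is lowered (assumed convention).
    """
    if observed_ones > expected_ones:
        return read_voltage_mv + step_mv
    if observed_ones < expected_ones:
        return read_voltage_mv - step_mv
    return read_voltage_mv


# Example: the tracking pattern was written with 512 one-bits, but 530 cells
# read back as 1, so the read voltage is raised by one step.
print(adjust_read_voltage(1600, expected_ones=512, observed_ones=530))  # 1625
```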
20140173173 | METHOD, DEVICE, AND SYSTEM INCLUDING CONFIGURABLE BIT-PER-CELL CAPABILITY - A method includes providing a partition command to a device that includes a memory array including a plurality of memory cells. In response to the providing of the partition command, the memory cells of the memory array are partitioned to select a portion of the memory array. Also in response to the partition command, a number of bits to be stored in one memory cell is selected, so that each of the memory cells included in the selected portion stores data with the selected number of bits. | 2014-06-19 |
20140173174 | LOWER PAGE READ FOR MULTI-LEVEL CELL MEMORY - An electronic memory or controller may use a first type of read command, addressed to a first page of memory of an electronic memory, that includes information to indicate that a second page of memory of the electronic memory has not been programmed, and a second type of read command, addressed to the first page of memory, that includes information to indicate that the second page of memory has been programmed. The first page of memory may include a lower page of a multi-level cell (MLC), and the second page of memory may include an upper page of the same MLC. The second page of memory is enabled during a period of time that the first type of read command is used. | 2014-06-19 |
20140173175 | NAND COMMAND AGGREGATION - An embodiment is a method and apparatus to provide an optimization of commands in a flash device. Commands sent by at least a top-level processor to a flash device are buffered in a buffer. The buffered commands are analyzed for an optimizing condition. The commands are aggregated if the optimizing condition is met. The aggregated commands are sent to the flash device. | 2014-06-19 |
20140173176 | HEAP-BASED MECHANISM FOR EFFICIENT GARBAGE COLLECTION BLOCK SELECTION - N page counters are associated with N blocks in the flash subsystem. Each of the N page counters indicates a count of invalid pages in each corresponding block in the N blocks. A max heap structure is formed over the N page counters. At least one of the N page counters is updated each time the count changes. The max heap structure is updated each time the at least one of the N page counters is updated. | 2014-06-19 |
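Entry 20140173176 above keeps one invalid-page counter per block and maintains a max heap over those counters so the best garbage-collection victim can be found cheaply, with the heap updated whenever a count changes. A small sketch of that structure using Python's heapq as a max heap via negated counts and lazy invalidation of stale entries; the class and method names are assumptions for illustration:

```python
import heapq

class GCBlockSelector:
    """Max heap over per-block invalid-page counters (lazy-update variant)."""

    def __init__(self, num_blocks):
        self.invalid_pages = [0] * num_blocks          # one counter per block
        self.heap = [(0, block) for block in range(num_blocks)]
        heapq.heapify(self.heap)                       # heapq is a min-heap, so counts are negated

    def page_invalidated(self, block):
        """Update the block's counter and the heap each time the count changes."""
        self.invalid_pages[block] += 1
        heapq.heappush(self.heap, (-self.invalid_pages[block], block))

    def pick_victim(self):
        """Return the block with the most invalid pages, skipping stale heap entries."""
        while self.heap:
            neg_count, block = heapq.heappop(self.heap)
            if -neg_count == self.invalid_pages[block]:
                return block
        return None


selector = GCBlockSelector(num_blocks=4)
for block in (2, 2, 2, 1):
    selector.page_invalidated(block)
print(selector.pick_victim())  # block 2, which has the most invalid pages
```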
20140173177 | Write Performance In Solid State Storage by Recognizing Copy Source to Target Operations and Only Storing Updates Instead of Entire Block - A mechanism is provided in a data processing system for accessing a solid state drive. Responsive to receiving a request to write an update to a block of data in the solid state drive with an update option set, the mechanism reads the block of data from the solid state drive. The mechanism determines a difference between the update and the block of data. The mechanism compresses the difference to form an update record. The mechanism stores the update record and modifies metadata of the block of data to reference the update record. | 2014-06-19 |
20140173178 | Joint Logical and Physical Address Remapping in Non-volatile Memory - A method includes, for data items that are to be stored in a non-volatile memory in accordance with respective logical addresses, associating the logical addresses with respective physical storage locations in the non-volatile memory, and storing the data items in the respective associated physical storage locations. A remapping command, which specifies a group of source logical addresses that are associated with respective source physical storage locations, is received. In response to the remapping command, destination physical storage locations and destination logical addresses are selected jointly for replacing the source physical storage locations and the source logical addresses, respectively, so as to meet a joint performance criterion with respect to the logical addresses and the physical storage locations. The data items are copied from the source physical storage locations to the respective destination physical storage locations, and the destination physical storage locations are re-associated with the respective destination logical addresses. | 2014-06-19 |
20140173179 | VIRTUAL BOUNDARY CODES IN A DATA IMAGE OF A READ-WRITE MEMORY DEVICE - Methods, systems and devices are provided for configuring a read-write memory device with a data image. The method includes determining a data image distribution based on a virtual block size of a series of virtual blocks designated for the read-write memory device. The data image is divided into one or more data image portions, wherein a virtual boundary code is appended to at least one of the data image portions. The data image portions are stored in respective virtual blocks of the series of virtual blocks, skipping over any bad block within the read-write memory device, even between the virtual blocks. | 2014-06-19 |
20140173180 | TRACKING READ ACCESSES TO REGIONS OF NON-VOLATILE MEMORY - A data storage device includes a memory and a controller and may perform a method that includes updating, in the controller, a value of a particular counter of a set of counters in response to a read access to a particular region of the non-volatile memory that is tracked by the particular counter. Read accesses to a first region of the non-volatile memory are tracked by a first counter of the set of counters and read accesses to a second region of the non-volatile memory are tracked by a second counter of the set of counters. The method includes, in response to the value of the particular counter indicating that a count of read accesses to the particular region equals or exceeds a first threshold, initiating a remedial action to the particular region of the non-volatile memory. | 2014-06-19 |
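The entry above (20140173180) tracks read accesses per region of non-volatile memory with a set of counters and initiates a remedial action once a region's count reaches a threshold. A hedged sketch of that bookkeeping; the counter granularity, threshold value, remedy, and reset-after-remedy behavior are assumptions:

```python
class ReadDisturbTracker:
    """One counter per region; a remedial action fires when a count reaches the threshold."""

    def __init__(self, num_regions, threshold, remedial_action):
        self.counters = [0] * num_regions
        self.threshold = threshold
        self.remedial_action = remedial_action  # e.g., refresh or relocate the region

    def on_read(self, region):
        self.counters[region] += 1
        if self.counters[region] >= self.threshold:
            self.remedial_action(region)
            self.counters[region] = 0  # assumption: reset the counter after the remedy


tracker = ReadDisturbTracker(num_regions=8, threshold=3,
                             remedial_action=lambda r: print(f"refreshing region {r}"))
for _ in range(3):
    tracker.on_read(5)   # the third read of region 5 prints "refreshing region 5"
```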
20140173181 | RAPID VIRTUAL MACHINE SUSPEND AND RESUME - A method of enabling “fast” suspend and “rapid” resume of virtual machines (VMs) employs a cache that is able to perform input/output operations at a faster rate than a storage device provisioned for the VMs. The cache may be local to a computer system that is hosting the VMs or may be shared cache commonly accessible to VMs hosted by different computer systems. The method includes the steps of saving the state of the VM to a checkpoint file stored in the cache and locking the checkpoint file so that data blocks of the checkpoint file are maintained in the cache and are not evicted, and resuming execution of the VM by reading into memory the data blocks of the checkpoint file stored in the cache. | 2014-06-19 |
20140173182 | NONVOLATILE SEMICONDUCTOR MEMORY - According to one embodiment, a memory includes a temporary storage area which temporarily stores data in a read/write operation to an array. The temporary storage area comprises a clamp FET connected between a first data bus and a second data bus, a first precharge FET connected between the first data bus and a first potential, a second precharge FET connected between the second data bus and the first potential, a first storage area connected to the first data bus, and a second storage area connected to the second data bus. The control circuit is configured to generate a precharge state in which the first data bus is precharged to the first potential and the second data bus is precharged to a second potential lower than the first potential, when the data is transferred from the second storage area to the first storage area. | 2014-06-19 |
20140173183 | DATA STORAGE DEVICE AND METHOD OF OPERATING THE SAME - An operating method of a data storage device including nonvolatile memory devices includes making a victim block list for victim blocks for which a merge operation is to be performed and copying valid pages of the victim blocks to a merge block. The method also includes determining whether there is a victim block which has an erase-held valid page, selectively erasing the victim blocks included in the victim block list, according to which victim blocks have an erase-held page, and updating the victim block list according to which victim blocks are erased. | 2014-06-19 |
20140173184 | DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A method of operating a data storage device includes setting program verify voltages for verifying whether memory cells of a nonvolatile memory device are programmed to desired program states; transmitting the set program verify voltages to the nonvolatile memory device; generating data patterns respectively corresponding to program states based on the program verify voltages; transmitting a data pattern corresponding to the program verify voltages to the nonvolatile memory device; and programming the memory cells with the transmitted data pattern. | 2014-06-19 |
20140173185 | Write Performance in Fault-Tolerant Clustered Storage Systems - Embodiments of the invention relate to supporting transaction data committed to a stable storage. Committed data in the cluster is stored in the persistent cache layer and replicated and stored in the cache layer of one or more secondary nodes. One copy is designated as a master copy and all other copies are designated as replicas, with an exclusive write lock assigned to the master and a shared write lock extended to the replicas. An acknowledgement of receiving the data is communicated following confirmation that the data has been replicated to each node designated to receive the replica. Managers and a director are provided to support management of the master copy and the replicas within the file system, including invalidation of replicas, fault tolerance associated with failure of a node holding a master copy, recovery from a failed node, recovery of the file system from a power failure, and transferring master and replica copies within the file system. | 2014-06-19 |
20140173186 | Journaling RAID System - A method of providing data storage is disclosed that includes writing a plurality of data non-sequentially to at least one first storage drive, the at least one first storage drive having a first random input/output operations per second (IOPS) speed, and writing the plurality of data and an associated plurality of journal metadata sequentially to at least one second storage drive, the at least one second storage drive having a second random IOPS speed that is slower than the first random IOPS speed. | 2014-06-19 |
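The journaling RAID entry above (20140173186) pairs a fast-random-IOPS drive that absorbs non-sequential writes with a slower drive that receives the same data plus journal metadata as a strictly sequential append. A toy sketch of that write path; the drive interfaces, class name, and metadata layout are illustrative assumptions:

```python
import json, time

class JournalingRaid:
    """Writes go non-sequentially to a fast drive and sequentially to a slow journal drive."""

    def __init__(self, fast_drive, slow_journal):
        self.fast_drive = fast_drive        # dict-like: random-access block store
        self.slow_journal = slow_journal    # list-like: append-only sequential log

    def write(self, lba, data):
        # Non-sequential write to the drive with the higher random IOPS.
        self.fast_drive[lba] = data
        # The same data plus journal metadata, appended sequentially to the slower drive.
        record = {"lba": lba, "timestamp": time.time(), "length": len(data)}
        self.slow_journal.append((json.dumps(record), data))


fast, journal = {}, []
raid = JournalingRaid(fast, journal)
raid.write(42, b"hello")
raid.write(7, b"world")
print(sorted(fast), len(journal))  # [7, 42] 2
```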
20140173187 | VIRTUAL BOUNDARY CODES IN A DATA IMAGE OF A READ-WRITE MEMORY DEVICE - Methods, systems and devices are provided for revising a data image of a read-write memory device. The method includes accessing an initial data image from an initial virtual block corresponding to an actual block of a series of actual blocks of the read-write memory device. The initial data image includes an initial boot loader. Also, a backup data image is stored in a remote virtual block spaced away and following in the series of actual blocks from the initial virtual block. The backup data image includes a backup boot loader. Additionally, the initial data image is erased from the initial virtual block and a replacement data image is stored in the initial virtual block. The initial virtual block may include more than one virtual block spaced away and preceding in the series of actual blocks from the remote virtual block. | 2014-06-19 |
20140173188 | INFORMATION PROCESSING DEVICE - An information processing device includes: an SSD storage controlling unit for storing a physical address of a storage region of data stored in an SSD (Solid State Drive) and a number of updates of the storage region, as SSD update information into the SSD; a backup storage controlling unit for storing copy data of the data stored in the SSD, and copy update information obtained by copying the SSD update information, in association with each other into a backup storage part; an acquiring unit for acquiring the copy update information corresponding to data associated with the SSD update information acquired from the SSD, from the backup storage part; and a deciding unit for deciding the data to be stored into the backup storage part based on the acquired SSD update information and the acquired copy update information. | 2014-06-19 |
20140173189 | COMPUTING SYSTEM USING NONVOLATILE MEMORY AS MAIN MEMORY AND METHOD FOR MANAGING THE SAME - A method of managing data of a computing system is provided, where the computing system uses a nonvolatile memory as a main memory. The method includes loading a process into the nonvolatile memory in response to a first run request, freezing the process loaded into the nonvolatile memory in response to an exit request of the process, and activating the process frozen in the nonvolatile memory in response to a second run request of the process. Freezing the process releases control of the process without deleting the process loaded into the nonvolatile memory. | 2014-06-19 |
20140173190 | TECHNIQUES TO PERFORM POWER FAIL-SAFE CACHING WITHOUT ATOMIC METADATA - A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required. | 2014-06-19 |
20140173191 | SEMICONDUCTOR MEMORY SYSTEM HAVING A SNAPSHOT FUNCTION - In a semiconductor memory computer equipped with a flash memory, use of backed-up data is enabled. The semiconductor memory computer includes an address conversion table for detecting physical addresses of at least two pages storing data by designating a logical address from one of logical addresses to be designated by a reading request. The semiconductor memory computer includes a page status register for detecting one page status allocated to each page, and page statuses to be detected include at least the following four statuses: (1) a latest data storage status, (2) a not latest data storage status, (3) an invalid data storage status, and (4) an unwritten status. By using the address conversion table and the page status register, at least two data items (latest data and past data) can be read for one designated logical address from a host computer. | 2014-06-19 |
20140173192 | EXECUTION ENGINE FOR EXECUTING SINGLE ASSIGNMENT PROGRAMS WITH AFFINE DEPENDENCIES - The execution engine is a new organization for a digital data processing apparatus, suitable for highly parallel execution of structured fine-grain parallel computations. The execution engine includes a memory for storing data and a domain flow program, a controller for requesting the domain flow program from the memory, and further for translating the program into programming information, a processor fabric for processing the domain flow programming information and a crossbar for sending tokens and the programming information to the processor fabric. | 2014-06-19 |
20140173193 | TECHNIQUE FOR ACCESSING CONTENT-ADDRESSABLE MEMORY - A tag unit configured to manage a cache unit includes a coalescer that implements a set hashing function. The set hashing function maps a virtual address to a particular content-addressable memory unit (CAM). The coalescer implements the set hashing function by splitting the virtual address into upper, middle, and lower portions. The upper portion is further divided into even-indexed bits and odd-indexed bits. The even-indexed bits are reduced to a single bit using an XOR tree, and the odd-indexed bits are reduced in like fashion. Those single bits are combined with the middle portion of the virtual address to provide a CAM number that identifies a particular CAM. The identified CAM is queried to determine the presence of a tag portion of the virtual address, indicating a cache hit or cache miss. | 2014-06-19 |
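Entry 20140173193 above spells out its set hashing function: the virtual address is split into lower, middle, and upper portions; the upper portion's even-indexed and odd-indexed bits are each XOR-reduced to one bit, and those two bits are combined with the middle portion to form the CAM number. A small sketch under assumed field widths; the bit positions and function names are illustrative, not taken from the filing:

```python
def xor_reduce(bits):
    """Reduce an iterable of bits to a single bit (an XOR tree in hardware)."""
    result = 0
    for b in bits:
        result ^= b
    return result


def cam_number(virtual_address, lower_bits=6, middle_bits=3, upper_bits=12):
    """Map a virtual address to a CAM index (field widths are assumptions)."""
    middle = (virtual_address >> lower_bits) & ((1 << middle_bits) - 1)
    upper = (virtual_address >> (lower_bits + middle_bits)) & ((1 << upper_bits) - 1)
    upper_bit_list = [(upper >> i) & 1 for i in range(upper_bits)]
    even_bit = xor_reduce(upper_bit_list[0::2])   # even-indexed bits -> one bit
    odd_bit = xor_reduce(upper_bit_list[1::2])    # odd-indexed bits  -> one bit
    # Combine the two reduced bits with the middle portion to select a CAM.
    return (even_bit << (middle_bits + 1)) | (odd_bit << middle_bits) | middle


print(cam_number(0x3A7C4))  # deterministic CAM index for this address
```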
20140173194 | COMPUTER SYSTEM MANAGEMENT APPARATUS AND MANAGEMENT METHOD - The present invention measures an actual utilization frequency of data and controls a location of this data in a storage apparatus in a case where a host computer makes joint use of a storage apparatus and a cache apparatus. A portion of data used by an application program | 2014-06-19 |
20140173195 | SYSTEM AND METHOD FOR IN-BAND LUN PROVISIONING IN A DATA CENTER NETWORK ENVIRONMENT - A method is provided in one example embodiment and includes instantiating a virtual adapter on a network device connected to a storage array, the virtual adapter capable of communicating with the storage array; determining storage configuration properties for the network device; and provisioning a portion of the storage array to the network device in accordance with the determined storage configuration properties. The method may further comprise associating the network device with a service profile, where the storage configuration properties are specified in the service profile. Still further, the method may comprise configuring the network device in accordance with the associated service profile, where the instantiating is also performed in accordance with the associated service profile. | 2014-06-19 |
20140173196 | RAPID VIRTUAL MACHINE SUSPEND AND RESUME - A method of enabling “fast” suspend and “rapid” resume of virtual machines (VMs) employs a cache that is able to perform input/output operations at a faster rate than a storage device provisioned for the VMs. The cache may be local to a computer system that is hosting the VMs or may be shared cache commonly accessible to VMs hosted by different computer systems. The method includes the steps of saving the state of the VM to a checkpoint file stored in the cache and locking the checkpoint file so that data blocks of the checkpoint file are maintained in the cache and are not evicted, and resuming execution of the VM by reading into memory the data blocks of the checkpoint file stored in the cache. | 2014-06-19 |
20140173197 | METHOD AND STORAGE DRIVE FOR WRITING PORTIONS OF BLOCKS OF DATA IN RESPECTIVE ARRAYS OF MEMORY CELLS OF CORRESPONDING INTEGRATED CIRCUITS - A storage drive includes a first integrated circuit, a second integrated circuit, an interface, an encoder, and a write module. The first integrated circuit includes a first array of memory cells. The second integrated circuit includes a second array of memory cells. The interface is connected to a host. The interface is configured to receive a first block of data transmitted from the host to the storage drive. The encoder is configured to encode the first block of data. The write module is configured to write (i) a first portion of the encoded first block of data to a first row of the first array of memory cells, and (ii) a second portion of the encoded first block of data to a first row of the second array of memory cells. | 2014-06-19 |
20140173198 | METHOD AND APPARATUS FOR DECOMPOSING I/O TASKS IN A RAID SYSTEM - A data access request to a file system is decomposed into a plurality of lower-level I/O tasks. A logical combination of physical storage components is represented as a hierarchical set of objects. A parent I/O task is generated from a first object in response to the data access request. A child I/O task is generated from a second object to implement a portion of the parent I/O task. The parent I/O task is suspended until the child I/O task completes. The child I/O task is executed in response to an occurrence of an event that a resource required by the child I/O task is available. The parent I/O task is resumed upon an event indicating completion of the child I/O task. Scheduling of any child I/O task is not conditional on execution of the parent I/O task, and a state diagram regulates the child I/O tasks. | 2014-06-19 |
20140173199 | Enhancing Analytics Performance Using Distributed Multi-Tiering - Embodiments of the invention relate to cluster-centric tiered storage with a flexible tier definition to support performance of transactions. Object data is distributed in a multi-tiered shared-nothing cluster. Hierarchical tiers of data storage are assigned different roles within the hierarchy. The tiers are managed globally across the cluster and objects are placed in tiers according to a flexible tier definition. The probability of object access is computed for objects, and objects are moved to different tiers responsive to the computation to minimize system runtime. The location of an object is further optimized in response to an access request. | 2014-06-19 |
20140173200 | NON-BLOCKING CACHING TECHNIQUE - The described implementations relate to processing of electronic data. One implementation is manifested as a system that can include a cache module and at least one processing device configured to execute the cache module. The cache module can be configured to store data items in slots of a cache structure, receive a request for an individual data item that maps to an individual slot of the cache structure, and, when the individual slot of the cache structure is not available, return without further processing the request. For example, the request can be received from a calling application or thread that can proceed without blocking irrespective of whether the request is fulfilled by the cache module. | 2014-06-19 |
20140173201 | ACQUIRING REMOTE SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for acquiring remote shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer determining that a first thread of a first task requires shared resource data stored in a memory partition corresponding to a second thread of a second task. Embodiments also include the runtime optimizer requesting from the second thread, in response to determining that the first thread of the first task requires the shared resource data, SVD information associated with the shared resource data. Embodiments also include the runtime optimizer receiving from the second thread, the SVD information associated with the shared resource data. | 2014-06-19 |
20140173202 | INFORMATION PROCESSING APPARATUS AND SCHEDULING METHOD - An information processing apparatus includes: at least one access unit that issues a memory access request for a memory; an arbitration unit that arbitrates the memory access request issued from the access unit; a management unit that allows the access unit that is an issuance source of the memory access request according to a result of the arbitration made by the arbitration unit to perform a memory access to the memory; a processor that accesses the memory through at least one cache memory; and a timing adjusting unit that holds a process relating to the memory access request issued by the access unit for a holding time set in advance and cancels the holding of the process relating to the memory access request in a case where power of the at least one cache memory is turned off in the processor before the holding time expires. | 2014-06-19 |
20140173203 | Block Memory Engine - In an embodiment, a processor is disclosed and includes a cache memory and a memory execution cluster coupled to the cache memory. The memory execution cluster includes a memory execution unit to execute instructions including non-block memory instructions, and block memory logic to execute one or more block memory operations. Other embodiments are described and claimed. | 2014-06-19 |
20140173204 | ANALYZING UPDATE CONDITIONS FOR SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with the second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address. | 2014-06-19 |
20140173205 | ANALYZING UPDATE CONDITIONS FOR SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with the second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address. | 2014-06-19 |
20140173206 | Power Gating A Portion Of A Cache Memory - In an embodiment, a processor includes multiple tiles, each including a core and a tile cache hierarchy. This tile cache hierarchy includes a first level cache, a mid-level cache (MLC) and a last level cache (LLC), and each of these caches is private to the tile. A controller coupled to the tiles includes a cache power control logic to receive utilization information regarding the core and the tile cache hierarchy of a tile and to cause the LLC of the tile to be independently power gated, based at least in part on this information. Other embodiments are described and claimed. | 2014-06-19 |
20140173207 | Power Gating A Portion Of A Cache Memory - In an embodiment, a processor includes multiple tiles, each including a core and a tile cache hierarchy. This tile cache hierarchy includes a first level cache, a mid-level cache (MLC) and a last level cache (LLC), and each of these caches is private to the tile. A controller coupled to the tiles includes a cache power control logic to receive utilization information regarding the core and the tile cache hierarchy of a tile and to cause the LLC of the tile to be independently power gated, based at least in part on this information. Other embodiments are described and claimed. | 2014-06-19 |
20140173208 | METHODS AND APPARATUS FOR MULTI-LEVEL CACHE HIERARCHIES - A multi-level cache structure in accordance with one embodiment includes a first cache structure and a second cache structure. The second cache structure is hierarchically above the first cache. The second cache includes a tag array comprising a plurality of tag entries corresponding to respective addresses of data within a system memory; a selector array associated with the tag array; and a data array configured to store a subset of the data. The selector array is configured to specify, for each corresponding tag entry, whether the data array includes the data corresponding to that tag entry. | 2014-06-19 |
20140173209 | Presenting Enclosure Cache As Local Cache In An Enclosure Attached Server - Presenting enclosure cache as local cache in an enclosure attached server, including: determining, by the enclosure, a cache hit rate for local server cache in each of a plurality of enclosure attached servers; determining, by the enclosure, an amount of available enclosure cache for use by one or more of the enclosure attached servers; and offering, by the enclosure, some portion of the available enclosure cache to an enclosure attached server in dependence upon the cache hit rate and the amount of available enclosure cache. | 2014-06-19 |
20140173210 | MULTI-CORE PROCESSING DEVICE WITH INVALIDATION CACHE TAGS AND METHODS - A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags. | 2014-06-19 |
20140173211 | Partitioning Caches for Sub-Entities in Computing Devices - Some embodiments include a partitioning mechanism that partitions a cache memory into sub-partitions for sub-entities. In the described embodiments, the cache memory is initially partitioned into two or more partitions for one or more corresponding entities. During a partitioning operation, the partitioning mechanism is configured to partition one or more of the partitions in the cache memory into two or more sub-partitions for one or more sub-entities of a corresponding entity. A cache controller then uses a corresponding sub-partition for memory accesses by the one or more sub-entities. | 2014-06-19 |
20140173212 | ACQUIRING REMOTE SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for acquiring remote shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer determining that a first thread of a first task requires shared resource data stored in a memory partition corresponding to a second thread of a second task. Embodiments also include the runtime optimizer requesting from the second thread, in response to determining that the first thread of the first task requires the shared resource data, SVD information associated with the shared resource data. Embodiments also include the runtime optimizer receiving from the second thread, the SVD information associated with the shared resource data. | 2014-06-19 |
20140173213 | RAPID VIRTUAL MACHINE SUSPEND AND RESUME - A method of enabling “fast” suspend and “rapid” resume of virtual machines (VMs) employs a cache that is able to perform input/output operations at a faster rate than a storage device provisioned for the VMs. The cache may be local to a computer system that is hosting the VMs or may be shared cache commonly accessible to VMs hosted by different computer systems. The method includes the steps of saving the state of the VM to a checkpoint file stored in the cache and locking the checkpoint file so that data blocks of the checkpoint file are maintained in the cache and are not evicted, and resuming execution of the VM by reading into memory the data blocks of the checkpoint file stored in the cache. | 2014-06-19 |
20140173214 | Retention priority based cache replacement policy - A data processing system includes a cache memory | 2014-06-19 |
20140173215 | METHODS AND SYSTEMS FOR PROVISIONING A BOOTABLE IMAGE ON TO AN EXTERNAL DRIVE - The present invention relates to a method of optimizing the provisioning of a bootable image onto a storage device. In some embodiments, a host device executes a provisioning application to image a storage drive as a bootable drive. During the provisioning process, the storage device is configured to disguise its use of write caching during the provisioning process. In one embodiment, the storage device is configured to suppress forced unit access commands and cache flush commands for the provisioning application. In another embodiment, the storage device is configured to reject forced unit access commands. The storage device may disguise its use of write caching based on various criteria, such as a length of time, a counter, and the like. | 2014-06-19 |
20140173216 | Invalidation of Dead Transient Data in Caches - Embodiments include methods, systems, and articles of manufacture directed to identifying transient data upon storing the transient data in a cache memory, and invalidating the identified transient data in the cache memory. | 2014-06-19 |
20140173217 | TRACKING PREFETCHER ACCURACY AND COVERAGE - A method, an apparatus, and a non-transitory computer readable medium for tracking accuracy and coverage of a prefetcher in a processor are presented. A table is maintained and indexed by an address, wherein each entry in the table corresponds to one address. A number of demand requests that hit in the table on a prefetch, a total number of demand requests, and a number of prefetch requests are counted. The accuracy of the prefetcher is calculated by dividing the number of demand requests that hit in the table on a prefetch by the number of prefetch requests. The coverage of the prefetcher is calculated by dividing the number of demand requests that hit in the table on a prefetch by the total number of demand requests. The table and the counters are reset when a reset condition is reached. | 2014-06-19 |
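The prefetcher-tracking entry above (20140173217) gives its formulas directly: accuracy = demand requests that hit on a prefetch / prefetch requests, and coverage = demand requests that hit on a prefetch / total demand requests, with the table and counters reset when a reset condition is reached. A compact sketch of those counters; the class name and the fixed-demand-count reset condition are assumptions:

```python
class PrefetcherStats:
    """Counts prefetches, demand requests, and demand hits on prefetched lines."""

    def __init__(self, reset_after_demands=1_000_000):
        self.reset_after_demands = reset_after_demands
        self.reset()

    def reset(self):
        self.prefetch_requests = 0
        self.demand_requests = 0
        self.demand_hits_on_prefetch = 0

    def record_prefetch(self):
        self.prefetch_requests += 1

    def record_demand(self, hit_on_prefetched_line):
        self.demand_requests += 1
        if hit_on_prefetched_line:
            self.demand_hits_on_prefetch += 1
        if self.demand_requests >= self.reset_after_demands:
            self.reset()  # assumed reset condition

    def accuracy(self):
        return self.demand_hits_on_prefetch / max(self.prefetch_requests, 1)

    def coverage(self):
        return self.demand_hits_on_prefetch / max(self.demand_requests, 1)


stats = PrefetcherStats()
for _ in range(10):
    stats.record_prefetch()
for i in range(20):
    stats.record_demand(hit_on_prefetched_line=(i < 6))
print(stats.accuracy(), stats.coverage())  # 0.6 and 0.3
```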
20140173218 | CROSS DEPENDENCY CHECKING LOGIC - Systems and methods for maintaining an order of transactions in the coherence point. The coherence point stores attributes associated with received transactions in an input request queue (IRQ). When a new transaction is received by the coherence point, the IRQ is searched for other entries with the same request address or the same victim address as the new transaction. If one or more matches are found, the new transaction entry points to the entry storing the most recently received transaction with the same address. The new transaction is stalled until the transaction it points to has been completed in the coherence point. | 2014-06-19 |
20140173219 | LIGHTWEIGHT OBSERVABLE VALUES FOR MULTIPLE GRIDS - A method, computer program product, and computer system for updating observable values for multiple user-interface components. A computer system reads first values indexed by keys from a cache, in response to receiving a request from the multiple user-interface components. The computer system reads second values, which are indexed by the keys, from persistent storage. The computer system compares the first values and the second values based on the keys. The computer system writes the second values as new values of the first values in the cache. The computer system notifies one or more observers for respective ones of the first values, wherein the respective ones of the first values are changed. And, the computer system notifies the one or more observers for the first values that reading and writing operations are finished. | 2014-06-19 |
20140173220 | Using Logical Block Addresses with Generation Numbers as Data Fingerprints to Provide Cache Coherency - The technique introduced here involves using a block address and a corresponding generation number as a “fingerprint” to uniquely identify a sequence of data within a given storage domain. Each block address has an associated generation number which indicates the number of times that data at that block address has been modified. This technique can be employed, for example, to maintain cache coherency among multiple storage nodes. It can also be employed to avoid sending the data to a network node over a network if it already has the data. | 2014-06-19 |
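Entry 20140173220 above pairs a block address with a generation number, incremented on every modification of that address, to form a fingerprint that uniquely names a sequence of data; a node already holding a matching fingerprint can skip the transfer. A minimal sketch of the idea; the class and function names are illustrative assumptions:

```python
class GenerationStore:
    """Block store where (block address, generation number) uniquely names the contents."""

    def __init__(self):
        self.blocks = {}       # address -> data
        self.generation = {}   # address -> number of times the block has been modified

    def write(self, address, data):
        self.blocks[address] = data
        self.generation[address] = self.generation.get(address, 0) + 1
        return self.fingerprint(address)

    def fingerprint(self, address):
        return (address, self.generation.get(address, 0))


def needs_transfer(cached_fingerprint, store, address):
    """A node holding the same fingerprint already has the data and can skip the transfer."""
    return cached_fingerprint != store.fingerprint(address)


store = GenerationStore()
fp = store.write(10, b"v1")
print(needs_transfer(fp, store, 10))   # False: the cached copy is current
store.write(10, b"v2")                 # generation bumps to 2
print(needs_transfer(fp, store, 10))   # True: the cached fingerprint is stale
```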
20140173221 | CACHE MANAGEMENT - The present disclosure provides techniques for cache management. A data block may be received from an IO interface. After receiving the data block, the occupancy level of a cache memory may be determined. The data block may be directed to a main memory if the occupancy level exceeds a threshold. The data block may be directed to a cache memory if the occupancy level is below a threshold. | 2014-06-19 |
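The cache-management entry above (20140173221) routes an incoming data block to main memory when cache occupancy exceeds a threshold and to the cache otherwise. A few lines suffice to sketch that admission decision; the threshold value and container types are assumptions:

```python
def place_block(block, cache, main_memory, cache_capacity, threshold=0.8):
    """Direct a block arriving from the IO interface to cache or main memory by occupancy."""
    occupancy = len(cache) / cache_capacity
    if occupancy >= threshold:
        main_memory.append(block)   # cache too full: bypass it
    else:
        cache.append(block)         # room available: admit into the cache


cache, main_memory = [], []
for block in range(10):
    place_block(block, cache, main_memory, cache_capacity=5)
print(len(cache), len(main_memory))  # 4 blocks cached, 6 sent to main memory
```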
20140173222 | Validating Cache Coherency Protocol Within a Processor - A mechanism is provided for effectively validating cache coherency within a processor. For each node in a set of nodes, responsive to a node in a set of nodes being a controlling node, at least one action is performed on each controlled node mapped to the controlling node. After performing the at least one action on each controlled node mapped to the controlling node or responsive to the node failing to be a controlling node, a self-modifying branch test pattern is executed based on the selected execution pattern in the condition register through the set of nodes. Responsive to the self-modifying branch test pattern ending, values output from the execution unit during execution of the self-modifying branch test pattern are compared to a set of expected results. Responsive to a match of the comparison for the execution patterns in the set of execution patterns, the execution unit is validated. | 2014-06-19 |
20140173223 | STORAGE CONTROLLER WITH HOST COLLABORATION FOR INITIALIZATION OF A LOGICAL VOLUME - A device includes a storage controller for accessing a logical volume. The storage controller collaborates with a host to initialize the logical volume such that host resources perform a portion of the initialization of the logical volume. | 2014-06-19 |
20140173224 | SEQUENTIAL LOCATION ACCESSES IN AN ACTIVE MEMORY DEVICE - Embodiments relate to sequential location accesses in an active memory device that includes memory and a processing element. An aspect includes a method for sequential location accesses that includes receiving from the memory a first group of data values associated with a queue entry at the processing element. A tag value associated with the queue entry and specifying a position from which to extract a first subset of the data values is read. The queue entry is populated with the first subset of the data values starting at the position specified by the tag value. The processing element determines whether a second subset of the data values in the first group of data values is associated with a subsequent queue entry, and populates a portion of the subsequent queue entry with the second subset of the data values. | 2014-06-19 |
20140173225 | REDUCING MEMORY ACCESS TIME IN PARALLEL PROCESSORS - Apparatus, computer readable medium, and method of servicing memory requests are presented. A first plurality of memory requests are associated together, wherein each of the first plurality of memory requests is generated by a corresponding one of a first plurality of processors, and wherein each of the first plurality of processors is executing a first same instruction. A second plurality of memory requests are associated together, wherein each of the second plurality of memory requests is generated by a corresponding one of a second plurality of processors, and wherein each of the second plurality of processors is executing a second same instruction. A determination is made to service the first plurality of memory requests before the second plurality of memory requests and the first plurality of memory requests is serviced before the second plurality of memory requests. | 2014-06-19 |
20140173226 | LOGICAL OBJECT DELETION - The presently disclosed subject matter includes a method and system for enabling the deletion of logical objects characterized by an object identifier (OID). Upon restart following a system interruption, one or more logical objects are identified, each object being addressed by an interrupted delete request. For each identified logical object, a deletion is performed, the deletion including: reading one or more physical blocks stored in a physical storage space, wherein the one or more physical blocks were linked to the identified logical object before the system interruption, each of the physical blocks includes an OID stored therein indicating a logical object currently linked to the respective physical block; obtaining OIDs stored respectively in the one or more physical blocks; and freeing those physical blocks from among the one or more physical blocks, which store an OID identical to the respective OID of the identified logical object. | 2014-06-19 |
20140173227 | METHOD AND APPARATUS FOR MANAGING MEMORY IN VIRTUAL MACHINE ENVIRONMENT - A method and apparatus for managing a memory in a portable terminal including a main memory, a secondary memory, and a plurality of virtual machines allocated by partitioning the main memory are provided. The method includes generating, by the virtual machines, monitoring information by monitoring access to the main memory and the secondary memory and swapping out with respect to the secondary memory; determining memory allocation amounts for each of the virtual machines by using the monitoring information; and allocating the main memory to the virtual machines in a partitioning scheme based on the determined memory allocation amounts. | 2014-06-19 |
20140173228 | MEMORY SYSTEM AND SYSTEM ON CHIP INCLUDING THE SAME - In one example embodiment, a memory system includes a hierarchical first-in first-out (FIFO) memory configured to store data, and a FIFO controller configured to control inputting and outputting of data to and from the FIFO memory, wherein the FIFO memory includes a first layer. The first layer includes a high-speed input FIFO memory configured to receive data from an external device and a high-speed output FIFO memory configured to output data to the external device. The FIFO memory further includes a second layer. The second layer includes a main FIFO memory configured to receive data from the high-speed input FIFO memory and output data to the high-speed output FIFO memory. | 2014-06-19 |
20140173229 | Method and Apparatus for Automated Migration of Data Among Storage Centers - A method for controlling the storage of data among multiple regional storage centers coupled through a network in a global storage system is provided. The method includes steps of: defining at least one dataset comprising at least a subset of the data stored in the global storage system; defining at least one ruleset for determining where to store the dataset; obtaining information regarding a demand for the dataset through one or more data requesting entities operating in the global storage system; and determining, as a function of the ruleset, information regarding a location for storing the dataset among regional storage centers having available resources that reduces the total distance traversed by the dataset in serving at least a given one of the data requesting entities and/or reduces the latency of delivery of the dataset to the given one of the data requesting entities. | 2014-06-19 |
20140173230 | APPLICATION PROGRAMMING INTERFACES FOR DATA SYNCHRONIZATION WITH ONLINE STORAGE SYSTEMS - The disclosed embodiments provide a system that manages access to data associated with an online storage system. During operation, the system enables synchronization of the data between an electronic device and the online storage system through an application programming interface (API) with an application on the electronic device. Next, the system uses the API to provide a synchronization state of the data to the application, wherein the synchronization state comprises at least one of a download state, an upload state, an idle state, a transfer progress, a cached state, and an error state. | 2014-06-19 |
20140173231 | SEMICONDUCTOR MEMORY DEVICE AND SYSTEM OPERATING METHOD - Disclosed is a semiconductor memory device for controlling a memory block. The semiconductor memory device includes a plurality of memory blocks to store data, and a controller. The memory controller requests a first memory block, of the plurality of memory blocks, to perform a copy operation to copy the first memory block to a second memory block of the plurality of memory blocks. The controller then requests the first memory block to perform an operation different than the copy operation. The controller then requests the memory block to stop the copy operation based on the request to perform an operation different than the copy operation. Finally, the controller requests the memory block to resume the copy operation after the operation different than the copy operation is completed. | 2014-06-19 |
20140173232 | Method and Apparatus for Automated Migration of Data Among Storage Centers - A method for controlling the storage of data among multiple regional storage centers coupled through a network in a global storage system is provided. The method includes steps of: defining at least one dataset comprising at least a subset of the data stored in the global storage system; defining at least one ruleset for determining where to store the dataset; obtaining information regarding a demand for the dataset through one or more data requesting entities operating in the global storage system; and determining, as a function of the ruleset, information regarding a location for storing the dataset among regional storage centers having available resources that reduces the total distance traversed by the dataset in serving at least a given one of the data requesting entities and/or reduces the latency of delivery of the dataset to the given one of the data requesting entities. | 2014-06-19 |
20140173233 | INFORMATION PROCESSING DEVICE, STORAGE PROCESSING METHOD, AND COMPUTER READABLE RECORDING MEDIUM HAVING PROGRAM STORED THEREIN - An information processing device includes: a calculator that calculates the number of pages used for storing management information in a first storage medium; a storage processor that sets pages corresponding to the calculated number of pages as free pages and stores the management information in the set free pages to thereby store the management information in the first storage medium; and an update processor that performs a process of updating position management information that indicates a storage position of the management information in the first storage medium. The information processing device can quickly write out logs on a memory. | 2014-06-19 |
20140173234 | SEMICONDUCTOR MEMORY DEVICE AND MEMORY SYSTEM - A semiconductor memory system or device includes a memory cell array and an address converter. The memory cell array includes a plurality of memory blocks, and there is at least one block that serves as a buffer. Each of the memory blocks includes at least one memory cell row. An address converting circuit along with a block copy circuit performs a block copy operation of copying data of a first memory block, which is a source block among the memory blocks, into a second block, which is a buffer or destination block, and maps a first logical address for accessing the first memory block onto a physical address designating the second block. The first memory block then can serve as a new destination block after the block copy operation of the first memory block is completed. | 2014-06-19 |
20140173235 | RESILIENT DISTRIBUTED REPLICATED DATA STORAGE SYSTEM - A resilient distributed replicated data storage system is described herein. The storage system includes zones that are independent, and autonomous from each other. The zones include nodes that are independent and autonomous. The nodes include storage devices. When a data item is stored, it is partitioned into a plurality of data objects and a plurality of parity objects are calculated. Reassembly instructions are created for the data item. The data objects, parity objects and reassembly instructions are spread across nodes and zones in the storage system according to a policy for the data item. When a zone is inaccessible, a virtual zone is created and used until the intended zone is available. When a read request is received, the data item is prepared from the lowest latency nodes according to the reassembly instructions, and a virtual zone is accessed in place of a real zone when the real zone is inaccessible. | 2014-06-19 |
20140173236 | SECURE COMPUTER SYSTEM FOR PREVENTING ACCESS REQUESTS TO PORTIONS OF SYSTEM MEMORY BY PERIPHERAL DEVICES AND/OR PROCESSOR CORES - A computer system is provided for preventing peripheral devices and/or processor cores from accessing restricted portions of system memory. For example, the computer system can include a host bridge, system memory coupled to the host bridge via a first access bus, a security processor coupled to the host bridge via a memory access bus that allows the security processor to access system memory and to access the peripheral device, and a security processor memory management unit (SPMMU) coupled between the peripheral device and the host bridge. The security processor is configured to program the SPMMU via the memory access bus to specify a first restricted range of physical addresses in the system memory that the peripheral device is not permitted to access. The SPMMU can then process access requests from the peripheral device and deny access requests that are determined to be within the first restricted range. | 2014-06-19 |
20140173237 | STORAGE DEVICE, AND METHOD FOR PROTECTING DATA IN STORAGE DEVICE - A storage device includes a memory including a first storage area configured to store area information that indicates a geographical area, and a second storage area configured to store data, and a processor coupled to the memory and configured to append data storage information, which indicates a location of the storage device, to the data to be stored in the second storage area, and allow a piece of the data stored in the second storage area to become available, the piece having the data storage information indicating that the location of the storage device falls within an area indicated by the area information, while the storage device is located within the area indicated by the area information. | 2014-06-19 |
20140173238 | Methods and Circuits for Securing Proprietary Memory Transactions - Described are systems and methods for protecting data and instructions shared over a memory bus and stored in memory. Independent and separately timed stream ciphers for write and read channels allow timing variations between write and read transactions. Data and instructions can be separately encrypted prior to channel encryption to further secure the information. Pad generators and related cryptographic circuits are shared for read and write data, and to secure addresses. The cryptographic circuits can support variable data widths, and in some embodiments memory devices incorporate security circuitry that can implement a shared-key algorithm using repurposed memory circuitry. | 2014-06-19 |
20140173239 | REFRESHING OF MEMORY BLOCKS USING ADAPTIVE READ DISTURB THRESHOLD - A method includes storing data in a memory that includes multiple memory blocks. A level of distortion that affects a given memory block of the memory is estimated. An adaptive read disturb threshold is set for the given memory block as a function of the estimated level of distortion. Upon detecting that a number of read operations performed in the given memory block exceeds the read disturb threshold, the data stored in the memory block is copied to an alternative storage location. | 2014-06-19 |
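The adaptive policy in 20140173239 above reduces a block's allowed read count as its estimated distortion rises, then relocates the data once the counter passes that threshold. A minimal sketch follows; the threshold formula, constants, and `relocate` callback are assumptions for illustration.

```python
BASE_THRESHOLD = 100_000   # assumed nominal read-disturb budget for a clean block

def read_disturb_threshold(distortion: float) -> int:
    """Map an estimated distortion level in [0, 1] to an allowed read count."""
    return max(1_000, int(BASE_THRESHOLD * (1.0 - distortion)))

class Block:
    def __init__(self, distortion: float):
        self.reads = 0
        self.threshold = read_disturb_threshold(distortion)

def on_read(block: Block, relocate):
    """Count a read; relocate the block's data once the adaptive threshold is hit."""
    block.reads += 1
    if block.reads > block.threshold:
        relocate(block)            # copy data to an alternative storage location
        block.reads = 0

# A heavily distorted block gets a much lower threshold than a clean one.
hot_block = Block(distortion=0.7)      # threshold becomes 30,000 reads
cold_block = Block(distortion=0.05)    # threshold stays at 95,000 reads
```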
20140173240 | MEMORY CONTROLLER WITH STAGGERED REQUEST SIGNAL OUTPUT - A memory controller having a time-staggered request signal output. A first timing signal is generated while a second timing signal is generated having a first phase difference relative to the first timing signal. An address value is transmitted in response to the first timing signal and a control value is transmitted in response to the second timing signal, the address value and control value constituting portions of a first memory access request. | 2014-06-19 |
20140173241 | METHOD OF GENERATING OPTIMIZED MEMORY INSTANCES USING A MEMORY COMPILER - A method of generating optimized memory instances using a memory compiler is disclosed. Data pertinent to describing a memory to be designed are provided, and front-end models and back-end models are made to supply a library. Design criteria are received via a user interface. Design of the memory is optimized among speed, power and area according to the provided library and the received design criteria, thereby generating memory instances. | 2014-06-19 |
20140173242 | METHOD AND APPARATUS FOR CONTROLLING A STORAGE DEVICE - A mass storage device such as a disk drive or SSD (solid state drive) employs optimization logic for reduced power consumption in a host personal electronic device that identifies and prioritizes performance and power trade-offs by considering user expectations, user presence and application responsiveness. The storage device receives commands and information from the host device indicative of user expectations about application invocation, data freshness, and usage patterns, and determines an operational state indicative of behavior settings for reducing power consumption while maintaining the performance constraints required by the user expectations. The granularity of performance considerations communicated from the host device to the mass storage device is expanded to permit the storage device to determine, based on performance constraints from user expectations, appropriate and specific power reduction measures for maintaining the user experience. | 2014-06-19 |
20140173243 | EFFICIENT MANAGEMENT OF COMPUTER MEMORY USING MEMORY PAGE ASSOCIATIONS AND MEMORY COMPRESSION - A method for managing memory operations includes reading a first memory page from a storage device responsive to a request for the first memory page. The first memory page is stored to a system memory. Based on a pre-established set of association rules, one or more associated memory pages are identified that are related to the first memory page. The associated memory pages are read from the storage device and compressed to generate corresponding compressed associated memory pages. The compressed associated memory pages are also stored to the system memory to enable faster access to the associated memory pages during processing of the first memory page. The compressed associated memory pages are individually decompressed in response to the particular page being required for use during processing. | 2014-06-19 |
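The flow in 20140173243 above can be sketched as: fault in the requested page uncompressed, prefetch its associated pages in compressed form, and decompress an associated page only when it is actually touched. The rule table, page-store dictionaries, and zlib choice below are assumptions for illustration.

```python
import zlib

association_rules = {"pageA": ["pageB", "pageC"]}   # hypothetical rule table

def load_page(page_id, storage, memory):
    """Load the requested page and compress-prefetch its associated pages."""
    memory[page_id] = ("raw", storage[page_id])
    for assoc in association_rules.get(page_id, []):
        memory[assoc] = ("zip", zlib.compress(storage[assoc]))

def touch(page_id, memory):
    """Decompress an associated page on first use, then serve it directly."""
    kind, data = memory[page_id]
    if kind == "zip":
        data = zlib.decompress(data)
        memory[page_id] = ("raw", data)
    return data

storage = {"pageA": b"A" * 4096, "pageB": b"B" * 4096, "pageC": b"C" * 4096}
memory = {}
load_page("pageA", storage, memory)            # pageB/pageC prefetched compressed
assert touch("pageB", memory) == b"B" * 4096   # decompressed only when needed
```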
20140173244 | FILTERING REQUESTS FOR A TRANSLATION LOOKASIDE BUFFER - The present application describes a method and apparatus for filtering requests to a translation lookaside buffer (TLB). Some embodiments of the method include receiving, from a first translation lookaside buffer (TLB), an indication of a first virtual address associated with a request to a second TLB for a page table entry in response to a miss in the first TLB. Some embodiments of the method also include filtering the request based on a comparison of the first virtual address and one or more second virtual addresses associated with one or more previous requests to the second TLB. | 2014-06-19 |
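The filtering in 20140173244 above amounts to suppressing an L1-TLB miss whose virtual address matches an already outstanding request to the second TLB. A small sketch, assuming 4 KiB pages and a plain set of outstanding virtual page numbers (both assumptions, not the claimed structure):

```python
PAGE_SHIFT = 12                        # assume 4 KiB pages

class TLBFilter:
    def __init__(self):
        self.outstanding = set()       # virtual page numbers already requested

    def should_forward(self, vaddr: int) -> bool:
        """Forward a miss to the second TLB only if no prior request covers it."""
        vpn = vaddr >> PAGE_SHIFT
        if vpn in self.outstanding:
            return False               # duplicate request filtered out
        self.outstanding.add(vpn)
        return True

    def on_fill(self, vaddr: int):
        """Clear the entry once the page-table entry has been returned."""
        self.outstanding.discard(vaddr >> PAGE_SHIFT)

f = TLBFilter()
assert f.should_forward(0x7fff_1234) is True    # first miss goes through
assert f.should_forward(0x7fff_1ff0) is False   # same page, filtered
```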
20140173245 | RELATIVE ADDRESSING USAGE FOR CPU PERFORMANCE - The embodiments provide a computing device for incorporating data into code such that the data is relative to the code and, thereby, available for relative addressing. The computing device may include a code generator configured to receive source code from a source code database, and generate executable object code from the source code. The executable object code may include at least one instruction referencing data having an absolute address from a data source. Also, the computing device may include a data incorporator configured to transfer the data from the data source into the executable object code, where the transferred data is relative to the at least one instruction. Further, the computing device may include a relative addresser configured to adjust the at least one instruction to include a relative address for the transferred data including converting the absolute address to the relative address. | 2014-06-19 |
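One way to picture the transformation in 20140173245 above: the data is pulled into the generated code image, and each instruction that referenced it by absolute address is rewritten to hold the signed offset from its own address. The dictionary-based "instructions", the 4-byte instruction size, and the append-after-code placement are purely illustrative assumptions.

```python
def make_relative(code, data_blob):
    """code: list of dicts like {'addr': int, 'ref_abs': int or None};
    data_blob: (absolute_address, payload_bytes) taken from the data source.
    Appends the data after the code and converts absolute refs to relative."""
    abs_addr, payload = data_blob
    data_addr = code[-1]["addr"] + 4                   # assume 4-byte instructions
    for instr in code:
        if instr.get("ref_abs") == abs_addr:
            instr["ref_rel"] = data_addr - instr["addr"]   # relative address
            instr.pop("ref_abs")
    return code, (data_addr, payload)

code = [{"addr": 0x1000, "ref_abs": 0x8000}, {"addr": 0x1004}]
new_code, placed = make_relative(code, (0x8000, b"\x2a\x00\x00\x00"))
# The first instruction now carries ref_rel = 0x1008 - 0x1000 = 8.
```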
20140173246 | SCHEDULING APPLICATION INSTANCES TO CONFIGURABLE PROCESSING CORES BASED ON APPLICATION REQUIREMENTS AND RESOURCE SPECIFICATION - Systems and methods provide a processing task load and type adaptive manycore processor architecture, enabling flexible and efficient information processing. The architecture enables executing time variable sets of information processing tasks of differing types on their assigned processing cores of matching types. This involves: for successive core allocation periods (CAPs), selecting specific processing tasks for execution on the cores of the manycore processor for a next CAP based at least in part on core capacity demand expressions associated with the processing tasks hosted on the processor, assigning the selected tasks for execution at cores of the processor for the next CAP so as to maximize the number of processor cores whose assigned tasks for the present and next CAP are associated with same core type, and reconfiguring the cores so that a type of each core in said array matches a type of its assigned task on the next CAP. | 2014-06-19 |
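The allocation objective in 20140173246 above, maximizing cores whose assigned task type matches their current type, can be approximated greedily: place tasks on already-matching free cores first, and reconfigure whatever cores remain. The data structures and two-pass heuristic below are assumptions, not the claimed allocator.

```python
def allocate(core_types, tasks):
    """core_types: list of current core types; tasks: list of (task_id, type)
    selected for the next CAP, with len(tasks) <= len(core_types).
    Returns (assignment {core_index: task_id}, updated core_types)."""
    free = set(range(len(core_types)))
    new_types = list(core_types)
    assignment, pending = {}, []
    for task_id, ttype in tasks:                       # pass 1: reuse matching cores
        match = next((c for c in free if core_types[c] == ttype), None)
        if match is not None:
            assignment[match] = task_id
            free.discard(match)
        else:
            pending.append((task_id, ttype))
    for task_id, ttype in pending:                     # pass 2: reconfigure leftovers
        core = free.pop()
        assignment[core] = task_id
        new_types[core] = ttype                        # core type now matches task
    return assignment, new_types

assign, types = allocate(["dsp", "cpu", "cpu"], [("t1", "cpu"), ("t2", "dsp")])
# t1 lands on an existing "cpu" core, t2 on the "dsp" core: no reconfiguration.
```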
20140173247 | PROCESSING APPARATUS AND METHOD OF SYNCHRONIZING A FIRST PROCESSING UNIT AND A SECOND PROCESSING UNIT - A processing apparatus, comprising at least a first processing unit and a second processing unit, is proposed. The first processing unit comprises a set of first stateful elements, the second processing unit comprises a set of second stateful elements. A set of synchronization data lines may connect the first stateful elements to the second stateful elements in a pairwise manner. A control unit may control the first processing unit, the second processing unit and the synchronization data lines so as to copy the states of the first stateful elements in parallel via the synchronization data lines to the second stateful elements in response to a synchronization request. A method of synchronizing the processing units is also proposed. | 2014-06-19 |
20140173248 | Performing Frequency Coordination In A Multiprocessor System Based On Response Timing Optimization - In an embodiment, a processor includes a core to execute instructions and a logic to receive memory access requests from the core and to route the memory access requests to a local memory and to route snoop requests corresponding to the memory access requests to a remote processor. The logic is configured to maintain latency information regarding a difference between receipt of responses to the snoop requests from the remote processor and receipt of responses to the memory access requests from the local memory. Other embodiments are described and claimed. | 2014-06-19 |
20140173249 | SYSTEM AND METHOD FOR CONNECTING A SYSTEM ON CHIP PROCESSOR AND AN EXTERNAL PROCESSOR - A system and method are provided for connecting a system on chip (SoC) processor and an external processor. The SoC processor receives as input a content stream, and processes the content stream. Further, the application processor that is connected to the SoC processor receives the processed content stream, performs further processing on the processed content stream, and outputs the further processed content stream back to the SoC processor. | 2014-06-19 |
20140173250 | SELECTION OF A PRIMARY MICROPROCESSOR FOR INITIALIZATION OF A MULTIPROCESSOR SYSTEM - Embodiments of the present invention provide a method for initializing a plurality of processors of a multi-processor system by executing, at each respective processor of the plurality of processors, at least a portion of local initialization code stored on the respective processor; receiving, at a designated processor of the plurality of processors, external initialization code stored in external memory, wherein the remainder of the plurality of processors do not have access to the external initialization code stored in external memory; and determining, by the designated processor, to send at least a portion of the external initialization code to a processor of the remainder of the plurality of processors. | 2014-06-19 |
20140173251 | SELECTION OF A PRIMARY MICROPROCESSOR FOR INITIALIZATION OF A MULTIPROCESSOR SYSTEM - Embodiments of the present invention provide a method for initializing a plurality of processors of a multi-processor system by executing, at each respective processor of the plurality of processors, at least a portion of local initialization code stored on the respective processor; receiving, at a designated processor of the plurality of processors, external initialization code stored in external memory, wherein the remainder of the plurality of processors do not have access to the external initialization code stored in external memory; and determining, by the designated processor, to send at least a portion of the external initialization code to a processor of the remainder of the plurality of processors. | 2014-06-19 |
20140173252 | SYSTEM-ON-CHIP DESIGN STRUCTURE AND METHOD - Aspects may include a method of designing a system-on-chip. The method may include receiving multiple processing modules, each representing in software one of multiple processing units of a system-on-chip. The method may further include modeling communications from one or more of the multiple processing modules as accesses to memory. The method may further include generating a coherent memory module associated with the multiple processing modules based on modeling the communications from the one or more of the multiple processing modules as accesses to memory. The coherent memory module may represent in software a coherent memory associated with the multiple processing units. | 2014-06-19 |
20140173253 | Methods and Apparatus for Storing Expanded Width Instructions in a VLIW Memory for Deferred Execution - Techniques are described for decoupling fetching of an instruction stored in a main program memory from earliest execution of the instruction. An indirect execution method and program instructions to support such execution are addressed. In addition, an improved indirect deferred execution processor (DXP) VLIW architecture is described which supports a scalable array of memory centric processor elements that do not require local load and store units. | 2014-06-19 |
20140173254 | CACHE PREFETCH FOR DETERMINISTIC FINITE AUTOMATON INSTRUCTIONS - In a DFA scanning engine used to match regular expressions or similar rules, instructions to execute DFA state transitions are accessed through an instruction cache. Each DFA instruction may indicate varying numbers of transitions or branches from a current state. The cache pre-fetches a requested number of additional instructions consecutively following an accessed instruction. The DFA engine accesses an instruction from the cache corresponding to a state within a small number of transitions from the root state. When a low-branching instruction is executed to access a next instruction, whether from the root state or from the cache, a fixed or configurable pre-fetch length is requested. Some instructions such as low-branching instructions may contain a pre-fetch hint. | 2014-06-19 |
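The prefetch behavior in 20140173254 above can be modeled as a cache that, on a miss, also pulls in the next N consecutive instructions, where N comes from a per-instruction hint or a configured default. The dictionary-based instruction format and default length below are illustrative assumptions.

```python
DEFAULT_PREFETCH = 3   # assumed configurable pre-fetch length

class DFAInstructionCache:
    def __init__(self, memory):
        self.memory = memory             # list of DFA instruction records (dicts)
        self.cache = {}

    def fetch(self, index):
        """Return instruction `index`, prefetching consecutive followers on a miss."""
        instr = self.cache.get(index)
        if instr is None:
            instr = self.memory[index]
            self.cache[index] = instr
            n = instr.get("prefetch_hint", DEFAULT_PREFETCH)
            for j in range(index + 1, min(index + 1 + n, len(self.memory))):
                self.cache[j] = self.memory[j]     # consecutive prefetch
        return instr

memory = [{"op": "match", "prefetch_hint": 2}] + [{"op": "transition"}] * 8
cache = DFAInstructionCache(memory)
cache.fetch(0)     # also brings instructions 1 and 2 into the cache
```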
20140173255 | INSTRUCTION SET FOR SUPPORTING WIDE SCALAR PATTERN MATCHES - A processor includes an instruction decoder to receive an instruction having a first operand, a second operand, and a third operand, and an execution unit coupled to the instruction decoder to execute the instruction, the execution unit to individually perform a shift-left operation by at least one bit for each of a plurality of data elements stored in a storage location indicated by the second operand, for each of the data elements that has an overflow in response to the shift-left operation, to carry over the overflow into an adjacent data element based on a first bitmask obtained from the third operand, generating a final result, and to store the final result in a storage location indicated by the first operand. | 2014-06-19 |
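The semantics described for 20140173255 above can be illustrated in software: shift each element left, and when an element overflows and the bitmask allows it, carry the overflow bit into the adjacent element so several narrow elements behave like one wide pattern. The 8-bit element width, one-bit shift, and mask convention are assumptions for illustration.

```python
def wide_shift_left(elements, mask):
    """elements: list of 8-bit ints (low to high); mask: per-element booleans
    allowing an overflow to be carried into the adjacent (next) element."""
    result = list(elements)
    carry = 0
    for i, value in enumerate(elements):
        shifted = (value << 1) | carry
        carry = (shifted >> 8) & 1 if mask[i] else 0   # propagate only if allowed
        result[i] = shifted & 0xFF
    return result

# Two 8-bit elements acting as one 16-bit pattern: the high bit of the low
# element carries into the high element.
print(wide_shift_left([0x80, 0x01], [True, True]))     # [0x00, 0x03]
```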
20140173256 | PROCESSOR CONFIGURED FOR OPERATION WITH MULTIPLE OPERATION CODES PER INSTRUCTION - A method of associating operation codes with instructions for execution in a processor includes the steps of assigning the operation codes to the instructions in a manner that allows a given instruction to have multiple assigned operation codes and selecting a particular one of the multiple assigned operation codes for use in executing a program containing the given instruction. The assigning step may be implemented in conjunction with design of the processor, and may further comprise the steps of determining frequency of occurrence of adjacent pairs of instructions in one or more programs likely to be run on the processor, and assigning the operation codes to the instructions based at least in part on the determined frequency of occurrence of the adjacent pairs of instructions. The selecting step may be implemented in conjunction with code generation for the program containing the given instruction, for example, in a code assembler. | 2014-06-19 |
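The opcode-assignment step in 20140173256 above relies on how often adjacent instruction pairs occur in representative programs. The pair-counting part is easy to sketch; how the extra operation codes are then chosen is left abstract here, and the program representation is an assumption.

```python
from collections import Counter

def adjacent_pair_frequency(programs):
    """programs: iterable of instruction-name sequences.
    Returns a Counter of (instruction, next_instruction) pair occurrences,
    which can inform which instructions deserve multiple operation codes."""
    pairs = Counter()
    for prog in programs:
        pairs.update(zip(prog, prog[1:]))
    return pairs

print(adjacent_pair_frequency([["load", "add", "store", "load", "add"]]))
# Counter({('load', 'add'): 2, ('add', 'store'): 1, ('store', 'load'): 1})
```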
20140173257 | REQUESTING SHARED VARIABLE DIRECTORY (SVD) INFORMATION FROM A PLURALITY OF THREADS IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for requesting shared variable directory (SVD) information from a plurality of threads in a parallel computer are provided. Embodiments include a runtime optimizer detecting that a first thread requires a plurality of updated SVD information associated with shared resource data stored in a plurality of memory partitions. Embodiments also include a runtime optimizer broadcasting, in response to detecting that the first thread requires the updated SVD information, a gather operation message header to the plurality of threads. The gather operation message header indicates an SVD key corresponding to the required updated SVD information and a local address associated with the first thread to receive a plurality of updated SVD information associated with the SVD key. Embodiments also include the runtime optimizer receiving at the local address, the plurality of updated SVD information from the plurality of threads. | 2014-06-19 |
20140173258 | TECHNIQUE FOR PERFORMING MEMORY ACCESS OPERATIONS VIA TEXTURE HARDWARE - A texture processing pipeline can be configured to service memory access requests that represent texture data access operations or generic data access operations. When the texture processing pipeline receives a memory access request that represents a texture data access operation, the texture processing pipeline may retrieve texture data based on texture coordinates. When the memory access request represents a generic data access operation, the texture pipeline extracts a virtual address from the memory access request and then retrieves data based on the virtual address. The texture processing pipeline is also configured to cache generic data retrieved on behalf of a group of threads and to then invalidate that generic data when the group of threads exits. | 2014-06-19 |
20140173259 | COMPUTER PROCESSOR WITH INSTRUCTION FOR EXECUTION BASED ON AVAILABLE INSTRUCTION SETS - A system and method for testing whether a computer processor is capable of executing a requested instruction set. The system includes a computer processor configured to receive an encoded conditional branch instruction in a form of machine code executable directly by the computer processor, and implement the encoded conditional branch instruction unconditionally, based on the underlying hardware architecture of the computer processor. The method for testing whether a computer processor is capable of executing a requested instruction set includes receiving an encoded conditional branch instruction in a form of machine code executable directly by the computer processor, and implementing the encoded conditional branch instruction unconditionally, based on the underlying hardware architecture of the computer processor. | 2014-06-19 |
20140173260 | COMPUTER PROCESSOR WITH INSTRUCTION FOR EXECUTION BASED ON AVAILABLE INSTRUCTION SETS - A system and method for testing whether a computer processor is capable of executing a requested instruction set. The system includes a computer processor configured to receive an encoded conditional branch instruction in a form of machine code executable directly by the computer processor, and implement the encoded conditional branch instruction unconditionally, based on the underlying hardware architecture of the computer processor. The method for testing whether a computer processor is capable of executing a requested instruction set includes receiving an encoded conditional branch instruction in a form of machine code executable directly by the computer processor, and implementing the encoded conditional branch instruction unconditionally, based on the underlying hardware architecture of the computer processor. | 2014-06-19 |
20140173261 | COMPUTER PROCESSOR WITH INSTRUCTION FOR EXECUTION BASED ON AVAILABLE INSTRUCTION SETS - A system and method for testing whether a computer processor is capable of executing a requested instruction set. The system includes a computer processor configured to receive an encoded conditional branch instruction in a form of machine code executable directly by the computer processor, and implement the encoded conditional branch instruction unconditionally, based on the underlying hardware architecture of the computer processor. The method for testing whether a computer processor is capable of executing a requested instruction set includes receiving an encoded conditional branch instruction in a form of machine code executable directly by the computer processor, and implementing the encoded conditional branch instruction unconditionally, based on the underlying hardware architecture of the computer processor. | 2014-06-19 |
20140173262 | Energy-Focused Compiler-Assisted Branch Prediction - A processing system to reduce energy consumption and improve performance in a processor, controlled by compiler inserted information ahead of a selected branch instruction, to statically expose and control how the prediction should be completed and which mechanism should be used to achieve energy and performance efficiency. | 2014-06-19 |
20140173263 | BOOTING FROM A TRUSTED NETWORK IMAGE - The present invention extends to methods, systems, and computer program products for booting from a trusted network image. The image can be executed from a trusted source on a Wide Area Network (“WAN”) to perform a maintenance operation, such as, for example, malware scanning, operating system repair, factory reset, etc. at the computer system. Trust can be established using a Certificate Authority or an out-of-band communication channel (e.g., voice communication, text message, electronic mail, etc.) to retrieve a one-time pad (“OTP”). Using the OTP, the computer can validate that it is connected to the trusted source. The trusted source can chain to additional images hosted on a third-party server. The additional images can provide a user with options for various different maintenance operations or various different implementations of the same maintenance operation. For example, the trusted source can link to multiple different malware scanners. | 2014-06-19 |
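One way to picture the OTP validation in 20140173263 above: the trusted source delivers the boot image together with a tag keyed on the OTP, and the client recomputes the tag using the OTP it obtained over the separate channel. Using the OTP as an HMAC key is an illustrative choice, not necessarily the patented scheme.

```python
import hashlib
import hmac

def tag_image(image: bytes, otp: bytes) -> bytes:
    """Tag the boot image with an HMAC keyed by the out-of-band one-time pad."""
    return hmac.new(otp, image, hashlib.sha256).digest()

def validate_image(image: bytes, tag: bytes, otp: bytes) -> bool:
    """Client-side check that the image came from the holder of the same OTP."""
    return hmac.compare_digest(tag, tag_image(image, otp))

# The OTP is retrieved out of band (e.g., read to the user over a voice call).
otp = b"one-time-pad-from-out-of-band-channel"
image = b"...maintenance boot image bytes..."
assert validate_image(image, tag_image(image, otp), otp)
```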