21st week of 2012 patent application highlights part 66
Patent application number | Title | Published |
20120131222 | ELEPHANT FLOW DETECTION IN A COMPUTING DEVICE - Example embodiments relate to elephant flow detection in a computing device. In example embodiments, a computing device may monitor a socket for a given flow. The computing device may then determine whether the flow is an elephant flow based on the monitoring of the socket. If so, the computing device may signal, to the network that transmits the flow, that the flow is an elephant flow. (A hedged sketch of such a check appears after this table.) | 2012-05-24 |
20120131223 | Object-Based Transport Protocol - Methods and apparatuses are provided that facilitate providing an object-based transport protocol that allows transmission of arbitrarily sized objects over a network protocol layer. The object-based transport protocol can also provide association of metadata with the objects to control communication thereof, and/or communication of response objects. Moreover, the object-based transport protocol can maintain sessions with remote network nodes that can include multiple channels, which can be updated over time to seamlessly provide mobility, increased data rates, and/or the like. In addition, properties can be modified remotely by network nodes receiving objects related to the properties. | 2012-05-24 |
20120131224 | System and Method for Pushing Information from a Host System to a Mobile Data Communication Device - An embodiment of a communication system includes an Internet-based network server having a redirector component executing thereon, wherein the network server is connected to a host computer system via a wide area network connection. The redirector component is configured for commencing redirection of user data items from the host computer system to a mobile computer via a wireless network. The mobile computer is configured for receiving user data items redirected from the host computer system for a user, wherein the mobile computer includes another redirector component that is adapted to push at least a portion of a user data item received from the host computer system to another device based on a trigger flag set at the mobile computer. | 2012-05-24 |
20120131225 | DATA CENTER NETWORK SYSTEM AND PACKET FORWARDING METHOD THEREOF - A data center network system and a packet forwarding method are provided. The data center network includes a management server and a plurality of machines containing physical machines and virtual machines. The management server configures a logical media access control (MAC) address for each of the machines, wherein most significant bytes of each of the logical MAC addresses are set as 0. When a data packet is about to be sent from a physical machine, the physical machine executes an encapsulation procedure on the data packet for forwarding the data packet to an intermediate node between a transmitter and a receiver of the data packet, and the intermediate node executes a decapsulation procedure on the data packet for forwarding the data packet to the true receiver. Accordingly, the number of virtual machines exposed to the forwarding table of Ethernet switches can be effectively reduced. | 2012-05-24 |
20120131226 | Temporary collaborative ad-hoc network of hardware nodes to perform function - It is detected that a trigger for creating a temporary collaborative ad-hoc network of hardware nodes has occurred. In response, the temporary collaborative ad-hoc network is created via intercommunication among the hardware nodes. After the temporary collaborative ad-hoc network has been created, the temporary collaborative ad-hoc network performs a given function. A particular hardware node within the temporary collaborative ad-hoc network can perform a roll call request so that it ascertains a list of the hardware nodes within the temporary collaborative ad-hoc network. Performing the roll call request can include ascertaining that a correctness of the list of the hardware nodes satisfies a threshold, such that the list of the hardware nodes is not necessarily completely correct. | 2012-05-24 |
20120131227 | SERIAL PERIPHERAL INTERFACE AND METHOD FOR DATA TRANSMISSION - A serial peripheral interface of an integrated circuit including multiple pins is provided. The pins are coupled to the integrated circuit. The integrated circuit receives an instruction through only one of the plurality of pins. The integrated circuit receives an address through the plurality of pins. The integrated circuit sends a read out data through the plurality of pins. | 2012-05-24 |
20120131228 | METHOD AND APPARATUS FOR EXECUTING APPLICATION OF MOBILE DEVICE - An apparatus and method for executing an application within a mobile device are provided. The method includes detecting, by the mobile device, a connection with a host device through a wired interface; receiving, from the host device, a request to execute a specific application installed in the mobile device; and executing the specific application according to the received request. | 2012-05-24 |
20120131229 | INPUT COMMAND - A method for detecting an input command includes configuring a sensor to determine whether a user is within a proximity of a computing machine, configuring an input device to detect an input command entered by the user when the user is within the proximity of the computing machine, and transmitting the input command for the computing machine to process. | 2012-05-24 |
20120131230 | Authenticating, Tracking, and Using a Peripheral - This document describes techniques for authenticating, tracking, and using a peripheral. | 2012-05-24 |
20120131231 | DETERMINING ADDRESSES OF ELECTRICAL COMPONENTS ARRANGED IN A DAISY CHAIN - In one aspect, a system includes electrical components arranged in a daisy chain that include a first electrical component disposed at a first end of the daisy chain and a second electrical component disposed at an opposite end of the daisy chain than the first end. Each of the first and second electrical components includes an input port, an output port and a common port. The input port of the first electrical component is coupled to one of a supply voltage port or ground and the common ports of the first and second electrical components are coupled to the other one of the supply voltage or the ground. An address of the second electrical component is determined before addresses of the other of the electrical components are determined, and the addresses determine a position of an electrical component with respect to the other of the electrical components. | 2012-05-24 |
20120131232 | CONFIGURING AN INPUT/OUTPUT ADAPTER - A computer-implemented method includes initializing a driver associated with an input/output adapter in response to receiving an initialize driver request from a client application. The input/output adapter may be initialized to enable adapter capabilities of the input/output adapter to be determined. The computer-implemented method also includes determining the adapter capabilities of the input/output adapter and determining slot capabilities of a slot associated with the input/output adapter. The computer-implemented method further includes setting configurable capabilities of the input/output adapter based on the adapter capabilities and the slot capabilities. | 2012-05-24 |
20120131233 | METHOD FOR ASSIGNING DEVICE ADDRESSES TO SENSORS IN A PHYSIOLOGICAL MEASUREMENT SYSTEM, A PHYSIOLOGICAL MEASUREMENT SYSTEM, AND A CONNECTOR ELEMENT FOR A PHYSIOLOGICAL MEASUREMENT SYSTEM - A method for assigning addresses to a plurality of physiological sensor units is disclosed. A physiological measurement system and a connector element for a physiological measurement system are also disclosed. To enable identification of identical sensor units in a measurement system, each input port of a connector element, such as a trunk cable, is adapted to determine at least part of a device address of a sensor unit connected to that input port, thereby to obtain a unique device address for each sensor unit connected to the connector element. | 2012-05-24 |
20120131234 | ELECTRONIC CIRCUIT FOR INTERCONNECTING A SMARTCARD CHIP - The invention relates to an electronic circuit for interconnecting a smartcard chip. | 2012-05-24 |
20120131235 | USING A TABLE TO DETERMINE IF USER BUFFER IS MARKED COPY-ON-WRITE - A method, system and computer program product are provided for determining if a buffer is marked copy-on-write. A user application selects a buffer in user space to store data involved in a write/read operation. The user application searches a table storing addresses of buffers in user space that are marked copy-on-write to determine if the address of the selected buffer is listed in the table. If the address is listed in the table, then the selected buffer is marked copy-on-write. If the address is not listed in the table, then the selected buffer is not marked copy-on-write. By having a table store a list of addresses of buffers in user space that are marked copy-on-write by the kernel, the user application is now able to know whether the buffer in user space is marked copy-on-write. (A minimal lookup sketch follows the table.) | 2012-05-24 |
20120131236 | COMMUNICATION BETWEEN A COMPUTER AND A DATA STORAGE DEVICE - A method for communicating between a computer and a data storage device comprises receiving, by a data storage device, information indicative of a plurality of commands and information indicative of a memory location in a computer associated with each of the plurality of commands. The method further comprises executing, by the data storage device, one of the plurality of commands. In one embodiment, executing the command comprises directly accessing the computer memory location associated with the command. | 2012-05-24 |
20120131237 | EXTENSION DEVICE AND COMMUNICATION CHECK METHOD - An extension device that extends a communication path between a host device and an input/output (IO) device, the extension device including a determination unit configured to determine whether a first logical path exists between the host device and the IO device, a logical path establishment unit configured to request the IO device to establish a second logical path between the extension device and the IO device when the first logical path does not exist between the host device and the IO device, and a communication check unit configured to check communication on the second logical path established between the extension device and the IO device that establishes the second logical path. | 2012-05-24 |
20120131238 | Management of Redundant Physical Data Paths in a Computing System - A redundancy manager manages commands to peripheral devices in a computer system. These peripheral devices have multiple pathways connecting them to the computer system. The redundancy manager determines the number of independent pathways connected to the peripheral device and presents only one logical device to the operating system, any device driver, and any other command or device processing logic in the command path before the redundancy manager. For each incoming command, the redundancy manager determines which pathways are properly functioning and selects the best pathway for the command based at least partly upon a penalty model where a path may be temporarily penalized by not including the pathway in the path selection process for a predetermined time. The redundancy manager further reroutes the command to an alternate path and resets the device for an alternate path that is not penalized or has otherwise failed. | 2012-05-24 |
20120131239 | DYNAMIC RESOURCE ALLOCATION FOR DISTRIBUTED CLUSTER-STORAGE NETWORK - Various embodiments for operating a distributed cluster storage network having a host computer system and a storage subsystem are provided. In one embodiment, by way of example only, at a first of a plurality of storage control nodes a request is received to write data to storage from the host computer system. The data is forwarded by a forwarding layer at the first of the plurality of storage control nodes to a second of the plurality of storage control nodes. Buffer resources are allocated for the data to be written to the storage by a buffer control component at each of the plurality of storage control nodes. The constrained status indicator of the buffer resource is communicated to the forwarding layer. Additional system and computer program product embodiments are disclosed and provide related advantages. | 2012-05-24 |
20120131240 | SLIDING WRITE WINDOW MECHANISM FOR WRITING DATA - Various embodiments for writing data are provided. In one embodiment, the data arranged in a plurality of write intervals is loaded into a plurality of buffers, the totality of the plurality of buffers configured as a sliding write window mechanism adapted for movement to accommodate the write intervals. The data may reach the storage system out of sequential order, and by loading it appropriately into these buffers the data is ordered sequentially before it is written to the storage media. When a commencing section of the sliding write window is filled up with written data, this section is flushed to the storage media, and the window slides forward to accommodate further data written by the writers. The writers are synchronized with the interval reflected by the current position of the sliding write window, and they send data to be written only when this data fits into the current interval of the window. (A simplified window sketch follows the table.) | 2012-05-24 |
20120131241 | SIGNAL PROCESSING SYSTEM, INTEGRATED CIRCUIT COMPRISING BUFFER CONTROL LOGIC AND METHOD THEREFOR - A signal processing system comprising buffer control logic arranged to allocate a plurality of buffers for the storage of information fetched from at least one memory element. Upon receipt of fetched information to be buffered, the buffer control logic is arranged to categorise the information to be buffered according to at least one of: a first category associated with sequential flow and a second category associated with change of flow, and to prioritise respective buffers from the plurality of buffers storing information relating to the first category associated with sequential flow ahead of buffers storing information relating to the second category associated with change of flow when allocating a buffer for the storage of the fetched information to be buffered. | 2012-05-24 |
20120131242 | Method and Device for Asynchronous Communication of Data on a Single Conductor - The invention relates to the asynchronous communication of data in complex integrated systems, be it inside integrated circuit chips or between integrated circuit chips, for example in a compact stack of chips. According to the invention, the transmission takes place on a single exchange conductor. The data are transmitted on this conductor in the form of at least three potential levels, the first level representing a first transmitted data value, the second representing a second transmitted data value, and the third representing an inactive level. An acknowledgment signal is transmitted on the same exchange conductor as the data. This signal is preferably sent by the receiver by forcing the exchange conductor to the inactive potential level, with the sender detecting this forcing. | 2012-05-24 |
20120131243 | MULTIPLEXING PIN CONTROL CIRCUIT FOR COMPUTER SYSTEM - A multiplexing pin control circuit for a computer system with multiple chips is provided, and includes a Southbridge chip having at least one multiplexing pin; at least one control module including a first connecting terminal electrically connected to the multiplexing pin, a second connecting terminal, and a control end receiving an enable signal; and a peripheral apparatus having an input/output (I/O) interface electrically connected to the second connecting terminal. When the enable signal is at a first level or a second level, the peripheral apparatus is electrically isolated from or connected to the second connecting terminal correspondingly. The control module switches on or switches off an electrical connection of the multiplexing pin with an external circuit, thereby avoiding interference with the voltage level of the multiplexing pin during an initialization and reset period and further ensuring the multiplexing function of the pin during a normal operation period. | 2012-05-24 |
20120131244 | Encoding Data Using Combined Data Mask and Data Bus Inversion - A data encoding scheme for transmission of data from one circuit to another circuit combines DBI encoding and non-DBI encoding and uses a data mask signal to indicate the type of encoding used. The data mask signal in a first state indicates that the data transmitted from one circuit to the other circuit is to be ignored, and the data mask signal in a second state indicates that the data transmitted from one circuit to the other circuit is not to be ignored. If the data mask signal is in the second state, a first subset of the data is encoded with data bus inversion and a second subset of the data is encoded with a scheme other than data bus inversion. Such encoding has the advantage that simultaneous switching output (SSO) noise is dramatically reduced when the encoded data is transmitted from one circuit to another circuit. (A basic DBI sketch follows the table.) | 2012-05-24 |
20120131245 | TRANSFER OF CONTROL BUS SIGNALING ON PACKET-SWITCHED NETWORK - Embodiments of the invention are generally directed to transfer of control bus signaling on a packet-switched network. An embodiment of a method includes sending control signals from a first device on a first control bus, the control signals being sent according to an interface protocol, the control signals being intended for a second device. The method further includes detecting a current state of the first control bus, where the current state is a control signal value driven by the first device; inserting a control signal representing the current state of the control bus into a data packet; and transmitting the data packet to the second device via a packet-switched network. | 2012-05-24 |
20120131246 | SYSTEM-ON-CHIP AND DATA ARBITRATION METHOD THEREOF - A system-on-a-chip semiconductor device comprises a first master device configured to issue a request having a transaction ID, a plurality of slave devices configured to provide data in response to the request, and an interconnector configured to include a slave interface for providing the request to one or more master interfaces and for supplying response data to the first master device based on operation characteristics of the first master. | 2012-05-24 |
20120131247 | APPARATUS FOR PERIPHERAL DEVICE CONNECTION USING SPI IN PORTABLE TERMINAL AND METHOD FOR DATA TRANSMISSION USING THE SAME - In one embodiment, an apparatus for peripheral device connection using a Serial Peripheral Interface (SPI) in a portable terminal, and a method for data transmission using the same, are provided. The apparatus includes an SPI controller for activating each of the slaves by independently assigning at least one serial data line to each of the slaves, and for reading/writing data from/to each of the slaves through at least one serial control line; a slave unit including at least one slave which, under the control of the SPI controller, reads out data from the buffer and then performs data transmission between the slaves; and a buffer for temporarily storing the data to be transmitted in order to transmit data between the slaves, which may have different data processing speeds and different data transmission speeds. | 2012-05-24 |
20120131248 | MANAGING COMPRESSED MEMORY USING TIERED INTERRUPTS - Systems and methods to manage memory are provided. A particular method may include initiating a memory compression operation. The method may further include initiating a first interrupt configured to affect a first process executing on a processor in response to a first detected memory level. A second initiated interrupt may be configured to affect the first process executing on the processor in response to a second detected memory level, and a third interrupt may be initiated to affect the first process executing on the processor in response to a third detected memory level. At least one of the first, the second, and the third detected memory levels is affected by the memory compression operation. | 2012-05-24 |
20120131249 | METHODS AND SYSTEMS FOR AN INTERPOSER BOARD - In accordance with at least some embodiments, methods and systems for an interposer board are provided. | 2012-05-24 |
20120131250 | Programming Devices and Programming Methods - A programming device includes communication circuitry for communicating with an electronic device. A first set of one or more electrical contacts is connected to the communication circuitry and configured to physically contact a corresponding second set of one or more electrical contacts located on a substrate. One or more guides are configured to align the first and second sets of electrical contacts when the substrate physically contacts the one or more guides. | 2012-05-24 |
20120131251 | FAST AND COMPACT CIRCUIT FOR BUS INVERSION - A processor-based system with at least one processor, at least one memory controller, and optionally other devices has a bussed system with a fast and compact majority voter in the circuitry responsible for the bus inversion decision. The majority voter is implemented in analog circuitry having two branches. One branch sums the advantage of transmitting the bits without inversion; the other sums the advantage of transmitting the bits with inversion. The majority voter computes the bus inversion decision in slightly more than one gate delay by simultaneously comparing the current drive in each branch. | 2012-05-24 |
20120131252 | INTELLIGENT PCI-EXPRESS TRANSACTION TAGGING - Systems and methods of routing data units such as data packets or data frames that provide improved system performance and more efficient use of system resources. The disclosed systems and methods employ memory mapping approaches in conjunction with transaction ID tag fields from the respective data units to assign each tag value, or at least one range of tag values, to a specified address, or at least one range of specified addresses, for locations in internal memory that store corresponding transaction parameters. The disclosed systems and methods can also apply selected bits from the transaction ID tag fields to selector inputs of one or more multiplexor components for selecting corresponding transaction parameters at data inputs to the multiplexor components. The disclosed systems and methods may be employed in memory-read data transfer transactions to recover the transaction parameters necessary to determine destination addresses for memory locations where the memory-read data are to be transmitted. | 2012-05-24 |
20120131253 | PCIE NVRAM CARD BASED ON NVDIMM - A memory system controller includes one or more sockets for accommodating NVDIMM cards produced by different NVDIMM providers; a PCIe interface for coupling the memory system controller to a host; and a controller coupled to the PCIe interface over a PCIe-compliant connection and to the one or more sockets over respective DDR2 connections. The controller is configured to manage data transfers between the host and a specified one of the NVDIMM sockets in which an NVDIMM card is accommodated as DMA reads and writes, format data received from the PCIe interface for transmission to the specified NVDIMM socket over the corresponding one or more DDR2 interfaces, and initiate save and restore operations on the NVDIMM card accommodated within the specified NVDIMM socket in response to power failure and power restoration indications. | 2012-05-24 |
20120131254 | SWITCH APPARATUS FOR SWITCHING DISPLAY, KEYBOARD, AND MOUSE - A switch apparatus includes first to third video graphics array (VGA) interfaces, first to sixth universal serial bus (USB) interfaces, a single-pole double-throw (SPDT) switch, and first to eighteenth electronic switches. The first VGA interface is connected to the second and third VGA interfaces through the electronic switches. The first USB interface is connected to the second and third USB interfaces through the electronic switches. The fourth USB interface is connected to the fifth and sixth USB interfaces through the electronic switches. The SPDT switch is used to control the first VGA interface to be selectively connected to the second or third VGA interface, and control the first USB interface to be selectively connected to the second or third USB interface, and control the fourth USB interface to be selectively connected to the fifth or sixth USB interface. | 2012-05-24 |
20120131255 | MULTI-PORT SYSTEM AND METHOD FOR ROUTING A DATA ELEMENT WITHIN AN INTERCONNECTION FABRIC - A method and structure(s) for providing a data path between and among nodes and processing elements within an interconnection fabric are described. More specifically, a device comprising a first circuit configured to couple between a first bus and a link is described. The circuit may be configured to operate as a bridge, support PCI configuration cycles, send outgoing information serially through the link in a format different from that of the first bus, and allow a host processor, communicating through the first bus, to selectively address one or more remote devices to which the device is configured to allow access. In some embodiments, the first circuit may support “spoof-proof” data protocols, and the device may operate in multiple modes including root bridge, leaf bridge, and gateway mode. Multiple addressing models may also be used. | 2012-05-24 |
20120131256 | I/O CONTROL SYSTEMS AND METHODS - An input/output (“I/O”) port control system is provided. The system can include an I/O controller. | 2012-05-24 |
20120131257 | Multi-Context Configurable Memory Controller - The exemplary embodiments provide a multi-context configurable memory controller comprising: an input-output data port array comprising a plurality of input queues and a plurality of output queues; at least one configuration and control register to store, for each context of a plurality of contexts, a plurality of configuration bits; a configurable circuit element configurable for a plurality of data operations, each data operation corresponding to a context of a plurality of contexts, the plurality of data operations comprising memory address generation, memory write operations, and memory read operations, the configurable circuit element comprising a plurality of configurable address generators; and an element controller, the element controller comprising a port arbitration circuit to arbitrate among a plurality of contexts having a ready-to-run status, and the element controller to allow concurrent execution of multiple data operations for multiple contexts having the ready-to-run status. | 2012-05-24 |
20120131258 | SEMICONDUCTOR MEMORY APPARATUS AND METHOD FOR OPERATING THE SAME - A semiconductor memory apparatus includes, inter alia, a master chip and a plurality of slave chips. Each of the slave chips includes a plurality of banks. A first reception signal, a first timing signal, a bank address signal, and a slice selection signal may be provided to the slave chips by the master chip. The slave chips include a slice determining unit configured to compare the slice selection signal and a slice code and generate a slice enable signal, and a bank selecting unit configured to receive the bank address signal in response to the first reception signal and the slice enable signal and generate a bank enable signal in response to the bank address signal and the first timing signal. | 2012-05-24 |
20120131259 | SHARING MEMORY PAGES HAVING REGULAR EXPRESSIONS WITHIN A VIRTUAL MACHINE - A lightweight technique for sharing memory pages within a virtual machine (VM) is provided. This technique can be used on its own to implement intra-VM page sharing or it can be augmented with sharing across VMs. Memory pages whose content can be described by some succinct grammar, such as a regular expression or simple pattern, are identified for sharing within a VM. If the content of a page matches some simple pattern, it is proposed to share such a page, but only in the scope of the VM to which it belongs, i.e., intra-VM sharing. All other pages, i.e., those that are not simple patterns, can be candidates for sharing in the scope of all currently active VMs, i.e., inter-VM sharing. Fully functional page sharing across VMs, page sharing in the context of each VM, or both can be implemented. (A simple pattern-test sketch follows the table.) | 2012-05-24 |
20120131260 | HYPERVISOR PAGE FAULT PROCESSING IN A SHARED MEMORY PARTITION DATA PROCESSING SYSTEM - Hypervisor page fault processing logic is provided for a shared memory partition data processing system. The logic, responsive to an executing virtual processor of the shared memory partition data processing system encountering a hypervisor page fault, allocates an input/output (I/O) paging request to the virtual processor from an I/O paging request pool and increments an outstanding I/O paging request count for the virtual processor. A determination is then made whether the outstanding I/O paging request count for the virtual processor is at a predefined threshold, and if not, the logic places the virtual processor in a wait state with interrupt wake-up reasons enabled based on the virtual processor's state, otherwise, it places the virtual processor in a wait state with interrupt wake-up reasons disabled. | 2012-05-24 |
20120131261 | SUB-BLOCK ACCESSIBLE NONVOLATILE MEMORY CACHE - Subject matter disclosed herein relates to sub-block accessible cache memory. | 2012-05-24 |
20120131262 | Method and Apparatus for EEPROM Emulation for Preventing Data Loss in the Event of a Flash Block Failure - A defect-resistant EEPROM emulator for preventing data loss in the event of a flash block failure is described. | 2012-05-24 |
20120131263 | MEMORY STORAGE DEVICE, MEMORY CONTROLLER THEREOF, AND METHOD FOR RESPONDING HOST COMMAND - A memory storage device, a memory controller thereof, and a method for responding host commands are provided. The memory storage device has a flash memory chip and a buffer memory. The present method includes receiving a write command issued by a host system and determining whether the write command causes the memory storage device to trigger a data moving procedure. If the write command does not cause the memory storage device to trigger the data moving procedure, the present method further includes sending an acknowledgement message corresponding to the write command to the host system after data corresponding to the write command is completely transferred to the buffer memory. | 2012-05-24 |
20120131264 | STORAGE DEVICE - According to one embodiment, a storage device comprises a first storage unit having blocks, each including pages, a second storage unit having a free block list and a free page list, and a control unit. When writing data in units of blocks, the control unit generates compressed data blocks by compressing the data in units of blocks, writes the compressed data blocks to the blocks which can be written in accordance with the information held in the free block list, and holds, in the free page list, the information about pages existing in free areas which are provided in the blocks holding compressed data blocks and which hold no compressed data blocks. When writing data in units of pages, the control unit writes the data in units of pages to pages existing in the free areas, in accordance with the information held in the free page list. | 2012-05-24 |
20120131265 | WRITE CACHE STRUCTURE IN A STORAGE SYSTEM - A method of writing data units to a storage device is provided. The data units are cached in a first level cache sorted by logical address. | 2012-05-24 |
20120131266 | MEMORY CONTROLLER, DATA STORAGE SYSTEM INCLUDING THE SAME, METHOD OF PROCESSING DATA - A data storage system includes a controller configured to receive data and data information about the data from a host, analyze the data information, detect whether the data has been compressed, and compress the data according to a detection result; and a nonvolatile memory device configured to store the data compressed by the controller and information about whether the data has been compressed. The controller includes a buffer configured to temporarily store the data and the data information received from the host, an analyzer configured to output, based on an analysis result, a compression control flag that indicates whether the data has been compressed, and a compressor configured to selectively compress or bypass the data based on the compression control flag, and to transmit the data to the nonvolatile memory device. | 2012-05-24 |
20120131267 | MEMORY DEVICE DISTRIBUTED CONTROLLER SYSTEM - A memory device distributed controller circuit distributes memory control functions amongst a plurality of memory controllers. A master controller receives an interpreted command and activates the appropriate slave controllers depending on the command. The slave controllers can include a data cache controller that is coupled to and controls the data cache and an analog controller that is coupled to and controls the analog voltage generation circuit. The respective controllers have appropriate software/firmware instructions that determine the response the respective controllers take in response to the received command. | 2012-05-24 |
20120131268 | DATA STORAGE DEVICE - A data storage device comprising: at least two flash devices for storing data; a circuit board, wherein each of the flash devices is integrated on the circuit board; a controller integrated on the circuit board for reading and writing to each flash device, wherein the controller interfaces with each flash device; at least one NOR Flash device in communication with the controller through a host bus; at least one host bus memory device in communication with the controller and at least one NOR Flash device through the host bus; at least one interface in communication with the controller and adapted to physically and electrically couple to a system, receive and store data therefrom, and retrieve and transmit data to the system. | 2012-05-24 |
20120131269 | ADAPTIVE MEMORY SYSTEM FOR ENHANCING THE PERFORMANCE OF AN EXTERNAL COMPUTING DEVICE - An adaptive memory system is provided for improving the performance of an external computing device. The adaptive memory system includes a single controller, a first memory type (e.g., Static Random Access Memory or SRAM), a second memory type (e.g., Dynamic Random Access Memory or DRAM), a third memory type (e.g., Flash), an internal bus system, and an external bus interface. The single controller is configured to: (i) communicate with all three memory types using the internal bus system; (ii) communicate with the external computing device using the external bus interface; and (iii) allocate cache-data storage assignment to a storage space within the first memory type, and after the storage space within the first memory type is determined to be full, allocate cache-data storage assignment to a storage space within the second memory type. | 2012-05-24 |
20120131270 | STORAGE SYSTEM AND CONTROL METHOD THEREOF - Proposed are a storage system and its control method capable of dealing with the unique problems that arise when using a nonvolatile memory as the memory device while effectively preventing performance deterioration. This storage system is provided with a plurality of memory modules having one or more nonvolatile memory chips, and a controller for controlling the reading and writing of data from and to each memory module. The memory module decides which nonvolatile memory chip is to become the copy destination of data stored in the nonvolatile memory when a failure occurs in a nonvolatile memory chip of the memory module itself, and copies the data stored in the failed nonvolatile memory chip to the nonvolatile memory chip decided as the copy destination. | 2012-05-24 |
20120131271 | STORAGE DEVICE AND METHOD OF CONTROLLING STORAGE SYSTEM - With respect to a storage system in which quick formatting and sequential formatting can be run concurrently, the time it takes to process an access request from a host is prevented from becoming prolonged even when a normal sequential formatting process is executed with respect to a storage volume which frequently incurs I/O penalties. The storage device measures the load from the host per configurational unit (storage medium) of LUs, and divides the LUs into a group of LUs whose load per storage medium is low, and a group of LUs whose load per storage medium is high. Further, the density per unit of LU capacity of I/O penalties incurred in a storage volume for which quick formatting is being executed is calculated. Sequential formatting is then executed, with priority, with respect to the LUs belonging to the group with low loads and in order of descending density of incurred I/O penalties. | 2012-05-24 |
20120131272 | Data Processing System and Storage Subsystem Provided in Data Processing System - A first storage subsystem includes a first storage device and one or more second storage devices. A second storage subsystem includes a third storage device and a fourth storage device. A third storage subsystem comprises a fifth storage device and a sixth storage device. The first storage subsystem generates a dataset comprising an update number expressing the update order of the first storage device and write data stored in the first storage device, stores the generated dataset in the one or more second storage devices, and transmits the dataset to the second and third storage subsystems. Each of the second and third storage subsystems stores the received dataset in the third storage device or fifth storage device, reads a dataset from the third or fifth storage device according to the update number, and stores the write data within the dataset in the fourth storage device or sixth storage device. | 2012-05-24 |
20120131273 | METHOD AND SYSTEM FOR STORING MEMORY COMPRESSED DATA ONTO MEMORY COMPRESSED DISKS - In a computer system supporting memory compression, wherein memory compressed data is managed in units of memory sectors of size S, wherein data is stored on disk in a different compressed format, and wherein data on said disk is managed in units of disk sectors of size D, a method for storing memory compressed data on a compressed disk includes combining at least one of compressed memory directory information, a system header, compressed data controls, and pads into a data structure having a same size S as a memory sector, grouping the data structure and the data contained in the desired memory sectors into groups of D/S items, and storing each of the groups in a separate disk sector. | 2012-05-24 |
20120131274 | Legacy Data Management - Various systems, processes, products, and techniques may be used to manage legacy data. In one general implementation, a system, process, and/or product for managing legacy data may include the ability to determine whether a data request has been received and, if a data request has been received, determine whether the data request is associated with legacy data of an external storage management system. If the data request is not associated with legacy data of an external storage management system, the system, process, and/or product may retrieve data from a local storage array, and if the data request is associated with legacy data of an external storage management system, the system, process, and/or product may request legacy data from an external storage management system. The system, process, and/or product may also generate a response to the data request. | 2012-05-24 |
20120131275 | NETWORK-ATTACHED STORAGE SYSTEM - The invention discloses a network-attached storage system including an interface module, a plurality of storage devices and a storage module. The interface module is configured to be attached to a network. The interface module is for receiving transmission protocol information transmitted over the network, and processing the information into storage data and access instructions. The storage module is for receiving the storage data and the access instructions, and controlling, according to the access instructions, access of the storage data to the primary storage devices through a transmission interface. | 2012-05-24 |
20120131276 | INFORMATION APPARATUS AND METHOD FOR CONTROLLING THE SAME - An object is to efficiently set configurations of a storage apparatus. Provided is an information apparatus communicably coupled to a storage apparatus 10, which validates a script executed by the storage apparatus 10 for setting a configuration of the storage apparatus 10, the information apparatus generating configurations of the storage apparatus 10 after each command described in a script is executed sequentially; and performing consistency validation on the script by determining whether or not the command described in the script is normally executable in a case where the command is executed on the assumption that the storage apparatus 10 has the configuration immediately before the execution. | 2012-05-24 |
20120131277 | ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. | 2012-05-24 |
20120131278 | DATA STORAGE APPARATUS AND METHODS - Data storage apparatus and methods are disclosed. A disclosed example data storage apparatus comprises a cache layer and a processor in communication with the cache layer. The processor is to dynamically enable or disable the cache layer via a cache layer enable line based on a data store access type. | 2012-05-24 |
20120131279 | MEMORY ELEMENTS FOR PERFORMING AN ALLOCATION OPERATION AND RELATED METHODS - Apparatus for memory elements and related methods for performing an allocate operation are provided. An exemplary memory element includes a plurality of way memory elements and a replacement module coupled to the plurality of way memory elements. Each way memory element is configured to selectively output data bits maintained at an input address. The replacement module is configured to enable output of the data bits maintained at the input address of a way memory element of the plurality of way memory elements for replacement in response to an allocate instruction including the input address. | 2012-05-24 |
20120131280 | SYSTEMS AND METHODS FOR BACKING UP STORAGE VOLUMES IN A STORAGE SYSTEM - Systems and methods for backing up storage volumes are provided. One system includes a primary side, a secondary side, and a network coupling the primary and secondary sides. The secondary side includes first and second VTSs, each including a cache and storage tape. The first VTS is configured to store a first portion of a group of storage volumes in its cache and migrate the remaining portion to its storage tape. The second VTS is configured to store the remaining portion of the storage volumes in its cache and migrate the first portion to its storage tape. One method includes receiving multiple storage volumes from a primary side, storing the storage volumes in the caches of the first and second VTSs, migrating a portion of the storage volumes from the cache to storage tape in the first VTS, and migrating a remaining portion of the storage volumes from the cache to storage tape in the second VTS. | 2012-05-24 |
20120131281 | Converting Victim Writeback to a Fill - In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load. | 2012-05-24 |
20120131282 | Providing A Directory Cache For Peripheral Devices - In one embodiment, the present invention includes a processor having at least one core and uncore logic. The uncore logic can include a home agent to act as a guard to control access to a memory region. Either in the home agent or another portion of the uncore logic, a directory cache may be provided to store ownership information for a portion of the memory region owned by an agent coupled to the processor. In this way, when an access request for the memory region misses in the directory cache, a memory transaction can be avoided. Other embodiments are described and claimed. | 2012-05-24 |
20120131283 | MEMORY MANAGER FOR A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a network processor having a plurality of processing modules coupled to a system cache and a shared memory. A memory manager allocates blocks of the shared memory to a requesting one of the processing modules. The allocated blocks store data corresponding to packets received by the network processor. The memory manager maintains a reference count for each allocated memory block indicating a number of processing modules accessing the block. One of the processing modules reads the data stored in the allocated memory blocks, stores the read data to corresponding entries of the system cache and operates on the data stored in the system cache. Upon completion of operation on the data, the processing module requests to decrement the reference count of each memory block. Based on the reference count, the memory manager invalidates the entries of the system cache and deallocates the memory blocks. | 2012-05-24 |
20120131284 | MULTI-CORE ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. The system is further configured to delegate computational or memory resource needs to a plurality of sub-processing cores for processing to satisfy application demands. | 2012-05-24 |
20120131285 | LOCKING AND SIGNALING FOR IMPLEMENTING MESSAGING TRANSPORTS WITH SHARED MEMORY - Disclosed are systems and methods for transporting data using shared memory comprising allocating, by one of a plurality of sender applications, one or more pages, wherein the one or more pages are stored in a shared memory, wherein the shared memory is partitioned into one or more pages, and writing data, by the sender application, to the allocated one or more pages, wherein a page is either available for use or allocated to the sender applications, wherein the one or more pages become available after the sender application has completed writing the data. The systems and methods further disclose sending a signal, by the sender application, to a receiver application, wherein the signal notifies the receiver application that writing the data to a particular page is complete, reading, by the receiver application, the data from the one or more pages, and de-allocating, by the receiver application, the one or more pages. | 2012-05-24 |
20120131286 | DYNAMIC DETECTION AND REDUCTION OF UNALIGNED I/O OPERATIONS - Detection and reduction of unaligned input/output (“I/O”) requests is implemented by a storage server determining an alignment value for data stored by the server within a storage system on behalf of a first client, writing the alignment value to a portion of the volume that stores the data for the first client, but not to a portion of the volume that stores data for a second client, and changing a location of data within the portion of the volume that stores the data for the first client, but not a location of data in the portion of the volume that stores data for the second client, to an alignment corresponding to the alignment value. The alignment value is applied to I/O requests directed to the portion of the volume that stores the data blocks for the first client after the location of the data blocks has been changed. | 2012-05-24 |
20120131287 | STORAGE CONTROL APPARATUS AND LOGICAL VOLUME SIZE SETTING METHOD - The present invention efficiently changes a volume size while maintaining a copy pair, consisting of a PVOL and an SVOL, as-is. | 2012-05-24 |
20120131288 | Reconfigurable Integrated Circuit Architecture With On-Chip Configuration and Reconfiguration - The exemplary embodiments provide a reconfigurable integrated circuit capable of on-chip configuration and reconfiguration, comprising: a plurality of configurable composite circuit elements; a configuration and control bus; a memory; and a sequential processor. Each composite circuit element comprises: a configurable circuit; and an element interface and control circuit, the element interface and control circuit comprising an element controller and at least one configuration and control register to store one or more configuration and control words. The configuration and control bus is coupled to the plurality of configurable composite circuit elements, and comprises a plurality of address and control lines and a plurality of data lines. The sequential processor can write configurations to the configuration and control registers of an addressed configurable composite circuit element to configure or reconfigure the configurable circuit. | 2012-05-24 |
20120131289 | MULTIPATH SWITCHING OVER MULTIPLE STORAGE SYSTEMS - A system comprises a first storage system, a second storage system, a plurality of switches, and a server connected with the first storage system via a first group of switches and connected with the second storage system via a second group of switches. The first group and the second group have at least one switch which is not included in both the first and second groups. The first storage system receives I/O commands targeted to first logical units from the server via the first group of switches. The first storage system maintains first information regarding the ports of both the first and second storage systems. The first information is used to generate multipath communication between the server and the first storage system, including at least one path which passes through the second storage system and at least one other path which does not pass through the second storage system. | 2012-05-24 |
20120131290 | Backup Memory Administration - Methods, systems, and computer program products for backup memory administration are provided. Embodiments include storing in an active memory device, by a memory backup controller, blocks of computer data received from random access memory; recording in a change log, by the memory backup controller, identifications of each block of computer data that is stored in the active memory device; detecting, by the memory backup controller, a backup trigger event; and responsive to the detecting of the backup trigger event: copying, by the memory backup controller, from the active memory device, to a backup memory device, the blocks of data identified in the change log; and clearing, by the memory backup controller, the change log. | 2012-05-24 |
20120131291 | APPLICATION AWARE SNAPSHOTS - A method of creating an application aware snapshot comprises a host system receiving a request for a snapshot, and the host system sending a SCSI snapshot command directly to a disk controller without requiring a disk controller-specific provider to be installed on the host system. The disk controller then receives the SCSI snapshot command and creates the snapshot in response to receiving the SCSI command, wherein the snapshot is created in accordance with logic contained within the disk controller without requiring a disk controller-specific provider to be installed on the host system. Using the SCSI snapshot command facilitates a plurality of host systems being able to send a SCSI snapshot command directly to a disk controller without requiring a disk controller-specific provider to be installed on each of the plurality of host systems. Similarly, a single host system is able to send a SCSI snapshot command directly to a plurality of disk controllers without requiring a disk controller-specific provider for each disk controller to be installed on the host system. | 2012-05-24 |
20120131292 | VARIABLE DATA PRESERVATION PREWRITE - In one aspect of the present description, a data preservation function is provided for preserving a set of data on a source storage device at a point in time, and includes identifying as a function of prior update usage, such as input/output usage, of the data to be preserved, a portion of the data which is more likely to be the subject of updates during at least a portion of the data preservation operation as compared to the remaining portion of the data to be preserved, and copies the identified portion of the data to be preserved to a target storage device. In another aspect, the size of the portion of data to be identified is variable. In one embodiment, the size of the portion of data to be identified is a function of a parameter of the command, such that a user can specify the command parameter which affects the size of the portion of data which is prewritten to the target storage device. The parameter may be, for example, a percentage of the data to be preserved, such that a user can specify the percentage of the point-in-time data which is prewritten to the target storage device. Alternatively, the parameter may be, for example, a probability of a collision occurring, such that a user can specify a probability of a collision occurring. Other features and aspects may be realized, depending upon the particular application. | 2012-05-24 |
20120131293 | DATA ARCHIVING USING DATA COMPRESSION OF A FLASH COPY - Embodiments of the disclosure relate to archiving data in a storage system. An exemplary embodiment comprises making a flash copy of data in a source volume, compressing data in the flash copy wherein each track of data is compressed into a set of data pages, and storing the compressed data pages in a target volume. Data extents for the target volume may be allocated from a pool of compressed data extents. After each stride worth of data is compressed and stored in the target volume, data may be destaged to avoid destage penalties. Data from the target volume may be decompressed from a flash copy of the target volume in a reverse process to restore each data track, when the archived data is needed. Data may be compressed and decompressed using a Lempel-Ziv-Welch process. | 2012-05-24 |
20120131294 | PORTABLE DEVICE AND BACKUP METHOD THEREOF - Another embodiment of the invention provides a data saving system including a portable device having first data, a third party, and a storage management server. The storage management server connects to at least one backup device, wherein when the portable device wants to save the first data, the portable device transmits the first data and a save command to the third party, the storage management server monitors the third party to determine whether there is data designated to the storage management server, and if so, the storage management server acquires and transmits the first data to the backup device. | 2012-05-24 |
20120131295 | DATA PROCESSING APPARATUS, ACCESS CONTROL METHOD, AND STORAGE MEDIUM - When an accessible state of an external memory unit is instructed to be canceled and data is not being stored in the external memory unit, a data processing apparatus cancels the accessible state of the external memory unit if the external memory unit is not set as a backup destination, and does not cancel the accessible state of the external memory unit if the external memory unit is set as the backup destination. | 2012-05-24 |
20120131296 | Volume Management for Network-Type Storage Devices - An administrator's work load increases because the administrator has to both allocate volumes of PC server device applications and take over volumes for applications based on changes in PC server devices. A volume management system solves the problem with a computer system having storage devices each having a unit managing volume configuration information based on each application, a unit managing volume usage information based on the application volumes, and a unit managing and partitioning allocatable areas of the storage devices based on performance and reliability. The system has a unit selecting suitable allocation regions in accordance with the volume usages of the applications; a unit selecting a suitable allocation region based on change of host performance and migrating a volume to the suitable allocation region when the host configuration of an application changes; and a unit changing configuration information to perform change of setting on each host. | 2012-05-24 |
20120131297 | METHOD AND APPARATUS FOR SEAMLESS MANAGEMENT FOR DISASTER RECOVERY - A method, apparatus, article of manufacture, and system are presented for establishing redundant computer resources. According to one embodiment, in a system including a plurality of processor devices, a plurality of storage devices, and a management server connected via a network, the method comprises storing device information relating to the processor devices and the storage devices and topology information relating to the topology of the network, identifying at least one primary computer resource, selecting at least one secondary computer resource suitable to serve as a redundant resource corresponding to the at least one primary computer resource based on the device information and the topology information, and assigning the at least one secondary computer resource as a redundant resource corresponding to the at least one primary computer resource. | 2012-05-24 |
20120131298 | REMOTE COPY SYSTEM - Even when a host does not give a write time to write data, consistency can be kept among data stored in secondary storage systems. The present system has plural primary storage systems each having a source volume and plural secondary storage systems each having a target volume. Once data is received from a host, each of the plural primary storage systems creates write-data management information having sequential numbers and reference information and sends the data, sequential number, and reference information to one of the secondary storage systems. Each of the secondary storage systems records the reference information corresponding to the largest sequential number among the serial sequential numbers it has received, and stores data in a target volume in order of sequential numbers, applying the data whose reference information is smaller than the smallest reference information value recorded across the plural secondary storage systems. | 2012-05-24 |
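The consistency rule in the last sentence can be modeled in a few lines. In the sketch below (an interpretation, not the patent's implementation), each secondary scans its gap-free run of sequential numbers, records the reference value at the end of that run, and all secondaries then apply only the data whose reference value is below the smallest recorded reference.

```python
def consistent_apply(secondaries):
    """secondaries: list of dicts mapping sequential number -> (reference, data)."""
    recorded = []
    for received in secondaries:
        seq, last_ref = 1, 0
        while seq in received:                 # largest gap-free sequential number
            last_ref = received[seq][0]
            seq += 1
        recorded.append(last_ref)              # reference value at that point

    cutoff = min(recorded)                     # smallest recorded reference value
    return [[received[s][1] for s in sorted(received) if received[s][0] < cutoff]
            for received in secondaries]

# Two secondaries; the second never received sequence number 3.
a = {1: (10, "A1"), 2: (11, "A2"), 3: (12, "A3")}
b = {1: (10, "B1"), 2: (11, "B2"), 4: (13, "B4")}
print(consistent_apply([a, b]))   # [['A1'], ['B1']] -- both stop at the same point
```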
20120131299 | METHOD AND SYSTEM FOR REDUCING ENTRIES IN A CONTENT-ADDRESSABLE MEMORY - A method for reducing memory entries in a ternary content-addressable memory may include determining if a first entry and a second entry are associated with the same data value. The method may also include determining if the first entry can be masked such that searching the memory with the content value of either of the first entry or the second entry returns the same data value. The method may further include, in response to determining that the first entry and a second entry are associated with the same data value and determining that the first entry can be masked such that addressing the memory with the content value of either of the first entry or the second entry returns the same data value: (i) masking the first entry such that addressing the memory with the content value of either of the first entry or the second entry returns the same data value; and (ii) deleting the second entry. | 2012-05-24 |
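A toy version of the masking step might look like the following; entries are modeled as (value, mask, data) tuples, and a real implementation would additionally verify that widening the mask does not make the entry shadow other entries that return different data.

```python
def try_merge(entry_a, entry_b):
    """Attempt to widen entry_a's mask so it also matches entry_b.

    Entries are (value, mask, data); a mask bit of 1 means the bit is compared.
    Returns the widened entry if both entries return the same data, else None.
    After merging, entry_b can be deleted from the TCAM.
    """
    v1, m1, d1 = entry_a
    v2, m2, d2 = entry_b
    if d1 != d2 or m1 != m2:
        return None
    differing = (v1 ^ v2) & m1          # compared bits where the values differ
    new_mask = m1 & ~differing          # stop comparing those bits
    return (v1 & new_mask, new_mask, d1)

# 0b1010 and 0b1011 both map to data 7: mask the low bit and delete the second entry.
print(try_merge((0b1010, 0b1111, 7), (0b1011, 0b1111, 7)))   # (10, 14, 7)
```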
20120131300 | DETERMINING SUITABLE NETWORK INTERFACE FOR PARTITION DEPLOYMENT/RE-DEPLOYMENT IN A CLOUD ENVIRONMENT - Migrating a logical partition (LPAR) from a first physical port to a first target physical port includes determining a configuration of an LPAR having allocated resources residing on a computer and assigned to the first physical port of the computer. The configuration includes a label that specifies a network topology that is provided by the first physical port, and the first target physical port has a port label that matches the label included in the configuration of the LPAR. The first target physical port with available capacity to service the LPAR is identified, and the LPAR is migrated from the first physical port to the first target physical port by reassigning the LPAR to the first target physical port. | 2012-05-24 |
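A minimal selection routine consistent with this description might look as follows, assuming each port is described by a label and a free-capacity figure; the dictionary keys and values are illustrative.

```python
def pick_target_port(lpar, ports):
    """Choose the first physical port whose label matches the LPAR's configured
    network topology label and which still has capacity to service the LPAR."""
    for port in ports:
        if (port["label"] == lpar["label"]
                and port["free_capacity"] >= lpar["required_capacity"]):
            return port["name"]
    return None

lpar = {"label": "prod-vlan10", "required_capacity": 2}
ports = [
    {"name": "eth0", "label": "prod-vlan10", "free_capacity": 1},
    {"name": "eth2", "label": "prod-vlan10", "free_capacity": 4},
]
print(pick_target_port(lpar, ports))   # eth2
```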
20120131301 | ACCESS APPARATUS AND AVAILABLE STORAGE SPACE CALCULATION METHOD - A method used in an access module that uses a file system to manage a nonvolatile memory of an information recording module enables an available storage space to be calculated in a short time before file data is recorded, and shortens the time required from initialization of the file system to recording. | 2012-05-24 |
20120131302 | CONTROL METHOD OF VIRTUAL VOLUMES AND STORAGE APPARATUS - Multiple types of storage devices which have different performance are appropriately allocated to multiple virtual volumes in accordance with the performance requirements of the respective virtual volumes. | 2012-05-24 |
20120131303 | Thin Provisioned Space Allocation - A storage monitoring system may reside between a file system and a storage system in a thin provisioned storage system. The storage monitoring system may create space holder files within a volume, where the space holder files contain an address space not backed up with physical storage. As requests for storage space are received from a file system, the storage monitoring system may allocate physical space to the volume by provisioning portions of the physical storage device to the volume and by removing one of the space holder files. The storage monitoring system may present alerts when physical storage space is low, as well as return an amount of physical space available to a volume size request. | 2012-05-24 |
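The space-holder idea can be sketched in a few lines of Python; the class below is a toy model under the assumption that holder files and physical extents come in equal-sized units, and the alerting is reduced to a print.

```python
class ThinVolume:
    """Toy model: the volume advertises its full logical size, but address space
    not yet backed by physical extents is covered by space holder files."""

    def __init__(self, logical_size, unit):
        self.unit = unit
        self.holders = logical_size // unit    # space holder files, no backing
        self.physical = 0

    def request_space(self, pool):
        """File-system request for more space: provision one unit from the pool
        and retire one space holder file."""
        if self.holders == 0:
            raise RuntimeError("volume fully provisioned")
        if pool["free"] < self.unit:
            print("alert: physical storage low")
            return False
        pool["free"] -= self.unit
        self.physical += self.unit
        self.holders -= 1
        return True

pool = {"free": 5}
vol = ThinVolume(logical_size=10, unit=2)
vol.request_space(pool); vol.request_space(pool)
print(vol.physical, pool["free"])   # 4 1
```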
20120131304 | Adaptive Wear Leveling via Monitoring the Properties of Memory Reference Stream - Adaptive wear leveling in limited lifetime memory devices includes performing a method for monitoring a write data stream that includes write line addresses. A property of the write data stream is detected and a wear leveling process is adapted in response to the detected property. The wear leveling process is applied to the write data stream to generate physical addresses from the write line addresses. | 2012-05-24 |
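As a rough illustration of adapting the leveling rate to an observed property of the write stream, the sketch below rotates a simple modular remapping faster when the recent write stream is heavily skewed toward a few hot lines. It is a toy model, not the patented design (a faithful start-gap scheme moves one line at a time), and all thresholds are arbitrary.

```python
from collections import Counter

class AdaptiveWearLeveler:
    """Rotate a simple modular remapping faster when the observed write stream
    is skewed toward a few hot lines (toy model with arbitrary thresholds)."""

    def __init__(self, lines, window=100):
        self.lines = lines            # number of memory lines
        self.start = 0                # current rotation offset
        self.window = window          # writes per monitoring interval
        self.recent = Counter()       # write line addresses seen this interval
        self.rotate_every = 50        # writes between rotations (adapted below)
        self.count = 0

    def _adapt(self):
        # Detected property: how concentrated the recent writes are.
        skew = max(self.recent.values()) / sum(self.recent.values())
        self.rotate_every = 10 if skew > 0.5 else 50
        self.recent.clear()

    def write(self, line_addr):
        self.recent[line_addr] += 1
        self.count += 1
        if self.count % self.window == 0:
            self._adapt()
        if self.count % self.rotate_every == 0:
            self.start = (self.start + 1) % self.lines
        return (line_addr + self.start) % self.lines     # physical address

wl = AdaptiveWearLeveler(lines=1024)
physical = {wl.write(7) for _ in range(200)}             # one pathologically hot line
print(len(physical), "distinct physical lines used for line 7")
```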
20120131305 | PAGE AWARE PREFETCH MECHANISM - A processor includes a page aware prefetch unit having a storage with a number of entries, each corresponding to a different prefetch data stream. Each entry may be configured to store information corresponding to a page size of the prefetch data stream, along with, for example, an address corresponding to the prefetch data stream. For each entry, the prefetch unit may be configured to determine, based upon the page size information, whether a prefetch of data in the data stream will cross a page boundary associated with the data stream. | 2012-05-24 |
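The page-boundary check itself is simple enough to show directly; the helper below assumes power-of-two page sizes, and its names are illustrative.

```python
def prefetch_crosses_page(stream_addr: int, prefetch_bytes: int, page_size: int) -> bool:
    """Return True if prefetching `prefetch_bytes` starting at `stream_addr`
    would run past the page the stream is currently in."""
    page_base = stream_addr & ~(page_size - 1)        # page_size must be a power of two
    return stream_addr + prefetch_bytes > page_base + page_size

# A stream tracked with a 4 KiB page size: a 256-byte prefetch near the page end crosses.
print(prefetch_crosses_page(0x1FF0, 256, 4096))   # True
print(prefetch_crosses_page(0x1000, 256, 4096))   # False
```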
20120131306 | Streaming Translation in Display Pipe - In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page. | 2012-05-24 |
20120131307 | DATA STRUCTURE FOR ENFORCING CONSISTENT PER-PHYSICAL PAGE CACHEABILITY ATTRIBUTES - A data structure for enforcing consistent per-physical page cacheability attributes is disclosed. The data structure is used with a method for enforcing consistent per-physical page cacheability attributes, which maintains memory coherency within a processor addressing memory, such as by comparing a desired cacheability attribute of a physical page address in a PTE against an authoritative table that indicates the current cacheability status. This comparison can be made at the time the PTE is inserted into a TLB. When the comparison detects a mismatch between the desired cacheability attribute of the page and the page's current cacheability status, corrective action can be taken to transition the page into the desired cacheability state. | 2012-05-24 |
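A small model of the insertion-time check might look like this, with a Python dictionary standing in for the authoritative table and a print standing in for the corrective transition; class and attribute names are assumptions.

```python
class CacheabilityTable:
    """Authoritative record of each physical page's current cacheability.

    Before a PTE is inserted into the TLB, its desired attribute is compared
    against this table; a mismatch triggers a transition of the page."""

    def __init__(self):
        self.state = {}    # physical page number -> "cacheable" / "uncacheable"

    def insert_tlb_entry(self, ppn, desired):
        current = self.state.setdefault(ppn, desired)
        if current != desired:
            # Corrective action: transition the page, then record the new state.
            print(f"page {ppn:#x}: transitioning {current} -> {desired}")
            self.state[ppn] = desired
        return (ppn, desired)   # entry as it would land in the TLB

tbl = CacheabilityTable()
tbl.insert_tlb_entry(0x42, "cacheable")
tbl.insert_tlb_entry(0x42, "uncacheable")   # mismatch detected and corrected
```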
20120131308 | SYSTEM, DEVICE, AND METHOD FOR ON-THE-FLY PERMUTATIONS OF VECTOR MEMORIES FOR EXECUTING INTRA-VECTOR OPERATIONS - A device, system, and method for processing program instructions, for example, to execute intra-vector operations. A fetch unit may receive a program instruction defining different operations on data elements stored at the same vector memory address. A processor may include different types of execution units, each executing a different one of a predetermined plurality of elemental instructions. Each program instruction may be a combination of one or more of the elemental instructions. The processor may receive a vector of data elements stored non-consecutively at the same vector memory address to be processed by the same one of the elemental instructions, and a vector of configuration values independently associated with executing that elemental instruction on the non-consecutive data elements. At least two configuration values may be different so as to implement different operations by executing the same elemental instruction, with the different configuration values, on the vector of non-consecutive data elements. | 2012-05-24 |
20120131309 | HIGH-PERFORMANCE, SCALABLE MULTICORE HARDWARE AND SOFTWARE SYSTEM - Traditionally, providing parallel processing within a multi-core system has been very difficult. Here, however, a system is provided where serial source code is automatically converted into parallel source code, and a processing cluster is reconfigured “on the fly” to accommodate the parallelized code based on an allocation of memory and compute resources. Thus, the processing cluster and its corresponding system programming tool provide a system that can perform parallel processing from a serial program in a way that is transparent to the user. | 2012-05-24 |
20120131310 | Methods And Apparatus For Independent Processor Node Operations In A SIMD Array Processor - A control processor is used for fetching and distributing single instruction multiple data (SIMD) instructions to a plurality of processing elements (PEs). One of the SIMD instructions is a thread start (Tstart) instruction, which causes the control processor to pause its instruction fetching. A local PE instruction memory (PE Imem) is associated with each PE and contains local PE instructions for execution on the local PE. Local PE Imem fetch, decode, and execute logic are associated with each PE. Instruction path selection logic in each PE is used to select between control processor distributed instructions and local PE instructions fetched from the local PE Imem. Each PE is also initialized to receive control processor distributed instructions. In addition, local hold generation logic is associated with each PE. A PE receiving a Tstart instruction causes the instruction path selection logic to switch to fetch local PE Imem instructions. | 2012-05-24 |
20120131311 | CORRELATION-BASED INSTRUCTION PREFETCHING - The disclosed embodiments provide a system that facilitates prefetching an instruction cache line in a processor. During execution of the processor, the system performs a current instruction cache access which is directed to a current cache line. If the current instruction cache access causes a cache miss or is a first demand fetch for a previously prefetched cache line, the system determines whether the current instruction cache access is discontinuous with a preceding instruction cache access. If so, the system completes the current instruction cache access by performing a cache access to service the cache miss or the first demand fetch, and also prefetching a predicted cache line associated with a discontinuous instruction cache access which is predicted to follow the current instruction cache access. | 2012-05-24 |
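One plausible reading of the correlation mechanism is sketched below: a table learns which discontinuous fetch followed each cache line, and on a discontinuous miss it prefetches the learned follower of the current line. The 64-byte line size, the table structure, and the names are assumptions, not the patented design.

```python
class CorrelationPrefetcher:
    """Remember which discontinuous fetch tended to follow each cache line and
    prefetch that follower when the line misses again (illustrative model)."""

    LINE = 64

    def __init__(self):
        self.table = {}        # line address -> learned discontinuous follower
        self.prev_line = None

    def access(self, pc, miss):
        line = pc // self.LINE * self.LINE
        discontinuous = (self.prev_line is not None
                         and line not in (self.prev_line, self.prev_line + self.LINE))
        if discontinuous:
            self.table[self.prev_line] = line        # learn the correlation
        hint = None
        if miss and discontinuous and line in self.table:
            hint = self.table[line]                  # predicted follower of `line`
        self.prev_line = line
        return hint

pf = CorrelationPrefetcher()
for pc in (0x0, 0x400, 0x0, 0x400):                  # code bouncing between two regions
    hint = pf.access(pc, miss=True)
    if hint is not None:
        print(f"miss at {pc:#x}: prefetch line {hint:#x}")
```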
20120131312 | Data processing apparatus and method - A data processing apparatus | 2012-05-24 |
20120131313 | Error recovery following speculative execution with an instruction processing pipeline - An instruction processing pipeline | 2012-05-24 |
20120131314 | Ganged Hardware Counters for Coordinated Rollover and Reset Operations - Mechanisms for controlling rollover or reset of hardware performance counters in a data processing system. A signal indicating that a rollover or reset of a first hardware performance counter has occurred is received, and it is determined whether the first hardware performance counter is analytically related to one or more second hardware performance counters based on defined ganged hardware performance counter sets. A signal is sent to each of the one or more second hardware performance counters in response to a determination that the first hardware performance counter is analytically related to the one or more second hardware performance counters. Each of the one or more second hardware performance counters is reset to an initial value in response to receiving the signal from the ganged hardware performance counter rollover/reset logic. | 2012-05-24 |
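The rollover propagation can be modeled compactly; the sketch below treats a gang as a set of counter names that are reset together, with arbitrary counter names, limits, and class names chosen for the example.

```python
class GangedCounters:
    """Counters in the same gang are reset together when any one of them
    rolls over or is reset, keeping analytically related counts aligned."""

    def __init__(self, names, limit, gangs):
        self.values = {n: 0 for n in names}
        self.limit = limit
        self.gangs = gangs          # list of sets of counter names

    def increment(self, name, amount=1):
        self.values[name] += amount
        if self.values[name] >= self.limit:        # rollover detected
            self._rollover(name)

    def _rollover(self, name):
        # Find the gang the counter belongs to and reset all of its members.
        for gang in self.gangs:
            if name in gang:
                for member in gang:
                    self.values[member] = 0        # reset to initial value
                return
        self.values[name] = 0                       # not ganged: reset alone

counters = GangedCounters(["cycles", "instructions", "stalls"], limit=100,
                          gangs=[{"cycles", "instructions"}])
counters.increment("instructions", 60)
counters.increment("cycles", 100)       # rolls over -> both reset together
print(counters.values)                  # {'cycles': 0, 'instructions': 0, 'stalls': 0}
```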
20120131315 | DATA PROCESSING APPARATUS - A data processing apparatus may include a processing unit that performs processing related to data, a first register that holds a value for defining an operation of the processing unit, a second register that holds a value output from the first register, the second register outputting the value to the processing unit, a first control unit that performs control for writing a value in the first register, a second control unit that performs control for rewriting the value held by the second register with the value output from the first register, after the value is written in the first register, and a third control unit that performs control for rewriting the value held by the second register with an invalid value, at which the processing of the processing unit is stopped, during a period for which the value is written in the first register. | 2012-05-24 |
20120131316 | METHOD AND APPARATUS FOR IMPROVED SECURE COMPUTING AND COMMUNICATIONS - A method and apparatus are disclosed that may comprise applying compact markup notation to a general recursive computing system including hardware and software components, the compact markup notation defining things, places, paths, actions and causes within at least one of the hardware and the software of the general recursive computing system, to establish a set of data comprising a definitive description of the general recursive computing system in the compact notation; and synthesizing a self-aware and self-monitoring primitive recursive computing system utilizing the definitive description in the compact markup notation. | 2012-05-24 |
20120131317 | TICKET AUTHORIZED SECURE INSTALLATION AND BOOT - A method and apparatus for secure software installation to boot a device authorized by a ticket are described herein. A ticket request including a device identifier of the device is sent for the ticket which includes attributes for one or more components to boot the device into an operating state. The ticket is cryptographically validated to match the one or more components with corresponding attributes included in the ticket. If successfully matched, the one or more components are executed to boot the device. | 2012-05-24 |
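A highly simplified model of ticket validation is shown below, using an HMAC over the ticket body as a stand-in for whatever cryptographic validation the actual design uses; the key, field names, and digest scheme are all assumptions for the example.

```python
import hashlib, hmac

SIGNING_KEY = b"example-key"    # hypothetical shared key for the sketch

def validate_and_boot(ticket, device_id, components):
    """Boot only if every component's digest matches the signed ticket.

    ticket: {"device_id": ..., "components": {name: sha256 hexdigest}, "sig": ...}
    components: {name: raw bytes of the boot component}
    """
    body = repr((ticket["device_id"], sorted(ticket["components"].items()))).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["sig"]):
        raise ValueError("ticket signature invalid")
    if ticket["device_id"] != device_id:
        raise ValueError("ticket issued for a different device")
    for name, blob in components.items():
        if hashlib.sha256(blob).hexdigest() != ticket["components"].get(name):
            raise ValueError(f"component {name} does not match ticket")
    print("all components matched; booting")

kernel = b"\x7fELF..."
ticket = {
    "device_id": "dev-123",
    "components": {"kernel": hashlib.sha256(kernel).hexdigest()},
}
body = repr((ticket["device_id"], sorted(ticket["components"].items()))).encode()
ticket["sig"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
validate_and_boot(ticket, "dev-123", {"kernel": kernel})
```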
20120131318 | SERVER AND METHOD FOR PERFORMING DATA RECOVERY OF THE SERVER - A method for performing data recovery of a server sends a data recovery request from a basic input output system (BIOS) of the server to a backup microchip of the server if a master operating system (OS) of an original microchip of the server is not available when the server is powered on, obtains a backup initial OS of the server from a storage unit of the backup microchip, and boots the backup initial OS according to a bootstrap of the backup initial OS. The method further obtains a backup master OS of the server from the storage unit of the backup microchip, sends the backup master OS to the original microchip of the server, and restarts the server according to the backup master OS. | 2012-05-24 |
20120131319 | SECURITY PROTECTION SYSTEM AND METHOD - A server includes a baseboard management controller (BMC). The server receives a first password and a second password input by a user. The BMC stores a first cryptograph corresponding to the first password in a field-replaceable unit (FRU) of the BMC. If a second cryptograph corresponding to the second password is the same as the first cryptograph, the server is started up. If the second cryptograph is not the same as the first cryptograph and a number of times that the second password has been input is greater than a predefined number of times, the server is locked. | 2012-05-24 |
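A toy model of the check-and-lock behavior follows, using SHA-256 as a stand-in for whatever cryptograph the BMC actually stores in the FRU; the class name and thresholds are illustrative.

```python
import hashlib

class BmcLock:
    """Compare a login attempt's cryptograph against the one stored in the FRU
    and lock the server after too many mismatches (illustrative model)."""

    def __init__(self, first_password, max_attempts=3):
        # First cryptograph, stored in the field-replaceable unit at setup time.
        self.fru_cryptograph = hashlib.sha256(first_password.encode()).hexdigest()
        self.max_attempts = max_attempts
        self.attempts = 0
        self.locked = False

    def try_start(self, second_password):
        if self.locked:
            return "locked"
        self.attempts += 1
        second = hashlib.sha256(second_password.encode()).hexdigest()
        if second == self.fru_cryptograph:
            self.attempts = 0
            return "started"
        if self.attempts > self.max_attempts:
            self.locked = True
            return "locked"
        return "denied"

bmc = BmcLock("s3cret", max_attempts=2)
print([bmc.try_start(p) for p in ("guess1", "guess2", "guess3", "s3cret")])
# ['denied', 'denied', 'locked', 'locked']
```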
20120131320 | BOOTING APPARATUS AND METHOD USING SNAPSHOT IMAGE - Provided are a booting apparatus and method using a snapshot image. A snapshot image may be divided into a plurality of blocks. Each of the blocks may be stored in a nonvolatile memory in a compressed or non-compressed format. The snapshot image may be incrementally loaded in units of the blocks during booting. The loading and decompression of the blocks may be performed in parallel. | 2012-05-24 |
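A small sketch of block-wise snapshot handling follows, using zlib as a stand-in compressor and a thread pool to overlap decompression of the blocks; in a real boot path the overlap would be between nonvolatile-memory reads and decompression, and the block size is arbitrary.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def make_snapshot(image: bytes, block_size=4096):
    """Split a snapshot image into blocks; store a block compressed only when it helps."""
    blocks = []
    for i in range(0, len(image), block_size):
        raw = image[i:i + block_size]
        packed = zlib.compress(raw)
        if len(packed) < len(raw):
            blocks.append(("z", packed))      # stored compressed
        else:
            blocks.append(("raw", raw))       # stored as-is
    return blocks

def load_snapshot(blocks):
    """Incrementally load blocks, decompressing them in parallel."""
    def unpack(block):
        kind, data = block
        return zlib.decompress(data) if kind == "z" else data
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(unpack, blocks))

image = bytes(range(256)) * 64 + b"\x00" * 8192      # mixed compressible content
blocks = make_snapshot(image)
assert load_snapshot(blocks) == image
print(len(blocks), "blocks restored")
```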
20120131321 | Contextual History of Computing Objects - Various features for a computer operating system include mechanisms for operating where a single native application, in the form of a Web browser, exists for an operating system, and all other applications run as Web apps of the browser application. A computer-implemented object tracking method includes instantiating, at a first time, an operating system object on a computing device; automatically identifying contextual meta data that defines a state of objects that are open on the computing device, other than the instantiated operating system object, when the operating system object is instantiated; and storing the identified contextual meta data in correlation with the operating system object, wherein the contextual meta data identifies one or more objects that are active in the operating system when the operating system object is instantiated. | 2012-05-24 |