26th week of 2013 patent application highlights part 67 |
Patent application number | Title | Published |
20130166790 | CONTROLLING APPLICATIONS ACCORDING TO CONNECTION STATE AND EXECUTION CONDITION - The disclosure is related to controlling execution of an application in user equipment. A user request for executing an application may be received. An execution condition associated with the requested application may be obtained. Whether the user equipment is connected to an external device may be detected. An execution of the requested application may be controlled based on a detection result and the obtained execution condition. | 2013-06-27 |
20130166791 | OUTPUT DEVICE, LOG COLLECTING METHOD FOR OUTPUT DEVICE, AND STORAGE MEDIUM - The purpose of this invention is to acquire log information of all Systems on Chip in an output device for debugging, without significantly affecting existing functions of the output device or requiring complicated steps. If a storage device preliminarily formatted with a predefined volume label is connected to the output device, log information read based on the same volume label as the volume label of the storage device is written to the storage device. | 2013-06-27 |
20130166792 | IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND CONTROL PROGRAM - An image processing method includes: dividing received data into a header and a body; and writing the data in at least one buffer through a direct memory access (DMA) transfer. | 2013-06-27 |
20130166793 | HOST CHANNEL ADAPTER WITH PATTERN-TYPE DMA - An input/output (I/O) device includes a memory buffer and off-loading hardware. The off-loading hardware is configured to accept from a host a scatter/gather list including one or more entries. The entries include at least a pattern-type entry that specifies a period of a periodic pattern of addresses that are to be accessed in a memory of the host. The off-loading hardware is configured to transfer data between the memory buffer of the I/O device and the memory of the host by accessing the addresses in the memory of the host in accordance with the periodic pattern at intervals indicated in the period. | 2013-06-27 |
20130166794 | INTERRUPT EVENT MANAGEMENT - An interrupt management apparatus is provided for managing interrupt events generated by, for example, peripheral devices and computing modules. The interrupt management apparatus has an event decoder for receiving one or more interrupt signals from one or more interrupt sources and for decoding a received interrupt signal to produce control data relating to an interrupt event. The apparatus also has a sequence memory for storing one or more sequences, a sequence including one or more steps for handling one or more interrupt events, and one or more sequencers for interpreting one or more steps of a sequence stored in the sequence memory, the one or more sequencers being arranged to receive said control data from the event decoder. This enables the apparatus to manage said interrupt events without assistance from a central processing unit. | 2013-06-27 |
20130166795 | SYSTEM AND METHOD FOR STREAMING DATA IN FLASH MEMORY APPLICATIONS - Systems and methods for streaming data are disclosed. In various implementations, the system comprises a hardware device and an input streaming interface operably connected to the hardware device. The input streaming interface is configured to inform a data source, based on a determination that a receiving device will accept data transmitted by the hardware device, that the input streaming interface is ready to receive data; then receive, in response to detecting the activation of a source signal and a data initiation signal associated with the data source, source data transmitted by the data source over a data bus; and forward the source data to the hardware device. | 2013-06-27 |
20130166796 | MOBILE TERMINAL - The disclosure describes a mobile terminal, which comprises a Universal Serial Bus (USB) main chip and a USB interface, wherein the USB interface is connected to an external USB cable, a Printed Circuit Board (PCB) trace is connected between the USB main chip and the USB interface, and a USB signal test point is arranged on the PCB trace. With the disclosure, changes in signal quality caused by the addition of a USB signal test point can be effectively reduced, and the communication quality is ensured. | 2013-06-27 |
20130166797 | STORAGE APPARATUS AND METHOD FOR CONTROLLING SAME - Proposed are a storage apparatus and a method of controlling the same which make it possible to effectively prevent, in advance, deterioration in the response performance of the whole system. | 2013-06-27 |
20130166798 | MULTI-PROTOCOL TUNNELING OVER AN I/O INTERCONNECT - Described are embodiments of methods, apparatuses, and systems for multi-protocol tunneling across a multi-protocol I/O interconnect of computer apparatus. A method for multi-protocol tunneling may include establishing a first communication path between ports of a switching fabric of a multi-protocol interconnect of a computer apparatus in response to a peripheral device being connected to the computer apparatus, establishing a second communication path between the switching fabric and a protocol-specific controller, and routing, by the multi-protocol interconnect, packets of a protocol of the peripheral device from the peripheral device to the protocol-specific controller over the first and second communication paths. Other embodiments may be described and claimed. | 2013-06-27 |
20130166799 | METHOD FOR ALLOCATING SUBSCRIBER ADDRESSES TO BUS SUBSCRIBERS OF A BUS-BASED CONTROL SYSTEM - A bus-based control system comprises a plurality of bus subscribers which are connected to one another by means of a communication medium. The bus subscribers are assigned logical subscriber addresses. Next, the assigned subscriber addresses are verified. For this purpose, the bus subscribers use a defined mathematical operation to calculate a common first check value which is compared with a second check value. The mathematical operation begins with a defined starting value and comprises a number of operation steps which use a number of defined operands. Each of the subscriber addresses to be verified forms a different operand, and each bus subscriber executes at least one operation step. | 2013-06-27 |
20130166800 | Method and device for transmitting data having a variable bit length - A method for serially transmitting data in a bus system having at least two bus users, which exchange data frames over the bus, the bus users deciding which data frames they receive, as a function of an identifier, the data frames having a logic structure according to the CAN standard, ISO 11898-1, the temporal bit length (L | 2013-06-27 |
20130166801 | BUS BRIDGE APPARATUS - Disclosed is a bus bridge apparatus that may prevent transfer performance from being lowered due to bus protocol performance mismatch between interconnections. The bus bridge apparatus is used to transfer data from a master device of a bus-based interconnection to a slave device of a network-based interconnection; data of the master device may be buffered by an internal buffer, and may then be transferred to the slave device. At this time, lowering of transfer efficiency may be prevented by converting the transfer timing of addresses and data to be optimized to the transfer protocol of the network-based interconnection through a protocol converter. | 2013-06-27 |
20130166802 | TRANSPONDER, METHOD AND RECORDING MEDIUM CONTAINING INSTRUCTIONS FOR CONTROLLING THE SAME - A transponder connected to a master, a transmission module and a reception module, the transponder including: a memory which stores a table indicating whether or not a command from the master is executable, wherein the table includes: type information indicating a type of the command; and first status information including at least one of: a transmission status indicating a communication status of the transmission module; and a reception status indicating a communication status of the reception module; an acquiring unit that acquires second status information including at least one of: a current transmission status indicating a current communication status of the transmission module; and a current reception status indicating a current communication status of the reception module; and a judging unit that judges, in response to a received command received from the master, whether or not the received command is executable using the table and the second status information. | 2013-06-27 |
20130166803 | DEQUEUE OPERATION USING MASK VECTOR TO MANAGE INPUT/OUTPUT INTERRUPTIONS - A command is issued to reset one or more pending interrupt indicators and arbitrate for ownership of the interrupt. Responsive to a processor receiving the command, a check is made of a selected pending interrupt indicator. If the selected pending interrupt indicator is not set, another pending interrupt indicator is checked, instead of providing a negative response and reissuing the command. In this way, one dequeue command can replace multiple dequeue commands and the overhead of leaving and re-entering the interrupt handler is reduced. A negative response is reserved for those situations in which there are no pending interrupt indicators to be reset. | 2013-06-27 |
20130166804 | INFORMATION PROCESSING APPARATUS AND RECORDING APPARATUS USING THE SAME - A memory control unit is connected to a first bus and a second bus and controls writing and reading of data to a memory; a control unit controls the information processing apparatus; a first circuit device is connected to the first bus and outputs a data write request to the memory control unit and a notification signal; a second circuit device is connected to the first bus and outputs a data read request to the memory control unit in accordance with the notification signal and an interrupt signal to the control unit in response to the data read request; and a third circuit device is connected to the second bus and outputs to the memory control unit a read request for data stored in the memory, in accordance with an instruction from the control unit which has received an interrupt signal. | 2013-06-27 |
20130166805 | INTERRUPT CAUSE MANAGEMENT DEVICE AND INTERRUPT PROCESSING SYSTEM - A peripheral device sends an interrupt generation notification to a bus bridge. The bus bridge receives the interrupt generation notification, transfers the received interrupt generation notification to a CPU, reads an interrupt cause from the peripheral device that has sent the interrupt generation notification, and writes to a memory the interrupt cause that has been read. Upon receiving the interrupt generation notification, the CPU reads the interrupt cause from the memory which allows fast access, and begins interrupt processing corresponding to the interrupt cause. Interrupt processing time up to commencement of the interrupt processing can be reduced. | 2013-06-27 |
20130166806 | PCI RISER CARD - A PCI riser card includes a printed circuit board and an array of PCI connectors located on the printed circuit board. A plug portion extends from an edge of the printed circuit board and may be inserted into a PCI connector of a system board. The array of PCI connectors includes a first PCI connector and a second PCI connector. The first PCI connector is parallel to the second PCI connector and is arranged inverted relative to the second PCI connector. | 2013-06-27 |
20130166807 | SAFELY EJECTING A CLIENT DEVICE USING A DEDICATED BUTTON - Methods and systems for ejecting a device, including providing a host device, providing a client device, the client device being coupled to the host device, receiving a request to eject the client device, the request being initiated using a dedicated eject button, and in response to receiving the request, software-ejecting the client device. | 2013-06-27 |
20130166808 | DOCKING STATION FOR ELECTRONIC DEVICE - A docking station for an electronic device, includes a base for supporting a bottom of the electronic device, a supporting portion secured to the base for abutting a rear portion of the electronic device; and two clipping portions rotatably coupled to opposite ends of the supporting portion. The clipping portions respectively clip opposite sides of the electronic device and cooperate with the base and the supporting portion to hold the electronic device in the docking station. | 2013-06-27 |
20130166809 | DRIVE CIRCUIT FOR PERIPHERAL COMPONENT INTERCONNECT-EXPRESS (PCIE) SLOTS - A drive circuit is used in an electronic device comprising multiple peripheral component interconnect-express (PCIE) slots. The drive circuit includes a motherboard, a first signal generation circuit, a second signal generation circuit, and a first delay circuit. The motherboard provides a control signal to the first signal generation circuit and the first delay circuit. The first signal generation circuit outputs immediate drive signals to first multiple PCIE slots. The first delay circuit outputs a first delay control signal to the second signal generation circuit after a predetermined time. The second signal generation circuit outputs drive signals to drive second multiple PCIE slots. | 2013-06-27 |
20130166810 | APPARATUS FOR PROCESSING REGISTER WINDOW OVERFLOW AND UNDERFLOW - An apparatus for processing a register window overflow and underflow includes register windows each configured to include local registers and incoming registers, dedicated internal memories configured to store contents of the local registers and the incoming registers for each word, dedicated data buses configured to connect the local registers and the incoming registers and the respective dedicated internal memories, a memory word counter configured to perform counting in order to determine whether or not there is a storage space of a word unit in the dedicated internal memories, and a logic block configured to control an operation of the dedicated data buses when one of a window overflow and a window underflow is generated based on the count value of the memory word counter. | 2013-06-27 |
20130166811 | METHODS AND STRUCTURE FOR COMMUNICATING BETWEEN A SATA HOST AND A SATA TARGET DEVICE THROUGH A SAS DOMAIN - Methods and structure for directly coupling SATA hosts (SATA initiators) with SATA target devices through a SAS fabric and an enhanced SAS expander supporting such direct couplings. The enhanced SAS expander comprises SATA/STP connection logic to open a SAS (STP) connection between a directly attached SATA host and a SATA target device in response to receipt of an FIS from the host or target while no connection is presently open. The opened connection is closed after expiration of a predetermined timeout period of inactivity between the connected host and target. Thus, simpler, less costly SATA hosts and SATA target devices may be utilized while gaining the advantage of SAS architecture flexibility in configuration and scalability. SATA hosts may be coupled through the SAS fabric with a larger number of SATA target devices and multiple SATA hosts may be coupled with the SAS fabric. | 2013-06-27 |
20130166812 | TRANSPORT OF PCI-ORDERED TRAFFIC OVER INDEPENDENT NETWORKS - A system and method are disclosed for connecting PCI-ordered agents based on fully independent networks. The system and method are free of PCI topology constraints, so that the system and method can be implemented in an inexpensive and scalable way. The method disclosed is used to handle and transport PCI-ordered traffic on a fabric. Based on the actual ordering requirement of the set of PCI agents, the fabric includes two, three, or four independent networks. | 2013-06-27 |
20130166813 | MULTI-PROTOCOL I/O INTERCONNECT FLOW CONTROL - Described are embodiments of methods, apparatuses, and systems for multi-protocol tunneling across a multi-protocol I/O interconnect of computer apparatus. A method for managing flow across the multi-protocol I/O interconnect may include providing, by a first port of a switching fabric of a multi-protocol interconnect to a second port of the switching fabric, a first credit grant packet and a second credit grant packet as indications of unoccupied space of a buffer associated with a path between the first port and the second port, and simultaneously routing a first data packet of a first protocol and a second data packet of a second protocol, different from the first protocol, on the path from the second port to the first port based at least in part on receipt by the second port of the first and second credit grant packets. Other embodiments may be described and claimed. | 2013-06-27 |
20130166814 | COMPUTER READABLE RECORDING MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A program for causing an information processing apparatus to execute a process of a virtual calculator, the process including judging, when a switching of a virtual address space being a processing target of a virtual calculation apparatus occurs, whether or not there exists a physical calculation apparatus in which cache information of a physical address space corresponding to a virtual address space of a switching destination is accumulated; selecting the physical calculation apparatus when there exists a physical calculation apparatus in which the cache information of the physical address space is accumulated, and selecting a physical calculation apparatus in which cache information itself is not accumulated when there exists no physical calculation apparatus in which the cache information is accumulated; and assigning the selected physical calculation apparatus to the virtual calculation apparatus in which the switching of the virtual address space being a processing target has occurred. | 2013-06-27 |
20130166815 | Memory controllers to output data signals of a number of bits and to receive data signals of a different number of bits - A memory controller has a digital signal processor. The digital signal processor is configured to output a digital data signal of M+N bits of program data intended for programming a memory cell of a memory device. The digital signal processor is configured to receive a digital data signal of M+L bits read from the memory cell of the memory device and to retrieve from the received digital data signal M bits of data that were stored in the memory cell. | 2013-06-27 |
20130166816 | Apparatus, System, and Method for Managing Contents of a Cache - Apparatuses, systems, and methods are disclosed for managing contents of a cache. A method includes receiving a read request for data stored in a non-volatile cache. A method includes determining whether a read request satisfies a frequent read threshold for a cache. A method includes writing data of a read request forward on a sequential log-based writing structure of a cache in response to determining that the read request satisfies a frequent read threshold. | 2013-06-27 |
20130166817 | DATA TRANSFER SYSTEMS WITH POWER MANAGEMENT - A controller for transferring data between a host and a storage medium includes a data transfer unit and a control unit. The data transfer unit transfers data according to control commands from the host if the data transfer unit is enabled. The control unit coupled to the data transfer unit receives a status signal indicating whether the storage medium is coupled to a socket of the controller. The control unit provides interruption signals to the host. Power and a clock signal for the control unit are disabled if the status signal indicates that the storage medium is decoupled from the controller. | 2013-06-27 |
20130166818 | MEMORY LOGICAL DEFRAGMENTATION DURING GARBAGE COLLECTION - A method and system defragments data during garbage collection. Garbage collection may be more efficient when the valid data that is aggregated together is related or logically linked. In particular, data from the same file or that is statistically correlated may be combined in the same blocks during garbage collection. | 2013-06-27 |
20130166819 | SYSTEMS AND METHODS OF LOADING DATA FROM A NON-VOLATILE MEMORY TO A VOLATILE MEMORY - A method may be performed in a data storage device that includes a controller, a non-volatile memory, and a volatile memory. The method includes loading a first portion of stored data from the non-volatile memory to the volatile memory according to one or more load priority indicators accessible to the controller. The method further includes, in response to completion of the loading of the first portion of the stored data to the volatile memory and prior to completion of loading a second portion of the stored data to the volatile memory, sending a signal to indicate to a host device operatively coupled to the data storage device that the volatile memory is ready for use by the host device. | 2013-06-27 |
20130166820 | METHODS AND APPARATUSES FOR ATOMIC STORAGE OPERATIONS - A method and apparatus for storing data packets in two different logical erase blocks pursuant to an atomic storage request is disclosed. Each data packet stored in response to the atomic storage request comprises persistent metadata indicating that the data packet pertains to an atomic storage request. In addition, a method and apparatus for restart recovery is disclosed. A data packet preceding an append point is identified as satisfying a failed atomic write criterion, indicating that the data packet pertains to a failed atomic storage request. One or more data packets associated with the failed atomic storage request are identified and excluded from an index of a non-volatile storage media. | 2013-06-27 |
20130166821 | LOW LATENCY AND PERSISTENT DATA STORAGE - Persistent data storage with low latency is provided by a method that includes receiving a low latency store command that includes write data. The write data is written to a first memory device that is implemented by a nonvolatile solid-state memory technology characterized by a first access speed. It is acknowledged that the write data has been successfully written to the first memory device. The write data is written to a second memory device that is implemented by a volatile memory technology. At least a portion of the data in the first memory device is written to a third memory device when a predetermined amount of data has been accumulated in the first memory device. The third memory device is implemented by a nonvolatile solid-state memory technology characterized by a second access speed that is slower than the first access speed. | 2013-06-27 |
20130166822 | SOLID-STATE STORAGE MANAGEMENT - Solid-state storage management for a system that includes a main board and a solid-state storage board separate from the main board is provided. The solid-state storage board includes a solid-state memory device and solid-state storage devices. The system is configured to perform a method that includes a correspondence being established, by a software module located on the main board, between a first logical address and a first physical address on the solid-state storage devices. The correspondence between the first logical address and the first physical address is stored in a location on the solid-state memory device. The method also includes translating the first logical address into the first physical address. The translating is performed by an address translator module located on the solid-state storage board and is based on the previously established correspondence between the first logical address and the first physical address. | 2013-06-27 |
20130166823 | NON-VOLATILE MEMORY DEVICE - A non-volatile memory device includes a plurality of bit lines; a plurality of page buffers corresponding to the bit lines, respectively, and configured to each store a write data; and a control circuit configured to control at least one page buffer of the plurality of page buffers to store the write data of a first logic level and control other ones of the plurality of page buffers to store the write data of a second logic level, wherein the control circuit is further configured to select the at least one page buffer based on an address inputted to the control circuit. Since write data of diverse patterns may be generated within a non-volatile memory device by using a portion of the bits of the address, a test operation of the non-volatile memory device may be performed within a short time. | 2013-06-27 |
20130166824 | BLOCK MANAGEMENT FOR NONVOLATILE MEMORY DEVICE - A method of managing memory blocks in a nonvolatile memory device comprises identifying a full memory block among a plurality of memory blocks in the nonvolatile memory device, determining whether a block life of the full memory block exceeds a threshold value, and upon determining that the block life of the full memory block exceeds the threshold value, selecting the full memory block as a target block for garbage collection. | 2013-06-27 |
20130166825 | Method Of Controlling Non-Volatile Memory, Non-Volatile Memory Controller Therefor, And Memory System Including The Same - A method of controlling a non-volatile memory device having multiple planes includes receiving write requests from a host, the write requests each including a logical address, a write command, and a data set; storing the data sets at an address of a buffer; storing the buffer address in a mapping table that maps addresses of the buffer to the multiple planes; sequentially transmitting the data sets stored at respective buffer addresses to page buffers, respectively, of the planes corresponding to the buffer addresses according to the mapping table; and programming in parallel at least two data sets stored in respective page buffers to memory cells of the non-volatile memory device. | 2013-06-27 |
20130166826 | SOLID-STATE DEVICE MANAGEMENT - An embodiment is a method for establishing a correspondence between a first logical address and a first physical address on solid-state storage devices located on a solid-state storage board. The solid-state storage devices include a plurality of physical memory locations identified by physical addresses, and the establishing is by a software module located on a main board that is separate from the solid-state storage board. The correspondence between the first logical address and the first physical address is stored in a location on a solid-state memory device that is accessible by an address translator module located on the solid-state storage board. The solid-state memory device is located on the solid-state storage board. The first logical address is translated to the first physical address by the address translator module based on the previously established correspondence between the first logical address and the first physical address. | 2013-06-27 |
20130166827 | WEAR-LEVEL OF CELLS/PAGES/SUB-PAGES/BLOCKS OF A MEMORY - The invention is directed to a method for wear-leveling cells or pages or sub-pages or blocks of a memory such as a flash memory, the method comprising:—receiving (S | 2013-06-27 |
20130166828 | DATA UPDATE APPARATUS AND METHOD FOR FLASH MEMORY FILE SYSTEM - Disclosed herein are a data update apparatus and method. The apparatus includes an update identification unit, a data storage unit, a block allocation unit, and a data update unit. The update identification unit determines whether the input/output request signal corresponds to an update signal. The data storage unit stores mapping information about the blocks of an arbitrary file in a metadata area. The block allocation unit stores addresses of one or more free blocks, which are selected from among blocks included in the data storage unit and in which data has not been stored. The data update unit acquires the addresses of the free blocks, writes the update data to the free blocks, and updates existing block addresses, which belong to information included in mapping information of the data storage unit and to which the update data has been mapped, with the addresses of the free blocks. | 2013-06-27 |
20130166829 | Fast Block Device and Methodology - A device, method, and system are directed to fast data storage on a block storage device. New data is written to an empty write block. A location of the new data is tracked. Metadata associated with the new data is written. A lookup table may be updated based in part on the metadata. The new data may be read based on the lookup table, which is configured to map a logical address to a physical address. | 2013-06-27 |
20130166830 | BLOCK MANAGEMENT METHOD FOR FLASH MEMORY AND CONTROLLER AND STORAGE SYSTEM USING THE SAME - A block management method for managing a mapping relationship between a plurality of logical blocks and a plurality of physical blocks of a flash memory is provided. The block management method includes: grouping the logical blocks into a plurality of logical zones; recording the mapping relationship between each logical block in each logical zone and all the data physical blocks among the physical blocks in a corresponding logical zone table in unit of the logical zones; and recording all the no-data physical blocks among the physical blocks with a single no-data physical block table. Thereby, the logical blocks can be mapped to all the physical blocks so that frequent access to specific physical blocks can be avoided when a user writes data into a specific logical zone frequently, and accordingly the lifespan of the flash memory can be prolonged. | 2013-06-27 |
20130166831 | Apparatus, System, and Method for Storing Metadata - Apparatuses, systems, and methods are disclosed for storing metadata. A mapping module is configured to maintain a mapping structure for logical addresses of a non-volatile device. A metadata module is configured to store membership metadata for the logical addresses with logical-to-physical mappings for the logical addresses in the mapping structure. | 2013-06-27 |
20130166832 | METHODS AND ELECTRONIC DEVICES FOR ADJUSTING THE OPERATING FREQUENCY OF A MEMORY - Methods and electronic devices for adjusting an operating frequency of a memory are disclosed. The method includes: transmitting to the memory a first command that instructs the memory to hold the data information in the memory; transmitting to the memory controller a second command that adjusts the first frequency of the memory controller to a second frequency; and transmitting to the memory a third command that instructs the memory to exchange the data information according to the second frequency of the memory controller. According to the disclosure, it is possible to dynamically adjust the frequency of the memory during operation, avoiding the need of the user to turn off and then turn on the electronic device to adjust the frequency of the memory. | 2013-06-27 |
20130166833 | ELECTRONIC APPARATUS WITH A SAFE CONDITIONAL ACCESS SYSTEM (CAS) AND CONTROL METHOD THEREOF - An electronic apparatus is provided, which includes a central processing unit (CPU), a first memory unit which performs communication with the CPU, and a second memory unit which stores therein conditional access system (CAS) software and platform software. According to the method of controlling the apparatus, upon booting, the CPU copies the CAS software to an internal memory area which may be within the CPU, copies the platform software to the first memory unit and executes the CAS and platform software, and executes CAS operations through communication between the CAS software and the platform software. | 2013-06-27 |
20130166834 | SUB PAGE AND PAGE MEMORY MANAGEMENT APPARATUS AND METHOD - A method and apparatus for managing a virtual address to physical address translation utilize subpage-level fault detection and access. The method and apparatus may also use an additional Non-Volatile Store (NVS) for subpage and page storage. The method and apparatus determine whether a page fault or a subpage fault occurs to effect an address translation; if a subpage fault has occurred, a subpage corresponding to the fault is loaded from the NVS to DRAM or any other suitable volatile memory historically referred to as main memory. If a page fault has occurred, it is detected without operating system assistance by a hardware page fault detection system that loads a page corresponding to the fault from the NVS to DRAM. | 2013-06-27 |
20130166835 | ARITHMETIC PROCESSING SYSTEM AND METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An arithmetic processing system includes the following elements. Plural storage media, which are physically independent, having storage regions are provided. Plural processors execute processing by using the storage regions of the plural storage media. An allocating unit allocates the storage regions of the plural storage media to the plural processors. A determining unit determines whether a total value of storage amounts necessary for the plural processors to execute processing is equal to or smaller than a value obtained by subtracting a storage capacity of one of the storage media from a total capacity of the plural storage media. A reallocating unit reallocates the allocated storage regions to the plural processors when the above-described determination result is positive. A discontinuing unit discontinues an operation performed by a storage medium which does not contain any of the storage regions reallocated to the plural processors as a result of reallocating the storage regions. | 2013-06-27 |
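The determining unit's test above reduces to simple arithmetic: the workload still fits if the total required storage does not exceed the pool capacity minus the capacity of one medium, in which case that medium can be vacated and discontinued. A sketch under that reading, with invented names:

```python
def can_vacate_one(media_capacities, required_per_processor):
    """Return the index of a storage medium that can be emptied and
    powered down, or None if the remaining media cannot hold the load."""
    total_required = sum(required_per_processor)
    total_capacity = sum(media_capacities)
    # Dropping the smallest medium leaves the most remaining capacity,
    # so test candidates in ascending order of capacity.
    for i in sorted(range(len(media_capacities)), key=lambda i: media_capacities[i]):
        if total_required <= total_capacity - media_capacities[i]:
            return i
    return None
```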
20130166836 | CONFIGURABLE MEMORY CONTROLLER/MEMORY MODULE COMMUNICATION SYSTEM - A memory system includes a first memory module and a second memory module. A memory controller is coupled to the first and second memory modules and reads configuration information from the first and second memory modules using a memory channel. The controller also configures a switch coupled between the controller and one of the memory modules to communicate using either a chip select line or a memory address line. | 2013-06-27 |
20130166837 | DESTAGING OF WRITE AHEAD DATA SET TRACKS - Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computing environment is configured to prevent destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is the first WADS track in the stride Z, at least one temporal bit is cleared for each track in the cache for stride Z minus 2 (Z−2); if the track N is a sequential track, the at least one temporal bit is cleared for track N minus a variable X (N−X). | 2013-06-27 |
20130166838 | SYSTEM AND METHOD FOR BALANCING BLOCK ALLOCATION ON DATA STORAGE DEVICES - A modular block allocator includes a front end module and a back end module communicating with each other via an application programming interface (API). The front end module receives cleaner messages requesting that dirty buffers associated with the cleaner messages be cleaned. The back end module provides low and high level data structures which are formed by examining bitmaps associated with data storage devices. A stripe set data structure mapping to the low level data structures is formed. The front end module cleans the dirty buffers by allocating data blocks in the high level data structures to the dirty buffers. The low level data structures are used to map the allocated data blocks to the stripe set, and when the stripe set is full it is sent to the data storage devices. | 2013-06-27 |
20130166839 | SUB-LUN AUTO-TIERING - Embodiments of the invention include systems and methods for auto-tiering multiple file systems across a common resource pool. Storage resources are allocated as a sub-LUN auto-tiering (SLAT) sub-pool. The sub-pool is managed as a single virtual address space (VAS) with a virtual block address (VBA) for each logical block address of each data block in the sub-pool, and a portion of those VBAs can be allocated to each of a number of file systems. Mappings are maintained between each logical block address in which file system data is physically stored and a VBA in the file system's portion of the virtual address space. As data moves (e.g., is added, auto-tiered, etc.), the mappings can be updated. In this way, multiple SLAT file systems can exploit the full resources of the common SLAT sub-pool and maximize the resource options available to auto-tiering functions. | 2013-06-27 |
20130166840 | DYNAMIC HARD DISK MAPPING METHOD AND SERVER USING THE SAME - A dynamic hard disk mapping method and a server using the same are disclosed. The server includes a first motherboard, a second motherboard, a first disk group corresponding to the first motherboard, and a second disk group corresponding to the second motherboard. In the dynamic hard disk mapping method, at first, a disk redistributing instruction is received and stored. Thereafter, a reset instruction is received and performed. Then, the number of hard disks of the first disk group and the number of hard disks of the second disk group are summed up to obtain a total hard disk number N, wherein N is a positive integer greater than zero. Thereafter, the disk redistributing instruction is read, and a redistribution computation is performed in accordance with the disk redistributing instruction to obtain a third disk group corresponding to the first motherboard and a fourth disk group corresponding to the second motherboard. | 2013-06-27 |
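The redistribution computation above can be pictured as pooling the N disks of both groups and splitting them again into two new groups. The split ratio used here is an assumption for illustration; the actual redistribution instruction format is not specified in the abstract:

```python
def redistribute(group1, group2, ratio):
    """Hypothetical redistribution: pool both disk groups and split the
    total N disks so the first motherboard gets round(N * ratio) of them."""
    disks = group1 + group2
    n = len(disks)                 # total hard disk number N
    k = round(n * ratio)
    return disks[:k], disks[k:]   # third and fourth disk groups
```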
20130166841 | STORAGE SYSTEM AND DATA MANAGEMENT METHOD - A storage system and data management method are provided that improve the reliability and fault tolerance of the hard disks saving data utilizing an AOU function. A storage system comprises a first correlating section for correlating a plurality of RAID groups, composed of a plurality of physical disks, with the pool region; a second correlating section for correlating the pool region with the storage regions of the virtual volumes; a first allocation section for allocating first data from the host apparatus to the first storage region of the first RAID group based on write requests from the host apparatus; and a second allocation section for distributing second data from the host apparatus and allocating the second data to any storage regions of the RAID group, with the exception of the first storage region of the first RAID group allocated by the first allocation section, based on write requests. | 2013-06-27 |
20130166842 | CREATION OF LOGICAL UNITS VIA BORROWING OF ALTERNATIVE STORAGE AND SUBSEQUENT MOVEMENT OF THE LOGICAL UNITS TO DESIRED STORAGE - A computational device receives a request to create a logical unit. Associated with the request is a first type of storage pool in which creation of the logical unit is desired. In response to determining that adequate space is not available to create the logical unit in the first type of storage pool, a determination is made as to whether a first indicator is configured to allow borrowing of storage space from a second type of storage pool. In response to determining that the first indicator is configured to allow borrowing of storage space from the second type of storage pool, the logical unit is created in the second type of storage pool and a listener application is initiated. The listener application determines that free space that is adequate to store the logical unit has become available in the first type of storage pool. The logical unit is moved from the second type of storage pool to the first type of storage pool, in response to determining, via the listener application, that free space that is adequate to store the logical unit has become available in the first type of storage pool. | 2013-06-27 |
20130166843 | CARD AND HOST DEVICE - A host device is configured to read and write information from and into a card, to supply a supply voltage that belongs to a first voltage range or a second voltage range which is lower than the first voltage range, and to issue a voltage identification command to the card. The voltage identification command includes a voltage range identification section, an error detection section, and a check pattern section. The voltage range identification section includes information indicating to which of the first voltage range and the second voltage range the supply voltage belongs. The error detection section has a pattern configured to enable the card which has received the voltage identification command to detect errors in the voltage identification command. The check pattern section has a preset pattern. | 2013-06-27 |
20130166844 | STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap. | 2013-06-27 |
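The identify-compress-migrate-track sequence above can be sketched as follows. This is a minimal illustration, assuming in-memory segments, zlib compression, and a list-backed chunk; all names, the activity map, and the bitmap representation are invented for the example:

```python
import zlib

def migrate_cold_segments(segments, activity, threshold, chunk, bitmap):
    """Compress segments whose storage activity is below `threshold`
    and move them into `chunk`, recording each slot in `bitmap`."""
    for name, data in list(segments.items()):
        if activity[name] < threshold:         # identified as "colder"
            slot = len(chunk)
            chunk.append(zlib.compress(data))  # compress, then migrate
            bitmap[slot] = True                # compression data segment bitmap
            del segments[name]                 # removed from the hot pool
    return chunk, bitmap
```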
20130166845 | METHOD AND DEVICE FOR RECOVERING DESCRIPTION INFORMATION, AND METHOD AND DEVICE FOR CACHING DATA IN DATABASE - A method for recovering description information, or a method for caching data in a database, includes: judging whether a database is closed normally after the last operation; if the database is not closed normally, traversing each data block in a level-2 cache, where corresponding disk location information is saved in a header of each data block; obtaining a data block in a disk according to the disk location information; and when the obtained data block in the disk is the same as a corresponding data block in the level-2 cache, establishing description information according to location information of the data block in the disk and location information of the data block in the level-2 cache, where the description information is used to describe correspondence between the location information of data in the disk and the location information of data in the level-2 cache. | 2013-06-27 |
20130166846 | Hierarchy-aware Replacement Policy - Some implementations disclosed herein provide techniques and arrangements for a hierarchy-aware replacement policy for a last-level cache. A detector may be used to provide the last-level cache with information about blocks in a lower-level cache. For example, the detector may receive a notification identifying a block evicted from the lower-level cache. The notification may include a category associated with the block. The detector may identify a request that caused the block to be filled into the lower-level cache. The detector may determine whether one or more statistics associated with the category satisfy a threshold. In response to determining that the one or more statistics associated with the category satisfy the threshold, the detector may send an indication to the last-level cache that the block is a candidate for eviction from the last-level cache. | 2013-06-27 |
20130166847 | INFORMATION PROCESSING APPARATUS AND CACHE CONTROL METHOD - According to one embodiment, an apparatus includes a storage module, a cache module, and a changing module. The cache module is configured to use a first cache data storage region in a storage region of a first storage device as a cache of the storage module, and to manage cache management information that includes position information indicating a position of the first cache data storage region. The changing module is configured to store cache data stored in the first cache data storage region in a second cache data storage region in a storage region of a second storage device when it is requested to use the second cache data storage region as the cache of the storage module, and to update the position information. | 2013-06-27 |
20130166848 | VIRTUAL COMPUTER SYSTEM, VIRTUAL COMPUTER CONTROL METHOD, VIRTUAL COMPUTER CONTROL PROGRAM, RECORDING MEDIUM, AND INTEGRATED CIRCUIT - A virtual machine system comprises: a processor for executing a secure operating system and a normal operating system; and a cache memory. The cache memory stores data in a manner that allows for identification of whether the data has been read from a secure storage area of an external main memory. The cache memory writes back data to the main memory in a manner that reduces the number of times data is intermittently written back to the secure storage area which occurs when the processor is executing the normal operating system. | 2013-06-27 |
20130166849 | Physically Remote Shared Computer Memory - A computing system with physically remote shared computer memory, the computing system including: a remote memory management module, a plurality of computing devices, a plurality of remote memory modules that are external to the plurality of computing devices, and a remote memory controller, the remote memory management module configured to partition the physically remote shared computer memory amongst a plurality of computing devices; each computing device including a computer processor and a local memory controller, the local memory controller including: a processor interface, a local memory interface, and a local interconnect interface; each remote memory controller including: a remote memory interface and a remote interconnect interface, wherein the remote memory controller is operatively coupled to a data communications interconnect via the remote interconnect interface such that the remote memory controller is coupled for data communications with the local memory controller over the data communications interconnect. | 2013-06-27 |
20130166850 | CONTENT ADDRESSABLE MEMORY DATA CLUSTERING BLOCK ARCHITECTURE - An apparatus having a first circuit and a second circuit. The first circuit may be configured to (i) parse a first data word into a first data portion and a second data portion and (ii) parse a first address into a first address portion and a second address portion. The second circuit generally has a plurality of memory blocks. The second circuit may be configured to store the second data portion in a particular one of the memory blocks using (i) the first data portion to determine the particular memory block and (ii) the first address portion to determine a particular one of a plurality of locations within the particular memory block. The first data portion may not be stored in the memory blocks. The particular location may be determined independently of the second address portion. | 2013-06-27 |
20130166851 | INCOMING BUS TRAFFIC STORAGE SYSTEM - In managing incoming bus traffic storage for store cell memory (SCM) in a sequential-write, random-read system, a priority encoder system can be used to find the next empty cell in the sequential-write step. Each cell in the SCM has a bit that indicates whether the cell is full or empty. The priority encoder encodes the next empty cell using these bits and the current write pointer. The priority encoder can also find the next group of empty cells by being coupled to AND operators that are coupled to each group of cells. Further, a cell locator selector selects the next empty cell location among priority encoders for cell groups of various sizes according to an opcode, by appending ‘0’s to the cell location outputs from priority encoders that are smaller than the size of the SCM. | 2013-06-27 |
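The single-cell case above behaves like a wrap-around first-zero search starting at the write pointer. A software sketch of that priority-encoder behaviour (a hardware encoder does this combinationally; the function name and list representation are illustrative):

```python
def next_empty_cell(full_bits, write_ptr):
    """Find the index of the next empty cell at or after `write_ptr`,
    wrapping around; `full_bits[i]` is True when cell i is full."""
    n = len(full_bits)
    for offset in range(n):
        i = (write_ptr + offset) % n
        if not full_bits[i]:
            return i
    return None  # SCM is completely full
```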
20130166852 | METHOD FOR HIBERNATION MECHANISM AND COMPUTER SYSTEM THEREFOR - A method for hibernation mechanism and a computer system therefor are provided. The method includes the following. An initial process of a hibernation mechanism is performed in a computer system, in which a non-swappable memory of a main memory is partitioned into a plurality of non-swappable segments, and each segment corresponds to a status value indicating whether the content of the segment has been changed. During a process of entering a hibernation state, for each non-swappable segment, it is determined whether the segment is to be written to a storage device according to the status value. The segment is written into the storage device when the determination result indicates the segment has been changed; otherwise the computer does not write the segment to the storage device when the determination result indicates the segment has not been changed. | 2013-06-27 |
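The write-filtering step above amounts to iterating the segments and persisting only those whose status value says they changed. A minimal sketch, with the segment list, per-segment changed flags, and dictionary-backed storage device all invented for the example:

```python
def write_hibernation_image(segments, changed, storage):
    """Write only the non-swappable segments whose status value
    indicates they changed; return how many were written."""
    written = 0
    for idx, data in enumerate(segments):
        if changed[idx]:          # status value: segment was modified
            storage[idx] = data   # persist to the storage device
            written += 1
    return written
```

Skipping unchanged segments shrinks the hibernation image and shortens the suspend time proportionally.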
20130166853 | SEMICONDUCTOR MEMORY DEVICE AND OPERATING METHOD THEREOF - In a semiconductor memory device, when input data is latched to page buffers, first, the data is sequentially latched to even page buffers and subsequently latched to odd page buffers, and then the data is programmed to each memory cell. Thus, when data having a size of a half page or smaller is read, a read operation is performed only on even memory cells or odd memory cells, thus reducing a time required for the read operation. | 2013-06-27 |
20130166854 | STORAGE APPARATUS - An information processing apparatus performs examination-mode processing to read test data from a first special area included in a first storage device of a plurality of storage devices and to write the test data to a second special area included in a second storage device of the plurality of storage devices, and stores an execution result of the examination-mode processing in a result storage area. The execution result includes information that identifies the first storage device, information that identifies the second storage device, and a characteristic of the transfer of the test data. | 2013-06-27 |
20130166855 | SYSTEMS, METHODS, AND INTERFACES FOR VECTOR INPUT/OUTPUT OPERATIONS - Data of a vector storage request pertaining to one or more disjoint, non-adjacent, and/or non-contiguous logical identifier ranges are stored contiguously within a log on a non-volatile storage medium. A request consolidation module modifies one or more sub-requests of the vector storage request in response to other, cached storage requests. Data of an atomic vector storage request may comprise persistent indicators, such as persistent metadata flags, to identify data pertaining to incomplete atomic storage requests. A restart recovery module identifies and excludes data of incomplete atomic operations. | 2013-06-27 |
20130166856 | SYSTEMS AND METHODS FOR PRESERVING THE ORDER OF DATA - A device includes an input processing unit and an output processing unit. The input processing unit dispatches first data to one of a group of processing engines, records an identity of the one processing engine in a location in a first memory, reserves one or more corresponding locations in a second memory, causes the first data to be processed by the one processing engine, and stores the processed first data in one of the locations in the second memory. The output processing unit receives second data, assigns an entry address corresponding to a location in an output memory to the second data, transfers the second data and the entry address to one of a group of second processing engines, causes the second data to be processed by the second processing engine, and stores the processed second data to the location in the output memory. | 2013-06-27 |
20130166857 | STORAGE DEVICE AND METHOD FOR CONTROLLING STORAGE DEVICE - A write DMA includes a write unit, a read unit and a parity generation unit. The read unit reads parity data from one of two NAND flashes storing the parity data therein. The parity generation unit generates parity data based on the read parity data and a plurality of stripes obtained by dividing user data. The write unit writes a stripe into any of a plurality of NAND flashes storing stripes therein, and writes generated parity data into the other NAND flash from which parity data is not read. | 2013-06-27 |
20130166858 | STORAGE APPARATUS AND METHOD FOR CONTROLLING STORAGE APPARATUS - A storage apparatus includes a map CM and CMs. The map CM includes an acquisition unit, an update unit, and a notification unit. When the structure of the storage apparatus is changed, the acquisition unit acquires new map information from each expander or from each expander and each enclosure. Then, the update unit updates the map information stored in a map information table storage unit based on the acquired new map information. The notification unit notifies the acquired new map information to other CMs. The CMs each include an update unit. The update unit updates the map information stored in a map information table storage unit based on the new map information notified by the map CM. | 2013-06-27 |
20130166859 | IDENTIFYING UNALLOCATED MEMORY SEGMENTS - A network device includes a first memory to store packets in segments; a second memory to store pointers associated with the first memory; and a third memory to store summary bits and allocation bits, where the allocation bits correspond to the segments. The network device also includes a processor to receive a request for memory resources; determine whether a pointer is stored in the second memory, where the pointer corresponds to a segment that is available to store a packet; and send the pointer when the pointer is stored in the second memory. The processor is further configured to perform a search to identify other pointers when the pointer is not stored in the second memory, where performing the search includes identifying a set of allocation bits, based on an unallocated summary bit, that corresponds to the other pointers; identify another pointer, of the other pointers, based on an unallocated allocation bit of the set of allocation bits; and send the other pointer in response to the request. | 2013-06-27 |
20130166860 | MEMORY ACCESS CONTROL DEVICE AND COMPUTER SYSTEM - A memory interleaving device accesses a memory in an interleaved manner and can change the number of ways of interleaving during system operation. During a copy from a first configuration, before the number of ways of interleaving is changed, to a second configuration, after the number of ways of interleaving is changed, a memory access control device reads the memory in the first configuration for an external read request, and writes the memory in both the first configuration and the second configuration for an external write request. | 2013-06-27 |
20130166861 | DATA STORAGE APPARATUS AND METHOD OF CONTROLLING DATA STORAGE APPARATUS - A compressing unit generates a plurality of types of compressed blocks for each of divided blocks of data, by using a plurality of algorithm executing units. A comparing unit stores, in a storage unit, comparison result information on a compressed block having the smallest size. A writing start determining unit makes a decision to start writing of a write block and compression of a next block, when a quotient obtained by dividing the size of the compressed block that is indicated in the comparison result information by a writing speed is determined to be less than or equal to an elapsed time. A writing unit selects, as a write block, a compressed block having the smallest size among the generated compressed blocks at the time the start decision is made by the writing start determining unit, and writes the selected write block to a block storage unit. | 2013-06-27 |
20130166862 | EFFICIENT BACKUP REPLICATION - A system for backup replication comprises a processor and a memory. The processor is configured to determine one or more data segments present in a most recent backup that are not present in a previous backup; transmit an extent specification; and transmit data segment fingerprints of the one or more data segments. The memory is coupled to the processor and is configured to provide the processor with instructions. | 2013-06-27 |
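The segment-difference step above can be sketched with set arithmetic over fingerprints. This is an illustration only: the abstract does not specify the fingerprint function or extent format, so SHA-256 digests and (offset, length) pairs are assumptions:

```python
import hashlib

def new_segments(recent_backup, previous_backup):
    """Return (extents, fingerprints) for segments present in the most
    recent backup but absent from the previous backup."""
    prev = {hashlib.sha256(s).digest() for s in previous_backup}
    extents, fps = [], []
    offset = 0
    for seg in recent_backup:
        fp = hashlib.sha256(seg).digest()
        if fp not in prev:
            extents.append((offset, len(seg)))  # extent specification
            fps.append(fp)                      # data segment fingerprint
        offset += len(seg)
    return extents, fps
```

Only the new segments' extents and fingerprints cross the wire, which is what makes the replication efficient.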
20130166863 | APPLICATION CONSISTENT SNAPSHOTS OF A SHARED VOLUME - The present invention extends to methods, systems, and computer program products for creating a snapshot of a shared volume that is application consistent across various nodes of a cluster. The invention enables a snapshot of a volume to be initiated on one node which causes all applications in the cluster that use the volume to persist their data to the volume prior to the snapshot being created. Accordingly, the snapshot is application consistent to all applications in the cluster that use the volume. The invention also enables applications on various nodes to perform post snapshot processing on the created snapshot. The invention can be used in an existing backup system that is not cluster aware to enable the existing backup system to create application consistent snapshots of a volume shared by applications across multiple nodes of a cluster. | 2013-06-27 |
20130166864 | SYSTEMS AND METHODS OF PERFORMING A DATA SAVE OPERATION - A method includes determining that a data storage device is to enter a low-power state, based on an indication from a host device operatively coupled to the data storage device, where the data storage device includes a controller, a non-volatile memory including a hibernate area, a volatile memory, a non-volatile memory interface, and a volatile memory interface. The method includes, in response to determining that the data storage device is to enter the low-power state, performing a data save operation. The data save operation bypasses the non-volatile memory interface and the volatile memory interface and copies data from the volatile memory of the data storage device to the hibernate area of the non-volatile memory of the data storage device. | 2013-06-27 |
20130166865 | Systems and Methods for Managing Parallel Access to Multiple Storage Systems - Systems and methods for managing parallel access to multiple storage systems are disclosed. In one implementation, a host system operatively coupled to at least a first memory system and a second memory system separates a file into a plurality of data chunks. The host system stores a first copy of the plurality of data chunks in the first memory and stores a second copy of the plurality of data chunks in the second memory. The host reads a data chunk of the plurality of data chunks of the file from the first memory system or the second memory system based on a determination of whether the first memory system or the second memory system is able to provide the data chunk to the host system more quickly. The host system may then assemble the data of the file based on the data chunk. | 2013-06-27 |
20130166866 | SYSTEMS AND METHODS OF PERFORMING A DATA SAVE OPERATION - A method includes entering a hibernation mode in a data storage device with a controller, a non-volatile memory, and a volatile memory having a first portion and a second portion. The hibernation mode is entered by copying, to the second portion, data that is in the first portion and that is flagged to remain available at the volatile memory during the hibernation mode, and powering off the first portion while maintaining power to the second portion. | 2013-06-27 |
20130166867 | PREVENTION OF OVERLAY OF PRODUCTION DATA BY POINT IN TIME COPY OPERATIONS IN A HOST BASED ASYNCHRONOUS MIRRORING ENVIRONMENT - A primary storage controller is configured to communicate with a secondary storage controller via a system data mover. In response to receiving a command to perform a point in time copy of a source volume of the primary storage controller to a target volume of the primary storage controller, a determination is made as to whether the target volume of the primary storage controller is a source for an asynchronous data replication operation, initiated by the system data mover, between the primary storage controller and the secondary storage controller. In response to determining that the target volume of the primary storage controller is the source for the asynchronous data replication operation, initiated by the system data mover, the point in time copy of the source volume of the primary storage controller to the target volume of the primary storage controller is performed. | 2013-06-27 |
20130166868 | METHOD AND SYSTEM FOR PROVIDING CONTENT TO A RECIPIENT DEVICE - The invention relates to a computer-implemented method for providing content to a particular recipient device of a plurality of recipient devices. Copies of one or more content elements of the content are generated and one or more of the copies are modified to obtain modified copies of the content elements. The content elements, including the one or more modified copies of the content elements, are stored in a storage. Selection information is transmitted to the particular recipient device in response to a request for providing the content. The selection information prescribes to the recipient device the modified copy to be retrieved by the recipient device for substantially each content element for which a modified copy is available. | 2013-06-27 |
20130166869 | UNLOCK A STORAGE DEVICE - Unlocking a storage device includes identifying a platform configuration register value in response to a computing machine powering on, configuring a security component to seal an authorization based on the platform configuration register value and storing the sealed authorization onto non-volatile memory, and unlocking the storage device in response to the computing machine resuming from a sleep state and unsealing the sealed authorization with the security component from the non-volatile memory. | 2013-06-27 |
20130166870 | VOLTAGE AND TIMING CALIBRATION METHOD USED IN MEMORY SYSTEM - A voltage and timing calibration method used in a memory system. A memory controller adjusts timing and voltages of the controller and voltages of a memory buffer according to data returned by the buffer based on timing and voltages at a memory controller side of the buffer, to calibrate timing and voltages between the controller and controller side. According to data read by the buffer from a memory chip unit on the basis of timing and voltages at a memory chip side of the buffer, the controller adjusts the timing and voltage at the chip side and the voltage of the chip unit; or the buffer adjusts the timing and voltage at the chip side and the voltage of the chip unit, to calibrate the timing and voltage between the chip side and chip unit. Therefore, hardware resources of the buffer can be saved and the circuit can be simplified. | 2013-06-27 |
20130166871 | MEMORY CONTROL METHOD FOR A COMPUTER SYSTEM - A memory control method for a computer system is provided. The method includes the steps of: (a) calculating an operation cost of each of given M memory objects in each of N memory regions, M being an integer larger than 0, wherein the operation cost is a quantifiable parameter of a said memory region with respect to a said memory object operating therein; (b) determining an optimized allocation of the M memory objects in the N memory regions according to the calculated operation cost of each of the M memory objects in each of the N memory regions. | 2013-06-27 |
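Steps (a) and (b) above amount to building an M-by-N cost matrix and choosing the assignment with the lowest total cost. A brute-force sketch for small M and N (one object per region, M ≤ N assumed; a real implementation would use a proper assignment algorithm):

```python
from itertools import permutations

def best_allocation(cost):
    """Pick the allocation of M memory objects to N regions minimising
    total operation cost; cost[obj][region] is the quantified cost."""
    m, n = len(cost), len(cost[0])
    best_total, best_perm = None, None
    for perm in permutations(range(n), m):   # object i -> region perm[i]
        total = sum(cost[i][perm[i]] for i in range(m))
        if best_total is None or total < best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total
```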
20130166872 | METHODS AND APPARATUS FOR MIGRATING THIN PROVISIONING VOLUMES BETWEEN STORAGE SYSTEMS - Multiple storage systems have the capability to provide thin provisioning volumes to host computers and the capability to transfer (import/export) management information regarding thin provisioning between storage systems. Moreover, at least one of the storage systems possesses the capability to virtually provide the storage area of another storage system as its own storage area via a connection to the other storage system (i.e. external storage). The target storage system achieves efficient migration and unifies the storage resource pool by importing or referring to the management information obtained from the source storage system and by utilizing the source storage system as external storage. One implementation involves a method and process for migration of thin provisioning volumes using chunks having the same length between the source storage system and the destination storage system. In this implementation, the storage resource pool is unified by importing management information from the source storage system, and automated page-based relocation is performed to adjust the actual location of data. | 2013-06-27 |
20130166873 | MANAGEMENT OF LOW-PAGING SPACE CONDITIONS IN AN OPERATING SYSTEM - A virtual memory management unit can implement various techniques for managing paging space. The virtual memory management unit can monitor a number of unallocated large sized pages and can determine when the number of unallocated large sized pages drops below a page threshold. Unallocated contiguous smaller-sized pages can be aggregated to obtain unallocated larger-sized pages, which can then be allocated to processes as required to improve efficiency of disk I/O operations. Allocated smaller-sized pages can also be reorganized to obtain the unallocated contiguous smaller-sized pages that can then be aggregated to yield the larger-sized pages. Furthermore, content can also be compressed before being written to the paging space to reduce the number of pages that are to be allocated to processes. This can enable efficient management of the paging space without terminating processes. | 2013-06-27 |
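The aggregation step above (coalescing unallocated contiguous smaller pages into larger ones) can be sketched in Python. The 16:1 small-to-large ratio and alignment requirement are assumptions for illustration, not details from the abstract:

```python
# Hypothetical sketch: coalesce aligned runs of contiguous free small-page
# frame numbers into large pages (here, 16 small pages per large page).
SMALL_PER_LARGE = 16

def aggregate_large_pages(free_frames):
    """Return base frame numbers of large pages formed from aligned,
    fully free runs of small-page frames."""
    free = set(free_frames)
    large_bases = []
    for base in sorted(free):
        if base % SMALL_PER_LARGE == 0 and \
           all(base + i in free for i in range(SMALL_PER_LARGE)):
            large_bases.append(base)
    return large_bases

# frames 0..15 are free and aligned -> one large page at base 0
print(aggregate_large_pages(list(range(16)) + [20, 21]))  # [0]
```

The abstract's reorganization step would move allocated small pages to create exactly such aligned free runs before this pass.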
20130166874 | I/O CONTROLLER AND METHOD FOR OPERATING AN I/O CONTROLLER - An I/O controller, coupled to a processing unit and to a memory, includes an I/O link interface configured to receive data packets having virtual addresses; an address translation unit having an address translator to translate received virtual addresses into real addresses by translation control entries and a cache allocated to the address translator to cache a number of the translation control entries; an I/O packet processing unit for checking the data packets received at the I/O link interface and for forwarding the checked data packets to the address translation unit; and a prefetcher to forward address translation prefetch information from a data packet received to the address translation unit; the address translator configured to fetch the translation control entry for the data packet by the address translation prefetch information from the allocated cache or, if the translation control entry is not available in the allocated cache, from the memory. | 2013-06-27 |
20130166875 | WRITE DATA MASK METHOD AND SYSTEM - In various embodiments, dedicated mask pins are eliminated by sending a data mask on address lines of the interface. A memory controller receives a request for a memory write operation from a memory client and determines the granularity of the write data from a write data mask sent by the client. Granularity, as used herein, indicates a quantity of write data to which each bit of the received write data mask applies. In an embodiment, the memory controller generates a particular write command and a particular write data mask based on the granularity of the write data. The write command generated is typically the most efficient of several write commands available, but embodiments are not so limited. The write command is transmitted on command lines of the interface, and the write data mask is transmitted on address lines of the interface. | 2013-06-27 |
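The granularity notion above (how many bytes of write data each mask bit covers) can be modeled directly. A minimal sketch with hypothetical names; the mask-bit polarity (1 = write) is an assumption:

```python
# Hypothetical sketch: granularity = bytes of write data covered by each
# mask bit, derived from the data length and the mask width.
def mask_granularity(data_len_bytes, mask_bits):
    assert data_len_bytes % mask_bits == 0
    return data_len_bytes // mask_bits

def apply_mask(old, new, mask, granularity):
    """Write only the byte groups whose mask bit is set (1 = write)."""
    out = bytearray(old)
    for bit, write in enumerate(mask):
        if write:
            start = bit * granularity
            out[start:start + granularity] = new[start:start + granularity]
    return bytes(out)

g = mask_granularity(8, 4)       # 2 bytes of data per mask bit
old = bytes(8)                   # existing memory contents (zeros)
new = bytes(range(1, 9))         # incoming write data 01..08
print(apply_mask(old, new, [1, 0, 0, 1], g).hex())  # 0102000000000708
```

In the patent's scheme the memory controller would then serialize such a mask onto the address lines alongside the write command, rather than over dedicated mask pins.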
20130166876 | METHOD AND APPARATUS FOR USING A PREVIOUS COLUMN POINTER TO READ ENTRIES IN AN ARRAY OF A PROCESSOR - A method and apparatus are described for using a previous column pointer to read a subset of entries of an array in a processor. The array may have a plurality of rows and columns of entries, and each entry in the subset may reside on a different row of the array. A previous column pointer may be generated for each of the rows of the array based on a plurality of bits indicating the number of valid entries in the subset to be read, the previous column pointer indicating whether each entry is in a current column or a previous column. The entries in the subset may be read and re-ordered, and invalid entries in the subset may be replaced with nulls. The valid entries and nulls may then be outputted. | 2013-06-27 |
20130166877 | SHAPED REGISTER FILE READS - One embodiment of the present invention sets forth a technique for performing a shaped access of a register file that includes a set of N registers, wherein N is greater than or equal to two. The technique involves, for at least one thread included in a group of threads, receiving a request to access a first amount of data from each register in the set of N registers, and configuring a crossbar to allow the at least one thread to access the first amount of data from each register in the set of N registers. | 2013-06-27 |
20130166878 | VECTOR SIMD PROCESSOR - Operation parallelism of a data processor is enhanced by floating-point inner product execution units compatible with single instruction multiple data (SIMD). An operating system that can significantly enhance the level of operation parallelism per instruction while maintaining efficiency of floating-point length-4 vector inner product execution units is implemented. The floating-point length-4 vector inner product execution units are defined in the minimum width (32 bits for single precision) even where an extensive operating system becomes available, and the inner product execution units are composed to be compatible with SIMD. The mutually augmenting effects of the inner product execution units and SIMD-compatible composition enhance the level of operation parallelism dramatically. Composing the floating-point length-4 vector inner product execution units, which calculate the sum of a length-4 vector inner product and a scalar, into a four-wide SIMD arrangement results in a processing capability of 32 FLOPS per cycle. | 2013-06-27 |
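The 32-FLOPS-per-cycle figure follows from the structure described above: each length-4 inner product plus a scalar add is 8 floating-point operations (4 multiplies, 3 adds, 1 scalar add), and four such units run in SIMD lockstep. A scalar Python model of that datapath (names hypothetical):

```python
# Hypothetical model: four SIMD lanes, each computing dot(a, b) + c for
# length-4 single-precision vectors -- 8 FLOPs per lane, 32 per "cycle".
def dot4_plus_scalar(a, b, c):
    # 4 multiplies + 3 adds + 1 scalar add = 8 floating-point operations
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3] + c

def simd4_dot4(lanes):
    """lanes: list of 4 (a, b, c) operand tuples, executed 'in parallel'."""
    return [dot4_plus_scalar(a, b, c) for a, b, c in lanes]

lanes = [([1.0] * 4, [2.0] * 4, 0.5)] * 4
print(simd4_dot4(lanes))  # [8.5, 8.5, 8.5, 8.5]
```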
20130166879 | MULTIPROCESSOR SYSTEM AND SYNCHRONOUS ENGINE DEVICE THEREOF - The invention discloses a multiprocessor system and a synchronous engine device thereof. The synchronous engine includes: a plurality of storage queues, wherein each queue stores all synchronization primitives from one of the processors; a plurality of scheduling modules, corresponding one-to-one with the storage queues, which select synchronization primitives for execution from the storage queues and then, according to the type of each synchronization primitive, transmit the selected primitives to the corresponding processing modules; a plurality of processing modules, which receive the transmitted synchronization primitives and execute different functions; a virtual synchronous memory structure module, which uses a small memory space and maps the main memory spaces of all processors into a synchronization memory structure to realize the function of all synchronization primitives through control logic; a main memory port, which communicates with the virtual synchronous memory structure module to read and write the main memory of all processors and to initiate interrupt requests to the processors; and a configuration register, which stores the various configuration information required by the processing modules. | 2013-06-27 |
20130166880 | PROCESSING DEVICE AND METHOD FOR CONTROLLING PROCESSING DEVICE - A processing device has an instruction buffer retaining one or more instructions obtained by an instruction fetch request, an instruction execution control unit decoding and executing an instruction, a branch prediction mechanism retaining one or more branch histories including a distance flag indicating a difference between a branch instruction address and a branch destination instruction address and performing a branch prediction of an instruction, and an instruction fetch control unit issuing the instruction fetch request. When a branch prediction result is a branch taken and it is judged from the distance flag that the instruction fetch request for the branch destination instruction address is included in the instruction fetch requests in the sequential direction which were issued before the branch prediction result is output, the control unit outputs an instruction retained in the instruction buffer without issuing a new instruction fetch request for the branch destination instruction address. | 2013-06-27 |
20130166881 | METHODS AND APPARATUS FOR SCHEDULING INSTRUCTIONS USING PRE-DECODE DATA - Systems and methods for scheduling instructions using pre-decode data corresponding to each instruction. In one embodiment, a multi-core processor includes a scheduling unit in each core for selecting instructions from two or more threads each scheduling cycle for execution on that particular core. As threads are scheduled for execution on the core, instructions from the threads are fetched into a buffer without being decoded. The pre-decode data is determined by a compiler and is extracted by the scheduling unit during runtime and used to control selection of threads for execution. The pre-decode data may specify a number of scheduling cycles to wait before scheduling the instruction. The pre-decode data may also specify a scheduling priority for the instruction. Once the scheduling unit selects an instruction to issue for execution, a decode unit fully decodes the instruction. | 2013-06-27 |
20130166882 | METHODS AND APPARATUS FOR SCHEDULING INSTRUCTIONS WITHOUT INSTRUCTION DECODE - Systems and methods for scheduling instructions without instruction decode. In one embodiment, a multi-core processor includes a scheduling unit in each core for scheduling instructions from two or more threads scheduled for execution on that particular core. As threads are scheduled for execution on the core, instructions from the threads are fetched into a buffer without being decoded. The scheduling unit includes a macro-scheduler unit for performing a priority sort of the two or more threads and a micro-scheduler arbiter for determining the highest order thread that is ready to execute. The macro-scheduler unit and the micro-scheduler arbiter use pre-decode data to implement the scheduling algorithm. The pre-decode data may be generated by decoding only a small portion of the instruction or received along with the instruction. Once the micro-scheduler arbiter has selected an instruction to dispatch to the execution unit, a decode unit fully decodes the instruction. | 2013-06-27 |
20130166883 | METHOD AND APPARATUS FOR PERFORMING LOGICAL COMPARE OPERATIONS - A method and apparatus for including in a processor instructions for performing logical-comparison and branch support operations on packed or unpacked data. In one embodiment, instruction decode logic decodes instructions for an execution unit to operate on packed data elements including logical comparisons. A register file including 128-bit packed data registers stores packed single-precision floating point (SPFP) and packed integer data elements. The logical comparisons may include comparison of SPFP data elements and comparison of integer data elements and setting at least one bit to indicate the results. Based on these comparisons, branch support actions are taken. Such branch support actions may include setting the at least one bit, which in turn may be utilized by a branching unit in response to a branch instruction. Alternatively, the branch support actions may include branching to an indicated target code location. | 2013-06-27 |
20130166884 | METHOD AND APPARATUS FOR PERFORMING LOGICAL COMPARE OPERATIONS - A method and apparatus for including in a processor instructions for performing logical-comparison and branch support operations on packed or unpacked data. In one embodiment, instruction decode logic decodes instructions for an execution unit to operate on packed data elements including logical comparisons. A register file including 128-bit packed data registers stores packed single-precision floating point (SPFP) and packed integer data elements. The logical comparisons may include comparison of SPFP data elements and comparison of integer data elements and setting at least one bit to indicate the results. Based on these comparisons, branch support actions are taken. Such branch support actions may include setting the at least one bit, which in turn may be utilized by a branching unit in response to a branch instruction. Alternatively, the branch support actions may include branching to an indicated target code location. | 2013-06-27 |
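The two logical-compare abstracts above describe comparing packed 128-bit operands and setting at least one flag bit that a branching unit can consume. A Python model of one form such an operation could take; the specific flag assignments (zero flag from AND, carry flag from ANDN, in the style of a PTEST-like instruction) are illustrative assumptions, not the patents' exact logic:

```python
# Hypothetical model of a packed logical compare setting branchable flags:
# ZF = 1 if (a AND b) has no bits set; CF = 1 if ((NOT a) AND b) has none.
MASK128 = (1 << 128) - 1  # 128-bit packed data register width

def packed_test(a, b):
    zf = (a & b) == 0
    cf = ((~a & MASK128) & b) == 0
    return zf, cf

print(packed_test(0x00, 0xFF))  # (True, False)  -> disjoint operands
print(packed_test(0xFF, 0xFF))  # (False, True)  -> b is a subset of a
```

A branch instruction would then test ZF or CF, or the hardware could branch directly to the indicated target as the abstracts' alternative describes.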
20130166885 | METHOD AND APPARATUS FOR ON-CHIP TEMPERATURE - When an instruction is executed on an integrated circuit (IC), an activity level and temperature are measured. A relationship between the activity level and temperature is determined, to estimate the temperature from the activity level. The activity level is monitored and is input to a scheduler, which estimates the IC temperature based on the activity level. The scheduler distributes work taking into account the temperature of various IC regions and may include distributing work to the IC region that has a lowest estimated temperature or relatively lower estimated temperature (e.g., lower than the average IC or IC region temperature). When the utilization level of one or more IC regions is high, the scheduler is configured to reduce the clock speed or the voltage of the one or more IC regions, or flag the regions as being unavailable for additional workload. | 2013-06-27 |
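The scheduling policy above (estimate each region's temperature from its activity level, then place work on the coolest region) can be sketched briefly. The linear activity-to-temperature model and its coefficients are hypothetical stand-ins for the fitted relationship the abstract describes:

```python
# Hypothetical sketch: estimate per-region temperature from a measured
# activity level via a fitted linear model, then pick the coolest region.
def estimate_temp(activity, slope=0.5, ambient=40.0):
    # linear model assumed fitted from measured (activity, temperature) pairs
    return ambient + slope * activity

def pick_region(activity_by_region):
    """Return the IC region with the lowest estimated temperature."""
    temps = {r: estimate_temp(a) for r, a in activity_by_region.items()}
    return min(temps, key=temps.get)

print(pick_region({"core0": 80, "core1": 20, "core2": 55}))  # core1
```

The clock-speed or voltage reduction for over-utilized regions would be a separate policy layered on the same per-region estimates.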
20130166886 | SYSTEMS, APPARATUSES, AND METHODS FOR A HARDWARE AND SOFTWARE SYSTEM TO AUTOMATICALLY DECOMPOSE A PROGRAM TO MULTIPLE PARALLEL THREADS - Systems, apparatuses, and methods for a hardware and software system to automatically decompose a program into multiple parallel threads are described. For example, a method according to one embodiment comprises: analyzing a single-threaded region of executing program code, the analysis including identifying dependencies within the single-threaded region; determining portions of the single-threaded region of executing program code which may be executed in parallel based on the analysis; assigning the portions to two or more parallel execution tracks; and executing the portions in parallel across the assigned execution tracks. | 2013-06-27 |
20130166887 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - According to one embodiment, a data processing apparatus includes a processor and a memory. The processor includes core blocks. The memory stores a command queue and task management structure data. The command queue stores a series of kernel functions. The task management structure data defines an order of execution of kernel functions by associating a return value of a previous kernel function with an argument of a subsequent kernel function. Core blocks of the processor are capable of executing different kernel functions. | 2013-06-27 |
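The task-chaining rule above (the return value of the previous kernel function becomes the argument of the subsequent one, in queue order) reduces to a simple fold. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch: a command queue of kernel functions whose execution
# order is defined by feeding each return value into the next argument.
def run_chain(kernels, first_arg):
    value = first_arg
    for kernel in kernels:      # execution order encoded by the queue itself
        value = kernel(value)   # previous return value -> next argument
    return value

queue = [lambda x: x + 1, lambda x: x * 3, lambda x: x - 2]
print(run_chain(queue, 4))  # ((4 + 1) * 3) - 2 = 13
```

In the apparatus described, different core blocks could each pick up a kernel from the queue, with the task management structure data enforcing this same data dependency between them.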
20130166888 | PREDICTIVE OPERATOR GRAPH ELEMENT PROCESSING - Techniques are described for predictively starting a processing element. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Embodiments determine a historical startup time for a first processing element in the operator graph, where, once started, the first processing element begins normal operations once the first processing element has received a requisite amount of data from one or more upstream processing elements. Additionally, embodiments determine an amount of time the first processing element takes to receive the requisite amount of data from the one or more upstream processing elements. The first processing element is then predictively started at a first startup time based on the determined historical startup time and the determined amount of time historically taken to receive the requisite amount of data. | 2013-06-27 |
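The predictive start above combines exactly two measured quantities: the historical startup time and the historical time to receive the requisite upstream data. A one-line sketch of the resulting start-time computation (names hypothetical):

```python
# Hypothetical sketch: predictively start a processing element early enough
# that its startup delay and data-receive delay are both hidden.
def predictive_start_time(needed_at, historical_startup, data_receive_time):
    """Latest time to start so the element is ready by `needed_at`."""
    return needed_at - (historical_startup + data_receive_time)

# element must be in normal operation at t=100; startup historically takes
# 15 time units and receiving the requisite data takes 25
print(predictive_start_time(100, 15, 25))  # 60
```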
20130166889 | METHOD AND APPARATUS FOR GENERATING FLAGS FOR A PROCESSOR - A method and apparatus are described for generating flags in response to processing data during an execution pipeline cycle of a processor. The processor may include a multiplexer configured to generate valid bits for received data according to a designated data size, and a logic unit configured to control the generation of flags based on a shift or rotate operation command, the designated data size and information indicating how many bytes and bits to rotate or shift the data by. A carry flag may be used to extend the number of bits supported by shift and rotate operations. A sign flag may be used to indicate whether a result is a positive or negative number. An overflow flag may be used to indicate that a data overflow exists, whereby there is an insufficient number of bits to store the data. | 2013-06-27 |
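The three flags above can be illustrated for a left shift on an 8-bit value. The exact flag semantics here (carry = last bit shifted out, sign = MSB of the result, overflow = any bits lost) are illustrative conventions, not the patent's specific logic:

```python
# Hypothetical sketch: carry/sign/overflow flag generation for a left
# shift on an 8-bit value.
BITS = 8
MASK = (1 << BITS) - 1

def shift_left_flags(value, amount):
    full = value << amount
    result = full & MASK
    carry = bool((full >> BITS) & 1) if amount else False  # last bit out
    sign = bool(result >> (BITS - 1))                      # MSB of result
    overflow = full > MASK                                 # bits lost
    return result, carry, sign, overflow

# 0b1100_0001 << 1 = 0b1_1000_0010: bit 7 shifts out into carry,
# result 0b1000_0010 has its sign bit set, and data was lost (overflow)
print(shift_left_flags(0b1100_0001, 1))  # (130, True, True, True)
```

Chaining the carry flag back into the next shift is what lets such operations span values wider than the datapath, as the abstract notes.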