13th week of 2014 patent application highlights part 68 |
Patent application number | Title | Published |
20140089525 | COMPRESSED ANALYTICS DATA FOR MULTIPLE RECURRING TIME PERIODS - Analytics data for a network-based site may be compressed according to recurring time periods. An analytics service may obtain analytics data for network-based sites to compress into a compressed analytics data stream. To compress the analytics data, the analytics service may identify a particular time period corresponding to each analytics data value and may add the analytics data value to the compressed analytics data stream as either a baseline object for the particular time period or a difference object relative to an existing baseline object for the particular time period. These objects may be interleaved according to a time-based ordering of multiple different recurring time periods. The analytics service may send the compressed analytics data stream to an analytics client. The analytics client may decompress a portion of the compressed analytics data stream without decompressing the remaining portions. | 2014-03-27 |
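The baseline/difference encoding this abstract describes can be sketched as a small delta-compression routine. This is an illustrative reading of the abstract, not the claimed implementation; all names are invented.

```python
# Sketch of baseline/difference compression keyed by recurring time
# period (e.g. hour-of-day). Period ids and object tuples are
# illustrative, not from the filing.

def compress(samples):
    """samples: list of (period_id, value), in time order. Emits a
    stream of ('base', period, value) or ('diff', period, delta)
    objects, interleaved in the same time-based order."""
    baselines = {}
    stream = []
    for period, value in samples:
        if period not in baselines:
            baselines[period] = value                 # first value for this period
            stream.append(('base', period, value))
        else:
            stream.append(('diff', period, value - baselines[period]))
    return stream

def decompress(stream):
    """Rebuild (period, value) samples from the compressed stream."""
    baselines = {}
    out = []
    for kind, period, v in stream:
        if kind == 'base':
            baselines[period] = v
            out.append((period, v))
        else:
            out.append((period, baselines[period] + v))
    return out
```

Because each difference object refers only to its own period's baseline, a client could decode the objects for one recurring period without touching the rest of the stream, which matches the partial-decompression property the abstract claims.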
20140089526 | Communicating Data Among Personal Clouds - A gateway device performs authentication with an interconnect system that coordinates communication of data among a plurality of personal clouds. The gateway device is part of a first personal cloud that includes electronic devices. Data is communicated between the first personal cloud and at least a second personal cloud based on information in the interconnect system, wherein the communication is performed according to at least one rule provided at the gateway device, wherein the at least one rule includes at least one criterion relating to data to be shared among the plurality of personal clouds. | 2014-03-27 |
20140089527 | SYSTEM AND METHOD OF CONNECTING A COMPUTER TO A PERIPHERAL OF ANOTHER COMPUTER - A system and method of connecting a computer to a peripheral of another computer. An example system includes a processor connected to a network and to the one and the other computers through the network. The processor executes web service software which establishes a discovery service for receiving a peripheral connection request from application software of the one computer and peripheral management software which receives information from the other computer through the web service software about the peripherals of the other computer. In addition, the peripheral management software logically associates the peripherals and the other computer with a peripheral station, receives the peripheral connection request from the application software, maps the one computer to a requested peripheral of the peripheral station, and sends information to the application software through the web service software to facilitate connection by the application software to the requested peripheral of the peripheral station through the other computer. | 2014-03-27 |
20140089528 | Use of free pages in handling of page faults - A method for data transfer includes receiving in an input/output (I/O) operation data to be written to a specified virtual address in a host memory. Upon receiving the data, it is detected that a first page that contains the specified virtual address is swapped out of the host memory. Responsively to detecting that the first page is swapped out, the received data are written to a second, free page in the host memory, and the specified virtual address is remapped to the free page. | 2014-03-27 |
20140089529 | Management Data Input/Output Protocol With Page Write Extension - A process to manage data between one or more MDIO manageable devices situated on the same bus utilizing the MDIO protocol. The data management efficiency can be increased through the use of an MDIO protocol that includes a page-write mode. The MDIO protocol including the page-write mode can reduce the overhead for a write operation by omitting various portions of the MDIO communication frame format, including the preamble, start-of-frame, operational code, port address, device address, and turn-around fields that generally precede data to be written. The MDIO protocol including the page-write mode may include a next-data code to initiate the page-write mode. | 2014-03-27 |
20140089530 | OPTIMIZING PARALLEL BUILD OF APPLICATION - Optimizing a parallel build of an application includes, in parallel execution of commands, recording command sequence numbers and access information of the commands and detecting an execution conflict based on the command sequence numbers and the access information of the commands using a processor. Commands involved in the execution conflict are re-executed serially. | 2014-03-27 |
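One plausible reading of the conflict-detection step: record each command's sequence number and file-access sets in the order the commands actually finished, then flag pairs whose accesses overlap while their finish order contradicts their sequence order. This is an illustrative sketch of that idea, not the patented method.

```python
def find_conflicts(records):
    """records: list of (seq, reads, writes) per command, in the order
    the commands actually finished during the parallel build. Returns
    the sequence numbers of commands whose accesses overlap a
    lower-numbered command that finished after them, i.e. commands
    that may have run against stale inputs and need serial re-run."""
    conflicts = set()
    for i, (seq_a, r_a, w_a) in enumerate(records):
        for seq_b, r_b, w_b in records[i + 1:]:
            # a true dependency exists if one command writes a file
            # the other reads or writes
            if w_a & (r_b | w_b) or w_b & (r_a | w_a):
                if seq_b < seq_a:      # finish order violated sequence order
                    conflicts.update((seq_a, seq_b))
    return conflicts
```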
20140089531 | GENERATING RECOMMENDATIONS FOR PERIPHERAL DEVICES COMPATIBLE WITH A PROCESSOR AND OPERATING SYSTEM OF A COMPUTER - Computer program code (program code) identifies peripheral devices connected to a computer having a log file. The program code determines whether each identified peripheral device is functioning properly. The program code collects information about the configuration of the computer. The program code sets parameters that create a profile for the computer. The program code stores the profile and the log file in a database along with profiles and log files of other computers having peripheral devices identified by the program code. The program code utilizes the profiles and log files to generate recommendations for peripheral devices that are compatible with at least one processor and operating system of the computer. | 2014-03-27 |
20140089532 | Method for Shortening Enumeration of Tightly Coupled USB Device - In a Universal Serial Bus (USB) enumeration procedure, a USB Host questions a USB Device for its capabilities and chooses a set of capabilities that best fit. When the USB Device is enumerated, the USB Host may perform several time-consuming and power-consuming operations. However, when the USB Device is tightly or permanently coupled to the USB Host, part of the enumeration procedure may be redundant and can be eliminated. Accordingly, a method, an apparatus, and a computer program product for shortening enumeration of the USB Device tightly coupled to the USB Host are provided. The USB Host sends a request for a Device descriptor to the USB Device, receives a plurality of descriptors in a single transfer from the USB Device, and sets a configuration of the USB Device based on the received plurality of descriptors. | 2014-03-27 |
20140089533 | ALLOCATION OF FLOW CONTROL CREDITS FOR HIGH PERFORMANCE DEVICES - Methods and apparatus relating to allocation of flow control credits for high performance devices are described. In some embodiments, controls and/or configuration structures may be provided for the OS (Operating System) or VMM (Virtual Machine Manager) to indicate possible processor affinity (e.g., of a device driver for a given PCIe device) to the platform components (in a platform dependent fashion, for example). Using this data, the platform components could configure the RC (Root Complex) ports and/or intermediate components (such as switches, bridges, etc.) to pre-allocate buffers for the links coupling the PCIe device to the RC ports or intermediate components. Other embodiments are also disclosed and claimed. | 2014-03-27 |
20140089534 | CONFIGURABLE INPUT/OUTPUT PROCESSOR - The invention pertains to a configurable input/output processing device, a method for configuring the configurable input/output processing device and a computer program product for performing the steps of the method. | 2014-03-27 |
20140089535 | INFORMATION PROCESSING SYSTEM, DEVICE, MOBILE TERMINAL AND DEVICE DRIVER INSTALLATION METHOD - There is provided an information processing system including an information processing apparatus, at least one device, and a mobile terminal, the information processing apparatus being configured to perform a data communication with the at least one device in accordance with a first communication method, the mobile terminal being configured to perform the data communication with the information processing apparatus in accordance with the first communication method, and a data communication with the at least one device in accordance with a second communication method. If there exists, among a plurality of pieces of device information that the information processing apparatus obtained from the devices, a piece that coincides with particular device information, the device corresponding to that piece is set as the device subjected to the installation. | 2014-03-27 |
20140089536 | ADC SEQUENCING - A device comprises a central processing unit (CPU) and a memory configured for storing memory descriptors. The device also includes an analog-to-digital converter controller (ADC controller) configured for managing an analog-to-digital converter (ADC) using the memory descriptors. In addition, the device includes a direct memory access system (DMA system) configured for autonomously sequencing conversion operations performed by the ADC without CPU intervention by transferring the memory descriptors directly between the memory and the ADC controller for controlling the conversion operations performed by the ADC. | 2014-03-27 |
20140089537 | AUTOMATING DIGITAL DISPLAY - A device comprises a central processing unit (CPU), a display controller configured for controlling a digital display and a memory configured for storing data corresponding to the digital display. The device includes a direct memory access (DMA) controller configured for autonomously transferring the data from the memory directly to the display controller without CPU intervention. | 2014-03-27 |
20140089538 | MANAGED ACCESS TO PERIPHERALS OF A SERVICE TERMINAL - Managed access to one or more peripherals of a service terminal is provided. A master controller controls access to the peripheral(s) by applications of the service terminal, wherein only a single application can access the peripheral(s) at a time, by identifying an application of the applications for placing into an on-focus state in order to enable access to the peripheral(s) by the identified application, and placing the identified application into the on-focus state, where access to the peripheral(s) by the identified application is enabled. The remaining applications of the applications execute in an off-focus state in which the master controller simulates, for the remaining applications, connectivity to the peripheral(s), and in which access to the peripheral(s) by the remaining applications is disabled transparent to the remaining applications while the access to the peripheral(s) by the identified application is enabled. | 2014-03-27 |
20140089539 | Lockless Spin Buffer - Implementations of the present disclosure are directed to enabling data transfer between data producers and data consumers. Implementations include generating a data structure, the data structure including a lockless spin buffer (LLSB), the LLSB including two or more lockless components, each of the two or more lockless components including a plurality of elements to be written to and read from, providing one or more write pointers to enable one or more data producers to write to each of the two or more lockless components, and providing one or more read pointers to enable one or more data consumers to read from each of the two or more lockless components, the one or more data producers being able to write to the LLSB concurrently with the one or more data consumers being able to read from the LLSB. | 2014-03-27 |
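A toy two-component spin buffer conveys the idea in this abstract: producers fill the active component while consumers drain the other, and the roles swap when the active component fills. Python lists stand in for the atomic read/write pointers a real lockless implementation relies on; this is a structural sketch only.

```python
class SpinBuffer:
    """Toy sketch of a two-component lockless spin buffer. In a real
    LLSB the write/read pointers are updated atomically so producers
    and consumers proceed concurrently without locks; here plain
    lists and an index illustrate the structure."""
    def __init__(self, capacity):
        self.parts = [[], []]         # the two lockless components
        self.capacity = capacity
        self.write_part = 0           # component producers write to

    def put(self, item):
        part = self.parts[self.write_part]
        if len(part) == self.capacity:
            self.write_part ^= 1      # spin to the other component
            part = self.parts[self.write_part]
        part.append(item)

    def drain(self):
        """Consume everything in the component not being written."""
        read_part = self.write_part ^ 1
        items, self.parts[read_part] = self.parts[read_part], []
        return items
```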
20140089540 | STORAGE APPARATUS AND METHOD OF CONTROLLING THE SAME - A storage apparatus | 2014-03-27 |
20140089541 | BUS PROTOCOL COMPATIBLE DEVICE AND METHOD THEREFOR - A bus protocol compatible device, includes a transmitter having a first mode for providing a reference clock signal to an output, and a second mode for providing a training sequence to the output, and a power state controller for placing the transmitter in the first mode for a first period of time in response to a change in a link state, and in the second mode after an expiration of the first period of time. | 2014-03-27 |
20140089542 | CHAINED INFORMATION EXCHANGE SYSTEM COMPRISING A PLURALITY OF MODULES CONNECTED TOGETHER BY HARDENED DIGITAL BUSES - A chained information exchange system | 2014-03-27 |
20140089543 | Master Mode and Slave Mode of Computing Device - A computing device to couple with a second computing device. The computing device switches between a master mode and a slave mode based on whether the computing device is docked with the second computing device. | 2014-03-27 |
20140089544 | INFORMATION PROCESSING DEVICE AND DATA COMMUNICATION METHOD - An information processing device may include a master device and at least one slave device, which may be connected to each other by using two types of signal lines comprising a serial clock line and a serial data line. Data may be transmitted between the master device and the slave device according to a predetermined communication method by using the two types of signal lines. If either the master device resets the slave device, or a power supply to the master device and the slave device is turned on, the slave device may commence a starting operation. A notification of a starting condition may be provided to the master device by way of at least one of the serial clock line and the serial data line. | 2014-03-27 |
20140089545 | LEASED LOCK IN ACTIVE-ACTIVE HIGH AVAILABILITY DAS SYSTEMS - A method and system for IO processing in a storage system is disclosed. In accordance with the present disclosure, a controller may take long term “lease” of a portion (e.g., an LBA range) of a virtual disk of a RAID system and then utilize local locks for IOs directed to the leased portion. The method and system in accordance with the present disclosure eliminates inter-controller communication for the majority of IOs and improves the overall performance for a High Availability Active-Active DAS RAID system. | 2014-03-27 |
20140089546 | INTERRUPT TIMESTAMPING - A system and method for maintaining accurate interrupt timestamps. A semiconductor chip includes an interrupt controller (IC) with an interface to multiple sources of interrupts. In response to receiving an interrupt, the IC copies and records the value stored in a main time base counter used for maintaining a global elapsed time. The IC sends an indication of the interrupt to a corresponding processor. Either an interrupt service routine (ISR) or a device driver requests a timestamp associated with the interrupt. Rather than send a request to the operating system to obtain a current value stored in the main time base counter, the processor requests the recorded timestamp from the IC. The IC identifies the stored timestamp associated with the interrupt and returns it to the processor. | 2014-03-27 |
20140089547 | SMART PLUG OR CRADLE - There is provided a method and apparatus for allowing a user of a mobile device to securely access a storage device of a home network of the user. The method and apparatus advantageously allow for the user to share data stored on the home network with other users, or to give full or restricted access to other computing devices. The apparatus consists of a network element residing on the home network of the user, which enables communications between the network storage and the mobile device when the mobile device is in a remote location. | 2014-03-27 |
20140089548 | Systems, Methods, and Articles of Manufacture To Stream Data - Systems and methods for streaming data. Systems allow read/write across multiple or N device modules. Device modules on a bus ring configure at power up (during the initialization process); this process informs each device module of its associated address values. Each ringed device module analyzes an address indicator word, which identifies the address at which a read/write operation is intended, and compares the address designated by the address indicator word to its assigned addresses; when the address designated by the address indicator word is an address associated with the device module, the device module reads/writes from/to that address. The memory controller (ring controller or bus master) is not required to ‘know’ which memory chip/device module in the daisy chain the address indicator word is intended for. Therefore, system embodiments allow streaming without consideration of the number of memory chips/device modules on the bus. The bus isolates modules in the way object-oriented programming isolates objects. | 2014-03-27 |
20140089549 | NON-LINEAR TERMINATION FOR AN ON-PACKAGE INPUT/OUTPUT ARCHITECTURE - An on-package interface. A first set of single-ended transmitter circuits on a first die. A first set of single-ended receiver circuits on a second die. The receiver circuits have a termination circuit comprising an inverter and a resistive feedback element. A plurality of conductive lines couple the first set of transmitter circuits and the first set of receiver circuits. The lengths of the plurality of conductive lines are matched. | 2014-03-27 |
20140089550 | Low Power Signaling for Data Transfer - Methods, systems and computer readable storage medium embodiments for communicating over a data bus include, determining a number of changes in bit value in respective bit positions between a previous bit string and a current bit string, transmitting either the current bit string in an inverted form over the data bus if the determined number of changes in bit value exceeds a threshold or the current bit string in non-inverted form if the determined number of changes in bit value does not exceed a threshold, and transmitting an additional at least one bit along with the current bit string having a logic value that indicates whether the current bit string is in an inverted form or non-inverted form. Methods, systems, and computer readable storage medium embodiments for receiving bit strings over a bus are also disclosed. | 2014-03-27 |
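This is the classic data-bus-inversion idea, and the abstract's majority threshold and extra flag bit can be sketched directly. The encoder here compares against the previously transmitted word; that choice, like the tuple-of-bits representation, is illustrative.

```python
def dbi_encode(prev_bits, cur_bits):
    """Data-bus-inversion sketch: invert the current word when more
    than half the lanes would toggle relative to the previous word.
    Words are tuples of 0/1; returns (bits_on_wire, invert_flag),
    where invert_flag is the additional bit sent with the word."""
    n = len(cur_bits)
    toggles = sum(p != c for p, c in zip(prev_bits, cur_bits))
    if toggles > n // 2:                       # threshold: half the lanes
        return tuple(1 - b for b in cur_bits), 1
    return cur_bits, 0

def dbi_decode(wire_bits, invert_flag):
    """Receiver undoes the inversion using the flag bit."""
    return tuple(1 - b for b in wire_bits) if invert_flag else wire_bits
```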
20140089551 | COMMUNICATION OF DEVICE PRESENCE BETWEEN BOOT ROUTINE AND OPERATING SYSTEM - Various embodiments are directed to creating multiple device blocks associated with hardware devices, arranging the device blocks in an order indicative of positions of the hardware devices in a hierarchy of buses and bridges, and enabling access to the multiple device blocks from an operating system. An apparatus comprises a processor circuit and storage storing instructions operative on the processor circuit to create a device table comprising multiple device blocks, each device block corresponding to one of multiple hardware devices accessible to the processor circuit, the device blocks arranged in an order indicative of relative positions of the hardware devices in a hierarchy of buses and at least one bridge device; enable access to the device table by an operating system; and execute a second sequence of instructions of the operating system operative on the processor circuit to access the device table. Other embodiments are described and claimed herein. | 2014-03-27 |
20140089552 | USB HUBS WITH GALVANIC ISOLATION - A universal serial bus (USB) hub includes a USB AFE circuit module, a hub core and an isolator circuit module interposed between the USB AFE circuit module and the hub core. Data communications between the hub core and the first USB AFE circuit module pass through the isolator circuit module. A method for communicating through a universal serial bus hub includes providing a USB AFE circuit module, providing a hub core, providing an isolator circuit module interposed between the USB AFE circuit module and the hub core, and directing communication from the USB AFE circuit module to the hub core through the isolator circuit module. | 2014-03-27 |
20140089553 | INTERFACE BETWEEN A HOST AND A PERIPHERAL DEVICE - Disclosed are various embodiments for an interface between a host device and one or more peripheral devices in a computing system. A peripheral-side controller, a host-side controller, and a peripheral-side translator are located on a peripheral device that is in communication with a host device. The peripheral-side translator transfers data from an internal bus in the peripheral device to an external interface for the peripheral device. The internal bus is associated with a first bus protocol, and the external interface is associated with a second bus protocol. | 2014-03-27 |
20140089554 | UNIVERSAL SERIAL BUS SIGNAL TEST DEVICE - A universal serial bus (USB) signal test device includes a printed circuit board. A first connector, a second connector, and a number of USB hub integrated circuits (ICs) are arranged on the printed circuit board. The USB hub ICs are connected in series. A USB signal is passed through the USB hub ICs and an auxiliary test device in that order. The USB signal is then measured with an oscilloscope after passing through the USB hub ICs and the auxiliary test device. | 2014-03-27 |
20140089555 | DEVICE, SYSTEM AND METHOD OF MULTI-CHANNEL PROCESSING - Some demonstrative embodiments include devices, systems and methods of multi-channel processing. For example, a multi-channel data processor may process data of a plurality of channels, the multi-channel data processor is to switch from processing a first channel to processing a second channel of the plurality of channels by performing a context switch during a single clock cycle, the context switch including storing first state context corresponding to a processing state of the first channel and loading previously stored second state context corresponding to a processing state of the second channel. | 2014-03-27 |
20140089556 | SESSION KEY ASSOCIATED WITH COMMUNICATION PATH - Techniques for associating a session key with a communication path are provided. A host may provide a session key to a library controller over a first communications path. The library controller may associate the communications path with the session key. In one aspect, commands received over the first communications path are associated with the session key. In another aspect, the session key may be associated with a second communications path when the communications path is to be changed. | 2014-03-27 |
20140089557 | IMAGE STORAGE OPTIMIZATION IN VIRTUAL ENVIRONMENTS - A method for monitoring and managing virtual machine image storage in a virtualized computing environment is proposed, where the method for managing storage utilized by a virtual machine can include identifying one or more unused disk blocks in a guest virtual machine image, and removing the unused disk blocks from the guest virtual machine image. | 2014-03-27 |
20140089558 | DYNAMIC REDUNDANCY MAPPING OF CACHE DATA IN FLASH-BASED CACHING SYSTEMS - A method for managing redundancy of data in a solid-state cache system including at least three solid-state storage modules. The method may include designating one or more extents of each dirty mirror pair to be of a particular priority order of at least two priority orders. The at least two priority orders can include at least a highest priority order. The highest priority order can have a higher relative priority than the other priority orders. The method may also include performing at least one redundancy conversion iteration. Each redundancy conversion iteration includes converting extents of at least two dirty mirror pairs into at least one RAID 5 group and at least one unconverted extent. The extents of the at least two dirty mirror pairs can include extents designated to be of a highest remaining priority order. Each redundancy conversion iteration can also include deallocating the at least one unconverted extent. | 2014-03-27 |
20140089559 | APPARATUS, SYSTEM AND METHOD FOR ADAPTIVE CACHE REPLACEMENT IN A NON-VOLATILE MAIN MEMORY SYSTEM - Techniques and mechanisms for adaptively changing between replacement policies for selecting lines of a cache for eviction. In an embodiment, evaluation logic determines a value of a performance metric which is for writes to a non-volatile memory. Based on the determined value of the performance metric, a parameter value of a replacement policy is determined. In another embodiment, cache replacement logic performs a selection of a line of cache for data eviction, where the selection is in response to the policy unit providing an indication of the determined parameter value. | 2014-03-27 |
20140089560 | MEMORY DEVICES AND METHODS HAVING WRITE DATA PERMUTATION FOR CELL WEAR REDUCTION - A memory system can include a plurality of memory elements each comprising a memory layer having at least one layer programmable between at least two different impedance states; a data input configured to receive multi-bit write data values; and a permutation circuit coupled between the memory elements and the data input, and configured to repeatedly permute the multi-bit write data values prior to writing such data values into the memory elements. | 2014-03-27 |
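One simple permutation satisfying this abstract's goal is a bit rotation keyed to the element's write count, so repeated writes of the same logical value stress different cells. The rotation choice and the `width` parameter are invented for illustration; the filing covers permutation generally.

```python
def permute(value, write_count, width=8):
    """Rotate-left the multi-bit write value by the element's write
    count so successive writes of identical data land on different
    cells. Illustrative permutation, not the patented circuit."""
    r = write_count % width
    mask = (1 << width) - 1
    return ((value << r) | (value >> (width - r))) & mask

def unpermute(stored, write_count, width=8):
    """Inverse rotation applied on read-back."""
    r = write_count % width
    mask = (1 << width) - 1
    return ((stored >> r) | (stored << (width - r))) & mask
```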
20140089561 | Techniques Associated with Protecting System Critical Data Written to Non-Volatile Memory - Examples are disclosed for techniques associated with protecting system critical data written to non-volatile memory. In some examples, system critical data may be written to a non-volatile memory using a first data protection scheme. User data that includes non-system critical data may also be written to the non-volatile memory using a second data protection scheme. For these examples, both data protection schemes may have a same given data format size. Various examples are provided for use of the first data protection scheme that may provide enhanced protection for the system critical data compared to protection provided to user data using the second data protection scheme. Other examples are described and claimed. | 2014-03-27 |
20140089562 | EFFICIENT I/O PROCESSING IN STORAGE SYSTEM - Exemplary embodiments provide an information processing system and data processing for efficient I/O processing in the storage system. In one aspect, a storage system comprises: a memory; and a controller operable to execute a process for data stored in the memory so that an address of the data stored in the memory is changed between a first address managed in a virtual memory on a server and a second address managed by the controller, based on a command containing an address corresponding to the first address, the command being sent from the server to the storage system. In some embodiments, the memory includes a server data memory and a storage data memory. In specific embodiments, in response to the command from the server, the controller is operable to change a status of data stored in the memory from server data to storage data or from storage data to server data. | 2014-03-27 |
20140089563 | CONFIGURATION INFORMATION BACKUP IN MEMORY SYSTEMS - According to one configuration, a memory system includes a configuration manager and multiple memory devices. The configuration manager includes status detection logic, retrieval logic, and configuration management logic. The status detection logic receives notification of a failed attempt by a first memory device to be initialized with custom configuration settings stored in the first memory device. In response to the notification, the retrieval logic retrieves a backup copy of configuration settings information from a second memory device in the memory system. The configuration management logic utilizes the backup copy of the configuration settings information retrieved from the second memory device to initialize the first memory device. | 2014-03-27 |
20140089564 | METHOD OF DATA COLLECTION IN A NON-VOLATILE MEMORY - A method of data collection is performed in a non-volatile memory that has a number of blocks and each block has multiple pages. A timestamp is recorded associated with a data written to the non-volatile memory. Some of the written data are moved from a plurality of different pages respectively to a first block according to the timestamps associated with the plurality of written data stored in the plurality of different pages. | 2014-03-27 |
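The timestamp-driven collection can be read as: pick the pages whose data has aged past a threshold and move them into one fresh block together, so similarly-aged data invalidates in one place. A minimal sketch of that selection step, with invented names:

```python
def collect_by_age(pages, threshold, now):
    """pages: dict page_id -> (data, timestamp), where the timestamp
    was recorded when the data was written. Returns the page ids whose
    data is at least `threshold` old; a collector would move these
    together into a first (fresh) block. Illustrative policy only."""
    return sorted(pid for pid, (_, ts) in pages.items()
                  if now - ts >= threshold)
```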
20140089565 | SOLID STATE DEVICE WRITE OPERATION MANAGEMENT SYSTEM - A solid state device (SSD) write operation management system including a file system that incorporates SSD status information into its operational logic is disclosed. By incorporating SSD status information, the system achieves various advantages over conventional systems, such as enhanced write performance and extended SSD lifespan. The system processes various criteria to select the optimal virtual device (“vdev”) for data allocation in response to a write request. The first criterion utilizes Program/Erase counts of physical blocks contained in the SSDs. Another criterion is the number of physical free blocks of a drive. If the average of the selected vdev's physical free blocks is higher than the OP threshold, then the system selects for data allocation the vdev with the greatest amount of logical free space. In the instance that the average is lower, the system schedules garbage collection for the vdev. | 2014-03-27 |
20140089566 | DATA STORING METHOD, AND MEMORY CONTROLLER AND MEMORY STORAGE APPARATUS USING THE SAME - A data storing method and a memory controller and a memory storage apparatus using the same are provided. The method includes logically grouping physical erase units into a data area and a spare area; selecting a physical erase unit from the spare area as a first data collecting unit; and selecting a physical erase unit from the spare area as a second data collecting unit. The method also includes writing data received from a host into the first data collecting unit. The method further includes performing a data arranging operation to move valid data in a third physical erase unit to the second data collecting unit and associating the third physical erase unit with the spare area. Accordingly, the method can effectively enhance the performance of the write operation. | 2014-03-27 |
20140089567 | HARDWARE INTEGRITY VERIFICATION - A flash memory management method and apparatus provides for the separation of the command and data paths so that communication paths may be used more efficiently, taking account of the characteristics of NAND FLASH circuits where the times to read, write and erase data differ substantially. A unique sequence identifier is assigned to a write command and associated data and association of the data and commands are validated prior to writing to the memory by comparing the unique sequence numbers of the data and command prior to executing the command. This comparison is performed after the data and command have traversed the communication paths. | 2014-03-27 |
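The sequence-number check can be sketched as re-associating a command with its data at the memory side after the two have traversed their separate paths, and refusing mismatched pairs. Field names here are hypothetical.

```python
def validate_and_write(cmd, data, flash):
    """cmd and data each carry the unique sequence id assigned when
    the write was issued; they arrived over separate command and data
    paths. Compare the ids before executing the write, and reject the
    pair on mismatch. Field names are illustrative."""
    if cmd['seq'] != data['seq']:
        raise ValueError('command/data sequence mismatch')
    flash[cmd['addr']] = data['payload']

# a matched pair executes normally
flash = {}
validate_and_write({'seq': 7, 'addr': 0x40}, {'seq': 7, 'payload': b'x'}, flash)
```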
20140089568 | EMBEDDED MULTIMEDIA CARD (EMMC), HOST FOR CONTROLLING THE EMMC, AND METHODS OF OPERATING THE EMMC AND THE HOST - A method of operating an eMMC system includes receiving a first command defining a first operation from the host, and storing the first command in a first command register among N command registers, and receiving a second command defining a second operation from the host, and storing the second command in a second command register among the N command registers, wherein the second command is received while the first operation is being performed. | 2014-03-27 |
20140089569 | WRITE CACHE SORTING - A method of managing a non-volatile memory system is described where data elements stored in a buffer are characterized by attributes and a write data tag is created for the data elements. A plurality of write data tag queues is maintained so that different data attributes may be applied as sorting criteria when the data elements are formed into pages for storage in the non-volatile memory. The memory system may be organized as a RAID system and a write data tag queue may be associated with a specific RAID group such that the data pages may be written from a buffer to the non-volatile memory in accordance with the results of sorting each write data queue. The data elements stored in the buffer may be received from a user, or be read from the non-volatile memory during the performance of system overhead operations. | 2014-03-27 |
20140089570 | SEMICONDUCTOR MEMORY - A memory block area in a semiconductor memory includes program segments. Each program segment includes a group of memory cells arranged at positions where word lines and bit lines intersect and connected to a common source line. The word lines are shared by the program segments. At program operation time source line switches are used for supplying first voltage to a source line in a program segment, of the program segments, including a memory cell to be programmed and supplying second voltage to a source line in a program segment, of the program segments, not including the memory cell to be programmed. | 2014-03-27 |
20140089571 | BIT INVERSION IN MEMORY DEVICES - Bit inversions occurring in memory systems and apparatus are provided. Data is acquired from a source destined for a target. As the data is acquired from the source, the set bits associated with the data are tabulated. If the total number of set bits exceeds half of the total bits, an inversion flag is set. When the data is transferred to the target, the bits are inverted during the transfer if the inversion flag is set. | 2014-03-27 |
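The tally-and-invert scheme above can be sketched in a few lines of Python; the function names and byte-oriented framing are illustrative assumptions, not the patent's implementation:

```python
def encode_with_inversion(data: bytes) -> tuple[bytes, bool]:
    """Tabulate set bits as the data is acquired from the source; if they
    exceed half of the total bits, invert every bit and raise the flag."""
    total_bits = len(data) * 8
    set_bits = sum(bin(b).count("1") for b in data)
    invert = set_bits > total_bits // 2
    if invert:
        data = bytes(b ^ 0xFF for b in data)  # inversion during the transfer
    return data, invert

def decode_with_inversion(data: bytes, invert: bool) -> bytes:
    """The target undoes the inversion when the flag is set."""
    return bytes(b ^ 0xFF for b in data) if invert else data
```

The payoff in a real device is that a mostly-ones payload is stored or transmitted as mostly zeros, at the cost of one flag bit per transfer.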
20140089572 | DISTRIBUTED PAGE-TABLE LOOKUPS IN A SHARED-MEMORY SYSTEM - The disclosed embodiments provide a system that performs distributed page-table lookups in a shared-memory multiprocessor system with two or more nodes, where each of these nodes includes a directory controller that manages a distinct portion of the system's address space. During operation, a first node receives a request for a page-table entry that is located at a physical address that is managed by the first node. The first node accesses its directory controller to retrieve the page-table entry, and then uses the page-table entry to calculate the physical address for a subsequent page-table entry. The first node determines the home node (e.g., the managing node) for this calculated physical address, and sends a request for the subsequent page-table entry to that home node. | 2014-03-27 |
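A minimal Python sketch of such a distributed walk, assuming (purely for illustration) four nodes, address-interleaved ownership, and each node's directory modeled as a dictionary:

```python
NODES = 4  # assumed node count; low address bits select the home node

def home_node(phys_addr: int) -> int:
    """Each node's directory controller manages an interleaved slice of
    the shared physical address space."""
    return phys_addr % NODES

def distributed_walk(directories: dict, root_addr: int, levels: int) -> int:
    """One node retrieves the page-table entry for the current address,
    computes the next entry's physical address from it, and forwards the
    request to that address's home node."""
    addr = root_addr
    for _ in range(levels):
        owner = home_node(addr)          # node managing this address
        addr = directories[owner][addr]  # entry yields next level's address
    return addr                          # final translated address
```

The point of the hand-off is that each lookup runs on the node already holding the entry, instead of shipping every level's data back to the requester.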
20140089573 | METHOD FOR ACCESSING MEMORY DEVICES PRIOR TO BUS TRAINING - Embodiments of the invention describe apparatuses, systems and methods for enabling memory device access prior to bus training, thereby enabling firmware image storage in non-flash nonvolatile memory, such as DDR DRAM. The increasing size of firmware images, such as BIOS, MRC, and ME firmware, makes current non-volatile storage solutions, such as SPI flash memory, impractical; executing BIOS code in flash is slow, and having a separate non-volatile memory device increases device costs. Furthermore, solutions such as Cache-as-RAM, which are utilized for running the pre-memory BIOS code, are limited by the cache size that is not scalable to the increasing complexity of BIOS code. | 2014-03-27 |
20140089574 | SEMICONDUCTOR MEMORY DEVICE STORING MEMORY CHARACTERISTIC INFORMATION, MEMORY MODULE AND MEMORY SYSTEM HAVING THE SAME, AND OPERATING METHOD OF THE SAME - A semiconductor memory device storing memory characteristic information, a memory module including the semiconductor memory device, a memory system, and an operating method of the semiconductor memory device. The semiconductor memory device may include a cell array including a plurality of areas; a command decoder configured to decode a command and generate an internal command; and an information storage unit configured to store characteristic information of at least one of the plurality of areas. When a first command and a first row address accompanying the first command are received, characteristic information of an area corresponding to the first row address is provided to the outside. | 2014-03-27 |
20140089575 | Semiconductor Memory Asynchronous Pipeline - An asynchronously pipelined SDRAM has separate pipeline stages that are controlled by asynchronous signals. Rather than using a clock signal to synchronize data at each stage, an asynchronous signal is used to latch data at every stage. The asynchronous control signals are generated within the chip and are optimized to the different latency stages. Longer latency stages require larger delay elements, while shorter latency stages require shorter delay elements. The data is synchronized to the clock at the end of the read data path before being read out of the chip. Because the data has been latched at each pipeline stage, it suffers from less skew than would be seen in a conventional wave pipeline architecture. Furthermore, since the stages are independent of the system clock, the read data path can be run at any CAS latency as long as the re-synchronizing output is built to support it. | 2014-03-27 |
20140089576 | METHOD, APPARATUS AND SYSTEM FOR PROVIDING A MEMORY REFRESH - A memory controller to implement targeted refreshes of potential victim rows of a row hammer event. In an embodiment, the memory controller receives an indication that a specific row of a memory device is experiencing repeated accesses which threaten the integrity of data in one or more victim rows physically adjacent to the specific row. The memory controller accesses default offset information in the absence of address map information which specifies an offset between physically adjacent rows of the memory device. In another embodiment, the memory controller determines addresses for potential victim rows based on the default offset information. In response to the received indication of the row hammer event, the memory controller sends for each of the determined plurality of addresses a respective command to the memory device, where the commands are for the memory device to perform targeted refreshes of potential victim rows. | 2014-03-27 |
20140089577 | VOLATILE MEMORY DEVICE AND MEMORY CONTROLLER - A volatile memory device includes a memory cell array, a command decoder, a self-refresh circuit, and a register. The command decoder is configured to decode a self-refresh entry command, a self-refresh exit command, and a register read command based on external command signals received from outside the volatile memory device. The self-refresh circuit is configured to automatically refresh the memory cell array during a self-refresh mode, which is entered in response to the self-refresh entry command and exited in response to the self-refresh exit command. The register is configured to store an accessible state in response to the self-refresh exit command, and output the stored accessible state in response to the register read command. The accessible state indicates whether or not the memory cell array is ready to be read or written. | 2014-03-27 |
20140089578 | MULTI-UPDATABLE LEAST RECENTLY USED MECHANISM - A control unit of a least recently used (LRU) mechanism for a ternary content addressable memory (TCAM) stores counts indicating a time sequence with resources in entries of the TCAM. The control unit receives an access request with a mask defining related resources. The TCAM is searched to find partial matches based on the mask. The control unit increases the counts for entries corresponding to partial matches, preserving an order of the counts. If the control unit also finds an exact match, its count is updated to be greater than the other increased counts. After each access request, the control unit searches the TCAM to find the entry having the lowest count, and writes the resource of that entry to an LRU register. In this manner, the system software can instantly identify the LRU entry by reading the value in the LRU register. | 2014-03-27 |
20140089579 | INFORMATION PROCESSING SYSTEM, RECORDING MEDIUM, AND INFORMATION PROCESSING METHOD - An information processing system that sets coupling information, which defines a logical coupling corresponding to a first disk, to an uncoupled state indicating that a first logical machine associated with the first disk and the first disk are not coupled, in response to a request to stop the first logical machine, and releases the coupling between the first logical machine and the first disk based on the uncoupled state set in the coupling information. | 2014-03-27 |
20140089580 | HANDLING ENCLOSURE UNAVAILABILITY IN A STORAGE SYSTEM - The presently disclosed subject matter includes, inter alia, a storage system and a method of managing allocation of data in case an enclosure in a storage system becomes unavailable. The storage system has a storage space configured as a plurality of RAID groups, each RAID group comprising N parity members. According to one aspect of the disclosed subject matter, responsive to a write request, at least one section allocated to a disk in an unavailable enclosure is identified; at least one temporary RAID group in a spare storage space of the storage system is allocated and data related to the write request is written to the temporary RAID group. | 2014-03-27 |
20140089581 | CAPACITY-EXPANSION OF A LOGICAL VOLUME - Expanding capacity of a logical volume is described. In an example a logical volume is described by a global metadata unit and a plurality of local metadata units. The global metadata unit includes a description of the logical volume, a list of the plurality of local metadata units, and ranges of logical blocks of the logical volume corresponding to the plurality of local metadata units. Each of the local metadata units includes a description of a local RAID set and a range of logical blocks on the local RAID set. When a new drive is to be added to the logical volume to increase capacity, a new local metadata unit is created. The new local metadata unit includes a description of a new local RAID set to be added to the logical volume and a range of logical blocks on the new drive. The new local metadata unit is added to the global metadata unit to expand the logical volume to incorporate the new local RAID set. | 2014-03-27 |
20140089582 | DISK ARRAY APPARATUS, DISK ARRAY CONTROLLER, AND METHOD FOR COPYING DATA BETWEEN PHYSICAL BLOCKS - According to one embodiment, a disk array controller includes a data copy unit and a physical block replacement unit. The data copy unit copies data from a master logical disk to a backup logical disk in order to set the master logical disk and the backup logical disk in a synchronization status. The physical block replacement unit allocates a third physical block to the backup logical disk, before data is copied from a first physical block allocated to the master logical disk to the backup logical disk, when the allocation is changed to the third physical block instead of a second physical block that is associated with the first physical block and is allocated to the backup logical disk. | 2014-03-27 |
20140089583 | LIBRARY APPARATUS, CONTROL METHOD OF THE SAME AND STORAGE MEDIUM STORING COMPUTER PROGRAM - Disclosed is a library apparatus and the like capable of easily associating a shared slot, into which a recording medium has been inserted, with an appropriate logical library. | 2014-03-27 |
20140089584 | ACCELERATED PATH SELECTION BASED ON NUMBER OF WRITE REQUESTS AND SEQUENTIAL TREND - Embodiments herein relate to selecting an accelerated path based on a number of write requests and a sequential trend. One of an accelerated path and a cache path is selected between a host and a storage device based on at least one of a number of write requests and a sequential trend. The cache path connects the host to the storage device via a cache. The number of write requests is based on a total number of random and sequential write requests from a set of outstanding requests from the host to the storage device. The sequential trend is based on a percentage of sequential read and sequential write requests from the set of outstanding requests. | 2014-03-27 |
20140089585 | HIERARCHY MEMORY MANAGEMENT - In one embodiment, a storage system comprises: a first type interface being operable to communicate with a server using a remote memory access; a second type interface being operable to communicate with the server using a block I/O (Input/Output) access; a memory; and a controller being operable to manage (1) a first portion of storage areas of the memory to allocate for storing data, which is to be stored in a physical address space managed by an operating system on the server and which is sent from the server via the first type interface, and (2) a second portion of the storage areas of the memory to allocate for caching data, which is sent from the server to a logical volume of the storage system via the second type interface and which is to be stored in a storage device of the storage system corresponding to the logical volume. | 2014-03-27 |
20140089586 | ARITHMETIC PROCESSING UNIT, INFORMATION PROCESSING DEVICE, AND ARITHMETIC PROCESSING UNIT CONTROL METHOD - An L2 cache control unit searches the cache memory according to a memory access request provided from request storage unit 0 through a CPU core unit, and retains, in request storage units 1 and 2, any memory access request for which a cache miss has occurred. A bank abort generation unit counts, for each bank, the number of memory access requests to the main storage device, and instructs the L2 cache control unit to interrupt access when any of the counted numbers exceeds a specified value. According to the instruction, the L2 cache control unit interrupts the processing of the memory access request retained in request storage unit 0. A main memory control unit issues the memory access request retained in request storage unit 2 to the main storage device. | 2014-03-27 |
20140089587 | PROCESSOR, INFORMATION PROCESSING APPARATUS AND CONTROL METHOD OF PROCESSOR - An entry information storing unit | 2014-03-27 |
20140089588 | Method and system of storing and retrieving data - Method and system of storing data by a software application. Each read query of a data storage system by a software application is first issued solely to a plurality of cache nodes, which return the queried data if available. If not available, the software application receives a miss that triggers a fetch of the queried data from one or more database systems on a first dedicated interface. Upon having retrieved the queried data, the software application adds the queried data to at least one cache node on a second dedicated interface. Each write to the one or more database systems by the software application is also concurrently performed in the at least one cache node. Hence, population of the at least one cache node occurs quickly, at each missed read query of the at least one cache node and at each write query of the data storage system. | 2014-03-27 |
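The read-miss-fill plus write-to-both behavior described above is essentially the classic cache-aside pattern; a minimal Python sketch (class and attribute names are illustrative assumptions, with dictionaries standing in for the cache nodes and database systems):

```python
class CacheAsideStore:
    """Reads go to the cache first; a miss triggers a database fetch and
    a cache fill. Writes go to the database and the cache together."""

    def __init__(self, database: dict):
        self.db = database   # stands in for the database systems
        self.cache = {}      # stands in for the cache nodes
        self.misses = 0

    def read(self, key):
        if key in self.cache:        # hit: answered solely by the cache
            return self.cache[key]
        self.misses += 1             # miss: fetch on the first interface ...
        value = self.db[key]
        self.cache[key] = value      # ... then fill on the second interface
        return value

    def write(self, key, value):
        self.db[key] = value         # write the database ...
        self.cache[key] = value      # ... and concurrently the cache node
```

Because writes also populate the cache, a freshly written key never pays the miss penalty on its first read.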
20140089589 | BARRIER COLORS - Methods and processors for enforcing an order of memory access requests in the presence of barriers in an out-of-order processor pipeline. A speculative color is assigned to instruction operations in the front-end of the processor pipeline, while the instruction operations are still in order. The instruction operations are placed in any of multiple reservation stations and then issued out-of-order from the reservation stations. When a barrier is encountered in the front-end, the speculative color is changed, and instruction operations are assigned the new speculative color. A core interface unit maintains an architectural color, and the architectural color is changed when a barrier retires. The core interface unit stalls instruction operations with a speculative color that does not match the architectural color. | 2014-03-27 |
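The two-counter color mechanism can be modeled with a short Python sketch; the class, the single-barrier-in-flight framing, and which color the barrier op itself carries are assumptions of this illustration, not details from the abstract:

```python
class BarrierColors:
    """Front-end tags ops with the current speculative color; a decoded
    barrier bumps it. The core interface unit keeps the architectural
    color, bumped only at barrier retirement, and stalls any op whose
    color differs from it."""

    def __init__(self):
        self.speculative_color = 0    # advanced in the in-order front-end
        self.architectural_color = 0  # advanced when a barrier retires

    def decode(self, is_barrier: bool) -> int:
        if is_barrier:
            self.speculative_color += 1  # ops after the barrier get new color
        return self.speculative_color

    def retire_barrier(self):
        self.architectural_color += 1

    def must_stall(self, op_color: int) -> bool:
        return op_color != self.architectural_color
```

Ops issued out-of-order can thus be ordered cheaply: only the color comparison, not the op's position, decides whether it must wait for the barrier to retire.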
20140089590 | SYSTEM CACHE WITH COARSE GRAIN POWER MANAGEMENT - Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and individual ways are powered down when cache activity is low. A maximum active way configuration register is set by software and determines the maximum number of ways which are permitted to be active. When searching for a cache line replacement candidate, a linear feedback shift register (LFSR) is used to select from the active ways. This ensures that each active way has an equal chance of getting picked for finding a replacement candidate when one or more of the ways are inactive. | 2014-03-27 |
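The LFSR-based way selection can be sketched in Python; the 16-way geometry and the specific 16-bit tap set are illustrative assumptions (any maximal-length LFSR would serve):

```python
N_WAYS = 16  # assumed way count for illustration

def lfsr16_next(state: int) -> int:
    """One step of a 16-bit Fibonacci LFSR (taps 16, 14, 13, 11)."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def pick_replacement_way(state: int, active_ways: set) -> tuple[int, int]:
    """Advance the LFSR until it lands on an active way, so powered-down
    ways are never chosen and each active way keeps an equal chance of
    being picked as the replacement candidate."""
    while True:
        state = lfsr16_next(state)
        way = state % N_WAYS
        if way in active_ways:
            return way, state   # return the new state for the next pick
```

Re-drawing until an active way comes up is what preserves uniformity when some ways are inactive, rather than folding the inactive ways onto a fixed survivor.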
20140089591 | SUPPORTING TARGETED STORES IN A SHARED-MEMORY MULTIPROCESSOR SYSTEM - The present embodiments provide a system for supporting targeted stores in a shared-memory multiprocessor. A targeted store enables a first processor to push a cache line to be stored in a cache memory of a second processor in the shared-memory multiprocessor. This eliminates the need for multiple cache-coherence operations to transfer the cache line from the first processor to the second processor. The system includes an interface, such as an application programming interface (API), and a system call interface or an instruction-set architecture (ISA) that provides access to a number of mechanisms for supporting targeted stores. These mechanisms include a thread-location mechanism that determines a location near where a thread is executing in the shared-memory multiprocessor, and a targeted-store mechanism that targets a store to a location (e.g., cache memory) in the shared-memory multiprocessor. | 2014-03-27 |
20140089592 | SYSTEM CACHE WITH SPECULATIVE READ ENGINE - Methods and apparatuses for processing speculative read requests in a system cache within a memory controller. To expedite a speculative read request, the request is sent on parallel paths through the system cache. A first path goes through a speculative read engine to determine if the speculative read request meets the conditions for accessing memory. A second path involves performing a tag lookup to determine if the data referenced by the request is already in the system cache. If the speculative read request meets the conditions for accessing memory, the request is sent to a miss queue where it is held until a confirm or cancel signal is received from the tag lookup mechanism. | 2014-03-27 |
20140089593 | RECOVERING FROM DATA ERRORS USING IMPLICIT REDUNDANCY - Some implementations disclosed herein provide techniques and arrangements for recovery of data stored in memory shared by a number of processors through information stored in a cache directory. A core of a processor may initiate access (e.g., read or write) to particular data located in a first cache that is accessible to the core. In response to detecting an error associated with accessing the particular data, a location in the processor that includes the particular data may be identified and the particular data may be copied from the location to the first cache. | 2014-03-27 |
20140089594 | DATA PROCESSING METHOD, CACHE NODE, COLLABORATION CONTROLLER, AND SYSTEM - The present invention provides a data processing method based on a cache node group for data caching, where each cache node in the group includes a local replacement-allowable data storage space for storing data accessed by a local client and a collaborative replacement-allowable data storage space for storing data content accessed by a non-local client. By using the data processing method to process data content stored in the local replacement-allowable data storage space and the collaborative replacement-allowable data storage space of the cache node, the clients can obtain data more accurately and directly during access to the cache node, thereby meeting different requirements for local optimization of the cache node. | 2014-03-27 |
20140089595 | UTILITY AND LIFETIME BASED CACHE REPLACEMENT POLICY - Embodiments of the invention describe an apparatus, system and method for utilizing a utility and lifetime based cached replacement policy as described herein. For processors having one or more processor cores and a cache memory accessible via the processor core(s), embodiments of the invention describe a cache controller to determine, for a plurality of cache blocks in the cache memory, an estimated utility and lifetime of the contents of each cache block, the utility of a cache block to indicate a likelihood of use its contents, the lifetime of a cache block to indicate a duration of use of its contents. Upon receiving a cache access request resulting in a cache miss, said cache controller may select one of the cache blocks to be replaced based, at least in part, on one of the estimated utility or estimated lifetime of the cache block. | 2014-03-27 |
20140089596 | Read-Copy Update Implementation For Non-Cache-Coherent Systems - A technique for implementing read-copy update in a shared-memory computing system having two or more processors operatively coupled to a shared memory and to associated incoherent caches that cache copies of data stored in the memory. According to example embodiments disclosed herein, cacheline information for data that has been rendered obsolete due to a data update being performed by one of the processors is recorded. The recorded cacheline information is communicated to one or more of the other processors. The one or more other processors use the communicated cacheline information to flush the obsolete data from all incoherent caches that may be caching such data. | 2014-03-27 |
20140089597 | Caching Based on Spatial Distribution of Accesses to Data Storage Devices - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for quantifying a spatial distribution of accesses to storage systems and for determining spatial locality of references to storage addresses in the storage systems, are described. In one aspect, a method includes determining a measure of spatial distribution of accesses to a data storage system based on multiple distinct groups of accesses to the data storage system, and adjusting a caching policy used for the data storage system based on the determined measure of spatial distribution. | 2014-03-27 |
20140089598 | METHODS AND APPARATUS FOR MANAGING PAGE CROSSING INSTRUCTIONS WITH DIFFERENT CACHEABILITY - An instruction in an instruction cache line having a first portion that is cacheable, a second portion that is from a page that is non-cacheable, and crosses a cache line is prevented from executing from the instruction cache. An attribute associated with the non-cacheable second portion is tracked separately from the attributes of the rest of the instructions in the cache line. If the page crossing instruction is reached for execution, the page crossing instruction and instructions following are flushed and a non-cacheable request is made to memory for at least the second portion. Once the second portion is received, the whole page crossing instruction is reconstructed from the first portion saved in the previous fetch group. The page crossing instruction or portion thereof is returned with the proper attribute for a non-cached fetched instruction and the reconstructed instruction can be executed without being cached. | 2014-03-27 |
20140089599 | PROCESSOR AND CONTROL METHOD OF PROCESSOR - A processor includes a cache write queue configured to store write requests, based on store instructions directed to a cache memory and issued by an instruction issuing unit, into entries provided with a stream_wait flag, and to output any write request whose stream_wait flag is not set to a pipeline operating unit that performs pipeline operations on the cache memory. When a stream flag attached to a store instruction is set, the cache write queue determines that a succeeding store instruction will be directed to the same data area as that accessed by the store instruction, sets the stream_wait flag when storing the write request into its entry, merges the write requests based on the store instructions directed to the same data area into a single write request, and holds the merged write request. | 2014-03-27 |
20140089600 | SYSTEM CACHE WITH DATA PENDING STATE - Methods and apparatuses for utilizing a data pending state for cache misses in a system cache. To reduce the size of a miss queue that is searched by subsequent misses, a cache line storage location is allocated in the system cache for a miss and the state of the cache line storage location is set to data pending. A subsequent request that hits to the cache line storage location will detect the data pending state and as a result, the subsequent request will be sent to a replay buffer. When the fill for the original miss comes back from external memory, the state of the cache line storage location is updated to a clean state. Then, the request stored in the replay buffer is reactivated and allowed to complete its access to the cache line storage location. | 2014-03-27 |
20140089601 | MANAGING A REGION CACHE - A method for managing a cache region including receiving a new region to be stored within the cache, the cache including multiple regions defined by one or more ranges having a starting index and an ending index, and storing the new region in the cache in accordance with a cache invariant, the cache invariant ensuring that regions in the cache are not overlapping and that the regions are stored in a specified order. | 2014-03-27 |
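A Python sketch of maintaining that invariant on insert; the coalescing of adjacent (not just overlapping) ranges is an assumption of this sketch, as the abstract only requires non-overlap and ordering:

```python
import bisect

class RegionCache:
    """Regions are (start, end) index ranges kept sorted and
    non-overlapping, per the cache invariant."""

    def __init__(self):
        self.regions = []  # sorted list of (start, end), end inclusive

    def insert(self, start: int, end: int):
        kept = []
        for s, e in self.regions:
            if e < start - 1 or s > end + 1:
                kept.append((s, e))                      # disjoint: keep
            else:
                start, end = min(start, s), max(end, e)  # absorb overlap
        bisect.insort(kept, (start, end))                # keep sorted order
        self.regions = kept
```

Absorbing every overlapped region into the new one before re-inserting it is what guarantees both halves of the invariant hold after each store.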
20140089602 | SYSTEM CACHE WITH PARTIAL WRITE VALID STATES - Methods and apparatuses for processing partial write requests in a system cache within a memory controller. When a write request that updates a portion of a cache line misses in the system cache, the write request writes the data to the system cache without first reading the corresponding cache line from memory. The system cache includes error correction code bits which are redefined as word mask bits when a cache line is in a partial dirty state. When a read request hits on a partial dirty cache line, the partial data is written to memory using a word mask. Then, the corresponding full cache line is retrieved from memory and stored in the system cache. | 2014-03-27 |
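The flush-then-fill sequence for a read hit on a partial-dirty line can be sketched as follows; the 8-word line geometry and function name are illustrative assumptions, with a dictionary standing in for external memory:

```python
WORDS_PER_LINE = 8  # assumed line geometry for illustration

def flush_and_fill(line_words, word_mask, memory, line_addr):
    """Read hit on a partial-dirty line: write back only the words whose
    mask bit is set (the ECC bits repurposed as a word mask), then fetch
    the merged full line from memory."""
    for i in range(WORDS_PER_LINE):
        if word_mask & (1 << i):              # word i holds valid dirty data
            memory[line_addr + i] = line_words[i]
    return [memory[line_addr + i] for i in range(WORDS_PER_LINE)]
```

The masked write-back merges the partial data with whatever memory already holds, so the subsequent full-line fetch returns a coherent line without ever having read memory on the original write miss.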
20140089603 | Techniques for Managing Power and Performance of Multi-Socket Processors - Examples are disclosed for managing power and performance of multi-socket processors. In some examples, a utilization rate of a first processor circuitry in a first processor socket may be determined. An active memory ratio of a cache for the first processor circuitry may be compared to a threshold ratio or a data traffic rate between the first processor circuitry and a second processor circuitry in a second processor socket may be compared to a threshold rate. According to some examples, a first power state of the first processor circuitry may be changed based on the determined utilization rate. The first power state may also be changed based on the comparison of the active memory ratio to the threshold ratio or the comparison of the data traffic rate to the threshold rate. | 2014-03-27 |
20140089604 | BIPOLAR COLLAPSIBLE FIFO - A system and method for efficient dynamic utilization of shared resources. A computing system includes a shared buffer accessed by two requestors generating access requests. Any entry within the shared buffer may be allocated for use by a first requestor or a second requestor. The storage buffer stores received indications of access requests from the first requestor beginning at a first end of the storage buffer. The storage buffer stores received indications of access requests from the second requestor beginning at a second end of the storage buffer. The storage buffer maintains an oldest stored indication of an access request for the first requestor at the first end and an oldest stored indication of an access request for the second requestor at the second end. The shared buffer deallocates in-order of age from oldest to youngest allocated entries corresponding to a given requestor of the first requestor and the second requestor. | 2014-03-27 |
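A Python sketch of the two-ended shared buffer; the list-shifting used to model the "collapse" on deallocation is an assumption of this sketch (hardware would collapse entries with muxes, not copies):

```python
class BipolarFIFO:
    """A shared buffer filled from both ends: requestor A allocates from
    index 0 upward and requestor B from the top index downward, so any
    entry can serve either requestor. Deallocating an oldest entry
    collapses the survivors toward that requestor's end."""

    def __init__(self, size: int):
        self.buf = [None] * size
        self.a_tail = 0            # next free slot for requestor A
        self.b_tail = size - 1     # next free slot for requestor B

    def push(self, requestor: str, item) -> bool:
        if self.a_tail > self.b_tail:
            return False           # full: the two ends have met
        if requestor == "A":
            self.buf[self.a_tail] = item
            self.a_tail += 1
        else:
            self.buf[self.b_tail] = item
            self.b_tail -= 1
        return True

    def pop_oldest(self, requestor: str):
        """Oldest A entry sits at index 0; oldest B entry at the top."""
        if requestor == "A":
            item = self.buf[0]
            self.buf[:self.a_tail] = self.buf[1:self.a_tail] + [None]
            self.a_tail -= 1
        else:
            n = len(self.buf)
            item = self.buf[n - 1]
            self.buf[self.b_tail + 1:] = [None] + self.buf[self.b_tail + 1:n - 1]
            self.b_tail += 1
        return item
```

The dynamic sharing shows up in `push`: neither requestor has a fixed quota, so one side may consume nearly the whole buffer when the other is idle.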
20140089605 | DATA STORAGE DEVICE - A data storage device may include an interface that is arranged and configured to interface with a host, a command bus, multiple memory devices that are operably coupled to the command bus and a controller that is operably coupled to the interface and to the command bus. The controller may be arranged and configured to receive a read metadata command for a specified one of the memory devices from the host using the interface, read metadata from the specified memory device and communicate the metadata to the host using the interface. | 2014-03-27 |
20140089606 | Reader-Writer Synchronization With High-Performance Readers And Low-Latency Writers - Data writers desiring to update data without unduly impacting concurrent readers perform a synchronization operation with respect to plural processors or execution threads. The synchronization operation is parallelized using a hierarchical tree having a root node, one or more levels of internal nodes and as many leaf nodes as there are processors or threads. The tree is traversed from the root node to a lowest level of the internal nodes and the following node processing is performed for each node: (1) check the node's children, (2) if the children are leaf nodes, perform the synchronization operation relative to each leaf node's associated processor or thread, and (3) if the children are internal nodes, fan out and repeat the node processing with each internal node representing a new root node. The foregoing node processing is continued until all processors or threads associated with the leaf nodes have performed the synchronization operation. | 2014-03-27 |
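The node-processing rule above maps directly onto a recursive traversal; in this Python sketch the dictionary tree shape and the sequential recursion are assumptions (a real implementation would fan out the internal nodes in parallel):

```python
def tree_sync(node: dict, sync_op) -> None:
    """Traverse from the root: if a child is a leaf, perform the
    synchronization operation for its processor or thread; if it is an
    internal node, fan out and repeat with it as a new root."""
    for child in node["children"]:
        if "cpu" in child:        # leaf node: one processor/thread
            sync_op(child["cpu"])
        else:                     # internal node: recurse as new root
            tree_sync(child, sync_op)
```

Traversal ends exactly when every leaf's processor or thread has performed the synchronization operation, which is the termination condition the abstract states.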
20140089607 | INPUT/OUTPUT TRAFFIC BACKPRESSURE PREDICTION - According to one aspect of the present disclosure, a method and technique for input/output traffic backpressure prediction is disclosed. The method includes: performing a plurality of memory transactions; determining, for each memory transaction, a traffic value corresponding to a time for performing the respective memory transactions; responsive to determining the traffic value for a respective memory transaction, determining a median value based on the determined traffic values; determining whether successive median values are incrementing; and responsive to a quantity of successively incrementing median values exceeding a threshold, indicating a prediction of a backpressure condition. | 2014-03-27 |
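The median-trend test can be sketched in Python; keeping every traffic value and recomputing the median from scratch is a simplification for illustration, and the reset-on-non-increment rule is an assumption where the abstract is silent:

```python
import statistics

class BackpressurePredictor:
    """Record a traffic value per memory transaction, recompute the
    running median, and predict backpressure once the number of
    successively incrementing medians exceeds a threshold."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.traffic = []
        self.last_median = None
        self.incrementing = 0

    def record(self, traffic_value: float) -> bool:
        self.traffic.append(traffic_value)
        median = statistics.median(self.traffic)
        if self.last_median is None:
            pass                          # first sample: nothing to compare
        elif median > self.last_median:
            self.incrementing += 1        # medians still climbing
        else:
            self.incrementing = 0         # streak broken
        self.last_median = median
        return self.incrementing > self.threshold  # backpressure predicted
```

Using the median rather than individual transaction times filters out one-off slow transactions, so only a sustained climb signals congestion.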
20140089608 | POWER SAVINGS VIA DYNAMIC PAGE TYPE SELECTION - An operating system monitors a performance metric of a direct memory access (DMA) engine on an I/O adapter to update a translation table used during DMA operations. The translation table is used during a DMA operation to map a virtual address provided by the I/O adapter to a physical address of a data page in the memory modules. If the DMA engine is being underutilized, the operating system updates the translation table such that a virtual address maps to a physical address corresponding to a memory location in a more energy-efficient memory module. However, if the DMA engine is over-utilized, the operating system may update the translation table such that the data used in the DMA engine is stored in memory modules that provide quicker access times, e.g., the operating system may map virtual addresses to physical addresses in DRAM rather than phase change memory. | 2014-03-27 |
20140089609 | INTERPOSER HAVING EMBEDDED MEMORY CONTROLLER CIRCUITRY - A system is provided that includes an interposer having memory controller circuitry embedded therein. The interposer includes conductive vias that are embedded within and that extend through the interposer. The memory controller circuitry can be coupled to some of the conductive vias. In some implementations, other ones of the conductive vias are configured to be coupled to a processor and a memory module that can be mounted along a surface of the interposer. Conductive links are disposed on a surface of the interposer to couple the processor and the memory module to the memory controller circuitry. | 2014-03-27 |
20140089610 | Dynamically Improving Performance of a Host Memory Controller and a Memory Device - Methods, apparatuses, systems, and computer-readable media for dynamically improving performance of a host memory controller and a hosted memory device are presented. According to one or more aspects, a memory controller may establish a data connection with a memory device. The memory controller may perform a first write operation of a plurality of write operations to the memory device using a first block size. Subsequently, the memory controller may perform a second write operation of the plurality of write operations to the memory device using a second block size different from the first block size. The memory controller then may determine an optimal value for a block size parameter based at least in part on the plurality of write operations. Thereafter, the memory controller may use the optimal value for the block size parameter in performing one or more regular tasks involving the memory device. | 2014-03-27 |
20140089611 | MEMORY MANAGEMENT CONTROL SYSTEM, MEMORY MANAGEMENT CONTROL METHOD, AND STORAGE MEDIUM STORING MEMORY MANAGEMENT CONTROL PROGRAM - Disclosed is a memory management control system, and the like, that can reduce degradation of processing performance. | 2014-03-27 |
20140089612 | ELECTRONIC COUNTER IN NON-VOLATILE LIMITED ENDURANCE MEMORY - An electronic counter comprising | 2014-03-27 |
20140089613 | MANAGEMENT OF DATA ELEMENTS OF SUBGROUPS - Each of a plurality of subgroups has a least recently used (LRU) list of data elements associated with count variables. Each LRU list has a top entry to store a most recently used data element and a bottom entry to store a least recently used data element. If a data element is accessed, the value of its count variable is increased and the accessed data element is moved to the top entry of the LRU list of the subgroup associated with the data element. If the value of the count variable of the accessed data element at the top entry is greater than the value of the count variable of the data element at the bottom entry of an LRU list of a subgroup with a higher priority, the data element at the bottom entry is swapped with the accessed data element at the top entry. | 2014-03-27 |
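The subgroup-LRU promotion scheme in this abstract can be sketched as follows. The class and the placement of demoted elements at the bottom of the lower list are assumptions; the patent specifies only the count comparison and the swap.

```python
from collections import OrderedDict

class SubgroupLRU:
    def __init__(self, num_subgroups):
        # subgroups[0] has lowest priority; each OrderedDict maps a data
        # element to its count variable, most recently used entry first.
        self.subgroups = [OrderedDict() for _ in range(num_subgroups)]

    def insert(self, subgroup, element):
        self.subgroups[subgroup][element] = 0
        self.subgroups[subgroup].move_to_end(element, last=False)

    def access(self, subgroup, element):
        group = self.subgroups[subgroup]
        group[element] += 1                      # bump the count variable
        group.move_to_end(element, last=False)   # move to the top entry
        # Compare with the bottom entry of the next-higher-priority list.
        if subgroup + 1 < len(self.subgroups) and self.subgroups[subgroup + 1]:
            higher = self.subgroups[subgroup + 1]
            bottom, bottom_count = next(reversed(higher.items()))
            if group[element] > bottom_count:
                # Swap: promote the hot element, demote the cold one
                # (demoted element becomes this list's LRU entry).
                count = group.pop(element)
                del higher[bottom]
                higher[element] = count
                higher.move_to_end(element, last=False)
                group[bottom] = bottom_count
```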
20140089614 | COMPUTER SYSTEM AND DATA MANAGEMENT METHOD - A first storage system copies data of a virtual area of a first virtual volume to a virtual area of a second virtual volume of a second storage system, monitors accesses with respect to multiple virtual areas of the first virtual volume, updates access information related to the accesses of the multiple virtual areas, and, based on the access information, reallocates data inside an actual area of a first pool allocated to the virtual area of the first virtual volume. The first storage system sends the access information to the second storage system. The second storage system receives the access information, and, based on the access information, reallocates data inside the actual area allocated to a virtual area of the second virtual volume. | 2014-03-27 |
20140089615 | COMPUTER AND COMPUTER CONTROL METHOD - When a shutdown is detected, an MBR and a backup MBR are read. When the data of the MBR is not identical to the data of the backup MBR, the MBR is copied to the backup MBR. Likewise, when the backup MBR cannot normally be read or is improper, whether at shutdown or while the power of the computer is turned on, the MBR is copied to the backup MBR. | 2014-03-27 |
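The backup-MBR maintenance rule above reduces to one check. The reader/writer callables are assumed interfaces for illustration, not from the patent.

```python
def sync_backup(read_mbr, read_backup, write_backup):
    """Refresh the backup MBR whenever it is unreadable, improper,
    or not identical to the MBR; return True if it was refreshed."""
    mbr = read_mbr()
    try:
        backup = read_backup()
    except IOError:
        backup = None                  # backup cannot normally be read
    if backup != mbr:                  # missing, improper, or stale
        write_backup(mbr)
        return True
    return False
```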
20140089616 | Enabling Virtualization Of A Processor Resource - In one embodiment, a processor includes an access logic to determine whether an access request from a virtual machine is to a device access page associated with a device of the processor and if so, to re-map the access request to a virtual device page in a system memory associated with the VM, based at least in part on information stored in a control register of the processor. Other embodiments are described and claimed. | 2014-03-27 |
20140089617 | Trust Zone Support in System on a Chip Having Security Enclave Processor - An SOC implements a security enclave processor (SEP). The SEP may include a processor and one or more security peripherals. The SEP may be isolated from the rest of the SOC (e.g. one or more central processing units (CPUs) in the SOC, or application processors (APs) in the SOC). Access to the SEP may be strictly controlled by hardware. For example, a mechanism in which the CPUs/APs can only access a mailbox location in the SEP is described. The CPU/AP may write a message to the mailbox, which the SEP may read and respond to. The SEP may include one or more of the following in some embodiments: secure key management using wrapping keys, SEP control of boot and/or power management, and separate trust zones in memory. | 2014-03-27 |
20140089618 | METHOD AND SYSTEM TO PROVIDE STORAGE UTILIZING A DAEMON MODEL - A method and system for providing storage using a daemon model. An example system comprises a parent daemon trigger configured to launch a parent storage daemon in response to a storage command from a client, a parent daemon module to perform storage access pre-processing operations to generate initialization data, a storage command detector, a child process trigger module to launch a child process in response to a subsequent storage command, and a child processing module to process subsequent storage commands using the child process. | 2014-03-27 |
20140089619 | OBJECT REPLICATION FRAMEWORK FOR A DISTRIBUTED COMPUTING ENVIRONMENT - A device may receive information that identifies a data item and a data item operation. The device may store a first sequence identifier, a data item reference that references the data item, and an operation reference that references the operation. The first sequence identifier may reference the data item and operation references, and may indicate an order in which the first sequence identifier is stored. The device may store the data item in a memory location, may store an identification of the memory location, may remove a reference to the data item by a previous sequence identifier, and/or may add the data item, may modify the data item, or may delete the data item depending on whether the operation is an add operation, a modify operation, or a delete operation. The device may transmit, to a slave device, the first sequence identifier, the data item reference, and the operation reference. | 2014-03-27 |
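The sequence-identified operation log in this abstract can be sketched as below. Field names and the dict-backed store are assumptions; the point is that entries carry a sequence identifier plus references to a data item and an operation, and are applied in sequence order.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    sequence_id: int      # indicates the order in which entries are stored
    item_key: str         # data item reference
    operation: str        # "add" | "modify" | "delete"
    value: object = None

def apply_log(store, entries):
    """Apply log entries to a dict-backed store in sequence order."""
    for entry in sorted(entries, key=lambda e: e.sequence_id):
        if entry.operation in ("add", "modify"):
            store[entry.item_key] = entry.value
        elif entry.operation == "delete":
            store.pop(entry.item_key, None)
    return store
```

A slave device receiving the same entries and applying them the same way converges on the same store contents.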
20140089620 | SYSTEM AND METHOD FOR CONTROLLING MEMORY COMMAND DELAY - A system includes a processor in communication with a memory controller, which in turn communicates with a plurality of memory devices; one of the memory devices is interposed between the memory controller and the remaining memory devices. By programming a command delay in the memory controller, the execution of the command signal is coordinated across all memory devices. The processor provides control signals to the memory controller which, in response, decodes the control signals and determines the mode of operation of one or more of the memory devices. The processor is also in communication with storage media and stores data in or retrieves data from the storage media. | 2014-03-27 |
20140089621 | INPUT/OUTPUT TRAFFIC BACKPRESSURE PREDICTION - According to one aspect of the present disclosure a system and technique for input/output traffic backpressure prediction is disclosed. The system includes a processor unit and logic executable by the processor unit to: determine, for each of a plurality of memory transactions, a traffic value corresponding to a time for performing the respective memory transactions; responsive to determining the traffic value for a respective memory transaction, determine a median value based on the determined traffic values; determine whether successive median values are incrementing; and responsive to a quantity of successively incrementing median values exceeding a threshold, indicate a prediction of a backpressure condition. | 2014-03-27 |
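The prediction logic in this abstract maps naturally to a small state machine: track the running median of per-transaction traffic values and flag backpressure once the median has risen a threshold number of times in a row. The class and its bookkeeping are an illustrative sketch, not the patented implementation.

```python
import statistics

class BackpressurePredictor:
    def __init__(self, threshold):
        self.threshold = threshold
        self.traffic_values = []
        self.last_median = None
        self.increments = 0     # count of successively incrementing medians

    def record(self, traffic_value):
        """Record one memory transaction's traffic value; return True
        when a backpressure condition is predicted."""
        self.traffic_values.append(traffic_value)
        median = statistics.median(self.traffic_values)
        if self.last_median is not None and median > self.last_median:
            self.increments += 1     # successive medians are incrementing
        else:
            self.increments = 0      # streak broken
        self.last_median = median
        return self.increments > self.threshold
```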
20140089622 | MEMORY LOCATION DETERMINING DEVICE, MEMORY LOCATION DETERMINING METHOD, DATA STRUCTURE, MEMORY, ACCESS DEVICE, AND MEMORY ACCESS METHOD - A memory location determining device determines memory locations for storing M pieces of compressed data each of which is compressed from one of M pieces of N-bit data. For each piece of compressed data, the memory location determining device performs a first arithmetic operation on an address value of a corresponding piece of N-bit data, and determines to store X bits of the piece of compressed data and a flag indicating whether or not the piece of compressed data exceeds X bits at a location indicated by the result value of the first arithmetic operation. When the piece of compressed data exceeds X bits, the memory location determining device further performs a second arithmetic operation on the address value of the corresponding piece of N-bit data and determines to store one or more bits of the piece of compressed data other than the X bits. | 2014-03-27 |
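The two-step placement scheme in this abstract can be sketched as follows. The two "arithmetic operations" are modeled as simple hash-like functions, and `X`, the function forms, and the dict-based tables are all assumptions for illustration.

```python
X = 8  # bits stored at the primary location (assumed)

def first_op(address, table_size):
    return address % table_size            # primary slot index (assumed op)

def second_op(address, table_size):
    return (address * 7 + 3) % table_size  # overflow slot index (assumed op)

def place(items, table_size):
    """items: dict mapping an N-bit data address -> compressed bit string.

    Each primary slot stores X bits of the compressed data plus a flag
    saying whether the data exceeds X bits; any excess bits go to a
    second location computed by the second arithmetic operation.
    """
    primary = {}   # slot -> (first X bits, overflow flag)
    overflow = {}  # slot -> remaining bits
    for address, bits in items.items():
        slot = first_op(address, table_size)
        flag = len(bits) > X
        primary[slot] = (bits[:X], flag)
        if flag:
            overflow[second_op(address, table_size)] = bits[X:]
    return primary, overflow
```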
20140089623 | COLUMN ADDRESS DECODING - Methods, memories and systems to access a memory may include generating an address during a first time period, decoding the address during the first time period, and selecting one or more cells of a buffer coupled to a memory array based, at least in part, on the decoded address, during a second time period. | 2014-03-27 |
20140089624 | COOPERATION OF HOARDING MEMORY ALLOCATORS IN A MULTI-PROCESS SYSTEM - A second memory allocator receives a request to allocate memory from a second process of the second memory allocator executing on a computer, and determines that memory for allocation to the second process is not available from a memory hoard of the second memory allocator. The second memory allocator determines that memory for allocation to the second process is not available from an operating system of the computer, and transmits the request to release memory to a first memory allocator. The first memory allocator of a first process executing on the computer receives the request from the second memory allocator executing on the computer to release memory. Responsive to the request from the second memory allocator to release memory, the first memory allocator releases hoarded memory previously hoarded for allocation to the first process. | 2014-03-27 |
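The cooperation protocol in this abstract can be sketched as a toy model: when an allocator's own hoard and the operating system are both exhausted, it asks a peer allocator to release its hoard back to the OS and retries. All classes and the release policy below are illustrative assumptions.

```python
class FakeOS:
    """Stand-in for the operating system's free-memory pool."""
    def __init__(self, free_blocks):
        self.free_blocks = free_blocks
    def allocate(self, n):
        if self.free_blocks >= n:
            self.free_blocks -= n
            return True
        return False
    def release(self, n):
        self.free_blocks += n

class HoardingAllocator:
    def __init__(self, backing_os, peer=None):
        self.backing_os, self.peer, self.hoard = backing_os, peer, 0

    def allocate(self, n):
        if self.hoard >= n:                    # 1. serve from own hoard
            self.hoard -= n
            return True
        if self.backing_os.allocate(n):        # 2. fall back to the OS
            return True
        if self.peer is not None:              # 3. ask the peer to release
            self.peer.release_hoard()
            return self.backing_os.allocate(n)
        return False

    def release_hoard(self):
        self.backing_os.release(self.hoard)    # return hoarded memory
        self.hoard = 0
```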