14th week of 2015 patent application highlights part 59
Patent application number - Title - Published
20150095513PRIORITY BASED ANYCAST ROUTING - A technique for selecting a network node from a plurality of nodes employing anycast addressing based on a priority. The plurality of nodes is configured with an anycast address. At each node, the anycast address is associated with a unique priority value that represents a priority associated with the node. Traffic destined for the anycast address is forwarded to the node whose priority value indicates the highest priority. If the node becomes unavailable, traffic destined for the anycast address is forwarded to a node whose priority value indicates the next highest priority, and so on.2015-04-02
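The selection rule in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not code from the application: the node list, field names, and the convention that a lower number means higher priority are all assumptions made here.

```python
# Sketch of priority-based anycast selection: among the available nodes
# sharing an anycast address, traffic is forwarded to the node whose
# priority value indicates the highest priority (here, lower number =
# higher priority); if that node is down, the next one is chosen.
def select_anycast_node(nodes):
    """nodes: list of dicts with 'id', 'priority', 'available' keys."""
    candidates = [n for n in nodes if n["available"]]
    if not candidates:
        return None  # no node serving the anycast address is reachable
    return min(candidates, key=lambda n: n["priority"])["id"]
```

When the highest-priority node becomes unavailable, re-running the selection naturally falls through to the next-highest priority, which is the failover behavior the abstract describes.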
20150095514Content Centric M2M System - A method for routing data at a first node of a network including a second node. The method includes: providing the first node with a database storing an association between: a URL component identifying content at the first node, and a unique identifier associated with the second node, the unique identifier being specified by the second node; registering content locally stored in the second node by storing in the database an association between a URL component identifying the locally stored content and the unique identifier of the second node; and, upon reception from a requesting entity, by the first node, of a content request having a content identifier, the method includes: checking in the database whether the content identifier includes the URL component, and forwarding the content request to the second node if the content identifier includes the URL component associated with the unique identifier of the second node.2015-04-02
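The registration-and-forwarding logic above amounts to a lookup table from URL components to node identifiers. The following is a hypothetical sketch under assumptions made here: the class name, the dict-based table, and the substring test standing in for "content identifier includes the URL component" are all invented for illustration.

```python
# Sketch of content-centric forwarding: the first node keeps a database
# mapping URL components to the unique identifier of the node that
# registered the content; requests are forwarded by matching components.
class ContentRouter:
    def __init__(self):
        self.table = {}  # URL component -> registering node's identifier

    def register(self, url_component, node_id):
        # Second node registers its locally stored content.
        self.table[url_component] = node_id

    def route(self, content_identifier):
        # Forward to the node whose registered URL component appears in
        # the requested content identifier; None means no match found.
        for component, node_id in self.table.items():
            if component in content_identifier:
                return node_id
        return None
```

A request for `/sensors/temp/latest` would be forwarded to whichever node registered the `/sensors/temp` component.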
20150095515DETERMINATION OF A SUITABLE TARGET FOR AN INITIATOR BY A CONTROL PLANE PROCESSOR - A first computational device receives a response generated by a second computational device for a third computational device. A target that is suitable for use by the third computational device is determined. The response is transmitted with an address of the target to the third computational device.2015-04-02
20150095516CONTENT NODE NETWORK ADDRESS SELECTION FOR CONTENT DELIVERY - Systems, methods, apparatuses, and software that select network addresses of a content node of a content delivery network are provided herein. In one example, a method of operating a control node to perform network address selection that selects between different communication service providers according to network characteristics is presented. The control node receives a domain name lookup request from an end user device to reach a content node. The control node processes network characteristics and the domain name lookup request to select a network address that corresponds to one of the communication service providers. The end user device can use the selected network address to reach the content node over the selected communication service provider.2015-04-02
20150095517METHOD AND APPARATUS FOR PROVIDING RECOMMENDATIONS TO A USER OF A CLOUD COMPUTING SERVICE - A method and apparatus is disclosed for transferring digital content from a computing cloud to a computing device and generating recommendations for the user of the computing device.2015-04-02
20150095518I/O DEVICE SHARING SYSTEM AND I/O DEVICE SHARING METHOD - An I/O device sharing system characterized by comprising: an I/O device (2015-04-02
20150095519DEVICE PROGRAMMING SYSTEM WITH WHOLE CHIP READ AND METHOD OF OPERATION THEREOF - A system and method of operation of a device programming system includes: a socket adapter having a source socket and a destination socket for reading a configuration information from a master device; a partition table calculated from the master device; and a master data file formed from the partition table and the configuration information, the master data file for configuring a programmable device.2015-04-02
20150095520SEMICONDUCTOR MEMORY APPARATUS AND DATA INPUT AND OUTPUT METHOD THEREOF - A semiconductor memory apparatus includes an input data bus inversion unit, a data input line, a termination unit, a data recovery unit and a memory bank. The input data bus inversion unit determines whether or not to invert a plurality of input data based on an operation mode signal and the plurality of input data and generates a plurality of conversion data. The data input line transmits the plurality of conversion data. The termination unit terminates the data input line in response to the operation mode signal. The data recovery unit receives the plurality of conversion data and generates a plurality of storage data. The memory bank is configured to store the plurality of storage data.2015-04-02
20150095521Methods and Systems for Determining Memory Usage Ratings for a Process Configured to Run on a Device - Methods and systems for determining memory usage ratings for system processes and providing for display are described. An example method may include determining, by a processor, a memory usage value for a process configured to run on a computing device over a time period, and the memory usage value is indicative of an amount of memory of the computing device that the process uses while running. The method may also include determining a memory usage rating for the process based on the memory usage value and a run time for the process. The memory usage rating for the process indicates an amount of memory the process uses over the time period and the run time indicates how long the process runs during the time period. The method may also include providing for display, by the processor, a representation of the memory usage rating of the process over the time period.2015-04-02
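The abstract above combines a memory usage value with a run time to produce a rating. One plausible formulation, invented here since the abstract does not give the formula, weights the usage by the fraction of the observation period the process actually ran; the function name and units are assumptions.

```python
# Sketch of a memory usage rating: weight the measured usage by the
# fraction of the time period during which the process was running.
# Neither the exact formula nor the parameter names come from the
# application; this is only one way such a rating could be computed.
def memory_usage_rating(memory_usage_mb, run_time_s, period_s):
    if period_s <= 0:
        raise ValueError("observation period must be positive")
    return memory_usage_mb * (run_time_s / period_s)
```

Under this formulation, a process holding 100 MB but running only half the period rates the same as one holding 50 MB continuously.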
20150095522SEMICONDUCTOR MEMORY - A semiconductor memory in accordance with an embodiment includes: a control unit configured to generate a plurality of second control signals in response to a page size signal and a plurality of first control signals; a plurality of input/output switches configured to be coupled to each of a plurality of unit memory blocks and activated in response to the plurality of second control signals; and a plurality of page change switches configured to couple data lines of the plurality of unit memory blocks in response to the page size signal.2015-04-02
20150095523INFORMATION PROCESSING APPARATUS, DATA TRANSFER APPARATUS, AND DATA TRANSFER METHOD - A data transfer apparatus includes a reception unit that receives and stores data from a first apparatus therein, and a transmission unit that stores data transferred from the reception unit and transmits the data to a second apparatus. The transmission unit includes an information holding unit that holds data information relating to the data transferred thereto for each transfer path, a common holding unit commonly used by the plurality of transfer paths, and a first controller that performs, based on an inputting situation of the data information for each transfer path, control for inputting the data information to the information holding unit after passing the common holding unit. The reception unit includes a suppression unit that suppresses new data transfer to the transmission unit in response to an information amount of the information held in the common holding unit.2015-04-02
20150095524POLLING DETERMINATION - Techniques for polling an input/output (I/O) device are described herein. The techniques include polling the I/O device for data, and receiving the data from the I/O device at the host device as a result of the polling. The techniques include determining whether the data received is the same as data received at a previous polling of the I/O device. Upon determining the data received is the same, the techniques include decreasing the polling rate. Upon determining the data is not the same, the techniques include increasing the polling rate.2015-04-02
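The rate adaptation described above can be sketched as a small state machine. This is a hypothetical illustration: the halving/doubling step, the rate bounds, and all names are assumptions made here, not details from the application.

```python
# Sketch of adaptive polling: slow down while the device keeps returning
# identical data, speed back up when the data changes. The multiplicative
# step and the min/max bounds are invented for illustration.
class AdaptivePoller:
    def __init__(self, rate_hz=10.0, min_hz=1.0, max_hz=100.0):
        self.rate_hz = rate_hz
        self.min_hz = min_hz
        self.max_hz = max_hz
        self.last_data = None

    def on_poll(self, data):
        if data == self.last_data:
            # Same data as the previous poll: decrease the polling rate.
            self.rate_hz = max(self.min_hz, self.rate_hz / 2)
        else:
            # New data: increase the polling rate.
            self.rate_hz = min(self.max_hz, self.rate_hz * 2)
        self.last_data = data
        return self.rate_hz
```

A run of unchanged samples decays the rate toward the floor, so an idle device costs little, while a burst of changes quickly restores responsiveness.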
20150095525INTEGRATED CIRCUIT COMPRISING AN IO BUFFER DRIVER AND METHOD THEREFOR - An integrated circuit for bias stress condition removal comprising at least one input/output (IO) buffer driver circuit having at least one input signal is described. A primary buffer driver stage receives the at least one input signal and provides an output signal in a first time period; and a secondary buffer driver stage receives the at least one input signal and provides an output signal in a second time period. The primary buffer driver stage and the secondary buffer driver stage cooperate, and the operational modes of the primary buffer driver stage and the secondary buffer driver stage are varied to produce a varying output signal.2015-04-02
20150095526COMMUNICATION APPARATUS, COMMUNICATION SYSTEM AND ADAPTER - A communication apparatus for carrying out communications to and from an external apparatus that includes a first interconnecting unit and a first non-transparent port and effects an interconnection for communications via the first non-transparent port is provided. The communication apparatus includes a second interconnecting unit that includes a second non-transparent port communicably connected to the first non-transparent port. The second interconnecting unit effects an interconnection for communications via the second non-transparent port. The second interconnecting unit performs, when the communication apparatus carries out communications to and from the external apparatus, address translation between an address for use by the communication apparatus and an address for use by the second non-transparent port.2015-04-02
20150095527DEVICE MANAGEMENT USING VIRTUAL INTERFACES - Methods for managing data communication between a peripheral device and a host computer system are provided. A physical interface for communicating data between the peripheral device and a plurality of applications executing on the host computer system is opened and controlled by a software module. A first virtual interface and a second virtual interface of the software module are exposed to an operating system of the host computer system, and the operating system exposes the first virtual interface and the second virtual interface to the first application and the second application. The first virtual interface is used for communicating data between the peripheral device and the first application through the physical interface, and the second virtual interface is used for communicating data between the peripheral device and the second application through the physical interface.2015-04-02
20150095528METHOD AND APPARATUS FOR STORING DATA - An allocation instruction is received that includes a target data operand and a storage medium operand indicating a storage medium for storing the target data. A data dependency is identified that specifies peripheral data on which the target data depends. In response to determining that the allocation instruction will cause the target data and the peripheral data to locate to different storage mediums having different data IO rates, the execution of the allocation instruction is prevented. In another embodiment, in response to determining that the allocation instruction allocates the target data from a first storage medium to a second storage medium having a faster data IO rate, the allocation instruction is modified to also allocate the peripheral data specified in the data dependency to the second storage medium.2015-04-02
20150095529SYSTEM AND APPARATUS FOR TRANSFERRING DATA BETWEEN COMMUNICATION ELEMENTS - A method, device and machine-readable storage device for transferring data between identity modules is disclosed. Data is stored in one of a first removable storage module coupled to a donor communication device and a memory of the donor communication device, or both. A first portion of the data is provided to a server. The server provides the first portion of the data to a second removable storage module coupled to a recipient communication device responsive to a determination that a recipient communication device has a right to the data. Additional embodiments are disclosed.2015-04-02
20150095530DYNAMIC PORT NAMING IN A CHASSIS - A tool for dynamically naming network ports and switch ports in a chassis. The tool retrieves, by one or more computer processors, chassis specifications of the chassis. The tool retrieves, by one or more computer processors, identifying information for components of the chassis. The tool determines, by one or more computer processors, a plurality of network ports and a plurality of switch ports within the chassis not assigned an alternative port name. The tool constructs, by one or more computer processors, alternative port names for the plurality of network ports and the plurality of switch ports within the chassis not assigned an alternative port name.2015-04-02
20150095531LANE DIVISION MULTIPLEXING OF AN I/O LINK - A system can include a host device and a remote terminal. The host device can include a host terminal, the host terminal including a host configuration manager to allocate a data lane to an I/O protocol and a protocol multiplexer to carry out allocation of the data lane based on the allocation of the configuration manager. The remote terminal can include a remote configuration manager. The remote configuration manager is to communicate with the host configuration manager via a control bus to detect connection of an I/O device to an I/O port and to allocate the data lane to the I/O protocol.2015-04-02
20150095532CONTROLLER AREA NETWORK (CAN) DEVICE AND METHOD FOR CONTROLLING CAN TRAFFIC - Embodiments of a device and method are disclosed. In an embodiment, a CAN device is disclosed. The CAN device includes a TXD input interface, a TXD output interface, an RXD input interface, an RXD output interface, and a traffic control system connected between the TXD input and output interfaces and between the RXD input and output interfaces. The traffic control system is configured to detect the presence of CAN Flexible Data-rate (FD) traffic on the RXD input interface and if the traffic control system detects the presence of CAN FD traffic on the RXD input interface, disconnect the RXD input interface from the RXD output interface and disconnect the TXD input interface from the TXD output interface.2015-04-02
20150095533ELECTRONIC DEVICE HAVING TWO WIRELESS COMMUNICATION COMPONENTS - An electronic device may include a first wireless communication component and a second wireless communication component. The first wireless communication component is to operate (or communicate) using a first protocol, and to receive a wireless signal from another device by the first protocol. The second wireless communication component is to operate (or communicate) using a second protocol, and to begin communicating with the another device using the second protocol in response to receiving a trigger signal.2015-04-02
20150095534Serdes Interface Architecture for Multi-Processor Systems - A local device, such as a field-programmable gate array, has a local state machine and a local interface component for communicating with a remote device that implements a remote state machine. The local interface component receives a new set of incoming data from the remote device and determines whether the new set is good data or bad data. If good data, then the local interface component causes the new set of data to be transmitted internally for use by the local state machine. If bad data, then the local interface component does not forward the new set of data to the local state machine, which instead continues to use a previously received set of good data. Although the clock rate of the local and remote state machines may differ from the frame rate of the local interface component, their operations are nevertheless synchronized.2015-04-02
20150095535MULTI-CYCLE DELAY FOR COMMUNICATION BUSES - A system is disclosed that may compensate for bus timing that may vary over operating conditions of a bus. The system may include a communication bus, a first functional unit configured to transmit data via the communication bus, and a second functional unit configured to receive data via the bus. The first functional unit may transmit a first value via the communication bus to the second functional unit. The first functional unit may be further configured to assert a data valid signal responsive to a determination that a first time period has elapsed since the transmission of the first data value. The second functional unit may be configured to receive the first data value and sample the first data value dependent upon the data valid signal.2015-04-02
20150095536METHOD FOR AUTOMATICALLY SETTING ID IN UART RING COMMUNICATION - Disclosed is a method for automatically setting IDs in UART ring communication in which a master and a plurality of slaves form a ring-type network, the method including initializing the master to output a master ID (initializing step), receiving, by the plurality of slaves, the master ID, setting their own IDs by adding a reference value to the master ID, and outputting the set IDs (slave ID setting step), changing, by the plurality of slaves, their own IDs based on whether their own ID is the same as the received ID, receiving, by the master, the IDs outputted by the plurality of slaves, and changing a currently highest value of slave IDs stored in the master in response to values of received slave IDs (changing step), and finishing the ID setting or re-setting the slave IDs in response to the Current Max Slave ID (finish determining step).2015-04-02
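The ID-setting pass above can be sketched as a single traversal of the ring. This is a hypothetical model, not the application's procedure: the function name, the in-order traversal, and the default reference value of 1 are assumptions made here.

```python
# Sketch of automatic ID assignment around a UART ring: the master emits
# its ID, and each slave in turn sets its own ID by adding a reference
# value to the ID it receives, then forwards the result to the next node.
def assign_ring_ids(master_id, slave_count, reference=1):
    ids = []
    current = master_id
    for _ in range(slave_count):
        current = current + reference  # each slave adds the reference value
        ids.append(current)
    return ids  # the final entry corresponds to the current max slave ID
```

After one full pass, every slave holds a unique ID and the master can record the highest value it received back.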
20150095537CAMERA CONTROL INTERFACE SLEEP AND WAKE UP SIGNALING - A device is provided comprising a control data bus including at least a first line. A master device may be coupled to the control data bus and configured to control the control data bus. A plurality of slave devices may be coupled to the control data bus and share the first line. The master device may be configured to send a single global wake up signal on the control data bus that causes any sleeping slave devices to wake up. Alternatively, the master device may send a global wake up signal followed by a targeted sleep signal to non-targeted slave devices to implement a “targeted wake up” of specific slave devices. The master device may send the single global wake up signal by bringing the first line low for a predetermined period of time.2015-04-02
20150095538FACILITATING RESOURCE USE IN MULTICYCLE ARBITRATION FOR SINGLE CYCLE DATA TRANSFER - Techniques are disclosed to provide arbitration between input ports and output ports of a switch. For each of at least one input port of a group of input ports, a respective request is received specifying for the respective input port to be allocated a clock cycle in which to send data to a group of output ports. A grant of the request of a primary input port is issued at each clock cycle, the primary input port including a first input port of the at least one input port. Upon a determination, subsequent to a first clock cycle count elapsing, that an input arbiter has not yet accepted any grant of the request of the primary input port, a grant is issued at each clock cycle, including alternating between issuing a grant of the request of the primary input port and of an alternate input port, respectively.2015-04-02
20150095539FACILITATING RESOURCE USE IN MULTICYCLE ARBITRATION FOR SINGLE CYCLE DATA TRANSFER - Techniques are disclosed to provide arbitration between input ports and output ports of a switch. For each of at least one input port of a group of input ports, a respective request is received specifying for the respective input port to be allocated a clock cycle in which to send data to a group of output ports. A grant of the request of a primary input port is issued at each clock cycle, the primary input port including a first input port of the at least one input port. Upon a determination, subsequent to a first clock cycle count elapsing, that an input arbiter has not yet accepted any grant of the request of the primary input port, a grant is issued at each clock cycle, including alternating between issuing a grant of the request of the primary input port and of an alternate input port, respectively.2015-04-02
20150095540EXTERNAL DEVICE AND A TRANSMISSION SYSTEM AND THE METHOD OF THE HETEROGENEOUS DEVICE - An external device and a transmission system and the method of the heterogeneous device can access the USB2015-04-02
20150095541METHOD AND SYSTEM FOR ENUMERATING DIGITAL CIRCUITS IN A SYSTEM-ON-A-CHIP (SOC) - Methods and systems for enumerating digital circuits in a system-on-a-chip (SOC) are disclosed. The method includes incrementing an enumeration value received from a previous enumerable instance to uniquely identify an immediately adjacent enumerable instance of a plurality of enumerable instances in a daisy chain configuration.2015-04-02
20150095542COLLECTIVE COMMUNICATIONS APPARATUS AND METHOD FOR PARALLEL SYSTEMS - A collective communication apparatus and method for parallel computing systems. For example, one embodiment of an apparatus comprises a plurality of processor elements (PEs); collective interconnect logic to dynamically form a virtual collective interconnect (VCI) between the PEs at runtime without global communication among all of the PEs, the VCI defining a logical topology between the PEs in which each PE is directly communicatively coupled to only a subset of the remaining PEs; and execution logic to execute collective operations across the PEs, wherein one or more of the PEs receive first results from a first portion of the subset of the remaining PEs, perform a portion of the collective operations, and provide second results to a second portion of the subset of the remaining PEs.2015-04-02
20150095543DATA BUS SYSTEM AND RECORDING APPARATUS - A data bus system includes a plurality of recording apparatuses, a transmission path, and a management apparatus. The plurality of recording apparatuses are configured to record and hold data. The transmission path is connected to the plurality of recording apparatuses by wireless communication and configured to transmit the data. The management apparatus is configured to manage the plurality of recording apparatuses and the transmission path.2015-04-02
20150095544COMPLETION COMBINING TO IMPROVE EFFECTIVE LINK BANDWIDTH BY DISPOSING AT END OF TWO-END LINK A MATCHING ENGINE FOR OUTSTANDING NON-POSTED TRANSACTIONS - An apparatus and method are disclosed in which unrelated completion operations intended for a single destination (requestor) are coalesced to improve achievable data bandwidth. During transmission, the completion operations are collected and compressed into a single packet and transmitted along the link. At a receiving end of the link, the single packet is decompressed and the previously unrelated packets are returned to their initial state before receipt by the requestor. The method can be implemented in the root complex, end points, and/or switches, in the case of a PCIe implementation, but can also be applied to other protocols besides PCIe.2015-04-02
20150095545METHOD AND APPARATUS FOR CONTROLLING CACHE MEMORY - A method of controlling a cache memory includes receiving location information of one piece of data included in a data block and size information of the data block; mapping the data block onto cache memory by using the location information and the size information; and selecting at least one unit cache out of unit caches included in the cache memory based on the mapping result.2015-04-02
20150095546SYSTEM AND METHOD FOR BANK LOGICAL DATA REMAPPING - A method and system are disclosed for remapping logical addresses between memory banks of discrete or embedded multi-bank storage device. The method may include a controller of a storage device tracking a total erase count for a storage device, determining if an erase count imbalance greater than a threshold exists between banks, and then remapping logical address ranges from the highest erase count bank to the lowest erase count bank to even out wear between the banks. The system may include a controller that may maintain a bank routing table, an erase counting mechanism and execute instructions for triggering a remapping process to remap an amount of logical addresses such that an address range is reduced for a hotter bank and increased for a colder bank.2015-04-02
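The remap trigger described above can be sketched as a comparison of per-bank erase counts against a threshold. This is a hypothetical illustration with invented names and a deliberately simplified policy (moving a single logical range per trigger); the application's actual bank routing table and counting mechanism are not reproduced here.

```python
# Sketch of erase-count-based remapping: when the gap between the most-
# and least-worn banks exceeds a threshold, move one logical address
# range from the hot bank to the cold bank to even out wear.
def maybe_remap(routing, erase_counts, threshold):
    """routing: logical range -> bank name; erase_counts: bank -> count.
    Returns the remapped range, or None if wear is balanced enough."""
    hot = max(erase_counts, key=erase_counts.get)
    cold = min(erase_counts, key=erase_counts.get)
    if erase_counts[hot] - erase_counts[cold] <= threshold:
        return None  # imbalance within tolerance; no remap needed
    for lrange, bank in routing.items():
        if bank == hot:
            routing[lrange] = cold  # shrink the hot bank's address range
            return lrange
    return None
```

Each triggered remap shrinks the address range served by the hotter bank and grows the colder bank's, which is the balancing effect the abstract describes.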
20150095547MAPPING MEMORY CONTROLLER CONNECTORS TO MEMORY CONNECTORS - Provided are a device, system, and method for mapping memory controller connectors to memory connectors. A memory is programmed to transmit, for each of a plurality of the memory data connectors, a pattern on the memory data connectors that has a first value for a selected memory data connector of the memory data connectors and a different value from the first value for the memory data connectors other than the selected memory data connector. For each of the memory data connectors, a read command is issued to read the pattern on the memory data connectors. A device data connector receiving the first value in the read pattern is mapped to the selected memory data connector transmitting the first value.2015-04-02
20150095548HANDLING MEMORY-MAPPED INPUT-OUTPUT (MMIO) BASED INSTRUCTIONS USING FAST ACCESS ADDRESSES - When a guest of a virtual machine attempts to access an address that causes an exit from the guest to the hypervisor of a host, the hypervisor receives an indication of an exit by a guest to the hypervisor. The received address is associated with a memory-mapped input-output (MMIO) instruction. The hypervisor determines, based on the received indication, that the exit is associated with the memory-mapped input-output (MMIO) instruction. The hypervisor identifies the address that caused the exit as a fast access address. The hypervisor identifies one or more memory locations associated with the fast access address, where the one or more memory locations store information associated with the MMIO instruction. The hypervisor identifies the MMIO instruction based on the stored information. The hypervisor executes the MMIO instruction on behalf of the guest.2015-04-02
20150095549SYSTEMS AND METHODS FOR MANAGING DATA IN A COMPUTING ENVIRONMENT - Improved data management systems for managing and maintaining unstructured data in a computing system environment. Data content is associated with particular types of metadata to create data objects. In certain examples, the metadata is stored in various fields of the data objects, certain fields being designated as permanently read-only after their creation. Such fields can include, for instance, a unique identifier, a type of content and a classification governing copy permissions relating to the data object. Data objects, or didgets, can be grouped into logical containers referred to as chambers, which are further grouped by common control elements or attributes into domains. Chambers within a particular domain can generally freely share information therebetween, including copies of various types of didgets. A control program, or didget manager, in each domain manages the creation of didgets and subsequent operations directed thereto.2015-04-02
20150095550GENERATING RANDOM NUMBERS UTILIZING ENTROPIC NATURE OF NAND FLASH MEMORY MEDIUM - Methods and apparatus related to generating random numbers utilizing the entropic nature of NAND flash memory medium are described. In one embodiment, a data pattern is written to a portion of a non-volatile memory device and is subsequently read multiple times. Based on the read operations, at least one bit is marked for random number generation based at least partially on comparison of a number of flips by the at least one bit and a threshold value. Other embodiments are also disclosed and claimed.2015-04-02
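The bit-marking step above, where bits are selected by comparing their flip count to a threshold, can be sketched directly. This is a hypothetical model operating on bit strings; the real medium, read commands, and threshold value are not specified by the abstract, and all names here are invented.

```python
# Sketch of entropic bit selection for NAND-based random numbers: after
# writing a known pattern and reading it back several times, mark only
# the bit positions whose flip count meets a threshold, since those
# positions behave unpredictably enough to harvest for randomness.
def select_entropic_bits(reads, expected, threshold):
    """reads: list of equal-length bit strings read back from the medium;
    expected: the pattern that was written; returns noisy bit indices."""
    noisy = []
    for i, want in enumerate(expected):
        flips = sum(1 for r in reads if r[i] != want)
        if flips >= threshold:
            noisy.append(i)
    return noisy
```

Stable bits (those that always read back as written) are excluded, so only positions exhibiting genuine physical noise contribute to the generated numbers.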
20150095551VOLATILE MEMORY ARCHITECTURE IN NON-VOLATILE MEMORY DEVICES AND RELATED CONTROLLERS - In some embodiments, one register of a non-volatile memory can be used for read operations and another register of the non-volatile memory can be used for programming operations. For instance, a cache register of a NAND flash memory can be used in connection with read operations and a data register of the NAND flash memory can be used in connection with programming operations. Data registers of a plurality of non-volatile memory devices, such as NAND flash memory devices, can implement a distributed volatile cache (DVC) architecture in a managed memory device, according to some embodiments. According to certain embodiments, data can be moved and/or swapped between registers to perform certain operations in the non-volatile memory devices without losing the data stored while other operations are performed.2015-04-02
20150095552MEMORY SYSTEM FOR MIRRORING DATA - A memory system is disclosed, which may include a memory unit of a first type, susceptible to loss of data from corrupting events, and a memory unit of a second type, less susceptible to loss of data from corrupting events than the memory unit of the first type, and a mirrored memory interface (MMI). The MMI may be coupled to a memory controller, the memory unit of the first type, and the memory unit of the second type. The MMI may, in response to a memory controller write command, receive data from the memory controller and write the data to the memory unit of the first type and to the memory unit of the second type. The MMI may also, in response to a memory controller read command, read data from the memory unit of the first type and send the data to the memory controller.2015-04-02
20150095553SELECTIVE SOFTWARE-BASED DATA COMPRESSION IN A STORAGE SYSTEM BASED ON DATA HEAT - In a data storage system, in response to receipt from a processor system of a write input/output operation (IOP) including an address and data, a storage controller of the data storage system determines whether or not the address is a hot address that is more frequently accessed. In response to determining that the address is a hot address, the storage controller stores the data in the data storage system in uncompressed form. In response to determining that the address is not a hot address, the storage controller compresses the data to obtain compressed data and stores the compressed data in the data storage system.2015-04-02
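The write-path decision above, skipping compression for hot addresses, can be sketched with a simple access counter. This is a hypothetical illustration: the hotness test (a plain access count against a threshold) and the use of `zlib` as a stand-in compressor are assumptions made here, not details from the application.

```python
# Sketch of heat-based selective compression: frequently accessed ("hot")
# addresses are stored uncompressed to keep them fast; everything else is
# compressed to save space. zlib stands in for the real compressor.
import zlib

def store(address, data, access_counts, hot_threshold, storage):
    access_counts[address] = access_counts.get(address, 0) + 1
    if access_counts[address] >= hot_threshold:
        storage[address] = ("raw", data)  # hot address: skip compression
    else:
        storage[address] = ("zlib", zlib.compress(data))
    return storage[address][0]
```

An address crosses from compressed to uncompressed storage once its access count reaches the threshold, trading capacity for lower access latency exactly where it matters.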
20150095554STORAGE PROCESSOR MANAGING SOLID STATE DISK ARRAY - A method of writing to one or more solid state disks (SSDs) employed by a storage processor includes receiving a command, creating sub-commands from the command based on a granularity, and assigning the sub-commands to the SSDs independently of the command thereby causing striping across the SSDs.2015-04-02
20150095555METHOD OF THIN PROVISIONING IN A SOLID STATE DISK ARRAY - A method of thin provisioning in a storage system is disclosed. The method includes communicating to a user a capacity of a virtual storage, the virtual storage capacity being substantially larger than that of a storage pool. Further, the method includes assigning portions of the storage pool to logical unit number (LUN) logical block address (LBA)-groups only when the LUN LBA-groups are being written to and maintaining a mapping table to track the association of the LUN LBA-groups to the storage pool.2015-04-02
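The allocate-on-write behavior above is the core of thin provisioning: backing storage is bound to an LBA-group only at first write, so the advertised capacity can exceed the pool. The sketch below is hypothetical; the class name, the free-list allocator, and the exhaustion error are invented for illustration.

```python
# Sketch of thin provisioning: the mapping table binds a LUN LBA-group
# to a storage-pool group only when that group is first written, letting
# the virtual capacity be much larger than the physical pool.
class ThinLUN:
    def __init__(self, virtual_groups, pool_groups):
        self.virtual_groups = virtual_groups     # advertised capacity
        self.free_pool = list(range(pool_groups))  # unassigned pool groups
        self.mapping = {}  # LBA-group -> pool group (the mapping table)

    def write(self, lba_group):
        if not (0 <= lba_group < self.virtual_groups):
            raise IndexError("LBA-group outside advertised capacity")
        if lba_group not in self.mapping:
            if not self.free_pool:
                raise RuntimeError("storage pool exhausted")
            # First write to this LBA-group: assign a pool group now.
            self.mapping[lba_group] = self.free_pool.pop(0)
        return self.mapping[lba_group]
```

Repeat writes to an already-mapped LBA-group reuse its existing pool group; only the pool running dry, not the virtual capacity, limits how much can actually be stored.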
20150095556MEMORY SYSTEM - A memory system includes a first memory chip, a second memory chip, and a memory controller. The first memory chip and the second memory chip are connected to the memory controller via a plurality of data lines including a first data line and a second data line. The first memory chip is configured to output status information via the first data line to the memory controller. The second memory chip is configured to output status information via the second data line to the memory controller at the same time as the first memory chip.2015-04-02
20150095557SEMICONDUCTOR APPARATUS AND SYSTEM - A semiconductor apparatus and system are provided. The semiconductor apparatus includes a host core configured to drive at least one device drive and a solid state drive (SSD), a flash interface configured to interface with the host core and the SSD, and an internal bus configured to transmit signals between the host core and the flash interface, wherein the host core, the flash interface, and the internal bus are disposed on a single chip substrate, and the SSD is not disposed on the single chip substrate.2015-04-02
20150095558STORAGE AND PROGRAMMING METHOD THEREOF - A program method of a storage device which includes at least one nonvolatile memory device and a memory controller to control the at least one nonvolatile memory device, the program method comprising: performing a first normal program operation to store first user data in a memory block; detecting, at the memory controller, a first event; performing a dummy program operation to store dummy data in at least one page of the memory block in response to the detection of the first event; and performing a second normal program operation to store second user data in the memory block after the dummy program operation, dummy program operations being operations in which random data is programmed into the memory block, normal program operations being operations in which data other than random data is programmed in the memory block.2015-04-02
20150095559PERFORMANCE IMPROVEMENT OF A CAPACITY OPTIMIZED STORAGE SYSTEM INCLUDING A DETERMINER - A system for storing data comprises a performance storage unit and a performance segment storage unit. The system further comprises a determiner. The determiner determines whether a requested data is stored in the performance storage unit. The determiner determines whether the requested data is stored in the performance segment storage unit in the event that the requested data is not stored in the performance storage unit.2015-04-02
20150095560PROGRAM-DISTURB MANAGEMENT FOR PHASE CHANGE MEMORY - Subject matter disclosed herein relates to a memory device, and more particularly to read or write performance of a phase change memory.2015-04-02
20150095561PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE - For efficient track destage in secondary storage in a more effective manner, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, a preference of movement to lower speed cache level is implemented based on at least one of an amount of holes and a data heat metric. If a first bit has at least one of a lower amount of holes and a hotter data heat metric, it is moved to the lower speed cache level ahead of a second bit that has at least one of a higher amount of holes and a cooler data heat. If the first bit has a hotter data heat and greater than a predetermined number of holes, the first bit is discarded.2015-04-02
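The ordering and discard rules in this abstract can be sketched as a small sort-and-filter. The thresholds and field names are assumptions; the abstract does not give concrete values.

```python
HOT_HEAT = 8     # assumed heat threshold ("hotter")
MAX_HOLES = 4    # assumed predetermined number of holes

def order_for_demotion(tracks):
    """Sketch: tracks with fewer holes and hotter data move to the
    lower-speed cache first; hot tracks with too many holes are
    discarded instead of demoted."""
    keep, discard = [], []
    for t in tracks:
        if t["heat"] > HOT_HEAT and t["holes"] > MAX_HOLES:
            discard.append(t)      # hot but hole-ridden: drop it
        else:
            keep.append(t)
    keep.sort(key=lambda t: (t["holes"], -t["heat"]))  # fewer holes, hotter first
    return keep, discard
```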
20150095562METHOD FOR MANAGING A MEMORY APPARATUS - A memory apparatus includes at least one NV memory element, which includes a plurality of blocks. A method for managing the memory apparatus includes: receiving a first access command from a host; analyzing the first access command to obtain a first host address; linking the first host address to a first page of the physical block; receiving a second access command from the host; analyzing the second access command to obtain a second host address; linking the second host address to a second page of the physical block; recording a valid/invalid page count of the physical block corresponding to accessing pages of the physical block; and determining whether to erase a portion of the blocks according to the valid/invalid page count. A difference value of the first host address and the second host address is greater than a number of pages of the physical block.2015-04-02
20150095563MEMORY MANAGEMENT - Apparatus, systems, and methods to manage memory operations are described. In one embodiment, an electronic device comprises a processor and a memory control logic to retrieve a global sequence number from a memory device, receive a read request for data stored in a logical block address in the memory device, retrieve a media sequence number from the logical block address in the memory device, and return a null response in lieu of the data stored in the logical block address when the media sequence number is older than the global sequence number. Other embodiments are also disclosed and claimed.2015-04-02
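The sequence-number comparison that produces the null response can be shown in a few lines. The dictionary layout is an assumption made for the sketch.

```python
def read_block(device, lba):
    """Return None (a null response) when the block's media sequence
    number is older than the device's global sequence number, i.e. the
    data predates the last logical erase (sketch, assumed layout)."""
    media_seq, data = device["blocks"][lba]
    if media_seq < device["global_seq"]:
        return None          # stale: suppress the old data
    return data
```

Bumping the global sequence number thus invalidates every block written before the bump without touching the media, which is the point of the scheme.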
20150095564APPARATUS AND METHOD FOR SELECTING MEMORY OUTSIDE A MEMORY ARRAY - An apparatus includes a memory module, which includes a memory array. The memory array includes rows of memory and columns of memory. The apparatus also includes at least one row of memory not in the memory array and a register. The register includes an address space and a row/column indicator. The apparatus also includes row selection logic to select the at least one row to be activated if the address from an address bus equals the register value and if the row/column indicator indicates row.2015-04-02
20150095565READ TRAINING A MEMORY CONTROLLER - Provided are a device and computer readable storage medium for programming a memory module to initiate a training mode in which the memory module transmits continuous bit patterns on a side band lane of the bus interface; receiving the bit patterns over the bus interface; determining from the received bit patterns a transition of values in the bit pattern to determine a data eye between the determined transitions of the values; and determining a setting to control a phase interpolator to generate interpolated signals used to sample data within the determined data eye.2015-04-02
20150095566Reading Speed of Updated File by Tape Drive File System - The mechanism provides for updating a file written on a medium in a system including a tape drive connected to a host. The mechanism receives, from the host, a change data part that is changed in the file as an update target. The mechanism writes the change data part to a data end position on the medium including a non-change data part that is not changed in the file sequentially stored on the medium. The mechanism calculates seek time required for positioning of a head of the tape drive from a medium position of the non-change data to a medium position of the change data part. The mechanism copies the change data part to an external storage device when the seek time is more than or equal to a predetermined value.2015-04-02
20150095567STORAGE APPARATUS, STAGING CONTROL METHOD, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED STAGING CONTROL PROGRAM - A cache controller controls data input/output of the storage device and causes the semiconductor storage device to function as a cache memory of the storage device. A staging controller performs, when data is staged from the storage device to the cache memory, first staging amount control until a staging amount to the cache memory exceeds a first threshold after the storage apparatus starts up; performs second staging amount control until a variation per unit time of a read amount from the cache memory falls within a predetermined range after the first period; and performs third staging amount control after the second period. With this configuration, the semiconductor apparatus can be efficiently used.2015-04-02
20150095568STORAGE SYSTEM AND STORAGE DEVICE CONFIGURATION REPORTING - A storage environment includes a storage system and a reporting device. The storage system may operate in a first configuration utilizing only internal storage and may operate in a second configuration utilizing only external storage. The reporting device determines the operating configuration of the storage system and generates a report comprising at least one field identifying the storage system and at least one field identifying the operating configuration of the storage system. The storage system may also include a managed disk group that may operate in a first configuration comprising only internal storage and may operate in a second configuration comprising only external storage. The reporting device may determine the operating configuration of the managed disk group and generates a report including at least one field identifying the managed disk group and at least one field identifying the operating configuration of the managed disk group.2015-04-02
20150095569CONTINUOUS RECORDING MULTICHANNEL DISK-BASED DATA LOGGING - Continuous recording multichannel disk-based data logging apparatus. The apparatus includes a plurality of disk drives and an interface including a plurality of parallel inputs. The interface is responsive to data at the inputs to write the data on an active plurality of the disk drives, at least one disk drive being idle. The interface is responsive to a user request for data on one of the active disk drives to substitute an idle disk drive into the active plurality in place of that one, to read the requested data, and to cause that one to become idle.2015-04-02
20150095570DATA MIRRORING CONTROL APPARATUS AND METHOD - A data mirroring control apparatus includes a command distributing unit configured to transmit a first write command to a plurality of mirroring storage devices, the first write command including an instruction for data requested by a host to be written; and a memory lock setting unit configured to set a memory lock on the data requested by the host to be written among data stored in a host memory and configured to release the memory lock on the data after the data with the memory lock is written to the plurality of mirroring storage devices.2015-04-02
20150095571SYSTEMS AND METHODS FOR PERFORMING STORAGE OPERATIONS IN A COMPUTER NETWORK - Methods and systems are described for performing storage operations on electronic data in a network. In response to the initiation of a storage operation and according to a first set of selection logic, a media management component is selected to manage the storage operation. In response to the initiation of a storage operation and according to a second set of selection logic, a network storage device is selected to associate with the storage operation. The selected media management component and the selected network storage device perform the storage operation on the electronic data.2015-04-02
20150095572STORAGE CONTROL APPARATUS AND STORAGE CONTROL METHOD - An operation controller stops the operation of two or more storage devices among a plurality of storage devices constituting one or more logical storage areas. When writing of data to a stopped storage device among the storage devices constituting each logical storage area is requested, an access controller performs control to maintain data redundancy in each logical storage area by working storage devices among the storage devices constituting each logical storage area and by a spare storage device that is different from the storage devices constituting each logical storage area, by writing the data in the spare storage device instead of the storage device to which the writing is requested.2015-04-02
20150095573FILE PROCESSING METHOD AND STORAGE DEVICE - A file processing method and a storage device are disclosed. In the method, a storage device receives T files that are to be stored in the RAID, and determines a sequence number of a check block in a stripe of the RAID. The storage device repeatedly obtains a data block of the K2015-04-02
20150095574COMPUTING SYSTEM INCLUDING STORAGE SYSTEM AND METHOD OF WRITING DATA THEREOF - Provided is a method of writing data of a storage system. The method includes causing a host to issue a first writing command; causing the host, when a queue depth of the first writing command is a first value, to store the first writing command in an entry which is assigned in advance and is included in a cache; causing the host to generate a writing completion signal for the first writing command; and causing the host to issue a second writing command.2015-04-02
20150095575CACHE MIGRATION MANAGEMENT METHOD AND HOST SYSTEM APPLYING THE METHOD - Provided are a cache migration management method and a host system configured to perform the cache migration management method. The cache migration management method includes: moving, in response to a request for cache migration with respect to first data stored in a main storage device, the first data and second data related to the first data from the main storage device to a cache storage device; and adding information about the first data moved to the cache storage device and the second data moved to the cache storage device, the moving of the first data and the second data to the cache storage device including storing the first data moved to the cache storage device and the second data moved to the cache storage device at continuous physical addresses of the cache storage device in an order in which the first data and the second data are to be loaded to a host device.2015-04-02
20150095576CONSISTENT AND EFFICIENT MIRRORING OF NONVOLATILE MEMORY STATE IN VIRTUALIZED ENVIRONMENTS - Updates to nonvolatile memory pages are mirrored so that certain features of a computer system, such as live migration of applications, fault tolerance, and high availability, will be available even when nonvolatile memory is local to the computer system. Mirroring may be carried out when a cache flush instruction is executed to flush contents of the cache into nonvolatile memory. In addition, mirroring may be carried out asynchronously with respect to execution of the cache flush instruction by retrieving content that is to be mirrored from the nonvolatile memory using memory addresses of the nonvolatile memory corresponding to target memory addresses of the cache flush instruction.2015-04-02
20150095577PARTITIONING SHARED CACHES - Technology is provided for partitioning a shared unified cache in a multi-processor computer system. The technology can receive a request to allocate a portion of a shared unified cache memory for storing only executable instructions, partition the cache memory into multiple partitions, and allocate one of the partitions for storing only executable instructions. The technology can further determine the size of the portion of the cache memory to be allocated for storing only executable instructions as a function of the size of the multi-processor's L1 instruction cache and the number of cores in the multi-processor.2015-04-02
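The sizing rule at the end of this abstract — the instruction-only portion as a function of L1 I-cache size and core count — can be sketched directly. The linear form and the `scale` factor are assumptions; the abstract only says "a function of".

```python
def icache_partition_bytes(l1_icache_bytes, num_cores, scale=1):
    """Hypothetical sizing rule: the instruction-only partition grows
    with per-core L1 I-cache size and the number of cores."""
    return l1_icache_bytes * num_cores * scale

def partition(cache_bytes, insn_bytes):
    """Split the shared unified cache into an instruction-only part
    and the remainder (sketch)."""
    insn = min(insn_bytes, cache_bytes)
    return {"instructions": insn, "data": cache_bytes - insn}
```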
20150095578INSTRUCTIONS AND LOGIC TO PROVIDE MEMORY FENCE AND STORE FUNCTIONALITY - Instructions and logic provide memory fence and store functionality. Some embodiments include a processor having a cache to store cache coherent data in cache lines for one or more memory addresses of a primary storage. A decode stage of the processor decodes an instruction specifying a source data operand, one or more memory addresses as destination operands, and a memory fence type. Responsive to the decoded instruction, one or more execution units may enforce the memory fence type, then store data from the source data operand to the one or more memory addresses, and ensure that the stored data has been committed to primary storage. For some embodiments, the primary storage may comprise persistent memory. For some embodiments, cache lines corresponding to the memory addresses may be flushed, or marked for persistent write back to primary storage. Alternatively the cache may be bypassed, e.g. by performing a streaming vector store.2015-04-02
20150095579APPARATUS AND METHOD FOR EFFICIENT HANDLING OF CRITICAL CHUNKS - An apparatus and method for efficient handling of critical chunks. For example, one embodiment of an apparatus comprises a plurality of agents to perform a respective plurality of data processing functions, at least one of the data processing functions comprising transmitting and receiving chunks of data to and from a memory controller, respectively; a system agent to coordinate requests for transmitting and receiving the chunks of data to and from the memory controller, the system agent comprising: a memory for temporarily storing the chunks of data during transmission between the agents and the memory controller; and scheduling logic to prioritize critical chunks over non-critical chunks across multiple outstanding requests while ensuring that the non-critical chunks do not result in starvation.2015-04-02
20150095580SCALABLE MECHANISM TO IMPLEMENT AN INSTRUCTION THAT MONITORS FOR WRITES TO AN ADDRESS - A processor includes a cache-side address monitor unit corresponding to a first cache portion of a distributed cache that has a total number of cache-side address monitor storage locations less than a total number of logical processors of the processor. Each cache-side address monitor storage location is to store an address to be monitored. A core-side address monitor unit corresponds to a first core and has a same number of core-side address monitor storage locations as a number of logical processors of the first core. Each core-side address monitor storage location is to store an address, and a monitor state for a different corresponding logical processor of the first core. A cache-side address monitor storage overflow unit corresponds to the first cache portion, and is to enforce an address monitor storage overflow policy when no unused cache-side address monitor storage location is available to store an address to be monitored.2015-04-02
20150095581DATA CACHING POLICY IN MULTIPLE TENANT ENTERPRISE RESOURCE PLANNING SYSTEM - A cache manager application provides a data caching policy in a multiple tenant enterprise resource planning (ERP) system. The cache manager application manages multiple tenant caches in a single process. The application applies the caching policy. The caching policy optimizes system performance compared to local cache optimization. As a result, tenants with high cache consumption receive a larger portion of caching resources.2015-04-02
20150095582Method for Specifying Packet Address Range Cacheability - A method for specifying packet address range cacheability is provided. The method includes passing a memory allocation request from an application running on a network element configured to implement packet forwarding operations to an operating system of a network element, the memory allocation request including a table ID associated with an application table to be stored using the memory allocation. The method also includes allocating a memory address range by the operating system to the application in response to the memory allocation request, and inserting an entry in a cacheability register, the entry including the table ID included in the memory allocation request and the memory address range allocated in response to the memory allocation request.2015-04-02
20150095583DATA PROCESSING SYSTEM WITH CACHE LINEFILL BUFFER AND METHOD OF OPERATION - When data in first and second requests from a processor does not reside in cache memory, a first data element responsive to the second request is received by a cache controller from an external memory module after a first data element responsive to the first request and before the second data element responsive to the first request. Ownership of a linefill buffer is assigned to the first request when the first data element responsive to the first request is received. Ownership of the linefill buffer is re-assigned to the second request when the first data element responsive to the second request is received after the first data element responsive to the first request is received.2015-04-02
20150095584UTILITY-BASED INVALIDATION PROPAGATION SCHEME SELECTION FOR DISTRIBUTED CACHE CONSISTENCY - A computerized method for dynamic consistency management of server side cache management units in a distributed cache, comprising: updating a server side cache management unit by a client; assigning each of a plurality of server side cache management units to one of a plurality of propagation topology groups according to an analysis of a plurality of cache usage measurements thereof, each of said propagation topology groups is associated with a different write request propagation scheme; and managing client update notifications of members of each of said propagation topology groups according to the respective said different write request propagation scheme which is associated therewith.2015-04-02
20150095585CONSISTENT AND EFFICIENT MIRRORING OF NONVOLATILE MEMORY STATE IN VIRTUALIZED ENVIRONMENTS - Updates to nonvolatile memory pages are mirrored so that certain features of a computer system, such as live migration of applications, fault tolerance, and high availability, will be available even when nonvolatile memory is local to the computer system. Mirroring may be carried out when a cache flush instruction is executed to flush contents of the cache into nonvolatile memory. In addition, mirroring may be carried out asynchronously with respect to execution of the cache flush instruction by retrieving content that is to be mirrored from the nonvolatile memory using memory addresses of the nonvolatile memory corresponding to target memory addresses of the cache flush instruction.2015-04-02
20150095586STORING NON-TEMPORAL CACHE DATA - Embodiments herein provide for using one or more cache memories to facilitate non-temporal transactions. A request to store data into a cache associated with a processor is received. In response to receiving the request, a determination is made as to whether the data to be stored is non-temporal data. In response to determining that the data to be stored is non-temporal data, a predetermined location of the cache, to which storing of non-temporal data is restricted, is selected. The non-temporal data is data that is not accessed within a predetermined period of time. The non-temporal data is stored into the predetermined location.2015-04-02
20150095587REMOVING CACHED DATA - Embodiments of the present invention provide a method and apparatus for removing cached data. The method comprises determining activeness of a plurality of divided lists; ranking the plurality of divided lists according to the determined activeness of the plurality of divided lists. The method comprises removing a predetermined amount of cached data from the plurality of divided lists according to the ranking result when the used capacity in the cache area reaches a predetermined threshold. Through embodiments of the present invention, the activeness of each divided list may be used to wholly measure the heat of access to the cached data included by each divided list, and upon removal, the cached data with lower heat of access in the whole system can be removed and the cached data with higher heat of access in the whole system can be retained so as to improve the read/write rate of the system.2015-04-02
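The rank-then-remove flow in this abstract can be sketched as below: least-active divided lists give up their entries first once the cache area passes its threshold. The data layout is an assumption.

```python
def remove_cached(divided_lists, used, threshold, amount):
    """Sketch: when used capacity reaches the threshold, remove `amount`
    entries starting from the least-active divided list, so cold data
    goes first and hot data is retained."""
    if used < threshold:
        return []
    removed = []
    for lst in sorted(divided_lists, key=lambda l: l["activeness"]):
        while lst["entries"] and len(removed) < amount:
            removed.append(lst["entries"].pop(0))
    return removed
```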
20150095588LOCK-BASED AND SYNCH-BASED METHOD FOR OUT OF ORDER LOADS IN A MEMORY CONSISTENCY MODEL USING SHARED MEMORY RESOURCES - In a processor, a lock-based method for out of order loads in a memory consistency model using shared memory resources. The method includes implementing a memory resource that can be accessed by a plurality of cores; and implementing an access mask that functions by tracking which words of a cache line are accessed via a load, wherein the cache line includes the memory resource, wherein the load sets a mask bit within the access mask when accessing a word of the cache line, and wherein the mask bit blocks accesses from other loads from a plurality of cores. The method further includes checking the access mask upon execution of subsequent stores from the plurality of cores to the cache line; and causing a miss prediction when a subsequent store to the portion of the cache line sees a prior mark from a load in the access mask, wherein the subsequent store will signal a load queue entry corresponding to that load by using a tracker register and a thread ID register.2015-04-02
20150095589CACHE MEMORY SYSTEM AND OPERATING METHOD FOR THE SAME - A cache memory system includes a cache memory, which stores cache data corresponding to portions of main data stored in a main memory and priority data respectively corresponding to the cache data; a table storage unit, which stores a priority table including information regarding access frequencies with respect to the main data; and a controller, which, when at least one from among the main data is requested, determines whether cache data corresponding to the request is stored in the cache memory, deletes one from among the cache data based on the priority data, and updates the cache data set with new data, wherein the priority data is determined based on the information regarding access frequencies.2015-04-02
20150095590METHOD AND APPARATUS FOR PAGE-LEVEL MONITORING - An apparatus and method for page level monitoring are described. For example, one embodiment of a method for monitoring memory pages comprises storing information related to each of a plurality of memory pages including an address identifying a location for a monitor variable for each of the plurality of memory pages in a data structure directly accessible only by a software layer operating at or above a first privilege level; detecting virtual-to-physical page mapping consistency changes or other page modifications to a particular memory page for which information is maintained in the data structure; responsively updating the monitor variable to reflect the consistency changes or page modifications; checking a first monitor variable associated with a first memory page prior to execution of first program code; and refraining from executing the first program code if the first monitor variable indicates consistency changes or page modifications to the first memory page.2015-04-02
20150095591METHOD AND SYSTEM FOR FILTERING THE STORES TO PREVENT ALL STORES FROM HAVING TO SNOOP CHECK AGAINST ALL WORDS OF A CACHE - In a processor, a method for filtering stores to prevent all stores from having to snoop check against all words of a cache. The method includes implementing a cache wherein stores snoop the caches for address matches to maintain coherency; marking a portion of a cache line if a given core out of a plurality of cores loads from that portion by using an access mask; checking the access mask upon execution of subsequent stores to the cache line; and causing a miss prediction when a subsequent store to the portion of the cache line sees a prior mark from a load in the access mask.2015-04-02
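The access-mask filter shared by this abstract and 20150095588 above reduces to a per-word bitmask on the cache line. A minimal sketch, with the word count and method names assumed:

```python
class AccessMask:
    """Per-word load marks on one cache line. A store checks only this
    mask: if its word was never marked by a load, no snoop/recovery is
    needed (sketch of the filtering idea)."""

    def __init__(self, words=16):
        self.words = words
        self.mask = 0

    def on_load(self, word):
        self.mask |= 1 << word          # a core loaded this word

    def on_store(self, word):
        # True -> store hits a previously loaded word: miss prediction
        return bool(self.mask & (1 << word))
```

The payoff is that the common case — a store to a word no core has speculatively loaded — is a single mask test rather than a full snoop of the load queue.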
20150095592STORAGE CONTROL APPARATUS, STORAGE CONTROL METHOD, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED STORAGE CONTROL PROGRAM - A storage apparatus includes a processing unit that causes a SAN OS, which performs SAN control, and a NAS OS, which performs NAS control, to operate on a virtualized OS, an inter-OS communication unit that transmits and receives data between the NAS OS and the SAN OS, a transmission controller that transmits a NAS input/output request received in the NAS OS to the SAN OS through the inter-OS communication unit, and a NAS request processing unit that processes, in the SAN OS, the NAS input/output request received from the transmission controller. With this configuration, the NAS and the SAN can be efficiently integrated in a storage apparatus.2015-04-02
20150095593RECORDING APPARATUS - A recording apparatus includes a recording unit, a waveguide forming unit, a communication unit, and a memory controller. The recording unit is configured to record and hold data. The waveguide forming unit is configured to function as a transmission path that transmits the data. The communication unit is configured to communicate with the waveguide forming unit. The memory controller is configured to control input and output of the data to and from the recording unit.2015-04-02
20150095594Method and Apparatus for Bit-Interleaving - A manner of processing data for transmission in a data communication network. A node having a main memory and an interleaver is provided. Received data is stored in the main memory and a bandwidth map is prepared. The data is then selectively read out and pre-processed according to the bandwidth map and stored in an interleaver memory. The data is later read out and post-processed before interleaving into a downstream data frame. The pre- and post-processing provide the data in a more efficient form for interleaving.2015-04-02
20150095595CONFIGURABLE SPREADING FUNCTION FOR MEMORY INTERLEAVING - A method of interleaving a memory by mapping address bits of the memory to a number N of memory channels iteratively in successive rounds, wherein in each round except the last round: selecting a unique subset of address bits, determining a maximum number (L) of unique combinations possible based on the selected subset of address bits, mapping combinations to the N memory channels a maximum number of times (F) possible where each of the N memory channels gets mapped to an equal number of combinations, and if and when a number of combinations remain (K, which is less than N) that cannot be mapped, one to each of the N memory channels, entering a next round. In the last round, mapping remaining most significant address bits, not used in the subsets in prior rounds, to each of the N memory channels.2015-04-02
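The per-round bookkeeping in this abstract (L combinations, F full passes over the N channels, K leftovers carried forward) can be sketched as arithmetic. The function name and the input format are assumptions.

```python
def plan_rounds(n_channels, subset_bit_counts):
    """For each round's chosen subset of address bits, compute
    L = 2**bits unique combinations, F = full mappings onto the
    N channels, and K = leftover combinations (K < N) that are
    carried into the next round (sketch of the abstract's terms)."""
    plan = []
    for bits in subset_bit_counts:
        L = 1 << bits
        F, K = divmod(L, n_channels)
        plan.append({"L": L, "F": F, "K": K})
    return plan
```

With N not a power of two (say N = 3), K is nonzero in every round except where L divides evenly, which is why the abstract defers the remainder and finishes with the most significant bits in a last round.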
20150095596Techniques for Improving Performance of a Backup System - Techniques for improving performance of a backup system are disclosed. In one particular exemplary embodiment, the techniques may be realized as a method for improving performance of a backup system. The method may comprise performing a backup of a client device, tracking, using at least one computer processor, references to data segments that are located outside of a unit of storage associated with the backup, calculating utilization of the unit of storage associated with the backup based on the tracked references, determining if the calculated utilization meets a specified parameter, and determining one or more responsive actions in the event the calculated utilization meets the specified parameter.2015-04-02
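The utilization check this abstract describes is a ratio of referenced to total segments compared against a parameter. A sketch, with the 0.3 floor and the compaction action assumed rather than taken from the abstract:

```python
def utilization(referenced_segments, total_segments):
    """Fraction of a storage unit's segments still referenced by backups."""
    return referenced_segments / total_segments if total_segments else 0.0

def should_compact(referenced_segments, total_segments, floor=0.3):
    """Assumed responsive action: reclaim a unit whose live fraction
    drops below the specified parameter (hypothetical threshold)."""
    return utilization(referenced_segments, total_segments) < floor
```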
20150095597HIGH PERFORMANCE INTELLIGENT VIRTUAL DESKTOP INFRASTRUCTURE USING VOLATILE MEMORY ARRAYS - Certain aspects of the disclosure relate to a system and method for performing intelligent virtual desktop infrastructure (iVDI) using volatile memory arrays. The system has a hypervisor server and a storage server in communication via a file sharing protocol. A random access memory (RAM) disk is launched on a volatile memory array using a RAM disk driver. The RAM disk driver then assigns local and remote storages of the storage server as primary and secondary backup storages for the RAM disk. A group of virtual machine (VM) images is deployed to the RAM disk, and deduplication is performed on the VM images to release some memory space of the RAM disk. The deploying and deduplicating of the VM images continues repeatedly until the RAM disk is almost full. Then, the VM images in the RAM disk are copied to the backup storages as backup copies.2015-04-02
20150095598TECHNIQUES TO COMPOSE MEMORY RESOURCES ACROSS DEVICES - Examples are disclosed for composing memory resources across devices. In some examples, memory resources associated with executing one or more applications by circuitry at two separate devices may be composed across the two devices. The circuitry may be capable of executing the one or more applications using a two-level memory (2LM) architecture including a near memory and a far memory. In some examples, the near memory may include near memories separately located at the two devices and a far memory located at one of the two devices. The far memory may be used to migrate one or more copies of memory content between the separately located near memories in a manner transparent to an operating system for the first device or the second device. Other examples are described and claimed.2015-04-02
20150095599STORAGE PROCESSING APPARATUS, COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM FOR CONTROLLING STORAGE, AND STORAGE SYSTEM - A storage processing apparatus controls a second volume of a second virtual storage device storing a duplicate of a first volume of a first virtual storage device, the storage processing apparatus including a memory that stores a first identifier of the first volume received from the first virtual storage device; and a controller that establishes the first identifier stored in the memory as a second identifier of the second volume, and reports the established first identifier in response to a notification request for the second identifier.2015-04-02
20150095600ATOMIC TRANSACTIONS TO NON-VOLATILE MEMORY - Durable atomic transactions for non-volatile media are described. A processor includes an interface to a non-volatile storage medium and a functional unit to perform instructions associated with an atomic transaction. The instructions are to update data at a set of addresses in the non-volatile storage medium atomically. The functional unit is operable to perform a first instruction to create the atomic transaction that declares a size of the data to be updated atomically. The functional unit is also operable to perform a second instruction to start execution of the atomic transaction. The functional unit is further operable to perform a third instruction to commit the atomic transaction to the set of addresses in the non-volatile storage medium, wherein the updated data is not visible to other functional units of the processing device until the atomic transaction is complete.2015-04-02
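The three-instruction flow (create with a declared size, start, commit) can be sketched in user-space terms: writes are staged and become visible only at commit. This is a software model of the semantics only; the application describes processor instructions and non-volatile media, neither of which is represented here.

```python
# Staged-update model of a durable atomic transaction: updates declared up
# front, invisible to other readers until commit makes them visible at once.

class AtomicTx:
    def __init__(self, memory: dict, size: int):
        self.memory = memory      # stands in for the non-volatile medium
        self.size = size          # declared number of updates (first instruction)
        self.staged = {}
        self.active = False

    def start(self):              # second instruction: begin execution
        self.active = True

    def write(self, addr: int, value):
        assert self.active and len(self.staged) < self.size
        self.staged[addr] = value # not yet visible in memory

    def commit(self):             # third instruction: publish atomically
        self.memory.update(self.staged)
        self.active = False
```

The key property mirrored here is that `memory` never shows a partial update: before `commit()` it shows none of the transaction's writes, after it shows all of them.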
20150095601INTERFACE METHODS AND APPARATUS FOR MEMORY DEVICES - A disclosed example apparatus includes an interface (2015-04-02
20150095602Creating A Program Product Or System For Executing A Perform Frame Management Instruction - Creating a computer program product or a computer system to execute a frame management instruction which identifies a first and second general register. The first general register contains a frame management field having a key field with access-protection bits and a block-size indication. If the block-size indication indicates a large block then an operand address of a large block of data is obtained from the second general register. The large block of data has a plurality of small blocks each of which is associated with a corresponding storage key having a plurality of storage key access-protection bits. If the block size indication indicates a large block, the storage key access-protection bits of each corresponding storage key of each small block within the large block is set with the access-protection bits of the key field.2015-04-02
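The large-block branch can be illustrated directly: every small block's storage key receives the access-protection bits from the key field. The 4-bit field width and bit positions below are assumptions for illustration, not the architected layout.

```python
# Sketch of the large-block case of the frame management instruction: the
# key field's access-protection bits are copied into each small block's
# storage key. Bit widths here are illustrative assumptions.

def set_keys(storage_keys: list, key_field_bits: int, large: bool) -> list:
    """Apply the key field's access-protection bits (low 4 bits, assumed)
    to every small block's storage key within the large block."""
    if not large:
        return storage_keys
    prot = key_field_bits & 0b1111
    return [(k & ~0b1111) | prot for k in storage_keys]
```

For a small-block indication the storage keys are left untouched, matching the conditional wording of the abstract.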
20150095603METHOD AND DEVICE FOR CLEARING PROCESS IN ELECTRONIC DEVICE - A method and a device for clearing a process in an electronic device are provided. The method includes calculating an amount of memory allocated for a preset time period when a memory application is requested, predicting an amount of memory to be allocated for a future setting time period based on the amount of the memory, and selecting and clearing at least one of the present processes based on the amount of the memory to be allocated. Accordingly, sufficient memory can be obtained in a short period of time by clearing a plurality of processes. In this way, the electronic device can continuously allocate an abundance of memory.2015-04-02
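The predict-then-clear flow can be sketched as linear extrapolation of the recent allocation rate, followed by choosing processes to clear until the predicted shortfall is covered. The largest-first selection policy and all thresholds are illustrative assumptions.

```python
# Sketch of memory-pressure prediction and process selection: extrapolate
# the allocation rate over a future window, then clear processes until the
# predicted need is covered. Policy details are assumptions.

def predict_future_alloc(recent_alloc: int, period: float, window: float) -> float:
    """Linearly extrapolate the rate observed over `period` seconds."""
    return recent_alloc / period * window

def select_to_clear(processes: dict, needed: float) -> list:
    """Pick the largest processes first until `needed` bytes are freed."""
    chosen, freed = [], 0
    for name, mem in sorted(processes.items(), key=lambda kv: -kv[1]):
        if freed >= needed:
            break
        chosen.append(name)
        freed += mem
    return chosen
```

Clearing several processes in one pass, rather than one at a time on each allocation failure, is what lets the device free sufficient memory quickly.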
20150095604CONTROL DEVICE THAT SELECTIVELY REFRESHES MEMORY - A control device includes circuits configured to detect an access request for a memory area in memory that stores information by charging and discharging charge; to determine whether either the write information written to the memory area corresponding to the detected access request or the read information read from the memory area coincides with the information stored in the memory area when the charge is discharged; and to suspend a refresh operation for the memory area when either the write information or the read information is determined to coincide with the information stored in the memory when the charge is discharged.2015-04-02
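The refresh-skip decision reduces to a single comparison: if the value just written to (or read from) a region already equals the value its cells decay to when charge leaks away, refreshing that region is pointless. The all-zeros decay pattern and 4-byte region size below are assumptions for illustration.

```python
# Decision sketch for selective refresh: suspend refresh for a region whose
# content matches the discharged (decayed) state, assumed all-zeros here.

DISCHARGED = b"\x00" * 4  # assumed decay pattern for a 4-byte region

def should_suspend_refresh(accessed_value: bytes) -> bool:
    """Suspend refresh when the observed value equals the discharged state."""
    return accessed_value == DISCHARGED
```

Skipping refresh for such regions saves power with no data loss, since decay would reproduce the stored value anyway.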
20150095605Latency-Aware Memory Control - A system, method and computer-readable storage device for accessing heterogeneous memory system, are provided. A memory controller schedules access of a command to a memory region in a set of memory regions based on an access priority associated with the command and where the set of memory regions have corresponding access latencies. The memory controller also defers access of the command to the set of memory regions using at least two queues and the access priority.2015-04-02
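The two-queue deferral mechanism can be sketched with a pair of FIFOs: commands carrying a high access priority are issued first, and lower-priority commands wait. The two-level split and the names below are assumptions; the disclosed controller also weighs per-region access latencies, which this sketch omits.

```python
# Two-queue scheduling sketch for a latency-aware memory controller:
# high-priority commands are issued before low-priority ones.

from collections import deque

class LatencyAwareController:
    def __init__(self):
        self.high, self.low = deque(), deque()

    def submit(self, command: str, priority: str):
        (self.high if priority == "high" else self.low).append(command)

    def next_command(self):
        """Issue from the high-priority queue first, then the low one."""
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

A real heterogeneous-memory controller would additionally steer each command to the queue matching its target region's latency class.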
20150095606SYSTEM AND METHOD FOR PREDICTING MEMORY PERFORMANCE - A method, computer program product, and computing system for defining an optimal execution time (t) for a concurrent memory operation to be performed on a transactional memory system. An abort probability (p) is associated with the optimal execution time (t) based, at least in part, upon a probability curve. The probability curve is empirically derived and based upon the performance of the transactional memory system. A probable execution time (T2015-04-02
20150095607VERIFICATION OF DYNAMIC LOGICAL PARTITIONING - Embodiments of the present invention disclose a method, computer program product, and system for verifying transitions between logical partition configurations. A computer system divides the physical resources of a processing core into logical partitions, each of which has at least one processing subcore. The computer system loads the contexts of the logical partitions and assigns test cases to each processing subcore. The processing subcore executes the test case, verifying the context of the logical partition. The computer system reassigns the test cases to different processing cores in anticipation of reconfiguring the number of logical partitions on the processing core. The computing system reconfigures the number of logical partitions on the processing core and executes the test cases as assigned on the reconfigured logical partitions.2015-04-02
20150095608VERIFICATION OF DYNAMIC LOGICAL PARTITIONING - Embodiments of the present invention disclose a method, computer program product, and system for verifying transitions between logical partition configurations. A computer system divides the physical resources of a processing core into logical partitions, each of which has at least one processing subcore. The computer system loads the contexts of the logical partitions and assigns test cases to each processing subcore. The processing subcore executes the test case, verifying the context of the logical partition. The computer system reassigns the test cases to different processing cores in anticipation of reconfiguring the number of logical partitions on the processing core. The computing system reconfigures the number of logical partitions on the processing core and executes the test cases as assigned on the reconfigured logical partitions.2015-04-02
20150095609APPARATUS AND METHOD FOR COMPRESSING A MEMORY ADDRESS - An apparatus and method for converting between a full memory address and a compressed memory address. For example, one embodiment comprises one or more translation tables having a plurality of translation entries, each translation entry identifiable with a pointer value and storing a portion of a full memory address usable within the processor to address data and instructions; and address translation logic to use the translation tables to convert between the full address and a compressed version of the full address, the compressed version of the full address having the pointer value substituted for the portion of the full memory address, wherein a first portion of the processor performs operations using the compressed version of the full address and a second portion of the processor performs operations using the full address.2015-04-02
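The compression scheme can be sketched as follows: the upper bits of a full address are stored once in a translation table, and the compressed form carries only a small table pointer plus the low bits. The 16-bit split and table layout below are illustrative assumptions, not the disclosed bit widths.

```python
# Address-compression sketch: high address bits live in a translation table;
# a compressed address is (table pointer, low bits). Bit widths are assumed.

LOW_BITS = 16

class AddrCompressor:
    def __init__(self):
        self.table, self.index = [], {}

    def compress(self, full_addr: int):
        high = full_addr >> LOW_BITS
        if high not in self.index:           # allocate a table entry once
            self.index[high] = len(self.table)
            self.table.append(high)
        return self.index[high], full_addr & ((1 << LOW_BITS) - 1)

    def expand(self, pointer: int, low: int) -> int:
        return (self.table[pointer] << LOW_BITS) | low
```

Because nearby addresses share upper bits, many full addresses map to the same table entry, which is what lets one part of the processor operate on the shorter compressed form.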
20150095610MULTI-STAGE ADDRESS TRANSLATION FOR A COMPUTING DEVICE - Providing for address translation in a virtualized system environment is disclosed herein. By way of example, a memory management apparatus is provided that comprises a shared translation look-aside buffer (TLB) that includes a plurality of translation types, each supporting a plurality of page sizes, one or more processors, and a memory management controller configured to work with the one or more processors. The memory management controller includes logic configured for caching virtual address to physical address translations and intermediate physical address to physical address translations in the shared TLB, logic configured to receive a virtual address for translation from a requester, logic configured to conduct a table walk of a translation table in the shared TLB to determine a translated physical address in accordance with the virtual address, and logic configured to transmit the translated physical address to the requester.2015-04-02
20150095611METHOD AND PROCESSOR FOR REDUCING CODE AND LATENCY OF TLB MAINTENANCE OPERATIONS IN A CONFIGURABLE PROCESSOR - A memory management unit (MMU) is disclosed for storing mappings between virtual addresses and physical addresses. The MMU includes a translation look-aside buffer (TLB) and a memory management unit controller. The TLB stores mappings between a virtual address and a physical address. The MMU controller receives a request to insert an entry into the TLB and performs a set of operations based on the received request. The MMU controller determines whether an entry stored in the TLB is associated with the virtual address of the request, removes the entry stored in the TLB that is associated with the virtual address and inserts the requested entry into the TLB.2015-04-02
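The insert flow described above can be sketched with a small dictionary-backed TLB: before a new entry is inserted, any existing entry for the same virtual address is removed, so no separate invalidate pass is needed. The capacity and eviction policy below are illustrative assumptions.

```python
# TLB insert sketch: inserting a mapping first removes any stale entry for
# the same virtual address, then evicts if the TLB is full. Policy assumed.

class TLB:
    def __init__(self, capacity: int = 4):
        self.entries, self.capacity = {}, capacity

    def insert(self, vaddr: int, paddr: int):
        self.entries.pop(vaddr, None)        # remove any stale mapping first
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # naive eviction
        self.entries[vaddr] = paddr

    def lookup(self, vaddr: int):
        return self.entries.get(vaddr)
```

Folding the stale-entry removal into the insert operation is what reduces the code and latency of TLB maintenance relative to issuing explicit invalidations.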
20150095612TECHNIQUES FOR HANDLING MEMORY ACCESSES BY PROCESSOR-INDEPENDENT EXECUTABLE CODE IN A MULTI-PROCESSOR ENVIRONMENT - A method and apparatus for virtual address mapping are provided. The method includes determining an offset value respective of at least a first portion of code stored on a code memory unit, generating a first virtual code respective of the first portion of code and a second virtual code respective of a second portion of code stored on the code memory unit; mapping the first virtual code to a first virtual code address and the second virtual code to a second virtual code address; generating a first virtual data respective of the first portion of data and a second virtual data respective of the second portion of data; and mapping the first virtual data to a first virtual data address and the second virtual data to a second virtual data address.2015-04-02