23rd week of 2014 patent application highlights part 68 |
Patent application number | Title | Published |
20140156862 | Clustering Support Across Geographical Boundaries - An approach is presented that provides computer clustering support across geographical boundaries. Inter-node communications are managed in a cluster by having each node operate at the network device driver (NDD) level within the kernel. Multiple types of NDD (Ethernet, SAN, disk, etc.) are utilized to provide redundancy so that nodes can reliably exchange heartbeats. To align with this architecture, for remote nodes, a pseudo NDD is used over a Transmission Control Protocol (TCP)-based communication interface to work alongside the other NDDs. Thus, the same packet that is sprayed over the NDDs pertaining to local nodes can be sprayed over the TCPSOCK NDD interface for remote nodes. Nodes (local or remote) receive the same packet and reassemble and process it in the same manner. | 2014-06-05 |
20140156863 | DASH CLIENT AND RECEIVER WITH A DOWNLOAD RATE ESTIMATOR - A client device presents streaming media and includes a stream manager for controlling streams, a request accelerator for making network requests for content, a source component coupled to the stream manager and the request accelerator for determining which requests to make, a network connection, and a media player. A process for rate estimation is provided that reacts quickly to reception rate changes. The rate estimator can use an adaptive windowed average and take into account the video buffer level and the change in video buffer level so as to guarantee that the rate adjusts fast enough when there is a need, while keeping the windowing width large (and thus the measurement variance low). A guarantee might be that when a rate drop or rise happens, the estimator adjusts its estimate within a time proportional to a buffer drain rate or buffer fill level. | 2014-06-05 |
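The adaptive windowed average described above can be sketched as follows. This is a minimal illustration, not the patent's actual algorithm: the class name, the 10-second pivot, and the rule that the window shrinks in proportion to the buffer level are all assumptions made for the example.

```python
from collections import deque

class AdaptiveRateEstimator:
    """Windowed download-rate estimator that shrinks its averaging
    window when the playback buffer is low and draining, so the
    estimate reacts quickly to throughput drops, while a full buffer
    permits a wide, low-variance window (illustrative sketch)."""

    def __init__(self, max_window=20):
        self.max_window = max_window
        self.samples = deque(maxlen=max_window)  # (bytes, seconds) pairs

    def add_sample(self, num_bytes, duration_s):
        self.samples.append((num_bytes, duration_s))

    def estimate(self, buffer_level_s, buffer_draining):
        # Scale the effective window with the buffer level while the
        # buffer is draining; the 10-second pivot is an assumption.
        if buffer_draining and buffer_level_s < 10.0:
            window = max(1, int(self.max_window * buffer_level_s / 10.0))
        else:
            window = self.max_window
        recent = list(self.samples)[-window:]
        total_bytes = sum(b for b, _ in recent)
        total_time = sum(t for _, t in recent)
        return total_bytes / total_time if total_time > 0 else 0.0
```

With a full buffer the estimator averages over every sample; with a low, draining buffer it averages only the most recent ones, so a rate drop shows up in the estimate quickly.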
20140156864 | MULTIMEDIA STREAM BUFFER AND OUTPUT METHOD AND MULTIMEDIA STREAM BUFFER MODULE - A buffer and output method and a buffer module for a multimedia stream are provided, wherein multimedia stream packets are received and stored into a first buffer, and when an actual remaining time, calculated according to an accumulated idle time and a buffer time, is greater than a remaining-time threshold value, the following steps are performed. A first present time is read, and a sleep instruction is executed so as to wait for a preset idle time. A second present time is read, and an actual idle time, which may differ from the preset idle time, is calculated according to the first present time and the second present time. A portion of the buffer units is enabled, according to the accumulated idle time built up from the actual idle time, to output the stored multimedia stream packets to a second buffer of a player. | 2014-06-05 |
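The timing step above, measuring how long the sleep instruction actually took rather than trusting the preset value, can be sketched like this. Function names are illustrative assumptions; the point is that `time.sleep` rarely returns after exactly the requested interval, so the elapsed time must be measured.

```python
import time

def wait_and_measure(preset_idle_s):
    """Execute the sleep instruction for a preset idle time and return
    the idle time that actually elapsed, which OS scheduling typically
    makes slightly longer than requested (illustrative sketch)."""
    first_present_time = time.monotonic()
    time.sleep(preset_idle_s)                 # wait for the preset idle time
    second_present_time = time.monotonic()
    return second_present_time - first_present_time  # actual idle time

def accumulate_idle(accumulated_idle_s, preset_idle_s):
    # The accumulated idle time later decides how many buffer units
    # to enable when draining packets to the player's buffer.
    return accumulated_idle_s + wait_and_measure(preset_idle_s)
```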
20140156865 | Generic Substitution Parameters in DASH - A method for preparing media content in a Dynamic Adaptive Streaming over Hypertext Transfer Protocol (DASH), comprising generating a parameter that comprises an identifier associated with a string value and encoding the parameter within a media presentation description (MPD), wherein the parameter is configured to be set with a parameter value independently of when the MPD is generated, and wherein the MPD provides presentation information for a media content. In another embodiment, a method for adaptive streaming of a media content in a DASH, comprising receiving an MPD that provides presentation information for the media content, determining one or more generic parameters within the MPD, and substituting one or more values for the generic parameters obtained from the MPD, wherein the generic parameters reference at least one of the following: attributes within the MPD, remote elements not available during MPD generation, and streaming client applications. | 2014-06-05 |
20140156866 | Efficient Data Transmission Between Computing Devices - The subject disclosure is directed towards technology by which data transmission sizes are reduced when uploading files over a network. By processing hash values corresponding to a plurality of data blocks of a file to potentially be uploaded to a server, the server identifies any already known data block or blocks of the file. The server performs a server-local copy operation that writes the known data block into a server-local copy of the file. If applicable, the server returns hash values corresponding to unknown data blocks to a client, by which the client responds by uploading copies of the unknown data blocks. Accordingly, the client and the server maintain the server-local copy of the file by transferring only unknown data blocks. | 2014-06-05 |
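The block-hash exchange described above can be sketched end to end. This is a toy model under stated assumptions: the 4-byte block size, the class and method names, and the three-step begin/put/finish protocol are all invented for illustration; real systems use much larger blocks.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real systems use far larger blocks

def split_blocks(data, size=BLOCK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

class Server:
    """Keeps previously seen blocks keyed by hash and assembles
    server-local copies of uploaded files (illustrative sketch)."""
    def __init__(self):
        self.known = {}   # hash -> block bytes
        self.files = {}   # name -> assembled file bytes

    def begin_upload(self, hashes):
        # Report which block hashes the server has never seen.
        return [h for h in hashes if h not in self.known]

    def put_blocks(self, blocks_by_hash):
        self.known.update(blocks_by_hash)

    def finish(self, name, hashes):
        # Server-local copy: every block is now known, so write the file.
        self.files[name] = b"".join(self.known[h] for h in hashes)

def upload(server, name, data):
    blocks = split_blocks(data)
    hashes = [hashlib.sha256(b).hexdigest() for b in blocks]
    unknown = set(server.begin_upload(hashes))
    # Transfer only the blocks the server does not already have.
    server.put_blocks({h: b for h, b in zip(hashes, blocks) if h in unknown})
    server.finish(name, hashes)
    return len(unknown)  # blocks actually sent over the network
```

A second file that shares blocks with an earlier upload transfers only its novel blocks, which is the transmission-size reduction the abstract claims.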
20140156867 | OFFLOAD PROCESSING INTERFACE - Disclosed are various embodiments providing offload processing circuitry for a network switch. The offload processing circuitry receives an administrative packet from the network switch, to which it is communicatively coupled via an Ethernet interface. The offload processing circuitry identifies a receive offload header associated with the administrative packet, the receive offload header comprising a media access control (MAC) address associated with the offload processing circuitry. The offload processing circuitry updates a state machine based at least upon the administrative packet. The offload processing circuitry transmits packets to the network switch by encapsulating them in an offload header, which the network switch uses to forward each packet to the proper port. | 2014-06-05 |
20140156868 | Proximity Detection for Media Proxies - A method of detecting proximity between a media proxy and a client uses a proximity probe to query a plurality of media proxies, forcing the media proxies to respond to a proximity server. The proximity server uses an algorithm to determine which media proxy is closest to the client based on the responses. In an alternate embodiment, the same sorts of proximity probes may be used to determine if two media endpoints have a direct connection such that they may bypass a media proxy. | 2014-06-05 |
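The proximity server's selection step reduces to picking the proxy with the lowest measured round-trip time among those that answered the probe. A minimal sketch, assuming RTTs have already been collected (the function name and the `None`-for-no-response convention are assumptions):

```python
def closest_proxy(rtt_by_proxy):
    """Select the media proxy with the lowest probe round-trip time.
    `rtt_by_proxy` maps proxy name to RTT in milliseconds; a proxy
    whose probe response never arrived is recorded as None and is
    skipped. Returns None if no proxy responded."""
    answered = {p: rtt for p, rtt in rtt_by_proxy.items() if rtt is not None}
    if not answered:
        return None
    return min(answered, key=answered.get)
```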
20140156869 | Intelligent Delivery Agent for Short Message Distribution Center - A message distribution center (MDC) and Intelligent Delivery Agent are implemented in a wireless Internet gateway interposed between content providers and a wireless carrier to subjectively examine and direct messages via SMTP based on desired rules (e.g., non-peak hours, paying subscribers only, etc.) using a standard SMTP Gateway and other well-known protocols. The MDC includes an individual queue for each subscriber, and the provider is informed through conventional SMTP protocol messages that the short message has been accepted. If the carrier has specifically disallowed service for a particular MIN (e.g., in the case of churning), then the content provider is informed through an SMTP interchange that the recipient is invalid. An MDC provides a single mechanism for interacting with subscribers of multiple carriers, regardless of each carrier's underlying infrastructure. For the carrier, an MDC can protect their SS7 network by intelligently throttling messages and configuring message delivery parameters to be more network friendly. An MDC can receive recipient handset presence information from outside the relevant wireless network. In the disclosed embodiment, a content provider communicates with the MDC using SMTP protocol messages, and the MDC communicates with wireless carriers preferably using RMI/SMPP techniques. | 2014-06-05 |
20140156870 | COMMUNICATION SYSTEM AND SERVER - In a communication system, using source address, source port number, destination address and destination port number, terminals A and B located behind NAT routers A and B, respectively, communicate with each other through the NAT routers A and B. A server SV which is not located behind any NAT routers identifies the NAT router A as a NAT router of a certain type which translates the source port number into a different source port number if the destination port number is changed. The server SV then guesses a port number obtained by adding (or subtracting) a value to (or from) the source port number used for transmission of data from the terminal A through the NAT router A as a port number of the terminal A in a case where the NAT router has been identified as the certain type, and informs the terminal B of the guessed port number. | 2014-06-05 |
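The server's port-guessing step can be sketched as follows. The NAT type label, function name, and default step of 1 are illustrative assumptions; the abstract only says the guessed port is the observed source port plus or minus some value when the NAT is of the identified port-shifting type.

```python
def guess_peer_port(observed_port, nat_type, step=1):
    """Predict the external port terminal A's next NAT mapping will use.
    For the identified NAT type, which translates the source port to a
    different value whenever the destination port changes, the server
    guesses the observed port shifted by a fixed step; other NAT types
    are assumed to reuse the same external port for every destination."""
    if nat_type == "port-shifting":
        return observed_port + step
    return observed_port
```

The server would then inform terminal B of the guessed port so that B can send its hole-punching traffic there.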
20140156871 | POLLING OF I/O DEVICES ON HOST INITIATED COMMUNICATION TRANSPORTS - A disclosed data processing system includes a processor and an operating system kernel that includes communication drivers to support sideband interrupt deferring of polling associated with I/O requests. The communication drivers may implement a driver stack that includes a sideband miniport driver to detect an application program read request for device data from an input/output (I/O) device. The I/O device may be a sensor or another type of human interface device. The sideband miniport driver may pend the read request and maintain an interrupt pipe of a communication transport between the host system and the I/O device in a disabled state. With the interrupt pipe disabled, the host system drivers are unable to poll the I/O device. The sideband miniport driver may pend the read request and keep the interrupt pipe disabled until a sideband interrupt is communicated to the sideband miniport driver. | 2014-06-05 |
20140156872 | SECURE ELEMENT SYSTEM INTEGRATED HARD MACRO - Systems and methods are provided that allow a secure processing system (SPS) to be implemented as a hard macro, thereby isolating the SPS from a peripheral processing system (PPS). The SPS and the PPS, in combination, may form a secure element that can be used in conjunction with a host device and a connectivity device to allow the host device to engage in secure transactions, such as mobile payment over a near field communications (NFC) connection. As a result of the SPS being implemented as a hard macro isolated from the PPS, the SPS may be certified once, and re-used in other host devices without necessitating re-certification. | 2014-06-05 |
20140156873 | ELECTRONIC APPARATUS - A programmable display device includes a USB interface to which a USB removable drive device is connected, a nonvolatile memory configured to store USB removable drive device peculiar information peculiar to the USB removable drive device and drive allocation fixing setting information indicating correspondence between the USB removable drive device and a drive number and incorporated in the programmable display device, and a control unit configured to allocate, when information coinciding with the USB removable drive device peculiar information acquired from the USB removable drive device connected to the USB interface is included in drive allocation information stored in the nonvolatile memory, a drive number associated according to the drive allocation information to the USB removable drive device connected to the USB interface. | 2014-06-05 |
20140156874 | METHOD AND SYSTEM FOR A MULTIMEDIA DEVICE OPERABLE BY A CONTROL DEVICE - The disclosed embodiments relate to a system and method for disabling control inputs to a multimedia device. More specifically, there is provided a method including determining, in response to receiving a do not interfere (DNI) control input from a control device for the multimedia device, that the multimedia device is operating in a DNI mode. In the DNI mode, the multimedia device persistently maintains a state of the multimedia device. The method also includes operating the multimedia device according to the DNI mode, in response to receiving an operation control input. The operation control input represents a request to change the state. | 2014-06-05 |
20140156875 | COEXISTING STANDARD AND PROPRIETARY CONNECTION USAGE - A method for coexisting standard connection and proprietary connection use is disclosed. The method may include connecting a peripheral device to a host computing device, wherein the peripheral device is connected via one of a standard connection type and a proprietary connection type. The method may include detecting the connection type. The method may include determining if the connection type is supported by the host computing device. The method may also include rendering a message indicating the connection type is not supported by the host system if the connection type is not supported. | 2014-06-05 |
20140156876 | DEVICE DISCONNECT DETECTION - Systems and methods for operating a universal serial bus are described herein. The method includes sending packet data from a USB | 2014-06-05 |
20140156877 | STORAGE RESOURCE USAGE ANALYSIS FOR CUSTOMIZED APPLICATION OPTIONS - Described are techniques for analyzing storage resources. I/O operations which are directed to a set of storage resources and received at a data storage system from a first application are monitored. First information characterizing the I/O operations from the first application is collected in accordance with said monitoring. Using the first information, a first execution profile for the first application characterizing I/O operations of the first application is determined. It is determined whether the first execution profile of the first application matches any of a set of predetermined execution profiles for known applications. Each of the predetermined execution profiles characterizes I/O operations of one of the known applications. First processing is performed in accordance with one or more criteria including whether the first execution profile matches any of the set of predetermined execution profiles. | 2014-06-05 |
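Matching a collected execution profile against predetermined profiles for known applications might look like the sketch below. The feature names (`read_ratio`, `avg_io_kb`), the relative-tolerance rule, and the first-match policy are all assumptions made for illustration, not details from the patent.

```python
def matches(profile, candidate, tolerance=0.1):
    """Compare an observed I/O execution profile against a candidate
    profile feature by feature; every feature must agree within a
    relative tolerance (illustrative matching rule)."""
    for key, value in candidate.items():
        observed = profile.get(key)
        if observed is None:
            return False
        if abs(observed - value) > tolerance * max(abs(value), 1e-9):
            return False
    return True

def identify_application(profile, known_profiles):
    # Return the first known application whose profile matches, else None.
    for app, candidate in known_profiles.items():
        if matches(profile, candidate):
            return app
    return None
```

A match would then drive the "first processing" step, e.g. applying storage options customized for the identified application.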
20140156878 | STORAGE DEVICE TO PROVIDE ACCESS TO STORAGE RESOURCES OVER A DATA STORAGE FABRIC - Provide access to storage resources of a storage device over a data storage fabric. Allow a zone manager of a first switch to assign a zone group to one of a plurality of phys of an expander of the storage device to allow the first switch to access storage resources of the storage device. If status of the phy that is assigned a zone group indicates a disconnection condition with the first switch, then configure the zone group of the expander of the storage device to prevent access to the storage resources of the storage device. If after the disconnection condition, the status of the phy indicates a reconnection condition with a second switch, then allow a zone manager of the second switch to assign a zone group to the phy to allow the second switch to access storage resources of the storage device. | 2014-06-05 |
20140156879 | ACTIVE CABLE WITH INDICATORS SHOWING OPERATING MODES AND LINKING STATUS - An active cable includes: a cable body with two ends and two cable plugs being connected to the two ends of the cable body respectively. Each cable plug includes: an electrical connector configured for transmitting and receiving power, high speed data and low speed control signals; a transceiver circuitry connected with the electrical connector and the cable body and configured to transmit and receive the high speed data between the electrical connector and the cable body; an indicator; a driving circuitry connected with the indicator and configured to drive the indicator; and a cable controller connected with the electrical connector, the transceiver circuitry and the driving circuitry and configured to determine an operating mode and linking status of the active cable and transmit an internal control signal to the driving circuitry accordingly. | 2014-06-05 |
20140156880 | MEMORY CONTROLLER AND OPERATING METHOD THEREOF - A memory controller is provided which includes a host interface configured to provide an interface for communication with a host; a buffer memory configured to store user data and metadata of the user data; and a DMA controller configured to access the buffer memory to check the metadata and to provide user data corresponding to a logical block address requested from a host to the host interface according to the checking result. The metadata includes status information of the user data stored at the buffer memory. Before providing the host interface with user data corresponding to a first logical block address requested from the host, the DMA controller checks metadata of user data corresponding to a second logical block address requested from the host. | 2014-06-05 |
20140156881 | MEMORY SYSTEM HAVING HIGH DATA TRANSFER EFFICIENCY AND HOST CONTROLLER - According to one embodiment, the host controller includes a register set to issue commands and a direct memory access (DMA) unit, and accesses a system memory and a device. First, second, third and fourth descriptors are stored in the system memory. The first descriptor includes a set of a plurality of pointers indicating a plurality of second descriptors. Each of the second descriptors comprises the third descriptor and fourth descriptor. The third descriptor includes a command number, etc. The fourth descriptor includes information indicating addresses and sizes of a plurality of data arranged in the system memory. The DMA unit sets, in the register set, the contents of the third descriptor forming the second descriptor, from the head of the first descriptor as a start point, and transfers data between the system memory and the host controller in accordance with the contents of the fourth descriptor. | 2014-06-05 |
20140156882 | MEMORY DEVICE, OPERATING METHOD THEREOF, AND DATA STORAGE DEVICE INCLUDING THE SAME - A memory device includes a data read/write block configured to store data in memory cells and read data from the memory cells; an input/output buffer block configured to buffer input data inputted through data pads and control signals inputted through control signal pads, and provide buffered input data and control signals to the data read/write block, and buffer read data read out through the data read/write block, and output buffered read data to an external device through the data pads, and a control logic configured to activate or deactivate the input/output buffer block based on an address which is inputted from the external device. | 2014-06-05 |
20140156883 | Method And Apparatus For Removing Data From A Recycled Electronic Device - A method and apparatus for transferring data from a recycled electronic device such as a mobile phone is disclosed herein. The apparatus is preferably a recycling kiosk which includes electrical connectors and an inspection area with an upper chamber, a lower chamber, a transparent plate and at least one camera in order to perform a visual analysis and an electrical analysis of the electronic device for determination of a value of the electronic device. The recycling kiosk also includes a processor managing the transfer of data. | 2014-06-05 |
20140156884 | ADAPTIVE ACCESSORY DETECTION AND MODE NEGOTIATION - Methods and apparatus for providing device detection and operational mode negotiation entirely over extant high-speed data bus pins or terminals. In one exemplary embodiment, methods and apparatus are disclosed enabling detection, negotiation and serial/video data transfer over USB 2.0 data interface pins in order to consolidate pin count on the interface and associated connector. Existing USB detection mechanisms are also leveraged to the maximum extent so as to eliminate the need for additional detection protocols. This approach allows for smaller connector and parent device form factor, while still maintaining all of the functional capabilities required for that interface. The breadth of USB-capable devices supported by such an interface is also markedly improved over prior art techniques. | 2014-06-05 |
20140156885 | External Device Extension Method and External Device - The present invention relates to an external device extension method and an external device. The external device is provided with a storage device interface and firmware for implementing operation requests of standard functions of the storage device interface. When the external device is connected to a host, the firmware communicates with the host according to standards of the storage device interface, so that the external device is identified by the host as a standard external storage device, and one or more of the operation names, parameter names, data names, and/or device status names supported by the external device are simulated as one or more directories and/or files. Upon receiving a standard directory and/or file read/write request from the host, the external device executes a corresponding external device operation instruction, processes written data, and returns, in response to a read request, data formatted according to the host's request. The use of the external device does not need any driver to be installed, and all functions of the external device can be accessed and used, making it possible for smart appliances running embedded software to connect to and use such external devices. | 2014-06-05 |
20140156886 | DATA MIGRATION METHOD AND APPARATUS - The present invention provides a data migration method and apparatus, where the method includes: after a second control board is inserted into a second control board slot, receiving, by a first control board, type information from the second control board, and determining whether the type information of the second control board and type information of the first control board are the same; and when determining that the type information of the second control board and the type information of the first control board are different, sending, by the first control board, configuration data stored by the first control board itself to the second control board, so that the second control board utilizes the configuration data to perform a configuration. | 2014-06-05 |
20140156887 | METHOD AND DEVICE FOR SERIAL DATA TRANSMISSION WHICH IS ADAPTED TO MEMORY SIZES - A method is described for serial data transmission in a bus system having at least two participating data processing units, the data processing units exchanging messages via the bus, the sent messages having a logical structure in accordance with CAN standard ISO 11898-1. When a first changeover condition is present, then, deviating from CAN, the data field of the messages can include more than eight bytes, the values of the data length code being interpreted, given the presence of the first changeover condition to determine the size of the data field. For forwarding data between the data field and the application software, at least one buffer memory is provided, and, if the size of the data field differs from the size of the buffer memory used, the forwarded quantity of data is adapted at least corresponding to the difference in size between the data field and the buffer memory. | 2014-06-05 |
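One concrete way the data length code (DLC) can be reinterpreted when the changeover condition holds is the CAN FD mapping, shown below as an assumption; the abstract does not name CAN FD, only a reinterpretation that allows data fields beyond eight bytes, together with adapting the forwarded quantity to the buffer size. The zero-padding choice is likewise an assumption.

```python
# DLC -> payload size table defined by CAN FD (ISO 11898-1), shown here
# as one concrete way a changeover condition can reinterpret the data
# length code to permit data fields larger than eight bytes.
_EXTENDED_SIZES = {9: 12, 10: 16, 11: 20, 12: 24, 13: 32, 14: 48, 15: 64}

def data_field_size(dlc, changeover):
    if not 0 <= dlc <= 15:
        raise ValueError("DLC is a 4-bit field")
    if not changeover or dlc <= 8:
        return min(dlc, 8)  # classic CAN caps the data field at 8 bytes
    return _EXTENDED_SIZES[dlc]

def adapt_to_buffer(payload, buffer_size):
    # Adapt the forwarded quantity to the size difference between the
    # data field and the buffer: truncate if the buffer is smaller,
    # zero-pad if it is larger (padding value is an assumption).
    return payload[:buffer_size].ljust(buffer_size, b"\x00")
```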
20140156888 | METHOD AND SYSTEM FOR DYNAMICALLY PROGRAMMABLE SERIAL/PARALLEL BUS INTERFACE - An apparatus, method, and system embodying some aspects of the present embodiments for arbitrating communication between multiple communication devices are provided. The arbitration system includes two communication devices and a packet traffic arbiter. The communication devices can be configured to receive or transmit data transmissions. The data transmissions can comprise protocol information. The protocol information can comprise transmission coordination information, handover information, and spectrum information. The packet traffic arbiter can be configured to coordinate the data transmissions between the two communication devices. The coordination can reduce traffic collisions or interference between low-power activities of the two communication devices. | 2014-06-05 |
20140156889 | APPARATUS AND METHOD FOR MONITORING SIGNALS TRANSMITTED IN BUS - A signal monitor includes a signal collecting unit, two registers, a processing unit, and a Universal Asynchronous Receiver/Transmitter (UART). The signal collecting unit collects real-time signals from a bus, converts the real-time signals into accessible data, and stores the accessible data in the two registers. The processing unit reads the accessible data from the two registers and determines status of the real-time signals by examining the accessible data in the reading. The UART asynchronously transmits the status of the real-time signals to a control terminal. A method for monitoring signals transmitted in a bus is also provided. | 2014-06-05 |
20140156890 | METHOD OF PERFORMING COLLECTIVE COMMUNICATION AND COLLECTIVE COMMUNICATION SYSTEM USING THE SAME - A method of performing collective communication in a collective communication system includes processing nodes, including: determining whether a command message, regarding one function among a broadcast function, a scatter function, and a gather function, is generated by a processor; determining a transmission order between the processing nodes by giving transmission priorities to processing nodes that do not communicate, based on a status of each of the processing nodes if it is determined that the command message regarding the one function among the broadcast function, the scatter function, and the gather function, is generated by the processor; and performing communication with respect to the command message based on the determined transmission order. | 2014-06-05 |
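The prioritization step, giving transmission priority to nodes that are not currently communicating, can be sketched as a simple two-tier ordering. The node-descriptor shape and the tie-breaking by original order are assumptions for illustration.

```python
def transmission_order(nodes):
    """Order processing nodes for a collective operation (broadcast,
    scatter, or gather): nodes that are not currently communicating
    get transmission priority, followed by the busy ones
    (illustrative two-tier rule)."""
    idle = [n["id"] for n in nodes if not n["communicating"]]
    busy = [n["id"] for n in nodes if n["communicating"]]
    return idle + busy
```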
20140156891 | SYSTEMS AND METHODS FOR AUTOMATICALLY GENERATING MASTER-SLAVE LATCH STRUCTURES WITH FULLY REGISTERED FLOW CONTROL - A method for automatically generating master-slave latch structures is disclosed. A method includes, from another logic synthesis system that invokes a logic synthesis system for generating master-slave latch structures, accessing high level design descriptions of a master-slave latch structure that indicate a fully registered flow control structure design and based on the high level design descriptions, generating a master-slave latch structure design to include at least one master-slave latch pair. | 2014-06-05 |
20140156892 | METHOD, SYSTEM, AND APPARATUS FOR LINK LATENCY MANAGEMENT - A link latency management for a high-speed point-to-point network (pTp) is described. The link latency management facilitates calculating latency of a serial interface by tracking a round trip delay of a header that contains latency information. Therefore, the link latency management facilitates testers, logic analyzers, or test devices to accurately measure link latency for a point-to-point architecture utilizing a serial interface. | 2014-06-05 |
20140156893 | CAN Bus Edge Timing Control Apparatus, Systems and Methods - Structures and methods herein insert one or more parallel "recessive nulling" driver impedances across a controller area network (CAN) bus starting at the time of a dominant-to-recessive data bit transition and extending for a selected recessive nulling time period. Doing so increases a rate of decay of a CAN bus dominant-to-recessive differential signal waveform, permits a shortened recessive bit time period, and allows for increased CAN bus bandwidth. Various modes of operation are applicable to various CAN bus node topologies. Recessive nulling may be applied to only the beginning portion of a recessive bit following a dominant bit ("LRN mode") or to the entire recessive bit time ("HRN mode"). And, some embodiments may apply LRN operations to some recessive CAN frame bits and HRN operations to others. | 2014-06-05 |
20140156894 | MSI EVENTS USING DYNAMIC MEMORY MONITORING - A method and system for managing message-signaled interrupt-based events sent from an event source to a host or a guest is disclosed. A central processing unit instructs an event source to write a message-signaled interrupt to a designated address of a random access memory of the host. The host or a guest of the central processing unit executes a memory monitoring instruction to the designated address. The host or the guest enters a wait state. The host or the guest detects a write of the message-signaled interrupt by the event source to the designated address, the message-signaled interrupt comprising data items pertaining to an event to be performed. The host or the guest exits from the wait state. The host or the guest performs an atomic operation with respect to the event based on the data items in the message-signaled interrupt. | 2014-06-05 |
20140156895 | USB DEVICE INTERRUPT SIGNAL - A method and system for sending an interrupt signal is described herein. The method may include detecting sensor data in a sensor controller and detecting a powered down port between the sensor controller and an operating system. The method may also include sending the interrupt signal from the sensor controller to the operating system. In addition, the method may include detecting the operating system has provided power to the powered down port. Furthermore, the method may include sending the sensor data from the sensor controller to the operating system. | 2014-06-05 |
20140156896 | ADVANCED PROGRAMMABLE INTERRUPT CONTROLLER IDENTIFIER (APIC ID) ASSIGNMENT FOR A MULTI-CORE PROCESSING UNIT - Following a restart or a reboot of a system that includes a multi-core processor, the multi-core processor may assign each active and eligible core a unique advanced programmable interrupt controller (APIC) identifier (ID). Initialization logic may detect a state of each of the plurality of processing cores as active or inactive. The initialization logic may detect an attribute of each of the plurality of processing cores as eligible to be assigned an APIC ID or as ineligible to be assigned the APIC ID. | 2014-06-05 |
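The assignment pass the abstract describes, walking the cores after restart and giving each active, eligible core a unique APIC ID, can be sketched as follows. The sequential numbering scheme and the dict-based core descriptors are illustrative assumptions.

```python
def assign_apic_ids(cores):
    """Give each active, eligible core a unique APIC ID after a restart
    or reboot. `cores` is a list of descriptors with 'active' and
    'eligible' flags; inactive or ineligible cores get no ID. Returns
    a mapping from core index to assigned APIC ID (illustrative
    sequential scheme)."""
    next_id = 0
    assignment = {}
    for index, core in enumerate(cores):
        if core["active"] and core["eligible"]:
            assignment[index] = next_id
            next_id += 1
    return assignment
```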
20140156897 | METHOD OF CONNECTING A PCIe BUS EXTENSION SYSTEM - A PCIe bus extension system, method, interface card and cable for connecting a PCIe-compliant peripheral device to a PCIe bus of a computer system. The interface card includes a printed circuit board, an edge connector adapted for insertion into a PCIe expansion slot on a motherboard of the computer system for transmitting PCIe signals between the motherboard and the interface card, an interface port configured to mate with a connector of the cable, and a logic integrated circuit on the printed circuit board, the logic integrated circuit functionally connecting the edge connector with the expansion slot and amplifying and propagating clock and data PCIe signals therebetween that are compliant with a PCIe standard. The interface card and cable lack the capability of transmitting power therethrough to a PCIe-compliant peripheral device connected to the interface card through the interface port. | 2014-06-05 |
20140156898 | PCI AND PCI EXPRESS VIRTUAL HOT PLUG SYSTEMS AND METHODS - Virtual hot plug systems and methods are described for Peripheral Component Interconnect (PCI), PCI Express (PCIe), and variants thereof. Specifically, the virtual hot plug systems and methods enable PCI related devices to support a hot plug configuration with such devices lacking hot plug controller hardware. The virtual hot plug systems and methods include intelligent polling logic for discovering the new PCI/PCIe devices with proper logic to access the hardware and avoid hanging or locking up the operating system. | 2014-06-05 |
20140156899 | BROADCAST SERIAL BUS TERMINATION - A subsea broadcast serial bus system includes a broadcast serial bus having a first signal line and a second signal line. One or more nodes are connected in parallel to the first signal line and the second signal line of the broadcast serial bus. Each node connects the first signal line to the second signal line via a node impedance. A subsea node connected to the broadcast serial bus includes an adjustable impedance that may be adjusted based on the number of nodes connected to the broadcast serial bus. | 2014-06-05 |
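Because every node connects its impedance in parallel across the two signal lines, the adjustable impedance each node should present follows from the rule that n equal resistances R in parallel combine to R/n. A small worked sketch (function names are illustrative):

```python
def per_node_impedance(target_ohms, node_count):
    """Impedance each subsea node should present so that `node_count`
    equal node impedances in parallel yield the desired overall bus
    termination: n equal resistors R in parallel combine to R/n, so
    each node uses R = target * n (illustrative calculation)."""
    if node_count < 1:
        raise ValueError("need at least one node")
    return target_ohms * node_count

def parallel(impedances):
    # Combined impedance of parallel resistive loads: 1 / sum(1/R).
    return 1.0 / sum(1.0 / r for r in impedances)
```

For example, four nodes targeting a 120-ohm overall termination would each adjust their impedance to 480 ohms.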
20140156900 | MODULAR CONTROL APPARATUS - A control apparatus has a number of modules arranged next to one another in a longitudinal direction. The modules each comprise at least one module part having a housing. Furthermore, the module part comprises a first electrical bus connector on a first side of the housing for electrical connection to a first neighboring module part adjacent in the longitudinal direction, and a second electrical bus connector on a second side, opposite the first side, of the housing for electrical connection to a second neighboring module part adjacent in the longitudinal direction. The module part further comprises at least one movable element, movable between a first position and a second position. In the first position, the movable element provides an electrical connection between the first bus connector and the second bus connector and, in the second position, provides an insulation point between the first bus connector and the second bus connector. | 2014-06-05 |
20140156901 | COMPUTING DEVICE, A SYSTEM AND A METHOD FOR PARALLEL PROCESSING OF DATA STREAMS - An apparatus for identification of input data against one or more learned signals is provided. The apparatus comprises a number of computational cores, each core having properties with at least some statistical independence from the other computational cores, the properties being set independently for each core. Each core is able to independently produce an output indicating recognition of a previously learned signal. The apparatus is further configured to process the outputs produced by the computational cores and to determine an identification of the input data based on the produced outputs. | 2014-06-05 |
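The recognition scheme in the abstract above (independently configured cores, each separately deciding whether it recognizes a learned signal, with the per-core outputs then combined) can be sketched in Python. The sign-projection signature, the majority-vote combination, and all names are illustrative assumptions, not details taken from the application:

```python
import random

class Core:
    """One computational core: a random projection whose parameters are
    seeded independently of every other core (a hypothetical stand-in for
    the statistically independent core properties)."""
    def __init__(self, dim, seed):
        rng = random.Random(seed)
        self.weights = [rng.uniform(-1, 1) for _ in range(dim)]
        self.learned = {}  # label -> signature bit stored at learning time

    def signature(self, signal):
        s = sum(w * x for w, x in zip(self.weights, signal))
        return 1 if s >= 0 else 0

    def learn(self, label, signal):
        self.learned[label] = self.signature(signal)

    def recognize(self, signal):
        """Independently report which learned signals this core matches."""
        sig = self.signature(signal)
        return {label for label, bit in self.learned.items() if bit == sig}

def identify(cores, signal):
    """Process the independent per-core outputs: majority vote."""
    votes = {}
    for core in cores:
        for label in core.recognize(signal):
            votes[label] = votes.get(label, 0) + 1
    if not votes:
        return None
    best = max(votes, key=votes.get)
    return best if votes[best] > len(cores) // 2 else None

cores = [Core(dim=8, seed=i) for i in range(16)]
learned = [1, 0, 1, 1, 0, 0, 1, 0]
for c in cores:
    c.learn("sig-A", learned)
print(identify(cores, learned))  # prints sig-A
```

Because each core's parameters are set independently, a spurious match in any single core is unlikely to survive the vote across all cores, which is the point of requiring statistical independence between cores.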
20140156902 | LINE CODING FOR LOW-RADIO NOISE MEMORY INTERFACES - According to some embodiments, a method and apparatus are provided to receive a first data burst associated with a first data line and a second data burst associated with a second data line, determine a first one or more stuff bits to be transmitted after the first data burst and a second one or more stuff bits to be transmitted after the second data burst, and output data comprising the first data burst and the first one or more stuff bits and the second data burst and the second one or more stuff bits. | 2014-06-05 |
20140156903 | SCALABLE STORAGE SYSTEM HAVING MULTIPLE STORAGE CHANNELS - Techniques are generally described that relate to a scalable storage system. One example scalable storage system may include a first storage channel including a first storage node, a second storage node, and a first serial link. The first storage node is coupled with the second storage node via the first serial link. The scalable storage system may include a multi-channel interface including a first input-channel coupled with the first storage node and a first output-channel coupled with the second storage node. For a first request transmitted from a computer system and received by the multi-channel interface, the multi-channel interface is configured to direct the first request via the first input-channel to the first storage node of the first storage channel. The first storage node is configured to process the first request. Upon determining that a request segment in the first request is directed to the second storage node, the first serial link is configured to transmit the request segment from the first storage node to the second storage node, allowing the second storage node to process the request segment. | 2014-06-05 |
20140156904 | BUS CONTROLLER - A bus controller has a displacer, an arithmetic logic unit coupled to the displacer, and a replacer selectively coupled to the displacer and the arithmetic logic unit. | 2014-06-05 |
20140156905 | System and Method for Intelligent Platform Management Interface Keyboard Controller Style Interface Multiplexing - An information handling system includes a processing node, an input/output (I/O) module coupled to the processing node via a high bandwidth interface, and a service processor coupled to the I/O module via a multi-master interface. A transaction between the processing node and the service processor that is targeted to a low pin count (LPC) bus is executed between the processing node and the service processor via the high bandwidth interface and the multi-master interface. | 2014-06-05 |
20140156906 | Virtual Trunking Over Physical Links - A technique is described in which at least one controlling bridge controls data traffic among devices located below it in the hierarchy. Those devices include a plurality of porting devices, such as line modules and port extenders, which ultimately communicate with an end point device, referred to as a station. At least two physical pathways from a controlling bridge to a station are grouped together into a virtual trunk to provide multiple physical pathways for packet transfer when operating in a dual-homed mode. | 2014-06-05 |
20140156907 | SMART MEMORY - Systems and methods to process packets of information using an on-chip processing system include a memory bank, an interconnect module, a controller, and one or more processing engines. The packets of information include a packet header and a packet payload. The packet header includes one or more operator codes. The transfer of individual packets is guided to a processing engine through the interconnect module and through the controller by operator codes included in the packets. | 2014-06-05 |
20140156908 | STALE POINTER DETECTION WITH OVERLAPPING VERSIONED MEMORY - In general, in one aspect, the invention relates to a method for managing virtual memory (VM). The method includes receiving, from an application, a first access request comprising a first VM address identifying a VM location, obtaining a current VM location version value for the VM location, obtaining a first submitted VM location version value from the first VM address, and in response to a determination that the current VM location version value and the first submitted VM location version value match: servicing the first access request using the first VM address. | 2014-06-05 |
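The version-matching check described in 20140156908 can be illustrated with a small sketch: each VM location carries a current version value, a VM address embeds the version it was created with, and an access is serviced only when the two match. The address layout (version packed into the low bits) and all names are hypothetical, not from the application:

```python
class VersionedVM:
    """Minimal sketch of stale-pointer detection with versioned memory.
    Reusing a location bumps its version, so addresses created before the
    reuse fail the version comparison and are detected as stale."""
    VERSION_BITS = 8

    def __init__(self, size):
        self.data = [0] * size
        self.version = [0] * size

    def address(self, location):
        """Build a VM address embedding the location's current version."""
        return (location << self.VERSION_BITS) | self.version[location]

    def free_and_reuse(self, location):
        """Reusing a location invalidates previously issued addresses."""
        self.version[location] = (self.version[location] + 1) % (1 << self.VERSION_BITS)

    def access(self, vm_address):
        location = vm_address >> self.VERSION_BITS
        submitted = vm_address & ((1 << self.VERSION_BITS) - 1)
        if submitted != self.version[location]:
            raise RuntimeError("stale pointer detected")
        return self.data[location]

vm = VersionedVM(16)
ptr = vm.address(3)
vm.access(ptr)            # versions match: the access is serviced
vm.free_and_reuse(3)      # location reused; current version is bumped
try:
    vm.access(ptr)        # submitted version no longer matches
    stale_caught = False
except RuntimeError:
    stale_caught = True
print(stale_caught)       # prints True
```

The width of the version field bounds how many reuses can occur before a stale address could alias a current one; eight bits here is purely illustrative.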
20140156909 | Systems and Methods for Dynamic Optimization of Flash Cache in Storage Devices - In various embodiments, a storage device includes a magnetic media, a cache memory, and a drive controller. In embodiments, the drive controller is configured to establish a portion of the cache memory as an archival zone having a cache policy to maximize write hits. The drive controller is further configured to pre-erase the archival zone, direct writes from a host to the archival zone, and flush writes from the archival zone to the magnetic media. In embodiments, the drive controller is configured to establish a portion of the cache memory as a retrieval zone having a cache policy to maximize read hits. The drive controller is further configured to pre-fetch data from the magnetic media to the retrieval zone, transfer data from the retrieval zone to a host upon request by the host, and transfer read ahead data to the retrieval zone to replace data transferred to the host. | 2014-06-05 |
20140156910 | Automated Space Management for Server Flash Cache - Techniques for automatically allocating space in a flash storage-based cache are provided. In one embodiment, a computer system collects I/O trace logs for a plurality of virtual machines or a plurality of virtual disks and determines cache utility models for the plurality of virtual machines or the plurality of virtual disks based on the I/O trace logs. The cache utility model for each virtual machine or each virtual disk defines an expected utility of allocating space in the flash storage-based cache to the virtual machine or the virtual disk over a range of different cache allocation sizes. The computer system then calculates target cache allocation sizes for the plurality of virtual machines or the plurality of virtual disks based on the cache utility models and allocates space in the flash storage-based cache based on the target cache allocation sizes. | 2014-06-05 |
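The allocation step in 20140156910 (turning per-VM cache utility models into target cache allocation sizes) admits a simple greedy sketch: repeatedly grant one unit of flash cache to whichever virtual machine's utility curve promises the largest marginal gain. The curve shapes and the greedy policy are illustrative assumptions, not the application's method:

```python
def allocate_cache(utility_curves, total_units):
    """Greedy target-size calculation from cache utility models.
    utility_curves maps a VM name to a list u where u[k] is the expected
    utility of allocating k cache units (non-decreasing)."""
    alloc = {vm: 0 for vm in utility_curves}
    for _ in range(total_units):
        best_vm, best_gain = None, 0.0
        for vm, curve in utility_curves.items():
            k = alloc[vm]
            if k + 1 < len(curve):
                gain = curve[k + 1] - curve[k]  # marginal utility of one more unit
                if gain > best_gain:
                    best_vm, best_gain = vm, gain
        if best_vm is None:
            break  # no VM gains anything from more cache
        alloc[best_vm] += 1
    return alloc

curves = {
    "vm1": [0, 10, 14, 15, 15],   # strong initial benefit, then flat
    "vm2": [0, 3, 6, 9, 12],      # steady linear benefit
}
print(allocate_cache(curves, 4))  # prints {'vm1': 2, 'vm2': 2}
```

With concave utility curves (diminishing returns, which I/O-trace-derived hit-rate models typically exhibit), this greedy step yields the allocation that maximizes total expected utility.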
20140156911 | HOST READ COMMAND RETURN REORDERING BASED ON TIME ESTIMATION OF FLASH READ COMMAND COMPLETION - To manage data returns to a host in response to read commands, an operation monitor of a solid-state drive (SSD) manages counters used to hold metrics that characterize the estimated time to complete a read operation on a corresponding flash die. A timer generates a periodic event which decrements the counters over time. The value stored in each counter is generated for flash operations submitted to the corresponding die and is, generally, based on the operational history and the physical location of the operation. Whenever a read command is scheduled for submission to a particular die, the time estimate for that particular read operation is retrieved and, based on this information, the optimum order in which to return data to the host is determined. This order is used to schedule and program data transfers to the host so that a minimum number of read commands are blocked by other read commands. | 2014-06-05 |
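The counter mechanism above (per-die counters incremented on submission, drained by a periodic timer, and consulted to order host returns) can be sketched as follows; the durations, tick granularity, and names are hypothetical:

```python
class DieTimeEstimator:
    """Per-die busy-time counters for a hypothetical SSD: submitting an
    operation adds its estimated duration to the die's counter, and a
    periodic timer tick decrements every counter, so each counter
    approximates the time remaining until that die is free."""
    def __init__(self, num_dies):
        self.counters = [0] * num_dies

    def submit(self, die, duration):
        self.counters[die] += duration

    def tick(self, elapsed=1):
        self.counters = [max(0, c - elapsed) for c in self.counters]

    def estimate(self, die, duration):
        """Estimated completion time of a new read on this die."""
        return self.counters[die] + duration

def return_order(estimator, reads):
    """Order host data returns by estimated read completion time, so a
    slow read does not block data that will be ready sooner."""
    return sorted(reads, key=lambda r: estimator.estimate(r[1], r[2]))

est = DieTimeEstimator(num_dies=4)
est.submit(0, 100)                     # die 0 is busy with an earlier op
reads = [("cmdA", 0, 50), ("cmdB", 1, 50), ("cmdC", 2, 70)]
est.tick(elapsed=20)                   # time passes; counters drain
print([cmd for cmd, _, _ in return_order(est, reads)])
# prints ['cmdB', 'cmdC', 'cmdA']
```

cmdA targets the busy die, so its estimated completion (80 remaining + 50) is latest; returning cmdB and cmdC first keeps the host pipeline fed.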
20140156912 | MEMORY MANAGEMENT METHOD, AND MEMORY CONTROLLER AND MEMORY STORAGE APPARATUS USING THE SAME - A memory management method and a memory controller and a memory storage apparatus using the same are provided. The method includes applying different detection biases to read data stored in physical pages of a rewritable non-volatile memory module and calculating the number of error bits according to the read data. The method further includes estimating a value of a wearing degree of each physical page according to the calculated number of error bits and operating the rewritable non-volatile memory module according to the value of the wearing degree of each physical page. Accordingly, the method can effectively identify the wearing degree of the rewritable non-volatile memory module and operate the rewritable non-volatile memory module by applying a corresponding management mechanism, so as to prevent data errors. | 2014-06-05 |
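The estimation step above (count error bits under different detection biases, then map the count to a wearing-degree value) can be illustrated with a sketch; the thresholds, the worst-case-over-biases rule, and all names are assumptions for illustration only:

```python
def count_error_bits(written, read_back):
    """Error bits = bit positions where the read-back data differs."""
    return sum(bin(w ^ r).count("1") for w, r in zip(written, read_back))

def estimate_wearing(error_counts, thresholds=(4, 16, 48)):
    """Map a physical page's error-bit counts (one count per applied
    detection bias) to a coarse wearing-degree value 0..3. The threshold
    values are illustrative, not from the application."""
    errors = max(error_counts)            # worst case over the biases
    for degree, limit in enumerate(thresholds):
        if errors <= limit:
            return degree
    return len(thresholds)                # heavily worn

written = [0b1010_1010, 0b1111_0000]
bias_a  = [0b1010_1010, 0b1111_0000]      # read under bias A: clean
bias_b  = [0b1010_1000, 0b1111_0001]      # read under bias B: 2 error bits
counts = [count_error_bits(written, bias_a),
          count_error_bits(written, bias_b)]
print(estimate_wearing(counts))           # prints 0 (page barely worn)
```

A controller could then pick a management mechanism per degree, for example normal use at degree 0, stronger ECC or refresh at intermediate degrees, and retirement of the page at the highest degree.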
20140156913 | DATA PROCESSING METHOD, MEMORY CONTROLLER AND MEMORY STORAGE APPARATUS - A data processing method, a memory controller and a memory storage apparatus are provided. The method includes receiving a write command from a host system. A write data stream corresponding to the write command includes multiple sub-data streams, and each of the sub-data streams is attached with a data index mark by an application installed in the host system. The application determines the data index mark attached to each sub-data stream in accordance with a first rule including a predetermined function, an initial parameter selecting manner and a parameter increasing manner, in which the first rule is pre-agreed by the application with the memory storage apparatus. The method also includes reordering the sub-data streams according to the first rule and the data index mark of each sub-data stream. The method further includes transmitting the reordered sub-data streams to a smartcard chip in the memory storage apparatus. | 2014-06-05 |
20140156914 | BLIND AND DECISION DIRECTED MULTI-LEVEL CHANNEL ESTIMATION - Data which is read back from a multi-level storage device is received. For each bin in a set of bins, a portion of reads which fall into that particular bin and which are to be maintained is received. The set of bins is adjusted so that the read-back data, after assignment using the adjusted set of bins, matches the received portions of reads which are to be maintained. | 2014-06-05 |
20140156915 | PARTITIONING A FLASH MEMORY DATA STORAGE DEVICE - A method of partitioning a data storage device that has a plurality of memory chips includes determining a number of memory chips in the data storage device; defining, via a host coupled to the data storage device, a first partition of the data storage device, where the first partition includes a first subset of the plurality of memory chips; and defining, via the host, a second partition of the data storage device, where the second partition includes a second subset of the plurality of memory chips, such that the first subset does not include any memory chips of the second subset and the second subset does not include any memory chips of the first subset. | 2014-06-05 |
20140156916 | CONTROL ARRANGEMENTS AND METHODS FOR ACCESSING BLOCK ORIENTED NONVOLATILE MEMORY - A read/write arrangement is described for use in accessing at least one nonvolatile memory device in read/write operations. The memory device is made up of a plurality of memory cells organized as a set of pages that are physically and sequentially addressable, each page having a page length such that a page boundary is defined between successive pages in the set. The read/write arrangement includes a control arrangement configured to store and access a group of data blocks associated with a given write operation in a successive series of pages of the memory, such that at least an initial page in the series is filled and each block has a block length that is different from the page length. | 2014-06-05 |
20140156917 | Storage Devices, Flash Memories, and Methods of Operating Storage Devices - A storage device is provided including a flash memory, and a controller programming first bit data and second bit data into the flash memory and not backing up the first bit data when programming the first bit data and the second bit data in the same transaction and backing up the first bit data when programming the first bit data and the second bit data in different transactions, wherein the first bit data is less significant bit data than the second bit data, and each of the transactions is determined using a sync signal transmitted from a host. | 2014-06-05 |
20140156918 | STORAGE DEVICES INCLUDING MEMORY DEVICE AND METHODS OF OPERATING THE SAME - Storage devices including a memory device and methods of operating the storage devices are provided. The storage devices may include a controller which is configured to program first bit data and second bit data paired with the first bit data into a memory device. The first bit data may be less significant bit data than the second bit data. The controller may be configured to selectively perform or skip backup of the first bit data when programming the second bit data. | 2014-06-05 |
20140156919 | Isolation Switching For Backup Memory - Certain embodiments described herein include a memory system having a volatile memory subsystem, a non-volatile memory subsystem, a controller coupled to the non-volatile memory subsystem, and a circuit coupled to the volatile memory subsystem, to the controller, and to a host system. In a first mode of operation, the circuit is operable to selectively isolate the controller from the volatile memory subsystem, and to selectively couple the volatile memory subsystem to the host system to allow data to be communicated between the volatile memory subsystem and the host system. In a second mode of operation, the circuit is operable to selectively couple the controller to the volatile memory subsystem to allow data to be communicated between the volatile memory subsystem and the non-volatile memory subsystem using the controller, and the circuit is operable to selectively isolate the volatile memory subsystem from the host system. | 2014-06-05 |
20140156920 | Isolation Switching For Backup Of Registered Memory - Certain embodiments described herein include a memory system having a register coupled to a host system and operable to receive address and control signals from the host system, a volatile memory subsystem, a non-volatile memory subsystem, a controller coupled to the non-volatile memory subsystem, and a circuit coupled to the register, the volatile memory subsystem, and the controller. In a first mode of operation, the circuit is operable to selectively isolate the controller from the volatile memory subsystem, and to selectively couple the volatile memory subsystem to the register to allow data to be communicated between the volatile memory subsystem and the host system. In a second mode of operation, the circuit is operable to selectively couple the controller to the volatile memory subsystem to allow data to be communicated between the volatile memory subsystem and the non-volatile memory subsystem using the controller, and is operable to selectively isolate the volatile memory subsystem from the register. | 2014-06-05 |
20140156921 | METHODS FOR WRITING DATA TO NON-VOLATILE MEMORY-BASED MASS STORAGE DEVICES - Methods of operating a non-volatile solid state memory-based mass storage device having at least one non-volatile memory component. In one aspect of the invention, the one or more memory components define a memory space partitioned into user memory and over-provisioning pools based on a P/E cycle count stored in a block information record. The storage device transfers the P/E cycle count of erased blocks to a host and the host stores the P/E cycle count in a content addressable memory. During a host write to the storage device, the host issues a low P/E cycle count number as a primary address to the content addressable memory, which returns available block addresses of blocks within the over-provisioning pool as a first dimension in a multidimensional address space. Changed files are preferably updated in append mode and the previous version can be maintained for version control. | 2014-06-05 |
20140156922 | NON-VOLATILE MEMORY DEVICE ADAPTED TO IDENTIFY ITSELF AS A BOOT MEMORY - Non-volatile memory devices and methods of their operation are provided. One such non-volatile memory device has an interface and a control circuit. The non-volatile memory device is adapted to identify itself as a boot memory in response to receiving an interrogation request on the interface. | 2014-06-05 |
20140156923 | ROW HAMMER MONITORING BASED ON STORED ROW HAMMER THRESHOLD VALUE - Detection logic of a memory subsystem obtains a threshold for a memory device that indicates a number of accesses within a time window that causes risk of data corruption on a physically adjacent row. The detection logic obtains the threshold from a register that stores configuration information for the memory device, and can be a register on the memory device itself and/or can be an entry of a configuration storage device of a memory module to which the memory device belongs. The detection logic determines whether a number of accesses to a row of the memory device exceeds the threshold. In response to detecting the number of accesses exceeds the threshold, the detection logic can generate a trigger to cause the memory device to perform a refresh targeted to a physically adjacent victim row. | 2014-06-05 |
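The detection logic in 20140156923 (threshold read from stored configuration, per-row access counts within a window, and a targeted refresh of physically adjacent victim rows when the threshold is exceeded) can be sketched as follows; the threshold value and all names are illustrative:

```python
class RowHammerMonitor:
    """Sketch of row-hammer detection logic. The config dict stands in
    for the register or module configuration entry that stores the
    device's access threshold."""
    def __init__(self, config):
        self.threshold = config["max_activates_per_window"]
        self.counts = {}          # row -> accesses in the current window
        self.refreshed = []       # victim rows refreshed so far

    def new_window(self):
        """Start a new time window: counts reset."""
        self.counts.clear()

    def access(self, row):
        self.counts[row] = self.counts.get(row, 0) + 1
        if self.counts[row] > self.threshold:
            # trigger a refresh targeted at the physically adjacent
            # victim rows of the hammered (aggressor) row
            self.refreshed.extend([row - 1, row + 1])
            self.counts[row] = 0

config = {"max_activates_per_window": 3}   # illustrative threshold
mon = RowHammerMonitor(config)
for _ in range(4):
    mon.access(7)                          # 4th access exceeds threshold
print(mon.refreshed)                       # prints [6, 8]
```

Real thresholds are orders of magnitude larger (tens of thousands of activates per refresh window); the small value here only keeps the example short.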
20140156924 | SEMICONDUCTOR MEMORY DEVICE WITH IMPROVED OPERATING SPEED AND DATA STORAGE DEVICE INCLUDING THE SAME - A semiconductor memory device includes a power block configured to generate an internal voltage based on an external voltage which is applied through a power pad; a circuit block configured to operate according to the internal voltage and drive memory cells; and a CAM (content-addressable memory) block configured to operate according to the external voltage and store setting information necessary for driving the memory cells. | 2014-06-05 |
20140156925 | SELECTION OF ALLOCATION POLICY AND FORMAT FOR VIRTUAL MACHINE DISK IMAGES - A system and method are disclosed for selecting an allocation policy and format for storing a disk image of a virtual machine (VM). In accordance with one embodiment, a computer system that hosts a virtual machine (VM) selects an allocation policy and format for storing the disk image on a particular storage device (e.g., a magnetic hard disk, a Universal Serial Bus [USB] solid state drive, a Redundant Array of Independent Disks [RAID] system, a network attached storage [NAS] array, etc.), where the selection is based on one or more capabilities of the storage device, and on a parameter that indicates a tradeoff between performance and storage consumption. | 2014-06-05 |
20140156926 | AUTOMATED STORAGE PROVISIONING WITHIN A CLUSTERED COMPUTING ENVIRONMENT - The present invention provides an approach for automatic storage planning and provisioning within a clustered computing environment (e.g., a cloud computing environment). The present invention will receive planning input for a set of storage area network volume controllers (SVCs), the planning input indicating a potential load on the SVCs and its associated components. Configuration data for a set of storage components (i.e., the set of SVCs, a set of managed disk (Mdisk) groups associated with the set of SVCs, and a set of backend storage systems) will also be collected. Based on this configuration data, the set of storage components will be filtered to identify candidate storage components capable of addressing the potential load. Then, performance data for the candidate storage components will be analyzed to identify an SVC and an Mdisk group to address the potential load. | 2014-06-05 |
20140156927 | COMPUTING DEVICE SYSTEM AND INFORMATION MANAGING METHOD FOR REARRANGEMENT OF DATA BASED ON ACCESS CHARACTERISTIC RELATED TO A TASK - A technique provides a decision criterion for determining a transfer destination layer in rearrangement processing. A computer configures rearrangement reference information showing whether an access characteristic related to a task executed on a plurality of host computing devices is considered as a decision criterion for transfer destination determination in rearrangement processing, which transfers data between actual storage areas of physical storage devices of different response performance. The storage subsystem refers to the rearrangement reference information and, based on an access characteristic of the plurality of computing devices with respect to the actual storage areas assigned to them, executes rearrangement processing that transfers data stored in the actual storage areas to different actual storage areas in physical storage devices of different response performance. | 2014-06-05 |
20140156928 | Media Aware Distributed Data Layout - A storage system includes a plurality of vdisks, with each vdisk containing a plurality of storage segments, and each segment providing a specific class of service (CoS) for storage. Each vdisk stores files with data and metadata distributed among its storage segments. A storage system includes a memory having multiple classes of service. The system includes an interface for storing a file as blocks of data associated with a class of service in the memory. The interface chooses the class of service for a block on a block-by-block basis for storage. A file system for storing a file includes a plurality of vdisks; a method for storing a file is also described. | 2014-06-05 |
20140156929 | NETWORK-ON-CHIP USING REQUEST AND REPLY TREES FOR LOW-LATENCY PROCESSOR-MEMORY COMMUNICATION - A Network-On-Chip (NOC) organization comprises a die having a cache area and a core area, a plurality of core tiles arranged in the core area in a plurality of subsets, at least one cache memory bank arranged in the cache area, whereby the at least one cache memory bank is distinct from each of the plurality of core tiles. The NOC organization further comprises an interconnect fabric comprising a request tree to connect to a first cache memory bank of the at least one cache memory bank, each core tile of a first one of the subsets, the first subset corresponding to the first cache memory bank, such that each core tile of the first subset is connected to the first cache memory bank only, and allow guiding data packets from each core tile of the first subset to the first memory bank, and a reply tree to connect the first cache memory bank to each core tile of the first subset, and allow guiding data packets from the first cache memory bank to a core tile of the first subset. | 2014-06-05 |
20140156930 | CACHING OF VIRTUAL TO PHYSICAL ADDRESS TRANSLATIONS - A data processing apparatus comprising: at least one initiator device for issuing transactions, a hierarchical memory system comprising a plurality of caches and a memory and memory access control circuitry. The initiator device identifies storage locations using virtual addresses and the memory system stores data using physical addresses, the memory access control circuitry is configured to control virtual address to physical address translations. The plurality of caches, comprise a first cache and a second cache. The first cache is configured to store a plurality of address translations of virtual to physical addresses that the initiator device has requested. The second cache is configured to store a plurality of address translations of virtual to physical addresses that it is predicted that the initiator device will subsequently request. The first and second cache are arranged in parallel with each other such that the first and second caches can be accessed during a same access cycle. | 2014-06-05 |
20140156931 | STATE ENCODING FOR CACHE LINES - A method and apparatus for state encoding of cache lines is described. Some embodiments of the method and apparatus support probing, in response to a first probe of a cache line in a first cache, a copy of the cache line in a second cache when the cache line is stale and the cache line is associated with a copy of the cache line stored in the second cache that can bypass notification of the first cache in response to modifying the copy of the cache line. | 2014-06-05 |
20140156932 | ELIMINATING FETCH CANCEL FOR INCLUSIVE CACHES - A method and apparatus for eliminating fetch cancels for inclusive caches are presented. Some embodiments of the apparatus include a first cache configurable to issue fetch or prefetch requests to a second cache that is inclusive of the first cache. The first cache is not permitted to cancel issued fetch or prefetch requests to the second cache. Some embodiments of the method include preventing the first cache from canceling issued fetch or prefetch requests to a second cache that is inclusive of the first cache. | 2014-06-05 |
20140156933 | System, Method, and Apparatus for Improving Throughput of Consecutive Transactional Memory Regions - Systems, apparatuses, and methods for improving transactional memory (TM) throughput using a TM region indicator (or color) are described. Through the use of TM region indicators, younger TM regions can have their instructions retired while waiting for older TM regions to commit. | 2014-06-05 |
20140156934 | STORAGE APPARATUS AND MODULE-TO-MODULE DATA TRANSFER METHOD - A storage apparatus includes controller modules configured to have a cache memory and to control a storage device, respectively, and communication channels that connect the controller modules in a mesh topology, where one controller module providing an instruction to perform data transfer in which the controller module is specified as a transfer source and another controller module is specified as a transfer destination. The instruction is provided to a controller module directly connected to the other controller modules using a corresponding one of the communication channels, and configured to perform data transfer from the cache memory of the one controller module to the cache memory of the other controller module, in accordance with the instruction. | 2014-06-05 |
20140156935 | Unified Exclusive Memory - In one embodiment, a processor includes at least one execution unit, a near memory, and memory management logic. The memory management logic may be to manage the near memory and a far memory as a unified exclusive memory, where the far memory is external to the processor. Other embodiments are described and claimed. | 2014-06-05 |
20140156936 | SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - Storage tracks are destaged from each rank that uses greater than a predetermined percentage of a predetermined amount of storage space, with respect to the current amount of storage space allocated to each rank, until the current amount of storage space used by each respective rank equals the predetermined percentage of the predetermined amount of storage space. Storage tracks are not destaged from any rank that uses less than or equal to the predetermined percentage of the predetermined amount of storage space. | 2014-06-05 |
20140156937 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks are destaged from the write cache when the at least one host is idle and are refrained from being destaged when the at least one host is not idle. Each rank is monitored for write operations from the at least one host, and a determination is made whether the at least one host is idle with respect to each respective rank based on that monitoring, such that the at least one host may be determined to be idle with respect to a first rank and not idle with respect to a second rank. | 2014-06-05 |
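The per-rank idle determination described in 20140156937 can be sketched as follows; the idle criterion (no writes observed during the previous monitoring interval) and all names are illustrative assumptions:

```python
class DestageManager:
    """Sketch of background destaging: each rank is monitored for host
    write operations, and write-cache tracks are destaged from a rank
    only while the host is idle with respect to that rank. The host can
    therefore be idle for one rank while active on another."""
    def __init__(self, ranks):
        self.writes_seen = {r: 0 for r in ranks}    # writes this interval
        self.cached_tracks = {r: [] for r in ranks} # write cache contents
        self.destaged = {r: [] for r in ranks}      # tracks written out

    def host_write(self, rank, track):
        self.writes_seen[rank] += 1
        self.cached_tracks[rank].append(track)

    def background_pass(self):
        for rank, seen in self.writes_seen.items():
            if seen == 0 and self.cached_tracks[rank]:
                # host idle with respect to this rank: destage its tracks
                self.destaged[rank].extend(self.cached_tracks[rank])
                self.cached_tracks[rank].clear()
            self.writes_seen[rank] = 0  # start a new monitoring interval

mgr = DestageManager(ranks=["r1", "r2"])
mgr.host_write("r1", track=10)
mgr.background_pass()   # r1 saw a write this interval: no destage yet
mgr.background_pass()   # r1 now idle: its cached track is destaged
print(mgr.destaged["r1"], mgr.cached_tracks["r1"])
# prints [10] []
```

Holding back destages while a rank is busy keeps the disks free for foreground host I/O; the idle interval length would be a tuning parameter in practice.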
20140156938 | CACHE REGION CONCEPT - A method to store objects in a memory cache is disclosed. A request is received from an application to store an object in a memory cache associated with the application. The object is stored in a cache region of the memory cache based on an identification that the object has no potential for storage in a shared memory cache and a determination that the cache region is associated with a storage policy that specifies that objects to be stored in the cache region are to be stored in a local memory cache and that a garbage collector is not to remove objects stored in the cache region from the local memory cache. | 2014-06-05 |
20140156939 | METHODOLOGY FOR FAST DETECTION OF FALSE SHARING IN THREADED SCIENTIFIC CODES - A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code. | 2014-06-05 |
20140156940 | Mechanism for Page Replacement in Cache Memory - A mechanism for page replacement for cache memory is disclosed. A method of the disclosure includes referencing an entry of a data structure of a cache in memory to identify a stored value of an eviction counter, the stored value of the eviction counter placed in the entry when a page of a file previously stored in the cache was evicted from the cache, determining a refault distance of the page of the file based on a difference between the stored value of the eviction counter and a current value of the eviction counter, and adjusting a ratio of cache lists maintained by the processing device to track pages in the cache, the adjusting based on the determined refault distance. | 2014-06-05 |
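The refault-distance computation in 20140156940 (store the eviction counter's value when a page is evicted, then take the difference against the current counter when the page faults back in) can be sketched directly; the shadow-entry structure and names are illustrative:

```python
class RefaultTracker:
    """Sketch of refault-distance tracking for page replacement: on
    eviction, the current value of a global eviction counter is stored in
    a shadow entry for the page; when the page refaults, the difference
    between the counter now and the stored value estimates how many
    evictions ago the page was pushed out (and thus roughly how much more
    cache it would have taken to keep it)."""
    def __init__(self):
        self.eviction_counter = 0
        self.shadow = {}          # page -> counter value at eviction

    def evict(self, page):
        self.shadow[page] = self.eviction_counter
        self.eviction_counter += 1

    def refault_distance(self, page):
        stored = self.shadow.pop(page, None)
        if stored is None:
            return None           # no shadow entry: a cold fault
        return self.eviction_counter - stored

tracker = RefaultTracker()
tracker.evict("pageA")
tracker.evict("pageB")
tracker.evict("pageC")
print(tracker.refault_distance("pageA"))  # prints 3
```

A small refault distance suggests the page was evicted prematurely, which is the signal used to adjust the ratio between the cache lists tracking active and inactive pages.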
20140156941 | Tracking Non-Native Content in Caches - The described embodiments include a cache with a plurality of banks that includes a cache controller. In these embodiments, the cache controller determines a value representing non-native cache blocks stored in at least one bank in the cache, wherein a cache block is non-native to a bank when a home for the cache block is in a predetermined location relative to the bank. Then, based on the value representing non-native cache blocks stored in the at least one bank, the cache controller determines at least one bank in the cache to be transitioned from a first power mode to a second power mode. Next, the cache controller transitions the determined at least one bank in the cache from the first power mode to the second power mode. | 2014-06-05 |
20140156942 | Shared Virtual Memory Between A Host And Discrete Graphics Device In A Computing System - In one embodiment, the present invention includes a device that has a device processor and a device memory. The device can couple to a host with a host processor and host memory. Both of the memories can have page tables to map virtual addresses to physical addresses of the corresponding memory, and the two memories may appear to a user-level application as a single virtual memory space. Other embodiments are described and claimed. | 2014-06-05 |
20140156943 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a detection unit configured to detect a damaged file from files stored in a cache area, a determination unit configured to determine whether the damaged file detected by the detection unit is restorable, a restoration unit configured to, if the determination unit determines that the damaged file is restorable, delete every restorable file in the cache area including the damaged file and restore the deleted files in the cache area, and an initialization unit configured to, if the determination unit determines that the damaged file is not restorable, delete every file in the cache area and initialize the cache area. | 2014-06-05 |
20140156944 | MEMORY MANAGEMENT APPARATUS, METHOD, AND SYSTEM - The present invention discloses a memory management apparatus, method, and system. An OS-based memory management apparatus associated with main memory includes a memory allocation controller configured to control a first memory region within the main memory such that the first memory region is used as a buffer cache, depending on whether an input/output device is active, thereby allowing memory reservation for the input/output device in the OS. The memory allocation controller controls the first memory region such that the first memory region is used as an eviction-based cache. | 2014-06-05 |
20140156945 | MULTI-STAGE TRANSLATION OF PREFETCH REQUESTS - A device for multi-stage translation of prefetch requests includes a prefetch queue for providing queued prefetch requests, each of the queued prefetch requests including N different control entries; N serial-connected translation stages for the translation of N control entries of one of the queued prefetch requests into a translated prefetch request, wherein a translation in an i-th translation stage is dependent on a translation in an (i−1)-th translation stage, i ∈ [1, …, N]; and a prefetch issuer which is configured to control an index for each of the N different control entries in the prefetch queue and to issue a prefetch of the indexed control entry of the N different control entries for the highest non-stalled translation stage. | 2014-06-05 |
20140156946 | MEMORY PREFETCH SYSTEMS AND METHODS - Systems and methods are disclosed herein, including those that operate to prefetch a programmable number of data words from a selected memory vault in a stacked-die memory system when a pipeline associated with the selected memory vault is empty. | 2014-06-05 |
20140156947 | METHOD AND APPARATUS FOR SUPPORTING A PLURALITY OF LOAD ACCESSES OF A CACHE IN A SINGLE CYCLE TO MAINTAIN THROUGHPUT - A method for supporting a plurality of requests for access to a data cache memory (“cache”) is disclosed. The method comprises accessing a first set of requests to access the cache, wherein the cache comprises a plurality of blocks. Further, responsive to the first set of requests to access the cache, the method comprises accessing a tag memory that maintains a plurality of copies of tags for each entry in the cache and identifying tags that correspond to individual requests of the first set. The method also comprises performing arbitration in a same clock cycle as the accessing and identifying of tags, wherein the arbitration comprises: (a) identifying a second set of requests to access the cache from the first set, wherein the second set accesses a same block within the cache; and (b) selecting each request from the second set to receive data from the same block. | 2014-06-05 |
20140156948 | APPARATUSES AND METHODS FOR PRE-FETCHING AND WRITE-BACK FOR A SEGMENTED CACHE MEMORY - Apparatuses and methods for a cache memory are described. In an example method, a transaction history associated with a cache block is referenced, and requested information is read from memory. Additional information is read from memory based on the transaction history, wherein the requested information and the additional information are read together from memory. The requested information is cached in a segment of a cache line of the cache block, and the additional information is cached in another segment of the cache line. In another example, the transaction history is also updated to reflect the caching of the requested information and the additional information. In another example, read masks associated with the cache tag are referenced for the transaction history, the read masks identifying segments of a cache line previously accessed. | 2014-06-05 |
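The read-mask idea above can be sketched as a small selection function. This is a hedged illustration, not the patented method: the function name, the 8-segment line width, and the return shape are assumptions:

```python
def fetch_with_history(read_mask, requested_seg, num_segments=8):
    """Sketch: choose which cache-line segments to read from memory.

    `read_mask` is a bitmask of segments previously accessed for this
    cache block (the per-tag transaction history). The requested
    segment is always fetched; segments set in the mask are fetched
    alongside it in the same memory transaction. The history is then
    updated to reflect everything cached this time.
    """
    segments = {requested_seg}
    for seg in range(num_segments):
        if read_mask & (1 << seg):
            segments.add(seg)
    # Update the history mask with what was actually cached.
    new_mask = read_mask
    for seg in segments:
        new_mask |= 1 << seg
    return sorted(segments), new_mask
```

With a history of segments 0 and 2 and a request for segment 3, the single memory transaction would cover segments 0, 2, and 3, and the mask now also records segment 3.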
20140156949 | FAULT HANDLING IN ADDRESS TRANSLATION TRANSACTIONS - A data processing apparatus having a memory configured to store tables having virtual to physical address translations, a cache configured to store a subset of the virtual to physical address translations and cache management circuitry configured to control transactions received from the processor requesting virtual address to physical address translations. The data processing apparatus identifies where a faulting transaction has occurred during execution of a context and whether the faulting transaction has a transaction stall or transaction terminate fault. The cache management circuitry is responsive to identification of the faulting transaction having a transaction terminate fault to invalidate all address translations in the cache that relate to the context of the faulting transaction such that a valid bit associated with each entry in the cache is set to invalid for the address translations. | 2014-06-05 |
20140156950 | EMULATED MESSAGE SIGNALED INTERRUPTS IN MULTIPROCESSOR SYSTEMS - A processor with coherency-leveraged support for low latency message signaled interrupt handling includes multiple execution cores and their associated cache memories. A first cache memory associated with a first of the execution cores includes a plurality of cache lines. The first cache memory has a cache controller including hardware logic, microcode, or both to identify a first cache line as an interrupt reserved cache line and map the first cache line to a selected memory address associated with an I/O device. The selected memory address may be a portion of configuration data in persistent storage accessible to the processor. The controller may set a coherency state of the first cache line to shared and, in response to detecting an I/O transaction including I/O data from the I/O device and containing a reference to the selected memory address, emulate a first message signaled interrupt identifying the selected memory address. | 2014-06-05 |
20140156951 | MULTICORE, MULTIBANK, FULLY CONCURRENT COHERENCE CONTROLLER - This invention optimizes non-shared accesses and avoids dependencies across coherent endpoints to ensure bandwidth across the system even when sharing. The coherence controller is distributed across all coherent endpoints. The coherence controller for each memory endpoint keeps a state around for each coherent access to ensure the proper ordering of events. The coherence controller of this invention uses First-In-First-Out allocation to ensure full utilization of the resources before stalling and simplicity of implementation. The coherence controller provides Snoop Command/Response ID Allocation per memory endpoint. | 2014-06-05 |
20140156952 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM - According to one embodiment, an information processing apparatus with mode switching function, includes a first management module which is capable of accessing a predetermined area of a memory, and a second management module which is capable of accessing the predetermined area and another area of the memory. The first management module is incapable of accessing the other area of the memory. The second management module is configured to exchange information in the predetermined area for information in the other area, in accordance with mode switching. | 2014-06-05 |
20140156953 | Unified Optimistic and Pessimistic Concurrency Control for a Software Transactional Memory (STM) System - A method and apparatus for unified concurrency control in a Software Transactional Memory (STM) is herein described. A transaction record associated with a memory address referenced by a transactional memory access operation includes optimistic and pessimistic concurrency control fields. Access barriers and other transactional operations/functions are utilized to maintain both fields of the transaction record, appropriately. Consequently, concurrent execution of optimistic and pessimistic transactions is enabled. | 2014-06-05 |
20140156954 | SYSTEM AND METHOD FOR ACHIEVING ENHANCED PERFORMANCE WITH MULTIPLE NETWORKING CENTRAL PROCESSING UNIT (CPU) CORES - The present disclosure discloses a method and network device for achieving enhanced performance with multiple CPU cores in a network device having a symmetric multiprocessing architecture. The disclosed method allows for storing, by each central processing unit (CPU) core, a non-atomic data structure, which is specific to each networking CPU core, in a memory shared by the plurality of CPU cores. Also, the memory is not associated with any locking mechanism. In response to a data packet being received by a particular CPU core, the disclosed system updates a value of the non-atomic data structure corresponding to the particular CPU core. The data structure may be a counter or a fragment table. Further, a dedicated CPU core is allocated to process only data packets received from other CPU cores, and is responsible for dynamically responding to queries received from a control plane process. | 2014-06-05 |
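The per-core, lock-free counter pattern described above can be sketched as follows. The class and method names are illustrative assumptions; the point is the design invariant, namely that each core writes only its own slot:

```python
class PerCoreCounters:
    """Sketch of lock-free per-core statistics in shared memory.

    Each CPU core updates only its own slot, so the update path needs
    no locks or atomic operations. A dedicated core (or the control
    plane) aggregates across all slots when a query arrives; the sum
    may be slightly stale, which is acceptable for statistics.
    """

    def __init__(self, num_cores):
        self.slots = [0] * num_cores   # one non-atomic counter per core

    def on_packet(self, core_id):
        # Update path: only core `core_id` ever touches this slot.
        self.slots[core_id] += 1

    def query_total(self):
        # Aggregation path, run by the dedicated core on demand.
        return sum(self.slots)
```

The same layout generalizes to fragment tables: one table per core, merged only on the query path.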
20140156955 | USAGE OF SNAPSHOTS PREPARED BY A DIFFERENT HOST - A system and method are disclosed for preparing and using snapshots in a virtualized environment. In accordance with one example, a first computer system prepares, in an area of a storage device, a snapshot of a virtual disk of a virtual machine that is hosted by a second computer system. The first computer system then provides to the second computer system a reference to the prepared snapshot. | 2014-06-05 |
20140156956 | SYSTEM, METHOD AND A NON-TRANSITORY COMPUTER READABLE MEDIUM FOR CREATING A WARRANTED SNAPSHOT - A method for providing a warranted snapshot that may include: receiving a request to create a first warranted snapshot of a first logical volume at a first point in time and creating the first warranted snapshot if the first warranted snapshot is non-writable and if an amount of physical storage actually devoted by a storage system to the first logical volume at the first point in time does not exceed a size of a free physical storage space that is available for storing any future data delta associated with the first warranted snapshot. The creating of the first warranted snapshot may include allocating a first virtual portion of a physical storage space for storing any future data delta associated with the first warranted snapshot. A size of the first virtual portion equals the amount of physical storage actually devoted to the first logical volume at the first point in time. | 2014-06-05 |
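The admission logic for a warranted snapshot reduces to two checks plus a reservation. A minimal sketch, with the function name and return shape as assumptions:

```python
def create_warranted_snapshot(devoted_bytes, free_bytes, writable=False):
    """Sketch of the admission check for a warranted snapshot.

    The request is granted only if the snapshot is non-writable and the
    physical storage currently devoted to the volume fits in the free
    space available for future deltas; on success, a virtual portion of
    exactly that size is reserved.
    """
    if writable:
        return None                 # warranted snapshots are non-writable
    if devoted_bytes > free_bytes:
        return None                 # cannot warrant future deltas
    return {"reserved_bytes": devoted_bytes}
```

Reserving exactly the currently devoted amount guarantees that even if every block of the volume is later overwritten, the snapshot's delta still fits.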
20140156957 | LIVE SNAPSHOTS OF MULTIPLE VIRTUAL DISKS - A system and method are disclosed for servicing requests to create live snapshots of a plurality of virtual disks in a virtualized environment. In accordance with one example, a computer system issues one or more commands to create a first snapshot of a first virtual disk of a virtual machine and a second snapshot of a second virtual disk of the virtual machine while the virtual machine is running. The computer system determines that the creating of the second snapshot failed and, in response, destroys the first snapshot. | 2014-06-05 |
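The destroy-on-partial-failure behavior above is an all-or-nothing pattern that can be sketched directly. The callback names are hypothetical stand-ins for the hypervisor commands:

```python
def snapshot_all_disks(disks, take_snapshot, destroy_snapshot):
    """Sketch of all-or-nothing live snapshots across virtual disks.

    `take_snapshot` may raise on failure; if any disk's snapshot
    fails, every snapshot already taken is destroyed so the disks
    remain mutually consistent. Returns the list of snapshots on
    success, or None after rollback.
    """
    taken = []
    try:
        for disk in disks:
            taken.append(take_snapshot(disk))
    except Exception:
        for snap in reversed(taken):     # roll back in reverse order
            destroy_snapshot(snap)
        return None
    return taken
```

Destroying in reverse order mirrors the creation order, which matters when snapshots have dependencies on one another.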
20140156958 | COMMON CONTIGUOUS MEMORY REGION OPTIMIZED VIRTUAL MACHINE MIGRATION WITHIN A WORKGROUP - Embodiments of the invention relate to scanning, by a first processor in a work group, a memory associated with the first processor for data. The first processor updates a first data structure to include at least a portion of the data based on the scanning. The first processor transmits a representation of the first data structure to one or more peer processors of the first processor included in the work group using a dedicated link. The first processor receives a representation of a second data structure associated with at least one of the one or more peer processors of the first processor. The first processor updates the first data structure based on the received representation of the second data structure. | 2014-06-05 |
20140156959 | CONCURRENT ARRAY-BASED QUEUE - According to one embodiment, a method for implementing an array-based queue in memory of a memory system that includes a controller includes configuring, in the memory, metadata of the array-based queue. The configuring comprises defining, in metadata, an array start location in the memory for the array-based queue, defining, in the metadata, an array size for the array-based queue, defining, in the metadata, a queue top for the array-based queue and defining, in the metadata, a queue bottom for the array-based queue. The method also includes the controller serving a request for an operation on the queue, the request providing the location in the memory of the metadata of the queue. | 2014-06-05 |
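The metadata fields named in the abstract (array start, array size, queue top, queue bottom) can be sketched as a circular array queue. This is a single-threaded illustration of the layout, not the concurrent controller protocol; `memory` stands in for the memory system and the names are assumptions:

```python
class ArrayQueue:
    """Sketch of an array-based queue driven by in-memory metadata.

    The metadata records the array start location, array size, queue
    top, and queue bottom; operations are served against the metadata,
    and top/bottom grow monotonically while indexing wraps modulo the
    array size.
    """

    def __init__(self, memory, start, size):
        self.memory = memory
        self.meta = {"start": start, "size": size, "top": 0, "bottom": 0}

    def enqueue(self, value):
        m = self.meta
        if m["bottom"] - m["top"] >= m["size"]:
            raise OverflowError("queue full")
        self.memory[m["start"] + m["bottom"] % m["size"]] = value
        m["bottom"] += 1

    def dequeue(self):
        m = self.meta
        if m["top"] == m["bottom"]:
            raise IndexError("queue empty")
        value = self.memory[m["start"] + m["top"] % m["size"]]
        m["top"] += 1
        return value
```

Keeping top and bottom monotonic (and wrapping only at index time) makes full and empty states unambiguous: the queue is empty when top equals bottom, full when their difference equals the array size.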
20140156960 | MANAGING PERMISSIONS FOR LOGICAL VOLUME MANAGERS - A logical volume manager (LVM) may manage a plurality of logical volumes and a plurality of drives in a logical data storage using metadata stored on the plurality of drives. The metadata may include a first set of permissions for a storage location in one of the logical volumes. The LVM may analyze permission data associated with the storage location and may override the metadata (e.g., the permissions in the metadata) with a second set of permissions obtained from the permission data. The LVM may use the second set of permissions to access the storage location. | 2014-06-05 |
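The override relationship between the two permission sets can be shown as a simple precedence merge. A minimal sketch, assuming a per-location mapping with illustrative keys and permission strings:

```python
def effective_permissions(metadata_perms, override_perms):
    """Sketch: override on-disk metadata permissions per storage location.

    Where the permission data supplies a second set of permissions for
    a location, it takes precedence over the first set recorded in the
    drive metadata; locations without an override keep their metadata
    permissions.
    """
    merged = dict(metadata_perms)   # first set, from drive metadata
    merged.update(override_perms)   # second set wins where present
    return merged
```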
20140156961 | Access to Memory Region Including Confidential Information - Embodiments herein relate to accessing a memory region including confidential information. A memory request from a process may be received. The memory request may include a process ID (PID) of the process, a requested memory address, and a requested access type. The memory request may be compared to a permission set associated with a memory region including the confidential information. Access to the memory region by the process may be controlled based on the comparison. | 2014-06-05 |
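The comparison described above, a request (PID, address, access type) checked against a region's permission set, can be sketched as a predicate. Field names and the allow-outside-region default are assumptions for illustration:

```python
def check_access(request, permission_set):
    """Sketch of the permission comparison for a confidential region.

    `request` carries the process ID, requested memory address, and
    requested access type; `permission_set` describes which PIDs may
    perform which access types on the protected address range.
    Addresses outside the range are allowed through unchecked here.
    """
    lo, hi = permission_set["range"]
    if not (lo <= request["addr"] < hi):
        return True      # outside the confidential region: allow
    return (request["pid"] in permission_set["pids"]
            and request["access"] in permission_set["allowed"])
```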