34th week of 2014 patent application highlights part 71 |
Patent application number | Title | Published |
20140237125 | METHOD, APPARATUS, AND SYSTEM FOR ESTABLISHING DEVICE-TO-DEVICE CONNECTION - The present invention discloses a method, an apparatus, and a system for establishing a device-to-device connection, and relates to the field of communications technologies. The method includes: obtaining, by a device-to-device (D2D) server, registration information of a first device needing to perform D2D communication; performing pre-matching for the first device according to the registration information of the first device to find a second device; and triggering an evolved NodeB (eNB) to establish a D2D communication link between the first device and the second device. In the present invention, a D2D server triggers an eNB to establish a direct device-to-device connection between two devices. | 2014-08-21 |
20140237126 | Sharing Location Information During A Communication Session - In one embodiment, a method includes establishing a communication session between a first device and a second device. The first device is a mobile computing device. The location of the first device is received at the second device. The location of the first device is displayed on a graphical user interface of the second device during the communication session. | 2014-08-21 |
20140237127 | Extending SIP P-Served User Header Over IMS Interfaces - The invention provides a method of handling SIP messages in an IMS core network. The method comprises receiving, at a first IMS network entity, a first SIP message that includes an identification of a served user to which the first SIP message relates. The first IMS network entity is in the served user's home network. The first SIP message is forwarded as a second SIP message to a second network entity in the served user's home IMS core network. The second SIP message includes a P-Served-User (PSU) header identifying the served user. | 2014-08-21 |
20140237128 | Method and wireless terminal device for rapidly establishing dual-stack wireless connection - A method and wireless terminal device for rapidly establishing a dual-stack wireless connection. The wireless terminal device sends various types of packet data protocol (PDP) activation requests after being registered onto a wireless network, and determines an Internet Protocol (IP) type supported by the network based on a result returned by the network; after receiving a connection instruction from a user, the wireless terminal device initiates one PDP activation request corresponding to a determined IP type supported by the network; if a result that the PDP activation is performed successfully is returned by the network, the wireless terminal device establishes a data connection. Compared to the related art, the wireless terminal device of the present disclosure determines the IP type supported by the network in advance when it is registered onto the network, and thus may prevent a fall-back process and rapidly establish the connection when receiving the connection instruction. | 2014-08-21 |
20140237129 | Service Redirection from a Policy and Charging Control Architecture - For authorizing redirection services in a policy and charging control architecture, a policy and charging rules function (PCRF) server determines control rules and redirection information per service in an internet protocol connectivity access network session, and a policy and charging enforcement function (PCEF) device receives control rules and redirection information on a per-service basis, determines redirection per service request, and triggers the redirection. Upon a first request for a service, the PCEF device returns a redirection message with a redirection identifier; upon completion of the service redirection, when the first request for the service reaches the PCEF, the PCEF verifies that the service is authorized and submits a service allowance toward the service server in charge of the service. Methods are also disclosed. | 2014-08-21 |
20140237130 | DEVICE MANAGEMENT SERVICE - Systems and methods are described that comprise receiving at a platform an enrollment request from a client device. The enrollment request comprises a request key and device data of the client device. Device identification is generated and issued to the client device in the absence of a previous enrollment event. A response to the client device is generated, and the response is a response to the enrollment request that includes the device identification. Subsequent sessions between the client device and the platform are controlled with the device identification. | 2014-08-21 |
20140237131 | SECURED COMMUNICATION CHANNEL BETWEEN CLIENT DEVICE AND DEVICE MANAGEMENT SERVICE - Systems and methods are described that comprise issuing a request to a client device from a platform. The request is an electronic message that includes an electronic link. An acknowledgement is received from the client device, and the acknowledgement is generated upon activation of the electronic link. A secure channel is established between the platform and a client application of the client device upon receipt of the acknowledgement. Establishment of the secure channel comprises the client application logging into a care application of the platform with a device identification that was received from the platform during an enrollment transaction. A session is conducted over the secure channel, and the session comprises the care application remotely controlling the client device via the client application. | 2014-08-21 |
20140237132 | COMMUNICATION APPARATUS, COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN COMMUNICATION CONTROL PROGRAM, COMMUNICATION CONTROL METHOD, AND COMMUNICATION SYSTEM - A first determination section determines, when an instruction for connection establishment is issued, whether or not connection information including target apparatus specifying information that uniquely specifies a predetermined target apparatus is stored in a first storage section. A first connection section performs, when the first determination section determines that the connection information is stored in the first storage section, a process for establishing a connection by wireless communication to the target apparatus that is specified based on the target apparatus specifying information stored in the first storage section. | 2014-08-21 |
20140237133 | PAGE DOWNLOAD CONTROL METHOD, SYSTEM AND PROGRAM FOR IE CORE BROWSER - The present invention discloses a method, a system and a program of page download control for an IE kernel browser, including: starting an IE kernel browser process and starting a preset download process; registering a communication protocol in the IE kernel browser process and waiting for a page download request based on the corresponding communication protocol; when the IE kernel browser process receives the page download request, triggering the download process to control the page download according to a preset download rule; and returning download status information to the IE kernel browser process during the page download procedure. The present invention is able to effectively control the page download process of the IE kernel browser and improve the efficiency and stability of the page download. | 2014-08-21 |
20140237134 | STREAMING DELAY PATTERNS IN A STREAMING ENVIRONMENT - One embodiment is directed to a method and a system for managing processing in a streaming application. The method and system receive streaming data to be processed by a plurality of processing elements comprising one or more stream operators. A stream operator may select a delay pattern. The stream operator may compare one or more performance factors from the delay pattern to one or more optimal performance factors. The stream operator may delay the stream of tuples using the delay pattern if the comparison with the optimal performance factors indicates that delaying is warranted. | 2014-08-21 |
20140237135 | METHOD OF SYNCHRONIZING A PLURALITY OF CONTENT DIRECTORY SERVICE (CDS) DEVICES, CDS DEVICE, AND SYSTEM - Provided are a method and system for synchronizing a plurality of content directory service (CDS) devices in a home network environment. The method of synchronizing the plurality of CDS devices of a home network, which includes the plurality of CDS devices and a control point (CP), comprises (a) requesting a first CDS device among the plurality of CDS devices to start synchronization using the CP; (b) performing the synchronization with a second CDS device among the plurality of CDS devices using the first CDS device; (c) selecting a third CDS device from the plurality of CDS devices and requesting the third CDS device to start synchronization with the first CDS device or the second CDS device using the CP; and (d) performing the synchronization with the first CDS device or the second CDS device using the third CDS device. | 2014-08-21 |
20140237136 | COMMUNICATION SYSTEM, COMMUNICATION CONTROLLER, COMMUNICATION CONTROL METHOD, AND MEDIUM - A communication system includes a transmitter configured to transmit a packet, a repeater configured to impose a bandwidth limit on the packet, a receiver configured to receive the packet on which the bandwidth limit is imposed, and a control device configured to set, when a reception interval of reception packets received by the receiver is longer than a transmission interval of packets transmitted to the repeater, a transmission rate in the transmitter in accordance with the reception interval. | 2014-08-21 |
20140237137 | SYSTEM FOR DISTRIBUTING FLOW TO DISTRIBUTED SERVICE NODES USING A UNIFIED APPLICATION IDENTIFIER - In one embodiment, a method includes obtaining a flow, identifying an application associated with the flow, and identifying a first unique application identifier (UAID) for the application. The first UAID uniquely identifies the application. The method also includes adding the first UAID to the flow, and routing the flow through a network after adding the first UAID to the flow. | 2014-08-21 |
20140237138 | Performance-Based Routing Method and Device - Embodiments of the present invention relate to the field of communications technologies, and disclose a performance-based routing method and device, which can implement exchange of a performance route by extending the Border Gateway Protocol (BGP). A first PCR receives first performance routing information sent by a second PCR. The first performance routing information includes a first performance parameter attribute. It is determined whether a performance route corresponding to the first performance routing information exists in an adjacent routing information base-incoming (Adj-RIB-In) of the first PCR. The performance route is added to the Adj-RIB-In when the performance route does not exist in the Adj-RIB-In. | 2014-08-21 |
20140237139 | Per-Request Control Of DNS Behavior - In various embodiments, a user or subscriber of a domain name system (DNS) service that provides various DNS resolution options or features, such as misspelling redirection, parental filters, domain blocking, or phishing protection through the DNS process, can influence how requests for domain name (DNS) information are handled on a per-request basis. The user or subscriber may configure the DNS client software of their personal computer or configure their broadband router to provide control information to a DNS server with DNS resolution options that enables the DNS server to resolve DNS queries with the DNS resolution options on a per-request basis. As a result, the user can mitigate exposure to pop-ups, pop-unders, banner ads, fraudulent offers, malware, viruses, or the like, from websites using the domain name system. | 2014-08-21 |
20140237140 | NETWORK ADDRESS TRANSLATION - Address translation sufficient for use in translating addresses included in messages carried or otherwise transmitted between an inside network and an outside network is contemplated. The contemplated address translation may facilitate operation of a network address translator (NAT), carrier grade network address translator (CGN), or other device similarly configured to facilitate translating inside addresses used to address messages carried over the inside network relative to outside addresses used to facilitate carrying messages over the outside network. | 2014-08-21 |
20140237141 | Pulse Width Modulated Outputs for an Output Module in an Industrial Controller - An output module for an industrial controller configurable to simplify setup and commissioning is disclosed. The output module includes configurable PWM outputs that may be scheduled to start at different times within the PWM period, that may be configured to generate a fixed number of PWM pulses, and that may have an extendable PWM period. The output terminals are configurable to enter a first state upon generation of a fault and further configurable to enter a second state after a configurable time delay following the fault being generated. The output module may receive input signals directly from another module and set output signals at the terminals responsive to these signals. | 2014-08-21 |
20140237142 | BANDWIDTH CONFIGURABLE IO CONNECTOR - Systems and methods of interconnecting devices may include an input/output (IO) interface having one or more device-side data lanes and transceiver logic to receive a bandwidth configuration command. The transceiver logic may also configure a transmit bandwidth of the one or more device-side data lanes based on the bandwidth configuration command. Additionally, the transceiver logic can configure a receive bandwidth of the one or more device-side data lanes based on the bandwidth configuration command. | 2014-08-21 |
20140237143 | Debugging Fixture - A fixture, for connecting a host device and a universal serial bus (USB) device, the fixture comprises a plurality of connectors; a plurality of first signal pins, located at first ends of the plurality of connectors for connecting to the host device; and a plurality of second signal pins, located at second ends of the plurality of connectors for connecting to the USB device; wherein a first part of the plurality of connectors are used for transmitting signals between the host device and the USB device in a USB mode; wherein a second part of the plurality of connectors are retained in a specified state for providing a control signal to control the USB device to enter an operating mode. | 2014-08-21 |
20140237144 | METHOD TO EMULATE MESSAGE SIGNALED INTERRUPTS WITH INTERRUPT DATA - Methods to emulate a message signaled interrupt (MSI) with interrupt data are described herein. An embodiment of the invention includes a memory decoder to monitor a predetermined memory block allocated to a device, an interrupt controller to receive an emulated messaged signaled interrupt (MSI) signal from the memory decoder in response to a posted write transaction to the predetermined memory block initiated from the device, and an execution unit to execute an interrupt service routine (ISR) associated with the device to service the MSI using interrupt data retrieved from the predetermined memory block, without having to obtain the interrupt data from the device via an input output (IO) transaction. | 2014-08-21 |
20140237145 | DUAL-BUFFER SERIALIZATION AND CONSUMPTION OF VARIABLE-LENGTH DATA RECORDS PRODUCED BY MULTIPLE PARALLEL THREADS - Under control of the consumer, it is determined that a first buffer is empty and that a second buffer contains data; a first compare-double-and-swap operation within a spin loop is executed to swap a double pointer of the first buffer and a double pointer of the second buffer, wherein responsive to the executing of the operation the consumer drains the second buffer, and wherein the executing of the operation directs the at least one producer to fill the first buffer; and it is determined that the first buffer and the second buffer are empty and the consumer waits for a notification from one of i) the at least one producer and ii) a timer. Under control of the at least one producer, a second compare-double-and-swap operation within a spin loop is executed to atomically locate the first buffer and update the double pointer of the first buffer. | 2014-08-21 |
20140237146 | AUTOMATED NETWORK TRIGGERING-FORWARDING DEVICE - An automated network triggering-forwarding device connected with a control computer and an information input equipment by a network or a cable, respectively, is provided, which comprises a static output module, a dynamic forwarding module, and an information feedback module. A preset trigger signal is output by touching a key or combination of keys of a key output module, the output information of the information input equipment is forwarded to the control computer by the dynamic forwarding module, and the information fed back by the control computer is displayed on the feedback display module and a voice prompt is provided through the voice output module by the information feedback module. The buttons are imparted with different output definitions according to different service requirements, and start triggering other recognizing devices to operate when needed according to operator demand. The dynamic output function can be externally connected with a plurality of non-network equipment. | 2014-08-21 |
20140237147 | SYSTEMS, METHODS, AND INTERFACES FOR ADAPTIVE PERSISTENCE - A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration. | 2014-08-21 |
20140237148 | DATA PROCESSING DEVICE AND DATA PROCESSING METHOD - A data processing device includes a first sub-arbiter configured to arbitrate an access by first and second masters that access data stored in a memory; a second sub-arbiter configured to arbitrate an access to the memory by a plurality of masters other than the first and the second masters; a main arbiter configured to prioritize the access to the memory by the first sub-arbiter over the access to the memory by the second sub-arbiter; and a limiting unit configured to limit an amount of the access to the memory by the second master within a preset range. | 2014-08-21 |
20140237149 | SENDING A NEXT REQUEST TO A RESOURCE BEFORE A COMPLETION INTERRUPT FOR A PREVIOUS REQUEST - In an embodiment, in response to receiving a completion interrupt for a first request from a resource, a determination is made whether relocation of memory contents accessed by performance of the first request is in progress. If the relocation of the memory contents accessed by performance of the first request is in progress, a second request is sent to the resource before the memory relocation completes. If the relocation of the memory contents accessed by the performance of the first request is not in progress, the completion interrupt for the first request is sent to the virtual machine that initiated the first request. | 2014-08-21 |
20140237150 | ELECTRONIC COMPUTER AND INTERRUPT CONTROL METHOD - An electronic computer includes a processor that executes a thread and an interrupt handler, and monitors load of the processor; and an interrupt controller that is configured to determine a notification timing for an interrupt request to call the interrupt handler, the notification timing being determined based on the load and an effect of execution of the interrupt handler on user performance of the thread under execution by the processor; and notify the processor of the interrupt request, based on the notification timing. When the load is higher than a threshold, the interrupt controller sets the notification timing for an interrupt request that does not affect the user performance, to be later than the notification timing for an interrupt request that affects the user performance. Based on notification of the interrupt request, the processor calls and executes the interrupt handler that corresponds to the interrupt request. | 2014-08-21 |
20140237151 | DETERMINING A VIRTUAL INTERRUPT SOURCE NUMBER FROM A PHYSICAL INTERRUPT SOURCE NUMBER - In an embodiment, a request is received from a virtual machine that specifies a virtual ISN and a hardware resource. A physical ISN is selected that is assigned to the hardware resource. The physical ISN is assigned to the virtual ISN as an assigned pair. The request and the physical ISN are sent to the hardware resource. A physical interrupt is received from the hardware resource that specifies the physical ISN. In response to the receipt of the physical interrupt that specifies the physical ISN, the virtual machine and the virtual ISN that is assigned to the first physical ISN are determined from the physical interrupt and the assigned pair from among a plurality of virtual machines. In response to determining the virtual machine and first virtual ISN that is assigned to the physical ISN, a virtual interrupt that comprises that virtual ISN is sent to the virtual machine. | 2014-08-21 |
20140237152 | Folded Memory Modules - A memory module comprises a data interface including a plurality of data lines and a plurality of configurable switches coupled between the data interface and a data path to one or more memories. The effective width of the memory module can be configured by enabling or disabling different subsets of the configurable switches. The configurable switches may be controlled by manual switches, by a buffer on the memory module, by an external memory controller, or by the memories on the memory module. | 2014-08-21 |
20140237153 | DEVICE-READY-STATUS TO FUNCTION-READY-STATUS CONVERSION - A method for sending readiness notification messages to a root complex in a peripheral component interconnect express (PCIe) subsystem. The method includes receiving a device-ready-status (DRS) message in a downstream port that is coupled to an upstream port in a PCIe component. The method further includes setting a bit in the downstream port indicating that the DRS message has been received. | 2014-08-21 |
20140237154 | Integrating Non-Peripheral Component Interconnect (PCI) Resources Into A Computer System - In one embodiment, the present invention includes an apparatus having an adapter to communicate according to a personal computer (PC) protocol and a second protocol. A first interface coupled to the adapter is to perform address translation and ordering of transactions received from upstream of the adapter. The first interface is coupled in turn to heterogeneous resources, each of which includes an intellectual property (IP) core and a shim, where the shim is to implement a header of the PC protocol for the IP core to enable its incorporation into the apparatus without modification. Other embodiments are described and claimed. | 2014-08-21 |
20140237155 | Providing A Peripheral Component Interconnect (PCI)-Compatible Transaction Level Protocol For A System On A Chip (SoC) - In one embodiment, the present invention includes an apparatus having an adapter to communicate according to a personal computer (PC) protocol and a second protocol. A first interface coupled to the adapter is to perform address translation and ordering of transactions received from upstream of the adapter. The first interface is coupled in turn via one or more physical units to heterogeneous resources, each of which includes an intellectual property (IP) core and a shim, where the shim is to implement a header of the PC protocol for the IP core to enable its incorporation into the apparatus without modification. Other embodiments are described and claimed. | 2014-08-21 |
20140237156 | MULTI-PATH ID ROUTING IN A PCIE EXPRESS FABRIC ENVIRONMENT - PCIe is a point-to-point protocol. A PCIe switch fabric has multi-path routing supported by adding an ID routing prefix to a packet entering the switch fabric. The routing is converted within the switch fabric from address routing to ID routing, where the ID is within a Global Space of the switch fabric. Rules are provided to select optimum routes for packets within the switch fabric, including rules for ordered traffic, unordered traffic, and for utilizing congestion feedback. In one implementation a destination lookup table is used to define the ID routing prefix for an incoming packet. The ID routing prefix may be removed at a destination host port of the switch fabric. | 2014-08-21 |
20140237157 | SYSTEM AND METHOD FOR PROVIDING AN ADDRESS CACHE FOR MEMORY MAP LEARNING - A system for interfacing with a co-processor or input/output device is disclosed. According to one embodiment, the system provides a one-hot address cache comprising a plurality of one-hot addresses and a host interface to a host memory controller of a host system. Each one-hot address of the plurality of one-hot addresses has a bit width. The plurality of one-hot addresses is configured to store the data associated with a corresponding memory address in an address space of a memory system and provide the data to the host memory controller during a memory map learning process. The plurality of one-hot addresses comprises a zero address of the bit width and a plurality of non-zero addresses of the bit width, and each one-hot address of the plurality of non-zero addresses of the one-hot address cache has only one non-zero address bit of the bit width. | 2014-08-21 |
20140237158 | Managing the Translation Look-Aside Buffer (TLB) of an Emulated Machine - A mechanism is provided for managing the translation look-aside buffer (TLB) of an emulated computer, in which an extension to the TLB is provided so as to improve virtual address translation capacity for the emulated central processing unit (CPU). | 2014-08-21 |
20140237159 | APPARATUS, SYSTEM, AND METHOD FOR ATOMIC STORAGE OPERATIONS - A virtual storage layer (VSL) for a non-volatile storage device presents a logical address space of a non-volatile storage device to storage clients. Storage metadata assigns logical identifiers in the logical address space to physical storage locations on the non-volatile storage device. Data is stored on the non-volatile storage device in a sequential log-based format. Data on the non-volatile storage device comprises an event log of the storage operations performed on the non-volatile storage device. The VSL presents an interface for requesting atomic storage operations. Previous versions of data overwritten by the atomic storage device are maintained until the atomic storage operation is successfully completed. Data pertaining to a failed atomic storage operation may be identified using a persistent metadata flag stored with the data on the non-volatile storage device. Data pertaining to failed or incomplete atomic storage requests may be invalidated and removed from the non-volatile storage device. | 2014-08-21 |
20140237160 | INTER-SET WEAR-LEVELING FOR CACHES WITH LIMITED WRITE ENDURANCE - A cache controller includes a first register that updates after every memory location swap operation on a number of cache sets in a cache memory and resets every N−1 memory location swap operations. N is the number of cache sets in the cache memory. The cache controller also has a second register that updates after every N−1 memory location swap operations, and resets every (N | 2014-08-21 |
20140237161 | Systems and Methods for User Configuration of Device Names - A system includes a device, a BIOS, and a processor. The BIOS includes a storage operable to store predefined identifier/user defined name pairs. The processor is operable to, detect the device, determine a predefined identifier for the device, and access the storage to locate a predefined identifier/user defined name pair corresponding to the predefined identifier. The processor is further operable to provide a user defined name of the predefined identifier/user defined name pair when the predefined identifier/user defined name pair is present, and provide the predefined identifier of the predefined identifier/user defined name pair when the predefined identifier/user defined name pair is not present. | 2014-08-21 |
20140237162 | NON-VOLATILE MEMORY CHANNEL CONTROL USING A GENERAL PURPOSE PROGRAMMABLE PROCESSOR IN COMBINATION WITH A LOW LEVEL PROGRAMMABLE SEQUENCER - A system includes a control processor, a non-volatile memory device interface, and a micro-sequencer. The control processor may be configured to receive commands and send responses via a command interface. The non-volatile memory device interface may be configured to couple the system to one or more non-volatile memory devices. The micro-sequencer is generally coupled to (i) the control processor and (ii) the non-volatile memory device interface. The micro-sequencer includes a control store readable by the micro-sequencer and writable by the control processor. In response to receiving a particular one of the commands, the control processor is enabled to cause the micro-sequencer to begin executing at a location in the control store according to the particular command and the micro-sequencer is enabled to perform at least a portion of the particular command according to a protocol of the one or more non-volatile memory devices coupled to the non-volatile memory device interface. | 2014-08-21 |
20140237163 | REDUCING WRITES TO SOLID STATE DRIVE CACHE MEMORIES OF STORAGE CONTROLLERS - Methods and structure are provided for reducing the number of writes to a cache of a storage controller. One exemplary embodiment includes a storage controller that has a non-volatile flash cache memory, a primary memory that is distinct from the cache memory, and a memory manager. The memory manager is able to receive data for storage in the cache memory, to generate a hash key from the received data, and to compare the hash key to hash values for entries in the cache memory. The memory manager can write the received data to the cache memory if the hash key does not match one of the hash values. Also, the memory manager can modify the primary memory instead of writing to the cache if the hash key matches a hash value, in order to reduce the amount of data written to the cache memory. | 2014-08-21 |
20140237164 | HYBRID DRIVE THAT IMPLEMENTS A DEFERRED TRIM LIST - A hybrid drive controller maintains a deferred trim list that holds a subset of logical addresses of writes performed on magnetic disks. For example, if a write command is issued to an LBA space that overlaps a portion stored in flash memory and the write is to be performed on the magnetic disks, the trimming of the overlapping portion in the flash memory will be deferred. Instead of trimming, the logical addresses associated with the overlapping portion will be added to the deferred trim list and trimming of the logical addresses in the deferred trim list will be carried out at a later time, asynchronous to the write that caused them to be added to the list. | 2014-08-21 |
20140237165 | MEMORY CONTROLLER, METHOD OF OPERATING THE SAME AND MEMORY SYSTEM INCLUDING THE SAME - A memory controller controlling a nonvolatile memory device having a plurality of memory blocks as a data storage space includes an error detection and correction circuit and a reclaim control unit. The error detection and correction circuit receives data from a memory block and calculates a comparison result by comparing a bit error rate of the received data and a predetermined value. The reclaim control unit determines whether or not to perform a read reclaim operation depending on the comparison result and a read voltage used to read the data. The read reclaim operation copies the data to a memory block different from a memory block having stored the data. | 2014-08-21 |
20140237166 | HIGHER-LEVEL REDUNDANCY INFORMATION COMPUTATION - Higher-level redundancy information computation enables a Solid-State Disk (SSD) controller to provide higher-level redundancy capabilities to maintain reliable operation in a context of failures of non-volatile (e.g. flash) memory elements during operation of an SSD. A first portion of higher-level redundancy information is computed using parity coding via an XOR of all pages in a portion of data to be protected by the higher-level redundancy information. A second portion of the higher-level redundancy information is computed using a weighted-sum technique, each page in the portion being assigned a unique non-zero “index” as a weight when computing the weighted-sum. Arithmetic is performed over a finite field (such as a Galois Field). The portions of the higher-level redundancy information are computable in any order, such as an order based on order of read operation completion of non-volatile memory elements. | 2014-08-21 |
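The two redundancy portions (XOR parity, and a weighted sum with unique non-zero indices over a Galois field) can be sketched byte-wise. The field and primitive polynomial below are assumptions for illustration (GF(2^8) with 0x11D, as commonly used by Reed-Solomon codes); the order-independence property from the abstract falls out because field addition is XOR:

```python
def gf_mul(a, b, poly=0x11D):
    """Carry-less multiply in GF(2^8), reduced by a primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def redundancy(indexed_pages):
    """indexed_pages: iterable of (index, page_bytes), index >= 1 and unique.

    Returns (R0, R1): R0 is the XOR of all pages; R1 is the weighted sum,
    each page weighted by its unique non-zero index in GF(2^8)."""
    indexed_pages = list(indexed_pages)
    n = len(indexed_pages[0][1])
    r0, r1 = bytearray(n), bytearray(n)
    for idx, page in indexed_pages:
        for j, byte in enumerate(page):
            r0[j] ^= byte               # parity portion
            r1[j] ^= gf_mul(idx, byte)  # weighted-sum portion
    return bytes(r0), bytes(r1)
```

Because each page carries its own index, the pages can be accumulated in any order, e.g. in order of read-operation completion, and the result is unchanged.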
20140237167 | Apparatus and Methods for Peak Power Management in Memory Systems - Disclosed are apparatus and techniques for managing power in a memory system having a controller and nonvolatile memory array. In one embodiment, prior to execution of each command with respect to the memory array, a request for execution of such command is received with respect to the memory array. In response to receipt of each request for each command, execution of such command is allowed or withheld with respect to the memory array based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system. | 2014-08-21 |
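The allow-or-withhold decision can be sketched as an admission check against a power budget (names and the per-command power model are illustrative assumptions):

```python
class PowerGate:
    """Sketch: admit a command only if the estimated power of all in-flight
    commands plus this one stays within the power usage specification."""

    def __init__(self, budget_mw):
        self.budget = budget_mw
        self.in_flight = {}   # command id -> estimated power draw (mW)

    def request(self, cmd_id, est_power_mw):
        if sum(self.in_flight.values()) + est_power_mw > self.budget:
            return False      # withhold: would exceed the specification
        self.in_flight[cmd_id] = est_power_mw
        return True           # allow execution

    def complete(self, cmd_id):
        self.in_flight.pop(cmd_id, None)
```

A withheld command would be retried once earlier commands complete and release their power allocation.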
20140237168 | Mass Storage Controller Volatile Memory Containing Metadata Related to Flash Memory Storage - A storage controller is provided that contains multiple processors. In some embodiments, the storage controller is coupled to a flash memory module having multiple flash memory groups, each flash memory group corresponding to a distinct flash port in the storage controller, each flash port comprising an associated processor. Each processor handles a portion of one or more host commands, including reads and writes, allowing multiple parallel pipelines to handle one or more host commands simultaneously. | 2014-08-21 |
20140237169 | HOT MEMORY BLOCK TABLE IN A SOLID STATE STORAGE DEVICE - Solid state storage devices and methods for populating a hot memory block look-up table (HBLT) are disclosed. In one such method, an indication to an accessed page table or memory map of a non-volatile memory block is stored in the HBLT. If the page table or memory map is already present in the HBLT, the priority location of the page table or memory map is increased to the next priority location. If the page table or memory map is not already stored in the HBLT, the page table or memory map is stored in the HBLT at some priority location, such as the mid-point, and the priority location is incremented with each subsequent access to that page table or memory map. | 2014-08-21 |
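The mid-point insertion and per-access promotion can be sketched directly (the abstract's per-access increment is modeled here as a one-slot swap toward the highest-priority location; names are illustrative):

```python
class HotBlockTable:
    """Sketch of an HBLT: new entries enter at the mid-point priority
    location, and each subsequent access moves an entry one location
    toward the highest priority (index 0)."""

    def __init__(self):
        self.table = []   # index 0 = highest-priority location

    def access(self, block):
        if block in self.table:
            i = self.table.index(block)
            if i > 0:  # increment priority: swap one slot toward the head
                self.table[i - 1], self.table[i] = self.table[i], self.table[i - 1]
        else:
            # not present: store at the mid-point priority location
            self.table.insert(len(self.table) // 2, block)
```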
20140237170 | STORAGE DEVICE, AND READ COMMAND EXECUTING METHOD - A storage device of the embodiment includes memory, a control section, a table holding section for managing a table for holding an identifier, a logical address, and a data length based on a read command, an issuing section for issuing the logical address and the data length for each identifier to the control section, a buffer for holding data received from the memory along with the identifier, and an identifier queue for receiving the identifier of a number proportional to a data length when the data of the logical address of the same identifier is received in the buffer. The storage device of the embodiment includes a transfer section for transferring the data corresponding to the identifier received in the buffer to outside when the identifier is held as incomplete readout in the table in order from the identifier at a head of the identifier queue. | 2014-08-21 |
20140237171 | SOLID-STATE DISK WITH WIRELESS FUNCTIONALITY - A system including an interface module to interface a solid-state disk controller to a computing device. A memory control module exchanges data with the computing device via the interface module and caches the data in a solid-state memory controlled by the solid-state disk controller. A network interface module communicates with the computing device via the interface module and interfaces the computing device to a wireless network. A crossbar module has a master bus (Mbus) interface bridged to an advanced high-performance bus (AHB). A memory communicates with one or more of the network interface module and the crossbar module via one or more of the Mbus interface and the AHB. In response to data being cached from the computing device to the solid-state memory or data cached in the solid-state memory being output to the computing device, the network interface module buffers data received from the wireless network in the memory. | 2014-08-21 |
20140237172 | IMPARTING DURABILITY TO A TRANSACTIONAL MEMORY SYSTEM - A transactional memory system uses a volatile memory as primary storage for transactions. Data is selectively stored in a non-volatile memory to impart durability to the transactional memory system to allow the transactional memory system to be restored to a consistent state in the event of data loss to the volatile memory. | 2014-08-21 |
20140237173 | AGGREGATION OF WRITE TRAFFIC TO A DATA STORE - A method and a processing device are provided for sequentially aggregating data to a write log included in a volume of a random-access medium. When data of a received write request is determined to be suitable for sequentially aggregating to a write log, the data may be written to the write log and a remapping tree, for mapping originally intended destinations on the random-access medium to one or more corresponding entries in the write log, may be maintained and updated. At time periods, a checkpoint may be written to the write log. The checkpoint may include information describing entries of the write log. One or more of the checkpoints may be used to recover the write log, at least partially, after a dirty shutdown. Entries of the write log may be drained to respective originally intended destinations upon an occurrence of one of a number of conditions. | 2014-08-21 |
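The write log plus remapping structure can be sketched as follows (a dict stands in for the remapping tree, and checkpointing is omitted; names are illustrative):

```python
class WriteLog:
    """Sketch: sequentially aggregate writes into a log, remap original
    destinations to log entries, and drain entries back to their
    originally intended destinations later."""

    def __init__(self):
        self.log = []     # sequential write log: (dest_lba, data) entries
        self.remap = {}   # remapping: dest_lba -> index into the log
        self.volume = {}  # the random-access medium proper

    def write(self, dest_lba, data):
        self.remap[dest_lba] = len(self.log)   # newest entry wins
        self.log.append((dest_lba, data))

    def read(self, dest_lba):
        if dest_lba in self.remap:             # freshest copy is in the log
            return self.log[self.remap[dest_lba]][1]
        return self.volume.get(dest_lba)

    def drain(self):
        for dest_lba, data in self.log:        # replay in log order
            self.volume[dest_lba] = data
        self.log.clear()
        self.remap.clear()
```

Draining in log order makes later writes to the same destination overwrite earlier ones, matching the remapping's newest-entry rule.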
20140237174 | Highly Efficient Design of Storage Array Utilizing Multiple Cache Lines for Use in First and Second Cache Spaces and Memory Subsystems - A method of operating a cache memory includes the step of storing a set of data in a first space in a cache memory, the set of data associated with a set of tags. A subset of the set of data is stored in a second space in the cache memory, the subset of the set of data associated with a tag of a subset of the set of tags. The tag portion of an address is compared with the subset of data in the second space in the cache memory, such that said subset of data is read when the tag portion of the address and the tag associated with the subset of data match. The tag portion of the address is compared with the set of tags associated with the set of data in the first space in cache memory, and the set of data in the first space is read when the tag portion of the address matches one of the tags of the set of tags associated with the set of data in the first space and the tag portion of the address and the tag associated with the subset of data in the second space do not match. | 2014-08-21 |
20140237175 | PARALLEL PROCESSING COMPUTER SYSTEMS WITH REDUCED POWER CONSUMPTION AND METHODS FOR PROVIDING THE SAME - A parallel processing computing system includes an ordered set of m memory banks and a processor core. The ordered set of m memory banks includes a first and a last memory bank, wherein m is an integer greater than 1. The processor core implements n virtual processors, a pipeline having p ordered stages, including a memory operation stage, and a virtual processor selector function. | 2014-08-21 |
20140237176 | SYSTEM AND METHOD FOR UNLOCKING ADDITIONAL FUNCTIONS OF A MODULE - A system for interfacing with a co-processor or input/output device is disclosed. According to one embodiment, the system performs a maze unlock sequence by operating a memory device in a maze unlock mode. The maze unlock sequence involves writing a first data pattern of a plurality of data patterns to a memory address of the memory device, reading a first set of data from the memory address, and storing the first set of data in a validated data array. The maze unlock sequence further involves writing a second data pattern of the plurality of data patterns to the memory address, reading a second set of data from the memory address, and storing the second set of data in the validated data array. A difference vector array is generated from the validated data array and an address map of the memory device is identified based on the difference vector array. | 2014-08-21 |
20140237177 | MEMORY MODULE AND MEMORY SYSTEM HAVING THE SAME - A memory module includes a master memory device and at least one slave memory device. The master memory device may generate a refresh clock signal, and perform a refresh operation in synchronization with the refresh clock signal. The slave memory device may be connected to receive the refresh clock signal, and perform a refresh operation in synchronization with the refresh clock signal. | 2014-08-21 |
20140237178 | STORAGE RESOURCE ACKNOWLEDGMENTS - A technique to adjust storage resource acknowledgments and a method thereof are provided. In one aspect, a request for an operation associated with data is received, and it is determined whether the operation has attained a particular state. In a further aspect, the particular state is adjustable. In another example, when the operation has reached the particular state, completion of the operation is acknowledged. | 2014-08-21 |
20140237179 | INFORMATION SYSTEM AND DATA TRANSFER METHOD OF INFORMATION SYSTEM - Availability of an information system including a storage apparatus and a host computer is improved. A host system includes a first storage apparatus provided with a first volume for storing data, and a second storage apparatus for storing the data sent from the first storage apparatus. In case of a failure occurring in the first storage apparatus, the host sends the data to be sent to the first storage apparatus to the second storage apparatus. The same identification number is used by the host computer for accessing data stored in the first volume via a first virtual volume and for accessing data stored in a second volume of the second storage system via a second virtual volume. | 2014-08-21 |
20140237180 | DETERMINING EFFICIENCY OF A VIRTUAL ARRAY IN A VIRTUALIZED STORAGE SYSTEM - A virtualized storage system comprises at least one host, at least one virtual array, a backend array and a management server. The host requests storage operations to the virtual array, and the virtual array executes storage operations for the host. The backend array, coupled to the virtual array, comprises physical storage for the virtual array. The management server determines the efficiency for the virtual array. The management server determines an input throughput data rate between the host and the virtual array based on storage operations between the host and the virtual array. The management server also determines an output throughput data rate, from the virtual array to the backend array. The output throughput data rate is based on the storage operations that require access to the backend array. The management server determines the efficiency of the virtual array using the input throughput data rate and the output throughput data rate. | 2014-08-21 |
20140237181 | METHOD AND APPARATUS FOR PREPARING A CACHE REPLACEMENT CATALOG - Methods and systems to intelligently cache content in a virtualization environment using virtualization software such as VMWare ESX or Citrix XenServer or Microsoft HyperV or Redhat KVM or their variants are disclosed. Storage IO operations (reads from and writes to disk) are analyzed (or characterized) for their overall value and pinned to cache if their value exceeds a certain defined threshold based on criteria specific to the New Technology File System (NTFS) file-system. Analysis/characterization of NTFS file systems for intelligent dynamic caching includes analyzing storage block data associated with a Virtual Machine of interest in accordance with a pre-determined data model to determine the value of the block under analysis for long term or short term caching. Integer values are assigned to different types of NTFS objects in a white list data structure, called a catalog, that can be used to analyze the storage block data. | 2014-08-21 |
20140237182 | METHOD AND APPARATUS FOR SERVICING READ AND WRITE REQUESTS USING A CACHE REPLACEMENT CATALOG - Methods and systems to intelligently cache content in a virtualization environment using virtualization software such as VMWare ESX or Citrix XenServer or Microsoft HyperV or Redhat KVM or their variants are disclosed. Storage IO operations (reads from and writes to disk) are analyzed (or characterized) for their overall value and pinned to cache if their value exceeds a certain defined threshold based on criteria specific to the New Technology File System (NTFS) file-system. Analysis/characterization of NTFS file systems for intelligent dynamic caching includes analyzing storage block data associated with a Virtual Machine of interest in accordance with a pre-determined data model to determine the value of the block under analysis for long term or short term caching. Integer values are assigned to different types of NTFS objects in a white list data structure, called a catalog, that can be used to analyze the storage block data. | 2014-08-21 |
20140237183 | SYSTEMS AND METHODS FOR INTELLIGENT CONTENT AWARE CACHING - Methods and systems to intelligently cache content in a virtualization environment using virtualization software such as VMWare ESX or Citrix XenServer or Microsoft HyperV or Redhat KVM or their variants are disclosed. Storage IO operations (reads from and writes to disk) are analyzed (or characterized) for their overall value and pinned to cache if their value exceeds a certain defined threshold based on criteria specific to the New Technology File System (NTFS) file-system. Analysis/characterization of NTFS file systems for intelligent dynamic caching includes analyzing storage block data associated with a Virtual Machine of interest in accordance with a pre-determined data model to determine the value of the block under analysis for long term or short term caching. Integer values are assigned to different types of NTFS objects in a white list data structure, called a catalog, that can be used to analyze the storage block data. | 2014-08-21 |
20140237184 | SYSTEM AND METHOD FOR MULTI-TIERED META-DATA CACHING AND DISTRIBUTION IN A CLUSTERED COMPUTER ENVIRONMENT - A system and method caches and distributes meta-data for one or more data containers stored on a plurality of volumes configured as a striped volume set (SVS) and served by a plurality of nodes interconnected as a cluster. The SVS comprises one meta-data volume (MDV) configured to store a canonical copy of certain meta-data, including access control lists and directories, associated with all data containers stored on the SVS, and one or more data volumes (DV) configured to store, at least, data content of those containers. In addition, for each data container stored on the SVS, one volume is designated a container attribute volume (CAV) and, as such, is configured to store (“cache”) a canonical copy of certain, rapidly-changing attribute meta-data, including time stamps and container length, associated with that container. | 2014-08-21 |
20140237185 | ONE-CACHEABLE MULTI-CORE ARCHITECTURE - Technologies are generally described for methods, systems, and devices effective to implement one-cacheable multi-core architectures. In one example, a multi-core processor that includes a first and second tile may be configured to implement a one-cacheable architecture. The second tile may be configured to generate a request for a data block. The first tile may be configured to receive the request for the data block, and determine that the requested data block is part of a group of data blocks identified as one-cacheable. The first tile may further determine that the requested data block is stored in a first cache in the first tile. The first tile may send the data block from the first cache in the first tile to the second tile, and invalidate the data blocks of the group of data blocks in the first cache in the first tile. | 2014-08-21 |
20140237186 | FILTERING SNOOP TRAFFIC IN A MULTIPROCESSOR COMPUTING SYSTEM - Filtering snoop traffic in a multiprocessor computing system, each processor in the multiprocessor computing system coupled to a high level cache and a low level cache, including: receiving a snoop message that identifies an address in shared memory targeted by a write operation; identifying a set in the high level cache that maps to the address in shared memory; determining whether the high level cache includes an entry associated with the address in shared memory; responsive to determining that the high level cache does not include an entry corresponding to the address in shared memory: determining whether the set in the high level cache has been bypassed by an entry in the low level cache; and responsive to determining that the set in the high level cache has not been bypassed by an entry in the low level cache, discarding the snoop message. | 2014-08-21 |
20140237187 | ADAPTIVE MULTILEVEL BINNING TO IMPROVE HIERARCHICAL CACHING - A device driver calculates a tile size for a plurality of cache memories in a cache hierarchy. The device driver calculates a storage capacity of a first cache memory. The device driver calculates a first tile size based on the storage capacity of the first cache memory and one or more additional characteristics. The device driver calculates a storage capacity of a second cache memory. The device driver calculates a second tile size based on the storage capacity of the second cache memory and one or more additional characteristics, where the second tile size is different than the first tile size. The device driver transmits the second tile size to a second coalescing binning unit. One advantage of the disclosed techniques is that data locality and cache memory hit rates are improved where tile size is optimized for each cache level in the cache hierarchy. | 2014-08-21 |
20140237188 | ELECTRONIC INFORMATION CACHING - Electronic information is made more readily available to one or more access requestors based on an anticipated demand for the electronic information using a process, system or computer software. For instance, electronic information stored on a first storage medium is identified for transport (e.g., in response to a request of at least one of the access requestors), and the electronic information is transported accordingly. Afterwards, a determination is made to store the electronic information on a second storage medium that is more accessible to the access requestors than the first storage medium. The determination is based on an anticipated demand of the access requestors for the electronic information. The anticipated demand is determined based at least on information that is not particular to any single access requestor. The electronic information then is stored on the second storage medium and the access requestors are provided access to the electronic information from the second storage medium. | 2014-08-21 |
20140237189 | COMPRESSION STATUS BIT CACHE AND BACKING STORE - One embodiment of the present invention sets forth a technique for increasing available storage space within compressed blocks of memory attached to data processing chips, without requiring a proportional increase in on-chip compression status bits. A compression status bit cache provides on-chip availability of compression status bits used to determine how many bits are needed to access a potentially compressed block of memory. A backing store residing in a reserved region of attached memory provides storage for a complete set of compression status bits used to represent compression status of an arbitrarily large number of blocks residing in attached memory. Physical address remapping (“swizzling”) used to distribute memory access patterns over a plurality of physical memory devices is partially replicated by the compression status bit cache to efficiently integrate allocation and access of the backing store data with other user data. | 2014-08-21 |
20140237190 | MEMORY SYSTEM AND MANAGEMENT METHOD THEREOF - A memory system having multiple memory layers is provided. The memory system includes an upper memory layer and an intermediate memory layer comprising a first sub-memory consisting of a nonvolatile memory and a second sub-memory consisting of a volatile memory in a parallel structure positioned below the upper memory layer, and a memory management unit that controls operations of the upper memory layer and the intermediate memory layer. The intermediate memory layer is referred by the upper memory layer, and the memory management unit stores data meeting a predetermined condition among data stored in the second sub-memory into the first sub-memory in advance when a user device comprising the memory system is operating in a normal mode. | 2014-08-21 |
20140237191 | METHODS AND APPARATUS FOR INTRA-SET WEAR-LEVELING FOR MEMORIES WITH LIMITED WRITE ENDURANCE - Efficient techniques are described for extending the usable lifetime for memories with limited write endurance. A technique for wear-leveling of caches addresses unbalanced write traffic on cache lines, which causes heavily written cache lines to fail much faster than other lines in the cache. A counter is incremented for each write operation to a cache array. A line affected by a current write operation which caused the counter to meet a threshold is evicted from the cache rather than writing data to the affected line. A dynamic adjustment of the threshold can be made depending on the operating program. Updates to a current replacement policy pointer are stopped due to the counter meeting the threshold. | 2014-08-21 |
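The counter-and-evict mechanism can be sketched minimally (names are illustrative, and the dynamic threshold adjustment and replacement-pointer handling are omitted):

```python
class WearLevelingCache:
    """Sketch: count every write; when the counter meets the threshold,
    evict the line targeted by the current write instead of writing it,
    steering wear away from heavily written lines."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.counter = 0
        self.lines = {}   # line address -> data

    def write(self, addr, data):
        self.counter += 1
        if self.counter >= self.threshold:
            self.counter = 0
            self.lines.pop(addr, None)   # evict rather than write
            return "evicted"
        self.lines[addr] = data
        return "written"
```

A hot line that attracts every Nth write is periodically evicted, so its rewrite lands elsewhere in the set.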
20140237192 | METHOD AND APPARATUS FOR CONSTRUCTING MEMORY ACCESS MODEL - Embodiments of the present invention provide a method and an apparatus for constructing a memory access model, and relate to the field of computers. The method includes: obtaining a page table corresponding to a process referencing a memory block, and clearing a Present bit included in each page table entry stored in the page table; and constructing a memory access model of the memory block according to the number of access times of each page in the memory block and time obtained through timing, where the memory access model at least includes the number of access times and an access frequency of each page in the memory block. The apparatus includes: a first obtaining module, a first monitoring module, a first increasing module, and a second obtaining module. The present invention can reduce the memory consumption and an impact on the system performance, and avoid a system breakdown. | 2014-08-21 |
20140237193 | CACHE WINDOW MANAGEMENT - A method of managing a plurality of least recently used (LRU) queues having entries that correspond to cached data includes ordering a first plurality of entries in a first queue according to a first recency of use of cached data. The first queue corresponds to a first priority. A second plurality of entries in a second queue are ordered according to a second recency of use of cached data. The second queue corresponds to a second priority. A first entry is selected in the first queue based on the order of the first plurality of entries in the first queue. A recency property associated with the first entry is compared with a recency property associated with a second entry in the second queue. Based on a result of this comparison, the first entry and the second entry may be swapped. | 2014-08-21 |
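The cross-queue comparison and swap can be sketched with lists ordered most- to least-recently-used, each entry carrying a last-access timestamp as its recency property (representation is an assumption of this sketch):

```python
def maybe_swap(high_q, low_q):
    """Sketch: each queue is ordered MRU-first, entries are
    (key, last_access_time) pairs. If the low-priority queue's most recent
    entry is fresher than the high-priority queue's least recent entry,
    swap the two entries across queues."""
    if not high_q or not low_q:
        return False
    lru_high = high_q[-1]   # least recently used in the high-priority queue
    mru_low = low_q[0]      # most recently used in the low-priority queue
    if mru_low[1] > lru_high[1]:
        high_q[-1], low_q[0] = mru_low, lru_high
        return True
    return False
```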
20140237194 | EFFICIENT VALIDATION OF COHERENCY BETWEEN PROCESSOR CORES AND ACCELERATORS IN COMPUTER SYSTEMS - A method of testing cache coherency in a computer system design allocates different portions of a single cache line for use by accelerators and processors. The different portions of the cache line can have different sizes, and the processors and accelerators can operate in the simulation at different frequencies. The verification system can control execution of the instructions to invoke different modes of the coherency mechanism such as direct memory access or cache intervention. The invention provides a further opportunity to test any accelerator having an original function and an inverse function by allocating cache lines to generate an original function output, allocating cache lines to generate an inverse function output based on the original function output, and verifying correctness of the original and inverse functions by comparing the inverse function output to the original function input. | 2014-08-21 |
20140237195 | N-DIMENSIONAL COLLAPSIBLE FIFO - A system and method for efficient dynamic utilization of shared resources. A computing system includes a shared data structure accessed by multiple requestors. Both indications of access requests and indices pointing to entries within the data structure are stored in storage buffers. Each storage buffer maintains at a selected end an oldest stored indication of an access request from a respective requestor. Each storage buffer stores information for the respective requestor in an in-order contiguous manner beginning at the selected end. The indices stored in a given storage buffer are updated responsive to allocating new data or deallocating stored data in the shared data structure. Entries in a storage buffer are deallocated in any order and remaining entries are collapsed toward the selected end to eliminate gaps left by the deallocated entry. | 2014-08-21 |
20140237196 | CHARGED PARTICLE BEAM WRITING APPARATUS, AND BUFFER MEMORY DATA STORAGE METHOD - A charged particle beam writing apparatus includes a buffer memory including a memory region capable of concurrently storing writing data for data processing regions, wherein writing data including data files is temporarily stored for each of the data processing regions, a dividing unit to divide the memory region of the buffer memory into a first region being large and a second region being small, a specifying unit to specify the memory region such that a data file being large is preferentially stored in the first region and a data file being small is stored at least in the second region, concerning the data files for each of the data processing regions included in the writing data, and a data processing unit to read data files corresponding to each of the data processing regions from the buffer memory, and to perform data processing using the read data files. | 2014-08-21 |
20140237197 | NON-UNIFORM MEMORY ACCESS (NUMA) RESOURCE ASSIGNMENT AND RE-EVALUATION - A system and a method are disclosed for providing for non-uniform memory access (NUMA) resource assignment and re-evaluation. In one example, the method includes receiving, by a processing device, a request to launch a first process in a system having a plurality of Non-Uniform Memory Access (NUMA) nodes, determining, by the processing device, a resource requirement of the first process, determining, based on resources available on the plurality of NUMA nodes, a preferred NUMA node of the plurality of NUMA nodes to execute the first process, the preferred NUMA node being determined by the processing device without user input, and binding, by the processing device, the first process to the preferred NUMA node. | 2014-08-21 |
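A minimal sketch of the preferred-node choice (the tie-breaking rule, preferring the node with the most free memory, is this sketch's assumption; the abstract only says the choice is based on available resources):

```python
def pick_numa_node(nodes, cpu_need, mem_need):
    """Sketch: choose a NUMA node with enough free CPU and memory for the
    process, preferring the node with the most free memory.

    nodes: dict of node id -> (free_cpus, free_mem_mb).
    Returns the preferred node id, or None if no node qualifies."""
    candidates = [(free_mem, node)
                  for node, (free_cpu, free_mem) in nodes.items()
                  if free_cpu >= cpu_need and free_mem >= mem_need]
    if not candidates:
        return None
    return max(candidates)[1]
```

The process would then be bound to the returned node, with re-evaluation possible as node loads change.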
20140237198 | REDUCING EFFECTIVE CYCLE TIME IN ACCESSING MEMORY MODULES - A method reduces a cycle time of an individual memory module to an effective cycle time shorter than the cycle time using a plurality of memory modules having a circular sequence. The method includes initiating a set of read operations on different memory modules of the plurality of memory modules in the circular sequence, from a first read operation initiated on a first module of the plurality of memory modules to a last read operation initiated on a second module of the plurality of memory modules. After initiating each read operation of the set of read operations on a particular memory module of the plurality of memory modules and prior to initiating a next read operation in the set of read operations, the method initiates a set of write operations to write a same value to all of the plurality of memory modules in the circular sequence beginning one memory module after the particular memory module. | 2014-08-21 |
20140237199 | APPARATUS AND METHOD FOR HANDLING PAGE PROTECTION FAULTS IN A COMPUTING SYSTEM - Method and apparatus for handling page protection faults, in combination particularly with the dynamic conversion of binary code executable by one computing platform into binary code executed instead by another computing platform. In one exemplary aspect, a page protection fault handling unit is used to detect memory accesses, to check page protection information relevant to the detected access by examining the contents of a page descriptor store, and to selectively allow the access or pass on page protection fault information in accordance with the page protection information. | 2014-08-21 |
20140237200 | READOUT OF INTERFERING MEMORY CELLS USING ESTIMATED INTERFERENCE TO OTHER MEMORY CELLS - A method includes storing data in a memory that includes multiple analog memory cells. After storing the data, an interference caused by a first group of the analog memory cells to a second group of the analog memory cells is estimated. The data stored in the first group is reconstructed based on the estimated interference caused by the first group to the second group. | 2014-08-21 |
20140237201 | DATA REPLICATION WITH DYNAMIC COMPRESSION - A method for replicating data between two or more network connected data storage devices, the method including dynamically determining whether to compress data prior to transmitting across the network based, at least in part, on bandwidth throughput between the network connected data storage devices. If it has been determined to compress the data, the method involves compressing the data and transmitting the compressed data over the network. If it has been determined not to compress the data, the method involves transmitting the data, uncompressed, over the network. Dynamically determining whether to compress data may include comparing bandwidth measurements with a predetermined policy defining when compression should be utilized. In some embodiments, the policy may define that compression should be utilized when an estimated time for compressing the data and transmitting the compressed data is less than an estimated time for transmitting the data uncompressed. | 2014-08-21 |
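The time-comparison policy mentioned at the end of the abstract reduces to a simple inequality. A sketch, with the compression ratio and throughput figures as caller-supplied estimates:

```python
def should_compress(size_bytes, bandwidth_bps, compress_bps, ratio):
    """Sketch of the policy: compress only when (compress + send the smaller
    payload) is estimated to beat sending the data uncompressed.

    ratio is estimated compressed_size / original_size, e.g. 0.4 for 60%
    savings; compress_bps is the estimated compression throughput."""
    t_raw = size_bytes / bandwidth_bps
    t_comp = size_bytes / compress_bps + (size_bytes * ratio) / bandwidth_bps
    return t_comp < t_raw
```

On a slow link compression wins; on a link faster than the compressor, sending raw wins, which is exactly the dynamic behavior the abstract describes.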
20140237202 | SYSTEM FOR PREVENTING DUPLICATION OF AUTONOMOUS DISTRIBUTED FILES, STORAGE DEVICE UNIT, AND DATA ACCESS METHOD - There is provided an autonomous distributed type file system which is connected to a data reference device through a first network. The autonomous distributed type file system includes a plurality of storage device units which are mutually connected through a second network and are connected to the first network. Each of the storage device units includes a local storage and a local controller. The local controller includes a storage directory and a duplicated data maintaining unit. The duplicated data maintaining unit refers to the storage directory, and continuously keeps same contents of duplicated data items in a range without running out of storage capacity of an own node. When there is no free space in the storage capacity, duplicate writing of the data with the same contents is prevented. | 2014-08-21 |
20140237203 | SEMICONDUCTOR MEMORY CARD ACCESS APPARATUS, A COMPUTER-READABLE RECORDING MEDIUM, AN INITIALIZATION METHOD, AND A SEMICONDUCTOR MEMORY CARD - A predetermined number of erasable blocks positioned at a start of a volume area in a semiconductor memory card are provided so as to include volume management information. A user area following the volume management information includes a plurality of clusters. A size of a partition control area from a master boot record & partition table sector to a partition boot sector is determined so that the plurality of clusters in the user area are not arranged so as to straddle erasable block boundaries. Since cluster boundaries and erasable block boundaries in the user area are aligned, there is no need to perform wasteful processing in which two erasable blocks are erased to rewrite one cluster. | 2014-08-21 |
20140237204 | STORAGE SYSTEM AND OBJECT MANAGEMENT METHOD - A storage system comprises a second NAS storage apparatus comprising a processor and a storage medium and a third NAS storage apparatus for migrating an object managed by a first NAS storage apparatus. The processor stores path information of an object for which migration has started after including the path information in object management information, in the storage medium prior to migrating the object entity to the third NAS storage apparatus. The processor, after receiving the object entity from the first NAS storage apparatus and migrating the object entity to the third NAS storage apparatus, stores the third NAS storage apparatus path information to the object entity in the object management information, and reflects the management information in the third NAS storage apparatus. | 2014-08-21 |
20140237205 | SYSTEM AND METHOD FOR PROVIDING A COMMAND BUFFER IN A MEMORY SYSTEM - A system for interfacing with a co-processor or input/output device is disclosed. According to one embodiment, the system is configured to receive a command from a host memory controller of a host system and store the command in a command buffer entry. The system determines that the command is complete using a buffer check logic and provides the command to a command buffer. The command buffer comprises a first field that specifies an entry point of the command within the command buffer entry. | 2014-08-21 |
20140237206 | Managing Personal Information on a Network - Devices, systems, and methods are provided for managing personal information by providing a centralized source or database for a user's information and enabling the user to regulate privacy levels for each information item or category of information. Templates are provided as a table of hierarchies or an onion layers model. Private information may be stored in an inner layer while public information may be stored in an outer layer, and multiple layers and categories can be defined and customized within the template. A requesting entity requests information via a disseminating server that acts as a gateway for authenticating, authorizing, and providing access to the requesting entity. The user may therefore control and regulate their online presence simply by monitoring who requests their information and adjusting privacy levels accordingly. | 2014-08-21 |
20140237207 | METHOD AND SYSTEM FOR ENHANCED PERFORMANCE IN SERIAL PERIPHERAL INTERFACE - A method of conducting an operation in an integrated circuit having a plurality of memory cells includes receiving an operating command for the memory cells and receiving a first address segment associated with the memory cells in at least one clock cycle after receiving the operating command. The method further includes receiving a first performance enhancement indicator in at least one clock cycle after the end of the first address segment and before data transfer begins, for determining whether an enhanced operation is to be performed. | 2014-08-21 |
20140237208 | PROTECTING MEMORY DIAGNOSTICS FROM INTERFERENCE - Disclosed herein are techniques for managing diagnostics of computer memory. A range of contiguous addresses of a physical memory are associated with or mapped to addresses of a virtual memory. The range of contiguous addresses is protected from interference. | 2014-08-21 |
20140237209 | MEMORY MANAGEMENT METHOD, MEMORY MANAGEMENT APPARATUS AND NUMA SYSTEM - Embodiments of the present invention provide a memory management method, a memory management apparatus and a NUMA system. The memory management method includes: determining, according to memory demand information sent by a processor, whether a memory controller meeting the memory demand information exists in the local processing node to which the processor belongs; and if such a controller exists, determining, in that memory controller, a memory management area meeting the memory demand information, and allocating that memory management area to the processor. Therefore, the memory controller and the memory management area do not need to be sought in a processing node that does not meet the requirements, which allows a storage area meeting the requirements to be found rapidly and improves memory allocation efficiency. | 2014-08-21 |
20140237210 | CAPACITY FORECASTING FOR BACKUP STORAGE - A system for capacity forecasting for backup storage comprises a processor and a memory. The processor is configured to calculate a set of statistical analyses for subsets of a set of capacities at points in time. The processor is further configured to determine a selected statistical analysis from the set of statistical analyses. The processor is further configured to forecast a full capacity time based at least in part on the selected statistical analysis. The memory is coupled to the processor and configured to provide the processor with instructions. | 2014-08-21 |
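The forecasting scheme in this abstract (fit several subsets, pick the best analysis, extrapolate to full capacity) can be sketched minimally. The patent does not fix the statistical method; this sketch assumes simple least-squares linear regression over trailing windows, with window sizes and the squared-error selection criterion chosen purely for illustration.

```python
def linear_fit(points):
    """Least-squares slope/intercept for (time, used_capacity) samples."""
    n = len(points)
    sx = sum(t for t, _ in points)
    sy = sum(c for _, c in points)
    sxx = sum(t * t for t, _ in points)
    sxy = sum(t * c for t, c in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def forecast_full(points, total_capacity, window_sizes=(4, 8)):
    """Fit each trailing window, keep the fit with the smallest squared
    error, and extrapolate the time at which usage reaches capacity."""
    best = None
    for w in window_sizes:
        subset = points[-w:]
        if len(subset) < 2:
            continue
        slope, intercept = linear_fit(subset)
        err = sum((c - (slope * t + intercept)) ** 2 for t, c in subset)
        if best is None or err < best[0]:
            best = (err, slope, intercept)
    _, slope, intercept = best
    if slope <= 0:
        return None  # usage not growing; no full-capacity time
    return (total_capacity - intercept) / slope
```

With usage samples growing by 10 units per period from 10, a 100-unit store is forecast to fill at period 9.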
20140237211 | SYSTEM AND METHOD FOR VOLUME BLOCK NUMBER TO DISK BLOCK NUMBER MAPPING - The present invention provides a system and method for virtual block numbers (VBNs) to disk block number (DBN) mapping that may be utilized for both single and/or multiple parity based redundancy systems. Following parity redistribution, new VBNs are assigned to disk blocks in the newly added disk and disk blocks previously occupied by parity may be moved to the new disk. | 2014-08-21 |
20140237212 | TRACKING AND ELIMINATING BAD PREFETCHES GENERATED BY A STRIDE PREFETCHER - A method, an apparatus, and a non-transitory computer readable medium for tracking prefetches generated by a stride prefetcher are presented. Responsive to a prefetcher table entry for an address stream locking on a stride, prefetch suppression logic is updated and prefetches from the prefetcher table entry are suppressed when suppression is enabled for that prefetcher table entry. A stride is a difference between consecutive addresses in the address stream. A prefetch request is issued from the prefetcher table entry when suppression is not enabled for that prefetcher table entry. | 2014-08-21 |
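The locking and suppression behaviour this abstract describes can be modelled per prefetcher-table entry. The lock threshold, field names, and the external `suppress` flag are assumptions for the sketch, not details taken from the patent.

```python
class StrideEntry:
    """One prefetcher-table entry tracking a single address stream."""

    LOCK_AFTER = 2  # consecutive equal strides needed to "lock"

    def __init__(self):
        self.last_addr = None
        self.stride = None
        self.match_count = 0
        self.locked = False
        self.suppress = False  # set by the suppression logic when enabled

    def observe(self, addr):
        """Feed one demand address; return a prefetch address or None."""
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride == self.stride:
                self.match_count += 1
            else:
                # New stride: restart the match count and drop the lock.
                self.stride, self.match_count, self.locked = stride, 1, False
            if self.match_count >= self.LOCK_AFTER:
                self.locked = True
        self.last_addr = addr
        # Issue a prefetch only when locked and not suppressed.
        if self.locked and not self.suppress:
            return addr + self.stride
        return None
```

After two accesses with a stride of 8, the entry locks and starts issuing prefetches one stride ahead; setting `suppress` silences it without unlocking the stream.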
20140237213 | HIGH DOSE RADIATION DETECTOR - Described is a processor comprising: a plurality of radiation detectors; a first logic unit to receive outputs from the plurality of radiation detectors, the logic unit to generate an output according to the received outputs, the output of the first logic unit indicating whether the processor was exposed to incoming radiation; and a second logic unit to receive the output from the first logic unit, and to cause the processor to perform an action according to the output from the first logic unit. | 2014-08-21 |
20140237214 | APPARATUS AND METHOD OF A CONCURRENT DATA TRANSFER OF MULTIPLE REGIONS OF INTEREST (ROI) IN AN SIMD PROCESSOR SYSTEM - The present invention provides a fast data transfer for a concurrent transfer of multiple ROI areas between an internal memory array and a single memory, where each PE can specify the parameter set for the area to be transferred independently from the other PEs. For example, for a read transfer, the requests are generated in a way that first the first element of each ROI area is requested from the single memory for each PE before the following elements of each ROI area are requested. After the first element from each ROI area has been received from the single memory in a control processor and has been transferred from the control processor over a bus system to the internal memory array, all elements are stored in parallel to the internal memory array. Then, the second element of each ROI area is requested from the single memory for each PE. The transfer finishes after all elements of each ROI area are transferred to their assigned PEs. | 2014-08-21 |
20140237215 | Methods and Apparatus for Scalable Array Processor Interrupt Detection and Response - Hardware and software techniques for interrupt detection and response in a scalable pipelined array processor environment are described. Utilizing these techniques, a sequential program execution model with interrupts can be maintained in a highly parallel scalable pipelined array processor containing multiple processing elements and distributed memories and register files. When an interrupt occurs, interface signals are provided to all PEs to support independent interrupt operations in each PE dependent upon the local PE instruction sequence prior to the interrupt. Processing element exception interrupts are supported, and low latency interrupt processing is also provided for embedded systems where real time signal processing is required. Further, a hierarchical interrupt structure is used, allowing a generalized debug approach using debug interrupts and a dynamic debug monitor mechanism. | 2014-08-21 |
20140237216 | MICROPROCESSOR - A microprocessor according to an aspect of the present invention includes an arithmetic operation unit. The arithmetic operation unit includes: a plurality of arithmetic operation devices arranged in a multi-stage arrangement; a delay device provided to each stage of the arithmetic operation devices excluding a final stage, and configured to delay an arithmetic operation result of the arithmetic operation devices for one cycle; and a selector provided to each stage of the arithmetic operation devices excluding the final stage, and configured to select either the arithmetic operation result of the arithmetic operation devices or the arithmetic operation result delayed for one cycle in the delay device and output the selected result to the arithmetic operation device in a next stage. The microprocessor is configured to collectively process a plurality of arithmetic operations from the arithmetic operation unit by controlling a selecting condition in the selector. | 2014-08-21 |
20140237217 | VECTORIZATION IN AN OPTIMIZING COMPILER - An optimizing compiler includes a vectorization mechanism that optimizes a computer program by substituting code that includes one or more vector instructions (vectorized code) for one or more scalar instructions. The cost of the vectorized code is compared to the cost of the code with only scalar instructions. When the cost of the vectorized code is less than the cost of the code with only scalar instructions, the vectorization mechanism determines whether the vectorized code will likely result in processor stalls. If not, the vectorization mechanism substitutes the vectorized code for the code with only scalar instructions. When the vectorized code will likely result in processor stalls, the vectorization mechanism does not substitute the vectorized code, and the code with only scalar instructions remains in the computer program. | 2014-08-21 |
20140237218 | SIMD INTEGER MULTIPLY-ACCUMULATE INSTRUCTION FOR MULTI-PRECISION ARITHMETIC - A multiply-and-accumulate (MAC) instruction allows efficient execution of unsigned integer multiplications. The MAC instruction indicates a first vector register as a first operand, a second vector register as a second operand, and a third vector register as a destination. The first vector register stores a first factor, and the second vector register stores a partial sum. The MAC instruction is executed to multiply the first factor with an implicit second factor to generate a product, and to add the partial sum to the product to generate a result. The first factor, the implicit second factor and the partial sum have a same data width and the product has twice the data width. The most significant half of the result is stored in the third vector register, and the least significant half of the result is stored in the second vector register. | 2014-08-21 |
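The widening multiply-accumulate this abstract describes can be modelled per vector lane. The 32-bit operand width is an assumption for the sketch; the abstract only requires that the product be twice the operand width.

```python
WIDTH = 32
MASK = (1 << WIDTH) - 1

def mac(factor, implicit_factor, partial_sum):
    """Return (hi, lo): factor * implicit_factor + partial_sum,
    split into the most/least significant WIDTH-bit halves."""
    result = factor * implicit_factor + partial_sum  # 2*WIDTH-bit value
    return (result >> WIDTH) & MASK, result & MASK
```

Per the abstract, the high half goes to the destination register while the low half is written back over the partial sum, which is what lets successive MACs chain into a multi-precision multiplication.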
20140237219 | MICROSTACKSHOTS - A method and apparatus of a device that captures a stackshot of an executing process is described. In an exemplary embodiment, the device detects an interrupt of the process occurring during the execution of the process, where the process executes in both kernel space and user space and the interrupt occurs in user space. The device further determines whether to capture a stackshot during the interrupt using a penalty function. If the stackshot is to be captured, the device captures the stackshot and saves it. | 2014-08-21 |
20140237220 | Configuring a Trusted Platform Module - A method includes storing configuration data for a Trusted Platform Module (TPM) in a pre-boot environment such as Unified Extensible Firmware Interface (UEFI), reading the configuration data, and automatically configuring the TPM based upon the configuration data. The configuring includes storing values of TPM parameters in non-volatile memory of the TPM. A method includes UEFI firmware of a circuit board on an assembly line configuring a TPM. An information handling system includes UEFI firmware and a TPM. The UEFI firmware configures the TPM from a configuration file stored in memory of the UEFI firmware. | 2014-08-21 |
20140237221 | APPLICATION EXECUTION ENVIRONMENT SETTING APPARATUS AND METHOD FOR MOBILE TERMINAL - An apparatus and method for application execution environment setting in a mobile terminal are provided. The application execution environment setting apparatus configures application execution environment variables on a per application basis in consideration of variable values assigned by the user in the past. | 2014-08-21 |
20140237222 | Multi-Model Modes of One Device - A portable media player may provide multiple modes for a user. Each mode may define different features and content that are customized for a particular mode. Based on a selected mode, the media player may provide access only to the content, features, hardware, user interface elements, and the like that the user wishes to have access to when the mode is enabled. The media player may provide the user different experiences, looks and feels for each mode. | 2014-08-21 |
20140237223 | SYSTEM BOOT WITH EXTERNAL MEDIA - Various aspects of the present disclosure provide for a system that is able to boot from a variety of media that can be connected to the system, including SPI NOR and SPI NAND memory, universal serial bus ("USB") devices, and devices attached via PCIe and Ethernet interfaces. When the system is powered on, the system processor is held in a reset mode, while a microcontroller in the system identifies an external device to be booted, and then copies a portion of boot code from the external device to an on-chip memory. The microcontroller can then direct the reset vector to the boot code in the on-chip memory and bring the system processor out of reset. The system processor can execute the boot code in-place on the on-chip memory, which initializes the system memory and starts the second-stage boot loader. | 2014-08-21 |
20140237224 | NETWORK BOOT SYSTEM - [SUBJECTS] In a network boot system having a read cache mechanism, the object is to suppress an increase in the boot time of a terminal due to accesses to a local disk. | 2014-08-21 |