4th week of 2010 patent application highlights part 58 |
Patent application number | Title | Published |
20100023633 | METHOD AND SYSTEM FOR IMPROVING CONTENT DIVERSIFICATION IN DATA DRIVEN P2P STREAMING USING SOURCE PUSH - A system and method for improving content diversification in data driven streaming includes computing a weight factor and a qualification factor for each of at least two nodes among a plurality of nodes, based upon a bandwidth of each node. Content is pushed to a node based on the qualification factor and the weight factor of each node. The qualification factor is updated for the node which received pushed content. | 2010-01-28 |
20100023634 | FLOW-RATE ADAPTATION FOR A CONNECTION OF TIME-VARYING CAPACITY - A system and methods for adapting streaming data for transmission over a connection of time-varying capacity are disclosed. A streaming server individually adapts transmission rates of signals directed to subtending clients according to measurements characterizing connections from the streaming server to the clients. The measurements may relate to characteristics such as transfer delay, data-loss fraction, and occupancy level of a buffer at a client's receiver. A flow controller associated with the streaming server derives metrics from measurements taken over selected time windows to determine a permissible transmission rate from the server to each active client. Metrics related to a specific characteristic may include a mean value over a moving window as well as short and long term tendencies of respective measurements. An adaptable encoder at the streaming server encodes signals to meet permissible transmission rates. | 2010-01-28 |
20100023635 | DATA STREAMING THROUGH TIME-VARYING TRANSPORT MEDIA - Methods of data streaming from an encoder to a decoder through a connection subjected to time-varying conditions are disclosed. The connection is assigned a nominal flow rate and an encoding coefficient associated with the connection modifies the nominal flow rate to determine a permissible flow rate compatible with a time-varying state of the connection. Multiple performance characteristics are associated with the connection and corresponding sets of performance measurements taken over adaptively selected time windows are acquired. Performance metrics having one-to-one correspondence to the performance characteristics are determined and compared with lower bounds and upper bounds of respective predefined acceptance intervals. A current encoding coefficient is computed as a function of the performance metrics and used to determine the permissible flow rate. The encoder's configuration is adapted to produce an encoded signal which maximizes signal fidelity under the constraint of the permissible flow rate. | 2010-01-28 |
20100023636 | ONE-WAY MEDIA STREAMING SYSTEM AND METHOD THEREOF - A one-way media streaming system and a corresponding method thereof are provided. The system includes a relay server, a first user device, and a second user device. The first user device and the relay server determine the format of a specialized media streaming packet by exchanging at least one Session Initiation Protocol (SIP) option message. The first user device sends the specialized media streaming packet conforming to the format through a firewall to the relay server. The second user device sends a one-way media streaming packet to the relay server. The relay server changes the destination port of the one-way media streaming packet to be the same as the source port of the specialized media streaming packet of the first user device. The relay server then relays the one-way media streaming packet through the firewall to the first user device. | 2010-01-28 |
20100023637 | SYSTEM, METHOD OR APPARATUS FOR COMBINING MULTIPLE STREAMS OF MEDIA DATA - Embodiments of methods, apparatuses, devices and systems associated with combining or mixing digital media streams are disclosed. | 2010-01-28 |
20100023638 | SYSTEM AND METHOD FOR STREAMING AUDIO - A system and method are provided for synchronizing data streaming. The method can include the operation of receiving an incoming media packet having a timestamp from a media server at a client device. A further operation is synchronizing a clock for the client device with a clock for the media server. The timestamp can be compared with a next play time for a packet. Another operation can be placing the incoming media packet into a user buffer at a playing position in the user buffer based on the next play time. The incoming media packet can then be played using a media output device accessible to an end user. | 2010-01-28 |
20100023639 | SYSTEM AND METHOD FOR STREAMING AUDIO USING A SEND QUEUE - A system and method are provided for preparing a streaming media system for initial presentation of a media stream. The system includes a media server configured to send out media packets for a media stream at periodic clocked intervals. A framer can be located with the media server to divide the media stream into media packets. A media client is also provided to receive the media packets for the media stream from the media server. A send queue can be located in the server. The send queue can be configured to store a defined length of programming from the media stream, and the send queue can immediately fill a client's user buffer when an activation event occurs. | 2010-01-28 |
20100023640 | SOFTWARE STREAMING SYSTEM AND METHOD - A method for streaming software may include downloading blocks associated with a software title until an executable threshold is reached, initiating execution of the software title, and continuing to download blocks of the software title while the software title is executed. Another method for streaming software may include sending to a client data sufficient for the client to build a virtual directory structure for use in executing a software title, streaming a subset of blocks associated with the software title to the client, and streaming additional blocks associated with the software title to the client on demand. A system for streaming software may include a server computer and a client computer. The server computer may include a program database and a streaming engine. In operation the streaming engine may stream an executable streaming application from the program database to the client. | 2010-01-28 |
20100023641 | COMMUNICATION TERMINAL, TERMINAL, COMMUNICATION SYSTEM, COMMUNICATION METHOD AND PROGRAM - A communication source application is specified in TCP/IP stream communication. The communication source terminal | 2010-01-28 |
20100023642 | METHOD AND SYSTEM FOR TRANSFORMING INPUT DATA STREAMS - The present system and method transforms an input data stream in a first data format of a plurality of first data formats to an output data stream in a second data format of a plurality of second data formats. A plurality of input connector modules receive respective input data streams and at least one input queue stores the received input data streams. A plurality of job threads is operatively connected to the at least one input queue, each job thread, in parallel with at least one other job thread, formatting a stored input data stream to produce an output data stream. At least one output queue respectively stores the output data streams from the plurality of job threads. A plurality of output connector modules is operatively connected to the at least one output queue, the output connector modules supplying respective output data streams. | 2010-01-28 |
20100023643 | NETWORK DEVICE, NETWORK DEVICE MANAGEMENT APPARATUS, NETWORK DEVICE CONTROL METHOD, NETWORK DEVICE MANAGEMENT METHOD, PROGRAM, AND STORAGE MEDIUM - A network device includes a communicator which communicates with an information processing apparatus on a network by using a first communication protocol requiring authentication and a second communication protocol requiring no authentication, a setting unit which sets the operation mode of the communicator so as to communicate with an object necessary for Plug and Play by using the second communication protocol and communicate with an object other than the object necessary for Plug and Play by using the first communication protocol, and a determination unit which determines, using identification information to identify configuration information contained in information received from the information processing apparatus, whether the configuration information is necessary for the Plug and Play. | 2010-01-28 |
20100023644 | INSPECTING WEB BROWSER STATE INFORMATION FROM A SYNCHRONOUSLY-INVOKED SERVICE - The present invention provides a browser-independent method to inspect the state of any Web Browser from a service that has been invoked synchronously. The remote agent responds to the service request with instructions for the browser to synchronously and recursively invoke another service request with a specific portion of the browser state as the arguments. This allows the browser to continue operating in a synchronous manner, while appearing to behave like a multi-threaded application that is responsive to state inspection requests. | 2010-01-28 |
20100023645 | System and Method for Dynamically Managing Message Flow - System and method for dynamically managing message flow. According to the example embodiments, an intermediary network device or a client device dynamically manages the flow of messages received from an electronic exchange by analyzing the client device's capabilities, such as CPU utilization. Based on a percentage of total CPU utilization, the level of throttling is dynamically adjusted, such that if the percentage of CPU utilization, or load, increases, then throttling is increased from a lower level to a higher level. Similarly, if the percentage of CPU utilization decreases significantly enough, then throttling is decreased to a lower level. | 2010-01-28 |
20100023646 | COMMUNICATION SYSTEM, INFORMATION PROCESSING APPARATUS, SERVER, AND COMMUNICATION METHOD - A first information processor transmits a bubble packet to a second communication control unit by way of a first communication control unit, leaving a transmission record in the first communication control unit. A second information processor transmits a reply packet to one or more ports including at least the bubble packet transmitting port, that is, the port of the first communication control unit used in transmission of the bubble packet, and the first information processor receives the reply packet transmitted from the second information processor by way of the second communication control unit. In this configuration, the invention presents a communication system capable of establishing communication between plural information processors communicating by way of communication control units (NAT). | 2010-01-28 |
20100023647 | SWAPPING PPRC SECONDARIES FROM A SUBCHANNEL SET OTHER THAN ZERO TO SUBCHANNEL SET ZERO USING CONTROL BLOCK FIELD MANIPULATION - A method for swapping peer-to-peer remote copy (PPRC) secondary device definitions from a subchannel set other than zero to subchannel set zero by the utilization of control block-field manipulation includes identifying a PPRC primary and secondary device pair, wherein a PPRC primary device definition resides at subchannel set zero and the PPRC secondary device definition resides at a subchannel set other than subchannel set zero; within operating system definitions of the PPRC pair, exchanging physical information associated with the PPRC primary and secondary devices, including pathing, node descriptor, device number, and a field cross referencing device numbers of the PPRC pair; and within channel subsystem definitions of the PPRC primary and secondary devices, via a SwapSubchannel instruction, exchanging physical information, including path and control unit information while retaining a subchannel identifier, a subchannel set identifier, and a subchannel interruption parameter pointing to the operating system definition of the device. | 2010-01-28 |
20100023648 | METHOD FOR INPUT OUTPUT EXPANSION IN AN EMBEDDED SYSTEM UTILIZING CONTROLLED TRANSITIONS OF FIRST AND SECOND SIGNALS - A method for expanding input/output in an embedded system is described in which no additional strobes or enable lines are necessary from the host controller. By controlling the transitions of the signal levels in a specific way when controlling two existing data or select lines, an expansion input and/or output device can generate a strobe and/or enable signal internally. This internal strobe and/or enable signal is then used to store output data or enable input data. The host controller typically utilizes software or firmware to control the data transitions, but no additional wires are needed, and no changes are needed to existing peripheral devices. Thus, an existing system can be expanded when there are no additional control lines available and no unused states in existing signals. | 2010-01-28 |
20100023649 | CHANGING CLASS OF DEVICE - A class changing apparatus includes a link unit configured to be linked with a client device to transmit and receive data. The class changing apparatus also includes a storage unit configured to store apparatus information including class information of the client device. The class changing apparatus further includes a control unit coupled to the link unit and the storage unit and controlling operations of the class changing apparatus including a class changing operation, wherein the class changing operation includes transmitting at least one command, including a command for rebranching into the selected class, to the client device through the link unit and registering class information as changed class information in the storage unit in response to detecting a class change request. | 2010-01-28 |
20100023650 | SYSTEM AND METHOD FOR USING A SMART CARD IN CONJUNCTION WITH A FLASH MEMORY CONTROLLER TO DETECT LOGON AUTHENTICATION - A system and method of operating a device connected to a host computer in a manner to preserve knowledge of logon authentication status to the host computer. Upon initialization of the device, a pattern matching operation is performed on an instruction sequence received by the second microcontroller. When the instruction sequence matches a prestored sequence indicative of performance of a logon process on the host computer, a logon state is tracked by the second microcontroller. The logon state is exchanged between the second and first microcontrollers such that when the second microcontroller resets, it may recover the logon state from the first microcontroller. Other systems and methods are disclosed. | 2010-01-28 |
20100023651 | Method and System for Detecting State of Field Asset Using Packet Capture Agent - Methods and systems for detecting a state of a field asset using a packet capture agent are disclosed. A method may include capturing one or more packets transmitted on a shared bus in a field asset and determining the occurrence of a door event based on at least one of the one or more captured packets. | 2010-01-28 |
20100023652 | Storage Device and Control Method Thereof - The present invention provides a storage device and a control method thereof which can enhance the versatility and availability of a storage system while enhancing I/O performance of the storage system as a whole. The storage device is provided with an external connection function in which a command is generated in response to a read request or a write request given by a host computer, and the generated command is issued to an external storage device via any of a plurality of ports. In such a storage device, a channel processor, for every kind of the command, issues a test command to the external storage device in a plurality of issuing methods, measures an I/O performance for every issuing method, displays a result of measurement of the I/O performance for every method, and/or sets the issuing method in issuing the command to the external storage device based on the result of measurement of the I/O performance for every issuing method. | 2010-01-28 |
20100023653 | SYSTEM AND METHOD FOR ARBITRATING BETWEEN MEMORY ACCESS REQUESTS - A system having memory access capabilities, the system includes: (i) a dynamic voltage and frequency scaling (DVFS) controller, adapted to determine a level of a voltage supply supplied to a first memory access requester and a frequency of a clock signal provided to the first memory access requester and to generate a DVFS indication that is indicative of the determination; (ii) a hardware access request determination module, adapted to determine a priority of memory access request issued by the first memory access requester in response to the DVFS indication; and (iii) a direct memory access arbitrator, adapted to arbitrate between memory access requests issued by the first memory access requester and another memory access requester in response to priorities associated with the memory access requests. | 2010-01-28 |
20100023654 | METHOD AND SYSTEM FOR INPUT/OUTPUT PADS IN A MOBILE MULTIMEDIA PROCESSOR - Methods and systems for a low noise amplifier with tolerance to large inputs are disclosed and may include generating at least one control signal that controls a plurality of directional modes of at least one contact pad on a mobile multimedia processor (MMP) in an integrated circuit. Selectable modes may include bidirectional, input, and output modes. Each of the modes includes a bypass mode and a processing mode that may be controlled by the generated control signal. Received data may be processed by circuitry in the MMP when the processing mode is enabled. Received data may be passed through the MMP without being processed when the bypass mode is enabled. An additional signal may be generated to dynamically pull down a potential of the at least one contact pad and/or to pull up a potential of the at least one contact pad. | 2010-01-28 |
20100023655 | Data Storage Apparatus and Method of Data Transfer - A data storage apparatus having improved data transfer performance. The storage apparatus has: plural controllers connected to each other by first data transfer paths; plural processors controlling the controllers; and second data transfer paths through which the controllers send data to various devices. Each of the controllers has a data-processing portion for transferring data to the first and second data transfer paths. The data-processing portion has a header detection portion for detecting first header information constituting data, a selection portion for selecting data sets having continuous addresses of transfer destination and using the same data transfer path from plural data sets such that a coupled data set is created from the selected data sets, a header creation portion for creating second header information about the coupled data set, and coupled data creation means for creating the coupled data set from the selected data sets and from the second header information. | 2010-01-28 |
20100023656 | Data transfer method - There is provided a data transfer method in an IEEE1394 system including a band request node and a transfer band management node. The method includes generating, at the band request node, a transfer request that can detect a data amount of transfer data and transmitting the transfer request from the band request node to the transfer band management node, determining, by the transfer band management node, whether a transfer band requested by the transfer request is ensured or not, notifying, from the transfer band management node, the band request node of the determination result, and transferring data from the band request node according to the determination result. | 2010-01-28 |
20100023657 | SYSTEM AND METHOD FOR SERIAL DATA COMMUNICATIONS BETWEEN HOST AND COMMUNICATIONS DEVICES, AND COMMUNICATIONS DEVICE EMPLOYED IN THE SYSTEM AND METHOD - A communications device includes a communications circuit, a memory, an identifier generator, and a latency controller. The communications circuit exchanges serial data with a host computer and a downstream device, and includes a first input, a first output, a second input, and a second output. The first input receives data from the host computer. The first output transmits data to the host computer. The second input receives data from the downstream device. The second output transmits data to the downstream device. The memory is accessible through the communications circuit. The identifier generator generates an identifier number unique to the communications device in response to an identifier setup request received at the first input. The latency controller determines, based on the generated identifier number, a period of latency required to access the memory through the communications circuit. | 2010-01-28 |
20100023658 | SYSTEM AND METHOD FOR ENABLING LEGACY MEDIUM ACCESS CONTROL TO DO ENERGY EFFICIENT ETHERNET - A system and method for enabling legacy media access control (MAC) to do energy efficient Ethernet (EEE). A backpressure mechanism is included in an EEE enhanced PHY that is responsive to a detected need to transition between various power modes of the EEE enhanced PHY. Through the backpressure mechanism, the EEE enhanced PHY can indicate to the legacy MAC that transmission of data is to be deferred due to a power savings initiative in the EEE enhanced PHY. | 2010-01-28 |
20100023659 | KEYBOARD - A keyboard includes a keyboard control circuit, a card reader unit, at least one universal serial bus (USB) interface, a switch, and a BLUETOOTH unit. The USB interface is capable of coupling to the card reader unit. The BLUETOOTH unit is selectively connected to the keyboard control circuit or the card reader unit via the switch. | 2010-01-28 |
20100023660 | KVM SYSTEM - A keyboard-video-mouse (KVM) system is disclosed. The KVM system comprises a module, a KVM switch and a signal cable. The module transmits a single-ended video signal from a computer, converts a universal asynchronous receiver/transmitter (UART) signal to an input/output (IO) signal, and transmits the IO signal to the computer. The KVM switch receives the single-ended video signal from the module and outputs the UART signal to the module. The signal cable transmits the single-ended video signal from the module to the KVM switch and transmits the UART signal from the KVM switch to the module. | 2010-01-28 |
20100023661 | PORTABLE IMAGE SHARING SYSTEM - A system allowing a portable display apparatus to share an image from a host computer is disclosed. The image sharing system is composed of a portable display apparatus, a transmitter and a computer. An image of the computer is acquired by software to send to the transmitter connected to the computer. The transmitter wirelessly sends the image to the portable display apparatus for displaying. A user who holds the portable display apparatus can see the image of the computer without seeing the computer. | 2010-01-28 |
20100023662 | BUS MASTERING METHOD - A mastering method of a bus includes the following steps: receiving a command; determining if the command is a bus master command to generate a determined result; outputting at least one break event to switch a processor from a non-snoop state into a snoop state according to the determined result; and outputting at least one bus master request to access the bus; wherein the break event is ahead of the bus master request that corresponds to the break event. | 2010-01-28 |
20100023663 | METHOD AND DEVICE FOR DETECTING BUS SUBSCRIBERS - In order to register the order of bus subscribers ( | 2010-01-28 |
20100023664 | BUS SYSTEM AND OPERATING METHOD THEREOF - An operating method applied to an out-of-order execution bus system includes: according to dependency constraints, linking requests using the bus system to form dependency request links having an order; and processing the order of the requests according to the dependency request links. In addition, a bus system is provided. The bus system includes a request queue and a dependency request link generator. The request queue receives and stores a newly received request including at least a link tag. The dependency request link generator generates N dependency request links according to dependency constraints of N link tags of the newly received request, where N is any positive integer. Each link tag of the newly received request is implemented to indicate a link relation with respect to an order of the newly received request and a plurality of unserved requests preceding the newly received request. | 2010-01-28 |
20100023665 | MULTIPROCESSOR SYSTEM, ITS CONTROL METHOD, AND INFORMATION RECORDING MEDIUM - To provide a multiprocessor system in which data transmission efficiency is unlikely to be affected if a damaged processor should exist among a plurality of processors. The multiprocessor system has a plurality of processing modules, including a predetermined number, being three or more, of processors, and a bus for relaying data transmission among the respective processing modules, and specifies at least one damaged processor; selects as a communication restricted processor subjected to communication restriction at least one of the processors connected to the bus at a position determined according to a position where the damaged processor is connected to the bus; and restricts data transmission by the communication restricted processor via the bus. | 2010-01-28 |
20100023666 | Interrupt control for virtual processing apparatus - A data processing system supporting one or more virtual processing apparatuses is provided with external interrupt interface hardware | 2010-01-28 |
20100023667 | HIGH AVAILABILITY SYSTEM AND EXECUTION STATE CONTROL METHOD - A high availability system includes a first server computer for a first virtual computer and a first hypervisor and a second server computer for a second virtual computer and a second hypervisor. The first virtual computer executes a processing and the second virtual computer executes the processing behind the first virtual computer. Information associated with an event is transmitted. The event provides an input to the first virtual computer. In the second hypervisor, a control unit performs control based on the information to match the execution state of the second virtual computer with that of the first virtual computer, and control associated with the information, when the event associated with the information is a predetermined one of an I/O completion interrupt from the first virtual storage and an interrupt handler call corresponding to the interrupt, after the interrupt from the second virtual storage corresponding to the interrupt is caught. | 2010-01-28 |
20100023668 | COMPUTER SYSTEM HAVING MULTI-FUNCTION CARD READER MODULE WITH PCI EXPRESS INTERFACE - A computer system includes a host, a PCI Express bus and a multi-function card reader module. The PCI Express bus is coupled to the host. The multi-function card reader module includes a plurality of card readers, a PCI Express interface and a PCI Express host controller. The plurality of card readers correspond to a plurality of memory card formats, respectively. The PCI Express interface is coupled to the PCI Express bus. The PCI Express host controller is coupled to the PCI Express interface and the plurality of card readers for controlling data transmission between the PCI Express interface and the plurality of card readers. | 2010-01-28 |
20100023669 | HOST CONTROLLER DISPOSED IN MULTI-FUNCTION CARD READER - A host controller disposed in a multi-function card reader includes: a Serial Advanced Technology Attachment (SATA) interface configured for coupling to a host computer; and a port multiplier having a control port and a plurality of peripheral device ports. The control port is coupled to the SATA interface, and the peripheral device ports are coupled to a plurality of peripheral device interfaces, respectively. The peripheral device interfaces are disposed in the multi-function card reader, and include at least one flash memory card interface. | 2010-01-28 |
20100023670 | METHOD AND SYSTEM FOR ROUTING DATA IN A PARALLEL TURBO DECODER - Described herein are system(s) and method(s) for routing data in a parallel Turbo decoder. Aspects of the present invention address the need for reducing the physical circuit area, power consumption, and/or latency of parallel Turbo decoders. According to certain aspects of the present invention, address routing-networks may be eliminated, thereby reducing circuit area and power consumption. According to other aspects of the present invention, address generation may be moved from the processors to dedicated address generation modules, thereby decreasing connectivity overhead and latency. | 2010-01-28 |
20100023671 | Enhanced Microprocessor or Microcontroller - A processor device has a data memory with a linear address space, the data memory being accessible through a plurality of memory banks. At least a subset of the memory banks are organized such that each memory bank of the subset has at least a first and second memory area, wherein no consecutive memory block is formed by the second memory areas of a plurality of consecutive memory banks. An address adjustment unit is provided which, when a predefined address range is used, translates an address within the predefined address range to access said second memory areas such that through the address a plurality of second memory areas form a continuous linear memory block. | 2010-01-28 |
20100023672 | Method And System For Virtual Fast Access Non-Volatile RAM - A method of writing data to a non-volatile memory with minimum units of erase of a block, a page being a unit of programming of a block, may read a page of stored data addressable in a first increment of address from the memory into a page buffer, the page of stored data comprising an allocated data space addressable in a second increment of address, pointed to by an address pointer, and comprising obsolete data. The first increment of address is greater than the second increment of address. A portion of stored data in the page buffer may be updated with the data to form an updated page of data. Storage space for the updated page of data may be allocated. The updated page of data may be written to the allocated storage space. The address pointer may be updated with a location of the allocated storage space. | 2010-01-28 |
20100023673 | AVOIDANCE OF SELF EVICTION CAUSED BY DYNAMIC MEMORY ALLOCATION IN A FLASH MEMORY STORAGE DEVICE - The operating firmware of a portable flash memory storage device is stored in the relatively large file storage memory, which is non executable. It is logically parsed into overlays to fit into an executable memory. The overlays can be of differing sizes to organize function calls efficiently while minimizing dead space or unnecessarily separating functions that should be within one or a group of frequently accessed overlays. For an overlay having functions that require data allocation, the data allocation can cause eviction. This self eviction is avoided altogether or after initial runtime. | 2010-01-28 |
20100023674 | Flash DIMM in a Standalone Cache Appliance System and Methodology - A method, system and program are disclosed for accelerating data storage in a cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a multi-rank flash DIMM cache memory by pipelining multiple page write and page program operations to different flash memory ranks, thereby improving write speeds to the flash DIMM cache memory. | 2010-01-28 |
20100023675 | WEAR LEVELING METHOD, AND STORAGE SYSTEM AND CONTROLLER USING THE SAME - A wear leveling method for a flash memory is provided, wherein the flash memory includes a plurality of physical blocks grouped into at least a data area and a spare area. The method includes setting a first predetermined threshold value as a wear-leveling start value and randomly generating a random number as a memory erased count, wherein the random number is smaller than the wear-leveling start value. The method also includes counting the memory erased count each time the physical blocks are erased and determining whether the memory erased count is smaller than the wear-leveling start value, wherein a physical block switching is performed between the data area and the spare area when the memory erased count is not smaller than the wear-leveling start value. Accordingly, it is possible to uniformly use the physical blocks, so as to effectively prolong the lifetime of the storage system. | 2010-01-28 |
20100023676 | SOLID STATE STORAGE SYSTEM FOR DATA MERGING AND METHOD OF CONTROLLING THE SAME ACCORDING TO BOTH IN-PLACE METHOD AND OUT-OF-PLACE METHOD - A solid state storage system includes a controller configured to divide memory blocks of a flash memory area into first blocks and second blocks corresponding to the first blocks, and to newly allocate pages of the second blocks when an external write command is requested. The controller is also configured to allocate selected sectors in the allocated pages according to sector addresses and execute a write command. | 2010-01-28 |
20100023677 | SOLID STATE STORAGE SYSTEM THAT EVENLY ALLOCATES DATA WRITING/ERASING OPERATIONS AMONG BLOCKS AND METHOD OF CONTROLLING THE SAME - A solid state storage system that evenly allocates data writing/erasing operations among blocks is presented. The solid state storage system includes a controller. The controller is configured to set a representative value that becomes a block allocation reference in accordance with predetermined information of blocks in a flash memory area. The controller is also configured to calculate a data value that becomes lifetime information according to the predetermined information in a current state for each block. The controller is also configured to determine a block where a deviation is generated between the representative value and the data value, and to allocate the block where the deviation is generated as a new block where data is written. | 2010-01-28 |
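The deviation-based allocation described above can be illustrated with a small sketch. This is an assumption-laden toy, not the patented method: here the representative value is taken to be the mean erase count, the per-block data value is the block's own erase count, and the block deviating furthest below the mean is allocated next so that lightly worn blocks absorb new writes.

```python
def pick_next_block(erase_counts):
    """Choose the block to allocate for the next write.

    erase_counts: mapping of block id -> erase count. The representative
    value (mean count) and the deviation metric are illustrative
    assumptions, not taken from the patent."""
    representative = sum(erase_counts.values()) / len(erase_counts)
    # Smallest (most negative) deviation = least-worn block; block id
    # breaks ties deterministically.
    return min(erase_counts, key=lambda b: (erase_counts[b] - representative, b))
```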
20100023678 | NONVOLATILE MEMORY DEVICE, NONVOLATILE MEMORY SYSTEM, AND ACCESS DEVICE - When an access device accesses a nonvolatile memory device, the nonvolatile memory device or the access device detects or calculates a temperature T of the nonvolatile memory device. A temperature-adaptive control part of the nonvolatile memory device controls an access rate to a nonvolatile memory on the basis of the temperature T. Accordingly, the control part controls the rate so that the temperature T of the nonvolatile memory device does not exceed a limit temperature Trisk. In this manner, a nonvolatile memory system can eliminate the risk of a burn when ejecting the semiconductor memory device and can read and write data at a high speed. | 2010-01-28 |
20100023679 | SYSTEMS AND TECHNIQUES FOR NON-VOLATILE MEMORY BUFFERING - An apparatus, system, method, and article for non-volatile memory buffering are described. The apparatus may include a data storage manager to store a data item in a rewritable non-volatile memory buffer. The data item may have a file size less than or equal to a threshold value. The rewritable non-volatile memory buffer may include one or more rewritable memory regions configured to store the data item. Other embodiments are described and claimed. | 2010-01-28 |
20100023680 | Method for Controlling Non-Volatile Semiconductor Memory System - In a memory system using a storage medium, which is inserted into an electronic apparatus via a connector to add a memory function thereto, the storage medium has a GROUND terminal, a power supply terminal, a control terminal and a data input/output terminal, and the connector has a function of being sequentially connected to each of the terminals. When the storage medium is inserted into the connector, the GROUND terminal and control terminal of the storage medium are connected to corresponding terminals of the connector before the power supply terminal and data input/output terminal of the storage medium are connected to corresponding terminals of the connector. Thus, it is possible to improve the stability when a memory card is inserted into or ejected from the memory system. | 2010-01-28 |
20100023681 | Hybrid Non-Volatile Memory System - The present invention presents a hybrid non-volatile memory system that uses non-volatile memories based on two or more different non-volatile memory technologies in order to exploit the relative advantages of each of these technologies with respect to the others. In an exemplary embodiment, the memory system includes a controller and a flash memory, where the controller has a non-volatile RAM based on an alternate technology such as FeRAM. The flash memory is used for the storage of user data, and the non-volatile RAM in the controller is used for system control data used by the controller to manage the storage of host data in the flash memory. The use of an alternate non-volatile memory technology in the controller allows a non-volatile copy of the most recent control data to be accessed more quickly, as it can be updated on a bit by bit basis. In another exemplary embodiment, the alternate non-volatile memory is used as a cache where data can safely be staged prior to being written to the memory or read back to the host. | 2010-01-28 |
20100023682 | Flash-Memory System with Enhanced Smart-Storage Switch and Packed Meta-Data Cache for Mitigating Write Amplification by Delaying and Merging Writes until a Host Read - A flash memory solid-state-drive (SSD) has a smart storage switch that reduces write amplification that occurs when more data is written to flash memory than is received from the host. Page mapping rather than block mapping reduces write amplification. Host commands are loaded into a Logical-Block-Address (LBA) range FIFO. Entries are sub-divided and portions invalidated when a new command overlaps an older command in the FIFO. Host data is aligned to page boundaries with pre- and post-fetched data filling in to the boundaries. Repeated data patterns are detected and encoded by compressed meta-data codes that are stored in meta-pattern entries in a meta-pattern cache of a meta-pattern flash block. The sector data is not written to flash. The meta-pattern entries are located using a meta-data mapping table. Storing host CRCs for comparison to incoming host data can detect identical data writes that can be skipped, avoiding a write to flash. | 2010-01-28 |
20100023683 | Associative Matrix Observing Methods, Systems and Computer Program Products Using Bit Plane Representations of Selected Segments - Associative matrix compression methods, systems, computer program products and data structures compress an association matrix that contains counts that indicate associations among pairs of attributes. Selective bit plane representation of those selected segments of the association matrix that have at least one count is performed, to allow compression. More specifically, a set of segments is generated, a respective one of which defines a subset, greater than one, of the pairs of attributes. Selective identifications of those segments that have at least one count are stored. The at least one count that is associated with a respective identified segment is also stored as at least one bit plane representation. The at least one bit plane representation identifies a value of the at least one associated count for a bit position of the count that corresponds to the associated bit plane. | 2010-01-28 |
20100023684 | METHOD AND APPARATUS FOR REDUCING POWER CONSUMPTION IN A CONTENT ADDRESSABLE MEMORY - Power consumption in a Content Addressable Memory (CAM) circuit is reduced by the design of the CAM circuit. According to one embodiment, the CAM circuit includes a plurality of match lines and match line restoration circuitry. The match line restoration circuitry is configured to prevent at least one of the match lines from being restored to a pre-evaluation state responsive to corresponding enable information. | 2010-01-28 |
20100023685 | Storage device and control method for the same - A storage device is provided with: a first management section that manages a storage area provided by one or more hard disk drives on a basis of a predetermined unit created using one or more parameters; a second management section that manages, on a basis of a pool configured by at least one or more of the units, a management policy about the capacity of the pools and a threshold value of the storage capacity of the pools; a power supply control section that controls each of the hard disk drives of the unit under the management of the first management section to be in either a first turn-ON state or a second turn-ON state with a low power consumption; and a control section that selects, when the storage capacity of any of the pools exceeds the threshold value, the management policy of the pools, and any of the managed parameters considered optimum to serve as the unit of the storage area, and, after causing the power supply control section to change the state of the hard disk drive of the selected unit from the second turn-ON state to the first turn-ON state, adds the unit of the storage area to the capacity of the pool. With such a configuration, the storage device can achieve reduced power consumption, simplified management, and increased use efficiency of storage resources. | 2010-01-28 |
20100023686 | METHOD FOR IMPROVING RAID 1 READING EFFICIENCY - A method for improving the reading efficiency of a redundant array of inexpensive disks level 1 (RAID 1) array, which includes providing a disk head address of each disk in a RAID 1 array; receiving a read command and providing a reading file address of the read command; choosing, as a first preferred disk, the disk in the RAID 1 array whose disk head address is closest to the reading file address; and sending the read command to the first preferred disk. | 2010-01-28 |
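The disk-selection step above reduces to a nearest-head search, which can be sketched in a few lines. This is an illustrative model only: the disk identifiers and the absolute-difference distance metric over addresses are assumptions, not details from the patent.

```python
def choose_preferred_disk(head_addresses, file_address):
    """Pick the RAID 1 mirror whose head is currently closest to the
    requested reading file address.

    head_addresses: mapping of disk id -> current disk head address.
    The distance metric (absolute address difference) is an assumption."""
    return min(head_addresses,
               key=lambda disk: abs(head_addresses[disk] - file_address))
```

A read command would then be dispatched to the returned disk, leaving the other mirror free for concurrent reads.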
20100023687 | METHOD FOR PROVIDING A CUSTOMIZED RESPONSE FROM A DISK ARRAY - The present invention discloses a method for providing a customized response from a disk array, which is implemented on a disk array server having disks. The method comprises steps of receiving a request packet from a front end server, determining whether the request packet is for a response of disk environment information of one of the disks, and directly providing a customized response back to the front end server when the request packet is for a response of disk environment information of one of the disks, or passing the request packet to an originally aimed disk of the disk array server when the request packet is for data access on the originally aimed disk. | 2010-01-28 |
20100023688 | SYMMETRICAL STORAGE ACCESS ON INTELLIGENT DIGITAL DISK RECORDERS - A method includes designating at least three storage partitions on at least two logical drives, placing a first storage partition on a first of the logical drives adjacent to a second storage partition on a second of the logical drives separate from the first logical drive, and creating a third partition spanning both the first and second of the logical drives. The first, second and third partitions are balanced for storage access symmetry such that the drives bear equal storage placement. | 2010-01-28 |
20100023689 | MEMORY DEVICE AND METHOD OF REPRODUCING DATA USING THE SAME - A memory device having a resume function is provided. The memory device provides the resume function by generating and storing reproduction history information when reproduction of data is stopped due to the generation of an interrupt, providing the reproduction history information to a newly connected reproducing apparatus, and resuming reproduction of the data from a location where the reproduction of the data had previously been stopped. Since the reproduction history information is stored in the memory device, the data reproduction may be resumed from the location where reproduction of the data was stopped when the memory device is moved from one reproducing apparatus to another reproducing apparatus. | 2010-01-28 |
20100023690 | CACHING DYNAMIC CONTENTS AND USING A REPLACEMENT OPERATION TO REDUCE THE CREATION/DELETION TIME ASSOCIATED WITH HTML ELEMENTS - An event to delete a structured object of a Web page rendered in a browser can be detected. The structured object comprises an HTML element set that was dynamically created for the Web page. The structured object can be placed in a cache without deleting memory allocations for the structured object. An event to dynamically create a new object of the Web page can be detected. The cache can be queried to find an object with structure equivalent to that of the new object. The found object can be taken from the cache and used as the new object after content of the cached object is replaced with that needed for the new object. Memory allocation and deallocation costs that would otherwise be needed to dispose of a dynamic HTML element set and to create a new HTML element set are thus saved using the cache. | 2010-01-28 |
20100023691 | SYSTEM AND METHOD FOR IMPROVING A BROWSING RATE IN A HOME NETWORK - A system and method for improving a browsing rate in a Universal Plug and Play (UPnP) Audio/Video (AV) home network. A control point predicts browse data using a pre-fetching operation and pre-fetches and stores the predicted browse data, which is temporarily stored in a cache implemented in the control point. Accordingly, when a user has selected a corresponding container, the control point displays the pre-fetched browse data. The user can directly use the browse data and experiences a fast response. | 2010-01-28 |
20100023692 | MODULAR THREE-DIMENSIONAL CHIP MULTIPROCESSOR - A chip multiprocessor die supports optional stacking of additional dies. The chip multiprocessor includes a plurality of processor cores, a memory controller, and stacked cache interface circuitry. The stacked cache interface circuitry is configured to attempt to retrieve data from a stacked cache die if the stacked cache die is present but not if the stacked cache die is absent. In one implementation, the chip multiprocessor die includes a first set of connection pads for electrically connecting to a die package and a second set of connection pads for communicatively connecting to the stacked cache die if the stacked cache die is present. Other embodiments, aspects and features are also disclosed. | 2010-01-28 |
20100023693 | Method and system for tiered distribution in a content delivery network - A tiered distribution service is provided in a content delivery network (CDN) having a set of surrogate origin (namely, “edge”) servers organized into regions and that provide content delivery on behalf of participating content providers, wherein a given content provider operates an origin server. According to the invention, a cache hierarchy is established in the CDN comprising a given edge server region and either (a) a single parent region, or (b) a subset of the edge server regions. In response to a determination that a given object request cannot be serviced in the given edge region, instead of contacting the origin server, the request is provided to either the single parent region or to a given one of the subset of edge server regions for handling, preferably as a function of metadata associated with the given object request. The given object request is then serviced, if possible, by a given CDN server in either the single parent region or the given subset region. The original request is only forwarded on to the origin server if the request cannot be serviced by an intermediate node. | 2010-01-28 |
20100023694 | Memory access system, memory control apparatus, memory control method and program - A memory control apparatus disposed in a memory access system having a bus, a single storage unit with a bank structure and a bus arbitrating unit, includes: an access-request accepting means for accepting sequential access requests for data located at sequential addresses in the storage unit, sequential access requests for data located at discrete addresses in the storage unit as sequential access requests, or access requests for data located at sequential addresses in the storage unit which cannot be made into a single access request as sequential access requests; and an access-request rearranging means for rearranging sequential access requests accepted by the access-request accepting means in an order of banks of the storage unit, within a range of access requests relating to either a data write request or a data read request output from one of the data processing units, to control access to the storage unit. | 2010-01-28 |
20100023695 | Victim Cache Replacement - A data processing system includes a processor core having an associated upper level cache and a lower level victim cache. In response to a memory access request of the processor core, the lower level victim cache determines whether the memory access request hits or misses in the directory of the lower level victim cache, and the upper level cache determines whether a castout from the upper level cache is to be performed and selects a victim coherency granule for eviction from the upper level cache. In response to determining that a castout from the upper level cache is to be performed, the upper level cache evicts the selected victim coherency granule. In the eviction, the upper level cache reads out the victim coherency granule from the data array of the upper level cache only in response to an indication that the memory access request misses in the directory of the lower level victim cache. | 2010-01-28 |
20100023696 | Methods and System for Resolving Simultaneous Predicted Branch Instructions - A method of resolving simultaneous branch predictions prior to validation of the predicted branch instruction is disclosed. The method includes processing two or more predicted branch instructions, with each predicted branch instruction having a predicted state and a corrected state. The method further includes selecting one of the corrected states. Should one of the predicted branch instructions be mispredicted, the selected corrected state is used to direct future instruction fetches. | 2010-01-28 |
20100023697 | Testing Real Page Number Bits in a Cache Directory - Testing real page number bits in a cache directory is provided. A specification of a cache to be tested is retrieved in order to test the real page number bits of the cache directory associated with the cache. A range within a real page number address of the cache directory is identified for performing page allocations using the specification of the cache. A random value x is generated that identifies a portion of the real page number bits to be tested. A first random value y is generated that identifies a first congruence class from a set of congruence classes within the portion of the cache to be tested. Responsive to the first congruence class failing to be allocated a predetermined number of times, one page size of memory for the first congruence class is allocated and a first allocation value is incremented by a value of 1. | 2010-01-28 |
20100023698 | Enhanced Coherency Tracking with Implementation of Region Victim Hash for Region Coherence Arrays - A method and system for precisely tracking lines evicted from a region coherence array (RCA) without requiring eviction of the lines from a processor's cache hierarchy. The RCA is a set-associative array which contains region entries consisting of a region address tag, a set of bits for the region coherence state, and a line-count for tracking the number of region lines cached by the processor. Tracking of the RCA is facilitated by a non-tagged hash table of counts represented by a Region Victim Hash (RVH). When a region is evicted from the RCA, and lines from the evicted region still reside in the processor's caches (i.e., the region's line-count is non-zero), the RCA line-count is added to the corresponding RVH count. The RVH count is decremented by the value of the region line count following a subsequent processor cache eviction/invalidation of the region previously evicted from the RCA. | 2010-01-28 |
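The Region Victim Hash bookkeeping above can be modeled with a small sketch. This is a toy under stated assumptions: the table size, the modulo hash, and all names are illustrative, and since the RVH is non-tagged, distinct regions may alias to the same counter, so a nonzero count is only a conservative "may still be cached" indication.

```python
class RegionVictimHash:
    """Illustrative model of an RVH: a non-tagged hash table of counts
    tracking lines that remain cached after their region left the RCA.
    Table size and hash function are assumptions."""

    def __init__(self, size=16):
        self.counts = [0] * size
        self.size = size

    def _index(self, region_addr):
        return region_addr % self.size   # illustrative hash

    def on_rca_eviction(self, region_addr, line_count):
        # Region evicted from the RCA while line_count of its lines
        # still reside in the processor's caches: fold the count in.
        if line_count > 0:
            self.counts[self._index(region_addr)] += line_count

    def on_cache_eviction(self, region_addr, line_count=1):
        # Lines of a previously RCA-evicted region leave the cache
        # hierarchy: decrement the corresponding RVH count.
        idx = self._index(region_addr)
        self.counts[idx] = max(0, self.counts[idx] - line_count)

    def may_have_cached_lines(self, region_addr):
        # Zero proves no stale lines remain; nonzero is conservative
        # because unrelated regions can alias to the same counter.
        return self.counts[self._index(region_addr)] > 0
```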
20100023699 | System and Method for Usage Analyzer of Subscriber Access to Communications Network - A system and a method are described, whereby a data cache enables the realization of an efficient design of a usage analyzer for monitoring subscriber access to a communications network. By exploiting the speed advantages of cache memory, as well as adopting innovative data loading and retrieval choices, significant performance improvements in the time required to access the necessary data records can be realized. | 2010-01-28 |
20100023700 | Dynamically Maintaining Coherency Within Live Ranges of Direct Buffers - A mechanism for reducing coherency problems in a data processing system is provided. Source code that is to be compiled is received and analyzed to identify at least one of a plurality of loops that contain a memory reference. A determination is made as to whether the memory reference is an access to a global memory that should be handled by a direct buffer. Responsive to an indication that the memory reference is an access to the global memory that should be handled by the direct buffer, the memory reference is marked for direct buffer transformation. The direct buffer transformation is then applied to the memory reference. | 2010-01-28 |
20100023701 | CACHE LINE DUPLICATION IN RESPONSE TO A WAY PREDICTION CONFLICT - Embodiments of the present invention provide a system that handles way mispredictions in a multi-way cache. The system starts by receiving requests to access cache lines in the multi-way cache. For each request, the system makes a prediction of a way in which the cache line resides based on a corresponding entry in the way prediction table. The system then checks for the presence of the cache line in the predicted way. Upon determining that the cache line is not present in the predicted way, but is present in a different way, and hence the way was mispredicted, the system increments a corresponding record in a conflict detection table. Upon detecting that a record in the conflict detection table indicates that a number of mispredictions equals a predetermined value, the system copies the corresponding cache line from the way where the cache line actually resides into the predicted way. | 2010-01-28 |
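The conflict-detection table described above can be reduced to a small counting sketch. This is an assumption: the threshold value, the reset-on-duplication behavior, and the names are illustrative, not from the patent. Each misprediction for a line increments its record; reaching the threshold signals that the line should be copied into the predicted way.

```python
class ConflictDetector:
    """Illustrative conflict-detection table for way mispredictions.
    Threshold and reset policy are assumptions."""

    def __init__(self, threshold=2):
        self.table = {}          # line address -> misprediction count
        self.threshold = threshold

    def record_mispredict(self, addr):
        """Record one way misprediction for a cache line. Returns True
        when the line should be duplicated into the predicted way
        (count reached the threshold); the count then resets."""
        self.table[addr] = self.table.get(addr, 0) + 1
        if self.table[addr] >= self.threshold:
            self.table[addr] = 0
            return True
        return False
```

After duplication, subsequent accesses find the line in the predicted way, ending the misprediction stream for that address.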
20100023702 | Shared JAVA JAR files - Techniques are disclosed for sharing programmatic modules among isolated virtual machines. A master JVM process loads data from a programmatic module, storing certain elements of that data into its private memory region, and storing other elements of that data into a “read-only” area of a shareable memory region. The master JVM process copies loaded data from its private memory region into a “read/write” area of the shareable memory region. Instead of re-loading the data from the programmatic module, other JVM processes map to the read-only area and also copy the loaded data from the read/write area into their own private memory regions. The private memory areas of all of the JVM processes begin at the same virtual memory address, so references between read-only data and copied data are preserved correctly. As a result, multiple JVM processes start up faster, and memory is conserved by avoiding the redundant storage of shareable data. | 2010-01-28 |
20100023703 | HARDWARE TRANSACTIONAL MEMORY SUPPORT FOR PROTECTED AND UNPROTECTED SHARED-MEMORY ACCESSES IN A SPECULATIVE SECTION - A system and method is disclosed for implementing a hardware transactional memory system capable of executing a speculative section of code containing both protected and unprotected memory access operations. A processor in a multi-processor system is configured to execute a section of code that performs a transaction using shared memory, such that a first subset of memory operations in the section of code is performed atomically with respect to the concurrent execution of the one or more other processors and a second subset of memory operations in the section of code is not. In some embodiments, the section of code includes a plurality of declarator operations, each of which is executable to designate a respective location in the shared memory as protected. | 2010-01-28 |
20100023704 | VIRTUALIZABLE ADVANCED SYNCHRONIZATION FACILITY - A system and method for executing a transaction in a transactional memory system is disclosed. The system includes a processor of a plurality of processors coupled to shared memory, wherein the processor is configured to execute a section of code, including a plurality of memory access operations to the shared memory, as an atomic transaction relative to the execution of the plurality of processors. According to embodiments, the processor is configured to determine whether the memory access operations include any of a set of disallowed instructions, wherein the set includes one or more instructions that operate differently in a virtualized computing environment than in a native computing environment. If any of the memory access operations are ones of the disallowed instructions, then the processor aborts the transaction. | 2010-01-28 |
20100023705 | PROCESSOR ARCHITECTURE HAVING MULTI-PORTED MEMORY - A data processing system includes a multiport memory module including a plurality of first ports and a plurality of second ports, a plurality of first buses, and a plurality of second buses. A plurality of hardware acceleration modules are configured to communicate with respective ones of the plurality of first ports via respective ones of the plurality of first buses. The data processing system also includes a processor module and a random access memory (RAM) module configured to store data; the processor module and the RAM module communicate with the multiport memory module via respective ones of the plurality of second buses. A shared bus includes a first bus portion configured to communicate with the plurality of hardware acceleration modules at a first rate and a second bus portion configured to communicate with the processor module and the RAM module at a second rate that is different than the first rate. A bus bridge communicates with the first bus portion and the second bus portion. | 2010-01-28 |
20100023706 | COEXISTENCE OF ADVANCED HARDWARE SYNCHRONIZATION AND GLOBAL LOCKS - A computer-implemented method and article of manufacture is disclosed for enabling computer programs utilizing hardware transactional memory to safely interact with code utilizing traditional locks. A thread executing on a processor of a plurality of processors in a shared-memory system may initiate transactional execution of a section of code, which includes a plurality of access operations to the shared-memory, including one or more to locations protected by a lock. Before executing any operations accessing the location associated with the lock, the thread reads the value of the lock as part of the transaction, and only proceeds if the lock is not held. If the lock is acquired by another thread during transactional execution, the processor detects this acquisition, aborts the transaction, and attempts to re-execute it. | 2010-01-28 |
20100023707 | PROCESSOR WITH SUPPORT FOR NESTED SPECULATIVE SECTIONS WITH DIFFERENT TRANSACTIONAL MODES - A system and method are disclosed wherein a processor of a plurality of processors coupled to shared memory, is configured to initiate execution of a section of code according to a first transactional mode of the processor. The processor is configured to execute a plurality of protected memory access operations to the shared memory within the section of code as a single atomic transaction with respect to the plurality of processors. The processor is further configured to initiate, within the section of code, execution of a subsection of the section of code according to a second transactional mode of the processor, wherein the first and second transactional modes are each associated with respective recovery actions that the processor is configured to perform in response to detecting an abort condition. | 2010-01-28 |
20100023708 | VARIABLE-LENGTH CODE (VLC) BITSTREAM PARSING IN A MULTI-CORE PROCESSOR WITH BUFFER OVERLAP REGIONS - An information handling system includes a multi-core processor that processes variable-length code (VLC) bitstream data. The bitstream data includes multiple codewords that the processor organizes into functionally common subsets. The processor includes a general purpose unit (GPU) and one or more special purpose units (SPUs). An SPU of the processor may include two SPU buffers. The processor first transfers bitstream data into GPU buffer memory and then populates the SPU buffers one after another with bitstream data. The SPU buffers may each include an overlap region that the SPU populates with the same bitstream data. The SPU parses the bitstream data in the SPU buffers in alternating fashion. The SPU may shift parsing from the one SPU buffer to the other SPU buffer when parsing reaches a subset boundary within an overlap region. | 2010-01-28 |
20100023709 | ASYMMETRIC DOUBLE BUFFERING OF BITSTREAM DATA IN A MULTI-CORE PROCESSOR - An information handling system includes a multi-core processor that processes variable-length code (VLC) bitstream data. The bitstream data includes multiple codewords for interpretation. The processor includes a general purpose unit (GPU) and a special purpose unit (SPU). The GPU includes GPU buffers and the SPU includes SPU buffers. After populating one GPU buffer with bitstream data, the processor populates another GPU buffer with subsequent bitstream data. The processor may populate the GPU buffers in alternating fashion. The processor populates one SPU buffer with bitstream data while parsing bitstream data in the other SPU buffer. The GPU of the processor populates the SPU buffers in alternating fashion. The size of the GPU buffers may be a multiple of the size of the SPU buffers. After the SPU buffers consume the bitstream data from one GPU buffer, the other GPU buffer transfers its bitstream data to the SPU buffers for parsing. | 2010-01-28 |
20100023710 | SYSTEMS, METHODS, AND APPARATUS FOR SUBDIVIDING DATA FOR STORAGE IN A DISPERSED DATA STORAGE GRID - An efficient method for breaking source data into smaller data subsets and storing those subsets along with coded information about some of the other data subsets on different storage nodes such that the original data can be recreated from a portion of those data subsets in an efficient manner. | 2010-01-28 |
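The dispersal idea above — splitting source data into subsets plus coded information so that the original can be rebuilt from only a portion of the pieces — can be illustrated with the simplest possible coding: one XOR parity subset. This is a sketch only; the patent's actual coding scheme is not specified here, and real dispersed storage grids tolerate more than one lost node. With k data subsets plus one parity subset, any k of the k+1 stored pieces suffice.

```python
import functools
import operator

def disperse(data, k=4):
    """Split data into k equal subsets plus one XOR parity subset
    (single-erasure code; an illustrative stand-in for the patent's
    coding). Returns (pieces, pad) where pad is the padding length."""
    pad = (-len(data)) % k
    padded = data + b"\0" * pad
    n = len(padded) // k
    slices = [padded[i * n:(i + 1) * n] for i in range(k)]
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*slices))
    return slices + [parity], pad

def recover(pieces, pad, k=4):
    """Rebuild the original data from the stored pieces, tolerating at
    most one missing piece (represented as None)."""
    missing = [i for i, p in enumerate(pieces) if p is None]
    if missing:
        # XOR of all k+1 pieces is zero, so XOR of the survivors
        # reconstructs the lost piece.
        others = [p for p in pieces if p is not None]
        rebuilt = bytes(functools.reduce(operator.xor, col) for col in zip(*others))
        pieces = pieces[:missing[0]] + [rebuilt] + pieces[missing[0] + 1:]
    data = b"".join(pieces[:k])
    return data[:len(data) - pad] if pad else data
```

Each of the k+1 pieces would be stored on a different storage node; losing any single node leaves the data recoverable.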
20100023711 | Interleaver Memory Allocation Method and Apparatus - According to one embodiment, memory is allocated between an interleaver buffer and a de-interleaver buffer in a communication device based on downstream and upstream memory requirements. The upstream de-interleaver memory requirement is determined based on upstream channel conditions obtained for a communication channel used by the communication device. The memory is allocated between the interleaver and de-interleaver buffers based on the downstream and upstream memory requirements. The downstream interleaver memory requirement may be determined based on one or more predetermined downstream configuration parameters. Alternatively, the downstream interleaver memory requirement may also be determined based on the upstream channel conditions by estimating the downstream capacity of the communication channel based on the upstream channel conditions and determining an interleaver buffer size that satisfies one or more predetermined downstream configuration parameters and the downstream capacity estimate. | 2010-01-28 |
20100023712 | Storage subsystem and method of executing commands by controller - A storage subsystem capable of processing time-critical control commands while suppressing deterioration of the system performance to a minimum. When various commands are received in a multiplex manner via the same port from plural host devices, the channel adapter of the storage subsystem extracts commands of a first kind from the received commands. Then, the adapter executes the extracted commands of the first kind with high priority within a given unit time, until a given number of guaranteed activations is reached. At the same time, commands of a second kind are enqueued in a queue of commands. After the commands of the first kind have been executed up to the number of guaranteed activations, the commands of the second kind are executed in the unit time. | 2010-01-28 |
20100023713 | Archive system and contents management method - There is provided an archive system that performs processing on arbitrary contents, the system including a grouping section that groups multiple archive nodes included in a cluster, a policy section that defines a requirement for performing processing on the arbitrary contents, and a control section that determines a group for performing processing on the arbitrary contents based on the group information about the definition of the grouping of the multiple archive nodes and the requirement and controls the determined group to perform the processing. | 2010-01-28 |
20100023714 | PARALLEL DATA STORAGE SYSTEM - A parallel data storage system for storing data received from, or retrieving data to, a host system using multiple data storage devices. The system includes an interface for communicating with the host system and a buffer configured to store data sectors received from the host system via the interface. A switch is used to selectively connect the interface and the data storage devices to the buffer to facilitate the transfer of data into and out of the buffer. The data sectors are transferred by segmenting each sector into multiple smaller data cells and distributing these data cells among the data storage devices using an arbitrated distribution method. | 2010-01-28 |
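The cell-distribution step above can be sketched as follows, with each sector cut into fixed-size cells that are dealt out round-robin across the devices. Round-robin is a simplification of the patent's arbitrated distribution method, and the names and sizes are illustrative.

```python
def stripe_sector(sector: bytes, cell_size: int, num_devices: int) -> list:
    """Segment one sector into cells and distribute them across devices.

    Round-robin stands in for the patent's arbitrated distribution.
    """
    cells = [sector[i:i + cell_size] for i in range(0, len(sector), cell_size)]
    devices = [[] for _ in range(num_devices)]
    for idx, cell in enumerate(cells):
        devices[idx % num_devices].append(cell)
    return devices

def reassemble_sector(devices: list) -> bytes:
    """Invert the distribution: read the cells back in round-robin order."""
    n = len(devices)
    total = sum(len(d) for d in devices)
    return b"".join(devices[i % n][i // n] for i in range(total))
```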
20100023715 | SYSTEM FOR IMPROVING START OF DAY TIME AVAILABILITY AND/OR PERFORMANCE OF AN ARRAY CONTROLLER - An apparatus comprising a storage array, a controller, a cache storage area and a backup storage area. The storage array may include a plurality of storage devices. The controller may be configured to send one or more commands configured to control reading and writing data to and from the storage array. The commands may include volume configuration information used by each of the plurality of storage devices. The cache storage area may be within the controller and may be configured to store a copy of the commands. The backup storage area may be configured to store the copy of the commands during a power failure. | 2010-01-28 |
20100023716 | STORAGE CONTROLLER AND STORAGE CONTROL METHOD - Difference information between two snapshots from a first point-in-time snapshot, which has been copied, to an N | 2010-01-28 |
20100023717 | REMOTE COPY SYSTEM AND REMOTE SITE POWER SAVING METHOD - A computer migrates to the same remote controller, from among a plurality of remote virtual computers at a remote site, two or more remote virtual computers belonging to a group configured from remote virtual computers with similar remote copy patterns. Remote virtual computers with dissimilar remote copy patterns do not reside in the same remote controller as these two or more remote virtual computers. | 2010-01-28 |
20100023718 | Methods For Data-Smuggling - The present invention discloses methods for an application, running on a host system, to access a restricted area of a storage device, the method including the steps of: providing a file system for running on the host system; restricting access, by the file system, to the restricted area; sending an indication, from the application to the storage device, that data being sent by the application to the storage device via the file system is intended for the restricted area; detecting the indication in the storage device; and making the data, residing in the restricted area, available for reading by the application upon receiving an application request. Preferably, the method further includes the step of: releasing wasted areas, of the storage device, for use by the file system. Preferably, the method further includes the step of: copying non-restricted data from a non-restricted area into the restricted area. | 2010-01-28 |
20100023719 | METHOD AND CIRCUIT FOR PROTECTION OF SENSITIVE DATA IN SCAN MODE - A reset generator for resetting at least one register in a register bank. The register generator comprises a scan mode input terminal configured to input a scan mode signal, a system reset input terminal configured to input a system reset signal, a secure reset output terminal configured to output a secure reset signal and a combination logic unit configured to combine the scan mode signal and the system reset signal. The combination is such that when the scan mode of the at least one register is activated, the secure reset signal is immediately activated for resetting the at least one register. The activation of the secure reset signal is independent of the system reset signal. The secure reset signal is deactivated when the system reset signal is deactivated and the secure reset signal follows the activation/deactivation cycles of the system reset signal after deactivation. | 2010-01-28 |
20100023720 | METHOD AND APPARATUS FOR RECOGNIZING CHANGES TO DATA - The present invention relates to a method and apparatus in which changes to relevant data are made easily recognizable. The data is stored in the same sector of a flash memory as a program used for the start-up or operation of a device. Due to the characteristics of flash memory, the complete sector, including the program, is deleted when the relevant data is deleted, so that the device is no longer operable and a malfunction and damage can be avoided. Furthermore, a bitwise inverted form of the data is stored in the flash memory, and it is inspected whether the original and the inverted form of the data coincide. A change to the data that is not recognizable by this inspection requires deletion of the sector, thereby also deleting the program, so that the device is no longer operable. | 2010-01-28 |
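The inverted-copy consistency check reduces to a few lines. This sketch assumes byte-granular storage; the function names are illustrative, but the 0xFF relation follows directly from storing a bitwise-inverted copy.

```python
def bitwise_invert(data: bytes) -> bytes:
    """Produce the bitwise-inverted form stored alongside the original."""
    return bytes(b ^ 0xFF for b in data)

def is_unchanged(original: bytes, inverted: bytes) -> bool:
    """The pair is consistent only if every byte pair XORs to 0xFF."""
    return len(original) == len(inverted) and all(
        a ^ b == 0xFF for a, b in zip(original, inverted)
    )
```

Flipping a bit in only one of the two copies breaks the 0xFF relation, which is what makes an unauthorized change recognizable on inspection.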
20100023721 | MEMORY SYSTEM AND HOST DEVICE - A memory system includes a nonvolatile memory, and a memory controller for performing control to extend the maximum value of a logical address by erasing data of the nonvolatile memory which has become unnecessary in accordance with a command from the outside, and reassigning the data which has become unnecessary to a memory area assigned to a part of the logical address. | 2010-01-28 |
20100023722 | STORAGE DEVICE FOR USE IN A SHARED COMMUNITY STORAGE NETWORK - A storage device configured to join a shared community storage network. All or a portion of the storage device is registered with the community storage network as a storage node. Once registered with the network, third party data may be stored on the storage node and remotely accessed by third parties. In addition, data stored on the storage device by a user may be stored in the shared community storage network by encrypting the data, adding redundancy, and distributing it to other storage nodes within the storage network. Data that is stored in the storage network is accessible to the user even if their storage device is inaccessible or fails. The user may receive economic or non-economic incentives for allowing the storage device to join the shared community storage network. | 2010-01-28 |
20100023723 | Paging Memory Contents Between A Plurality Of Compute Nodes In A Parallel Computer - Methods, apparatus, and products are disclosed for paging memory contents between a plurality of compute nodes in a parallel computer that includes: identifying, by a master node, a memory allocation request for an application executing on the master node, the memory allocation request requesting additional computer memory for use by the application during execution; requesting, by the master node from a slave node, an available memory notification specifying to the master node the computer memory available for allocation on the slave node; allocating, by the master node, at least a portion of the computer memory available for allocation on the slave node in dependence upon the memory allocation request and the available memory notification; and transferring, by the master node, contents of a portion of the computer memory on the master node to the allocated portion of the computer memory on the slave node. | 2010-01-28 |
20100023724 | Network Based Virtualization Performance - The disclosed embodiments improve network performance in networks such as storage area networks, which is particularly important in networks implementing virtualization. They provide improved mechanisms for performing processing in network devices such as switches, routers, or hosts. These mechanisms may be used separately or in combination with one another, and include methods and apparatus for processing traffic in an arbitrated loop, performing striping to support fairness and/or loop tenancy, configuring network devices such as switches so that virtualization is performed closest to the storage device (e.g., disk), ascertaining a CPU efficiency that quantifies the impact of virtualization on a processor, and configuring or accessing a striped volume to account for metadata stored in each storage partition. | 2010-01-28 |
20100023725 | GENERATION AND UPDATE OF STORAGE GROUPS CONSTRUCTED FROM STORAGE DEVICES DISTRIBUTED IN STORAGE SUBSYSTEMS - A plurality of storage subsystems and a plurality of storage devices are maintained, wherein each storage subsystem includes at least one storage device of the plurality of storage devices. A plurality of storage groups is generated, wherein each storage group includes one or more storage devices selected from the plurality of storage devices, and wherein the one or more storage devices selected for each storage group are included in at least two different storage subsystems. The plurality of storage groups is adjusted based on: (a) usage statistics of the data, wherein the usage statistics are stored in a log file; and (b) properties and organization of the data stored in a plurality of data structures. | 2010-01-28 |
20100023726 | Dual Hash Indexing System and Methodology - A method, system and program are disclosed for accelerating data storage in a cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a cache memory by using a dual hash technique to rapidly store and/or retrieve connection state information for cached connections in a plurality of index tables that are indexed by hashing network protocol address information with a pair of irreducible CRC hash algorithms to obtain an index to the memory location of the connection state information. | 2010-01-28 |
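A rough model of the dual hash lookup: the connection tuple is hashed with two independent checksums to address two index tables, with the second table absorbing collisions from the first. Python's zlib.crc32 and zlib.adler32 stand in for the patent's pair of irreducible CRC polynomials (Adler-32 is not a CRC); the table size, collision policy, and class names are assumptions.

```python
import zlib

TABLE_SIZE = 1024

class DualHashIndex:
    """Two index tables, each addressed by a different checksum of the
    connection 4-tuple; a collision in the first table falls back to
    the second."""

    def __init__(self):
        self.t1 = {}   # dicts stand in for fixed-size slot arrays
        self.t2 = {}

    @staticmethod
    def _slots(conn):
        key = "{}:{}->{}:{}".format(*conn).encode()
        return zlib.crc32(key) % TABLE_SIZE, zlib.adler32(key) % TABLE_SIZE

    def insert(self, conn, state):
        i1, i2 = self._slots(conn)
        if i1 not in self.t1:
            self.t1[i1] = (conn, state)
        else:
            self.t2[i2] = (conn, state)

    def lookup(self, conn):
        i1, i2 = self._slots(conn)
        for table, i in ((self.t1, i1), (self.t2, i2)):
            entry = table.get(i)
            if entry is not None and entry[0] == conn:
                return entry[1]
        return None
```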
20100023727 | IP ADDRESS LOOKUP METHOD AND APPARATUS BY USING BLOOM FILTER AND MULTI-HASHING ARCHITECTURE - The present invention relates to an IP address lookup apparatus using a Bloom filter and a multi-hashing architecture that includes a buffering means that outputs a prefix of an inputted address, reducing the number of bits by one bit whenever a control signal is received; a hashing hardware that generates a plurality of hashing indexes by hashing the prefix (hereinafter referred to as the "output prefix") outputted from the buffering means; a Bloom filter that determines whether or not the output prefix is an entry of the hash table by using the plurality of hashing indexes; and a processor that includes the hash table and an overflow table, outputs a prefix that matches the output prefix by searching the entries of the hash table locations indicated by the plurality of hashing indexes and the entries stored in the overflow table when the Bloom filter's determination result is positive, and outputs the control signal to the buffering means when no matching prefix is found or the Bloom filter's determination result is negative. | 2010-01-28 |
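The search loop described above, probing the table only on a positive Bloom answer and shortening the prefix by one bit otherwise, can be sketched as follows. A plain dict stands in for the multi-hashed table plus overflow table, and the Bloom filter parameters are arbitrary.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter over bit-string prefixes."""

    def __init__(self, bits: int = 4096, hashes: int = 3):
        self.bits, self.hashes, self.array = bits, hashes, 0

    def _indexes(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.bits

    def add(self, item: str) -> None:
        for i in self._indexes(item):
            self.array |= 1 << i

    def might_contain(self, item: str) -> bool:
        return all(self.array >> i & 1 for i in self._indexes(item))

def longest_prefix_match(addr_bits: str, table: dict, bloom: Bloom):
    """Probe the table only on a positive Bloom answer; a miss or a
    negative answer plays the role of the control signal that drops
    one bit from the prefix."""
    prefix = addr_bits
    while prefix:
        if bloom.might_contain(prefix) and prefix in table:
            return table[prefix]
        prefix = prefix[:-1]
    return None
```

The point of the filter is that most shortened prefixes never touch the hash table at all; only probable members incur a table search.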
20100023728 | METHOD AND SYSTEM FOR IN-PLACE MULTI-DIMENSIONAL TRANSPOSE FOR MULTI-CORE PROCESSORS WITH SOFTWARE-MANAGED MEMORY HIERARCHY - A method and system for transposing a multi-dimensional array for a multi-processor system having a main memory for storing the multi-dimensional array and a local memory is provided. One implementation involves partitioning the multi-dimensional array into a number of equally sized portions in the local memory, in each processor performing a transpose function including a logical transpose on one of said portions and then a physical transpose of said portion, and combining the transposed portions and storing back in their original place in the main memory. | 2010-01-28 |
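For a square 2-D array, the per-tile idea might look like the following sketch: the matrix is partitioned into tiles small enough for a core's local store, and elements are swapped across the diagonal one tile pair at a time. This is a plain-Python illustration; the patent targets multi-dimensional arrays and DMA-managed local memories, which are elided here.

```python
def blocked_transpose(matrix: list, tile: int) -> None:
    """Transpose a square matrix in place, one tile at a time, as if
    each tile were staged through a core's local memory."""
    n = len(matrix)
    for bi in range(0, n, tile):
        for bj in range(bi, n, tile):
            for i in range(bi, min(bi + tile, n)):
                # inside a diagonal tile, start past the diagonal
                j_start = i + 1 if bi == bj else bj
                for j in range(j_start, min(bj + tile, n)):
                    matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
```

Because each tile pair is independent, the outer two loops are what a runtime would distribute across cores, with each core holding only its pair of tiles locally.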
20100023729 | IMPLEMENTING SIGNAL PROCESSING CORES AS APPLICATION SPECIFIC PROCESSORS - Methods and apparatus are provided for efficiently implementing signal processing cores as application specific processors. A signal processing core, such as a Fast Fourier Transform (FFT) core or a Finite Impulse Response (FIR) core includes a data path and a control path. A control path is implemented using processor components to increase resource efficiency. Both the data path and the control path can be implemented using function units that are selected, parameterized, and interconnected. A variety of signal processing algorithms can be implemented on the same application specific processor. | 2010-01-28 |
20100023730 | Circular Register Arrays of a Computer - The invention provides a method and apparatus for eliminating the stack overflow and underflow in a dual stack computer | 2010-01-28 |
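The circular-array idea can be modeled in a few lines: the stack pointer wraps modulo the array size, so a push beyond capacity silently overwrites the oldest entry and a pop from an "empty" stack just reads a stale register; neither condition raises a fault. This is a behavioral sketch only, and the register width and class name are illustrative.

```python
class CircularStack:
    """Fixed-size stack on a circular register array: the pointer wraps
    instead of overflowing or underflowing."""

    def __init__(self, size: int):
        self.regs = [0] * size
        self.top = 0                      # index of the next free register

    def push(self, value: int) -> None:
        self.regs[self.top] = value
        self.top = (self.top + 1) % len(self.regs)

    def pop(self) -> int:
        self.top = (self.top - 1) % len(self.regs)
        return self.regs[self.top]
```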
20100023731 | Generation of parallelized program based on program dependence graph - A method of generating a parallelized program includes calculating an execution order of vertices of a degenerate program dependence graph, generating basic blocks by consolidating vertices including neither branching nor merging, generating procedures each corresponding to a respective one of the vertices, and generating a procedure control program by arranging an instruction to execute a first procedure after an instruction to wait for output data transfer from a second procedure for a dependence relation crossing a border between the basic blocks, generating an instruction to register a dependence relation that a third procedure has on output data transfer from a fourth procedure for a dependence relation within one of the basic blocks, and generating an instruction to perform a given data transfer directly from procedure to procedure for each of a data transfer within one of the basic blocks and a data transfer crossing a border between the basic blocks. | 2010-01-28 |
20100023732 | OPTIMIZING NON-PREEMPTIBLE READ-COPY UPDATE FOR LOW-POWER USAGE BY AVOIDING UNNECESSARY WAKEUPS - A technique for low-power detection of a grace period following a shared data element update operation that affects non-preemptible data readers. A grace period processing action is implemented that requires a processor that may be running a non-preemptible reader of the shared data element to pass through a quiescent state before further grace period processing can proceed. A power status of the processor is also determined. Further grace period processing may proceed without requiring the processor to pass through a quiescent state if the power status indicates that quiescent state processing by the processor is unnecessary. | 2010-01-28 |