45th week of 2009 patent application highlights part 47 |
Patent application number | Title | Published |
20090276509 | Method and Apparatus for Operating a Host Computer in a Network Environment - The present invention provides a method and an apparatus that utilize a portable apparatus to operate a host computer in a network environment. The portable apparatus, including an operating system and a list of software applications, is installed in a removable data storage medium. The basic input/output system (BIOS) of the host computer directly or indirectly identifies the portable apparatus as its boot drive. The host computer, immersed in a network environment, then loads the operating system from the portable apparatus into its random access memory (RAM). In one embodiment of the invention, a hardware profile which contains host and peripheral device-related information is either stored on or operatively accessible by the host computer. The operating system is capable of incorporating information from the hardware profile during an OS-loading procedure. | 2009-11-05 |
20090276510 | System and Method for Network Design - A system and method for network design are disclosed. In one embodiment of a method incorporating teachings of the present disclosure, embedded network information describing at least one existing network element and a plurality of physical locations available for locating new network nodes may be received. A demand forecast for a coverage area of a proposed network may also be received. In an embodiment in which a proposed network has a core layer, a number of core layer nodes to be included in the proposed network may also be received. In an embodiment in which a proposed network has an aggregation layer, a number of aggregator layer nodes to include in the proposed network may be calculated by adding a positive integer to a lower bound number of aggregator layer nodes. Consideration may be given to these and other inputs in connection with generating a network design. | 2009-11-05 |
20090276511 | CONTROLLING METHOD, COMPUTER SYSTEM, AND PROCESSING PROGRAM OF BOOTING UP A COMPUTER - A computer system having computers executing programs, a management computer managing said computers, and a storage system which can be accessed from said computers via a network, wherein said management computer includes: a storage unit for storing network identity information of said network which is allocated to said computers, application identity information indicating said programs, and area identity information indicating areas in said storage system in which said programs are stored, said network identity information and said area identity information being related to said application identity information; and a control unit for sending to said computers said network identity information and said area identity information corresponding to said application identity information, in order to boot programs indicated by said application identity information, in response to entry of a boot request including said application identity information. | 2009-11-05 |
20090276512 | BIOS SELECTION FOR PLURALITY OF SERVERS - A method for selecting a basic input output system (BIOS) and an operating system (OS) for a server managed by a controller in communication with a plurality of servers is provided. The server is detected. A map describing a relationship between the server and the plurality of servers is consulted. The map at least partially defines a policy for the server. Vital product data (VPD) of the server is used in conjunction with the map and the policy to select at least one of the BIOS and the OS for the server prior to an application of power to the server. | 2009-11-05 |
20090276513 | POLICY CONTROL ARCHITECTURE FOR SERVERS - A system for implementing and controlling a plurality of server-specific policies for a plurality of servers using a management module in communication with the plurality of servers is provided. A policy controller module is operational on the management module. The policy controller module is adapted for defining a first policy of a plurality of policies for each of the plurality of servers, managing a plurality of rules relevant to an execution of the first policy, and coordinating the execution of the first policy with an execution of a second policy of the plurality of policies. | 2009-11-05 |
20090276514 | DISCARDING SENSITIVE DATA FROM PERSISTENT POINT-IN-TIME IMAGE - A network storage server implements a method to discard sensitive data from a Persistent Point-In-Time Image (PPI). The server first efficiently identifies a dataset containing the sensitive data from a plurality of datasets managed by the PPI. Each of the plurality of datasets is read-only and encrypted with a first encryption key. The server then decrypts each of the plurality of datasets, except the dataset containing the sensitive data, with the first encryption key. The decrypted datasets are re-encrypted with a second encryption key, and copied to a storage structure. Afterward, the first encryption key is shredded. | 2009-11-05 |
20090276515 | MULTI-MODALITY NETWORK FOR IMPROVED WORKFLOW - Described herein are systems and methods for multi-modality networks that provide access to different medical devices though a common operator interface. In an example embodiment, a multi-modality network comprises a host computer and a plurality of medical devices. The medical devices may support different diagnostic and/or therapeutic modalities. The host computer communicates with the medical devices through a communications network. In one embodiment, the host computer includes a display and a control console that allow the physician or operator to control the different medical devices and view images and measurements from the different medical devices through a common interface. Further, the host computer may provide computing resources, e.g., a general purpose image processor, that can be shared among the different modalities. | 2009-11-05 |
20090276516 | DOWNLOAD AND DATA TRANSFER GAMING SYSTEM - A download and data transfer gaming system utilizes a hybrid peer-to-peer, segmented file distribution protocol to vastly improve the download capabilities of a gaming system by reducing the upload cost borne by the download host. The system redistributes this cost to the download clients by allowing clients on the gaming system to upload pieces of a file to each other. The system is also more resilient, since it eliminates the possibility of a client missing a download broadcast, and it guards against missing packets and bad data integrity by using SHA-1 verification of the file pieces. The improved bandwidth capabilities enable the download of much larger files, thus enhancing the game play experience. | 2009-11-05 |
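The SHA-1 integrity check described in the abstract above can be sketched as follows; the piece layout and re-request policy are illustrative assumptions, not the patent's implementation:

```python
import hashlib

def verify_piece(piece: bytes, expected_sha1_hex: str) -> bool:
    """Check a downloaded file piece against its expected SHA-1 digest."""
    return hashlib.sha1(piece).hexdigest() == expected_sha1_hex

# A client that receives a corrupt piece from a peer would discard it and
# re-request it from another peer rather than wait for a re-broadcast.
piece = b"segment-0 payload"
good_digest = hashlib.sha1(piece).hexdigest()
assert verify_piece(piece, good_digest)
assert not verify_piece(b"corrupted payload", good_digest)
```

Because each piece is verified independently, a single bad upload from one peer invalidates only that piece, not the whole download.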
20090276517 | DOWNLOAD AND DATA TRANSFER GAMING METHOD - A download and data transfer gaming method utilizes a hybrid peer-to-peer, segmented file distribution protocol to vastly improve the download capabilities of a gaming system by reducing the upload cost borne by the download host. The method redistributes this cost to the download clients by allowing clients on the gaming system to upload pieces of a file to each other. The method is also more resilient, since it eliminates the possibility of a client missing a download broadcast, and it guards against missing packets and bad data integrity by using SHA-1 verification of the file pieces. The improved bandwidth capabilities enable the download of much larger files, thus enhancing the game play experience. | 2009-11-05 |
20090276518 | TECHNIQUE FOR REGULATING LINK TRAFFIC - A system which regulates communication with a server is described. During operation, the system determines a retransmission rate of data packets during a first set of conversations between a group of users and the server via a peering link. Next, the system compares the retransmission rate and an historical retransmission rate of data packets during a second set of conversations between a second group of users and the server via the peering link. The system then adjusts a target acceptance rate of the server to requests to initiate conversations with additional users via the peering link based on the comparison of the retransmission rate and the historical retransmission rate. Additionally, the system accepts or rejects a request to initiate a conversation between another user and the server via the peering link based on an actual acceptance rate of the server to requests to initiate the conversations and the target acceptance rate. | 2009-11-05 |
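The link-regulation loop in the abstract above compares a current retransmission rate against a historical baseline and nudges a target acceptance rate accordingly. A minimal sketch follows; the step size and the simple proportional rule are assumptions for illustration:

```python
def adjust_target_rate(target: float, retrans_rate: float,
                       historical_rate: float, step: float = 0.05) -> float:
    """Lower the target acceptance rate when retransmissions rise above
    the historical baseline; raise it when they fall below (hypothetical
    adjustment rule, clamped to [0, 1])."""
    if retrans_rate > historical_rate:
        target -= step
    elif retrans_rate < historical_rate:
        target += step
    return min(1.0, max(0.0, target))

def accept_request(actual_rate: float, target_rate: float) -> bool:
    """Admit a new conversation only while the actual acceptance rate
    stays below the target."""
    return actual_rate < target_rate

target = adjust_target_rate(0.80, retrans_rate=0.10, historical_rate=0.05)
assert abs(target - 0.75) < 1e-9          # congestion detected, back off
assert accept_request(actual_rate=0.70, target_rate=target)
assert not accept_request(actual_rate=0.76, target_rate=target)
```

Retransmissions serve here as a congestion proxy for the peering link, so the server sheds new conversations before the link saturates.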
20090276519 | Method and System for Achieving Better Efficiency in a Client Grid Using Node Resource Usage and Tracking - A system, method and computer program product for distributing task assignments on a computer network comprising a client grid having at least one server coupled to at least one client node and a plurality of client computers coupled to the client node through a plurality of monitoring agents. Each monitoring agent collects data regarding the resources a particular client computer makes available to the grid and transmits the data to the grid server when the client computer requests a grid task. The system generates a resource probability distribution based on the historical computing resource data and employs a scheduling algorithm to distribute grid tasks to the client computers using at least the probability distribution. | 2009-11-05 |
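Scheduling grid tasks from a resource probability distribution, as described above, can be sketched as a weighted random choice; treating reported spare capacity directly as the weight is an illustrative assumption:

```python
import random

def pick_client(availability: dict, rng: random.Random) -> str:
    """Pick a client computer with probability proportional to its
    historically reported spare capacity (hypothetical metric derived
    from the monitoring agents' data)."""
    clients = list(availability)
    weights = [availability[c] for c in clients]
    return rng.choices(clients, weights=weights, k=1)[0]

availability = {"client-a": 0.9, "client-b": 0.3, "client-c": 0.1}
rng = random.Random(42)
picks = [pick_client(availability, rng) for _ in range(1000)]
# The most available client should receive the most task assignments.
assert picks.count("client-a") > picks.count("client-b")
assert picks.count("client-b") > picks.count("client-c")
```

A real scheduler would refresh the distribution as agents report new usage data, but the proportional-draw idea is the same.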
20090276520 | METHOD AND APPARATUS FOR SERVER ELECTION, DISCOVERY AND SELECTION IN MOBILE AD HOC NETWORKS - A method and apparatus for server election, discovery and selection in mobile ad hoc communication networks is disclosed. The server election method may include a server node that may elect itself as a server in the network if the received server capabilities, network server lists and network specific parameters allow its election as a server in the network. A client node may discover and select a server in the network by transmitting a server discovery request to a plurality of nodes in the network, receiving advertisements from one or more servers in the network in response to the server discovery request, and selecting a server based on the received stability and connectivity information for each server from which advertisements are received. | 2009-11-05 |
20090276521 | JUDICIAL MONITORING ON PEER-TO-PEER NETWORKS - The invention relates to a procedure for judicial monitoring in peer-to-peer networks, in which participants to be monitored are marked, and in which, upon establishment of a peer-to-peer communication with a marked participant, the connection is diverted via a monitoring server and the communication data is accessed by an appropriate monitoring server service. This fulfills the requirement for judicial monitoring in a simple way. | 2009-11-05 |
20090276522 | COOPERATIVE MONITORING OF PEER-TO-PEER NETWORK ACTIVITY - Particular embodiments include cooperative monitoring of peer-to-peer activity on a network including maintaining communication between a local monitoring process and a network monitoring process such that a process can use both network monitoring and local monitoring. The cooperative monitoring includes monitoring a local peer using local monitoring of a point in the network by monitoring packets passing through the point, monitoring the network using network monitoring by a monitoring system or agent coupled to the network, and analyzing the result of network monitoring and local monitoring to determine at least one file transfer associated with the local peer. | 2009-11-05 |
20090276523 | EXPRESSION-BASED WEB LOGGER FOR USAGE AND NAVIGATIONAL BEHAVIOR TRACKING - Configurably storing data in a plurality of files based on expressions and conditions associated with the data. Logging software enables tracking of the navigation pattern of users for selected network properties under specified conditions. The logging software is configurable such that most current and future logging specifications may be fulfilled without any code changes to the logging software. | 2009-11-05 |
20090276524 | THIN CLIENT TERMINAL, OPERATION PROGRAM AND METHOD THEREOF, AND THIN CLIENT SYSTEM - A thin client terminal | 2009-11-05 |
20090276525 | METHOD AND APPARATUS FOR INTEGRATING HETEROGENEOUS SENSOR DATA IN UBIQUITOUS SENSOR NETWORK - Provided are an apparatus and method for integrating heterogeneous sensor data in a ubiquitous sensor network. The method includes receiving an integrated query for heterogeneous sensor networks from a sensor network management system, decomposing the received integrated query into queries suitable for each sensor network, and transmitting the decomposed queries to each corresponding sensor network. The method further includes generating integrated sensor data from the responses and storing the integrated sensor data in an integrated database when responses to the respective queries are received from each of the sensor networks, and converting the integrated sensor data stored in the integrated database into a preset data format and transmitting the converted sensor data to the sensor network management system. | 2009-11-05 |
20090276526 | Access Control List Endpoint Implementation - A method, system, and computer program product for providing direct communications between FCoE endpoint devices within the same fibre channel network zone. A direct fibre channel (DFC) utility provides an FCoE stack with an exclusive ability to define an Ethertype within an ethertype field of an Ethernet packet with “FCoE”. In addition, the DFC utility enables storage of access control lists (ACLs) containing allowed destination addresses and allowed source addresses within the adapter of an FCoE endpoint. Additionally, the DFC utility initiates an exchange of messages with an Ethernet switch to determine a feasibility of establishing direct connections between endpoints. In particular, the DFC utility determines whether the Ethernet switch supports FCoE ACL checking. Further, the DFC utility creates a zone ID for the FCoE endpoint device. The DFC utility allows direct communication between FCoE endpoints within the same fibre zone. | 2009-11-05 |
20090276527 | Light Weight Process Abstraction For Distributed Systems - Methods and apparatus provide for a Process Descriptor to obtain an identity of an entity controlling resources of a plurality of computer systems linked via a network which access a common set of network file systems. Via a process abstraction, the Process Descriptor allows a user to describe a run-time configuration for a process to be run with the entity. The entity instantiates an instance for the process of the first application according to the first run-time configuration. For each process described by the process abstraction, the process' run-time configuration includes one or more unique network addresses associated with the process and network file systems, from the common set of network file systems, accessible by the process. By associating a unique network address with the process, communication with that process' instance is available wherever the instance is executing within the entity. | 2009-11-05 |
20090276528 | Methods to Optimally Allocate the Computer Server Load Based on the Suitability of Environmental Conditions - A method includes generating a space information value for each of a plurality of spaces based on at least one environmental condition measurement for the corresponding space. Each space includes one or more computing devices. The space information value includes information regarding the relative suitability of a corresponding space for accepting computing load. The method also includes determining an allocation of additional computing load based on the space information values. | 2009-11-05 |
20090276529 | CONNECTING EXTERNAL DEVICES TO A GAMING VOICE CHAT SERVICE - Voice chat enhances the game playing experience by allowing gamers in different locations to have conversations within the gaming environment. Functionality can be implemented within a gaming system to send an external invitation to a user who is logged out of the game system to participate in a voice chat and/or multiplayer game session. The user can choose to accept the invitation and participate in the voice chat session on a device such as a mobile phone. Automatically generating external requests improves convenience for players, especially when inviting several other players to a voice chat session, because they do not have to find external contact information for each player who is not logged in. | 2009-11-05 |
20090276530 | Devices, Systems, Methods and Software for Computer Networking - A method of accessing content from a server using a client device, the client device having at least first and second connections to one or more remote computers. The method includes testing the performance of the first connection, testing the performance of the second connection and selecting, in response to the performance testing, the first connection or the second connection for accessing content from the server using the client device. | 2009-11-05 |
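The connection-selection step described above, once the performance tests have produced measurements, reduces to choosing the better link. A minimal sketch, using lowest measured latency as the criterion (an assumption; a real client might also weigh throughput or cost):

```python
def select_connection(measurements: dict) -> str:
    """Given per-connection latency measurements in seconds, select the
    connection with the lowest measured latency for accessing the server."""
    return min(measurements, key=measurements.get)

# e.g. a client device with both a Wi-Fi and a cellular link:
assert select_connection({"wifi": 0.035, "cellular": 0.120}) == "wifi"
```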
20090276531 | Media File Sharing, Correlation Of Metadata Related To Shared Media Files And Assembling Shared Media File Collections - The present invention provides for systems and methods for communicating media files and creating a collection of media files, also referred to herein as a master media file. In addition, the systems and methods of the present invention provide for the creation of automatic metadata and compilation of metadata associated with the collection of media files. The present invention is able to bond devices, referred to herein as slave devices, such as media capture devices, presence devices and/or sensor devices and instruct the slave devices, particularly the media capture devices, to communicate captured media files with a specified set of metadata included. | 2009-11-05 |
20090276532 | REDUCING OCCURRENCE OF USER EQUIPMENT REGISTRATION EXPIRY DURING CALLS - Methods, systems, User Equipment (UE), and computer readable medium for reducing the occurrence of UE registration expiry are provided. A method of reducing the occurrence of UE registration expiry during calls includes registering a UE with a network for a registration period, determining a re-registration threshold time period, comparing a duration of a remaining portion of the registration period at a particular time with the re-registration threshold time period, and attempting to re-register the UE with the network for a further registration period if the remaining portion is less than or equal to the re-registration threshold time period, wherein the determining of the re-registration threshold time period includes at least one of setting the threshold time period to a value greater than 600 seconds, determining the threshold time period according to a remaining talk time of the UE at the particular time, determining the threshold time period according to a state of the UE at the particular time, determining the threshold time period according to a duration of at least one previous call made by the UE, determining the threshold time period according to a statistical parameter of a plurality of calls made by the UE, determining the threshold time period according to a statistical parameter of a plurality of calls made by at least one UE, determining the threshold time period according to a predefined maximum call duration, and determining the threshold time period independently of a length of the registration period. | 2009-11-05 |
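The core comparison in the abstract above — re-register when the remaining registration period drops to the threshold — can be sketched briefly. Deriving the threshold from the mean of previous call durations with a 600-second floor combines two of the listed options and is an illustrative assumption:

```python
from statistics import mean

def reregistration_threshold(previous_call_durations, floor_s=600):
    """Derive the threshold from a statistical parameter (here the mean)
    of previous call durations, never dropping below a 600-second floor.
    Both the floor and the choice of the mean are assumptions."""
    if not previous_call_durations:
        return floor_s
    return max(floor_s, mean(previous_call_durations))

def should_reregister(remaining_registration_s, threshold_s):
    """Attempt re-registration when the remaining registration period is
    no longer than the threshold, so registration cannot expire mid-call."""
    return remaining_registration_s <= threshold_s

threshold = reregistration_threshold([300, 900, 1200])  # mean = 800 s
assert threshold == 800
assert should_reregister(remaining_registration_s=750, threshold_s=threshold)
assert not should_reregister(remaining_registration_s=900, threshold_s=threshold)
```

Sizing the threshold from observed call lengths makes expiry during a typical call unlikely without re-registering needlessly often.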
20090276533 | Authentication Option Support for Binding Revocation in Mobile Internet Protocol version 6 - A network component comprising at least one processor configured to implement a method comprising sending a message comprising an authentication mobility option to a mobile node, wherein the message is configured to revoke a mobility binding for the mobile node is disclosed. Also disclosed is a system comprising a home agent configured to send a binding revocation indication (BRI) to a mobile node and receive a binding revocation acknowledgement (BRA) from the mobile node, wherein the BRI comprises a first authentication mobility option and the BRA comprises a second authentication mobility option. Included is a method comprising receiving a BRI message comprising an authentication mobility option from a home agent, analyzing the authentication mobility option, and sending a BRA message to the home agent. | 2009-11-05 |
20090276534 | Enterprise Device Policy Management - Methods and systems for managing policies of portable data storage devices in conjunction with a third-party service are disclosed. One or more candidates of a plurality of members in an enterprise may be identified via the third-party service. Each of the plurality of members may be associated with a respective portable data storage device. An indication provided by the third-party service of one or more candidate devices may be obtained. The one or more candidate devices may each be a portable data storage device associated with a respective candidate. Policies of the one or more candidate devices may be modified. | 2009-11-05 |
20090276535 | MEDIA STREAMING OF WEB CONTENT DATA - Methods for streaming web content data via a computer-readable medium. The web content data comprises one or more media samples. The media samples are encoded in a streaming media format as a web component stream. The web component stream is combined with other component streams comprising additional data other than web content data into a presentation stream. The presentation stream is transmitted via a media server to a client. Rendering commands, which are included in one or more rendering samples encoded in the web component stream along with the media samples, coordinate synchronization between the media samples and the additional data when the client renders the presentation stream. | 2009-11-05 |
20090276536 | DISTRIBUTION METHOD, PREFERABLY APPLIED IN A STREAMING SYSTEM - The invention relates to a data live streaming system comprising at least one data live streaming broadcaster LSB and at least two live streaming recipients LSR, said at least two live streaming recipients LSR forming at least a part of a peer-to-peer streaming network and said at least two live streaming recipients LSR each comprising means for generation of peer-to peer streaming to other live streaming recipients LSR of said peer-to peer streaming network and wherein said peer-to peer streaming to other streaming recipients LSR comprises loss resilient code representations of data from said at least one live streaming broadcaster LSB. | 2009-11-05 |
20090276537 | Mechanisms for role negotiation in the establishment of secure communication channels in peer-to-peer environments - A method of establishing secure communication channels in peer-to-peer environments is provided. The method includes eliminating a role conflict between at least a first peer and a second peer, and determining which peer will act as a client and which will act as a server in a secure connection handshake. When the first peer or the second peer detects a role conflict, an attribute of the handshake message is used as a tiebreaker to determine a wait period: the peer cancels its own request, drops or denies the incoming request, and waits a random amount of time before resending the connection request. Because the random time intervals used by the peers can differ, the chance of a repeated role conflict is reduced. | 2009-11-05 |
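The tiebreaker-plus-backoff scheme above can be sketched in a few lines; using a nonce carried in the handshake message as the deciding attribute, and "larger nonce becomes server", are hypothetical choices for illustration:

```python
import random

def resolve_roles(local_nonce: int, remote_nonce: int) -> str:
    """Break a role conflict with an attribute of the handshake message
    (here a nonce, a hypothetical choice): the peer holding the larger
    nonce takes the server role in the secure handshake."""
    return "server" if local_nonce > remote_nonce else "client"

def retry_wait(rng: random.Random, max_wait_s: float = 1.0) -> float:
    """Random back-off before resending a cancelled connection request;
    independent draws on each peer make a repeat collision unlikely."""
    return rng.uniform(0.0, max_wait_s)

assert resolve_roles(local_nonce=7, remote_nonce=3) == "server"
assert resolve_roles(local_nonce=3, remote_nonce=7) == "client"
wait = retry_wait(random.Random(1))
assert 0.0 <= wait <= 1.0
```

Both peers evaluate the same attribute, so they reach complementary conclusions without extra round trips.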
20090276538 | DEVICES AND METHODS FOR PROVIDING NETWORK ACCESS CONTROL UTILIZING TRAFFIC-REGULATION HARDWARE - Disclosed are devices and methods for providing network access control utilizing traffic-regulation hardware, the device including: at least one client-side port for operationally connecting to a client system; at least one network-side port for operationally connecting to a network; a logic module for regulating network traffic, based on device-related data, between the ports, the logic module including: a memory unit for storing and loading the device-related data; and a CPU for processing the device-related data; and at least one relay, between at least one respective client-side port and at least one respective network-side port, configured to open upon receiving a respective network-access-denial command from the logic module. Preferably, the logic module is configured to maintain an open-relay line-rate when at least one relay is open, and to maintain a closed-relay line-rate when at least one relay is closed. | 2009-11-05 |
20090276539 | Conversational Asynchronous Multichannel Communication through an Inter-Modality Bridge - A communications apparatus is configured to bridge modalities and different communications formats. The apparatus may include a bridge to receive an input through a modality gateway and to deliver an output through an output channel, a communication engine configured to manipulate the input into the output, a router configured to route the configured output to a respective output channel, and a controller configured to control the bridge. The controller may determine a new modality depending on a context of the communications apparatus. | 2009-11-05 |
20090276540 | PEER-TO-PEER (P2P) NETWORK SYSTEM AND METHOD OF OPERATING THE SAME BASED ON REGION - A peer-to-peer (P2P) network system and a method of operating the P2P network system based on region are provided. If an edge peer storing a resource information list of a super peer migrates to a different super peer and is registered and connected with the different super peer, the edge peer transfers the resource information list to the different super peer to share the resource information list. Resources may be searched based on a region information list into which resource information lists of adjacent super peers are integrated. | 2009-11-05 |
20090276541 | GRAPHICAL DATA PROCESSING - A method and system | 2009-11-05 |
20090276542 | METHOD AND APPARATUS FOR TIME AND FREQUENCY TRANSFER IN COMMUNICATION NETWORKS - A timing system for time synchronization between a time server and a time client over a packet network. The timing system includes a time server for generating current timestamp information and a time client having a phase-locked loop driven client clock counter. The time client periodically exchanges time transfer protocol messages with the time server over the packet network, and calculates an estimated client time based on the timestamp information. The phase-locked loop in the time client receives periodic signals representing the estimated server time as its input and calculates a signal which represents the error difference between the estimated server time and the time indicated by the time client clock counter. The error difference eventually converges to zero or a given error range indicating the time presented by the client clock counter, which is driven by the phase-locked loop having locked onto the time of the time server. | 2009-11-05 |
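The convergence behavior described above — the client clock being steered until the error difference approaches zero — can be sketched as a first-order loop; the fixed gain and static server time are simplifying assumptions, not the patent's loop design:

```python
def pll_step(client_time: float, server_estimate: float, gain: float = 0.2):
    """One update of a simplified first-order loop: move the client clock
    toward the estimated server time by a fraction of the error."""
    error = server_estimate - client_time
    return client_time + gain * error, error

client, server = 100.0, 103.5   # client starts 3.5 s behind (illustrative)
errors = []
for _ in range(30):
    client, err = pll_step(client, server)
    errors.append(abs(err))

# The error difference converges toward zero, as the abstract describes.
assert errors[-1] < errors[0]
assert errors[-1] < 0.01
```

In a real deployment the server time advances and the estimates are noisy timestamp exchanges, but the same negative-feedback structure drives the residual error into a bounded range.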
20090276543 | Peer to peer broadcast content synchronization - A method and apparatus for synchronizing recorded broadcast content on a peer to peer system, in which portions of the content are synchronized between the peers by referring to program clock reference values. | 2009-11-05 |
20090276544 | Mapping a Virtual Address to PCI Bus Address - Registering memory space within a data processing system is performed. One or more open calls are received from an application to access one or more input/output (I/O) devices. Responsive to receiving the one or more open calls, one or more I/O map and pin calls are sent in order to register memory space for the one or more I/O devices within at least one storage area that will be accessed by the application. At least one virtual I/O bus address is received for each registered memory space of the one or more I/O devices. At least one I/O command is executed using the at least one virtual I/O bus address without intervention by an operating system or operating system image. | 2009-11-05 |
20090276545 | MEMORY MODULE WITH CONFIGURABLE INPUT/OUTPUT PORTS - A memory module has one or more memory devices, a controller in communication with the one or more memory devices, and a plurality of input/output ports. The controller is configured to configure each input/output port as an input, an output, or a bidirectional input/output. | 2009-11-05 |
20090276546 | TECHNIQUES FOR DETECTION AND SERIAL COMMUNICATION FOR A NON-USB SERIAL INTERFACE OVER USB CONNECTOR - According to an example embodiment, an apparatus may include a non-Universal Serial Bus (non-USB) serial interface, a USB connector, a first protection circuit connected between a first data connection of the non-USB serial interface and a first data connection of the USB connector, a second protection circuit connected between a second data connection of the non-USB serial interface and a second data connection of the USB connector, a processor, and a detection circuit connected to the second data connection of the USB connector, the detection circuit configured to output a signal to the processor indicating an attachment or connection of a second non-USB serial interface to the USB connector. | 2009-11-05 |
20090276547 | System and method for simplified data transfer - Systems and methods of performing a simplified data transfer are provided. For example, a simplified data transfer system may include two or more devices configured to perform a simplified data transfer. The first device may be configured to save and transfer data associated with applications open on the first device. When the second device initiates communication, the first device may automatically send the open application data to the second device. | 2009-11-05 |
20090276548 | DYNAMICALLY SETTING BURST TYPE OF A DOUBLE DATA RATE MEMORY DEVICE - One or more external control pins and/or addressing pins on a memory device are used to set one or both of a burst length and burst type of the memory device. | 2009-11-05 |
20090276549 | Access for host stacks - This invention relates to a method, a computer program product, a device, and a system for sharing one host controller between at least two host stacks and for handling accesses to the host controller based on access rules. | 2009-11-05 |
20090276550 | SERIAL LINK BUFFER FILL-LEVEL COMPENSATION USING MULTI-PURPOSE START OF PROTOCOL DATA UNIT TIMING CHARACTERS - Embodiments of the invention provide improved timing compensation for a bidirectional serial link in order to relax accuracy requirements of clock sources used for the link. Fill levels of receiver buffers at either ends of the link are used to determine a particular type of start of PDU (SOP) character sequence to use when forming a PDU for transmission over the link. When a given type of SOP character sequence is present in a PDU received at one end of the link, a next PDU to be transmitted from the same end of the link is delayed by a predetermined amount of time to allow the receiver buffer at the other end of the link to decrease its fill level before receiving the next PDU. | 2009-11-05 |
20090276551 | Native and Non-Native I/O Virtualization in a Single Adapter - Mechanisms for enabling both native and non-native input/output virtualization (IOV) in a single I/O adapter are provided. The mechanisms allow a system with a large number of logical partitions (LPARs) and system images to use IOV to share a native IOV enabled I/O adapter or endpoint that does not implement the necessary number of virtual functions (VFs) for each LPAR and system image. A number of VFs supported by the I/O adapter, less one, are assigned to LPARs and system images so that they may make use of native IOV using these VFs. The remaining VF is associated with a virtual intermediary (VI) which handles non-native IOV of the I/O adapter. Any remaining LPARs and system images share the I/O adapter using the non-native IOV via the VI. Thus, any number of LPARs and system images may share the same I/O adapter or endpoint. | 2009-11-05 |
20090276552 | MOTHERBOARD AND POWER MANAGING METHOD FOR GRAPHIC CARD INSTALLED THEREON - A motherboard and a power managing method for a graphic card installed thereon are provided. When the motherboard is switched to a second performance mode from a first performance mode, a microcontroller in the motherboard outputs a regulation signal to the graphic card through an exclusive connection interface, so as to correspondingly adjust an operation parameter of the graphic card, thus achieving better overall power-saving and performance-improving effects for the computer. | 2009-11-05 |
20090276553 | CONTROLLER, HARD DISK DRIVE AND CONTROL METHOD - A data transfer system includes: a shared resource accessed from one or more devices; a plurality of request generation units each configured to generate a request for the device to access the shared resource, and output a remaining time value indicating how much time remains until the request is accepted before affecting an operation of an apparatus including the controller; and an arbitration unit configured to compare the remaining time values when the plurality of requests and the remaining time values are inputted from the plurality of request generation units, and grant the access right to the shared resource to the request with the least remaining time. | 2009-11-05 |
20090276554 | COMPUTER SYSTEM AND DATA-TRANSMISSION CONTROL METHOD - A computer system includes a bridge having a transmitting channel and a controller, wherein the transmitting channel is controlled by the controller; a first slot disposed therein a first pin connected to the controller; and a second slot disposed therein a second pin connected to the controller. The controller is enabled through the first pin and the second pin while a first device is plugged in the first slot and a second device is plugged in the second slot. Data is transmitted to the first and second devices through the transmitting channel. Alternatively, the data is transmitted directly to the first device if only the first device is plugged into the first slot. | 2009-11-05 |
20090276555 | Hot Plug Control Apparatus and Method - An apparatus for controlling a hot plug bus slot on a bus has an input for receiving a set of float signals (i.e., the set may have one or more float signals), and a driver having an output electrically couplable with the bus. The apparatus also has float logic operatively coupled with the input. The float logic is responsive to the set of float signals to cause the output to float at a high impedance in response to receipt of the set of float signals. | 2009-11-05 |
20090276556 | MEMORY CONTROLLER AND METHOD FOR WRITING A DATA PACKET TO OR READING A DATA PACKET FROM A MEMORY - A memory controller and a method for data access are provided. The memory controller writes a data packet to or reads a data packet from a memory. The memory controller comprises a first register, a second register, a data packet adjuster, and a burst length determination unit. The first register stores a data bus width. The second register stores an operating frequency of the memory controller. The burst length determination unit determines a burst length according to the operating frequency. The data packet adjuster adjusts the data packet according to the data bus width and the burst length. | 2009-11-05 |
20090276557 | BUS SYSTEM FOR USE WITH INFORMATION PROCESSING APPARATUS - A processor bus linked with at least a processor, a memory bus linked with a main memory, and a system bus linked with at least an input/output device are connected to a three-way connection control system. The control system includes a bus-memory connection controller connected to address buses and control buses respectively of the processor, memory, and system buses to transfer address and control signals therebetween. The control system further includes a data path switch connected to data buses respectively of the processor, memory, and system buses to transfer data via the data buses therebetween depending on t | 2009-11-05 |
20090276558 | LANE MERGING - A buffer is associated with each of a plurality of data lanes of a multi-lane serial data bus. Data words are timed through the buffers of active ones of the data lanes. Words timed through buffers of active data lanes are merged onto a parallel bus such that data words from each of the active data lanes are merged onto the parallel bus in a pre-defined repeating sequence of data lanes. This approach allows other, non-active, data lanes to remain in a power conservation state. | 2009-11-05 |
20090276559 | Arrangements for Operating In-Line Memory Module Configurations - In one embodiment, a method is disclosed for timing responses to a plurality of memory requests. The method can include sending a plurality of memory requests to a plurality of in-line memory modules. The requests can be sent over a channel from a plurality of channels, where each channel can have a plurality of lanes. The method can receive responses to the plurality of memory requests over the channel and monitor the responses to detect a timing relationship between at least two lanes from the plurality of lanes. In addition, the method can adjust a timing of a register loading and unloading sequence in response to the monitoring of multiple lanes and channels. Other embodiments are also disclosed. | 2009-11-05 |
20090276560 | Copyback Optimization for Memory System - In a copyback or read operation for a non-volatile memory subsystem, data page change indicators are used to manage transfers of data pages between a register in non-volatile memory and a controller that is external to the non-volatile memory. | 2009-11-05 |
20090276561 | SPI NAND PROTECTED MODE ENTRY METHODOLOGY - One or more techniques are provided for restricting access to protected modes of operation in a memory device. In one embodiment, detection circuitry is provided and configured to receive and evaluate a protected mode entry sequence for accessing a protected mode of operation. The detection circuitry may be further configured to temporarily enable an output pin on a serial interface between the memory device and a master device to receive inputs, such that an entry sequence may be entered on both the input and output pins. In another embodiment, the detection circuitry may be enabled only if a security code is first provided, thus requiring both the correct security code and entry sequence before protected mode access is allowed. The memory device may also include a parallel NAND memory array, and detection logic may be further configured to enable a serial-to-parallel NAND translator once protected mode access is allowed. | 2009-11-05 |
20090276562 | Flash cache flushing method and system - A flash memory system that uses repeated writing of the data to achieve stable storage is adapted for efficient cache flushing operations by utilizing a part of the non-volatile flash memory array as a designated buffer for the data, in which data integrity is retained until all repeat writing thereof is complete. Repeated writing is carried out from the designated buffer directly to the final storage locations in the flash memory array, for example using simple internal copy back operations. | 2009-11-05 |
20090276563 | Incremental State Updates - A system and method are described that manage incremental state updates in such a way that multiple threads within a processor can each operate, in effect, on their own set of state data. The system and method are applicable to any processor in which multiple threads require access to sets of state information which differ from one another by a relatively small number of state changes. | 2009-11-05 |
20090276564 | Systematic memory shift for pre-segmented memory - A method of extending the life of a segmented memory device, consistent with certain embodiments involves providing a segmented memory device having a plurality of user defined segments with each segment having a starting and an ending address, and wherein the size and number of the segments is user defined; determining that a threshold number of write operations has been reached by reference to a write counter; copying data from a specified one of the segments to a temporary storage location; shifting the starting and ending address of each segment by a specified address increment; moving data stored in each segment except the specified segment by the specified address increment such that all data in the memory device has been shifted by the specified increment except for the data in the specified one of the segments, wherein data at a last segment is fragmented to wrap from an end of the memory device's addressable locations to a beginning of the memory device's addressable locations; copying the data from the specified one of the segments from the temporary storage location to a location shifted by the shift increment; and redefining the segments so that the user definitions remain applicable to the size and number of segments defined by the user, but with the addresses shifted by the specified increment. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract. | 2009-11-05 |
20090276565 | STORAGE CONTROL APPARATUS, DATA MANAGEMENT SYSTEM AND DATA MANAGEMENT METHOD - A storage control apparatus according to the present invention includes a plurality of connecting units connected to one or more host computers and one or more hard disk drives as storage media for storing data, one or more non-volatile storage media which are of a different type from the hard disk drives and which store data WRITE requested from the host computer, a plurality of processing units for processing WRITE and READ requests from the host computer by using the hard disk drives or the non-volatile storage media, and a plurality of memory units for storing control information to be used by the processing units. | 2009-11-05 |
20090276566 | CREATING LOGICAL DISK DRIVES FOR RAID SUBSYSTEMS - A computer storage system includes multiple disk trays, each disk tray holding two or more physical disks. The disks on a single tray are virtualized into a single logical disk. The single logical disk reports to the RAID (redundant array of inexpensive disks) subsystem, creating the impression that there is one large capacity disk. In one implementation, each disk in the tray is allocated to a different RAID group. By allocating the disks in a tray to different RAID groups, if the tray is removed, only a portion of several different RAID groups are removed. This arrangement permits a simple reconstruction of the RAID groups if a disk tray is removed from the system. | 2009-11-05 |
20090276567 | Compensating for write speed differences between mirroring storage devices by striping - A method and system for data storage provides a digital fast-write storage device, a plurality of digital slow-write storage devices, and a controller. The digital fast-write storage device might be a solid state drive. The digital slow-write storage devices might be conventional rotational media drives. Typically, read operations are directed to the fast-write storage device. The slow-write storage devices provide redundancy by mirroring the contents of the high-speed storage device. Data on the slow-write storage devices is organized in stripes, allowing data to be written in parallel. The number of slow-write storage devices can be chosen to compensate for the speed differential on write operations. In some embodiments, the controller will represent the storage system as a virtual disk drive. | 2009-11-05 |
20090276568 | Storage system, data processing method and storage apparatus - Proposed are a storage system, data processing method and storage apparatus capable of performing stable data I/O processing. Each of the storage apparatuses in the storage group stores group configuration information containing priority information given to each storage apparatus, and the storage apparatus with the highest priority becomes a master and performs virtualization processing and data I/O processing, while another storage apparatus belonging to this storage group performs internal processing of the storage group. | 2009-11-05 |
20090276569 | APPARATUS AND METHOD FOR REALLOCATING LOGICAL TO PHYSICAL DISK DEVICES USING A STORAGE CONTROLLER WITH ACCESS FREQUENCY AND SEQUENTIAL ACCESS RATIO CALCULATIONS AND DISPLAY - A storage controller calculates an access frequency of each logical disk device; selects a first logical disk device whose access frequency exceeds a first predetermined value, the first logical disk device being allocated to a first physical disk device; selects a second logical disk device whose access frequency is equal to or less than a second predetermined value, the second logical disk device being allocated to a second physical disk device; and reallocates the first and second logical disk devices to the second and the first physical disk devices, respectively. | 2009-11-05 |
20090276570 | GUARANTEED MEMORY CARD PERFORMANCE TO END-OF-LIFE - In order to maintain a memory system's performance levels to its end-of-life, latency threshold level(s) are specified and associated with different memory system operating parameters. In one embodiment, the memory system monitors and gathers performance statistics in real time, and in accordance with specific memory transfer sizes. A current latency level can be dynamically calculated using the performance statistics and compared to previously established latency threshold levels. If the current latency level is greater than or equal to a specific latency threshold level, the memory system's configuration setting can be adjusted according to the operating parameters associated with the latency threshold level to offset the increased latency. | 2009-11-05 |
20090276571 | Enhanced Direct Memory Access - A method for facilitating direct memory access in a computing system in response to a request to transfer data is provided. The method comprises selecting a thread for transferring the data, wherein the thread executes on a processing core within the computing system; providing the thread with the request, wherein the request comprises information for carrying out a data transfer; and transferring the data according to the request. The method may further comprise: coordinating the request with a memory management unit, such that virtual addresses may be used to transfer data; invalidating a cache line associated with the source address or flushing a cache line associated with the destination address, if requested. Multiple threads can be selected to transfer data based on their proximity to the destination address. | 2009-11-05 |
20090276572 | Memory Management Among Levels of Cache in a Memory Hierarchy - Methods, apparatus, and product for memory management among levels of cache in a memory hierarchy in a computer with a processor operatively coupled through two or more levels of cache to a main random access memory, caches closer to the processor in the hierarchy characterized as higher in the hierarchy, including: identifying a line in a first cache that is preferably retained in the first cache, the first cache backed up by at least one cache lower in the memory hierarchy, the lower cache implementing an LRU-type cache line replacement policy; and updating LRU information for the lower cache to indicate that the line has been recently accessed. | 2009-11-05 |
20090276573 | Transient Transactional Cache - In one embodiment, a processor comprises an execution core, a level 1 (L1) data cache coupled to the execution core and configured to store data, and a transient/transactional cache (TTC) coupled to the execution core. The execution core is configured to generate memory read and write operations responsive to instruction execution, and to generate transactional read and write operations responsive to executing transactional instructions. The L1 data cache is configured to cache memory data accessed responsive to memory read and write operations, to identify potentially transient data, and to prevent the identified transient data from being stored in the L1 data cache. The TTC is also configured to cache transaction data accessed responsive to transactional read and write operations to track transactions. Each entry in the TTC is usable for transaction data and for transient data. | 2009-11-05 |
20090276574 | ARITHMETIC DEVICE, ARITHMETIC METHOD, HARD DISC CONTROLLER, HARD DISC DEVICE, PROGRAM CONVERTER, AND COMPILER - This arithmetic device includes: a first memory to store a first program; a first arithmetic module to read the first program from the first memory to execute the first program; a second memory to store a second program which is embedded in processing of the first program and called from the first arithmetic module and executed, and whose access speed is lower than the first memory; a third memory storing data temporarily and whose access speed is higher than the second memory; a second arithmetic module to read the second program from the second memory and store it in the third memory; and a third arithmetic module to read the second program from the third memory to execute the second program, in accordance with a call from the first arithmetic module to execute the first program. | 2009-11-05 |
20090276575 | INFORMATION PROCESSING APPARATUS AND COMPILING METHOD - According to one embodiment, an information processing apparatus includes a processor, a cache, and a cache controller. The processor is configured to output a memory access request for accessing an entity of a variable stored in a variable-storage region provided in a memory by using first or second memory address. Both the first and second memory addresses are allocated to the variable-storage region. The cache is configured to store some of data items stored in the memory. The cache controller is configured to access the memory or the cache by using a memory address designating the variable-storage region, in accordance with one of the first and second memory addresses which is included in a memory access request coming from the processor. | 2009-11-05 |
20090276576 | Methods and Apparatus storing expanded width instructions in a VLIW memory for deferred execution - Techniques are described for decoupling fetching of an instruction stored in a main program memory from earliest execution of the instruction. An indirect execution method and program instructions to support such execution are addressed. In addition, an improved indirect deferred execution processor (DXP) VLIW architecture is described which supports a scalable array of memory centric processor elements that do not require local load and store units. | 2009-11-05 |
20090276577 | Adaptive caching for high volume extract transform load process - A method, system, and medium related to a mechanism to cache key-value pairs of a lookup process during an extract transform load process of a manufacturing execution system. The method includes preloading a cache with a subset of a set of key-value pairs stored in source data; receiving a request of a key-value pair; determining whether the requested key-value pair is in the preloaded cache; retrieving the requested key-value pair from the preloaded cache if the requested key-value pair is in the preloaded cache; queuing the requested key-value pair in an internal data structure if the requested key-value pair is not in the preloaded cache until a threshold number of accumulated requested key-value pairs are queued in the internal data structure; and executing a query of the source data for all of the accumulated requested key-value pairs. | 2009-11-05 |
20090276578 | CACHE COHERENCY PROTOCOL IN A DATA PROCESSING SYSTEM - A data processing system includes a first master having a cache, a second master, a memory operably coupled to the first master and the second master via a system interconnect. The cache includes a cache controller which implements a set of cache coherency states for data units of the cache. The cache coherency states include an invalid state; an unmodified non-coherent state indicating that data in a data unit of the cache has not been modified and is not guaranteed to be coherent with data in at least one other storage device of the data processing system, and an unmodified coherent state indicating that the data of the data unit has not been modified and is coherent with data in the at least one other storage device of the data processing system. | 2009-11-05 |
20090276579 | CACHE COHERENCY PROTOCOL IN A DATA PROCESSING SYSTEM - A method includes detecting a bus transaction on a system interconnect of a data processing system having at least two masters; determining whether the bus transaction is one of a first type of bus transaction or a second type of bus transaction, where the determining is based upon a burst attribute of the bus transaction; performing a cache coherency operation for the bus transaction in response to the determining that the bus transaction is of the first type, where the performing the cache coherency operation includes searching at least one cache of the data processing system to determine whether the at least one cache contains data associated with a memory address of the bus transaction; and not performing cache coherency operations for the bus transaction in response to the determining that the bus transaction is of the second type. | 2009-11-05 |
20090276580 | SNOOP REQUEST MANAGEMENT IN A DATA PROCESSING SYSTEM - In a data processing system, a method includes a first master initiating a transaction via a system interconnect to a target device. After initiating the transaction, a snoop request corresponding to the transaction is provided to a cache of a second master. The transaction is completed. After completing the transaction, a snoop lookup operation corresponding to the snoop request in the cache of the second master is performed. The transaction may be completed prior to or after providing the snoop request. In response to performing the snoop lookup operation, a snoop response may be provided, where the snoop response is provided after completing the transaction. When the snoop response indicates an error, a snoop error may be provided to the first master. | 2009-11-05 |
20090276581 | METHOD, SYSTEM AND APPARATUS FOR REDUCING MEMORY TRAFFIC IN A DISTRIBUTED MEMORY SYSTEM - The present disclosure provides a method for reducing memory traffic in a distributed memory system. The method may include storing a presence vector in a directory of a memory slice, said presence vector indicating whether a line in local memory has been cached. The method may further include protecting said memory slice from cache coherency violations via a home agent configured to transmit and receive data from said memory slice, said home agent configured to store a copy of said presence vector. The method may also include receiving a request for a block of data from at least one processing node at said home agent and comparing said presence vector with said copy of said presence vector stored in said home agent. The method may additionally include eliminating a write update operation between said home agent and said directory if said presence vector and said copy are equivalent. Of course, many alternatives, variations and modifications are possible without departing from this embodiment. | 2009-11-05 |
20090276582 | External Memory Controller Node - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 2009-11-05 |
20090276583 | External Memory Controller Node - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 2009-11-05 |
20090276584 | External Memory Controller Node - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 2009-11-05 |
20090276585 | Information Processing Device Having Securing Function - An access information storage section ( | 2009-11-05 |
20090276586 | Wrap-around sequence numbers for recovering from power-fail in non-volatile memory - Incrementing sequence numbers in the metadata of non-volatile memory is used in the event of a resume from power fail to determine which data in the memory is current and valid, and which data is not. To reduce the amount of metadata space consumed by these sequence numbers, the numbers are permitted to be small enough to wrap around when the maximum value is reached. Two different techniques are disclosed to keep this wrap around condition from causing ambiguity in the relative values of the sequence numbers. | 2009-11-05 |
20090276587 | SELECTIVELY PERFORMING A SINGLE CYCLE WRITE OPERATION WITH ECC IN A DATA PROCESSING SYSTEM - A circuit includes a memory having error correction and circuitry which initiates a write operation to the memory. When error correction is enabled and the write operation to the memory has a width of N bits, the write operation to the memory is performed in one access to the memory, and when error correction is enabled and the write operation to the memory has a width of M bits, where M bits is less than N bits, the write operation to the memory is performed in more than one access to the memory. In one example, the one access to the memory includes a write access to the memory, and the more than one access to the memory includes a read access to the memory and a write access to the memory. | 2009-11-05 |
20090276588 | Free space utilization in tiered storage systems - Embodiments of the invention include first storage mediums having first storage characteristics for making up a first pool of capacity of a first tier of storage, and second storage mediums having second storage characteristics for making up a second pool of capacity of a second tier of storage. Free capacity of the first and second pools is shared between the first and second tiers of storage. When the first pool has an amount of free capacity available over a reserved amount of free capacity reserved for first tier data, a first quantity of second tier data is moved from the second tier to the first tier. In exemplary embodiments of the invention, the first and second storage mediums are contained within one or more thin provisioning storage systems, and data is moved between the first and second tiers by allocating thin provisioning chunks to the data being moved. | 2009-11-05 |
20090276589 | METHOD AND APPARATUS FOR DATA DOWNLOAD FROM A MOBILE VEHICLE - A method and apparatus are provided for autonomous data download. The method includes the steps of navigating an autonomous data download device to a first data storage device located in a mobile vehicle and parking the autonomous data download device adjacent to the first data storage device. The method further includes the steps of connecting the autonomous data download device to the first data storage device and downloading data from the first data storage device to the autonomous data download device. The method thereafter includes the steps of navigating the autonomous data download device to a location determined to be suitable for transmission of the data and transmitting the data from the autonomous data download device to a second data storage device after determining that the autonomous data download device has reached the location determined to be suitable for transmission of the data. | 2009-11-05 |
20090276590 | MAINTAINING CHECKPOINTS DURING BACKUP OF LIVE SYSTEM - Techniques introduced here support block level transmission of a logical container from a network storage controller to a backup system. In accordance with the techniques, transmission can be restarted using checkpoints created at the block level by allowing restarts from various points within a logical container, for example a point at which 10%, 50%, or 75% of the logical container had been transmitted. The transmission can be restarted while maintaining data consistency of the logical container data and included meta-data. Advantageously, changes made prior to a checkpoint restart to, for example, meta-data, do not lead to inconsistent logical container backups. | 2009-11-05 |
20090276591 | EXTENSIBLE APPLICATION BACKUP SYSTEM AND METHOD - An archive method and system receives a backup request for a target dataset used by an application on a primary storage system to be backed up on a secondary storage system. Different applications may each have a corresponding proprietary application format for storing their datasets. An application translator module is loaded into an extensible backup manager that converts between a proprietary application format associated with the target dataset and a predetermined storage format used by the extensible backup manager. The application translator module converts from the proprietary application format into the predetermined storage format when the baseline backup of the target dataset has not yet been performed. An incremental backup uses the application translator module to convert from the proprietary application format associated with the application into the predetermined storage format of the extensible backup manager. Once completed, a data mover component causes the incremental backup and the baseline backup of the entire target dataset, if scheduled, to be moved from the primary storage to the secondary storage and stored in the predetermined storage format rather than the proprietary application format associated with the application. | 2009-11-05 |
20090276592 | BACKUP COPY ENHANCEMENTS TO REDUCE PRIMARY VERSION ACCESS - A method, system, and computer program product for performing a backup operation in a computing environment is provided. A dataset corresponding to a backup copy is examined to determine if the dataset has changed from a previous backup operation. If the dataset has not changed, a backup inventory registry is consulted to determine a current version of a backup copy. The current version is one of a plurality of available versions. The backup operation is performed using the current version of the backup copy. | 2009-11-05 |
20090276593 | Data storage systems, methods and networks having a snapshot efficient block map - A data storage system includes a storage device divided into a plurality of blocks for storing data for a plurality of volumes, and a processor to execute instructions for maintaining a block map corresponding to the data stored on the storage device. The storage system may be part of a storage system network. The block map stores reference data indicating which of the volumes reference which blocks on the storage device, and which blocks on the storage device are unallocated. The reference data may include, for groups of one or more blocks, a first value identifying the oldest volume in which the group of blocks was allocated and a second value identifying the newest volume in which the group of blocks was allocated. The volumes may include one or more snapshots. | 2009-11-05 |
20090276594 | Storage system - Provided is a storage system capable of simply and promptly switching a storage subsystem between operation as a stand-alone system and operation as part of a virtual storage system. From a management apparatus, this storage system is able to set, for each of the multiple storage subsystems, a first mode in which the subsystem operates as a stand-alone system and a second mode in which it operates as a virtual storage system. | 2009-11-05 |
20090276595 | PROVIDING A SINGLE DRIVE LETTER USER EXPERIENCE AND REGIONAL BASED ACCESS CONTROL WITH RESPECT TO A STORAGE DEVICE - A method and a storage device may be provided. The storage device may include physical storage subdivided into a number of regions. The regions may start and end based on logical block addresses specified in a region table. At least one of the regions may be mapped to a logical drive letter. One or more others of the regions may be mapped to a subfolder with respect to the logical drive letter. The storage device may include an access control table. Each entry of the access control table may correspond to a respective region of the physical storage. Each of the entries of the access control table may indicate whether the respective region is protected and whether at least one entity is permitted protected access to the respective region after being successfully authenticated. | 2009-11-05 |
20090276596 | RETENTION OF ACTIVE DATA STORED IN MEMORY USING MULTIPLE INDEXING SYSTEMS FOR DATA STORAGE - A method and apparatus for retention of active data stored in memory using multiple indexing systems for data storage. An embodiment of a method for retention of active data in a storage server includes reading data into a first location of a main memory of the storage server. The data in the first location indexes data elements in a long-term data storage in a first manner. The method further provides for copying the data from the first location into a second location in the main memory of the storage server, where the data in the second location indexes the data elements in the long-term data storage in a second manner. | 2009-11-05 |
20090276597 | MEMORY CONTROLLER-ADAPTIVE 1T/2T TIMING CONTROL - Circuits, methods, and apparatus that adaptively control 1T and 2T timing for a memory controller interface. An embodiment of the present invention provides a first memory interface as well as an additional memory interface, each having a number of address and control lines. The address and control lines of the additional memory interface may be individually enabled and disabled. If a line in the additional interface is enabled, it and its corresponding line in the first interface drive a reduced load and may operate at the higher 1T data rate. If a line in the additional interface is disabled, then its corresponding line in the first interface drives a higher load and may operate at the slower 2T data rate. In either case, the operating speed of the interface may also be considered in determining whether each line operates with 1T or 2T timing. | 2009-11-05 |
20090276598 | METHOD AND SYSTEM FOR CAPACITY-BALANCING CELLS OF A STORAGE SYSTEM - A plurality of cells forming at least a portion of a hive of a data storage system may be capacity balanced by fragmenting a portion of at least one non-empty tile of one of the plurality of cells and moving the fragmented portion to another one of the plurality of cells. A plurality of cells forming at least a portion of a hive of a fixed content storage system may be capacity balanced by identifying at least one of the plurality of cells from which objects are to be moved, and for each of the at least one of the plurality of cells identified, determining a number of objects to be moved to another one of the plurality of cells, identifying one or more tiles that collectively have approximately the number of objects to be moved, and moving the one or more tiles to that other cell. | 2009-11-05 |
20090276599 | CONFIGURABLE TRANSACTIONAL MEMORY FOR SYNCHRONIZING TRANSACTIONS - A configurable transactional memory synchronizes transactions from clients. The configurable transactional memory includes a memory buffer and a transactional buffer. The memory buffer includes allocation control and storage, and the allocation control is configurable to selectively allocate the storage between a transactional buffer and a data buffer for the data words. The transactional buffer stores state indicating each combination of a data word and a client for which the data word is referenced by a write access in the transaction in progress from the client. A transactional arbiter generates the completion status for the transaction in progress from each client. The completion status is either committed for no collision or aborted for a collision. A collision is an access that references a data word of the transaction from the client following a write access that references the data word of another transaction in progress from another client. | 2009-11-05 |
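The collision rule in 20090276599 is easier to see in software form. The sketch below is a hypothetical behavioral model (class and method names are invented for illustration): a transaction is aborted when it accesses a data word that another in-progress transaction has already written, otherwise it can commit.

```python
# Behavioral model of the described collision rule (illustrative only):
# an access collides if another in-progress client has written that word.
class TransactionalBuffer:
    def __init__(self):
        self.writers = {}  # word address -> client with a pending write
        self.status = {}   # client -> "in progress" / "committed" / "aborted"

    def begin(self, client):
        self.status[client] = "in progress"

    def access(self, client, word, is_write):
        writer = self.writers.get(word)
        if writer is not None and writer != client:
            self.status[client] = "aborted"  # collision with another writer
            return False
        if is_write:
            self.writers[word] = client      # record the pending write
        return True

    def commit(self, client):
        if self.status.get(client) == "in progress":
            self.status[client] = "committed"
        # Release this client's pending writes either way.
        self.writers = {w: c for w, c in self.writers.items() if c != client}
        return self.status[client]
```

Note the asymmetry the abstract implies: only a prior *write* by another client causes a collision; concurrent reads of the same word do not.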
20090276600 | METHOD AND APPARATUS FOR DETERMINING MEMORY USAGE FOR A COMPUTING DEVICE - One embodiment of the present invention provides a system that determines memory usage for a computing device. Within the computing device, an operating system manages memory allocation, and speculatively allocates otherwise-unused memory in an attempt to improve performance. During operation, the system receives a request to estimate the memory usage for the computing device. In response, the system determines an active subset of the computing device's memory, for instance by determining the set of memory pages that have been accessed within a specified recent timeframe. The system then uses this active subset to produce an estimate of actively-used memory for the computing device. By producing an estimate of actively-used memory, which does not include inactive program memory and inactive memory speculatively-allocated for the operating system, the system facilitates determining the actual amount of additional memory available for programs on the computing device. | 2009-11-05 |
20090276601 | VIRTUAL MEMORY MAPPING FOR EFFICIENT MEMORY USAGE - A processor (e.g. utilizing an operating system and/or circuitry) may access physical memory by paging, where a page is the smallest partition of memory mapped by the processor from a virtual address to a physical address. An application program executing on the processor addresses a virtual address space so that the application program may be unaware of physical memory paging mechanisms. A memory control layer manages physical memory space in units of sub-blocks, wherein a sub-block is smaller than the size of a page. Multiple virtual address blocks may be mapped to the same physical page in memory. A sub-block can be moved from a page (e.g. from one physical memory to a second physical memory) without moving other sub-blocks within the page in a manner that is transparent to the application program. | 2009-11-05 |
20090276602 | MEMORY MANAGEMENT SYSTEM FOR REDUCING MEMORY FRAGMENTATION - A memory management system for a process formulated in the C/C++ language in a processing unit includes an allocator which processes memory blocks of predetermined size, for example 64 Kb. Large objects are defined as objects having a size of between 256 bytes and 64 Kb. For such objects, a 64 Kb memory block is considered to be a memory region (“chunk”) able to accommodate several large objects of different sizes. When an object is no longer used by the process, the space freed can be returned to the operating system. Before this, the free space is merged with adjacent free spaces. To search for adjacent free spaces, a de Bruijn sequence algorithm is used, applied to the bit field disposed in each predetermined memory region. | 2009-11-05 |
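For context on the de Bruijn technique mentioned in 20090276602: a classic use of a de Bruijn sequence in allocators is to find the index of a set bit in a free-space bit field in constant time, without a hardware bit-scan instruction. The standard 32-bit version is shown below; this is the well-known generic trick, not necessarily the patent's exact algorithm.

```python
# Standard de Bruijn bit scan: index of the lowest set bit of a 32-bit
# word. 0x077CB531 is a 32-bit de Bruijn sequence; multiplying by the
# isolated lowest bit shifts a unique 5-bit pattern into the top bits.
DEBRUIJN32 = 0x077CB531
INDEX32 = [0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
           31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9]

def lowest_set_bit(v):
    """Return the index (0..31) of the lowest set bit of nonzero v."""
    isolated = v & -v                      # keep only the lowest set bit
    hashed = (isolated * DEBRUIJN32) & 0xFFFFFFFF
    return INDEX32[hashed >> 27]           # top 5 bits select the index
```

In an allocator like the one described, such a scan locates the first free (or first used) unit in a region's bit field so adjacent free spaces can be merged quickly.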
20090276603 | TECHNIQUES FOR EFFICIENT DATALOADS INTO PARTITIONED TABLES - Techniques for efficiently loading data into a partition of a partitioned table of a database are provided. Data is stored in a swap table whose high water mark has been reset prior to storing the data. The swap table is swapped with the partition. After the swap, the swap table becomes the partition of the partitioned table and the former partition becomes the swap table, which is truncated to reset its high water mark. | 2009-11-05 |
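The load-by-swap pattern in 20090276603 (akin to a partition-exchange load in SQL databases) can be sketched with a toy in-memory model. Names and structure here are invented for illustration: data is bulk-loaded into a standalone swap table, references are exchanged so the loaded table becomes the partition, and the displaced data is truncated for reuse.

```python
# Toy model of loading a partition by swapping in a pre-loaded table.
class PartitionedTable:
    def __init__(self):
        self.partitions = {}  # partition name -> list of rows

def load_partition(table, name, new_rows):
    swap = list(new_rows)                  # bulk load into a fresh swap table
    old = table.partitions.get(name, [])
    table.partitions[name] = swap          # swap: loaded table becomes the partition
    old.clear()                            # truncate the displaced data
    return old                             # empty swap table, ready for the next load
```

The point of the pattern is that the swap is a metadata exchange, so the expensive bulk load happens outside the live partition and readers never see a half-loaded state.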
20090276604 | ASSIGNING MEMORY FOR ADDRESS TYPES - Various example implementations are disclosed. According to one example, an integrated circuit may include a key extractor, a translation table block, and a memory assigner. The key extractor may be configured to receive data, extract key-related information from the data, and send the key-related information to a first memory device. The translation table block may be configured to update a mapping table based on a memory assigner assigning physical portions of the first memory device to each of a plurality of address types, receive an index from the first memory device in response to the key extractor sending the key-related information to the first memory device, and send a data request to a second memory device based on the received index, the data request identifying a physical portion of the second memory device. | 2009-11-05 |
20090276605 | Retaining an Association Between a Virtual Address Based Buffer and a User Space Application that Owns the Buffer - Memory space is registered for an application. One or more open calls are received from an application to access one or more input/output (I/O) devices. Responsive to receiving the one or more open calls, one or more I/O map and pin calls are sent in order to register memory space for the one or more I/O devices within at least one storage area that will be accessed by the application. A verification is made as to whether the memory space to be registered is associated with the application. Responsive to the memory space being associated with the application, at least one virtual I/O bus address is received for each registered memory space of the one or more I/O devices. At least one I/O command is executed using the at least one virtual I/O bus address without intervention by an operating system or operating system image. | 2009-11-05 |
20090276606 | METHOD AND SYSTEM FOR PARALLEL HISTOGRAM CALCULATION IN A SIMD AND VLIW PROCESSOR - The present invention provides histogram calculation for images and video applications using a SIMD and VLIW processor with vector Look-Up Table (LUT) operations. This speeds up histogram calculation by a factor of N over a scalar processor, where the SIMD processor can perform N LUT operations per instruction. The histogram operation is partitioned into a vector LUT operation, followed by a vector increment and a vector LUT update, and finally by a reduction of vector histogram components. The present invention could be used for intensity, RGBA, YUV, and other types of multi-component images. | 2009-11-05 |
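The partitioning described in 20090276606 can be simulated in scalar code: keep N partial histograms (one per SIMD lane), update them lane-by-lane as the vector LUT read/increment/write-back would, then reduce them into one result. The lane count N = 8 below is an illustrative choice, not taken from the patent.

```python
# Scalar simulation of lane-partitioned histogram calculation.
N = 8           # illustrative SIMD lane count (assumption)
NUM_BINS = 256  # 8-bit pixel values

def histogram(pixels):
    # One partial histogram per lane avoids read-modify-write collisions
    # when several lanes see the same pixel value in one vector.
    partial = [[0] * NUM_BINS for _ in range(N)]
    for i, p in enumerate(pixels):
        lane = i % N            # position of this pixel within its vector
        partial[lane][p] += 1   # vector LUT read + increment + LUT update
    # Final step: reduce the N partial histograms into one.
    return [sum(partial[lane][b] for lane in range(N)) for b in range(NUM_BINS)]
```

The per-lane copies are what make the vector form correct: without them, two lanes binning the same value in the same vector would lose an increment.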
20090276607 | VIRTUALIZATION PLATFORM WITH DEDICATED CACHE ACCESS - A computing system supports a virtualization platform with dedicated cache access. The computing system is configured for usage with a memory and a cache and comprises an instruction decoder configured to decode a cache-line allocation instruction and control logic. The control logic is coupled to the instruction decoder and controls the computing system to execute a cache-line allocation instruction that loads portions of data and code regions of the memory into dedicated cache-lines of the cache which are exempted from eviction according to a cache controller replacement policy. | 2009-11-05 |
20090276608 | MICRO PROCESSOR, METHOD FOR ENCODING BIT VECTOR, AND METHOD FOR GENERATING BIT VECTOR - In a microprocessor that pipelines instruction execution, dependency relationship information representing the dependency relationship of each of a plurality of instructions with all the preceding instructions is stored, and if a mis-speculation occurs during the execution of the plurality of instructions in accordance with a set schedule, whether or not the instructions in stages after instruction issue depend on the mis-speculated instruction is judged based on the dependency relationship information. Thus, in the case of a mis-speculation in speculative scheduling, this microprocessor can perform recovery processing that invalidates at once only the instructions in a dependency relationship. | 2009-11-05 |