37th week of 2008 patent application highlights part 62 |
Patent application number | Title | Published |
20080222308 | Wiki groups of an online community - A method, apparatus and system of wiki groups of an online community are disclosed. In one embodiment, a method includes creating a profile of an unregistered user of an online community based on publicly available data and registered user provided data, automatically associating the unregistered user to a public group formed of matching interests identified through the publicly available data and as described by the registered user provided data with other registered users in the online community, and processing a communication between registered users of the online community and the unregistered user. The method may include associating address data with the profile based on the publicly available data and an input of the registered user, and processing a postage payment and a service payment provided by a member of the public group communicating with the unregistered user through a postal mail communication. | 2008-09-11 |
20080222309 | Method and apparatus for network filtering and firewall protection on a secure partition - A management virtual machine on a virtualization technology enabled platform includes a means for providing a firewall and deep packet inspection. An isolated secure partition is provided to host the management application and network packet filtering and firewall functions to provide a secure and trusted platform for manageability applications. A protected component in the operating system in a user partition moves network traffic to the secure partition for inspection and filtering. | 2008-09-11 |
20080222310 | APPARATUS AND METHOD FOR DETERMINING ORIENTATION OF BLADE SERVER INSERTED INTO A BLADE CHASSIS - An apparatus and method is provided for determining the orientation of a blade server with respect to a blade chassis, whenever the blade is inserted into a chassis with either vertical or horizontal slots. In an embodiment, wherein the blade server has opposing first and second edges, first and second connectors are located in pre-specified corresponding relationship with the first and second blade server edges. A first device in the blade chassis generates an information signal, wherein the information signal has an element that indicates the spatial location of a reference feature of the chassis. The embodiment includes a path for sending the information signal to either the first connector or the second connector, according to the orientation of the blade inserted into the chassis. A second device identifies the connector that receives the information signals, and uses the connector identity and the signal element together to determine the orientation of the inserted blade with respect to the chassis. | 2008-09-11 |
20080222311 | Management of shared storage I/O resources - Automated management of shared I/O resources involves use of a policy engine for implementing I/O scheduling group I/O policies. The I/O policies are used for determining whether corresponding I/O requests should be issued to a shared storage system immediately or should be delayed via corresponding policy-based queues. In the context of database systems, a database administrator can specify policies regarding how I/O resources should be used and the database system itself enforces the policies, rather than requiring the database administrator to enforce the I/O usage of the database and of the individual users. | 2008-09-11 |
20080222312 | APPARATUS AND METHOD FOR OPTIMIZING USE OF A MODEM JACK - An apparatus and method are disclosed for optimizing use of a modem jack. For a computer system that includes a modem, an apparatus may include a first branch including a first end that is arranged to connect to an Ethernet jack of the modem, a second branch including a second end that is arranged to connect to an Ethernet interface of the computer system, and a third branch including a third end that is arranged to connect to a USB interface of the computer system. The Ethernet jack of the modem will then receive both Ethernet and USB signals. | 2008-09-11 |
20080222313 | EMBEDDED INTERFACE - The present invention provides a universal host-to-host intelligent controller that facilitates the transfer of electronic data from one electronic data processing (EDP) device to another. The invention includes a printed circuit board (PCB) contained in a housing and may also include a removable memory module. The PCB contains drivers and software code that automatically load and execute on said EDP devices when the PCB is connected to the EDP devices. The drivers and software code facilitate the direct transfer of data from storage on one EDP device to storage on the other EDP device. The controller includes at least two EDP connectors coupled to the PCB. These connectors can take the form of high-speed data cables and static PCB connectors as well as wireless antennae. The controller can also be incorporated into one or both EDP devices. | 2008-09-11 |
20080222314 | CONTENTS DATA STORAGE DEVICE AND CONTENTS DATA UPDATE SYSTEM - A content data storage apparatus that enables content data on a plurality of client apparatuses to be individually updated according to the preferences or the like of users. Content data on a memory card is stored in association with client content management information for managing the content data and client content identification information for identifying the content data in management units. When client content identification information stored on a memory card connected to an input/output unit matches any storage content identification information stored on a hard disk, a control unit updates the content data of the memory card based on content data included in management units identified by this storage content identification information. | 2008-09-11 |
20080222315 | PROXY ASSOCIATION FOR DEVICES - A first connection is established between a first device and a host, wherein the first device is host-capable. A second connection is established between a second device and the host. Proxy association is performed between the first device and the second device by the host to associate the first and second devices, wherein the first and second devices are unable to directly associate, wherein the host passes association information between the first and second devices. | 2008-09-11 |
20080222316 | Communication system, communication method, and communication device - A disclosed communication system, communication method, and communication device enable an intended command to be easily acquired from a movement of an operator and executed. The communication system includes a first communication device and a second communication device communicating with each other. The first communication device includes a physical quantity detection unit that detects a physical quantity that changes as the first communication device moves. Based on an increase or a decrease in the physical quantity detected by the physical quantity detection unit, a communication direction is determined by a direction determination unit of the first communication device. Based on the thus determined communication direction, a communication unit of the first communication device communicates with the second communication device. | 2008-09-11 |
20080222317 | Data Flow Control Within and Between DMA Channels - In one embodiment, a direct memory access (DMA) controller comprises a transmit circuit and a data flow control circuit coupled to the transmit circuit. The transmit circuit is configured to perform DMA transfers, each DMA transfer described by a DMA descriptor stored in a data structure in memory. There is a data structure for each DMA channel that is in use. The data flow control circuit is configured to control the transmit circuit's processing of DMA descriptors for each DMA channel responsive to data flow control data in the DMA descriptors in the corresponding data structure. | 2008-09-11 |
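The per-channel descriptor processing that the 20080222317 abstract describes can be sketched as a toy model. This is an illustrative sketch only, not the filing's implementation; the `wait_for_event` flag and all other names are hypothetical stand-ins for the "data flow control data in the DMA descriptors":

```python
# Toy model of per-channel DMA descriptor lists with a flow-control flag.
# All names are illustrative; the real descriptor layout is not specified here.
from dataclasses import dataclass, field

@dataclass
class Descriptor:
    data: bytes
    wait_for_event: bool = False  # flow-control bit: stall the channel here

@dataclass
class Channel:
    descriptors: list = field(default_factory=list)
    event_pending: bool = False   # set when the awaited event arrives

def process_channel(chan):
    """Transmit descriptors in order, stalling at a flow-control descriptor
    whose event has not yet arrived. Other channels are unaffected."""
    sent = []
    for d in chan.descriptors:
        if d.wait_for_event and not chan.event_pending:
            break  # stall this channel only
        sent.append(d.data)
    return sent

chan = Channel([Descriptor(b"a"),
                Descriptor(b"b", wait_for_event=True),
                Descriptor(b"c")])
print(process_channel(chan))  # [b'a'] -- stalled at the flow-control descriptor
chan.event_pending = True
print(process_channel(chan))  # [b'a', b'b', b'c']
```

The point of keeping the flow-control data inside each channel's descriptor list is that one stalled channel does not block descriptor processing on the others.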
20080222318 | Information selecting apparatus and storage medium storing information selecting program - An information selecting apparatus includes a computer, and makes a user select an arbitrary item from a plurality of items by a direction input of the user. Each of the items is assigned to a direction based on an input frequency, for example. In a case that a direction input is performed by means of a polygonal guide, the items are assigned to directions corresponding to the vertexes of the guide and directions corresponding to the parts except for the vertexes. For example, a high-frequency item is assigned to the direction corresponding to the vertex, or a range of the direction assigned to the high-frequency item is relatively made larger. Furthermore, when a direction corresponding to the vertex is input, items assigned to the parts except for the vertexes may temporarily be assigned to other vertexes. In addition, when predetermined operation data is input, the items of the directions corresponding to the vertexes and the items of the directions corresponding to the parts except for the vertexes may be interchanged. | 2008-09-11 |
20080222319 | APPARATUS, METHOD, AND PROGRAM FOR OUTPUTTING INFORMATION - A technique for protecting personal data is provided in which search conditions need not be specified repeatedly, and in which operation cost can be reduced. Respective personal data include multiple items and an item value of each of the items. An information processing apparatus selects at least one of the items for each of multiple personal data. The information processing apparatus counts, for each of the multiple personal data, the number of personal data that include a combination of the same item values as an item value of the selected item. As a result, the information processing apparatus outputs only item values of items having the number of personal data equal to or larger than a threshold, to an output device. | 2008-09-11 |
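The count-then-threshold step in the 20080222319 abstract resembles suppressing rare value combinations before output. A minimal sketch under that reading (function and field names are hypothetical, not from the filing):

```python
# Toy sketch: output only item-value combinations shared by at least
# `threshold` records; rarer combinations are withheld.
from collections import Counter

def suppress_rare(records, keys, threshold):
    """Count each combination of selected item values, then keep only
    combinations that occur in >= threshold records."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    return [tuple(r[k] for k in keys) for r in records
            if combos[tuple(r[k] for k in keys)] >= threshold]

people = [{"zip": "100", "age": 30},
          {"zip": "100", "age": 30},
          {"zip": "200", "age": 45}]
print(suppress_rare(people, ["zip", "age"], 2))
# [('100', '30'-style tuples shared by 2 records]; the lone ('200', 45) row is withheld
```

Outputting only combinations above a frequency threshold is the standard way to keep any single person's data from being identifiable in the output.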
20080222320 | COMMUNICATION NODE APPARATUS, NETWORK SYSTEM HAVING THE COMMUNICATION NODE APPARATUS, AND DATA TRANSMITTING SYSTEM - Disclosed is a data transmission system in which a set of asymmetrical serial buses are formed by a set of multiplexed unidirectional buses and a reverse-direction sole serial bus. A synchronization signal is superimposed on the signal transmitted over each of the multiplexed unidirectional buses. The multiplexed unidirectional buses are used mainly for data transfer, and the reverse-direction sole serial bus is used for transmitting the control information, such as ACK response, to the data transfer. | 2008-09-11 |
20080222321 | METHOD AND SYSTEM FOR TRACKING DEVICE DRIVER REQUESTS - A computer implemented method, an apparatus, and a computer usable program product for tracking device driver requests in a data processing system is provided. A controller receives a request from a device driver. The controller associates a timestamp and at least one pointer to the request, wherein the timestamp indicates a time the request is received by an operating system. The controller then links the request from the device driver in a queue in the operating system, wherein the pointer identifies the location of the request in the queue. | 2008-09-11 |
20080222322 | Structure for an Apparatus Configured to Implement Commands in Input/Output (IO) Hub - A design structure comprising a schematic structure of an apparatus configured to implement commands in an input/output (IO) hub comprising a programmable command generator having an input coupled to an external interface and an output providing commands. The programmable command generator selectively couples commands in a path between a front end of the IO hub and an IO hub logic address and command routing output. | 2008-09-11 |
20080222323 | MULTIMEDIA ADAPTING APPARATUS - The present invention discloses a multimedia adapting apparatus. The multimedia adapting apparatus includes a communicating module, a buffer, a primary controller, a command register, a status register, a secondary controller, a media hardware engine, and a memory unit. The buffer stores the audiovisual content from the multimedia player. The primary controller handles the operation of audiovisual content between the multimedia player and the portable multimedia devices. The status register stores a plurality of statuses associated with the portable multimedia devices. The command register stores a command set associated with the operation of audiovisual content between the multimedia player and the portable multimedia devices according to the statuses of the status register. The communicating module couples the buffer and the primary controller, respectively, to the multimedia player, for communicating with the multimedia player based on a plurality of control signals associated with the command set. | 2008-09-11 |
20080222324 | SYSTEM METHOD STRUCTURE IN NETWORK PROCESSOR THAT INDICATES LAST DATA BUFFER OF FRAME PACKET BY LAST FLAG BIT THAT IS EITHER IN FIRST OR SECOND POSITION - A method and structure for determining when a frame of information comprised of one or more buffers of data being transmitted in a network processor has completed transmission is provided. The network processor includes several control blocks, one for each data buffer, each containing control information linking one buffer to another. Each control block has a last bit feature, a single bit settable to "one" or "zero", that indicates when the data buffer holding the last bit is transmitted. The last bit is in a first position when an additional data buffer is to be chained to a previous data buffer indicating an additional data buffer is to be transmitted and a second position when no additional data buffer is to be chained to a previous data buffer. The position of the last bit is communicated to the network processor indicating the ending of a particular frame. | 2008-09-11 |
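The buffer chaining in the 20080222324 abstract can be sketched as a walk over (data, last-bit) pairs; a frame is complete only when a buffer carries the last bit in the "no further chaining" position. This is an illustrative toy, not the filing's control-block layout:

```python
# Toy model: each buffer carries a last-bit flag; a set flag means no
# further buffer is chained, so the frame ends with this buffer.
def frame_complete(buffers):
    """Walk chained buffers; return the assembled frame once a buffer's
    last bit marks the end, or None if no buffer carried it."""
    frame = b""
    for data, last in buffers:
        frame += data
        if last:          # second position: no additional buffer chained
            return frame
    return None           # frame still in flight

bufs = [(b"hdr", False), (b"payload", False), (b"crc", True)]
print(frame_complete(bufs))  # b'hdrpayloadcrc'
```

Putting the end-of-frame marker in the per-buffer control block lets the processor detect frame completion without counting buffers in advance.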
20080222325 | PROGRAMMABLE CONTROLLER WITH BUILDING BLOCKS - A PLC of building block type includes a switch module incorporating a switch part having N-to-N switch function between serial communication lines with a plurality of lines and a plurality of device modules individually incorporating device systems with various advanced-function device module characteristics. A CPU system having CPU functions of the PLC may be incorporated in the switch module, the switch module incorporating the CPU system and the plurality of device modules being connected together into a single body in a building block structure through module-connecting mechanisms. Dedicated serial communication lines each with a single line or a plurality of lines connect between the switch module incorporating the CPU system and each of the plurality of device modules such that a star-shaped serial communication network is formed with the switch module incorporating the CPU system as a central node and each of the plurality of device modules as a peripheral node. | 2008-09-11 |
20080222326 | KVM SWITCH SYSTEM CAPABLE OF WIRELESSLY TRANSMITTING KEYBOARD-MOUSE DATA AND RECEIVING VIDEO/AUDIO DRIVING COMMAND - A control management system for controlling electrical devices is disclosed. The control management system comprises a plurality of electrical devices, and a keyboard-video-mouse switch. Each electrical device corresponds to a transforming unit for generating a protocol command signal, and a first protocol signal transceiver for wirelessly transmitting the protocol command signal via a communication interface. The keyboard-video-mouse switch comprises a plurality of second protocol signal transceivers, a plurality of converting modules, a plurality of system controllers, and a switch unit. Each of the second protocol signal transceivers corresponds to one of the first protocol signal transceivers and is used for receiving the protocol command signal from the corresponding first protocol signal transceiver. Each converting module, coupled to one of the plurality of second protocol signal transceivers, is used for converting the protocol command signal into a driving command. Each system controller, coupled to one of the plurality of converting modules, is used for generating video/audio data stream based on the driving command. A switch unit is used for switching to a route to deliver the video/audio data stream from one of the plurality of converting modules to a display. | 2008-09-11 |
20080222327 | VERTICAL ADAPTERS AND VERTICAL DEVICE FOR MOUNTING TO A HORIZONTAL SERVICE INTERFACE - A vertical device for mounting to a host having a horizontal surface with an upwardly oriented host service interface providing a mechanical service and at least one electrical service as well as an adapter for mounting a vertical device to a host of this type. The device or adapter has a main body with a horizontal portion and a vertical portion extending generally downwardly from the horizontal portion. A device service interface extends downwardly from the horizontal portion for engagement with the host service interface. At least one functionality is provided on the vertical portion of the vertical device or the adapter. | 2008-09-11 |
20080222328 | Semiconductor memory module and memory system, and method of communicating therein - Example embodiments relate to a semiconductor memory module and memory system, and a method of communicating therein. According to an example embodiment, a semiconductor memory system may include a memory controller, M interconnected memory elements, and/or N data buses, where N is a natural number and M is a divisor of N. The N data buses may connect the M memory elements to the memory controller. Each memory element may use N/M of the N number of data buses. | 2008-09-11 |
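The N-buses-over-M-elements arithmetic in the 20080222328 abstract is easy to make concrete: with N = 8 data buses and M = 4 memory elements, each element uses N/M = 2 buses. A small sketch of one possible assignment (the contiguous grouping is an assumption for illustration; the abstract does not specify which buses go to which element):

```python
# Toy sketch: divide N data buses evenly among M memory elements.
# Contiguous bus grouping is assumed purely for illustration.
def bus_assignment(n_buses, m_elements):
    """Return a mapping: memory element -> its N/M bus indices.
    M must be a divisor of N, as the abstract requires."""
    assert n_buses % m_elements == 0, "M must be a divisor of N"
    per = n_buses // m_elements
    return {e: list(range(e * per, (e + 1) * per))
            for e in range(m_elements)}

print(bus_assignment(8, 4))  # each of the 4 elements gets 2 of the 8 buses
```

The divisor constraint is what guarantees every memory element sees the same data-bus width.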
20080222329 | I/O and memory bus system for DFPs and units with two- or multi-dimensional programmable cell architectures - A general bus system is provided which combines a number of internal lines and leads them as a bundle to the terminals. The bus system control is predefined and does not require any influence by the programmer. Any number of memories, peripherals or other units can be connected to the bus system (for cascading). | 2008-09-11 |
20080222330 | Semiconductor integrated circuit and image processing apparatus having the same - An ASIC includes a receiving unit, a transmission interface, a reception interface, a buffer, and a control unit. When the receiving unit receives a second write request while the transmission interface is in process of transmitting to a transmission line a first write request and write data, the control unit causes the receiving unit to store the second write request in the buffer. When the receiving unit receives a read request while the second write request is present in the buffer, the control unit causes the receiving unit to send the read request to the transmission interface prior to the second write request. | 2008-09-11 |
20080222331 | Computer-Implemented System And Method For Lock Handling - Computer-implemented systems and methods for handling access to one or more resources. Executable entities that are running substantially concurrently provide access requests to an operating system (OS). One or more traps of the OS are avoided to improve resource accessing performance through use of information stored in a shared locking mechanism. The shared locking mechanism indicates the overall state of the locking process, such as the number of processes waiting to retrieve data from a resource and/or whether a writer process is waiting to access the resource. | 2008-09-11 |
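The trap-avoidance idea in the 20080222331 abstract — consult a shared state word before falling back to the OS lock — can be sketched as a toy fast path. Everything here is illustrative (the class, fields, and the use of a Python lock as a stand-in for an atomic word are assumptions, not the patent's design):

```python
# Toy sketch of a shared locking state consulted before trapping to the OS.
# A threading.Lock stands in for an atomic hardware word; names are invented.
import threading

class SharedLockState:
    def __init__(self):
        self._lock = threading.Lock()   # stand-in for atomic access
        self.readers_waiting = 0        # processes waiting to read
        self.writer_waiting = False     # is a writer queued?

    def reader_can_skip_trap(self):
        """Fast path: if no writer is waiting, a reader proceeds without
        an OS trap; otherwise it must take the slow, trapping path."""
        with self._lock:
            return not self.writer_waiting

state = SharedLockState()
print(state.reader_can_skip_trap())  # True: uncontended, fast path taken
state.writer_waiting = True
print(state.reader_can_skip_trap())  # False: fall back to the OS lock
```

The performance win comes from the common uncontended case: concurrent readers check one shared word instead of entering the kernel.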
20080222332 | COMBINED ENGINE FOR VIDEO AND GRAPHICS PROCESSING - The system includes an arbiter, a combined engine, a frame buffer, and a display processing unit. The arbiter provides three input channels: a first channel for graphics, a second channel for video and a third channel for processor. The arbiter performs prioritization and arbitration between the video and graphics and processor requests sent to the system. The arbiter has three output ports coupled to the combined engine. The combined engine is a hardware engine capable of processing either video data or graphics data. The output of the combined engine is provided to the frame buffer for the storage of pixel data. The output of the frame buffer is coupled to a display processing unit that renders the pixel data for display. | 2008-09-11 |
20080222333 | Methods and Apparatus for Scalable Array Processor Interrupt Detection and Response - Hardware and software techniques for interrupt detection and response in a scalable pipelined array processor environment are described. Utilizing these techniques, a sequential program execution model with interrupts can be maintained in a highly parallel scalable pipelined array processor containing multiple processing elements and distributed memories and register files. When an interrupt occurs, interface signals are provided to all PEs to support independent interrupt operations in each PE dependent upon the local PE instruction sequence prior to the interrupt. Processing-element exception interrupts are supported and low latency interrupt processing is also provided for embedded systems where real time signal processing is required. Further, a hierarchical interrupt structure is used allowing a generalized debug approach using debug interrupts and a dynamic debug monitor mechanism. | 2008-09-11 |
20080222334 | ELECTRONIC APPARATUS AND DATA CORRUPTION PREVENTION METHOD - An electronic apparatus includes a main body, a panel unit removable from the main body, and an attachment unit configured to indirectly attach a removable memory device to the main body through the panel unit. The main body includes a control unit configured to logically disconnect a data line between the removable memory device and the main body at a time when a user operation for removing the panel unit from the main body is performed and before the panel unit is removed from the main body. | 2008-09-11 |
20080222335 | MODE SETTING METHOD AND SYSTEM INCLUDING PCI BUS IN HOT PLUG OF PCI DEVICE - The invention provides a mode setting method, and a system including a PCI bus, for the hot plug of a PCI device, capable of supporting a platform-unique function for a PCI device that is hot-added. In a system including a PCI bus according to an exemplary embodiment of the invention, a south bridge directly notifies firmware that a PCI device is hot-added, and thus it is possible to support the platform-unique function for the hot-added PCI device without modifying an OS or an open hot plug driver. | 2008-09-11 |
20080222336 | DATA PROCESSING SYSTEM - To allow arithmetic circuits of sharable resources to be used by priority with a simple procedure. In a data processing system including central processing units and a plurality of arithmetic circuits, wherein the central processing units are able to supply a command to one arithmetic circuit based on one fetched instruction and supply a command to another arithmetic circuit based on another fetched instruction, a memory circuit is provided which is used to store first information indicating which arithmetic circuit is executing a command, and second information indicating which central processing unit has reserved the arithmetic circuit for execution of the next command. When the arithmetic circuit is already executing a command, reserving the arithmetic circuit for execution of the next command using the second information of the memory circuit makes it possible, after the execution, to assign operation commands fast to the arithmetic circuits and cause them to execute the commands. | 2008-09-11 |
20080222337 | Pipeline accelerator having multiple pipeline units and related computing machine and method - A pipeline accelerator includes a bus and a plurality of pipeline units, each unit coupled to the bus and including at least one respective hardwired-pipeline circuit. By including a plurality of pipeline units in the pipeline accelerator, one can increase the accelerator's data-processing performance as compared to a single-pipeline-unit accelerator. Furthermore, by designing the pipeline units so that they communicate via a common bus, one can alter the number of pipeline units, and thus alter the configuration and functionality of the accelerator, by merely coupling or uncoupling pipeline units to or from the bus. This eliminates the need to design or redesign the pipeline-unit interfaces each time one alters one of the pipeline units or alters the number of pipeline units within the accelerator. | 2008-09-11 |
20080222338 | Apparatus and method for sharing devices between multiple execution domains of a hardware platform - A method and apparatus for sharing peripheral devices between multiple execution domains of a hardware platform are described. In one embodiment, the method includes configuring end-point devices, bridges and interconnects of a hardware platform including at least two execution domains. When a configuration request is issued from an execution domain, the configuration request may be intercepted. Hence, the received configuration request is not used to configure the peripheral end-points, bridges or interconnects of the hardware platform. Configuration information decoded from the intercepted configuration request may be stored as virtual configuration information. In one embodiment, configuration information is read from a target of the configuration request to identify actual configuration information. This actual configuration information may be stored within a translation table and mapped to the virtual configuration information to enable translation of domain specific addresses to real (actual) addresses. Other embodiments are described and claimed. | 2008-09-11 |
20080222339 | Processor architecture with switch matrices for transferring data along buses - There is described a processor architecture, comprising: a plurality of first bus pairs, each first bus pair including a respective first bus running in a first direction (for example, left to right) and a respective second bus running in a second direction opposite to the first direction (for example right to left); a plurality of second bus pairs, each second bus pair including a respective third bus running in a third direction (for example downwards) and a respective fourth bus running in a fourth direction opposite to the third direction (for example upwards), the third and fourth buses intersecting the first and second buses; a plurality of switch matrices, each switch matrix located at an intersection of a first and a second pair of buses; a plurality of elements arranged in an array, each element being arranged to receive data from a respective first or second bus, and transfer data to a respective first or second bus. The elements in the array include processing elements, for operating on received data, and memory elements, for storing received data. The described architecture has the advantage that it requires relatively little memory, and the memory requirements can be met by local memory elements in the array. | 2008-09-11 |
20080222340 | Bus Interface Controller For Cost-Effective High Performance Graphics System With Two or More Graphics Processing Units - A bus interface controller manages a set of serial data lanes. The bus interface controller supports operating a subset of the serial data lanes as a private bus. | 2008-09-11 |
20080222341 | Method And Apparatus For Automatically Switching Between USB Host And Device - An apparatus and method for automatically switching between USB host and device is provided. In a device with a USB interface, the present invention automatically switches between a USB host and USB device by detecting the handshake protocol of the D+ and D− pins of the USB interface. The apparatus for automatically switching between USB host and device includes a host mode element, a device mode element, a random auto-switcher, and a detection element. The random auto-switcher switches the connection to the host mode element or the device mode element at random times. The detection element monitors the handshake protocol of the D+ and D− pins of the USB interface and the external USB-interfaced device to determine whether the host mode or the device mode is in use. When the present invention detects the external USB-interfaced device is a host, the present invention switches to become a USB device. Similarly, when the present invention detects the external USB-interfaced device is a USB device, the present invention switches to become a USB host. | 2008-09-11 |
20080222342 | Crossbar comparator - A device includes a first crossbar array having first input columns and first output rows, wherein a plurality of the rows of the first crossbar array are configured to store first stored data in the form of high or low resistance states, and a second crossbar array having second input columns and second output rows, wherein a plurality of the rows of the second crossbar array are configured to store second stored data in the form of high or low resistance states. The second stored data is a complement of the first stored data and the first output rows are electrically connected to the second output rows. The device provides for data storage and comparison for computer processing, audio/speech recognition, and robotics applications. | 2008-09-11 |
20080222343 | MULTIPLE ADDRESS SEQUENCE CACHE PRE-FETCHING - A method is provided for pre-fetching data into a cache memory. A first cache-line address of each of a number of data requests from at least one processor is stored. A second cache-line address of a next data request from the processor is compared to the first cache-line addresses. If the second cache-line address is adjacent to one of the first cache-line addresses, data associated with a third cache-line address adjacent to the second cache-line address is pre-fetched into the cache memory, if not already present in the cache memory. | 2008-09-11 |
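The adjacency test in the 20080222343 abstract — compare the new cache-line address against stored first addresses, and on an adjacent match pre-fetch the next line — can be sketched directly. This is a toy model with invented names, not the patent's hardware logic:

```python
# Toy sketch of adjacency-triggered cache-line pre-fetch.
# `history` holds first cache-line addresses of recent requests;
# `cache` is the set of lines already resident.
def maybe_prefetch(history, addr, cache):
    """If addr is adjacent to any recorded address, pre-fetch the line
    adjacent to addr (unless already cached). Returns the pre-fetched
    line address, or None."""
    prefetched = None
    if any(abs(addr - prev) == 1 for prev in history):
        nxt = addr + 1
        if nxt not in cache:
            cache.add(nxt)          # pre-fetch into the cache
            prefetched = nxt
    history.append(addr)            # record this request's address
    return prefetched

hist, cache = [100], set()
print(maybe_prefetch(hist, 101, cache))  # 102 -- adjacent hit triggers prefetch
print(maybe_prefetch(hist, 50, cache))   # None -- no adjacency, no prefetch
```

Keeping per-request addresses rather than a single last address lets the scheme track multiple interleaved sequential streams, which is the point of "multiple address sequence" pre-fetching.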
20080222344 | Facilitating Integration of a Virtual Tape Library System with a Physical Tape Library System - Facilitating integration of a virtual tape library system with a physical tape library system. A virtual library tape (VTL) system acts as an intermediary between a physical library tape (PTL) system and a backup host, such that the backup host is always aware of, and can manage the media present in the VTL and also in the PTL. Exporting of data from a virtual tape to a physical tape is performed by the VTL system for high performance. Also a one to one mapping exists between the virtual tape barcode label and the physical tape barcode label, thereby facilitating restores directly from physical tape. | 2008-09-11 |
20080222345 | METHOD FOR ACCESSING MEMORY DATA - A memory access method for accessing data from a non-volatile memory in a south bridge is provided. Memory access is performed under a system management mode (SMM). Under the protection of the SMM mode, the desired memory address is not altered by an interrupt handler, therefore memory data is accessed correctly. | 2008-09-11 |
20080222346 | Selectively utilizing a plurality of disparate solid state storage locations - A method for selectively utilizing a plurality of disparate solid state storage locations is disclosed. The technology initially receives class types for a plurality of disparate solid state storage locations. The characteristics of the received data are determined. The received data is then allocated to one of the plurality of disparate solid state storage locations based upon the determined characteristics of the received data. | 2008-09-11 |
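The allocate-by-characteristics step in the 20080222346 abstract can be sketched as a routing function over storage classes. The two classes and the "hot" heuristic below are invented for illustration; the abstract does not name its class types:

```python
# Toy sketch: route data to one of several disparate solid-state storage
# locations based on its characteristics. Class names are hypothetical.
def allocate(data, stores):
    """Pick a storage class from the data's characteristics and append
    the payload to that class's store. Returns the chosen class."""
    cls = "fast" if data["hot"] else "dense"   # invented heuristic
    stores.setdefault(cls, []).append(data["payload"])
    return cls

stores = {}
print(allocate({"hot": True, "payload": b"log"}, stores))   # fast
print(allocate({"hot": False, "payload": b"img"}, stores))  # dense
```

In practice the classes would correspond to physically different solid-state locations (e.g. differing in speed or density), with the determined characteristics of each write deciding where it lands.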
20080222347 | Method and apparatus for protecting flash memory - A method is provided for protecting flash memory residing on a computing device. The method includes: receiving a data file having a digital signature at a main processor; forwarding the data file from the main processor to a secondary processor for signature validation; validating the digital signature associated with the data file at the secondary processor; enabling a write capability of a flash memory upon successful validation of the digital signature; and writing the data file to the flash memory. | 2008-09-11 |
20080222348 | File system for managing files according to application - The present invention discloses systems for managing files according to application. A digital storage system including: a storage memory having program code configured: to identify an application identity of an application issuing a storage command to access a file; and to adjust a storage mode of the file according to the application identity; and a processor for executing the program code. Preferably, the identifying is performed using a PID that is an indicator of the application identity. Preferably, the adjusting includes adjusting the storage mode according to the storage command. Preferably, the adjusting is performed using an SAT and/or an AST. A digital storage system including: a storage memory having program code configured: to identify an application scenario associated with a storage command to access a file; and to adjust a storage mode of the file according to the application scenario; and a processor for executing the program code. | 2008-09-11 |
20080222349 | IEEE 1394 INTERFACE-BASED FLASH DRIVE USING MULTILEVEL CELL FLASH MEMORY DEVICES - A flash drive and method of transferring data from a system to a flash drive. The flash drive includes a casing, a plurality of flash memory devices within the casing, each of the flash memory devices having multilevel cells, an IEEE 1394 interface controller within the casing, coupled to the flash memory devices, and interfacing with the flash memory devices for interleaved multichannel access to and from at least two of the flash memory devices, and at least one IEEE 1394 interface connector projecting from the casing for interfacing the flash memory devices with a system through the controller. The method entails coupling a plurality of multilevel cell flash memory devices to a system through an IEEE 1394 interface controller and at least one IEEE 1394 interface connector, and performing interleaved multichannel access to and from at least two of the flash memory devices. | 2008-09-11 |
20080222350 | Flash memory device for storing data and method thereof - A flash memory device comprising a controller and one or a plurality of flash memories for storing data, and a method thereof, are disclosed. The controller comprises a control interface to accept data accesses from a main board, managed by a flash memory control element and a buffer management element. Through a micro-processing element in the controller, each data access from the main board is checked to determine whether it is a random access or a serial page access. Random accesses and serial page accesses are written to different blocks by different processes in the one or plurality of flash memories. The lifetime and processing speed of the flash memories are improved because erasure counts during data writes are reduced. | 2008-09-11 |
20080222351 | HIGH-SPEED OPTICAL CONNECTION BETWEEN CENTRAL PROCESSING UNIT AND REMOTELY LOCATED RANDOM ACCESS MEMORY - A data transmission assembly includes a first connection terminal coupled to a processing unit and a second connection terminal coupled to a random access memory (RAM) resource. The data transmission assembly also includes a first electrical/optical (EO) signal converter and a second EO signal converter. The first EO signal converter is coupled to the first connection terminal and the second EO signal converter is coupled to the second connection terminal. The data transmission assembly also includes an optical signal propagation medium with a first end and a second end. The first end is attached to the first EO signal converter, and the second end is attached to the second EO signal converter. The signal propagation medium carries signals between the first connection terminal and the second connection terminal to support memory accesses performed by the processing unit to access data at memory locations within the RAM resource. | 2008-09-11 |
20080222352 | METHOD, SYSTEM AND PROGRAM PRODUCT FOR EQUITABLE SHARING OF A CAM TABLE IN A NETWORK SWITCH IN AN ON-DEMAND ENVIRONMENT - A method, system and program product for equitable sharing of a CAM (Content Addressable Memory) table among multiple users of a switch. The method includes reserving buffers in the table to be shared, the remaining buffers being allocated to each user. The method further includes establishing whether or not an address contained in a packet from a user is listed in a buffer in the table, if the address is listed, updating a time-to-live value for the buffer for forwarding the packet and, if the address is not listed, determining whether or not the user has exceeded its allocated buffers and whether or not the reserved buffers have been exhausted, such that, if the user has exceeded its allocated buffers and the reserved buffers have been exhausted, the address is not added to the table and the user is precluded from using any additional buffers in the network switch. | 2008-09-11 |
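The admission logic of the CAM-table entry above (refresh the TTL on a hit; otherwise learn the address from the user's allocated buffers, then from the shared reserved pool, and refuse once both are exhausted) can be modeled compactly. A minimal Python sketch under stated assumptions — the class name, quota values, and TTL default are hypothetical, and a real switch would do this in hardware:

```python
class CamTable:
    """Equitable sharing of a CAM table: per-user quotas plus a reserved pool."""
    def __init__(self, per_user_quota, reserved):
        self.per_user_quota = per_user_quota
        self.reserved = reserved          # shared reserved buffers remaining
        self.entries = {}                 # address -> {"user", "ttl", "reserved"}
        self.used = {}                    # user -> allocated (non-reserved) count

    def learn(self, user, address, ttl=300):
        entry = self.entries.get(address)
        if entry is not None:
            entry["ttl"] = ttl            # address listed: refresh time-to-live
            return True                   # and forward the packet
        if self.used.get(user, 0) < self.per_user_quota:
            self.entries[address] = {"user": user, "ttl": ttl, "reserved": False}
            self.used[user] = self.used.get(user, 0) + 1
            return True
        if self.reserved > 0:             # spill into the shared reserved pool
            self.reserved -= 1
            self.entries[address] = {"user": user, "ttl": ttl, "reserved": True}
            return True
        return False                      # quota and reserve exhausted: not added
```

A user that has exceeded its quota after the reserve is drained is simply refused further buffers, which is the fairness property the abstract describes.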
20080222353 | METHOD OF CONVERTING A HYBRID HARD DISK DRIVE TO A NORMAL HDD - A method of converting a hybrid hard disk drive (HDD) to a normal HDD when a system is powered on depending on whether the total number of defective blocks in a non-volatile cache (NVC) exceeds a predetermined threshold. The method of converting a hard disk drive (HDD) from a hybrid HDD to a normal HDD where the HDD has a normal hard disk and a non-volatile cache includes the steps of determining whether a mode conversion flag is enabled during a power-on period. When the mode conversion flag is enabled, operating the HDD as a normal HDD. When the mode conversion flag is disabled, determining whether an operating mode of the HDD is a normal mode or a hybrid mode. When the operating mode of the HDD is in the normal mode, the HDD operates as a normal HDD. A determination is made when the HDD is in the hybrid mode as to whether the total number of defective blocks in the non-volatile cache is greater than a predetermined threshold. The HDD is operated as a hybrid HDD when the total number of defective blocks is not greater than the threshold. The mode conversion flag is enabled and the HDD is operated as a hybrid HDD when the total number of defective blocks is greater than the threshold. | 2008-09-11 |
20080222354 | Method of creating a multiple of virtual SATA ports in a disk array controller - This invention discloses a method of creating multiple virtual SATA ports in a disk array controller. The method builds a port multiplier in a SATA disk array controller by a software method, and the port multiplier defines several slices capable of identifying the address of a computer host system. The port multiplier is connected to at least one disk set, and each disk is divided into several data blocks corresponding to data blocks of another disk of the same disk set to constitute a synchronously updated disk backup system. The software method provides a way of connecting several storage units to overcome the restriction on the point-to-point connection of the SATA disk array system, so as to achieve a multi-driving function and a serial bus system. The invention also has the features of a low pin count and high-frequency transmission. | 2008-09-11 |
20080222355 | Method for managing volume groups considering storage tiers - A tiered storage system according to the present invention provides for the management of migration groups. When a migration group is defined, a reference tier position is determined and the relative tier position of each constituent logical device is determined. Movement of a migration group involves migrating data in its constituent logical devices to target logical devices. The migration group is then defined by the target devices. A virtualization system makes the transition transparent to host devices. | 2008-09-11 |
20080222356 | CONNECTING DEVICE OF STORAGE DEVICE AND COMPUTER SYSTEM INCLUDING THE SAME CONNECTING DEVICE - In an environment in which plural external storage devices having different function control interfaces are intermixed, when a function of a storage device is controlled from a computer, a common interface for controlling the function of the storage device is provided. A device that provides the common interface manages an interrelationship between a storage area recognized by a host computer and a storage area provided by the storage device and associates a storage area which becomes a target of a function control instruction with the storage device that provides the storage area. A type of the storage device that provides the storage area which becomes the target of the function control instruction is identified and function control is ordered through a function control interface unique to the device. | 2008-09-11 |
20080222357 | Low power computer with main and auxiliary processors - A processing device comprises a processor, low power nonvolatile memory that communicates with the processor, and high power nonvolatile memory that communicates with the processor. The processing device manages data using a cache hierarchy comprising a high power (HP) nonvolatile memory level for data in the high power nonvolatile memory and a low power (LP) nonvolatile memory level for data in the low power nonvolatile memory. The LP nonvolatile memory level has a higher level in the cache hierarchy than the HP nonvolatile memory level. | 2008-09-11 |
20080222358 | METHOD AND SYSTEM FOR PROVIDING AN IMPROVED STORE-IN CACHE - A system and method of providing a cache system having a store-in policy and affording the advantages of store-in cache operation, while simultaneously providing protection against soft-errors in locally modified data, which would normally preclude the use of a store-in cache when reliability is paramount. The improved store-in cache mechanism includes a store-in L1 cache, at least one higher-level storage hierarchy; an ancillary store-only cache (ASOC) that holds most recently stored-to lines of the store-in L1 cache, and a cache controller that controls storing of data to the ancillary store-only cache (ASOC) and recovering of data from the ancillary store-only cache (ASOC) such that the data from the ancillary store-only cache (ASOC) is used only if parity errors are encountered in the store-in L1 cache. | 2008-09-11 |
20080222359 | STORAGE SYSTEM AND DATA MANAGEMENT METHOD - The present invention comprises a CHA | 2008-09-11 |
20080222360 | MULTI-PORT INTEGRATED CACHE - A multi-port instruction/data integrated cache which is provided between a parallel processor and a main memory and stores therein a part of instructions and data stored in the main memory has a plurality of banks, and a plurality of ports including an instruction port unit consisting of at least one instruction port used to access an instruction from the parallel processor and a data port unit consisting of at least one data port used to access data from the parallel processor. Further, a data width which can be specified to the bank from the instruction port is set larger than a data width which can be specified to the bank from the data port. | 2008-09-11 |
20080222361 | PIPELINED TAG AND INFORMATION ARRAY ACCESS WITH SPECULATIVE RETRIEVAL OF TAG THAT CORRESPONDS TO INFORMATION ACCESS - A cache design is described in which corresponding accesses to tag and information arrays are phased in time, and in which tags are retrieved (typically speculatively) from a tag array without benefit of an effective address calculation subsequently used for a corresponding retrieval from an information array. In some exploitations, such a design may allow cycle times (and throughput) of a memory subsystem to more closely match demands of some processor and computation system architectures. In some cases, phased access can be described as pipelined tag and information array access, though strictly speaking, indexing into the information array need not depend on results of the tag array access. Our techniques seek to allow early (indeed speculative) retrieval from the tag array without delays that would otherwise be associated with calculation of an effective address eventually employed for a corresponding retrieval from the information array. Speculation can be resolved using the eventually calculated effective address or using separate functionality. In some embodiments, we use calculated effective addresses for way selection based on tags retrieved from the tag array. | 2008-09-11 |
20080222362 | Method and Apparatus for Execution of a Process - Techniques are provided for enabling execution of a process employing a cache. Method steps can include obtaining a first probability of accessing a given artifact in a state S | 2008-09-11 |
20080222363 | SYSTEMS AND METHODS OF MAINTAINING FRESHNESS OF A CACHED OBJECT BASED ON DEMAND AND EXPIRATION TIME - A device that implements a method for performing integrated caching in a data communication network. The device is configured to receive a packet from a client over the data communication network, wherein the packet includes a request for an object. At the operating system/kernel level of the device, one or more of decryption processing of the packet, authentication and/or authorization of the client, and decompression of the request occurs prior to and integrated with caching operations. The caching operations include determining if the object resides within a cache, serving the request from the cache in response to a determination that the object is stored within the cache, and sending the request to a server in response to a determination that the object is not stored within the cache. | 2008-09-11 |
20080222364 | SNOOP FILTERING SYSTEM IN A MULTIPROCESSOR SYSTEM - A system and method for supporting cache coherency in a computing environment having multiple processing units, each unit having an associated cache memory system operatively coupled therewith. The system includes a plurality of interconnected snoop filter units, each snoop filter unit corresponding to and in communication with a respective processing unit, with each snoop filter unit comprising a plurality of devices for receiving asynchronous snoop requests from respective memory writing sources in the computing environment; and a point-to-point interconnect comprising communication links for directly connecting memory writing sources to corresponding receiving devices; and, a plurality of parallel operating filter devices coupled in one-to-one correspondence with each receiving device for processing snoop requests received thereat and one of forwarding requests or preventing forwarding of requests to its associated processing unit. Each of the plurality of parallel operating filter devices comprises parallel operating sub-filter elements, each simultaneously receiving an identical snoop request and implementing one or more different snoop filter algorithms for determining those snoop requests for data that are determined not cached locally at the associated processing unit and preventing forwarding of those requests to the processor unit. In this manner, a number of snoop requests forwarded to a processing unit is reduced thereby increasing performance of the computing environment. | 2008-09-11 |
20080222365 | Managed Memory System - A managed memory system is provided. More specifically, in one embodiment, there is provided a system including a memory device and a switch coupled to the memory device. The switch has at least a first switch position and a second switch position. The system also includes a memory controller coupled to the first switch position and a processor interface coupled to the second switch position. | 2008-09-11 |
20080222366 | MEMORY SHARING SYSTEM - A memory-use-information memory area stores therein a program ID, a request-source memory address, and a request memory size, which together constitute information for uniquely identifying a program file loaded into a storage area for virtual machine-A or a storage area for virtual machine-B, in association with a physical memory address. A memory reservation section uses, as the retrieval key, the program ID, request-source memory address, and request memory size of a program file corresponding to a memory reservation request to search the memory-use-information memory area. When an entry that matches said retrieval key exists, the memory reservation section allows sharing of the memory area between a plurality of virtual machines. | 2008-09-11 |
20080222367 | Branching Memory-Bus Module with Multiple Downlink Ports to Standard Fully-Buffered Memory Modules - A branching memory-bus module has one uplink port and two or more downlink ports. Frames sent downstream by a host processor are received on the uplink port and repeated to the multiple downlink ports to two or more branches of memory modules. Frames sent upstream to the processor by a memory module on a downlink port are repeated to the uplink port. A branching Advanced Memory Buffer (AMB) on the branching memory-bus module has re-timing and re-synchronizing buffers that repeat frames to the multiple downlink ports. Elastic buffers can merge and synchronize frames from different downlink branches. Separate northbound and southbound lanes may be replaced by bidirectional lanes to reduce pin counts. Latency from the host processor to the farthest memory module is reduced by branching compared with a serial daisy-chain of fully-buffered memory modules. Point-to-point bus segments have only two endpoints despite branching by the branching AMB. | 2008-09-11 |
20080222368 | Updating Memory Contents of a Processing Device - A method of updating memory content stored in a memory of a processing device, the memory comprising a plurality of addressable memory blocks, the memory content being protected by a current integrity protection data item stored in the processing device, the method comprising determining a first subset of memory blocks that require an update, and a second subset of memory blocks that remain unchanged by said updating; calculating, as parallel processes, a first and a second integrity protection data item over the memory blocks; wherein the first integrity protection data item is calculated over the current memory contents of the first and second subsets of memory blocks; and wherein the second integrity protection data item is calculated over the current memory contents of the second subset of memory blocks and the updated memory block contents of the first subset of memory blocks. | 2008-09-11 |
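The two integrity items in the update scheme above can be expressed directly: the first covers the current contents of all blocks (changed and unchanged), the second covers the unchanged blocks together with the *updated* contents of the blocks being changed, i.e. the expected post-update image. A minimal Python sketch — SHA-256 stands in for whatever integrity primitive the device actually uses, and the abstract's "parallel processes" are computed sequentially here for clarity:

```python
import hashlib

def block_digest(blocks):
    # Integrity protection data item over an ordered sequence of memory blocks.
    h = hashlib.sha256()
    for block in blocks:
        h.update(block)
    return h.hexdigest()

def update_integrity_items(memory, updates):
    """memory: list of per-block bytes; updates: {block index: new bytes}.

    Returns (first_item, second_item): first_item covers the current
    contents of the first and second subsets (i.e. all blocks as they are
    now); second_item covers the unchanged second subset plus the updated
    contents of the first subset.
    """
    first_item = block_digest(memory)
    post_update = [updates.get(i, block) for i, block in enumerate(memory)]
    second_item = block_digest(post_update)
    return first_item, second_item
```

After the blocks in `updates` are actually written, a digest of the memory should equal `second_item`, which is what lets the device commit the new integrity item atomically with the update.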
20080222369 | Access Control Partitioned Blocks in Shared Memory - A method for controlling multiple access to partitioned areas of a shared memory and a digital processing apparatus having the shared memory are disclosed. According to embodiments of the present invention, the storage area of a shared memory is partitioned into a plurality of storage areas, and each processor accesses a storage area through its own access port to store data and transfers the authority to access the pertinent storage area to the other processor, thereby allowing access by the other processor. With the present invention, the data communication time between the plurality of processors can be minimized, and the processing efficiency of each processor can be optimized. | 2008-09-11 |
20080222370 | METHOD AND APPARATUS FOR DATA STREAM MANAGEMENT - A method and apparatus of managing data stream, the method comprising archiving received data in a circular buffer; utilizing a breakpoint in realizing the archived received data continuity, wherein the breakpoint is set to the last data portion of the archived received data; when the archiving of the received data approaches the end of the circular buffer, stitching the last portion of the archived received data to the start of the circular buffer; and setting the breakpoint to the updated last data portion of the archived data. | 2008-09-11 |
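The circular-buffer behavior in the abstract above — archive incoming data, wrap ("stitch") back to the start when the end of the buffer is reached, and keep a breakpoint at the last archived data portion — is easy to model. A minimal Python sketch; the class name and byte-granularity wrapping are assumptions of this illustration, not details from the patent:

```python
class CircularArchive:
    """Archives a data stream in a fixed-capacity circular buffer."""
    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.write_pos = 0
        self.breakpoint = 0  # set to the last data portion of the archive

    def archive(self, data: bytes):
        for byte in data:
            if self.write_pos == self.capacity:
                # End of buffer reached: stitch the remaining data
                # to the start of the circular buffer.
                self.write_pos = 0
            self.buf[self.write_pos] = byte
            self.write_pos += 1
        # Breakpoint updated to the end of the newest archived portion,
        # preserving the continuity of the archived stream.
        self.breakpoint = self.write_pos
```

The breakpoint is what lets a reader distinguish newest from oldest data once the buffer has wrapped, since the physical start of the buffer no longer marks the logical start of the stream.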
20080222371 | Method for Managing Memory Access and Task Distribution on a Multi-Processor Storage Device - In a system for reading and writing data, the system including a controller, multiple microprocessor units accessible to the controller, and multiple memory device configurations, each having one dedicated bus connection to individual ones or multiples of the microprocessor units, a method for managing access to one or more of the memory device configurations includes the steps, (a) receiving a request at the controller requiring access of at least one of the memory device configurations, (b) determining at the controller, which microprocessor unit or units will handle the request, (c) handing the request to the selected microprocessor unit or units, (d) determining at the microprocessor unit or units, the tasks specified in the request for that microprocessor unit or units and (e) determining a memory address or addresses in one or more of the memory device configurations and accessing the memory device configuration or configurations to satisfy the request. | 2008-09-11 |
20080222372 | TURBO DECODER - A turbo decoder has at least two Bahl, Cocke, Jelinek, and Raviv (BCJR) processors in parallel, each in serial communication with respective interleavers. The BCJR processors and interleavers are in communication with a memory module that is internally split into non-overlapping memory banks. The turbo decoder includes respective sorter circuits in communication with the output of each BCJR processor/interleaver. A sorter circuit receives a data block from a BCJR processor/interleaver and directs the data block to the memory bank designated by an address assigned to the data block by an interleaver. | 2008-09-11 |
20080222373 | RETAINING DISK IDENTIFICATION IN OPERATING SYSTEM ENVIRONMENT AFTER A HARDWARE-DRIVEN SNAPSHOT RESTORE FROM A SNAPSHOT-LUN CREATED USING SOFTWARE-DRIVEN SNAPSHOT ARCHITECTURE - A program, method and system are disclosed for managing a snapshot backup restore through a hardware snapshot interface, i.e. a hardware-driven snapshot restore, based upon a software-driven snapshot backup, e.g. created with software such as volume shadow copy service (VSS). When conventional hardware-driven snapshot restores are performed using a snapshot backup that was created using VSS-based software such as copy services, data access issues can arise, due to the operating system assigning a new disk signature to the disk being restored. This problem can be overcome by temporarily storing the original disk signature and then overwriting the new, incorrect disk signature after initializing the restore. This can ensure that the operating system identifies the source LUNs (and accordingly, the drive letter and mount points of the disk) using the same disk signature as before the restore. | 2008-09-11 |
20080222374 | Computer system, management computer, storage system and volume management method - To provide a computer system in which the primary-site administrator who manages all the sites can configure in advance the authority to be granted to the administrator of each site, so that each site's administrator can fill in for a part of the primary administrator's own authority, even during an absence such as at the time of a disaster. The computer system includes one or more storage systems | 2008-09-11 |
20080222375 | Method and system for the transparent migration of virtual machines storage - Method for transferring storage data of a virtual machine to be migrated from a first host device to a second host device via a communication network, including: running the virtual machine on the first host device; storing, on a local storage device of the first host device, a disk image used by the virtual machine; detecting, while the virtual machine is running on the first host device, any changes made to the disk image used by the virtual machine; establishing a connection over the communication network from the first host device to the second host device; transferring, to the second host device while the virtual machine is running on the first host device, the disk image used by the virtual machine and the detected any changes made; modifying the disk image transferred to the second host device in response to the detected any changes transferred to the second host device; and starting, using the modified disk image, a migrated virtual machine on the second host device at a current state of the virtual machine running on the first host device. | 2008-09-11 |
20080222376 | VIRTUAL INCREMENTAL STORAGE APPARATUS METHOD AND SYSTEM - An apparatus for managing incremental storage includes a storage pool management module that allocates storage volumes to a virtual volume. Also included is an incremental log corresponding to the virtual volume, which maps virtual addresses to storage addresses. The apparatus may also include a replication module that sends replicated data to the virtual volume and a policy management module that determines allocation criteria for the storage pool management module. In one embodiment, the incremental log includes a lookup table that translates read and write requests to physical addresses on storage volumes within the virtual volume. The replicated data may include incremental snapshot data corresponding to one or more primary volumes. The various embodiments of the virtual incremental storage apparatus, method, and system facilitate dynamic adjustment of the storage capacity of the virtual volume to accommodate changing amounts of storage utilization. | 2008-09-11 |
20080222377 | ACHIEVING DATA CONSISTENCY WITH POINT-IN-TIME COPY OPERATIONS IN A PARALLEL I/O ENVIRONMENT - A method for processing a point-in-time copy of data associated with a logical storage volume where the data to be copied is stored in a striped or parallelized fashion across more than one physical source volume. The method includes receiving a point-in-time copy command concerning a logical volume and distributing the point-in-time copy command in-band to each of the physical source volumes containing a portion of the striped data. The method also includes establishing a point-in-time copy relationship between each physical source volume and one of a corresponding set of multiple physical target volumes. The method further includes copying the data stored on each physical source volume to the corresponding physical target volume. The in-band copy command and the striped data may be distributed over I/O channels between a server and the physical storage and processed sequentially. | 2008-09-11 |
20080222378 | MEMORY MODULE AND MEMORY MODULE SYSTEM - A memory module and a memory module system are provided. The memory module system includes a plurality of memory modules each module comprising a plurality of memory blocks and a plurality of corresponding routers each storing a channel identification (ID) and a module ID corresponding to one or more memory blocks; and a controller configured to access the memory modules. During initialization, the controller reads and stores the channel ID and the module ID from each of the routers. The controller outputs a channel ID and a module ID that correspond to one or more memory blocks to be accessed. | 2008-09-11 |
20080222379 | System and method for memory hub-based expansion bus - A system memory includes a memory hub controller, a memory module accessible by the memory hub controller, and an expansion module having a processor circuit coupled to the memory module and also having access to the memory module. The memory hub controller is coupled to the memory hub through a first portion of a memory bus on which the memory requests from the memory hub controller and memory responses from the memory hub are coupled. A second portion of the memory bus couples the memory hub to the processor circuit and is used to couple memory requests from the processor circuit and memory responses provided by the memory hub to the processor circuit. | 2008-09-11 |
20080222380 | SYSTEM AND METHOD FOR DYNAMIC MEMORY ALLOCATION - A method for managing the allocation of memory to one or more applications. The method includes allocating a variety of fixed size memory blocks to a requesting application, each of the fixed size memory blocks being free of header information to maximize memory usage. Free, or unused blocks of data of the same fixed size are maintained in a freelist having a number of block roots corresponding to the number of differently fixed size memory blocks. Each block root stores a root pointer to an unused memory block previously allocated to the application. To conserve memory, each unused memory block will store branch pointers to other identically sized unused memory blocks, thereby forming a sequential chain of unused memory blocks with the block root. Therefore, applications requesting the same sized memory block can re-use previously allocated fixed size memory blocks. | 2008-09-11 |
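The header-free freelist described above — each block root points at an unused block, and each unused block stores a branch pointer to the next unused block of the same size inside its own body — can be sketched over a flat byte array. A minimal Python model under stated assumptions: one block size only (a real allocator keeps one root per size class), 32-bit little-endian pointers, and hypothetical names throughout:

```python
import struct

class FixedBlockAllocator:
    """Headerless fixed-size allocator: free blocks store the branch pointer
    to the next free block in their own bodies, so no per-block header is
    needed and the full block is available to the application when in use."""
    NULL = 0xFFFFFFFF  # sentinel: end of the freelist chain

    def __init__(self, memory_size, block_size):
        assert block_size >= 4          # room for one 32-bit branch pointer
        self.memory = bytearray(memory_size)
        self.block_size = block_size
        self.next_fresh = 0             # bump pointer over never-used memory
        self.root = self.NULL           # block root: head of the freelist

    def alloc(self):
        if self.root != self.NULL:      # re-use a previously freed block
            addr = self.root
            # The next branch pointer lives inside the free block itself.
            self.root = struct.unpack_from("<I", self.memory, addr)[0]
            return addr
        if self.next_fresh + self.block_size <= len(self.memory):
            addr = self.next_fresh      # carve a fresh block
            self.next_fresh += self.block_size
            return addr
        return None                     # pool exhausted

    def free(self, addr):
        # Write the branch pointer into the block body and make it the root,
        # forming the sequential chain of unused blocks.
        struct.pack_into("<I", self.memory, addr, self.root)
        self.root = addr
```

Because the pointer is only present while the block is free, no memory is sacrificed to headers while the block is allocated — the space-saving property the abstract emphasizes.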
20080222381 | STORAGE OPTIMIZATION METHOD - A system and method for examining the configuration of a storage area network by linking to the storage area network using a browser application, obtaining data from a device on the storage area network, parsing the data into records, eliminating redundancies in the records, storing the records in a database, and providing access to the database to a user through the browser. | 2008-09-11 |
20080222382 | PERFORMANCE MONITORING DEVICE AND METHOD THEREOF - A performance monitoring device and method are disclosed. The device monitors performance events of a processor. A counter is adjusted in response to the occurrence of a particular performance event. The counter can be associated with a particular instruction address range, or a data address range, so that the counter is adjusted only when the performance event occurs at the instruction address range or the data address range. Accordingly, the information stored in the counter can be analyzed to determine if a particular instruction address range or data address range results in a particular performance event. Multiple counters, each associated with a different performance event, instruction address range, or data address range, can be employed to allow for a detailed analysis of which portions of a program lead to particular performance events. | 2008-09-11 |
20080222383 | Efficient On-Chip Accelerator Interfaces to Reduce Software Overhead - In one embodiment, a processor comprises execution circuitry and a translation lookaside buffer (TLB) coupled to the execution circuitry. The execution circuitry is configured to execute a store instruction having a data operand; and the execution circuitry is configured to generate a virtual address as part of executing the store instruction. The TLB is coupled to receive the virtual address and configured to translate the virtual address to a first physical address. Additionally, the TLB is coupled to receive the data operand and to translate the data operand to a second physical address. A hardware accelerator is also contemplated in various embodiments, as is a processor coupled to the hardware accelerator, a method, and a computer readable medium storing instructions which, when executed, implement a portion of the method. | 2008-09-11 |
20080222384 | APPARATUS AND METHOD FOR EXECUTING RAPID MEMORY MANAGEMENT UNIT EMULATION AND FULL-SYSTEM SIMULATOR - A method for performing rapid memory management unit emulation of a computer program in a computer system, wherein address injection space of predefined size is allocated in the computer system and a virtual page number and a corresponding physical page number are stored in said address injection space, said method comprising steps of: comparing the virtual page number of the virtual address of a load/store instruction in a code segment in said computer program with the virtual address page number stored in said address injection space; if the two virtual page numbers are the same, then obtaining the corresponding physical address according to the physical page number stored in said address injection space; otherwise, performing address translation lookaside buffer search, that is, TLB search to obtain the corresponding physical address; and reading/writing data from/to said obtained corresponding physical address. The present invention also provides an apparatus and computer program product for implementing the method described above. | 2008-09-11 |
20080222385 | PARAMETER SETTING METHOD AND APPARATUS FOR NETWORK CONTROLLER - A method for setting at least one of parameters of a peripheral device coupled to a host includes: executing a program code stored in a first storage unit of a host to obtain setting data corresponding to the at least one of the parameters; storing the setting data into a second storage unit of the host; generating an indication signal to the peripheral device to indicate that the setting data has been stored in the second storage unit; transferring the setting data from the second storage unit of the host to the peripheral device; and performing a function of the peripheral device according to the setting data. | 2008-09-11 |
20080222386 | COMPRESSION OF IPV6 ADDRESSES IN A NETFLOW DIRECTORY - Modified flow keys holding compressed IPv6 addresses are stored in a flow table to improve memory utilization. The compressed IPv6 addresses are utilized to access a compression table holding the full IPv6 address, and the full IPv6 address is substituted into the modified flow key to form an unmodified flow key. | 2008-09-11 |
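The scheme in this abstract is a form of dictionary compression: the wide 128-bit addresses are kept once in a side table, and flow keys carry only short indexes into it. A minimal sketch (class and method names are illustrative assumptions):

```python
class FlowTable:
    """Sketch of storing compressed IPv6 addresses in flow keys: a
    compression table maps each full address to a short index, and the
    modified flow key holds indexes instead of 128-bit addresses."""
    def __init__(self):
        self.compress = {}  # full address string -> short index
        self.expand = []    # short index -> full address string

    def index_of(self, addr):
        if addr not in self.compress:
            self.compress[addr] = len(self.expand)
            self.expand.append(addr)
        return self.compress[addr]

    def make_key(self, src, dst, sport, dport):
        # modified flow key: compressed addresses (indexes)
        return (self.index_of(src), self.index_of(dst), sport, dport)

    def restore_key(self, key):
        # substitute the full addresses back in to form the unmodified key
        si, di, sport, dport = key
        return (self.expand[si], self.expand[di], sport, dport)
```

Since a router sees far fewer distinct addresses than flows, each flow-table entry shrinks from two 128-bit fields to two small indexes.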
20080222387 | Correction of incorrect cache accesses - The application describes a data processor operable to process data, and comprising: a cache in which a storage location of a data item within said cache is identified by an address, said cache comprising a plurality of storage locations and said data processor comprising a cache directory operable to store a physical address indicator for each storage location comprising stored data; a hash value generator operable to generate a generated hash value from at least some of said bits of said address, said generated hash value having fewer bits than said address; a buffer operable to store a plurality of hash values relating to said plurality of storage locations within said cache; wherein in response to a request to access said data item said data processor is operable to compare said generated hash value with at least some of said plurality of hash values stored within said buffer and in response to a match to indicate an indicated storage location of said data item; and said data processor is operable to access one of said physical address indicators stored within said cache directory corresponding to said indicated storage location and in response to said accessed physical address indicator not indicating said address said data processor is operable to invalidate said indicated storage location within said cache. | 2008-09-11 |
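The mechanism described is a hash-based fast match that is later verified against the full address in the cache directory; a false hash match is corrected by invalidating the line. A simplified sketch (the 8-bit hash and the names `HashedCache`, `hash8` are assumptions for illustration):

```python
def hash8(addr):
    """Fold an address down to fewer bits (8 here) for quick matching."""
    return (addr ^ (addr >> 8) ^ (addr >> 16)) & 0xFF

class HashedCache:
    """Sketch of the correction scheme: a small per-line hash gives a fast
    candidate match; the full address in the cache directory confirms it,
    and a false match causes the line to be invalidated."""
    def __init__(self, nlines):
        self.hashes = [None] * nlines     # buffer of per-line hash values
        self.directory = [None] * nlines  # full address per line
        self.data = [None] * nlines

    def fill(self, line, addr, value):
        self.hashes[line] = hash8(addr)
        self.directory[line] = addr
        self.data[line] = value

    def access(self, addr):
        h = hash8(addr)
        for line, stored in enumerate(self.hashes):
            if stored == h:                       # candidate from hash buffer
                if self.directory[line] == addr:  # confirmed by directory
                    return self.data[line]
                # incorrect cache access detected: invalidate the line
                self.hashes[line] = self.directory[line] = self.data[line] = None
                return None
        return None
```

The hash comparison is cheap because it covers fewer bits than the address; the directory check catches the rare collision after the fact.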
20080222388 | Simulation of processor status flags - The dynamic efficient and accurate simulation of processor status flags is described. One exemplary embodiment includes simulation of processor status flags of a first CPU type on a second CPU type using simple arithmetic operations to calculate status flags in parallel, and by keeping an intermediate state that allows efficient calculation of status flags when they are needed. In this way, sufficient intermediate state exists to generate desired status flags either directly or with a simple operation. | 2008-09-11 |
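The "intermediate state" idea in this abstract is commonly known as lazy flag evaluation: rather than computing every status flag after each simulated operation, the simulator records the operands and result, and derives a flag only when it is actually read. A minimal sketch assuming a 32-bit target and an add operation (class and method names are illustrative):

```python
class LazyFlags:
    """Sketch of lazy status-flag simulation: keep the operands and result
    of the last operation as intermediate state, and derive each flag on
    demand with a simple arithmetic operation."""
    WIDTH = 32
    MASK = (1 << WIDTH) - 1

    def add(self, a, b):
        self.a, self.b = a, b
        self.result = (a + b) & self.MASK  # intermediate state only
        return self.result

    def zero(self):
        return self.result == 0

    def sign(self):
        return bool(self.result >> (self.WIDTH - 1))

    def carry(self):
        return (self.a + self.b) > self.MASK
```

Most simulated instructions never have their flags inspected, so deferring the flag arithmetic avoids paying for it on every operation.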
20080222389 | INTERPROCESSOR MESSAGE TRANSMISSION VIA COHERENCY-BASED INTERCONNECT - A method includes communicating a first message between processors of a multiprocessor system via a coherency interconnect, whereby the first message includes coherency information. The method further includes communicating a second message between processors of the multiprocessor system via the coherency interconnect, whereby the second message includes interprocessor message information. A system includes a coherency interconnect and a processor. The processor includes an interface configured to receive messages from the coherency interconnect, each message including one of coherency information or interprocessor message information. The processor further includes a coherency management module configured to process coherency information obtained from at least one of the messages and an interrupt controller configured to generate an interrupt based on interprocessor message information obtained from at least one of the messages. | 2008-09-11 |
20080222390 | Low Noise Coding for Digital Data Interface - A digital data interface system comprises a data transmitter configured to transmit a data word across a plurality of data lines. The data word can comprise a plurality of digital data bits having a bit number order from a lowest bit number to a highest bit number with the lowest ordered bit numbers having higher noise content and the highest ordered bit numbers having higher harmonic content. The system also comprises an encoder configured to arrange the plurality of digital data bits as serialized data sets to be transmitted over each of the plurality of data lines by the data transmitter with consecutive data bits of at least one serialized data set being matched such that bits with the higher harmonic content are matched with bits of the higher noise content to substantially mitigate at least one of the noise content and the harmonic content of the data word. | 2008-09-11 |
20080222391 | Apparatus and Method for Optimizing Scalar Code Executed on a SIMD Engine by Alignment of SIMD Slots - An apparatus and method for optimizing scalar code executed on a single instruction multiple data (SIMD) engine is provided that aligns the slots of SIMD registers. With the apparatus and method, a compiler is provided that parses source code and, for each statement in the program, generates an expression tree. The compiler inspects all storage inputs to scalar operations in the expression tree to determine their alignment in the SIMD registers. This alignment is propagated up the expression tree from the leaves. When the alignments of two operands in the expression tree are the same, the resulting alignment is the shared value. When the alignments of two operands in the expression tree are different, one operand is shifted. For shifted operands, a shift operation is inserted in the expression tree. The executable code is then generated for the expression tree and shifts are inserted where indicated. | 2008-09-11 |
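The propagation rule in this abstract can be illustrated on a toy expression tree. This is a simplified sketch, not the compiler's actual data structures: nodes are tuples, alignments are byte offsets, and the choice to shift the right operand toward the left operand's alignment is an arbitrary assumption for the example.

```python
def propagate_alignment(node):
    """Sketch of propagating SIMD slot alignment up an expression tree.
    A node is ('leaf', alignment) or ('op', left, right). When the two
    operand alignments differ, a ('shift', subtree, amount) node is
    inserted so both operands share one alignment; equal alignments are
    simply passed upward."""
    if node[0] == 'leaf':
        return node, node[1]
    _, left, right = node
    left, la = propagate_alignment(left)
    right, ra = propagate_alignment(right)
    if la != ra:
        # insert a shift for the mismatched operand
        right = ('shift', right, la - ra)
        ra = la
    return ('op', left, right), la
```

Running it bottom-up mirrors the abstract: alignments flow from the leaves, shared values propagate unchanged, and shifts appear only where operands disagree.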
20080222392 | Method and arrangements for pipeline processing of instructions - In one embodiment a method for parallel processing in a processing pipeline is disclosed. The method can include determining that a jump instruction is loaded in a main path of a processing pipeline prior to the jump instruction being executed. The method can load a jump hit target instruction in a bypass path of the pipeline in response to determining that the jump instruction is loaded in the main path. The bypass path can bypass at least one stage of the processing pipeline and couple into the main path in a stage that is prior to the execute stage. The method can switch the jump hit target instruction into the main path in response to a successful jump-hit condition. The bypass path and the main path can operate concurrently and in parallel. | 2008-09-11 |
20080222393 | Method and arrangements for pipeline processing of instructions - In one embodiment a method for operating a processing pipeline is disclosed. The method can include fetching an instruction in a first clock cycle, decoding the instruction in a second clock cycle and fetching instruction data associated with the instruction in the second clock cycle. The method can also include associating the instruction data with the instruction and feeding the instruction and the instruction data to a processing unit utilizing the association. The method can also include loading a register with instruction data wherein the number of bits of instruction data loaded per clock cycle varies based on the amount of instruction data required to execute at least one instruction in a clock cycle. | 2008-09-11 |
20080222394 | Systems and Methods for TDM Multithreading - Systems and methods for distributing thread instructions in the pipeline of a multi-threading digital processor are disclosed. More particularly, hardware and software are disclosed for successively selecting threads in an ordered sequence for execution in the processor pipeline. If a thread to be selected cannot execute, then a complementary thread is selected for execution. | 2008-09-11 |
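The selection rule described — take threads in a fixed time-division order, and substitute another thread when the scheduled one cannot execute — can be sketched as a simple scheduler step. This is a loose interpretation for illustration: the abstract's "complementary thread" is modeled here as the next ready thread in the sequence, which is an assumption.

```python
def select_thread(threads, ready, start):
    """Sketch of TDM thread selection: walk the ordered sequence of
    thread ids starting at position `start`; if the scheduled thread is
    not ready to execute, fall through to the next ready thread.
    `ready` maps thread id -> bool. Returns None if nothing is ready."""
    n = len(threads)
    for offset in range(n):
        tid = threads[(start + offset) % n]
        if ready[tid]:
            return tid
    return None
```

Calling this once per pipeline slot with an advancing `start` yields the ordered round-robin of the abstract, with stalled slots filled rather than wasted.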
20080222395 | System and Method for Predictive Early Allocation of Stores in a Microprocessor - A system and method for predictive early allocation of stores in a microprocessor is presented. During instruction dispatch, an instruction dispatch unit retrieves an instruction from an instruction cache (Icache). When the retrieved instruction is an interruptible instruction, the instruction dispatch unit loads the interruptible instruction's instruction tag (IITAG) into an interruptible instruction tag register. A load store unit loads subsequent instruction information (instruction tag and store data) along with the interruptible instruction tag in a store data queue entry. Comparison logic receives a completing instruction tag from completion logic, and compares the completing instruction tag with the interruptible instruction tags included in the store data queue entries. In turn, deallocation logic deallocates those store data queue entries that include an interruptible instruction tag that matches the completing instruction tag. | 2008-09-11 |
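The bookkeeping in this abstract — tag each early-allocated store queue entry with the pending interruptible instruction's tag, then free every matching entry when that instruction completes — reduces to a tag-match filter. A minimal sketch (class name and tuple layout are illustrative assumptions):

```python
class StoreQueue:
    """Sketch of predictive early allocation: each store data queue entry
    carries, besides its own instruction tag and store data, the tag of
    the interruptible instruction (IITAG) it depends on. When that
    instruction completes, matching entries are deallocated."""
    def __init__(self):
        self.entries = []  # list of (itag, store_data, iitag)

    def allocate(self, itag, data, iitag):
        self.entries.append((itag, data, iitag))

    def complete(self, completing_tag):
        # comparison logic + deallocation: drop entries whose IITAG
        # matches the completing instruction's tag
        self.entries = [e for e in self.entries if e[2] != completing_tag]
```

One completion event thus releases a whole batch of speculatively allocated entries in a single pass.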
20080222396 | Low Overhead Access to Shared On-Chip Hardware Accelerator With Memory-Based Interfaces - In one embodiment, a method is contemplated. Access to a hardware accelerator is requested by a user-privileged thread. Access to the hardware accelerator is granted to the user-privileged thread by a higher-privileged thread responsive to the requesting. One or more commands are communicated to the hardware accelerator by the user-privileged thread without intervention by higher-privileged threads and responsive to the grant of access. The one or more commands cause the hardware accelerator to perform one or more tasks. Computer readable media comprises instructions which, when executed, implement portions of the method are also contemplated in various embodiments, as is a hardware accelerator and a processor coupled to the hardware accelerator. | 2008-09-11 |
20080222397 | Hard Object: Hardware Protection for Software Objects - In accordance with one embodiment, additions to the standard computer microprocessor architecture hardware are disclosed comprising novel page table entry fields | 2008-09-11 |
20080222398 | Programmable processor with group floating-point operations - A programmable processor that comprises a general purpose processor architecture, capable of operation independent of another host processor, having a virtual memory addressing unit, an instruction path and a data path; an external interface; a cache operable to retain data communicated between the external interface and the data path; at least one register file configurable to receive and store data from the data path and to communicate the stored data to the data path; and a multi-precision execution unit coupled to the data path. The multi-precision execution unit is configurable to dynamically partition data received from the data path to account for an elemental width of the data and is capable of performing group floating-point operations on multiple operands in partitioned fields of operand registers and returning catenated results. In other embodiments the multi-precision execution unit is additionally configurable to execute group integer and/or group data handling operations. | 2008-09-11 |
20080222399 | METHOD FOR THE HANDLING OF MODE-SETTING INSTRUCTIONS IN A MULTITHREADED COMPUTING ENVIRONMENT - The present invention relates to the provisioning of mode-setting instructions as they relate to requisite hardware within a processing system. As such, the processing system allows for multiple programs, or processing threads of execution, to independently specify Modes, wherein modes are program-specified assertions in regard to the processing system hardware environment (e.g., the temperature, voltage, frequency, gating functions, etc.). Thus, the objectives of the present invention are to facilitate a mutually acceptable environment for all of the processing threads that are being executed within a processing system; this objective being subject to the respective processing requirements as requested by Mode-setting instructions that are specified by each executed processing thread. | 2008-09-11 |
20080222400 | Power Consumption of a Microprocessor Employing Speculative Performance Counting - Reduction of power consumption and chip area of a microprocessor employing speculative performance counting, comprising splitting a counter and a backup register of a speculative counting mechanism performing the speculative performance counting into first and second parts each, re-using an available storage within the microprocessor as first parts respectively; integrating at least one dedicated pre-counter into the microprocessor as second parts respectively; splitting the data handled by the speculative counting mechanism in high-order and low-order bits; storing the high order bits in the first parts; storing the low order bits in the second parts; updating the first parts periodically; and saving and propagating the carry-out from the second parts to high-order bits when a corresponding first part of the second parts is next updated respectively. | 2008-09-11 |
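The split-counter scheme in this abstract can be shown with a small model: only a few low-order bits live in a dedicated pre-counter that toggles on every event, while the high-order bits sit in re-used storage that is touched only periodically, when the pre-counter's carry-out is propagated. The 4-bit split and the names below are assumptions for the example.

```python
class SplitCounter:
    """Sketch of a split speculative performance counter: low-order bits
    in a small dedicated pre-counter, high-order bits in re-used storage
    updated only periodically via carry propagation."""
    LOW_BITS = 4

    def __init__(self):
        self.high = 0  # high-order bits, in re-used storage
        self.low = 0   # low-order bits, in the dedicated pre-counter

    def increment(self):
        self.low += 1  # only the small pre-counter toggles per event

    def sync(self):
        # periodic update: propagate the carry-out into the high part
        self.high += self.low >> self.LOW_BITS
        self.low &= (1 << self.LOW_BITS) - 1

    def value(self):
        return (self.high << self.LOW_BITS) + self.low
```

Because the wide high-order register changes rarely, it can share existing storage and clocking, which is where the area and power savings come from.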
20080222401 | METHOD AND SYSTEM FOR ENABLING STATE SAVE AND DEBUG OPERATIONS FOR CO-ROUTINES IN AN EVENT-DRIVEN ENVIRONMENT - A method of enabling state save and debug operations for co-routines for first failure data capture (FFDC) in an event-driven environment. A stack management utility allocates space for a context structure, which includes a state field, and a stack pointer in a buffer. A context management utility initializes a first context structure of a first co-routine and saves a state of the first context structure in response to an execution request for a second co-routine. The context management utility sets a second context structure as a current context. When execution of the current context is complete, the context management utility restores the first context structure of the first co-routine as the current context. If the state field is not set to a valid value, a state save function “state saves” all allocated co-routine stacks and context structures, restores the entire system to a previous valid state, and restarts operations. | 2008-09-11 |
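The context save/restore discipline in this abstract — save the caller's context when a co-routine is invoked, restore it when the callee finishes, and mark each context with a validity state — can be sketched as follows. The class, the dictionary layout of a context structure, and the `VALID` marker value are all illustrative assumptions.

```python
class ContextManager:
    """Sketch of co-routine context bookkeeping: each co-routine gets a
    context structure with a state field; invoking a co-routine saves
    the caller's context, and finishing restores it as current."""
    VALID = 0xC0FFEE  # illustrative validity-marker value

    def __init__(self):
        self.current = None
        self.saved = []  # stack of saved caller contexts

    def init_context(self, name):
        return {'name': name, 'state': self.VALID, 'stack': []}

    def run(self, ctx):
        # save the caller's context, make ctx the current context
        if self.current is not None:
            self.saved.append(self.current)
        self.current = ctx

    def finish(self):
        # execution of the current context is complete: restore caller
        self.current = self.saved.pop() if self.saved else None
        return self.current

    def is_valid(self, ctx):
        # a state field not set to the valid value triggers the state
        # save / restore-to-last-valid-state path in the abstract
        return ctx['state'] == self.VALID
```

The validity check is the hook for first failure data capture: an invalid state field is the signal to dump all co-routine stacks and contexts before restoring a known-good state.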
20080222402 | Method and apparatus for product comparison - A method of comparing products is disclosed. The method includes selecting a first configuration representing a first product with a first attribute, selecting a second configuration representing a second product with a second attribute, and displaying the first attribute and the second attribute. As will be noted, the first attribute is defined in the first configuration, and the second attribute is defined in the second configuration. | 2008-09-11 |
20080222403 | SYSTEM AND METHOD TO PROVIDE DEVICE UNIQUE DIAGNOSTIC SUPPORT WITH A SINGLE GENERIC COMMAND - An embodiment of this invention provides a system and method for a diagnostic computer application executing on a host computer to extract vendor unique diagnostic information from an attached peripheral device. The peripheral device is pre-configured to respond with device unique information in response to certain standard interface protocol inquiries. Standard interface inquiry commands are used to extract detailed instructions from the device. These instructions may contain device unique small computer system interface (SCSI) command sequences, for example. The command sequences allow a user of the host computer to extract detailed data from the peripheral device about the peripheral device's operational, performance and health statistics. | 2008-09-11 |
20080222404 | In-system programming system and method for motherboard - An in-system programming system and method is provided, which is applicable for chip programming of a computer motherboard. Firstly, a programming interface is configured in the computer motherboard, in which one end of the programming interface is connected to an on-board programmer, and the other end is connected to a plurality of chips to be programmed, thereby achieving the communication between the on-board programmer and the chips. Next, a motherboard connector and the programming interface are connected, and the motherboard connector and the on-board programmer communicate through a communication interface of the on-board programmer. Then, the other end of the motherboard connector is connected to a programmable master-control program. Then, when the programmable master-control program is used for programming, programming contents of the programmable master-control program are transmitted to the on-board programmer through the communication between the motherboard connector and the on-board programmer, so as to program the chip. | 2008-09-11 |
20080222405 | COMPUTER INITIALIZATION SYSTEM - An initialization data generator includes a task database in which task descriptions for initializing a computer are specified related with task IDs and an initialization database in which initialization data descriptions for initializing a computer are stored related with initialization data IDs. The initialization data generator takes input of the computer ID of a computer to be initialized and task data, reads task descriptions and initialization data descriptions according to task ordering related with the task data from the task database and the initialization database, based on the task IDs, task ordering, and the initialization data IDs for software modules which are loaded into the computer to be initialized by the tasks corresponding to the task IDs, specified in the task data, and generates and transfers initialization data to the computer to be initialized, thereby initializing the computer to be initialized. | 2008-09-11 |
20080222406 | Hot-pluggable information processing device and setting method - A generator generates configuration information of a virtual hardware unit based on configuration information in a PCIBOX. The generator generates, from the configuration information of the virtual PCIBOX, recognition information for recognizing the virtual PCIBOX as a PCIBOX that is connected to a slot. When a PCIBOX is connected to the slot, the generator overwrites the configuration information in the PCIBOX with the configuration information of the virtual PCIBOX. | 2008-09-11 |
20080222407 | Monitoring Bootable Busses - A security circuit in a computer monitors data busses that support memory capable of booting the computer during the computer reset/boot cycle. When activity on one of the data busses indicates the computer is booting from a non-authorized memory location, the security circuit disrupts the computer, for example, by causing a reset. Execution from the non-authorized memory location may occur when an initial jump address at a known location, such as the top of memory, is re-programmed to a memory location having a rogue BIOS program. | 2008-09-11 |