38th week of 2008 patent application highlights part 63 |
Patent application number | Title | Published |
20080228977 | Method and Apparatus for Dynamic Hardware Arbitration - A method and apparatus for dynamically arbitrating, in hardware, requests for a resource shared among multiple clients. Multiple data streams or service requests require access to a shared resource, such as memory, communication bandwidth, etc. A hardware arbiter monitors the streams' traffic levels and determines when one or more of their arbitration weights should be adjusted. When a queue used by one of the streams is filled to a threshold level, the hardware reacts by quickly and dynamically modifying that queue's arbitration weight. Therefore, as the queue is filled or emptied to different thresholds, the queue's arbitration weight rapidly changes to accommodate the corresponding client's temporal behavior. The arbiter may also consider other factors, such as the client's type of traffic, a desired quality of service, available credits, available descriptors, etc. | 2008-09-18 |
20080228978 | METHOD OF DETERMINING REQUEST TRANSMISSION PRIORITY SUBJECT TO REQUEST CONTENT AND TRANSMITTING REQUEST SUBJECT TO SUCH REQUEST TRANSMISSION PRIORITY IN APPLICATION OF FIELDBUS COMMUNICATION FRAMEWORK - A method of determining request transmission priority subject to request content and transmitting the request subject to such request transmission priority in application of a Fieldbus communication framework, in which the communication device determines whether the received requests have priority subject to their respective content, determines whether any logical operation condition is established, and then transmits the received external requests to the connected slave device as ordinary requests or priority requests, preventing the slave device from receiving an important external request sent by the main control end or manager at a late time. | 2008-09-18 |
20080228979 | Trigger core - A method to detect an event between a data source and a data sink using a trigger core is described herein. The method comprises monitoring control lines and an associated data stream for a programmable pattern, wherein the pattern is one or more of a condition, state or event. The method further comprises generating an indication by updating a status register, sending an interrupt or asserting a control line upon a pattern match. | 2008-09-18 |
20080228980 | Microcontroller and Method for the Operation Thereof - A microcontroller (MC) has integrated functional modules (FM) encompassing a first functional module (FM | 2008-09-18 |
20080228981 | DESIGN STRUCTURE FOR DYNAMICALLY ALLOCATING LANES TO A PLURALITY OF PCI EXPRESS CONNECTORS - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for dynamically allocating lanes to a plurality of PCI Express connectors is disclosed that includes identifying whether a PCI Express device is installed into each PCI Express connector, and assigning a portion of the lanes to each PCI Express connector having a PCI Express device installed into it. Dynamically allocating lanes to a plurality of PCI Express connectors may also include identifying a device type for each PCI Express device installed into the plurality of PCI Express connectors, creating allocation rules that specify the allocation of lanes to the plurality of PCI Express connectors, and receiving user allocation preferences that specify the allocation of lanes to the plurality of PCI Express connectors. | 2008-09-18 |
20080228982 | Modular Expandable Mobile Navigation Device - An expandable system for mobile navigation facilitates a modular implementation of structural units to add desired functionality to a base navigation device. The system is embodied as a handheld mobile navigation device in one arrangement, including a base unit housing containing circuitry for determining a geographic location of the navigation device and a module unit housing containing circuitry for delivering additional functional activity. In particular, the base unit housing includes a primary interface for interconnecting with a secondary interface of the module unit housing, to enable signals generated or handled by a circuitry component of the module unit to be relayed to the circuitry of the base unit housing. Additionally, the module unit housing is configured to be releasably attached directly with the base unit housing upon the primary interface and secondary interface interconnecting with one another, to form the mobile navigation device as a physically connected package. | 2008-09-18 |
20080228983 | ELECTRONIC DEVICE TO WHICH AN OPTION DEVICE CAN BE MOUNTED AND A RECORDING MEDIUM - The present invention relates to an electronic device enabling an option configuration including an option device to be displayed on an information processing apparatus even if the information processing apparatus does not know the option device. The electronic device includes a storage unit storing respective device information items of the electronic device and the option devices; an option determination unit determining which option device is currently mounted to the electronic device; an option position detection unit detecting a position of the option device determined by the option determination unit relative to the electronic device; and an option configuration information generation unit generating option configuration information for displaying on the display unit of the information processing apparatus based on the device information items in the storage unit, a determination result of the option determination unit and the position of the option device detected by the option position detection unit. | 2008-09-18 |
20080228984 | Single-Chip Multi-Media Card/Secure Digital (MMC/SD) Controller Reading Power-On Boot Code from Integrated Flash Memory for User Storage - A Multi-Media Card/Secure Digital (MMC/SD) single-chip flash device contains a MMC/SD flash microcontroller and flash mass storage blocks containing flash memory arrays that are block-addressable rather than randomly-addressable. MMC/SD transactions from a host MMC/SD bus are read by a bus transceiver on the MMC/SD flash microcontroller. Various routines that execute on a CPU in the MMC/SD flash microcontroller are activated in response to commands in the MMC/SD transactions. A flash-memory controller in the MMC/SD flash microcontroller transfers data from the bus transceiver to the flash mass storage blocks for storage. Rather than boot from an internal ROM coupled to the CPU, a boot loader is transferred by DMA from the first page of the flash mass storage block to an internal RAM. The flash memory is automatically read from the first page at power-on. The CPU then executes the boot loader from the internal RAM to load the control program. | 2008-09-18 |
20080228985 | Embedded Processor with Direct Connection of Security Devices for Enhanced Security - An integrated circuit, a computer system and a method of operating a computer system are disclosed. The method includes receiving a request for an authentication at a microcontroller and requesting security data from a security device. The method also includes receiving the security data from the security device at the microcontroller and evaluating the security data. The method also includes approving the authentication if the security data is evaluated as acceptable. | 2008-09-18 |
20080228986 | ARCHITECTURE FOR CONTROLLING PERIPHERAL DEVICES - A peripheral component interface device capable of being removably coupled to an input-output interface in a computer, and at least one peripheral device is described. The peripheral component interface device includes a first communication bus configured to be removably coupled to the input-output interface associated with the computer, a second communication bus configured to be removably coupled to the input-output interface associated with the computer, and a signal regulation circuit electrically coupled to the first communication bus and the second communication bus. In one embodiment, the signal regulation circuit is responsive to commands from the second communication bus to control a signal from the first communication bus passing to the at least one peripheral device, when the at least one peripheral device is coupled to the peripheral component interface device. | 2008-09-18 |
20080228987 | Storage system and method of storage system path control - The present invention uses memory resources effectively and connects each storage device by a plurality of paths in a switchable manner, thus improving reliability and ease of use, by virtualizing external memory resources as internal memory resources. External storage | 2008-09-18 |
20080228988 | METHOD FOR TRANSMITTING CONFIGURATION DATA VIA A CONFIGURATION DATA BUS IN A MEMORY ARRANGEMENT, CONFIGURATION DATA BUS STRUCTURE, MEMORY ARRANGEMENT, AND COMPUTER SYSTEM - A method transmits configuration data in a memory arrangement. The method includes controlling, with a control unit of the memory arrangement, data transmissions via a configuration data bus in the memory arrangement, the controlling including controlling transmitting configuration data of the memory arrangement for storing in at least two register units of the memory arrangement via the configuration data bus from the control unit to each of the at least two register units. The method includes storing, in the at least two register units, the configuration data. The at least two register units have a same bus address identifying the at least two register units on the configuration data bus. The method includes requesting, with the control unit, configuration data stored in the at least two register units. The method includes transmitting, under control of the control unit, the stored configuration data via the configuration data bus from only one of the at least two register units to the control unit. | 2008-09-18 |
20080228989 | METHOD AND DEVICE FOR SECURING THE READING OF A MEMORY - A method reads a datum saved in a memory by selecting an address of the memory in which the datum to be read is saved, reading the datum in the memory at the selected address, saving the datum read in a storage space, and when the memory is not being accessed by a CPU, reading the datum in the memory, reading the datum saved in the storage space, and activating an error signal if the datum read in the memory is different from the datum saved. The method can be applied particularly to the protection of smart card integrated circuits. | 2008-09-18 |
20080228990 | STORAGE APPARATUS HAVING UNUSED PHYSICAL AREA AUTONOMOUS MANAGEMENT FUNCTION - A physical extent assurance unit manages correspondence of a logical disk accessed from a host computer with physical extents. A data pattern generation response unit generates a predetermined data pattern, and returns this data pattern in response to a data request from the host computer. A pattern matching unit checks the data pattern of a storage area every access to storage media or periodically. When the entire area of the assured physical extent defines the predetermined data pattern, the pattern matching unit deletes the logical disk allocation of the assured physical extent. | 2008-09-18 |
20080228991 | RING BUFFER MANAGEMENT - A method is provided for managing access to a ring buffer, for at least one data transfer channel for a determined amount of data, with this ring buffer comprising a series of buffer sub-areas spaced apart by a memory address offset and ordered from a first buffer sub-area to a last buffer sub-area. A starting address is initialized from a first register storing the value of the memory address of the first buffer sub-area, and a counter is initialized from a second register storing the value of the number of buffer sub-areas in the buffer. The buffer sub-areas are successively accessed, from the first buffer sub-area to the last buffer sub-area, starting from the starting address and as a function of the memory address offset, on the basis of the value of the counter. The initialization and access steps are repeated such that the determined amount of data is transferred. | 2008-09-18 |
20080228992 | SYSTEM, METHOD AND APPARATUS FOR ACCELERATING FAST BLOCK DEVICES - A system, method and apparatus directed to fast data storage on a block storage device. New data is written to an empty write block. If the new data is compressible, a compressed version of the new data is written into the meta data. A location of the new data is tracked. Meta data associated with the new data is written. A lookup table may be updated based in part on the meta data. The new data may be read based on the lookup table, which is configured to map a logical address to a physical address. Disk operations may use state data associated with the meta data to determine the empty write block. A write speed-limit may also be determined based on a lifetime period, a number of life cycles and a device-erase-sector-count for the device. A write speed for the device may be slowed based on the determined write speed-limit. | 2008-09-18 |
20080228993 | WIRELESS DATA COMMUNICATIONS USING FIFO FOR SYNCHRONIZATION MEMORY - A microprocessor system architecture is disclosed which allows for the selective execution of programmed ROM microcode or, alternatively, RAM microcode if there has been a correction or update made to the ROM microcode originally programmed into the system. Patched or updated RAM microcode is utilized or executed only to the extent of changes to the ROM microcode, otherwise the ROM microcode is executed in its normal fashion. When a patch is received, it is loaded into system RAM along with instructions or other appropriate signals to direct the execution of the patched or updated microcode from RAM instead of the existing ROM microcode. Various methods are presented for selecting the execution of the appropriate microcode depending upon whether there have been changes made to it. | 2008-09-18 |
20080228994 | Solid memory module structure with extensible capacity - A solid memory module structure with extensible capacity includes at least a non-volatile memory module, each of which has at least a memory chip, a first connector, and a control unit. The solid memory module also includes at least a second connector, which electrically connects to the first connector of the non-volatile memory module, and a system interface. | 2008-09-18 |
20080228995 | Portable Data Storage Device Using a Memory Address Mapping Table - A portable data storage device includes a USB controller, a master control unit and a NAND flash memory device. The master control unit receives data to be written to logical addresses, and instructions to read data from logical addresses. It uses a memory address mapping table to associate the logical addresses with the physical addresses in the memory device, and writes data to or reads data from the physical address corresponding to the logical address. The mapping is changed at intervals, so that different ones of the physical address regions are associated at different times with the logical addresses. This increases the speed of the device, and also means that no physical addresses are rapidly worn out by being permanently associated with logical addresses to which data is written relatively often. | 2008-09-18 |
20080228996 | Portable Data Storage Device Using Multiple Memory Devices - A portable data storage device includes a USB interface ( | 2008-09-18 |
20080228997 | ZONED INITIALIZATION OF A SOLID STATE DRIVE - Zoned initialization of a solid state drive is provided. A solid state memory device includes a controller for controlling storage and retrieval of data to and from the device. A set of solid state memory components is electrically coupled to the controller. The set is electrically divided into a first zone and a second zone, wherein the first zone is at least partially initialized independently from the second zone. An interface is coupled between the controller and the set of solid state memory components to facilitate transfer of data between the set of solid state memory components and the controller. | 2008-09-18 |
20080228998 | MEMORY STORAGE VIA AN INTERNAL COMPRESSION ALGORITHM - The subject specification discloses a flash memory device with the capability of performing both internal compression and internal de-compression. Each of these actions takes place through appropriate algorithms. In normal operation, the compression occurs prior to a writing of data in the flash memory device. The compressed data travels to a storage location. The de-compression occurs after the reading of stored data, and the de-compressed data travels to an external system. | 2008-09-18 |
20080228999 | Dual use for data valid signal in non-volatile memory - In some types of non-volatile memory devices, the same signal from a memory device may be used for two purposes: During a read operation, the signal may be used by a memory controller to latch the data that is being received from the memory device. During a block erase operation and/or a block write operation, the signal may be used to notify the memory controller that the operation has been completed by the memory device. | 2008-09-18 |
20080229000 | FLASH MEMORY DEVICE AND MEMORY SYSTEM - A memory system comprises a flash memory, a processing unit, and a flash controller including address and control registers, the address and control registers being configured to receive information from the processing unit, wherein the flash controller is configured to control a copy-back program operation of the flash memory in hardware based on information stored in the address and control registers. | 2008-09-18 |
20080229001 | Solid memory module with extensible capacity - A solid memory module with extensible capacity includes at least a non-volatile memory module, each of which has at least a memory chip and a first connector; at least a second connector, which electrically connects to the first connector of the non-volatile memory module; at least a control unit; and a system interface. The control unit receives external signals through the system interface and transmits them to the non-volatile memory module to store or use the memory content. | 2008-09-18 |
20080229002 | SEMICONDUCTOR MEMORY AND INFORMATION PROCESSING SYSTEM - A semiconductor memory ( | 2008-09-18 |
20080229003 | STORAGE SYSTEM AND METHOD OF PREVENTING DETERIORATION OF WRITE PERFORMANCE IN STORAGE SYSTEM - Provided is a storage system capable of inhibiting the deterioration of its write performance. This storage system includes a flash memory, a cache memory, and a controller for controlling the reading, writing and deletion of data of the flash memory and the reading and writing of data of the cache memory, and detecting the generation of a defective block in the flash memory. When the controller detects the generation of a defective block in the flash memory, it migrates prescribed data stored in the flash memory to the cache memory and, even upon receiving from the host computer a command for updating the migrated data, disables the writing of data in the flash memory based on the command. | 2008-09-18 |
20080229004 | PROCESSOR SYSTEM USING SYNCHRONOUS DYNAMIC MEMORY - A processor system including: a processor core and a controller core connected via an internal bus; and a plurality of synchronous memory chips connected to the processor via an external bus; the controller core including a mode register selected by an address signal from the processor core and written with information by a data signal from the processor core to select the operation mode of the plurality of synchronous memory chips, and a control unit to prescribe the operation mode to the plurality of synchronous memory chips based on the information written in the mode register, wherein the controller core selectively outputs a mode setting signal, based on the information written in the mode register or an access address signal from the processor core, to the plurality of synchronous memory chips via the external bus; and wherein the clock signal is commonly supplied to the plurality of synchronous memory chips. | 2008-09-18 |
20080229005 | Multi Partitioned Storage Device Emulating Dissimilar Storage Media - A digital media device is provided. In one embodiment, the digital media device includes a storage unit/partition that emulates a Compact Disc-Read Only Memory (CD-ROM), and optionally, a second storage unit/partition that acts as a Read/Write storage device. | 2008-09-18 |
20080229006 | High Bandwidth Low-Latency Semaphore Mapped Protocol (SMP) For Multi-Core Systems On Chips - A system and method for dynamically managing movement of semaphore data within the system. The system includes, but is not limited to, a plurality of functional units communicating over the network, a memory device communicating with the plurality of functional units over the network, and at least one semaphore storage unit communicating with the plurality of functional units and the memory device over the network. The plurality of functional units includes a plurality of functional unit memory locations. The memory device includes a plurality of memory device memory locations. The at least one semaphore storage unit includes a plurality of semaphore storage unit memory locations. The at least one semaphore storage unit controls dynamic movement of the semaphore data among the plurality of functional unit memory locations, the plurality of memory device memory locations, the plurality of semaphore storage unit memory locations, and any combinations thereof. | 2008-09-18 |
20080229007 | Enhancements to an XDR Memory Controller to Allow for Conversion to DDR2 - A memory control apparatus includes a data stream format converter and a physical layer converter. The data stream format converter is configured to convert an incoming data stream that has a data stream format corresponding to a first memory type into a format-converted data stream that has a data stream format corresponding to a second memory type. The second memory type is different from the first memory type. The physical layer converter is configured to convert the format-converted data stream into a physical-layer-converted data stream that has at least one physical parameter corresponding to the second memory type. The format-converted data stream has at least one physical parameter corresponding to the first memory type. | 2008-09-18 |
20080229008 | Sharing physical memory locations in memory devices - A memory structure includes a plurality of address banks where each address bank is operative to store a memory address. In certain embodiments, at least two of the address banks share physical memory locations for at least one redundant most significant bit. Additionally, at least two of the address banks in certain embodiments share physical memory locations for at least one redundant most significant bit and at least one redundant least significant bit. At least two of the address banks in certain embodiments also share physical memory locations for at least one redundant interior bit. | 2008-09-18 |
20080229009 | SYSTEMS AND METHODS FOR PUSHING DATA - A system for pushing data includes a source node that stores a coherent copy of a block of data. The system also includes a push engine configured to determine a next consumer of the block of data. The determination is made in the absence of the push engine detecting a request for the block of data from the next consumer. The push engine causes the source node to push the block of data to a memory associated with the next consumer to reduce latency of the next consumer accessing the block of data. | 2008-09-18 |
20080229010 | STORAGE SYSTEM AND METHOD FOR CONTROLLING CACHE RESIDENCY SETTING IN THE STORAGE SYSTEM - In a storage system adopting an external storage connection configuration, a first storage apparatus is capable of integrally managing the cache residency settings made in second storage apparatuses, which serve as external storage apparatuses. The first storage apparatus stores the cache residency information for the second storage apparatuses, i.e., external storage apparatuses, in a shared memory thereof. When the storage system receives a cache residency setting request from a management device or the like, the first storage apparatus issues a cache residency setting instruction to a second storage apparatus with reference to the residency information. In accordance with the setting instruction, the second storage apparatus sets a cache-resident area in a cache memory thereof. | 2008-09-18 |
20080229011 | CACHE MEMORY UNIT AND PROCESSING APPARATUS HAVING CACHE MEMORY UNIT, INFORMATION PROCESSING APPARATUS AND CONTROL METHOD - A cache memory unit connecting to a main memory system has a cache memory area in which, if memory data held by the main memory system is registered, the registered memory data is accessed by a memory access instruction that accesses the main memory system, and a local memory area with which local data to be used by the processing section is registered and in which the registered local data is accessed by a local memory access instruction, which is different from the memory access instruction. | 2008-09-18 |
20080229012 | RAID Array Auto-Initialization (RAAI) - A system and method are provided for efficiently initializing a redundant array of independent disks (RAID). The method monitors host write operations and uses that information to select the optimal method to perform a parity reconstruction operation. The bins to which data access write operations have not occurred can be initialized using a zeroing process. In one aspect, the method identifies drives in the RAID array capable of receiving a ‘WriteRepeatedly’ command and leverages that capability to eliminate the need for the RAID disk array controller to provide initialization data for all disk array initialization transfers. This reduces the RAID array controller processor and I/O bandwidth required to initialize the array and further reduces the time to initialize a RAID array. In a different aspect, a method is provided for efficiently selecting a host write process for optimal data redundancy and performance in a RAID array. | 2008-09-18 |
20080229013 | CACHE SYNCHRONIZATION IN A RAID SUBSYSTEM USING SERIAL ATTACHED SCSI AND/OR SERIAL ATA - A RAID system includes a pair of RAID controllers adapted to operate in active-active mode, each controller including a cache memory and at least one SAS/SATA I/O chip connected to a plurality of hard disk drives. Each SAS/SATA I/O chip includes more SAS/SATA ports than required to carry data to the hard drives. The caches in the respective controllers are synchronized via the extra SAS/SATA ports in each controller. | 2008-09-18 |
20080229014 | Disk Interface Card - A disk interface card includes a disk interface, a cache memory, a bus interface and a microprocessor. The disk interface card is electrically connected to a host disk interface of a host through a cable. The disk interface card can cooperate with a conventional disk array card, which is able to connect with several hard drives, so as to form an external hard drive array. The external hard drive array can be connected to the host through the cable. The disk interface card passively uses the cache memory to store an access command from the host, and waits for the disk array card to read out the access command therefrom. | 2008-09-18 |
20080229015 | PORTABLE MEMORY APPARATUS HAVING A CONTENT PROTECTION FUNCTION AND METHOD OF MANUFACTURING THE SAME - A portable memory apparatus having a content protection function is provided. The portable memory apparatus includes a memory and a memory control unit. The memory includes a read-only memory area which stores content and is set so that only read operations are allowed, a writable memory area which is set so that read and write operations are allowed, and a special memory area which stores information needed to operate the portable memory apparatus and is set so that only authenticated programs are allowed to read from and/or write to the special memory area. The memory control unit controls the read and write operations on each of the areas. | 2008-09-18 |
20080229016 | Boot in a media player with external memory - A media player is presented that scans the media files stored on an external memory card in order to update the internal database of the player. Media manager software on a personal computer sets a dirty bit in the internal memory of the media player whenever the media files on the external memory card are altered. The media player checks the dirty bit on start up or when the memory card is inserted. If the dirty bit is set, the media player scans the media files on the memory card, updates its database, then clears the dirty bit. If the dirty bit is not set, the media player does not scan the memory card. The dirty bit is associated in the internal memory with an identifier for the memory card, allowing the use of multiple memory cards. | 2008-09-18 |
20080229017 | Systems and Methods of Providing Security and Reliability to Proxy Caches - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, domain name resolution acceleration as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device or any type of interception caching and/or proxying device. | 2008-09-18 |
20080229018 | Save data discrimination method, save data discrimination apparatus, and a computer-readable medium storing a save data discrimination program - A save data discrimination method saves calculation results, including an element which is periodically saved, when a computer executes a program repeating the same arithmetic process. The method includes analyzing a loop structure of the program from a source code of the program to detect a main loop of the arithmetic process repeated in the program and a sub-loop included in the main loop, determining a point of entrance to the main loop as a checkpoint that is a point for saving data of the calculation results, and analyzing the contents of the arithmetic process described in the main loop to identify reference-first elements, which are elements only referred to and elements defined after being referred to, as data to be saved at the checkpoint determined at the point of entrance. | 2008-09-18 |
20080229019 | METHOD AND SYSTEM FOR EFFICIENT FRAGMENT CACHING - Methods for serving data include maintaining an incomplete version of an object and at least one fragment at a server. In response to a request for the object from a client, the incomplete version of the object, an identifier for a fragment comprising a portion of the object, and a position for the fragment within the object are sent to the client. After receiving the incomplete version of the object, the identifier, and the position, the client requests the fragment from the server using the identifier. The object is constructed by including the fragment in the incomplete version of the object at the location specified by the position. | 2008-09-18 |
20080229020 | Systems and Methods of Providing A Multi-Tier Cache - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP-based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, and domain name resolution acceleration, as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device, or any type of interception caching and/or proxying device. | 2008-09-18 |
20080229021 | Systems and Methods of Revalidating Cached Objects in Parallel with Request for Object - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP-based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, and domain name resolution acceleration, as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device, or any type of interception caching and/or proxying device. | 2008-09-18 |
20080229022 | EFFICIENT SYSTEM BOOTSTRAP LOADING - An efficient system for bootstrap loading scans cache lines into a cache store queue during a scan phase, and then transmits the cache lines from the cache store queue to a cache memory array during a functional phase. Scan circuitry stores a given cache line in a set of latches associated with one of a plurality of cache entries in the cache store queue, and passes the cache line from the latch set to the associated cache entry. The cache lines may be scanned from test software that is external to the computer system. Read/claim dispatch logic dispatches store instructions for the cache entries to read/claim machines which write the cache lines to the cache memory array without obtaining write permission, after the read/claim machines evaluate a mode bit which indicates that cache entries in the cache store queue are scanned cache lines. In the illustrative embodiment the cache memory is an L2 cache. | 2008-09-18 |
20080229023 | SYSTEMS AND METHODS OF USING HTTP HEAD COMMAND FOR PREFETCHING - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP-based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, and domain name resolution acceleration, as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device, or any type of interception caching and/or proxying device. | 2008-09-18 |
20080229024 | SYSTEMS AND METHODS OF DYNAMICALLY CHECKING FRESHNESS OF CACHED OBJECTS BASED ON LINK STATUS - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP-based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, and domain name resolution acceleration, as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device, or any type of interception caching and/or proxying device. | 2008-09-18 |
20080229025 | SYSTEMS AND METHODS OF USING THE REFRESH BUTTON TO DETERMINE FRESHNESS POLICY - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP-based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, and domain name resolution acceleration, as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device, or any type of interception caching and/or proxying device. | 2008-09-18 |
20080229026 | System and method for concurrently checking availability of data in extending memories - This invention discloses an extended memory comprising a first tag RAM for storing one or more tags corresponding to data stored in a first storage module, and a second tag RAM for storing one or more tags corresponding to data stored in a second storage module, wherein the first and second storage modules are separated and independent memory units, the numbers of bits in the first and second tag RAMs differ, and an address is concurrently checked against both the first and second tag RAMs using a first predetermined bit field of the address for checking against a first tag from the first tag RAM and using a second predetermined bit field of the address for checking against a second tag from the second tag RAM. | 2008-09-18 |
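The concurrent tag check in 20080229026 compares two different bit fields of one address against two tag RAMs of differing widths. The sketch below models that comparison in software (sequentially; the concurrency in the application is a property of the hardware), with illustrative field positions and widths that are assumptions, not from the application:

```python
def bit_field(address: int, shift: int, width: int) -> int:
    """Extract a bit field of the given width starting at bit `shift`."""
    return (address >> shift) & ((1 << width) - 1)

def check_both(address, tag_ram1, tag_ram2, field1=(12, 8), field2=(10, 10)):
    """Check the address against both tag RAMs using a different
    predetermined bit field for each; returns (hit1, hit2).
    The field definitions are hypothetical."""
    hit1 = bit_field(address, *field1) in tag_ram1
    hit2 = bit_field(address, *field2) in tag_ram2
    return hit1, hit2
```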
20080229027 | PREFETCH CONTROL DEVICE, STORAGE DEVICE SYSTEM, AND PREFETCH CONTROL METHOD - A prefetch control device controls prefetching of read-out data into a cache memory that improves the efficiency of data reading from a storage device by caching data passed between the storage device and a computing device. The device determines whether data read out from the storage device to the computing device is sequentially accessed data, decides a prefetch amount for the read-out data in accordance with a predetermined condition if the read-out data is determined to be sequentially accessed data, and prefetches the read-out data of the prefetch amount. | 2008-09-18 |
20080229028 | UNIFORM EXTERNAL AND INTERNAL INTERFACES FOR DELINQUENT MEMORY OPERATIONS TO FACILITATE CACHE OPTIMIZATION - A computer-implemented method, software infrastructure, and computer-usable program code for improving application performance. A delinquent memory operation instruction is identified. A delinquent memory operation instruction is an instruction associated with a number of cache misses that exceeds a threshold. A directive is inserted in a code region associated with the delinquent memory operation to form annotated code. The directive indicates an address of the delinquent memory operation instruction and a number of memory latency cycles expected to be required for the delinquent memory operation instruction to execute. The information included in the annotated code is used to optimize execution of an application associated with the delinquent memory operation instruction. | 2008-09-18 |
20080229029 | Semiconductor Memory System Having Plurality of Ranks Incorporated Therein - A semiconductor memory system which can integrate a plurality of ranks without occupying an increased area. The semiconductor memory system includes a memory device that has a plurality of ranks each having banks integrated therein, and a shared circuit section that is integrated in the memory device and is shared by the plurality of ranks. The plurality of ranks are selectively operated based on the signals provided from the shared circuit section. | 2008-09-18 |
20080229030 | Efficient Use of Memory Ports in Microcomputer Systems - A microcomputer system includes first and second IP blocks, a multi-port memory, a shared memory field allocated to the second IP block, and a second memory field. A first memory controller is configured to control access to the first and shared memory fields. A second memory controller is configured to control access to the second and shared memory fields. A bus controller operates in response to access request signals provided from the first and second IP blocks. When the second memory controller is in a ready state and the first and second IP blocks request access to the first and shared memory fields, the bus controller provides the first memory controller with access to the access request signal of the first IP block and provides the second memory controller with access to the access request signal of the second IP block. | 2008-09-18 |
20080229031 | Method of Automated Resource Management In A Partition Migration Capable Environment - A method, system and program are disclosed for automatically adjusting the allocation of a plurality of information processing system (IPS) resources among a plurality of logical partitions (LPARs). An LPAR is created on a first central processor complex (CPC) and a first LPAR identifier is generated. A configuration change manager is implemented on the LPAR to communicate changes in the LPAR's identifier to an automated resource manager (ARM). IPS resources are automatically allocated to the LPAR. If the LPAR is migrated to a second CPC, a second LPAR identifier is similarly generated, resulting in an LPAR configuration change event. The ARM is notified that the migrated LPAR's identifier has changed and receives the changed LPAR identifier. Comparison operations are performed to determine whether the second LPAR identifier matches the first CPC. If not, resources allocated to the migrated LPAR are released for automated allocation to other LPARs comprising the first CPC. | 2008-09-18 |
20080229032 | CELL PROCESSOR ATOMIC OPERATION - A method is disclosed for atomic operation in a processor system comprising a main memory and a power processor element (PPE) including a power processor unit (PPU) and an external cache. A processor system and processor readable medium for implementing the method are also disclosed. | 2008-09-18 |
20080229033 | Method For Processing Data in a Memory Arrangement, Memory Arrangement and Computer System - A method processes data in a memory arrangement. The method includes receiving and transmitting the data from the memory arrangement in the form of data packets according to a predefined protocol. The method includes distributing each received data packet to at least two separate data packet processing units. Each data packet processing unit is coupled to a portion of memory cells of the memory arrangement. The method includes processing, at each data packet processing unit, parts of the received data packets that relate to the portion of the memory cells the data packet processing unit is coupled to. The method includes generating a data packet to be transmitted including setting up, with each data packet processing unit, a part of the data packet to be transmitted. | 2008-09-18 |
20080229034 | DATA MANAGEMENT FOR IMAGE PROCESSING - An image processing system includes a memory for storing data associated with pixels of images, with the pixels having spatial coordinates in an image coordinate system having first and second axes; a processing device including a processor which processes the associated data; and an interface device which accesses in memory addresses associated with pixels of a block of pixels. In the interface device, access information is received indicating a base memory address, information regarding the dimensions of the block along the axes of the image coordinate system, and a storage method. At least one access rule is selected from multiple rules as a function of the storage method. The memory is accessed at the addresses associated with the pixels in the block, by applying the selected rule starting from the base address and taking into account the dimensions of the block. | 2008-09-18 |
20080229035 | SYSTEMS AND METHODS FOR IMPLEMENTING A STRIDE VALUE FOR ACCESSING MEMORY - Systems and methods for implementing a stride value for accessing memory are provided. One embodiment includes a system comprising a plurality of memory modules configured to store interleaved data in a plurality of memory storage units according to a predetermined interleave. The plurality of memory storage units can be defined by a memory range of consecutive addresses. The system also comprises a memory test device configured to access a portion of the plurality of memory storage units in a sequence that repeats according to a programmable stride value. | 2008-09-18 |
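The programmable stride access of 20080229035 can be sketched as modular index stepping over the storage units; when the stride and the unit count are coprime, every unit is visited exactly once before the sequence repeats. The function name and the modular-wraparound policy are assumptions for illustration:

```python
def strided_access_order(num_units: int, stride: int):
    """Yield storage-unit indices in a sequence that repeats according to
    a programmable stride value (hypothetical test-device sketch)."""
    index = 0
    for _ in range(num_units):
        yield index
        index = (index + stride) % num_units

# With 8 units and a stride of 3 (coprime), all units are covered once.
order = list(strided_access_order(8, 3))
```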
20080229036 | Information Processing apparatus and computer-readable storage medium - A computer-readable storage medium stores a program for causing a processor to perform a process including: acquiring a first address that specifies a start address of a first area on the main memory where a target data to be cached is stored and range information that specifies a size of the first area on the main memory; converting the first address into a second address that specifies a start address of a second area on the local memory, the second area having a one-to-n correspondence (n=positive integer) to a part of a bit string of the first address; copying the target data stored in the first area specified by the first address and the range information onto the second area specified by the second address and the range information; and storing the second address to allow accessing the target data copied onto the local memory. | 2008-09-18 |
20080229037 | SYSTEMS AND METHODS FOR CREATING COPIES OF DATA, SUCH AS ARCHIVE COPIES - A system and method of creating archive copies of data sets is described. In some examples, the system creates an archive copy from an original data set. In some examples, the system creates an archive copy when creating a recovery copy for a data set. In some examples, the system creates a copy without redundant data, and then encrypts the data set. | 2008-09-18 |
20080229038 | COPY SYSTEM AND COPY METHOD - Proposed are a copy system and a copy method capable of performing initial copy in a short amount of time and with high reliability. On the primary side, the area in the first volume to which data was written from the host is managed with a first bitmap, a second bitmap is created reflecting the contents of the first bitmap, and the second bitmap is sent to the secondary side. On the secondary side, a third bitmap into which the second bitmap sent from the primary side is merged is created. On the primary side, only the valid data containing data written by the host in the first volume is copied to the second volume based on the second bitmap, and, on the secondary side, the differential between the first and second volumes during the initial copy is managed based on the third bitmap. | 2008-09-18 |
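The bitmap handling in 20080229038 can be sketched as a bitwise merge on the secondary side plus derivation of the copy set on the primary side. The list-of-booleans representation and the function names below are assumptions for illustration:

```python
def merge_bitmaps(second: list, third: list) -> list:
    """Secondary side: merge the received second bitmap into the third
    bitmap (bitwise OR per region)."""
    return [a or b for a, b in zip(second, third)]

def regions_to_copy(second: list) -> list:
    """Primary side: only regions actually written by the host (set in
    the second bitmap) are copied during the initial copy."""
    return [i for i, written in enumerate(second) if written]
```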
20080229039 | Dual Writing Device and Its Control Method - A first storage system misrepresents an identifier of the storage system and an identifier of a volume and provides the host computer with a first volume. A second storage system misrepresents an identifier of the storage system and an identifier of a second volume as being identical to those misrepresented by the first storage system and provides the host computer with a second volume. A management computer acquires, upon detection of a failure in an access, a status of copying, a status of the first storage system, and a status of the second storage system, and controls access from the host computer with reference to the plurality of acquired statuses. Accordingly, even when a fault occurs in one of the two storage systems, a network that connects the two storage systems, or the like, the host computer can access the latest data. | 2008-09-18 |
20080229040 | NETWORK STORAGE SYSTEM, MANAGEMENT METHOD THEREFOR, AND CONTROL PROGRAM PRODUCT THEREFOR - A storage system, a storage management method, and a control program product are provided. The storage system improves convenience and economy by reducing the number of copies in a storage device in a network storage system, increasing storage efficiency, and increasing access speed. In a network storage system in which a plurality of client terminals are directly connected to a storage device via a network, the storage device includes an MV logical disk that stores read-only shared data and a BV logical disk from/onto which data specific to each client terminal is read/written. A control unit that controls read/write operations includes, with an access management table, an LDK management table which has a reference logical disk number column used to issue a command to refer to the MV logical disk when data other than the write data is read. | 2008-09-18 |
20080229041 | Electrical Transmission System in Secret Environment Between Virtual Disks and Electrical Transmission Method Thereof - The present invention relates to a secure transmission system and secure transmission method that securely transmit data stored in a computer to different computers via a Local Area Network or the Internet. The secure transmission system includes a virtual disk, configured to allow only an authorized application program module to gain an access and read, write and edit information data; and a secure communication application module including a user information generation means for generating intrinsic user information at the time of setting up the virtual disk, a user information storage means for storing the generated user information, an outgoing file management means for searching the virtual disk for information data to be sent and compressing the found information data, generating the header information of the information data in which user information about a sender and/or a recipient is contained, and adding the generated header information to the user information, an incoming file management means for reading the header information of received information data, decompressing compressed information data, and storing the decompressed information data on the virtual disk, and a file security means for encrypting and decrypting information data to be sent or received information data. | 2008-09-18 |
20080229042 | METHOD FOR LOCKING NON VOLATILE MEMORY WORDS IN AN ELECTRONIC DEVICE FITTED WITH RF COMMUNICATION MEANS - The electronic device, in particular a transponder, includes a non volatile memory (EEPROM) having a plurality of words | 2008-09-18 |
20080229043 | Information processing apparatus and computer usable medium therefor - An information processing apparatus capable of executing at least one information processing operation is provided. The information processing apparatus includes a process control system to execute one of the at least one information processing operation on a piece of data stored in a first data storage, which is indicated by a first storage name, when the piece of data in the first data storage is recognized. The information processing apparatus further includes the first storage name including a character string to specify the information processing operation to be performed, and a data relocating system to relocate the piece of data from the first data storage when the information processing operation is completed. | 2008-09-18 |
20080229044 | Pipelined buffer interconnect fabric - A method and system to transfer data from one or more data sources to one or more data sinks using a pipelined buffer interconnect fabric is described. The method comprises receiving a request for a data transfer from the data source to the data sink, assigning a first buffer and a first bus to the data source, locking the first buffer and the first bus so as to enable only the data source to transfer data to the first buffer via the first bus, receiving a signal from the data source indicating completion of data transfer to the first buffer, unlocking the first buffer and the first bus, assigning the first buffer and the first bus to the data sink, assigning a second buffer and a second bus to the data source, locking the second buffer and the second bus so as to enable only the data source to transfer data to the second buffer via the second bus and enabling the data sink to read data from the first buffer via the first bus while the data source writes to the second buffer via the second bus, thereby pipelining the data transfer from the data source to the data sink. The transfer of data from data source to data sink is controlled by programming the pipelined buffer interconnect via one or more of software, control registers and control signals. | 2008-09-18 |
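The ping-pong buffering in 20080229044 (the sink reads one locked buffer while the source writes the other) can be sketched as below. This is a software analogy of the hardware interconnect; the function name and the draining policy are assumptions, not from the application:

```python
def pipelined_transfer(source_chunks):
    """Double-buffered transfer sketch: on each step the source writes
    one buffer while the sink reads the previously filled buffer, then
    the buffer assignments swap."""
    buffers = [None, None]
    sink_out = []
    write_idx = 0
    pending = None                    # buffer index the sink reads next
    for chunk in source_chunks:
        buffers[write_idx] = chunk              # source writes its buffer
        if pending is not None:
            sink_out.append(buffers[pending])   # sink reads the other one
        pending = write_idx
        write_idx ^= 1                          # swap buffer assignments
    if pending is not None:
        sink_out.append(buffers[pending])       # drain the last buffer
    return sink_out
```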
20080229045 | STORAGE SYSTEM PROVISIONING ARCHITECTURE - In some embodiments, a storage controller comprises a first input/output port that provides an interface to a host computer, a second input/output port that provides an interface to a storage device, a processor that receives input/output requests generated by the host computer and, in response to the input/output requests, generates and transmits input/output requests to the storage device, and a memory module communicatively connected to the processor. The memory module comprises logic instructions stored in a computer-readable medium which, when executed by the processor, configure the processor to receive, from the host computer, a write input/output request that identifies a logical volume; compare an amount of storage space available in the logical volume with an amount of storage space required to complete the write operation, and allocate additional storage space to the logical volume if the amount of storage space available in the logical volume is insufficient to complete the write operation. Other embodiments may be described. | 2008-09-18 |
20080229046 | Unified support for solid state storage - In a method for providing unified support for solid state storage, a solid state storage class driver is provided to enable uniform operating system access to a plurality of dissimilar solid state storage devices. A common functionality of the plurality of dissimilar solid state storage devices is abstracted via a solid state storage port driver. A solid state storage bus driver is utilized to expose an interface feature of a solid state storage device, wherein the solid state storage device is selected from the plurality of dissimilar solid state storage devices such that the interface feature is accommodated while simultaneously enabling the operating system to support access to the plurality of dissimilar solid state storage devices in a unified manner. | 2008-09-18 |
20080229047 | Disk Space Allocation - A method and system for allocating blocks of disk space in persistent storage to requesting threads. A primary data structure is provided for organizing and categorizing blocks of disk space. In addition, a secondary data structure is provided for maintaining a list of all active file system processes and the blocks of disk space used by those processes. Blocks of disk space are assigned to pages. At such time as a thread may request allocation of disk space, both data structures are reviewed to determine whether the requested disk space is available and to limit access to available disk space within a single page of memory to a single thread at any one time. | 2008-09-18 |
20080229048 | Method and apparatus for chunk allocation in a thin provisioning storage system - Physical storage space in a storage system is not allocated to a segment of a targeted volume until the segment of the volume is first targeted for storing write data. When write data is received, the storage system determines whether the targeted volume is designated for storing a first data type that is accessed frequently by I/O operations or designated for storing a second data type that is accessed less frequently than the first data type. Physical storage space for storing the write data is allocated from a first logical partition of the physical storage designated for storing the first data type when the targeted volume is of the first data type and from a second logical partition of the physical storage designated for storing the second data type when the targeted volume is of the second data type. Allocation of frequently accessed data is controlled and performance bottlenecking avoided. | 2008-09-18 |
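The allocate-on-first-write policy of 20080229048, with separate physical partitions for frequently and infrequently accessed data types, might look like this in outline (the class name, partition labels, and first-fit chunk choice are assumptions, not from the application):

```python
class ThinPool:
    """Minimal thin-provisioning sketch: physical chunks are assigned to a
    volume segment only on its first write, drawn from the partition that
    matches the volume's designated data type."""
    def __init__(self, hot_chunks: int, cold_chunks: int):
        self.free = {"hot": list(range(hot_chunks)),
                     "cold": list(range(cold_chunks))}
        self.mapping = {}   # (volume, segment) -> (partition, chunk)

    def write(self, volume: str, segment: int, data_type: str):
        key = (volume, segment)
        if key not in self.mapping:          # allocate only on first write
            self.mapping[key] = (data_type, self.free[data_type].pop(0))
        return self.mapping[key]
```

A repeated write to the same segment reuses the already-allocated chunk, which is the point of deferring allocation until first use.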
20080229049 | PROCESSOR CARD FOR BLADE SERVER AND PROCESS. - System including a processor card containing at least two processors, and a memory card containing at least two memory units. At least one memory unit is associated with each processor. A controller dynamically allocates memory in the at least two memory units to the at least two processors. | 2008-09-18 |
20080229050 | DYNAMIC PAGE ON DEMAND BUFFER SIZE FOR POWER SAVINGS - A portable electronic device includes a processing device, a memory operatively coupled to said processing device, said memory comprising a plurality of blocks, wherein at least one block of the plurality of blocks may be powered independent of other blocks of the plurality of blocks, and a logic circuit operative to dynamically adjust a demand page buffer size within the memory and utilized by the processor, thereby permitting a corresponding adjustment of a number of powered memory blocks within the memory. | 2008-09-18 |
20080229051 | Broadcasting Instructions/Data to a Plurality of Processors in a Multiprocessor Device Via Aliasing - A mechanism for broadcasting instructions/data to a plurality of processors in a multiprocessor device via aliasing is provided. In order to broadcast data to a plurality of processors, a control processor writes to the registers that store the identifiers of the processors and sets two or more of these registers to a same value. The control processor may write the desired data/instructions to be broadcast to a portion of memory corresponding to the starting address associated with the processor identifier of the two or more processors. When the two or more processors look for a starting address of their local store from which to read, the two or more processors will identify the same starting address, essentially aliasing the memory region. The two or more processors will read the instructions/data from the same aliased memory region starting at the identified starting address and process the same instructions/data. | 2008-09-18 |
20080229052 | Data processing apparatus and method for implementing a replacement scheme for entries of a storage unit - A data processing apparatus and method are provided for implementing a replacement scheme for entries of a storage unit. The data processing apparatus has processing circuitry for executing multiple program threads including at least one high priority program thread and at least one lower priority program thread. A storage unit is then shared between the multiple program threads and has multiple entries for storing information for reference by the processing circuitry when executing the program threads. A record is maintained identifying for each entry whether the information stored in that entry is associated with a high priority program thread or a lower priority program thread. Replacement circuitry is then responsive to a predetermined event in order to select a victim entry whose stored information is to be replaced. To achieve this, the replacement circuitry performs a candidate generation operation to identify a plurality of randomly selected candidate entries, and then references the record in order to preferentially select as the victim entry a candidate entry whose stored information is associated with a lower priority program thread. This improves the performance of the high priority program thread(s) by preferentially evicting from the storage unit entries associated with lower priority program threads. | 2008-09-18 |
20080229053 | Expanding memory support for a processor using virtualization - In one embodiment, the present invention includes a system including a processor to access a maximum memory space of a first size using a memory address having a first length, a chipset coupled to the processor to interface the processor to a memory including a physical memory space, where the chipset is to access a maximum memory space larger than the first maximum memory space, and a virtual machine monitor (VMM) to enable the processor to access the full physical memory space of a memory. Other embodiments are described and claimed. | 2008-09-18 |
20080229054 | METHOD FOR PERFORMING JUMP AND TRANSLATION STATE CHANGE AT THE SAME TIME - A method for performing a jump and translation state change procedure at the same time is disclosed. The method includes: carrying out a series of instruction processing in a first function in a first translation state; and executing a jump instruction which jumps to a target address in a second function and initiates and completes a translation state change to a second translation state at the same time; wherein an address of a next instruction after the jump instruction is stored as a return address in a first register. | 2008-09-18 |
20080229055 | Hardware-Based Secure Code Authentication - The present invention provides for authentication of code, such as boot code. A memory addressing engine is employable to select a portion of a memory, as a function of a step value, as a first input hash value. The step value allows for the non-commutative cumulative hashing of a plurality of memory portions with a second input hash value, such as a previous hash value that has been rotated left. An authenticator circuit is employable to perform a hash upon the portion of memory and the second input hash value. A comparison circuit is then employable to compare an output of the authenticator circuit to an expected value. | 2008-09-18 |
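The non-commutative cumulative hashing in this abstract can be sketched like so; the 32-bit running value, the rotate-by-one amount, and the use of SHA-256 are assumptions for illustration (the patent only specifies hashing a memory portion with a rotated-left previous hash value):

```python
import hashlib

def rotl32(x, n):
    """Rotate a 32-bit value left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def cumulative_hash(portions, rot=1):
    """Non-commutative cumulative hash over a sequence of memory portions.

    Each step hashes (portion || rotated previous digest), so reordering
    the portions changes the final value -- the property that lets the
    comparison circuit detect tampered or reordered code.
    """
    acc = 0
    for portion in portions:
        prev = rotl32(acc, rot).to_bytes(4, "big")
        digest = hashlib.sha256(portion + prev).digest()
        acc = int.from_bytes(digest[:4], "big")  # keep a 32-bit running value
    return acc
```

An authenticator would compare the final value against the expected value provisioned for the boot image.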
20080229056 | METHOD AND APPARATUS FOR DUAL-HASHING TABLES - Methods and apparatus for dual hash tables are disclosed. An example method includes logically dividing a hash table data structure into a first hash table and a second hash table, where the first hash table and the second hash table are substantially logically equivalent. The example method further includes receiving a key and a corresponding data value, applying a first hash function to the key to produce a first index to a first bucket in the first hash table, and applying a second hash function to the key to produce a second index to a second bucket in the second hash table. In the example method the key and the data value are inserted in one of the first hash table and the second hash table based on the first index and the second index. | 2008-09-18 |
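A minimal sketch of the dual-hash insertion above, assuming chained buckets, a "less-loaded bucket wins" placement rule, and a salted second hash — all illustrative choices, since the abstract only says insertion is based on the two indices:

```python
def make_tables(num_buckets):
    """Two substantially logically equivalent halves of one hash table."""
    return [[] for _ in range(num_buckets)], [[] for _ in range(num_buckets)]

def insert(t1, t2, key, value):
    """Insert into whichever half offers the less-loaded target bucket."""
    i1 = hash(key) % len(t1)             # first hash function
    i2 = hash((key, "salt")) % len(t2)   # second, independent hash function
    target = t1[i1] if len(t1[i1]) <= len(t2[i2]) else t2[i2]
    target.append((key, value))

def lookup(t1, t2, key):
    """A key can live in either half, so probe both candidate buckets."""
    i1 = hash(key) % len(t1)
    i2 = hash((key, "salt")) % len(t2)
    for k, v in t1[i1] + t2[i2]:
        if k == key:
            return v
    return None
```

Giving each key two candidate buckets smooths out load, so worst-case bucket chains stay short even under skewed key distributions.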
20080229057 | ADAPTIVE PROFILING BY PROGRESSIVE REFINEMENT - A system/method for profiling a sequence of values from a range to determine a frequency of occurrence of a subrange includes, for a current block, determining whether a cell of the current block is a count cell or a pointer cell. If the cell is a pointer cell, follow the address that the pointer references, designate the new block as the current block, and repeat the determining step for the new block. If the cell is a count cell, increment the count cell and compare the incremented count to a threshold. If the count exceeds the threshold, convert the count cell to a pointer cell, which points to a newly allocated block. The newly allocated block is made the current block, and the steps are repeated until count cells do not exceed the threshold or a limit resolution is achieved. | 2008-09-18 |
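The count-cell-to-pointer-cell refinement above can be sketched as a small tree; the four-way fan-out, threshold of 4, equal-width subranges, and "stop refining at width 1" limit resolution are all assumptions for illustration:

```python
THRESHOLD = 4  # counts above this trigger refinement (assumed value)
FANOUT = 4     # cells per block (assumed value)

def new_block():
    # Each cell starts as a count cell (an int); a pointer cell is a list
    # representing the newly allocated child block.
    return [0] * FANOUT

def record(block, value, lo, hi):
    """Record one sample from [lo, hi), refining hot subranges."""
    while True:
        width = (hi - lo) / FANOUT
        idx = min(int((value - lo) / width), FANOUT - 1)
        cell = block[idx]
        if isinstance(cell, list):          # pointer cell: follow it
            lo, hi = lo + idx * width, lo + (idx + 1) * width
            block = cell                    # child becomes the current block
            continue
        block[idx] = cell + 1               # count cell: increment
        if block[idx] > THRESHOLD and width > 1:
            block[idx] = new_block()        # convert count cell -> pointer cell
        return
```

Only subranges that actually receive traffic get refined, so memory grows with the observed hot spots rather than with the full range.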
20080229058 | Configurable Microprocessor - A configurable microprocessor that handles low computing-intensive workloads by partitioning a single processor core into two smaller corelets. The process partitions resources of a single microprocessor core to form a plurality of corelets and assigns a set of the partitioned resources to each corelet. Each set of partitioned resources is dedicated to one corelet to allow each corelet to function independently of other corelets in the plurality of corelets. The process also combines a plurality of corelets into a single microprocessor core by combining corelet resources to form a single microprocessor core. The combined resources feed the single microprocessor core. | 2008-09-18 |
20080229059 | Message routing scheme - Each processor node in an array of nodes has a respective local node address, and each local node address comprises a plurality of components having an order of addressing significance from most to least significant. Each node comprises: mapping means configured to map each component of the local node address onto a respective routing direction, and a switch arranged to receive a message having a destination node address identifying a destination node. The switch comprises: means for comparing the local node address to the destination node address to identify the most significant non-matching component; and means for routing the message to another node, on the condition that the local node address does not match the destination node address, in the direction mapped to the most significant non-matching component. | 2008-09-18 |
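The per-hop decision described above amounts to dimension-ordered routing; a minimal sketch, where addresses are tuples ordered most- to least-significant and the direction names are an assumed mapping:

```python
def route_step(local, dest):
    """Choose the routing direction for one hop.

    Returns the direction mapped to the most significant non-matching
    address component, or None when the message has arrived.
    """
    directions = ("z", "y", "x")  # illustrative component-to-direction map
    for i, (l, d) in enumerate(zip(local, dest)):
        if l != d:
            # Route along the direction mapped to the most significant
            # non-matching component.
            return directions[i]
    return None  # local address matches destination: deliver locally
```

Because every node applies the same rule, components are resolved in a fixed order and the message converges on the destination without any global routing table.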
20080229060 | MICRO CONTROLLER AND METHOD OF UPDATING THE SAME - A micro controller includes a first storing circuit configured to store program data for performing a power on operation of a system, and a second storing circuit configured to temporarily store algorithm program data for operation of the system loaded from an external storing means while the system operates in response to control of the first storing circuit. | 2008-09-18 |
20080229061 | Processor Element for use in a Network of Processor Elements - In order to detect objects using a processor element for use in a network of processor elements which are connected to one another, the processor element comprises a processor, at least one interface for coupling to further processor elements of the network and an oscillator having a connection for coupling to an electrode outside the processor element. | 2008-09-18 |
20080229062 | Method of sharing registers in a processor and processor - A method of sharing registers in a processor includes executing a data processing instruction so as to obtain a result of the data processing instruction, which is to be written into a register of the processor. Register sharing information is obtained so as to control writing of the result into the register and/or at least one further register of the processor. | 2008-09-18 |
20080229063 | Processor Array with Separate Serial Module - A processor array has processor elements | 2008-09-18 |
20080229064 | Package designs for fully functional and partially functional chips - A method including obtaining an operational status of a first processor core, where the first processor core is associated with a plurality of processor cores located on a chip; configuring a first IO block of a package design based on the operational status of the first processor core, where the package design is based on a fully functional chip; and configuring a stackup of the package design after configuring the first IO block for use with the chip. | 2008-09-18 |
20080229065 | Configurable Microprocessor - A configurable microprocessor which combines a plurality of corelets into a single microprocessor core to handle high computing-intensive workloads. The process first selects two or more corelets in the plurality of corelets. The process combines resources of the two or more corelets to form combined resources, wherein each combined resource comprises a larger amount of a resource available to each individual corelet. The process then forms a single microprocessor core from the two or more corelets by assigning the combined resources to the single microprocessor core, wherein the combined resources are dedicated to the single microprocessor core, and wherein the single microprocessor core processes instructions with the dedicated combined resources. | 2008-09-18 |
20080229066 | System and Method for Compiling Scalar Code for a Single Instruction Multiple Data (SIMD) Execution Engine - A system, method, and computer program product are provided for performing scalar operations using a SIMD data parallel execution unit. With the mechanisms of the illustrative embodiments, scalar operations in application code are identified that may be executed using vector operations in a SIMD data parallel execution unit. The scalar operations are converted, such as by a static or dynamic compiler, into one or more vector load instructions and one or more vector computation instructions. In addition, control words may be generated to adjust the alignment of the scalar values for the scalar operation within the vector registers to which these scalar values are loaded using the vector load instructions. The alignment amounts for adjusting the scalar values within the vector registers may be statically or dynamically determined. | 2008-09-18 |
20080229067 | DATA POINTERS WITH FAST CONTEXT SWITCHING - An apparatus and method are disclosed for multiple data pointer registers and a means for quickly switching active context between the data pointer registers. | 2008-09-18 |
20080229068 | ADAPTIVE FETCH GATING IN MULTITHREADED PROCESSORS, FETCH CONTROL AND METHOD OF CONTROLLING FETCHES - A multithreaded processor, fetch control for a multithreaded processor and a method of fetching in the multithreaded processor. Processor event and use (EU) signals are monitored for downstream pipeline conditions indicating pipeline execution thread states. Instruction cache fetches are skipped for any thread that is incapable of receiving fetched cache contents, e.g., because the thread is full or stalled. Also, consecutive fetches may be selected for the same thread, e.g., on a branch mis-predict. Thus, the processor avoids wasting power on unnecessary or place keeper fetches. | 2008-09-18 |
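The fetch-gating policy above can be sketched as a per-cycle thread selector; the state-field names, the round-robin-style fallback, and the simple mis-predict override are assumptions standing in for the EU-signal logic:

```python
def select_fetch_thread(threads, last_thread=None, mispredicted=None):
    """Pick the next thread to fetch for, skipping ineligible threads.

    `threads` maps thread id -> {'full': bool, 'stalled': bool}.
    Returns None when every thread is ineligible, i.e. the fetch is
    gated this cycle to avoid wasting power.
    """
    # On a branch mis-predict, give the same thread a consecutive fetch
    # so its pipeline can refill quickly.
    if mispredicted is not None and not threads[mispredicted]["full"]:
        return mispredicted
    # Skip any thread that cannot receive fetched cache contents,
    # because its buffer is full or it is stalled.
    for tid, state in threads.items():
        if tid == last_thread:
            continue
        if not state["full"] and not state["stalled"]:
            return tid
    return None  # no eligible thread: skip the instruction cache access
```

Gating the cache access outright, rather than fetching and discarding, is what saves the power the abstract refers to.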
20080229069 | System, Method And Software To Preload Instructions From An Instruction Set Other Than One Currently Executing - An instruction preload instruction executed in a first processor instruction set operating mode is operative to correctly preload instructions in a different, second instruction set. The instructions are pre-decoded according to the second instruction set encoding in response to an instruction set preload indicator (ISPI). In various embodiments, the ISPI may be set prior to executing the preload instruction, or may comprise part of the preload instruction or the preload target address. | 2008-09-18 |
20080229070 | Cache circuitry, data processing apparatus and method for prefetching data - Cache circuitry, a data processing apparatus including such cache circuitry, and a method for prefetching data into such cache circuitry, are provided. The cache circuitry has a cache storage comprising a plurality of cache lines for storing data values, and control circuitry which is responsive to an access request issued by a device of the data processing apparatus identifying a memory address of a data value to be accessed, to cause a lookup operation to be performed to determine whether the data value for that memory address is stored within the cache storage. If not, a linefill operation is initiated to retrieve the data value from memory. Further, prefetch circuitry is provided which is responsive to a determination that the memory address specified by a current access request is the same as a predicted memory address, to perform either a first prefetch linefill operation or a second prefetch linefill operation to retrieve from memory at least one further data value in anticipation of that data value being the subject of a subsequent access request. The selection of either the first prefetch linefill operation or the second prefetch linefill operation is performed in dependence on an attribute of the current access request. The first prefetch linefill operation involves issuing a sequence of memory addresses to memory, and allocating into a corresponding sequence of cache lines the data values returned from the memory in response to that sequence of addresses. The second prefetch linefill operation comprises issuing a selected memory address to memory, and storing in a linefill buffer the at least one data value returned from the memory in response to that memory address, with that at least one data value only being allocated into the cache when a subsequent access request specifies the selected memory address. By such an approach, the operation of the prefetch circuitry can be altered to take into account the type of access request being issued. | 2008-09-18 |
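The choice between the two prefetch linefill operations can be sketched as below; treating "sequential" requests as the trigger for the eager operation is an illustrative assumption, since the abstract only says the selection depends on an attribute of the current access request:

```python
def prefetch_action(addr, predicted_addr, attribute):
    """Select which prefetch linefill operation to perform, if any."""
    if addr != predicted_addr:
        return None  # prediction did not match: no prefetch this time
    if attribute == "sequential":
        # First prefetch linefill operation: issue a sequence of addresses
        # and allocate all returned lines into the cache immediately.
        return "allocate_sequence"
    # Second prefetch linefill operation: fetch one line into a linefill
    # buffer; it is only allocated into the cache if a subsequent access
    # request actually specifies that address.
    return "hold_in_linefill_buffer"
```

The deferred second operation avoids polluting the cache with speculative lines for access patterns that are less likely to repeat.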
20080229071 | PREFETCH CONTROL APPARATUS, STORAGE DEVICE SYSTEM AND PREFETCH CONTROL METHOD - A prefetch control apparatus includes a prefetch controller for controlling prefetch of read data into a cache memory caching data to be transferred between a computer apparatus and a storage device, and which enhances a read efficiency of the read data from the storage device, a sequentiality decider for deciding whether the read data that are read from the storage device toward the computer apparatus are sequential access data, a locality decider for deciding whether the read data have locality of data arrangement in the predetermined storage area, in a case where the read data that are read from the storage device toward the computer apparatus have been decided not to be sequential access data, and a prefetcher for prefetching the read data in a case where the read data has the locality of the data arrangement. | 2008-09-18 |
20080229072 | PREFETCH PROCESSING APPARATUS, PREFETCH PROCESSING METHOD, STORAGE MEDIUM STORING PREFETCH PROCESSING PROGRAM - A prefetch processing apparatus includes a central-processing-unit monitor unit that monitors processing states of the central processing unit in association with time elapsed from start time of executing a program. A cache-miss-data address obtaining unit obtains cache-miss-data addresses in association with the time elapsed from the start time of executing the program, and a cycle determining unit determines a cycle of time required for executing the program. An identifying unit identifies a prefetch position in a cycle in which a prefetch-target address is to be prefetched by associating the cycle determined by the cycle determining unit with the cache-miss data addresses obtained by the cache-miss-data address obtaining unit. The prefetch-target address is an address of data on which prefetch processing is to be performed. | 2008-09-18 |
20080229073 | Address calculation and select-and-insert instructions within data processing systems - A data processing system | 2008-09-18 |
20080229074 | Design Structure for Localized Control Caching Resulting in Power Efficient Control Logic - A design structure for an integrated circuit (IC) including a decoder decoding instructions, shadow latches storing instructions as a localized loop, and a state machine controlling the decoder and the plurality of shadow latches. When the state machine identifies instructions that are the same as those stored in the localized loop, it deactivates the decoder and activates the plurality of shadow latches to retrieve and execute the localized loop in place of the instructions provided by the decoder. Additionally, a method of providing localized control caching operations in an IC to reduce power dissipation is provided. The method includes initializing a state machine to control the IC, providing a plurality of shadow latches, decoding a set of instructions, detecting a loop of decoded instructions, caching the loop of decoded instructions in the shadow latches as a localized loop, detecting a loop end signal for the loop and stopping the caching of the localized loop. | 2008-09-18 |
20080229075 | MICROCONTROLLER WITH LOW-COST DIGITAL SIGNAL PROCESSING EXTENSIONS - A set of low-cost microcontroller extensions facilitates Digital Signal Processing (DSP) applications by incorporating a Multiply-Accumulate (MAC) unit in a Central Processing Unit (CPU) of the microcontroller which is responsive to the extensions. | 2008-09-18 |
20080229076 | MACROSCALAR PROCESSOR ARCHITECTURE - A macroscalar processor architecture is described herein. In one embodiment, an exemplary processor includes one or more execution units to execute instructions and one or more iteration units coupled to the execution units. The one or more iteration units receive one or more primary instructions of a program loop that comprise a machine executable program. For each of the primary instructions received, at least one of the iteration units generates multiple secondary instructions that correspond to multiple loop iterations of the task of the respective primary instruction when executed by the one or more execution units. Other methods and apparatuses are also described. | 2008-09-18 |