27th week of 2009 patent application highlights part 72 |
Patent application number | Title | Published |
20090172298 | CACHED DIRTY BITS FOR CONTEXT SWITCH CONSISTENCY CHECKS - Embodiments of an invention using cached dirty bits for context switch consistency checks are disclosed. In one embodiment, a processor includes control logic and a cache. The control logic is to cause a consistency check to be performed on a subset of a plurality of state components during a first context switch. The cache is to store a dirty entry for each state component to indicate whether the corresponding state component is included in the subset. | 2009-07-02 |
20090172299 | System and Method for Implementing Hybrid Single-Compare-Single-Store Operations - A hybrid Single-Compare-Single-Store (SCSS) operation may exploit best-effort hardware transactional memory (HTM) for good performance in the case that it succeeds, and may transparently resort to software-mediated transactions if the hardware transactional mechanisms fail. The SCSS operation may compare a value in a control location to a specified expected value, and if they match, may store a new value in a separate data location. The control value may include a global lock, a transaction status indicator, and/or a portion of an ownership record, in different embodiments. If another transaction in progress owns the data location, the SCSS operation may abort the other transaction or may help it complete by copying the other transaction's write set into its own write set before acquiring ownership. A hybrid SCSS operation, which is usually nonblocking, may be applied to building software transactional memories (STMs) and/or hybrid transactional memories (HyTMs), in some embodiments. | 2009-07-02 |
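The core compare-then-store step this abstract describes — compare a value in a control location to an expected value and, only on a match, store a new value in a separate data location — can be sketched in a few lines. This is a minimal single-process sketch: the `SCSS` class name is invented here, and a plain lock stands in for the best-effort hardware and software-mediated transactional mechanisms the patent actually targets.

```python
import threading

class SCSS:
    """Single-Compare-Single-Store sketch: compare a control word
    against an expected value; only if they match, store a new value
    into a *separate* data location. A lock stands in for the
    transactional mechanisms of the abstract."""

    def __init__(self, control, data):
        self._lock = threading.Lock()   # software fallback path
        self.control = control
        self.data = data

    def scss(self, expected_control, new_data):
        with self._lock:
            if self.control != expected_control:
                return False            # control value changed: fail
            self.data = new_data        # store to the separate location
            return True

loc = SCSS(control=7, data="old")
ok = loc.scss(expected_control=7, new_data="new")    # control matches
failed = loc.scss(expected_control=9, new_data="x")  # control mismatch
```

A real hybrid implementation would attempt the compare and store inside a best-effort hardware transaction first and fall back to a software-mediated path, such as this lock, only when the hardware path fails.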
20090172300 | Device and method for creating a distributed virtual hard disk on networked workstations - Method and device for providing a virtual drive on a workstation PC which is connected via a network to other workstation PCs, encompassing a driver which makes available the virtual drive and carries out the following steps:
| 2009-07-02 |
20090172301 | INTELLIGENT NETWORK INTERFACE CARD (NIC) OPTIMIZATIONS - Intelligent NIC optimizations include systems and methods for Token Table Posting, use of a Master Completion Queue, a Notification Request Area (NRA) associated with completion queues, preferably in the Network Interface Card (NIC), for providing notification of request completions, and what we call Lazy Memory Deregistration, which allows non-critical memory deregistration processing to occur during non-busy times. These intelligent NIC optimizations can be applied outside the scope of VIA (e.g., iWARP and the like), but also support VIA. | 2009-07-02 |
20090172302 | Information Processing Apparatus, Information Processing Method, and Program - The present invention relates to an information processing apparatus, an information processing method, and a program capable of simplifying interrupt processing and reducing the time necessary for the interrupt processing. An interrupt generation unit | 2009-07-02 |
20090172303 | HYBRID TRANSACTIONS FOR LOW-OVERHEAD SPECULATIVE PARALLELIZATION - A method and apparatus for a hybrid transactional memory system is herein described. A first transaction is executed utilizing a first style of a transactional memory system and a second transaction is executed in parallel utilizing a second style of a transactional memory system. For example, a main thread is executed utilizing an update-in-place Software Transactional Memory (STM) system while a parallel thread, such as a helper thread, is executed utilizing a write buffering STM. As a result, a main thread may directly update memory locations, while a helper thread's transactional writes are buffered to ensure they do not invalidate transactional reads of the main thread. Therefore, parallel execution of threads is achieved, while ensuring at least one thread, such as a main thread, does not degrade below an amount of execution cycles it would take to execute the main thread serially. | 2009-07-02 |
20090172304 | Obscuring Memory Access Patterns in Conjunction with Deadlock Detection or Avoidance - Methods, apparatus and systems for memory access obscuration are provided. A first embodiment provides memory access obscuration in conjunction with deadlock avoidance. Such embodiment utilizes processor features including an instruction to enable monitoring of specified cache lines and an instruction that sets a status bit responsive to any foreign access (e.g., write or eviction due to a read) to the specified lines. A second embodiment provides memory access obscuration in conjunction with deadlock detection. Such embodiment utilizes the monitoring feature, as well as handler registration. A user-level handler may be asynchronously invoked responsive to a foreign write to any of the specified lines. Invocation of the handler more frequently than expected indicates that a deadlock may have been encountered. In such case, a deadlock policy may be enforced. Other embodiments are also described and claimed. | 2009-07-02 |
20090172305 | EFFICIENT NON-TRANSACTIONAL WRITE BARRIERS FOR STRONG ATOMICITY - A method and apparatus for providing optimized strong atomicity operations for non-transactional writes is herein described. Locks are acquired upon initial non-transactional writes to memory locations. The locks are maintained until an event is detected resulting in the release of the locks. As a result, in the intermediary period between acquiring and releasing the locks, any subsequent writes to memory locations that are locked are accelerated through non-execution of lock acquire operations. | 2009-07-02 |
20090172306 | System and Method for Supporting Phased Transactional Memory Modes - A phased transactional memory (PhTM) may support a plurality of transactional memory implementations, including software, hardware, and hybrid implementations, and may provide mechanisms for dynamically transitioning between transactional memory modes in response to changing workload characteristics; upon discovering that the current mode does not perform well, is not suitable, or does not support functionality required for particular transactions; or according to scheduled phases. A system providing PhTM may be configured to transition from a first transactional memory mode to a second transactional memory mode while ensuring that transactions executing in the first transactional memory mode do not interfere with correct execution of transactions in the second transactional memory mode. The system may be configured to abort transactions in progress or to wait for transactions to complete, be aborted, or reach a safe transition point before transitioning to a new mode, and may use a global mode indicator in coordinating transitions. | 2009-07-02 |
20090172307 | STORAGE DEVICE WITH TRANSACTION INDEXING CAPABILITY - In one aspect, a system for indexing transactions over a plurality of communication lines is described. In various embodiments, the system includes a host controller and a plurality of storage devices in communication with one another. Each of the storage devices is configured to store data. The communication lines facilitate communications between the host controller and the plurality of storage devices. A selected one of the storage devices is configured to function as a transaction indexer to monitor the communication lines and index and store selected transaction information associated with operations that occur over the communication lines. While the host controller may be arranged to configure the transaction indexer, the transaction monitoring, indexing and storing are performed substantially automatically by the transaction indexer without requiring further instructions from the host controller. | 2009-07-02 |
20090172308 | Storage controller for flash memory including a crossbar switch connecting a plurality of processors with a plurality of internal memories - A controller designed for use with a flash memory storage module, including a crossbar switch designed to connect a plurality of internal processors with various internal resources, including a plurality of internal memories. The memories contain work lists for the processors. In one embodiment, the processors communicate by using the crossbar switch to place tasks on the work lists of other processors. | 2009-07-02 |
20090172309 | Apparatus and method for controlling queue - An apparatus includes a queue element which stores a plurality of memory access requests to be issued to a memory device, the memory access requests including a store request and a load request, and a controller which changes an order of the store and load requests so that the order includes a string of the store requests and a string of the load requests. | 2009-07-02 |
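The reordering the controller in this abstract performs — turning a mixed queue into a string of store requests and a string of load requests — can be sketched as a stable partition. The tuple encoding and the stores-first ordering are illustrative assumptions; the abstract only requires that requests of each kind be grouped.

```python
def reorder(requests):
    """Rearrange a mixed queue of memory access requests into a run of
    store requests followed by a run of load requests, preserving the
    relative order within each kind (a stable partition)."""
    stores = [r for r in requests if r[0] == "store"]
    loads = [r for r in requests if r[0] == "load"]
    return stores + loads

q = [("store", 0x10), ("load", 0x20), ("store", 0x30), ("load", 0x40)]
grouped = reorder(q)  # stores first, then loads, original order kept
```

Grouping like-kind requests this way lets a memory device avoid repeated read/write bus turnarounds, which is the usual motivation for such a controller.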
20090172310 | APPARATUS AND METHOD FOR CONTROLLING MEMORY OVERRUN - A memory address filter is configurable to emulate memory overrun performance of a legacy memory using an electronic memory of equal or greater capacity. The address filter includes a comparator configured to determine whether a target address is greater than a maximum legacy-address. Memory emulation at target address values greater than the maximum legacy-address value includes one or more of inhibiting the memory transaction; accomplishing the requested memory transaction at the maximum legacy-address value; and accomplishing the requested memory transaction at an address equivalent to the target address wrapped according to the maximum legacy-address value. In some embodiments, the address filter accepts one or more configuration parameters, such as memory depth, wrap-around, and overwrite enable. | 2009-07-02 |
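The three emulation alternatives this abstract lists for target addresses above the maximum legacy address (inhibit the transaction, access at the maximum address, or wrap the address) can be sketched as a small address filter. The function and mode names here are invented for illustration, not taken from the patent.

```python
def filter_address(target, max_legacy, mode="wrap"):
    """Emulate legacy overrun behavior for addresses beyond the maximum
    legacy address, per the three alternatives in the abstract."""
    if target <= max_legacy:
        return target                    # in range: pass through
    if mode == "inhibit":
        return None                      # drop the memory transaction
    if mode == "clamp":
        return max_legacy                # access at the max legacy address
    return target % (max_legacy + 1)     # wrap around the legacy depth

# legacy memory of 256 addresses (0x00..0xFF)
wrapped = filter_address(0x105, 0xFF)            # 0x105 wraps to 0x05
clamped = filter_address(0x105, 0xFF, "clamp")   # held at 0xFF
dropped = filter_address(0x105, 0xFF, "inhibit") # transaction inhibited
```

The `memory depth`, `wrap-around`, and `overwrite enable` configuration parameters mentioned in the abstract would map onto `max_legacy` and `mode` in a sketch like this.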
20090172311 | APPARATUS FOR TESTING MEMORY DEVICE - Embodiments relate to an apparatus that may test a memory device. According to embodiments, the period of memory development may be reduced, and memory development cost lowered, by testing the delay of a major part in a memory with a simple added circuit rather than expensive equipment. According to embodiments, a memory device may include a memory array and a redundancy memory. According to embodiments, a device may include a programmable redundancy decoder determining a drive force corresponding to a selection signal and outputting the determined drive force to a word line of the redundancy memory, and a delay difference generating unit generating a delay difference signal corresponding to a delay difference between first and second word line signals outputted from the redundancy memory. | 2009-07-02 |
20090172312 | ELECTRONIC DEVICE WITH SERIAL ATA INTERFACE AND POWER SAVING METHOD FOR SERIAL ATA BUSES - In an electronic device with a serial ATA interface, upon detection of the issue or reception of a preset command, a confirmation device, such as a CPU, confirms the completion of execution of the command. Upon confirming the completion of execution of the command, a controller, which may also be the CPU, controls shifting of the serial ATA interface to a power saving mode. | 2009-07-02 |
20090172313 | ELECTRONIC DEVICE WITH SERIAL ATA INTERFACE AND POWER SAVING METHOD FOR SERIAL ATA BUSES - In an electronic device with a serial ATA interface, upon detection of the issue or reception of a preset command, a confirmation device, such as a CPU, confirms the completion of execution of the command. Upon confirming the completion of execution of the command, a controller, which may also be the CPU, controls shifting of the serial ATA interface to a power saving mode. | 2009-07-02 |
20090172314 | CODE REUSE AND LOCALITY HINTING - A method and apparatus for handling reusable and non-reusable code is herein described. Page table entries include code reuse and locality fields to hold hints for associated pages. If a code reuse and locality field holds a non-reusable value to indicate an associated page holds non-reusable code, then an instruction decoded from the associated page is not stored in the trace to obtain maximum efficiency and power savings from the trace cache and decode logic. | 2009-07-02 |
20090172315 | PRIORITY AWARE SELECTIVE CACHE ALLOCATION - A method and apparatus is herein described for providing priority-aware and consumption-guided dynamic probabilistic allocation for a cache memory. Utilization of a sample size of a cache memory is measured for each priority level of a computer system. Allocation probabilities for each priority level are updated based on the measured consumption/utilization, i.e. allocation is reduced for priority levels consuming too much of the cache and allocation is increased for priority levels consuming too little of the cache. An allocation request, when received, is assigned a priority level. An allocation probability associated with the priority level is compared with a randomly generated number. If the number is less than the allocation probability, then a fill to the cache is performed normally. In contrast, a spatially or temporally limited fill is performed if the random number is greater than the allocation probability. | 2009-07-02 |
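The decision rule in this abstract is simple to sketch: compare a random draw against the priority level's allocation probability to choose between a normal and a limited fill, and nudge each probability against its measured utilization. The function names, step size, and exact update rule below are assumptions for illustration.

```python
import random

def should_fill_normally(priority, alloc_prob, rng=random.random):
    """Perform a normal cache fill only when a random draw falls below
    the allocation probability for the request's priority level;
    otherwise the fill would be spatially or temporally limited.
    The rng hook exists so the decision can be tested deterministically."""
    return rng() < alloc_prob[priority]

def update_probability(prob, utilization, target, step=0.05):
    """Consumption-guided update: reduce allocation for levels consuming
    too much of the cache, increase it for levels consuming too little."""
    if utilization > target:
        return max(0.0, prob - step)
    return min(1.0, prob + step)

alloc_prob = {"high": 0.9, "low": 0.2}   # illustrative probabilities
hi_fill = should_fill_normally("high", alloc_prob, rng=lambda: 0.5)
lo_fill = should_fill_normally("low", alloc_prob, rng=lambda: 0.5)
```

Over time the update step drives each level's probability toward a point where its cache consumption matches its target share.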
20090172316 | MULTI-LEVEL PAGE-WALK APPARATUS FOR OUT-OF-ORDER MEMORY CONTROLLERS SUPPORTING VIRTUALIZATION TECHNOLOGY - The invention relates generally to computer memory access. Embodiments of the invention provide a multi-level page-walk apparatus and method that enable I/O devices to execute multi-level page-walks with an out-of-order memory controller. In embodiments of the invention, the multi-level page-walk apparatus includes a demotion-based priority grant arbiter, a page-walk tracking queue, a page-walk completion queue, and a command packetizer. | 2009-07-02 |
20090172317 | MECHANISMS FOR STRONG ATOMICITY IN A TRANSACTIONAL MEMORY SYSTEM - A method and apparatus for providing efficient strong atomicity is herein described. Optimized strong operations may be inserted at non-transactional read accesses to provide efficient strong atomicity. A global transaction value is copied at a beginning of a non-transactional function to a local transaction value (LTV); essentially creating a local timestamp of the global transaction value. At a non-transactional memory access within the function, a counter value or version value is compared to the LTV to see if a transaction has started updating memory locations, or specifically the memory location accessed. If memory locations have not been updated by a transaction, execution is accelerated by avoiding a full set of slowpath strong atomic operations to ensure validity of data accessed. In contrast, the slowpath operations may be executed to resolve contention between a transactional and a non-transactional access contending for the same memory location. | 2009-07-02 |
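The fast-path check this abstract describes — snapshot the global transaction value into a local transaction value (LTV) at function entry, then compare each accessed location's version against it — can be sketched as follows. The direction of the version comparison is an assumption about how versions are stamped, not something the abstract specifies.

```python
def needs_slowpath(location_version, ltv):
    """If the accessed location's version/counter value predates the LTV
    snapshot, no transaction has begun updating it since the function
    started, so the full set of slowpath strong-atomicity barriers can
    be skipped. The >= comparison direction is an assumption."""
    return location_version >= ltv

gtv = 41          # global transaction value at function entry
ltv = gtv         # local snapshot ("local timestamp" of the GTV)
fast = not needs_slowpath(40, ltv)   # old version: accelerate access
slow = needs_slowpath(41, ltv)       # concurrent update possible: slowpath
```

The point of the snapshot is that the common, uncontended case costs only one comparison per access instead of the full barrier sequence.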
20090172318 | Memory control device - A memory control device that can improve the speed of a memory interface. A packet disassembly section disassembles packet data into segments and detects packet quality information. A memory management section has an address management table and manages a state in which the packet data is stored according to the packet quality information. A segment/request information disassembler disassembles the segments into data by an access unit by which memories can be written/read, and generates write requests and read requests according to the access unit. A memory access controller avoids a bank to which access is prohibited because of a bank constraint, extracts a write request or a read request corresponding to an accessible bank from the write requests or the read requests generated, and gains write/read access to the memories. | 2009-07-02 |
20090172319 | SYSTEMS AND METHODS FOR RECOVERING ELECTRONIC INFORMATION FROM A STORAGE MEDIUM - In one embodiment of the invention, a method is provided for retrieving certain electronic information previously stored on certain storage media after a threshold set in the storage retention criteria has been exceeded, in an electronic information storage system that stores electronic information on storage media in accordance with storage retention criteria. The method includes storing a record in a memory associated with a system manager that assigns the storage retention criteria to the certain electronic data, designating the storage media available for overwrite after the threshold set in the storage retention policy has been exceeded, identifying the certain storage media available for overwrite, and retrieving information from the certain media after the threshold set in the storage retention policy has been exceeded. | 2009-07-02 |
20090172320 | Keystroke monitoring apparatus and method - Keystrokes input by a user are stored in non-volatile memory together with time stamps, creating a record of keystrokes and associated time stamps. At least some of the time stamps are generated and recorded in response to receipt of specific keystroke events, such as a specific keystroke, a specific sequence of keystrokes, a keystroke following an interval of inactivity or an interval of inactivity following a keystroke. The resulting keystroke record may show sessions of keystrokes received, with a start and end time stamp for each session. An alteration record is also provided to track alterations and erasures of the keystroke record. | 2009-07-02 |
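The session grouping described in this abstract — keystrokes bracketed by start and end time stamps, with intervals of inactivity splitting sessions — can be sketched with a simple gap threshold. The five-second idle gap and the tuple layout below are illustrative assumptions.

```python
def sessionize(events, idle_gap=5.0):
    """Group (timestamp, key) events into sessions separated by idle
    intervals longer than idle_gap, yielding (start, end, keys) tuples:
    the per-session start and end time stamps of the abstract."""
    sessions = []
    for t, k in events:
        if sessions and t - sessions[-1][1] <= idle_gap:
            start, _, keys = sessions[-1]
            sessions[-1] = (start, t, keys + [k])   # extend open session
        else:
            sessions.append((t, t, [k]))            # idle gap: new session
    return sessions

ev = [(0.0, "h"), (1.0, "i"), (20.0, "o"), (21.0, "k")]
record = sessionize(ev)  # two sessions split by the 19 s idle interval
```

In the patented device this record would live in non-volatile memory, alongside an alteration record tracking edits and erasures, which a sketch like this omits.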
20090172321 | STORAGE SUB-SYSTEM FOR A COMPUTER COMPRISING WRITE-ONCE MEMORY DEVICES AND WRITE-MANY MEMORY DEVICES AND RELATED METHOD - Methods and apparatus for a solid state non-volatile storage sub-system of a computer are provided. The storage sub-system may include a write-once storage sub-system memory device and a write-many storage sub-system memory device. Numerous other aspects are provided. | 2009-07-02 |
20090172322 | Automatically Adjusting a Number of Backup Data Sources Concurrently Backed Up to a Storage Device on a Server Computer - Various embodiments of a system and method for backing up data to a backup server computer are disclosed. According to one embodiment of the method, a group of backup data sources may be associated with a writer module on the backup server computer. Each backup data source may comprise data to be backed up from one of a plurality of client computer systems. The writer module may write the data from each of the backup data sources in the group to a target storage device in order to concurrently backup each backup data source to the target storage device. The writer module may also keep track of the rate at which data is written to the target storage device. The number of backup data sources in the group may be automatically adjusted based on the write rate, e.g., in order to maximize throughput to the target storage device. | 2009-07-02 |
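A minimal sketch of the feedback loop this abstract implies — grow the group of concurrently backed-up sources while the measured write rate improves, shrink it when the rate drops — under the assumption of a simple hill-climbing step policy, which the abstract does not specify:

```python
def adjust_group_size(current_size, write_rate, prev_rate, step=1, min_size=1):
    """Hill-climbing sketch: add backup data sources to the group while
    throughput to the target storage device keeps improving, and remove
    them when throughput degrades. The step policy is an assumption."""
    if write_rate >= prev_rate:
        return current_size + step           # throughput improving: grow
    return max(min_size, current_size - step)  # throughput dropped: shrink

grown = adjust_group_size(4, write_rate=120.0, prev_rate=100.0)
shrunk = adjust_group_size(4, write_rate=80.0, prev_rate=100.0)
```

The writer module in the abstract would run a loop like this periodically, using its tracked write rate as `write_rate`, to keep throughput to the target storage device near its maximum.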
20090172323 | METHODS AND APPARATUS FOR DEMAND-BASED MEMORY MIRRORING - A method includes determining an amount of memory space in a memory device available for memory mirroring. The method further includes presenting the available memory space to an operating system. The method further includes selecting at least a portion of the amount of memory space to be used for memory mirroring with the operating system. The method further includes adding a non-selected portion of the available memory to memory space available to the operating system during operation. An associated system and machine readable medium are also disclosed. | 2009-07-02 |
20090172324 | Storage system and method for opportunistic write-verify - A storage system that stores verify commands for all the write commands requiring verification in a verify-list that will be processed as a background task is described. The verify-list can include coded data fields that flexibly designate selected alternative states or possibilities for how and where the user data is actually stored. Alternatives for the verify-list include storing the actual raw data, no data, the data in compressed form, a CRC type signature of the data and/or a pointer to a backup copy of the data that is stored either in non-volatile memory such as flash memory or on the disk media in a temporary area. In case of a verification error, in various alternative embodiments, the user data can be recovered using the backup copy in the verify-list in the write cache, the backup copy in flash memory or on the disk, or from the host. | 2009-07-02 |
20090172325 | INFORMATION PROCESSING APPARATUS AND DATA RECOVERING METHOD - In an information processing apparatus, when an instruction is issued to write back storage contents of a main memory unit that is non-volatile, data and a write destination address included in a backup data that is set with a read permission are extracted from the backup data stored in a backup memory unit that is non-volatile. Further, according to the data and the write destination address extracted from the backup data, the data is written to a storage area of the main-memory unit indicated by the write destination address. | 2009-07-02 |
20090172326 | EMULATED STORAGE SYSTEM SUPPORTING INSTANT VOLUME RESTORE - In a back-up storage system, an apparatus and methods for mounting a data volume corresponding to a back-up data set to a host computer. In one example, a method includes mounting a data volume on a host computer, the data volume comprising at least one data file, the data file corresponding to a most recently backed-up version of the at least one data file stored on a backup storage system, and storing, on the backup storage system, data corresponding to a second version of the at least one data file that is more recent than the most recently backed-up version of the at least one data file stored on the backup storage system while preserving the most recently backed-up version of the at least one data file. | 2009-07-02 |
20090172327 | Optimistic Semi-Static Transactional Memory Implementations - A lock-based software transactional memory (STM) implementation may determine whether a transaction's write-set is static (e.g., known in advance not to change). If so, and if the read-set is not static, the STM implementation may execute, or attempt to execute, the transaction as a semi-static transaction. A semi-static transaction may involve obtaining, possibly after incrementing, a reference version value against which to subsequently validate that memory locations, such as read-set locations, have not been modified concurrently with the semi-static transaction. The read-set locations may be validated while locks are held for the locations to be written (e.g., the write-set locations). After committing the modifications to the write-set locations and as part of releasing the locks, versioned write-locks associated with the write-set locations may be updated to reflect the previously obtained, or newly incremented, reference version value. | 2009-07-02 |
20090172328 | SYSTEM AND METHOD FOR HIGH PERFORMANCE SECURE ACCESS TO A TRUSTED PLATFORM MODULE ON A HARDWARE VIRTUALIZATION PLATFORM - A system and method for high performance secure access to a trusted platform module on a hardware virtualization platform. The virtualization platform including Virtual Machine Monitor (VMM) managed components coupled to the VMM. One of the VMM managed components is a TPM (Trusted Platform Module). The virtualization platform also includes a plurality of Virtual Machines (VMs). Each of the virtual machines includes a guest Operating System (OS), a TPM device driver (TDD), and at least one security application. The VMM creates an intra-partition in memory for each TDD such that other code and information at a same or higher privilege level in the VM cannot access the memory contents of the TDD. The VMM also maps access only from the TDD to a TPM register space specifically designated for the VM requesting access. Contents of the TPM requested by the TDD are stored in an exclusively VMM-managed protected page table that provides hardware-based memory isolation for the TDD. | 2009-07-02 |
20090172329 | Providing secure services to a non-secure application - A data processing apparatus comprising a data processor for processing data in a secure and a non-secure mode, said data processor processing data in said secure mode having access to secure data that is not accessible to said data processor processing data in said non-secure mode; and a further processing device for performing a task in response to a request from said data processor issued from said non-secure mode, said task comprising processing data at least some of which is secure data, said further processing device comprising a secure data store, said secure data store not being accessible to processes running on said data processor in non-secure mode; wherein prior to issuing any of said requests said data processor is adapted to perform a set up operation on said further data processing device, said set up operation being performed by said data processor operating in said secure mode and comprising storing secure data in said secure data store on said further processing device, said secure data being secure data required by said further processing device to perform said task; wherein in response to receipt of said request from said data processor operating in said non-secure mode said further data processing device performs said task using data stored in said secure data store to access any secure data required. | 2009-07-02 |
20090172330 | Protection of user-level applications based on page table information - In one embodiment, the present invention includes a virtual machine monitor (VMM) to access a protection indicator of a page table entry (PTE) of a page of a set of memory buffers and determine a state of the protection indicator, and if the protection indicator indicates that the page is a user-level page and if certain information of an agent that seeks to use the page matches that in a protected memory address array, a page table base register (PTBR) is updated to a protected page table (PPT) base address. Other embodiments are described and claimed. | 2009-07-02 |
20090172331 | Securing content for playback - A graphics engine may include a decryption device, a renderer, and a sprite or overlay engine, all connected to a display. A memory may have protected and non-protected portions in one embodiment. An application may store encrypted content on the non-protected portion of said memory. The decryption device may access the encrypted material, decrypt the material, and provide it to the renderer engine of a graphics engine. The graphics engine may then process the decrypted material using the protected portion of the memory. Only graphics devices can access the protected portion of the memory in at least one mode, preventing access by outside sources. In addition, the protected memory may be stolen memory that is not identified to the operating system, making that stolen memory inaccessible to applications running on the operating system. | 2009-07-02 |
20090172332 | Information processing apparatus and method of updating stack pointer - An instruction execution part of an information processing device outputs an access request including first address information to specify an access destination, based on execution of an access command for an address space in a memory. The instruction execution part also outputs a check request including second address information to specify the stack pointer position after extension, based on execution of a stack extension command that extends a stack included in the address space in the memory by updating a stack pointer. A protection violation detection section of the information processing device detects whether the access destination includes the plurality of the partial spaces by collating the first address information with the memory protection information stored in the memory protection information storage section. | 2009-07-02 |
20090172333 | STORAGE DEVICE COORDINATOR AND A HOST DEVICE THAT INCLUDES THE SAME - A storage device coordinator intercepts a memory command issued by a host device and intended for a target storage device which is one of a plurality of storage devices, and, if the memory command is not optimal, transforms the memory command into one or more storage commands, each being associated with a respective storage device selected from the plurality of storage devices according to an optimization rule. A host device is also provided, which includes the storage device coordinator. A data storage system is also provided, which includes the storage device coordinator. | 2009-07-02 |
20090172334 | Data sorting device and method thereof - A data sorting device and a method thereof are disclosed, wherein the data sorting device includes plural storage modules and an enabling controller. Moreover, each storage module has a falling edge-triggered register and a rising edge-triggered register, and each storage module receives serial data in response to the rising edge and the falling edge of the clock. Furthermore, the enabling controller is connected with each storage module for enabling each storage module in sequential turns in response to the rising edge of the clock. | 2009-07-02 |
20090172335 | FLASH DEVICES WITH RAID - Methods and apparatus of the present invention include multiple flash storage devices that are configured to form a single storage device that is flexible and scalable. Reliability and performance are improved while keeping the power consumption benefits compared to conventional hard disk drives. | 2009-07-02 |
20090172336 | Allocating Memory in a Broker System - Memory allocation in a Broker system for managing the communication between a plurality of clients and a plurality of servers. The method may include allocating memory for a plurality of memory pools; and dividing each memory pool into memory blocks of a size which is specific to the type of a resource. The resource may be related to the communication managed by the Broker. | 2009-07-02 |
20090172337 | COOPERATIVE MECHANISM FOR EFFICIENT APPLICATION MEMORY ALLOCATION - System, method and computer program product for allocating physical memory to processes. The method includes enabling a kernel to free memory in a physical memory space corresponding to arbitrarily sized memory allocations released by processes or applications in a virtual memory space. After freeing the memory, the system determines whether freed physical memory in the physical memory space spans one or more fixed size memory units (e.g., page frames). The method further includes designating a status of the one or more page frames as available for reuse; the freed page frames marked as available for reuse being available for backing a new process without requiring the kernel to delete data included in the freed memory released by the process. The kernel may organize pages marked as available for reuse in one or more local “pools” organized according to a variety of schemes, which provide system efficiencies in that the kernel can eliminate the need to delete old data in those page frames without compromising data security. | 2009-07-02 |
20090172338 | FEEDBACK LINKER FOR INCREASED DELTA PERFORMANCE - A method, system and program for generating an updated memory image including updated program code to be loaded into a storage medium that has stored thereon a current memory image including a current program code version. The method comprises receiving an updated input code comprising a number of segments, wherein each segment is relocatable within the updated memory image; arranging the segments within the updated memory image. The arranging further comprises receiving a representation of the current program code version; performing at least one optimization step adapted to decrease an objective function under at least one predetermined layout constraint, the objective function being indicative of a magnitude of differences between the current program code version and the updated program code version, the layout constraint being indicative of at least one constraint imposed on the arrangement of segments within the memory image. | 2009-07-02 |
20090172339 | Apparatus and method for controlling queue - An apparatus includes a queue element which stores a plurality of memory access requests to be issued to a memory device, the memory access requests including a store request and a load request, and a controller which controls the queue element. The controller includes an address decision element which decides whether a first address of a first memory access request and a second address of a second memory access request relate to each other. The controller issues the second memory access request together with the first memory access request when the first address and the second address relate to each other. | 2009-07-02 |
20090172340 | Methods and arrangements to remap non-volatile storage - Methods and arrangements for remapping the map between logical space and physical space in non-volatile storage are described. Embodiments include transformations, code, state machines or other logic to divide the non-volatile storage of the computing device into two portions, a fixed portion and a floating portion. The embodiments may also include remapping in system firmware of the computing device the current map from logical space to physical space of the floating portion of the non-volatile storage. The embodiments may also include storing the revised map. The embodiments may also include using the revised map to access the floating portion of the non-volatile storage. | 2009-07-02 |
20090172341 | USING A MEMORY ADDRESS TRANSLATION STRUCTURE TO MANAGE PROTECTED MICRO-CONTEXTS - Embodiments of an invention for using a memory address translation structure to manage protected micro-contexts are disclosed. In one embodiment, an apparatus includes an interface and memory management logic. The interface is to perform a transaction to fetch information from a memory. The memory management logic is to translate an untranslated address to a memory address. The memory management logic includes a storage location, a series of translation stages, and determination logic. The storage location is to store an address of a data structure for the first translation stage. Each of the translation stages includes translation logic to find an entry in a data structure based on a portion of the untranslated address. Each entry is to store an address of a different data structure for the first translation stage, an address of a data structure for a successive translation stage, or the physical address. The determination logic is to determine whether an entry is storing an address of a different data structure for the first translation stage. | 2009-07-02 |
20090172342 | ROBUST INDEX STORAGE FOR NON-VOLATILE MEMORY - A non-volatile memory data address translation scheme is described that utilizes a hierarchical address translation system that is stored in the non-volatile memory itself. Embodiments of the present invention utilize a hierarchical address data and translation system wherein the address translation data entries are stored in one or more data structures/tables in the hierarchy, one or more of which can be updated in-place multiple times without having to overwrite data. This hierarchical address translation data structure and multiple update of data entries in the individual tables/data structures allow the hierarchical address translation data structure to be efficiently stored in a non-volatile memory array without markedly inducing write fatigue or adversely affecting the lifetime of the part. The hierarchical address translation of embodiments of the present invention also allows for an address translation layer that does not have to be resident in system RAM for operation. | 2009-07-02 |
20090172343 | USING A TRANSLATION LOOKASIDE BUFFER TO MANAGE PROTECTED MICRO-CONTEXTS - Embodiments of an invention for using a translation lookaside buffer to manage protected micro-contexts are disclosed. In one embodiment, an apparatus includes an interface and memory management logic. The interface is to perform a transaction to fetch information from a memory. The memory management logic is to translate an untranslated address to a memory address. The memory management logic includes a storage location, a series of translation stages, determination logic, and a translation lookaside buffer. The storage location is to store an address of a data structure for the first translation stage. Each of the translation stages includes translation logic to find an entry in a data structure based on a portion of the untranslated address. Each entry is to store an address of a different data structure for the first translation stage, an address of a data structure for a successive translation stage, or the physical address. The determination logic is to determine whether an entry is storing an address of a different data structure for the first translation stage. The translation lookaside buffer is to store translations. | 2009-07-02 |
20090172344 | METHOD, SYSTEM, AND APPARATUS FOR PAGE SIZING EXTENSION - A method, system, and apparatus may initialize a fixed plurality of page table entries for a fixed plurality of pages in memory, each page having a first size, wherein a linear address for each page table entry corresponds to a physical address and the fixed plurality of pages are aligned. A bit in each of the page table entries for the aligned pages may be set to indicate whether or not the fixed plurality of pages is to be treated as one combined page having a second page size larger than the first page size. Other embodiments are described and claimed. | 2009-07-02 |
20090172345 | TRANSLATION MANAGEMENT OF LOGICAL BLOCK ADDRESSES AND PHYSICAL BLOCK ADDRESSES - Systems and/or methods that facilitate PBA and LBA translations associated with one or more memory components are presented. A memory controller component facilitates determining which memory component, erase block, page, and data block contains a PBA in which a desired LBA and/or associated data is stored. The memory controller component facilitates control of performance of calculation functions, table look-up functions, and/or search functions to locate the desired LBA. The memory controller component generates a configuration sequence based in part on predefined optimization criteria to facilitate optimized performance of translations. The memory controller component and/or associated memory components can be configured so that the translation attributes are determined in a desired order using the desired translation function(s) to determine a respective translation attribute based in part on the predefined optimization criteria. The LBA to PBA translations can be performed in parallel by memory components. | 2009-07-02 |
20090172346 | TRANSITIONING BETWEEN SOFTWARE COMPONENT PARTITIONS USING A PAGE TABLE POINTER TARGET LIST - Embodiments of apparatuses, articles, methods, and systems for intra-partitioning components within an execution environment, and transitioning between partitions using a page table pointer target list are generally described herein. Other embodiments may be described and claimed. | 2009-07-02 |
20090172347 | Data storage device - A storage device includes a memory for storing data in a plurality of logical volumes; a controlling unit for controlling access to data in accordance with a process comprising the steps of: generating mapping information indicative of a correspondence between logical volume information and recognition information; generating a pseudo logical volume and pseudo logical volume information associated with the pseudo logical volume, the pseudo logical volume being another of the logical volumes; and upon receipt of a command for canceling an assignment of one of the logical volumes to the corresponding recognition information, modifying the mapping information so that recognition information that has been indicative of said one of the logical volumes becomes indicative of the pseudo logical volume information associated with the pseudo logical volume. | 2009-07-02 |
20090172348 | METHODS, APPARATUS, AND INSTRUCTIONS FOR PROCESSING VECTOR DATA - A computer processor includes control logic for executing LoadUnpack and PackStore instructions. In one embodiment, the processor includes a vector register and a mask register. In response to a PackStore instruction with an argument specifying a memory location, a circuit in the processor copies unmasked vector elements from the vector register to consecutive memory locations, starting at the specified memory location, without copying masked vector elements. In response to a LoadUnpack instruction, the circuit copies data items from consecutive memory locations, starting at an identified memory location, into unmasked vector elements of the vector register, without copying data to masked vector elements. Other embodiments are described and claimed. | 2009-07-02 |
20090172349 | METHODS, APPARATUS, AND INSTRUCTIONS FOR CONVERTING VECTOR DATA - A computer processor includes a decoder for decoding machine instructions and an execution unit for executing those instructions. The decoder and the execution unit are capable of decoding and executing vector instructions that include one or more format conversion indicators. For instance, the processor may be capable of executing a vector-load-convert-and-write (VLoadConWr) instruction that provides for loading data from memory to a vector register. The VLoadConWr instruction may include a format conversion indicator to indicate that the data from memory should be converted from a first format to a second format before the data is loaded into the vector register. Other embodiments are described and claimed. | 2009-07-02 |
20090172350 | Non-volatile processor register - A processor using a vertically configured non-volatile memory array that can retain values through a power failure is disclosed. The processor may include a register block configured to store and retrieve one or more values, the register block being a vertically configured non-volatile memory array, an arithmetic block configured to perform an arithmetic operation on the one or more values, and a control block configured to control the register block, the arithmetic block, and a memory block. The vertically configured non-volatile memory array may include a plurality of two-terminal memory elements. The two-terminal memory elements may be resistivity-sensitive and store data in the absence of power. The two-terminal memory elements store data as a plurality of conductivity profiles that can be non-destructively read by applying a read voltage across the terminals of the memory element, and data can be written by applying a write voltage across the terminals. | 2009-07-02 |
20090172351 | DATA PROCESSING DEVICE AND METHOD - A data processing device comprising a multidimensional array of coarse grained logic elements processing data and operating at a first clock rate and communicating with one another and/or other elements via busses and/or communication lines operated at a second clock rate is disclosed, wherein the first clock rate is higher than the second and wherein the coarse grained logic elements comprise storage means for storing data needed to be processed. | 2009-07-02 |
20090172352 | DYNAMIC RECONFIGURABLE CIRCUIT - A dynamic reconfigurable circuit including a plurality of processing elements each provided with an arithmetic data input port, a configuration data input port and an output port, a data network that is coupled to the arithmetic data input ports and the output ports of the plurality of processing elements, a configuration memory that is coupled via a configuration path to the configuration data input port of a first processor element being at least one of the plurality of processing elements, and an immediate value network that is independent from the data network and that is coupled to the configuration data input port of a second processor element being at least one of the plurality of processing elements. An internal register of a third processor element is coupled to the immediate value network so that data stored in the internal register can be outputted to the immediate value network. | 2009-07-02 |
20090172353 | SYSTEM AND METHOD FOR ARCHITECTURE-ADAPTABLE AUTOMATIC PARALLELIZATION OF COMPUTING CODE - Systems and methods for architecture-adaptable automatic parallelization of computing code are described herein. In one aspect, embodiments of the present disclosure include a method of generating a plurality of instruction sets from a sequential program for parallel execution in a multi-processor environment, which may be implemented on a system, of, identifying an architecture of the multi-processor environment in which the plurality of instruction sets are to be executed, determining running time of each of a set of functional blocks of the sequential program based on the identified architecture, determining communication delay between a first computing unit and a second computing unit in the multi-processor environment, and/or assigning each of the set of functional blocks to the first computing unit or the second computing unit based on the running times and the communication delay. | 2009-07-02 |
20090172354 | HANDSHAKING DUAL-PROCESSOR ARCHITECTURE OF DIGITAL CAMERA - A handshaking dual-processor architecture of a digital camera includes a microprocessor and a digital signal processor (DSP). After accepting a user command, the microprocessor transmits a wakeup signal to trigger the DSP to switch from a sleep mode to an operation mode, and transmits a data packet and a processing request to the DSP. After receiving the data packet, the DSP generates a data packet processing result according to the processing request. After receiving the data packet processing result, the microprocessor returns a processing state in response to the user command. Through the handshaking dual-processor architecture, it is unnecessary to implement low-level device operations in the application program; it is only necessary to submit a required basic function, such that the microprocessor controls the corresponding DSP to execute the basic function and report the execution result of the basic function. | 2009-07-02 |
20090172355 | INSTRUCTIONS WITH FLOATING POINT CONTROL OVERRIDE - Methods and apparatus relating to instructions with floating point control override are described. In an embodiment, floating point operation settings indicated by a floating point control register may be overridden on a per instruction basis. Other embodiments are also described. | 2009-07-02 |
20090172356 | COMPRESSED INSTRUCTION FORMAT - A technique for decoding an instruction in a variable-length instruction set. In one embodiment, an instruction encoding is described, in which legacy, present, and future instruction set extensions are supported, and increased functionality is provided, without expanding the code size and, in some cases, reducing the code size. | 2009-07-02 |
20090172357 | USING A PROCESSOR IDENTIFICATION INSTRUCTION TO PROVIDE MULTI-LEVEL PROCESSOR TOPOLOGY INFORMATION - Embodiments of an invention for using a processor identification instruction to provide multi-level processor topology information are disclosed. In one embodiment, a processor includes decode logic and control logic. The decode logic is to receive an identification instruction having an associated topological level value. The control logic is to provide, in response to the decode logic receiving the identification instruction, processor identification information corresponding to the associated topological level value. | 2009-07-02 |
20090172358 | IN-LANE VECTOR SHUFFLE INSTRUCTIONS - In-lane vector shuffle operations are described. In one embodiment a shuffle instruction specifies a field of per-lane control bits, a source operand and a destination operand, these operands having corresponding lanes, each lane divided into corresponding portions of multiple data elements. Sets of data elements are selected from corresponding portions of every lane of the source operand according to per-lane control bits. Elements of these sets are copied to specified fields in corresponding portions of every lane of the destination operand. Another embodiment of the shuffle instruction also specifies a second source operand, all operands having corresponding lanes divided into multiple data elements. A set selected according to per-lane control bits contains data elements from every lane portion of a first source operand and data elements from every corresponding lane portion of the second source operand. Set elements are copied to specified fields in every lane of the destination operand. | 2009-07-02 |
20090172359 | PROCESSING PIPELINE HAVING PARALLEL DISPATCH AND METHOD THEREOF - One or more processor cores of a multiple-core processing device each can utilize a processing pipeline having a plurality of execution units (e.g., integer execution units or floating point units) that together share a pre-execution front-end having instruction fetch, decode and dispatch resources. Further, one or more of the processor cores each can implement dispatch resources configured to dispatch multiple instructions in parallel to multiple corresponding execution units via separate dispatch buses. The dispatch resources further can opportunistically decode and dispatch instruction operations from multiple threads in parallel so as to increase the dispatch bandwidth. Moreover, some or all of the stages of the processing pipelines of one or more of the processor cores can be configured to implement independent thread selection for the corresponding stage. | 2009-07-02 |
20090172360 | INFORMATION PROCESSING APPARATUS EQUIPPED WITH BRANCH PREDICTION MISS RECOVERY MECHANISM - The information processing apparatus comprises a cache miss detection unit that detects a cache miss of a load instruction, and an instruction issuance stop unit that stops the issuance of an instruction subsequent to a conditional branch instruction if the branch direction of the conditional branch instruction subsequent to the load instruction for which the cache miss has been detected is not established at the timing of issuance. Thereby, the period of time for canceling an issued instruction, the cancellation having been caused by a branch prediction miss, is eliminated, and a penalty for the branch prediction miss is concealed under a wait time due to the cache miss. | 2009-07-02 |
20090172361 | COMPLETION CONTINUE ON THREAD SWITCH MECHANISM FOR A MICROPROCESSOR - A thread switch mechanism and technique for a microprocessor is disclosed wherein a processing of a first thread is completed, and a continuation of a second thread is initiated during completion of the first thread. In one form, the technique includes processing a first thread at a pipeline of a processing device, and initiating processing of a second thread at a front end of the pipeline in response to an occurrence of a context switch event. The technique can also include initiating an instruction progress metric in response to the context switch event. The technique can further include enabling completion of processing of instructions of the first thread that are at a back end of the pipeline at the occurrence of the context switch event until an expiry of the instruction progress metric. | 2009-07-02 |
20090172362 | PROCESSING PIPELINE HAVING STAGE-SPECIFIC THREAD SELECTION AND METHOD THEREOF - One or more processor cores of a multiple-core processing device each can utilize a processing pipeline having a plurality of execution units (e.g., integer execution units or floating point units) that together share a pre-execution front-end having instruction fetch, decode and dispatch resources. Further, one or more of the processor cores each can implement dispatch resources configured to dispatch multiple instructions in parallel to multiple corresponding execution units via separate dispatch buses. The dispatch resources further can opportunistically decode and dispatch instruction operations from multiple threads in parallel so as to increase the dispatch bandwidth. Moreover, some or all of the stages of the processing pipelines of one or more of the processor cores can be configured to implement independent thread selection for the corresponding stage. | 2009-07-02 |
20090172363 | MIXING INSTRUCTIONS WITH DIFFERENT REGISTER SIZES - When legacy instructions, that can only operate on smaller registers, are mixed with new instructions in a processor with larger registers, special handling and architecture are used to prevent the legacy instructions from causing problems with the data in the upper portion of the registers, i.e., the portion that they cannot directly access. In some embodiments, the upper portion of the registers is saved to temporary storage while the legacy instructions are operating, and restored to the upper portion of the registers when the new instructions are operating. A special instruction may also be used to disable this save/restore operation if the new instructions are not going to use the upper part of the registers. | 2009-07-02 |
20090172364 | DEVICE, SYSTEM, AND METHOD FOR GATHERING ELEMENTS FROM MEMORY - A system and method for assigning values to elements in a first register, where each data field in a first register corresponds to a data element to be written into a second register, and where for each data field in the first register, a first value may indicate that the corresponding data element has not been written into the second register and a second value indicates that the corresponding data element has been written into the second register, reading the values of each of the data fields in the first register, and for each data field in the first register having the first value, gathering the corresponding data element and writing the corresponding data element into the second register, and changing the value of the data field in the first register from the first value to the second value. Other embodiments are described and claimed. | 2009-07-02 |
20090172365 | Instructions and logic to perform mask load and store operations - In one embodiment, logic is provided to receive and execute a mask move instruction to transfer a vector data element including a plurality of packed data elements from a source location to a destination location, subject to mask information for the instruction. Other embodiments are described and claimed. | 2009-07-02 |
20090172366 | Enabling permute operations with flexible zero control - In one embodiment, the present invention includes logic to receive a permute instruction, first and second source operands, and control values, and to perform a permute operation based on an operation between at least two of the control values. Multiple permute instructions may be combined to perform efficient table lookups. Other embodiments are described and claimed. | 2009-07-02 |
20090172367 | PROCESSING UNIT - A processing unit has an extended register to which instruction extension information indicating an extension of an instruction can be set. When instruction extension information is set in the extended register, an operation unit executes a subsequent instruction following a first instruction that writes the instruction extension information into the extended register, extending the subsequent instruction based on the instruction extension information. | 2009-07-02 |
20090172368 | Hardware Based Runtime Error Detection - A processor that includes a storage medium which includes microcode that performs runtime analysis. The storage medium can include instrumented microcode that monitors at least one execution of a machine instruction resulting in a memory access, instrumented microcode that accesses at least one memory state indicator to determine whether the memory access is improper, and instrumented microcode that outputs an exception when the memory access is improper. | 2009-07-02 |
20090172369 | SAVING AND RESTORING ARCHITECTURAL STATE FOR PROCESSOR CORES - A method and apparatus for saving and restoring architectural states utilizing hardware is herein described. A first portion of an architectural state of a processing element, such as a core, is concurrently saved upon being updated. A remaining portion of the architectural state is saved to memory in response to a save state triggering event, which may include a hardware event or a software event. Once saved, the state is potentially transferred to another processing element, such as a second core. As a result, hardware, software, or combination thereof may transfer architectural states between multiple processing elements, such as threads or cores, of a processor utilizing hardware support. | 2009-07-02 |
20090172370 | EAGER EXECUTION IN A PROCESSING PIPELINE HAVING MULTIPLE INTEGER EXECUTION UNITS - One or more processor cores of a multiple-core processing device each can utilize a processing pipeline having a plurality of execution units (e.g., integer execution units or floating point units) that together share a pre-execution front-end having instruction fetch, decode and dispatch resources. Further, one or more of the processor cores each can implement dispatch resources configured to dispatch multiple instructions in parallel to multiple corresponding execution units via separate dispatch buses. The dispatch resources further can opportunistically decode and dispatch instruction operations from multiple threads in parallel so as to increase the dispatch bandwidth. Moreover, some or all of the stages of the processing pipelines of one or more of the processor cores can be configured to implement independent thread selection for the corresponding stage. | 2009-07-02 |
20090172371 | FEEDBACK MECHANISM FOR DYNAMIC PREDICATION OF INDIRECT JUMPS - Systems and methods are provided to detect instances where dynamic predication of indirect jumps (DIP) is considered to be ineffective, utilizing data collected on the recent effectiveness of dynamic predication on recently executed indirect jump instructions. Illustratively, a computing environment comprises a DIP monitoring engine cooperating with a DIP monitoring table that aggregates and processes data representative of the effectiveness of DIP on recently executed jump instructions. Illustratively, the exemplary DIP monitoring engine collects and processes historical data on DIP instances, where, illustratively, a monitored instance can be categorized according to one or more selected classifications. A comparison can be performed for currently monitored indirect jump instructions using the collected historical data (and classifications) to determine whether DIP should be invoked by the computing environment or whether to invoke other indirect jump prediction paradigms. | 2009-07-02 |
20090172372 | METHODS AND APPARATUS FOR GENERATING SYSTEM MANAGEMENT INTERRUPTS - A method includes determining a plurality of memory addresses, each memory address being different from one another. The method further includes generating a plurality of system management interrupt interprocessor interrupts, each system management interrupt interprocessor interrupt having a corresponding processor in a plurality of processors in a system and each system management interrupt interprocessor interrupt including one of the plurality of memory addresses. The method further includes directing each system management interrupt interprocessor interrupt to the corresponding processor. An associated machine readable medium is also disclosed. | 2009-07-02 |
20090172373 | Ethernet based Automotive Infotainment Power Controller - An Automotive Infotainment Power Controller utilizes a PIC micro-controller, a TCP/IP stack, and a 10 Mbit Ethernet connection as an industry-standard interface by which the Automotive Infotainment Power Controller is connected to any number of personal computers. This infrastructure allows custom firmware to be developed for the Power Controller that can interact with PC drivers. It also opens the door for bi-directional communication between multiple personal computers and the Power Controller. Any number of protocols can be chosen or developed to facilitate this communication over standard Ethernet TCP/IP packets. Having this Ethernet-based communication pipeline between the Power Controller and PCs allows the system to be configured, provide diagnostic/status information, communicate with, and individually start up/shut down PCs connected to the system. | 2009-07-02 |
20090172374 | Information Processing Device, Information Processing Method, and Computer Readable Medium Therefor - An information processing device, configured to perform information processing with an application run on an operating system, includes an installing unit configured to install the application into the information processing device, an instruction accepting unit configured to accept an instruction to run the application, a first determining unit configured to determine whether the operating system has been rebooted after the installation of the application, in response to the instruction accepted through the instruction accepting unit, and a prohibiting unit configured to prohibit the application from being run when the first determining unit determines that the operating system has not been rebooted. | 2009-07-02 |
20090172375 | Operating Point Management in Multi-Core Architectures - Systems and methods of managing operating points provide for determining the number of active cores in a plurality of processor cores. A maximum operating point is selected for at least one of the active cores based on the number of active cores. In one embodiment, the number of active cores is determined by monitoring an ACPI processor power state signal of each of the plurality of cores. | 2009-07-02 |
20090172376 | METHODS, APPARATUSES, AND COMPUTER PROGRAM PRODUCTS FOR PROVIDING A SECURE PREDEFINED BOOT SEQUENCE - An apparatus for providing a secure predefined boot sequence may include a processor. The processor may be configured to verify a predefined boot sequence certificate that defines a boot sequence for a device, verify one or more software elements referenced by the predefined boot sequence certificate, and execute one or more software elements that have been verified in the sequence defined by the predefined boot sequence certificate. Corresponding methods, systems, and computer program products are also provided. | 2009-07-02 |
20090172377 | METHOD AND APPARATUS FOR BOOTING A PROCESSING SYSTEM - Machine-readable media, methods, apparatus and system for booting a processing system are described. In an embodiment, whether to launch an open operating system or a closed operating system to boot a processing system may be determined. A key may be retrieved from a processor register of the processing system and used to decrypt an encrypted version of the closed operating system based at least in part on a determination of booting the processing system with the closed operating system. In another embodiment, the processor register stored with the key may be flushed based at least in part on a determination of booting the processing system with the open operating system. | 2009-07-02 |
20090172378 | METHOD AND SYSTEM FOR USING A TRUSTED DISK DRIVE AND ALTERNATE MASTER BOOT RECORD FOR INTEGRITY SERVICES DURING THE BOOT OF A COMPUTING PLATFORM - A trusted hard disk drive (“THDD”) contains cryptographic primitives and support functions in a trusted partition (“TP”). In particular, a master boot record (“MBR”) of the THDD is replaced with an alternative MBR and the normal MBR is stored elsewhere on the THDD. The program(s) loaded from the alternative MBR performs measurements of the TP. The TP, in turn, performs all necessary measurements of the MBR, a personal computer platform's OS, and the OS-present applications, including a platform trust service (“PTS”) kernel. The program(s) also performs functions to clear the PC platform's state such that any events that occurred prior to its execution do not alter the functionality of the OS-present applications. This may include clearing the PC's microprocessor, system memory and cache, for example. DRTM types of system resets may also be performed after the PC's OS has booted to force system clears without requiring OS or VMM infrastructure. | 2009-07-02 |
20090172379 | SYSTEM AND METHOD TO ENABLE PARALLELIZATION OF EARLY PLATFORM INITIALIZATION - In some embodiments, the invention involves reducing the time required for a platform to boot to its target application/operating-system using parallelization of firmware image content decompression and loading. An embodiment dispatches alternate processing agents as a means to intelligently assist in off-loading some of the initialization tasks so that the main processor may share the burden of boot tasks. In at least one embodiment, it is intended to build firmware images that facilitate parallelization, utilizing co-processing agents that can split these transactions across various processing agents. Other embodiments are described and claimed. | 2009-07-02 |
20090172380 | BOOTING AN INTEGRATED CIRCUIT - An integrated circuit comprising: a processor; a plurality of external pins operatively coupled to the processor; and a permanently written memory operatively coupled to the processor, the memory having a plurality of regions each storing one or more respective boot properties for booting the processor. The processor is programmed to select one of the regions in dependence on an indication received via one or more of the external pins, to retrieve the one or more respective boot properties from the selected region, and to boot using the one or more retrieved boot properties. | 2009-07-02 |
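The region-selection scheme above can be sketched in a few lines. The pin encodings, region contents, and the helper name `select_boot_properties` are assumptions for illustration; a real device would sample its external pins and read a write-once memory.

```python
# Hypothetical boot-property regions of a permanently written memory,
# indexed by the value sampled from the external pins at reset.
BOOT_REGIONS = {
    0b00: {"clock_mhz": 100, "source": "flash"},
    0b01: {"clock_mhz": 200, "source": "uart"},
    0b10: {"clock_mhz": 400, "source": "ethernet"},
}

def select_boot_properties(pin_value, regions=BOOT_REGIONS):
    """Pick the region indicated by the pins and return its boot properties."""
    if pin_value not in regions:
        raise ValueError(f"no boot region for pin value {pin_value:#04b}")
    return regions[pin_value]

props = select_boot_properties(0b01)  # boot properties chosen by the pin strap
```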
20090172381 | ENHANCED NETWORK AND LOCAL BOOT OF UNIFIED EXTENSIBLE FIRMWARE INTERFACE IMAGES - Techniques and architectures to provide high-assurance image invocation in a pre-boot environment. These techniques may augment implementations of the Unified Extensible Firmware Interface (UEFI) to invoke UEFI images using Trusted Execution Technology (TXT). This can operate to combine pre-boot secure flows, such as UEFI image invocation, with the secure-launch instruction set extensions of TXT. This may entail combining the UEFI StartImage instruction with the SMX leaf SENTER instruction. This may operate to allow original equipment manufacturer (OEM) firmware to act as a guard that uses UEFI and TXT access control logic at the same instance to pass control to the operating system (OS). | 2009-07-02
20090172382 | Multi-function computer system - A multi-function computer system includes a host apparatus, a display and a boot mode switching unit. The display is electrically connected to the host apparatus. The boot mode switching unit electrically connected to the host apparatus may be switched between a first mode and a second mode. In the first mode, the boot mode switching unit controls the host apparatus to boot into a first operation environment. In the second mode, the boot mode switching unit controls the host apparatus to boot into a second operation environment different from the first operation environment. | 2009-07-02 |
20090172383 | BOOTING AN INTEGRATED CIRCUIT - An integrated circuit comprising: a processor; a plurality of external pins operatively coupled to the processor; and a permanently written memory operatively coupled to the processor, the memory having a plurality of regions each storing one or more respective boot properties for booting the processor. The processor is programmed to select one of the regions in dependence on an indication received via one or more of the external pins, to retrieve the one or more respective boot properties from the selected region, and to boot using the one or more retrieved boot properties. | 2009-07-02 |
20090172384 | SYSTEMS AND METHODS FOR CONFIGURING, UPDATING, AND BOOTING AN ALTERNATE OPERATING SYSTEM ON A PORTABLE DATA READER - Systems and methods are provided for updating configuration settings, updating an OS image, and booting an alternate OS on a portable data reader including a reading engine for reading data from an object. Configuration settings of a portable data reader may be updated by detecting whether a storage device having a set of updated configuration settings stored thereon has been coupled to the portable data reader and, if so, updating one or more configuration settings on the portable data reader with one or more of the updated configuration settings from the storage device. | 2009-07-02 |
20090172385 | ENABLING SYSTEM MANAGEMENT MODE IN A SECURE SYSTEM - Apparatuses, methods, and systems for enabling system management mode in a secure system are disclosed. In one embodiment, a processor includes sub-operating-system mode logic, virtual machine logic, and control logic. The sub-operating-system mode logic is to support a sub-operating-system mode. The virtual machine logic is to support virtualization. The control logic is to prevent virtualization from being enabled when the sub-operating-system mode is disabled. | 2009-07-02 |
20090172386 | Situation Sensitive Memory Performance - The present invention presents a non-volatile memory system that adapts its performance to one or more system-related situations. If a situation occurs in which the memory will require more than the allotted time to complete an operation, the memory can switch from its normal operating mode to a high-performance mode in order to complete the operation quickly enough. Conversely, if a situation arises in which reliability could be an issue (such as partial-page programming), the controller can switch to a high-reliability mode. In either case, once the triggering system situation has returned to normal, the memory reverts to normal operation. The detection of such situations can be used for both programming and data-relocation operations. An exemplary embodiment is based on firmware-programmable performance. | 2009-07-02
20090172387 | MANAGING DYNAMIC CONFIGURATION MODIFICATIONS IN A COMPUTER INFRASTRUCTURE - Data for a dynamic configuration of a set of producer components is stored in a set of component objects and a set of relationship objects. When an event is received indicating a change to the dynamic configuration, a component object and/or relationship object is updated to reflect the change. The component and/or relationship object(s) can be used to notify one or more listening components of modifications to the dynamic configuration. In this manner, listening components are only loosely coupled with producer components making any necessary adjustments to configuration changes easier to implement. | 2009-07-02 |
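The loose coupling this abstract describes is essentially the observer pattern over component and relationship objects. The sketch below is a minimal assumption-laden model: the event schema, the `ConfigModel` class, and its method names are invented for illustration.

```python
class ConfigModel:
    """Tracks producer components and their relationships; notifies listening
    components of changes without coupling them to the producers."""

    def __init__(self):
        self.components = {}        # component name -> attribute dict
        self.relationships = set()  # (source, target) pairs
        self.listeners = []

    def subscribe(self, callback):
        """Register a listening component's callback."""
        self.listeners.append(callback)

    def apply_event(self, event):
        """Update the component/relationship objects, then notify listeners."""
        kind = event["kind"]
        if kind == "add_component":
            self.components[event["name"]] = event.get("attrs", {})
        elif kind == "add_relationship":
            self.relationships.add((event["source"], event["target"]))
        for cb in self.listeners:
            cb(event)

seen = []
model = ConfigModel()
model.subscribe(seen.append)  # a listening component, loosely coupled
model.apply_event({"kind": "add_component", "name": "db", "attrs": {"port": 5432}})
model.apply_event({"kind": "add_relationship", "source": "web", "target": "db"})
```

Listeners only see events; they never touch the producer components directly, which is what makes configuration changes easy to absorb.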
20090172388 | PERSONAL GUARD - In some embodiments data input to an input device is encrypted before it is received by any software. Other embodiments are described and claimed. | 2009-07-02 |
20090172389 | SECURE CLIENT/SERVER TRANSACTIONS - In some embodiments a controller establishes a secured connection between a remote computer and a user input device and/or a user output device of a computer. Information is securely transmitted in both directions between the remote computer and the user input device and/or user output device in a manner such that a user of the user input device and/or the user output device securely interacts with the remote computer in a manner that cannot be maliciously interfered with by software running on the computer. Other embodiments are described and claimed. | 2009-07-02 |
20090172390 | PACKET-PARALLEL HIGH PERFORMANCE CRYPTOGRAPHY SYSTEMS AND METHODS - A cryptographic system | 2009-07-02
20090172391 | COMMUNICATION HANDOVER METHOD, COMMUNICATION MESSAGE PROCESSING METHOD, AND COMMUNICATION CONTROL METHOD - There is disclosed a technique whereby, in a case wherein a mobile node (MN) performs a handover between access points (APs) present on the links of different access routers (ARs), security is quickly established between the MN and the AP so as to reduce the possibility of a communication delay or disconnection due to the handover. According to this technique, before performing a handover, the MN | 2009-07-02
20090172392 | METHOD AND SYSTEM FOR TRANSFERRING INFORMATION TO A DEVICE - A system and method for transferring information include generating a public/private key pair for programming equipment and sending the programming equipment public key to a certificate authority. A programming equipment certificate is generated using the programming equipment public key and a private key of the certificate authority. The programming equipment certificate and a certificate authority certificate are sent to the programming equipment. Information is transferred to or from the programming equipment in response to an authentication using the programming equipment certificate and the certificate authority certificate. | 2009-07-02 |
20090172393 | Method And System For Transferring Data And Instructions Through A Host File System - A method for encrypting data may generate an encryption instruction and combine it with a payload of data to form a packet. The packet is associated with a command and passed to a host file system process. The packet, now associated with a second command, is received from the host file system process. The encryption instruction and the payload of data are extracted from the packet. At least a portion of the payload of data is encrypted based on the encryption instruction. A method for decrypting data may receive a packet and generate a decryption instruction. At least a portion of the packet is decrypted using at least the decryption instruction. A second packet comprising the decrypted packet is passed to a host file system process. A third packet comprising the decrypted packet is received from the host file system process. The decrypted packet is extracted from the third packet. | 2009-07-02
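The combine/extract/apply steps above can be sketched as follows. The packet layout (2-byte header length, JSON instruction header, raw payload) is an assumption, and a toy XOR transform stands in for a real cipher; the patent does not specify either.

```python
import json

def make_packet(instruction, payload):
    """Combine an encryption instruction with the payload into one packet."""
    header = json.dumps(instruction).encode()
    return len(header).to_bytes(2, "big") + header + payload

def split_packet(packet):
    """Extract the instruction and the payload from a received packet."""
    hlen = int.from_bytes(packet[:2], "big")
    instruction = json.loads(packet[2:2 + hlen])
    return instruction, packet[2 + hlen:]

def apply_instruction(instruction, payload):
    """Toy transform: XOR the instructed byte range with a key byte
    (a stand-in for a real cipher, which XOR is not)."""
    start, end, key = instruction["start"], instruction["end"], instruction["key"]
    body = bytearray(payload)
    for i in range(start, end):
        body[i] ^= key
    return bytes(body)

pkt = make_packet({"start": 0, "end": 4, "key": 0x5C}, b"secret-data")
instr, data = split_packet(pkt)          # as if received from the host file system
cipher = apply_instruction(instr, data)  # encrypt the instructed portion
```

Because the instruction rides inside the packet, it survives the trip through a host file system that treats the whole packet as opaque data.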
20090172394 | Assigning nonces for security keys - Secure communications may be implemented by transmitting packet data units with information sufficient to enable a receiving entity to reconstruct a nonce. That is, rather than transmitting all of the bits making up the nonce, some of the bits may be transmitted together with an identifier that enables the rest of the bits of the nonce to be obtained by the receiving entity. | 2009-07-02 |
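The partial-transmission idea above can be sketched as follows, under assumed parameters: a table mapping a short identifier to the fixed high-order part of the nonce, with only the identifier and the variable low-order 24 bits sent on the wire.

```python
# Hypothetical table shared by sender and receiver: identifier -> fixed
# high-order nonce bits. Only the identifier plus the low bits are transmitted.
PREFIXES = {0: 0xA5A5A5, 1: 0x5A5A5A}
LOW_BITS = 24

def pack(prefix_id, counter):
    """Sender side: transmit the identifier plus the low bits of the nonce."""
    return prefix_id, counter & ((1 << LOW_BITS) - 1)

def reconstruct(prefix_id, low):
    """Receiver side: rebuild the full nonce from identifier + partial bits."""
    return (PREFIXES[prefix_id] << LOW_BITS) | low

pid, low = pack(1, 0x000042)
nonce = reconstruct(pid, low)
```

The receiver recovers the complete nonce without all of its bits ever crossing the link, which is the bandwidth saving the abstract describes.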
20090172395 | System and Method for Service Virtualization Using a MQ Proxy Network - A system, method, and computer program product for transmitting message traffic within an MQ network having a plurality of MQ clients coupled to an MQ queue via at least one MQ queue manager and at least one MQ proxy server coupled to the plurality of MQ clients. The at least one MQ proxy server retrieves a message from a first MQ client coupled thereto, evaluates the message content, and forwards the message to the MQ queue via a designated MQ queue manager. If the destination MQ client is served by a second MQ proxy server, the originating MQ proxy server notifies the second MQ proxy server coupled to the second MQ client. The second MQ proxy server retrieves the message from the MQ queue through the designated MQ queue manager, evaluates the message content, and forwards the message to the second MQ client. If the first MQ client and the destination MQ client are served by the same MQ proxy server, then that MQ proxy server simply retrieves the message from the MQ queue through the designated MQ queue manager and forwards the message to the second MQ client. | 2009-07-02
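The proxy routing described above can be sketched with in-memory stand-ins. The `MQProxy` class, the shared list acting as the MQ queue, and the client registry are all invented for illustration; they model the notify-then-retrieve handoff between proxies, not any real MQ product API.

```python
class MQProxy:
    """Toy MQ proxy: forwards a client's message to the shared queue and, when
    the destination is served by another proxy, notifies that proxy to retrieve it."""

    def __init__(self, name, queue, clients, registry):
        self.name, self.queue = name, queue
        registry.update({c: self for c in clients})  # which proxy serves whom
        self.registry = registry
        self.delivered = []  # messages handed to clients this proxy serves

    def send(self, src, dst, body):
        self.queue.append((dst, body))     # via the designated queue manager
        self.registry[dst].retrieve(dst)   # notify the destination's proxy

    def retrieve(self, dst):
        """Pull the destination's message off the queue and deliver it."""
        for i, (d, body) in enumerate(self.queue):
            if d == dst:
                self.queue.pop(i)
                self.delivered.append((dst, body))
                return body

registry, queue = {}, []
p1 = MQProxy("p1", queue, {"client_a"}, registry)
p2 = MQProxy("p2", queue, {"client_b"}, registry)
p1.send("client_a", "client_b", "hello")  # crosses from p1's client to p2's
```

When sender and receiver share a proxy, `registry[dst]` resolves to the originating proxy itself, so the same code covers the single-proxy case in the abstract.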
20090172396 | SECURE INPUT - In some embodiments input information received at an input device is encrypted before it is sent to a computer to be coupled to the input device. Other embodiments are described and claimed. | 2009-07-02 |
20090172397 | IMS Security for Femtocells - A mobile station can be authenticated by, for example, sending a challenge to a mobile station, and receiving a first authentication response from the mobile station through a wireless link, the first authentication response being generated based on the challenge and an authentication key stored at the mobile station. A second authentication response is generated based on the first authentication response. The second authentication response is provided to an IMS network for authenticating the mobile station to enable the mobile station to access the IMS network. In some examples, an authentication response of the mobile station is carried in an SIP message sent from the femtocell to a server that can authenticate the mobile station or forward the authentication response to another server that can authenticate the mobile station. Authentication of the mobile station can be performed as an integrated part of or separate from a registration process. | 2009-07-02 |