49th week of 2009 patent application highlights part 72 |
Patent application number | Title | Published |
20090300279 | Signature Based Method for Sending Packets to Remote Systems - A method comprises generating a plurality of complex signatures, wherein each complex signature corresponds to and identifies a respective one of a plurality of virtual tape complexes, and wherein each virtual tape complex comprises one or more subsystems and a control store linked to the one or more subsystems. For each virtual tape complex, a copy of the corresponding signature is stored in the control store corresponding to the virtual tape complex. | 2009-12-03 |
20090300280 | DETECTING DATA MINING PROCESSES TO INCREASE CACHING EFFICIENCY - Methods and apparatus to detect a data mining process are presented. In one embodiment, the method comprises monitoring access of a process to a resource and classifying whether the process is a data mining process based on at least one of a plurality of monitored values, such as an access rate, an eviction rate, and an I/O consumption value. | 2009-12-03 |
20090300281 | Disk Controller Providing for the Auto-Transfer of Host-Requested-Data from a Cache Memory within a Disk Memory System - A disk-controller ( | 2009-12-03 |
20090300282 | REDUNDANT ARRAY OF INDEPENDENT DISKS WRITE RECOVERY SYSTEM - A redundant array of independent disks write recovery system includes: providing a logical drive having a disk drive that failed; rebooting a storage controller, coupled to the disk drive, after a controller error; and reading a write hole table, in the storage controller, for regenerating data on the logical drive. | 2009-12-03 |
20090300283 | Method and apparatus for dissolving hot spots in storage systems - Hot spots in a storage system may be located and dissolved in the smallest feasible time. A particular volume can be selected to be migrated from a hot spot with a minimum workload, and the most appropriate destination for receiving the migration is identified prior to beginning the migration. A management computer may monitor the load of each array group in the storage system in order to detect hot spots, and calculate estimated migration times for selecting a volume to be migrated from a hot spot according to the shortest estimated time. Furthermore, because the storage controller needs to re-write data that is updated in an already-migrated area by a host computer during the migration, choosing the smallest volume is not the only consideration taken into account. Write access rates by host computers to the volume to be migrated are taken into consideration when determining a candidate for migration. | 2009-12-03 |
20090300284 | Disk array apparatus and disk array apparatus control method - A journal write unit writes journal data into a third storage device. The journal data includes an identifier of a logical volume in a first storage device into which data has been written, information of a location in which the data is stored in the logical volume, update time which is current time acquired from a timing mechanism, and the data. A second write unit refers to update time of the journal data stored in the third storage device, selects journal data for which a difference between current time acquired from the timing mechanism and the update time is longer than a detection time stored in the third storage device, and writes the data into a place indicated by the location information, in a logical volume in the second storage device in the order of update time in the selected journal data. | 2009-12-03 |
20090300285 | COMPUTER SYSTEM, STORAGE SYSTEM AND METHOD FOR EXTENDING VOLUME CAPACITY - The object of the present invention is to prevent a storage range from being allocated to a volume from a disk drive that is inappropriate for the volume's use when the volume capacity is automatically extended. A computer system has a storage system including physical storage devices, a host computer, and a management computer. The storage system includes a plurality of kinds of physical storage devices, physically divides the volume into two or more segments, and records as constitution information the correspondence between each segment and the volume using the segment. The storage system also records the kind of physical storage device to be allocated to the volume, and selects a physical storage device of the recorded kind when allocating a segment as the host computer performs an I/O access to the volume. | 2009-12-03 |
20090300286 | METHOD FOR COORDINATING UPDATES TO DATABASE AND IN-MEMORY CACHE - A computer method and system of caching. In a multi-threaded application, different threads execute respective transactions accessing a data store (e.g. database) from a single server. The method and system represent the status of data store transactions using respective parameters (e.g. Future objects). | 2009-12-03 |
20090300287 | Method and apparatus for controlling cache memory - An apparatus for controlling a cache memory that stores therein data transferred from a main storing unit includes a computing processing unit that executes a computing process using data, a connecting unit that connects an input portion and an output portion of the cache memory, a control unit that causes data in the main storing unit to be transferred to the output portion of the cache memory through the connecting unit when the data in the main storing unit is input from the input portion of the cache memory into the cache memory, and a transferring unit that transfers data transferred by the control unit to the output portion of the cache memory, to the computing processing unit. | 2009-12-03 |
20090300288 | Write Combining Cache with Pipelined Synchronization - Systems and methods for pipelined synchronization in a write-combining cache are described herein. An embodiment for transmitting data to a memory to enable pipelined synchronization of a cache includes obtaining a plurality of synchronization events for transactions with said memory, calculating one or more matches between said events and said data stored in one or more cache-lines of said cache, storing event time stamps of events associated with said matches, generating one or more priority values based on said event time stamps, and concurrently transmitting said data to said memory based on said priority values. | 2009-12-03 |
20090300289 | Reducing back invalidation transactions from a snoop filter - In one embodiment, the present invention includes a method for receiving an indication of a pending capacity eviction from a caching agent, determining whether an invalidating writeback transaction from the caching agent is likely for a cache line associated with the pending capacity eviction, and if so moving a snoop filter entry associated with the cache line from a snoop filter to a staging area. Other embodiments are described and claimed. | 2009-12-03 |
20090300290 | Memory Metadata Used to Handle Memory Errors Without Process Termination - Embodiments of the invention provide an interrupt handler configured to distinguish between critical and non-critical unrecoverable memory errors, yielding different actions for each. Doing so may allow a system to recover from certain memory errors without having to terminate a running process. In addition, when an operating system critical task experiences an unrecoverable error, such a task may be acting on behalf of a non-critical process (e.g., when swapping out a virtual memory page). When this occurs, an interrupt handler may respond to a memory error with the same response that would result had the process itself performed the memory operation. Further, firmware may be configured to perform diagnostics to identify potential memory errors and alert the operating system before a memory region state change occurs, such that the memory error would become critical. | 2009-12-03 |
20090300291 | Implementing Cache Coherency and Reduced Latency Using Multiple Controllers for Memory System - A method and apparatus implement cache coherency and reduced latency using multiple controllers for a memory system, and a design structure is provided on which the subject circuit resides. A first memory controller uses a first memory as its primary address space, for storage and fetches. A second memory controller is also connected to the first memory. A second memory controller uses a second memory as its primary address space, for storage and fetches. The first memory controller is also connected to the second memory. The first memory controller and the second memory controller, for example, are connected together by a processor communications bus. A request and send sequence of the invention sends data directly to a requesting memory controller, eliminating the need to re-route data back through a responding controller and improving the latency of the data transfer. | 2009-12-03 |
20090300292 | Using criticality information to route cache coherency communications - In one embodiment, the present invention includes a method for receiving a cache coherency message in an interconnect router from a caching agent, mapping the message to a criticality level according to a predetermined mapping, and appending the criticality level to each flow control unit of the message, which can be transmitted from the interconnect router based at least in part on the criticality level. Other embodiments are described and claimed. | 2009-12-03 |
20090300293 | Dynamically Partitionable Cache - Methods and systems for dynamically partitioning a cache and maintaining cache coherency are provided. In an embodiment, a system for processing memory requests includes a cache and a cache controller configured to compare a memory address and a type of a received memory request to a memory address and a type, respectively, corresponding to a cache line of the cache to determine whether the memory request hits on the cache line. In another embodiment, a method for processing fetch memory requests includes receiving a memory request and determining if the memory request hits on a cache line of a cache by determining if a memory address and a type of the memory request match a memory address and a type, respectively, corresponding to a cache line of the cache. | 2009-12-03 |
20090300294 | UTILIZATION OF A STORE BUFFER FOR ERROR RECOVERY ON A STORE ALLOCATION CACHE MISS - A processor and cache is coupled to a system memory via a system interconnect. A first buffer circuit coupled to the cache receives one or more data words and stores the one or more data words in each of one or more entries. The one or more data words of a first entry are written to the cache in response to error free receipt. A second buffer circuit coupled to the cache has one or more entries for storing store requests. Each entry has an associated control bit that determines whether an entry formed from a first store request is a valid entry to be written to the system memory from the second buffer circuit. Based upon error-free receipt of the one or more data words, the associated control bit is set to a value that invalidates the entry in the second buffer circuit. | 2009-12-03 |
20090300295 | MECHANISM FOR MAINTAINING DETAILED TRACE INFORMATION RELEVANT TO THE CURRENT OPERATION BEING PROCESSED - A system, method, computer program product, and program storage device for storing trace information of a program is disclosed. Upon entering or calling a subroutine, a memory buffer is created. Whenever a nested subroutine is called inside the subroutine, a subordinate memory buffer is created. Upon completion of a subroutine execution, a corresponding memory buffer is deleted. When encountering an event (e.g., an error, a defect, a failure, a warning) during execution, all data in currently existing memory buffers are transferred to a secondary memory storage device (e.g., a disk). | 2009-12-03 |
20090300296 | Communication apparatus with data discard functions and control method therefor - In a communication apparatus, a write controller writes received data in a temporary memory which serves as short-time storage. A read controller reads data out of the temporary memory. A discard controller controls discard operation of the data read out of the temporary memory. | 2009-12-03 |
20090300297 | Data processing apparatus, memory controller, and access control method of memory controller - A data processing apparatus includes a memory which receives and outputs data with a predetermined data width, an operation circuit which outputs a read command or a write command to access the memory, and an access control circuit which replaces a part of first read data read from the memory with partial data, and outputs the partially replaced data as write data to the memory when receiving the write command and the partial data with a data width smaller than the predetermined data width associated with the write command, from the operation circuit. The access control circuit replaces a part of second read data which has been acquired in response to a read command outputted before, instead of the first read data, with the partial data, and outputs the partially replaced data as the write data if the write command has been outputted in connection with a read command outputted before the write command. | 2009-12-03 |
20090300298 | MEMORY PRESERVED CACHE TO PREVENT DATA LOSS - A method, system, and computer program product for preserving data in a storage subsystem having dual cache and dual nonvolatile storage (NVS) through a failover from a failed cluster to a surviving cluster is provided. A memory preserved indicator is initiated to mark tracks on a cache of the surviving cluster to be preserved, the tracks having an image in an NVS of the failed cluster. A destage operation is performed to destage the marked tracks. Subsequent to a determination that all of the marked tracks have been destaged, the memory preserved indicator is disabled to remove the mark from the tracks. If the surviving cluster reboots before all of the marked tracks have been destaged, the cache is verified as a memory preserved cache, the marked tracks are retained for processing while all unmarked tracks are removed, and the marked tracks are processed. | 2009-12-03 |
20090300299 | Dynamic Interleaving - Methods and apparatus provide for a Dynamic Interleaver to modify the interleaving distribution spanning physical memory modules. Specifically, dynamic interleaving provides the ability to increase the number of interleaved physical memory modules when a current interleaved group of memory locations is experiencing heavy use. By increasing the number of interleaved memory locations, a system can make optimal use of memory by allowing more parallel accesses to physical memory during the period of heavy utilization. However, if the current interleaved group of memory locations experiences low use, the Dynamic Interleaver can choose to interleave across fewer physical memory modules and apply power management techniques to those memory locations that are no longer being accessed. Prior to “re-interleaving” interleaved memory locations, the Dynamic Interleaver migrates data out of the current interleaved memory locations. After re-interleaving, the Dynamic Interleaver maps the data back into the re-interleaved memory locations. | 2009-12-03 |
20090300300 | MEMORY SHARING OF TIME AND FREQUENCY DE-INTERLEAVER FOR ISDB-T RECEIVERS - Time and frequency de-interleaving of interleaved data in an Integrated Services Digital Broadcasting Terrestrial (ISDB-T) receiver includes exactly one random access memory (RAM) buffer in the ISDB-T receiver that performs both time and frequency de-interleaving of the interleaved data, and a buffer address calculation module for generating buffer addresses in the buffer. By combining the time and frequency de-interleaver buffers into a single shared RAM, the system reduces the memory size required for performing de-interleaving in an ISDB-T receiver. | 2009-12-03 |
20090300301 | OFFLOADING STORAGE OPERATIONS TO STORAGE HARDWARE - In a computer system with a disk array that has physical storage devices arranged as logical storage units and is capable of carrying out hardware storage operations on a per logical storage unit basis, the hardware storage operations can be carried out on a per-file basis using various primitives. These primitives include instructions for zeroing file blocks, cloning file blocks, and deleting file blocks, and these instructions operate on one or more files defined in a blocklist that identifies the locations in the logical storage units to which the files map. | 2009-12-03 |
20090300302 | OFFLOADING STORAGE OPERATIONS TO STORAGE HARDWARE USING A SWITCH - In a computer system with a disk array that has physical storage devices arranged as logical storage units and is capable of carrying out hardware storage operations on a per logical storage unit basis, a switch is provided to offload storage operations from a file system to storage hardware. The switch translates primitives used for performing storage operations into commands executable by the physical storage devices so that the data moving portion of the storage operations can be offloaded from the file system to the storage devices. | 2009-12-03 |
20090300303 | Ranking and Prioritizing Point in Time Snapshots - A storage area network system having a data storage means for storing computer data, a storage manager routine running on a client, the storage manager routine having functional elements for directing snapshots to be taken of the computer data on the data storage means, and a snapshot ranking manager for determining characteristics of the snapshots, and for selectively deleting given ones of the snapshots based at least in part on the characteristics of the snapshots. The characteristics of the snapshots might include the type of application that uses the data in the logical volume from which the snapshots were taken, or mission critical aspects of the data. | 2009-12-03 |
20090300304 | MANAGING CONSISTENCY GROUPS USING HETEROGENEOUS REPLICATION ENGINES - Provided are a method, system, and article of manufacture for controlling a first storage system receiving commands from first and second managers to create a consistency group with a second storage system. Host writes are received at the first storage system, wherein the first storage system includes a first storage system primary site and a first storage system secondary site. The first storage system sends the host writes from the first storage system primary site to the first storage system secondary site. Host write operations are quiesced at the first storage system in response to a first command from a first manager. Host write operations are resumed at the first storage system in response to receiving a second command from the first manager. The first storage system receives a run command with a marker, wherein the marker indicates a cycle number to control the cycles of the first and second storage systems. The first storage system sends the marker from the first storage system primary site to the first storage system secondary site. The first storage system sends the marker to a second manager. The first storage system applies the host writes to the first storage system secondary site. The first storage system sends a first message to the first storage system primary site from the first storage system secondary site after completing the applying of the host writes. The first storage system sends a second message to the first manager indicating whether confirmation was received from the first storage system secondary site that the host writes were applied. | 2009-12-03 |
20090300305 | METHOD FOR CREATING CONSISTENT BACKUP IMAGE OF A STORAGE VOLUME WITHOUT REQUIRING A SNAPSHOT - Method for creating a consistent image, on a destination volume, of a target volume that remains in production use while the image is being created, without requiring the use of a snapshot. | 2009-12-03 |
20090300306 | DATA COPYING METHOD - A method for controlling a switch apparatus connectable to a host and a storage device including first and second areas, the method includes: establishing a schedule for copying data stored in the first area of the storage device into the second area of the storage device; monitoring a state of access by the host to the storage device; carrying out copying of the data stored in the first area into the second area while the monitored state of access by the host allows copying of the data from the first area into the second area; and enhancing copying of any remaining portion of the data from the first area into the second area if any portion of the data remains when the time set by the schedule has expired. | 2009-12-03 |
20090300307 | PROTECTION AND SECURITY PROVISIONING USING ON-THE-FLY VIRTUALIZATION - A virtualization layer is inserted between (i) an operating system of a computer system, and (ii) at least one of a memory module and a storage module of the computer system. At least one of read access and write access to at least one portion of the at least one of a memory module and a storage module is controlled, with the virtualization layer. The insertion of the virtualization layer is accomplished in an on-the-fly manner (that is, without rebooting the computer system). An additional aspect includes controlling installation of a security program from the virtualization layer. | 2009-12-03 |
20090300308 | Partitioning of a Multiple Logic-Unit-Number SCSI Target - A method, computer program product and computer system for assigning logic storage entities of a storage device to multiple partitions of a computer system, which includes associating each logic storage entity to one of the partitions that are allowed to access the logic storage entity; configuring a partition supervisor to control accesses of the partitions to the logic storage entities, so that the partitions can share resources when accessing the logic storage entities; and providing an interceptor in the partition supervisor, so that a request or a response between a select logic storage entity and a select partition is intercepted if the select partition is not allowed to access the select storage entity. | 2009-12-03 |
20090300309 | STORAGE APPARATUS - Upon receiving an access request from a server, a microprocessor allocates a free slot as a data storage destination that is different from the LU# and LBA designated as a storage destination of user data, stores user data and data identifying information for identifying the user data in the free slot, and zero-clears the pre-updated data slot designated with the LU# and LBA. During a subsequent read access, the microprocessor accesses the data slot and, if the read data identifying information and the data identifying information designated in the read access from the server coincide, transfers this read data to the server as correct data, and, if the read data identifying information and the data identifying information designated in the read access from the server do not coincide, performs processing for recovering correct data based on the read data identifying information. | 2009-12-03 |
20090300310 | Memory Architecture - A memory architecture is presented. The memory architecture comprises a first memory and a second memory. The first memory has at least a bank with a first width addressable by a single address. The second memory has a plurality of banks of a second width, said banks being addressable by components of an address vector. The second width is at most half of the first width. The first memory and the second memory are coupled selectively and said first memory and second memory are addressable by an address space. The invention further provides a method for transposing a matrix using the memory architecture, comprising the following steps. In the first step, the matrix elements are moved from the first memory to the second memory. In the second step, a set of elements arranged along a warped diagonal of the matrix is loaded into a register. In the fourth step, the set of elements stored in the register is rotated until the element originating from the first row of the matrix is in the first location of the register. In the fifth step, the rotated set of elements is stored in the second memory to obtain a transposed warped diagonal. The second to fifth steps are repeated with the subsequent warped diagonals until the matrix transposition is complete. | 2009-12-03 |
20090300311 | SELECTIVE REGISTER RESET - The present disclosure includes methods, devices, modules, and systems for selective register reset. One method embodiment includes receiving an indication of a die and a plane associated with at least one address cycle. Such a method can also include selectively resetting a particular register of a number of registers, the particular register corresponding to the plane and the die. | 2009-12-03 |
20090300312 | INSTANT HARDWARE ERASE FOR CONTENT RESET AND PSEUDO-RANDOM NUMBER GENERATION - Systems and methods that facilitate securing data associated with a memory from security breaches are presented. A memory component includes nonvolatile memory, and a secure memory component (e.g., volatile memory) used to store information such as secret information related to secret processes or functions (e.g., cryptographic functions). A security component detects security-related events, such as security breaches or completion of security processes or functions, associated with the memory component and in response to a security-related event, the security component can transmit a reset signal to the secure memory component to facilitate efficiently erasing or resetting desired storage locations in the secure memory component in parallel and in a single clock cycle to facilitate data security. A random number generator component can facilitate generating random numbers after a reset based on a change in scrambler keys used by a scrambler component to descramble data read from the reset storage locations. | 2009-12-03 |
20090300313 | MEMORY CLEARING APPARATUS FOR ZERO CLEARING - A memory clear apparatus includes a processor that issues a memory clear request including a zero clear target area on a memory area and a zero clear target size, and a memory clearing circuit that receives the memory clear request from the processor, performs zero clearing on the zero clear target area based on the memory clear request, and transmits a memory clear completion notification to the processor. | 2009-12-03 |
20090300314 | MEMORY SYSTEMS AND METHODS FOR CONTROLLING THE TIMING OF RECEIVING READ DATA - Embodiments of the present invention provide memory systems having a plurality of memory devices sharing an interface for the transmission of read data. A controller can identify consecutive read requests sent to different memory devices. To avoid data contention on the interface, for example, the controller can be configured to delay the time until read data corresponding to the second read request is placed on the interface. | 2009-12-03 |
20090300315 | Reserve Pool Management in Virtualized Storage Systems - An apparatus for managing pooled real storage having a usable real storage pool and a reserve real storage pool in a virtualized storage system, comprises an extent controller for allocating and freeing storage extents in said usable real storage pool; a storage use monitor for monitoring storage use in said usable real storage pool; and a reserve pool manager responsive to said storage use monitor for transferring storage extents between said usable real storage pool and said reserve real storage pool. | 2009-12-03 |
20090300316 | COMPUTER SYSTEM, MANAGEMENT COMPUTER AND STORAGE SYSTEM, AND STORAGE AREA ALLOCATION AMOUNT CONTROLLING METHOD - Provided are a computer system, a management computer, a storage system, and a storage area allocation amount controlling method for improving I/O performance of the host computer. The computer system comprises a storage system with one or more storage devices having storage areas, a host computer which uses a storage area of the storage device, and a management computer for dynamically allocating the storage area in response to an input/output request from the host computer. The management computer monitors dynamic allocation of real storage areas to a storage area in the storage system, and calculates an allocation increment amount for the allocated storage area based on the allocation frequency and the total amount of allocation. | 2009-12-03 |
20090300317 | SYSTEM AND METHOD FOR OPTIMIZING INTERRUPT PROCESSING IN VIRTUALIZED ENVIRONMENTS - An approach is provided that retrieves a time spent value corresponding to a selected partition that is selected from a group of partitions included in a virtualized environment running on a computer system. The virtualized environment is provided by a Hypervisor. The time spent value corresponds to an amount of time the selected partition has spent processing interrupts. A number of virtual CPUs have been assigned to the selected partition. The time spent value (e.g., a percentage of the time that the selected partition spends processing interrupts) is compared to one or more interrupt threshold values. If the comparison reveals that the time that the partition is spending processing interrupts exceeds a threshold, then the number of virtual CPUs assigned to the selected partition is increased. | 2009-12-03 |
20090300318 | ADDRESS CACHING STORED TRANSLATION - Systems and/or methods that facilitate logical block address (LBA) to physical block address (PBA) translations associated with a memory component(s) are presented. The disclosed subject matter employs an optimized block address (BA) component that can facilitate caching the LBA to PBA translations within a memory controller component based in part on predetermined optimization criteria to facilitate improving the access of data associated with the memory component. The predetermined optimization criteria can relate to a length of time since an LBA has been accessed, a number of times the LBA has been accessed, a data size of data related to an LBA, and/or other factors. The LBA to PBA translations can be utilized to facilitate accessing the LBA and/or associated data using the cached translation, instead of performing various functions to determine the translation. | 2009-12-03 |
20090300319 | APPARATUS AND METHOD FOR MEMORY STRUCTURE TO HANDLE TWO LOAD OPERATIONS - An apparatus and method to increase memory bandwidth is presented. In one embodiment, the apparatus comprises a load array having: a first array to store a plurality of load operation entries and a second array to store a second plurality of load operation entries. The apparatus further comprises: a store array having a plurality of store operation entries; a first address generation unit coupled to send a linear address of a first load operation to the first array and to send a linear address of a first store operation to the store array; and a second address generation unit coupled to send a linear address of a second load operation to the second array and to send a linear address of a second store operation to the store array. | 2009-12-03 |
20090300320 | PROCESSING SYSTEM WITH LINKED-LIST BASED PREFETCH BUFFER AND METHODS FOR USE THEREWITH - A processing device includes a memory and a processor that generates a plurality of read commands for reading read data from the memory and a plurality of write commands for writing write data to the memory. A prefetch memory interface prefetches prefetch data to a prefetch buffer, retrieves the read data from the prefetch buffer when the read data is included in the prefetch buffer, and retrieves the read data from the memory when the read data is not included in the prefetch buffer, wherein the prefetch buffer is managed via a linked list. | 2009-12-03 |
20090300321 | METHOD AND APPARATUS TO MINIMIZE METADATA IN DE-DUPLICATION - The invention provides a method for reducing identification of chunk portions in data de-duplication. The method includes detecting sequences of stored identification of chunk portions of at least one data object, indexing the detected stored identification of chunk portions based on a sequence type, encoding first repeated sequences of the stored identifications with a first encoding, encoding second repeated sequences of the stored identifications with a second encoding, and avoiding repeated stored identifications of chunk portions. | 2009-12-03 |
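One simple way to avoid storing repeated chunk identifications, as the abstract describes, is run-length encoding of the identifier sequence. This is a minimal sketch of the idea, not the patent's encoding scheme; a real de-duplication system would also detect longer repeating subsequences and apply different encodings per sequence type.

```python
def encode_chunk_ids(ids):
    """Collapse repeated runs of chunk identifiers into (id, count)
    pairs so repeated identifications are stored once."""
    encoded = []
    for cid in ids:
        if encoded and encoded[-1][0] == cid:
            encoded[-1] = (cid, encoded[-1][1] + 1)
        else:
            encoded.append((cid, 1))
    return encoded

def decode_chunk_ids(encoded):
    """Expand (id, count) pairs back into the original sequence."""
    return [cid for cid, n in encoded for _ in range(n)]
```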
20090300322 | ABUSE DETECTION USING DISTRIBUTED CACHE - Abuse of a content-sharing service is detected by an arrangement in which an in-memory cache is distributed among a plurality of nodes, such as front-end web servers, and which caches each item accessed by users of the service as a single instance in the distributed cache. Associated with each cached item is a unit of metadata which functions as a counter that is automatically incremented each time the item is served from the distributed cache. Because abusive items often tend to become quickly popular for downloading, when the counter exceeds a predetermined threshold over a given time interval, it is indicative of an access rate that makes the item a candidate for being deemed abusive. A reference to the item and its access count are responsively written to a persistent store such as a log file or database. | 2009-12-03 |
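The per-item counter logic in this abstract (increment on each serve, flag when the count exceeds a threshold within a time window) can be sketched as below. The class name, threshold, and window are illustrative; the patent's counters live as metadata in the distributed cache itself.

```python
import time

class AbuseDetector:
    """Sketch of the access counter described above: each item carries a
    count incremented on every serve, and an item whose count exceeds a
    threshold within a time window is flagged as an abuse candidate."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        self.counters = {}  # item_id -> (window_start, count)

    def record_access(self, item_id, now=None):
        now = time.time() if now is None else now
        start, count = self.counters.get(item_id, (now, 0))
        if now - start > self.window:   # window expired: start over
            start, count = now, 0
        count += 1
        self.counters[item_id] = (start, count)
        # True means: write the item reference and count to the
        # persistent store (log file or database) for review.
        return count > self.threshold
```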
20090300323 | Vector Processor System - A vector processing system provides high performance vector processing using a System-On-a-Chip (SOC) implementation technique. One or more scalar processors (or cores) operate in conjunction with a vector processor, and the processors collectively share access to a plurality of memory interfaces coupled to Dynamic Random Access read/write Memories (DRAMs). In typical embodiments the vector processor operates as a slave to the scalar processors, executing computationally intensive Single Instruction Multiple Data (SIMD) codes in response to commands received from the scalar processors. The vector processor implements a vector processing Instruction Set Architecture (ISA) including machine state, instruction set, exception model, and memory model. | 2009-12-03 |
20090300324 | ARRAY TYPE PROCESSOR AND DATA PROCESSING SYSTEM - In data path means, processor elements individually execute data processing in accordance with command codes described in a computer program, and switching elements individually control a connection relationship to switch among a plurality of processor elements in accordance with the command codes. When an access to an external memory is made from the data path means, slave memory means generates event data indicative of a task change while temporarily holding access information for executing the access with a delay, and executes the access in place of the data path means. Task changing means changes a task to be executed by the data path means when event data indicative of a task change is generated by the slave memory means. | 2009-12-03 |
20090300325 | DATA PROCESSING SYSTEM, APPARATUS AND METHOD FOR PERFORMING FRACTIONAL MULTIPLY OPERATIONS - A data processing system, apparatus and method for performing fractional multiply operations is disclosed. The system includes a memory that stores instructions for SIMD operations and a processing core. The processing core includes registers that store operands for the fractional multiply operations. A coprocessor included in the processing core performs the fractional multiply operations on the operands and stores the result in a destination register that is also included in the processing core. | 2009-12-03 |
20090300326 | SYSTEM, METHOD AND COMPUTER PROGRAM FOR TRANSFORMING AN EXISTING COMPLEX DATA STRUCTURE TO ANOTHER COMPLEX DATA STRUCTURE - A method (system and computer program product) performs facet classification synthesis to relate concepts represented by concept definitions defined in accordance with a faceted data set comprising facets, facet attributes, and facet attributes hierarchies. Dimensional concept relationships are expressed between the concept definitions. Two concept definitions are determined to be related in a particular dimensional concept relationship by examining whether at least one of explicit relationships and implicit relationships exist in the faceted data set between the respective facet attributes of the two concept definitions. | 2009-12-03 |
20090300327 | EXECUTION ENGINE - The execution engine is a new organization for a digital data processing apparatus, suitable for highly parallel execution of structured fine-grain parallel computations. Possible applications include many types of digital signal processing computations, such as filtering, convolution, and deconvolution, as well as many types of linear algebra operators, such as iterative and direct solvers, singular value decomposition, and constraint optimization. The invention improves energy efficiency of these structured parallel operators as compared to a regular data flow or von Neumann computer. | 2009-12-03 |
20090300328 | Aligning Protocol Data Units - An apparatus for receiving one or more protocol data units (PDUs) from a word aligned queue including a media access control (MAC) physical-layer (PHY) coprocessor (MPC) logically residing between a physical-layer controller and a media access controller (MAC) processor. The MPC is configured to access a reception physical-layer queue storing a burst, such that the reception physical-layer queue includes a plurality of word lines. The burst includes one or more PDUs that each occupy one or more word lines of the reception physical-layer queue, such that a particular word line stores a portion of a first PDU and a portion of a second PDU. The MPC is also configured to receive from the reception physical-layer queue the first PDU, including the portion of the first PDU stored in that particular word line. | 2009-12-03 |
20090300329 | VOLTAGE DROOP MITIGATION THROUGH INSTRUCTION ISSUE THROTTLING - A system and method for providing a digital real-time voltage droop detection and subsequent voltage droop reduction. A scheduler within a reservation station may store a weight value for each instruction corresponding to node capacitance switching activity for the instruction derived from pre-silicon power modeling analysis. For instructions picked with available source data, the corresponding weight values are summed together to produce a local current consumption value and this value is summed with any existing global current consumption values from corresponding schedulers of other processor cores yielding an activity event. The activity event is stored. Hashing functions within the scheduler are used to determine both a recent and an old activity average using the calculated activity event and stored older activity events. Instruction issue throttling occurs if either a difference between the old activity average and the recent activity average exceeds a first threshold or the recent activity average exceeds a second threshold. | 2009-12-03 |
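The throttling decision described here, comparing a recent activity average against an older one and against an absolute ceiling, can be sketched as below. Window sizes and thresholds are invented for illustration; the patent computes the averages with hashing functions in hardware, which this plain-Python sketch does not model.

```python
class IssueThrottle:
    """Sketch of droop-driven issue throttling: per-cycle activity is the
    sum of the weights of issued instructions, and issue is throttled when
    the recent average spikes relative to the older average or exceeds an
    absolute limit. All parameters here are illustrative."""

    def __init__(self, recent_n=4, old_n=16, delta_limit=10.0, abs_limit=40.0):
        self.events = []          # stored activity events, one per cycle
        self.recent_n, self.old_n = recent_n, old_n
        self.delta_limit, self.abs_limit = delta_limit, abs_limit

    def should_throttle(self, issued_weights):
        # This cycle's activity event: summed weight values of the
        # instructions picked for issue.
        self.events.append(sum(issued_weights))
        recent = self.events[-self.recent_n:]
        old = self.events[-self.old_n:]
        recent_avg = sum(recent) / len(recent)
        old_avg = sum(old) / len(old)
        # Throttle on a sudden activity spike or a sustained high level.
        return (recent_avg - old_avg > self.delta_limit
                or recent_avg > self.abs_limit)
```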
20090300330 | DATA PROCESSING METHOD AND SYSTEM BASED ON PIPELINE - A data processing system and method are disclosed. The system comprises an instruction-fetch stage, where an instruction is fetched and a specific instruction is input into a decode stage; a decode stage, where said specific instruction indicates that contents of a register in a register file are used as an index, and the register file entry pointed to by said index is then accessed based on said index; and an execution stage, where the access result of said decode stage is received and computations are performed according to that access result. | 2009-12-03 |
20090300331 | IMPLEMENTING INSTRUCTION SET ARCHITECTURES WITH NON-CONTIGUOUS REGISTER FILE SPECIFIERS - There are provided methods and computer program products for implementing instruction set architectures with non-contiguous register file specifiers. A method for processing instruction code includes processing a fixed-width instruction of a fixed-width instruction set using a non-contiguous register specifier of a non-contiguous register specification. The fixed-width instruction includes the non-contiguous register specifier. | 2009-12-03 |
20090300332 | NON-DESTRUCTIVE SIDEBAND READING OF PROCESSOR STATE INFORMATION - A processor receives a command via a sideband interface on the processor to read processor state information, e.g., CPUID information. The sideband interface provides the command information to a microcode engine in the processor that executes the command to retrieve the designated processor state information at an appropriate instruction boundary and retrieves the processor state information. That processor information is stored in local buffers in the sideband interface to avoid modifying processor state. After the microcode engine completes retrieval of the information and the sideband interface command is complete, execution returns to the normal flow in the processor. Thus, the processor state information may be obtained non-destructively during processor runtime. | 2009-12-03 |
20090300333 | HARDWARE SUPPORT FOR WORK QUEUE MANAGEMENT - The claimed matter provides systems and/or methods that effectuate utilization of fine-grained concurrency in parallel processing and efficient management of established memory structures. The system can include devices that establish memory structures associated with individual processors that can comprise a parallel processing phalanx. The system can thereafter utilize various enqueuing and/or dequeuing directives to add or remove work descriptors to or from the memory structures individually associated with each of the individual processors thereby providing improved work flow synchronization amongst the processors that comprise the parallel processing complex. | 2009-12-03 |
20090300334 | Method and Apparatus for Loading Data and Instructions Into a Computer - A computer array ( | 2009-12-03 |
20090300335 | Execution Unit With Inline Pseudorandom Number Generator - A circuit arrangement and method couple a hardware-based pseudorandom number generator (PRNG) to an execution unit in such a manner that pseudorandom numbers generated by the PRNG may be selectively output to the execution unit for use as an operand during the execution of instructions by the execution unit. A PRNG may be coupled to an input of an operand multiplexer that outputs to an operand input of an execution unit so that operands provided by instructions supplied to the execution unit are selectively overridden with pseudorandom numbers generated by the PRNG. Furthermore, overridden operands provided by instructions supplied to the execution unit may be used as seed values for the PRNG. In many instances, an instruction executed by an execution unit may be able to perform an arithmetic operation using both an operand specified by the instruction and a pseudorandom number generated by the PRNG during the execution of the instruction, so that the generation of the pseudorandom number and the performance of the arithmetic operation occur during a single pass of an execution unit. | 2009-12-03 |
20090300336 | Microprocessor with highly configurable pipeline and executional unit internal hierarchal structures, optimizable for different types of computational functions - The invention resides in a flexible data pipeline structure for accommodating software computational instructions for varying application programs and having a programmable embedded processor with internal pipeline stages the order and length of which varies as fast as every clock cycle based on the instruction sequence in an application program preloaded into the processor, and wherein the processor includes a data switch matrix selectively and flexibly interconnecting pluralities of mathematical execution units and memory units in response to said instructions, and wherein the execution units are configurable to perform operations at different precisions of multi-bit arithmetic and logic operations and in a multi-level hierarchical architecture structure. | 2009-12-03 |
20090300337 | Instruction set design, control and communication in programmable microprocessor cores and the like - Improved instruction set and core design, control and communication for programmable microprocessors is disclosed, involving the strategy for replacing centralized program sequencing in present-day and prior art processors with a novel distributed program sequencing wherein each functional unit has its own instruction fetch and decode block, and each functional unit has its own local memory for program storage; and wherein computational hardware execution units and memory units are flexibly pipelined as programmable embedded processors with reconfigurable pipeline stages of different order in response to varying application instruction sequences that establish different configurations and switching interconnections of the hardware units. | 2009-12-03 |
20090300338 | AGGRESSIVE STORE MERGING IN A PROCESSOR THAT SUPPORTS CHECKPOINTING - Embodiments of the present invention provide a processor that merges stores in an N-entry first-in-first-out (FIFO) store queue. In these embodiments, the processor starts by executing instructions before a checkpoint is generated. When executing instructions before the checkpoint is generated, the processor is configured to perform limited or no merging of stores into existing entries in the store queue. Then, upon detecting a predetermined condition, the processor is configured to generate a checkpoint. After generating the checkpoint, the processor is configured to continue to execute instructions. When executing instructions after the checkpoint is generated, the processor is configured to freely merge subsequent stores into post-checkpoint entries in the store queue. | 2009-12-03 |
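The checkpoint-gated merging policy in this abstract can be sketched with a small FIFO store queue: no merging before the checkpoint, free merging into post-checkpoint entries afterwards. The entry format and names are illustrative, and a real queue would bound its N entries and handle partial-width overlapping stores.

```python
class StoreQueue:
    """Toy FIFO store queue illustrating checkpoint-gated merging:
    before a checkpoint, every store gets its own entry; after the
    checkpoint, a store to an address already present in a
    post-checkpoint entry is merged into that entry instead."""

    def __init__(self):
        self.entries = []        # FIFO of [addr, data]
        self.checkpoint = None   # index of first post-checkpoint entry

    def take_checkpoint(self):
        # Everything before this index must stay intact so the
        # checkpoint can be restored; only later entries may merge.
        self.checkpoint = len(self.entries)

    def store(self, addr, data):
        if self.checkpoint is not None:
            # Freely merge, but only into post-checkpoint entries.
            for entry in self.entries[self.checkpoint:]:
                if entry[0] == addr:
                    entry[1] = data
                    return
        self.entries.append([addr, data])
```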
20090300339 | LSI FOR IC CARD - To prevent exposure or tampering of data by an illegal access to a memory of an LSI, a ROM ( | 2009-12-03 |
20090300340 | Accuracy of Correlation Prefetching Via Block Correlation and Adaptive Prefetch Degree Selection - A method for prefetching data and/or instructions from a main memory to a cache memory may include generating control flow information by storing respective information for each retired branch instruction. The method may further include storing respective one or more cache miss addresses for each retired instruction that incurs one or more cache misses, with the respective one or more cache miss addresses corresponding respectively to the one or more cache misses. A correlation table may be maintained based on the generated control flow information and the stored cache miss addresses. Each respective correlation table entry may correspond to a respective index, and may contain a respective tag and a respective correlation list. The correlation list may consist of a specified number of cache miss addresses that most frequently follow the cache miss address used in generating the index to which the respective correlation table entry corresponds. A prefetch operation may be performed for each cache miss based on the contents of the correlation table entry corresponding to the index generated using a combination of bits of a given cache miss address corresponding to the cache miss, and at least a subset of bits of the program control flow information corresponding to the given cache miss address. | 2009-12-03 |
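The correlation-table idea above, keeping for each miss the successor miss addresses that most frequently follow it, can be sketched as follows. For simplicity this sketch indexes the table by the miss address alone, whereas the patent's index combines address bits with control-flow history; the list length plays the role of the prefetch degree.

```python
from collections import Counter, defaultdict

class CorrelationTable:
    """Illustrative correlation prefetcher: record which miss addresses
    follow which, and propose the most frequent followers as prefetch
    candidates on the next miss to that address."""

    def __init__(self, degree=2):
        self.successors = defaultdict(Counter)  # miss -> follower counts
        self.degree = degree                    # prefetch degree
        self.last_miss = None

    def record_miss(self, addr):
        # Each miss is a follower of the previous one in program order.
        if self.last_miss is not None:
            self.successors[self.last_miss][addr] += 1
        self.last_miss = addr

    def prefetch_candidates(self, addr):
        # The `degree` most frequent followers of this miss address.
        return [a for a, _ in self.successors[addr].most_common(self.degree)]
```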
20090300341 | SYSTEM AND METHOD FOR AUTOMATIC CONFIGURATION OF PORTAL COMPOSITE APPLICATIONS - The present invention is directed to the automatic configuration of portal composite applications. A method for automatic configuration of a portal composite application including a portal composite application infrastructure, wherein configuration parameters are managed within a composite application interface of the portal, which interface defines a runtime behavior of instances of the composite application within a predetermined range of variability, and wherein each parameter defines a respective point of variability, includes: storing a collection of parameter values for each of the points of variability; defining a functional component cooperating with the composite application and having read access to the collection of parameter values; invoking the functional component after or at instantiation time of the composite application, yielding a configuration parameter value; including the configuration parameter value into a control for an instance of the composite application; and automatically configuring the instance of the composite application with the included configuration parameter value. | 2009-12-03 |
20090300342 | APPARATUS, SYSTEM, AND METHOD FOR RESETTING AND BYPASSING MICROCONTROLLER STATIONS - An apparatus, system, and method are disclosed for resetting and bypassing microcontroller stations. A command module asserts and de-asserts a reset line in response to a command. A reset module resets a microcontroller station if the command module asserts and de-asserts the reset line within a time interval. In addition, the reset module bypasses the microcontroller station if the command module asserts and holds the reset line for a time period exceeding the time interval. | 2009-12-03 |
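The assert/de-assert timing protocol this abstract describes (a short pulse resets the station, a held assertion bypasses it) can be sketched as below. Times are passed in explicitly so the logic stays testable, and the one-second interval is an invented value; a real implementation would act on the bypass while the line is still held rather than at de-assert.

```python
class ResetLine:
    """Sketch of the reset/bypass decision: a pulse no longer than the
    interval resets the microcontroller station, while a longer hold
    bypasses it. Names and the decision point are illustrative."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self.asserted_at = None
        self.last_action = None

    def assert_line(self, now):
        self.asserted_at = now

    def deassert_line(self, now):
        held = now - self.asserted_at
        self.last_action = 'reset' if held <= self.interval else 'bypass'
        self.asserted_at = None
        return self.last_action
```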
20090300343 | METHOD AND APPARATUS FOR CHANGING BIOS PARAMETERS VIA A HOT KEY - An apparatus for changing BIOS parameters via a hot key, including a control unit, a microprocessor, a first memory, a second memory, a third memory and a keyboard. The first memory saves BIOS code while the third memory saves N parameter banks of BIOS. When the apparatus performs a keyboard-scanning process during power-on, the apparatus determines whether at least one hot key is triggered. If the hot key is triggered, the apparatus selects one of the N parameter banks. Then the BIOS performs a corresponding operation based on the selected parameter bank. The invention provides a method for changing BIOS parameters via a hot key through the apparatus. | 2009-12-03 |
20090300344 | Device and Method for Identifying a Certificate for Multiple Identifies of a User - A device and method associates a certificate with a first recipient identity. The method comprises receiving the first recipient identity of a user. The method comprises associating the first recipient identity of the user with a second recipient identity of the user. The second recipient identity is associated with a certificate so that subsequent transmissions of data to the first recipient identity encrypts the data according to specifications of the certificate. | 2009-12-03 |
20090300345 | Concept for Client Identification and Authorization in an Asynchronous Request Dispatching Environment - The present invention provides client and server identity validation in an asynchronous request dispatching environment with client-side aggregation. An application server receives an asynchronous include request from a client. A first unique identifier associating the client with the asynchronous include is generated and sent to a results server. A second unique identifier identifying the results server is generated and sent to the application server. Results of the asynchronous include are stored in the results server. The application server sends the first and second unique identifiers to the client, which polls the results server and sends the second unique identifier to the results server. The results server uses the second unique identifier to verify the identity of the client. The results server sends the first unique identifier to the client. The client uses the first unique identifier to validate the identity of the results server. | 2009-12-03 |
20090300346 | Device and Method for Identifying Certificates - A device and method identifies a certificate. The method comprises determining, by a transmitter of data, an identity of a recipient of the data. The method comprises identifying a certificate associated with the identity. The identifying includes a local search and a remote search. The method comprises encrypting the data according to the certificate prior to transmission. | 2009-12-03 |
20090300347 | SET MEMBERSHIP PROOFS IN DATA PROCESSING SYSTEMS - A method and apparatus for proving and a method and apparatus for verifying that a secret value is a member of a predetermined set of values. The proving mechanism receives a set of signatures which has respective values in the predetermined set signed using a private key. The proving mechanism sends to the verifying mechanism a commitment on the secret value of the proving mechanism. The proving mechanism and verifying mechanism then communicate to implement a proof of knowledge protocol demonstrating knowledge by the proving mechanism of a signature on the secret value committed to in the commitment, thus proving that the secret value is a member of the predetermined set. | 2009-12-03 |
20090300348 | PREVENTING ABUSE OF SERVICES IN TRUSTED COMPUTING ENVIRONMENTS - Methods and systems for regulating services provided by a first computing entity, such as a server, to a second computing entity, such as a client are described. A first entity receives a request for a service from a second entity over a network. The first entity determines whether the second entity has a trusted agent by examining an attestation report from the second entity. The first entity transmits a message to the second entity. The trusted agent on the second entity may receive the message. A response is created at the second computing entity and received at the first entity. The first entity then provides the service to the second entity. The first entity may transmit an attestation challenge to the second entity and in response receives an attestation report from the second entity. | 2009-12-03 |
20090300349 | VALIDATION SERVER, VALIDATION METHOD, AND PROGRAM - A validation server using HSM, which reduces required process time from receiving a validation request to responding with a validation result, and comprises a first software cryptographic module | 2009-12-03 |
20090300350 | SECURITY GROUPS - Methods and devices are provided for implementing security groups in an enterprise network. The security groups include first network nodes that are subject to rules governing communications between the first network nodes and second network nodes. An indicator, referred to as a security group tag (SGT), identifies members of a security group. In some embodiments, the SGT is provided in a field of a data packet reserved for layer 3 information or a field reserved for higher layers. However, in other embodiments, the SGT is provided in a field reserved for layer 1 or layer 2. In some embodiments, the SGT is not provided in a field used by interswitch links or other network fabric devices for the purpose of making forwarding decisions. | 2009-12-03 |
20090300351 | FAST SEARCHABLE ENCRYPTION METHOD - The present invention provides a method, apparatus and system for fast searchable encryption. The data owner encrypts files and stores the ciphertext to the server. The data owner generates an encrypted index according to each keyword of the files, and stores the encrypted index to the server. The index is composed of keyword item sets each being identified by a keyword item set locator and containing at least one or more file locators of the files associated with the corresponding keyword. Each file locator contains ciphertext of information for retrieval of an encrypted file and only with the correct file locator decryption key can the ciphertext be decrypted. Data owner issues a keyword item set locator as well as file locator decryption key to a searcher to enable the searcher to search on the encrypted index and retrieve files related to a certain keyword. | 2009-12-03 |
20090300352 | Secure session identifiers - An apparatus and a method for an authentication protocol. In one embodiment, a server generates a sequence number, and a server message authentication code based on a server secret key. The server sends the sequence number, an account identifier, and the server message authentication code to the client. The client generates a client message authentication code over the sequence number, a request specific data, and a shared secret key between the client and the server. The client sends a request to the server. The request includes the sequence number, the account identifier, the server message authentication code, the request specific data, and the client message authentication code. The server determines the validity of the client request with the shared secret key. | 2009-12-03 |
20090300353 | TRUSTED NETWORK INTERFACE - Systems and methods for combating and thwarting attacks by cybercriminals are provided. Network security appliances interposed between computer systems and public networks, such as the Internet, are configured to perform defensive and/or offensive actions against botnets and/or other cyber threats. According to some embodiments, network security appliances may be configured to perform coordinated defensive and/or offensive actions with other network security appliances. | 2009-12-03 |
20090300354 | METHOD AND APPARATUS FOR PREVENTING REPLAY ATTACK IN WIRELESS NETWORK ENVIRONMENT - A method for preventing a replay attack is provided. A prime number is mutually exchanged between a main node and children nodes. The main node generates a Prime Sequence Code Matrix (PSCM) corresponding to the prime number, and notifies the children nodes of sequence orders corresponding to the children nodes. The main node selects an arbitrary value of a Prime Sequence Code-1 (PSC1) among a series of values corresponding to an arbitrary node in the PSCM. The arbitrary node computes a Prime Sequence Code-2 (PSC2) subsequent to receiving the PSC1, using a sequence order received from the main node and the prime number. The PSC2 is transmitted to the main node. The main node compares the received PSC2 with the PSCM. The method can be easily applied by supplementing a weakness for a replay attack on the basis of the IEEE 802.15.4-2006 standard and minimizing system load. | 2009-12-03 |
20090300355 | Information Sharing Method and Apparatus - Embodiments of the present invention relate to methods and apparatus for sharing information with third parties and providing mechanisms whereby those third parties may legitimately pass the personal information on to other, for example affiliated, third parties. In one example of information sharing, information is shared electronically between an information provider and an information requestor, the information provider storing a body of information and associated sharing criteria provided by an originator, receiving a first information request from a first requestor and revealing the information and the sharing criteria to the first requestor if the first request is authorised by the originator, receiving a second information request from a second requestor and revealing the information to the second requestor if the second request contains an information identifier obtained from the first requestor and the sharing criteria so permits, and storing evidence of information requests. | 2009-12-03 |
20090300356 | REMOTE STORAGE ENCRYPTION SYSTEM - An exemplary remote storage encryption system includes a data storage unit and a key server having a key management module configured to communicate with a client device. The key management module stores at least one key access map that maps at least one access credential to at least one encryption key to determine which encryption key to provide to the client device. An exemplary method includes mapping the at least one access credential to the at least one encryption key, receiving a request for the encryption key from a remote requestor, accepting the access credential with the request, validating the access credential against a previously stored version thereof, retrieving the encryption key associated with the access credential based on the mapping, and sending the key to the remote requestor. | 2009-12-03 |
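The key-access-map flow described here (validate a presented credential against the stored copy, then return the mapped key) can be sketched as below. The class and method names are hypothetical; the constant-time comparison is an illustrative hardening choice, not something the abstract specifies.

```python
import hmac

class KeyServer:
    """Minimal sketch of a key access map: credentials map to encryption
    keys, a request's credential is validated against the stored copy,
    and only then is the mapped key returned."""

    def __init__(self):
        self.key_access_map = {}   # access credential -> encryption key

    def register(self, credential, key):
        self.key_access_map[credential] = key

    def request_key(self, credential):
        # Validate the presented credential against each stored version
        # using a constant-time comparison before releasing a key.
        for stored, key in self.key_access_map.items():
            if hmac.compare_digest(stored, credential):
                return key
        raise PermissionError("invalid access credential")
```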
20090300357 | METHOD FOR PERSONAL NETWORK MANAGEMENT ACROSS MULTIPLE OPERATORS - A method for accessing a Personal Network (PN) from a Guest device. In this method, the Guest device ( | 2009-12-03 |
20090300358 | METHOD FOR MANAGING NETWORK KEY AND UPDATING SESSION KEY - A method for managing network key and updating session key is provided. The step of the key management includes: constructing key request group, constructing key negotiation response group, and constructing key negotiation acknowledgement group. The step of multicasting key management method includes multicasting main key negotiation protocol and multicasting session key distribution protocol. The multicasting main key negotiation protocol comprises key updating informs group, constructing encryption key negotiation request group, constructing key negotiation response group and constructing key negotiation acknowledgement group. The multicasting session key distribution protocol comprises multicasting session key request and multicasting session key distribution. | 2009-12-03 |
20090300359 | APPARATUS AND METHOD FOR SECURELY SUBMITTING AND PROCESSING A REQUEST - An apparatus and a method for securely submitting a request and an apparatus and a method for securely processing a request. The apparatus for securely submitting a request includes a request pre-submitting component and a request confirmation component. The request pre-submitting component sends a request with a unique identifier to a server and sends an alarm message containing the unique identifier and a request description to the request confirmation component. The request confirmation component contains a key inaccessible to other components in a client. It pops up a request confirmation window, on which the request description is displayed, in response to the alarm message and generates a request confirmation message associated with the request by using the key and the unique identifier. | 2009-12-03 |
20090300360 | APPLICATION SETTING TERMINAL, APPLICATION EXECUTING TERMINAL, AND SETTING INFORMATION MANAGING SERVER - An application setting terminal includes a GUI | 2009-12-03 |
20090300361 | METHOD FOR RECEIVING/SENDING MULTIMEDIA MESSAGES - A method for receiving/sending multimedia messages uses a wireless LAN, and communicates with a gateway via the wireless LAN so as to send and receive multimedia messages. Furthermore, the gateway of the invention detects whether the user device is located within the wireless LAN. If yes, then multimedia messages are sent and received via the wireless LAN; if not, then via a conventional telecom network. The invention also discloses a corresponding gateway and a corresponding user device. | 2009-12-03 |
20090300362 | PASSWORD SELF ENCRYPTION METHOD AND SYSTEM AND ENCRYPTION BY KEYS GENERATED FROM PERSONAL SECRET INFORMATION - A public key cryptographic system and method is provided for a password or any other predefined personal secret information that defeats key factoring and spoofing attacks. The method adopts a new technique of encrypting a password or any predefined secret information by a numeric function of itself, replacing the fixed public key of the conventional RSA encryption. The whole process involving key generation, encryption, decryption and password handling is discussed in detail. Mathematical and cryptanalytical proofs of defeating factoring and spoofing attacks are furnished. | 2009-12-03 |
20090300363 | Method and arrangement for real-time betting with an off-line terminal - The invention relates generally to a method and arrangement for real-time betting with an off-line terminal, and especially to the technological field of keeping reliable time in the off-line terminal when handling, within a communications system comprising a distributed domain and a central domain, electronic records that contain predictions of the outcome of a certain incident. Within the distributed domain a multitude of electronic records that contain predictions of the outcome of the incident are generated and furnished with a cryptographically protected proof of a certain moment of the distributed domain's local time associated with the generation of the electronic record. | 2009-12-03 |
20090300364 | Username based authentication security - An apparatus and a method for an authentication protocol. In one embodiment, a client requests an authentication challenge from a server. The server generates the authentication challenge and sends it to the client. The authentication challenge includes the authentication context identifier, a random string, a timestamp, and a signature value. The client computes a salt value based on a username and the authentication context identifier from the authentication challenge. The signature value is computed based on the authentication context identifier, the random string, and the timestamp. The client computes a hashed password value based on the computed salt value, and a message authentication code based on the hashed password value and the random string. The client sends a response to the server. The response includes the username, the message authentication code, the random string, the timestamp, and the signature value. | 2009-12-03 |
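The challenge/response flow in this abstract maps cleanly onto standard primitives. The sketch below follows the described steps (salt from username plus context identifier, hashed password, MAC over the random string), but the hash choices, key sizes and field encodings are assumptions, not the patent's specification.

```python
import hmac, hashlib, os, time

# Minimal sketch of the described protocol. PBKDF2/SHA-256 and the field
# encodings are illustrative assumptions.

server_key = b"server-signing-key"

def make_challenge(context_id: bytes) -> dict:
    nonce, ts = os.urandom(16), int(time.time())
    payload = context_id + nonce + ts.to_bytes(8, "big")
    sig = hmac.new(server_key, payload, hashlib.sha256).digest()
    return {"context_id": context_id, "nonce": nonce, "ts": ts, "sig": sig}

def client_response(username: str, password: str, ch: dict) -> dict:
    # salt derived from the username and the authentication context identifier
    salt = hashlib.sha256(username.encode() + ch["context_id"]).digest()
    hashed_pw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)
    mac = hmac.new(hashed_pw, ch["nonce"], hashlib.sha256).digest()
    return {"username": username, "mac": mac, "nonce": ch["nonce"],
            "ts": ch["ts"], "sig": ch["sig"]}

ch = make_challenge(b"ctx-1")
resp = client_response("alice", "s3cret", ch)

# The server, holding the same inputs, recomputes and compares the MAC.
salt = hashlib.sha256(b"alice" + b"ctx-1").digest()
expected = hmac.new(hashlib.pbkdf2_hmac("sha256", b"s3cret", salt, 10_000),
                    ch["nonce"], hashlib.sha256).digest()
assert hmac.compare_digest(resp["mac"], expected)
```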
20090300365 | Vehicle Diagnostic System Security with Memory Card - A method and system are provided to authenticate software stored on a computing device such as a vehicle diagnostic tool. The system generates and stores encrypted information, such as memory media data and the media access control address of the vehicle diagnostic tool. The encrypted information can be sent to an authentication server, which returns encrypted authentication information that is used to validate the software for a period of time. | 2009-12-03 |
20090300366 | System and Method for Providing a Secure Application Fragmentation Environment - A system and method for providing and using expanded memory resources in a secure application environment are disclosed. An embodiment comprises a system and method for providing secure application functionality comprising receiving a request for a secure operation; determining if required application code for the secure operation is present in an application fragment store; sequentially loading a plurality of fragments of the required application code from an external memory, if the required application code is not present in the application fragment store; sequentially executing the plurality of fragments of the required application code; and sending a reply to the request for the secure operation. The system and method may further comprise decrypting each of the plurality of fragments of the required application code using a secure key prior to execution of the fragment and verifying the integrity of the code fragment. | 2009-12-03 |
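The check-cache / load / decrypt / verify / execute loop can be modeled compactly. In this toy sketch the XOR "cipher" and SHA-256 digest stand in for the secure key and integrity verifier; none of it reflects the application's actual mechanisms.

```python
import hashlib

# Toy model of fragmented secure execution: fragments come from "external
# memory" as (ciphertext, digest) pairs, are decrypted and integrity-checked
# on first use, cached in the fragment store, and executed in sequence.

KEY = 0x5A

def decrypt(fragment: bytes) -> bytes:
    return bytes(b ^ KEY for b in fragment)   # stand-in for a real cipher

def run_secure_operation(external_memory, fragment_store: dict):
    results = []
    for name, (ciphertext, digest) in external_memory:
        code = fragment_store.get(name)
        if code is None:                                   # not cached: load it
            code = decrypt(ciphertext)
            if hashlib.sha256(code).hexdigest() != digest: # integrity check
                raise ValueError("fragment %s failed verification" % name)
            fragment_store[name] = code
        results.append(eval(code.decode()))                # "execute" fragment
    return results

enc = lambda b: bytes(x ^ KEY for x in b)
mem = [("f1", (enc(b"1+1"), hashlib.sha256(b"1+1").hexdigest())),
       ("f2", (enc(b"2*3"), hashlib.sha256(b"2*3").hexdigest()))]
assert run_secure_operation(mem, {}) == [2, 6]
```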
20090300367 | ELECTRONIC CERTIFICATION AND AUTHENTICATION SYSTEM - The invention is an automated system that works in the data center of certification offices connected to the internet, which enables a member of any of the certification offices to certify his document electronically from a distance using a computer connected to the internet, a digital pad, an electronic pen, and a printer. | 2009-12-03 |
20090300368 | USER INTERFACE FOR SECURE DATA ENTRY - A computer input device for operation with a computer includes an input transducer, which is coupled to receive an input from a user and to generate a data signal responsively to the input. An encryption processor is coupled to process the data signal so as to output data to the computer. The encryption processor has a first operational mode in which the encryption processor encrypts the data signal using an encryption key not accessible to the computer so that the data are unintelligible to the computer, and a second operational mode in which the data are intelligible to the computer. A mode switch is operative so as to switch between the first and second operational modes of the encryption processor. An output transducer is coupled to provide to the user an indication of whether the encryption processor is in the first or the second operational mode. | 2009-12-03 |
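The two operational modes described here reduce to a small state machine: encrypt with a device-held key the host never sees, or pass data through intact. The XOR stream below is an illustrative stand-in for a real cipher, and the class shape is an assumption.

```python
# Sketch of the two-mode input path: in the first (secure) mode keystroke
# data is encrypted with a key not accessible to the host, so it is
# unintelligible to the computer; in the second mode it passes through.

class EncryptionProcessor:
    def __init__(self, device_key: bytes):
        self._key = device_key      # held in the input device, never the host
        self.secure_mode = True     # first operational mode by default

    def toggle_mode(self):
        """The mode switch between the two operational modes."""
        self.secure_mode = not self.secure_mode

    def process(self, data: bytes) -> bytes:
        if not self.secure_mode:    # second mode: intelligible to the host
            return data
        stream = self._key * (len(data) // len(self._key) + 1)
        return bytes(b ^ k for b, k in zip(data, stream))

proc = EncryptionProcessor(b"\x13\x37")
secret = proc.process(b"pin:1234")
proc.toggle_mode()
plain = proc.process(b"pin:1234")
assert secret != b"pin:1234" and plain == b"pin:1234"
```

An output transducer (e.g. an LED) reading `proc.secure_mode` would give the user the described mode indication.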
20090300369 | Security unit and protection system comprising such security unit as well as method for protecting data - In order to provide a protection system ( | 2009-12-03 |
20090300370 | Enabling byte-code based image isolation - In one embodiment, the present invention includes a method for setting an extensible policy mechanism to protect a root data structure including a page table, interpreting the bytecode of a pre-boot driver in a bytecode interpreter, and controlling access to a memory location based on the extensible policy mechanism. Other embodiments are described and claimed. | 2009-12-03 |
20090300371 | SEMICONDUCTOR INTEGRATED DEVICE AND METHOD OF TESTING SEMICONDUCTOR INTEGRATED DEVICE - According to one embodiment, a semiconductor integrated device stores secret data and is capable of operating in a test mode in which a scan test of an internal circuit is executed. The semiconductor integrated device comprises a mode signal receiving module configured to receive a scan mode signal designating the test mode, a mask module configured to mask the secret data when the mode signal receiving module receives the scan mode signal, and an error detection module configured to detect the presence or absence of an error in the secret data and to store the detection result in a first flip-flop. | 2009-12-03 |
20090300372 | SOLID STATE DISK AND INPUT/OUTPUT METHOD - Disclosed is a solid state disk including a storage unit configured to store data, and a control part configured to control enciphering and writing operations for the data using a key value and an initialization vector. The initialization vector is generated by processing an address corresponding to the data. | 2009-12-03 |
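Deriving the initialization vector from the data's address means each sector can be enciphered independently without storing per-sector IVs. The hash construction below is an assumption for illustration; the abstract does not fix a specific function.

```python
import hashlib

# Hedged sketch: derive a per-sector IV by processing the logical block
# address, as the abstract describes. SHA-256 truncated to a 128-bit IV is
# an illustrative choice, not the patent's specified function.

def iv_for_address(address: int, tweak: bytes = b"ssd-iv") -> bytes:
    return hashlib.sha256(tweak + address.to_bytes(8, "big")).digest()[:16]

assert len(iv_for_address(0)) == 16
assert iv_for_address(7) == iv_for_address(7)   # deterministic per address
assert iv_for_address(7) != iv_for_address(8)   # distinct across sectors
```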
20090300373 | SYSTEM FOR TURNING A COMPUTER ON AND OFF - A system for turning a computer on and off has a first switch mounted on an external device and a control circuit for turning the computer on or off. The control circuit has a second switch and a first resistor. The first switch is connected between the second switch and a first power supply. The second switch is connected to a second power supply via the first resistor, and to a control end capable of turning the computer on or off. The first switch actuates the second switch. When the second switch turns on, the control end is grounded to turn the computer on or off. | 2009-12-03 |
20090300374 | STORAGE APPARATUS AND START-UP CONTROL METHOD FOR THE SAME - At the time of initial start-up, two or more storage units are started as a start-up control unit so that the total power consumption will not exceed a specified electric power. | 2009-12-03 |
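The constraint in this abstract amounts to batching unit start-ups so each group's combined draw stays within the budget. The greedy batching and the wattage figures below are illustrative assumptions, not the application's control method.

```python
# Illustrative batching of initial start-up: spin up storage units in
# groups whose combined start-up draw never exceeds the specified power.
# Unit wattages and the budget are made-up figures.

def startup_batches(unit_watts, budget):
    batches, current, used = [], [], 0.0
    for idx, watts in enumerate(unit_watts):
        if watts > budget:
            raise ValueError("unit %d alone exceeds the power budget" % idx)
        if used + watts > budget:          # close this batch, start the next
            batches.append(current)
            current, used = [], 0.0
        current.append(idx)
        used += watts
    if current:
        batches.append(current)
    return batches

units = [30, 30, 25, 40, 20]
batches = startup_batches(units, budget=60)
assert batches == [[0, 1], [2], [3, 4]]
assert all(sum(units[i] for i in b) <= 60 for b in batches)
```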
20090300375 | POWER SUPPLY CONTROL CIRCUIT - An exemplary power supply control circuit includes a first electric switch, a second electric switch, a third electric switch, a power supply, and an output terminal. The first electric switch has a first terminal connected to an SIO chip to receive a control signal. When the control signal is at a high level, the first electric switch is turned on, the second electric switch is turned off, the third electric switch is turned off, and the output terminal outputs no power supply. When the control signal is at a low level, the first electric switch is turned off, the second electric switch is turned on, the third electric switch is turned on, and the output terminal outputs the power supply. | 2009-12-03 |
20090300376 | CONTROL METHOD AND COMPUTER SYSTEM FOR ADVANCED CONFIGURATION AND POWER INTERFACE - Provided is a control method for an advanced configuration and power interface (ACPI) in a computer system. The computer system comprises a processor and a bus master, wherein the processor, as defined by the ACPI specification, has a first state (C | 2009-12-03 |
20090300377 | Computer system for Managing Power consumption and Method Thereof - A computer system for managing power consumption includes a power supply, a current detecting module, a power control module and a feedback control module. The power supply is used for outputting a system voltage according to a feedback signal. The current detecting module senses a system current to generate system current information. The power control module includes a calculating unit and a user interface. The calculating unit calculates a power consumption of the computer system according to the system current information or the system voltage. The user interface includes a plurality of power adjusting functions, and is used for displaying the system current information and the power consumption. In addition, the user interface generates a voltage control signal according to a power adjusting function selected from the plurality of power adjusting functions. The feedback control module adjusts the feedback signal according to the voltage control signal. | 2009-12-03 |
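The calculating unit's core arithmetic is simply power from the sensed system current and the supplied system voltage. A minimal sketch, assuming the straightforward P = V × I relation (the abstract does not detail the calculation):

```python
# Minimal model of the calculating unit: power consumption in watts from
# the sensed system current and the system voltage (P = V * I).

def power_consumption(system_voltage: float, system_current: float) -> float:
    return system_voltage * system_current

assert power_consumption(12.0, 5.5) == 66.0
```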
20090300378 | Computer having power management function - A power management system is disposed in a computer. The power management system includes a current detecting module and a chipset. The current detecting module is disposed between the power receiving end of an external device and the power cord of the power source of the computer, for detecting the current drawn by the external device and accordingly outputting a current detecting signal. The chipset adjusts the operating voltage or operating frequency of the external device according to the current detecting signal. | 2009-12-03 |