2nd week of 2013 patent application highlights part 60 |
Patent application number | Title | Published |
20130013846 | METHOD FOR STORING DATA AND ELECTRONIC APPARATUS USING THE SAME - A method for storing data and an electronic apparatus using the same are provided. Only data is written to a memory card when the electronic apparatus wants to store the data to the memory card. And file information and location information corresponding to the data stored in the memory card are recorded into a buffer block of the electronic apparatus. After a file closing action is executed, the file information and the location information recorded in the buffer block are written to the memory card. | 2013-01-10 |
20130013847 | STORAGE SUB-SYSTEM FOR A COMPUTER COMPRISING WRITE-ONCE MEMORY DEVICES AND WRITE-MANY MEMORY DEVICES AND RELATED METHOD - Methods and apparatus for a solid state non-volatile storage sub-system of a computer are provided. The storage sub-system may include a write-many storage sub-system memory device including write-many memory cells, a write-once storage sub-system memory device including write-once memory cells, and a page-based interface that is adapted to read and write the write-once and write-many storage sub-system memory devices. Numerous other aspects are provided. | 2013-01-10 |
20130013848 | REDUNDANT ARRAY OF INDEPENDENT DISK (RAID) CONTROLLED SEMICONDUCTOR STORAGE DEVICE (SSD)-BASED SYSTEM HAVING A HIGH-SPEED NON-VOLATILE HOST INTERFACE - Embodiments of the invention provide a RAID controlled SSD-based system having a high-speed, non-volatile host interface. Specifically, in a typical embodiment, a RAID-controlled device is provided that comprises a high-speed host interface that is coupled to a redundant array of independent disks (RAID) controller. The RAID controller itself is coupled to a set of controlled memory units that each comprises: a main controller coupled to cache memory; and a set of SSD memory units (each having a set of blocks of memory) coupled to the main controller. | 2013-01-10 |
20130013849 | Programmable Patch Architecture for ROM - A system according to one embodiment includes a host central processing unit (CPU); a first storage medium configured to be in communication with the host CPU and to store information associated with at least one address; a second storage medium configured to be in communication with the host CPU, to store patch information associated with the at least one address of the first storage medium; and selection circuitry configured to, in response to a fetch instruction from the host CPU, select the patch information from the second storage medium if the fetch instruction contains a destination address that matches the at least one address associated with the patch information. | 2013-01-10 |
20130013850 | RELATIVE HEAT INDEX BASED HOT DATA DETERMINATION FOR BLOCK BASED STORAGE TIERING - Disclosed is a process for determining a heat index for a block of data, such as an extent, for storage tiering. Weighted scores are used for read and write operations, since solid state devices operate better with read operations than write operations. The heat index associated with each extent is a function of a base score, rather than an absolute value. The base score is determined by adding the access score to the number of extents in the hot tier, then dividing by the number of extents in the hot tier. In this fashion, the base score measures the weighted I/O activity relative to the size of the hot tier. | 2013-01-10 |
20130013851 | DATA STREAM DISPATCHING METHOD, MEMORY CONTROLLER, AND MEMORY STORAGE APPARATUS - A data stream dispatching method for a memory storage apparatus having a non-volatile memory module and a smart card chip is provided. The method includes configuring a plurality of logical block addresses for the non-volatile memory module, wherein a plurality of specific logical block addresses is used for storing a specific file. The method also includes receiving a response data unit from the smart card chip and storing the response data unit into a buffer memory. The method further includes when a logical block address corresponding to a read command issued by a host system is one of the specific logical block addresses and the response data unit is stored in the buffer memory, transmitting the response data unit to the host system by aligning an access unit. Thereby, the host system can correctly receive the response data unit from the smart card chip. | 2013-01-10 |
20130013852 | MEMORY CONTROLLING METHOD, MEMORY CONTROLLER AND MEMORY STORAGE APPARATUS - A memory controlling method, a memory controller and a memory storage apparatus are provided. The method includes identifying whether a transmission mode between the memory storage apparatus and a host system belongs to a first transmission mode or a second transmission mode and grouping memory dies of the memory storage apparatus into a plurality of memory die groups. The method also includes applying a first erasing mode to erase data stored in the memory dies when the transmission mode belongs to the first transmission mode and applying a second erasing mode to erase the data stored in the memory dies when the transmission mode belongs to the second transmission mode, wherein at least a part of the memory die groups are enabled simultaneously in the first erasing mode and any two of the memory die groups are not enabled simultaneously in the second erasing mode. | 2013-01-10 |
20130013853 | COMMAND EXECUTING METHOD, MEMORY CONTROLLER AND MEMORY STORAGE APPARATUS - A command executing method for a memory storage apparatus, and a memory controller and the memory storage apparatus using the same, are provided. The method includes, during a data merging operation, receiving a write command and write data corresponding to the write command from a host system. The method also includes temporarily storing the write data into a buffer memory and, at a delay time point, transmitting a response message to the host; the delay time point is set by adding a dummy delay time to the time point at which the write of the write data into the buffer memory is completed. Accordingly, the method can effectively level the response times of executing write commands during the data merging operation, thereby shortening the maximum access time. | 2013-01-10 |
20130013854 | MEMORY CONTROLLER, METHOD THEREOF, AND ELECTRONIC DEVICES HAVING THE MEMORY CONTROLLER - A method for operating a memory controller is provided. The method includes generating a pseudo-random number by using a seed included in a stored seed group corresponding to a page to be currently programmed, wherein the stored seed group is stored among a plurality of seed groups. Data to be programmed into the current page is randomized using the pseudo-random number, and the memory controller outputs the randomized data. A solid state drive (SSD) or other memory storage device such as a memory card includes the memory controller and includes a read only memory (ROM) storing the plurality of seed groups. The memory controller includes a micro-processor and a read only memory (ROM) storing executable code for causing the micro-processor to access the plurality of stored seed groups and to select a seed therefrom corresponding to a page to be currently programmed. | 2013-01-10 |
20130013855 | MEMORY CONTROLLERS AND MEMORY SYSTEMS INCLUDING THE SAME - A memory controller may include a cell state generator that is configured to generate a cell state for each of a plurality of multi-level cells included in a non-volatile memory device, using data of pages. The memory controller may also include a pseudo-random number generator that is configured to generate a pseudo-random number. The memory controller may further include an operator that is configured to change the cell state of each multi-level cell using the pseudo-random number, and that is configured to output a changed cell state for each multi-level cell. | 2013-01-10 |
20130013856 | FLASH MANAGEMENT TECHNIQUES - Various flash management techniques may be described. An apparatus may comprise a processor, a flash memory coupled to the processor, and a flash management module. The flash management module may be executed by the processor to receive a write request to write data to the flash memory, write a first control sector with a sequence number to the flash memory, and write the sequence number, an address for a logical sector, and data to at least one physical sector corresponding to the logical sector of the flash memory. Other embodiments are described and claimed. | 2013-01-10 |
20130013857 | System and Method for Providing a RAID Plus Copy Model for a Storage Network - A storage system includes a storage server adapted to receive data, determine parity data based upon the data, and store the data and the parity data in a storage array associated with the storage server. The data and the parity data may be sent to a second storage server. | 2013-01-10 |
20130013858 | TAPE LIBRARY EMULATION WITH AUTOMATIC CONFIGURATION AND DATA RETENTION - Disk based emulation of tape libraries is provided with features that allow easier management and administration of a backup system and also allow increased flexibility to both archive data on tape at a remote location and also have fast restore access to archived data files. Features include automatic emulation of physical libraries, and the retention and write protection of virtual tapes that correspond to exported physical tapes. | 2013-01-10 |
20130013859 | Structure-Based Adaptive Document Caching - Techniques for generating, updating, and transmitting a structure-based data representation of a document are described herein. The structure-based adaptive document caching techniques may effectively eliminate redundancy in data transmission by exploiting structures of the document to be transmitted. The described techniques partition a document into a sequence of structures, differentiate between cache-worthy structures and cache-unworthy structures, and generate a structure-based data representation of the document. The techniques may transmit updated structures and instructions, instead of all data of the document, to update previously cached structures at a client device, thereby resulting in higher cache hit rates. | 2013-01-10 |
20130013860 | MEMORY CELL PRESETTING FOR IMPROVED MEMORY PERFORMANCE - Memory cell presetting for improved performance including a method for using a computer system to identify a region in a memory. The region includes a plurality of memory cells characterized by a write performance characteristic that has a first expected value when a write operation changes a current state of the memory cells to a desired state of the memory cells and a second expected value when the write operation changes a specified state of the memory cells to the desired state of the memory cells. The second expected value is closer than the first expected value to a desired value of the write performance characteristic. The plurality of memory cells in the region are set to the specified state, and the data is written into the plurality of memory cells responsive to the setting. | 2013-01-10 |
20130013861 | CACHING PERFORMANCE OPTIMIZATION - A method for managing data storage is described. The method includes receiving data from an external host at a peripheral storage device, detecting a file system type of the external host, and adapting a caching policy for transmitting the data to a memory accessible by the storage device, wherein the caching policy is based on the detected file system type. The detection of the file system type can be based on the received data. The detection bases can include a size of the received data. In some implementations, the detection of the file system type can be based on accessing the memory for file system type indicators that are associated with a unique file system type. Adapting the caching policy can reduce a number of data transmissions to the memory. The detected file system type can be a file allocation table (FAT) system type. | 2013-01-10 |
20130013862 | EFFICIENT HANDLING OF MISALIGNED LOADS AND STORES - A system and method for efficiently handling misaligned memory accesses within a processor. A processor comprises a load-store unit (LSU) with a banked data cache (d-cache) and a banked store queue. The processor generates a first address corresponding to a memory access instruction identifying a first cache line. The processor determines the memory access is misaligned which crosses over a cache line boundary. The processor generates a second address identifying a second cache line logically adjacent to the first cache line. If the instruction is a load instruction, the LSU simultaneously accesses the d-cache and store queue with the first and the second addresses. If there are two hits, the data from the two cache lines are simultaneously read out. If the access is a store instruction, the LSU separates associated write data into two subsets and simultaneously stores these subsets in separate cache lines in separate banks of the store queue. | 2013-01-10 |
20130013863 | Hybrid Caching Techniques and Garbage Collection Using Hybrid Caching Techniques - Hybrid caching techniques and garbage collection using hybrid caching techniques are provided. A determination of a measure of a characteristic of a data object is performed, the characteristic being indicative of an access pattern associated with the data object. A selection of one caching structure, from a plurality of caching structures, is performed in which to store the data object based on the measure of the characteristic. Each individual caching structure in the plurality of caching structures stores data objects having a similar measure of the characteristic with regard to each of the other data objects in that individual caching structure. The data object is stored in the selected caching structure and at least one processing operation is performed on the data object stored in the selected caching structure. | 2013-01-10 |
20130013864 | MEMORY ACCESS MONITOR - For each access request received at a shared cache of the data processing device, a memory access pattern (MAP) monitor predicts which of the memory banks, and corresponding row buffers, would be accessed by the access request if the requesting thread were the only thread executing at the data processing device. By recording predicted accesses over time for a number of access requests, the MAP monitor develops a pattern of predicted memory accesses by executing threads. The pattern can be employed to assign resources at the shared cache, thereby managing memory more efficiently. | 2013-01-10 |
20130013865 | DEDUPLICATION OF VIRTUAL MACHINE FILES IN A VIRTUALIZED DESKTOP ENVIRONMENT - Techniques for deduplication of virtual machine files in a virtualized desktop environment are described, including receiving data into a page cache, the data being received from a virtual machine and indicating a write operation, and deduplicating the data in the page cache prior to committing the data to storage, the data being deduplicated in-band and in substantially real-time. | 2013-01-10 |
20130013866 | SPATIAL LOCALITY MONITOR - A method includes updating a first tag access indicator of a storage structure. The tag access indicator indicates a number of accesses by a first thread executing on a processor to a memory resource for a portion of memory associated with a memory tag. The updating is in response to an access to the memory resource for a memory request associated with the first thread to the portion of memory associated with the memory tag. The method may include updating a first sum indicator of the storage structure indicating a sum of numbers of accesses to the memory resource being associated with a first access indicator of the storage structure for the first thread, the updating being in response to the access to the memory resource. | 2013-01-10 |
20130013867 | DATA PREFETCHER MECHANISM WITH INTELLIGENT DISABLING AND ENABLING OF A PREFETCHING FUNCTION - A data prefetcher includes a controller to control operation of the data prefetcher. The controller receives data associated with cache misses and data associated with events that do not rely on a prefetching function of the data prefetcher. The data prefetcher also includes a counter to maintain a count associated with the data prefetcher. The count is adjusted in a first direction in response to detection of a cache miss, and in a second direction in response to detection of an event that does not rely on the prefetching function. The controller disables the prefetching function when the count reaches a threshold value. | 2013-01-10 |
20130013868 | RING BUFFER - A computer implemented method for writing to a software bound ring buffer. A network adapter may determine that data is available to write to the software bound ring buffer. The network adapter determines that a read index is not equal to a write index, responsive to a determination that data is available to write to the software bound ring buffer. The network adapter writes the data to memory referenced by the hardware write index, wherein memory referenced by the write index is offset according to an offset, and the memory contents comprise a data portion and a valid bit. The network adapter writes an epoch value of the write index to the valid bit. The network adapter increments the write index, responsive to writing the data to memory referenced by the write index. Further disclosed is method to access a hardware bound ring buffer. | 2013-01-10 |
20130013869 | INVOKING OPERATING SYSTEM FUNCTIONALITY WITHOUT THE USE OF SYSTEM CALLS - Embodiments of the invention operate within the context of a system with a processor providing memory-monitoring functionality. The lower-privileged code of a first process, such as user application code, communicates directly with higher-privileged code of a second process, such as interrupt-handling code of the operating system kernel, without using a software interrupt or other gate mechanism. This enhances overall system performance by eliminating the saving of state and processing inherent in interrupt handling, and also avoids missing events that may occur while other interrupts are masked during event handling. Specifically, the second process initializes a monitored memory area that is directly accessible by processes having at least the privilege level of the first process. The second process further initializes memory-monitoring hardware of the processor to monitor writes to the monitored memory area, such that the second process will resume execution from a dormant state when a write takes place. | 2013-01-10 |
20130013870 | DIFFERENTIAL VECTOR STORAGE FOR NON-VOLATILE MEMORY - A method is disclosed for storing information on non-volatile memory which can rewrite memory cells multiple times before a block needs to be erased. The information to be stored is transformed into a suitable form which has better robustness properties with respect to common sources of error, such as leakage of charge, or imperfect read/write units. | 2013-01-10 |
20130013871 | INFORMATION PROCESSING SYSTEM AND DATA PROCESSING METHOD - An information processing system includes a first processor to store data segments in a first memory, to send the data segments to be stored in the first memory to a second processor, and to read the data segments from the first memory so as to store the data segments in a second memory; and the second processor to store the data segments to be stored sent from the first processor in a third memory, wherein when the first processor notifies the second processor about data that is permitted to be removed, the first processor sends ID information to the second processor that renders a particular data segment that was last stored in the second memory identifiable, and the second processor removes from the third memory the particular data segment and an older data segment stored previous to the particular data segment from the data segments stored in the third memory. | 2013-01-10 |
20130013872 | External Memory Controller Node - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 2013-01-10 |
20130013873 | SYSTEM AND METHOD FOR OPTIMIZING DATA IN VALUE-BASED STORAGE SYSTEM - A storage system includes a plurality of data vats, and a processor including an optimizing unit that optimizes a value of data stored in the storage system. The optimizing unit optimizes the value by computing and implementing an optimal decision for allocating new data to a first data vat of the plurality of data vats, moving existing data from at least a second data vat of the plurality of data vats to the first data vat, and deleting existing data from the first data vat, based on an amount of data in each of the plurality of data vats. | 2013-01-10 |
20130013874 | DATA STORE PAGE RECOVERY - In one implementation, a data store page recovery process includes selecting a page reference and an update record reference at a page recovery mapping based on a page identifier, accessing a backup page via the page reference, accessing an update record via the update record reference, and modifying the backup page according to the update record. The page reference is associated with the update record reference at the page recovery mapping. | 2013-01-10 |
20130013875 | METHOD AND SYSTEM FOR AUTOMATICALLY SAVING A FILE - Described herein are a method, system, and computer readable medium for automatically saving a file. A save score for the file is determined and compared against a save threshold. The save score is determined from a combination of autosave indicators indicative of whether to immediately autosave the file. Each of the autosave indicators that adjust the save score increases or decreases the likelihood that the file will be automatically saved. If the comparison indicates that the file should be automatically saved, the file is automatically saved; otherwise, the file is not automatically saved. The save score can take into consideration factors such as the number of dirty characters in the file and the time at which the file was last saved. Utilizing the save score reduces the number of saves performed when only immaterial changes have been made to the file, which helps preserve system resources such as battery life. | 2013-01-10 |
20130013876 | MEMORY DEVICE AND METHOD HAVING ON-BOARD ADDRESS PROTECTION SYSTEM FOR FACILITATING INTERFACE WITH MULTIPLE PROCESSORS, AND COMPUTER SYSTEM USING SAME - A memory device includes an address protection system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The protection system is used to prevent at least some of a plurality of processors in a system from accessing addresses designated by one of the processors as a protected memory address. Until the processor releases the protection, only the designating processor can access the memory device at the protected address. If the memory device contains a cache memory, the protection system can alternatively or additionally be used to protect cache memory addresses. | 2013-01-10 |
20130013877 | HOT-SWAPPING ACTIVE MEMORY FOR VIRTUAL MACHINES WITH DIRECTED I/O - Embodiments of the invention describe a DMA Remapping unit (DRU) to receive, from a virtual machine monitor (VMM), a hot-page swap (HPS) request, the HPS request to include a virtual address, in use by at least one virtual machine (VM), mapped to a first memory page location, and a second memory page location. The DRU further blocks DMA requests to addresses of memory being remapped until the HPS request is fulfilled, copies the content of the first memory page location to the second memory page location, and remaps the virtual address from the first memory page location to the second memory page location. | 2013-01-10 |
20130013878 | Levelization of Memory Interface for Communicating with Multiple Memory Devices - In a memory system in which a system clock signal is forwarded from the memory controller to multiple memory devices, the phase of the system clock signal forwarded to the slower memory device is advanced relative to the system clock signal forwarded to the faster memory device by a phase corresponding to the skew on the data links corresponding to the memory devices. This causes the state machine of the slower memory device to change states and advance earlier than the state machine in the faster memory device, and as a result, the data read from both the slower memory device and the faster memory device are unskewed on the data links between the memory controller and the memory devices. | 2013-01-10 |
20130013879 | MEMORY CONTROL DEVICE, MEMORY DEVICE, AND MEMORY CONTROL METHOD - The memory control device according to the present invention includes a command generating unit which divides the memory access request issued by the master into access commands, each of which is for one of the memory devices; command issuing units which issue the access commands to the memory devices; and a data control unit which switches data between the master and the memories. The command generating unit switches between control for outputting an identical physical address to the memory devices and control for outputting different physical addresses to the memory devices, depending on whether the physical addresses of the memory devices are identical or different; each of the memory devices corresponds to one of the divided access commands. | 2013-01-10 |
20130013880 | STORAGE SYSTEM AND ITS DATA PROCESSING METHOD - The de-duplication effect is enhanced even when managing data blocks by dividing them into fixed-length data. | 2013-01-10 |
20130013881 | MICROCONTROLLER AND ELECTRONIC CONTROL UNIT - Provided is a microcontroller in which the respective CPUs execute different applications so as to improve processing performance, and execute an application that requires safety and mutually compare the results so as to enhance the reliability of write data. The microcontroller has a plurality of processing systems made up of a first CPU, a second CPU, a first memory and a second memory; for instruction processing of specific processing set in advance, the write to peripheral modules which are not multiplexed is executed twice, and the write data of the first and second times are mutually collated. | 2013-01-10 |
20130013882 | CARD AND HOST DEVICE - A host device is configured to read and write information from and into a card, to supply a supply voltage that belongs to a first voltage range or a second voltage range which is lower than the first voltage range, and to issue a voltage identification command to the card. The voltage identification command includes a voltage range identification section, an error detection section, and a check pattern section. The voltage range identification section includes information indicating to which of the first voltage range and the second voltage range the supply voltage belongs. The error detection section has a pattern configured to enable the card which has received the voltage identification command to detect errors in the voltage identification command. The check pattern section has a preset pattern. | 2013-01-10 |
20130013883 | SYSTEMS AND METHODS FOR PERFORMING MULTI-PATH STORAGE OPERATIONS - Systems and methods for allocating transmission resources within a computer network are provided. In some embodiments of the invention, communication links may be assigned based on predefined preferences or system configuration to facilitate the transfer of data from one point in the network to another. In other embodiments, system operation may be monitored and communication paths be assigned dynamically based on this information to improve system operation and provide improved failover response, load balancing and to promote robust data access via alternative routes. | 2013-01-10 |
20130013884 | Memory Management System - This memory management system has: (a) a logical partition management unit that manages allocation and release of a virtual memory used by an application in a logical address space; (b) a physical partition management unit that manages allocation and release of small size parts into which a physical memory is divided in a physical address space; and (c) a converter unit that converts an address between the logical address space and the physical address space. | 2013-01-10 |
20130013885 | MEMORY STORAGE DEVICE, MEMORY CONTROLLER, AND METHOD FOR IDENTIFYING VALID DATA - A memory storage device, a memory controller, and a method for identifying a valid data are provided. A rewritable non-volatile memory chip of the memory storage device includes physical blocks. Each of the physical blocks has physical pages. In the present method, logical blocks are configured and mapped to a portion of the physical blocks, wherein each of the logical blocks has logical pages. When a data to be written by a host system into a specific logical page is received, a substitute physical block is selected, the data is written into a specific physical page in the substitute physical block, and the address of a physical page in which a previous data corresponding to the specific logic page is written is recorded into the specific physical page. Thereby, a physical page containing the latest valid data can be identified among several physical pages corresponding to a same logical page. | 2013-01-10 |
20130013886 | ADAPTIVE WEAR LEVELING VIA MONITORING THE PROPERTIES OF MEMORY REFERENCE STREAM - Adaptive write leveling in limited lifetime memory devices including performing a method for monitoring a write data stream that includes write line addresses. A property of the write data stream is detected and a write leveling process is adapted in response to the detected property. The write leveling process is applied to the write data stream to generate physical addresses from the write line addresses. | 2013-01-10 |
20130013887 | MEMORY CONTROLLER - An address comparator stores an address of data read out by a host system. Also, a buffer reads out the data from a memory and stores the data. If an address of data which is expected to be newly read out by the host system is included in addresses which have already been stored in the address comparator, the host system | 2013-01-10 |
20130013888 | Method and Apparatus For Index-Based Virtual Addressing - An apparatus comprising a memory configured to store a routing table and a processor coupled to the memory, the processor configured to generate a request to access at least a section of an instance, assign an index to the request based on the instance, lookup an entry in the routing table based on the index, wherein the entry comprises a resource bit vector, and identify a resource comprising at least part of the section of the instance based on the resource bit vector. | 2013-01-10 |
20130013889 | Memory management unit using stream identifiers - A memory management unit includes a translation buffer unit for storing memory management attribute entries that originate from a plurality of different memory management contexts. Context disambiguation circuitry responds to one or more characteristics of a received memory transaction to form a stream identifier and to determine which of the memory management context matches that memory transaction. In this way, memory management attribute entries stored within the translation lookaside buffer are formed under control of the appropriate matching context. When the translation buffer unit receives a further transaction, then a further stream identifier is formed therefrom and if this matches the stream identifier of stored memory management attribute entries then those memory management attribute entries may be used (if appropriate) for that further memory transaction. | 2013-01-10 |
20130013890 | DATABASE SYSTEM - Operating a database system comprises: storing a database table comprising a plurality of rows, each row comprising a key value and one or more attributes; storing a primary index for the database table, the primary index comprising a plurality of leaf nodes, each leaf node comprising one or more key values and respective memory addresses, each memory address defining the storage location of the respective key value; creating a new leaf node comprising one or more key values and respective memory addresses; performing a memory allocation analysis based upon the lowest key value of the new leaf node to identify a non-full memory page storing a leaf node whose lowest key value is similar to the lowest key value of the new leaf node; and storing the new leaf node in the identified non-full memory page. | 2013-01-10 |
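The placement step above (find a non-full memory page whose resident leaf node has a lowest key similar to the new leaf's lowest key) can be sketched like this. The absolute-difference distance metric and the page capacity are assumptions; the abstract only requires "similar":

```python
# Place a new leaf node in the non-full page whose resident leaves have the
# nearest lowest key, so key-adjacent leaves end up sharing pages.
def place_leaf(pages, new_leaf, capacity=4):
    low = min(new_leaf["keys"])
    best, best_dist = None, float("inf")
    for page in pages:
        if len(page) >= capacity:
            continue                        # skip full pages
        # similarity = distance between the lowest keys
        dist = min(abs(min(leaf["keys"]) - low) for leaf in page)
        if dist < best_dist:
            best, best_dist = page, dist
    if best is None:                        # no non-full page: allocate a new one
        best = []
        pages.append(best)
    best.append(new_leaf)
    return best
```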
20130013891 | METHOD AND APPARATUS FOR A HIERARCHICAL SYNCHRONIZATION BARRIER IN A MULTI-NODE SYSTEM - A hierarchical barrier synchronization of cores and nodes on a multiprocessor system, in one aspect, may include providing by each of a plurality of threads on a chip, input bit signal to a respective bit in a register, in response to reaching a barrier; determining whether all of the plurality of threads reached the barrier by electrically tying bits of the register together and “AND”ing the input bit signals; determining whether only on-chip synchronization is needed or whether inter-node synchronization is needed; in response to determining that all of the plurality of threads on the chip reached the barrier, notifying the plurality of threads on the chip, if it is determined that only on-chip synchronization is needed; and after all of the plurality of threads on the chip reached the barrier, communicating the synchronization signal to outside of the chip, if it is determined that inter-node synchronization is needed. | 2013-01-10 |
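The on-chip level of this barrier (each thread sets a bit in a register; the electrically "AND"ed bits release everyone once all are set) can be sketched in software. A condition variable stands in for the hardware notification path, so this is a behavioral sketch only:

```python
import threading

# On-chip barrier level: each thread contributes one input bit; the all-ones
# value (the "AND" of the bits) releases every thread on the chip.
class ChipBarrier:
    def __init__(self, num_threads):
        self.full_mask = (1 << num_threads) - 1
        self.bits = 0
        self.cond = threading.Condition()

    def arrive(self, thread_id):
        with self.cond:
            self.bits |= 1 << thread_id          # set this thread's input bit
            if self.bits == self.full_mask:      # AND of all input bits is 1
                self.cond.notify_all()           # notify the threads on the chip
            while self.bits != self.full_mask:
                self.cond.wait()
```

In the hierarchical case, the last arriving thread would additionally forward a synchronization signal off-chip instead of notifying immediately, when inter-node synchronization is needed.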
20130013892 | HIERARCHICAL MULTI-CORE PROCESSOR, MULTI-CORE PROCESSOR SYSTEM, AND COMPUTER PRODUCT - A hierarchical multi-core processor includes a core group for each hierarchy of a hierarchy group constituting a series of communication functions divided according to communication protocol, where a first core group of a given hierarchy among the hierarchy group is connected to a second core group of another hierarchy constituting a first communication function to be executed following a second communication function of the given hierarchy. | 2013-01-10 |
20130013893 | PORTABLE HANDHELD DEVICE WITH MULTI-CORE MICROCODED IMAGE PROCESSOR - A portable handheld device includes a CPU for processing a script; a multi-core processor for processing images; and a flash memory connected to the CPU, the flash memory storing therein a table of micro-codes. The multi-core processor includes a plurality of micro-coded processing units. The CPU is configured to read one or more micro-codes from the flash memory and load the one or more micro-codes into the processing unit to execute the script being processed thereby. | 2013-01-10 |
20130013894 | DATA PROCESSOR - A RISC data processor in which the number of flags generated by each instruction is increased so that a decrease of flag-generating instructions exceeds an increase of flag-using instructions in quantity, thereby achieving the decrease in instructions. An instruction for generating flags according to operands' data sizes is defined, and an instruction set handled by the RISC data processor includes an instruction capable of executing an operation on operands in more than one data size. An identical operation process is conducted on the small-size operand and on low-order bits of the large-size operand, and flags are generated capable of coping with the respective data sizes regardless of the data size of each operand subjected to the operation. Thus, a reduction in instruction code space of the RISC data processor can be achieved. | 2013-01-10 |
20130013895 | BYTE-ORIENTED MICROCONTROLLER HAVING WIDER PROGRAM MEMORY BUS SUPPORTING MACRO INSTRUCTION EXECUTION, ACCESSING RETURN ADDRESS IN ONE CLOCK CYCLE, STORAGE ACCESSING OPERATION VIA POINTER COMBINATION, AND INCREASED POINTER ADJUSTMENT AMOUNT - An exemplary byte-oriented microcontroller includes a program memory, a program memory bus, and a core circuit. The program memory bus has a bus width wider than one instruction byte, and the core circuit is coupled to the program memory through the program memory bus for executing at least one instruction by processing a plurality of instruction bytes fetched from the program memory. The core circuit includes a fetch unit, for fetching the instruction bytes through the program memory bus and re-ordering the fetched instruction bytes to form a complete instruction. | 2013-01-10 |
20130013896 | LOAD/MOVE AND DUPLICATE INSTRUCTIONS FOR A PROCESSOR - A method includes, in a processor, loading/moving a first portion of bits of a source into a first portion of a destination register and duplicating that first portion of bits in a subsequent portion of the destination register. | 2013-01-10 |
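The load/move-and-duplicate behavior described above resembles instructions such as MOVDDUP: the low portion of the source fills the low portion of the destination and is duplicated into the subsequent portion. A sketch at element granularity (operating on list elements rather than raw bits is a simplification):

```python
# Load the low `half` elements of the source into the low half of the
# destination and duplicate them into the high half (MOVDDUP-style behavior).
def load_dup(src, half=4):
    low = list(src[:half])
    return low + low        # destination register contents
```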
20130013897 | METHOD TO DYNAMICALLY DISTRIBUTE A MULTI-DIMENSIONAL WORK SET ACROSS A MULTI-CORE SYSTEM - A method provides efficient dispatch/completion of an N Dimensional (ND) Range command in a data processing system (DPS). The method comprises: a compiler generating one or more commands from received program instructions; ND Range work processing (WP) logic determining when a command generated by the compiler will be implemented over an ND configuration of operands, where N is greater than one (1); automatically decomposing the ND configuration of operands into a one (1) dimension (1D) work element comprising P sequentially ordered work items that each represent one of the operands; placing the 1D work element within a command queue of the DPS; enabling sequential dispatching of 1D work items in ordered sequence to one or more processing units; and generating an ND Range output by mapping the 1D work output result to an ND position corresponding to an original location of the operand represented by the 1D work item. | 2013-01-10 |
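The decomposition and the inverse mapping described in this abstract are, in essence, row-major linearization of an N-D index space into P ordered work items and back. A minimal sketch (row-major ordering is an assumption; the abstract does not fix the ordering):

```python
# Flatten an N-D operand configuration into a 1-D work element of P
# sequentially ordered work items, and map a work-item index back to its
# original N-D position for producing the ND Range output.
def decompose(shape):
    p = 1
    for dim in shape:
        p *= dim
    return list(range(p))       # P sequentially ordered work items

def nd_position(index, shape):
    """Inverse map: 1-D work-item index -> original N-D location (row-major)."""
    coords = []
    for dim in reversed(shape):
        coords.append(index % dim)
        index //= dim
    return tuple(reversed(coords))
```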
20130013898 | Managing Multiple Threads In A Single Pipeline - In one embodiment, the present invention includes a method for determining if an instruction of a first thread dispatched from a first queue associated with the first thread is stalled in a pipestage of a pipeline, and if so, dispatching an instruction of a second thread from a second queue associated with the second thread to the pipeline if the second thread is not stalled. Other embodiments are described and claimed. | 2013-01-10 |
20130013899 | Using Hardware Transaction Primitives for Implementing Non-Transactional Escape Actions Inside Transactions - Mechanisms are provided for performing escape actions within transactions. These mechanisms execute a transaction comprising a transactional section and an escape action. The transactional section is comprised of one or more instructions that are to be executed in an atomic manner as part of the transaction. The escape action is comprised of one or more instructions to be executed in a non-transactional manner. These mechanisms further populate at least one actions list data structure, associated with a thread of the data processing system that is executing the transaction, with one or more actions associated with the escape action. Moreover, these mechanisms execute one or more actions in the actions list data structure based upon whether the transaction commits successfully or is aborted. | 2013-01-10 |
20130013900 | MULTI-THREAD PROCESSOR AND ITS HARDWARE THREAD SCHEDULING METHOD - A multi-thread processor includes a plurality of hardware threads each of which generates an independent instruction flow, a first thread scheduler that outputs a first thread selection signal, the first thread selection signal designating a hardware thread to be executed in a next execution cycle among the plurality of hardware threads according to a priority rank, the priority rank being established in advance for each of the plurality of hardware threads, a first selector that selects one of the plurality of hardware threads according to the first thread selection signal and outputs an instruction generated by the selected hardware thread, and an execution pipeline that executes an instruction output from the first selector. Whenever the hardware thread is executed in the execution pipeline, the first scheduler updates the priority rank for the executed hardware thread and outputs the first thread selection signal in accordance with the updated priority rank. | 2013-01-10 |
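The scheduler above selects a hardware thread by priority rank and updates that rank after each execution. One plausible update rule (the abstract leaves the exact rule unspecified) is to demote the executed thread to the lowest rank, which yields round-robin fairness:

```python
# First thread scheduler sketch: pick the highest-ranked hardware thread,
# then demote the executed thread to the back of the ranking.
class ThreadScheduler:
    def __init__(self, num_threads):
        self.ranking = list(range(num_threads))   # index 0 = highest priority

    def select(self):
        chosen = self.ranking[0]                  # first thread selection signal
        # update the priority rank for the executed hardware thread
        self.ranking = self.ranking[1:] + [chosen]
        return chosen
```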
20130013901 | SYSTEM AND APPARATUS FOR GROUP FLOATING-POINT INFLATE AND DEFLATE OPERATIONS - Systems and apparatuses are presented relating to a programmable processor comprising an execution unit that is operable to decode and execute instructions received from an instruction path and partition data stored in registers in the register file into multiple data elements, the execution unit capable of executing group data handling operations that re-arrange data elements in different ways in response to data handling instructions, the execution unit further capable of executing a plurality of different group floating-point and group integer arithmetic operations that each arithmetically operates on the multiple data elements stored in registers in the register file to produce a catenated result that is returned to a register in the register file, wherein the catenated result comprises a plurality of individual results. | 2013-01-10 |
20130013902 | DYNAMICALLY RECONFIGURABLE PROCESSOR AND METHOD OF OPERATING THE SAME - A dynamically reconfigurable processor which executes a series of processes on an instruction basis for respective instructions, comprises: a dynamically configurable computing unit; and a clock generating circuit, wherein start timing for processes in the series of processes is determined based on the main clock except for an instruction execution process of executing the instruction with the dynamically configurable computing unit, the instruction execution process of executing the instruction with the dynamically configurable computing unit includes a computing element generating sub-process of dynamically configuring, with dynamically configurable computing unit, a computing element corresponding to the instruction, and an operation sub-process of performing an operation according to the instruction with the computing element configured in the computing element generating sub-process, start timing for the computing element generating sub-process and the operation sub-process is determined based on the sub-clock, and the sub-clock is generated such that the computing element generating sub-process and the operation sub-process are completed before the start timing for a process which is to be executed immediately after the instruction execution process. | 2013-01-10 |
20130013903 | Multicore Processor and Method of Use That Adapts Core Functions Based on Workload Execution - A processor has multiple cores with each core having an associated function to support processor operations. The functions performed by the cores are selectively altered to improve processor operations by balancing the resources applied for each function. For example, each core comprises a field programmable array that is selectively and dynamically programmed to perform a function, such as a floating point function or a fixed point function, based on the number of operations that use each function. As another example, a processor is built with a greater number of cores than can be simultaneously powered, each core associated with a function, so that cores having functions with lower utilization are selectively powered down. | 2013-01-10 |
20130013904 | MOBILE COMPUTER CONTROL OF DESKTOP INPUT/OUTPUT FEATURES WITH MINIMAL OPERATING SYSTEM REQUIREMENT ON DESKTOP - A mobile device such as a smart phone can be connected to the USB port of a computer such as a laptop to charge the battery of the mobile device and to synchronize data. Also, when a special button is pressed the computer enters a mobile device support mode in which the computer processor does not boot the full service O.S. but only a small O.S., with the mobile device sending demanded images and sounds to the larger display and speakers of the computer and receiving input from the more capable keyboard of the computer so that a user can use the resources of the computer in operating the typically more limited mobile device. | 2013-01-10 |
20130013905 | BIOS FLASH ATTACK PROTECTION AND NOTIFICATION - A system and method for BIOS flash attack protection and notification. A processor initialization module, including initialization firmware verification module may be configured to execute first in response to a power on and/or reset and to verify initialization firmware stored in non-volatile memory in a processor package. The initialization firmware is configured to verify the BIOS. If the verification of the initialization firmware and/or the BIOS fails, the system is configured to select at least one of a plurality of responses including, but not limited to, preventing the BIOS from executing, initiating recovery, reporting the verification failure, halting, shutting down and/or allowing the BIOS to execute and an operating system (OS) to boot in a limited functionality mode. | 2013-01-10 |
20130013906 | SYSTEM AND METHOD FOR VALIDATING COMPONENTS DURING A BOOTING PROCESS - A method and system for validating components during a booting process of a computing device are described herein. The method can include the steps of detecting a power up signal and in response to detecting the power up signal, progressively determining whether software components of the computing device are valid. If the software components are determined to be valid, the computing device may be permitted to move to an operational state. If, however, at least some of the software components are determined to be not valid, the computing device may be prevented from moving to the operational state. In one arrangement, if the computing device is prevented from moving to the operational state, corrective action can be taken in an effort to permit the computing device to move to the operational state. | 2013-01-10 |
20130013907 | WIRELESS ROUTER REMOTE FIRMWARE UPGRADE - A wireless router receives a firmware update from a remote server, and destructively overwrites router firmware in flash memory in a chunk-wise manner, and then writes a kernel memory before going live with upgraded firmware. Some routers authenticate the firmware image. In some cases, image chunks are re-ordered into an executable order after receipt and before finishing their final arrangement in the flash memory. In some routers, a maximum firmware image size is at least two chunk sizes smaller than the flash memory storage capacity. Some routers remap ROM to RAM memory. Some decompress data from flash into a RAM. Some save text file configuration settings in flash before rebooting. Some detect a user's inactive billing status and redirect a web browser to a billing activation page. | 2013-01-10 |
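The chunk handling described above (re-order received chunks into executable order, then destructively overwrite flash chunk-wise) can be sketched as follows, with flash modeled as a bytearray and each chunk carried as a `(sequence_no, bytes)` pair; both representations are assumptions for illustration:

```python
# Re-order firmware chunks into executable order, then destructively
# overwrite the flash image chunk by chunk.
def apply_firmware(flash, chunks, chunk_size):
    ordered = sorted(chunks, key=lambda c: c[0])   # restore executable order
    for seq, data in ordered:
        off = seq * chunk_size
        flash[off:off + len(data)] = data          # overwrite in place
    return flash
```

The abstract's sizing rule (maximum image at least two chunk sizes smaller than the flash capacity) leaves headroom so an in-place overwrite like this cannot run past the end of flash mid-upgrade.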
20130013908 | PARALLELIZING MULTIPLE BOOT IMAGES WITH VIRTUAL MACHINES - A system and method are presented for converting a multi-boot computer to a virtual machine. Existing boot images on a multi-boot computer are identified and converted into virtual machine instances. Each virtual machine instance represents an operating system and is capable of running at the same time. Finally, a new hosting operating system is installed. The new hosting operating system launches and manages the converted virtual machine instances. | 2013-01-10 |
20130013909 | METHOD AND APPARATUS FOR ESTABLISHING SAFE PROCESSOR OPERATING POINTS - A system and method is provided for establishing safe processor operating points. Some embodiments may include a tamper resistant storage element that stores information regarding one or more operating points of an adjustable processor operating parameter. Some embodiments may further include an element to determine the current operating point of the operating parameter, and an element to compare the current operating point of the operating parameter with the stored information. | 2013-01-10 |
20130013910 | METHOD AND DEVICE FOR OPTIMIZING LOADING AND BOOTING OF AN OPERATING SYSTEM IN A COMPUTER SYSTEM VIA A COMMUNICATION NETWORK - The subject of the invention is in particular the optimization of the loading and booting of an operating system of a computer system via a communication network to which at least one server is connected. Said server comprises at least one image of a kernel of a minimal operating system and an image of an associated file system. The method comprises steps of loading said image of said kernel ( | 2013-01-10 |
20130013911 | Technique for Selecting a Frequency of Operation in a Processor System - The present disclosure relates to a technique for varying the frequency of operation of one or more cores in a processor device capable of operating at different frequencies and voltages. A method aspect of this technique includes executing one or more tasks on the at least one processor core, wherein the tasks are grouped into groups, monitoring usage of the at least one processor core by tasks in the groups on a per group basis, aggregating the monitored usage of the at least one processor core by individual groups across the groups to derive a load parameter indicative of the combined usage of the at least one processor core by the tasks in the groups, selecting a frequency of operation based upon the load parameter, and changing the frequency of operation of the at least one processor core to the selected frequency of operation. | 2013-01-10 |
20130013912 | Systems and Methods for Securing Media and Mobile Media Communications with Private Key Encryption and Multi-Factor Authentication - Systems and methods protect and secure one-path and/or multi-path data, media, multi-media, simulations, gaming, television and mobile media communications and their fixed or mobile devices over diverse networks with symmetric key rotation, various forms of encryption, and multiple factors of authentication to provide optimal security for the integrity of any media asset. The distribution of said media asset is driven through virtual servers with effective stealth or cloaked processes, rendering them invisible to outside attacks, and securing any media from internal theft during the distribution process. The systems and methods curtail the ability to copy and/or revise the protected media and are instrumental in preventing piracy of media assets over the Internet, intranets, or private networks. | 2013-01-10 |
20130013913 | ELECTRONIC DEVICE WITH MESSAGE ENCRYPTION FUNCTION AND MESSAGE ENCRYPTION METHOD - An electronic device with a message encryption function includes a configure interface module for setting an encryption code, a storage module, an encryption module, and a message processing module. The message processing module is electrically connected to the configure interface module, the storage module and the encryption module for receiving or sending a message, accessing the encryption code from the configure interface module, and transmitting the message and the encryption code to the encryption module. The encryption module encrypts the message with the encryption code so as to generate an encrypted message and then transmits the encrypted message to the message processing module. The message processing module stores the encrypted message in the storage module. | 2013-01-10 |
20130013914 | System and Method for Monitoring Secure Data on a Network - A system and method for monitoring secure digital data on a network are provided. An exemplary network monitoring system may include a network device in communication with a user and a network. Further, a server may be in communication with the network. A browser and monitoring program may be stored on the network device, and the network device may receive secure digital data from the network. The browser may convert the secure digital data or a portion thereof into source data, and the monitoring program may transfer the source data or a portion thereof to the server. In an exemplary embodiment, the monitoring program may include a service component and an interface program. | 2013-01-10 |
20130013915 | INTERNET PROTOCOL SECURITY (IPSEC) PACKET PROCESSING FOR MULTIPLE CLIENTS SHARING A SINGLE NETWORK ADDRESS - Embodiments of the present invention address deficiencies of the art in respect to secure communications for multiple hosts in an address translation environment and provide a method, system and computer program product for IPsec SA management for multiple clients sharing a single network address. In one embodiment, a computer implemented method for IPsec SA management for multiple hosts sharing a single network address can include receiving a packet for IPsec processing for a specified client among the multiple clients sharing the single network address. A dynamic SA can be located among multiple dynamic SAs for the specified client using client identifying information exclusive of a 5-tuple produced for the dynamic SA. Finally, IPsec processing can be performed for the packet. | 2013-01-10 |
20130013916 | Method and Apparatus for Verifiable Generation of Public Keys - The invention provides a method of verifiable generation of public keys. According to the method, a self-signed signature is first generated and then used as input to the generation of a pair of private and public keys. Verification of the signature proves that the keys are generated from a key generation process utilizing the signature. A certification authority can validate and verify a public key generated from a verifiable key generation process. | 2013-01-10 |
20130013917 | SYSTEM AND METHOD FOR ENABLING BULK RETRIEVAL OF CERTIFICATES - A system and method for searching and retrieving certificates, which may be used in the processing of encoded messages. In one embodiment, a certificate synchronization application is programmed to perform certificate searches by querying one or more certificate servers for all of the certificates on those certificate servers. If all of the certificates on a certificate server cannot be successfully retrieved using a single search query, due to a search quota on the certificate server being exceeded for example, the search is re-performed through multiple queries, each corresponding to a narrower subsearch. Embodiments described herein enable large amounts of certificates to be automatically searched for and retrieved from certificate servers, thereby minimizing the need for users to manually search for individual certificates. | 2013-01-10 |
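The retry strategy above (query for all certificates; if the server's search quota is exceeded, re-issue narrower subsearches) maps naturally onto prefix-splitting recursion. `QuotaExceeded`, the `search(prefix)` interface, and the toy server are all assumptions for illustration:

```python
# Quota-aware bulk retrieval: try one broad query first; on quota failure,
# split into narrower prefix subsearches and recurse.
class QuotaExceeded(Exception):
    pass

def fetch_all(server, prefix="", alphabet="abcdefghijklmnopqrstuvwxyz"):
    try:
        return server.search(prefix)            # single broad search query
    except QuotaExceeded:
        certs = []
        for ch in alphabet:                     # multiple narrower subsearches
            certs.extend(fetch_all(server, prefix + ch, alphabet))
        return certs

class ToyCertServer:
    """Stand-in certificate server that enforces a result quota per query."""
    def __init__(self, names, quota):
        self.names, self.quota = names, quota

    def search(self, prefix):
        hits = [n for n in self.names if n.startswith(prefix)]
        if len(hits) > self.quota:
            raise QuotaExceeded()
        return hits
```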
20130013918 | SYSTEM AND METHOD FOR RETRIEVING CERTIFICATES ASSOCIATED WITH SENDERS OF DIGITALLY SIGNED MESSAGES - A system and method for retrieving certificates and/or verifying the revocation status of certificates. In one embodiment, when a user opens a digitally signed message, a certificate that is required to verify the digital signature on the message may be automatically retrieved if it is not stored on the user's computing device (e.g. a mobile device), eliminating the need for users to initiate the task manually. Verification of the digital signature may also be automatically performed by the application after the certificate is retrieved. Verification of the revocation status of a certificate may also be automatically performed if it is determined that the time that has elapsed since the status was last updated exceeds a pre-specified limit. | 2013-01-10 |
20130013919 | UPDATING CERTIFICATE STATUS IN A SYSTEM AND METHOD FOR PROCESSING CERTIFICATES LOCATED IN A CERTIFICATE SEARCH - A system and method for processing certificates located in a certificate search. Certificates located in a certificate search are processed at a data server (e.g. a mobile data server) coupled to a computing device (e.g. a mobile device) to determine status data that can be used to indicate the status of those certificates to a user of the computing device. Selected certificates may be downloaded to the computing device for storage, and the downloaded certificates are tracked by the data server. This facilitates the automatic updating of the status of one or more certificates stored on the computing device by the data server, in which updated status data is pushed from the data server to the computing device. | 2013-01-10 |
20130013920 | DYNAMIC DATA-PROTECTION POLICIES WITHIN A REQUEST-REPLY MESSAGE QUEUING ENVIRONMENT - A request to process a request message using a request queue within a request-reply messaging environment is detected at a dynamic data protection module. At least one authorized sender module and a sole authorized recipient module of a response message to the request message is identified using a request queue policy of the request queue. A reply queue policy is dynamically created to process the response message using the identified at least one authorized sender module and the sole authorized recipient module of the response message. The dynamically-created reply queue policy is associated with a reply queue. The response message is processed responsive to a request to process the response message using the dynamically-created reply queue policy and the associated reply queue. | 2013-01-10 |
20130013921 | Methods and apparatus for secure data sharing - This disclosure relates to methods and apparatus for securely and easily sharing data over a communications network. As communications services on a communications network are continuously becoming cheaper, faster, and easier to use, more users are becoming receptive to the idea of sharing data over the communications network. However, although E-mails and web folders, to a certain degree, provide easy-to-use or secure data sharing mechanisms, none of the existing data sharing methods is both easy-to-use and highly secure. This disclosure provides methods and apparatus for easily and securely sharing data over a communications network. | 2013-01-10 |
20130013922 | SECURE DISSEMINATION OF EVENTS IN A PUBLISH/SUBSCRIBE NETWORK - Various embodiments of systems and methods to securely disseminate events in publish/subscribe network are described herein. One or more subscribers are authorized to receive events from a publisher through an authorize protocol carried out between the publisher, a trusted party and the one or more subscribers. A security token specific to a product associated with an event is provided, by the publisher, to the authorized one or more subscribers. Further, the event is encrypted using a public key of the trusted party, a security key of the publisher and a secret key of the publisher. The encrypted event is disseminated, by the publisher, in a publish/subscribe network. Furthermore, the encrypted event is received by the authorized one or more subscribers. The encrypted event is decrypted using the security token and an authorization key by the authorized one or more subscribers. | 2013-01-10 |
20130013923 | METHODS FOR OBTAINING AUTHENTICATION CREDENTIALS FOR ATTACHING A WIRELESS DEVICE TO A FOREIGN 3GPP WIRELESS DOMAIN - A method for obtaining authentication credentials for attaching a wireless device to a foreign wireless domain in a 3rd Generation Partnership Project (3GPP) communication system, which includes: receiving an attach request message from the wireless device; and responsive to the attach request message, authenticating the wireless device and retrieving a set of authentication vectors, wherein the authentication vectors are for authenticating the wireless device to the foreign wireless domain. The method further includes encrypting the set of authentication vectors using a first security key of a home wireless domain of the wireless device. In addition, the method includes encrypting the first security key using a second security key of the foreign wireless domain and sending the encrypted set of authentication vectors and the encrypted first security key to the wireless device. | 2013-01-10 |
20130013924 | DYNAMIC DATA-PROTECTION POLICIES WITHIN A REQUEST-REPLY MESSAGE QUEUING ENVIRONMENT - A request to process a request message using a request queue within a request-reply messaging environment is detected at a dynamic data protection module. At least one authorized sender module and a sole authorized recipient module of a response message to the request message is identified using a request queue policy of the request queue. A reply queue policy is dynamically created to process the response message using the identified at least one authorized sender module and the sole authorized recipient module of the response message. The dynamically-created reply queue policy is associated with a reply queue. The response message is processed responsive to a request to process the response message using the dynamically-created reply queue policy and the associated reply queue. | 2013-01-10 |
20130013925 | System and Method for Authentication via a Proximate Device - Techniques are provided to authenticate components in a system. Users may enter credentials into an input device and the credentials may be authenticated and/or securely transmitted to the components. The components may then provide the credentials to a server in the system. Strong authentication may thus be provided to the effect that credentials associated with specific users have been received from specific components in the system. The server may then enable the components to access selected services. | 2013-01-10 |
20130013926 | Method and Apparatus for Device-to-Device Key Management - Various methods for device-to-device key management are provided. One example method includes receiving a communication mode change command requesting a mode change to device-to-device communications, and generating a local device security key based on a secret key and a base value. The local device security key may be configured for use in device-to-device communications. The example method may also include receiving a security key combination value, and deconstructing the security key combination value using the local device security key to determine a peer device security key. The peer device security key may be configured for use in device-to-device communications. Similar and related example methods and example apparatuses are also provided. | 2013-01-10 |
20130013927 | Automated Entity Verification - Some embodiments provide a verification system for automated verification of entities. The verification system automatedly verifies entities using a two part verification campaign. One part verifies that the entity is the true owner of the entity account to be verified. This verification step involves (1) the entity receiving a verification code at the entity account and returning the verification code to the verification system, (2) the entity associating an account that it has registered at a service provider to an account that the verification system has registered at the service provider, or (3) both. Another part verifies the entity can respond to communications that are sent to methods of contact that have been previously verified as belonging to the entity. The verification system submits a first communication with a code using a verified method of contact. The verification system then monitors for a second communication to be returned with the code. | 2013-01-10 |
20130013928 | Secure Credential Unlock Using Trusted Execution Environments - Computing devices utilizing trusted execution environments as virtual smart cards are designed to support expected credential recovery operations when a user credential, e.g., personal identification number (PIN), password, etc. has been forgotten or is unknown. A computing device generates a cryptographic key that is protected with a PIN unlock key (PUK) provided by an administrative entity. If the user PIN cannot be input to the computing device the PUK can be input to unlock the locked cryptographic key and thereby provide access to protected data. A computing device can also, or alternatively, generate a group of challenges and formulate responses thereto. The formulated responses are each used to secure a computing device cryptographic key. If the user PIN cannot be input to the computing device an entity may request a challenge. The computing device issues a challenge from the set of generated challenges. Upon receiving a valid response back, the computing device can unlock the secured computing device cryptographic key associated with the issued challenge and subsequently provide access to protected data. | 2013-01-10 |
20130013929 | PROJECTOR SYSTEM - A projector system includes an information processing apparatus and a projector. The projector includes a device connection unit which enables communication between the information processing apparatus and the projector, a password generating unit which generates a password, and an encryption unit which encrypts the password and outputs the encrypted password to the information processing apparatus through the device connection unit. The information processing apparatus includes a device connection unit which enables communication between the projector and the information processing apparatus, a decryption unit which decrypts the encrypted password input through the device connection unit of the information processing apparatus using a decryption key, and a password determining unit which has functions of determining whether the decrypted password is correct and, in a case where the decrypted password is correct, outputting a signal directing the information processing apparatus to start the projection process. | 2013-01-10 |
20130013930 | Data Encryption Management - A method, computer program product, and apparatus for managing encrypted data are provided. A respective set of sectors in each page of the volume is selected for storing data based on a respective key in a number of keys responsive to receiving a request to store the data in the volume and an identification of the number of keys with which users are allowed to store the data in the volume. Selection of the respective set of sectors is a function of a value of the respective key and a number of available sectors within a page and the volume is much larger than the data. The data is encrypted using the respective key to form the encrypted data. The encrypted data is stored in the respective set of sectors in the page in the volume. | 2013-01-10 |
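The sector selection described in 20130013930 — choosing sectors as a function of the key value and the number of available sectors in a page — might look something like the sketch below. The hash choice, sector counts, and derivation are assumptions for illustration, not the patented scheme:

```python
import hashlib

def select_sectors(key: bytes, page_sectors: int, needed: int) -> list[int]:
    """Deterministically derive `needed` distinct sector indices within a
    page from the key, so the same key always maps to the same sectors."""
    chosen: list[int] = []
    counter = 0
    while len(chosen) < needed:
        # Hash the key with a counter to get a stream of candidate indices.
        digest = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        sector = int.from_bytes(digest[:4], "big") % page_sectors
        if sector not in chosen:
            chosen.append(sector)
        counter += 1
    return chosen
```

Because the derivation is deterministic, a reader holding the same key can recompute which sectors in the page carry its data, while other sectors in the much larger volume remain unused or hold data for other keys.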
20130013931 | SECURE FILE SHARING METHOD AND SYSTEM - Systems and methods are provided for securely sharing data. A processor forms two or more shares of a data set encrypted with a symmetric key, the data set associated with a first user device, and causes the encrypted data set shares to be stored separately from each other in at least one remote storage location. The processor generates first and second encrypted keys by encrypting data indicative of the symmetric key with a first asymmetric key of first and second asymmetric key pairs associated with the first user device and a second user device, respectively, and causes the encrypted keys to be stored in the at least one storage location. To restore the data set, a predetermined number of the two or more encrypted data set shares and at least one of the second asymmetric keys of the first and second asymmetric key pairs are needed. | 2013-01-10 |
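The share-forming step in 20130013931 can be illustrated with a minimal 2-of-2 XOR split, assuming the data set has already been symmetrically encrypted. This is a simplification: the filing allows a predetermined threshold of two or more shares, whereas an XOR split requires every share:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_into_shares(encrypted: bytes) -> tuple[bytes, bytes]:
    """2-of-2 split: the first share is a random pad, the second is the
    ciphertext XORed with that pad. Neither share alone reveals anything."""
    pad = secrets.token_bytes(len(encrypted))
    return pad, xor_bytes(encrypted, pad)

def join_shares(share1: bytes, share2: bytes) -> bytes:
    """Recombine both shares to recover the encrypted data set."""
    return xor_bytes(share1, share2)
```

Real deployments of this pattern would use a threshold scheme such as Shamir secret sharing for k-of-n reconstruction, and would wrap the symmetric key with each device's asymmetric key pair as the abstract describes.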
20130013932 | SECURITY MANAGEMENT SYSTEM AND METHOD FOR LOCATION-BASED MOBILE DEVICE - A method and a system of managing information security for a mobile device in a restricted area based on location information regarding the mobile device are provided. The method includes receiving, by the mobile device, a request for the execution of an application program in a restricted area from a server managing the restricted area, executing, by the mobile device, the application program requested for execution when the program was set to be executable according to a security policy set to the restricted area, encrypting, by the mobile device, a file, created according to the execution of the application program, based on location information regarding the mobile device, and storing the encrypted file. | 2013-01-10 |
20130013933 | System and Method for Protecting Data on a Mobile Device - Methods and systems are disclosed for protecting data on a mobile device. A data protection module on the mobile device receives a transmission including a secret key. The secret key is used in encrypting data on the device and is then deleted. Subsequent to an event detectable to the mobile device, the data protection module receives another transmission including said secret key. The secret key is then used to decrypt the encrypted data. | 2013-01-10 |
20130013934 | Infinite Key Memory Transaction Unit - A system for providing high security for data stored in memories in computer systems is disclosed. A different encryption key is used for every memory location, and a write counter hides rewriting of the same data to a given location. As a result, the data for every read or write transaction between the microprocessor and the memory is encrypted differently for each transaction for each address, thereby providing a high level of security for the data stored. | 2013-01-10 |
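The idea in 20130013934 — a distinct key per memory location plus a write counter so that rewriting identical data yields different ciphertext — can be sketched as below. The hash-derived pad, the master key, and the cell layout are illustrative assumptions, not the patented hardware design:

```python
import hashlib

MASTER_KEY = b"demo-master-key"  # hypothetical device secret

class EncryptedMemory:
    """Toy model: every (address, write-counter) pair derives a fresh pad,
    so the same plaintext written twice to one address encrypts differently."""

    def __init__(self):
        self.cells = {}  # address -> (write counter, ciphertext)

    def _pad(self, address: int, counter: int, length: int) -> bytes:
        material = MASTER_KEY + address.to_bytes(8, "big") + counter.to_bytes(8, "big")
        return hashlib.sha256(material).digest()[:length]

    def write(self, address: int, plaintext: bytes) -> None:
        counter = self.cells.get(address, (0, b""))[0] + 1  # bump per-location counter
        pad = self._pad(address, counter, len(plaintext))
        self.cells[address] = (counter, bytes(a ^ b for a, b in zip(plaintext, pad)))

    def read(self, address: int) -> bytes:
        counter, ciphertext = self.cells[address]
        pad = self._pad(address, counter, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, pad))
```

An observer of the stored bits cannot tell that the same value was rewritten, because the counter changes the pad on every write to that address.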
20130013935 | Power supply system for an electronic flight bag - A control system for providing electrical power to an electronic flight bag device on an aircraft. The control system includes a power switching component coupled to a plurality of power sources and at least one electronic flight bag device. The power switching component is configured to selectively apply electrical power from at least one of the plurality of power sources to the at least one electronic flight bag device based upon a condition of the aircraft. | 2013-01-10 |
20130013936 | DYNAMIC POWER MANAGEMENT SYSTEM FOR UNIVERSAL SERIAL BUS (USB) HUB AND METHOD THEREOF - A dynamic power management system for a USB hub and a method thereof are described. The dynamic power management system includes a host device, a power unit and a hub device. A power management module disposed in the hub device dynamically adjusts the power-supplying statuses of ports in the hub device and further reduces the cost of the power transformer externally connected to the hub device. | 2013-01-10 |
20130013937 | Information Processing Device and Method for Starting Up Information Processing Device - After a power switch | 2013-01-10 |
20130013938 | ACCESSORY ID RECOGNITION BY POWER CYCLING - Various embodiments are described herein for a peripheral device and a method of identifying the peripheral device via power cycling. In one embodiment, the method comprises obtaining characteristic information about the peripheral device, encoding the characteristic information in a power signal at the peripheral device and sending the power signal to an electronic device that is operably connected with the peripheral device. The electronic device can then take action such as adjusting its settings or applications based on the characteristic information of the peripheral device. | 2013-01-10 |
20130013939 | INFORMATION PROCESSING APPARATUS CAPABLE OF BEING INSTRUCTED TO POWER OFF BY A COMMAND FROM EXTERNAL APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - An image processing apparatus that is capable of being instructed to power off, by a power switch or a command from an external apparatus, and is capable of executing the restart thereof under appropriate conditions. When power-off is instructed, shutdown is started. Upon completion of the shutdown, if the power switch is on, and at the same time the power-off has been instructed by the power switch of the apparatus, the restart of the apparatus is executed, whereas upon completion of the shutdown, if the power-off has been instructed by a command from the external apparatus, the restart of the apparatus is not executed. | 2013-01-10 |
20130013940 | SYSTEMS AND METHODS FOR PROVIDING DEVICE-TO-DEVICE HANDSHAKING THROUGH A POWER SUPPLY SIGNAL - Handshaking circuits are provided in a communications cable and in a device operable to be mated with the communications cable. Before a device can utilize the power supply signal of such a communications channel, the two handshaking circuits must sufficiently identify one another over a power supply signal with a decreased voltage. The decreased voltage allows for a cable plug to be provided with a safe, protected power that cannot cause harm to a human. The decreased voltage also reduces the chance that a device can receive a primary power supply signal from the cable before the device sufficiently identifies itself. Accordingly, a laptop may be connected to a portable music player, but the voltage of the power supply signal provided by the laptop to the cable may be decreased on-cable until the handshaking circuit of the portable music player sufficiently performs a handshaking operation with an on-cable handshaking circuit. | 2013-01-10 |
20130013941 | ON-DEMAND STORAGE SYSTEM ENERGY SAVINGS - Embodiments of the invention relate to dynamic power management of storage volumes and disk arrays in a storage subsystem to mitigate loss of performance resulting from the power management. The volumes and arrays are prioritized, and in real-time power is selectively reduced in response to both the prioritization and an energy savings goal. A feedback loop is provided to dynamically measure associated power gain based upon a lowering of power consumption, and device selection may be adjusted based upon received feedback. | 2013-01-10 |
20130013942 | Information Processing Device and Method for Controlling Information Processing Device - A first battery | 2013-01-10 |
20130013943 | ON-DEMAND STORAGE SYSTEM ENERGY SAVINGS - Embodiments of the invention relate to dynamic power management of storage volumes and disk arrays in a storage subsystem to mitigate loss of performance resulting from the power management. The volumes and arrays are prioritized, and in real-time power is selectively reduced in response to both the prioritization and an energy savings goal. A feedback loop is provided to dynamically measure associated power gain based upon a lowering of power consumption, and device selection may be adjusted based upon received feedback. | 2013-01-10 |
20130013944 | MULTIPROCESSOR SYSTEM AND CONTROL METHOD THEREOF, AND COMPUTER-READABLE MEDIUM - A multiprocessor system configured to share processes by a main system having a first processor and a subsystem having a second processor, comprises a first shared memory configured to receive accesses from the main system and the subsystem, a second memory configured to receive access from the subsystem at a power saving mode, a stop unit configured to stop accesses from the main system and the subsystem to the first shared memory when the subsystem enters the power saving mode, and a switching unit configured to switch an access destination of the subsystem from the first shared memory to the second memory when the subsystem enters the power saving mode. | 2013-01-10 |
20130013945 | METHOD AND APPARATUS FOR A ZERO VOLTAGE PROCESSOR SLEEP STATE - Embodiments of the invention relate to a method and apparatus for a zero voltage processor sleep state. A processor may include a dedicated cache memory. A voltage regulator may be coupled to the processor to provide an operating voltage to the processor. During a transition to a zero voltage power management state for the processor, the operational voltage applied to the processor by the voltage regulator may be reduced to approximately zero and the state variables associated with the processor may be saved to the dedicated cache memory. | 2013-01-10 |