24th week of 2014 patent application highlights part 73 |
Patent application number | Title | Published |
20140164686 | MOBILE DEVICE AND METHOD OF MANAGING DATA USING SWAP THEREOF - A mobile device includes a storage configured to store data, a buffer memory configured to include a swap victim buffer area and a normal data area, and an application processor configured to select page data to be swapped from the normal data area and to perform a swapping operation on the selected page data. The swapping operation performs an instant swapping operation or a lazy swapping operation according to a data type of the selected page data. | 2014-06-12 |
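The swap-mode selection in 20140164686 amounts to routing a victim page by its data type. A minimal Python sketch, assuming hypothetical type names ("anonymous" swapped instantly, "file-backed" swapped lazily) that are illustrative and not from the application:

```python
# Page types are hypothetical stand-ins for the application's "data type";
# both destinations are plain lists standing in for buffer areas.
INSTANT_TYPES = {"anonymous"}     # assumed: swap immediately
LAZY_TYPES = {"file-backed"}      # assumed: defer the write-out

def swap_page(page_type, victim_buffer, lazy_queue, page):
    """Route a selected victim page to the instant or lazy swap path."""
    if page_type in INSTANT_TYPES:
        victim_buffer.append(page)
        return "instant"
    if page_type in LAZY_TYPES:
        lazy_queue.append(page)
        return "lazy"
    raise ValueError(f"unknown page type: {page_type}")
```

In the device described, instant-swapped pages would move through the swap victim buffer area of the buffer memory; here both paths are just lists.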
20140164687 | MEMORY CONTROLLER AND DATA MANAGEMENT METHOD THEREOF - The present invention provides a flash memory controller for mapping the logical addresses to the physical addresses of memory including a plurality of blocks, each having a plurality of pages, wherein the memory controller includes a processor. The processor includes a hot page decision unit and an address translation unit. The hot page decision unit classifies pages in each block into hot pages and cold pages based on a predetermined criterion. When there is a plurality of the classified hot pages, the address translation unit respectively arranges the classified hot pages in different target blocks. | 2014-06-12 |
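The hot/cold classification and placement in 20140164687 can be sketched with a simple write-count threshold standing in for the "predetermined criterion" (an assumption), spreading hot pages round-robin so consecutive hot pages land in different target blocks:

```python
def classify_and_place(pages, hot_threshold, num_target_blocks):
    """Classify each page as hot or cold by write count (a simple threshold
    stands in for the abstract's 'predetermined criterion'), and assign hot
    pages to different target blocks round-robin; cold pages stay put (None)."""
    placement = {}
    hot_index = 0
    for page_id, write_count in pages.items():
        if write_count >= hot_threshold:
            placement[page_id] = hot_index % num_target_blocks
            hot_index += 1
        else:
            placement[page_id] = None
    return placement
```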
20140164688 | SOC SYSTEM AND METHOD FOR OPERATING THE SAME - A SOC system includes a central processing unit; a memory management unit receiving a virtual address from the central processing unit and converting the virtual address into a physical address; a main memory implemented by a volatile memory and directly accessed through the physical address converted by the memory management unit; and a storage implemented by a nonvolatile memory separate from the main memory and including a first area directly accessed through the physical address converted by the memory management unit. | 2014-06-12 |
20140164689 | SYSTEM AND METHOD FOR MANAGING PERFORMANCE OF A COMPUTING DEVICE HAVING DISSIMILAR MEMORY TYPES - Systems and methods are provided for managing performance of a computing device having dissimilar memory types. An exemplary embodiment comprises a method for interleaving dissimilar memory devices. The method involves determining an interleave bandwidth ratio comprising a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio. Memory address requests are distributed from one or more processing units to the dissimilar memory devices according to the interleave bandwidth ratio. | 2014-06-12 |
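The interleave bandwidth ratio of 20140164689 amounts to distributing address requests across dissimilar devices in proportion to their bandwidths. A sketch using an exact credit scheme, which is one possible realization rather than the patent's own:

```python
from fractions import Fraction

def interleave_pattern(bandwidths, length):
    """Assign `length` consecutive memory address requests to devices in
    proportion to their bandwidths: each step every device earns
    bandwidth/total credit, and the device with the most credit wins."""
    total = sum(bandwidths)
    credits = [Fraction(0)] * len(bandwidths)
    pattern = []
    for _ in range(length):
        for i, bw in enumerate(bandwidths):
            credits[i] += Fraction(bw, total)
        winner = max(range(len(bandwidths)), key=lambda i: credits[i])
        credits[winner] -= 1
        pattern.append(winner)
    return pattern
```

For bandwidths in a 2:1 ratio, six consecutive requests split 4:2 across the two devices.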
20140164690 | SYSTEM AND METHOD FOR ALLOCATING MEMORY TO DISSIMILAR MEMORY DEVICES USING QUALITY OF SERVICE - Systems and methods are provided for allocating memory to dissimilar memory devices. An exemplary embodiment includes a method for allocating memory to dissimilar memory devices. An interleave bandwidth ratio is determined, which comprises a ratio of bandwidths for two or more dissimilar memory devices. The dissimilar memory devices are interleaved according to the interleave bandwidth ratio to define two or more memory zones having different performance levels. Memory address requests are allocated to the memory zones based on a quality of service (QoS). | 2014-06-12 |
20140164691 | MEMORY ARCHITECTURE FOR DISPLAY DEVICE AND CONTROL METHOD THEREOF - A memory architecture for a display device and a control method thereof are provided. The memory architecture includes a display data memory and a memory controller. The display data memory includes N sub-memories and N×M arbiters, wherein N is a positive integer and M is a positive integer equal to or greater than 2. Each sub-memory includes M memory blocks divided by an address. Each set of M arbiters is coupled to the M memory blocks of one sub-memory. The memory controller, coupled to the N×M arbiters, generates N×M sets of request signals and output address signals according to a set of an input request signal and an input address signal, and transmits them to the N×M arbiters to sequentially control the N×M arbiters. | 2014-06-12 |
20140164692 | MANAGING ERRORS IN A DRAM BY WEAK CELL ENCODING - This disclosure includes a method for preventing errors in a DRAM (dynamic random access memory) due to weak cells that includes determining the location of a weak cell in a DRAM row, receiving data to write to the DRAM, and encoding the data into a bit vector to be written to memory. For each weak cell location, the corresponding bit from the bit vector is equal to the reliable logic state of the weak cell and the bit vector is longer than the data. | 2014-06-12 |
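The weak-cell encoding of 20140164692 pins each known weak-cell position to its reliable logic state and threads the data bits through the remaining cells, which is why the bit vector must be longer than the data. A sketch of one such encoding, with the pad-with-zero choice being an assumption:

```python
def encode_row(data_bits, weak_cells, row_len):
    """Build the row-length bit vector: each weak cell (position -> reliable
    state) is pinned to its reliable value, and the data bits fill the
    remaining strong cells in order; unused strong cells are padded with 0."""
    assert row_len >= len(data_bits) + len(weak_cells)
    vector = [None] * row_len
    for pos, reliable_state in weak_cells.items():
        vector[pos] = reliable_state
    bits = iter(data_bits)
    for i in range(row_len):
        if vector[i] is None:
            vector[i] = next(bits, 0)
    return vector

def decode_row(vector, weak_cells, data_len):
    """Recover the data by reading the strong cells back in order."""
    strong = [b for i, b in enumerate(vector) if i not in weak_cells]
    return strong[:data_len]
```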
20140164693 | METHOD OF WRITING A FILE TO A PLURALITY OF MEDIA AND A STORAGE SYSTEM THEREOF - According to one embodiment, a method for writing a file to a plurality of media includes loading a parent medium into a first drive to retrieve ID information about the parent medium from metadata, writing a first file part to the parent medium and, at about a same time, saving a file name, attribute information, and attribute information about the first file part to the parent medium as metadata, loading a child medium into a second drive in order to write subsequent file parts and retrieving ID information about the child medium from metadata, writing the subsequent file parts to the child medium and, at about a same time, saving the ID information and attribute information about the subsequent file parts to the parent medium, and additionally saving the ID information about the child medium and the attribute information about the subsequent file parts as metadata in the child medium. | 2014-06-12 |
20140164694 | DECOUPLED RELIABILITY GROUPS - Methods and apparatuses for updating members of a data storage reliability group are provided. In one exemplary method, a reliability group includes a data zone in a first storage node and a checksum zone in a second data storage node. The method includes updating a version counter associated with the data zone in response to destaging a data object from a staging area of the data zone to a store area of the data zone without synchronizing the destaging with the state of the checksum zone. The method further includes transmitting, from the data zone to the checksum zone, an update message indicating completion of the destaging of the data object, wherein the update message includes a current value of the version counter. | 2014-06-12 |
20140164695 | METHOD AND SYSTEM FOR STORING AND REBUILDING DATA - According to one exemplary embodiment, a method for storing and rebuilding data computes a corresponding parity after receiving an Input/Output command, and based on the parity, determines whether a final stripe corresponding to the Input/Output command is a full stripe. When the final stripe is a full stripe, a plurality of data and a parity corresponding to the Input/Output command are stored into a main hyper erase unit (HEU) in a disk storage system. When the final stripe is not a full stripe, a final parity is re-computed and written into at least two parity pages of a buffering HEU. | 2014-06-12 |
20140164696 | STORING ROW-MAJOR DATA WITH AN AFFINITY FOR COLUMNS - A method, device, and computer readable medium for striping rows of data across logical units of storage with an affinity for columns is provided. Alternately, a method, device, and computer readable medium for striping columns of data across logical units of storage with an affinity for rows is provided. When data of a logical slice is requested, a mapping may provide information for determining which logical unit is likely to store the logical slice. In one embodiment, data is retrieved from logical units that are predicted to store the logical slice. In another embodiment, data is retrieved from several logical units, and the data not mapped to the logical unit is removed from the retrieved data. | 2014-06-12 |
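The column-affinity striping of 20140164696 can be modeled with a simple column-to-unit mapping (here `c % num_units`, an illustrative choice), so that an entire column can be read from the one logical unit the mapping predicts:

```python
def store_with_column_affinity(rows, num_units):
    """Stripe row-major data so every value of column c lands on logical
    unit c % num_units; the mapping is what lets a reader predict which
    unit stores a requested logical slice."""
    units = [dict() for _ in range(num_units)]
    for r, row in enumerate(rows):
        for c, value in enumerate(row):
            units[c % num_units][(r, c)] = value
    return units

def read_column(units, c, num_rows):
    """Fetch a whole column from the single unit the mapping predicts."""
    unit = units[c % len(units)]
    return [unit[(r, c)] for r in range(num_rows)]
```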
20140164697 | Mainframe Storage Apparatus That Utilizes Thin Provisioning - Each actual page inside a pool is configured from a plurality of actual tracks, and each virtual page inside a virtual volume is configured from a plurality of virtual tracks. A storage control apparatus of a mainframe system has management information that includes information denoting a track in which there exists a user record, which is a record including user data (the data used by a host apparatus of a mainframe system). Based on the management information, a controller identifies an actual page that is configured only from tracks that do not comprise the user record, and cancels the allocation of the identified actual page to the virtual page. | 2014-06-12 |
20140164698 | Logical Volume Transfer Method and Storage Network System - The present invention transfers replication logical volumes between and among storage control units in a storage system comprising storage control units. To transfer replication logical volumes from a storage control unit to a storage control unit, a virtualization device sets a path to the storage control unit. The storage control unit prepares a differential bitmap in order to receive access requests. When the preparation completes, the virtualization device makes access requests to the storage control unit. The storage control unit hands over the access requests to the storage control unit. The storage control unit performs a process so that the access requests are reflected in a disk device and performs an emergency destage of storing data in a cache memory into disk device. When the emergency destage ends, the storage control unit connects to an external storage control unit and hands over access requests to the external storage control unit. | 2014-06-12 |
20140164699 | EFFICIENTLY ACCESSING AN ENCODED DATA SLICE UTILIZING A MEMORY BIN - A method begins by receiving encoded data slices for storage. At least some of the encoded data slices have different data sizes. The method continues by accessing memory container information of the storage unit that includes a listing of virtual memory containers of the storage unit and, for each virtual memory container, bin identifier information. Each virtual memory container is divided into bins, where the bins of a virtual memory container are of a substantially similar storage size. At least some of the virtual memory containers have different bin storage sizes. The method continues by mapping encoded data slices to virtual memory containers of the plurality based on data size of the encoded data slices and bin storage sizes of the virtual memory containers. The method continues by storing the encoded data slices in the virtual memory containers based on the mapping. | 2014-06-12 |
20140164700 | SYSTEM AND METHOD OF DETECTING CACHE INCONSISTENCIES - A system and method of detecting cache inconsistencies among distributed data centers is described. Key-based sampling captures a complete history of a key for comparing cache values across data centers. In one phase of a cache inconsistency detection algorithm, a log of operations performed on a sampled key is compared in reverse chronological order for inconsistent cache values. In another phase, a log of operations performed on a candidate key having inconsistent cache values as identified in the previous phase is evaluated in near real time in forward chronological order for inconsistent cache values. In a confirmation phase, a real time comparison of actual cache values stored in the data centers is performed on the candidate keys identified by both the previous phases as having inconsistent cache values. An alert is issued that identifies the data centers in which the inconsistent cache values were reported. | 2014-06-12 |
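Phase one of the inconsistency detection in 20140164700 compares per-key operation logs in reverse chronological order. A simplified sketch that flags keys whose most recent written values disagree across data centers, ignoring the in-flight cases that the later phases of the described algorithm handle:

```python
def find_candidate_keys(logs_by_dc):
    """Walk each data center's (timestamp, key, value) log newest-first and
    record the latest value written per key; keys whose latest values
    disagree between data centers become candidates for the later phases."""
    latest = {}
    for dc, log in logs_by_dc.items():
        seen = set()
        for ts, key, value in sorted(log, reverse=True):
            if key not in seen:
                seen.add(key)
                latest.setdefault(key, {})[dc] = value
    return {k for k, vals in latest.items() if len(set(vals.values())) > 1}
```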
20140164701 | VIRTUAL MACHINES FAILOVER - Disclosed is a computer system ( | 2014-06-12 |
20140164702 | VIRTUAL ADDRESS CACHE MEMORY, PROCESSOR AND MULTIPROCESSOR - An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to, when a rewrite to a TLB occurs, rewrite entry data; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to, when the rewrite to the TLB occurs, rewrite a held physical address; and a second physical address memory configured to, when the cache data is written to the data memory after the occurrence of the rewrite to the TLB, rewrite a held physical address. | 2014-06-12 |
20140164703 | CACHE SWIZZLE WITH INLINE TRANSPOSITION - A method and circuit arrangement selectively swizzle data in one or more levels of cache memory coupled to a processing unit based upon one or more swizzle-related page attributes stored in a memory address translation data structure such as an Effective To Real Translation (ERAT) or Translation Lookaside Buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a memory access request for data in a memory page, such that attributes associated with the memory page in the data structure may be used to control whether data is swizzled, and if so, how the data is to be formatted in association with handling the memory access request. | 2014-06-12 |
20140164704 | CACHE SWIZZLE WITH INLINE TRANSPOSITION - A method and circuit arrangement selectively swizzle data in one or more levels of cache memory coupled to a processing unit based upon one or more swizzle-related page attributes stored in a memory address translation data structure such as an Effective To Real Translation (ERAT) or Translation Lookaside Buffer (TLB). A memory address translation data structure may be accessed, for example, in connection with a memory access request for data in a memory page, such that attributes associated with the memory page in the data structure may be used to control whether data is swizzled, and if so, how the data is to be formatted in association with handling the memory access request. | 2014-06-12 |
20140164705 | PREFETCH WITH REQUEST FOR OWNERSHIP WITHOUT DATA - A method performed by a processor is described. The method includes executing an instruction. The instruction has an address as an operand. The executing of the instruction includes sending a signal to cache coherence protocol logic of the processor. In response to the signal, the cache coherence protocol logic issues a request for ownership of a cache line at the address. The cache line is not in a cache of the processor. The request for ownership also indicates that the cache line is not to be sent to the processor. | 2014-06-12 |
20140164706 | MULTI-CORE PROCESSOR HAVING HIERARCHICAL CACHE ARCHITECTURE - Disclosed is a multi-core processor having hierarchical cache architecture. A multi-core processor may comprise a plurality of cores, a plurality of first caches independently connected to each of the plurality of cores, at least one second cache respectively connected to at least one of the plurality of first caches, a plurality of third caches respectively connected to at least one of the plurality of cores, and at least one fourth cache respectively connected to at least one of the plurality of third caches. Therefore, overhead in communications between cores may be reduced, and processing speed of applications may be increased by supporting data-level parallelization. | 2014-06-12 |
20140164707 | MITIGATING CONFLICTS FOR SHARED CACHE LINES - A computer program product for mitigating conflicts for shared cache lines between an owning core currently owning a cache line and a requestor core. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes determining whether the owning core is operating in a transactional or non-transactional mode and setting a hardware-based reject threshold at a first or second value with the owning core determined to be operating in the transactional or non-transactional mode, respectively. The method further includes taking first or second actions to encourage cache line sharing between the owning core and the requestor core in response to a number of rejections of requests by the requestor core reaching the reject threshold set at the first or second value, respectively. | 2014-06-12 |
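The mode-dependent reject threshold of 20140164707 can be sketched as a small per-line arbiter. The numeric thresholds below are illustrative assumptions; the point is that a higher threshold protects a transactional owner from having sharing forced on it too eagerly:

```python
class LineArbiter:
    """Per-cache-line arbiter: the reject threshold is set higher when the
    owning core is operating in transactional mode and lower otherwise.
    The threshold values are assumptions, not from the application."""
    def __init__(self, owning_transactional, tx_threshold=8, non_tx_threshold=2):
        self.threshold = tx_threshold if owning_transactional else non_tx_threshold
        self.rejections = 0

    def on_request(self, owner_still_busy):
        """Reject requests until the threshold is reached, then take the
        action that encourages sharing of the line with the requestor."""
        if owner_still_busy and self.rejections < self.threshold:
            self.rejections += 1
            return "reject"
        return "force-share"
```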
20140164708 | SPILL DATA MANAGEMENT - A processor discards spill data from a memory hierarchy after the final access to the spill data has been performed by a compiled program executing at the processor. In some embodiments, the final access is determined based on a special-purpose load instruction configured for this purpose. In some embodiments, the determination is made based on the location of a stack pointer indicating that a method of the executing program has returned, so that data of the returned method that remains in the stack frame is no longer to be accessed. Because the spill data is discarded after the final access, it is not transferred through the memory hierarchy. | 2014-06-12 |
20140164709 | VIRTUAL MACHINE FAILOVER - Disclosed is a computer system ( | 2014-06-12 |
20140164710 | VIRTUAL MACHINES FAILOVER - Disclosed is a computer system ( | 2014-06-12 |
20140164711 | Configuring a Cache Management Mechanism Based on Future Accesses in a Cache - The described embodiments include a cache controller that configures a cache management mechanism. In the described embodiments, the cache controller is configured to monitor at least one structure associated with a cache to determine at least one cache block that may be accessed during a future access in the cache. Based on the determination of the at least one cache block that may be accessed during a future access in the cache, the cache controller configures the cache management mechanism. | 2014-06-12 |
20140164712 | DATA PROCESSING APPARATUS AND CONTROL METHOD THEREOF - A cache memory device includes a data array structure including a plurality of entries identified by indices and including, for each entry, data acquired by a fetch operation or prefetch operation and a reference count associated with the data. The reference count holds a value obtained by subtracting a count at which the entry has been referred to by the fetch operation, from a count at which the entry has been referred to by the prefetch operation. As for an entry created by the prefetch operation, a prefetch device inhibits replacement of the entry until the value of the reference count of the entry becomes 0. | 2014-06-12 |
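The reference count in 20140164712 holds (prefetch references) minus (fetch references), and an entry created by prefetch is protected from replacement until the count returns to 0. A sketch:

```python
class PrefetchEntry:
    """Cache entry whose reference count = prefetch references minus fetch
    references; an entry created by prefetch may not be replaced until the
    count reaches 0, i.e. every prefetch has been consumed by a fetch."""
    def __init__(self):
        self.ref_count = 0

    def on_prefetch(self):
        self.ref_count += 1

    def on_fetch(self):
        self.ref_count -= 1

    def replaceable(self):
        return self.ref_count == 0
```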
20140164713 | Bypassing Memory Requests to a Main Memory - Some embodiments include a computing device with a control circuit that handles memory requests. The control circuit checks one or more conditions to determine when a memory request should be bypassed to a main memory instead of sending the memory request to a cache memory. When the memory request should be bypassed to a main memory, the control circuit sends the memory request to the main memory. Otherwise, the control circuit sends the memory request to the cache memory. | 2014-06-12 |
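The bypass decision in 20140164713 reduces to evaluating a set of conditions per request. A sketch with hypothetical conditions, since the abstract does not enumerate the checks:

```python
def route_request(request, bypass_conditions):
    """If any bypass condition holds for the request, send it straight to
    main memory; otherwise send it to the cache memory."""
    if any(cond(request) for cond in bypass_conditions):
        return "main-memory"
    return "cache"

# Hypothetical example conditions (assumptions, not the application's checks):
is_uncacheable = lambda req: req.get("uncacheable", False)
is_streaming = lambda req: req.get("streaming", False)
```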
20140164714 | SPECULATIVE READ IN A CACHE COHERENT MICROPROCESSOR - A cache coherence manager, disposed in a multi-core microprocessor, includes a request unit, an intervention unit, a response unit and an interface unit. The request unit receives coherent requests and selectively issues speculative requests in response. The interface unit selectively forwards the speculative requests to a memory. The interface unit includes at least three tables. Each entry in the first table represents an index to the second table. Each entry in the second table represents an index to the third table. The entry in the first table is allocated when a response to an associated intervention message is stored in the first table but before the speculative request is received by the interface unit. The entry in the second table is allocated when the speculative request is stored in the interface unit. The entry in the third table is allocated when the speculative request is issued to the memory. | 2014-06-12 |
20140164715 | METHODS AND STRUCTURE FOR USING REGION LOCKS TO DIVERT I/O REQUESTS IN A STORAGE CONTROLLER HAVING MULTIPLE PROCESSING STACKS - Methods and structure within a storage controller for using region locks to efficiently divert an I/O request received from an attached host system to one of multiple processing stacks in the controller. A region lock module within the controller allows each processing stack to request a region lock for a range of block addresses of the storage devices. A divert-type lock request may be established to identify a range of block addresses for which I/O requests should be diverted to a particular one of the multiple processing stacks. | 2014-06-12 |
20140164716 | OVERRIDE SYSTEM AND METHOD FOR MEMORY ACCESS MANAGEMENT - A memory management system and method are described. In one embodiment, a memory management system includes a memory management unit for virtualizing context memory storage and independently controlling access to the context memory without interference from other engine activities. The shared resource management unit overrides a stream of access denials (e.g., NACKs) associated with an access problem. The memory management system and method facilitate efficient and flexible access to memory while controlling translation between virtual and physical memory “spaces”. In one embodiment the memory management system includes a translation lookaside buffer and a fill component. The translation lookaside buffer tracks information associating a virtual memory space with a physical memory space. The fill component independently tracks the progress of access requests from a plurality of engines and the faults that occur in attempting to access a memory space. | 2014-06-12 |
20140164717 | Systems and Methods for Improved Communications in a Nonvolatile Memory System - Systems and methods are provided for improved communications in a nonvolatile memory (“NVM”) system. The system can toggle between multiple communications channels to provide point-to-point communications between a host device and NVM dies included in the system. The host device can toggle between multiple communications channels that extend to one or more memory controllers of the system, and the memory controllers can toggle between multiple communications channels that extend to the NVM dies. Power islands may be incorporated into the system to electrically isolate system components associated with inactive communications channels. | 2014-06-12 |
20140164718 | METHODS AND APPARATUS FOR SHARING MEMORY BETWEEN MULTIPLE PROCESSES OF A VIRTUAL MACHINE - Methods and apparatus for sharing memory between multiple processes of a virtual machine are disclosed. A hypervisor associates a plurality of guest user memory regions with a first domain and assigns each associated user process an address space identifier to protect the different user memory regions from the different user processes. In addition, the hypervisor associates a global kernel memory region with a second domain. The global kernel region is reserved for the operating system of the virtual machine and is not accessible to the user processes, because the user processes do not have access rights to memory regions associated with the second domain. The hypervisor also associates a global shared memory region with a third domain. The hypervisor allows user processes associated with the third domain to access the global shared region. Using this global shared memory region, different user processes within a virtual machine may share data without the need to swap the shared data in and out of each process's respective user region of memory. | 2014-06-12 |
20140164719 | CLOUD MANAGEMENT OF DEVICE MEMORY BASED ON GEOGRAPHICAL LOCATION - An apparatus and computer program product for managing memory of a device is disclosed. A computer system collects information about use, by the device, of data in the memory of the device. The information collected by the computer system includes a time and a location for which each portion of the data is used by the device. The computer system identifies patterns of use, by the device, of each portion of the data based on the information collected. The computer system then selects one or more portions of the data that are not needed in the memory of the device based on the patterns of use by the device. | 2014-06-12 |
20140164720 | SYSTEM AND METHOD FOR DYNAMICALLY ALLOCATING MEMORY IN A MEMORY SUBSYSTEM HAVING ASYMMETRIC MEMORY COMPONENTS - Systems and methods are provided for dynamically allocating a memory subsystem. An exemplary embodiment comprises a method for dynamically allocating a memory subsystem in a portable computing device. The method involves fully interleaving a first portion of a memory subsystem having memory components with asymmetric memory capacities. A second remaining portion of the memory subsystem is partial interleaved according to an interleave bandwidth ratio. The first portion of the memory subsystem is allocated to one or more high-performance memory clients. The second remaining portion is allocated to one or more relatively lower-performance memory clients. | 2014-06-12 |
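The split in 20140164720 fully interleaves the region that all asymmetric components can cover equally and leaves the remainder for partial interleaving. A worked sketch (capacities in MB) under the assumption that the fully interleaved portion equals the smallest component capacity replicated across all components:

```python
def partition_asymmetric(capacities):
    """The region every component can cover equally (the smallest capacity,
    replicated across all components) is fully interleaved; whatever is left
    on the larger components forms the partially interleaved remainder."""
    smallest = min(capacities)
    fully_interleaved = smallest * len(capacities)
    leftovers = [c - smallest for c in capacities]
    return fully_interleaved, leftovers
```

For components of 2048 MB and 1024 MB, 2048 MB in total is fully interleaved (for the high-performance clients) and 1024 MB remains on the larger component for the partially interleaved, lower-performance zone.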
20140164721 | CLOUD MANAGEMENT OF DEVICE MEMORY BASED ON GEOGRAPHICAL LOCATION - A method for managing memory of a device is disclosed. A computer system collects information about use, by the device, of data in the memory of the device. The information collected by the computer system includes a time and a location for which each portion of the data is used by the device. The computer system identifies patterns of use, by the device, of each portion of the data based on the information collected. The computer system then selects one or more portions of the data that are not needed in the memory of the device based on the patterns of use by the device. | 2014-06-12 |
20140164722 | METHOD FOR SAVING VIRTUAL MACHINE STATE TO A CHECKPOINT FILE - A process for lazy checkpointing a virtual machine is enhanced to reduce the number of read/write accesses to the checkpoint file and thereby speed up the checkpointing process. The process for saving a state of a virtual machine running in a physical machine to a checkpoint file maintained in persistent storage includes the steps of copying contents of a block of memory pages, which may be compressed, into a staging buffer, determining after the copying if the buffer is full, and upon determining that the buffer is full, saving the buffer contents in a storage block of the checkpoint file. | 2014-06-12 |
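The staging-buffer scheme of 20140164722 batches page copies so the checkpoint file sees one block write per full buffer instead of one write per page (the optional compression step is omitted here for brevity):

```python
def checkpoint_pages(pages, page_size, pages_per_buffer):
    """Copy pages into a staging buffer; each time the buffer fills, emit
    its contents as one storage block of the checkpoint file, and flush
    any final partial buffer at the end."""
    blocks = []
    buffer = bytearray()
    capacity = page_size * pages_per_buffer
    for page in pages:
        buffer += page
        if len(buffer) == capacity:
            blocks.append(bytes(buffer))   # one block write, not one per page
            buffer = bytearray()
    if buffer:
        blocks.append(bytes(buffer))
    return blocks
```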
20140164723 | METHOD FOR RESTORING VIRTUAL MACHINE STATE FROM A CHECKPOINT FILE - A process for lazy checkpointing is enhanced to reduce the number of read/write accesses to the checkpoint file and thereby speed up the checkpointing process. The process for restoring a state of a virtual machine (VM) running in a physical machine from a checkpoint file that is maintained in persistent storage includes the steps of detecting access to a memory page of the virtual machine that has not been read into physical memory of the VM from the checkpoint file, determining a storage block of the checkpoint file to which the accessed memory page maps, writing contents of the storage block in a buffer, and copying contents of a block of memory pages that includes the accessed memory page from the buffer to corresponding locations of the memory pages in the physical memory of the VM. The storage block of the checkpoint file may be compressed or uncompressed. | 2014-06-12 |
20140164724 | METHOD AND APPARATUS FOR PROCESSING SYSTEM COMMAND DURING MEMORY BACKUP - A method and an apparatus for processing a system command during memory backup. The method includes: acquiring a write address corresponding to a write operation command; if data corresponding to the write address has been read from a raw memory area but is not written to a backup memory area, mapping the write operation command to the raw memory area, and writing data to the write address in the raw memory area according to the write operation command; and deducting a set value from the write address to obtain an initial address to subsequently read data from the raw memory area. According to the embodiments of the present invention, a problem of system command blocking is solved during a memory backup operation, so that a system command is processed in a timely manner. | 2014-06-12 |
20140164725 | SYSTEM ON CHIP TO PERFORM A SECURE BOOT, AN IMAGE FORMING APPARATUS USING THE SAME, AND METHOD THEREOF - A system on chip is provided. The system on chip includes a first memory to store a plurality of encryption keys, a second memory, a third memory to store an encryption key setting value, and a CPU to decrypt encrypted data which is stored in an external non-volatile memory using an encryption key corresponding to the encryption key setting value from among the plurality of encryption keys, to store the decrypted data in the second memory, and to perform a boot using data stored in the second memory. Accordingly, security of a boot operation can be improved. | 2014-06-12 |
20140164726 | SYSTEM-ON-CHIP HAVING SPECIAL FUNCTION REGISTER AND OPERATING METHOD THEREOF - Exemplary embodiments disclose a system-on-chip (SoC) including a special function register (SFR) and an operating method thereof. The SFR comprises a first update storage element, a second update storage element, a first update logic corresponding to the first update storage element, and a second update logic corresponding to the second update storage element, wherein a clock is supplied to the first update storage element in response to the first update logic being enabled, and the clock is supplied to the second update storage element in response to the second update logic being enabled. | 2014-06-12 |
20140164727 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR OPTIMIZING THE MANAGEMENT OF THREAD STACK MEMORY - A system, method, and computer program product for optimizing thread stack memory allocation is disclosed. The method includes the steps of receiving source code for a program, translating the source code into an intermediate representation, analyzing the intermediate representation to identify at least two objects that could use a first allocated memory space in a thread stack memory, and modifying the intermediate representation by replacing references to a first object of the at least two objects with a reference to a second object of the at least two objects. | 2014-06-12 |
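The stack-reuse optimization of 20140164727 replaces references to one object with references to another when their lifetimes cannot overlap. A greedy slot-sharing sketch over assumed (first_use, last_use) live ranges, which is one simple way such an analysis could assign shared stack slots:

```python
def merge_stack_objects(lifetimes):
    """Greedy slot sharing: objects whose (first_use, last_use) live ranges
    are disjoint can occupy the same stack slot, so references to the later
    object can be rewritten to the earlier object's slot."""
    slot_last_use = []          # per slot: last_use of its latest occupant
    assignment = {}
    for name, (start, end) in sorted(lifetimes.items(), key=lambda kv: kv[1][0]):
        for i, last in enumerate(slot_last_use):
            if start > last:            # disjoint: reuse this slot
                slot_last_use[i] = end
                assignment[name] = i
                break
        else:
            slot_last_use.append(end)   # overlaps everything: new slot
            assignment[name] = len(slot_last_use) - 1
    return assignment
```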
20140164728 | METHOD FOR ALLOCATING AND REALLOCATING LOGICAL VOLUME - If, in a storage system with multiple drive chassis, the inlet air temperature or operation amount differs greatly among the drive chassis, total fan power in the drive chassis may increase greatly, causing increased noise, compared to a case where there is little difference in inlet air temperature and in the distribution of the operation amount. Thus, in such a storage system including one or more drive units and one or more cooling fans, a first order of priority is set to the logical volume having the largest amount of power increase caused by the operation of the logical volume. A second order of priority is set to the drive chassis having the smallest power increase amount of the cooling fan. Reallocation is performed from the logical volume having the first order of priority to the drive chassis having the second order of priority. | 2014-06-12 |
20140164729 | DATA MANAGEMENT METHOD IN STORAGE POOL AND VIRTUAL VOLUME IN DKC - A storage system connected to a computer and a management computer, includes storage devices accessed by the computer, and a control unit for controlling the storage devices. A first-type logical device corresponding to a storage area set in at least one of the storage devices and a second-type logical device that is a virtual storage area are provided. The control unit sets at least two of the first-type logical devices different in a characteristic as storage areas included in a storage pool through mapping. The first-type logical device stores data by allocating a storage area of the second-type logical device to a storage area of the first-type logical device mapped to the storage pool. The characteristic of the second-type logical device can be changed by changing the allocated storage area of the second-type logical device to a storage area of another first-type logical device. | 2014-06-12 |
20140164730 | SYSTEM AND METHODS FOR MANAGING STORAGE SPACE ALLOCATION - A request for obtaining a space allocation descriptor is received by a block control layer of a storage system. The space allocation descriptor is indicative of one or more logical blocks free for allocation within a range of logical addresses. The range of logical addresses is included within a logical address space related to an upper layer application which has issued the request. The space allocation descriptor is provided by using a data structure included in the block control layer and operative to map between the logical address space and allocated storage blocks within a physical storage space, managed by the block control layer. | 2014-06-12 |
20140164731 | TRANSLATION MANAGEMENT INSTRUCTIONS FOR UPDATING ADDRESS TRANSLATION DATA STRUCTURES IN REMOTE PROCESSING NODES - Translation management instructions are used in a multi-node data processing system to facilitate remote management of address translation data structures distributed throughout such a system. Thus, in multi-node data processing systems where multiple processing nodes collectively handle a workload, the address translation data structures for such nodes may be collectively managed to minimize translation misses and the performance penalties typically associated therewith. | 2014-06-12 |
20140164732 | TRANSLATION MANAGEMENT INSTRUCTIONS FOR UPDATING ADDRESS TRANSLATION DATA STRUCTURES IN REMOTE PROCESSING NODES - Translation management instructions are used in a multi-node data processing system to facilitate remote management of address translation data structures distributed throughout such a system. Thus, in multi-node data processing systems where multiple processing nodes collectively handle a workload, the address translation data structures for such nodes may be collectively managed to minimize translation misses and the performance penalties typically associated therewith. | 2014-06-12 |
20140164733 | TRANSPOSE INSTRUCTION - A transpose instruction is described. A transpose instruction is fetched, where the transpose instruction includes an operand that specifies a vector register or a location in memory. The transpose instruction is decoded. The decoded transpose instruction is executed causing each data element in the specified vector register or location in memory to be stored in that specified vector register or location in memory in reverse order. | 2014-06-12 |
20140164734 | CONCURRENT MULTIPLE INSTRUCTION ISSUE OF NON-PIPELINED INSTRUCTIONS USING NON-PIPELINED OPERATION RESOURCES IN ANOTHER PROCESSING CORE - A method and circuit arrangement utilize inactive non-pipelined operation resources in one processing core of a multi-core processing unit to execute non-pipelined instructions on behalf of another processing core in the same processing unit. Adjacent processing cores in a processing unit may be coupled together such that, for example, when one processing core's non-pipelined execution sequencer is busy, that processing core may issue into another processing core's non-pipelined execution sequencer if that other processing core's non-pipelined execution sequencer is idle, thereby providing intermittent concurrent execution of multiple non-pipelined instructions within each individual processing core. | 2014-06-12 |
20140164735 | PROCESSING SYSTEM WITH SYNCHRONIZATION INSTRUCTION - Embodiments of a multi-processor array are disclosed that may include a plurality of processors, and controllers. Each processor may include a plurality of processor ports and a sync adapter. Each sync adapter may include a plurality of adapter ports. Each controller may include a plurality of controller ports, and a configuration port. The plurality of processors and the plurality of controllers may be coupled together in an interspersed arrangement, and the controllers may be distinct from the processors. Each processor may be configured to send a synchronization signal through its adapter ports to one or more controllers, and to pause execution of program instructions while waiting for a response from the one or more controllers. | 2014-06-12 |
20140164736 | LAZY RUNAHEAD OPERATION FOR A MICROPROCESSOR - Embodiments related to managing lazy runahead operations at a microprocessor are disclosed. For example, an embodiment of a method for operating a microprocessor described herein includes identifying a primary condition that triggers an unresolved state of the microprocessor. The example method also includes identifying a forcing condition that compels resolution of the unresolved state. The example method also includes, in response to identification of the forcing condition, causing the microprocessor to enter a runahead mode. | 2014-06-12 |
20140164737 | EXECUTION EFFICIENCY IN A SINGLE-PROGRAM, MULTIPLE-DATA PROCESSOR - A method for executing instructions on a single-program, multiple-data processor system having a fixed number of execution lanes, including: scheduling a primary instruction for execution with a first wave of multiple data; assigning the first wave to a corresponding primary subset of the execution lanes; scheduling a secondary instruction having a second wave of multiple data, such that the second wave fits in lanes that are unused by the primary subset of lanes; assigning the second wave to a corresponding secondary subset of the lanes; fetching the primary and secondary instructions; configuring the execution lanes such that the primary subset is responsive to the primary instruction and the secondary subset is simultaneously responsive to the secondary instruction; and simultaneously executing the primary and secondary instructions in the execution lanes. | 2014-06-12 |
20140164738 | INSTRUCTION CATEGORIZATION FOR RUNAHEAD OPERATION - Embodiments related to methods and devices operative, in the event that execution of an instruction produces a runahead-triggering event, to cause a microprocessor to enter into and operate in a runahead without reissuing the instruction are provided. In one example, a microprocessor is provided. The example microprocessor includes fetch logic for retrieving an instruction, scheduling logic for issuing the instruction retrieved by the fetch logic for execution, and runahead control logic. The example runahead control logic is operative, in the event that execution of the instruction as scheduled by the scheduling logic produces a runahead-triggering event, to cause the microprocessor to enter into and operate in a runahead mode without reissuing the instruction, and to carry out runahead policies that govern operation of the microprocessor while it is in the runahead mode and cause it to operate differently than when not in the runahead mode. | 2014-06-12 |
20140164739 | Modify and Execute Next Sequential Instruction Facility and Instructions Therefore - A modify next sequential instruction (MNSI) instruction, when executed, modifies a field of the fetched copy of the next sequential instruction (NSI) to enable a program to dynamically provide parameters to the NSI being executed. Thus the MNSI instruction is a non-disruptive prefix instruction to the NSI. The NSI may be modified to effectively extend the length of the NSI field, thus providing more registers or more range (in the case of a length field) than otherwise available to the NSI instruction according to the instruction set architecture (ISA). | 2014-06-12 |
20140164740 | Branch-Free Condition Evaluation - A compare instruction of an instruction set architecture (ISA), when executed, tests one or more operands for an instruction-defined condition. The result of the test is stored as an operand, with leading zeros, in a general register of the ISA. The general register is identified (explicitly or implicitly) by the compare instruction. Thus, the result of the test can be manipulated by standard register operations of the computer system. In a superscalar processor, no special “condition code” renaming is required, as the standard register renaming takes care of out-of-order processing of the conditions. | 2014-06-12 |
20140164741 | Modify and Execute Next Sequential Instruction Facility and Instructions Therefore - A modify next sequential instruction (MNSI) instruction, when executed, modifies a field of the fetched copy of the next sequential instruction (NSI) to enable a program to dynamically provide parameters to the NSI being executed. Thus the MNSI instruction is a non-disruptive prefix instruction to the NSI. The NSI may be modified to effectively extend the length of the NSI field, thus providing more registers or more range (in the case of a length field) than otherwise available to the NSI instruction according to the instruction set architecture (ISA). | 2014-06-12 |
20140164742 | APPARATUS AND METHOD FOR MAPPING ARCHITECTURAL REGISTERS TO PHYSICAL REGISTERS - An apparatus and method are provided for performing register renaming. Available register identifying circuitry is provided to identify which physical registers form a pool of physical registers available to be mapped by register renaming circuitry to an architectural register specified by an instruction to be executed. Configuration data whose value is modified during operation of the processing circuitry is stored such that, when the configuration data has a first value, the configuration data identifies at least one architectural register of the architectural register set which does not require mapping to a physical register by the register renaming circuitry. The register identifying circuitry is arranged to reference the modified data value, such that when the configuration data has the first value, the number of physical registers in the pool is increased due to the reduction in the number of architectural registers which require mapping to physical registers. | 2014-06-12 |
20140164743 | REORDERING BUFFER FOR MEMORY ACCESS LOCALITY - Systems and methods for scheduling instructions for execution on a multi-core processor reorder the execution of different threads to ensure that instructions specified as having localized memory access behavior are executed over one or more sequential clock cycles to benefit from memory access locality. At compile time, code sequences including memory access instructions that may be localized are delineated into separate batches. A scheduling unit ensures that multiple parallel threads are processed over one or more sequential scheduling cycles to execute the batched instructions. The scheduling unit waits to schedule execution of instructions that are not included in the particular batch until execution of the batched instructions is done so that memory access locality is maintained for the particular batch. In between the separate batches, instructions that are not included in a batch are scheduled so that threads executing non-batched instructions are also processed and not starved. | 2014-06-12 |
20140164744 | Tracking Multiple Conditions in a General Purpose Register and Instruction Therefor - An operate-and-insert instruction of a program, when executed, performs an operation based on one or more operands; results of an instruction-specified test of the operation performed are stored in an instruction-specified location of an instruction-specified general register. The instruction-specified general register is therefore able to hold the results of many operate-and-insert instructions. The program can then use non-branch-type instructions to evaluate conditions saved in the register, thus avoiding the performance penalty of branch instructions. | 2014-06-12 |
20140164745 | REGISTER ALLOCATION FOR CLUSTERED MULTI-LEVEL REGISTER FILES - A method for allocating registers within a processing unit. A compiler assigns a plurality of instructions to a plurality of processing clusters. Each instruction is configured to access a first virtual register within a live range. The compiler determines which processing cluster in the plurality of processing clusters is an owner cluster for the first virtual register within the live range. The compiler configures a first instruction included in the plurality of instructions to access a first global virtual register. | 2014-06-12 |
20140164746 | Tracking Multiple Conditions in a General Purpose Register and Instruction Therefor - An operate-and-insert instruction of a program, when executed, performs an operation based on one or more operands; results of an instruction-specified test of the operation performed are stored in an instruction-specified location of an instruction-specified general register. The instruction-specified general register is therefore able to hold the results of many operate-and-insert instructions. The program can then use non-branch-type instructions to evaluate conditions saved in the register, thus avoiding the performance penalty of branch instructions. | 2014-06-12 |
20140164747 | Branch-Free Condition Evaluation - A compare instruction of an instruction set architecture (ISA), when executed, tests one or more operands for an instruction-defined condition. The result of the test is stored as an operand, with leading zeros, in a general register of the ISA. The general register is identified (explicitly or implicitly) by the compare instruction. Thus, the result of the test can be manipulated by standard register operations of the computer system. In a superscalar processor, no special “condition code” renaming is required, as the standard register renaming takes care of out-of-order processing of the conditions. | 2014-06-12 |
20140164748 | PRE-FETCHING INSTRUCTIONS USING PREDICTED BRANCH TARGET ADDRESSES - The present application describes a method and apparatus for prefetching instructions based on predicted branch target addresses. Some embodiments of the method include providing a second cache line to a second cache when a target address for a branch instruction in a first cache line of a first cache is included in the second cache line of the first cache and when the second cache line is not resident in the second cache. | 2014-06-12 |
20140164749 | SYSTEM AND METHOD OF CAPACITY MANAGEMENT - Systems and methods are disclosed herein for providing a system name of a computer system, comprising: generating a system ID key based on a system type of the computer system using an external key generator module; installing the system ID key on the computer system in an active operating state by extracting the system name from the system ID key; updating operating system structures for immediate use of the system name; writing a machine name index into halt/load parameters that are implemented by the computer system for subsequent restarts of the computer system after suspending the computer system, wherein the machine name index identifies a location of the system name in a system registry; and writing the system name into the system registry from the system ID key. | 2014-06-12 |
20140164750 | SYSTEM AND METHOD FOR MOBILE PLATFORM VIRTUALIZATION - A method for a mobile platform containing a mobile terminal having an operating system includes initializing a plurality of user environments (UEs) on the mobile terminal over the operating system, including a current UE running on the mobile terminal. The plurality of UEs are capable of being switched among one another based on one or more of predetermined conditions without changing the operating system. The method also includes collecting sensing data on certain parameters associated with operation of the mobile terminal, and processing the sensing data to indicate at least one of the predetermined conditions. Further, the method includes determining whether the current UE suits the at least one of the predetermined conditions indicated by processing the sensing data and, when the current UE does not suit the condition of the mobile terminal, switching the current UE to a desired UE from the plurality of UEs. | 2014-06-12 |
20140164751 | MULTI-PHASE RESUME FROM HIBERNATE - Resume of a computing device from hibernation may be performed in multiple phases. Each phase may partially restore a state of the computing device to an operational state and may establish an environment in which another phase of the resume is performed. The hibernation information may be partitioned to store separately data to be used at each resume phase. The information may be stored in a compressed form. In a first phase, a boot-level resume loader may restore a portion of the operating system based on a portion of the hibernation information. The restored portion may be used in a second phase to retrieve hibernation information from another portion through the operating system (OS). Multiple processors supported by the OS may read and decompress the hibernation information that is then moved back to operational memory. The operating system may support asynchronous disk input/output or other functions that accelerate the resume process. | 2014-06-12 |
20140164752 | SYSTEM AND METHOD FOR SELECTING A LEAST COST PATH FOR PERFORMING A NETWORK BOOT IN A DATA CENTER NETWORK ENVIRONMENT - A method is provided in one example embodiment and includes logging in to a multipath target via first and second boot devices instantiated on a network device, the first and second boot devices respectively connected to the multipath target via first and second paths; determining which of the first and second paths comprises a least cost path; and booting the operating system via the least cost path. The determining may include comparing network statistics of the first path with network statistics of the second path, the network statistics comprising at least one of packet loss on the path, errors encountered via the path, and congestion on the path. | 2014-06-12 |
20140164753 | SYSTEM ON CHIP FOR PERFORMING SECURE BOOT, IMAGE FORMING APPARATUS USING THE SAME, AND METHOD THEREOF - A system on chip is provided. The system on chip includes: a first memory in which a plurality of encryption keys are stored, a second memory, a third memory in which an encryption key setting value is stored, and a CPU which decrypts encrypted data which is stored in an external non-volatile memory using an encryption key corresponding to the encryption key setting value from among the plurality of encryption keys, stores the decrypted data in the second memory, and performs boot using data stored in the second memory. Accordingly, security of boot can be improved. | 2014-06-12 |
20140164754 | DEVICE IN COMPUTER SYSTEM - A device includes a PCH including a reset control pin and a disable control pin, a BIOS chip to control the PCH to send a low logic level from the reset control pin to the reset pin to reset an Ethernet controller, a timing adjusting circuit, and the Ethernet controller. The Ethernet controller includes a reset pin and a disable pin; the reset pin is connected to the reset control pin via the timing adjusting circuit, and the disable pin is connected to the disable control pin via the timing adjusting circuit. The PCH sends a low logic level from the disable control pin to the disable pin to disable the Ethernet controller, and the timing adjusting circuit delays the low logic level so that the low logic level on the disable pin arrives later than the high logic level on the reset pin. | 2014-06-12 |
20140164755 | EXTERNAL ELECTRONIC DEVICE - A computer system and its booting and setting method are disclosed. Power supplying and a booting process of the computer system are controlled by a basic input/output system (BIOS). The computer system includes a super input/output chip, a south bridge chipset, and a power supply module. The super input/output chip includes a timer. A counting time is set by the BIOS and the timer counts down when booting the computer system, wherein the counting time is longer than a normal booting time. The south bridge chipset is electrically connected with the super input/output chip and exchanges data between the south bridge chipset and a peripheral device. The power supply module is used for providing power to the computer system. The BIOS controls the timer to stop counting down when the computer system is capable of booting normally. | 2014-06-12 |
20140164756 | CONTROLLING METHOD AND ELECTRONIC APPARATUS UTILIZING THE CONTROLLING METHOD - A controlling method for an electronic apparatus is disclosed. The method comprises: detecting a location of the vision of an eye on a display of the electronic apparatus; controlling the electronic apparatus to operate in a first mode if a time period for which the vision stops on an object on the display is not larger than a predetermined time period; and controlling the electronic apparatus to operate in a second mode if the time period for which the vision stops on an object on the display is larger than the predetermined time period. In the second mode, the electronic apparatus detects at least a turning operation of the head comprising the eye and performs a corresponding operation according to the turning operation. | 2014-06-12 |
20140164757 | CLOSED LOOP CPU PERFORMANCE CONTROL - The invention provides a technique for targeted scaling of the voltage and/or frequency of a processor included in a computing device. One embodiment involves scaling the voltage/frequency of the processor based on the number of frames per second being input to a frame buffer in order to reduce or eliminate choppiness in animations shown on a display of the computing device. Another embodiment of the invention involves scaling the voltage/frequency of the processor based on a utilization rate of the GPU in order to reduce or eliminate any bottleneck caused by slow issuance of instructions from the CPU to the GPU. Yet another embodiment of the invention involves scaling the voltage/frequency of the CPU based on specific types of instructions being executed by the CPU. Further embodiments include scaling the voltage and/or frequency of a CPU when the CPU executes workloads that have characteristics of traditional desktop/laptop computer applications. | 2014-06-12 |
20140164758 | SECURE CLOUD DATABASE PLATFORM - A cloud computing service to securely process queries on a database. A security device and method of operation are also disclosed. The security device may be provisioned with a private key of a subscriber to the cloud service and may have processing hardware that uses that key, sequestering the key and encryption processing in hardware that others, including operating personnel of the cloud service, cannot readily access. Processing within the security device may decrypt queries received from the subscriber and may encrypt responses for communication over a public network. The device may perform functions on clear text, thereby limiting the amount of clear text data processed on the cloud platform, while limiting bandwidth consumed in communicating with the subscriber. Such processing may include formatting data, including arguments in a query, in a security protocol used by the cloud platform. | 2014-06-12 |
20140164759 | Systems and Methods for Controlling Email Access - Embodiments of the disclosure relate to proxying one or more email resources in transit to the client devices from the email services, removing one or more email attachments from the email resources, and encoding the stripped email attachments based at least in part on one or more cryptographic keys. | 2014-06-12 |
20140164760 | APPARATUS AND METHODS FOR CONTENT TRANSFER PROTECTION - Methods and apparatus for ensuring protection of transferred content. In one embodiment, content is transferred while enabling a network operator (e.g., MSO) to control and change rights and restrictions at any time, and irrespective of subsequent transfers. This is accomplished in one implementation by providing a premises device configured to receive content in a first encryption format and encodes using a first codec, with an ability to transcrypt and/or transcode the content into an encryption format and encoding format compatible with a device which requests the content therefrom (e.g., from PowerKey/MPEG-2 content to DRM/MPEG-4 content). The premises device uses the same content key to encrypt the content as is used by the requesting device to decrypt the content. | 2014-06-12 |
20140164761 | SECURE ACCESS USING LOCATION-BASED ENCRYPTED AUTHORIZATION - Embodiments of the present invention disclose a method, computer program product, and system for location-based authorization to access a resource. A first computer receives a request to access a resource from a second computer. The request to access the resource includes location information of the second computer. The first computer responds by sending a request to a third computer, requesting location information of the third computer. In response to receiving from the third computer, the location information of the third computer, the first computer determines a distance between the second computer and the third computer. If the distance between the second computer and the third computer fulfills a proximity condition, the first computer authorizes the resource request. | 2014-06-12 |
20140164762 | APPARATUS AND METHOD OF ONLINE AUTHENTICATION - In a method of online authentication, digital certificates of a client device and an application server are verified when the application server receives a login request to a network application system installed in the application server from the client device. The application server authenticates an identification of the client device when both the application server and the client device are valid. The client is permitted to log in to the network application system of the application server when the identification of the client is valid, and is forbidden to log in to the network application system of the application server when the identification of the client is invalid. | 2014-06-12 |
20140164763 | SYSTEMS AND METHODS OF PERFORMING LINK SETUP AND AUTHENTICATION - Systems and methods of performing link setup and authentication are disclosed. A method includes receiving, at a mobile device, a first access point nonce (ANonce) from an access point and generating a first pairwise transient key (PTK) using the first ANonce. The mobile device sends an authentication request including a station nonce (SNonce) to the access point, where the authentication request is protected using the first PTK. The mobile device receives an authentication response including a second ANonce from the access point, where the authentication response is protected using a second PTK. The mobile device generates the second PTK using the second ANonce and the SNonce and uses the second PTK to protect at least one subsequent message to be sent from the mobile device to the access point. | 2014-06-12 |
20140164764 | ASSIGNMENT OF DIGITAL SIGNATURE AND QUALIFICATION FOR RELATED SERVICES - Technologies are generally described for security algorithm methods in issuing, managing, and using digital certificates in online transactions. Certificate holders can be identified based on the device ID from the equipment they are using to access online services. The equipment can be previously linked to an identity known by the equipment service provider. A consumer can then authorize the using of the digital certificate associated with their device in online transactions. Third parties can then trust the identity behind the digital certificates and accept their use in identifying a private party and performing a transaction with that party. | 2014-06-12 |
20140164765 | PROCEDURE FOR A MULTIPLE DIGITAL SIGNATURE - It comprises: | 2014-06-12 |
20140164766 | PRIVACY MANAGEMENT FOR TRACKED DEVICES - A system is disclosed that protects private data of users while permitting the monitoring or tracking of electronic devices that are shared for both business and private purposes. The electronic devices may be configured to selectively encrypt location data, and/or other types of data, before such data is transmitted to a monitoring center. For example, data collected or generated on a user device outside of work hours may be encrypted with a private key of the device's user prior to transmission to the monitoring center, so that the data is not accessible to the employer. Data collected or generated during work hours may be transmitted without such encryption. | 2014-06-12 |
20140164767 | METHODS AND APPARATUS FOR DEVICE AUTHENTICATION WITH ONE-TIME CREDENTIALS - An automated method for authenticating a proving device to a verifying device involves an elliptic curve formula (ECF) for a predetermined elliptic curve associated with a proving device. According to one example method, the prover sends the verifier a message containing a first proof value (P2). The verifier determines whether P2 is a point on the elliptic curve associated with the proving device. If P2 is not on the elliptic curve, the verifier may determine that the proving device should not be trusted. The message may further comprise a second proof value (K1), and the verifier may automatically determine whether K1 corresponds to P1, based on a previous point (P0) on the elliptic curve. If K1 does not correspond to P1, the verifier may determine that the proving device should not be trusted. Other embodiments are described and claimed. | 2014-06-12 |
20140164768 | DETECTING MATCHED CLOUD INFRASTRUCTURE CONNECTIONS FOR SECURE OFF-CHANNEL SECRET GENERATION - Technology is described for two parties, by leveraging previously established secure connections with third parties, to obtain a shared secret for generating a secure connection with each other in a way that reduces vulnerability to man-in-the-middle attacks. In some examples, the technology can include generating a session identifier; coordinating use of the session identifier by the two parties; finding an available secure communication channel to a third party; transmitting the session identifier to the third party via the available secure communication channel; receiving, via the available secure communication channel, a third party identifier and a session identifier-specific secret; sharing information about the received third party identifier; determining that the received third party identifier matches a third party identifier received by the second party; and using the session identifier-specific secret received with the matching third party identifier to generate a cryptographic key to secure communication between the two parties. | 2014-06-12 |
20140164769 | CUSTODIAN SECURING A SECRET OF A USER - Methods, systems and apparatuses for a custodian securing a secret are disclosed. One method includes receiving, by a custodian server of a first custodian, encrypted shares, wherein the encrypted shares are generated based on a secret of the user, a policy, and a plurality of public keys, comprising generating a plurality of shares from the secret, and encrypting each share utilizing a corresponding one of the plurality of public keys. The method further includes verifying, by the custodian server, that the encrypted shares can be used to reconstitute the secret upon receiving the encrypted shares, comprising leveraging, by the first custodian, one-way cryptographic functions, wherein the first custodian can reconstruct the secret, but cannot obtain access to the secret or any of the shares. | 2014-06-12 |
20140164770 | ADVANCED METERING INFRASTRUCTURE NETWORK SYSTEM AND MESSAGE BROADCASTING METHOD - An advanced metering infrastructure (AMI) server, an AMI network node, an AMI network system and a message broadcasting method thereof are provided. The AMI server generates a broadcasting key from a broadcasting message through a hash function, encrypts the broadcasting message into an encrypted broadcasting message via the broadcasting key, encrypts the broadcasting key into an encrypted key via a symmetric key, and transmits the encrypted broadcasting message and the encrypted key to the AMI network node. The AMI network node decrypts the encrypted key into the broadcasting key via the symmetric key, decrypts the encrypted broadcasting message into the broadcasting message via the broadcasting key, and processes the broadcasting message after determining that the broadcasting message corresponds to the broadcasting key through the hash function. | 2014-06-12 |
20140164771 | METHOD AND SYSTEM FOR MANAGING AN EMBEDDED SECURE ELEMENT eSE - A method and system for managing an embedded secure element (eSE) are disclosed. | 2014-06-12 |
20140164772 | AUGMENTED REALITY BASED PRIVACY AND DECRYPTION - A method, non-transitory computer readable medium and apparatus for decrypting a document are disclosed. For example, the method captures a tag on an encrypted document, transmits the tag to an application server of a communication network to request a per-document decryption key, receives the per-document decryption key if the tag is authenticated, and decrypts a portion of the encrypted document using a temporary decryption key contained in the tag, the tag decrypted with the per-document decryption key. | 2014-06-12 |
20140164773 | OFFLINE DATA ACCESS USING TRUSTED HARDWARE - A cryptographically-secure component provides access-undeniability and verifiable revocation for clients with respect to downloaded content items from a server. A cryptographically-secure component is implemented in a client. When the client wants to purchase and download a content item from the server, the server requests an encryption key from the client. The client generates an encryption key that is bound to a state of the client that is associated with decrypting the content item. The server encrypts the content item using the encryption key and sends the encrypted content item to the client. Because the encryption key used to encrypt the content item is bound to the state associated with the client decrypting the content item, if the client desires to view the content item, the client may first advance its state to the bound state to retrieve the decryption key. | 2014-06-12 |
20140164774 | Encryption-Based Data Access Management - Encryption-based data access management may include a variety of processes. In one example, a computing device may transmit a user authentication request for decrypting encrypted data to a data storage server storing the encrypted data. The computing device may then receive a validation token associated with the user's authentication request, the validation token indicating that the user is authenticated to a domain. Subsequently, the computing device may transmit the validation token to a first key server different from the data storage server. Then, in response to transmitting the validation token, the computing device may receive, from the first key server, a key required for decrypting the encrypted data. The device may then decrypt at least a portion of the encrypted data using the key. | 2014-06-12 |
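The core idea in 20140164774 is the split of trust: the storage server authenticates the user and issues a token, while a separate key server releases the decryption key only against that token. A minimal sketch, with hypothetical class and method names and a deliberately simplified token check (in practice the key server would validate the token against the domain's authentication authority, not share state with the storage server):

```python
import hashlib
import secrets

class StorageServer:
    """Toy sketch: holds encrypted data, authenticates users, issues validation tokens."""
    def __init__(self, users: dict):
        self.users = users              # username -> sha256(password) hex digest
        self.valid_tokens = set()

    def authenticate(self, user: str, password: str):
        digest = hashlib.sha256(password.encode()).hexdigest()
        if self.users.get(user) == digest:
            token = secrets.token_hex(16)   # validation token for the domain
            self.valid_tokens.add(token)
            return token
        return None

class KeyServer:
    """Separate server that releases the decryption key only for a valid token."""
    def __init__(self, storage: StorageServer, key: bytes):
        self.storage = storage          # simplification: shared token state
        self.key = key

    def get_key(self, token: str) -> bytes:
        if token not in self.storage.valid_tokens:
            raise PermissionError("token not valid for this domain")
        return self.key
```

Because the data and the key live on different servers, neither server alone can expose the plaintext.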
20140164775 | MAJOR MANAGEMENT APPARATUS, AUTHORIZED MANAGEMENT APPARATUS, ELECTRONIC APPARATUS FOR DELEGATION MANAGEMENT, AND DELEGATION MANAGEMENT METHODS THEREOF - A major management apparatus, an authorized management apparatus, an electronic apparatus for delegation management, and delegation management methods thereof are provided. The major management apparatus generates first and second delegation deployment messages and transmits them to the authorized management apparatus and the electronic apparatus, respectively. The authorized management apparatus encrypts an original authorized operation message into an authorized operation message by an authorization key included in the first delegation deployment message and transmits the authorized operation message to the electronic apparatus. The original authorized operation message includes an operation task message and a right level. The electronic apparatus decrypts the authorized operation message into the original authorized operation message by the authorization key included in the second delegation deployment message and performs an operation according to the operation task message and the right level. | 2014-06-12 |
20140164776 | CRYPTOGRAPHIC METHOD AND SYSTEM - The present invention relates to the field of security of electronic data and/or communications. In one form, the invention relates to data security and/or privacy in a distributed and/or decentralised network environment. In another form, the invention relates to enabling private collaboration and/or information sharing between users, agents and/or applications. Embodiment(s) of the present invention enable the sharing of key(s) and/or content between a first user and/or agent and a second user and/or agent. Furthermore, embodiment(s) of the present invention have application in sharing encrypted information via information sharing services. | 2014-06-12 |
20140164777 | REMOTE DEVICE SECURE DATA FILE STORAGE SYSTEM AND METHOD - A remote device secure data file storage system and method of securely storing data files at a remote device includes a host system having a database and a plurality of remote devices, each connected with the host system by a communication network. Each remote device and the host system are programmed with a time-based cryptography system that generates an encryption key (RVK) and initialization vector (IV) for encrypting and decrypting data on the remote device. The time-based cryptography system generates the encryption key (RVK) as a function of a parameter (PDPT) that is a function of a personal date (PD) and personal time (PT) of the user. The personal date and personal time of the user are a function of personal data entered by the user on the remote device: the personal date (PD) is a function of the date of birth (DOB) of the user, and the personal time (PT) is a function of the time of birth (TOB) of the user. | 2014-06-12 |
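The derivation chain in 20140164777 (DOB → PD, TOB → PT, then PDPT → RVK and IV) can be sketched with hash functions. The abstract does not specify the functions involved, so every step below is an assumed instantiation; a per-device salt is also an assumption added so that two users with the same birth data do not share keys.

```python
import hashlib

def derive_key_and_iv(dob: str, tob: str, device_salt: bytes):
    # Hypothetical derivation: each "function of" step from the abstract
    # is instantiated as a SHA-256 hash.
    pd = hashlib.sha256(dob.encode()).digest()       # personal date  PD = f(DOB)
    pt = hashlib.sha256(tob.encode()).digest()       # personal time  PT = f(TOB)
    pdpt = hashlib.sha256(pd + pt).digest()          # parameter    PDPT = f(PD, PT)
    material = hashlib.sha256(pdpt + device_salt).digest()
    rvk, iv = material[:16], material[16:]           # 128-bit key RVK, 128-bit IV
    return rvk, iv
```

The same inputs always reproduce the same (RVK, IV) pair, which is what lets the host and the remote device derive matching keys without ever transmitting them.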
20140164778 | Method for producing and storage of digital certificates - The proposed method relates to methods for obtaining, storing, and exchanging digital information, including the replication and distribution of software; more specifically, to methods for producing and storing digital certificates and for replicating software. The proposed method is useful for the safe storage and transmission of various data, e.g. personal data and electronic funds, and also for the replication and distribution of software. Compared with known related-art methods, the present method offers a substantially increased level of protection for the storage and transmission of digital information and the replication of software, owing to affirmation of the digital certificate by authorized entities, the employment of consolidated certificates, and the enhanced authenticity of information transmission through the use of electronic digital signatures. | 2014-06-12 |
20140164779 | SECURE PROVISIONING IN AN UNTRUSTED ENVIRONMENT - Embodiments include methods for securely provisioning copies of an electronic circuit. A first entity (e.g., a chip manufacturer) embeds one or more secret values into copies of the electronic circuit. A second entity (e.g., an OEM): 1) embeds a trust anchor in a first copy of the electronic circuit; 2) causes the electronic circuit to generate a message signing key pair using the trust anchor and the embedded secret value(s); 3) signs provisioning code using a code signing private key; and 4) sends a corresponding code signing public key, the trust anchor, and the signed provisioning code to a third entity (e.g., a product manufacturer). The third entity embeds the trust anchor in a second copy of the electronic circuit and causes the electronic circuit to: 1) generate the message signing private key; 2) verify the signature of the signed provisioning code using the code signing public key; and 3) launch the provisioning code on the electronic circuit. The electronic circuit can authenticate itself to the OEM using the message signing key pair. | 2014-06-12 |
20140164780 | INFORMATION PROCESSING APPARATUS, SIGNATURE PROVIDING METHOD, SIGNATURE VERIFYING METHOD, PROGRAM, AND RECORDING MEDIUM - An information processing apparatus including a message generating unit that generates N sets of messages based on a multi-order multivariate polynomial set F=(f1, . . . , fm). | 2014-06-12 |
20140164781 | SYSTEM AND METHOD FOR GENERATING ONE-TIME PASSWORD FOR INFORMATION HANDLING RESOURCE - In accordance with embodiments of the present disclosure, a method may include generating a random number to be associated with an information handling resource. The method may also include generating a challenge string based at least on the random number. The method may additionally include encrypting the challenge string using a first shared secret. The method may further include receiving a one-time password generated by a vendor associated with the information handling resource, the one-time password generated by decrypting the challenge string using the first shared secret, parsing the random number from the decrypted challenge string, and digitally signing the decrypted challenge string with a digital signature using a second shared secret. The method may also include granting user access to the information handling resource in response to verifying, using the second shared secret, that the digital signature matches the random number. | 2014-06-12 |
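The challenge-response flow of 20140164781 uses two shared secrets: the first protects the challenge string in transit, and the second is used by the vendor to sign the decrypted challenge. A hedged sketch, using HMAC-SHA256 for the "digital signature" and a toy XOR cipher for the challenge encryption (both are assumed stand-ins, since the abstract names neither primitive):

```python
import hashlib
import hmac
import secrets

def _toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR keystream from SHA-256(key || counter); decrypt == encrypt.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def make_challenge(secret1: bytes):
    rand = secrets.token_hex(8)                      # random number for the resource
    challenge = f"otp-challenge:{rand}".encode()     # challenge string from the random number
    return rand, _toy_cipher(secret1, challenge)     # encrypted under the first shared secret

def vendor_respond(enc_challenge: bytes, secret1: bytes, secret2: bytes):
    challenge = _toy_cipher(secret1, enc_challenge)  # vendor decrypts the challenge
    rand = challenge.decode().split(":", 1)[1]       # parses the random number out of it
    signature = hmac.new(secret2, challenge, hashlib.sha256).hexdigest()
    return rand, signature                           # one-time password: signed challenge

def grant_access(rand: str, signature: str, secret2: bytes) -> bool:
    # Access is granted only if the signature verifies against the random number.
    expected = hmac.new(secret2, f"otp-challenge:{rand}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

Binding the signature to the random number means each one-time password is valid for exactly one challenge.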
20140164782 | SYSTEM AND METHOD FOR PIN ENTRY ON MOBILE DEVICES - A system for entering a secure Personal Identification Number (PIN) into a mobile computing device includes a mobile computing device and a peripheral device that are connected via a data communication link. The mobile computing device includes a mobile application and a display, and the mobile application runs on the mobile computing device and displays a grid on the mobile computing device display. The peripheral device includes a display and an encryption engine, and the peripheral device display displays a grid corresponding to the grid displayed on the mobile computing device display. Positional inputs on the mobile computing device grid are sent to the peripheral device, and the peripheral device decodes the positional inputs into PIN digits, generates an encrypted PIN, and then sends the encrypted PIN back to the mobile computing device. | 2014-06-12 |
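The point of the grid scheme in 20140164782 is that the mobile device only ever sees grid positions, never digits. A minimal sketch, assuming a 2x5 digit grid (the abstract does not fix the layout) and a hash digest standing in for the peripheral's encryption engine:

```python
import hashlib
import random

def make_grid(seed: int):
    # Both displays render the same shuffled digit grid; the shared seed
    # models whatever synchronization the devices use (an assumption here).
    digits = list("0123456789")
    random.Random(seed).shuffle(digits)
    return [digits[0:5], digits[5:10]]   # 2 rows x 5 columns (assumed layout)

def decode_positions(grid, taps):
    # The peripheral turns (row, col) taps back into PIN digits, so the
    # mobile device handles only positions, never the digits themselves.
    return "".join(grid[r][c] for r, c in taps)

def encrypt_pin(pin: str, key: bytes) -> str:
    # Stand-in for the peripheral's encryption engine (e.g. a PIN block cipher).
    return hashlib.sha256(key + pin.encode()).hexdigest()
```

Shuffling the grid per session also defeats replay of recorded tap positions.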
20140164783 | METHOD AND APPARATUS FOR SECURELY STORING DATA IN A DATABASE - A method of securely storing data in a memory on a computer including a processor is provided. The method includes receiving unencrypted data; randomly selecting a key, wherein the key is a character of an alphabet of a data type of the unencrypted data; creating partially encrypted data by encrypting the unencrypted data by randomly mapping each character of the alphabet of the data type of the unencrypted data to a character of an alphabet of a data type of encrypted data, except each character of the unencrypted data matching the key is not encrypted; and storing the partially encrypted data in the memory. | 2014-06-12 |
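The partial-encryption idea in 20140164783 (one randomly chosen key character stays in the clear; every other alphabet character is substituted) can be sketched directly. For simplicity the sketch maps the alphabet onto itself, although the abstract allows a different ciphertext alphabet; characters outside the alphabet pass through unchanged.

```python
import random
import string

def partial_encrypt(plaintext: str, alphabet: str = string.ascii_lowercase):
    key = random.choice(alphabet)            # randomly selected key character
    others = [c for c in alphabet if c != key]
    shuffled = others[:]
    random.shuffle(shuffled)                 # random mapping for non-key characters
    mapping = dict(zip(others, shuffled))
    mapping[key] = key                       # characters matching the key stay unencrypted
    ciphertext = "".join(mapping.get(c, c) for c in plaintext)
    return key, mapping, ciphertext

def partial_decrypt(ciphertext: str, mapping: dict) -> str:
    # The mapping is a bijection on the alphabet, so inversion recovers the plaintext.
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse.get(c, c) for c in ciphertext)
```

Leaving the key character in the clear is what allows, per the patent's motivation, some queries against the stored data without full decryption.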
20140164784 | INTEGRATED HEALTH CARE SYSTEMS AND METHODS - Systems and methods described herein may store and analyze patient data sets. A processor in communication with a database may generate a plurality of patient data sets, each of the patient data sets being associated with one of a plurality of patients and comprising an attribute. The processor may de-identify each of the patient data sets so that they are not associated with the patients. The processor may encrypt each of the de-identified data sets to generate a plurality of encrypted data sets and store the encrypted data sets in the database. The processor may analyze one of the patient data sets to determine a relationship between the one of the patient data sets and the other of the patient data sets based on the attribute of the one of the patient data sets and the attributes of the other of the patient data sets. | 2014-06-12 |
20140164785 | ENCRYPTION PROCESSING DEVICE AND AUTHENTICATION METHOD - An encryption processing device includes a memory configured to store a common key, and a processor configured to generate a random number which is an integer, to perform a bit transposition on the common key, the bit transposition being determined at least by the random number, to transmit the random number to another encryption processing device and to receive a response from the other encryption processing device, the response obtained by encryption using a common key stored in the other encryption processing device and a second randomized key generated by performing the bit transposition determined by the random number; and to authenticate the other encryption processing device either by comparing the response with the random number by decrypting the response with the common key, or by comparing the random number with the response by encrypting the random number with the common key. | 2014-06-12 |
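The authentication in 20140164785 rests on both devices applying the same random-number-determined bit transposition to their common key. A sketch, where the transposition is instantiated as a PRNG-seeded bit shuffle and the response is a hash of the randomized key with the challenge (both assumed details; the patent specifies neither the transposition nor the cipher):

```python
import hashlib
import random

def transpose_bits(key: bytes, rand: int) -> bytes:
    # Bit transposition determined by the random number: shuffle the key's
    # bits with a PRNG seeded by rand (an assumed instantiation).
    bits = [(b >> i) & 1 for b in key for i in range(8)]
    order = list(range(len(bits)))
    random.Random(rand).shuffle(order)
    out = bytearray(len(key))
    for i, src in enumerate(order):
        out[i // 8] |= bits[src] << (i % 8)
    return bytes(out)

def respond(common_key: bytes, rand: int) -> bytes:
    # The responder proves knowledge of the common key by binding the
    # randomized key to the challenge (a stand-in for encrypting rand).
    randomized = transpose_bits(common_key, rand)
    return hashlib.sha256(randomized + rand.to_bytes(8, "big")).digest()

def authenticate(common_key: bytes, rand: int, response: bytes) -> bool:
    # The verifier recomputes the expected response from its own copy of the key.
    return respond(common_key, rand) == response
```

Because a fresh random number yields a fresh transposition, a captured response cannot be replayed against a later challenge.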