3rd week of 2014 patent application highlights part 63 |
Patent application number | Title and abstract | Published |
20140019686 | EFFICIENT DYNAMIC RANDOMIZING ADDRESS REMAPPING FOR PCM CACHING TO IMPROVE ENDURANCE AND ANTI-ATTACK - A method, including monitoring, by a remapping manager, a system state of a computing device for the occurrence of a predefined event, detecting, by the remapping manager, the occurrence of the predefined event, and initiating, by the remapping manager upon the detection of the predefined event, a remapping of first encoded addresses stored in tags, where the first encoded addresses are associated with locations in main memory that are cached in a memory cache. | 2014-01-16 |
20140019687 | METHOD FOR INCREASING CACHE SIZE - A method for increasing storage space in a system containing a block data storage device, a memory, and a processor is provided. Generally, the processor is configured by the memory to tag metadata of a data block of the block storage device indicating the block as free, used, or semifree. The free tag indicates the data block is available to the system for storing data when needed, the used tag indicates the data block contains application data, and the semifree tag indicates the data block contains cache data and is available to the system for storing application data if no blocks marked with the free tag are available to the system. | 2014-01-16 |
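The three-state tagging described above can be sketched as a small allocator model. The `Tag` names and the linear scan below are illustrative assumptions, not the patented implementation:

```python
from enum import Enum

class Tag(Enum):
    FREE = "free"          # available for any data
    USED = "used"          # holds application data
    SEMIFREE = "semifree"  # holds cache data; reclaimable on demand

def allocate_block(tags):
    """Pick a block for application data: prefer FREE, fall back to SEMIFREE."""
    for i, t in enumerate(tags):
        if t is Tag.FREE:
            tags[i] = Tag.USED
            return i
    for i, t in enumerate(tags):
        if t is Tag.SEMIFREE:
            tags[i] = Tag.USED  # cached contents are simply discarded
            return i
    return None  # no space left for application data
```

The point of the semifree state is visible here: cache occupancy never blocks an application allocation, it only delays reclaiming until free blocks run out.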
20140019688 | Solid State Drives as a Persistent Cache for Database Systems - Disclosed herein are systems, methods, and computer readable storage media for a database system using solid state drives as a second level cache. A database system includes random access memory configured to operate as a first level cache, solid state disk drives configured to operate as a persistent second level cache, and hard disk drives configured to operate as disk storage. The database system also includes a cache manager configured to receive a request for a data page and determine whether the data page is in cache or disk storage. If the data page is on disk, or in the second level cache, it is copied to the first level cache. If copying the data page results in an eviction, the evicted data page is copied to the second level cache. At checkpoint, dirty pages stored in the second level cache are flushed in place in the second level cache. | 2014-01-16 |
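The RAM/SSD/HDD hierarchy above can be modeled in a few lines. The class name, LRU policy, and capacity parameter are assumptions for illustration; checkpoint flushing of dirty pages is omitted:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy model: RAM as the first-level cache, SSD as a persistent
    second-level cache, HDD as backing storage."""
    def __init__(self, l1_capacity, disk):
        self.l1 = OrderedDict()  # page id -> data, kept in LRU order
        self.l2 = {}             # SSD-resident second-level cache
        self.disk = disk
        self.l1_capacity = l1_capacity

    def get(self, page):
        if page in self.l1:                        # first-level hit
            self.l1.move_to_end(page)
            return self.l1[page]
        data = self.l2.get(page, self.disk[page])  # L2 hit, else disk read
        self._insert_l1(page, data)
        return data

    def _insert_l1(self, page, data):
        if len(self.l1) >= self.l1_capacity:
            victim, vdata = self.l1.popitem(last=False)
            self.l2[victim] = vdata                # eviction spills to the SSD tier
        self.l1[page] = data
```

Note how an L1 eviction lands in L2 rather than being dropped, which is what makes the SSD tier worthwhile on the next access.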
20140019689 | METHODS OF CACHE PRELOADING ON A PARTITION OR A CONTEXT SWITCH - A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown. | 2014-01-16 |
20140019690 | PROCESSOR, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD OF PROCESSOR - A request storing unit in a PF port stores an expanded request. A PF port entry selecting unit controls two pre-fetch requests expanded from the expanded request to consecutively be input to a L2-pipe. When only one of the expanded two pre-fetch requests is aborted, the PF port entry selecting unit further controls the requests such that the aborted pre-fetch request is input to the L2-pipe as the highest priority request. Further, the PF port entry selecting unit receives the number of available resources from a resource managing unit in order to select a pre-fetch request to be input to a pipe inputting unit based on the number of available resources. | 2014-01-16 |
20140019691 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR INVALIDATING CACHE LINES - A system, method, and computer program product are provided for invalidating cache lines. In use, one or more cache lines that hold data from within a region of a memory address space are invalidated. | 2014-01-16 |
20140019692 | CONCURRENT EXECUTION OF CRITICAL SECTIONS BY ELIDING OWNERSHIP OF LOCKS - Critical sections of multi-threaded programs, normally protected by locks providing access by only one thread, are speculatively executed concurrently by multiple threads with elision of the lock acquisition and release. Upon completion of the speculative execution without actual conflict, as may be identified using standard cache protocols, the speculative execution is committed; otherwise the speculative execution is squashed. Speculative execution with elision of the lock acquisition allows a greater degree of parallel execution in multi-threaded programs with aggressive lock usage. | 2014-01-16 |
20140019693 | PARALLEL PROCESSING OF A SINGLE DATA BUFFER - Technologies for executing a serial data processing algorithm on a single variable-length data buffer include streaming segments of the buffer into a data register, executing the algorithm on each of the segments in parallel, and combining the results of executing the algorithm on each of the segments to form the output of the serial data processing algorithm. | 2014-01-16 |
20140019694 | PARALLEL PROCESSING OF A SINGLE DATA BUFFER - Technologies for executing a serial data processing algorithm on a single variable-length data buffer include padding data segments of the buffer, streaming the data segments into a data register, and executing the serial data processing algorithm on each of the segments in parallel. | 2014-01-16 |
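Both applications above cover the same idea: splitting one buffer into segments, running a normally serial algorithm on each segment in parallel, and combining the partial results. A minimal sketch, using SHA-256 as a stand-in serial algorithm and a simple concatenate-and-rehash combine step (both illustrative assumptions, not the patented scheme):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def parallel_digest(buf: bytes, lanes: int = 4) -> str:
    """Digest one buffer by hashing `lanes` segments in parallel,
    then combining the per-segment digests into a single result."""
    seg_len = -(-len(buf) // lanes)  # ceiling division
    segments = [buf[i:i + seg_len] for i in range(0, len(buf), seg_len)]
    with ThreadPoolExecutor(max_workers=lanes) as pool:
        partials = list(pool.map(lambda s: hashlib.sha256(s).digest(), segments))
    # Combine: hash the concatenation of the partial digests.
    return hashlib.sha256(b"".join(partials)).hexdigest()
```

The result differs from a plain serial `sha256(buf)`, which is exactly why the combine step must be defined as part of the algorithm rather than bolted on.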
20140019695 | Systems and Methods for Rapid Erasure Retry Decoding - The present invention is related to processing data sets, and more specifically to recovering problematic portions of a data set. | 2014-01-16 |
20140019696 | METHODS AND APPARATUS FOR POINT-IN-TIME VOLUMES - Methods and apparatus for point-in-time volumes are provided. A relationship is enabled between a source volume and point-in-time volume. Copying a data chunk to the point-in-time volume before a write operation modifies the data chunk on the source volume dynamically creates the point-in-time volume. The point-in-time volume can be accessed in read/write mode as a general purpose data storage volume. Other embodiments comprising additional features, such as a forced migration process, are also provided. | 2014-01-16 |
20140019697 | CLIPBOARD FOR PROCESSING RECEIVED DATA CONTENT - An embodiment of the invention is directed to a method associated with data content comprising discrete data portions, including first and second data portions separated from each other in the data content. A copy operation is implemented so that at least some of the data portions, including the first and second data portions, are each copied to a buffer. A paste operation is carried out to present each of the copied data portions as an input for an output data selection task. Prespecified criteria are used in the output data selection task to select a number of the copied data portions as selected data for a given purpose, the selected number of copied data portions being less than the number of data portions presented by the paste operation, and the selected copied data portions including the first and second data portions. | 2014-01-16 |
20140019698 | SYSTEMS AND METHODS FOR REPLICATION OF DATA UTILIZING DELTA VOLUMES - A method of data replication from a first data storage device to a second data storage device. The method may include generating, at the first data storage device, at spaced time intervals, a plurality of snapshots for a logical data volume of the first data storage device, the logical data volume being an abstraction of data blocks from one or more physical storage devices, each snapshot identifying changes of data for at least a portion of the logical data volume since a most previous snapshot. Also at the first data storage device, the method includes generating a delta volume, the delta volume indicating changes in the data of at least a portion of the logical data volume between two non-consecutive snapshots. The method further involves replicating the delta volume to the second data storage device, and replicating the changes to the data indicated therein at the second data storage device. | 2014-01-16 |
20140019699 | METHODS AND SYSTEMS FOR IMPLEMENTING TIME-LOCKS - A computer accesses a storage device. The computer includes a processor and a non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by the processor, cause the computer to perform: storing a first time-lock and a second time-lock in the storage device; and, when both the first time-lock and the second time-lock are successfully stored in the storage device by the computer, obtaining an exclusive access privilege during a particular time interval associated with the first time-lock and the second time-lock. | 2014-01-16 |
20140019700 | WATER MARKING IN A DATA INTERVAL GAP - A storage device in which file data is divided into multiple blocks for storage on a recording medium is provided. The storage device includes an additional data storing section for storing additional data to be recorded on the recording medium in association with the data to be written, a position determining section for determining recording positions on the recording medium where the blocks should be respectively written, based on the additional data, and a block writing section for writing the respective blocks on the recording positions on the recording medium determined by the position determining section. The additional data thus defines a gap length between blocks of recorded data. During a read operation, if the gap length does not comport with the additional data, an error is assumed. | 2014-01-16 |
20140019701 | INFORMATION STORAGE SYSTEM AND METHOD OF CONTROLLING INFORMATION STORAGE SYSTEM - An example of an information storage system includes physical storage drives for providing real storage areas to a pool which is tiered into tiers different in performance, and a controller. The controller monitors accesses in a first tier in the pool. The controller determines a loaded state of the first tier based on the accesses to the first tier. The controller holds management information relating loads on the first tier to relocation speeds and/or modes of moving data in data relocation between a second tier in the pool and the first tier. The controller determines at least one of a relocation speed and a mode of moving data in data relocation between the second tier and the first tier based on the determined loaded state of the first tier and the management information. | 2014-01-16 |
20140019702 | INDEXED REGISTER ACCESS FOR MEMORY DEVICE - Example embodiments of a non-volatile memory device may comprise receiving an index value at one or more input terminals of a memory device and storing the index value in a first register of the memory device. The first register may be implemented in a first clock domain, and the index value may identify a second register of the memory device implemented in a second clock domain. | 2014-01-16 |
20140019703 | MEMORY ACCESS SYSTEM - A memory access system may be used to relay data between an electronic device and external memory. The memory access system may include write buffers which may receive and write information from the electronic device to the external memory. The memory access system may also include read buffers which may gather data from the external memory and send it to a main processing component of the electronic device for processing. The memory access system may be configured so that the main processing component of the electronic device may gather data from the write buffers of the memory access system when a condition is satisfied. | 2014-01-16 |
20140019704 | EXTENSION OF WRITE ANYWHERE FILE LAYOUT WRITE ALLOCATION - A plurality of storage devices is organized into a physical volume called an aggregate, the aggregate is organized into a global storage space, and a data block is resident on one of the storage devices of the plurality of storage devices. A plurality of virtual volumes is organized within the aggregate and the data block is allocated to a virtual volume. A physical volume block number (pvbn) is selected for the data block from a pvbn space of the aggregate, and a virtual volume block number (vvbn) for the data block is selected from a vvbn space of the selected virtual volume (vvol). Both the selected pvbn and the selected vvbn are inserted in a parent block as block pointers to point to the allocated data block on the storage device. | 2014-01-16 |
20140019705 | BRIDGING DEVICE HAVING A CONFIGURABLE VIRTUAL PAGE SIZE - A composite memory device including discrete memory devices and a bridge device for controlling the discrete memory devices. The bridge device has memory organized as banks, where each bank is configured to have a virtual page size that is less than the maximum physical size of the page buffer. Therefore only a segment of data corresponding to the virtual page size stored in the page buffer is transferred to the bank. The virtual page size of the banks is provided in a virtual page size (VPS) configuration command having an ordered structure where the position of VPS data fields containing VPS configuration codes in the command correspond to different banks which are ordered from a least significant bank to a most significant bank. The VPS configuration command is variable in size, and includes only the VPS configuration codes for the highest significant bank being configured and the lower significant banks. | 2014-01-16 |
20140019706 | SYSTEM AND METHOD OF LOGICAL OBJECT MANAGEMENT - A virtual allocation unit is allocated in a virtual address space corresponding to a filesystem, in response to an allocation requirement, related to a logical object in the filesystem. The size of the virtual allocation unit is determined in accordance with the current physical size of the logical object. The size of the virtual allocation unit is substantially larger than a size required with respect to the allocation requirement. Physical block address ranges are allocated in a physical storage space, in response to subsequent write requests, related to the logical object. Each physical block address range is associated with a respective portion of the virtual allocation unit. | 2014-01-16 |
20140019707 | Automatically Preventing Large Block Writes from Starving Small Block Writes in a Storage Device - A mechanism is provided in a storage device for performing a write operation. The mechanism configures a write buffer memory with a plurality of write buffer portions. Each write buffer portion is dedicated to a predetermined block size category within a plurality of block size categories. For each write operation from an initiator, the mechanism determines a block size category of the write operation. The mechanism performs each write operation by writing to a write buffer portion within the plurality of write buffer portions corresponding to the block size category of the write operation. | 2014-01-16 |
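The partitioning by block-size category described above can be sketched as follows. The two boundary values and the three-category split are illustrative assumptions:

```python
def categorize(block_size, boundaries=(4096, 65536)):
    """Map a write's block size to a buffer-portion index.
    The boundary values are assumed, not taken from the patent."""
    for i, limit in enumerate(boundaries):
        if block_size <= limit:
            return i
    return len(boundaries)

class PartitionedWriteBuffer:
    """One dedicated buffer portion per block-size category, so a burst
    of large writes cannot consume the space reserved for small writes."""
    def __init__(self, categories=3):
        self.portions = [[] for _ in range(categories)]

    def write(self, data: bytes):
        self.portions[categorize(len(data))].append(data)
```

Because each category owns its portion, small writes always find buffer space regardless of how many large writes are in flight, which is the starvation-avoidance property the abstract claims.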
20140019708 | GRANTING AND REVOKING SUPPLEMENTAL MEMORY ALLOCATION REQUESTS - Provided are a computer program product, system, and method for granting and revoking supplemental memory allocation requests. Supplemental memory allocations of memory resources are granted to applications following initial memory allocations of the memory resources to the applications. In response to determining that available memory resources have fallen below an availability threshold, a weighting factor is determined for each supplemental memory allocation based on at least one of an amount of the memory resources allocated to the supplemental memory allocation and a measured duration during which the memory resources have been allocated. At least one of the supplemental memory allocations is selected to revoke based on the determined weighting factors of the supplemental memory allocations. | 2014-01-16 |
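The revocation choice above can be sketched with one plausible weighting factor. Taking the weight as bytes held times seconds held is an assumption; the abstract only says the weight depends on amount and/or duration:

```python
import time

def pick_revocation(supplemental, now=None):
    """Choose which supplemental allocation to revoke when free memory
    falls below the availability threshold. Weight = bytes held x
    seconds held; the allocation with the largest weight is revoked first."""
    now = now if now is not None else time.monotonic()
    def weight(alloc):
        return alloc["bytes"] * (now - alloc["granted_at"])
    return max(supplemental, key=weight)
```

With this choice, a large allocation held for a long time is the first candidate, while a small or recently granted one survives longer.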
20140019709 | Methods and Systems For Using Distributed Allocation Tables - Methods and systems are disclosed for distributed storage systems. For example, a device can receive a read request for a first file, where the read request is generated by a host device. The read request is configured to access a file on the host device. The device can access mappings to identify a first mapping. The device can identify a first file on a mobile device based on the first mapping. The device can access the first file, where the accessing uses the first mapping. The device can access the first file by communicating with the mobile device to read the first file. The device can then return the first file. | 2014-01-16 |
20140019710 | ENDIAN CONVERSION METHOD AND SYSTEM - An endian conversion method is executed by a CPU, and includes executing a program that includes endian conversion setting; and performing, when accessing an address of a main memory indicated in the endian conversion setting, endian conversion of data specified by the address of the main memory. | 2014-01-16 |
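The per-access conversion above reduces to a byte swap on each word of the marked range. A minimal sketch for 32-bit words, assuming the range is word-aligned:

```python
import struct

def swap_endian_words(data: bytes) -> bytes:
    """Convert a buffer of 32-bit words between big and little endian,
    as the hardware might do for accesses to an address range marked
    in the endian conversion setting."""
    n = len(data) // 4
    words = struct.unpack(f">{n}I", data)  # interpret as big-endian words
    return struct.pack(f"<{n}I", *words)   # re-emit as little-endian
```

Swapping twice returns the original buffer, which is why the same routine serves both load and store directions.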
20140019711 | DISPERSED STORAGE NETWORK VIRTUAL ADDRESS SPACE - A dispersed storage network utilizes a virtual address space to store data. The dispersed storage network includes a dispersed storage device for receiving a request relating to a data object stored in the dispersed storage network and determining a virtual memory address assigned to the data object. The virtual memory address is within a virtual memory address range of the virtual address space that is allocated to a vault associated with a user of the data object. The virtual memory address is further assigned to a data slice of a plurality of data slices of the data object. The dispersed storage device uses the virtual memory address to determine an identifier of a storage unit within the dispersed storage network that has the data slice stored therein. | 2014-01-16 |
20140019712 | SYSTEMS, APPARATUSES, AND METHODS FOR PERFORMING VECTOR PACKED COMPRESSION AND REPEAT - Embodiments of systems, apparatuses, and methods for performing in a computer processor vector packed compression and repeat in response to a single vector packed compression and repeat instruction that includes a first and second source vector register operand, a destination vector register operand, and an opcode are described. | 2014-01-16 |
20140019713 | SYSTEMS, APPARATUSES, AND METHODS FOR PERFORMING A DOUBLE BLOCKED SUM OF ABSOLUTE DIFFERENCES - Embodiments of systems, apparatuses, and methods for performing in a computer processor vector double block packed sum of absolute differences (SAD) in response to a single vector double block packed sum of absolute differences instruction that includes a destination vector register operand, first and second source operands, an immediate, and an opcode are described. | 2014-01-16 |
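The per-block operation behind the instruction above is easy to state in scalar form. This sketch shows the SAD computation and its use across aligned windows, the kind of per-lane result a packed-SAD instruction produces; the function names are illustrative:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def block_sads(needle, haystack):
    """SAD of `needle` against every aligned window of `haystack` --
    the set of results the vector instruction computes in one step."""
    m = len(needle)
    return [sad(needle, haystack[i:i + m]) for i in range(len(haystack) - m + 1)]
```

SAD is the inner loop of video motion estimation, which is why collapsing many of these window comparisons into a single instruction pays off.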
20140019714 | VECTOR FREQUENCY EXPAND INSTRUCTION - A processor core that includes a hardware decode unit and an execution engine unit. The hardware decode unit is to decode a vector frequency expand instruction, wherein the vector frequency expand instruction includes a source operand and a destination operand, wherein the source operand specifies a source vector register that includes one or more pairs of a value and run length that are to be expanded into a run of that value based on the run length. The execution engine unit is to execute the decoded vector frequency expand instruction, which causes a set of one or more source data elements in the source vector register to be expanded into a set of destination data elements comprising more elements than the set of source data elements and including at least one run of identical values which were run length encoded in the source vector register. | 2014-01-16 |
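The expand operation described above is run-length decoding into a fixed-width register. A scalar model, where `dest_len` stands in for the destination register's element count (an assumed parameter):

```python
def vector_frequency_expand(pairs, dest_len):
    """Expand (value, run_length) pairs into a destination vector,
    truncating at the fixed width of the destination register."""
    out = []
    for value, run in pairs:
        out.extend([value] * run)
    return out[:dest_len]
```

The truncation at `dest_len` models the hardware reality that a register cannot grow; runs that overflow the destination are simply cut off (how the real instruction reports the overflow is not specified in the abstract).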
20140019715 | SYSTEMS, APPARATUSES, AND METHODS FOR PERFORMING A CONVERSION OF A WRITEMASK REGISTER TO A LIST OF INDEX VALUES IN A VECTOR REGISTER - Embodiments of systems, apparatuses, and methods for performing in a computer processor conversion of a mask register into a list of index values in response to a single vector packed convert a mask register into a list of index values instruction that includes a destination vector register operand, a source writemask register operand, and an opcode are described. | 2014-01-16 |
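The mask-to-indices conversion above has a direct scalar equivalent. The 16-element width is an assumed register size for illustration:

```python
def mask_to_indices(mask: int, width: int = 16):
    """Convert a writemask register value to the list of set-bit index
    values, as the single instruction writes into a vector register."""
    return [i for i in range(width) if (mask >> i) & 1]
```

The resulting index list is what makes gather/scatter-style follow-up operations possible on only the masked-in elements.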
20140019716 | PLATEABLE DIFFUSION BARRIER TECHNIQUES - Techniques are disclosed for forming a directly plateable diffusion barrier within an interconnect structure to prevent diffusion of interconnect fill metal into surrounding dielectric material and lower metal layers. The barrier can be used in back-end interconnect metallization processes and, in an embodiment, renders a seed layer unnecessary. In accordance with various example embodiments, the barrier can be implemented, for instance, as: (1) a single layer of ruthenium silicide (RuSi | 2014-01-16 |
20140019717 | SYNCHRONIZATION METHOD, MULTI-CORE PROCESSOR SYSTEM, AND SYNCHRONIZATION SYSTEM - A synchronization method is executed by a multi-core processor system. The synchronization method includes registering based on a synchronous command issued from a first CPU, CPUs to be synchronized and a count of the CPUs into a specific table; counting by each of the CPUs and based on a synchronous signal from the first CPU, an arrival count for a synchronous point, and creating by each of the CPUs, a second shared memory area that is a duplication of a first shared memory area accessed by processes executed by the CPUs; and comparing the first shared memory area and the second shared memory area when the arrival count becomes equal to the count of the CPUs, and based on a result of the comparison, judging the processes executed by the CPUs. | 2014-01-16 |
20140019718 | VECTORIZED PATTERN SEARCHING - Embodiments of computer-implemented methods, systems, computing devices, and computer-readable media are described herein for vectorized searching for a pattern P within a set of data T, the pattern P having a length m. In various embodiments, the vectorized search may include a shift of a sliding window into T by a distance d that is greater than m on determination, based on one or more ordered vectorized comparisons of portions of P and T, that no potential match of P is found within the sliding window. In various embodiments, d and m may be positive integers. In various embodiments, the one or more ordered vectorized comparisons may include one or more single instruction multiple data (“SIMD”) instructions supported by the processor. | 2014-01-16 |
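The key claim above is the shift distance d greater than the pattern length m. A scalar model of that idea: if the first byte of P occurs nowhere in a window of width d (the "no potential match" test, done with SIMD compares in hardware), no match can start in that window, so the whole window is skipped. The default d = 2m is an illustrative choice:

```python
def find_pattern(T: bytes, P: bytes, d=None):
    """Return the first index of P in T, or -1, sliding a window of
    width d > m and skipping the entire window when no candidate
    first byte appears in it."""
    m = len(P)
    d = d or 2 * m
    s = 0
    while s < len(T):
        window = T[s:s + d]
        if P[0] not in window:
            s += d  # shift greater than m: no match can start here
            continue
        for i, b in enumerate(window):
            # Verify against T, not window, so matches that extend
            # past the window edge are still found.
            if b == P[0] and T[s + i:s + i + m] == P:
                return s + i
        s += d
    return -1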
20140019719 | GENERALIZED BIT MANIPULATION INSTRUCTIONS FOR A COMPUTER PROCESSOR - Methods of bit manipulation within a computer processor are disclosed. Improved flexibility in bit manipulation proves helpful in computing elementary functions critical to the performance of many programs and for other applications. In one embodiment, a unit of input data is shifted/rotated and multiple non-contiguous bit fields from the unit of input data are inserted in an output register. In another embodiment, one of two units of input data is optionally shifted or rotated, the two units of input data are partitioned into a plurality of bit fields, bitwise operations are performed on each bit field, and pairs of bit fields are combined with either an AND or an OR bitwise operation. Embodiments are also disclosed to simultaneously perform these processes on multiple units and pairs of units of input data in a Single Input, Multiple Data processing environment capable of performing logical operations on floating point data. | 2014-01-16 |
20140019720 | METHODS, APPARATUS, AND INSTRUCTIONS FOR CONVERTING VECTOR DATA - A computer processor includes a decoder for decoding machine instructions and an execution unit for executing those instructions. The decoder and the execution unit are capable of decoding and executing vector instructions that include one or more format conversion indicators. For instance, the processor may be capable of executing a vector-load-convert-and-write (VLoadConWr) instruction that provides for loading data from memory to a vector register. The VLoadConWr instruction may include a format conversion indicator to indicate that the data from memory should be converted from a first format to a second format before the data is loaded into the vector register. Other embodiments are described and claimed. | 2014-01-16 |
20140019721 | MANAGED INSTRUCTION CACHE PREFETCHING - Disclosed is an apparatus and method to manage instruction cache prefetching from an instruction cache. A processor may comprise: a prefetch engine; a branch prediction engine to predict the outcome of a branch; and a dynamic optimizer. The dynamic optimizer may be used to identify common instruction cache misses and to insert a prefetch instruction from the prefetch engine into the instruction cache. | 2014-01-16 |
20140019722 | PROCESSOR AND INSTRUCTION PROCESSING METHOD OF PROCESSOR - Provided are a processor and an instruction processing method of the processor, with which it is possible to increase an instruction execution rate. A processor 1 includes a BTAC 12 that stores branch target information of a branch instruction and boundary information indicating that the branch instruction is on a fetch line boundary, a branch prediction unit 13 that performs branch prediction of a variable-length instruction set including the branch instruction by referring to the BTAC 12, and a fetch unit 14 that fetches an instruction based on the branch prediction result. The branch prediction unit 13 refers to the BTAC 12, and when the boundary information is present in the instruction which the branch prediction unit 13 makes the fetch unit 14 fetch, the branch prediction unit 13 makes the fetch unit 14 fetch the next fetch line as well and then makes the fetch unit 14 fetch a branch prediction target instruction according to the branch target information. | 2014-01-16 |
20140019723 | BINARY TRANSLATION IN ASYMMETRIC MULTIPROCESSOR SYSTEM - An asymmetric multiprocessor system (ASMP) may comprise computational cores implementing different instruction set architectures and having different power requirements. Program code for execution on the ASMP is analyzed and a determination is made as to whether to allow the program code, or a code segment thereof to execute on a first core natively or to use binary translation on the code and execute the translated code on a second core which consumes less power than the first core during execution. | 2014-01-16 |
20140019724 | COOPERATIVE THREAD ARRAY REDUCTION AND SCAN OPERATIONS - One embodiment of the present invention sets forth a technique for performing aggregation operations across multiple threads that execute independently. Aggregation is specified as part of a barrier synchronization or barrier arrival instruction, where in addition to performing the barrier synchronization or arrival, the instruction aggregates (using reduction or scan operations) values supplied by each thread. When a thread executes the barrier aggregation instruction the thread contributes to a scan or reduction result, and waits to execute any more instructions until after all of the threads have executed the barrier aggregation instruction. A reduction result is communicated to each thread after all of the threads have executed the barrier aggregation instruction and a scan result is communicated to each thread as the barrier aggregation instruction is executed by the thread. | 2014-01-16 |
20140019725 | METHOD FOR FAST LARGE-INTEGER ARITHMETIC ON IA PROCESSORS - Methods, systems, and apparatuses are disclosed for implementing fast large-integer arithmetic within an integrated circuit, such as on IA (Intel Architecture) processors, in which such means include receiving a 512-bit value for squaring, the 512-bit value having eight sub-elements each of 64-bits and performing a 512-bit squaring algorithm by: (i) multiplying every one of the eight sub-elements by itself to yield a square of each of the eight sub-elements, the eight squared sub-elements collectively identified as T1, (ii) multiplying every one of the eight sub-elements by the other remaining seven of the eight sub-elements to yield an asymmetric intermediate result having seven diagonals therein, wherein each of the seven diagonals are of a different length, (iii) reorganizing the asymmetric intermediate result having the seven diagonals therein into a symmetric intermediate result having four diagonals each of 7×1 sub-elements of the 64-bits in length arranged across a plurality of columns, (iv) adding all sub-elements within their respective columns, the added sub-elements collectively identified as T2, and (v) yielding a final 512-bit squared result of the 512-bit value by adding the value of T2 twice with the value of T1 once. Other related embodiments are disclosed. | 2014-01-16 |
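The limb arithmetic above rests on a simple identity: writing x as eight 64-bit sub-elements, x² is the sum of the per-limb squares (T1) plus twice the cross products (T2). A Python model of that decomposition, with the diagonal bookkeeping collapsed into two sums (the hardware-friendly column reorganization is omitted):

```python
MASK = (1 << 64) - 1

def square_512(x: int) -> int:
    """Square a 512-bit value via its eight 64-bit sub-elements:
    T1 = sum of each limb times itself, T2 = sum of the cross
    products (the 'diagonals'), result = T1 + 2*T2 -- i.e. T2 added
    twice and T1 once, as the abstract's final step describes."""
    limbs = [(x >> (64 * i)) & MASK for i in range(8)]
    t1 = sum(limbs[i] * limbs[i] << (128 * i) for i in range(8))
    t2 = sum(limbs[i] * limbs[j] << (64 * (i + j))
             for i in range(8) for j in range(i + 1, 8))
    return t1 + 2 * t2
```

Exploiting the symmetry this way roughly halves the multiplications compared with a general 512x512 multiply, which is the point of a dedicated squaring path.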
20140019726 | PARALLEL ARITHMETIC DEVICE, DATA PROCESSING SYSTEM WITH PARALLEL ARITHMETIC DEVICE, AND DATA PROCESSING PROGRAM - A parallel arithmetic device includes a status management section, a plurality of processor elements, and a plurality of switch elements for determining the relation of coupling of each of the processor elements. Each of the processor elements includes an instruction memory for memorizing a plurality of operation instructions corresponding respectively to a plurality of contexts so that an operation instruction corresponding to the context selected by the status management section is read out, and a plurality of arithmetic units for performing arithmetic processes in parallel on a plurality of sets of input data in a manner compliant with the operation instruction read out from the instruction memory. | 2014-01-16 |
20140019727 | MODIFIED BALANCED THROUGHPUT DATA-PATH ARCHITECTURE FOR SPECIAL CORRELATION APPLICATIONS - Apparatus and method for a modified, balanced throughput data-path architecture is given for efficiently implementing the digital signal processing algorithms of filtering, convolution and correlation in computer hardware, in which both data and coefficient buffers can be implemented as sliding windows. This architecture uses a multiplexer and a data path branch from the Address Generator unit to the multiply-accumulate execution unit. By selecting between the data path of Address Generator to execution unit and the data path of register to execution unit, the unbalanced throughput and multiply-accumulate bubble cycles caused by misaligned addressing on coefficients can be overcome. The modified balanced throughput data-path architecture can achieve a high multiply-accumulate operation rate per cycle in implementing digital signal processing algorithms. | 2014-01-16 |
20140019728 | CONTROLLING AN ORDER FOR PROCESSING DATA ELEMENTS DURING VECTOR PROCESSING - A data processing apparatus includes a register bank having a plurality of registers for storing vectors being processed; a pipelined processor for processing the stream of vector instructions; the pipelined processor comprising circuitry configured to detect data dependencies for the vectors processed by the stream of vector instructions and stored in the plurality of registers and to determine constraints on timing of execution for the vector instructions such that no register data hazards arise. Register data hazards arise where two accesses to a same register, at least one of said accesses being a write, occur in an order different to an order of said instruction stream such that an access occurring later in said instruction stream starts before an access occurring earlier in said instruction stream has completed. The pipelined processor includes data element hazard determination circuitry. | 2014-01-16 |
20140019729 | Method for Processing Data Sets, a Pipelined Stream Processor for Processing Data Sets, and a Computer Program for Programming a Pipelined Stream Processor - There is provided a method for processing data sets in a processor. The processor has a pipelined data path including an input, an output, and at least one discrete stage. The pipeline is configured to enable one or more data sets, each comprising one or more data items, to enter the pipeline from the input, propagate through the pipeline, and exit the pipeline through the output. Each discrete stage represents an operation to be performed on the data item occupying the discrete stage. The method comprises defining one or more non-overlapping sections of the pipeline corresponding to portions of the pipeline occupied by the data items of at least one data set. In addition, the method comprises providing one or more logic units, each dedicated to control the progress of the data items of the at least one data set through the pipeline as the section advances through the pipeline. | 2014-01-16 |
20140019730 | Method and Device for Data Transmission Between Register Files - The present disclosure discloses a method and device for data transmission between register files. The method includes that: data in a source register file are read at a Stage i of a pipeline; and the read data are transmitted to a destination register file using an idle instruction pipeline. With the method of the present disclosure, data and mask information are transmitted using an idle instruction pipeline, without addition of extra registers for data and control information buffering, thus reducing logic consumption as well as increasing utilization of an existing functional unit. | 2014-01-16 |
20140019731 | GENERALIZED BIT MANIPULATION INSTRUCTIONS FOR A COMPUTER PROCESSOR - Methods of bit manipulation within a computer processor are disclosed. Improved flexibility in bit manipulation proves helpful in computing elementary functions critical to the performance of many programs and for other applications. In one embodiment, a unit of input data is shifted/rotated and multiple non-contiguous bit fields from the unit of input data are inserted in an output register. In another embodiment, one of two units of input data is optionally shifted or rotated, the two units of input data are partitioned into a plurality of bit fields, bitwise operations are performed on each bit field, and pairs of bit fields are combined with either an AND or an OR bitwise operation. Embodiments are also disclosed to simultaneously perform these processes on multiple units and pairs of units of input data in a Single Instruction, Multiple Data (SIMD) processing environment capable of performing logical operations on floating point data. | 2014-01-16 |
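The first embodiment — rotate a unit of input data, then insert multiple non-contiguous bit fields from it into an output register — might look like the following sketch. The 32-bit word width and the `(src_pos, dst_pos, width)` field descriptors are assumptions for illustration only.

```python
def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    n &= 31
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def insert_fields(src, rot, fields, dst=0):
    """Rotate src, then copy each (src_pos, dst_pos, width) bit field of
    the rotated value into the output register dst."""
    r = rotl32(src, rot)
    for src_pos, dst_pos, width in fields:
        mask = (1 << width) - 1
        bits = (r >> src_pos) & mask
        dst = (dst & ~(mask << dst_pos)) | (bits << dst_pos)
    return dst & 0xFFFFFFFF
```

For example, two non-contiguous byte fields of the input can be packed side by side in the low half of the output register with a single call.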
20140019732 | SYSTEMS, APPARATUSES, AND METHODS FOR PERFORMING MASK BIT COMPRESSION - Embodiments of systems, apparatuses, and methods for performing in a computer processor mask bit compression in response to a single mask bit compression instruction that includes a source writemask register operand, a destination writemask register operand, and an opcode are described. | 2014-01-16 |
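One plausible semantics for such an instruction is packing the set bits of the source writemask into the least significant positions of the destination; since mask bits carry no payload, compressing k set bits yields the low-order mask of k ones. This reading is an assumption — the abstract fixes only the operand and opcode format.

```python
def compress_mask(src, nbits=16):
    """Pack the set bits of a writemask into the low-order positions:
    k set bits anywhere in src become the k least significant bits."""
    count = bin(src & ((1 << nbits) - 1)).count("1")
    return (1 << count) - 1
```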
20140019733 | REAL TIME INSTRUCTION TRACING COMPRESSION OF RET INSTRUCTIONS - In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing Real Time Instruction Tracing compression of RET instructions. For example, in one embodiment, such means may include an integrated circuit having means for initiating instruction tracing for instructions of a traced application, mode, or code region, as the instructions are executed by the integrated circuit; means for generating a plurality of packets describing the instruction tracing; and means for compressing a multi-bit RET instruction (RETurn instruction) to a single bit RET instruction. | 2014-01-16 |
20140019734 | DATA PROCESSING APPARATUS AND METHOD USING CHECKPOINTING - A data processing apparatus and method of data processing are provided. The data processing apparatus comprises execution circuitry configured to execute a sequence of program instructions. Checkpoint circuitry is configured to identify an instance of a predetermined type of instruction in the sequence of program instructions and to store checkpoint information associated with that instance. The checkpoint information identifies a state of the data processing apparatus prior to execution of that instance of the predetermined type of instruction, wherein the predetermined type of instruction has an expected long completion latency. If the execution circuitry does not complete execution of that instance of the predetermined type of instruction due to occurrence of a predetermined event, the data processing apparatus is arranged to reinstate the state of the data processing apparatus with reference to the checkpoint information, such that the execution circuitry is then configured to recommence execution of the sequence of program instructions at that instance of the predetermined type of instruction. | 2014-01-16 |
20140019735 | Computer Processor Providing Exception Handling with Reduced State Storage - A computer architecture allows for simplified exception handling by restarting the program after exceptions at the beginning of idempotent regions, the idempotent regions allowing re-execution without the need for restoring complex state information from checkpoints. Recovery from mis-speculation may be provided by a similar mechanism but using smaller idempotent regions reflecting a more frequent occurrence of mis-speculation. A compiler generating different idempotent regions for speculation and exception handling is also disclosed. | 2014-01-16 |
20140019736 | Embedded Branch Prediction Unit - In accordance with some embodiments of the present invention, a branch prediction unit for an embedded controller may be placed in association with the instruction fetch unit instead of the decode stage. In addition, the branch prediction unit may include no branch predictor. Also, the return address stack may be associated with the instruction decode stage and is structurally separate from the branch prediction unit. In some cases, this arrangement reduces the area of the branch prediction unit, as well as power consumption. | 2014-01-16 |
20140019737 | Branch Prediction For Indirect Jumps - Branch prediction for indirect jumps, including: receiving, by a branch prediction module, a branch address for each of a plurality of executed branch instructions; receiving, by the branch prediction module, an instruction address of a current branch instruction; creating, by the branch prediction module, an execution path identifier in dependence upon the branch address for each of the plurality of executed branch instructions and the instruction address of the current branch instruction; and searching, by the branch prediction module, a branch prediction table for an entry that matches the execution path identifier. | 2014-01-16 |
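The flow above — fold recent branch addresses and the current instruction address into an execution path identifier, then look it up in a prediction table — can be sketched as follows. The shift-and-XOR folding function, the history depth, and the method names are assumptions; the abstract does not fix how the identifier is created.

```python
from collections import deque

class IndirectPredictor:
    def __init__(self, history_len=4):
        self.history = deque(maxlen=history_len)  # recent branch addresses
        self.table = {}                           # path id -> predicted target

    def record_branch(self, addr):
        self.history.append(addr)

    def _path_id(self, pc):
        # Fold the path history and the current PC into one identifier.
        pid = pc
        for a in self.history:
            pid = ((pid << 1) ^ a) & 0xFFFFFFFF
        return pid

    def predict(self, pc):
        """Search the table for an entry matching the path identifier."""
        return self.table.get(self._path_id(pc))

    def update(self, pc, target):
        """Record the resolved target of an indirect jump for this path."""
        self.table[self._path_id(pc)] = target
```

Because the identifier depends on the path taken to reach the jump, the same indirect jump can be predicted differently along different call paths.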
20140019738 | MULTICORE PROCESSOR SYSTEM AND BRANCH PREDICTING METHOD - A multicore processor system includes plural CPUs; branch prediction memories respectively disposed for the CPUs; and a shared branch prediction memory that stores branch prediction information records respectively corresponding to threads executed by the CPUs. A first CPU among the CPUs is configured to set the branch prediction information record corresponding to a first thread among the threads executed by the first CPU, from the shared branch prediction memory to the branch prediction memory corresponding to the first CPU. | 2014-01-16 |
20140019739 | Method for System Scenario Based Design of Dynamic Embedded Systems - Methods are disclosed for system scenario-based design for an embedded platform whereon a dynamic application is implemented. The application meets at least one guaranteed constraint. Temporal correlations are assumed in the behaviour of internal data variables used in the application, with the internal data variables representing parameters used for executing a portion of the application. An example method includes determining a distribution over time of an N-dimensional cost function, with N an integer number, N≧1, corresponding to the implementation on the platform for a set of combinations of the internal data variables. The method also includes partitioning an N-dimensional cost space in at least two bounded regions, each bounded region containing cost combinations corresponding to combinations of values of the internal data variables of the set that have similar cost and frequency of occurrence, whereby one bounded region is provided for rarely occurring cost combinations. | 2014-01-16 |
20140019740 | Computer Startup Method, Startup Apparatus, State Transition Method And State Transition Apparatus - A computer startup method, a startup apparatus, a state transition method, and a state transition apparatus are described. When the computer is in a suspend-to-RAM (STR) state, the power consumption is a first power consumption. When the computer transitions from the suspend-to-disk (STD) state to the startup state, the time consumption is a first time consumption. The state transition method includes: when the computer is in the startup state, obtaining a first power state transition command instructing the computer to transition from the startup state to a specific state; and, in response to the first power state transition command, causing the computer to enter the specific state. | 2014-01-16 |
20140019741 | METHOD AND SYSTEM FOR BOOTING ELECTRONIC DEVICE FROM NAND FLASH MEMORY - A method and system for booting an electronic device from a NAND flash memory includes a NAND flash controller that receives an event trigger for fetching a pre-boot code stored in the NAND flash memory. Based on the event trigger type, booting parameters are loaded into the controller including a boot frequency of the NAND flash memory. The controller searches for a good memory block in which the pre-boot code is stored by checking the first and second or the first and last pages of a memory block and fetches a portion or the entire pre-boot code based on the event trigger type at the boot frequency. | 2014-01-16 |
20140019742 | APPROACH FOR MANAGING STATE TRANSITIONS OF A DATA CONNECTOR - A microprocessor within a processing unit is configured to manage the operation of a finite state machine (FSM) that, in turn, manages the operation of a data connector. The FSM may be a hardwired chip component that adheres to a communication protocol associated with the data connector. The microprocessor is configured to execute a software application in order to (i) apply configuration changes to the processing unit during state transitions initiated by the FSM and (ii) cause the FSM to initiate specific state transitions. | 2014-01-16 |
20140019743 | COMPUTING DEVICES AND METHODS FOR RESETTING INACTIVITY TIMERS ON COMPUTING DEVICES - Computing devices and methods for resetting an inactivity timer of each of a first and second computing device are described. In one embodiment, the method comprises establishing a communication channel between the first computing device and the second computing device, receiving activity input responsive to a user interaction at the first computing device, resetting the inactivity timer of the first computing device, and transmitting a notification via the communication channel to the second computing device that the activity input was received at the first computing device, the inactivity timer of the second computing device being reset in response to receipt of the notification. | 2014-01-16 |
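The two-device flow described here — reset the local inactivity timer on user input, then notify the peer so its timer resets too — can be modelled in a few lines. The in-process `link`/callback pairing stands in for the real communication channel, and all names are invented for illustration.

```python
import time

class Device:
    def __init__(self, timeout):
        self.timeout = timeout                    # seconds of allowed idleness
        self.last_activity = time.monotonic()
        self.peers = []

    def link(self, other):
        """Establish the (here in-process) communication channel."""
        self.peers.append(other)
        other.peers.append(self)

    def on_activity(self):
        """User interaction: reset locally, then notify linked peers."""
        self.last_activity = time.monotonic()
        for peer in self.peers:
            peer.on_notification()

    def on_notification(self):
        """Peer reported activity: reset without any local input."""
        self.last_activity = time.monotonic()

    def idle_locked(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_activity >= self.timeout
```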
20140019744 | Right of Individual Privacy and Public Safety Protection Via Double Encrypted Lock Box - A method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein. A device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein. | 2014-01-16 |
20140019745 | CRYPTOGRAPHIC ISOLATION OF VIRTUAL MACHINES - Virtual machines in a network may be isolated by encrypting transmissions between the virtual machines with keys possessed only by an intended recipient. Within a network, the virtual machines may be logically organized into a number of community-of-interest (COI) groups. Each COI may use an encryption key to secure communications within the COI, such that only other virtual machines in the COI may decrypt the message. Security may be further enhanced by establishing a session key for use during communications between a first and a second virtual machine. The session key may be encrypted with the COI key. | 2014-01-16 |
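The key layering the abstract describes — a session key for pairwise traffic, itself wrapped under the COI key shared by the group — can be sketched with a toy SHA-256 counter-mode XOR keystream standing in for a real cipher such as AES-GCM (an assumption; the abstract does not name the algorithm).

```python
import hashlib
import os

def keystream_xor(key, nonce, data):
    """Toy XOR stream cipher built from SHA-256 counter blocks.
    A stand-in for a real authenticated cipher -- not for production use."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# COI key shared by all group members; session key wrapped under it.
coi_key = os.urandom(32)
session_key = os.urandom(32)
nonce = os.urandom(16)
wrapped = keystream_xor(coi_key, nonce, session_key)    # sent on the wire
unwrapped = keystream_xor(coi_key, nonce, wrapped)      # any COI member
assert unwrapped == session_key
```

Only a machine holding the COI key can unwrap the session key, which is the isolation property the abstract relies on.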
20140019746 | Runtime Environment Management of Secure Communications on Card Computing Devices - A card computing device may be configured to establish and manage secure channel communications between terminal applications and local applications installed on the card computing device. A runtime component of the card computing device may be configured to generate a registry of applications available as endpoints for secure channel communications, either in response to applications registering as endpoints or based on installation parameters on the card computing device. The runtime component may provide a list of the registered applications to a terminal application. The runtime component may establish a secure channel between a terminal application and a local application and may receive and decrypt secure commands from the terminal application. The runtime component may forward the decrypted commands to the local application and encrypt and forward responses from the local application to the terminal application. | 2014-01-16 |
20140019747 | CRYPTOGRAPHIC HASH FUNCTION - A first module divides a string into a number of blocks. A second module associates the blocks with monoid elements in a list of first monoid elements to produce second monoid elements. A third module applies a first function to an initial monoid element and a first of the second monoid elements producing a first calculated monoid element and evaluates an action of the initial monoid element on the first function producing a second function. A fourth module applies the second function to the first calculated monoid element and to a second of the second monoid elements producing a second calculated monoid element and evaluates the action of the first calculated monoid element on the first function producing a third function. Further modules iteratively, corresponding to the number of blocks, apply the produced function to calculated monoid elements and the second monoid elements to produce a hash of the string. | 2014-01-16 |
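The block-wise fold these modules perform can be illustrated with the simplest possible monoid — multiplication modulo a prime. Note the deliberate simplification: the patented scheme updates the applied function at every step via a monoid action, whereas this sketch keeps one fixed function, so it shows only the iterative block structure.

```python
P = 2_147_483_647  # a prime modulus; purely illustrative

def blocks_of(msg, size=4):
    """Divide the string into fixed-size blocks (first module)."""
    return [msg[i:i + size] for i in range(0, len(msg), size)]

def monoid_elem(block):
    """Map a block to a nonzero element of the multiplicative monoid mod P
    (second module)."""
    return (int.from_bytes(block, "big") % (P - 1)) + 1

def hash_string(msg):
    """Fold the block elements into a digest, starting from the initial
    monoid element (identity) -- the iterative third/fourth-module step."""
    acc = 1
    for b in blocks_of(msg):
        acc = (acc * monoid_elem(b)) % P
    return acc
```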
20140019748 | LEVEL-TWO DECRYPTION ASSOCIATED WITH INDIVIDUAL PRIVACY AND PUBLIC SAFETY PROTECTION VIA DOUBLE ENCRYPTED LOCK BOX - A method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein. A device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein. | 2014-01-16 |
20140019749 | SECURING INFORMATION EXCHANGED VIA A NETWORK - A privacy key is provided over a network. An information page is provided over the network. A submission of data that is to be transmitted over the network in response to the information page is detected. A subset of the data that is to be encrypted using the privacy key is determined. The privacy key is used to encrypt the subset of the data. | 2014-01-16 |
20140019750 | VIRTUAL GATEWAYS FOR ISOLATING VIRTUAL MACHINES - Virtual machines in a network may be isolated by encrypting transmissions between the virtual machines with keys possessed only by an intended recipient. Within a network, the virtual machines may be logically organized into a number of community-of-interest (COI) groups. Each COI may use an encryption key to secure communications within the COI, such that only other virtual machines in the COI may decrypt the message. Virtual machines may further be isolated through a virtual gateway assigned to handle all communications between a virtual machine and a device outside of the virtual machine's COI. The virtual gateway may be a separate virtual machine for handling decrypting and encrypting messages for transmission between virtual machines and other devices. | 2014-01-16 |
20140019751 | METHOD AND APPARATUS HAVING NULL-ENCRYPTION FOR SIGNALING AND MEDIA PACKETS BETWEEN A MOBILE STATION AND A SECURE GATEWAY - Disclosed is a method for efficient transport of packets between a mobile station and a secure gateway over a wireless local area network for accessing home services. In the method, a first encryption security association is established for transporting first-type packets from the secure gateway to the mobile station, and a second encryption security association is established for transporting first-type packets from the mobile station to the secure gateway. Next, a first null-encryption security association is established for transporting second-type packets from the secure gateway to the mobile station, and a second null-encryption security association is established for transporting second-type packets from the mobile station to the secure gateway. Second-type packets are selected for transport using the second null-encryption security association based on a traffic selector. Also, second-type packets may be selected for transport using the first null-encryption security association based on a traffic selector. The traffic selector may be preconfigured. | 2014-01-16 |
20140019752 | ENCRYPTION-BASED SESSION ESTABLISHMENT - A first server is configured to receive a first token from a user device, determine whether the first token is valid, request the user device to provide a set of credentials to a second server, based on determining that the first token is invalid, and receive a first response from the user device. The first response may include information identifying whether the user device is authenticated to communicate with the first server. The first server is further configured to send the first response to a third server. The third server may generate a second response to indicate authentication of the user device to communicate with the first server. The first server is further configured to receive the second response from the third server, generate a second token, based on receiving the second response, and send the second token to the user device. | 2014-01-16 |
20140019753 | CLOUD KEY MANAGEMENT - A system for managing encryption keys within a domain includes: a client computer coupled to a cloud key management server over a network, the client computer being configured to supply a request for an encryption key, the request including an object identifier associated with the encryption key; and a cloud key management service comprising the cloud key management server, the cloud key management service being configured to: store a plurality of encryption keys in association with a plurality of object identifiers; receive the request from the client computer; identify an encryption key of the stored encryption keys associated with the object identifier of the request; and send the identified encryption key to the client computer in response to the request. | 2014-01-16 |
20140019754 | ANONYMOUS AND UNLINKABLE DISTRIBUTED COMMUNICATION AND DATA SHARING SYSTEM - A distributed communication and data sharing system that provides anonymity and unlinkability. A group comprising a number of structures, each having a public/private key pair, is stored on a plurality of nodes in a Distributed Hash Table. Advantageous features of the group management system are provided through the use of Cryptographically Generated Addresses (CGA) for the structures, a secure capture method that enables a user to capture an address and be the only one authorized to request certain operations for the address, and an anonymous get/set mechanism in which a user signs messages, encloses the public key in the message and encrypts the message and public key using the public key of the receiver. The distributed communication and data sharing system of the invention can advantageously be used for group management of social networks. | 2014-01-16 |
20140019755 | DATA STORAGE IN CLOUD COMPUTING - A redundant cloud storage solution may be created from individual cloud storage solutions. Files may be split into pieces and stored in separate cloud storage solutions and then retrieved from the cloud storage solutions to assemble the original file. When splitting the files, the data may be encrypted for additional security. Additionally, redundancy may be obtained by duplicating data across multiple cloud storage solutions, such as in a RAID level 5 configuration. A server may intervene between a client device and the cloud storage solutions to perform the file splitting, encrypting, and management functions. Thus, the client access to the redundant cloud solution may function as any other network drive. | 2014-01-16 |
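The RAID-level-5-style redundancy mentioned here amounts to striping a file across stores plus one XOR parity stripe, so any single lost store can be rebuilt from the rest. A minimal sketch — the stripe sizing and zero-padding are illustrative choices, not details from the abstract:

```python
from functools import reduce

def split_with_parity(data, n=3):
    """Split data into n-1 equal stripes plus one XOR parity stripe,
    each destined for a different cloud storage solution."""
    stripe = -(-len(data) // (n - 1))  # ceiling division
    parts = [data[i * stripe:(i + 1) * stripe].ljust(stripe, b"\0")
             for i in range(n - 1)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*parts))
    return parts, parity

def recover(parts, parity, lost):
    """Rebuild stripe `lost` by XOR-ing the parity with the survivors."""
    cols = [p for i, p in enumerate(parts) if i != lost] + [parity]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*cols))
```

Per-piece encryption before upload, as the abstract suggests, would slot in between splitting and storing.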
20140019756 | Obfuscating Trace Data - A tracer may obfuscate trace data such that the trace data may be used in an unsecure environment even though raw trace data may contain private, confidential, or other sensitive information. The tracer may obfuscate using irreversible or lossy hash functions, look up tables, or other mechanisms for certain raw trace data, rendering the obfuscated trace data acceptable for transmission, storage, and analysis. In the case of parameters passed to and from a function, trace data may be obfuscated as a group or as individual parameters. The obfuscated trace data may be transmitted to a remote server in some scenarios. | 2014-01-16 |
20140019757 | AUTHENTICATION METHOD AND SYSTEM - An authenticating method including establishing trust between an authentication provider and service provider; establishing trust between the authentication provider and authentication application installed in a terminal. The authentication provider, for each session, receives an access code request and connection information from the terminal; generates and stores the access code; sends the access code to the terminal; receives the access code from the authentication application; indicates verification of the access code to the authentication application and terminal; receives from the authentication application a request to grant access to the terminal; instructs the service provider to grant access; and sends a confirmation of the granted access to the terminal. An authenticated session between the terminal and the service provider is setup for providing services to the terminal. | 2014-01-16 |
20140019758 | SYSTEM, METHOD AND APPARATUS FOR SECURELY DISTRIBUTING CONTENT - System, method and apparatus for securely distributing content via an encrypted file wherein a Publisher Key (PK) associated with an authorized publisher enables presentation of the content by the authorized user via a Limited Capability Viewer (LCV), the LCV lacking the capability to forward, print, copy or otherwise disseminate the content to be presented. Various embodiments provide enhanced user authentication or authorization, VPN functions, collaboration techniques, automatic distribution of licenses, watermarking of documents, rules pertaining to content transfer between secure and insecure domains and combinations thereof. | 2014-01-16 |
20140019759 | Systems, Methods, and Computer Program Products for Secure Optimistic Mechanisms for Constrained Devices - Embodiments of the invention may provide for systems and methods for secure authentication. The systems and methods may include receiving, by a constrained device, a random string transmitted from a server; determining, by the constrained device, a responsive output by evaluating a first deterministic function based upon the received random string, a locally generated string and a first private key stored on the constrained device; and transmitting at least one portion of the responsive output and the locally generated string from the constrained device to a server. The systems and methods may also include determining, by the server, a validation output by evaluating a second deterministic function based upon the random string, the locally generated string, and a second private key of a plurality of private keys stored on the server; and authenticating the constrained device based upon the server matching the transmitted at least one portion of the responsive output to at least a portion of the validation output. | 2014-01-16 |
20140019760 | METHOD FOR PERSONALIZING A SECURE ELEMENT COMPRISED IN A TERMINAL - The invention proposes a method for personalizing a first secure element comprised in a first terminal, said method consisting in: | 2014-01-16 |
20140019761 | SELF-CONTAINED ELECTRONIC SIGNATURE - Techniques for providing a self-contained electronic signature are disclosed. In some embodiments, techniques for providing a self-contained electronic signature include recording an audit trail for a plurality of events associated with an electronic signature of an electronic document; embedding the audit trail in the electronic document; and digitally signing the electronic document, in which the electronic document including the embedded audit trail and the electronic signature are secured by the digital signature. In some embodiments, the audit trail is embedded in metadata of the electronic document, a body of the electronic document, or both the metadata and body of the electronic document. In some embodiments, digitally signing the electronic document includes a certifying signature provided by a service provider of an electronic signature service. | 2014-01-16 |
20140019762 | Method, Process and System for Digitally Signing an Object - The invention comprises a method of auditing an object signing by creating security events throughout the signature process, including a security event that captures the identity of the signer and any anomalies associated with the signing process. The signature process may include multi-factor authentication, a policy engine that establishes the signer's authority and rights, and compliance checks that ensure the object's readiness for signature. The digital certificate used to sign the object may be stored on the cloud, locally, remotely, or on a hardware token. | 2014-01-16 |
20140019763 | METHODS AND APPARATUS FOR AUTHENTICATION - Message authentication in an ad-hoc network. Upon creation of a message, a message authentication code is created using a key shared with members of a group comprising a subset of nodes of the ad-hoc network. The message authentication code may be created using a cryptographic process having the message and a message identifier as inputs. After or in parallel with broadcast of the message, a pointer to the message is broadcast. The message authentication code is publicly broadcast and those members of the group among which the key has been shared are able to authenticate the message as coming from a particular sender. | 2014-01-16 |
20140019764 | METHOD FOR SIGNING AND VERIFYING DATA USING MULTIPLE HASH ALGORITHMS AND DIGESTS IN PKCS - Methods, systems, and apparatuses are disclosed for signing and verifying data using multiple hash algorithms and digests in PKCS including, for example, retrieving, at the originating computing device, a message for signing at the originating computing device to yield a signature for the message; identifying multiple hashing algorithms to be supported by the signature; for each of the multiple hashing algorithms identified to be supported by the signature, hashing the message to yield multiple hashes of the message corresponding to the multiple hashing algorithms identified; constructing a single digest having therein each of the multiple hashes of the messages corresponding to the multiple hashing algorithms identified and further specifying the multiple hashing algorithms to be supported by the signature; applying a signing algorithm to the single digest using a private key of the originating computing device to yield the signature for the message; and distributing the message and the signature to receiving computing devices. Other related embodiments are disclosed. | 2014-01-16 |
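The single digest carrying several hashes and naming their algorithms can be approximated with a length-prefixed concatenation. The layout below is an assumption for illustration — real PKCS structures use DER-encoded ASN.1 rather than this ad-hoc framing — and the signing step over the digest is omitted.

```python
import hashlib

def multi_hash_digest(message, algorithms=("sha256", "sha512")):
    """Hash the message under each named algorithm and pack the results
    into one composite digest that also records the algorithm names."""
    out = bytearray()
    for name in algorithms:
        h = hashlib.new(name, message).digest()
        tag = name.encode()
        out += len(tag).to_bytes(1, "big") + tag   # algorithm identifier
        out += len(h).to_bytes(2, "big") + h       # its digest
    return bytes(out)
```

A verifier can then pick whichever listed algorithm it supports, which is the interoperability point of carrying multiple hashes in one signature.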
20140019765 | DEVICE AND METHOD FOR ONLINE STORAGE, TRANSMISSION DEVICE AND METHOD, AND RECEIVING DEVICE AND METHOD - The invention relates to a device and a method for online storage, a device and a method for searching for similar content, a device and a method of transmission, and a device and a method of reception. | 2014-01-16 |
20140019766 | Signature Generation and Verification System and Signature Verification Apparatus - A signature generation and verification system including a signature generation apparatus and a signature verification apparatus is provided. Based on signer certification information possessed by a signer, the signature generation apparatus generates a digital signature and verification data corresponding to a given electronic document and outputs the set of the digital signature and the verification data as signature data. Upon receipt of the electronic document and the signature data, the signature verification apparatus verifies the digital signature using the verification data to verify the integrity of the electronic document. As needed, the signature verification apparatus performs user identification ex-post facto by authenticating that the signer certification information from which the verification data was generated belongs to a legitimate user without knowledge of the signer certification information. | 2014-01-16 |
20140019767 | CONTENT SEGMENTATION OF WATERMARKING - The invention relates to a computer-implemented method for providing a data stream comprising a plurality of content elements. At least one of two or more copies of a first content element of the data stream has been watermarked with a different watermark. The method includes watermarking at least one of two or more copies of a second content element with a different watermark. In a rendering order of the data stream, the second content element is at an interval equal to or greater than a watermark interval from the first content element. The watermark interval is set to be sufficiently long so that the output quality of the rendered data stream can either completely recover or at least return to a predetermined acceptable level following the watermarking of the copies of the first content element before watermarking the copies of the next content element. | 2014-01-16 |
20140019768 | System and Method for Shunting Alarms Using Identifying Tokens - Alarms are shunted dependent on an authorized user's location being confirmed as in the vicinity of an unpowered token, such as an NFC chip, QR code or other 2D barcode. The tokens may be attached to the doors or elsewhere in the spaces to be alarmed. The tokens are detected with a user's personal mobile electronic device and a token identifier is sent with an identification of the user's device to a remote server, where a decision is made whether to override the alarm or not. | 2014-01-16 |
20140019769 | ENCRYPTION/DECRYPTION FOR DATA STORAGE SYSTEM WITH SNAPSHOT CAPABILITY - A method for managing access to encrypted data of a data storage system storing snapshot data, a snapshot providing a previous point-in-time copy of data in a volume of the data storage system, wherein the data storage system utilizes changing encryption keys for write data. For each snapshot, the method stores at least one decryption key identifier for each decryption key corresponding to an encryption key utilized to encrypt data written to a volume since a previous snapshot was committed to disk, and associates the at least one decryption key identifier with the snapshot. A key table associating decryption key identifiers with corresponding decryption keys is provided, and based on the key table and the at least one decryption key identifier associated with the snapshot, one or more decryption keys required for accessing encrypted data associated with the snapshot are determined. Decryption key identifiers may be stored in snapshot metadata. | 2014-01-16 |
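The bookkeeping this abstract describes — decryption key identifiers stored with each snapshot, resolved through a key table — reduces to two mappings. The class and method names below are invented for illustration.

```python
class SnapshotKeyStore:
    def __init__(self):
        self.key_table = {}   # key id -> decryption key
        self.snapshots = {}   # snapshot id -> key ids (snapshot metadata)

    def add_key(self, key_id, key):
        self.key_table[key_id] = key

    def commit_snapshot(self, snap_id, key_ids_used):
        """Record, with the snapshot, the identifiers of every key used
        to encrypt data written since the previous snapshot."""
        self.snapshots[snap_id] = set(key_ids_used)

    def keys_for_snapshot(self, snap_id):
        """Resolve the decryption keys needed to read a snapshot's data
        via the key table, as the abstract describes."""
        return {kid: self.key_table[kid] for kid in self.snapshots[snap_id]}
```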
20140019770 | PRE-EVENT REPOSITORY ASSOCIATED WITH INDIVIDUAL PRIVACY AND PUBLIC SAFETY PROTECTION VIA DOUBLE ENCRYPTED LOCK BOX - A method substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein. A device substantially as shown and described in the detailed description and/or drawings and/or elsewhere herein. | 2014-01-16 |
20140019771 | Method and System for Protecting Execution of Cryptographic Hash Functions - A method of protecting the execution of a cryptographic hash function, such as SHA-256, in a computing environment where inputs, outputs and intermediate values can be observed. The method consists of encoding input messages so that hash function inputs are placed in a transformed domain, and then applying a transformed cryptographic hash function to produce an encoded output digest; the transformed cryptographic hash function implements the cryptographic hash function in the transformed domain. | 2014-01-16 |
20140019772 | TECHNIQUES FOR SECURE DATA MANAGEMENT IN A DISTRIBUTED ENVIRONMENT - Techniques for secure data management in a distributed environment are provided. A secure server includes a modified operating system that allows only a kernel application to access a secure hard drive of the secure server. The hard drive comes prepackaged with a service public and private key pair for encryption and decryption services with other secure servers of a network. The hard drive also comes prepackaged with trust certificates to authenticate the other secure servers for secure socket layer (SSL) communications with one another, and with a data encryption key that is used to encrypt storage of the secure server. The kernel application is used during data restores, data backups, and/or data versioning operations to ensure secure data management for a distributed network of users. | 2014-01-16 |
20140019773 | METHOD AND SYSTEM FOR PROTECTING DATA - Methods and systems for protecting data may include controlling encryption and/or decryption and identifying a destination of corresponding encrypted and/or decrypted data, utilizing rules based on a source location of the data prior to the encryption or decryption and an algorithm that may have been previously utilized for encrypting and/or decrypting the data prior to the data being stored in the source location. The source location and/or destination of the data may comprise protected or unprotected memory. One or more of a plurality of algorithms may be utilized for the encryption and/or decryption. The rules may be stored in a key table, which may be stored on-chip, and may be reprogrammable. One or more keys for the encryption and/or decryption may be generated within the chip. | 2014-01-16 |
20140019774 | PROCESSING INFORMATION - A method and system for processing information. An apparatus divides target information into N pieces of divided data using a secret sharing scheme in which a predetermined number (K) of pieces of the N pieces of divided data is required to restore the target information, wherein N>K, and wherein the apparatus is an information processing device or an external storage device. The apparatus selects M pieces from the N pieces, wherein K≤M<N. | 2014-01-16 |
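The abstract above describes a (K, N) threshold scheme: any K of the N pieces restore the secret, fewer reveal nothing. Shamir secret sharing is the classic construction of such a scheme; the abstract does not say which scheme the patent uses, so the sketch below is illustrative only:

```python
# Minimal Shamir (K, N) secret sharing over a prime field: a random
# degree-(K-1) polynomial with the secret as its constant term is
# evaluated at N points; any K points reconstruct it by interpolation.
import random

P = 2**127 - 1  # a Mersenne prime large enough for small integer secrets

def split(secret, n, k):
    """Divide `secret` into n shares; any k of them restore it (n > k)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def restore(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse, since P is prime.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)   # N = 5, K = 3
print(restore(shares[:3]))            # any 3 shares suffice -> 123456789
```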
20140019775 | ANTI-WIKILEAKS USB/CD DEVICE - A method for encrypting and storing data on a removable medium includes: obtaining a medium key uniquely associated with the removable medium; encrypting the data using the medium key to generate encrypted data; and writing the encrypted data onto the removable medium. | 2014-01-16 |
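The medium-key approach above can be sketched as follows. The key derivation, keystream construction, and serial number are all illustrative assumptions, not taken from the patent; a real device would use an authenticated cipher such as AES-GCM rather than this hash-based toy:

```python
# Toy illustration of per-medium encryption: a key derived from a unique
# medium identifier encrypts data before it is written to the medium, so
# the data is useless if copied off that medium without the key.
import hashlib

def medium_key(medium_serial: bytes) -> bytes:
    """Derive a key uniquely associated with the removable medium."""
    return hashlib.sha256(b"medium-key:" + medium_serial).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt with a hash-based keystream (symmetric operation)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = medium_key(b"USB-SN-0042")                     # hypothetical serial
ciphertext = xor_stream(key, b"sensitive document")  # written to medium
# Only a host that re-derives the same medium key can decrypt:
print(xor_stream(key, ciphertext))
```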
20140019776 | METHODS OF PROVIDING FAST SEARCH, ANALYSIS, AND DATA RETRIEVAL OF ENCRYPTED DATA WITHOUT DECRYPTION - Methods and systems of providing remote coded data storage, data analysis, and search and retrieval, with assurance of data security, are described. Data security is such that it protects the data from any provider, administrator of remote services, or anyone breaking into the servers housing the data at the remote site. The methods include a coding schema such that both the storage and the associated services, such as data analysis, search, and retrieval, can be provided even more efficiently and more responsively than without the coding. Possible applications of the methods include data storage and powerful data search and analysis services, which can all be provided "in the Cloud" over the Internet, completely securely, even when a customer's private data set needs to be uploaded to the remote site. The efficiency of analysis and search means that the methods may be useful even when security of data is not an issue. | 2014-01-16 |
20140019777 | POWER DATA COMMUNICATION ARCHITECTURE - A power data communication architecture located in an electronic apparatus includes at least a power supply unit, a data communication control unit and a motherboard. The power supply unit includes a power source management unit to generate at least one corresponding working parameter based on operating states of the power supply unit. The data communication control unit includes at least one power source management connection port to get the working parameter of the power supply unit and a buffer memory unit to store the working parameter. The motherboard is electrically connected to the buffer memory unit to read the working parameter saved therein. | 2014-01-16 |
20140019778 | HUB DEVICE - A hub device for overcoming limits to charging and data transmission functions thereof includes a switching unit having a switch and a switching circuit; a first electrical connection port electrically connected to the switching circuit and for an electronic computing device to electrically connect thereto; a plurality of second electrical connection ports electrically connected to the switching circuit and for portable electronic devices to electrically connect thereto to get power supply therefrom; and a power supply connection unit electrically connected to the switching circuit. Through operation of the switch, the switching circuit can switch at least one of the second electrical connection ports from a charging downstream port to a dedicated charging port while all other second electrical connection ports are maintained as charging downstream ports for data transmission or charging function. Thus, the hub device is more convenient and practical for use. | 2014-01-16 |
20140019779 | SYSTEM AND METHOD FOR AUTOMATICALLY DETERMINING AN OPTIMAL POWER CAP VALUE - Generating an optimal power cap value includes steps of: analyzing power usage of a system for a specified period of time; computing a power consumption value for the system for the specified period of time; and generating the optimal power cap value for the system, using the computed power consumption value. The system should be coupled with a power meter and should support power regulation technology. | 2014-01-16 |
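One possible reading of the cap-generation steps above, as a sketch: sample power draw from the meter over the specified period, compute a consumption statistic, and derive the cap from it. The peak statistic and the 10% headroom margin are assumptions for illustration, not taken from the abstract:

```python
# Hypothetical sketch: analyze power usage over a window, compute a
# consumption value, and generate a cap value with headroom for spikes.

def optimal_power_cap(samples_watts, margin=0.10):
    """Analyze a period of power-meter samples and generate a cap value."""
    if not samples_watts:
        raise ValueError("no power samples for the specified period")
    peak = max(samples_watts)               # computed consumption value
    return round(peak * (1.0 + margin), 2)  # cap with 10% headroom

readings = [180.0, 210.5, 195.2, 205.0]  # watts, hypothetical meter data
print(optimal_power_cap(readings))       # 231.55
```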
20140019780 | ACTIVE POWER DISSIPATION DETECTION BASED ON ERRONEOUS CLOCK GATING EQUATIONS - A method detects active power dissipation in an integrated circuit. The method includes receiving a hardware design for the integrated circuit having one or more clock domains, wherein the hardware design comprises a local clock buffer for a clock domain, and the local clock buffer is configured to receive a clock signal and an actuation signal. The method includes adding instrumentation logic to the design for the clock domain, wherein the instrumentation logic is configured to compare a first value of the actuation signal, determined at a beginning point of a test period, to a second value of the actuation signal, determined at a time when the clock domain is in an idle condition. The method includes detecting that the clock domain includes unintended active power dissipation in response to the first value of the actuation signal not being equal to the second value. | 2014-01-16 |
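The instrumentation comparison above reduces to a simple check, modeled here in software. The function and signal names are hypothetical; in the patent this logic is hardware added to the design, not host code:

```python
# Simplified software model of the instrumentation check: the local clock
# buffer's actuation signal is sampled at the start of the test period and
# again when the clock domain should be idle; a mismatch indicates the
# clock-gating equation fails to hold the gate stable, i.e. unintended
# active power dissipation.

def has_unintended_dissipation(actuation_at_start, actuation_when_idle):
    """True when the sampled actuation values differ across the test period."""
    return actuation_at_start != actuation_when_idle

# During idle the gate should stay in its start-of-test state; here it toggled:
print(has_unintended_dissipation(0, 1))  # True
```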
20140019781 | SYSTEM FOR PROTECTING POWER SUPPLY CIRCUIT OF CENTRAL PROCESSING UNIT - A system includes a power supply unit, a central processing unit (CPU) power controller, a detecting circuit, an inductor, a thermal resistor, first and second field effect transistors (FETs), and first to third capacitors. The CPU power controller detects a voltage of the thermal resistor and compares the detected voltage with first and second preset values. If the detected voltage is greater than the first preset value and less than the second preset value, the CPU power controller outputs a first control signal to a baseboard management controller (BMC) chip, signaling the BMC chip to control a fan to increase its speed. If the detected voltage is greater than the second preset value, the CPU power controller outputs a second control signal to the CPU, signaling the CPU to regulate its working frequency or reduce a number of loads. | 2014-01-16 |
20140019782 | APPARATUS AND METHOD FOR MANAGING POWER BASED ON DATA - Provided is an apparatus and method for managing power based on data. The apparatus may include a code segment searching unit configured to search for at least one code segment in which a power type is inserted, a block determining unit configured to determine at least one block based on the at least one found code segment, and a power mode control unit configured to control the at least one determined block to operate in a power mode corresponding to the power type. | 2014-01-16 |
20140019783 | IMAGE PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - To allow an information processing apparatus to more accurately calculate the power consumption of an image processing apparatus, the image processing apparatus includes a communication unit configured to communicate with a control apparatus that transfers power supply state information, indicating a power supply state, to the information processing apparatus, which calculates a power consumption amount based on that information. The image processing apparatus also includes a control unit configured to control the communication unit so as to transmit, to the control apparatus together with the power supply state information, information for calculating the power consumption amount of the image processing apparatus during the period from detection of a shift to a state prohibiting transfers to the information processing apparatus until detection of a shift to a state allowing such transfers. | 2014-01-16 |
20140019784 | COOLING APPLIANCE RATING AWARE DATA PLACEMENT - A dataset is identified as a heat-intensive dataset based, at least in part, on the dataset being related to heat generation at a source storage device exceeding a heat rise limit. The source storage device hosts the heat-intensive dataset and the heat-intensive dataset comprises non-executable data. A first cooling area of a plurality of cooling areas is selected to accommodate the heat generation based, at least in part, on cooling characteristics of a plurality of cooling appliances of the plurality of cooling areas. The source storage device is associated with a second cooling area. A target storage device associated with the first cooling area is determined. The heat-intensive dataset is moved from the source storage device to the target storage device. | 2014-01-16 |
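The placement decision described above might look like the following sketch. The field names, the spare-capacity heuristic, and the watt figures are assumptions for illustration, not details from the abstract:

```python
# Hypothetical cooling-aware placement: pick the cooling area whose
# appliances can absorb a heat-intensive dataset's expected load, so the
# dataset can be moved to a target storage device in that area.

def select_cooling_area(areas, required_cooling_watts):
    """Choose the cooling area with enough spare capacity for the load."""
    candidates = [a for a in areas
                  if a["capacity_watts"] - a["load_watts"] >= required_cooling_watts]
    if not candidates:
        return None
    # Prefer the area with the most spare cooling capacity.
    return max(candidates, key=lambda a: a["capacity_watts"] - a["load_watts"])

areas = [
    {"name": "area-1", "capacity_watts": 5000, "load_watts": 4800},  # nearly full
    {"name": "area-2", "capacity_watts": 6000, "load_watts": 3000},
]
print(select_cooling_area(areas, 500)["name"])  # area-2
```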
20140019785 | ELECTRIC DEVICE, AND METHOD AND COMPUTER PROGRAM PRODUCT FOR CONTROLLING POWER SUPPLY IN ELECTRIC DEVICE - An electric device includes at least one or more processing units that perform a predetermined process; a power-supply control unit that controls supply of electric power from a power source to the processing units and shutoff of the supply; a main control unit that performs a start-up process if the main control unit is supplied with power from the power source; and a sub control unit that controls the power-supply control unit so as not to supply the electric power to all or some of the processing units after the start-up process. | 2014-01-16 |