19th week of 2014 patent application highlights part 65 |
Patent application number | Title | Published |
20140129769 | SYSTEM FOR DATA MIGRATION FROM A STORAGE TIER ALLOCATED TO A VIRTUAL LOGICAL VOLUME - In recent years, data life cycle management, in which data is relocated from, for example, a new storage sub-system to an older storage sub-system in accordance with how new the data is or the frequency of use of the data, has become important. One technology for achieving data life cycle management is technology for migrating the contents of a storage area (“volume”) of a storage sub-system to another volume without affecting the host computer that uses the volume. In the present invention, when an associated source volume (for example, the source volume in a copy pair association) of a pair of associated volumes is migrated, migration of an associated destination volume (for example, the target volume in the copy pair association) is also controlled. In this way, it is possible to control the migration of a pair (or a group) of associated volumes in accordance with the user's requirements. | 2014-05-08 |
20140129770 | STORAGE SYSTEM USING REAL DATA STORAGE AREA DYNAMIC ALLOCATION METHOD - The present invention aims at preventing degradation of the access performance of a distributed memory system caused by accessing, via cross-over ownership, track mapping information formed as a hierarchical memory. In the process of assigning a real data storage area to a virtual volume, a page is first assigned from a pool, and a track is then assigned from that page. The page is composed of multiple tracks into which track data assigned at host write operation timings are stored sequentially from the top. Mapping information of the virtual volume and the page is stored in a control information page that differs from the track data, and that control information page can only be accessed by a microprocessor having ownership of the virtual volume. | 2014-05-08 |
20140129771 | Multiple Instances of Mapping Configurations in a Storage System or Storage Appliance - The present invention is directed to a method and software for managing the host-to-volume mappings of a SAN storage system. The host-to-volume mappings of the SAN storage system are represented in mapping configuration components. The active mapping configuration component represents the current host-to-volume mapping for the SAN storage system. Only one mapping configuration component is active at a time. The host-to-volume mappings of the SAN storage system are changed by deactivating the active mapping configuration component and activating an inactive mapping configuration component that represents a different mapping configuration, effecting a repartition, repurpose, disaster recovery, or other business activity. This can be a scheduled task or performed in an on-demand manner. The mapping configuration components are managed and controlled through the management component of the SAN storage system. | 2014-05-08 |
20140129772 | PREFETCHING TO A CACHE BASED ON BUFFER FULLNESS - A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the access request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request. | 2014-05-08 |
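The routing rule this abstract describes — demote a prefetch to the next lower cache level when the target cache's share of miss-address-buffer (MAB) slots is too full — can be sketched as below. This is a hypothetical simplification, not the patented design: the slot counts, the threshold, and the `route_prefetch` helper are all illustrative.

```python
# Hypothetical sketch of prefetch demotion based on miss-address-buffer (MAB)
# fullness. Each cache level owns a quota of MAB slots; a prefetch aimed at a
# level whose slots are too full is pushed down to the next lower cache level.

MAB_SLOTS = {"L1": 8, "L2": 16, "L3": 32}   # illustrative slots per cache level
FULLNESS_THRESHOLD = 0.75                    # demote when occupancy exceeds this

def route_prefetch(level, occupancy, hierarchy=("L1", "L2", "L3")):
    """Return the cache level that should service the prefetch request."""
    idx = hierarchy.index(level)
    while idx < len(hierarchy) - 1:
        lvl = hierarchy[idx]
        if occupancy[lvl] / MAB_SLOTS[lvl] <= FULLNESS_THRESHOLD:
            return lvl                       # room at this level: prefetch here
        idx += 1                             # too full: demote one level down
    return hierarchy[-1]                     # lowest level takes it regardless
```

For example, with `occupancy = {"L1": 7, "L2": 4, "L3": 10}`, a prefetch targeted at L1 (7/8 ≈ 0.88, over the threshold) is demoted and serviced by L2 instead.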
20140129773 | HIERARCHICAL CACHE STRUCTURE AND HANDLING THEREOF - A hierarchical cache structure comprises at least one higher level cache comprising a unified cache array for data and instructions and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache and a data cache of a split second level cache are connected to a third level cache; and an instruction cache of a split first level cache is connected to the instruction cache of the split second level cache, and a data cache of the split first level cache is connected to the instruction cache and the data cache of the split second level cache. | 2014-05-08 |
20140129774 | HIERARCHICAL CACHE STRUCTURE AND HANDLING THEREOF - A hierarchical cache structure includes at least one real indexed higher level cache with a directory and a unified cache array for data and instructions, and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache of a split real indexed second level cache includes a directory and a corresponding cache array connected to the real indexed third level cache. A data cache of the split second level cache includes a directory connected to the third level cache. An instruction cache of a split virtually indexed first level cache is connected to the second level instruction cache. A cache array of a data cache of the first level cache is connected to the cache array of the second level instruction cache and to the cache array of the third level cache. A directory of the first level data cache is connected to the second level instruction cache directory and to the third level cache directory. | 2014-05-08 |
20140129775 | CACHE PREFETCH FOR NFA INSTRUCTIONS - Disclosed is a method of pre-fetching NFA instructions to an NFA cell array. The method and system fetch instructions for use in an L1 cache during NFA instruction execution. Successive instructions from a current active state are fetched and loaded in the L1 cache. Disclosed is a system comprising an external memory, a cache line fetcher, and an L1 cache where the L1 cache is accessible and searchable by an NFA cell array and where successive instructions from a current active state in the NFA are fetched from external memory in an atomic cache line manner into a plurality of banks in the L1 cache. | 2014-05-08 |
20140129776 | STORE REPLAY POLICY - A method is provided for executing a cacheable store. The method includes determining whether to replay a store instruction to re-acquire one or more cache lines based upon a state of the cache line(s) and an execution phase of the store instruction. The store instruction is replayed in response to determining to replay the store instruction. An apparatus is provided that includes a store queue (SQ) configurable to determine whether to replay a store instruction to re-acquire one or more cache lines based upon a state of the cache line(s) and an execution phase of the store instruction. Computer readable storage devices for adapting a fabrication facility to manufacture the apparatus are provided. | 2014-05-08 |
20140129777 | SYSTEMS AND METHODS FOR DYNAMIC DATA STORAGE - A data caching method is performed to receive an instruction to operate based on a specific data set; determine whether the specific data set is cached in its memory; when the specific data set is not cached in the memory, determine a plurality of attributes for a plurality of data sets currently stored in the memory, determine whether these attributes satisfy data caching criteria for storing the specific data set, and furthermore, when the data caching criteria are not satisfied, select at least one of the plurality of data sets according to a data replacement rule, delete at least a portion of the selected data set from the memory, and download the specific data set from a remote source; operate the specific data set according to the user instruction; and store at least a portion of the specific data set in the memory. | 2014-05-08 |
20140129778 | Multi-Port Shared Cache Apparatus - An apparatus for use in telecommunications system comprises a cache memory shared by multiple clients and a controller for controlling the shared cache memory. A method of controlling the cache operation in a shared cache memory apparatus is also disclosed. The apparatus comprises a cache memory accessible by a plurality of clients and a controller configured to allocate cache lines of the cache memory to each client according to a line configuration. The line configuration comprises, for each client, a maximum allocation of cache lines that each client is permitted to access. The controller is configured to, in response to a memory request from one of the plurality of clients that has reached its maximum allocation of cache lines, allocate a replacement cache line to the client from cache lines already allocated to the client when no free cache lines in the cache are available. | 2014-05-08 |
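The allocation policy in this abstract — each client has a maximum quota of cache lines, and a client at its quota that misses must recycle one of its own lines rather than take a free one — can be sketched as follows. This is a minimal illustrative model (class name, LRU recycling choice, and return values are assumptions, not the patented controller):

```python
# Hypothetical sketch of per-client cache-line quotas in a shared multi-client
# cache. A client that has reached its maximum allocation and misses recycles
# one of its own lines (here, its least recently used one).

from collections import OrderedDict

class SharedCache:
    def __init__(self, total_lines, max_per_client):
        self.free = total_lines
        self.max_per_client = max_per_client
        self.lines = {}   # client -> OrderedDict of owned line addrs (LRU first)

    def request(self, client, addr):
        owned = self.lines.setdefault(client, OrderedDict())
        if addr in owned:
            owned.move_to_end(addr)        # hit: refresh recency
            return "hit"
        if len(owned) < self.max_per_client and self.free > 0:
            self.free -= 1                 # under quota and lines free: allocate
            owned[addr] = True
            return "allocated"
        owned.popitem(last=False)          # at quota: recycle own LRU line
        owned[addr] = True
        return "replaced"
```

A client at its quota therefore never steals lines from other clients: its fourth distinct request evicts its own least recently used line.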
20140129779 | CACHE REPLACEMENT POLICY FOR DATA WITH STRONG TEMPORAL LOCALITY - Various cache replacement policies are described whose goals are to identify items for eviction from the cache that are not accessed often and to identify items stored in the cache that are regularly accessed that should be maintained longer in the cache. In particular, the cache replacement policies are useful for workloads that have a strong temporal locality, that is, items that are accessed very frequently for a period of time and then quickly decay in terms of further accesses. In one embodiment, a variation on the traditional least recently used caching algorithm uses a reuse period or reuse distance for an accessed item to determine whether the item should be promoted in the cache queue. In one embodiment, a variation on the traditional two queue caching algorithm evicts items from the cache from both an active queue and an inactive queue. | 2014-05-08 |
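The first variation described here — promote an item in the LRU queue only when its reuse period is short — can be sketched as below. This is an assumed simplification for illustration; the class name, threshold semantics, and timestamp scheme are not taken from the patent.

```python
# Hypothetical sketch of an LRU variant using reuse period: a re-accessed item
# is promoted toward the MRU end only if its reuse period (time since its last
# access) is short; otherwise it keeps its position and ages out normally.

from collections import OrderedDict

class ReuseAwareLRU:
    def __init__(self, capacity, reuse_threshold):
        self.capacity = capacity
        self.reuse_threshold = reuse_threshold
        self.clock = 0
        self.cache = OrderedDict()   # key -> last access time (front = oldest)

    def access(self, key):
        self.clock += 1
        if key in self.cache:
            if self.clock - self.cache[key] <= self.reuse_threshold:
                self.cache.move_to_end(key)   # short reuse period: promote
            self.cache[key] = self.clock
            return True                        # hit
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)     # evict least recently promoted
        self.cache[key] = self.clock
        return False                           # miss
```

Items with strong temporal locality keep earning promotion while hot, then, once their accesses decay past the threshold, stop being promoted and drift toward eviction.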
20140129780 | DYNAMIC EVALUATION AND RECONFIGURATION OF A DATA PREFETCHER - Methods and systems for prefetching data for a processor are provided. A system is configured for and a method includes selecting one of a first prefetching control logic and a second prefetching control logic of the processor as a candidate feature, capturing the performance metric of the processor over an inactive sample period when the candidate feature is inactive, capturing a performance metric of the processor over an active sample period when the candidate feature is active, comparing the performance metric of the processor for the active and inactive sample periods, and setting a status of the candidate feature as enabled when the performance metric in the active period indicates improvement over the performance metric in the inactive period, and as disabled when the performance metric in the inactive period indicates improvement over the performance metric in the active period. | 2014-05-08 |
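The enable/disable decision in this abstract amounts to an A/B comparison of a performance metric over an inactive and an active sample period. A minimal sketch, with assumed callback names (`measure_metric`, `set_feature`) standing in for the hardware sampling and control logic:

```python
# Hypothetical sketch of dynamic prefetcher evaluation: sample a performance
# metric with the candidate feature inactive, then active, and leave the
# feature in whichever state scored better.

def evaluate_feature(measure_metric, set_feature):
    """measure_metric() returns a higher-is-better score for one sample period;
    set_feature(bool) activates or deactivates the candidate prefetching logic."""
    set_feature(False)
    inactive_score = measure_metric()   # capture metric over inactive period
    set_feature(True)
    active_score = measure_metric()     # capture metric over active period
    enabled = active_score > inactive_score
    set_feature(enabled)                # keep the winning configuration
    return enabled
```

In hardware the metric would be something like instructions retired per cycle over a fixed sampling window; the sketch only captures the compare-and-set control flow.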
20140129781 | METHODS AND SYSTEMS FOR CACHING DATA USING BEHAVIORAL EVENT CORRELATIONS - A method is disclosed including a client accessing a cache for a value of an object based on an object identification (ID), initiating a request to a cache loader if the cache does not include a value for the object, the cache loader performing a lookup in an object table for the object ID corresponding to the object, the cache loader retrieving a vector of execution context IDs, from an execution context table that correspond to the object IDs looked up in the object table and the cache loader performing an execution context lookup in an execution context table for every retrieved execution context ID in the vector to retrieve object IDs from an object vector. | 2014-05-08 |
20140129782 | Server Side Distributed Storage Caching - The invention provides a system with a storage cache offering high bandwidth and low latency to the server, and coherence for the contents of multiple memory caches, wherein local management of a storage cache situated on a server is combined with a means for globally managing the coherency of the storage caches of a number of servers. The local cache manager delivers very high performance and low latency for write transactions that hit the local cache in the Modified or Exclusive state and for read transactions that hit the local cache in the Modified, Exclusive or Shared states. The global coherency manager enables many servers connected via a network to share the contents of their local caches, providing application transparency by maintaining a directory with an entry for each storage block that indicates which servers have that block in the shared state or which server has that block in the modified state. | 2014-05-08 |
20140129783 | SYSTEM AND METHOD FOR ALLOCATING MEMORY OF DIFFERING PROPERTIES TO SHARED DATA OBJECTS - A system and method for allocating shared memory of differing properties to shared data objects and a hybrid stack data structure. In one embodiment, the system includes: (1) a hybrid stack creator configured to create, in the shared memory, a hybrid stack data structure having a lower portion having a more favorable property and a higher portion having a less favorable property and (2) a data object allocator associated with the hybrid stack creator and configured to allocate storage for shared data object in the lower portion if the lower portion has a sufficient remaining capacity to contain the shared data object and alternatively allocate storage for the shared data object in the higher portion if the lower portion has an insufficient remaining capacity to contain the shared data object. | 2014-05-08 |
20140129784 | METHODS AND SYSTEMS FOR POLLING MEMORY OUTSIDE A PROCESSOR THREAD - A system and method of monitoring a memory address is disclosed, which may replace a polling operation on a memory by determining a memory address to monitor, notifying a cache controller of the memory address, and causing execution on a polling thread to wait. The cache controller may then monitor the memory address and notify the processor to resume execution of the thread. While the processor is waiting to be notified, it may enter a power save state or allow more time to be allocated to other threads being executed. | 2014-05-08 |
20140129785 | Security Erase of a Delete File and of Sectors Not Currently Assigned to a File - Secure erase of files and unallocated sectors on storage media, such that any previous data is non-recoverable. The database contains sets of data patterns used to overwrite the data on different physical media. The software programs manage the overwriting process automatically when a file has been deleted. The process also finds de-allocated sectors that were pruned from a file or that escaped the file deletion process, so that data on deleted or pruned sectors is overwritten and can never be recovered. | 2014-05-08 |
20140129786 | REDUCING MICROPROCESSOR PERFORMANCE LOSS DUE TO TRANSLATION TABLE COHERENCY IN A MULTI-PROCESSOR SYSTEM - A translation lookaside buffer coherency unit with Emulated Purge (TCUEP) fetches first instructions for execution in a multi-processor system. The TCUEP associates a first instruction timestamp with each of the first instructions. The TCUEP receives a multi-processor coherency operation and increments the first timestamp value in a master-tag register to form a second timestamp value after receiving the multi-processor coherency operation. The TCUEP fetches, by an instruction fetch unit in the first microprocessor, second instructions for execution in the multiprocessor system. The TCUEP associates a second instruction timestamp with each of the second instructions. The TCUEP enables an emulated purge mechanism to suppress hits in the translation lookaside buffers for the second instructions. The TCUEP after determining the first instructions are complete, purges entries in the translation lookaside buffers and disables the emulated purge mechanism. | 2014-05-08 |
20140129787 | DATA PLACEMENT FOR EXECUTION OF AN EXECUTABLE - According to one embodiment, a method for a compiler to produce an executable module to be executed by a computer system including a main processor and active memory devices includes dividing source code into code sections, identifying a first code section to be executed by the active memory devices, wherein the first code section is one of the code sections and identifying data structures that are used by the first code section. The method also includes classifying the data structures based on pre-defined attributes, formulating, by the compiler, a storage mapping plan for the data structures based on the classifying and generating, by the compiler, mapping code that implements the storage mapping plan, wherein the mapping code is part of the executable module and wherein the mapping code maps storing of the data structures to storage locations in the active memory devices. | 2014-05-08 |
20140129788 | HIGH-PERFORMANCE LARGE SCALE SEMICONDUCTOR STORAGE MODULE WITH HYBRID TECHNOLOGY - An open architecture is provided for enabling at least two memory types for a single memory disk unit. The memory disk unit includes an interface and a DMA controller. The DMA controller controls the transfer of data to/from memory of at least two memory types of the memory disk unit through a hybrid memory control module. A corresponding memory controller (and in some cases, an ECC controller) is provided for each memory of the at least two memory types. The hybrid memory control architecture can control existing memory controllers by matching protocols for the particular memory controller. Address/memory commands and signal timing can be matched up to the appropriate controller by the hybrid memory control architecture. | 2014-05-08 |
20140129789 | REDUCING MICROPROCESSOR PERFORMANCE LOSS DUE TO TRANSLATION TABLE COHERENCY IN A MULTI-PROCESSOR SYSTEM - A translation lookaside buffer coherency unit with Emulated Purge (TCUEP) fetches first instructions for execution in a multi-processor system. The TCUEP associates a first instruction timestamp with each of the first instructions. The TCUEP receives a multi-processor coherency operation and increments the first timestamp value in a master-tag register to form a second timestamp value after receiving the multi-processor coherency operation. The TCUEP fetches, by an instruction fetch unit in the first microprocessor, second instructions for execution in the multiprocessor system. The TCUEP associates a second instruction timestamp with each of the second instructions. The TCUEP enables an emulated purge mechanism to suppress hits in the translation lookaside buffers for the second instructions. The TCUEP after determining the first instructions are complete, purges entries in the translation lookaside buffers and disables the emulated purge mechanism. | 2014-05-08 |
20140129790 | EFFICIENT DATA STORAGE SYSTEM - A system and method are disclosed for providing efficient data storage. A plurality of data segments is received in a data stream. The system preliminarily checks in a memory having a relatively low latency whether one of the plurality of data segments may have been stored previously in a data segment repository. The memory having the relatively low latency stores data segment information. In the event that the preliminary check determines that one of the plurality of data segments may have been stored in the data segment repository, a memory having a relatively higher latency is checked to determine whether the data segment has been stored previously in the data segment repository. | 2014-05-08 |
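The two-tier duplicate check this abstract describes — a preliminary check in low-latency memory, and only on a possible match a lookup in the higher-latency repository index — can be sketched as follows. Assumptions are labeled in the comments: a plain Python set stands in for the low-latency summary (a real system might use a Bloom-filter-like structure), and the class and counter names are illustrative.

```python
# Hypothetical sketch of the two-tier duplicate check: a fast in-memory summary
# of segment fingerprints (a plain set here, standing in for a compact
# low-latency structure) is consulted first; only when it reports a possible
# match is the slower, authoritative repository index consulted.

import hashlib

class SegmentStore:
    def __init__(self):
        self.summary = set()       # low-latency: "possibly stored" check
        self.repository = {}       # high-latency: authoritative segment index
        self.slow_lookups = 0      # count of high-latency accesses

    def store(self, segment: bytes) -> bool:
        """Store a segment; return True if it was a confirmed duplicate."""
        digest = hashlib.sha256(segment).hexdigest()
        if digest in self.summary:          # preliminary low-latency check
            self.slow_lookups += 1          # only now pay the high-latency cost
            if digest in self.repository:
                return True                 # confirmed duplicate: store nothing
        self.summary.add(digest)
        self.repository[digest] = segment
        return False
```

The point of the split is visible in the counter: segments that are definitely new never touch the slow index at all.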
20140129791 | Method of Application Memory Preservation for Dynamic Calibration of Memory Interfaces - A method for calibrating a memory interface circuit is described wherein prior to a calibration operation at least a portion of application information contained in a memory circuit is moved or copied to an alternate location to preserve that information. At the completion of the calibration operation, the information is restored to the same location of the memory circuit. Thus, the calibration operation can be performed from time to time during normal operation of a system containing the memory circuit. Non-limiting examples of calibration operations are described including operations where a capture clock for a memory read circuit is calibrated, and operations where CAS latency compensation is calibrated for a DDR memory interface. | 2014-05-08 |
20140129792 | PERMISSIONS OF OBJECTS IN HOSTED STORAGE - A data object is stored in a hosted storage system and includes an access control list specifying access permissions for data object stored in the hosted storage system. The hosted storage system provides hosted storage to a plurality of clients that are coupled to the hosted storage system. A request to store a second data object is received. The request includes an indicator that the first data object stored in the hosted storage system should be used as an access control list for the second data object. The second data object is stored in the hosted storage system. The first data object is assigned as an access control list for the second data object stored in the hosted storage system. | 2014-05-08 |
20140129793 | MEMORY UTILIZATION ANALYSIS - The performance of a monitored system is profiled based on sampling a portion of its operations. In one embodiment, the monitored system allocates memory for objects created as instances of classes and automatically performs regular garbage collection to reclaim memory. A variety of sampling techniques are used to minimize the impact on the performance of the monitored system. Characteristic memory utilization patterns can then be estimated for classes based on the samples. The patterns may be presented to a user for review and analysis. Characteristics of the monitored system's performance may be presented in an interactive interface that allows the user to trace the cause of the presented memory utilization patterns, and provides statistics regarding memory allocation and release to guide the user in this analysis. | 2014-05-08 |
20140129794 | SPECULATIVE TABLEWALK PROMOTION - A method includes performing a speculative tablewalk. The method includes performing a tablewalk to determine an address translation for a speculative operation and determining whether the speculative operation has been upgraded to a non-speculative operation concurrently with performing the tablewalk. An apparatus is provided that includes a load-store unit to maintain execution operations. The load-store unit includes a tablewalker to perform a tablewalk and includes an input indicative of the operation being speculative or non-speculative as well as a state machine to determine actions performed during the tablewalk based on the input. The apparatus also includes a translation look-aside buffer. Computer readable storage devices for performing the methods and adapting a fabrication facility to manufacture the apparatus are provided. | 2014-05-08 |
20140129795 | CONFIGURABLE I/O ADDRESS TRANSLATION DATA STRUCTURE - In response to a determination to allocate additional storage, within a real address space employed by a system memory of a data processing system, for translation control entries (TCEs) that translate addresses from an input/output (I/O) address space to the real address space, a determination is made whether or not a first real address range contiguous with an existing TCE data structure is available for allocation. In response to determining that the first real address range is available for allocation, the first real address range is allocated for storage of TCEs, and a number of levels in the TCE data structure is retained. In response to determining that the first real address range is not available for allocation, a second real address range discontiguous with the existing TCE data structure is allocated for storage of the TCEs, and a number of levels in the TCE data structure is increased. | 2014-05-08 |
20140129796 | TRANSLATION OF INPUT/OUTPUT ADDRESSES TO MEMORY ADDRESSES - An address provided in a request issued by an adapter is converted to an address directly usable in accessing system memory. The address includes a plurality of bits, in which the plurality of bits includes a first portion of bits and a second portion of bits. The second portion of bits is used to index into one or more levels of address translation tables to perform the conversion, while the first portion of bits are ignored for the conversion. The first portion of bits are used to validate the address. | 2014-05-08 |
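The bit split in this abstract — one portion of the I/O address indexes the translation tables while the other is ignored for the conversion and used only to validate the address — can be sketched as below. The field widths, the expected tag value, and a single-level lookup table are all assumptions made for illustration; the patent describes one or more levels of translation tables.

```python
# Hypothetical sketch of the address split: the low bits of an I/O address
# index a translation table, while the high bits are ignored for the
# conversion itself and instead checked against an expected tag.

INDEX_BITS = 20                      # illustrative width, not from the patent
EXPECTED_TAG = 0x5A                  # illustrative value the tag portion must carry

def translate(io_addr, table):
    tag = io_addr >> INDEX_BITS                  # portion used only to validate
    index = io_addr & ((1 << INDEX_BITS) - 1)    # portion used to index the table
    if tag != EXPECTED_TAG:
        raise ValueError("invalid I/O address tag")
    return table[index]              # address directly usable in system memory
```

A multi-level version would slice `index` further into per-level fields, walking one table per field, but the validate-versus-index division of the address stays the same.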
20140129797 | CONFIGURABLE I/O ADDRESS TRANSLATION DATA STRUCTURE - In response to a determination to allocate additional storage, within a real address space employed by a system memory of a data processing system, for translation control entries (TCEs) that translate addresses from an input/output (I/O) address space to the real address space, a determination is made whether or not a first real address range contiguous with an existing TCE data structure is available for allocation. In response to determining that the first real address range is available for allocation, the first real address range is allocated for storage of TCEs, and a number of levels in the TCE data structure is retained. In response to determining that the first real address range is not available for allocation, a second real address range discontiguous with the existing TCE data structure is allocated for storage of the TCEs, and a number of levels in the TCE data structure is increased. | 2014-05-08 |
20140129798 | REDUCING MICROPROCESSOR PERFORMANCE LOSS DUE TO TRANSLATION TABLE COHERENCY IN A MULTI-PROCESSOR SYSTEM - A translation lookaside buffer coherency unit with Emulated Purge (TCUEP) translates a first virtual address for a first instruction into a first physical address. The TCUEP detects a multi-processor coherency operation that will cause hit suppression for certain entries in a TLB and purging of certain entries in the TLB. The TCUEP translates a second virtual address for a second instruction into a second physical address and stores the second physical address in a second entry in the TLB. The TCUEP configures a second marker in the second entry to indicate that the hit suppression is not allowed for the second entry, and that the purging is not allowed for the second entry. The TCUEP receives a first address translation request that indicates a hit in the second entry. The TCUEP resolves the first address translation request by returning the second physical address. | 2014-05-08 |
20140129799 | ADDRESS GENERATION IN AN ACTIVE MEMORY DEVICE - Embodiments relate to address generation in an active memory device that includes memory and a processing element. An aspect includes a method for address generation in the active memory device. The method includes reading a base address value and an offset address value from a register file group of the processing element. The processing element determines a virtual address based on the base address value and the offset address value. The processing element translates the virtual address into a physical address and accesses a location in the memory based on the physical address. | 2014-05-08 |
20140129800 | REDUCING MICROPROCESSOR PERFORMANCE LOSS DUE TO TRANSLATION TABLE COHERENCY IN A MULTI-PROCESSOR SYSTEM - Some embodiments include a method that can store a first physical address in a first entry in a translation lookaside buffer (TLB). The method can configure a first marker in the first entry in the TLB to indicate that hit suppression is allowed for the first entry. The method can detect a multi-processor coherency operation that will cause hit suppression for certain entries in a TLB, and cause purging of certain entries in the TLB. The method can translate a second virtual address for a second instruction into a second physical address. The method can store the second physical address in a second entry. The method can configure a second marker in the second entry in the TLB to indicate that the hit suppression is not allowed for the second entry in the TLB, and that the purging is not allowed for the second entry in the TLB. | 2014-05-08 |
20140129801 | SYSTEMS, APPARATUSES, AND METHODS FOR PERFORMING DELTA ENCODING ON PACKED DATA ELEMENTS - Embodiments of systems, apparatuses, and methods for performing delta encoding on packed data elements of a source and storing the results in packed data elements of a destination using a single vector packed delta encode instruction are described. | 2014-05-08 |
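The element-wise effect of a packed delta encode can be sketched in scalar form: each destination element holds the difference from its predecessor, with the first element carried through unchanged. This is a behavioral sketch only, not the vector instruction itself; the function names are illustrative.

```python
# Hypothetical scalar sketch of delta encoding over a vector of packed data
# elements: each output element is the difference from its predecessor, the
# first element is stored as-is, and decoding is a running (prefix) sum.

def delta_encode(src):
    return [src[0]] + [src[i] - src[i - 1] for i in range(1, len(src))]

def delta_decode(deltas):
    out = []
    total = 0
    for d in deltas:
        total += d          # running sum reconstructs the original elements
        out.append(total)
    return out
```

For example, `delta_encode([3, 7, 7, 10])` yields `[3, 4, 0, 3]`, and decoding that list returns the original vector — slowly changing data compresses to small deltas.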
20140129802 | METHODS, APPARATUS, AND INSTRUCTIONS FOR PROCESSING VECTOR DATA - A computer processor includes control logic for executing LoadUnpack and PackStore instructions. In one embodiment, the processor includes a vector register and a mask register. In response to a PackStore instruction with an argument specifying a memory location, a circuit in the processor copies unmasked vector elements from the vector register to consecutive memory locations, starting at the specified memory location, without copying masked vector elements. In response to a LoadUnpack instruction, the circuit copies data items from consecutive memory locations, starting at an identified memory location, into unmasked vector elements of the vector register, without copying data to masked vector elements. Other embodiments are described and claimed. | 2014-05-08 |
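The two instructions behave like a masked compress-store and a masked expand-load. A list-based behavioral sketch (function names, the convention that a set mask bit means "unmasked", and the returned element count are assumptions for illustration):

```python
# Hypothetical list-based sketch of the two instructions: pack_store compresses
# unmasked vector elements into consecutive memory slots, and load_unpack
# expands consecutive memory items into the unmasked positions of the vector.

def pack_store(vector, mask, memory, base):
    """Copy unmasked (mask bit set) elements to consecutive slots from base."""
    offset = 0
    for elem, keep in zip(vector, mask):
        if keep:
            memory[base + offset] = elem   # masked elements are simply skipped
            offset += 1
    return offset                          # number of elements stored

def load_unpack(vector, mask, memory, base):
    """Fill unmasked element positions from consecutive slots starting at base."""
    offset = 0
    for i, keep in enumerate(mask):
        if keep:
            vector[i] = memory[base + offset]
            offset += 1                    # masked positions are left untouched
    return vector
```

Packing `[10, 20, 30, 40]` under mask `[1, 0, 1, 1]` writes `10, 30, 40` to three consecutive slots; unpacking reverses the process into only the unmasked lanes.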
20140129803 | MULTI-MAGNITUDINAL VECTORS WITH RESOLUTION BASED ON SOURCE VECTOR FEATURES - Methods, systems and computer program products for resolving multiple magnitudes assigned to a target vector are disclosed. A target vector that includes one or more target vector dimensions is received. One of the target vector dimensions is processed to determine a total number of magnitudes assigned to the processed target vector dimension. Also, a source vector that includes one or more source vector dimensions is received. The received source vector is processed to determine a total number of features associated with the source vector. When it is detected that the total number of magnitudes assigned to the processed target vector dimension exceeds one, one of the assigned magnitudes is selected based on one of the determined features associated with the source vector. | 2014-05-08 |
20140129804 | TRACKING AND RECLAIMING PHYSICAL REGISTERS - A method and apparatus for tracking and reclaiming physical registers is presented. Some embodiments of the apparatus include rename logic configurable to map architectural registers to physical registers. The rename logic is configurable to bypass allocation of a physical register to an architectural register when information to be written to the architectural register satisfies a bypass condition. Some embodiments of the apparatus also include a plurality of first bits associated with the architectural registers. The rename logic is configurable to set one of the first bits to indicate that allocation of a physical register to the corresponding architectural register has been bypassed. | 2014-05-08 |
20140129805 | EXECUTION PIPELINE POWER REDUCTION - Systems and methods for reducing power consumption by an execution pipeline are provided. In one example, a method includes stalling an operation from being executed in the execution pipeline based on inputs to the operation being unavailable in a register file and disabling access to read the register file in favor of controlling a bypass network based on the consumer characteristics of the operation and producer characteristics of other operations being executed in the execution pipeline to forward data produced at an execution stage in the execution pipeline to be used as one or more resources of the operation. | 2014-05-08 |
20140129806 | LOAD/STORE PICKER - A method and apparatus for picking load or store instructions is presented. Some embodiments of the method include determining that the entry in the queue includes an instruction that is ready to be executed by the processor based on at least one instruction-based event and concurrently determining cancel conditions based on global events of the processor. Some embodiments also include selecting the instruction for execution when the cancel conditions are not satisfied. | 2014-05-08 |
20140129807 | APPROACH FOR EFFICIENT ARITHMETIC OPERATIONS - A system and method are described for providing hints to a processing unit that subsequent operations are likely. Responsively, the processing unit takes steps to prepare for the likely subsequent operations. Where the hints are more likely than not to be correct, the processing unit operates more efficiently. For example, in an embodiment, the processing unit consumes less power. In another embodiment, subsequent operations are performed more quickly because the processing unit is prepared to efficiently handle the subsequent operations. | 2014-05-08 |
20140129808 | MIGRATING TASKS BETWEEN ASYMMETRIC COMPUTING ELEMENTS OF A MULTI-CORE PROCESSOR - In one embodiment, the present invention includes a multicore processor having first and second cores to independently execute instructions, the first core visible to an operating system (OS) and the second core transparent to the OS and heterogeneous from the first core. A task controller, which may be included in or coupled to the multicore processor, can cause dynamic migration of a first process scheduled by the OS to the first core to the second core transparently to the OS. Other embodiments are described and claimed. | 2014-05-08 |
20140129809 | BYTE SELECTION AND STEERING LOGIC FOR COMBINED BYTE SHIFT AND BYTE PERMUTE VECTOR UNIT - Exemplary embodiments of the present invention disclose a method and system for executing data permute and data shift instructions. In a step, an exemplary embodiment encodes a control index value using the recoding logic into a 1-hot-of-n control for at least one of a plurality of datum positions in the one or more target registers. In another step, an exemplary embodiment conditions the 1-hot-of-n control by a gate-free logic configured for at least one of the plurality of datum positions in the one or more target registers for each of the data permute instructions and the at least one data shift instruction. In another step, an exemplary embodiment selects the 1-hot-of-n control or the conditioned 1-hot-of-n control based on a current instruction mode. In another step, an exemplary embodiment transforms the selected 1-hot-of-n control into a format applicable for the crossbar switch. | 2014-05-08 |
20140129810 | Known Good Code for On-Chip Device Management - In one embodiment, a processor comprises a programmable map and a circuit. The programmable map is configured to store data that identifies at least one instruction for which an architectural modification of an instruction set architecture implemented by the processor has been defined, wherein the processor does not implement the modification. The circuit is configured to detect the instruction or its memory operands and cause a transition to Known Good Code (KGC), wherein the KGC is protected from unauthorized modification and is provided from an authenticated entity. The KGC comprises code that, when executed, emulates the modification. In another embodiment, an integrated circuit comprises at least one processor core; at least one other circuit; and a KGC source configured to supply KGC to the processor core for execution. The KGC comprises interface code for the other circuit whereby an application executing on the processor core interfaces to the other circuit through the KGC. | 2014-05-08 |
20140129811 | MULTI-CORE PROCESSOR SYSTEM AND CONTROL METHOD - A multi-core processor system includes a multi-core processor that has plural core groups; and a storage device that stores a constraint on execution time for each application. A first identified core of the multi-core processor is configured to identify a constraint on execution time of a given application that is among the applications and for which an invocation instruction is received; determine whether the identified constraint meets a performance drop condition; assign the given application to a predetermined core of the multi-core processor, upon determining that the identified constraint meets the performance drop condition; and notify a second identified core of a core group among the core groups, of an assignment instruction for the given application, upon determining that the identified constraint does not meet the performance drop condition. | 2014-05-08 |
20140129812 | SYSTEM AND METHOD FOR EXECUTING SEQUENTIAL CODE USING A GROUP OF THREADS AND SINGLE-INSTRUCTION, MULTIPLE-THREAD PROCESSOR INCORPORATING THE SAME - A system and method for executing sequential code in the context of a single-instruction, multiple-thread (SIMT) processor. In one embodiment, the system includes: (1) a pipeline control unit operable to create a group of counterpart threads of the sequential code, one of the counterpart threads being a master thread, remaining ones of the counterpart threads being slave threads and (2) lanes operable to: (2 | 2014-05-08 |
20140129813 | PRODUCT HAVING A STORAGE DEVICE THAT HOLDS CONFIGURING INFORMATION - There is provided a product that includes (i) a component, (ii) a storage device that holds a configuration code that indicates that the component is installed in the product, (iii) a processor, and (iv) a memory that contains instructions that are readable by the processor and that control the processor to (a) read the configuration code from the storage device, (b) determine from the configuration code that the component is installed in the product, thus yielding a determination, and (c) execute an operation in response to the determination. | 2014-05-08 |
20140129814 | Method and device, terminal and computer readable medium for accelerating startup of operating system - Described are an operating system startup acceleration method and device, a terminal and a computer readable medium. The method comprises: acquiring prefetch information corresponding to at least one process to be accelerated in a procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated; and reading a corresponding data block into a system cache according to the acquired prefetch information, and completing a startup procedure of the process to be accelerated using the data block in the system cache. | 2014-05-08 |
20140129815 | VALIDATION AND/OR AUTHENTICATION OF A DEVICE FOR COMMUNICATION WITH NETWORK - A device may include a trusted component. The trusted component may be verified by a trusted third party and may have a certificate of verification stored therein based on the verification by the trusted third party. The trusted component may include a root of trust that may provide secure code and data storage and secure application execution. The root of trust may also be configured to verify an integrity of the trusted component via a secure boot and to prevent access to certain information in the device if the integrity of the trusted component cannot be verified. | 2014-05-08 |
20140129816 | System and Method for Booting Multiple Servers from Snapshots of an Operating System Installation Image - The disclosure is directed to a system and method for booting a plurality of servers from at least one of a primary storage drive and a secondary storage drive. An operating system installation image is stored in a primary storage drive. Snapshots including modifications to the operating system installation image are stored in a plurality of partitions of a secondary storage drive. A lookup table directs servers to read unmodified portions of the operating system installation image from the primary storage drive. The lookup table further directs servers to read modified portions of the operating system installation image from the secondary storage drive. | 2014-05-08 |
20140129817 | DEVICE BOOTING WITH AN INITIAL PROTECTION COMPONENT - Booting a computing device includes executing one or more firmware components followed by a boot loader component. A protection component for the computing device, such as an anti-malware program, is identified and executed as an initial component after executing the boot loader component. One or more boot components are also executed, these one or more boot components including only boot components that have been approved by the protection component. A list of boot components that have been previously approved by the protection component can also be maintained in a tamper-proof manner. | 2014-05-08 |
20140129818 | ELECTRONIC DEVICE AND BOOTING METHOD - The present invention provides an electronic device including a write-once-then-read-only register, a chipset, a read-only memory, a flash memory and a central processor. The write-once-then-read-only register is arranged to store a determination value. The chipset is arranged to produce a CPU reset signal. The read-only memory is implemented in the chipset, and has a first memory block which corresponds to a predetermined address and is used to store a first instruction. The flash memory is coupled to the chipset, and has a second memory block which corresponds to the predetermined address and is used to store a second instruction. The central processor is arranged to determine the location of the predetermined address according to the CPU reset signal and the determination value. | 2014-05-08 |
20140129819 | CLOUD CLUSTER SYSTEM AND BOOT DEPLOYMENT METHOD FOR THE SAME - A cloud cluster system and a boot deployment method for the same are disclosed, wherein the cloud cluster system comprises a boot server, a management server, a system storage pool and at least one host. After the host is powered on, the host executes a network boot procedure according to a netboot policy. Next, the host connects to the system storage pool for accessing the corresponding root file system, and downloads the golden image from the boot server in order to complete the network boot procedure. After booting, the host is deployed by the management server. The management server enables the corresponding content of the host according to configurations, so that the deployed host acts in the corresponding role in the cloud cluster system. | 2014-05-08 |
20140129820 | METHOD OF UPDATING BOOT IMAGE FOR FAST BOOTING AND IMAGE FORMING APPARATUS FOR PERFORMING THE SAME - An image forming apparatus for updating a boot image for fast booting includes an interface unit to receive a new version of firmware to update a previous version of firmware installed in the image forming apparatus, a non-volatile memory to store a first boot image that is a previous version for the fast booting, and a processor to update the boot image by replacing the stored first boot image with a second boot image that is a new version when the received new version of firmware includes the second boot image. | 2014-05-08 |
20140129821 | TEST SYSTEM AND METHOD FOR COMPUTER - A test system for a computer includes a basic input/output system (BIOS) chip, a platform controller hub (PCH) chip, and a baseboard management controller (BMC) chip. The PCH chip performs a test on a component of the computer according to a control instruction outputted by the BIOS chip to determine an operation state of the component. The PCH chip outputs state signals to the BMC chip through a corresponding general purpose input output (GPIO) pin according to a test result of the component. The BMC chip obtains test information according to the state signals received from the corresponding GPIO pin. | 2014-05-08 |
20140129822 | RUNTIME PROCESS DIAGNOSTICS - Content management includes populating a library with modular objects and metadata associated with the modular objects. In response to a query, the library can be searched based in part on the metadata. The query can relate to implementation of an industrial process. One or more modular objects in the library can be identified as satisfying the query. A result of the query can be output and the output can include the identified modular objects and the respective metadata associated with the identified modular objects. The metadata can be anything known about the object that might not be accessible at runtime control. | 2014-05-08 |
20140129823 | SYSTEM AND METHOD FOR CONFIGURING PLURAL SOFTWARE PROFILES - A computer with multiple software applications has defined for it plural software profiles for selection of one of the profiles in response to a system and/or user signal. Each profile when selected enables a respective set of applications to run on the system. | 2014-05-08 |
20140129824 | SINGLE-PASS DATA COMPRESSION AND ENCRYPTION - Embodiments compress and encrypt data in a single pass to reduce inefficiencies that occur from compressing and encrypting data separately. Typically, compression and encryption are implemented in separate functional units. This has a few disadvantages: 1) encryption cannot make use of compression state to further secure the message, 2) processed data is read and written twice, 3) additional space, time, and resources are consumed, and 4) it is more prone to potential cipher-attacks since the encryption stage is independent from compression. Embodiments overcome these disadvantages by structuring these operations so that both compression and encryption are executed within the same processing loop. Thus: 1) encryption is stronger due to the dependence on the compression state, 2) I/O buffers are accessed only once, reducing overhead, 3) system footprint is reduced, and 4) cipher analysis is more complex since the decryption process cannot be separated from the decompression process. | 2014-05-08 |
20140129825 | ADAPTIVE VIDEO SERVER WITH FAST INITIALIZATION AND METHODS FOR USE THEREWITH - A streaming video server includes a virtual file system that stores playlist data corresponding to a plurality of video programs available from at least one video source and that stores at least one initial video program segment for each of the plurality of video programs. The streaming video server receives a request for a selected one of the plurality of video programs from a client device. The selected one of the plurality of video programs is retrieved from the at least one video source in response to the request. A plurality of encoded segments are generated from the selected one of the plurality of video programs, based on rate data. A multiplexer generates a plurality of output segments from the at least one initial video program segment corresponding to the selected one of the plurality of video programs and the plurality of encoded video program segments. | 2014-05-08 |
20140129826 | Simplified Login for Mobile Devices - Aspects of the subject matter described herein relate to a simplified login for mobile devices. In aspects, on a first logon, a mobile device asks a user to enter credentials and a PIN. The credentials and PIN are sent to a server which validates user credentials. If the user credentials are valid, the server encrypts data that includes at least the user credentials and the PIN and sends the encrypted data to the mobile device. In subsequent logons, the user may logon using only the PIN. During login, the mobile device sends the PIN in conjunction with the encrypted data. The server can then decrypt the data and compare the received PIN with the decrypted PIN. If the PINs are equal, the server may grant access to a resource according to the credentials. | 2014-05-08 |
20140129827 | Implementation of robust and secure content protection in a system-on-a-chip apparatus - A content processing integrated circuit includes a system-on-a-chip (SoC) that further includes a processor to receive an authentication request from an external device for authenticating if the SoC is permitted to receive encrypted content from the external device, and to receive the encrypted content once the SoC is authenticated. An authentication processor is provided and coupled to the processor to authenticate the SoC to the external device when the processor receives the authentication request, and to generate a decryption key for decrypting the encrypted content. A decryption processor is provided and coupled to the processor and the authentication processor to receive the decryption key from the authentication processor and to decrypt the encrypted content with the decryption key. A wireless display system with such SoC is also described. A method of implementing a secure and robust content protection in a SoC is also described. | 2014-05-08 |
20140129828 | USER AUTHENTICATION METHOD USING SELF-SIGNED CERTIFICATE OF WEB SERVER, CLIENT DEVICE AND ELECTRONIC DEVICE INCLUDING WEB SERVER PERFORMING THE SAME - A user authentication method using a self-signed certificate of a web server includes: receiving a log-in message generated by using a public key registered to the self-signed certificate of the web server from a client device; generating a response message by using the log-in message and a secret key corresponding to the public key; transmitting the generated response message to the client device; receiving a verification value from the client device via a secure socket layer (SSL) channel connected by using the self-signed certificate of the web server from the client device when a reliability of the response message is verified at the client device; verifying a reliability of the log-in message by using the received verification value; and confirming completion of user authentication if the reliability of the log-in message is verified. | 2014-05-08 |
20140129829 | UNAUTHORIZED CONNECTION DETECTING DEVICE, UNAUTHORIZED CONNECTION DETECTING SYSTEM, AND UNAUTHORIZED CONNECTION DETECTING METHOD - An unauthorized connection detecting device which detects an unauthorized charge/discharge device includes: a time information obtaining unit obtaining, as time information, information from a first charge/discharge device, the information indicating at least one of an issuing date of a first certificate which is a public key certificate and an issuing date of a certificate revocation list held by the first charge/discharge device; an expiration date obtaining unit obtaining expiration date information from a second charge/discharge device, the expiration date information indicating an expiration date of a second certificate which is a public key certificate held by the second charge/discharge device; and an unauthorization detecting unit detecting whether or not the second charge/discharge device is the unauthorized charge/discharge device by comparing the time information with the expiration date information. | 2014-05-08 |
20140129830 | Process for Storing Data on a Central Server - This disclosure describes a process for storing data on a central server with a plurality of users, each of them having their own user password used for creating a user key, being respectively assigned to some of these users, and some of the data, being divided into data blocks to be uploaded, and each data block being compared to data blocks on the server based on a unique data block ID value in order to determine whether a corresponding data block is already stored on the server and to upload to the server those data blocks which are not already present, a data block list to be uploaded being created and uploaded to the central server, so that in a data recovery step data stored on the central server which are requested by the user can be restored in their original form based on said list. | 2014-05-08 |
20140129831 | Computer-Implemented System And Method For Individual Message Encryption Using A Unique Key - A computer-implemented system and method for individual record encryption is provided. A plurality of records associated with incoming calls to a call center are maintained. A unique encryption key is randomly generated for each record. The records are each encrypted using the encryption key generated for that record. The keys are stored in a location separate from the encrypted records. | 2014-05-08 |
20140129832 | TRANSPARENT REAL-TIME ACCESS TO ENCRYPTED NON-RELATIONAL DATA - Embodiments include a computer system, method and program product for encrypted file access. An access program module, connected to at least one file system, intercepts a data request for accessing a plaintext file with information stored physically and consecutively on a hard disk and having a pre-determined order and length expected by a program that sends the data request, wherein the plaintext file includes a plaintext record having a key field and a plaintext data field. The access program module determines an encrypted file, associated with the plaintext file, based on a configuration file and the data request, wherein the configuration file indicates the encrypted file associated with the plaintext file. The access program module determines one or more encryption keys based on the configuration file. The access program module accesses an encrypted data field within the encrypted file based on the encryption keys and the key field. | 2014-05-08 |
20140129833 | MANAGEMENT OF SECURE DATA IN CLOUD-BASED NETWORK - A processor receives a request to access secure data. The processor translates the request in order to locate the secure data in a secure data store. The processor retrieves the secure data from the secure data store. The processor encodes the secure data to generate protected secure data. The processor transmits the protected secure data from the secure data store to at least one instantiated virtual machine in a cloud-based network. | 2014-05-08 |
20140129834 | Providing User Authentication - In particular embodiments, a user associated with a user account wishes to utilize their computing device to facilitate authentication of their identity. The user may provide a device key to an online system hosting the user account, wherein the device key uniquely identifies their computing device. The device key may be based on a device identifier encoded in hardware of the computing device. The online system may then store the device key in association with the user account. Subsequently, if an action related to the online system requires authentication, the user may be asked to provide authentication using their computing device. The user generates an authentication code using their device, which can be entered by the user into a user interface for comparison against an authentication code generated using the device key stored by the online system. | 2014-05-08 |
20140129835 | OPTIMIZING OFFLINE MESSAGE (NETWORK HISTORY) DELIVERY FOR USERS ACCESSING AN APPLICATION FROM A SINGLE DEVICE - Devices, systems and methods for sending messages from a web service server to a computing device shared by a current user and another offline user while maintaining privacy for the other offline user's messages and decreasing bandwidth requirements for transmission of messages may include registering the user and the offline user of the computing device with the web service server, receiving at the web service server from the computing device a login by a first user, wherein the first user is determined to be the current user, checking a database for undelivered messages for the at least one offline user who is not currently accessing the web service server, wherein any user who is not a current user is determined to be an offline user, encrypting each offline user's undelivered messages, sending the undelivered messages to the computing device, and storing offline user encrypted undelivered messages in the computing device. | 2014-05-08 |
20140129836 | INFORMATION DISTRIBUTION SYSTEM AND PROGRAM FOR THE SAME - An information distribution system described herein is capable of securely storing digitized personal information in an encrypted state in a storage section and securely transferring/disclosing the stored digitized information only to a particular third person via a network. Communication of the information is securely performed in the encrypted state between information terminals connected to the communication network. An information terminal which has created information encrypts the original information by a common key generated upon communication and stores the information in a secure storage of one of the information terminals connected to the communication network while maintaining the encrypted state. Further, the system creates a mechanism for authenticating a person having a particular authority for viewing the encrypted information and index information having an encrypted common key and link information indicating the location of the information for supply to a user. | 2014-05-08 |
20140129837 | SYSTEMS AND METHODS FOR DEVICE AND DATA AUTHENTICATION - Embodiments relate to systems and methods for authenticating devices and securing data. In embodiments, a session key for securing data between two devices can be derived as a byproduct of a challenge-response protocol for authenticating one or both of the devices. | 2014-05-08 |
20140129838 | METHOD AND APPARATUS FOR RESILIENT END-TO-END MESSAGE PROTECTION FOR LARGE-SCALE CYBER-PHYSICAL SYSTEM COMMUNICATIONS - To address the security requirements for cyber-physical systems, embodiments of the present invention include a resilient end-to-end message protection framework, termed Resilient End-to End Message Protection or REMP, exploiting the notion of the long-term key that is given on per node basis. This long term key is assigned during the node authentication phase and is subsequently used to derive encryption keys from a random number per-message sent. Compared with conventional schemes, REMP improves privacy, message authentication, and key exposure, and without compromising scalability and end-to-end security. The tradeoff is a slight increase in computation time for message decryption and message authentication. | 2014-05-08 |
20140129839 | INTERNET PROTOCOL MAPPING RESOLUTION IN FIXED MOBILE CONVERGENCE NETWORKS - Techniques for facilitating operation of a communication device having a first internet protocol (IP) address in a first network and a second IP address in a second network include detecting a presence of a network address translation (NAT) table; implementing, when the NAT table is present, a message exchange protocol to obtain a mapping between the first IP address and the second IP address; and reporting, in a communication message, the mapping between the first IP address and the second IP address. In one operational scenario, the first network is a 3GPP network and the second network is a broadband fixed network such as a DSL or a cable modem network. | 2014-05-08 |
20140129840 | SYSTEMS AND METHODS FOR DEVICE AND DATA AUTHENTICATION - Embodiments relate to systems and methods for authenticating devices and securing data. In embodiments, a session key for securing data between two devices can be derived as a byproduct of a challenge-response protocol for authenticating one or both of the devices. | 2014-05-08 |
20140129841 | Methods and Apparatus to Identify Media - Methods and apparatus for identifying media are described. An example method includes determining application identification information for a media presentation application executing on a media device, determining a first watermark for the application identification information from a lookup table, requesting media identification information for media from the media presentation application, determining a second watermark for the media identification information from the lookup table, inserting the first watermark in the media prior to output of the media by the media device, and inserting the second watermark in the media prior to the output of the media by the media device. | 2014-05-08 |
20140129842 | UNAUTHORIZED CONTENTS DETECTION SYSTEM - A data processing device for playing back a digital work reduces the processing load involved in verification by using only a predetermined number of encrypted units selected randomly from multiple encrypted units constituting encrypted contents recorded on a DVD. In addition, the data processing device improves the accuracy of detecting unauthorized contents by randomly selecting a predetermined number of encrypted units every time the verification is performed. | 2014-05-08 |
20140129843 | Methods and Apparatus for Managing Service Access Using a Touch-Display Device Integrated with Fingerprint Imager - The present invention provides an apparatus that enables biometric-based access control to services and/or resources, comprising a crypto processor, a biometric processor, a fingerprint controller, a frame hash engine, a display repeater and/or a display controller, a touch-panel controller and a biometric touch-display panel. The frame hash engine and/or the display controller computes a frame hash of the frame displayed on the biometric touch-display panel. When a fingerprint is captured, in the registration scenario, the biometric processor extracts a biometric identity, stores it in a service biometric credential repository, and submits a registration proof to the server; in the service access scenarios, the biometric processor verifies user identity by matching the fingerprint, and submits an access identity to the server. | 2014-05-08 |
20140129844 | STORAGE SECURITY USING CRYPTOGRAPHIC SPLITTING - Methods and systems for storing data securely in a secure data storage network are disclosed. One method includes receiving at a secure storage appliance a block of data for storage on a volume, the volume associated with plurality of shares distributed across a plurality of physical storage devices. The method also includes cryptographically splitting the block of data received by the secure storage appliance into a plurality of secondary data blocks. The method further includes encrypting each of the plurality of secondary data blocks with a different session key, each session key associated with at least one of the plurality of shares. The method also includes storing each data block and associated session key at the corresponding share, remote from the secure storage appliance. | 2014-05-08 |
20140129845 | ATTRIBUTE BASED ENCRYPTION USING LATTICES - A master public key is generated as a first set of lattices based on a set of attributes, along with a random vector. A master secret key is generated as a set of trap door lattices corresponding to the first set of lattices. A user secret key is generated for a user's particular set of attributes using the master secret key. The user secret key is a set of values in a vector that are chosen to satisfy a reconstruction function for reconstructing the random vector using the first set of lattices. Information is encrypted to a given set of attributes using the user secret key, the given set of attributes and the user secret key. The information is decrypted by a second user having the given set of attributes using the second user's secret key. | 2014-05-08 |
20140129846 | Method and System for Protecting a Driver - Various examples of the present disclosure provide a method and a system for protecting a driver. The method includes encrypting a program file, and sending an Input/Output Request Package (IRP) and the encrypted program file; receiving the IRP and the encrypted program file, decrypting the encrypted program file, verifying the decrypted program file; and, if verification is passed, returning a handle, otherwise, not returning the handle. In the examples of the present disclosure, the program file of the application layer is encrypted, and the encrypted program file is sent when the IRP is sent; the driver layer decrypts and verifies the encrypted program file, and returns the handle to the application layer when the verification is passed, so that the application layer can access the driver layer through the handle; if the verification is not passed, the driver layer rejects the access of the application layer. Therefore, a legitimate application layer can communicate with the driver layer, a suspicious program is prevented from accessing the driver layer, and the security of the driver layer is improved. | 2014-05-08 |
20140129847 | Trusted Storage - In one embodiment, a method for authenticating access to encrypted content on a storage medium, wherein the encrypted content is encrypted according to a full disk encryption (FDE) key, the storage medium including an encrypted version of the FDE key and an encrypted version of a protected storage area (PSA) key, and wherein the encrypted version of the FDE key is encrypted according to the PSA key, the method comprising: providing an authenticated communication channel between a host and a storage engine associated with the storage medium; at the storage engine, receiving a pass code from the host over the authenticated communication channel; hashing the pass code to form a derived key, wherein the encrypted version of the PSA key is encrypted according to the derived key; verifying an authenticity of the pass code; if the pass code is authentic, decrypting the encrypted version of the PSA key to recover the PSA key; decrypting the encrypted FDE key using the recovered PSA key to recover the FDE key; and decrypting the encrypted content using the FDE key. | 2014-05-08 |
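The key hierarchy above (pass code → derived key → PSA key → FDE key → content) can be sketched with a toy SHA-256-based stream cipher. The cipher and all key values are illustrative assumptions, not the patent's actual primitives.

```python
import hashlib

def keystream_crypt(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream. A stand-in
    for whatever cipher the storage engine actually uses."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Provisioning (all values hypothetical)
pass_code = b"correct horse"
derived_key = hashlib.sha256(pass_code).digest()          # hash the pass code
psa_key = b"P" * 32
fde_key = b"F" * 32
enc_psa_key = keystream_crypt(derived_key, psa_key)       # PSA key under derived key
enc_fde_key = keystream_crypt(psa_key, fde_key)           # FDE key under PSA key
enc_content = keystream_crypt(fde_key, b"secret sector")  # content under FDE key

def unlock(code: bytes) -> bytes:
    """Unlock path: pass code -> derived key -> PSA key -> FDE key -> content."""
    dk = hashlib.sha256(code).digest()
    psa = keystream_crypt(dk, enc_psa_key)
    fde = keystream_crypt(psa, enc_fde_key)
    return keystream_crypt(fde, enc_content)

assert unlock(pass_code) == b"secret sector"
assert unlock(b"wrong code") != b"secret sector"
```

A wrong pass code yields a wrong derived key, so every subsequent unwrap in the chain produces garbage rather than the plaintext.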
20140129848 | Method and Apparatus for Writing and Reading Hard Disk Data - Embodiments provide a method and an apparatus for writing and reading hard disk data. Plain-text data is encrypted by using an encryption key to obtain cipher-text data and a decryption key. The cipher-text data is written into an available area of a hard disk, and the decryption key is written into a reserved area of the hard disk. | 2014-05-08 |
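A minimal sketch of the write/read split described above, using a dict as the disk model and a toy XOR cipher; both are invented for illustration, since the abstract does not specify the cipher or on-disk layout.

```python
# Toy hard-disk model: a reserved area for keys, an available area for data
disk = {"reserved": {}, "available": {}}

def write_encrypted(lba: int, plaintext: bytes, key: int) -> None:
    """Encrypt with a toy XOR key, store the cipher-text in the available
    area, and store the decryption key in the reserved area."""
    disk["available"][lba] = bytes(b ^ key for b in plaintext)
    disk["reserved"][lba] = key

def read_decrypted(lba: int) -> bytes:
    """Fetch the key from the reserved area and decrypt the cipher-text."""
    key = disk["reserved"][lba]
    return bytes(b ^ key for b in disk["available"][lba])

write_encrypted(0, b"hello disk", key=0x42)
assert disk["available"][0] != b"hello disk"   # stored as cipher-text
assert read_decrypted(0) == b"hello disk"
```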
20140129849 | SEMICONDUCTOR DEVICE INCLUDING ENCRYPTION SECTION, SEMICONDUCTOR DEVICE INCLUDING EXTERNAL INTERFACE, AND CONTENT REPRODUCTION METHOD - A secure LSI device | 2014-05-08 |
20140129850 | POLARITY CORRECTION BRIDGE CONTROLLER FOR COMBINED POWER OVER ETHERNET SYSTEM - A system for combining power to a load in a Powered Device (PD) using Power Over Ethernet (PoE) receives power from a first channel and power from a second channel, via four pairs of wires. A MOSFET bridge for each channel is initially disabled. A bridge controller IC simultaneously senses all the voltages and controls the bridge MOSFETs. The bridge controller IC also contains a first PoE handshaking circuit. A second PoE handshaking circuit is external to the bridge controller IC and operates independently. The body diodes in the MOSFET bridge initially couple the first channel to the second PoE handshaking circuit while isolating the second channel. The second handshaking circuit then couples the first channel to the load. The first handshaking circuit then carries out a PoE handshaking routine for the second channel. Ultimately, the bridge controller controls the bridge MOSFETs to couple both channels to the load. | 2014-05-08 |
20140129851 | VOLTAGE IDENTIFICATION DEFINITION REFERENCE VOLTAGE GENERATION CIRCUIT AND BOOT VOLTAGE GENERATING METHOD THEREOF - A voltage identification definition (VID) reference voltage generation circuit and a boot voltage generating method thereof are provided. In the boot voltage generating method, a VID reference voltage generation circuit is provided. The VID reference voltage generation circuit includes a preset voltage providing unit, a switch and a VID input signal detection unit. When the VID input signal detection unit detects no input of a VID signal, a control signal is generated to control the switch, such that the preset voltage providing unit provides an adjustable preset voltage. | 2014-05-08 |
20140129852 | Dynamic Voltage Dithering - A request for a high voltage mode is received and a high voltage timer is started in response to determining that a remaining amount of high voltage credits exceeds a voltage switch threshold value. A switch to the high voltage mode is made in response to the request. A switch to a low voltage mode is made in response to an indication. The request may be received from an application running on a data processing system. If the indication is that the high voltage timer has expired, a low voltage timer is started in response to switching to low voltage mode. If the high voltage request is still active when the low voltage timer expires, a switch back to high voltage mode occurs and a new high voltage timer is started. | 2014-05-08 |
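The credit-and-timer dithering above can be modeled as a small discrete-time state machine. All thresholds, timer lengths, and the one-credit-per-tick spending rule are invented for illustration; the abstract specifies only the credit check and the two timers.

```python
class VoltageDitherer:
    """Discrete-time sketch of credit-gated high/low voltage dithering."""

    def __init__(self, credits=10, switch_threshold=3,
                 high_timer=4, low_timer=2):
        self.credits = credits
        self.switch_threshold = switch_threshold
        self.high_timer_len = high_timer
        self.low_timer_len = low_timer
        self.mode = "low"
        self.timer = 0
        self.request_active = False

    def request_high(self):
        """Switch to high mode only if enough credits remain."""
        self.request_active = True
        if self.mode == "low" and self.credits > self.switch_threshold:
            self.mode = "high"
            self.timer = self.high_timer_len

    def tick(self):
        if self.mode == "high":
            self.credits -= 1          # spend a credit per high-voltage tick
            self.timer -= 1
            if self.timer <= 0:        # high timer expired: drop to low mode
                self.mode = "low"
                self.timer = self.low_timer_len
        else:
            self.timer = max(0, self.timer - 1)
            # Low timer expired while the request is still active: dither back
            if (self.timer == 0 and self.request_active
                    and self.credits > self.switch_threshold):
                self.mode = "high"
                self.timer = self.high_timer_len

d = VoltageDitherer()
d.request_high()
assert d.mode == "high"
for _ in range(4):
    d.tick()
assert d.mode == "low"     # high voltage timer expired
for _ in range(2):
    d.tick()
assert d.mode == "high"    # low timer expired, request still active
```

Once the credit pool drains to the threshold, the dithering stops granting high-voltage intervals, which bounds the time spent at the higher rail.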
20140129853 | Advertising power over ethernet (POE) capabilities among nodes in a POE system using type-length-value (TLV) structures - Embodiments of the present disclosure provide systems and methods to enable a Power Source Equipment (PSE) and a Powered Device (PD) to advertise their identity, capabilities, and neighbors by exchanging IEEE Standard 802.1AB Link Layer Discovery Protocol (LLDP) information in Ethernet frames. Each Ethernet frame contains one or more LLDP Data Units (LLDPDUs) corresponding to a sequence of type-length-value (TLV) structures. The PSEs and PDs utilize optional TLV structures from among the one or more LLDPDUs to advertise their PoE capabilities. | 2014-05-08 |
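The TLV framing referenced above follows IEEE 802.1AB: a 7-bit type and a 9-bit length packed into a 16-bit header, followed by the value. A minimal encoder and decoder is sketched below; the "PoE capability" payload content is hypothetical, standing in for an organizationally specific TLV.

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack an LLDP TLV: 7-bit type + 9-bit length in one 16-bit header,
    followed by the value bytes (IEEE 802.1AB framing)."""
    assert 0 <= tlv_type < 128 and len(value) < 512
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def decode_tlvs(frame: bytes):
    """Walk a byte string of concatenated TLVs, yielding (type, value)."""
    i = 0
    while i < len(frame):
        header, = struct.unpack_from("!H", frame, i)
        tlv_type, length = header >> 9, header & 0x1FF
        yield tlv_type, frame[i + 2:i + 2 + length]
        i += 2 + length

# A Chassis ID TLV (type 1) plus a hypothetical PoE-capability TLV (type 127,
# the organizationally specific type used for optional advertisements)
lldpdu = encode_tlv(1, b"\x04MAC!") + encode_tlv(127, b"PoE:30W")
assert list(decode_tlvs(lldpdu)) == [(1, b"\x04MAC!"), (127, b"PoE:30W")]
```

Because each TLV carries its own length, a receiver can skip optional TLVs it does not understand, which is what makes this a safe channel for vendor PoE extensions.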
20140129854 | Auto-Negotiation and Advanced Classification for Power Over Ethernet (POE) Systems - Systems and methods are provided for an auto-negotiation mode of operation that allows power source equipment (PSE) and a powered device (PD) to negotiate power transfer and/or data communication without the use of conventional classification pulses. The auto-negotiation mode of operation provides for a universal negotiation of information between the PSE and PD regardless of the specific capabilities of each. The information can be used to configure the power to be applied to the communication link and/or data communication over the communication link. | 2014-05-08 |
20140129855 | ADAPTIVE POWER INJECTED PORT EXTENDER - A port extender includes a chassis with uplink ports that are operable to receive power and data from a power sourcing device, and user device ports that are operable to connect to user devices. A power management processor is coupled to each of the uplink ports and the user device ports. The power management processor is operable to determine a power budget using power received by the uplink ports. The power management processor is also operable to detect a port configuration event such as the removal of a connection of a user device to a user device port, the inactivity of a user device port, or the addition of a connection of a user device to a user device port, and in response, selectively provide power to one or more of the plurality of user device ports based on the power budget and the port configuration event. | 2014-05-08 |
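The budget-driven port allocation described above can be sketched as a greedy grant loop over the user device ports. The port fields, the priority ordering, and the wattages are invented for illustration; the abstract specifies only that grants depend on the budget and on port configuration events.

```python
def allocate_port_power(budget_w: float, ports: dict) -> dict:
    """Grant power to connected, active ports in priority order until the
    budget received on the uplink ports is exhausted."""
    granted = {}
    remaining = budget_w
    for name, port in sorted(ports.items(),
                             key=lambda item: item[1]["priority"]):
        if (port["connected"] and not port["inactive"]
                and port["demand_w"] <= remaining):
            granted[name] = port["demand_w"]
            remaining -= port["demand_w"]
    return granted

# Hypothetical port table after a configuration event (p2 went inactive,
# p4's user device was unplugged)
ports = {
    "p1": {"connected": True,  "inactive": False, "demand_w": 15.4, "priority": 1},
    "p2": {"connected": True,  "inactive": True,  "demand_w": 15.4, "priority": 2},
    "p3": {"connected": True,  "inactive": False, "demand_w": 30.0, "priority": 3},
    "p4": {"connected": False, "inactive": False, "demand_w": 15.4, "priority": 4},
}
grants = allocate_port_power(50.0, ports)
assert grants == {"p1": 15.4, "p3": 30.0}
```

Re-running the allocator on each port configuration event reclaims the power freed by inactive or disconnected ports for the remaining devices.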
20140129856 | DEVICE AND METHOD FOR ELECTRIC POWER MANAGEMENT OF A PLURALITY OF PERIPHERAL INTERFACES - The arrangement and method of the invention comprise dynamically limiting the current provided to multiple peripheral interfaces of an electronic device, comprising individual current limiting per peripheral interface and global current limiting over all peripheral interfaces, in a way that is optimized to best suit the power needs of the peripheral devices connected to the peripheral interfaces while respecting the power supplying capacity of the electronic device. | 2014-05-08 |
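The two-level limiting described above (a per-interface clamp followed by a global cap) can be sketched as follows. The proportional-scaling policy used when the global limit is exceeded is an assumption, since the abstract does not specify how the global limit is enforced.

```python
def limit_currents(requests_ma, per_port_limit_ma, global_limit_ma):
    """Clamp each interface to its individual limit, then scale all grants
    down proportionally if their sum exceeds the global supply capacity."""
    grants = [min(r, per_port_limit_ma) for r in requests_ma]
    total = sum(grants)
    if total > global_limit_ma:
        scale = global_limit_ma / total
        grants = [g * scale for g in grants]
    return grants

# Three peripherals requesting 900/300/700 mA on 500 mA ports, 1 A supply
grants = limit_currents([900, 300, 700], per_port_limit_ma=500,
                        global_limit_ma=1000)
# individual clamps give [500, 300, 500]; total 1300 mA exceeds 1000 mA,
# so every grant is scaled by 1000/1300
assert abs(sum(grants) - 1000) < 1e-6
assert all(g <= 500 for g in grants)
```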
20140129857 | METHOD OF DYNAMICALLY SCALING A POWER LEVEL OF A MICROPROCESSOR - A method of dynamically scaling a power level of a microprocessor is provided. The method includes: receiving a plurality of workload rates of a microprocessor in a first duration period; determining a second duration period by adjusting a length of the first duration period; calculating a period workload rate based on the plurality of workload rates in the first duration period; dynamically scaling a power level of the microprocessor; and maintaining the scaled power level of the microprocessor in the second duration period. | 2014-05-08 |
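The steps above can be sketched as a single function: average the sampled workload rates over the first duration period, pick a power level, and adjust the length of the second period. The level thresholds and the period-adjustment rule are invented for illustration, since the abstract does not specify them.

```python
def scale_power(workloads, first_period_s, levels=(1, 2, 3, 4)):
    """Compute the period workload rate, map it to a discrete power level,
    and derive the second duration period from the first."""
    period_rate = sum(workloads) / len(workloads)   # period workload rate
    # Map the average rate onto one of the discrete power levels
    level = levels[min(int(period_rate * len(levels)), len(levels) - 1)]
    # Invented rule: a stable load earns a longer hold period, a bursty
    # load a shorter one, so the scaled level is maintained appropriately
    spread = max(workloads) - min(workloads)
    second_period_s = first_period_s * (2.0 if spread < 0.1 else 0.5)
    return level, second_period_s

# Three workload-rate samples from the first 1-second duration period
level, hold = scale_power([0.72, 0.75, 0.74], first_period_s=1.0)
assert level == 3 and hold == 2.0
```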
20140129858 | METHOD AND APPARATUS FOR SETTING AN I/O BANDWIDTH-BASED PROCESSOR FREQUENCY FLOOR - An apparatus and method for managing a frequency of a computer processor. The apparatus includes a power control unit (PCU) to manage power in a computer processor. The | 2014-05-08 |
20140129859 | Remote Wake Using Signal Characteristics - Disclosures relate to waking a sleeping device while minimizing the active components needed to receive a remote wakeup request. In one aspect, devices having an RF tuner may be configured to detect a digital or an analog signal variation, or a change in RF signal characteristics. Further, the variation or change may be interpreted as a wakeup signal. | 2014-05-08 |
20140129860 | METHOD AND APPARATUS FOR ENABLING MOBILE DEVICE POWER MANAGEMENT USING A SHARED WORKER - A method, apparatus and computer program product are provided to enable an application in an HTML5 runtime to listen to background servers and wait for events, even if the power savings mode is on. In the context of a method, a shared worker application, comprising an event listener, is launched and the operating system is notified not to pause the shared worker application in the power savings mode. In response to a triggering event, the shared worker application may wake up the full system or may wake up only the main execution of JavaScript and a specific action caused by JavaScript while other power savings actions continue. The method may also cause performance of the specific action, such as an audio alert. | 2014-05-08 |
20140129861 | CONTROLLING A DATA STREAM - The present application relates to a computer implemented method, a computer program product and a computer system for controlling a data stream from a server computer to a client computer. The client computer comprises a data stream client and the server computer comprises a data stream server. While receiving, by the data stream client from the data stream server, the data stream, the method may comprise generating, by the client computer, a power management decrease event, receiving, by the data stream client, the power management decrease event, sending, from the data stream client to the data stream server, a first pause request to temporarily halt the data stream, and transitioning, by the client computer, from a fully working power state to a decreased power consumption state in response to the power management decrease event. | 2014-05-08 |
20140129862 | APPARATUS AND METHOD FOR REPLACING A BATTERY IN A PORTABLE TERMINAL - An apparatus and method for replacing a battery in a portable terminal are provided, in which there are a main battery and an auxiliary battery, a cover removal sensor senses the removal of a battery cover, and a controller switches from the main battery to the auxiliary battery for supplying power in response to the battery cover removal, wherein the auxiliary battery supplies power to some components of the portable terminal under the control of the controller. | 2014-05-08 |
20140129863 | SERVER, POWER MANAGEMENT SYSTEM, POWER MANAGEMENT METHOD, AND PROGRAM - The aim is to increase the reliability of power supply control of a server group and reduce the power consumption of the server group. A server includes a power supply stop control unit which stops a power supply of a predetermined processing unit upon receiving a power supply stop instruction signal; a power supply start-up control unit which intermittently starts up the power supply of the predetermined processing unit while it is stopped; a power supply start-up determination unit which, each time the power supply is started up, determines whether the processing load of the other servers that are executing their processes is higher than an upper limit load determined in advance as a load requiring at least as many servers as are currently executing; and a process control unit which controls process execution for the predetermined processing unit when that processing load is determined to be higher than the upper limit load. | 2014-05-08 |
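The start-up determination above can be sketched as a predicate over the loads of the still-active servers: the sleeping server resumes processing only when the active servers are collectively overloaded. The linear capacity model is an invented simplification.

```python
def should_take_over(active_loads, per_server_capacity):
    """On an intermittent wake-up, return True if the servers still running
    are loaded beyond the upper limit load they can absorb, i.e. the group
    needs more servers than are currently executing."""
    upper_limit = len(active_loads) * per_server_capacity
    return sum(active_loads) > upper_limit

# Three active servers, each rated for a load of 0.8 (hypothetical units)
assert should_take_over([0.9, 0.9, 0.9], per_server_capacity=0.8)      # overloaded
assert not should_take_over([0.5, 0.6, 0.5], per_server_capacity=0.8)  # spare capacity
```

When the predicate is false the awakened server can power its processing unit back down, which is how intermittent start-up saves energy without sacrificing reliability.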
20140129864 | MULTIPROCESSOR SYSTEM AND METHOD OF SAVING ENERGY THEREIN - A multiprocessor system comprises: a plurality of processors; a counting, measuring and calculating (CMC) unit that determines a generating rate of sleep tasks and a time length of each of the sleep tasks based on an acceptable delay; a sleep task generator that generates the sleep tasks with the time length at the generating rate, and injects the generated sleep tasks into a traffic for original tasks; and a scheduler that assigns both the original tasks and the sleep tasks in the traffic to the plurality of processors, wherein each of the sleep tasks switches off one of the plurality of processors, on which the sleep task is assigned. | 2014-05-08 |
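The sleep-task injection above can be sketched as follows. The traffic representation and the fixed rate and length are illustrative; in the patent the CMC unit derives both from an acceptable delay.

```python
def inject_sleep_tasks(original_tasks, rate, sleep_len):
    """Insert one sleep task (which parks the processor it is scheduled on
    for `sleep_len` time units) after every `rate` original tasks."""
    out = []
    for i, task in enumerate(original_tasks, 1):
        out.append(task)
        if i % rate == 0:
            out.append(("SLEEP", sleep_len))
    return out

# Six original tasks, one sleep task injected after every two of them
traffic = inject_sleep_tasks([("T", k) for k in range(6)], rate=2, sleep_len=5)
assert traffic.count(("SLEEP", 5)) == 3
assert traffic[2] == ("SLEEP", 5)
```

The scheduler then treats sleep tasks like ordinary work, so whichever processor draws one is switched off for that interval without any change to the scheduling policy.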
20140129865 | SYSTEM CONTROLLER, POWER CONTROL METHOD, AND ELECTRONIC SYSTEM - According to an aspect of an embodiment, a system controller included in a first electronic apparatus connected to a different electronic apparatus via a network includes a monitoring unit and a power supply control unit. The monitoring unit mutually monitors a survival state with an operation system controller included in a second electronic apparatus. The power supply control unit turns off a power supply of a different system controller included in the first electronic apparatus when the monitoring unit starts monitoring the survival state of the operation system controller included in the second electronic apparatus. | 2014-05-08 |
20140129866 | AGGREGATION FRAMEWORK USING LOW-POWER ALERT SENSOR - An aggregation framework system and method that automatically configures, aggregates, disaggregates, manages, and optimizes components of a consolidated system of devices, modules, and sensors. Embodiments of the system and method include a low-power alert sensor, a data aggregator module, and an interpreter module. The low-power alert sensor is a sensor that is continuously on and continuously monitoring its environment. The low-power alert sensor acts as a watchdog and triggers other sensors to awaken them from a power-conservation state when a change or event occurs in the environment. The data aggregator module manages the set of sensors within the system and aggregates sensor data obtained from the sensors. The interpreter module then translates the physical data collected by the sensors into logical information. Together the data aggregator module and the interpreter module present a unified logical view of the capabilities of the sensors under their control. | 2014-05-08 |
20140129867 | SELECTIVE INSERTION OF CLOCK MISMATCH COMPENSATION SYMBOLS IN SIGNAL TRANSMISSIONS - In a system comprising a first device and a second device coupled via an interconnect, a method includes setting a rate of insertion of clock mismatch compensation symbols for a transmit port of the first device to one of a plurality of rates of insertion responsive to the second device having capability to compensate for a clock frequency mismatch. A device includes an interconnect interface comprising a transmit port and a receive port, and a configuration structure. The configuration structure comprises a capability field to store a value indicating whether the device has a capability to compensate for a clock frequency mismatch, and an enable field. The device further includes a packet control module to configure a rate of insertion of clock mismatch compensation symbols by the transmit port into a data stream responsive to a value stored at the enable field. | 2014-05-08 |
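The configurable insertion rate above can be sketched as follows. The "SKP" symbol name and the interval semantics are assumptions: an interval of 0 stands in for the case where the receiving device reports no clock-compensation capability, so insertion is disabled.

```python
def insert_comp_symbols(data_symbols, interval):
    """Transmit-port sketch: insert a clock mismatch compensation symbol
    ("SKP") into the data stream every `interval` data symbols; 0 disables
    insertion (receiver lacks compensation capability)."""
    if interval == 0:
        return list(data_symbols)
    out = []
    for i, sym in enumerate(data_symbols, 1):
        out.append(sym)
        if i % interval == 0:
            out.append("SKP")
    return out

stream = insert_comp_symbols(list("ABCDEF"), interval=3)
assert stream == ["A", "B", "C", "SKP", "D", "E", "F", "SKP"]
# No-capability receiver: the stream is passed through unchanged
assert insert_comp_symbols(list("AB"), interval=0) == ["A", "B"]
```

The receiver can drop or replicate SKP symbols to absorb the frequency difference between the two devices' clocks, so the rate of insertion bounds the mismatch the link can tolerate.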
20140129868 | SELECTABLE PHASE OR CYCLE JITTER DETECTOR - Embodiments of a jitter detection circuit are disclosed that may allow for detecting both cycle and phase jitter in a clock distribution network. The jitter detection circuit may include a phase selector, a data generator, a delay chain, a logic circuit, and clocked storage elements. The phase selector may be operable to select a clock phase to be used for the launch clock, and the data generator may be operable to generate a data signal responsive to the launch clock. The delay chain may generate a plurality of outputs dependent upon the data signal, and the clocked storage elements may be operable to capture the plurality of outputs from the delay chain, which may be compared to expected data by the logic circuit. | 2014-05-08 |