52nd week of 2012 patent application highlights part 67 |
Patent application number | Title | Published |
20120331226 | HIERARCHICAL MEMORY ARBITRATION TECHNIQUE FOR DISPARATE SOURCES - A hierarchical memory request stream arbitration technique merges coherent memory request streams from multiple memory request sources and arbitrates the merged coherent memory request stream with requests from a non-coherent memory request stream. In at least one embodiment of the invention, a method of generating a merged memory request stream from a plurality of memory request streams includes merging coherent memory requests into a first serial memory request stream. The method includes selecting, by a memory controller circuit, a memory request for placement in the merged memory request stream from at least the first serial memory request stream and a merged non-coherent request stream. The merged non-coherent memory request stream is based on an indicator of a previous memory request selected for placement in the merged memory request stream. | 2012-12-27 |
20120331227 | FACILITATING IMPLEMENTATION, AT LEAST IN PART, OF AT LEAST ONE CACHE MANAGEMENT POLICY - An embodiment may include circuitry to facilitate implementation, at least in part, of at least one cache management policy. The at least one policy may be based, at least in part, upon respective priorities of respective classifications of respective network traffic. The at least one policy may concern, at least in part, caching of respective information associated, at least in part, with the respective network traffic belonging to the respective classifications. Many alternatives, variations, and modifications are possible. | 2012-12-27 |
20120331228 | DYNAMIC CONTENT CACHING - A system for caching content including a server supplying at least one of static and non-static content elements, content distinguishing functionality operative to categorize elements of the non-static content as being either dynamic content elements or pseudodynamic content elements, and caching functionality operative to cache the pseudodynamic content elements. The static content elements are content elements which are identified by at least one of the server and metadata associated with the content elements as being expected not to change, the non-static content elements are content elements which are not identified by the server and/or by metadata associated with the content elements as being static content elements, the pseudodynamic content elements are non-static content elements which, based on observation, are not expected to change, and the dynamic content elements are non-static content elements which are not pseudodynamic. | 2012-12-27 |
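The pseudodynamic classification above can be sketched as a cache that promotes a non-static element to cacheable once it has been observed unchanged several times in a row. The following is a minimal Python sketch; the `fetch` callback, URL keys, and the promotion threshold are illustrative assumptions, not details from the application:

```python
from collections import defaultdict

class PseudodynamicCache:
    """Classify a non-static element as pseudodynamic once it has been
    observed unchanged `threshold` consecutive times, then serve it from
    cache. The threshold and fetch callback are illustrative only."""
    def __init__(self, fetch, threshold=3):
        self.fetch = fetch              # fetches the element from the server
        self.threshold = threshold
        self.last = {}                  # url -> last observed content
        self.stable = defaultdict(int)  # url -> consecutive unchanged fetches
        self.cache = {}

    def get(self, url):
        if url in self.cache:           # classified pseudodynamic: cached
            return self.cache[url]
        content = self.fetch(url)
        if self.last.get(url) == content:
            self.stable[url] += 1
            if self.stable[url] >= self.threshold:
                self.cache[url] = content   # observation says: not expected to change
        else:
            self.stable[url] = 0        # changed: treat as dynamic for now
        self.last[url] = content
        return content
```

A static element (identified by the server or metadata) would bypass this logic entirely; only non-static elements go through the observation step.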
20120331229 | LOAD BALANCING BASED UPON DATA USAGE - A method of load balancing can include segmenting data from a plurality of servers into usage patterns determined from accesses to the data. Items of the data can be cached in one or more servers of the plurality of servers according to the usage patterns. Each of the plurality of servers can be designated to cache items of the data of a particular usage pattern. A reference to an item of the data cached in one of the plurality of servers can be updated to specify the server of the plurality of servers within which the item is cached. | 2012-12-27 |
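The placement described above — segment data into usage patterns from access counts, designate one server per pattern, and update each item's reference to name its caching server — can be sketched as follows. The pattern names, thresholds, and server identifiers are invented for illustration:

```python
def build_placement(access_counts, pattern_servers):
    """access_counts: item -> number of observed accesses.
    pattern_servers: usage-pattern name -> server designated to cache it.
    Returns item -> server reference (the 'updated reference')."""
    def pattern(n):
        # Illustrative segmentation of data into usage patterns.
        if n >= 100:
            return "hot"
        if n >= 10:
            return "warm"
        return "cold"
    return {item: pattern_servers[pattern(n)]
            for item, n in access_counts.items()}
```

A client holding the returned reference can then contact the designated cache server directly instead of searching the cluster.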
20120331230 | CONTROL BLOCK LINKAGE FOR DATABASE CONVERTER HANDLING - A system to load a plurality of converter pages of a datastore into a database cache, the plurality of converter pages comprising a plurality of converter inner pages, and a plurality of converter leaf pages, to allocate a control block in the database cache for each of the plurality of converter inner pages, the control block of a converter inner page comprising a pointer to a control block of a parent converter inner page and a pointer to a control block of each child converter page of the converter inner page, and to allocate a control block in the database cache for each of the plurality of converter leaf pages, the control block of a converter leaf page comprising a pointer to a control block of a parent converter inner page. | 2012-12-27 |
20120331231 | METHOD AND APPARATUS FOR SUPPORTING MEMORY USAGE THROTTLING - An apparatus for providing system memory usage throttling within a data processing system having multiple chiplets is disclosed. The apparatus includes a system memory, a memory access collection module, a memory credit accounting module and a memory throttle counter. The memory access collection module receives a first set of signals from a first cache memory within a chiplet and a second set of signals from a second cache memory within the chiplet. The memory credit accounting module tracks the usage of the system memory on a per user virtual partition basis according to the results of cache accesses extracted from the first and second set of signals from the first and second cache memories within the chiplet. The memory throttle counter provides a throttle control signal to prevent any access to the system memory when the system memory usage has exceeded a predetermined value. | 2012-12-27 |
20120331232 | WRITE-THROUGH CACHE OPTIMIZED FOR DEPENDENCE-FREE PARALLEL REGIONS - An apparatus and computer program product for improving performance of a parallel computing system. A first hardware local cache controller associated with a first local cache memory device of a first processor detects an occurrence of a false sharing of a first cache line by a second processor running the program code and allows the false sharing of the first cache line by the second processor. The false sharing of the first cache line occurs upon updating a first portion of the first cache line in the first local cache memory device by the first hardware local cache controller and subsequently updating a second portion of the first cache line in a second local cache memory device by a second hardware local cache controller. | 2012-12-27 |
20120331233 | False Sharing Detection Logic for Performance Monitoring - A mechanism is provided for detecting false sharing misses. Responsive to performing either an eviction or an invalidation of a cache line in a cache memory of the data processing system, a determination is made as to whether there is an entry associated with the cache line in a false sharing detection table. Responsive to the entry associated with the cache line existing in the false sharing detection table, a determination is made as to whether an overlap field associated with the entry is set. Responsive to the overlap field failing to be set, identification is made that a false sharing coherence miss has occurred. A first signal is then sent to a performance monitoring unit indicating the false sharing coherence miss. | 2012-12-27 |
20120331234 | CACHE MEMORY AND CACHE MEMORY CONTROL UNIT - Data transfer between processors is efficiently performed in a multiprocessor including a shared cache memory. Each entry in a tag storage section | 2012-12-27 |
20120331235 | MEMORY MANAGEMENT APPARATUS, MEMORY MANAGEMENT METHOD, CONTROL PROGRAM, AND RECORDING MEDIUM - There is provided a memory management apparatus including a data input/output part for requesting to read data in units of blocks in a first size from a first storage medium and storing the data read from the first storage medium into a second storage medium, a data creating part for creating prefetch data obtained by converting a history of the request to read the data from the first storage medium into data of which read position and size are indicated in units of blocks in a second size, the request being issued by the data input/output part in response to a request from a program to be prefetched, and a prefetching part for requesting the data input/output part to prefetch the data of the program from the first storage medium to the second storage medium based on the prefetch data. | 2012-12-27 |
20120331236 | DATA PROCESSING APPARATUS AND IMAGE FORMING APPARATUS - A data processing apparatus includes an operation unit, a writable and readable volatile register, a writable and readable nonvolatile memory, first and second writing units and a write-back unit. The operation unit performs an arithmetic operation and a logical operation. The writable and readable volatile register stores data used in the operations performed by the operation unit. The writable and readable nonvolatile memory stores the data in parallel with the volatile register. The data stored in the nonvolatile memory is the data stored in the volatile register. The first writing unit writes the data in the volatile register. The second writing unit writes the data in the nonvolatile memory in parallel with the first writing unit every time the data is written in the volatile register. The write-back unit writes back the data stored in the nonvolatile memory to the volatile register when a power supply is turned on. | 2012-12-27 |
20120331237 | Asynchronous Grace-Period Primitives For User-Space Applications - A technique for implementing user-level read-copy update (RCU) with support for asynchronous grace periods. In an example embodiment, a user-level RCU subsystem is established that executes within threads of a user-level multithreaded application. The multithreaded application may comprise one or more reader threads that read RCU-protected data elements in a shared memory. The multithreaded application may further comprise one or more updater threads that perform updates to the RCU-protected data elements in the shared memory and register callbacks to be executed following a grace period in order to free stale data resulting from the updates. The RCU subsystem may implement two or more helper threads (helpers) that are created or selected as needed to track grace periods and execute the callbacks on behalf of the updaters instead of the updaters performing such work themselves. | 2012-12-27 |
20120331238 | Asynchronous Grace-Period Primitives For User-Space Applications - A technique for implementing user-level read-copy update (RCU) with support for asynchronous grace periods. In an example embodiment, a user-level RCU subsystem is established that executes within threads of a user-level multithreaded application. The multithreaded application may comprise one or more reader threads that read RCU-protected data elements in a shared memory. The multithreaded application may further comprise one or more updater threads that perform updates to the RCU-protected data elements in the shared memory and register callbacks to be executed following a grace period in order to free stale data resulting from the updates. The RCU subsystem may implement two or more helper threads (helpers) that are created or selected as needed to track grace periods and execute the callbacks on behalf of the updaters instead of the updaters performing such work themselves. | 2012-12-27 |
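The asynchronous-grace-period flow in the two abstracts above — updaters register callbacks, and helper threads wait out the grace period and execute the callbacks on the updaters' behalf — can be modeled minimally with a reader count and one helper thread per callback. This is a toy model of the API's shape, assuming invented names (`ToyRcu`, `call_rcu` here is a simplified stand-in), not the actual user-level RCU implementation:

```python
import threading

class ToyRcu:
    """Toy model of asynchronous grace periods: updaters register callbacks
    with call_rcu(); a helper thread waits until no readers remain active,
    then runs the callback off the updater's critical path."""
    def __init__(self):
        self.lock = threading.Lock()
        self.readers = 0
        self.no_readers = threading.Condition(self.lock)

    def read_lock(self):
        with self.lock:
            self.readers += 1

    def read_unlock(self):
        with self.lock:
            self.readers -= 1
            if self.readers == 0:
                self.no_readers.notify_all()

    def call_rcu(self, callback):
        """Register a callback; a helper executes it after the grace period."""
        def helper():
            with self.lock:
                while self.readers:      # wait for pre-existing readers
                    self.no_readers.wait()
            callback()                   # safe to free stale data now
        t = threading.Thread(target=helper)
        t.start()
        return t
```

Note that the real subsystem tracks per-reader epochs so only readers that predate the update are waited on; the global reader count here is a deliberate simplification.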
20120331239 | SHARED MEMORY ARCHITECTURE - Disclosed herein is an apparatus which may comprise a plurality of nodes. In one example embodiment, each of the plurality of nodes may include one or more central processing units (CPUs), a random access memory device, and a parallel link input/output port. The random access memory device may include a local memory address space and a global memory address space. The local memory address space may be accessible to the one or more CPUs of the node that comprises the random access memory device. The global memory address space may be accessible to CPUs of all the nodes. The parallel link input/output port may be configured to send data frames to, and receive data frames from, the global memory address space comprised by the random access memory device(s) of the other nodes. | 2012-12-27 |
20120331240 | DATA PROCESSING DEVICE AND DATA PROCESSING ARRANGEMENT - A data processing device is described with a memory and a first and a second data processing component. The first data processing component comprises a control memory comprising, for each memory region of a plurality of memory regions of the memory, an indication whether a data access to the memory region may be carried out by the first data processing component and a data access circuit configured to carry out a data access to a memory region of the plurality of memory regions if a data access to the memory region may be carried out by the first data processing component; and a setting circuit configured to set the indication for a memory region to indicate that a data access to the memory region may not be carried out by the first data processing component in response to the completion of a data access of the first data processing component to the memory region. | 2012-12-27 |
20120331241 | Adaptive Control For Efficient HARQ Memory Usage - There is determined an amount of available memory that is allocated for automatic repeat-request data. Then for each of a plurality of received data transmissions, an n | 2012-12-27 |
20120331242 | CONSISTENT UNMAPPING OF APPLICATION DATA IN PRESENCE OF CONCURRENT, UNQUIESCED WRITERS AND READERS - Free storage blocks previously allocated to a logical block device are released back to an underlying storage system supporting the logical block device in a manner that does not conflict with write operations that may be issued to the free storage blocks at about the same time. According to a first technique, write operations on the same storage blocks to be released are paused until the underlying storage system has completed the releasing operation or, if the write operations are issued earlier than when the underlying storage system actually performs the releasing operation, such storage blocks are not released. According to a second technique, a special file is allocated the free storage blocks, which are then made available for safe releasing. | 2012-12-27 |
20120331243 | Remote Direct Memory Access ('RDMA') In A Parallel Computer - Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes a DMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node. | 2012-12-27 |
20120331244 | CONFIGURABLE CIRCUIT ARRAY - A method and system are provided for configurable computation and data processing. A logical processor includes an array of logic elements. The processor may be a combinatorial circuit that can be applied to modify computational aspects of an array of reconfigurable circuits. A memory stores a plurality of instructions, each instruction including an instruction-fetch data portion and an output data transfer data portion. One or more memory controllers are coupled to the memory and receive instructions and/or output data from the memory. A back buffer is coupled with the memory controller and receives instructions from the memory controller. The back buffer sequentially asserts each received instruction upon one or more memory controllers. The memory controllers transfer data received from the memory to a target, such as an array of reconfigurable logic circuits that are optionally coupled to the memory, the back buffer, and one or more additional memory controllers. | 2012-12-27 |
20120331245 | Virtualizing Storage for WPAR Clients - Systems, methods and media for providing to a plurality of WPARs private access to physical storage connected to a server through a VIOS are disclosed. In one embodiment, a server is logically partitioned to form a working partition comprising a WPAR manager and individual WPARs. Each WPAR is assigned to a different virtual port. The virtual ports are created by using NPIV protocol between the WPAR and VIOS. Thereby, each WPAR has private access to the physical storage connected to the VIOS. | 2012-12-27 |
20120331246 | Virtualizing Storage for WPAR Clients Using Node Port ID Virtualization - Systems, methods and media for providing to a plurality of WPARs private access to physical storage connected to a server through a VIOS are disclosed. In one embodiment, a server is logically partitioned to form a working partition comprising a WPAR manager and individual WPARs. Each WPAR is assigned to a different virtual port. The virtual ports are created by using NPIV protocol between the WPAR and VIOS. Thereby, each WPAR has private access to the physical storage connected to the VIOS. | 2012-12-27 |
20120331247 | INTERFACING WITH A POINT-IN-TIME COPY SERVICE ARCHITECTURE - Provided are a computer program product, system, and method for interfacing with point-in-time copy service architecture to create a point-in-time copy of a volume in a storage used by an application. A point-in-time copy request is processed to perform a point-in-time copy with respect to the volume in the storage, wherein the request indicates at least one exit, wherein the exit indicates when the exit is to be invoked with respect to an operation of the point-in-time copy and indicates a location of an executable object to execute when the exit is invoked. The method communicates with the point-in-time copy service to prepare for the point-in-time copy. For each exit, determining from the exit when to invoke the exit and executing the executable object for the exit to perform operations related to the point-in-time copy. The point-in-time copy service is called to perform the point-in-time copy operation of the volume. | 2012-12-27 |
20120331248 | STORAGE MANAGEMENT SYSTEM AND STORAGE MANAGEMENT METHOD - An embodiment of this invention is a storage management system including a processor and a storage device to manage a storage system having one or more copy functions. The processor locates data designated to determine a backup method. The storage device stores copy function management information on the one or more copy functions of the storage system. The processor refers to the copy function management information to ascertain the unit of copy operation of each of the one or more copy functions. The processor determines a candidate for a copy function of the storage system to be used to back up the designated data depending on the data configuration in a volume holding the designated data and the unit of copy operation of the candidate for the copy function. | 2012-12-27 |
20120331249 | DYNAMIC DATA PLACEMENT FOR DISTRIBUTED STORAGE - A command is received to alter data storage in a cluster, along with parameters for executing the command. Information is obtained relating to one or more volumes in the cluster and information relating to devices in the cluster. A formal description of a placement function is generated that maps one or more object identifiers to a storage device set. Placement function code is generated by compiling the formal description of the placement function to computer-executable code. | 2012-12-27 |
20120331250 | HIGH-PERFORMANCE VIRTUAL MACHINE NETWORKING - A method for conveying a data packet received from a network to a virtual machine instantiated on a computer system coupled to the network, and a medium and system for carrying out the method, are described. In the method, a guest receive pointer queue of a component executing in the virtual machine is inspected in order to identify a location in a guest receive packet data buffer that is available to receive packet data. Data from the data packet received from the network is copied into the guest receive packet data buffer at the identified location. A standard receive interrupt is raised in the virtual machine. Thus, the kernel places the data packet received from the network into a memory space accessible to the virtual machine without any intervention by a virtual machine monitor component of the virtualization software. | 2012-12-27 |
20120331251 | POOL SPARES FOR DATA STORAGE VIRTUALIZATION SUBSYSTEM - A data storage virtualization subsystem (SVS) for providing storage to a host entity is disclosed. The SVS comprises a storage virtualization controller for connecting to the host entity, at least one physical storage device (PSD) pool, and at least one PSD designated to be a pool spare PSD for the at least one PSD pool. The at least one PSD pool comprises at least one enclosure for receiving the PSD, and at least one ID-storing device to store a pool ID for identifying the at least one physical storage device pool. | 2012-12-27 |
20120331252 | PORTABLE STORAGE DEVICE SUPPORTING FILE SEGMENTATION AND MULTIPLE TRANSFER RATES - A host device includes a first file system, and a storage device includes a plurality of memory units and a plurality of controllers. While the host device is operatively coupled to the storage device, the host device creates a second file system corresponding to the storage device and copies host content from the first file system to the second file system. The second file system is segmented into a plurality of segments, each of the plurality of segments being uniquely associated with a particular one of the plurality of controllers. The host device selects a data transfer rate to write the host content from the second file system to the storage device. | 2012-12-27 |
20120331253 | STRIPE-BASED MEMORY OPERATION - The present disclosure includes methods and devices for stripe-based memory operation. One method embodiment includes writing data in a first stripe across a storage volume of a plurality of memory devices. A portion of the first stripe is updated by writing updated data in a portion of a second stripe across the storage volume of the plurality of memory devices. The portion of the first stripe is invalidated. The invalid portion of the first stripe and a remainder of the first stripe are maintained until the first stripe is reclaimed. Other methods and devices are also disclosed. | 2012-12-27 |
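The stripe-based update flow above — write updated data into a portion of a new stripe, invalidate the old portion, but maintain the whole first stripe until it is reclaimed — can be sketched with a toy in-memory model. Device counts and the read path are illustrative assumptions:

```python
class StripedStore:
    """Toy model of stripe-based updates: each stripe spans n devices;
    an update goes to a portion of a new stripe, the old portion is
    marked invalid, and the full old stripe is kept until reclaimed."""
    def __init__(self, n_devices):
        self.n = n_devices
        self.stripes = []       # each stripe: per-device [data, valid] pairs

    def write_stripe(self, chunks):
        assert len(chunks) == self.n
        self.stripes.append([[c, True] for c in chunks])
        return len(self.stripes) - 1

    def update(self, old_stripe, device, data):
        new = [[None, False] for _ in range(self.n)]
        new[device] = [data, True]           # updated portion of a new stripe
        self.stripes.append(new)
        self.stripes[old_stripe][device][1] = False  # invalidate old portion
        return len(self.stripes) - 1

    def read(self, device):
        # Newest valid data for this device position.
        for stripe in reversed(self.stripes):
            d, valid = stripe[device]
            if valid:
                return d
        return None

    def reclaim(self, stripe_id):
        # Only now is the invalid portion (and the remainder) released.
        self.stripes[stripe_id] = [[None, False] for _ in range(self.n)]
```

Deferring reclamation this way is what lets the remainder of the first stripe stay readable without a read-modify-write of the whole stripe on every partial update.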
20120331254 | STORAGE CONTROL SYSTEM AND METHOD - A storage system having a plurality of storage devices including a first type storage device and a second type storage device, a reliability attribute and/or a performance attribute of the first type storage device being different from a reliability attribute and/or a performance attribute of the second type storage device. The storage system also has a control unit managing a plurality of virtual volumes. If necessary, a storage area allocated to a first portion of a virtual volume of the plurality of virtual volumes is changed from a first type storage area of the plurality of first type storage areas to a second type storage area of the plurality of second type storage areas while another first type storage area of the plurality of first type storage areas is allocated to a second portion of the virtual volume. | 2012-12-27 |
20120331255 | SYSTEM AND METHOD FOR ALLOCATING MEMORY RESOURCES - System and method for allocating memory resources are disclosed. The system utilizes a bus system coupled to a plurality of requestors and a plurality of memory systems coupled to the bus system. Each memory system includes a memory component and a memory management module including a value that represents access rights to the memory component. The memory management module is configured to receive an access request from a first requestor of the plurality of requestors and to grant access to the memory component only if the value indicates that the first requestor has access rights to the memory component. The memory management module is configurable to change the value to give the access rights to the memory component to a second requestor of the plurality of requestors. | 2012-12-27 |
20120331256 | Virtualizing Storage for WPAR Clients Using Key Authentication - Systems, methods and media for providing to a plurality of WPARs private access to physical storage connected to a server through a VIOS are disclosed. In one embodiment, a server is logically partitioned to form a working partition comprising a WPAR manager and individual WPARs. Each WPAR is assigned to a different virtual port. The virtual ports are created by using NPIV protocol between the WPAR and VIOS. Thereby, each WPAR has private access to the physical storage connected to the VIOS. | 2012-12-27 |
20120331257 | GEOMETRIC ARRAY DATA STRUCTURE - A method for implementing a geometric array in a computing environment is disclosed. In one embodiment, such a method includes providing an array of slots, where each slot is configured to store a pointer. Each pointer in the array points to a block of elements. Each pointer with the exception of the first pointer in the array points to a block of elements that is twice as large as the block of elements associated with the preceding pointer. Such a structure allows the geometric array to grow by simply adding a pointer to the array that points to a new block of elements that is twice as large as the block of elements associated with the preceding pointer in the array. A corresponding computer program product, as well as a method for accessing data in the geometric array, are also disclosed. | 2012-12-27 |
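The geometric array above has a simple closed-form index: with block sizes 1, 2, 4, 8, …, block k covers indices [2^k − 1, 2^(k+1) − 2], so the block number of index i is the bit length of i + 1, minus one. A Python sketch under those assumptions (the class and method names are invented; element blocks stand in for the pointed-to blocks):

```python
class GeometricArray:
    """Growable array of slots where each slot points to a block of
    elements twice as large as the previous block (sizes 1, 2, 4, ...)."""
    def __init__(self):
        self.blocks = []   # block list (stands in for the array of pointers)
        self.size = 0

    def _locate(self, index):
        # Block k covers indices [2**k - 1, 2**(k+1) - 2].
        block = (index + 1).bit_length() - 1
        offset = index - (2 ** block - 1)
        return block, offset

    def append(self, value):
        block, offset = self._locate(self.size)
        if block == len(self.blocks):
            # Grow by adding one block twice as large as the last.
            self.blocks.append([None] * (2 ** block))
        self.blocks[block][offset] = value
        self.size += 1

    def __getitem__(self, index):
        if not 0 <= index < self.size:
            raise IndexError(index)
        block, offset = self._locate(index)
        return self.blocks[block][offset]
```

Growth never moves existing elements — only a new block is allocated — which is the structure's advantage over a conventional reallocate-and-copy dynamic array.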
20120331258 | STORAGE SYSTEM GROUP - There is a journal area and one or more logical volumes comprising a first logical volume. The journal area is a storage area in which is stored a journal data element, which is a data element that is stored in any storage area of a plurality of storage areas configuring a logical volume, or a data element that is written to the storage area. A controller has a size receiver that receives a write unit size, which is the size of a write data element received from a computer, and a size setting unit that sets the received write unit size in a memory for one or more logical volumes. The size of a journal data element stored in a journal area based on the set write unit size is the write unit size. | 2012-12-27 |
20120331259 | GEOMETRIC ARRAY DATA STRUCTURE - A method for implementing a geometric array in a computing environment is disclosed. In one embodiment, such a method includes providing an array of slots, where each slot is configured to store a pointer. Each pointer in the array points to a block of elements. Each pointer with the exception of the first pointer in the array points to a block of elements that is twice as large as the block of elements associated with the preceding pointer. Such a structure allows the geometric array to grow by simply adding a pointer to the array that points to a new block of elements that is twice as large as the block of elements associated with the preceding pointer in the array. A corresponding computer program product, as well as a method for accessing data in the geometric array, are also disclosed. | 2012-12-27 |
20120331260 | IMPLEMENTING DMA MIGRATION OF LARGE SYSTEM MEMORY AREAS - A method, system and computer program product are provided for implementing memory migration of large system memory pages in a computer system. A large page to be migrated from a current location to a target location is converted into a plurality of smaller subpages for a processor or system page table. The migrated page is divided into first, second and third segments, each segment composed of the smaller subpages and each respective segment changes as each individual subpage is migrated. CPU and I/O accesses to respective subpages of the first segment are directed to corresponding subpages of the target page or new page. I/O accesses to respective subpages of the second segment use a dual write mode targeting corresponding subpages of both the current page and the target page. CPU and I/O accesses to the subpages of the third segment access the corresponding subpages of the current page. | 2012-12-27 |
20120331261 | Point-in-Time Copying of Virtual Storage - A method includes making in a real storage, a copy of a first page content stored in a first page data structure by creating a second page content in a second data structure, the second page content pointing to actual data pointed to by the first page content, storing the second page content in the second data structure, marking the first page content in the first page data structure with a page protection bit, wherein the page protection bit prevents a modification of the virtual page, in response to an attempt to modify the virtual page, copying the virtual page in the event the first page content in the first page data structure is marked with the page protection bit, storing the copied virtual page in a second virtual storage, and altering the second page content in the second data structure to point to the stored virtual page. | 2012-12-27 |
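The copy-on-write scheme above — the snapshot's page content initially points at the same data as the source, the source page is marked with a protection bit, and a write first copies the old page and repoints the snapshot at the copy — can be modeled as follows. Class and identifier names are invented for illustration:

```python
class CowStorage:
    """Toy copy-on-write snapshot: snapshot() shares page contents and
    marks source pages protected; write() to a protected page preserves
    the old data and repoints snapshots at the preserved copy."""
    def __init__(self, pages):
        self.pages = dict(pages)       # page id -> data
        self.protected = set()         # pages marked with the protection bit
        self.snapshots = []            # each: page id -> id of backing page

    def snapshot(self):
        snap = {p: p for p in self.pages}   # point at the live pages for now
        self.protected.update(self.pages)
        self.snapshots.append(snap)
        return len(self.snapshots) - 1

    def write(self, page, data):
        if page in self.protected:
            copy_id = f"{page}@copy"
            self.pages[copy_id] = self.pages[page]   # preserve old data
            for snap in self.snapshots:
                if snap.get(page) == page:
                    snap[page] = copy_id             # repoint the snapshot
            self.protected.discard(page)
        self.pages[page] = data

    def read_snapshot(self, snap_id, page):
        return self.pages[self.snapshots[snap_id][page]]
```

Pages that are never written after the snapshot are never copied, which is the point of making the copy lazily at modification time.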
20120331262 | PERFORMING MEMORY ACCESSES WHILE OMITTING UNNECESSARY ADDRESS TRANSLATIONS - In computing environments that use virtual addresses (or other indirectly usable addresses) to access memory, the virtual addresses are translated to absolute addresses (or other directly usable addresses) prior to accessing memory. To facilitate memory access, however, address translation is omitted in certain circumstances, including when the data to be accessed is within the same unit of memory as the instruction accessing the data. In this case, the absolute address of the data is derived from the absolute address of the instruction, thus avoiding address translation for the data. Further, in some circumstances, access checking for the data is also omitted. | 2012-12-27 |
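The derivation step above — when instruction and data share a unit of memory, the data's absolute address can be formed from the instruction's absolute address rather than translated — reduces to combining the instruction's frame bits with the data's page offset. A sketch assuming a 4 KiB unit and an external `translate` fallback (both illustrative):

```python
PAGE = 4096  # assumed size of the shared unit of memory

def data_abs_address(inst_abs, inst_virt, data_virt, translate):
    """If the data lies in the same page as the instruction, derive its
    absolute address from the instruction's absolute address, omitting
    translation; otherwise fall back to the normal translator."""
    if data_virt // PAGE == inst_virt // PAGE:
        # Same unit: keep the instruction's frame, swap in the data's offset.
        return (inst_abs & ~(PAGE - 1)) | (data_virt % PAGE)
    return translate(data_virt)
```

In the same-page case no page-table or TLB access is needed at all, which is where the saving comes from.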
20120331263 | METHOD FOR MANAGING A MEMORY APPARATUS - A method for managing a memory apparatus including at least one non-volatile (NV) memory element includes: building at least one local page address linking table containing a page address linking relationship between a plurality of physical page addresses and at least a logical page address, wherein the local page address linking table includes a first local page address linking table containing a first page address linking relationship of a plurality of first physical pages, and a second local page address linking table containing a second page address linking relationship of a plurality of second physical pages that are different from the first physical pages; building a global page address linking table according to the local page address linking table; and accessing the memory apparatus according to the global page address linking table. | 2012-12-27 |
20120331264 | Point-in-Time Copying of Virtual Storage and Point-in-Time Dumping - A method includes copying a first virtual storage by making a point-in-time copy of a first page content stored in a first structure by creating a second page content in a second structure, the second page content pointing to actual data pointed to by the first page content, storing the second page content in the second data structure, marking the first page content in the first structure with a bit, copying the virtual page in the event the first page content in the first structure is marked with the bit, storing the copied virtual page in a second virtual storage, altering the second page content to point to the stored virtual page, and using the second virtual storage to perform the core dump process, wherein the second virtual storage is referenced via the second page content stored in the real storage. | 2012-12-27 |
20120331265 | Apparatus and Method for Accelerated Hardware Page Table Walk - A method of walking page tables includes comparing a virtual address to a plurality of virtual address bit segments to identify a match. Each virtual address bit segment is associated with a page table level that has a page table base address. A designated page table base address is received in response to the match. The page table walk starts at the designated page table, thereby skipping over earlier page tables. | 2012-12-27 |
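The accelerated walk above — match a virtual-address bit segment against cached entries, each associated with a page-table level and base address, and start the walk at that level — resembles a page-walk cache. A toy two-level sketch; the 10/10/12 bit split, caching keyed on the top-level index alone, and the returned lookup count are all illustrative assumptions:

```python
PAGE_BITS = 12   # assumed page-offset width
IDX_BITS = 10    # assumed index width per level

def walk(tables, root, va, walk_cache):
    """Walk a two-level page table for va. walk_cache maps the high-order
    VA bit segment to a second-level table base, letting a hit skip the
    first-level lookup. Returns (physical address, tables visited)."""
    l1 = (va >> (PAGE_BITS + IDX_BITS)) & ((1 << IDX_BITS) - 1)
    l2 = (va >> PAGE_BITS) & ((1 << IDX_BITS) - 1)
    if l1 in walk_cache:                       # bit segment matched
        l2_base, visited = walk_cache[l1], 1   # start at the lower level
    else:
        l2_base, visited = tables[root][l1], 2
        walk_cache[l1] = l2_base               # remember for next time
    frame = tables[l2_base][l2]
    return (frame << PAGE_BITS) | (va & ((1 << PAGE_BITS) - 1)), visited
```

A real implementation would cache segments for several levels and tag entries per address space; a single-level, single-space cache keeps the sketch short.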
20120331266 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND MEDIUM STORING PROGRAM - Disclosed is an information processing device provided with: a plurality of processing units each having a TLB (Translation Lookaside Buffer); a means for acquiring a designation of a processing unit, from among the plurality of processing units, where TLB information is to be collected, and for acquiring a designation of the timing at which the TLB information is to be collected; and a means for collecting the TLB information from the designated processing unit at the designated timing. | 2012-12-27 |
20120331267 | METHOD FOR MANAGING A MEMORY APPARATUS - A method for managing a memory apparatus including at least one non-volatile (NV) memory element includes: receiving a first access command from a host; analyzing the first access command to obtain a first host address; linking the first host address to a physical block; receiving a second access command from the host; and analyzing the second access command to obtain a second host address. For example, the method may further include: linking the second host address to the physical block, wherein a difference value of the first host address and the second host address is greater than a number of pages of the physical block. In another example, the method may further include: linking the first host address to at least a page of the physical block; and linking the second host address to at least a page of another physical block. | 2012-12-27 |
20120331268 | RECONFIGURABLE PROCESSOR ARCHITECTURE - A reconfigurable data processor architecture. The processor architecture includes: a first plurality of data processing elements, each having a respective synchronization unit, a data link structure adapted for dynamically interconnecting a number of the data processing elements, at least one configuration register, and at least one control unit in operative connection with the configuration register for controlling the contents thereof, wherein, based on the contents, the first plurality of data processing elements is adapted for temporarily constituting at runtime at least one group of one or more of said data processing elements from said first plurality of data processing elements dynamically via the data link structure. The synchronization units are adapted for synchronizing data processing by individual data processing elements within the group. The first plurality of data processing elements may be reconfigurably grouped and thus adapted to various data processing tasks at runtime. This increases data processing efficiency. | 2012-12-27 |
20120331269 | Geodesic Massively Parallel Computer. - Communication latency, now a dominant factor in computer performance, makes physical size, density, and interconnect proximity crucial system design considerations. The present invention addresses consequential supercomputing hardware challenges: spatial packing, communication topology, and thermal management. A massively-parallel computer with dense, spherically framed, geodesic processor arrangement is described. As a mimic of the problem domain, it is particularly apt for climate modelling. However, the invention's methods scale well, are largely independent of processor technology, and apply to a wide range of computing tasks. The computer's interconnect features globally short, highly regular, and tightly matched distances. Communication modes supported include neighbour-to-neighbour messaging on a spherical-shell lattice, and a radial network for system-synchronous clocking, broadcast, packet-switched networking, and IO. A near-isothermal cooling system, physically divorcing heat source and sink, enables extraordinarily compact geodes with lower temperature operation, higher speed, and lower power consumption. | 2012-12-27 |
20120331270 | Compressing Result Data For A Compute Node In A Parallel Computer - Compressing result data for a compute node in a parallel computer, the parallel computer including a collection of compute nodes organized as a tree, including: initiating a collective gather operation by a logical root of the collection of compute nodes, including adding result data of the logical root to a gather buffer; for each compute node in the collection of compute nodes, determining whether result data of the compute node is already written in the gather buffer; and if the result data of the compute node is already written in the gather buffer, incrementing a counter assigned to that result data already written in the gather buffer; and if the result data of the compute node is not already written in the gather buffer, writing the result data of the compute node as new result data in the gather buffer, incrementing a counter assigned to that new result data, and writing in the gather buffer a node ID. | 2012-12-27 |
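The counter-based compression of the gather buffer can be sketched as below; the dict representation and function name are illustrative assumptions, not the parallel computer's actual buffer layout:

```python
def gather_compress(results):
    """Compress gathered results: store each distinct payload once,
    with a counter and the contributing node IDs, per the abstract."""
    buffer = {}   # payload -> [counter, node IDs]
    for node_id, data in results:
        if data in buffer:
            buffer[data][0] += 1          # already written: bump the counter
            buffer[data][1].append(node_id)
        else:
            buffer[data] = [1, [node_id]] # new result data plus node ID
    return buffer

buf = gather_compress([(0, "ok"), (1, "ok"), (2, "fail"), (3, "ok")])
```

When many nodes produce identical results, the buffer holds one copy plus a count instead of one copy per node.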
20120331271 | COMPRESSED INSTRUCTION FORMAT - A technique for decoding an instruction in a variable-length instruction set. In one embodiment, an instruction encoding is described in which legacy, present, and future instruction set extensions are supported and increased functionality is provided without expanding the code size and, in some cases, while reducing the code size. | 2012-12-27 |
20120331272 | SIMD SIGN OPERATION - Method, apparatus, and program means for nonlinear filtering and deblocking applications utilizing SIMD sign and absolute value operations. The method of one embodiment comprises receiving first data for a first block and second data for a second block. The first data and said second data are comprised of a plurality of rows and columns of pixel data. A block boundary between the first block and the second block is characterized. A correction factor for a deblocking algorithm is calculated with a first instruction for a sign operation that multiplies and with a second instruction for an absolute value operation. Data for pixels located along said block boundary between the first and second block are corrected. | 2012-12-27 |
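The sign operation described above (multiplying each element by the sign of a corresponding element) behaves like the scalar sketch below; this is an illustration of the operation's semantics, not the SIMD hardware implementation:

```python
def psign(a, b):
    """Elementwise sign operation: negate a[i] where b[i] < 0,
    zero it where b[i] == 0, pass it through where b[i] > 0."""
    return [(-x if y < 0 else (0 if y == 0 else x)) for x, y in zip(a, b)]
```

In the deblocking use case, such an operation applies a correction factor with the proper sign along the block boundary in a single instruction per vector.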
20120331273 | Reduced Instruction Set - A method of reducing a set of instructions for execution on a processor, the method comprising: extracting information from a first instruction of the set of instructions; identifying unencoded space in one or more further instructions of the set of instructions; replacing the unencoded space of the one or more further instructions with the extracted information of the first instruction so as to form one or more amalgamated instructions; and removing the first instruction from the set of instructions. | 2012-12-27 |
20120331274 | Instruction Execution - A method of executing an instruction set having a first instruction and a second instruction, includes: reading the first instruction; determining whether the first instruction is integral with the second instruction; reading the second instruction; if the first instruction is integral with the second instruction, interpreting a first operator field of the second instruction to represent a first operator; and if the first instruction is not integral with the second instruction, interpreting the first operator field of the second instruction to represent a second operator, wherein the first operator is different to the second operator. | 2012-12-27 |
20120331275 | SYSTEM AND METHOD FOR POWER OPTIMIZATION - A technique for reducing the power consumption required to execute processing operations. A processing complex, such as a CPU or a GPU, includes a first set of cores comprising one or more fast cores and second set of cores comprising one or more slow cores. A processing mode of the processing complex can switch between a first mode of operation and a second mode of operation based on one or more of the workload characteristics, performance characteristics of the first and second sets of cores, power characteristics of the first and second sets of cores, and operating conditions of the processing complex. A controller causes the processing operations to be executed by either the first set of cores or the second set of cores to achieve the lowest total power consumption. | 2012-12-27 |
20120331276 | Instruction Execution - A method of executing an instruction set to select a set of registers, includes reading a first instruction of the instruction set; interpreting a first operand of the first instruction to represent a first register S to be selected; interpreting a second operand of the first instruction to represent a number N of registers to be selected; and selecting N consecutive registers starting at the first register S to form the set of registers. | 2012-12-27 |
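The two-operand decode described above reduces to a short sketch; the register-file size and wrap-around behavior are assumptions for illustration:

```python
def select_registers(s, n, num_regs=16):
    """Decode an instruction whose first operand is the starting register S
    and whose second operand is the count N of consecutive registers."""
    return [(s + i) % num_regs for i in range(n)]
```

For example, `select_registers(3, 2)` selects registers 3 and 4 as the register set.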
20120331277 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND NON-TRANSITORY INFORMATION STORAGE MEDIUM - An acquisition unit acquires a command that is executable by a processor of an other type being a processor of a different type from a processor of a processing execution subject apparatus. An identification unit identifies processing that is executable by the processor of the processing execution subject apparatus which is associated with the command acquired by the acquisition unit. An execution control unit controls execution of the processing performed by the processor of the processing execution subject apparatus based on a value of a parameter which is set in a specific command for the processor of the other type, the value of the parameter which is set in the specific command not affecting execution of processing performed by the processor of the other type. | 2012-12-27 |
20120331278 | BRANCH REMOVAL BY DATA SHUFFLING - A system and method for automatically optimizing parallel execution of multiple work units in a processor by reducing a number of branch instructions. A computing system includes a first processor core with a general-purpose micro-architecture and a second processor core with a same instruction multiple data (SIMD) micro-architecture. A compiler detects and evaluates branches within function calls with one or more records of data used to determine one or more outcomes. Multiple compute sub-kernels are generated, each comprising code from the function corresponding to a unique outcome of the branch. Multiple work units are produced by assigning one or more records of data corresponding to a given outcome of the branch to one of the multiple compute sub-kernels associated with the given outcome. The branch is removed. An operating system scheduler schedules each of the one or more compute sub-kernels to the first processor core or to the second processor core. | 2012-12-27 |
20120331279 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING DEVICE STARTUP METHOD, AND COMPUTER READABLE RECORDING MEDIUM - An information processing device comprises: a connection unit connected to a predetermined storage part storing startup mode determination information, in which a startup mode corresponding to a specified hardware and/or software configuration is configured, and a plurality of types of suspend data, each of which corresponds to a respective startup mode; a startup mode determination part for reading the startup mode determination information and determining the startup mode when the device is powered on; a suspend data obtaining part for selecting the suspend data corresponding to the startup mode determined by the startup mode determination part and obtaining the selected suspend data from the storage part; a starting up part for performing a startup process using the suspend data obtained by the suspend data obtaining part; and a startup mode updating part for updating the startup mode configured in the startup mode determination information after completion of the startup process. | 2012-12-27 |
20120331280 | ELECTRONIC DEVICE AND METHOD FOR BURNING FIRMWARE TO EMBEDDED DEVICE - In a method of burning a firmware to an embedded device, a booting file is firstly created and saved in the firmware. The booting file includes a boot loader, a first kernel, a second kernel, a first initrd, a second initrd of a firmware, a rootfs, and an application program. The method burns the boot loader, the first kernel, the second kernel, the first initrd, and the second initrd in a flash memory of the embedded device. When the rootfs and the application program are recorded in a storage system of the embedded device, the method downloads the rootfs and the application program from a storage system of the embedded device, and burns the rootfs and the application program to a register of the flash memory. | 2012-12-27 |
20120331281 | METHOD AND SYSTEM FOR POWER MANAGEMENT FOR A HANDHELD MOBILE ELECTRONIC DEVICE - Methods and systems for trusted boot using an original design manufacturer secure partition using execute-in-place non-volatile memory (XIP NVM) can include forming a secure partition within the XIP NVM and loading an initial program load within the secure partition wherein the initial program load comprises computer instructions which when executed by a processor causes the processor to perform operations comprising a trusted boot. Other embodiments are disclosed. | 2012-12-27 |
20120331282 | APPARATUS AND METHODS FOR PEAK POWER MANAGEMENT IN MEMORY SYSTEMS - Disclosed are apparatus and techniques for managing power in a memory system having a controller and nonvolatile memory array. In one embodiment, prior to execution of each command with respect to the memory array, a request for execution of such command is received with respect to the memory array. In response to receipt of each request for each command, execution of such command is allowed or withheld with respect to the memory array based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system. | 2012-12-27 |
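The allow-or-withhold decision described above can be sketched as a simple budget check; the class, milliwatt units, and cost estimates are illustrative assumptions, not the controller's actual power model:

```python
class PowerArbiter:
    """Sketch of peak-power management: admit a command only if its estimated
    power plus that of in-flight commands stays within the budget."""
    def __init__(self, budget_mw):
        self.budget_mw = budget_mw
        self.in_flight_mw = 0

    def request(self, cost_mw):
        if self.in_flight_mw + cost_mw > self.budget_mw:
            return False          # withhold: would exceed the power specification
        self.in_flight_mw += cost_mw
        return True               # allow execution against the memory array

    def complete(self, cost_mw):
        self.in_flight_mw -= cost_mw

arb = PowerArbiter(budget_mw=100)
```

A withheld command would be retried once earlier commands complete and free budget.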
20120331283 | USER-CONTROLLED DATA ENCRYPTION WITH OBFUSCATED POLICY - An obfuscated policy data encryption system and method for re-encrypting data to maintain the confidentiality and integrity of data about a user when the data is stored in a public cloud computing environment. The system and method allow a user to specify in a data-sharing policy who can obtain the data and how much of the data is available to them. This policy is obfuscated such that it is unintelligible to the cloud operator and others processing and storing the data. In some embodiments, a patient specifies with whom his health care data should be shared, and the encrypted health care data is stored in the cloud in an electronic medical records system. The obfuscated policy allows the electronic medical records system to dispense the health care data of the patient to those requesting the data without disclosing the details of the policy itself. | 2012-12-27 |
20120331284 | Media Agnostic, Distributed, and Defendable Data Retention - A data protector is described. In an implementation, the data protector promotes and enforces a data retention policy of a data consumer. In an implementation, the data protector limits access to sensitive data to the data consumers. A key manager provides a time-limited encryption key to the data protector. Responsive to collection of the time-limited encryption key from the key manager and sensitive data from a data provider, the data protector encrypts the sensitive data with the time-limited encryption key effective to produce encrypted sensitive data. In some embodiments, the data protector provides a data consumer with access to the encrypted sensitive data and the key manager provides the data consumer with access to the time-limited encryption key to decrypt the encrypted sensitive data. The key manager deletes the time-limited encryption key in compliance with the data retention policy of the data consumer. | 2012-12-27 |
20120331285 | PRIVACY-PROTECTING INTEGRITY ATTESTATION OF A COMPUTING PLATFORM - Systems, apparatus and methods for privacy-protecting integrity attestation of a computing platform. An example method for privacy-protecting integrity attestation of a computing platform (P) having a trusted platform module (TPM) comprises the following steps. First, the computing platform (P) receives configuration values (PCR1 . . . PCRn). Then, by means of the trusted platform module (TPM), a configuration value (PCRp) is determined which depends on the configuration of the computing platform (P). In a further step the configuration value (PCRp) is signed by means of the trusted platform module. Finally, in the event that the configuration value (PCRp) is one of the received configuration values (PCR1 . . . PCRn), the computing platform (P) proves to a verifier (V) that it knows the signature (sign(PCRp)) on one of the received configuration values (PCR1 . . . PCRn). | 2012-12-27 |
20120331286 | APPARATUS AND METHOD FOR PROVIDING SERVICE TO HETEROGENEOUS SERVICE TERMINALS - An apparatus and method for providing a service to heterogeneous service terminals without modifying a security framework are provided, in which a gateway that controls a first service terminal transmits a right delegation request to a server in order to provide the service to a second service terminal as well, and upon receipt of a service right verification request from the second service terminal after receiving a right delegation certificate from the server, the gateway transmits a service right verification response including the right delegation certificate to the second service terminal. | 2012-12-27 |
20120331287 | Provisioning a Shared Secret to a Portable Electronic Device and to a Service Entity - Systems and methods are provided for computing a secret shared with a portable electronic device and service entity. The service entity has a public key G and a private key g. A message comprising the public key G is broadcast to the portable electronic device. A public key B of the portable electronic device is obtained from a manufacturing server and used together with the private key g to compute the shared secret. The portable electronic device receives the broadcast message and computes the shared secret as a function of the public key G and the portable electronic device's private key b. The shared secret can be used to establish a trusted relationship between the portable electronic device and the service entity, to activate a service on the portable electronic device, and to generate certificates. | 2012-12-27 |
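The symmetric computation of the shared secret from (G, b) on the device and (B, g) on the service entity resembles Diffie-Hellman key agreement; the toy modulus and generator below are illustrative assumptions (deliberately tiny and insecure), since the abstract does not name the actual scheme or parameters:

```python
# Toy Diffie-Hellman-style sketch of the shared-secret computation.
P, GEN = 0xFFFFFFFB, 5           # tiny demo prime modulus -- NOT secure

def keypair(priv):
    """Return (private key, public key = GEN^priv mod P)."""
    return priv, pow(GEN, priv, P)

g_priv, g_pub = keypair(123457)  # service entity: private g, public G
b_priv, b_pub = keypair(987653)  # device:         private b, public B

shared_service = pow(b_pub, g_priv, P)  # computed from broadcast B and private g
shared_device = pow(g_pub, b_priv, P)   # computed from broadcast G and private b
```

Both sides arrive at the same value without ever transmitting it, which can then seed trust establishment, service activation, or certificate generation as described.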
20120331288 | CUSTOMIZABLE PUBLIC KEY INFRASTRUCTURE AND DEVELOPMENT TOOL FOR SAME - A public key infrastructure comprises a client side to request and utilize certificates in communication across a network and a server side to administer issuance and maintenance of said certificates. The server side has a portal to receive requests for a certificate from a client. A first policy engine to processes such requests in accordance with a set of predefined protocols. A certification authority is also provided to generate certificates upon receipt of a request from the portal. The CA has a second policy engine to implement a set of predefined policies in the generation of a certificate. Each of the policy engines includes at least one policy configured as a software component e.g. a Java bean, to perform the discreet functions associated with the policy and generate notification in response to a change in state upon completion of the policy. | 2012-12-27 |
20120331289 | METHOD AND SYSTEM FOR ENCRYPTION OF MESSAGES IN LAND MOBILE RADIO SYSTEMS - A method and system for authentication of sites in a land mobile radio (LMR) system and encryption of messages exchanged by the sites. In some embodiments, the method includes transmitting a certificate created by a trusted authority by applying a function to a first site public key using the trusted authority's private key to generate a reduced representation, which is encrypted with the trusted authority's private key. Other sites may receive the certificate, decrypt it using the trusted authority's public key, and authenticate the first site. The method may further include generating a session key, encrypting it with the public key of the first site, and transmitting the encrypted session key to the first site. The first site decrypts the encrypted session key with the first site's private key, and transmits a message encrypted with the shared session key to other sites for decryption using the session key. | 2012-12-27 |
20120331290 | Method and Apparatus for Establishing Trusted Communication With External Real-Time Clock - Embodiments of the present invention provide systems and methods to enable secure communication between a host processor and external real time counter (RTC) logic. In an embodiment, the host processor generates a message including a command to an external device containing the RTC. The external device verifies a Message Authentication Code (MAC) included in the message and responds to the command. Embodiments of the present invention advantageously provide a dedicated power domain for the external RTC logic while guarding against third party attacks on the RTC logic and the communication between the RTC logic and the host processor. | 2012-12-27 |
20120331291 | MULTIMEDIA PROCESSING APPARATUS - According to one embodiment, a multimedia processing apparatus includes one or more first module, a second module, and a third module. The first module is configured to realize a function involved with a multimedia processing. The second module is configured to manage the first module. The third module is configured to control the first module or to perform a state transition of the first module through the second module. One of two modules out of the first to third modules holds a certificate that provides its personal identification. When a first processing is executed between the two modules, the other one of the two modules authenticates the one module by using the certificate held by the one module, and then, the two modules start the first processing. | 2012-12-27 |
20120331292 | ELECTRONIC ACCESS CLIENT DISTRIBUTION APPARATUS AND METHODS - Apparatus and methods for distributing access control clients. In one exemplary embodiment, a network infrastructure is disclosed that enables delivery of electronic subscriber identity modules (eSIMs) to secure elements (e.g., electronic Universal Integrated Circuit Cards (eUICCs), etc.) The network architecture includes one or more of: (i) eSIM appliances, (ii) secure eSIM storages, (iii) eSIM managers, (iv) eUICC appliances, (v) eUICC managers, (vi) service provider consoles, (vii) account managers, (viii) Mobile Network Operator (MNO) systems, (ix) eUICCs that are local to one or more devices, and (x) depots. Moreover, each depot may include: (xi) eSIM inventory managers, (xii) system directory services, (xiii) communications managers, and/or (xiv) pending eSIM storages. Functions of the disclosed infrastructure can be flexibly partitioned and/or adapted such that individual parties can host portions of the infrastructure. Exemplary embodiments of the present invention can provide redundancy, thus ensuring maximal uptime for the overall network (or the portion thereof). | 2012-12-27 |
20120331293 | METHOD AND SYSTEM FOR SECURE OVER-THE-TOP LIVE VIDEO DELIVERY - A method is provided for managing key rotation (use of series of keys) and secure key distribution in over-the-top content delivery. The method provided supports supplying a first content encryption key to a content packaging engine for encryption of a first portion of a video stream. Once the first content encryption key has expired, a second content encryption key is provided to the content packaging engine for encryption of a second portion of a video stream. The method further provides for notification of client devices of imminent key changes, as well as support for secure retrieval of new keys by client devices. A system is also specified for implementing a client and server infrastructure in accordance with the provisions of the method. | 2012-12-27 |
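Key rotation across stream portions can be sketched with a windowed key derivation; the SHA-256 derivation, master secret, and rotation period are assumptions for illustration, not the patented key-distribution protocol:

```python
import hashlib

def segment_key(master_secret, segment_index, rotation_period=10):
    """Derive the content encryption key for a stream segment; segments in
    the same rotation window share a key, so keys change periodically."""
    window = segment_index // rotation_period
    return hashlib.sha256(master_secret + window.to_bytes(8, "big")).digest()

k0 = segment_key(b"master", 3)
k1 = segment_key(b"master", 9)    # same window as segment 3
k2 = segment_key(b"master", 10)   # next window: the first key has expired
```

A client notified of an imminent key change would fetch the next window's key before the current one expires.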
20120331294 | METHOD FOR SECURE REMOTE BACKUP - The present invention is directed to an architecture and mechanism for securely backing up files and directories on a local machine onto untrusted servers over an insecure network. | 2012-12-27 |
20120331295 | METHOD FOR KEY GENERATION, MEMBER AUTHENTICATION, AND COMMUNICATION SECURITY IN DYNAMIC GROUP - The present invention provides a method for keys generation, member authentication and communication security in a dynamic group, which comprises steps: assigning each member an identification vector containing common group identification vector elements and an individual identification vector element, and generating an authentication vector and an access control vector for each member according to the identification vector; using the identification vector elements to generate public key elements and establish an authentication public key and an access control public key; and using a polynomial and the identification vector to generate a private key. The present invention uses these public keys and private keys, which are generated from the identification vectors, to implement serverless member authentication and data access control, whereby is protected privacy of members and promoted security of communication. | 2012-12-27 |
20120331296 | Method and Apparatus for Communicating between Low Message Rate Wireless Devices and Users via Monitoring, Control and Information Systems - The present invention relates to a method and apparatus for communicating between remote devices using a low message rate wireless connection via monitoring, control and information systems. The network described in this invention is capable of supporting billions of such devices in an efficient and cost effective manner. The network uses a very low signaling rate and centrally controlled architecture in order to achieve this efficiency. The network can easily support numerous applications each controlling large numbers of devices. As the complexity of protocol used in the network is very much reduced in comparison to existing hierarchical mobile wireless networks, it is possible to produce devices that use very little energy, allowing their use in many new and novel applications. | 2012-12-27 |
20120331297 | METHOD FOR RECEIVING/SENDING MULTIMEDIA MESSAGES - A multimedia messaging system for receiving/sending multimedia messages, includes: a wireless LAN; and a MMS gateway. The MMS gateway performs: receiving/sending the multimedia message to/from a MMS user device via the wireless LAN; and encrypting the multimedia message. The encryption is performed by: issuing a certificate to the MMS user device; sending a session ID and a master key encrypted by the MMS gateway's private key to the MMS user device in response to a request of the MMS user device having the certificate; generating a shared secret key using an algorithm combining the master key with the MMS user device's phone number and the session ID; and encrypting the multimedia message using the shared secret key. | 2012-12-27 |
20120331298 | SECURITY AUTHENTICATION METHOD, APPARATUS, AND SYSTEM - Embodiments of the present invention provide a security authentication method, apparatus, and system, where the method includes: verifying a feature identifier for identifying terminal equipment, where the terminal equipment is machine-to-machine equipment; and obtaining a key corresponding to the feature identifier, so as to perform secure communication with the terminal equipment according to the key. In the embodiments of the present invention, after terminal equipment, a mobility management entity, and a home subscriber system successfully perform authentication and key agreement, it is verified whether a feature identifier of the terminal is legal, and when the feature identifier of the terminal is a legal identifier, a key is obtained according to the feature identifier, so that the mobility management entity and the terminal equipment perform secure communication according to the key, thereby implementing secure communication between M2M equipment and a network side. | 2012-12-27 |
20120331299 | COMMUNICATIONS APPARATUS, COMMUNICATIONS SYSTEM, AND METHOD OF SETTING CERTIFICATE - An apparatus in a system which includes at least a high-level apparatus and a plurality of low-level apparatuses, said apparatus being one of the low-level apparatuses. The apparatus includes a storage unit configured to store an individual certificate set and a common certificate set and a communication unit configured to transmit own authentication information to the high level apparatus to allow the high level apparatus to perform decryption to authenticate the validity of the apparatus. | 2012-12-27 |
20120331300 | Span Out Load Balancing Model - This document describes techniques for transporting at least a portion of the data for a remote presentation session via datagrams. In particular, a span-out model is described whereby a remote presentation session can be associated with multiple channels and each channel can be routed through a different gateway computer system. As such, a connectionless oriented channel for a client may be routed through a first gateway computer system and a connection oriented channel for the client may be routed through a second gateway computer system. In addition to the foregoing, other techniques are described in the claims, the attached drawings, and the description. | 2012-12-27 |
20120331301 | METHOD AND SYSTEM FOR USING A SMART PHONE FOR ELECTRICAL VEHICLE CHARGING - Systems and methods are provided to allow a smart phone or any terminal to reserve and activate an electric vehicle charger using a web site or server computer system. An access control system is provided that includes a server and an access device. The access device includes an electrical vehicle charger. A reservation request is accepted from a first terminal using the server. A reservation certificate is provided to a portable second terminal in response to the request using the server. The reservation certificate is accepted from the portable second terminal using the access device. The reservation certificate is determined to be authentic using the access device. The electric vehicle charger is activated in response to accepting an authentic reservation certificate using the access device. | 2012-12-27 |
20120331302 | METHOD FOR AUTHENTICATING A PORTABLE DATA CARRIER - A method for authenticating a portable data carrier ( | 2012-12-27 |
20120331303 | METHOD AND SYSTEM FOR PREVENTING EXECUTION OF MALWARE - A method and system for preventing execution of malware in a computing device. The method includes loading code into a non-executable memory of the computing device and validating an authentication signature associated with the code. Subsequently, the code is decrypted and finally, the decrypted code is executed in an executable memory upon a determination that the authentication signature is valid. | 2012-12-27 |
20120331304 | KEY BASED SECURE OPERATING SYSTEM WITH SECURE DONGLE AND METHOD, AND CRYPTOGRAPHIC METHOD - A security interface system creates plausible deniability, and consists of a security interface device having a port for a releasable connection to a PC and to a memory key containing an encrypted operating system, the interface device containing logic to decrypt the memory key and a plaintext bootloader, and a further port for a memory card containing a key. The key is entirely encrypted and appears as random data when inspected. The interface device may have a port(s) for a keyboard and mouse. An encryption and decryption method is described, for decrypting a ciphertext into one of two plaintexts by choice of a key, the choice of which plaintext depending on whether the secret is to be revealed or remain confidential. | 2012-12-27 |
20120331305 | ENCRYPTION PROCESSING APPARATUS - In order to reduce the number of data transfers and to increase parallel processing of decryption processing and authentication processing, an encryption processing apparatus is provided that includes an input/output data processing unit that processes input/output data for an encryption/decryption processing unit and an authentication processing unit, where the input/output data processing unit calculates a parameter used by the authentication processing unit from input data to the input/output data processing unit and forms input data to the authentication processing unit from the calculated parameter or a parameter calculated from data processed by the encryption/decryption processing unit and the input data to the input/output data processing unit. | 2012-12-27 |
20120331306 | Adjustable resolution media format - A play limit is set for a media file. The play limit can be, for example, a date or a number of times that the file has been played. When the file exceeds the play limit, the playback quality of the file is degraded. | 2012-12-27 |
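The play-limit mechanism of 20120331306 amounts to a simple policy function. A minimal sketch, assuming a per-play degradation factor and a quality floor that are not specified in the abstract:

```python
FULL_QUALITY = 1.0  # 1.0 = full resolution; values below it mean degraded playback

def playback_quality(play_count: int, play_limit: int) -> float:
    """Return a quality factor; degrade once the play limit is exceeded."""
    if play_count <= play_limit:
        return FULL_QUALITY
    # Assumed policy: lose 25% of quality per play past the limit,
    # floored at 10% so the file remains barely playable.
    over = play_count - play_limit
    return max(0.1, FULL_QUALITY * (0.75 ** over))
```

A date-based limit would follow the same shape, with `play_count`/`play_limit` replaced by the current date and an expiry date.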
20120331307 | METHODS, APPARATUS AND SYSTEMS TO IMPROVE SECURITY IN COMPUTER SYSTEMS - In one implementation a computer system stores a software program that contains some instructions organized in blocks wherein each block contains a first part with instructions and a second part with an electronic signature or hash value, wherein the computer system includes a security component within the processor that allows the execution of instructions of the first part of a block of data only if the hash value of the data is correct. | 2012-12-27 |
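The block layout in 20120331307 — a first part holding instructions and a second part holding a hash of them, with execution gated on the hash being correct — can be modeled in a few lines. The dictionary layout and SHA-256 choice are assumptions for illustration:

```python
import hashlib

def make_block(instructions: bytes) -> dict:
    """Build a block: first part is the code, second part is its hash."""
    return {"code": instructions,
            "digest": hashlib.sha256(instructions).digest()}

def may_execute(block: dict) -> bool:
    """Security-component check: allow execution only if the hash matches."""
    return hashlib.sha256(block["code"]).digest() == block["digest"]
```

Any tampering with the instruction part changes its hash and causes the check to refuse execution.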
20120331308 | METHODS, APPARATUS AND SYSTEMS TO IMPROVE SECURITY IN COMPUTER SYSTEMS - According to some implementations methods, apparatus and systems are provided involving the use of processors having at least one core with a security component, the security component adapted to read and verify data within data blocks stored in a L1 instruction cache memory and to allow the execution of data block instructions in the core only upon the instructions being verified by the use of a cryptographic algorithm. | 2012-12-27 |
20120331309 | USING BUILT-IN SELF TEST FOR PREVENTING SIDE CHANNEL SECURITY ATTACKS ON MULTI-PROCESSOR SYSTEMS - A data processing system having a first processor, a second processor, a local memory of the second processor, and a built-in self-test (BIST) controller of the second processor which performs BIST memory accesses on the local memory of the second processor and which includes a random value generator is provided. The system can perform a method including executing a secure code sequence by the first processor and performing, by the BIST controller of the second processor, BIST memory accesses to the local memory of the second processor in response to the random value generator. Performing the BIST memory accesses is performed concurrently with executing the secure code sequence. | 2012-12-27 |
20120331310 | Increasing Power Efficiency Of Turbo Mode Operation In A Processor - In one embodiment, a processor has multiple cores to execute threads. The processor further includes a power control logic to enable entry into a turbo mode based on a comparison between a threshold and value of a counter that stores a count of core power and performance combinations that identify turbo mode requests of at least one of the threads. In this way, turbo mode may be entered at a utilization level of the processor that provides for high power efficiency. Other embodiments are described and claimed. | 2012-12-27 |
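The turbo-entry gate of 20120331310 compares a counter of turbo-requesting threads against a threshold. A hypothetical sketch of that comparison (per-thread request flags and the threshold value are assumptions):

```python
def should_enter_turbo(turbo_requests: list, threshold: int) -> bool:
    """turbo_requests: per-thread booleans meaning 'this thread's
    power/performance state requests turbo mode'. Enter turbo only
    when enough threads request it, so the boost is power-efficient."""
    count = sum(1 for wants_turbo in turbo_requests if wants_turbo)
    return count >= threshold
```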
20120331311 | POWER MANAGEMENT SYSTEM AND METHOD - A power management system includes a plurality of electronic devices, a power distribution unit, a power management unit and a power control unit. The power distribution unit is connected with the electronic devices for providing electricity to the electronic devices. The power management unit is connected to a network and the electronic devices, so that the electronic devices are connected with the network through the power management unit. The power control unit is connected with the power management unit through the network. The power control unit is configured for controlling the power management unit, thereby sequentially starting the electronic devices. | 2012-12-27 |
20120331312 | USB CHARGING CIRCUIT FOR A COMPUTER - A Universal Serial Bus (USB) charging circuit for a computer includes a USB interface, a USB power terminal, a standby power terminal, a switch unit, an IC chip, and a control unit. The control unit disconnects the USB interface from the standby power terminal when receiving a high voltage level from the system power terminal or a first control signal from the IC chip. The control unit connects the standby power terminal to the USB interface when receiving a second control signal from the IC chip and the first switch signal from the switch unit; the control unit disconnects the standby power terminal from the USB interface when receiving the second control signal from the IC chip and the second switch signal from the switch unit. | 2012-12-27 |
20120331313 | IMAGE FORMING APPARATUS, POWER SUPPLY CONTROL METHOD, AND COMPUTER-READABLE STORAGE MEDIUM - An image forming apparatus includes a main power supply; a power generation unit configured to generate electric power with natural energy; a secondary battery configured to serve as a power supply source while the electric power is not supplied from the main power supply, the secondary battery being charged with the electric power generated by the power generation unit; a voltage detector configured to detect an output voltage of the secondary battery; and a switching unit configured to switch the power supply source from the secondary battery to the main power supply when the output voltage becomes equal to or lower than a first threshold, and switch the power supply source from the main power supply to the secondary battery when the output voltage becomes equal to or higher than a second threshold that is higher than the first threshold. | 2012-12-27 |
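The two-threshold switching in 20120331313 is a classic hysteresis band: switch away from the battery at or below the low threshold, back to it only at or above the higher one, and hold steady in between. A sketch with illustrative voltage values (the abstract specifies no numbers):

```python
LOW_THRESHOLD = 3.2   # assumed: at/below this, battery -> main supply
HIGH_THRESHOLD = 3.9  # assumed: at/above this, main supply -> battery

def next_source(current: str, battery_voltage: float) -> str:
    """Hysteresis switch between 'battery' and 'mains' supply sources."""
    if current == "battery" and battery_voltage <= LOW_THRESHOLD:
        return "mains"
    if current == "mains" and battery_voltage >= HIGH_THRESHOLD:
        return "battery"
    return current  # inside the hysteresis band: no change
```

The gap between the thresholds prevents rapid oscillation between sources when the battery voltage hovers near a single cut-off point.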
20120331314 | LOGICAL POWER THROTTLING - A processor includes a device providing a throttling power output signal. The throttling power output signal is used to determine when to logically throttle the power consumed by the processor. At least one core in the processor includes a pipeline having a decode pipe; and a logical power throttling unit coupled to the device to receive the output signal, and coupled to the decode pipe. Following the logical power throttling unit receiving the power throttling output signal satisfying a predetermined criterion, the logical power throttling unit causes the decode pipe to reduce an average number of instructions decoded per processor cycle without physically changing the processor cycle or any processor supply voltages. | 2012-12-27 |
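The effect described in 20120331314 — cutting the *average* instructions decoded per cycle without touching clocks or voltages — can be illustrated by stalling the decode pipe on alternate cycles. The decode width and the every-other-cycle policy are assumptions, one possible way to halve the average:

```python
def decoded_over(cycles: int, throttled: bool, width: int = 4) -> int:
    """Total instructions decoded over `cycles` cycles.

    When throttled, decode only on even cycles, halving the average
    decode rate while the clock and supply voltages stay unchanged."""
    total = 0
    for c in range(cycles):
        if throttled and c % 2:
            continue  # stall decode this cycle
        total += width
    return total
```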
20120331315 | System and Method for Re-Balancing Power Supply Efficiency in a Networking Environment - A system and method for re-balancing power supply efficiency in a networking environment. Identification of changes in a network device that affect power consumption can be used to generate power request messages that are communicated to a power supply control via a communication bus. Based on such power request messages, the power supply control can then identify a re-balanced configuration of the power supply system to enable efficient operation of the power supply system. | 2012-12-27 |
20120331316 | INDUCTIVE CHARGING AND DATA TRANSFER FOR MOBILE COMPUTING DEVICES ORGANIZED INTO A MESH NETWORK - Illustrated is a system and method to receive a data packet at a first mobile computing device that is part of a plurality of mobile computing devices organized as a mesh network, the data packet including a power up command and device identifier identifying a second mobile computing device requesting power. The system and method also include identifying a path from the first mobile computing device to the second mobile computing device, the path composed of at least the first and second mobile computing devices and including inductive links. Further, the system and method include transmitting electrical power, based upon the inductive links, from the first mobile computing device to a third mobile computing device, the third mobile computing device residing on the path from the first mobile computing device to the second mobile computing device. | 2012-12-27 |
20120331317 | POWER-CAPPING BASED ON UPS CAPACITY - The power draw of equipment in a data center may be capped in order to keep the power draw under the capacity of the Uninterruptible Power Supply (UPS) that serves the data center. The current capacity of the UPS may be estimated, and the equipment may be controlled so as to keep the equipment's power draw under that current capacity. Factors that may affect the estimate of the UPS's current capacity include the history of temperature and humidity to which the UPS has been subject, and charge/discharge history of the UPS. Factors that may affect the decision of which equipment to throttle to a lower power level include: the current power load at the data center, the type of software that each server is running, and the demand for that software. | 2012-12-27 |
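The capping decision in 20120331317 can be sketched as a greedy loop: throttle the biggest consumers first until the total draw fits under the estimated UPS capacity. The throttled-power level, server names, and biggest-first ordering are illustrative assumptions (the abstract's actual selection criteria involve workload type and demand):

```python
def apply_power_cap(draws: dict, ups_capacity_w: float,
                    throttled_w: float = 150.0) -> dict:
    """Return per-server power draws after capping the total under
    ups_capacity_w. draws maps server name -> current draw in watts."""
    draws = dict(draws)  # do not mutate the caller's dict
    # Assumed policy: throttle the highest-draw servers first.
    for name in sorted(draws, key=draws.get, reverse=True):
        if sum(draws.values()) <= ups_capacity_w:
            break  # already under the UPS capacity estimate
        draws[name] = min(draws[name], throttled_w)
    return draws
```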
20120331318 | SAVING POWER BY MANAGING THE STATE OF INACTIVE COMPUTING DEVICES - Managing readiness states of a plurality of computing devices. A programmed processor unit operates, upon receipt of a request, to: provide one or more computing devices from an inactive pool to an active pool, or accept one or more active computing devices into the inactive pool. The system proactively manages the inactive states of each computing device by: determining the desired number (and identities) of computing devices to be placed in each inactive state of readiness by solving a constraint optimization problem that describes a user-specified trade-off between expected readiness (estimated time to be able to activate computing devices when they are needed next) and conserving energy; generating a plan for changing the current set of inactive states to the desired set; and, executing the plan. Multiple alternative ways of quantifying the desired responsiveness to surges in demand are provided. | 2012-12-27 |
20120331319 | SYSTEM AND METHOD FOR POWER OPTIMIZATION - A technique for reducing the power consumption required to execute processing operations. A processing complex, such as a CPU or a GPU, includes a first set of cores comprising one or more fast cores and a second set of cores comprising one or more slow cores. A processing mode of the processing complex can switch between a first mode of operation and a second mode of operation based on one or more of the workload characteristics, performance characteristics of the first and second sets of cores, power characteristics of the first and second sets of cores, and operating conditions of the processing complex. A controller causes the processing operations to be executed by either the first set of cores or the second set of cores to achieve the lowest total power consumption. | 2012-12-27 |
20120331320 | Wake-on-LAN Between Optical Link Partners - Embodiments described herein achieve Wake-on-LAN to allow optical modules the ability to wake up link partners instantaneously when there is data to be transmitted or received. As such, Wake-on-LAN features are provided for a side-band handshaking protocol and channel that is independent from the normal data traffic path. | 2012-12-27 |
20120331321 | Processor Core with Higher Performance Burst Operation with Lower Power Dissipation Sustained Workload Mode - A processor may operate at a first frequency level for a first time interval. The processor may automatically transition to a sleep state from the first frequency level after the first time interval, and then automatically transition from the sleep state back to the first frequency level after a second time interval. As a result, the processor may operate at reduced power consumption and higher performance. | 2012-12-27 |
20120331322 | POWER-SUPPLY CONTROL SYSTEM, POWER-SUPPLY CONTROL METHOD, AND IMAGE FORMING APPARATUS - A power-supply control system includes a plurality of apparatuses connected to each other. Each apparatus includes a battery configured to supply power in a power saving mode where power consumption is lower than in a normal mode; a detecting unit configured to detect an output voltage of the battery; a transmitting unit configured to transmit a power supply request when the output voltage is determined to be lower than a predetermined value; a receiving unit configured to receive a request from another apparatus; a determining unit configured to determine whether the battery is available for the another apparatus in response to the request; and a control unit configured to control power supply from the battery. The control unit transmits a notification of power supply start to the another apparatus and causes the battery to supply power to the another apparatus when the battery is available for the another apparatus. | 2012-12-27 |
20120331323 | DEVICES AND METHODS FOR SAVING ENERGY THROUGH CONTROL OF SLEEP MODE - A system for saving energy through control of a sleep mode, and a method of operating the system are provided. The energy-saving system may enable a proxy device to maintain a minimum basic setup necessary for a communication when a host device enters a sleep mode, and may omit an operation performed based on the basic setup when the host device switches to a communication mode, thereby enabling a smooth switch between the sleep mode and the communication mode. | 2012-12-27 |
20120331324 | PROGRAMMABLE MECHANISM FOR SYNCHRONOUS STROBE ADVANCE - An apparatus includes a Joint Test Action Group (JTAG) interface, a synchronous bus optimizer, a core clocks generator, and a synchronous strobe driver. The JTAG interface is configured to receive control information over a standard JTAG bus, where the control information indicates an amount to advance a synchronous data strobe associated with a data group. The synchronous bus optimizer is configured to receive the control information, and is configured to develop a value on a ratio bus that indicates the amount. The core clocks generator is coupled to the ratio bus and is configured to advance a data strobe clock by the amount. The synchronous strobe driver is configured to receive the data strobe clock, and is configured to employ the data strobe clock to generate the synchronous data strobe, where the synchronous data strobe, when enabled, is advanced also by the amount. | 2012-12-27 |
20120331325 | PROGRAMMABLE MECHANISM FOR DELAYED SYNCHRONOUS DATA RECEPTION - An apparatus is provided that compensates for misalignment on a synchronous data bus. The apparatus includes a Joint Test Action Group (JTAG) interface, a synchronous bus optimizer, and a delay-locked loop (DLL). The JTAG interface is configured to receive control information over a standard JTAG bus, where the control information indicates an amount to delay a data bit signal associated with a data group. The synchronous bus optimizer is configured to receive the control information, and is configured to develop a value on a ratio bus that indicates the amount. The DLL is coupled to the ratio bus, and is configured to generate a delayed data bit signal, where the DLL adds the amount of delay to the data bit signal to generate the delayed data bit signal. | 2012-12-27 |