44th week of 2013 patent application highlights part 72 |
Patent application number | Title | Published |
20130290602 | DATA STORAGE DEVICE - A data storage device includes a memory, a controller, a first module, a first interface, and a second interface. The first interface and the second interface are coupled to the controller. The controller is used to access data in the memory, and the first module is used to perform a first predetermined function. The second interface is inaccessible to the first module. The first interface may gain access to at least one additional module in the data storage device to perform at least one additional predetermined function which the second interface may not gain access to and may not perform. | 2013-10-31 |
20130290603 | EMULATED ELECTRICALLY ERASABLE MEMORY PARALLEL RECORD MANAGEMENT - A method of transferring data from a non-volatile memory (NVM) having a plurality of blocks of an emulated electrically erasable (EEE) memory to a random access memory (RAM) of the EEE memory includes accessing a plurality of records, one record from each block. A determination is made as to whether any of the first data signals are valid and thereby considered valid data signals. If only one or none is valid, the valid data, if any, is loaded into RAM and the process continues with subsequent simultaneous accesses. If more than one is valid, the process is halted until the RAM is loaded with the valid data, and the method then continues with subsequent simultaneous accesses of records. | 2013-10-31 |
20130290604 | PROGRAM-DISTURB DECOUPLING FOR ADJACENT WORDLINES OF A MEMORY DEVICE - Subject matter disclosed herein relates to memory operations regarding programming bits into a memory array. | 2013-10-31 |
20130290605 | CONVERGED MEMORY AND STORAGE SYSTEM - Embodiments of the present invention provide an approach for Dynamic Random Access Memory (DRAM) and flash converged memory and storage. Specifically, in a typical embodiment, at least one substrate will be provided on which a DRAM unit and flash memory unit are positioned. A set (e.g., one or more) of input/outputs (I/Os) may be provided for the units. Such a set of I/Os may communicate storage and/or memory access requests to a set (e.g., one or more) of controllers, which control the DRAM and flash memory units. The set of controllers may comprise a single integrated controller or multiple controllers having separate and distinct functions (e.g., a memory controller, a storage controller, a DRAM controller, a flash controller, etc.). | 2013-10-31 |
20130290606 | POWER MANAGEMENT FOR A SYSTEM HAVING NON-VOLATILE MEMORY - Systems and methods are disclosed for power management of a system having non-volatile memory (“NVM”). One or more controllers of the system can optimally turn modules on or off and/or intelligently adjust the operating speeds of modules and interfaces of the system based on the type of incoming commands and the current conditions of the system. This can result in optimal system performance and reduced system power consumption. | 2013-10-31 |
20130290607 | STORING CACHE METADATA SEPARATELY FROM INTEGRATED CIRCUIT CONTAINING CACHE CONTROLLER - A technique includes using a cache controller of an integrated circuit to control a cache including cached data content and associated cache metadata. The technique includes storing the metadata and the cached data content off of the integrated circuit and organizing the storage of the metadata relative to the cached data content such that a bus operation initiated by the cache controller to target the cached data content also targets the associated metadata. | 2013-10-31 |
20130290608 | System and Method to Keep Parity Consistent in an Array of Solid State Drives when Data Blocks are De-Allocated - A method comprises sending a first command to a solid state drive (SSD), the first command indicating that the SSD can de-allocate a first plurality of logical block addresses (LBAs), and calculating first parity data for a redundant array of independent disks (RAID) array that includes the SSD in response to receiving a first reply from the SSD indicating that the first LBAs were de-allocated by the SSD. The first parity data is calculated based upon the first LBAs including all logical zeros. | 2013-10-31 |
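The parity rule in 20130290608 can be sketched in a few lines: once the SSD confirms that trimmed LBAs read back as zeros, stripe parity is recomputed treating those members as all-zero blocks. This is an illustrative sketch, not the patented implementation; the function names and the simple bytewise RAID-5-style XOR are our assumptions.

```python
# Sketch only: XOR-based parity recomputed after an SSD confirms that
# de-allocated (trimmed) LBAs now contain all logical zeros.
def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def parity_after_trim(stripe, trimmed_indices):
    """Recompute parity assuming trimmed stripe members are all zeros."""
    zeros = bytes(len(stripe[0]))
    effective = [zeros if i in trimmed_indices else blk
                 for i, blk in enumerate(stripe)]
    return xor_blocks(effective)
```

Calculating parity this way keeps the array consistent even though the de-allocated blocks are never rewritten on the drive itself.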
20130290609 | MEMORY FORMATTING METHOD, MEMORY CONTROLLER, AND MEMORY STORAGE APPARATUS - A memory formatting method adapted to a memory storage apparatus is provided. The memory formatting method includes configuring a plurality of logical block addresses to be mapped to a portion of a plurality of physical blocks, generating a first file system data and a second file system data according to the size of the logical block addresses, and storing the first file system data into a first physical block, and the first physical block is mapped to a first logical block address among the logical block addresses. The memory formatting method also includes selecting a second physical block among the physical blocks, storing the second file system data into the second physical block, determining whether a format command is received, and when the format command is received, re-mapping the first logical block address to the second physical block. | 2013-10-31 |
20130290610 | SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device includes a memory unit configured in page units, an error correction code (ECC) module for generating error correcting codes, a page information addition module for generating page information, and a controller for controlling the reading and writing of data to the memory unit. The controller is configured to associate error correction code information and page information with each frame unit of data written to the memory unit and to store the associated information with each frame unit. The controller is configured to output data to an external host in sizes less than one page unit, such as one frame unit. | 2013-10-31 |
20130290611 | POWER MANAGEMENT IN A FLASH MEMORY - The peak power requirements for operations performed on a FLASH memory circuit vary substantially, with reading, writing, and erasing requiring increasing levels of power. When the memory is operated to improve performance using erase hiding, performing write or erase operations whose time periods can overlap results in increased peak power requirements. Controlling the time periods during which modules of a RAID group are permitted to perform erase operations, with respect to other modules in other RAID groups, may smooth out the peak power requirements. In addition, such scheduling may lead to improved efficiency in using shared data buses. | 2013-10-31 |
20130290612 | SOFT INFORMATION MODULE - A system and method for generating reliability information (aka “soft information”) from a flash memory device is disclosed. A plurality of memory cells are read by a data storage controller at a first read level to obtain a plurality of program values. On an error indicator being received in connection with reading the plurality of memory cells, the plurality of memory cells are read one or more times at one or more different read levels to categorize the plurality of memory cells into two or more cell program regions. A confidence value is then assigned to each memory cell based on a corresponding cell program region for the memory cell, the confidence value being representative of a likelihood that the memory cell is programmed to a corresponding program value read at the first read level. | 2013-10-31 |
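The soft-information flow above can be illustrated with a small sketch. All names here are ours, not from the application: stored cell levels are modeled as floats, a hard read compares against one threshold, and re-reads at shifted thresholds place each cell in a region from which a confidence is assigned (a cell whose reads agree at every threshold sits far from the boundary, so it gets high confidence).

```python
# Illustrative sketch of deriving per-cell confidence from extra reads
# at shifted read levels (a common soft-decision flash technique).
def read_at(level, cells):
    """Hard read: 1 if the cell's stored level exceeds the read level."""
    return [1 if c > level else 0 for c in cells]

def soft_read(cells, first_level, extra_levels):
    hard = read_at(first_level, cells)
    rereads = [read_at(lv, cells) for lv in extra_levels]
    soft = []
    for i, bit in enumerate(hard):
        agree = sum(1 for r in rereads if r[i] == bit)
        # Confidence: fraction of reads (first read plus re-reads)
        # agreeing with the value read at the first level.
        soft.append((bit, (1 + agree) / (1 + len(extra_levels))))
    return soft
```

A cell between two read levels (e.g., 0.55 with thresholds at 0.45 and 0.6) flips between reads and receives a reduced confidence, which is exactly the "likelihood the cell is programmed to the value read at the first level" the abstract describes.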
20130290613 | STORAGE SYSTEM AND STORAGE APPARATUS - A storage system comprises a first controller and a plurality of storage devices. The plurality of storage devices configure RAID, each of which includes one or more non-volatile memory chips providing storage space where data from a host computer is stored, and a second controller coupled to the non-volatile memory chips. In case where the first controller receives an update request to update first data to second data from the host computer, the second controller in a first storage device of the storage devices is configured to store the second data in an area different from an area where the first data has been stored, in the storage space of the first storage device; generate information that relates the first data and the second data; and generate an intermediate parity based on the first and the second data. | 2013-10-31 |
20130290614 | FLASH MEMORY CONTROLLER - A method includes, in at least one aspect, asserting a control signal to one or more devices, determining an initial wait time after asserting the control signal, issuing a first command based on the initial wait time, determining a first interval time associated with the first command and a second command, and issuing the second command based on the first interval time. | 2013-10-31 |
20130290615 | COMPRESSION AND DECOMPRESSION OF DATA AT HIGH SPEED IN SOLID STATE STORAGE - Compression and decompression of data at high speed in solid state storage is described, including accessing compressed data comprising a plurality of blocks, decompressing each of the plurality of blocks in a first stage of decompression to produce a plurality of partially decompressed blocks, and reconstructing the original data from the partially decompressed blocks in a second stage of decompression. | 2013-10-31 |
20130290616 | CONTROL APPARATUS OF NON-VOLATILE MEMORY AND IMAGE FORMING APPARATUS - An apparatus has an external memory control apparatus for controlling rewriting of a memory. The external memory control apparatus allows the memory to store the number of formed monochromatic images and changes a rewriting frequency of the memory according to the number of formed monochromatic images. | 2013-10-31 |
20130290617 | METHOD AND SYSTEM FOR CONTROLLING LOSS OF RELIABILITY OF NON-VOLATILE MEMORY - A method for controlling a loss of reliability of a non-volatile memory (NVM) included in an integrated circuit card (ICC) may include determining whether the NVM is reliable at the operating system (OS) side of the ICC, and generating an event associated with the reliability of the NVM at the OS side for an application of the ICC, if the NVM is determined to be unreliable. | 2013-10-31 |
20130290618 | HIGHER-LEVEL REDUNDANCY INFORMATION COMPUTATION - Higher-level redundancy information computation enables a Solid-State Disk (SSD) controller to provide higher-level redundancy capabilities to maintain reliable operation in a context of failures of non-volatile (e.g. flash) memory elements during operation of an SSD. A first portion of higher-level redundancy information is computed using parity coding via an XOR of all pages in a portion of data to be protected by the higher-level redundancy information. A second portion of the higher-level redundancy information is computed using a weighted-sum technique, each page in the portion being assigned a unique non-zero “index” as a weight when computing the weighted-sum. Arithmetic is performed over a finite field (such as a Galois Field). The portions of the higher-level redundancy information are computable in any order, such as an order based on order of read operation completion of non-volatile memory elements. | 2013-10-31 |
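The two redundancy portions described in 20130290618 can be sketched directly: R0 is the XOR of all data pages, and R1 is a weighted sum over a finite field, each page weighted by a unique non-zero index. This is a hedged sketch under stated assumptions — the abstract does not mandate a particular field; GF(2^8) with the common 0x11D polynomial is our choice, and pages are modeled as byte lists.

```python
# Sketch: two-page higher-level redundancy, R0 = parity (XOR),
# R1 = weighted sum over GF(2^8) with non-zero per-page indices.
def gf_mul(a, b, poly=0x11D):
    """Carry-less multiply in GF(2^8) with reduction polynomial 0x11D."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def redundancy(pages):
    r0 = [0] * len(pages[0])
    r1 = [0] * len(pages[0])
    for idx, page in enumerate(pages, start=1):  # indices must be non-zero
        for j, byte in enumerate(page):
            r0[j] ^= byte
            r1[j] ^= gf_mul(idx, byte)
    return r0, r1
```

Because XOR and the field sum are both order-independent, the two portions can be accumulated in any order — e.g., as reads of the non-volatile memory elements complete, as the abstract notes. R0 alone recovers a single lost page; R1 adds the information needed to handle a second loss.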
20130290619 | Apparatus and Method for Sequential Operation on a Random Access Device - The present disclosure involves a method. As a part of the method, a logically sequential range of memory blocks is allocated for sequential access. A pointer is initialized with an address of a first memory block that is within the range of the memory blocks. In response to a data write next request, data is written into the range of the memory blocks, starting with the first memory block and continuing sequentially in subsequent memory blocks within the range until the data write next request is completed. Thereafter, the pointer is updated based on a last memory block in which data is written. | 2013-10-31 |
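The allocate / write-next / update-pointer sequence above maps naturally to a small class. The class and method names are ours, added for illustration; the abstract specifies only the behavior (a reserved sequential range, writes starting at the pointer, and the pointer advanced past the last block written).

```python
# Minimal sketch of a "data write next" pointer over a sequential
# range of memory blocks on a random access device.
class SequentialRange:
    def __init__(self, first_block, num_blocks, block_size):
        self.blocks = [None] * num_blocks   # the allocated range
        self.first = first_block
        self.block_size = block_size
        self.ptr = 0                        # index of the next free block

    def write_next(self, data):
        """Write data across sequential blocks; return blocks consumed."""
        needed = -(-len(data) // self.block_size)   # ceiling division
        if self.ptr + needed > len(self.blocks):
            raise ValueError("sequential range exhausted")
        for i in range(needed):
            chunk = data[i * self.block_size:(i + 1) * self.block_size]
            self.blocks[self.ptr + i] = chunk
        self.ptr += needed      # updated past the last block written
        return needed
```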
20130290620 | STORAGE CONTROLLING APPARATUS, STORAGE APPARATUS AND PROCESSING METHOD - A storage controlling apparatus includes a command decoder and a command processing section. The command decoder decides whether or not a plurality of access object addresses of different commands included in a command string correspond to words different from each other in the same one of the blocks of a memory cell array which have a common plate. The command processing section collectively and successively executes, when it is decided that the access object addresses of the commands correspond to words different from each other in the same block of the memory cell array, those operations in the processing of the commands in which an equal voltage is applied as a drive voltage between the plate and a bit line. | 2013-10-31 |
20130290621 | DDR CONTROLLER, METHOD FOR IMPLEMENTING THE SAME, AND CHIP - There are provided a DDR controller, a method for implementing the same, and a chip, which are applicable to the field of DDR controller technology. The method includes the step of parsing a plurality of buffered commands concurrently. | 2013-10-31 |
20130290622 | TCAM ACTION UPDATES - Systems and methods, including executable instructions and/or logic thereon, are provided for ternary content addressable memory (TCAM) updates. A TCAM system includes a TCAM matching array, a TCAM action array that specifies actions that are taken upon a match in the TCAM matching array, and a TCAM driver that provides a programmable interface to the TCAM matching array and the TCAM action array. Program instructions are executed by the TCAM driver to add a divert object which encompasses actions associated with the TCAM action array and to apply the divert object to update action fields in the TCAM action array, without changing the relative order of entries in the TCAM matching array, while hardware is simultaneously using the entries. | 2013-10-31 |
20130290623 | COMPUTER AND METHOD FOR CONTROLLING COMPUTER - Recently, along with the increase in the importance of data protection, there are increasing demands for constructing a computer system capable of protecting data even when widespread disaster occurs. In order to reduce the risk of data loss even when widespread disaster occurs, the present invention computes the risk of data loss for each replication relationship of data (combination of storage subsystems storing the same data), and allocates data so that the risks of losing data of all replication relationships are optimized. | 2013-10-31 |
20130290624 | TRANSFERRING LEARNING METADATA BETWEEN STORAGE SERVERS HAVING CLUSTERS VIA COPY SERVICES OPERATIONS ON A SHARED VIRTUAL LOGICAL UNIT THAT STORES THE LEARNING METADATA - A virtual logical unit that stores learning metadata is allocated in a first storage server having a first plurality of clusters, wherein the learning metadata indicates a type of storage device in which selected data of the first plurality of clusters of the first storage server are stored. A copy services command is received to copy the selected data from the first storage server to a second storage server having a second plurality of clusters. The virtual logical unit that stores the learning metadata is copied, from the first storage server to the second storage server, via the copy services command. Selected logical units corresponding to the selected data are copied from the first storage server to the second storage server, and the learning metadata is used to place the selected data in the type of storage device indicated by the learning metadata. | 2013-10-31 |
20130290625 | MAPPING LOCATIONS OF LOGICAL VOLUME RECORDS ON A PHYSICAL STACKED VOLUME - A system, method and computer program product for accessing host data records stored in a virtual tape storage (VTS) system. The computer program product includes a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code includes computer readable program code configured to receive a mount request to access at least one host data record in a VTS system; computer readable program code configured to determine a starting logical block ID (SLBID) corresponding to the at least one requested host data record; computer readable program code configured to determine a physical block ID (PBID) that corresponds to the SLBID; computer readable program code configured to access a physical block on a magnetic tape medium corresponding to the PBID; and computer readable program code configured to output at least the physical block without outputting an entire logical volume that the physical block is stored to. | 2013-10-31 |
20130290626 | METHODS AND SYSTEMS FOR INSTANTANEOUS ONLINE CAPACITY EXPANSION - The disclosure provides instantaneous, vertical online capacity expansion (OCE) for redundant (e.g., RAID-5, RAID-6) and non-redundant (e.g., RAID-0) arrays. The new OCE technique implements vertical expansion instead of the horizontal expansion techniques implemented in current OCE techniques. The vertical expansion treats any new addition of storage as an extension of the capacity of the preexisting physical drives in order to avoid having to rewrite the data blocks of the original, preexisting storage devices. Vertical RAID expansion is implemented by installing one or more new physical storage devices in a device or partition configuration that corresponds to the physical configuration of the preexisting volume and loading new metadata received through the user interface into the firmware of the RAID controller to define the configuration of the expanded volume. | 2013-10-31 |
20130290627 | Data migration - A method for migrating data in a storage system includes generating a first set of logical disks (LDs), the LDs being mapped to physical storage space in the storage system, generating a temporary virtual volume (VV) mapped to the first set of LDs, generating a second set of LDs mapped to the temporary VV, and migrating data between the second set of LDs and a third set of LDs. | 2013-10-31 |
20130290628 | METHOD AND APPARATUS TO PIN PAGE BASED ON SERVER STATE - A storage system includes plural types of storage devices that define a plurality of virtual volumes and a plurality of logical volumes. A storage controller is configured to manage the plurality of virtual volumes and the plurality of logical volumes, the plurality of virtual volumes defining first storage areas and the plurality of logical volumes defining second storage areas. A second storage area of the plurality of logical volumes is allocated to a first storage area of the plurality of virtual volumes. The storage controller is configured to determine whether data of a first storage area of a swap file is to be stored in the first-tier storage device or the second-tier storage device based on access information from an application server that manages swap file information of the swap file. | 2013-10-31 |
20130290629 | STORAGE SYSTEM AND STORAGE APPARATUS - A storage system comprises a first controller and a plurality of storage devices. The plurality of storage devices configure RAID, each of which includes one or more non-volatile memory chips providing storage space where data from a host computer is stored, and a second controller coupled to the non-volatile memory chips. In case where the first controller receives an update request to update first data to second data from the host computer, the second controller in a first storage device of the storage devices is configured to store the second data in an area different from an area where the first data has been stored, in the storage space of the first storage device; generate information that relates the first data and the second data; and generate an intermediate parity based on the first and the second data. | 2013-10-31 |
20130290630 | STORAGE SYSTEM, CONTROL METHOD THEREOF, AND PROGRAM - A RAID control unit forms a redundant configuration of RAID with respect to a physical device including a plurality of disk devices. A cache control unit processes data in page units corresponding to a stripe of the disk devices. A cache area placement unit, when it receives a write request from an upper-level device, places, in a cache memory, a cache area which is provided with a plurality of page areas and has the same size as the stripe area. When new data in the cache memory which is newer than the data in the physical device is to be written back to the storage device, a write-back processing unit generates new parity data by use of an unused area in the cache stripe area, and then writes the new data and the new parity to the corresponding storage devices. | 2013-10-31 |
20130290631 | CONVERTING LUNS INTO FILES OR FILES INTO LUNS IN REAL TIME - A LUN is provided that can store multiple datasets (e.g., data and/or applications, such as virtual machines stored as virtual hard drives). The LUN is partitioned into multiple partitions. One or more datasets may be stored in each partition. As a result, multiple datasets can be accessed through a single LUN, rather than through a number of LUNs proportional to the number of datasets. Furthermore, the datasets stored in the LUN may be pivoted. A second LUN may be generated that is dedicated to storing a dataset of the multiple datasets stored in the first LUN. The dataset is copied to the second LUN, and the second LUN is exposed to a host computer to enable the host computer to interact with the dataset. Still further, the dataset may be pivoted from the second LUN back to a partition of the first LUN. | 2013-10-31 |
20130290632 | PORTABLE DEVICE FOR SECURE STORAGE OF USER PROVIDED DATA - A personal electronic carrier device (PECD) comprising means for receiving PECD data; means for storing PECD data; means for transmitting PECD data directly or indirectly; and operating software means to effect the displaying, verifying, receiving, storing, and transmitting of the PECD data. The PECD is part of a network having a plurality of data stations and preferably a main database. A method used with the PECD maintains the master information or database of an individual and is integral to the internetworking and interoperability of data consisting of medical information (particularly insurance information, drug information, and medical records) as well as educational and identification data. The method of data exchange in association with the PECD, within and between networks, provides for the efficacious and convenient handling of data. Overall, it presents a method that provides an efficient and effective way of data exchange and interoperability by leveraging the master data in the PECDs of individuals. | 2013-10-31 |
20130290633 | SD CARD MEMORY TOOL - A method and apparatus for obtaining the size of a file of save data created by a computer game device. Save operations by the computer game device and a personal computer device are performed on an SD memory card, and the size of the file of save data is determined based on the two save operations. | 2013-10-31 |
20130290634 | Data Processing Method and Apparatus - Embodiments of the present invention disclose a data processing method and apparatus. The method includes: first receiving an operation command, then searching, according to a memory address, a Cache memory in a Cache controller for data to be operated, and storing the operation command in a missed command buffer area in the Cache controller when the data to be operated is not found through searching in the Cache memory; then, storing data sent by an external memory in a data buffer area of the Cache controller after sending a read command to the external memory, and finally processing, according to a missed command, the data acquired from the external memory and the data carried in the missed command. The present invention applies to the field of computer systems. | 2013-10-31 |
20130290635 | PROVISION OF ACCESS CONTROL DATA WITHIN A DATA PROCESSING SYSTEM - A data processing system. | 2013-10-31 |
20130290636 | MANAGING MEMORY - Methods, and apparatus to cause performance of such methods, for managing memory. The methods include requesting a particular unit of data from a first level of memory. If the particular unit of data is not available from the first level of memory, the methods further include determining whether a free unit of data exists in the first level of memory, evicting a unit of data from the first level of memory if a free unit of data does not exist in the first level of memory, and requesting the particular unit of data from a second level of memory. If the particular unit of data is not available from the second level of memory, the methods further include reading the particular unit of data from a third level of memory. The methods still further include writing the particular unit of data to the first level of memory. | 2013-10-31 |
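The three-level flow the abstract walks through — try level 1, free or evict on a miss, try level 2, fall back to level 3, then install the unit at level 1 — can be sketched in a few lines. The function, dictionary-based levels, and FIFO-style eviction are our assumptions for illustration only.

```python
# Sketch of a three-level memory lookup with eviction on a full
# first level (dicts stand in for the three levels of memory).
def fetch(unit, l1, l2, l3, l1_capacity):
    if unit in l1:
        return l1[unit]                 # hit at the first level
    if len(l1) >= l1_capacity:          # no free unit exists: evict one
        victim = next(iter(l1))
        del l1[victim]
    data = l2.get(unit)                 # request from the second level
    if data is None:                    # miss there too: read third level
        data = l3[unit]
    l1[unit] = data                     # write the unit to the first level
    return data
```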
20130290637 | PER PROCESSOR BUS ACCESS CONTROL IN A MULTI-PROCESSOR CPU - A technique to provide hardware protection for bus accesses for a processor in a multiple processor environment where at least two zones are established to separate or segregate processor functionality. In one implementation, control registers within a cache memory that supports the multiple processors are loaded with addresses associated with access rights for a particular processor. Then, when an access request is generated, the registers are checked to authorize the access. | 2013-10-31 |
20130290638 | TRACKING OWNERSHIP OF DATA ASSETS IN A MULTI-PROCESSOR SYSTEM - A technique to provide ownership tracking of data assets in a multiple processor environment. Ownership tracking allows a data asset to be identified to a particular processor and tracked as the data asset travels within a system or sub-system. In one implementation, the sub-system is a cache memory that provides cache support to multiple processors. By utilizing flag bits attached to the data asset, ownership identification is attached to the data asset to identify which processor owns the data asset. | 2013-10-31 |
20130290639 | APPARATUS AND METHOD FOR MEMORY COPY AT A PROCESSOR - A processor uses a dedicated buffer to reduce the amount of time needed to execute memory copy operations. For each load instruction associated with the memory copy operation, the processor copies the load data from memory to the dedicated buffer. For each store operation associated with the memory copy operation, the processor retrieves the store data from the dedicated buffer and transfers it to memory. The dedicated buffer is separate from a register file and caches of the processor, so that each load operation associated with a memory copy operation does not have to wait for data to be loaded from memory to the register file. Similarly, each store operation associated with a memory copy operation does not have to wait for data to be transferred from the register file to memory. | 2013-10-31 |
20130290640 | BRANCH PREDICTION POWER REDUCTION - In one embodiment, a microprocessor is provided. The microprocessor includes instruction memory and a branch prediction unit. The branch prediction unit is configured to use information from the instruction memory to selectively power up the branch prediction unit from a powered-down state when fetched instruction data includes a branch instruction and maintain the branch prediction unit in the powered-down state when the fetched instruction data does not include a branch instruction in order to reduce power consumption of the microprocessor during instruction fetch operations. | 2013-10-31 |
20130290641 | ELASTIC CACHING FOR JAVA VIRTUAL MACHINES - A mechanism is provided for managing memory of a runtime environment executing on a virtual machine. The mechanism includes an elastic cache made of objects within heap memory of the runtime environment. When the runtime environment and virtual machine are not experiencing memory pressure from a hypervisor, the objects of the elastic cache may be used to temporarily store application-level cache data from applications running within the runtime environment. When memory pressure from the hypervisor is exerted, the objects of the elastic cache are re-purposed to inflate a memory balloon within heap memory of the runtime environment. | 2013-10-31 |
20130290642 | Managing nodes in a storage system - Each node in a clustered array is the owner of a set of zero logical disks (LDs). Thinly-provisioned VVs (TPVVs) are partitioned so each is mapped to a group of zero LDs from different sets of zero LDs. When there is a change in ownership, the affected zero LDs are switched one at a time so only a group of the TPVVs is affected each time. | 2013-10-31 |
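The ownership-switching idea above can be sketched with a toy mapping: every thinly-provisioned VV (TPVV) hashes to one zero LD, so moving one zero LD to a new owner touches only that LD's group of TPVVs. The class, the modulo partitioning, and the node names are our illustrative assumptions, not the actual array mechanism.

```python
# Sketch: zero-LD ownership changed one LD at a time, so each switch
# affects only the group of TPVVs mapped to that LD.
class ZeroLdMap:
    def __init__(self, owners):
        self.owner = dict(owners)      # zero-LD id -> owning node

    def ld_for(self, tpvv_id, num_lds):
        """Partition TPVVs across the zero LDs (toy modulo mapping)."""
        return tpvv_id % num_lds

    def switch(self, ld_id, new_node):
        """Move one zero LD; only TPVVs mapped to ld_id see the change."""
        self.owner[ld_id] = new_node
```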
20130290643 | USING A CACHE IN A DISAGGREGATED MEMORY ARCHITECTURE - Example caches in a disaggregated memory architecture are disclosed. An example apparatus includes a cache to store a first key in association with a first pointer to a location at a remote memory. The location stores a first value corresponding to the first key. The example apparatus includes a receiver to receive a plurality of key-value pairs from the remote memory based on the first key. The first value specifies the key-value pairs for retrieval from the remote memory. | 2013-10-31 |
20130290644 | Method and System Method and System For Exception-Less System Calls For Event Driven Programs - A method and system is disclosed which can enhance the performance of computer systems by altering the operation of the operating system of those computer systems. The invention provides a system and method for making exception-less system calls, thus avoiding or reducing the direct and indirect overheads associated with making an exception-based system call. The invention can be employed with single core processor systems and with multi-core processor systems, both affording improved temporal execution locality and the later also providing improved spatial execution locality. The system and method can be employed in a wide range of operating systems. | 2013-10-31 |
20130290645 | TECHNIQUES TO PRELINK SOFTWARE TO IMPROVE MEMORY DE-DUPLICATION IN A VIRTUAL SYSTEM - Techniques to prelink software to improve memory de-duplication in a virtual system are described. An apparatus may comprise a processor circuit, a memory unit coupled to the processor circuit to store private memory pages for multiple virtual machines, and a dynamic linker application operative on the processor circuit to link a binary version of a software program with associated program modules at run-time of the binary version on a virtual machine. The dynamic linker application may comprise a master prelink component operative on the processor circuit to relocate a first set of program modules for a first binary version of the software program for a first virtual machine using a first set of virtual memory addresses from a first private memory page allocated to the first virtual machine, and store relocation information for the first set of program modules in a global prelink layout map for use by a second virtual machine. Other embodiments are described and claimed. | 2013-10-31 |
20130290646 | FIFO BUFFER SYSTEM PROVIDING SAME CLOCK CYCLE RESPONSE TO POP COMMANDS - A first-in first-out (FIFO) buffer system includes FIFO control logic and first and second storage partitions. Each storage partition includes a corresponding single-port memory bank and a prefetch buffer. The FIFO control logic alternates processing of PUSH commands between the first and second storage partitions. Additionally, the FIFO control logic anticipates POP commands based on the FIFO order and the alternating PUSH arrangement by initiating prefetches of data so that data to be accessed by a POP command is available at either the prefetch buffer (if the prefetch has completed) or the output of the single-port memory bank (if the prefetch has not yet completed) of the corresponding storage partition at the time the POP command is received, thereby enabling the output of the data for the POP command in the same clock cycle in which the POP command is received. | 2013-10-31 |
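The alternating-bank arrangement in 20130290646 can be modeled behaviorally: PUSHes alternate between two banks, and because POPs drain in FIFO order, the pop pointer always knows which bank holds the next value — the property that lets the hardware prefetch it in time. This Python class is a software stand-in for the hardware (names ours); it shows the ordering invariant, not the clock-cycle timing.

```python
# Behavioral sketch: a FIFO built from two banks with PUSHes
# alternating between them and POPs alternating in the same order.
class TwoBankFifo:
    def __init__(self):
        self.banks = ([], [])
        self.push_sel = 0   # bank receiving the next PUSH
        self.pop_sel = 0    # bank holding the next value to POP

    def push(self, value):
        self.banks[self.push_sel].append(value)
        self.push_sel ^= 1              # alternate banks on each PUSH

    def pop(self):
        # Because pushes alternated, the oldest value is always at the
        # head of the bank indicated by pop_sel (the "prefetched" value).
        value = self.banks[self.pop_sel].pop(0)
        self.pop_sel ^= 1
        return value
```

In the hardware version, each bank is single-ported, so alternating lets a prefetch on one bank overlap with a push on the other — which is what makes same-cycle POP responses possible.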
20130290647 | INFORMATION-PROCESSING DEVICE - According to one embodiment, a memory device is connectable to a host device. The memory device includes a first interface unit, a controller unit, a second memory, and a second interface. The first interface unit receives a write command from the host device. The controller unit acquires the write-data associated with the write command stored in a first memory area of a first memory in the host device, the write-data being copied from a second memory area of the first memory. The second interface causes the write-data to be written into the second memory. | 2013-10-31 |
20130290648 | EFFICIENT DATA OBJECT STORAGE AND RETRIEVAL - A data storage system includes a processor, a system memory, and logical extents. Blocks of storage in one or more physical storage devices are allocated to each of the logical extents. The processor maintains a logical container (a volume) for data objects, and the volume includes one or more of the logical extents. The processor stores data objects that are uniquely identified by object identifiers in the logical extents. The processor also maintains a first index that is stored in the system memory and maps a range of the object identifiers to a second index. The second index is stored in a logical extent and indicates the storage locations of the data objects associated with that range of object identifiers. | 2013-10-31 |
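A hypothetical sketch of that two-level lookup: a first (in-memory) index maps ranges of object identifiers to a second index, which records each object's storage location. The bucket width and the `(extent, offset)` location tuple are assumptions for illustration; the abstract does not specify them.

```python
# First-level index: range start -> second-level index.
# Second-level index: object id -> (extent, offset) storage location.
RANGE_WIDTH = 1000  # assumed width of one object-identifier range

first_index = {}

def store(object_id, extent, offset):
    bucket = (object_id // RANGE_WIDTH) * RANGE_WIDTH
    # The second index lives (conceptually) in a logical extent.
    first_index.setdefault(bucket, {})[object_id] = (extent, offset)

def lookup(object_id):
    bucket = (object_id // RANGE_WIDTH) * RANGE_WIDTH
    return first_index[bucket][object_id]
```

Keeping only the coarse range map in system memory bounds the in-memory footprint while the per-object detail stays on the extents.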
20130290649 | FORWARD COUNTER BLOCK - A forward counter block may include at least one of a plurality of local counter storage elements for counting events. The forward counter block may also include an update engine configured to update an external memory by forwarding the value stored in any of the local counter storage elements, and to return a zero value to that local counter storage element, when the value stored in that element reaches or surpasses a threshold value. | 2013-10-31 |
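A minimal sketch of that threshold-triggered flush, under the assumption that "forwarding" means adding the local value to an external running total (the abstract does not say how the external memory combines updates):

```python
THRESHOLD = 100  # assumed threshold value

class ForwardCounterBlock:
    def __init__(self, n_counters):
        self.local = [0] * n_counters     # local counter storage elements
        self.external = [0] * n_counters  # external memory totals

    def count_event(self, idx, amount=1):
        self.local[idx] += amount
        if self.local[idx] >= THRESHOLD:          # reaches or surpasses threshold
            self.external[idx] += self.local[idx] # forward value to external memory
            self.local[idx] = 0                   # return a zero to the local element
```

Keeping small, frequently-updated counters local and flushing only at the threshold is what spares the external memory from one write per event.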
20130290650 | DISTRIBUTED ACTIVE DATA STORAGE SYSTEM - A request from a requestor identifies data stored in a distributed active data storage system and a procedure that is associated with the identified data for a given node of the distributed active data storage system to execute. The execution of the procedure causes the given node to selectively determine an address for routing another request to an element of a plurality of elements of a data structure stored on the plurality of nodes. | 2013-10-31 |
20130290651 | COMPUTER SYSTEM AND COMPUTER SYSTEM INFORMATION STORAGE METHOD - In a structure containing redundant management modules, if simultaneous replacement of both the main-system and standby-system management modules becomes necessary due to a failure, fault, or other problem, the management information retained in the management modules would be lost. A computer system contains an external storage device outside the management module. This external storage device stores the same information as the management information held by the main-system management module, and after the management modules are replaced, the management information held in the external storage device is restored to the management module. A switch is further included between the external storage device and the management modules, and controlling this switch from the management modules allows the plurality of management modules to exclusively access the external storage device. | 2013-10-31 |
20130290652 | STORAGE CONTROL DEVICE - A storage control device includes: a memory where a data file is temporarily stored; a read-out unit that sequentially reads out divided data segments of the data file; a storage medium that includes data storage areas having small areas and data management areas each corresponding to one of the small areas, so as to store each of the data segments into the small areas and store at least one of first link information and second link information into the data management areas; a first instruction unit that issues an instruction for procuring consecutive data management areas corresponding to the data size of the data segments; a second instruction unit that issues an instruction for writing the first link information into the data management areas excluding a trailing-end data management area; and a third instruction unit that issues an instruction for sequentially writing the data segments into the data storage areas. | 2013-10-31 |
20130290653 | LOG RECORDING APPARATUS - To efficiently record logs, a log recording apparatus includes a log recording memory, an access control unit that acquires contents of an access from a CPU to a memory space, a log-recording-condition storage unit that has a log recording condition stored therein, and a log-recording-condition determination unit that determines, every time the access control unit acquires the access contents, whether the acquired access contents satisfy the log recording condition stored in the log-recording-condition storage unit. The access control unit is configured to store access contents determined as satisfying the log recording condition by the log-recording-condition determination unit in the log recording memory, and does not store access contents determined as not satisfying the log recording condition by the log-recording-condition determination unit in the log recording memory. | 2013-10-31 |
20130290654 | DATA WRITING CONTROL DEVICE, DATA WRITING CONTROL METHOD, AND INFORMATION PROCESSING DEVICE - A data writing control device includes: a determination unit that determines whether a request from a requestor is a partial-write request for data and the partial-write is continuously performed to the same address; a transmission unit that, when the request from the requestor is the partial-write request for data and the partial-write is performed to an address different from an address of the previous partial-write, transmits a read request for data to the requestor; and a hold unit that holds write data included in the partial-write request and data indicating a rewritten location of the write data until read data corresponding to the read request for the data is received. | 2013-10-31 |
20130290655 | SCM-CONSCIOUS TRANSACTIONAL KEY-VALUE STORE - Embodiments of a system are described. In one embodiment, the system is a device for performing operations and supporting transactions. The device is configured to receive a transaction comprising a command and data. The device writes the data to a transaction manager on a persistent memory device. The transaction manager also maintains a status of the transaction and reference to entries within memory that are manipulated by the transaction. The device also creates an in-memory log of the transaction in a first hash directory. The device then commits a copy of the first hash directory to a second hash directory maintained on a persistent memory device. | 2013-10-31 |
20130290656 | Concurrent Request Scheduling | 2013-10-31 |
20130290657 | STORING DATA IN CONTAINERS - Methods and apparatus to store data are disclosed. An example method includes establishing a plurality of containers for storing data representative of a list of records to be displayed on a device; and loading a first segment of the list of records into first and second ones of the containers by alternating between loading the first container with first data and loading the second container with second data until the first segment is loaded into the first and second containers. | 2013-10-31 |
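The alternating-container loading described above can be sketched in a few lines. This is an illustrative model only; the container type and segment handling here are assumptions, not the claimed apparatus:

```python
# Split one segment of a record list across two containers by alternating,
# so the display layer can render from one container while the other fills.
def load_segment(records):
    containers = ([], [])
    for i, record in enumerate(records):
        containers[i % 2].append(record)  # alternate first/second container
    return containers
```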
20130290658 | Storage Control Device, Data Archive Storage System, and Data Access Method - Embodiments of the present invention provide a storage control device, system, and method. The system includes a data storage device and a storage control device. The storage control device controls the data storage device to write a first file into a first storage location and then sends a first hard disk control instruction to the data storage device, so as to apply an energy-saving control to the storage medium where the first storage location is located; it also controls the data storage device to power on, or recover from dormancy, the storage medium where a second storage location is located, and then reads a second file from the second storage location. The data storage device executes the corresponding operations under the control of the storage control device, thereby implementing an energy-saving control on a storage hard disk and reducing storage cost. | 2013-10-31 |
20130290659 | MEMORY SYSTEM - A memory system includes a volatile first storing unit, a nonvolatile second storing unit, and a controller. The controller performs data transfer, stores management information including a storage position of data stored in the second storing unit into the first storing unit, and performs data management while updating the management information. The second storing unit stores management information in a latest state and a storage position of the management information. The storage position information is read by the controller during a startup operation of the memory system and includes a second pointer indicating a storage position of the management information in a latest state and a first pointer indicating a storage position of the second pointer. The first pointer is stored in a fixed area in the second storing unit and the second pointer is stored in an area excluding the fixed area in the second storing unit. | 2013-10-31 |
20130290660 | POST ACCESS DATA PRESERVATION - A method, article of manufacture, and apparatus for preserving changes made to data during a recovery process. In some embodiments, this includes recovering a backup data to a remote location, using an I/O intercept to access the recovered data, modifying the recovered data a first time, completing the modification of the recovered data, preserving the I/O intercept, and storing the modified data in the remote location. | 2013-10-31 |
20130290661 | COMBINED LIVE MIGRATION AND STORAGE MIGRATION USING FILE SHARES AND MIRRORING - Migration of a virtual machine and associated files to a destination host may be performed. A source host may initiate establishment of a temporary network file share at a destination location of the destination host to provide the source host and the destination host with access to the file share. While the virtual machine is running at the source host, a storage migration and a live migration may be initiated. Using the network file share, the source host may copy the associated files to the destination location. A runtime state of the virtual machine may be copied to the destination host. In a final phase of the migration, the virtual machine at the source host may be stopped, the storage migration may be completed, the copying of the runtime state may be completed, and the virtual machine may be started at the destination host. | 2013-10-31 |
20130290662 | INFORMATION SECURITY TECHNIQUES INCLUDING DETECTION, INTERDICTION AND/OR MITIGATION OF MEMORY INJECTION ATTACKS - Methods of detecting malicious code injected into memory of a computer system are disclosed. The memory injection detection methods may include enumerating memory regions of an address space in memory of computer system to create memory region address information. The memory region address information may be compared to loaded module address information to facilitate detection of malicious code memory injection. | 2013-10-31 |
20130290663 | STORAGE APPARATUS AND CONTROL METHOD THEREOF - Improved read/write access performance with respect to a disk is proposed. A controller manages a first volume format LDEV in which each of a plurality of distributed user data areas for storing a data part and each of a plurality of distributed control information areas for storing a control information part are targets of capacity change. The controller also manages a second volume format LDEV, which includes a plurality of groups each formed from one distributed user data area and one distributed control information area, and in which each group is a unit by which capacity is expanded in a real storage area. When a request for access to a data part belonging to the first volume format LDEV is received, the controller converts the data address of that data part into the data address of a data part of the second volume format LDEV in order to execute input/output processing with respect to the data part. | 2013-10-31 |
20130290664 | Methods and Apparatus for Managing Asynchronous Dependent I/O for a Virtual Fibre Channel Target - A system and method for arbitrating exchange identifier assignments for I/O operations are disclosed. In an exemplary embodiment, the method comprises receiving, by a storage system, a data command from a host system. The data command is directed to a virtual device of the storage system, the virtual device comprising a plurality of physical devices of the storage system. A range of exchange identifier values are allocated to the data command. The range may include a predefined number of exchange identifiers, the predefined number determined prior to the receiving of the data command. A plurality of I/O operations corresponding to the data command are issued, where each of the plurality of I/O operations is directed to a physical device of the plurality of physical devices of the storage system. An exchange identifier within the range of exchange identifier values is associated with each of the plurality of I/O operations. | 2013-10-31 |
20130290665 | STORING LARGE OBJECTS ON DISK AND NOT IN MAIN MEMORY OF AN IN-MEMORY DATABASE SYSTEM - A method, computer program product and system are provided. The method, computer program product and system execute a process for determining a size of an object, the object having raw data that is operable upon by one or more physical operators. If the object is smaller than a threshold size, the object is stored in main memory of an in-memory database system. If the object is equal to or larger than the threshold size, the object is stored in a persistency of a disk storage, where storing the object in a disk storage further includes generating a global container identifier (ID) for the object, the global container ID referencing raw data of the object stored in the persistency of the disk storage. | 2013-10-31 |
20130290666 | Demand-Based Memory Management of Non-pagable Data Storage - Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage by providing indirect access to the pinned and allocated pages by one or more user processes via handles, while preventing direct access of the pinned and allocated pages by the user processes without use of the handles; periodically scanning hardware status bits in the inodes to determine which of the pinned and allocated pages have been accessed within a pre-determined period of time; requesting, via a callback communication to each user process, a determination of which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and, responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating the pages corresponding to the page indicators. | 2013-10-31 |
20130290667 | SYSTEMS AND METHODS FOR S-LIST PARTITIONING - Systems and techniques for managing the allocation of a plurality of memory elements stored within a plurality of lockless list structures are presented. These lockless list structures (such as Slists) may be made accessible within an operating system environment of a multicore processor, and may be partitioned within the system. Memory elements may also be partitioned among these lockless list structures. When a core processor (or other processing element) makes a request for allocating a memory element to itself, the system and/or method may search among the lockless list structures for an available memory element. When a suitable and/or available memory element is found, the system may allocate the available memory element to the requesting core processor. Dynamic balancing of memory elements may occur according to a suitable balancing metric, such as maintaining substantially equal numbers of memory elements across structures or avoiding over-allocation of resources. | 2013-10-31 |
20130290668 | METHOD AND APPARATUS FOR ADJUSTABLE VIRTUAL ADDRESSING FOR DATA STORAGE - Methods and apparatuses for adjusting the size of a virtual band or virtual zone of a storage medium are provided. In one embodiment, an apparatus may comprise a data storage device including a data storage medium having a physical zone; and a processor configured to receive a virtual addressing adjustment command, and adjust a number of virtual addresses in a virtual band mapped to the physical zone based on the virtual addressing adjustment command. In another embodiment, a method may comprise providing a data storage device configured to implement virtual addresses associated with a virtual band mapped to a physical zone of a data storage medium of the data storage device, receiving at the data storage device a virtual addressing adjustment command, and adjusting a number of virtual addresses in a virtual band based on the virtual addressing adjustment command. | 2013-10-31 |
20130290669 | PHYSICAL MEMORY USAGE PREDICTION - In general, in one aspect, the invention relates to a system that includes memory and a prediction subsystem. The memory includes a first memgroup and a second memgroup, wherein the first memgroup comprises a first physical page and a second physical page, wherein the first physical page is a first subtype, and wherein the second physical page is a second subtype. The prediction subsystem is configured to obtain a status value indicating an amount of freed physical pages on the memory, store the status value in a sample buffer comprising a plurality of previous status values, determine, using the status value and the plurality of previous status values, a deficiency subtype state for the first subtype based on an anticipated need for the first subtype on the memory, and instruct, based on the determination, an allocation subsystem to coalesce the second physical page to the first subtype. | 2013-10-31 |
20130290670 | MEMORY RANGE PREFERRED SIZES AND OUT-OF-BOUNDS COUNTS - A system that includes a memory, a tilelet data structure entry, a first tile freelist, and an allocation subsystem. The memory includes a first tilelet on a first tile. The tilelet data structure entry includes a first tilelet preferred pagesize assigned to a first value. The first tile freelist for the first tile includes a first tile in-bounds page freelist, and a first tile out-of-bounds page freelist. The allocation subsystem is configured to detect that a first physical page is freed, store, in the first tile in-bounds page freelist, a first page data structure, detect that a second physical page is freed, store, in the first tile out-of-bounds page freelist, a second page data structure, and coalesce the memory using the second page and at least one of the physical pages associated with the plurality of out-of-bounds page data structures into a third physical page. | 2013-10-31 |
20130290671 | Emulating Execution of a Perform Frame Management Instruction - What is disclosed is a frame management function defined for a machine architecture of a computer system. In one embodiment, a frame management instruction is obtained which identifies a first and second general register. The first general register contains a frame management field having a key field with access-protection bits and a block-size indication. If the block-size indication indicates a large block then an operand address of a large block of data is obtained from the second general register. The large block of data has a plurality of small blocks each of which is associated with a corresponding storage key having a plurality of storage key access-protection bits. If the block size indication indicates a large block, the storage key access-protection bits of each corresponding storage key of each small block within the large block is set with the access-protection bits of the key field. | 2013-10-31 |
20130290672 | APPARATUS AND METHOD OF MASK PERMUTE INSTRUCTIONS - An apparatus is described having instruction execution logic circuitry. The instruction execution logic circuitry has input vector element routing circuitry to perform the following for each of three different instructions: for each of a plurality of output vector element locations, route into an output vector element location an input vector element from one of a plurality of input vector element locations that are available to source the output vector element. The output vector element and each of the input vector element locations are one of three available bit widths for the three different instructions. The apparatus further includes masking layer circuitry coupled to the input vector element routing circuitry to mask a data structure created by the input vector routing element circuitry. The masking layer circuitry is designed to mask at three different levels of granularity that correspond to the three available bit widths. | 2013-10-31 |
20130290673 | PERFORMING A DETERMINISTIC REDUCTION OPERATION IN A PARALLEL COMPUTER - Performing a deterministic reduction operation in a parallel computer that includes compute nodes, each of which includes computer processors and a CAU (Collectives Acceleration Unit) that couples computer processors to one another for data communications, including organizing processors and a CAU into a branched tree topology in which the CAU is a root and the processors are children; receiving, from each of the processors in any order, dummy contribution data, where each processor is restricted from sending any other data to the root CAU prior to receiving an acknowledgement of receipt from the root CAU; sending, by the root CAU to the processors in the branched tree topology, in a predefined order, acknowledgements of receipt of the dummy contribution data; receiving, by the root CAU from the processors in the predefined order, the processors' contribution data to the reduction operation; and reducing, by the root CAU, the processors' contribution data. | 2013-10-31 |
20130290674 | Modeling Structured SIMD Control Flow Constructs in an Explicit SIMD Language - Constructs may express SIMD control flow that can be efficiently implemented on a SIMD machine with support for SIMD control flow. The execution semantics of the constructs serve as a functional specification for an emulation implementation on the central processing unit (CPU), a non-SIMD machine, using a conventional C++ compiler such as GCC or Microsoft Visual C++ without any modification to the conventional compiler in some embodiments. | 2013-10-31 |
20130290675 | MITIGATION OF THREAD HOGS ON A THREADED PROCESSOR - Systems and methods for efficient thread arbitration in a threaded processor with dynamic resource allocation. A processor includes a resource shared by multiple threads. The resource includes an array with multiple entries, each of which may be allocated for use by any thread. Control logic detects a load miss to memory, wherein the miss is associated with a latency greater than a given threshold. The load instruction or an immediately younger instruction is selected for replay for an associated thread. A pipeline flush and replay for the associated thread begins with the selected instruction. Instructions younger than the load instruction are held at a given pipeline stage until the load instruction completes. During replay, this hold prevents resources from being allocated to the associated thread while the load instruction is being serviced. | 2013-10-31 |
20130290676 | BRANCH PREDICTION POWER REDUCTION - In one embodiment, a microprocessor is provided. The microprocessor includes a branch prediction unit. The branch prediction unit is configured to track the presence of branches in instruction data that is fetched from an instruction memory after a redirection at the target of a predicted taken branch. The branch prediction unit is selectively powered up from a powered-down state when the fetched instruction data includes a branch instruction and is maintained in the powered-down state when the fetched instruction data does not include a branch instruction, in order to reduce power consumption of the microprocessor during instruction fetch operations. | 2013-10-31 |
20130290677 | EFFICIENT EXTRACTION OF EXECUTION SETS FROM FETCH SETS - An apparatus having a buffer and a circuit is disclosed. The buffer may be configured to store a plurality of fetch sets. Each fetch set generally includes a prefix word and a plurality of instruction words. Each prefix word may include a plurality of symbols. Each symbol generally corresponds to a respective one of the instruction words. The circuit may be configured to (i) identify each of the symbols in each of the fetch sets having a predetermined value and (ii) parse the fetch sets into a plurality of execution sets in response to the symbols having the predetermined value. | 2013-10-31 |
20130290678 | INSTRUCTION AND LOGIC TO LENGTH DECODE X86 INSTRUCTIONS - Techniques to increase the consumption rate of raw instruction bytes within an instruction fetch unit. An instruction fetch unit according to embodiments of the present invention may include a prefetch buffer, a set of bypass multiplexers, an array of bypass latches, a byte-block multiplexer, an instruction alignment multiplexer, a predecode cache, and an instruction length decoder. Raw instruction bytes may be steered from the bypass latches into macro-instructions for consumption by the instruction length decoder, which may generate micro-instructions from the macro-instructions. Embodiments of the present invention may de-couple a latency for reading raw instruction bytes from the prefetch buffer from consuming raw instruction bytes by the instruction length decoder. | 2013-10-31 |
20130290679 | NEXT BRANCH TABLE FOR USE WITH A BRANCH PREDICTOR - A data processing system | 2013-10-31 |
20130290680 | OPTIMIZING REGISTER INITIALIZATION OPERATIONS - A system and method for efficiently reducing the latency of initializing registers. A register rename unit within a processor determines whether prior to an execution pipeline stage it is known a decoded given instruction writes a particular numerical value in a destination operand. An example is a move immediate instruction that writes a value of 0 in its destination operand. Other examples may also qualify. If the determination is made, a given physical register identifier is assigned to the destination operand, wherein the given physical register identifier is associated with the particular numerical value, but it is not associated with an actual physical register in a physical register file. The given instruction is marked to prevent it from proceeding to an execution pipeline stage. When the given physical register identifier is used to read the physical register file, no actual physical register is accessed. | 2013-10-31 |
20130290681 | REGISTER FILE POWER SAVINGS - A system and method for efficiently reducing the power consumption of register file accesses. A processor is operable to execute instructions with two or more data types, each with an associated size and alignment. Data operands for a first data type use operand sizes equal to an entire width of a physical register within a physical register file. Data operands for a second data type use operand sizes less than an entire width of a physical register. Accesses of the physical register file for operands associated with a non-full-width data type do not access a full width of the physical registers. A given numerical value may be bypassed for the portion of the physical register that is not accessed. | 2013-10-31 |
20130290682 | COMPRESSED INSTRUCTION FORMAT - A technique for decoding an instruction in a variable-length instruction set. In one embodiment, an instruction encoding is described, in which legacy, present, and future instruction set extensions are supported, and increased functionality is provided, without expanding the code size and, in some cases, reducing the code size. | 2013-10-31 |
20130290683 | Eliminating Redundant Masking Operations in Instruction Processing Circuits, and Related Processor Systems, Methods, and Computer-Readable Media - Eliminating redundant masking operations in instruction processing circuits and related processor systems, methods, and computer-readable media are disclosed. In one embodiment, a first instruction in an instruction stream indicating an operation writing a value to a first register is detected by an instruction processing circuit, the value having a value size less than a size of the first register. The circuit also detects a second instruction in the instruction stream indicating a masking operation on the first register. The masking operation is eliminated upon a determination that the masking operation indicates a read operation and a write operation on the first register and has an identity mask size equal to or greater than the value size. In this manner, the elimination of the masking operation avoids potential read-after-write hazards and improves performance of a CPU by removing redundant operations from an execution pipeline. | 2013-10-31 |
20130290684 | DATA PACKET ARITHMETIC LOGIC DEVICES AND METHODS - New instruction definitions for a packet add (PADD) operation and for a single instruction multiple data add (SMAD) operation are disclosed. In addition, a new dedicated PADD logic device that performs the PADD operation in about one to two processor clock cycles is disclosed. Also, a new dedicated SMAD logic device that performs the SMAD operation in about one to two clock cycles is disclosed. | 2013-10-31 |
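The essence of a packed add is lanewise addition with carries suppressed between lanes. The following is an illustrative software model under assumed parameters (8-bit lanes, 4 lanes per word); it is not the dedicated logic device described above:

```python
# Model a packed add: treat one machine word as independent 8-bit lanes
# and discard any carry that would propagate from one lane into the next.
LANE_BITS = 8
LANE_MASK = (1 << LANE_BITS) - 1

def padd(a, b, lanes=4):
    result = 0
    for i in range(lanes):
        shift = i * LANE_BITS
        lane_sum = ((a >> shift) & LANE_MASK) + ((b >> shift) & LANE_MASK)
        result |= (lane_sum & LANE_MASK) << shift  # mask drops inter-lane carry
    return result
```

For example, adding 0x01 to a lane holding 0xFF wraps that lane to 0x00 without disturbing its neighbors, which is exactly what a carry-propagating scalar add would get wrong.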
20130290685 | FLOATING POINT ROUNDING PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A method of an aspect includes receiving a floating point rounding instruction. The floating point rounding instruction indicates a source of one or more floating point data elements, indicates a number of fraction bits after a radix point that each of the one or more floating point data elements are to be rounded to, and indicates a destination storage location. A result is stored in the destination storage location in response to the floating point rounding instruction. The result includes one or more rounded result floating point data elements. Each of the one or more rounded result floating point data elements includes one of the floating point data elements of the source, in a corresponding position, which has been rounded to the indicated number of fraction bits. Other methods, apparatus, systems, and instructions are disclosed. | 2013-10-31 |
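Rounding to a fixed number of fraction bits after the radix point can be modeled by scaling, rounding to an integer, and unscaling. This plain-Python sketch is an assumption about the arithmetic for one element, not the vector instruction or its hardware rounding modes:

```python
# Round a value so that it keeps n_bits binary digits after the radix point.
def round_to_fraction_bits(value, n_bits):
    scale = 1 << n_bits                  # 2**n_bits
    return round(value * scale) / scale  # nearest multiple of 2**-n_bits
```

Note that Python's built-in `round` uses round-half-to-even; a hardware instruction would typically select among several rounding modes.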
20130290686 | INTEGRATED CIRCUIT DEVICE AND METHOD FOR CALCULATING A PREDICATE VALUE - An integrated circuit device comprises at least one instruction processing module arranged to perform branch predication. The at least one instruction processing module comprises at least one predicate calculation module arranged to receive as an input at least one result vector for a predicate function and at least one conditional parameter value therefor and output a predicate result value from the at least one result vector based at least partly on the at least one received conditional parameter value. | 2013-10-31 |
20130290687 | APPARATUS AND METHOD OF IMPROVED PERMUTE INSTRUCTIONS - An apparatus is described having instruction execution logic circuitry. The instruction execution logic circuitry has input vector element routing circuitry to perform the following for each of three different instructions: for each of a plurality of output vector element locations, route into an output vector element location an input vector element from one of a plurality of input vector element locations that are available to source the output vector element. The output vector element and each of the input vector element locations are one of three available bit widths for the three different instructions. The apparatus further includes masking layer circuitry coupled to the input vector element routing circuitry to mask a data structure created by the input vector routing element circuitry. The masking layer circuitry is designed to mask at three different levels of granularity that correspond to the three available bit widths. | 2013-10-31 |
20130290688 | Method of Concurrent Instruction Execution and Parallel Work Balancing in Heterogeneous Computer Systems - Embodiments of the present invention provide for concurrent instruction execution in heterogeneous computer systems by forming a parallel execution context whenever a first software thread encounters a parallel execution construct. The parallel execution context may comprise a reference to instructions to be executed concurrently, a reference to data said instructions may depend on, and a parallelism level indicator whose value specifies the number of times said instructions are to be executed. The first software thread may then signal to other software threads to begin concurrent execution of instructions referenced in said context. Each software thread may then decrease the parallelism level indicator, copy data referenced in the parallel execution context to said thread's private memory location, and modify said data to accommodate the new location. Software threads may be executed by a processor and operate on behalf of other processing devices or remote computer systems. | 2013-10-31 |
20130290689 | EFFICIENT RECORDING AND REPLAYING OF NON-DETERMINISTIC INSTRUCTIONS IN A VIRTUAL MACHINE AND CPU THEREFOR - The output of a non-deterministic instruction is handled during record and replay in a virtual machine. An output of a non-deterministic instruction is stored to a buffer during record mode and retrieved from a buffer during replay mode without exiting to the hypervisor. At least part of the contents of the buffer can be stored to a log when the buffer is full during record mode, and the buffer can be replenished from a log when the buffer is empty during replay mode. | 2013-10-31 |
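The buffer-and-log scheme in 20130290689 can be sketched as follows: record mode appends non-deterministic outputs to an in-memory buffer and flushes it to a log when full; replay mode refills the buffer from the log when empty. The class name, capacity handling, and list-backed storage are assumptions for illustration, not the VM's actual data structures.

```python
class NDBuffer:
    """Toy record/replay buffer for non-deterministic instruction outputs."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = []   # fast-path buffer, accessed without a hypervisor exit
        self.log = []   # slow-path log, stands in for hypervisor-managed storage

    def record(self, value):
        """Record mode: store an output; flush to the log when the buffer fills."""
        self.buf.append(value)
        if len(self.buf) == self.capacity:
            self.log.extend(self.buf)
            self.buf.clear()

    def replay(self):
        """Replay mode: return the next output, refilling from the log when empty."""
        if not self.buf:
            self.buf, self.log = self.log[:self.capacity], self.log[self.capacity:]
        return self.buf.pop(0)
```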
20130290690 | Cloud Based Master Data Management System with Configuration Advisor and Method Therefor - A system includes a data store, a data registry, an interface process module, a suggestion database, and a configuration module. The data registry establishes storage of a data record at the data store and maintains links identifying relationships between the data record and a source record at a first source enterprise and between the data record and a source record at a second source enterprise. The interface process module determines that a value at the data record has been updated, and provides the updated value to a corresponding source enterprise using an Internet protocol. The suggestion database includes configuration information associated with multiple users of the system. The configuration module provides Internet access to facilitate configuration of the system by a user and to provide configuration guidance to the user based on the configuration information. | 2013-10-31 |
20130290691 | Electronic Device That Operates In Two Modes Based on Connection to Power Supply and Command Information - An information processing system has a power supply section which detects a predetermined potential applied to a USB terminal and supplies the potential as a source potential, an information detection section which detects predetermined information supplied to the USB terminal, and a processing section which executes, subsequent to the detection of the predetermined potential, the encoding process or the decoding process in accordance with at least the operating information supplied from the operation key arranged on the body and in accordance with the predetermined information supplied to the USB terminal after detection of the predetermined information. The recording and reproducing operation can be performed with the operating key on the body with power supplied only from the USB terminal. | 2013-10-31 |
20130290692 | Method and Apparatus for the Definition and Generation of Configurable, High Performance Low-Power Embedded Microprocessor Cores - A system and method for configuring a microprocessor core may allow a microprocessor core to be configurable. Configuration may be dynamic or automatic using an application program. Microprocessor memory, decoding units, arithmetic logic units, register banks, storage, register bypass units, and a user interface may be configured. The configuration may also be used to optimize an instruction set to run on the microprocessor core. | 2013-10-31 |
20130290693 | Method and Apparatus for the Automatic Generation of RTL from an Untimed C or C++ Description as a Fine-Grained Specialization of a Micro-processor Soft Core - A system and method for configuring a register transfer level description from a programming language may utilize a configurable microprocessor core. A compiler may compile the programming language using performance statistics and user constraints. A template processor may translate the programming language into register transfer level description language using template files. Timing and area constraints may be applied prior to outputting a gate-level netlist ready to place on a microchip. | 2013-10-31 |
20130290694 | SYSTEM AND METHOD FOR SECURE PROVISIONING OF VIRTUALIZED IMAGES IN A NETWORK ENVIRONMENT - An example method includes setting up a secure channel between a blade and a provisioning server in a network environment, downloading an image of a virtual machine monitor (VMM) from the provisioning server to the blade through the secure channel, and booting the image to instantiate the VMM on the blade. The blade and the provisioning server are mutually authenticated and authorized with a plurality of parameters. Booting the image may include loading the image on a memory element of the blade and transferring control to the image. In some embodiments, booting the image includes modifying a root file system of the image by adding a daemon such that an agent is included in the root file system. The agent can download another image corresponding to an operating system of a virtual machine. | 2013-10-31 |
20130290695 | POLICY UPDATE APPARATUS, POLICY MANAGEMENT SYSTEM, POLICY UPDATE METHOD, POLICY MANAGEMENT METHOD AND RECORDING MEDIUM - The present invention provides a policy update apparatus which can update a software policy appropriately in response to a configuration change in a resource. The policy update apparatus includes policy retrieving means which receives a resource identifier, reads out from resource management means an install destination resource identifier corresponding to the resource identifier (the resource management means storing a mapping between the resource identifier and the install destination resource identifier, which indicates the resource in which the resource identified by the resource identifier is installed), and reads out the software policy corresponding to the resource identifier and the install destination resource identifier; and policy update means which updates the software policy, correlates the updated software policy to the resource identifier and the install destination resource identifier, and stores it to policy storage means. | 2013-10-31 |
20130290696 | SECURE COMMUNICATIONS FOR COMPUTING DEVICES UTILIZING PROXIMITY SERVICES - Techniques are disclosed for establishing secure communications between computing devices utilizing proximity services in a communication system. For example, a method for providing secure communications in a communications system comprises the following steps. At least one key is sent from at least one network element of an access network to a first computing device and at least a second computing device. The first computing device and the second computing device utilize the access network to access the communication system and are authenticated by the access network prior to the key being sent. The key is useable by the first computing device and the second computing device to securely communicate with one another when in proximity of one another without communications between the first computing device and the second computing device going through the access network. | 2013-10-31 |
20130290697 | System and Method for Signaling Segment Encryption and Key Derivation for Adaptive Streaming - An apparatus for decoding a media stream, wherein the apparatus comprises a memory module, a processor module coupled to the memory module, wherein the memory module contains instructions that when executed by the processor cause the apparatus to perform the following: receive a media stream comprising segment signaling information and a plurality of segments, wherein the plurality of segments comprises encoded and unencoded segments, wherein the segment signaling information comprises identification of at least two segment groups each comprising at least one segment, identify at least one segment group using the segment signaling information in the media stream, identify at least one segment decoding algorithm for the at least one segment group, identify at least one decoding key for the at least one segment group, and decode each encoded segment within the at least one segment group using the at least one segment decoding algorithm and the at least one decoding key. | 2013-10-31 |
20130290698 | System and Method for Efficient Support for Short Cryptoperiods in Template Mode - System and method embodiments are provided herein for efficient representation and use of initialization vectors (IVs) for encrypted segments using template mode representation in Dynamic Adaptive Streaming over Hypertext Transfer Protocol (DASH). An embodiment method includes sending in a media presentation description (MPD), from a network server to a client, a template for generating a universal resource locator (URL) to obtain an IV that is used for encrypting a segment, in absence of an IV value in the MPD, receiving from the client a URL configured according to the template, and upon receiving the URL, returning an IV corresponding to the URL to the client. Another embodiment method includes receiving in a MPD, at a client from a network server, a template for generating a URL to obtain an IV that is used for encrypting a segment, upon detecting an absence of an IV value or IV base value in the MPD, configuring a URL for the IV using the template, sending the URL for the IV, and receiving an IV. | 2013-10-31 |
20130290699 | METHODS FOR SECURE COMMUNICATION BETWEEN NETWORK DEVICE SERVICES AND DEVICES THEREOF - A method, non-transitory computer readable medium, and network device that generates a network communication including a destination address associated with a second network device and a destination port number, wherein the destination port number corresponds to a service operating on the second network device. An initial SSL handshake protocol message is generated and at least the destination port number is inserted into a server name indicator (SNI) extension of the initial SSL handshake protocol message. An SSL connection is established with the second network device using a predetermined port number and the initial SSL handshake protocol message is sent to the second network device. Information included in the network communication is sent to the second network device using the SSL connection. | 2013-10-31 |
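The key move in 20130290699 is carrying the real destination port inside the SNI server-name field while the SSL connection itself uses a predetermined port. The abstract does not specify the wire encoding, so the `port.host` label form below is purely an assumption for illustration.

```python
def encode_sni(host, dest_port):
    """Hypothetical encoding: prepend the destination port to the SNI name."""
    return f"{dest_port}.{host}"

def decode_sni(sni):
    """Recover (host, dest_port) from the hypothetical SNI encoding above."""
    port, _, host = sni.partition(".")
    return host, int(port)
```

The receiving device would decode the SNI during the handshake and route the connection to the service listening on the recovered port.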
20130290700 | COMPUTATIONAL SYSTEMS AND METHODS FOR ENCRYPTING DATA FOR ANONYMOUS STORAGE - Methods, apparatuses, computer program products, devices and systems are described that carry out accepting from a user identifier encryption entity at least one encrypted identifier corresponding to a user having at least one instance of data for encryption; encrypting the at least one instance of data to produce level-one-encrypted data; associating the at least one encrypted identifier with the level-one-encrypted data, wherein a level-one decryption key for the level-one-encrypted data is inaccessible to the user identifier encryption entity; and transmitting the level-one-encrypted data and associated encrypted identifier. | 2013-10-31 |
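The layering in 20130290700 — level-one-encrypt the data, attach the already-encrypted identifier, and keep the level-one key away from the identifier-encryption entity — can be sketched with a toy stand-in cipher. The XOR keystream below is illustrative only, not a cipher the patent names.

```python
import hashlib

def keystream_xor(data, key):
    """Toy stream cipher (stand-in for a real one): XOR with a hash-derived keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def store_anonymously(encrypted_id, data, level_one_key):
    """Associate the pre-encrypted identifier with level-one-encrypted data.

    The caller never sees the user's plaintext identity, and the
    identifier-encryption entity never sees level_one_key.
    """
    return {"id": encrypted_id, "payload": keystream_xor(data, level_one_key)}
```

Because XOR is its own inverse, applying the same keystream again recovers the plaintext, mirroring level-one decryption by a party that holds the key.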
20130290701 | KEY SETTING METHOD, NODE, SERVER, AND NETWORK SYSTEM - A key setting method executed by a node within communication ranges of multiple ad-hoc networks, includes receiving encrypted packets encrypted by respective keys specific to gateways and broadcasted from the gateways in the ad-hoc networks; detecting connection with a mobile terminal communicable with a server retaining the keys specific to the gateways in each ad-hoc network among the ad-hoc networks; transmitting to the server when connection with the mobile terminal is detected, the encrypted packets via the mobile terminal; receiving from the server via the mobile terminal, the keys that are specific to the gateways in the ad-hoc networks and that are for decrypting each encrypted packet among the encrypted packets; and setting each of the received keys as a key to encrypt data that is to be encrypted in the node and decrypt data that is to be decrypted in the node. | 2013-10-31 |