35th week of 2014 patent application highlights part 73 |
Patent application number | Title | Published |
20140244908 | INTEGRATED CIRCUIT FOR COMPUTING TARGET ENTRY ADDRESS OF BUFFER DESCRIPTOR BASED ON DATA BLOCK OFFSET, METHOD OF OPERATING SAME, AND SYSTEM INCLUDING SAME - A method of operating an integrated circuit is provided. The method includes receiving a data block offset from a second storage device, obtaining a target entry address using the data block offset, and reading an entry among a plurality of entries comprised in a buffer descriptor stored in a first storage device based on the target entry address. The method also includes reading data from a data buffer among a plurality of data buffers included in the first storage device using a physical address included in the entry and transmitting the data to the second storage device. | 2014-08-28 |
20140244909 | SEMICONDUCTOR MEMORY DEVICE - According to one embodiment, a semiconductor memory device includes: string units including a plurality of memory cells stacked above a semiconductor substrate; and a control circuit configured to perform an erase operation per a block, the block including the string units, the control circuit being configured to perform an erase verify operation per string unit. | 2014-08-28 |
20140244910 | ELECTRONIC APPARATUS IMPLEMENTED WITH MICROPROCESSOR WITH REWRITABLE MICRO PROGRAM AND METHOD TO REWRITE MICRO PROGRAM - An intelligent optical transceiver whose micro program can be revised by the host system is disclosed. The optical transceiver includes a MDIO interface, a CPU, and a non-volatile memory. The host system may communicate with the CPU through an external MDIO bus, the MDIO interface, and an internal bus; while the CPU communicates with the non-volatile memory through another bus. The new micro program sent from the host system is temporarily stored in the non-volatile memory through the MDIO interface and the CPU, and finally set in the flash ROM in the CPU. | 2014-08-28 |
20140244911 | METHOD FOR PROGRAMMING A FLASH MEMORY - A method of programming a flash memory is described. The method includes partitioning a flash memory into a first group having a first level of write-protection, a second group having a second level of write-protection, and a third group having a third level of write-protection. The write-protection of the second and third groups is disabled using an installation adapter. The third group is programmed using a Software Installation Device. | 2014-08-28 |
20140244912 | Retired Page Utilization (RPU) for Improved Write Capacity of Solid State Drives - A method for writing data to a memory module, the method may include determining to write a representation of a data unit to a retired group of memory cells; searching for a selected retired group of memory cells that can store a representation of the data unit without being erased; and writing the representation of the data unit to the selected retired group of memory cells. | 2014-08-28 |
20140244913 | MEMORY SYSTEMS - Memory systems having a volatile memory, a non-volatile memory arranged in blocks, and a controller coupled to the volatile memory and to the non-volatile memory. The controller is configured to maintain, in the volatile memory, a list of addresses of erased blocks of the non-volatile memory. The list of addresses of erased blocks of the non-volatile memory is limited to a maximum number of list entries. The controller is further configured to transfer the list of addresses of erased blocks of the non-volatile memory from the volatile memory to the non-volatile memory in response to the list containing its maximum number of list entries and/or in response to an operation that would increase the number of list entries to a number equal to or greater than the maximum number of list entries. | 2014-08-28 |
20140244914 | MITIGATE FLASH WRITE LATENCY AND BANDWIDTH LIMITATION - A method of operating a memory system is provided. The method includes a controller that regulates read and write access to one or more FLASH memory devices that are employed for random access memory applications. A buffer component operates in conjunction with the controller to regulate read and write access to the one or more FLASH devices. Wear leveling components along with read and write processing components are provided to facilitate efficient operations of the FLASH memory devices. | 2014-08-28 |
20140244915 | PINNING CONTENT IN NONVOLATILE MEMORY - Systems and methods relating to pinning selected data to sectors in non-volatile memory. A graphical user interface allows a user to specify certain data (e.g., directories or files) to be pinned. A list of pinned sectors can be stored so that a driver or controller that operates on a sector basis and not a file or directory basis can identify data to be pinned. | 2014-08-28 |
20140244916 | VIRTUAL MEMORY MANAGEMENT APPARATUS - A virtual memory management apparatus of an embodiment is embedded in a computing machine. | 2014-08-28 |
20140244917 | METHODS AND SYSTEMS FOR REDUCING CHURN IN FLASH-BASED CACHE - A storage device includes a flash memory-based cache for a hard disk-based storage device and a controller that is configured to limit the rate of cache updates through a variety of mechanisms, including determinations that the data is not likely to be read back from the storage device within a time period that justifies its storage in the cache, compressing data prior to its storage in the cache, precluding storage of sequentially-accessed data in the cache and/or throttling storage of data to the cache within predetermined write periods and/or according to user instruction. | 2014-08-28 |
20140244918 | METHODS AND SYSTEMS FOR REDUCING CHURN IN FLASH-BASED CACHE - A storage device includes a flash memory-based cache for a hard disk-based storage device and a controller that is configured to limit the rate of cache updates through a variety of mechanisms, including determinations that the data is not likely to be read back from the storage device within a time period that justifies its storage in the cache, compressing data prior to its storage in the cache, precluding storage of sequentially-accessed data in the cache, and/or throttling storage of data to the cache within predetermined write periods and/or according to user instruction. | 2014-08-28 |
20140244919 | METHOD OF ERASING INFORMATION STORED IN A NONVOLATILE REWRITABLE MEMORY, STORAGE MEDIUM AND MOTOR VEHICLE COMPUTER - Method of erasing information stored in a nonvolatile rewritable memory of a computer, wherein a master module sends erasing requests to a slave module of the computer, the memory including at least two interleaved sectors. The method includes preliminary steps of determining a virtual memory addressing space associated with the memory, in which each sector extends over a specific range of consecutive virtual memory addresses, and establishing a first correspondence function for determining, from a range of virtual memory addresses, the sector or sectors whose contents must be erased, and for each erasing request received by the slave module indicating a range of virtual memory addresses, a step of determining the sector or sectors whose contents must be erased by the slave module. The memory includes a plurality of segments, each segment breaking down into a plurality of sectors and at least two segments including interleaved physical memory addresses. | 2014-08-28 |
20140244920 | SCHEME TO ESCALATE REQUESTS WITH ADDRESS CONFLICTS - Techniques for escalating a real time agent's request that has an address conflict with a best effort agent's request. A best effort request can be allocated in a memory controller cache but can progress slowly in the memory system due to its low priority. Therefore, when a real time request has an address conflict with an older best effort request, the best effort request can be escalated if it is still pending when the real time request is received at the memory controller cache. Escalating the best effort request can include setting the push attribute of the best effort request or sending another request with a push attribute to bypass or push the best effort request. | 2014-08-28 |
20140244921 | ASYMMETRIC MULTITHREADED FIFO MEMORY - A First-in First-out (FIFO) memory comprising a latch array and a RAM array and operable to buffer data for multiple threads. Each array is partitioned into multiple sections, and each array comprises a section designated to buffer data for a respective thread. A respective latch array section is assigned higher priority to receive data for a respective thread than the corresponding RAM array section. Incoming data for the respective thread are pushed into the corresponding latch array section while it has vacancies. Upon the latch array section becoming empty, incoming data are pushed into the corresponding RAM array section during a spill-over period. The RAM array section may comprise two spill regions with only one active to receive data at a spill-over period. The allocation of data among the latch array and the spill regions of the RAM array can be transparent to external logic. | 2014-08-28 |
20140244922 | MULTI-PURPOSE REGISTER PROGRAMMING VIA PER DRAM ADDRESSABILITY MODE - Embodiments of an apparatus, system and method for using Per DRAM Addressability (PDA) to program Multi-Purpose Registers (MPRs) of a dynamic random access memory (DRAM) device are described herein. Embodiments of the invention allow unique 32 bit patterns to be stored for each DRAM device on a rank, thereby enabling data bus training to be done in parallel. Furthermore, embodiments of the invention provide 32 bits of storage per DRAM device on a rank for the system BIOS for storing codes such as MR values, or for any other purpose (e.g., temporary scratch storage to be used by BIOS processes). | 2014-08-28 |
20140244923 | MEMORY CONTROLLER WITH CLOCK-TO-STROBE SKEW COMPENSATION - A clock signal is transmitted to first and second integrated circuit (IC) components via a clock signal line, the clock signal having a first arrival time at the first IC component and a second, later arrival time at the second IC component. A write command is transmitted to the first and second IC components to be sampled by those components at respective times corresponding to transitions of the clock signal, and write data is transmitted to the first and second IC components in association with the write command. First and second strobe signals are transmitted to the first and second IC components, respectively, to time reception of the first and second write data in those components. The first and second strobe signals are selected from a plurality of phase-offset timing signals to compensate for respective timing skews between the clock signal and the first and second strobe signals. | 2014-08-28 |
20140244924 | LOAD REDUCTION DUAL IN-LINE MEMORY MODULE (LRDIMM) AND METHOD FOR PROGRAMMING THE SAME - A method is disclosed for providing memory bus timing of a load reduction dual inline memory module (LRDIMM). The method includes: determining a latency value of a dynamic random access memory (DRAM) of the LRDIMM; determining a modified latency value of the DRAM that accounts for a delay caused by a load reduction buffer (LRB) that is deployed between the DRAM and a memory bus; storing the modified latency value in a serial presence detector (SPD) of the LRDIMM; and providing memory bus timing for the LRDIMM based on the modified latency value, wherein the memory bus timing is compatible with a registered dual inline memory module (RDIMM). | 2014-08-28 |
20140244925 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR TAMPER PROTECTION IN A DATA STORAGE SYSTEM - Systems, methods and computer software utilized in the implementation of tamper protection, where unique information associated with data storage tapes and with particular revisions of these tapes is stored on the storage medium itself and on a memory of the tape cartridge, so that the data can be compared to determine whether unauthorized modifications have been made to the tapes. One embodiment is a system which includes an archive node appliance coupled between a set of hosts and a tape media library. The archive node appliance presents files stored on a tape of a media library as a directory. The archive node appliance maintains tamper prevention data on the tape and on an auxiliary memory on the cartridge of the tape, and determines from this data whether the tape has been altered by an unauthorized system. | 2014-08-28 |
20140244926 | Dedicated Memory Structure for Sector Spreading Interleaving - The present disclosure is directed to a method for managing a memory. The method includes the step of receiving data, the data including a plurality of sectors. The method also includes the step of dividing each sector of the plurality of sectors into a plurality of data units. A further step of the method involves interleaving the plurality of data units to yield a plurality of interleaved data units. The method also includes the step of writing the plurality of interleaved data units to a disk. An additional step of the method is to de-spread the plurality of interleaved data units to yield at least one sector of the plurality of sectors. | 2014-08-28 |
20140244927 | STORAGE SYSTEM AND A METHOD FOR ALLOCATING DISK DRIVES TO REDUNDANCY ARRAY OF INDEPENDENT DISKS - A storage system that may include a management module; and multiple disk drives; wherein the management module is arranged to allocate disk drives of the multiple disk drives to disk drive groups, each disk drive group corresponds to at least one redundancy array of independent disks (RAID) group of data in response to at least one out of: locations of the disk drives within disk drive enclosures; and expected or actual temperatures of the disk drives. | 2014-08-28 |
20140244928 | METHOD AND SYSTEM TO PROVIDE DATA PROTECTION TO RAID 0 OR DEGRADED REDUNDANT VIRTUAL DISK - Disclosed is a system and method for providing redundancy to RAID 0 virtual disks by utilizing any right sized physical disk in the SAS domain. The system and method restore redundancy in a degraded redundant virtual disk. This may be done even in the absence of a configured hot spare. | 2014-08-28 |
20140244929 | OBJECT STORAGE SYSTEM - The storage system exports logical storage volumes that are provisioned as storage objects. These storage objects are accessed on demand by connected computer systems using standard protocols, such as SCSI and NFS, through logical endpoints for the protocol traffic that are configured in the storage system. Logical storage volumes are created from a logical storage container having an address space that maps to storage locations of the physical data storage units. Each of the logical storage volumes so created has an address space that maps to the address space of the logical storage container. A logical storage container may span more than one storage system and logical storage volumes of different customers can be provisioned from the same logical storage container with appropriate security settings. | 2014-08-28 |
20140244930 | ELECTRONIC DEVICES HAVING SEMICONDUCTOR MAGNETIC MEMORY UNITS - An electronic device comprising a semiconductor memory unit that includes a resistance variable element configured to be changed in a resistance value according to a value of data stored therein; a first reference resistance element having a first resistance value; a second reference resistance element having a second resistance value larger than the first resistance value; and a comparison unit configured to receive a voltage corresponding to the resistance value of the resistance variable element through a first input terminal and a second input terminal thereof, a voltage corresponding to the first resistance value of the first reference resistance element through a third input terminal, and a voltage corresponding to the second resistance value of the second reference resistance element through a fourth input terminal, the comparison unit configured to output a result of comparing inputs to the first input terminal and the second input terminal and inputs to the third input terminal and fourth input terminal. | 2014-08-28 |
20140244931 | ELECTRONIC DEVICE - An electronic device comprising a semiconductor memory unit that may include a cell array including a plurality of storage cells; a first line connected to one ends of the plurality of storage cells; a second line connected to the other ends of the plurality of storage cells; a first driver connected to one end of the first line at a first contact location on one side of the cell array, and configured to apply a first electrical signal to the one end of the first line; and a second driver connected to one end of the second line at a second contact location on a side of the cell array opposing the side of the cell array where the first contact location is located, and configured to apply a second electrical signal to the one end of the second line. | 2014-08-28 |
20140244932 | METHOD AND APPARATUS FOR CACHING AND INDEXING VICTIM PRE-DECODE INFORMATION - The present invention provides a method and apparatus for caching pre-decode information. Some embodiments of the apparatus include a first pre-decode array configured to store pre-decode information for an instruction cache line that is resident in a first cache in response to the instruction cache line being evicted from one or more second cache(s). Some embodiments of the apparatus also include a second array configured to store a plurality of bits associated with the first cache. Subsets of the bits are configured to store pointers to the pre-decode information associated with the instruction cache line. | 2014-08-28 |
20140244933 | Way Lookahead - Methods and systems that identify and power up ways for future instructions are provided. A processor includes an n-way set associative cache and an instruction fetch unit. The n-way set associative cache is configured to store instructions. The instruction fetch unit is in communication with the n-way set associative cache and is configured to power up a first way, where a first indication is associated with an instruction and indicates the way where a future instruction is located and where the future instruction is two or more instructions ahead of the current instruction. | 2014-08-28 |
20140244934 | STORAGE APPARATUS - A storage apparatus capable of preventing degradation of processing performance when transferring data of a record format to a mainframe is proposed. | 2014-08-28 |
20140244935 | STORAGE SYSTEM CAPABLE OF MANAGING A PLURALITY OF SNAPSHOT FAMILIES AND METHOD OF SNAPSHOT FAMILY BASED READ - A method for a snapshot family based reading of data units from a storage system, the method comprises: receiving a read request for reading a requested data entity, searching in a cache memory of the storage system for a matching cached data entity, if not finding the matching cached data entity then: searching for one or more relevant data entity candidates stored in the storage system; selecting, out of the one or more relevant data entity candidates, a selected relevant data entity that has a content that has a highest probability, out of contents of the one or more relevant data entity candidates, to be equal to the content of the requested data entity; and responding to the read request by sending the selected relevant data entity. | 2014-08-28 |
20140244936 | MAINTAINING CACHE COHERENCY BETWEEN STORAGE CONTROLLERS - Systems and methods maintain cache coherency between storage controllers utilizing bitmap data. In one embodiment, a storage controller processes an I/O request for a logical volume from a host, and generates one or more cache entries in a cache memory that is based on the request. The storage controller identifies a backup storage controller for managing the logical volume, and generates bitmap data that identifies cache entries in the cache memory that have changed since synchronizing with the backup storage controller. The storage controller provides the bitmap data to the backup storage controller to allow the backup storage controller to synchronize its cache memory with the cache memory of the storage controller based on the bitmap data. | 2014-08-28 |
20140244937 | Read Ahead Tiered Local and Cloud Storage System and Method Thereof - A high tier storage area stores a stub file and a lower tier cloud storage area stores the file corresponding to the stub file. When a client apparatus requests segments of the file from the high tier storage area, reference is made to the stub file to determine a predicted non-sequential pattern of requests to the segments by the client apparatus. The high tier storage area follows the predicted non-sequential pattern of requests to retrieve the segments of the file from the cloud prior to the client apparatus actually requesting the segments. As such, the file may be efficiently provided to the client apparatus while also efficiently storing the file on the lower tier cloud storage area. | 2014-08-28 |
20140244938 | Method and Apparatus for Returning Reads in the Presence of Partial Data Unavailability - Techniques are disclosed for reducing perceived read latency. Upon receiving a read request with a scatter-gather array from a guest operating system running on a virtual machine (VM), an early read return virtualization (ERRV) component of a virtual machine monitor fills the scatter-gather array with data from a cache and data retrieved via input-output requests (IOs) to media. The ERRV component is configured to return the read request before all IOs have completed based on a predefined policy. Prior to returning the read, the ERRV component may unmap unfilled pages of the scatter-gather array until data for the unmapped pages becomes available when IOs to the external media complete. Later accesses to unmapped pages will generate page faults, which are handled by stunning the VMs from which the access requests originated until, e.g., all elements of the SG array are filled and all pages of the SG array are mapped. | 2014-08-28 |
20140244939 | TEXTURE CACHE MEMORY SYSTEM OF NON-BLOCKING FOR TEXTURE MAPPING PIPELINE AND OPERATION METHOD OF TEXTURE CACHE MEMORY - A non-blocking texture cache memory for a texture mapping pipeline and an operation method of the non-blocking texture cache memory may include: a retry buffer configured to temporarily store result data according to a hit pipeline or a miss pipeline; a retry buffer lookup unit configured to look up the retry buffer in response to a texture request transferred from a processor; a verification unit configured to verify whether result data corresponding to the texture request is stored in the retry buffer as the lookup result; and an output control unit configured to output the stored result data to the processor when the result data corresponding to the texture request is stored as the verification result. | 2014-08-28 |
20140244940 | AFFINITY GROUP ACCESS TO GLOBAL DATA - A method, system, and computer readable medium to share data on a global basis within a symmetric multiprocessor (SMP) computer system are disclosed. The method may include grouping a plurality of processor cores into a plurality of affinity groups. Global data may be copied into a plurality of group data structures. Each group data structure may correspond to an affinity group. The method may read a first group data structure by a thread executing on a processor core associated with a first affinity group. | 2014-08-28 |
20140244941 | AFFINITY GROUP ACCESS TO GLOBAL DATA - A method, system, and computer readable medium to share data on a global basis within a symmetric multiprocessor (SMP) computer system are disclosed. The method may include grouping a plurality of processor cores into a plurality of affinity groups. The method may include creating hints about the global data in the plurality of group data structures. Each group data structure may correspond to an affinity group. The method may read a first group data structure by a thread executing on a processor core associated with a first affinity group. | 2014-08-28 |
20140244942 | AFFINITY GROUP ACCESS TO GLOBAL DATA - A method, system, and computer readable medium to share data on a global basis within a symmetric multiprocessor (SMP) computer system are disclosed. The method may include grouping a plurality of processor cores into a plurality of affinity groups. Global data may be copied into a plurality of group data structures. Each group data structure may correspond to an affinity group. The method may read a first group data structure by a thread executing on a processor core associated with a first affinity group. | 2014-08-28 |
20140244943 | AFFINITY GROUP ACCESS TO GLOBAL DATA - A method, system, and computer readable medium to share data on a global basis within a symmetric multiprocessor (SMP) computer system are disclosed. The method may include grouping a plurality of processor cores into a plurality of affinity groups. The method may include creating hints about the global data in the plurality of group data structures. Each group data structure may correspond to an affinity group. The method may read a first group data structure by a thread executing on a processor core associated with a first affinity group. | 2014-08-28 |
20140244944 | WAIT-FREE ALGORITHM FOR INTER-CORE, INTER-PROCESS, OR INTER-TASK COMMUNICATION - A method and system are presented for providing deterministic inter-core, inter-process, and inter-thread communication between a reader and a writer. The reader and writer communicate by passing data through a shared memory using double buffering of double buffers. The shared memory includes a first double buffer and a second double buffer. Both double buffers include a first low level buffer and a second low level buffer. Using double buffering of the double buffers, both the reader and the writer may simultaneously access the shared memory. | 2014-08-28 |
20140244945 | ELECTRONIC DEVICE AND METHOD FOR OPERATING ELECTRONIC DEVICE - An electronic device comprising a semiconductor memory unit that may include a variable resistance element configured to be changed in its resistance value in response to current flowing through both ends thereof; an information storage unit configured to store switching frequency information corresponding to a switching frequency which minimizes an amplitude of a voltage to be applied to both ends of the variable resistance element to change the resistance value of the variable resistance element and switching amplitude information corresponding to a minimum amplitude; and a driving unit configured to generate a driving voltage with the switching frequency and the minimum amplitude in response to the switching frequency information and the switching amplitude information and apply the driving voltage to both ends of the variable resistance element. | 2014-08-28 |
20140244946 | CROSS-POINT RESISTIVE-BASED MEMORY ARCHITECTURE - A plurality of addressable memory tiles each comprise one or more cross-point arrays. Each array comprises a plurality of non-volatile resistance-change memory cells. A controller is configured to couple to the array and to a host system. The controller is configured to perform receiving, from the host system, one or more data objects each having a size equal to a predetermined logical block size, and storing the one or more data objects in a corresponding integer number of one or more of the memory tiles. | 2014-08-28 |
20140244947 | MEMORY, MEMORY SYSTEM INCLUDING THE SAME, AND OPERATION METHOD OF MEMORY CONTROLLER - A memory system includes a memory including a condition detection circuit configured to detect a memory condition, and a condition output circuit configured to output the memory condition detected by the condition detection circuit. A memory controller is configured to adjust operational performance of the memory in response to the memory condition. | 2014-08-28 |
20140244948 | MEMORY HAVING INTERNAL PROCESSORS AND METHODS OF CONTROLLING MEMORY ACCESS - Memories having internal processors and methods of data communication within such memories are provided. One such memory may include a fetch unit configured to substantially control performing commands on a memory array based on the availability of banks to be accessed. The fetch unit may receive instructions including commands indicating whether data is to be read from or written to a bank, and the address of the data to be read from or written to the bank. The fetch unit may perform the commands based on the availability of the bank. In one embodiment, control logic communicates with the fetch unit when an activated bank is available. In another implementation, the fetch unit may wait for a bank to become available based on timers set to when a previous command in the activated bank has been performed. | 2014-08-28 |
20140244949 | Asynchronous Data Mirroring in Memory Controller - A method for mirroring data between virtual machines includes intercepting a write command initiated from a virtual machine. Address and data information from the intercepted write command is stored within a queue located within a memory buffer of the primary server. The stored address and data information is transferred, upon filling the queue of the memory buffer of the primary server to a predetermined level, to a dedicated region of the memory of the primary server. The stored address and data information is sent from the dedicated region of the memory of the primary server to a backup server upon filling of the dedicated region of the memory of the primary server to a predetermined level. | 2014-08-28 |
20140244950 | CLONING LIVE VIRTUAL MACHINES - A system and method are disclosed for cloning a live virtual machine (i.e., a virtual machine that is running). In accordance with one example, a computer system prepares an area of a storage device for a clone of a live virtual machine, and a transaction is then executed that comprises: creating the clone of the live virtual machine based on a live snapshot of the live virtual machine, copying the clone to the area of the storage device, and mirroring a change to a virtual disk of the live virtual machine that occurs after the live snapshot is created, wherein the mirroring is via one or more write operations to the virtual disk and to a replica of the virtual disk associated with the clone. | 2014-08-28 |
20140244951 | LIVE SNAPSHOTTING OF MULTIPLE VIRTUAL DISKS IN NETWORKED SYSTEMS - A system and method are disclosed for servicing requests to create live snapshots of a plurality of virtual disks in a virtualized environment. In accordance with one example, a first computer system detects that a second computer system has issued one or more commands to create a first snapshot of a first virtual disk of a virtual machine and a second snapshot of a second virtual disk of the virtual machine while the virtual machine is running on the second computer system. In response to a determination that the creating of the second snapshot failed, the first computer system issues one or more commands to destroy the first snapshot and deallocate an area of a storage device that stores the first snapshot. | 2014-08-28 |
20140244952 | SYSTEM AND METHOD FOR A SCALABLE CRASH-CONSISTENT SNAPSHOT OPERATION - Described herein is a system and method for a scalable crash-consistent snapshot operation. Write requests may be received from an application and a snapshot creation request may further be received. Write requests received before the snapshot creation request may be associated with pre-snapshot tags and write requests received after the snapshot creation request may be associated with post-snapshot tags. Furthermore, in response to the snapshot creation request, logical interfaces may begin to be switched from a pre-snapshot configuration to a post-snapshot configuration. The snapshot may then be created based on the pre-snapshot write requests and the post-snapshot write requests may be suspended until the logical interfaces have switched configuration. | 2014-08-28 |
20140244953 | IDENTIFYING AND ACCESSING REFERENCE DATA IN AN IN-MEMORY DATA GRID - Embodiments relate to providing normalization techniques for reference data in an in-memory data grid. An aspect includes monitoring object creation and access in an in-memory data grid and identifying reference data in an object field of a plurality of object instances. A reference map for the object field is created and the reference map is replicated across all partitions of the in-memory data grid. The reference data of an embodiment is stored in the reference map and the object field is updated to identify the reference map. Accordingly, the reference data may be accessed using the created reference map. | 2014-08-28 |
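The normalization idea in the abstract above can be illustrated with a short sketch (hypothetical names; an analogy to the described reference map, not the patented implementation): repeated field values across many object instances are replaced by small keys into a shared map, so each distinct value is stored once and the map itself is what gets replicated across partitions.

```python
# Illustrative sketch of reference-data normalization (assumed design):
# intern repeated field values into a shared map and store only keys.

class ReferenceMap:
    def __init__(self):
        self._by_value = {}   # value -> key
        self._by_key = []     # key -> value

    def intern(self, value):
        """Return a small key for value, storing the value only once."""
        key = self._by_value.get(value)
        if key is None:
            key = len(self._by_key)
            self._by_value[value] = key
            self._by_key.append(value)
        return key

    def lookup(self, key):
        """Resolve a key back to the shared reference value."""
        return self._by_key[key]

# Normalize a repeated "country" field across many records.
ref_map = ReferenceMap()
records = [{"name": n, "country": ref_map.intern(c)}
           for n, c in [("a", "US"), ("b", "US"), ("c", "JP")]]
```

Because the two "US" records share one key, the grid holds a single copy of the value however many objects reference it.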
20140244954 | IDENTIFYING AND ACCESSING REFERENCE DATA IN AN IN-MEMORY DATA GRID - Embodiments relate to providing normalization techniques for reference data in an in-memory data grid. An aspect includes monitoring object creation and access in an in-memory data grid and identifying reference data in an object field of a plurality of object instances. A reference map for the object field is created and the reference map is replicated across all partitions of the in-memory data grid. The reference data of an embodiment is stored in the reference map and the object field is updated to identify the reference map. Accordingly, the reference data may be accessed using the created reference map. | 2014-08-28 |
20140244955 | SYSTEM AND METHOD FOR ALLOCATION OF ORGANIZATIONAL RESOURCES - System and methods for storing electronic data is provided, where the system comprises a storage manager component and a management module associated with the storage manager component. The management module is configured to receive information related to storage activities associated with one or more storage operation components within the storage operation system under the direction of the storage manager component. The management module is adapted to predict storage operation resource allocations based on the received information related to the storage activities. | 2014-08-28 |
20140244956 | STORAGE SYSTEM IN WHICH FICTITIOUS INFORMATION IS PREVENTED - According to one embodiment, a storage system includes a host device and a secure storage. The host device and the secure storage produce a bus key which is shared only by the host device and the secure storage by authentication processing, and which is used for encoding processing. The host device produces a message authentication code including a message which can be stored in the secure storage based on the bus key, and sends the produced message authentication code to the secure storage. The secure storage stores the message included in the message authentication code in accordance with instructions of the host device. The host device verifies whether the message stored in the secure storage is intended contents. | 2014-08-28 |
20140244957 | STORAGE SYSTEM IN WHICH FICTITIOUS INFORMATION IS PREVENTED - According to one embodiment, a storage system includes a host device and a secure storage. The host device and the secure storage produce a bus key which is shared only by the host device and the secure storage by authentication processing, and which is used for encoding processing. The host device produces a message authentication code including a message which can be stored in the secure storage based on the bus key, and sends the produced message authentication code to the secure storage. The secure storage stores the message included in the message authentication code in accordance with instructions of the host device. The host device verifies whether the message stored in the secure storage is intended contents. | 2014-08-28 |
20140244958 | STORAGE SYSTEM AND MANAGEMENT METHOD THEREFOR - A storage system comprises multiple first storage apparatuses, and a controller which provides a first logical volume corresponding to a storage area of the multiple first storage apparatuses to a host computer. The controller partitions a storage area corresponding to the first logical volume into multiple first physical storage areas, manages the partitioned multiple first physical storage areas as physical storage areas of a storage pool, creates a first virtual volume which is provided to the host computer, and associates, from among the multiple first physical storage areas, a physical storage area in which user data is stored, with the first virtual volume. | 2014-08-28 |
20140244959 | STORAGE CONTROLLER, STORAGE SYSTEM, METHOD OF CONTROLLING STORAGE CONTROLLER, AND COMPUTER-READABLE STORAGE MEDIUM HAVING STORAGE CONTROL PROGRAM STORED THEREIN - A storage system includes: a first storage unit; a second storage unit that has an access speed higher than an access speed of the first storage unit; and a storage controller that collects load information about respective loads in a plurality of areas in the first storage unit, selects a candidate area in the first storage unit which is to be migrated, based on the collected load information, and migrates data in the selected candidate area, to the second storage unit. | 2014-08-28 |
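The selection step described above — choosing migration candidates from collected load information — can be sketched as follows (function and parameter names are illustrative, not from the patent): rank areas of the slow tier by observed load and pick the hottest ones that fit in the fast tier.

```python
# Hedged sketch of load-based migration-candidate selection (assumed names).

def select_migration_candidates(load_info, fast_tier_slots):
    """load_info: {area_id: access_count} collected for the slow tier.
    Return the hottest areas that fit in the fast tier, hottest first."""
    ranked = sorted(load_info, key=load_info.get, reverse=True)
    return ranked[:fast_tier_slots]

# Areas with the highest access counts are chosen for the faster tier.
hot = select_migration_candidates({"area1": 50, "area2": 120, "area3": 5},
                                  fast_tier_slots=2)
```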
20140244960 | COMPUTING DEVICE, MEMORY MANAGEMENT METHOD, AND PROGRAM - According to one embodiment, there is provided a computing device managing a first memory region and a second memory region, a power consumption to hold data stored in the second memory region being smaller than that of the first memory region, including: a data manager and a data processor. The data manager manages a referring number, which is a number of processes referring to first data existing in either one of the first memory region or the second memory region. The data processor moves the first data to the second memory region when the first data exists in the first memory region and the referring number to the first data satisfies a first condition. | 2014-08-28 |
20140244961 | MANAGING AND STORING ELECTRONIC MESSAGES DURING RECIPIENT UNAVAILABILITY - A method for managing storage space for electronic messages. A computer receives a selected time period in which a user of a messaging program will not be able to access electronic messages through the messaging program. The computer estimates, by one or more computer processors, an amount of storage space required to store electronic messages received during the selected time period. The computer determines, by one or more computer processors, that an unused portion of storage space allocated to the user is less than the estimated storage space required. The computer notifies the user that the unused portion of storage space allocated to the user is less than the estimated storage space required. | 2014-08-28 |
20140244962 | Multi-Level Memory Compression - According to one embodiment of the present disclosure, an approach is provided in which a processor selects a page of data that is compressed by a first compression algorithm and stored in a memory block. The processor identifies a utilization amount of the compressed page of data and determines whether the utilization amount meets a utilization threshold. When the utilization amount fails to meet the utilization threshold, the processor uses a second compression algorithm to recompress the page of data. | 2014-08-28 |
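The two-level scheme above can be modeled in a few lines (a toy illustration under assumed thresholds, not the patented method): pages whose utilization falls below a threshold are recompressed with a slower but stronger algorithm, trading decompression latency for space on cold data.

```python
# Toy model of multi-level compression: zlib level 1 stands in for the fast
# first algorithm, level 9 for the stronger second one (illustrative only).
import zlib

FAST_LEVEL, STRONG_LEVEL = 1, 9
UTILIZATION_THRESHOLD = 10   # accesses per interval (assumed value)

def maybe_recompress(page):
    """page: dict with 'data' (compressed bytes), 'level', 'hits'.
    Recompress a cold fast-compressed page with the stronger algorithm."""
    if page["level"] == FAST_LEVEL and page["hits"] < UTILIZATION_THRESHOLD:
        raw = zlib.decompress(page["data"])
        page["data"] = zlib.compress(raw, STRONG_LEVEL)
        page["level"] = STRONG_LEVEL
    return page

raw = b"abcd" * 1024
cold = {"data": zlib.compress(raw, FAST_LEVEL), "level": FAST_LEVEL, "hits": 2}
maybe_recompress(cold)
```

The page's contents are unchanged after recompression; only its on-disk/in-memory representation and level tag differ.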
20140244963 | METHOD AND APPARATUS FOR ALLOCATING MEMORY FOR IMMUTABLE DATA ON A COMPUTING DEVICE - A system that allocates memory for immutable data on a computing device. The system allocates a memory region on the computing device to store immutable data for an executing application. This memory region is smaller than the immutable data for the application. When the system subsequently receives a request to access a block of immutable data for the application, the system allocates space in this memory region for the block, and proceeds to load the block into the memory region. If at a later time the space occupied by this first block is needed for another block, the system unloads and discards the first block. If a subsequent operation needs to use information in the first block, the system regenerates the block by transforming raw data associated with the block into a form that can be directly accessed by the application, and then reloads the block into the memory region. | 2014-08-28 |
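Because the data is immutable, eviction in the scheme above never needs a writeback: a discarded block is simply regenerated from raw data on the next access. A minimal sketch, with hypothetical names and a simple oldest-first eviction policy chosen for illustration:

```python
# Sketch of a regenerable region for immutable data (assumed design):
# the region holds fewer blocks than exist; evicted blocks are discarded
# and rebuilt from raw data when needed again.

class ImmutableRegion:
    def __init__(self, capacity, regenerate):
        self.capacity = capacity          # max resident blocks
        self.regenerate = regenerate      # block_id -> directly usable block
        self.resident = {}                # block_id -> block (insertion-ordered)

    def access(self, block_id):
        block = self.resident.get(block_id)
        if block is None:
            if len(self.resident) >= self.capacity:
                # Evict the oldest resident block; immutable, so just discard.
                self.resident.pop(next(iter(self.resident)))
            block = self.regenerate(block_id)   # transform raw data
            self.resident[block_id] = block
        return block

region = ImmutableRegion(capacity=2, regenerate=lambda i: f"decoded-{i}")
```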
20140244964 | DUAL MAPPING BETWEEN PROGRAM STATES AND DATA PATTERNS - The present disclosure includes methods and apparatuses for dual mapping between program states and data patterns. One apparatus includes a memory and a controller configured to control a dual mapping method comprising: performing a base conversion on a received data pattern and mapping a resulting base converted data pattern to one of a first number of program state combinations corresponding to a first group of memory cells; and determining a number of error data units corresponding to the base converted data pattern and mapping the number of error data units to one of a number of second program state combinations corresponding to a second group of memory cells. The number of error data units are mapped to the one of the second number of program state combinations corresponding to the second group of memory cells without being base converted. | 2014-08-28 |
20140244965 | METHOD AND SYSTEM FOR SIMPLIFIED ADDRESS TRANSLATION SUPPORT FOR STATIC INFINIBAND HOST CHANNEL ADAPTOR STRUCTURES - A method for optimized address pre-translation for a host channel adapter (HCA) static memory structure is disclosed. The method involves determining whether the HCA static memory structure spans a contiguous block of physical address space, when the HCA static memory structure spans the contiguous block of physical address space, requesting a translation from a guest physical address (GPA) to a machine physical address (MPA) of the HCA static memory structure, storing a received MPA corresponding to the HCA static memory structure in an address control and status register (CSR) associated with the HCA static memory structure, marking the received MPA stored in the address CSR as a pre-translated address, and using the pre-translated MPA stored in the address CSR when a request to access the static memory structure is received. | 2014-08-28 |
20140244966 | PACKET PROCESSING MATCH AND ACTION UNIT WITH STATEFUL ACTIONS - A packet processing block. The block comprises an input for receiving data in a packet header vector, where the vector comprises data values representing information for a packet. The block also comprises circuitry for performing packet match operations in response to at least a portion of the packet header vector and data stored in a match table and circuitry for performing one or more actions in response to a match detected by the circuitry for performing packet match operations. The one or more actions comprise modifying the data values representing information for a packet. The block also comprises at least one stateful memory comprising stateful memory data values. The one or more actions includes various stateful actions for reading stateful memory, modifying data values representing information for a packet, as a function of the stateful memory data values; and storing modified stateful memory data value back into the stateful memory. | 2014-08-28 |
20140244967 | VECTOR REGISTER ADDRESSING AND FUNCTIONS BASED ON A SCALAR REGISTER DATA VALUE - Techniques are provided for executing a vector alignment instruction. A scalar register file in a first processor is configured to share one or more register values with a second processor, the one or more register values accessed from the scalar register file according to an Rt address specified in a vector alignment instruction, wherein a start location is determined from one of the shared register values. An alignment circuit in the second processor is configured to align data identified between the start location within a beginning Vu register of a vector register file (VRF) and an end location of a last Vu register of the VRF according to the vector alignment instruction. A store circuit is configured to select the aligned data from the alignment circuit and store the aligned data in the vector register file according to an alignment store address specified by the vector alignment instruction. | 2014-08-28 |
20140244968 | MAPPING VECTOR REPRESENTATIONS ONTO A PREDICATED SCALAR MULTI-THREADED SYSTEM - A system implementing a method for generating code for execution based on a SIMT model with parallel units of threads is provided. The system identifies a loop within a program that includes vector processing. The system generates instructions for a thread that include an instruction to set a predicate based on whether the thread of a parallel unit corresponds to a vector element. The system also generates instructions to perform the vector processing via scalar operations predicated on the predicate. As a result, the system generates instructions to perform the vector processing but to avoid branch divergence within the parallel unit of threads that would be needed to check whether a thread corresponds to a vector element. | 2014-08-28 |
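The predication idea in the abstract above can be modeled in plain code (an illustrative, non-authoritative simulation): instead of branching on whether a thread corresponds to a vector element, every simulated thread in the parallel unit executes the same scalar operation, and a per-thread predicate masks its effect, so no branch divergence occurs.

```python
# Simulated SIMT execution with predicated scalar ops (assumed names).

def predicated_vector_add(a, b, unit_width):
    """Add vectors a and b using `unit_width` lockstep 'threads'."""
    n = len(a)
    out = [0] * n
    for base in range(0, n + unit_width - 1, unit_width):
        for tid in range(unit_width):   # every thread runs every step
            i = base + tid
            p = i < n                    # predicate: thread maps to an element
            # Predicated scalar op: all threads execute it, but it takes
            # effect only where the predicate is true -- no divergent branch.
            if p:
                out[i] = a[i] + b[i]
    return out
```

The `if p` here stands in for a hardware predicate bit; the instruction stream is identical across the unit regardless of the vector length.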
20140244969 | List Vector Processing Apparatus, List Vector Processing Method, Storage Medium, Compiler, and Information Processing Apparatus - Disclosed is a list vector processing apparatus (LVPA) or the like which can process indirect references at high speed. | 2014-08-28 |
20140244970 | DIGITAL SIGNAL PROCESSOR AND BASEBAND COMMUNICATION DEVICE - For increased efficiency, a digital signal processor comprises a vector execution unit arranged to execute instructions that are to be performed on multiple data in the form of a vector, comprising a vector controller arranged to determine if an instruction is a vector instruction and, if it is, inform a count register arranged to hold the vector length, said vector controller being further arranged to receive an issue signal and control the execution of instructions based on this issue signal, said vector execution unit being characterized in that it comprises | 2014-08-28 |
20140244971 | ARRAY OF PROCESSOR CORE CIRCUITS WITH REVERSIBLE TIERS - Embodiments of the invention relate to an array of processor core circuits with reversible tiers. One embodiment comprises multiple tiers of core circuits and multiple switches for routing packets between the core circuits. Each tier comprises at least one core circuit. Each switch comprises multiple router channels for routing packets in different directions relative to the switch, and at least one routing circuit configured for reversing a logical direction of at least one router channel. | 2014-08-28 |
20140244972 | METHOD AND APPARATUS FOR GAME PHYSICS CONCURRENT COMPUTATIONS - An apparatus for physical properties computation comprising an array processor. The array processor comprises a plurality of processing elements, said processing elements arranged in a grid. A processing unit (PU) is coupled to the array processor. A local memory is coupled to the PU. The PU broadcasts data to rows of said processing elements in said grid, and performs physical computations in an order of complexity of O((√N) log N). | 2014-08-28 |
20140244973 | RECONFIGURABLE ELEMENTS - The present invention provides for a multiprocessor device on either a chip or a stack of chips. The multiprocessor device includes a plurality of processing entities and a memory system. The multiprocessor device further includes at least one interface unit to at least one of an external memory and one or more peripherals. The multiprocessor device includes a bus system interconnecting the processing entities, the memory system and the at least one interface unit. The memory system includes a plurality of cache segments, and the segments are located on a plurality of memory cores, each having a connection to the bus system. | 2014-08-28 |
20140244974 | Background Collective Operation Management In A Parallel Computer - Background collective operation management in a parallel computer, the parallel computer including one or more compute nodes operatively coupled for data communications over one or more data communications networks, including: determining, by a management availability module, whether a compute node in the parallel computer is available to perform a background collective operation management task; responsive to determining that the compute node is available to perform the background collective operation management task, determining, by the management availability module, whether the compute node has access to sufficient resources to perform the background collective operation management task; and responsive to determining that the compute node has access to sufficient resources to perform the background collective operation management task, initiating, by the management availability module, execution of the background collective operation management task. | 2014-08-28 |
20140244975 | MULTI-CORE PROCESSOR, CONTROLLING METHOD THEREOF AND COMPUTER SYSTEM WITH SUCH PROCESSOR - A multi-core processor includes M cores. If the multi-core processor is operated under a non-multiprocessing support operating system, only a single core is configured as a central processing unit and N cores are configured as co-processors, wherein M and N are positive integers, and N is smaller than M. | 2014-08-28 |
20140244976 | IT INSTRUCTION PRE-DECODE - Various techniques for processing and pre-decoding branches within an IT instruction block. Instructions are fetched and cached in an instruction cache, and pre-decode bits are generated to indicate the presence of an IT instruction and the likely boundaries of the IT instruction block. If an unconditional branch is detected within the likely boundaries of an IT instruction block, the unconditional branch is treated as if it were a conditional branch. The unconditional branch is sent to the branch direction predictor and the predictor generates a branch direction prediction for the unconditional branch. | 2014-08-28 |
20140244977 | Deferred Saving of Registers in a Shared Register Pool for a Multithreaded Microprocessor - A method of sharing a plurality of registers in a register pool among a plurality of microprocessor threads begins by allocating a first set of registers in the register pool to a first thread, the first thread executing a first instruction using the first set of registers in the register pool. The first thread is descheduled without saving values stored in the first set of registers. A second thread is scheduled to execute a second instruction using registers allocated in the register pool. Finally, the first thread is rescheduled, the first thread reusing the allocated first set of registers. | 2014-08-28 |
20140244978 | CHECKPOINTING REGISTERS FOR TRANSACTIONAL MEMORY - The present invention provides a method and apparatus for checkpointing registers for transactional memory. Some embodiments of the apparatus include first rename logic configured to map up to a predetermined number of architectural registers to corresponding first physical registers that hold first values associated with the architectural registers. The mapping is responsive to a transaction modifying one or more of the first values associated with the architectural registers. Some embodiments of the apparatus also include microcode configured to write contents of the first physical registers to a memory in response to the transaction modifying first values associated with a number of the architectural registers that is larger than the predetermined number. | 2014-08-28 |
20140244979 | Estimating Time Remaining for an Operation - Techniques for estimating time remaining for an operation are described. Examples operations include file operations, such as file move operations, file copy operations, and so on. A wide variety of different operations may be considered in accordance with the claimed embodiments, further examples of which are discussed below. In at least some embodiments, estimating a time remaining for an operation can be based on a state of the operation. A state of an operation, for example, can be based on events related to the operation itself, such as the operation being initiated, paused, resumed, and so on. A state of an operation can also be based on events related to other operations. | 2014-08-28 |
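A minimal sketch of the state-aware estimation described above (an assumed design, not the patented implementation): progress and elapsed time are accumulated only while the operation is running, so a pause freezes the estimate instead of inflating the apparent duration.

```python
# State-aware remaining-time estimator (illustrative names and design).

class RemainingTimeEstimator:
    """Tracks work done over active (non-paused) time only."""
    def __init__(self, total):
        self.total = total            # total units of work
        self.done = 0
        self.active_seconds = 0.0
        self.paused = False

    def tick(self, seconds, units_done):
        """Report `seconds` of wall time and the work completed in it."""
        if not self.paused:           # paused time does not affect the rate
            self.active_seconds += seconds
            self.done += units_done

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

    def remaining_seconds(self):
        if self.done == 0:
            return None               # no rate established yet
        rate = self.done / self.active_seconds
        return (self.total - self.done) / rate
```

For example, 20 of 100 units in 2 active seconds yields a 10 units/s rate, so the estimate is 8 s remaining, regardless of how long the operation sat paused.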
20140244980 | METHOD AND SYSTEM FOR DYNAMIC CONTROL OF A MULTI-TIER PROCESSING SYSTEM - Method, system, and programs for dynamic control of a processing system having a plurality of tiers. Queue lengths of a plurality of nodes in one of the plurality of tiers are received. A control objective is received from a higher tier. One or more requests from the higher tier are processed by the plurality of nodes in the tier. A control model of the tier is computed based on the received queue lengths. One or more parameters of the control model are adjusted based on the received control objective. At least one control action is determined based on the control model and the control objective. | 2014-08-28 |
20140244981 | PROCESSOR AND CONTROL METHOD FOR PROCESSOR - A processor includes a programmable logic circuit provided with a plurality of processing units. The programmable logic circuit is capable of reconfiguring a first logic circuit corresponding to first circuit configuration information according to a first process and a second logic circuit corresponding to second circuit configuration information according to a second process. Each of the first and second logic circuits includes an information holding unit. A first control circuit stores the second circuit configuration information in the information holding unit of the first logic circuit and generates an execution control signal for executing the first process. A second control circuit obtains the second circuit configuration information from the information holding unit of the first logic circuit in response to completion of the first process and controls the programmable logic circuit so as to reconfigure the second logic circuit corresponding to the second circuit configuration information. | 2014-08-28 |
20140244982 | PERFORMING STENCIL COMPUTATIONS - A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address. | 2014-08-28 |
20140244983 | EXECUTING AN OPERATING SYSTEM ON PROCESSORS HAVING DIFFERENT INSTRUCTION SET ARCHITECTURES - An apparatus includes a first processor having a first instruction set and a second processor having a second instruction set that is different than the first instruction set. The apparatus also includes a memory storing at least a portion of an operating system. The operating system is concurrently executable on the first processor and the second processor. | 2014-08-28 |
20140244984 | ELIGIBLE STORE MAPS FOR STORE-TO-LOAD FORWARDING - The present invention provides a method and apparatus for generating eligible store maps for store-to-load forwarding. Some embodiments of the method include generating information associated with a load instruction in a load queue. The information indicates whether one or more store instructions in a store queue are older than the load instruction and whether the store instruction(s) overlap with any younger store instructions in the store queue that are older than the load instruction. Some embodiments of the method also include determining whether to forward data associated with a store instruction to the load instruction based on the information. Some embodiments of the apparatus include a load-store unit that implements embodiments of the method. | 2014-08-28 |
20140244985 | INTELLIGENT CONTEXT MANAGEMENT - Intelligent context management for thread switching is achieved by determining that a register bank has not been used by a thread for a predetermined number of dispatches, and responsively disabling the register bank for use by that thread. A counter is incremented each time the thread is dispatched but the register bank goes unused. Usage or non-usage of the register bank is inferred by comparing a previous checksum for the register bank to a current checksum. If the previous and current checksums match, the system concludes that the register bank has not been used. If a thread attempts to access a disabled bank, the processor takes an interrupt, enables the bank, and resets the corresponding counter. For a system utilizing transactional memory, it is preferable to enable all of the register banks when thread processing begins to avoid aborted transactions from register banks disabled by lazy context management techniques. | 2014-08-28 |
20140244986 | SYSTEM AND METHOD TO SELECT A PACKET FORMAT BASED ON A NUMBER OF EXECUTED THREADS - A system and method to select a packet format based on a number of executed threads is disclosed. In a particular embodiment, a method includes determining, at a multi-threaded processor, a number of threads of a plurality of threads executing during a time period. A packet format is selected from a plurality of formats based at least in part on the determined number of threads. Data associated with execution of an instruction by a particular thread is stored in accordance with the selected format in a memory (e.g., a buffer). | 2014-08-28 |
20140244987 | Precision Exception Signaling for Multiple Data Architecture - Methods and systems that perform one or more operations on a plurality of elements using a multiple data processing element processor are provided. An input vector comprising a plurality of elements is received by a processor. The processor determines if performing a first operation on a first element will cause an exception and if so, writes an indication of the exception caused by the first operation to a first portion of an output vector stored in an output register. A second operation can be performed on a second element with the result of the second operation being written to a second portion of the output vector stored in the output register. | 2014-08-28 |
20140244988 | SELF-HEALING OF OPERATING SYSTEM COMPONENTS - Aspects of the subject matter described herein relate to operating system technology. In aspects, a mechanism is described that allows self-healing actions to correct operating system problems. The self-healing actions may be performed at virtually any time during the loading and executing of operating system components. Earlier placement of the self-healing actions may allow correction of more operating system component problems than later placement. In one implementation, while self-healing actions are occurring, the instantiating of additional operating system components is not allowed. After the self-healing actions have completed, the instantiating of additional operating system components may continue. | 2014-08-28 |
20140244989 | PEER-TO-PEER NETWORK BOOTING - A technique for booting a computing device using a boot image that is downloaded from a distributed network booting system involves identifying a tracker computing device that manages a plurality of computing devices that store all or a portion of the boot image, receiving from the tracker computing device information about one or more computing devices from which to download the boot image, and downloading the boot image from the one or more computing devices. | 2014-08-28 |
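The download flow described above — ask a tracker which peers hold which pieces, then fetch and reassemble — can be sketched in a few lines. All names here are hypothetical, and real peer selection would balance load rather than take the first holder:

```python
# Sketch of tracker-coordinated boot-image download (assumed interfaces).

def download_boot_image(tracker, fetch_piece, piece_count):
    """tracker: {peer_id: set(piece_ids)} as reported by the tracker device.
    fetch_piece(peer_id, piece_id) -> bytes downloads one piece from a peer.
    Reassembles the full boot image from pieces spread across peers."""
    pieces = {}
    for i in range(piece_count):
        holders = [p for p, held in tracker.items() if i in held]
        if not holders:
            raise RuntimeError(f"no peer holds piece {i}")
        pieces[i] = fetch_piece(holders[0], i)   # naive: first holder wins
    return b"".join(pieces[i] for i in range(piece_count))

# Three pieces spread across two peers reassemble into one image.
tracker = {"peerA": {0, 2}, "peerB": {1}}
store = {0: b"AA", 1: b"BB", 2: b"CC"}
image = download_boot_image(tracker, lambda peer, i: store[i], 3)
```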
20140244990 | METHOD AND APPARATUS FOR PREFETCHING PERIPHERAL DEVICE DRIVERS FOR SMART PHONES AND OTHER CONNECTED DEVICES PRIOR TO HLOS BOOT - Apparatus and methods for booting a user equipment are described. A device boot of the user equipment may be performed. Peripherals and associated drivers for the user equipment may be configured. A high-level operating system (HLOS) may be booted. The configuring may occur before the booting of the HLOS. Apparatus and methods for loading peripheral device drivers for a user equipment are also described. Peripherals that can be associated with a user equipment may be determined. Drivers for the determined peripherals may be loaded. The loaded drivers may be associated with a high-level operating system (HLOS) architecture regardless of a type of user equipment on which the HLOS is provided. | 2014-08-28 |
20140244991 | Patching Boot Code of Read-Only Memory - The present disclosure describes apparatuses and techniques for patching boot code of read-only memory (ROM). In some aspects, execution of boot code from a ROM is initiated to start a boot process of a device. Execution of the boot code from the ROM is then interrupted to enable execution of other boot code, such as corrected boot code or additional boot code, from another memory. Once the other boot code is executed, execution of the boot code from the ROM is resumed to continue booting the computing device. By so doing, the corrected boot code or additional boot code can be executed during the boot process effective to patch the boot code stored in the ROM. | 2014-08-28 |
20140244992 | Extensible Firmware Interface External Graphic Card, Mainframe System, and Extensible Firmware Interface BIOS Booting Method - A central processing unit of a mainframe system is configured to load a physical graphic card driver into a memory of the mainframe system for performing a display function when the mainframe system is not connected to an Extensible Firmware Interface (EFI) external graphic card. The central processing unit is further configured to load a virtual graphic card driver into the memory of the mainframe system for performing the display function when the mainframe system is connected to the EFI external display card. | 2014-08-28 |
20140244993 | METHOD OF UPDATING THE OPERATING SYSTEM OF A SECURE MICROCIRCUIT - A method of loading an operating program in a secure microcircuit, includes the steps of: downloading and installing in the microcircuit a boot program, which is launched upon activation of the microcircuit, loading into the microcircuit initialization data including a first public key, performing a mutual authentication procedure between the microcircuit and a first server having a private key corresponding to the first public key, and if the mutual authentication is successful, loading from the first server operating program profile data holding a second public key, performing a mutual authentication procedure between the microcircuit and a second server having a private key corresponding to the second public key, and if the mutual authentication is successful, loading an operating program from the second server and installing it in the microcircuit, and activating the operating program when it is in the microcircuit. | 2014-08-28 |
20140244994 | Method, apparatus and system for binding MTC device and UICC - A method for binding a Machine Type Communication (MTC) device and a Universal Integrated Circuit Card (UICC) is disclosed. The method includes: during a process of establishment of a shared key, a Network Application Function (NAF) acquires identity information of the MTC device and identity information of the UICC ( | 2014-08-28 |
20140244995 | Adaptive Media Transmission Processing - Provided are methods and systems for processing information. In one example method a first frame of a first group of frames of an information transmission can be processed. The first frame can be encoded without reference to other frames of the information transmission. Additionally, a second frame can be processed in the first group of frames. The second frame can be processed with reference to a frame from a second group of frames of the information transmission. | 2014-08-28 |
20140244996 | PRIVATE DISCOVERY OF ELECTRONIC DEVICES - The disclosed embodiments provide a system that facilitates communication between a first electronic device and a second electronic device. During operation, the system uses the first electronic device to create a discovery request comprising a first group identifier (ID) associated with the first electronic device, wherein using the first electronic device to create the discovery request involves encrypting the first group ID and including the encrypted first group ID in the discovery request. Next, the system transmits the discovery request to the second electronic device, wherein the discovery request is used by the second electronic device to generate a discovery response to the discovery request. | 2014-08-28 |
20140244997 | EMERGENCY MODE FOR IOT DEVICES - Methods and apparatuses for implementing an emergency instruction based on an emergency message from a trusted authority source. The method includes receiving, at an Internet of Things (IoT) device, an emergency secret key from a trusted authority source. The method receives, at the IoT device, an emergency message from the trusted authority source and decodes the emergency message using the emergency secret key to determine a value within the emergency message. The method calculates, at the IoT device, a result based on the determined value, and implements an emergency instruction if the result is above a predetermined threshold. | 2014-08-28 |
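The decode-then-threshold flow above can be sketched with stdlib primitives. The wire layout (an 8-byte value followed by an HMAC-SHA256 tag) and the scoring step are assumptions for illustration; the abstract fixes neither:

```python
import hashlib
import hmac
import struct

def handle_emergency(message: bytes, secret_key: bytes, threshold: float) -> bool:
    """Decode an emergency message and decide whether to act.

    Assumed layout: 8-byte big-endian value || HMAC-SHA256 tag over it.
    """
    value_bytes, tag = message[:8], message[8:]
    expected = hmac.new(secret_key, value_bytes, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False  # message did not come from the trusted authority
    value = struct.unpack(">Q", value_bytes)[0]
    result = value / 100.0  # illustrative "calculate a result" step
    return result > threshold

# Message as the trusted authority might construct it:
key = b"emergency-secret-key"
payload = struct.pack(">Q", 9000)
message = payload + hmac.new(key, payload, hashlib.sha256).digest()
```

A device holding the wrong key, or a message whose computed result falls below the threshold, triggers no instruction.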
20140244998 | SECURE PUBLISHING OF PUBLIC-KEY CERTIFICATES - The current document is directed to methods and systems for secure provisioning, publication, distribution, and utilization of public-key certificates. These methods and systems employ domain name system (“DNS”) servers implementing the DNS security extensions (“DNSSEC servers”), a publisher component, and additional client-side and server-side functionalities. Public-key certificates provided by the DNSSEC servers engender a high degree of trust, as their integrity is protected and can be readily authenticated by the cryptographic-digital-signature based chains of trust provided by the DNSSEC. The systems to which the current document is directed employ DNSSEC servers, a publisher component, and additional client-side and server-side functionalities, and are referred to as “Public-key certificate Distribution and Management Systems” (“CDMSs”). | 2014-08-28 |
20140244999 | NETWORK SYSTEM, CERTIFICATE MANAGEMENT METHOD, AND CERTIFICATE MANAGEMENT PROGRAM - A network system includes a management apparatus and multiple apparatuses. The management apparatus includes a preparation instruction unit to transmit an instruction to prepare a certificate request to the apparatuses; a collection unit to collect the certificate requests; a request unit to request issuance of certificates from a certificate authority; and a resetting instruction unit to transmit the issued certificates to the apparatuses and to instruct resetting of certificates. Each apparatus includes a storing unit with an operation area for storing a first certificate and a provisional operation area; a provisionally operating unit to transfer the first certificate to the provisional operation area, generate a certificate request, and transmit the certificate request to the management apparatus; and a setting unit to store a second certificate, issued by the certificate authority, in the operation area and to instruct a communication unit to communicate by switching certificates. | 2014-08-28 |
20140245000 | SECURE MESSAGE DELIVERY USING A TRUST BROKER - An email security system is described that allows users within different organizations to securely send email to one another. The email security system provides a federation server on the Internet or other unsecured network accessible by each of the organizations. Each organization provides identity information to the federation server. When a sender in one organization sends a message to a recipient in another organization, the federation server provides the sender's email server with a secure token for encrypting the message to provide secure delivery over the unsecured network. | 2014-08-28 |
20140245001 | Decryption of Content Including Partial-Block Discard - Embodiments may include receiving a protected version of content that includes multiple encryption chains each including encrypted blocks of content. The protected version of content may include one or more initialization vectors for decrypting the encrypted blocks of content and discard information that specifies non-content portions of one or more data blocks to be discarded after decryption. Embodiments may also include performing chained decryption on the multiple encryption chains using the specified initialization vectors. The chained decryption may result in a sequence of decrypted data blocks. Embodiments may also include, based on the discard information, locating and removing the non-content portions of one or more data blocks in the sequence of decrypted data blocks. Embodiments may also include generating the protected version of content. Embodiments may also include performing any of the aforesaid techniques on one or more computers. | 2014-08-28 |
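Setting the chained decryption itself aside, the discard step can be sketched as dropping specified byte ranges from the decrypted blocks. The `(block_index, start, end)` encoding of the discard information is an assumption for illustration; the abstract does not fix a format:

```python
def apply_discard(blocks: list[bytes],
                  discard_info: list[tuple[int, int, int]]) -> bytes:
    """Remove non-content byte ranges from a sequence of decrypted blocks.

    Each discard_info entry is (block_index, start, end) -- an assumed
    encoding, since the abstract does not specify one.
    """
    pieces = []
    for i, block in enumerate(blocks):
        ranges = sorted((s, e) for b, s, e in discard_info if b == i)
        pos = 0
        for start, end in ranges:
            pieces.append(block[pos:start])  # keep content before the range
            pos = end                        # skip the discarded bytes
        pieces.append(block[pos:])           # keep the remainder
    return b"".join(pieces)
```

After this pass only the content bytes remain, concatenated in block order.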
20140245002 | METHOD AND APPARATUS FOR SECURE DATA TRANSMISSIONS - An apparatus, system, and method are disclosed for secure data transmissions. In one embodiment, a method includes receiving a request for data from a remote client, the request including a public Internet protocol address of the remote client, the request encrypted according to an initial encryption scheme, encrypting the requested data according to a different encryption scheme, and transferring the data to the remote client. | 2014-08-28 |
20140245003 | Communications Method - The present application relates to a method of providing connectivity to a vehicle. The method comprises, at a first device aboard the vehicle, establishing at least one first connection with at least one first network, the at least one first connection allowing communication with a second device remote from the first device, transmitting via the at least one first connection an allocation request to the second device, receiving via the at least one first connection an allocation response from the second device, the allocation response indicating a first authentication device from a plurality of authentication devices remote from the first device, and establishing a second connection with a network and authenticating the first device on the network using the first authentication device. | 2014-08-28 |
20140245004 | RULE SETS FOR CLIENT-APPLIED ENCRYPTION IN COMMUNICATIONS NETWORKS - A rule set for client-applied encryption is created and deployed to a client device by a network device over a communications network. Encryption applied by the client in accordance with the rule set may form the basis of a secure connection in which encrypted information is encapsulated and tunneled across a network that includes a wireless or wired interface through which the client obtains network connectivity. The client may monitor operating conditions, including operating conditions of the communications network, client device, and/or service provider. The rule set includes one or more rules that may be used by the client in combination with the detected operating conditions to select the appropriate encryption protocol. The rule set may persist at the client for use over multiple sessions in which a range of communication protocols and/or access points are used by the client to obtain network connectivity. | 2014-08-28 |
20140245005 | Cryptographic processing method and system using a sensitive data item - A cryptographic processing method using a sensitive data item in a cryptographic processing system that stores in memory a test making it possible to tell a human and a computer apart, together with a reference value obtained by applying a cryptographic function to a pair of values P and R, where P is the sensitive data item and R is a solution to the memorized test. The method includes the steps of: configuring the cryptographic processing system, including obtaining and memorizing the reference value in the cryptographic system; transmitting the memorized test to a user; obtaining the user's response to the transmitted test; and performing a cryptographic processing step based on the sensitive data item, using the obtained response, the reference value, and the cryptographic function. During the transmission step, the reference value and the memorized test are in the memory of the system, but the solution is not. | 2014-08-28 |
20140245006 | CRYPTOGRAPHIC ACCUMULATORS FOR AUTHENTICATED HASH TABLES - In one exemplary embodiment, an apparatus includes a memory storing data and a processor performing operations. The apparatus generates or maintains an accumulation tree for the stored data—an ordered tree structure with a root node, leaf nodes and internal nodes. Each leaf node corresponds to a portion of the data. A depth of the tree remains constant. A bound on a degree of each internal node is a function of a number of leaf nodes of a subtree rooted at the internal node. Each node of the tree has an accumulation value. Accumulation values of the root and internal nodes are determined by hierarchically employing an accumulator over the accumulation values of the nodes lying one level below the node in question. The accumulation value of the root node is a digest for the tree. | 2014-08-28 |
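The bottom-up accumulation described above can be sketched as follows. This is a simplified stand-in: hashing the sorted child values mimics the order-independence a real cryptographic accumulator (e.g. an RSA accumulator) provides, but carries none of its security properties, and the fixed fanout below does not reproduce the patent's constant-depth, variable-degree construction:

```python
import hashlib

def leaf_value(item: bytes) -> bytes:
    # Accumulation value of a leaf node (one portion of the stored data).
    return hashlib.sha256(b"leaf:" + item).digest()

def accumulate(children: list[bytes]) -> bytes:
    # Stand-in accumulator: hash the sorted child accumulation values.
    h = hashlib.sha256()
    for child in sorted(children):
        h.update(child)
    return h.digest()

def tree_digest(items: list[bytes], fanout: int = 4) -> bytes:
    # Compute accumulation values level by level; the root value is the
    # digest for the whole tree.
    level = [leaf_value(item) for item in items]
    while len(level) > 1:
        level = [accumulate(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    return level[0]
```

Because each internal node hashes its children's values in sorted order, permuting the leaves within a group leaves the digest unchanged, while changing the set of items changes it.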
20140245007 | User Authentication System - Techniques are provided for users to authenticate themselves to components in a system. The users may securely and efficiently enter credentials into the components. These credentials may be provided to a server in the system with strong authentication that the credentials originate from secure components. The server may then automatically build a network by securely distributing keys to each secure component to which a user presented credentials. | 2014-08-28 |