6th week of 2015 patent application highlights part 64
Patent application number | Title and abstract | Published |
20150039795 | SYSTEM INTERCONNECTION, SYSTEM-ON-CHIP HAVING THE SAME, AND METHOD OF DRIVING THE SYSTEM-ON-CHIP - Provided is a method of driving a system-on-chip (SOC). The method includes adding a first transaction to a list, allocating the first transaction to a first slot, determining whether a second transaction is redundant, and adding the second transaction to the list and allocating the second transaction to the first slot when it is determined that the second transaction is redundant. Accordingly, the SOC can increase outstanding capability and enhance performance of a system interconnection. | 2015-02-05 |
20150039796 | ACQUIRING RESOURCES FROM LOW PRIORITY CONNECTION REQUESTS IN SAS - Systems and methods herein provide for managing connection requests through a Serial Attached Small Computer System Interface (SAS) expander. In one embodiment, the expander receives a low priority open address frame (OAF) that includes a source address and a destination address. The expander also receives a high priority OAF that includes a source address and a destination address. The high priority OAF requires at least a portion of a partial path acquired by the low priority OAF for which connection request arbitration is in progress. The expander determines whether the high OAF source address matches the low OAF destination address, and in response to a determination that the high OAF source address is different than the low OAF destination address, acquires pathway resources from the low priority OAF and forwards the high priority OAF in accordance with its destination address. | 2015-02-05 |
20150039797 | REMOVABLE EXPANSION INTERFACE DEVICE - A removable expansion interface device comprising: a motherboard; a system slot which is installed on and electrically connected to the motherboard and develops an accommodation space; a carrier which is a printed circuit board held in the accommodation space and electrically connected to the system slot; and an expansion slot which is installed on and electrically connected to the carrier and develops a plug-in space. As such, the present invention relies on a removable design to match an interface device to be used and realize lowered costs and spatial and economic efficiency. Furthermore, the present invention is based on a motherboard without extra cost and has significant advantages such as applications of new storage devices, expansion of other functions, time and cost efficiency, lowered complexity, and industrial applicability. | 2015-02-05 |
20150039798 | CONVERSION OF AN OBJECT FOR A HARDWARE DEVICE INTO HEALTH CONTROL INFORMATION - Examples disclosed herein relate to conversion of an object for a hardware device into health control information. Examples include acquiring, from an object-oriented database, an object for a hardware device including an operational parameter value determined by the hardware device. Examples further include converting the object into health control information useable by a health controller. | 2015-02-05 |
20150039799 | METHOD AND APPARATUS FOR SETTING WORKING MODE OF MULTI-PROCESSOR SYSTEM - A method for setting a working mode of a multi-processor system includes: detecting, after a current board is inserted into a slot of the backplane, whether an associated board exists on the backplane; detecting, if the associated board exists, whether the associated board is in an independent working state; powering on the current board according to a slave working mode if the associated board is not in an independent working state, so as to work coordinately with the associated board; detecting, within a predetermined detection time if the associated board does not exist, whether a board is inserted into another slot of the backplane other than the slot of the master board; and powering on the current board according to a master working mode if it is detected that the board is inserted, so as to work coordinately with the board in the other slot. | 2015-02-05 |
20150039800 | ELECTRONIC APPARATUS, BASE AND METHOD OF SWITCHING PIN FUNCTIONS OF CONNECTOR - An electronic apparatus comprises a processing unit, a second connector, a sensing module and a route-selecting module. The second connector is electrically connected with the processing unit and is capable of being forward or reverse inserted with a first connector of a base. The sensing module is provided for generating different control signals according to the forward insertion or the reverse insertion between the second connector and the first connector. The route-selecting module is electrically connected with the sensing module for switching a usage status of the second connector according to different control signals. | 2015-02-05 |
20150039801 | LIN Bus Module - One aspect of the invention relates to a network node for connecting to a Local Interconnect Network (LIN). In accordance with one example of the present invention, the network node includes a bus terminal which is operably coupled to a data line for receiving a data signal, which represents serial data, via that data line. The data signal is a binary signal having high and low signal levels. The network node further includes a receiver circuit which employs a comparator to compare the data signal with a reference signal. The comparator generates a binary output signal representing the result of the comparison. The network node also includes a measurement circuit that receives the data signal and provides a first voltage signal such that it represents the high signal level of the data signal. | 2015-02-05 |
20150039802 | SERIAL-PARALLEL INTERFACE CIRCUIT WITH NONVOLATILE MEMORY - A serial-parallel interface circuit with nonvolatile memories is provided. A control module generates a plurality of control signals, wherein the control signals include readout and write-in control signals and memory programming control signals. An input terminal receives a plurality of digital data from an external source. The digital data are transmitted to the input terminal serially. Memory modules are coupled to the input terminal and receive the control signals from the control module. The input terminal transmits the digital data to the memory modules. One of the memory modules includes a memory unit, and the memory unit stores or transmits one bit of the digital data based on a high voltage control signal and a memory control signal. A plurality of output signal lines are respectively coupled to the memory modules. The memory unit transmits the one bit of the digital data to one of the output signal lines. | 2015-02-05 |
20150039803 | DATA TRANSFER APPARATUS, DATA TRANSFER METHOD, AND DATA TRANSFER PROGRAM - An object of the present invention is to prevent occurrence of data destruction when a transfer source region and a transfer destination region of data overlap with each other, even when transfer is performed using a burst transfer function. The data read from the transfer source region is temporarily written into a ring buffer, and then the data written into the ring buffer is written into the transfer destination region. In this case, reading of the data from the ring buffer is controlled based on a magnitude relation between the number of wrap-arounds caused by writing of the data into the ring buffer and the number of wrap-arounds caused by reading of the data from the ring buffer. | 2015-02-05 |
20150039804 | PCI Express Data Transmission - PCIe devices and corresponding methods are provided wherein a length of data to be transferred is aligned to a multiple of a double word length. | 2015-02-05 |
20150039805 | System and Method to Emulate an Electrically Erasable Programmable Read-Only Memory - The disclosure relates to an electronic memory system, and more specifically, to a system to emulate an electrically erasable programmable read-only memory, and a method to emulate an electrically erasable programmable read-only memory. According to an embodiment of the disclosure, a system to emulate an electrically erasable programmable read-only memory is provided, the system including a first memory section and a second memory section, wherein the first memory section comprises a plurality of storage locations configured to store data partitioned into a plurality of data segments and wherein the second memory section is configured to store information mapping a physical address of a data segment stored in the first memory section to a logical address of the data segment. | 2015-02-05 |
20150039806 | SYSTEM AND METHOD FOR CONTROLLING A STORAGE DEVICE - A method of controlling a storage device includes detecting a cumulative usage condition associated with the storage device, comparing the cumulative usage condition to a usage value, and adjusting the operation of the storage device based on the comparison. Another method of controlling a storage device includes detecting an operating condition associated with the storage device, comparing the operating condition to a warranty condition, and limiting the operation of the storage device to read-only operation based on the comparison. | 2015-02-05 |
20150039807 | NOR-TYPE FLASH MEMORY DEVICE CONFIGURED TO REDUCE PROGRAM MALFUNCTION - Embodiments of the present invention include a NOR-type flash memory device capable of reducing or eliminating program malfunctions. In some embodiments, the device includes a memory array, row selection circuit, column selection circuit, and program driver circuit. The memory array includes a memory sector having a first sector bit line and a second sector bit line. The memory array also includes a plurality of flash memory cells disposed on a matrix structure having a plurality of cell bit lines and a plurality of word lines arranged sequentially. The cell bit lines are alternately defined as first cell bit lines and second cell bit lines in sequential order. The first cell bit lines are connected to the first sector bit line in response to column selection signals thereof, and the second cell bit lines are connected to the second sector bit line in response to column selection signals thereof. | 2015-02-05 |
20150039808 | MEMORY SYSTEM - According to one embodiment, a memory system includes nonvolatile memories each storing data and an address table for acquiring an address of the data, and a control unit which is configured to be capable of accessing the nonvolatile memories in parallel, and issues table read requests for reading the address tables and data read requests for reading the data to the nonvolatile memories in response to read commands from a host. When a table read request and a data read request are issued to a same nonvolatile memory, the control unit processes the data read request in priority to the table read request. | 2015-02-05 |
20150039809 | NONVOLATILE MEMORY SYSTEM AND PROGRAMMING METHOD INCLUDING REPROGRAM OPERATION - A program method for a nonvolatile memory system includes a reprogram operation that does not require a reload of first program data to page buffers of a constituent nonvolatile memory device between execution of a first coarse program step and execution of a first fine program step, the first fine program step being performed after the execution of an intervening second coarse program step. | 2015-02-05 |
20150039810 | METHOD FOR MANAGING MEMORY APPARATUS, ASSOCIATED MEMORY APPARATUS THEREOF AND ASSOCIATED CONTROLLER THEREOF - A method for managing a memory apparatus and the associated memory apparatus thereof and the associated controller thereof are provided, where the method includes: temporarily storing data received from a host device into a volatile memory in the controller and utilizing the data in the volatile memory as received data, and dynamically monitoring the data amount of the received data to determine whether to immediately write the received data into at least one non-volatile memory element; and when determining to immediately write the received data into the at least one non-volatile memory element, directly writing the received data into a specific block configured to be a Multiple Level Cell memory block within a specific non-volatile memory element, rather than indirectly writing the received data into the specific block by first temporarily writing the received data into any other block configured to be a Single Level Cell memory block. | 2015-02-05 |
20150039811 | METHOD FOR MANAGING MEMORY APPARATUS, ASSOCIATED MEMORY APPARATUS THEREOF AND ASSOCIATED CONTROLLER THEREOF - A method for managing a memory apparatus and the associated memory apparatus thereof and the associated controller thereof are provided, where the method includes: temporarily storing data received from a host device into a volatile memory in the controller and utilizing the data in the volatile memory as received data, and dynamically monitoring the data amount of the received data to determine whether to immediately write the received data into at least one NV memory element; and when a specific signal is received and it is detected that specific data having not been written into a same location in a specific block configured to be an MLC memory block within a specific NV memory element of the at least one NV memory element for a predetermined number of times exists in the received data, immediately writing the specific data into another block in the at least one NV memory element. | 2015-02-05 |
20150039812 | Modify Executable Bits of System Management Memory Page Table - A computing device to create a system management memory page table in response to the computing device powering on. The system management memory page table includes pages with executable bits. The computing device modifies the executable bits of the pages before launching an option read only memory of the computing device. | 2015-02-05 |
20150039813 | NAND Interface Capacity Extender Device For Extending Solid State Drives Capacity, Performance, And Reliability - A system and method for a solid state drive comprising a system controller and one or more extender devices coupled to the system controller is disclosed, where each extender device is coupled to a plurality of NAND storage devices and each NAND storage device comprising a plurality of NAND flash memory cells. | 2015-02-05 |
20150039814 | STORAGE DEVICE AND STORAGE SYSTEM INCLUDING THE SAME - A storage device may include a nonvolatile storage and a storage controller. The nonvolatile storage may include a map table which stores information including a logical address, a physical address corresponding to the logical address and a correlation index designating the physical address. The storage controller is configured to transmit the information to an external host device, and to access the nonvolatile storage based on a request and the correlation index, both the request and the correlation index being transmitted from the host device. | 2015-02-05 |
20150039815 | SYSTEM AND METHOD FOR INTERFACING BETWEEN STORAGE DEVICE AND HOST - A system and method of use thereof that include a mass storage device connected to a host computer running host software modules. The mass storage device includes at least one non-volatile memory device, at least one volatile memory device, and a memory controller attached to the non-volatile and volatile memory devices, wherein the memory controller is connected to the host computer via a computer bus interface. Firmware executing on the memory controller provides software primitive functions, a software protocol interface, and an application programming interface to the host computer. The host software modules run by the host computer access the software primitive functions and the application programming interface of the mass storage device. | 2015-02-05 |
20150039816 | UTILIZATION OF DISK BUFFER FOR BACKGROUND REPLICATION PROCESSES - A method for replicating data from a first volume to a second volume includes receiving a first data request comprising a request for a first portion of data, wherein the first portion is part of a first volume. The first portion of data is read, and so is at least a second portion of data in addition to the first portion of data requested in the first data request. In response to determining that the second portion of data should be replicated to the second volume, the second portion of data is written to the second volume. | 2015-02-05 |
20150039817 | METHOD AND APPARATUS FOR PARALLEL TRANSFER OF BLOCKS OF DATA BETWEEN AN INTERFACE MODULE AND A NON-VOLATILE SEMICONDUCTOR MEMORY - A system including a non-volatile semiconductor memory (NVSM), an interface module and a control module. The NVSM stores first and second blocks of data. The first or second block of data is non-page based such that a size of the first block of data or a size of the second block of data is not an integer multiple of a page of data. The interface module transfers the first and second blocks of data during respectively a first data transfer event and a second data transfer event. The control module, based on descriptors, controls the first and second data transfer events such that the interface module transfers the first block of data between the interface module and the NVSM while transferring the second block of data between the interface module and the NVSM. The descriptors include respective sets of instructions for transferring the first and second blocks of data. | 2015-02-05 |
20150039818 | USE OF PREDEFINED BLOCK POINTERS TO REDUCE DUPLICATE STORAGE OF CERTAIN DATA IN A STORAGE SUBSYSTEM OF A STORAGE SERVER - A method and system for eliminating the redundant allocation and deallocation of special data on disk by providing an innovative technique for specially allocating special data of a storage system. Specially allocated data is data that is pre-allocated on disk and stored in memory of the storage system. “Special data” may include any pre-decided data, one or more portions of data that exceed a pre-defined sharing threshold, and/or one or more portions of data that have been identified by a user as special. For example, in some embodiments, a zero-filled data block is specially allocated by a storage system. As another example, in some embodiments, a data block whose contents correspond to a particular type of document header is specially allocated. | 2015-02-05 |
20150039819 | Apparatus and Method to Share Host System RAM with Mass Storage Memory RAM - A method includes, in one non-limiting embodiment, sending a request from a mass memory storage device to a host device, the request being one to allocate memory in the host device; writing data from the mass memory storage device to allocated memory of the host device; and subsequently reading the data from the allocated memory to the mass memory storage device. The memory may be embodied as flash memory, and the data may be related to a file system stored in the flash memory. The method enables the mass memory storage device to extend its internal volatile RAM to include RAM of the host device, enabling the internal RAM to be powered off while preserving data and context stored in the internal RAM. | 2015-02-05 |
20150039820 | FLASH MEMORY STORAGE SYSTEM AND CONTROLLER AND DATA WRITING METHOD THEREOF - A flash memory storage system having a flash memory controller and a flash memory chip is provided. The flash memory controller configures a second physical unit of the flash memory chip as a midway cache physical unit corresponding to a first physical unit and temporarily stores first data corresponding to a first host write command and second data corresponding to a second host write command in the midway cache physical unit, wherein the first and second data correspond to slow physical addresses of the first physical unit. Then, the flash memory controller synchronously copies the first and second data from the midway cache physical unit into the first physical unit, thereby shortening the time for writing data into the flash memory chip. | 2015-02-05 |
20150039821 | COMMUNICATION APPARATUS AND DATA PROCESSING METHOD - A communication apparatus comprises a general-purpose memory, and a high-speed memory that allows higher-speed access than the general-purpose memory. Protocol processing is executed to packetize transmission data using a general-purpose buffer allocated to the general-purpose memory and/or a high-speed buffer allocated to the high-speed memory as network buffers. | 2015-02-05 |
20150039822 | MECHANISM FOR ENABLING FULL DATA BUS UTILIZATION WITHOUT INCREASING DATA GRANULARITY - A memory is disclosed comprising a first memory portion, a second memory portion, and an interface, wherein the memory portions are electrically isolated from each other and the interface is capable of receiving a row command and a column command in the time it takes to cycle the memory once. By interleaving access requests (comprising row commands and column commands) to the different portions of the memory, and by properly timing these access requests, it is possible to achieve full data bus utilization in the memory without increasing data granularity. | 2015-02-05 |
20150039823 | TABLE LOOKUP APPARATUS USING CONTENT-ADDRESSABLE MEMORY BASED DEVICE AND RELATED TABLE LOOKUP METHOD THEREOF - A table lookup apparatus has a content-addressable memory (CAM) based device and a first cache. The CAM based device is used to store at least one table. The first cache is coupled to the CAM based device, and used to cache at least one input search key of the CAM based device and at least one corresponding search result. In addition, the table lookup apparatus may further include a plurality of second caches and an arbiter. Each second cache is used to cache at least one input search key of the CAM based device and at least one corresponding search result. The arbiter is coupled between the first cache and each of the second caches, and used to arbitrate access of the first cache between the second caches. | 2015-02-05 |
20150039824 | IMPLEMENTING ENHANCED BUFFER MANAGEMENT FOR DATA STORAGE DEVICES - A method, apparatus and a data storage device for implementing enhanced buffer management for storage devices. An amount of emergency power for the storage device is used to determine a time period for the storage device between emergency power loss and actual shut down of electronics. A time period for the storage device for storing write cache data to non-volatile storage is used to identify the amount of write cache data that can be safely written from the write cache to non-volatile memory after an emergency power loss, and this write cache threshold is used in selected buffer management techniques for providing enhanced storage device performance, including enhanced SSD or HDD performance. | 2015-02-05 |
20150039825 | Federated Tiering Management - Apparatus and methods are described for dynamically moving data between tiers of mass storage devices responsive to at least some of the mass storage devices providing information identifying which data are candidates to be moved between the tiers. | 2015-02-05 |
20150039826 | SUB-LUN AUTO-TIERING - Embodiments of the invention include systems and methods for auto-tiering multiple file systems across a common resource pool. Storage resources are allocated as a sub-LUN auto-tiering (SLAT) sub-pool. The sub-pool is managed as a single virtual address space (VAS) with a virtual block address (VBA) for each logical block address of each data block in the sub-pool, and a portion of those VBAs can be allocated to each of a number of file systems. Mappings are maintained between each logical block address in which file system data is physically stored and a VBA in the file system's portion of the virtual address space. As data moves (e.g., is added, auto-tiered, etc.), the mappings can be updated. In this way, multiple SLAT file systems can exploit the full resources of the common SLAT sub-pool and maximize the resource options available to auto-tiering functions. | 2015-02-05 |
20150039827 | DISTRIBUTED STORAGE NETWORK WITH REPLICATION CONTROL AND METHODS FOR USE THEREWITH - A method includes encoding input data into a plurality of slices. The plurality of slices are sent to a first plurality of distributed storage and task execution units for storage, the first plurality of distributed storage and task execution units being located at a corresponding first plurality of sites. Write slice data is received from the first plurality of distributed storage and task execution units. The method determines when replication is to be applied to the plurality of slices. When replication is to be applied to the plurality of slices, a second plurality of distributed storage and task execution units are selected, a plurality of replicated slices corresponding to the plurality of slices are generated, and the plurality of replicated slices are sent to the second plurality of distributed storage and task execution units. | 2015-02-05 |
20150039828 | TIME-BASED STORAGE WITHIN A DISPERSED STORAGE NETWORK - A method begins by a dispersed storage (DS) processing obtaining estimated future availability information for storage units and organizing a plurality of sets of encoded data slices into a plurality of group-sets of encoded data slices. For each of the plurality of group-sets of encoded data slices, the method continues with the DS processing module estimating an approximate storage completion time to produce a plurality of approximate storage completion times. The method continues with the DS processing module establishing a time-availability pattern for writing the plurality of group-sets of encoded data slices to the storage units based on the estimated future availability information and the plurality of approximate storage completion times. The method continues with the DS processing module sending the plurality of group-sets of encoded data slices to at least some of the storage units for storage therein in accordance with the time-availability pattern. | 2015-02-05 |
20150039829 | METHODS AND APPARATUS FOR IMPLEMENTING EXCHANGE MANAGEMENT FOR VIRTUALIZATION OF STORAGE WITHIN A STORAGE AREA NETWORK - Methods and apparatus for managing exchanges in a network device of a storage area network are disclosed. In a first “host-side” exchange initiated by an initiator and between the initiator and the network device, one or more frames are received from an initiator and/or sent to the initiator. At least one of the frames pertains to access of a virtual storage location of a virtual storage unit representing one or more physical storage locations on one or more physical storage units of the storage area network. One or more “disk-side” exchanges between the network device and one or more targets (i.e., physical storage units) are initiated in response to the first exchange. In the disk-side exchanges, one or more frames are sent from the network device to one of the targets and/or received from the target. Exchange information for the host-side exchange and the associated disk-side exchanges are updated throughout the exchanges. | 2015-02-05 |
20150039830 | VIRTUAL APPLIANCE DEPLOYMENT - A method, article of manufacture, and apparatus for efficiently processing information. In some embodiments, this includes determining a physical appliance to virtualize, creating a virtual appliance based on the physical appliance, and storing the virtual appliance in a storage array. In some embodiments, creating the virtual appliance includes creating the virtual appliance from a template. | 2015-02-05 |
20150039831 | FILE LOAD TIMES WITH DYNAMIC STORAGE USAGE - Provided is a technique for improving file load times with dynamic storage usage. A file made up of data blocks is received. A list of storage devices is retrieved. In one or more iterations, the data blocks of the file are written by: updating the list of storage devices by removing any storage devices with insufficient space to store additional data blocks; generating a performance score for each of the storage devices in the updated list of storage devices; determining a portion of the data blocks to be written to each of the storage devices based on the generated performance score for each of the storage devices; writing, in parallel, the determined portion of the data blocks to each of the storage devices; and recording placement information indicating the storage devices to which each determined portion of the data blocks was written. | 2015-02-05 |
20150039832 | System and Method of Caching Hinted Data - The disclosure is directed to a system and method of cache management for a data storage system. According to various embodiments, the cache management system includes a hinting driver and a priority controller. The hinting driver generates pointers based upon data packets intercepted from data transfer requests being processed by a host controller of the data storage system. The priority controller determines whether the data packets are associated with at least a first (high) priority level or a second (normal or low) priority level based upon the pointers generated by the hinting driver. High priority data packets are stored in cache memory regardless of whether they satisfy a threshold heat quotient (i.e. a selected level of data transfer activity). | 2015-02-05 |
20150039833 | Management of caches - A system and method for efficiently powering down banks in a cache memory for reducing power consumption. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, each comprising multiple cache sets. In response to a request to power down a first bank of the multiple banks in the cache array, the cache controller selects a cache line of a given type in the first bank and determines whether a respective locality of reference for the selected cache line exceeds a threshold. If the threshold is exceeded, then the selected cache line is migrated to a second bank in the cache array. If the threshold is not exceeded, then the selected cache line is written back to lower-level memory. | 2015-02-05 |
20150039834 | SHARING LOCAL CACHE FROM A FAILOVER NODE - Sharing local cache from a failover node, including: determining, by a managing compute node, whether a first compute node and a second compute node each have a local cache, where the second compute node is a mirrored copy of the first compute node; responsive to determining that the first compute node and the second compute node each have a local cache, combining, by the managing compute node, local cache on the first compute node and local cache on the second compute node into unified logical cache; receiving, by the managing compute node, a memory access request; and sending, by the managing compute node, the memory access request to an appropriate local cache in the unified logical cache. | 2015-02-05 |
20150039835 | System and Method of Hinted Cache Data Removal - The disclosure is directed to a system and method of cache management for a data storage system. According to various embodiments, the cache management system includes a hinting driver, a priority controller, and a data scrubber. The hinting driver generates pointers based upon data packets intercepted from data transfer requests being processed by a host controller of the data storage system. The priority controller determines whether the data transfer request includes a request to discard a portion of data based upon the pointers generated by the hinting driver. If the priority controller determines that the data transfer request includes a request to discard a portion of data, the data scrubber locates and removes the portion of data from the cache memory so that the cache memory is freed from invalid data (e.g. data associated with a deleted file). | 2015-02-05 |
20150039836 | METHODS AND APPARATUS RELATED TO DATA PROCESSORS AND CACHES INCORPORATED IN DATA PROCESSORS - A cache includes a cache array and a cache controller. The cache array has a multiple number of entries. The cache controller is coupled to the cache array, for storing new entries in the cache array in response to accesses by a data processor, and evicts entries from the cache array according to a cache replacement policy. The cache controller includes a frequent writes predictor for storing frequency information indicating a write back frequency for the multiple number of entries. The cache controller selects a candidate entry for eviction based on both recency information and the frequency information. | 2015-02-05 |
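The eviction policy in the abstract above combines recency information with a write-back frequency prediction. A minimal sketch of one such blend: restrict candidates to the least recently used half, then prefer the line expected to be written back least often. The half-split and the tuple layout are illustrative assumptions, not the patent's exact policy.

```python
def select_victim(entries):
    """Pick an eviction victim from `entries`, a list of
    (tag, last_use_time, writeback_count) tuples, using both
    recency and predicted write-back frequency."""
    # Recency: keep only the least recently used half as candidates.
    oldest = sorted(entries, key=lambda e: e[1])[: max(1, len(entries) // 2)]
    # Frequency: among those, evict the least frequently written-back line.
    return min(oldest, key=lambda e: e[2])[0]
```

The point of the frequency term is to keep frequently written-back lines cached, since evicting them would generate repeated write traffic.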
20150039837 | SYSTEM AND METHOD FOR TIERED CACHING AND STORAGE ALLOCATION - Method for data placement in a tiered caching system and/or tiered storage system includes: determining a first period of time between each access to a first data, in a predetermined time window; averaging the first periods of time between each access to obtain an average first period of time; determining a second period of time between each access to a second data, in said predetermined time window; averaging the second periods of time between each access to obtain an average second period of time; comparing the average first period of time and the average second period of time; placing the first data in a fast-access storage medium, when the average first period of time is less than the average second period of time; and placing the second data in the fast-access storage medium, when the average second period of time is less than the average first period of time. | 2015-02-05 |
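The placement method in the abstract above reduces to computing the average inter-access period for each data item within a window and giving the fast tier to the item with the smaller average. A direct sketch, assuming access times are given as sorted timestamps:

```python
def avg_inter_access(access_times):
    """Average period between consecutive accesses within the window."""
    gaps = [b - a for a, b in zip(access_times, access_times[1:])]
    return sum(gaps) / len(gaps)

def place(first_accesses, second_accesses):
    """Return which data ('first' or 'second') is placed in the
    fast-access storage medium: the one accessed more frequently,
    i.e. with the smaller average inter-access period."""
    if avg_inter_access(first_accesses) < avg_inter_access(second_accesses):
        return "first"
    return "second"
```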
20150039838 | METHOD AND SYSTEM FOR RESTORING CONSUMED MEMORY AFTER MEMORY CONSOLIDATION - One embodiment of the system disclosed herein facilitates reduction of latency associated with accessing content of a memory page that has been swapped out by a guest operating system in a virtualized computer system. During operation, a hypervisor detects an I/O write command issued by the guest operating system at a swap location within the guest operating system's swap file and records the swap location. The hypervisor then prefetches contents of a page stored at the swap location within the guest operating system's swap file into a prefetch cache in host machine memory. Subsequently, the hypervisor detects an I/O read command issued by the guest operating system at the swap location within the swap file. In response, the hypervisor provides contents of the page to the guest operating system from the prefetch cache, thereby avoiding accessing the guest operating system's swap file. | 2015-02-05 |
20150039839 | Data Bus Efficiency Via Cache Line Usurpation - Embodiments of the current invention permit a user to allocate cache memory to main memory more efficiently. The processor or a user allocates the cache memory and associates the cache memory with the main memory location, but suppresses or bypasses reading the main memory data into the cache memory. Some embodiments of the present invention permit the user to specify how many cache lines are allocated at a given time. Further, embodiments of the present invention may initialize the cache memory to a specified pattern. The cache memory may be zeroed or set to some desired pattern, such as all ones. Alternatively, a user may determine the initialization pattern through the processor. | 2015-02-05 |
20150039840 | REMOTE MEMORY RING BUFFERS IN A CLUSTER OF DATA PROCESSING NODES - A data processing node has an inter-node messaging module including a plurality of sets of registers each defining an instance of a GET/PUT context and a plurality of data processing cores each coupled to the inter-node messaging module. Each one of the data processing cores includes a mapping function for mapping each one of a plurality of user level processes to a different one of the sets of registers and thereby to a respective GET/PUT context instance. Mapping each one of the user level processes to the different one of the sets of registers enables a particular one of the user level processes to utilize the respective GET/PUT context instance thereof for performing a GET/PUT action to a ring buffer of a different data processing node coupled to the data processing node through a fabric without involvement of an operating system of any one of the data processing cores. | 2015-02-05 |
20150039841 | AUTOMATIC TRANSACTION COARSENING - A processing device comprises an instruction execution unit and track and combine logic to combine a plurality of transactions into a single combined transaction. The track and combine logic comprises a transaction monitoring module to monitor an execution of a plurality of transactions by the instruction execution unit, each of the plurality of transactions comprising a transaction begin instruction, at least one operation instruction and a transaction end instruction. The track and combine logic further comprises a transaction combination module to identify, in view of the monitoring, a subset of the plurality of transactions to combine into a single combined transaction for execution on the processing device and to combine the identified subset of the plurality of transactions into the single combined transaction, the single combined transaction comprising a single transaction begin instruction, a plurality of operation instructions corresponding to the subset of the plurality of transactions and a single transaction end instruction. | 2015-02-05 |
20150039842 | DATA STORAGE SYSTEM WITH DYNAMIC READ THRESHOLD MECHANISM AND METHOD OF OPERATION THEREOF - A system and method of operation of a data storage system includes: a memory die for determining a middle read threshold; a control unit, coupled to the memory die, for calculating a lower read threshold and an upper read threshold based on the middle read threshold and a memory element age; and a memory interface, coupled to the memory die, for reading a memory page of the memory die using the lower read threshold, the middle read threshold, or the upper read threshold for compensating for a charge variation. | 2015-02-05 |
20150039843 | CIRCUITS AND METHODS FOR PROVIDING DATA TO AND FROM ARRAYS OF MEMORY CELLS - A memory device uses a global input/output line or a pair of complementary global input/output lines to couple write data signals and read data signals to and from a memory array. The same input/output line or pairs of complementary global input/output lines may be used for coupling both write data signals and read data signals. | 2015-02-05 |
20150039844 | MEMORY DEVICE IMPLEMENTING REDUCED ECC OVERHEAD - A memory device using error correction code (ECC) implements a memory array parallel read-write method to reduce the storage overhead required for storing ECC check bits. The memory array parallel read-write method stores incoming address and data into serial-in parallel-out (SIPO) address registers and write data registers, respectively. The stored data are written to the memory cells in parallel when the SIPO registers are full. ECC check bits are generated for the block of parallel input data stored in the write data registers. During the read operation, a block of read out data corresponding to the read address are read from the memory cells in parallel and stored in read registers. ECC correction is performed on the block of read out data before the desired output data is selected for output. | 2015-02-05 |
20150039845 | TRANSFERRING LEARNING METADATA BETWEEN STORAGE SERVERS HAVING CLUSTERS VIA COPY SERVICES OPERATIONS ON A SHARED VIRTUAL LOGICAL UNIT THAT STORES THE LEARNING METADATA - A virtual logical unit that stores learning metadata is allocated in a first storage server having a first plurality of clusters, wherein the learning metadata indicates a type of storage device in which selected data of the first plurality of clusters of the first storage server are stored. A copy services command is received to copy the selected data from the first storage server to a second storage server having a second plurality of clusters. The virtual logical unit that stores the learning metadata is copied, from the first storage server to the second storage server, via the copy services command. Selected logical units corresponding to the selected data are copied from the first storage server to the second storage server, and the learning metadata is used to place the selected data in the type of storage device indicated by the learning metadata. | 2015-02-05 |
20150039846 | Efficiency Of Virtual Machines That Use De-Duplication As Primary Data Storage - Example apparatus and methods provide two types of storage for a virtual machine running on a hypervisor. The first storage is de-duplication based and the second storage is not de-duplication based. Example apparatus and methods may acquire data from the first storage to instantiate the virtual machine, to instantiate an operating system on the virtual machine, or to instantiate an application on the virtual machine from the first storage. Example apparatus and methods may write a snapshot to the second storage and then support random input/output for the virtual machine, for the operating system, or for the application from the second storage. The snapshot may selectively be collapsed or the second storage may selectively be retired and thus example systems may selectively update the first storage from the second storage. Having dual devices facilitates using de-duplication storage for de-duplication-centric I/O while non-de-duplication storage is used for random I/O. | 2015-02-05 |
20150039847 | BALANCING DATA DISTRIBUTION IN A FAULT-TOLERANT STORAGE SYSTEM - The disclosed embodiments relate to a system for managing replicated copies of data items in a storage system. During operation, the system obtains a current configuration of the storage system, wherein the current configuration specifies locations of replicated copies of data items. Next, the system analyzes the current configuration to identify possible movements of copies of data items among locations in the storage system. The system then assigns utilities to the identified movements, wherein a utility assigned to a movement reflects a change in reliability resulting from the movement. Finally, the system selects a utility-maximizing set of movements and performs the utility-maximizing set of movements to improve the reliability of the storage system. | 2015-02-05 |
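The selection step in the abstract above — choosing a utility-maximizing set of movements — can be read, in its simplest form, as a greedy budgeted selection over candidate movements scored by their reliability utility. The budget parameter and the greedy strategy are illustrative assumptions; the patent does not specify the optimization method.

```python
def select_movements(candidate_moves, budget):
    """Greedily pick up to `budget` movements with the highest positive
    utility.  `candidate_moves` is a list of (move_id, utility) pairs,
    where utility reflects the change in reliability from the movement."""
    ranked = sorted(candidate_moves, key=lambda m: m[1], reverse=True)
    return [m[0] for m in ranked[:budget] if m[1] > 0]
```

Movements with non-positive utility are skipped entirely, since performing them would not improve (or would hurt) reliability.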
20150039848 | METHODS AND APPARATUSES FOR IN-SYSTEM FIELD REPAIR AND RECOVERY FROM MEMORY FAILURES - In a particular embodiment, a device includes memory address remapping circuitry and a remapping engine. The memory address remapping circuitry includes a comparison circuit to compare a received memory address to one or more remapped addresses. The memory address remapping circuitry also includes a selection circuit responsive to the comparison circuit to output a physical address. The physical address corresponds to a location in a random-access memory (RAM). The remapping engine is configured to update the one or more remapped addresses to include a particular address in response to detecting that a number of occurrences of errors at a particular location satisfies a threshold. | 2015-02-05 |
20150039849 | Multi-Layer Data Storage Virtualization Using a Consistent Data Reference Model - A write request that includes a data object is processed. A hash function is executed on the data object, thereby generating a hash value that includes a first portion and a second portion. A hypervisor table is queried with the first portion, thereby obtaining a master storage node identifier. The data object and the hash value are sent to a master storage node associated with the master storage node identifier. At the master storage node, a master table is queried with the second portion, thereby obtaining a storage node identifier. The data object and the hash value are sent from the master storage node to a storage node associated with the storage node identifier. | 2015-02-05 |
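The write path in the abstract above is a two-level, hash-directed lookup: one portion of the hash selects a master storage node via the hypervisor table, another portion selects the final storage node via that master's table. The sketch below assumes SHA-256 and one-byte portions purely for illustration; the patent does not name a hash function or portion widths.

```python
import hashlib

def route(data_object: bytes, hypervisor_table, master_tables):
    """Return (master_node_id, storage_node_id) for a data object.
    `hypervisor_table` maps the first hash portion to a master node;
    `master_tables[master_id]` maps the second portion to a storage node."""
    digest = hashlib.sha256(data_object).digest()
    first_portion, second_portion = digest[0], digest[1]  # assumed 8-bit portions
    master_id = hypervisor_table[first_portion % len(hypervisor_table)]
    master_table = master_tables[master_id]
    node_id = master_table[second_portion % len(master_table)]
    return master_id, node_id
```

Because the reference is derived from the object's content, every layer resolves the same object to the same nodes without coordinating state — the "consistent data reference model" of the title.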
20150039850 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 2015-02-05 |
20150039851 | METHODS, APPARATUS, INSTRUCTIONS AND LOGIC TO PROVIDE VECTOR SUB-BYTE DECOMPRESSION FUNCTIONALITY - Methods, apparatus, instructions and logic provide SIMD vector sub-byte decompression functionality. Embodiments include shuffling a first and second byte into the least significant portion of a first vector element, and a third and fourth byte into the most significant portion. Processing continues shuffling a fifth and sixth byte into the least significant portion of a second vector element, and a seventh and eighth byte into the most significant portion. Then by shifting the first vector element by a first shift count and the second vector element by a second shift count, sub-byte elements are aligned to the least significant bits of their respective bytes. Processors then shuffle a byte from each of the shifted vector elements' least significant portions into byte positions of a destination vector element, and from each of the shifted vector elements' most significant portions into byte positions of another destination vector element. | 2015-02-05 |
20150039852 | DATA COMPACTION USING VECTORIZED INSTRUCTIONS - Techniques for performing database operations using vectorized instructions are provided. In one technique, data compaction is performed using vectorized instructions to identify a shuffle mask based on matching bits and update an output array based on the shuffle mask and an input array. In a related technique, a hash table probe involves using vectorized instructions to determine whether each key in one or more hash buckets matches a particular input key. | 2015-02-05 |
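The compaction technique in the abstract above — deriving a shuffle mask from matching bits and gathering the matching elements contiguously — can be shown with a scalar emulation. Real implementations would use SIMD shuffle instructions over a precomputed mask table; this sketch only illustrates the data movement, not the vectorized form.

```python
def compact(input_array, match_bits):
    """Scalar emulation of vectorized data compaction: `match_bits` is
    the bitmask produced by a vector comparison; the derived shuffle
    mask gathers matching elements to the front of the output."""
    # The shuffle mask is the ordered list of set-bit positions.
    shuffle_mask = [i for i in range(len(input_array)) if (match_bits >> i) & 1]
    return [input_array[i] for i in shuffle_mask]
```

In the vectorized version, `match_bits` would index a lookup table of precomputed shuffle masks, so the gather happens in one shuffle instruction instead of a per-element loop.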
20150039853 | ESTIMATING A COST OF PERFORMING DATABASE OPERATIONS USING VECTORIZED INSTRUCTIONS - Techniques for performing database operations using vectorized instructions are provided. In one technique, it is determined whether to perform a database operation using one or more vectorized instructions or without using any vectorized instructions. This determination may comprise estimating a first cost of performing the database operation using one or more vectorized instructions and estimating a second cost of performing the database operation without using any vectorized instructions. Multiple factors may be used to determine which approach to follow, such as the number of data elements that may fit into a SIMD register, a number of vectorized instructions in the vectorized approach, a number of data movement instructions that involve moving data from a SIMD register to a non-SIMD register and/or vice versa, a size of a cache, and a projected size of a hash table. | 2015-02-05 |
20150039854 | VECTORIZED LOOKUP OF FLOATING POINT VALUES - Systems and techniques disclosed herein include methods for de-quantization of feature vectors used in automatic speech recognition. A SIMD vector processor is used in one embodiment for efficient vectorized lookup of floating point values in conjunction with fMPE processing for increasing the discriminative power of input signals. These techniques exploit parallelism to effectively reduce the latency of speech recognition in a system operating in a high dimensional feature space. In one embodiment, a bytewise integer lookup operation effectively performs a floating point or a multiple byte lookup. | 2015-02-05 |
20150039855 | METHODS AND APPARATUS FOR SIGNAL FLOW GRAPH PIPELINING THAT REDUCE STORAGE OF TEMPORARY VARIABLES - A system for pipelining signal flow graphs by a plurality of shared memory processors organized in a 3D physical arrangement with the memory overlaid on the processor nodes that reduces storage of temporary variables. A group function is formed by two or more instructions that specify two or more parts of the group function. A first instruction specifies a first part and specifies control information for a second instruction adjacent to the first instruction or at a pre-specified location relative to the first instruction. The first instruction when executed transfers the control information to a pending register and produces a result which is transferred to an operand input associated with the second instruction. The second instruction specifies a second part of the group function and when executed transfers the control information from the pending register to a second execution unit to adjust the second execution unit's operation on the received operand. | 2015-02-05 |
20150039856 | Efficient Complex Multiplication and Fast Fourier Transform (FFT) Implementation on the ManArray Architecture - Efficient computation of complex multiplication results and very efficient fast Fourier transforms (FFTs) are provided. A parallel array VLIW digital signal processor is employed along with specialized complex multiplication instructions and communication operations between the processing elements which are overlapped with computation to provide very high performance operation. Successive iterations of a loop of tightly packed VLIWs are used allowing the complex multiplication pipeline hardware to be efficiently used. In addition, efficient techniques for supporting combined multiply accumulate operations are described. | 2015-02-05 |
20150039857 | APPARATUS, METHOD, SYSTEM AND EXECUTABLE MODULE FOR CONFIGURATION AND OPERATION OF ADAPTIVE INTEGRATED CIRCUITRY HAVING FIXED, APPLICATION SPECIFIC COMPUTATIONAL ELEMENTS - The present invention concerns configuration of a new category of integrated circuitry for adaptive computing. The various embodiments provide an executable information module for an adaptive computing engine (ACE) integrated circuit and may include configuration information, operand data, and may also include routing and power control information. The ACE IC comprises a plurality of heterogeneous computational elements coupled to an interconnection network. The plurality of heterogeneous computational elements include corresponding computational elements having fixed and differing architectures, such as fixed architectures for different functions such as memory, addition, multiplication, complex multiplication, subtraction, configuration, reconfiguration, control, input, output, and field programmability. In response to configuration information, the interconnection network is operative to configure the plurality of heterogeneous computational elements for a plurality of different functional modes. | 2015-02-05 |
20150039858 | REDUCING REGISTER READ PORTS FOR REGISTER PAIRS - Embodiments relate to reducing a number of read ports for register pairs. An aspect includes executing an instruction. The instruction identifies a pair of registers as containing a wide operand which spans the pair of registers. It is determined if a pairing indicator associated with the pair of registers has a first value or a second value. The first value indicates that the wide operand is stored in a wide register, and the second value indicates that the wide operand is not stored in the wide register. Based on the pairing indicator having the first value, the wide operand is read from the wide register. Based on the pairing indicator having the second value, the wide operand is read from the pair of registers. An operation is performed using the wide operand. | 2015-02-05 |
20150039859 | MICROPROCESSOR ACCELERATED CODE OPTIMIZER - A method for accelerating code optimization in a microprocessor. The method includes fetching an incoming macroinstruction sequence using an instruction fetch component and transferring the fetched macroinstructions to a decoding component for decoding into microinstructions. Optimization processing is performed by reordering the microinstruction sequence into an optimized microinstruction sequence comprising a plurality of dependent code groups. The optimized microinstruction sequence is output to a microprocessor pipeline for execution. A copy of the optimized microinstruction sequence is stored into a sequence cache for subsequent use upon a subsequent hit on the optimized microinstruction sequence. | 2015-02-05 |
20150039860 | RDA CHECKPOINT OPTIMIZATION - A system and method for efficiently performing microarchitectural checkpointing. A register rename unit within a processor determines whether a physical register number qualifies to have duplicate mappings. Information for maintenance of the duplicate mappings is stored in a register duplicate array (RDA). To reduce the penalty for misspeculation or exception recovery, control logic in the processor supports multiple checkpoints. The RDA is one of multiple data structures to have checkpoint copies of state. The RDA utilizes a content addressable memory (CAM) to store physical register numbers. The duplicate counts for both the current state and the checkpoint copies for a given physical register number are updated when instructions utilizing the given physical register number are retired. To reduce on-die real estate and power consumption, a single CAM entry stores the physical register number and the other fields are stored in separate storage elements. | 2015-02-05 |
20150039861 | ALLOCATION OF ALIAS REGISTERS IN A PIPELINED SCHEDULE - In an embodiment, a system includes a processor including one or more cores and a plurality of alias registers to store memory range information associated with a plurality of operations of a loop. The memory range information references one or more memory locations within a memory. The system also includes register assignment means for assigning each of the alias registers to a corresponding operation of the loop, where the assignments are made according to a rotation schedule, and one of the alias registers is assigned to a first operation in a first iteration of the loop and to a second operation in a subsequent iteration of the loop. The system also includes the memory coupled to the processor. Other embodiments are described and claimed. | 2015-02-05 |
20150039862 | TECHNIQUES FOR INCREASING INSTRUCTION ISSUE RATE AND REDUCING LATENCY IN AN OUT-OF-ORDER PROCESSOR - A technique for operating a processor includes storing a first result to a writeback buffer, in response to a first execution unit of the processor attempting to write the first result of a first completed instruction to a register file of the processor at a same processor time as a second execution unit of the processor is attempting to write a second result of a second completed instruction to the register file. The writeback buffer is positioned in a dataflow between the first execution unit and the register file. A buffer full indicator logic is used to detect that the writeback buffer is unavailable. A buffer unavailable signal is transmitted, from the buffer full indicator logic, in response to detecting the writeback buffer is unavailable. In response to receiving the buffer unavailable signal, a buffer retrieving logic writes the first result from the writeback buffer to the register file. | 2015-02-05 |
20150039863 | METHOD FOR COMPRESSING INSTRUCTION AND PROCESSOR FOR EXECUTING COMPRESSED INSTRUCTION - A method for compressing instruction is provided, which includes the following steps. Analyze a program code to be executed by a processor to find one or more instruction groups in the program code according to a preset condition. Each of the instruction groups includes one or more instructions in sequential order. Sort the one or more instruction groups according to a cost function of each of the one or more instruction groups. Put the first X of the sorted one or more instruction groups into an instruction table. X is a value determined according to the cost function. Replace each of the one or more instruction groups in the program code that are put into the instruction table with a corresponding execution-on-instruction-table (EIT) instruction. | 2015-02-05 |
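The compression flow in the abstract above can be sketched end to end: count each candidate group's occurrences, rank the groups by a cost function, keep the top X in the instruction table, and replace matched occurrences with an EIT reference. The cost function used here (occurrence count times group length) is an assumption for illustration; the patent leaves the cost function open.

```python
def count_group(program, group):
    """Occurrences of a consecutive instruction group in the program."""
    k = len(group)
    return sum(1 for i in range(len(program) - k + 1)
               if tuple(program[i:i + k]) == group)

def compress(program, candidate_groups, table_size):
    """Rank candidate groups by an assumed cost function, keep the top
    `table_size` in the instruction table, and replace their occurrences
    in the program with execution-on-instruction-table (EIT) references."""
    ranked = sorted(candidate_groups,
                    key=lambda g: count_group(program, g) * len(g),
                    reverse=True)
    table = ranked[:table_size]
    out, i = [], 0
    while i < len(program):
        for idx, g in enumerate(table):
            if tuple(program[i:i + len(g)]) == g:
                out.append(f"EIT {idx}")   # one EIT instruction replaces the group
                i += len(g)
                break
        else:
            out.append(program[i])         # instruction not covered by the table
            i += 1
    return table, out
```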
20150039864 | SYSTEMS AND METHODS FOR DEFEATING MALWARE WITH RANDOMIZED OPCODE VALUES - A computer processor includes a first instruction set and a second instruction set. The computer processor further includes a translator. The translator translates the first instruction set into the second instruction set. The computer processor is configured to execute operations using only the second complete instruction set. | 2015-02-05 |
20150039865 | Control Device for Vehicle - If exclusive control is used when carrying out update processing or reference processing on a data buffer in a shared memory among plural arithmetic units, waiting time increases and it is difficult to guarantee a real-time property. | 2015-02-05 |
20150039866 | COMPUTER FOR AMDAHL-COMPLIANT ALGORITHMS LIKE MATRIX INVERSION - A family of computers is disclosed and claimed that supports simultaneous processes from the single core up to multi-chip Program Execution Systems (PES). The instruction processing of the instructed resources is local, dispensing with the need for large VLIW memories. The cores through the PES have maximum performance for Amdahl-compliant algorithms like matrix inversion, because the multiplications do not stall and the other circuitry keeps up. Cores with log based multiplication generators improve this performance by a factor of two for sine and cosine calculations in single precision floating point and have even greater performance for log | 2015-02-05 |
20150039867 | INSTRUCTION SOURCE SPECIFICATION - Techniques are disclosed relating to specification of instruction operands. In some embodiments, this may involve assigning operands to source inputs. In one embodiment, an instruction includes one or more mapping values, each of which corresponds to a source of the instruction and each of which specifies a location value. In this embodiment, the instruction includes one or more location values that are each usable to identify an operand for the instruction. In this embodiment, a method may include accessing operands using the location values and assigning accessed operands to sources using the mapping values. In one embodiment, the sources may correspond to inputs of an execution block. In one embodiment, a destination mapping value in the instruction may specify a location value that indicates a destination for storing an instruction result. | 2015-02-05 |
20150039868 | INTRA-INSTRUCTIONAL TRANSACTION ABORT HANDLING - Embodiments relate to intra-instructional transaction abort handling. An aspect includes using an emulation routine to execute an instruction within a transaction. The instruction includes at least one unit of operation. The transaction effectively delays committing stores to memory until the transaction has completed successfully. After receiving an abort indication, emulation of the instruction is terminated prior to completing the execution of the instruction. The instruction is terminated after the emulation routine completes any previously initiated unit of operation of the instruction. | 2015-02-05 |
20150039869 | Handling Operating System (Os) Transitions In An Unbounded Transactional Memory (Utm) Mode - In one embodiment, the present invention includes a method for receiving control in a kernel mode via a ring transition from a user thread during execution of an unbounded transactional memory (UTM) transaction, updating a state of a transaction status register (TSR) associated with the user thread and storing the TSR with a context of the user thread, and later restoring the context during a transition from the kernel mode to the user thread. In this way, the UTM transaction may continue on resumption of the user thread. Other embodiments are described and claimed. | 2015-02-05 |
20150039870 | SYSTEMS AND METHODS FOR LOCKING BRANCH TARGET BUFFER ENTRIES - A data processing system includes a processor configured to execute processor instructions and a branch target buffer having a plurality of entries. Each entry is configured to store a branch target address and a lock indicator, wherein the lock indicator indicates whether the entry is a candidate for replacement, and wherein the processor is configured to access the branch target buffer during execution of the processor instructions. The data processing system further includes control circuitry configured to determine a fullness level of the branch target buffer, wherein in response to the fullness level reaching a fullness threshold, the control circuitry is configured to assert the lock indicator of one or more of the plurality of entries to indicate that the one or more of the plurality of entries is not a candidate for replacement. | 2015-02-05 |
20150039871 | Systems And Methods For Infrastructure Template Provisioning In Modular Chassis Systems - Systems and methods for provisioning the infrastructure of modular information handling systems, such as modular blade server chassis systems, using one or more pre-defined templates. IT service templates may be initially loaded and present in local memory or storage of a modular information handling system to define the system infrastructure configuration that ships with the modular chassis platform, or may be later downloaded or otherwise received in local memory or storage from an external source after system installation to specify the desired end-state of the system infrastructure configuration. | 2015-02-05 |
20150039872 | Multiple Signed Filesystem Application Packages - A method and system is provided for file and application management. The method may include configuring a destination system where the method further includes generating a filesystem image including an application file and files necessary for the destination system to execute the application file, generating a cryptographic signature of the filesystem image, transferring the filesystem image and the cryptographic signature to the destination system, cryptographically verifying the filesystem image with the cryptographic signature, and mounting the filesystem image on the destination system in a read-only manner. | 2015-02-05 |
20150039873 | PROCESSOR PROVIDING MULTIPLE SYSTEM IMAGES - An example processor includes a plurality of processing core components, one or more memory interface components, and a management component, wherein the one or more memory interface components are each shared by the plurality of processing core components, and wherein the management component is configured to assign each of the plurality of processing core components to one of a plurality of system images. | 2015-02-05 |
20150039874 | SYSTEM ON A CHIP HARDWARE BLOCK FOR TRANSLATING COMMANDS FROM PROCESSOR TO READ BOOT CODE FROM OFF-CHIP NON-VOLATILE MEMORY DEVICE - Translation of boot code read request commands from an on-board processor of a system on a chip (SoC) from a bus protocol (e.g., advanced high-performance bus (AHB) protocol) into a sequence of commands understandable by a serial interface of the SoC to read boot code from an off-board (e.g., flash or other non-volatile) memory device. The serial interface of the memory device may include a relatively low pin count (e.g., 5 pins) and boot code of the memory device may be modified after tape-out of the SoC free of necessitating a subsequent tape-out of the SoC. | 2015-02-05 |
20150039875 | Deployment of Software Images with Distinct Configuration Logic - A solution for deploying a software image comprising a target operating system on a target computing machine is proposed. A corresponding method comprises mounting the software image as a storage device, identifying each software program comprised in the software image, downloading a configuration logic for configuring each software program, applying each configuration logic against the software image, and booting the target computing machine from the target operating system. | 2015-02-05 |
20150039876 | Parallelizing Boot Operations - The present disclosure describes apparatuses and techniques for parallelizing boot operations. In some aspects, an operation transferring a boot image from a non-volatile memory to a volatile memory is initiated prior to completion of an operation validating another boot image previously transferred into the volatile memory. This can enable transfer operations and validation operations of boot images to be performed in parallel. By so doing, delays between the transfer and validation operations can be minimized, thereby reducing device boot times. | 2015-02-05 |
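The overlap described here can be sketched with a thread pool: validation of an already-transferred image runs while the next image is still being copied. A toy sketch, not the patented mechanism; `transfer` and `validate` are hypothetical stand-ins for DMA transfer and cryptographic validation.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def transfer(nv_image: bytes) -> bytes:
    # Simulate copying a boot image from non-volatile to volatile memory.
    return bytes(nv_image)

def validate(ram_image: bytes, expected_digest: bytes) -> bool:
    # Validate a previously transferred image against its known digest.
    return hashlib.sha256(ram_image).digest() == expected_digest

images = [b"stage1-bootloader", b"stage2-kernel"]
digests = [hashlib.sha256(i).digest() for i in images]

with ThreadPoolExecutor() as pool:
    ram0 = transfer(images[0])
    # Start validating image 0 while image 1 is still being transferred.
    validation = pool.submit(validate, ram0, digests[0])
    ram1 = transfer(images[1])
    ok0 = validation.result()
ok1 = validate(ram1, digests[1])
```

Serializing the two operations would cost transfer time plus validation time per image; overlapping them hides one behind the other, which is the boot-time saving the abstract claims.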
20150039877 | SYSTEM AND METHODS FOR AN IN-VEHICLE COMPUTING SYSTEM - Embodiments are disclosed for controlling power modes of a computing system. In some embodiments, a method for an in-vehicle computing system includes, while the vehicle is shut down, operating the system in a suspend mode with volatile memory on standby, and determining whether a reboot may be completed before a next anticipated vehicle start. The method may further include, if it is determined that a reboot may be completed before the next anticipated vehicle start, performing a reboot of the system. | 2015-02-05 |
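The reboot decision in this abstract reduces to a simple comparison: reboot during shutdown only if the reboot can finish before the next anticipated vehicle start. A minimal sketch with hypothetical names and units:

```python
def should_reboot(reboot_seconds: float, seconds_until_next_start: float) -> bool:
    # While suspended with volatile memory on standby, perform a reboot
    # only if it is expected to complete before the vehicle starts again.
    return reboot_seconds < seconds_until_next_start

# Overnight parking leaves ample time for a 45-second reboot.
decision = should_reboot(reboot_seconds=45.0, seconds_until_next_start=8 * 3600)
```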
20150039878 | METHOD OF PROGRAMMING THE DEFAULT CABLE INTERFACE SOFTWARE IN AN INDICIA READING DEVICE - An indicia reading apparatus includes an interconnect cable and an indicia reading device. The indicia reading device is configured so that, if it is not configured to any interconnect cable and detects an indicia that does not contain one of a plurality of specified sequences of data elements that the device will recognize and use to configure itself to operate with the interconnect cable, it will indicate to the user that it needs to be configured to operate with the interconnect cable. | 2015-02-05 |
20150039879 | CubeSat System, Method and Apparatus - A satellite system includes a chassis and an avionics package housed within an upper portion of the chassis. The avionics package includes a main system board, a payload interface board, at least one daughter board and a battery board. The main system board, the payload interface board, the at least one daughter board, and the battery board reside in substantially parallel planes. The payload interface board, the at least one daughter board, and the battery board are coupled to the main system board through one or more stackable connectors. A method of operating a satellite is also described. | 2015-02-05 |
20150039880 | MOBILE COMPUTING DEVICE AND WEARABLE COMPUTING DEVICE HAVING AUTOMATIC ACCESS MODE CONTROL - A system can include a mobile computing device and a wearable computing device. The wearable computing device can include a sensor that outputs an indication that the wearable computing device is being worn. In some examples, one or both of the devices can be operable to determine that the devices are within a threshold distance of each other. Responsive to receiving the indications that the wearable computing device is being worn and that the devices are within the threshold distance of each other, one or both of the devices can be operable to change an access mode of a computing environment provided by the respective device from a reduced access mode to an increased access mode. | 2015-02-05 |
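The access-mode decision combines two signals: the wearable's worn indication and device proximity. A minimal sketch, with a hypothetical function name and an assumed 5-meter threshold:

```python
def access_mode(is_worn: bool, distance_m: float, threshold_m: float = 5.0) -> str:
    # Grant the increased access mode only when the wearable reports it is
    # being worn AND the two devices are within the distance threshold.
    if is_worn and distance_m <= threshold_m:
        return "increased"
    return "reduced"

mode = access_mode(is_worn=True, distance_m=1.2)
```

Requiring both conditions means removing the wearable or walking away from the phone automatically drops the environment back to the reduced access mode.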
20150039881 | Triggering an Internet Packet Protocol Against Malware - A process of triggering an Internet packet protocol against malware includes providing protocol trigger mechanisms configured to affect network access and data object access against malware, denial of service attacks, and distributed denial of service attacks. A multi-level security system is established with a cryptographically secure network channel, or another equivalent encrypted channel, and a second object of an encrypted document or data message that uses the secure network channel. The equivalent encrypted channel can be a Virtual Private Network (VPN) tunnel, including MPPE/PPTP/CIPE/OpenVPN, a Secure Socket Layer (SSL) tunnel, or an IPSec tunnel. | 2015-02-05 |
20150039882 | IDENTIFYING CONTENT FROM AN ENCRYPTED COMMUNICATION - Provided is an identifying device for identifying request content from an encrypted request to a server, the identifying device including: a target acquiring unit for acquiring the data size of an encrypted response returned from the server for the encrypted request to be identified; a candidate acquiring unit for acquiring the data size of each of a plurality of encrypted response candidates returned by the server in response to a plurality of encrypted request candidates sent to the server corresponding to a plurality of known request content candidates; and an identifying unit for identifying the request content to be identified from the plurality of known request content candidates on the basis of results obtained by comparing the data size of the encrypted response for the encrypted request to be identified to the data sizes of the plurality of encrypted response candidates. | 2015-02-05 |
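The core of this identification technique is a traffic-analysis observation: encryption hides content but not (necessarily) length, so an observed response size can be matched against the response sizes of known candidate requests. A toy sketch with invented candidate URLs and sizes:

```python
def identify_request(observed_size: int, candidate_sizes: dict) -> str:
    # Return the known candidate request whose encrypted-response size
    # is closest to the size observed for the request being identified.
    return min(candidate_sizes,
               key=lambda cand: abs(candidate_sizes[cand] - observed_size))

# Sizes of encrypted responses previously elicited for each candidate request.
candidate_sizes = {
    "/search?q=alpha": 2048,
    "/search?q=beta": 4096,
    "/search?q=gamma": 8192,
}
guess = identify_request(observed_size=4100, candidate_sizes=candidate_sizes)
```

An observed 4100-byte response sits closest to the 4096-byte candidate, so that request content is inferred despite the encryption.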
20150039883 | SYSTEM AND METHOD FOR IDENTITY-BASED KEY MANAGEMENT - A system and method for identity (ID)-based key management are provided. The ID-based key management system includes an authentication server configured to authenticate a terminal through key exchange based on an ID and a password of a user of the terminal, set up a secure channel with the terminal, and provide a private key based on the ID of the user to the terminal through the secure channel, and a private-key generator configured to generate the private key corresponding to the ID of the terminal user according to a request of the authentication server. | 2015-02-05 |
20150039884 | Secure Configuration of Authentication Servers - Embodiments of the invention are directed to automatically populating a database of names and secrets in an authentication server by sending one or more lists of one or more names and secrets by a network management software to an authentication server. Furthermore, some embodiments provide that the lists being sent are encrypted and/or embedded in otherwise inconspicuous files. | 2015-02-05 |
20150039885 | CONJUNCTIVE SEARCH IN ENCRYPTED DATA - A method comprises receiving a first cryptographic token for one search term and a second cryptographic token generated using the one search term and at least one other search term. A first search is conducted using the first cryptographic token to generate a first result set, and the second cryptographic token is used for computing a subset of the first result set. | 2015-02-05 |
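The two-token scheme can be sketched as follows: the server's index maps opaque tokens to result sets, and the conjunctive token (derived from both terms) narrows the single-term result set. A plain hash stands in here for the real cryptographic token derivation; the index contents are invented for illustration.

```python
import hashlib

def token(*terms: str) -> str:
    # Deterministic stand-in for a cryptographic search token.
    return hashlib.sha256("|".join(sorted(terms)).encode()).hexdigest()

# Server-side index mapping tokens to document ids (assumed precomputed).
index = {
    token("alice"): {1, 2, 3},
    token("alice", "bob"): {2, 3},
}

first = index[token("alice")]                       # first search: one term
conjunctive = first & index[token("alice", "bob")]  # second token narrows the set
```

The server never sees the plaintext terms, only the tokens, yet can still answer the conjunctive query as a refinement of the first result set.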
20150039886 | SECURE APPLICATION ACCESS SYSTEM - A proxy server creates an index of keywords, receives at least a portion of a file, and, when a keyword in the index is encountered in the at least a portion of the file as the at least a portion of the file is being encrypted, associates in the index an encrypted record location identifier with the encountered keyword. The proxy server receives a search query and uses the keyword index to retrieve encrypted records from the server. The encrypted records are decrypted and sent as search results in response to the search query. | 2015-02-05 |
20150039887 | SECURE APPLICATION ACCESS SYSTEM - A proxy server creates an index of keywords, receives an encrypted record, decrypts the received encrypted record as decrypted data and, when a keyword in the index is encountered in the decrypted data, associates in the index an encrypted record location identifier with the encountered keyword. The proxy server receives a search query and uses the keyword index to retrieve encrypted records from the server. The encrypted records are decrypted and sent as search results in response to the search query. | 2015-02-05 |
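The proxy's role in these two related abstracts (index while encrypting or decrypting, then serve searches from the index) can be sketched end to end. A toy XOR keystream stands in for real encryption, and all names are hypothetical:

```python
import hashlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a hash-derived keystream.
    # Applying it twice with the same key recovers the plaintext.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

key = b"proxy-key"
records = {0: "invoice for acme", 1: "meeting notes", 2: "acme contract"}
# Server stores only encrypted records, addressed by location identifier.
store = {loc: xor_stream(text.encode(), key) for loc, text in records.items()}

# Proxy builds the keyword index: decrypt each record and associate each
# keyword it encounters with that record's location identifier.
index: dict = {}
for loc, blob in store.items():
    for word in xor_stream(blob, key).decode().split():
        index.setdefault(word, set()).add(loc)

# Serve a query: look up locations, fetch encrypted records, decrypt results.
hits = [xor_stream(store[loc], key).decode() for loc in sorted(index["acme"])]
```

The server only ever holds ciphertext; the proxy's keyword-to-location index is what makes the encrypted store searchable.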
20150039888 | TECHNIQUES FOR SHARING DATA - Techniques for sharing data between users in a manner that maintains anonymity of the users. Tokens are generated and provided to users for sharing data. A token comprises information encoding an identifier and an encryption key. A user may use a token to upload data that is to be shared. The data to be shared is encrypted using the encryption key associated with the token and the encrypted data is stored such that it can be accessed using the identifier associated with the token. A user may then use a token to access the shared data. The identifier associated with the token being used to access the shared data is used to access the data and the encryption key associated with the token is used to decrypt the data. Data is shared anonymously without revealing the identity of the users using the tokens. | 2015-02-05 |
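The token mechanics here are concrete enough to sketch: a token pairs an identifier (where the data lives) with an encryption key (how to read it), so anyone holding the token can upload or fetch the shared data without revealing who they are. A toy XOR keystream stands in for real encryption; `make_token`, `upload`, and `download` are invented names.

```python
import hashlib
import secrets

def make_token() -> tuple:
    # A token encodes a random storage identifier and an encryption key.
    return secrets.token_hex(8), secrets.token_bytes(32)

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (stand-in for real encryption); self-inverse.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(b ^ s for b, s in zip(data, stream))

storage = {}  # the shared store: identifier -> ciphertext, no user identity

def upload(token: tuple, data: bytes) -> None:
    ident, key = token
    storage[ident] = xor_stream(data, key)   # encrypt, file under the identifier

def download(token: tuple) -> bytes:
    ident, key = token
    return xor_stream(storage[ident], key)   # fetch by identifier, decrypt with key

tok = make_token()
upload(tok, b"shared anonymously")
shared = download(tok)
```

Nothing in `storage` links an entry to a user; possession of the token is the only credential, which is what makes the sharing anonymous.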
20150039889 | SYSTEM AND METHOD FOR EMAIL AND FILE DECRYPTION WITHOUT DIRECT ACCESS TO REQUIRED DECRYPTION KEY - Exemplary systems and methods are directed to decrypting electronic messages in a network. The system includes a processor configured to receive or monitor message sources for encrypted messages, where the private keys associated with the encrypted messages are not previously provided to the system. For each message, the processor extracts a set of user certificate identifiers and corresponding encrypted session keys, securely communicates with a private key provider to decrypt each encrypted session key with an acquired private key, and decrypts the message with the unencrypted session key. | 2015-02-05 |
20150039890 | METHOD AND DEVICE FOR SECURE COMMUNICATIONS OVER A NETWORK USING A HARDWARE SECURITY ENGINE - A method, device, and system for establishing a secure communication session with a server includes initiating a request for a secure communication session, such as a Secure Sockets Layer (SSL) communication session with a server, using a nonce value generated in a security engine of a system-on-a-chip (SOC) of a client device. Additionally, a cryptographic key exchange is performed between the client and the server to generate a symmetric session key, which is stored in a secure storage of the security engine. The cryptographic key exchange may be, for example, a Rivest-Shamir-Adleman (RSA) key exchange or a Diffie-Hellman key exchange. Private keys and other data generated during the cryptographic key exchange may be generated and/or stored in the security engine. | 2015-02-05 |
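The Diffie-Hellman variant mentioned in the abstract can be sketched with deliberately tiny, insecure parameters (real TLS uses standardized large groups, and the private values would stay inside the SoC security engine rather than ordinary memory):

```python
import hashlib
import secrets

# Illustrative-only group parameters: a small prime modulus and generator.
P = 0xFFFFFFFB  # prime (2**32 - 5)
G = 5

def dh_keypair():
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)
    return private, public

# Client nonce would be generated inside the SoC security engine (simulated).
client_nonce = secrets.token_bytes(16)
client_priv, client_pub = dh_keypair()
server_priv, server_pub = dh_keypair()

# Each side combines its private value with the other's public value;
# the Diffie-Hellman math guarantees both arrive at the same shared secret.
client_secret = pow(server_pub, client_priv, P)
server_secret = pow(client_pub, server_priv, P)

# Derive the symmetric session key (to be held in the engine's secure storage).
session_key = hashlib.sha256(client_nonce + client_secret.to_bytes(8, "big")).digest()
```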
20150039891 | Secure Server on a System with Virtual Machines - A system, an apparatus and a method for providing a secure computing environment may be provided. In one aspect, an apparatus may comprise a communication port and a computer processor coupled to the communication port. The computer processor may be configured to initialize a hypervisor, establish a first virtual machine under control of the hypervisor and execute code for a secure zone on the first virtual machine. To execute code for the secure zone, the computer processor may be further configured to verify an administrative task and execute the administrative task, which may include: establish a connection with an administrator device, ensure that the administrator device is one of a set of intended administrator devices, receive a command through the connection with the administrator device and establish a second virtual machine under control of the hypervisor. The command may relate to executing a task on the second virtual machine. | 2015-02-05 |
20150039892 | ELECTRONIC KEY SYSTEM - In a network to which a plurality of electronic devices and a server are connected, an electronic key system controls locking and unlocking of ID information output for each electronic device. Each electronic device includes a switching device that locks or unlocks the output of its ID information. The server includes an availability changing unit and a management unit. The availability changing unit unlocks only one of the plurality of electronic devices and locks the other electronic devices. The management unit updates the state when the locking of ID information output and the unlocking of ID information output are swapped between a pair of the electronic devices. | 2015-02-05 |
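The swap operation managed by the server can be sketched as exchanging the lock states of a device pair, which preserves the invariant that only one device is unlocked at a time. Device names and the `swap_locks` helper are invented for illustration:

```python
def swap_locks(locks: dict, device_a: str, device_b: str) -> dict:
    # Exchange the ID-output lock state between a pair of devices.
    # If exactly one of the pair was unlocked before, exactly one is after.
    updated = dict(locks)
    updated[device_a], updated[device_b] = locks[device_b], locks[device_a]
    return updated

# Initially only "tv" may output its ID information.
locks = {"tv": "unlocked", "phone": "locked", "tablet": "locked"}
locks = swap_locks(locks, "tv", "phone")
```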
20150039893 | DOCUMENT VERIFICATION WITH ID AUGMENTATION - At least one node in a distributed hash tree document verification infrastructure is augmented with an identifier of an entity in a registration path. A data signature, which includes parameters for recomputation of a verifying value, and which is associated with a digital input record, will therefore also include data that identifies at least one entity in the hash tree path used for its initial registration in the infrastructure. | 2015-02-05 |
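The ID augmentation can be sketched with a minimal hash chain: the entity identifier is hashed into the registration path, so recomputing the verifying value both checks the record and pins down who registered it. This simplifies a real hash tree (left/right sibling ordering is ignored); all names are hypothetical.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# Registration: the leaf hash mixes in an identifier of the registering entity.
record = b"digital input record"
entity_id = b"registrar-42"
sibling = h(b"some other record")
leaf = h(record, entity_id)
root = h(leaf, sibling)  # the published verifying value

# The data signature carries what is needed to recompute the root,
# including the identity data from the registration path.
signature = {"entity_id": entity_id, "path": [sibling]}

def verify(record: bytes, signature: dict, expected_root: bytes) -> bool:
    node = h(record, signature["entity_id"])
    for sib in signature["path"]:
        node = h(node, sib)
    return node == expected_root

ok = verify(record, signature, root)
```

Because the entity identifier participates in the hash, a signature that verifies also proves which entity's path the record was registered through.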
20150039894 | SYSTEM AND METHOD FOR AUTHENTICATION FOR TRANSCEIVERS - A method and apparatus of a network element that authenticates a transceiver and/or a field replaceable unit of the network element is described. The network element generates a stored transceiver signature using transceiver data stored in the removable transceiver and a nonce. In addition, the network element generates a hardware transceiver signature using data stored in secure storage of the network element and the nonce. If the stored transceiver signature and the hardware transceiver signature are equal, the network element uses the transceiver to communicate network data for the network element. Otherwise, the network element disables the transceiver. | 2015-02-05 |
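The dual-signature check in this abstract can be sketched with a keyed digest over the transceiver data plus a fresh nonce, computed once from the removable transceiver's data and once from the reference data in secure storage. HMAC here is an assumed stand-in for whatever signature scheme the patent specifies:

```python
import hashlib
import hmac
import secrets

def transceiver_signature(data: bytes, nonce: bytes, key: bytes) -> bytes:
    # Keyed digest over transceiver data and a per-attempt nonce,
    # so a captured signature cannot simply be replayed.
    return hmac.new(key, data + nonce, hashlib.sha256).digest()

secure_key = b"key-held-in-secure-storage"
stored_data = b"vendor=X serial=123"   # read from the removable transceiver
trusted_data = b"vendor=X serial=123"  # reference data in the network element

nonce = secrets.token_bytes(16)
stored_sig = transceiver_signature(stored_data, nonce, secure_key)
hardware_sig = transceiver_signature(trusted_data, nonce, secure_key)

# Equal signatures: use the transceiver; otherwise it is disabled.
enabled = hmac.compare_digest(stored_sig, hardware_sig)
```

Using `hmac.compare_digest` for the final check avoids leaking information through comparison timing.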