10th week of 2014 patent application highlights, part 85
Patent application number - Title - Published
20140068144 - HETEROGENEOUS DATA PATHS FOR SYSTEMS HAVING TIERED MEMORIES - A nonvolatile memory (“NVM”) buffer can be incorporated into an NVM system between a volatile memory buffer and an NVM to decrease the size of the volatile memory buffer and organize data for programming to the NVM. Heterogeneous data paths may be used for write and read operations such that the nonvolatile memory buffer is used only in certain situations. (Published 2014-03-06)
20140068145 - SRAM HANDSHAKE - Various exemplary embodiments relate to an integrated circuit including: an RF interface; a wired interface connectable to a host; a volatile memory having a first block and a last block configured to store data transferred between the RF interface and the wired interface; and a memory controller configured to detect when the last block of the volatile memory has been written and to indicate that the volatile memory is ready to read. Various exemplary embodiments relate to a method performed by a tag including: determining that data is to be received on the first interface; blocking the second interface; writing data from the first interface to a volatile memory; detecting that the last block of the volatile memory has been written; unblocking the second interface; indicating that data is available for reading; blocking the first interface; and reading data from the volatile memory to the second interface. (Published 2014-03-06)
20140068146 - MEMORY SYSTEM - According to one embodiment, a memory system is equipped with several nonvolatile memory chips and a memory controller that controls the nonvolatile memory chips based on firmware. The firmware is written in the nonvolatile memory chip positioned at the farthest distance from the memory controller. (Published 2014-03-06)
20140068147 - Flash Memory Devices and Controlling Methods Therefor - A flash memory controller is provided. The flash memory controller includes a read/write unit, a state machine, a processing unit, and a reserve unit. The read/write unit is coupled to a flash memory. The read/write unit is configured to perform a write command or a read command. The state machine is configured to determine a state of the flash memory controller. The processing unit is coupled to the read/write unit and the state machine. The processing unit is configured to control the read/write unit. The reserve unit is coupled to a first data line, a second data line, and the read/write unit. When the flash memory controller is operating abnormally, the reserve unit receives an external signal via the first data line and the second data line and controls the read/write unit according to the external signal. (Published 2014-03-06)
20140068148 - LEVEL PLACEMENT IN SOLID-STATE MEMORY - Methods and apparatus are provided for determining level placement in q-level cells of solid-state memory, where q>2. Groups of the cells are programmed to respective levels of a predetermined plurality of programming levels, and each cell is then read at a series of time instants to obtain a sequence of read metric values for that cell. The sequences of read metric values for the group of cells programmed to each programming level are processed to derive statistical data as a function of time for that level. The statistical data for each programming level is processed to determine for that level at least one parameter of a model defining variation with time of the statistical data for programming levels. The parameters for the levels are extrapolated to define parameter variation as a function of level. A set of q programming levels which has a desired property over time is then calculated from said parameter variation and said model. (Published 2014-03-06)
20140068149 - MEMORY SYSTEM - According to one embodiment, a memory system includes a nonvolatile semiconductor storage device, a first storage module, a second storage module, a controller, a random number generator, and a randomizing module. The first storage module stores a plurality of management data. The second storage module stores seed data. The controller issues a first command to designate one of the management data, and issues a second command to command writing in or reading from the storage device. The random number generator generates random number data, by shuffling the seed data, based on the management data that is designated by the first command. The randomizing module randomizes written data or read data, based on the random number data. (Published 2014-03-06)
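The randomizing scheme of 20140068149 can be illustrated with a small sketch. This is not the patent's actual generator: the way the seed data and the designated management data are combined, and the XOR keystream, are assumptions chosen only to show the shape of a seed-derived, self-inverting randomizer.

```python
import random

def randomize(data: bytes, seed_data: int, management_id: int) -> bytes:
    """Randomize (or de-randomize) data with a pseudo-random keystream
    derived from seed data and the management data designated by the
    first command. Illustrative only; the seed combination is an
    assumption, not the patent's generator."""
    rng = random.Random((seed_data << 8) ^ management_id)
    keystream = bytes(rng.randrange(256) for _ in range(len(data)))
    # XOR makes the operation its own inverse, so the same call
    # randomizes written data and de-randomizes read data.
    return bytes(b ^ k for b, k in zip(data, keystream))
```

Because the keystream depends only on the seed inputs, applying the function twice with the same seeds recovers the original data.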
20140068150 - DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device includes: a first memory device; a second memory device configured to share a write control signal and a read control signal which are provided to the first memory device; and a controller configured to control the first and second memory devices, wherein the controller provides the write control signal and the read control signal to the first and second memory devices at the same time, the first memory device receives only the read control signal according to a first mask signal, and the second memory device receives only the write control signal according to a second mask signal. (Published 2014-03-06)
20140068151 - METHOD OF READING AND INPUTTING DATA FOR TESTING SYSTEM AND TESTING SYSTEM THEREOF - A method of inputting data for a testing system is disclosed. The method includes coupling an information buffer to a device to be tested, transferring the device to be tested to a plurality of test stations in the testing system in turn, and obtaining the plurality of product identifications stored in the information buffer in each of the plurality of test stations. (Published 2014-03-06)
20140068152 - METHOD AND SYSTEM FOR STORAGE ADDRESS RE-MAPPING FOR A MULTI-BANK MEMORY DEVICE - A method and system for storage address re-mapping in a multi-bank memory is disclosed. The method includes allocating logical addresses in blocks of clusters and re-mapping logical addresses into storage address space, where short runs of host data dispersed in logical address space are mapped in a contiguous manner into megablocks in storage address space. Independently in each bank, valid data is flushed within each respective bank from blocks having both valid and obsolete data to make new blocks available for receiving data in each bank of the multi-bank memory when an available number of new blocks falls below a desired threshold within a particular bank. (Published 2014-03-06)
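The per-bank flush policy described in 20140068152 (flush blocks holding both valid and obsolete data until the free-block count recovers) can be sketched as a selection routine. The fewest-valid-first ordering and the one-free-block-per-flush accounting are assumptions for illustration, not details taken from the patent.

```python
def blocks_to_flush(valid_pages, pages_per_block, free_blocks, threshold):
    """Pick source blocks for flushing within one bank.

    valid_pages: dict mapping block id -> number of still-valid pages
                 (the rest of the block's pages are obsolete).
    Blocks with both valid and obsolete data are candidates; blocks that
    are fully valid or fully free are not. Candidates with the fewest
    valid pages are flushed first (an assumed heuristic), and each flush
    is counted as making one new block available, until the bank's free
    count meets the threshold."""
    if free_blocks >= threshold:
        return []
    candidates = sorted(
        (b for b, v in valid_pages.items() if 0 < v < pages_per_block),
        key=lambda b: valid_pages[b])
    chosen = []
    for b in candidates:
        if free_blocks >= threshold:
            break
        chosen.append(b)
        free_blocks += 1
    return chosen
```

In a real controller the relocated valid pages also consume space in a destination block, so the accounting is more involved than this sketch.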
20140068153 - WEAR MANAGEMENT APPARATUS AND METHOD FOR STORAGE SYSTEM - A wear management apparatus and a wear management method of a storage system including storage nodes are provided. The wear management apparatus includes a monitor unit configured to collect status information about each of the storage nodes. The wear management apparatus further includes a wear management unit configured to establish a wear progress model with respect to the storage nodes based on the status information, and control a wear acceleration index of each of the storage nodes based on the wear progress model and a wear management policy. (Published 2014-03-06)
20140068154 - SEMICONDUCTOR MEMORY DEVICE - According to one embodiment, a semiconductor memory device includes a first memory circuit and a first controller. The first memory circuit includes a register in which a read page size is stored, and a memory cell array. The first controller is configured to access the first memory circuit by the page size stored in the register, in one of an open page policy and closed page policy. (Published 2014-03-06)
20140068155 - INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - Provided is an information processing apparatus, including: a volatile memory; a nonvolatile memory including a rewritable area configured to store rewritable data, and a non-rewritable area configured to store non-rewritable data and a Snapshot Boot image, the Snapshot Boot image showing a home window corresponding to an execution status of the non-rewritable data; and a controller configured to load the rewritable data and the Snapshot Boot image into the volatile memory when booting, and to draw the home window based on difference information and the Snapshot Boot image, the difference information corresponding to difference data of the rewritable data before and after (Published 2014-03-06)
20140068156 - DATA PROCESSING APPARATUS, METHOD FOR PROCESSING DATA, AND COMPUTER READABLE RECORDING MEDIUM RECORDED WITH PROGRAM TO PERFORM THE METHOD - A data processing apparatus includes a first storage device which stores compressed data therein, a second storage device which accesses and temporarily stores the compressed data stored in the first storage device, a data decompressor which generates decompressed data by decompressing the compressed data and outputs the decompressed data to the second storage device so that the decompressed data is temporarily stored in the second storage device, and a controller which accesses the decompressed data temporarily stored in the second storage device. The data decompressor directly scatters the decompressed data into a page cache based on addresses of the page cache. Accordingly, the operating speed of the program and the data processing apparatus can be improved. (Published 2014-03-06)
20140068157 - SOLID-STATE DRIVE DEVICE - A solid state drive (SSD) device using a flash memory and including a non-volatile memory that differs in type from the flash memory. The SSD device receives data to be written to the flash memory; stores the received data in the non-volatile memory; stores the data stored in the non-volatile memory to the flash memory; and stores, in the non-volatile memory, flow data indicating a flow of tasks to be undertaken while storing the received data in the non-volatile memory and storing the data stored in the non-volatile memory to the flash memory. (Published 2014-03-06)
20140068158 - FLASH STORAGE DEVICE AND CONTROL METHOD FOR FLASH MEMORY - A FLASH memory is used in data storage and is further stored with a logical-to-physical address mapping table and a write protection mapping table. The write protection mapping table shows the write protection statuses of the different logical addresses. In accordance with logical addresses issued via a dynamic capacity management command from a host, a controller of the data storage device modifies the logical-to-physical address mapping table to break the logical-to-physical mapping relationship of the issued logical addresses. Further, the controller asserts a flag, corresponding to the issued logical addresses, in the write protection mapping table, to a write protected mode. According to a change in the amount of write-protected flags of the write protection mapping table, the controller adjusts an end-of-life judgment value of the FLASH memory and thereby a lifespan of the FLASH memory is prolonged. (Published 2014-03-06)
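The end-of-life adjustment in 20140068158 (raising the end-of-life judgment value as more logical addresses become write-protected) can be sketched with a toy model. The proportional scaling rule here is an assumption chosen for illustration; the patent does not disclose a specific formula in this abstract.

```python
def adjusted_end_of_life(base_eol_erase_count, protected_flags, total_units):
    """Toy end-of-life model: as the fraction of write-protected logical
    units grows, fewer units receive new writes, so the erase-count
    ceiling used for the end-of-life judgment is raised proportionally.

    protected_flags: iterable of 0/1 flags from the write protection
    mapping table. The linear scaling is an illustrative assumption."""
    protected = sum(protected_flags)
    spare_fraction = protected / total_units
    return int(base_eol_erase_count * (1 + spare_fraction))
```

For example, with half of the logical units write-protected, a base judgment value of 3000 erase cycles would be raised to 4500 under this toy rule.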
20140068159 - MEMORY CONTROLLER, ELECTRONIC DEVICE HAVING THE SAME AND METHOD FOR OPERATING THE SAME - A memory controller includes first and second interfaces, a microprocessor, a register and a plane control unit. The first interface is configured to receive a first command and plane logic information of a plurality of planes in a memory device from a host. The microprocessor is coupled to the first interface, and configured to decode the first command to provide a corresponding second command, and to map the plane logic information to be suited to a non-volatile memory device. The register is configured to queue the second command and the mapped plane logic information. The second interface is configured to provide the second command and the queued plane logic information to the memory device. The plane control unit is configured to control multiple planes corresponding to portions of the queued plane logic information to perform concurrently the second command in the non-volatile memory device. (Published 2014-03-06)
20140068160 - MEMORY CONTROLLER, METHOD OF OPERATING MEMORY CONTROLLER, AND SYSTEM COMPRISING MEMORY CONTROLLER - A memory controller controls operation of a nonvolatile memory device comprising a memory area comprising a plurality of multi-level cells (MLCs). The memory controller receives an address of the memory area and data to be programmed to the memory area, analyzes access history information regarding the memory area based on the address, generates first mapping data corresponding to the data or second mapping data based on the data and previous mapping data that has been programmed to the MLCs according to a result of the analysis, and transmits a program command comprising one of the first mapping data and the second mapping data to the nonvolatile memory device. (Published 2014-03-06)
20140068161 - MEMORY CONTROLLER, AND ELECTRONIC DEVICE HAVING THE SAME AND METHOD FOR OPERATING THE SAME - A memory controller includes a first interface and a microprocessor. The first interface is configured to receive a first command, a first address, an address state separation command, and a second address, the first address corresponding to the first command, and the address state separation command separating the first and second addresses from each other. The microprocessor is configured to decode the first command, map the first address to a non-volatile memory device, execute the first command relative to the first address mapped to the non-volatile memory device, and determine a relation between the first address and the second address. The microprocessor is further configured to selectively execute the second command relative to the second address mapped to the non-volatile memory device concurrently with the first command based on the relation between the first address and the second address. (Published 2014-03-06)
20140068162 - DATA ACCESSING METHOD FOR FLASH MEMORY STORAGE DEVICE HAVING DATA PERTURBATION MODULE, AND STORAGE SYSTEM AND CONTROLLER USING THE SAME - A data accessing method, and a storage system and a controller using the same are provided. The data accessing method is suitable for a flash memory storage system having a data perturbation module. The data accessing method includes receiving a read command from a host and obtaining a logical block to be read and a page to be read from the read command. The data accessing method also includes determining whether a physical block in a data area corresponding to the logical block to be read is a new block and transmitting a predetermined data to the host when the physical block corresponding to the logical block to be read is a new block. Thereby, the host is prevented from reading garbled code from the flash memory storage system having the data perturbation module. (Published 2014-03-06)
20140068163 - PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache. (Published 2014-03-06)
20140068164 - EFFICIENT MEMORY CONTENT ACCESS - A memory content access interface may include, but is not limited to: a read-path memory partition; a write-path memory partition; and a memory access controller configured to regulate access to at least one of the read-path memory partition and the write-path memory partition by an external controller. (Published 2014-03-06)
20140068165 - SPLITTING A REAL-TIME THREAD BETWEEN THE USER AND KERNEL SPACE - A method is provided for exchanging large amounts of memory within an operating system containing consumer and producer threads located in a user space and a kernel space, by controlling ownership of a plurality of RAM banks shared by multiple processes or threads in a consumer-producer relationship. The method includes sharing at least two RAM banks between a consumer process or thread and a producer process or thread, thereby allowing memory to be exchanged between said consumer process or thread and said producer process or thread, and alternately assigning ownership of a shared RAM bank to either said consumer process or thread or said producer process or thread, thereby allowing said producer process or thread to insert data into said shared RAM bank and said consumer process or thread to access data from said shared RAM bank. (Published 2014-03-06)
20140068166 - MEMORY CONTROL TECHNIQUE - A disclosed information processing apparatus includes: one or plural memories, each of which includes a self-refresh function; and a memory control unit that stops a patrol that includes reading and error correction with respect to a memory among the one or plural memories, upon starting self-refresh of the one or plural memories, and that restarts the patrol, upon stopping the self-refresh of the one or plural memories. A disclosed memory control unit includes: a patrol unit that performs a patrol including reading and error correction with respect to a memory among one or plural memories that has a self-refresh function; and a controller that stops the patrol, upon starting self-refresh of the one or plural memories, and that restarts the patrol, upon stopping the self-refresh of the one or plural memories. (Published 2014-03-06)
20140068167 - RESULTS GENERATION FOR STATE MACHINE ENGINES - A state machine engine includes a storage element, such as a (e.g., match) results memory. The storage element is configured to receive a result of an analysis of data. The storage element is also configured to store the result in a particular portion of the storage element based on a characteristic of the result. The storage element is additionally configured to store a result indicator corresponding to the result. Other state machine engines and methods are also disclosed. (Published 2014-03-06)
20140068168 - TILE BASED INTERLEAVING AND DE-INTERLEAVING FOR DIGITAL SIGNAL PROCESSING - Tile based interleaving and de-interleaving of row-column interleaved data is described. In one example, the de-interleaving is divided into two memory transfer stages, the first from an on-chip memory to a DRAM and the second from the DRAM to an on-chip memory. Each stage operates on part of a row-column interleaved block of data and re-orders the data items, such that the output of the second stage comprises de-interleaved data. In the first stage, data items are read from the on-chip memory according to a non-linear sequence of memory read addresses and written to the DRAM. In the second stage, data items are read from the DRAM according to bursts of linear address sequences which make efficient use of the DRAM interface and written back to on-chip memory according to a non-linear sequence of memory write addresses. (Published 2014-03-06)
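The non-linear address sequence underlying row-column interleaving, as used in 20140068168, can be shown in a few lines. This sketch collapses the patent's two-stage, tile-based transfer into a single pass purely to illustrate the address arithmetic; the row/column dimensions are illustrative.

```python
def rowcol_read_addresses(rows, cols):
    """Non-linear read sequence for a row-column (de-)interleaver:
    data written row-by-row is read out column-by-column, so output
    position i reads source address (i % rows) * cols + (i // rows)."""
    return [(i % rows) * cols + i // rows for i in range(rows * cols)]

def reorder(block, rows, cols):
    """Apply the row-column permutation to a block of data items."""
    return [block[a] for a in rowcol_read_addresses(rows, cols)]
```

Applying the permutation once interleaves a block; applying it again with the row and column dimensions swapped restores the original order, which is why the same address generator can serve both directions.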
20140068169 - Independent Threading Of Memory Devices Disposed On Memory Modules - A memory module includes a substrate having signal lines thereon that form a control path and a plurality of data paths. A plurality of memory devices are mounted on the substrate. Each memory device is coupled to the control path and to a distinct data path. The memory module includes control circuitry to enable each memory device to process a distinct respective memory access command in a succession of memory access commands and to output data on the distinct data path in response to the processed memory access command. (Published 2014-03-06)
20140068170 - MEMORY ADDRESS GENERATION FOR DIGITAL SIGNAL PROCESSING - Memory address generation for digital signal processing is described. In one example, a digital signal processing system-on-chip utilises an on-chip memory space that is shared between functional blocks of the system. An on-chip DMA controller comprises an address generator that can generate sequences of read and write memory addresses for data items being transferred between the on-chip memory and a paged memory device, or internally within the system. The address generator is configurable and can generate non-linear sequences for the read and/or write addresses. This enables aspects of interleaving/deinterleaving operations to be performed as part of a data transfer between internal or paged memory. As a result, a dedicated memory for interleaving operations is not required. In further examples, the address generator can be configured to generate read and/or write addresses that take into account limitations of particular memory devices when performing interleaving, such as DRAM. (Published 2014-03-06)
20140068171 - REFRESH CONTROL CIRCUIT AND SEMICONDUCTOR MEMORY DEVICE INCLUDING THE SAME - A refresh control circuit includes an internal chip information unit configured to provide internal chip information related to a retention characteristic of a memory cell, a mode information modification unit configured to output modified mode information based on the internal chip information, wherein the modified mode information represents a number of memory banks for refresh operation, and a selection signal activation unit configured to activate one or more of selection signals for selecting corresponding one or more of the memory banks in response to the modified mode information. (Published 2014-03-06)
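The bank-selection behaviour in 20140068171 can be sketched as a function that activates a group of per-bank selection signals. The round-robin grouping driven by a refresh counter is an assumption for illustration; the abstract only says the modified mode information determines how many banks refresh together.

```python
def bank_select_signals(num_banks, banks_per_refresh, refresh_counter):
    """Activate selection signals for one group of banks per refresh.

    banks_per_refresh stands in for the modified mode information
    derived from the chip's retention characteristic; the round-robin
    group rotation via refresh_counter is an illustrative assumption.
    Returns a list of booleans, one selection signal per bank."""
    groups = num_banks // banks_per_refresh
    group = refresh_counter % groups
    lo, hi = group * banks_per_refresh, (group + 1) * banks_per_refresh
    return [lo <= bank < hi for bank in range(num_banks)]
```

With 8 banks and 2 banks per refresh, successive refresh ticks would select banks {0,1}, {2,3}, {4,5}, {6,7} in turn under this sketch.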
20140068172 - SELECTIVE REFRESH WITH SOFTWARE COMPONENTS - A method of refreshing a memory is disclosed. The method includes accessing from active memory an active memory map. The active memory map is generated by software and identifies addresses corresponding to the active memory and associated refresh criteria for the addresses. The refresh criteria are evaluated for a portion of the active memory, and an operation initiated to refresh a portion of the active memory is based on the refresh criteria. (Published 2014-03-06)
20140068173 - CONTENT ADDRESSABLE MEMORY SCHEDULING - A digital system may utilize a serial content-addressable memory (CAM), capable of performing greater than, less than and/or equal comparisons between its contents and serially inputted data records according to a type of each data record, to select software routine addresses and associated parameters. The system may also include a scheduler, which may select one or more available processors to execute the software routines on the data records. (Published 2014-03-06)
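The serial CAM comparison in 20140068173 can be modelled in software: each stored entry is compared against an inputted record, with the comparison operator (greater than, less than, or equal) chosen by the record's type. The entry layout (key plus routine address payload) and the type names are assumptions for illustration.

```python
import operator

# Comparison selected by the type of the inputted data record
# (type names are illustrative, not from the patent).
OPS = {"gt": operator.gt, "lt": operator.lt, "eq": operator.eq}

def cam_match(entries, record_value, record_type):
    """Serial CAM sketch: compare every stored (key, routine_addr)
    entry against the inputted record, using the comparison dictated
    by the record type, and return the matching routine addresses."""
    compare = OPS[record_type]
    return [addr for key, addr in entries if compare(key, record_value)]
```

A hardware CAM performs these comparisons in parallel across all entries; the serial loop here only models the matching semantics.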
20140068174 - TRANSACTIONAL MEMORY THAT PERFORMS A CAMR 32-BIT LOOKUP OPERATION - A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a base address, a starting bit position, and a mask size. In response to the command, the TM pulls an input value (IV). A selecting circuit within the TM uses the starting bit position and the mask size to select a first portion of the IV. The first portion of the IV and the base address value are summed to generate a memory address. The memory address is used to read a word containing multiple result values and multiple reference values from memory. A second portion of the IV is compared with each reference value using a comparator circuit. A result value associated with the matching reference value is selected using a multiplexing circuit and a select value generated by the comparator circuit. The TM sends the selected result value to the processor. (Published 2014-03-06)
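The lookup flow of 20140068174 maps naturally to a short function: mask out a first portion of the input value, add it to the base address, read a word of (reference, result) pairs, and select the result whose reference matches a second portion of the IV. The choice of the low 16 bits as the "second portion" is an assumption made only so the sketch is concrete.

```python
def camr_lookup(memory, base_addr, iv, start_bit, mask_size):
    """Sketch of the CAMR lookup. memory maps addresses to words, each
    word being a list of (reference, result) pairs. The low 16 bits of
    the IV play the role of the second portion (an assumption)."""
    # Selecting circuit: extract mask_size bits starting at start_bit.
    index = (iv >> start_bit) & ((1 << mask_size) - 1)
    # Sum with the base address to form the memory address.
    word = memory[base_addr + index]
    # Comparator + multiplexer: pick the result whose reference matches.
    key = iv & 0xFFFF
    for reference, result in word:
        if reference == key:
            return result
    return None
```

In the hardware described, the comparisons happen in parallel and a comparator-generated select value drives a multiplexer; the loop is a sequential stand-in for that.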
20140068175 - OLDEST OPERATION TRANSLATION LOOK-ASIDE BUFFER - A method is provided for dispatching a load operation to a processing device and determining that the operation is the oldest load operation. The method also includes executing the operation in response to determining the operation is the oldest load operation. Computer readable storage media for performing the method are also provided. An apparatus is provided that includes a translation look-aside buffer (TLB) content addressable memory (CAM), and includes an oldest operation storage buffer operationally coupled to the TLB CAM. The apparatus also includes an output multiplexor operationally coupled to the TLB CAM and to the oldest operation storage buffer. Computer readable storage media for adapting a fabrication facility to manufacture the apparatus are also provided. (Published 2014-03-06)
20140068176 - LOOKUP ENGINE WITH PIPELINED ACCESS, SPECULATIVE ADD AND LOCK-IN-HIT FUNCTION - Described embodiments provide a lookup engine that receives lookup requests including a requested key and a speculative add requestor. Iteratively, for each one of the lookup requests, the lookup engine searches each entry of a lookup table for an entry having a key matching the requested key of the lookup request. If the lookup table does not include an entry having a key matching the requested key, the lookup engine sends a miss indication corresponding to the lookup request to the control processor. If the speculative add requestor is set, the lookup engine speculatively adds the requested key to a free entry in the lookup table. Speculatively added keys are searchable in the lookup table for subsequent lookup requests to maintain coherency of the lookup table without creating duplicate key entries, comparing missed keys with each other or stalling the lookup engine to insert missed keys. (Published 2014-03-06)
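The speculative-add behaviour of 20140068176 can be sketched as a miss path that immediately reserves a table entry for the missed key, so later requests for the same key hit instead of generating duplicates. Representing the table as a dict and the free entries as a list are modelling choices, not details from the patent.

```python
def lookup(table, free_slots, key, speculative_add):
    """Lookup-engine sketch. On a hit, return (True, entry index).
    On a miss, return (False, None) as the miss indication; if the
    speculative add requestor is set and a free entry exists, the
    key is added at once so subsequent requests hit it, keeping the
    table coherent without duplicate entries or engine stalls."""
    if key in table:
        return True, table[key]
    if speculative_add and free_slots:
        table[key] = free_slots.pop(0)
    return False, None
```

Note that the request that triggers the speculative add still reports a miss; only later requests for the same key observe the reserved entry.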
20140068177 - ENHANCED MEMORY SAVINGS IN ROUTING MEMORY STRUCTURES OF SERIAL ATTACHED SCSI EXPANDERS - Methods and structure are provided for representing ports of a Serial Attached SCSI (SAS) expander circuit within routing memory. The SAS expander includes a plurality of PHYs and a routing memory. The routing memory includes entries that each indicate a set of PHYs available for initiating a connection with a SAS address, and also includes an entry that represents a SAS port with a start tag indicating a first PHY of the port and a length tag indicating a number of PHYs in the port. The SAS expander also includes a Content Addressable Memory (CAM) including entries that each associate a SAS address with an entry in the routing memory. Further, the SAS expander includes a controller that receives a request for a SAS address, uses the CAM to determine a corresponding routing memory entry for the requested SAS address, and selects the port indicated by the corresponding routing memory entry. (Published 2014-03-06)
20140068178 - WRITE PERFORMANCE OPTIMIZED FORMAT FOR A HYBRID DRIVE - An apparatus for optimizing write performance of a hybrid drive includes a magnetic medium that stores data with respect to the hybrid drive and a plurality of write cache regions configured on the magnetic medium. When a write request is received by the hybrid drive, a head of the hybrid drive is automatically positioned to a nearest write cache region for writing of data to at least one write cache region without rotational orientation, thereby eliminating rotational latency and optimizing the write performance of the hybrid drive. The hybrid drive also updates normal data regions of the magnetic medium with data comprising write cached data during drive idle time, freeing up the write cache regions for future writes. (Published 2014-03-06)
20140068179 - PROCESSOR, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD - A processor includes a cache memory that holds data from a main storage device. The processor includes a first control unit that controls acquisition of data, and that outputs an input/output request that requests the transfer of the target data. The processor includes a second control unit that controls the cache memory, that determines, when an instruction to transfer the target data and a response output by the first processor on the basis of the input/output request that has been output to the first processor is received, whether the destination of the response is the processor, and that outputs, to the first control unit when the second control unit determines that the destination of the response is the processor, the response and the target data with respect to the input/output request. (Published 2014-03-06)
20140068180 - DATA ANALYSIS SYSTEM - A data analysis system, particularly, a system capable of efficiently analyzing big data is provided. The data analysis system includes an analyst server, at least one data storage unit, a client terminal independent of the analyst server, and a caching device independent of the analyst server. The caching device includes a caching memory, a data transmission interface, and a controller for obtaining a data access pattern of the client terminal with respect to the at least one data storage unit, performing caching operations on the at least one data storage unit according to a caching criterion to obtain and store cache data in the caching memory, and sending the cache data to the analyst server via the data transmission interface, such that the analyst server analyzes the cache data to generate an analysis result, which may be used to request a change in the caching criterion. (Published 2014-03-06)
20140068181 - ELASTIC CACHE WITH SINGLE PARITY - The invention provides an elastic or flexible SSD cache utilizing a hybrid RAID protocol combining RAID-0 protocol for read data and RAID-5 single parity protocol for write data in the same cache array. Read data may be stored in window sized allocations using RAID-0 protocol to avoid allocating an entire RAID stripe for read cache data. In the same SSD volume, dirty write data is stored in row allocations using RAID-5 protocol to provide single parity for the dirty write data. Read data is typically stored in a window from the physical device having the largest number of available windows. Write data is stored in a row including the next available window in each arm, which decouples the window structure of the rows from the stripe configuration of the physical memory devices. (Published 2014-03-06)
20140068182 - Storage Virtualization In A Block-Level Storage System - A data storage system that stores data has a logical address space divided into ordered areas and unordered areas. Retrieval of storage system metadata for a logical address is based on whether the address is located in an ordered area or an unordered area. Retrieval of metadata regarding addresses in ordered areas is performed using an arithmetic calculation, without accessing a block storage device. Retrieval of metadata regarding addresses in unordered areas is performed using lookup tables. In some embodiments, a mixture of ordered and unordered areas is determined to permit the data storage system to store its lookup tables entirely in volatile memory. (Published 2014-03-06)
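The split metadata path in 20140068182 (arithmetic for ordered areas, lookup tables for unordered areas) can be sketched directly. The contiguous linear mapping used for ordered areas here is an assumed form of the patent's "arithmetic calculation", chosen only to make the contrast concrete.

```python
def physical_block(addr, ordered_ranges, lookup_table):
    """Metadata retrieval sketch. ordered_ranges is a list of
    (start, end, phys_base) tuples describing ordered areas of the
    logical address space; addresses inside them resolve by pure
    arithmetic, with no block-device access. All other addresses
    resolve through an in-memory lookup table."""
    for start, end, phys_base in ordered_ranges:
        if start <= addr < end:
            return phys_base + (addr - start)  # arithmetic path
    return lookup_table[addr]                  # unordered path
```

The more of the address space that can be kept in ordered areas, the smaller the lookup tables, which is what lets the system hold them entirely in volatile memory.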
20140068183 - SYSTEMS, METHODS, AND INTERFACES FOR ADAPTIVE PERSISTENCE - A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration. (Published 2014-03-06)
20140068184 - ASSIMILATION OF FOREIGN LUNS INTO A NETWORK STORAGE SYSTEM - A storage system provides highly flexible data layouts that can be tailored to various different applications and use cases. The system defines several types of data containers, including “regions”, “logical extents” and “slabs”. Each region includes one or more logical extents. Allocated to each logical extent is at least part of one or more slabs allocated to the region that includes the extent. Each slab is a set of blocks of storage from one or more physical storage devices. The slabs can be defined from a heterogeneous pool of physical storage. The system also maintains multiple “volumes” above the region layer. Each volume includes one or more logical extents from one or more regions. A foreign LUN can be assimilated into the system by defining slabs as separate portions of the foreign LUN. Layouts of the extents within the regions are not visible to any of the volumes. (Published 2014-03-06)
20140068185AVOIDING RECALL OPERATIONS IN A TIERED DATA STORAGE SYSTEM - According to one embodiment, a system for recalling a data set, includes logic integrated with and/or executable by a hardware processor, the logic being configured to receive a request to open a data set, determine whether the requested data set is stored to a lower tier of a tiered data storage system in multiple associated portions or to a higher tier of the tiered data storage system, move each associated portion of the requested data set from the lower tier to the higher tier of the tiered data storage system when at least one portion of the requested data set is stored to the lower tier of the tiered data storage system, and assemble the associated portions of the requested data set into a single data set to form the requested data set on the higher tier of the tiered data storage system.2014-03-06
20140068186METHODS AND APPARATUS FOR DESIGNATING OR USING DATA STATUS INDICATORS - Memory devices and methods facilitate handling of data received by a memory device through the use of data grouping and assignment of data validity status values to grouped data. For example, data is received and delineated into one or more data groups, and a data validity status is associated with each data group. Data groups having a valid status are latched into one or more cache registers for storage in an array of memory cells, while data groups having an invalid status are rejected by the one or more cache registers.2014-03-06
20140068187IMAGE PROCESSING APPARATUS, CONTROL METHOD FOR IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM - An image processing apparatus that schedules and executes a process in response to a request for job processing includes a detection unit configured to detect a process which requests backing up of management information to be managed in the job processing, a setting unit configured to set, in a case where a process requesting data backup is detected, a caching destination to which management information requested to be backed up is to be cached to a volatile memory or a non-volatile memory based on a data amount of the management information requested to be backed up, and a cache unit configured to cache the management information in the set caching destination.2014-03-06
20140068188SYSTEM AND METHOD FOR MANAGING AN OBJECT CACHE - In order to optimize efficiency of deserialization, a serialization cache is maintained at an object server. The serialization cache is maintained in conjunction with an object cache and stores serialized forms of objects cached within the object cache. When an inbound request is received, a serialized object received in the request is compared to the serialization cache. If the serialized byte stream is present in the serialization cache, then the equivalent object is retrieved from the object cache, thereby avoiding deserialization of the received serialized object. If the serialized byte stream is not present in the serialization cache, then the serialized byte stream is deserialized, the deserialized object is cached in the object cache, and the serialized object is cached in the serialization cache.2014-03-06
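A minimal sketch of the dual-cache lookup this abstract describes, using Python's `pickle` as a stand-in serializer (the class and field names are assumptions for illustration, not from the application):

```python
import pickle

class SerializationCache:
    """Illustrative: cache serialized byte streams alongside an object cache
    so repeated requests for the same payload skip deserialization."""
    def __init__(self):
        self.object_cache = {}         # key -> deserialized object
        self.serialization_cache = {}  # serialized bytes -> object-cache key
        self.deserializations = 0      # counts actual pickle.loads calls

    def get(self, payload: bytes):
        key = self.serialization_cache.get(payload)
        if key is not None:
            return self.object_cache[key]   # hit: no deserialization needed
        obj = pickle.loads(payload)         # miss: deserialize once
        self.deserializations += 1
        key = len(self.object_cache)
        self.object_cache[key] = obj
        self.serialization_cache[payload] = key
        return obj

cache = SerializationCache()
payload = pickle.dumps({"user": 42})
first = cache.get(payload)
second = cache.get(payload)    # served from the object cache
print(cache.deserializations)  # 1
```

The second request matches the byte stream in the serialization cache and returns the already-deserialized object, which is the cost the patent aims to avoid.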
20140068189ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.2014-03-06
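One simple policy consistent with this abstract is to cap a new scan's task control blocks (TCBs) by what in-progress scans have already consumed. The sketch below is a hypothetical policy with invented parameters, not the patent's exact rule:

```python
def tcbs_for_new_scan(total_tcbs, allocated_in_progress, max_per_scan):
    """Allocate TCBs to a new discard scan, bounded by a per-scan cap and
    by whatever the in-progress scans have left over (illustrative)."""
    available = total_tcbs - allocated_in_progress
    return max(0, min(max_per_scan, available))

# 40 TCBs total, 30 already held by running scans, cap of 16 per scan:
print(tcbs_for_new_scan(total_tcbs=40,
                        allocated_in_progress=30,
                        max_per_scan=16))  # 10
```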
20140068190STACKED MEMORY DEVICES, SYSTEMS, AND METHODS - Memory requests for information from a processor are received in an interface device, and the interface device is coupled to a stack including two or more memory devices. The interface device is operated to select a memory device from a number of memory devices including the stack, and to retrieve some or all of the information from the selected memory device for the processor. Additional apparatus, systems and methods are disclosed.2014-03-06
20140068191SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache.2014-03-06
20140068192PROCESSOR AND CONTROL METHOD OF PROCESSOR - A processor includes a plurality of CPU cores, each having an L1 cache memory, that execute processing and issue requests, and an L2 cache memory connected to the plurality of CPU cores. When a request for target data held by none of the L1 cache memories contained in the plurality of CPU cores is a load request that permits sharing with other CPU cores, the L2 cache memory is configured to respond to the CPU core having sent the request with non-exclusive information, indicating that the target data is non-exclusive data, together with the target data; when the request is a load request that forbids sharing with other CPU cores, the L2 cache memory responds to the CPU core having sent the request with exclusive information, indicating that the target data is exclusive, together with the target data.2014-03-06
20140068193SEMICONDUCTOR DEVICE AND MEMORY TEST METHOD - An address range of an L2 cache is divided into sets of a predetermined number of ways. A RAM-BIST pattern generating unit generates a memory address corresponding to a way, a test pattern, and an expected value with respect to the test pattern. The L2 cache and an XOR circuit write the test pattern to a memory address in accordance with the test pattern, read data from the memory address to which the test pattern is written, and compares the read data with the expected value. A decode unit generates a selection signal for each way of the L2 cache by using a memory address. A determination latch stores, by using a selection signal and in a way corresponding to each memory address, a comparison result with respect to the memory address, a scan-out being performed on the comparison result stored in each of the ways in a predetermined order.2014-03-06
20140068194PROCESSOR, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD OF PROCESSOR - A processor includes a cache memory; an arithmetic processing section that issues a load request for loading object data stored in a memory into the cache memory; a cache control part that performs a process corresponding to the received load request; a memory management part which requests from the memory the object data corresponding to the request from the cache control part, together with header information containing information indicating whether or not the object data is the latest in the memory, and receives the header information returned by the memory; and a data management part that manages write control of the data to the cache memory and receives the object data returned by the memory based on the request. The requested data is transmitted from the memory to the data management part held by a CPU node without intervention by the memory management part.2014-03-06
20140068195METHOD TO INCREASE PERFORMANCE OF NON-CONTIGUOUSLY WRITTEN SECTORS - A method of managing data in a cache upon a cache write operation includes determining a number of non-contiguously written sectors on a track in the cache and comparing the number with a threshold number. If the number exceeds the threshold number, a full background stage operation is issued to fill the non-contiguously written sectors with unmodified data from a storage medium and the full track is then destaged. A corresponding system includes a cache manager module operating on the storage subsystem. Upon a determination that a cache write operation on a track has taken place, the cache manager module determines a number of non-contiguously written sectors on the track, compares the number with a predetermined threshold number, issues a background stage operation to fill the non-contiguously written sectors with unmodified data from a storage medium if the number exceeds the threshold number, and then destages the full track.2014-03-06
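The threshold check in this abstract can be sketched in a few lines: count the discrete runs of modified sectors on a track and, if there are more runs than the threshold, stage the whole track so it can be destaged as one contiguous write. All names and numbers below are illustrative assumptions:

```python
def count_noncontiguous_runs(written_sectors, track_size):
    """Count discrete runs of modified sectors on a track (illustrative)."""
    runs = 0
    prev = False
    for s in range(track_size):
        cur = s in written_sectors
        if cur and not prev:   # a new run starts here
            runs += 1
        prev = cur
    return runs

def should_full_stage(written_sectors, track_size, threshold):
    # If modified data is scattered across more runs than the threshold,
    # fill the gaps from the storage medium and destage the full track.
    return count_noncontiguous_runs(written_sectors, track_size) > threshold

print(should_full_stage({0, 1, 5, 9}, track_size=16, threshold=2))  # True
print(should_full_stage({0, 1, 2, 3}, track_size=16, threshold=2))  # False
```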
20140068196METHOD AND SYSTEM FOR SELF-TUNING CACHE MANAGEMENT - Web objects, such as media files, are sent through an adaptation server which includes a transcoder for adapting forwarded objects according to profiles of the receiving destinations, and a cache memory for caching frequently requested objects, including their adapted versions. The probability of additional requests for the same object before the object expires is assessed by tracking hits. Only objects having experienced hits in excess of a hit threshold are cached, the hit threshold being adaptively adjusted based on the capacity of the cache and the space required to store cached media files. Expired objects are collected in a list, and may be ejected from the cache periodically or when the cache is nearly full.2014-03-06
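One plausible adaptive policy of the kind this abstract describes: raise the hit threshold when the cache is nearly full (cache more selectively), and lower it when there is spare room. The water marks and function name below are assumptions for illustration:

```python
def adjust_hit_threshold(threshold, used, capacity,
                         high_water=0.9, low_water=0.5):
    """Illustrative self-tuning policy: be more selective about admitting
    objects as the cache fills, more eager as it empties."""
    fill = used / capacity
    if fill > high_water:
        return threshold + 1          # nearly full: demand more hits
    if fill < low_water and threshold > 1:
        return threshold - 1          # plenty of room: cache more eagerly
    return threshold

t = 3
t = adjust_hit_threshold(t, used=95, capacity=100)  # nearly full -> raised to 4
t = adjust_hit_threshold(t, used=30, capacity=100)  # mostly empty -> back to 3
print(t)  # 3
```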
20140068197SYSTEMS, METHODS, AND INTERFACES FOR ADAPTIVE CACHE PERSISTENCE - A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration.2014-03-06
20140068198STATISTICAL CACHE PROMOTION - Storing data in a cache is disclosed. It is determined that a data record is not stored in a cache. A random value is generated using a threshold value. It is determined whether to store the data record in the cache based at least in part on the generated random value.2014-03-06
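The random-admission idea above fits in a few lines: on a cache miss, admit the record only when a random draw falls below a threshold, so only records that are requested repeatedly tend to end up cached. The names and the injectable `rng` parameter are illustrative assumptions:

```python
import random

def maybe_admit(key, cache, threshold=0.1, rng=random.random):
    """Probabilistic cache admission (sketch): admit roughly `threshold`
    of first-time misses; repeat requesters get more chances to be admitted."""
    if key in cache:
        return True                 # already cached
    if rng() < threshold:
        cache[key] = True           # admitted on this miss
        return True
    return False                    # rejected this time

cache = {}
# Deterministic stand-ins for the RNG to show both outcomes:
maybe_admit("a", cache, threshold=0.1, rng=lambda: 0.05)  # draw below threshold
maybe_admit("b", cache, threshold=0.1, rng=lambda: 0.95)  # draw above threshold
print(sorted(cache))  # ['a']
```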
20140068199PROCESSOR AND INFORMATION PROCESSING APPARATUS - A processor includes a first transmitting unit that transmits, when receiving from a second processor a transmission request indicating transmission of target data which is read from a main storage unit and stored in the first processor, a transfer instruction to the first processor, the transfer instruction indicating transfer of the target data and state information to the second processor, the state information indicating a state of the target data used when the second processor reads and stores the target data. The processor includes a second transmitting unit that transmits acquisition information indicating acquisition of the target data to the second processor before receiving a response to the transfer instruction transmitted by the first transmitting unit from the first processor.2014-03-06
20140068200Storage Subsystem And Storage System Architecture Performing Storage Virtualization And Method Thereof - A method for generating a virtual volume (VV) in a storage system architecture. The architecture comprises a host and one or more disk array subsystems. Each subsystem comprises a storage controller. One or more of the subsystems comprises a physical storage device (PSD) array. The method comprises the following steps: mapping the PSD array into a plurality of media extents (MEs), each of the MEs comprising a plurality of sections; providing a virtual pool (VP) to implement a section cross-referencing function, wherein a section index (SI) of each of the sections contained in the VP is defined by the VP to cross-reference VP sections to physical ME locations; providing a conversion method or procedure or function for mapping VP capacity into a VV; and presenting the VV to the host. A storage subsystem and a storage system architecture performing the method are also provided.2014-03-06
20140068201TRANSACTIONAL MEMORY PROXY - Processors in a compute node offload transactional memory accesses addressing shared memory to a transactional memory agent. The transactional memory agent typically resides near the processors in a particular compute node. The transactional memory agent acts as a proxy for those processors. A first benefit of the invention is decoupling the processor from the direct effects of remote system failures. Other benefits of the invention include freeing the processor from having to be aware of transactional memory semantics, and allowing the processor to address a memory space larger than the processor's native hardware addressing capabilities. The invention also enables computer system transactional capabilities to scale well beyond those found in computer systems today.2014-03-06
20140068202Intelligent Heuristics for File Systems and File System Operations - A data system may detect and halt unauthorized bulk data copy operations without interfering with or degrading authorized data copy operations. Characteristics of a request for access to a file system may be analyzed to determine whether a bulk data copy operation has been requested by a user. The bulk data copy operation may be allowed if the operation is below a particular permitted copy threshold or if the requesting user is authorized to execute a bulk data copy operation exhibiting certain characteristics.2014-03-06
20140068203MEMORY DEVICE FOR REDUCING A WRITE FAIL, A SYSTEM INCLUDING THE SAME, AND A METHOD THEREOF - A memory system includes a memory device and a memory controller. The memory device includes a plurality of memory cells. The memory controller is configured to continuously perform a plurality of write commands on the memory device between an active command and a precharge command. In the memory system, after a first write operation corresponding to the last write command of the plurality of write commands is performed and the precharge command is issued, the last write command is issued again for a second write operation. The first write operation and the second write operation write the same data to memory cells of the plurality of memory cells having the same address.2014-03-06
20140068204Low Power, Area-Efficient Tracking Buffer - A tracking buffer apparatus is disclosed. A tracking buffer apparatus includes lookup logic configured to locate entries having a transaction identifier corresponding to a received request. The lookup logic is configured to determine which of the entries having the same transaction identifier has a highest priority and thus cause a corresponding entry from a data buffer to be provided. When information is written into the tracking buffer, write logic writes a corresponding transaction identifier to the first free entry. The write logic also writes priority information in the entry based on other entries having the same transaction identifier. The entry currently being written may be assigned a lower priority than all other entries having the same transaction identifier. The priority information for entries having a common transaction identifier with one currently being read are updated responsive to the read operation.2014-03-06
20140068205SYSTEMS AND METHODS FOR MANAGING QUEUES - Described are systems and methods for transmitting data at an aggregation device. The aggregation device includes a record queue and an output bypass queue. The data is received from an electronic device. A record is generated of the received data. The record is placed in the record queue. A determination is made that the record in the record queue is blocked. The blocked record is transferred from the record queue to the output bypass queue.2014-03-06
20140068206METHODS AND SYSTEMS FOR DATA CLEANUP USING PHYSICAL IMAGE OF FILES ON STORAGE DEVICES - Methods, systems, and computer program products are provided for optimizing selection of files for eviction from a first storage pool to free up a predetermined amount of space in the first storage pool. A method includes analyzing an effective space occupied by each file of a plurality of files in the first storage pool, identifying, from the plurality of files, one or more data blocks making up a file to free up the predetermined amount of space based on the analysis of the effective space of each file of the plurality of files, selecting one or more of the plurality of files as one or more candidate files for eviction, based on the identified one or more data blocks, and evicting the one or more candidate files for eviction from the first storage pool to a second storage pool.2014-03-06
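A simple greedy selection consistent with the eviction idea above: rank files by effective occupied space and pick the largest first until the requested amount is freed. This is an assumed heuristic for illustration, not necessarily the patent's exact selection rule:

```python
def pick_eviction_candidates(effective_sizes, space_needed):
    """Greedy sketch: choose the fewest files whose combined effective
    space covers the amount to free (names and policy are illustrative)."""
    chosen, freed = [], 0
    for name, size in sorted(effective_sizes.items(),
                             key=lambda kv: kv[1], reverse=True):
        if freed >= space_needed:
            break
        chosen.append(name)
        freed += size
    return chosen, freed

files = {"a.img": 700, "b.log": 50, "c.db": 300}
chosen, freed = pick_eviction_candidates(files, space_needed=900)
print(chosen, freed)  # ['a.img', 'c.db'] 1000
```

Ranking by *effective* space matters because, as the abstract notes, the physical image of a file (its data blocks) may differ from its nominal size.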
20140068207Reducing Page Faults in Host OS Following a Live Partition Mobility Event - Page faults during partition migration from a source computing system to a destination computing system are reduced by designating each page used by a process as hot or cold according to its frequency of use by the process. During a live partition migration, the cold or coldest (least frequently used) pages are copied to the destination server first, followed by the warmer (more frequently used) pages, and concluding with the hottest (most frequently used) pages. After all dirtied pages have been refreshed, cutover from the instance on the source server to the destination server is made. By transferring the warm and hot pages last (or later) in the migration process, the number of dirtied pages is reduced, thereby reducing page faults subsequent to the cutover.2014-03-06
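The coldest-first ordering above reduces to a sort by access frequency. A minimal sketch, with an invented `page_heat` map standing in for whatever usage statistics the hypervisor tracks:

```python
def migration_order(page_heat):
    """Order pages for live migration coldest-first, so the hot pages
    (most likely to be dirtied again during the copy) go last and need
    fewer re-copies before cutover. `page_heat` maps page id -> access
    frequency (illustrative)."""
    return sorted(page_heat, key=lambda p: page_heat[p])

heat = {"p0": 120, "p1": 3, "p2": 45, "p3": 0}
print(migration_order(heat))  # ['p3', 'p1', 'p2', 'p0']
```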
20140068208SEPARATELY STORED REDUNDANCY - A method or system stores a data block redundancy related to a data block of a storage medium together with the mapping metadata for the data block. In an alternative implementation, the redundancy is stored on a separate block of the storage medium, the separate block being in a storage region other than the storage region of the data block.2014-03-06
20140068209ACCESSING REMOTE MEMORY ON A MEMORY BLADE - A method of accessing remote memory comprising receiving a request for access to a page from a computing device, adding an address of the accessed page to a recent list memory on the remote memory, associating a recent list group identifier to a number of addresses of accessed pages, transferring the requested page to the computing device with the recent list group identifier and temporarily maintaining a copy of the transferred page on the remote memory.2014-03-06
20140068210MANAGEMENT METHOD OF VIRTUAL STORAGE SYSTEM AND REMOTE COPY SYSTEM - Exemplary embodiments provide techniques of managing storage systems including remote copy systems and improving the manageability by automating complicated operations. In one embodiment, a computer comprises a memory and a controller. The controller is operable to: manage a virtual volume to be provided for a server; manage a plurality of logical volumes provided from a plurality of storage systems; manage a condition to be required of the virtual volume, the condition relating to a location in which data to be sent to the virtual volume is stored; manage location information of each of the plurality of logical volumes, the location information of a logical volume being defined based on a location of the logical volume; and control to map the virtual volume to a logical volume of the plurality of logical volumes, based on the condition of the virtual volume and the location information of the logical volumes.2014-03-06
20140068211CONVERTING A FIRST ADDRESS MAPPING FUNCTION FOR MAPPING ADDRESSES TO STORAGE LOCATIONS TO A SECOND ADDRESS MAPPING FUNCTION - Provided are a computer program product, system, and method for converting a first address mapping function for mapping addresses to storage locations to a second address mapping function. For each of a plurality of addresses allocated in the storage using the first address mapping function, a node is generated in the second address mapping function. Each node in the second address mapping function associates a logical address with a physical location for the logical address. A determination is made of addresses having unused space and storage space is freed for the determined addresses having the unused space. Indication is made in the second address mapping function that the storage space for the determined addresses has been freed.2014-03-06
20140068212DEVICE BACKUPS AND UPDATES IN VIEW OF DATA USAGE STATISTICS - Embodiments manage data transfer requests representing backup operations and update operations from a computing device using a centralized data transfer service. The data transfer service selects the data transfer requests for performance based at least on data usage statistics associated with a data usage plan and available network connections on the computing device. For the backup operations, the data transfer requests are also selected based on priority information associated with each of the backup operations. In some embodiments, the data transfer service selects and initiates the data transfer requests without incurring excess data transfer costs for the user.2014-03-06
20140068213INFORMATION PROCESSING APPARATUS AND AREA RELEASE CONTROL METHOD - An information processing apparatus includes a controller. The controller performs a transfer source information generation process for generating transfer source information upon reception of an offload data transfer instruction from a host computer. The controller performs a reserved state setting process for setting, when an instruction to release a release area is received, an overlapping area to a reserved state so as to reserve release of the overlapping area, after issuing a completion response for the release instruction. The controller performs a pending state determination process for determining a pending state in which data transfer using the transfer source information might be executed. The controller performs an area release process for releasing the overlapping area which is set to the reserved state and thus is reserved to be released, when the pending state is cancelled.2014-03-06
20140068214INFORMATION PROCESSING APPARATUS AND COPY CONTROL METHOD - The information processing apparatus, a control apparatus which performs access control and copy control of a disk unit, forms a storage apparatus together with the disk unit. The copy controller manages copy sessions. A copy session is a unit of management of copying a copy source data area on a copy source disk to a copy destination data area on a copy destination disk. A copy session management unit, when there is a plurality of copy sessions, performs scheduling of the plurality of copy sessions with the disk unit being the copy destination disk, and notifies a copy controller controlling the copy source disk of the schedule. An execution unit which has been notified of the schedule executes a copy of a copy session according to the schedule.2014-03-06
20140068215METHOD AND APPARATUS FOR ACCESSING DATA IN A DATA STORAGE SYSTEM - A data storage system, and a method for accessing data in a data storage system, wherein the data storage system comprises at least a first volume and a second volume, and the first volume and the second volume remain consistent by a synchronous copy relationship, the method comprising: setting a virtual unique identifier of the second volume as a unique identifier of the first volume; creating a first path from a host to the first volume and a second path from the host to the second volume by using the unique identifier of the first volume; accessing data by using the first path from the host to the first volume; and setting the second path from the host to the second volume as unavailable.2014-03-06
20140068216STORAGE SYSTEM FOR SUPPORTING COPY COMMAND AND MOVE COMMAND AND OPERATION METHOD OF STORAGE SYSTEM - Provided are a storage system for supporting a copy command and a move command and an operation method of said storage system. The storage system performs a copy operation and a move operation without movement of data between a host and a storage device, by using a copy command and a move command which are distinguished from a read command and a write command. More specifically, the storage device updates a mapping table in response to receiving the copy command or the move command from the host.2014-03-06
20140068217STORAGE SYSTEM, VIRTUALIZATION CONTROL APPARATUS, INFORMATION PROCESSING APPARATUS, AND METHOD FOR CONTROLLING STORAGE SYSTEM - An information processing apparatus is configured to make access to a storage device via a first path. A virtualization control apparatus is configured to control access to a virtual storage device via a second path, where the virtual storage device is provided by virtualizing the storage device. The virtualization control apparatus sends an identifier of the storage device in response to a query from the information processing apparatus which requests information about a storage space that is accessible via the second path. The information processing apparatus incorporates the second path as an inactive standby path when the identifier received as a response to the query matches with an identifier of the storage device accessible via the first path.2014-03-06
20140068218STORAGE DEVICE AND COMMUNICATION METHOD - According to one embodiment, a storage device includes a queue, an interface unit, a selection unit and a delay unit. The interface unit exclusively executes command receiving processing of storing commands from a host in the queue and data transmission processing with the host. The selection unit selects one command from the commands stored in the queue. The delay unit delays a second timing at which data transmission processing for the selected command is started based on a first timing at which the command receiving processing is executed last. When a new command is not received between the first timing and the second timing, the interface unit starts the data transmission processing for the selected command at the second timing. When the new command is received between the first timing and the second timing, the interface unit executes command receiving processing for the new command.2014-03-06
20140068219FREE SPACE COLLECTION IN LOG STRUCTURED STORAGE SYSTEMS - A mechanism is provided for optimizing free space collection in a storage system having a plurality of segments. A collection score value is calculated for least one of the plurality of segments. The collection score value is calculated by determining a sum, across tracks in the segment, of the amount of time over a predetermined period of time during which the track has been invalid due to a more recent copy being written in a different segment. Segments are chosen for free space collection based on the determined collection score value.2014-03-06
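The collection score above is a sum over a segment's invalid tracks of how long each has been invalid. A minimal sketch with invented names and timestamps (the patent does not specify this exact data layout):

```python
def collection_score(track_invalid_since, now):
    """Score a segment for free-space collection: the sum, over its
    invalidated tracks, of how long each has been invalid (illustrative)."""
    return sum(now - t for t in track_invalid_since.values())

def pick_segment(segments, now):
    # Prefer the segment whose invalid tracks have been stale the longest.
    return max(segments, key=lambda s: collection_score(segments[s], now))

segments = {
    "seg_a": {"t1": 100, "t2": 150},  # track -> time it became invalid
    "seg_b": {"t9": 10},
}
print(pick_segment(segments, now=200))  # 'seg_b'
```

Here `seg_b` wins (score 190 versus 150) even though it has fewer invalid tracks, because its track has been invalid much longer.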
20140068220HARDWARE BASED MEMORY ALLOCATION SYSTEM WITH DIRECTLY CONNECTED MEMORY - A hardware based memory allocation system in a computer includes: a memory module formatted with memory blocks; an input controller, in communications with the memory module and receiving a transfer request from a requestor, for transferring data from a source to the memory module; an output controller, in communications with the memory module and the input controller, for transferring data from the memory module to a destination; and a block allocator, in communications with the input controller and the output controller, for maintaining a Block Descriptor Index (BDI) of Free List (FL) Addresses, each FL address pointing to a Block Descriptor Page (BDP) having a plurality of Memory Block (MB) addresses, each MB address pointing to a free memory block in the memory module.2014-03-06
20140068221Thin Provisioning - A mechanism is provided for thin provisioning. An original time-domain sequence of a load parameter of storage resources already allocated to an application program is collected. A future load peak time period of the storage resources already allocated to the application program is determined based on the collected original time-domain sequence of the load parameter. A new storage resource unit from a high-speed storage is allocated in response to receipt of a request to allocate the new storage resource unit to the application program in the future load peak time period. When thin provisioning is performed, whether the physical storage resources newly allocated to the application program are located in a low-speed storage or a high-speed storage is determined according to the application program's accesses to the already-allocated physical storage resources.2014-03-06
20140068222SEMICONDUCTOR MEMORY DEVICE AND METHOD OF OPERATING THE SAME - A semiconductor memory device is operated by, inter alia, performing least significant bit programs for pages in a first page group, performing least significant bit programs for pages in a second page group, and performing most significant bit programs for the pages in the first page group. The distance between the second page group and the common source line is greater than that between the first page group and the common source line.2014-03-06
20140068223Address Server - A mechanism is provided for attributing network addresses to virtual machines. A request for a number of addresses is received from a requesting entity, thereby forming a requested number of addresses. A length of continuous ranges of available addresses is compared to the requested number of addresses. A range of available addresses comprising a number of addresses greater than the requested number of addresses is selected from a memory, thereby forming a selected range of available addresses. A first new range comprising the requested number of addresses excised from the selected range of available addresses is defined and one or more further new ranges are defined comprising the remainder of the selected range of available addresses not belonging to the first new range. The first new range is attributed for the use of the requesting entity.2014-03-06
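The range-splitting step above (excise the requested count from a sufficiently large free range, return the remainder to the free list) can be sketched directly. Ranges are represented as `(start, length)` pairs; the representation and first-fit policy are assumptions for illustration:

```python
def allocate_range(free_ranges, count):
    """Pick a free range with at least `count` addresses, excise the first
    `count` for the requester, and keep the remainder as a new free range
    (illustrative first-fit sketch)."""
    for i, (start, length) in enumerate(free_ranges):
        if length >= count:
            allocated = (start, count)
            rest = free_ranges[:i] + free_ranges[i + 1:]
            if length > count:
                # the unused tail becomes a further new free range
                rest.append((start + count, length - count))
            return allocated, rest
    raise ValueError("no free range large enough")

free = [(10, 4), (100, 50)]
got, free = allocate_range(free, 16)
print(got)   # (100, 16)
print(free)  # [(10, 4), (116, 34)]
```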
20140068224Block-level Access to Parallel Storage - The subject disclosure is directed towards one or more parallel storage components for parallelizing block-level input/output associated with remote file data. Based upon a mapping scheme, the file data is partitioned into a plurality of blocks in which each may be equal in size. A translator component of the parallel storage may determine a mapping between the plurality of blocks and a plurality of storage nodes such that at least a portion of the plurality of blocks is accessible in parallel. Such a mapping, for example, may place each block in a different storage node allowing the plurality of blocks to be retrieved simultaneously and in its entirety.2014-03-06
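One common mapping with the property the abstract mentions (each block on a different node, so blocks can be fetched in parallel) is round-robin striping. The sketch below assumes that scheme for illustration; the application's translator component may use a different mapping:

```python
def stripe_blocks(file_size, block_size, nodes):
    """Partition a file into equal-size blocks and map block i to node
    i % len(nodes), so consecutive blocks land on different storage nodes
    and can be retrieved simultaneously (illustrative)."""
    num_blocks = (file_size + block_size - 1) // block_size  # ceiling division
    return {b: nodes[b % len(nodes)] for b in range(num_blocks)}

mapping = stripe_blocks(file_size=10_000, block_size=4_096,
                        nodes=["node0", "node1", "node2"])
print(mapping)  # {0: 'node0', 1: 'node1', 2: 'node2'}
```

With three blocks on three distinct nodes, the whole file can be read in roughly the time of one block transfer.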
20140068225CONFIGURABLE TRANSLATION LOOKASIDE BUFFER - A particular method includes receiving at least one translation lookaside buffer (TLB) configuration indicator. The at least one TLB configuration indicator indicates a specific number of entries to be enabled at a TLB. The method further includes modifying a number of searchable entries of the TLB in response to the at least one TLB configuration indicator.2014-03-06
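Behaviorally, the configuration indicator above selects how many TLB entries participate in a search. A software model (all names invented; a real TLB is hardware with parallel lookup, not a Python loop):

```python
class ConfigurableTLB:
    """Model of a TLB whose searchable-entry count can be changed at run
    time via a configuration indicator (illustrative)."""
    def __init__(self, max_entries):
        self.entries = [None] * max_entries  # (vpn, pfn) pairs or None
        self.enabled = max_entries           # number of searchable entries

    def configure(self, enabled):
        # The configuration indicator enables only the first `enabled`
        # entries; disabled entries are skipped during lookup.
        self.enabled = min(enabled, len(self.entries))

    def insert(self, slot, vpn, pfn):
        self.entries[slot] = (vpn, pfn)

    def lookup(self, vpn):
        for entry in self.entries[:self.enabled]:
            if entry and entry[0] == vpn:
                return entry[1]
        return None  # TLB miss

tlb = ConfigurableTLB(max_entries=8)
tlb.insert(0, vpn=0x10, pfn=0xA0)
tlb.insert(5, vpn=0x20, pfn=0xB0)
tlb.configure(4)          # only the first 4 entries remain searchable
print(tlb.lookup(0x10))   # 160
print(tlb.lookup(0x20))   # None (slot 5 is disabled)
```

Shrinking the searchable portion trades hit rate for lower lookup power, which is a typical motivation for making the entry count configurable.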
20140068226VECTOR INSTRUCTIONS TO ENABLE EFFICIENT SYNCHRONIZATION AND PARALLEL REDUCTION OPERATIONS - In one embodiment, a processor may include a vector unit to perform operations on multiple data elements responsive to a single instruction, and a control unit coupled to the vector unit to provide the data elements to the vector unit, where the control unit is to enable an atomic vector operation to be performed on at least some of the data elements responsive to a first vector instruction to be executed under a first mask and a second vector instruction to be executed under a second mask. Other embodiments are described and claimed.2014-03-06
20140068227SYSTEMS, APPARATUSES, AND METHODS FOR EXTRACTING A WRITEMASK FROM A REGISTER - Embodiments of systems, apparatuses, and methods are described for performing mask extraction from a general purpose register in a computer processor, in response to a single mask extraction instruction that includes a source general purpose register operand, a destination writemask register operand, an immediate value, and an opcode.2014-03-06
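The extraction itself can be sketched as a shift-and-mask. The exact role of the immediate (here, a starting bit position) and the mask width are assumptions; the abstract only specifies the operands involved.

```python
# Extract a writemask from a general-purpose register: copy mask_bits
# bits of the source GPR, starting at the bit position given by the
# immediate, into a writemask value.

def extract_mask(gpr_value, imm, mask_bits=8):
    return (gpr_value >> imm) & ((1 << mask_bits) - 1)

k1 = extract_mask(0b1010_1100, imm=2)   # bits 2.. of the source GPR
```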
20140068228INSTRUCTION FORWARDING BASED ON PREDICATION CRITERIA - Embodiments herein relate to forwarding an instruction based on predication criteria. A predicate state associated with a packet of data is to be compared to an instruction associated with the predication criteria. The instruction is to be forwarded to an execution unit if the predication criteria includes or matches the predicate state of the packet.2014-03-06
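The predication check above reduces to a membership test: forward the instruction only if its criteria include the packet's predicate state. The dictionary shapes below are illustrative assumptions.

```python
# Forward an instruction to the execution unit only when its predication
# criteria include the predicate state carried by the data packet.

def forward(instruction, packet, execution_unit):
    if packet["predicate"] in instruction["criteria"]:
        execution_unit.append(instruction)
        return True
    return False   # criteria do not match: instruction is not forwarded

eu = []
ok = forward({"op": "add", "criteria": {"taken"}},
             {"predicate": "taken"}, eu)
skipped = forward({"op": "sub", "criteria": {"not_taken"}},
                  {"predicate": "taken"}, eu)
```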
20140068229INSTRUCTION ADDRESS ENCODING AND DECODING BASED ON PROGRAM CONSTRUCT GROUPS - Coding circuitry comprises at least an encoder configured to encode an instruction address for transmission to a decoder. The encoder is operative to identify the instruction address as belonging to a particular one of a plurality of groups of instruction addresses associated with respective distinct program constructs, and to encode the instruction address based on the identified group. The decoder is operative to identify the encoded instruction address as belonging to the particular one of a plurality of groups of instruction addresses associated with respective distinct program constructs, and to decode the encoded instruction address based on the identified group. The coding circuitry may be implemented as part of an integrated circuit or other processing device that includes associated processor and memory elements. In such an arrangement, the processor may generate the instruction address for delivery over a bus to the memory.2014-03-06
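The group-based coding can be sketched as classifying an address into a construct group and transmitting the group id plus a short offset. The group table and encoding layout are illustrative assumptions; the abstract specifies only that encoding and decoding are based on the identified group.

```python
# Encode an instruction address as (group id, offset within that group's
# address range); decode by reversing the lookup.

GROUPS = [(0x1000, 0x1100), (0x2000, 0x2400)]  # [base, limit) per construct

def encode(addr):
    for gid, (base, limit) in enumerate(GROUPS):
        if base <= addr < limit:
            return (gid, addr - base)   # identify the group, encode offset
    raise ValueError("address not in any group")

def decode(code):
    gid, offset = code
    return GROUPS[gid][0] + offset

code = encode(0x2010)
addr = decode(code)
```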
20140068230MICRO-ARCHITECTURE FOR ELIMINATING MOV OPERATIONS - A computer system and processor for elimination of move operations include circuits that obtain a computer instruction and bypass execution units in response to determining that the instruction includes a move operation that involves a transfer of data from a logical source register to a logical destination register. Instead of executing the move operation, the transfer of the data is performed by tracking changes in data dependencies of the source and the destination registers, and assigning a physical register associated with the source register to the destination register based on the dependencies.2014-03-06
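The elimination mechanism above can be modeled with a register rename table: instead of executing the copy, the destination logical register is simply mapped to the source's physical register. This is a simplified model, not the patented microarchitecture.

```python
# MOV elimination via renaming: the move is "executed" purely by
# aliasing the destination's mapping to the source's physical register,
# bypassing the execution units entirely.

class RenameTable:
    def __init__(self):
        self.map = {}        # logical register -> physical register id
        self.next_phys = 0

    def write(self, logical):
        # A producing instruction allocates a fresh physical register.
        self.map[logical] = self.next_phys
        self.next_phys += 1
        return self.map[logical]

    def mov(self, dst, src):
        # No data is transferred; only the dependency mapping changes.
        self.map[dst] = self.map[src]

rt = RenameTable()
rt.write("rax")        # rax -> p0
rt.mov("rbx", "rax")   # rbx now also names p0; no execution unit used
```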
20140068231CENTRAL PROCESSING UNIT AND ARITHMETIC UNIT - There is a need to provide a central processing unit capable of improving the resistance to power analysis attacks without changing programs, lowering clock frequencies, or greatly redesigning a central processing unit of the related art. In a central processing unit, an arithmetic unit is capable of performing arithmetic operation using data irrelevant to data stored in a register group. A control unit allows the arithmetic unit to perform arithmetic processing corresponding to an incorporated instruction. At this time, the control unit allows the arithmetic unit to perform arithmetic processing using the irrelevant data during a first one-clock cycle.2014-03-06
20140068232GLOBAL REGISTER PROTECTION IN A MULTI-THREADED PROCESSOR - Global register protection in a multi-threaded processor is described. In an embodiment, global resources within a multi-threaded processor are protected by performing checks, before allowing a thread to write to a global resource, to determine whether the thread has write access to the particular global resource. The check involves accessing one or more local control registers or a global control field within the multi-threaded processor and in an example, a local register associated with each other thread in the multi-threaded processor is accessed and checked to see whether it contains an identifier for the particular global resource. Only if none of the accessed local resources contain such an identifier, is the instruction issued and the thread allowed to write to the global resource. Otherwise, the instruction is blocked and an exception may be raised to alert the program that issued the instruction that the write failed.2014-03-06
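The permission check described above can be sketched as a scan of the other threads' local control registers for the target resource's identifier. The data layout is an assumption made for illustration.

```python
# Before a thread writes a global resource, scan every other thread's
# local control register set; any thread holding the resource identifier
# blocks the write.

def can_write(thread_id, resource_id, local_registers):
    """local_registers: thread id -> set of protected resource ids."""
    for tid, protected in local_registers.items():
        if tid != thread_id and resource_id in protected:
            return False   # another thread protects the resource: block
    return True

regs = {0: {"G1"}, 1: set(), 2: {"G2"}}
allowed = can_write(1, "G3", regs)   # no other thread claims G3
blocked = can_write(1, "G2", regs)   # thread 2 protects G2
```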
20140068233INFORMATION PROCESSING APPARATUS AND COPY CONTROL METHOD - The information processing apparatus includes a creating unit and a control unit. On receiving an offloaded data transfer instruction, the creating unit creates a copy session for transferring data from a transfer-source memory area of a transfer-source memory apparatus to a transfer-destination memory area of a transfer-destination memory apparatus. When detecting no overload incurred by asynchronous execution control under which the data transfer is executed out of sync with the offloaded data transfer instruction, the control unit determines that the data transfer of the created copy session is to be executed under the asynchronous execution control. On the other hand, when detecting overload incurred by the asynchronous execution control, the control unit determines that the data transfer of the created copy session is to be executed under synchronous execution control in which the data transfer is executed in synchronization with the offloaded data transfer instruction.2014-03-06
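The mode decision above comes down to an overload test: run the copy session asynchronously unless asynchronous execution is overloaded, then fall back to synchronous execution. The pending-session count and threshold below are hypothetical stand-ins for whatever overload metric the apparatus actually uses.

```python
# Choose the execution mode for a created copy session based on whether
# asynchronous execution control is currently overloaded.

def choose_mode(pending_async_sessions, overload_threshold=4):
    if pending_async_sessions < overload_threshold:
        # Transfer runs out of sync with the offloaded instruction.
        return "asynchronous"
    # Overload detected: transfer runs in sync with the instruction.
    return "synchronous"

light = choose_mode(1)
heavy = choose_mode(10)
```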
20140068234INSTRUCTION INSERTION IN STATE MACHINE ENGINES - State machine engines are disclosed, including those having an instruction insertion register. One such instruction insertion register may provide an initialization instruction, such as to prepare a state machine engine for data analysis. An instruction insertion register may also provide an instruction in an attempt to resolve an error that occurs during operation of a state machine engine. An instruction insertion register may also be used to debug a state machine engine, such as after the state machine experiences a fatal error.2014-03-06
20140068235LAYOUT AND EXECUTION OF SOFTWARE APPLICATIONS USING BPRAM - A software layout system is described herein that speeds up computer system boot time and/or application initialization time by moving constant data and executable code into byte-addressable, persistent random access memory (BPRAM). The system determines which components and aspects of the operating system or application change infrequently. From this information, the system builds a high performance BPRAM cache to provide faster access to these frequently used components, including the kernel. The result is that kernel or application code and data structures have a high performance access and execution time with regard to memory fetches. Thus, the software layout system provides a faster way to prepare operating systems and applications for normal operation and reduces the time spent on initialization.2014-03-06
20140068236CUSTOM CONFIGURATION FOR A CALCULATOR BASED ON A SELECTED FUNCTIONALITY - Examples disclose a computing system comprising a computing device with a display surface to detect a selection of functionality from a list of functionalities to be disabled on a calculator. Further, the computing device creates a custom configuration based on the selected functionality. Additionally, the examples disclose a calculator with a processor to integrate the custom configuration, which restricts the selected functionality on the calculator.2014-03-06
20140068237CENTRAL MONITORING STATION WARM SPARE - A method for preparing a computer for use as a central monitoring station includes connecting a computer to a network. An operating system is installed on the computer. Anti-virus software is installed on the computer. Licensing information is installed on the computer. Configuration information is stored on the computer. The configuration information is for the computer and at least one additional computer. A determination is made that the computer is to be activated as a first central monitoring station on the network. When the determination is made that the computer is to be activated as the first central monitoring station, the computer is configured according to a first subset of the configuration information. The computer is activated on the network as the first central monitoring station.2014-03-06
20140068238Arbitrary Code Execution and Restricted Protected Storage Access to Trusted Code - A method comprises signing boot code with a public/private cryptographic key pair, and writing to storage the boot code, the public cryptographic key, and the signed boot code.2014-03-06
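The sign-and-store flow above can be sketched as follows. Python's standard library has no asymmetric primitives, so this sketch substitutes HMAC for the public/private-key signature; a real implementation would use an asymmetric scheme such as RSA or ECDSA, and store the public key (not a shared secret) alongside the code.

```python
# Sign boot code, then write the code, key material, and signature to
# storage together, as the method describes. HMAC stands in for the
# asymmetric signature purely for illustration.
import hashlib
import hmac

def sign_boot(boot_code, key):
    return hmac.new(key, boot_code, hashlib.sha256).digest()

def write_storage(boot_code, key, signature):
    # Persist all three items together.
    return {"code": boot_code, "key": key, "sig": signature}

key = b"device-provisioning-key"
code = b"\x7fELF...bootloader"
storage = write_storage(code, key, sign_boot(code, key))

# At boot, trusted code re-derives the signature and compares.
verified = hmac.compare_digest(storage["sig"],
                               sign_boot(storage["code"], key))
```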
20140068239METHOD FOR BOOTING ICON LOCKOUT - The present invention relates to a method for booting icon lockout and comprises the following steps: (1) Configure the display mode of the host computer's graphics chip as "graphic" in order to define memory addresses in the host computer as a memory pool for icon display; (2) Decompress the zip file for a customized booting icon, which is saved in the BIOS chip as a single file; (3) Load the file into the memory pool and map it to the graphics chip's memory to display the customized booting icon on a monitor; (4) Change the content of the call function (INT10H) so that, during the boot process from power-on self-test until the OS is completely loaded and the desktop is displayed, INT10H displays nothing on the monitor except the customized booting icon.2014-03-06
20140068240LAYOUT AND EXECUTION OF OPERATING SYSTEMS USING BPRAM - A software layout system is described herein that speeds up computer system boot time and/or application initialization time by moving constant data and executable code into byte-addressable, persistent random access memory (BPRAM). The system determines which components and aspects of the operating system or application change infrequently. From this information, the system builds a high performance BPRAM cache to provide faster access to these frequently used components, including the kernel. The result is that kernel or application code and data structures have a high performance access and execution time with regard to memory fetches. Thus, the software layout system provides a faster way to prepare operating systems and applications for normal operation and reduces the time spent on initialization.2014-03-06
20140068241MEMORY DEVICE, MEMORY SYSTEM INCLUDING THE SAME, AND METHOD FOR OPERATING THE MEMORY SYSTEM - A memory device includes a non-volatile memory configured to store repair data and output the repair data in response to an initialization signal, a plurality of registers configured to store the repair data outputted from the non-volatile memory, a plurality of memory banks configured to replace normal cells with redundant cells by using the repair data stored in corresponding registers among the plurality of registers, a verification circuit configured to generate a completion signal indicating that transfer of the repair data from the non-volatile memory to the plurality of registers is complete, and an output circuit configured to output the completion signal to a device other than the memory device.2014-03-06
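The initialization flow above can be sketched as a copy from non-volatile storage into per-bank registers, with a verification step that raises the completion signal only once every bank's register has been written. The structure is illustrative, not the claimed circuit.

```python
# On an initialization signal, transfer repair data from non-volatile
# memory into per-bank registers; the completion signal fires only when
# every bank has received its repair data.

def initialize(nv_memory, banks):
    registers = {}
    for bank in banks:
        if bank in nv_memory:
            registers[bank] = nv_memory[bank]
    # Verification circuit: completion requires all registers loaded.
    complete = all(bank in registers for bank in banks)
    return registers, complete

regs, done = initialize({"bank0": 0x1F, "bank1": 0x2A}, ["bank0", "bank1"])
regs2, partial = initialize({"bank0": 0x1F}, ["bank0", "bank1"])
```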
20140068242IMAGE FORMING APPARATUS, METHOD FOR CONTROLLING IMAGE FORMING APPARATUS, AND STORAGE MEDIUM - An image forming apparatus, which is configured to perform image processing while managing a resource of a system, includes a memory, a detection unit configured to detect a brightness level around a main body of the image forming apparatus, and a control unit configured to reboot the resource of the system, check the state in which the system should be rebooted to determine the level to which this state has shifted, reserve reboot processing according to the determined level, and control whether the reserved reboot processing should be performed according to the detected brightness level and the level of the reserved reboot processing.2014-03-06
20140068243MISOPERATION-PREVENTING METHOD AND DEVICE - A misoperation-preventing method for use in a mobile terminal having a touch screen, includes: monitoring a distance between the mobile terminal and an object in a surrounding environment, after the mobile terminal transitions from a standby state to an active state; determining if the distance satisfies a preset distance condition; and disabling the touch screen if it is determined that the distance satisfies the preset distance condition.2014-03-06
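The check described above is a threshold test on the monitored distance: while a nearby object (a face, a pocket lining) satisfies the preset distance condition, the touch screen stays disabled. The 5 cm threshold is an assumed example; the abstract leaves the condition unspecified.

```python
# Disable the touch screen after wake-up whenever the distance to a
# surrounding object satisfies the preset condition (here: closer than
# a hypothetical 5 cm threshold).

def touch_enabled(distance_cm, threshold_cm=5.0):
    return distance_cm > threshold_cm

in_pocket = touch_enabled(1.2)    # object very close: touch disabled
on_desk = touch_enabled(30.0)     # nothing nearby: touch stays enabled
```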