09th week of 2016 patent application highlights part 50
Patent application number | Title | Published
20160062827SEMICONDUCTOR MEMORY DEVICE AND PROGRAMMING METHOD THEREOF - A semiconductor memory device is provided to keep data reliability while decreasing programming time. A NAND flash memory loads programming data from an external input/output terminal to a page buffer/sense circuit. A detecting circuit for monitoring the programming data detects whether the programming data is a specific bit string. If it is detected that the programming data is not a specific bit string, a transferring/writing circuit transfers the programming data kept by the page buffer/sense circuit to an error checking correction (ECC) circuit, and an ECC code generated by an ECC operation is written to the page buffer/sense circuit. If it is detected that the programming data is a specific bit string, transfer of the programming data kept by the page buffer/sense circuit is forbidden and a known ECC code corresponding to the specific bit string is written to the page buffer/sense circuit.2016-03-03
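
A minimal sketch of the idea in 20160062827, assuming a toy one-byte XOR checksum as a stand-in for the real ECC engine and an all-0xFF page as the "specific bit string"; all names are illustrative, not the patent's circuit:

    PAGE_SIZE = 16
    ERASED_PATTERN = bytes([0xFF]) * PAGE_SIZE      # example "specific bit string"

    def toy_ecc(data: bytes) -> bytes:
        """Stand-in for the ECC circuit: a 1-byte XOR checksum."""
        code = 0
        for b in data:
            code ^= b
        return bytes([code])

    KNOWN_ECC_FOR_ERASED = toy_ecc(ERASED_PATTERN)  # computed once, reused forever

    def program_page(data: bytes) -> bytes:
        if data == ERASED_PATTERN:
            ecc = KNOWN_ECC_FOR_ERASED              # transfer to the ECC circuit is skipped
        else:
            ecc = toy_ecc(data)                     # normal path through the ECC engine
        return data + ecc                           # page buffer contents to be programmed

    assert program_page(ERASED_PATTERN)[-1:] == KNOWN_ECC_FOR_ERASED
    print(program_page(bytes(range(PAGE_SIZE))).hex())
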
20160062828DATA ACCESSING METHOD, MEMORY STORAGE DEVICE AND MEMORY CONTROLLING CIRCUIT UNIT - A data accessing method, a memory storage device and a memory controlling circuit unit are provided. The data accessing method includes: determining whether a first physical programming unit storing first data belongs to a first type physical programming unit or a second type physical programming unit; if the first physical programming unit belongs to the first type physical programming unit, generating a first verification code corresponding to the first data and a second verification code for being combined with the first verification code, and writing the first data and the first verification code into the first physical programming unit; and if the first data is decoded unsuccessfully by using the first verification code, combining the second verification code and the first verification code to decode the first data.2016-03-03
20160062829SEMICONDUCTOR MEMORY DEVICE - According to one embodiment, a semiconductor memory device includes a generator to generate an error correction code. The generator includes a first encoder to calculate a first error correction code, a second encoder to calculate a second correction code, and an operation part to operate the first error correction code and the second error correction code.2016-03-03
20160062829SEMICONDUCTOR MEMORY DEVICE - According to one embodiment, a semiconductor memory device includes a generator to generate an error correction code. The generator includes a first encoder to calculate a first error correction code, a second encoder to calculate a second error correction code, and an operation part to operate on the first error correction code and the second error correction code.2016-03-03
20160062831ERROR CORRECTION CODE FOR UNIDIRECTIONAL MEMORY - A memory array and a method of writing to a unidirectional non-volatile storage cell are disclosed whereby a user data word is transformed to an internal data word and written to one or more unidirectional data storage cells according to a cell coding scheme. A check word may be generated that corresponds to the internal data word. In some embodiments, the check word may be generated by inverting one or more bits of an intermediate check word. Other embodiments may be described and claimed.2016-03-03
20160062832WIDE SPREADING DATA STORAGE ARCHITECTURE - Technology is disclosed for a data storage architecture for providing enhanced storage resiliency for a data object. The data storage architecture can be implemented in a single-tier configuration and/or a multi-tier configuration. In the single-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data fragments, which are stored across many storage devices. In the multi-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data segments, which are sent to one or more tiers of storage nodes. Each of the storage nodes further encodes the data segment to generate many data fragments representing the data segment, which are stored across many storage devices associated with the storage node. The I/O operations for rebuilding the data in case of device failures is spread across many storage devices, which minimizes the wear of a given storage device.2016-03-03
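
To make the wide-spreading idea in 20160062832 concrete, here is a simplified stand-in that uses single-parity XOR striping instead of a full erasure code; fragment counts and function names are assumptions for illustration only:

    def encode(obj: bytes, k: int) -> list:
        frag_len = -(-len(obj) // k)                 # ceiling division
        frags = [obj[i * frag_len:(i + 1) * frag_len].ljust(frag_len, b"\0") for i in range(k)]
        parity = bytearray(frag_len)
        for frag in frags:
            for i, b in enumerate(frag):
                parity[i] ^= b
        return frags + [bytes(parity)]               # k data fragments + 1 parity fragment

    def rebuild(frags: list, missing: int) -> bytes:
        frag_len = len(next(f for f in frags if f is not None))
        out = bytearray(frag_len)
        for idx, frag in enumerate(frags):
            if idx != missing and frag is not None:
                for i, b in enumerate(frag):
                    out[i] ^= b
        return bytes(out)

    fragments = encode(b"hello wide spreading", k=4) # spread across 5 devices
    fragments[2] = None                              # simulate a failed device
    assert rebuild(fragments, missing=2) == encode(b"hello wide spreading", k=4)[2]
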
20160062833REBUILDING A DATA OBJECT USING PORTIONS OF THE DATA OBJECT - Technology is disclosed for a data storage architecture for providing enhanced storage resiliency for a data object. The data storage architecture can be implemented in a single-tier configuration and/or a multi-tier configuration. In the single-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data fragments, which are stored across many storage devices. In the multi-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data segments, which are sent to one or more tiers of storage nodes. Each of the storage nodes further encodes the data segment to generate many data fragments representing the data segment, which are stored across many storage devices associated with the storage node. The I/O operations for rebuilding the data in case of device failures is spread across many storage devices, which minimizes the wear of a given storage device.2016-03-03
20160062834HIERARCHICAL DATA STORAGE ARCHITECTURE - Technology is disclosed for a data storage architecture for providing enhanced storage resiliency for a data object. The data storage architecture can be implemented in a single-tier configuration and/or a multi-tier configuration. In the single-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data fragments, which are stored across many storage devices. In the multi-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data segments, which are sent to one or more tiers of storage nodes. Each of the storage nodes further encodes the data segment to generate many data fragments representing the data segment, which are stored across many storage devices associated with the storage node. The I/O operations for rebuilding the data in case of device failures is spread across many storage devices, which minimizes the wear of a given storage device.2016-03-03
20160062835INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, CONTROL METHOD FOR INFORMATION PROCESSING SYSTEM, AND MEDIUM - An apparatus includes a processor, a storage unit, and a communication unit to access the storage unit without the intermediary of the processor and to access a second apparatus of the plurality of information processing apparatuses via a communication unit of the second apparatus. The communication unit of a first apparatus of the plurality of information processing apparatuses executes at least one of a process of storing redundant data, which is generated by making redundant the data stored in the storage unit of the first apparatus, in the storage unit of the second apparatus via the communication unit of the second apparatus, and a process of acquiring redundant data, which is generated by making redundant the data stored in the storage unit of the second apparatus, via the communication unit of the second apparatus and storing the acquired data in the storage unit of the first apparatus.2016-03-03
20160062836RECONCILIATION IN SYNC REPLICATION - A distributed storage system replicates data for a primary logical storage object on a primary node of the storage system to a secondary logical storage object on a secondary node on the distributed storage system. Failures in writing data to the primary logical storage object or failures in the replication of the data to the secondary logical storage object can cause data that should be synchronized to become divergent. In cases where the data may be divergent, reconciliation operations can be performed to resynchronize the data.2016-03-03
20160062837DEFERRED REBUILDING OF A DATA OBJECT IN A MULTI-STORAGE DEVICE STORAGE ARCHITECTURE - Technology is disclosed for a data storage architecture for providing enhanced storage resiliency for a data object. The data storage architecture can be implemented in a single-tier configuration and/or a multi-tier configuration. In the single-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data fragments, which are stored across many storage devices. In the multi-tier configuration, a data object is encoded, e.g., based on an erasure coding method, to generate many data segments, which are sent to one or more tiers of storage nodes. Each of the storage nodes further encodes the data segment to generate many data fragments representing the data segment, which are stored across many storage devices associated with the storage node. The I/O operations for rebuilding the data in case of device failures is spread across many storage devices, which minimizes the wear of a given storage device.2016-03-03
20160062838INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - In an information processing apparatus, any piece of firmware among pieces of firmware is used to activate the information processing apparatus, a piece of firmware that is different from the piece of firmware used in activation of the information processing apparatus is updated, the information processing apparatus is restarted with a piece of firmware that is different from the currently activated firmware, and the piece of firmware that is different from the piece of firmware used in activation is updated.2016-03-03
20160062839UNDO CHANGES ON A CLIENT DEVICE - In some implementations, a user can be notified when a content item operation initiated by the user on a client device may render a shared or linked content item inaccessible to the user or others. The notification can give the user an option to undo the content item operation. In some implementations, movement of a content item from one directory location to another directory location can be recorded in entries of a local content journal. The local content journal entries can be shared with a content management system and other client devices so that the corresponding content items on the client devices can be moved without downloading additional copies of the content item to the client devices.2016-03-03
20160062840SYSTEM AND METHOD FOR MAINTAINING A DISTRIBUTED AND FAULT-TOLERANT STATE OVER AN INFORMATION CENTRIC NETWORK - A replica management system facilitates maintaining a distributed and fault-tolerant state for a variable over an Information Centric Network (ICN) by replicating the variable across a set of ICN nodes. During operation, a variable-hosting ICN node can receive an Interest that includes a value-updating command for a replica instance of the variable, current values for a set of replicas of the variable, and a new value for the variable. The ICN node can determine, based on the current values for the set of replica variables, whether the current value for the local replica variable is an authoritative value. If so, the ICN node updates the local replica variable to the new value. However, if the current local value is not the authoritative value, the ICN node rolls back a state of the local replica variable to a previous state, and updates the local replica variable to the new value.2016-03-03
20160062841DATABASE AND DATA ACCESSING METHOD THEREOF - A database and a data accessing method thereof are provided. The database includes a memory, a CPU, a data storage element and a data cache element. The memory is configured to store a kernel program. The CPU is coupled to the memory and configured to execute the kernel program. The data storage element and the data cache element are coupled to the CPU. When receiving a data read command or a data write command from an application, the kernel program determines whether the data storage element is set for accelerated data accessing. If yes, the kernel program guides the data read command to read a copy file from the data cache element or writes a file of the data write command into the data storage element and the data cache element. The copy file corresponds to a target file in the data storage element.2016-03-03
20160062842SYSTEM, METHOD AND A NON-TRANSITORY COMPUTER READABLE MEDIUM FOR PROTECTING SNAPSHOTS - A method for protecting snapshots related to a logical unit is provided. The method may include retrieving snapshots blocks that were destaged in a storage system; processing, by the storage system, the snapshots blocks to provide, by an information protection module of the storage system, snapshots redundancy information; and storing the snapshots redundancy information in the storage system.2016-03-03
20160062843METHODS AND DEVICES FOR BACKING UP FILE - A method for backing up files to a back-up server includes determining a hash value of a file according to a preset algorithm, inquiring for the determined hash value in a local back-up database, and canceling back-up of the file to the back-up server if the hash value is recorded in the local back-up database.2016-03-03
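
The dedup check in 20160062843 reduces to: hash the file, look it up locally, and skip the upload on a hit. A minimal sketch, assuming SHA-256 and an in-memory set standing in for the local back-up database:

    import hashlib

    local_backup_db = set()              # stands in for the local back-up database

    def file_hash(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def back_up(path: str, upload) -> bool:
        digest = file_hash(path)
        if digest in local_backup_db:    # already backed up: cancel the upload
            return False
        upload(path)                     # send the file to the back-up server
        local_backup_db.add(digest)      # record the hash locally
        return True
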
20160062844COMPUTER SYSTEM FOR BACKING UP DATA - A computer system is provided, comprising a server and first and second storage systems. The first storage system stores deduplicated data sharing at least a part of data with other data, shared data shared by a plurality of pieces of the deduplicated data, and first type data representing a type of the stored data including the deduplicated data and the shared data. The deduplicated data is associated with the shared data by a pointer to the shared data, and includes differential data indicating a difference from the shared data. The server creates second type data representing a type of the data stored in the second storage system from the first type data. The second storage system stores the shared data associated with the deduplicated data at a reading position before a position at which the deduplicated data is read in sequential reading and stores the second type data.2016-03-03
20160062845Populating Image Metadata By Cross-Referencing Other Images - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for accessing first image metadata corresponding to a first image, the first image metadata including a plurality of first image data fields, determining that at least one data field of the plurality of first image data fields is a null data field, in response to determining that at least one data field is a null data field, accessing second image metadata corresponding to a second image, the second image metadata including a plurality of second image data fields, determining that the second image corresponds to the first image, and cross-referencing the at least one data field with data from a corresponding data field of the plurality of second image data fields.2016-03-03
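
As a toy illustration of the cross-referencing in 20160062845, the sketch below fills null metadata fields from a matching image; the field names and matching rule are assumptions, not the patent's:

    def populate_metadata(first: dict, second: dict, match_keys=("camera", "timestamp")) -> dict:
        # Treat the images as corresponding if their non-null match keys agree.
        corresponds = all(
            first.get(k) is None or second.get(k) is None or first[k] == second[k]
            for k in match_keys
        )
        if not corresponds:
            return first
        return {k: (second.get(k) if v is None else v) for k, v in first.items()}

    first_image = {"camera": "X100", "timestamp": "2016-03-03T10:00", "gps": None}
    second_image = {"camera": "X100", "timestamp": "2016-03-03T10:00", "gps": "37.77,-122.42"}
    print(populate_metadata(first_image, second_image))   # gps filled in from the second image
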
20160062846CONSOLIDATED PROCESSING OF STORAGE-ARRAY COMMANDS USING A FORWARDER MEDIA AGENT IN CONJUNCTION WITH A SNAPSHOT-CONTROL MEDIA AGENT - The illustrative systems and methods consolidate storage-array command channels into a media agent that executes outside the production environment. A “snapshot-control media agent” (“snap-MA”) is configured on a secondary storage computing device that operates apart from client computing devices. A “forwarder” media agent operates on each client computing device that uses the storage array, yet lacks command channels to the storage array. Likewise, a “forwarder” proxy media agent may operate without command channels to the storage array. No third-party libraries or storage-array-command devices are installed or needed on the host computing device. The forwarder media agent forwards any commands directed at the storage array to the snap-MA on the secondary storage computing device. The snap-MA receives and processes commands directed at the storage array that were forwarded by the forwarder media agents. Responses from the storage array are transmitted to the respective forwarder media agent. The snap-MA advantageously pools any number of storage-array-command devices so that capacity limitations in regard to communications channels at the storage array may be avoided. As a result, the snap-MA operating in conjunction with the forwarder media agents enable the illustrative system to consolidate the communication of storage-array commands away from client computing devices and/or proxy media agent hosts and into the secondary storage computing device that hosts the snap-MA.2016-03-03
20160062847INSTALLING APPLICATIONS VIA RESTORATION OF A FALSE BACKUP - Method of and system for bulk installing on a first digital electronic device, from a second digital electronic device in communication with first digital electronic device, a plurality of applications not already installed on first digital electronic device, comprising: receiving by second digital electronic device instruction to bulk install plurality of applications on first digital electronic device; creating by second digital electronic device a false backup archive containing plurality of applications, false backup archive having sufficient attributes of a true backup archive creatable by a backup operation in respect of first digital electronic device to be compatible with a restoration operation corresponding to backup operation, restoration operation executable to transfer contents of false backup archive to a non-transitory computer-readable storage medium of first digital electronic device; and causing by second digital electronic device execution of restoration operation, resulting in bulk installation of plurality of applications on first digital electronic device.2016-03-03
20160062848METHODS AND APPARATUS FOR DATA RECOVERY FOLLOWING A SERVICE INTERRUPTION AT A CENTRAL PROCESSING STATION - A method for data processing may include receiving a paper instruction at a CS and stamping the paper instruction with a predetermined batch number at the CS. The method may further include transferring the paper instruction to a data element at the CS and constructing an executable electronic data record. The method may include creating a transaction identification number for the record and appending the transaction identification number to the record. The method may include transmitting the record from the RS to a CPS and receiving the record at the CPS. The method may include storing the transaction identification number and the batch identification number of the record in a CPS-table of records and executing the record at the CPS. The method may include transmitting the executed record from the CPS to a DRS, following a discrete lapse of time from the receipt of the record.2016-03-03
20160062849Recording Device and Control Method of a Recording Device - A recording device 2016-03-03
20160062850EFFICIENT FILE BROWSING USING KEY VALUE DATABASES FOR VIRTUAL BACKUPS - A method, article of manufacture, and apparatus for protecting data. In some embodiments, this includes using a directory to identify keys in a key value database, walking through each identified key to identify values, identifying a file based on the walk through, and restoring the identified file to a storage device.2016-03-03
20160062851PREVENTING MIGRATION OF A VIRTUAL MACHINE FROM AFFECTING DISASTER RECOVERY OF REPLICA - To prevent a user from initiating potentially dangerous virtual machine migrations, a storage migration engine is configured to be aware of replication properties for a source datastore and a destination datastore. The replication properties are obtained from a storage array configured to provide array-based replication. A recovery manager discovers the replication properties of the datastores stored in the storage array, and assigns custom tags to the datastores indicating the discovered replication properties. When storage migration of a virtual machine is requested, the storage migration engine performs or prevents the storage migration based on the assigned custom tags.2016-03-03
20160062852Transaction Recovery in a Transaction Processing Computer System Employing Multiple Transaction Managers - A technique for transaction recovery by one transaction manager of another transaction manager's transactions in which each transaction manager is adapted to manage two phase commit transactional operations on transactional resources and to record commit or rollback decisions in a transaction recovery log. The recovery transaction manager detects apparent unavailability of the another transaction manager for transaction processing and initiates a transaction recovery process for the another transaction manager's transactions. This process also determines whether any of the transactions of the another transaction manager have all respective resources prepared to commit without there yet being a pending commit decision record in the another transaction manager's recovery log. If so, the recovery transaction manager writes a rollback record indicating an intention to roll back the identified transaction, in the another transaction manager's recovery log provided no commit decision record has been recorded.2016-03-03
20160062853PREVENTING MIGRATION OF A VIRTUAL MACHINE FROM AFFECTING DISASTER RECOVERY OF REPLICA - A storage migration engine and a recovery manager are provided that enable failover operations to be performed in situations where storage migration and array-based replication are involved. The storage migration engine stores information related to storage migrations directly into a source datastore and a destination datastore, which are then replicated over to a recovery site. The recovery manager uses the information stored in the recovered datastores to select which instance of virtual machine data is to be used to fail over to a virtual machine at the recovery site.2016-03-03
20160062854FAILOVER SYSTEM AND METHOD - A failover system, server, method, and computer readable medium are provided. The system includes a primary server for communicating with a client machine and a backup server. The primary server includes a primary session manager, a primary dispatcher, a primary order processing engine, and a primary verification engine. The method involves receiving an input message, obtaining deterministic information, processing the input message and replicating the input message along with the deterministic information.2016-03-03
20160062855VIRTUAL APPLICATION DELIVERY CHASSIS SYSTEM - A method for electing a master blade in a virtual application distribution chassis (VADC), includes: sending by each blade a VADC message to each of the other blades; determining by each blade that the VADC message was not received from the master blade within a predetermined period of time; in response, sending a master claim message including a blade priority by each blade to the other blades; determining by each blade whether any of the blade priorities obtained from the received master claim messages is higher than the blade priority of the receiving blade; in response to determining that none of the blade priorities obtained is higher, setting a status of a given receiving blade to a new master blade; and sending by the given receiving blade a second VADC message to the other blades indicating the status of the new master blade of the given receiving blade.2016-03-03
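
A rough sketch of the election in 20160062855: when no message arrives from the master within the timeout, blades exchange priorities and the highest priority wins. Message plumbing is faked with plain data structures; all names are assumptions:

    import time

    def elect_master(blades: dict, last_heartbeat: float, timeout: float, now=None):
        """blades maps blade_id -> priority; returns the id of the new master, or None."""
        now = time.time() if now is None else now
        if now - last_heartbeat <= timeout:
            return None                               # master is still alive, no election
        # Every blade exchanges a master-claim message carrying its priority;
        # the highest priority wins (ties broken by blade id for determinism).
        return max(blades, key=lambda b: (blades[b], b))

    blades = {"blade1": 10, "blade2": 30, "blade3": 20}
    print(elect_master(blades, last_heartbeat=0.0, timeout=5.0, now=100.0))   # -> blade2
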
20160062856TECHNIQUES FOR MAINTAINING COMMUNICATIONS SESSIONS AMONG NODES IN A STORAGE CLUSTER SYSTEM - Various embodiments are generally directed to techniques for preparing to respond to failures in performing a data access command to modify client device data in a storage cluster system. An apparatus may include a processor component of a first node coupled to a first storage device; an access component to perform a command on the first storage device; a replication component to exchange a replica of the command with the second node via a communications session formed between the first and second nodes to enable at least a partially parallel performance of the command by the first and second nodes; and a multipath component to change a state of the communications session from inactive to active to enable the exchange of the replica based on an indication of a failure within a third node that precludes performance of the command by the third node. Other embodiments are described and claimed.2016-03-03
20160062857FAULT RECOVERY ROUTINE GENERATING DEVICE, FAULT RECOVERY ROUTINE GENERATING METHOD, AND RECORDING MEDIUM - A fault recovery routine generating device includes a subroutine storage unit which stores subroutines, a precondition storage unit which stores a precondition, a fault combination acceptance unit which accepts a combination of faults that have occurred in components of an information system, a subroutine specification unit which identifies subroutines required for recovery of the components, a fault recovery routine generating unit which acquires the identified subroutines from the subroutine storage unit and links the subroutines to generate a candidate fault recovery routine which is a routine for recovering the information system, a fault recovery time estimation unit which estimates the time required for fault recovery by the candidate fault recovery routine, and a fault recovery routine output unit which outputs the candidate fault recovery routine whose fault recovery time is less than or equal to a predetermined time as a fault recovery routine.2016-03-03
20160062858STORAGE POLICY-BASED AUTOMATION OF PROTECTION FOR DISASTER RECOVERY - Exemplary methods, apparatuses, and systems include a recovery manager receiving selection of a storage profile to be protected. The storage profile is an abstraction of a set of one or more logical storage devices that are treated as a single entity based upon common storage capabilities. In response to the selection of the storage profile to be protected, a set of virtual datacenter entities associated with the storage profile is added to a disaster recovery plan to automate a failover of the set of virtual datacenter entities from a protection site to a recovery site. The set of one or more virtual datacenter entities includes one or more virtual machines, one or more logical storage devices, or a combination of virtual machines and logical storage devices. The set of virtual datacenter entities is expandable and interchangeable with other virtual datacenter entities.2016-03-03
20160062859SYSTEMS AND METHODS TO MAINTAIN DATA INTEGRITY AND REDUNDANCY IN A COMPUTING SYSTEM HAVING MULTIPLE COMPUTERS - A computing device configured with a rule engine to apply a set of predetermined rules to conditions relevant to changes of presence data of computers in a computing network forming a computing entity in which data stored in the computing entity is distributed among the computers for redundancy and data recovery. In response to the absence of a computer previously present in the computing entity, the rules cause the computing device to communicate with one or more of the computers to perform data recovery and store data with redundancy with the absent computer. In response to the addition of a new computer in the computing entity, the rules cause the computing device to communicate with one or more of the computers to redistribute data across the computing entity to use the storage capacity offered by the new computer.2016-03-03
20160062860METHOD OF IMPROVING ERROR CHECKING AND CORRECTION PERFORMANCE OF MEMORY - A method of improving an error checking and correction performance of a memory includes replacing a defective column including a defective memory cell of the memory cell array with a spare column of the spare cell array, wherein the memory cell array includes memory cells in a matrix and the spare cell array includes spare memory cells in a matrix to be replaced for defective memory cells; storing check bits of error correction code in at least one memory cell of the defective column; storing defect information regarding a defect of the defective memory cell; determining whether the at least one memory cell storing the check bits is to be used to perform error checking and correction on a memory, based on the defect information; and performing error checking and correction on the memory using a memory cell selected based on a result of determining whether the at least one memory cell storing the check bits is to be used.2016-03-03
20160062861METHOD FOR CONNECTING AN INPUT/OUTPUT INTERFACE OF A TESTING DEVICE EQUIPPED FOR TESTING A CONTROL UNIT - A method for connecting an input/output interface of a testing device equipped for testing a control unit to a model of a technical system present in the testing device. The interface connects the control unit to be tested or connects a technical system to be controlled, and the model to be connected to the input/output interface is a model of the technical system to be controlled or a model of the control unit to be tested. The testing device has a plurality of input/output functions connected to the model. The method provides an interface hierarchy structure and a function hierarchy structure. The method includes an automatic configuration of compatible connections between the interface hierarchy structure and the function hierarchy structure so that the model present in the testing device communicates through at least a part of the compatible connections with the control unit to be tested or the technical system to be controlled.2016-03-03
20160062862DATA PROCESSING SYSTEM WITH DEBUG CONTROL - A data processing system includes a processor configured to execute processor instructions and a memory. The memory has a data array and a checkbit array wherein each entry of the checkbit array includes a plurality of checkbits and corresponds to a storage location of the data array. The system includes error detection/correction logic configured to, during normal operation, detect an error in data access from a storage location of the data array using the plurality of checkbits in the entry corresponding to the storage location. The system further includes debug logic configured to, during debug mode, use a portion of the plurality of the checkbits in the entry corresponding to the storage location to generate a breakpoint/watchpoint request for the processor.2016-03-03
20160062863MULTICORE PROCESSOR SYSTEM HAVING AN ERROR ANALYSIS FUNCTION - A method for operating a multi-core processor system, wherein different threads of a program are each executed simultaneously by a different respective processor core of the multi-core processor system, includes inserting a breakpoint in a first one of the threads for interrupting the first processor core and instead executing an exception handling routine. At least one processor core to be additionally interrupted is determined with the exception handling routine on the basis of an association matrix, and an inter-processor interrupt (IPI) is sent to the at least one processor core by the exception handling routine in order to interrupt the at least one processor core.2016-03-03
20160062864METHOD AND APPARATUS FOR MULTIPLE MEMORY SHARED COLLAR ARCHITECTURE - A method and apparatus for reducing memory built-in self-test (MBIST) area by optimizing the number of interfaces required for testing a given set of memories is provided. The method begins when memories of a same configuration are grouped together. One memory is then selected from each of the groups. MBIST insertion is then performed for a selected group of memories, and the selected group of memories contains memories of different configurations. Control logic is used to select each group of memories separately. The memory group under test may also be selected using programmable user bits. An apparatus is also provided. The apparatus includes: a controller, at least one memory interface in communication with the controller, at least one control logic cloud in communication with the at least one memory interface; and at least one bit bus.2016-03-03
20160062865SYSTEMS AND METHODS FOR PROCESSING TEST RESULTS - Systems and methods for processing test results. A method of analyzing test results includes receiving a set of test result files, the set of test result files including a plurality of test results. The method also includes identifying a set of data filters based on one or more of the set of test result files or user input. The method further includes generating filtered results based on the set of data filters and the set of test result files, the filtered results including one or more of a subset of the plurality of test results or reordered test results. The method further includes providing a visual representation of the filtered results.2016-03-03
20160062866INDEX FILTER FOR VISUAL MONITORING - In one embodiment, a method includes receiving a plurality of measurements, each measurement associated with a different parameter, calculating an index based on the measurements, and generating a visual index display indicating the index, the visual index display comprising a first portion and a second portion, each portion configured for selection by a user. A first set of measurements is displayed when the first portion is selected and a second set of measurements is displayed when the second portion is selected. The first set of measurements is a subset of the second set of measurements. An apparatus and logic are also disclosed herein.2016-03-03
20160062867OPTIMIZATION OF POWER AND COMPUTATIONAL DENSITY OF A DATA CENTER - Techniques for optimizing power and computational density of data centers are described. According to various embodiments, a benchmark test is performed by a computer data center system. Thereafter, transaction information and power consumption information associated with the performance of the benchmark test are accessed. A service efficiency metric value is then generated based on the transaction information and the power consumption information, the service efficiency metric value indicating a number of transactions executed via the computer data center system during a specific time period per unit of power consumed in executing the transactions during the specific time period. The generated service efficiency metric value is then compared to a target threshold value. Thereafter, a performance summary report indicating the generated service efficiency metric value, and indicating a result of the comparison of the generated service efficiency metric value to the target value, is generated.2016-03-03
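
The service efficiency metric in 20160062867 is essentially transactions executed per unit of power consumed over a window, compared to a target. A worked sketch, with units and the target value assumed for illustration:

    def service_efficiency(transactions: int, energy_kwh: float) -> float:
        return transactions / energy_kwh              # transactions per kWh

    def performance_summary(transactions: int, energy_kwh: float, target: float) -> dict:
        value = service_efficiency(transactions, energy_kwh)
        return {"service_efficiency": value, "target": target, "meets_target": value >= target}

    print(performance_summary(transactions=1_200_000, energy_kwh=400.0, target=2500.0))
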
20160062868AUTOMATED INSTRUMENTATION OF APPLICATIONS - Methods for automatically identifying and instrumenting application classes and methods for a particular application are described. In some embodiments, application code (e.g., bytecode or source code) associated with the particular application may be parsed to identify classes and methods within the application code and to identify terminal components (e.g., methods or function calls) and non-terminal components (e.g., control flow statements). Once the terminal components and non-terminal components have been identified, a complexity model and a corresponding score for each of the classes and methods within the application code may be determined. The complexity model may be used to estimate the number of computations that may be required if a particular class or method is used by the particular application. Application classes and methods corresponding with a score that is greater than a threshold may be instrumented by inserting probes into the identified classes and methods.2016-03-03
20160062869EMBEDDING STALL AND EVENT TRACE PROFILING DATA IN THE TIMING STREAM - EXTENDED TIMING TRACE CIRCUITS, PROCESSES, AND SYSTEMS - An electronic tracing process includes packing both stall (2016-03-03
20160062870STRUCTURED QUERY LANGUAGE DEBUGGER - The present disclosure describes methods, systems, and computer program products for debugging structured query language (SQL) statements. One computer-implemented method includes receiving a request to fetch a debug execution plan considering different structured query language (SQL) execution optimization levels and including a mapping for a SQL statement, receiving a request to initialize a debugging process of the SQL statement, verifying received and attached filter criteria provided using a SQL debug channel, setting SQL statement breakpoints, triggering the SQL statement, transmitting a notification that a SQL process is attached to a debugger associated and ready for external execution control, providing state details and an intermediate result upon reaching a particular breakpoint associated with the SQL process, providing an ability to change the process state and influence the process, and providing a SQL final execution response after reaching the end of the execution of the triggered SQL statement.2016-03-03
20160062871PROGRAM INFORMATION GENERATING SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT - A program information generating system includes an acquisition unit that acquires dependency information indicating dependency among a plurality of events generated by execution of a program and selection information identifying a selected event that is the event selected by a user; a generation unit that generates display information, on the basis of the dependency information and the selection information, such that a dependency path that is formed of the plurality of events having the dependency and includes the selected event is displayed in a distinguishable manner; and a display control unit that controls a display unit, on the basis of the display information, such that a display image indicating an execution state of the program is displayed.2016-03-03
20160062872PATTERN ORIENTED DATA COLLECTION AND ANALYSIS - A process for determining a problematic condition while running software includes: loading a first pattern data set having a symptom code module, a problematic condition determination module, and a set of responsive action module(s), generating a runtime symptom code in response to a first problematic condition being caused by the running of the software on the computer, determining that the runtime symptom code matches a symptom code corresponding to the first pattern data set, determining that the first problematic condition caused the generation of the runtime symptom code, and taking a responsive action from a set of responsive action(s) that corresponds to the first problematic condition.2016-03-03
20160062873CENTRALIZED DISPATCHING OF APPLICATION ANALYTICS - A method may include, in a computing device comprising at least one processor and a memory, generating at least one information beacon from each of a plurality of applications installed on the computing device. Each information beacon may include application analytics data associated with a corresponding application while the corresponding application is running on the computing device. The at least one information beacon from each of the plurality of applications may be stored in a common location in the computing device. The stored at least one information beacon may be dispatched from each of the plurality of applications to a network device communicatively coupled to the computing device. The generating may be triggered by beacon generation code implemented in each of the plurality of applications installed on the computing device.2016-03-03
20160062874DEBUG ARCHITECTURE FOR MULTITHREADED PROCESSORS - Debug architecture for multithreaded processors. In some embodiments, a method includes, in response to receiving a halt command, saving a context of a thread being executed by a processor core to a context memory distinct from the processor core; suspending execution of the thread; and initiating a debug of the thread using the context stored in the context memory. In other embodiments, an integrated circuit includes a processor core; a context management circuit coupled to the core; and a debug support circuit coupled to the context management circuit, the debug support circuit configured to send a halt request to the context management circuit and the context management circuit configured to, in response to having received the request, facilitate a debug operation by causing execution of a thread running on the core to be suspended and saving a context of the thread into a context memory distinct from the core.2016-03-03
20160062875METHOD FOR ALTERING EXECUTION OF A PROGRAM, DEBUGGER, AND COMPUTER-READABLE MEDIUM - A method for altering execution of a program on a computer. The program resides in a memory unit that has a logical address space assigned thereto. The method comprises: operating the computer to start executing the program; operating the computer to suspend execution of the program; selecting a patch insertion address within a logical address range of the program, saving the original code residing at the patch insertion address; generating a patch routine; writing a jump instruction to the patch insertion address, thus overwriting said original code, wherein the jump instruction is arranged to instruct the computer to jump to a start address of the patch routine; and operating the computer to resume execution of the program. The patch routine is arranged to prompt the computer to: save a current context of the program; execute a user code; restore the saved context of the program; and execute a surrogate code.2016-03-03
20160062876AUTOMATED SOFTWARE CHANGE MONITORING AND REGRESSION ANALYSIS - The present disclosure describes methods, systems, and computer program products for providing automatic regression analysis of software source code. One computer-implemented method includes selecting particular source code of a software product from a source code repository, preparing the selected source code to extract information while executing, performing a series of actions on the prepared selected source code resulting in logged data associated with the performed actions, submitting the logged data to an automatic regression analyzer application, determining changes made to the particular source code, and determining software tests needed to be executed to properly test the changed particular source code and other affected source code.2016-03-03
20160062877GENERATING COVERAGE METRICS FOR BLACK-BOX TESTING - Generating coverage metrics for black-box testing includes performing static analysis of a program code to be tested. The static analysis includes identifying variables whose value depends on inputs of the program code. Code blocks are inserted into the program code to be tested. The code blocks insert vulnerabilities into the code at locations where the variables are modified. The code blocks violate one or more properties to be tested. A testing scan is applied to the program code and vulnerabilities are located by the test. A coverage metric is output based on the ratio of the located vulnerabilities to the total number of inserted vulnerabilities in the program code.2016-03-03
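
The coverage metric in 20160062877 boils down to the ratio of injected vulnerabilities the scan actually located. A minimal sketch, using hypothetical block identifiers:

    def coverage_metric(inserted: set, located: set) -> float:
        if not inserted:
            return 0.0
        return len(located & inserted) / len(inserted)

    inserted_vulns = {"block1", "block2", "block3", "block4"}
    located_vulns = {"block1", "block4"}
    print(f"coverage = {coverage_metric(inserted_vulns, located_vulns):.0%}")   # 50%
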
20160062878SPEEDING UP DYNAMIC LANGUAGE EXECUTION ON A VIRTUAL MACHINE WITH TYPE SPECULATION - According to one technique, a virtual machine stores type profiling data for program code, the type profiling data indicating observed types for profiled values within the program code at specific profile points during previous executions of the program code. The virtual machine determines to optimize a particular code segment of the program code. The virtual machine generates a program representation describing a flow of data through different variables within the code segment. The virtual machine assigns speculative types to certain variables in the particular code segment by: assigning speculative types of first variables to respective observed types recorded in the type profiling data; and calculating speculative types of second variables, based on propagating the speculative types of the first variables through the program representation. The virtual machine compiles the particular code segment by optimizing instructions within the particular code segment based on speculative types of variables utilized by the instructions.2016-03-03
20160062879TESTING A MOBILE APPLICATION - The present invention discloses a manager, a test agent installed on a personal mobile device and methods thereof. The manager comprises: a first network connection module configured to establish a connection with the mobile device through the Internet, the mobile device being installed with a test agent for performing test operations on a mobile application on the mobile device; and a security module configured to communicate with the test agent through the first network connection module to make the test agent perform security control on the mobile device. According to the manager, mobile devices, and methods of the present invention, costs such as the maintenance cost of the data center and the purchase cost of mobile devices can be reduced dramatically. It is not necessary to analyze market demand, since the mobile devices owned by their users are the very devices that need to be tested by the tester.2016-03-03
20160062880Methods and Systems for the Use of Synthetic Users To Performance Test Cloud Applications - A method and system for testing the end-to-end performance of cloud based applications. Real workload is created for the cloud based applications using synthetic users. The load and length of demand may be adjusted to address different traffic models allowing the measurement and analysis of user performance metrics under specified conditions.2016-03-03
20160062881METABLOCK RELINKING SCHEME IN ADAPTIVE WEAR LEVELING - Systems and methods for metablock relinking may be provided. A first physical block of a first metablock may be determined to have a different health than a second physical block of a second metablock based on health indicators of the first and second physical blocks. Each of the health indicators may indicate an extent to which a respective one of the first and second physical blocks may be written to and/or erased before the respective one of the first and second physical blocks becomes defective. The first physical block of the first metablock may be replaced with the second physical block of the second metablock based on a determination that the health of the first physical block of the first metablock is different than the health of the second physical block of the second metablock.2016-03-03
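
A sketch of the relinking decision in 20160062881, assuming remaining program/erase cycles as the health indicator and a simple threshold for "different health"; the data model is illustrative only:

    def maybe_relink(metablock_a: list, metablock_b: list, i: int, j: int,
                     health: dict, threshold: int = 100) -> bool:
        """Swap physical block i of metablock_a with block j of metablock_b
        if their remaining program/erase budgets differ by more than threshold."""
        block_a, block_b = metablock_a[i], metablock_b[j]
        if abs(health[block_a] - health[block_b]) <= threshold:
            return False
        metablock_a[i], metablock_b[j] = block_b, block_a
        return True

    health = {"p0": 5000, "p1": 800, "p2": 4900, "p3": 4700}   # remaining P/E cycles
    meta1, meta2 = ["p0", "p1"], ["p2", "p3"]
    maybe_relink(meta1, meta2, 1, 0, health)                   # p1 is much weaker than p2
    print(meta1, meta2)                                        # ['p0', 'p2'] ['p1', 'p3']
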
20160062882METHOD AND SYSTEM FOR GARBAGE COLLECTION IN A STORAGE SYSTEM BASED ON LONGEVITY OF STORED DATA - A method for managing data. The method includes receiving a first request to write data to persistent storage and in response to the first request, writing the data to a short-lived block in the persistent storage, where the data is short-lived data or data of unknown longevity. The method further includes performing a modified garbage collection operation that includes: selecting a first frag page in a first block, determining that the first frag page is live, and migrating, based on the determination that the first frag page is live, the first frag page to a long-lived block in the persistent storage, where the long-lived block is distinct from the short-lived block and wherein the long-lived block does not include any short-lived data.2016-03-03
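
To illustrate the modified garbage collection in 20160062882, the sketch below migrates still-live pages from a short-lived block into a long-lived block and reclaims dead pages; the block/page model is a deliberately tiny assumption:

    def modified_gc(short_lived_block: dict, long_lived_block: dict, is_live) -> None:
        """short_lived_block / long_lived_block map page_id -> data."""
        for page_id in list(short_lived_block):
            if is_live(page_id):
                # Data that survived in the short-lived block is evidently long-lived.
                long_lived_block[page_id] = short_lived_block.pop(page_id)
            else:
                del short_lived_block[page_id]       # dead page: reclaim the space

    short_block = {"p1": b"tmp", "p2": b"still referenced", "p3": b"tmp"}
    long_block = {}
    modified_gc(short_block, long_block, is_live=lambda p: p == "p2")
    print(short_block, long_block)                   # {} {'p2': b'still referenced'}
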
20160062883DATA STORAGE DEVICE AND OPERATING METHOD THEREOF - A data storage device may include: a nonvolatile memory device; and a controller suitable for generating a mapping table based on one or more of write logical addresses for access to the nonvolatile memory device. The mapping table may include information of: correspondence between a physical address for access to the nonvolatile memory device and one of the write logical addresses; and a number of successive physical addresses corresponding to successive logical addresses starting from the write logical addresses corresponding to the correspondence information.2016-03-03
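
A minimal illustration of the mapping table in 20160062883: each entry holds one logical-to-physical correspondence plus a count of further successive addresses. The entry layout is an assumption for illustration:

    def build_mapping(pairs):
        """pairs: list of (logical, physical) addresses in write order."""
        table = []                                   # entries: [logical, physical, run_count]
        for lba, ppa in pairs:
            if table:
                start_lba, start_ppa, run = table[-1]
                if lba == start_lba + run + 1 and ppa == start_ppa + run + 1:
                    table[-1][2] += 1                # extend the run of successive addresses
                    continue
            table.append([lba, ppa, 0])
        return table

    def lookup(table, lba):
        for start_lba, start_ppa, run in table:
            if start_lba <= lba <= start_lba + run:
                return start_ppa + (lba - start_lba)
        return None

    table = build_mapping([(100, 7), (101, 8), (102, 9), (500, 42)])
    print(table, lookup(table, 102))                 # [[100, 7, 2], [500, 42, 0]] 9
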
20160062884DATA STORAGE DEVICE AND METHOD FOR OPERATING THE SAME - An operating method of a data storage device includes receiving a write request, determining whether it is possible to perform a first write operation of simultaneously writing a plurality of bits in each of memory cells coupled to one word line of a nonvolatile memory apparatus, and, according to a determination result, performing a garbage collection operation for the nonvolatile memory apparatus and generating first merged data.2016-03-03
20160062885GARBAGE COLLECTION METHOD FOR NONVOLATILE MEMORY DEVICE - A garbage collection method for a nonvolatile memory includes performing an urgent garbage collection operation by copying at least one page of a first logical area to a free block of a second logical area and remapping a page of the second logical area to the first logical area in response to a remapping command received from a host.2016-03-03
20160062886METHOD, DEVICE AND SYSTEM FOR DATA PROCESSING - An example relates to a method for data processing comprising: mapping between a logical address and a physical address of a memory, wherein the memory comprises several pages, wherein a group of pages comprises at least one page that comprises at least two portions, and wherein the at least two portions of each page of the group are not part of a single-page logical address space.2016-03-03
20160062887FLEXIBLE ARBITRATION SCHEME FOR MULTI ENDPOINT ATOMIC ACCESSES IN MULTICORE SYSTEMS - The MSMC (Multicore Shared Memory Controller) described is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. The invention unifies all transaction sizes belonging to a slave prior to arbitrating the transactions in order to reduce the complexity of the arbitration process and to provide optimum bandwidth management among all masters. Two consecutive slots are assigned per cache line access to automatically guarantee the atomicity of all transactions within a single cache line. The need for synchronization among all the banks of a particular SRAM is eliminated, as synchronization is accomplished by assigning back-to-back slots.2016-03-03
20160062888LEAST DISRUPTIVE CACHE ASSIGNMENT - The embodiments are directed to methods and appliances for assigning communication network caches. The methods and appliances can assign storage buckets to caches in a manner that is minimally disruptive to all cache assignments. The methods and appliances can determine a minimum number of cache assignments and reassignments to perform based on a plurality of factors including a number of caches added to and removed from a communication network during a given time period, a number of buckets in the communication network, and current cache assignment information. The methods and appliances determine a quantity of buckets to be assigned to each cache, and a quantity of extra buckets to assign, and selectively choose certain buckets for reassignment.2016-03-03
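
A sketch of the minimally disruptive reassignment in 20160062888: compute each cache's quota, keep existing assignments where possible, and move only orphaned or surplus buckets; all names and the quota rule are assumptions:

    def rebalance(assignment: dict, caches: list) -> dict:
        """assignment maps bucket -> cache (possibly a removed cache); returns a new map."""
        buckets = sorted(assignment)
        quota, extra = divmod(len(buckets), len(caches))
        target = {c: quota + (1 if i < extra else 0) for i, c in enumerate(caches)}
        new_assignment, load, orphans = {}, {c: 0 for c in caches}, []
        for b in buckets:                            # keep current owners while under quota
            c = assignment[b]
            if c in target and load[c] < target[c]:
                new_assignment[b], load[c] = c, load[c] + 1
            else:
                orphans.append(b)                    # owner removed or over quota
        for b in orphans:                            # reassign only what must move
            c = min(caches, key=lambda x: load[x])
            new_assignment[b], load[c] = c, load[c] + 1
        return new_assignment

    old = {0: "A", 1: "A", 2: "B", 3: "B", 4: "C", 5: "C"}
    print(rebalance(old, ["A", "B"]))                # only cache C's buckets move
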
20160062889COHERENCY CHECKING OF INVALIDATE TRANSACTIONS CAUSED BY SNOOP FILTER EVICTION IN AN INTEGRATED CIRCUIT - An interconnect has coherency control circuitry for performing coherency control operations and a snoop filter for identifying which devices coupled to the interconnect have cached data from a given address. When an address is looked up in the snoop filter and misses, and there is no spare snoop filter entry available, then the snoop filter selects a victim entry corresponding to a victim address, and issues an invalidate transaction for invalidating locally cached copies of the data identified by the victim. The coherency control circuitry for performing coherency checking operations for data access transactions is reused for performing coherency control operations for the invalidate transaction issued by the snoop filter. This greatly reduces the circuitry complexity of the snoop filter.2016-03-03
20160062890COHERENCY CHECKING OF INVALIDATE TRANSACTIONS CAUSED BY SNOOP FILTER EVICTION IN AN INTEGRATED CIRCUIT - An interconnect has coherency control circuitry for performing coherency control operations and a snoop filter for identifying which devices coupled to the interconnect have cached data from a given address. When an address is looked up in the snoop filter and misses, and there is no spare snoop filter entry available, then the snoop filter selects a victim entry corresponding to a victim address, and issues an invalidate transaction for invalidating locally cached copies of the data identified by the victim. The coherency control circuitry for performing coherency checking operations for data access transactions is reused for performing coherency control operations for the invalidate transaction issued by the snoop filter. This greatly reduces the circuitry complexity of the snoop filter.2016-03-03
20160062891CACHE BACKING STORE FOR TRANSACTIONAL MEMORY - In response to a transactional store request, the higher level cache transmits, to the lower level cache, a backup copy of an unaltered target cache line in response to a target real address hitting in the higher level cache, updates the target cache line with store data to obtain an updated target cache line, and records the target real address as belonging to a transaction footprint of the memory transaction. In response to a conflicting access to the transaction footprint prior to completion of the memory transaction, the higher level cache signals failure of the memory transaction to the processor core, invalidates the updated target cache line in the higher level cache, and causes the backup copy of the target cache line in the lower level cache to be restored as a current version of the target cache line.2016-03-03
20160062892CACHE BACKING STORE FOR TRANSACTIONAL MEMORY - In response to a transactional store request, the higher level cache transmits, to the lower level cache, a backup copy of an unaltered target cache line in response to a target real address hitting in the higher level cache, updates the target cache line with store data to obtain an updated target cache line, and records the target real address as belonging to a transaction footprint of the memory transaction. In response to a conflicting access to the transaction footprint prior to completion of the memory transaction, the higher level cache signals failure of the memory transaction to the processor core, invalidates the updated target cache line in the higher level cache, and causes the backup copy of the target cache line in the lower level cache to be restored as a current version of the target cache line.2016-03-03
20160062893INTERCONNECT AND METHOD OF MANAGING A SNOOP FILTER FOR AN INTERCONNECT - An interconnect and method of managing a snoop filter within such an interconnect are provided. The interconnect is used to connect a plurality of devices, including a plurality of master devices where one or more of the master devices has an associated cache storage. The interconnect comprises coherency control circuitry to perform coherency control operations for data access transactions received by the interconnect from the master devices. In performing those operations, the coherency control circuitry has access to snoop filter circuitry that maintains address-dependent caching indication data, and is responsive to a data access transaction specifying a target address to produce snoop control data providing an indication of which master devices have cached data for the target address in their associated cache storage. The coherency control circuitry then responds to the snoop control data by issuing a snoop transaction to each master device indicated by the snoop control data, in order to cause a snoop operation to be performed in their associated cache storage in order to generate snoop response data. Analysis circuitry then determines from the snoop response data an update condition, and upon detection of the update condition triggers performance of an update operation within the snoop filter circuitry to update the address-dependent caching indication data. By subjecting the snoop response data to such an analysis, it is possible to identify situations where the caching indication data has become out of date, and update that caching indication data accordingly, this giving rise to significant performance benefits in the operation of the interconnect.2016-03-03
20160062894System and Method for Performing Message Driven Prefetching at the Network Interface - Each computing node of a distributed computing system may implement a hardware mechanism at the network interface for message driven prefetching of application data. For example, a parallel data-intensive application that employs function shipping may distribute respective portions of a large data set to main memory on multiple computing nodes. The application may send messages to one of the computing nodes referencing data that is stored locally on the node. For each received message, the network interface on the recipient node may extract the reference, initiate the prefetching of referenced data into a local cache (e.g., an LLC), and then store the message for subsequent interpretation and processing by a local processor core. When the processor core retrieves a stored message for processing, the referenced data may already be in the LLC, avoiding a CPU stall while retrieving it from memory. The hardware mechanism may be configured via software.2016-03-03
20160062895METHOD FOR DISK DEFRAG HANDLING IN SOLID STATE DRIVE CACHING ENVIRONMENT - An invention is provided for handling target disk access requests during disk defragmentation in a solid state drive caching environment. The invention includes detecting a request to access a target storage device. In response, data associated with the request is written to the target storage device without writing the data to the caching device, with the proviso that the request is a write request. In addition, the invention includes reading data associated with the request and marking the data associated with the request stored in the caching device for discard, with the proviso that the request is a read request and the data associated with the request is stored on the caching device. Data marked for discard is discarded from the caching device when time permits, for example, upon completion of disk defragmentation.2016-03-03
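As a rough illustration of the defrag-time policy described in the abstract above (not the claimed implementation), the following Python sketch models the target disk, the SSD caching device, and the discard list as plain containers; all names are placeholders.

    def handle_during_defrag(op, block, data, target_disk, ssd_cache, discard_set):
        # Write requests bypass the SSD caching device entirely.
        if op == "write":
            target_disk[block] = data
            return None
        # Read requests are served, and any cached copy is marked for discard,
        # to be dropped later (for example, once defragmentation completes).
        value = target_disk.get(block)
        if block in ssd_cache:
            discard_set.add(block)
        return value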
20160062896MEMORY SYSTEM - A memory system includes: a memory controller which executes a data access process with an external device using an access unit; a first memory which is connected to the memory controller via a bus and has a first latency; and a second memory which is connected to the memory controller via a bus and has a second latency longer than the first latency. The access unit comprises a first access size assigned to the first memory and a second access size assigned to the second memory. The memory controller executes a data access process with the first memory using the first access size, and executes a data access process with the second memory using the second access size.2016-03-03
20160062897STORAGE CACHING - The present disclosure provides a method for processing a storage operation in a system with an added level of storage caching. The method includes receiving, in a storage cache, a read request from a host processor that identifies requested data and determining whether the requested data is in a cache memory of the storage cache. If the requested data is in the cache memory of the storage cache, the requested data may be obtained from the storage cache and sent to the host processor. If the requested data is not in the cache memory of the storage cache, the read request may be sent to a host bus adapter operatively coupled to a storage system. The storage cache is transparent to the host processor and the host bus adapter.2016-03-03
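A minimal sketch of the transparent read path this abstract describes, assuming the cache is a dictionary and hba_read stands in for the host bus adapter; the cache-populate step on a miss is an assumption, not part of the abstract.

    def handle_read(block, cache, hba_read):
        if block in cache:
            return cache[block]          # hit: serve the data from the storage cache
        data = hba_read(block)           # miss: forward the read to the host bus adapter
        cache[block] = data              # assumed policy: populate the cache for later reads
        return data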
20160062898Method for dynamically adjusting a cache buffer of a solid state drive - A method for dynamically adjusting a cache buffer of a solid state drive includes receiving data, determining whether the data are continuous according to the logical allocation addresses of the data, increasing the memory size of the cache buffer, searching the cache buffer for data that is the same as at least one portion of the received data, modifying and merging the at least one portion of the data with the same data already temporarily stored in the cache buffer, and temporarily storing the data in the cache buffer.2016-03-03
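The buffer-adjustment flow can be pictured with the sketch below; the continuity test (next sequential logical address) and the growth amount are assumptions made only for illustration.

    class CacheBuffer:
        def __init__(self, capacity):
            self.capacity = capacity     # current buffer size in bytes
            self.entries = {}            # logical allocation address -> cached data
            self.last_lba = None

        def write(self, lba, data):
            # Data is treated as continuous when its LBA follows the previous one.
            if self.last_lba is not None and lba == self.last_lba + 1:
                self.capacity += len(data)   # increase the memory size of the buffer
            # If the same LBA is already buffered, this modifies and merges with the
            # stored copy; otherwise the data is simply stored temporarily.
            self.entries[lba] = data
            self.last_lba = lba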
20160062899THREAD-BASED CACHE CONTENT SAVING FOR TASK SWITCHING - Embodiments relate to thread-based cache content savings for task switching in a computer processor. An aspect includes determining a cache entry in a cache of the computer processor that is owned by the first thread, wherein the determination is made based on a hardware thread identifier (ID) of the first thread matching a hardware thread ID in the cache entry. Another aspect includes determining whether the determined cache entry is eligible for prefetching. Yet another aspect includes, based on determining that the determined cache entry is eligible for prefetching, setting a marker in the cache entry to active.2016-03-03
20160062900CACHE MANAGEMENT FOR MAP-REDUCE APPLICATIONS - A computer manages a cache for a MapReduce application based on a distributed file system that includes one or more storage medium by receiving a map request and receiving parameters for processing the map request. The parameters include a total data size to be processed, a size of each data record, and a number of map requests executing simultaneously. The computer determines a cache size for processing the map request, wherein the cache size is determined based on the received parameters for processing the map request and a machine learning model for a map request cache size and reads, based on the determined cache size, data from the one or more storage medium of the distributed file system into the cache. The computer processes the map request and writes an intermediate result data of the map request processing into the cache, based on the determined cache size.2016-03-03
20160062901POPULATING ITEMS IN WORKLISTS USING EXISTING CACHE - Methods, systems, and computer-readable storage media for providing a worklist of a user with at least one item. In some implementations, actions include determining one or more timestamps, each timestamp indicating a time, at which an item cache was synchronized for a respective provider of one or more providers, transmitting one or more requests to one or more respective providers of the one or more providers, the one or more requests each including the one or more timestamps and indicating a user, receiving one or more responses, each response including a sub-set of items, each item in the sub-set of items being included in the sub-set of items based on the one or more timestamps, populating the worklist of the user with one or more items in the sub-set of items reusing a previously synchronized worklist database cache, and providing the worklist for display to the user on a display.2016-03-03
20160062902MEMORY ACCESS PROCESSING METHOD AND INFORMATION PROCESSING DEVICE - A memory access processing method includes storing, in a cache memory, a plurality of pages stored in a main memory; storing the plurality of pages in a buffer memory, each of the plurality of pages being associated with an identifier indicating whether that page is a zero page to be zero-cleared; allocating a page to be set as a zero page when a page fault occurs during execution of an access to the cache memory and execution of a process is stopped; updating the identifier corresponding to the allocated page to an identifier indicating that the allocated page is a zero page; resuming the execution of the process; controlling an access to the cache memory based on the identifier for each of the plurality of pages; and executing initialization of a page that corresponds to the allocated page and is included in the main memory.2016-03-03
20160062903METHOD AND SYSTEM FOR CACHE MANAGEMENT - Machine logic (for example, software) for cache management. The cache management method includes the following operations: determining, in response to a cache entry being created, a category for the cache entry; determining a predicted time point of an invalidation event associated with the category, wherein occurrence of the invalidation event will cause invalidation of cache entries of the category; and setting a valid period of the cache entry based on the predicted time point.2016-03-03
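A small sketch of the validity-period rule, assuming categories map to a table of predicted event times; the category names and the default period are illustrative placeholders.

    import time

    predicted_invalidation = {           # category -> predicted invalidation time (epoch seconds)
        "price": time.time() + 300,
        "inventory": time.time() + 60,
    }

    def create_cache_entry(cache, key, value, category):
        now = time.time()
        event_at = predicted_invalidation.get(category, now + 3600)
        valid_period = max(0.0, event_at - now)   # valid period runs up to the predicted event
        cache[key] = (value, now + valid_period)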
20160062904ALLOCATION ENFORCEMENT IN A MULTI-TENANT CACHE MECHANISM - Cache optimization. Cache access rates for tenants sharing the same cache are monitored to determine an expected cache usage. Factors related to cache efficiency or performance dictate occupancy constraints. A request to increase the cache space allocated to a first tenant is received. If there is a second cache tenant whose cache size can be reduced by the requested amount without violating its occupancy constraints, its cache is decreased by the requested amount and the freed space is allocated to satisfy the request. Otherwise, the first cache size is increased by allocating the amount of data storage space to the first cache tenant without deallocating the same amount of data storage space from another cache tenant among the plurality of cache tenants.2016-03-03
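The reallocation decision might look like the sketch below, where the occupancy constraint is modeled as a per-tenant minimum size derived from expected usage; that modeling choice is an assumption.

    def grow_tenant(allocations, min_required, tenant, delta):
        """allocations and min_required map tenant -> bytes."""
        for other, size in allocations.items():
            if other == tenant:
                continue
            if size - delta >= min_required[other]:   # shrinking stays within constraints
                allocations[other] = size - delta
                allocations[tenant] += delta
                return "reallocated from " + other
        allocations[tenant] += delta                  # otherwise grow the total cache space
        return "grew total cache"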
20160062905HIERARCHICAL CACHE STRUCTURE AND HANDLING THEREOF - A hierarchical cache structure includes at least one real indexed higher level cache with a directory and a unified cache array for data and instructions, and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache of a split real indexed second level cache includes a directory and a corresponding cache array connected to the real indexed third level cache. A data cache of the split second level cache includes a directory connected to the third level cache. An instruction cache of a split virtually indexed first level cache is connected to the second level instruction cache. A cache array of a data cache of the first level cache is connected to the cache array of the second level instruction cache and to the cache array of the third level cache. A directory of the first level data cache is connected to the second level instruction cache directory and to the third level cache directory.2016-03-03
20160062906METHOD AND APPARATUS FOR ACCESSING DATA STORED IN A STORAGE SYSTEM THAT INCLUDES BOTH A FINAL LEVEL OF CACHE AND A MAIN MEMORY - A data access system including a storage device and a processor, which includes one or more levels of cache (LOC). In response to data required by the processor not being within the LOC, the processor generates a physical address to be accessed within the storage device in order to retrieve the data. The storage device includes a main memory and a cache module, which is configured as a final level of cache (FLOC) to be accessed by the processor prior to accessing the main memory. The cache module includes a controller that, in response to the data not being cached within the LOC, converts the physical address into a virtual address within the FLOC. The FLOC uses the virtual address to determine whether the data is within the FLOC. If the data is not within the FLOC, the cache module or the processor retrieves the data from the main memory.2016-03-03
20160062907MULTI-PHASE PROGRAMMING SCHEMES FOR NONVOLATILE MEMORIES - A method for data storage includes defining an end-to-end mapping between data bits to be stored in a memory device that includes multiple memory cells and predefined programming levels. The data bits are mapped into mapped bits, so that the number of the mapped bits is smaller than the number of the data bits. The data bits are stored in the memory device by programming the mapped bits in the memory cells using a programming scheme that guarantees the end-to-end mapping. After storing the data bits, the data bits are read from the memory device in accordance with the end-to-end mapping.2016-03-03
20160062908Methods for Maintaining a Storage Mapping Table and Apparatuses using the Same - A method for maintaining a storage mapping table. An access interface is directed to read a group mapping table from the last page of a block of a storage unit. The block is allocated to store data of a plurality of groups, each group stores information indicating which location in the storage unit stores data of an LBA (Logical Block Address) range, and the group mapping table stores information indicating which unit of the block stores the latest data of each group. The group mapping table is stored in a DRAM (Dynamic Random Access Memory). The access interface is directed to read data of each group from the storage unit according to the group mapping table. The data of each group is stored in a specified location of a storage mapping table of the DRAM.2016-03-03
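The rebuild described above can be pictured with dictionaries standing in for flash pages; read_page, pages_per_block and group_count are placeholders for the real access interface.

    def rebuild_storage_mapping(read_page, pages_per_block, group_count):
        # The last page of the block holds the group mapping table:
        # group number -> page that stores the latest data of that group.
        group_table = read_page(pages_per_block - 1)
        storage_mapping = {}                 # LBA -> physical location, kept in DRAM
        for group in range(group_count):
            group_data = read_page(group_table[group])
            storage_mapping.update(group_data)
        return storage_mapping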
20160062909SYSTEMS AND METHODS FOR ACCESSING MEMORY - Methods of mapping memory cells to applications, methods of accessing memory cells, systems, and memory controllers are described. In some embodiments, a memory system including multiple physical channels is mapped into regions, such that any region spans each physical channel of the memory system. Applications are allocated memory in the regions, and performance and power requirements of the applications are associated with the regions. Additional methods and systems are also described.2016-03-03
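One concrete way a region can span every physical channel is plain line interleaving, sketched below; the channel count and interleave granularity are assumptions rather than values from the application.

    CHANNELS = 4
    LINE = 64                                # bytes placed on one channel before rotating

    def locate(region_base, offset):
        line_no = offset // LINE
        channel = line_no % CHANNELS         # consecutive lines rotate across channels
        channel_offset = (line_no // CHANNELS) * LINE + offset % LINE
        return channel, region_base + channel_offset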
20160062910SELECTING HASH VALUES BASED ON MATRIX RANK - One embodiment of the present invention includes a hash selector that facilitates performing effective hashing operations. In operation, the hash selector creates a transformation matrix that reflects specific optimization criteria. For each hash value, the hash selector generates a potential hash value and then computes the rank of a submatrix included in the transformation matrix. Based on this rank in conjunction with the optimization criteria, the hash selector either re-generates the potential hash value or accepts the potential hash value. Advantageously, the optimization criteria may be tailored to create desired correlations between input patterns and the results of performing hashing operations based on the transformation matrix. Notably, the hash selector may be configured to efficiently and reliably incrementally generate a transformation matrix that, when applied to certain strides of memory addresses, produces a more uniform distribution of accesses across cache lines than previous approaches to memory addressing.2016-03-03
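The rank test at the heart of this selection loop can be sketched over GF(2) as below; the 32-bit row width, random candidate generation, and full-rank acceptance rule are illustrative assumptions rather than the claimed criteria.

    import random

    def gf2_rank(rows):
        """Rank of a binary matrix whose rows are given as integer bitmasks."""
        rows = list(rows)
        rank = 0
        while rows:
            pivot = rows.pop()
            if pivot == 0:
                continue
            rank += 1
            msb = pivot.bit_length() - 1
            rows = [r ^ pivot if (r >> msb) & 1 else r for r in rows]
        return rank

    def select_rows(n_rows, width=32, seed=0):
        """Accept each candidate row only if it keeps the matrix at full rank."""
        rng = random.Random(seed)
        chosen = []
        while len(chosen) < n_rows:          # requires n_rows <= width to terminate
            candidate = rng.getrandbits(width)
            if gf2_rank(chosen + [candidate]) == len(chosen) + 1:
                chosen.append(candidate)     # candidate is linearly independent: accept
            # otherwise the candidate is re-generated on the next iteration
        return chosen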
20160062911ROUTING DIRECT MEMORY ACCESS REQUESTS IN A VIRTUALIZED COMPUTING ENVIRONMENT - A device may receive a direct memory access request that identifies a virtual address. The device may determine whether the virtual address is within a particular range of virtual addresses. The device may selectively perform a first action or a second action based on determining whether the virtual address is within the particular range of virtual addresses. The first action may include causing a first address translation algorithm to be performed to translate the virtual address to a physical address associated with a memory device when the virtual address is not within the particular range of virtual addresses. The second action may include causing a second address translation algorithm to be performed to translate the virtual address to the physical address when the virtual address is within the particular range of virtual addresses. The second address translation algorithm may be different from the first address translation algorithm.2016-03-03
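The range-based dispatch reads naturally as a sketch; the address range, the identity mapping, and the page-table walk below are placeholders for the two unspecified translation algorithms.

    SPECIAL_RANGE = range(0x8000_0000, 0x9000_0000)
    PAGE = 4096

    def translate_first(vaddr):
        return vaddr                         # placeholder for the first translation algorithm

    def translate_second(vaddr, page_table):
        return page_table[vaddr // PAGE] * PAGE + vaddr % PAGE   # placeholder second algorithm

    def handle_dma_request(vaddr, page_table):
        if vaddr in SPECIAL_RANGE:
            return translate_second(vaddr, page_table)
        return translate_first(vaddr)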
20160062912DATA INPUT/OUTPUT (I/O) HANDLING FOR COMPUTER NETWORK COMMUNICATIONS LINKS - Systems and methods for performing data input/output (I/O) operations using a computer network communications link are described. A method may include assigning a block of virtual addresses for usage with at least one computer network communications link. The method may also include registering the entire block of virtual addresses prior to an operating system partition performing I/O operations using the at least one computer network communications link, wherein registering comprises setting a plurality of virtual page frame numbers of the block of virtual addresses to point to distinct pages of physical memory. In some embodiments, one or more I/O operations may be performed using the at least one computer network communications link and the registered block of virtual addresses.2016-03-03
20160062913SEMICONDUCTOR DEVICE, SEMICONDUCTOR SYSTEM AND SYSTEM ON CHIP - At least one example embodiment discloses a semiconductor device including a direct memory access (DMA) system configured to directly access a memory to write first data to an address of the memory, wherein the DMA system includes an initializer configured to set a data transfer parameter for writing the first data to the memory during a flushing period of second data from a cache to the address by a processor, a creator configured to create the first data based on the set data transfer parameter, and a transferer configured to write the first data to the address of the memory after the flushing period based on the data transfer parameter.2016-03-03
20160062914ELECTRONIC SYSTEM WITH VERSION CONTROL MECHANISM AND METHOD OF OPERATION THEREOF - An electronic system includes: a storage device configured to store a descriptor, including a key and a value, having multiple versions linked on the storage device; a storage interface, coupled to the storage device, configured to provide an entry having a location; and retrieve the descriptor, including the key and the value, based on the entry having the location for selecting one of the versions of the descriptor.2016-03-03
20160062915STORAGE CONTROL DEVICE AND STORAGE CONTROL METHOD - An apparatus includes a first cache memory, a second cache memory, and a processor coupled to the first cache memory and the second cache memory, and configured to store data in the second cache memory, the data being deleted from the first cache memory, store first data stored in a first address of the storage device, in the second cache memory, in case where the first address is included in first management information and is not included in second management information, according to a request for access to the first address of the storage device, the first management information including an address in the storage device of specific data stored in the storage device, and the second management information including an address in the storage device of data stored in both of the second cache memory and the storage device, and register the first address in the second management information.2016-03-03
20160062916CIRCUIT-BASED APPARATUSES AND METHODS WITH PROBABILISTIC CACHE EVICTION OR REPLACEMENT - Selection logic can be used to select between a set of cache lines that are candidates for eviction from a cache. For each cache line in the set of cache lines, a relative probability that the cache line will result in a hit can be calculated based upon: past reuse behavior for the cache line; and hit rates for reuse distances. Based upon the relative probabilities for the set of cache lines, a particular cache line can be selected from the set of cache lines for eviction.2016-03-03
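The victim choice can be sketched as below; how past reuse behavior and per-distance hit rates are combined into a probability is an assumption made only for illustration.

    def pick_victim(candidates, hit_rate_by_distance):
        """candidates: list of dicts with keys 'line', 'reuse_count', 'reuse_distance'."""
        def hit_probability(c):
            base = hit_rate_by_distance.get(c["reuse_distance"], 0.0)
            weight = 1.0 - 1.0 / (1 + c["reuse_count"])   # more past reuse -> higher weight
            return base * weight
        return min(candidates, key=hit_probability)["line"]   # evict the least likely to hit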
20160062917Control for Authenticated Accesses to a Memory Device - The embodiments of the invention describe settings, commands, command signals, flags, attributes, parameters or the like for signed access prior to allowing data to be written to (e.g., a write access), read from (e.g., a read access) or erased from (e.g., an erase access) protected areas of a memory device (e.g., a region, logical unit, or a portion of memory in the storage module).2016-03-03
20160062918Receipt, Data Reduction, and Storage of Encrypted Data - Embodiments of the invention relate to processing streams of encrypted data received from multiple users. As the streams are processed, smaller partitions in the form of data chunks are created and subjected to individual decryption. The data chunks are placed into sub-streams based on a master key associated with their owning entity. Prior to processing, the data chunks in each stream are decrypted, and advanced functions, including but not limited to de-duplication and compression, are individually applied to the data chunks, followed by aggregation of processed data chunks into data units and encryption of the individual data units using a master key from the data's owning entity. Individual encryption units are created by encrypting the data unit(s) with an encryption key, thereby limiting access to the data unit. Confidentiality of data is maintained, and the ability of storage systems to perform data reduction functions is supported.2016-03-03
20160062919DOUBLE-MIX FEISTEL NETWORK FOR KEY GENERATION OR ENCRYPTION - A method of providing security in a computer system includes dividing a block of data into initial left and right halves, and calculating updated left and right halves for each of a plurality of rounds. Calculating the updated left half includes applying a first function to an input left half to produce a first result, and mixing the first result with an input right half. Calculating the updated right half includes applying a second function to the input left half to produce a second result, and mixing the second result with a round key. The input left and right halves are the initial left and right halves for the first round, and thereafter the updated left and right halves for the immediately preceding round. The method may also include producing a block of ciphertext with a key composed of the updated left and right halves for the last round.2016-03-03
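The round structure described here (new left half mixed from F1 of the old left and the old right; new right half mixed from F2 of the old left and a round key) can be sketched with toy 32-bit round functions; F1, F2 and their constants are placeholders, not the functions claimed in the application.

    MASK = 0xFFFFFFFF

    def rotl(x, n):
        x &= MASK
        return ((x << n) | (x >> (32 - n))) & MASK

    def f1(x):
        return rotl(x, 7) ^ (x * 0x9E3779B1 & MASK)    # placeholder first round function

    def f2(x):
        return rotl(x, 13) ^ (x * 0x85EBCA77 & MASK)   # placeholder second round function

    def double_mix_feistel(left, right, round_keys):
        for k in round_keys:
            # Both updates read the *input* left half; XOR serves as the "mix".
            left, right = (f1(left) ^ right) & MASK, (f2(left) ^ k) & MASK
        return left, right     # the concatenated halves form the derived key or ciphertext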
20160062920ADDRESS-DEPENDENT KEY GENERATION WITH A SUBSTITUTION-PERMUTATION NETWORK - A method of providing security in a computer system includes producing an initial block of data from a respective address of a memory location. An updated block of data may be calculated for each round of a plurality of rounds in a substitution-permutation network. This may include mixing an input block through a substitution layer including a plurality of substitution boxes, and a linear transformation layer including a permutation, to produce the updated block, before or after which respectively the input block or updated block may be mixed with a round key. The input block may be the initial block for the first round, and the updated block for an immediately preceding round for each round thereafter. A block of ciphertext may be produced with a key composed of the updated block for the last round, and the block of ciphertext may be written at the memory location.2016-03-03
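A compact sketch of the per-address substitution-permutation rounds, using a 16-bit block, a PRESENT-style 4-bit S-box, and a simple bit permutation; all three, and the key-mixing order, are assumptions made for illustration.

    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    def substitute(block16):
        out = 0
        for i in range(4):                       # a 16-bit block holds four nibbles
            out |= SBOX[(block16 >> (4 * i)) & 0xF] << (4 * i)
        return out

    def permute(block16):
        out = 0
        for i in range(16):                      # bit i moves to position (5 * i) mod 16
            out |= ((block16 >> i) & 1) << ((5 * i) % 16)
        return out

    def key_from_address(address, round_keys):
        block = address & 0xFFFF                 # initial block produced from the address
        for k in round_keys:
            block = permute(substitute(block)) ^ (k & 0xFFFF)
        return block                             # final block serves as the per-address key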
20160062921APPLICATION PROCESSOR AND DATA PROCESSING SYSTEM INCLUDING THE SAME - A data processing system includes an application processor, a memory device, and a channel connecting the application processor and the memory device. The application processor encrypts first data using a first encryption key and a first initialization vector in response to a write command, and transmits first encrypted data to the memory device through the channel. The memory device decrypts the first encrypted data using a second encryption key and a second initialization vector, and stores first decrypted data in a memory core. The second encryption key and the second initialization vector are stored in the memory device. The first encryption key is the same as the second encryption key, and the first initialization vector is the same as the second initialization vector.2016-03-03
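Because both sides hold the same key and the same initialization vector, encryption on the application-processor side and decryption in the memory device cancel out. The sketch below uses a SHA-256 counter-mode keystream purely as a stand-in for whatever cipher the application actually uses.

    import hashlib

    def keystream(key, iv, length):
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def xcrypt(data, key, iv):
        ks = keystream(key, iv, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))   # the same call encrypts and decrypts

    # The host encrypts with its copy of (key, iv); the device decrypts with its own copy.
    key, iv = b"shared-key", b"shared-iv"
    assert xcrypt(xcrypt(b"write data", key, iv), key, iv) == b"write data"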
20160062922MEMORY SYSTEM CAPABLE OF WIRELESS COMMUNICATION AND METHOD OF CONTROLLING MEMORY SYSTEM - According to one embodiment, a memory controller allows access to a first non-volatile memory from a host device when a wireless communication unit is communicable or communicating with any one of wireless communication devices, and denies access to the first non-volatile memory from the host device when the wireless communication unit is not communicable or communicating with any one of the wireless communication devices. The memory controller does not allow the host device to access information in the first non-volatile memory after the access field specification information is updated.2016-03-03
20160062923UNIVERSAL INPUT DEVICE - Embodiments of the invention are directed to input devices configured for use with computing devices. The present invention relates to methods and devices for establishing, maintaining, and managing wireless connections between an input device and one or more host computing devices running one of a plurality of operating systems. The input device may be configured to analyze data received from the host computing devices to automatically or manually determine an operating system running on the host computing devices and configure the input device for proper functionality with the determined operating system.2016-03-03
20160062924SIMULTANEOUS VIDEO AND BUS PROTOCOLS OVER SINGLE CABLE - Methods and systems are disclosed for transporting simultaneous video and bus protocols over a single cable. At least some of the illustrative embodiments are systems including a main switch configured to operate in an enhanced mode where the main switch is configured to transfer data from a first data source and a second data source to a cable, and to operate in a default mode where the main switch is configured to transfer data from the second data source to the cable without transferring data from the first data source; a multipurpose switch configured to operate in a handshake mode where the multipurpose switch transports handshake data between the cable and a digital logic, and to operate in a data mode where the multipurpose switch transports bus data between the cable and the second data source; and the digital logic programmed to enable modes of operation of the multipurpose switch and the main switch.2016-03-03
20160062925METHOD AND SYSTEM FOR MANAGING STORAGE DEVICE OPERATIONS BY A HOST DEVICE - The various embodiments herein provide a method and system for managing storage device operations by a host device. The method comprises receiving, by a device controller, at least one operation command with a high priority from a host device and information for pausing one or more logical units; triggering a pause command to pause execution of the one or more logical units if the priority of the received operation command is high; and triggering a resume command to resume the execution of the one or more paused logical units once the operation command with the higher priority has been executed. In this way, data traffic can be reduced for the high-priority operation so that it executes faster.2016-03-03
20160062926STORAGE CONTROL DEVICES AND METHOD THEREFOR TO INVOKE ADDRESS THEREOF - A storage control device comprises storage control and memory modules coupled with each other. The memory module keeps a first Serial Attached SCSI (SAS) address. In one embodiment the memory module further keeps a firmware which the storage control module executes to invoke the first SAS address to facilitate data communication. To invoke the first SAS address, in one embodiment the storage control module fetches a bit string from the memory module. The bit string is written into a data structure that is returned to the storage control module when it is determined that the bit string is a SAS address. In one embodiment the memory module further keeps a configuration file which the storage control module invokes to operate. The configuration file comprises a second SAS address, which is not invoked by the storage control module unless the bit string is not a SAS address.2016-03-03