08th week of 2014 patent application highlights part 54 |
Patent application number | Title | Published |
20140052925 | SYSTEM AND METHOD FOR WRITE-LIFE EXTENSION OF STORAGE RESOURCES - An information handling system includes a processor and a storage resource communicatively coupled to the processor. The processor is configured to determine if available overprovisioned storage of the storage resource is less than a threshold overprovisioned storage capacity, establish a new stated capacity for the storage resource in response to a determination that the available overprovisioned storage of the storage resource is less than the threshold overprovisioned storage capacity, and communicate to the processor an indication of the new stated capacity. | 2014-02-20 |
20140052926 | EFFICIENT MANAGEMENT OF COMPUTER MEMORY USING MEMORY PAGE ASSOCIATIONS AND MEMORY - A system for managing memory operations. The system includes a processor executing instructions that cause the processor to read a first memory page from a storage device responsive to a request for the first memory page and store the first memory page to system memory. Based on a pre-established set of association rules, one or more associated memory pages are identified that are related to the first memory page. The associated memory pages are read from the storage device and compressed to generate corresponding compressed associated memory pages. The compressed associated memory pages are also stored to the system memory to enable faster access to the associated memory pages during processing of the first memory page. The compressed associated memory pages are individually decompressed in response to the particular page being required for use during processing. | 2014-02-20 |
20140052927 | DATA CACHE PREFETCH HINTS - The present invention provides a method and apparatus for using prefetch hints. One embodiment of the method includes bypassing, at a first prefetcher associated with a first cache, issuing requests to prefetch data from a number of memory addresses in a sequence of memory addresses determined by the first prefetcher. The number is indicated in a request received from a second prefetcher associated with a second cache. This embodiment of the method also includes issuing, from the first prefetcher, a request to prefetch data from a memory address subsequent to the bypassed memory addresses. | 2014-02-20 |
20140052928 | INFORMATION PROCESSING DEVICE AND METHOD - An information processing device detects a sequential access for reading first data by sequentially accessing consecutive areas or inconsecutive areas within a specified range of a first storage unit when the sequential access consecutively occurs by a specified number, calculates, based on a size of the first data, a size of second data read by a prefetch for prereading the data stored consecutively in the first storage unit and for storing the read data in a second storage unit, and performs the prefetch based on the calculated size of the second data. | 2014-02-20 |
20140052929 | PROGRAMMABLE RESOURCES TO TRACK MULTIPLE BUSES - A system and method for efficiently monitoring traces of multiple components in an embedded system. A system-on-a-chip (SOC) includes a trace unit for collecting and storing trace history, bus event statistics, or both. The SOC may transfer cache coherent messages across multiple buses between a shared memory and a cache coherent controller. The trace unit includes multiple bus event filters. Programmable configuration registers are used to assign the bus event filters to selected buses for monitoring associated bus traffic and determining whether qualified bus events occur. If so, the bus event filters increment an associated count for each of the qualified bus events. The values used for determining qualified bus events may be set by programmable configuration registers. | 2014-02-20 |
20140052930 | EFFICIENT TRACE CAPTURE BUFFER MANAGEMENT - A system and method for efficiently storing traces of multiple components in an embedded system. A system-on-a-chip (SOC) includes a trace unit for collecting and storing trace history, bus event statistics, or both. The SOC may transfer cache coherent messages across multiple buses between a shared memory and a cache coherent controller. The trace unit includes a trace buffer with multiple physical partitions assigned to subsets of the multiple buses. The number of partitions is less than the number of multiple buses. One or more trace instructions may cause a trace history, trace bus event statistics, local time stamps and a global time-base value to be stored in a physical partition within the trace buffer. | 2014-02-20 |
20140052931 | Data Type Dependent Memory Scrubbing - A method for controlling a memory scrubbing rate based on content of the status bit of a tag array of a cache memory. More specifically, the tag array of a cache memory is scrubbed at a smaller interval than the scrubbing rate of the storage arrays of the cache; this increased scrubbing rate reflects the importance of maintaining the integrity of tag data. Based on the content of the status bit of the tag array which indicates modified, the corresponding data entry in the cache storage array is scrubbed accordingly: if the modified bit is set, then the entry in the storage array is scrubbed after processing the tag entry; if the modified bit is not set, then the storage array is scrubbed at a predetermined scrubbing interval. | 2014-02-20 |
20140052932 | METHOD FOR REDUCING THE OVERHEAD ASSOCIATED WITH A VIRTUAL MACHINE EXIT WHEN HANDLING INSTRUCTIONS RELATED TO DESCRIPTOR TABLES - A computerized method for efficient handling of a privileged instruction executed by a virtual machine (VM). The method comprises identifying when the privileged instruction causes a VM executed on a computing hardware to perform a VM exit; replacing a first virtual-to-physical address mapping to a second virtual-to-physical address mapping respective of a virtual pointer associated with the privileged instruction; and invalidating at least a cache entry in a cache memory allocated to the VM, thereby causing a new translation for the virtual pointer to the second virtual-to-physical address, wherein the second virtual-to-physical address provides a pointer to a physical address in a physical memory in the computing hardware allocated to the VM. | 2014-02-20 |
20140052933 | WRITE TRANSACTION MANAGEMENT WITHIN A MEMORY INTERCONNECT - A memory interconnect between transaction masters and a shared memory. A first snoop request is sent to the other transaction masters to trigger them to invalidate any local copy of the data they may hold and to return any cached line of data corresponding to the write line of data that is dirty. A first write transaction is sent to the shared memory. If any cached line of data is received from the other transaction masters, that data is used to form a second write transaction, which is sent to the shared memory and writes into the shared memory the remaining portions of the cached line of data that were not written by the first write transaction. Serialisation circuitry stalls any further transaction requests to the write line of data until the write transactions have completed. | 2014-02-20 |
20140052934 | Memory with Alternative Command Interfaces - A memory device or module selects between alternative command ports. Memory systems with memory modules incorporating such memory devices support point-to-point connectivity and efficient interconnect usage for different numbers of modules. The memory devices and modules can be of programmable data widths. Devices on the same module can be configured to select different command ports to facilitate memory threading. Modules can likewise be configured to select different command ports for the same purpose. | 2014-02-20 |
20140052935 | SCALABLE MULTI-BANK MEMORY ARCHITECTURE - According to one general aspect, a method may include, in one embodiment, grouping a plurality of at least single-ported memory banks together to substantially act as a single at least dual-ported aggregated memory element. In various embodiments, the method may also include controlling read access to the memory banks such that a read operation may occur from any memory bank in which data is stored. In some embodiments, the method may include controlling write access to the memory banks such that a write operation may occur to any memory bank which is not being accessed by a read operation. | 2014-02-20 |
20140052936 | MEMORY QUEUE HANDLING TECHNIQUES FOR REDUCING IMPACT OF HIGH-LATENCY MEMORY OPERATIONS - Techniques for handling queuing of memory accesses prevent passing excessive requests that implicate a region of memory subject to a high latency memory operation, such as a memory refresh operation, memory scrubbing or an internal bus calibration event, to a re-order queue of a memory controller. The memory controller includes a queue for storing pending memory access requests, a re-order queue for receiving the requests, and a control logic implementing a queue controller that determines if there is a collision between a received request and an ongoing high-latency memory operation. If there is a collision, then transfer of the request to the re-order queue may be rejected outright, or a count of existing queued operations that collide with the high latency operation may be used to determine if queuing the new request will exceed a threshold number of such operations. | 2014-02-20 |
20140052937 | Dynamic QoS Upgrading - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 2014-02-20 |
20140052938 | Clumsy Flow Control Method and Apparatus for Improving Performance and Energy Efficiency in On-Chip Network - A method and apparatus for increasing performance and energy-efficiency in an on-chip network are provided. A credit-based flow control method may include generating, in a core, a memory access request, throttling an injection of the memory access request until credits become available, and injecting the memory access request into a memory controller (MC) via an on-chip network, when the credits become available. | 2014-02-20 |
20140052939 | INTEGRATED STORAGE PLATFORM SYSTEM AND METHOD THEREOF - The present invention discloses an integrated storage platform system and a method thereof. The system comprises at least one adaption module respectively connecting with at least one storage space and each performing a plurality of adaption settings corresponding to one storage space; a storage administration module connecting with the adaption modules and processing the files of the storage spaces; and an access interface connecting the storage administration module, operated by a user to access the storage space through the storage administration module and the adaption module, and presenting access results to the user. The present invention establishes different adaption modules to enable the user to link to and access different types of storage spaces. | 2014-02-20 |
20140052940 | FAST ANALOG MEMORY CELL READOUT USING MODIFIED BIT-LINE CHARGING CONFIGURATIONS - A method for data storage includes providing at least first and second readout schemes for reading storage values from a group of analog memory cells that are connected to respective bit lines. The first readout scheme reads the storage values using a first bit line charging configuration having a first sense time, and the second readout scheme reads the storage values using a second bit line charging configuration having a second sense time, shorter than the first sense time. A condition is evaluated with respect to a read operation that is to be performed over a group of the memory cells. One of the first and second readout schemes is selected responsively to the evaluated condition. The storage values are read from the group of the memory cells using the selected readout scheme. | 2014-02-20 |
20140052941 | CALCULATION PROCESSING DEVICE AND CONTROL METHOD FOR CALCULATION PROCESSING DEVICE - A device includes: a request-storage unit including entries configured to store requests and stopping issuance of a stored request when a flag is set based on an input configuration notification, in which the request-storage unit outputs a warning notification when the request stored in any of the entries has not been processed for more than a predetermined amount of time; a derived request-storage unit including derived entries configured to store derived requests derived from processing of requests stored in the request-storage unit; an arbitrating unit configured to arbitrate requests stored in the request-storage unit and derived requests stored in the derived request-storage unit, and to output the configuration notification based on the warning notification output from the request-storage unit; and a request-processing unit configured to process requests or derived requests arbitrated by the arbitrating unit, and to store derived requests, derived by processing requests, into the derived request-storage unit. | 2014-02-20 |
20140052942 | METHOD FOR CONTROLLING STORAGES AND STORAGE CONTROL APPARATUS - A method, executed by a computer, for controlling storages includes obtaining the time elapsed since data to be moved in a source storage (one of three or more storages that differ in performance in responding to an access request) was accessed in accordance with the access request, identifying, from the storages, a destination storage that meets a condition under which the data to be moved in the source storage is moved, based on the obtained elapsed time, by referring to a storage unit that stores the condition under which data is moved to each of the storages, and moving the data to be moved in the source storage to the identified destination storage. | 2014-02-20 |
20140052943 | PROVIDING EXTENDED MEMORY SEMANTICS WITH ATOMIC MEMORY OPERATIONS - A computer-implemented method and a corresponding computer system for emulation of Extended Memory Semantics (EMS) operations. The method and system include obtaining a set of computer instructions that include an EMS operation, converting the EMS operation into a corresponding atomic memory operation (AMO), and executing the AMO on at least one processor of a computer. | 2014-02-20 |
20140052944 | Method and Apparatus for Monitoring an In-memory Computer System - An in-memory computing system for conducting on-line transaction processing and on-line analytical processing includes system tables in main memory to store runtime information. A statistics server can access the runtime information to collect monitoring data and generate historical data and other system performance metrics. | 2014-02-20 |
20140052945 | OPTIMIZING STORAGE SYSTEM BEHAVIOR IN VIRTUALIZED CLOUD COMPUTING ENVIRONMENTS BY TAGGING INPUT/OUTPUT OPERATION DATA TO INDICATE STORAGE POLICY - A method, system and computer program product for optimizing storage system behavior in a cloud computing environment. An Input/Output (I/O) operation data is appended with a tag, where the tag indicates a class of data for the I/O operation data. Upon the storage controller reviewing the tag appended to the I/O operation data, the storage controller performs a table look-up for the storage policy associated with the determined class of data. The storage controller applies a map to determine a storage location for the I/O operation data in a drive device, where the map represents a logical volume which indicates a range of block data that is to be excluded from being stored on the drive device and a range of block data that is to be considered for being stored on the drive device. In this manner, granularity of storage policies is provided in a cloud computing environment. | 2014-02-20 |
20140052946 | TECHNIQUES FOR OPPORTUNISTIC DATA STORAGE - Techniques for opportunistic data storage are described. In one embodiment, for example, an apparatus may comprise a data storage device and a storage management module, and the storage management module may be operative to receive a request to store a set of data in the data storage device, the request indicating that the set of data is to be stored with opportunistic retention, the storage management module to select, based on allocation information, storage locations of the data storage device for opportunistic storage of the set of data and write the set of data to the selected storage locations. Other embodiments are described and claimed. | 2014-02-20 |
20140052947 | DATA STORAGE DEVICE AND METHOD OF CONTROLLING DATA STORAGE DEVICE - A data storage device includes a processor or hardware circuit. The processor or hardware circuit copies data stored in regions of a copy source volume to a copy destination volume. The processor or hardware circuit sets up in a memory a management table for the regions. The management table includes first information and second information. The first information indicates whether a bitmap has been set up. The bitmap represents a state of progress of the copy. The second information specifies a bit value to be used when setting up the bitmap. The processor or hardware circuit sets up in the memory the bitmap corresponding to the regions on the basis of the second information. | 2014-02-20 |
20140052948 | METHOD AND DEVICE FOR IMPLEMENTING MEMORY MIGRATION - Disclosed are a method and device for implementing memory migration, which relate to computer technology and are invented for solving the problem that the existing operating process for memory migration is relatively complicated. The technical solution provided in the embodiments of the present application includes: the basic input-output system of a computer migrating the data in the memory to be migrated to a first unavailable memory in the operating system of the computer when migrating the memory to be migrated, and the basic input-output system storing the mapping relationship between the memory to be migrated and the physical address of the first unavailable memory. The embodiments of the present application can be applied to ordinary computer systems and computer systems under the NUMA architecture. | 2014-02-20 |
20140052949 | Method, Related Apparatus, and System for Virtual Network Migration - A method, related apparatus, and system for virtual network migration are provided. A method provided by an embodiment of the present disclosure includes: locating a source physical node in a regional physical network; obtaining information about a virtual element corresponding to each virtual network on the source physical node and state information of each physical node in the regional physical network; determining, according to the information about the virtual elements and the state information, a physical node that can execute virtual network migration in the regional physical network; reconstructing a mapping relationship between each virtual network and the regional physical network on the physical node; comparing the mapping relationships of each virtual network; selecting the mapping relationship with minimum migration consumption as the mapping relationship for executing migration; and sending, according to the mapping relationship for executing migration, a migration instruction to a physical node that needs to execute virtual network migration. | 2014-02-20 |
20140052950 | SYSTEM CONTROLLING APPARATUS, INFORMATION PROCESSING SYSTEM, AND CONTROLLING METHOD OF SYSTEM CONTROLLING APPARATUS - A system controlling apparatus that controls an information processing apparatus includes: an issuing unit that, in accessing a component provided in the information processing apparatus, issues to the component an access request including address information specifying an address in a register provided in the component and count information indicating a number of times the component is to be accessed; and an executing unit that accesses the component when a response indicating that the component permits the access request is received from the information processing apparatus. | 2014-02-20 |
20140052951 | Method and Apparatus for Transferring Data from a First Domain to a Second Domain - Data is written from a first domain to a FIFO memory buffer in a second domain. The first domain uses a first clock signal, the second domain uses a second clock signal and the memory buffer uses the first clock signal that is delivered alongside the data. The data is read from the memory buffer using the second clock signal. A read pointer is adjusted and synchronised with the delivered first clock signal. A token is generated using the delivered first clock signal, based on the read pointer. The token represents a capacity of the memory buffer having been made available. The token is passed to the first domain and synchronised with the first clock signal. The writing of data to the memory buffer is controlled based on a comparison between the synchronised token and a previously received token. | 2014-02-20 |
20140052952 | MANAGING DEREFERENCED CHUNKS IN A DEDUPLICATION SYSTEM - A chunk index has information on chunks in a storage space referenced in objects in the storage space. The chunk index includes a reference count for each chunk indicating a number of objects in which the chunk is referenced and a reference measurement representing a level of data object references to the chunk. One chunk is selected to remove from the storage space based on a criteria applied to the reference measurements of chunks having reference counts indicating that the chunks are not referenced in one object in the storage space. | 2014-02-20 |
20140052953 | MASS STORAGE SYSTEM AND METHODS OF CONTROLLING RESOURCES THEREOF - A storage system and a method for managing a memory capable of storing metadata related to logical volume sets, are disclosed. A memory quota is assigned to a metadata related to a logical volume set. The size of a memory currently consumed by the metadata is monitored. Upon exceeding a threshold by the size of the monitored memory, at least one restraining action related to memory consumption by the metadata is applied. | 2014-02-20 |
20140052954 | SYSTEM TRANSLATION LOOK-ASIDE BUFFER WITH REQUEST-BASED ALLOCATION AND PREFETCHING - A system TLB accepts translation prefetch requests from initiators. Misses generate external translation requests to a walker port. Attributes of the request such as ID, address, and class, as well as the state of the TLB affect the allocation policy of translations within multiple levels of translation tables. Translation tables are implemented with SRAM, and organized in groups. | 2014-02-20 |
20140052955 | DMA ENGINE WITH STLB PREFETCH CAPABILITIES AND TETHERED PREFETCHING - A system with a prefetch address generator coupled to a system translation look-aside buffer that comprises a translation cache. Prefetch requests are sent for page address translations for predicted future normal requests. Prefetch requests are filtered to only be issued for address translations that are unlikely to be in the translation cache. Pending prefetch requests are limited to a configurable or programmable number. Such a system is simulated from a hardware description language representation. | 2014-02-20 |
20140052956 | STLB PREFETCHING FOR A MULTI-DIMENSION ENGINE - A multi-dimension engine, connected to a system TLB, generates sequences of addresses to request page address translation prefetch requests in advance of predictable accesses to elements within data arrays. Prefetch requests are filtered to avoid redundant requests of translations to the same page. Prefetch requests run ahead of data accesses but are tethered to within a reasonable range. The number of pending prefetches are limited. A system TLB stores a number of translations, the number being relative to the dimensions of the range of elements accessed from within the data array. | 2014-02-20 |
20140052957 | TRANSLATION TABLE AND METHOD FOR COMPRESSED DATA - A translation table has entries that each include a share bit and a delta bit, with pointers that point to a memory block that includes reuse bits. When two translation table entries reference identical fragments in a memory block, one of the translation table entries is changed to refer to the same memory block referenced in the other translation table entry, which frees up a memory block. The share bit is set to indicate a translation table entry is sharing its memory block with another translation table entry. In addition, a translation table entry may include a private delta in the form of a pointer that references a memory fragment in the memory block that is not shared with other translation table entries. When a translation table has a private delta, its delta bit is set. | 2014-02-20 |
20140052958 | TRANSLATION TABLE AND METHOD FOR COMPRESSED DATA - A translation table has entries that each include a share bit and a delta bit, with pointers that point to a memory block that includes reuse bits. When two translation table entries reference identical fragments in a memory block, one of the translation table entries is changed to refer to the same memory block referenced in the other translation table entry, which frees up a memory block. The share bit is set to indicate a translation table entry is sharing its memory block with another translation table entry. In addition, a translation table entry may include a private delta in the form of a pointer that references a memory fragment in the memory block that is not shared with other translation table entries. When a translation table has a private delta, its delta bit is set. | 2014-02-20 |
20140052959 | Experimental engineering optimization algorithm at point of performance - A method is provided for reducing the data set used in creating an optimization algorithm, thus permitting the use of microprocessors, which in turn permits embedding the optimization algorithm at the point of performance; a subset of data points in a performance window is used to derive a vector that is utilized to create an initial optimization algorithm. | 2014-02-20 |
20140052960 | APPARATUS AND METHOD FOR GENERATING VLIW, AND PROCESSOR AND METHOD FOR PROCESSING VLIW - An apparatus and method for generating a very long instruction word (VLIW) command that supports predicated execution, and a VLIW processor and method for processing a VLIW are provided herein. The VLIW command includes an instruction bundle formed of a plurality of instructions to be executed in parallel and a single value indicating predicated execution, and is generated using the apparatus and method for generating a VLIW command. The VLIW processor decodes the instruction bundle and executes the instructions, which are included in the decoded instruction bundle, in parallel, according to the value indicating predicated execution. | 2014-02-20 |
20140052961 | PARALLEL MEMORY SYSTEMS - The invention relates to a multi-core processor memory system, wherein the system comprises memory channels between the multi-core processor and the system memory, comprising at least as many memory channels as processor cores, with each memory channel dedicated to a processor core, and wherein the memory system dynamically dedicates memory blocks at run-time to the accessing core, the accessing core having dedicated access to the memory bank via its memory channel. | 2014-02-20 |
20140052962 | CUSTOM CHAINING STUBS FOR INSTRUCTION CODE TRANSLATION - A processing system includes a microprocessor, a hardware decoder arranged within the microprocessor, and a translator operatively coupled to the microprocessor. The hardware decoder is configured to decode instruction code non-native to the microprocessor for execution in the microprocessor. The translator is configured to form a translation of the instruction code in an instruction set native to the microprocessor and to connect a branch instruction in the translation to a chaining stub. The chaining stub is configured to selectively cause additional instruction code at a target address of the branch instruction to be received in the hardware decoder without causing the processing system to search for a translation of additional instruction code at the target address. | 2014-02-20 |
20140052963 | TECHNIQUE TO PERFORM THREE-SOURCE OPERATIONS - A technique to perform three-source instructions. At least one embodiment of the invention relates to converting a three-source instruction into at least two instructions identifying no more than two source values. | 2014-02-20 |
20140052964 | Programmable Logic Unit and Method for Translating and Processing Instructions Using Interpretation Registers - An architecture for microprocessors and the like in which instructions include a type identifier, which selects one of several interpretation registers. The interpretation registers hold information for interpreting the opcode of each instruction, so that a stream of compressed instructions (with type identifiers) can be translated into a stream of expanded instructions. Preferably the type identifiers also distinguish sequencer instructions from processing-element instructions, and can even distinguish among different types of sequencer instructions (as well as among different types of processing-element instructions). | 2014-02-20 |
20140052965 | DYNAMIC CPU GPU LOAD BALANCING USING POWER - Dynamic CPU GPU load balancing is described based on power. In one example, an instruction is received and power values are received for a central processing core (CPU) and a graphics processing core (GPU). The CPU or the GPU is selected based on the received power values and the instruction is sent to the selected core for processing. | 2014-02-20 |
20140052966 | MECHANISM FOR CONSISTENT CORE HANG DETECTION IN A PROCESSOR CORE - Mechanism for consistent core hang detection on a processor with multiple processor cores, each having one or more instruction execution pipelines. Each core may also include a hang detection unit with a counter unit that may generate a count value based on a clock source having a frequency that is independent of a frequency of a processor core clock. The hang detection unit may also include a detector logic unit that may determine whether a given instruction execution pipeline has ceased processing a given instruction based upon a state of the processor core and whether or not the given instruction has completed execution prior to the count value exceeding a predetermined value. | 2014-02-20 |
20140052967 | METHOD AND APPARATUS FOR DYNAMIC DATA CONFIGURATION - A method and apparatus for configuring dynamic data are provided. A compilation apparatus may select a data format showing an optimum performance when a binary code is executed, from among a plurality of data formats supported by an execution apparatus used to execute a binary code, and may generate a binary code that uses the selected data format. The execution apparatus may execute a binary code provided by the compilation apparatus. | 2014-02-20 |
20140052968 | SUPER MULTIPLY ADD (SUPER MADD) INSTRUCTION - A method of processing an instruction is described that includes fetching and decoding the instruction. The instruction has separate destination address, first operand source address and second operand source address components. The first operand source address identifies a location of a first mask pattern in mask register space. The second operand source address identifies a location of a second mask pattern in the mask register space. The method further includes fetching the first mask pattern from the mask register space; fetching the second mask pattern from the mask register space; merging the first and second mask patterns into a merged mask pattern; and, storing the merged mask pattern at a storage location identified by the destination address. | 2014-02-20 |
20140052969 | SUPER MULTIPLY ADD (SUPER MADD) INSTRUCTIONS WITH THREE SCALAR TERMS - A processing core is described having execution unit logic circuitry having a first register to store a first vector input operand, a second register to store a second vector input operand and a third register to store a packed data structure containing scalar input operands a, b, c. The execution unit logic circuitry further includes a multiplier to perform the operation (a*(first vector input operand))+(b*(second vector operand))+c. | 2014-02-20 |
20140052970 | OPCODE COUNTING FOR PERFORMANCE MEASUREMENT - Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups. | 2014-02-20 |
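The opcode-counting scheme above groups executed instructions by their operating-code portion and accumulates a weight per group to produce metrics. A minimal sketch; the opcodes and weight values are invented for illustration:

```python
# Opcode-weighted performance counting: each executed instruction is assigned
# to a group by its opcode and given a weight; group totals serve as metrics.
from collections import defaultdict

WEIGHTS = {"load": 4, "store": 4, "add": 1, "mul": 3}  # hypothetical weights

def count_opcodes(trace):
    """Group a stream of executed opcodes and accumulate weighted counts."""
    groups = defaultdict(int)
    for opcode in trace:
        groups[opcode] += WEIGHTS.get(opcode, 1)  # unknown opcodes weigh 1
    return dict(groups)

metrics = count_opcodes(["load", "add", "add", "mul", "load"])
```

Groups can then be combined into sets (e.g., all memory opcodes) by summing the relevant entries.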
20140052971 | NATIVE CODE INSTRUCTION SELECTION - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting native code instructions. One of the methods includes receiving an initial machine language instruction for execution by a processor in a first execution mode; determining that a portion of the initial machine language instruction, when executed by the processor in a second execution mode, satisfies one or more risk criteria; generating one or more alternative machine language instructions to replace the initial machine language instruction for execution by the processor in the first execution mode, wherein the one or more alternative machine language instructions, when executed by the processor in the second execution mode, mitigate the one or more risk criteria; and providing the one or more alternative machine language instructions. | 2014-02-20 |
20140052972 | META PREDICTOR RESTORATION UPON DETECTING MISPREDICTION - Methods and apparatus for restoring a meta predictor system upon detecting a branch or binary misprediction, are disclosed. An example apparatus may include a base misprediction history register to store a set of misprediction history values each indicating whether a previous branch prediction taken by a previous branch instruction was predicted correctly or incorrectly. The apparatus may comprise a meta predictor to detect a branch misprediction of a current branch prediction based at least in part on an output of the base misprediction history register. The meta predictor may restore the base misprediction history register based on the detecting of the branch misprediction. Additional apparatus, systems, and methods are disclosed. | 2014-02-20 |
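The restore-on-misprediction behavior above can be modeled as a fixed-width shift register of prediction outcomes with a checkpoint taken before each speculative update. This is a toy model, not the hardware design; the width and the checkpoint mechanism are assumptions:

```python
# Toy model of a base misprediction history register: a shift register of
# correct(0)/incorrect(1) outcomes, checkpointed before each speculative
# update so the meta predictor can restore it on a detected misprediction.
class MispredictionHistory:
    def __init__(self, width=8):
        self.bits = [0] * width      # 1 = mispredicted, 0 = correct
        self._checkpoint = None

    def speculate(self, outcome_bit):
        """Shift in a speculative outcome, checkpointing the old state."""
        self._checkpoint = list(self.bits)
        self.bits = self.bits[1:] + [outcome_bit]

    def restore(self):
        """On a detected misprediction, roll back to the checkpoint."""
        if self._checkpoint is not None:
            self.bits = self._checkpoint

h = MispredictionHistory(width=4)
h.speculate(0)      # prediction later confirmed correct
h.speculate(1)      # speculative entry that turns out to be wrong
h.restore()         # meta predictor detects misprediction and restores
```

After the restore, the history register is back to its pre-speculation state, so later predictions are not trained on the bogus entry.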
20140052973 | Method And Apparatus For Providing Traffic Re-Aware Slot Placement - Various embodiments provide a method and apparatus for placing slots in an RE-aware manner based on redundancy across and within slot communication pairs. | 2014-02-20 |
20140052974 | HOT DESK SETUP USING GEOLOCATION - A geolocation system determines a location of a user mobile device. The geolocation system identifies a hot desk policy associated with the location of the user mobile device and automatically configures a workspace according to the hot desk policy associated with the location of the user mobile device. | 2014-02-20 |
20140052975 | PROTECTING SECURE SOFTWARE IN A MULTI-SECURITY-CPU SYSTEM - A computing system includes a first central processing unit (CPU) and a second CPU coupled with the first CPU and with a host processor. In response to a request by the host processor to boot the second CPU, the first CPU is configured to execute secure booting of the second CPU by decrypting encrypted code to generate decrypted code executable by the second CPU but that is inaccessible by the host processor. | 2014-02-20 |
20140052976 | WIRELESS ROUTER REMOTE FIRMWARE UPGRADE - A wireless router receives a firmware update from a remote server, and destructively overwrites router firmware in flash memory in a chunk-wise manner, and then writes a kernel memory before going live with upgraded firmware. Some routers authenticate the firmware image. In some cases, image chunks are re-ordered into an executable order after receipt and before finishing their final arrangement in the flash memory. In some routers, a maximum firmware image size is at least two chunk sizes smaller than the flash memory storage capacity. Some routers remap ROM to RAM memory. Some decompress data from flash into a RAM. Some save text file configuration settings in flash before rebooting. Some detect a user's inactive billing status and redirect a web browser to a billing activation page. | 2014-02-20 |
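Two details of the router-upgrade abstract are easy to make concrete: chunks received out of order are reassembled into executable order, and the maximum image size is at least two chunk sizes smaller than the flash capacity. A hedged sketch with toy sizes (the real chunk and flash sizes are unspecified):

```python
# Chunk-wise firmware assembly: chunks may arrive out of order, are re-ordered
# by index, and the assembled image must respect the "two chunks smaller than
# flash" rule from the abstract. All sizes here are toy values.
CHUNK_SIZE = 4          # bytes per chunk (assumed)
FLASH_CAPACITY = 32     # total flash bytes (assumed)
MAX_IMAGE_SIZE = FLASH_CAPACITY - 2 * CHUNK_SIZE  # rule from the abstract

def assemble_image(chunks):
    """Re-order received (index, data) chunks and enforce the size limit."""
    image = b"".join(data for _, data in sorted(chunks))
    if len(image) > MAX_IMAGE_SIZE:
        raise ValueError("firmware image exceeds the allowed maximum")
    return image

received = [(2, b"CCCC"), (0, b"AAAA"), (1, b"BBBB")]  # out-of-order arrival
image = assemble_image(received)
```

Keeping the image two chunks smaller than flash leaves headroom so the destructive chunk-wise overwrite never runs out of space mid-write.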
20140052977 | ELECTRONIC APPARATUS AND METHOD FOR BOOTING THE SAME - An electronic apparatus includes an operating system and a control circuit. The operating system is driven by an output command signal to execute one of plural different boot procedures. The control circuit detects an input command signal correspondingly generated when a power push-button is pressed down in a condition of the operating system being deactivated, and generates the output command signal according to the detected input command signal. Moreover, a method for booting an electronic apparatus is also disclosed herein. | 2014-02-20 |
20140052978 | COMPUTER SYSTEM AND ASSOCIATED STORAGE DEVICE MANAGEMENT METHOD - A storage device management method is provided. The method includes steps of: reading a mode selection parameter when a computer system is activated; the computer operating in a first operation mode or a second operation mode according to the mode selection parameter; determining whether the mode selection parameter is modified; and selectively changing an operation mode of the computer when the mode selection parameter is modified. | 2014-02-20 |
20140052979 | SYSTEM AND METHOD FOR INTERLEAVING INFORMATION INTO SLICES OF A DATA PACKET, DIFFERENTIALLY ENCRYPTING THE SLICES, AND OBFUSCATING INFORMATION IN THE DATA PACKET - Approaches for combining different information to be transmitted into different slices of a data packet and/or encrypting the slices using different cryptographic schemes for secure transmission of the information are disclosed. In some implementations, first information and second information may be received. A first data slice representing a portion of the first information may be generated based on a first cryptographic scheme. A second data slice representing a portion of the second information may be generated based on a second cryptographic scheme different than the first cryptographic scheme. A first header may be generated such that the first header may specify the first cryptographic scheme for the first data slice and the second cryptographic scheme for the second data slice. A first data packet may be generated such that the first data packet may include the first header, the first data slice, and the second data slice. | 2014-02-20 |
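The slice-interleaving structure above — two pieces of information, each sliced and protected under a different scheme, with a header naming the scheme per slice — can be sketched directly. The "schemes" below are toy placeholders (XOR and byte rotation), emphatically not real cryptography; they only illustrate the packet layout:

```python
# Packet with differentially "encrypted" slices and a header that records
# which scheme covers which slice. XOR and rotation are toy stand-ins.
def scheme_xor(data, key=0x5A):
    return bytes(b ^ key for b in data)          # self-inverse toy cipher

def scheme_rot(data, shift=3):
    return bytes((b + shift) % 256 for b in data)

def build_packet(info_a, info_b):
    slice_a = scheme_xor(info_a)                 # first scheme, first slice
    slice_b = scheme_rot(info_b)                 # second scheme, second slice
    header = {"slice_a": "xor", "slice_b": "rot"}  # scheme per slice
    return {"header": header, "slice_a": slice_a, "slice_b": slice_b}

packet = build_packet(b"hello", b"world")
recovered_a = scheme_xor(packet["slice_a"])      # XOR is its own inverse
```

A receiver reads the header first, then applies the named scheme to each slice independently, which is what lets different information travel in one packet under different protection.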
20140052980 | SECURE NETWORK SYSTEMS AND METHODS - Secure network systems and methods are provided. In an aspect of the invention, a secure network system is provided that includes a computing system that comprises a client system and a specialized NIC (network interface controller) system equipped with the capability to form a secure connection with an endpoint system and encrypt and decrypt communications between the client system and the network to which it is connected. This trusted network interface (TNI), which may present itself as a physical peripheral connected to a physical client system or a virtual peripheral connected to a virtual client system, takes the place of a client system's standard NIC, and the connection that it forms with the trusted network is negotiated and enforced externally to and independent of the client system. | 2014-02-20 |
20140052981 | CENTRALIZED KEY MANAGEMENT - A first network device is configured to receive a first request for a first secret key, generate the first secret key, and send the first secret key to a second network device and a first user device; and is also configured to receive a second request for a second secret key, generate the second secret key, and send the second secret key to a third network device and a second user device. The second network device and the first user device may mutually authenticate each other using the first secret key. The third network device and the second user device may mutually authenticate each other using second secret key. | 2014-02-20 |
20140052982 | METHODS AND SYSTEMS FOR DISTRIBUTING CRYPTOGRAPHIC DATA TO AUTHENTICATED RECIPIENTS - A method for distributing cryptographic data to authenticated recipients includes receiving, by an access control management system, from a first client device, information associated with an encrypted data object. The method includes receiving, by the access control management system, from a second client device, a request for the information associated with the encrypted data object. The method includes verifying, by the access control management system, that a user of the second client device is identified in the received information associated with the encrypted data object. The method includes authenticating, by the access control management system, with an identity provider, the user of the second client device. The method includes sending, by the access control management system, to the second client device, the received information associated with the encrypted data object. | 2014-02-20 |
20140052983 | Known Plaintext Attack Protection - A Headend system including an encoder to encode input data yielding a plurality of data packets, each of the packets having a header and a payload, a post encoding processor to identify ones of the data packets having a payload with a suspected known plaintext, and modify at least some of the identified packets, and an encryption processor to encrypt at least some of the data packets yielding encrypted data packets. Related apparatus and methods are also described. | 2014-02-20 |
20140052984 | METHODS AND SYSTEMS FOR REGISTERING A PACKET-BASED ADDRESS FOR A MOBILE DEVICE USING A FULLY-QUALIFIED DOMAIN NAME (FQDN) FOR THE DEVICE IN A MOBILE COMMUNICATION NETWORK - A mobile communication device registers for data communication through a mobile communication network with a packet-based network. The device may or may not have a mobile device number, and registers using a fully-qualified-domain-name (FQDN) uniquely identifying the device in a domain-name-system (DNS) of the packet-based network. A packet-data-network gateway assigns a packet-based address for the device, and generates a request for registering the address with the FQDN in a DNS server. Alternatively, the device generates the packet-based address based on a received portion of the address, retrieves the FQDN from an identity module, and sends a DNS-Update message to the DNS server including the address and FQDN. Again alternatively, a DNS server receives an encrypted DNS update message including a FQDN and a packet-based address, and decrypts the message prior to registering the address and FQDN in a DNS database. | 2014-02-20 |
20140052985 | METHODS FOR PROVIDING REQUESTED DATA FROM A STORAGE DEVICE TO A DATA CONSUMER AND STORAGE DEVICES - According to various embodiments, a method for providing requested data from a storage device to a data consumer may be provided. The method may include: determining a helper key for the data consumer; determining encrypted data corresponding to the requested data from a memory of the storage device; determining pre-processed data based on the encrypted data and the helper key, wherein the pre-processed data is encrypted and configured to be decrypted using a private key of the data consumer; and transmitting the pre-processed data to the data consumer. | 2014-02-20 |
20140052986 | INFORMATION HANDLING DEVICE, INFORMATION OUTPUT DEVICE, AND RECORDING MEDIUM - An information handling device has a first connection unit, a Web application executing unit to generate a device operating command, a second connection unit, an application authentication processing unit to generate a platform authenticator, an application origin information attacher to attach origin information of the web application to the platform authenticator, and a third connection unit to establish a connection for transmitting the device operating command and the platform authenticator attached with the origin information to the second communication device in order to transmit the device operating command and the platform authenticator attached with the origin information. | 2014-02-20 |
20140052987 | Method and System Making it Possible to Test a Cryptographic Integrity of an Error Tolerant Data Item - A method and system for testing the cryptographic integrity of data m comprises at least the following elements: a module transmitting a message M, said module comprising a memory for storing the parameters used to execute the steps of the method, such as the key, the public data, a transmission medium, a receiver module also comprising storage means for storing at least the same parameters as in transmission. The system may comprise storage means for storing confidential data such as the secret keys, a processor suitable for executing the steps. | 2014-02-20 |
20140052988 | AUTHENTICATOR, AUTHENTICATEE AND AUTHENTICATION METHOD - According to one embodiment, an authenticatee includes, a memory configured to store secret information XYmain, XYsub, and secret information XYmain | 2014-02-20 |
20140052989 | SECURE DATA EXCHANGE USING MESSAGING SERVICE - A system for securely communicating over a network includes a sending device and a receiving device. The sending device includes first processing hardware configured to encrypt a symmetric key associated with the sending device with a public key associated with a receiving device. The first processing hardware is further configured to steganographically embed the symmetric key into an image. The sending device further includes a first signal interface configured to send the image to the receiving device. The receiving device includes second signal interface for receiving the image from the sending device. The receiving device also includes second processing hardware configured to decrypt the symmetric key with a private key stored on the receiving device and to further secure communications with the sender via the symmetric key. | 2014-02-20 |
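The steganographic-embedding step above can be illustrated with classic least-significant-bit hiding: each bit of the (already encrypted) symmetric key overwrites the LSB of a successive pixel byte. A real system would use an image library and real public-key encryption; here raw bytes stand in for pixels and a fixed byte string stands in for the encrypted key:

```python
# Minimal LSB steganography: hide each bit of a secret in the least
# significant bit of successive "pixel" bytes, then recover it.
def embed(pixels, secret):
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit           # overwrite the LSB only
    return bytes(out)

def extract(pixels, length):
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytes(range(64))                         # toy 64-"pixel" cover image
key = b"\xde\xad"                                # stands in for encrypted key
stego = embed(cover, key)
recovered = extract(stego, len(key))
```

Because only LSBs change, the stego image differs from the cover by at most one unit per byte, which is what makes the embedded key inconspicuous.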
20140052990 | ELECTRONIC FILE SENDING METHOD - An electronic file sending method is provided to securely and easily send an electronic file to a receiver. A receiving apparatus receives from a sending apparatus an electronic mail including an encrypted electronic file. The sending apparatus uses a public key of a management server to encrypt a decryption password that is necessary to decrypt the encrypted electronic file and sends the encrypted decryption password to the management server. In association with a file identifier of the electronic file, the management server stores the decryption password and an electronic mail address of a correct receiver, who is a receiver of the receiving apparatus. The receiving apparatus sends to the management server the file identifier of the electronic file and the electronic mail address of the receiver. The management server uses a public key of the receiving apparatus to encrypt the password and sends the encrypted password to the receiving apparatus. | 2014-02-20 |
20140052991 | Optical Network Terminal Management Control Interface-Based Passive Optical Network Security Enhancement - A network component comprising at least one processor coupled to a memory and configured to exchange security information using a plurality of attributes in a management entity (ME) in an optical network unit (ONU) via an ONU management control interface (OMCI) channel, wherein the attributes provide security features for the ONU and an optical line terminal (OLT). Also included is an apparatus comprising an ONU configured to couple to an OLT and comprising an OMCI ME, wherein the OMCI ME comprises a plurality of attributes that support a plurality of security features for transmissions between the ONU and the OLT, and wherein the attributes are communicated via an OMCI channel between the ONU and the OLT and provide the security features for the ONU and the OLT. | 2014-02-20 |
20140052992 | Response to Queries by Means of the Communication Terminal of a User - The subject innovation relates to a method with which a response to a request, said response having been ascertained by means of a communication terminal device, can be securely transmitted to a data means, whereby the communication terminal device makes a selection from a plurality of response options. A specific key is associated with each of the response options, and the keys, which are in encrypted form, are received, together with the request, in the communication terminal device and they are decrypted in a means of the communication terminal device. On the basis of the selection made, the means ascertains the key that is associated with the selected response option, and the ascertained key is sent in a response message to the data means. The subject innovation also relates to a communication terminal device that is suitable for carrying out the method. | 2014-02-20 |
20140052993 | INFORMATION OPERATING DEVICE, INFORMATION OUTPUT DEVICE, AND INFORMATION PROCESSING METHOD - An information operating device has a first connection unit, a second connection unit, a machine operating command for operating the information output device and a usage certificate certifying that the machine operating web application, a domain name attacher to attach a domain name of the first communication device, when the connection is established by the second connection unit to transmit the machine operating command for operating the information output device using the connection, an application executing unit to execute the PIN code input web application acquired from the first communication device through the first connection unit, an encryption information generator to generate encryption information and transmit it to the information output device, and a client processing unit to transmit the usage certificate and the encryption information to the information output device through the second connection unit. | 2014-02-20 |
20140052994 | Object Signing Within a Cloud-based Architecture - This invention uses a cloud-based architecture to sign objects by dynamically creating a cloud-based virtual machine with the ability to sign objects, perform network and object isolation, and encrypt and store keys generated by an object signing agent. Multi-user authentication is supported along with mobile access. | 2014-02-20 |
20140052995 | DYNAMIC TOKEN SEED KEY INJECTION AND DEFORMATION METHOD - The present invention discloses a dynamic token seed key injection and deformation method. The method comprises steps of: generating in advance an initial seed key for a token and injecting the initial seed key into the token during manufacture; when distributing the token to an end user, performing an activation operation, and obtaining a new seed key, which is the final seed key for the future work of the token, by performing an operation based on an active code and the initial seed key; meanwhile, introducing the initial seed key into a dynamic password authentication system which performs the same deformation operation for the seed key as that performed in the token to obtain the same new seed key. After the activation operation in the token and the authentication system in this way, the final new seed key is different from the initial seed key injected by the token manufacturer, so that the privacy of the seed key is strengthened. | 2014-02-20 |
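The deformation step above hinges on token and server running the same derivation over the factory seed and the activation code, so both end up with an identical working seed that differs from the manufacturer-injected one. A hedged sketch; HMAC-SHA256 is an assumed choice of derivation function, not one named by the application:

```python
# Seed-key deformation: derive the final working seed from the factory seed
# and an activation code, identically on the token and the auth server.
import hmac
import hashlib

def deform_seed(initial_seed: bytes, activation_code: bytes) -> bytes:
    """Derive the working seed; the same operation runs on both sides."""
    return hmac.new(initial_seed, activation_code, hashlib.sha256).digest()

factory_seed = b"seed-injected-at-manufacture"   # illustrative value
code = b"ACTIVATE-1234"                          # illustrative active code
token_seed = deform_seed(factory_seed, code)     # computed inside the token
server_seed = deform_seed(factory_seed, code)    # computed by the auth system
```

Since the working seed never equals the injected seed, a manufacturer-side leak of the initial seed alone no longer compromises the deployed token.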
20140052996 | EXTENDING THE NUMBER OF APPLICATIONS FOR ACCESSING PROTECTED CONTENT IN A MEDIA USING MEDIA KEY BLOCKS - Embodiments of the invention relate to digital content protection for recordable media using encryption and decryption based on device keys in the media. The invention increases the number of extended applications supported the media key blocks and facilitates the assignment of the applications to the media key blocks. One aspect of the invention concerns a method that comprises assigning a first media key block in a protected area of the media for extended applications accessing protected content, processing the first media key block with a first device key set to generate a first media key, and for each extended application, creating a second media key block in a protected area of the media. The second media key block is processed to generate a second media key. A content-accessing device processes the first and second media keys in order to access protected content. | 2014-02-20 |
20140052997 | SECURITY MODEL FOR ACTOR-BASED LANGUAGES AND APPARATUS, METHODS, AND COMPUTER PROGRAMMING PRODUCTS USING SAME - An application includes: a programming model including a service provider, first components, second components, and sinks communicating via messages. Each of the second components is assigned a unique capability. A given one of the first components routes a message from the given first component to second component(s) and then to a sink. Each of the second component(s) sends the message to the service provider. The service provider creates a token corresponding at least to a received message and a unique capability assigned to an associated one of the second component(s) and sends the token to the associated one of the second component(s). The selected sink receives the message and a token corresponding to each of the second component(s), verifies each received token, and either accepts the message if each of the received tokens is verified or ignores the message if at least one of the received tokens is not verified. | 2014-02-20 |
20140052998 | SECURITY MODEL FOR ACTOR-BASED LANGUAGES AND APPARATUS, METHODS, AND COMPUTER PROGRAMMING PRODUCTS USING SAME - An application includes: a programming model including a service provider, first components, second components, and sinks communicating via messages. Each of the second components is assigned a unique capability. A given one of the first components routes a message from the given first component to second component(s) and then to a sink. Each of the second component(s) sends the message to the service provider. The service provider creates a token corresponding at least to a received message and a unique capability assigned to an associated one of the second component(s) and sends the token to the associated one of the second component(s). The selected sink receives the message and a token corresponding to each of the second component(s), verifies each received token, and either accepts the message if each of the received tokens is verified or ignores the message if at least one of the received tokens is not verified. | 2014-02-20 |
20140052999 | Searchable Encrypted Data - Embodiments of the invention broadly described, introduce systems and methods for enabling the searching of encrypted data. One embodiment of the invention discloses a method for generating a searchable encrypted database. The method comprises receiving a plurality of sensitive data records comprising personal information of different users, identifying one or more searchable fields for the sensitive data records, wherein each searchable field is associated with a subset of the personal information for a user, generating a searchable field index for each of the one or more searchable fields, and encrypting the sensitive data records using a database encryption key. | 2014-02-20 |
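The searchable-field-index idea above can be sketched with a keyed hash: records are stored encrypted, while a deterministic token derived from each searchable field lets the server match equality queries without seeing plaintext. HMAC is an assumed mechanism here, and the key and records are invented:

```python
# Searchable field index: map HMAC(field value) -> record ids so equality
# searches work over data whose records are themselves stored encrypted.
import hmac
import hashlib

INDEX_KEY = b"index-key"                         # hypothetical index key

def field_token(value: str) -> str:
    """Deterministic keyed token for one searchable field value."""
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

index = {}
records = [(1, "alice@example.com"), (2, "bob@example.com"),
           (3, "alice@example.com")]             # invented sample data
for record_id, email in records:
    index.setdefault(field_token(email), []).append(record_id)

matches = index[field_token("alice@example.com")]  # query without plaintext
```

The trade-off is that deterministic tokens leak equality patterns (two records with the same email share a token), which is why such indexes cover only selected fields rather than whole records.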
20140053000 | INSTRUCTIONS TO PERFORM JH CRYPTOGRAPHIC HASHING - A method is described. The method includes executing one or more JH_SBOX_L instructions to perform S-Box mappings and a linear (L) transformation on a JH state and executing one or more JH_Permute instructions to perform a permutation function on the JH state once the S-Box mappings and the L transformation have been performed. | 2014-02-20 |
20140053001 | SECURITY CENTRAL PROCESSING UNIT MANAGEMENT OF A TRANSCODER PIPELINE - A method for managing a transcoder pipeline includes partitioning a memory with a numbered region; receiving an incoming media stream to be transcoded; and atomically loading, using a security central processing unit (SCPU), a decryption key, a counterpart encryption key and an associated region number of the memory into a slot of a key table, the key table providing selection of decryption and encryption keys during transcoding. The atomically loading the decryption and encryption keys and the associated numbered region ensures that the encryption key is selected to encrypt a transcoded version of the media stream when the media stream has been decrypted with the decryption key and the transcoded media stream is retrieved from the associated numbered region of the memory. | 2014-02-20 |
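The key-table invariant above — a decryption key, its counterpart encryption key, and a memory region number loaded together into one slot — can be modeled with a small lookup. This is a toy software model; the atomicity in the application is enforced by the SCPU in hardware, which a dict cannot capture:

```python
# Toy key table: each slot binds a decryption key, its paired encryption key,
# and a memory region number, so a transcoded stream fetched from a region
# can only be re-encrypted with the key paired to its decryption key.
key_table = {}

def load_slot(slot, decrypt_key, encrypt_key, region):
    """Load all three values together, mimicking the SCPU's atomic load."""
    key_table[slot] = {"dec": decrypt_key, "enc": encrypt_key,
                       "region": region}

def keys_for_region(region):
    """Select the key pair bound to the region holding the stream."""
    for entry in key_table.values():
        if entry["region"] == region:
            return entry["dec"], entry["enc"]
    raise KeyError("no key pair loaded for region %d" % region)

load_slot(0, decrypt_key=b"K-dec-0", encrypt_key=b"K-enc-0", region=7)
dec, enc = keys_for_region(7)
```

Binding the region number into the slot is what prevents a decrypted stream in one region from being encrypted under a key belonging to a different stream.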
20140053002 | SYSTEM AND METHOD FOR ENCRYPTING SECONDARY COPIES OF DATA - A system and method for encrypting secondary copies of data is described. In some examples, the system encrypts a secondary copy of data after the secondary copy is created. In some examples, the system looks to information about a data storage system, and determines when and where to encrypt data based on the information. | 2014-02-20 |
20140053003 | RANDOM TIMESLOT CONTROLLER FOR ENABLING BUILT-IN SELF TEST MODULE - A data processing system having a first processor, a second processor, a local memory of the second processor, and a built-in self-test (BIST) controller of the second processor which can be randomly enabled to perform memory accesses on the local memory of the second processor and which includes a random value generator is provided. The system can perform a method including executing a secure code sequence by the first processor and performing, by the BIST controller of the second processor, BIST memory accesses to the local memory of the second processor in response to the random value generator. Performing the BIST memory accesses is performed concurrently with executing the secure code sequence. | 2014-02-20 |
20140053004 | SLAB INDUCTOR DEVICE PROVIDING EFFICIENT ON-CHIP SUPPLY VOLTAGE CONVERSION AND REGULATION - A method is disclosed to operate a voltage conversion circuit such as a buck regulator circuit that has a plurality of switches coupled to a voltage source; a slab inductor having a length, a width and a thickness, where the slab inductor is coupled between the plurality of switches and a load and carries a load current during operation of the plurality of switches; and a means to reduce or cancel the detrimental effect of other wires on same chip, such as a power grid, potentially conducting return current and thereby degrading the functionality of this slab inductor. In one embodiment the wires can be moved further away from the slab inductor and in another embodiment magnetic materials can be used to shield the slab inductor from at least one such interfering conductor. | 2014-02-20 |
20140053005 | STORAGE DEVICE AND DATA STORAGE SYSTEM - A mobile storage device is powered without using an external power supply when the mobile storage device is connected to a computing device. The mobile storage device includes a voltage regulator to receive a first voltage from a data transmission interface (e.g., USB interface) of the computing device. The voltage regulator converts the first voltage into several other voltages suitable for all other electronic components of the storage device, to provide full power to the mobile storage device. | 2014-02-20 |
20140053006 | Emergency Mobile Device Power Source - In various aspects, a portable electronic device includes electrical components supported by a housing, the electrical components including a user interface coupled to a processor and a storage medium including an emergency power storage module coupled to the processor. The portable apparatus further includes one or more power storage devices configured to provide electrical energy to the electrical components, at least one power storage device operably controlled by the emergency power storage module to provide emergency electrical energy to the electronic components for an emergency communication. | 2014-02-20 |
20140053007 | APPARATUS AND METHOD FOR PREVENTING MALFUNCTION - An apparatus for preventing a malfunction of a peripheral device in a portable terminal with multiple processors includes a battery, a peripheral device electrically connected to a switch through I/O pins, a first processor in which a control port for the peripheral device is electrically connected to the switch through the GPIO method, and which controls driving of the peripheral device through generation of a normal high signal, a second processor electrically connected to the switch through the GPIO method, and the switch, driven by the battery, configured to operate such that the control port of the first processor is grounded when it is determined that an unintended high signal is generated from the second processor before the portable terminal is completely booted. | 2014-02-20 |
20140053008 | METHOD AND SYSTEM FOR AUTOMATIC CLOCK-GATING OF A CLOCK GRID AT A CLOCK SOURCE - A system and method for power management by performing clock-gating at a clock source. In the method a critical stall condition is detected within a clocked component of a core of a processing unit. The core includes one or more clocked components synchronized in operation by a clock signal distributed by a clock grid. The clock grid is clock-gated to suspend distribution of the clock signal to the core during the critical stall condition. | 2014-02-20 |
20140053009 | INSTRUCTION THAT SPECIFIES AN APPLICATION THREAD PERFORMANCE STATE - An apparatus is described that includes a processor. The processor has a processing core to execute an instruction that specifies a performance state of an application thread. The instruction belongs to the application thread. The processor includes a register to store the performance state. The processor includes power management control logic coupled to the register to set a performance state of the processing core as a function of the performance state. | 2014-02-20 |
20140053010 | DATA PROCESSING SYSTEM AND DATA PROCESSOR - One data processor is provided with an interface for realizing connection with the other data processor. This interface is provided with a function for connecting the other data processor as a bus master to an internal bus of the one data processor, and the relevant other data processor is capable of directly operating peripheral functions that are memory mapped to the internal bus from an external side via the interface. Accordingly, the data processor can utilize the peripheral functions of the other data processor without interruption of the program being executed. In short, one data processor can use in common the peripheral resources of the other data processor. | 2014-02-20 |
20140053011 | Energy Efficient Sleep Signature in Power Over Ethernet - An energy efficient sleep signature in power over Ethernet. In one embodiment, a signature of a powered device is first detected. It is then determined whether the detected signature is indicative of an unknown powered device. In one example, a detected signature of an approximately 25 kΩ impedance is indicative of an unknown powered device. Where the detected signature is indicative of an unknown powered device a normal PoE startup powering process can be used that includes a conventional detection, classification and powering process. Where the detected signature is indicative of a powered device that was previously known to the PSE, then powering of the PD can proceed with a fast-restart powering method that retains previous powering parameters. | 2014-02-20 |
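The decision logic above — a signature near 25 kΩ means an unknown powered device and triggers the full detection/classification/powering sequence, while a previously stored signature allows a fast restart with retained parameters — can be sketched as a small classifier. The tolerance and the stored-signature table are assumptions for illustration:

```python
# PoE startup selection: an ~25 kOhm detection signature indicates an unknown
# PD (full startup); a signature matching a stored one allows fast restart.
KNOWN_SIGNATURES = {"port1": 11_000}             # hypothetical stored PD data

def choose_startup(port, measured_ohms, tolerance=0.05):
    """Pick a powering path from the measured detection signature."""
    known = KNOWN_SIGNATURES.get(port)
    if known is not None and abs(measured_ohms - known) / known <= tolerance:
        return "fast-restart"                    # reuse previous parameters
    if abs(measured_ohms - 25_000) / 25_000 <= tolerance:
        return "normal-startup"                  # unknown PD: full process
    return "no-power"                            # invalid signature

mode_known = choose_startup("port1", 11_200)     # near the stored signature
mode_unknown = choose_startup("port2", 24_900)   # near 25 kOhm: unknown PD
```

Skipping detection and classification on a fast restart is where the energy saving comes from: the PSE avoids re-running the full handshake for a device it already knows.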
20140053012 | SYSTEM AND DETECTION MODE - A system includes a CPU; a sensor that detects power of the CPU; a cache memory state monitoring circuit that monitors a state of a cache memory; and a detection circuit that, based on a sensor signal from the sensor and a state signal from the cache memory state monitoring circuit, detects a spin state of a program executed by the CPU. | 2014-02-20 |
20140053013 | HANDLING INTERMITTENT RECURRING ERRORS IN A NETWORK - Embodiments relate to a computer for transmitting data in a network. The computer includes at least one data transmission port configured to be connected to at least one storage device via a plurality of paths of a network. The computer further includes a processor configured to detect recurring intermittent errors in one or more paths of the plurality of paths and to disable access to the one or more paths based on detecting the recurring intermittent errors. | 2014-02-20 |
20140053014 | HANDLING INTERMITTENT RECURRING ERRORS IN A NETWORK - Embodiments relate to a computer for transmitting data in a network. The computer includes at least one data transmission port configured to be connected to at least one storage device via a plurality of paths of a network. The computer further includes a processor configured to detect recurring intermittent errors in one or more paths of the plurality of paths and to disable access to the one or more paths based on detecting the recurring intermittent errors. | 2014-02-20 |
20140053015 | Error Control Coding - A data writer is described comprising: a memory to store at least one amount of source data that is to be written to a data storage medium; a processor to arrange the source data into subsets and generate ECC data in respect of each subset, wherein the source data and the associated ECC data are to be written to a data storage medium via a plurality of individual data channels, and wherein the ECC data comprises at least a first degree of ECC protection having a first level of redundancy in respect of a first subset and a second degree of ECC protection having a second level of redundancy in respect of a second subset; a plurality of data writing elements, each to write data from an associated data channel, concurrently with the writing by the other data writing elements of data from respective data channels, to a data storage medium; and a controller, to control the writing by the data writing elements of the source data and the associated ECC data to the data storage medium. | 2014-02-20 |
20140053016 | Using A Buffer To Replace Failed Memory Cells In A Memory Component - Methods and data processing systems for using a buffer to replace failed memory cells in a memory component are provided. Embodiments include determining that a first copy of data stored within a plurality of memory cells of a memory component contains one or more errors; in response to determining that the first copy contains one or more errors, determining whether a backup cache within the buffer contains a second copy of the data; and in response to determining that the backup cache contains the second copy of the data, transferring the second copy from the backup cache to a location within an error data queue (EDQ) within the buffer and updating the buffer controller to use the location within the EDQ instead of the plurality of memory cells within the memory component. | 2014-02-20 |
20140053017 | RESOURCE SYSTEM MANAGEMENT - A resource system comprises a plurality of resource elements and a resource controller connected to the resource elements and operating the resource elements according to a predefined set of operational goals. A method of operating the resource system comprises the steps of identifying error recovery procedures that could be executed by the resource elements, categorizing each identified error recovery procedure in relation to the predefined set of operational goals, detecting that an error recovery procedure is to be performed on a specific resource element, deploying one or more actions in relation to the resource elements according to the categorization of the detected error recovery procedure, and performing the detected error recovery procedure on the specific resource element. | 2014-02-20 |
20140053018 | OPTIMISTIC DATA WRITING IN A DISPERSED STORAGE NETWORK - A method begins by a processing module dispersed storage error encoding data to produce a set of encoded data slices and sending a set of write request messages to a set of dispersed storage (DS) units, wherein each of the set of write request messages includes an encoded data slice of the set of encoded data slices. The method continues with the processing module determining whether a pillar width number of favorable write response messages has been received within a write acknowledgement (ACK) time period. The method continues with the processing module executing a retry write process to at least one DS unit of the set of DS units from which a favorable write response message was not received during the write ACK time period when the pillar width number of favorable write response messages has not been received within the write ACK time period. | 2014-02-20 |
20140053019 | REDUCED-IMPACT ERROR RECOVERY IN MULTI-CORE STORAGE-SYSTEM COMPONENTS - A method for recovering from an error in a multi-core storage-system component is disclosed. In one embodiment, such a method includes detecting an error in a first core of a multi-core component. The method determines whether the error was detected by the first core or by a core other than the first core. In the event the error was detected by the first core and the error is recoverable, the first core recovers from the error without substantially impacting operation of other cores in the multi-core component. In the event the error was detected by a core other than the first core and the error is recoverable, a core other than the first core recovers from the error without substantially impacting operation of other cores in the multi-core component. A corresponding apparatus and computer program product are also disclosed. | 2014-02-20 |
20140053020 | SYSTEM FOR AND METHOD OF IMPROVING TRANSACTION PROCESSING AND FLOW-THROUGH - A system for and method of improving order and transaction flow-through and processing. The method may include receiving an order request from a user corresponding to the order. The method may further include determining a set of order particulars associated with the user and the order for resolution against an external system for order fulfillment, and determining if any order particular is incorrect or missing. The method may also include identifying the missing or correct order particular, and updating the set of order particulars to include the missing or correct order particular without requesting the missing or correct order particular from the user. The method may include transmitting the updated order particulars to the external system for order fulfillment. | 2014-02-20 |
20140053021 | AUTOMATIC CLASSIFICATION ADJUSTMENT OF RECORDED ACTIONS FOR AUTOMATION SCRIPT - A method for automatic revision of an automation script includes obtaining a sequence of at least one classified recorded action and an automation script, the automation script including a sub-sequence of the sequence of classified recorded actions, wherein each action is included in the automation script in accordance with the classification of that action. At least a portion of the automation script is executed. Upon failure of an action of the portion of the automation script to execute, an action of the sequence of classified recorded actions is reclassified, it is verified whether the action that failed to execute executes successfully after the reclassification, and the automation script is revised. A related computer program product and data processing system are also disclosed. | 2014-02-20 |
20140053022 | VIRTUAL MACHINE FAULT TOLERANCE - One or more techniques and/or systems are provided for hosting a virtual machine from a snapshot. In particular, a snapshot of a virtual machine hosted on a primary computing device may be created. The virtual machine may be hosted on a secondary computing device using the snapshot, for example, when a failure of the virtual machine on the primary computing device occurs. If a virtual machine type (format) of the snapshot is not supported by the secondary computing device, then the virtual machine within the snapshot may be converted to a virtual machine type supported by the secondary computing device. In this way, the virtual machine may be operable and/or accessible on the secondary computing device despite the failure. Hosting the virtual machine on the secondary computing device provides, among other things, fault tolerance for the virtual machine and/or applications comprised therein. | 2014-02-20 |
20140053023 | PSEUDO DEDICATED DEBUG PORT WITH AN APPLICATION INTERFACE - A method is shown to provide remote access to one or more debug access points whose functions include capabilities other than accessing memories across an application interface such as USB, IEEE 802.3 (Ethernet) and other protocols. The capabilities available include all or many of the capabilities provided by a dedicated debug interface. | 2014-02-20 |
20140053024 | COMPUTING PLATFORM WITH INTERFACE BASED ERROR INJECTION - In some embodiments, a PPM interface for a computing platform may be provided with functionality to facilitate hardware component error injection to an OS through the PPM interface. | 2014-02-20 |
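The optimistic writing flow in 20140053018 above (encode data into slices, send one slice per DS unit, then retry only the units that did not return a favorable response within the ACK window) can be illustrated with a minimal sketch. All names here (`DSUnit`, `optimistic_write`) are hypothetical, and the in-memory unit is a stand-in for a real dispersed storage unit; this is not the patented implementation.

```python
import time

class DSUnit:
    """Hypothetical in-memory stand-in for a dispersed storage (DS) unit."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.slices = {}

    def write(self, slice_id, data):
        """Return True (a favorable write response) on success."""
        if not self.healthy:
            return False
        self.slices[slice_id] = data
        return True

def optimistic_write(units, slices, ack_timeout=1.0):
    """Send one encoded slice to each DS unit; retry any unit that did
    not return a favorable response within the ACK time period."""
    pending = dict(zip(units, slices))
    deadline = time.monotonic() + ack_timeout
    favorable = set()
    # First pass: optimistic writes to the full pillar width of units.
    for unit, (slice_id, data) in pending.items():
        if unit.write(slice_id, data):
            favorable.add(unit)
    # Retry pass, bounded by the write ACK window.
    while len(favorable) < len(units) and time.monotonic() < deadline:
        for unit, (slice_id, data) in pending.items():
            if unit not in favorable and unit.write(slice_id, data):
                favorable.add(unit)
    # True only when a pillar width number of favorable responses arrived.
    return len(favorable) == len(units)
```

The sketch omits the dispersed storage error encoding itself; the slices are assumed to have been produced already, as the abstract's processing module would do before sending the write requests.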
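The pair of abstracts 20140053013/20140053014 above describe disabling a network path once recurring intermittent errors are detected on it. One common way to model "recurring intermittent" is a sliding time window: the path is disabled when it accumulates a threshold number of errors within the window. The sketch below uses that model; the class name, threshold, and window are illustrative assumptions, not details taken from the applications.

```python
import time
from collections import deque

class PathMonitor:
    """Illustrative tracker that disables a path once it shows recurring
    intermittent errors: `threshold` or more errors within a sliding
    `window` of seconds (assumed policy, not from the patent text)."""
    def __init__(self, threshold=3, window=60.0):
        self.threshold = threshold
        self.window = window
        self.errors = {}       # path -> deque of error timestamps
        self.disabled = set()  # paths with access disabled

    def record_error(self, path, now=None):
        now = time.monotonic() if now is None else now
        q = self.errors.setdefault(path, deque())
        q.append(now)
        # Drop errors that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.threshold:
            self.disabled.add(path)

    def is_enabled(self, path):
        return path not in self.disabled
```

Errors spread too thinly in time never trip the threshold, which distinguishes a genuinely intermittent-but-recurring fault from isolated transient errors.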