6th week of 2014 patent application highlights part 72 |
Patent application number | Title | Published |
20140040509 | Near Field Communication Mimic Device And Method Of Use - An NFC mimic device retrieves peripheral information from a peripheral, stores the peripheral information and then mimics the peripheral information to an information handling system so that an NFC device of the information handling system receives the peripheral information as if provided directly from the peripheral. The NFC mimic device supports automated setup of a wireless interface between an information handling system and a peripheral, such as a projector. | 2014-02-06 |
20140040510 | STAGED DISCOVERY IN A DATA STORAGE FABRIC - A method of performing discovery in a data storage fabric is disclosed. Performing discovery includes performing a first stage of discovery on expanders in the data storage fabric prior to broadcasting a discovery command to initiators on the data storage fabric. After the first stage has completed, the discovery command is provided to the initiators to perform a second stage of discovery. | 2014-02-06 |
20140040511 | PORTABLE DATA STORAGE DEVICE WITH WIRELESS FUNCTIONALITY HAVING A DIGITAL SWITCH CIRCUIT AND A METHOD FOR STORING DATA IN THE AFOREMENTIONED - There is provided a portable data storage device with wireless functionality. The portable storage device includes a digital switch circuit for controlling a flow of data in the portable storage device; a non-volatile memory module coupled to the digital switch circuit, the non-volatile memory module being for storing data; an interface coupled to the digital switch circuit for enabling the portable data storage device to be used for data transfer with a host device; a microcontroller coupled to the digital switch circuit for controlling the digital switch circuit; and a wireless communications module coupled to the microcontroller for wireless transmission/reception of data. The microcontroller is configured to toggle amongst a plurality of discrete modes of the digital switch circuit such that in at least one of the plurality of discrete modes the digital switch circuit diverts data away from the microcontroller to reduce a processing load on the microcontroller. A corresponding method is also disclosed. | 2014-02-06 |
20140040512 | DATA TRANSFER MANAGER - Techniques are disclosed relating to a system that implements direct memory access (DMA). In one embodiment, an apparatus is disclosed that includes a dedicated data transfer management (DTM) circuit. The DTM circuit is configured to provide commands to a direct memory access (DMA) controller coupled to a bus to facilitate the DMA controller retrieving portions of a data object to be transmitted to a peripheral circuit via the bus. In some embodiments, the DTM is configured to assemble a data packet having a payload supplied by a processor, where the DTM circuit is configured to assemble the data packet by generating direct memory access (DMA) requests for the DMA controller. In such an embodiment, the DMA requests cause a plurality of peripheral circuits coupled to the bus to transfer portions of the data packet over the bus. | 2014-02-06 |
20140040513 | Scalable Embedded Memory Programming - The present disclosure describes techniques for scalable embedded memory programming. In some aspects data is received at a first communication interface from a host device, at least a portion of the data is stored to a memory device supported by a printed circuit board, and the data is transmitted to a target device via a second communication interface. | 2014-02-06 |
20140040514 | ADAPTIVE INTERRUPT MODERATION - Generally, this disclosure relates to adaptive interrupt moderation. A method may include determining, by a host device, a number of connections between the host device and one or more link partners based, at least in part, on a connection identifier associated with each connection; determining, by the host device, a new interrupt rate based at least in part on a number of connections; updating, by the host device, an interrupt moderation timer with a value related to the new interrupt rate; and configuring the interrupt moderation timer to allow interrupts to occur at the new interrupt rate. | 2014-02-06 |
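The moderation flow in 20140040514 (count connections by identifier, derive a new interrupt rate, update the moderation timer) can be sketched as follows. The rate tiers and the tick-rate-to-timer-value conversion are illustrative assumptions, not details taken from the filing:

```python
def moderated_timer_value(connection_ids, timer_hz=1_000_000):
    """Pick an interrupt rate from the number of distinct connections
    and convert it to a reload value for the interrupt moderation timer.
    The rate tiers below are hypothetical."""
    n = len(set(connection_ids))  # one connection per unique identifier
    if n <= 4:
        rate = 50_000   # few connections: interrupt often, low latency
    elif n <= 64:
        rate = 20_000
    else:
        rate = 5_000    # many connections: batch more work per interrupt
    # Timer ticks at timer_hz; the reload value sets the interrupt rate.
    return timer_hz // rate

print(moderated_timer_value(["a", "b", "a"]))  # two unique connections
```

A real driver would re-run this whenever connections are opened or closed, then write the returned value into the device's moderation timer register.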
20140040515 | SYSTEM AND METHOD FOR CREATING A SCALABLE MONOLITHIC PACKET PROCESSING ENGINE - A novel and efficient method is described that creates a monolithic high capacity Packet Engine (PE) by connecting N lower capacity Packet Engines (PEs) via a novel Chip-to-Chip (C2C) interface. The C2C interface is used to perform functions, such as memory bit slicing, and to communicate shared information and enqueue/dequeue operations between individual PEs. | 2014-02-06 |
20140040516 | BARRIER TRANSACTIONS IN INTERCONNECTS - Interconnect circuitry is configured to provide data routes via which at least one initiator device may access at least one recipient device. The circuitry includes: at least one input for receiving transaction requests from at least one initiator device; at least one output for outputting transaction requests to the at least one recipient device; and at least one path for transmitting transaction requests between at least one input and at least one output. Also included is control circuitry that routes the received transaction requests from at least one input to at least one output and responds to a barrier transaction request to maintain an ordering of at least some transaction requests with respect to said barrier transaction request within a stream of transaction requests passing along one of said at least one paths. Barrier transaction requests include an indicator of transaction requests whose ordering is to be maintained. | 2014-02-06 |
20140040517 | Dynamic Address Change Optimizations - A method of setting an address of a component that includes determining a characterization value associated with a consumable, calculating a number of address change operations based upon the characterization value, and setting a last address generated from the number of address change operations as the new address of the component, wherein the characterization value is determined based upon a usage of the consumable. | 2014-02-06 |
20140040518 | MEMORY INTERFACE - The present disclosure provides a method for processing memory access operations. The method includes determining a fixed response time based, at least in part, on a total memory latency of a memory module. The method also includes identifying an available time slot for receiving return data from the memory module over a data bus, wherein the time difference between a current clock cycle and the available time slot is greater than or equal to the fixed response time. The method also includes creating a first slot reservation by reserving the available time slot. The method also includes issuing a read request to the memory module over the data bus, wherein the read request is issued at a clock cycle determined by subtracting the fixed response time from a time of the first slot reservation. | 2014-02-06 |
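The slot-reservation arithmetic in 20140040518 reduces to: find a free data-bus slot at least the fixed response time ahead of the current cycle, reserve it, and issue the read at the slot time minus the fixed response time. A minimal sketch, with the set-based reservation table being a hypothetical data structure:

```python
def schedule_read(reserved_slots, current_cycle, fixed_response_time):
    """Find the earliest free data-bus slot at least fixed_response_time
    cycles after current_cycle, reserve it, and return the clock cycle
    at which the read request must be issued (slot minus response time)."""
    slot = current_cycle + fixed_response_time
    while slot in reserved_slots:      # scan forward past taken slots
        slot += 1
    reserved_slots.add(slot)           # create the slot reservation
    return slot - fixed_response_time  # issue cycle for the read request

reserved = {110, 111}
print(schedule_read(reserved, 100, 10))  # slots 110-111 taken -> issue at 102
```

Because the response time is fixed, issuing at `slot - fixed_response_time` guarantees the return data arrives exactly in the reserved slot, so the data bus never sees a collision.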
20140040519 | ACTIVE LOCK INFORMATION MAINTENANCE AND RETRIEVAL - Technologies related to active lock information maintenance and retrieval are generally described. In some examples, a computing device may be configured to maintain active lock information including lock identifiers for active locks, lock access identifiers corresponding to a number of times a lock has been placed and/or released, and/or lock owner identifiers corresponding to threads placing locks. The computing device may provide an active lock information system configured to return active lock information including some or all of the lock identifiers for active locks, lock access identifiers, and/or lock owner identifiers in response to active lock information requests. | 2014-02-06 |
20140040520 | METHOD AND PROGRAM FOR SELECTIVE SUSPENSION OF USB DEVICE - A method provides device selective suspension feature when the operating system does not allow certain device drivers to perform device selective suspension. Two driver stacks are provided in the kernel space for the device. The first driver stack includes a virtual bus, a PDO (physical device object) created by the virtual bus, and a driver for the device (e.g. NDIS driver); the second stack includes a device driver stack (e.g. USB generic driver) and a function driver that performs device selective suspension by sending power IRPs to the device driver stack. By using a virtual bus and PDOs created by the virtual bus in the first driver stack, the driver above the PDO can be any one of many types of drivers (NDIS driver being one example). The virtual bus forwards IRPs from the first driver stack to the second driver stack. | 2014-02-06 |
20140040521 | MEMORY CARD AND CONNECTION SLOT INSERTEDLY PROVIDED THEREOF - A memory card and a connection slot into which it is inserted are proposed. The memory card comprises a plurality of pin holes and the connection slot comprises a plurality of pin headers. The memory card may be inserted onto the pin headers of the connection slot by means of the pin holes so as to transmit a specific transfer protocol specification between the memory card and the connection slot. When the pin holes of the memory card are inserted onto the pin headers of the connection slot, the contact area between each pin hole and its corresponding pin header can be wider, so that the connection between them is tight and stable, which results in high reliability of the contact between the memory card and the connection slot and thus improves the security of data transmission. | 2014-02-06 |
20140040522 | UNIVERSAL PERIPHERAL CONNECTOR - A universal connector apparatus for a mobile device and in communication with the mobile device, the apparatus comprising: at least one universal serial bus (USB) connector providing at least one USB connection; at least one USB host controller configured to control the at least one USB connection; a microprocessor configured to control the at least one USB host controller, the microprocessor having an operating system; a USB device control interface on the mobile device configured to communicate with and control the universal connector apparatus; and a USB driver configured to operate within the operating system to enable the mobile device to connect to one or more peripherals via the at least one USB connector. | 2014-02-06 |
20140040523 | MINIMIZING THE AMOUNT OF TIME STAMP INFORMATION REPORTED WITH INSTRUMENTATION DATA - This invention is a time stamping subsystem of an electronic apparatus. A time stamp generator generates a multibit time stamp value including a predetermined number of least significant bits overlapping a predetermined number of most significant bits. Each client receives the least significant bits. Each client associates captured data with a corresponding set of the least significant bits in a message. A central scheduling unit associates most significant bits of the time stamp value with the least significant bits of the message. This associating compares overlap bits of the most significant bits and least significant bits. The most significant bits are decremented until the overlap bits are equal. | 2014-02-06 |
20140040524 | RACK, SERVER AND ASSEMBLY COMPRISING SUCH A RACK AND AT LEAST ONE SERVER - A rack with a mounting bay to accommodate servers, wherein 1) the mounting bay defines two opposing internal areas disposed parallel to an insertion direction of the servers and divided into a multiplicity of slots, 2) one or more data lines for data connection of servers are configured in the rack, 3) the data lines include optical data lines, and 4) on at least one of the two internal areas of the mounting bay, an end section of a data line with a data interface is disposed on each slot such that a contactless optical data connection to a further data interface on a corresponding server is enabled. | 2014-02-06 |
20140040525 | METHOD AND APPARATUS FOR ENHANCING UNIVERSAL SERIAL BUS APPLICATIONS - A system for enhancing universal serial bus (USB) applications comprises an upstream processor, a downstream processor and a main controller. The upstream processor accepts standard USB signals from a USB host and independently provides responses required by USB specification within the required time frame. The upstream processor also contains storage for descriptors for a device associated with this upstream processor. The main controller obtains the descriptors by commanding the downstream processor, and passes them to the upstream processor. The downstream processor connectable to USB-compliant devices accepts the USB signals from the USB-compliant devices and provides responses required by USB specification within the required time frame. The main controller interconnects the upstream and downstream processors, and provides timing independence between upstream and downstream timing. The main controller also commands the downstream processor to obtain device descriptors independent of the USB host. | 2014-02-06 |
20140040526 | COHERENT DATA FORWARDING WHEN LINK CONGESTION OCCURS IN A MULTI-NODE COHERENT SYSTEM - Systems and methods for efficient data transport across multiple processors when link utilization is congested. In a multi-node system, each of the nodes measures a congestion level for each of the one or more links connected to it. A source node indicates when each of one or more links to a destination node is congested or each non-congested link is unable to send a particular packet type. In response, the source node sets an indication that it is a candidate for seeking a data forwarding path to send a packet of the particular packet type to the destination node. The source node uses measured congestion levels received from other nodes to search for one or more intermediate nodes. An intermediate node in a data forwarding path has non-congested links for data transport. The source node reroutes data to the destination node through the data forwarding path. | 2014-02-06 |
20140040527 | OPTIMIZED MULTI-ROOT INPUT OUTPUT VIRTUALIZATION AWARE SWITCH - In one implementation, an optimized multi-root input-output virtualization (MRIOV) aware switch configured to route data between multiple root complexes and I/O devices is described. The MRIOV aware switch may include two or more upstream ports and one or more downstream ports. Each of an upstream port and a downstream port may include a media access controller (MAC) configured to negotiate link width and link speed for exchange of data packets between the multiple root complexes and the I/O devices. Each of an upstream port and a downstream port may further include a clocking module configured to dynamically configure a clock rate of processing data packets based on one or more of the negotiated link width and the negotiated link speed, and a data link layer (DLL) coupled to the MAC configured to operate at the clock rate, wherein the clock rate is indicative of processing speed. | 2014-02-06 |
20140040528 | RECONFIGURABLE CROSSBAR NETWORKS - Reconfigurable crossbar networks, and devices, systems and methods, including hardware in the form of logic (e.g. application specific integrated circuits (ASICS)), and software in the form of machine readable instructions stored on machine readable media (e.g., flash, non-volatile memory, etc.), which implement the same, are provided. An example of a reconfigurable crossbar network includes a crossbar. A plurality of endpoints is coupled to the crossbar. The plurality of endpoints is grouped into regions at design time of the crossbar network. A plurality of regional interconnects are provided. Each regional interconnect connects a group of endpoints within a given region. | 2014-02-06 |
20140040529 | TRANSLATION TABLE CONTROL - Memory address translation circuitry 14 performs a top down page table walk operation to translate a virtual memory address VA to a physical memory address PA using translation data stored in a hierarchy of translation tables. | 2014-02-06 |
20140040530 | MIXED GRANULARITY HIGHER-LEVEL REDUNDANCY FOR NON-VOLATILE MEMORY - Mixed-granularity higher-level redundancy for NVM provides improved higher-level redundancy operation with better error recovery and/or reduced redundancy information overhead. For example, pages of the NVM that are less reliable, such as relatively more prone to errors, are operated in higher-level redundancy modes having relatively more error protection, at a cost of relatively more redundancy information. Concurrently, blocks of the NVM that are more reliable are operated in higher-level redundancy modes having relatively less error protection, at a cost of relatively less redundancy information. Compared to techniques that operate the entirety of the NVM in the higher-level redundancy modes having relatively less error protection, techniques described herein provide better error recovery. Compared to techniques that operate the entirety of the NVM in the higher-level redundancy modes having relatively more error protection, the techniques described herein provide reduced redundancy information overhead. | 2014-02-06 |
20140040531 | SINGLE-READ BASED SOFT-DECISION DECODING OF NON-VOLATILE MEMORY - A Solid-State Disk (SSD) controller performs soft-decision decoding with a single read, thus improving performance, power, and/or reliability of a storage sub-system, such as an SSD. In a first aspect, the controller generates soft-decision metrics from channel parameters of a hard decode read, without additional reads and/or array accesses. In a second aspect, the controller performs soft decoding using the generated soft-decision metrics. In a third aspect, the controller generates soft-decision metrics and performs soft decoding with the generated soft-decision metrics when a hard decode read error occurs. | 2014-02-06 |
20140040532 | STACKED MEMORY DEVICE WITH HELPER PROCESSOR - A processing system comprises one or more processor devices and other system components coupled to a stacked memory device having a set of stacked memory layers and a set of one or more logic layers. The set of logic layers implements a helper processor that executes instructions to perform tasks in response to a task request from the processor devices or otherwise on behalf of the other processor devices. The set of logic layers also includes a memory interface coupled to memory cell circuitry implemented in the set of stacked memory layers and coupleable to the processor devices. The memory interface operates to perform memory accesses for the processor devices and for the helper processor. By virtue of the helper processor's tight integration with the stacked memory layers, the helper processor may perform certain memory-intensive operations more efficiently than could be performed by the external processor devices. | 2014-02-06 |
20140040533 | DATA MANAGEMENT METHOD, MEMORY CONTROLLER AND MEMORY STORAGE DEVICE - A data management method for a rewritable non-volatile memory module including a first memory unit and a second memory unit is provided. The method includes: grouping physical erasing units of the first memory unit into a data area and a spare area; grouping the physical erasing units of the second memory unit into a data backup area and a command recording area; configuring multiple logical addresses to map to the physical erasing units associated with the data area; receiving a write command which instructs writing data; writing the data to a physical erasing unit associated with the spare area, and writing the data to a physical erasing unit associated with the data backup area; and recording at least a portion of the write command in a physical erasing unit associated with the command recording area. Accordingly, data is backed up in the rewritable non-volatile memory module. | 2014-02-06 |
20140040534 | DATA STORING METHOD AND MEMORY CONTROLLER AND MEMORY STORAGE DEVICE USING THE SAME - A data storing method for a rewritable non-volatile memory module and a memory controller and a memory storage device using the same are provided. The data storing method includes moving or writing data into a physical erase unit of the rewritable non-volatile memory module and determining whether the physical erase unit contains a dancing bit. The data storing method further includes when the physical erase unit contains the dancing bit, restoring the rewritable non-volatile memory module to the state before the data is moved or moving the data from the physical erase unit to another physical erase unit. Thereby, the data storing method can effectively ensure the reliability of the data. | 2014-02-06 |
20140040535 | NONVOLATILE MEMORY DEVICE HAVING WEAR-LEVELING CONTROL AND METHOD OF OPERATING THE SAME - A method is provided for controlling a write operation in a nonvolatile memory device to provide wear leveling, where the nonvolatile memory device includes multiple memory blocks. The method includes reading write indication information with respect to at least a selected memory block of the multiple memory blocks; determining whether a write order of data to be stored in the selected memory block is an ascending order or a descending order, based on the write indication information of the selected memory block; and generating addresses of memory regions in the selected memory block in an ascending order when the write order of the data is determined to be an ascending order, and generating addresses of the memory regions in the selected memory block in a descending order when the write order is determined to be a descending order. | 2014-02-06 |
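The address-generation step in 20140040535 (emit region addresses in ascending or descending order depending on the block's write-indication information) can be sketched as below. The flat base-plus-offset address layout is an illustrative assumption:

```python
def region_addresses(base, region_count, ascending):
    """Generate memory-region addresses for a selected block in the
    order indicated by that block's write-indication information.
    Alternating the order between erase cycles spreads wear across
    the regions of the block."""
    order = range(region_count) if ascending else range(region_count - 1, -1, -1)
    return [base + i for i in order]

print(region_addresses(100, 4, ascending=True))
print(region_addresses(100, 4, ascending=False))
```

The write-indication information read from the block would simply toggle the `ascending` flag on each pass, so no region sits permanently at the "last written" end.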
20140040536 | STORAGE MEDIUM USING NONVOLATILE SEMICONDUCTOR STORAGE DEVICE, DATA TERMINAL HAVING THE STORAGE MEDIUM MOUNTED THEREON, AND FILE ERASING METHOD USABLE FOR THE SAME - A storage medium using a nonvolatile semiconductor storage device for erasing data with certainty on a file-by-file basis and preventing an inadvertent file leak as much as possible is provided. A file erasing method includes (a) reading data other than data in a file which is a target of erase from an erase block having the file as the target of erase recorded therein; (b) writing the read data other than the data in the file which is the target of erase to another erase block; and (c) erasing all the data in the erase block in which the file as the target of erase is recorded. | 2014-02-06 |
20140040537 | STORAGE MEDIUM USING NONVOLATILE SEMICONDUCTOR STORAGE DEVICE, AND DATA TERMINAL INCLUDING THE SAME - A storage medium using a nonvolatile semiconductor storage device for preventing an inadvertent file leak as much as possible is provided. A storage medium using a nonvolatile semiconductor storage device includes a control unit for writing data to memory cells which store data corresponding to files stored on the storage medium, such that all the memory cells are put into the same electronic state, or for erasing data from the memory cells, after a lapse of a set time period. | 2014-02-06 |
20140040538 | METHOD OF WRITING DATA, MEMORY, AND SYSTEM FOR WRITING DATA IN MEMORY - A method of writing data in a memory comprising a NAND cell array is disclosed, wherein a data output device completes the writing process only by transmitting the data and a start address for writing the data to the memory. | 2014-02-06 |
20140040539 | METHOD FOR WEAR LEVELING IN A NONVOLATILE MEMORY - A method for writing and reading data memory cells, comprising: defining in a first memory zone erasable data pages and programmable data blocks; and, in response to write commands of data, writing data in erased blocks of the first memory zone, and writing, in a second memory zone, metadata structures associated with data pages and comprising, for each data page, a wear counter containing a value representative of the number of times that the page has been erased. | 2014-02-06 |
20140040540 | Metadata Management For Virtual Volumes - Methods, apparatus, and systems, including computer programs encoded on a computer storage medium, manage metadata for virtual volumes. In some implementations, a method includes: loading into memory at least a portion of metadata for a virtual volume (VV) that spans data extents of different persistent storage devices, wherein the metadata comprises virtual metadata block (VMB) descriptors and virtual metadata blocks (VMBs); mapping an address of the VV to a VMB number and an index of an extent pointer within a VMB identified by the VMB number, wherein the extent pointer indicates an extent within one of the different persistent storage devices; locating a VMB descriptor in the memory based on the VMB number; and locating the identified VMB in the memory or not in the memory based on the located VMB descriptor. | 2014-02-06 |
20140040541 | METHOD OF MANAGING DYNAMIC MEMORY REALLOCATION AND DEVICE PERFORMING THE METHOD - A method of managing dynamic memory reallocation includes receiving an input address including a block bit part, a tag part, and an index part and communicating the index part to a tag memory array, receiving a tag group communicated by the tag memory array based on the index part, analyzing the tag group based on the block bit part and the tag part and changing the block bit part and the tag part based on a result of the analysis, and outputting an output address including a changed block bit part, a changed tag part, and the index part. | 2014-02-06 |
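The input address in 20140040541 decomposes into a block bit part, a tag part, and an index part, and the output address is reassembled after the block and tag fields are changed. A minimal bit-field sketch; the field widths and ordering (block bits above tag bits above index bits) are assumptions for illustration:

```python
def split_address(addr, block_bits, tag_bits, index_bits):
    """Decompose an input address into (block bit part, tag part,
    index part), most-significant field first."""
    index = addr & ((1 << index_bits) - 1)
    tag = (addr >> index_bits) & ((1 << tag_bits) - 1)
    block = (addr >> (index_bits + tag_bits)) & ((1 << block_bits) - 1)
    return block, tag, index

def join_address(block, tag, index, tag_bits, index_bits):
    """Reassemble an output address from (possibly changed) block
    and tag fields plus the unchanged index part."""
    return (block << (tag_bits + index_bits)) | (tag << index_bits) | index
```

In the described device, the index part selects a tag group in the tag memory array; the analysis of that group decides the new block and tag values fed to `join_address`, while the index passes through unchanged.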
20140040542 | SCATTER-GATHER INTELLIGENT MEMORY ARCHITECTURE FOR UNSTRUCTURED STREAMING DATA ON MULTIPROCESSOR SYSTEMS - A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion. | 2014-02-06 |
20140040543 | Providing State Storage in a Processor for System Management Mode - In one embodiment, the present invention includes a processor that has an on-die storage such as a static random access memory to store an architectural state of one or more threads that are swapped out of architectural state storage of the processor on entry to a system management mode (SMM). In this way communication of this state information to a system management memory can be avoided, reducing latency associated with entry into SMM. Embodiments may also enable the processor to update a status of executing agents that are either in a long instruction flow or in a system management interrupt (SMI) blocked state, in order to provide an indication to agents inside the SMM. Other embodiments are described and claimed. | 2014-02-06 |
20140040544 | LOGICAL VOLUME GROUP DRIVE CONTROL - A volume group power control system and a corresponding method are disclosed. The system is to organize a plurality of drive systems into a plurality of logical volume groups and to store group data identifying each of the plurality of logical volume groups. Each of the logical volume groups includes a plurality of the drive systems. The array controller includes a power management component to monitor activity of each of the plurality of drive systems in each of the plurality of logical volume groups and to deactivate the plurality of drive systems associated with a given one of the plurality of logical volume groups in response to the power management component determining that the plurality of drive systems associated with the given one of the plurality of logical volume groups is substantially inactive. | 2014-02-06 |
20140040545 | CONTROL DEVICE, STORAGE DEVICE, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN CONTROL PROGRAM - A control device controlling a plurality of disk devices to which a physical storage area is assigned in response to a logical storage area accessed from an upper device includes a processor, and the processor assigns an assignment start location of the logical storage area of the plurality of disk devices to respectively different physical locations. | 2014-02-06 |
20140040546 | VIRTUAL DISK DRIVE SYSTEM AND METHOD - A disk drive system and method capable of dynamically allocating data is provided. The disk drive system may include a RAID subsystem having a pool of storage, for example a page pool of storage that maintains a free list of RAIDs, or a matrix of disk storage blocks that maintain a null list of RAIDs, and a disk manager having at least one disk storage system controller. The RAID subsystem and disk manager dynamically allocate data across the pool of storage and a plurality of disk drives based on RAID-to-disk mapping. The RAID subsystem and disk manager determine whether additional disk drives are required, and a notification is sent if the additional disk drives are required. Dynamic data allocation and data progression allow a user to acquire a disk drive later in time when it is needed. Dynamic data allocation also allows efficient data storage of snapshots/point-in-time copies of the virtual volume pool of storage, instant data replay and data instant fusion for data backup and recovery, remote data storage, and data progression. | 2014-02-06 |
20140040547 | NON-DISRUPTIVE DATA MIGRATION BETWEEN PROCESSING SYSTEMS THAT DO NOT SHARE STORAGE - A technique is disclosed for non-disruptive migration of data between storage on hosts that do not share storage with each other. Aggregate relocation is enabled to operate between the hosts in the absence of shared storage connectivity. The technique includes mirroring an aggregate from storage of a first host to storage of a second host by using a sub-RAID level proxy in each of the first and second hosts to proxy data communications between the hosts. The proxy is used in lieu of the mirroring application in the first host having direct access to the storage devices of the second host. The technique further includes relocating the aggregate from the first host to the second host. | 2014-02-06 |
20140040548 | TECHNIQUE TO AVOID CASCADED HOT SPOTTING - The present invention overcomes the disadvantages of the prior art by providing a technique that stripes data containers across volumes of a striped volume set (SVS) using one of a plurality of different data placement patterns to thereby reduce the possibility of hotspots arising due to each data container using the same data placement pattern within the SVS. The technique is illustratively implemented by calculating a first index value and an intermediate index value, and calculating a hash value of an inode associated with a data container to be accessed within the SVS. A final index value is calculated by multiplying the intermediate index value by the hash value, modulo the number of volumes of the SVS. Further, a Locate() function may be used to compute the location of data container content in the SVS to which a data access request is directed to ensure consistency of such content. | 2014-02-06 |
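The final-index calculation in 20140040548 is stated concretely: multiply the intermediate index by the per-container inode hash, modulo the number of volumes. A sketch of that formula plus a hypothetical Locate()-style wrapper; the hash function and the derivation of the intermediate index from the access offset are illustrative assumptions:

```python
def final_index(intermediate_index, inode_hash, num_volumes):
    """Final volume index = (intermediate index * inode hash) mod N,
    per the placement calculation described in the abstract."""
    return (intermediate_index * inode_hash) % num_volumes

def locate(inode, stripe_offset, num_volumes):
    """Hypothetical Locate()-style helper: derive a per-container
    placement so different containers use different stripe patterns."""
    h = (inode * 2654435761) & 0xFFFFFFFF  # Knuth-style hash (illustrative)
    intermediate = stripe_offset % num_volumes
    return final_index(intermediate, h | 1, num_volumes)  # odd hash keeps spread

print(final_index(3, 7, 5))  # (3 * 7) mod 5 -> 1
```

Because the multiplier differs per inode, two containers with the same stripe offset generally land on different volumes, which is exactly how the cascaded hot spot is avoided.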
20140040549 | STORAGE ARRAY ASSIST ARCHITECTURE - Disclosed is a storage system architecture. An Environmental service module (ESM) is coupled to one or more array controllers. The ESM is configured with a central processing unit and one or more assist functions. The assist functions may include nonvolatile memory. This nonvolatile memory may be used for write caching, mirroring data, and/or configuration data. The assist functions, or the ESM, may be controlled by the array controllers using SCSI or RDMA commands. | 2014-02-06 |
20140040550 | MEMORY CHANNEL THAT SUPPORTS NEAR MEMORY AND FAR MEMORY ACCESS - A semiconductor chip comprising memory controller circuitry having interface circuitry to couple to a memory channel. The memory controller includes first logic circuitry to implement a first memory channel protocol on the memory channel. The first memory channel protocol is specific to a first volatile system memory technology. The interface also includes second logic circuitry to implement a second memory channel protocol on the memory channel. The second memory channel protocol is specific to a second non-volatile system memory technology. The second memory channel protocol is a transactional protocol. | 2014-02-06 |
20140040551 | REWIND ONLY TRANSACTIONS IN A DATA PROCESSING SYSTEM SUPPORTING TRANSACTIONAL STORAGE ACCESSES - In a multiprocessor data processing system having a distributed shared memory system, a memory transaction that is a rewind-only transaction (ROT) and that includes one or more transactional memory access instructions and a transactional abort instruction is executed. In response to execution of the one or more transactional memory access instructions, one or more memory accesses to the distributed shared memory system indicated by the one or more transactional memory access instructions are performed. In response to execution of the transactional abort instruction, execution results of the one or more transactional memory access instructions are discarded and control is passed to a fail handler. | 2014-02-06 |
20140040552 | MULTI-CORE COMPUTE CACHE COHERENCY WITH A RELEASE CONSISTENCY MEMORY ORDERING MODEL - A method includes storing, with a first programmable processor, shared variable data to cache lines of a first cache of the first processor. The method further includes executing, with the first programmable processor, a store-with-release operation, executing, with a second programmable processor, a load-with-acquire operation, and loading, with the second programmable processor, the value of the shared variable data from a cache of the second programmable processor. | 2014-02-06 |
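As a loose analogue of the store-with-release / load-with-acquire pairing in application 20140040552, the sketch below uses Python's `threading.Event` as the synchronization point; it illustrates only the ordering contract (publish after writing, read after observing the publication), not the cache-line mechanics, and every name in it is an assumption.

```python
# Illustrative analogue of release/acquire ordering: the producer writes the
# shared data, then signals ("release"); the consumer waits on the signal
# ("acquire") before reading, so it never observes a stale value.
import threading

shared = {}
ready = threading.Event()

def producer():
    shared["value"] = 42   # write the shared variable data first...
    ready.set()            # ...then publish: the store-with-release analogue

def consumer(out):
    ready.wait()                 # the load-with-acquire analogue
    out.append(shared["value"])  # guaranteed to observe the published value

result = []
t_cons = threading.Thread(target=consumer, args=(result,))
t_prod = threading.Thread(target=producer)
t_cons.start()
t_prod.start()
t_prod.join()
t_cons.join()
assert result == [42]
```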
20140040553 | CACHE DATA MIGRATION IN A MULTICORE PROCESSING SYSTEM - A method of transferring data between two caches comprises sending a first message from a first processor to a second processor indicating that data is available for transfer from a first cache associated with the first processor, requesting, from the second processor, a data transfer of the data from the first cache to a second cache associated with the second processor, transferring the data from the first cache to the second cache in response to the request, and sending a second message from the second processor to the first processor indicating that the data transfer is complete. | 2014-02-06 |
20140040554 | Protecting Large Regions without Operating-System Support - A system and method for providing very large read-sets for hardware transactional memory with limited hardware support by monitoring metadata such as page-table entries. The system and method include a Hardware-based Transactional Memory (HTM) mechanism that tracks metadata such as page-table entries (PTEs) rather than all the data itself. The HTM mechanism protects large regions of memory by providing conflict detection so that regions of memory can be located within a local read or write set. | 2014-02-06 |
20140040555 | DATA PROCESSING METHOD, DEVICE, AND SYSTEM FOR PROCESSING REQUESTS IN A MULTI-CORE SYSTEM - The present disclosure provides a method, device, and system for processing a request in a multi-core system. The method comprises steps of: receiving a request for data by a filter from a requesting unit; comparing an indicator indicative of a logical partition in the request with an indicator indicative of the logical partition in a record of the filter; searching in a unit where the filter is located based on the request and returning a search result to the requesting unit if the comparison result matches; and returning a NONE response to the requesting unit from the filter if the comparison result does not match. | 2014-02-06 |
20140040556 | Dynamic Multithreaded Cache Allocation - Apparatus and method embodiments for dynamically allocating cache space in a multi-threaded execution environment are disclosed. In some embodiments, a processor includes a cache shared by each of a plurality of processor cores and/or each of a plurality of threads executing on the processor. The processor further includes a cache allocation circuit configured to dynamically allocate space in the cache provided to each of the plurality of processor cores based on their respective usage patterns. The cache allocation circuit may track cache usage by each of the processor cores/threads using subsets of usage bits and counters configured to update states of the usage bits, and may allocate more space to those that exhibit more usage of the cache. | 2014-02-06 |
20140040557 | NESTED REWIND ONLY AND NON REWIND ONLY TRANSACTIONS IN A DATA PROCESSING SYSTEM SUPPORTING TRANSACTIONAL STORAGE ACCESSES - In a multiprocessor data processing system having a distributed shared memory system, first and second nested memory transactions are executed, where the first memory transaction is a rewind-only transaction (ROT) and the second memory transaction is a non-ROT memory transaction. The first memory transaction has a transaction body including the second memory transaction and an additional plurality of transactional memory access instructions. In response to execution of the transactional memory access instructions, memory accesses are performed to the distributed shared memory system. Conflicts between memory accesses not within the first memory transaction and at least a load footprint of any of the transactional memory access instructions preceding the second memory transaction are not tracked. However, conflicts between memory accesses not within the first memory transaction and store and load footprints of any of the transactional memory access instructions that follow initiation of the second memory transaction are tracked. | 2014-02-06 |
20140040558 | INFORMATION PROCESSING APPARATUS, PARALLEL COMPUTER SYSTEM, AND CONTROL METHOD FOR ARITHMETIC PROCESSING UNIT - An information processing apparatus included in a parallel computer system has a memory that holds data and a processor including a cache memory that holds a part of the data held on the memory and a processor core that performs arithmetic operations using the data held on the memory or the cache memory. Moreover, the information processing apparatus has a communication device that determines whether data received from a different information processing apparatus is data that the processor core waits for. When the communication device determines that the received data is data that the processor core waits for, the communication device stores the received data on the cache memory. When the communication device determines that the received data is data that the processor core does not wait for, the communication device stores the received data on the memory. | 2014-02-06 |
20140040559 | SYSTEM AND METHOD OF CACHING INFORMATION - A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache. | 2014-02-06 |
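The admission rule of application 20140040559 compares inter-request durations: a repeat-requested item is admitted only if its request gap beats the time since some cached item's previous request. Below is a hedged Python sketch; the data structures and the choice of which cached item to evict are assumptions, since the abstract describes the comparison but not the bookkeeping.

```python
# Sketch of the recency-gap admission policy described above: an item enters
# the cache only if it has been requested before and the gap between its last
# two requests is shorter than the time since some cached item's previous
# request. Data structures and the eviction choice are assumptions.

class GapCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}       # item -> value
        self.last_seen = {}   # item -> time of that item's previous request

    def request(self, item, value, now):
        prev = self.last_seen.get(item)
        self.last_seen[item] = now
        if item in self.cache or prev is None:
            return            # already cached, or first-ever request
        gap = now - prev      # duration between this and the previous request
        if len(self.cache) < self.capacity:
            self.cache[item] = value
        else:
            # cached item whose previous request is furthest in the past
            stalest = max(self.cache, key=lambda k: now - self.last_seen[k])
            if gap < now - self.last_seen[stalest]:
                del self.cache[stalest]
                self.cache[item] = value

cache = GapCache(capacity=1)
cache.request("a", "A", now=0)   # first request: not cached
cache.request("a", "A", now=5)   # repeat request: cached (room available)
cache.request("b", "B", now=6)   # first request for "b": not cached
cache.request("b", "B", now=7)   # gap 1 beats a's elapsed 2: "b" replaces "a"
assert list(cache.cache) == ["b"]
```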
20140040560 | All Invalidate Approach for Memory Management Units - An input/output memory management unit (IOMMU) having an “invalidate all” command available to clear the contents of cache memory is presented. The cache memory provides fast access to address translation data that has been previously obtained by a process. A typical cache memory includes device tables, page tables and interrupt remapping entries. Cache memory data can become stale or be compromised from security breaches or malfunctioning devices. In these circumstances, a rapid approach to clearing cache memory content is provided. | 2014-02-06 |
20140040561 | HANDLING CACHE WRITE-BACK AND CACHE EVICTION FOR CACHE COHERENCE - A method implemented by a computer system comprising a first memory agent and a second memory agent coupled to the first memory agent, wherein the second memory agent has access to a cache comprising a cache line, the method comprising changing a state of the cache line by the second memory agent, and sending a non-snoop message from the second memory agent to the first memory agent via a communication channel assigned to snoop responses, wherein the non-snoop message informs the first memory agent of the state change of the cache line. | 2014-02-06 |
20140040562 | USING BROADCAST-BASED TLB SHARING TO REDUCE ADDRESS-TRANSLATION LATENCY IN A SHARED-MEMORY SYSTEM WITH ELECTRICAL INTERCONNECT - The disclosed embodiments provide a system that uses broadcast-based TLB-sharing techniques to reduce address-translation latency in a shared-memory multiprocessor system with two or more nodes that are connected by an electrical interconnect. During operation, a first node receives a memory operation that includes a virtual address. Upon determining that one or more TLB levels of the first node will miss for the virtual address, the first node uses the electrical interconnect to broadcast a TLB request to one or more additional nodes of the shared-memory multiprocessor in parallel with scheduling a speculative page-table walk for the virtual address. If the first node receives a TLB entry from another node of the shared-memory multiprocessor via the electrical interconnect in response to the TLB request, the first node cancels the speculative page-table walk. Otherwise, if no response is received, the first node instead waits for the completion of the page-table walk. | 2014-02-06 |
20140040563 | SHARED VIRTUAL MEMORY MANAGEMENT APPARATUS FOR PROVIDING CACHE-COHERENCE - A shared virtual memory management apparatus for ensuring cache coherence. When two or more cores request write permission to the same virtual memory page, the shared virtual memory management apparatus allocates a physical memory page for the cores to change data in the allocated physical memory page. Thereafter, changed data is updated in an original physical memory page, and accordingly it is feasible to achieve data coherence in a multi-core hardware environment that does not provide cache coherence. | 2014-02-06 |
20140040564 | System, Method, and Computer Program Product for Conditionally Sending a Request for Data to a Node Based on a Determination - A system, method, and computer program product are provided for conditionally sending a request for data to a node based on a determination. In operation, a first request for data is sent to a cache of a first node. Additionally, it is determined whether the first request can be satisfied within the first node, where the determining includes at least one of determining a type of the first request and determining a state of the data in the cache. Furthermore, a second request for the data is conditionally sent to a second node, based on the determination. | 2014-02-06 |
20140040565 | Shared Memory Space in a Unified Memory Model - Methods and systems are provided for mapping a memory instruction to a shared memory address space in a computer arrangement having a CPU and an APD. A method includes receiving a memory instruction that refers to an address in the shared memory address space, mapping the memory instruction based on the address to a memory resource associated with either the CPU or the APD, and performing the memory instruction based on the mapping. | 2014-02-06 |
20140040566 | METHOD AND SYSTEM FOR ACCESSING C++ OBJECTS IN SHARED MEMORY - The current application discloses methods and systems that access member functions and data fields of C++ objects placed in shared memory, as well as cast such objects, from multiple processes. The method employs basic C++ operations and guarantees correct access for any C++ translator. The method calculates the offsets to the data fields of each C++ class at the time of access. For that calculation to be correct, each process uses dedicated C++ objects of the same class placed in the process's own process heap. Because of their location, these dedicated objects allow for regular C++ access since the runtime uses the vtable for a process to access process-heap objects. For objects placed in shared memory, the runtime accesses data fields through specially constructed C++ expressions that add the calculated offsets to the object address. For member functions, an additional mechanism is involved that invokes each member function on the dedicated object from the process heap, passing auxiliary parameters to the function. These parameters include the address of the object in shared memory and the address of the object in heap memory. The addresses are used inside the function in order to further access object data fields and member functions. Casting uses dedicated objects in order to calculate offsets between super-object and sub-object. | 2014-02-06 |
20140040567 | TLB-Walk Controlled Abort Policy for Hardware Transactional Memory - A system and method are disclosed for increasing large region transaction throughput by making informed determinations whether to abort a thread from a first core or a thread from a second core when a conflict is detected between the threads. Such a system and method allow resolution of conflicts between a first thread and a second thread. In certain embodiments, the system and method allow a requester to detect a conflict under specific circumstances and make an intelligent decision whether to abort the first thread, enter a wait state to give the first thread an opportunity to complete execution or, if possible, abort the second thread. | 2014-02-06 |
20140040568 | MEMORY MODULE WITH DISTRIBUTED DATA BUFFERS AND METHOD OF OPERATION - A memory module is operable to communicate with a memory controller via a data bus and a control/address bus and comprises a module board; a plurality of memory devices mounted on the module board; and multiple sets of data pins along an edge of the module board. Each respective set of the multiple sets of data pins is operatively coupled to a respective set of multiple sets of data lines in the data bus. The memory module further comprises a control circuit configured to receive control/address information from the memory controller via the control/address bus and to produce module control signals. The memory module further comprises a plurality of buffer circuits each being disposed proximate to and electrically coupled to a respective set of the multiple sets of data pins. Each buffer circuit is configured to respond to the module control signals by enabling data communication between the memory controller and at least one first memory device among the plurality of memory devices and by isolating at least one second memory device among the plurality of memory devices from the memory controller. | 2014-02-06 |
20140040569 | LOAD-REDUCING CIRCUIT FOR MEMORY MODULE - A circuit is mountable on a memory module that includes a plurality of memory devices and that is operable in a computer system to perform memory operations in response to memory commands from a memory controller. The circuit comprises a register device configured to receive a set of input control/address signals associated with a respective memory command (e.g., a read command or a write command) from the memory controller and to generate a set of output control/address signals in response to the set of input control/address signals. The set of output control/address signals are provided to the plurality of memory devices. The circuit further comprises logic to monitor the memory commands from the memory controller and to selectively isolate one or more first memory devices among the plurality of memory devices from the memory controller in response to the respective memory command so as to reduce a load of the memory module to the computer system while one or more second memory devices among the plurality of memory devices are communicating with the memory controller in response to the set of output control/address signals. | 2014-02-06 |
20140040570 | On Die/Off Die Memory Management - Video analytics may be used to assist video encoding by selectively encoding only portions of a frame and using, instead, previously encoded portions. Previously encoded portions may be used when succeeding frames have a level of motion less than a threshold. In such case, all or part of succeeding frames may not be encoded, increasing bandwidth and speed in some embodiments. | 2014-02-06 |
20140040571 | DATA INTERLEAVING MODULE - The present disclosure includes apparatuses and methods related to a data interleaving module. A number of methods can include interleaving data received from a bus among modules according to a selected one of a plurality of data densities per memory cell supported by an apparatus and transferring the interleaved data from the modules to a register. | 2014-02-06 |
20140040572 | REQUEST ORDERING SUPPORT WHEN SWITCHING VIRTUAL DISK REPLICATION LOGS - Storage access requests, such as write requests, are received from a virtual machine. A storage request processing module updates one of multiple virtual disks as directed by each of the storage access requests, and a replication management module stores information associated with each storage access request in one of multiple logs. The logs can be transferred to a recovery device at various intervals and/or in response to various events, which results in switching logs so that the replication management module stores the information associated with each storage access request in a new log and the previous (old) log is transferred to the recovery device. During this switching, request ordering for write order dependent requests is maintained at least in part by blocking processing of the information associated with each storage access request. | 2014-02-06 |
20140040573 | DETERMINING A NUMBER OF STORAGE DEVICES TO BACKUP OBJECTS IN VIEW OF QUALITY OF SERVICE CONSIDERATIONS - Storage device libraries, machine readable media, and methods are provided for determining a number of storage devices to backup objects in view of quality of service considerations. An example of a storage device library that determines the number of storage devices to backup objects includes a plurality of storage devices and a controller to control backup of the objects to an assigned number of the storage devices. The controller determines the assigned number of the storage devices before the backup of the objects based upon assigned parameters for backup of the objects that include a time window and a number of concurrent disk agents per storage device. | 2014-02-06 |
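Application 20140040573 names a time window and a per-device concurrent-agent count as the sizing inputs but gives no formula. One plausible rule, shown purely as an assumption, is to derive the number of concurrent streams needed to finish all objects inside the window and then group those streams onto devices by agent count.

```python
# Hypothetical sizing rule consistent with the parameters named above; the
# formula itself is an assumption, not taken from the patent.
import math

def devices_needed(object_durations, window, agents_per_device):
    """Enough concurrent streams to finish all objects inside the time
    window, grouped onto devices that each run a fixed number of agents."""
    streams = math.ceil(sum(object_durations) / window)
    return math.ceil(streams / agents_per_device)

# e.g. 10 hours of backup work, a 2-hour window, 3 concurrent agents/device
assert devices_needed([4, 3, 3], window=2, agents_per_device=3) == 2
```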
20140040574 | RESILIENCY WITH A DESTINATION VOLUME IN A REPLICATION ENVIRONMENT - A method to provide resiliency with a destination volume in a replication environment is disclosed. Data from a source volume, such as a primary volume or a secondary volume in a replication relationship, is migrated to the destination volume. A snapshot representing data on a source volume is generated. The replication relationship between the source volumes is broken, and a new relationship between a source volume and the destination volume is established. A delta of data between the snapshot and one of the volumes in the new relationship is generated. The delta is sent to the other of the volumes in the new relationship. | 2014-02-06 |
20140040575 | MOBILE HADOOP CLUSTERS - Techniques for mobile clusters for collecting telemetry data and processing analytic tasks are disclosed herein. The mobile cluster includes a processor, a plurality of data nodes and an analysis module. The data nodes receive and store a snapshot of at least a portion of data stored in a main Hadoop storage cluster and real-time acquired data received from a data capturing device. The analysis module is operatively coupled to the processor to process analytic tasks based on the snapshot and the real-time acquired data when the mobile cluster is not connected to the main storage cluster. | 2014-02-06 |
20140040576 | REQUESTING A MEMORY SPACE BY A MEMORY CONTROLLER - Systems and methods are provided to process a request for a memory space from a memory controller. A particular method may include communicating, by a memory controller, a request for a memory space of a memory to a computer program. The memory controller is configured to initialize the memory, and the memory controller is configured to perform operations on the memory as instructed. The computer program is configured to make memory spaces of the memory available in response to requests for the memory spaces of the memory. The method may also include using, by the memory controller, the memory space in response to an indication from the computer program that the memory space is available. Also provided are systems and methods for copying a memory space by a memory controller to a memory space under exclusive control of the memory controller. | 2014-02-06 |
20140040577 | Automatic Use of Large Pages - A mechanism is provided for automatic use of large pages. An operating system loader performs aggressive contiguous allocation followed by demand paging of small pages into a best-effort contiguous and naturally aligned physical address range sized for a large page. The operating system detects when the large page is fully populated and switches the mapping to use large pages. If the operating system runs low on memory, the operating system can free portions and degrade gracefully. | 2014-02-06 |
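The promotion step in application 20140040577 (demand-page small pages into a naturally aligned range, then switch the mapping once the range is fully populated) can be sketched as below; the page sizes and counts are illustrative assumptions.

```python
# Sketch of large-page promotion: small pages fault into an aligned range,
# and once every slot is populated the mapping switches to one large page.
# Sizes are illustrative assumptions (4 KiB small pages in a 2 MiB large page).
SMALL_PAGES_PER_LARGE = 512

class LargePageRange:
    def __init__(self):
        self.populated = set()        # indices of small pages faulted in
        self.uses_large_page = False  # whether the mapping has been switched

    def fault_in(self, small_page_index):
        self.populated.add(small_page_index)
        if len(self.populated) == SMALL_PAGES_PER_LARGE:
            self.uses_large_page = True   # range fully populated: promote

r = LargePageRange()
for i in range(SMALL_PAGES_PER_LARGE):
    r.fault_in(i)
assert r.uses_large_page
```

The inverse direction mentioned in the abstract (freeing portions under memory pressure) would simply clear the flag and demote back to small pages.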
20140040578 | MANAGING DATA SET VOLUME TABLE OF CONTENTS - For managing a data set volume table of contents, a management module creates a data set volume table of contents (DSVTOC) for a data set residing on a volume. The DSVTOC resides in a virtual storage access method (VSAM) system and includes a DSVTOC index, DSVTOC cluster data, and DSVTOC data for the data set. A copy module maintains a copy of the DSVTOC on the volume. | 2014-02-06 |
20140040579 | ADMINISTERING A SHARED, ON-LINE POOL OF DATA STORAGE RESOURCES FOR PERFORMING DATA STORAGE OPERATIONS - A data storage system according to certain aspects manages and administers the sharing of storage resources among clients in the shared storage pool. The shared storage pool according to certain aspects can provide readily available remote storage to clients in the pool. A share list for each client may be used to determine where data is stored within the storage pool. The share list may include clients that are known to each client, and therefore, a user may feel more at ease storing the data on the clients in the storage pool. Management and administration of the storage pool and backup and restore jobs can be performed by an entity other than the client, making backup and restore more streamlined and simple for the clients in the pool. | 2014-02-06 |
20140040580 | ADMINISTERING A SHARED, ON-LINE POOL OF DATA STORAGE RESOURCES FOR PERFORMING DATA STORAGE OPERATIONS - A data storage system according to certain aspects manages and administers the sharing of storage resources among clients in the shared storage pool. The shared storage pool according to certain aspects can provide readily available remote storage to clients in the pool. A share list for each client may be used to determine where data is stored within the storage pool. The share list may include clients that are known to each client, and therefore, a user may feel more at ease storing the data on the clients in the storage pool. Management and administration of the storage pool and backup and restore jobs can be performed by an entity other than the client, making backup and restore more streamlined and simple for the clients in the pool. | 2014-02-06 |
20140040581 | STORAGE SYSTEM AND DATA TRANSFER METHOD - A storage system includes: a storage device configured to copy data to another storage device, the storage device includes: a first storage region configured to store the data; a first receiving unit configured to receive a first instruction from a higher level device; a transferring unit configured to transfer the instruction from the higher level device to the another storage device; and a first storage region releasing unit configured to release the first storage region, wherein, when the first instruction is a releasing instruction instructing to release the first storage region, the transferring unit transfers the releasing instruction to the another storage device before releasing of the first storage region is completed by the first storage region releasing unit. | 2014-02-06 |
20140040582 | BLOCK-LEVEL SINGLE INSTANCING - Described in detail herein are systems and methods for single instancing blocks of data in a data storage system. For example, the data storage system may include multiple computing devices (e.g., client computing devices) that store primary data. The data storage system may also include a secondary storage computing device, a single instance database, and one or more storage devices that store copies of the primary data (e.g., secondary copies, tertiary copies, etc.). The secondary storage computing device receives blocks of data from the computing devices and accesses the single instance database to determine whether the blocks of data are unique (meaning that no instances of the blocks of data are stored on the storage devices). If a block of data is unique, the single instance database stores it on a storage device. If not, the secondary storage computing device can avoid storing the block of data on the storage devices. | 2014-02-06 |
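The uniqueness check in application 20140040582 can be sketched with a digest-keyed single instance database; the hash choice and in-memory structures here are assumptions standing in for the secondary storage computing device's real components.

```python
# Sketch of block-level single instancing: a digest-keyed "single instance
# database" records which blocks already exist on secondary storage, so a
# duplicate block is never written twice.
import hashlib

single_instance_db = set()   # digests of blocks already stored (assumption)
storage = []                 # stand-in for the secondary storage device

def store_block(block: bytes) -> bool:
    """Store the block only if it is unique; return True if it was written."""
    digest = hashlib.sha256(block).hexdigest()
    if digest in single_instance_db:
        return False          # duplicate instance: skip the write
    single_instance_db.add(digest)
    storage.append(block)
    return True

assert store_block(b"alpha") is True
assert store_block(b"beta") is True
assert store_block(b"alpha") is False   # second copy is not stored
assert len(storage) == 2
```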
20140040583 | STORAGE SYSTEM GROUP INCLUDING SCALE-OUT STORAGE SYSTEM AND MANAGEMENT METHOD THEREFOR - A management system is coupled to a storage system group including a scale-out storage system (a virtual storage system). The management system has storage management information, which includes information denoting, for each storage system, whether or not a storage system is a component of a virtual storage system. The management system, based on the storage management information, determines whether or not a first storage system is a component of a virtual storage system, and in a case where the result of this determination is affirmative, identifies, based on the storage management information, a second storage system, which is a storage system other than the virtual storage system that includes the first storage system, and allows a user to perform a specific operation only with respect to this second storage system. | 2014-02-06 |
20140040584 | Multi-layer content protecting microcontroller - The present invention relates to a microcontroller designed for protection of intellectual digital content. The microcontroller includes a secure CPU, a real-time cipher, and a user programmable multi-layer access control system for internal memory realized by programmable nonvolatile memory. Programmable nonvolatile memory allows in-system and in-application programming for the end user. The programmable nonvolatile memory is mainly used for program code and operating parameter storage. The multiple-layer access control is an integral part of the CPU, providing confidentiality protection to embedded digital content by controlling reading, writing, and/or execution of a code segment according to a set of user-programmed parameters. The cipher incorporates a set of cryptographic rules for data encryption and decryption with row and column manipulation for data storage. All cryptographic operations are executed in parallel with CPU run time without incurring additional latency and delay for system operation. | 2014-02-06 |
20140040585 | COMPUTER, MANAGEMENT METHOD, AND RECORDING MEDIUM - A computer manages memory by dividing the memory into at least a first storage area which stores generated data, and a closable region made of at least one second storage area and at least one third storage area. The second storage area constitutes a destination to which the data stored in the first storage area is to be moved. Data is moved from the first storage area to the second storage area; and the second storage area to which the data has been moved is changed to the third storage area. An update to the data stored in the third storage area is detected and the data of which the update has been detected is moved to a closable region other than the third storage area. | 2014-02-06 |
20140040586 | Orphan Storage Release - A method, system and computer readable medium are disclosed that identify orphan storage and release the orphaned storage before application or system outages can result. More specifically, in certain embodiments, a method, system and computer readable medium periodically scans through common memory storage and identifies those areas that are no longer associated with a running task or have been allocated for longer than a running task with a matching task address. These areas are then identified as potentially orphaned storage locations. | 2014-02-06 |
20140040587 | Power Savings Apparatus and Method for Memory Device Using Delay Locked Loop - Embodiments are directed to reduced power consumption for memory data transfer at high frequency through synchronized clock signaling. Delay locked loop (DLL) circuits are used to generate the synchronized clock signals. A DLL circuit consumes power as long as it is outputting the synchronized clock signals. A power saving apparatus and method are described wherein the DLL circuit is powered on when memory data access is active, while the DLL circuit is powered down when memory access is idle. | 2014-02-06 |
20140040588 | NON-TRANSACTIONAL PAGE IN MEMORY - One or more embodiments are directed to allocating a page to put non-shared data to the page, setting a transactional property for the page, the transactional property indicating that data in the page does not need tracking by hardware transactional memory (HTM), in response to detecting an access to the page during a transaction, determining whether the transactional property for the page is set, and in response to determining that the transactional property for the page is set, handling data loaded from the page in a cache as non-transactional data. | 2014-02-06 |
20140040589 | NON-TRANSACTIONAL PAGE IN MEMORY - One or more embodiments are directed to allocating a page to put non-shared data to the page, setting a transactional property for the page, the transactional property indicating that data in the page does not need tracking by hardware transactional memory (HTM), in response to detecting an access to the page during a transaction, determining whether the transactional property for the page is set, and in response to determining that the transactional property for the page is set, handling data loaded from the page in a cache as non-transactional data. | 2014-02-06 |
20140040590 | METHOD AND SYSTEM FOR MANAGING LARGE WRITE-ONCE TABLES IN SHADOW PAGE DATABASES - Methods and systems for managing large write-once tables are described. In some embodiments, a relational database management system includes a space allocation module that utilizes both a logical space allocation scheme, as well as a physical space allocation scheme, to allocate space in units (e.g., pages) having two different sizes: small pages and big pages. For instance, small pages are logically allocated with a conventional converter module, which manages a converter table for mapping logical pages to physical pages, while big pages are physically allocated with an object directory manager, which manages big objects comprised of big pages. | 2014-02-06 |
20140040591 | Garbage Collection Based on Functional Block Size - An execution environment for functional code may treat application segments as individual programs for memory management. A larger program of application may be segmented into functional blocks that receive an input and return a value, but operate without changing state of other memory objects. The program segments may have memory pages allocated to the segments by the operating system as other full programs, and may deallocate memory pages when the segments finish operating. Functional programming languages and imperative programming languages may define program segments explicitly or implicitly, and the program segments may be identified at compile time or runtime. | 2014-02-06 |
20140040592 | ACTIVE BUFFERED MEMORY - According to one embodiment of the present invention, a method for operating a memory device that includes memory and a processing element includes receiving, in the processing element, a command from a requestor, loading, in the processing element, a program based on the command, the program comprising a load instruction loaded from a first memory location in the memory, and performing, by the processing element, the program, the performing including loading data in the processing element from a second memory location in the memory. The method also includes generating, by the processing element, a virtual address of the second memory location based on the load instruction and translating, by the processing element, the virtual address into a real address. | 2014-02-06 |
20140040593 | MULTIPLE SETS OF ATTRIBUTE FIELDS WITHIN A SINGLE PAGE TABLE ENTRY - A first processing unit and a second processing unit can access a system memory that stores a common page table that is common to the first processing unit and the second processing unit. The common page table can store virtual memory addresses to physical memory addresses mapping for memory chunks accessed by a job of an application. A page entry, within the common page table, can include a first set of attribute bits that defines accessibility of the memory chunk by the first processing unit, a second set of attribute bits that defines accessibility of the same memory chunk by the second processing unit, and physical address bits that define a physical address of the memory chunk. | 2014-02-06 |
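The dual-attribute page table entry described above can be sketched as a single packed word holding one attribute field per processing unit plus the physical address bits. The bit layout below (4 attribute bits per unit, a 24-bit frame number) is a hypothetical layout chosen for illustration, not the one claimed in the patent.

```python
# Sketch of a page-table entry carrying two attribute sets, one per
# processing unit, plus physical address bits. Field widths are assumptions.

CPU_ATTR_SHIFT = 28   # attribute bits for the first processing unit (e.g. CPU)
GPU_ATTR_SHIFT = 24   # attribute bits for the second processing unit (e.g. GPU)
ATTR_MASK = 0xF
FRAME_MASK = 0x00FFFFFF

def make_pte(cpu_attrs, gpu_attrs, frame):
    """Pack both attribute sets and the physical frame number into one entry."""
    return ((cpu_attrs & ATTR_MASK) << CPU_ATTR_SHIFT) | \
           ((gpu_attrs & ATTR_MASK) << GPU_ATTR_SHIFT) | \
           (frame & FRAME_MASK)

def attrs_for(pte, unit):
    """Each processing unit reads only its own attribute field."""
    shift = CPU_ATTR_SHIFT if unit == "cpu" else GPU_ATTR_SHIFT
    return (pte >> shift) & ATTR_MASK

pte = make_pte(cpu_attrs=0b0111, gpu_attrs=0b0001, frame=0x1234)
```

Both units walk the same common table; only the attribute field they consult differs, which is what lets one mapping serve two processors with different access rights.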
20140040594 | PROGRAMMABLE DEVICE FOR SOFTWARE DEFINED RADIO TERMINAL - A programmable device suitable for a software defined radio terminal is disclosed. In one aspect, the device includes a scalar cluster providing a scalar data path and a scalar register file and arranged for executing scalar instructions. The device may further include at least two interconnected vector clusters connected with the scalar cluster. Each of the at least two vector clusters provides a vector data path and a vector register file and is arranged for executing at least one vector instruction different from vector instructions performed by any other vector cluster of the at least two vector clusters. | 2014-02-06 |
20140040595 | SPACE EFFICIENT CHECKPOINT FACILITY AND TECHNIQUE FOR PROCESSOR WITH INTEGRALLY INDEXED REGISTER MAPPING AND FREE-LIST ARRAYS - A processor may efficiently implement register renaming and checkpoint repair even in instruction set architectures with large numbers of wide (bit-width) registers by (i) renaming all destination operand register targets, (ii) implementing the free list and architectural-to-physical mapping table as a combined array storage with unitary (or common) read, write and checkpoint pointer indexing and (iii) storing checkpoints as snapshots of the mapping table, rather than of actual register contents. In this way, uniformity (and timing simplicity) of the decode pipeline may be accentuated and architectural-to-physical mappings (or allocable mappings) may be efficiently shuttled between free-list, reorder buffer and mapping table stores in correspondence with instruction dispatch and completion as well as checkpoint creation, retirement and restoration. | 2014-02-06 |
20140040596 | PACKED LOAD/STORE WITH GATHER/SCATTER - Embodiments relate to packed loading and storing of data. An aspect includes a method for packed loading and storing of data distributed in a system that includes memory and a processing element. The method includes fetching and decoding an instruction for execution by the processing element. The processing element gathers a plurality of individually addressable data elements from non-contiguous locations in the memory which are narrower than a nominal width of register file elements in the processing element based on the instruction. The data elements are packed and loaded into register file elements of a register file entry by the processing element based on the instruction, such that at least two of the data elements gathered from the non-contiguous locations in the memory are packed and loaded into a single register file element of the register file entry. | 2014-02-06 |
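The packed gather-load semantics above can be sketched as follows: narrow elements are fetched from non-contiguous addresses and several of them are packed into each wide register file element. The 16-bit element width, 64-bit register element width, and little-end-first packing order are illustrative assumptions.

```python
# Sketch of a packed gather-load: 16-bit data elements gathered from
# non-contiguous addresses are packed, four per 64-bit register file element.

ELEM_BITS = 16
REG_BITS = 64
PER_REG = REG_BITS // ELEM_BITS  # narrow elements packed per register element

def packed_gather(memory, addresses):
    """Gather 16-bit values at the given (non-contiguous) addresses and
    pack them, lowest lane first, into 64-bit register file elements."""
    regs = []
    for i in range(0, len(addresses), PER_REG):
        word = 0
        for j, addr in enumerate(addresses[i:i + PER_REG]):
            word |= (memory[addr] & 0xFFFF) << (j * ELEM_BITS)
        regs.append(word)
    return regs

mem = {10: 0x1111, 205: 0x2222, 7: 0x3333, 90: 0x4444}
regs = packed_gather(mem, [10, 205, 7, 90])
```

Four scattered 16-bit values end up occupying a single register file element, which is the space saving the claim is after.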
20140040597 | PREDICATION IN A VECTOR PROCESSOR - Embodiments relate to vector processor predication in an active memory device. An aspect includes a system for vector processor predication in an active memory device. The system includes memory in the active memory device and a processing element in the active memory device. The processing element is configured to perform a method including decoding an instruction with a plurality of sub-instructions to execute in parallel. One or more mask bits are accessed from a vector mask register in the processing element. The one or more mask bits are applied by the processing element to predicate operation of a unit in the processing element associated with at least one of the sub-instructions. | 2014-02-06 |
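The mask-bit predication described above can be illustrated with a small lane-wise model: a unit operates only where its mask bit is set. The merge behavior for masked-off lanes (keeping the prior destination value) is an assumption made for the sketch.

```python
# Sketch of vector-mask predication: each lane of an add executes only when
# its mask bit is 1; masked-off lanes keep their previous destination value.

def predicated_add(dest, a, b, mask):
    """Element-wise add of a and b, written to dest only where mask is set."""
    return [x + y if m else d
            for d, x, y, m in zip(dest, a, b, mask)]

out = predicated_add(dest=[0, 0, 0, 0],
                     a=[1, 2, 3, 4],
                     b=[10, 20, 30, 40],
                     mask=[1, 0, 1, 0])
```

The mask register thus steers which sub-instructions take effect without any branch in the instruction stream.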
20140040598 | VECTOR PROCESSING IN AN ACTIVE MEMORY DEVICE - Embodiments relate to vector processing in an active memory device. An aspect includes a system for vector processing in an active memory device. The system includes memory in the active memory device and a processing element in the active memory device. The processing element is configured to perform a method including decoding an instruction with a plurality of sub-instructions to execute in parallel. An iteration count to repeat execution of the sub-instructions in parallel is determined. Execution of the sub-instructions is repeated in parallel for multiple iterations, by the processing element, based on the iteration count. Multiple locations in the memory are accessed in parallel based on the execution of the sub-instructions. | 2014-02-06 |
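The iteration-count mechanism above can be modeled simply: one decoded instruction carries several sub-instructions, and the processing element replays them for the given count, touching multiple memory locations per iteration. The sub-instruction encoding and the per-iteration address step below are illustrative assumptions (and the parallel lanes are modeled sequentially).

```python
# Sketch of repeating sub-instructions for an iteration count. Each
# sub-instruction is (base_address, value_to_add); every iteration applies
# all sub-instructions, stepping each address by the iteration number.

def run_vector_instruction(memory, sub_instructions, iteration_count):
    """Replay the decoded sub-instructions iteration_count times."""
    for it in range(iteration_count):
        for base, value in sub_instructions:   # these lanes run in parallel in hardware
            memory[base + it] += value
    return memory

mem = [0] * 8
run_vector_instruction(mem, [(0, 1), (4, 10)], iteration_count=3)
```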
20140040599 | PACKED LOAD/STORE WITH GATHER/SCATTER - Embodiments relate to packed loading and storing of data. An aspect includes a system for packed loading and storing of distributed data. The system includes memory and a processing element configured to communicate with the memory. The processing element is configured to perform a method including fetching and decoding an instruction for execution by the processing element. A plurality of individually addressable data elements is gathered from non-contiguous locations in the memory which are narrower than a nominal width of register file elements in the processing element based on the instruction. The processing element packs and loads the data elements into register file elements of a register file entry based on the instruction, such that at least two of the data elements gathered from the non-contiguous locations in the memory are packed and loaded into a single register file element of the register file entry. | 2014-02-06 |
20140040600 | DATA PROCESSOR - A data processor is provided which maintains compatibility with an existing instruction set, such as a 16-bit fixed-length instruction set, while extending the instruction code space. | 2014-02-06 |
20140040601 | PREDICATION IN A VECTOR PROCESSOR - Embodiments relate to vector processor predication in an active memory device. An aspect includes a method for vector processor predication in an active memory device that includes memory and a processing element. The method includes decoding, in the processing element, an instruction including a plurality of sub-instructions to execute in parallel. One or more mask bits are accessed from a vector mask register in the processing element. The one or more mask bits are applied by the processing element to predicate operation of a unit in the processing element associated with at least one of the sub-instructions. | 2014-02-06 |
20140040602 | Storage Method, Memory, and Storing System with Accumulated Write Feature - A storage method, a memory, and a storage system with an accumulated write feature are provided, in which the OR and AND operations are shifted from the CPU/ALU (controller) to the memory, and the frequency of switching data transmission lines between read and write instructions can be reduced. In the memory, the interface unit includes a write arithmetic instruction interface, a write instruction interface, and an address instruction interface; the instruction/address decoder is configured to decode a write arithmetic instruction, a write instruction, and an address instruction; and the pFET has a higher driving capability than the data switches, while the nFET has a lower driving capability than the data switches. The storage method, memory, and storage system can reduce the workload of the CPU/ALU and enable continuous data writing to the memory. | 2014-02-06 |
20140040603 | VECTOR PROCESSING IN AN ACTIVE MEMORY DEVICE - Embodiments relate to vector processing in an active memory device. An aspect includes a method for vector processing in an active memory device that includes memory and a processing element. The method includes decoding, in the processing element, an instruction including a plurality of sub-instructions to execute in parallel. An iteration count to repeat execution of the sub-instructions in parallel is determined. Based on the iteration count, execution of the sub-instructions in parallel is repeated for multiple iterations by the processing element. Multiple locations in the memory are accessed in parallel based on the execution of the sub-instructions. | 2014-02-06 |
20140040604 | PACKED ROTATE PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A method of an aspect includes receiving a masked packed rotate instruction. The instruction indicates a first source packed data including a plurality of packed data elements, a packed data operation mask having a plurality of mask elements, at least one rotation amount, and a destination storage location. A result packed data is stored in the destination storage location in response to the instruction. The result packed data includes result data elements that each correspond to a different one of the mask elements in a corresponding relative position. Result data elements that are not masked out by the corresponding mask element include one of the data elements of the first source packed data in a corresponding position that has been rotated. Result data elements that are masked out by the corresponding mask element include a masked out value. Other methods, apparatus, systems, and instructions are disclosed. | 2014-02-06 |
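The masked packed-rotate result described above can be sketched per element: unmasked result elements get the corresponding source element rotated, masked-out elements get a masked-out value. Zeroing (rather than merging) masking and the 8-bit element width are assumptions made for this sketch.

```python
# Sketch of a masked packed rotate: rotate each unmasked source element left
# by the rotation amount; masked-out result elements become zero.

WIDTH = 8  # illustrative packed element width in bits

def rotl(value, amount, width=WIDTH):
    """Rotate a width-bit value left by amount bits."""
    amount %= width
    mask = (1 << width) - 1
    return ((value << amount) | (value >> (width - amount))) & mask

def masked_packed_rotate(src, mask, amount):
    """Per-element rotate, predicated by the packed data operation mask."""
    return [rotl(v, amount) if m else 0 for v, m in zip(src, mask)]

result = masked_packed_rotate([0x81, 0x01, 0xFF, 0x10], [1, 1, 0, 1], 1)
```

Each result element lines up positionally with its mask element, matching the corresponding-relative-position wording of the abstract.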
20140040605 | METHODS AND APPARATUS FOR PERFORMING SECURE BIOS UPGRADE - A data processing system may comprise a primary basic input/output system (BIOS) image in a primary BIOS region and a rollback BIOS image in a rollback BIOS region. In one example method for upgrading the BIOS, the data processing system may establish a measured launch environment (MLE). In response to a BIOS update request, the data processing system may replace the primary BIOS image with a new BIOS image while running the MLE. After a reset operation, the data processing system may automatically boot to the rollback BIOS image and may use the rollback BIOS to automatically determine whether the new BIOS image is authentic. In response to a determination that the new BIOS image is authentic, the data processing system may copy the new BIOS image from the primary BIOS region to the rollback BIOS region. Other embodiments are described and claimed. | 2014-02-06 |
20140040606 | Method of Proactively Event Triggering and Related Computer System - A method of proactive event triggering in a computer system is disclosed. The computer system includes an application unit and an interface. The method includes the application unit sending a setting signal to change a voltage level of a pin of a control chip module; the pin generating a triggering event to the interface when the voltage level of the pin changes; and the interface accessing a controller according to the triggering event to allow the application unit to communicate with the controller proactively. | 2014-02-06 |
20140040607 | Universal Microcode Image - Systems and methods for creating universal microcode images and for reconstructing a microcode image from a universal microcode image are described in the present disclosure. One method, among others, comprises receiving a plurality of microcode images each configured to initialize hardware within an electronic device before the electronic device is booted up. The method also includes separating each microcode image into sections and comparing the sections to determine whether or not two or more sections contain identical code. The method also includes creating a universal microcode image from the sections that are unique. | 2014-02-06 |
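The section-deduplication step above is straightforward to sketch: each input image is split into sections, identical sections are stored once, and every image reduces to a list of section indices from which it can be reconstructed. The fixed section size and index-list format are assumptions for illustration.

```python
# Sketch of building a "universal" image from several microcode images by
# keeping only unique sections plus per-image reconstruction indices.

SECTION = 4  # bytes per section; a real implementation would use larger sections

def build_universal(images):
    """Return (unique_sections, one index list per input image)."""
    sections, index_of, layouts = [], {}, []
    for img in images:
        layout = []
        for off in range(0, len(img), SECTION):
            sec = img[off:off + SECTION]
            if sec not in index_of:          # store each distinct section once
                index_of[sec] = len(sections)
                sections.append(sec)
            layout.append(index_of[sec])
        layouts.append(layout)
    return sections, layouts

def reconstruct(sections, layout):
    """Rebuild one original microcode image from the universal image."""
    return b"".join(sections[i] for i in layout)

imgs = [b"AAAABBBBCCCC", b"AAAACCCCDDDD"]
sections, layouts = build_universal(imgs)
```

Here the two images share the sections `AAAA` and `CCCC`, so the universal image stores four sections instead of six, yet either original can be rebuilt exactly.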
20140040608 | Power Management Methods and Systems Using an External Power Supply - A method for managing power to an information handling system (IHS) is disclosed wherein the method includes providing a battery and an external power supply operable to supply power to the IHS. The method also includes providing an application programming interface (API) to the IHS, wherein the API is configured to monitor a first parameter and a second parameter. The method further includes supplying power to the IHS via the external power supply if the first parameter reaches a first threshold level and supplying power to the IHS via the battery if the second parameter reaches a second threshold level. An information handling system (IHS) is further disclosed including an external power supply, a battery, and a controller operable to select between the external power supply and the battery to supply power to the IHS. The IHS further includes an application programmable interface (API) operable to monitor a first parameter and direct the controller to select the external power supply to supply power to the IHS if the first parameter reaches a first threshold level, and wherein the API is further operable to monitor a second parameter and direct the controller to select the battery to supply power to the IHS if the second parameter reaches a second threshold level. | 2014-02-06 |
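The two-threshold selection logic above reduces to a small decision function: the monitored first parameter reaching its threshold selects the external supply, the second reaching its threshold selects the battery. The parameter names, the check order, and the fall-through behavior are assumptions for this sketch, not details from the patent.

```python
# Sketch of the API's threshold logic directing the controller's choice of
# power source for the IHS.

def select_power_source(first_param, second_param,
                        first_threshold, second_threshold,
                        current="battery"):
    """Return which supply should power the IHS."""
    if first_param >= first_threshold:
        return "external"        # first parameter reached its threshold
    if second_param >= second_threshold:
        return "battery"         # second parameter reached its threshold
    return current               # neither threshold reached: no change

choice = select_power_source(first_param=95, second_param=10,
                             first_threshold=90, second_threshold=50)
```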