

18th week of 2014 patent application highlights part 71
Patent application number | Title | Published
20140122756ADDRESS BASED SERIAL COMMUNICATION INTERFACE FOR CONTROL AND MONITORING OF SYSTEM-ON-CHIP IMPLEMENTATIONS - A wireless data transceiver comprises a data bus, a first slave device and a second slave device. The first slave device and the second slave device are coupled to the data bus such that both devices can detect transmitted data packets. The transceiver further comprises an interface device coupled to the data bus. The interface device is configured to convert data formatted according to a first protocol to a second protocol, and vice-versa. The transceiver further comprises a first master controller and a second master controller coupled to the interface device. The first master controller receives first data formatted according to the second protocol and outputs data packets having a first slave address corresponding to the first slave device. The second master controller receives second data formatted according to the second protocol and outputs data packets having a second slave address corresponding to the second slave device.2014-05-01
20140122757VEHICLE DATA ABSTRACTION AND COMMUNICATION - An example embodiment includes an abstraction device including a mapping platform, a vehicle transceiver, and a mobile device transceiver. The mapping platform is configured to convert input data messages formatted in a vehicle-specific format to output data messages formatted in a standard mobile device format. The mapping platform is further configured to convert input data messages formatted in the standard mobile device format to output data messages in the vehicle-specific format. The input data messages may have any of multiple data message types, which are communicated from multiple mobile device subsystems, and multiple vehicle subsystems. The vehicle transceiver is configured to transmit the output data messages formatted in the vehicle-specific format to a vehicle via a controller area network (CAN) bus of the vehicle. The mobile device transceiver is configured to transmit the output data messages formatted in the standard mobile device format to a mobile device.2014-05-01
20140122758METHOD AND DEVICE FOR PARAMETERIZING AN AS-I SLAVE - A method is disclosed for parameterizing an AS-i slave. In order to improve the parameterization of an AS-i slave, the following steps are carried out: determining the parameters of the AS-i slave to be parameterized via an engineering tool; transmitting the determined parameters to an AS-i master via a first telegram; receiving the first telegram, which contains the determined parameters of the AS-i slave to be parameterized, via a receiving unit of the AS-i master; automatically converting the received first telegram into an AS-i telegram by a processing unit of the AS-i master, such that the AS-i telegram contains the determined parameters of the AS-i slave to be parameterized; and transmitting the AS-i telegram, which contains the determined parameters of the AS-i slave to be parameterized, to the AS-i slave to be parameterized via a transmission unit of the AS-i master.2014-05-01
20140122759Edge-Triggered Interrupt Conversion - In an embodiment, a system includes an interrupt controller, one or more CPUs coupled to the interrupt controller, a communication fabric, one or more peripheral devices configured to generate interrupts to be transmitted to the interrupt controller, and one or more interrupt message circuits coupled to the peripheral devices. The interrupt message circuits are configured to generate interrupt messages to convey the interrupts over the fabric to the interrupt controller. Some of the interrupts are level-sensitive interrupts, and the interrupt message circuits are configured to transmit level-sensitive interrupt messages to the interrupt controller. At least one of the interrupts is edge-triggered. The system is configured to convert the edge-triggered interrupt to a level-sensitive interrupt so that interrupts may be handled in the same fashion.2014-05-01
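The edge-to-level conversion described in the abstract above can be sketched as a pending latch: an incoming edge sets the latch, the controller samples it as a level, and software acknowledgement clears it. This is a minimal illustrative model; the class and method names are assumptions, not terms from the patent.

```python
class EdgeToLevelConverter:
    """Latches an edge-triggered interrupt so it presents level semantics.

    An edge from the peripheral sets a pending latch; the output level
    stays asserted until the handler acknowledges it, so the interrupt
    controller can treat every source as level-sensitive.
    """
    def __init__(self):
        self.pending = False

    def edge_pulse(self):
        # A rising edge from the peripheral sets the pending latch.
        self.pending = True

    def level(self):
        # The interrupt controller samples this as a level-sensitive line.
        return self.pending

    def acknowledge(self):
        # Software end-of-interrupt clears the latch, deasserting the level.
        self.pending = False
```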
20140122760COMMUNICATION OF MESSAGE SIGNALLED INTERRUPTS - A global interrupt number space 2014-05-01
20140122761METHODS AND STRUCTURE FOR SERIAL ATTACHED SCSI EXPANDERS THAT SELF-CONFIGURE ROUTING ATTRIBUTES OF THEIR PORTS - Methods and structure are provided for Serial Attached SCSI (SAS) expanders that program their own routing attributes. The structure includes a SAS expander comprising multiple physical links with associated transceivers (PHYs), wherein the PHYs are configured into ports at the expander, and a memory that defines routing attributes for each of the ports. The SAS expander also comprises a control unit that is operable to detect a discovery Serial Management Protocol (SMP) request received at a port of the expander, and that is further operable to set the routing attribute for the port to subtractive routing responsive to detecting the SMP request.2014-05-01
20140122762APPLICATION MERGING SYSTEM FOR MULTIPLE PLATFORMS FPGA OF A SAME SERIES - Provided is a Field Programmable Gate Array (FPGA) application merging system for multiple platforms of a same series, which is used in a testing or manufacturing system comprising an adapter and at least two platforms. The FPGA application merging system comprises: at least two functional modules corresponding to the at least two platforms respectively; an IO selector connected to the at least two functional modules respectively, configured to select one of the at least two functional modules adaptively; and an IO attribute controller connected to the IO selector, configured to select an attribute in accordance with the selected functional module, wherein each IO has a three-state logic attribute. The FPGA application merging system may significantly reduce the cost of the FPGA version in later development, maintenance, storage, upgrading and so on, mitigate the difficulty of storage, loading and other operations on the board, and significantly increase the operation efficiency.2014-05-01
20140122763TWO PIN SERIAL BUS COMMUNICATION INTERFACE AND PROCESS - A two pin communication interface bus and control circuits are used with circuit boards, integrated circuits, or embedded cores within integrated circuits. One pin carries data bi-directionally and address and instruction information from a controller to a selected port. The other pin carries a clock signal from the controller to a target port or ports in or on the desired circuit or circuits. The bus may be used for serial access to circuits where the availability of pins on ICs or terminals on cores is minimal. The bus is used for communication, such as serial communication related to the functional operation of an IC or core design, or serial communication related to test, emulation, debug, and/or trace operations of an IC or core design.2014-05-01
20140122764ADAPTIVE INTEGRATED CIRCUITRY WITH HETEROGENEOUS AND RECONFIGURABLE MATRICES OF DIVERSE AND ADAPTIVE COMPUTATIONAL UNITS HAVING FIXED, APPLICATION SPECIFIC COMPUTATIONAL ELEMENTS - The present invention concerns a new category of integrated circuitry and a new methodology for adaptive or reconfigurable computing. The preferred IC embodiment includes a plurality of heterogeneous computational elements coupled to an interconnection network. The plurality of heterogeneous computational elements include corresponding computational elements having fixed and differing architectures, such as fixed architectures for different functions such as memory, addition, multiplication, complex multiplication, subtraction, configuration, reconfiguration, control, input, output, and field programmability. In response to configuration information, the interconnection network is operative in real-time to configure and reconfigure the plurality of heterogeneous computational elements for a plurality of different functional modes, including linear algorithmic operations, non-linear algorithmic operations, finite state machine operations, memory operations, and bit-level manipulations. The various fixed architectures are selected to comparatively minimize power consumption and increase performance of the adaptive computing integrated circuit, particularly suitable for mobile, hand-held or other battery-powered computing applications.2014-05-01
20140122765METHOD AND APPARATUS FOR SECURING AND SEGREGATING HOST TO HOST MESSAGING ON PCIE FABRIC - A PCIe fabric includes at least one PCIe switch. The fabric may be used to connect multiple hosts. The PCIe switch implements security and segregation measures for host-to-host message communication. A management entity defines a Virtual PCIe Fabric ID (VPFID). The VPFID is used to enforce security and segregation. The fabric ID may be extended to be used in switch fabrics with other point-to-point protocols.2014-05-01
20140122766HIGH SPEED DIFFERENTIAL WIRING STRATEGY FOR SERIALLY ATTACHED SCSI SYSTEMS - A serial attached SCSI (SAS) system may include a host bus adaptor, a bus expander, and a multi-layer data transmission medium coupled between the host bus adaptor and the bus expander. The multi-layer data transmission medium may include a first microstrip structure located at a top surface portion of the multi-layer data transmission medium and a first stripline structure located within a first internal portion of the multi-layer data transmission medium. The microstrip structure provides, among other things, a repeaterless high-speed serial communications link between the host bus adaptor and the bus expander.2014-05-01
20140122767OPERATING M-PHY BASED COMMUNICATIONS OVER PERIPHERAL COMPONENT INTERCONNECT (PCI)-BASED INTERFACES, AND RELATED CABLES, CONNECTORS, SYSTEMS AND METHODS - Embodiments disclosed herein include operating the M-PHY communications over peripheral component interconnect (PCI)-based interfaces. Related cables, connectors, systems, and methods are also disclosed. In particular, embodiments disclosed herein take the M-PHY standard compliant signals and direct them through a PCI compliant connector (and optionally cable) so as to allow two M-PHY standard compliant devices having PCI connectors to communicate.2014-05-01
20140122768METHOD, DEVICE, SYSTEM AND STORAGE MEDIUM FOR IMPLEMENTING PACKET TRANSMISSION IN PCIE SWITCHING NETWORK - Embodiments of the present invention disclose a peripheral component interconnect express interface control unit. The unit includes a P2P module, configured to receive a first TLP from a RC or an EP and forward the first TLP to a reliable TLP transmission RTT module for processing; the reliable TLP transmission module, configured to determine, according to the received first TLP, sending links connected to active and standby PCIE switching units, and send the first TLP to the active and standby PCIE switching units through the sending links at the same time, so that a destination PCIE interface controller of the first TLP selectively receives the first TLP forwarded by the active and standby PCIE switching units and sends the first TLP to a destination EP or a destination RC, thereby implementing reliable transmission of a TLP in a case of a PCIE switching dual-plane networking connection.2014-05-01
20140122769Method, Device, System and Storage Medium for Implementing Packet Transmission in PCIE Switching Network - Embodiments of the present invention disclose a peripheral component interconnect express interface control unit. The unit includes a P2P module, configured to receive a first TLP from a RC or an EP and forward the first TLP to a reliable TLP transmission RTT module for processing. A reliable TLP transmission module is configured to determine, according to the received first TLP, sending links connected to active and standby PCIE switching units, and send the first TLP to the active and standby PCIE switching units through the sending links at the same time. A destination PCIE interface controller of the first TLP selectively receives the first TLP forwarded by the active and standby PCIE switching units and sends the first TLP to a destination EP or a destination RC. Thereby, reliable transmission of a TLP is implemented in a case of a PCIE switching dual-plane networking connection.2014-05-01
20140122770POWER SUPPLY CIRCUIT FOR UNIVERSAL SERIAL BUS INTERFACE - A power supply circuit includes a first electronic switch mounted near a front universal serial bus (USB) interface, a second electronic switch mounted near a rear USB interface, and a third electronic switch. The first electronic switch supplies power for the front USB interface. The second electronic switch supplies power for the rear USB interface. The third electronic switch supplies power for the front USB interface and the rear USB interface when the first and second electronic switches are off.2014-05-01
20140122771WEIGHTAGE-BASED SCHEDULING FOR HIERARCHICAL SWITCHING FABRICS - Techniques are disclosed to implement a scheduling scheme for a crossbar scheduler that provides distributed request-grant-accept arbitration between input group arbiters and output group arbiters in a distributed switch. Input and output ports are grouped and assigned a respective arbiter. The input group arbiters communicate requests indicating a count of respective ports having data packets to be transmitted via one of the output ports. The output group arbiter attempts to accommodate the requests for each member of an input group before proceeding to a next input group.2014-05-01
20140122772Kernel Memory Locking for Systems that Allow Over-Commitment Memory - Provided are techniques for allocating logical memory corresponding to a logical partition in a computing system; generating a S/W PFT data structure corresponding to a first page of the logical memory, wherein the S/W PFT data structure comprises a field indicating that the corresponding first page of logical memory is a klock page; transmitting a request for a page of physical memory and the corresponding S/W PFT data structure to a hypervisor; allocating physical memory corresponding to the request; and, in response to a pageout request, paging out available logical memory corresponding to the logical partition that does not indicate that the corresponding page is a klock page prior to paging out the first page.2014-05-01
20140122773PARTIAL PAGE MEMORY OPERATIONS - Apparatuses may include a memory block with strings of memory cells formed in a plurality of tiers. The apparatus may further comprise access lines and data lines shared by the strings, with the access lines coupled to the memory cells corresponding to a respective tier of the plurality of tiers. The memory cells corresponding to at least a portion of the respective tier may comprise a respective page of a plurality of pages. Subsets of the data lines may be mapped into a respective partial page of a plurality of partial pages of the respective page. Each partial page may be independently selectable from other partial pages. Additional apparatuses and methods are disclosed.2014-05-01
20140122774Method for Managing Data of Solid State Storage with Data Attributes - Different FTL implementations, including the use of different mapping schemes, log block utilization, merging, and garbage collection strategies, perform more optimally than others for different data operations with certain characteristics. The presently claimed invention provides a method to distinguish and categorize the different data operations according to their different characteristics, or data attributes; then deploy the most optimal mapping schemes, log block utilization, merging, and garbage collection strategies depending on the data attributes; wherein the data attributes include, but are not limited to, access frequency, access sequence, access size, request mode, and request write ratio.2014-05-01
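The attribute-driven FTL selection above can be illustrated with a toy categorizer: inspect a request's attributes and pick a mapping strategy. The thresholds and category names here are assumptions for illustration only, not values from the patent.

```python
def classify_request(size_bytes, is_sequential, write_ratio):
    """Toy categorizer in the spirit of attribute-driven FTL selection.

    Large sequential accesses favor coarse block-level mapping; small,
    write-heavy random accesses favor fine-grained page-level mapping.
    All thresholds are illustrative assumptions.
    """
    if is_sequential and size_bytes >= 64 * 1024:
        return "block-mapping"
    if write_ratio > 0.5:
        return "page-mapping"
    return "hybrid-mapping"
```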
20140122775MEMORY CONTROLLER FOR MEMORY DEVICE - A memory controller that generates interface signals for a memory device determines an interface signal frequency based on a timing mode of the memory device and a corresponding clock division ratio. Based on the timing mode, a look up table (LUT) is selected and then a timing parameter corresponding to the clock division ratio and the interface signal frequency is fetched from the LUT. An interface signal is generated based on the interface signal frequency and fetched timing parameter.2014-05-01
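The LUT-based flow above (select a table by timing mode, then fetch parameters by clock division ratio) can be sketched as follows. The table contents and parameter names (`tRCD`, `tRP`) are illustrative assumptions; a real controller would populate them from the memory device's datasheet.

```python
# Hypothetical timing tables: one LUT per timing mode, keyed by the
# clock division ratio. Cycle counts are made-up illustrative values.
TIMING_LUTS = {
    "mode_a": {1: {"tRCD": 18, "tRP": 18}, 2: {"tRCD": 9, "tRP": 9}},
    "mode_b": {1: {"tRCD": 14, "tRP": 14}, 2: {"tRCD": 7, "tRP": 7}},
}

def fetch_timing(timing_mode, base_freq_mhz, clock_div):
    """Select the LUT for the timing mode, derive the interface signal
    frequency from the clock division ratio, and fetch the matching
    timing parameters from the LUT."""
    lut = TIMING_LUTS[timing_mode]   # LUT selected by timing mode
    interface_freq = base_freq_mhz / clock_div
    params = lut[clock_div]          # parameters for this division ratio
    return interface_freq, params
```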
20140122776DYNAMIC TUNING OF INTERNAL PARAMETERS FOR SOLID-STATE DISK BASED ON WORKLOAD ACCESS PATTERNS - A system and method for tuning a solid state disk memory includes computing a metric representing a usage trend of a solid state disk memory. Whether one or more parameters need to be adjusted to provide a change in performance is determined. The parameter is adjusted in accordance with the metric to impact the performance of running workloads. These steps are repeated after an elapsed time interval.2014-05-01
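One iteration of the tune-measure-repeat loop above might look like this sketch: compute a usage-trend metric from recent operations and adjust an internal parameter when it crosses a threshold. The metric (fraction of random writes), the parameter name, and the thresholds are all illustrative assumptions.

```python
def tune_step(recent_ops, params):
    """One tuning iteration: compute a usage-trend metric (here, the
    fraction of random writes) and adjust a hypothetical internal
    parameter when the workload drifts past a threshold."""
    random_writes = sum(1 for op in recent_ops if op == "random_write")
    metric = random_writes / len(recent_ops)
    if metric > 0.6:
        params["gc_aggressiveness"] = "high"   # heavy random-write load
    elif metric < 0.2:
        params["gc_aggressiveness"] = "low"    # mostly sequential load
    return metric, params
```

In practice this step would be re-run after each elapsed time interval, as the abstract describes.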
20140122777FLASH MEMORY CONTROLLER HAVING MULTI MODE PIN-OUT - A memory controller of a data storage device which communicates with a host has channel control modules, each being configurable to have three different pinout assignments for interfacing with two different types of memory devices operating with different memory interface protocols. One pinout assignment corresponds to a memory interface protocol where memory devices can be connected in parallel with each other. Two other pinout assignments correspond respectively to inbound and outbound signals of another memory interface protocol where memory devices are serially connected with each other. In this mode of operation, one channel control module is configured to provide the outbound signals while another channel control module is configured to receive the inbound signals. Each memory port of the channel control modules includes port buffer circuitry configurable for different functional signal assignments. The configuration of each channel control module is selectable by setting predetermined ports or registers.2014-05-01
20140122778RAPID NETWORK DATA STORAGE TIERING SYSTEM AND METHODS - Systems and methods are disclosed herein for a data storage tiering system comprising at least one storage array; at least one solid state storage unit; and a storage controller in communication with the at least one storage array and the at least one solid state storage unit and configured to combine the at least one storage array and the at least one solid state storage unit into one business tier data container using a virtualization layer and present the business tier data container on a storage area network as one storage array to a server, wherein the storage controller creates a business data tier by combining a partition of the solid state storage unit with the at least one storage array.2014-05-01
20140122779MAGNETIC RANDOM ACCESS MEMORY JOURNAL FOR MULTI-LEVEL CELL FLASH MEMORY - A flash memory system comprises a logic block interface operable to receive a write command from a host computer, the write command specifying data and a write destination address in a flash memory device, the flash memory device operable to store data at a complementary address corresponding to the specified write destination address. The system further comprises a journal communicatively coupled to the flash memory device and the logic block interface operable to temporarily store data from the complementary address of the flash memory device, and to provide the stored data in the journal to be restored to the flash memory device at the complementary address in the event of an error occurring while executing the write command.2014-05-01
20140122780MAGNETIC RANDOM ACCESS MEMORY JOURNAL - A flash memory system comprises a logic block interface operable to receive a write command to store data from a host computer, a flash memory device operable to store the data in response to the write command, and a non-volatile memory communicatively coupled to the flash memory device and the logic block interface operable to temporarily store the data, and to provide the stored data to be written to the flash memory device in the event of a disruption during execution of the write command.2014-05-01
20140122781HIERARCHICAL FLASH TRANSLATION LAYER - A flash memory system comprises a flash device operable to store data in a plurality of physical blocks assigned to a plurality of sections, a plurality of Flash Translation Tables stored in a memory comprising a Forward Translation Table that maps a Section to a plurality of physical blocks, and a Sector Translation Table for each Section, the Sector Translation Table operable to map to a Physical Page Number identifying a particular Page, a Page Offset identifying a particular location within the Page, and a Section Local Block Table comprising Block Physical Addresses indexed by a Section Local Block Table ID.2014-05-01
20140122782MEMORY SYSTEM - A memory system includes a first storing area included in a volatile semiconductor memory, second and third storing areas included in a nonvolatile semiconductor memory, and a controller that allocates the storage area of the nonvolatile semiconductor memory to the second storing area and the third storing area in a logical block unit associated with one or more blocks. First and second management units respectively manage the second and third storing areas. The second management unit has a size larger than that of the first management unit. When flushing data from the first to the second or third storing area, the controller collects, from at least one of the first, second, and third storing areas, data other than the data to be flushed, and controls the flushing such that the total of the data is, as far as possible, a natural-number multiple of the logical block unit.2014-05-01
20140122783SOLID STATE MEMORY (SSM), COMPUTER SYSTEM INCLUDING AN SSM, AND METHOD OF OPERATING AN SSM - In one aspect, data is stored in a solid state memory which includes first and second memory layers. A first assessment is executed to determine whether received data is hot data or cold data. Received data which is assessed as hot data during the first assessment is stored in the first memory layer, and received data which is first assessed as cold data during the first assessment is stored in the second memory layer. Further, a second assessment is executed to determine whether the data stored in the first memory layer is hot data or cold data. Data which is then assessed as cold data during the second assessment is migrated from the first memory layer to the second memory layer.2014-05-01
20140122784SOLID-STATE DISK, AND USER SYSTEM COMPRISING SAME - The inventive concept relates to a user system including a solid state disk. The user system may include a main memory for storing data processed by a central processing unit, and a solid state disk for storing selected data from among the data stored in the main memory. The main memory and the solid state disk form a single memory hierarchy. Thus, the user system of the inventive concept can rapidly process data.2014-05-01
20140122785DATA WRITING METHOD AND SYSTEM - A data writing method for writing data to a flash memory includes writing an initial value to the data storage area, determining whether or not the writing of the initial value is performed normally based on a write flag, writing data to the data storage area when the writing is performed normally, and erasing a block including the data storage area when the writing is not performed normally. An initial value is written to the data storage area before writing data, so that whether or not an error correction code storage area contains the initial value may be confirmed. An erase operation of the block is performed only when the error correction code storage area does not contain the initial value, so that the number of times of erasure of the block may be reduced and the life of the product may be increased.2014-05-01
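The write flow above can be modeled minimally: write a known initial value first, confirm it via a flag check, write the real data on success, and erase the containing block only on failure. All structures and values below are illustrative assumptions, not details from the patent.

```python
ERASED = 0xFF   # state of an erased flash page (illustrative)
INITIAL = 0x00  # the initial value written before the real data

class Block:
    """A tiny flash-block model that counts erase operations."""
    def __init__(self, pages=4):
        self.pages = [ERASED] * pages
        self.erase_count = 0

    def erase(self):
        self.pages = [ERASED] * len(self.pages)
        self.erase_count += 1

def write_with_initial_value(block, page, data, simulate_failure=False):
    # Step 1: write the initial value to the target storage area.
    if not simulate_failure:
        block.pages[page] = INITIAL
    write_ok = block.pages[page] == INITIAL  # the "write flag" check
    if write_ok:
        # Normal case: write the real data; no erase is needed.
        block.pages[page] = data
        return True
    # Abnormal case: erase the whole block before a later retry.
    block.erase()
    return False
```

Because the block is erased only when the initial-value write fails, the erase count (and thus wear) stays low on the normal path, which is the life-extension argument the abstract makes.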
20140122786FLASH MEMORY CONTROLLER - In some implementations, an apparatus includes a first programmable hardware timer that specifies an initial wait time before issuing two or more commands to a storage device, and a second programmable hardware timer that specifies an interval time between at least two commands of the two or more commands.2014-05-01
20140122787ADAPTIVE OVER-PROVISIONING IN MEMORY SYSTEMS - A method for data storage includes, in a memory that includes multiple memory blocks, specifying at a first time a first over-provisioning overhead, and storing data in the memory while retaining in the memory blocks memory areas, which do not hold valid data and whose aggregated size is at least commensurate with the specified first over-provisioning overhead. Portions of the data from one or more previously-programmed memory blocks containing one or more of the retained memory areas are compacted. At a second time subsequent to the first time, a second over-provisioning overhead, different from the first over-provisioning overhead, is specified, and data storage and data portion compaction is continued while complying with the second over-provisioning overhead.2014-05-01
20140122788REFRESH ALGORITHM FOR MEMORIES - A method and apparatus for refreshing data in a flash memory device is disclosed. A counter is maintained for each memory block. When a memory block is erased, the counter for that erase block is set to a predetermined value while the remaining counters for other erase blocks are changed. When a memory block counter reaches a predetermined threshold value, the associated memory block is refreshed.2014-05-01
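The counter scheme above is easy to sketch: erasing one block resets that block's counter and advances every other block's counter, and a block is refreshed when its counter reaches a threshold. The threshold value and class layout are illustrative assumptions.

```python
REFRESH_THRESHOLD = 3  # illustrative; real devices use far larger values

class RefreshTracker:
    """Per-block counters in the spirit of the described refresh scheme."""
    def __init__(self, num_blocks):
        self.counters = [0] * num_blocks

    def on_erase(self, block):
        for i in range(len(self.counters)):
            if i == block:
                self.counters[i] = 0   # freshly erased block restarts
            else:
                self.counters[i] += 1  # other blocks age by one erase

    def blocks_to_refresh(self):
        # Blocks whose counter reached the threshold need a refresh.
        return [i for i, c in enumerate(self.counters)
                if c >= REFRESH_THRESHOLD]
```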
20140122789MEMORY CONTROL APPARATUS AND MEMORY CONTROL METHOD - In a memory control apparatus for issuing a command for a bank corresponding to a transfer request, the transfer request for the corresponding bank is stored. The column address of the transfer request stored first is compared with the column addresses of a plurality of subsequent transfer requests. It is determined, based on the comparison result, whether to issue a command with precharge or a command without precharge for the transfer request stored first. The determined command is issued.2014-05-01
20140122790DYNAMIC PRIORITY MANAGEMENT OF MEMORY ACCESS - A system includes multiple master devices and at least one memory refresh scheduler. When a master device needs higher priority for memory access, the master device sends a dynamic priority signal to the memory refresh scheduler and in response, the memory refresh scheduler changes its policy for issuing refresh commands.2014-05-01
20140122791SYSTEM AND METHOD FOR PACKET CLASSIFICATION AND INTERNET PROTOCOL LOOKUP IN A NETWORK ENVIRONMENT - An example method includes partitioning a memory element of a router into a plurality of segments having one or more rows, where at least a portion of the one or more rows is encoded with a value mask (VM) list having a plurality of values and masks. The VM list is identified by a label, and the label is mapped to a base row number and a specific number of bits corresponding to the portion encoding the VM list. Another example method includes partitioning a prefix into a plurality of blocks, indexing to a hash table using a value of a specific block, where a bucket of the hash table corresponds to a segment of a ternary content addressable memory of a router, and storing the prefix in a row of the segment.2014-05-01
20140122792STORAGE SYSTEM AND ACCESS ARBITRATION METHOD - In the prior art, when requesters having different I/O access processing abilities compete for access to a target, latency is extended by accesses from a requester having lower I/O access performance, so the number of I/Os issued per unit time (the data processing quantity) cannot be increased and the processing performance of the system cannot be improved. According to the present invention, when requesters compete in accessing a target, the request from the requester having lower performance (a longer processing time per single I/O) is processed (started) in a prioritized manner. Thereby, the number of I/O processes per unit time can be increased, and the processing performance of the whole storage system can be improved.2014-05-01
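The arbitration policy above, granting the slowest requester first, can be sketched in a few lines. The request representation here, a `(requester_name, seconds_per_io)` pair, is an illustrative assumption.

```python
def arbitrate(requests):
    """Order competing requests so the requester with the longest
    processing time per single I/O is started first, in the spirit of
    the prioritized arbitration described above."""
    return sorted(requests, key=lambda r: r[1], reverse=True)
```

Starting the slow requester first lets its long-running I/O overlap with the faster requesters' work instead of serializing behind them, which is how the scheme raises the number of I/O processes per unit time.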
20140122793MAGNETIC DISK DEVICE AND DATA WRITING METHOD - A magnetic disk device has a magnetic head, a magnetic disk that includes a plurality of data regions and a plurality of media cache regions associated with the data regions, and a controller configured to control the magnetic head to write data received from an external device in the media cache regions and then write back the data written in the media cache regions to the data regions associated with the media cache regions.2014-05-01
20140122794CONTROL CIRCUIT FOR HARD DISKS - A control circuit is connected between a motherboard and a number of hard disks for controlling power and data transmission of the number of hard disks. Each hard disk corresponds to one power control unit and one data control unit. The power control unit controls power transmission to the corresponding hard disk. The data control unit controls data transmission of the corresponding hard disk. When one hard disk is selected as an operation object to enter a disable state, the data control unit cuts off data transmission of the selected hard disk before the power control unit cuts off power transmission of the operation object. When the hard disk is selected as an operation object to enter an enable state, the power control unit resets the power transmission to the operation object before the data control unit resets data transmission of the operation object.2014-05-01
20140122795DATA PLACEMENT FOR LOSS PROTECTION IN A STORAGE SYSTEM - Embodiments of the invention relate to data placement for loss protection in a storage system. One embodiment includes constructing multiple logical compartments. Each logical compartment includes a placement policy including a set of storage placement rules for placement of storage symbols into a set of physical storage containers. A first logical compartment of said plurality of logical compartments is container-overlapped with respect to a second logical compartment of said plurality of logical compartments. The first logical compartment is data loss independent with respect to the second logical compartment. Each of multiple storage volumes is associated with a logical compartment. The storage symbols that represent a data stripe are placed onto physical storage containers in conformity with the placement policy associated with the logical compartment containing the data stripe.2014-05-01
20140122796SYSTEMS AND METHODS FOR TRACKING A SEQUENTIAL DATA STREAM STORED IN NON-SEQUENTIAL STORAGE BLOCKS - A process for block-level tracking of a sequential data stream that is sub-divided into multiple parts, and stored, by a file system, within non-sequential storage blocks. The process creates block-level metadata as the sequential data stream is written to the storage blocks, wherein the metadata stores pointers to the non-sequential storage blocks used to store the multiple parts of the sequential data stream. This metadata can subsequently be used by a block-level controller to more efficiently read the sequential data stream back to the file system using read-ahead processes.2014-05-01
20140122797METHOD AND STRUCTURES FOR PERFORMING A MIGRATION OF A LOGICAL VOLUME WITH A SERIAL ATTACHED SCSI EXPANDER - Methods and structure for migrating a logical volume with a Serial Attached SCSI (SAS) expander are provided. The expander comprises a plurality of physical links with associated transceivers (PHYs). The expander further comprises a control unit operable to select a logical volume, and to initiate migration of data from the selected logical volume to a backup logical volume. Further, the expander includes a Serial SCSI Protocol (SSP) target of the expander operable to intercept commands directed to the selected logical volume responsive to the control unit initiating the migration, and an SSP initiator of the expander that is operable to generate commands directed to the backup logical volume based on the intercepted commands, and to provide the intercepted commands to the selected logical volume.2014-05-01
20140122798METHODS AND STRUCTURE ESTABLISHING NESTED REDUNDANT ARRAY OF INDEPENDENT DISKS VOLUMES WITH AN EXPANDER - Methods and structure are provided for provisioning, via an expander, a Redundant Array of Independent Disks (RAID) volume that can be managed by an external RAID controller. The structure includes a Serial Attached SCSI (SAS) expander. The expander comprises physical links with transceivers (PHYs) that directly couple with storage devices, a protocol target and a control unit. The control unit provisions a first RAID volume with multiple storage devices that are directly coupled with the PHYs, and further masks the storage devices from the SAS domain by presenting the PHYs directly coupled with the multiple storage devices as a single PHY coupled with a single logical device. The control unit is also operable to provision a portion of a second RAID volume on the logical device in response to the expander receiving a command from a RAID controller.2014-05-01
20140122799STORAGE DEVICE AND POWER SAVING METHOD THEREOF - A storage device includes a plurality of hard disk device sets, a plurality of voltage adjustment units, an information collection unit and a control unit. The hard disk device sets access data respectively to generate corresponding access messages. Each of the hard disk device sets includes at least two hard disk devices. The voltage adjustment units determine whether to provide a plurality of duty voltages to the hard disk device sets, according to a plurality of control signals. The information collection unit receives the access messages generated by the hard disk device sets, and outputs the access messages according to a read command. The control unit generates the read command to receive the access messages, acquires the usage states of the hard disk device sets according to an algorithm and the access messages, and generates the control signals according to the usage states of the hard disk device sets.2014-05-01
20140122800METHOD, SYSTEM, AND DEVICE FOR MONITORING OPERATIONS OF A SYSTEM ASSET - A device for use in monitoring operation of a system asset includes an interface for receiving sensor data representative of an operating condition of the system asset, a memory device for storing the sensor data, and a processor coupled to the interface and to the memory device. The processor is configured to create a hierarchy of sensor data within the memory device, wherein the hierarchy comprises a first tier and a second tier, store a first level of the sensor data in the first tier, and store a second level of the sensor data in the second tier.2014-05-01
20140122801MEMORY CONTROLLER WITH INTER-CORE INTERFERENCE DETECTION - Embodiments are described for a method for controlling access to memory in a processor-based system comprising monitoring a number of interference events, such as bank contentions, bus contentions, row-buffer conflicts, and increased write-to-read turnaround time caused by a first core in the processor-based system that causes a delay in access to the memory by a second core in the processor-based system; deriving a control signal based on the number of interference events; and transmitting the control signal to one or more resources of the processor-based system to reduce the number of interference events from an original number of interference events.2014-05-01
20140122802ACCESSING AN OFF-CHIP CACHE VIA SILICON PHOTONIC WAVEGUIDES - The disclosed embodiments provide a system in which a processor chip accesses an off-chip cache via silicon photonic waveguides. The system includes a processor chip and a cache chip that are both coupled to a communications substrate. The cache chip comprises one or more cache banks that receive cache requests from a structure in the processor chip optically via a silicon photonic waveguide. More specifically, the silicon photonic waveguide is comprised of waveguides in the processor chip, the communications substrate, and the cache chip, and forms an optical channel that routes an optical signal directly from the structure to a cache bank in the cache chip via the communications substrate. Transmitting optical signals from the processor chip directly to cache banks on the cache chip facilitates reducing the wire latency of cache accesses and allowing each cache bank on the cache chip to be accessed with uniform latency.2014-05-01
20140122803INFORMATION PROCESSING APPARATUS AND METHOD THEREOF - Data representing the storage state of the main memory of an information processing device is saved in a secondary storage device. The data saved in the secondary storage device is transferred to the main memory in reactivation of the information processing device to restore the storage state of the main memory. A cache allocated in the main memory is deallocated before generating data to be saved.2014-05-01
20140122804PROTECTING GROUPS OF MEMORY CELLS IN A MEMORY DEVICE - Methods for memory block protection and memory devices are disclosed. One such method for memory block protection includes programming protection data to protection bytes diagonally across different word lines of a particular memory block (e.g., Boot ROM). The protection data can be retrieved by an erase verify operation that can be performed at power-up of the memory device.2014-05-01
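The "diagonal" placement in 20140122804 amounts to putting one protection byte per word line at a byte offset that advances with the word-line index. A loose sketch, with all sizes invented for illustration:

```python
# Hypothetical sketch of diagonal protection-byte placement: each word
# line of the protected block (e.g. Boot ROM) holds one protection byte,
# at a byte offset that advances with the word-line index, so the
# protection data lies on a diagonal across the block.

def protection_offsets(num_word_lines, bytes_per_word_line):
    """Return (word_line, byte_offset) pairs for the protection bytes."""
    return [(wl, wl % bytes_per_word_line) for wl in range(num_word_lines)]

offsets = protection_offsets(8, 16)
```

Because no two word lines share a byte offset (when the line count does not exceed the bytes per line), a single failure along one column cannot wipe out all the protection data.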
20140122805SELECTIVE POISONING OF DATA DURING RUNAHEAD - Embodiments related to selecting a runahead poison policy from a plurality of runahead poison policies during microprocessor operation are provided. The example method includes causing the microprocessor to enter runahead upon detection of a runahead event and implementing a first runahead poison policy selected from a plurality of runahead poison policies operative to manage runahead poison injection during runahead. The example method also includes during microprocessor operation, selecting a second runahead poison policy operative to manage runahead poison injection differently from the first runahead poison policy.2014-05-01
20140122806CACHE DEVICE FOR SENSOR DATA AND CACHING METHOD FOR THE SAME - A cache device includes a cache module, which comprises a sensor data access interface, a sensor data acquisition module and a driver library. The sensor data access interface receives a request for back-end sensor data from a front-end monitoring system. The sensor data acquisition module queries the sensors for the sensor data in accordance with the received request, receives and saves the sensor data from the sensors, and returns the sensor data to the monitoring system through the sensor data access interface. The driver library includes at least one driver program; the sensor data acquisition module reads the sensors by executing the driver program, wherein the driver library selects the communication protocol used by the queried sensors for the executed driver program to use.2014-05-01
20140122807MEMORY ADDRESS TRANSLATIONS - Memory address translations are disclosed. An example memory controller includes an address translator to translate an intermediate memory address into a hardware memory address based on a function, the address translator to select the function based on at least a portion of the intermediate memory address, the intermediate memory address being identified by a processor. The example memory controller includes a cache to store the function in association with an address range of the intermediate memory sector, the intermediate memory address being within the intermediate memory sector. Further, the example memory controller includes a memory accesser to access a memory module at the hardware memory address.2014-05-01
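The translator in 20140122807 selects the translation function from a portion of the intermediate address itself. A minimal sketch, where the sector-selection shift and the per-sector functions are invented placeholders:

```python
# Hypothetical sketch: the high bits of the intermediate address identify
# its sector, and the sector selects the translation function (the patent
# caches this function against the sector's address range).

def translate(intermediate_addr, sector_shift, sector_functions):
    sector = intermediate_addr >> sector_shift
    fn = sector_functions[sector]
    return fn(intermediate_addr)

# two hypothetical per-sector mappings: identity and a fixed remap offset
funcs = {0: lambda a: a, 1: lambda a: a + 0x8000}
```

A memory accesser would then issue the access at the hardware address this function returns.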
20140122808PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A point-in-time copy relationship associates tracks in the source storage with tracks in the target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request before destaging the updated source track to the source storage.2014-05-01
20140122809CONTROL MECHANISM FOR FINE-TUNED CACHE TO BACKING-STORE SYNCHRONIZATION - One embodiment of the present invention sets forth a technique for processing commands received by an intermediary cache from one or more clients. The technique involves receiving a first write command from an arbiter unit, where the first write command specifies a first memory address, determining that a first cache line related to a set of cache lines included in the intermediary cache is associated with the first memory address, causing data associated with the first write command to be written into the first cache line, and marking the first cache line as dirty. The technique further involves determining whether a total number of cache lines marked as dirty in the set of cache lines is less than, equal to, or greater than a first threshold value, and: not transmitting a dirty data notification to the frame buffer logic when the total number is less than the threshold value, or transmitting a dirty data notification to the frame buffer logic when the total number is equal to or greater than the first threshold value.2014-05-01
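The thresholding behavior in 20140122809 reduces to counting dirty lines per set and notifying the frame buffer logic only when the count reaches the threshold. A minimal model, with structure names assumed for illustration:

```python
# Hypothetical sketch: a cache set marks lines dirty on write and emits a
# dirty-data notification only once the number of dirty lines in the set
# reaches the threshold, suppressing it below that.

class CacheSet:
    def __init__(self, num_lines, threshold):
        self.dirty = [False] * num_lines
        self.threshold = threshold

    def write(self, line):
        """Mark a line dirty; return True if a notification should be sent."""
        self.dirty[line] = True
        return sum(self.dirty) >= self.threshold

s = CacheSet(num_lines=4, threshold=2)
```

Batching notifications this way trades some write-back latency for fewer synchronization messages to the backing store.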
20140122810PARALLEL PROCESSING OF MULTIPLE BLOCK COHERENCE OPERATIONS - A method to eliminate the delay of multiple overlapping block invalidate operations in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. The cache controller performing the block invalidate operation merges multiple overlapping requests into a parallel stream to eliminate execution delays. Cache operations other than block invalidate, such as block write back or block write back invalidate, may also be merged into the execution stream.2014-05-01
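The merging step in 20140122810 can be illustrated as interval coalescing: overlapping block-operation address ranges collapse into non-overlapping ranges the controller walks once. A sketch under the assumption that requests are (start, end) block ranges:

```python
# Hypothetical sketch: merge multiple overlapping block-invalidate
# address ranges into a single stream of non-overlapping ranges, so the
# cache controller executes each block only once.

def merge_block_ops(requests):
    merged = []
    for start, end in sorted(requests):
        if merged and start <= merged[-1][1]:      # overlaps previous range
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Three overlapping invalidate requests covering blocks 0-100, 50-150, and 200-300 would be serviced as just two passes, 0-150 and 200-300.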
20140122811Method And Apparatus For Error Correction In A Cache - A processor includes a core to execute instructions and a cache memory coupled to the core and having a plurality of entries. Each entry of the cache memory may include a data storage including a plurality of data storage portions, each data storage portion to store a corresponding data portion. Each entry may also include a metadata storage to store a plurality of portion modification indicators, each portion modification indicator corresponding to one of the data storage portions. Each portion modification indicator is to indicate whether the data portion stored in the corresponding data storage portion has been modified, independently of cache coherency state information of the entry. Other embodiments are described and claimed.2014-05-01
20140122812TILED CACHE INVALIDATION - One embodiment of the present invention sets forth a graphics subsystem. The graphics subsystem includes a first tiling unit associated with a first set of raster tiles and a crossbar unit. The crossbar unit is configured to transmit a first set of primitives to the first tiling unit and to transmit a first cache invalidate command to the first tiling unit. The first tiling unit is configured to determine that a second bounding box associated with primitives included in the first set of primitives overlaps a first cache tile and that the first bounding box overlaps the first cache tile. The first tiling unit is further configured to transmit the primitives and the first cache invalidate command to a first screen-space pipeline associated with the first tiling unit for processing. The screen-space pipeline processes the cache invalidate command to invalidate cache lines specified by the cache invalidate command.2014-05-01
20140122813STORAGE MEDIUM AND ACCESSING SYSTEM UTILIZING THE SAME - A storage medium in communication with a memory controller that sends a read command is disclosed. The storage medium includes a plurality of memory units. Each memory unit includes at least sixteen memory cells coupled to a word line and a plurality of bit lines. A controlling unit receives first address information according to the read command and generates a row read signal and a column read signal according to the first address information. A row decoding unit activates the word line according to the row read signal. A column decoding unit activates the bit lines according to the column read signal to output a plurality of storing bits stored in the sixteen memory cells. A read-out unit processes the storing bits to generate a plurality of reading bits. The controlling unit outputs the reading bits to the memory controller serially.2014-05-01
20140122814APPARATUSES AND METHODS FOR MEMORY OPERATIONS HAVING VARIABLE LATENCIES - Apparatuses and methods for performing memory operations are described. An example apparatus includes a memory operation controller. The memory operation controller is configured to receive memory instructions and decode the same to provide internal signals for performing memory operations for the memory instructions. The memory operation controller is further configured to provide information indicative of a time for a variable latency period of a memory instruction during the variable latency period. In an example method, a write instruction and an address to which write data is to be written is received at a memory and an acknowledgement indicative of an end of a variable latency period for the write instruction is provided. After waiting a variable bus turnaround after the acknowledgement, write data for the write instruction is received.2014-05-01
20140122815INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND CONTROL SYSTEM - An information processing device includes a storage device that stores information, and a controller that adjusts a consumption of read time of reading information to be read per unit data amount according to the priority of the information to be read from the storage device and a permitted read time during which read of information from the storage device is permitted. The permitted read time varies according to the processing time of another control different from the control of the read.2014-05-01
20140122816SWITCHING BETWEEN MIRRORED VOLUMES - For switching between mirrored volumes, a copy relation identification (ID) is created between mirrored volumes for using the copy relation ID in conjunction with a multi-path device driver for switching input/output (I/O) for applications between a first path to a second path between the mirrored volumes.2014-05-01
20140122817SYSTEM AND METHOD FOR AN OPTIMIZED DISTRIBUTED STORAGE SYSTEM - In accordance with the present disclosure, a system and method for providing an optimized distributed storage system is described. The method may comprise receiving a file at a processor of a first information handling system of the distributed storage system. A copy of the file may be stored in a second information handling system of the distributed storage system. A total accessibility value for the file may be determined. The method may further include storing a copy of the file in a third information handling system of the distributed storage system if the total accessibility value is less than a first threshold. Likewise, a copy of the file may be removed from the second information handling system if the total accessibility value is greater than or equal to a second threshold.2014-05-01
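The two-threshold policy in 20140122817 is a simple hysteresis on the file's total accessibility value. A sketch, with action names and the scalar accessibility metric assumed for illustration:

```python
# Hypothetical sketch of the replication policy: below the first
# threshold, add a copy on a third information handling system; at or
# above the second threshold, remove the copy on the second system;
# otherwise leave placement unchanged.

def placement_action(total_accessibility, first_threshold, second_threshold):
    if total_accessibility < first_threshold:
        return "add_replica"
    if total_accessibility >= second_threshold:
        return "remove_replica"
    return "no_change"
```

Keeping the two thresholds apart prevents the system from oscillating between adding and removing the same replica as accessibility fluctuates.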
20140122818STORAGE APPARATUS AND METHOD FOR CONTROLLING STORAGE APPARATUS - Upon receipt of an I/O request instructing storage of data in a storage device 2014-05-01
20140122819VIRTUAL DISK MANIPULATION OPERATIONS - Described is a technology by which a virtual hard disk is able to continue servicing virtual disk I/O (reads and writes) while a meta-operation (e.g., copying, moving, deleting, merging, compressing, defragmenting, cryptographic signing, lifting, dropping, converting, or compacting virtual disk data) is performed on the virtual disk. The servicing of virtual disk I/Os may be coordinated with meta-operation performance, such as by throttling and/or prioritizing the virtual disk I/Os. Also described is performing a meta-operation by manipulating one or more de-duplication data structures.2014-05-01
20140122820SYSTEM-ON-CHIP PROCESSING SECURE CONTENTS AND MOBILE DEVICE COMPRISING THE SAME - A mobile device is provided which includes a working memory having a memory area divided into a secure domain and a non-secure domain; and a system-on-chip configured to access and process contents stored in the secure domain. The system-on-chip includes a processing unit driven by at least one of a secure operating system and a non-secure operating system; at least one hardware block configured to access the contents according to control of the processing unit comprising a master port and a slave port which are set to have different security attributes; at least one memory management unit configured to control access of the at least one hardware block to the working memory; and an access control unit configured to set security attributes of the slave port and the master port or an access authority on each of the secure domain and the non-secure domain of the working memory.2014-05-01
20140122821COMPUTER SYSTEM HAVING MAIN MEMORY AND CONTROL METHOD THEREOF - Provided are a computer system and a method of controlling the same. The computer system includes: a central processing unit (CPU) configured to drive an application program; and a main memory configured to provide the CPU with a memory space for driving of the application program and to store a processing result of the CPU. The main memory includes: a nonvolatile memory including a first memory area configured to store data and a second memory area configured to store address information of the data; a memory controller configured to control the nonvolatile memory; and a memory manager configured to read the address information from the second memory area and delete the data stored in the first memory area according to the read address information, in response to a data delete command from the CPU and a control of the memory controller.2014-05-01
20140122822APPARATUSES AND METHODS FOR MEMORY OPERATIONS HAVING VARIABLE LATENCIES - Apparatuses and methods for performing memory operations are described. In an example apparatus, a memory is configured to receive a memory instruction and perform a memory operation responsive to the memory instruction. The memory is further configured to provide an acknowledgement indicative of an end of the variable latency period wherein the acknowledgement includes information related to an acceptance of a memory instruction. Data associated with the memory instruction is exchanged with the memory following the acknowledgement. In an example method a read instruction and an address from which read data is to be read is received. A write operation is suspended responsive to the read instruction and an acknowledgement indicative of an end of the variable latency period is provided. Read data for the read instruction is provided and the write operation is continued to be suspended for a hold-off period following completion of the read operation.2014-05-01
20140122823GENERALIZED STORAGE ALLOCATION FOR MULTIPLE ARCHITECTURES - Embodiments of the invention relate to storage allocation in a storage system. One embodiment includes generating a request for storage space allocation in a particular storage device by a first node. An owner node associated with the particular storage device is determined by a first allocation client associated with the first node. The request is sent by the first allocation client to a second allocation client associated with the owner node. A storage device allocation region of the particular storage device is created, the allocation region comprising a height proportional to storage devices the owner node and the second allocation client are coupled with, and a width that is inversely proportional to a number of nodes sharing the particular storage device.2014-05-01
20140122824Dynamically Configurable Memory - A device includes a memory including ways and a processor in communication with the memory. The processor is configured to execute logic. The logic can monitor a parameter of the processor or a device connected with the processor. The logic can allocate, based on the parameter, a number of ways and a size of ways of the memory for use by the processor. The logic can power down an unallocated number of ways and unused portions of the ways of the memory.2014-05-01
20140122825COMPUTER SYSTEM AND METHOD FOR UPDATING CONFIGURATION INFORMATION - In a configuration where a virtual storage array is formed in a physical storage array, if a physical storage administrator acquires a lock on all the physical storage arrays during an update of information across multiple physical storage arrays, then neither the physical storage arrays nor the virtual storage arrays formed in them can be manipulated while the update is executed. According to the present invention, when a virtual storage administrator changes the configuration of a virtual storage array while the physical storage administrator is updating information across all the physical storage arrays, and management operations from the virtual storage administrator are not prohibited, the changed configuration information is stored in a temporary area of a configuration information database; at the end of the update, the information in the temporary area is reflected to the normal area of the configuration information database.2014-05-01
20140122826DETECTING MEMORY CORRUPTION - A device identifies, based on a program code instruction, an attempted write access operation to a fenced memory slab, where the fenced memory slab includes an alternating sequence of data buffers and guard buffers. The device assigns read-only protection to the fenced slab and invokes, based on the attempted write access operation, a page fault operation. When a faulting address of the attempted write operation is not an address for one of the multiple data buffers, the device performs a panic routine. When the faulting address of the attempted write operation is an address for one of the multiple data buffers, the device removes the read-only protection for the fenced slab and performs a single step processing routine for the program code instruction.2014-05-01
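The fault-classification step in 20140122826 follows directly from the slab layout: since data and guard buffers alternate at fixed size, the parity of the faulting address's buffer index tells the handler whether to single-step (data buffer) or panic (guard buffer). A minimal model, with the equal-size layout assumed:

```python
# Hypothetical model of the fenced slab: an alternating sequence of
# equally sized data and guard buffers starting at slab_base. A write
# fault into a guard buffer indicates corruption (panic); a fault into a
# data buffer is a legitimate write against the read-only fence.

def classify_fault(fault_addr, slab_base, buf_size):
    index = (fault_addr - slab_base) // buf_size
    return "data" if index % 2 == 0 else "guard"
```

In the described flow, a "data" result leads to temporarily dropping the read-only protection and single-stepping the faulting instruction; a "guard" result triggers the panic routine.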
20140122827MANAGEMENT OF MEMORY USAGE USING USAGE ANALYTICS - An approach for managing memory usage in cloud and traditional environments using usage analytics is disclosed. The approach may be implemented in a computer infrastructure including a combination of hardware and software. The approach includes determining that space is available within one or more tables which have schema definitions with string fields having a predefined length. The approach further includes creating a virtual table and mapping the available space to the virtual table for population by one or more records.2014-05-01
20140122828Sharing address translation between CPU and peripheral devices - A method for memory access includes maintaining in a host memory, under control of a host operating system running on a central processing unit (CPU), respective address translation tables for multiple processes executed by the CPU. Upon receiving, in a peripheral device, a work item that is associated with a given process, having a respective address translation table in the host memory, and specifies a virtual memory address, the peripheral device translates the virtual memory address into a physical memory address by accessing the respective address translation table of the given process in the host memory. The work item is executed in the peripheral device by accessing data at the physical memory address in the host memory.2014-05-01
20140122829EFFICIENT MEMORY VIRTUALIZATION IN MULTI-THREADED PROCESSING UNITS - A technique for simultaneously executing multiple tasks, each having an independent virtual address space, involves assigning an address space identifier (ASID) to each task and constructing each virtual memory access request to include both a virtual address and the ASID. During virtual to physical address translation, the ASID selects a corresponding page table, which includes virtual to physical address mappings for the ASID and associated task. Entries for a translation look-aside buffer (TLB) include both the virtual address and ASID to complete each mapping to a physical address. Deep scheduling of tasks sharing a virtual address space may be implemented to improve cache affinity for both TLB and data caches.2014-05-01
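The ASID mechanism in 20140122829 keys every translation on both the virtual address and the address-space identifier, so concurrent tasks with overlapping virtual addresses cannot alias each other's mappings. A simplified sketch, with the data structures assumed for illustration:

```python
# Hypothetical sketch: TLB entries are keyed by (ASID, virtual page); a
# miss walks the page table selected by the ASID, so two tasks mapping
# the same virtual page to different frames coexist in one TLB.

class AsidTlb:
    def __init__(self, page_tables):
        self.page_tables = page_tables   # asid -> {vpage: ppage}
        self.entries = {}                # (asid, vpage) -> ppage

    def translate(self, asid, vpage):
        key = (asid, vpage)
        if key not in self.entries:      # TLB miss: walk the per-ASID table
            self.entries[key] = self.page_tables[asid][vpage]
        return self.entries[key]

tlb = AsidTlb({1: {0x10: 0xA0}, 2: {0x10: 0xB0}})
```

Here tasks 1 and 2 both use virtual page 0x10 but resolve to different physical frames, which is exactly why the ASID must be part of the TLB tag.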
20140122830Operational Efficiency of Virtual TLBs - Various mechanisms are disclosed for improving the operational efficiency of a virtual translation look-aside buffer (TLB) in a virtual machine environment. For example, one mechanism fills in entries in a shadow page table (SPT) and additionally, speculatively fills in other entries in the SPT based on various heuristics. Another mechanism allows virtual TLBs (translation look-aside buffers) to cache partial walks in a guest page table tree. Still another mechanism allows for dynamic resizing of the virtual TLB to optimize for run-time characteristics of active workloads. Still another mechanism allows virtual machine monitors (VMMs) to support legacy and enlightened modes of virtual TLB operation. Finally, another mechanism allows the VMM to remove only the stale entries in SPTs when linking or switching address spaces. All these mechanisms, together or in part, increase the operational efficiency of the virtual TLB.2014-05-01
20140122831INSTRUCTION AND LOGIC TO PROVIDE VECTOR COMPRESS AND ROTATE FUNCTIONALITY - Instructions and logic provide vector compress and rotate functionality. Some embodiments, responsive to an instruction specifying: a vector source, a mask, a vector destination and destination offset, read the mask, and copy corresponding unmasked vector elements from the vector source to adjacent sequential locations in the vector destination, starting at the vector destination offset location. In some embodiments, the unmasked vector elements from the vector source are copied to adjacent sequential element locations modulo the total number of element locations in the vector destination. In some alternative embodiments, copying stops whenever the vector destination is full, and upon copying an unmasked vector element from the vector source to an adjacent sequential element location in the vector destination, the value of a corresponding field in the mask is changed to a masked value. Alternative embodiments zero elements of the vector destination, in which no element from the vector source is copied.2014-05-01
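The modulo variant of the compress-and-rotate operation in 20140122831 can be modeled in scalar code: unmasked source elements land in adjacent destination slots starting at the offset, wrapping around the destination length. The function below is an illustrative reference model, not the instruction's definition:

```python
# Hypothetical reference model: copy unmasked elements of src to adjacent
# sequential locations in dest, starting at `offset`, modulo len(dest).

def compress_rotate(src, mask, dest, offset):
    out = list(dest)
    pos = offset
    for value, unmasked in zip(src, mask):
        if unmasked:
            out[pos % len(out)] = value
            pos += 1
    return out
```

With source [1, 2, 3, 4], mask [1, 0, 1, 1], and offset 3 into a 4-element destination, the three unmasked elements fill slots 3, 0, and 1, demonstrating the wrap-around.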
20140122832PARTIAL VECTORIZATION COMPILATION SYSTEM - Generally, this disclosure provides technologies for generating and executing partially vectorized code that may include backward dependencies within a loop body of the code to be vectorized. The method may include identifying backward dependencies within a loop body of the code; selecting one or more ranges of iterations within the loop body, wherein the selected ranges exclude the identified backward dependencies; and vectorizing the selected ranges. The system may include a vector processor configured to provide predicated vector instruction execution, loop iteration range enabling, and dynamic loop dependence checking.2014-05-01
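The range-selection step in 20140122832 can be sketched as splitting the iteration space around the iterations that carry backward dependencies: the resulting ranges are safe to vectorize, while the excluded iterations run scalar. The representation below (iteration indices rather than compiler IR) is an illustrative assumption:

```python
# Hypothetical sketch: partition [0, n_iters) into maximal ranges that
# contain no backward-dependence-carrying iteration; those ranges are
# vectorized, the excluded iterations execute in scalar form.

def vectorizable_ranges(n_iters, backward_dep_iters):
    ranges, start = [], 0
    for i in sorted(backward_dep_iters):
        if i > start:
            ranges.append((start, i))
        start = i + 1
    if start < n_iters:
        ranges.append((start, n_iters))
    return ranges
```

For a 10-iteration loop with backward dependencies at iterations 3 and 7, the vectorizable ranges are [0, 3), [4, 7), and [8, 10).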
20140122833SERVER ON A CHIP AND NODE CARDS COMPRISING ONE OR MORE OF SAME - A server on a chip that can be a component of a node card. The server on a chip can include a node central processing unit subsystem, a peripheral subsystem, a system interconnect subsystem, and a management subsystem. The central processing unit subsystem can include a plurality of processing cores each running an independent instance of an operating system. The peripheral subsystem includes a plurality of interfaces for various configurations of storage media. The system interconnect subsystem provides for intra-node and inter-node packet connectivity. The management subsystem provides for various system and power management functionalities within the subsystems of the server on a chip.2014-05-01
20140122834Generating And Communicating Platform Event Digests From A Processor Of A System - In an embodiment, a processor includes a plurality of counters each to provide a count of a performance metric of at least one core of the processor, a plurality of threshold registers each to store a threshold value with respect to a corresponding one of the plurality of counters, and an event logic to generate an event digest packet including a plurality of indicators each to indicate whether an event occurred based on a corresponding threshold value and a corresponding count value. Other embodiments are described and claimed.2014-05-01
20140122835METHOD OF PLACEMENT AND ROUTING IN A RECONFIGURATION OF A DYNAMICALLY RECONFIGURABLE PROCESSOR - A method and system are provided for deriving a resultant compiled software code with increased compatibility for placement and routing of a dynamically reconfigurable processor.2014-05-01
20140122836CONFIDENCE-DRIVEN SELECTIVE PREDICATION OF PROCESSOR INSTRUCTIONS - An apparatus includes a network interface, memory, and a processor. The processor is coupled with the network interface and memory. The processor is configured to determine that an instruction instance is a branch instruction instance. Responsive to a determination that an instruction instance is a branch instruction instance, the processor is configured to obtain a branch prediction for the branch instruction instance and a confidence value of the branch prediction. The processor is further configured to determine that the confidence for the branch prediction is low based on the confidence value, and responsive to such a determination, generate predicated instruction instances based on the branch instruction instance.2014-05-01
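The decision logic in 20140122836 reduces to a confidence test on each branch prediction: low-confidence branches are if-converted into predicated instruction instances, while high-confidence branches follow the prediction. A sketch with the confidence scale and action names assumed for illustration:

```python
# Hypothetical sketch: if the predictor's confidence falls below the
# threshold, generate predicated instruction instances from the branch
# (both paths guarded by predicates); otherwise follow the prediction.

def handle_branch(prediction_taken, confidence, low_threshold):
    if confidence < low_threshold:
        return "predicate"
    return "taken" if prediction_taken else "not_taken"
```

The payoff is avoiding a likely misprediction flush on hard-to-predict branches, at the cost of executing both guarded paths.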
20140122837NOVEL REGISTER RENAMING SYSTEM USING MULTI-BANK PHYSICAL REGISTER MAPPING TABLE AND METHOD THEREOF - Embodiments of a processor architecture utilizing multi-bank implementation of physical register mapping table are provided. A register renaming system to correlate architectural registers to physical registers includes a physical register mapping table and a renaming logic. The physical register mapping table has a plurality of entries each indicative of a state of a respective physical register. The mapping table has a plurality of non-overlapping sections each of which having respective entries of the mapping table. The renaming logic is coupled to search a number of the sections of the mapping table in parallel to identify entries that indicate the respective physical registers have a first state. The renaming logic selectively correlates each of a plurality of architectural registers to a respective physical register identified as being in the first state. Methods of utilizing the multi-bank implementation of physical register mapping table are also provided.2014-05-01
20140122838WORK-QUEUE-BASED GRAPHICS PROCESSING UNIT WORK CREATION - One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks.2014-05-01
20140122839APPARATUS AND METHOD OF EXECUTION UNIT FOR CALCULATING MULTIPLE ROUNDS OF A SKEIN HASHING ALGORITHM - An apparatus is described that includes an execution unit within an instruction pipeline. The execution unit has multiple stages of a circuit that includes the following: a) a first logic circuitry section having multiple mix logic sections, each having: i) a first input to receive a first quad word and a second input to receive a second quad word; ii) an adder having a pair of inputs respectively coupled to the first and second inputs; iii) a rotator having a respective input coupled to the second input; and iv) an XOR gate having a first input coupled to an output of the adder and a second input coupled to an output of the rotator; and b) permute logic circuitry having inputs coupled to the respective adder and XOR gate outputs of the multiple mix logic sections.2014-05-01
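A software sketch of one mix stage as wired in the abstract: the adder sums the two quad words, the rotator rotates the second, and the XOR gate combines the adder and rotator outputs — the same structure as the Threefish MIX function underlying Skein. The rotation amount and the permutation used below are illustrative, not Skein's actual round constants:

```python
# Sketch of the mix/permute datapath from the abstract. Quad words are
# 64-bit values, so all arithmetic is masked to 64 bits.

MASK64 = (1 << 64) - 1

def rotl64(x, r):
    """Rotate a 64-bit quad word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def mix(x0, x1, r):
    y0 = (x0 + x1) & MASK64        # adder output
    y1 = rotl64(x1, r) ^ y0        # XOR of rotator and adder outputs
    return y0, y1

def permute(words, perm):
    """Permute logic: reorder the mixed quad words for the next round."""
    return [words[i] for i in perm]
```

Because the adder, rotator, and XOR gate form one short combinational path, several such rounds can be chained across the execution unit's pipeline stages.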
20140122840EFFICIENT USAGE OF A MULTI-LEVEL REGISTER FILE UTILIZING A REGISTER FILE BYPASS - A processor includes an execution unit, a first level register file, a second level register file, a plurality of storage locations and a register file bypass controller. The first and second level register files are comprised of physical registers, with the first level register file more efficiently accessed relative to the second level register file. The register file bypass controller is coupled with the execution unit and second level register file. The register file bypass controller determines whether an instruction indicates a logical register is unmapped from a physical register in the first level register file. The register file bypass controller also loads data into one of the storage locations and selects one of the storage locations as input to the execution unit, without mapping the logical register to one of the physical registers in the first level register file.2014-05-01
20140122841EFFICIENT USAGE OF A REGISTER FILE MAPPER AND FIRST-LEVEL DATA REGISTER FILE - A processor includes a first level register file, second level register file, and register file mapper. The first and second level register files are comprised of physical registers, with the first level register file more efficiently accessed relative to the second level register file. The register file mapper is coupled with the first and second level register files. The register file mapper comprises a mapping structure and register file mapper controller. The mapping structure hosts mappings between logical registers and physical registers of the first level register file. The register file mapper controller determines whether to map a destination logical register of an instruction to a physical register in the first level register file. The register file mapper controller also determines, based on metadata associated with the instruction, whether to write data associated with the destination logical register to one of the physical registers of the second level register file.2014-05-01
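A minimal sketch of the two-level decision above, assuming the instruction metadata is a simple flag (an invented stand-in for whatever the patent's metadata encodes). Class and attribute names are hypothetical:

```python
# Hypothetical sketch of a register file mapper over a fast first-level
# file and a larger, slower second-level file. Instruction metadata
# (here a "long_lived" flag, an assumption) steers where the destination
# logical register's value is written.

class RegisterFileMapper:
    def __init__(self, l1_size, l2_size):
        self.l1 = [None] * l1_size    # fast physical registers
        self.l2 = [None] * l2_size    # slower, larger physical registers
        self.mapping = {}             # logical register -> ("l1"|"l2", index)

    def write(self, logical_reg, value, long_lived=False):
        level, regs = ("l2", self.l2) if long_lived else ("l1", self.l1)
        idx = regs.index(None)        # first free physical register
        regs[idx] = value
        self.mapping[logical_reg] = (level, idx)

    def read(self, logical_reg):
        level, idx = self.mapping[logical_reg]
        return (self.l1 if level == "l1" else self.l2)[idx]
```

Steering rarely-read values straight to the second level keeps the small first-level file free for the registers on the hot path.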
20140122842EFFICIENT USAGE OF A REGISTER FILE MAPPER MAPPING STRUCTURE - A processor with a register file mapper can use a hasher to improve the distribution of mappings within a mapping structure. The hasher generates a value based, at least in part, on a thread identifier and logical register identifier. The hash value is used as an index value into the mapping structure. The hashing algorithm is chosen to provide a more even distribution of mappings within the mapping structure, reducing the amount of data written from a first level register file to a second level register file.2014-05-01
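The hashed index can be sketched in a few lines. The patent only requires that the hash depend on both the thread identifier and the logical register identifier; the specific mixing function below (an odd-constant multiply plus XOR) is an assumption chosen to spread different threads' mappings across the structure:

```python
# Hypothetical hash for indexing the mapping structure: fold the thread
# identifier into the logical register identifier so that maps from
# different threads spread across the whole table instead of clustering.

def map_index(thread_id, logical_reg, table_size):
    # Multiply the thread id by a large odd constant, then XOR in the
    # logical register id, so equal register ids from different threads
    # land in different slots.
    h = (thread_id * 0x9E3779B1) ^ logical_reg
    return h % table_size
```

A more even distribution means fewer collisions in the mapping structure, and so fewer spills from the first level register file to the second.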
20140122843CONDITIONAL STORE INSTRUCTIONS IN AN OUT-OF-ORDER EXECUTION MICROPROCESSOR - An instruction translator translates a conditional store instruction (specifying data register, base register, and offset register of the register file) into at least two microinstructions. An out-of-order execution pipeline executes the microinstructions. To execute a first microinstruction, an execution unit receives a base value and an offset from the register file and generates a first result as a function of the base value and offset. The first result specifies the memory location address. To execute a second microinstruction, an execution unit receives the first result and writes the first result to an allocated entry in the store queue if the condition flags satisfy the condition (the store queue subsequently writes the data to the memory location specified by the address), and otherwise kills the allocated store queue entry so that the store queue does not write the data to the memory location specified by the address.2014-05-01
20140122844INTELLIGENT CONTEXT MANAGEMENT - Intelligent context management for thread switching is achieved by determining that a register bank has not been used by a thread for a predetermined number of dispatches, and responsively disabling the register bank for use by that thread. A counter is incremented each time the thread is dispatched but the register bank goes unused. Usage or non-usage of the register bank is inferred by comparing a previous checksum for the register bank to a current checksum. If the previous and current checksums match, the system concludes that the register bank has not been used. If a thread attempts to access a disabled bank, the processor takes an interrupt, enables the bank, and resets the corresponding counter. For a system utilizing transactional memory, it is preferable to enable all of the register banks when thread processing begins to avoid aborted transactions from register banks disabled by lazy context management techniques.2014-05-01
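The lazy-disable policy above can be sketched as follows. The checksum function (CRC-32 here) and the dispatch threshold are assumptions; the patent only requires that matching before/after checksums be treated as evidence of non-use:

```python
# Hypothetical sketch of checksum-based register bank management: after
# each dispatch the bank's checksum is compared with the previous one; a
# match counts as a non-use, and once the counter reaches the threshold
# the bank is disabled for that thread. An access to a disabled bank
# models the interrupt path: re-enable and reset the counter.

import zlib

DISABLE_THRESHOLD = 3   # assumed number of unused dispatches

class RegisterBank:
    def __init__(self, values):
        self.values = list(values)
        self.prev_checksum = self.checksum()
        self.unused_count = 0
        self.enabled = True

    def checksum(self):
        return zlib.crc32(repr(self.values).encode("ascii"))

    def on_dispatch(self):
        cur = self.checksum()
        if cur == self.prev_checksum:
            self.unused_count += 1      # contents unchanged: infer non-use
            if self.unused_count >= DISABLE_THRESHOLD:
                self.enabled = False    # stop saving/restoring this bank
        else:
            self.unused_count = 0       # bank was written; restart the count
        self.prev_checksum = cur

    def access(self, i):
        if not self.enabled:
            self.enabled = True         # "interrupt": re-enable the bank
            self.unused_count = 0
        return self.values[i]
```

Note the checksum only *infers* non-use, as the abstract says: a write that leaves identical contents is indistinguishable from no write at all, which is acceptable because the saved state is still correct.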
20140122845OVERLAPPING ATOMIC REGIONS IN A PROCESSOR - In one embodiment, the present invention includes a processor having a core to execute instructions. This core can include various structures and logic that enable instructions of different atomic regions to be executed in an overlapping manner. To this end, the core can include a register file having registers to store data for use in execution of the instructions, and multiple shadow register files each to store a register checkpoint on initiation of a given atomic region. In this way, overlapping execution of atomic regions identified by a programmer or compiler can occur. Other embodiments are described and claimed.2014-05-01
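A minimal sketch of the shadow-register-file idea, assuming one checkpoint per in-flight atomic region. The class and region identifiers are illustrative:

```python
# Hypothetical sketch of overlapping atomic regions: each region start
# checkpoints the register file into its own shadow copy, so an abort of
# either region restores that region's entry state independently.

class Core:
    def __init__(self, num_regs):
        self.regs = [0] * num_regs
        self.shadows = {}                 # region id -> register checkpoint

    def begin_region(self, rid):
        self.shadows[rid] = list(self.regs)      # shadow register file copy

    def commit(self, rid):
        del self.shadows[rid]                    # checkpoint no longer needed

    def abort(self, rid):
        self.regs = list(self.shadows.pop(rid))  # roll back to region entry
```

With one shadow file per region, a new region can begin before the previous one commits, which is exactly the overlap the abstract describes.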
20140122846BRANCH TARGET ADDRESS CACHE USING HASHED FETCH ADDRESSES - An integrated circuit 2014-05-01
20140122847MICROPROCESSOR THAT TRANSLATES CONDITIONAL LOAD/STORE INSTRUCTIONS INTO VARIABLE NUMBER OF MICROINSTRUCTIONS - An instruction translator receives a conditional load/store instruction that specifies a condition, destination/data register, base register, offset source, and memory addressing mode. The instruction instructs the microprocessor to load data from a memory location into the destination register (conditional load) or store data to the memory location from the data register (conditional store) only if the condition flags satisfy the condition. The offset source specifies whether the offset is an immediate value or a value in an offset register. The addressing mode specifies whether the base register is updated when the condition flags satisfy the condition. The instruction translator translates the conditional load instruction into a number of microinstructions, which varies as a function of the offset source, addressing mode, and whether the conditional instruction is a conditional load or store instruction. An out-of-order execution pipeline executes the microinstructions to generate results specified by the instruction.2014-05-01
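The variable microinstruction count can be illustrated with a toy translator. The micro-op mnemonics below are invented; the point is only that the emitted sequence grows with the addressing mode and offset source, as the abstract states:

```python
# Hypothetical sketch of conditional load/store translation: an
# address-generation micro-op, the conditional memory micro-op, and,
# only for update addressing modes, a conditional base-register
# writeback micro-op.

def translate(kind, offset_is_immediate, update_base):
    """kind is 'load' or 'store'; returns a list of micro-op strings."""
    uops = []
    if offset_is_immediate:
        uops.append("agen base, imm")          # address = base + immediate
    else:
        uops.append("agen base, offset_reg")   # address = base + offset register
    uops.append("cond_" + kind + " addr")      # executes only if flags satisfy condition
    if update_base:
        uops.append("cond_writeback base")     # base update, also condition-gated
    return uops
```

Emitting only the micro-ops a given form needs keeps the common cases short while still supporting the full addressing-mode matrix.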
20140122848SYSTEMS AND METHODS FOR INSTRUCTION ENTITY ALLOCATION AND SCHEDULING ON MULTI-PROCESSORS - Systems and methods for instruction entity allocation and scheduling on multi-processors are provided. In at least one embodiment, a method for generating an execution schedule for a plurality of instruction entities for execution on a plurality of processing units comprises arranging the plurality of instruction entities into a sorted order and allocating instruction entities in the plurality of instruction entities to individual processing units in the plurality of processing units. The method further comprises scheduling instances of the instruction entities in scheduled time windows in the execution schedule, wherein the instances of the instruction entities are scheduled in scheduled time windows according to the sorted order of the plurality of instruction entities, and organizing the execution schedule into execution groups.2014-05-01
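The sort-then-allocate step can be sketched with a classic greedy heuristic. The choice of sort key (descending duration) and the least-loaded-unit placement are assumptions — one common sorted order, not necessarily the patent's — and time-window scheduling and execution groups are omitted:

```python
# Hypothetical sketch of the allocation step: sort the instruction
# entities by descending duration, then assign each to the currently
# least-loaded processing unit (longest-processing-time-first).

def allocate(entities, num_units):
    """entities: list of (name, duration). Returns (per-unit names, loads)."""
    units = [[] for _ in range(num_units)]
    loads = [0] * num_units
    for name, dur in sorted(entities, key=lambda e: -e[1]):
        u = loads.index(min(loads))   # least-loaded unit so far
        units[u].append(name)
        loads[u] += dur
    return units, loads
```

Allocating in sorted order tends to balance the units, which in turn makes it easier to fit each entity's instances into its scheduled time windows.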
20140122849APPARATUS AND METHOD FOR HANDLING EXCEPTION EVENTS - Processing circuitry 2014-05-01
20140122850NON-INTERRUPTING PERFORMANCE TUNING USING RUNTIME RESET - A method, system, and computer program product for non-interrupting performance tuning using runtime reset are provided in the illustrative embodiments. Component performance data from a component of a data processing system is analyzed. The component participates in processing a workload of a workload type. The analyzing determines a characteristic of the workload. A performance requirement of the workload is determined according to a performance requirement of the workload type. A set of preferred performance tuning parameter values is identified to apply to the component to meet the performance requirement of the workload. The set of preferred performance tuning parameter values is sent to the component such that the component is tuned using a value in the set of preferred performance tuning parameter values to meet the performance requirement of the workload.2014-05-01
20140122851TRANSFERRING FILES TO A BASEBOARD MANAGEMENT CONTROLLER ('BMC') IN A COMPUTING SYSTEM - Transferring files to a baseboard management controller (‘BMC’) in a computing system, including: receiving, by the BMC, a request to initiate an update of the computing system; identifying, by the BMC, an area in memory within the computing system for storing an update file; and transmitting, by the BMC, a request to register the BMC as a virtual memory device.2014-05-01
20140122852TRANSFERRING FILES TO A BASEBOARD MANAGEMENT CONTROLLER ('BMC') IN A COMPUTING SYSTEM - Transferring files to a baseboard management controller (‘BMC’) in a computing system, including: receiving, by the BMC, a request to initiate an update of the computing system; identifying, by the BMC, an area in memory within the computing system for storing an update file; and transmitting, by the BMC, a request to register the BMC as a virtual memory device.2014-05-01
20140122853CONFIGURING CONFIGURATION SETTINGS USING A USER CONTEXT - An illustrative embodiment of a computer-implemented process for configuring configuration settings authenticates a user of a predetermined system to form an authenticated user and obtains configurable configuration settings associated with the authenticated user for the predetermined system to form obtained settings. The obtained settings are used in a further portion of a power-on process to configure the predetermined system, whereby configuring the predetermined system alters available resources and associated resource consumption of the predetermined system subject to the obtained settings.2014-05-01
20140122854INFORMATION PROCESSING APPARATUS AND ACTIVATION METHOD THEREFOR - When an information processing apparatus is requested to transfer to a system interruption state, the information processing apparatus determines whether to compress data at each page, and generates a hibernation image configured of compressed data and non-compressed data. In an operating system activation period, the information processing apparatus determines whether to execute hibernation activation processing before initializing a memory management mechanism. In a case where the information processing apparatus executes the hibernation activation processing, the information processing apparatus reduces a size of the memory management region up to the size required for the initialization of the kernel, and reads the compressed data in parallel with initialization of hardware. After initializing the kernel, the information processing apparatus reads the non-compressed data in parallel with decompression of the compressed data.2014-05-01
20140122855Method for Offline Configuration of a Field Device - A method for offline configuration of a predetermined field device type having an integrated web server. The field device is intended to be applied at a process point in an automated process plant. Field devices of different field device types having integrated web servers are physically provided by a service provider, wherein a customer connects via the Internet to the respective field device type corresponding to the field device to be configured. The field device of the predetermined field device type is configured via the Internet so that it meets the requirements for use at the process point in the process plant. After the offline configuration is terminated, the device configuration data are held such that the customer has access to them, and upon later mounting of the field device at the process point in the process plant, the offline-produced device configuration data can be made available to the field device.2014-05-01