7th week of 2015 patent application highlights part 64 |
Patent application number | Title | Published |
20150046589 | Utilizing Multiple Mover Service Partitions During Mobility Operations - A mechanism is provided in a data processing system for performing a logical partition migration utilizing multiple mover service partition pairs. Responsive to a virtual machine monitor initiating a logical partition migration operation to move a logical partition from a source system to a destination system, the mechanism establishes a plurality of input/output paths between a plurality of mover service partition pairs. The virtual machine monitor performs the logical partition migration operation using the plurality of mover service partition pairs to transfer a memory image of the logical partition from the source system to the destination system to effect the logical partition migration operation. | 2015-02-12 |
20150046590 | IDENTIFIER MANAGEMENT - A method for managing identifiers can include receiving, in an identifier management system, a request for an identifier in a computing system. The method can also include verifying availability of the identifier. The method can further include returning an affirmative response to a requesting party. | 2015-02-12 |
20150046591 | DYNAMIC EDGE SERVER ALLOCATION - A system and method for managing edge servers and the location of edge servers in a content delivery network is provided. Incoming content requests to a plurality of existing edge servers are analyzed with respect to their originating locations. It is determined that a new edge server should be added to the network at a location where none of the plurality of existing edge servers reside. A data center is selected in accordance with the desired location, and a new edge server is instantiated. Traffic handled by two or more of the existing edge servers can be consolidated and routed to the new edge server. Edge servers are dynamically added to and removed from the network. | 2015-02-12 |
20150046592 | NETWORK FOLLOWED BY COMPUTE LOAD BALANCING PROCEDURE FOR EMBEDDING CLOUD SERVICES IN SOFTWARE-DEFINED FLEXIBLE-GRID OPTICAL TRANSPORT NETWORKS - A method for providing cloud embedding using Network followed by Compute Load Balancing (NCLB) by mapping one or more virtual links over one or more physical links while balancing network resources, and mapping one or more virtual nodes over one or more physical nodes while balancing different types of computational resources. | 2015-02-12 |
20150046593 | CONTENT DELIVERY METHODS AND SYSTEMS - Aspects of the present disclosure involve provisioning customers of an aggregator, such as a reseller, of a content delivery network (CDN). In one aspect, content requests to the CDN are processed in accordance with the virtual IP (VIP) address at which the request was received, according to a property template bound to the VIP where the template is selected by the customer and only involves discrete parameters for the reseller. In another aspect, cache fills of the network are processed without direct knowledge of the customer origin through a combination of some request attribute, e.g., alias host of the customer, and an attribute of the reseller to make a DNS request to a name server outside the CDN. Another aspect involves receiving a property template selection, an origin and an alias from a customer of the reseller, and providing appropriate DNS entries to validate the customer and provide origin information to the CDN. | 2015-02-12 |
20150046594 | CONTENT DELIVERY METHODS AND SYSTEMS - Aspects of the present disclosure involve provisioning customers of an aggregator, such as a reseller, of a content delivery network (CDN). In one aspect, content requests to the CDN are processed in accordance with the virtual IP (VIP) address at which the request was received, according to a property template bound to the VIP where the template is selected by the customer and only involves discrete parameters for the reseller. In another aspect, cache fills of the network are processed without direct knowledge of the customer origin through a combination of some request attribute, e.g., alias host of the customer, and an attribute of the reseller to make a DNS request to a name server outside the CDN. Another aspect involves receiving a property template selection, an origin and an alias from a customer of the reseller, and providing appropriate DNS entries to validate the customer and provide origin information to the CDN. | 2015-02-12 |
20150046595 | PRE-PROVISIONING VIRTUAL MACHINES IN A NETWORKED COMPUTING ENVIRONMENT - In general, embodiments of the present invention provide an approach for pre-provisioning cloud computing resources such as virtual machines (VMs) in order to achieve faster and more consistent provisioning times. Embodiments of the present invention describe an approach to generate a pre-provisioned pool of virtual machines that are utilized when one or more consumers start to initiate a large volume of requests (e.g., instantiate/populate multiple e-commerce ‘shopping carts’). In a typical embodiment, a selection of an operating system to be associated with a VM is received in a computer data structure. A provisioning of the VM will then be initiated based on the selection of the operating system. Thereafter, at least one selection of at least one software program to be associated with the VM will be received in the computer data structure. The provisioning of the VM can then be completed based on the at least one selection of the at least one software program in response to a provisioning request received in the computer data structure. | 2015-02-12 |
20150046596 | SPECULATIVE GENERATION OF NETWORK PAGE COMPONENTS - Disclosed are various embodiments for speculatively generating network page components to reduce network page generation latency. A request for a network page is received. Speculative generation is initiated for multiple network page components that are capable of being included in the network page. A subset of the speculatively generated network page components that will actually be included in the network page is determined. The network page is then generated, where the subset of the speculatively generated network page components are included in the network page and others of the speculatively generated network page components are excluded from the network page. | 2015-02-12 |
20150046597 | SPATIAL SECURITY IN A SESSION INITIATION PROTOCOL (SIP) CONFERENCE - In a method for securing a session initiation protocol (SIP) conference session, first location information of a SIP conference session invitee attempting to connect to a SIP conference session is received. The computer determines that the received first location information at least partially matches a location requirement assigned to the invitee attempting to connect to the SIP conference session. The computer causes the invitee to be connected to the SIP conference session. | 2015-02-12 |
20150046598 | UNIVERSAL STATE-AWARE COMMUNICATIONS - A communications system for general business environments that exploits knowledge of user state to provide advantages of efficiency and control for individual users and for the business. The communications system also provides particular advantages in environments where users have multiple communication devices and for communications of a business with external parties. In other aspects, the communication system provides features of application flexibility and system fault-tolerance with broad applicability to communication systems. The communication system includes a controller that receives requests for establishing communications when a user is in an appropriate state to receive communications and communicates state of the user to other users. The controller receives a user request for establishing a communication when the user is not in the appropriate state for communication, receives a user request for a state change to the appropriate state to receive the communication, and initiates the communication without changing state of the user. | 2015-02-12 |
20150046599 | MULTICHANNEL COMMUNICATION SYSTEMS AND METHODS OF USING THE SAME - In a general aspect, a computer-readable storage medium stores instructions that when executed cause a processor to perform a process. The instructions can include instructions to transmit video data of a remote desktop session to a client via a first data channel using a first protocol. The instructions can also include instructions to transmit event data of the remote desktop session to the client via a second data channel using a second protocol, the second protocol being different than the first protocol. | 2015-02-12 |
20150046600 | METHOD AND APPARATUS FOR DISTRIBUTING DATA IN HYBRID CLOUD ENVIRONMENT - A method of distributing data in a hybrid cloud environment is provided. The method includes receiving a request to execute a service from a client, analyzing service use pattern information of the client based on the received request to execute the service, estimating a work load of the service by using the analyzed service use pattern information, and distributing data related to the service based on the estimated work load. | 2015-02-12 |
20150046601 | NETWORK SYSTEM, MAINTENANCE WORK MANAGEMENT METHOD, PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM RECORDING PROGRAM - The network system includes an influence determination unit configured to determine, by referring to path management information, whether each path in the network system is expected to be affected or unaffected by maintenance work; a path influence information output unit configured to output the determination result to a plurality of processing apparatuses as path influence information for each of the paths; a maintenance possibility diagnosis unit configured to diagnose whether the maintenance work is performable based on the path influence information of the connected paths; and a maintenance possibility information output unit configured to output the diagnosis result as maintenance possibility information. Maintenance work efficiency is improved in the network system by outputting permission information indicating permission or refusal of the maintenance work for maintenance object components based on the maintenance possibility information. | 2015-02-12 |
20150046602 | DATA SYNCHRONIZATION SYSTEMS AND METHODS - Embodiments disclose synchronizing a predetermined dataset with a real-time dataset, wherein the predetermined dataset and the real-time dataset may be synchronized based on time information and location information. | 2015-02-12 |
20150046603 | Method for Combining Results of Periodically Operating EDP Components at the Correct Time - A system and method for combining results of a multiplicity of periodically operating components of a distributed computer system at the correct time, wherein the components communicate solely by means of messages via at least one communication system, and wherein each component has a global time with the precision P. Each component is unambiguously associated with one of n hierarchical levels wherein the durations of the periods of the components are an integer multiple of one another, and wherein the phase of transmitting each message is synchronized with the corresponding phase of receiving each transmitted message within each longest period of the entire distributed computer system even if the transmitting components and the receiving components are arranged on different hierarchical levels and are spatially distributed. | 2015-02-12 |
20150046604 | FLEXIBLE HARDWARE MODULE ASSIGNMENT FOR ENHANCED PERFORMANCE - A system is disclosed for mapping operating-system-identified addresses for substantially-identical hardware modules into performance-parameter-based addresses for the hardware modules. The mapping is accomplished by configuring a flexible I/O interface responsive to a characterization of at least one performance parameter for each hardware module. | 2015-02-12 |
20150046605 | Method and apparatus for efficient processing of disparate data storage commands - A method for improving I/O performance by a storage controller is provided. The method includes receiving a command completion from a storage device and checking for a command stored in a command queue for more than a predetermined time period. If a command has been in the command queue for more than the predetermined time period, then issuing the command and removing the command from the command queue. If no commands have been stored in the command queue for more than the predetermined time period, then determining if there are any uncompleted commands previously issued to the storage device. If there are not any uncompleted commands previously issued to the storage device, then processing a next command in the command queue and removing the next command from the command queue. | 2015-02-12 |
20150046606 | BLOCK DEVICE MANAGEMENT - Embodiments of the present invention perform a method for reading data from, writing data to, powering on, or configuring a block device without the kernel translating a file system operation into a block device operation. This is implemented by using a core module to couple applications running in user space to a character device through a character device driver; the core module configures the character device to communicate with a block device through a block device driver without the kernel translating a file system command into a block device command. | 2015-02-12 |
20150046607 | SEPARABLE PERIPHERAL DEVICE - A separable peripheral device includes a first module and a second module. The first module includes first connection ports and a first processing unit. The first processing unit is connected to the first connection ports. The first processing unit executes a first algorithm and connects to at least two of the first connection ports. The second module includes second connection ports and a second processing unit. One of the second connection ports is connected to one of the first connection ports to receive a current and a data generated from one of the first connection ports. The second processing unit is connected to the second connection ports to receive the data. The second processing unit executes a second algorithm for converting the data. The second processing unit sends the converted data to one of the second connection ports. | 2015-02-12 |
20150046608 | OBTAINING MULTIMEDIA DATA IN AN EXTENDED CONNECTIVITY MULTIMEDIA APPARATUS FOR PRESENTATION ON A MULTIMEDIA PRESENTATION DEVICE - To present multimedia data, a communication path is established through a selectively separable electrical connector over which encoded multimedia data are conveyed to a multimedia presentation device. An order for a multimedia data file is transmitted to a multimedia source device through a wireless communication interface that is selectively separable from the multimedia presentation device at the electrical connector. The multimedia data file is received over a wireless communication channel through the wireless communication interface upon successful completion of a financial transaction for payment of the multimedia data file. A processor that is selectively separable from the multimedia presentation device at the electrical connector encodes the multimedia data of the received multimedia data file into a format compatible with presentation capabilities of the multimedia presentation device. The encoded multimedia data are conveyed to the multimedia presentation device via the communication path established through the electrical connector. | 2015-02-12 |
20150046609 | METHOD AND SYSTEM FOR BUFFER STATE BASED LOW POWER OPERATION IN A MOCA NETWORK - A first device of a Multimedia Over Coax Alliance (MoCA) network may communicate with a second device of the MoCA network to control power-save operation of the second MoCA device. The first device may control the power-save operation of the second MoCA device based on an amount of data stored in a buffer, wherein the data stored in the buffer is destined for the second device. The buffer may be in a third device which sends the data to the second device, and/or the buffer may be in the first device. The first device may be operable to buffer data destined for the second device while the second device is in a power-saving state. | 2015-02-12 |
20150046610 | STORAGE MASTER NODE - Technology is provided for selecting a master node of a node group in a storage system. The technology can gather data regarding visibility of one or more storage devices of the storage system to one or more active nodes of the node group, determine a maximum visibility value for the node group and selecting an active node with associated visibility value equal to the maximum visibility value as the master node of the node group. | 2015-02-12 |
20150046611 | DEVICES, SYSTEMS, AND METHODS OF REDUCING CHIP SELECT - Several systems and methods of chip select are described. In one such method, a device maintains two identifiers, (ID_a and ID_m). When the device receives a command, it examines the values of ID_a and ID_m relative to a third reference identifier (ID_s). If either ID_a or ID_m is equivalent to ID_s, the device executes the command, otherwise, the device ignores the command. By using two different identification methods, a system has options in choosing to activate devices, being able to selectively switch between selecting multiple devices and single devices in a quick manner. In another such method, a device may have a persistent area that stores identification information such as an ID_a. Thus, system functionality may remain independent from any defect/marginality associated with the physical or logical components required for initial ID_a assignment of all devices in the system. | 2015-02-12 |
20150046612 | MEMORY DEVICE FORMED WITH A SEMICONDUCTOR INTERPOSER - A packaged memory device includes a semiconductor interposer, a first memory stack, a second memory stack, and a buffer chip that are all coupled to the semiconductor interposer. The first memory stack and the second memory stack each include multiple memory chips that are configured as a single stack. The buffer chip is electrically coupled to the first memory stack via a first data bus, electrically coupled to the second memory stack via a second data bus, and electrically coupled to a processor data bus that is configured for transmitting signals between the buffer chip and a processor chip. Such a memory device can have high data capacity and still operate at a high data transfer rate in an energy efficient manner. | 2015-02-12 |
20150046613 | NETWORKING APPARATUS AND A METHOD FOR NETWORKING - This specification discloses a protocol agnostic networking apparatus and method of networking. The networking apparatus receives physical layer signals through a plurality of communications ports that interface with external computing systems. A dynamic routing module interconnects the communications ports with discrete reconfigurable data conduits. Each of the data conduits defines a transmission pathway between predetermined communications ports. A management module maintains the data conduits based on routing commands received from an external computing system. The management module interfaces with the dynamic routing module to make and/or break data conduits responsive to received routing commands. | 2015-02-12 |
20150046614 | CENTRALIZED PERIPHERAL ACCESS PROTECTION - Implementations are disclosed for a centralized peripheral access controller (PAC) that is configured to protect one or more peripheral components in a system. In some implementations, the PAC stores data that can be set or cleared by software. The data corresponds to an output signal of the PAC that is routed to a corresponding peripheral component. When the data indicates that the peripheral is “unlocked” the PAC will allow write transfers to registers in the peripheral component. When the data indicates that the peripheral component is “locked” the PAC will refuse write transfers to registers in the peripheral component and terminate with an error. | 2015-02-12 |
20150046615 | MEMORY MODULE COMMUNICATION CONTROL - Methods and systems for memory module communication control are disclosed. A method includes receiving a message associated with a memory module in communication with a controller via a bus including a clock line. Further, the method includes determining whether the bus is idle. The method also includes communicating a signal via the clock line regarding the message associated with the memory module in response to determining that the bus is idle. | 2015-02-12 |
20150046616 | PERIPHERAL REGISTERS WITH FLEXIBLE DATA WIDTH - A flexible-width peripheral register mapping is disclosed for accessing peripheral registers on a peripheral bus. | 2015-02-12 |
20150046617 | SYSTEM AND METHOD FOR SCALABLE TRACE UNIT TIMESTAMPING - An integrated circuit includes a trace subsystem that provides timestamps for events occurring in a trace source that does not natively support time stamping trace data. A timestamp inserter is coupled to such a trace source. The timestamp inserter generates a modified trace data stream by arranging a reference or references with the trace information from the trace source on a trace bus. A trace destination receives the modified trace data stream including the reference(s). In some embodiments, a timestamp inserter receives a timestamp request and stores a reference in a buffer. Upon later receipt of trace information associated with the request, the timestamp inserter inserts the reference, a current reference and the received trace information into the trace data stream. | 2015-02-12 |
20150046618 | Method of Handling Network Traffic Through Optimization of Receive Side Scaling - An information handling system includes a plurality of processors that each includes a cache memory, and a receive side scaling (RSS) indirection table with a plurality of pointers that each points to one of the processors. A network data packet received by the information handling system is used to determine a pointer to a first processor. In response to determining the pointer, information associated with the network data packet is transferred to the cache memory of the first processor. The information handling system also includes a process scheduler that moves a process associated with the network data packet from a second processor to the first processor, and an RSS module that directs the process scheduler to move the process and associates the first pointer with the processor in response to directing the process scheduler. | 2015-02-12 |
20150046619 | HOST CONTROLLER APPARATUS, INFORMATION PROCESSING APPARATUS, AND EVENT INFORMATION OUTPUT METHOD - The present invention aims to provide a host controller apparatus, an information processing apparatus, and an event information output method that are capable of outputting event information to a system memory while achieving power saving. A host controller apparatus according to the present invention includes: an event controller that outputs occurred event information to a system memory; and an interruption controller that outputs an interrupt signal to a CPU executing an event recorded in the system memory, the interrupt signal requesting execution of the event output from the event controller to the system memory. The event controller outputs the occurred event information to the system memory in synchronization with a timing at which the interruption controller outputs the interrupt signal to the CPU. | 2015-02-12 |
20150046620 | PESSIMISTIC INTERRUPT AFFINITY FOR DEVICES - A computing apparatus identifies that a first processor of a host has forwarded information for a device to a second processor that controls the device. After identifying that the first processor has forwarded the information to the second processor and in response to determining that one or more update criteria have been satisfied, the computing apparatus causes future information for the device to be forwarded to the second processor. | 2015-02-12 |
20150046621 | EXPANSION CARD - An expansion card includes a peripheral component interconnect express (PCIe) slot, a PCIe expansion controller, a PCIe/serial advanced technology attachment (PCIe/SATA) converter, a hard disk drive (HDD) controller, and a storage chip. An edge connector is arranged on a bottom side of the expansion card and includes power pins, ground pins, and signal pins. The power pins are connected to power pins of the PCIe slot, the PCIe expansion controller, the PCIe/SATA converter, the HDD controller, and the storage chip. The signal pins are connected to the PCIe expansion controller. The PCIe expansion controller expands a PCI signal into PCI signals and provides the PCI signals to the PCIe slot and the PCIe/SATA converter. The PCIe/SATA converter converts the PCI signal to SATA signals and provides the SATA signals to the HDD controller. The HDD controller controls the storage chip to read or write data. | 2015-02-12 |
20150046622 | Information Handling System Docking with Coordinated Power and Data Communication - A docking station connects through a docking port and docking cable with an information handling system to support communication between the information handling system and docking station peripherals. On initial interface, one data lane of the docking port establishes a temporary management interface, such as an I2C management bus, to configure the docking station. After configuration, a docking manager, virtual wireless access point and power block cooperate to assign data lanes of the docking port and wireless communication resources to information transfer and power transfer functions based upon processing and communication tasks performed at the information handling system. | 2015-02-12 |
20150046623 | Information Handling System Docking with Coordinated Power and Data Communication - A docking station connects through a docking port and docking cable with an information handling system to support communication between the information handling system and docking station peripherals. On initial interface, one data lane of the docking port establishes a temporary management interface, such as an I2C management bus, to configure the docking station. After configuration, a docking manager, virtual wireless access point and power block cooperate to assign data lanes of the docking port and wireless communication resources to information transfer and power transfer functions based upon processing and communication tasks performed at the information handling system. | 2015-02-12 |
20150046624 | Information Handling System Docking with Coordinated Power and Data Communication - A docking station connects through a docking port and docking cable with an information handling system to support communication between the information handling system and docking station peripherals. On initial interface, one data lane of the docking port establishes a temporary management interface, such as an I2C management bus, to configure the docking station. After configuration, a docking manager, virtual wireless access point and power block cooperate to assign data lanes of the docking port and wireless communication resources to information transfer and power transfer functions based upon processing and communication tasks performed at the information handling system. | 2015-02-12 |
20150046625 | SOLID STATE DRIVE ARCHITECTURES - A solid state drive includes DRAM logical flash and flash memory, in which the system processor reads and writes only to the DRAM logical flash, minimizing writes to the flash memory. A method for operation of a solid state flash device includes writing, by a CPU, to a solid state drive by sending commands and data to the DRAM logical flash using flash commands and formatting. | 2015-02-12 |
20150046626 | LOW POWER SECONDARY INTERFACE ADJUNCT TO A PCI EXPRESS INTERFACE BETWEEN INTEGRATED CIRCUITS - A method, apparatus, and system for a secondary/adjunct interface between two Integrated Circuits (ICs) already having a Peripheral Component Interconnection Express (PCIe) interface, where the PCIe interface performs high-throughput data transfers and the adjunct/secondary interface performs low-throughput data transfers, thereby reducing power consumption for the low-throughput data transfers, are described. | 2015-02-12 |
20150046627 | COMMUNICATION ON AN I2C BUS - A communication system includes an I2C bus interconnecting at least one first device and one second device. At least one direct data link, other than the I2C bus, interconnects the first and second devices. The system is configurable to operate in: a first operating mode providing for data only transmission between the first and second devices over the I2C bus; and a second operating mode providing for simultaneous data transmission between the first and second devices over both the I2C bus and said data link. | 2015-02-12 |
20150046628 | MEMORY MODULE COMMUNICATION CONTROL - Methods and systems for memory module communication control are disclosed. A method includes receiving a message associated with a memory module in communication with a controller via a bus including a clock line. Further, the method includes determining whether the bus is idle. The method also includes communicating a signal via the clock line regarding the message associated with the memory module in response to determining that the bus is idle. | 2015-02-12 |
20150046629 | SWITCH APPARATUS AND ELECTRONIC DEVICE - An electronic device is connected to a plurality of first load media and second load media. The electronic device comprises a processor and a switch module. The processor is capable of switching between a first working mode and a second working mode. Under the second working mode, the processor generates a second control signal, and the switch module establishes independent electronic connections between specified first load media and specified second load media; thus, the first load media and the second load media simultaneously communicate with each other through the electronic device. | 2015-02-12 |
20150046630 | Patching of Programmable Memory - A programmable memory | 2015-02-12 |
20150046631 | APPARATUSES AND METHODS FOR CONFIGURING I/Os OF MEMORY FOR HYBRID MEMORY MODULES - Apparatuses, hybrid memory modules, memories, and methods for configuring I/Os of a memory for a hybrid memory module are described. An example apparatus includes a non-volatile memory, a control circuit coupled to the non-volatile memory, and a volatile memory coupled to the control circuit. The volatile memory is configured to enable a first subset of I/Os for communication with a bus and enable a second subset of I/Os for communication with the control circuit, wherein the control circuit is configured to transfer information between the volatile memory and the non-volatile memory. | 2015-02-12 |
20150046632 | MEMORY ADDRESS MANAGEMENT METHOD, MEMORY CONTROLLER AND MEMORY STORAGE DEVICE - A memory address management method, a memory controller, and a memory storage device are provided. The memory address management method includes: obtaining memory information of a rewritable non-volatile memory module and formatting logical addresses according to the memory information to establish a file system, such that an allocation unit of the file system includes a lower logical programming unit and an upper logical programming unit. Here, the memory information includes a programming sequence, the allocation unit starts with the lower logical programming unit and ends with the upper logical programming unit, and an initial logical address of a data region in the file system belongs to the lower logical programming unit. Accordingly, an access bandwidth of the memory storage device is expanded. | 2015-02-12 |
20150046633 | CACHE CONTROL METHOD AND STORAGE DEVICE - According to one embodiment of the present invention, a cache control method of a storage device is provided, the storage device including: a storage unit that stores data, and a buffer memory having a first cache area and a second cache area serving as a cache of the storage unit. The cache control method according to the embodiment includes: storing data read from the storage unit in the first cache area in response to a read command from a host; moving retried data, on which a read retry has occurred upon readout from the storage unit, from the first cache area to the second cache area such that the amount of retried data in the second cache area does not exceed a predetermined amount; and transferring data in the first cache area or the second cache area to the host. | 2015-02-12 |
20150046634 | MEMORY SYSTEM AND INFORMATION PROCESSING DEVICE - According to embodiments a memory system is connectable to a host which includes a host controller and a host memory including a first memory area and a second memory area. The memory system includes an interface unit, a non-volatile memory, and a controller unit. The interface unit receives a read command and a write command. The controller unit writes write-data to the non-volatile memory according to the write command. The controller unit determines whether read-data requested by the read command is in the first memory area. If the read-data is in the first memory area, the controller unit causes the host controller to copy the read-data from the first memory area to the second memory area. If the read-data is not in the first memory area, the controller unit reads the read-data from the non-volatile memory and causes the host controller to store the read-data in the second memory area. | 2015-02-12 |
20150046635 | Electronic System with Storage Drive Life Estimation Mechanism and Method of Operation Thereof - Systems, methods and/or devices are used to enable storage drive life estimation. In one aspect, the method includes (1) determining two or more age criteria of a storage drive, and (2) determining a drive age of the storage drive in accordance with the two or more age criteria of the storage drive. | 2015-02-12 |
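The abstract above does not say how the two or more age criteria are combined; a minimal sketch, assuming normalized wear ratios and a pessimistic max-combining rule (all names are hypothetical, not from the application):

```python
def estimate_drive_age(criteria):
    """Combine normalized age criteria (0.0 = new, 1.0 = end of life)
    into one drive-age estimate by taking the most pessimistic value."""
    if not criteria:
        raise ValueError("at least one age criterion is required")
    return max(criteria.values())

# Hypothetical criteria: wear from P/E cycles and from power-on hours.
age = estimate_drive_age({
    "pe_cycle_ratio": 0.42,   # erase cycles used / rated cycles
    "power_on_ratio": 0.30,   # power-on hours / rated hours
})
```

A weighted average would be an equally plausible combining rule; the max is simply the most conservative choice.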
20150046636 | STORAGE DEVICE, COMPUTER SYSTEM AND METHODS OF OPERATING SAME - A method of operating a storage device which includes a non-volatile memory including a normal unit configured to store normal data and a swap unit configured to store swap data and a controller configured to control the non-volatile memory is provided. The method includes receiving the swap data and a unit selection signal for selecting the swap unit from a host; and processing the swap data according to a data processing policy of the swap unit and writing the processed swap data to the swap unit. The data processing policy of the swap unit may be different from a data processing policy of the normal unit. | 2015-02-12 |
20150046637 | DATA STORAGE DEVICE AND METHOD FOR RESTRICTING ACCESS THEREOF - A data storage device including a flash memory, a temperature sensor, and a controller. The flash memory is arranged to store data. The temperature sensor is arranged to detect the surrounding ambient temperature. The controller is configured to receive a write command from a host and perform a protection mechanism when the detected surrounding ambient temperature is outside a predetermined range, wherein the write command is arranged to enable the controller to write data into the flash memory, and the controller is configured to restrict writing while the protection mechanism is active. | 2015-02-12 |
20150046638 | MULTI-BIT MEMORY DEVICE AND ON-CHIP BUFFERED PROGRAM METHOD THEREOF - A program method of a multi-bit memory device is provided. First page data is programmed in a first region of a memory cell array. The first page data is stored in a first buffer of a page buffer. Second page data is programmed in the first region of the memory cell array. The second page data is stored in a third buffer of the page buffer. Third page data is stored in the first region of the memory cell array. The second page data stored in the third buffer is transferred to a second buffer of the page buffer and the third page data is stored in the third buffer. The first to third page data stored in page buffer are programmed in a second region of the memory cell array. | 2015-02-12 |
20150046639 | SYSTEM AND METHOD OF PAGE BUFFER OPERATION FOR MEMORY DEVICES - Systems and methods are provided for using page buffers of memory devices connected to a memory controller through a common bus. A page buffer of a memory device is used as a temporary cache for data which is written to the memory cells of the memory device. This can allow the memory controller to use memory devices as temporary caches so that the memory controller can free up space in its own memory. | 2015-02-12 |
20150046640 | Method for Utilizing a Memory Interface to Control Partitioning of a Memory Module - Described herein are apparatuses and methods for implementing partitioning in memory cards and modules. A representative memory card/module in accordance with the invention may include a memory device(s), and a memory interface which includes a data bus, a command line and a clock line. The memory card/module may further include a memory controller coupled to the memory device(s) and to the memory interface. The memory card/module may include means for controlling the partitioning of the memory device(s). The memory controller may be configured to operate the memory device(s) in accordance with the partition information. | 2015-02-12 |
20150046641 | MEMORY INTERFACE HAVING MEMORY CONTROLLER AND PHYSICAL INTERFACE - A memory interface which is capable of performing calibration of a physical interface by realizing handshake of Update Interface signals. The physical interface connects memory and a memory controller which controls the memory to each other and converts data between the memory and the memory controller. A data conversion unit is disposed between the memory controller and the physical interface, for adjusting output timing of signals output from the memory controller to the physical interface and adjusting output timing of signals output from the physical interface to the memory controller. An update process unit is disposed between the memory controller and the physical interface, for controlling executing timing of calibration for adjusting drive performance of the physical interface. | 2015-02-12 |
20150046642 | MEMORY COMMAND SCHEDULER AND MEMORY COMMAND SCHEDULING METHOD - A memory command scheduler is provided. The memory command scheduler includes a scheduler queue receiving first and second requests for a memory access from external devices and storing the first and second requests therein; and a controller generating a command of the second request after a preset number of clock cycles from a current clock cycle and transferring the generated command to a memory, if generation of a command of the first request is possible in the current clock cycle and generation of the command of the second request is possible after the preset number of clock cycles from the current clock cycle. | 2015-02-12 |
20150046643 | Clustering with Virtual Entities Using Associative Memories - A system including an associative memory and a first input device in communication with the associative memory. The first input device is configured to receive an attribute value relating to a corresponding attribute of a subject of interest to a user. The system also includes a processor, in communication with the first input device, and configured to generate a first entity using the attribute value. The associative memory is configured to perform an analogy query using the first entity to retrieve a second entity whose attributes match some attributes of the first entity. The associative memory is further configured to cluster first data in the first entity and second data in the second entity. | 2015-02-12 |
20150046644 | SHIFTABLE MEMORY DEFRAGMENTATION - Shiftable memory that supports defragmentation includes a memory having built-in shifting capability, and a memory defragmenter to shift a page of data representing a contiguous subset of data stored in the memory from a first location to a second location within the memory to be adjacent to another page of stored data. A method of memory defragmentation includes defining an array in memory cells of the shiftable memory and performing a memory defragmentation using the built-in shifting capability of the shiftable memory to shift a data page stored in the array. | 2015-02-12 |
20150046645 | Method, Storage System, and Program for Spanning Single File Across Plurality of Tape Media - Mechanisms for splitting and spanning a single file across a plurality of tape media in a tape drive file system are provided. The mechanisms format the tape media so as to store an index of the file and the file's data in the tape media in a predetermined format; split the single file into separate portions and manage IDs identifying the plurality of tape media that sequentially store the portions of the file in association with the file; and store, as the index in each of the tape media, a generation number indicating how many times each file portion has been stored and updated. Upon receiving a request to read the stored split file, the system obtains the index on the tape medium storing the file portion whose generation number is highest and reads a time stamp related to the size and update of the single file. | 2015-02-12 |
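The highest-generation lookup described above can be sketched as follows; the per-tape data layout is an illustrative assumption:

```python
def freshest_portion(tapes):
    """Given {tape_id: (generation, index)}, return the tape id and index
    whose stored generation number is highest, i.e. the most recent copy."""
    tape_id, (generation, index) = max(tapes.items(), key=lambda kv: kv[1][0])
    return tape_id, index

tape_id, index = freshest_portion({
    "TAPE01": (3, {"offset": 0}),
    "TAPE02": (5, {"offset": 4096}),   # generation 5 is the newest copy
})
```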
20150046646 | Virtual Network Disk Architectures and Related Systems - In accordance with one embodiment, a disk drive device comprising: a disk drive; at least one Ethernet port; at least one low-power processor capable of running storage protocols; and one or more Ethernet circuits, wherein one or more of the Ethernet ports provide a power transmission medium which powers the disk drive. | 2015-02-12 |
20150046647 | APPARATUS AND METHOD FOR MANAGING DATA STORAGE - Provided are an apparatus and method for managing data storage. A first log structured array stores data in a storage device. A second log structured array in the storage device stores metadata for the data in the first log structured array, wherein the second log structured array storing the metadata for the first log structured data storage system is nested within the first log structured array, and wherein the first and second log structured arrays comprise separate instances of log structured arrays. Address space is allocated in the second log structured array for metadata when the allocation of address space is required for metadata for data stored in the first log structured array. | 2015-02-12 |
20150046648 | IMPLEMENTING DYNAMIC CACHE ENABLING AND DISABLING BASED UPON WORKLOAD - A method, system and memory controller for implementing dynamic enabling and disabling of cache based upon workload in a computer system. Predefined sets of information are monitored while the cache is enabled to identify a change in workload, and the cache is selectively disabled responsive to a first identified predefined workload. While the cache is disabled, predefined information is monitored to identify a second predefined workload, and the cache is selectively enabled responsive to the identified second predefined workload. | 2015-02-12 |
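One way to picture the enable/disable decision above is a small hysteresis switch driven by an observed workload metric; the hit-rate metric and both thresholds are assumptions for illustration, not taken from the application:

```python
class CacheSwitch:
    """Toggle a cache based on an observed workload metric, with hysteresis
    so the cache is not flapped on and off by noise."""

    def __init__(self, disable_below=0.05, enable_above=0.20):
        self.enabled = True
        self.disable_below = disable_below
        self.enable_above = enable_above

    def observe(self, hit_rate):
        if self.enabled and hit_rate < self.disable_below:
            self.enabled = False   # first predefined workload: cache not helping
        elif not self.enabled and hit_rate > self.enable_above:
            self.enabled = True    # second predefined workload: cache would help
        return self.enabled
```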
20150046649 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 2015-02-12 |
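The per-extent gating of demotion described above might look like the following sketch, assuming set-based caches and a per-extent enable map (all names are hypothetical):

```python
def demote_track(track, extent_of_track, extent_caching_enabled,
                 first_cache, second_cache):
    """Demote a track from the first cache to the second cache only when
    second-cache caching is enabled for the track's extent."""
    extent = extent_of_track(track)
    if not extent_caching_enabled.get(extent, False):
        return False               # caching disabled for this extent: do not demote
    first_cache.discard(track)
    second_cache.add(track)
    return True
```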
20150046650 | Flexible Configuration Hardware Streaming Unit - A processor having a streaming unit is disclosed. In one embodiment, a processor includes a streaming unit configured to load one or more input data streams from a memory coupled to the processor. The streaming unit includes an internal network having a plurality of queues configured to store streams of data. The streaming unit further includes a plurality of operations circuits configured to perform operations on the streams of data. The streaming unit is software programmable to operatively couple two or more of the plurality of operations circuits together via one or more of the plurality of queues. The operations circuits may perform operations on multiple streams of data, resulting in corresponding output streams of data. | 2015-02-12 |
20150046651 | METHOD FOR STORING MODIFIED INSTRUCTION DATA IN A SHARED CACHE - A processor may include a cache configured to store instructions and memory data for the processor. The cache may store instructions in which a relative address, such as for a branch instruction has been calculated, such that the instruction stored in the cache is modified from how the instruction is stored in main memory. The cache may include additional information in the tag to identify an instruction entry versus a memory data entry. When receiving a cache request, the cache may look at a type tag in addition to an address tag to determine if the request is a hit or a miss based upon the request being for an instruction from an instruction fetch unit or for memory data from a memory management unit. A cache entry may be invalidated and evicted if the address matches but the data type does not match. | 2015-02-12 |
20150046652 | WRITE COMBINING CACHE MICROARCHITECTURE FOR SYNCHRONIZATION EVENTS - A method, computer program product, and system are described that enforce a release consistency with special accesses sequentially consistent (RCsc) memory model and execute release synchronization instructions such as a StRel event without tracking an outstanding store event through a memory hierarchy, while efficiently using bandwidth resources. What is also described is the decoupling of a store event from an ordering of the store event with respect to an RCsc memory model. The description also includes a set of hierarchical read/write combining buffers that coalesce stores from different parts of the system. In addition, a pool component maintains partial order of received store events and release synchronization events to avoid content addressable memory (CAM) structures, full cache flushes, as well as direct write-throughs to memory. The approach improves the performance of both global and local synchronization events since a store event may not need to reach main memory to complete. | 2015-02-12 |
20150046653 | METHODS AND SYSTEMS FOR DETERMINING A CACHE SIZE FOR A STORAGE SYSTEM - Technology for operating a cache sizing system is disclosed. In various embodiments, the technology monitors input/output (IO) accesses to a storage system within a monitor period; tracks an access map for storage addresses within the storage system during the monitor period; and counts a particular access condition of the IO accesses based on the access map during the monitor period. When sizing a cache of the storage system that enables the storage system to provide a specified level of service, the counting is for computing a working set size (WSS) estimate of the storage system. | 2015-02-12 |
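A minimal sketch of the working-set-size estimate described above, assuming the access map simply records which fixed-size blocks were touched during the monitor period (the block size and counting rule are assumptions):

```python
def working_set_size(accesses, block_size=4096):
    """Estimate the working set size (WSS) over a monitor period as the
    number of distinct blocks touched, times the block size."""
    touched = {addr // block_size for addr in accesses}
    return len(touched) * block_size

# Three IO accesses, but only two distinct 4 KiB blocks are touched.
wss = working_set_size([0, 100, 5000])
```

A cache sized at or above this estimate would hold the period's entire working set under these assumptions.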
20150046654 | CONTROLLING A DYNAMICALLY INSTANTIATED CACHE - A change in workload characteristics detected at one tier of a multi-tiered cache is communicated to another tier of the multi-tiered cache. Multiple caching elements exist at different tiers, and at least one tier includes a cache element that is dynamically resizable. The communicated change in workload characteristics causes the receiving tier to adjust at least one aspect of cache performance in the multi-tiered cache. In one aspect, at least one dynamically resizable element in the multi-tiered cache is resized responsive to the change in workload characteristics. | 2015-02-12 |
20150046655 | DATA PROCESSING SYSTEMS - A data processing system includes one or more processors | 2015-02-12 |
20150046656 | MANAGING AND SHARING STORAGE CACHE RESOURCES IN A CLUSTER ENVIRONMENT - Systems and methods are provided for managing storage cache resources among all servers within the cluster storage environment. A method includes partitioning a main cache of a corresponding node into a global cache and a local cache, sharing each global cache of each node with other ones of the nodes of the multiple nodes, and dynamically adjusting a ratio of an amount of space of the main cache making up the global cache and an amount of space of the main cache making up the local cache, based on access latency and cache hit over a predetermined period of time of each of the global cache and the local cache. | 2015-02-12 |
20150046657 | SYSTEM AND METHOD FOR MANAGING CORRESPONDENCE BETWEEN A CACHE MEMORY AND A MAIN MEMORY - A system for managing correspondence between a cache memory, subdivided into a plurality of cache areas, and a main memory, subdivided into a plurality of memory areas, includes: a mechanism allocating, to each area of the main memory, at least one area of the cache memory; a mechanism temporarily assigning, to any data row stored in one of the areas of the main memory, a cache row included only in one cache area allocated to the main memory area wherein the data row is stored; and a mechanism generating and updating settings of the allocation by activating the allocation mechanism, the temporary assigning mechanism configured to determine a cache row to be assigned to a data row based on the allocation settings. | 2015-02-12 |
20150046658 | CACHE ORGANIZATION AND METHOD - A method and information processing system with improved cache organization is provided. Each register capable of accessing memory has associated metadata, which contains the tag, way, and line for a corresponding cache entry, along with a valid bit, allowing a memory access which hits a location in the cache to go directly to the cache's data array, avoiding the need to look up the address in the cache's tag array. When a cache line is evicted, any metadata referring to the line is marked as invalid. By reducing the number of tag lookups performed to access data in a cache's data array, the power that would otherwise be consumed by performing tag lookups is saved, thereby reducing power consumption of the information processing system, and the cache area needed to implement a cache having a desired level of performance may be reduced. | 2015-02-12 |
20150046659 | File Reading Method, Storage Device, and Reading System - A file reading method, storage device, and reading system, relating to the field of file reading. The method includes receiving, by a storage device, a first read request sent by a client, where to-be-read data requested by the first read request is a part of the file; reading, from a cache, data that is of the to-be-read data and located in the cache, and reading, from a first storage medium, data that is of the to-be-read data and not located in the cache; and pre-reading, from the first storage medium, data in at least one of the containers, and storing the pre-read data into the cache, where the pre-read container includes at least one unread file segment of the file. | 2015-02-12 |
20150046660 | ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. | 2015-02-12 |
20150046661 | Dynamic Address Negotiation for Shared Memory Regions in Heterogeneous Multiprocessor Systems - Mobile computing devices may be configured to compile and execute portions of a general purpose software application in an auxiliary processor (e.g., a DSP) of a multiprocessor system by reading and writing information to a shared memory. A first process (P1) on the applications processor may request address negotiation with a second process (P2) on the auxiliary processor, obtain a first address map from a first operating system, and send the first address map to the auxiliary processor. The second process (P2) may receive the first address map, obtain a second address map from a second operating system, identify matching addresses in the first and second address maps, store the matching addresses as common virtual addresses, and send the common virtual addresses back to the applications processor. The first and second processes (i.e., P1 and P2) may each use the common virtual addresses to map physical pages to the memory. | 2015-02-12 |
20150046662 | COALESCING TEXTURE ACCESS AND LOAD/STORE OPERATIONS - A system, method, and computer program product are provided for coalescing memory access requests. A plurality of memory access requests is received in a thread execution order and a portion of the memory access requests are coalesced into memory order, where memory access requests included in the portion are generated by threads in a thread block. A memory operation is generated that is transmitted to a memory system, where the memory operation represents the coalesced portion of memory access requests. | 2015-02-12 |
20150046663 | INFORMATION PROCESSING APPARATUS AND RECORDING MEDIUM - An information processing apparatus includes a first controller, a second controller, a non-volatile storage medium, and a volatile storage medium. The non-volatile storage medium is able to store data under control by the first controller, and unable to store data under control by the second controller. The volatile storage medium is able to store data under control by the second controller such that the data are readable therefrom under control by the first controller. The second controller includes a first storage unit that stores history data of operation performed under control by the second controller in the volatile storage medium. The first controller includes a reading unit and a second storage unit. The reading unit reads the history data stored in the volatile storage medium by the first storage unit. The second storage unit stores the history data read by the reading unit in the non-volatile storage medium. | 2015-02-12 |
20150046664 | Storage Control System with Settings Adjustment Mechanism and Method of Operation Thereof - Systems, methods and/or devices are used to enable a settings adjustment mechanism. In one aspect, the method includes (1) accessing characterization information corresponding to how a group of non-volatile memory devices of a storage control system operates as the group wears, (2) determining an estimated age of a non-volatile memory device, of the group of non-volatile memory devices, in accordance with a wear indicator for the non-volatile memory device, and (3) determining one or more settings for the non-volatile memory device in accordance with the estimated age and the characterization information. | 2015-02-12 |
20150046665 | Data Storage System with Stale Data Mechanism and Method of Operation Thereof - Systems, methods and/or devices are used to enable a stale data mechanism. In one aspect, the method includes (1) receiving a write command specifying a logical address to which to write, (2) determining whether a stale flag corresponding to the logical address is set, (3) in accordance with a determination that the stale flag is not set, setting the stale flag and releasing the write command to be processed, and (4) in accordance with a determination that the stale flag is set, detecting an overlap, wherein the overlap indicates two or more outstanding write commands are operating on the same memory space. | 2015-02-12 |
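The stale-flag check above maps naturally onto a small tracker; this is a sketch under the assumption that flags are keyed by logical address, with names invented for illustration:

```python
class StaleFlagTracker:
    """Track per-LBA stale flags to detect overlapping outstanding writes."""

    def __init__(self):
        self._stale = set()

    def on_write(self, lba):
        if lba in self._stale:
            return "overlap"   # another outstanding write targets this LBA
        self._stale.add(lba)
        return "released"      # safe to release the command for processing

    def on_complete(self, lba):
        self._stale.discard(lba)   # write finished; clear the stale flag
```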
20150046666 | MEMORY SYSTEM - A memory system includes: a memory controller configured to change data to be stored in memory cells according to an address of a weak cell in order to store changed data having a lower program level than a highest program level among a plurality of program levels in peripheral cells adjacent to the weak cell; and a memory device configured to execute a program loop in order to store the changed data in a selected page. | 2015-02-12 |
20150046667 | SYNCHRONIZATION FOR INITIALIZATION OF A REMOTE MIRROR STORAGE FACILITY - A method includes computing, in a local storage system having a local volume with a plurality of local regions, respective local checksum signatures over the local regions, and computing, in a remote storage system having a remote volume with remote regions in a one-to-one correspondence with the local regions, respective remote checksum signatures over the remote regions. A given remote region is identified, the given remote region having a given remote signature and a corresponding local region with a given local signature that does not match the given remote signature. The data in the given remote region is then replaced with data from the corresponding local region. | 2015-02-12 |
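The signature-compare-and-replace step above can be sketched as follows; SHA-256 stands in for whatever checksum the application actually uses, and regions are modeled as byte strings:

```python
import hashlib

def region_signature(region):
    """Checksum signature over one region's bytes."""
    return hashlib.sha256(region).hexdigest()

def sync_mirror(local_regions, remote_regions):
    """Replace remote regions whose signatures differ from the corresponding
    local regions; return the indices that were re-copied."""
    replaced = []
    for i, (local, remote) in enumerate(zip(local_regions, remote_regions)):
        if region_signature(local) != region_signature(remote):
            remote_regions[i] = local
            replaced.append(i)
    return replaced
```

Comparing signatures rather than region contents is what makes the initialization cheap: only mismatched regions cross the network.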
20150046668 | INPUT/OUTPUT OPERATION MANAGEMENT IN A DEVICE MIRROR RELATIONSHIP - In a network computing environment, in which data stored at a primary storage system, is mirrored from the primary storage system to a secondary storage system, a selection may be made to direct an input/output operation such as a read operation, for example, to the secondary storage system instead of the primary storage system in order to improve operations. For example, a read operation may be directed to the secondary storage to improve the read operation response time. In other aspects, a read or other input/output operation may be directed to the secondary storage to improve utilization of the resources of the secondary storage system. Other aspects are described. | 2015-02-12 |
20150046669 | STORAGE SYSTEM AND METHOD FOR OPERATING SAME - A storage system includes a nonvolatile memory (NVM) and controller. The NVM includes a page buffer storing valid data and invalid data. The controller includes a processor providing copy control information, a hardware IP executing a copy operation that copies only the valid data, and a DMA that receives the copy control information and controls operation of the hardware IP during execution of the copy operation in response to the copy control information, referencing valid data information stored by the DMA. | 2015-02-12 |
20150046670 | STORAGE SYSTEM AND WRITING METHOD THEREOF - A writing method of a storage system which includes a host and a storage connected to the host, includes receiving journal data during a generation of a data writing transaction; inserting in a first map table, a plurality of entries, each entry including a first logical address of a first logical area of the storage and a second logical address of a second logical area of the storage; writing the journal data to a physical area of the storage corresponding to the first logical address; and remapping the physical area from the first logical address onto the second logical address using the plurality of entries when a size of a usable space of the first logical area is less than a desired value. | 2015-02-12 |
20150046671 | METHODS, APPARATUS, INSTRUCTIONS AND LOGIC TO PROVIDE VECTOR POPULATION COUNT FUNCTIONALITY - Instructions and logic provide SIMD vector population count functionality. Some embodiments store in each data field of a portion of n data fields of a vector register or memory vector, a plurality of bits of data. In a processor, a SIMD instruction for a vector population count is executed, such that for that portion of the n data fields in the vector register or memory vector, the occurrences of binary values equal to each of a first one or more predetermined binary values, are counted and the counted occurrences are stored, in a portion of a destination register corresponding to the portion of the n data fields in the vector register or memory vector, as a first one or more counts corresponding to the first one or more predetermined binary values. | 2015-02-12 |
20150046672 | METHODS, APPARATUS, INSTRUCTIONS AND LOGIC TO PROVIDE POPULATION COUNT FUNCTIONALITY FOR GENOME SEQUENCING AND ALIGNMENT - Instructions and logic provide SIMD vector population count functionality. Some embodiments store in each data field of a portion of n data fields of a vector register or memory vector, at least two bits of data. In a processor, a SIMD instruction for a vector population count is executed, such that for that portion of the n data fields in the vector register or memory vector, the occurrences of binary values equal to each of a first one or more predetermined binary values, are counted and the counted occurrences are stored, in a portion of a destination register corresponding to the portion of the n data fields in the vector register or memory vector, as a first one or more counts corresponding to the first one or more predetermined binary values. | 2015-02-12 |
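The per-field counting described in the two population-count applications above can be sketched scalarly for a single data field; the 2-bit lane width (matching 2-bit genome bases) and all names are illustrative assumptions:

```python
def lane_popcount(field, target, lane_bits=2, field_bits=8):
    """Count how many lane_bits-wide lanes of `field` hold the value `target`,
    a scalar stand-in for one SIMD data field of the vector register."""
    mask = (1 << lane_bits) - 1
    return sum(1 for shift in range(0, field_bits, lane_bits)
               if (field >> shift) & mask == target)

# Field 0b11_00_01_11 has four 2-bit lanes; the value 0b11 appears twice.
count = lane_popcount(0b11000111, 0b11)
```

A SIMD implementation would compute this count for every data field of the vector in parallel, storing one count per field in the destination register.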
20150046673 | VECTOR PROCESSOR - A vector processor is disclosed including a variety of variable-length instructions. Computer-implemented methods are disclosed for efficiently carrying out a variety of operations in a time-conscious, memory-efficient, and power-efficient manner. Methods for more efficiently managing a buffer by controlling the threshold based on the length of delay line instructions are disclosed. Methods for disposing multi-type and multi-size operations in hardware are disclosed. Methods for condensing look-up tables are disclosed. Methods for in-line alteration of variables are disclosed. | 2015-02-12 |
20150046674 | LOW POWER COMPUTATIONAL IMAGING - The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices. | 2015-02-12 |
20150046675 | APPARATUS, SYSTEMS, AND METHODS FOR LOW POWER COMPUTATIONAL IMAGING - The present application discloses a computing device that can provide a low-power, highly capable computing platform for computational imaging. The computing device can include one or more processing units, for example one or more vector processors and one or more hardware accelerators, an intelligent memory fabric, a peripheral device, and a power management module. The computing device can communicate with external devices, such as one or more image sensors, an accelerometer, a gyroscope, or any other suitable sensor devices. | 2015-02-12 |
20150046676 | Method and Devices for Data Path and Compute Hardware Optimization - Methods and devices for distributing processing capacity in a multi-processor system include monitoring a data input for a feature activity with a first processor, such as a high efficiency processor. When feature activity is detected, a feature event may be predicted and processing capacity requirement may be estimated. The sufficiency of available processing capacity of the first processor to meet the estimated future processing capacity requirement and process the predicted feature event may be determined. Processing capacity of a second processor, such as a high performance processor, may be distributed along with the data input when the available processing capacity of the first processor are insufficient to meet the processing capacity requirement and process the predicted feature event. | 2015-02-12 |
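The capacity-sufficiency test above reduces to a simple routing decision; the load units and queue model here are assumptions made for illustration:

```python
def route_feature_event(estimated_load, efficient_capacity, queued_loads):
    """Send a predicted feature event to the high-efficiency processor when its
    spare capacity suffices, otherwise offload it, with the data input, to the
    high-performance processor."""
    available = efficient_capacity - sum(queued_loads)
    return "efficiency" if estimated_load <= available else "performance"

# Capacity 10 with loads 4 + 2 queued leaves 4 units spare.
target = route_feature_event(3, 10, [4, 2])
```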
20150046677 | APPARATUS, SYSTEMS, AND METHODS FOR PROVIDING COMPUTATIONAL IMAGING PIPELINE - The present application relates generally to a parallel processing device. The parallel processing device can include a plurality of processing elements, a memory subsystem, and an interconnect system. The memory subsystem can include a plurality of memory slices, at least one of which is associated with one of the plurality of processing elements and comprises a plurality of random access memory (RAM) tiles, each tile having individual read and write ports. The interconnect system is configured to couple the plurality of processing elements and the memory subsystem. The interconnect system includes a local interconnect and a global interconnect. | 2015-02-12 |
20150046678 | APPARATUS, SYSTEMS, AND METHODS FOR PROVIDING CONFIGURABLE COMPUTATIONAL IMAGING PIPELINE - The present application relates generally to a parallel processing device. The parallel processing device can include a plurality of processing elements, a memory subsystem, and an interconnect system. The memory subsystem can include a plurality of memory slices, at least one of which is associated with one of the plurality of processing elements and comprises a plurality of random access memory (RAM) tiles, each tile having individual read and write ports. The interconnect system is configured to couple the plurality of processing elements and the memory subsystem. The interconnect system includes a local interconnect and a global interconnect. | 2015-02-12 |
20150046679 | Energy-Efficient Run-Time Offloading of Dynamically Generated Code in Heterogeneous Multiprocessor Systems - Mobile computing devices may be configured to intelligently select, compile, and execute portions of a general purpose software application in an auxiliary processor (e.g., a DSP) of a multiprocessor system. A processor of the mobile device may be configured to determine whether portions of a software application are suitable for execution in an auxiliary processor, monitor operating conditions of the system, determine a historical context based on the monitoring, and determine whether the portions that were determined to be suitable for execution in an auxiliary processor should be compiled for execution in the auxiliary processor based on the historical context. The processor may also be configured to continue monitoring the system, update the historical context information, and determine whether code previously compiled for execution on the auxiliary processor should be invoked or executed in the auxiliary processor based on the updated historical context information. | 2015-02-12 |
20150046680 | DYNAMIC AND SELECTIVE CORE DISABLEMENT AND RECONFIGURATION IN A MULTI-CORE PROCESSOR - A method for dynamically reconfiguring one or more cores of a multi-core microprocessor comprising a plurality of cores and sideband communication wires, extrinsic to a system bus connected to a chipset, which facilitate non-system-bus inter-core communications. At least some of the cores are operable to be reconfigurably designated with or without master credentials for purposes of structuring sideband-based inter-core communications. The method includes determining an initial configuration of cores of the microprocessor, which configuration designates at least one core, but not all of the cores, as a master core, and reconfiguring the cores according to a modified configuration, which modified configuration removes a master designation from a core initially so designated, and assigns a master designation to a core not initially so designated. Each core is configured to conditionally drive a sideband communication wire to which it is connected based upon its designation, or lack thereof, as a master core. | 2015-02-12 |
20150046681 | SYSTEMS AND DEVICES FOR QUANTUM PROCESSOR ARCHITECTURES - Quantum processor architectures employ unit cells tiled over an area. A unit cell may include first and second sets of qubits where each qubit in the first set crosses at least one qubit in the second set. Angular deviations between qubits in one set may allow qubits in the same set to cross one another. Each unit cell is positioned proximally adjacent at least one other unit cell. Communicatively coupling between qubits is realized through respective intra-cell and inter-cell coupling devices. | 2015-02-12 |
20150046682 | GLOBAL BRANCH PREDICTION USING BRANCH AND FETCH GROUP HISTORY - This disclosure includes a method for performing branch prediction by a processor having an instruction pipeline. The processor speculatively updates a global history register having fetch group history and branch history, fetches a fetch group of instructions, and assigns a global history vector to the instructions. The processor predicts any branches in the fetch group using the global history vector and a predictor, and evaluates whether the fetch group contains a predicted taken branch. If the fetch group contains a predicted taken branch, the processor flushes subsequently fetched instructions in the pipeline following the predicted taken branch, repairs the global history register to the global history vector, and updates the global history register based on branch prediction information. If the fetch group does not contain a predicted taken branch, the processor updates the global history register with a branch history value for each branch in the fetch group. | 2015-02-12 |
20150046683 | METHOD FOR USING REGISTER TEMPLATES TO TRACK INTERDEPENDENCIES AMONG BLOCKS OF INSTRUCTIONS - A method for executing instructions using register templates to track interdependencies among blocks of instructions. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks; and using a register template to track instruction destinations and instruction sources by populating the register template with block numbers corresponding to the instruction blocks, wherein the block numbers corresponding to the instruction blocks indicate interdependencies among the blocks of instructions. | 2015-02-12 |
20150046684 | TECHNIQUE FOR GROUPING INSTRUCTIONS INTO INDEPENDENT STRANDS - A device compiler and linker is configured to group instructions into different strands for execution by different threads based on the dependence of those instructions on other, long-latency instructions. A thread may execute a strand that includes long-latency instructions, and then hardware resources previously allocated for the execution of that thread may be de-allocated from the thread and re-allocated to another thread. The other thread may then execute another strand while the long-latency instructions are in flight. With this approach, the other thread is not required to wait for the long-latency instructions to complete before acquiring hardware resources and initiating execution of the other strand, thereby eliminating at least a portion of the time that the other thread would otherwise spend waiting. | 2015-02-12 |
20150046685 | Intelligent Multicore Control For Optimal Performance Per Watt - The various aspects provide a device and methods for intelligent multicore control of a plurality of processor cores of a multicore integrated circuit. The aspects may identify and activate an optimal set of processor cores to achieve the lowest level power consumption for a given workload or the highest performance for a given power budget. The optimal set of processor cores may be the number of active processor cores or a designation of specific active processor cores. When a temperature reading of the processor cores is below a threshold, a set of processor cores may be selected to provide the lowest power consumption for the given workload. When the temperature reading of the processor cores is above the threshold, a set of processor cores may be selected to provide the best performance for a given power budget. | 2015-02-12 |
20150046686 | METHOD FOR EXECUTING BLOCKS OF INSTRUCTIONS USING A MICROPROCESSOR ARCHITECTURE HAVING A REGISTER VIEW, SOURCE VIEW, INSTRUCTION VIEW, AND A PLURALITY OF REGISTER TEMPLATES - A method for executing blocks of instructions using a microprocessor architecture having a register view, source view, instruction view, and a plurality of register templates. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks; using a plurality of register templates to track instruction destinations and instruction sources by populating the register template with block numbers corresponding to the instruction blocks, wherein the block numbers corresponding to the instruction blocks indicate interdependencies among the blocks of instructions; using a register view data structure, wherein the register view data structure stores destinations corresponding to the instruction blocks; using a source view data structure, wherein the source view data structure stores sources corresponding to the instruction blocks; and using an instruction view data structure, wherein the instruction view data structure stores instructions corresponding to the instruction blocks. | 2015-02-12 |
20150046687 | Hardware Streaming Unit - A processor having a streaming unit is disclosed. In one embodiment, a processor includes one or more execution units configured to execute instructions of a processor instruction set. The processor further includes a streaming unit configured to execute a first instruction of the processor instruction set, wherein executing the first instruction comprises the streaming unit loading a first data stream from a memory of a computer system responsive to execution of the first instruction. The first data stream comprises a plurality of data elements. The first instruction includes a first argument indicating a starting address of the first stream, a second argument indicating a stride between the data elements, and a third argument indicative of an ending address of the stream. The streaming unit is configured to output a second data stream corresponding to the first data stream. | 2015-02-12 |
20150046688 | METHOD OF GENERATING PROCESSOR TEST INSTRUCTION SEQUENCE AND GENERATING APPARATUS - A test instruction sequence generating method for a processor includes classifying registers used for executing test instructions into two register groups, generating a test instruction executed by the processor, amending a register specified in a result value register field of the test instruction to a register of a first register group when a first instruction is specified in an arithmetic type field of the test instruction and a register of a second register group is specified in the result value register field of the test instruction, and further amending a register specified in an input value register field of the test instruction to a register of the second register group when a second instruction is specified in the arithmetic type field of the test instruction and a register of the first register group is specified in the input value register field of the test instruction. | 2015-02-12 |
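The streaming-unit load described in 20150046687 is defined by three arguments: a starting address, a stride between elements, and an ending address. A minimal sketch of those semantics (the function name and the flat `memory` list are illustrative, not the patent's hardware interface):

```python
def stream_load(memory, start, stride, end):
    """Toy model of a streaming-unit load: gather every `stride`-th
    element from `start` up to (but not including) `end`."""
    return [memory[addr] for addr in range(start, end, stride)]

# Example: a 100-element toy memory where each cell holds its own address.
mem = list(range(100))
elements = stream_load(mem, start=4, stride=8, end=36)  # -> [4, 12, 20, 28]
```

The output list stands in for the "second data stream" the unit produces from the first; real hardware would stream the elements rather than materialize a list.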
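The register-template idea in 20150046683 and 20150046686 — stamping each register a block writes with that block's number, so that a later block's source registers reveal which earlier blocks it depends on — can be sketched as follows (the data shapes and names are assumptions for illustration, not the patents' structures):

```python
def build_register_template(blocks):
    """blocks: list of (block_number, source_regs, dest_regs) tuples,
    in program order. Returns (template, deps): `template` maps each
    register to the block that last wrote it; `deps` maps each block
    to the set of earlier blocks it depends on through its sources."""
    template, deps = {}, {}
    for block_no, sources, dests in blocks:
        # A source register already stamped with an earlier block's
        # number records an interdependency on that producer block.
        deps[block_no] = {template[r] for r in sources if r in template}
        for r in dests:
            template[r] = block_no  # stamp destinations with this block
    return template, deps
```

For blocks `[(0, [], ["r1"]), (1, ["r1"], ["r2"]), (2, ["r1", "r2"], ["r1"])]`, block 2 depends on blocks 0 and 1, and the final template records r1 as last written by block 2 and r2 by block 1.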