Patent application number | Description | Published |
20080242901 | METHOD FOR PRODUCING FLUORINATED ORGANIC COMPOUNDS - A method for preparing fluorinated organic compounds wherein at least one fluorinated olefin is reacted with methyl fluoride in the gas-phase and in the presence of a Lewis Acid catalyst to form at least one product having at least 3 carbon atoms. | 10-02-2008 |
20090099274 | AMINE CATALYSTS FOR POLYURETHANE FOAMS - The invention provides polyurethane and polyisocyanurate foams and methods for the preparation thereof. More particularly, the invention relates to open-celled polyurethane and polyisocyanurate foams and methods for their preparation. The foams are characterized by a fine, uniform cell structure and little or no foam collapse. The foams are produced with a polyol premix composition which comprises a combination of a hydrohaloolefin blowing agent, a polyol, a silicone surfactant, and a sterically hindered amine catalyst. | 04-16-2009 |
20100139274 | Chloro- And Bromo-Fluoro Olefin Compounds Useful As Organic Rankine Cycle Working Fluids - Aspects of the present invention are directed to working fluids and their use in processes wherein the working fluids comprise compounds having the structure of formula (I): | 06-10-2010 |
20100210883 | METHOD FOR PRODUCING FLUORINATED ORGANIC COMPOUNDS - Disclosed are methods for producing fluorinated organic compounds, including hydrofluoropropenes, which preferably comprises converting at least one compound of formula (I): | 08-19-2010 |
20100233057 | METHODS AND REACTOR DESIGNS FOR PRODUCING PHOSPHORUS PENTAFLUORIDE - Processes and systems for the production of phosphorus pentafluoride (PF5) | 09-16-2010 |
20110288346 | PROCESS FOR CIS 1,1,1,4,4,4-HEXAFLUORO-2-BUTENE - Disclosed is a process for preparing cis-1,1,1,4,4,4-hexafluoro-2-butene comprising the steps of (a) reacting CCl | 11-24-2011 |
20110288348 | PROCESS FOR THE PREPARATION OF HEXAFLUORO-2-BUTYNE - Disclosed is a process for making hexafluoro-2-butyne comprising the steps of: (a) providing a composition comprising CF | 11-24-2011 |
20110288349 | PROCESS FOR THE PRODUCTION OF FLUORINATED ALKENES - The present invention provides a method for the preparation of suitable chlorofluorocarbon and hydrochlorofluorocarbon materials or chlorofluorocarbon and hydrochlorofluorocarbon alkene and alkyne intermediates which serve as useful feedstock for fluorination and reduction to cis-1,1,1,4,4,4-hexafluoro-2-butene. Also presented is a continuous process for the production of cis-1,1,1,4,4,4-hexafluoro-2-butene from the alkene and alkyne intermediates. | 11-24-2011 |
20110288350 | PROCESS FOR THE PREPARATION OF FLUORINATED CIS-ALKENE - Disclosed is a process for the preparation of fluorine-containing olefins comprising contacting a chlorofluoroalkane with hydrogen in the presence of a catalyst at a temperature sufficient to cause replacement of the chlorine substituents of the chlorofluoroalkane with hydrogen to produce a fluorine-containing olefin. Also disclosed are catalyst compositions for the hydrodechlorination of chlorofluoroalkanes comprising copper metal deposited on a support, and comprising palladium deposited on calcium fluoride, poisoned with lead, and reducing in the presence or absence of a dehydrochlorination catalyst under conditions effective to form a product stream comprising cis-1,1,1,4,4,4-hexafluoro-2-butene (HFO-1336). | 11-24-2011 |
20120187330 | STABILIZED IODOCARBON COMPOSITIONS - Disclosed are compositions comprising at least one iodocarbon compound and preferably at least one stabilization agent. These compositions are generally useful as refrigerants for heating and cooling, as blowing agents, as aerosol propellants, as solvent composition, and as fire extinguishing and suppressing agents. | 07-26-2012 |
20120215041 | PROCESS FOR CIS-1-CHLORO-3,3,3-TRIFLUOROPROPENE - Disclosed is a process for making one isomer of CF | 08-23-2012 |
20130004402 | METHODS AND APPARATUSES FOR PURIFYING PHOSPHORUS PENTAFLUORIDE - Embodiments of methods and apparatuses for purifying phosphorus pentafluoride are provided. The method comprises the step of contacting a feed stream comprising phosphorus pentafluoride and impurities with anhydrous hydrogen fluoride. The anhydrous hydrogen fluoride reduces the impurities from the feed stream to form an impurity-depleted phosphorus pentafluoride effluent. | 01-03-2013 |
20130131403 | METHOD FOR PRODUCING FLUORINATED ORGANIC COMPOUNDS - Disclosed is a method for producing fluorinated organic compounds, including hydrofluoropropenes, which preferably comprises converting at least one compound of formula (I): | 05-23-2013 |
20130211155 | PROCESS FOR MAKING TETRAFLUOROPROPENE - The present invention describes a process for making CF | 08-15-2013 |
20130211156 | PROCESS FOR 1,3,3,3-TETRAFLUOROPROPENE - The present invention provides a simple three step process for the production of 1,3,3,3-tetrafluoropropene (HFO-1234ze). In the first step, carbon tetrachloride is added to vinyl fluoride to afford the compound CCl | 08-15-2013 |
20140079619 | MANUFACTURE OF PF5 - A process for producing phosphorus pentafluoride by the reaction of elemental phosphorus and elemental fluorine gas, comprising supplying to the reaction non-stoichiometric amounts of elemental phosphorus and elemental fluorine gas. | 03-20-2014 |
20140366540 | CHLORO- AND BROMO-FLUORO OLEFIN COMPOUNDS USEFUL AS ORGANIC RANKINE CYCLE WORKING FLUIDS - Aspects of the present invention are directed to working fluids and their use in processes wherein the working fluids comprise compounds having the structure of formula (I): | 12-18-2014 |
20150045588 | PROCESS FOR 1-CHLORO-3,3,3-TRIFLUOROPROPENE FROM TRIFLUOROMETHANE - The present invention provides routes for making 1-chloro-3,3,3-trifluoropropene (HCFO-1233zd) from commercially available raw materials. More specifically, this invention provides routes for HCFO-1233zd from inexpensive and commercially available trifluoromethane (HFC-23). | 02-12-2015 |
20150045590 | PROCESS FOR 1-CHLORO-3,3,3-TRIFLUOROPROPENE FROM TRIFLUOROPROPENE - The present invention provides routes for making 1-chloro-3,3,3-trifluoropropene (HCFO-1233zd) from commercially available raw materials. More specifically, this invention provides several routes for forming HCFO-1233zd from 3,3,3-trifluoropropene (FC-1234zf). | 02-12-2015 |
20150065746 | FLUOROSURFACTANTS HAVING IMPROVED BIODEGRADABILITY - To address the problem of insufficient biodegradability of perfluorinated surfactants, the present invention provides biodegradable fluorosurfactants derived from olefins having —CHR, —CHRf, —CHF, and/or —CH | 03-05-2015 |
Patent application number | Description | Published |
20080256345 | Method and Apparatus for Conserving Power by Throttling Instruction Fetching When a Processor Encounters Low Confidence Branches in an Information Handling System - An information handling system includes a processor that throttles the instruction fetcher whenever the inaccuracy, or lack of confidence, in branch predictions for branch instructions stored in a branch instruction queue exceeds a predetermined threshold confidence level of inaccuracy or error. In this manner, fetch operations slow down to conserve processor power when it is likely that the processor will mispredict the outcome of branch instructions. Fetch operations return to full speed when it is likely that the processor will correctly predict the outcome of branch instructions. | 10-16-2008 |
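The confidence-based throttling idea in this abstract can be sketched as follows; the queue representation and both thresholds are illustrative assumptions, not details taken from the application.

```python
# Hypothetical sketch: throttle instruction fetch when too many queued
# branches have low prediction confidence. Confidence values and both
# thresholds below are invented for illustration.

LOW_CONFIDENCE_THRESHOLD = 2   # a branch below this confidence counts as "low"
MAX_LOW_CONFIDENCE = 3         # throttle once this many low-confidence branches queue up

def should_throttle(branch_queue):
    """Return True when enough queued branches are low-confidence that
    fetching further ahead is likely wasted work (and wasted power)."""
    low = sum(1 for conf in branch_queue if conf < LOW_CONFIDENCE_THRESHOLD)
    return low >= MAX_LOW_CONFIDENCE
```

When `should_throttle` returns True, the fetcher would slow or pause; fetching resumes at full speed once the queue drains back below the threshold.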
20080256347 | METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR PATH-CORRELATED INDIRECT ADDRESS PREDICTIONS - A method, system, and computer program product are provided for maintaining a path history register of register indirect branches. A set of bits is generated based on a set of target address bits using a bit selection and/or a hash function operation, and the generated set of bits is inserted into a path history register by shifting bits in the path history register and/or applying a hash operation. Information corresponding to prior history is removed from the path history register using a shift-out operation and/or a hash operation. The path history register is used to maintain a recent target table and generate register-indirect branch target address predictions based on path history correlation between register-indirect branches captured by the path history register. | 10-16-2008 |
20080276081 | COMPACT REPRESENTATION OF INSTRUCTION EXECUTION PATH HISTORY - A method of representing instruction execution path history is provided. The method in one aspect may include gathering information associated with a current instruction, the information including at least a target address. Previously computed bits representing execution path history are modified and hashed based on the target address to compute the current execution path history. | 11-06-2008 |
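The shift-and-hash update that both of these path-history abstracts describe can be sketched as below; the register width, the number of target bits folded in per branch, and the XOR hash are all assumptions for illustration.

```python
# Hypothetical sketch of a path history register update: shift out the
# oldest history, then fold in bits selected from the branch target
# address, using XOR as the hash step. All widths are illustrative.

WIDTH = 16  # assumed history register width in bits

def select_bits(target_addr, n=4):
    # simple bit selection: take the n low-order bits of the target address
    return target_addr & ((1 << n) - 1)

def update_history(history, target_addr, n=4):
    # shift out prior history, shift in target-derived bits via XOR
    shifted = (history << n) & ((1 << WIDTH) - 1)
    return shifted ^ select_bits(target_addr, n)
```

A real design would tune the register width and selection/hash functions so distinct paths to the same indirect branch map to distinct history values.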
20080294944 | PROCESSOR BUS FOR PERFORMANCE MONITORING WITH DIGESTS - A method for monitoring event occurrences from a plurality of processor units at a centralized location via a dedicated bus coupled between the plurality of processor units and the centralized location. In particular, the method comprises receiving, at the centralized location, data indicative of cumulative events occurring at one of the processor units, and storing the data in a first temporary memory. The data is then stored in a register based on a tag identifier affixed to the data in an instance where the tag identifier provides indicia of one of the plurality of processor units. | 11-27-2008 |
20090157377 | METHOD AND SYSTEM FOR MULTIPROCESSOR EMULATION ON A MULTIPROCESSOR HOST SYSTEM - A method (and system) for executing a multiprocessor program written for a target instruction set architecture on a host computing system having a plurality of processors designed to process instructions of a second instruction set architecture, includes representing each portion of the program designed to run on a processor of the target computing system as one or more program threads to be executed on the host computing system. | 06-18-2009 |
20100287355 | Dynamic Translation in the Presence of Intermixed Code and Data - A system for translating software in a first format into a second format includes a memory containing the software in the first format and an emulator coupled to the memory configured to translate the software from the first format to the second format. The system also includes a host engine coupled to the emulator and configured to perform instructions in the second format. The emulator is configured to determine whether a store command in the first format stores information to a memory page that includes instructions and to convert the store instruction to a special store instruction in the event that the target of the store instruction does not contain an instruction. | 11-11-2010 |
20110191095 | METHOD AND SYSTEM FOR EFFICIENT EMULATION OF MULTIPROCESSOR ADDRESS TRANSLATION ON A MULTIPROCESSOR - A method (and structure) of mapping a memory addressing of a multiprocessing system when it is emulated using a virtual memory addressing of another multiprocessing system includes accessing a local lookaside table (LLT) on a target processor with a target virtual memory address. Whether there is a “miss” in the LLT is determined and, with the miss determined in the LLT, a lock for a global page table is obtained. | 08-04-2011 |
20120089820 | HYBRID MECHANISM FOR MORE EFFICIENT EMULATION AND METHOD THEREFOR - In a host system, a method for using instruction scheduling to efficiently emulate the operation of a target computing system includes preparing, on the host system, an instruction sequence to interpret an instruction written for execution on the target computing system. An instruction scheduling on the instruction sequence is performed, to achieve efficient instruction-level parallelism, for the host system. A separate and independent instruction sequence is inserted, which, when executed simultaneously with the instruction sequence, performs to copy to a separate location a minimum instruction sequence necessary to execute an intent of an interpreted target instruction, the interpreted target instruction being a translation; and modifies the interpreter code such that a next interpretation of the target instruction results in execution of the translated version, thereby removing execution of interpreter overhead. | 04-12-2012 |
20120216197 | VIRTUALIZING THE EXECUTION OF HOMOGENEOUS PARALLEL SYSTEMS ON HETEROGENEOUS MULTIPROCESSOR PLATFORMS - An embodiment of the invention is a virtual machine monitor that is executable by computer processor. The virtual machine monitor runs a virtual processor. When the virtual processor encounters a faulting instruction the virtual processor is unmapped from the physical processor, and generates a list of other physical processors that could execute the instruction. The virtual machine monitor determines if one of the other of the physical processors in the list is currently idle, and when one of the other of the physical processors in the list is determined to be currently idle, the virtual processor is mapped to a second physical processor, which is the one of the other of the physical processors in the list that was determined to be currently idle. | 08-23-2012 |
20140040592 | ACTIVE BUFFERED MEMORY - According to one embodiment of the present invention, a method for operating a memory device that includes memory and a processing element includes receiving, in the processing element, a command from a requestor, loading, in the processing element, a program based on the command, the program comprising a load instruction loaded from a first memory location in the memory, and performing, by the processing element, the program, the performing including loading data in the processing element from a second memory location in the memory. The method also includes generating, by the processing element, a virtual address of the second memory location based on the load instruction and translating, by the processing element, the virtual address into a real address. | 02-06-2014 |
20140040596 | PACKED LOAD/STORE WITH GATHER/SCATTER - Embodiments relate to packed loading and storing of data. An aspect includes a method for packed loading and storing of data distributed in a system that includes memory and a processing element. The method includes fetching and decoding an instruction for execution by the processing element. The processing element gathers a plurality of individually addressable data elements from non-contiguous locations in the memory which are narrower than a nominal width of register file elements in the processing element based on the instruction. The data elements are packed and loaded into register file elements of a register file entry by the processing element based on the instruction, such that at least two of the data elements gathered from the non-contiguous locations in the memory are packed and loaded into a single register file element of the register file entry. | 02-06-2014 |
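The packing step described in this abstract — several narrow gathered elements sharing one wide register file element — can be sketched as follows; the element and register widths and the dict-as-memory model are assumptions for illustration.

```python
# Hypothetical sketch: gather narrow data elements from non-contiguous
# addresses and pack several of them into each wide register file
# element. Widths and the memory model are illustrative.

def gather_packed(memory, addresses, elem_bits=16, reg_bits=64):
    """Gather elem_bits-wide values from non-contiguous addresses and pack
    them, low slot first, into reg_bits-wide register file elements."""
    per_reg = reg_bits // elem_bits
    mask = (1 << elem_bits) - 1
    regs = []
    for i, addr in enumerate(addresses):
        slot = i % per_reg
        if slot == 0:
            regs.append(0)          # start a new register file element
        regs[-1] |= (memory[addr] & mask) << (slot * elem_bits)
    return regs
```

With 16-bit elements in 64-bit register file elements, five gathered values occupy two register file elements instead of five.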
20140040597 | PREDICATION IN A VECTOR PROCESSOR - Embodiments relate to vector processor predication in an active memory device. An aspect includes a system for vector processor predication in an active memory device. The system includes memory in the active memory device and a processing element in the active memory device. The processing element is configured to perform a method including decoding an instruction with a plurality of sub-instructions to execute in parallel. One or more mask bits are accessed from a vector mask register in the processing element. The one or more mask bits are applied by the processing element to predicate operation of a unit in the processing element associated with at least one of the sub-instructions. | 02-06-2014 |
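The mask-bit predication in this abstract can be sketched as below; modeling sub-instructions as per-lane callables is an illustrative simplification, not the patent's mechanism.

```python
# Hypothetical sketch of vector predication: each mask bit gates whether
# the corresponding sub-instruction updates its lane. Sub-instructions
# are modeled as callables for illustration.

def execute_predicated(sub_instructions, mask_bits, lanes):
    """Apply per-lane mask bits: a sub-instruction updates its lane only
    when the corresponding mask bit is set; otherwise the lane is unchanged."""
    result = list(lanes)
    for i, (op, bit) in enumerate(zip(sub_instructions, mask_bits)):
        if bit:
            result[i] = op(lanes[i])
    return result
```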
20140040598 | VECTOR PROCESSING IN AN ACTIVE MEMORY DEVICE - Embodiments relate to vector processing in an active memory device. An aspect includes a system for vector processing in an active memory device. The system includes memory in the active memory device and a processing element in the active memory device. The processing element is configured to perform a method including decoding an instruction with a plurality of sub-instructions to execute in parallel. An iteration count to repeat execution of the sub-instructions in parallel is determined. Execution of the sub-instructions is repeated in parallel for multiple iterations, by the processing element, based on the iteration count. Multiple locations in the memory are accessed in parallel based on the execution of the sub-instructions. | 02-06-2014 |
20140040599 | PACKED LOAD/STORE WITH GATHER/SCATTER - Embodiments relate to packed loading and storing of data. An aspect includes a system for packed loading and storing of distributed data. The system includes memory and a processing element configured to communicate with the memory. The processing element is configured to perform a method including fetching and decoding an instruction for execution by the processing element. A plurality of individually addressable data elements is gathered from non-contiguous locations in the memory which are narrower than a nominal width of register file elements in the processing element based on the instruction. The processing element packs and loads the data elements into register file elements of a register file entry based on the instruction, such that at least two of the data elements gathered from the non-contiguous locations in the memory are packed and loaded into a single register file element of the register file entry. | 02-06-2014 |
20140040601 | PREDICATION IN A VECTOR PROCESSOR - Embodiments relate to vector processor predication in an active memory device. An aspect includes a method for vector processor predication in an active memory device that includes memory and a processing element. The method includes decoding, in the processing element, an instruction including a plurality of sub-instructions to execute in parallel. One or more mask bits are accessed from a vector mask register in the processing element. The one or more mask bits are applied by the processing element to predicate operation of a unit in the processing element associated with at least one of the sub-instructions. | 02-06-2014 |
20140040603 | VECTOR PROCESSING IN AN ACTIVE MEMORY DEVICE - Embodiments relate to vector processing in an active memory device. An aspect includes a method for vector processing in an active memory device that includes memory and a processing element. The method includes decoding, in the processing element, an instruction including a plurality of sub-instructions to execute in parallel. An iteration count to repeat execution of the sub-instructions in parallel is determined. Based on the iteration count, execution of the sub-instructions in parallel is repeated for multiple iterations by the processing element. Multiple locations in the memory are accessed in parallel based on the execution of the sub-instructions. | 02-06-2014 |
20140047211 | VECTOR REGISTER FILE - An aspect includes accessing a vector register in a vector register file. The vector register file includes a plurality of vector registers and each vector register includes a plurality of elements. A read command is received at a read port of the vector register file. The read command specifies a vector register address. The vector register address is decoded by an address decoder to determine a selected vector register of the vector register file. An element address is determined for one of the plurality of elements associated with the selected vector register based on a read element counter of the selected vector register. A word is selected in a memory array of the selected vector register as read data based on the element address. The read data is output from the selected vector register based on the decoding of the vector register address by the address decoder. | 02-13-2014 |
20140047214 | VECTOR REGISTER FILE - An aspect includes accessing a vector register in a vector register file. The vector register file includes a plurality of vector registers and each vector register includes a plurality of elements. A read command is received at a read port of the vector register file. The read command specifies a vector register address. The vector register address is decoded by an address decoder to determine a selected vector register of the vector register file. An element address is determined for one of the plurality of elements associated with the selected vector register based on a read element counter of the selected vector register. A word is selected in a memory array of the selected vector register as read data based on the element address. The read data is output from the selected vector register based on the decoding of the vector register address by the address decoder. | 02-13-2014 |
20140115294 | MEMORY PAGE MANAGEMENT - According to one embodiment, a method for operating a memory device includes receiving a first request from a requestor, wherein the first request includes accessing data at a first memory location in a memory bank, opening a first page in the memory bank, wherein opening the first page includes loading a row including the first memory location into a buffer, the row being loaded from a row location in the memory bank and transmitting the data from the first memory location to the requestor. The method also includes determining, by a memory controller, whether to close the first page following execution of the first request based on information relating to a likelihood that a subsequent request will access the first page. | 04-24-2014 |
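One plausible form of the close-page decision this abstract describes is a hit-rate heuristic, sketched below; the specific statistic and the 0.5 cutoff are assumptions, not taken from the application.

```python
# Hypothetical sketch of an open-page policy: keep the row buffer open
# only when past behaviour suggests the next request will hit the same
# page. The hit-rate statistic and cutoff are invented for illustration.

def close_after_access(open_page_hits, open_page_total):
    """Decide whether to close the page after serving a request."""
    if open_page_total == 0:
        return True          # no history yet: close conservatively
    hit_rate = open_page_hits / open_page_total
    return hit_rate < 0.5    # close when same-page reuse has been unlikely
```

Keeping a page open saves a row activation on a hit but costs a precharge plus activation on a miss, which is why the controller conditions the choice on likelihood of reuse.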
20140129799 | ADDRESS GENERATION IN AN ACTIVE MEMORY DEVICE - Embodiments relate to address generation in an active memory device that includes memory and a processing element. An aspect includes a method for address generation in the active memory device. The method includes reading a base address value and an offset address value from a register file group of the processing element. The processing element determines a virtual address based on the base address value and the offset address value. The processing element translates the virtual address into a physical address and accesses a location in the memory based on the physical address. | 05-08-2014 |
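The base-plus-offset generation and translation steps in this abstract can be sketched as follows; the 4 KiB page size and dict-based register file and page table are illustrative assumptions.

```python
# Hypothetical sketch of address generation: read base and offset from a
# register file group, form the virtual address, and translate it to a
# physical address through a page table. Page size is assumed 4 KiB.

PAGE_BITS = 12  # assumed page size: 4 KiB

def generate_address(register_file, base_reg, offset_reg, page_table):
    """Return the physical address for base + offset after translation."""
    virtual = register_file[base_reg] + register_file[offset_reg]
    vpn = virtual >> PAGE_BITS                      # virtual page number
    page_offset = virtual & ((1 << PAGE_BITS) - 1)  # offset within the page
    return (page_table[vpn] << PAGE_BITS) | page_offset
```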
20140130050 | MAIN PROCESSOR SUPPORT OF TASKS PERFORMED IN MEMORY - According to one embodiment of the present invention, a method for operating a computer system including a main processor, a processing element and memory is provided. The method includes receiving, at the processing element, a task from the main processor, performing, by the processing element, an instruction specified by the task, determining, by the processing element, that a function is to be executed on the main processor, the function being part of the task, sending, by the processing element, a request to the main processor for execution, the request comprising execution of the function and receiving, at the processing element, an indication that the main processor has completed execution of the function specified by the request. | 05-08-2014 |
20140130051 | MAIN PROCESSOR SUPPORT OF TASKS PERFORMED IN MEMORY - According to one embodiment of the present invention, a computer system for executing a task includes a main processor, a processing element and memory. The computer system is configured to perform a method including receiving, at the processing element, the task from the main processor, performing, by the processing element, an instruction specified by the task, determining, by the processing element, that a function is to be executed on the main processor, the function being part of the task, sending, by the processing element, a request to the main processor for execution, the request including execution of the function and receiving, at the processing element, an indication that the main processor has completed execution of the function specified by the request. | 05-08-2014 |
20140136811 | ACTIVE MEMORY DEVICE GATHER, SCATTER, AND FILTER - Embodiments relate to loading and storing of data. An aspect includes a method for transferring data in an active memory device that includes memory and a processing element. An instruction is fetched and decoded for execution by the processing element. Based on determining that the instruction is a gather instruction, the processing element determines a plurality of source addresses in the memory from which to gather data elements and a destination address in the memory. One or more gathered data elements are transferred from the source addresses to contiguous locations in the memory starting at the destination address. Based on determining that the instruction is a scatter instruction, a source address in the memory from which to read data elements at contiguous locations and one or more destination addresses in the memory to store the data elements at non-contiguous locations are determined, and the data elements are transferred. | 05-15-2014 |
20140136857 | POWER-CONSTRAINED COMPILER CODE GENERATION AND SCHEDULING OF WORK IN A HETEROGENEOUS PROCESSING SYSTEM - A heterogeneous processing system includes a compiler for performing power-constrained code generation and scheduling of work in the heterogeneous processing system. The compiler produces source code that is executable by a computer. The compiler performs a method. The method includes dividing a power budget for the heterogeneous processing system into a discrete number of power tokens. Each of the power tokens has an equal value of units of power. The method also includes determining a power requirement for executing a code segment on a processing element of the heterogeneous processing system. The determining is based on characteristics of the processing element and the code segment. The method further includes allocating, to the processing element at runtime, at least one of the power tokens to satisfy the power requirement. | 05-15-2014 |
20140136858 | POWER-CONSTRAINED COMPILER CODE GENERATION AND SCHEDULING OF WORK IN A HETEROGENEOUS PROCESSING SYSTEM - An active memory system includes a computer and an active memory device including layers of memory forming a three-dimensional memory device and individual columns of chips forming vaults in communication with a processing element and logic. The processing element is configured to communicate to the chips and other processing elements. The active memory system also includes a compiler configured to implement a method. The method includes dividing a power budget for the active memory device into a discrete number of power tokens, each of the power tokens having an equal value of units of power. The method also includes determining a power requirement for executing a code segment on the processing element of the active memory device based on characteristics of the processing element and the code segment. The method further includes allocating, to the processing element at runtime, one or more power tokens to satisfy the power requirement. | 05-15-2014 |
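The power-token scheme in these two abstracts — a budget divided into equal tokens, with tokens granted per code segment at runtime — can be sketched as below; the watt-denominated interface is an illustrative assumption.

```python
# Hypothetical sketch of power-token allocation: divide the power budget
# into a discrete number of equal-valued tokens and grant a processing
# element just enough tokens to cover its code segment's requirement.

def allocate_tokens(budget_watts, token_watts, power_requirement):
    """Return the number of tokens to grant, or None if the budget
    cannot satisfy the requirement."""
    pool = budget_watts // token_watts               # tokens in the budget
    needed = -(-power_requirement // token_watts)    # ceiling division
    if needed > pool:
        return None                                  # request cannot be satisfied
    return needed
```

A runtime would subtract granted tokens from the pool and return them when the code segment completes, keeping total draw within the budget.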
20140136894 | EXPOSED-PIPELINE PROCESSING ELEMENT WITH ROLLBACK - An aspect includes providing rollback support in an exposed-pipeline processing element. A method for providing rollback support in an exposed-pipeline processing element includes detecting, by rollback support logic, an error associated with execution of an instruction in the exposed-pipeline processing element. The rollback support logic determines whether the exposed-pipeline processing element supports replay of the instruction for a predetermined number of cycles. Based on determining that the exposed-pipeline processing element supports replay of the instruction, a rollback action is performed in the exposed-pipeline processing element to attempt recovery from the error. | 05-15-2014 |
20140136895 | EXPOSED-PIPELINE PROCESSING ELEMENT WITH ROLLBACK - An aspect includes providing rollback support in an exposed-pipeline processing element. A system includes the exposed-pipeline processing element with rollback support logic. The rollback support logic is configured to detect an error associated with execution of an instruction in the exposed-pipeline processing element. The rollback support logic determines whether the exposed-pipeline processing element supports replay of the instruction for a predetermined number of cycles. Based on determining that the exposed-pipeline processing element supports replay of the instruction, a rollback action is performed in the exposed-pipeline processing element to attempt recovery from the error. | 05-15-2014 |
20140149464 | TREE TRAVERSAL IN A MEMORY DEVICE - Embodiments relate to tree traversal in a memory device. An aspect includes a method for tree traversal in a memory device. The method includes receiving a pointer to a tree structure within memory of the memory device. An evaluation condition is received to identify a desired node of the tree structure. The tree structure is traversed to identify the desired node. Data is returned from the desired node meeting the evaluation condition. | 05-29-2014 |
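The traversal-with-condition operation in this abstract can be sketched as follows; the nested-dict tree representation and depth-first order are illustrative assumptions.

```python
# Hypothetical sketch of in-memory tree traversal: walk the tree and
# return the data of the first node satisfying the evaluation condition.
# The nested-dict representation and DFS order are illustrative.

def traverse(tree, condition):
    """Depth-first search; returns the matching node's value, or None."""
    stack = [tree]
    while stack:
        node = stack.pop()
        if condition(node["value"]):
            return node["value"]
        # push children so the leftmost child is visited first
        stack.extend(reversed(node.get("children", [])))
    return None
```

Performing this inside the memory device returns only the matching data, rather than streaming every visited node out to the main processor.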
20140149673 | LOW LATENCY DATA EXCHANGE - According to one embodiment, a method for exchanging data in a system that includes a main processor in communication with an active memory device is provided. The method includes a processing element in the active memory device receiving an instruction from the main processor and receiving a store request from a thread running on the main processor, the store request specifying a memory address associated with the processing element. The method also includes storing a value provided in the store request in a queue in the processing element and the processing element performing the instruction using the value from the queue. | 05-29-2014 |
20140149680 | LOW LATENCY DATA EXCHANGE - According to one embodiment, a method for exchanging data in a system that includes a main processor in communication with an active memory device is provided. The method includes a processing element in the active memory device receiving an instruction from the main processor and receiving a store request from a thread running on the main processor, the store request specifying a memory address associated with the processing element. The method also includes storing a value provided in the store request in a queue in the processing element and the processing element performing the instruction using the value from the queue. | 05-29-2014 |
20140173224 | SEQUENTIAL LOCATION ACCESSES IN AN ACTIVE MEMORY DEVICE - Embodiments relate to sequential location accesses in an active memory device that includes memory and a processing element. An aspect includes a method for sequential location accesses that includes receiving from the memory a first group of data values associated with a queue entry at the processing element. A tag value associated with the queue entry and specifying a position from which to extract a first subset of the data values is read. The queue entry is populated with the first subset of the data values starting at the position specified by the tag value. The processing element determines whether a second subset of the data values in the first group of data values is associated with a subsequent queue entry, and populates a portion of the subsequent queue entry with the second subset of the data values. | 06-19-2014 |
20140195743 | ON-CHIP TRAFFIC PRIORITIZATION IN MEMORY - According to one embodiment, a method for traffic prioritization in a memory device includes sending a memory access request including a priority value from a processing element in the memory device to a crossbar interconnect in the memory device. The memory access request is routed through the crossbar interconnect to a memory controller in the memory device associated with the memory access request. The memory access request is received at the memory controller. The priority value of the memory access request is compared to priority values of a plurality of memory access requests stored in a queue of the memory controller to determine a highest priority memory access request. A next memory access request is performed by the memory controller based on the highest priority memory access request. | 07-10-2014 |
20140195744 | ON-CHIP TRAFFIC PRIORITIZATION IN MEMORY - According to one embodiment, a memory device is provided. The memory device includes a processing element coupled to a crossbar interconnect. The processing element is configured to send a memory access request, including a priority value, to the crossbar interconnect. The crossbar interconnect is configured to route the memory access request to a memory controller associated with the memory access request. The memory controller is coupled to memory and to the crossbar interconnect. The memory controller includes a queue and is configured to compare the priority value of the memory access request to priority values of a plurality of memory access requests stored in the queue of the memory controller to determine a highest priority memory access request and perform a next memory access request based on the highest priority memory access request. | 07-10-2014 |
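The priority comparison described in these two abstracts amounts to selecting the highest-priority request from the controller's queue. A minimal sketch, using a heap as one possible realization (the patent does not specify the queue's data structure):

```python
import heapq

class MemoryController:
    """Illustrative memory controller that compares priority values of
    queued memory access requests and services the highest one next."""

    def __init__(self):
        self._queue = []  # min-heap keyed on negated priority

    def enqueue(self, priority, request):
        heapq.heappush(self._queue, (-priority, request))

    def next_request(self):
        # The request whose priority value compares highest is performed next.
        _, request = heapq.heappop(self._queue)
        return request

mc = MemoryController()
mc.enqueue(1, "read A")
mc.enqueue(5, "write B")
mc.enqueue(3, "read C")
```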
20140281084 | LOCAL BYPASS FOR IN MEMORY COMPUTING - Embodiments include a method for bypassing data in an active memory device. The method includes a requestor determining a number of transfers to a grantor that have not been communicated to the grantor, requesting to the interconnect network that the bypass path be used for the transfers based on the number of transfers meeting a threshold and communicating the transfers via the bypass path to the grantor based on the request, the interconnect network granting control of the grantor in response to the request. The method also includes the interconnect network requesting control of the grantor based on an event and communicating delayed transfers via the interconnect network from other requestors, the delayed transfers being delayed due to the grantor being previously controlled by the requestor, the communicating based on the control of the grantor being changed back to the interconnect network. | 09-18-2014 |
20140281100 | LOCAL BYPASS FOR IN MEMORY COMPUTING - Embodiments include a method for bypassing data in an active memory device. The method includes a requestor determining a number of transfers to a grantor that have not been communicated to the grantor, requesting to the interconnect network that the bypass path be used for the transfers based on the number of transfers meeting a threshold and communicating the transfers via the bypass path to the grantor based on the request, the interconnect network granting control of the grantor in response to the request. The method also includes the interconnect network requesting control of the grantor based on an event and communicating delayed transfers via the interconnect network from other requestors, the delayed transfers being delayed due to the grantor being previously controlled by the requestor, the communicating based on the control of the grantor being changed back to the interconnect network. | 09-18-2014 |
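The bypass decision in these two filings hinges on a pending-transfer count meeting a threshold. A hedged sketch of just that routing decision (the threshold value, names, and string return values are illustrative; the grantor-control handback is omitted):

```python
class Requestor:
    """Illustrative requestor that tracks transfers not yet communicated
    to a grantor and requests the bypass path once their count meets a
    threshold; below the threshold, transfers use the interconnect."""

    def __init__(self, threshold=4):
        self.threshold = threshold
        self.pending = 0

    def add_transfer(self):
        self.pending += 1

    def route(self):
        # Take the direct bypass path only when enough transfers have
        # accumulated to justify taking control of the grantor.
        return "bypass" if self.pending >= self.threshold else "interconnect"
```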
20140281386 | CHAINING BETWEEN EXPOSED VECTOR PIPELINES - Embodiments include a method for chaining data in an exposed-pipeline processing element. The method includes separating a multiple instruction word into a first sub-instruction and a second sub-instruction, receiving the first sub-instruction and the second sub-instruction in the exposed-pipeline processing element. The method also includes issuing the first sub-instruction at a first time, issuing the second sub-instruction at a second time different than the first time, the second time being offset to account for a dependency of the second sub-instruction on a first result from the first sub-instruction, the first pipeline performing the first sub-instruction at a first clock cycle and communicating the first result from performing the first sub-instruction to a chaining bus coupled to the first pipeline and a second pipeline, the communicating at a second clock cycle subsequent to the first clock cycle that corresponds to a total number of latch pipeline stages in the first pipeline. | 09-18-2014 |
20140281403 | CHAINING BETWEEN EXPOSED VECTOR PIPELINES - Embodiments include a method for chaining data in an exposed-pipeline processing element. The method includes separating a multiple instruction word into a first sub-instruction and a second sub-instruction, receiving the first sub-instruction and the second sub-instruction in the exposed-pipeline processing element. The method also includes issuing the first sub-instruction at a first time, issuing the second sub-instruction at a second time different than the first time, the second time being offset to account for a dependency of the second sub-instruction on a first result from the first sub-instruction, the first pipeline performing the first sub-instruction at a first clock cycle and communicating the first result from performing the first sub-instruction to a chaining bus coupled to the first pipeline and a second pipeline, the communicating at a second clock cycle subsequent to the first clock cycle that corresponds to a total number of latch pipeline stages in the first pipeline. | 09-18-2014 |
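The issue-time offset these abstracts describe can be reduced to a small timing calculation: the first sub-instruction's result reaches the chaining bus after the full latch-stage depth of the first pipeline, and the dependent second sub-instruction is issued with a matching offset. The exact offset rule below is an assumption for illustration.

```python
def issue_times(first_issue_cycle, pipeline_stages):
    # The first result lands on the chaining bus a number of cycles after
    # issue equal to the first pipeline's total latch-stage count.
    result_on_bus = first_issue_cycle + pipeline_stages
    # Issue the dependent second sub-instruction when its input is on the
    # bus (illustrative assumption; real designs may add bus latency).
    second_issue = result_on_bus
    return result_on_bus, second_issue
```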
20140281605 | POWER MANAGEMENT FOR A COMPUTER SYSTEM - Embodiments include a method for managing power in a computer system including a main processor and an active memory device including powered units, the active memory device in communication with the main processor by a memory link, the powered units including a processing element. The method includes the main processor executing a program on a program thread, encountering a first section of code to be executed by the active memory device, changing, by a first command, a power state of a powered unit on the active memory device based on the main processor encountering the first section of code, the first command including a store command. The method also includes the processing element executing the first section of code at a second time, changing a power state of the main processor from a power use state to a power saving state based on the processing element executing the first section. | 09-18-2014 |
20140281629 | POWER MANAGEMENT FOR A COMPUTER SYSTEM - Embodiments include a method for managing power in a computer system including a main processor and an active memory device including powered units, the active memory device in communication with the main processor by a memory link, the powered units including a processing element. The method includes the main processor executing a program on a program thread, encountering a first section of code to be executed by the active memory device, changing, by a first command, a power state of a powered unit on the active memory device based on the main processor encountering the first section of code, the first command including a store command. The method also includes the processing element executing the first section of code at a second time, changing a power state of the main processor from a power use state to a power saving state based on the processing element executing the first section. | 09-18-2014 |
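The power handoff in these two filings can be sketched as a sequence of state changes: a store command wakes the powered unit before the offloaded section runs, and the main processor enters a power-saving state while the processing element executes it. The state names, class structure, and modeling of the section as a callable are all illustrative assumptions.

```python
class PoweredUnit:
    def __init__(self):
        self.state = "saving"

class System:
    """Illustrative power-state handoff between a main processor and a
    powered unit on an active memory device (names are hypothetical)."""

    def __init__(self):
        self.unit = PoweredUnit()
        self.cpu_state = "use"

    def offload(self, section):
        self.unit.state = "use"      # first command (a store) wakes the unit
        self.cpu_state = "saving"    # CPU saves power while the PE executes
        result = section()           # processing element runs the code section
        self.cpu_state = "use"
        self.unit.state = "saving"
        return result
```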
20150032968 | IMPLEMENTING SELECTIVE CACHE INJECTION - A method, system and memory controller for implementing memory hierarchy placement decisions in a memory system including direct routing of arriving data into a main memory system and selective injection of the data or computed results into a processor cache in a computer system. A memory controller, or a processing element in a memory system, selectively drives placement of data into other levels of the memory hierarchy. The decision to inject into the hierarchy can be triggered by the arrival of data from an input output (IO) device, from computation, or from a directive of an in-memory processing element. | 01-29-2015 |
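The selective placement decision in this abstract could be sketched as a simple policy: data always lands in main memory, and is additionally injected into a processor cache only when its source triggers the decision. The trigger set, dictionary model, and function signature below are assumptions for illustration.

```python
def place(data, source, cache, memory,
          inject_sources=frozenset({"io", "compute", "pe_directive"})):
    """Illustrative placement policy: route arriving data directly into
    main memory, and selectively inject it into the processor cache when
    it arrives from an IO device, a computation, or an in-memory
    processing element directive (the trigger set is hypothetical)."""
    memory[data["addr"]] = data["value"]
    if source in inject_sources:
        cache[data["addr"]] = data["value"]

cache, memory = {}, {}
place({"addr": 0x40, "value": 9}, "io", cache, memory)       # injected
place({"addr": 0x44, "value": 8}, "prefetch", cache, memory) # memory only
```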