Patent application number | Description | Published |
20090172487 | Multiple pBIST Controllers - A system on a single integrated circuit chip (SoC) includes a plurality of operational circuits to be tested. A plurality of programmable built-in self-test (pBIST) controllers is connected to respective ones of the plurality of operational circuits in a manner that allows the pBIST controllers to test the respective operational circuits in parallel. An interface is connected to each of the plurality of pBIST controllers for connection to an external tester to facilitate programming of each of the plurality of pBIST controllers by the external tester, such that the plurality of pBIST controllers are operable to test the plurality of operational circuits in parallel and report the results of the parallel tests to the external tester, thereby reducing test time. | 07-02-2009 |
20120079102 | Requester Based Transaction Status Reporting in a System with Multi-Level Memory - A system has memory resources accessible by a central processing unit (CPU). One or more transaction requests are initiated by the CPU for access to one or more of the memory resources. Initiation of transaction requests is ceased for a period of time. The memory resources are monitored to determine when all of the transaction requests initiated by the CPU have been completed. An idle signal accessible by the CPU is provided that is asserted when all of the transaction requests initiated by the CPU have been completed. | 03-29-2012 |
20120079155 | Interleaved Memory Access from Multiple Requesters - A shared memory system having multiple banks is coupled to a set of requesters. Separate arbitration and control logic is provided for each bank, such that each bank can be accessed individually. The separate arbitration logics individually arbitrate transaction requests targeted to each bank of the memory. Access is granted to each bank on each access cycle to a highest priority request for each bank, such that more than one transaction request may be granted access to the memory on a same access cycle. A wide transaction request that has a transaction width that is wider than a width of one bank is divided into a plurality of divided requests. | 03-29-2012 |
20120079203 | Transaction Info Bypass for Nodes Coupled to an Interconnect Fabric - A shared resource within a module may be accessed by a request from an external requester. An external transaction request may be received from an external requester outside the module for access to the shared resource that includes control information, not all of which is needed to access the shared resource. The external transaction request may be modified to form a modified request by removing a portion of the locally unneeded control information and storing the unneeded portion of control information as an entry in a bypass buffer. A reply received from the shared resource may be modified by appending the stored portion of control information from the entry in the bypass buffer before sending the modified reply to the external requester. | 03-29-2012 |
20120079204 | Cache with Multiple Access Pipelines - Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags. | 03-29-2012 |
20120191913 | Distributed User Controlled Multilevel Block and Global Cache Coherence with Accurate Completion Status - This invention permits user controlled cache coherence operations with the flexibility to do these operations on all levels of cache together or each level independently. In the case of an all level operation, the user does not have to monitor and sequence each phase of the operation. This invention also provides a way for users to track completion of these operations. This is critical for multi-core/multi-processor devices. Multiple cores may be accessing the end point and the user/application needs to be able to identify when the operation from one core is complete, before permitting other cores to access that data or code. | 07-26-2012 |
20120191914 | PERFORMANCE AND POWER IMPROVEMENT ON DMA WRITES TO LEVEL TWO COMBINED CACHE/SRAM THAT IS CACHED IN LEVEL ONE DATA CACHE AND LINE IS VALID AND DIRTY - This invention optimizes DMA writes to directly addressable level two memory that is cached in level one and the line is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other master/requestors. | 07-26-2012 |
20120198161 | NON-BLOCKING, PIPELINED WRITE ALLOCATES WITH ALLOCATE DATA MERGING IN A MULTI-LEVEL CACHE SYSTEM - This invention handles write request cache misses. The cache controller stores write data, sends a read request to external memory for a corresponding cache line, merges the write data with data returned from the external memory and stores merged data in the cache. The cache controller includes buffers with plural entries storing the write address, the write data, the position of the write data within a cache line and a unique identification number. This stored data enables the cache controller to proceed to service other access requests while waiting for a response from the external memory. | 08-02-2012 |
20120198162 | Hazard Prevention for Data Conflicts Between Level One Data Cache Line Allocates and Snoop Writes - A comparator compares the address of DMA writes in the final entry of the FIFO stack to all pending read addresses in a monitor memory. If there is no match, then the DMA access is permitted to proceed. If the DMA write is to a cache line with a pending read, the DMA write access is stalled together with any DMA accesses behind the DMA write in the FIFO stack. DMA read accesses are not compared but may stall behind a stalled DMA write access. These stalls occur if the cache read was potentially cacheable. This is possible for some monitored accesses but not all. If a DMA write is stalled, the comparator releases it to complete once there are no pending reads to the same cache line. | 08-02-2012 |
20120198163 | Level One Data Cache Line Lock and Enhanced Snoop Protocol During Cache Victims and Writebacks to Maintain Level One Data Cache and Level Two Cache Coherence - This invention assures cache coherence in a multi-level cache system upon eviction of a higher level cache line. A victim buffer stores data from evicted lines. On a DMA access that may be cached in the higher level cache, the lower level cache sends a snoop write. The address of this snoop write is compared with the victim buffer. On a hit in the victim buffer, the write completes in the victim buffer. When the victim data passes to the next cache level it is written into a second victim buffer to be retired when the data is committed to cache. DMA write addresses are compared to addresses in this second victim buffer. On a match, the write takes place in the second victim buffer. On a failure to match, the controller sends a snoop write. | 08-02-2012 |
20120198164 | Programmable Address-Based Write-Through Cache Control - This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory mapped control register writable by the central processing unit. | 08-02-2012 |
20120198165 | Mechanism to Update the Status of In-Flight Cache Coherence In a Multi-Level Cache Hierarchy - Separate buffers store snoop writes and direct memory access writes. A multiplexer selects one of these for input to a FIFO buffer. The FIFO buffer is split into multiple FIFOs including: a command FIFO; an address FIFO; and a write data FIFO. Each snoop command is compared with an allocated line set and way and deleted on a match to avoid data corruption. Each snoop command is also compared with a victim address. If the snoop address matches a victim address, logic redirects the snoop command to a victim buffer and the snoop write is completed in the victim buffer. | 08-02-2012 |
20120198166 | Memory Attribute Sharing Between Differing Cache Levels of Multilevel Cache - The level one memory controller maintains a local copy of the cacheability bit of each memory attribute register. The level two memory controller is the initiator of all configuration read/write requests from the CPU. Whenever a configuration write is made to a memory attribute register, the level one memory controller updates its local copy of the memory attribute register. | 08-02-2012 |
20120198272 | Priority Based Exception Mechanism for Multi-Level Cache Controller - This invention is an exception priority arbitration unit which prioritizes memory access permission fault and data exception signals according to a fixed hierarchy if received during a same cycle. A CPU memory access permission fault is prioritized above a direct memory access (DMA) permission fault. Any memory access permission fault is prioritized above a data exception signal. A non-correctable data exception signal is prioritized above a correctable data exception signal. | 08-02-2012 |
20120198310 | CONFIGURABLE SOURCE BASED/REQUESTOR BASED ERROR DETECTION AND CORRECTION FOR SOFT ERRORS IN MULTI-LEVEL CACHE MEMORY TO MINIMIZE CPU INTERRUPT SERVICE ROUTINES - This invention is a memory system with parity generation which selectively forms and stores parity bits of corresponding plural data sources. The parity generation and storage depends upon the state of a global suspend bit and a global enable bit, and parity detection/correction corresponding to each data source. | 08-02-2012 |
20120260031 | ENHANCED PIPELINING AND MULTI-BUFFER ARCHITECTURE FOR LEVEL TWO CACHE CONTROLLER TO MINIMIZE HAZARD STALLS AND OPTIMIZE PERFORMANCE - This invention is a data processing system including a central processing unit, an external interface, a level one cache, level two memory including level two unified cache and directly addressable memory. A level two memory controller includes a directly addressable memory read pipeline, a central processing unit write pipeline, an external cacheable pipeline and an external non-cacheable pipeline. | 10-11-2012 |
20120290755 | Lookahead Priority Collection to Support Priority Elevation - A queuing requester for access to a memory system. Transaction requests for access to the memory system are received from two or more requestors. Each transaction request includes an associated priority value. A request queue is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of an oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue with the selected priority value is then provided to the memory system. An arbitration contest with other requesters for access to the memory system uses the selected priority value. | 11-15-2012 |
20120290756 | Managing Bandwidth Allocation in a Processing Node Using Distributed Arbitration - Management of access to shared resources within a system comprising a plurality of requesters and a plurality of target resources is provided. A separate arbitration point is associated with each target resource. An access priority value is assigned to each requester. An arbitration contest is performed for access to a first target resource by requests from two or more of the requesters using a first arbitration point associated with the first target resource to determine a winning requester. The request from the winning requester is forwarded to a second target resource. A second arbitration contest is performed for access to the second target resource by the forwarded request from the winning requester and requests from one or more of the plurality of requesters using a second arbitration point associated with the second target resource. | 11-15-2012 |
20120314833 | Integer and Half Clock Step Division Digital Variable Clock Divider - A clock divider divides a high speed input clock signal by an odd, even or fractional divide ratio. The clock divider receives a divide factor value F representative of a divide ratio N, wherein N may be an odd or an even integer. A fractional indicator indicates a fractional divide ratio when set to one and an integral divide ratio when set to zero. A count indicator is asserted every N/2 input clock cycles when N is even. The count indicator is asserted alternately N/2 input clock cycles and then 1+N/2 input clock cycles when N is odd. The clock divider synthesizes one period of an output clock signal in response to each assertion of the count indicator for a fractional divide ratio and synthesizes one period of the output clock signal in response to two assertions of the count indicator for an integral divide ratio. | 12-13-2012 |
20120324174 | Multi-Port Register File with an Input Pipelined Architecture and Asynchronous Read Data Forwarding - In an embodiment of the invention, a multi-port register file includes write port inputs (e.g. write address, write enable, data input) that are pipelined and synchronous and read port inputs (e.g. read address) that are asynchronous and are not pipelined. Because the write port inputs are pipelined, they are stored in pipelined registers. When data is written to the multi-port register file, data is first written to the pipelined registers during a first clock cycle. On the next clock cycle, data is read from the pipelined registers and written into memory array registers. When the read address is identical to the write address stored in the pipelined registers, the result of a bit-wise ANDing of data stored in pipelined synchronous data registers and data stored in pipelined synchronous bit-write registers is presented at the output of the multi-port register file. | 12-20-2012 |
20120324175 | Multi-Port Register File with an Input Pipelined Architecture with Asynchronous Reads and Localized Feedback - In an embodiment of the invention, a multi-port register file includes write port inputs (e.g. write address, write enable, data input) that are pipelined and synchronous and read port inputs (e.g. read address) that are asynchronous and are not pipelined. Because the write port inputs are pipelined, they are stored in pipelined registers. When data is written to the multi-port register file, data is first written to the pipelined registers during a first clock cycle. On the next clock cycle, data is read from the pipelined registers and written into memory array registers. Which bits of data from a pipelined synchronous data register are written into the multi-port register file is determined by a pipelined synchronous bit-write register. The output of the pipelined synchronous bit-write register selects which inputs of multiplexers contained in registers in the multi-port register file are stored. | 12-20-2012 |
20130021858 | Process Variability Tolerant Programmable Memory Controller for a Pipelined Memory System - In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. Based partially on the read access time information of a memory bank, the memory control circuit is configured to select the number of clock cycles used during read latency. | 01-24-2013 |
20130275822 | At Speed Testing of High Performance Memories with a Multi-Port BIST Engine - A programmable Built In Self Test (BIST) system used to test embedded memories where the memories may be operating at a clock frequency higher than the operating frequency of the BIST. A plurality of BIST memory ports are used to generate multiple memory test instructions in parallel, and the parallel instructions are then merged to generate a single memory test instruction stream at a speed that is a multiple of the BIST operating frequency. | 10-17-2013 |
20140108737 | ZERO CYCLE CLOCK INVALIDATE OPERATION - A method to eliminate the delay of a block invalidate operation in a multi CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation will be treated as a cache miss to ensure that the requesting CPU will receive valid data. | 04-17-2014 |
20140122810 | PARALLEL PROCESSING OF MULTIPLE BLOCK COHERENCE OPERATIONS - A method to eliminate the delay of multiple overlapping block invalidate operations in a multi CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. The cache controller performing the block invalidate operation merges multiple overlapping requests into a parallel stream to eliminate execution delays. Cache operations other than block invalidate, such as block write back or block write back invalidate, may also be merged into the execution stream. | 05-01-2014 |
20140164844 | pBIST ENGINE WITH DISTRIBUTED DATA LOGGING - A programmable Built In Self Test (pBIST) system used to test embedded memories where the memories under test are incorporated in a plurality of sub chips not integrated with the pBIST module. A distributed Data Logger is incorporated into each sub chip, communicating with the pBIST over serial and compressed parallel data paths. | 06-12-2014 |
20140164854 | pBIST ARCHITECTURE WITH MULTIPLE ASYNCHRONOUS SUB CHIPS OPERATING IN DIFFERING VOLTAGE DOMAINS - A programmable Built In Self Test (pBIST) system used to test embedded memories where the memories may be operating at a voltage domain different from the voltage domain of the pBIST. A plurality of buffer and synchronizing registers are used to avoid metastable conditions caused by the time delays introduced by the voltage shifters required to bridge the various voltage domains. | 06-12-2014 |
20140164855 | pBIST READ ONLY MEMORY IMAGE COMPRESSION - A programmable Built In Self Test (pBIST) system used to test embedded memories where a plurality of memories requiring different testing conditions are incorporated in an SOC. The pBIST Read Only Memory storing the test setup data is organized to eliminate multiple instances of test setup data for similar embedded memories. | 06-12-2014 |
20140164856 | pBIST ENGINE WITH REDUCED SRAM TESTING BUS WIDTH - A programmable Built In Self Test (pBIST) system used to test embedded memories where the memories under test are incorporated in a plurality of sub chips not integrated with the pBIST module. Test data comparison is performed in a distributed data logging architecture to minimize the number of interconnections between the distributed data loggers and the pBIST. | 06-12-2014 |
20160034396 | Programmable Address-Based Write-Through Cache Control - This invention is a cache system with a memory attribute register having plural entries. Each entry stores a write-through or a write-back indication for a corresponding memory address range. On a write to cached data, the cache consults the memory attribute register for the corresponding address range. Writes to addresses in regions marked as write-through always update all levels of the memory hierarchy. Writes to addresses in regions marked as write-back update only the first cache level that can service the write. The memory attribute register is preferably a memory mapped control register writable by the central processing unit. | 02-04-2016 |
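The priority-elevation scheme in entry 20120290755 above (select the highest priority among all pending requests, elevate the oldest request to it if a newer request outranks it, and issue the oldest request with the resulting priority) can be modeled in a few lines. This is a hypothetical sketch, not the patented implementation: the class and method names are ours, and "lower number = higher priority" is a modeling convention the abstract does not fix.

```python
from collections import deque


class ElevatingQueue:
    """Toy model of lookahead priority elevation.

    Convention (ours): a lower number means a higher priority.
    """

    def __init__(self):
        self.pending = deque()  # (priority, request) pairs, oldest on the left

    def enqueue(self, priority, request):
        self.pending.append((priority, request))

    def issue(self):
        """Issue the oldest request, elevated to the best priority now pending."""
        if not self.pending:
            return None
        best = min(p for p, _ in self.pending)        # highest priority among all pending
        oldest_priority, request = self.pending.popleft()
        # Elevate the oldest request if any newer request outranks it;
        # otherwise it keeps its own priority.
        return min(best, oldest_priority), request
```

The effect is that a high-priority request arriving behind a low-priority one raises the priority used by the head-of-line request in its arbitration contest, so the newer request is not stuck behind a low-priority entry while still preserving in-order issue.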
Patent application number | Description | Published |
20130243148 | Integer and Half Clock Step Division Digital Variable Clock Divider - A clock divider is provided that is configured to divide a high speed input clock signal by an odd, even or fractional divide ratio. The input clock may have a clock cycle frequency of 1 GHz or higher, for example. The input clock signal is divided to produce an output clock signal by first receiving a divide factor value F representative of a divide ratio N, wherein N may be an odd or an even integer. A fractional indicator indicates the divide ratio is N.5 when the fractional indicator is one and indicates the divide ratio is N when the fractional indicator is zero. F is set to 2(N.5)/2 for a fractional divide ratio and F is set to N/2 for an integer divide ratio. A count indicator is asserted every N/2 input clock cycles when N is even. The count indicator is asserted alternately N/2 input clock cycles and then 1+N/2 input clock cycles when N is odd. One period of an output clock signal is synthesized in response to each assertion of the count indicator when the fractional indicator indicates the divide ratio is N.5. One period of the output clock signal is synthesized in response to two assertions of the count indicator when the fractional indicator indicates the divide ratio is an integer. | 09-19-2013 |
20130283002 | Process Variability Tolerant Programmable Memory Controller for a Pipelined Memory System - In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. Based partially on the read access time information of a memory bank, the memory control circuit is configured to select the number of clock cycles used during read latency. | 10-24-2013 |
20150178221 | Level One Data Cache Line Lock and Enhanced Snoop Protocol During Cache Victims and Writebacks to Maintain Level One Data Cache and Level Two Cache Coherence - This invention assures cache coherence in a multi-level cache system upon eviction of a higher level cache line. A victim buffer stores data from evicted lines. On a DMA access that may be cached in the higher level cache, the lower level cache sends a snoop write. The address of this snoop write is compared with the victim buffer. On a hit in the victim buffer, the write completes in the victim buffer. When the victim data passes to the next cache level it is written into a second victim buffer to be retired when the data is committed to cache. DMA write addresses are compared to addresses in this second victim buffer. On a match, the write takes place in the second victim buffer. On a failure to match, the controller sends a snoop write. | 06-25-2015 |
20150268958 | SPECULATIVE HISTORY FORWARDING IN OVERRIDING BRANCH PREDICTORS, AND RELATED CIRCUITS, METHODS, AND COMPUTER-READABLE MEDIA - Speculative history forwarding in overriding branch predictors, and related circuits, methods, and computer-readable media are disclosed. In one embodiment, a branch prediction circuit including a first branch predictor and a second branch predictor is provided. The first branch predictor generates a first branch prediction for a conditional branch instruction, and the first branch prediction is stored in a first branch prediction history. The first branch prediction is also speculatively forwarded to a second branch prediction history. The second branch predictor subsequently generates a second branch prediction based on the second branch prediction history, including the speculatively forwarded first branch prediction. By enabling the second branch predictor to base its branch prediction on the speculatively forwarded first branch prediction, an accuracy of the second branch predictor may be improved. | 09-24-2015 |
20150269090 | PERFORMANCE AND POWER IMPROVEMENT ON DMA WRITES TO LEVEL TWO COMBINED CACHE/SRAM THAT IS CACHED IN LEVEL ONE DATA CACHE AND LINE IS VALID AND DIRTY - This invention optimizes DMA writes to directly addressable level two memory that is cached in level one and the line is valid and dirty. When the level two controller detects that a line is valid and dirty in level one, the level two memory need not update its copy of the data. Level one memory will replace the level two copy with a victim writeback at a future time. Thus the level two memory need not store a copy. This limits the number of DMA writes to level two directly addressable memory and thus improves performance and minimizes dynamic power. This also frees the level two memory for other master/requestors. | 09-24-2015 |
20160026569 | ZERO CYCLE CLOCK INVALIDATE OPERATION - A method to eliminate the delay of a block invalidate operation in a multi CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation will be treated as a cache miss to ensure that the requesting CPU will receive valid data. | 01-28-2016 |
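The count-indicator timing in the clock-divider entries above (20120314833 and 20130243148) follows a simple spacing rule: assert every N/2 input cycles when N is even, and alternate N/2 and 1+N/2 cycles when N is odd. A minimal sketch of that schedule, with a function name of our own choosing and covering only the spacing rule stated in the abstracts:

```python
def count_indicator_gaps(n, count):
    """Return the input-clock-cycle spacing between the first `count`
    assertions of the count indicator for divide ratio n."""
    gaps = []
    for i in range(count):
        if n % 2 == 0:
            gaps.append(n // 2)                                 # even N: every N/2 cycles
        else:
            gaps.append(n // 2 if i % 2 == 0 else n // 2 + 1)   # odd N: alternate N/2, 1+N/2
    return gaps
```

Note that two consecutive assertions always span exactly N input cycles (for odd N, N/2 + (1+N/2) = N), which is why synthesizing one output period per two assertions yields an integer divide-by-N, while one period per assertion yields the half-step (N.5-style) fractional ratios.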
Patent application number | Description | Published |
20140181293 | METHODS AND APPARATUS FOR DETERMINING A MAXIMUM AMOUNT OF UNACCOUNTED-FOR DATA TO BE TRANSMITTED BY A DEVICE - A method includes determining a maximum amount of unaccounted-for data to be transmitted by a particular client device associated with an access point. The maximum amount of unaccounted-for data may be based on characteristics associated with data received at an access point from one or more client devices. The maximum amount of unaccounted-for data may be based on a total number of client devices associated with an access point. The maximum amount of unaccounted-for data may be based on resources available at the access point. | 06-26-2014 |
20140269752 | APPARATUS AND METHOD FOR AGGREGATION AT ONE OR MORE LAYERS - A method for performing aggregation at one or more layers starts with an AP placing, at a first layer, one or more received frames in a queue at the AP. When a transmit scheduler is ready to transmit an aggregated frame corresponding to the queue, the AP may iteratively select a plurality of frames selected from the one or more received frames, and aggregate at the first layer the plurality of frames into the aggregated frame. The number of frames included in an aggregated frame may be based on at least one of: a dynamically updated rate of transmission associated with a size of the frames, a class of the frames, a transmission opportunity value associated with the class of the frames and a total projected airtime for transmitting the aggregated frame. Other embodiments are also described. | 09-18-2014 |
20140369210 | METHOD AND APPARATUS TO CONTROL TX/RX AMSDU SIZE BASED ON THE NEGOTIATED MAXIMUM TRANSMISSION UNIT IN THE TUNNEL BETWEEN A CONTROLLER AND AN ACCESS POINT - A method includes selecting upstream and downstream aggregated MAC service data unit (AMSDU) sizes for communications between a client device, an access point, and a controller in a network system. The upstream and downstream AMSDU sizes may be separately selected based on the maximum transmission unit (MTU) size for communications in a secure tunnel between the controller and the access point to avoid fragmentation and reassembly of AMSDUs transmitted in a single MTU. The upstream and downstream AMSDU sizes may be selected to be less than or equal to the MTU size. | 12-18-2014 |
20150117235 | Enhanced Dynamic Multicast Optimization - The present disclosure discloses a system and method for enhanced dynamic multicast optimization based on network condition measurement. The system includes a processor and a memory storing instructions that, when executed, cause the system to: measure a network condition for a multicast group using one or more metrics; determine whether to convert all stations in the multicast group to unicast based on the network condition; and responsive to determining not to convert all stations in the multicast group to unicast, determine, based on the network condition, a subset of the multicast group for converting the subset of the multicast group to unicast, wherein the subset includes less than all stations in the multicast group. | 04-30-2015 |
20150312910 | Dynamic Channel Bandwidth Selection Based on Information for Packets transmitted at Different Channel Bandwidths - The present disclosure discloses a method and network device for dynamic channel bandwidth selection in a wireless local area network. Specifically, a network device obtains information corresponding to a first set of packets transmitted on one or more of a plurality of channel bandwidths over a first period of time. Based on the information, the network device selects a particular channel bandwidth, of the plurality of channel bandwidths, for transmitting a second set of packets; and transmits the second set of packets at the particular channel bandwidth. Additionally, based on the information, the network device can dynamically select a number of packets, from a second set of packets, to queue at hardware components with channel bandwidth selection for transmission by the hardware components; and can queue the selected number of packets at the hardware components with channel bandwidth selection. | 10-29-2015 |
20150318878 | AVOIDING SELF INTERFERENCE USING CHANNEL STATE INFORMATION FEEDBACK - Disclosed herein is a system, apparatus, and method for reducing self-interference within a wireless network device using channel state information feedback and beamforming techniques. The self-interference within a device may be reduced by first transmitting, by a first circuitry, a first set of signals using a first radiation pattern through a first set of antennas coupled with the first circuitry. Then, based on feedback information associated with the first set of signals detected by a second circuitry of the device, a second radiation pattern to be used by the first circuitry and the first set of antennas that reduces receipt of signals by the second circuitry that are transmitted by the first set of antennas or leaked from the first circuitry may be determined. Thereafter, a second set of signals may be transmitted by the first set of antennas using the second radiation pattern. | 11-05-2015 |
20160036683 | SYNTHETIC CLIENT - A system with a device including a hardware processor is configured to perform operations including: receiving, by the device, a message over a wired medium, wherein the message has a frame including (a) a first MAC address as a source MAC address for the frame and (b) a second MAC address as a destination MAC address for the frame, extracting, by the device, the frame from the message received over the wired medium, and wirelessly transmitting, by the device, the frame without modifying the source MAC address and without modifying the destination MAC address. | 02-04-2016 |
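The MTU-bounded AMSDU sizing described in entry 20140369210 above reduces to a clamp: the aggregate must fit in one tunnel packet after encapsulation, so it never triggers fragmentation and reassembly inside the controller-AP tunnel. A minimal sketch under that reading; the function and parameter names (including the explicit `tunnel_overhead` term) are ours, not the patent's:

```python
def select_amsdu_size(desired_amsdu, tunnel_mtu, tunnel_overhead):
    """Clamp the negotiated AMSDU size so one aggregated frame always fits
    inside a single tunnel MTU after encapsulation overhead is added.

    Hypothetical model: sizes are in bytes, and tunnel_overhead is the
    per-packet cost of the secure-tunnel headers.
    """
    budget = tunnel_mtu - tunnel_overhead  # payload room left in one tunnel packet
    return min(desired_amsdu, budget)
```

Upstream and downstream directions can simply call this separately with their own negotiated values, which matches the abstract's point that the two sizes are selected independently against the same tunnel MTU.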
20090311407 | PRODUCTION OF PROTEIN-POLYSACCHARIDE CONJUGATES - The present invention provides novel compositions and methods for producing protein-polysaccharide conjugates in aqueous solutions. Also provided are methods for limiting the Maillard reaction to its very initial stage, the formation of the Schiff base. Further provided are methods to obtain a simple, white-colored Schiff base product, and compositions obtained using the methods of the present invention. | 12-17-2009 |
20100028525 | Low Fat, Clear, Bland Flavored Whey Products - Novel methods for producing whey protein concentrates with favorable properties are provided. The methods include pre-treating whey by selective chitosan precipitation, followed by microfiltration using polymeric membranes. The products obtained using these methods include WPC80 with low fat content, high clarity, low browning potential during storage, and low levels of volatiles. Compositions and foaming agents obtained using the methods of the present invention are also provided. | 02-04-2010 |
20100086657 | METHOD TO SEPARATE LIPIDS FROM CHEESE WHEY AND FAT-FREE WHEY PROTEIN PRODUCT FORMED THEREBY - Disclosed is a method of selectively separating milk fat globule membrane fragments and milk fat globules from whey. The method includes the steps of adding to whey an amount of a whey-soluble zinc salt and adjusting the pH of the whey to be less than 6.0. The amount of zinc salt added to the whey is sufficient to cause milk fat globule membrane fragments and milk fat globules contained in the whey to precipitate selectively from the whey. | 04-08-2010 |
20100151096 | INHIBITION OF ICE CRYSTAL GROWTH - Antifreeze polypeptides, antifreeze compositions including the polypeptides, nucleotides encoding the antifreeze polypeptides, methods of making antifreeze compositions, and methods of inhibiting ice crystal growth are provided herein. The peptides are based on the primary sequence of collagen and include those having a molecular weight between about 500-7000 Da. The peptides preferably include cationic polypeptides. The methods of making antifreeze compositions include digesting collagen or gelatin into hydrolysates with peptides having molecular weights between about 500-7000 Da. The digestions are performed with proteases and/or non-enzymatic hydrolysis. The methods of inhibiting ice crystal growth include adding the antifreeze polypeptides or compositions described herein to a composition to be frozen. The methods may be used to inhibit ice crystal growth in frozen food products. | 06-17-2010 |
20110045128 | PROCESS FOR REMOVING PHOSPHOLIPIDS AND OFF-FLAVORS FROM PROTEINS AND RESULTING PROTEIN PRODUCT - Described are methods of removing phospholipids and other off-flavor-causing compounds from edible proteins using a cyclodextrin treatment. The methods include treating soy protein with cyclodextrins such as β-cyclodextrin to form cyclodextrin-compound complexes and then separating the resulting complexes from the protein. Optionally, prior to treating the protein with cyclodextrin, the protein is sonicated and then treated with a phospholipase, such as phospholipase A | 02-24-2011 |
20130129889 | METHOD TO SEPARATE LIPIDS FROM CHEESE WHEY AND FAT-FREE WHEY PROTEIN PRODUCT FORMED THEREBY - Disclosed is a method of selectively separating milk fat globule membrane fragments and milk fat globules from whey. The method includes the steps of adding to whey an amount of a whey-soluble zinc salt and adjusting the pH of the whey to be less than 6.0. The amount of zinc salt added to the whey is sufficient to cause milk fat globule membrane fragments and milk fat globules contained in the whey to precipitate selectively from the whey. | 05-23-2013 |
20130316052 | METHOD TO SEPARATE LIPIDS FROM CHEESE WHEY AND FAT-FREE WHEY PROTEIN PRODUCT FORMED THEREBY - Disclosed is a method of selectively separating milk fat globule membrane fragments and milk fat globules from whey. The method includes the steps of adding to whey an amount of a whey-soluble zinc salt and adjusting the pH of the whey to be less than 6.0. The amount of zinc salt added to the whey is sufficient to cause milk fat globule membrane fragments and milk fat globules contained in the whey to precipitate selectively from the whey. | 11-28-2013 |
20140352574 | LEGUME AND/OR OIL SEED FLOUR-BASED ADHESIVE COMPOSITION - Adhesives made from phosphorylated legume and oil seed flours are described. The adhesive composition includes water and a legume and/or oil seed flour in which at least a portion of ε-amino moieties in lysine residues present in the flour are phosphorylated. An oxidizing agent may also optionally be added to the adhesive composition. | 12-04-2014 |