Class 708203000 (Compression/decompression): 62 patent applications
Patent application number | Description | Date published |
20080215650 | Efficient multiple input multiple output signal processing method and apparatus - A method and apparatus are disclosed for use with multiple input, multiple output (MIMO) signal processing techniques, which reduce the amount of memory and memory bandwidth used to store and access filter coefficients by compressing a filter coefficient based at least in part on one or more neighboring filter coefficients for storage and decompressing the filter coefficients when retrieved. The decompressed filter coefficients can be used with a MIMO filtering technique, and/or can be used to compress or decompress additional coefficients. | 09-04-2008 |
20080250091 | CUSTOM CHARACTER-CODING COMPRESSION FOR ENCODING AND WATERMARKING MEDIA CONTENT - An apparatus for compressing media content is disclosed. The apparatus divides the media content into at least three predetermined portions, compresses each of the at least three portions using one of at least three different compression algorithms and makes the at least three compressed predetermined portions publicly available. Making the portions publicly available includes, for example, transmitting the portions over a computer network such as the Internet. | 10-09-2008 |
20090006510 | System and method for deflate processing within a compression engine - An apparatus to implement a deflate process in a compression engine. An embodiment of the apparatus includes a hash table, a dictionary, comparison logic, and encoding logic. The hash table is configured to hash a plurality of characters of an input data stream to provide a hash address. The dictionary is configured to provide a plurality of distance values in parallel based on the hash address. The distance values are stored in the dictionary. The comparison logic is configured to identify a corresponding length for each matching distance value from the plurality of distance values. The encoding logic is configured to encode the longest length and the matching distance value as a portion of an LZ77 code stream. | 01-01-2009 |
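The hash-then-compare idea in this abstract can be pictured with a short software sketch; this is a plain illustration of hash-based LZ77 match search, not the patented parallel hardware, and the function name, candidate limit, and length bounds below are assumptions.

```python
# Illustrative sketch of hash-based LZ77 match search (software analogue, not the
# patented engine): hash a 3-byte prefix to recall earlier positions, then keep
# the longest match among the candidate distances.
def find_longest_match(data, pos, table, max_candidates=4, min_len=3, max_len=258):
    if pos + min_len > len(data):
        return None
    key = bytes(data[pos:pos + min_len])
    best = None                                     # (length, distance)
    for prev in table.get(key, [])[-max_candidates:]:
        length = 0
        while (pos + length < len(data) and length < max_len
               and data[prev + length] == data[pos + length]):
            length += 1
        if length >= min_len and (best is None or length > best[0]):
            best = (length, pos - prev)
    table.setdefault(key, []).append(pos)           # remember the current position
    return best
```

A match, when found, would be emitted as a (length, distance) pair in the LZ77 code stream; unmatched bytes are emitted as literals.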
20090030960 | Data processing system and method - A matrix by vector multiplication processing system ( | 01-29-2009 |
20090204655 | SYSTEM AND METHOD FOR DETERMINING A GROUPING OF SEGMENTS WITHIN A MARKET - A method for determining a grouping of segments within a market. The method includes forming a bias mitigated square matrix from a square matrix populated with second choice data, and forming a compressed matrix from the bias mitigated square matrix. Each different segment is initially associated with a row of the square matrix and a column of the square matrix. The method also includes determining a matrix consistency score for the compressed matrix, forming at least one additional compressed matrix from the bias mitigated square matrix, and determining matrix consistency scores for each additional compressed matrix. The method further includes determining which matrix consistency score is best. | 08-13-2009 |
20100057819 | Iterative Algorithms for Variance Reduction on Compressed Sinogram Random Coincidences in PET - The use of the ordinary Poisson iterative reconstruction algorithm in PET requires the estimation of expected random coincidences. In a clinical environment, random coincidences are often acquired with a delayed coincidence technique, and expected randoms are estimated through variance reduction (VR) of measured delayed coincidences. In this paper we present iterative VR algorithms for random compressed sinograms, when previously known methods are not applicable. Iterative methods have the advantage of easy adaptation to any acquisition geometry and of allowing the estimation of singles rates at the crystal level when the number of crystals is relatively small. Two types of sinogram compression are considered: axial (span) rebinning and transaxial mashing. A monotonic sequential coordinate descent algorithm, which optimizes the Least Squares objective function, is investigated. A simultaneous update algorithm, which possesses the advantage of easy parallelization, is also derived for both cases of the Least Squares and Poisson Likelihood objective function. | 03-04-2010 |
20100082717 | COMPUTATION APPARATUS AND METHOD, QUANTIZATION APPARATUS AND METHOD, AND PROGRAM - A computation apparatus includes an inverse conversion table creation unit configured to create an inverse conversion table in which discrete values obtained by applying a predetermined conversion on predetermined data correspond to inverse conversion values obtained by applying a conversion inverse to the predetermined conversion on the discrete values, a range decision unit configured to decide, when the predetermined data is input, in which range the predetermined data is included among ranges where the inverse conversion values adjacent in the inverse conversion table are set as border values, and a discrete value decision unit configured to decide the discrete value corresponding to the inverse conversion value whose value is closest to the predetermined data among the inverse conversion values serving as the border values of the range decided by the range decision unit. | 04-01-2010 |
20100115013 | EFFICIENT COMPRESSION AND HANDLING OF MODEL LIBRARY WAVEFORMS - A system and method for waveform compression includes preprocessing a collection of waveforms representing cell and/or interconnect response waveforms and constructing a representative waveform basis using linear algebra to create basis waveforms for a larger set of waveforms. The waveforms in the collection are represented as linear combination coefficients of an adaptive subset of the basis waveforms to compress the amount of stored information needed to reproduce the collection of waveforms. The representation of coefficients may be further compressed by, e.g., analytic representation. | 05-06-2010 |
20100228806 | MODULAR DIGITAL SIGNAL PROCESSING CIRCUITRY WITH OPTIONALLY USABLE, DEDICATED CONNECTIONS BETWEEN MODULES OF THE CIRCUITRY - Digital signal processing (“DSP”) circuit blocks are provided that can more easily work together to perform larger (e.g., more complex and/or more arithmetically precise) DSP operations if desired. These DSP blocks may also include redundancy circuitry that facilitates stitching together multiple such blocks despite an inability to use some block (e.g., because of a circuit defect). Systolic registers may be included at various points in the DSP blocks to facilitate use of the blocks to implement systolic form, finite-impulse-response (“FIR”), digital filters. | 09-09-2010 |
20100299378 | SORTABLE FLOATING POINT NUMBERS - The invention comprises methods for manipulating floating point numbers on a microprocessor where the numbers are sortable. That is, the numbers obey lexicographical ordering. Hence, the numbers may be quickly compared using bit-wise comparison functions such as memcmp( ). Conversion may result in a sortable floating point number in the form of a sign, leading bits of the exponent, and sets of digit triples in the form of declets (sets of 10 bits). In a variable-length version, numbers may be compressed by storing the number of trailing zero declets in lieu of storing the zero declets themselves. | 11-25-2010 |
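The lexicographic-ordering property is easiest to see for ordinary IEEE-754 binary doubles; the sketch below shows that general idea and is not the patent's decimal declet format (the key layout and the lack of NaN handling are assumptions).

```python
import struct

# Map an IEEE-754 double to an 8-byte big-endian key whose bytewise order matches
# numeric order, so keys can be compared with a memcmp-style comparison.
def sortable_key(x: float) -> bytes:
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    if bits & 0x8000000000000000:        # negative: flip all bits
        bits ^= 0xFFFFFFFFFFFFFFFF
    else:                                # non-negative: set the sign bit
        bits |= 0x8000000000000000
    return struct.pack(">Q", bits)

assert sortable_key(-2.5) < sortable_key(-1.0) < sortable_key(0.0) < sortable_key(3.25)
```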
20110078222 | ENHANCED MULTI-PROCESSOR WAVEFORM DATA EXCHANGE USING COMPRESSION AND DECOMPRESSION - Configurable compression and decompression of waveform data in a multi-core processing environment improves the efficiency of data transfer between cores and conserves data storage resources. In waveform data processing systems, input, intermediate, and output waveform data are often exchanged between cores and between cores and off-chip memory. At each core, a single configurable compressor and a single configurable decompressor can be configured to compress and to decompress integer or floating-point waveform data. At the memory controller, a configurable compressor compresses integer or floating-point waveform data for transfer to off-chip memory in compressed packets and a configurable decompressor decompresses compressed packets received from the off-chip memory. Compression reduces the memory or storage required to retain waveform data in a semiconductor or magnetic memory. Compression reduces both the latency and the bandwidth required to exchange waveform data. This abstract does not limit the scope of the invention as described in the claims. | 03-31-2011 |
20110202584 | SYSTEM FOR STORING AND TRANSMITTING COMPRESSED INTEGER DATA - A method is disclosed for encoding and decoding integer values ranging over a known gamut of values used by a data system. By noting that a data system may store and/or transmit integer values over a predefined gamut having a minimum and a maximum limit, integer values at or near the maximum may be compressed to a greater degree than in conventional systems without any loss of data resolution. | 08-18-2011 |
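The abstract does not spell out the encoding, so the following is only one plausible illustration of the stated idea, with an assumed LEB128-style layout: code the distance from the known gamut maximum as a varint, so values at or near the maximum occupy fewer bytes.

```python
# Hypothetical illustration (not the patented scheme): values near the gamut
# maximum yield a small delta and therefore a short varint.
def encode_near_max(value: int, gamut_max: int) -> bytes:
    delta = gamut_max - value            # small when value is near the maximum
    out = bytearray()
    while True:
        byte = delta & 0x7F
        delta >>= 7
        out.append(byte | (0x80 if delta else 0x00))
        if not delta:
            return bytes(out)
```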
20110276612 | METHOD, DEVICE, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT FOR DETERMINING A REPRESENTATION OF A SIGNAL - A method for determining a representation (y) of a signal (s) comprises selecting a predetermined number (m) of row vectors (v | 11-10-2011 |
20120016918 | Method for Compressing Information - Provided is a method of compressing information. The method includes converting compression target information into a binary number, converting the binary number into a decimal number a, performing operation of a discriminant | 01-19-2012 |
20120030266 | SYSTEM AND METHOD FOR DATA COMPRESSION USING A FIELD PROGRAMMABLE GATE ARRAY - A system and method for compressing and/or decompressing data uses a field programmable gate array (FPGA). In an embodiment, the method includes receiving data at the FPGA device, filtering the received data in a first dimension using a first logic structure of the FPGA device, storing the first filtered data in a memory of the FPGA device, filtering the received data in a second dimension using a second logic structure of the FPGA device, storing the second filtered data in the memory, quantizing the filtered data using a third logic structure of the FPGA device, encoding the quantized data using a fourth logic structure of the FPGA device to compress the data, and storing the encoded compressed data in a memory of the FPGA device. | 02-02-2012 |
20120084334 | DATA DECOMPRESSION WITH EXTRA PRECISION - Methods and systems for decompressing data are described. The relative magnitudes of a first value and a second value are compared. The first value and the second value represent respective endpoints of a range of values. The first value and the second value each have N bits of precision. Either the first or second value is selected, based on the result of the comparison. The selected value is scaled to produce a third value having N+1 bits of precision. A specified bit value is appended as the least significant bit of the other (non-selected) value to produce a fourth value having N+1 bits of precision. | 04-05-2012 |
20120110048 | APPARATUS FOR EVALUATING A MATHEMATICAL FUNCTION - An apparatus for evaluating a mathematical function at an input value is provided. The apparatus includes a device for selecting a mathematical function, a device for inputting a value at which to evaluate the function, a device for identifying an interval containing the input value, the interval being described by at least one polynomial function, a device for retrieving at least one control point representing the polynomial function from at least one look up table, a device for deriving the polynomial function from the control points, a device for evaluating the function for the input value and a device for providing data representing the evaluated function at an output. | 05-03-2012 |
20120117133 | METHOD AND DEVICE FOR PROCESSING A DIGITAL SIGNAL - A method for processing a digital signal comprises receiving an output encoded signal (S | 05-10-2012 |
20120124113 | LIGHT DETECTION AND RANGING (LiDAR) DATA COMPRESSION AND DECOMPRESSION METHODS AND APPARATUS - Methods and apparatus for lossless LiDAR LAS file compression and decompression are provided that include predictive coding, variable-length coding, and arithmetic coding. The predictive coding uses four different predictors: three predictors for the x, y, and z coordinates and a constant predictor for the scalar values associated with each LiDAR data point. | 05-17-2012 |
20120124114 | ARITHMETIC DEVICE - According to one embodiment, a representation converting unit converts a set of n elements (h | 05-17-2012 |
20120143932 | Data Structure For Tiling And Packetizing A Sparse Matrix - A computer system retrieves a slice of sparse matrix data, which includes multiple rows that each includes multiple elements. The computer system identifies one or more non-zero values stored in one or more of the rows. Each identified non-zero value corresponds to a different row, and also corresponds to an element location within the corresponding row. In turn, the computer system stores each of the identified non-zero values and corresponding element locations within a packet at predefined fields corresponding to the different rows. | 06-07-2012 |
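A rough model of the packetizing pass may help: on each pass over the rows of the slice, the next non-zero value and its element location from every still-active row are placed into one packet. The (row, column, value) tuples and the pass-per-packet grouping below are assumptions, not the claimed field layout.

```python
# Sketch of packetizing a slice of sparse rows; rows is a list of lists of
# (col, value) pairs, one list per row in the slice.
def packetize_slice(rows):
    cursors = [0] * len(rows)
    packets = []
    while any(c < len(r) for c, r in zip(cursors, rows)):
        packet = []
        for i, row in enumerate(rows):
            if cursors[i] < len(row):
                col, val = row[cursors[i]]
                packet.append((i, col, val))     # row slot, element location, value
                cursors[i] += 1
        packets.append(packet)
    return packets
```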
20120150931 | DECOMPRESSING APPARATUS AND COMPRESSING APPARATUS - According to one embodiment, a decompressing apparatus includes an input unit, a calculating unit, a first selecting unit, and a decompressing unit. The input unit inputs additional data, which is obtained based on trace expression data in which an element in a subgroup of a multiplicative group of a finite field is trace-expressed and affine expression data in which the trace expression data is affine-expressed, and the trace expression data. The calculating unit calculates a plurality of solutions of simultaneous equations derived by the trace expression data. The first selecting unit selects any of a plurality of items of affine expression data in which the element is affine-expressed based on the additional data, the affine expression data being found from the solutions. The decompressing unit decompresses the selected affine expression data to the element. | 06-14-2012 |
20120166502 | LOW-COMPLEXITY INVERSE TRANSFORM COMPUTATION METHOD - A low-complexity inverse transform computation method, comprising the following steps: firstly, analyzing an end-of-block (EOB) point in a matrix of a block; next, determining whether a bottom-left corner coefficient or a top-right coefficient before said EOB point is zero, and if it is zero, further reducing the size of said matrix; then, determining an adequate operation mode to reduce computational complexity; and finally, realizing the 2-D inverse transform through simplified 1-D inverse transforms. The inverse transform process of said method is capable of lowering the amount of computation, reducing the burden and computational complexity of a decompression system, and effectively shortening the computation time of said 2-D inverse transform, such that it is applicable to inverse transforms of various video and still image codecs. | 06-28-2012 |
20120166503 | METHOD FOR FULLY ADAPTIVE CALIBRATION OF A PREDICTION ERROR CODER - Method for fully adaptive calibration of a prediction error coder, comprising a first step of initialization; a second step of reception and accumulation of block-size data samples wherein, for each received value, one is added to the histogram bin associated with that value; a third step of analysis of the histogram and determination of the coding option; a fourth step of analysis of the histogram and determination of a coding table; a fifth step of outputting a header with the determined prediction error coder coding table; and wherein the previous steps are repeated if more samples need to be compressed. It is useful as a data compression technique, with the advantage of being faster and more robust than the current CCSDS lossless compression standard. | 06-28-2012 |
20120203810 | Method And Apparatus For Compressive Sensing With Reduced Compression Complexity - Various methods and devices are provided to address the need for reduced compression complexity in the area of compressive sensing. In one method, a vector x is compressed to obtain a vector y according to y=Φ | 08-09-2012 |
20120226723 | APPROXIMATE ORDER STATISTICS OF REAL NUMBERS IN GENERIC DATA - A method, system, and processor-readable storage medium are directed towards calculating approximate order statistics on a collection of real numbers. In one embodiment, the collection of real numbers is processed to create a digest comprising a hierarchy of buckets. Each bucket is assigned a real number N having P digits of precision and ordinality O. The hierarchy is defined by grouping buckets into levels, where each level contains all buckets of a given ordinality. Each individual bucket in the hierarchy defines a range of numbers: all numbers that, after being truncated to that bucket's P digits of precision, are equal to that bucket's N. Each bucket additionally maintains a count of how many numbers have fallen within that bucket's range. Approximate order statistics may then be calculated by traversing the hierarchy and performing an operation on some or all of the ranges and counts associated with each bucket. | 09-06-2012 |
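A single-level toy version of the truncation-based bucketing reads as follows; the significant-digit truncation and the quantile walk are assumed details for illustration, not the claimed multi-level hierarchy.

```python
import math
from collections import Counter

def truncate_sig(x: float, p: int) -> float:
    # Truncate x to p significant digits; each distinct truncated value is a bucket.
    if x == 0.0:
        return 0.0
    scale = 10 ** (p - 1 - math.floor(math.log10(abs(x))))
    return math.trunc(x * scale) / scale

def approx_quantile(values, q, p=2):
    # Count how many values fall into each bucket, then walk buckets in order.
    buckets = Counter(truncate_sig(v, p) for v in values)
    target = q * len(values)
    seen = 0
    for key in sorted(buckets):
        seen += buckets[key]
        if seen >= target:
            return key
    return max(buckets)
```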
20120265793 | MERGED COMPRESSOR FLOP CIRCUIT - A merged compressor flip-flop circuit is provided. The circuit includes a compressor circuit having a front-end and a back-end, the front-end configured to receive four input bits and to output a first carry-bit to a back-end of a second compressor circuit, the front end further configured to output intermediate sum signals to the back-end of the compressor circuit, the back-end configured to receive the intermediate sum signals from the front-end and further configured to receive a second carry-bit from a front-end of a third compressor circuit, the back-end further configured to output a sum-bit and a third carry-bit based upon the intermediate sum signals and the second carry-bit, and a flip-flop circuit configured to receive the sum-bit and third carry-bit and to store the sum-bit and third carry-bit, wherein the back-end of the compressor circuit directly drives the sum-bit and third carry-bit into the flip-flop circuit. | 10-18-2012 |
20130007075 | METHODS AND APPARATUS FOR COMPRESSING PARTIAL PRODUCTS DURING A FUSED MULTIPLY-AND-ACCUMULATE (FMAC) OPERATION ON OPERANDS HAVING A PACKED-SINGLE-PRECISION FORMAT - The disclosed embodiments relate to methods and apparatus for accurately, efficiently and quickly executing a fused multiply-and-accumulate instruction with respect to floating-point operands that have packed-single-precision format. The disclosed embodiments can speed up computation of a high-part of a result during a fused multiply-and-accumulate operation so that cycle delay can be reduced and so that power consumption can be reduced. | 01-03-2013 |
20130007076 | COMPUTATIONALLY EFFICIENT COMPRESSION OF FLOATING-POINT DATA - Compression of floating-point numbers is realized by comparing the exponents of the floating-point numbers to one or more exponent thresholds to classify the floating-point numbers and to apply different compression types to the different classes. Each class and compression type is associated with an indicator. An indicator array contains M indicators for M floating-point numbers. The position of the indicator in the indicator array corresponds to one of the floating-point numbers and the indicator value specifies the class and compression type. The floating-point number is encoded in accordance with the compression type for its class. A compressed data packet contains the indicator array and up to M encoded floating-point numbers. Decompression extracts the indicator array and the encoded floating-point numbers from the compressed data packet and decodes the encoded floating-point numbers in accordance with the compression type associated with the indicator value to form a reconstructed floating-point number. | 01-03-2013 |
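To make the indicator-array idea concrete, here is a hedged sketch with assumed classes (exact zero, half-precision payload, full 32-bit payload); the thresholds, payload formats, and packet layout are illustrative and are not taken from the claims.

```python
import struct

def compress_block(values, small_exp_threshold=14):
    # One indicator per float selects its class; payload holds the encoded bytes.
    indicators, payload = [], bytearray()
    for v in values:
        bits = struct.unpack(">I", struct.pack(">f", v))[0]
        exp = (bits >> 23) & 0xFF
        if (bits & 0x7FFFFFFF) == 0:
            indicators.append(0)                  # zero: indicator only, no payload
        elif abs(exp - 127) <= small_exp_threshold:
            indicators.append(1)
            payload += struct.pack(">e", v)       # 2-byte half float (lossy mantissa)
        else:
            indicators.append(2)
            payload += struct.pack(">f", v)       # 4-byte float, stored verbatim
    return bytes(indicators), bytes(payload)
```

Decompression would read the indicator array first and then consume zero, two, or four payload bytes per value accordingly.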
20130007077 | COMPRESSION OF FLOATING-POINT DATA - Compression of exponents, mantissas and signs of floating-point numbers is described. Differences between exponents are encoded by exponent tokens selected from a code table. The mantissa is encoded to a mantissa token having a length based on the exponent. The signs are encoded directly or are compressed to produce fewer sign tokens. The exponent tokens, mantissa tokens and sign tokens are packed in a compressed data packet. Decompression decodes the exponent tokens using the code table. The decoded exponent difference is added to a previous reconstructed exponent to produce the reconstructed exponent. The reconstructed exponent is used to determine the length of the mantissa token. The mantissa token is decoded to form the reconstructed mantissa. The sign tokens provide the reconstructed signs or are decompressed to provide the reconstructed signs. The reconstructed sign, reconstructed exponent and reconstructed mantissa are combined to form a reconstructed floating-point number. | 01-03-2013 |
20130007078 | COMPRESSION OF FLOATING-POINT DATA IN ENCODING GROUPS - Exponents, mantissas and signs of floating-point numbers are compressed in encoding groups. Differences between maximum exponents of encoding groups are encoded by exponent tokens selected from a code table. Each mantissa of an encoding group is encoded to a mantissa token having a length based on the maximum exponent. Signs are encoded directly or are compressed to produce sign tokens. Exponent tokens, mantissa tokens and sign tokens are packed in a compressed data packet. For decompression, the exponent tokens are decoded using the code table. The decoded exponent difference is added to a previous reconstructed maximum exponent to produce the reconstructed maximum exponent for the encoding group. The reconstructed maximum exponent is used to determine the length of the mantissa tokens that are decoded to produce the reconstructed mantissas for the encoding group. The reconstructed sign, reconstructed exponent and reconstructed mantissa are combined to form a reconstructed floating-point number. | 01-03-2013 |
20130007079 | LU FACTORIZATION OF LOW RANK BLOCKED MATRICES WITH SIGNIFICANTLY REDUCED OPERATIONS COUNT AND MEMORY REQUIREMENTS - Methods and apparatus for fast computation of a system interaction matrix with significantly reduced operations count and memory requirements are disclosed. In one embodiment, an ordered set of input points or values is determined so that factors of the system interaction matrix have low rank. The system interaction matrix is partitioned into blocks so that a dimension of a block corresponds to a number of unknown values in a group. The logical partition is created without computing the system interaction matrix. For the chosen partition, terms of the factorization are computed and stored in compressed form. | 01-03-2013 |
20130018932 | SYSTEM AND METHOD FOR LONG RANGE AND SHORT RANGE DATA COMPRESSION - A system and method are provided for use with streaming blocks of data, each of the streaming blocks of data including a number of bits of data. The system includes a first compressor and a second compressor. The first compressor can receive and store a number n of blocks of the streaming blocks of data, can receive and store a block of data to be compressed of the streaming blocks of data, can compress consecutive bits within the block of data to be compressed based on the n blocks of the streaming blocks of data, and can output a match descriptor and a literal segment. The match descriptor is based on the compressed consecutive bits. The literal segment is based on a remainder of the number of bits of the data to be compressed not including the consecutive bits. The second compressor can compress the literal segment and can output a compressed data block including the match descriptor and a compressed string of data based on the compressed literal segment. | 01-17-2013 |
20130046803 | DITHER-AWARE IMAGE CODING - This disclosure provides implementations of dither-aware image coding processes, devices, apparatus, and systems. In one aspect, a portion of received image data is selected. First spatial domain values in the selected portion of the image data are transformed to first transform domain coefficients. Second spatial domain values in a designated dither matrix are transformed to second transform domain coefficients. A ratio of each of the first transform domain coefficients to a respective second transform domain coefficient is determined. The first transform domain coefficients are selectively coded in accordance with the determined ratios to define coded first transform domain coefficients. A reverse transformation is performed to transform the coded first transform domain coefficients to third spatial domain values defining a coded portion of the image data. By way of example, transformations such as discrete cosine transforms or discrete wavelet transforms can be used. | 02-21-2013 |
20130054661 | BLOCK FLOATING POINT COMPRESSION WITH EXPONENT DIFFERENCE AND MANTISSA CODING - A method and apparatus for compressing signal samples uses block floating point representations where the number of bits per mantissa is determined by the maximum magnitude sample in the group. The compressor defines groups of signal samples having a fixed number of samples per group. The maximum magnitude sample in the group determines an exponent value corresponding to the number of bits for representing the maximum sample value. The exponent values are encoded to form exponent tokens. Exponent differences between consecutive exponent values may be encoded individually or jointly. The samples in the group are mapped to corresponding mantissas, each mantissa having a number of bits based on the exponent value. Removing LSBs depending on the exponent value produces mantissas having fewer bits. Feedback control monitors the compressed bit rate and/or a quality metric. This abstract does not limit the scope of the invention as described in the claims. | 02-28-2013 |
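As a rough illustration of block floating point grouping (the group size, the exponent definition, and the mantissa width here are assumptions, and the exponent-token and packing stages are omitted):

```python
def bfp_encode(samples, group_size=4, mantissa_bits=8):
    # Each group keeps one exponent plus reduced-precision mantissas.
    groups = []
    for i in range(0, len(samples), group_size):
        group = samples[i:i + group_size]
        exponent = max(1, max(abs(s) for s in group).bit_length())
        shift = max(0, exponent - mantissa_bits)   # LSBs removed per the exponent
        mantissas = [s >> shift for s in group]    # lossy when shift > 0
        groups.append((exponent, mantissas))
    return groups

def bfp_decode(groups, mantissa_bits=8):
    out = []
    for exponent, mantissas in groups:
        shift = max(0, exponent - mantissa_bits)
        out.extend(m << shift for m in mantissas)
    return out
```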
20130060827 | BLOCK FLOATING POINT COMPRESSION WITH EXPONENT TOKEN CODES - A method and apparatus for compressing signal samples uses block floating point representations where the number of bits per mantissa is determined by the maximum magnitude sample in the group. The compressor defines groups of signal samples having a fixed number of samples per group. The maximum magnitude sample in the group determines an exponent value corresponding to the number of bits for representing the maximum sample value. The exponent values are encoded to form exponent tokens. Exponent differences between consecutive exponent values may be encoded individually or jointly. The samples in the group are mapped to corresponding mantissas, each mantissa having a number of bits based on the exponent value. Removing LSBs depending on the exponent value produces mantissas having fewer bits. Feedback control monitors the compressed bit rate and/or a quality metric. This abstract does not limit the scope of the invention as described in the claims. | 03-07-2013 |
20130117341 | DECIMAL ELEMENTARY FUNCTIONS COMPUTATION - A method for executing a decimal elementary function (DEF) computation from multiple decimal floating-point operands, including: extracting mantissae and exponents from the operands; generating normalized mantissae by shifting the mantissae based on the number of leading zeros; calculating a plurality of approximations for a logarithm of the first normalized mantissa; calculating, using the plurality of approximations for the logarithm, a plurality of approximations for a product of the second normalized mantissa and a sum based on the logarithm of the first normalized mantissa and an exponent; generating a plurality of shifted values by shifting the plurality of approximations for the product; generating a plurality of fraction components from the plurality of shifted values; calculating an antilog based on the plurality of fraction components; and outputting a decimal floating-point result of the DEF computation comprising a resultant mantissa based on the antilog and a resultant biased exponent. | 05-09-2013 |
20130124588 | ENCODING DENSELY PACKED DECIMALS - According to one aspect of the present disclosure, a method and technique for encoding densely packed decimals is disclosed. The method includes: executing a floating point instruction configured to perform a floating point operation on decimal data in a binary coded decimal (BCD) format; determining whether a result of the operation includes a rounded mantissa overflow; and responsive to determining that the result of the operation includes a rounded mantissa overflow, compressing a result of the operation from the BCD-formatted decimal data to decimal data in a densely packed decimal (DPD) format by shifting select bit values of the BCD formatted decimal data by one digit to select bit positions in the DPD format. | 05-16-2013 |
20130124589 | Compression and Decompression of Numerical Data - The invention relates to a computer-implemented method for compressing numerical data comprising a structured set of floating point actual values. A floating point value is defined by a sign, an exponent and a mantissa. The method comprises computing a floating point predicted value related to a target actual value of the set. The computing includes performing operations on integers corresponding to the sign, to the exponent and/or to the mantissa of actual values of a subset of the set. The method also comprises storing a bit sequence representative of a difference between integers derived from the target actual value and the predicted value. Such a method is particularly efficient for reducing the storage size of a CAD file. | 05-16-2013 |
20130151575 | ADAPTIVE BLOCK-SIZE TRANSFORM USING LLMICT - The LLMICT transform matrices are orthogonal, hence their inverses are their transposes. The LLMICT transform matrices are integer matrices, which can be implemented with high precision, eliminating the drift error in video coding. Fast algorithms for the LLMICT transform are presented, allowing lower requirements on computation hardware. The LLMICT is also found to have high transform coding gain due to its similarity to the DCT. | 06-13-2013 |
20130173676 | COMPRESSION OF SMALL STRINGS - A method for compressing a set of small strings may include calculating n-gram frequencies for a plurality of n-grams over the set of small strings, selecting a subset of n-grams from the plurality of n-grams based on the calculated n-gram frequencies, defining a mapping table that maps each n-gram of the subset of n-grams to a unique code, and compressing the set of small strings by replacing n-grams within each small string in the set of small strings with corresponding unique codes from the mapping table. The method may use linear optimization to select a subset of n-grams that achieves a maximum space saving amount over the set of small strings for inclusion in the mapping table. The unique codes may be variable-length one or two byte codes. The set of small strings may be domain names. | 07-04-2013 |
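A greedy toy version of the table construction and substitution follows; the patent mentions linear optimization for the subset selection, so the frequency-ranked greedy choice, the one-byte code range, and the ASCII-only input are assumptions here.

```python
from collections import Counter

def build_table(strings, n=3, table_size=64):
    # Count n-gram frequencies over the set and map the most frequent to codes.
    counts = Counter()
    for s in strings:
        for i in range(len(s) - n + 1):
            counts[s[i:i + n]] += 1
    top = [g for g, _ in counts.most_common(table_size)]
    return {g: bytes([0x80 + i]) for i, g in enumerate(top)}   # codes 0x80..0xBF

def compress(s, table, n=3):
    out, i = bytearray(), 0
    while i < len(s):
        gram = s[i:i + n]
        if gram in table:
            out += table[gram]; i += n             # replace the n-gram with its code
        else:
            out += s[i].encode("ascii"); i += 1    # pass other characters through
    return bytes(out)
```

Decompression would simply invert the mapping table, expanding each code byte back to its n-gram.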
20130173677 | SYSTEMS AND METHODS TO REDUCE I/O AND/OR SPEED UP OUT-OF-CORE LINEAR SOLVERS - Systems and methods to reduce I/O (input/output) with regard to out-of-core linear solvers and/or to speed up out-of-core linear solvers. | 07-04-2013 |
20130185344 | MODIFIED GABOR TRANSFORM WITH GAUSSIAN COMPRESSION AND BI-ORTHOGONAL DIRICHLET GAUSSIAN DECOMPRESSION - A signal processor for compressing signal data, including a function shapes generator for receiving as input time and frequency scale parameters, and for generating as output a plurality of shape parameters for a corresponding plurality of localized functions, wherein the shape parameters govern the centers and spreads of the localized functions, a matrix generator for receiving as input the plurality of shape parameters and a sequence of sampling times, and for generating as output a matrix whose elements are the values of the localized functions at the sampling times, a signal transformer for receiving as input an original signal and the matrix generated by the matrix generator, and for generating as output a transformed signal by applying the matrix to the original signal, and a signal compressor for receiving as input the transformed signal, and for generating as output a compressed representation of the transformed signal. | 07-18-2013 |
20130262538 | DATA COMPRESSION FOR DIRECT MEMORY ACCESS TRANSFERS - Memory system operations are extended for a data processor by DMA, cache, or memory controller to include a DMA descriptor, including a set of operations and parameters for the operations, which provides for data compression and decompression during or in conjunction with processes for moving data between memory elements of the memory system. The set of operations can be configured to use the parameters and perform the operations of the DMA, cache, or memory controller. The DMA, cache, or memory controller can support moves between memory having a first access latency, such as memory integrated on the same chip as a processor core, and memory having a second access latency that is longer than the first access latency, such as memory on a different integrated circuit than the processor core. | 10-03-2013 |
20130332496 | SATURATION DETECTOR - A hardware integer saturation detector that detects both whether packing a 32-bit integer value causes saturation and whether packing each of first and second 16-bit integer values causes saturation, where the first 16-bit integer value is the upper 16 bits of the 32-bit integer value and the second 16-bit integer value is the lower 16 bits of the 32-bit integer value. The detector includes hardware signal logic, configured to generate four signals with information about the integer values. The detector also includes saturation logic, configured to gate the four signals to generate a saturation signal. Each bit of the saturation signal indicates whether packing the 32-bit integer value, or one of the first and second 16-bit integer values, will cause saturation. | 12-12-2013 |
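A software model of the saturation conditions may clarify what is being detected; the target pack widths (16 and 8 bits) and signed arithmetic are assumptions not stated in the abstract, and the four-signal gating logic itself is not modeled.

```python
def _signed(value: int, bits: int) -> int:
    # Interpret the low `bits` bits of value as a two's-complement signed integer.
    value &= (1 << bits) - 1
    return value - (1 << bits) if value >> (bits - 1) else value

def saturation_flags(x32: int):
    """Return (sat_32_to_16, sat_hi16_to_8, sat_lo16_to_8) for signed packing."""
    hi16, lo16 = (x32 >> 16) & 0xFFFF, x32 & 0xFFFF
    sat32 = not (-2**15 <= _signed(x32, 32) < 2**15)
    sat_hi = not (-2**7 <= _signed(hi16, 16) < 2**7)
    sat_lo = not (-2**7 <= _signed(lo16, 16) < 2**7)
    return sat32, sat_hi, sat_lo
```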
20130332497 | MODULAR DIGITAL SIGNAL PROCESSING CIRCUITRY WITH OPTIONALLY USABLE, DEDICATED CONNECTIONS BETWEEN MODULES OF THE CIRCUITRY - Digital signal processing (“DSP”) circuit blocks are provided that can more easily work together to perform larger (e.g., more complex and/or more arithmetically precise) DSP operations if desired. These DSP blocks may also include redundancy circuitry that facilitates stitching together multiple such blocks despite an inability to use some block (e.g., because of a circuit defect). Systolic registers may be included at various points in the DSP blocks to facilitate use of the blocks to implement systolic form, finite-impulse-response (“FIR”), digital filters. | 12-12-2013 |
20140046990 | ACCELEROMETER DATA COMPRESSION - A method of compressing data output from an acceleration measurement means configured to be transported, carried or worn by a user is provided. Acceleration values indicative of the movement of the user are measured at a first frequency and values representative of the measured acceleration values are generated at a second frequency, which is lower than the first frequency. The step of generating comprises: defining a plurality of time windows, each time window containing a plurality of measured acceleration values; and applying a transformation to the measured acceleration values within each time window to generate a plurality of transformed values. For each time window, at least one of said plurality of transformed values and/or one or more parameters associated therewith is stored. | 02-13-2014 |
20140082035 | MODULAR DIGITAL SIGNAL PROCESSING CIRCUITRY WITH OPTIONALLY USABLE, DEDICATED CONNECTIONS BETWEEN MODULES OF THE CIRCUITRY - Digital signal processing (“DSP”) circuit blocks are provided that can more easily work together to perform larger (e.g., more complex and/or more arithmetically precise) DSP operations if desired. These DSP blocks may also include redundancy circuitry that facilitates stitching together multiple such blocks despite an inability to use some block (e.g., because of a circuit defect). Systolic registers may be included at various points in the DSP blocks to facilitate use of the blocks to implement systolic form, finite-impulse-response (“FIR”), digital filters. | 03-20-2014 |
20140095561 | ENHANCED MULTI-PROCESSOR WAVEFORM DATA EXCHANGE USING COMPRESSION AND DECOMPRESSION - Configurable compression and decompression of waveform data in a multi-core processing environment improves the efficiency of data transfer between cores and conserves data storage resources. In waveform data processing systems, input, intermediate, and output waveform data are often exchanged between cores and between cores and off-chip memory. At each core, a single configurable compressor and a single configurable decompressor can be configured to compress and to decompress integer or floating-point waveform data. At the memory controller, a configurable compressor compresses integer or floating-point waveform data for transfer to off-chip memory in compressed packets and a configurable decompressor decompresses compressed packets received from the off-chip memory. Compression reduces the memory or storage required to retain waveform data in a semiconductor or magnetic memory. Compression reduces both the latency and the bandwidth required to exchange waveform data. This abstract does not limit the scope of the invention as described in the claims. | 04-03-2014 |
20140143289 | Constrained System Endec - Various embodiments of the present invention provide apparatuses and methods for encoding and decoding data for constrained systems with reduced or eliminated need for hardware and time intensive arithmetic operations such as multiplication and division. | 05-22-2014 |
20140280407 | VECTOR PROCESSING CARRY-SAVE ACCUMULATORS EMPLOYING REDUNDANT CARRY-SAVE FORMAT TO REDUCE CARRY PROPAGATION, AND RELATED VECTOR PROCESSORS, SYSTEMS, AND METHODS - Embodiments disclosed herein include vector processing carry-save accumulators employing redundant carry-save format to reduce carry propagation. The multi-mode vector processing carry-save accumulators employing redundant carry-save format can be provided in a vector processing engine (VPE) to perform vector accumulation operations. Related vector processors, systems, and methods are also disclosed. The accumulator blocks are configured as carry-save accumulator structures. The accumulator blocks are configured to accumulate in redundant carry-save format so that carrys and saves are accumulated and saved without the need to provide a carry propagation path and a carry propagation add operation during each step of accumulation. A carry propagate adder is only required to propagate the accumulated carry once at the end of the accumulation. In this manner, power consumption and gate delay associated with performing a carry propagation add operation during each step of accumulation in the accumulator blocks is reduced or eliminated. | 09-18-2014 |
20140289293 | LARGE MULTIPLIER FOR PROGRAMMABLE LOGIC DEVICE - A plurality of specialized processing blocks in a programmable logic device, including multipliers and circuitry for adding results of those multipliers, can be configured as a larger multiplier by adding to the specialized processing blocks selectable circuitry for shifting multiplier results before adding. In one embodiment, this allows all but the final addition to take place in specialized processing blocks, with the final addition occurring in programmable logic. In another embodiment, additional compression and adding circuitry allows even the final addition to occur in the specialized processing blocks. | 09-25-2014 |
20140317161 | MATCHING PATTERN COMBINATIONS VIA FAST ARRAY COMPARISON - Methods and arrangements for providing a compressed representation of a number sequence. An input number sequence is received, as is a stored number sequence. The input number sequence is compared to the stored number sequence. The comparing includes determining a set of coefficients corresponding to the input number sequence, via solving at least one algebraic equation, the at least one algebraic equation comprising at least one of: an arithmetic equation, and an exponential equation. The comparing further includes applying at least one test to determine whether the set of coefficients identifies at least a portion of the stored number sequence as matching the entire input number sequence. | 10-23-2014 |
20150019602 | METHOD FOR POST-PROCESSING AN OUTPUT OF A RANDOM SOURCE OF A RANDOM GENERATOR - A method and an assemblage for post-processing an output of a random source of a random generator are presented. In the method, an output signal of the random source is compressed, thereby yielding a sequence of compressed signal values that are checked in terms of their distribution. | 01-15-2015 |
20150019603 | Method for checking an output of a random number generator - In a method for checking an output of a random number generator which includes at least one random source, the frequency of occurrence of at least one bit assignment is counted and correlated with the total number of values taken into account. | 01-15-2015 |
20150067009 | SPARSE MATRIX DATA STRUCTURE - Various embodiments relating to encoding a sparse matrix into a data structure format that may be efficiently processed via parallel processing of a computing system are provided. In one embodiment, a sparse matrix may be received. A set of designated rows of the sparse matrix may be traversed until all non-zero elements in the sparse matrix have been placed in a first array. Each time a row in the set is traversed, a next non-zero element in that row may be placed in the first array. If all non-zero elements for a given row of the set of designated rows have been placed in the first array, the given row may be replaced in the set of designated rows with a next unprocessed row of the sparse matrix. The data structure in which the sparse matrix is encoded may be outputted. The data structure may include the first array. | 03-05-2015 |
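The row-replacement traversal is easier to follow as code; the sketch below keeps only the value array (the "first array" in the abstract), and the working-set size, the (col, value) row representation, and the omission of companion index arrays are assumptions rather than the claimed structure.

```python
def encode_interleaved(rows, k=4):
    # rows: list of lists of (col, value) pairs, one list per matrix row.
    values = []                                   # the "first array"
    active = list(range(min(k, len(rows))))       # the set of designated rows
    cursors = {r: 0 for r in active}
    next_row = len(active)
    while active:
        for slot, r in enumerate(list(active)):
            if cursors[r] < len(rows[r]):
                values.append(rows[r][cursors[r]][1])
                cursors[r] += 1
            if cursors[r] >= len(rows[r]):        # row exhausted: bring in a new one
                if next_row < len(rows):
                    active[slot] = next_row
                    cursors[next_row] = 0
                    next_row += 1
                else:
                    active[slot] = None
        active = [r for r in active if r is not None]
    return values
```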
20150088945 | ADAPTIVE COMPRESSION SUPPORTING OUTPUT SIZE THRESHOLDS - Methods and systems for adaptive compression include compressing input data according to a first compression ratio; pausing compression after a predetermined amount of input data is compressed; estimating which of a set of ranges a compressed output size will fall within using current settings; and performing compression on a remainder of the input data according to a second compression ratio based on the estimated range. | 03-26-2015 |
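As a rough analogy to the pause, estimate, and adjust flow (zlib compression levels stand in for the "compression ratio" settings, and the probe size and size budget are invented parameters, not values from the claims):

```python
import zlib

def adaptive_compress(data: bytes, probe_bytes=64 * 1024, size_budget=1 << 20):
    # Compress a prefix, project the final size, then pick the level for the rest.
    chunks = [zlib.compress(data[:probe_bytes], level=6)]
    if len(data) > probe_bytes:
        projected = len(chunks[0]) * len(data) / probe_bytes
        level = 9 if projected > size_budget else 3   # compress harder only if needed
        chunks.append(zlib.compress(data[probe_bytes:], level=level))
    return chunks                                     # decode each chunk separately
```

A decoder would decompress each chunk independently and concatenate the results.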
20150100609 | COMPRESSION OF TIME-VARYING SIMULATION DATA - A method, executed by at least one processor, for compressing time-varying scientific data, includes receiving time-varying data corresponding to a physical phenomenon within a domain comprising one or more spatial dimensions, conducting a proper orthogonal decomposition of the time-varying data to provide basis vectors for the time-varying data, generating a set of expansion coefficients corresponding to the basis vectors that are most prominent in the time-varying data, conducting an image compression algorithm on the expansion coefficients to provide a compressed representation of the time-varying data, and storing the compressed representation of the time-varying data. The time-varying data may be numeric data generated from a physical simulation or from experimentation. In some embodiments, the time-varying data corresponds to one or more sub-domains within a larger dataset. The sub-domains may be coherent sub-domains that have similar modes. A corresponding computer-program product and computing system are also disclosed herein. | 04-09-2015 |
20160112061 | NON-RECURSIVE CASCADING REDUCTION - As disclosed herein a method, executed by a computer, for conducting non-recursive cascading reduction includes receiving a collection of floating point values, using a binary representation of an index corresponding to a value being processed to determine a reduction depth for elements on a stack to be accumulated, and according to the reduction depth, iteratively conducting a reduction operation on the current value and one or more values on the stack. In addition to accumulation, the reduction operation may include transforming the value with a corresponding function. The method may also include using a SIMD processing environment to further increase the performance of the method. The method provides results with both high performance and accuracy. A computer system and computer program product corresponding to the method are also disclosed herein. | 04-21-2016 |
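The stack-based scheme outlined here resembles iterative pairwise (cascading) summation; in the sketch below, the trailing-zero rule on the running index is an assumed detail that shows how a binary index representation can determine the reduction depth after each push.

```python
def cascading_sum(values):
    stack = []
    for index, x in enumerate(values, start=1):
        stack.append(x)
        merges = (index & -index).bit_length() - 1    # trailing zeros of the index
        for _ in range(merges):                       # fold that many stack levels
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    total = 0.0
    while stack:                                      # fold any leftover partials
        total += stack.pop()
    return total
```

Accumulating in balanced pairs this way keeps rounding-error growth roughly logarithmic in the number of values, rather than linear as in a naive running sum, which is consistent with the accuracy claim in the abstract.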
20160173124 | SYSTEM AND METHOD OF COMBINATORIAL HYPERMAP BASED DATA REPRESENTATIONS AND OPERATIONS | 06-16-2016 |
20160179468 | CHECKSUM ADDER | 06-23-2016 |
20160179750 | Computer-Implemented System And Method For Efficient Sparse Matrix Representation And Processing | 06-23-2016 |