Patent application number | Description | Published |
20110231394 | BOOTSTRAP AND ADAPT A DOCUMENT SEARCH ENGINE - Architecture that employs a modeling technique based on language modeling to estimate a probability of a document matching the user need as expressed in the query. The modeling technique is based on the data mining results that various portions of a document (e.g., body, title, URL, anchor text, user queries) use different styles of human languages. Thus, the results based on a language can be adapted individually to match the language of query. Since the approach is based on adaptation, the framework also provides a natural means to progressively revise the model as user data are collected. Different styles of languages in a document can be recognized and adapted individually. Background language models are also employed that offer a fallback approach in case the document has incomplete fields of data, and can utilize topical or semantic hierarchy of the knowledge domain. | 09-22-2011 |
20110238686 | CACHING DATA OBTAINED VIA DATA SERVICE INTERFACES - A potential user query that can be subsequently received from a user is identified. An interface exposed by a data service is invoked, which includes providing the potential user query to the interface. Search results in response to the potential user query are received from the interface and maintained in a complex data set store. If a user query is subsequently received that maps to a normalized query that is the potential user query, then search results for the normalized query are obtained from the complex data set store and returned as the search results for the user query. | 09-29-2011 |
20110295852 | FEDERATED IMPLICIT SEARCH - A resource selection system is described for assisting a user in performing a task that includes multiple actions. At each stage of the task, the system presents a set of resources from which the user may select to perform a subsequent action in the task. The system implicitly selects the set of resources based on context information that identifies the user's current informational needs. For example, the context information may be derived from textual information that is being presented on a user device, which the user is presumed to be viewing at the current time. In one implementation, the system selects the set of resources by computing language models for respective domains and respective entities. The system uses the language models to determine the relevance of the context information to each of the domains. The system then selects resources associated with domains that have been assessed as relevant. | 12-01-2011 |
20110320470 | GENERATING AND PRESENTING A SUGGESTED SEARCH QUERY - The present invention is directed to presenting a suggested search query. Responsive to receiving a user-devised search parameter, a suggested search query is identified. The user-devised search parameter might have been previously received by a search system, or alternatively, might be a unique query that has not been previously received. A suggested search query might be generated using various techniques, such as by applying an n-gram language model. A classification of the suggested search query is determined, and the suggested search query is presented together with a visual indicator, which signifies the classification. | 12-29-2011 |
20120246133 | ONLINE SPELLING CORRECTION/PHRASE COMPLETION SYSTEM - Online spelling correction/phrase completion is described herein. A computer-executable application receives a phrase prefix from a user, wherein the phrase prefix includes a first character sequence. A transformation probability is retrieved responsive to receipt of the phrase prefix, wherein the transformation probability indicates a probability that a second character sequence has been transformed into the first character sequence. A search is then executed over a trie to locate a most probable phrase completion based at least in part upon the transformation probability. | 09-27-2012 |
20120265779 | INTERACTIVE SEMANTIC QUERY SUGGESTION FOR CONTENT SEARCH - Systems, methods and computer-storage media are provided for identifying query formulation suggestions in response to receiving a search query. A portion of a search query is received. Query formulation suggestions are identified by semantically analyzing the search query. The query formulation suggestions are used to further formulate the received search query. The query formulation suggestions include semantic-pattern-based query suggestions that are derived from semantic query patterns, one or more entities, and information associated with these entities. The query formulation suggestions are transmitted for presentation. | 10-18-2012 |
20120265784 | ORDERING SEMANTIC QUERY FORMULATION SUGGESTIONS - Methods are provided for ordering semantically-identified query formulation suggestions. Semantic query patterns are identified for a plurality of search queries and a weight is identified for each. Also identified is a plurality of semantic categories, each having an identified weight. Terms/phrases commonly associated with the semantic categories are identified, as are semantic attributes as they pertain to the semantic categories. Semantic attribute patterns and respective weights therefore are identified. A text-parser is generated from the semantic query patterns and respective weights, the semantic category terms, and the semantic attribute patterns and respective weights, the text-parser for use in parsing input user queries or portions thereof. Upon receiving a user search query, the text-parser is applied to determine at least one likely attribute, attribute value, or term commonly associated with a semantic category, and the determined attribute/attribute value/term is transmitted for presentation with an order representative of the respective calculated weights. | 10-18-2012 |
20120265787 | IDENTIFYING QUERY FORMULATION SUGGESTIONS FOR LOW-MATCH QUERIES - Systems, methods and computer-storage media are provided for identifying low-match search queries and determining comparable item matches to suggest to the user in response to a low-match query. “Low-match queries” are queries for which an insufficient number of exact item matches are available. In embodiments, exact and/or comparable item matches may be determined via semantic analysis. Also provided are systems, methods and computer-storage media for informing the user, by way of a presented indicator, or the like, that a presented item was selected for presentation based upon a similarity metric rather than being determined an exact match for the input query. | 10-18-2012 |
20130179419 | RETRIEVAL OF PREFIX COMPLETIONS BY WAY OF WALKING NODES OF A TRIE DATA STRUCTURE - Technologies pertaining to providing completions to proffered prefixes are disclosed herein. A suggested completion to a proffered prefix is retrieved by walking nodes of a trie data structure, wherein a node includes one or more characters that are used to extend a character sequence represented by its parent. Each node in the trie data structure is assigned a score, wherein the score maps to a best score assigned to its descendants. The nodes of the trie data structure are sorted based upon score, and the nodes are walked based upon scores assigned thereto. | 07-11-2013 |
20130297307 | DICTATION WITH INCREMENTAL RECOGNITION OF SPEECH - A dictation module is described herein which receives and interprets a complete utterance of the user in incremental fashion, that is, one incremental portion at a time. The dictation module also provides rendered text in incremental fashion. The rendered text corresponds to the dictation module's interpretation of each incremental portion. The dictation module also allows the user to modify any part of the rendered text, as it becomes available. In one case, for instance, the dictation module provides a marking menu which includes multiple options by which a user can modify a selected part of the rendered text. The dictation module also uses the rendered text (as modified or unmodified by the user using the marking menu) to adjust one or more models used by the dictation module to interpret the user's utterance. | 11-07-2013 |
20150269175 | Query Interpretation and Suggestion Generation under Various Constraints - A query processing system (QPS) is described herein for interpreting a user's input query against a structured knowledge base, to provide an output result. The output result may include one or more query suggestions, each providing a recommendation as to how a user may refine his or her query. In addition, or alternatively, the output result may specify one or more entity items which satisfy the user's query. In interpreting the user's query, the QPS may rely on a collection of rule modules which identify and process different types of constraints that may be expressed in the input query, including numeric constraints, nested constraints, comparison-based constraints, and so on. | 09-24-2015 |
20160004732 | EXTENDED MEMORY SYSTEM - Described herein are technologies that are configured to assist a user in recollecting information about people, places, and things. Computer-readable data is captured, and contextual data that temporally corresponds to the computer-readable data is also captured. In a database, the computer-readable data is indexed by the contextual data. Thus, when a query is received that references the contextual data, the computer-readable data is retrieved. | 01-07-2016 |
20160026385 | Hover Controlled User Interface Element - Example apparatus and methods concern controlling a hover-sensitive input/output interface. One example apparatus includes a proximity detector that detects an object in a hover-space associated with the input/output interface. The apparatus produces characterization data concerning the object. The characterization data may be independent of where in the hover-space the object is located. The apparatus selectively controls the activation, display, and deactivation of user interface elements displayed by the apparatus on the input/output interface as a function of the characterization data and interface state. Selectively controlling the activation, display, and deactivation of the user interface elements includes allocating display space on the input/output interface to the user interface elements when they are needed for an operation on the apparatus and selectively reclaiming space on the input/output interface allocated to the user interface elements when they are not needed for an operation on the apparatus. | 01-28-2016 |
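Several of the entries above (e.g., 20120246133 and 20130179419) retrieve completions by walking a trie whose nodes each carry the best score assigned to any descendant, so the highest-scoring completions can be found without exploring the whole structure. The sketch below illustrates that idea in Python; the phrases, scores, and function names are invented for illustration and are not taken from the filings.

```python
import heapq

class TrieNode:
    def __init__(self):
        self.children = {}          # character -> TrieNode
        self.score = float("-inf")  # best score among all descendants
        self.phrase_score = None    # score if a phrase ends at this node

def build_trie(phrases):
    """Build a trie where every node records the best score reachable below it."""
    root = TrieNode()
    for phrase, score in phrases:
        node = root
        node.score = max(node.score, score)
        for ch in phrase:
            node = node.children.setdefault(ch, TrieNode())
            node.score = max(node.score, score)
        node.phrase_score = score
    return root

def top_completions(root, prefix, k=3):
    """Walk nodes in best-score order to return the k highest-scoring completions."""
    node = root
    for ch in prefix:
        node = node.children.get(ch)
        if node is None:
            return []
    # Priority queue ordered by the best score reachable under each node.
    heap = [(-node.score, prefix, node)]
    results = []
    while heap and len(results) < k:
        neg_score, text, cur = heapq.heappop(heap)
        if cur is None:                      # sentinel: a completed phrase
            results.append((text, -neg_score))
            continue
        if cur.phrase_score is not None:
            heapq.heappush(heap, (-cur.phrase_score, text, None))
        for ch, child in cur.children.items():
            heapq.heappush(heap, (-child.score, text + ch, child))
    return results
```

Because each node's score bounds everything beneath it, the best-first walk can stop as soon as `k` phrases have been emitted.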
Patent application number | Description | Published |
20120254541 | METHODS AND APPARATUS FOR UPDATING DATA IN PASSIVE VARIABLE RESISTIVE MEMORY - Methods and apparatus for updating data in passive variable resistive memory (PVRM) are provided. In one example, a method for updating data stored in PVRM is disclosed. The method includes updating a memory block of a plurality of memory blocks in a cache hierarchy without invalidating the memory block. The updated memory block may be copied from the cache hierarchy to a write through buffer. Additionally, the method includes writing the updated memory block to the PVRM, thereby updating the data in the PVRM. | 10-04-2012 |
20120311228 | METHOD AND APPARATUS FOR PERFORMING MEMORY WEAR-LEVELING USING PASSIVE VARIABLE RESISTIVE MEMORY WRITE COUNTERS - Method and apparatus for performing wear-leveling using passive variable resistive memory (PVRM)-based write counters are provided. In one example, a method for performing wear-leveling using PVRM-based write counters is disclosed. The method includes associating a logical address of a memory array with a physical address of the memory array via at least one mapping table. Additionally, the method includes, in response to writing to the physical address of the memory array, incrementally updating at least one PVRM-based write counter associated with the physical address of the memory array. The at least one PVRM-based write counter may be incrementally updated by varying an amount of resistance stored in the at least one PVRM-based write counter. | 12-06-2012 |
20130013864 | MEMORY ACCESS MONITOR - For each access request received at a shared cache of the data processing device, a memory access pattern (MAP) monitor predicts which of the memory banks, and corresponding row buffers, would be accessed by the access request if the requesting thread were the only thread executing at the data processing device. By recording predicted accesses over time for a number of access requests, the MAP monitor develops a pattern of predicted memory accesses by executing threads. The pattern can be employed to assign resources at the shared cache, thereby managing memory more efficiently. | 01-10-2013 |
20130013866 | SPATIAL LOCALITY MONITOR - A method includes updating a first tag access indicator of a storage structure. The tag access indicator indicates a number of accesses by a first thread executing on a processor to a memory resource for a portion of memory associated with a memory tag. The updating is in response to an access to the memory resource for a memory request associated with the first thread to the portion of memory associated with the memory tag. The method may include updating a first sum indicator of the storage structure indicating a sum of numbers of accesses to the memory resource being associated with a first access indicator of the storage structure for the first thread, the updating being in response to the access to the memory resource. | 01-10-2013 |
20130145101 | Method and Apparatus for Controlling an Operating Parameter of a Cache Based on Usage - A method and apparatus are provided for controlling power consumed by a cache. The method comprises monitoring usage of a cache and providing a cache usage signal responsive thereto. The cache usage signal may be used to vary an operating parameter of the cache. The apparatus comprises a cache usage monitor and a controller. The cache usage monitor is adapted to monitor a cache and provide a cache usage signal responsive thereto. The controller is adapted to vary the operating parameter of the cache in response to the cache usage signal. | 06-06-2013 |
20130145104 | METHOD AND APPARATUS FOR CONTROLLING CACHE REFILLS - A method and apparatus are provided for controlling a cache. The cache includes a plurality of storage locations, each having a priority associated therewith, and wherein the cache evicts data from one or more of the storage locations based on the priority associated therewith. The method comprises: storing historical information regarding data being evicted from the cache; retrieving data from a secondary memory in response to a miss in the cache; assigning a priority to the retrieved data based on the historical information; and storing the retrieved data in the cache with an indication of the assigned priority. | 06-06-2013 |
20150317249 | MEMORY ACCESS MONITOR - For each access request received at a shared cache of the data processing device, a memory access pattern (MAP) monitor predicts which of the memory banks, and corresponding row buffers, would be accessed by the access request if the requesting thread were the only thread executing at the data processing device. By recording predicted accesses over time for a number of access requests, the MAP monitor develops a pattern of predicted memory accesses by executing threads. The pattern can be employed to assign resources at the shared cache, thereby managing memory more efficiently. | 11-05-2015 |
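Entry 20130145104 assigns a priority to data refilled after a cache miss based on a history of recent evictions. The toy Python cache below sketches one way such a policy could work; the structure and policy details here are illustrative assumptions, not the patented design. Addresses found in the eviction history are refilled at the most-recently-used position, while unknown addresses are refilled at the least-recently-used position so that streaming data is the first candidate for eviction.

```python
from collections import OrderedDict, deque

class HistoryAwareCache:
    """Toy cache that derives refill priority from eviction history (a sketch)."""

    def __init__(self, capacity, history_size=8):
        self.capacity = capacity
        self.lines = OrderedDict()                  # address -> data, LRU -> MRU
        self.history = deque(maxlen=history_size)   # recently evicted addresses

    def access(self, addr, fetch):
        """Return (data, hit). `fetch` stands in for the secondary memory."""
        if addr in self.lines:
            self.lines.move_to_end(addr)            # hit: promote to MRU
            return self.lines[addr], True
        data = fetch(addr)                          # miss: fetch from secondary memory
        if len(self.lines) >= self.capacity:
            victim, _ = self.lines.popitem(last=False)  # evict the LRU line
            self.history.append(victim)
        self.lines[addr] = data                     # inserted at MRU by default
        if addr not in self.history:
            # Unknown address: low refill priority, so place at the LRU end.
            self.lines.move_to_end(addr, last=False)
        return data, False
```

In this sketch, a line that was evicted and quickly requested again is treated as proven useful and protected, whereas a first-time line must earn a hit before it is promoted.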
Patent application number | Description | Published |
20140101405 | REDUCING COLD TLB MISSES IN A HETEROGENEOUS COMPUTING SYSTEM - Methods and apparatuses are provided for avoiding cold translation lookaside buffer (TLB) misses in a computer system. A typical system is configured as a heterogeneous computing system having at least one central processing unit (CPU) and one or more graphic processing units (GPUs) that share a common memory address space. Each processing unit (CPU and GPU) has an independent TLB. When offloading a task from a particular CPU to a particular GPU, translation information is sent along with the task assignment. The translation information allows the GPU to load the address translation data into the TLB associated with the one or more GPUs prior to executing the task. Preloading the TLB of the GPUs reduces or avoids cold TLB misses that could otherwise occur without the benefits offered by the present disclosure. | 04-10-2014 |
20140181417 | CACHE COHERENCY USING DIE-STACKED MEMORY DEVICE WITH LOGIC DIE - A die-stacked memory device implements an integrated coherency manager to offload cache coherency protocol operations for the devices of a processing system. The die-stacked memory device includes a set of one or more stacked memory dies and a set of one or more logic dies. The one or more logic dies implement hardware logic providing a memory interface and the coherency manager. The memory interface operates to perform memory accesses in response to memory access requests from the coherency manager and the one or more external devices. The coherency manager comprises logic to perform coherency operations for shared data stored at the stacked memory dies. Due to the integration of the logic dies and the memory dies, the coherency manager can access shared data stored in the memory dies and perform related coherency operations with higher bandwidth and lower latency and power consumption compared to the external devices. | 06-26-2014 |
20140181428 | QUALITY OF SERVICE SUPPORT USING STACKED MEMORY DEVICE WITH LOGIC DIE - A die-stacked memory device implements an integrated QoS manager to provide centralized QoS functionality in furtherance of one or more specified QoS objectives for the sharing of the memory resources by other components of the processing system. The die-stacked memory device includes a set of one or more stacked memory dies and one or more logic dies. The logic dies implement hardware logic for a memory controller and the QoS manager. The memory controller is coupleable to one or more devices external to the set of one or more stacked memory dies and operates to service memory access requests from the one or more external devices. The QoS manager comprises logic to perform operations in furtherance of one or more QoS objectives, which may be specified by a user, by an operating system, hypervisor, job management software, or other application being executed, or specified via hardcoded logic or firmware. | 06-26-2014 |
20140181453 | Processor with Host and Slave Operating Modes Stacked with Memory - A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode. | 06-26-2014 |
20140181457 | Write Endurance Management Techniques in the Logic Layer of a Stacked Memory - A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layer. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices to internal memory locations. | 06-26-2014 |
20140181458 | DIE-STACKED MEMORY DEVICE PROVIDING DATA TRANSLATION - A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like. Due to the tight integration of the logic dies and the memory dies, the data translation controller can perform data translation operations with higher bandwidth and lower latency and power consumption compared to operations performed by devices external to the die-stacked memory device. | 06-26-2014 |
20140223445 | Selecting a Resource from a Set of Resources for Performing an Operation - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation. | 08-07-2014 |
20160055005 | System and Method for Page-Conscious GPU Instruction - Embodiments disclose a system and method for reducing virtual address translation latency in a wide execution engine that implements virtual memory. One example method comprises receiving a wavefront, classifying the wavefront into a subset based on classification criteria selected to reduce virtual address translation latency associated with a memory support structure, and scheduling the wavefront for processing based on the classifying. | 02-25-2016 |
20160062803 | SELECTING A RESOURCE FROM A SET OF RESOURCES FOR PERFORMING AN OPERATION - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the resource is not available for performing the operation and until another resource is selected for performing the operation, the selection mechanism identifies a next resource in the table and selects the next resource for performing the operation when the next resource is available for performing the operation. | 03-03-2016 |
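The two resource-selection entries (20140223445 and 20160062803) describe looking up a starting entry in a chosen table and, when that resource is busy, walking to the next entries until an available one is found. A compact sketch of that loop follows; the mapping from key to starting entry and the wrap-around walk are assumptions for illustration, since the abstracts do not specify them.

```python
def select_resource(tables, table_index, key, is_available):
    """Select a resource from the chosen table: start at the entry the key maps
    to, then walk forward (with wrap-around) until an available resource is
    found. Returns None if every resource in the table is busy."""
    table = tables[table_index]
    start = key % len(table)    # illustrative mapping from key to table entry
    for offset in range(len(table)):
        resource = table[(start + offset) % len(table)]
        if is_available(resource):
            return resource
    return None
```

The walk guarantees that a request is never stalled on one busy resource when a sibling in the same table could serve it.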
Patent application number | Description | Published |
20090074073 | CODING OF MOTION VECTOR INFORMATION - Techniques and tools for encoding and decoding motion vector information for video images are described. For example, a video encoder yields an extended motion vector code by jointly coding, for a set of pixels, a switch code, motion vector information, and a terminal symbol indicating whether subsequent data is encoded for the set of pixels. In another aspect, an encoder/decoder selects motion vector predictors for macroblocks. In another aspect, a video encoder/decoder uses hybrid motion vector prediction. In another aspect, a video encoder/decoder signals a motion vector mode for a predicted image. In another aspect, a video decoder decodes a set of pixels by receiving an extended motion vector code, which reflects joint encoding of motion information together with intra/inter-coding information and a terminal symbol. The decoder determines whether subsequent data exists for the set of pixels based on e.g., the terminal symbol. | 03-19-2009 |
20090262835 | SKIP MACROBLOCK CODING - Various techniques and tools for encoding and decoding (e.g., in a video encoder/decoder) binary information (e.g., skipped macroblock information) are described. In some embodiments, the binary information is arranged in a bit plane, and the bit plane is coded at the picture/frame layer. The encoder and decoder process the binary information and, in some embodiments, switch coding modes. For example, the encoder and decoder use normal, row-skip, column-skip, or differential modes, or other and/or additional modes. In some embodiments, the encoder and decoder define a skipped macroblock as a predicted macroblock whose motion is equal to its causally predicted motion and which has zero residual error. In some embodiments, the encoder and decoder use a raw coding mode to allow for low-latency applications. | 10-22-2009 |
20120213280 | CODING OF MOTION VECTOR INFORMATION - Techniques and tools for encoding and decoding motion vector information for video images are described. For example, a video encoder yields an extended motion vector code by jointly coding, for a set of pixels, a switch code, motion vector information, and a terminal symbol indicating whether subsequent data is encoded for the set of pixels. In another aspect, an encoder/decoder selects motion vector predictors for macroblocks. In another aspect, a video encoder/decoder uses hybrid motion vector prediction. In another aspect, a video encoder/decoder signals a motion vector mode for a predicted image. In another aspect, a video decoder decodes a set of pixels by receiving an extended motion vector code, which reflects joint encoding of motion information together with intra/inter-coding information and a terminal symbol. The decoder determines whether subsequent data exists for the set of pixels based on e.g., the terminal symbol. | 08-23-2012 |
20130010861 | USE OF FRAME CACHING TO IMPROVE PACKET LOSS RECOVERY - Various new and non-obvious apparatus and methods for using frame caching to improve packet loss recovery are disclosed. One of the disclosed embodiments is a method for using periodical and synchronized frame caching within an encoder and its corresponding decoder. When the decoder discovers packet loss, it informs the encoder which then generates a frame based on one of the shared frames stored at both the encoder and the decoder. When the decoder receives this generated frame it can decode it using its locally cached frame. | 01-10-2013 |
20130235932 | SKIP MACROBLOCK CODING - Various techniques and tools for encoding and decoding (e.g., in a video encoder/decoder) binary information (e.g., skipped macroblock information) are described. In some embodiments, the binary information is arranged in a bit plane, and the bit plane is coded at the picture/frame layer. The encoder and decoder process the binary information and, in some embodiments, switch coding modes. For example, the encoder and decoder use normal, row-skip, column-skip, or differential modes, or other and/or additional modes. In some embodiments, the encoder and decoder define a skipped macroblock as a predicted macroblock whose motion is equal to its causally predicted motion and which has zero residual error. In some embodiments, the encoder and decoder use a raw coding mode to allow for low-latency applications. | 09-12-2013 |
20130301704 | VIDEO CODING / DECODING WITH RE-ORIENTED TRANSFORMS AND SUB-BLOCK TRANSFORM SIZES - Techniques and tools for video coding/decoding with sub-block transform coding/decoding and re-oriented transforms are described. For example, a video encoder adaptively switches between 8×8, 8×4, and 4×8 DCTs when encoding 8×8 prediction residual blocks; a corresponding video decoder switches between 8×8, 8×4, and 4×8 inverse DCTs during decoding. The video encoder may determine the transform sizes as well as switching levels (e.g., frame, macroblock, or block) in a closed loop evaluation of the different transform sizes and switching levels. When a video encoder or decoder uses spatial extrapolation from pixel values in a causal neighborhood to predict pixel values of a block of pixels, the encoder/decoder can use a re-oriented transform to address non-stationarity of prediction residual values. | 11-14-2013 |
20130301732 | VIDEO CODING / DECODING WITH MOTION RESOLUTION SWITCHING AND SUB-BLOCK TRANSFORM SIZES - Techniques and tools for video coding/decoding with motion resolution switching and sub-block transform coding/decoding are described. For example, a video encoder adaptively switches the resolution of motion estimation and compensation between quarter-pixel and half-pixel resolutions; a corresponding video decoder adaptively switches the resolution of motion compensation between quarter-pixel and half-pixel resolutions. For sub-block transform sizes, for example, a video encoder adaptively switches between 8×8, 8×4, and 4×8 DCTs when encoding 8×8 prediction residual blocks; a corresponding video decoder switches between 8×8, 8×4, and 4×8 inverse DCTs during decoding. | 11-14-2013 |
20140133583 | USE OF FRAME CACHING TO IMPROVE PACKET LOSS RECOVERY - Various new and non-obvious apparatus and methods for using frame caching to improve packet loss recovery are disclosed. One of the disclosed embodiments is a method for using periodical and synchronized frame caching within an encoder and its corresponding decoder. When the decoder discovers packet loss, it informs the encoder which then generates a frame based on one of the shared frames stored at both the encoder and the decoder. When the decoder receives this generated frame it can decode it using its locally cached frame. | 05-15-2014 |
20140161191 | CODING OF MOTION VECTOR INFORMATION - Techniques and tools for encoding and decoding motion vector information for video images are described. For example, a video encoder yields an extended motion vector code by jointly coding, for a set of pixels, a switch code, motion vector information, and a terminal symbol indicating whether subsequent data is encoded for the set of pixels. In another aspect, an encoder/decoder selects motion vector predictors for macroblocks. In another aspect, a video encoder/decoder uses hybrid motion vector prediction. In another aspect, a video encoder/decoder signals a motion vector mode for a predicted image. In another aspect, a video decoder decodes a set of pixels by receiving an extended motion vector code, which reflects joint encoding of motion information together with intra/inter-coding information and a terminal symbol. The decoder determines whether subsequent data exists for the set of pixels based on e.g., the terminal symbol. | 06-12-2014 |
20140286420 | SKIP MACROBLOCK CODING - Various techniques and tools for encoding and decoding (e.g., in a video encoder/decoder) binary information (e.g., skipped macroblock information) are described. In some embodiments, the binary information is arranged in a bit plane, and the bit plane is coded at the picture/frame layer. The encoder and decoder process the binary information and, in some embodiments, switch coding modes. For example, the encoder and decoder use normal, row-skip, column-skip, or differential modes, or other and/or additional modes. In some embodiments, the encoder and decoder define a skipped macroblock as a predicted macroblock whose motion is equal to its causally predicted motion and which has zero residual error. In some embodiments, the encoder and decoder use a raw coding mode to allow for low-latency applications. | 09-25-2014 |
20140307776 | VIDEO CODING / DECODING WITH RE-ORIENTED TRANSFORMS AND SUB-BLOCK TRANSFORM SIZES - Techniques and tools for video coding/decoding with sub-block transform coding/decoding and re-oriented transforms are described. For example, a video encoder adaptively switches between 8×8, 8×4, and 4×8 DCTs when encoding 8×8 prediction residual blocks; a corresponding video decoder switches between 8×8, 8×4, and 4×8 inverse DCTs during decoding. The video encoder may determine the transform sizes as well as switching levels (e.g., frame, macroblock, or block) in a closed loop evaluation of the different transform sizes and switching levels. When a video encoder or decoder uses spatial extrapolation from pixel values in a causal neighborhood to predict pixel values of a block of pixels, the encoder/decoder can use a re-oriented transform to address non-stationarity of prediction residual values. | 10-16-2014 |
20150063459 | VIDEO CODING / DECODING WITH MOTION RESOLUTION SWITCHING AND SUB-BLOCK TRANSFORM SIZES - Techniques and tools for video coding/decoding with motion resolution switching and sub-block transform coding/decoding are described. For example, a video encoder adaptively switches the resolution of motion estimation and compensation between quarter-pixel and half-pixel resolutions; a corresponding video decoder adaptively switches the resolution of motion compensation between quarter-pixel and half-pixel resolutions. For sub-block transform sizes, for example, a video encoder adaptively switches between 8×8, 8×4, and 4×8 DCTs when encoding 8×8 prediction residual blocks; a corresponding video decoder switches between 8×8, 8×4, and 4×8 inverse DCTs during decoding. | 03-05-2015 |
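The closed-loop transform-size selection in the two entries above can be sketched as evaluating each candidate size and keeping the cheapest. The cost measure below (count of significant DCT coefficients) is a stand-in for the real rate-distortion criterion, and the simple separable DCT is illustrative.

```python
import math

def dct_1d(v):
    n = len(v)
    return [sum(v[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                for x in range(n))
            * (math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n))
            for u in range(n)]

def dct_2d(block):
    rows = [dct_1d(r) for r in block]          # transform each row
    cols = [dct_1d(list(c)) for c in zip(*rows)]  # then each column
    return [list(r) for r in zip(*cols)]

def split(block, h, w):
    """Split an 8x8 block into h-by-w sub-blocks."""
    return [[row[x:x + w] for row in block[y:y + h]]
            for y in range(0, 8, h) for x in range(0, 8, w)]

def cost(block, h, w):
    """Proxy rate cost: significant coefficients after the sub-block DCTs."""
    return sum(1 for sub in split(block, h, w)
               for row in dct_2d(sub) for c in row if abs(c) > 0.5)

def choose_transform(block):
    """Closed-loop choice among 8x8, 8x4, and 4x8 for an 8x8 residual."""
    sizes = {"8x8": (8, 8), "8x4": (8, 4), "4x8": (4, 8)}
    return min(sizes, key=lambda k: cost(block, *sizes[k]))
```

On a flat residual the single 8x8 transform wins (one DC coefficient versus one per sub-block), matching the intuition that smaller sub-block transforms pay off only when the residual is non-stationary.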
20150288962 | SKIP MACROBLOCK CODING - Various techniques and tools for encoding and decoding (e.g., in a video encoder/decoder) binary information (e.g., skipped macroblock information) are described. In some embodiments, the binary information is arranged in a bit plane, and the bit plane is coded at the picture/frame layer. The encoder and decoder process the binary information and, in some embodiments, switch coding modes. For example, the encoder and decoder use normal, row-skip, column-skip, or differential modes, or other and/or additional modes. In some embodiments, the encoder and decoder define a skipped macroblock as a predicted macroblock whose motion is equal to its causally predicted motion and which has zero residual error. In some embodiments, the encoder and decoder use a raw coding mode to allow for low-latency applications. | 10-08-2015 |
20150334416 | VARIABLE CODING RESOLUTION IN VIDEO CODEC - A video codec provides for encoding and decoding pictures of a video sequence at various coded resolutions, such that pictures can be encoded at lower coded resolutions based on bit rate or other constraints while maintaining a consistent display resolution. The video codec employs a coding syntax where a maximum coded resolution is signaled at the sequence level of the syntax hierarchy, whereas a lower coded resolution is signaled at the entry point level for a segment of one or more intra-coded frames and frames predictively encoded based thereon. This allows the use of a separate out-of-loop resampler after the decoder to up-sample the pictures to the display resolution. | 11-19-2015 |
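The out-of-loop resampler described above can be sketched minimally as a nearest-neighbor up-sampler from the coded resolution to the display resolution; a real codec would use a proper interpolation filter.

```python
def upsample(picture, out_w, out_h):
    """Nearest-neighbor up-sampling of a decoded picture (list of rows)
    to the display resolution, run after the decoder, outside the loop."""
    in_h, in_w = len(picture), len(picture[0])
    return [[picture[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```

Because the resampler sits outside the decoding loop, the decoder's reference pictures stay at the (lower) coded resolution while the display always sees a consistent resolution.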
Patent application number | Description | Published |
20130211253 | On-axis Shear Wave Characterization with Ultrasound - Shear wave imaging is provided in medical diagnostic ultrasound. The generation of a shear wave with acoustic energy forms a pseudo shear wave (an apparent wave) traveling towards the transducer. Transmission and reception along a single line may be used to detect the pseudo shear wave traveling towards the transducer. The shear velocity or other shear characteristic may be determined without reception along multiple laterally spaced scan lines. A single transmission to generate the shear wave may be used. With or without multi-beam receive, calculating shear velocity along a single line allows rapid determination. | 08-15-2013 |
20130225994 | High Intensity Focused Ultrasound Registration with Imaging - High intensity focused ultrasound (HIFU) is registered with imaging. The effects of transmission from a HIFU transducer, such as a rise in temperature, are detected by a separate imaging system. By using multiple transmissions, a plurality of locations of the transmissions from the HIFU transducer are determined within the imaging system coordinates. A transform relating the imaging system coordinates to the HIFU transducer coordinates is determined from the detected effects. The transform may be used to relate locations indicated in images of the imaging system with coordinates of the HIFU transducer for application of HIFU. The imaging system may not have to scan the HIFU transducer or fiducials and a fixed relationship may not be needed. | 08-29-2013 |
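The transform relating imaging-system coordinates to HIFU transducer coordinates can be estimated from the detected heating locations. The application does not specify the fitting method; a standard 2-D Procrustes (rigid rotation plus translation) least-squares fit is one plausible sketch.

```python
import math

def fit_rigid_transform(src, dst):
    """src, dst: lists of corresponding (x, y) points (e.g., commanded HIFU
    focal locations and their detected positions in imaging coordinates).
    Returns (theta, tx, ty) such that dst ~= R(theta) @ src + t."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Rotation angle from the cross-covariance of the centered point sets.
    s_cos = sum((sx - csx) * (dx - cdx) + (sy - csy) * (dy - cdy)
                for (sx, sy), (dx, dy) in zip(src, dst))
    s_sin = sum((sx - csx) * (dy - cdy) - (sy - csy) * (dx - cdx)
                for (sx, sy), (dx, dy) in zip(src, dst))
    theta = math.atan2(s_sin, s_cos)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

def apply_transform(theta, tx, ty, p):
    """Map a point through the fitted rigid transform."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta) + tx,
            x * math.sin(theta) + y * math.cos(theta) + ty)
```

With three or more non-collinear detected locations, the fit is over-determined, which is why multiple transmissions are used rather than a single one.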
20130274589 | System Scan Timing by Ultrasound Contrast Agent Study - Scan timing of a contrast agent study is provided. Ultrasound is used to determine the timing of contrast agent inflow and/or outflow. The timing based on the ultrasound scanning controls scanning for MR or CT imaging. MR or CT contrast agent imaging may be synchronized using the ultrasound-measured contrast agent flow. | 10-17-2013 |
20130281877 | Skin Temperature Control in Therapeutic Medical Ultrasound - Skin temperature is measured during medical ultrasound therapy. The temperature of a standoff between the transducer and skin is monitored. The temperature of the standoff relates to the skin temperature. The temperature, whether skin or standoff temperature, is used to control the therapy. The temperature feedback may allow for increased or optimized therapy levels. | 10-24-2013 |
20130296743 | Ultrasound for Therapy Control or Monitoring - Therapy control and/or monitoring is performed with an ultrasound scanner. The ultrasound scanner detects temperature to monitor therapy and performs HIFU beam location refocusing of the therapy system based on the temperature. The monitoring is synchronized with the therapy using a trigger output of the ultrasound scanner. The trigger output responds to a scan sequence of the ultrasound scanner. To meet a given therapy plan, the scan sequence is customized, resulting in a customized trigger sequence. Three-dimensional or multi-planar reconstruction rendering is used to represent temperature for monitoring feedback. The temperature at locations not being treated may be monitored. If the temperature has an undesired characteristic (e.g., too high), then the therapy is controlled by ceasing, at least temporarily. | 11-07-2013 |
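The temperature-based control rule above reduces to a simple gate: therapy continues only while every monitored location stays within limits. The 43 °C threshold and the interface below are assumptions for illustration, not values from the application.

```python
def therapy_enabled(temperatures_c, limit_c=43.0):
    """Return False (cease therapy, at least temporarily) if any monitored
    location's temperature shows the undesired characteristic of exceeding
    the limit; True otherwise."""
    return all(t <= limit_c for t in temperatures_c)
```

In practice this check would run each time the synchronized monitoring scan produces a fresh temperature map, including at locations not being treated.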
20130303880 | Thermally Tagged Motion Tracking for Medical Treatment - Motion tracking is performed with a thermal pattern within a patient. A pattern of different temperature is created in tissue, such as warming up tissue in a checkerboard pattern. The temperature pattern is used over time to track motion of the tissue. The tracked motion may be used to treat the tissue throughout at least part of a periodic cycle. | 11-14-2013 |
Patent application number | Description | Published |
20090327737 | TECHNIQUES FOR ENSURING AUTHENTICATION AND INTEGRITY OF COMMUNICATIONS - Techniques are described for ensuring data integrity and authentication of received messages. One technique includes sending a request from a first module to a second module in which the request includes a first portion that is a shared secret encrypted with a public key, obtaining by the second module a private key from a secure and trusted information store, such as a license information store, including license information or other application specific information for the first module, using the private key to decrypt the first portion and obtain the shared secret, sending a response from the second module to the first module in which the response includes authentication data and at least one data item used with the shared secret to determine the authentication data, and performing by the first module verification processing to verify the authentication data included in the response. | 12-31-2009 |
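The response-verification leg of the exchange above can be sketched with Python's standard library: the second module computes authentication data over a data item using the shared secret, and the first module recomputes and compares it. The public-key encryption of the shared secret (the request leg) requires a crypto library and is omitted; HMAC-SHA256 is an assumed choice of MAC, not one named in the application.

```python
import hmac, hashlib

def make_response(shared_secret: bytes, data_item: bytes):
    """Second module: build a response containing the data item and
    authentication data derived from it with the shared secret."""
    auth = hmac.new(shared_secret, data_item, hashlib.sha256).digest()
    return {"data_item": data_item, "auth": auth}

def verify_response(shared_secret: bytes, response) -> bool:
    """First module: recompute the authentication data and verify it."""
    expected = hmac.new(shared_secret, response["data_item"],
                        hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, response["auth"])
```

Verification fails both when the data item is tampered with and when the responder never held the correct shared secret, giving integrity and authentication together.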
20100058478 | SOFTWARE ANTI-PIRACY PROTECTION - Licensing aspects of vendor software packages can be protected with reduced user interaction and effort by automating licensing exploit identification and, if allowed, exploit correction. Automating licensing exploit detection ensures that known exploits are more quickly and efficiently discovered to help maintain genuine software status. Minimizing user interaction in licensing exploit detection and correction involves less disruption to users and generally supports increased user satisfaction with vendor software package usage. | 03-04-2010 |
20110030062 | VERSION-BASED SOFTWARE PRODUCT ACTIVATION - A software license for a particular version of a software product on a computing device includes both a branding identifier that identifies the particular version of the software product and component dependency information that identifies one or more aspects of the particular version of the software product. To activate a software product on the computing device, the branding identifier is compared to a portion of the software product on the computing device. If the branding identifier matches the portion of the software product, then the component dependency information is compared to one or more aspects of the software product on the computing device. If the component dependency information matches the one or more aspects of the software product then the software product is activated. Otherwise, the license state of the software product is kept unchanged. | 02-03-2011 |
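The two-stage check above can be sketched as follows. The field names (`branding_id`, `component_dependencies`, `components`) are invented for illustration; the application does not define a concrete data layout.

```python
def activate(license_info, product):
    """Two-stage version-based activation: branding identifier first,
    then component dependency information. Returns the license state."""
    if license_info["branding_id"] != product["branding_id"]:
        return "unchanged"      # license targets a different version
    deps = license_info["component_dependencies"]
    if all(product["components"].get(name) == ver
           for name, ver in deps.items()):
        return "activated"
    return "unchanged"          # dependencies mismatch: state kept unchanged
```

Tying activation to component-level aspects, not just the branding identifier, prevents a license for one version from activating a repackaged or upgraded build.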
20110072513 | PROVISIONAL ADMINISTRATOR PRIVILEGES - A system grants “provisional privileges” to a user request for the purpose of provisionally performing a requested transaction. If the provisionally-performed transaction does not put the system in a degraded state, the transaction is authorized despite the user request having inadequate privileges originally. | 03-24-2011 |
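The provisional-privilege pattern above (which recurs in the later continuations of this application) can be sketched as: run the requested transaction against a copy of the system state, check that the result is not degraded, and only then commit, despite the caller lacking the privilege up front. The degradation check shown (no protected keys removed) is a placeholder assumption.

```python
import copy

def run_with_provisional_privileges(state, transaction, protected_keys):
    """Provisionally perform a transaction on a copy of the state; commit
    only if the system would not be left in a degraded state."""
    trial = copy.deepcopy(state)
    transaction(trial)                   # provisional execution on the copy
    degraded = any(k not in trial for k in protected_keys)
    if degraded:
        return state, False              # reject: original state untouched
    return trial, True                   # authorize: commit the trial state
```

The key property is that rejection is free: because the transaction ran on a copy, an under-privileged request that would degrade the system never touches the real state.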
20120240221 | PROVISIONAL ADMINISTRATOR PRIVILEGES - A system grants “provisional privileges” to a user request for the purpose of provisionally performing a requested transaction. If the provisionally-performed transaction does not put the system in a degraded state, the transaction is authorized despite the user request having inadequate privileges originally. | 09-20-2012 |
20140109218 | PROVISIONAL ADMINISTRATOR PRIVILEGES - A system grants “provisional privileges” to a user request for the purpose of provisionally performing a requested transaction. If the provisionally-performed transaction does not put the system in a degraded state, the transaction is authorized despite the user request having inadequate privileges originally. | 04-17-2014 |
20150163058 | TECHNIQUES FOR ENSURING AUTHENTICATION AND INTEGRITY OF COMMUNICATIONS - Techniques are described for ensuring data integrity and authentication of received messages. One technique includes sending a request from a first module to a second module in which the request includes a first portion that is a shared secret encrypted with a public key, obtaining by the second module a private key from a secure and trusted information store, such as a license information store, including license information or other application specific information for the first module, using the private key to decrypt the first portion and obtain the shared secret, sending a response from the second module to the first module in which the response includes authentication data and at least one data item used with the shared secret to determine the authentication data, and performing by the first module verification processing to verify the authentication data included in the response. | 06-11-2015 |
20150261957 | PROVISIONAL ADMINISTRATOR PRIVILEGES - A system grants “provisional privileges” to a user request for the purpose of provisionally performing a requested transaction. If the provisionally-performed transaction does not put the system in a degraded state, the transaction is authorized despite the user request having inadequate privileges originally. | 09-17-2015 |