Patent application number | Description | Published |
20080205771 | CLASSIFYING COMPLETE AND INCOMPLETE DATE-TIME INFORMATION - A method for automatically classifying images into a final set of events including receiving a first plurality of images having date-time information and a second plurality of images with incomplete date-time information; determining one or more time differences of the first plurality of images based on date-time clustering of the images and classifying the first plurality of images into a first set of possible events; analyzing the second plurality of images using scene content and metadata cues and selecting images which correspond to different events in the first set of possible events and combining them into their corresponding possible events to thereby produce a second set of possible events; and using image scene content to verify the second set of possible events and to change the classification of images which correspond to different possible events to thereby provide the final set of events. | 08-28-2008 |
20080205772 | REPRESENTATIVE IMAGE SELECTION BASED ON HIERARCHICAL CLUSTERING - In a computer-mediated method for providing representative images, image records are classified spatio-temporally into groups. In each group, image records are partitioned into clusters and the hierarchically highest cluster is ascertained. The partitioning is between a hierarchy of feature clusters and a remainder cluster, based on a predetermined plurality of saliency features. Feature clusters each have one or more of the saliency features. The remainder cluster lacks the saliency features. Feature clusters are each exclusive of the saliency features of any higher clusters in the hierarchy and non-exclusive of the saliency features of any lower feature clusters in the hierarchy. A representative image of each group is designated from respective image records based on: the respective saliency feature of the highest cluster when the highest cluster is a feature cluster and independent of the saliency features when the highest cluster is the remainder cluster. | 08-28-2008 |
20080208791 | RETRIEVING IMAGES BASED ON AN EXAMPLE IMAGE - A method is disclosed for retrieving images relevant to an example image from among a plurality of stored images, each of the stored images being associated with metadata of different types, including retrieving set(s) of images from the stored images for each different type of metadata, based on similarities of the metadata of that type with the example image; displaying the retrieved set(s) of image(s) organized according to each different type of metadata; and the user selecting one or more particular set(s) of retrieved image(s). | 08-28-2008 |
20080247458 | SYSTEM AND METHOD TO COMPOSE A SLIDE SHOW - A method of composing a multimedia slide show. In a preferred embodiment, the method comprises the steps of: selecting a plurality of digital images; encoding each of the plurality of digital images to generate a normal resolution image portion and a high resolution image portion; multiplexing each corresponding normal and high resolution image portion to generate a single high resolution still image; determining a time parameter for each of the high resolution still images; selecting an audio portion for at least one of the plurality of digital images; concatenating the plurality of high resolution still images to generate a video bitstream; generating an audio bitstream by encoding the audio portion; and multiplexing the video bitstream and audio bitstream to generate the multimedia slide show. | 10-09-2008 |
20080298643 | COMPOSITE PERSON MODEL FROM IMAGE COLLECTION - A method of improving recognition of a particular person in images by constructing a composite model of at least the portion of the head of that particular person, includes acquiring a collection of images taken during a particular event; identifying image(s) having a particular person in the collection; identifying one or more features in the identified image(s) associated with that particular person; searching the collection using the identified features to identify the particular person in other images of the collection; and constructing a composite model of at least a portion of the particular person's head using identified images of the particular person. | 12-04-2008 |
20090091798 | APPAREL AS EVENT MARKER - A method of characterizing images taken during an event into one or more sub-events is disclosed. The method includes: acquiring a collection of images taken during the event; identifying one or more particular person(s) in the collection and the apparel associated with the identified person(s); searching the collection to identify if the apparel associated with the identified particular person(s) has been changed during the event; and identifying one or more sub-events for those images in which the particular person(s) have changed apparel. | 04-09-2009 |
20090094518 | METHOD FOR IMAGE ANIMATION USING IMAGE VALUE RULES - A method for image presentation is provided, in which at least one theme for image presentation is obtained; at least one set of images is acquired for presentation in association with each theme; a presentation area image is generated for presentation on a display with a plurality of separate presentation objects in the presentation area image; an image value for each of the acquired images is determined; and an emphasis score is determined for each of the acquired images. The presentation area image is presented with one of the acquired images in each of the presentation objects, and the presentation objects are animated by moving the presentation objects relative to each other in a manner that attracts more attention to presentation objects that are used to present images having a higher emphasis score than presentation objects that are used to simultaneously present images having a lower emphasis score. | 04-09-2009 |
20090297032 | SEMANTIC EVENT DETECTION FOR DIGITAL CONTENT RECORDS - A system and method for semantic event detection in digital image content records is provided in which an event-level “Bag-of-Features” (BOF) representation is used to model events, and generic semantic events are detected in a concept space instead of an original low-level visual feature space based on the BOF representation. | 12-03-2009 |
20090299999 | SEMANTIC EVENT DETECTION USING CROSS-DOMAIN KNOWLEDGE - A method for facilitating semantic event classification of a group of image records related to an event. The method uses an event detector system for: extracting a plurality of visual features from each of the image records, wherein extracting the visual features includes segmenting an image record into a number of regions from which the visual features are extracted; generating a plurality of concept scores for each of the image records using the visual features, wherein each concept score corresponds to a visual concept and each concept score is indicative of a probability that the image record includes the visual concept; generating a feature vector corresponding to the event based on the concept scores of the image records; and supplying the feature vector to an event classifier that identifies at least one semantic event classifier that corresponds to the event. | 12-03-2009 |
20100077289 | Method and Interface for Indexing Related Media From Multiple Sources - The invention relates generally to the field of digital image processing, and in particular to a method for associating and viewing related video and still images. The present invention is directed to methods for associating and/or viewing digital content records comprising ordering a first set of digital content records and a second set of digital content records based upon information associated with each of the digital content records. | 03-25-2010 |
20100124378 | METHOD FOR EVENT-BASED SEMANTIC CLASSIFICATION - A method of automatically classifying images in a consumer digital image collection includes generating a hierarchical event representation of the image collection; computing global time-based features for each event within the hierarchical event representation; computing content-based features for each image in an event within the hierarchical event representation; combining content-based features for each image in an event to generate event-level content-based features; and using time-based features and content-based features for each event to classify an event into one of a pre-determined set of semantic categories. | 05-20-2010 |
20100245625 | IDENTIFYING COLLECTION IMAGES WITH SPECIAL EVENTS - A method for associating event times or time periods with digital images in a collection for determining if a digital image is of interest, includes storing a collection of digital images each having an associated capture time; comparing the associated capture time in the collection with a special event time to determine if a digital image in the collection is of interest, wherein the comparing step includes calculation of a special event time associated with a special event based on the calendar time associated with the special event and using such information to perform the comparison step; and associating digital images of interest with the special event. | 09-30-2010 |
20100322524 | DETECTING SIGNIFICANT EVENTS IN CONSUMER IMAGE COLLECTIONS - A method for determining significant events in a digital image collection, including, using a processor for generating image counts time-series from the image collection; computing a model of the image counts time-series; and using the image counts time-series and the model to determine significant events. | 12-23-2010 |
20110074966 | METHOD FOR MEASURING PHOTOGRAPHER'S AESTHETIC QUALITY PROGRESS - A method for measuring a photographer's progress over time toward producing images with a high level of aesthetic quality by assessing the aesthetic quality of a set of digital images captured by the photographer, comprising: providing a set of digital images captured by a particular photographer, each digital image having an associated capture time; using a processor to compute an aesthetic quality parameter for each digital image in the set; and producing an indication of the photographer's progress toward producing images with a high level of aesthetic quality using the aesthetic quality parameters for each digital image in the set and the corresponding associated capture times. | 03-31-2011 |
20110075917 | ESTIMATING AESTHETIC QUALITY OF DIGITAL IMAGES - A method for estimating the aesthetic quality of an input digital image comprising using a digital image processor for performing the following: determining one or more vanishing point(s) associated with the input digital image by automatically analyzing the digital image; computing a compositional model from at least the positions of the vanishing point(s); and producing an aesthetic quality parameter for the input digital image responsive to the compositional model, wherein the aesthetic quality parameter is an estimate for the aesthetic quality of the input digital image. | 03-31-2011 |
20110075930 | METHOD FOR COMPARING PHOTOGRAPHER AESTHETIC QUALITY - A method for comparing a plurality of photographers by assessing the aesthetic quality of a set of digital images captured by each photographer comprising: providing a set of digital images captured by each of a plurality of photographers; using a processor to determine an aesthetic quality parameter for each digital image in each of the sets of digital images, wherein the aesthetic quality parameter is an estimate for the aesthetic quality of the digital image; determining an aesthetic quality distribution for each photographer responsive to the aesthetic quality parameters computed for each of the digital images in the photographer's set of digital images; and providing a comparison between the aesthetic quality distributions of the photographers. | 03-31-2011 |
20110081082 | VIDEO CONCEPT CLASSIFICATION USING AUDIO-VISUAL ATOMS - A method for determining a classification for a video segment, comprising the steps of: breaking the video segment into a plurality of short-term video slices, each including a plurality of video frames and an audio signal; analyzing the video frames for each short-term video slice to form a plurality of region tracks; analyzing each region track to form a visual feature vector and a motion feature vector; analyzing the audio signal for each short-term video slice to determine an audio feature vector; forming a plurality of short-term audio-visual atoms for each short-term video slice by combining the visual feature vector and the motion feature vector for a particular region track with the corresponding audio feature vector; and using a classifier to determine a classification for the video segment responsive to the short-term audio-visual atoms. | 04-07-2011 |
20110099478 | IDENTIFYING COLLECTION IMAGES WITH SPECIAL EVENTS - A method for associating event times or time periods with digital images in a collection for determining if a digital image is of interest, includes storing a collection of digital images each having an associated capture time; comparing the associated capture time in the collection with a special event time to determine if a digital image in the collection is of interest, wherein the comparing step includes calculation of a special event time associated with a special event based on the calendar time associated with the special event and using such information to perform the comparison step; and associating digital images of interest with the special event. | 04-28-2011 |
20110206284 | ADAPTIVE EVENT TIMELINE IN CONSUMER IMAGE COLLECTIONS - A method for organizing an event timeline for a digital image collection, includes using a processor for detecting events in the digital image collection and each event's associated timespan; determining the detected events that are significant in the digital image collection; and organizing the event timeline so that the event timeline shows the significant events and a clustered representation of the other events, made available to the user at different time granularities. The organized event timeline is also used for selecting images for generating output. | 08-25-2011 |
20120002868 | METHOD FOR FAST SCENE MATCHING - A method for identifying digital images having matching backgrounds from a collection of digital images, comprising using a processor to perform the steps of: determining a set of one or more feature values for each digital image in the collection of digital images, wherein the set of feature values includes an edge compactness feature value that is an indication of the number of objects in the digital image that are useful for scene matching; determining a subset of the collection of digital images that are good candidates for scene matching by applying a classifier responsive to the determined feature values; applying a scene matching algorithm to the subset of the collection of digital images to identify groups of digital images having matching backgrounds; and storing an indication of the identified groups of digital images having matching backgrounds in a processor-accessible memory. | 01-05-2012 |
20120109964 | ADAPTIVE MULTIMEDIA SEMANTIC CONCEPT CLASSIFIER - A method of classifying a set of semantic concepts on a second multimedia collection based upon adapting a set of semantic concept classifiers and updating concept affinity relations that were developed to classify the set of semantic concepts for a first multimedia collection. The method comprises providing the second multimedia collection from a different domain and a processor automatically classifying the semantic concepts from the second multimedia collection by adapting the semantic concept classifiers and updating the concept affinity relations to the second multimedia collection based upon the local smoothness over the concept affinity relations and the local smoothness over data affinity relations. | 05-03-2012 |
20120148149 | VIDEO KEY FRAME EXTRACTION USING SPARSE REPRESENTATION - A method for identifying a set of key frames from a video sequence including a time sequence of video frames, comprising: extracting a feature vector for each video frame in a set of video frames selected from the video sequence; defining a set of basis functions that can be used to represent the extracted feature vectors, wherein each basis function is associated with a different video frame in the set of video frames; representing the feature vectors for each video frame in the set of video frames as a sparse combination of the basis functions associated with the other video frames; and analyzing the sparse combinations of the basis functions for the set of video frames to select the set of key frames. | 06-14-2012 |
20120203764 | IDENTIFYING PARTICULAR IMAGES FROM A COLLECTION - A method of identifying one or more particular images from an image collection, includes indexing the image collection to provide image descriptors for each image in the image collection such that each image is described by one or more of the image descriptors; receiving a query from a user specifying at least one keyword for an image search; and using the keyword(s) to search a second collection of tagged images to identify co-occurrence keywords. The method further includes using the identified co-occurrence keywords to provide an expanded list of keywords; using the expanded list of keywords to search the image descriptors to identify a set of candidate images satisfying the keywords; grouping the set of candidate images according to at least one of the image descriptors, and selecting one or more representative images from each grouping; and displaying the representative images to the user. | 08-09-2012 |
20120260175 | METHOD AND INTERFACE FOR INDEXING RELATED MEDIA FROM MULTIPLE SOURCES - The invention relates generally to the field of digital image processing, and in particular to a method for associating and viewing related video and still images. The present invention is directed to methods for associating and/or viewing digital content records comprising ordering a first set of digital content records and a second set of digital content records based upon information associated with each of the digital content records. | 10-11-2012 |
20120275701 | IDENTIFYING HIGH SALIENCY REGIONS IN DIGITAL IMAGES - A method for identifying high saliency regions in a digital image, comprising: segmenting the digital image into a plurality of segmented regions; determining a saliency value for each segmented region, merging neighboring segmented regions that share a common boundary in response to determining that one or more specified merging criteria are satisfied; and designating one or more of the segmented regions to be high saliency regions. The determination of the saliency value for a segmented region includes: determining a surround region including a set of image pixels surrounding the segmented region; analyzing the image pixels in the segmented region to determine one or more segmented region attributes; analyzing the image pixels in the surround region to determine one or more corresponding surround region attributes; determining a region saliency value responsive to differences between the one or more segmented region attributes and the corresponding surround region attributes. | 11-01-2012 |
20120281969 | VIDEO SUMMARIZATION USING AUDIO AND VISUAL CUES - A method for producing an audio-visual slideshow for a video sequence having an audio soundtrack and a corresponding video track including a time sequence of image frames, comprising: segmenting the audio soundtrack into a plurality of audio segments; subdividing the audio segments into a sequence of audio frames; determining a corresponding audio classification for each audio frame; automatically selecting a subset of the audio segments responsive to the audio classification for the corresponding audio frames; for each of the selected audio segments automatically analyzing the corresponding image frames to select one or more key image frames; merging the selected audio segments to form an audio summary; forming an audio-visual slideshow by combining the selected key frames with the audio summary, wherein the selected key frames are displayed synchronously with their corresponding audio segment; and storing the audio-visual slideshow in a processor-accessible storage memory. | 11-08-2012 |
20130089303 | VIDEO CONCEPT CLASSIFICATION USING AUDIO-VISUAL GROUPLETS - A method for determining a semantic concept classification for a digital video clip, comprising: receiving an audio-visual dictionary including a plurality of audio-visual grouplets, the audio-visual grouplets including visual background and foreground codewords, audio background and foreground codewords, wherein the codewords in a particular audio-visual grouplet were determined to be correlated with each other; analyzing the digital video clip to determine a set of visual features and a set of audio features; determining similarity scores between the digital video clip and each of the audio-visual grouplets by comparing the set of visual features to any visual background and foreground codewords associated with a particular audio-visual grouplet, and comparing the set of audio features to any audio background and foreground codewords associated with the particular audio-visual grouplet; and determining one or more semantic concept classifications using trained semantic classifiers. | 04-11-2013 |
20130089304 | VIDEO CONCEPT CLASSIFICATION USING VIDEO SIMILARITY SCORES - A method for determining a semantic concept classification for a digital video clip, comprising: receiving an audio-visual dictionary including a plurality of audio-visual grouplets, the audio-visual grouplets including visual background and foreground codewords, audio background and foreground codewords, wherein the codewords in a particular audio-visual grouplet were determined to be correlated with each other; determining reference video codeword similarity scores for a set of reference video clips; determining codeword similarity scores for the digital video clip; determining a reference video similarity score for each reference video clip representing a similarity between the digital video clip and the reference video clip responsive to the audio-visual grouplets, the codeword similarity scores and the reference video codeword similarity scores; and determining one or more semantic concept classifications using trained semantic classifiers responsive to the determined reference video similarity scores. | 04-11-2013 |
20130235275 | SCENE BOUNDARY DETERMINATION USING SPARSITY-BASED MODEL - A method for determining a scene boundary location dividing a first scene and a second scene in an input video sequence. The scene boundary location is determined responsive to a merit function value, which is a function of the candidate scene boundary location. The merit function value for a particular candidate scene boundary location is determined by representing the dynamic scene content for the input video frames before and after candidate scene boundary using sparse combinations of a set of basis functions, wherein the sparse combinations of the basis functions are determined by finding a sparse vector of weighting coefficients for each of the basis functions. The weighting coefficients determined for each of the input video frames are combined to determine the merit function value. The candidate scene boundary providing the smallest merit function value is designated to be the scene boundary location. | 09-12-2013 |
20130235939 | VIDEO REPRESENTATION USING A SPARSITY-BASED MODEL - A method for representing a video sequence including a time sequence of input video frames, the input video frames including some common scene content that is common to all of the input video frames and some dynamic scene content that changes between at least some of the input video frames. Affine transforms are determined to align the common scene content in the input video frames. A common video frame including the common scene content is determined by forming a sparse combination of a first set of basis functions. A dynamic video frame is determined for each input video frame by forming a sparse combination of a second set of basis functions, wherein the dynamic video frames can be combined with the respective affine transforms and the common video frame to provide reconstructed video frames. | 09-12-2013 |
20130251340 | VIDEO CONCEPT CLASSIFICATION USING TEMPORALLY-CORRELATED GROUPLETS - A method for determining a semantic concept classification for a digital video clip based on a grouplet dictionary that includes a plurality of temporally-correlated grouplets. The temporally-correlated grouplets include textual codewords and either visual codewords or audio codewords, wherein the codewords in a particular temporally-correlated grouplet were determined to be correlated with each other based on analysis of a set of training videos. Reference video codeword similarity scores are determined for a set of reference video clips, and codeword similarity scores are determined for the digital video clip. A reference video similarity score is determined for each reference video clip representing a similarity between the digital video clip and the reference video clip based on the reference video codeword similarity scores, the codeword similarity scores, and the temporally-correlated grouplets. One or more semantic concept classifications are determined using trained semantic classifiers responsive to the determined reference video similarity scores. | 09-26-2013 |
20140037215 | IDENTIFYING KEY FRAMES USING GROUP SPARSITY ANALYSIS - A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A set of key video frames are selected based on the determined video frame clusters. | 02-06-2014 |
20140037216 | IDENTIFYING SCENE BOUNDARIES USING GROUP SPARSITY ANALYSIS - A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters. | 02-06-2014 |
20140037269 | VIDEO SUMMARIZATION USING GROUP SPARSITY ANALYSIS - A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A summary is formed based on the determined video frame clusters. | 02-06-2014 |
20140046914 | METHOD FOR EVENT-BASED SEMANTIC CLASSIFICATION - A method of automatically classifying images in a consumer digital image collection includes generating a hierarchical event representation of the image collection; computing global time-based features for each event within the hierarchical event representation; computing content-based features for each image in an event within the hierarchical event representation; combining content-based features for each image in an event to generate event-level content-based features; and using time-based features and content-based features for each event to classify an event into one of a pre-determined set of semantic categories. | 02-13-2014 |
20140056432 | AUDIO SIGNAL SEMANTIC CONCEPT CLASSIFICATION METHOD - A method for determining a semantic concept associated with an audio signal captured using an audio sensor. A data processor is used to automatically analyze the audio signal using a plurality of semantic concept detectors to determine corresponding preliminary semantic concept detection values, each semantic concept detector being adapted to detect a particular semantic concept. The preliminary semantic concept detection values are analyzed using a joint likelihood model based on predetermined pair-wise likelihoods that particular pairs of semantic concepts co-occur to determine updated semantic concept detection values. One or more semantic concepts are determined based on the updated semantic concept detection values. The semantic concept detectors and the joint likelihood model are trained together with a joint training process using training audio signals, at least some of which are known to be associated with a plurality of semantic concepts. | 02-27-2014 |
20140058982 | AUDIO BASED CONTROL OF EQUIPMENT AND SYSTEMS - A method for controlling a device responsive to an audio signal captured using an audio sensor. A data processor is used to automatically analyze the audio signal using a plurality of semantic concept detectors to determine corresponding preliminary semantic concept detection values, each semantic concept detector being adapted to detect a particular semantic concept. The preliminary semantic concept detection values are analyzed using a joint likelihood model based on predetermined pair-wise likelihoods that particular pairs of semantic concepts co-occur to determine updated semantic concept detection values. One or more semantic concepts are determined based on the updated semantic concept detection values, and the device is controlled responsive to identified semantic concepts. The semantic concept detectors and the joint likelihood model are trained together with a joint training process using training audio signals, at least some of which are known to be associated with a plurality of semantic concepts. | 02-27-2014 |
20140063536 | Method For Computing Scale For Tag Insertion - Computing a scale factor to insert a first set of shapes into a second set of shapes to form a combined image includes receiving the two sets of shapes, using a processor to convert the first set of shapes into a set of rectangles and the second set of shapes into a set of intervals and computing the scale factor for either the set of intervals or the set of rectangles to generate the combined image by iteratively inserting the set of rectangles into the set of intervals and updating the scale factor in response to a residual area or an overflow area until all the rectangles in the set of rectangles have been inserted into the set of intervals and the residual area in the set of intervals is below a threshold, and storing the combined image in memory. | 03-06-2014 |
20140063555 | Method For Generating Tag Layouts - Generating a tag layout from a set of tags and an ordering of the set of tags, wherein each tag includes a text label and a size for the text label, is disclosed. The method further includes receiving at least one closed shape corresponding to a space for the tag layout. A processor computes a scale factor for at least one of the closed shape or the size of the text labels in the set of tags to generate the tag layout of the set of tags within the closed shape such that all the tags in the set of tags fit within the closed shape and the tags are placed in the space based at least upon the ordering of the tags in the set of tags. | 03-06-2014 |
20140063556 | System For Generating Tag Layouts - A system for generating a tag layout from a set of tags and an ordering of the set of tags, wherein each tag includes a text label and a size for the text label, is disclosed. The system includes a processor accessible memory for receiving an ordered set of tags, each tag including a text label and a size for the text label, and at least one closed shape corresponding to a space for the tag layout. The system further includes a processor for generating the tag layout by computing a scale factor for either the closed shape or the size of the text labels in the set of tags such that all the tags in the set of tags fit within the closed shape, and the processor stores the generated tag layout in the memory. | 03-06-2014 |
20140074825 | IDENTIFYING PARTICULAR IMAGES FROM A COLLECTION - A method of identifying one or more particular images from an image collection, includes indexing the image collection to provide image descriptors for each image in the image collection such that each image is described by one or more of the image descriptors; receiving a query from a user specifying at least one keyword for an image search; and using the keyword(s) to search a second collection of tagged images to identify co-occurrence keywords. The method further includes using the identified co-occurrence keywords to provide an expanded list of keywords; using the expanded list of keywords to search the image descriptors to identify a set of candidate images satisfying the keywords; grouping the set of candidate images according to at least one of the image descriptors, and selecting one or more representative images from each grouping; and displaying the representative images to the user. | 03-13-2014 |
20140086487 | ESTIMATING THE CLUTTER OF DIGITAL IMAGES - A method for determining an estimated clutter level of an input digital image based on an inequality index. The inequality index is determined by partitioning the input digital image into small sub-images and analyzing the sub-images to determine a set of image features. The image features are associated with a set of designated reference features, and the inequality index is determined based on the statistical variation of the reference features. The inequality index is compared to a predefined threshold to classify the input digital image as a rich-content image or a low-content image. For rich-content images, the estimated clutter level is determined responsive to a set of scene content features, relating to spatial structures or semantic content, that is determined by analyzing the input digital image. For low-content images, the estimated clutter level is determined responsive to an overall luminance level. | 03-27-2014 |
20140086495 | DETERMINING THE ESTIMATED CLUTTER OF DIGITAL IMAGES - A method for determining an estimated clutter level of an input digital image based on an inequality index. The inequality index is determined by analyzing the input digital image to determine a set of image features. The image features are associated with a set of designated reference features, and the inequality index is determined based on the statistical variation of the reference features. A set of scene content features relating to spatial structures or semantic content of the input digital image is determined by analyzing the input digital image. The estimated clutter level is determined responsive to the inequality index and the scene content features. | 03-27-2014 |
20140119652 | DETECTING RECURRING THEMES IN CONSUMER IMAGE COLLECTIONS - A method of identifying groups of related digital images in a digital image collection, comprising: analyzing each of the digital images to generate associated feature descriptors related to image content or image capture conditions; storing the feature descriptors associated with the digital images in a metadata database; automatically analyzing the metadata database to identify a plurality of frequent itemsets, wherein each of the frequent itemsets is a co-occurring feature descriptor group that occurs in at least a predefined fraction of the digital images; determining a probability of occurrence for each of the identified frequent itemsets; determining a quality score for each of the identified frequent itemsets responsive to the determined probability of occurrence; ranking the frequent itemsets based at least on the determined quality scores; and identifying one or more groups of related digital images corresponding to one or more of the top ranked frequent itemsets. | 05-01-2014 |
20140133766 | ADAPTIVE EVENT TIMELINE IN CONSUMER IMAGE COLLECTIONS - A method for organizing an event timeline for a digital image collection, includes using a processor for detecting events in the digital image collection and each event's associated timespan; determining the detected events that are significant in the digital image collection; and organizing the event timeline so that it shows the significant events and a clustered representation of the other events, which are made available to the user at different time granularities. The organized event timeline is also used for selecting images for generating output. | 05-15-2014 |
20140160315 | IDENTIFYING COLLECTION IMAGES WITH SPECIAL EVENTS - A method for associating event times or time periods with digital images in a collection for determining if a digital image is of interest, includes storing a collection of digital images each having an associated capture time; comparing the associated capture times in the collection with a special event time to determine if a digital image in the collection is of interest, wherein the comparing step includes calculating a special event time associated with a special event based on the calendar time associated with the special event and using that information to perform the comparison; and associating digital images of interest with the special event. | 06-12-2014 |
20150036931 | SYSTEM AND METHOD FOR CREATING NAVIGABLE VIEWS - A method for creating navigable views includes receiving digital images, computing a set of feature points for each of the digital images, selecting one of the digital images as a reference image, identifying a salient region of interest in the reference image, identifying other digital images containing a region of interest similar to the salient region of interest in the reference image using the set of feature points computed for each of other digital images, designating a reference location for the salient region of interest in the reference image, aligning the other digital images to the image that contains the designated reference location, ordering the image that contains the designated reference location and the other digital images, and generating a navigable view. | 02-05-2015 |
20150049910 | IMAGING WORKFLOW USING FACIAL AND NON-FACIAL FEATURES - A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine one or more image features for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory. | 02-19-2015 |
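The frequent-itemset detection described in 20140119652 can be sketched as follows. This is a minimal, naive enumeration over a toy metadata database; the feature names, the support threshold, the itemset size limit, and the size-then-support ranking heuristic are illustrative assumptions, not details taken from the patent.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(images, min_support, max_size=3):
    """Find feature-descriptor groups that co-occur in at least
    min_support fraction of the images (naive enumeration sketch;
    a real implementation would prune Apriori-style)."""
    n = len(images)
    counts = Counter()
    for feats in images:
        for k in range(1, max_size + 1):
            for combo in combinations(sorted(feats), k):
                counts[combo] += 1
    # Keep itemsets whose probability of occurrence meets the threshold.
    return {iset: c / n for iset, c in counts.items() if c / n >= min_support}

# Toy metadata database: each image is a set of feature descriptors.
images = [
    {"outdoor", "beach", "people"},
    {"outdoor", "beach"},
    {"indoor", "people"},
    {"outdoor", "beach", "people"},
]
itemsets = frequent_itemsets(images, min_support=0.5)
# Illustrative quality ranking: prefer larger groups, then higher support.
ranked = sorted(itemsets, key=lambda s: (len(s), itemsets[s]), reverse=True)
```

Here `{"outdoor", "beach"}` occurs in three of four images (support 0.75), and the full triple `{"outdoor", "beach", "people"}` in two of four (support 0.5), so both survive the threshold; the images matching a top-ranked itemset would form one recovered theme group.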
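The scale-factor computation in 20140063536 updates the scale iteratively while inserting rectangles into intervals. A much-simplified sketch of the underlying idea is a bisection on a uniform scale so that the total scaled rectangle area fits the available area; this ignores actual placement and overflow handling, which the patented method performs per insertion.

```python
def max_scale(rect_areas, interval_area, tol=1e-6):
    """Largest uniform scale s such that the rectangles, scaled by s in
    both dimensions (so each area grows as s**2), fit the available area.
    Area-only bisection sketch; real packing must also test placement."""
    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(a * mid * mid for a in rect_areas) <= interval_area:
            lo = mid  # still fits: try a larger scale
        else:
            hi = mid  # overflow: shrink
    return lo

# Two unit-free rectangles of area 2 each into total area 16: 4*s**2 <= 16.
s = max_scale([2.0, 2.0], 16.0)
```

The area bound gives s = 2 for this input; a geometric packing pass over the intervals would then either confirm the scale or trigger a further reduction, mirroring the patent's update-until-no-overflow loop.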