Patent application number | Description | Published |
20090087027 | ESTIMATOR IDENTIFIER COMPONENT FOR BEHAVIORAL RECOGNITION SYSTEM - An estimator/identifier component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The estimator/identifier component may be configured to classify an object as being one of two or more classification types, e.g., as being a vehicle or a person. Once classified, the estimator/identifier may evaluate the object to determine a set of kinematic data, static data, and a current pose of the object. The output of the estimator/identifier component may include the classifications assigned to a tracked object, as well as the derived information and object attributes. | 04-02-2009 |
20090087085 | TRACKER COMPONENT FOR BEHAVIORAL RECOGNITION SYSTEM - A tracker component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The behavior-recognition system may be configured to learn, identify, and recognize patterns of behavior by observing a video stream (i.e., a sequence of individual video frames). The tracker component may be configured to track objects depicted in the sequence of video frames and to generate, search, match, and update computational models of such objects. | 04-02-2009 |
20100061624 | DETECTING ANOMALOUS EVENTS USING A LONG-TERM MEMORY IN A VIDEO ANALYSIS SYSTEM - Techniques are described for detecting anomalous events using a long-term memory in a video analysis system. The long-term memory may be used to store and retrieve information learned while a video analysis system observes a stream of video frames depicting a given scene. Further, the long-term memory may be configured to detect the occurrence of anomalous events, relative to observations of other events that have occurred in the scene over time. A distance measure may be used to determine a distance between an active percept (encoding an observed event depicted in the stream of video frames) and a retrieved percept (encoding a memory of previously observed events in the long-term memory). If the distance exceeds a specified threshold, the long-term memory may publish the occurrence of an anomalous event for review by users of the system. | 03-11-2010 |
20100063949 | LONG-TERM MEMORY IN A VIDEO ANALYSIS SYSTEM - A long-term memory used to store and retrieve information learned while a video analysis system observes a stream of video frames is disclosed. The long-term memory provides a memory with a capacity that grows in size gracefully, as events are observed over time. Additionally, the long-term memory may encode events, represented by sub-graphs of a neural network. Further, rather than predefining a number of patterns recognized and manipulated by the long-term memory, embodiments of the invention provide a long-term memory where the size of a feature dimension (used to determine the similarity between different observed events) may grow dynamically as necessary, depending on the actual events observed in a sequence of video frames. | 03-11-2010 |
20100150471 | HIERARCHICAL SUDDEN ILLUMINATION CHANGE DETECTION USING RADIANCE CONSISTENCY WITHIN A SPATIAL NEIGHBORHOOD - Techniques are disclosed for detecting sudden illumination changes using radiance consistency within a spatial neighborhood. A background/foreground (BG/FG) component of a behavior recognition system may be configured to generate a background image depicting a scene background. Further, the BG/FG component may periodically evaluate a current video frame to determine whether a sudden illumination change has occurred. A sudden illumination change occurs when scene lighting changes dramatically from one frame to the next (or over a small number of frames). | 06-17-2010 |
20100260376 | MAPPER COMPONENT FOR MULTIPLE ART NETWORKS IN A VIDEO ANALYSIS SYSTEM - Techniques are disclosed for detecting the occurrence of unusual events in a sequence of video frames. Importantly, what is determined as unusual need not be defined in advance, but can be determined over time by observing a stream of primitive events and a stream of context events. A mapper component may be configured to parse the event streams and supply input data sets to multiple adaptive resonance theory (ART) networks. Each individual ART network may generate clusters from the set of input data supplied to that ART network. Each cluster represents an observed statistical distribution of a particular thing or event being observed by that ART network. | 10-14-2010 |
20110043536 | VISUALIZING AND UPDATING SEQUENCES AND SEGMENTS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying a sequence storing an ordered string of symbols generated from kinematic data derived from analyzing an input stream of video frames depicting one or more foreground objects. The sequence may represent information learned by a video surveillance system. A request may be received to view the sequence or a segment partitioned from the sequence. A visual representation of the segment may be generated and superimposed over a background image associated with the scene. A user interface may be configured to display the visual representation of the sequence or segment and to allow a user to view and/or modify properties associated with the sequence or segment. | 02-24-2011 |
20110043625 | SCENE PRESET IDENTIFICATION USING QUADTREE DECOMPOSITION ANALYSIS - Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene. | 02-24-2011 |
20110043626 | INTRA-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine is configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies. | 02-24-2011 |
20110043689 | FIELD-OF-VIEW CHANGE DETECTION - Techniques are disclosed for detecting a field-of-view change for a video feed. These techniques differentiate between a new or changed scene and a temporary variation in the scene to accurately detect field-of-view changes for the video feed. A field-of-view change is detected when the position of a camera providing the video feed changes, the video feed is switched to a different camera, the video feed is disconnected, or the camera providing the video feed is obscured. A false-positive field-of-view change is not detected when the scene changes due to a sudden variation in illumination, obstruction of a portion of the camera providing the video feed, blurred images due to an out-of-focus camera, or a transition between bright and dark light when the video feed transitions between color and near infrared capture modes. | 02-24-2011 |
20110044492 | ADAPTIVE VOTING EXPERTS FOR INCREMENTAL SEGMENTATION OF SEQUENCES WITH PREDICTION IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine is configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies. | 02-24-2011 |
20110044498 | VISUALIZING AND UPDATING LEARNED TRAJECTORIES IN VIDEO SURVEILLANCE SYSTEMS - Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur. | 02-24-2011 |
20110044499 | INTER-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine is configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies. | 02-24-2011 |
20110044533 | VISUALIZING AND UPDATING LEARNED EVENT MAPS IN SURVEILLANCE SYSTEMS - Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert. | 02-24-2011 |
20110044536 | PIXEL-LEVEL BASED MICRO-FEATURE EXTRACTION - Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors. | 02-24-2011 |
20110044537 | BACKGROUND MODEL FOR COMPLEX AND DYNAMIC SCENES - Techniques are disclosed for learning and modeling a background for a complex and/or dynamic scene over a period of observations without supervision. A background/foreground component of a computer vision engine may be configured to model a scene using an array of ART networks. The ART networks learn the regularity and periodicity of the scene by observing the scene over a period of time. Thus, the ART networks allow the computer vision engine to model complex and dynamic scene backgrounds in video. | 02-24-2011 |
20110050896 | VISUALIZING AND UPDATING LONG-TERM MEMORY PERCEPTS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system. | 03-03-2011 |
20110050897 | VISUALIZING AND UPDATING CLASSIFICATIONS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification. | 03-03-2011 |
20110051992 | UNSUPERVISED LEARNING OF TEMPORAL ANOMALIES FOR A VIDEO SURVEILLANCE SYSTEM - Techniques are described for analyzing a stream of video frames to identify temporal anomalies. A video surveillance system may be configured to identify when agents depicted in the video stream engage in anomalous behavior, relative to the time-of-day (TOD) or day-of-week (DOW) at which the behavior occurs. A machine-learning engine may establish the normalcy of a scene by observing the scene over a specified period of time. Once the observations of the scene have matured, the actions of agents in the scene may be evaluated and classified as normal or abnormal temporal behavior, relative to the past observations. | 03-03-2011 |
20110052000 | DETECTING ANOMALOUS TRAJECTORIES IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for determining anomalous trajectories of objects tracked over a sequence of video frames. In one embodiment, a symbol trajectory may be derived from observing an object moving through a scene. The symbol trajectory represents semantic concepts extracted from the trajectory of the object. Whether the symbol trajectory is anomalous may be determined, based on previously observed symbol trajectories. A user may be alerted upon determining that the symbol trajectory is anomalous. | 03-03-2011 |
20110052002 | FOREGROUND OBJECT TRACKING - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications. | 03-03-2011 |
20110052003 | FOREGROUND OBJECT DETECTION IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the detected foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications. | 03-03-2011 |
20110052067 | CLUSTERING NODES IN A SELF-ORGANIZING MAP USING AN ADAPTIVE RESONANCE THEORY NETWORK - Techniques are disclosed for discovering object type clusters using pixel-level micro-features extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to classify objects depicted in the image data based on the pixel-level micro-features. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects. | 03-03-2011 |
20110052068 | IDENTIFYING ANOMALOUS OBJECT TYPES DURING CLASSIFICATION - Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects and identifying anomaly object types. | 03-03-2011 |
20110064267 | CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior. | 03-17-2011 |
20110064268 | VIDEO SURVEILLANCE SYSTEM CONFIGURED TO ANALYZE COMPLEX BEHAVIORS USING ALTERNATING LAYERS OF CLUSTERING AND SEQUENCING - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A video surveillance system may be configured to observe a scene (as depicted in a sequence of video frames) and, over time, develop hierarchies of concepts including classes of objects, actions and behaviors. That is, the video surveillance system may develop models at progressively more complex levels of abstraction used to identify what events and behaviors are common and which are unusual. When the models have matured, the video surveillance system issues alerts on unusual events. | 03-17-2011 |
20120163670 | BEHAVIORAL RECOGNITION SYSTEM - Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. In this way, the system learns normal and abnormal behaviors for any environment rapidly and in real time by analyzing movements or activities (or the absence of such) in the environment, and identifies and predicts abnormal and suspicious behavior based on what has been learned. | 06-28-2012 |
20120224746 | CLASSIFIER ANOMALIES FOR OBSERVED BEHAVIORS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. As progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in the higher levels of the cortex model represent observations of behavioral anomalies corresponding to progressively complex patterns of behavior. | 09-06-2012 |
20120257831 | CONTEXT PROCESSOR FOR VIDEO ANALYSIS SYSTEM - Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of the video frames and other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated. | 10-11-2012 |
20120275649 | FOREGROUND OBJECT TRACKING - Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications. | 11-01-2012 |
20130022242 | IDENTIFYING ANOMALOUS OBJECT TYPES DURING CLASSIFICATION - Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects and identifying anomaly object types. | 01-24-2013 |
20130121533 | INTER-TRAJECTORY ANOMALY DETECTION USING ADAPTIVE VOTING EXPERTS IN A VIDEO SURVEILLANCE SYSTEM - A sequence layer in a machine-learning engine is configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies. | 05-16-2013 |
20130136353 | BACKGROUND MODEL FOR COMPLEX AND DYNAMIC SCENES - Techniques are disclosed for learning and modeling a background for a complex and/or dynamic scene over a period of observations without supervision. A background/foreground component of a computer vision engine may be configured to model a scene using an array of ART networks. The ART networks learn the regularity and periodicity of the scene by observing the scene over a period of time. Thus, the ART networks allow the computer vision engine to model complex and dynamic scene backgrounds in video. | 05-30-2013 |
20130241730 | ALERT VOLUME NORMALIZATION IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for normalizing and publishing alerts using a behavioral recognition-based video surveillance system configured with an alert normalization module. Certain embodiments allow a user of the behavioral recognition system to provide the normalization module with a set of relative weights for alert types and a maximum publication value. Using these values, the normalization module evaluates an alert and determines whether its rareness value exceeds a threshold. Upon determining that the alert exceeds the threshold, the module normalizes and publishes the alert. | 09-19-2013 |
20130242093 | ALERT DIRECTIVES AND FOCUSED ALERT DIRECTIVES IN A BEHAVIORAL RECOGNITION SYSTEM - Alert directives and focused alert directives allow a user to provide feedback to a behavioral recognition system to always or never publish an alert for certain events. Such an approach bypasses the normal publication methods of the behavioral recognition system yet does not obstruct the system's learning procedures. | 09-19-2013 |
20130243252 | LOITERING DETECTION IN A VIDEO SURVEILLANCE SYSTEM - A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to learn patterns of behavior consistent with a person loitering and generate alerts for same. Upon receiving information of a foreground object remaining in a scene over a threshold period of time, a loitering detection module evaluates whether the object's trajectory corresponds to a random walk. Upon determining that the trajectory does correspond, the loitering detection module generates a loitering alert. | 09-19-2013 |
20140002647 | ANOMALOUS STATIONARY OBJECT DETECTION AND REPORTING | 01-02-2014 |
20140003710 | UNSUPERVISED LEARNING OF FEATURE ANOMALIES FOR A VIDEO SURVEILLANCE SYSTEM | 01-02-2014 |
20140003713 | AUTOMATIC GAIN CONTROL FILTER IN A VIDEO ANALYSIS SYSTEM | 01-02-2014 |
20140003720 | ADAPTIVE ILLUMINANCE FILTER IN A VIDEO ANALYSIS SYSTEM | 01-02-2014 |
20140050355 | METHOD AND SYSTEM FOR DETECTING SEA-SURFACE OIL - A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to detect and evaluate the presence of sea-surface oil on the water surrounding an offshore oil platform. The computer vision engine may be configured to segment image data into detected patches or blobs of surface oil (foreground) present in the field of view of an infrared camera (or cameras). A machine learning engine may evaluate the detected patches of surface oil to learn to distinguish between sea-surface oil incident to the operation of an offshore platform and the appearance of surface oil that should be investigated by platform personnel. | 02-20-2014 |
20140072206 | SEMANTIC REPRESENTATION MODULE OF A MACHINE LEARNING ENGINE IN A VIDEO ANALYSIS SYSTEM - A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames. | 03-13-2014 |
20140132786 | IMAGE STABILIZATION TECHNIQUES FOR VIDEO SURVEILLANCE SYSTEMS - A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may provide image stabilization of a video stream obtained from a camera. An image stabilization module in the behavioral recognition system obtains a reference image from the video stream. The image stabilization module identifies alignment regions within the reference image based on the regions of the image that are dense with features. Upon determining that the tracked features of a current image are out of alignment with the reference image, the image stabilization module uses the most feature-dense alignment region to estimate an affine transformation matrix, which is applied to the entire current image to warp it into proper alignment. | 05-15-2014 |
20150046155 | COGNITIVE NEURO-LINGUISTIC BEHAVIOR RECOGNITION SYSTEM FOR MULTI-SENSOR DATA FUSION - Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on the frequency with which combinations of words in the ordered sequence of words appear relative to one another. | 02-12-2015 |
20150047040 | COGNITIVE INFORMATION SECURITY USING A BEHAVIORAL RECOGNITION SYSTEM - Embodiments presented herein describe a method for processing streams of data of one or more networked computer systems. According to one embodiment of the present disclosure, an ordered stream of normalized vectors corresponding to information security data obtained from one or more sensors monitoring a computer network is received. A neuro-linguistic model of the information security data is generated by clustering the ordered stream of vectors and assigning a letter to each cluster, outputting an ordered sequence of letters based on a mapping of the ordered stream of normalized vectors to the clusters, building a dictionary of words from the ordered output of letters, outputting an ordered stream of words based on the ordered output of letters, and generating a plurality of phrases based on the ordered output of words. | 02-12-2015 |
20150078656 | VISUALIZING AND UPDATING LONG-TERM MEMORY PERCEPTS IN A VIDEO SURVEILLANCE SYSTEM - Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system. | 03-19-2015 |
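Several of the applications above (e.g., 20100260376, 20110044537, 20110052067) rely on adaptive resonance theory (ART) networks, whose defining property is that the number of clusters is not fixed in advance: an input joins an existing cluster only if it passes a vigilance test, and otherwise commits a new cluster. The following is a minimal sketch of that idea only; the class name, the cosine-similarity match function, and the vigilance/learning-rate values are illustrative assumptions, not details taken from the patents.

```python
import numpy as np

class SimpleARTNetwork:
    """Minimal ART-style clusterer: clusters grow with observed variety."""

    def __init__(self, vigilance=0.9, learning_rate=0.2):
        self.vigilance = vigilance          # match threshold in [0, 1]
        self.learning_rate = learning_rate  # prototype update step
        self.prototypes = []                # one vector per learned cluster

    def _similarity(self, x, proto):
        # Cosine similarity stands in for the ART match function here.
        denom = np.linalg.norm(x) * np.linalg.norm(proto)
        return float(np.dot(x, proto) / denom) if denom else 0.0

    def learn(self, x):
        """Assign x to a cluster (returns its index), creating one if needed."""
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            sims = [self._similarity(x, p) for p in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.vigilance:
                # Resonance: nudge the winning prototype toward the input.
                self.prototypes[best] += self.learning_rate * (x - self.prototypes[best])
                return best
        # No resonant cluster: commit a new prototype for this input.
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1
```

Feeding such a clusterer a stream of micro-feature vectors (as in 20110044536) would yield object-type clusters without any predefined object definitions, which is the unsupervised property the abstracts emphasize.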
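The sequence-layer applications (20110043626, 20110044492, 20110044499, 20130121533) segment ART label sequences using entropies computed over an ngram trie, in the spirit of the voting-experts heuristic: a spike in the entropy of the next symbol given a context suggests a segment boundary. Below is a hedged sketch of that entropy computation, using a flat n-gram count table as a stand-in for the trie; the function names and parameter choices are illustrative assumptions.

```python
import math
from collections import defaultdict

def build_ngram_counts(sequence, max_n=3):
    """Count all n-grams up to max_n in a label sequence (flat trie stand-in)."""
    counts = defaultdict(int)
    for n in range(1, max_n + 1):
        for i in range(len(sequence) - n + 1):
            counts[tuple(sequence[i:i + n])] += 1
    return counts

def successor_entropy(counts, context):
    """Shannon entropy of the symbol following `context`.

    High entropy (many equally likely successors) is the voting-experts
    signal for a likely segment boundary after `context`.
    """
    followers = {k[-1]: v for k, v in counts.items()
                 if len(k) == len(context) + 1 and k[:-1] == tuple(context)}
    total = sum(followers.values())
    if total == 0:
        return 0.0
    return -sum((v / total) * math.log2(v / total) for v in followers.values())
```

For example, in the label sequence `a b c a b d`, the context `(a,)` is always followed by `b` (entropy 0), while `(a, b)` is followed by either `c` or `d` (entropy 1 bit), hinting at a boundary after `ab`.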
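The neuro-linguistic applications (20150046155, 20150047040) build a dictionary of "words" from frequently repeated runs of cluster symbols. As a rough illustration of that dictionary step only, here is a sketch that promotes symbol runs to words when they recur often enough; the length and frequency thresholds are arbitrary assumptions for the example, not values from the disclosures.

```python
from collections import Counter

def build_word_dictionary(symbols, max_len=3, min_count=2):
    """Promote frequently repeated symbol runs to dictionary 'words'.

    symbols:   ordered sequence of cluster letters (e.g. ['a', 'b', ...])
    max_len:   longest run considered as a candidate word
    min_count: how often a run must recur to count as a word
    """
    counts = Counter()
    for n in range(2, max_len + 1):
        for i in range(len(symbols) - n + 1):
            counts["".join(symbols[i:i + n])] += 1
    return {word for word, c in counts.items() if c >= min_count}
```

Given the symbol stream `a b x a b y a b`, only the run `ab` recurs, so it alone would be promoted to a word; a further pass over the resulting word stream would then group recurring word combinations into phrases, as the abstracts describe.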