Patents - stay tuned to the technology

Farzin Aghdasi, Clovis, CA US

Patent application number - Description - Published
20080239075 - METHOD AND APPARATUS FOR IMPROVING VIDEO PERFORMANCE IN A WIRELESS SURVEILLANCE SYSTEM (published 10-02-2008) - A method of improving video performance in a video surveillance system having a wireless encoder connected to a video surveillance network by a wireless access point device comprises the steps of allocating channel bandwidth to the wireless encoder from the wireless access point device, transmitting packets of video data from the wireless encoder to the wireless access point device, transmitting signals from the wireless access point device to the wireless encoder, monitoring the strength of the signals received by the wireless access point device, the strength of the signals received by the wireless encoder, and the number of lost packets of video data transmitted from the wireless encoder to the wireless access point device, sending a request from the wireless encoder to the wireless access point device to change the bit transmission rate of the wireless encoder if the strength of the signals received by the wireless access point device is less than a first threshold, if the strength of the signals received by the wireless encoder is less than a second threshold, or if the number of lost packets of video data is greater than a third threshold, and changing the bit transfer rate of the wireless encoder if the wireless access point device approves the change.
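The request logic in this abstract is a set of three threshold tests followed by an approval step. A minimal sketch, assuming illustrative signal strengths in dBm and placeholder threshold values (the abstract does not specify any):

```python
def should_request_rate_change(ap_signal, encoder_signal, lost_packets,
                               ap_threshold=-70, encoder_threshold=-70,
                               loss_threshold=10):
    """Return True if the encoder should ask the access point to change
    its bit transmission rate (the three tests named in the abstract)."""
    return (ap_signal < ap_threshold
            or encoder_signal < encoder_threshold
            or lost_packets > loss_threshold)

def apply_rate_change(requested, ap_approves):
    """The rate only changes if the access point approves the request."""
    return requested and ap_approves
```

The function and parameter names are assumptions for illustration, not identifiers from the patent.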
20080244409 - METHOD AND APPARATUS FOR CONFIGURING A VIDEO SURVEILLANCE SOURCE (published 10-02-2008) - A method of controlling a video source in a video surveillance system having a video source connected by a network to a workstation having a graphical user interface for enabling a user to control the video source, comprising the steps of providing a video analysis program for analyzing the video images generated by the video source before the video images are transmitted over the network, providing a file containing the user interface controls for the graphical user interface and the parameters for configuring the video analysis program, storing the file in memory, downloading the file to the workstation at run time, and enabling a user to configure the video analysis program by interacting with the graphical user interface.
20100214425 - METHOD OF IMPROVING THE VIDEO IMAGES FROM A VIDEO CAMERA (published 08-26-2010) - A method of improving a video image by removing the effects of camera vibration, comprising the steps of obtaining a reference frame, receiving an incoming frame, determining the frame translation vector for the incoming frame, translating the incoming frame to generate a realigned frame, performing low pass filtering in the spatial domain on pixels in the realigned frame, performing low pass filtering in the spatial domain on pixels in the reference frame, determining the absolute difference between the filtered pixels in the reference frame and the filtered pixels in the realigned frame, performing low pass filtering in the temporal domain on the pixels in the realigned frame to generate the output frame if the absolute difference is less than a predetermined threshold, and providing the realigned frame as the output frame if the absolute difference is greater than the predetermined threshold.
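The per-pixel decision described above (temporal filtering for pixels that still match the reference after spatial low-pass filtering, passthrough for changed pixels) can be sketched as follows. Frames are plain nested lists, the 3x3 box filter stands in for the unspecified spatial low-pass, and the threshold and alpha values are assumptions:

```python
def box3(img):
    """3x3 mean filter (spatial low-pass) with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def stabilize(reference, realigned, prev_output, threshold=10.0, alpha=0.5):
    """Per-pixel sketch of the abstract's filtering decision."""
    ref_lp, cur_lp = box3(reference), box3(realigned)
    h, w = len(reference), len(reference[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if abs(ref_lp[y][x] - cur_lp[y][x]) < threshold:
                # static pixel: temporal low-pass (running average)
                out[y][x] = alpha * prev_output[y][x] + (1 - alpha) * realigned[y][x]
            else:
                # changed pixel: pass the realigned frame through
                out[y][x] = realigned[y][x]
    return out
```

A production version would operate on camera frames and estimate the translation vector first; this sketch covers only the filtering step.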
20120162416 - STOPPED OBJECT DETECTION (published 06-28-2012) - A video surveillance system includes: an input configured to receive indications of images each comprising a plurality of pixels; a memory; and a processing unit communicatively coupled to the input and the memory and configured to: analyze the indications of the images; compare the present image with a short-term background image stored in the memory; compare the present image with a long-term background image stored in the memory; provide an indication in response to an object in the present image being disposed in a first location in the present image, in a second location in, or absent from, the short-term background image, and in a third location in, or absent from, the long-term background image, where the first location is different from both the second location and the third location.
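The comparison at the core of this abstract reduces to: alert when the object's present location differs from its location in (or it is absent from) both background images. A minimal sketch, assuming locations are (x, y) tuples and None means absent:

```python
def stopped_object_alert(present_loc, short_term_loc, long_term_loc):
    """Sketch of the abstract's dual-background test for a single object."""
    differs_short = short_term_loc is None or present_loc != short_term_loc
    differs_long = long_term_loc is None or present_loc != long_term_loc
    # Alert only when the present location matches neither background.
    return differs_short and differs_long
```

Representing an object by one location point is a simplification; the patented system compares full background images.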
20120169871 - Multi-Resolution Image Display (published 07-05-2012) - An image display method includes: receiving, from a single camera, first and second image information for first and second captured images captured from different perspectives, the first image information having a first data density; selecting a portion of the first captured image for display with a higher level of detail than other portions of the first captured image, the selected portion corresponding to a first area of the first captured image; displaying the selected portion in a first displayed image, using a second data density relative to the selected portion of the first captured image; and displaying another portion of the first captured image, in a second displayed image, using a third data density; where the another portion of the first captured image is other than the selected portion of the first captured image; and where the third data density is lower than the second data density.
20120169882 - Tracking Moving Objects Using a Camera Network (published 07-05-2012) - Techniques are described for tracking moving objects using a plurality of security cameras. Multiple cameras may capture frames that contain images of a moving object. These images may be processed by the cameras to create metadata associated with the images of the objects. Frames of each camera's video feed and metadata may be transmitted to a host computer system. The host computer system may use the metadata received from each camera to determine whether the moving objects imaged by the cameras represent the same moving object. Based upon properties of the images of the objects described in the metadata received from each camera, the host computer system may select a preferable video feed containing images of the moving object for display to a user.
20120169923 - VIDEO CODING (published 07-05-2012) - Techniques are discussed for providing mechanisms for coding and transmitting high definition video, e.g., over low bandwidth connections. In particular, foreground-objects are identified as distinct from the background of a scene represented in a plurality of video frames received from a video source, such as a camera. In identifying foreground-objects, semantically significant and semantically insignificant movement (e.g., repetitive versus non-repetitive movement) is differentiated. Processing of the foreground-objects and background proceed at different update rates or frequencies.
20120170802 - SCENE ACTIVITY ANALYSIS USING STATISTICAL AND SEMANTIC FEATURES LEARNT FROM OBJECT TRAJECTORY DATA (published 07-05-2012) - Trajectory information of objects appearing in a scene can be used to cluster trajectories into groups according to the trajectories' relative distances from each other for scene activity analysis. By doing so, a database of trajectory data can be maintained that includes the trajectories to be clustered into trajectory groups. This database can be used to train a clustering system, and with extracted statistical features of the resultant trajectory groups a new trajectory can be analyzed to determine whether it is normal or abnormal. Embodiments described herein can be used to determine whether a video scene is normal or abnormal. In the event that the new trajectory is identified as normal, it can be annotated with the extracted semantic data. In the event that the new trajectory is determined to be abnormal, a user can be notified that an abnormal behavior has occurred.
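One simple reading of "relative distance between trajectories" is a mean point-to-point distance, with a new trajectory flagged abnormal when it is far from every learned group. A sketch under those assumptions (the distance measure, group representatives, and threshold are all illustrative, not from the patent):

```python
def trajectory_distance(t1, t2):
    """Mean point-to-point distance between two equal-length
    trajectories, each a list of (x, y) points."""
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(t1, t2)) / len(t1)

def is_abnormal(new_traj, group_representatives, threshold=5.0):
    """A new trajectory is abnormal when it lies far from every
    representative trajectory of the learned groups."""
    return all(trajectory_distance(new_traj, rep) > threshold
               for rep in group_representatives)
```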
20120170803 - SEARCHING RECORDED VIDEO (published 07-05-2012) - Embodiments of the disclosure provide for systems and methods for creating metadata associated with video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified, they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
20120170838 - Color Similarity Sorting for Video Forensics Search (published 07-05-2012) - Systems and methods of sorting electronic color images of objects are provided. One method includes receiving an input representation of an object, the representation including pixels defined in a first color space, converting the input image into a second color space, determining a query feature vector including multiple parameters associated with color of the input representation, the query feature vector parameters including at least a first parameter of the first color space and at least a first parameter of the second color space, and comparing the query feature vector to multiple candidate feature vectors. Each candidate feature vector includes multiple parameters associated with color of multiple stored candidate images, the candidate feature vector parameters including at least the first parameter from the first color space and at least the first parameter from the second color space. The method further includes determining at least one of the candidate images to be a possible match to the desired object based on the comparison.
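A two-color-space feature vector of the kind described can be sketched with RGB as the first space and HSV as the second, comparing vectors by Euclidean distance. The particular parameters chosen (normalized RGB plus hue and saturation) and the distance measure are assumptions for illustration:

```python
import colorsys

def feature_vector(rgb):
    """Build a query feature vector combining parameters from two
    color spaces: normalized RGB plus hue and saturation from HSV."""
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (r / 255.0, g / 255.0, b / 255.0, h, s)

def best_match(query_rgb, candidates_rgb):
    """Return the index of the candidate whose feature vector is
    closest (Euclidean distance) to the query's feature vector."""
    q = feature_vector(query_rgb)
    def dist(rgb):
        c = feature_vector(rgb)
        return sum((a - b) ** 2 for a, b in zip(q, c)) ** 0.5
    return min(range(len(candidates_rgb)), key=lambda i: dist(candidates_rgb[i]))
```

Real candidates would be whole object images summarized into vectors, not single colors; one pixel per image keeps the comparison step visible.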
20120170902 - Inference Engine for Video Analytics Metadata-Based Event Detection and Forensic Search (published 07-05-2012) - Embodiments of the disclosure provide for systems and methods for searching video data for events and/or behaviors. An inference engine can be used to aid in the searching. In some embodiments, a user can specify various search criteria, for example, a video source(s), an event(s) or behavior(s) to search, and an action(s) to perform in the event of a successful search. The search can be performed by analyzing an object(s) found within scenes of the video data. An object can be identified by a number of attributes specified by the user. Once the search criteria have been received from the user, the video data can be received (or extracted from storage), the data analyzed for the specified events (or behaviors), and the specified action performed in the event a successful search occurs.
20120173577 - SEARCHING RECORDED VIDEO (published 07-05-2012) - Embodiments of the disclosure provide for systems and methods for creating metadata associated with video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified, they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
20130028467 - SEARCHING RECORDED VIDEO (published 01-31-2013) - Embodiments of the disclosure provide for systems and methods for creating metadata associated with video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified, they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
20130128050 - GEOGRAPHIC MAP BASED CONTROL (published 05-23-2013) - Disclosed are methods, systems, computer readable media and other implementations, including a method that includes determining, from image data captured by a plurality of cameras, motion data for multiple moving objects, and presenting, on a global image representative of areas monitored by the plurality of cameras, graphical indications of the determined motion data for the multiple objects at positions on the global image corresponding to geographic locations of the multiple moving objects. The method further includes presenting captured image data from one of the plurality of cameras in response to selection, based on the graphical indications presented on the global image, of an area of the global image presenting at least one of the graphical indications for at least one of the multiple moving objects captured by the one of the plurality of cameras.
20130155247 - Method and System for Color Adjustment (published 06-20-2013) - A method of adjusting the color of images captured by a plurality of cameras comprises the steps of receiving a first image captured by a first camera from the plurality of cameras, analyzing the first image to separate the pixels in the first image into background pixels and foreground pixels, selecting pixels from the background pixels that have a color that is a shade of gray, determining the amount to adjust the colors of the selected pixels to move their colors towards true gray, and providing information for use in adjusting the color components of images from the plurality of cameras.
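The adjustment step described here is a gray-world style correction: from background pixels judged gray-ish, compute per-channel gains that pull their average toward true gray (equal R, G, B). A minimal sketch, with the gray-selection tolerance an illustrative assumption:

```python
def is_grayish(pixel, tolerance=20):
    """Select pixels whose channels are close enough to count as a
    shade of gray (R, G, B within `tolerance` of each other)."""
    return max(pixel) - min(pixel) <= tolerance

def gray_balance_gains(gray_pixels):
    """Compute per-channel gains that move the average color of the
    selected gray pixels toward true gray (equal channel averages)."""
    n = len(gray_pixels)
    avg_r = sum(p[0] for p in gray_pixels) / n
    avg_g = sum(p[1] for p in gray_pixels) / n
    avg_b = sum(p[2] for p in gray_pixels) / n
    target = (avg_r + avg_g + avg_b) / 3.0  # true gray: all channels equal
    return (target / avg_r, target / avg_g, target / avg_b)
```

The returned gains are the "information for use in adjusting the color components" the abstract mentions; multiplying every pixel's channels by them applies the correction.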
20130162834 - INTEGRATED VIDEO QUANTIZATION (published 06-27-2013) - Techniques for processing video content in a video camera are provided. A method for processing video content in a video camera according to the disclosure includes capturing thermal video data using a thermal imaging sensor, determining quantization parameters for the thermal video data, quantizing the thermal video data to generate quantized thermal video data content and video quantization information, and transmitting the quantized thermal video data stream and the video quantization information to a video analytics server over a network.
20130163382 - SONAR SYSTEM FOR AUTOMATICALLY DETECTING LOCATION OF DEVICES (published 06-27-2013) - Systems and methods are described for determining device positions in a video surveillance system. A method described herein includes generating a reference sound; emitting, at a first device, the reference sound; detecting, at the first device, a responsive reference sound from one or more second devices in response to the emitted reference sound; identifying a position of each of the one or more second devices; obtaining information relating to latency of the one or more second devices; computing a round trip time associated with each of the one or more second devices based on at least a timing of detecting the one or more responsive reference sounds and the latency of each of the one or more second devices; and estimating the position of the first device according to the round trip time and the position associated with each of the one or more second devices.
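The position estimate described above rests on acoustic ranging: subtract each responder's latency from the round-trip time, halve it, convert to distance, then find the point consistent with the known responder positions. A sketch assuming sound in air; the grid-search minimizer stands in for proper trilateration to keep the math obvious:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def device_distance(round_trip_s, device_latency_s):
    """Distance to a responding device: remove its processing latency
    from the round trip, halve, and convert travel time to meters."""
    return SPEED_OF_SOUND * (round_trip_s - device_latency_s) / 2.0

def estimate_position(device_positions, distances, step=0.5, extent=20.0):
    """Coarse grid search for the (x, y) point whose distances to the
    known device positions best match the measured ones (least squares)."""
    best, best_err = None, float("inf")
    steps = int(extent / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            x, y = i * step, j * step
            err = sum((((x - px) ** 2 + (y - py) ** 2) ** 0.5 - d) ** 2
                      for (px, py), d in zip(device_positions, distances))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```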
20130166711 - Cloud-Based Video Surveillance Management System (published 06-27-2013) - Systems and methods are described herein that provide a three-tier intelligent video surveillance management system. An example of a system described herein includes a gateway configured to obtain video content and metadata relating to the video content from a plurality of network devices, a metadata processing module communicatively coupled to the gateway and configured to filter the metadata according to one or more criteria to obtain a filtered set of metadata, a video processing module communicatively coupled to the gateway and the metadata processing module and configured to isolate video portions of the video content associated with respective first portions of the filtered set of metadata, and a cloud services interface communicatively coupled to the gateway, the metadata processing module and the video processing module and configured to provide at least some of the filtered set of metadata or the isolated video portions to a cloud computing service.
20130169822 - CAMERA CALIBRATION USING FEATURE IDENTIFICATION (published 07-04-2013) - Disclosed are methods, systems, computer readable media and other implementations, including a method to calibrate a camera that includes capturing by the camera a frame of a scene, identifying features appearing in the captured frame, the features associated with pre-determined values representative of physical attributes of one or more objects, and determining parameters of the camera based on the identified features appearing in the captured frame and the pre-determined values associated with the identified features.
20130170557 - Method and System for Video Coding with Noise Filtering (published 07-04-2013) - Techniques are discussed herein for providing mechanisms for coding and transmitting high definition video, e.g., over low bandwidth connections. In particular, foreground-objects are identified as distinct from the background of a scene represented by a plurality of video frames. In identifying foreground-objects, semantically significant and semantically insignificant movement (e.g., non-repetitive versus repetitive movement) is differentiated. For example, the swaying motion of a tree's leaves, being minor and repetitive, can be determined to be semantically insignificant and to belong in a scene's background. Processing of the foreground-objects and background proceed at different update rates or frequencies. For example, foreground-objects can be updated 30 or 60 times per second. By contrast, a background is updated less frequently, e.g., once every 10 seconds. In some implementations, if no foreground-objects are identified, no live video is transmitted (e.g., if no motion is detected, static images are not repeatedly sent). Techniques described herein take advantage of the realization that, in the area of surveillance and wireless communications, updating video of semantically significant movement at a high frame rate is sufficient.
20130170696 - CLUSTERING-BASED OBJECT CLASSIFICATION (published 07-04-2013) - An example of a method for identifying objects in video content according to the disclosure includes receiving video content of a scene captured by a video camera, detecting an object in the video content, identifying a track that the object follows over a series of frames of the video content, extracting object features for the object from the video content, and classifying the object based on the object features. Classifying the object further comprises: determining a track-level classification for the object using spatially invariant object features, determining a global-clustering classification for the object using spatially variant features, and determining an object type for the object based on the track-level classification and the global-clustering classification for the object.
20130170760 - Method and System for Video Composition (published 07-04-2013) - A method of presenting video comprising receiving a plurality of video data from a video source, analyzing the plurality of video data, identifying the presence of foreground-objects that are distinct from background portions in the plurality of video data, classifying the foreground-objects into foreground-object classifications, receiving user input selecting a foreground-object classification, and generating video frames from the plurality of video data containing background portions and only foreground-objects in the selected foreground-object classification.
20130176430 - CONTEXT AWARE MOVING OBJECT DETECTION (published 07-11-2013) - An image capture system includes: an image capture unit configured to capture a first image frame comprising a set of pixels; and a processor coupled to the image capture unit and configured to: determine a normalized distance of a pixel characteristic between the first image frame and a second image frame for each pixel in the first image frame; compare the normalized distance for each pixel in the first image frame against a pixel sensitivity value for that pixel; determine that a particular pixel of the first image frame is a foreground or background pixel based on the normalized distance of the particular pixel relative to the pixel sensitivity value for the particular pixel; and adapt the pixel sensitivity value for each pixel over a range of allowable pixel sensitivity values.
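The per-pixel test described above compares a normalized frame-to-frame distance against a per-pixel sensitivity that adapts within an allowed range. A minimal sketch; the noise normalization and the adaptation rule are assumptions, since the abstract does not specify either:

```python
def classify_pixel(value, prev_value, noise_sigma, sensitivity):
    """Normalize the frame-to-frame change by an estimate of the
    pixel's noise, then compare against that pixel's sensitivity."""
    normalized = abs(value - prev_value) / max(noise_sigma, 1e-6)
    return "foreground" if normalized > sensitivity else "background"

def adapt_sensitivity(sensitivity, was_false_alarm, lo=1.0, hi=8.0, step=0.25):
    """Adapt a pixel's sensitivity inside its allowed range [lo, hi]:
    raise it after a false alarm, relax it otherwise."""
    s = sensitivity + step if was_false_alarm else sensitivity - step
    return min(max(s, lo), hi)
```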
20140139633 - Method and System for Counting People Using Depth Sensor (published 05-22-2014) - A sensor system according to an embodiment of the invention may process depth data and visible light data for a more accurate detection. Depth data assists where visible light images are susceptible to false positives. Visible light images (or video) may similarly enhance conclusions drawn from depth data alone. Detections may be object-based or defined within the context of a target object. Depending on the target object, the types of detections may vary to include motion and behavior. Applications of the described sensor system include motion guided interfaces where users may interact with one or more systems through gestures. The sensor system described may also be applied to counting systems, surveillance systems, polling systems, retail store analytics, or the like.
20140139660 - Method and Apparatus for Detecting People by a Surveillance System (published 05-22-2014) - Surveillance systems may be found in both private and public spaces. In private spaces, they can be designed to help provide and monitor secure premises. Similarly, public spaces may also use surveillance systems to determine an allocation of public resources. A camera surveillance system according to an embodiment of the invention uses advanced image processing techniques to determine whether an object moving across a scene is a person. The camera surveillance system achieves an accurate and efficient classification by selectively processing a set of features associated with the object, such as features that define an omega shape. By selectively processing the set of features associated with the object, the methods and systems described herein reduce the computational complexity of standard image processing/object detection techniques.
20140139680 - Method And System For Metadata Extraction From Master-Slave Cameras Tracking System (published 05-22-2014) - An embodiment of the present invention includes a master camera that may record master metadata regarding an object of interest and communicate the master metadata to a slave camera. The slave camera may zoom, pan, or tilt to isolate and record more detailed image data regarding the object of interest based on the master metadata. In addition, the slave camera may record slave metadata regarding the object of interest. The master and slave metadata may be stored associated with the recorded image data, enabling a later search for the object of interest to be expedited. The recorded image data including the object of interest may be identified with greater ease as it may be guided by the master or slave metadata, or a combination thereof. According to embodiments presented herein, processing time for searching and identifying an object of interest may be reduced by enabling a search on the metadata associated with image data, rather than by searching the image data itself.
20140143385 - METHOD AND APPARATUS FOR EFFICIENTLY PRIORITIZING ELEMENTS IN A VIDEO STREAM FOR LOW-BANDWIDTH TRANSMISSION (published 05-22-2014) - Processing video for low-bandwidth transmission may be complex. At a content source, an embodiment of the methods disclosed herein may include assigning a content identifier as a function of content in a packet of a packet stream on a packet-by-packet basis. The method may further comprise forwarding the content identifier with the packet to enable a downstream network node or device to effect prioritization of the packet within the packet stream. The downstream network node or device may make drop decisions that are guided by the content identifier. Packets, or video frames, that contain useful information may be prioritized and have a higher probability of being delivered.
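The two halves of this scheme (tagging at the source, priority-guided dropping downstream) can be sketched as follows. The payload types and priority scale are illustrative assumptions; a real encoder would derive the identifier from frame content:

```python
# Illustrative priority scale: higher means more important to deliver.
PRIORITY = {"i_frame": 3, "p_frame": 2, "b_frame": 1, "filler": 0}

def tag_packet(payload_type, payload):
    """At the content source: assign a content identifier per packet."""
    return {"content_id": PRIORITY.get(payload_type, 0), "payload": payload}

def drop_decision(packets, capacity):
    """At a congested downstream node: keep the highest-priority
    packets that fit, dropping the rest while preserving stream order.
    (A sketch; real schedulers also weigh ordering and timing.)"""
    kept = sorted(packets, key=lambda p: p["content_id"], reverse=True)[:capacity]
    return [p for p in packets if p in kept]
```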
20140152815 - Window Blanking for Pan/Tilt/Zoom Camera (published 06-05-2014) - Network cameras employing advanced video analytics are increasingly being used in both public and private settings. Unlike fixed surveillance cameras, pan/tilt/zoom cameras provide a dynamic field-of-view. Some regions within a given field-of-view can be designated as private and may not be recorded. A window-blanking feature, according to an embodiment of the invention, enables an ability to block out defined portions of the video where a privacy zone may otherwise appear. Through use of the embodiment, consistent privacy is provided during dynamic surveillance to ensure compliance with privacy regulations or contractual arrangements relating to use of a surveillance camera having a privacy zone within the given field-of-view.
20140267704 - System and Method For Audio Source Localization Using Multiple Audio Sensors (published 09-18-2014) - An automated security surveillance system ideally determines a location of a possible disturbance and adjusts its cameras to record video footage of the disturbance. In one embodiment, a disturbance can be determined by recording audio of the nearby area. A system, coupled to a camera, may include an arrangement of at least four audio sensors configured to record audio of the nearby area to produce independent outputs. The system further may include a processing module configured to determine an angle and distance of an audio source relative to a location of the arrangement of the at least four audio sensors. The system can then adjust the camera by rotation along an azimuth or elevation angle and adjusting the zoom level to record video of the audio source. Through use of the system, a surveillance system can present an image of a source of possible disturbance to an operator more rapidly and precisely than through manual techniques.
20140267758 - STEREO INFRARED DETECTOR (published 09-18-2014) - Existing passive infrared (PIR) sensors rely on motion of an object to detect presence and do not provide information about the number of objects or other characteristics of objects in a field of view, such as distance or size. Disclosed herein are apparatuses and corresponding methods for detecting a source of infrared emission. Example embodiments include two infrared sensors for imaging and a processor configured to use the images to detect a presence of an infrared source and output a signal based on the presence. Example apparatuses and corresponding methods provide for measurement of an infrared source's speed, size, height, width, temperature, or range using analytics. Some advantages of these systems and methods include low cost, stereo view, and detection of people, children, or objects with infrared/thermal sensors of low pixel count.
20140270358 - Online Learning Method for People Detection and Counting for Retail Stores (published 09-18-2014) - People detection can provide valuable metrics that can be used by businesses, such as retail stores. Such information can be used to influence any number of business decisions, such as employment hiring and product orders. The business value of this data hinges upon its accuracy. Thus, a method according to the principles of the current invention outputs metrics regarding people in a video frame within a stream of video frames through use of an object classifier configured to detect people. The method further comprises automatically updating the object classifier using data in at least a subset of the video frames in the stream of video frames.
20140277757 - METHOD AND APPARATUS FOR AN ENERGY SAVING HEATING, VENTILATION, AND AIR CONDITIONING (HVAC) CONTROL SYSTEM (published 09-18-2014) - Embodiments of methods and apparatus disclosed herein may employ depth, visual, or motion sensors to enable three-dimensional people counting and data mining to enable an energy saving heating, ventilation, and air conditioning (HVAC) control system. Head detection methods based on depth information may assist people counting in order to enable an accurate determination of room occupancy. A pattern of activities of room occupancy may be learned to predict the activity level of a building or its rooms, reducing energy usage and thereby providing a cost savings.
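The control idea in this last abstract (occupancy count drives the setpoint, and a learned hourly pattern predicts occupancy) can be sketched as follows. The setpoint values and the averaging-by-hour predictor are illustrative assumptions standing in for the learned activity model:

```python
def hvac_setpoint(occupancy_count, comfort_c=22.0, eco_c=17.0):
    """Energy-saving rule: hold the comfort temperature only when
    people are detected, otherwise fall back to an economy setpoint."""
    return comfort_c if occupancy_count > 0 else eco_c

def predict_occupancy(history, hour):
    """Predict occupancy for an hour of day as the average of past
    counts observed at that hour, a simple stand-in for the learned
    room-occupancy pattern the abstract describes."""
    counts = [c for h, c in history if h == hour]
    return sum(counts) / len(counts) if counts else 0.0
```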

Patent applications by Farzin Aghdasi, Clovis, CA US

Website © 2015 Advameg, Inc.