Patent application number | Description | Published |
20110249075 | Remote Control Operations in a Video Conference - Some embodiments provide a method for allowing a first device that is in a video conference with a second mobile device to remotely control the second mobile device. The method sends images captured by a camera of the first device to the second device. The method receives images captured by a camera of the second device. The method sends a command through a communication channel of a real-time communication session to the second device. The command is for instructing the second device to perform an operation that modifies the images captured by the camera of the second device. | 10-13-2011 |
20110249077 | Video Conference Network Management for a Mobile Device - Some embodiments provide a method for managing a video conference between a first device and a second device. The method identifies a first ceiling bit rate for transmitting video conference data to the second device through the communication channel. The method identifies a current bit rate that is less than the first ceiling bit rate. The method receives networking data regarding the communication channel from the second device. The method determines, from the received network data, that the communication channel will sustain an increase in the current bit rate. The method increments the current bit rate. The method iteratively performs the receiving, determining, and incrementing operations until a determination is made that the communication channel will not sustain the increase in the current bit rate. | 10-13-2011 |
20110249078 | Switching Cameras During a Video Conference of a Multi-Camera Mobile Device - Some embodiments provide a method for conducting a video conference between a first mobile device and a second device. The first mobile device includes first and second cameras. The method selects the first camera for capturing images. The method transmits images captured by the first camera to the second device. The method receives selections of the second camera for capturing images during the video conference. The method terminates the transmission of images captured by the first camera and transmits images captured by the second camera of the first mobile device to the second device during the video conference. | 10-13-2011 |
20120281715 | ADAPTIVE BANDWIDTH ESTIMATION - Some embodiments provide a method of combining multiple streams of data packets into a single combined stream in a manner that facilitates accurate estimation of bandwidth of a connection over a network between two devices. When combining the streams into the combined stream, the method associates a set of packets from a first stream and a reference packet from a second stream to form a longer sequence of packets in the combined stream. The method sends the combined stream from a first device to a second device so that the second device can estimate the bandwidth of the connection between the first and second devices based on the inter-arrival times of the packets in the sequence of packets. | 11-08-2012 |
20130222515 | SYSTEM AND METHOD FOR OPTIMIZING VIDEO CONFERENCING IN A WIRELESS DEVICE - A wireless device described herein can use information on data flow, in addition to indications from the physical network, to decide on suitable bandwidth usage for audio and video information. This data flow information is further used to determine an efficient network route to use for high-quality reception and transmission of audio and video data, as well as the appropriate time to switch between available network routes to improve bandwidth performance. | 08-29-2013 |
20130265378 | Switching Cameras During a Video Conference of a Multi-Camera Mobile Device - Some embodiments provide a method for conducting a video conference between a first mobile device and a second device. The first mobile device includes first and second cameras. The method selects the first camera for capturing images. The method transmits images captured by the first camera to the second device. The method receives selections of the second camera for capturing images during the video conference. The method terminates the transmission of images captured by the first camera and transmits images captured by the second camera of the first mobile device to the second device during the video conference. | 10-10-2013 |
20140064165 | RADIO POWER SAVING TECHNIQUES FOR VIDEO CONFERENCE APPLICATIONS - In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. In order to reduce the radio power consumption in video conferencing, it is important to introduce enough radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that doesn't significantly disrupt the real-time nature of video conferencing. In addition, the data transmission can be synchronized to the data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power saving modes such as LTE C-DRX. | 03-06-2014 |
20140067405 | ADAPTIVE AUDIO CODEC SELECTION DURING A COMMUNICATION SESSION - A method for adaptive audio codec selection during a communication session is disclosed. The method can include negotiating a set of audio codecs for use during the communication session. The method can further include defining multiple audio tiers. Each audio tier can be associated with a network condition and can define an audio codec from the set of audio codecs for use in the associated network condition. The method can also include using a first audio codec during the wireless communication session. The method can additionally include determining a changed network condition and selecting a second audio codec by determining the audio tier corresponding to the changed network condition. The method can further include, in response to the changed network condition, switching from the first audio codec to a second audio codec that is defined by an audio tier having an associated network condition corresponding to the changed network condition. | 03-06-2014 |
20140068084 | DETECTING AND RECOVERING FROM A TRANSMISSION CHANNEL CHANGE DURING A STREAMING MEDIA SESSION - A method for detecting and recovering from a transmission channel change during a streaming media session is disclosed. The method can include a wireless communication device detecting a stall condition resulting from a transmission channel change. The method can further include the wireless communication device capturing a snapshot of a current transmission parameter state of the streaming media session in response to detecting the stall condition. The method can also include the wireless communication device using the snapshot to restore the streaming media session to the transmission parameter state captured by the snapshot following completion of the transmission channel change. | 03-06-2014 |
20140072000 | ADAPTIVE JITTER BUFFER MANAGEMENT FOR NETWORKS WITH VARYING CONDITIONS - An apparatus and method for detecting and analyzing spikes in network jitter and the estimation of a jitter buffer target size is disclosed. Detected spikes may be classified as jump spikes or slope spikes, and a clipped size of detected spikes may be used in the estimation of the jitter buffer target. Network characteristics and conditions may also be used in the estimation of the jitter buffer target size. Samples may be modified during playback adaptation to improve audio quality and maintain low delay of a receive chain. | 03-13-2014 |
20140241415 | ADAPTIVE STREAMING TECHNIQUES - Systems and methods are presented for minimizing the suddenness and immediacy of changes to the video quality perceived by users due to bandwidth fluctuations and transitions between different bitrate streams. A method may include identifying an upcoming bitrate change in a bitstream and a nearest scene cut boundary from sync frame scene cut tags included in the bitstream. The method may include calculating whether waiting until the identified nearest scene cut boundary before changing the bitrate will cause the buffer to drop below a threshold. When the buffer is calculated to not drop below the threshold, the method may postpone the upcoming bitrate change until the identified nearest scene cut boundary. | 08-28-2014 |
20140362162 | RADIO POWER SAVING TECHNIQUES FOR VIDEO CONFERENCE APPLICATIONS - In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. In order to reduce the radio power consumption in video conferencing, it is important to introduce enough radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that doesn't significantly disrupt the real-time nature of video conferencing. In addition, the data transmission can be synchronized to the data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power saving modes such as LTE C-DRX. | 12-11-2014 |
20150350560 | VIDEO CODING WITH COMPOSITION AND QUALITY ADAPTATION BASED ON DEPTH DERIVATIONS - Techniques for coding video data estimate depths of different elements within video content and identify regions within the video content based on the estimated depths. One of the regions may be assigned as an area of interest. Thereafter, video content of a region that is not an area of interest may be masked out and the resultant video content obtained from the masking may be coded. The coded video content may be transmitted to a channel. These techniques permit a coding terminal to mask out captured video content prior to coding in order to support coding policies that account for privacy interests or video composition features during a video coding session. | 12-03-2015 |
20150350714 | PLAYBACK OF VIDEO ON DEMAND - A method and system for caching and streaming media content, including predictively delivering and/or acquiring content is provided. In the system, client devices may be communicatively coupled in a network, and may access and share cached content. Video segments making up a media stream may be selectively delivered to the clients such that a complete media stream may be formed from the different segments delivered to the different clients. Video segments may be pushed by the server to the client or requested by the client according to a prioritization scheme, including downloading: partial items on a client's subscription log, lower quality version(s) of content before higher quality version(s), higher bitrate segments before lower bitrate segments, summaries of full-length content, advertisements and splash screens common to multiple video clips. | 12-03-2015 |
20150358577 | INSTANT VIDEO COMMUNICATION CONNECTIONS - Computing devices may implement instant video communication connections for video communications. Connection information for mobile computing devices may be maintained. A request to initiate an instant video communication may be received, and if authorized, the connection information for the particular recipient mobile computing device may be accessed. Video communication data may then be sent to the recipient mobile computing device according to the connection information so that the video communication data may be displayed at the recipient device as it is received. New connection information for different mobile computing devices may be added, or updates to existing connection information may also be performed. Connection information for some mobile computing devices may be removed. | 12-10-2015 |
20150358580 | DYNAMIC DISPLAY OF VIDEO COMMUNICATION DATA - Computing devices may implement dynamic display of video communication data. Video communication data for a video communication may be received at a computing device where another application is currently displaying image data on an electronic display. A display location may be determined for the video communication data according to display attributes that are configured by the other application at runtime. Once determined, the video communication data may then be displayed in the determined location. In some embodiments, the video communication data may be integrated with other data displayed on the electronic display for the other application. | 12-10-2015 |
20150358581 | DYNAMIC DETECTION OF PAUSE AND RESUME FOR VIDEO COMMUNICATIONS - Computing devices may implement dynamic detection of pause and resume for video communications. Video communication data may be captured at a participant device in a video communication. The video communication data may be evaluated to detect a pause or resume event for the transmission of the video communication data. Various types of video, audio, and other sensor analysis may be used to detect when a pause event or a resume event may be triggered. For triggered pause events, at least some of the video communication data may no longer be transmitted as part of the video communication. For triggered resume events, a pause state may cease and all of the video communication data may be transmitted. | 12-10-2015 |
20150358582 | DYNAMIC TRANSITION FROM VIDEO MESSAGING TO VIDEO COMMUNICATION - Computing devices may implement dynamic transitions from video messages to video communications. Video communication data for a video message may be received at a recipient device. The video communication data may be displayed as it is received, and recorded for subsequent playback. An indication of a selection to establish a video communication with the sender of the video message may be received, or an indication that display of the video communication is to be ceased may be received. If a video communication is to be established, then a video communication connection with the sender of the video message may be created so that subsequent video communication data may be sent via the established connection. | 12-10-2015 |
20160092561 | VIDEO ANALYSIS TECHNIQUES FOR IMPROVED EDITING, NAVIGATION, AND SUMMARIZATION - Systems and processes for improved video editing, summarization and navigation based on generation and analysis of metadata are described. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data may be based on the metadata and/or image data. Classification information such as the type or quality of a scene may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to automatically remove undesirable portions of the video, generate suggestions during editing or automatically generate summary video. | 03-31-2016 |
20160100131 | Radio Power Saving Techniques for Video Conference Applications - In video conferencing over a radio network, the radio equipment is a major power consumer, especially in cellular networks such as LTE. In order to reduce the radio power consumption in video conferencing, it is important to introduce enough radio inactive time. Several types of data buffering and bundling can be employed within a reasonable range of latency that doesn't significantly disrupt the real-time nature of video conferencing. In addition, the data transmission can be synchronized to the data reception in a controlled manner, which can result in an even longer radio inactive time and thus take advantage of radio power saving modes such as LTE C-DRX. | 04-07-2016 |
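Several of the abstracts above describe iterative control loops. The bit-rate management described in 20110249077, for example, ramps the current bit rate upward until the channel will no longer sustain an increase. The following sketch illustrates that loop only; it is not the patented implementation, and `channel_sustains_increase` is a hypothetical callback standing in for the networking data the remote device reports back:

```python
def ramp_bit_rate(current, ceiling, step, channel_sustains_increase):
    """Iteratively probe upward toward a ceiling bit rate.

    `channel_sustains_increase` is a hypothetical predicate standing in
    for the receive/determine step in the abstract: it answers whether
    the communication channel will sustain the proposed higher rate.
    """
    while current < ceiling:
        # Determine from (stand-in) network feedback whether an
        # increment to the current bit rate would hold.
        if not channel_sustains_increase(current + step):
            break  # the channel will not sustain the increase; stop
        # Increment the current bit rate, never exceeding the ceiling.
        current = min(current + step, ceiling)
    return current
```

For example, with a 1000 kbps ceiling and a channel that only sustains rates up to 700 kbps, `ramp_bit_rate(300, 1000, 100, lambda r: r <= 700)` settles at 700.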
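The tier-based codec switching in 20140067405 maps each network condition to a codec drawn from the negotiated set. A minimal sketch of that lookup, using an illustrative tier table (the codec names and kbps thresholds are assumptions, not values from the application):

```python
# Hypothetical tier table: each tier pairs a network-condition floor
# (minimum estimated throughput, kbps) with the codec it defines.
AUDIO_TIERS = [
    (64, "AAC-ELD"),  # good network: high-quality codec
    (24, "AMR-WB"),   # degraded network: wideband fallback
    (0,  "AMR-NB"),   # poor network: narrowband fallback
]

def select_codec(estimated_kbps, tiers=AUDIO_TIERS):
    """Return the codec defined by the tier whose network condition
    corresponds to the current estimate (first matching floor wins)."""
    for floor_kbps, codec in tiers:
        if estimated_kbps >= floor_kbps:
            return codec
    return tiers[-1][1]  # unreachable with a 0-kbps floor tier
```

On a changed network condition, the session would re-run the lookup and switch codecs when the returned codec differs from the one in use.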
20090073005 | COMPLEXITY-AWARE ENCODING - Techniques for encoding data based at least in part upon an awareness of the decoding complexity of the encoded data and the ability of a target decoder to decode the encoded data are disclosed. In some embodiments, a set of data is encoded based at least in part upon a state of a target decoder to which the encoded set of data is to be provided. In some embodiments, a set of data is encoded based at least in part upon the states of multiple decoders to which the encoded set of data is to be provided. | 03-19-2009 |
20090092184 | POWER SAVING DECODER ARCHITECTURE - A method and system are provided for decoding coded video data by turning off or not loading at least one functional unit or functional subunit of the decoder while decoding a portion of the coded video data. A schedule may be created prior to substantive decoding and then the schedule may be used to decode coded video data. The coded video data may be reordered based on the functional units or subunits the portions of the coded video data need for decoding. The portions of the coded video data are reordered into their original order in an output buffer after being decoded. The decoder may determine which functional units or subunits are needed for decoding based on administration information included with the coded video data. The decoder may decode portions of the coded video data in parallel. | 04-09-2009 |
20100235523 | FRAMEWORK FOR SUPPORTING MULTI-DEVICE COLLABORATION - A framework for providing multi-device collaboration is described herein. In one embodiment, a method for providing multi-device collaboration between first and second devices can include transferring an initializing function call to create a session object. The function call specifies a mode of the session object, a service type, and a service name. The session object can include functions to discover the second device, connect with the second device, and provide data transport between the connected first and second devices. The service name can include a truncated name, a unique identification, and a state of service of a software application associated with the first device. The method can include detecting a network and advertising the service type and the service name via the network. The service type and service name can be advertised prior to establishing the connection between the first and second devices. | 09-16-2010 |
20100321469 | Video Processing in a Multi-Participant Video Conference - Some embodiments provide an architecture for establishing multi-participant video conferences. This architecture has a central distributor that receives video images from two or more participants. From the received images, the central distributor generates composite images that the central distributor transmits back to the participants. Each composite image includes a set of sub images, where each sub image belongs to one participant. In some embodiments, the central distributor saves network bandwidth by removing each particular participant's image from the composite image that the central distributor sends to the particular participant. In some embodiments, images received from each participant are arranged in the composite in a non-interleaved manner. For instance, in some embodiments, the composite image includes at most one sub-image for each participant, and no two sub-images are interleaved. | 12-23-2010 |
20110116409 | MULTI-PARTICIPANT CONFERENCE SETUP - Some embodiments provide an architecture for establishing a multi-participant conference. This architecture has one participant's computer in the conference act as a central content distributor for the conference. The central distributor receives data (e.g., video and/or audio streams) from the computer of each other participant, and distributes the received data to the computers of all participants. In some embodiments, the central distributor receives A/V data from the computers of the other participants. From such received data, the central distributor of some embodiments generates composite data (e.g., composite image data and/or composite audio data) that the central distributor distributes back to the participants. | 05-19-2011 |
20110205332 | HETEROGENEOUS VIDEO CONFERENCING - Some embodiments provide an architecture for establishing a multi-participant conference. This architecture has one participant's computer in the conference act as a central content distributor for the conference. The central distributor receives data (e.g., video and/or audio streams) from the computer of each other participant, and distributes the received data to the computers of all participants. In some embodiments, the central distributor receives A/V data from the computers of the other participants. From such received data, the central distributor of some embodiments generates composite data (e.g., composite image data and/or composite audio data) that the central distributor distributes back to the participants. The central distributor in some embodiments can implement a heterogeneous audio/video conference. In such a conference, different participants can participate in the conference differently. For instance, different participants might use different audio or video codecs. Moreover, in some embodiments, one participant might participate in only the audio aspect of the conference, while another participant might participate in both audio and video aspects of the conference. | 08-25-2011 |
20110234430 | COMPLEXITY-AWARE ENCODING - Techniques for encoding data based at least in part upon an awareness of the decoding complexity of the encoded data and the ability of a target decoder to decode the encoded data are disclosed. In some embodiments, a set of data is encoded based at least in part upon a state of a target decoder to which the encoded set of data is to be provided. In some embodiments, a set of data is encoded based at least in part upon the states of multiple decoders to which the encoded set of data is to be provided. | 09-29-2011 |
20120250761 | MULTI-PASS VIDEO ENCODING - Some embodiments of the invention provide a multi-pass encoding method that encodes several images (e.g., several frames of a video sequence). The method iteratively performs an encoding operation that encodes these images. The encoding operation is based on a nominal quantization parameter, which the method uses to compute quantization parameters for the images. During several different iterations of the encoding operation, the method uses several different nominal quantization parameters. The method stops its iterations when it reaches a terminating criterion (e.g., it identifies an acceptable encoding of the images). | 10-04-2012 |
20120290668 | MULTI-PARTICIPANT CONFERENCE SETUP - Some embodiments provide an architecture for establishing a multi-participant conference. This architecture has one participant's computer in the conference act as a central content distributor for the conference. The central distributor receives data (e.g., video and/or audio streams) from the computer of each other participant, and distributes the received data to the computers of all participants. In some embodiments, the central distributor receives A/V data from the computers of the other participants. From such received data, the central distributor of some embodiments generates composite data (e.g., composite image data and/or composite audio data) that the central distributor distributes back to the participants. | 11-15-2012 |
20140049599 | Multi-Participant Conference Setup - Some embodiments provide an architecture for establishing a multi-participant conference. This architecture has one participant's computer in the conference act as a central content distributor for the conference. The central distributor receives data (e.g., video and/or audio streams) from the computer of each other participant, and distributes the received data to the computers of all participants. In some embodiments, the central distributor receives A/V data from the computers of the other participants. From such received data, the central distributor of some embodiments generates composite data (e.g., composite image data and/or composite audio data) that the central distributor distributes back to the participants. | 02-20-2014 |
20150103135 | Compositing Pairs Of Image Frames From Different Cameras Of A Mobile Device To Generate A Video Stream - Some embodiments provide a novel method for in-conference adjustment of encoded video pictures captured by a mobile device having at least first and second cameras. The method may involve real-time modifications of composite video displays that are generated by the mobile devices involved in such a conference. Specifically, in some embodiments, the mobile devices generate composite displays that simultaneously display multiple videos captured by multiple cameras of one or more devices. In some cases, the composite displays place the videos in adjacent display areas (e.g., in adjacent windows). In other cases, the composite display is a picture-in-picture (PIP) display that includes at least two display areas that show two different videos where one of the display areas is a background main display area and the other is a foreground inset display area that overlaps the background main display area. | 04-16-2015 |
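The multi-pass scheme in 20120250761 re-runs the encoding operation with different nominal quantization parameters until a terminating criterion is met. One common instance of that idea is bisecting on QP to hit a target encoded size; the sketch below assumes that form and an H.264-style QP range, and `encode_at` is a hypothetical encoder stub returning an encoded size, not a real codec API:

```python
def multi_pass_encode(encode_at, target_size, tol=0.05, max_iters=10):
    """Iteratively re-encode at different nominal QPs until the result
    is acceptable (within `tol` of `target_size`) or iterations run out.
    Relies on encoded size falling monotonically as QP rises.
    """
    lo, hi = 0, 51  # H.264-style nominal QP range
    best = None
    for _ in range(max_iters):
        qp = (lo + hi) // 2
        size = encode_at(qp)      # one pass of the encoding operation
        best = (qp, size)
        if abs(size - target_size) <= tol * target_size:
            break                 # terminating criterion reached
        if size > target_size:
            lo = qp + 1           # too large: coarser quantization
        else:
            hi = qp - 1           # too small: finer quantization
    return best
```

With a toy size model such as `lambda qp: 1000 * (52 - qp)`, the loop converges to a QP whose encoded size is within the tolerance of the target after a handful of passes.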