Avid Technology, Inc. Patent applications |
Patent application number | Title | Published |
20150288645 | SYNCHRONIZED STORY-CENTRIC MEDIA DISTRIBUTION - Social media messages associated with a news story item are scheduled to be published at a time relative to an expected air time of the news story in a news rundown. When the time to air of the news story item is changed, the time for publishing the social media message is changed automatically according to the updated news rundown, maintaining the relative timing. This helps news organizations manage the timing of social media messages that are related to broadcast stories, without the need to monitor a changing rundown. When social media messages published in advance of a news item broadcast engender high interest, newsroom producers may react by promoting the news item within the rundown and publishing additional teaser messages. Conversely, low reaction levels may lead to the demotion of a story, or even its elimination from the rundown. | 10-08-2015 |
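The rundown-relative scheduling described in this abstract can be sketched as a small model. All names here (`StorySocialScheduler`, `schedule_post`) are hypothetical illustrations, not taken from the patent; the key idea is that post times are stored as offsets from the story's expected air time, so a rundown change shifts every related post automatically:

```python
from datetime import datetime, timedelta

class StorySocialScheduler:
    """Toy model of rundown-relative social-post scheduling."""

    def __init__(self, air_time):
        self.air_time = air_time
        self.offsets = {}  # post_id -> timedelta relative to air time

    def schedule_post(self, post_id, offset):
        self.offsets[post_id] = offset

    def publish_time(self, post_id):
        # Publish time is always derived from the *current* air time,
        # so relative timing is maintained when the rundown changes.
        return self.air_time + self.offsets[post_id]

    def update_air_time(self, new_air_time):
        self.air_time = new_air_time

sched = StorySocialScheduler(datetime(2015, 10, 8, 18, 0))
sched.schedule_post("teaser", timedelta(minutes=-30))
t1 = sched.publish_time("teaser")                     # 30 min before air
sched.update_air_time(datetime(2015, 10, 8, 19, 0))   # story moved later
t2 = sched.publish_time("teaser")                     # shifts automatically
```

Because the offset, not the absolute time, is what is stored, no one has to monitor the changing rundown and re-enter publish times by hand.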
20150286489 | AUTOMATIC DETECTION AND LOADING OF MISSING PLUG-INS IN A MEDIA COMPOSITION APPLICATION - When collaborators working on a media composition project share portions of a composition that involve the use of plug-ins, the collaborator receiving the shared portion requires a local copy of the plug-ins in order to play or edit the shared portion. If a plug-in is missing, the receiving system automatically notifies the receiving collaborator of the missing plug-in, and enables the receiver to purchase or rent it from a marketplace made available within the receiver's media composition application, and to download, install, load, and run the missing application without restarting the composition application. The same process may be used when a plug-in on the receiving system needs to be updated before it is able to process the shared portion. This streamlines collaboration in distributed media composition workflows. | 10-08-2015 |
20150234850 | MERGING AND SPLITTING OF MEDIA COMPOSITION FILES - During the production of a time-based media project, it is often desirable for editors to work with media files or reels of a given size, both in terms of the temporal duration of media represented in each file and the number of tracks in a file. During the course of editing, files may become longer, or incorporate additional tracks, making them cumbersome to handle. A super-file view that displays multiple files simultaneously provides a framework for an editor to rebalance files during the course of media production. A graphical user interface permits users to adjust the content of the various files, including moving tracks among multiple files that comprise a given reel, as well as media between files belonging to different reels. | 08-20-2015 |
20150066175 | AUDIO PROCESSING IN MULTIPLE LATENCY DOMAINS - Methods and systems for generating computationally complex audio effects with low latency involve partitioning the computation required to produce the effect into two components: a first component to be executed on a low latency signal network; and a second component to be executed simultaneously with the first component on a high latency signal network. For certain effects for which computation is separable into high and low latency functions, such dual signal network execution results in an overall signal latency of the low latency signal network and an overall efficiency of the high latency signal network. The low and high latency signal networks may be implemented on a DSP and a general purpose microprocessor respectively, or both networks may be implemented on a single CPU. Simultaneous dual network implementation is especially beneficial in professional audio performance and recording environments. | 03-05-2015 |
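A classic example of an effect whose computation is separable in the way this abstract describes is a long FIR filter: the short head of the impulse response can run on the low-latency path while the delayed tail runs concurrently on the high-latency path, and the two outputs sum to the full result. This is only an illustrative sketch of separability (the patent's DSP/CPU execution model is not modeled here), with hypothetical names:

```python
def convolve(x, h):
    """Direct (full-length) convolution of signal x with kernel h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def dual_domain_filter(x, h, split):
    """Split kernel h into a head (low-latency path) and a tail
    (high-latency path, delayed by `split` samples); sum the outputs."""
    head, tail = h[:split], h[split:]
    low = convolve(x, head)    # computed with minimal buffering
    high = convolve(x, tail)   # may be computed in large, efficient blocks
    y = [0.0] * (len(x) + len(h) - 1)
    for i, v in enumerate(low):
        y[i] += v
    for i, v in enumerate(high):
        y[i + split] += v      # tail output is delayed by `split` samples
    return y

x = [1.0, 0.5, -0.25, 0.0, 2.0]
h = [0.3, 0.2, 0.1, 0.05, 0.025, 0.0125]
full = convolve(x, h)
dual = dual_domain_filter(x, h, 2)   # identical output, two latency domains
```

Only the head's length determines the perceived latency, while the bulk of the work lives in the tail, which is why the combination achieves both low latency and high efficiency.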
20150063774 | INTERCONNECTED MULTIMEDIA SYSTEMS WITH SYNCHRONIZED PLAYBACK OF MEDIA STREAMS - Synchronous playback of time-based media received from one or more locations remote from a primary editing/mixing studio is achieved by time-stamping media samples with a local presentation time before streaming them to the primary studio. At the primary studio, samples having the same presentation timestamp are played back at the same time, independently of the samples' arrival time at the playback system. Media stored locally to the playback system may also be included as part of the synchronous playback using locally computed presentation times. In order to accommodate media streaming transmission delays, the playback system negotiates a suitable delay with the remote systems such that samples corresponding to a given presentation time are received at the playback system from remote locations prior to playback of media corresponding to the given presentation time. | 03-05-2015 |
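The timestamp-keyed playback and delay negotiation described above can be sketched as follows. The names (`negotiate_delay`, `SyncPlayer`) are hypothetical; the point is that samples sharing a presentation timestamp play together regardless of arrival time, and the negotiated delay must cover the slowest remote source:

```python
def negotiate_delay(source_delays, margin=0.0):
    # Choose a playout delay at least as large as the slowest source's
    # transmission delay, so every sample arrives before it is needed.
    return max(source_delays.values()) + margin

class SyncPlayer:
    def __init__(self, delay):
        self.delay = delay
        self.buffer = {}  # presentation_time -> {source: sample}

    def receive(self, source, presentation_time, sample):
        # Arrival order is irrelevant; samples are filed by timestamp.
        self.buffer.setdefault(presentation_time, {})[source] = sample

    def play(self, presentation_time):
        # Everything stamped with this presentation time plays together.
        return self.buffer.pop(presentation_time, {})

delay = negotiate_delay({"nyc": 0.12, "la": 0.3})
player = SyncPlayer(delay)
player.receive("la", 1.0, "sample-la")
player.receive("nyc", 1.0, "sample-nyc")
frame = player.play(1.0)   # both sources' samples, played in sync
```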
20140304603 | FULL FIDELITY REMOTE VIDEO EDITING - Video editing methods and systems enable an editor to edit a video project for which source media assets are located at a media storage server located remotely from the editor with substantially the same fidelity and editing feature set that would be available if the source media assets and editor were co-located. A video editing client used by the editor maintains a persistent cache of proxy media with the layers of the video project stored independently, facilitating editing with combinations of locally originated and remote assets. The client requests frames not already cached from the remote server via a low bandwidth network. Unless a frame is purged from the cache, no frame is requested from the server more than once. A multi-level priority prefetching scheme, including sequence-based prefetching, populates the cache with frames likely to be requested during editing. | 10-09-2014 |
20140301716 | CONTENT-BASED UNIQUE MATERIAL IDENTIFIERS - A unique material identifier (UMID) for a media file that was not provided with a UMID at its point of origination is generated by using the content of the file, and is independent of the time of file import or accessing. For a given item of media material, the UMID remains unchanged and uniquely identifies the item when such a file is imported or accessed multiple times. The UMID may be generated by hashing together selected portions of the metadata and essence of the media file. The amount of metadata and essence sampled is chosen to provide a high degree of assurance that the UMID will be unique, but is kept small enough so as to avoid causing a perceptible lag when the UMID is generated. In various embodiments the UMID is based purely on one or more selected portions of the media file essence. | 10-09-2014 |
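The abstract's hashing scheme can be sketched directly: hash a few metadata fields together with small samples of the essence, so the identifier is stable across repeated imports while remaining cheap to compute. The function name, field names, and sampling offsets below are illustrative assumptions, not the patent's actual design:

```python
import hashlib

def content_umid(essence: bytes, metadata: dict, sample_size: int = 64) -> str:
    """Derive a stable identifier from file content, not import time."""
    h = hashlib.sha256()
    for key in sorted(metadata):              # stable field ordering
        h.update(f"{key}={metadata[key]}".encode())
    n = len(essence)
    # Sample the start, middle, and end of the essence: enough content
    # for uniqueness, little enough to avoid a perceptible lag.
    for offset in (0, n // 2, max(0, n - sample_size)):
        h.update(essence[offset:offset + sample_size])
    return h.hexdigest()

media = bytes(range(256)) * 100
meta = {"duration": 4800, "codec": "dnxhd"}
umid1 = content_umid(media, meta)
umid2 = content_umid(media, meta)   # re-import: same content, same UMID
```

Because the identifier is a pure function of content, importing or accessing the same file multiple times always yields the same UMID, unlike schemes keyed to import time.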
20140289472 | APPLICATION-GUIDED BANDWIDTH-MANAGED CACHING - Methods and systems for populating a cache memory that services a media composition system. Caching priorities are based on a state of the media composition system, such as media currently within a media composition timeline, a composition playback location, media playback history, and temporal location within clips that are included in the composition. Caching may also be informed by descriptive metadata and media search results within a media composition client or a within a media asset management system accessed by the client. Additional caching priorities may be based on a project workflow phase or a client project schedule. Media may be partially written to or read from cache in order to meet media request deadlines. Caches may be local to a media composition system or remote, and may be fixed or portable. | 09-25-2014 |
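The state-driven caching priorities described above can be sketched as a simple scoring function. The weights and field names here are hypothetical; the idea is that timeline membership, proximity to the playback location, and playback history all feed the priority:

```python
def cache_priority(clip, state):
    """Score a media clip for caching based on composition state."""
    score = 0.0
    if clip["id"] in state["timeline_clips"]:
        score += 10.0                      # in the composition timeline
    distance = abs(clip["start"] - state["playhead"])
    score += 5.0 / (1.0 + distance)        # near the playback location
    if clip["id"] in state["recently_played"]:
        score += 2.0                       # playback history
    return score

state = {"timeline_clips": {"a", "b"}, "playhead": 120.0,
         "recently_played": {"c"}}
clips = [{"id": "a", "start": 118.0}, {"id": "b", "start": 900.0},
         {"id": "c", "start": 121.0}]
ranked = sorted(clips, key=lambda c: cache_priority(c, state), reverse=True)
```

A cache manager would fetch and retain clips in this ranked order, re-scoring as the editor moves the playhead or changes the timeline.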
20140281979 | MODULAR AUDIO CONTROL SURFACE - A user-configurable modular audio control surface comprises master modules for controlling global surface properties and channel modules for controlling one or more individual audio channels. The modules are disposed in a two-dimensional spatial arrangement such that any module can occupy a location within the control surface not occupied by another module. The modules are connected to each other and to external platforms hosting media applications and plug-ins via a network. Control surface users can interact with external applications via remote graphical user interfaces displayed on modules within the surface, and can automate multiple external applications using an automation system built into the surface. Automation line graphs and metadata for both internal and external applications are displayed over the corresponding waveform displays that can include audio ahead of a current playback location. | 09-18-2014 |
20140267298 | MODULAR AUDIO CONTROL SURFACE - A user-configurable modular audio control surface comprises master modules for controlling global surface properties and channel modules for controlling one or more individual audio channels. The modules are disposed in a two-dimensional spatial arrangement such that any module can occupy a location within the control surface not occupied by another module. The modules are connected to each other and to external platforms hosting media applications and plug-ins via a network. Control surface users can interact with external applications via remote graphical user interfaces displayed on modules within the surface, and can automate multiple external applications using an automation system built into the surface. Automation line graphs and metadata for both internal and external applications are displayed over the corresponding waveform displays that can include audio ahead of a current playback location. | 09-18-2014 |
20140244014 | AUDIO AND MUSIC DATA TRANSMISSION MEDIUM AND TRANSMISSION PROTOCOL - A transmission medium and protocol are provided for bi-directional communication between an audio system and a peripheral device. The transmission medium includes a communication medium for communicating data and a communication medium for communicating a clock signal that corresponds to a transmission rate of bits on the other communication media. By transmitting the clock signal on a separate communication medium from the data, clock recovery is avoided. There may be multiple clock domains. By having multiple clock domains, multiple sample rates can be supported. Synchronization information is embedded in the signal by using run length limiting markers between the data for each channel and a synchronization word having more consecutive zero bits than the number of bits for each channel. One or more channels may be dedicated to providing control and status information. | 08-28-2014 |
20140143671 | DUAL FORMAT AND DUAL SCREEN EDITING ENVIRONMENT - Methods and systems for multi-screen media authoring include displaying an integrated graphical user interface with a timeline for first screen linear time-based media editing and a second timeline for editing second screen content associated with the first screen content. Second screen content includes a sequence of modules that involve active viewer interaction and/or passive consumption. The display of the first and second timelines are temporally aligned with each other, and enable time-line-based editing of second screen content synchronized to the first screen. Selection of a second screen module on the second timeline invokes an editing environment corresponding to the type of module selected. Integrated monitors show end user or proxy views of first and second screen content corresponding to the first and second timelines respectively. | 05-22-2014 |
20140136574 | HIERARCHICAL MULTIMEDIA PROGRAM COMPOSITION - A computer-based method for media composition of a family of related time-based media programs. The method involves creating a master program with time-based elements of video and/or audio as well as time-based and non-time-based metadata, creating a derivative program that includes derivative elements, defining an inheritance relationship between the master program and the derivative program that specifies elements of the master program to be inherited by the derivative program, and causing the derivative program to inherit the specified elements from the master program in accordance with the inheritance relationship. User interfaces are provided for creating, editing, and viewing hierarchical trees of related programs. | 05-15-2014 |
20140116233 | METRICAL GRID INFERENCE FOR FREE RHYTHM MUSICAL INPUT - Computer-based methods infer a metrical grid from music that has been input without a predetermined time signature or tempo, enabling such free rhythm input to be annotated with the inferred grid, and stored and transcribed as a musical score. The methods use Bayesian modeling techniques, in which an optimal metrical grid is inferred by identifying the metrical grid that best explains the given sequence of notes by maximizing the posterior probability that it represents the note sequence. Prior musical input from a given user as well as explicit information about the musical style of the input may be used to improve the accuracy of the transcription. | 05-01-2014 |
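The Bayesian inference described above amounts to scoring candidate metrical grids against the observed note onsets and choosing the maximum a posteriori grid. The sketch below is a deliberately minimal version: a grid is a (period, phase) pair, each onset is assumed to lie near some grid point with Gaussian deviation, and we take the grid maximizing the (log) posterior. All names and the candidate-enumeration strategy are illustrative assumptions, not the patent's method:

```python
import math

def grid_log_posterior(onsets, period, phase, sigma=0.03, prior=None):
    """Log posterior of a metrical grid given onset times (seconds)."""
    ll = 0.0
    for t in onsets:
        k = round((t - phase) / period)        # nearest grid point
        dev = t - (phase + k * period)
        ll += -dev * dev / (2 * sigma * sigma)  # Gaussian log-likelihood
    lp = math.log(prior(period)) if prior else 0.0  # optional style prior
    return ll + lp

def infer_grid(onsets, periods, phases):
    """Pick the (period, phase) maximizing the posterior."""
    return max(((p, ph) for p in periods for ph in phases),
               key=lambda g: grid_log_posterior(onsets, g[0], g[1]))

# Slightly loose performance of quarter notes at 120 BPM (0.5 s period)
onsets = [0.0, 0.5, 1.0, 1.52, 2.0]
best_period, best_phase = infer_grid(onsets, [0.4, 0.5, 0.6], [0.0])
```

A `prior` callable is where knowledge of the user's previous input or a declared musical style would enter, reweighting candidate tempi before the likelihood decides.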
20140044413 | SYNCHRONOUS DATA TRACKS IN A MEDIA EDITING SYSTEM - A media editing system provides an editor with full visibility and editing capability for synchronous data that is adjunct to audio and video. The data tracks include one or more streams of data packets, each stream being of a particular data type. Synchronous data tracks are displayed on the timeline, facilitating data track editing independent of the associated media tracks. The UI also enables selective playback and export of the data tracks along with the corresponding video and audio. The system also enables data streams to be filtered and combined. Data from the data tracks can be extracted and imported into a media asset management system, enabling the data to be searched. | 02-13-2014 |
20130275312 | METHODS AND SYSTEMS FOR COLLABORATIVE MEDIA CREATION - A collaboration server hosts software for collaborative composition and editing of a media project with project collaborators using different media editing applications each having their own native data format. Project collaborators, such as video editors, sound editors, effects and graphics artists, and producers access a shared project workspace which contains a snapshot of the current state of the media project in a canonical format, as well as source media files, native application metadata, and change notes. Each editing application includes a module enabling it to read the canonical snapshot representation, and also to flatten its native data model representation into the canonical representation for writing to the shared project workspace. A collaboration server hosts the shared project space, and includes a workflow manager for issuing change notifications and handling versions, and an application server for the shared project user interface. Change notes are generated manually and also expressed automatically in terms of machine-readable change primitives that serve to direct an editor's attention to portions of the media project needing attention. | 10-17-2013 |
20130163855 | AUTOMATED DETECTION AND CORRECTION OF STEREOSCOPIC EDGE VIOLATIONS - Pixel-based and region-based methods, computer program products, and systems for detecting, flagging, highlighting on a display, and automatically fixing edge violations in stereoscopic images and video. The highlighting and display methods involve signed, clamped subtraction of one image of a stereo image pair from the other image, with the subtraction preferably isolated to a region of interest near the lateral edges. Various embodiments include limiting the detection, flagging, and highlighting of edge violations to objects causing a degree of perceptual discomfort greater than a user-set or preset threshold, or to objects having a certain size and/or proximity and/or degree of cut-off by a lateral edge of the left or right eye images of a stereo image pair. Methods of removing violations include automatic or semi-automatic cropping of the offending object, and depth shifting of the offending object onto the screen plane. | 06-27-2013 |
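The highlighting method in this abstract — signed, clamped subtraction of one eye's image from the other, restricted to the lateral edges — can be sketched per pixel. The band width and function name are hypothetical; an object visible near an edge in one eye but not the other survives the subtraction and shows up as a nonzero region:

```python
def highlight_edge_violations(left, right, band=1):
    """Signed, clamped subtraction restricted to the lateral edges.

    left/right: rows of grayscale pixel values (0-255) for each eye.
    Returns a same-sized image, nonzero only where the left eye sees
    content near a lateral edge that the right eye does not.
    """
    h, w = len(left), len(left[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x < band or x >= w - band:          # region of interest
                out[y][x] = max(0, min(255, left[y][x] - right[y][x]))
    return out

left  = [[200, 0, 0,  50], [180, 0, 0, 0]]
right = [[  0, 0, 0, 120], [  0, 0, 0, 0]]
viol = highlight_edge_violations(left, right, band=1)
```

Repeating the subtraction with the operands swapped catches violations visible only in the right eye; thresholding the result by area would implement the size/discomfort filtering the abstract mentions.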
20130127883 | FRAMEWORK TO INTEGRATE AND ABSTRACT PROCESSING OF MULTIPLE HARDWARE DOMAINS, DATA TYPES AND FORMAT - A portable development and execution framework for processing media objects. The framework involves: accepting an instruction to perform a media processing function; accepting a media object to be associated with the media processing function; wrapping the media object with an attribute that specifies a type and format of the media object, and a hardware domain associated with the media object; and causing an execution domain to perform the media processing function on the media object. The instruction to perform the media processing function is expressed in a form that is independent of the hardware domain associated with the media object, and may also be independent of the type and format of the media object. The media object may be an image, and the media processing function may include an image processing function performed on a GPU. | 05-23-2013 |
20130047059 | TRANSCRIPT EDITOR - A transcript editor enables text-based editing of time-based media that includes spoken dialog. It involves an augmented transcript that includes timing metadata that associates words and phrases within the transcript with corresponding temporal locations within the time-based media where the text is spoken, and editing the augmented transcript without the need for playback of the time-based media. After editing, the augmented transcript is processed by a media editing system to automatically generate an edited version of the time-based media that only includes the segments of the time-based media that include the speech corresponding to the edited augmented transcript. | 02-21-2013 |
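The core of text-based editing as described above is mapping the edited transcript back onto the timing metadata to produce a cut list. The sketch below assumes a simplified case (deletions only, words matched in order) and hypothetical names:

```python
def cut_list_from_transcript(words, edited_text):
    """Derive media segments from an edited augmented transcript.

    words: [(word, start, end)] timing metadata from the transcript.
    edited_text: the transcript after text-based editing.
    Returns merged (start, end) spans covering only retained speech.
    """
    remaining = edited_text.split()
    spans = []
    i = 0
    for word, start, end in words:
        if i < len(remaining) and word == remaining[i]:
            i += 1
            if spans and abs(spans[-1][1] - start) < 1e-6:
                spans[-1] = (spans[-1][0], end)   # extend contiguous span
            else:
                spans.append((start, end))
    return spans

words = [("we", 0.0, 0.3), ("really", 0.3, 0.7), ("love", 0.7, 1.0),
         ("editing", 1.0, 1.6)]
spans = cut_list_from_transcript(words, "we love editing")
```

A media editing system would then render only these spans, yielding an edited version of the media without the editor ever scrubbing audio.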
20100033492 | PRODUCING WRINKLES AND OTHER EFFECTS FOR A COMPUTER-GENERATED CHARACTER BASED ON SURFACE STRESS - Wrinkles are produced by computing directional stress, whether compression or stretching, for each pixel within each face of the mesh representing the skin, and then perturbing a surface normal based on the computed stress at each pixel in that face of the mesh. Directional stress at a given frame in an animation is determined, in general, by comparing the current state of the mesh at that frame (called a “current pose”) to the original state of the mesh (called a “rest pose”). An artist specifies a wrinkle pattern by defining a texture that is mapped to the surface, using conventional techniques. A gradient texture is created from this wrinkle texture by computing the gradient at each pixel in the wrinkle texture. For each location in a face of the surface, the vector from the gradient texture is mapped to the corresponding face of the rest pose skin model and the current pose skin model, to produce two surface vectors. These two vectors are compared to provide an estimate of the surface stress at this location in the face. A wrinkle effect may be implemented using bump mapping, but the surface normal is perturbed differently for each location in the face of the mesh based on the skin stress estimated at that location. Other effects also may be created using the estimated stresses. | 02-11-2010 |
20100033483 | Exchanging Data Between Vertex Shaders And Fragment Shaders On A Graphics Processing Unit - It is desirable for a fragment shader to have access to non-interpolated values for each vertex of the primitive in which the fragment is located. For example, a fragment shader may use the distortion of the primitive with respect to an original state of the primitive as part of the function the fragment shader performs. Due to the specifications of vertex shaders and fragment shaders, fragment shaders receive only interpolated values, and thus cannot receive non-interpolated values. One solution to this problem would be to modify the processing engine for the shader language, and the shader specifications themselves, so that a fragment shader can receive non-interpolated values from the vertices of the primitive on which the fragment is located; desirable values to receive would be at least the vertex coordinates. Another solution is to specify and use varyings in a manner that passes data to a fragment shader from which the fragment shader can reconstruct the non-interpolated values. One way to achieve this is to (a) allocate varyings and assign them indices, (b) assign indices to the vertices, and (c) have the vertex shader contribute a value only to those varyings having the same index as the vertex being processed, and otherwise contribute a null value, such as 0, to the varyings with other indices. In this manner, when the interpolated value for an indexed varying is received by the fragment shader, the indexed varying contains the contribution of only one vertex, scaled by an interpolation parameter. Another indexed varying can be used to pass the interpolation parameter, allowing the original value for the vertex to be computed by the fragment shader. | 02-11-2010 |
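The indexed-varying trick can be demonstrated numerically outside a GPU. Below, interpolation is modeled as a barycentric weighted sum (as varying interpolation is across a triangle): vertex v writes its value into varying v and 1.0 into a companion varying 3+v, contributing 0 elsewhere. After interpolation, varying v holds value_v × weight_v and varying 3+v holds weight_v alone, so dividing recovers the original, non-interpolated vertex value. The function names and the 6-slot layout are illustrative:

```python
def interpolate(varyings_per_vertex, bary):
    """Standard varying interpolation: weighted sum over 3 vertices."""
    n = len(varyings_per_vertex[0])
    return [sum(bary[v] * varyings_per_vertex[v][i] for v in range(3))
            for i in range(n)]

def make_indexed_varyings(vertex_values):
    """Vertex v writes its value into varying v, 1.0 into varying 3+v,
    and 0 into every other slot, per the indexed-varying scheme."""
    out = []
    for v, value in enumerate(vertex_values):
        slots = [0.0] * 6
        slots[v] = value          # value channel for this vertex only
        slots[3 + v] = 1.0        # interpolation-parameter channel
        out.append(slots)
    return out

vertex_values = [10.0, 20.0, 30.0]
bary = (0.5, 0.3, 0.2)            # fragment's barycentric weights
interp = interpolate(make_indexed_varyings(vertex_values), bary)
# varying i now holds value_i * bary_i; varying 3+i holds bary_i,
# so the "fragment shader" can reconstruct each vertex's original value:
recovered = [interp[i] / interp[3 + i] for i in range(3)]
```

Modern GLSL makes this workaround unnecessary via the `flat` interpolation qualifier, but the scheme shows how per-vertex data can be smuggled through an interpolation-only pipeline.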