Patent application number | Description | Published |
20080201138 | Headset for Separation of Speech Signals in a Noisy Environment - A headset is constructed to generate an acoustically distinct speech signal in a noisy acoustic environment. The headset positions a pair of spaced-apart microphones near a user's mouth. The microphones each receive the user's speech, and also receive acoustic environmental noise. The microphone signals, which have both a noise and an information component, are received into a separation process. The separation process generates a speech signal that has a substantially reduced noise component. The speech signal is then processed for transmission. In one example, the transmission process includes sending the speech signal to a local control module using a Bluetooth radio. | 08-21-2008 |
20080208538 | SYSTEMS, METHODS, AND APPARATUS FOR SIGNAL SEPARATION - Methods, apparatus, and systems for source separation include a converged plurality of coefficient values that is based on each of a plurality of M-channel signals. Each of the plurality of M-channel signals is based on signals produced by M transducers in response to at least one information source and at least one interference source. In some examples, the converged plurality of coefficient values is used to filter an M-channel signal to produce an information output signal and an interference output signal. | 08-28-2008 |
20090001262 | System and Method for Spectral Analysis - The system and method for spectral analysis uses a set of spectral data. The spectral data is arranged according to a second dimension, such as time, temperature, position, or other condition. The arranged spectral data is used in a signal separation process, such as an independent component analysis (ICA), which generates independent signals. The independent signals are then used for identifying or quantifying a target component. | 01-01-2009 |
20090022336 | SYSTEMS, METHODS, AND APPARATUS FOR SIGNAL SEPARATION - Methods, apparatus, and systems for source separation include a converged plurality of coefficient values that is based on each of a plurality of M-channel signals. Each of the plurality of M-channel signals is based on signals produced by M transducers in response to at least one information source and at least one interference source. In some examples, the converged plurality of coefficient values is used to filter an M-channel signal to produce an information output signal and an interference output signal. | 01-22-2009 |
20090164212 | SYSTEMS, METHODS, AND APPARATUS FOR MULTI-MICROPHONE BASED SPEECH ENHANCEMENT - Systems, methods, and apparatus for processing an M-channel input signal are described that include outputting a signal produced by a selected one among a plurality of spatial separation filters. Applications to separating an acoustic signal from a noisy environment are described, and configurations that may be implemented on a multi-microphone handheld device are also described. | 06-25-2009 |
20090254338 | SYSTEM AND METHOD FOR GENERATING A SEPARATED SIGNAL - The present invention relates to blind source separation. More specifically, it relates to blind source separation using frequency-domain processes. | 10-08-2009 |
20090299742 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER PROGRAM PRODUCTS FOR SPECTRAL CONTRAST ENHANCEMENT - Systems, methods, and apparatus for spectral contrast enhancement of speech signals, based on information from a noise reference that is derived by a spatially selective processing filter from a multichannel sensed audio signal, are disclosed. | 12-03-2009 |
20100017205 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER PROGRAM PRODUCTS FOR ENHANCED INTELLIGIBILITY - Techniques described herein include the use of equalization to improve intelligibility of a reproduced audio signal (e.g., a far-end speech signal). | 01-21-2010 |
20100323652 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR PHASE-BASED PROCESSING OF MULTICHANNEL SIGNAL - Phase-based processing of a multichannel signal, and applications including proximity detection, are disclosed. | 12-23-2010 |
20110038489 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR COHERENCE DETECTION - Based on phase differences between corresponding frequency components of different channels of a multichannel signal, a measure of directional coherency is calculated. Applications of such a measure to voice activity detection and noise reduction are also disclosed. | 02-17-2011 |
20110058676 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR DEREVERBERATION OF MULTICHANNEL SIGNAL - Systems, methods, apparatus, and computer-readable media for dereverberation of a multimicrophone signal combine use of a directionally selective processing operation (e.g., beamforming) with an inverse filter trained on a separated reverberation estimate that is obtained using a decorrelation operation (e.g., a blind source separation operation). | 03-10-2011 |
20110264447 | SYSTEMS, METHODS, AND APPARATUS FOR SPEECH FEATURE DETECTION - Implementations and applications are disclosed for detection of a transition in a voice activity state of an audio signal, based on a change in energy that is consistent in time across a range of frequencies of the signal. | 10-27-2011 |
20110288860 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR PROCESSING OF SPEECH SIGNALS USING HEAD-MOUNTED MICROPHONE PAIR - A noise cancelling headset for voice communications contains a microphone at each of the user's ears and a voice microphone. The headset shares the use of the ear microphones for improving signal-to-noise ratio on both the transmit path and the receive path. | 11-24-2011 |
20110293103 | SYSTEMS, METHODS, DEVICES, APPARATUS, AND COMPUTER PROGRAM PRODUCTS FOR AUDIO EQUALIZATION - Methods and apparatus for generating an anti-noise signal and equalizing a reproduced audio signal (e.g., a far-end telephone signal) are described, wherein the generating and the equalizing are both based on information from an acoustic error signal. | 12-01-2011 |
20120020480 | SYSTEMS, METHODS, AND APPARATUS FOR ENHANCED ACOUSTIC IMAGING - Methods, systems, and apparatus for using a psychoacoustic-bass-enhanced signal to drive an array of loudspeakers are disclosed. | 01-26-2012 |
20120020485 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR MULTI-MICROPHONE LOCATION-SELECTIVE PROCESSING - A multi-microphone system performs location-selective processing of an acoustic signal, wherein source location is indicated by directions of arrival relative to microphone pairs at opposite sides of a midsagittal plane of a user's head. | 01-26-2012 |
20120099732 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR FAR-FIELD MULTI-SOURCE TRACKING AND SEPARATION - An apparatus for multichannel signal processing separates signal components from different acoustic sources by initializing a separation filter bank with beams in the estimated source directions, adapting the separation filter bank under specified constraints, and normalizing an adapted solution based on a maximum response with respect to direction. Such an apparatus may be used to separate signal components from sources that are close to one another in the far field of the microphone array. | 04-26-2012 |
20120101826 | DECOMPOSITION OF MUSIC SIGNALS USING BASIS FUNCTIONS WITH TIME-EVOLUTION INFORMATION - Decomposition of a multi-source signal using a basis function inventory and a sparse recovery technique is disclosed. | 04-26-2012 |
20120128160 | THREE-DIMENSIONAL SOUND CAPTURING AND REPRODUCING WITH MULTI-MICROPHONES - Systems, methods, apparatus, and machine-readable media for three-dimensional sound recording and reproduction using a multi-microphone setup are described. | 05-24-2012 |
20120128165 | SYSTEMS, METHOD, APPARATUS, AND COMPUTER-READABLE MEDIA FOR DECOMPOSITION OF A MULTICHANNEL MUSIC SIGNAL - Decomposition of a multichannel signal using direction-of-arrival estimation, a basis function inventory, and a sparse recovery technique is disclosed. | 05-24-2012 |
20120128166 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR HEAD TRACKING BASED ON RECORDED SOUND SIGNALS - Systems, methods, apparatus, and machine-readable media for detecting head movement based on recorded sound signals are described. | 05-24-2012 |
20120128175 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR ORIENTATION-SENSITIVE RECORDING CONTROL - Systems, methods, apparatus, and machine-readable media for orientation-sensitive selection and/or preservation of a recording direction using a multi-microphone setup are described. | 05-24-2012 |
20120130713 | SYSTEMS, METHODS, AND APPARATUS FOR VOICE ACTIVITY DETECTION - Systems, methods, apparatus, and machine-readable media for voice activity detection in a single-channel or multichannel audio signal are disclosed. | 05-24-2012 |
20120182429 | VARIABLE BEAMFORMING WITH A MOBILE PLATFORM - A mobile platform includes a microphone array and is capable of implementing beamforming to amplify or suppress audio information from a sound source. The sound source is indicated through a user input, such as pointing the mobile platform in the direction of the sound source or through a touch screen display interface. The mobile platform further includes orientation sensors capable of detecting movement of the mobile platform. When the mobile platform moves with respect to the sound source, the beamforming is adjusted based on the data from the orientation sensors so that beamforming is continuously implemented in the direction of the sound source. The audio information from the sound source may be included or suppressed from a telephone or video-telephony conversation. Images or video from a camera may be likewise controlled based on the data from the orientation sensors. | 07-19-2012 |
20120224456 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR SOURCE LOCALIZATION USING AUDIBLE SOUND AND ULTRASOUND - A method of signal processing includes calculating a range based on information from a reflected ultrasonic signal. Based on the calculated range, one among a plurality of direction-of-arrival (DOA) estimation operations is selected. The method also includes performing the selected operation to calculate an estimated direction of arrival (DOA) of an audio-frequency component of a multichannel signal. Examples of DOA estimation operations include operations based on phase differences between channels of the multichannel signal and operations based on a difference in gain between signals that are based on channels of the multichannel signal. | 09-06-2012 |
20120250882 | INTEGRATED ECHO CANCELLATION AND NOISE SUPPRESSION - A method for echo cancellation and noise suppression is disclosed. Linear echo cancellation (LEC) is performed for a primary microphone channel on an entire frequency band or in a range of frequencies where echo is audible. LEC is performed on one or more secondary microphone channels only on a lower frequency range over which spatial processing is effective. The microphone channels are spatially processed over the lower frequency range after LEC. Non-linear noise suppression post-processing is performed on the entire frequency band. Echo post-processing is performed on the entire frequency band. | 10-04-2012 |
20120263317 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER READABLE MEDIA FOR EQUALIZATION - Enhancement of audio quality (e.g., speech intelligibility) in a noisy environment, based on subband gain control using information from a noise reference, is described. | 10-18-2012 |
20120294446 | BLIND SOURCE SEPARATION BASED SPATIAL FILTERING - A method for blind source separation based spatial filtering on an electronic device includes obtaining a first source audio signal and a second source audio signal. The method also includes applying a blind source separation filter set to the first source audio signal and to the second source audio signal to produce a spatially filtered first audio signal and a spatially filtered second audio signal. The method further includes playing the spatially filtered first audio signal over a first speaker to produce an acoustic spatially filtered first audio signal and playing the spatially filtered second audio signal over a second speaker to produce an acoustic spatially filtered second audio signal. The acoustic spatially filtered first audio signal and the acoustic spatially filtered second audio signal produce an isolated acoustic first source audio signal at a first position and an isolated acoustic second source audio signal at a second position. | 11-22-2012 |
20120316869 | GENERATING A MASKING SIGNAL ON AN ELECTRONIC DEVICE - An electronic device for generating a masking signal is described. The electronic device includes a plurality of microphones and a speaker. The electronic device also includes a processor and executable instructions stored in memory that is in electronic communication with the processor. The electronic device obtains a plurality of audio signals from the plurality of microphones. The electronic device also obtains an ambience signal based on the plurality of audio signals. The electronic device further determines an ambience feature based on the ambience signal. Additionally, the electronic device obtains a voice signal based on the plurality of audio signals. The electronic device also determines a voice feature based on the voice signal. The electronic device additionally generates a masking signal based on the voice feature and the ambience feature. The electronic device further outputs the masking signal using the speaker. | 12-13-2012 |
20130040694 | REMOVAL OF USER IDENTIFIED NOISE - Methods, systems and devices enabling a party to a telephone conversation to identify sounds for active filtering, so that the identified sound can be actively filtered and/or amplified. Cell phones are provided with a button that allows users to identify sounds for filtering by pressing the button or virtual key when the sound is heard. Sounds recorded in response to such user inputs are processed to identify filtering criteria, such as frequencies and amplitude. The identified filtering criteria are then used to actively filter or enhance sounds. The methods and systems enable users to identify specific sounds for filtering so that only those sounds deemed annoying are suppressed while permitting other sounds (e.g., voice) to be heard. | 02-14-2013 |
20130156198 | AUTOMATED USER/SENSOR LOCATION RECOGNITION TO CUSTOMIZE AUDIO PERFORMANCE IN A DISTRIBUTED MULTI-SENSOR ENVIRONMENT - A wireless device is provided that makes use of other nearby audio transducer devices to generate a surround sound effect for a targeted user. To do this, the wireless device first ascertains whether there are any nearby external microphones and/or loudspeaker devices. An internal microphone for the wireless device and any other nearby external microphones may be used to ascertain a location of the desired/targeted user as well as the nearby loudspeaker devices. This information is then used to generate a surround sound effect for the desired/targeted user by having the wireless device steer audio signals to its internal loudspeakers and/or the nearby external loudspeaker devices. | 06-20-2013 |
20130156207 | OPTIMIZING AUDIO PROCESSING FUNCTIONS BY DYNAMICALLY COMPENSATING FOR VARIABLE DISTANCES BETWEEN SPEAKER(S) AND MICROPHONE(S) IN AN ACCESSORY DEVICE - An accessory device having multiple speakers and/or microphones to perform a number of audio functions, for use with mobile devices, is provided. The audio transducers (e.g., microphones and/or speakers) may be housed in one or more extendable and/or rotationally adjustable arms. To compensate for the unwanted signal feedback between the speakers and microphones, acoustic echo cancellation may be implemented to determine the proper distance and relative location between the speakers and microphones. Acoustic echo cancellation removes the echo from voice communications to improve the quality of the sound. The removal of the unwanted signals captured by the microphones may be accomplished by characterizing the audio signal paths from the speakers to the microphones (speaker-to-microphone path distance profile), including the distance and relative location between the speakers and microphones. The optimal distance and relative location between the speakers and microphones is provided to the user to optimize performance. | 06-20-2013 |
20130259238 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR GESTURAL MANIPULATION OF A SOUND FIELD - Gesture-responsive modification of a generated sound field is described. | 10-03-2013 |
20130259254 | SYSTEMS, METHODS, AND APPARATUS FOR PRODUCING A DIRECTIONAL SOUND FIELD - A system may be used to drive an array of loudspeakers to produce a sound field that includes a source component, whose energy is concentrated along a first direction relative to the array, and a masking component that is based on an estimated intensity of the source component in a second direction that is different from the first direction. | 10-03-2013 |
20130272097 | SYSTEMS, METHODS, AND APPARATUS FOR ESTIMATING DIRECTION OF ARRIVAL - Systems, methods, and apparatus for matching pair-wise differences (e.g., phase delay measurements) to an inventory of source direction candidates, and application of pair-wise source direction estimates, are described. | 10-17-2013 |
20130272538 | SYSTEMS, METHODS, AND APPARATUS FOR INDICATING DIRECTION OF ARRIVAL - Systems, methods, and apparatus for projecting an estimated direction of arrival of sound onto a plane that does not include the estimated direction are described. | 10-17-2013 |
20130272539 | SYSTEMS, METHODS, AND APPARATUS FOR SPATIALLY DIRECTIVE FILTERING - Systems, methods, and apparatus are described for applying, based on angles of arrival of source components relative to the axes of different microphone pairs, a spatially directive filter to a multichannel audio signal to produce an output signal. | 10-17-2013 |
20130272548 | OBJECT RECOGNITION USING MULTI-MODAL MATCHING SCHEME - Methods, systems and articles of manufacture for recognizing and locating one or more objects in a scene are disclosed. An image and/or video of the scene are captured. Using audio recorded at the scene, an object search of the captured scene is narrowed down. For example, the direction of arrival (DOA) of a sound can be determined and used to limit the search area in a captured image/video. In another example, keypoint signatures may be selected based on types of sounds identified in the recorded audio. A keypoint signature corresponds to a particular object that the system is configured to recognize. Objects in the scene may then be recognized using a shift invariant feature transform (SIFT) analysis comparing keypoints identified in the captured scene to the selected keypoint signatures. | 10-17-2013 |
20130275077 | SYSTEMS AND METHODS FOR MAPPING A SOURCE LOCATION - A method for mapping a source location by an electronic device is described. The method includes obtaining sensor data. The method also includes mapping a source location to electronic device coordinates based on the sensor data. The method further includes mapping the source location from electronic device coordinates to physical coordinates. The method additionally includes performing an operation based on a mapping. | 10-17-2013 |
20130275872 | SYSTEMS AND METHODS FOR DISPLAYING A USER INTERFACE - A method for displaying a user interface on an electronic device is described. The method includes presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The method also includes providing a sector selection feature that allows selection of at least one sector of the coordinate system. The method further includes providing a sector editing feature that allows editing the at least one sector. | 10-17-2013 |
20130275873 | SYSTEMS AND METHODS FOR DISPLAYING A USER INTERFACE - A method for displaying a user interface on an electronic device is described. The method includes presenting a user interface. The user interface includes a coordinate system. The coordinate system corresponds to physical coordinates based on sensor data. The method also includes displaying at least a target audio signal and an interfering audio signal on the user interface. | 10-17-2013 |
20130282369 | SYSTEMS AND METHODS FOR AUDIO SIGNAL PROCESSING - A method for signal level matching by an electronic device is described. The method includes capturing a plurality of audio signals from a plurality of microphones. The method also includes determining a difference signal based on an inter-microphone subtraction. The difference signal includes multiple harmonics. The method also includes determining whether a harmonicity of the difference signal exceeds a harmonicity threshold. The method also includes preserving the harmonics to determine an envelope. The method further includes applying the envelope to a noise-suppressed signal. | 10-24-2013 |
20130282372 | SYSTEMS AND METHODS FOR AUDIO SIGNAL PROCESSING - A method for detecting voice activity by an electronic device is described. The method includes detecting near end speech based on a near end voiced speech detector and at least one single channel voice activity detector. The near end voiced speech detector is associated with a harmonic statistic based on a speech pitch histogram. | 10-24-2013 |
20130282373 | SYSTEMS AND METHODS FOR AUDIO SIGNAL PROCESSING - A method for restoring a processed speech signal by an electronic device is described. The method includes obtaining at least one audio signal. The method also includes performing bin-wise voice activity detection based on the at least one audio signal. The method further includes restoring the processed speech signal based on the bin-wise voice activity detection. | 10-24-2013 |
20130300648 | AUDIO USER INTERACTION RECOGNITION AND APPLICATION INTERFACE - Disclosed is an application interface that takes into account the user's gaze direction relative to who is speaking in an interactive multi-participant environment where audio-based contextual information and/or visual-based semantic information is being presented. Among these various implementations, two different types of microphone array devices (MADs) may be used. The first type of MAD is a steerable microphone array (a.k.a. a steerable array) which is worn by a user in a known orientation with regard to the user's eyes, and wherein multiple users may each wear a steerable array. The second type of MAD is a fixed-location microphone array (a.k.a. a fixed array) which is placed in the same acoustic space as the users (one or more of which are using steerable arrays). | 11-14-2013 |
20130301837 | Audio User Interaction Recognition and Context Refinement - A system which tracks a social interaction between a plurality of participants, includes a fixed beamformer that is adapted to output a first spatially filtered output and configured to receive a plurality of second spatially filtered outputs from a plurality of steerable beamformers. Each steerable beamformer outputs a respective one of the second spatially filtered outputs associated with a different one of the participants. The system also includes a processor capable of determining a similarity between the first spatially filtered output and each of the second spatially filtered outputs. The processor determines the social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs. | 11-14-2013 |
20130304476 | Audio User Interaction Recognition and Context Refinement - A system which performs social interaction analysis for a plurality of participants includes a processor. The processor is configured to determine a similarity between a first spatially filtered output and each of a plurality of second spatially filtered outputs. The processor is configured to determine the social interaction between the participants based on the similarities between the first spatially filtered output and each of the second spatially filtered outputs and display an output that is representative of the social interaction between the participants. The first spatially filtered output is received from a fixed microphone array, and the second spatially filtered outputs are received from a plurality of steerable microphone arrays each corresponding to a different participant. | 11-14-2013 |
20130315402 | THREE-DIMENSIONAL SOUND COMPRESSION AND OVER-THE-AIR TRANSMISSION DURING A CALL - A method for encoding multiple directional audio signals using an integrated codec by a wireless communication device is disclosed. The wireless communication device records a plurality of directional audio signals. The wireless communication device also generates a plurality of audio signal packets based on the plurality of directional audio signals. At least one of the audio signal packets includes an averaged signal. The wireless communication device further transmits the plurality of audio signal packets. | 11-28-2013 |
20130316691 | VARIABLE BEAMFORMING WITH A MOBILE PLATFORM - A mobile platform includes a microphone array and is capable of implementing beamforming to amplify or suppress audio information from a sound source. The sound source is indicated through a user input, such as pointing the mobile platform in the direction of the sound source or through a touch screen display interface. The mobile platform further includes orientation sensors capable of detecting movement of the mobile platform. When the mobile platform moves with respect to the sound source, the beamforming is adjusted based on the data from the orientation sensors so that beamforming is continuously implemented in the direction of the sound source. The audio information from the sound source may be included or suppressed from a telephone or video-telephony conversation. Images or video from a camera may be likewise controlled based on the data from the orientation sensors. | 11-28-2013 |
20130317830 | THREE-DIMENSIONAL SOUND COMPRESSION AND OVER-THE-AIR TRANSMISSION DURING A CALL - A method for encoding three dimensional audio by a wireless communication device is disclosed. The wireless communication device detects an indication of a plurality of localizable audio sources. The wireless communication device also records a plurality of audio signals associated with the plurality of localizable audio sources. The wireless communication device also encodes the plurality of audio signals. | 11-28-2013 |
20130339011 | SYSTEMS, METHODS, APPARATUS, AND COMPUTER-READABLE MEDIA FOR PITCH TRAJECTORY ANALYSIS - Systems, methods, and apparatus for pitch trajectory analysis are described. Such techniques may be used to remove vocals and/or vibrato from an audio mixture signal. For example, such a technique may be used to pre-process the signal before an operation to decompose the mixture signal into individual instrument components. | 12-19-2013 |
20140003611 | SYSTEMS AND METHODS FOR SURROUND SOUND ECHO REDUCTION | 01-02-2014 |
20140003635 | AUDIO SIGNAL PROCESSING DEVICE CALIBRATION | 01-02-2014 |
20140254816 | CONTENT BASED NOISE SUPPRESSION - Apparatus and methods for audio noise attenuation are disclosed. An audio signal analyzer can determine whether an input audio signal received from a microphone device includes a noise signal having identifiable content. If there is a noise signal having identifiable content, a content source is accessed to obtain a copy of the noise signal. An audio canceller can generate a processed audio signal, having an attenuated noise signal, based on comparing the copy of the noise signal to the input audio signal. Additionally or alternatively, data may be communicated on a communication channel to a separate media device to receive at least a portion of the copy of the noise signal from the separate media device, or to receive content-identification data corresponding to the content source. | 09-11-2014 |
20140328490 | MULTI-CHANNEL ECHO CANCELLATION AND NOISE SUPPRESSION - A method for multi-channel echo cancellation and noise suppression is described. One of multiple echo estimates is selected for non-linear echo cancellation. Echo notch masking is performed on a noise-suppressed signal based on an echo direction of arrival (DOA) to produce an echo-suppressed signal. Non-linear echo cancellation is performed on the echo-suppressed signal based, at least in part, on the selected echo estimate. | 11-06-2014 |
20140337021 | SYSTEMS AND METHODS FOR NOISE CHARACTERISTIC DEPENDENT SPEECH ENHANCEMENT - A method for noise characteristic dependent speech enhancement by an electronic device is described. The method includes determining a noise characteristic of input audio. Determining a noise characteristic of input audio includes determining whether noise is stationary noise and determining whether the noise is music noise. The method also includes determining a noise reference based on the noise characteristic. Determining the noise reference includes excluding a spatial noise reference from the noise reference when the noise is stationary noise and including the spatial noise reference in the noise reference when the noise is not music noise and is not stationary noise. The method further includes performing noise suppression based on the noise characteristic. | 11-13-2014 |
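Many of the entries above (e.g., the variable-beamforming applications 20120182429 and 20130316691, and the multi-microphone enhancement entries) build on steering a microphone array toward a sound source. As a rough illustration of the underlying idea only, and not the method claimed in any of these applications, the following Python/NumPy sketch shows a minimal two-microphone delay-and-sum beamformer: aligning the channels with the correct steering delay sums the source coherently, while a wrong delay partially cancels it. The signal parameters (440 Hz tone, 8 kHz sampling, 3-sample inter-microphone delay) are arbitrary assumptions chosen for the demonstration.

```python
import numpy as np

def delay_and_sum(mic_signals, delays):
    """Align each channel by its integer-sample steering delay, then average.

    A channel recorded with a propagation delay of d samples is advanced by
    d samples (circularly, via np.roll) so that all channels line up before
    they are summed.
    """
    out = np.zeros_like(mic_signals[0])
    for sig, d in zip(mic_signals, delays):
        out = out + np.roll(sig, -d)
    return out / len(mic_signals)

# Simulate a source whose wavefront reaches mic 1 three samples after mic 0.
fs = 8000
t = np.arange(1024) / fs
source = np.sin(2 * np.pi * 440 * t)
mic0 = source
mic1 = np.roll(source, 3)

steered = delay_and_sum([mic0, mic1], [0, 3])      # delays match the source
missteered = delay_and_sum([mic0, mic1], [0, -3])  # array "looks" elsewhere

# Coherent summation preserves the source power; missteering reduces it.
print(np.mean(steered ** 2) > np.mean(missteered ** 2))
```

In a real device the steering delays would be fractional and derived from an estimated direction of arrival (as in entries 20130272097 and 20120224456), typically applied as phase shifts per frequency bin rather than integer sample shifts.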