AudioData and getpieces by Robert Ochshorn on 2008-06-06. Some refactoring and everything else by Joshua Lifton, 2008-09-07. Refactoring by Ben Lacker, 2009-02-11. Other contributions by Adam Lindsay.
AudioAnalysis
This class uses (but does not wrap) pyechonest.track to allow
transparent caching of the audio analysis of an audio file.
AudioRenderable
An object that returns an AudioData in response to a call to its render() method.
Intended as an abstract class that helps enforce the AudioRenderable protocol; it also provides a couple of convenience methods common to many descendants.
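The protocol described above can be sketched as follows. This is a minimal illustration, not the library's actual implementation; the `Silence` class and `get_duration` method are assumptions chosen for the example.

```python
# A minimal sketch of the AudioRenderable protocol (names here are
# illustrative assumptions, not the library's exact implementation).

class AudioRenderable:
    """Abstract base: subclasses must produce audio via render()."""

    def render(self):
        raise NotImplementedError("subclasses must return audio data")

    # A convenience method many descendants might share: length in seconds.
    def get_duration(self):
        raise NotImplementedError


class Silence(AudioRenderable):
    """Toy renderable: a stretch of silence of a given duration."""

    def __init__(self, duration, sample_rate=44100):
        self.duration = duration
        self.sample_rate = sample_rate

    def render(self):
        # Return raw zero samples in place of a real AudioData object.
        return [0.0] * int(self.duration * self.sample_rate)

    def get_duration(self):
        return self.duration
```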
AudioData
Handles audio data transparently: a smart audio container with convenient accessors.
AudioData32
A 32-bit variant of AudioData, intended for data collection on
audio rendering with headroom.
LocalAudioFile
The basic do-everything class for remixing. Acts as an AudioData object, but with an added analysis selector, which is an AudioAnalysis object. It conditionally uploads the file it was initialized with: if the file is already known to the Analyze API, it does not bother uploading the file.
LocalAnalysis
Like LocalAudioFile, it conditionally uploads the file with which it was initialized. Unlike LocalAudioFile, it is not a subclass of AudioData, and so contains no sample data.
AudioQuantum
A unit of musical time, identified at minimum by a start time and a duration, both in seconds. It most often corresponds to a section, bar, beat, tatum, or (by inheritance) segment obtained from an Analyze API call.
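The idea can be illustrated with a small sketch; the attribute names `start`, `duration`, and the `end` convenience property are assumptions for this example, not the class's exact interface.

```python
# A minimal sketch of the AudioQuantum idea (assumed attribute names):
# a span of musical time defined by a start and a duration, in seconds.
from dataclasses import dataclass

@dataclass
class AudioQuantum:
    start: float     # offset from the beginning of the track, in seconds
    duration: float  # length of the quantum, in seconds

    @property
    def end(self):
        # Convenience: the time at which this quantum finishes.
        return self.start + self.duration

# A beat starting 1.5 seconds in and lasting half a second.
beat = AudioQuantum(start=1.5, duration=0.5)
```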
AudioSegment
Subclass of AudioQuantum for the data-rich segments returned by
the Analyze API.
ModifiedRenderable
A class that contains any AudioRenderable, but overrides the render() method with nested effects, called sequentially on the result of the preceding effect.
AudioQuantumList
A container that enables content-based selection and filtering.
A List that contains AudioQuantum objects, with additional methods
for manipulating them.
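Content-based selection can be sketched as below. This is a simplified stand-in: the `that` method name follows the library's filtering style, but the tuple representation of quanta is an assumption for the example.

```python
# A sketch of content-based filtering over a list of quanta
# (a simplified stand-in for AudioQuantumList; details are assumptions).

class AudioQuantumList(list):
    """A list of quanta with a content-based filtering helper."""

    def that(self, predicate):
        # Return a new list containing only quanta matching the predicate.
        return AudioQuantumList(q for q in self if predicate(q))

# Example: keep only quanta longer than 0.4 seconds, where each quantum
# is represented here as a (start, duration) tuple.
quanta = AudioQuantumList([(0.0, 0.5), (0.5, 0.3), (0.8, 0.6)])
long_ones = quanta.that(lambda q: q[1] > 0.4)
```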
AudioEffect
LevelDB
AmplitudeFactor
TimeTruncateFactor
TimeTruncateLength
Simultaneous
Stacks all contained AudioQuanta atop one another, adding their respective
samples. The rhythmic length of the segment is the duration of the first
AudioQuantum, but there can be significant overlap caused by the longest
segment.
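The stacking operation described above can be sketched as a sample-wise sum with zero padding; the `stack` helper below is an illustrative assumption, not the class's actual code.

```python
# A sketch of sample-wise stacking, as in Simultaneous (assumed behavior):
# sum several sample buffers element-wise, padding shorter ones with zeros
# so the result is as long as the longest buffer.
from itertools import zip_longest

def stack(buffers):
    # zip_longest pads exhausted buffers with 0.0 so every sample lines up.
    return [sum(samples) for samples in zip_longest(*buffers, fillvalue=0.0)]

# The second buffer is shorter, so the tail of the first carries through.
mixed = stack([[0.1, 0.2, 0.3], [0.4, 0.1]])
```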
FileTypeError
EchoNestRemixError
Error raised by the Remix API.
get_os()
Returns is_linux, is_mac, is_windows.
getpieces(audioData, segs)
Collects audio samples for output. Returns a new AudioData whose sample data is assembled from the input audioData according to the time offsets in each element of the input segs (commonly an AudioQuantumList).
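The assembly step can be sketched as below, under the simplifying assumption that audio is a flat list of samples rather than an AudioData object.

```python
# A sketch of what getpieces does (simplified assumption: audio is a flat
# list of samples rather than an AudioData object).

def getpieces(samples, segs, sample_rate=44100):
    """Assemble a new buffer from (start, duration) pieces of `samples`."""
    out = []
    for start, duration in segs:
        begin = int(start * sample_rate)
        end = begin + int(duration * sample_rate)
        out.extend(samples[begin:end])  # copy this piece into the output
    return out

# Swap the two halves of a four-sample "track" at a toy 2 Hz sample rate.
track = [10, 20, 30, 40]
pieces = getpieces(track, [(1.0, 1.0), (0.0, 1.0)], sample_rate=2)
```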
assemble(audioDataList, numChannels=1, sampleRate=44100, verbose=True)
Collects audio samples for output. Returns a new AudioData object assembled by concatenating all the elements of audioDataList.
mix(dataA, dataB, mix=0.5)
Mixes two AudioData objects. Assumes they have the same sample rate and number of channels.
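A weighted sample-wise sum is the natural reading of this operation; the sketch below assumes `mix` is the proportion taken from dataA, with the remainder from dataB, which may differ from the library's exact weighting.

```python
# A sketch of the mixing operation (assumed weighting: `mix` is the
# proportion of data_a, the remainder data_b; the real function may differ).

def mix(data_a, data_b, mix=0.5):
    # Weighted sample-wise sum over the overlapping length.
    n = min(len(data_a), len(data_b))
    return [data_a[i] * mix + data_b[i] * (1.0 - mix) for i in range(n)]

# An even blend of two short buffers.
blend = mix([1.0, 1.0], [0.0, 2.0], mix=0.5)
```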
megamix(dataList)
Mixes together any number of AudioData objects, keeping the shape of the first one in the list. Assumes they all have the same sample rate and number of channels.
ffmpeg(infile, outfile=None, overwrite=True, bitRate=None, numChannels=None, sampleRate=None, verbose=True)
Executes ffmpeg through the shell to convert or read media files.
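A command line for such a conversion might be built as below. The `-y`, `-i`, `-ab`, `-ac`, and `-ar` flags are real ffmpeg options; the exact command the library assembles is an assumption.

```python
# A sketch of building an ffmpeg command line from these parameters
# (the exact command the library builds is an assumption).

def build_ffmpeg_command(infile, outfile, overwrite=True,
                         bit_rate=None, num_channels=None, sample_rate=None):
    cmd = ["ffmpeg"]
    if overwrite:
        cmd.append("-y")                   # overwrite output without asking
    cmd += ["-i", infile]                  # input file
    if bit_rate is not None:
        cmd += ["-ab", f"{bit_rate}k"]     # audio bitrate
    if num_channels is not None:
        cmd += ["-ac", str(num_channels)]  # audio channel count
    if sample_rate is not None:
        cmd += ["-ar", str(sample_rate)]   # audio sample rate
    cmd.append(outfile)
    return cmd  # e.g. pass to subprocess.run(cmd)
```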
settings_from_ffmpeg(parsestring)
Parses the output of ffmpeg to determine the sample rate and number of channels of an audio file.
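Such parsing can be sketched with a regular expression over ffmpeg's stream description line. The "44100 Hz, stereo" form is what ffmpeg actually prints; the parsing approach here is a simplified assumption about the library's implementation.

```python
# A sketch of parsing ffmpeg's output for audio settings (the parsing
# here is a simplified assumption about the library's approach).
import re

def settings_from_ffmpeg(parsestring):
    """Return (sample_rate, num_channels) found in ffmpeg output, if any."""
    match = re.search(r"(\d+) Hz, (mono|stereo)", parsestring)
    if match is None:
        return None
    sample_rate = int(match.group(1))
    num_channels = 2 if match.group(2) == "stereo" else 1
    return sample_rate, num_channels

line = "Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 128 kb/s"
```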
ffmpeg_error_check(parsestring)
Looks for known errors in the ffmpeg output.