%pylab inline
import matplotlib.pylab as pylab
pylab.rcParams['figure.figsize'] = 16, 8  # set the default figure size for this interactive session
The TimeSide API is built around a common core processing unit called a processor:
import timeside.core
from timeside.core import list_processors
list_processors(timeside.core.api.IDecoder)
list_processors(timeside.core.api.IAnalyzer)
list_processors(timeside.core.api.IEncoder)
list_processors(timeside.core.api.IGrapher)
All these processors can be chained to form a processing pipeline.
Let's first define a decoder that reads and decodes audio from a file:
from timeside.core import get_processor
from timeside.core.tools.test_samples import samples
file_decoder = get_processor('file_decoder')(samples['C4_scale.wav'])
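Note that `get_processor` looks a processor class up by its string id and returns the class, which is then instantiated with its own arguments. A minimal sketch of that registry pattern in plain Python (not TimeSide's actual implementation; `REGISTRY`, `register`, and the toy `FileDecoder` are illustrative names):

```python
# Minimal processor-registry sketch: map string ids to classes,
# so callers can write get_processor('file_decoder')(path).
REGISTRY = {}

def register(proc_id):
    """Class decorator that records a processor class under its id."""
    def wrap(cls):
        cls.id = proc_id
        REGISTRY[proc_id] = cls
        return cls
    return wrap

def get_processor(proc_id):
    """Return the processor class registered under proc_id."""
    return REGISTRY[proc_id]

@register('file_decoder')
class FileDecoder:
    def __init__(self, uri):
        self.uri = uri

decoder = get_processor('file_decoder')('/tmp/C4_scale.wav')
print(decoder.id, decoder.uri)
```

This is why the call above reads `get_processor('file_decoder')(samples['C4_scale.wav'])`: the first call fetches the class, the second instantiates it.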
And then instantiate some other processors:
# analyzers
pitch = get_processor('aubio_pitch')()
level = get_processor('level')()
# Encoder
mp3 = get_processor('mp3_encoder')('/tmp/guitar.mp3', overwrite=True)
# Graphers
specgram = get_processor('spectrogram_lin')()
waveform = get_processor('waveform_simple')()
Let's now define a process pipeline with all these processors and run it:
pipe = (file_decoder | pitch | level | mp3 | specgram | waveform)
pipe.run()
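The `|` syntax works because processors overload Python's `|` operator to build a pipeline object. Here is a minimal sketch of that pattern, assuming nothing about TimeSide's internals (the `Processor` and `Pipeline` classes below are hypothetical; this simplified version passes each frame through every processor in turn, whereas the real pipe fans the decoder's output out to each attached processor):

```python
class Processor:
    """Toy processor: records the frames it sees and passes them on."""
    def __init__(self, name):
        self.name = name
        self.seen = []

    def process(self, frame):
        self.seen.append(frame)
        return frame  # pass audio through unchanged

    def __or__(self, other):
        # p1 | p2 builds a Pipeline
        return Pipeline([self, other])


class Pipeline:
    def __init__(self, procs):
        self.procs = list(procs)

    def __or__(self, other):
        # Pipeline | p3 extends the existing pipeline
        return Pipeline(self.procs + [other])

    def run(self, frames):
        for frame in frames:
            for proc in self.procs:
                frame = proc.process(frame)


pitch, level, mp3 = Processor('pitch'), Processor('level'), Processor('mp3')
pipe = pitch | level | mp3
pipe.run([0.1, 0.2, 0.3])
print([p.name for p in pipe.procs])
```

Because `__or__` is left-associative, `a | b | c` evaluates as `(a | b) | c`, so each additional `|` simply appends one more processor to the pipeline.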
Analyzer results are available through the pipe:
pipe.results.keys()
or from the analyzer:
pitch.results.keys()
pitch.results['aubio_pitch.pitch'].keys()
pitch.results['aubio_pitch.pitch']
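The result keys above follow an `'<analyzer_id>.<result_name>'` naming convention. A minimal sketch of a flat, dict-backed container using that convention (the `ResultsContainer` class and its methods are hypothetical, not TimeSide's actual classes):

```python
class ResultsContainer(dict):
    """Flat mapping from 'analyzer_id.result_name' to result data."""

    def add(self, analyzer_id, result_name, data):
        self['%s.%s' % (analyzer_id, result_name)] = data

    def for_analyzer(self, analyzer_id):
        """Return all results produced by one analyzer."""
        prefix = analyzer_id + '.'
        return {k: v for k, v in self.items() if k.startswith(prefix)}

results = ResultsContainer()
results.add('aubio_pitch', 'pitch', [440.0, 442.1])
results.add('level', 'rms', -12.5)
print(sorted(results.keys()))
print(results.for_analyzer('aubio_pitch'))
```

A flat dotted-key scheme like this lets a single pipe-wide mapping coexist with per-analyzer views, which is consistent with `pipe.results.keys()` and `pitch.results.keys()` both working above.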
Grapher results can also be displayed or saved to a file:
imshow(specgram.render(), origin='lower')
imshow(waveform.render(), origin='lower')
waveform.render('/tmp/waveform.png')
from IPython.display import HTML
HTML('<iframe width=1300 height=260 frameborder=0 scrolling=no marginheight=0 marginwidth=0 src=http://demo.telemeta.org/archives/items/6/player/1200x170></iframe>')