From: Thomas Fillon
Date: Thu, 12 Dec 2013 14:42:10 +0000 (+0100)
Subject: Doc: Add a tutorial on second pass with ArrayDecoder + fix bug in Waveform analyzer
X-Git-Tag: 0.5.2~9
X-Git-Url: https://git.parisson.com/?a=commitdiff_plain;h=f9720bb1f81002c82517a6e939b2531a4e7920b8;p=timeside.git

Doc: Add a tutorial on second pass with ArrayDecoder + fix bug in Waveform analyzer
---

diff --git a/README.rst b/README.rst
index 8025a10..09c8950 100644
--- a/README.rst
+++ b/README.rst
@@ -22,7 +22,7 @@ We just **need** a python library to:
 * **Draw** various fancy waveforms, spectrograms and other cool graphers,
 * **Transcode** audio data in various media formats and stream them through web apps,
 * **Playback** and **interact** **on demand** through a smart high-level HTML5 extensible player,
-* **Index**, **tag** and **organize semantic metadata** (see `Telemeta `_ which embed TimeSide).
+* **Index**, **tag** and **organize semantic metadata** (see `Telemeta `_ which embeds TimeSide).
 
 Here is a schematic diagram of the TimeSide engine architecture:
 
diff --git a/doc/source/tutorial/ArrayDecoder.rst b/doc/source/tutorial/ArrayDecoder.rst
new file mode 100644
index 0000000..6a37158
--- /dev/null
+++ b/doc/source/tutorial/ArrayDecoder.rst
@@ -0,0 +1,44 @@
+.. This file is part of TimeSide
+   @author: Thomas Fillon
+
+===============================================
+ Running a pipe with previously decoded frames
+===============================================
+
+Example of using the :class:`ArrayDecoder ` and the :class:`Waveform analyzer ` to run a pipe on a second pass with previously decoded frames kept in memory.
+
+First, set up a :class:`FileDecoder ` on an audio file:
+
+>>> import timeside
+>>> import numpy as np
+>>>
+>>> audio_file = 'http://github.com/yomguy/timeside-samples/raw/master/samples/sweep.mp3'
+>>>
+>>> file_decoder = timeside.decoder.FileDecoder(audio_file)
+
+Then, set up an arbitrary analyzer to check that both decoding processes are equivalent, and a :class:`Waveform analyzer ` whose result will store the decoded frames:
+
+>>> pitch_on_file = timeside.analyzer.AubioPitch()
+>>> waveform = timeside.analyzer.Waveform()
+
+And run the pipe:
+
+>>> (file_decoder | pitch_on_file | waveform).run()
+
+To run the second pass, we need to get back the decoded samples and the original samplerate and pass them to an :class:`ArrayDecoder `:
+
+>>> samples = waveform.results['waveform_analyzer'].data
+>>> samplerate = waveform.results['waveform_analyzer'].frame_metadata.samplerate
+>>> array_decoder = timeside.decoder.ArrayDecoder(samples=samples, samplerate=samplerate)
+
+Then we can run a second pipe with the previously decoded frames and pass them to the same kind of analyzer:
+
+>>> pitch_on_array = timeside.analyzer.AubioPitch()
+>>> (array_decoder | pitch_on_array).run()
+
+To verify that the frames passed to the two analyzers are the same, we check that their results are equivalent:
+
+>>> np.allclose(pitch_on_file.results['aubio_pitch.pitch'].data,
+...             pitch_on_array.results['aubio_pitch.pitch'].data)
+True
+
diff --git a/doc/source/tutorial/index.rst b/doc/source/tutorial/index.rst
index 19142c4..333e314 100644
--- a/doc/source/tutorial/index.rst
+++ b/doc/source/tutorial/index.rst
@@ -13,5 +13,6 @@ Contents:
 
    Quick start
    Usage of AnalyzerResult
+   Running a pipe with previously decoded frames
 
 
diff --git a/timeside/analyzer/waveform.py b/timeside/analyzer/waveform.py
index 30fd4df..e1289f0 100644
--- a/timeside/analyzer/waveform.py
+++ b/timeside/analyzer/waveform.py
@@ -31,8 +31,8 @@ class Waveform(Analyzer):
 
     def __init__(self):
         super(Waveform, self).__init__()
-        self.input_blocksize = 2048
-        self.input_stepsize = self.input_blocksize / 2
+#        self.input_blocksize = 2048
+#        self.input_stepsize = self.input_blocksize / 2
 
     @interfacedoc
     def setup(self, channels=None, samplerate=None,
@@ -58,13 +58,13 @@ class Waveform(Analyzer):
     def unit():
         return ""
 
-    @downmix_to_mono
-    @frames_adapter
+#    @downmix_to_mono
+#    @frames_adapter
     def process(self, frames, eod=False):
         self.values.append(frames)
         return frames, eod
 
     def post_process(self):
         waveform = self.new_result(data_mode='value', time_mode='framewise')
-        waveform.data_object.value = np.asarray(self.values).flatten()
+        waveform.data_object.value = np.vstack(self.values)
         self.pipe.results.add(waveform)
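
A minimal NumPy sketch, independent of TimeSide, of the ``Waveform.post_process`` fix above: stacking the collected frame blocks with ``np.vstack`` keeps a (total_frames, channels) array that can be handed back to a decoder as ``samples``, whereas the previous ``np.asarray(...).flatten()`` collapsed everything to 1-D and lost the channel layout. The block shapes below are arbitrary illustrative values, not taken from the patch:

>>> import numpy as np
>>> blocks = [np.ones((4, 2)), 2 * np.ones((4, 2))]  # stand-ins for (blocksize, channels) frame blocks
>>> np.asarray(blocks).flatten().shape  # previous behaviour: 1-D, channel structure lost
(16,)
>>> np.vstack(blocks).shape  # fixed behaviour: (total_frames, channels) preserved
(8, 2)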