API / Documentation
====================
-http://files.parisson.com/timeside/doc/
+* General : http://files.parisson.com/timeside/doc/
+* Tutorial : http://files.parisson.com/timeside/doc/examples/index.html
+* API : http://files.parisson.com/timeside/doc/api/index.html
+++ /dev/null
-.. This file is part of TimeSide
- @author: Thomas Fillon
-
-=============================
- New analyzer Result example
-=============================
-
-Example of use of the new analyzerResult structure
-
-Usage : AnalyzerResult(data_mode=None, time_mode=None)
-
-See : :class:`timeside.analyzer.core.AnalyzerResult`
-
-Default
-=======
-
-Create a new analyzer result without arguments
-
- >>> from timeside.analyzer.core import AnalyzerResult
- >>> res = AnalyzerResult()
-
-This default result has all the metadata and dataObject attribute
-
- >>> res.keys()
- ['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'label_metadata', 'parameters']
-
- >>> for key,value in res.items():
- ... print '%s : %s' % (key, value)
- ...
- data_mode : None
- time_mode : None
- id_metadata : {'description': '', 'author': '', 'version': '', 'date': '', 'id': '', 'unit': '', 'name': ''}
- dataObject : {'duration': array([], dtype=float64), 'time': array([], dtype=float64), 'value': None, 'label': array([], dtype=int64)}
- audio_metadata : {'duration': None, 'start': 0, 'channelsManagement': '', 'uri': '', 'channels': None}
- frame_metadata : {'blocksize': None, 'samplerate': None, 'stepsize': None}
- label_metadata : {'label_type': 'mono', 'description': None, 'label': None}
- parameters : {}
-
-
-Specification of time_mode
-=========================
-Four different time_mode can be specified :
-
-- 'framewise' : Data are returned on a frame basis (i.e. with specified blocksize, stepsize and framerate)
-- 'global' : A global data value is return for the entire audio item
-- 'segment' : Data are returned on a segmnet basis (i.e. with specified start time and duration)
-- 'event' : Data are returned on a segment basis (i.e. with specified start time)
-
-
-Framewise
----------
-
->>> res = AnalyzerResult(time_mode='framewise')
->>> res.keys()
-['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'label_metadata', 'parameters']
-
-Global
-------
-
-No frame metadata information is needed for these modes.
-The 'frame_metadata' key/attribute is deleted.
-
->>> res = AnalyzerResult(time_mode='global')
->>> res.keys()
-['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'label_metadata', 'parameters']
->>> res.data
-DataObject(value=None, label=array([], dtype=int64))
-
-Segment
--------
-
->>> res = AnalyzerResult(time_mode='segment')
->>> res.keys()
-['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'label_metadata', 'parameters']
->>> res.data
-DataObject(value=None, label=array([], dtype=int64), time=array([], dtype=float64), duration=array([], dtype=float64))
-
-Event
------
-
->>> res = AnalyzerResult(time_mode='event')
->>> res.keys()
-['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'label_metadata', 'parameters']
->>> res.data
-DataObject(value=None, label=array([], dtype=int64), time=array([], dtype=float64))
-
-Specification of data_mode
-=========================
-Two different data_mode can be specified :
-
-- 'value' : Data are returned as numpy Array of arbitrary type
-- 'label' : Data are returned as label indexes (specified by the label_metadata key)
-
-Value
------
-The label_metadata key is deleted.
-
->>> res = AnalyzerResult(data_mode='value')
->>> res.keys()
-['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'parameters']
-
-In the dataObject key, the 'value' key is kept and the 'label' key is deleted.
-
->>> res.data
-DataObject(value=None, time=array([], dtype=float64), duration=array([], dtype=float64))
-
-Label
------
->>> res = AnalyzerResult(data_mode='label')
->>> res.keys()
-['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'label_metadata', 'parameters']
-
-In the dataObject key, the 'label' key is kept and the 'value' key is deleted.
-
-
->>> res.data
-DataObject(label=array([], dtype=int64), time=array([], dtype=float64), duration=array([], dtype=float64))
+++ /dev/null
-.. TimeSide documentation master file, created by
- sphinx-quickstart on Thu Aug 22 10:49:09 2013.
- You can adapt this file completely to your liking, but it should at least
- contain the root `toctree` directive.
-
-==============================================
-TimeSide : Examples
-==============================================
-Contents:
-
-.. toctree::
- :maxdepth: 2
-
- Tutorial <tutorial>
- Usage of AnalyzerResult <AnalyzerResult>
-
-
+++ /dev/null
-==========
- Tutorial
-==========
-
-== Quick Start ==
-
-A most basic operation, transcoding, is easily performed with two processors:
-
- >>> import timeside
- >>> decoder = timeside.decoder.FileDecoder('myfile.wav')
- >>> encoder = timeside.encoder.VorbisEncoder("myfile.ogg")
- >>> pipe = decoder | encoder
- >>> pipe.run()
-
-As one can see in the above example, creating a processing pipe is performed with
-the binary OR operator.
-
-Audio data visualisation can be performed using graphers, such as Waveform and
-Spectrogram. All graphers return a [http://www.pythonware.com/library/pil/handbook/image.htm PIL image]:
-
- >>> import timeside
- >>> decoder = timeside.decoder.FileDecoder('myfile.wav')
- >>> spectrogram = timeside.grapher.Spectrogram(width=400, height=150)
- >>> (decoder | spectrogram).run()
- >>> spectrogram.render().save('graph.png')
-
-It is possible to create longer pipes, as well as subpipes, here for both
-analysis and encoding:
-
- >>> import timeside
- >>> decoder = timeside.decoder.FileDecoder('myfile.wav')
- >>> levels = timeside.analyzer.Level()
- >>> encoders = timeside.encoder.Mp3Encoder('myfile.mp3') | timeside.encoder.FlacEncoder('myfile.flac')
- >>> (decoder | levels | encoders).run()
- >>> print levels.results
intro
news
Installation <install>
- Examples <examples/index>
- API <api/index>
+ Tutorial <examples/index>
ui
+ API <api/index>
Development <dev>
related
Copyright <copyright>
* Deep refactoring of the analyzer API to handle various new usecases, specifically audio feature extraction
* Add serializable global result container (NEW dependency to h5py, json, yaml)
- * Add new audio feature extraction analyzers thanks to the Aubio library providing beat & BPM detection, pitch dectection and other cool stuff (NEW dependency)
- * Add new audio feature extraction analyzers thanks to the Yaafe library (NEW dependency)
- * EXPERIMENTAL : add new audio feature extraction thanks to the VAMP plugin library (NEW dependency)
+ * Add new audio feature extraction analyzers thanks to the Aubio library providing beat & BPM detection, pitch detection and other cool stuff (NEW dependency on aubio)
+ * Add new audio feature extraction analyzers thanks to the Yaafe library (NEW dependency on yaafe)
+ * Add new IRIT speech detection analyzers (NEW dependency on scipy)
+ * EXPERIMENTAL : add new audio feature extraction thanks to the VAMP plugin library (NEW dependency on some VAMP tools)
* Add new documentation : http://files.parisson.com/timeside/doc/
* New Debian repository for instant install
* Various bugfixes
+Sponsors and Partners
+====================
+
+ * CNRS (French National Center for Scientific Research)
+ * TGE Adonis (big data equipment for human sciences)
+ * CREM (French Center for Research in Ethnomusicology, France)
+ * Université Pierre et Marie Curie (UPMC Paris, France)
+ * ANR (CONTINT 2012 project : DIADEMS)
+ * MNHN :
+
+
Related projects
=================
-TimeSide has emerged in 2010 from the `Telemeta project <http://telemeta.org>`_ which develops a free and open source web audio CMS. Find a direct example of application here : http://archives.crem-cnrs.fr/
-
-This project has been sponsored by:
+ * `Telemeta <http://telemeta.org>`_ : open source web audio CMS
+ * `Sound archives <http://archives.crem-cnrs.fr/>`_ of the CNRS, CREM and the "Musée de l'Homme" in Paris.
+ * DIADEMS :
- * CNRS (french center of national research)
- * TGE Adonis
- * CREM (Nanterre, UPMC (Paris),
--- /dev/null
+.. This file is part of TimeSide
+ @author: Thomas Fillon
+
+Analyzer Result example
+=============================
+
+Example of use of the new AnalyzerResult structure
+
+Usage : AnalyzerResult(data_mode=None, time_mode=None)
+
+See : :class:`timeside.analyzer.core.AnalyzerResult`
+
+Default
+=======
+
+Create a new analyzer result without arguments
+
+ >>> from timeside.analyzer.core import AnalyzerResult
+ >>> res = AnalyzerResult()
+
+This default result has all the metadata and dataObject attributes
+
+ >>> res.keys()
+ ['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'label_metadata', 'parameters']
+
+ >>> for key,value in res.items():
+ ... print '%s : %s' % (key, value)
+ ...
+ data_mode : None
+ time_mode : None
+ id_metadata : {'description': '', 'author': '', 'version': '', 'date': '', 'id': '', 'unit': '', 'name': ''}
+ dataObject : {'duration': array([], dtype=float64), 'time': array([], dtype=float64), 'value': None, 'label': array([], dtype=int64)}
+ audio_metadata : {'duration': None, 'start': 0, 'channelsManagement': '', 'uri': '', 'channels': None}
+ frame_metadata : {'blocksize': None, 'samplerate': None, 'stepsize': None}
+ label_metadata : {'label_type': 'mono', 'description': None, 'label': None}
+ parameters : {}
+
+
+Specification of time_mode
+==========================
+Four different time_mode values can be specified :
+
+- 'framewise' : Data are returned on a frame basis (i.e. with specified blocksize, stepsize and samplerate)
+- 'global' : A single global value is returned for the entire audio item
+- 'segment' : Data are returned on a segment basis (i.e. with specified start time and duration)
+- 'event' : Data are returned on an event basis (i.e. with specified start time only)
+
+
+Framewise
+---------
+
+>>> res = AnalyzerResult(time_mode='framewise')
+>>> res.keys()
+['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'label_metadata', 'parameters']
+
+Global
+------
+
+No frame metadata information is needed for these modes.
+The 'frame_metadata' key/attribute is deleted.
+
+>>> res = AnalyzerResult(time_mode='global')
+>>> res.keys()
+['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'label_metadata', 'parameters']
+>>> res.data_object
+DataObject(value=None, label=array([], dtype=int64))
+
+Segment
+-------
+
+>>> res = AnalyzerResult(time_mode='segment')
+>>> res.keys()
+['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'label_metadata', 'parameters']
+>>> res.data_object
+DataObject(value=None, label=array([], dtype=int64), time=array([], dtype=float64), duration=array([], dtype=float64))
+
+Event
+-----
+
+>>> res = AnalyzerResult(time_mode='event')
+>>> res.keys()
+['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'label_metadata', 'parameters']
+>>> res.data_object
+DataObject(value=None, label=array([], dtype=int64), time=array([], dtype=float64))
+
+Specification of data_mode
+==========================
+Two different data_mode values can be specified :
+
+- 'value' : Data are returned as a numpy array of arbitrary type
+- 'label' : Data are returned as label indexes (as specified by the label_metadata key)
+
+Value
+-----
+The label_metadata key is deleted.
+
+>>> res = AnalyzerResult(data_mode='value')
+>>> res.keys()
+['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'parameters']
+
+In the data_object attribute, the 'value' key is kept and the 'label' key is deleted.
+
+>>> res.data_object
+DataObject(value=None, time=array([], dtype=float64), duration=array([], dtype=float64))
+
+Label
+-----
+>>> res = AnalyzerResult(data_mode='label')
+>>> res.keys()
+['data_mode', 'time_mode', 'id_metadata', 'data', 'audio_metadata', 'frame_metadata', 'label_metadata', 'parameters']
+
+In the data_object attribute, the 'label' key is kept and the 'value' key is deleted.
+
+
+>>> res.data_object
+DataObject(label=array([], dtype=int64), time=array([], dtype=float64), duration=array([], dtype=float64))
--- /dev/null
+.. TimeSide documentation master file, created by
+ sphinx-quickstart on Thu Aug 22 10:49:09 2013.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+====================
+TimeSide : Tutorial
+====================
+Contents:
+
+.. toctree::
+ :maxdepth: 2
+
+ Quick start <quick_start>
+ Usage of AnalyzerResult <AnalyzerResult>
+
+
--- /dev/null
+Quick start
+===========
+
+A most basic operation, transcoding, is easily performed with two processors:
+
+ >>> import timeside
+ >>> decoder = timeside.decoder.FileDecoder('sweep.wav')
+ >>> encoder = timeside.encoder.VorbisEncoder("sweep.ogg")
+ >>> pipe = decoder | encoder
+ >>> pipe.run()
+
+As one can see in the above example, creating a processing pipe is performed with
+the binary OR operator.
+
+Audio data visualisation can be performed using graphers, such as Waveform and
+Spectrogram. All graphers return an image:
+
+ >>> import timeside
+ >>> decoder = timeside.decoder.FileDecoder('sweep.wav')
+ >>> spectrogram = timeside.grapher.Spectrogram(width=400, height=150)
+ >>> (decoder | spectrogram).run()
+ >>> spectrogram.render().save('graph.png')
+
+It is possible to create longer pipes, as well as subpipes, here for both
+analysis and encoding:
+
+ >>> import timeside
+ >>> decoder = timeside.decoder.FileDecoder('sweep.wav')
+ >>> levels = timeside.analyzer.Level()
+ >>> encoders = timeside.encoder.Mp3Encoder('sweep.mp3') | timeside.encoder.FlacEncoder('sweep.flac')
+ >>> (decoder | levels | encoders).run()
+ >>> print levels.results
+
-HTML5 User Interface
-====================
+User Interface
+===============
-TimeSide comes with a smart and pure HTML5 audio player.
+TimeSide comes with a smart and pure **HTML5** audio player.
Features:
* embed it in any audio web application
@property
def data(self):
if self.data_mode is None:
- return {key: self.data_object[key] for key in ['value', 'label'] if key in self.data_object.keys()}
-
+ return (
+ {key: self.data_object[key]
+ for key in ['value', 'label'] if key in self.data_object.keys()}
+ )
elif self.data_mode == 'value':
return self.data_object.value
elif self.data_mode == 'label':
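For reference, the dispatch performed by this property can be sketched in isolation. `SketchResult` and its dict-backed `data_object` below are hypothetical stand-ins, not the real TimeSide classes; the sketch only illustrates the three return shapes (dict, value, label) and uses `==` rather than `is` for string comparison:

```python
class SketchResult:
    """Hypothetical stand-in mimicking the data_mode dispatch of AnalyzerResult."""

    def __init__(self, data_mode=None):
        self.data_mode = data_mode
        # Plain dict stand-in for the real DataObject
        self.data_object = {'value': [0.1, 0.2], 'label': [1, 0]}

    @property
    def data(self):
        if self.data_mode is None:
            # No mode set: return whichever of 'value'/'label' exist, as a dict
            return {key: self.data_object[key]
                    for key in ('value', 'label') if key in self.data_object}
        elif self.data_mode == 'value':
            return self.data_object['value']
        elif self.data_mode == 'label':
            return self.data_object['label']

print(SketchResult().data)         # both keys, as a dict
print(SketchResult('value').data)  # [0.1, 0.2]
print(SketchResult('label').data)  # [1, 0]
```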
class AnalyzerResultContainer(dict):
'''
- >>> from timeside.decoder import FileDecoder
- >>> import timeside.analyzer.core as coreA
+ >>> import timeside
>>> import os
    >>> ModulePath = os.path.dirname(os.path.realpath(timeside.analyzer.core.__file__))
>>> wavFile = os.path.join(ModulePath , '../../tests/samples/sweep.wav')
- >>> d = FileDecoder(wavFile, start=1)
+ >>> d = timeside.decoder.FileDecoder(wavFile, start=1)
- >>> a = coreA.Analyzer()
+ >>> a = timeside.analyzer.Analyzer()
>>> (d|a).run() #doctest: +ELLIPSIS
<timeside.core.ProcessPipe object at 0x...>
>>> a.new_result() #doctest: +ELLIPSIS
AnalyzerResult(data_mode=None, time_mode=None, id_metadata=id_metadata(id='', name='', unit='', description='', date='...', version='...', author='TimeSide'), data=DataObject(value=None, label=array([], dtype=int64), time=array([], dtype=float64), duration=array([], dtype=float64)), audio_metadata=audio_metadata(uri='file:///.../tests/samples/sweep.wav', start=1.0, duration=7.0, channels=None, channelsManagement=''), frame_metadata=FrameMetadata(samplerate=None, blocksize=None, stepsize=None), label_metadata=LabelMetadata(label=None, description=None, label_type='mono'), parameters={})
- >>> resContainer = coreA.AnalyzerResultContainer()
+ >>> resContainer = timeside.analyzer.AnalyzerResultContainer()
'''
Attributes
----------
- data : MetadataObject
+ data_object : MetadataObject
id_metadata : MetadataObject
audio_metadata : MetadataObject
frame_metadata : MetadataObject