From: Guillaume Pellerin
Date: Mon, 17 Feb 2014 22:17:23 +0000 (+0100)
Subject: docfix
X-Git-Tag: 0.5.4~1
X-Git-Url: https://git.parisson.com/?a=commitdiff_plain;h=24592d615acee95482b34c9f52193fe4c37aea45;p=timeside.git

docfix
---

diff --git a/README.rst b/README.rst
index e4a075a..9878f27 100644
--- a/README.rst
+++ b/README.rst
@@ -24,10 +24,10 @@ Goals
 We just **need** a python library to:
 
 * **Do** asynchronous and fast audio processing with Python,
-* **Decode** audio frames from ANY format into numpy arrays,
+* **Decode** audio frames from **any** audio or video media format into numpy arrays,
 * **Analyze** audio content with some state-of-the-art audio feature extraction libraries,
 * **Organize**, serialize and save analysis metadata through various formats,
-* **Draw** various fancy waveforms, spectrograms and other cool graphers,
+* **Draw** various fancy waveforms, spectrograms and other cool visualizers,
 * **Transcode** audio data in various media formats and stream them through web apps,
 * **Playback** and **interact** **on demand** through a smart high-level HTML5 extensible player,
 * **Index**, **tag** and **organize semantic metadata** (see `Telemeta `_ which embed TimeSide).
@@ -105,10 +105,10 @@ News
 
 0.5.4
-
  * Bugfix realease
  * Encoder : transcoded streams where broken. Now fixed with some smart thread controls.
  * Analyzer : update VAMP plugin example in sandbox
- * Analyzer : NEW experimental plugin : Limsi Speech Activity Detection Systems (limsi_sad)
+ * Analyzer : new experimental plugin : Limsi Speech Activity Detection Systems (limsi_sad)
+ * Decoder : process any media in streaming mode given its URL
  * Install : fix some setup requirements
 
 0.5.3
@@ -266,6 +266,9 @@ API / Documentation
 Install
 =======
 
+The TimeSide engine is intended to work on all Unix / Linux platforms.
+MacOS X and Windows versions will soon be explored.
+
 TimeSide needs some other python modules to run. The following methods explain how to install all dependencies on various Linux based systems.
 
 On Debian, Ubuntu, etc:
@@ -287,7 +290,7 @@ On Fedora and Red-Hat:
 
     $ sudo pip install timeside
 
-Otherwise, you can also install all dependencies and then use pip::
+On other Linux platforms, you can also install all dependencies and then use pip::
 
     $ sudo pip install timeside
 
@@ -298,13 +301,6 @@ python (>=2.7), python-setuptools, python-gst0.10, gstreamer0.10-plugins-good, g
 gstreamer0.10-plugins-ugly, python-aubio, python-yaafe, python-simplejson, python-yaml,
 python-h5py, python-scipy, python-matplotlib, python-matplotlib
 
-Platforms
-==========
-
-The TimeSide engine is intended to work on all Unix / Linux platforms.
-MacOS X and Windows versions will soon be explorated.
-The player should work on any modern HTML5 enabled browser.
-Flash is needed for MP3 if the browser doesn't support it.
 
 Shell Interface
 ================
@@ -368,6 +364,9 @@ TODO list:
 * zoom
 * layers
 
+The player should work on any modern HTML5 enabled browser.
+Flash is needed for MP3 if the browser doesn't support it.
+
 Development
 ===========
 
diff --git a/timeside/analyzer/limsi_sad.py b/timeside/analyzer/limsi_sad.py
index b503317..8e53992 100644
--- a/timeside/analyzer/limsi_sad.py
+++ b/timeside/analyzer/limsi_sad.py
@@ -29,6 +29,7 @@ import numpy as N
 import pickle
 import os.path
 
+
 class GMM:
 
     def __init__(self, weights, means, vars):
@@ -52,7 +53,8 @@ class LimsiSad(Analyzer):
     """
     Limsi Speech Activity Detection Systems
     LimsiSad performs frame level speech activity detection based on GMM models
-    For each frame, it computes the log likelihood difference between a speech model and a non speech model. The highest is the estimate, the largest is the probability that the frame corresponds to speech.
+    For each frame, it computes the log likelihood difference between a speech model and a non speech model.
+    The higher the estimate, the larger the probability that the frame corresponds to speech.
     The initialization of the analyzer requires to chose a model between 'etape' and 'maya'
     'etape' models were obtained on data collected by LIMSI in the framework of ETAPE ANR project
     'maya' models were obtained on data collected by EREA – Centre Enseignement et Recherche en Ethnologie Amerindienne
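
The limsi_sad docstring above describes frame-level scoring as the log likelihood difference between a speech GMM and a non-speech GMM. The diff only shows the GMM constructor signature (weights, means, vars), so here is a minimal sketch of that scoring scheme in plain numpy, assuming diagonal-covariance components and using toy random data in place of the pickled 'etape' / 'maya' models; it is not the actual limsi_sad implementation::

    import numpy as np


    def gmm_loglikelihood(frames, weights, means, varis):
        """Per-frame log-likelihood of frames under a diagonal-covariance GMM."""
        n_frames, _ = frames.shape
        log_probs = np.empty((n_frames, len(weights)))
        for c in range(len(weights)):
            diff = frames - means[c]
            # log of weight_c * N(x; mean_c, diag(var_c)), dimensions summed out
            log_probs[:, c] = (np.log(weights[c])
                               - 0.5 * np.sum(np.log(2.0 * np.pi * varis[c]))
                               - 0.5 * np.sum(diff ** 2 / varis[c], axis=1))
        # log-sum-exp over mixture components, stabilised per frame
        m = log_probs.max(axis=1, keepdims=True)
        return m[:, 0] + np.log(np.exp(log_probs - m).sum(axis=1))


    # Toy stand-ins for the pickled speech / non-speech models and the
    # per-frame features (the real plugin loads trained GMMs from disk).
    rng = np.random.RandomState(0)
    features = rng.randn(100, 12)    # 100 frames, 12-dimensional features
    speech = dict(weights=np.array([0.6, 0.4]),
                  means=rng.randn(2, 12), varis=np.ones((2, 12)))
    nonspeech = dict(weights=np.array([0.5, 0.5]),
                     means=rng.randn(2, 12), varis=np.ones((2, 12)))

    # Frame-level score: log p(x | speech) - log p(x | non-speech);
    # the larger the score, the more likely the frame is speech.
    llr = gmm_loglikelihood(features, **speech) - gmm_loglikelihood(features, **nonspeech)
    is_speech = llr > 0.0

A frame would then be labelled as speech when its score is positive (or above a tuned threshold), consistent with the docstring: the higher the score, the more likely the frame is speech.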
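
The 0.5.4 changelog entry above also notes that the decoder can process media in streaming mode given its URL. As a rough illustration only, a pipeline might be assembled as below; the class names (timeside.decoder.FileDecoder, timeside.analyzer.Level), the pipe syntax and the placeholder URL follow the 0.5.x-era quick start and are assumptions, not taken from this diff::

    import timeside

    # Assumed 0.5.x-style pipe API; class names and the URL are placeholders
    # not confirmed by this commit.
    decoder = timeside.decoder.FileDecoder('http://example.com/media.mp4')
    analyzer = timeside.analyzer.Level()
    (decoder | analyzer).run()
    print(analyzer.results)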