\documentclass{paper}\r
%\hyphenation{Post-Script}\r
\usepackage[authoryear]{natbib}\r
\usepackage{fma2014}\r
\r
\usepackage{graphicx}\r
\usepackage{url}\r
\usepackage[utf8]{inputenc}\r
\usepackage[T1]{fontenc}\r
%\usepackage{enumitem}\r
%\setlist{nosep}\r
\r
In the context of ethnomusicological research, the Research Center on Ethnomusicology (CREM) and Parisson, a company specializing in the management of audio databases, have been developing an innovative, collaborative and interdisciplinary open-source web-based multimedia platform since 2007.
This platform, \emph{Telemeta}, is designed to fit the professional requirements of both sound archivists and researchers in ethnomusicology. The first prototype of this platform has been online\footnote{Archives sonores du CNRS, Musée de l'Homme, \url{http://archives.crem-cnrs.fr}} since 2008 and is now fully operational and used on a daily basis for ethnomusicological studies.
\r
The positive impact of this platform on ethnomusicological research has been described in several publications \citep{Simmonot_IASA_2011, Julien_IASA_2011, Simonnot_ICTM_2014}.
\r
Recently, an open-source audio analysis framework, \emph{TimeSide}, has been developed to bring automatic music analysis capabilities to the web platform, which has turned Telemeta into a unique resource for \emph{Computational Ethnomusicology} \citep{Tzanetakis_2007_JIMS, Gomez_JNMR_2013}.
\r
\section{The Telemeta platform}\label{sec:Telemeta}\r
\subsection{Web audio content management features and architecture}\r
\r
The time-based nature of such audio-visual materials and of associated metadata such as annotations raises issues of access and visualization. Easy access to these data while listening to the recording represents a significant improvement.
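The time-indexed access described above can be illustrated with a minimal, hypothetical data structure (a sketch for illustration, not Telemeta's actual data model): annotations carry start and end times, and the player queries those overlapping the current playback position.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A time-anchored note on a recording (times in seconds)."""
    start: float
    end: float
    text: str

def annotations_at(annotations, position):
    """Return the annotations overlapping the given playback position."""
    return [a for a in annotations if a.start <= position < a.end]

# Invented example content for a single recording.
track = [
    Annotation(0.0, 12.5, "spoken introduction"),
    Annotation(12.5, 80.0, "solo flute melody"),
    Annotation(40.0, 80.0, "drum accompaniment enters"),
]

# While listening at t = 45 s, both the flute and drum notes apply.
print([a.text for a in annotations_at(track, 45.0)])
```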
\r
An overview of the Telemeta web interface is illustrated in Figure~\ref{fig:Telemeta} and the Telemeta architecture is represented in Figure~\ref{fig:TM_arch}.
\begin{figure}\r
\centering\r
  \fbox{\includegraphics[width=0.97\linewidth]{img/telemeta_screenshot_en.png}}
  \caption{Screenshot excerpt of the \emph{Telemeta} web interface}
\label{fig:Telemeta}\r
\end{figure}\r
\r
\begin{figure*}[htbp]
  \centering
  \includegraphics[width=0.5\linewidth]{img/TM_arch.pdf}
  \caption{Telemeta architecture}\label{fig:TM_arch}
\end{figure*}

Telemeta is ideal for professionals who want to easily organize, back up, archive and publish documented sound collections of audio files, CDs, digitized vinyl records and magnetic tapes over a robust database, in accordance with open web standards.
The \emph{Telemeta} architecture is flexible and can easily be adapted to the particular database organization of a given sound archive.
\r
\begin{itemize}
\item Model-View-Controller (MVC) architecture
\end{itemize}
Besides database management, the audio support is mainly provided through an external component, TimeSide, which is described in Section~\ref{sec:Timeside}.
\subsection{Metadata}\label{sec:metadata}\r
In addition to the audio data, efficient and dynamic management of the associated metadata is also required. Consulting metadata provides exhaustive access both to valuable information about the source of the data and to the related work of peer researchers.
Dynamically handling metadata in a collaborative manner optimises the continuous process of knowledge gathering and enrichment of the materials in the database.
One of the major challenges is thus the standardization of audio and metadata formats with the aim of long-term preservation and usage of the different materials.
The compatibility with other systems is facilitated by the integration of the metadata standard protocols \emph{Dublin Core}\footnote{Dublin Core Metadata Initiative, \url{http://dublincore.org/}} and \emph{OAI-PMH} (Open Archives Initiative Protocol for Metadata Harvesting)\footnote{\url{http://www.openarchives.org/pmh/}}.
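Concretely, an OAI-PMH response wraps each item's descriptive metadata in Dublin Core elements. The following sketch parses such a record with the Python standard library; the record itself is invented for illustration, only the namespaces are the standard ones.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

# A minimal Dublin Core record as it could appear inside an OAI-PMH
# GetRecord response (the item content is invented for illustration).
record = """
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Chant de mariage</dc:title>
  <dc:creator>Unknown performer</dc:creator>
  <dc:date>1932</dc:date>
  <dc:format>audio/x-wav</dc:format>
</oai_dc:dc>
"""

def dc_fields(xml_text):
    """Collect the Dublin Core elements of a record into a dict of lists
    (Dublin Core elements are repeatable, hence lists)."""
    root = ET.fromstring(xml_text)
    fields = {}
    for child in root:
        if child.tag.startswith("{" + DC_NS + "}"):
            name = child.tag.split("}", 1)[1]
            fields.setdefault(name, []).append(child.text)
    return fields

print(dc_fields(record)["title"])  # ['Chant de mariage']
```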
\r
Metadata provide two different kinds of information about the audio item: contextual information and annotations.\r
\r
One specificity of the Telemeta architecture is to rely on an external component, \emph{TimeSide}\footnote{\url{https://github.com/yomguy/TimeSide}}, that offers audio player web integration together with audio signal processing analysis capabilities. \r
\r
\emph{TimeSide} is an audio analysis and visualization framework based on both the Python and JavaScript languages, providing state-of-the-art signal processing and machine learning algorithms together with web audio capabilities for display and streaming.
Figure~\ref{fig:TimeSide_Archi} illustrates the overall architecture of \emph{TimeSide}.
\r
\begin{figure*}[htbp]
  \centering
  \includegraphics[width=0.7\linewidth]{img/timeside_schema_v3.pdf}
  \caption{TimeSide architecture}\label{fig:TimeSide_Archi}
\end{figure*}
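The chaining of processors (decoder, analyzers, encoders) underlying such an architecture can be sketched in plain Python. This is a simplified illustration of the pipeline pattern, not TimeSide's actual API; all class names below are invented.

```python
class Processor:
    """Base class: each processor transforms frames and passes them on."""
    def __init__(self):
        self.next = None

    def __or__(self, other):
        # The | operator appends a processor to the end of the chain.
        tail = self
        while tail.next is not None:
            tail = tail.next
        tail.next = other
        return self

    def process(self, frame):
        return frame  # pass-through by default

    def run(self, frames):
        """Push each frame through the whole chain."""
        for frame in frames:
            node = self
            while node is not None:
                frame = node.process(frame)
                node = node.next

class Gain(Processor):
    """Scale every sample by a constant factor."""
    def __init__(self, factor):
        super().__init__()
        self.factor = factor

    def process(self, frame):
        return [s * self.factor for s in frame]

class PeakAnalyzer(Processor):
    """Accumulate the absolute peak level seen so far."""
    def __init__(self):
        super().__init__()
        self.peak = 0.0

    def process(self, frame):
        self.peak = max(self.peak, max(abs(s) for s in frame))
        return frame

# Chain and run: decoded frames -> gain -> analyzer.
peak = PeakAnalyzer()
pipe = Gain(0.5) | peak
pipe.run([[0.2, -0.8], [0.4, 0.1]])
print(peak.peak)  # 0.4
```

The `|` chaining mirrors the shell-pipe style that several Python audio frameworks use for composing processors.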
\r
\r
\subsection{Audio management}\r
\r
\subsection{Audio features extraction}\r
In order to provide Music Information Retrieval analysis methods that can be implemented over a large corpus for ethnomusicological studies, TimeSide incorporates some state-of-the-art audio feature extraction libraries such as Aubio\footnote{\url{http://aubio.org/}} \citep{brossierPhD}, Yaafe\footnote{\url{https://github.com/Yaafe/Yaafe}} \citep{yaafe_ISMIR2010} and Vamp plugins\footnote{\url{http://www.vamp-plugins.org}}.
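Framewise feature extraction of the kind these libraries perform can be illustrated with a small pure-Python sketch: slice the signal into overlapping frames and compute one descriptor per frame (here a hand-rolled zero-crossing rate, not any of the libraries' own APIs).

```python
import math

def frames(signal, size, hop):
    """Slice a signal into overlapping analysis frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ --
    a crude correlate of noisiness and pitch height."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

# One second of a 440 Hz sine at 8 kHz: the mean ZCR should approximate
# 2 * f0 / sample_rate ~= 0.11.
sr, f0 = 8000, 440.0
signal = [math.sin(2 * math.pi * f0 * n / sr) for n in range(sr)]
rates = [zero_crossing_rate(f) for f in frames(signal, 1024, 512)]
print(round(sum(rates) / len(rates), 3))
```

Real extractors follow the same frame/hop scheme but compute richer descriptors (spectral features, pitch, onsets) with optimized native code.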
\r
As an open-source framework, and given its architecture and the flexibility provided by Python, TimeSide allows the implementation of virtually any audio or music analysis algorithm, which makes it a very convenient experimental framework.
\r
The project has been partially funded by the French National Centre for Scientific Research (CNRS), the French Ministry of Culture and Communication, the TGE Adonis Consortium, and the Centre of Research in Ethnomusicology (CREM).
\r
\r
\bibliographystyle{plainnat}
\bibliography{fma2014_Telemeta}\r
\r
\r