The MMASCS multi-modal annotated synchronous corpus of audio, video, facial motion and tongue motion data of normal, fast and slow speech

Dietmar Schabus, Michael Pucher, Phil Hoole. The MMASCS multi-modal annotated synchronous corpus of audio, video, facial motion and tongue motion data of normal, fast and slow speech. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asunción Moreno, Jan Odijk, Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), Reykjavik, Iceland, May 26-31, 2014. pages 3411-3416, European Language Resources Association (ELRA), 2014.

@inproceedings{SchabusPH14,
  title = {The MMASCS multi-modal annotated synchronous corpus of audio, video, facial motion and tongue motion data of normal, fast and slow speech},
  author = {Dietmar Schabus and Michael Pucher and Phil Hoole},
  year = {2014},
  url = {http://www.lrec-conf.org/proceedings/lrec2014/summaries/192.html},
  researchr = {https://researchr.org/publication/SchabusPH14},
  pages = {3411-3416},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), Reykjavik, Iceland, May 26-31, 2014},
  editor = {Nicoletta Calzolari and Khalid Choukri and Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani and Asunción Moreno and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
}