The following publications are possibly variants of this publication:
- Data-Driven Synthesis of Spatially Inflected Verbs for American Sign Language Animation. Pengfei Lu, Matt Huenerfauth. taccess, 4(1):4, 2011. [doi]
- Data-driven Synthesis of Animations of Spatially Inflected American Sign Language Verbs Using Human Data. Pengfei Lu. PhD thesis, City University of New York, USA, 2014. [doi]
- Evaluation of a psycholinguistically motivated timing model for animations of American Sign Language. Matt Huenerfauth. assets 2008: 129-136 [doi]
- A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language. Matt Huenerfauth. taccess, 2(2), 2009. [doi]
- Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data. Matt Huenerfauth, Pengfei Lu, Hernisa Kacorri. slpat 2015: 22-28 [doi]
- Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data. Pengfei Lu, Matt Huenerfauth. slpat 2012: 66-74 [doi]
- Spatial, Temporal, and Semantic Models for American Sign Language Generation: Implications for Gesture Generation. Matt Huenerfauth. ijsc, 2(1):21-45, 2008. [doi]
- Modeling animations of American Sign Language verbs through motion-capture of native ASL signers. Pengfei Lu. sigaccess, 96:41-45, 2010. [doi]
- Improving Spatial Reference in American Sign Language Animation through Data Collection from Native ASL Signers. Matt Huenerfauth. hci 2009: 530-539 [doi]
- Effect of spatial reference and verb inflection on the usability of sign language animations. Matt Huenerfauth, Pengfei Lu. uais, 11(2):169-184, 2012. [doi]