The following publications are possibly variants of this publication:
- Modeling and synthesizing spatially inflected verbs for American Sign Language animations. Matt Huenerfauth, Pengfei Lu. assets 2010: 99-106 [doi]
- Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data. Pengfei Lu, Matt Huenerfauth. slpat 2012: 66-74 [doi]
- Evaluation of a psycholinguistically motivated timing model for animations of American Sign Language. Matt Huenerfauth. assets 2008: 129-136 [doi]
- Modeling animations of American Sign Language verbs through motion-capture of native ASL signers. Pengfei Lu. sigaccess, 96:41-45, 2010. [doi]
- Improving Spatial Reference in American Sign Language Animation through Data Collection from Native ASL Signers. Matt Huenerfauth. hci 2009: 530-539 [doi]
- Data-Driven Synthesis of Spatially Inflected Verbs for American Sign Language Animation. Pengfei Lu, Matt Huenerfauth. taccess, 4(1):4, 2011. [doi]
- A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language. Matt Huenerfauth. taccess, 2(2), 2009. [doi]
- Data-driven Synthesis of Animations of Spatially Inflected American Sign Language Verbs Using Human Data. Pengfei Lu. PhD thesis, City University of New York, USA, 2014. [doi]
- Accessible motion-capture glove calibration protocol for recording sign language data from deaf subjects. Pengfei Lu, Matt Huenerfauth. assets 2009: 83-90 [doi]