Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning

Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, Cordelia Schmid. Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 10714-10726. IEEE, 2023. doi: 10.1109/CVPR52729.2023.01032

@inproceedings{YangNSMPLSS23,
  title = {Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning},
  author = {Antoine Yang and Arsha Nagrani and Paul Hongsuck Seo and Antoine Miech and Jordi Pont-Tuset and Ivan Laptev and Josef Sivic and Cordelia Schmid},
  year = {2023},
  doi = {10.1109/CVPR52729.2023.01032},
  url = {https://doi.org/10.1109/CVPR52729.2023.01032},
  researchr = {https://researchr.org/publication/YangNSMPLSS23},
  pages = {10714--10726},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023},
  publisher = {IEEE},
  isbn = {979-8-3503-0129-8},
}