Vid2Seq: a pretrained visual language model for describing multi-event videos

Google AI Research · March 17, 2023
Posted by Antoine Yang, Student Researcher, and Arsha Nagrani, Research Scientist, Google Research, Perception team

Videos have become an increasingly important part of our daily lives, spanning fields such as entertainment, education, and communication. Understanding the content of videos, however, is a challenging task, as videos often contain multiple events occurring at different time scales. For example, a video of a musher hitching up dogs to a dog sled before they all race away involves a...