The code and models are available on GitHub; running the full automated pipeline requires high-end hardware, such as an NVIDIA A6000 GPU.
This research, published in 2025, focuses on automatically generating academic presentation videos from scientific papers using a multi-agent framework. The project includes a benchmark dataset of 101 papers paired with author-created videos and slides.

Key Aspects of the Paper2Video Project:
A new benchmark called Paper2Video that includes metadata and metrics (such as "PresentArena" and "IP Memory") to evaluate how effectively a video conveys a paper's information.
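As a rough illustration of how a pairwise-comparison metric like "PresentArena" might be aggregated, the sketch below computes the fraction of head-to-head judgments in which a generated video was preferred over a reference. The names `PairwiseJudgment` and `win_rate`, and the sample data, are hypothetical and not taken from the project's actual code.

```python
from dataclasses import dataclass

@dataclass
class PairwiseJudgment:
    """One hypothetical judge decision comparing a generated video
    against a reference (e.g. author-created) video for one paper."""
    paper_id: str
    prefers_generated: bool

def win_rate(judgments: list[PairwiseJudgment]) -> float:
    """Fraction of comparisons in which the generated video won."""
    if not judgments:
        return 0.0
    wins = sum(j.prefers_generated for j in judgments)
    return wins / len(judgments)

# Illustrative data only: three made-up comparisons.
sample = [
    PairwiseJudgment("paper-001", True),
    PairwiseJudgment("paper-002", False),
    PairwiseJudgment("paper-003", True),
]
print(win_rate(sample))
```

In practice such judgments would come from a human or model judge viewing both videos; this sketch only shows the aggregation step.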

