Abstract
Video frame interpolation aims to generate intermediate frames between the original frames, producing videos with a higher frame rate and smoother motion. Many video frame interpolation methods first estimate the motion vectors between the input frames and then synthesize the intermediate frame based on that motion. However, these methods depend on the accuracy of the motion estimation step and fail to generate the interpolated frame accurately when the estimated motion vectors are inaccurate. Therefore, to avoid the uncertainties caused by motion estimation, this paper proposes a method that implicitly learns the motion between frames and directly generates the intermediate frame. Since two consecutive frames are relatively similar, our method takes the average of these two frames and uses residual learning to learn the difference between this average and the ground-truth middle frame. In addition, our method uses Convolutional LSTMs and four input frames to better incorporate spatiotemporal information. We also incorporate attention mechanisms in our model to further enhance performance. The network can easily be trained end to end without difficult-to-obtain data such as optical flow. Our experimental results show that the proposed method, without explicit motion estimation, performs favorably against other state-of-the-art frame interpolation methods. Further ablation studies demonstrate the effectiveness of the various components of the proposed model.
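A minimal sketch of the residual-learning idea the abstract describes: the network starts from the average of the two center frames and predicts only the correction toward the ground-truth middle frame, conditioned on all four input frames. The `ResidualInterpolator` module, its layer choices, and the plain convolutional backbone below are illustrative assumptions standing in for the paper's ConvLSTM-and-attention architecture, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualInterpolator(nn.Module):
    """Hypothetical residual frame interpolator (illustrative only)."""

    def __init__(self, channels=3, hidden=64):
        super().__init__()
        # Simple convolutional stack standing in for the paper's
        # ConvLSTM + attention backbone (assumption).
        self.net = nn.Sequential(
            nn.Conv2d(4 * channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, f0, f1, f2, f3):
        # Average of the two frames adjacent to the target instant;
        # consecutive frames are similar, so this is a strong baseline.
        base = 0.5 * (f1 + f2)
        # Residual learning: predict only the difference between the
        # average and the true middle frame, using all four inputs.
        residual = self.net(torch.cat([f0, f1, f2, f3], dim=1))
        return base + residual

# Usage: four consecutive frames in, one interpolated middle frame out.
frames = [torch.randn(1, 3, 128, 128) for _ in range(4)]
mid = ResidualInterpolator()(*frames)  # shape: (1, 3, 128, 128)
```

Learning the residual rather than the full frame means the network only has to model the (typically small) deviation from the frame average, which is what lets this formulation skip explicit motion estimation.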
Original language | English
---|---
Article number | 9145730
Pages (from-to) | 134185-134193
Number of pages | 9
Journal | IEEE Access
Volume | 8
DOI |
Publication status | Published - 2020
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering