Avinash Paliwal / Super-SloMo · Merge requests · !1

updated the quantitative results to accurately reflect the performance of SepConv

Merged Administrator requested to merge github/fork/sniklaus/master into master Dec 27, 2018
Overview 3 · Commits 1 · Pipelines 0 · Changes 1

Created by: sniklaus

Thank you for sharing your implementation and contributing to the area of video frame interpolation!

I am the first author of SepConv and am afraid that the quantitative results in the table do not accurately reflect its performance. Specifically, the table states the results for the version of SepConv that was trained to produce perceptually pleasing results, which performs subpar in a quantitative benchmark. As such, I have extended the table with the results for the more appropriate version.

On a side note, I am not a fan of using motion masks for the quantitative benchmark, since they ignore possible artifacts in the regions outside the masks. Furthermore, the samples for the comparison are only 256x256-pixel crops from UCF-101, so a method that performs perfectly on this benchmark may still perform poorly at more realistic resolutions. Lastly, for some of the examples, the ground truth appears to be either the first or the second frame (e.g. 1, 141, or 271).
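To make the motion-mask concern concrete, here is a minimal PSNR sketch in plain NumPy (this is an illustration, not the benchmark's actual evaluation code): an artifact that falls entirely outside the mask leaves the masked score untouched, while the full-frame score is penalized.

```python
import numpy as np

def psnr(pred, gt, mask=None):
    """PSNR in dB over 8-bit images; if a boolean mask is given,
    only the masked pixels contribute to the error."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    if mask is not None:
        pred, gt = pred[mask], gt[mask]
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Toy example: a severe artifact in a "static" corner of the frame.
gt = np.full((256, 256), 128, dtype=np.uint8)
pred = gt.copy()
pred[:32, :32] = 0  # artifact outside the motion region

mask = np.zeros((256, 256), dtype=bool)
mask[100:200, 100:200] = True  # hypothetical motion mask, artifact-free

full = psnr(pred, gt)          # full-frame PSNR is penalized by the artifact
masked = psnr(pred, gt, mask)  # masked PSNR is infinite: the mask hides it
```

The masked score is perfect even though a quarter-sized corner of the prediction is completely wrong, which is exactly why a masked benchmark can overstate quality.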

Anyway, huge thanks again for contributing to the area of video frame interpolation!

Source branch: github/fork/sniklaus/master