One of the requirements for running the competition this year is the ability to record and upload uncut video of your performances. This way, other teams can watch the video and publicly attest to the scores that the team came up with, or flag issues or misunderstandings that can then be adjudicated. By crowdsourcing the attestation of these videos during preliminaries, rather than just relying on the OC to review the videos, we can get statistical significance and our ability to deal with more teams will scale with the size of the community.
We are still developing guidelines and will be sending around details and discussing them shortly.
Here are my thoughts on the procedure. Note that nothing is fixed yet; this is still being developed, but we’d be interested to hear your opinions and have some discussion about what is possible.
- For each run, we would require two uncut videos to be recorded and uploaded for verification.
- Start each run pointed at a screen or page with the details of the team, apparatus, and robot configuration.
- Then point both cameras at an internet-synced clock to timestamp the videos relative to real time and to each other.
- Then document the robot and apparatus:
- One camera will be focused on the apparatus to verify settings.
- The other camera will be focused on the robot to verify configuration.
- Then the test is run (without stopping the cameras or cutting the video):
- One camera will be showing the operator, screen, and controls.
- The other camera will show the robot, zooming in as necessary to show scoring detail.
- The cameras will end pointing at the scoresheet that was filled in during the run.
- We will also require a third video to be uploaded, which is a Picture-in-Picture composite of the two original recordings.
- This is the one we expect people to actually watch to verify recordings and for publicity (the originals would serve as a backup in case more detail is required).
- I would be happy with teams putting their own splash screens, etc. on this.
- I’ve put together a guide on how to use ffmpeg to create such videos in batch (assuming the videos were started at the same time, or that the time offset between them is known) for teams that don’t have the resources (software or time) to edit each video manually.
- These videos would be uploaded to a public service that timestamps the upload (e.g. YouTube).
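To make the batch approach above concrete, here is a minimal sketch that builds ffmpeg commands using its standard scale and overlay filters, assuming the time offset between each pair of recordings is already known. The file names, offsets, and scale/margin values are illustrative, not part of any guideline.

```python
def pip_command(main_video, inset_video, inset_offset, output):
    """Build an ffmpeg command that overlays inset_video as a small
    picture-in-picture in the bottom-right corner of main_video.

    inset_offset: seconds by which the inset recording started *before*
    the main one; ffmpeg skips that much of the inset so the two start
    in sync. (All names and values here are illustrative.)
    """
    filter_graph = (
        "[1:v]scale=iw/4:ih/4[pip];"       # shrink inset to quarter size
        "[0:v][pip]overlay=W-w-10:H-h-10"  # place it bottom-right, 10 px in
    )
    return [
        "ffmpeg",
        "-i", main_video,
        "-ss", str(inset_offset),          # seek into the inset to align it
        "-i", inset_video,
        "-filter_complex", filter_graph,
        "-c:a", "copy",                    # keep the main camera's audio as-is
        output,
    ]

# Batch over all runs; each entry is (main, inset, offset, output).
runs = [
    ("run1_robot.mp4", "run1_operator.mp4", 2.5, "run1_pip.mp4"),
    ("run2_robot.mp4", "run2_operator.mp4", 0.0, "run2_pip.mp4"),
]
# import subprocess
# for main, inset, offset, out in runs:
#     subprocess.run(pip_command(main, inset, offset, out), check=True)
```

This keeps the originals untouched and produces one composite per run with no per-video manual editing.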
Here are some of the details that we’re still working on.
- Exact instructions, position-by-position, for each test method (or type of test), showing what we expect the camera to be able to see at every stage, along with an example video. The goal is that we can remotely verify scoring, settings, compliance, and so-on.
- Specifications for the quality of the video, lighting, and so on. We’re assuming that everyone has access to at least mobile phones that can record and upload Full HD video, although we would prefer cameras that can zoom in at various times so that scoring is easier to verify.
- Which video services to allow. We want to keep this to services that are public and will timestamp the uploads (rather than people’s own servers), just so there’s no question as to when a given video was uploaded and that it wasn’t edited. We need to be sensitive to what services can be accessed in different parts of the world, so opinions here would be appreciated.
- Logistics (web and email forms) to keep track of teams’ uploads and attestations.
Again, I stress that these aren’t “the new rules” - nothing is set yet; this is the current thinking, so we can have some discussion.