Thanks to everyone for coming to the second Team Leader Meeting! Also a huge thanks to everyone for their work in attesting to the videos. It’s an unusual year and we’re making it work together.
One thing I do want to emphasise is that, as we are all adapting, there have been some minor miscommunications. We have therefore focused on identifying where the actual differences in performance are, and on whether the slight variations in what teams have done made a significant difference, or whether we can account for them within our usual assumption of some measurement error.
Based on this, we have the following team moving through to the Best-in-Class finals, based on clear, undeniable excellence in performance:
- Nubot Rescue
Congratulations to these teams, and to all teams who were able to contribute runs! Across all 10 teams, there were a total of 305 Standard Test Method trials from 5 countries. This has been a very impressive undertaking, especially in these challenging times. The data and videos contribute a significant body of knowledge to the Response Robotics community.
Here are some notes about how we will run the finals. Note that, as usual, we consider scores within 10% of each other to be equivalent.
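To make the equivalence rule concrete, here is a minimal sketch. It assumes the 10% margin is measured relative to the higher of the two scores; the Organising Committee may compute it differently.

```python
def scores_equivalent(a: float, b: float, margin: float = 0.10) -> bool:
    """Return True if two scores are within `margin` of each other.

    Assumption: the margin is taken relative to the higher score.
    """
    hi = max(a, b)
    if hi == 0:
        return True  # two zero scores are trivially equivalent
    return abs(a - b) / hi <= margin

# Example: 91 vs 100 is a 9% gap, so the scores are treated as equivalent;
# 89 vs 100 is an 11% gap, so they are not.
```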
Dexterity: Shinobi and Hector-DRZ
The performance of teams in the Dexterity tests was particularly impressive given the large number of trials and the wide range of positions at which teams were able to perform the tasks. These two teams in particular stood out. To demonstrate the reliability of their performance, we will run the finals as follows:
- Scores reset.
- 3x 15 minute trials, one at each height of 30 cm, 60 cm, and 90 cm.
- At each height, perform the Omni Touch and Insert task at White, Yellow, and Orange distances.
- Robot starts the trial off the terrain, holding the insertion tool.
- No need to let go of the insertion tool between insertions (but can if you like).
- When shifting the Omni apparatus between White, Yellow, and Orange, the arm should be outside the dexterity platform footprint, but the robot does not need to leave the terrain.
- The trial ends when the time expires (robot may still be on the terrain).
- If, during a single 15 minute trial, a team performs the touch/insert task in all 15 positions (5 holes x 3 Omni positions) at a given height, they may repeat the sequence starting back at White if there is still time.
- The winner is the team with the most points at the end of the 3x 15 minute trials.
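The point structure above can be sketched as follows. This is a hypothetical tally, assuming one point per successful touch/insert; the actual score sheet and per-task point values are defined by the Organising Committee.

```python
HOLES_PER_POSITION = 5
OMNI_POSITIONS = ("White", "Yellow", "Orange")

def total_points(trials: list[dict[str, int]]) -> int:
    """Total across the 3x 15 minute trials (one per height).

    Each trial is recorded as a dict mapping an Omni distance to the
    number of successful touch/inserts performed there (repeat passes
    after a clean sweep would simply add to these counts).
    """
    return sum(sum(trial.values()) for trial in trials)

# A perfect run with no repeats: 5 holes x 3 Omni positions x 3 heights = 45.
perfect = [{p: HOLES_PER_POSITION for p in OMNI_POSITIONS} for _ in range(3)]
```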
Search/Inspect: Hector, Nubot Rescue, Shinobi
All three teams demonstrated a very good ability to place their sensors in the different locations required and to inspect with very good acuity. There was considerable variation in the time taken, but also in the difficulty of the environments. Now that all teams have seen each other's performances, we would like the finals to be a little more comparable. We will therefore run the finals as follows:
- Scores reset.
- 20 minute time limit.
- The search rails are placed with the same distribution (4x forward, 3x upward, 3x downward, at the heights listed previously) but may now be 1.2 m apart to reduce travel time.
- There must be an obstacle at least 90 mm (3.5") in height every 2 rails. This may be a ramp, curb, block of wood, etc.
- The winner is the team with the most points at the end of the 20 minute trial.
Exploration/Mapping: Hector, Nubot Rescue, Shinobi
All three teams displayed good maps of real-world environments, with only minor issues in mapping accuracy and consistency. There were also some minor deviations in implementation and recording, but these did not significantly affect the demonstration of mapping capability, and we are disregarding them given the unique challenges this year. While there are some variations between the three demonstrations of mapping capability, all are close enough that, given the widely varying environments, it is too close to call a winner.
We will therefore perform a finals run with the following parameters, to demonstrate reliability and to ensure everyone is running with as similar a process as possible.
- Scores reset.
- 30 minute time limit.
- Same environment as before.
- Robot path should be different.
- There must be an obstacle of at least 90 mm (3.5") every 4.8 m of progress through the environment. This may be a ramp, curb, block of wood, etc.
- The map must be demonstrated live as it is built, in real time.
- At the start of the call we will ask teams to shift some items and demonstrate that they appear in the map.
- Please prepare three “Cross Fiducials”, which serve to block the laser scan and show up on the map. Each consists of two panels, 60-120 cm wide and tall enough to appear in your first-level laser scan, arranged in a cross so that they show up clearly in the map. We will tell you where to place them. You can make these (or something similar) out of anything you have available, such as wood panels, thick cardboard, or foam.
- Scores will be evaluated by the Organising Committee based on the quality and alignment of scans of the half-round fiducials as normal.
Autonomous Mobility: Shinobi
One team demonstrated Autonomous Mobility, although we understand that other teams attempted it. The demonstrated capability was extremely impressive and proves that this test is within the capabilities of the league's robot platforms. We would like to see a live run of the hardest variant demonstrated in their video, so we can evaluate the reliability of the implementation, discuss the levels and capability of the autonomy system, and decide on the trophy.
Aerial: NITro and Sazanka
These two teams did an excellent job of replicating the tests and produced almost identical scores, in almost the same time, with the same aircraft. That demonstrates reproducibility and repeatability in the test methods. They also proved the tests useful for evaluating beyond-visual-line-of-sight flight (as is our custom in RoboCup, but not necessarily elsewhere). We would ordinarily progress to a more difficult setting, but this will depend on what teams can achieve logistically. Adam will reach out to the two teams to determine what is practical for a final/demonstration.