Pretrained inpainting for robotic exploration; today's reading.

inpainting masked-autoencoder robotics today-i-read

  1. Vishnu Dutt Sharma, Anukriti Singh, and Pratap Tokekar, “Pre-Trained Masked Image Model for Mobile Robot Navigation.” arXiv, October 2023 [Online]. Available at: http://arxiv.org/abs/2310.07021. [Accessed: October 15, 2023]

    tl;dr: A paper on robotic exploration and map building. It takes an off-the-shelf inpainting model, MAE (Masked Autoencoder, He et al. 2022), and applies it without any fine-tuning in three contexts: field-of-view expansion, single-agent exploration, and multi-agent exploration. In the field-of-view expansion experiments, performance degrades as the patches to be inpainted grow larger. The method is evaluated on semantic and binary (occupancy) maps using synthetic data, and despite no fine-tuning of MAE it outperforms classical techniques on both single-agent and multi-agent exploration. I liked the writing in this paper; the hypothesis and themes are very clear throughout. A rough sketch of how a pretrained MAE-style inpainter could be applied to an occupancy map follows below.
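
A minimal sketch of the general idea, not the authors' code: render the partially observed occupancy map as an image, mark mostly-unobserved patches as masked, and let an MAE-style inpainter fill them in. The `StubMAE` model, the 16-pixel patch size, and the "mostly unknown" threshold are assumptions for illustration; in practice one would load a pretrained MAE checkpoint (He et al. 2022) in place of the stub.

```python
import numpy as np
import torch
import torch.nn as nn

PATCH = 16  # MAE ViT models typically use 16x16 patches (assumption)


def upsample_mask(patch_mask: torch.Tensor, hw) -> torch.Tensor:
    """Expand a patch-level boolean mask to pixel resolution, shape (1, 1, H, W)."""
    h, w = hw
    m = patch_mask.repeat_interleave(PATCH, 0).repeat_interleave(PATCH, 1)
    return m[None, None, :h, :w]


class StubMAE(nn.Module):
    """Stand-in for a pretrained MAE: copies known patches and fills masked
    patches with the mean intensity of the known region. A real pretrained
    inpainting model would go here instead."""

    def forward(self, img: torch.Tensor, patch_mask: torch.Tensor) -> torch.Tensor:
        pix_mask = upsample_mask(patch_mask, img.shape[-2:])
        out = img.clone()
        out[pix_mask] = img[~pix_mask].mean()
        return out


def inpaint_occupancy_map(occ: np.ndarray, model: nn.Module) -> np.ndarray:
    """occ: H x W float array, 0 = free, 1 = occupied, NaN = unobserved.
    H and W are assumed to be multiples of PATCH.
    Returns a map with unobserved cells filled by the model's prediction."""
    known = ~np.isnan(occ)
    # Unobserved cells get a neutral 0.5 value before going to the model.
    img = torch.from_numpy(np.nan_to_num(occ, nan=0.5)).float()[None, None]
    # A patch is masked (to be inpainted) if most of its cells are unobserved.
    patch_known = torch.from_numpy(known.astype(np.float32))[None, None]
    patch_mask = nn.AvgPool2d(PATCH)(patch_known)[0, 0] < 0.5
    with torch.no_grad():
        pred = model(img, patch_mask)[0, 0].numpy()
    out = occ.copy()
    out[~known] = pred[~known]
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    occ = (rng.random((64, 64)) > 0.8).astype(float)  # toy occupancy map
    occ[:, 32:] = np.nan                              # right half unobserved
    filled = inpaint_occupancy_map(occ, StubMAE())
    print("unobserved cells remaining:", int(np.isnan(filled).sum()))
```

Working at patch granularity mirrors the observation above: the larger the masked patches, the harder the inpainting task.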
