Research
I develop algorithms and AI models that enable mobile robots to perceive, predict, and act effectively in complex, partially observed environments from egocentric viewpoints. My recent work explores generative 3D world models, specifically diffusion-based occupancy prediction for exploration, mapping, and long-horizon planning. My research focuses on multi-modal machine perception for 3D scene understanding, generative occupancy modeling, and Vision–Language–Action (VLA) models that unify perception, language, and control for grounded decision-making.
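To make the occupancy-prediction idea concrete, here is a minimal, illustrative sketch of conditional denoising over a voxel grid: starting from noise, a denoiser (stubbed out here) is applied repeatedly while observed voxels are re-imposed at each step, so the completed map stays consistent with sensor data. Everything in the snippet (grid shape, schedule, the `denoise_step` stub) is hypothetical and not taken from the papers listed below.

```python
import numpy as np

# Illustrative sketch only: a toy reverse-diffusion loop over a voxel occupancy
# grid, conditioned on partially observed voxels. The denoiser below is a
# placeholder standing in for a trained 3D diffusion model.

GRID = (32, 32, 16)          # local occupancy volume around the robot (x, y, z)
T = 50                       # number of denoising steps
rng = np.random.default_rng(0)

def denoise_step(x_t, t):
    """Placeholder for a learned denoiser that predicts a cleaner occupancy volume.
    Here it simply shrinks the noise so the loop ends with a valid grid."""
    return x_t * (1.0 - 1.0 / (t + 1))

# Partial observation: voxels the robot has actually measured (1 = occupied,
# 0 = free); the mask marks which voxels were observed at all.
observed = np.zeros(GRID, dtype=np.float32)
mask = np.zeros(GRID, dtype=bool)
observed[:, :, 0] = 1.0      # e.g. the ground plane was observed as occupied
mask[:, :, 0] = True

# Start from Gaussian noise and iteratively denoise, re-imposing the observed
# voxels after every step so the completion stays consistent with the sensor data.
x = rng.standard_normal(GRID).astype(np.float32)
for t in reversed(range(T)):
    x = denoise_step(x, t)
    x[mask] = observed[mask]

predicted_occupancy = (x > 0.5)   # threshold into a binary occupancy map
print(predicted_occupancy.sum(), "voxels predicted occupied")
```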
Robust Robotic Exploration and Mapping Using Generative Occupancy Map Synthesis
Accepted for publication in Autonomous Robots (Springer), January 2026. Check out the pre-print manuscript here.
Online Diffusion-Based 3D Occupancy Prediction at the Frontier with Probabilistic Map Reconciliation
Accepted to ICRA 2025. Check out the pre-print manuscript here.
SceneSense: Diffusion Models for 3D Occupancy Synthesis from Partial Observation
Accepted to IROS 2024. Read our pre-print manuscript here, or the published version on IEEE Xplore.
RF-Modulated Adaptive Communication Improves Multi-Agent Robotic Exploration
In Review