I am a computer scientist working as a Postdoctoral Researcher at the Oxford Robotics Institute, University of Oxford, where I am part of the GOALS group led by Nick Hawes. I am also a Retained Lecturer in Engineering Science at Jesus College and an Honorary Research Fellow at UCL Computer Science.

I am interested in reinforcement learning (RL) and, more broadly, artificial intelligence (AI). The key insight behind my work is that RL can discover, by trial and error, ways of solving decision-making problems that outperform or complement traditional methods. My work develops rigorous RL methodologies, especially for graph-structured systems (Graph RL), and applies them to disciplines as diverse as robotics, operations research, and statistics (AI for Science).

News

[Mar 2026] Our work Accelerating atomic fine structure determination with graph reinforcement learning, a collaboration with researchers at Imperial College, has been published in Communications Physics. This paper proposes an AI framework to accelerate a fundamental discovery task in atomic physics that takes highly trained human experts months or even years to complete.

[Feb 2026] New preprint on Online Navigation Planning for Long-term Autonomous Operation of Underwater Gliders. This work proposes a set of AI methodologies and an autonomous system for controlling underwater glider robots that sample scientific ocean data. We report results from deployments totalling 3 months and 1000 km, conducted in collaboration with the National Oceanography Centre, which represent the largest fully autonomous field evaluation of underwater gliders to date.

[Feb 2026] My colleague Alex Schutz and I wrote a blog post on using graph neural networks in reinforcement learning. This work, which will be presented at the ICLR 2026 Blogposts Track, covers crucial design decisions and practical implementation concerns. It also includes an implementation example using well-known libraries.

For older news, see the archive.