In my previous work, I have contributed to a variety of debates in AI ethics, including those concerning trust and explainability in medical AI systems, the risks of generative AI, and the challenges posed by continual learning systems in healthcare.
For example, I have argued that medical AI systems are not appropriate objects of trust or trustworthiness, and that prioritising accuracy over explainability in these systems may, counterintuitively, generate worse patient health outcomes than prioritising explainability.
More recently, I have argued that "adaptive" medical AI systems (i.e. those that continue learning from new data even after deployment in a clinical setting) exacerbate the ethical issues associated with standard, "locked" medical AI systems. I have also argued that using and maintaining adaptive medical AI systems may need to be classified (and therefore regulated) as forms of medical research.
Beyond healthcare, I have advocated a range of practical strategies for minimising the risks posed by large language models and other generative AI systems, particularly in teaching and education.
Beyond AI, I also contribute to debates in biomedical ethics. For instance, I have argued that the exclusion of psychiatric patients from access to physician-assisted suicide is a form of discrimination.
You can find a full list of my peer-reviewed articles and citation metrics here.