AI

My AI work explores the intersection of technology and humanity, focusing not just on what models do, but on why they do it. I'm deeply interested in the data that feeds AI systems, the ethical complexities surrounding their use, and how we define success in contexts where outcomes matter deeply.

How do we build models that are ethical by design? Can we truly anticipate how they'll be used in the future? And crucially, how do we measure outcomes to ensure these powerful tools actually serve vulnerable people and communities, rather than merely placating "the system"?

Drawing on my more formal work, I'm also fascinated by the nature of inspiration itself: how "creative" can AI models truly be, and what is the value of automated or AI-driven work?