Systems I've Built

LLM Engineering Pipeline (Focus: Trust & Safety)

Coffee Meets Bagel ⸱ 2026

  • Built end-to-end experimentation platform, reducing setup time by ~70%
  • Designed evaluation framework (rule-based + LLM-as-judge), improving pass rate from 41.6% → 88.6%
  • Implemented multi-layer safety architecture, reducing violations from 8.4% → 1.9%

Stack: Python, OpenAI API, Pandas
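The evaluation framework above can be sketched as a two-stage gate: cheap deterministic rule checks run first, and only responses that pass are escalated to an LLM judge. This is a minimal illustration, not the production pipeline; all names (`BANNED_PHRASES`, `judge_score`, the 0.7 threshold) are assumptions, and the judge is stubbed where a real system would call the OpenAI API with a rubric prompt.

```python
# Hedged sketch of a two-stage response evaluator: deterministic rules
# first, LLM-as-judge second. Names and thresholds are illustrative.

BANNED_PHRASES = ("send money", "share your address")  # example rules only

def rule_check(response: str) -> bool:
    """Stage 1: fast deterministic filters (non-empty, no banned phrases)."""
    text = response.lower()
    if not text.strip():
        return False
    return not any(phrase in text for phrase in BANNED_PHRASES)

def judge_score(response: str) -> float:
    """Stage 2 placeholder: in a real pipeline this would call an LLM
    judge (e.g. via the OpenAI API) with a grading rubric and parse a
    0-1 score. Stubbed here so the sketch stays self-contained."""
    return 1.0 if "thanks" in response.lower() else 0.5

def evaluate(response: str, threshold: float = 0.7) -> bool:
    """A response passes only if it clears the rules AND the judge."""
    return rule_check(response) and judge_score(response) >= threshold
```

Running the cheap layer first keeps judge-model cost proportional to the responses that survive the rules, which is what makes this kind of gate practical at experiment scale.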

I study how intelligent systems influence human agency under uncertainty.

Neuro-AI · Responsible ML · Computational health

ABOUT ME
I study how humans and machine learning systems learn to adapt to one another under uncertainty.

My work focuses on neuro-AI, interpretable machine learning, and computational mental health, with particular attention to agency: not as a constraint on systems, but as a condition for human–machine symbiosis. I work with neural, clinical, and longitudinal data, and examine how models shape behaviour, interpretation, and choice once they enter real settings. I am currently pursuing a BSc in Data Science and Business Analytics at the University of London.

Symbiosis begins when both sides are allowed to change.

CURRENT FOCUS
TECH STACK
PyTorch · OpenCV · YOLOv8 · SHAP · Python (numpy, pandas, scikit-learn) · React
OPEN QUESTIONS
1. How should agency be operationalised in sequential decision models without collapsing into proxy metrics?
2. Where do current interpretability tools mislead us about causal structure in neural data?
3. What evaluation regimes meaningfully detect harm before deployment rather than after?