
Physics taught me how to see the world. AI safety is how I hope to protect it.
Starting Sept. 2025, I’m an AI researcher at CHAI (Center for Human-Compatible AI), where I’m fortunate to work with Cam Allen. We are working on predicting when interpretable abstractions appear, clarifying shortcomings in the current literature, and developing a more robust, general, and natural definition of features.
I’m also a 4th-year physics major at Berkeley (3.95 GPA). I’m thankful for the stipends that have supported my studies, most notably EGE.
Until late Aug. 2025, I was a particle physics researcher in the Neutrino Group at SLAC National Accelerator Laboratory under Professor Hirohisa Tanaka, where I used latent spaces to investigate the low-energy excess of electron neutrinos detected in the MiniBooNE experiment.
Before SLAC, I worked on Bayesian analysis for Mu2e under Professor Yury Kolomensky.
Before Berkeley, I wanted to experience “real life” – so I went all in and spent half a year working as a trash man. Then I traveled through South America for another half a year. Moments like the mortal fear of running from wild dogs in the desert gave me clarity on what matters in life.
I love language and spend more than half of my time speaking languages other than English: Chinese (here’s proof), Spanish, Swedish/Norwegian, and Italian.
Apart from family and friends, my values have been shaped greatly by Marcus Aurelius’s Meditations, Confucius’s Lunyu ( 论语 ), Yuval Noah Harari’s Sapiens, and, honestly, Avatar: The Last Airbender and SpongeBob. The most transformative read of my life was Joanne Baker’s 50 Physics Ideas You Really Need to Know, which I read as a kid.
I’d love to chat about AI and AI safety over a boba! Feel free to reach out at the correct reordering of this set of strings: “berkeley”, “@”, “.”, “edu”, “karlcal”
