
Physics taught me how to see the world. AI safety is how I hope to protect it.
Starting in early September 2025, I'm an AI researcher at CHAI (the Center for Human-Compatible AI), working with Cam Allen. I have a world-model paper under review. It matters for two reasons: it is a step toward predicting when interpretable abstractions emerge, and it shows empirically that a good linear probe fit doesn't imply that a network is "aware of" or uses the probed feature.
I’m also a 4th-year physics major at Berkeley.
Until late August 2025, I was a particle physics researcher in the Neutrino Group at SLAC National Accelerator Laboratory under Professor Hirohisa Tanaka, where I used latent-space methods to investigate the low-energy excess of electron neutrinos observed by the MiniBooNE experiment.
Before SLAC, I worked on Bayesian analysis for the Mu2e experiment under Professor Yury Kolomensky.
Before Berkeley, I wanted to experience "real life," so I went all in: I spent half a year working as a trash man, then half a year traveling through South America. Moments like the mortal fear of running from wild dogs in the desert gave me clarity about what matters in life.
I love language and spend more than half of my time speaking languages other than English (Chinese, Spanish, Swedish/Norwegian, Italian).
Apart from family and friends, my values have been greatly influenced by Marcus Aurelius's Meditations, Confucius's Lunyu (论语, the Analects), Yuval Noah Harari's Sapiens, and, honestly, Avatar: The Last Airbender and SpongeBob. The most transformative read of my life was Joanne Baker's 50 Physics Ideas You Really Need to Know, which I picked up as a kid.
I’d love to chat about AI and AI safety over a boba! Feel free to reach out at the correct reordering of this set of strings: “berkeley”, “@”, “.”, “edu”, “karlcal”
