Hey there! I am a senior studying Computer Science at UC Berkeley.
Previously, I spent a great summer with the AI Technology Group at MIT Lincoln Laboratory, working with Keegan Quigley and Miriam Cha.
Currently, I am an undergraduate researcher in the Berkeley NLP Group, where I have been very fortunate to collaborate with and learn from Alane Suhr, Sanjay Subramanian, Kayo Yin, and Dan Klein.
My research interests broadly span natural language processing, with a focus on improving our understanding of models through interpretability and evaluation.
Recently, I have been exploring the multilingual capabilities of LLMs and the training dynamics of language models.
Using Language Models to Disambiguate Lexical Choices in Translation
Josh Barua, Sanjay Subramanian, Kayo Yin, Alane Suhr
EMNLP 2024
paper | code
Bidirectional Captioning for Clinically Accurate and Interpretable Models
Keegan Quigley, Miriam Cha, Josh Barua, Geeticka Chauhan, Seth Berkowitz, Steven Horng, Polina Golland
Under Review
paper | cite