Hey there! I'm currently a research intern at CMU LTI working with
Graham Neubig.
I recently graduated from UC Berkeley 🐻, where I was very fortunate to be advised by
Alane Suhr as part of the
Berkeley NLP Group.
Before that, I spent a great summer interning with the AI Technology Group at
MIT Lincoln Laboratory.
I'm drawn to research that builds a deep understanding of the underlying mechanisms, capabilities, and limitations of language models.
Recently, I've been thinking about how we can develop reasoning models that are flexible and generalize well out of distribution.
In addition to research, I spent three fun semesters teaching the data structures & algorithms class (CS 61B) at Berkeley.
In my free time, I enjoy being outdoors, playing video games with my friends, and sending funny cat videos to
Angela.
Long Chain-of-Thought Reasoning Across Languages
Josh Barua, Seun Eisape, Kayo Yin, Alane Suhr
SCALR @ COLM 2025
paper
Using Language Models to Disambiguate Lexical Choices in Translation
Josh Barua, Sanjay Subramanian, Kayo Yin, Alane Suhr
EMNLP 2024
paper | code
Improving Medical Visual Representations via Radiology Report Generation
Keegan Quigley, Miriam Cha, Josh Barua, Geeticka Chauhan, Seth Berkowitz, Steven Horng, Polina Golland
arXiv
paper | cite