Noah Lee

MS Student @ KAIST AI

I am a Master’s student at the Kim Jaechul Graduate School of AI at KAIST, jointly advised by James Thorne and Jinwoo Shin. Before joining KAIST, I was a Research Intern at Naver CLOVA, where I conducted research on recommender systems. Before that, I received my Bachelor’s degree in Statistics from Korea University.


My current research interests lie in (but are not limited to):

  • Improving how LLMs represent humans
  • Uncertainty-aware systems for reliable and safe AI usage
  • Personalization & customization of LLMs

Feel free to get in touch!


News

Jan 24, 2025 Our paper on cross-lingual transfer of reward models has been accepted to NAACL 2025!
Dec 2, 2024 A paper has been accepted to COLING 2025!
Oct 30, 2024 Check out our new preprint on cross-lingual transfer of reward models!

Publications

  1. Cross-lingual Transfer of Reward Models in Multilingual Alignment
    Jiwoo Hong*, Noah Lee*, Rodrigo Martínez-Castaño, and 2 more authors
    NAACL, 2025
  2. The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models
    Seungone Kim, Juyoung Suk, Ji Yong Cho, and 29 more authors
    NAACL, 2025
  3. Evaluating the Consistency of LLM Evaluators
    Noah Lee*, Jiwoo Hong*, and James Thorne
    COLING, 2025
  4. ORPO: Monolithic Preference Optimization without Reference Model
    Jiwoo Hong, Noah Lee, and James Thorne
    EMNLP, 2024
  5. Margin-aware Preference Optimization for Aligning Diffusion Models without Reference
    Jiwoo Hong*, Sayak Paul*, Noah Lee, and 3 more authors
    Preprint, 2024
  6. Can Large Language Models Capture Dissenting Human Voices?
    Noah Lee*, Na Min An*, and James Thorne
    EMNLP, 2023