publications

(*) denotes equal contribution

2025

  1. Every expert matters: Towards effective knowledge distillation for mixture-of-experts language models
    Gyeongman Kim*, Gyouk Chu*, and Eunho Yang
    arXiv preprint arXiv:2502.12947, 2025
  2. ReviewScore: Misinformed Peer Review Detection with Large Language Models
    Hyun Ryu, Doohyuk Jang, Hyemin S. Lee, and 8 more authors
    arXiv preprint arXiv:2509.21679, 2025

2024

  1. KorMedMCQA: Multi-choice question answering benchmark for Korean healthcare professional licensing examinations
    Sunjun Kweon*, Byungjin Choi*, Gyouk Chu, and 7 more authors
    arXiv preprint arXiv:2403.01469, 2024

2023

  1. Prediction-segmentation tasks for self-supervision of anomaly detection networks under noisy conditions
    Jihoon Choi, Gyouk Chu, and Jung-Woo Choi
    In INTER-NOISE and NOISE-CON Congress and Conference Proceedings, 2023