About me

I’m a Postdoctoral Fellow at Queen’s University’s Centre for Neuroscience Studies and Ingenuity Labs Research Institute, where I develop multi-modal AI pipelines that integrate foundation models, retrieval-augmented generation (RAG), and reinforcement learning across domains including medical, financial, and sensor signals, and across modalities such as images, text, and time series. Since April 2023, I have also been a Postdoctoral Fellow in ECE & Ingenuity Labs at Queen’s, working on hierarchical time-series representation learning and multi-domain EEG modeling with state-space models and large language model fine-tuning.

Previously, from September 2021 to March 2023, I was at the KNU-LG Convergence Research Center in South Korea, building Transformer-based ICU outcome predictors and AI clinical decision-support systems. I earned my PhD (with the Best Thesis Award) in Electronic & Electrical Engineering from Kyungpook National University in August 2021, focusing on low-shot, long-tailed learning for medical imaging and time-series prediction.

Profile Summary

  • Develop and apply cutting-edge AI methods across medical, financial, and multi-modal domains, with a focus on advancing both foundational research and real-world deployment.
  • Authored 42+ publications (journal articles, conference papers, and patents) in venues such as NeurIPS and IEEE Transactions and in Elsevier journals, accumulating 890+ citations with an h-index of 17.
  • Contributed to securing research funding and grants that supported the launch and growth of new research projects.
  • Co-supervised 12 researchers (4 PhD and 8 MSc students), mentoring them through their academic and professional development.

Research Interests

  • Deep Learning & Foundation Models: Transformers, self-supervised learning, masked autoencoders, mixture-of-experts, state-space models (Mamba), low-rank adaptation (LoRA), multi-task learning, reinforcement learning
  • NLP & Generative AI: large language model fine-tuning (Qwen, GPT), retrieval-augmented generation (RAG), instruction tuning, vector search, agentic AI, hallucination mitigation, preference alignment, knowledge graph reasoning, chain-of-thought reasoning
  • Computer Vision & Multimodal Fusion: vision-language models, contrastive vision-language alignment, cross-modal attention, multi-modal data fusion and alignment
  • Signal Processing & Biomedical AI: MRI, EEG/biosignal processing, time-series forecasting, anomaly detection, financial time-series analysis, sensor data modeling, foundation models for time series
  • Low-Shot & Imbalanced Learning: imbalanced-learning regularization, meta-loss methods, in-context learning, prompt tuning, sentiment analysis