Linlin Yu
I am a Ph.D. student in the Computer Science Department at The University of Texas at Dallas,
working under the supervision of Prof. Feng Chen. I also work closely with
Prof. Yifei Lou.
I received my B.E. degree
in Information Security from Shanghai Jiao Tong University in 2019.
My research focuses on evidential uncertainty quantification and reasoning for complex structural data.
I aim to improve the reliability of uncertainty estimations by integrating domain-specific prior knowledge.
My work has applications in areas such as attributed graphs, hyperspectral image classification, bird's-eye-view semantic segmentation, and generative models.
I am actively seeking research opportunities, including internships or full-time positions, in areas related to Deep Learning, Machine Learning, and Data Mining.
Feel free to get in touch.
Email / Google Scholar / LinkedIn
News
05/2025: I started my summer research internship at NEC Laboratories America.
03/2025: Our survey paper on uncertainty quantification in LLMs is available online.
01/2025: One paper was accepted to ICLR 2025.
01/2025: One paper was accepted to AISTATS 2025.
10/2024: One paper was accepted to Frontiers in Big Data 2024.
10/2024: I will serve as a reviewer for ICLR, AISTATS, and NeurIPS 2025.
09/2024: One paper was accepted to EMNLP 2024.
07/2024: I will serve as a reviewer for KDD 2025 and BigData 2024.
05/2024: I will give a talk on "Evidential Deep Learning for Uncertainty Quantification" at Tianjin University.
05/2024: We release the code and dataset for the first benchmark on uncertainty-aware bird's-eye-view segmentation.
05/2024: I will serve as a reviewer for NeurIPS 2024.
03/2024: One paper was accepted to NAACL 2024 Findings.
01/2024: One paper was accepted to ICLR 2024.
09/2023: One paper was accepted to NeurIPS 2023.
06/2023: One paper was accepted to the 2nd KDD Workshop on Uncertainty Reasoning and Quantification in Decision Making.
Research
I'm interested in machine learning and data mining, especially in Uncertainty Estimation, Trustworthy Large Language Models, and Graph Neural Networks.
Representative papers are highlighted.
Survey of Uncertainty Estimation in LLMs - Sources, Methods, Applications, and Challenges
Jianfeng He *, Linlin Yu *, Changbin Li *, Runing Yang, Fanglan Chen, Kangshuo Li, Min Zhang, Shuo Lei, Xuchao Zhang, Mohammad Beigi, Kaize Ding, Bei Xiao, Lifu Huang, Feng Chen †, Ming Jin †, Chang-Tien Lu †
Preprint, 2025
paper
This survey provides a comprehensive overview of uncertainty estimation for LLMs from the perspective of uncertainty sources, serving as a foundational resource for researchers entering the field. We begin by reviewing essential background on LLMs, followed by a detailed clarification of uncertainty sources relevant to them. We then introduce various uncertainty estimation methods, including both commonly used and LLM-specific approaches. Metrics for evaluating uncertainty are discussed, along with key application areas. Finally, we highlight major challenges and outline future research directions aimed at improving the trustworthiness and reliability of LLMs.
Evidential Uncertainty Probes for Graph Neural Networks
Linlin Yu, Kangshuo Li, Pritom Kumar Saha, Yifei Lou, Feng Chen
AISTATS, 2025
paper /
code /
poster
We propose a plug-and-play framework for uncertainty quantification in Graph Neural Networks (GNNs) that works with pre-trained models without retraining. Our Evidential Probing Network (EPN) uses a lightweight multi-layer perceptron (MLP) head to extract evidence from learned representations, allowing efficient integration with various GNN architectures. We further introduce evidence-based regularization techniques, referred to as EPN-reg, to enhance the estimation of epistemic uncertainty with theoretical justifications. Extensive experiments demonstrate that EPN-reg achieves state-of-the-art performance in accurate and efficient uncertainty quantification, making it suitable for real-world deployment.
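As a rough illustration of the evidential probing idea (a generic sketch, not the paper's implementation; all names, shapes, and the linear head below are hypothetical), a lightweight head can map frozen GNN embeddings to non-negative class evidence, from which Dirichlet-based uncertainty such as vacuity follows in closed form:

```python
import numpy as np

def evidential_probe(embeddings, W, b):
    """Hypothetical linear probe head: maps frozen node embeddings
    to non-negative class evidence via a softplus activation."""
    logits = embeddings @ W + b
    return np.log1p(np.exp(logits))  # softplus -> evidence >= 0

def dirichlet_uncertainty(evidence):
    """Standard subjective-logic quantities from Dirichlet evidence:
    expected class probabilities and vacuity (an epistemic proxy)."""
    alpha = evidence + 1.0                   # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True)
    probs = alpha / strength                 # expected probabilities
    vacuity = evidence.shape[-1] / strength.squeeze(-1)  # K / sum(alpha)
    return probs, vacuity

# Toy example: 2 nodes, 8-dim embeddings, 3 classes
rng = np.random.default_rng(0)
emb = rng.normal(size=(2, 8))
W, b = rng.normal(size=(8, 3)), np.zeros(3)
probs, vacuity = dirichlet_uncertainty(evidential_probe(emb, W, b))
```

Because only the small head is trained, the backbone GNN stays frozen, which is what makes the approach retraining-free.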
Uncertainty Quantification for Bird's Eye View Semantic Segmentation: Methods and Benchmarks
Linlin Yu *, Bowen Yang *, Tianhao Wang, Kangshuo Li, Feng Chen
ICLR, 2025
paper /
code /
poster
This study introduces a comprehensive benchmark for predictive uncertainty quantification in BEV segmentation, evaluating multiple uncertainty quantification methods across three popular datasets with three representative network architectures. We propose a novel loss function, Uncertainty-Focal-Cross-Entropy (UFCE), specifically designed for highly imbalanced data, along with a simple uncertainty-scaling regularization term that improves both uncertainty quantification and model calibration for BEV segmentation.
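For intuition on why a focal-style loss helps with imbalance: it down-weights confidently correct pixels so rare classes dominate the gradient. The sketch below is a generic focal cross-entropy in that spirit, not the exact UFCE loss from the paper:

```python
import numpy as np

def focal_cross_entropy(probs, targets, gamma=2.0, eps=1e-8):
    """Generic focal cross-entropy for class-imbalanced segmentation.
    probs:   (N, K) per-pixel class probabilities
    targets: (N, K) one-hot labels
    The (1 - p_t)^gamma factor shrinks the loss on confidently
    correct pixels, focusing training on hard or rare ones."""
    p_t = (probs * targets).sum(axis=-1)          # prob. of the true class
    return (-((1.0 - p_t) ** gamma) * np.log(p_t + eps)).mean()

# With gamma = 0 this reduces to ordinary cross-entropy.
probs = np.array([[0.9, 0.1], [0.3, 0.7]])
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = focal_cross_entropy(probs, targets)
```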
Can We Trust the Performance Evaluation of Uncertainty Estimation Methods in Text Summarization?
Jianfeng He, Runing Yang, Linlin Yu, Changbin Li, Ruoxi Jia, Feng Chen, Ming Jin, Chang-Tien Lu
EMNLP, 2024
paper /
code
In this paper, we introduce a comprehensive benchmark for uncertainty estimation in text summarization, incorporating 31 NLG metrics across four dimensions. We also assess the performance of 14 common uncertainty estimation methods within this benchmark. Our findings emphasize the importance of considering multiple uncorrelated NLG metrics and diverse uncertainty estimation methods to ensure reliable and efficient evaluation of uncertainty-aware text summarization techniques.
Camera-view Supervision for Bird's-Eye-View Semantic Segmentation
Bowen Yang, Linlin Yu, Feng Chen
Frontiers in BigData, 2024
paper /
code
We propose a method of supervising feature extraction with camera-view depth and segmentation information, which improves the quality of feature extraction and projection in the bird's-eye-view semantic segmentation (BEVSS) pipeline. Our model, evaluated on the nuScenes dataset, shows a 3.8% improvement in Intersection-over-Union (IoU) for vehicle segmentation and a 30-fold reduction in depth error compared to baselines, while maintaining a competitive inference speed of 32 FPS. This method offers more accurate and reliable BEVSS for real-time autonomous driving systems.
Uncertainty Estimation on Sequential Labeling via Uncertainty Transmission
Jianfeng He, Linlin Yu, Shuo Lei, Chang-Tien Lu, Feng Chen
NAACL Findings, 2024
paper /
code
We propose a Sequential Labeling Posterior Network (SLPN) to estimate uncertainty scores for extracted entities, accounting for uncertainty transmitted from other tokens. Moreover, we define an evaluation strategy to address the specificity of wrong-span cases. Our SLPN achieves significant improvements on three datasets, including a 5.54-point improvement in AUPR on the MIT-Restaurant dataset.
Uncertainty-aware Graph-based Hyperspectral Image Classification
Linlin Yu, Yifei Lou, Feng Chen
ICLR, 2024
paper /
code /
poster
In this paper, we adapt two advanced uncertainty quantification models designed for node classification in graphs, evidential GCNs (EGCN) and graph posterior networks (GPN), to hyperspectral image classification. We first show theoretically that the popular uncertainty cross-entropy loss function is insufficient to produce good epistemic uncertainty when learning EGCNs. To mitigate this limitation, we propose two regularization terms based on the inherent physical characteristics of hyperspectral data.
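For context, the uncertainty cross-entropy (UCE) loss referenced above is the expected cross-entropy under the predicted Dirichlet, which has a closed form via the digamma function. A minimal sketch of the generic loss (not the paper's code) is:

```python
import numpy as np
from scipy.special import digamma

def uce_loss(alpha, y_onehot):
    """Uncertainty cross-entropy: E_{p ~ Dir(alpha)}[-sum_c y_c log p_c]
    = sum_c y_c * (digamma(S) - digamma(alpha_c)), with S = sum_c alpha_c."""
    S = alpha.sum(axis=-1, keepdims=True)
    return ((digamma(S) - digamma(alpha)) * y_onehot).sum(axis=-1)

# Toy check: alpha = (2, 1, 1), true class 0
alpha = np.array([[2.0, 1.0, 1.0]])
y = np.array([[1.0, 0.0, 0.0]])
loss = uce_loss(alpha, y)  # digamma(4) - digamma(2) = 1/2 + 1/3
```

Minimizing UCE alone can drive evidence up everywhere, which is one way to see why it can fail to separate in-distribution from out-of-distribution inputs without extra regularization.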
Improvements on Uncertainty Quantification for Node Classification via Distance Based Regularization
Russell Hart, Linlin Yu, Yifei Lou, Feng Chen
NeurIPS, 2023
paper /
code /
poster
We theoretically analyze the limitations of Graph Posterior Networks for OOD detection when minimizing the uncertainty cross-entropy loss, and we propose a distance-based regularization that encodes the prior knowledge that OOD nodes should map far from in-distribution nodes in the latent representation space.
Conference Reviewer
2025: NeurIPS, ICML, ICLR, KDD, AISTATS
2024: KDD, NeurIPS, BigData, ICML
2023: ICML