Hi, thanks for stopping by!
I am a master's student in Computer Science at Fudan University, advised by Prof. Xuanjing Huang. Currently, I am fortunate to work with Prof. Pengfei Liu on large language models in the GAIR Lab.
Prior to this, I gained research experience in the HKUST NLP Group, Shanghai AI Lab, and the MSRA NLC Group.
Research Focus
Out-of-distribution Generalization
Dataset Collection and Indirect Evaluation
Large Language Models
Publications and Open-source Toolkits
Many thanks to all my collaborators and advisors!
Reperforming Feedback Loops in Self-Alignment for Problem Solving
Ting Wu, Pengfei Liu. [To be submitted to NeurIPS 2024]
Enhancing Contrastive Learning with Noise-guided Attack: Towards Continual Relation Extraction in the Wild [ARR December Meta-Review 4, Suggested Submission to ACL 2024]
Self-Challenging Reweighting via Memorization-driven Coreset for OOD Generalization
Ting Wu, Xuanjing Huang. Under Review. 2023. [paper]
Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization
Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective
Less is Better: Recovering Intended-Feature Subspace to Robustify NLU Models
OpenCompass: Open-Source Large Language Model Evaluation Suite and Platform
TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing