I am a Tenure-track Assistant Professor (Distinguished Research Fellow and Ph.D. supervisor) at the School of Computer Science, Nanjing University. My research focuses on Software Engineering (SE), particularly leveraging AI and Large Language Models (LLMs) to build advanced, automated testing and debugging technologies for complex systems.
🚀 Outstanding undergraduate and graduate students interested in Software Engineering, AI for SE, LLMs, or system reliability are welcome to contact me about research internships or pursuing a Master's or Ph.D. degree! See details →
I received my Ph.D. from Nanjing University under the supervision of Prof. Yuming Zhou and was a visiting student at the CREST Centre, University College London, co-supervised by Prof. Mark Harman and Prof. Jens Krinke.
My research lies at the intersection of AI and Software Engineering, with a focus on advancing automated techniques for software validation and debugging.
Our research group is continuously recruiting Master's and Ph.D. students who wish to pursue in-depth research in software engineering, AI/LLM for SE, software testing and debugging, compiler technology, system reliability, and related areas. We offer ample research funding, well-equipped computing resources, and an open academic atmosphere.
Contact: If you are interested in our research directions, please send your CV, undergraduate/graduate transcripts, and a brief statement of research interests to my email: yangyibiao (at) nju.edu.cn.
Beyond Coverage: Automatic Test Suite Augmentation for Enhanced Effectiveness Using Large Language Models
Once4All: Skeleton-Guided SMT Solver Fuzzing with LLM-Synthesized Generators
Isolating Compiler Faults via Multiple Pairs of Adversarial Compilation Configurations
Using a Sledgehammer to Crack a Nut? Revisiting Automated Compiler Fault Isolation
Towards Better Code Understanding in Decoder-Only Models with Contrastive Learning
Validating SMT Rewriters via Rewrite Space Exploration Supported by Generative Equality Saturation
Isolating Compiler Faults through Differentiated Compilation Configurations
Unveiling Compiler Faults via Attribute-Guided Compilation Space Exploration
Debugger Toolchain Validation via Cross-Level Debugging
ClozeMaster: Fuzzing Rust Compiler by Harnessing LLMs for Infilling Masked Real Programs
Deep Learning-based Software Engineering: Progress, Challenges, and Opportunities
Enriching Mutation Testing with Innovative Method Invocation Mutation: Filling the Crucial Missing Piece of the Puzzle
Understanding the Potentially Confounding Effect of Test Suite Size in Test Effectiveness Evaluation
Code-Line-Level Bugginess Identification: How Far Have We Come, and How Far Have We Yet to Go?
Assessing Effectiveness of Test Suites: What Do We Know and What Should We Do?
Risky Dynamic Typing Related Practices in Python: An Empirical Study
SMT Solver Validation Empowered by Large Pre-trained Language Models
Validating SMT Solvers via Skeleton Enumeration Empowered by Historical Bug-Triggering Inputs
Heterogeneous Testing for Coverage Profilers Empowered with Debugging Support
Effective Isolation of Fault-Correlated Variables via Statistical and Mutation Analysis
Mitigating False Positive Static Analysis Warnings: Progress, Challenges, and Opportunities
Uncovering Bugs in Code Coverage Profilers via Control Flow Constraint Solving
Mutant Reduction Evaluation: What is There and What is Missing?
CBUA: A Probabilistic, Predictive, and Practical Approach for Evaluating Test Suite Effectiveness
Automatic Self-Validation for Code Coverage Profilers
Hunting for Bugs in Code Coverage Tools via Randomized Differential Testing
Predictive Analysis for Race Detection in Software-Defined Networks
How Far We Have Progressed in the Journey? An Examination of Cross-Project Defect Prediction
Effort-Aware Just-in-Time Defect Prediction: Simple Unsupervised Models Could Be Better Than Supervised Models
An Empirical Study on Dependence Clusters for Effort-Aware Fault-Proneness Prediction
Are Slice-Based Cohesion Metrics Actually Useful in Effort-Aware Post-Release Fault-Proneness Prediction? An Empirical Study
Room 722, Computer Science Building, Nanjing University
163 Xianlin Avenue, Nanjing 210023, China