
✨ Awesome Issue Resolution

Advances and Frontiers of LLM-based Issue Resolution in Software Engineering: A Comprehensive Survey


📖 Documentation Website | 📄 Full Paper | 📋 Tables & Resources


📖 Abstract

Based on a systematic review of 175 papers and online resources, this survey establishes a holistic theoretical framework for Issue Resolution in software engineering. We examine how Large Language Models (LLMs) are transforming the automation of GitHub issue resolution. Beyond the theoretical analysis, we have curated a comprehensive collection of datasets and model training resources, which are continuously synchronized with our GitHub repository and project documentation website.

🔍 Explore This Survey:

  • 📊 Data: Evaluation and training datasets, data collection and synthesis methods
  • 🛠️ Methods: Training-free (agent/workflow) and training-based (SFT/RL) approaches
  • 🔍 Analysis: Insights into both data characteristics and method performance
  • 📋 Tables & Resources: Comprehensive statistical tables and resources
  • 📄 Full Paper: Read the complete survey paper

📚 Complete Paper List

Total: 170 papers across 14 categories

📊 Evaluation Datasets

Benchmarks for evaluating issue resolution systems

  • SWE-bench: SWE-bench: Can Language Models Resolve Real-World GitHub Issues? (2024) arXiv
  • SWE-bench Lite: SWE-bench: Can Language Models Resolve Real-World GitHub Issues? (2024) arXiv
  • SWE-bench Verified: Introducing SWE-bench Verified | OpenAI (2024) arXiv
  • SWE-bench-java: SWE-bench-java: A GitHub Issue Resolving Benchmark for Java (2024) arXiv
  • Visual SWE-bench: CodeV: Issue Resolving with Visual Data (2025) arXiv
  • SWE-Lancer: SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering? (2025) arXiv
  • Multi-SWE-bench: Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving (2025) arXiv
  • SWE-PolyBench: SWE-PolyBench: A multi-language benchmark for repository level evaluation of coding agents (2025) arXiv
  • SWE-bench Multilingual: SWE-smith: Scaling Data for Software Engineering Agents (2025) arXiv
  • SwingArena: SwingArena: Competitive Programming Arena for Long-context GitHub Issue Solving (2025) arXiv
  • SWE-bench Multimodal: SWE-bench Multimodal: Do AI Systems Generalize to Visual Software Domains? (2024) arXiv
  • OmniGIRL: OmniGIRL: A Multilingual and Multimodal Benchmark for GitHub Issue Resolution (2025) arXiv
  • SWE-bench-Live: SWE-bench Goes Live! (2025) arXiv
  • SWE-Factory: SWE-Factory: Your Automated Factory for Issue Resolution Training Data and Evaluation Benchmarks (2025) arXiv
  • SWE-MERA: SWE-MERA: A Dynamic Benchmark for Agenticly Evaluating Large Language Models on Software Engineering Tasks (2025) arXiv
  • SWE-Perf: SWE-Perf: Can Language Models Optimize Code Performance on Real-World Repositories? (2025) arXiv
  • SWE-Bench Pro: SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks? (2025) arXiv
  • SWE-InfraBench: SWE-InfraBench: Evaluating Language Models on Cloud Infrastructure Code (2025) arXiv
  • SWE-Sharp-Bench: SWE-Sharp-Bench: A Reproducible Benchmark for C# Software Engineering Tasks (2025) arXiv
  • SWE-fficiency: SWE-fficiency: Can Language Models Optimize Real-World Repositories on Real Workloads? (2025) arXiv
  • SWE-Compass: SWE-Compass: Towards Unified Evaluation of Agentic Coding Abilities for Large Language Models (2025) arXiv
  • SWE-Bench++: SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories (2025) arXiv
  • SWE-EVO: SWE-EVO: Benchmarking Coding Agents in Long-Horizon Software Evolution Scenarios (2025) arXiv
  • SWE-Lego: SWE-Lego: Pushing the Limits of Supervised Fine-tuning for Software Issue Resolving (2026) arXiv

🎯 Training Datasets

Datasets for training issue resolution agents

  • SWE-bench-train: SWE-bench: Can Language Models Resolve Real-World GitHub Issues? (2024) arXiv
  • SWE-bench-extra: SWE-bench: Can Language Models Resolve Real-World GitHub Issues? (2024) arXiv
  • Multi-SWE-RL: Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving (2025) arXiv
  • R2E-Gym: R2E-Gym: Procedural Environments and Hybrid Verifiers for Scaling Open-Weights SWE Agents (2025) arXiv
  • SWE-Synth: SWE-Synth: Synthesizing Verifiable Bug-Fix Data to Enable Large Language Models in Resolving Real-World Bugs (2025) arXiv
  • LocAgent: LocAgent: Graph-Guided LLM Agents for Code Localization (2025) arXiv
  • SWE-Smith: SWE-smith: Scaling Data for Software Engineering Agents (2025) arXiv
  • SWE-Fixer: SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution (2025) arXiv
  • SWELoc: SweRank: Software Issue Localization with Code Ranking (2025) arXiv
  • SWE-Gym: Training Software Engineering Agents and Verifiers with SWE-Gym (2025) arXiv
  • SWE-Flow: SWE-Flow: Synthesizing Software Engineering Data in a Test-Driven Manner (2025) arXiv
  • SWE-Factory: SWE-Factory: Your Automated Factory for Issue Resolution Training Data and Evaluation Benchmarks (2025) arXiv
  • Skywork-SWE: Skywork-SWE: Unveiling Data Scaling Laws for Software Engineering in LLMs (2025) arXiv
  • RepoForge: RepoForge: Training a SOTA Fast-thinking SWE Agent with an End-to-End Data Curation Pipeline Synergizing SFT and RL at Scale (2025) arXiv
  • SWE-Mirror: SWE-Mirror: Scaling Issue-Resolving Datasets by Mirroring Issues Across Repositories (2025) arXiv
  • SWE-Bench++: SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories (2025) arXiv

🤖 Single-Agent Systems

Individual autonomous agents for issue resolution

  • SWE-agent: SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering (2024) arXiv
  • PatchPilot: PatchPilot: A Cost-Efficient Software Engineering Agent with Early Attempts on Formal Verification (2025) arXiv
  • LCLM: Putting It All into Context: Simplifying Agents with LCLMs (2025) arXiv
  • DGM: Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents (2025) arXiv
  • SE-Agent: SE-Agent: Self-Evolution Trajectory Optimization in Multi-Step Reasoning with LLM-Based Agents (2025) arXiv
  • TOM-SWE: TOM-SWE: User Mental Modeling For Software Engineering Agents (2025) arXiv
  • Live-SWE-agent: Live-SWE-agent: Can Software Engineering Agents Self-Evolve on the Fly? (2025) arXiv

👥 Multi-Agent Systems

Collaborative multi-agent frameworks

  • MAGIS: MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution (2024) arXiv
  • AutoCodeRover: AutoCodeRover: Autonomous Program Improvement (2024) arXiv
  • CodeR: CodeR: Issue Resolving with Multi-Agent and Task Graphs (2024) arXiv
  • OpenHands: OpenHands: An Open Platform for AI Software Developers as Generalist Agents (2025) arXiv
  • OrcaLoca: OrcaLoca: An LLM Agent Framework for Software Issue Localization (2025) arXiv
  • DEI: Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents (2024) arXiv
  • MarsCode Agent: MarsCode Agent: AI-native Automated Bug Fixing (2024) arXiv
  • SWE-Search: SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement (2025) arXiv
  • CodeCoR: CodeCoR: An LLM-Based Self-Reflective Multi-Agent Framework for Code Generation (2025) arXiv
  • Agent KB: Agent KB: Leveraging Cross-Domain Experience for Agentic Problem Solving (2025) arXiv
  • SWE-Debate: SWE-Debate: Competitive Multi-Agent Debate for Software Issue Resolution (2025) arXiv
  • SWE-Exp: SWE-Exp: Experience-Driven Software Issue Resolution (2025) arXiv
  • Trae Agent: Trae Agent: An LLM-based Agent for Software Engineering with Test-time Scaling (2025) arXiv
  • Meta-RAG: Meta-RAG on Large Codebases Using Code Summarization (2025) arXiv

🔄 Workflow-Based Methods

Structured pipeline approaches

  • Agentless: Agentless: Demystifying LLM-based Software Engineering Agents (2024) arXiv
  • Conversational Pipeline: Exploring the Potential of Conversational Test Suite Based Program Repair on SWE-bench (2024) arXiv
  • SynFix: SynFix: Dependency-Aware Program Repair via RelationGraph Analysis (2025) arXiv
  • CodeV: CodeV: Issue Resolving with Visual Data (2025) arXiv
  • GUIRepair: Seeing is Fixing: Cross-Modal Reasoning with Multimodal LLMs for Visual Software Issue Fixing (2025) arXiv

🛠️ Tool-Augmented Methods

Methods leveraging external tools

  • MAGIS: MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution (2024) arXiv
  • AutoCodeRover: AutoCodeRover: Autonomous Program Improvement (2024) arXiv
  • SWE-agent: SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering (2024) arXiv
  • Alibaba LingmaAgent: Alibaba LingmaAgent: Improving Automated Issue Resolution via Comprehensive Repository Exploration (2025) arXiv
  • OpenHands: OpenHands: An Open Platform for AI Software Developers as Generalist Agents (2025) arXiv
  • SpecRover: SpecRover: Code Intent Extraction via LLMs (2025) arXiv
  • MarsCode Agent: MarsCode Agent: AI-native Automated Bug Fixing (2024) arXiv
  • RepoGraph: RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph (2025) arXiv
  • SuperCoder2.0: SuperCoder2.0: Technical Report on Exploring the feasibility of LLMs as Autonomous Programmer (2024) arXiv
  • EvoCoder: LLMs as Continuous Learners: Improving the Reproduction of Defective Code in Software Issues (2024) arXiv
  • AEGIS: AEGIS: An Agent-based Framework for General Bug Reproduction from Issue Descriptions (2024) arXiv
  • OrcaLoca: OrcaLoca: An LLM Agent Framework for Software Issue Localization (2025) arXiv
  • Otter: Otter: Generating Tests from Issues to Validate SWE Patches (2025) arXiv
  • CoRNStack: CoRNStack: High-Quality Contrastive Data for Better Code Retrieval and Reranking (2025) arXiv
  • Issue2Test: Issue2Test: Generating Reproducing Test Cases from Issue Reports (2025) arXiv
  • KGCompass: Enhancing repository-level software repair via repository-aware knowledge graphs (2025) arXiv
  • CoSIL: Issue Localization via LLM-Driven Iterative Code Graph Searching (2025) arXiv
  • InfantAgent-Next: InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction (2025) arXiv
  • Co-PatcheR: Co-PatcheR: Collaborative Software Patching with Component(s)-specific Small Reasoning Models (2025) arXiv
  • SWERank: SweRank: Software Issue Localization with Code Ranking (2025) arXiv
  • Nemotron-CORTEXA: Nemotron-CORTEXA: Enhancing LLM Agents for Software Engineering Tasks via Improved Localization and Solution Diversity (2025) arXiv
  • LCLM: Putting It All into Context: Simplifying Agents with LCLMs (2025) arXiv
  • SACL: SACL: Understanding and Combating Textual Bias in Code Retrieval with Semantic-Augmented Reranking and Localization (2025) arXiv
  • SWE-Debate: SWE-Debate: Competitive Multi-Agent Debate for Software Issue Resolution (2025) arXiv
  • OpenHands-Versa: Coding Agents with Multimodal Browsing are Generalist Problem Solvers (2025) arXiv
  • Repeton: Repeton: Structured Bug Repair with ReAct-Guided Patch-and-Test Cycles (2025) arXiv
  • cAST: cAST: Enhancing Code Retrieval-Augmented Generation with Structural Chunking via Abstract Syntax Tree (2025) arXiv
  • Prometheus: Prometheus: Unified Knowledge Graphs for Issue Resolution in Multilingual Codebases (2025) arXiv
  • Git Context Controller: Git Context Controller: Manage the Context of LLM-based Agents like Git (2025) arXiv
  • Trae Agent: Trae Agent: An LLM-based Agent for Software Engineering with Test-time Scaling (2025) arXiv
  • TestPrune: When Old Meets New: Evaluating the Impact of Regression Tests on SWE Issue Resolution (2025) arXiv
  • e-Otter++: Execution-Feedback Driven Test Generation from SWE Issues (2025) arXiv
  • Meta-RAG: Meta-RAG on Large Codebases Using Code Summarization (2025) arXiv

🧠 Memory-Enhanced Methods

Systems with memory mechanisms

  • Infant Agent: Infant Agent: A Tool-Integrated, Logic-Driven Agent with Cost-Effective API Usage (2024) arXiv
  • EvoCoder: LLMs as Continuous Learners: Improving the Reproduction of Defective Code in Software Issues (2024) arXiv
  • Learn-by-interact: Learn-by-interact: A Data-Centric Framework For Self-Adaptive Agents in Realistic Environments (2025) arXiv
  • DGM: Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents (2025) arXiv
  • ExpeRepair: EXPEREPAIR: Dual-Memory Enhanced LLM-based Repository-Level Program Repair (2025) arXiv
  • Agent KB: Agent KB: Leveraging Cross-Domain Experience for Agentic Problem Solving (2025) arXiv
  • SWE-Exp: SWE-Exp: Experience-Driven Software Issue Resolution (2025) arXiv
  • RepoMem: Improving Code Localization with Repository Memory (2025) arXiv
  • ReasoningBank: ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory (2025) arXiv

📚 Supervised Fine-Tuning (SFT)

Models trained via supervised learning

  • Lingma SWE-GPT: Lingma SWE-GPT: An Open Development-Process-Centric Language Model for Automated Software Improvement (2024) arXiv
  • Scaling data collection: Scaling Data Collection for Training SWE Agents (2024)
  • CodeXEmbed: CodeXEmbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval (2025) arXiv
  • SWE-Gym: Training Software Engineering Agents and Verifiers with SWE-Gym (2025) arXiv
  • TSP: Think-Search-Patch: A Retrieval-Augmented Reasoning Framework for Repository-Level Code Repair (2025) arXiv GitHub
  • Co-PatcheR: Co-PatcheR: Collaborative Software Patching with Component(s)-specific Small Reasoning Models (2025) arXiv
  • SWE-Swiss: SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution (2025)
  • Devstral: Devstral: Fine-tuning Language Models for Coding Agent Applications (2025) arXiv
  • Kimi-Dev: Kimi-Dev: Agentless Training as Skill Prior for SWE-Agents (2025) arXiv
  • SWE-Compressor: Context as a Tool: Context Management for Long-Horizon SWE-Agents (2025) arXiv

🎮 Reinforcement Learning (RL)

Models trained via reinforcement learning

  • SWE-RL: SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution (2025) arXiv
  • SoRFT: SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning (2025) arXiv
  • SEAlign: SEAlign: Alignment Training for Software Engineering Agent (2025) arXiv
  • SWE-Dev1: SWE-Dev: Evaluating and Training Autonomous Feature-Driven Software Development (2025) arXiv
  • Satori-SWE: Satori-SWE: Evolutionary Test-Time Scaling for Sample-Efficient Software Engineering (2025) arXiv
  • Agent-RLVR: Agent-RLVR: Training Software Engineering Agents via Guidance and Environment Rewards (2025) arXiv
  • DeepSWE: DeepSWE: Training a State-of-the-Art Coding Agent from Scratch by Scaling RL (2025)
  • SWE-Dev2: SWE-Dev: Building Software Engineering Agents with Training and Inference Scaling (2025) arXiv
  • SWE-Swiss: SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution (2025)
  • SeamlessFlow: SeamlessFlow: A Trainer Agent Isolation RL Framework Achieving Bubble-Free Pipelines via Tag Scheduling (2025) arXiv
  • DAPO: Training Long-Context, Multi-Turn Software Engineering Agents with Reinforcement Learning (2025) arXiv
  • Kimi-Dev: Kimi-Dev: Agentless Training as Skill Prior for SWE-Agents (2025) arXiv
  • FoldGRPO: Scaling Long-Horizon LLM Agent via Context-Folding (2025) arXiv
  • GRPO-based Method: A Practitioner's Guide to Multi-turn Agentic Reinforcement Learning (2025) arXiv
  • Self-play SWE-RL: Toward Training Superintelligent Software Agents through Self-Play SWE-RL (2025) arXiv
  • SWE-RM: SWE-RM: Execution-free Feedback For Software Engineering Agents (2025) arXiv

⚡ Inference-Time Scaling

Methods for scaling at inference time

  • SWE-Search: SWE-Search: Enhancing Software Agents with Monte Carlo Tree Search and Iterative Refinement (2025) arXiv
  • CodeMonkeys: CodeMonkeys: Scaling Test-Time Compute for Software Engineering (2025) arXiv
  • SWE-PRM: When Agents go Astray: Course-Correcting SWE Agents with PRMs (2025) arXiv
  • ReasoningBank: ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory (2025) arXiv

📥 Data Collection Methods

Techniques for collecting training data

  • SWE-rebench: SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents (2025) arXiv
  • RepoLaunch: SWE-bench Goes Live! (2025) arXiv
  • SWE-Factory: SWE-Factory: Your Automated Factory for Issue Resolution Training Data and Evaluation Benchmarks (2025) arXiv
  • SWE-MERA: SWE-MERA: A Dynamic Benchmark for Agenticly Evaluating Large Language Models on Software Engineering Tasks (2025) arXiv
  • RepoForge: RepoForge: Training a SOTA Fast-thinking SWE Agent with an End-to-End Data Curation Pipeline Synergizing SFT and RL at Scale (2025) arXiv
  • Multi-Docker-Eval: Multi-Docker-Eval: A 'Shovel of the Gold Rush' Benchmark on Automatic Environment Building for Software Engineering (2025) arXiv

🔬 Data Synthesis Methods

Approaches for synthetic data generation

  • Learn-by-interact: Learn-by-interact: A Data-Centric Framework For Self-Adaptive Agents in Realistic Environments (2025) arXiv
  • R2E-Gym: R2E-Gym: Procedural Environments and Hybrid Verifiers for Scaling Open-Weights SWE Agents (2025) arXiv
  • SWE-Synth: SWE-Synth: Synthesizing Verifiable Bug-Fix Data to Enable Large Language Models in Resolving Real-World Bugs (2025) arXiv
  • SWE-smith: SWE-smith: Scaling Data for Software Engineering Agents (2025) arXiv
  • SWE-Flow: SWE-Flow: Synthesizing Software Engineering Data in a Test-Driven Manner (2025) arXiv
  • SWE-Mirror: SWE-Mirror: Scaling Issue-Resolving Datasets by Mirroring Issues Across Repositories (2025) arXiv

📈 Data Analysis

Analysis of datasets and benchmarks

  • SWE-bench Verified: Introducing SWE-bench Verified | OpenAI (2024) arXiv
  • SWE-Bench+: SWE-Bench+: Enhanced Coding Benchmark for LLMs (2024) arXiv
  • Patch Correctness: Are "Solved Issues" in SWE-bench Really Solved Correctly? An Empirical Study (2025) arXiv
  • UTBoost: UTBoost: Rigorous Evaluation of Coding Agents on SWE-Bench (2025) arXiv
  • Trustworthiness: Is Your Automated Software Engineer Trustworthy? (2025) arXiv
  • Rigorous agentic benchmarks: Establishing Best Practices for Building Rigorous Agentic Benchmarks (2025) arXiv
  • The SWE-Bench Illusion: The SWE-Bench Illusion: When State-of-the-Art LLMs Remember Instead of Reason (2025) arXiv
  • Revisiting SWE-Bench: Revisiting SWE-Bench: On the Importance of Data Quality for LLM-Based Code Models (2025)
  • SPICE: SPICE: An Automated SWE-Bench Labeling Pipeline for Issue Clarity, Test Coverage, and Effort Estimation (2025) arXiv
  • Data contamination: Does SWE-Bench-Verified Test Agent Ability or Model Memory? (2025) arXiv

🔍 Methods Analysis

Comparative analysis of different methods

  • Context Retrieval: On The Importance of Reasoning for Context Retrieval in Repository-Level Code Editing (2024) arXiv
  • Evaluating software development agents: Evaluating Software Development Agents: Patch Patterns, Code Quality, and Issue Complexity in Real-World GitHub Scenarios (2025) arXiv
  • Overthinking: The Danger of Overthinking: Examining the Reasoning-Action Dilemma in Agentic Tasks (2025) arXiv
  • Beyond final code: Beyond Final Code: A Process-Oriented Error Analysis of Software Development Agents in Real-World GitHub Scenarios (2025) arXiv
  • GSO: GSO: Challenging Software Optimization Tasks for Evaluating SWE-Agents (2025) arXiv
  • Dissecting the SWE-Bench Leaderboards: Dissecting the SWE-Bench Leaderboards: Profiling Submitters and Architectures of LLM- and Agent-Based Repair Systems (2025) arXiv
  • Security analysis: Are AI-Generated Fixes Secure? Analyzing LLM and Agent Patches on SWE-bench (2025) arXiv
  • Failures analysis: An Empirical Study on Failures in Automated Issue Solving (2025) arXiv
  • SeaView: SeaView: Software Engineering Agent Visual Interface for Enhanced Workflow (2025) arXiv
  • SWEnergy: SWEnergy: An Empirical Study on Energy Efficiency in Agentic Issue Resolution Frameworks with SLMs (2025) arXiv

🤝 Contributing

We welcome contributions to this survey! If you'd like to add new papers or fix errors:

🚀 Quick Add (Recommended)

Use our interactive scripts to add papers easily:

Windows:

add_paper.bat

Linux/Mac:

chmod +x add_paper.sh
./add_paper.sh

Or use Python directly (cross-platform):

python scripts/add_paper.py

The script will guide you through:

  1. Selecting a category
  2. Entering paper information (title, authors, links, etc.)
  3. Automatically saving to the correct YAML file

📝 Manual Process

  1. Fork this repository
  2. Add paper entries in the corresponding YAML file under data/ directory (e.g., papers_evaluation_datasets.yaml, papers_single_agent.yaml, etc.)
  3. Follow the existing format with fields: short_name, title, authors, venue, year, and links (arxiv, github, huggingface); a sample entry is sketched after this list
  4. Run python scripts/sync_readme.py to update the README.md
  5. Run python scripts/render_papers.py to update the documentation website
  6. Submit a PR with your changes
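
For illustration, an entry built from the fields listed in step 3 might look like the sketch below. The nested links layout is our assumption; mirror an existing entry under data/ for the authoritative schema.

# Illustrative YAML entry only; field names come from step 3 above,
# but the nested links mapping is assumed, so check data/ before submitting.
- short_name: SWE-bench
  title: "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?"
  authors: Carlos E. Jimenez et al.
  venue: ICLR
  year: 2024
  links:
    arxiv: https://arxiv.org/abs/2310.06770
    github: https://github.com/princeton-nlp/SWE-bench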

📖 Detailed instructions: See scripts/README_ADD_PAPER.md or QUICK_START.md


📄 Citation

If you use this project or related survey in your research or system, please cite the following BibTeX:

@misc{li2025awesome_issue_resolution,
    title       = {Advances and Frontiers of LLM-based Issue Resolution in Software Engineering: A Comprehensive Survey},
    author      = {Caihua Li and Lianghong Guo and Yanlin Wang and Daya Guo and Wei Tao and Zhenyu Shan and Mingwei Liu and Jiachi Chen and Haoyu Song and Duyu Tang and Hongyu Zhang and Zibin Zheng},
    year        = {2025},
    howpublished = {\url{https://github.com/DeepSoftwareAnalytics/Awesome-Issue-Resolution}}
}

Once the survey is published on arXiv or at a conference, please replace this entry with the official citation information (authors, DOI/arXiv ID, conference name, etc.).


📬 Contact

If you have any questions or suggestions, please open an issue on this repository or reach out to the maintainers.


📜 License

This project is licensed under the MIT License - see the LICENSE file for details.


⭐ Star this repository if you find it helpful!

Made with ❤️ by the DeepSoftwareAnalytics team

Documentation | Paper | Tables | About | Cite
