Luogu Advanced Competitive Programming Test (LACPT)
Shihao Ji, Zihui Song, Sumo Yu
Tengzhou No.1 High School · Luogu
A Benchmark for Advanced Algorithmic Coding Abilities
Due to copyright restrictions, we do not provide the problem data for now; we are in contact with Luogu to obtain the rights to the problem statements. In the meantime, test cases can be generated by LLMs.

Introduction

Luogu Advanced Competitive Programming Test (LACPT) is a comprehensive benchmark suite designed to evaluate AI coding ability on high-difficulty algorithmic competition problems. LACPT aims to serve as a rigorous benchmark for measuring an AI system's core capability to solve complex, non-standard programming problems, a skill widely regarded as a key component of Artificial General Intelligence (AGI).

Authors

Authors: Shihao Ji, Zihui Song, Sumo Yu

Organizations: Luogu, Tengzhou No.1 High School

Project Structure

LACPT/
├── 📁 src/                          # Core source code
│   ├── 📁 judge/                    # Judging module
│   │   ├── __init__.py
│   │   └── local_judge.py           # Local judge
│   ├── 📁 prompts/                  # Prompt templates
│   │   ├── __init__.py
│   │   └── competitive_programming.py
│   ├── 📁 generator/                # Test case generation
│   │   ├── __init__.py
│   │   └── test_case_generator.py   # AI test case generator
│   ├── 📁 evaluator/                # Evaluation pipeline
│   │   ├── __init__.py
│   │   ├── evaluator.py             # Main evaluator
│   │   └── model_interface.py       # Model interface
│   └── __init__.py
├── 📁 data/                         # Data directory
│   └── 📁 problems/                 # Problem data
│       ├── 📁 a_plus_b/             # A+B problem
│       │   ├── problem.json         # Problem description
│       │   └── test_cases.json      # Test cases
│       └── 📁 fibonacci/            # Fibonacci problem
│           └── problem.json
├── 📁 scripts/                      # Script tools
│   └── 📁 eval/
│       └── run_evaluation.py        # Evaluation script
├── 📁 examples/                     # Usage examples
│   └── quick_start.py               # Quick start example
├── 📄 requirements.txt              # Dependencies
└── 📄 README.md                     # Project documentation
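
The module APIs are not documented here, but as an illustration of how the AI test case generator in src/generator/ might be driven, here is a minimal sketch. The generate_test_cases function, its prompt, and the assumption that the model returns bare JSON are all hypothetical, not the project's real interface:

# Hypothetical sketch of AI-driven test case generation; names and prompt
# are assumptions, not taken from src/generator/test_case_generator.py.
import json
from openai import OpenAI

def generate_test_cases(problem_statement: str, n: int = 5) -> list[dict]:
    """Ask an LLM to propose test cases for a problem statement."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Generate {n} test cases as a JSON list of objects with "
                f"'input' and 'expected_output' keys for this problem:\n"
                f"{problem_statement}"
            ),
        }],
    )
    # Assumes the model replies with nothing but the JSON list.
    return json.loads(response.choices[0].message.content)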

Quick Start

1. Install dependencies
pip install -r requirements.txt

2. Set API key
export OPENAI_API_KEY="your_openai_api_key"
or
export ANTHROPIC_API_KEY="your_anthropic_api_key"

3. Run evaluation
# Evaluate a model on all problems
python scripts/eval/run_evaluation.py --model openai --model-name gpt-4o
# Evaluate on specific problems only
python scripts/eval/run_evaluation.py --model openai --problems a_plus_b fibonacci
# Generate missing test cases with the AI test case generator
python scripts/eval/run_evaluation.py --model openai --use-ai-generator

4. Quick example
python examples/quick_start.py
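
examples/quick_start.py is the fastest way to see the pipeline end to end. Its exact contents are not reproduced here; a minimal programmatic sketch along the same lines, with the Evaluator class and its evaluate signature assumed from the module layout above rather than taken from the project's documented API, might look like:

# Hypothetical programmatic usage; class and method names are assumptions
# inferred from src/evaluator/, not the project's documented API.
from src.evaluator.evaluator import Evaluator

evaluator = Evaluator(model="openai", model_name="gpt-4o")
results = evaluator.evaluate(problems=["a_plus_b", "fibonacci"])
for problem_id, verdict in results.items():
    print(problem_id, verdict)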

Problem Data Format

Each problem contains the following files:
problem.json
{
  "problem_id": "unique_id",
  "title": "Problem Title",
  "difficulty": "easy|medium|hard",
  "tags": ["tag1", "tag2"],
  "problem_statement": "Markdown format description",
  "input_file": "input.txt",
  "output_file": "output.txt",
  "time_limit": 1000,
  "memory_limit": 256,
  "reference_solution": {
    "language": "cpp|python",
    "code": "Reference code"
  }
}
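
The typical values 1000 and 256 suggest time_limit is in milliseconds and memory_limit in megabytes, though the schema does not state this. A minimal loader that checks the required fields might look like the following sketch (not the project's actual loading code):

# Sketch of loading and validating problem.json; field set is taken from
# the schema above, the loader itself is hypothetical.
import json

REQUIRED_FIELDS = {"problem_id", "title", "difficulty", "problem_statement",
                   "time_limit", "memory_limit", "reference_solution"}

def load_problem(path: str) -> dict:
    """Load problem.json and verify the fields the schema above requires."""
    with open(path, encoding="utf-8") as f:
        problem = json.load(f)
    missing = REQUIRED_FIELDS - problem.keys()
    if missing:
        raise ValueError(f"{path} is missing fields: {sorted(missing)}")
    return problem

problem = load_problem("data/problems/a_plus_b/problem.json")
print(problem["title"], problem["time_limit"], "ms")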
test_cases.json (optional)
{
  "problem_id": "unique_id",
  "test_cases": [
    {
      "input": "test input",
      "expected_output": "expected output",
      "timeout": 5
    }
  ]
}
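
For intuition, a bare-bones judge in the spirit of src/judge/local_judge.py could run a candidate Python solution against these cases and compare whitespace-trimmed output. This is a sketch under assumed conventions; the real local_judge.py may differ:

# Hypothetical minimal judge: runs a Python solution on each test case
# and passes iff every trimmed output matches the expected output.
import json
import subprocess
import sys

def judge(solution_path: str, cases_path: str) -> bool:
    with open(cases_path, encoding="utf-8") as f:
        cases = json.load(f)["test_cases"]
    for case in cases:
        result = subprocess.run(
            [sys.executable, solution_path],
            input=case["input"],
            capture_output=True,
            text=True,
            timeout=case.get("timeout", 5),
        )
        if result.stdout.strip() != case["expected_output"].strip():
            return False
    return True

print(judge("solution.py", "data/problems/a_plus_b/test_cases.json"))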

Capabilities Assessed

Supported Models

Intended Use

Limitations and Considerations

Citation

@misc{luogu_llm_research_2025,
  author    = {Luogu LLM Research},
  title     = {LACPT},
  year      = {2025},
  url       = {https://huggingface.co/datasets/luogu-llm-research/LACPT},
  publisher = {Hugging Face}
}
© 2024 LACPT. All rights reserved.