
Luogu Advanced Competitive Programming Test (LACPT) is a comprehensive benchmark suite designed to evaluate an AI model's coding ability on high-difficulty algorithmic competition problems. LACPT aims to serve as a rigorous measure of a model's core capability to solve complex, non-standard programming problems, a capability widely regarded as a key component on the path to Artificial General Intelligence (AGI).
Authors: Shihao Ji, Zihui Song, Sumo Yu
Organizations: Luogu, Tengzhou No.1 High School
# Install dependencies
pip install -r requirements.txt

# Configure API keys
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"

# Run the full evaluation with a specific model
python scripts/eval/run_evaluation.py --model openai --model-name gpt-4o

# Evaluate only selected problems
python scripts/eval/run_evaluation.py --model openai --problems a_plus_b fibonacci

# Run with the AI generator enabled
python scripts/eval/run_evaluation.py --model openai --use-ai-generator

# Minimal quick-start example
python examples/quick_start.py
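To compare several models in one go, the CLI above can be wrapped in a short script. The sketch below uses only the flags shown in the commands above; the model list and output handling are illustrative and not part of LACPT itself.

```python
import subprocess

# Illustrative list of models to compare; adjust to whatever you have access to.
MODELS = ["gpt-4o", "gpt-4o-mini"]

for model_name in MODELS:
    # Invokes the evaluation CLI documented above; --model and --model-name
    # are the flags shown in the quick-start commands.
    result = subprocess.run(
        [
            "python", "scripts/eval/run_evaluation.py",
            "--model", "openai",
            "--model-name", model_name,
        ],
        capture_output=True,
        text=True,
    )
    print(f"=== {model_name} ===")
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr)
```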
{
  "problem_id": "unique_id",
  "title": "Problem Title",
  "difficulty": "easy|medium|hard",
  "tags": ["tag1", "tag2"],
  "problem_statement": "Markdown format description",
  "input_file": "input.txt",
  "output_file": "output.txt",
  "time_limit": 1000,
  "memory_limit": 256,
  "reference_solution": {
    "language": "cpp|python",
    "code": "Reference code"
  }
}
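For a concrete picture of the schema, the sketch below builds a hypothetical configuration for the a_plus_b problem mentioned in the commands above and writes it to a config.json file. All field values are invented for illustration, the output filename is an assumption (this section does not state what the harness expects), and time_limit / memory_limit are assumed to be in milliseconds and megabytes.

```python
import json

# Hypothetical configuration for the a_plus_b problem; every field follows the
# schema above, but the concrete values (statement, limits, reference code)
# are illustrative only.
config = {
    "problem_id": "a_plus_b",
    "title": "A + B",
    "difficulty": "easy",
    "tags": ["math", "implementation"],
    "problem_statement": "Read two integers `a` and `b` and print their sum.",
    "input_file": "input.txt",
    "output_file": "output.txt",
    "time_limit": 1000,   # assumed to be milliseconds
    "memory_limit": 256,  # assumed to be megabytes
    "reference_solution": {
        "language": "python",
        "code": "a, b = map(int, input().split())\nprint(a + b)",
    },
}

# Assumed filename; adjust to wherever the harness reads problem configs from.
with open("config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)
```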
test_cases.json (optional)
{
  "problem_id": "unique_id",
  "test_cases": [
    {
      "input": "test input",
      "expected_output": "expected output",
      "timeout": 5
    }
  ]
}
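If you maintain test_cases.json files by hand, a quick sanity check like the one sketched below can catch missing fields early. It assumes the layout above and treats the timeout value as seconds with a default of 5; both assumptions are illustrative rather than part of the LACPT specification.

```python
import json

REQUIRED_KEYS = {"input", "expected_output"}

# Light sanity check for a hand-written test_cases.json following the schema above.
with open("test_cases.json", encoding="utf-8") as f:
    data = json.load(f)

for i, case in enumerate(data["test_cases"]):
    missing = REQUIRED_KEYS - set(case)
    if missing:
        raise ValueError(f"test case {i} is missing fields: {sorted(missing)}")
    timeout = case.get("timeout", 5)  # assumed default, in seconds
    print(f"case {i}: ok (timeout {timeout}s)")
```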