A European HPC-centric Benchmark Framework
HORIZON Research and Innovation Actions
Basic Information
- Identifier
- HORIZON-EUROHPC-JU-2024-BENCHMARK-05
- Programme
- A European HPC-centric Benchmark Framework
- Programme Period
- 2021 - 2027
- Status
- Open
- Opening Date
- December 11, 2025
- Deadline
- March 24, 2026
- Deadline Model
- single-stage
- Budget
- €2,000,000
- Min Grant Amount
- €500,000
- Max Grant Amount
- €1,000,000
- Expected Number of Grants
- 1
- Keywords
- High performance computing
Description
The Project is expected to contribute to the following outcomes:
- Enhanced decision-making through comprehensive system comparisons that improve the procurement process for exascale and post-exascale supercomputers and for supercomputers with dedicated AI capabilities. This will enable more informed choices regarding the acquisition of new systems and upgrades of existing ones
- HPC application developers and end-users who are competent in selecting the systems that best meet their needs, balancing quality factors such as accuracy against cost considerations such as time-to-solution
- Overall improved operation and fine-tuning of HPC and HPC-AI systems leading to improved performance, throughput and energy optimization, and improved end-user experience
- A unified, extensible and well-documented benchmarking framework to easily accommodate new, community-contributed benchmarks with common standards, versioning and control
- A well-maintained and continuously updated benchmarking suite for exascale and post-exascale HPC, including a set of applications as well as AI models.
A. Deployment of a benchmarking framework for designing, developing and executing exascale HPC and HPC-AI benchmarks. The envisioned benchmarking framework will:
- offer a fine-grained and fair comparison methodology across different HPC systems, i.e. all benchmarks, benchmark run rules[1] and benchmark submission rules must be designed to ensure reproducibility, repeatability and replicability of metrics on the same system, as well as fairness and comparability of metrics across different systems
- define precise performance metrics with a clear focus on energy-related performance indicators
- standardise all benchmarking input- and output formats
- collect and report all benchmarking results while offering statistically sound result analyses
- ensure that all benchmarks are executable on the respective target environment(s)
- offer a standardized structured workflow capturing and streamlining the entire benchmarking process
- offer a standardised repository with transparent version control
- provide a reference implementation for each benchmark
- use a EuroHPC reference system, where applicable, to normalize the performance metrics produced by the benchmarking suite, i.e. each benchmark is run and measured on this system to establish a reference value for that benchmark[2]; the normalized performance is then the quotient of the performance value attained on the EuroHPC reference machine and the one attained on the system under test (see the formula after this list)
- be of production quality, ready to assess all EuroHPC supercomputers and supercomputers with AI capabilities
- provide all required templates with relevant input data to properly execute the benchmarking suite on every EuroHPC system.
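To make the normalization rule concrete, the arithmetic can be written as follows (a sketch only; the symbols P_ref and P_SUT are illustrative shorthand, not notation defined by the call):

```latex
% Normalized performance: the quotient of the value attained on the
% EuroHPC reference machine (P_ref) and the one attained on the
% system under test (P_SUT).
\[
  P_{\mathrm{norm}}(\mathrm{SUT}) \;=\; \frac{P_{\mathrm{ref}}}{P_{\mathrm{SUT}}}
\]
% Footnote [2]'s invariance requirement follows directly: for two
% systems A and B compared with any common reference R,
\[
  \frac{P_{\mathrm{norm}}(A)}{P_{\mathrm{norm}}(B)}
    \;=\; \frac{P_{\mathrm{ref}}/P_{A}}{P_{\mathrm{ref}}/P_{B}}
    \;=\; \frac{P_{B}}{P_{A}},
\]
% which is independent of the reference machine chosen.
```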
The benchmarking framework, along with its workflows, will be realised in a software implementation that offers the end-user a dynamic workspace for the entire workflow.
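A minimal sketch of what a run record in such a workspace could capture, assuming hypothetical names throughout (BenchmarkRun, report and all field names are illustrative, not an interface defined by this call); it ties together the run-rule, versioning, statistics and normalization requirements listed above:

```python
"""Sketch of a benchmark run record for the envisioned workspace.
All identifiers are hypothetical illustrations, not a mandated API."""
from dataclasses import dataclass, field
from statistics import median, quantiles


@dataclass
class BenchmarkRun:
    benchmark: str            # benchmark identifier, e.g. "hpcg"
    benchmark_version: str    # version from the standardised repository
    system: str               # system under test
    run_rules_id: str         # which run rules [1] this execution followed
    input_dataset: str        # standardised input, for repeatability
    measurements: list[float] = field(default_factory=list)  # one value per repeat


def report(run: BenchmarkRun, reference_value: float) -> dict:
    """Statistically sound summary: median and interquartile range over
    repeated executions, plus the normalized performance (quotient of the
    reference value and the median on the system under test)."""
    med = median(run.measurements)
    q1, _, q3 = quantiles(run.measurements, n=4)
    return {
        "benchmark": f"{run.benchmark}@{run.benchmark_version}",
        "system": run.system,
        "median": med,
        "iqr": q3 - q1,
        "normalized": reference_value / med,
    }


if __name__ == "__main__":
    run = BenchmarkRun("hpcg", "1.0.0", "system-under-test",
                       "rules-2026-03", "input-large",
                       measurements=[41.8, 42.1, 41.9, 42.4, 42.0])
    print(report(run, reference_value=40.0))
```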
B. Establishing a comprehensive exascale HPC and HPC-AI benchmarking suite utilizing the framework developed in the first objective. This benchmarking suite, with its associated performance metrics, will be designed to measure and assess the performance of HPC, as well as HPC-AI[3] systems at various levels of granularity, encompassing:
- Microbenchmarks: Microbenchmarks focus on small or very small building blocks of real programs. They are typically characterized by a narrow focus on a single subsystem and are used by component developers or system integrators to assess the performance of, and optimize, specific parts of the system, e.g. the memory subsystem or the interconnect. Examples include dense and sparse linear algebra operations (including tensor operations), spectral methods, n-body methods, (un)structured grid methods and others (a toy microbenchmark sketch follows this list).
- Application and workflow benchmarks: Application benchmarks are used for measuring the performance of a system under typical user workloads. Applications are comprehensive and have a broad focus, covering multiple components and their interactions. They are used by end-users, system administrators and procurement authorities who need to evaluate overall system performance and compare different systems for their specific purposes: system selection, system optimization or system procurement. Examples include CFD, molecular dynamics simulation, numerical weather prediction, atomic-scale materials modelling, AI model training and serving/inference, among others. Note that the concept of an application benchmark encompasses real application benchmarks and their synthetic flavours, proxy applications, mini-apps, kernels and the like. Workflow benchmarks go beyond application benchmarks by accounting for the system-performance effects of the data-flow and control-flow complexities of integrated scientific workflows. These workflows couple computational and data-manipulation steps across simulation and modelling, end-to-end AI workflows, and high-performance data analytics.
- System benchmarks: System benchmarks offer a comprehensive system performance assessment under conditions where multiple, diverse workloads are concurrently executed and orchestrated by job schedulers and workload managers, reflecting a realistic, multi-user production environment. This involves running a curated portfolio of applications and is used by system administrators for optimizing the performance of schedulers and workload managers, and by procurement authorities to assist in system procurement decision-making. An example is running an ensemble of large multimodal AI model training jobs simultaneously with large-scale lattice Boltzmann simulations.
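For concreteness, here is a toy memory-subsystem microbenchmark in the spirit of the first category above (illustrative only; a production suite would rely on tuned kernels such as STREAM rather than NumPy):

```python
"""Toy memory-bandwidth microbenchmark (illustration of the
microbenchmark category; not one of the benchmarks this call mandates)."""
import time
import numpy as np

N = 50_000_000   # array length; the scale parameter for this kernel
REPEATS = 5

a = np.ones(N, dtype=np.float64)
b = np.empty_like(a)

timings = []
for _ in range(REPEATS):
    t0 = time.perf_counter()
    np.copyto(b, a)                     # copy kernel: reads a, writes b
    timings.append(time.perf_counter() - t0)

# The copy moves 2 * 8 * N bytes (one read stream, one write stream).
best = min(timings)
bandwidth_gbs = 2 * 8 * N / best / 1e9
print(f"copy bandwidth: {bandwidth_gbs:.1f} GB/s (best of {REPEATS})")
```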
The envisioned benchmarking suite is expected to:
- be generally hardware agnostic
- provide documentation for developers and end-users
- catalogue well-established benchmarks of both technical areas
- continuously update the portfolio with novel benchmarks of both technical areas
- ensure that each benchmark produces at least one metric; examples are time-to-solution (under a quality constraint; a sketch of such a quality-gated metric follows this list), throughput or utilization
- define reliable and appropriate common metrics to compare the different architectures based on pre-defined criteria (e.g. efficiency)
- ensure that all benchmarks and associated metrics comprehensively cover all relevant workloads and performance aspects, meeting the diverse needs of the European HPC-AI community in a future-proof manner
- offer a comprehensive coverage of contemporary and upcoming architectures, utilizing current representative and upcoming workloads from the HPC and HPC-AI domains
- be application oriented, reflecting actual use patterns, use cases and diverse workloads in both technical areas (exascale HPC as well as HPC-AI), ensuring that the genuine capabilities and limitations of each system are well captured
- ensure the scalability of each benchmark by identifying relevant scale parameters[4].
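As referenced in the metrics item above, here is a hedged sketch of a "time-to-solution under a quality constraint" metric; all names (QualityGatedResult, metric, the threshold value) are hypothetical illustrations rather than definitions from the call:

```python
"""Sketch of a time-to-solution metric gated by a quality constraint.
All identifiers are hypothetical."""
from dataclasses import dataclass


@dataclass
class QualityGatedResult:
    elapsed_seconds: float    # measured time-to-solution
    quality: float            # e.g. validation accuracy, residual norm
    quality_threshold: float  # constraint the run must satisfy to count

    def metric(self) -> float | None:
        """Return the time-to-solution only if the quality constraint
        holds; an invalid run yields no metric rather than a misleading one."""
        if self.quality >= self.quality_threshold:
            return self.elapsed_seconds
        return None


# Example: an AI training benchmark counts only runs reaching 75% accuracy.
run = QualityGatedResult(elapsed_seconds=5421.0, quality=0.762,
                         quality_threshold=0.75)
print(run.metric())   # 5421.0 (valid); None if accuracy had fallen short
```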
Proposals should provide a thorough justification for the selection of each benchmark and performance metric, clearly explaining how they align with the specific requirements and priorities of the European HPC-AI landscape. The inclusion or integration of existing benchmarks under the umbrella of this initiative is encouraged, provided there are prior agreements with the benchmark owners and compatibility with licensing conditions.
Proposals must outline a strategy for ensuring the sustainability and availability of the benchmarking suite beyond the duration of the action, specifically focusing on how to transform it into a community-driven effort. The proposal must also outline a clear IP plan targeting industry needs.
The consortium will actively coordinate with international collaborators to establish common and objective benchmarking standards.
The project will also propose and maintain a detailed strategic development roadmap for the action, which:
- anticipates future developments in HPC, including emerging technologies and prospective AI models
- identifies and addresses novel opportunities for exascale systems with a clear focus on energy efficiency
- foresees hardware-agnostic (ARM, x86, RISC-V) and hardware-inclusive (processors, accelerators and hybrid systems) support for heterogeneous systems
The consortium will actively engage with industry and research communities through workshops, working groups, and feedback loops to receive continuous feedback ensuring that all benchmarks are relevant and up to date.
Requirements:
- The proposal will eliminate duplication of effort by building on existing European benchmarking efforts and initiatives in HPC, such as the Unified European Application Benchmark Suite (UEABS). Each proposal is expected to outline a strategy for aligning with these initiatives and incorporating their results
- The benchmark suite must specify a workload in an implementation independent way
- Define a dataset and quality criteria:
- Detail benchmark specifications to test current and future supercomputing systems against state-of-the-art (SOTA) metrics
- Address relevant metrics of the target system per technical area, i.e. performance, scalability, resource utilization and energy efficiency, extending these where necessary
- Contribute to relevant standardization efforts, including security standards such as ISO/IEC 27001, and AI standards such as ISO/IEC 22989 and ISO/IEC 23053
- Define a methodology to deal with legacy applications and legacy systems.
- Define HPC utilization metrics including breakdown by benchmarking area (microbenchmarks, application benchmarks, mixed-workload benchmarks) and corresponding qualitative and quantitative KPIs to drive the development towards the objectives:
- Define effective KPIs between the different benchmarking areas
- Collect and analyse user feedback to evaluate how the benchmark suite efficiently and fairly compares diverse systems
- Define a mechanism to monitor the benchmarking framework and pool appropriate existing benchmark suites, relevant for architectures of all participating HPC centres for deployment in a common data repository:
- The developed automation framework together with the benchmarks will be onboarded to a common software repository created within other EuroHPC initiatives
- Enable continuous improvement, e.g. within an automated integration and testing workflow for the benchmark suite and framework repository, with appropriate tools, including version tracking of the benchmarks (where applicable including the data sets, build infrastructure, etc.)
- Define a mechanism for extending the benchmark suite: identification, selection, and standardisation of future relevant benchmarks, governance
- Extensive user documentation must be prepared and deemed sufficient by the users to effectively understand and use the benchmark suite.
- The consortium should demonstrate complementary expertise regarding the two main technical areas/key topics that make up the modular, layered benchmark framework
- The benchmarking framework and the encompassing benchmarking suite will be made available to the user communities under the European Union Public Licence (EUPL)
- The benchmarking framework will be defined through a consensus among stakeholders representing the HPC and HPC-AI communities, ensuring alignment with their diverse needs. This collaborative approach will establish a single point of agreement, providing a unified standard that accommodates the evolving landscape of high-performance computing and its related fields
- All technical and legal aspects should already be addressed at the proposal stage and not deferred to a later time or the consortium agreement. Where required, an appropriate modification of, e.g., the general terms and conditions for users of supercomputers should be elaborated and implemented by the participating HPC operators.
[1] Run rules define required and forbidden hardware, software, optimization, tuning, and procedures.
[2] When two different systems are compared with the same benchmark, their performance relative to each other must be invariant, even if different reference machines are used.
[3] We shall refer to conventional HPC and HPC-AI systems and benchmarks collectively as HPC-AI systems and benchmarks.
[4] For example, the scale parameter for an FFT benchmark is the window size and the scale parameters for AI model training applications include the size of the dataset, model size, and, in some cases, the number of models being trained simultaneously (e.g., in bagging scenarios).
Eligibility & Conditions
General conditions
1. Admissibility Conditions: Proposal page limit and layout
The conditions are described in the General Annex A of the Horizon Europe Work Programme 2023-2025.
The page limit of the application is 70 pages.
2. Eligible Countries
The criteria are described in Annex B of the Work Programme General Annexes.
A number of non-EU/non-Associated Countries that are not automatically eligible for funding have made specific provisions for making funding available for their participants in Horizon Europe projects. See the information in the Horizon Europe Programme Guide.
3. Other Eligible Conditions
The conditions are described in the General Annex B of the Horizon Europe Work Programme 2023-2025.
A number of non-EU/non-Associated Countries that are not automatically eligible for funding have made specific provisions for making funding available for their participants in Horizon Europe projects. See the information in the Horizon Europe Programme Guide.
The following legal entities are eligible to participate
- National HPC centres
- Research and academic institutions focused on HPC
- Standardisation bodies
- Other public and private entities regularly procuring, operating or using significant HPC resources, if clearly explained and duly justified in the proposal, and provided no conflict of interest exists
Due to potential conflicts of interest, for-profit entities with business models around hardware and software for HPC are generally not eligible for participation.
4. Financial and operational capacity and exclusion
The criteria are described in Annex C of the Work Programme General Annexes.
5a. Evaluation and award: Award criteria, scoring and thresholds
The procedure is described in General Annex F of the Horizon Europe Work Programme 2023-2025.
The granting authority can fund a maximum of one project.
The criteria are described in Annex D of the Work Programme General Annexes.
5b. Evaluation and award: Submission and evaluation processes
The criteria are described in General Annex D of the Horizon Europe Work Programme 2023-2025.
5c. Evaluation and award: Indicative timeline for evaluation and grant agreement
The criteria are described in Annex F of the Work Programme General Annexes.
6. Legal and financial set-up of the grants
As an exception from General Annex G of the Horizon Europe Work Programme, the EU-funding rate for eligible costs in grants awarded by the JU for this topic will be up to 50% of the eligible costs.
The rules are described in General Annex G of the Horizon Europe Work Programme 2023-2025.
Specific conditions
The documents are described in General Annex E of the Horizon Europe Work Programme 2023-2025.
Application and evaluation forms and model grant agreement (MGA):
Application form templates — the application form specific to this call is available in the Submission System
Standard application form (HE RIA, IA)
Standard application form (HE RIA IA Stage 1)
Standard application form (HE CSA)
Standard application form (HE CSA Stage 1)
Standard application form (HE RI)
Standard application form (HE PCP)
Standard application form (HE PPI)
Standard application form (HE COFUND)
Standard application form (HE FPA)
Standard application form (HE MSCA DN)
Standard application form (HE MSCA PF)
Standard application form (HE MSCA SE)
Standard application form (HE MSCA COFUND)
Standard application form (HE ERC STG)
Standard application form (HE ERC COG)
Standard application form (HE ERC ADG)
Standard application form (HE ERC POC)
Standard application form (HE ERC SYG)
Standard application form (HE EIC PATHFINDER CHALLENGES)
Standard application form (HE EIC PATHFINDER OPEN)
Standard application form (HE EIC TRANSITION)
Evaluation form templates — will be used with the necessary adaptations
Standard evaluation form (HE RIA, IA)
Standard evaluation form (HE CSA)
Standard evaluation form (HE RIA, IA and CSA Stage 1)
Standard evaluation form (HE PCP PPI)
Standard evaluation form (HE COFUND)
Standard evaluation form (HE FPA)
Standard evaluation form (HE MSCA)
Standard evaluation form (HE EIC PATHFINDER CHALLENGES)
Standard evaluation form (HE EIC PATHFINDER OPEN)
Standard evaluation form (HE EIC TRANSITION)
Standard evaluation form (HE EIC Accelerator stage 1 - short proposal)
Standard evaluation form (HE EIC Accelerator stage 2 - full proposal)
Guidance
Model Grant Agreements (MGA)
Framework Partnership Agreement FPA
Call-specific instructions
Information on financial support to third parties (HE)
Additional documents:
HE Main Work Programme 2023–2025 – 1. General Introduction
HE Main Work Programme 2023–2025 – 2. Marie Skłodowska-Curie Actions
HE Main Work Programme 2023–2025 – 3. Research Infrastructures
HE Main Work Programme 2023–2025 – 4. Health
HE Main Work Programme 2023–2025 – 5. Culture, creativity and inclusive society
HE Main Work Programme 2023–2025 – 6. Civil Security for Society
HE Main Work Programme 2023–2025 – 7. Digital, Industry and Space
HE Main Work Programme 2023–2025 – 8. Climate, Energy and Mobility
HE Main Work Programme 2023–2025 – 10. European Innovation Ecosystems (EIE)
HE Main Work Programme 2023–2025 – 12. Missions
HE Main Work Programme 2023–2025 – 13. General Annexes
HE Framework Programme 2021/695
HE Specific Programme Decision 2021/764
EU Financial Regulation 2018/1046
Rules for Legal Entity Validation, LEAR Appointment and Financial Capacity Assessment
EU Grants AGA — Annotated Model Grant Agreement
Funding & Tenders Portal Online Manual
Support & Resources
Online Manual is your guide on the procedures from proposal submission to managing your grant.
Horizon Europe Programme Guide contains the detailed guidance to the structure, budget and political priorities of Horizon Europe.
Funding & Tenders Portal FAQ – find the answers to most frequently asked questions on submission of proposals, evaluation and grant management.
Research Enquiry Service – ask questions about any aspect of European research in general and the EU Research Framework Programmes in particular.
National Contact Points (NCPs) – get guidance, practical information and assistance on participation in Horizon Europe. There are also NCPs in many non-EU and non-associated countries (‘third-countries’).
Enterprise Europe Network – contact your EEN national contact for advice to businesses with special focus on SMEs. The support includes guidance on the EU research funding.
IT Helpdesk – contact the Funding & Tenders Portal IT helpdesk for questions such as forgotten passwords, access rights and roles, technical aspects of submission of proposals, etc.
European IPR Helpdesk assists you on intellectual property issues.
CEN-CENELEC Research Helpdesk and ETSI Research Helpdesk – the European Standards Organisations advise you how to tackle standardisation in your project proposal.
The European Charter for Researchers and the Code of Conduct for their recruitment – consult the general principles and requirements specifying the roles, responsibilities and entitlements of researchers, employers and funders of researchers.
Partner Search helps you find a partner organisation for your proposal.