
Certified AI/ML Pentester
(C-AI/MLPen)
The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level exam designed to test a candidate’s knowledge of the core concepts involving AI/ML security. If you are passionate about identifying and exploiting potential security risks when deploying and managing Large Language Models (LLMs), then this one’s for you!
- Practical
- 4 Hours
- Online
- On-demand
- Real world pentesting scenarios
£250
Equivalent Industry Certifications*
*Note: We are not affiliated with any of the certifications mentioned here. These are respected industry certifications, and referenced here to show how our Certified AI/ML Pentester (C-AI/MLPen) exam’s syllabus/difficulty overlaps with these exams.
If you already hold any of these, you’re likely well-prepared to test your knowledge with our exam. If you’re preparing for one, our exam is a great way to test your progress.
Our Candidates Say it Best
Who should take the exam?
C-AI/MLPen is intended to be taken by pentesters, application security architects, SOC analysts, red and blue team members, AI/ML engineers and any AI/ML security enthusiast, who wants to evaluate and advance their knowledge.
What is the format of the exam?
C-AI/MLPen is an intense 4 hour long practical exam. It requires candidates to solve a number of challenges, identify and exploit various vulnerabilities and obtain flags. The exam can be taken online, anytime (on-demand) and from anywhere. Candidates will need to connect to the exam VPN server to access the vulnerable applications.
What is the pass criteria for the exam?
The pass criteria are as follows:
- Candidates scoring over 60% marks will be deemed to have successfully passed the exam.
- Candidates scoring over 75% marks will be deemed to have passed with merit.
What is the experience needed to take the exam?
This is an intermediate-level exam. Candidates should have prior knowledge and experience of AI/ML pentesting. They should have a solid understanding of common application security topics, including the OWASP Top 10 vulnerabilities for large language models (LLMs), prompt injection attacks, common security misconfigurations, and best security practices. They should be able to demonstrate their practical knowledge on AI/ML security topics by completing a series of tasks on identifying and exploiting vulnerabilities that have been created in the exam environment to mimic the real world scenarios.
Note: As this is an intermediate-level exam, a minimum of one year of professional pentesting experience is recommended.
What will the candidates get?
On completing the exam, each candidate will receive:
- A certificate with their pass/fail and merit status.
- The certificate will contain a code/QR link, which can be used by anyone to validate the certificate.
What is the exam retake policy?
Candidates, who fail the exam, are allowed 1 free exam retake within the exam fees.
What are the benefits of this exam?
The exam will allow candidates to demonstrate their skills in AI/ML pentesting. This will help them to advance in their career.
How long is the certificate valid for?
The certificate does not have an expiration date. However, the passing certificate will mention the details of the exam such as the exam version and the date. As the exam is updated over time, candidates should retake the newer version as per their convenience.
Will you provide any training that can be taken before the exam?
Being an independent certifying authority, we (The SecOps Group) do not provide any training for the exam. Candidates should carefully go over each topic listed in the syllabus and make sure they have adequate understanding, required experience and practical knowledge of these topics. Further, the following independent resources can be utilized to prepare for the exam.
Learning Resources
Exam Syllabus
Prompt Injection
- Direct Prompt Injections
- Indirect Prompt Injections
Insecure Output Handling
Training Data Poisoning
Supply Chain Vulnerabilities
- Traditional third-party package vulnerabilities, including outdated or deprecated components.
- Using a vulnerable pre-trained model for fine-tuning.
- Using outdated or deprecated models that are no longer maintained leads to security issues.
- Use of poisoned crowd-sourced data for training.
Sensitive Information Disclosure
- Incomplete or improper filtering of sensitive information in the LLM responses.
- Overfitting or memorization of sensitive data in the LLM training process.
- Unintended disclosure of confidential information due to LLM misinterpretation, lack of data scrubbing methods or errors.
Insecure Plugin Design
Excessive Agency
- Excessive Functionality.
- Excessive Permissions.
- Excessive Autonomy.