
Project Acronym: cPAID
Project Full Title: Cloud-based Platform-agnostic Adversarial AI Defence framework
Duration: 01/10/2024 – 30/09/2027
Topic: Security of robust AI systems (HORIZON-CL3-2023-CS-01-03)
Project Website: https://cpaid.eu

Suite5 in cPAID

Suite5 leads the “Adversarial AI Prevention and Capacity Building” Work Package and contributes to the definition and development of several modules: it supports the development of the AI attack cyber range, the definition and development of the cPAID risk management architecture, the coding and testing of the AI-assisted Adversarial Intrusion Detection and Prevention System, and the implementation of the Generative Adversarial AI module. Additionally, Suite5 contributes to the requirements and architecture definition and supports the integration and refinement of these modules towards an integrated cPAID platform. Finally, Suite5 contributes to the collective dissemination efforts and exploitation planning.

cPAID envisions researching, designing, and developing a cloud-based, platform-agnostic defence framework for the holistic protection of AI applications and the overall AI operations of organizations against malicious actions and adversarial attacks. cPAID aims to tackle both poisoning and evasion adversarial attacks by combining AI-based defence methods (e.g., life-long semi-supervised reinforcement learning, transfer learning, feature reduction, adversarial training), security- and privacy-by-design, privacy preservation, explainable AI (XAI), Generative AI, context awareness, as well as risk and vulnerability assessment and threat intelligence for AI systems. cPAID will identify guidelines to a) guarantee security- and privacy-by-design in the design and development of AI applications, b) thoroughly assess the robustness and resilience of ML and DL algorithms against adversarial attacks, c) ensure that EU principles for AI ethics have been considered, and d) validate the performance of AI systems in real-life use case scenarios. The identified guidelines aspire to promote research toward developing certification schemes that certify the robustness, security, privacy, and ethical excellence of AI applications and systems.
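
Among the defence methods listed above, adversarial training is a widely used countermeasure against evasion attacks. As a purely illustrative sketch, not a cPAID component, the snippet below shows one common form of it, assuming PyTorch and FGSM-style perturbations; the model, data loader, optimizer and epsilon value are hypothetical placeholders.

import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    # Craft an evasion-style adversarial example with the Fast Gradient Sign Method (FGSM).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss most, then clamp to a valid input
    # range (assumes inputs normalised to [0, 1]).
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    # One epoch of adversarial training: optimise on clean and perturbed batches together.
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
        optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
        loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()

The intuition is that the model repeatedly sees worst-case perturbed versions of its training data, so its decision boundary becomes less sensitive to small, adversarially chosen input changes.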

Other Resources: https://cordis.europa.eu/project/id/101168407
