AI Agentic Security &
Data Governance
Protect your organisation's AI systems from emerging threats. Master cybersecurity fundamentals, AI agent security, vulnerability scanning, and enterprise data governance. Hands-on training with real security tools.
Workshop Dates
Register your interest to be notified when dates are announced.
Dates Coming Soon
Register your interest and be the first to know when we announce workshop dates for this programme.
Register Your InterestPrivate Corporate Training
Looking to secure your entire organisation's AI infrastructure?
Exclusive sessions available for groups of 25-35 pax per class. Fully claimable.
5 Security Tools. Hands-On.
Industry-standard tools you will master during this course. All open-source or free community editions.
OWASP ZAP
Web app security scanner
Burp Suite
Vulnerability scanner & proxy
Nmap
Network discovery & audit
Gitleaks
Secret detection in code
Nuclei
Template-based vuln scanner
What You'll Build
Practical security projects you will complete during the workshop.
AI Agent Threat Scanner
Build an automated scanner that identifies vulnerabilities in AI agent configurations, API endpoints, and data flows.
OWASP AI Security Audit Tool
Implement OWASP Top 10 for LLM Applications checks against your AI systems with automated reporting.
Data Governance Dashboard
Create a centralised dashboard tracking data classification, access controls, retention policies, and compliance status.
Prompt Injection Defence System
Build multi-layered defences against prompt injection, jailbreaking, and adversarial attacks on AI agents.
API Security Gateway
Deploy a security gateway that monitors, rate-limits, and validates all AI API traffic with anomaly detection.
Incident Response Playbook
Design automated incident detection, classification, and response workflows for AI security events.
Compliance Audit Report Generator
Automate PDPA, industry-specific compliance checks and generate audit-ready security reports.
HRDC Training Architecture
A structured, hands-on approach to mastering AI security and data governance.
Day 1: Cybersecurity Foundations & AI Threat Landscape
Understanding the core security mechanics and the AI-specific threat landscape.
Core Theory
- Cybersecurity Fundamentals: CIA triad, attack surfaces, threat modelling, defence-in-depth strategy. The essential foundation before adding the AI layer.
- AI-Specific Threat Landscape: Prompt injection, data poisoning, model theft, adversarial attacks, hallucination exploitation. How AI agents create new attack vectors.
- OWASP Top 10 for LLM Applications: Deep dive into each vulnerability category: prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, model theft.
- Malaysian Regulatory Context: PDPA compliance, BNM RMiT for financial services, Cybersecurity Act 2024, industry-specific requirements and how AI intersects with existing regulations.
Hands-On Labs — Using Real Security Tools
Use OWASP ZAP to scan web applications and AI-powered endpoints for common vulnerabilities (XSS, SQL injection, CSRF).
Intercept and analyse AI API traffic, test for prompt injection vulnerabilities, and identify insecure data handling in AI agent communications.
Map your AI infrastructure, discover exposed services, identify misconfigured ports, and assess the attack surface of AI deployments.
Scan repositories and codebases for accidentally committed API keys, tokens, passwords, and sensitive credentials used by AI agents.
Day 2: AI Agent Security & Data Governance
Securing AI agents, implementing governance, and advanced penetration testing.
Core Theory
- Securing AI Agents: Authentication, authorisation, sandboxing, output validation, tool-use restrictions. How to prevent AI agents from being weaponised or manipulated.
- Data Governance Frameworks: Data classification, access control matrices, retention policies, data lineage tracking, right-to-erasure compliance for AI training data.
- API & Integration Security: Securing AI API endpoints, rate limiting, input validation, output sanitisation, webhook security, and MCP server hardening.
- Security Architecture for AI Systems: Zero-trust principles applied to AI, network segmentation, secrets management, encrypted communication, and secure model deployment patterns.
Hands-On Labs
Conduct structured penetration tests against a live AI chatbot: prompt injection attacks, jailbreak attempts, data extraction techniques, and privilege escalation.
Use ProjectDiscovery's Nuclei to run templated vulnerability scans against web applications and AI infrastructure with custom security templates.
Implement input sanitisation, output filtering, rate limiting, and anomaly detection for an AI agent. Test each layer against known attack patterns.
Set up data classification schemas, access control policies, audit logging, and compliance monitoring for AI data pipelines.
Day 3 (Optional): Enterprise Security Operations & Incident Response
Phase 03Specifically designed for corporate consulting engagements. Covers:
- Security operations centre (SOC) design for AI-augmented environments
- Automated incident detection and response workflows
- Threat intelligence integration with AI systems
- Red team / blue team exercises for AI security
- Building an AI security policy framework
- Vendor risk assessment for AI tools and platforms
- Board-level security reporting and risk communication
- Developing an organisational AI security roadmap
Who Should Attend?
This hands-on intensive is designed for technical professionals responsible for AI security and data governance.
IT Security Teams & CISOs
Security professionals responsible for securing AI deployments and ensuring regulatory compliance.
Software Engineers & DevOps
Developers building AI-powered applications who need to implement security-by-design principles.
Data Protection Officers
Compliance professionals managing data governance, PDPA requirements, and AI data handling policies.
CTOs & Technical Leaders
Decision-makers evaluating AI security risks and building organisational security strategies.
Experience the Workshop
A hands-on, high-energy environment where teams actually build, not just listen.
Our People
Learn from Malaysia's top AI security practitioners.
Shah Mijanur Rahman
Cybersecurity & Agentic Security Expert
Expert in cybersecurity, data pipelines, and AI agent security. Specialist in securing enterprise AI deployments, conducting penetration testing, and implementing data governance frameworks. Optimizes how AI agents retrieve and process internal knowledge securely.
Detailed FAQ
Addressing your technical, logistical, and HRDC inquiries.
Course Fee
Transparent pricing for your AI security transformation journey.
Self-Funded (non-HRDC)
Kickstart your AI Security journey
- 2 full days of in-person intensive training
- Complete programme materials, tools and templates
- Certificate of Completion
- 3-month post-training WhatsApp group support
- Admission to wider AI Learning Community
HRDC-Claimable
Upskill with your company's HRDC grant
- 2 full days of in-person intensive training
- Complete programme materials, tools and templates
- Certificate of Completion
- 3-month post-training WhatsApp group support
- Admission to wider AI Learning Community
About AITraining2U
AITraining2U was established by professionals to close the divide between academic theory, business and practical industry demands. Our mission is to ensure that AI education translates directly into measurable, real-world results. Since 2025, we have upskilled over 1,200 professionals across Malaysia in AI, Business Transformation, Agentic Automation, and Vibe Coding.
Driven by a core philosophy of "100%-focus on success" our expert faculty delivers highly interactive, hands-on learning experiences focused entirely on implementation. We don't just teach prompt engineering; we teach you how to architect robust, autonomous systems.
Whether through bespoke corporate masterclasses or intensive public bootcamps, we actively partner with enterprise leaders, technical specialists, and government bodies to accelerate their digital transformation journey and build confident, AI-native organizations.