The Malaysian regulatory floor for technology risk shifted on 28 November 2025, when Bank Negara's revised Risk Management in Technology (RMiT) policy came into effect. The revision is the most important AI-relevant regulatory event in the Malaysian market in the past 18 months — not because it explicitly regulates AI, but because it tightens the cybersecurity and cloud governance posture that every regulated AI deployment now sits inside.
For non-banks, RMiT does not directly apply, but the surrounding framework — PDPA, MOSTI's National Guidelines on AI Governance and Ethics, ISO 27001 — is moving in the same direction. The practical effect is that the bar for "responsible AI deployment" in Malaysia has risen meaningfully across the board.
This is a working scaffold for AI governance in Malaysian enterprises in 2026, drawn from deployments AITraining2U has helped harden across banks, insurers, and mid-market companies in the past year.
The four regulatory frameworks that matter
1. BNM RMiT (revised November 2025)
Applies to financial institutions licensed by Bank Negara — banks, insurers, payments providers, and increasingly non-bank merchant acquirers under the November 2024 exposure draft. The revised RMiT tightens cloud governance, cybersecurity assessment, and accountability under shared-responsibility models. For AI deployments inside FIs, the practical implications are: tighter due diligence on AI vendors and cloud providers, stricter logging and audit, and explicit accountability for AI-related cyber risks.
2. PDPA (Personal Data Protection Act 2010)
Applies to all Malaysian organisations processing personal data. For AI, three sections matter most: lawful processing (you must have a lawful basis to feed personal data into AI systems); data residency (cross-border transfers require safeguards); and security obligations (the controls protecting personal data extend to AI workflows handling that data). The 2024 PDPA amendments introduced stricter penalties, which have meaningfully raised how seriously enterprises take compliance.
3. MOSTI National Guidelines on AI Governance and Ethics
Not legally binding in 2026, but increasingly used as the de facto governance baseline by GLCs, regulators, and enterprise procurement teams. Malaysian companies that voluntarily align with these guidelines are better positioned for both regulatory scrutiny and customer trust.
4. ISO 27001 (Information Security Management)
The international standard for information security management systems. Increasingly demanded in enterprise procurement, particularly for AI vendors. Achieving ISO 27001 certification is not specific to AI, but the disciplines it instils — risk assessment, access controls, audit logging, incident response — are exactly what serious AI governance requires.
The seven controls that anchor responsible AI deployment
Across the deployments AITraining2U has helped harden, seven controls show up consistently in the ones that pass scrutiny.
1. Approved-model registry
A documented list of AI models the organisation is approved to use, what data classifications can flow to each, and the contractual basis for that flow. Enterprise tiers from Anthropic, OpenAI, and Google Cloud typically anchor this list; consumer ChatGPT and similar tools do not.
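A registry like this can live as data that gatekeeping code checks before any call leaves the building. A minimal sketch follows — the model names, classification labels, and contract references are illustrative placeholders, not a real approved list:

```python
from dataclasses import dataclass

# Illustrative data classifications, ordered least to most sensitive.
CLASSIFICATIONS = ["public", "internal", "confidential", "restricted"]

@dataclass
class ApprovedModel:
    name: str                 # model identifier as used in API calls
    vendor: str               # contracting entity
    max_classification: str   # highest data class allowed to flow in
    contract_ref: str         # pointer to the enterprise agreement

REGISTRY = {
    m.name: m
    for m in [
        # Placeholder entries; a real registry reflects signed contracts.
        ApprovedModel("claude-enterprise", "Anthropic", "confidential", "CTR-001"),
        ApprovedModel("gpt-enterprise", "OpenAI", "internal", "CTR-002"),
    ]
}

def is_allowed(model_name: str, data_class: str) -> bool:
    """Return True if data of this classification may be sent to the model."""
    model = REGISTRY.get(model_name)
    if model is None:
        return False  # unregistered models (e.g. consumer tiers) are denied
    return CLASSIFICATIONS.index(data_class) <= CLASSIFICATIONS.index(model.max_classification)
```

The deny-by-default branch is the point: anything not on the list, including shadow consumer tools, fails closed.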
2. Data classification and flow controls
Every AI workflow should map: what data classification goes in, what residency constraints apply, what processing happens, what comes out, and who has access. Without this map, you cannot answer the basic regulatory questions when they are asked.
3. Audit logging
Every AI-assisted decision and every input/output pair logged with timestamps, identifiers, and the model used. Retention aligned with the regulatory requirement for the data type. PDPA-aware redaction so audit logs themselves are not a privacy liability.
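A sketch of what one such log record can look like, including naive PDPA-aware redaction — the NRIC and email patterns here are illustrative, and production redaction needs a much fuller catalogue:

```python
import re
import time
import uuid

# Naive redaction: mask Malaysian NRIC-like patterns and email addresses
# so the audit log itself does not become a privacy liability.
NRIC = re.compile(r"\b\d{6}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    return EMAIL.sub("[EMAIL]", NRIC.sub("[NRIC]", text))

def log_ai_call(user_id: str, model: str, prompt: str, output: str) -> dict:
    """Build one audit record; in production, append it to immutable storage."""
    return {
        "id": str(uuid.uuid4()),
        "ts": time.time(),         # timestamp drives the retention policy
        "user": user_id,
        "model": model,            # which approved model served the call
        "prompt": redact(prompt),
        "output": redact(output),
    }
```

Baking this into the call path from day one is far cheaper than retrofitting it after a regulator asks.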
4. Prompt-injection and adversarial defence
Particularly for agents that read external content (emails, documents, scraped pages, customer messages). Standard defences include prompt-shielding patterns, allowlisted tool surfaces, output filtering, and human-in-the-loop on sensitive actions.
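Two of those defences — an allowlisted tool surface and screening of untrusted content — can be sketched as below. The tool names and injection markers are illustrative; real screening combines heuristics with model-based classifiers:

```python
# Agent may only invoke tools explicitly on this allowlist.
ALLOWED_TOOLS = {"search_kb", "summarise"}

# Known injection phrasing; illustrative, not exhaustive.
SUSPICIOUS = ("ignore previous instructions", "disregard your system prompt")

def guard_tool_call(tool: str) -> bool:
    """Deny any tool not explicitly allowlisted."""
    return tool in ALLOWED_TOOLS

def screen_untrusted_text(text: str) -> bool:
    """Flag external content carrying known injection phrasing."""
    lowered = text.lower()
    return not any(marker in lowered for marker in SUSPICIOUS)

def handle_external_document(text: str, requested_tool: str) -> str:
    if not screen_untrusted_text(text):
        return "quarantined: possible prompt injection"
    if not guard_tool_call(requested_tool):
        return "denied: tool not on allowlist"
    return "proceed"
```

The allowlist matters more than the phrase matching: even if injected text slips through screening, the agent still cannot reach tools outside its approved surface.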
5. Human-in-the-loop on consequential actions
Any AI action that affects external parties — sending messages, posting transactions, making irreversible decisions — should require human approval for the first 90 days at minimum. Many systems should keep this rail permanently.
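The 90-day run-in can be a simple gate in the execution path. A minimal sketch, assuming a configurable window and an optional permanent rail:

```python
import datetime as dt
from dataclasses import dataclass

RUN_IN_DAYS = 90  # minimum approval window, per the control above

@dataclass
class PendingAction:
    action: str        # e.g. "send_email"
    target: str        # external party affected
    approved: bool = False

def requires_approval(deployed_on: dt.date, today: dt.date,
                      permanent_rail: bool = False) -> bool:
    """True while inside the run-in window, or always if the rail is kept."""
    return permanent_rail or (today - deployed_on).days < RUN_IN_DAYS

def execute(action: PendingAction, needs_approval: bool) -> str:
    if needs_approval and not action.approved:
        return "held for human approval"
    return f"executed: {action.action} -> {action.target}"
```

The `permanent_rail` flag reflects the point above: for many consequential actions, the approval step should never expire.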
6. Incident response and kill switches
A documented procedure for what happens when an AI system misbehaves. Who notifies whom. How fast the system can be disabled. Single-flag kill switches that operations can flip without engineering involvement. Tested at least quarterly.
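A single-flag kill switch can be as plain as an environment variable or dashboard toggle checked before every AI call. The variable name `AI_KILL_SWITCH` below is an assumption for this sketch:

```python
import os

def ai_enabled() -> bool:
    """Check the kill switch before every AI call; operations flips the flag,
    no deploy or engineering involvement required."""
    return os.environ.get("AI_KILL_SWITCH", "off") != "on"

def call_model(prompt: str) -> str:
    if not ai_enabled():
        # Fail safe: degrade to a non-AI path and alert the incident channel.
        return "AI disabled by kill switch; routed to manual queue"
    return f"model response to: {prompt}"
```

The quarterly test in the procedure should exercise exactly this path, so the first real flip is not also the first test.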
7. Quarterly governance review
A standing review by an internal committee — typically including risk, IT, legal, and the business sponsor — that reviews approved models, fairness audits, incident reports, and policy adherence. The review is what keeps governance from decaying into a dusty PDF.
Where Malaysian organisations typically fall short
From what we see across enterprise audits in 2026:
- Shadow AI usage — staff using consumer LLMs with company data without approval. Almost universal. The fix is sanctioned alternatives, not bans.
- No audit logs on early deployments. The first deployment is treated as an experiment; logging is added retroactively after a regulator asks. Better to bake logging in from day one.
- Vendor due diligence gaps — accepting an AI vendor's self-declared security posture without independent verification.
- Fairness audits performed once, then never again.
- Kill switches that are not tested until the moment they are needed, when they often do not work as expected.
Where to start if you are behind
If your organisation is deploying AI without a governance scaffold and you are reading this with mild alarm, the practical sequence is:
- Inventory current AI usage — sanctioned and shadow.
- Stand up an approved-model registry and migrate sensitive workflows onto enterprise tiers.
- Add audit logging to all production AI workflows.
- Assemble an internal governance committee with quarterly review cadence.
- Run a baseline fairness audit and prompt-injection assessment.
None of this requires a six-month consulting engagement. It does require a consistent owner and a governance committee that meets. Our AI Agentic Security programme covers the full scaffold above end-to-end and is HRDC SBL-KHAS claimable for eligible Malaysian employers.