AI Governance for Malaysian Enterprises 2026: BNM RMiT, PDPA, ISO 27001

The regulatory floor moved in November 2025. Here is what changed, what it means for Malaysian AI deployments, and the governance scaffold that actually works in 2026.

By Warren Leow · 2026-03-18 · 11 min read

The Malaysian regulatory floor for technology risk shifted on 28 November 2025, when Bank Negara's revised Risk Management in Technology (RMiT) policy came into effect. The revision is the most important AI-relevant regulatory event in the Malaysian market in the past 18 months — not because it explicitly regulates AI, but because it tightens the cybersecurity and cloud governance posture that every regulated AI deployment now sits inside.

For non-banks, RMiT does not directly apply, but the surrounding framework — PDPA, MOSTI's National Guidelines on AI Governance and Ethics, ISO 27001 — is moving in the same direction. The practical effect is that the bar for "responsible AI deployment" in Malaysia has risen meaningfully across the board.

This is a working scaffold for AI governance in Malaysian enterprises in 2026, drawn from deployments AITraining2U has helped harden across banks, insurers, and mid-market companies in the past year.

The four regulatory frameworks that matter

1. BNM RMiT (revised November 2025)

Applies to financial institutions licensed by Bank Negara — banks, insurers, payments providers, and increasingly non-bank merchant acquirers under the November 2024 exposure draft. The revised RMiT tightens cloud governance, cybersecurity assessment, and accountability under shared-responsibility models. For AI deployments inside FIs, the practical implications are: tighter due diligence on AI vendors and cloud providers, stricter logging and audit, and explicit accountability for AI-related cyber risks.

2. PDPA (Personal Data Protection Act 2010)

Applies to all Malaysian organisations processing personal data. For AI, three sections matter most: lawful processing (you must have a lawful basis to feed personal data into AI systems); data residency (cross-border transfers require safeguards); and security obligations (the controls protecting personal data extend to AI workflows handling that data). The 2024 PDPA amendments added stricter penalties, which has meaningfully changed how seriously enterprises take this.

3. MOSTI National Guidelines on AI Governance and Ethics

Not legally binding in 2026, but increasingly used as the de facto governance baseline by GLCs, regulators, and enterprise procurement teams. Malaysian companies that align with these guidelines voluntarily are better positioned for both regulatory scrutiny and customer trust.

4. ISO 27001 (Information Security Management)

The international standard for information security management systems. Increasingly demanded in enterprise procurement, particularly for AI vendors. Achieving ISO 27001 certification is not specific to AI, but the disciplines it instils — risk assessment, access controls, audit logging, incident response — are exactly what serious AI governance requires.

The seven controls that anchor responsible AI deployment


Across the deployments AITraining2U has helped harden, seven controls show up consistently in the ones that pass scrutiny.

1. Approved-model registry

A documented list of AI models the organisation is approved to use, what data classifications can flow to each, and the contractual basis for that flow. Anthropic enterprise, OpenAI enterprise, and Google Cloud enterprise tiers typically anchor this list. Consumer ChatGPT and similar do not.
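A registry like this does not need heavy tooling to start. A minimal sketch of what one can look like in code — the model names, data classes, and contract references below are illustrative placeholders, not real entries or recommendations:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    model: str
    allowed_data_classes: frozenset  # classifications approved to flow to this model
    contractual_basis: str           # reference to the signed enterprise agreement

# Hypothetical entries for illustration only.
REGISTRY = {
    "claude-enterprise": RegistryEntry(
        "claude-enterprise",
        frozenset({"public", "internal", "confidential"}),
        "MSA-2025-014 (enterprise DPA signed)",
    ),
    "gpt-enterprise": RegistryEntry(
        "gpt-enterprise",
        frozenset({"public", "internal"}),
        "MSA-2025-021",
    ),
}

def is_permitted(model: str, data_class: str) -> bool:
    """True only if the model is registered AND the data class is approved for it."""
    entry = REGISTRY.get(model)
    return entry is not None and data_class in entry.allowed_data_classes
```

The key property is the default: a model not in the registry (consumer ChatGPT, for instance) is denied for every data class, rather than allowed until someone objects.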

2. Data classification and flow controls

Every AI workflow should map: what data classification goes in, what residency constraints apply, what processing happens, what comes out, and who has access. Without this map, you cannot answer the basic regulatory questions when they are asked.
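The map itself can be as simple as a structured record per workflow. The schema and values below are an assumed illustration, not a prescribed format:

```python
# One record per AI workflow; field names and values are illustrative.
WORKFLOW_MAP = {
    "claims-triage": {
        "input_classification": "personal-data",
        "residency": "MY-only",  # PDPA cross-border transfer constraint
        "processing": "summarisation via approved enterprise model",
        "output": "internal triage note, no raw PII",
        "access": ["claims-team", "risk-audit"],
    },
}

def answer_regulatory_question(workflow: str, dimension: str) -> str:
    """Look up one dimension of a workflow's data-flow map on demand."""
    return WORKFLOW_MAP[workflow][dimension]
```

When a regulator or auditor asks "where does this data go?", the answer is a lookup, not a scramble.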

3. Audit logging

Every AI-assisted decision and every input/output pair logged with timestamps, identifiers, and the model used. Retention aligned with the regulatory requirement for the data type. PDPA-aware redaction so audit logs themselves are not a privacy liability.
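A minimal sketch of such a log line, assuming JSON-lines output and a deliberately simplified redaction pass — the NRIC regex below illustrates the idea of PDPA-aware redaction and is not a complete PII detector:

```python
import json
import re
from datetime import datetime, timezone

# Simplified illustration: matches the Malaysian NRIC shape only.
NRIC_PATTERN = re.compile(r"\b\d{6}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Strip NRIC-shaped strings so the audit log is not itself a privacy liability."""
    return NRIC_PATTERN.sub("[REDACTED-NRIC]", text)

def audit_record(user_id: str, model: str, prompt: str, output: str) -> str:
    """Build one JSON audit line: timestamped, model-tagged, PII-redacted."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "model": model,
        "prompt": redact(prompt),
        "output": redact(output),
    }
    return json.dumps(record)
```

In production you would redact a much broader set of identifiers and write to append-only storage with retention matched to the data type, but the shape — timestamp, user, model, redacted input/output — is the part regulators ask about first.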

4. Prompt-injection and adversarial defence

Particularly for agents that read external content (emails, documents, scraped pages, customer messages). Standard defences include prompt-shielding patterns, allowlisted tool surfaces, output filtering, and human-in-the-loop on sensitive actions.
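An allowlisted tool surface can be sketched in a few lines — the tool names here are hypothetical, and a real deployment would layer this with output filtering and prompt shielding:

```python
# Hypothetical tool names for illustration.
ALLOWED_TOOLS = {"search_kb", "summarise_doc"}        # low-risk, read-only
SENSITIVE_TOOLS = {"send_email", "post_transaction"}  # consequential actions

def gate_tool_call(tool: str) -> tuple:
    """Return (allowed, needs_human) for a tool call an agent wants to make."""
    if tool in ALLOWED_TOOLS:
        return True, False
    if tool in SENSITIVE_TOOLS:
        return True, True   # permitted only with human-in-the-loop sign-off
    return False, False     # anything off the allowlist is rejected outright
```

The design choice that matters: the agent can never reach a tool that was not explicitly enumerated, so injected instructions in an email or scraped page cannot invoke capabilities the allowlist never granted.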

5. Human-in-the-loop on consequential actions

Any AI action that affects external parties — sending messages, posting transactions, taking irreversible decisions — should require human approval for the first 90 days at minimum. Many systems should keep this rail permanently.
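The 90-day rail is easy to encode so it cannot quietly lapse. A sketch, with an assumed go-live date and a `permanent_rail` flag for systems that should keep the gate forever:

```python
from datetime import date, timedelta

GO_LIVE = date(2026, 1, 1)  # illustrative production go-live date

def requires_human_approval(affects_external_party: bool,
                            today: date,
                            permanent_rail: bool = False) -> bool:
    """Gate consequential AI actions behind human approval.

    Always approve-gated for the first 90 days of production; optionally forever.
    """
    if not affects_external_party:
        return False
    in_first_90_days = today < GO_LIVE + timedelta(days=90)
    return permanent_rail or in_first_90_days
```

Making the window a function of the go-live date, rather than a setting someone remembers to flip off, means removing the rail is a deliberate code change that shows up in review.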

6. Incident response and kill switches

A documented procedure for what happens when an AI system misbehaves. Who notifies whom. How fast the system can be disabled. Single-flag kill switches that operations can flip without engineering involvement. Tested at least quarterly.
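The "single flag operations can flip" requirement can be as simple as a file the application checks before every model call. This is a sketch — in practice you would likely use a feature-flag service — but the fail-closed behaviour is the point:

```python
import json
from pathlib import Path

FLAG_FILE = Path("ai_kill_switch.json")  # illustrative path

def ai_enabled() -> bool:
    """Checked before every model call; fails closed if the flag is unreadable."""
    try:
        return json.loads(FLAG_FILE.read_text()).get("enabled", False) is True
    except (FileNotFoundError, json.JSONDecodeError):
        return False  # if in doubt, AI stays off

def set_flag(enabled: bool) -> None:
    """The whole operational procedure: edit one value, no deploy required."""
    FLAG_FILE.write_text(json.dumps({"enabled": enabled}))
```

Note the failure mode: a missing or corrupted flag file disables AI rather than enabling it. Quarterly testing then means actually flipping the flag in production and confirming traffic stops.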

7. Quarterly governance review

A standing review by an internal committee — typically including risk, IT, legal, and the business sponsor — that reviews approved models, fairness audits, incident reports, and policy adherence. The review is what keeps governance from decaying into a dusty PDF.

Where Malaysian organisations typically fall short

From what we see across enterprise audits in 2026:

  • Shadow AI usage — staff using consumer LLMs with company data without approval. Almost universal. The fix is sanctioned alternatives, not bans.
  • No audit logs on early deployments. The first deployment is treated as an experiment; logging is added retroactively after a regulator asks. Better to bake logging in from day one.
  • Vendor due diligence gaps — accepting an AI vendor's self-declared security posture without independent verification.
  • Fairness audits performed once, then never again.
  • Kill switches that are not tested until the moment they are needed, when they often do not work as expected.

Where to start if you are behind

If your organisation is deploying AI without a governance scaffold and you are reading this with mild alarm, the practical sequence is:

  • Inventory current AI usage — sanctioned and shadow.
  • Stand up an approved-model registry and migrate sensitive workflows onto enterprise tiers.
  • Add audit logging to all production AI workflows.
  • Assemble an internal governance committee with quarterly review cadence.
  • Run a baseline fairness audit and prompt-injection assessment.

None of this requires a six-month consulting engagement. It does require a consistent owner and a governance committee that meets. Our AI Agentic Security programme covers the full scaffold above end-to-end and is HRDC SBL-KHAS claimable for eligible Malaysian employers.

About the author

Warren Leow

Bain & Company alum · KAIN Founding Member · Former MED4IRN

Warren is the founder of AITraining2U and a Founding Member of Konsortium AI Negara (KAIN), Malaysia's national AI consortium. A former management consultant at Bain & Company, he was CEO of Designs.ai and Interim Group CEO of Inmagine Group, where Pixlr scaled to 10M+ monthly active users globally. Warren has been featured in The Star, BFM 89.9, e27, and KrASIA, and is a former member of the Council of Digital Economy and the Fourth Industrial Revolution (MED4IRN).

Frequently Asked Questions

Does BNM RMiT apply to organisations outside the financial sector?

Not directly. RMiT applies to BNM-licensed entities — banks, insurers, payments providers, and (under the November 2024 exposure draft) non-bank merchant acquirers and other market participants. For non-FIs, RMiT does not legally bind, but its principles increasingly inform enterprise procurement standards across regulated and unregulated sectors. PDPA, ISO 27001, and MOSTI's AI governance guidelines are the more directly relevant frameworks for non-FIs.

What changed in the November 2025 RMiT revision?

The revised RMiT tightens cloud governance, requires stricter due diligence on cloud service providers, formalises accountability under the shared-responsibility model, and extends key standards to a broader range of regulated entities. While AI is not explicitly named, the cybersecurity and cloud governance changes meaningfully raise the bar for AI deployments inside financial institutions.

Can consumer ChatGPT be used for workflows involving personal data?

No, generally not. Consumer ChatGPT and similar consumer-grade tools do not provide the contractual confidentiality, data residency controls, or auditability needed for processing personal data in a Malaysian regulated environment. Enterprise tiers from Anthropic, OpenAI, and Google Cloud are the practical baseline for any AI workflow touching personal data.

How often should AI fairness audits be run?

Quarterly is the practical baseline for systems with material impact on individuals — recruitment, lending, insurance pricing. Annual reviews are insufficient given how quickly underlying models, training data, and usage patterns change. Each audit should compare outcomes across protected demographic categories, document any disparities, and record remediation actions.

Is AI governance training HRDC-claimable?

Yes. AITraining2U's AI Agentic Security programme — covering BNM RMiT alignment, PDPA implications, ISO 27001 controls, and the seven-control governance scaffold — is HRDC SBL-KHAS claimable for eligible Malaysian employers. Many financial institutions and GLCs use HRDC funding to train risk, IT, and compliance teams together on AI governance.

Want to apply this in your organisation?

AITraining2U runs HRDC-claimable corporate AI training for Malaysian organisations — from leadership awareness to hands-on builder workshops. Talk to us about a programme tailored to your team.