HR is one of the under-served corners of corporate AI in Malaysia. Most automation budgets in 2025 went to finance, customer service, and marketing — partly because those teams were noisier, partly because HR data is sensitive in ways the others are not. By 2026, that gap has started to close, and the workflows that work in HR are becoming clearer.
This is a working playbook based on what we have deployed with Malaysian corporate HR teams in the past 12 months — what is generating value, what is creating risk, and what to do about both.
Five HR workflows that consistently deliver value (with humans in the loop)
1. CV screening, with humans deciding
The headline use case, and also the one with the most landmines. AI is excellent at parsing a CV against a structured rubric (skills, years, certifications) and at flagging top candidates for human review. It is dangerous when used as a black-box filter that automatically rejects candidates without a human ever looking. The Malaysian implementations that work in 2026 use AI as a triage layer that accelerates the human reviewer, not as a decision-maker.
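A triage layer like this can be sketched in a few lines. This is a minimal, hypothetical example — the rubric fields, weights, and thresholds are illustrative, not a production screening rubric — but it shows the key design property: the AI assigns a priority score, and every candidate still routes to a human reviewer.

```python
# Hypothetical triage rubric: fields, weights, and thresholds are
# illustrative only, not a recommended screening standard.
RUBRIC = {
    "required_skills": {"python", "sql"},
    "min_years_experience": 3,
    "preferred_certs": {"pmp", "aws-saa"},
}

def triage_score(cv: dict) -> float:
    """Return a 0-1 priority score for a parsed CV. Never a reject decision."""
    skills = {s.lower() for s in cv.get("skills", [])}
    skill_hit = len(RUBRIC["required_skills"] & skills) / len(RUBRIC["required_skills"])
    years_ok = 1.0 if cv.get("years_experience", 0) >= RUBRIC["min_years_experience"] else 0.0
    certs = {c.lower() for c in cv.get("certifications", [])}
    cert_hit = len(RUBRIC["preferred_certs"] & certs) / len(RUBRIC["preferred_certs"])
    return 0.5 * skill_hit + 0.3 * years_ok + 0.2 * cert_hit

def route(cv: dict) -> str:
    """Every candidate reaches a human; the score only sets review priority."""
    if triage_score(cv) >= 0.7:
        return "priority-human-review"
    return "standard-human-review"  # note: there is no "auto-reject" branch

candidate = {"skills": ["Python", "SQL"], "years_experience": 5, "certifications": []}
print(route(candidate))  # priority-human-review
```

The important design choice is what the function is *not allowed* to return: there is no auto-reject path, so the AI can only reorder the queue, never close it.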
2. Interview question generation
Generating tailored, role-specific behavioural and technical interview questions from a job description is now standard. The win is consistency — every interviewer asks comparable questions — and breadth, because the AI suggests dimensions a human interviewer might forget.
3. Onboarding chat agent
A WhatsApp or Slack-based onboarding agent that answers new hire questions about benefits, policies, IT setup, and first-week logistics. Done well, this reduces HR's time on repetitive onboarding queries by 60–80% and gives new hires faster, more consistent answers than chasing the right person on Slack.
4. Policy summarisation and Q&A
An internal RAG system over the employee handbook, leave policy, claims policy, and code of conduct. Employees ask a question; the agent answers with citations to the source policy. Far better than the current default of forwarding the question to HR or guessing from a 6-month-old WhatsApp thread.
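The retrieval-plus-citation half of such a system can be sketched as below. This is a toy illustration — real deployments use embeddings and an LLM rather than word overlap, and the policy documents and section numbers here are invented — but it shows the contract that matters: every answer carries a citation back to a source policy.

```python
import re

# Invented policy chunks for illustration; a real system would index
# the actual handbook, leave policy, claims policy, and code of conduct.
POLICY_CHUNKS = [
    {"doc": "Leave Policy", "section": "3.2",
     "text": "Employees are entitled to annual leave as set out in their contract."},
    {"doc": "Claims Policy", "section": "5.1",
     "text": "Medical claims must be submitted within 30 days with receipts."},
]

def tokens(s: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def retrieve(question: str, top_k: int = 1) -> list[dict]:
    """Rank chunks by naive word overlap with the question (embeddings in real life)."""
    q = tokens(question)
    ranked = sorted(POLICY_CHUNKS,
                    key=lambda c: len(q & tokens(c["text"])),
                    reverse=True)
    return ranked[:top_k]

def answer_with_citation(question: str) -> str:
    chunk = retrieve(question)[0]
    # The LLM generation step is elided; we return the grounding text
    # plus its citation, which is the part the employee must always see.
    return f'{chunk["text"]} [Source: {chunk["doc"]}, s.{chunk["section"]}]'

print(answer_with_citation("How do I submit medical claims?"))
```

If the agent cannot cite a source chunk, the honest behaviour is to say so and hand off to HR rather than guess.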
5. Performance review drafting
Used carefully. AI helps managers draft initial review narratives based on bullet-point notes — improving consistency and quality of feedback — while final wording remains the manager's responsibility. The teams using this in 2026 report markedly better review quality, particularly from less-experienced managers.
What you must not automate without a human
- Hiring decisions. Triage yes, decision no.
- Termination decisions. AI should not be involved at all.
- Disciplinary actions or performance improvement plans.
- Compensation decisions outside published bands.
- Anything legally sensitive — investigations, complaints, accommodation requests.
The principle: AI can support HR work, not replace HR judgement, particularly on decisions that have material impact on employee livelihoods. The Malaysian regulatory environment, MEF guidelines, and basic employee trust all reinforce this.
The PDPA layer
HR data is some of the most sensitive personal data your organisation holds. Three rules we apply consistently:
One: No employee personal data flows to a public LLM without an enterprise contract that specifically addresses confidentiality and data residency. Anthropic, OpenAI, and Google all offer enterprise tiers that meet this bar; consumer ChatGPT does not.
Two: Document the purpose of every AI processing step. Under PDPA, employees have a right to know what you are doing with their data. "We use AI to summarise CVs" is acceptable; vague generalities are not.
Three: Build audit logs. Every AI-assisted decision in HR — even support decisions like "recommended for first interview" — should leave a trail showing what input went in, what came out, and which human reviewed it.
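A minimal audit record covering those three elements might look like the sketch below. Field names are illustrative, not a schema the article prescribes; one deliberate choice shown here is hashing the raw input rather than storing it, which keeps the trail verifiable while limiting PDPA exposure.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(step: str, model_input: str, model_output: str,
                 reviewer: str, decision: str) -> dict:
    """One append-only record per AI-assisted step: what went in,
    what came out, and which human reviewed it."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,  # e.g. "cv-screening"
        # Store a hash of the input, not the raw CV text,
        # so the log itself holds less personal data.
        "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),
        "model_output": model_output,
        "reviewed_by": reviewer,      # the accountable human
        "human_decision": decision,   # what the human actually decided
    }

record = audit_record("cv-screening", "<parsed CV text>",
                      "recommended for first interview",
                      "hr.officer@example.com", "invited to interview")
print(json.dumps(record, indent=2))
```

Note that `model_output` and `human_decision` are separate fields: the log should make it visible when the human overruled the AI, not just when they agreed.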
The bias problem (and what to do about it)
Every AI system absorbs bias from its training data. In HR, the cost of that bias is concrete — disqualifying candidates, narrowing the pipeline, reinforcing existing demographic patterns. The honest position is that AI bias in HR cannot be eliminated, only managed.
What managing it looks like in practice:
- Run regular fairness audits — does the AI's screening output show different rejection rates across demographic groups for comparable candidates?
- Strip identifying information from CVs before AI screening (name, gender, race, nationality, photo) where possible.
- Use AI as a triage layer with explicit human review of borderline cases — not as a binary filter.
- Document the fairness controls and review them quarterly with HR leadership.
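The first bullet — comparing rejection or selection rates across groups — reduces to a short calculation. The sketch below uses the common "four-fifths" ratio heuristic as its threshold; that figure is a widely used rule of thumb, not a Malaysian legal standard, so set the actual threshold with legal advice.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total screened)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def disparate_impact_flags(outcomes: dict[str, tuple[int, int]],
                           threshold: float = 0.8) -> dict[str, bool]:
    """Flag any group whose selection rate falls below `threshold`
    times the best-performing group's rate (four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Illustrative numbers only: group_b is selected at 62.5% of group_a's
# rate, below the 0.8 threshold, so it gets flagged for review.
outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
print(disparate_impact_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

A flag is a trigger for human investigation of comparable candidates, not proof of bias on its own — small samples and genuinely different applicant pools can both move these ratios.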
The HRDC funding twist
HR teams have a built-in advantage: they are usually the function inside a Malaysian organisation that already understands HRDC well, because they administer it. Using HRDC SBL-KHAS funding to train your own HR team on AI tools is one of the cleanest applications of the scheme — the cost is fully recoverable for eligible employers, and the team is well-positioned to scale the same training across the organisation afterwards. Our HRDC training overview walks through how this works in practice.