Be wary of WhatsApp messages impersonating Jobline Resources's staff offering job opportunities. Those who encounter suspicious messages can contact Jobline at +65 6339 7198
Collaborate with Group Security and Data Governance teams to align AI platform designs with enterprise security and compliance policies.
Define and implement secure-by-design architectural principles for AI/ML platforms, covering data pipelines, model deployment, and access layers.
Implement and oversee Responsible AI (RAI) practices to ensure AI systems are designed and deployed ethically, with fairness, transparency, and compliance
Implement and oversee Explainable AI (XAI) practices to ensure AI model decisions are transparent, interpretable, and trustworthy through integrated explainability features.
Ensure compliance with regulatory frameworks and AI governance standards.
Ensure secure and compliant architecture in collaboration with cybersecurity and governance teams, embedding PDPA and enterprise policy requirements into designs
Translate governance requirements into technical specifications and enforceable controls across cloud and on-premise AI environments.
Integrate privacy-preserving mechanisms such as data anonymization, encryption, tokenization, and secure logging into AI workflows.
Evaluate and recommend AI security and governance tools (e.g. AWS Guardrails, Azure Responsible AI, IBM Watson Governance) for adoption.
Conduct AI-specific risk assessments, including model misuse, bias, data leakage, adversarial attacks, and LLM prompt vulnerabilities.
Review and approve the integration of third-party AI services and open-source models from a security and compliance perspective.
Champion awareness of AI security and governance across AIDA by contributing to policies, best practices, and team enablement sessions.
Review and clear governance approvals related to architecture and solution design, with specific focus on AI security.
Collaborate with vendors and partners to review, evaluate, and select appropriate security solutions.
Ensure all AI solutions, including in-house and vendor-developed systems, undergo thorough testing and Vulnerability Assessment and Penetration Testing (VAPT) to safeguard security and reliability.
Requirements
Bachelor’s or Master’s degree in Cybersecurity, Engineering, AI/ML, or related field.
More than 5 years of experience in cybersecurity, data governance, or secure systems architecture, with at least 3 years focused on AI or cloud-based ML systems
Proven expertise in implementing cybersecurity governance, data protection, etc on data or AI platform.
Strong understanding of AI/ML pipeline components and risks—model misuse, prompt injection, data leakage, adversarial inputs, bias and explainability.
Proficient in implementing secure and compliant AI/ML systems on cloud platforms such as AWS SageMaker, Azure ML, Google Vertex AI, etc.
Experience with AIOps/LLMOps and DevSecOps practices, including secure CI/CD, RBAC, secrets management and logging
Familiarity with AI governance toolkits and regulatory trends
Technical knowledge of data privacy controls (encryption, tokenization, data minimization) and security frameworks (e.g., Zero Trust, OWASP for ML)
Ability to perform threat modeling and security assessment for AI and LLM-based systems
Strong cross-functional communication and collaboration skills, with the ability to influence both technical and policy-level decisions
Good internal (IT, Networks, business) and external (suppliers, government) stakeholders management skills
Strong technical writing and presentation skills, with the ability to communicate complex concepts clearly to both technical and non-technical stakeholders.
Proactive and fast learner with a strong drive to stay current on emerging technologies and industry trends
Proven experience working in a telco environment or implementing security and governance role is a plus