Responsibilities
AI/ML Security Assessments & Risk Management
- Conduct comprehensive security assessments of AI/ML systems, including data pipelines, model training environments, inference endpoints, and MLOps workflows.
- Identify complex risks related to data privacy, data leakage, adversarial attacks, model poisoning, prompt injection, and misuse of AI technologies.
- Evaluate threats across the AI lifecycle—from data collection to model retirement—and define appropriate mitigation actions.
AI Governance & Security Controls
- Develop and implement security controls, governance frameworks, and policies for end-to-end AI lifecycle management.
- Support clients in complying with AI regulations, responsible AI principles, and data protection requirements (e.g., GDPR, NIST AI RMF).Create strategic roadmaps and executive-level recommendations for secure AI adoption.
Cloud & Infrastructure Security for AI
- Design secure cloud architectures for AI workloads across AWS, Azure, and GCP.Implement best practices for IAM, encryption, secrets management, container security, network segmentation, and secure data storage.
- Assess and secure APIs, microservices, and application components that support AI models and intelligent systems.
Identity & Access Management for AI Agents
- Design IAM models for AI agents, including agent identities, delegated permissions, ephemeral credentials, and cross-system trust boundaries.
- Implement zero-trust principles for agent authentication, authorization, and privilege controls.Develop patterns for scoped access, JIT (Just-In-Time) authorizations, short-lived tokens, and decoupled privilege elevation.
- Integrate IAM systems with AI agent orchestration and establish access governance processes, including permission reviews, certifications, and usage monitoring.
Client Communication & Advisory
- Translate technical security risks into clear business impacts that executive stakeholders can act on.Prepare assessment reports, recommendations, threat models, and remediation plans for clients.
- Work cross-functionally with AI engineers, data scientists, IT security, and compliance teams to deliver secure AI solutions.
Requirements
- 3–8+ years of experience in cybersecurity, cloud security, or data security roles.
- Demonstrated experience securing AI/ML platforms, models, pipelines, or agent-based systems.Strong knowledge of cloud security (AWS, Azure, GCP), IAM, network security, encryption, and API security.
- Understanding of AI threats such as adversarial ML, data contamination, and model theft.Experience with container platforms (Docker, Kubernetes) and MLOps tools (SageMaker, Vertex AI, Azure ML, MLflow).
- Excellent analytical and communication skills, with the ability to present findings to technical and non-technical audiences.
- Certifications such as CCSP, CISSP, CCIE, AWS/Azure/GCP Security Specialty, or AI governance credentials.
- Experience in responsible AI, AI policy, or AI compliance frameworks.
- Background in security engineering, threat modeling, or red teaming for AI systems.
- Experience working in consulting organizations or large enterprise security programs.
Shortlisted candidate will be offered a 6 months Agency contract employment, subject to extension.