Client: Toyota Financial Services
Job: AI Security Engineer
Duration: 12 Months
Location: Plano, TX (Hybrid)
Pay: $90-95/hr on W2
We are seeking a forward-thinking AI Security Engineer to help secure our AI/ML systems and infrastructure. This role is ideal for someone with a strong background in cybersecurity and a passion for artificial intelligence. You will be responsible for identifying and mitigating risks in AI models, data pipelines, and AI-powered applications, ensuring the integrity, confidentiality, and availability of our AI systems.
What you’ll be doing:
- Design and implement security controls for AI/ML systems, including model training, inference, and data pipelines.
- Identify and mitigate threats such as model inversion, data poisoning, adversarial attacks, and prompt injection.
- Collaborate with data scientists, ML engineers, and DevOps teams to integrate security into the AI/ML lifecycle.
- Conduct threat modeling and risk assessments for AI systems and algorithms.
- Monitor AI systems for anomalous behavior and potential misuse.
- Secure APIs and endpoints used for model access and inference.
- Ensure compliance with data privacy regulations (e.g., GDPR, CCPA) in AI workflows.
- Develop and enforce AI security policies, standards, and best practices.
- Stay current with emerging threats and research in AI/ML security.
What you bring:
- Bachelor’s or Master’s degree in Computer Science, Cybersecurity, Machine Learning, or a related field.
- 3+ years of experience in cybersecurity, with at least 1 year focused on AI/ML systems.
- Strong understanding of machine learning workflows, model architectures, and data pipelines.
- Familiarity with AI-specific threats such as adversarial ML, model extraction, and data leakage.
- Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
- Knowledge of secure software development practices and DevSecOps principles.
Added bonus if you have:
- Experience with securing LLMs and generative AI systems.
- Familiarity with AI governance, model explainability, and ethical AI principles.
- Hands-on experience with tools like IBM Adversarial Robustness Toolbox, Microsoft Counterfit, or similar.
- Certifications such as:
  - Certified AI Security Specialist (CAISS)
  - GIAC Machine Learning Security Engineer (GMSE)
  - CISSP, OSCP, or CEH with AI/ML experience