
AI Security Engineer

Pierce Technology Corp
Full-time · Remote · United States · Emerging Tech & AI Security
Description

We are seeking an AI Security Engineer to ensure that advanced AI systems are designed and operated with the highest levels of security, compliance, and reliability. In this role, you will define threat models for AI workflows, implement guardrails and access controls, and build monitoring systems to detect drift, hallucinations, and anomalous behavior. You’ll also integrate AI decision-logging into enterprise security tooling, lead red-team exercises, and establish incident response playbooks.

  • Threat Modeling: Analyze and document potential risks in agent workflows, including data misuse, privilege escalation, and model abuse.
  • Guardrails & Access Controls: Implement robust data loss prevention (DLP), secret management, and RBAC/ABAC policies for AI pipelines.
  • Bounded Autonomy: Enforce safe autonomy levels for agents, including human-in-the-loop approvals for critical actions.
  • Monitoring & Detection: Build systems to monitor model drift, hallucination rates, and anomalous behavior across inference pipelines.
  • Security Integration: Connect AI decision-logging into enterprise SIEM platforms for traceability and compliance.
  • Red-Teaming: Lead adversarial testing of AI systems to expose vulnerabilities and strengthen resilience.
  • Incident Response: Develop and maintain runbooks for AI-related security incidents, coordinating across engineering and security teams.
Requirements
  • Strong background in application security, cloud security, or security engineering with exposure to AI/ML systems.
  • Hands-on experience with threat modeling and building secure architectures.
  • Familiarity with data loss prevention (DLP), secrets management, RBAC/ABAC, and access governance.
  • Knowledge of ML/AI system vulnerabilities, including prompt injection, data poisoning, and model drift.
  • Experience integrating logging and monitoring systems with SIEM/SOAR platforms.
  • Skilled in running red-team exercises or adversarial testing.
  • Proficiency with Python or similar scripting languages for automation and security tooling.
  • Bonus: Prior experience with AI guardrails, LLMOps, or responsible AI frameworks.