AI Risk & Governance Analyst

Type: Contract To Hire

Location: Boston, Massachusetts

Rate Info: $60 - $70

Work Model: Remote National

Published: 05-Mar-2026

Job ID: 41910

Optomi, in partnership with a leading provider in the healthcare industry, is seeking an AI Risk & Governance Analyst to join their team. You will be responsible for performing compliance reviews of AI applications to ensure alignment with internal policies and governance standards. The role involves conducting structured risk assessments across the AI system lifecycle and identifying risks related to bias, privacy, security, and regulatory noncompliance. The analyst will work collaboratively with AI development teams to gather information for assessments and will prepare clear findings and recommendations for leadership.

 

Key duties and responsibilities:
  • Performs compliance reviews of AI applications and products to assess alignment with internal policies, governance standards, and standard operating procedures, including verification of required documentation, approvals, and controls prior to production deployment.
  • Conducts structured risk assessments of AI systems across their lifecycle, identifying and documenting risks related to bias, privacy, security, safety, model behavior, and regulatory noncompliance; evaluates risk likelihood, impact, and the adequacy of mitigation controls.
  • Reviews model development practices, data handling procedures, deployment controls, and technical artifacts (e.g., model cards, system architecture documentation) to identify compliance gaps and discrepancies between documented capabilities and actual system behavior.
  • Investigates AI system incidents, complaints, or governance concerns by analyzing system behavior, data flows, and decision logic; documents investigative methods, evidence reviewed, and conclusions reached.
  • Conducts hands-on testing and probing of AI systems to validate documented claims regarding performance and behavior, and supports ongoing monitoring of deployed systems.
  • Tracks compliance and risk findings, remediation actions, and residual risk through maintained risk registers and supporting documentation; verifies that corrective actions are implemented and documented.
  • Partners with AI development teams, product owners, and subject matter experts to gather information for assessments and investigations, and prepares clear findings, executive summaries, and recommendations for leadership and governance stakeholders.
  • Monitors trends in compliance and risk findings to identify systemic issues and support continuous improvement of AI governance practices; stays current with evolving AI regulations, standards, and industry best practices.