
AI Engineer - QA / Validation (Enablement)

Zelis.com

Office

India Hyderabad (Nexity)

Full Time

About Us

Zelis is modernizing the healthcare financial experience in the United States across payers, providers, and healthcare consumers. We serve more than 750 payers, including the top five national health plans, regional health plans, and TPAs, as well as millions of healthcare providers and consumers across our platform of solutions. Zelis sees across the system to identify, optimize, and solve problems holistically with technology built by healthcare experts, driving real, measurable results for clients.

Why We Do What We Do 

In the U.S., consumers, payers, and providers face significant challenges throughout the healthcare financial journey. Zelis helps streamline the process by offering solutions that improve transparency, efficiency, and communication among all parties involved. By addressing the obstacles patients face in accessing care, the intricacies of navigating insurance claims, and the logistical challenges healthcare providers encounter in processing payments, Zelis aims to create a more seamless and effective healthcare financial system.

Zelis India plays a crucial role in this mission by supporting various initiatives that enhance the healthcare financial experience. The local team contributes to the development and implementation of innovative solutions, ensuring that technology and processes are optimized for efficiency and effectiveness. Beyond operational expertise, Zelis India cultivates a collaborative work culture, leadership development, and global exposure, creating a dynamic environment for professional growth. With hybrid work flexibility, comprehensive healthcare benefits, financial wellness programs, and cultural celebrations, we foster a holistic workplace experience. Additionally, the team plays a vital role in maintaining high standards of service delivery and contributes to Zelis’ award-winning culture. 

Position Overview

The AI Quality Engineer ensures Zelis AI solutions are safe, reliable, compliant, and production-ready. This role owns LLM evaluation, behavioral testing, bias/hallucination detection, automation of AI QA pipelines, and AI governance alignment.
This is a foundational role in ensuring responsible AI deployment across enterprise workflows and agentic systems.

Key Responsibilities

AI / LLM Testing & Evaluation

  • Design evaluation datasets, test cases, and benchmarks for LLMs, agents, and orchestration flows.
  • Test for hallucinations, bias, toxicity, drift, behavioral inconsistencies, and exploitable vulnerabilities.
  • Implement regression testing for prompts, agent workflows, tool calling, and RAG pipelines.
  • Use AWS Bedrock evaluation features and custom scoring frameworks for LLM quality.
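To make the regression-testing responsibility concrete, a minimal PyTest sketch of a prompt regression suite might look like the following. `call_model` and the golden dataset are hypothetical stand-ins (a real suite would call a Bedrock or OpenAI client and score against a curated evaluation set):

```python
import pytest

# Hypothetical stand-in for a real model client (e.g. a Bedrock invoke call).
# It is a deterministic stub here so the regression suite can run offline.
def call_model(prompt: str) -> str:
    canned = {
        "What is a copay?": "A copay is a fixed amount a patient pays for a covered service.",
    }
    return canned.get(prompt, "I don't know.")

# Golden dataset: prompt plus keywords the answer must contain.
# Real cases would come from curated evaluation data, not hard-coded strings.
GOLDEN_CASES = [
    ("What is a copay?", ["fixed amount", "covered service"]),
]

@pytest.mark.parametrize("prompt,required", GOLDEN_CASES)
def test_prompt_regression(prompt, required):
    answer = call_model(prompt).lower()
    missing = [kw for kw in required if kw not in answer]
    assert not missing, f"Answer dropped expected content: {missing}"
```

Keyword matching is the simplest scoring approach; in practice it would be combined with semantic-similarity or LLM-as-judge scoring for free-form answers.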

Functional & Non-Functional QA

  • Validate accuracy, consistency, latency, throughput, and cost-efficiency of AI services.
  • Ensure CI/CD integration of AI models with automated test suites.
  • Conduct red-team testing for safety, compliance, and security vulnerabilities.
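A non-functional check such as a latency gate can be sketched as below. `ai_service` is a hypothetical stub, and the p95 budget is illustrative; in CI the same measurement would wrap the real service endpoint:

```python
import time

# Hypothetical AI service call, stubbed so the check runs offline.
def ai_service(prompt: str) -> str:
    time.sleep(0.001)  # simulate a small amount of work
    return f"echo: {prompt}"

def measure_p95_latency(fn, prompt: str, runs: int = 20) -> float:
    """Return the p95 latency in seconds over `runs` invocations."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(prompt)
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[int(0.95 * (len(samples) - 1))]

# Illustrative non-functional gate: fail the build if p95 latency
# exceeds the agreed budget.
LATENCY_BUDGET_S = 0.5
p95 = measure_p95_latency(ai_service, "Summarize this claim.")
assert p95 < LATENCY_BUDGET_S, f"p95 latency {p95:.3f}s exceeds budget"
```

The same pattern extends to throughput and cost gates by swapping the measured quantity.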

Governance, Safety & Compliance

  • Maintain logs, audit trails, and quality documentation for responsible AI governance.
  • Ensure HIPAA compliance, data protection, privacy guardrails, and prompt security patterns.
  • Collaborate with internal Platform, Privacy, Compliance, and InfoSec for model deployment approvals.
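One common privacy-guardrail pattern from this section, redacting sensitive identifiers before text reaches a model or a log, can be sketched as follows. The patterns (including the `ZEL-` member-ID format) are purely illustrative; a production guardrail would rely on vetted PHI-detection tooling, not ad-hoc regexes:

```python
import re

# Illustrative patterns only; the MEMBER_ID format is a hypothetical example.
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MEMBER_ID": re.compile(r"\bZEL-\d{6}\b"),
}

def redact_phi(text: str) -> str:
    """Replace matches of known sensitive patterns with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Running the redactor over inbound prompts and outbound logs keeps an auditable trail without persisting raw identifiers.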

Enablement & Cross-Functional Collaboration

  • Work with Product management teams to define acceptance criteria for AI features.
  • Guide citizen builders on how to test and validate their AI prototypes.
  • Partner with AI Engineers to refine model behavior, prompt design, and agent orchestration.

Qualifications

  • 3–6 years in QA engineering or ML testing.
  • Experience testing AI/ML/LLM systems (Bedrock, OpenAI, or similar).
  • Strong understanding of prompt behavior, model evaluation, dataset curation, and risk mitigation.
  • Familiarity with CI/CD, Python/PyTest, automation frameworks.
  • Strong applied understanding of healthcare claims processing, payments, pricing, and provider networks.
  • Experience with healthcare data and compliance (HIPAA, CMS guidelines).
  • Exposure to agentic workflows or GenAI-based automations.
  • Commitment to Diversity, Equity, Inclusion, and Belonging

Diversity, Equity, Inclusion & Belonging

At Zelis, we champion diversity, equity, inclusion, and belonging in all aspects of our operations. We embrace the power of diversity and create an environment where people can bring their authentic and best selves to work. We know that a sense of belonging is key not only to your success at Zelis, but also to your ability to bring your best each day.

Equal Employment Opportunity

Zelis is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

Accessibility Support

We are dedicated to ensuring our application process is accessible to all candidates. If you are a qualified individual with a disability and require reasonable accommodation with any part of the application and/or interview process, please email talentacquisition@zelis.com.

December 15, 2025