AI Policy

Last updated: 5 April 2026

CoVector AI is an AI-first company. We use AI tools in virtually everything we do — from internal operations to client delivery. This page explains how, and what that means for the people we work with.

Our commitment to transparency

We believe organisations that use AI should be clear about how they use it. This policy is a public statement of our practices — not a legal requirement, but a choice. We update it as our tools and practices evolve.

How we use AI internally

Every internal process at CoVector AI is AI-enabled. This includes:

  • Code and product development: We use AI coding assistants (Claude Code, GitHub Copilot) to accelerate development of our tools, platforms, and client deliverables
  • Research and analysis: AI models help us process research papers, benchmark technologies, and synthesise findings
  • Document creation: Presentations, proposals, and reports are drafted with AI assistance and reviewed by humans before delivery
  • Internal communications: AI tools assist with scheduling, summarisation, and workflow automation

How we use AI in our hiring process

We practise what we preach. Our hiring pipeline is a three-stage automated process that uses AI at each stage:

  • Stage 1 — Resume and GitHub screening: Applications are evaluated by an AI agent that assesses qualifications, experience alignment, and (for technical roles) GitHub portfolio quality against structured rubrics. The rubrics and scoring criteria are designed by humans.
  • Stage 2 — AI build challenge: Shortlisted candidates complete a practical challenge. Submissions are evaluated by an AI agent using predefined rubrics that assess technical execution, problem-solving approach, and communication quality.
  • Stage 3 — Face-to-face interview: Final-stage interviews are conducted by humans. AI is not used in the interview itself.

All automated evaluations produce structured scores and reasoning that candidates can request to review. Human oversight is applied at each stage transition: no candidate is rejected on an AI decision alone, and borderline cases always receive human review.

How we use AI with clients

When we work with clients, AI tool usage is governed by the Statement of Work for each engagement. Our standard practices:

  • We disclose which AI tools will be used in the engagement before work begins
  • Client data is only processed by AI models with the client's informed consent
  • We do not use client data to train AI models or share it across engagements
  • Where we deploy Digital Employees (AI agents) for clients, the client retains full control and can audit agent behaviour

AI tools we use

We are model-agnostic and select the best tool for each task. The AI models and platforms we currently use include:

  • Anthropic Claude: Reasoning, analysis, coding assistance, and conversational AI
  • Google Gemini: Research, analysis, and multimodal processing
  • OpenAI models: Tasks where GPT models are the best fit
  • Voice AI platforms: For call centre automation and voice agent deployments
  • Custom ML models: Purpose-built models for specific client use cases (e.g., recovery prediction, document classification)

This list is updated as our toolset evolves. We evaluate new models and platforms continuously as part of our R&D practice.

Human oversight

AI assists; humans decide. This is our operating principle. Specifically:

  • All client-facing deliverables are reviewed by a human before delivery
  • Strategic recommendations are developed with AI input but are the responsibility of our consultants
  • AI-generated code in client deployments undergoes human code review
  • Automated hiring decisions have human oversight at each stage transition
  • We do not use AI to make consequential decisions about individuals without human review

Client data and AI processing

When client data is processed by AI models:

  • Data is transmitted over encrypted connections and is not stored by AI providers beyond the processing session (per our data processing agreements)
  • Client data is never used to train foundation models
  • Where data sensitivity requires it, we offer on-premise or air-gapped deployment options
  • Data handling for each engagement is documented in the SOW and can be audited

AI-generated deliverables

We are transparent about the role of AI in our deliverables:

  • Deliverables that are substantially AI-generated are identified as such when appropriate
  • AI-generated content in reports, analyses, and recommendations is reviewed and validated by our team before delivery
  • The AI Fluency Assessment on this website produces AI-generated coaching and summaries — these are clearly labelled as AI-generated
  • Blog posts and website content may be drafted with AI assistance and are reviewed by our team

Continuous learning

We dedicate a significant portion of our time to R&D — reading papers, benchmarking new models, and testing emerging techniques. This is not just about staying current; it's about ensuring that the AI solutions we recommend and deploy reflect the best available approaches, not just the ones we're familiar with.

This policy itself is a living document. As AI capabilities and industry practices evolve, so will our commitments and disclosures.

Questions

If you have questions about how we use AI, contact us at hello@covectorai.in.

© 2026 Florintree Value Studio Pvt. Ltd. All rights reserved.