
What You Can Ask

Example questions and capabilities of Koalr's AI chat panel.

Koalr's AI chat panel is powered by Koalr AI, which has direct access to your organization's live engineering data. Rather than searching a static knowledge base, Koalr AI queries your actual metrics and returns answers backed by real data.

Accessing the chat

Click the chat bubble icon in the bottom-right corner of any dashboard page.

Example questions

Deploy risk

  • "Which repos have the highest deploy risk right now?"
  • "Why did the payments service score 87/100 on the last deploy?"
  • "How does our change failure rate compare to last quarter?"

DORA metrics

  • "What's our current deployment frequency?"
  • "Why did MTTR spike in the last 2 weeks?"
  • "Which team has the best lead time?"

Code coverage

  • "Which repos have coverage below 50%?"
  • "Show me the coverage trend for the api repo"
  • "Which PRs dropped coverage the most this month?"

Pull requests

  • "How many PRs are currently awaiting review?"
  • "Who are the top reviewers this quarter?"
  • "What's our average PR cycle time?"

Incidents

  • "How many P1 incidents did we have this month?"
  • "What was the MTTR for the payment outage last week?"
  • "Which services have the most incidents?"

Tool use capabilities

For precise queries, Koalr AI uses built-in tools:

Tool               What it queries
get_deploy_risk    Risk scores, factor breakdowns, recent deployments
get_coverage       Coverage snapshots, trend, hotspots
get_dora_metrics   Deployment frequency, lead time, CFR, MTTR
get_pr_stats       Cycle time, review health, throughput
get_incidents      Incident list, MTTR, service correlation
get_jira_issue     Jira issue status, assignee, linked PRs
get_github_pr      PR details, review status, checks
get_linear_issue   Linear issue status, team, project
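To make the tool flow concrete, here is a minimal sketch of what a tool invocation and its result might look like. The payload field names below are assumptions for illustration only; Koalr's internal tool schema is not documented here.

```python
# Hypothetical tool call (field names are illustrative assumptions,
# not Koalr's documented API).
tool_call = {
    "tool": "get_coverage",
    "arguments": {"window_days": 30},
}

# A plausible (hypothetical) result payload: per-repo coverage snapshots.
result = {
    "snapshots": [
        {"repo": "api", "coverage_pct": 48.2},
        {"repo": "payments", "coverage_pct": 71.5},
        {"repo": "web", "coverage_pct": 44.9},
    ]
}

# Answering "Which repos have coverage below 50%?" from the result:
below_50 = [s["repo"] for s in result["snapshots"] if s["coverage_pct"] < 50]
print(below_50)  # ['api', 'web']
```

The model turns the structured result into a natural-language answer, so the numbers in the reply come from the query result rather than from the model's memory.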

Context awareness

The chat automatically injects context from the page you're on. If you open chat from the Deploy Risk page, Koalr AI already knows which repos and deployments you're looking at.
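As a rough sketch of how that injection could work (the context fields and prompt layout below are assumptions, not Koalr's documented behavior), the page state might be prepended to your question before it reaches the model:

```python
# Hypothetical page context (field names are illustrative assumptions).
page_context = {
    "page": "deploy-risk",
    "visible_repos": ["payments", "api"],
}

def build_prompt(question, context):
    """Prepend page context so the model already knows what you're viewing."""
    ctx_lines = "\n".join(f"{k}: {v}" for k, v in context.items())
    return f"[Page context]\n{ctx_lines}\n\n[Question]\n{question}"

prompt = build_prompt("Why is this deploy risky?", page_context)
print(prompt)
```

This is why a question like "Why is this deploy risky?" works without naming the repo or deployment explicitly.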

Rate limits

  • 10 messages per minute per user
  • Complex queries (tool use, multi-step analysis) use a more capable model for better accuracy
  • Simple questions use a faster model for quick responses
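If you are scripting against the chat, the documented limit of 10 messages per minute per user can be enforced client-side with a sliding window. This helper is an illustrative sketch, not part of any Koalr SDK:

```python
from collections import deque

class ChatRateLimiter:
    """Sliding-window limiter: at most max_messages sends per window_s seconds."""

    def __init__(self, max_messages=10, window_s=60.0):
        self.max_messages = max_messages
        self.window_s = window_s
        self.sent = deque()  # timestamps of recent sends

    def allow(self, now):
        """Return True (and record the send) if under the limit at time `now`."""
        # Drop sends that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window_s:
            self.sent.popleft()
        if len(self.sent) < self.max_messages:
            self.sent.append(now)
            return True
        return False

limiter = ChatRateLimiter()
allowed = [limiter.allow(now=t) for t in range(11)]  # 11 sends within a minute
print(allowed.count(True))  # 10 (the 11th send is rejected)
```

A sliding window matches the per-minute wording more closely than a fixed reset each minute, though the server's exact accounting may differ.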