Usage Examples

Real-world workflow examples using the MCP Server

These examples demonstrate workflows you can accomplish by chatting with Claude (or any MCP-compatible AI assistant) connected to the DigiUsher MCP Server.
Example 1: Analyzing Cloud Costs by Service

Scenario: You're a FinOps practitioner preparing a monthly cost review and want to see which services drove the most spending last month.

Prompt:

"Show me our cloud spending for February 2026, broken down by service. I want to see the top 10 services by cost."

What happens:

  1. Claude calls list_organizations to find your organization ID
  2. Claude calls query_expense_data with:
    • start_date: "2026-02-01", end_date: "2026-02-28"
    • group_by: [{"dimension": "service_name"}]
    • order_by: [{"field": "cost", "direction": "desc"}]
    • limit: 10
  3. If results contain data source IDs, Claude calls get_expense_dimension_lookups to translate them to names
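The arguments assembled in step 2 can be sketched in Python. This is an illustration of the payload described above, not the server's actual client API; computing the month end with `calendar.monthrange` avoids hardcoding February's length.

```python
import calendar
from datetime import date

# Build the query_expense_data arguments for "top 10 services by cost
# in February 2026", as described in step 2 above.
year, month = 2026, 2
last_day = calendar.monthrange(year, month)[1]  # 28 for February 2026

query_args = {
    "start_date": date(year, month, 1).isoformat(),       # "2026-02-01"
    "end_date": date(year, month, last_day).isoformat(),  # "2026-02-28"
    "group_by": [{"dimension": "service_name"}],
    "order_by": [{"field": "cost", "direction": "desc"}],
    "limit": 10,
}
print(query_args["end_date"])  # → 2026-02-28
```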

Expected output: A table showing the top 10 cloud services ranked by cost for February, with totals and any notable changes from the previous period.

Follow-up prompts you could try:

  • "Now filter this to only AWS accounts"
  • "Break down the top service by region"
  • "Show me the daily trend for EC2 costs this month"

Example 2: Investigating Cost Anomalies

Scenario: You received an alert about unexpected cost increases and want to quickly understand what's happening.

Prompt:

"Are there any high-severity cost anomalies in my cloud accounts this month? Show me the details of the most impactful one."

What happens:

  1. Claude calls list_organizations to find your organization ID
  2. Claude calls get_anomaly_summary with severity: ["HIGH"] and this month's date range to get a high-level count
  3. Claude calls list_anomalies filtered by severity: ["HIGH"] to get the list
  4. Claude calls get_anomaly for the top anomaly by impact to get full details including impact metrics

Expected output: A summary showing the number of high-severity anomalies, followed by a detailed breakdown of the most impactful one — including the affected service, region, cost impact, anomaly type (spike, pattern deviation, etc.), and when it was detected.

Follow-up prompts you could try:

  • "What about medium-severity anomalies?"
  • "Show me all anomalies for the us-east-1 region"
  • "Are there any pattern deviation anomalies?"

Example 3: Finding Cost Optimization Recommendations

Scenario: You're an engineering manager looking for quick wins to reduce cloud spend before the quarterly budget review.

Prompt:

"What open cost optimization recommendations do we have? Give me a summary of potential savings, then show the top 5 recommendations by savings amount."

What happens:

  1. Claude calls list_organizations to find your organization ID
  2. Claude calls get_savings_summary with status: ["open"] to get total potential savings
  3. Claude calls list_recommendations with status: ["open"], sorted by savings descending, limit 5
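Step 3 boils down to "sort open recommendations by savings, descending, and keep five". A short sketch with illustrative data; the `monthly_savings` field name is an assumption:

```python
# Stand-in for a list_recommendations(status=["open"]) response.
recs = [{"name": f"rec-{i}", "status": "open", "monthly_savings": s}
        for i, s in enumerate([120, 950, 40, 610, 75, 880, 300])]

# Sort by estimated savings, descending, and take the top 5.
top_5 = sorted(recs, key=lambda r: r["monthly_savings"], reverse=True)[:5]
print([r["monthly_savings"] for r in top_5])  # → [950, 880, 610, 300, 120]
```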

Expected output: First, a savings summary showing total potential monthly/annual savings from open recommendations. Then a list of the top 5 recommendations with details like the resource name, current configuration, recommended change, and estimated savings.

Follow-up prompts you could try:

  • "Group the savings by scenario type"
  • "Show me only the AWS rightsizing recommendations"
  • "What recommendations have already been applied?"

Example 4: Tracking FinOps KPIs Over Time

Scenario: You're preparing a FinOps maturity report and need to show how key metrics have trended over the last quarter.

Prompt:

"How has our effective savings rate and compute commitment coverage trended over the last 3 months?"

What happens:

  1. Claude calls list_organizations to find your organization ID
  2. Claude calls get_kpi_time_series with:
    • start_date: 3 months ago
    • end_date: today
    • kpi_ids: ["effective_savings_rate", "compute_commitment_coverage"]
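The arguments above can be sketched as follows. This uses 90 days as a simple stand-in for "3 months ago"; the KPI IDs are the ones named in the prompt, and the payload shape mirrors the step list rather than any confirmed client API:

```python
from datetime import date, timedelta

# Build the get_kpi_time_series arguments for a rolling 3-month window.
today = date.today()
kpi_args = {
    "start_date": (today - timedelta(days=90)).isoformat(),  # ~3 months ago
    "end_date": today.isoformat(),
    "kpi_ids": ["effective_savings_rate", "compute_commitment_coverage"],
}
print(kpi_args["kpi_ids"])
```

ISO-8601 date strings sort lexicographically in chronological order, so the window bounds can be compared directly as strings.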

Expected output: A time series showing daily or weekly values for both KPIs over the 3-month period, with trends and any notable changes highlighted. Claude may present this as a table or describe the trend in narrative form.

Follow-up prompts you could try:

  • "What are all our current KPI values as of today?"
  • "Show me the cost optimization index trend for the last 6 months"
  • "How much commitment discount waste do we have?"

Example 5: Chargeback Analysis Across Teams

Scenario: You need to report how cloud costs are distributed across engineering teams for last month's chargeback cycle.

Prompt:

"Show me the cost allocation breakdown across teams for February 2026."

What happens:

  1. Claude calls list_organizations to find your organization ID
  2. Claude calls get_chargeback_for_month with month: "2026-02-01" to get the chargeback data
  3. Alternatively, Claude calls get_cost_allocation_summary with start_date: "2026-02-01" and end_date: "2026-02-28" for a summary view
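The per-team breakdown in the expected output is a straightforward share-of-total calculation over the chargeback data. A small sketch with made-up team names and figures:

```python
# Stand-in for per-team costs returned by the chargeback tools,
# including an "unallocated" bucket. Values are illustrative only.
chargeback = {"platform": 42000.0, "data": 31000.0,
              "web": 17000.0, "unallocated": 10000.0}

# Each team's percentage of total spend, rounded to one decimal place.
total = sum(chargeback.values())
shares = {team: round(cost / total * 100, 1)
          for team, cost in chargeback.items()}
print(shares["unallocated"])  # → 10.0
```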

Expected output: A breakdown of costs allocated to each team/pool, including total cost per team, percentage of total spend, and any unallocated costs.

Follow-up prompts you could try:

  • "Compare this to January's chargeback"
  • "Show me Q1 2026 cost allocation with monthly granularity"
  • "Which team had the largest cost increase month over month?"