Case Study

AI-Powered Analytics: Ask Trading Data Anything

An agentic analytics platform that lets users query complex financial data in plain English — with full reasoning transparency. Built with Claude Agent SDK and MCP.

Client: Fintech Analytics Platform
Industry: Fintech
Duration: 3 months
Team: 2 AI engineers + 1 PM
Stack: Claude Agent SDK + MCP
Status: Live in production (client renewed)

The Problem

A fintech company needed a way for their users to get insights from complex financial data without writing SQL or waiting for analyst reports. The goal: ask a question in plain English, get an accurate, data-backed answer in seconds.

The challenge wasn't just building an AI chat interface — it was making one that financial professionals would trust. In fintech, a wrong number isn't a minor error. It's a decision made on bad data.

What We Built

An analytics chat tool where users ask questions in plain language and get AI-powered answers backed by real data.

Under the hood, a Claude-based autonomous agent receives the question, decides which tools to call, queries the data through a custom MCP (Model Context Protocol) server, and streams the response back in real time.
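The loop above can be sketched in a few lines. This is a hypothetical illustration, not the production code: the tool registry, the `run_query` stand-in, and the chunked streaming are all assumptions standing in for the Claude Agent SDK and the MCP server.

```python
# Hypothetical sketch of the agent's request loop. `TOOLS` and
# `run_query` are illustrative names, not the client's actual tools.
from typing import Callable, Iterator

# Tool registry the agent can choose from (the real system exposes
# these through an MCP server, and Claude picks the tool to call).
TOOLS: dict[str, Callable[[str], str]] = {
    "run_query": lambda q: f"rows for: {q}",  # stand-in for the DuckDB call
}

def handle_question(question: str) -> Iterator[str]:
    """Pick a tool, run it, and stream the answer back in chunks."""
    tool_name = "run_query"                   # in production, the model decides
    result = TOOLS[tool_name](question)
    answer = f"Answer based on {tool_name}: {result}"
    # Stream the response a few characters at a time, as a chat UI would.
    for i in range(0, len(answer), 16):
        yield answer[i:i + 16]
```

The real system replaces the hard-coded tool choice with the model's own tool selection and streams tokens as they are generated rather than slicing a finished string.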

What makes this different from a typical ChatGPT wrapper: a collapsible panel alongside the chat exposes the agent's full reasoning trace. Users can see not just the answer, but how it was reached — which tools were called, what data was queried, and why the agent chose that approach.

Why reasoning transparency matters: Most AI analytics tools give you an answer and expect you to trust it. This system shows its work — so users can verify the logic before acting on it.
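One way to make that work visible is to record each step the agent takes and hand the structured trace to the UI panel. The sketch below is illustrative; the field names and `ReasoningTrace` class are assumptions, not the production schema.

```python
# Illustrative reasoning-trace recorder. Field names ("tool", "query",
# "rationale") are assumptions, not the client's actual trace format.
import json
from dataclasses import dataclass, field

@dataclass
class ReasoningTrace:
    """Collects the steps the agent took so a UI panel can render them."""
    steps: list = field(default_factory=list)

    def record(self, tool: str, query: str, rationale: str) -> None:
        self.steps.append({"tool": tool, "query": query, "rationale": rationale})

    def to_json(self) -> str:
        # Serialized trace the frontend's collapsible panel would display.
        return json.dumps(self.steps, indent=2)

trace = ReasoningTrace()
trace.record(
    tool="run_query",
    query="SELECT symbol, SUM(volume) FROM trades GROUP BY symbol",
    rationale="Question asks for per-symbol volume, so aggregate the trades table.",
)
```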

Tech Stack

Claude Agent SDK, MCP, DuckDB, Node.js, Python 3, Next.js
  • Claude (Anthropic) powers the agent via the Claude Agent SDK, with MCP handling tool orchestration
  • Streaming inference for real-time responses with extended thinking
  • DuckDB as the analytics engine — fast analytical queries on structured data
  • Custom MCP server bridging the AI agent to the data layer
  • Next.js frontend with real-time streaming chat UI and reasoning trace panel
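To give a feel for the MCP bridge, here is roughly what a tool declaration on such a server looks like. MCP tools describe their inputs with JSON Schema; the tool name and schema below are illustrative, not the client's actual API.

```python
# Sketch of a tool declaration an MCP server might expose to the agent.
# The tool name and its schema are hypothetical examples.
QUERY_TOOL = {
    "name": "query_trading_data",      # hypothetical tool name
    "description": "Run a read-only analytical SQL query against DuckDB.",
    "inputSchema": {                   # MCP tools describe inputs as JSON Schema
        "type": "object",
        "properties": {
            "sql": {
                "type": "string",
                "description": "SELECT statement to execute",
            },
        },
        "required": ["sql"],
    },
}
```

The agent sees this declaration, decides when the tool applies, and the server validates the arguments against the schema before touching the data layer.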

How We Worked

We started with a working prototype using LangChain that connected to the client's data, retrieved relevant information based on user queries, and replied in plain language. This let the client validate the approach before committing to the full build.

The hardest part wasn't the AI — it was understanding the domain well enough to define accurate metrics. Before the agent could answer complex analytical questions, someone had to define precisely what each metric means in the context of their business. Getting these definitions wrong would mean the AI gives technically correct but practically misleading answers.
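In practice, that means pinning each metric to a single agreed-upon definition the agent must use. The registry below is a minimal sketch: the metric name, its plain-English definition, and its SQL are invented examples, not the client's actual definitions.

```python
# Illustrative metric registry. The metric, its definition, and its SQL
# are examples only, not the client's real business definitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    definition: str  # plain-English meaning the agent can surface to users
    sql: str         # one canonical computation, agreed with domain experts

METRICS = {
    "fill_rate": Metric(
        name="fill_rate",
        definition="Executed order quantity divided by requested quantity.",
        sql="SELECT SUM(filled_qty) / SUM(order_qty) FROM orders",
    ),
}

def lookup(metric_name: str) -> Metric:
    """Resolve a user's term to the one agreed-upon definition."""
    return METRICS[metric_name]
```

Centralizing definitions this way keeps the agent from improvising a formula: every answer about "fill rate" is computed the same way, every time.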

The team: 2 AI engineers building the agent pipeline and MCP server, with 1 PM coordinating weekly with the client's team.

The Result

The platform is live and in production. The client renewed for additional scope of work after the initial build — extending the agent's capabilities and coverage.

Want something like this built?

Tell us the problem. We'll tell you what 72 hours can produce.

Chat with us