Client reporting used to be the task I dreaded most. Not because it's hard — because it's boring. Pulling the same metrics from the same dashboards, dropping them into the same template, writing the same "here's what happened this month" summary. Every agency owner and freelancer I know has the same complaint: reports take hours, clients barely read them, and skipping them isn't an option because reporting is how you prove value.
Six months ago, I automated most of this with Claude Code. Now my monthly reports take about 15 minutes each instead of two hours. Here's exactly how I built it and how you can do the same.
What a Client Report Actually Needs
Before automating anything, I had to be honest about what my clients actually want from a report. After asking a dozen of them directly, the answer was simpler than I expected:
- What changed since last month? Traffic up or down, leads up or down, key numbers at a glance.
- What did you do? A short summary of the work completed — not a task list, but a narrative.
- What are you doing next? One or two priorities for the coming month.
- Anything I should worry about? Flags for things that need the client's attention or decision.
That's it. Nobody wants a 15-page PDF with 40 charts. They want a one-page summary they can read in three minutes. Once I understood that, automation became straightforward.
The Architecture: Three Steps
My reporting system has three pieces, all built with Claude Code in a single afternoon.
Step 1: Data Collection Script
I wrote a Python script that pulls the key metrics from each client's data sources. For most of my clients, that means Google Analytics (via the GA4 API), Google Search Console, and sometimes Stripe or a CRM. The script runs on the first of each month and dumps everything into a structured JSON file — one per client.
The JSON includes: total sessions, organic sessions, top 10 landing pages by traffic, conversion count, revenue (if applicable), top search queries, and click-through rates. It also pulls the same data from the previous month so comparisons are automatic.
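To make the comparison step concrete, here's a minimal sketch of what one client's data file and the month-over-month math might look like. The field names and numbers are my own illustration, not the exact schema from my script:

```python
import json

# Hypothetical shape of one client's monthly data file. The field
# names here are illustrative, not the script's exact schema.
report_data = {
    "client": "acme-co",
    "month": "2024-05",
    "current": {
        "total_sessions": 14250,
        "organic_sessions": 8900,
        "conversions": 37,
        "revenue": 12400.00,
        "top_landing_pages": [{"path": "/pricing", "sessions": 2100}],
        "top_queries": [{"query": "acme pricing", "clicks": 430, "ctr": 0.061}],
    },
    "previous": {
        "total_sessions": 13100,
        "organic_sessions": 8200,
        "conversions": 31,
        "revenue": 11050.00,
    },
}

def month_over_month(current: dict, previous: dict) -> dict:
    """Percent change for every numeric metric present in both months."""
    deltas = {}
    for key, now in current.items():
        before = previous.get(key)
        # Skip lists (top pages, top queries) and metrics missing last month.
        if isinstance(now, (int, float)) and isinstance(before, (int, float)) and before:
            deltas[key] = round((now - before) / before * 100, 1)
    return deltas

deltas = month_over_month(report_data["current"], report_data["previous"])
print(json.dumps(deltas, indent=2))
```

Because last month's numbers live in the same file, the comparison logic stays a few lines of arithmetic instead of a second API round trip.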
Claude Code wrote 90% of this script. I described the data sources and the output format I wanted, pointed it at the API documentation, and it handled the authentication setup, error handling, and data transformation. The only part I wrote manually was the credential management, because I wanted credentials handled in a specific way.
Step 2: Report Generation
This is where Claude Code really shines. I feed it the JSON data file and a prompt that says, essentially: "You are writing a monthly performance report for [client name]. Their business is [type]. Here's this month's data and last month's data. Write a one-page summary covering what changed, what we did, what's next, and any flags."
The prompt also includes a few rules: no jargon the client wouldn't understand, lead with the most important metric, and keep it under 400 words. I include a list of the work we actually completed that month (pulled from our project management tool) so the report accurately reflects what happened.
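Assembling that prompt from the JSON is just string templating. This sketch approximates the prompt described above; the exact wording, variable names, and work-log format are my guesses:

```python
import json

# Approximation of the report prompt. The rules and structure follow
# the description above; the exact phrasing is illustrative.
PROMPT_TEMPLATE = """You are writing a monthly performance report for {client}.
Their business is {business_type}.

Rules:
- No jargon the client wouldn't understand.
- Lead with the most important metric.
- Keep it under 400 words.

This month's data:
{current}

Last month's data:
{previous}

Work completed this month:
{work_log}

Write a one-page summary covering what changed, what we did,
what's next, and any flags that need the client's attention."""

def build_prompt(client, business_type, current, previous, work_items):
    return PROMPT_TEMPLATE.format(
        client=client,
        business_type=business_type,
        current=json.dumps(current, indent=2),
        previous=json.dumps(previous, indent=2),
        work_log="\n".join(f"- {item}" for item in work_items),
    )

prompt = build_prompt(
    "Acme Co", "B2B SaaS",
    {"organic_sessions": 8900}, {"organic_sessions": 8200},
    ["Published 4 blog posts", "Fixed crawl errors on /docs"],
)
print(prompt)
```

Keeping the rules inside the template means every client's report is generated under the same constraints, and tone tweaks happen in one place.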
The output is clean, professional prose that sounds like me. The first time I ran it, I had to adjust the tone slightly — it was a bit too formal. After tweaking the prompt once, every report since has been spot-on. My clients haven't noticed any difference in quality. A few have actually commented that the reports are easier to read now.
Step 3: Formatting and Delivery
The generated text goes into a simple HTML template that matches my brand — logo, colors, consistent layout. Claude Code built the template too. The script outputs a PDF and also saves an HTML version I can link to in an email.
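A stripped-down version of that formatting step, using only the standard library, might look like this. The template fields and file-naming scheme are assumptions; the real template carries the branding:

```python
from string import Template
from pathlib import Path

# Minimal stand-in for the branded HTML template. The real one has
# the logo, colors, and full layout; field names here are illustrative.
TEMPLATE = Template("""<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>$client - $month report</title></head>
<body>
  <h1>$client: $month in review</h1>
  $body_html
</body>
</html>""")

def render_report(client: str, month: str, body_html: str, out_dir: str = ".") -> Path:
    html = TEMPLATE.substitute(client=client, month=month, body_html=body_html)
    out = Path(out_dir) / f"{client.lower().replace(' ', '-')}-{month}.html"
    out.write_text(html, encoding="utf-8")
    # For the PDF version, an HTML-to-PDF tool such as WeasyPrint
    # (HTML(string=html).write_pdf(...)) is one option; the post
    # doesn't specify which converter is used.
    return out

path = render_report("Acme Co", "2024-05", "<p>Organic traffic grew 8.5%.</p>")
print(path.name)
```

Saving the HTML alongside the PDF costs nothing extra and gives you a linkable version for the delivery email.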
I review each report before sending. This is the part that takes the 15 minutes. I read through, sanity-check the numbers against the dashboard, occasionally add a personal note about something specific we discussed, and hit send. The entire process from data pull to delivered report takes less time than the old process took just to open all the dashboards.
What I Learned Building This
A few things surprised me along the way:
The data collection is the hardest part. Writing the report is easy for AI. Getting clean, reliable data from APIs that change their authentication schemes every six months — that's where the real work is. Budget more time for this step than you think you'll need.
Clients care about narrative, not data. The best reports I generate aren't the ones with the most metrics. They're the ones that tell a clear story: "Organic traffic grew 12% because the blog posts we published in February are starting to rank. Here's what we're doubling down on next month." Clients remember the story. They forget the numbers.
Automation doesn't mean hands-off. I still review every report. I still add personal context when it matters. The automation handles the 80% that's mechanical — the data pulling, the comparisons, the first draft. The 20% that's judgment and relationship-building stays with me. That's the right split.
The goal of automated reporting isn't to remove yourself from the process. It's to remove the drudgery so you can focus on the insight.
Key Takeaways
- Start with what your client actually reads. Most clients want a one-page summary, not a data dump. Design your automated report around their attention span, not your data sources.
- Separate data collection from report writing. Build a reliable data pipeline first, then let AI handle the narrative. Trying to do both at once creates fragile systems.
- Always review before sending. AI-generated reports are good enough to send with minor edits, but never good enough to send without reading. Your reputation is on every page.
If you're spending hours on client reports every month and want to see how this could work for your specific setup, check the FAQ or book a call. I can usually build a working prototype of your reporting pipeline in a single session.