Setup
One morning at school drop-off, a friend mentioned that he now works on projects primarily through dictation: knocking out chunks of documentation and project work with AI-assisted tools, designing frameworks, and even running workflows from his phone on the drive into work.
The efficiency and productivity he described sounded amazing, so I thought I would give it a try. I regularly produce documentation for analytics projects, write content around work and ideas, and have thoughts I would like to capture in a meaningful way to incorporate later. What followed was something less than inspiring. I felt absolutely ridiculous and completely unnatural talking to my computer as if into the ether. My language was disjointed, and it didn't make a lot of sense. While I can speak competently and at length on many topics in a professional setting, using native dictation software in a Word document just wasn't working for me.
The question that followed was simple: what if I made this more conversational? What if I built out a utility that I could give topics to — or the details of a project — and it could prod me for the content, recording my answers as we go along?
It seemed like a good use case for a build with Claude Code, so I started off that morning with a mission. What began as a pet project quickly became something else entirely, and quite instructive. I want to tell you how this project showed me what analytics in practice should look like when execution is cheap, iteration is fast, and subject matter expertise is right at your fingertips.
The value here isn't about a specific tool or how you too can build this utility. It's about how decisions should be made, where humans provide the most value, and how AI fits into and drives an analytics ecosystem.
The Problem Worth Solving
The friction was already there before the build started. I found myself bouncing between copilot tools, using LLMs for different tasks, moving files around, and manually organizing everything. That time could have been better spent elsewhere, and these tasks are exactly what the newer AI tools in the marketplace are well suited for.
What I wanted was simple: a way to work more efficiently, perform quick interactive interviews, generate base content, save the results, clean them up — and do all of these extra tasks I was doing manually, in one go.
I’m sure there are any number of utilities out there that could have sufficed. But I wanted to take on a project as a use case for an agentic AI coding tool. I wanted to build something I could own with Claude Code.
What I Built
Starting from Scratch
I started entirely from scratch: no existing code base, no prior development sessions. The work began in the Claude UI, going over some initial thoughts, asking along the way whether certain approaches would work well, and getting questions back about design and infrastructure. Claude produced a markdown file that went into the project directory. Then, after launching Claude Code in planning mode and answering some more design questions, I had a functional, running front end in a matter of minutes.
I didn't have to go look up any API documentation. I didn't have to look up anything: no infrastructure research, no compatibility questions. I didn't even have to perform the installs or get dependencies working on my laptop. Claude Code took care of all of that. When a piece of information was needed, like my personal API keys, Claude would give me the exact URL to visit along with navigation instructions, and I would feed the information back to Claude Code. The whole process was straightforward.
What's worth noting is how Claude Code handles the work itself. It has access to the file system and environment, and it can perform most tasks on its own, asking permission before executing something in the command prompt or in Python. You can give it as much supervision as you want and be as involved as you want. What I wasn't expecting is that you can just let it loose and it will produce solid output, provided it's given well-defined instructions.
The Iterative Loop
Within the day, there was output from actual conducted interviews. This was really all I had set out to do, but the outputs were rough. The content had issues: audio artifacts introduced by Google's voice-to-text, subtle inaccuracies, everything coming out as one giant run-on block with no ideas separated. Rather than spending time cleaning content manually, I asked for help. I asked the Claude UI to take the raw back-and-forth between myself and the Anthropic API, extract my words, and clean them up. Then I could simply say: I want this to be part of the utility; can you provide a prompt for Claude Code to perform these same tasks and produce cleaned transcripts automatically at the end of each interview? Any subsequent task I found myself doing manually, I started in the Claude UI, iterated until it was doing what I wanted, then asked for a prompt for Claude Code. Changes were made in minutes and I could test immediately.
From there, each capability followed the same pattern: identify friction, describe the need in natural language, open a new session, test, and iterate. Mobile access. Subnet hosting. Security architecture discussions. Work that would have taken a whole team of developers and testers weeks or months, I did myself in hours.
This loop — identifying friction, truly understanding it, describing it clearly in natural language, and refining the solution with an AI tool — is how effective analytics teams and resources will be operating in the near future, if they aren’t already.
The Bug That Required Real Engineering
The voice recognition layer for mobile was the part of this project that most felt like real engineering work. The symptoms: spoken text was duplicating — sometimes single words, sometimes full phrases — and sometimes text would be lost or cut off before it was committed to the screen.
The approach to debugging was deliberate. I would start off by telling Claude Code: hey, I’d like to talk about this first, don’t make any programming changes — so that I could explain something without having the code base automatically updated and I could get a better understanding of whether it was a hardware issue, a software issue, or an issue with one of the APIs.
My initial suspicion was the microphone picking up echo. Testing with a quality noise-canceling headset ruled that out. The problem proved to be in how the Chrome browser was configured: how audio was being sent to Google, then how it was preserved on screen by the application. The solution we established was configuring continuous listening once recording started, then saving pieces as they were streamed so everything was captured. That, in turn, caused some text to duplicate. Especially on mobile versions of Chrome, we found we couldn't work around some of the limitations; instead, we wrote code to address the issue after the fact: if the last block of text streamed matches the previous block, we know it's some level of duplication and can deduplicate within the application itself.
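That after-the-fact check can be sketched as a small pure function. This is a minimal illustration under my own assumptions, not the project's actual code; the function name `appendChunk` and the exact matching rule are hypothetical:

```typescript
// Hypothetical sketch of post-hoc deduplication for streamed speech chunks.
// Chrome's speech recognition layer can re-emit a final result that repeats
// the previously committed block, especially on mobile, so each new chunk is
// compared against what was last committed before it reaches the screen.

function appendChunk(committed: string[], chunk: string): string[] {
  const trimmed = chunk.trim();
  if (trimmed === "") return committed; // ignore empty results
  const last = committed[committed.length - 1];
  // An exact repeat of the last block, or a tail the last block already
  // ends with, is treated as duplication and dropped.
  if (last !== undefined && (trimmed === last || last.endsWith(trimmed))) {
    return committed;
  }
  return [...committed, trimmed];
}

// Example: the recognizer streams "hello there" twice in a row.
const chunks = ["hello there", "hello there", "how are you"];
const transcript = chunks.reduce(appendChunk, [] as string[]);
console.log(transcript); // ["hello there", "how are you"]
```

In the real utility this kind of check would sit in the recognizer's result handler, before text is committed to the screen. The trade-off is that a speaker who genuinely repeats a phrase back-to-back gets deduplicated too, which for dictation is usually the right call.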
Some of the changes made along the way disabled the microphone completely. The testing loop looked like this: I would do the user testing and explain in natural language what was going on; Claude Code would update the code and explain the changes; I would move the update to my production environment, relaunch the server, and continue testing. Ultimately, asking Claude to go into planning mode and be more conversational made the difference. We were able to isolate issues, make changes one at a time, and get it resolved, so that audio recording, speech recognition, and committing text to the screen all worked correctly and consistently across devices.
Four iterations across sessions to reach a stable solution. The kind of problem that, working alone with Stack Overflow and documentation tabs, would have taken a day. Working this way, it took under an hour.
This was a reminder that AI tools don’t eliminate engineering judgment — they simplify and focus it. The work shifts from writing syntax to understanding interactions and how they contribute to failures.
When Claude Built More Than I Asked For
An unexpected turn arrived when I tested the voice interview utility for the first time. Claude had interpreted the word “interview” through a job-interview lens and, without being asked, extended the project to include scoring, performance feedback, and structured summaries of strengths and weaknesses.
I've seen vendors selling automated interview evaluation tools for years. I was quite surprised that Claude produced this feature, performing with remarkable accuracy, without my specifically requesting it: on a first pass, in code Claude Code put together in ten minutes, with maybe a couple of hours of planning and preparation on my end. Rather than removing the feature, I decided to keep it and separate it into a dedicated mode for job interview practice, with optional job description and resume inputs. It was ready in minutes on the second pass. I've seen companies charging hundreds of thousands of dollars for this type of capability. Claude Code produced it without being asked.
What this really showed me is that the barriers to building AI-powered capabilities are coming down fast. The differentiator is no longer the intellectual property — it’s design, deployment, execution, and governance.
The Deployment
When I set out on the project, I really wanted typed content from dictation, produced in a conversational, interview-style setup. Working from my laptop, I had that problem solved quickly. The initial build was so smooth and straightforward that I had time to think about how I would actually use the tool and what more I could get out of it. I started burning through my nice-to-have list: I wanted to use it from my phone, I wanted my wife to be able to try it, and I wanted to improve the output and have it suggest topics and organize the content. My goal throughout was to produce the entire project with Claude, so I asked the Claude UI for thoughts on how to make the tool more usable and host it within my subnet. I gave prompts to Claude Code, and it made configuration changes and installed dependencies.
Eventually I decided to load everything onto an unused Windows 11 mini PC I had sitting around — a GMKtec, about the size of a paperback book, on the home LAN. Claude UI helped me make a plan. Claude Code told me what to do and executed the changes. I asked it to produce a batch file to install all the dependencies and configuration changes necessary to move the utility over.
Claude produced an install batch file and an update batch file. The process was fairly painless but not without friction: an iterative loop of running the install, hitting an error, describing it back to Claude Code, and getting a fix. The final setup covered the Node.js install, HTTPS certificates via mkcert, firewall rules, and a Windows Task Scheduler entry that starts the server at boot under the SYSTEM account, no login required.
I've been using the utility in this setup for quite a while now and haven't had issues with crashes or unreliability. I've been continually surprised by the quality of the code and the redundancies Claude Code took into consideration.
A tool that requires maintenance is a tool you stop using.
The Reframe
Specific coding expertise is becoming less meaningful. It's less relevant to have coders who are fluent in particular languages, and you certainly don't need as many of them doing dev work when agentic AI coding tools are in play. But the more important shift is not about headcount. It's about what the scarce input to analytics teams actually is now.
Capabilities are becoming trivial. Code can be generated to do whatever you can imagine. It's more important to have the imagination on your team than the software that performs the task.
The distinction worth drawing is not between people who use AI and people who don't. Everyone is using AI in some way: LLMs, copilots, ChatGPT. The articles we read online and the scripts behind videos make us all consumers of content from large language models. Where I would draw the value distinction is between consuming AI-generated content and using AI to build actual resources, things that do other things: functional tools, models, and code bases that solve problems.
Anyone can create capabilities, anyone at all, using natural language and a tool like Claude Code. Historically, building was the biggest challenge; today it's the easy part. Claude can design a whole front end, build a model, make API calls and connections, and produce a back end. All of that development work can be done by anyone now. That's here today.
For analytics teams, this reshapes what to look for when hiring. The work analytics resources will perform in the future will largely be design and review, tasks that are far more accessible at a lower level of technical fluency than ever before. Analytics teams are going to need leaders who can translate business problems into potential solutions and design those solutions to be flexible and scalable. Hiring for an analytics team or a data science role becomes less about code experts and more about abstract thinking and organizational understanding.
Close
I started the day with a problem, an idea, and no code base, and I had a solution by the end of the morning.
That sentence is not a product claim. It’s a description of what happened — and a signal about what the execution barrier looks like for anyone willing to define a problem clearly and work with the tools available.
The overall interface with Claude Code is natural. You’re just describing how things need to work to achieve the desired outcome. Claude Code generates the entire code base. All you need is a basic understanding of what is possible, the ability to advise and make decisions, and a sharp understanding of project objectives.
I started off by describing the problem and asking: would this be a good use case for Claude Code? I got what felt like an honest answer — an absolute yes — and an invitation to talk more about it. Typing to the Claude UI and having it answer and ask me back questions made it natural. We’d already made a lot of the decisions in the planning phase before a single line of code was written.
I just guided the outcome in partnership with Claude Code, advising and approving how everything would execute. I was able to test, iterate, and add functionality, doing the work of a whole project team myself, in hours, taking a project from nothing to a solid, useful utility. A tool that meets my needs, built along the way to deliver as much utility as possible.
The question for analytics leaders watching this space is not whether these capabilities are real. The question is what your team is choosing to imagine.
The differentiator is no longer who can write the best, most elegant code or the most elaborate pipeline. It’s who can recognize what’s worth building, guide AI-assisted execution, and ensure the result actually fits the organization it’s meant to serve.
That’s the work I’m interested in doing — and the kind of teams I’m excited to help shape.