# AGENTIC Conversational Audit Playbook (V2)

## Entry point to the AGENT Pipeline (via Kickoff)

*Part of the AGENTIC Framework V5.2 by Madeleine Pierce.*

---

> Nobody documents their workflows unprompted. But everyone can talk about their work.

The Conversational Assessment is how AGENTIC actually starts: not with a blank template, but with a conversation. This is the full extraction methodology. If you're running your very first scan and want to move fast, the Kickoff Playbook provides a lighter entry point.

---

## How it works

The Conversational Assessment replaces the traditional approach of asking people to document their processes in writing. Instead, it uses a four-step flow:

**Record** → **Extract** → **Validate** → **Prioritise**

1. **Record**: A team member talks through their work. What they do, how they do it, what tools they use, where things get stuck. No template. No structure. Just talk.
2. **Extract**: AI processes the transcript and pulls out workflows, steps, handoffs, decision points, exceptions, tacit knowledge, and workarounds, structured into the Assess Discovery Pack format automatically.
3. **Validate**: The person reviews the AI's extraction and corrects it. This is where the real accuracy comes from: people are much better at reacting to a draft than creating from scratch.
4. **Prioritise**: AI analyses all extracted workflows across the team and recommends which ones are worth interrogating further through a full Assess pass.

The output of this process is a populated **Assess Discovery Pack**: the same template, just filled in through conversation rather than manual documentation.
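To make the data flow between the four steps concrete, here is a minimal sketch of the flow as typed functions. Everything in it is illustrative, not part of the AGENTIC templates; the `Workflow` fields simply mirror what Step 2 extracts.

```python
# A minimal sketch of the four-step data flow. All names are illustrative,
# not part of the AGENTIC templates; the fields mirror what Step 2 extracts.
from dataclasses import dataclass, field

@dataclass
class Workflow:
    name: str
    steps: list[str] = field(default_factory=list)
    triggers: list[str] = field(default_factory=list)
    handoffs: list[str] = field(default_factory=list)
    decision_points: list[str] = field(default_factory=list)
    tools: list[str] = field(default_factory=list)
    exceptions: list[str] = field(default_factory=list)
    tacit_knowledge: list[str] = field(default_factory=list)
    workarounds: list[str] = field(default_factory=list)

def record(person: str) -> str:
    """Step 1: a free-form conversation, captured as a raw transcript."""
    raise NotImplementedError

def extract(transcript: str) -> list[Workflow]:
    """Step 2: AI structures the transcript into draft workflows (~70-80% accurate)."""
    raise NotImplementedError

def validate(drafts: list[Workflow], person: str) -> list[Workflow]:
    """Step 3: the person corrects the draft, lifting accuracy to 95%+."""
    raise NotImplementedError

def prioritise(team_workflows: list[Workflow]) -> list[Workflow]:
    """Step 4: score and rank all validated workflows across the team."""
    raise NotImplementedError
```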
---

## Step 1: Record

### What you need

- A way to record a conversation (voice memo, video call recording, transcription app)
- A facilitator: someone who steers the conversation using the guide below (can be a person or an AI agent)
- 30–60 minutes per person
- A quiet, comfortable setting: people share more when they feel relaxed and unhurried

### Who to record

Start with the people who actually do the work. Not managers describing how things should work. Not process owners reading from documentation. The people whose hands are on the keyboard, whose judgment gets applied, whose workarounds keep the system running.

For a full team audit, record:

- Every person who touches a core workflow
- At least one person in each role that interacts with the workflow
- The person who has been there longest (they hold the institutional memory)
- The person who joined most recently (they see the gaps and oddities that veterans have normalised)

### The conversation, not an interview

This is not a structured interview. It is a guided conversation. The goal is to get people talking naturally about their work, not to interrogate them with a checklist.

The facilitator's job is to:

- Ask open questions and follow the thread
- Gently steer back to specifics when the conversation drifts too far
- Listen for the things people mention casually; those are often the most important ("oh, and then I usually just check the spreadsheet Dave keeps, but that's not really part of the process")
- Never correct or challenge, just capture

### Conversation guide

The following prompts are organised in the order a natural conversation tends to flow. You don't need to ask every question. Use them as starting points and follow wherever the person takes you.

**Opening: getting them talking**

- *Tell me about a typical day in your role. What does your morning usually look like?*
- *What's the first thing you do when you sit down to work?*
- *What takes up most of your time in a typical week?*

**Zooming into workflows**

- *You mentioned [thing]. Walk me through exactly what happens when that comes in. Start from the very beginning.*
- *What triggers that? How do you know it's time to start?*
- *And then what do you do? Talk me through it step by step.*
- *What tools or systems are you using at each point?*
- *Who else is involved? When do they come in?*

**Handoffs and dependencies**

- *When you finish your part, what happens next? Who picks it up?*
- *How do you hand it over? Email? Slack? Just leave it somewhere?*
- *What do they need to know that you don't always tell them?*
- *Is there ever a delay here? What causes it?*

**Decision points and judgment**

- *Is there a point where you have to make a call on something?*
- *What information do you use to make that decision?*
- *Could you teach someone else how to make that call? What would you tell them?*
- *What happens when it's not clear-cut? What do you do then?*

**Exceptions and problems**

- *What goes wrong with this? What's the most common problem?*
- *What's the weirdest thing that's happened?*
- *What do you do when you get stuck? Who do you ask?*
- *Is there anything you handle differently depending on the situation: certain clients, certain types, certain times of year?*

**Hidden knowledge and workarounds**

- *Is there anything about how you do this that isn't written down anywhere?*
- *Are there any shortcuts or workarounds you use that aren't part of the official process?*
- *If you were off sick tomorrow, what would someone else get wrong?*
- *What would a new starter need to know that nobody would think to tell them?*

**Feelings and readiness (ask gently, towards the end)**

- *How do you feel about the way this process works? Is there anything you'd change?*
- *If some of this could be handled by AI, which parts would you be happy to let go of?*
- *Which parts would you want to keep doing yourself?*
- *What would you need to see before you'd trust a system to do part of this?*

**Closing**

- *Is there anything about your work that I haven't asked about but should have?*
- *Who else should I talk to about this?*
- *Anything you want to make sure gets captured?*

### Recording tips

- **Get consent.** Always tell people what the recording is for, who will hear it, and how it will be used. Be transparent: "We're going to use AI to extract the workflows from this conversation so we can understand how work actually flows. You'll get to review everything before it goes anywhere."
- **Record audio, not just notes.** Notes lose nuance. The pauses, the hedging ("well, technically we're supposed to..."), the asides: those are where the real process lives.
- **Use transcription.** Most recording tools (Otter, Zoom, Teams, etc.) can auto-transcribe. The transcript is what feeds into the extraction step. If your tool only produces audio, a local transcription model works too; see the sketch below.
- **Let it run long.** The best material often comes after the person thinks the "real" conversation is over. Give them space.
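If a recording tool only gives you audio, an open-source speech-to-text model can produce the transcript locally. A minimal sketch using the openai-whisper package (the filenames are illustrative):

```python
# Minimal local transcription sketch using the open-source openai-whisper
# package (pip install openai-whisper). Filenames are illustrative.
import whisper

model = whisper.load_model("base")                # small, CPU-friendly model
result = model.transcribe("interview_audio.m4a")  # returns a dict with "text"

# The plain-text transcript is what feeds into Step 2 (Extract)
with open("interview_transcript.txt", "w") as f:
    f.write(result["text"])
```

Don't clean the resulting text first; as noted under "Common problems" below, the extraction prompt is designed for messy, natural transcripts.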
---

### "Show me": live screen capture during the conversation

The most powerful moment in any audit conversation is when someone starts describing a screen-based workflow and you say: *"Can you show me? Let's capture it right now."*

This is not a separate step. It is a technique you use mid-conversation whenever someone describes a workflow that happens on a computer. They talk, they hit a workflow, they start describing clicks and systems and tabs, and you pivot from listening to watching. The conversation continues, but now you can see exactly what they do.

Voice conversations capture how people *think about* their work. Screen recordings capture how they *actually do* it. The gap between the two is where the most valuable insights live: the steps so automatic they've become invisible, the tab switches nobody thinks to mention, the quick checks that "aren't really part of the process" but absolutely are.

**How the pivot works:**

You're mid-conversation. They say something like *"So then I go into the system and check the status and update the record..."* That's the moment. You say:

*"Actually, can you show me? If you're able to share your screen right now, I'd love to watch you do this one in real time. I'll keep recording so we capture the whole thing. Just talk me through it as you go, like you're showing a new starter."*

If they say yes, start recording the screen (or turn on Scribe/Tango to auto-capture). If they say no, that's fine: keep talking, keep the conversation going. Never push.

**Why people feel uncomfortable, and how to address it honestly:**

Screen recording makes people uneasy. That's normal, and it needs to be addressed directly, not glossed over. People worry about being judged, about being watched, about doing something wrong on camera. Some will feel like they're being performance-reviewed. Others will worry the recording will be used to justify replacing them. These concerns are reasonable.

Be direct: *"This isn't about judging how you work, it's about understanding what the workflow actually involves so we can document it properly. You'll see everything we capture, and nothing gets used without your review. If you want to pause at any point, just say so."*

The key messages to land:

- **This is about the workflow, not about you.** You're documenting the process, not evaluating the person.
- **You stay in control.** They can pause or stop at any time. They review everything before it goes anywhere.
- **This protects your knowledge.** Getting a workflow on record means the organisation understands the value of what you do, which is harder to see when it lives entirely in your head.
- **Mistakes are useful.** If something goes wrong during the recording, that's valuable data: it shows how you handle exceptions, which is the hardest thing to capture any other way.

**What not to do:**

- **Don't record without explicit consent.** Ever. Consent comes from the person, not from their manager.
- **Don't dismiss discomfort.** If someone is genuinely uneasy, don't push. A relaxed person talking about their work produces better data than an anxious person performing on camera. Keep the conversation going without the screen share.
- **Don't capture personal data.** If the workflow involves personal emails, messages, or accounts, agree in advance what gets paused over. Build in pause points.
- **Don't keep recordings longer than needed.** Once workflows are extracted and validated, delete the recordings. Tell people this upfront.

**During the live screen capture:**

- Let them work at their natural pace. Don't rush.
- Ask light questions: *"What are you checking here?"* / *"Why did you switch to that tab?"* / *"What would you do if that field was blank?"*
- Watch for the things they do automatically without mentioning them; these are the steps they'd forget to describe from memory.
- If they make a mistake or hit a problem, ask them to talk through how they handle it. This is gold.

**After:**

- Thank them. Acknowledge that being recorded is uncomfortable and that their contribution matters.
- If time allows, do a quick review together. This often surfaces things like *"Actually, that's not how I normally do it, I was being more careful because you were watching."* That last comment is especially important. Flag it.

**Tools for live screen capture:**

| Tool | What it does | Best for |
|---|---|---|
| **Descript** | Video + auto-transcription + text-based editing. Edit the video by editing the transcript. | The best all-in-one option: you get screen recording, automatic transcription, and can edit both together. Makes it easy to pull out specific workflow segments and share them for validation. |
| **Scribe** | Auto-generates step-by-step guides with screenshots from screen activity | Running in the background during the conversation; captures every click automatically |
| **Tango** | Similar to Scribe; creates visual step-by-step documentation | Quick workflow capture with minimal setup |
| **Loom** | Screen + audio recording | When you want the narration and the screen together in one shareable file |
| **Zoom / Teams screen share** | They share their screen during the call; you record | Remote conversations: the facilitator watches and asks questions live |

**How screen captures feed into extraction:**

The output from screen capture goes into the same extraction step as the voice transcript. Feed both into the [[AGENTIC_Workflow_Extraction_Prompt]]:

- Auto-generated step documentation from Scribe or Tango: paste it alongside the transcript
- Narrated screen recordings from Loom or Zoom: transcribe the narration and paste it in

The AI processes conversation and screen capture data together. The screen data fills in the steps the person forgot to mention verbally, and the conversation data provides the context, judgment, and feeling that a screen recording alone cannot capture.

**The real insight:**

This is not about catching people out. It's about the fact that experienced workers have automated so much of their process in their own heads that they genuinely cannot describe it accurately from memory. When someone has done a task a thousand times, half the steps have become invisible to them. The screen recording makes those steps visible again: for the person, and for the system that will eventually need to know about them.

**A note on keeping recordings (the future value):**

There is a strong case for retaining screen recordings (with consent) beyond the immediate extraction step. Today, AI processes transcripts and auto-generated step documentation to extract workflow data. But multimodal AI is advancing rapidly, and the recordings you capture now will become increasingly valuable over time.

In the near future, AI agents will be able to watch a screen recording of a workflow and learn it directly, the same way a new employee learns by watching a colleague. The video becomes training data: the agent sees which systems are used, what sequence the steps follow, how the person navigates between tabs, where they pause to make decisions, and how they handle exceptions.

This means the recordings captured during Audit are not just documentation assets. They are future training assets for the agents that will eventually run these workflows. Every screen recording is a lesson that an agent may one day learn from directly.

**Practical guidance:**

- Ask for explicit consent to retain recordings for future AI training use; this is a separate consent from the immediate documentation purpose
- Store recordings securely with clear access controls and retention policies
- Label and catalogue them so they can be retrieved when the technology is ready (one possible record format is sketched below)
- Respect anyone who consents to documentation but not to retention: delete their recordings after extraction, as promised
- Revisit this with the AI Governance stream; retention of screen recordings has governance, privacy, and data protection implications that should be documented
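As a sketch of what "label and catalogue" might look like in practice, here is one possible record format. The field names and example values are illustrative assumptions, not an AGENTIC-prescribed schema; the important part is capturing the two consents separately.

```python
# Illustrative catalogue record for a retained screen recording.
# Field names and example values are assumptions, not a prescribed schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class RecordingRecord:
    file_path: str                # where the recording is stored (access-controlled)
    person: str                   # who was recorded
    workflows_covered: list[str]  # workflows captured in this session
    consent_documentation: bool   # consent for immediate extraction and documentation
    consent_retention: bool       # separate consent for future AI training use
    delete_after: date | None     # retention deadline; None only if retention was consented

example = RecordingRecord(
    file_path="recordings/invoicing_walkthrough.mp4",
    person="Accounts team member",
    workflows_covered=["Invoice processing"],
    consent_documentation=True,
    consent_retention=False,      # documentation only: delete after extraction
    delete_after=date(2026, 1, 31),
)
```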
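Here is a minimal sketch of running the extraction programmatically, using the Anthropic Python SDK as one example. The model name and file paths are placeholders, and appending screen-capture steps is optional, per the "Show me" section above.

```python
# Minimal extraction sketch using the Anthropic Python SDK (pip install anthropic).
# The model name and file paths are placeholders; any capable LLM works here.
from pathlib import Path
import anthropic

extraction_prompt = Path("AGENTIC_Workflow_Extraction_Prompt.md").read_text()
transcript = Path("interview_transcript.txt").read_text()
screen_steps = Path("scribe_step_documentation.md").read_text()  # optional

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use any capable model
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": (
            f"{extraction_prompt}\n\n"
            f"--- CONVERSATION TRANSCRIPT ---\n{transcript}\n\n"
            f"--- SCREEN CAPTURE STEPS ---\n{screen_steps}"
        ),
    }],
)

# The structured extraction, ready for validation in Step 3
print(response.content[0].text)
```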
---

## Step 3: Validate

### What happens here

The extracted workflows are returned to the person who was recorded, and they review them. This is where accuracy goes from good to excellent.

### How to run validation

**Option A: Review document**

Send the person the structured extraction and ask them to mark it up:

- ✅ Correct
- ❌ Wrong, with their correction
- ❓ Missing, things the AI didn't capture
- ⚠️ Sensitive, things they said that they don't want documented (respect this)

**Option B: Validation conversation (recommended)**

Walk through the extraction with the person in a short follow-up conversation (15–20 minutes). This is more effective because:

- People react better verbally than in writing
- Seeing their workflow structured often triggers additional details ("Oh, I forgot to mention, before that step, I always check...")
- It builds trust in the process: they see their input being taken seriously

**Option C: AI-assisted validation**

Use an AI agent to walk the person through each extracted workflow interactively:

- *"Here's what I understood about how you process invoices. Step 1 is... Step 2 is... Does that sound right?"*
- The person confirms, corrects, or adds to each step
- The AI updates the extraction in real time

### Validation questions

- *Does this look like what you actually do?*
- *Is anything in the wrong order?*
- *Is anything missing, steps I didn't capture?*
- *Is anything here that shouldn't be, something I misunderstood?*
- *Are the decision points right? Is that really how you decide?*
- *Did I capture the exceptions correctly?*

### After validation

The validated extraction becomes the populated **Assess Discovery Pack** for each workflow. Parts 1–3 of the template (workflow map, handoffs, decisions, exceptions, tacit knowledge) are now filled in from conversation rather than manual documentation. Parts 4–7 (workflow health, optimisation, organisational readiness, recommendation) are completed by the Assess lead using the validated data as input.

---

## Step 4: Prioritise

### What happens here

Once multiple people have been recorded and their workflows extracted and validated, the AI analyses the full set and recommends which workflows are worth taking through the full AGENT pipeline.

### Prioritisation criteria

The AI scores each extracted workflow against five signals:

| Signal | What it means | Where it comes from |
|---|---|---|
| **Frequency** | How often does this workflow run? | Mentioned in conversation: daily, weekly, per event, etc. |
| **Pain** | How much friction, frustration, or time does this workflow cause? | Emotional signals in the transcript: complaints, workarounds, sighs |
| **Dependency risk** | How dependent is this workflow on specific people or undocumented knowledge? | Tacit knowledge density: if one person holds all the knowledge, it's fragile |
| **Repeatability** | How consistent and rule-based is this workflow? | Decision point analysis: more rules = more automatable; more judgment = less |
| **Cross-mention** | Do multiple people describe the same workflow? | Cross-referencing transcripts: workflows mentioned by several people are central |
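As an illustration of how the five signals might combine into a ranking, here is a minimal sketch. The 1–5 scales, the equal weighting, and the example workflows are assumptions for illustration; AGENTIC does not prescribe a specific formula.

```python
# Illustrative prioritisation scoring across the five signals.
# The 1-5 scales, equal weighting, and example data are assumptions.
from dataclasses import dataclass

@dataclass
class WorkflowScore:
    name: str
    frequency: int        # 1 (rare) to 5 (runs many times a day)
    pain: int             # 1 (smooth) to 5 (constant friction)
    dependency_risk: int  # 1 (well documented) to 5 (lives in one person's head)
    repeatability: int    # 1 (pure judgment) to 5 (fully rule-based)
    cross_mention: int    # 1 (one person mentions it) to 5 (everyone does)

    @property
    def total(self) -> int:
        return (self.frequency + self.pain + self.dependency_risk
                + self.repeatability + self.cross_mention)

candidates = [
    WorkflowScore("Invoice processing", frequency=5, pain=4,
                  dependency_risk=3, repeatability=4, cross_mention=5),
    WorkflowScore("Quarterly reporting", frequency=2, pain=5,
                  dependency_risk=5, repeatability=2, cross_mention=3),
]

# Highest totals are the strongest candidates for a full Assess pass
for w in sorted(candidates, key=lambda w: w.total, reverse=True):
    print(f"{w.total:>2}  {w.name}")
```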
### Output

The AI produces a **Workflow Discovery Report** containing:

1. **All workflows identified** across all conversations, deduplicated and grouped
2. **A scored prioritisation** of which workflows are best candidates for a full Assess pass
3. **A recommended starting point**: the single best workflow to take through AGENTIC first, with rationale
4. **Fragility flags**: workflows that depend entirely on one person's knowledge and represent organisational risk regardless of automation intent
5. **Quick wins**: simple, high-frequency workflows that could be automated with minimal effort (these build momentum and trust)

### What happens next

The top-priority workflows get a full Assess pass, using the **Assess Discovery Pack** template, now pre-populated with data from the conversational extraction. The Assess lead completes the remaining sections (workflow health, organisational readiness, recommendation) and the workflow proceeds to Greenlight.

The Workflow Discovery Report feeds directly into Greenlight later in the pipeline.

---

## Running this at scale

### For a single team (5–10 people)

- Record each person individually (30–60 minutes each)
- Run extraction and validation
- Produce the Workflow Discovery Report
- Timeline: 1–2 weeks

### For a department (20–50 people)

- Record a representative sample (not everyone: aim for coverage of roles, not coverage of people)
- Supplement with group sessions where teams walk through shared workflows together
- Cross-reference individual and group extractions
- Timeline: 3–4 weeks

### For an organisation

- Start with one team. Prove the approach. Use the results to build credibility.
- Expand team by team, not all at once
- The Workflow Discovery Reports from each team feed into an organisation-wide pipeline view
- Timeline: phased over months, not weeks

---

## Tooling recommendations

| Function | Options |
|---|---|
| **Recording + transcription** | Descript (recommended: records, transcribes, and lets you edit video by editing text), Zoom, Teams, Otter.ai, Loom |
| **Screen capture** | Descript (screen + audio + transcript in one), Scribe, Tango, Loom, Zoom/Teams screen share |
| **Transcription only** | Descript, Otter.ai, Whisper, Zoom auto-transcription, Teams transcription, Rev |
| **Extraction** | Use the AGENTIC Workflow Extraction Prompt with Claude, GPT-4, or equivalent |
| **Validation** | Google Docs with comments, Notion, or a follow-up conversation |
| **Prioritisation** | AI-generated report using the extraction outputs as input |

---

## Common problems and how to handle them

**"People won't talk openly."**
Start with people who are enthusiastic about improvement. Use their results as examples. Trust spreads through demonstration, not persuasion. Also: make it clear this is about understanding the work, not evaluating the person.

**"The transcripts are messy."**
They will be. That's fine. The extraction prompt is designed to handle natural, unstructured conversation. Don't try to clean the transcript first; let the AI do the heavy lifting.

**"People describe workflows differently."**
That's a feature, not a bug. When two people describe the same workflow differently, you've found an inconsistency that the full Audit needs to investigate. Flag it.

**"The extraction misses things."**
Expected. That's what Step 3 (Validate) is for. The extraction is a first draft, not a final document. Aim for 70–80% on first pass and correct from there.

**"We don't have time for 45-minute conversations."**
Even 15 minutes per person produces useful data. Start short, go deeper on the workflows that matter. Something is always better than nothing.

**"Leadership wants to see results before investing time."**
Run one conversation. Extract the workflows. Show leadership the Workflow Discovery Report for that single person. The density of what a single conversation reveals is usually enough to make the case.

---

## Related documents

- [[AGENTIC_Framework_V5]], the full AGENTIC V5 methodology
- [[AGENTIC Assess Discovery Pack]], the Assess Discovery Pack template (populated by this process)
- [[AGENTIC_Workflow_Extraction_Prompt]], the AI prompt for processing transcripts
- [[AGENTIC_Govern_Template]], the AI Governance stream template (runs alongside the pipeline)

---

*The best audit starts with a conversation, not a spreadsheet.*

---

*Part of the AGENTIC Framework V5.2 by Madeleine Pierce.*