Module 1 · Foundation

What AI Agents Actually Are

Most enterprise sellers can't explain the difference between an AI assistant, a copilot, and an agent — and they lose deals because of it. Fix that here.

The Three-Layer Model

Every AI product on the market fits into one of three layers. Buyers confuse them constantly. Your job is to know exactly where your product sits — and why it matters.

Layer 1: AI Assistant

Responds to questions, generates content when prompted. Requires a human to initiate every single action. The human is always in the loop.

Reactive by design. Examples: ChatGPT, basic chatbots, FAQ bots.

Layer 2: AI Copilot

Embedded in workflows, suggests actions, augments human decisions. Works alongside humans but doesn't act independently without approval.

Advisory, not autonomous. Examples: GitHub Copilot, Salesforce Einstein.

Layer 3: AI Agent

Sets sub-goals, executes multi-step workflows, recovers from errors, maintains memory across sessions. Acts autonomously over extended periods to achieve business objectives.

Autonomous + goal-directed. Examples: Integrity Agents, Claude agents.
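
If a technical stakeholder wants the distinction in concrete terms, here is a minimal sketch in Python (the capability flags are hypothetical illustrations, not any product's real API) of how observable capabilities map to a layer:

```python
from dataclasses import dataclass

@dataclass
class ProductCapabilities:
    """Hypothetical capability flags to check in a vendor evaluation."""
    initiates_actions: bool     # proceeds without a human prompt at each step
    multi_step_goals: bool      # sets sub-goals and chains them toward an objective
    recovers_from_errors: bool  # retries or re-plans when something fails
    persistent_memory: bool     # retains context across sessions
    workflow_embedded: bool     # suggests actions inside a host tool

def classify_layer(p: ProductCapabilities) -> str:
    """Layer 3 needs end-to-end autonomy; Layer 2 advises; Layer 1 reacts."""
    if (p.initiates_actions and p.multi_step_goals
            and p.recovers_from_errors and p.persistent_memory):
        return "Layer 3: AI Agent"
    if p.workflow_embedded:
        return "Layer 2: AI Copilot"
    return "Layer 1: AI Assistant"

# A chat tool that answers questions but never acts on its own:
print(classify_layer(ProductCapabilities(False, False, False, False, False)))
# -> Layer 1: AI Assistant
```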

Market Reality

$10.91B · Global AI agents market size in 2024
$50B+ · Projected market size by 2030
45.8% · CAGR — fastest growing enterprise category
51% · Enterprises already running agents in production

Agentwashing Detection Guide

"Agentwashing" is when vendors call their product an AI agent when it's really just a chatbot or copilot. It's rampant. Here are the 5 red flags to spot it — and use to differentiate yourself.

1. Requires constant prompting to take the next step

A real agent sets its own sub-goals and proceeds without a human pushing it forward at every step. If the vendor says "the user tells it what to do next," that's a copilot at best.

Tell-tale phrase: "The AI helps your team do X" (vs. "The AI does X automatically")

2. No autonomous decision-making when situations vary

Ask: "What happens when the agent encounters a situation it wasn't explicitly configured for?" A real agent reasons through novel situations. A fake one fails or escalates to a human every time.

Tell-tale phrase: "It follows the workflow you define" (rigid, not adaptive)

3. Cannot recover from errors or unexpected outputs

Real agents handle exceptions: they retry, find alternative approaches, and flag only truly novel failures. If the product requires human intervention for any error, it's not an agent.

Tell-tale phrase: "A team member reviews and corrects the output" after every run

4. No persistent memory across sessions

Agents accumulate context: they remember what happened in previous runs, build knowledge about the business, and improve over time. If every session starts from zero, it's a stateless tool, not an agent.

Tell-tale phrase: "You provide the context at the start of each conversation"

5. Human approval required for every individual action

Some level of oversight is fine — in fact, it's responsible deployment. But if the system requires human approval for each atomic action (not just high-stakes decisions), it's a human-in-the-loop tool, not an autonomous agent.

Tell-tale phrase: "The AI drafts it and your team approves every single item before it goes out"

🎯 The Whiteboard Test

In your next vendor evaluation meeting (or when a competitor shows up in your deal), ask this question and listen carefully to the answer. Real agents give you real answers. Fake ones deflect.

Ask them:
"Walk me through exactly what happens when your agent encounters an error or an unexpected situation it wasn't trained on. Who gets notified, what does the agent do next, and how does it log that for improvement?"
✓ Good answers sound like:
  • Describes a specific error-handling and retry mechanism
  • Explains how the agent reasons about uncertainty before escalating
  • Mentions a feedback loop that improves future performance
  • Gives a concrete example from a real customer
🎮 See It In Action
Watch a real autonomous agent vs. a "copilot" in the wild
The simulator demos are your proof — MOC Document Agent shows real agentic behavior. Use it when a buyer challenges "is this really an agent?"
Explore AI Demos →
Exercise

Agentwashing Detector


Three vendor pitches. For each one, decide: is this a Real Agent (Layer 3 — autonomous, end-to-end) or Agentwashing (Layer 1 or 2 relabeled)? Then rate your confidence.

Vendor Pitch 1 of 3
DrillingAssist AI
"Ask any question about your drilling data — our AI-powered chat interface gives you instant answers from your rig logs, mud reports, and daily surveys. Just type in plain English and get results in seconds."
Module 2 · Buyers

The New Buyer Map

AI agent deals involve 6–8 stakeholders, not 2. Each has a completely different agenda. Walk in knowing all of them.

The 8-Role Buyer Map

Enterprise AI agent deals rarely have a single economic buyer. Every role has a veto. Know their primary concern before you walk in the room.

CIO
  Primary concern: Technical risk, integration complexity, security posture
  What they need from you: Architecture diagrams, security docs, SSO/API specs, compliance certifications
  Red flags they'll raise: Vague integration story; no API docs

CTO
  Primary concern: Architectural fit, scalability, vendor lock-in
  What they need from you: System design, performance benchmarks, data residency details, model transparency
  Red flags they'll raise: Can't explain how it connects to the existing stack

COO
  Primary concern: Process disruption, change management, adoption
  What they need from you: Change management plan, training timeline, rollout roadmap, success metrics
  Red flags they'll raise: No deployment plan; "it just works" hand-waves

CFO
  Primary concern: ROI, budget impact, payback period, total cost
  What they need from you: Business case model, pricing structure, cost-benefit analysis, comparable deals
  Red flags they'll raise: Can't quantify ROI; dodges payback questions

CDO
  Primary concern: Data governance, quality, lineage, privacy
  What they need from you: Data flow diagrams, retention policies, consent framework, audit trail
  Red flags they'll raise: Unclear who owns data; vague on model training

Head of AI
  Primary concern: Technical depth, limitations, evaluation methodology
  What they need from you: Model specs, failure mode analysis, evaluation benchmarks, roadmap
  Red flags they'll raise: Overselling capabilities; can't discuss limitations

Legal / Compliance
  Primary concern: Liability exposure, regulatory compliance, IP ownership
  What they need from you: Audit trails, indemnification clauses, regulatory certifications, DPA
  Red flags they'll raise: Weak compliance posture; no indemnification discussion

AI Buyer (Procurement)
  Primary concern: Vendor stability, contract terms, SLAs, exit clauses
  What they need from you: Customer references, uptime SLA, data portability, exit procedures
  Red flags they'll raise: New vendor with no references; aggressive lock-in terms
Seller Warning
In O&G enterprise deals, the Head of AI and Legal/Compliance roles are often the most underestimated blockers. Legal has never reviewed an AI liability clause before. Head of AI will test your technical claims. Prepare for both — or they'll tank your deal in the final review.

4 Ways This Differs From SaaS Procurement

Your SaaS playbook won't work here. These four differences will surprise you if you don't know them going in.

1. Multiple Technical Stakeholders

In SaaS, IT approves the tool. In AI agent deals, CTO + Head of AI both need to sign off — and they have different (sometimes conflicting) agendas. Prepare separate technical briefings.

2. Legal Enters the Room Earlier

AI liability is uncharted territory. Legal gets pulled in at discovery, not just contract. They're not obstructing — they're genuinely uncertain. Come with a DPA and indemnification framework ready.

3. POC/Pilot Is Assumed, Not Optional

Enterprise buyers won't sign a $100K+ annual contract without a pilot. Plan the budget and timeline for one from your first conversation. "We can start with a 90-day POC" closes more deals than any demo.

4. Board-Level Visibility on AI Spend

Unlike a SaaS tool that lives in one department, AI agent deployments often get reviewed by the executive team and board. Your champion needs to be able to present upward. Give them the slide deck to do it.

Discovery Question
"Besides yourself, who else would be involved in evaluating and signing off on a technology like this? And what's their biggest concern typically?" — Ask this in your first meeting. Map every name they give you.
Exercise

Stakeholder Mapping Simulator


Five company scenarios. For each: select 3–4 stakeholders from the 8-role buyer map, then assign each a priority (Champion / Influencer / Blocker). Match the recommended mapping to score points.

Module 3 · Discovery

Discovery Reimagined

Selling AI agents requires a different discovery approach. You're not finding pain — you're mapping workflows, quantifying burden, and scoring automation potential.

The Workflow Discovery Methodology

Five steps. Do them in order. Don't skip to the pitch.

1. Shadow the team for 2 hours — don't ask, watch

People describe their work differently than they actually do it. Ask "what do you do?" and you get the idealized version. Watch them work for 2 hours and you see the copy-paste, the manual workarounds, the email chains that should be automated. Request a "working session" observation — most teams are flattered, not annoyed.

2. Map inputs and outputs for every workflow you observe

Every workflow has sources (data in) and deliverables (data out). Document them explicitly: "This daily report pulls from 3 systems, takes 90 minutes, and produces a PDF that goes to 5 stakeholders." That's your automation opportunity statement.

3. Identify every decision point in the workflow

At each step, ask: "Does a human have to make a judgment call here, or is this rule-based?" Decision points that follow explicit rules (even complex ones) are automatable. Ones that require political judgment or domain expertise still need humans — design for augmentation, not replacement.

4. Quantify the actual burden: hours × rate × frequency

Get numbers. "How many hours does this take per week? Who does it — what's their approximate fully-loaded hourly cost?" Then multiply: 40 hrs/week × $80/hr × 52 weeks = $166K/year in labor. That's your ROI anchor. Don't let the prospect abstract it — pin down real numbers. (A one-line sketch of this math follows the list.)

5. Score automation potential using the 5-dimension framework

Apply the scoring grid below to prioritize which workflows to tackle first. This gives you an objective conversation with the prospect — and helps you set realistic expectations about sequencing.
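
Here is the Step 4 math as a one-liner you can run live in a call (the inputs below are the example's, not benchmarks):

```python
def annual_labor_cost(hours_per_week: float, hourly_rate: float,
                      weeks_per_year: int = 52) -> float:
    """Fully-loaded annual labor cost of a workflow."""
    return hours_per_week * hourly_rate * weeks_per_year

# Step 4's example: 40 hrs/week at an $80/hr fully-loaded rate
print(f"${annual_labor_cost(40, 80):,.0f}/year")  # -> $166,400/year
```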

Automation Opportunity Scoring (1–5 per dimension)

Score each dimension 1–5. Max score: 25. Higher = stronger automation candidate. Note: Cost of Error is reverse-scored (a low-stakes workflow scores HIGH, a safety-critical one scores LOW), so a higher total always means a stronger candidate. A minimal scoring sketch follows the bands below.

Volume (1–5): 1 = runs rarely, 5 = runs multiple times daily
Rules Clarity (1–5): 1 = judgment-heavy, 5 = explicit, documented rules
Data Availability (1–5): 1 = mostly unstructured, 5 = clean, structured data
Cost of Error (1–5, reverse-scored): 1 = safety critical (avoid), 5 = low stakes (prefer)
Current Cost (1–5): 1 = under $500/week, 5 = over $10K/week
18–25: Strong Candidate. Lead with this workflow; build your POC here.
10–17: Moderate (Phase 2). A good target after the initial deployment proves value.
5–9: Not Yet Ready. Data or process maturity is needed first; don't start here.
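
Here is a minimal sketch of the grid as a function; the band cutoffs mirror the table above, and Cost of Error is reverse-scored (5 = low stakes):

```python
def automation_score(volume: int, rules_clarity: int, data_availability: int,
                     cost_of_error: int, current_cost: int) -> tuple[int, str]:
    """Each dimension is 1-5; cost_of_error is reverse-scored (5 = low stakes)."""
    dims = [volume, rules_clarity, data_availability, cost_of_error, current_cost]
    assert all(1 <= d <= 5 for d in dims), "each dimension must be 1-5"
    total = sum(dims)
    if total >= 18:
        return total, "Strong Candidate: lead with this workflow"
    if total >= 10:
        return total, "Moderate: Phase 2"
    return total, "Not Yet Ready"

# Illustrative daily-report workflow: high volume, clear rules, decent data,
# low stakes, meaningful weekly cost
print(automation_score(5, 4, 4, 4, 4))  # -> (21, 'Strong Candidate: ...')
```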

Discovery Question Bank

Expand the relevant function/industry to get targeted questions. Adapt them — don't recite them.

🛢️ O&G Engineering & Operations
Walk me through how you generate a daily drilling report today. Who touches it, in what order, and how long does the whole process take from rig data to sign-off?
When there's a production anomaly, what's the process from detection to engineering decision? How many people are involved and how long does it typically take?
How many RFPs did your company respond to last year? What percentage did you win, and what was the main bottleneck in your response process?
What reports or documentation does your team produce on a daily or weekly basis that feels like Groundhog Day — the same structure, just different numbers?
When you do a well integrity assessment, what's the most time-consuming part — and what percentage of that time is "finding and formatting data" vs. "actually thinking about it"?
💰 Finance & CFO
What reports does your finance team generate manually every month that you've been telling yourself "we should automate that" for the last 2 years?
How long does your month-end close take? What's the single biggest bottleneck — and how much of that bottleneck is data gathering vs. analysis?
If I told you an AI agent could cut your close cycle by 40%, what would that be worth in terms of analyst hours freed? What would those analysts do instead?
When variance reports land on your desk, how much time do you spend figuring out what happened vs. actually deciding what to do about it?
⚙️ Operations & COO
What's the one workflow your operations team complains about most — the one where people say "why are we still doing this manually"?
If you could give your team back 5 hours per person per week by automating one category of work, which category would you choose first?
Where do approvals or handoffs between departments create the most delay in your operations? What's the typical wait time for a standard approval?
How do you currently handle incident documentation and reporting? Who writes it, who reviews it, and what does that process cost you in senior engineer time?
🤖 AI & Technology Leadership
What AI initiatives have you tried in the last 18 months? What worked, what didn't, and what made the difference between the two?
When you think about deploying an autonomous agent in your environment, what's your biggest technical concern — integration, security, reliability, or something else?
What does your current evaluation process look like for AI vendors? Who's involved, and what are the non-negotiable technical requirements?
How do you currently measure success for AI investments? What metrics would make you say "that pilot worked, let's expand"?
🎮 See It In Action — Live O&G Workflows
Daily Drilling Report (DDR)
High-volume, structured, time-sensitive workflow — a perfect "Strong" automation candidate. Walk through the scoring live.
See Drilling Dashboard →
Turnaround Inspection Reports
Another high-scoring workflow — complex inspection checklists, critical compliance timelines, multiple data sources.
See Facilities Inspection →
Exercise

Workflow Discovery & Scoring Tool

Score common O&G workflows on 5 automation dimensions. Auto-ranks them by potential. Score 5 workflows to complete this exercise.


Module 4 · Pricing

Pricing Without Panic

Enterprise AI agent pricing is nothing like SaaS. There's no $50/seat/month comparison. You need a framework, not a number — and a script for the 5 objections you'll always get.

The 7 Pricing Models Compared

Know all 7. Recommend the right one for the deal context. Never let pricing be a surprise.

Seat-based
  Structure: $/user/month
  Best for: Large teams using agent tools daily; familiar to buyers
  Watch out: Buyers compare to $50/mo SaaS tools — anchors low

Per-Agent
  Structure: $/active agent/month
  Best for: A clear inventory of agents, each with a defined scope
  Watch out: Buyers want to "start with 1" and never expand

Usage / Consumption
  Structure: $ per workflow run, API call, or token
  Best for: Variable or unpredictable workloads; proof-of-concept phase
  Watch out: Hard to budget; CFOs hate unpredictability

Per-Workflow
  Structure: $ per completed workflow execution
  Best for: Discrete, countable, clearly defined tasks
  Watch out: Needs strong metering; billing disputes are common

Outcome-based
  Structure: % of quantified value created
  Best for: High-confidence ROI use cases with clear attribution
  Watch out: Requires an attribution agreement upfront; hard to audit

AELA (Annual Enterprise License Agreement)
  Structure: Annual flat enterprise license
  Best for: Large enterprises with broad multi-team deployment
  Watch out: Heavy discounting pressure; negotiation is intense

Hybrid
  Structure: Base license + usage overage + optional outcome kicker
  Best for: Complex enterprise deployments; aligns risk appropriately
  Watch out: Complex to explain, complex to contract — use sparingly
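
To show how the Hybrid row behaves on an actual invoice, here is a sketch under a hypothetical plan; every number in it is invented for illustration:

```python
def hybrid_invoice(runs: int, base_fee: float = 15_000.0,
                   included_runs: int = 10_000, overage_rate: float = 0.40,
                   outcome_kicker: float = 0.0) -> float:
    """Base license + usage overage + optional outcome kicker (all hypothetical)."""
    overage = max(0, runs - included_runs) * overage_rate
    return base_fee + overage + outcome_kicker

# A heavy month: 14,000 workflow runs plus a $2,000 outcome kicker
print(f"${hybrid_invoice(14_000, outcome_kicker=2_000):,.0f}")  # -> $18,600
```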

🏦 The CFO Conversation Framework: Ceiling → Payback → Reference

Before presenting a number, run through these three questions, in that order. They establish value, timeline, and context — so your price lands on a foundation instead of in a vacuum.

Ceiling
"What's the maximum you'd pay per month if this delivered everything you need perfectly?"
Establishes the value anchor from their perspective — not yours. A CFO who says "$30K/month" just told you where the ceiling is. Now you can price below it and look like a bargain.
Payback
"How quickly does this need to pay for itself for you to approve the investment?"
Sets the timeline expectation before you're negotiating. "Within 12 months" means you need ROI data for year 1 only. "Within 6 months" tells you which workflows to lead with (fastest, highest impact).
Reference
"What's the most you've paid for a comparable technology or service in the last 2 years?"
Gives you competitive pricing context. If they say "$200K/year for a BI platform," you know what they consider a normal technology investment — and can position your pricing relative to that reference point, not to $50/seat SaaS tools.

5 Pricing Objections — With Response Scripts

These are the 5 objections you'll hear in almost every enterprise AI agent deal. Each one has a script. Memorize the principle, not the exact words.

1. "This is too expensive."

This is almost never about the number. It's about an unclear ROI story. Don't defend the price — reframe the comparison.

"Compared to what? If your team spends 40 hours per week on this workflow at $80/hr fully loaded, that's $166K per year in labor alone. Our pricing is $X/year. What's the number that makes this feel like a good investment?"
Why it works: Forces a concrete comparison instead of an abstract "too expensive" objection. Puts the ROI math on the table.

2. "We need to see an ROI model first."

This is a fair request and a buying signal. Don't defend — lean in and take control of the model-building process.

"Absolutely — I'd love to build one together. Let's start with the 3 workflows most likely to be automated. Can you share the headcount and approximate hours on each? I'll have a first pass back to you within 48 hours."
Why it works: Collaborative model-building creates buy-in. You control the assumptions, which means you control the outcome. And the conversation itself qualifies them further.

3. "Our IT team needs to approve this."

Don't wait for IT to kill your deal because they don't have the right information. Go on offense.

"Of course — IT approval is table stakes for us too. What are the top 3 technical concerns they'll have? Integration, security posture, or something else? Let me prep a technical brief and security overview tailored to their review process. I can have it ready for their first meeting."
Why it works: You're volunteering to do the work that would otherwise slow the deal. IT blockers who get good documentation early move faster.

4. "We're already evaluating [Competitor X]."

Don't panic. Don't trash the competitor. Shift the evaluation criteria to where you win.

"That's good news — evaluating options seriously means you're committed to moving forward. What does their proposal look like? The question that matters most isn't features — it's: who owns accountability when the agent makes an error in production? That's what separates real enterprise solutions from demos."
Why it works: Reframes the competition around operational accountability rather than a feature checklist. Forces them to think about post-deployment, where most vendors underdeliver.

5. "We're not ready for AI yet."

This is usually a fear objection dressed up as a readiness concern. Explore what "ready" means to them — it's almost always ambiguous.

"I hear that a lot. What would 'ready' look like for you specifically? Most companies that say this are already 12 months behind their direct competitors who are running agents in production right now. Let's figure out if your data and process maturity actually qualifies — it might be closer than you think."
Why it works: Makes "not ready" concrete and potentially reversible. Creates urgency through competitive context without being pushy. Opens a diagnostic conversation rather than a standoff.
Seller Warning
Never give a price before you've run the CFO framework. Pricing without a value anchor lands cold. Pricing after Ceiling → Payback → Reference lands as a solution to a quantified problem. Same number, completely different reception.
Exercise

Pricing Model Matcher

5 deal scenarios. Match the right pricing model, then handle 2 objections each — referencing the CFO Ceiling → Payback → Reference framework. Score up to 20 points.

Start Exercise →
Module 5 · Advanced

Pilot Design & POC Strategy

95% of AI pilots fail — not because the technology fails, but because the scope was wrong, metrics weren't defined, or there was no executive sponsorship. Learn to design pilots that convert.

3 Criteria for a Well-Scoped Pilot

Criterion 1: Executive Relevance
Does the pilot solve a problem a VP+ cares about? If it only matters to a team lead, it won't get expansion budget.

Criterion 2: 6–12 Week Feasibility
Can you show measurable results in 6–12 weeks? If the pilot needs 6 months of data prep, it's a project — not a pilot.

Criterion 3: Pre-Defined Metrics
Success criteria must be agreed before the pilot starts. Ambiguous metrics guarantee pilot purgatory.
Pilot Purgatory Warning
When pilots keep getting extended without a decision, you're in pilot purgatory. Set a firm decision date on Day 1. If the answer on that date is "we need more time," the answer is actually "no."

PoV Agreement Elements

Every pilot needs a written Proof of Value agreement signed by both sides before work begins.

Element 1: Scope & Use Case
Element 2: Timeline & Decision Date
Element 3: Success Metrics + Targets
Element 4: Executive Sponsor
Exercise

Pilot Design Workshop

Interactive 6-step pilot scoping form. Select a workflow, define success metrics, choose timeline & governance, specify data requirements — then generate a formatted PoV summary card. Score up to 15 points.

Start Workshop →
Module 6 · Advanced

ROI Business Case Builder

CFOs don't buy AI. They buy P&L impact. Learn to frame every conversation in dollars — and generate presentation-ready ROI business cases that survive CFO scrutiny.

Key Industry Benchmarks

$3.50 · Industry-average return per $1 invested in AI agents (customer service)
4.7 mo · Average AI agent payback period across verticals
23% · Average cost reduction in automated processes
380–520% · 3-year compounding ROI for well-implemented AI agents

The ROI Conversation Framework

Step 1: Start with current cost
FTEs × loaded cost + error cost + opportunity cost. This is the "burning platform" number: the CFO needs to feel the pain before hearing a solution price.

Step 2: Show the agent cost
Implementation + annual subscription. Keep this simple — one number for the Year 1 investment.

Step 3: Calculate net Year 1 savings
Gross savings from automation + error reduction, minus the Year 1 investment. If this is positive, the conversation changes immediately.

Step 4: Show the 3-year compounding effect
As the process expands and matures, savings compound. Year 2 and Year 3 projections show the full value trajectory.

Step 5: Express it as a payback period in months
CFOs think in payback periods. "4.2 months to break even" is a better closer than any ROI percentage.
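
Here are the five steps as one hedged calculation, a sketch with illustrative inputs and an assumed expansion rate rather than the exercise's actual model:

```python
def roi_case(ftes: float, loaded_cost: float, error_cost: float,
             opportunity_cost: float, implementation: float,
             subscription: float, automation_pct: float,
             growth: float = 0.15) -> dict:
    """Steps 1-5 above. automation_pct is the share of current cost removed;
    growth is an assumed year-over-year expansion of savings (illustrative)."""
    current = ftes * loaded_cost + error_cost + opportunity_cost       # Step 1
    year1_investment = implementation + subscription                   # Step 2
    gross_savings = current * automation_pct
    net_year1 = gross_savings - year1_investment                       # Step 3
    three_year = sum(gross_savings * (1 + growth) ** y for y in range(3))  # Step 4
    payback_months = year1_investment / (gross_savings / 12)           # Step 5
    return {"net_year1": round(net_year1),
            "three_year_gross": round(three_year),
            "payback_months": round(payback_months, 1)}

# Illustrative: 4 FTEs at $150K loaded, $50K error cost, $30K opportunity cost,
# $60K implementation, $90K/yr subscription, 60% of the workload automated
print(roi_case(4, 150_000, 50_000, 30_000, 60_000, 90_000, 0.60))
# -> {'net_year1': 258000, 'three_year_gross': 1416780, 'payback_months': 4.4}
```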
Warning: Conservative Numbers Win Deals
Never inflate ROI projections. CFOs have seen enough "50x ROI" claims to be permanently skeptical. Conservative, defensible numbers build trust and close deals. When in doubt, round down.
🎮 See It In Action — Live ROI Examples
Stuck Pipe Early Warning: $500K–$2M per event avoided
Use the interactive ROI calculator in the simulator to walk a prospect through a real avoided-cost calculation. The numbers are immediately defensible to a CFO.
Try ROI Calculator →
Exercise

ROI Business Case Builder

Interactive ROI calculator: input FTE count, loaded cost, error rates, implementation cost, and automation percentages. Auto-calculates Year 1 savings, 3-year cumulative ROI, payback period, and generates a CFO-ready business case card. Score up to 10 points.

Open ROI Builder →
Module 7 · Live Role-Play & Objection Drills

AI-powered live practice: face an enterprise buyer persona and handle real objections in real time. Includes the full AI Agent Sale simulation, cold call scripts for O&G decision-makers, and a 90-day prospecting tracker. Coming in Phase 2.

⏳ Coming Soon — Phase 2
🎮 Objection: "Show me proof it works"
When a prospect says "show me real results" — send them to the simulator. Live demos of DDR, stuck pipe warnings, and ROI calculators are your most compelling proof point.
Launch Simulator →