Leveraging FinOps for AI: Bringing Financial Discipline to Artificial Intelligence
“Everyone’s talking about AI — but few are talking about what it costs.”
That single observation from Stephen Old, Head of FinOps and GreenOps at Synyega, captures the reason enterprises everywhere are beginning to connect FinOps (financial operations) and AI governance.
In this recent webcast (replay here) hosted by Tom Hollowell of 321 Gang, Stephen walked through the emerging framework of FinOps for AI (F4AI) — a practical approach that brings transparency, control, and business alignment to the most compute-intensive technology trend of our time.
Why AI Changes the Cost Conversation
AI isn’t one thing. It’s a sprawling ecosystem of SaaS tools, PaaS models, cloud training services, and hybrid deployments.
Many organizations already run a patchwork of these without realizing it: an engineer experimenting with ChatGPT Plus, a data-science team training models in AWS SageMaker, marketing using Copilot for content, and finance testing forecasting algorithms in Azure.
Each of those purchases flows through different channels and teams — often without centralized visibility. It’s “shadow IT 2.0,” only this time it’s shadow AI.
“There are people in your organization right now buying SaaS AI subscriptions you don’t know about,” Stephen warned.
“They’re not being malicious — they’re just trying to work smarter. But if you don’t have a strategy, you can’t optimize pricing or governance.”
That’s where FinOps enters the picture. FinOps provides a common language for finance, engineering, and procurement to measure and manage cloud costs — and now AI costs too.
The Core Challenge: Valuing AI Return on Investment
Traditional FinOps asks: “What are we spending and why?” With AI, the bigger question is “What value are we getting?”
AI often replaces manual effort or augments existing processes — yet few organizations measure the “before” state.
If a bot cuts support tickets by 20%, does that represent a cost saving, or a shift of labor to higher-value work? Without a baseline, it's impossible to quantify ROI. Stephen urged teams to start measuring early, even before projects are fully defined.
“You don’t have to wait until you finish building. Start tracking while you’re building. FinOps isn’t reactive — it’s about forecasting and understanding the cost of doing nothing.”
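The baseline argument can be made concrete with a little arithmetic. A minimal sketch, where all figures (hours, rates, spend) are illustrative assumptions rather than numbers from the webcast:

```python
# Hedged sketch: quantifying ROI against a measured "before" baseline.
# Every number here is an illustrative assumption, not a benchmark.

def monthly_roi(baseline_hours: float, hourly_rate: float,
                hours_saved_pct: float, ai_monthly_cost: float) -> float:
    """Return monthly ROI as a ratio: (labor saved - AI spend) / AI spend."""
    labor_saved = baseline_hours * hours_saved_pct * hourly_rate
    return (labor_saved - ai_monthly_cost) / ai_monthly_cost

# Example: a support team logs 800 hours/month at $45/hour; a bot cuts 20%
# of that effort, and the AI service costs $2,500/month to run.
roi = monthly_roi(baseline_hours=800, hourly_rate=45,
                  hours_saved_pct=0.20, ai_monthly_cost=2500)
print(f"Monthly ROI: {roi:.0%}")
```

The point is not the formula but its inputs: without the measured `baseline_hours`, the calculation cannot be done at all, which is exactly the "cost of doing nothing" problem.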
Understanding the AI Landscape Through a FinOps Lens
Stephen outlined five broad AI categories and how each drives unique cost patterns:
- Traditional / Predictive AI (Machine Learning): Heavy training costs up front, light ongoing inference usage.
- Generative AI: High compute for training on large datasets; moderate to high cost for daily inference requests.
- Retrieval-Augmented Generation (RAG): Adds search and vector databases to ground answers in real data.
- Agentic AI: Chains multiple AI calls to orchestrate tasks — brilliant but expensive to run continuously.
- Edge / On-Device AI: Low central compute but “death by a thousand cuts” as costs spread across devices.
Each layer builds on the previous one, stacking compute requirements and therefore costs.
FinOps helps teams understand which models truly need enterprise-scale resources and which can run lighter or on-prem.
The Stages of AI and Where Costs Emerge
Stephen broke down the AI lifecycle into seven phases — each with its own cost and optimization opportunity:
- Data Collection: Gather only what you need. Training on all data wastes time and money.
- Data Preparation: Cleaning and filtering consume compute — good data reduces later costs.
- Model Training: Extremely high compute and carbon footprint; define clear objectives before you start.
- Fine-Tuning: Shorter but intensive bursts to align models with specific tasks.
- Inference: Where value is realized — and where poor prompt design can quietly balloon costs.
- Retrieval & Orchestration: Adds layers of complexity that improve accuracy but raise spend.
- Monitoring & Optimization: Light per-use, but ongoing forever.
FinOps for AI means tracking each of these as distinct cost centers with different lifespans and KPIs.
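Treating each phase as its own cost center is straightforward once spend is tagged by lifecycle stage. A minimal sketch, assuming a tagged billing export with hypothetical stage names and line items:

```python
# Hedged sketch: rolling up tagged spend by AI lifecycle stage so that
# training, inference, and monitoring show up as distinct cost centers.
# The tags and amounts are illustrative, not from any real billing export.
from collections import defaultdict

line_items = [
    {"stage": "training",   "cost": 12400.0},
    {"stage": "inference",  "cost": 3100.0},
    {"stage": "inference",  "cost": 2900.0},
    {"stage": "monitoring", "cost": 450.0},
]

def spend_by_stage(items: list[dict]) -> dict[str, float]:
    """Aggregate cost per lifecycle-stage tag."""
    totals: defaultdict[str, float] = defaultdict(float)
    for item in items:
        totals[item["stage"]] += item["cost"]
    return dict(totals)

print(spend_by_stage(line_items))
```

Each stage then gets its own KPI: training spend is episodic and reviewed per project, while inference and monitoring spend are recurring and reviewed monthly.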
Avoid Burning Money on AI
When early enterprise clients asked Stephen how to avoid “burning money on AI,” his answer was simple:
Start with purpose and data.
“If you don’t know what you want to learn, and you train on all your data, you’ll spend a fortune and learn nothing.”
He developed a FinOps-for-AI model built on three layers:
- Foundations and Governance — Define value, success metrics, ROI, and security considerations up front.
- Use Case Cycle — Design → Run → Review → Learn. Continuously measure and refine forecasts.
- Technical Efficiency — Right-size compute and data; choose the simplest model that meets the goal.
This loop mirrors the FinOps framework’s “Inform → Optimize → Operate,” but applies it to AI projects where training costs can spike unpredictably.
Practical Optimization Levers for AI Spending
1. Right-Sizing and Data Discipline
Training on too large a dataset not only wastes compute but can reduce accuracy. Smaller, cleaner datasets train faster and cheaper.
2. Model Efficiency and Choice
Not everything needs to be agentic AI or a large language model. Stephen showed how he built a small language model for $20 on a spot instance, then passed its answers to GPT for polish — achieving enterprise-grade results for a fraction of the cost.
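The cascade pattern Stephen described can be sketched in a few lines. The model functions below are hypothetical stand-ins (not real APIs): a cheap small model drafts an answer, and the expensive large model is only called when confidence is low.

```python
# Hedged sketch of a model cascade: small model first, large model only for
# hard cases. `small_model` and `large_model` are placeholder functions, not
# real vendor APIs; in practice they would wrap actual inference calls.

def small_model(prompt: str) -> tuple[str, float]:
    # Placeholder for a cheap, locally hosted small language model that
    # returns a draft answer plus a confidence score.
    return f"draft answer to: {prompt}", 0.62

def large_model(prompt: str) -> str:
    # Placeholder for an expensive frontier-model call, used sparingly.
    return f"polished answer to: {prompt}"

def cascade(prompt: str, threshold: float = 0.8) -> str:
    draft, confidence = small_model(prompt)
    if confidence >= threshold:
        return draft            # cheap path: no large-model tokens spent
    return large_model(prompt)  # escalate only low-confidence drafts

print(cascade("Summarize last month's cloud bill"))
```

The design choice is the threshold: tune it so the large model only sees the fraction of requests the small model genuinely cannot handle, which is where the cost savings come from.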
3. Request Efficiency (Training the Humans)
Poor prompting is a hidden cost. When users iterate endlessly in ChatGPT or Copilot, token usage can explode 20× or more. Teaching teams to write clear prompts is one of the most cost-effective FinOps for AI tactics available.
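The 20× figure is easy to see with simplified arithmetic. A minimal sketch, where the per-token price and token counts are illustrative assumptions, not any vendor's actual rates (real chat sessions are worse, since each turn resends conversation history):

```python
# Hedged sketch: why iterative prompting multiplies token spend.
# Prices and token counts are illustrative assumptions only.

PRICE_PER_1K_TOKENS = 0.01  # assumed blended input+output price, USD

def session_cost(tokens_per_turn: int, turns: int) -> float:
    """Simplified session cost: tokens * turns at a flat per-token price."""
    return tokens_per_turn * turns * PRICE_PER_1K_TOKENS / 1000

one_shot  = session_cost(tokens_per_turn=1500, turns=1)   # one clear prompt
iterative = session_cost(tokens_per_turn=1500, turns=20)  # 20 vague retries
print(f"one clear prompt: ${one_shot:.3f}, 20 iterations: ${iterative:.3f}")
```

Multiplied across hundreds of users, the gap between one well-written prompt and twenty vague retries is exactly the kind of spend that never shows up on a dashboard until someone looks.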
4. Time and Resource Scheduling
For enterprises training many models, consider running jobs in series rather than in parallel to qualify for committed-use discounts. That constraint can save millions without hurting results.
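The series-versus-parallel trade-off can be sketched with illustrative numbers. The job sizes and rates below are assumptions for demonstration, not real pricing:

```python
# Hedged sketch: serializing training jobs lowers the peak capacity you must
# provision, letting steady usage run on a discounted committed-use rate
# instead of bursty on-demand capacity. All rates and durations are assumed.

jobs = [40, 40, 40, 40]  # GPU-hours per training job

# Parallel: all four jobs at once -> peak of 4 GPUs for 40 hours (bursty).
parallel_peak_gpus = len(jobs)

# Series: one job at a time -> 1 GPU for 160 hours, coverable by a single
# committed-use reservation.
series_peak_gpus = 1

on_demand_rate, committed_rate = 3.00, 1.80  # assumed $/GPU-hour
parallel_cost = sum(jobs) * on_demand_rate   # bursty peak -> on-demand pricing
series_cost   = sum(jobs) * committed_rate   # steady usage -> committed pricing
print(f"parallel: ${parallel_cost:.2f}, series: ${series_cost:.2f}")
```

Total GPU-hours are identical in both cases; what changes is the shape of the demand curve, and committed-use discounts reward the flat one.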
5. Collaboration Across Functions
AI blurs the boundaries between FinOps, Procurement, Engineering, and IT Asset Management. Licensing and consumption models overlap; the teams must work together to govern SaaS, PaaS, and usage-based spend holistically.
FinOps for AI in Action — A Joint Perspective
321 Gang and Synyega both see FinOps for AI as an extension of their core mission — helping clients realize measurable value from technology investments.
For IBM clients, that often means connecting FinOps principles with IBM solutions like Turbonomic, Apptio Cloudability, Instana, and Kubecost to bring real-time visibility into AI workloads.
For non-IBM clients, FinOps for AI offers a universal framework — agnostic to vendor — that links financial governance to technical performance in any cloud environment.
The Mindset Shift: From Experimentation to Accountability
AI innovation shouldn’t be stifled by cost control — but it should be guided by it. FinOps for AI empowers teams to experiment responsibly, knowing how each decision affects cost, carbon, and business outcomes.
“You can’t just let AI projects run free,” Stephen concluded.
“You need to keep your eye on usage, efficiency, and ROI — that’s how you avoid burning money and deliver value sustainably.”
Key Takeaways
- FinOps for AI is about discipline and measurement, not restriction.
- Start with purpose: Know the business question before you train a model.
- Measure early and often: Define ROI and the “cost of doing nothing.”
- Optimize holistically: Compute, data, architecture, and human usage all affect AI spend.
- Collaboration is key: FinOps, engineering, and ITAM must work as one team.
About the Presenters
Stephen Old is Head of FinOps and GreenOps at Synyega, a UK-based FinOps, GreenOps, AI, and ITAM consultancy. He co-hosts The FinOps Guy podcast, contributes to the FinOps Framework, and is one of the most certified FinOps practitioners globally.
Tom Hollowell is with 321 Gang and leads the company’s FinOps and TBM strategy. 321 Gang is a member of the FinOps Foundation and TBM Council, and an IBM Platinum Business Partner focused on FinOps, DevOps, and cloud optimization.
Ready to Learn More?
If your organization is exploring AI and wants to understand its true financial and operational impact, connect with our teams:
📩 info@321gang.com
🌐 321gang.com/ibm-finops-solutions


321 Gang | 14362 North FLW | Suite 1000 | Scottsdale, AZ 85260 | 877.820.0888

