
Working the Pillars of Intent-aware FinOps

Navigating the landscape of FinOps often leads to a common, yet ultimately fleeting, triumph. Picture this: you’re presented with a striking heatmap, a vibrant visual indictment of severely underutilized cloud resources. This initial revelation, often a wake-up call, sets the stage for a period of focused optimization. And indeed, the efforts often bear fruit. Over time, the dedicated application of FinOps principles can lead to a commendable 20% reduction in cloud costs.

In that moment, a collective cheer erupts. Success is celebrated, and the team revels in the immediate financial gains. However, this euphoria is frequently short-lived, proving to be a transient victory rather than a sustainable transformation. The harsh reality often arrives with the turn of the calendar. As the next month dawns, the celebrated achievement from the previous period begins to unravel. The initial cost savings, while real, often prove to be a temporary balm rather than a fundamental cure.

The true challenge then emerges: the delicate balance between cost optimization and operational efficiency. The very measures that led to the 20% reduction might inadvertently trigger a cascading set of problems. Suddenly, the service level agreements (SLAs) that are critical to business operations are in disarray, struggling to keep pace with demand. The finely tuned resource allocation, once a point of pride, transforms into a bottleneck. Every CPU, once seemingly abundant, is now maxed out, not on critical, value-added work, but on non-essential processes, scrambling to catch up, or worse, idle when needed. This scenario highlights the crucial distinction between simply cutting costs and truly optimizing cloud spend in a way that aligns with business objectives and maintains, or even enhances, operational performance. The initial FinOps win, without a broader, more strategic approach, can quickly devolve into a struggle to maintain fundamental service delivery.


Managing cloud costs often falls victim to three pervasive blind spots. The first is akin to wielding a blunt axe, indiscriminately cutting expenses without understanding the underlying reasons for a particular build. The second is the illusion of efficiency, where a workload is deemed acceptable due to high utilization, even when much of that utilization fails to deliver customer value. Finally, there’s the illusion of local optima, which falsely assumes that optimizing a single component will inherently enhance the entire system.

Cloud spend needs context, which is why intent-aware FinOps is so badly needed. Anyone involved in managing cloud expenses should understand how each dollar fits into the overall architecture. If an optimization stands in the way of ROI, or introduces compliance or time-to-market risk, you don't get to count it as a victory. Each win is contextual and must be presented to senior management with the full story (the good, the bad and the ugly).

Busy doesn't mean better

When a CIO views a dashboard and sees 'high utilization,' they feel reassured about resource stewardship, but in reality this could be masking a significant amount of waste. Here are some real-world examples in which righting a workload was more effective than tweaking any particular instance.

A database running at roughly 80% I/O utilization appeared efficient, but missing indexes were forcing full table scans and driving up latency. Reintroducing the indexes cut latency tenfold and allowed a downgrade to a cheaper instance.
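
As a rough illustration of the diagnostic step, the sketch below (assuming PostgreSQL, the psycopg2 driver and a hypothetical orders table) looks for tables that are read mostly via sequential scans and reintroduces an index on the hot filter column.

```python
# A minimal diagnostic sketch, assuming PostgreSQL and the psycopg2 driver.
# The table and column names (orders, customer_id) are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=app user=finops")  # hypothetical connection string
cur = conn.cursor()

# Find tables that are being read mostly via sequential scans -- a common
# symptom of a missing index on a frequently filtered column.
cur.execute("""
    SELECT relname, seq_scan, idx_scan
    FROM pg_stat_user_tables
    WHERE seq_scan > 10 * COALESCE(idx_scan, 0)
    ORDER BY seq_scan DESC;
""")
for relname, seq_scan, idx_scan in cur.fetchall():
    print(f"{relname}: {seq_scan} sequential scans vs {idx_scan} index scans")

# Reintroduce the index on the hot filter column; CONCURRENTLY avoids
# blocking writes while the index builds.
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run inside a transaction
cur.execute("CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_id "
            "ON orders (customer_id);")
```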

A GPU inference fleet was running at 65% utilization, but because it served small models one request at a time, it still paid heavily for idle GPU capacity. Batching a minimum of 35 requests, or moving to CPU inference, significantly cut the per-prediction cost and dramatically reduced single-issue errors.
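
A minimal micro-batching sketch in plain Python is shown below; the 35-request batch mirrors the example above, while predict_batch, the wait budget and the feature format are hypothetical stand-ins for whatever inference stack the fleet actually runs.

```python
# A micro-batching sketch: callers keep a single-request API while the
# worker groups requests before touching the (expensive) model.
import queue
import threading

BATCH_SIZE = 35          # minimum batch observed to make accelerator time worthwhile
MAX_WAIT_SECONDS = 0.05  # cap on the latency added while a batch fills

requests: "queue.Queue[tuple[list[float], queue.Queue]]" = queue.Queue()

def predict_batch(inputs):
    # Hypothetical placeholder for the real batched (GPU or CPU) model call.
    return [sum(x) for x in inputs]

def batcher():
    while True:
        item = requests.get()  # block until at least one request arrives
        batch, replies = [item[0]], [item[1]]
        # Keep pulling until the batch is full or the wait budget is spent.
        while len(batch) < BATCH_SIZE:
            try:
                features, reply_q = requests.get(timeout=MAX_WAIT_SECONDS)
            except queue.Empty:
                break
            batch.append(features)
            replies.append(reply_q)
        for reply_q, result in zip(replies, predict_batch(batch)):
            reply_q.put(result)

threading.Thread(target=batcher, daemon=True).start()

def predict(features):
    reply_q: queue.Queue = queue.Queue(maxsize=1)
    requests.put((features, reply_q))
    return reply_q.get()

print(predict([1.0, 2.0, 3.0]))  # callers still see a single-request interface
```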

Then there's the locked job: in this case, an Apache Spark job stuck at 60% CPU for five hours a night. The cluster appeared to operate efficiently, but in reality the overwhelming majority of the data was pinned to one skewed key, so tasks kept running without making progress. Repartitioning and salting the key righted the workload, allowing the job to finish in about 45 minutes on a cluster roughly a quarter of its original size.
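
For readers who want to see the fix, here is a salting sketch in PySpark; the input path, column names and the 32-way salt are hypothetical and should be tuned to the skew actually observed.

```python
# Salting a skewed aggregation key so no single task owns the hot key.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-salting").getOrCreate()
events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical source

SALT_BUCKETS = 32  # hypothetical; size to the skew you observe

# Spread rows for the hot key across many partitions by adding a random salt.
salted = events.withColumn("salt", (F.rand() * SALT_BUCKETS).cast("int"))

# First-stage aggregation runs per (key, salt), so the hot key is split
# across up to 32 tasks instead of one straggler.
partial = (salted
           .groupBy("customer_id", "salt")
           .agg(F.sum("amount").alias("partial_amount")))

# A second, much smaller aggregation collapses the salts back to one row per key.
totals = (partial
          .groupBy("customer_id")
          .agg(F.sum("partial_amount").alias("total_amount")))

totals.write.mode("overwrite").parquet("s3://example-bucket/totals/")
```

With the hot key spread across salts, the straggler task disappears and the cluster can be sized for the average partition rather than the largest one.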

In each of these examples, rightsizing cut the bill slightly, but fixing the workload delivered far greater savings and significantly improved performance.


Working the pillars

Success with FinOps depends on following four key pillars, each of which is 'intent-aware' (intentions matter). The first is capturing context: link cost to each workload and its owner, and connect both to a key performance indicator (KPI) formula that measures revenue per request, minutes saved and compliance with guidelines. The second is exploring intent: ask what 'promise' a workload fulfills; if no one knows, there may no longer be a need for it. The third is fixing and right-sizing workloads: watch out for design waste such as polling loops and missing indexes, because reducing that waste lowers costs and improves performance, then resize or decommission resources as needed. The final pillar is acting safely and documenting every step, which allows automation to sit behind safety measures like compliance and SLAs. Throughout, keep notes on changes to avoid repeated investigations later.
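
As one way to picture the fourth pillar, the sketch below gates a rightsizing action behind SLA headroom and a compliance tag, then records what changed and why; every threshold, tag and name is a hypothetical placeholder for your own monitoring and inventory data.

```python
# A guardrail sketch: act only when the SLA has headroom and nothing
# compliance-sensitive blocks the change, and keep an audit note either way.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Workload:
    name: str
    owner: str
    intent: str              # the "promise" the workload fulfills
    p95_latency_ms: float    # observed, from monitoring
    sla_p95_ms: float        # promised to the business
    compliance_tags: tuple   # e.g. ("pci",) for regulated data

def safe_to_rightsize(w: Workload) -> bool:
    """Only act when latency has headroom and no compliance tag blocks changes."""
    has_headroom = w.p95_latency_ms < 0.8 * w.sla_p95_ms
    blocked = "pci" in w.compliance_tags
    return has_headroom and not blocked

def record_change(w: Workload, action: str) -> dict:
    """Note what changed and why, so no one has to re-investigate later."""
    return {
        "workload": w.name,
        "owner": w.owner,
        "intent": w.intent,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

checkout = Workload("checkout-api", "payments-team", "complete purchases",
                    p95_latency_ms=120.0, sla_p95_ms=250.0,
                    compliance_tags=())

if safe_to_rightsize(checkout):
    print(record_change(checkout, "downsize from 8 to 4 vCPUs"))
```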

Intent is everything

To kickstart your FinOps journey, establish a baseline of KPIs that integrate both business costs and customer-centric metrics. When analyzing workloads, ensure close collaboration between FinOps and engineering teams, providing a centralized repository for context. Furthermore, equip yourself with tools for policy staging, scheduling and monitoring, alongside technology that can correlate reliability, expense and performance.
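
A baseline of that kind can start very small. The sketch below, using pandas with made-up numbers and hypothetical workloads, joins a billing export with customer-facing metrics so that unit cost and latency sit in the same table.

```python
# Joining spend with customer-facing metrics into one baseline view.
import pandas as pd

costs = pd.DataFrame({
    "workload": ["checkout-api", "report-batch"],
    "monthly_cost_usd": [42_000, 18_500],
})
usage = pd.DataFrame({
    "workload": ["checkout-api", "report-batch"],
    "requests": [90_000_000, 1_200_000],
    "p95_latency_ms": [120, 3400],
})

baseline = costs.merge(usage, on="workload")
baseline["cost_per_1k_requests_usd"] = (
    1_000 * baseline["monthly_cost_usd"] / baseline["requests"]
)

# Unit cost and latency sit side by side, so a later "saving" that pushes
# p95 past the SLA shows up in the same table that celebrates it.
print(baseline[["workload", "cost_per_1k_requests_usd", "p95_latency_ms"]])
```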

These initial findings are merely the tip of the iceberg, serving as crucial diagnostic data. Their true value is unlocked when meticulously analyzed by dedicated teams, each possessing a deep understanding of the specific business and technical questions that need answering. This isn’t a superficial cost-cutting exercise; the core objective of intent-aware FinOps transcends simply spending less. While cost optimization is a valuable byproduct, the far greater benefit lies in fulfilling the promise of fundamental resources. This means ensuring that every dollar spent directly contributes to strategic business goals, optimizing resource allocation for maximum impact and ultimately driving significant value for the organization.

