How to Choose the Right Bets in a Volatile 2026

Manufacturing and professional services leaders did not enter 2026 with a shortage of ambition.

There are AI roadmaps, digital transformation programs, nearshoring plans, sustainability targets, new pricing models, and talent initiatives stacked up across the portfolio. At the same time, tariffs and trade policy remain unsettled, capital is more expensive, and input costs are still rising for most manufacturers. Professional services firms, meanwhile, are racing to integrate AI as a true delivery “team member” and to shift from billable hours to outcome-based pricing.

The result: a lot of motion, not always matched by progress.

This is exactly where the E in the PRESSURE methodology – Evaluate – becomes essential. Evaluate is about building a disciplined habit of pausing to ask: What is actually working, at what cost, and what do we need to stop, slow, or double down on?

Change is not failing for lack of ideas. It is failing for lack of honest evaluation.


A Quick Orientation: Where “Evaluate” Fits in PRESSURE

The PRESSURE methodology moves organizations through eight linked disciplines:

  • P – Problem: Name the real business problem, not just the symptom.
  • R – Reflect: Surface assumptions, history, and emotions around the issue.
  • E – Evaluate: Test what is true today: data, options, constraints, and trade-offs.
  • S – Strategize: Choose a focused path forward.
  • S – Sacrifices: Decide what will not be done and what you are willing to give up.
  • U – Undertake: Execute with clear ownership and structure.
  • R – Reframe: Revisit whether you are still solving the right problem.
  • E – Engage: Sustain change through communication, learning, and feedback.

Many leadership teams jump straight from Problem to Strategize. In 2026’s environment—tariffs, energy costs, sustainability pressures, AI disruption, and a fatigued workforce—that leap is risky.

Evaluate is the circuit breaker. It slows unhelpful momentum just long enough to avoid expensive bets on the wrong things.

Why Evaluate Matters More in 2026

For manufacturing:

  • Trade and tariff uncertainty continues to distort input costs and planning horizons.
  • Supply chains are still recalibrating toward resilience and regionalization, not pure cost.
  • Smart manufacturing and automation are moving from “innovation” to basic expectation, but capital for large bets is constrained.

For professional services:

  • AI is shifting from a side experiment to a billable, embedded part of delivery.
  • Clients expect faster time-to-value, transparent pricing, and outcome-based fees.
  • Talent models are changing under pressure: hybrid work, skills-first hiring, and increased emphasis on “human” capabilities like judgment and leadership.

In this context, Evaluate is not “nice-to-have.” It is how leadership protects margins, credibility, and culture while still making decisive moves.

1. Evaluate the Problem, Not Just the Project

Most “evaluation” conversations in organizations focus on projects:

  • “Is the AI pilot on track?”
  • “Are we hitting the ERP milestones?”
  • “Is the new client onboarding workflow live?”

Those are execution questions. Evaluate begins one step earlier: Is this the right problem to be solving right now?

Consider a manufacturing example:
On-time delivery is slipping. The default “problem statement” becomes: “We need a more automated, AI-driven production schedule.” But a closer look might reveal that engineering changes are hitting the floor late and dirty; the real constraint is design-release discipline, not machine utilization.

A professional services example:
Margins are shrinking. The story becomes: “We need to push utilization and discount less.” Yet a portfolio evaluation might show that the real issue is bespoke work and weak scoping, not utilization—clients are asking for faster, productized solutions they can trust and re-use.

Evaluate Questions for Leaders

Use these prompts in your next leadership team meeting:

  1. If we froze this project today, what underlying business problem would still be there?
  2. How would we describe the problem in one sentence without mentioning our current solution?
  3. Who experiences this problem most acutely—customers, operators, managers, or support functions?
  4. What would be different in 12 months if this problem were actually solved?

If leaders cannot clearly answer these, the problem—not the team—is what needs evaluation.

2. Evaluate Capacity: Your 2026 “Change Budget”

Every organization has a finite change budget: time, attention, capital, and trust.

In 2026:

  • Manufacturers face high input costs and limited room for capital projects.
  • Professional services firms are stretching teams between billable work, AI enablement, and new productized offerings.

Evaluate asks: Given this reality, what is the true size of our change budget this year?

A practical way to do this:

  1. List current and planned change initiatives
    Include everything that materially changes how people work: system changes, restructures, new plants or regions, major client-delivery model shifts, pricing model changes.
  2. Rate each initiative on two dimensions (1–5 scale):
    • Business criticality (5 = existential to strategy or risk; 1 = optional improvement)
    • Human load (5 = heavy habit/role change across many people; 1 = low disruption)
  3. Plot and decide:
    • High criticality / high load: protect and support heavily.
    • High criticality / low load: accelerate.
    • Low criticality / high load: pause, kill, or radically simplify.
    • Low criticality / low load: bundle or park.

The Evaluate discipline gives leaders permission—and a structure—to say “not now” without guilt. Sacrifices is the letter that formalizes those cuts, but Evaluate is where the evidence is gathered.

3. Evaluate AI and Automation Bets with Less Hype, More Ground Truth

Both manufacturing and professional services are being told that 2026 is the year to “get serious” about AI.

The risk is not under-investment; it is mis-investment:

  • Pursuing AI use cases disconnected from current bottlenecks
  • Automating broken processes instead of fixing them
  • Overestimating savings and underestimating change effort and governance

An Evaluate-oriented lens for AI/automation:

  1. Start from a real, felt bottleneck
    • Manufacturing: throughput constraint, scrap/rework, planning accuracy, changeover times, quality escapes.
    • Professional services: proposal cycle time, onboarding friction, data preparation, reporting and insights lag.
  2. Ask three questions before funding:
    • Does this directly attack a top-three bottleneck we already feel in the P&L or customer experience?
    • Do frontline teams believe this solution will actually help them, and have they been part of shaping it?
    • Are governance, data quality, and accountability clear enough that we can trust what this AI or automation will do?
  3. Define evaluation criteria up front:
    • Leading indicators (e.g., cycle time, error rates, rework, proposal win rates)
    • Lagging indicators (margin, capacity released, customer satisfaction scores)
    • Human indicators (engagement, perceived usefulness, rework created elsewhere)

If these are fuzzy, the initiative is not ready for serious investment—no matter how compelling the pitch deck.
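The "define evaluation criteria up front" discipline can even be encoded as a simple readiness check: if any of the three indicator categories is still empty, the bet is fuzzy by definition. The structure and the example pilot below are hypothetical illustrations, assuming a team tracks criteria in this shape.

```python
# Sketch of an up-front evaluation-criteria readiness check.
# The field names and the example initiative are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class AIBet:
    name: str
    leading_indicators: list = field(default_factory=list)   # e.g. cycle time, error rates
    lagging_indicators: list = field(default_factory=list)   # e.g. margin, capacity released
    human_indicators: list = field(default_factory=list)     # e.g. engagement, perceived usefulness

    def ready_for_investment(self) -> bool:
        """A bet with any empty (fuzzy) indicator category is not ready."""
        return all([self.leading_indicators,
                    self.lagging_indicators,
                    self.human_indicators])

pilot = AIBet(
    name="AI-assisted proposal drafting",
    leading_indicators=["proposal cycle time", "proposal win rate"],
    lagging_indicators=["margin per engagement"],
    human_indicators=[],  # no one has defined how teams will experience this
)
print(pilot.ready_for_investment())  # False: the human indicators are still fuzzy
```

The check is deliberately blunt: it does not judge whether the indicators are the right ones, only whether the team has been forced to name all three categories before funding.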

4. The PRESSURE “Evaluate Canvas” – A 60-Minute Leadership Exercise

To make Evaluate concrete, use a simple canvas in your next monthly or quarterly review. Pick one major initiative—ideally in manufacturing operations or professional services delivery—and work through these five boxes.

  1. Outcomes
    • What specific business outcomes was this initiative supposed to deliver in 12–18 months?
    • How will we know—numerically and behaviorally—that it worked?
  2. Evidence (Now)
    • What data do we have so far (even if messy)?
    • Where are we clearly on track, off track, or unsure?
    • What are people on the ground saying about it?
  3. Effort
    • How much leadership time, capital, and organizational disruption has this required so far?
    • What is the run-rate cost—in money and attention—of continuing as-is?
  4. Risk
    • What risks grow if we stop or slow this work?
    • What risks grow if we keep going on the current path?
    • Where might we be over-optimistic?
  5. Next Move (Strategize + Sacrifices)
    • Given what we see, should we double down, re-scope, pause, or stop?
    • If we double down: what must we remove from the portfolio to make space?
    • If we pause/stop: how will we exit cleanly and protect trust?

This can be done in 60 minutes with structured facilitation. It pairs well with a 4C-style meeting design: Collect the evidence, Choose the focus areas, Create options, and Commit to a decision and owner.

5. Evaluating Culture: Psychological Safety and Learning Signals

Technical evaluation is not enough. In a year where AI, automation, and new pricing models are stretching people’s identity and skills, cultural signals matter just as much.

Evaluate should deliberately check:

  • Are people still raising concerns early, or are they going quiet?
    Silence is a lagging indicator of psychological safety and a leading indicator of nasty surprises.
  • Are frontline teams improvising workarounds that contradict the official process?
    That is often where the truth about feasibility and design flaws lives.
  • Are managers spending more time on performance policing or on coaching and learning?
    Under cost pressure, many organizations unconsciously shift back to control, which slows learning exactly when it is most needed.

Include these qualitative questions in your Evaluate conversations. They are not “soft” topics; they are early-warning systems for whether change will sustain.

6. How to Use “Evaluate” This Month

To put Evaluate into practice quickly, leaders in manufacturing and professional services can:

  1. Pick one mission-critical initiative to evaluate.
    • For manufacturing: a smart manufacturing or automation program, a nearshoring move, or a major capacity expansion.
    • For professional services: an AI-enabled delivery pilot, a new productized service, or a pricing-model shift.
  2. Run a 60–90 minute Evaluate session with the right cross-section in the room.
    • Include people who own the P&L, people who run the process, and people who live in it daily.
    • Use the Evaluate Canvas and the capacity questions; commit to one clear decision at the end.
  3. Communicate the outcome and the “why.”
    • If you double down: explain what you are stopping to make space.
    • If you pause or stop: honor the work done, share what you learned, and be explicit about what will replace it in the priority stack.

Organizations can absolutely run this discipline on their own. Many should.

Where an external partner adds value is in three places:

  • Objectivity – surfacing uncomfortable truths and trade-offs that are hard to name internally.
  • Structure – designing and facilitating the Evaluate sessions so they are fast, fair, and conclusive rather than circular.
  • Accountability – ensuring that Evaluate does not become a one-off exercise but a regular leadership habit tied to strategy, budgeting, and talent decisions.

In a year where volatility is a given, Evaluate is how leadership turns pressure into clarity—before committing more resources, energy, and goodwill than the organization can afford.


This post was written by Michael Nagorski, Founding Partner of Double Loop Performance. Michael partners with manufacturing and professional services leaders who are serious about making change stick—by slowing down just enough to ask better questions, confront trade-offs, and build the structural accountability that turns pressure into progress. If you want a thought partner to help your team evaluate its current change portfolio—and translate that clarity into concrete next steps—reach out to explore what working together could look like.

Contact Double Loop Performance or contact Mike directly through LinkedIn.