A discussion currently bubbling up on Lobsters touches a nerve many will recognize: the organization has bought AI subscriptions, built infrastructure, shipped features to production, and now management wants to know what it's actually getting for the money. The problem? No one has a proper answer.

The thread isn't dramatic in tone, but that's precisely what makes it interesting. Here, people with real jobs and real budgets are genuinely asking each other: “How the hell do you do this?” It's no longer an academic exercise.

This mirrors what the research bluntly confirms: around 74% of companies fail to document concrete value from their AI investments. One of the main reasons is simple: they started without establishing a baseline to measure from. You can't show progress if you don't know where you started.

Many companies deployed AI first and started thinking about measurement afterward. That's backward.

There are some recurring patterns in the discussion. People are measuring activity — how many users are on Copilot, the number of API calls, "productivity" as a buzzword — rather than actual business results. It's a bit like measuring the success of a workout plan by counting how many times you've been to the locker room.

Those who actually succeed seem to be doing something more nuanced: they start with soft signals early on (are people using the tools, are they satisfied, is onboarding seamless?) and gradually move towards harder numbers — time saved per task, cost reduction, revenue per employee — as usage matures. It's not sexy, but it's honest.
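Those harder numbers can be reduced to a back-of-the-envelope calculation. Here is a minimal sketch in Python; every figure in it is a made-up assumption for illustration (the thread provides no such data), and the function name `ai_roi` is hypothetical:

```python
# Hypothetical ROI sketch. All numbers below are assumptions, not data
# from the discussion; the point is the shape of the calculation.

def ai_roi(hours_saved_per_dev_per_week: float,
           hourly_cost: float,
           devs: int,
           weekly_tool_cost: float) -> float:
    """Return weekly ROI as a ratio: (value - cost) / cost."""
    value = hours_saved_per_dev_per_week * hourly_cost * devs
    return (value - weekly_tool_cost) / weekly_tool_cost

# Example with invented numbers: 50 developers, 2 hours saved each per
# week at $80/hour, against a $4,000/week subscription bill.
roi = ai_roi(hours_saved_per_dev_per_week=2, hourly_cost=80,
             devs=50, weekly_tool_cost=4000)
print(f"{roi:.0%}")  # $8,000 of value against $4,000 of cost -> 100%
```

The hard part, of course, is not the arithmetic but the inputs: "hours saved per task" is exactly the kind of number you can only trust if you measured a baseline before rollout.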

Why is this worth paying attention to right now? Because we are approaching an inflection point. 2024 and 2025 were the "try everything" years; 2026 is shaping up to be the "show me the numbers" year. Companies that cannot document ROI will see their AI budgets cut. Those that have thought about measurement from day one will walk away with the funding, and with the good people.

These are still early signals from a community discussion, not a representative study. But when developers and tech leads rally around a problem in this way, it's often because something is genuinely going wrong within their organizations.

Worth keeping an eye on.