A campaign report says the open rate is up, the click-through rate looks healthy, and the dashboard suggests things are moving in the right direction. For many marketing leaders, that is where the conversation ends.

But in dealer networks, strong averages can hide weak execution.

A campaign can generate solid engagement numbers and still fail where it matters most. The wrong customers may have received the message. Dealers may have launched late. Audience rules may have been adjusted locally. Follow-up may have broken down after the click. On paper, the campaign performed. In reality, it drifted.

The problem is not just measurement noise

Open rates and clicks were never a complete picture, and they are becoming even less reliable as signals of campaign quality now that privacy features and automated email prefetching inflate opens. More importantly, they tell you very little about whether the campaign reached the right customer at the right moment.

That is the real test in automotive retail.

Did the message match the customer’s actual context? Was it relevant to their vehicle, service status, contract timing, or lifecycle stage? Did it arrive when the customer could still act on it? And did the network execute the campaign in a way that preserved that relevance from central design to local delivery?

If the answer to those questions is unclear, a good click rate can be dangerously reassuring.

Why averages hide the real risk

Centrally, a campaign may look successful because the network average is acceptable. But averages are poor at showing execution variance.

One dealer may have applied the audience logic exactly as intended. Another may have widened the list to fill workshop capacity. A third may have launched days late because local priorities got in the way. All three outcomes end up in the same report.
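To make the point concrete, here is a minimal sketch with hypothetical numbers for those three dealers. The dealer names and open rates are invented for illustration; the mechanics are what matter: one average, three very different executions.

```python
# Hypothetical per-dealer open rates for a single campaign.
# Dealer A applied the audience logic as designed; Dealer B widened
# the list to fill workshop capacity; Dealer C launched days late.
open_rates = {"Dealer A": 0.31, "Dealer B": 0.24, "Dealer C": 0.17}

# The network average is what the central report shows.
network_average = sum(open_rates.values()) / len(open_rates)
print(f"Network average: {network_average:.0%}")  # prints "Network average: 24%"

# The spread across dealers is what the report hides.
spread = max(open_rates.values()) - min(open_rates.values())
print(f"Spread across dealers: {spread:.0%}")  # prints "Spread across dealers: 14%"
```

A 24% average would pass most campaign reviews, yet it sits on top of a 14-point gap between the best and worst execution in the network.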

This is why campaign reviews so often end up debating creative, subject lines, or channel mix when the real issue sits elsewhere. The question is not only whether customers engaged. It is whether the campaign was executed with enough discipline for the results to mean anything.

What marketing leaders should measure instead

Better campaign measurement starts before the first open or click. It begins with relevance and execution visibility.

That means asking whether the intended audience rules stayed intact, whether launch timing held across the network, and whether follow-up happened within the expected window. It means looking beyond engagement rates to see whether the message matched real customer context and whether local execution preserved that logic.

Because the real danger is not a low open rate.

It is spending budget on the wrong audience, getting reassuring averages back, and learning too late that the campaign never stood a fair chance of working.

Before your next campaign review, ask a harder question: are you measuring customer engagement, or are you measuring whether execution stayed true to the strategy?