Event Detection: A Canary That Lives Beyond the Coal Mine


Albert Drouart

CPTO & Co-Founder of Pagos

October 20, 2021


When we announced Canary we highlighted the importance of monitoring your payments performance and touched on how we use machine learning to answer important questions. Today, we’re taking a deeper look into how the logic we’re building identifies meaningful events in your data.

What Are We Looking For?

In payments, there are numerous metrics we can measure, and most of the time they're unremarkable. But every so often, something goes wrong: an API change causes transaction volume to drop, a field name change drives up transaction fees, or a specific decline code spikes in frequency. Even a seemingly small decrease in approval rate can mean hundreds of frustrated customers, reduced lifetime value, bad reviews, and lost revenue until the issue is identified and addressed. We call these events, and when they occur, Canary by Pagos surfaces them for you.

What Does An Event Look Like?

Let’s use the approval rate as an example. If life were easy, it would stay the same over time:

If something caused it to drop by even 3%, it would be easy to see:

But in real life, the metric is noisy, and the same 3% drop in approval rate could look like this:

This is much trickier. We wanted a way to look at thousands and thousands of metrics like this and tell whether something had changed.

Signal From the Noise

The first step is to smooth the metric with a Gaussian filter (scipy.ndimage.gaussian_filter1d).
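As a sketch of this step, here is a synthetic noisy approval-rate series with a 3-point drop (the series, seed, and sigma are illustrative, not Pagos's actual data or parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(7)

# Illustrative approval-rate series: ~95% for 60 days, then a 3-point drop,
# with per-day measurement noise on top.
days = np.arange(120)
approval = np.where(days < 60, 0.95, 0.92) + rng.normal(0.0, 0.01, size=days.size)

# Gaussian smoothing; sigma trades noise suppression against edge sharpness.
smoothed = gaussian_filter1d(approval, sigma=5)
```

A larger sigma suppresses more noise but also blurs the very level shift we want to detect, so in practice it has to be tuned to the metric's sampling rate.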

This looks promising, but how can we determine where the actual change event occurred in our time series? We borrow a technique from image processing and use an edge detector (scipy.ndimage.sobel) combined with peak detection (scipy.signal.find_peaks) to pinpoint the location of the change event.
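A minimal sketch of this edge-plus-peak step on a synthetic series (the data is invented, and taking the single strongest peak is our simplification; a real pipeline would likely apply prominence or height criteria):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, sobel
from scipy.signal import find_peaks

rng = np.random.default_rng(7)

# Illustrative noisy metric with a 3-point drop at day 60.
days = np.arange(120)
approval = np.where(days < 60, 0.95, 0.92) + rng.normal(0.0, 0.01, size=days.size)
smoothed = gaussian_filter1d(approval, sigma=5)

# The 1-D Sobel response is largest where the smoothed level shifts.
edges = sobel(smoothed)

# Peak detection on the edge magnitude; the strongest peak marks the change.
peaks, _ = find_peaks(np.abs(edges))
change_idx = peaks[np.argmax(np.abs(edges)[peaks])]
```

On this synthetic series the strongest edge peak lands near day 60, where the drop was injected.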

Building an Event Classifier

Now we're ready to build our event classifier—something that will tell us if a metric has experienced a change event. Like any ML model, this classifier needs to train on a large set of examples. We built a simulator to generate many thousands of examples of noisy metrics—some where a change occurs, some where it doesn't. Then we extracted features from the synthesized metrics that remain consistent regardless of:

  1. The scale of the metric (some metrics could have nominal values 1000 times larger than others)

  2. When the change event occurred (early or late in the series)

  3. The size of the change, if there was one

  4. The noisiness of the metric

  5. The direction of the change (“up” or “down”)
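One way to picture such invariant features (an illustrative sketch, not the classifier Pagos actually ships): z-score the series for scale invariance, take the magnitude of the edge response for direction invariance, and summarize it with ratios of order statistics so timing, event size, and noise level drop out.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, sobel

def extract_features(series: np.ndarray) -> np.ndarray:
    """Illustrative invariant features for a change-event classifier."""
    # Scale invariance: z-score so the metric's nominal magnitude drops out.
    z = (series - series.mean()) / (series.std() + 1e-12)
    # Direction invariance: work with the magnitude of the edge response.
    edges = np.abs(sobel(gaussian_filter1d(z, sigma=5)))
    strongest = np.sort(edges)[::-1][:2]
    background = np.median(edges) + 1e-12
    # Timing invariance: order statistics ignore *where* the peak occurs;
    # ratios make the features insensitive to the absolute noise level.
    return np.array([
        strongest[0] / background,              # strongest edge vs. typical edge
        strongest[0] / (strongest[1] + 1e-12),  # dominance of the top peak
    ])
```

Because the features are ratios of the normalized edge response, rescaling the input by any constant leaves them unchanged.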

Filtering by Probability

Here’s what the output of our classifier looks like for our example metric:

{'change_direction': 'down', 'probability': 0.709, 'index': 12, 'pct_diff': -2.179, 'mean_before_event': 0.946, 'mean_after_event': 0.926}

Our event classifier gives us more than a True or False flag; what we really want to know is the probability an event occurred. That lets you decide how to threshold which events to hear about, and to customize how Canary works for you. For hypercritical metrics, following up on every event with greater than 40% probability of representing an actual change might make sense. For others, limiting notifications to change events with greater than 99% probability and a percent difference larger than 20% might be appropriate.
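As a sketch of how such per-metric thresholds might be wired up (the metric names and cutoffs here are invented for illustration):

```python
# Invented per-metric notification rules: probability and percent difference
# must both clear a metric-specific bar before the event is surfaced.
RULES = {
    "approval_rate": {"min_probability": 0.40, "min_abs_pct_diff": 0.0},
    "refund_rate": {"min_probability": 0.99, "min_abs_pct_diff": 20.0},
}
DEFAULT_RULE = {"min_probability": 0.95, "min_abs_pct_diff": 5.0}

def should_notify(metric: str, event: dict) -> bool:
    """Return True if a classified event clears this metric's thresholds."""
    rule = RULES.get(metric, DEFAULT_RULE)
    return (event["probability"] >= rule["min_probability"]
            and abs(event["pct_diff"]) >= rule["min_abs_pct_diff"])

# The classifier output for the example metric:
event = {"change_direction": "down", "probability": 0.709, "index": 12,
         "pct_diff": -2.179, "mean_before_event": 0.946,
         "mean_after_event": 0.926}
```

Under these invented rules, the 70.9%-probability event would trigger a notification for a hypercritical metric but be filtered out for a metric with a 99% bar.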

This is the beginning of putting your payments data to work for you. You know best what you need for your business, and Canary is watching out for you.

No-code beta integration is now open to businesses using Braintree or Stripe (Adyen coming soon!), and our API is available for use with other processors.

Register to join.
