An Example Monitoring Use Case

Say hello to Smart Brandz Inc. ("SB" for short), a marketing analytics company. In its core offering, SB analyzes social media posts (text) in which its customers' brands are mentioned.

Specifically, SB has two ML models deployed in production, both operating on these social media posts. The first is a classifier that tags posts with customer brands; the second produces a sentiment score (from negative through neutral to positive).

SB wants transparency into the behavior and performance of these models over time. They are worried about data integrity, concept drift, and other performance blind spots.

SB decided to set up Mona to monitor their data and models. We'll refer back to SB as we explain how Mona works.

In a nutshell:

  • SB decided to monitor two ML models running on social media posts.
  • SB set up the data export to Mona directly from their code at the time of inference. For each social media post, they export:
    • Model outputs, e.g., classification results, sentiment scores, confidence intervals. The intent
      was to track leading indicators of model performance, not just precision and recall, so that
      anomalous behavior is detected proactively, before business KPIs are negatively impacted.
    • Metadata associated with the social media posts, e.g., text length, time of day, geographic
      location, language -- but not the posts themselves.
  • Mona was automatically configured and began aggregating the exported data into a monitoring dataset.
  • Mona uses smart anomaly detection algorithms to find specific segments of SB's data in which the models underperform, encounter concept drift, or exhibit sudden changes in data behavior.
  • SB began accessing the monitored data and insights via the Mona dashboard.
  • SB began maintaining and tweaking their configuration of Mona to get the best results!
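
To make the export step above concrete, here is a minimal Python sketch of how SB might assemble one monitoring record at inference time. The field names and the helper function are illustrative assumptions for this example, not Mona's actual API; the key point is that only model outputs and post metadata (e.g., text length) are exported, never the post text itself.

```python
from datetime import datetime, timezone

def build_export_record(post_text, brand_tags, sentiment_score,
                        confidence, language, location):
    """Build one monitoring record for a social media post (hypothetical schema).

    The raw post text is used only to derive metadata (text length)
    and is deliberately excluded from the exported record.
    """
    return {
        # Model outputs
        "brand_tags": brand_tags,
        "sentiment_score": sentiment_score,
        "confidence": confidence,
        # Post metadata -- derived from the text, which is NOT exported
        "text_length": len(post_text),
        "hour_of_day": datetime.now(timezone.utc).hour,
        "language": language,
        "geo_location": location,
    }

record = build_export_record(
    post_text="Loving my new @SmartWidget, works great!",
    brand_tags=["SmartWidget"],
    sentiment_score=0.92,  # positive end of the scale
    confidence=0.87,
    language="en",
    location="US",
)
assert "post_text" not in record  # raw text never leaves SB's systems
```

A record like this would then be sent to Mona directly from SB's inference code, one record per processed post.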

We reference SB in the following chapters to illustrate the platform's core concepts.


What’s Next