SegmentSizeSuddenChange

Description

Sudden Change verses configure Mona to find time-series-based anomalies, in which the latest point in a behavioral time series behaves very differently from the rest of the series. The latest point is considered an anomaly if it is over (or under) a threshold. To determine whether the latest point is over/under this threshold, we first measure the distance between the latest point's value and some top (for over) or bottom (for under) percentile (a parameter). We then normalize this distance by the difference between that same top/bottom percentile and the median value of the time series.

'Segment Size Sudden Change' verses configure Mona to find segments whose absolute size (the number of contexts in the segment) per time frame has gone through a "sudden change" (as defined above) in the latest time frame, compared to the rest of the segment-size time series.
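The anomaly-level calculation described above can be sketched as follows. This is a minimal illustration, not Mona's internal implementation; in particular, the nearest-rank percentile method is an assumption:

```python
import statistics

def anomaly_level(series, top_percentile=5.0):
    """Anomaly level of the latest point, 'asc' direction: distance of the
    latest point from the median, normalized by the distance between the
    (100 - top_percentile)th percentile and the median."""
    latest, history = series[-1], series[:-1]
    median = statistics.median(history)
    # Nearest-rank percentile over the series without the latest point.
    idx = min(int((100 - top_percentile) / 100 * len(history)), len(history) - 1)
    benchmark = sorted(history)[idx]
    return (latest - median) / (benchmark - median)
```

A latest point whose level exceeds min_anomaly_level is flagged as an anomaly; the epsilon param (described below) guards the case where the benchmark and the median nearly coincide.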

{
  "stanzas": {
    "stanza_name": {
      "verses": [
        {
          "type": "SegmentSizeSuddenChange",
          "segment_baseline_by": [
            "company_id",
            "country"
          ],
          "segment_by": [
            "detected_language"
          ],
          "min_culprit_size": 50,
          "min_anomaly_level": 3,
          "time_series_points": "60d"
        }
      ]
    }
  }
}

In this example we see a SegmentSizeSuddenChange verse configured to search for statistically significant changes in the size of any specific "detected_language" segment in the last day, compared to the 59 days prior, within any combination of "country" and "company_id" values as the baseline. Note that this verse type requires no "metrics" param. The "min_culprit_size" param filters out countries or companies that have fewer than 50 records on the day of the anomaly (the last day). We use "min_anomaly_level" to define that a sudden change occurs when the difference between the last day's value and the median of the entire time series is at least 3 times the difference between the 95th percentile of the entire time series and the median.
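Tying the example's parameters together, the verse's filters could be sketched like this. The parameter names mirror the config; the evaluation itself happens inside Mona, so treat this as an illustration only:

```python
import statistics

def passes_example_verse(daily_counts, min_anomaly_level=3.0,
                         min_culprit_size=50, top_percentile=5.0,
                         epsilon=0.01):
    """Would the latest day's segment size be flagged by the verse above?"""
    latest, history = daily_counts[-1], daily_counts[:-1]
    if latest < min_culprit_size:  # "min_culprit_size" filter
        return False
    median = statistics.median(history)
    # Nearest-rank approximation of the 95th percentile of the prior 59 days.
    idx = min(int((100 - top_percentile) / 100 * len(history)), len(history) - 1)
    benchmark = sorted(history)[idx]
    # epsilon (default 0.01) keeps very stable series from dividing by ~0.
    level = (latest - median) / max(benchmark - median, epsilon)
    return level >= min_anomaly_level  # "min_anomaly_level" filter
```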

Basic Params

cadence (type: Cadence, default: 1d)
The cadence for evaluation of this verse. Only the following cadences are valid: minutes: 1m, 5m, 10m, 15m, 20m, 30m; hours: 1h, 2h, 3h, 4h, 6h, 8h, 12h; days: 1d, 2d, 3d, 4d, 5d, 6d; weeks: 1w, 2w, 3w, 4w, 5w.
{ "cadence": "6h" }
default_urgency (type: Urgency, default: normal)
The urgency class for insights created using this verse. Currently supports two values: "normal" (default) and "high". If set to "normal", specific thresholds for "high" urgency can be set using the parameters prefixed with "high_urgency". If set to "high", the threshold parameters prefixed with "high_urgency" are not considered at all, since all insights of this verse are considered to have "high" urgency.
{ "default_urgency": "high" }
description (type: str)
Verse description.
{ "description": "searches for asc drifts in output_score" }
metrics (type: MetricsList, default: ())
Relevant metrics to search anomalies for in the verse. Relevant only for verse types that search for anomalies in metric behavior.
{ "metrics": [ "top_score", "delta_top_to_second_score" ] }
min_anomaly_level (type: PositiveFloat, default: 2.5)
Sets the threshold for the minimal anomaly level for which an insight will be generated. The anomaly level of this verse is the difference between the last point's value and the median value, normalized by the difference between the top_percentile_benchmark and the median.
{ "min_anomaly_level": 2 }
min_segment_size (type: PositiveInt, default: 5)
Minimal segment size to require for the entire time series.
{ "min_segment_size": 10 }
min_segment_size_fraction (type: InclusiveFraction, default: 0)
Minimal segment size, as a fraction of the baseline segment, that a segment must have in order to be considered in the search.
{ "min_segment_size_fraction": 0.05 }
name (type: str, required)
The name of the verse. Please note, a verse's name must be different from the names of other verses in the same stanza.
{ "name": "confidence_outliers" }
segment_by (type: SegmentationsList, default: ())
The dimensions to use to segment the data when searching for anomalies. This list must be a sublist of the arc class's dimensions. Limiting the possible values of a specific segmentation field on which insights can be generated can be done using the "avoid_values" and "include_only_values" keys in the segmentation JSON object, as seen in the example.
{ "segment_by": [ "city", "bot_id", {"name": "provider-code", "avoid_values": ["zoom"]}, {"name": "selected-language", "avoid_values": ["eng", "spa"]}, {"name": "country", "include_only_values": ["jpn"]} ] }
time_resolution (type: TimeResolution, default: 1d)
Time series time resolution period. Expected format is "<number><unit>", where <number> can be any positive integer and <unit> is currently "d" (days) or "w" (weeks); e.g., "1d" means a 1-day period.
{ "time_resolution": "1w" }
time_series_points (type: PositiveInt, default: 60)
Desired size of the entire time series.
{ "time_series_points": 30 }
trend_directions (type: TrendDirections, default: ('asc', 'desc'))
A list of allowed anomaly trend directions: either 'asc' for ascending (anomalies in which the found value is LARGER THAN the relevant benchmark) or 'desc' for descending (anomalies in which the found value is SMALLER THAN the relevant benchmark).
{ "trend_directions": [ "asc" ] }

Advanced Misc Params

avoid_same_field_for_segment_and_metric (type: bool, default: True)
If True, insights will not be created for segments based on the same field as the given metric.
{ "avoid_same_field_for_segment_and_metric": false }
cookbook (type: Cookbook)
Instructions on how to read an insight generated by this verse. Expected format is Markdown.
{ "cookbook": "Use **this param** to add instructions using [markdown](https://daringfireball.net/projects/markdown/syntax) syntax on how to read insights generated from this `verse`, and what the insight recipient should do with it." }
create_extra_adjacent_signals (type: bool, default: True)
If set to true (the default), causes Mona to create new signals from existing signals with adjacent numeric segments. So if there are two signals defined on 1 <= x < 2 and 2 <= x < 3, Mona will automatically create a new signal with 1 <= x < 3. This allows the Mona clustering algorithm to create an insight with the most relevant segment for its main signal.
{ "create_extra_adjacent_signals": false }
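The adjacent-segment merge can be sketched as follows, with half-open numeric ranges standing in for numeric segments. Signal creation in Mona is internal; this is only a toy illustration of the described behavior:

```python
def extra_adjacent_signals(ranges):
    """For every pair of half-open numeric segments that share a boundary
    (e.g. 1 <= x < 2 and 2 <= x < 3), emit the combined segment (1 <= x < 3)."""
    return [(lo1, hi2)
            for (lo1, hi1) in ranges
            for (lo2, hi2) in ranges
            if hi1 == lo2]
```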
disabled (type: bool, default: False)
If set to True, this verse won't be used when searching for new insights.
{ "disabled": true }
expire_after (type: TimePeriodOrEmpty, default: 3d)
Insights detected by this verse will continue to be considered active for at least this amount of time after the last time they were detected.
{ "expire_after": "2d" }
relevant_data_time_buffer (type: TimePeriodOrEmpty)
Adds an end-time buffer to the insight generation. For example, if this param's value is "1d", insights are generated for a day before the latest received data. This is useful for processes in which it takes a specific period of time to get all the healthy monitoring data in place.
{ "relevant_data_time_buffer": "1d" }
timestamp_field_name (type: TimestampField, default: timestamp)
The field that is used as the time dimension for insight generation.
{ "timestamp_field_name": "run_end_time" }
timezone (type: Timezone, default: UTC)
The timezone used to aggregate daily data points. Accepts any IANA time zone ID (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
{ "timezone": "Asia/Hong_Kong" }

Advanced Score Calculation Params

score_anomaly_level_exponent (type: float, default: 1)
An exponent applied to the anomaly level in the score, after the anomaly level is multiplied by the given multiplier.
{ "score_anomaly_level_exponent": 0.5 }
score_anomaly_level_multiplier (type: float, default: 1)
Multiplier applied to the anomaly level before applying the exponent.
{ "score_anomaly_level_multiplier": 1.2 }
score_segment_size_exponent (type: float, default: 0.5)
An exponent applied to the segment's size (or relative size) in the combined score. If score_segment_size_log_base is not 0, the exponent is applied before the logarithm.
{ "score_segment_size_exponent": 1.5 }
score_segment_size_log_base (type: float, default: 0)
Changes the log base used for the segment's size (or relative size) in the combined score; setting 0 removes the log altogether. Unless it is 0, this value must be greater than 1.
{ "score_segment_size_log_base": 5 }
score_use_segment_absolute_size (type: bool, default: True)
If true, use the segment's absolute size in the combined score; otherwise use the segment's size relative to its baseline (a fraction).
{ "score_use_segment_absolute_size": false }
top_percentile_benchmark (type: PositiveFloat, default: 5)
Defines the top/bottom percentile to search for in the time series (after the last point is removed) to serve as a benchmark for what counts as a large difference from the median (in that top/bottom direction). For verses with 'desc' in trend_directions, the bottom percentile is used as is; for verses with 'asc' in trend_directions, the (100 - top_percentile_benchmark)th percentile is used. For example, with a value of 5, the 5th percentile is used as the benchmark for the bottom threshold, whereas the 95th percentile is used for the top threshold.
{ "top_percentile_benchmark": 10 }
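Reading the score_* params above together, one plausible reading of the combined score is sketched below. How Mona actually combines the anomaly-level term with the segment-size term is not spelled out here, so the multiplication of the two terms is an assumption:

```python
import math

def combined_score(anomaly_level, segment_size,
                   score_anomaly_level_multiplier=1.0,
                   score_anomaly_level_exponent=1.0,
                   score_segment_size_exponent=0.5,
                   score_segment_size_log_base=0.0):
    # Anomaly-level term: multiplier first, then the exponent.
    level_term = (score_anomaly_level_multiplier
                  * anomaly_level) ** score_anomaly_level_exponent
    # Segment-size term: exponent first, then (optionally) the log.
    size_term = segment_size ** score_segment_size_exponent
    if score_segment_size_log_base != 0:
        size_term = math.log(size_term, score_segment_size_log_base)
    # ASSUMPTION: the two terms combine multiplicatively.
    return level_term * size_term
```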

Anomaly Thresholds Params

epsilon (type: NonNegativeFloat, default: 0.01)
Minimal required absolute difference between value and benchmark. Used to account for statistical errors in stable time series.
{ "epsilon": 0.5 }
high_urgency_min_anomaly_level (type: PositiveFloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_anomaly_level. See the "min_anomaly_level" param for more details.
{ "high_urgency_min_anomaly_level": 1.5 }
high_urgency_min_score (type: FloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_score. See the "min_score" param for more details.
{ "high_urgency_min_score": 20 }
min_anomaly_level (type: PositiveFloat, default: 2.5)
Sets the threshold for the minimal anomaly level for which an insight will be generated. The anomaly level of this verse is the difference between the last point's value and the median value, normalized by the difference between the top_percentile_benchmark and the median.
{ "min_anomaly_level": 2 }
min_score (type: float, default: 0)
The minimal score for a signal to be considered an anomaly.
{ "min_score": 4 }

Data Filtering Params

avoid_segmenting_on_missing (type: bool, default: False)
When true, insights will not be generated for segments which are (partially or fully) defined by a missing field.
{ "avoid_segmenting_on_missing": true }
baseline_segment (type: Segment, default: {})
The baseline segment of this verse. This segment defines "the world" as far as this verse is concerned; only data from this segment will be considered when finding insights.
{ "baseline_segment": { "model_version": [ { "value": "V1" } ] } }
enhance_exclude_segments (type: bool, default: False)
If True, exclude segments added at any level of configuration (the verse, the stanza, or the stanzas_global_defaults) are ADDED to the excluded segments of higher-level defaults, if any exist. For example, if stanzas_global_defaults has a single excluded segment of {dimensionA: MISSING}, and the stanza (or verse) has a single excluded segment of {dimensionB: 0}, then with enhance_exclude_segments set to True the excluded segments will include both {dimensionA: MISSING} and {dimensionB: 0} and will filter on either one. Otherwise (if enhance_exclude_segments is False), the higher-level defaults are overridden by just the one segment in the verse, {dimensionB: 0}.
{ "enhance_exclude_segments": true }
exclude_segments (type: SegmentsList, default: ())
Segments to exclude from the baseline of this verse. Data matching these segments is excluded from the search, both from tested segments and from any benchmarks used to find the anomalies. Note that whether this param overrides definitions of exclude_segments at other levels is decided by enhance_exclude_segments.
{ "exclude_segments": [ { "text_length": [ { "min_value": 0, "max_value": 100 } ] } ] }

Related Anomalies Params

avoid_related_anomalies_for (type: MetricsList, default: ())
A list of fields to avoid checking for anomalies correlated with the main anomaly in a generated insight. See "find_related_anomalies_for" for further details.
{ "avoid_related_anomalies_for": ["delta_top_to_second_score"] }
find_related_anomalies_for (type: MetricsList, default: ())
A list of fields to check for anomalies correlated with the main anomaly in a generated insight. These correlated anomalies might help with understanding the possible cause of an insight. Leave empty to search in all fields.
{ "find_related_anomalies_for": ["sentiment_score", "confidence_interval"] }
related_anomalies_min_correlation (type: NonNegativeFloat, default: 0.3)
Minimal Pearson correlation between the metric on which an anomaly was found and another metric with an anomaly on the same segment, below which Mona will not use the other metric as a related anomaly.
{ "related_anomalies_min_correlation": 0.5 }

Required Params

name (type: str, required)
The name of the verse. Please note, a verse's name must be different from the names of other verses in the same stanza.
{ "name": "confidence_outliers" }

Segmentation Params

always_segment_baseline_by (type: SegmentationsList, default: ())
A list of dimensions to always segment the baseline segment by. This is useful when separating the world into completely unrelated parts, e.g., when you have a different model developed for each customer and there's no need to look for insights across different customers. Limiting the possible values of a specific segmentation field on which insights can be generated can be done using the "avoid_values" and "include_only_values" keys in the segmentation JSON object, as seen in the example.
{ "always_segment_baseline_by": [ "country", {"name": "city", "avoid_values": ["Tel Aviv"]} ] }
avoid_segmenting_on_missing (type: bool, default: False)
When true, insights will not be generated for segments which are (partially or fully) defined by a missing field.
{ "avoid_segmenting_on_missing": true }
max_segment_baseline_by_depth (type: PositiveInt, default: 2)
The maximum number of fields Mona should combine for segmenting the baseline (if "segment_baseline_by" fields are given).
{ "max_segment_baseline_by_depth": 3 }
max_segment_by_depth (type: PositiveInt, default: 2)
The maximum number of fields Mona should combine to create sub-segments to search in. Baseline segment fields and parent fields are "free" and are not counted toward depth. Note that this parameter has an exponential effect on performance and should be kept within SLAs.
{ "max_segment_by_depth": 3 }
min_segment_baseline_by_depth (type: NonNegativeInt, default: 0)
The minimum number of fields Mona should combine for segmenting the baseline (if "segment_baseline_by" fields are given).
{ "min_segment_baseline_by_depth": 1 }
min_segment_by_depth (type: NonNegativeInt, default: 0)
The minimum number of fields Mona should combine to create sub-segments to search in.
{ "min_segment_by_depth": 1 }
segment_baseline_by (type: SegmentationsList, default: ())
A list of dimensions to potentially segment the baseline segment by. Limiting the possible values of a specific segmentation field on which insights can be generated can be done using the "avoid_values" and "include_only_values" keys in the segmentation JSON object.
{ "segment_baseline_by": [ "model_version" ] }
segment_by (type: SegmentationsList, default: ())
The dimensions to use to segment the data when searching for anomalies. This list must be a sublist of the arc class's dimensions. Limiting the possible values of a specific segmentation field on which insights can be generated can be done using the "avoid_values" and "include_only_values" keys in the segmentation JSON object, as seen in the example.
{ "segment_by": [ "city", "bot_id", {"name": "provider-code", "avoid_values": ["zoom"]}, {"name": "selected-language", "avoid_values": ["eng", "spa"]}, {"name": "country", "include_only_values": ["jpn"]} ] }

Size Thresholds Params

baseline_min_segment_size (type: PositiveFloat, default: 1)
Minimal segment size for the baseline segment.
{ "baseline_min_segment_size": 100 }
high_urgency_baseline_min_segment_size (type: PositiveFloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to baseline_min_segment_size. See the "baseline_min_segment_size" param for more details.
{ "high_urgency_baseline_min_segment_size": 1000 }
high_urgency_min_culprit_size (type: NonNegativeFloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_culprit_size. See the "min_culprit_size" param for more details.
{ "high_urgency_min_culprit_size": 500 }
high_urgency_min_segment_size (type: PositiveIntOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_segment_size. See the "min_segment_size" param for more details.
{ "high_urgency_min_segment_size": 1000 }
high_urgency_min_segment_size_fraction (type: InclusiveFractionOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_segment_size_fraction. See the "min_segment_size_fraction" param for more details.
{ "high_urgency_min_segment_size_fraction": 0.2 }
max_segment_size (type: PositiveIntOrNone, default: None)
Maximal segment size a segment may have (bigger segments won't be considered in the search). Leave empty to not have such a threshold.
{ "max_segment_size": 10000 }
max_segment_size_fraction (type: NonInclusiveFractionOrNone, default: None)
Maximal segment size, as a fraction of the baseline segment, that a segment may have. Leave empty to not have such a threshold.
{ "max_segment_size_fraction": 0.2 }
min_culprit_size (type: NonNegativeFloat, default: 0)
Minimal absolute size (number of relevant contexts) of the checked (latest) point.
{ "min_culprit_size": 50 }
min_exist_freq (type: InclusiveFraction, default: 0)
The minimum fraction of the time series frames in which the segment had any data. Why is this needed? Verses might rely on data that exists only in a few days (or whatever the time resolution is). Such cases are usually much less stable and more noisy; they are usually not interesting and should be filtered out with this param. Cases where that behavior is expected should reduce this value significantly.
{ "min_exist_freq": 0.5 }
min_point_size (type: NonNegativeInt, default: 0)
The minimal absolute size (number of relevant contexts) of all the time series points except the checked point. Points under this threshold are ignored when calculating the percentiles of the time series.
{ "min_point_size": 20 }
min_segment_size (type: PositiveInt, default: 5)
Minimal segment size to require for the entire time series.
{ "min_segment_size": 10 }
min_segment_size_fraction (type: InclusiveFraction, default: 0)
Minimal segment size, as a fraction of the baseline segment, that a segment must have in order to be considered in the search.
{ "min_segment_size_fraction": 0.05 }

Time Related Params

cadence (type: Cadence, default: 1d)
The cadence for evaluation of this verse. Only the following cadences are valid: minutes: 1m, 5m, 10m, 15m, 20m, 30m; hours: 1h, 2h, 3h, 4h, 6h, 8h, 12h; days: 1d, 2d, 3d, 4d, 5d, 6d; weeks: 1w, 2w, 3w, 4w, 5w.
{ "cadence": "6h" }
expire_after (type: TimePeriodOrEmpty, default: 3d)
Insights detected by this verse will continue to be considered active for at least this amount of time after the last time they were detected.
{ "expire_after": "2d" }
relevant_data_time_buffer (type: TimePeriodOrEmpty)
Adds an end-time buffer to the insight generation. For example, if this param's value is "1d", insights are generated for a day before the latest received data. This is useful for processes in which it takes a specific period of time to get all the healthy monitoring data in place.
{ "relevant_data_time_buffer": "1d" }
time_resolution (type: TimeResolution, default: 1d)
Time series time resolution period. Expected format is "<number><unit>", where <number> can be any positive integer and <unit> is currently "d" (days) or "w" (weeks); e.g., "1d" means a 1-day period.
{ "time_resolution": "1w" }
timestamp_field_name (type: TimestampField, default: timestamp)
The field that is used as the time dimension for insight generation.
{ "timestamp_field_name": "run_end_time" }
timezone (type: Timezone, default: UTC)
The timezone used to aggregate daily data points. Accepts any IANA time zone ID (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
{ "timezone": "Asia/Hong_Kong" }

Urgency Params

default_urgency (type: Urgency, default: normal)
The urgency class for insights created using this verse. Currently supports two values: "normal" (default) and "high". If set to "normal", specific thresholds for "high" urgency can be set using the parameters prefixed with "high_urgency". If set to "high", the threshold parameters prefixed with "high_urgency" are not considered at all, since all insights of this verse are considered to have "high" urgency.
{ "default_urgency": "high" }
high_urgency_baseline_min_segment_size (type: PositiveFloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to baseline_min_segment_size. See the "baseline_min_segment_size" param for more details.
{ "high_urgency_baseline_min_segment_size": 1000 }
high_urgency_min_anomaly_level (type: PositiveFloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_anomaly_level. See the "min_anomaly_level" param for more details.
{ "high_urgency_min_anomaly_level": 1.5 }
high_urgency_min_culprit_size (type: NonNegativeFloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_culprit_size. See the "min_culprit_size" param for more details.
{ "high_urgency_min_culprit_size": 500 }
high_urgency_min_score (type: FloatOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_score. See the "min_score" param for more details.
{ "high_urgency_min_score": 20 }
high_urgency_min_segment_size (type: PositiveIntOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_segment_size. See the "min_segment_size" param for more details.
{ "high_urgency_min_segment_size": 1000 }
high_urgency_min_segment_size_fraction (type: InclusiveFractionOrNone, default: None)
Threshold for separating between "high" and "normal" urgency insights with regard to min_segment_size_fraction. See the "min_segment_size_fraction" param for more details.
{ "high_urgency_min_segment_size_fraction": 0.2 }
high_urgency_require_all_criteria (type: bool, default: True)
Decides whether to use an 'AND' or 'OR' condition across all high_urgency threshold params.
{ "high_urgency_require_all_criteria": false }

Visuals and Enrichments Params

field_vectors (type: FieldVectorsList, default: ())
Lists metric vectors for the frontend to show on an insight card of this verse. A value in this field can either be a string (in which case the string should correspond to a kapi_vector name in the config) or an array (in which case the array is treated as an ad-hoc kapi vector defined specifically for this verse).
{ "field_vectors": [ "field_vector_group_1", "field_vector_group_2", "field_vector_group_3" ] }
investigate_no_drill (type: bool, default: False)
Dictates the link to the investigations page added to the found insights. If True, the link points to the investigations page with a drilldown to the segment that was found. If False, the link points to the investigations page without a drilldown, but with the found segment selected, so it can be compared with a benchmark of a higher level.
{ "investigate_no_drill": true }
time_resolution (type: TimeResolution, default: 1d)
Time series time resolution period. Expected format is "<number><unit>", where <number> can be any positive integer and <unit> is currently "d" (days) or "w" (weeks); e.g., "1d" means a 1-day period.
{ "time_resolution": "1w" }

Wizard Params

cadence (type: Cadence, default: 1d)
The cadence for evaluation of this verse. Only the following cadences are valid: minutes: 1m, 5m, 10m, 15m, 20m, 30m; hours: 1h, 2h, 3h, 4h, 6h, 8h, 12h; days: 1d, 2d, 3d, 4d, 5d, 6d; weeks: 1w, 2w, 3w, 4w, 5w.
{ "cadence": "6h" }
default_urgency (type: Urgency, default: normal)
The urgency class for insights created using this verse. Currently supports two values: "normal" (default) and "high". If set to "normal", specific thresholds for "high" urgency can be set using the parameters prefixed with "high_urgency". If set to "high", the threshold parameters prefixed with "high_urgency" are not considered at all, since all insights of this verse are considered to have "high" urgency.
{ "default_urgency": "high" }
metrics (type: MetricsList, default: ())
Relevant metrics to search anomalies for in the verse. Relevant only for verse types that search for anomalies in metric behavior.
{ "metrics": [ "top_score", "delta_top_to_second_score" ] }
min_anomaly_level (type: PositiveFloat, default: 2.5)
Sets the threshold for the minimal anomaly level for which an insight will be generated. The anomaly level of this verse is the difference between the last point's value and the median value, normalized by the difference between the top_percentile_benchmark and the median.
{ "min_anomaly_level": 2 }
min_segment_size (type: PositiveInt, default: 5)
Minimal segment size to require for the entire time series.
{ "min_segment_size": 10 }
min_segment_size_fraction (type: InclusiveFraction, default: 0)
Minimal segment size, as a fraction of the baseline segment, that a segment must have in order to be considered in the search.
{ "min_segment_size_fraction": 0.05 }
name (type: str, required)
The name of the verse. Please note, a verse's name must be different from the names of other verses in the same stanza.
{ "name": "confidence_outliers" }
segment_by (type: SegmentationsList, default: ())
The dimensions to use to segment the data when searching for anomalies. This list must be a sublist of the arc class's dimensions. Limiting the possible values of a specific segmentation field on which insights can be generated can be done using the "avoid_values" and "include_only_values" keys in the segmentation JSON object, as seen in the example.
{ "segment_by": [ "city", "bot_id", {"name": "provider-code", "avoid_values": ["zoom"]}, {"name": "selected-language", "avoid_values": ["eng", "spa"]}, {"name": "country", "include_only_values": ["jpn"]} ] }
time_resolution (type: TimeResolution, default: 1d)
Time series time resolution period. Expected format is "<number><unit>", where <number> can be any positive integer and <unit> is currently "d" (days) or "w" (weeks); e.g., "1d" means a 1-day period.
{ "time_resolution": "1w" }
time_series_points (type: PositiveInt, default: 60)
Desired size of the entire time series.
{ "time_series_points": 30 }
trend_directions (type: TrendDirections, default: ('asc', 'desc'))
A list of allowed anomaly trend directions: either 'asc' for ascending (anomalies in which the found value is LARGER THAN the relevant benchmark) or 'desc' for descending (anomalies in which the found value is SMALLER THAN the relevant benchmark).
{ "trend_directions": [ "asc" ] }