Emotions Model

The Emotions model classifies a speaker's emotion as "happy", "irritated", "neutral", or "tired".

Because it only makes sense to apply this model to speech audio, it is combined with the SpeechRT and Volume models to increase the reliability of the results.

The receptive field of this model is 2107 milliseconds.

Specification

| Receptive Field | Result Type |
| --------------- | ----------- |
| 2107 ms | result ∈ ["happy", "irritated", "neutral", "tired", "no_speech", "silence"] |

Time-series

The time-series result will be an iterable with elements that contain the following information:

{
  "timestamp": 0,
  "results": {
    "emotions": {
      "result": "happy",
      "confidence": 0.782
    }
  }
}
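
For illustration, here is a minimal Python sketch of iterating over such a result. The `time_series` variable is a hypothetical stand-in for whatever iterable your SDK call returns; its elements follow the documented shape above.

# Hypothetical time-series data shaped like the documented elements.
time_series = [
    {"timestamp": 0, "results": {"emotions": {"result": "happy", "confidence": 0.782}}},
    {"timestamp": 1024, "results": {"emotions": {"result": "neutral", "confidence": 0.611}}},
]

# Print the detected emotion and its confidence for each element.
for element in time_series:
    emotions = element["results"]["emotions"]
    print(f'{element["timestamp"]} ms: {emotions["result"]} ({emotions["confidence"]:.3f})')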

Time-series with raw values

If raw values were requested, they will be added to the time-series result:

{
  "timestamp": 0,
  "results": {
    "emotions": {
      "result": "happy",
      "confidence": 0.782
    }
  },
  "raw": {
    "emotions": {
      "happy": 0.782,
      "irritated": 0.132,
      "neutral": 0.014,
      "tired": 0.072
    }
  }
}
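
The raw per-class values make it possible to inspect the runner-up emotions, not just the top result. A minimal sketch, assuming `element` is one time-series element that includes the optional "raw" block shown above:

# Hypothetical time-series element including the optional raw values.
element = {
    "timestamp": 0,
    "results": {"emotions": {"result": "happy", "confidence": 0.782}},
    "raw": {"emotions": {"happy": 0.782, "irritated": 0.132, "neutral": 0.014, "tired": 0.072}},
}

# Rank all classes by their raw score, highest first.
raw_scores = element["raw"]["emotions"]
for emotion, score in sorted(raw_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{emotion}: {score:.3f}")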

Summary

If a summary is requested, the following will be returned:

{
  "emotions": {
    "happy_fraction": 0.23,
    "irritated_fraction": 0.61,
    "neutral_fraction": 0.05,
    "tired_fraction": 0.07,
    "no_speech_fraction": 0.04,
    "silence_fraction": 0.0
  }
}

where x_fraction represents the fraction of the input's duration for which class x was identified.
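
Since each fraction is a value in [0, 1], multiplying it by the input's length gives the time spent in each class. A minimal sketch, assuming `summary` holds the documented summary object and `duration_ms` (a hypothetical value) is the length of the analysed audio:

# Hypothetical summary object and input length.
summary = {
    "emotions": {
        "happy_fraction": 0.23,
        "irritated_fraction": 0.61,
        "neutral_fraction": 0.05,
        "tired_fraction": 0.07,
        "no_speech_fraction": 0.04,
        "silence_fraction": 0.0,
    }
}
duration_ms = 60_000  # assumed 60-second input

# Convert each fraction into seconds of audio.
for name, fraction in summary["emotions"].items():
    print(f"{name}: {fraction * duration_ms / 1000:.1f} s")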

Transitions

If transitions are requested, a time-series with transition elements like the ones shown below will be returned:

{
  "timestamp_start": 0,
  "timestamp_end": 1500,
  "result": "neutral",
  "confidence": 0.96
},
{
  "timestamp_start": 1500,
  "timestamp_end": 4000,
  "result": "happy",
  "confidence": 0.88
}

The example above means that the emotion DeepTone™ detected within the first 1500 ms of the audio snippet was neutral, and that between 1500 ms and 4000 ms it was happy.
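
Because each transition element carries a start and end timestamp, totalling the time spent in each emotion is straightforward. A minimal sketch, assuming `transitions` is the documented list of transition elements:

# Total how long each emotion was detected, in milliseconds.
from collections import defaultdict

# Hypothetical transitions list matching the example above.
transitions = [
    {"timestamp_start": 0, "timestamp_end": 1500, "result": "neutral", "confidence": 0.96},
    {"timestamp_start": 1500, "timestamp_end": 4000, "result": "happy", "confidence": 0.88},
]

durations_ms = defaultdict(int)
for t in transitions:
    durations_ms[t["result"]] += t["timestamp_end"] - t["timestamp_start"]

print(dict(durations_ms))  # {'neutral': 1500, 'happy': 2500}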