The Emotions model classifies the emotion in an audio snippet as happy, neutral, irritated, or tired. You can use it to, e.g., assess how the speakers in a snippet or stream are feeling.
In the examples below, you will see how to use the Emotions model to detect a speaker's emotion when streaming from a microphone, and how to display a warning if the speaker sounds tired for too long.
- Deeptone with license key and models
- a microphone
Installing pyaudio in a Python 3.7 environment may require some extra steps unless you are using Anaconda to manage your environment.
We still feel it is the easiest way to get your microphone input into Python, though. For more details on how to install it, head to the Gender model recipes.
Remember to add a valid license key before running the example.
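The "tired for too long" warning can be sketched independently of the SDK calls. The sketch below assumes you already get one predicted emotion label per audio chunk from the streaming loop (the label names follow the four classes above); the class name, threshold value, and `update` interface are illustrative choices, not part of the DeepTone API.

```python
# Hypothetical threshold: warn after 10 seconds of continuous "tired" chunks.
TIRED_WARNING_SECONDS = 10.0

class TiredMonitor:
    """Tracks how long the speaker has sounded tired across audio chunks."""

    def __init__(self, threshold_s=TIRED_WARNING_SECONDS):
        self.threshold_s = threshold_s
        self.tired_since = None  # start time of the current run of "tired" chunks

    def update(self, emotion, now):
        """Feed one per-chunk emotion label; return True if a warning is due."""
        if emotion == "tired":
            if self.tired_since is None:
                self.tired_since = now
            return (now - self.tired_since) >= self.threshold_s
        # Any other emotion breaks the run and resets the timer.
        self.tired_since = None
        return False
```

In the microphone loop you would call `monitor.update(predicted_emotion, time.monotonic())` after each chunk is classified and print a warning whenever it returns `True`.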
In these examples we make use of the transitions-level outputs, which are optionally calculated when processing a file.
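To show how transitions-level outputs can be consumed, here is a minimal sketch that turns them into per-emotion durations. It assumes transitions arrive as `(timestamp, label)` pairs sorted by time, each marking where the predicted emotion changes; check the DeepTone documentation for the actual output format, as this shape is an assumption.

```python
def emotion_durations(transitions, total_duration):
    """Sum the time spent in each emotion from transitions-style output.

    `transitions` is assumed to be a time-sorted list of (timestamp, label)
    pairs marking where the predicted emotion changes; `total_duration` is
    the length of the processed file in the same time unit.
    """
    durations = {}
    # Pair each transition with the start of the next one (or the end of file)
    # to get the span during which its label was active.
    for (start, label), (end, _) in zip(
        transitions, transitions[1:] + [(total_duration, None)]
    ):
        durations[label] = durations.get(label, 0.0) + (end - start)
    return durations
```

For example, a file where the speaker is neutral for the first 4 seconds, tired for the next 5, and happy for the last 3 would yield `{"neutral": 4.0, "tired": 5.0, "happy": 3.0}`.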