The AudioEvent model classifies different types of human-produced sounds.
In the example below, we demonstrate how to use the AudioEvent model to detect when someone is laughing. This makes the model especially useful for content moderation, bullying detection, and similar use cases.
To follow this example, you will need:
- Deeptone with a valid license key
- The audio file(s) you want to process
You can download this audio sample for the following examples.
Remember to add a valid license key before running the example.
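One way to supply the key without hard-coding it into the script is to read it from an environment variable. The variable name `DEEPTONE_LICENSE_KEY` below is a convention for this example, not something the SDK mandates:

```python
import os

# Read the license key from the environment instead of hard-coding it.
# DEEPTONE_LICENSE_KEY is our naming convention for this example.
license_key = os.environ.get("DEEPTONE_LICENSE_KEY", "")
if not license_key:
    print("Warning: no license key set; the engine will not start without one")
```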
In this example, you can use the
transitions-level output, which is optionally calculated when processing a file,
to detect the parts of the audio file where a speaker produced positive human sounds, in this case laughter.
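The filtering step can be sketched as follows. This assumes the transitions output is a list of entries with `timestamp_start`, `timestamp_end` (in milliseconds), and a `result` label; these field names are illustrative, not necessarily the SDK's exact schema:

```python
# Hypothetical transitions-level output: each entry marks a span of audio
# together with the event label the model assigned to it.
transitions = [
    {"timestamp_start": 0, "timestamp_end": 3200, "result": "other"},
    {"timestamp_start": 3200, "timestamp_end": 5800, "result": "laughter"},
    {"timestamp_start": 5800, "timestamp_end": 9000, "result": "speech"},
    {"timestamp_start": 9000, "timestamp_end": 10400, "result": "laughter"},
]

def laughter_segments(transitions, min_duration_ms=500):
    """Return (start, end) pairs, in ms, of laughter lasting at least min_duration_ms."""
    return [
        (t["timestamp_start"], t["timestamp_end"])
        for t in transitions
        if t["result"] == "laughter"
        and t["timestamp_end"] - t["timestamp_start"] >= min_duration_ms
    ]

for start, end in laughter_segments(transitions):
    print(f"Laughter between {start / 1000:.1f}s and {end / 1000:.1f}s")
```

The `min_duration_ms` threshold drops very short detections, which are more likely to be noise than sustained laughter.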
After executing the script using our example file, you should see the following output:
From these results, we can now listen to the identified segments of the audio and verify that they match moments where someone was laughing.