The Speech model can classify audio into "speech", "music", "silence", or "other". The "silence" class indicates audio fragments with very low volume (relative to the requested threshold), and the "other" class contains everything that is neither speech nor music.
Our latest SpeechRT model outperforms the current Speech model at speech classification in common use cases. When deciding between the Speech and SpeechRT models, experiment with your own data and review the latest advice to ensure optimal performance for your use case.
The receptive field of this model is 1082ms.
| Receptive Field | Result Type |
| --- | --- |
| 1082ms | result ∈ ["speech", "music", "other", "silence"] |
The time-series result will be an iterable with elements that contain the following information:
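As a rough illustration, a time-series element could be iterated over like this. The field names (`timestamp_ms`, `result`) are hypothetical placeholders, not taken from the actual SDK; only the four class labels come from the documentation above.

```python
# Hypothetical shape of a time-series result; the "timestamp_ms" and
# "result" field names are illustrative assumptions.
timeseries = [
    {"timestamp_ms": 0, "result": "silence"},
    {"timestamp_ms": 1082, "result": "speech"},
    {"timestamp_ms": 2164, "result": "speech"},
]

# Iterate over the elements and collect the detected class per window.
detected = [element["result"] for element in timeseries]
print(detected)
```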
Time-series with raw values
If raw values were requested, they will be added to the time-series result:
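A sketch of what an element with raw values attached might look like. The `raw` field and its per-class score layout are assumptions for illustration; the key point is that the predicted class corresponds to the highest raw value.

```python
# Hypothetical element with raw (per-class) values attached; the
# "raw" field name and score layout are assumptions.
element = {
    "timestamp_ms": 1082,
    "result": "speech",
    "raw": {"speech": 0.91, "music": 0.05, "other": 0.03, "silence": 0.01},
}

# The predicted class is the one with the highest raw value.
top_class = max(element["raw"], key=element["raw"].get)
print(top_class)
```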
If a summary is requested, the following will be returned:
where x_fraction represents the fraction of time that class x was identified over the duration of the input.
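A minimal sketch of a summary result, assuming one `<class>_fraction` entry per class. The exact key names are hypothetical; what follows from the definition above is that the fractions cover the whole input and so sum to 1.0.

```python
# Hypothetical summary shape: one <class>_fraction entry per class.
summary = {
    "speech_fraction": 0.6,
    "music_fraction": 0.25,
    "other_fraction": 0.1,
    "silence_fraction": 0.05,
}

# The fractions partition the input's duration, so they sum to 1.0.
total = sum(summary.values())
print(total)
```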
If transitions are requested, a time-series with the following transition elements will be returned:
The result above means that the first 1500ms of the audio snippet contained no speech or music, and that music was detected between 1500ms and 6000ms.
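The example above can be sketched as transition elements, each spanning a contiguous interval labeled with one class. The `start_ms`/`end_ms` field names are assumptions; the interval boundaries and classes mirror the description in the text.

```python
# Hypothetical transition elements reconstructing the example from the
# text: no speech or music for the first 1500ms, then music until 6000ms.
transitions = [
    {"start_ms": 0, "end_ms": 1500, "result": "other"},
    {"start_ms": 1500, "end_ms": 6000, "result": "music"},
]

# Each transition element covers one contiguous interval.
durations = [t["end_ms"] - t["start_ms"] for t in transitions]
print(durations)
```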