The SpeechRT model classifies audio into "speech", "music", "other", or "silence". The "silence" class indicates audio fragments whose volume falls below the requested threshold. Compared to the Speech model, the SpeechRT model reacts to changes in the audio much faster thanks to its shorter receptive field. The SpeechRT model is also our default model for detecting speech within other models.
When deciding between the Speech and SpeechRT models, it is advisable to experiment with your own data and review the latest advice to ensure optimal performance for your use case.
The receptive field of this model is 146ms.
| Receptive Field | Result Type |
|---|---|
| 146ms | result ∈ ["speech", "music", "other", "silence"] |
The time-series result will be an iterable with elements that contain the following information:
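As a rough sketch of how such an iterable could be consumed (the field names `start_ms`, `end_ms`, and `result` here are illustrative assumptions, not the documented schema):

```python
# Hypothetical time-series result: each element covers a time span and
# carries the detected class. Field names are illustrative assumptions.
timeseries = [
    {"start_ms": 0, "end_ms": 1500, "result": "speech"},
    {"start_ms": 1500, "end_ms": 3000, "result": "music"},
    {"start_ms": 3000, "end_ms": 4500, "result": "silence"},
]

for element in timeseries:
    span = element["end_ms"] - element["start_ms"]
    print(f'{element["start_ms"]}-{element["end_ms"]}ms: {element["result"]} ({span}ms)')
```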
Time-series with raw values
If raw values were requested, they will be added to the time-series result:
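Assuming the raw values are per-class scores attached to each element (an assumption about the schema, not the documented format), an element might look like this:

```python
# Hypothetical element with raw values: per-class scores added alongside
# the winning class. Field names and score semantics are assumptions.
element = {
    "start_ms": 0,
    "end_ms": 1500,
    "result": "speech",
    "raw": {"speech": 0.91, "music": 0.05, "other": 0.03, "silence": 0.01},
}

# The reported class corresponds to the highest raw score.
top_class = max(element["raw"], key=element["raw"].get)
```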
If a summary is requested, the following will be returned:
where x_fraction represents the fraction of the input's duration for which class x was identified.
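A minimal sketch of how such per-class fractions relate to a time-series (the field names `start_ms`, `end_ms`, and `result` are illustrative assumptions):

```python
# Sketch: deriving per-class fractions (e.g. speech_fraction) from a
# time-series. Field names are illustrative assumptions.
timeseries = [
    {"start_ms": 0, "end_ms": 1500, "result": "speech"},
    {"start_ms": 1500, "end_ms": 6000, "result": "other"},
]

total_ms = timeseries[-1]["end_ms"] - timeseries[0]["start_ms"]
summary = {}
for element in timeseries:
    span = element["end_ms"] - element["start_ms"]
    key = f'{element["result"]}_fraction'
    summary[key] = summary.get(key, 0.0) + span / total_ms

# speech covers 1500/6000 = 0.25 of the input; "other" covers the rest
```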
If transitions are requested, a time-series with the following transition elements will be returned:
The result above means that the first 1500ms of the audio snippet contained speech, and that between 1500ms and 6000ms no speech or music was detected.
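The interpretation described above could be represented as a list of contiguous segments, sketched here with hypothetical field names (`start_ms`, `end_ms`, `result` are assumptions, not the documented schema):

```python
# Hypothetical transitions result matching the description above:
# speech for the first 1500ms, then neither speech nor music until 6000ms.
transitions = [
    {"start_ms": 0, "end_ms": 1500, "result": "speech"},
    {"start_ms": 1500, "end_ms": 6000, "result": "other"},
]

# Transitions partition the input: each segment starts where the previous ended.
for prev, curr in zip(transitions, transitions[1:]):
    assert prev["end_ms"] == curr["start_ms"]
```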