# Changelog

## 0.6.0

#### Fixed Bugs

* Fixed a bug in stream processing to correctly account for the receptive field of the model
* When `use_chunking=True`, the chunking method is now actually used

#### Features

* Add speechRT model for low-latency speech predictions (decision latency <100 ms)
* Add new processing methods, `process_audio_bytes` and `process_audio_chunk`, more suitable for analysing audio bytes and numpy arrays directly (see the usage sketch at the end of this changelog)
* Make the SDK thread-safe

## 0.5.0

#### Breaking Changes

* The output of the `process_file` function changed to align with the `process_stream` function. For more information on the new output structure see https://sdk.oto.ai/docs/output-specification.

#### Fixed Bugs

* Fixed a performance bug in the output calculation
* File processing results are now consistent with the streaming results
* Fixed a typo in the `GENDER_UNKOWN` constant

#### Features

* Add ability to retrieve raw model outputs to allow for customisation. For example usage see [here](https://sdk.oto.ai/docs/speech-model-recipes#custom-speech-thresholds---example-3).
* Add silence detection to all models
* Optimise performance when using more than one model

## 0.4.0

Initial release with the Speech, Gender and Arousal models.
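
---

The changelog does not document the exact client class or method signatures, so the following is only a minimal sketch of how the 0.6.0 byte/array processing methods might be called. The `OtoSdk` class name, its import path, its constructor arguments, and the method signatures are all assumptions for illustration; consult the SDK documentation for the real API.

```python
# Minimal sketch only: class name, import path, constructor arguments and
# method signatures below are assumptions, not the documented API.
import numpy as np

from oto_sdk import OtoSdk  # hypothetical import path

# Hypothetical client setup.
sdk = OtoSdk(models=["speech", "gender", "arousal"])

# Process raw audio bytes directly (new in 0.6.0); signature assumed.
with open("example.wav", "rb") as f:
    results_from_bytes = sdk.process_audio_bytes(f.read())

# Process a numpy array chunk directly (new in 0.6.0); signature assumed.
# Placeholder: one second of 16 kHz mono PCM silence.
chunk = np.zeros(16000, dtype=np.int16)
results_from_chunk = sdk.process_audio_chunk(chunk)

print(results_from_bytes, results_from_chunk)
```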