# tflite_audio

This plugin allows you to use tflite to make audio/speech classifications. It currently supports Android; an iOS version will follow soon.
Add `tflite_audio` as a dependency in your pubspec.yaml file, and declare your model and label files under assets:

```yaml
assets:
  - assets/conv_actions_frozen.tflite
  - assets/conv_actions_labels.txt
```
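The dependency entry itself might look like the sketch below. The version number is a placeholder, not an actual release; check pub.dev for the current version.

```yaml
dependencies:
  # placeholder version; use the latest release from pub.dev
  tflite_audio: ^0.1.0
```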
```dart
import 'package:tflite_audio/tflite_audio.dart';

//Loads your model.
//A higher numThreads reduces inference time, but is more intensive on the CPU.
Future loadModel({model, label, numThreads, isAsset}) async {
  return await TfliteAudio.loadModel(model, label, numThreads, isAsset);
}

//Starts audio recognition.
Future<dynamic> startAudioRecognition(
    {int sampleRate, int recordingLength, int bufferSize}) async {
  return await TfliteAudio.startAudioRecognition(
      sampleRate, recordingLength, bufferSize);
}
```
```dart
loadModel(
    model: "assets/conv_actions_frozen.tflite",
    label: "assets/conv_actions_labels.txt",
    numThreads: 1,
    isAsset: true);
```
```dart
//This future checks for permissions, records audio, runs audio recognition,
//and then returns the result.
//Make sure recordingLength matches your model's tensor input size.
//A higher bufferSize means more latency, but is less intensive on the CPU;
//it also shortens the recording time.
//sampleRate is the number of audio samples per second.
Future<dynamic> recognize() async {
  return await startAudioRecognition(
      sampleRate: 16000, recordingLength: 16000, bufferSize: 1280);
}
```
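Putting the steps above together, a minimal call sequence might look like the sketch below. The `classifyOnce` name is illustrative, and the positional argument order simply mirrors the wrapper functions shown earlier; adjust the parameter values to fit your own model.

```dart
import 'package:tflite_audio/tflite_audio.dart';

//A minimal sketch: load the model once, then run a single recognition pass.
//Parameter values mirror the examples above (16 kHz audio, 1 s recording).
Future<void> classifyOnce() async {
  //Load the model and labels bundled as Flutter assets, using one thread.
  await TfliteAudio.loadModel(
      "assets/conv_actions_frozen.tflite",
      "assets/conv_actions_labels.txt",
      1,
      true);

  //Record and classify; the plugin returns the recognition result.
  final result = await TfliteAudio.startAudioRecognition(16000, 16000, 1280);
  print(result);
}
```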
Add the permissions below to your AndroidManifest.xml. This file can be found in the /android/app/src folder. For example:

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```
Add the following to your build.gradle. This file can be found in the /app/src folder. For example:

```gradle
aaptOptions {
    noCompress 'tflite'
}
```
Also add the following key to your Info.plist for iOS:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>Record audio for playback</string>
```
Author: Caldarie
Source Code: https://github.com/Caldarie/flutter_tflite_audio