I was writing a chat bot in which a user interacts with a machine-learning-powered bot, and I decided to extract a general example application that anybody can use. This example contains no intelligence: the bot simply recites what it heard, so that anyone can plug in his/her own logic.

A small catch: for now this application will only work in Chrome, as it is currently the only browser that supports the Web Speech API.
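Because support is limited, it is worth feature-detecting the API before using it. The sketch below checks for the standard and webkit-prefixed globals; the function name and shape are my own, not part of the article's code.

```typescript
// Hypothetical helper: report which parts of the Web Speech API the
// given window-like object exposes. Chrome ships speechSynthesis and
// the webkit-prefixed webkitSpeechRecognition constructor.
function speechApisAvailable(w: any): { synthesis: boolean; recognition: boolean } {
  return {
    synthesis: 'speechSynthesis' in w,
    recognition: 'SpeechRecognition' in w || 'webkitSpeechRecognition' in w,
  };
}

// In a browser you would call speechApisAvailable(window) and fall back
// to a text-only UI when recognition is false.
```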

Since I will be using a reactive approach, I created an interface named Action with an optional payload parameter. All our actions will implement this interface; components will subscribe to these actions and react to them.

export interface Action {
	payload?: any;
}

export class SpeakingStarted implements Action {}
export class SpeakingEnded implements Action {}

export class ListeningStarted implements Action {}
export class ListeningEnded implements Action {}

export class RecognizedTextAction implements Action {
	constructor(public payload: string) {}
}

export class SpeakAction implements Action {
	constructor(public payload: string) {}
}
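To show how components could subscribe and react, here is a minimal hand-rolled action bus. In a real Angular app this would likely be an RxJS `Subject<Action>`; the `ActionBus` class and its method names are my own illustration, and the `Action` types are re-declared to keep the sketch self-contained.

```typescript
// Re-declared from the article's action definitions.
interface Action {
  payload?: any;
}

class RecognizedTextAction implements Action {
  constructor(public payload: string) {}
}

type Listener = (action: Action) => void;

// Hypothetical minimal dispatcher; a Subject<Action> plays this role
// in an RxJS-based setup.
class ActionBus {
  private listeners: Listener[] = [];

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  dispatch(action: Action): void {
    for (const l of this.listeners) {
      l(action);
    }
  }
}

// Usage: a component reacts only to the action types it cares about.
const bus = new ActionBus();
const heard: string[] = [];
bus.subscribe(action => {
  if (action instanceof RecognizedTextAction) {
    heard.push(action.payload);
  }
});
bus.dispatch(new RecognizedTextAction('hello'));
```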

In the constructor, we will inject **NgZone** because the Web Speech API lives outside of Angular; we need NgZone to bring its callbacks back into the Angular realm.

	constructor(private zone: NgZone) {
		this.window = (window as unknown) as IWindow;
	}



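To illustrate why the injected zone matters: recognition callbacks fire outside Angular's zone, so change detection will not run unless we re-enter it with `zone.run`. The `NgZoneLike` interface and `SpeechService` below are simplified stand-ins for the real Angular types, used only for this sketch.

```typescript
// Stand-in for Angular's NgZone, reduced to the one method we use.
interface NgZoneLike {
  run<T>(fn: () => T): T;
}

class SpeechService {
  recognizedText = '';

  constructor(private zone: NgZoneLike) {}

  // In a real app this would be wired to recognition.onresult, which
  // fires outside Angular's zone.
  handleResult(transcript: string): void {
    // zone.run re-enters the Angular zone, so template bindings that
    // read recognizedText get updated by change detection.
    this.zone.run(() => {
      this.recognizedText = transcript;
    });
  }
}

// Usage with a trivial zone that just invokes the function.
const service = new SpeechService({ run: fn => fn() });
service.handleResult('hello world');
```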
One challenge is that the synthesizer and the recognizer are separate objects. When the synthesizer is speaking, the recognizer picks it up and the bot goes into a loop, so we need to pause one or the other.

We create the speaker and the listener separately, and then set up subscriptions to prevent them from interfering with each other.
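A sketch of that separate creation, assuming Chrome's globals: `speechSynthesis` is a ready-made singleton, while the recognizer must be constructed from the webkit-prefixed constructor. The factory functions and the `IWindow` shape here are my own illustration.

```typescript
// Minimal shape of the window properties we rely on in Chrome.
interface IWindow {
  speechSynthesis: any;
  webkitSpeechRecognition: new () => any;
}

function createSpeaker(w: IWindow) {
  // The synthesizer is a singleton already attached to window.
  return w.speechSynthesis;
}

function createListener(w: IWindow) {
  // The recognizer must be constructed, then configured to keep
  // listening across utterances and to report interim results.
  const recognition = new w.webkitSpeechRecognition();
  recognition.continuous = true;
  recognition.interimResults = true;
  return recognition;
}
```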

For example, we react to the _SpeakingStarted_ action: when it is received, we stop listening, and when _SpeakingEnded_ or _ListeningEnded_ is received, we start listening again.
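That coordination can be sketched as below. Everything here is a simplified stand-in: the action classes are re-declared, subscriptions are modeled as a plain callback list rather than an RxJS stream, and the recognizer is any object with `start()`/`stop()`.

```typescript
// Re-declared action types, kept empty as in the article.
class SpeakingStarted {}
class SpeakingEnded {}
class ListeningEnded {}

type CoordAction = SpeakingStarted | SpeakingEnded | ListeningEnded;

interface Recognizer {
  start(): void;
  stop(): void;
}

// Hypothetical wiring function: `subscribe` registers a handler for
// every dispatched action (a Subject.subscribe in an RxJS setup).
function wireUp(
  subscribe: (handler: (a: CoordAction) => void) => void,
  recognizer: Recognizer
): void {
  subscribe(action => {
    if (action instanceof SpeakingStarted) {
      // The synthesizer is about to talk: stop the recognizer so it
      // does not hear our own speech and loop.
      recognizer.stop();
    } else if (action instanceof SpeakingEnded || action instanceof ListeningEnded) {
      // Speech (or a listening session) finished: resume listening.
      recognizer.start();
    }
  });
}
```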

#angular #typescript #speech-recognition #javascript

Speech Recognition and Speech Synthesis on Angular