.NET Framework SpeechRecognitionEngine class to recognize speech



  • SpeechRecognitionEngine()
  • SpeechRecognitionEngine.LoadGrammar(Grammar grammar)
  • SpeechRecognitionEngine.SetInputToDefaultAudioDevice()
  • SpeechRecognitionEngine.RecognizeAsync(RecognizeMode mode)
  • GrammarBuilder()
  • GrammarBuilder.Append(Choices choices)
  • Choices(params string[] choices)
  • Grammar(GrammarBuilder builder)
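The members above fit together in one pipeline: build a Choices set, append it to a GrammarBuilder, wrap that in a Grammar, load it into a SpeechRecognitionEngine, point the engine at the default microphone, and start asynchronous recognition. A minimal sketch (the phrase list and event handler are illustrative choices, not part of the API surface listed above):

```csharp
using System;
using System.Speech.Recognition; // requires a reference to System.Speech.dll

class PhraseRecognitionExample
{
    static void Main()
    {
        // A small set of fixed phrases the recognizer should accept.
        var choices = new Choices("red", "green", "blue");

        var builder = new GrammarBuilder();
        builder.Append(choices);

        var grammar = new Grammar(builder);

        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.LoadGrammar(grammar);
            recognizer.SetInputToDefaultAudioDevice();

            // Print each phrase the engine recognizes.
            recognizer.SpeechRecognized += (sender, e) =>
                Console.WriteLine("Recognized: " + e.Result.Text);

            // Multiple keeps listening until RecognizeAsyncStop is called.
            recognizer.RecognizeAsync(RecognizeMode.Multiple);

            Console.WriteLine("Listening... press Enter to stop.");
            Console.ReadLine();
            recognizer.RecognizeAsyncStop();
        }
    }
}
```

Because SpeechRecognitionEngine implements IDisposable, wrapping it in a `using` block ensures the audio device is released when recognition ends.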


LoadGrammar parameters:
  • grammar: The grammar to load. For example, a DictationGrammar object to allow free-text dictation.

RecognizeAsync parameters:
  • mode: The RecognizeMode for the current recognition: Single for just one recognition, Multiple to allow multiple recognitions.

GrammarBuilder.Append parameters:
  • choices: Appends choices to the grammar builder. When the user inputs speech, the recognizer can follow different "branches" of the grammar.

Choices constructor parameters:
  • choices: An array of choices for the grammar builder. See GrammarBuilder.Append.

Grammar constructor parameters:
  • builder: The GrammarBuilder to construct a Grammar from.
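The parameter descriptions above mention two alternatives to a phrase grammar: a DictationGrammar for free-text dictation, and RecognizeMode.Single for stopping after the first result. A short sketch combining the two (the console output handler is illustrative):

```csharp
using System;
using System.Speech.Recognition; // requires a reference to System.Speech.dll

class DictationExample
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            // DictationGrammar accepts free-form speech instead of a
            // fixed list of phrases.
            recognizer.LoadGrammar(new DictationGrammar());
            recognizer.SetInputToDefaultAudioDevice();

            recognizer.SpeechRecognized += (sender, e) =>
                Console.WriteLine("You said: " + e.Result.Text);

            // Single: the engine stops after one recognition result.
            recognizer.RecognizeAsync(RecognizeMode.Single);

            Console.ReadLine();
        }
    }
}
```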


To use SpeechRecognitionEngine, your Windows version needs to have speech recognition enabled.

You have to add a reference to System.Speech.dll before you can use the speech classes.
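In a classic (non-SDK-style) .NET Framework project, the reference can be added through Visual Studio's Add Reference dialog, which produces a .csproj entry along these lines:

```xml
<ItemGroup>
  <Reference Include="System.Speech" />
</ItemGroup>
```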
