Thursday, June 2, 2022

Language Detection and Text to Speech in SwiftUI Apps


With the advent of machine learning and artificial intelligence, the iOS SDK now comes with a number of frameworks that let developers build apps with machine learning-related features. In this tutorial, let's explore two built-in ML APIs for converting text to speech and performing language detection.

Using Text to Speech in AVFoundation

Let's say we're building an app that reads users' input messages aloud. To do that, we need to implement some form of text-to-speech capability.

text-to-speech-app

The AVFoundation framework comes with text-to-speech APIs. To use them, we first have to import the framework:
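In Swift, that looks like:

```swift
import AVFoundation
```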

Next, create an instance of AVSpeechSynthesizer:
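For example (the property name speechSynthesizer is just a placeholder; you can name it anything):

```swift
// Keep a reference to the synthesizer so it isn't deallocated mid-speech
let speechSynthesizer = AVSpeechSynthesizer()
```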

To convert the text message to speech, you can write code like this:
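A minimal sketch, assuming the user's text is stored in a String named inputMessage and the synthesizer instance is named speechSynthesizer (both names are assumptions for illustration):

```swift
// Wrap the text in an utterance and configure how it should be spoken
let utterance = AVSpeechUtterance(string: inputMessage)
utterance.pitchMultiplier = 1.0                          // normal pitch
utterance.rate = AVSpeechUtteranceDefaultSpeechRate      // normal speaking rate
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

// Hand the utterance to the synthesizer to speak it
speechSynthesizer.speak(utterance)
```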

You create an instance of AVSpeechUtterance with the text for the synthesizer to speak. Optionally, you can configure the pitch, rate, and voice. For the voice parameter, we set the language to English (U.S.). Finally, you pass the utterance object to the speech synthesizer, which reads the text aloud in English.

The built-in speech synthesizer is capable of speaking multiple languages, such as Chinese, Japanese, and French. To tell the synthesizer which language to speak, you pass the appropriate language code when creating the instance of AVSpeechSynthesisVoice.
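For example, to have the synthesizer speak Japanese instead of English, only the voice changes:

```swift
utterance.voice = AVSpeechSynthesisVoice(language: "ja-JP")
```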

To find out all the language codes the system supports, you can call the speechVoices() method of AVSpeechSynthesisVoice:
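For instance, the following loop prints every available voice along with its language code:

```swift
for voice in AVSpeechSynthesisVoice.speechVoices() {
    // e.g. "Kyoko – ja-JP"
    print("\(voice.name) – \(voice.language)")
}
```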

Here are some of the supported language codes:

  • Japanese – ja-JP
  • Korean – ko-KR
  • French – fr-FR
  • Italian – it-IT
  • Cantonese – zh-HK
  • Mandarin – zh-TW
  • Putonghua – zh-CN

In some cases, you may need to interrupt the speech synthesizer. You can call the stopSpeaking(at:) method to stop it:
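The method takes an AVSpeechBoundary value that controls whether speech stops immediately or after finishing the current word (again assuming the synthesizer is named speechSynthesizer):

```swift
// Stop right away; use .word to finish the current word first
speechSynthesizer.stopSpeaking(at: .immediate)
```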

Performing Language Identification Using the Natural Language Framework

As you can see in the code above, we have to identify the language of the input message before the speech synthesizer can convert the text to speech correctly. Wouldn't it be great if the app could automatically detect the language of the input message?

The NaturalLanguage framework provides a variety of natural language processing (NLP) functionality, including language identification.

To use the NLP APIs, first import the NaturalLanguage framework:
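Like before, this is a single import:

```swift
import NaturalLanguage
```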

You need just a couple of lines of code to detect the language of a text message:
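A sketch, again assuming the text to analyze lives in a String named inputMessage:

```swift
let recognizer = NLLanguageRecognizer()
recognizer.processString(inputMessage)

// dominantLanguage is an optional NLLanguage (e.g. .english)
let language = recognizer.dominantLanguage
```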

The code above creates an instance of NLLanguageRecognizer and then invokes processString(_:) to process the input message. Once processed, the identified language is stored in the dominantLanguage property.

Here is a quick example:
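Something like the following, where the sample sentence is illustrative rather than taken from the original listing:

```swift
let inputMessage = "The early bird catches the worm."
let recognizer = NLLanguageRecognizer()
recognizer.processString(inputMessage)

// Print the BCP-47 code of the detected language, or "unknown" if none
print(recognizer.dominantLanguage?.rawValue ?? "unknown")
```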

natural-language-detection

For the sample, NLLanguageRecognizer recognizes the language as English (i.e. en). If you change the inputMessage to Japanese like the one below, the dominantLanguage becomes ja:
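For example (an illustrative Japanese sentence, not necessarily the one from the original article):

```swift
let inputMessage = "はじめまして。どうぞよろしくお願いします。"
```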

The dominantLanguage property may have no value at all if the input message contains nothing the recognizer can treat as language, for example:
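One such case (an assumed example; the original listing is not shown) is a message made up entirely of emoji, which gives the recognizer nothing linguistic to work with:

```swift
let inputMessage = "🎉🎉🎉"
recognizer.processString(inputMessage)
// recognizer.dominantLanguage is likely nil here
```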

Wrap Up

In this tutorial, we have walked you through two of the built-in machine learning APIs for converting text to speech and identifying the language of a text message. With these ML APIs, you can easily incorporate text-to-speech features into your iOS apps.


Founder of AppCoda. Author of multiple iOS programming books, including Beginning iOS Programming with Swift and Mastering SwiftUI. iOS app developer and blogger. Follow me on Facebook, Twitter, and Google+.


