AWSPollySynthesizeSpeechInput

Objective-C

@interface AWSPollySynthesizeSpeechInput

Swift

class AWSPollySynthesizeSpeechInput
  • Specifies the engine (standard, neural, long-form, or generative) for Amazon Polly to use when processing input text for speech synthesis. Provide an engine that is supported by the voice you select. If you don’t provide an engine, the standard engine is selected by default; if the chosen voice isn’t supported by the standard engine, the request results in an error. For information on Amazon Polly voices and which voices are available for each engine, see Available Voices. A usage sketch that sets this and the other properties below follows the property list.

    Type: String

    Valid Values: standard | neural | long-form | generative

    Required: No

    Declaration

    Objective-C

    @property (nonatomic) AWSPollyEngine engine;

    Swift

    var engine: AWSPollyEngine { get set }
  • Optional language code for the Synthesize Speech request. This is only necessary if using a bilingual voice, such as Aditi, which can be used for either Indian English (en-IN) or Hindi (hi-IN).

    If a bilingual voice is used and no language code is specified, Amazon Polly uses the default language of the bilingual voice. The default language for any voice is the one returned by the DescribeVoices operation for the LanguageCode parameter. For example, if no language code is specified, Aditi will use Indian English rather than Hindi.

    Declaration

    Objective-C

    @property (nonatomic) AWSPollyLanguageCode languageCode;

    Swift

    var languageCode: AWSPollyLanguageCode { get set }
  • List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice. For information about storing lexicons, see PutLexicon.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<NSString *> *_Nullable lexiconNames;

    Swift

    var lexiconNames: [String]? { get set }
  • The format in which the returned output will be encoded. For an audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.

    When pcm is used, the content returned is audio/pcm in a signed 16-bit, 1 channel (mono), little-endian format.

    Declaration

    Objective-C

    @property (nonatomic) AWSPollyOutputFormat outputFormat;

    Swift

    var outputFormat: AWSPollyOutputFormat { get set }
  • The audio frequency specified in Hz.

    The valid values for mp3 and ogg_vorbis are “8000”, “16000”, “22050”, and “24000”. The default value for standard voices is “22050”; the default value for neural, long-form, and generative voices is “24000”.

    Valid values for pcm are “8000” and “16000”. The default value is “16000”.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable sampleRate;

    Swift

    var sampleRate: String? { get set }
  • The types of speech marks returned for the input text. Valid values are sentence, ssml, viseme, and word.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<NSString *> *_Nullable speechMarkTypes;

    Swift

    var speechMarkTypes: [String]? { get set }
  • Input text to synthesize. If you specify ssml as the TextType, follow the SSML format for the input text.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable text;

    Swift

    var text: String? { get set }
  • Specifies whether the input text is plain text or SSML. The default value is plain text. For more information, see Using SSML.

    Declaration

    Objective-C

    @property (nonatomic) AWSPollyTextType textType;

    Swift

    var textType: AWSPollyTextType { get set }
  • Voice ID to use for the synthesis. You can get a list of available voice IDs by calling the DescribeVoices operation.

    Declaration

    Objective-C

    @property (nonatomic) AWSPollyVoiceId voiceId;

    Swift

    var voiceId: AWSPollyVoiceId { get set }
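
The following is a hedged usage sketch rather than part of the generated reference. It assumes an AWSServiceConfiguration has already been registered for the default AWSPolly client; the Swift enum case spellings (.neural, .mp3, .joanna) and the handling of the initializer should be verified against the SDK version in use, and the lexicon name shown in the comment is a placeholder.

Swift

    import AWSPolly

    // Build the request. The explicit type annotation keeps this compiling whether the
    // SDK imports init() as optional or non-optional.
    let input: AWSPollySynthesizeSpeechInput = AWSPollySynthesizeSpeechInput()
    input.engine = .neural                    // must be supported by the chosen voice
    input.voiceId = .joanna                   // list available IDs with DescribeVoices
    input.outputFormat = .mp3                 // mp3, ogg_vorbis, or pcm for audio; json for speech marks
    input.sampleRate = "24000"                // default for neural voices
    input.textType = .text                    // use .ssml when the text is SSML markup
    input.text = "Hello from Amazon Polly."
    // input.lexiconNames = ["myLexicon"]     // optional; "myLexicon" is a placeholder stored beforehand with PutLexicon

    // Send the request; audioStream on the result holds the encoded audio.
    AWSPolly.default().synthesizeSpeech(input).continueWith { task -> Any? in
        if let error = task.error {
            print("Synthesis failed: \(error)")
        } else if let audio = task.result?.audioStream {
            print("Received \(audio.count) bytes of MP3 audio")
        }
        return nil
    }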
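
Under the same assumptions, a second sketch requests speech marks instead of audio; the mark types chosen here are illustrative, not values taken from the reference above.

Swift

    import AWSPolly

    // Speech marks use the same input class with a json output format; the response's
    // audioStream then contains newline-delimited JSON mark objects instead of audio.
    let marksInput: AWSPollySynthesizeSpeechInput = AWSPollySynthesizeSpeechInput()
    marksInput.voiceId = .joanna
    marksInput.outputFormat = .json
    marksInput.speechMarkTypes = ["sentence", "word"]
    marksInput.textType = .text
    marksInput.text = "Hello from Amazon Polly."

    AWSPolly.default().synthesizeSpeech(marksInput).continueWith { task -> Any? in
        if let data = task.result?.audioStream,
           let marks = String(data: data, encoding: .utf8) {
            print(marks)                      // one JSON object per line, one line per mark
        }
        return nil
    }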