AWSTranscribeSettings

Objective-C

@interface AWSTranscribeSettings

Swift

class AWSTranscribeSettings

Provides optional settings for the StartTranscriptionJob operation.

  • Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.

    Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start and end time of the item, and alternative transcriptions of the item along with the confidence that Amazon Transcribe has in each transcription.

    You can’t set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable channelIdentification;

    Swift

    var channelIdentification: NSNumber? { get set }
  • The number of alternative transcriptions that the service should return. If you specify the MaxAlternatives field, you must set the ShowAlternatives field to true.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable maxAlternatives;

    Swift

    var maxAlternatives: NSNumber? { get set }
  • The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable maxSpeakerLabels;

    Swift

    var maxSpeakerLabels: NSNumber? { get set }
  • Determines whether the transcription contains alternative transcriptions. If you set the ShowAlternatives field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives field.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable showAlternatives;

    Swift

    var showAlternatives: NSNumber? { get set }
  • Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels field.

    You can’t set both ShowSpeakerLabels and ChannelIdentification in the same request. If you set both, your request returns a BadRequestException.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable showSpeakerLabels;

    Swift

    var showSpeakerLabels: NSNumber? { get set }
  • Set to mask to remove filtered text from the transcript and replace it with three asterisks (“***”) as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.

    Declaration

    Objective-C

    @property (nonatomic) AWSTranscribeVocabularyFilterMethod vocabularyFilterMethod;

    Swift

    var vocabularyFilterMethod: AWSTranscribeVocabularyFilterMethod { get set }
  • The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable vocabularyFilterName;

    Swift

    var vocabularyFilterName: String? { get set }
  • The name of a vocabulary to use when processing the transcription job.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable vocabularyName;

    Swift

    var vocabularyName: String? { get set }
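
A minimal sketch of configuring these properties for a StartTranscriptionJob request. The surrounding request and client class names (AWSTranscribeStartTranscriptionJobRequest) and the job and filter names are assumptions for illustration; only AWSTranscribeSettings and its properties come from this reference.

```swift
import AWSTranscribe

func buildTranscriptionRequest() -> AWSTranscribeStartTranscriptionJobRequest? {
    // Model initializers in the AWS SDK for iOS are failable.
    guard let settings = AWSTranscribeSettings(),
          let request = AWSTranscribeStartTranscriptionJobRequest() else {
        return nil
    }

    // Identify up to 4 speakers. ShowSpeakerLabels must be true
    // whenever MaxSpeakerLabels is set.
    settings.showSpeakerLabels = true
    settings.maxSpeakerLabels = 4

    // Note: do not also set channelIdentification here; combining it
    // with showSpeakerLabels in one request returns a BadRequestException.

    // Replace filtered words with "***". The filter must share the
    // job's language code; the name below is hypothetical.
    settings.vocabularyFilterMethod = .mask
    settings.vocabularyFilterName = "profanity-filter"

    request.transcriptionJobName = "example-job" // hypothetical job name
    request.settings = settings
    return request
}
```

Requesting alternatives works the same way: set `showAlternatives = true` together with a `maxAlternatives` value, since each paired field requires the other.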