AWSTranscribeTranscriptionJob

Objective-C

@interface AWSTranscribeTranscriptionJob

Swift

class AWSTranscribeTranscriptionJob

Provides detailed information about a transcription job.

To view the status of the specified transcription job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

If you enabled content redaction, the redacted transcript can be found at the location specified in RedactedTranscriptFileUri.
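
A minimal sketch of inspecting these fields in Swift is shown below. It assumes you already hold an AWSTranscribeTranscriptionJob instance (for example, from a GetTranscriptionJob response) and that the status enum exposes completed and failed cases; the transcript URI property names are taken from the fields described above.

import AWSTranscribe

// Sketch only: `job` is assumed to come from elsewhere, e.g. a
// GetTranscriptionJob response.
func handle(job: AWSTranscribeTranscriptionJob) {
    switch job.transcriptionJobStatus {
    case .completed:
        // The transcript object carries the S3 URI of the finished transcript.
        if let uri = job.transcript?.transcriptFileUri {
            print("Transcript available at: \(uri)")
        }
        // If content redaction was enabled, the redacted transcript has its own URI.
        if let redactedUri = job.transcript?.redactedTranscriptFileUri {
            print("Redacted transcript available at: \(redactedUri)")
        }
    case .failed:
        // FailureReason explains why the job did not complete.
        print("Job failed: \(job.failureReason ?? "unknown reason")")
    default:
        print("Job is still queued or in progress.")
    }
}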

  • The date and time the specified transcription job finished processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSDate *_Nullable completionTime;

    Swift

    var completionTime: Date? { get set }
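
    As an illustration only, a small sketch for rendering this timestamp; the ISO 8601 formatter options shown are an assumption, not part of the SDK:

    import AWSTranscribe
    import Foundation

    // Sketch: render a job's completion time as an ISO 8601 string with
    // fractional seconds, e.g. 2022-05-04T19:33:13.922Z.
    func formattedCompletionTime(for job: AWSTranscribeTranscriptionJob) -> String? {
        guard let completed = job.completionTime else { return nil }
        let formatter = ISO8601DateFormatter()
        formatter.formatOptions = [.withInternetDateTime, .withFractionalSeconds]
        return formatter.string(from: completed)
    }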
  • Redacts or flags specified personally identifiable information (PII) in your transcript.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeContentRedaction *_Nullable contentRedaction;

    Swift

    var contentRedaction: AWSTranscribeContentRedaction? { get set }
  • The date and time the specified transcription job request was made.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request that was made at 12:32 PM UTC-7 on May 4, 2022.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSDate *_Nullable creationTime;

    Swift

    var creationTime: Date? { get set }
  • If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job request failed.

    The FailureReason field contains one of the following values:

    • Unsupported media format.

      The media format specified in MediaFormat isn’t valid. Refer to MediaFormat for a list of supported formats.

    • The media format provided does not match the detected media format.

      The media format specified in MediaFormat doesn’t match the format of the input file. Check the media format of your media file and correct the specified value.

    • Invalid sample rate for audio file.

      The sample rate specified in MediaSampleRateHertz isn’t valid. The sample rate must be between 8,000 and 48,000 Hertz.

    • The sample rate provided does not match the detected sample rate.

      The sample rate specified in MediaSampleRateHertz doesn’t match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

    • Invalid file size: file size too large.

      The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

    • Invalid number of channels: number of channels too large.

      Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable failureReason;

    Swift

    var failureReason: String? { get set }
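
    A hedged sketch of mapping these documented values to guidance for callers; the prefix matching shown is illustrative only, so compare against the exact strings your jobs return:

    import AWSTranscribe

    // Sketch: turn a documented FailureReason value into a remediation hint.
    func adviceForFailure(of job: AWSTranscribeTranscriptionJob) -> String? {
        guard job.transcriptionJobStatus == .failed,
              let reason = job.failureReason else { return nil }
        if reason.hasPrefix("Unsupported media format") {
            return "Specify a media format listed under MediaFormat."
        }
        if reason.hasPrefix("Invalid sample rate") {
            return "Use a sample rate between 8,000 and 48,000 Hertz."
        }
        if reason.hasPrefix("Invalid file size") || reason.hasPrefix("Invalid number of channels") {
            return "Check the limits in Guidelines and quotas."
        }
        // Fall back to the raw reason for the remaining documented values.
        return reason
    }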
  • The confidence score associated with the language identified in your media file.

    Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable identifiedLanguageScore;

    Swift

    var identifiedLanguageScore: NSNumber? { get set }
  • Indicates whether automatic language identification was enabled (TRUE) for the specified transcription job.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable identifyLanguage;

    Swift

    var identifyLanguage: NSNumber? { get set }
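
    A short sketch combining this flag with the identifiedLanguageScore field described above; property names are as documented, and the interpretation of the score follows the description above:

    import AWSTranscribe

    // Sketch: return the language-identification confidence, but only when
    // automatic language identification was actually enabled for the job.
    func identifiedLanguageConfidence(for job: AWSTranscribeTranscriptionJob) -> Double? {
        guard job.identifyLanguage?.boolValue == true else { return nil }
        // Scores range from 0 to 1; higher means a more confident match.
        return job.identifiedLanguageScore?.doubleValue
    }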
  • Indicates whether automatic multi-language identification was enabled (TRUE) for the specified transcription job.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable identifyMultipleLanguages;

    Swift

    var identifyMultipleLanguages: NSNumber? { get set }
  • Provides information about how your transcription job is being processed. This parameter shows if your request is queued and what data access role is being used.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeJobExecutionSettings *_Nullable jobExecutionSettings;

    Swift

    var jobExecutionSettings: AWSTranscribeJobExecutionSettings? { get set }
  • The language code used to create your transcription job. For a list of supported languages and their associated language codes, refer to the Supported languages table.

    Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.

    Declaration

    Objective-C

    @property (nonatomic) AWSTranscribeLanguageCode languageCode;

    Swift

    var languageCode: AWSTranscribeLanguageCode { get set }
  • The language codes used to create your transcription job. This parameter is used with multi-language identification. For single-language identification requests, refer to the singular version of this parameter, LanguageCode.

    For a list of supported languages and their associated language codes, refer to the Supported languages table.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<AWSTranscribeLanguageCodeItem *> *_Nullable languageCodes;

    Swift

    var languageCodes: [AWSTranscribeLanguageCodeItem]? { get set }
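
    A sketch of iterating the detected languages is shown below; the durationInSeconds property on AWSTranscribeLanguageCodeItem is an assumption here, so check the generated class for the exact field names:

    import AWSTranscribe

    // Sketch: list the languages a multi-language identification job detected.
    func listDetectedLanguages(for job: AWSTranscribeTranscriptionJob) {
        guard job.identifyMultipleLanguages?.boolValue == true else { return }
        for item in job.languageCodes ?? [] {
            // `durationInSeconds` is assumed; `languageCode` mirrors the enum used elsewhere.
            let seconds = item.durationInSeconds?.doubleValue ?? 0
            print("Detected \(item.languageCode) for \(seconds) seconds")
        }
    }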
  • If using automatic language identification (IdentifyLanguage) in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).

    You can specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The languages you specify must match the languages of the specified custom language models, custom vocabularies, and custom vocabulary filters.

    To include language options using IdentifyLanguage without including a custom language model, a custom vocabulary, or a custom vocabulary filter, use LanguageOptions instead of LanguageIdSettings. Including language options can improve the accuracy of automatic language identification.

    If you want to include a custom language model with your request but do not want to use automatic language identification, instead use the ModelSettings parameter with the LanguageModelName sub-parameter.

    If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, instead use the Settings parameter with the VocabularyName or VocabularyFilterName (or both) sub-parameter.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSDictionary<NSString *, AWSTranscribeLanguageIdSettings *> *_Nullable languageIdSettings;

    Swift

    var languageIdSettings: [String : AWSTranscribeLanguageIdSettings]? { get set }
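
    A sketch of reading these settings back from a job; the camel-cased vocabularyName, languageModelName, and vocabularyFilterName spellings are assumed from the sub-parameters named above:

    import AWSTranscribe

    // Sketch: show which custom resources were associated with each candidate language.
    func describeLanguageIdSettings(for job: AWSTranscribeTranscriptionJob) {
        for (languageCode, settings) in job.languageIdSettings ?? [:] {
            print("\(languageCode): vocabulary=\(settings.vocabularyName ?? "none"), "
                + "model=\(settings.languageModelName ?? "none"), "
                + "filter=\(settings.vocabularyFilterName ?? "none")")
        }
    }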
  • You can specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. If you’re unsure what languages are present, do not include this parameter.

    If you include LanguageOptions in your request, you must also include IdentifyLanguage.

    For more information, refer to Supported languages.

    To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<NSString *> *_Nullable languageOptions;

    Swift

    var languageOptions: [String]? { get set }
  • Describes the Amazon S3 location of the media file you want to use in your request.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeMedia *_Nullable media;

    Swift

    var media: AWSTranscribeMedia? { get set }
  • The format of the input media file.

    Declaration

    Objective-C

    @property (nonatomic) AWSTranscribeMediaFormat mediaFormat;

    Swift

    var mediaFormat: AWSTranscribeMediaFormat { get set }
  • The sample rate, in Hertz, of the audio track in your input media file.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable mediaSampleRateHertz;

    Swift

    var mediaSampleRateHertz: NSNumber? { get set }
  • The custom language model you want to include with your transcription job. If you include ModelSettings in your request, you must include the LanguageModelName sub-parameter.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeModelSettings *_Nullable modelSettings;

    Swift

    var modelSettings: AWSTranscribeModelSettings? { get set }
  • Specify additional optional settings in your request, including channel identification, alternative transcriptions, and speaker labeling; Settings also allows you to apply custom vocabularies and custom vocabulary filters.

    If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use Settings with the VocabularyName or VocabularyFilterName (or both) sub-parameter.

    If you’re using automatic language identification with your request and want to include a custom language model, a custom vocabulary, or a custom vocabulary filter, do not use the Settings parameter; instead use the LanguageIdSettings parameter with the LanguageModelName, VocabularyName, or VocabularyFilterName sub-parameters.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeSettings *_Nullable settings;

    Swift

    var settings: AWSTranscribeSettings? { get set }
  • The date and time the specified transcription job began processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSDate *_Nullable startTime;

    Swift

    var startTime: Date? { get set }
  • Generate subtitles for your media file with your transcription request.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeSubtitlesOutput *_Nullable subtitles;

    Swift

    var subtitles: AWSTranscribeSubtitlesOutput? { get set }
  • Adds one or more custom tags, each in the form of a key:value pair, to a new transcription job at the time you start the job.

    To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<AWSTranscribeTag *> *_Nullable tags;

    Swift

    var tags: [AWSTranscribeTag]? { get set }
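
    A brief sketch of reading the tags back from a job; the key and value property spellings on AWSTranscribeTag are assumptions, so verify them against the generated class:

    import AWSTranscribe

    // Sketch: print each key:value tag attached to the job.
    func listTags(for job: AWSTranscribeTranscriptionJob) {
        for tag in job.tags ?? [] {
            print("\(tag.key ?? "?"): \(tag.value ?? "?")")
        }
    }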
  • Provides you with the Amazon S3 URI you can use to access your transcript.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeTranscript *_Nullable transcript;

    Swift

    var transcript: AWSTranscribeTranscript? { get set }
  • The name of the transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable transcriptionJobName;

    Swift

    var transcriptionJobName: String? { get set }
  • Provides the status of the specified transcription job.

    If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

    Declaration

    Objective-C

    @property (nonatomic) AWSTranscribeTranscriptionJobStatus transcriptionJobStatus;

    Swift

    var transcriptionJobStatus: AWSTranscribeTranscriptionJobStatus { get set }