AWSTranscribeStartTranscriptionJobRequest

Objective-C

@interface AWSTranscribeStartTranscriptionJobRequest

Swift

class AWSTranscribeStartTranscriptionJobRequest
  • Makes it possible to redact or flag specified personally identifiable information (PII) in your transcript. If you use ContentRedaction, you must also include the sub-parameters: RedactionOutput and RedactionType. You can optionally include PiiEntityTypes to choose which types of PII you want to redact. If you do not include PiiEntityTypes in your request, all PII is redacted.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeContentRedaction *_Nullable contentRedaction;

    Swift

    var contentRedaction: AWSTranscribeContentRedaction? { get set }
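
    As a usage sketch, both required sub-parameters might be set like this (the .pii and .redacted enum cases are assumptions based on the SDK's Swift naming conventions):

    let request = AWSTranscribeStartTranscriptionJobRequest()
    let redaction = AWSTranscribeContentRedaction()
    redaction.redactionType = .pii          // required sub-parameter
    redaction.redactionOutput = .redacted   // required sub-parameter
    // PiiEntityTypes is omitted here, so all PII is redacted.
    request.contentRedaction = redaction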
  • Enables automatic language identification in your transcription job request. Use this parameter if your media file contains only one language. If your media contains multiple languages, use IdentifyMultipleLanguages instead.

    If you include IdentifyLanguage, you can optionally include a list of language codes, using LanguageOptions, that you think may be present in your media file. Including LanguageOptions restricts IdentifyLanguage to only the language options that you specify, which can improve transcription accuracy.

    If you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter to your automatic language identification request, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName). If you include LanguageIdSettings, also include LanguageOptions.

    Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable identifyLanguage;

    Swift

    var identifyLanguage: NSNumber? { get set }
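
    As a sketch, single-language identification constrained to two candidate dialects:

    let request = AWSTranscribeStartTranscriptionJobRequest()
    // IdentifyLanguage is mutually exclusive with LanguageCode and
    // IdentifyMultipleLanguages, so neither of those is set on this request.
    request.identifyLanguage = true
    request.languageOptions = ["en-US", "en-AU"]   // optional candidate list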
  • Enables automatic multi-language identification in your transcription job request. Use this parameter if your media file contains more than one language. If your media contains only one language, use IdentifyLanguage instead.

    If you include IdentifyMultipleLanguages, you can optionally include a list of language codes, using LanguageOptions, that you think may be present in your media file. Including LanguageOptions restricts IdentifyMultipleLanguages to only the language options that you specify, which can improve transcription accuracy.

    If you want to apply a custom vocabulary or a custom vocabulary filter to your automatic language identification request, include LanguageIdSettings with the relevant sub-parameters (VocabularyName and VocabularyFilterName). If you include LanguageIdSettings, also include LanguageOptions.

    Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable identifyMultipleLanguages;

    Swift

    var identifyMultipleLanguages: NSNumber? { get set }
  • Makes it possible to control how your transcription job is processed. Currently, the only JobExecutionSettings modification you can choose is enabling job queueing using the AllowDeferredExecution sub-parameter.

    If you include JobExecutionSettings in your request, you must also include the sub-parameters: AllowDeferredExecution and DataAccessRoleArn.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeJobExecutionSettings *_Nullable jobExecutionSettings;

    Swift

    var jobExecutionSettings: AWSTranscribeJobExecutionSettings? { get set }
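
    A sketch of enabling job queueing; the role ARN is a placeholder:

    let request = AWSTranscribeStartTranscriptionJobRequest()
    let execution = AWSTranscribeJobExecutionSettings()
    // Both sub-parameters are required when JobExecutionSettings is included.
    execution.allowDeferredExecution = true
    execution.dataAccessRoleArn = "arn:aws:iam::111122223333:role/ExampleRole"
    request.jobExecutionSettings = execution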
  • A map of plain text, non-secret key:value pairs, known as encryption context pairs, that provide an added layer of security for your data. For more information, see KMS encryption context and Asymmetric keys in KMS.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSDictionary<NSString *, NSString *> *_Nullable KMSEncryptionContext;

    Swift

    var kmsEncryptionContext: [String : String]? { get set }
  • The language code that represents the language spoken in the input media file.

    If you’re unsure of the language spoken in your media file, consider using IdentifyLanguage or IdentifyMultipleLanguages to enable automatic language identification.

    Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.

    For a list of supported languages and their associated language codes, refer to the Supported languages table.

    To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.

    Declaration

    Objective-C

    @property (nonatomic) AWSTranscribeLanguageCode languageCode;

    Swift

    var languageCode: AWSTranscribeLanguageCode { get set }
  • If you’re using automatic language identification in your request and want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName). Note that multi-language identification (IdentifyMultipleLanguages) doesn’t support custom language models.

    LanguageIdSettings supports two to five language codes. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The language codes that you specify must match the languages of the associated custom language models, custom vocabularies, and custom vocabulary filters.

    It’s recommended that you include LanguageOptions when using LanguageIdSettings to ensure that the correct language dialect is identified. For example, if you specify a custom vocabulary that is in en-US but Amazon Transcribe determines that the language spoken in your media is en-AU, your custom vocabulary is not applied to your transcription. If you include LanguageOptions and include en-US as the only English language dialect, your custom vocabulary is applied to your transcription.

    If you want to include a custom language model with your request but do not want to use automatic language identification, use the ModelSettings parameter with the LanguageModelName sub-parameter instead. If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use the Settings parameter with the VocabularyName or VocabularyFilterName (or both) sub-parameters instead.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSDictionary<NSString *, AWSTranscribeLanguageIdSettings *> *_Nullable languageIdSettings;

    Swift

    var languageIdSettings: [String : AWSTranscribeLanguageIdSettings]? { get set }
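
    A sketch that pairs each candidate language with its own custom resources (all resource names are placeholders):

    let request = AWSTranscribeStartTranscriptionJobRequest()
    request.identifyLanguage = true
    request.languageOptions = ["en-US", "fr-FR"]   // recommended alongside LanguageIdSettings

    let enSettings = AWSTranscribeLanguageIdSettings()
    enSettings.vocabularyName = "my-en-vocabulary"
    enSettings.languageModelName = "my-en-language-model"

    let frSettings = AWSTranscribeLanguageIdSettings()
    frSettings.vocabularyFilterName = "my-fr-filter"

    request.languageIdSettings = ["en-US": enSettings, "fr-FR": frSettings]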
  • You can specify two or more language codes that represent the languages you think may be present in your media. Including more than five is not recommended. If you’re unsure what languages are present, do not include this parameter.

    If you include LanguageOptions in your request, you must also include IdentifyLanguage or IdentifyMultipleLanguages.

    For more information, refer to Supported languages.

    To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<NSString *> *_Nullable languageOptions;

    Swift

    var languageOptions: [String]? { get set }
  • Describes the Amazon S3 location of the media file you want to use in your request.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeMedia *_Nullable media;

    Swift

    var media: AWSTranscribeMedia? { get set }
  • Specify the format of your input media file.

    Declaration

    Objective-C

    @property (nonatomic) AWSTranscribeMediaFormat mediaFormat;

    Swift

    var mediaFormat: AWSTranscribeMediaFormat { get set }
  • The sample rate, in hertz, of the audio track in your input media file.

    If you do not specify the media sample rate, Amazon Transcribe determines it for you. If you specify the sample rate, it must match the rate detected by Amazon Transcribe. If there’s a mismatch between the value that you specify and the value detected, your job fails. In most cases, you can omit MediaSampleRateHertz and let Amazon Transcribe determine the sample rate.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSNumber *_Nullable mediaSampleRateHertz;

    Swift

    var mediaSampleRateHertz: NSNumber? { get set }
  • Specify the custom language model you want to include with your transcription job. If you include ModelSettings in your request, you must include the LanguageModelName sub-parameter.

    For more information, see Custom language models.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeModelSettings *_Nullable modelSettings;

    Swift

    var modelSettings: AWSTranscribeModelSettings? { get set }
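
    A sketch for a job that uses a fixed language code with a custom language model (the model name is a placeholder):

    let request = AWSTranscribeStartTranscriptionJobRequest()
    request.languageCode = .enUS
    let model = AWSTranscribeModelSettings()
    model.languageModelName = "my-custom-model"   // required sub-parameter
    request.modelSettings = model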
  • The name of the Amazon S3 bucket where you want your transcription output stored. Do not include the S3:// prefix of the specified bucket.

    If you want your output to go to a sub-folder of this bucket, specify it using the OutputKey parameter; OutputBucketName only accepts the name of a bucket.

    For example, if you want your output stored in S3://DOC-EXAMPLE-BUCKET, set OutputBucketName to DOC-EXAMPLE-BUCKET. However, if you want your output stored in S3://DOC-EXAMPLE-BUCKET/test-files/, set OutputBucketName to DOC-EXAMPLE-BUCKET and OutputKey to test-files/.

    Note that Amazon Transcribe must have permission to use the specified location. You can change Amazon S3 permissions using the Amazon Web Services Management Console. See also Permissions Required for IAM User Roles.

    If you do not specify OutputBucketName, your transcript is placed in a service-managed Amazon S3 bucket and you are provided with a URI to access your transcript.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable outputBucketName;

    Swift

    var outputBucketName: String? { get set }
  • The KMS key you want to use to encrypt your transcription output.

    If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways:

    1. Use the KMS key ID itself. For example, 1234abcd-12ab-34cd-56ef-1234567890ab.

    2. Use an alias for the KMS key ID. For example, alias/ExampleAlias.

    3. Use the Amazon Resource Name (ARN) for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

    4. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

    If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways:

    1. Use the ARN for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

    2. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

    If you do not specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3).

    If you specify a KMS key to encrypt your output, you must also specify an output location using the OutputBucketName parameter.

    Note that the role making the request must have permission to use the specified KMS key.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable outputEncryptionKMSKeyId;

    Swift

    var outputEncryptionKMSKeyId: String? { get set }
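
    A sketch combining a customer managed key with encryption context pairs (the key ID, bucket, and context values are placeholders):

    let request = AWSTranscribeStartTranscriptionJobRequest()
    request.outputBucketName = "DOC-EXAMPLE-BUCKET"   // required when a KMS key is specified
    request.outputEncryptionKMSKeyId = "1234abcd-12ab-34cd-56ef-1234567890ab"
    request.kmsEncryptionContext = ["Department": "Finance"]   // optional added security layer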
  • Use in combination with OutputBucketName to specify the output location of your transcript and, optionally, a unique name for your output file. The default name for your transcription output is the same as the name you specified for your transcription job (TranscriptionJobName).

    Here are some examples of how you can use OutputKey:

    • If you specify ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName and ‘my-transcript.json’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript.json.

    • If you specify ‘my-first-transcription’ as the TranscriptionJobName, ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName, and ‘my-transcript’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json.

    • If you specify ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName and ‘test-files/my-transcript.json’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json.

    • If you specify ‘my-first-transcription’ as the TranscriptionJobName, ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName, and ‘test-files/my-transcript’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json.

    If you specify the name of an Amazon S3 bucket sub-folder that doesn’t exist, one is created for you.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable outputKey;

    Swift

    var outputKey: String? { get set }
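
    The second example above, as a sketch:

    let request = AWSTranscribeStartTranscriptionJobRequest()
    request.transcriptionJobName = "my-first-transcription"
    request.outputBucketName = "DOC-EXAMPLE-BUCKET"
    request.outputKey = "my-transcript"
    // Output path: s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json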
  • Specify additional optional settings in your request, including channel identification, alternative transcriptions, and speaker partitioning. You can also use Settings to apply custom vocabularies and vocabulary filters.

    If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use Settings with the VocabularyName or VocabularyFilterName (or both) sub-parameter.

    If you’re using automatic language identification with your request and want to include a custom language model, a custom vocabulary, or a custom vocabulary filter, use the LanguageIdSettings parameter with the LanguageModelName, VocabularyName, or VocabularyFilterName sub-parameters instead.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeSettings *_Nullable settings;

    Swift

    var settings: AWSTranscribeSettings? { get set }
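
    A sketch enabling speaker partitioning alongside a custom vocabulary (the vocabulary name is a placeholder; property names follow the SDK's Swift conventions):

    let request = AWSTranscribeStartTranscriptionJobRequest()
    request.languageCode = .enUS   // no automatic language identification, so Settings applies
    let settings = AWSTranscribeSettings()
    settings.showSpeakerLabels = true
    settings.maxSpeakerLabels = 2
    settings.vocabularyName = "my-vocabulary"
    request.settings = settings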
  • Produces subtitle files for your input media. You can specify WebVTT (*.vtt) and SubRip (*.srt) formats.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSTranscribeSubtitles *_Nullable subtitles;

    Swift

    var subtitles: AWSTranscribeSubtitles? { get set }
  • Adds one or more custom tags, each in the form of a key:value pair, to a new transcription job at the time you start this new job.

    To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<AWSTranscribeTag *> *_Nullable tags;

    Swift

    var tags: [AWSTranscribeTag]? { get set }
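
    A sketch adding a single tag (the key:value pair is a placeholder):

    let request = AWSTranscribeStartTranscriptionJobRequest()
    let tag = AWSTranscribeTag()
    tag.key = "Department"
    tag.value = "Finance"
    request.tags = [tag]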
  • Enables toxic speech detection in your transcript. If you include ToxicityDetection in your request, you must also include ToxicityCategories.

    For information on the types of toxic speech Amazon Transcribe can detect, see Detecting toxic speech.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<AWSTranscribeToxicityDetectionSettings *> *_Nullable toxicityDetection;

    Swift

    var toxicityDetection: [AWSTranscribeToxicityDetectionSettings]? { get set }
  • A unique name, chosen by you, for your transcription job. The name that you specify is also used as the default name of your transcription output file. If you want to specify a different name for your transcription output, use the OutputKey parameter.

    This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new job with the same name as an existing job, you get a ConflictException error.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable transcriptionJobName;

    Swift

    var transcriptionJobName: String? { get set }
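
    Putting the required pieces together, a minimal end-to-end sketch (the job name, bucket, and media URI are placeholders; the AWSTask-based call pattern assumes the standard AWS SDK for iOS client):

    import AWSTranscribe

    let request = AWSTranscribeStartTranscriptionJobRequest()
    request.transcriptionJobName = "my-first-transcription"   // must be unique in the account
    request.languageCode = .enUS   // exactly one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages

    let media = AWSTranscribeMedia()
    media.mediaFileUri = "s3://DOC-EXAMPLE-BUCKET/my-media-file.flac"
    request.media = media

    AWSTranscribe.default().startTranscriptionJob(request).continueWith { task in
        if let error = task.error {
            print("Failed to start job: \(error)")
        } else if let name = task.result?.transcriptionJob?.transcriptionJobName {
            print("Started transcription job: \(name)")
        }
        return nil
    }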