Classes
The following classes are available globally.
-
A time range, in milliseconds, between two points in your media file.
You can use
StartTimeandEndTimeto search a custom segment. For example, settingStartTimeto 10000 andEndTimeto 50000 only searches for your specified criteria in the audio contained between the 10,000 millisecond mark and the 50,000 millisecond mark of your media file. You must useStartTimeandEndTimeas a set; that is, if you include one, you must include both.You can use also
Firstto search from the start of the audio until the time that you specify, orLastto search from the time that you specify until the end of the audio. For example, settingFirstto 50000 only searches for your specified criteria in the audio contained between the start of the media file to the 50,000 millisecond mark. You can useFirstandLastindependently of each other.If you prefer to use percentage instead of milliseconds, see .
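For illustration, a minimal Swift sketch of the 10,000 to 50,000 millisecond example above; the startTime and endTime property names are assumed from the SDK's usual lowerCamelCase mapping of StartTime and EndTime:

import AWSTranscribe

// Hypothetical sketch: restrict matching to the 10,000 to 50,000 millisecond window.
let timeRange = AWSTranscribeAbsoluteTimeRange()
timeRange?.startTime = NSNumber(value: 10_000) // milliseconds
timeRange?.endTime = NSNumber(value: 50_000)   // StartTime and EndTime must be used as a set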
Declaration
Objective-C
@interface AWSTranscribeAbsoluteTimeRange
Swift
class AWSTranscribeAbsoluteTimeRange
-
Provides detailed information about a Call Analytics job.
To view the job's status, refer to CallAnalyticsJobStatus. If the status is COMPLETED, the job is finished. You can find your completed transcript at the URI specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.
If you enabled personally identifiable information (PII) redaction, the redacted transcript appears at the location specified in RedactedTranscriptFileUri.
If you chose to redact the audio in your media file, you can find your redacted media file at the location specified in the RedactedMediaFileUri field of your response.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJob
Swift
class AWSTranscribeCallAnalyticsJob
-
Contains details about a call analytics job, including information about skipped analytics features.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJobDetails
Swift
class AWSTranscribeCallAnalyticsJobDetails
-
Provides additional optional settings for your request, including content redaction and automatic language identification, and allows you to apply custom language models, custom vocabulary filters, and custom vocabularies.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJobSettings
Swift
class AWSTranscribeCallAnalyticsJobSettings
-
Provides detailed information about a specific Call Analytics job.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJobSummary
Swift
class AWSTranscribeCallAnalyticsJobSummary
-
Represents a skipped analytics feature during the analysis of a call analytics job.
The Feature field indicates the type of analytics feature that was skipped.
The Message field contains additional information or a message explaining why the analytics feature was skipped.
The ReasonCode field provides a code indicating the reason why the analytics feature was skipped.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsSkippedFeature
Swift
class AWSTranscribeCallAnalyticsSkippedFeature
-
Provides you with the properties of the Call Analytics category you specified in your request. This includes the list of rules that define the specified category.
Declaration
Objective-C
@interface AWSTranscribeCategoryProperties
Swift
class AWSTranscribeCategoryProperties
-
Makes it possible to specify which speaker is on which channel. For example, if your agent is the first participant to speak, you would set ChannelId to 0 (to indicate the first channel) and ParticipantRole to AGENT (to indicate that it's the agent speaking).
Declaration
Objective-C
@interface AWSTranscribeChannelDefinition
Swift
class AWSTranscribeChannelDefinition
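For illustration, a short Swift sketch of the agent-on-channel-0 example above; the channelId and participantRole property names and the .agent/.customer cases are assumed from the SDK's usual mapping of ChannelId and ParticipantRole:

import AWSTranscribe

// Hypothetical sketch: agent speaks on the first channel, customer on the second.
let agentChannel = AWSTranscribeChannelDefinition()
agentChannel?.channelId = NSNumber(value: 0)
agentChannel?.participantRole = .agent

let customerChannel = AWSTranscribeChannelDefinition()
customerChannel?.channelId = NSNumber(value: 1)
customerChannel?.participantRole = .customer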
-
Makes it possible to redact or flag specified personally identifiable information (PII) in your transcript. If you use ContentRedaction, you must also include the sub-parameters: RedactionOutput and RedactionType. You can optionally include PiiEntityTypes to choose which types of PII you want to redact.
Required parameters: [RedactionType, RedactionOutput]
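For illustration, a minimal Swift sketch with both required sub-parameters; the redactionType and redactionOutput property names and the .pii and .redacted cases are assumed from the SDK's usual mapping:

import AWSTranscribe

// Hypothetical sketch: redact all PII and return only the redacted transcript.
let redaction = AWSTranscribeContentRedaction()
redaction?.redactionType = .pii        // RedactionType (required)
redaction?.redactionOutput = .redacted // RedactionOutput (required)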
Declaration
Objective-C
@interface AWSTranscribeContentRedaction
Swift
class AWSTranscribeContentRedaction
-
Declaration
Objective-C
@interface AWSTranscribeCreateCallAnalyticsCategoryRequest
Swift
class AWSTranscribeCreateCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateCallAnalyticsCategoryResponse
Swift
class AWSTranscribeCreateCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateLanguageModelRequest
Swift
class AWSTranscribeCreateLanguageModelRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateLanguageModelResponse
Swift
class AWSTranscribeCreateLanguageModelResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateMedicalVocabularyRequest
Swift
class AWSTranscribeCreateMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateMedicalVocabularyResponse
Swift
class AWSTranscribeCreateMedicalVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyFilterRequest
Swift
class AWSTranscribeCreateVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyFilterResponse
Swift
class AWSTranscribeCreateVocabularyFilterResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyRequest
Swift
class AWSTranscribeCreateVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyResponse
Swift
class AWSTranscribeCreateVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsCategoryRequest
Swift
class AWSTranscribeDeleteCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsCategoryResponse
Swift
class AWSTranscribeDeleteCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsJobRequest
Swift
class AWSTranscribeDeleteCallAnalyticsJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsJobResponse
Swift
class AWSTranscribeDeleteCallAnalyticsJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeDeleteLanguageModelRequest
Swift
class AWSTranscribeDeleteLanguageModelRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteMedicalScribeJobRequest
Swift
class AWSTranscribeDeleteMedicalScribeJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteMedicalTranscriptionJobRequest
Swift
class AWSTranscribeDeleteMedicalTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteMedicalVocabularyRequest
Swift
class AWSTranscribeDeleteMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteTranscriptionJobRequest
Swift
class AWSTranscribeDeleteTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteVocabularyFilterRequest
Swift
class AWSTranscribeDeleteVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteVocabularyRequest
Swift
class AWSTranscribeDeleteVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeDescribeLanguageModelRequest
Swift
class AWSTranscribeDescribeLanguageModelRequest
-
Declaration
Objective-C
@interface AWSTranscribeDescribeLanguageModelResponse
Swift
class AWSTranscribeDescribeLanguageModelResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsCategoryRequest
Swift
class AWSTranscribeGetCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsCategoryResponse
Swift
class AWSTranscribeGetCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsJobRequest
Swift
class AWSTranscribeGetCallAnalyticsJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsJobResponse
Swift
class AWSTranscribeGetCallAnalyticsJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalScribeJobRequest
Swift
class AWSTranscribeGetMedicalScribeJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalScribeJobResponse
Swift
class AWSTranscribeGetMedicalScribeJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalTranscriptionJobRequest
Swift
class AWSTranscribeGetMedicalTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalTranscriptionJobResponse
Swift
class AWSTranscribeGetMedicalTranscriptionJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalVocabularyRequest
Swift
class AWSTranscribeGetMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalVocabularyResponse
Swift
class AWSTranscribeGetMedicalVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetTranscriptionJobRequest
Swift
class AWSTranscribeGetTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetTranscriptionJobResponse
Swift
class AWSTranscribeGetTranscriptionJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyFilterRequest
Swift
class AWSTranscribeGetVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyFilterResponse
Swift
class AWSTranscribeGetVocabularyFilterResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyRequest
Swift
class AWSTranscribeGetVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyResponse
Swift
class AWSTranscribeGetVocabularyResponse
-
Contains the Amazon S3 location of the training data you want to use to create a new custom language model, and permissions to access this location.
When using InputDataConfig, you must include these sub-parameters: S3Uri and DataAccessRoleArn. You can optionally include TuningDataS3Uri.
Required parameters: [S3Uri, DataAccessRoleArn]
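For illustration, a minimal Swift sketch of the required fields; the s3Uri, dataAccessRoleArn, and tuningDataS3Uri property names are assumed from the SDK's usual mapping, and the bucket and role ARN are placeholders:

import AWSTranscribe

// Hypothetical sketch: point model training at an S3 prefix and grant access through an IAM role.
let inputData = AWSTranscribeInputDataConfig()
inputData?.s3Uri = "s3://DOC-EXAMPLE-BUCKET/training-data/"                 // required
inputData?.dataAccessRoleArn = "arn:aws:iam::111122223333:role/ExampleRole" // required
inputData?.tuningDataS3Uri = "s3://DOC-EXAMPLE-BUCKET/tuning-data/"         // optional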
Declaration
Objective-C
@interface AWSTranscribeInputDataConfig
Swift
class AWSTranscribeInputDataConfig
-
Flag the presence or absence of interruptions in your Call Analytics transcription output.
Rules using InterruptionFilter are designed to match:
Instances where an agent interrupts a customer
Instances where a customer interrupts an agent
Either participant interrupting the other
A lack of interruptions
See Rule criteria for post-call categories for usage examples.
Declaration
Objective-C
@interface AWSTranscribeInterruptionFilter
Swift
class AWSTranscribeInterruptionFilter
-
Makes it possible to control how your transcription job is processed. Currently, the only JobExecutionSettings modification you can choose is enabling job queueing using the AllowDeferredExecution sub-parameter.
If you include JobExecutionSettings in your request, you must also include the sub-parameters: AllowDeferredExecution and DataAccessRoleArn.
Declaration
Objective-C
@interface AWSTranscribeJobExecutionSettings
Swift
class AWSTranscribeJobExecutionSettings
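For illustration, a minimal Swift sketch of enabling job queueing; the allowDeferredExecution and dataAccessRoleArn property names are assumed from the SDK's usual mapping, and the role ARN is a placeholder:

import AWSTranscribe

// Hypothetical sketch: queue the job if the concurrent-job limit has been reached.
let execution = AWSTranscribeJobExecutionSettings()
execution?.allowDeferredExecution = NSNumber(value: true)
execution?.dataAccessRoleArn = "arn:aws:iam::111122223333:role/ExampleRole"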
-
Provides information on the speech contained in a discrete utterance when multi-language identification is enabled in your request. This utterance represents a block of speech consisting of one language, preceded or followed by a block of speech in a different language.
Declaration
Objective-C
@interface AWSTranscribeLanguageCodeItem
Swift
class AWSTranscribeLanguageCodeItem
-
If using automatic language identification in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName). Note that multi-language identification (IdentifyMultipleLanguages) doesn't support custom language models.
LanguageIdSettings supports two to five language codes. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The language codes that you specify must match the languages of the associated custom language models, custom vocabularies, and custom vocabulary filters.
It's recommended that you include LanguageOptions when using LanguageIdSettings to ensure that the correct language dialect is identified. For example, if you specify a custom vocabulary that is in en-US but Amazon Transcribe determines that the language spoken in your media is en-AU, your custom vocabulary is not applied to your transcription. If you include LanguageOptions and include en-US as the only English language dialect, your custom vocabulary is applied to your transcription.
If you want to include a custom language model with your request but do not want to use automatic language identification, use instead the ModelSettings parameter with the LanguageModelName sub-parameter. If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use instead the Settings parameter with the VocabularyName or VocabularyFilterName (or both) sub-parameter.
Declaration
Objective-C
@interface AWSTranscribeLanguageIdSettings
Swift
class AWSTranscribeLanguageIdSettings
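For illustration only, a Swift sketch of attaching per-language settings when automatic language identification is enabled; the languageIdSettings dictionary keyed by language code, and the vocabularyName, languageModelName, and identifyLanguage property names, are assumptions about the generated API rather than confirmed signatures:

import AWSTranscribe

// Hypothetical sketch: apply an en-US custom vocabulary and language model
// only when the identified language is en-US.
let enUSSettings = AWSTranscribeLanguageIdSettings()
enUSSettings?.vocabularyName = "my-en-us-vocabulary"        // placeholder name
enUSSettings?.languageModelName = "my-en-us-language-model" // placeholder name

let request = AWSTranscribeStartTranscriptionJobRequest()
request?.identifyLanguage = NSNumber(value: true)
// Including LanguageOptions, as recommended above, helps pin the dialect.
if let enUSSettings = enUSSettings {
    request?.languageIdSettings = ["en-US": enUSSettings]   // keyed by language code (assumed)
}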
-
Provides information about a custom language model, including:
The base model name
When the model was created
The location of the files used to train the model
When the model was last modified
The name you chose for the model
The model’s language
The model’s processing state
Any available upgrades for the base model
Declaration
Objective-C
@interface AWSTranscribeLanguageModel
Swift
class AWSTranscribeLanguageModel
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsCategoriesRequest
Swift
class AWSTranscribeListCallAnalyticsCategoriesRequest
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsCategoriesResponse
Swift
class AWSTranscribeListCallAnalyticsCategoriesResponse
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsJobsRequest
Swift
class AWSTranscribeListCallAnalyticsJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsJobsResponse
Swift
class AWSTranscribeListCallAnalyticsJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListLanguageModelsRequest
Swift
class AWSTranscribeListLanguageModelsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListLanguageModelsResponse
Swift
class AWSTranscribeListLanguageModelsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalScribeJobsRequest
Swift
class AWSTranscribeListMedicalScribeJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalScribeJobsResponse
Swift
class AWSTranscribeListMedicalScribeJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalTranscriptionJobsRequest
Swift
class AWSTranscribeListMedicalTranscriptionJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalTranscriptionJobsResponse
Swift
class AWSTranscribeListMedicalTranscriptionJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalVocabulariesRequest
Swift
class AWSTranscribeListMedicalVocabulariesRequest
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalVocabulariesResponse
Swift
class AWSTranscribeListMedicalVocabulariesResponse
-
Declaration
Objective-C
@interface AWSTranscribeListTagsForResourceRequest
Swift
class AWSTranscribeListTagsForResourceRequest
-
Declaration
Objective-C
@interface AWSTranscribeListTagsForResourceResponse
Swift
class AWSTranscribeListTagsForResourceResponse
-
Declaration
Objective-C
@interface AWSTranscribeListTranscriptionJobsRequest
Swift
class AWSTranscribeListTranscriptionJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListTranscriptionJobsResponse
Swift
class AWSTranscribeListTranscriptionJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListVocabulariesRequest
Swift
class AWSTranscribeListVocabulariesRequest
-
Declaration
Objective-C
@interface AWSTranscribeListVocabulariesResponse
Swift
class AWSTranscribeListVocabulariesResponse
-
Declaration
Objective-C
@interface AWSTranscribeListVocabularyFiltersRequest
Swift
class AWSTranscribeListVocabularyFiltersRequest
-
Declaration
Objective-C
@interface AWSTranscribeListVocabularyFiltersResponse
Swift
class AWSTranscribeListVocabularyFiltersResponse
-
Describes the Amazon S3 location of the media file you want to use in your request.
For information on supported media formats, refer to the MediaFormat parameter or the Media formats section in the Amazon S3 Developer Guide.
Declaration
Objective-C
@interface AWSTranscribeMedia
Swift
class AWSTranscribeMedia
-
Indicates which speaker is on which channel. The options are CLINICIAN and PATIENT.
Required parameters: [ChannelId, ParticipantRole]
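For illustration, a short Swift sketch; the channelId and participantRole property names and the .clinician/.patient cases are assumed from the SDK's usual mapping:

import AWSTranscribe

// Hypothetical sketch: clinician on the first channel, patient on the second.
let clinicianChannel = AWSTranscribeMedicalScribeChannelDefinition()
clinicianChannel?.channelId = NSNumber(value: 0)
clinicianChannel?.participantRole = .clinician

let patientChannel = AWSTranscribeMedicalScribeChannelDefinition()
patientChannel?.channelId = NSNumber(value: 1)
patientChannel?.participantRole = .patient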
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeChannelDefinition
Swift
class AWSTranscribeMedicalScribeChannelDefinition
-
Provides detailed information about a Medical Scribe job.
To view the status of the specified Medical Scribe job, check the MedicalScribeJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the locations specified in MedicalScribeOutput. If the status is FAILED, FailureReason provides details on why your Medical Scribe job failed.
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeJob
Swift
class AWSTranscribeMedicalScribeJob
-
Provides detailed information about a specific Medical Scribe job.
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeJobSummary
Swift
class AWSTranscribeMedicalScribeJobSummary
-
The location of the output of your Medical Scribe job.
ClinicalDocumentUri holds the Amazon S3 URI for the Clinical Document and TranscriptFileUri holds the Amazon S3 URI for the Transcript.
Required parameters: [TranscriptFileUri, ClinicalDocumentUri]
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeOutput
Swift
class AWSTranscribeMedicalScribeOutput
-
Makes it possible to control how your Medical Scribe job is processed using a MedicalScribeSettings object. Specify ChannelIdentification if ChannelDefinitions are set. Enable ShowSpeakerLabels if ChannelIdentification and ChannelDefinitions are not set. One and only one of ChannelIdentification and ShowSpeakerLabels must be set. If ShowSpeakerLabels is set, MaxSpeakerLabels must also be set. Use Settings to specify a vocabulary or vocabulary filter or both using VocabularyName and VocabularyFilterName. VocabularyFilterMethod must be specified if VocabularyFilterName is set.
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeSettings
Swift
class AWSTranscribeMedicalScribeSettings
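For illustration, a Swift sketch of the speaker-partitioning path described above; the showSpeakerLabels and maxSpeakerLabels property names are assumed from the SDK's usual mapping:

import AWSTranscribe

// Hypothetical sketch: partition speakers instead of using channel identification.
// Exactly one of ShowSpeakerLabels and ChannelIdentification may be enabled.
let scribeSettings = AWSTranscribeMedicalScribeSettings()
scribeSettings?.showSpeakerLabels = NSNumber(value: true) // requires maxSpeakerLabels
scribeSettings?.maxSpeakerLabels = NSNumber(value: 2)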
-
Provides you with the Amazon S3 URI you can use to access your transcript.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscript
Swift
class AWSTranscribeMedicalTranscript
-
Provides detailed information about a medical transcription job.
To view the status of the specified medical transcription job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscriptionJob
Swift
class AWSTranscribeMedicalTranscriptionJob
-
Provides detailed information about a specific medical transcription job.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscriptionJobSummary
Swift
class AWSTranscribeMedicalTranscriptionJobSummary
-
Allows additional optional settings in your request, including channel identification, alternative transcriptions, and speaker partitioning. You can use this parameter to apply custom vocabularies to your medical transcription job.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscriptionSetting
Swift
class AWSTranscribeMedicalTranscriptionSetting
-
Provides the name of the custom language model that was included in the specified transcription job.
Only use ModelSettings with the LanguageModelName sub-parameter if you're not using automatic language identification (IdentifyLanguage). If using LanguageIdSettings in your request, this parameter contains a LanguageModelName sub-parameter.
Declaration
Objective-C
@interface AWSTranscribeModelSettings
Swift
class AWSTranscribeModelSettings
-
Flag the presence or absence of periods of silence in your Call Analytics transcription output.
Rules using NonTalkTimeFilter are designed to match:
The presence of silence at specified periods throughout the call
The presence of speech at specified periods throughout the call
See Rule criteria for post-call categories for usage examples.
Declaration
Objective-C
@interface AWSTranscribeNonTalkTimeFilter
Swift
class AWSTranscribeNonTalkTimeFilter
-
A time range, in percentage, between two points in your media file.
You can use
StartPercentageandEndPercentageto search a custom segment. For example, settingStartPercentageto 10 andEndPercentageto 50 only searches for your specified criteria in the audio contained between the 10 percent mark and the 50 percent mark of your media file.You can use also
Firstto search from the start of the media file until the time that you specify. Or useLastto search from the time that you specify until the end of the media file. For example, settingFirstto 10 only searches for your specified criteria in the audio contained in the first 10 percent of the media file.If you prefer to use milliseconds instead of percentage, see .
See moreDeclaration
Objective-C
@interface AWSTranscribeRelativeTimeRange
Swift
class AWSTranscribeRelativeTimeRange
-
A rule is a set of criteria that you can specify to flag an attribute in your Call Analytics output. Rules define a Call Analytics category.
Rules can include these parameters: InterruptionFilter, NonTalkTimeFilter, SentimentFilter, and TranscriptFilter.
To learn more about Call Analytics rules and categories, see Creating categories for post-call transcriptions and Creating categories for real-time transcriptions.
To learn more about Call Analytics, see Analyzing call center audio with Call Analytics.
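For illustration only, a Swift sketch of wiring a single rule into a category request; the interruptionFilter, threshold, categoryName, and rules property names are assumptions about the generated API rather than confirmed signatures:

import AWSTranscribe

// Hypothetical sketch: a category with one rule that flags lengthy interruptions.
let interruption = AWSTranscribeInterruptionFilter()
interruption?.threshold = NSNumber(value: 10_000) // assumed field: interruption time, in milliseconds

let rule = AWSTranscribeRule()
rule?.interruptionFilter = interruption

let request = AWSTranscribeCreateCallAnalyticsCategoryRequest()
request?.categoryName = "interruptions-category"  // placeholder name
if let rule = rule {
    request?.rules = [rule]
}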
Declaration
Objective-C
@interface AWSTranscribeRule
Swift
class AWSTranscribeRule
-
Flag the presence or absence of specific sentiments detected in your Call Analytics transcription output.
Rules using SentimentFilter are designed to match:
The presence or absence of a positive sentiment felt by the customer, agent, or both at specified points in the call
The presence or absence of a negative sentiment felt by the customer, agent, or both at specified points in the call
The presence or absence of a neutral sentiment felt by the customer, agent, or both at specified points in the call
The presence or absence of a mixed sentiment felt by the customer, the agent, or both at specified points in the call
See Rule criteria for post-call categories for usage examples.
Required parameters: [Sentiments]
Declaration
Objective-C
@interface AWSTranscribeSentimentFilter
Swift
class AWSTranscribeSentimentFilter
-
Allows additional optional settings in your request, including channel identification, alternative transcriptions, and speaker partitioning. You can use this parameter to apply custom vocabularies to your transcription job.
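For illustration, a minimal Swift sketch of common settings; the showSpeakerLabels, maxSpeakerLabels, and vocabularyName property names are assumed from the SDK's usual lowerCamelCase mapping:

import AWSTranscribe

// Hypothetical sketch: partition two speakers and apply a custom vocabulary.
let settings = AWSTranscribeSettings()
settings?.showSpeakerLabels = NSNumber(value: true) // requires maxSpeakerLabels
settings?.maxSpeakerLabels = NSNumber(value: 2)
settings?.vocabularyName = "my-custom-vocabulary"   // placeholder name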
Declaration
Objective-C
@interface AWSTranscribeSettings
Swift
class AWSTranscribeSettings
-
Declaration
Objective-C
@interface AWSTranscribeStartCallAnalyticsJobRequest
Swift
class AWSTranscribeStartCallAnalyticsJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartCallAnalyticsJobResponse
Swift
class AWSTranscribeStartCallAnalyticsJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalScribeJobRequest
Swift
class AWSTranscribeStartMedicalScribeJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalScribeJobResponse
Swift
class AWSTranscribeStartMedicalScribeJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalTranscriptionJobRequest
Swift
class AWSTranscribeStartMedicalTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalTranscriptionJobResponse
Swift
class AWSTranscribeStartMedicalTranscriptionJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeStartTranscriptionJobRequest
Swift
class AWSTranscribeStartTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartTranscriptionJobResponse
Swift
class AWSTranscribeStartTranscriptionJobResponse
-
Generate subtitles for your media file with your transcription request.
You can choose a start index of 0 or 1, and you can specify either WebVTT or SubRip (or both) as your output format.
Note that your subtitle files are placed in the same location as your transcription output.
Declaration
Objective-C
@interface AWSTranscribeSubtitles
Swift
class AWSTranscribeSubtitles
-
Provides information about your subtitle file, including format, start index, and Amazon S3 location.
Declaration
Objective-C
@interface AWSTranscribeSubtitlesOutput
Swift
class AWSTranscribeSubtitlesOutput
-
Contains GenerateAbstractiveSummary, which is a required parameter if you want to enable Generative call summarization in your Call Analytics request.
Required parameters: [GenerateAbstractiveSummary]
Declaration
Objective-C
@interface AWSTranscribeSummarization
Swift
class AWSTranscribeSummarization
-
Adds metadata, in the form of a key:value pair, to the specified resource.
For example, you could add the tag Department:Sales to a resource to indicate that it pertains to your organization's sales department. You can also use tags for tag-based access control.
To learn more about tagging, see Tagging resources.
Required parameters: [Key, Value]
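For illustration, a minimal Swift sketch of the Department:Sales example; the key and value property names, and the tags array on the job request, are assumed from the SDK's usual mapping:

import AWSTranscribe

// Hypothetical sketch: tag a transcription job with Department:Sales.
let tag = AWSTranscribeTag()
tag?.key = "Department"
tag?.value = "Sales"

let request = AWSTranscribeStartTranscriptionJobRequest()
if let tag = tag {
    request?.tags = [tag]
}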
Declaration
Objective-C
@interface AWSTranscribeTag
Swift
class AWSTranscribeTag
-
Declaration
Objective-C
@interface AWSTranscribeTagResourceRequest
Swift
class AWSTranscribeTagResourceRequest
-
Declaration
Objective-C
@interface AWSTranscribeTagResourceResponse
Swift
class AWSTranscribeTagResourceResponse
-
Contains ToxicityCategories, which is a required parameter if you want to enable toxicity detection (ToxicityDetection) in your transcription request.
Required parameters: [ToxicityCategories]
Declaration
Objective-C
@interface AWSTranscribeToxicityDetectionSettings
Swift
class AWSTranscribeToxicityDetectionSettings
-
Provides you with the Amazon S3 URI you can use to access your transcript.
Declaration
Objective-C
@interface AWSTranscribeTranscript
Swift
class AWSTranscribeTranscript
-
Flag the presence or absence of specific words or phrases detected in your Call Analytics transcription output.
Rules using TranscriptFilter are designed to match:
Custom words or phrases spoken by the agent, the customer, or both
Custom words or phrases not spoken by the agent, the customer, or either
Custom words or phrases that occur at a specific time frame
See Rule criteria for post-call categories and Rule criteria for streaming categories for usage examples.
Required parameters: [TranscriptFilterType, Targets]
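For illustration, a minimal Swift sketch of a filter that looks for an exact phrase; the transcriptFilterType and targets property names and the .exact case are assumed from the SDK's usual mapping:

import AWSTranscribe

// Hypothetical sketch: flag calls in which a specific phrase is spoken.
let transcriptFilter = AWSTranscribeTranscriptFilter()
transcriptFilter?.transcriptFilterType = .exact        // TranscriptFilterType (required)
transcriptFilter?.targets = ["cancel my subscription"] // Targets (required)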
Declaration
Objective-C
@interface AWSTranscribeTranscriptFilter
Swift
class AWSTranscribeTranscriptFilter
-
Provides detailed information about a transcription job.
To view the status of the specified transcription job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.
If you enabled content redaction, the redacted transcript can be found at the location specified in RedactedTranscriptFileUri.
Declaration
Objective-C
@interface AWSTranscribeTranscriptionJob
Swift
class AWSTranscribeTranscriptionJob
-
Provides detailed information about a specific transcription job.
Declaration
Objective-C
@interface AWSTranscribeTranscriptionJobSummary
Swift
class AWSTranscribeTranscriptionJobSummary
-
Declaration
Objective-C
@interface AWSTranscribeUntagResourceRequest
Swift
class AWSTranscribeUntagResourceRequest
-
Declaration
Objective-C
@interface AWSTranscribeUntagResourceResponse
Swift
class AWSTranscribeUntagResourceResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateCallAnalyticsCategoryRequest
Swift
class AWSTranscribeUpdateCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateCallAnalyticsCategoryResponse
Swift
class AWSTranscribeUpdateCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateMedicalVocabularyRequest
Swift
class AWSTranscribeUpdateMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateMedicalVocabularyResponse
Swift
class AWSTranscribeUpdateMedicalVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyFilterRequest
Swift
class AWSTranscribeUpdateVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyFilterResponse
Swift
class AWSTranscribeUpdateVocabularyFilterResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyRequest
Swift
class AWSTranscribeUpdateVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyResponse
Swift
class AWSTranscribeUpdateVocabularyResponse
-
Provides information about a custom vocabulary filter, including the language of the filter, when it was last modified, and its name.
Declaration
Objective-C
@interface AWSTranscribeVocabularyFilterInfo
Swift
class AWSTranscribeVocabularyFilterInfo
-
Provides information about a custom vocabulary, including the language of the custom vocabulary, when it was last modified, its name, and the processing state.
Declaration
Objective-C
@interface AWSTranscribeVocabularyInfo
Swift
class AWSTranscribeVocabularyInfo
-
Undocumented
Declaration
Objective-C
@interface AWSTranscribeResources : NSObject

+ (instancetype)sharedInstance;

- (NSDictionary *)JSONObject;

@end
Swift
class AWSTranscribeResources : NSObject
-
Amazon Transcribe offers three main types of batch transcription: Standard, Medical, and Call Analytics.
Standard transcriptions are the most common option. Refer to StartTranscriptionJob for details.
Medical transcriptions are tailored to medical professionals and incorporate medical terms. A common use case for this service is transcribing doctor-patient dialogue into after-visit notes. Refer to StartMedicalTranscriptionJob for details.
Call Analytics transcriptions are designed for use with call center audio on two different channels; if you're looking for insight into customer service calls, use this option. Refer to StartCallAnalyticsJob for details.
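For illustration, a hedged end-to-end Swift sketch of the Standard path using the AWSTranscribe client; the job name, bucket URIs, and output bucket are placeholders, the property names follow the SDK's usual lowerCamelCase mapping of the StartTranscriptionJob request fields, and a default AWSServiceConfiguration is assumed to be registered:

import AWSTranscribe

// Hypothetical sketch: start a standard transcription job for an en-US media file stored in S3.
let media = AWSTranscribeMedia()
media?.mediaFileUri = "s3://DOC-EXAMPLE-BUCKET/my-media-file.mp3"

let request = AWSTranscribeStartTranscriptionJobRequest()
request?.transcriptionJobName = "my-first-transcription-job" // placeholder; must be unique
request?.languageCode = .enUS
request?.media = media
request?.outputBucketName = "DOC-EXAMPLE-OUTPUT-BUCKET"

if let request = request {
    AWSTranscribe.default().startTranscriptionJob(request).continueWith { task in
        if let error = task.error {
            print("Failed to start job: \(error)")
        } else if let job = task.result?.transcriptionJob {
            print("Started job: \(job.transcriptionJobName ?? "")")
        }
        return nil
    }
}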
Declaration
Objective-C
@interface AWSTranscribe
Swift
class AWSTranscribe