Classes
The following classes are available globally.
-
A time range, in milliseconds, between two points in your media file.
You can use StartTime and EndTime to search a custom segment. For example, setting StartTime to 10000 and EndTime to 50000 only searches for your specified criteria in the audio contained between the 10,000 millisecond mark and the 50,000 millisecond mark of your media file. You must use StartTime and EndTime as a set; that is, if you include one, you must include both.
You can also use First to search from the start of the audio until the time that you specify, or Last to search from the time that you specify until the end of the audio. For example, setting First to 50000 only searches for your specified criteria in the audio contained between the start of the media file and the 50,000 millisecond mark. You can use First and Last independently of each other.
If you prefer to use percentage instead of milliseconds, see AWSTranscribeRelativeTimeRange.
Declaration
Objective-C
@interface AWSTranscribeAbsoluteTimeRange
Swift
class AWSTranscribeAbsoluteTimeRange
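A minimal Swift sketch of the StartTime/EndTime example above, assuming the SDK surfaces these members as lowercased optional NSNumber properties (startTime, endTime):

import AWSTranscribe

// Hypothetical sketch: search only the 10,000 ms to 50,000 ms span of the media file.
// startTime and endTime must be set as a pair.
let timeRange = AWSTranscribeAbsoluteTimeRange()
timeRange?.startTime = 10_000
timeRange?.endTime = 50_000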
-
Provides detailed information about a Call Analytics job.
To view the job’s status, refer to CallAnalyticsJobStatus. If the status is COMPLETED, the job is finished. You can find your completed transcript at the URI specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.
If you enabled personally identifiable information (PII) redaction, the redacted transcript appears at the location specified in RedactedTranscriptFileUri.
If you chose to redact the audio in your media file, you can find your redacted media file at the location specified in the RedactedMediaFileUri field of your response.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJob
Swift
class AWSTranscribeCallAnalyticsJob
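As an illustrative sketch of how a returned job might be inspected, assuming the status and transcript members follow the SDK’s lowercased property convention (callAnalyticsJobStatus, transcript, failureReason):

import AWSTranscribe

// Hypothetical helper: report where the results of a finished job can be found.
func report(job: AWSTranscribeCallAnalyticsJob) {
    switch job.callAnalyticsJobStatus {
    case .completed:
        // Redacted output, if enabled, appears at redactedTranscriptFileUri instead.
        print("Transcript at:", job.transcript?.transcriptFileUri ?? "unknown")
    case .failed:
        print("Job failed:", job.failureReason ?? "no reason given")
    default:
        print("Job still in progress")
    }
}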
-
Contains details about a call analytics job, including information about skipped analytics features.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJobDetails
Swift
class AWSTranscribeCallAnalyticsJobDetails
-
Provides additional optional settings for your request, including content redaction and automatic language identification; it also allows you to apply custom language models, custom vocabulary filters, and custom vocabularies.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJobSettings
Swift
class AWSTranscribeCallAnalyticsJobSettings
-
Provides detailed information about a specific Call Analytics job.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsJobSummary
Swift
class AWSTranscribeCallAnalyticsJobSummary
-
Represents a skipped analytics feature during the analysis of a call analytics job.
The Feature field indicates the type of analytics feature that was skipped.
The Message field contains additional information or a message explaining why the analytics feature was skipped.
The ReasonCode field provides a code indicating the reason why the analytics feature was skipped.
Declaration
Objective-C
@interface AWSTranscribeCallAnalyticsSkippedFeature
Swift
class AWSTranscribeCallAnalyticsSkippedFeature
-
Provides you with the properties of the Call Analytics category you specified in your request. This includes the list of rules that define the specified category.
Declaration
Objective-C
@interface AWSTranscribeCategoryProperties
Swift
class AWSTranscribeCategoryProperties
-
Makes it possible to specify which speaker is on which channel. For example, if your agent is the first participant to speak, you would set ChannelId to 0 (to indicate the first channel) and ParticipantRole to AGENT (to indicate that it’s the agent speaking).
Declaration
Objective-C
@interface AWSTranscribeChannelDefinition
Swift
class AWSTranscribeChannelDefinition
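A short sketch of the example above, assuming the SDK exposes channelId as an NSNumber property and participantRole as an AWSTranscribeParticipantRole enum with an .agent case:

import AWSTranscribe

// Hypothetical sketch: the agent speaks first, so channel 0 is the agent.
let agentChannel = AWSTranscribeChannelDefinition()
agentChannel?.channelId = 0
agentChannel?.participantRole = .agent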
-
Makes it possible to redact or flag specified personally identifiable information (PII) in your transcript. If you use ContentRedaction, you must also include the sub-parameters: RedactionOutput and RedactionType. You can optionally include PiiEntityTypes to choose which types of PII you want to redact.
Required parameters: [RedactionType, RedactionOutput]
Declaration
Objective-C
@interface AWSTranscribeContentRedaction
Swift
class AWSTranscribeContentRedaction
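A minimal sketch setting the two required sub-parameters, assuming the Swift enum cases mirror the API values PII and REDACTED:

import AWSTranscribe

// Hypothetical sketch: redact all PII, returning only the redacted transcript.
let redaction = AWSTranscribeContentRedaction()
redaction?.redactionType = .pii        // RedactionType (required)
redaction?.redactionOutput = .redacted // RedactionOutput (required)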
-
Declaration
Objective-C
@interface AWSTranscribeCreateCallAnalyticsCategoryRequest
Swift
class AWSTranscribeCreateCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateCallAnalyticsCategoryResponse
Swift
class AWSTranscribeCreateCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateLanguageModelRequest
Swift
class AWSTranscribeCreateLanguageModelRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateLanguageModelResponse
Swift
class AWSTranscribeCreateLanguageModelResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateMedicalVocabularyRequest
Swift
class AWSTranscribeCreateMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateMedicalVocabularyResponse
Swift
class AWSTranscribeCreateMedicalVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyFilterRequest
Swift
class AWSTranscribeCreateVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyFilterResponse
Swift
class AWSTranscribeCreateVocabularyFilterResponse
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyRequest
Swift
class AWSTranscribeCreateVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeCreateVocabularyResponse
Swift
class AWSTranscribeCreateVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsCategoryRequest
Swift
class AWSTranscribeDeleteCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsCategoryResponse
Swift
class AWSTranscribeDeleteCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsJobRequest
Swift
class AWSTranscribeDeleteCallAnalyticsJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteCallAnalyticsJobResponse
Swift
class AWSTranscribeDeleteCallAnalyticsJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeDeleteLanguageModelRequest
Swift
class AWSTranscribeDeleteLanguageModelRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteMedicalScribeJobRequest
Swift
class AWSTranscribeDeleteMedicalScribeJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteMedicalTranscriptionJobRequest
Swift
class AWSTranscribeDeleteMedicalTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteMedicalVocabularyRequest
Swift
class AWSTranscribeDeleteMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteTranscriptionJobRequest
Swift
class AWSTranscribeDeleteTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteVocabularyFilterRequest
Swift
class AWSTranscribeDeleteVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeDeleteVocabularyRequest
Swift
class AWSTranscribeDeleteVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeDescribeLanguageModelRequest
Swift
class AWSTranscribeDescribeLanguageModelRequest
-
Declaration
Objective-C
@interface AWSTranscribeDescribeLanguageModelResponse
Swift
class AWSTranscribeDescribeLanguageModelResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsCategoryRequest
Swift
class AWSTranscribeGetCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsCategoryResponse
Swift
class AWSTranscribeGetCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsJobRequest
Swift
class AWSTranscribeGetCallAnalyticsJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetCallAnalyticsJobResponse
Swift
class AWSTranscribeGetCallAnalyticsJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalScribeJobRequest
Swift
class AWSTranscribeGetMedicalScribeJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalScribeJobResponse
Swift
class AWSTranscribeGetMedicalScribeJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalTranscriptionJobRequest
Swift
class AWSTranscribeGetMedicalTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalTranscriptionJobResponse
Swift
class AWSTranscribeGetMedicalTranscriptionJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalVocabularyRequest
Swift
class AWSTranscribeGetMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetMedicalVocabularyResponse
Swift
class AWSTranscribeGetMedicalVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetTranscriptionJobRequest
Swift
class AWSTranscribeGetTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetTranscriptionJobResponse
Swift
class AWSTranscribeGetTranscriptionJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyFilterRequest
Swift
class AWSTranscribeGetVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyFilterResponse
Swift
class AWSTranscribeGetVocabularyFilterResponse
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyRequest
Swift
class AWSTranscribeGetVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeGetVocabularyResponse
Swift
class AWSTranscribeGetVocabularyResponse
-
Contains the Amazon S3 location of the training data you want to use to create a new custom language model, and permissions to access this location.
When using InputDataConfig, you must include these sub-parameters: S3Uri and DataAccessRoleArn. You can optionally include TuningDataS3Uri.
Required parameters: [S3Uri, DataAccessRoleArn]
Declaration
Objective-C
@interface AWSTranscribeInputDataConfig
Swift
class AWSTranscribeInputDataConfig
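A minimal sketch of the required and optional members; the bucket names and role ARN below are placeholders, not values from this reference:

import AWSTranscribe

// Hypothetical sketch: training data plus optional tuning data for a custom language model.
let inputData = AWSTranscribeInputDataConfig()
inputData?.s3Uri = "s3://amzn-s3-demo-bucket/training-data/"                 // required
inputData?.dataAccessRoleArn = "arn:aws:iam::111122223333:role/ExampleRole"  // required
inputData?.tuningDataS3Uri = "s3://amzn-s3-demo-bucket/tuning-data/"         // optional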
-
Flag the presence or absence of interruptions in your Call Analytics transcription output.
Rules using InterruptionFilter are designed to match:
Instances where an agent interrupts a customer
Instances where a customer interrupts an agent
Either participant interrupting the other
A lack of interruptions
See Rule criteria for post-call categories for usage examples.
Declaration
Objective-C
@interface AWSTranscribeInterruptionFilter
Swift
class AWSTranscribeInterruptionFilter
-
Makes it possible to control how your transcription job is processed. Currently, the only JobExecutionSettings modification you can choose is enabling job queueing using the AllowDeferredExecution sub-parameter.
If you include JobExecutionSettings in your request, you must also include the sub-parameters: AllowDeferredExecution and DataAccessRoleArn.
Declaration
Objective-C
@interface AWSTranscribeJobExecutionSettings
Swift
class AWSTranscribeJobExecutionSettings
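A minimal sketch of queueing, assuming allowDeferredExecution is an NSNumber-backed boolean; the role ARN is a placeholder:

import AWSTranscribe

// Hypothetical sketch: queue the job instead of failing when the concurrent-job limit is reached.
let execution = AWSTranscribeJobExecutionSettings()
execution?.allowDeferredExecution = true
execution?.dataAccessRoleArn = "arn:aws:iam::111122223333:role/ExampleRole"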
-
Provides information on the speech contained in a discrete utterance when multi-language identification is enabled in your request. This utterance represents a block of speech consisting of one language, preceded or followed by a block of speech in a different language.
Declaration
Objective-C
@interface AWSTranscribeLanguageCodeItem
Swift
class AWSTranscribeLanguageCodeItem
-
If using automatic language identification in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName). Note that multi-language identification (IdentifyMultipleLanguages) doesn’t support custom language models.
LanguageIdSettings supports two to five language codes. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The language codes that you specify must match the languages of the associated custom language models, custom vocabularies, and custom vocabulary filters.
It’s recommended that you include LanguageOptions when using LanguageIdSettings to ensure that the correct language dialect is identified. For example, if you specify a custom vocabulary that is in en-US but Amazon Transcribe determines that the language spoken in your media is en-AU, your custom vocabulary is not applied to your transcription. If you include LanguageOptions and include en-US as the only English language dialect, your custom vocabulary is applied to your transcription.
If you want to include a custom language model with your request but do not want to use automatic language identification, use instead the ModelSettings parameter with the LanguageModelName sub-parameter. If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use instead the Settings parameter with the VocabularyName or VocabularyFilterName (or both) sub-parameter.
Declaration
Objective-C
@interface AWSTranscribeLanguageIdSettings
Swift
class AWSTranscribeLanguageIdSettings
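A sketch of pairing a custom vocabulary with language identification. It assumes the request surfaces LanguageIdSettings as a dictionary keyed by language-code string and LanguageOptions as an array of strings, and the vocabulary names are placeholders:

import AWSTranscribe

// Hypothetical sketch: apply an en-US vocabulary and filter when en-US is identified.
let enUSSettings = AWSTranscribeLanguageIdSettings()
enUSSettings?.vocabularyName = "my-en-us-vocabulary"
enUSSettings?.vocabularyFilterName = "my-en-us-filter"

let request = AWSTranscribeStartTranscriptionJobRequest()
request?.identifyLanguage = true
// Restrict the candidate dialects so the right one is matched to the vocabulary.
request?.languageOptions = ["en-US", "en-AU"]
if let enUSSettings = enUSSettings {
    request?.languageIdSettings = ["en-US": enUSSettings]
}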
-
Provides information about a custom language model, including:
The base model name
When the model was created
The location of the files used to train the model
When the model was last modified
The name you chose for the model
The model’s language
The model’s processing state
Any available upgrades for the base model
Declaration
Objective-C
@interface AWSTranscribeLanguageModel
Swift
class AWSTranscribeLanguageModel
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsCategoriesRequest
Swift
class AWSTranscribeListCallAnalyticsCategoriesRequest
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsCategoriesResponse
Swift
class AWSTranscribeListCallAnalyticsCategoriesResponse
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsJobsRequest
Swift
class AWSTranscribeListCallAnalyticsJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListCallAnalyticsJobsResponse
Swift
class AWSTranscribeListCallAnalyticsJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListLanguageModelsRequest
Swift
class AWSTranscribeListLanguageModelsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListLanguageModelsResponse
Swift
class AWSTranscribeListLanguageModelsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalScribeJobsRequest
Swift
class AWSTranscribeListMedicalScribeJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalScribeJobsResponse
Swift
class AWSTranscribeListMedicalScribeJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalTranscriptionJobsRequest
Swift
class AWSTranscribeListMedicalTranscriptionJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalTranscriptionJobsResponse
Swift
class AWSTranscribeListMedicalTranscriptionJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalVocabulariesRequest
Swift
class AWSTranscribeListMedicalVocabulariesRequest
-
Declaration
Objective-C
@interface AWSTranscribeListMedicalVocabulariesResponse
Swift
class AWSTranscribeListMedicalVocabulariesResponse
-
Declaration
Objective-C
@interface AWSTranscribeListTagsForResourceRequest
Swift
class AWSTranscribeListTagsForResourceRequest
-
Declaration
Objective-C
@interface AWSTranscribeListTagsForResourceResponse
Swift
class AWSTranscribeListTagsForResourceResponse
-
Declaration
Objective-C
@interface AWSTranscribeListTranscriptionJobsRequest
Swift
class AWSTranscribeListTranscriptionJobsRequest
-
Declaration
Objective-C
@interface AWSTranscribeListTranscriptionJobsResponse
Swift
class AWSTranscribeListTranscriptionJobsResponse
-
Declaration
Objective-C
@interface AWSTranscribeListVocabulariesRequest
Swift
class AWSTranscribeListVocabulariesRequest
-
Declaration
Objective-C
@interface AWSTranscribeListVocabulariesResponse
Swift
class AWSTranscribeListVocabulariesResponse
-
Declaration
Objective-C
@interface AWSTranscribeListVocabularyFiltersRequest
Swift
class AWSTranscribeListVocabularyFiltersRequest
-
Declaration
Objective-C
@interface AWSTranscribeListVocabularyFiltersResponse
Swift
class AWSTranscribeListVocabularyFiltersResponse
-
Describes the Amazon S3 location of the media file you want to use in your request.
For information on supported media formats, refer to the MediaFormat parameter or the Media formats section in the Amazon S3 Developer Guide.
Declaration
Objective-C
@interface AWSTranscribeMedia
Swift
class AWSTranscribeMedia
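A one-line sketch, with a placeholder bucket and file name:

import AWSTranscribe

// Hypothetical sketch: point the request at a media file in S3.
let media = AWSTranscribeMedia()
media?.mediaFileUri = "s3://amzn-s3-demo-bucket/my-media-file.flac"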
-
Indicates which speaker is on which channel. The options are CLINICIAN and PATIENT.
Required parameters: [ChannelId, ParticipantRole]
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeChannelDefinition
Swift
class AWSTranscribeMedicalScribeChannelDefinition
-
Provides detailed information about a Medical Scribe job.
To view the status of the specified Medical Scribe job, check the MedicalScribeJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the locations specified in MedicalScribeOutput. If the status is FAILED, FailureReason provides details on why your Medical Scribe job failed.
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeJob
Swift
class AWSTranscribeMedicalScribeJob
-
Provides detailed information about a specific Medical Scribe job.
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeJobSummary
Swift
class AWSTranscribeMedicalScribeJobSummary
-
The location of the output of your Medical Scribe job.
ClinicalDocumentUri holds the Amazon S3 URI for the Clinical Document, and TranscriptFileUri holds the Amazon S3 URI for the Transcript.
Required parameters: [TranscriptFileUri, ClinicalDocumentUri]
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeOutput
Swift
class AWSTranscribeMedicalScribeOutput
-
Makes it possible to control how your Medical Scribe job is processed using a MedicalScribeSettings object. Specify ChannelIdentification if ChannelDefinitions are set. Enable ShowSpeakerLabels if ChannelIdentification and ChannelDefinitions are not set. One and only one of ChannelIdentification and ShowSpeakerLabels must be set. If ShowSpeakerLabels is set, MaxSpeakerLabels must also be set. Use Settings to specify a vocabulary or vocabulary filter or both using VocabularyName and VocabularyFilterName. VocabularyFilterMethod must be specified if VocabularyFilterName is set.
Declaration
Objective-C
@interface AWSTranscribeMedicalScribeSettings
Swift
class AWSTranscribeMedicalScribeSettings
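A minimal sketch of the speaker-partitioning branch of the constraints above, assuming NSNumber-backed members:

import AWSTranscribe

// Hypothetical sketch: speaker partitioning instead of channel identification.
// Exactly one of ShowSpeakerLabels and ChannelIdentification may be enabled,
// and MaxSpeakerLabels is required whenever ShowSpeakerLabels is set.
let scribeSettings = AWSTranscribeMedicalScribeSettings()
scribeSettings?.showSpeakerLabels = true
scribeSettings?.maxSpeakerLabels = 2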
-
Provides you with the Amazon S3 URI you can use to access your transcript.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscript
Swift
class AWSTranscribeMedicalTranscript
-
Provides detailed information about a medical transcription job.
To view the status of the specified medical transcription job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscriptionJob
Swift
class AWSTranscribeMedicalTranscriptionJob
-
Provides detailed information about a specific medical transcription job.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscriptionJobSummary
Swift
class AWSTranscribeMedicalTranscriptionJobSummary
-
Allows additional optional settings in your request, including channel identification, alternative transcriptions, and speaker partitioning. You can also use it to apply custom vocabularies to your medical transcription job.
Declaration
Objective-C
@interface AWSTranscribeMedicalTranscriptionSetting
Swift
class AWSTranscribeMedicalTranscriptionSetting
-
Provides the name of the custom language model that was included in the specified transcription job.
Only use ModelSettings with the LanguageModelName sub-parameter if you’re not using automatic language identification (IdentifyLanguage). If using LanguageIdSettings in your request, this parameter contains a LanguageModelName sub-parameter.
Declaration
Objective-C
@interface AWSTranscribeModelSettings
Swift
class AWSTranscribeModelSettings
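A one-line sketch, with a placeholder model name:

import AWSTranscribe

// Hypothetical sketch: apply a custom language model without language identification.
let modelSettings = AWSTranscribeModelSettings()
modelSettings?.languageModelName = "my-custom-language-model"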
-
Flag the presence or absence of periods of silence in your Call Analytics transcription output.
Rules using NonTalkTimeFilter are designed to match:
The presence of silence at specified periods throughout the call
The presence of speech at specified periods throughout the call
See Rule criteria for post-call categories for usage examples.
Declaration
Objective-C
@interface AWSTranscribeNonTalkTimeFilter
Swift
class AWSTranscribeNonTalkTimeFilter
-
A time range, in percentage, between two points in your media file.
You can use StartPercentage and EndPercentage to search a custom segment. For example, setting StartPercentage to 10 and EndPercentage to 50 only searches for your specified criteria in the audio contained between the 10 percent mark and the 50 percent mark of your media file.
You can also use First to search from the start of the media file until the time that you specify, or Last to search from the time that you specify until the end of the media file. For example, setting First to 10 only searches for your specified criteria in the audio contained in the first 10 percent of the media file.
If you prefer to use milliseconds instead of percentage, see AWSTranscribeAbsoluteTimeRange.
Declaration
Objective-C
@interface AWSTranscribeRelativeTimeRange
Swift
class AWSTranscribeRelativeTimeRange
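A one-line sketch of the First example above, assuming an NSNumber-backed first property:

import AWSTranscribe

// Hypothetical sketch: search only the first 10 percent of the media file.
let relativeRange = AWSTranscribeRelativeTimeRange()
relativeRange?.first = 10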
-
A rule is a set of criteria that you can specify to flag an attribute in your Call Analytics output. Rules define a Call Analytics category.
Rules can include these parameters: NonTalkTimeFilter, InterruptionFilter, TranscriptFilter, and SentimentFilter.
To learn more about Call Analytics rules and categories, see Creating categories for post-call transcriptions and Creating categories for real-time transcriptions.
To learn more about Call Analytics, see Analyzing call center audio with Call Analytics.
Declaration
Objective-C
@interface AWSTranscribeRule
Swift
class AWSTranscribeRule
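A sketch of a rule built from one of those parameters. The enum case, property names, and phrase are illustrative assumptions:

import AWSTranscribe

// Hypothetical sketch: flag calls where the agent never says a required phrase.
let filter = AWSTranscribeTranscriptFilter()
filter?.transcriptFilterType = .exact       // TranscriptFilterType (required)
filter?.targets = ["thank you for calling"] // Targets (required)
filter?.negate = true                       // match the absence of the phrase
filter?.participantRole = .agent

let rule = AWSTranscribeRule()
rule?.transcriptFilter = filter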
-
Flag the presence or absence of specific sentiments detected in your Call Analytics transcription output.
Rules using SentimentFilter are designed to match:
The presence or absence of a positive sentiment felt by the customer, agent, or both at specified points in the call
The presence or absence of a negative sentiment felt by the customer, agent, or both at specified points in the call
The presence or absence of a neutral sentiment felt by the customer, agent, or both at specified points in the call
The presence or absence of a mixed sentiment felt by the customer, the agent, or both at specified points in the call
See Rule criteria for post-call categories for usage examples.
Required parameters: [Sentiments]
Declaration
Objective-C
@interface AWSTranscribeSentimentFilter
Swift
class AWSTranscribeSentimentFilter
-
Allows additional optional settings in your request, including channel identification, alternative transcriptions, and speaker partitioning. You can also use it to apply custom vocabularies to your transcription job.
Declaration
Objective-C
@interface AWSTranscribeSettings
Swift
class AWSTranscribeSettings
-
Declaration
Objective-C
@interface AWSTranscribeStartCallAnalyticsJobRequest
Swift
class AWSTranscribeStartCallAnalyticsJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartCallAnalyticsJobResponse
Swift
class AWSTranscribeStartCallAnalyticsJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalScribeJobRequest
Swift
class AWSTranscribeStartMedicalScribeJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalScribeJobResponse
Swift
class AWSTranscribeStartMedicalScribeJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalTranscriptionJobRequest
Swift
class AWSTranscribeStartMedicalTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartMedicalTranscriptionJobResponse
Swift
class AWSTranscribeStartMedicalTranscriptionJobResponse
-
Declaration
Objective-C
@interface AWSTranscribeStartTranscriptionJobRequest
Swift
class AWSTranscribeStartTranscriptionJobRequest
-
Declaration
Objective-C
@interface AWSTranscribeStartTranscriptionJobResponse
Swift
class AWSTranscribeStartTranscriptionJobResponse
-
Generate subtitles for your media file with your transcription request.
You can choose a start index of 0 or 1, and you can specify either WebVTT or SubRip (or both) as your output format.
Note that your subtitle files are placed in the same location as your transcription output.
Declaration
Objective-C
@interface AWSTranscribeSubtitles
Swift
class AWSTranscribeSubtitles
-
Provides information about your subtitle file, including format, start index, and Amazon S3 location.
Declaration
Objective-C
@interface AWSTranscribeSubtitlesOutput
Swift
class AWSTranscribeSubtitlesOutput
-
Contains GenerateAbstractiveSummary, which is a required parameter if you want to enable Generative call summarization in your Call Analytics request.
Required parameters: [GenerateAbstractiveSummary]
Declaration
Objective-C
@interface AWSTranscribeSummarization
Swift
class AWSTranscribeSummarization
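A one-line sketch, assuming an NSNumber-backed boolean member:

import AWSTranscribe

// Hypothetical sketch: enable generative call summarization.
let summarization = AWSTranscribeSummarization()
summarization?.generateAbstractiveSummary = true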
-
Adds metadata, in the form of a key:value pair, to the specified resource.
For example, you could add the tag Department:Sales to a resource to indicate that it pertains to your organization’s sales department. You can also use tags for tag-based access control.
To learn more about tagging, see Tagging resources.
Required parameters: [Key, Value]
Declaration
Objective-C
@interface AWSTranscribeTag
Swift
class AWSTranscribeTag
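A sketch of the Department:Sales example above, assuming lowercased key and value properties:

import AWSTranscribe

// Hypothetical sketch: tag a resource as belonging to the sales department.
let tag = AWSTranscribeTag()
tag?.key = "Department"
tag?.value = "Sales"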
-
Declaration
Objective-C
@interface AWSTranscribeTagResourceRequest
Swift
class AWSTranscribeTagResourceRequest
-
Declaration
Objective-C
@interface AWSTranscribeTagResourceResponse
Swift
class AWSTranscribeTagResourceResponse
-
Contains ToxicityCategories, which is a required parameter if you want to enable toxicity detection (ToxicityDetection) in your transcription request.
Required parameters: [ToxicityCategories]
Declaration
Objective-C
@interface AWSTranscribeToxicityDetectionSettings
Swift
class AWSTranscribeToxicityDetectionSettings
-
Provides you with the Amazon S3 URI you can use to access your transcript.
Declaration
Objective-C
@interface AWSTranscribeTranscript
Swift
class AWSTranscribeTranscript
-
Flag the presence or absence of specific words or phrases detected in your Call Analytics transcription output.
Rules using TranscriptFilter are designed to match:
Custom words or phrases spoken by the agent, the customer, or both
Custom words or phrases not spoken by the agent, the customer, or either
Custom words or phrases that occur at a specific time frame
See Rule criteria for post-call categories and Rule criteria for streaming categories for usage examples.
Required parameters: [TranscriptFilterType, Targets]
Declaration
Objective-C
@interface AWSTranscribeTranscriptFilter
Swift
class AWSTranscribeTranscriptFilter
-
Provides detailed information about a transcription job.
To view the status of the specified transcription job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.
If you enabled content redaction, the redacted transcript can be found at the location specified in RedactedTranscriptFileUri.
Declaration
Objective-C
@interface AWSTranscribeTranscriptionJob
Swift
class AWSTranscribeTranscriptionJob
-
Provides detailed information about a specific transcription job.
Declaration
Objective-C
@interface AWSTranscribeTranscriptionJobSummary
Swift
class AWSTranscribeTranscriptionJobSummary
-
Declaration
Objective-C
@interface AWSTranscribeUntagResourceRequest
Swift
class AWSTranscribeUntagResourceRequest
-
Declaration
Objective-C
@interface AWSTranscribeUntagResourceResponse
Swift
class AWSTranscribeUntagResourceResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateCallAnalyticsCategoryRequest
Swift
class AWSTranscribeUpdateCallAnalyticsCategoryRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateCallAnalyticsCategoryResponse
Swift
class AWSTranscribeUpdateCallAnalyticsCategoryResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateMedicalVocabularyRequest
Swift
class AWSTranscribeUpdateMedicalVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateMedicalVocabularyResponse
Swift
class AWSTranscribeUpdateMedicalVocabularyResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyFilterRequest
Swift
class AWSTranscribeUpdateVocabularyFilterRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyFilterResponse
Swift
class AWSTranscribeUpdateVocabularyFilterResponse
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyRequest
Swift
class AWSTranscribeUpdateVocabularyRequest
-
Declaration
Objective-C
@interface AWSTranscribeUpdateVocabularyResponse
Swift
class AWSTranscribeUpdateVocabularyResponse
-
Provides information about a custom vocabulary filter, including the language of the filter, when it was last modified, and its name.
Declaration
Objective-C
@interface AWSTranscribeVocabularyFilterInfo
Swift
class AWSTranscribeVocabularyFilterInfo
-
Provides information about a custom vocabulary, including the language of the custom vocabulary, when it was last modified, its name, and the processing state.
Declaration
Objective-C
@interface AWSTranscribeVocabularyInfo
Swift
class AWSTranscribeVocabularyInfo
-
Undocumented
Declaration
Objective-C
@interface AWSTranscribeResources : NSObject

+ (instancetype)sharedInstance;

- (NSDictionary *)JSONObject;

@end
Swift
class AWSTranscribeResources : NSObject
-
Amazon Transcribe offers three main types of batch transcription: Standard, Medical, and Call Analytics.
Standard transcriptions are the most common option. Refer to StartTranscriptionJob for details.
Medical transcriptions are tailored to medical professionals and incorporate medical terms. A common use case for this service is transcribing doctor-patient dialogue into after-visit notes. Refer to StartMedicalTranscriptionJob for details.
Call Analytics transcriptions are designed for use with call center audio on two different channels; if you’re looking for insight into customer service calls, use this option. Refer to StartCallAnalyticsJob for details.
Declaration
Objective-C
@interface AWSTranscribe
Swift
class AWSTranscribe
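An illustrative sketch of the standard option. The client-access pattern (AWSTranscribe.default()), the AWSTask-based call, the job name, and the bucket URI follow common AWS iOS SDK conventions and are assumptions, not verbatim from this reference:

import AWSTranscribe

// Hypothetical sketch: start a standard batch transcription job.
func startStandardJob() {
    let media = AWSTranscribeMedia()
    media?.mediaFileUri = "s3://amzn-s3-demo-bucket/my-media-file.flac" // placeholder URI

    guard let request = AWSTranscribeStartTranscriptionJobRequest() else { return }
    request.transcriptionJobName = "my-first-transcription-job" // placeholder name
    request.languageCode = .enUS
    request.media = media

    AWSTranscribe.default().startTranscriptionJob(request).continueWith { (task) -> Any? in
        if let error = task.error {
            print("Failed to start job:", error)
        } else if let job = task.result?.transcriptionJob {
            print("Started job:", job.transcriptionJobName ?? "")
        }
        return nil
    }
}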