Classes
The following classes are available globally.
AWSTranscribeStreamingEventDecoder
Undocumented
Declaration
Objective-C
@interface AWSTranscribeStreamingEventDecoder : NSObject

/// Decodes a single stream event, per
/// https://docs.aws.amazon.com/transcribe/latest/dg/streaming-format.html
+ (nullable AWSTranscribeStreamingTranscriptResultStream *)decodeEvent:(NSData *)data
                                                         decodingError:(NSError **)decodingError;

@end
Swift
class AWSTranscribeStreamingEventDecoder : NSObject
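A minimal sketch of decoding a raw event frame received over the WebSocket. It assumes the Objective-C method above bridges to Swift as a throwing decodeEvent(_:) (the exact bridged signature may differ), and the property names used to walk the result (transcriptEvent, transcript, results, alternatives, isPartial) are assumptions based on the class descriptions on this page:

```swift
import AWSTranscribeStreaming

// Decode one raw event frame and print any completed (non-partial)
// transcript segments it carries.
func handleRawEvent(_ data: Data) {
    do {
        let resultStream = try AWSTranscribeStreamingEventDecoder.decodeEvent(data)
        guard let results = resultStream.transcriptEvent?.transcript?.results else {
            return
        }
        for result in results where result.isPartial?.boolValue == false {
            // Each result carries one or more alternative transcriptions
            print(result.alternatives?.first?.transcript ?? "")
        }
    } catch {
        print("Failed to decode event: \(error)")
    }
}
```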
AWSTranscribeStreamingAlternative
A list of possible transcriptions for the audio.
Declaration
Objective-C
@interface AWSTranscribeStreamingAlternative
Swift
class AWSTranscribeStreamingAlternative
AWSTranscribeStreamingAudioEvent
Provides a wrapper for the audio chunks that you are sending.
Declaration
Objective-C
@interface AWSTranscribeStreamingAudioEvent
Swift
class AWSTranscribeStreamingAudioEvent
AWSTranscribeStreamingAudioStream
Represents the audio stream from your application to Amazon Transcribe.
Declaration
Objective-C
@interface AWSTranscribeStreamingAudioStream
Swift
class AWSTranscribeStreamingAudioStream
AWSTranscribeStreamingItem
A word or phrase transcribed from the input audio.
Declaration
Objective-C
@interface AWSTranscribeStreamingItem
Swift
class AWSTranscribeStreamingItem
AWSTranscribeStreamingResult
The result of transcribing a portion of the input audio stream.
Declaration
Objective-C
@interface AWSTranscribeStreamingResult
Swift
class AWSTranscribeStreamingResult
AWSTranscribeStreamingStartStreamTranscriptionRequest
Declaration
Objective-C
@interface AWSTranscribeStreamingStartStreamTranscriptionRequest
Swift
class AWSTranscribeStreamingStartStreamTranscriptionRequest
AWSTranscribeStreamingStartStreamTranscriptionResponse
Declaration
Objective-C
@interface AWSTranscribeStreamingStartStreamTranscriptionResponse
Swift
class AWSTranscribeStreamingStartStreamTranscriptionResponse
AWSTranscribeStreamingTranscript
The transcription in a TranscriptEvent.
Declaration
Objective-C
@interface AWSTranscribeStreamingTranscript
Swift
class AWSTranscribeStreamingTranscript
AWSTranscribeStreamingTranscriptEvent
Represents a set of transcription results from the server to the client. It contains one or more segments of the transcription.
Declaration
Objective-C
@interface AWSTranscribeStreamingTranscriptEvent
Swift
class AWSTranscribeStreamingTranscriptEvent
AWSTranscribeStreamingTranscriptResultStream
Represents the transcription result stream from Amazon Transcribe to your application.
Declaration
Objective-C
@interface AWSTranscribeStreamingTranscriptResultStream
Swift
class AWSTranscribeStreamingTranscriptResultStream
AWSTranscribeStreamingResources
Undocumented
Declaration
Objective-C
@interface AWSTranscribeStreamingResources : NSObject

+ (instancetype)sharedInstance;

- (NSDictionary *)JSONObject;

@end
Swift
class AWSTranscribeStreamingResources : NSObject
AWSTranscribeStreaming
Operations and objects for transcribing streaming speech to text.
For backend setup and instructions on configuring policies, please see https://docs.aws.amazon.com/transcribe/latest/dg/streaming.html
This SDK currently supports streaming only via WebSockets, which is described at https://docs.aws.amazon.com/transcribe/latest/dg/websocket.html
How to Use
See the AWSTranscribeStreamingSwiftTests.testStreamingExample() integration test for an example of the "happy path" usage of this SDK. The general steps for usage are:
1. Configure the AWSServiceConfiguration, including setting a credentials provider for signing WebSocket requests
2. Create a TranscribeStreaming client with +[AWSTranscribeStreaming registerTranscribeStreamingWithConfiguration:forKey:], or use the default
3. Create an AWSTranscribeStreamingStartStreamTranscriptionRequest and set its properties to allow transcription of your audio stream
4. Set up an AWSTranscribeStreamingClientDelegate to receive callbacks for connection status changes and transcription events
5. Call AWSTranscribeStreaming.setDelegate(:callbackQueue:) to register your delegate with the client. NOTE: We do not recommend using the main queue as your callback queue, since doing so could impact your app's UI performance.
6. Call AWSTranscribeStreaming.startTranscriptionWSS() with the configured request
7. Wait for your delegate's connectionStatusCallback to be invoked with a status of .connected. At this point, the transcribe client is ready to receive audio data
8. Chunk your audio data and send it to AWS Transcribe using the AWSTranscribeStreaming.send() method
9. As you send data, your delegate will receive transcription events in the receiveEventCallback, which you can decode and use in your app
10. When you reach the end of your audio data, call AWSTranscribeStreaming.sendEndFrame() to signal the end of processing. NOTE: We recommend waiting 2-3 seconds past the end of your last detected audio data before sending the end frame.
11. Wait for your final transcription events to be received, as indicated by a transcription event with the isPartial flag set to 0
12. Call AWSTranscribeStreaming.endTranscription() to close the WebSocket and gracefully shut down the connection to the service
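The steps above can be sketched in Swift roughly as follows. This is an illustrative outline, not a definitive implementation: the region, identity pool ID, request property values, and the delegate type (MyTranscribeStreamingDelegate) are placeholders, and the send-chunk details are left to the integration test referenced above:

```swift
import AWSTranscribeStreaming

// Placeholder delegate; implement the AWSTranscribeStreamingClientDelegate
// callbacks (connectionStatusCallback, receiveEventCallback) for your app.
let delegate = MyTranscribeStreamingDelegate()

// 1. Configure the service, including a credentials provider for signing
let credentialsProvider = AWSCognitoCredentialsProvider(
    regionType: .USEast1,
    identityPoolId: "YOUR_IDENTITY_POOL_ID")
let configuration = AWSServiceConfiguration(
    region: .USEast1,
    credentialsProvider: credentialsProvider)

// 2. Register a client under a key, then retrieve it
AWSTranscribeStreaming.register(with: configuration!, forKey: "transcribe")
let client = AWSTranscribeStreaming(forKey: "transcribe")

// 3. Describe the audio you will be sending
let request = AWSTranscribeStreamingStartStreamTranscriptionRequest()!
request.languageCode = .enUS
request.mediaEncoding = .pcm
request.mediaSampleRateHertz = 16000

// 4-5. Register the delegate on a background queue (not the main queue)
client.setDelegate(delegate, callbackQueue: DispatchQueue(label: "transcribe"))

// 6. Open the WebSocket connection
client.startTranscriptionWSS(request)

// 7-9. Once connectionStatusCallback reports .connected, chunk your audio
// and pass each chunk to the client's send method, watching for
// transcription events in receiveEventCallback.

// 10. After the last audio chunk (waiting 2-3 seconds), end the stream:
// client.sendEndFrame()

// 11-12. Once a transcription event arrives with isPartial set to 0:
// client.endTranscription()
```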
Declaration
Objective-C
@interface AWSTranscribeStreaming
Swift
class AWSTranscribeStreaming