Classes
The following classes are available globally.
AWSTranscribeStreamingEventDecoder
Undocumented
Declaration
Objective-C
@interface AWSTranscribeStreamingEventDecoder : NSObject

/// Decodes a single stream event, per
/// https://docs.aws.amazon.com/transcribe/latest/dg/streaming-format.html
+ (nullable AWSTranscribeStreamingTranscriptResultStream *)decodeEvent:(NSData *)data
                                                         decodingError:(NSError **)decodingError;

@end
Swift
class AWSTranscribeStreamingEventDecoder : NSObject
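As a rough illustration, decoding a single raw event frame might look like the sketch below. This is hedged: the exact bridged Swift signature of decodeEvent:decodingError: (in particular how the NSError ** parameter is imported) and the property path used to walk the decoded result (transcriptEvent, transcript, results, alternatives) should be verified against the SDK headers; rawEventData is a hypothetical placeholder.

```swift
import AWSTranscribeStreaming

// rawEventData: Data holding one event in the AWS event-stream framing
// described at
// https://docs.aws.amazon.com/transcribe/latest/dg/streaming-format.html
var decodingError: NSError?
if let resultStream = AWSTranscribeStreamingEventDecoder.decodeEvent(
        rawEventData,
        decodingError: &decodingError) {
    // A decoded frame carries transcript results (or an error payload).
    if let results = resultStream.transcriptEvent?.transcript?.results {
        for result in results where result.isPartial?.boolValue == false {
            // Print the first (most likely) alternative of each final result.
            print(result.alternatives?.first?.transcript ?? "")
        }
    }
} else if let error = decodingError {
    print("Failed to decode event: \(error)")
}
```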
AWSTranscribeStreamingAlternative
A list of possible transcriptions for the audio.
Declaration
Objective-C
@interface AWSTranscribeStreamingAlternative
Swift
class AWSTranscribeStreamingAlternative
AWSTranscribeStreamingAudioEvent
Provides a wrapper for the audio chunks that you are sending.
Declaration
Objective-C
@interface AWSTranscribeStreamingAudioEvent
Swift
class AWSTranscribeStreamingAudioEvent
AWSTranscribeStreamingAudioStream
Represents the audio stream from your application to Amazon Transcribe.
Declaration
Objective-C
@interface AWSTranscribeStreamingAudioStream
Swift
class AWSTranscribeStreamingAudioStream
AWSTranscribeStreamingItem
A word or phrase transcribed from the input audio.
Declaration
Objective-C
@interface AWSTranscribeStreamingItem
Swift
class AWSTranscribeStreamingItem
AWSTranscribeStreamingResult
The result of transcribing a portion of the input audio stream.
Declaration
Objective-C
@interface AWSTranscribeStreamingResult
Swift
class AWSTranscribeStreamingResult
AWSTranscribeStreamingStartStreamTranscriptionRequest
Declaration
Objective-C
@interface AWSTranscribeStreamingStartStreamTranscriptionRequest
Swift
class AWSTranscribeStreamingStartStreamTranscriptionRequest
AWSTranscribeStreamingStartStreamTranscriptionResponse
Declaration
Objective-C
@interface AWSTranscribeStreamingStartStreamTranscriptionResponse
Swift
class AWSTranscribeStreamingStartStreamTranscriptionResponse
AWSTranscribeStreamingTranscript
The transcription in a TranscriptEvent.
Declaration
Objective-C
@interface AWSTranscribeStreamingTranscript
Swift
class AWSTranscribeStreamingTranscript
AWSTranscribeStreamingTranscriptEvent
Represents a set of transcription results from the server to the client. It contains one or more segments of the transcription.
Declaration
Objective-C
@interface AWSTranscribeStreamingTranscriptEvent
Swift
class AWSTranscribeStreamingTranscriptEvent
AWSTranscribeStreamingTranscriptResultStream
Represents the transcription result stream from Amazon Transcribe to your application.
Declaration
Objective-C
@interface AWSTranscribeStreamingTranscriptResultStream
Swift
class AWSTranscribeStreamingTranscriptResultStream
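Result streams typically reach your app through the client delegate. The sketch below shows a minimal delegate that prints finished transcripts; it is an outline only, and the exact AWSTranscribeStreamingClientDelegate method requirements and the result-traversal property names should be confirmed against the SDK headers.

```swift
import AWSTranscribeStreaming

// A minimal delegate that prints non-partial (final) transcripts.
// Method names below are the author's assumption about the
// AWSTranscribeStreamingClientDelegate protocol; verify against the SDK.
class TranscriptPrinter: NSObject, AWSTranscribeStreamingClientDelegate {

    func connectionStatusDidChange(_ connectionStatus: AWSTranscribeStreamingClientConnectionStatus,
                                   withError error: Error?) {
        if connectionStatus == .connected {
            // The client is now ready to receive audio data.
        }
    }

    func didReceiveEvent(_ event: AWSTranscribeStreamingTranscriptResultStream?,
                         decodingError: Error?) {
        guard let results = event?.transcriptEvent?.transcript?.results else { return }
        for result in results where result.isPartial?.boolValue == false {
            print(result.alternatives?.first?.transcript ?? "")
        }
    }
}
```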
AWSTranscribeStreamingResources
Undocumented
Declaration
Objective-C
@interface AWSTranscribeStreamingResources : NSObject

+ (instancetype)sharedInstance;

- (NSDictionary *)JSONObject;

@end
Swift
class AWSTranscribeStreamingResources : NSObject
AWSTranscribeStreaming
Operations and objects for transcribing streaming speech to text.
For backend setup and instructions on configuring policies, please see https://docs.aws.amazon.com/transcribe/latest/dg/streaming.html
This SDK currently only supports streaming via WebSockets, which is described here https://docs.aws.amazon.com/transcribe/latest/dg/websocket.html
How to Use
See the AWSTranscribeStreamingSwiftTests.testStreamingExample() integration test for an example of the "happy path" usage of this SDK. The general steps for usage are:

1. Configure the AWSServiceConfiguration, including setting a credentials provider for signing WebSocket requests
2. Create a TranscribeStreaming client with +[AWSTranscribeStreaming registerTranscribeStreamingWithConfiguration:forKey:], or use the default
3. Create an AWSTranscribeStreamingStartStreamTranscriptionRequest and set its properties to allow transcription of your audio stream
4. Set up an AWSTranscribeStreamingClientDelegate to receive callbacks for connection status changes and transcription events
5. Call AWSTranscribeStreaming.setDelegate(:callbackQueue:) to register your delegate with the client. NOTE: We do not recommend using the main queue as your callback queue, since doing so could impact your app's UI performance.
6. Call AWSTranscribeStreaming.startTranscriptionWSS() with the configured request
7. Wait for your delegate's connectionStatusCallback to be invoked with a status of .connected. At this point, the transcribe client is ready to receive audio data
8. Chunk your audio data and send it to AWS Transcribe using the AWSTranscribeStreaming.send() method
9. As you send data, your delegate will receive transcription events in the receiveEventCallback, which you can decode and use in your app
10. When you reach the end of your audio data, call AWSTranscribeStreaming.sendEndFrame() to signal the end of processing. NOTE: We recommend waiting 2-3 seconds past the end of your last detected audio data before sending the end frame
11. Wait for your final transcription events to be received, as indicated by a transcription event with the isPartial flag set to 0
12. Call AWSTranscribeStreaming.endTranscription() to close the web socket and gracefully shut down the connection to the service
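The steps above can be sketched in Swift roughly as follows. This is a hedged outline, not a drop-in implementation: the bridged Swift names (register(with:forKey:), setDelegate(_:callbackQueue:), startTranscriptionWSS(_:), send(_:)) and their exact signatures should be checked against the SDK headers, and MyTranscribeDelegate, audioChunks, and the identity pool ID are hypothetical placeholders.

```swift
import AWSTranscribeStreaming

// 1. Configure the service, including a credentials provider for
//    signing the WebSocket request (pool ID is a placeholder).
let credentials = AWSCognitoCredentialsProvider(
    regionType: .USEast1,
    identityPoolId: "YOUR-IDENTITY-POOL-ID")
let configuration = AWSServiceConfiguration(
    region: .USEast1,
    credentialsProvider: credentials)

// 2. Register a client under a key, then retrieve it.
AWSTranscribeStreaming.register(with: configuration!, forKey: "transcribe")
let client = AWSTranscribeStreaming(forKey: "transcribe")

// 3. Describe the audio stream you are about to send.
let request = AWSTranscribeStreamingStartStreamTranscriptionRequest()!
request.languageCode = .enUS
request.mediaEncoding = .pcm
request.mediaSampleRateHertz = 16000

// 4./5. Register a delegate conforming to
//    AWSTranscribeStreamingClientDelegate; avoid the main queue.
let delegate = MyTranscribeDelegate()  // hypothetical conformer
client.setDelegate(delegate, callbackQueue: DispatchQueue(label: "transcribe.callbacks"))

// 6. Open the WebSocket with the configured request.
client.startTranscriptionWSS(request)

// 7.-10. Once the delegate reports .connected, chunk and send audio,
//    then signal the end of the stream (after a 2-3 second pause).
for chunk in audioChunks {  // audioChunks: [Data], placeholder
    client.send(chunk)      // verify the exact send() signature
}
client.sendEndFrame()

// 11./12. After the final (non-partial) transcription event arrives:
client.endTranscription()
```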
Declaration
Objective-C
@interface AWSTranscribeStreaming
Swift
class AWSTranscribeStreaming