AWSRekognitionGetTextDetectionResponse

@interface AWSRekognitionGetTextDetectionResponse : AWSModel
  • Current status of the text detection job.

    Declaration

    Objective-C

    @property (nonatomic) AWSRekognitionVideoJobStatus jobStatus;

    Swift

    var jobStatus: AWSRekognitionVideoJobStatus { get set }
  • If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of text.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable nextToken;

    Swift

    var nextToken: String? { get set }
  • If the job fails, StatusMessage provides a descriptive error message.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable statusMessage;

    Swift

    var statusMessage: String? { get set }
  • An array of text detected in the video. Each element contains the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSArray<AWSRekognitionTextDetectionResult *> *_Nullable textDetections;

    Swift

    var textDetections: [AWSRekognitionTextDetectionResult]? { get set }
  • Version number of the text detection model that was used to detect text.

    Declaration

    Objective-C

    @property (nonatomic, strong) NSString *_Nullable textModelVersion;

    Swift

    var textModelVersion: String? { get set }
  • Information about a video that Amazon Rekognition analyzed. videoMetadata is returned in every page of paginated responses from an Amazon Rekognition video operation.

    Declaration

    Objective-C

    @property (nonatomic, strong) AWSRekognitionVideoMetadata *_Nullable videoMetadata;

    Swift

    var videoMetadata: AWSRekognitionVideoMetadata? { get set }
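The nextToken property above drives pagination: a caller keeps re-issuing the request with the previous response's token until no token is returned. The sketch below models that loop in plain Swift. It is self-contained on purpose: StubPage and fetchPage are hypothetical stand-ins for AWSRekognitionGetTextDetectionResponse and the SDK's actual getTextDetection call, not the real API surface.

```swift
// Minimal stand-in for AWSRekognitionGetTextDetectionResponse.
// Real code would carry [AWSRekognitionTextDetectionResult] instead of [String].
struct StubPage {
    let textDetections: [String]  // detected text on this page
    let nextToken: String?        // nil means this is the last page
}

// Accumulates detections across pages by feeding each page's
// nextToken back into the next fetch, until the token is nil.
func collectAllDetections(fetchPage: (String?) -> StubPage) -> [String] {
    var all: [String] = []
    var token: String? = nil
    repeat {
        let page = fetchPage(token)
        all.append(contentsOf: page.textDetections)
        token = page.nextToken
    } while token != nil
    return all
}

// Usage with two stub pages linked by a token.
let pages = [
    StubPage(textDetections: ["STOP", "ONE WAY"], nextToken: "page2"),
    StubPage(textDetections: ["EXIT"], nextToken: nil),
]
let result = collectAllDetections { token in
    token == "page2" ? pages[1] : pages[0]
}
print(result)  // ["STOP", "ONE WAY", "EXIT"]
```

With the real SDK, the fetch step would build an AWSRekognitionGetTextDetectionRequest, set its jobId and nextToken, and read textDetections and nextToken off the returned response; the loop shape is the same.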