AWSRekognitionGetFaceDetectionResponse

@interface AWSRekognitionGetFaceDetectionResponse
  • An array of faces detected in the video. Each element contains a detected face’s details and the time, in milliseconds from the start of the video, at which the face was detected.

    Declaration

    Objective-C

    @property (readwrite, strong, nonatomic)
        NSArray<AWSRekognitionFaceDetection *> *_Nullable faces;

    Swift

    var faces: [AWSRekognitionFaceDetection]? { get set }
  • The current status of the face detection job.

    Declaration

    Objective-C

    @property (assign, readwrite, nonatomic) AWSRekognitionVideoJobStatus jobStatus;

    Swift

    var jobStatus: AWSRekognitionVideoJobStatus { get set }
  • If the response is truncated, Amazon Rekognition returns this token, which you can pass in a subsequent request to retrieve the next set of faces.

    Declaration

    Objective-C

    @property (readwrite, strong, nonatomic) NSString *_Nullable nextToken;

    Swift

    var nextToken: String? { get set }
  • If the job fails, StatusMessage provides a descriptive error message.

    Declaration

    Objective-C

    @property (readwrite, strong, nonatomic) NSString *_Nullable statusMessage;

    Swift

    var statusMessage: String? { get set }
  • Information about a video that Amazon Rekognition Video analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

    Declaration

    Objective-C

    @property (readwrite, strong, nonatomic)
        AWSRekognitionVideoMetadata *_Nullable videoMetadata;

    Swift

    var videoMetadata: AWSRekognitionVideoMetadata? { get set }
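
The properties above work together when polling a face detection job: jobStatus and statusMessage report progress or failure, faces carries one page of results, and nextToken drives pagination. The following is a minimal Swift sketch of that flow, assuming the AWSRekognition module from the AWS iOS SDK and a job ID previously returned by StartFaceDetection (the jobId value and the fetchAllFaces helper are hypothetical, for illustration only):

    import AWSRekognition

    /// Collects every page of face detections for a completed job.
    /// `jobId` is assumed to come from an earlier StartFaceDetection call.
    func fetchAllFaces(jobId: String,
                       completion: @escaping ([AWSRekognitionFaceDetection]) -> Void) {
        var allFaces: [AWSRekognitionFaceDetection] = []

        func fetchPage(_ token: String?) {
            guard let request = AWSRekognitionGetFaceDetectionRequest() else { return }
            request.jobId = jobId
            // Pass the token from the previous truncated response, if any.
            request.nextToken = token

            AWSRekognition.default().getFaceDetection(request) { response, error in
                guard let response = response, error == nil else { return }

                // On failure, statusMessage holds the descriptive error.
                if response.jobStatus == .failed {
                    print(response.statusMessage ?? "Face detection job failed")
                    return
                }

                allFaces.append(contentsOf: response.faces ?? [])

                if let next = response.nextToken {
                    fetchPage(next)          // response truncated: fetch next page
                } else {
                    completion(allFaces)     // last page reached
                }
            }
        }

        fetchPage(nil)
    }

Recursing on nextToken until it is nil is the standard pagination pattern for Rekognition Video results; each page also carries the same videoMetadata, so it only needs to be read from the first response.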