AWSRekognitionGetFaceSearchResponse
Objective-C
@interface AWSRekognitionGetFaceSearchResponse
Swift
class AWSRekognitionGetFaceSearchResponse
-
The current status of the face search job.
Declaration
Objective-C
@property (nonatomic) AWSRekognitionVideoJobStatus jobStatus;
Swift
var jobStatus: AWSRekognitionVideoJobStatus { get set }
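A minimal sketch of branching on the job status. `VideoJobStatus` and `nextAction` below are pure-Swift stand-ins invented for illustration; the real SDK exposes the status as the `AWSRekognitionVideoJobStatus` enum on the response.

```swift
// Stand-in for AWSRekognitionVideoJobStatus (hypothetical, for illustration).
enum VideoJobStatus {
    case inProgress, succeeded, failed
}

// Decide what to do with a GetFaceSearch response based on its status.
func nextAction(for status: VideoJobStatus) -> String {
    switch status {
    case .inProgress: return "poll again later"
    case .succeeded:  return "read persons and paginate with nextToken"
    case .failed:     return "inspect statusMessage for the error"
    }
}

print(nextAction(for: .succeeded))  // prints "read persons and paginate with nextToken"
```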
-
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable nextToken;
Swift
var nextToken: String? { get set }
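A minimal sketch of the `nextToken` pagination pattern. `Page` and `fetchPage` are stand-ins for `AWSRekognitionGetFaceSearchResponse` and the real GetFaceSearch call (stubbed here so the example is self-contained); the loop logic is what carries over.

```swift
// Stand-in for one page of results (hypothetical, for illustration).
struct Page {
    let items: [String]       // stands in for the `persons` array
    let nextToken: String?    // nil once the last page is reached
}

// Stubbed page fetcher: returns one of three fixed pages keyed by token.
func fetchPage(nextToken: String?) -> Page {
    switch nextToken {
    case nil:  return Page(items: ["a", "b"], nextToken: "t1")
    case "t1": return Page(items: ["c"], nextToken: "t2")
    default:   return Page(items: ["d"], nextToken: nil)
    }
}

// Keep requesting pages, passing the previous response's nextToken,
// until the service stops returning a token.
func collectAll() -> [String] {
    var all: [String] = []
    var token: String? = nil
    repeat {
        let page = fetchPage(nextToken: token)
        all.append(contentsOf: page.items)
        token = page.nextToken
    } while token != nil
    return all
}

print(collectAll())  // prints ["a", "b", "c", "d"]
```

With the real SDK, `fetchPage` would issue a GetFaceSearch request carrying the token from the previous response, and the items would be `AWSRekognitionPersonMatch` values.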
-
An array of persons, PersonMatch, in the video whose faces match the faces in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes the time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionPersonMatch *> *_Nullable persons;
Swift
var persons: [AWSRekognitionPersonMatch]? { get set }
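A minimal sketch of walking the `persons` array. `FaceMatch`, `PersonDetail`, and `PersonMatch` here are simplified pure-Swift stand-ins for the corresponding `AWSRekognition` types; the field names mirror the response shape but are otherwise an assumption for illustration.

```swift
// Simplified stand-ins for the AWSRekognition types (hypothetical).
struct FaceMatch {
    let similarity: Double      // confidence that the faces match
    let faceId: String          // face in the collection that matched
}
struct PersonDetail {
    let index: Int              // identifier for the person in the video
}
struct PersonMatch {
    let timestamp: Int          // milliseconds into the video
    let person: PersonDetail
    let faceMatches: [FaceMatch]
}

// Report, for each matched person, when they appeared and the best match.
func summarize(_ persons: [PersonMatch]) -> [String] {
    persons.map { match in
        let best = match.faceMatches.max(by: { $0.similarity < $1.similarity })
        let face = best.map { "\($0.faceId) (\($0.similarity)%)" } ?? "no face match"
        return "person \(match.person.index) at \(match.timestamp)ms: \(face)"
    }
}

let sample = [
    PersonMatch(timestamp: 1500,
                person: PersonDetail(index: 0),
                faceMatches: [FaceMatch(similarity: 98.2, faceId: "f-123")]),
]
print(summarize(sample))  // prints ["person 0 at 1500ms: f-123 (98.2%)"]
```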
-
If the job fails, StatusMessage provides a descriptive error message.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable statusMessage;
Swift
var statusMessage: String? { get set }
-
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
Declaration
Objective-C
@property (nonatomic, strong) AWSRekognitionVideoMetadata *_Nullable videoMetadata;
Swift
var videoMetadata: AWSRekognitionVideoMetadata? { get set }