interface GetFaceSearchResponse {
    JobId?: string;
    JobStatus?: VideoJobStatus;
    JobTag?: string;
    NextToken?: string;
    Persons?: PersonMatch[];
    StatusMessage?: string;
    Video?: Video;
    VideoMetadata?: VideoMetadata;
}
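
For context, a minimal sketch of retrieving this response with the AWS SDK for JavaScript v3 client might look like the following. The region, job identifier, and result limit are placeholders, not values defined on this page.

import {
  RekognitionClient,
  GetFaceSearchCommand,
} from "@aws-sdk/client-rekognition";

// Placeholder region and job ID; the JobId comes from an earlier
// call to StartFaceSearch (see JobId below).
const client = new RekognitionClient({ region: "us-east-1" });

const response = await client.send(
  new GetFaceSearchCommand({
    JobId: "your-start-face-search-job-id",
    MaxResults: 100,
    SortBy: "TIMESTAMP",
  }),
);

if (response.JobStatus === "SUCCEEDED") {
  console.log(`Matched persons in this page: ${response.Persons?.length ?? 0}`);
} else if (response.JobStatus === "FAILED") {
  console.error(`Face search failed: ${response.StatusMessage}`);
}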


Properties

JobId?: string

Job identifier for the face search operation for which you want to obtain results. The job identifier is returned by an initial call to StartFaceSearch.

JobStatus?: VideoJobStatus

The current status of the face search job.
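
A short sketch of waiting on this status, assuming you poll GetFaceSearch rather than react to the Amazon SNS completion notification (the notification is the usual trigger; polling and the 5-second delay are shown only for illustration).

import {
  RekognitionClient,
  GetFaceSearchCommand,
  GetFaceSearchCommandOutput,
} from "@aws-sdk/client-rekognition";

const client = new RekognitionClient({});

// Poll until the job leaves IN_PROGRESS, then return the final response
// (JobStatus will be SUCCEEDED or FAILED).
async function waitForFaceSearch(jobId: string): Promise<GetFaceSearchCommandOutput> {
  for (;;) {
    const result = await client.send(new GetFaceSearchCommand({ JobId: jobId }));
    if (result.JobStatus !== "IN_PROGRESS") {
      return result;
    }
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}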

JobTag?: string

A job identifier specified in the call to StartFaceSearch and returned in the job completion notification sent to your Amazon Simple Notification Service topic.

NextToken?: string

If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.
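
A minimal pagination sketch that follows NextToken until the service stops returning one; the helper name is hypothetical, and PersonMatch is assumed to be exported by @aws-sdk/client-rekognition.

import {
  RekognitionClient,
  GetFaceSearchCommand,
  PersonMatch,
} from "@aws-sdk/client-rekognition";

const client = new RekognitionClient({});

// Collect every PersonMatch across all pages of GetFaceSearch results.
async function getAllPersonMatches(jobId: string): Promise<PersonMatch[]> {
  const persons: PersonMatch[] = [];
  let nextToken: string | undefined;
  do {
    const res = await client.send(
      new GetFaceSearchCommand({ JobId: jobId, NextToken: nextToken }),
    );
    persons.push(...(res.Persons ?? []));
    nextToken = res.NextToken;
  } while (nextToken);
  return persons;
}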

Persons?: PersonMatch[]

An array of persons, PersonMatch, in the video whose faces match faces in an Amazon Rekognition collection, along with time information for when each person was matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes the time the person was matched, face match details (FaceMatches) for the matching faces in the collection, and person information (Person) for the matched person.
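
A sketch of reading these elements; the helper name is hypothetical, and the first FaceMatches entry is used only as an example of a collection match.

import type { PersonMatch } from "@aws-sdk/client-rekognition";

// Print when each person was matched and one of the matching collection faces.
function summarizeMatches(persons: PersonMatch[]): void {
  for (const match of persons) {
    const firstFace = match.FaceMatches?.[0];
    console.log(
      `timestamp=${match.Timestamp}ms personIndex=${match.Person?.Index}`,
      firstFace
        ? `faceId=${firstFace.Face?.FaceId} similarity=${firstFace.Similarity?.toFixed(1)}%`
        : "no collection match",
    );
  }
}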

StatusMessage?: string

If the job fails, StatusMessage provides a descriptive error message.

Video?: Video

Video file stored in an Amazon S3 bucket. Amazon Rekognition Video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.

VideoMetadata?: VideoMetadata

Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.