AWSRekognitionIndexFacesRequest
Objective-C
@interface AWSRekognitionIndexFacesRequest
Swift
class AWSRekognitionIndexFacesRequest
-
The ID of an existing collection to which you want to add the faces that are detected in the input images.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable collectionId;
Swift
var collectionId: String? { get set }
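As a usage sketch, the request can be created and pointed at a collection like this; the collection ID is a placeholder, and the snippet assumes the AWS SDK for iOS is linked:
Swift
import AWSRekognition

// The model initializer is failable; force-unwrap for brevity.
// "my-faces-collection" is a hypothetical ID for an existing collection
// (create one with CreateCollection first).
let request = AWSRekognitionIndexFacesRequest()!
request.collectionId = "my-faces-collection"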
-
An array of facial attributes you want to be returned. A DEFAULT subset of facial attributes - BoundingBox, Confidence, Pose, Quality, and Landmarks - will always be returned. You can request specific facial attributes (in addition to the default list) by using ["DEFAULT", "FACE_OCCLUDED"] or just ["FACE_OCCLUDED"]. You can request all facial attributes by using ["ALL"]. Requesting more attributes may increase response time.
If you provide both ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).
Declaration
Objective-C
@property (nonatomic, strong) NSArray<NSString *> *_Nullable detectionAttributes;
Swift
var detectionAttributes: [String]? { get set }
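For illustration, a short sketch of requesting extra attributes; the request setup mirrors the collectionId example above:
Swift
import AWSRekognition

let request = AWSRekognitionIndexFacesRequest()!
// Default attributes plus face occlusion:
request.detectionAttributes = ["DEFAULT", "FACE_OCCLUDED"]
// Alternatively, every available attribute (may increase response time):
// request.detectionAttributes = ["ALL"]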
-
The ID you want to assign to all the faces detected in the image.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable externalImageId;
Swift
var externalImageId: String? { get set }
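A minimal sketch; the ID value is a hypothetical naming scheme chosen by the caller:
Swift
import AWSRekognition

let request = AWSRekognitionIndexFacesRequest()!
// Stored with each face indexed from this image.
request.externalImageId = "user-42-profile"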
-
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes isn’t supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Declaration
Objective-C
@property (nonatomic, strong) AWSRekognitionImage *_Nullable image;
Swift
var image: AWSRekognitionImage? { get set }
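Both input modes can be expressed as follows; the bucket, key, and file path are placeholders, and the byte-based variant assumes a readable local image file:
Swift
import AWSRekognition

let request = AWSRekognitionIndexFacesRequest()!
let image = AWSRekognitionImage()!

// Option 1: reference an image already stored in S3 (placeholders below).
let s3Object = AWSRekognitionS3Object()!
s3Object.bucket = "my-photos-bucket"
s3Object.name = "uploads/group-photo.jpg"
image.s3Object = s3Object

// Option 2: pass raw image bytes instead; an SDK caller does not need to
// base64-encode them itself.
// image.bytes = try Data(contentsOf: URL(fileURLWithPath: "/path/to/photo.jpg"))

request.image = image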
-
The maximum number of faces to index. The value of MaxFaces must be greater than or equal to 1. IndexFaces returns no more than 100 detected faces in an image, even if you specify a larger value for MaxFaces.
If IndexFaces detects more faces than the value of MaxFaces, the faces with the lowest quality are filtered out first. If there are still more faces than the value of MaxFaces, the faces with the smallest bounding boxes are filtered out (up to the number that's needed to satisfy the value of MaxFaces). Information about the unindexed faces is available in the UnindexedFaces array.
The faces that are returned by IndexFaces are sorted by the largest face bounding box size to the smallest size, in descending order. MaxFaces can be used with a collection associated with any version of the face model.
Declaration
Objective-C
@property (nonatomic, strong) NSNumber *_Nullable maxFaces;
Swift
var maxFaces: NSNumber? { get set }
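A short sketch of capping the number of indexed faces; the limit of 5 is arbitrary:
Swift
import AWSRekognition

let request = AWSRekognitionIndexFacesRequest()!
// Index at most the five largest, highest-quality faces; any faces
// filtered out are reported in the response's unindexedFaces array.
request.maxFaces = 5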
-
A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren't indexed. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don't meet the chosen quality bar. The default value is AUTO. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that's misidentified as a face, a face that's too blurry, or a face with a pose that's too extreme to use. If you specify NONE, no filtering is performed.
To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
Declaration
Objective-C
@property (nonatomic) AWSRekognitionQualityFilter qualityFilter;
Swift
var qualityFilter: AWSRekognitionQualityFilter { get set }
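Putting the pieces together, a minimal end-to-end sketch; it assumes a default service configuration was registered at app startup, and the collection, bucket, and key names are placeholders:
Swift
import AWSRekognition

let request = AWSRekognitionIndexFacesRequest()!
request.collectionId = "my-faces-collection"   // hypothetical collection
request.qualityFilter = .auto                  // AUTO is also the default

let s3Object = AWSRekognitionS3Object()!
s3Object.bucket = "my-photos-bucket"           // placeholder bucket
s3Object.name = "uploads/group-photo.jpg"      // placeholder key
let image = AWSRekognitionImage()!
image.s3Object = s3Object
request.image = image

AWSRekognition.default().indexFaces(request).continueWith { (task: AWSTask<AWSRekognitionIndexFacesResponse>) -> Any? in
    if let error = task.error {
        print("IndexFaces failed: \(error)")
    } else if let result = task.result {
        print("Indexed \(result.faceRecords?.count ?? 0) face(s); " +
              "\(result.unindexedFaces?.count ?? 0) filtered out")
    }
    return nil
}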