AWSRekognitionDetectFacesRequest
Objective-C
@interface AWSRekognitionDetectFacesRequest
Swift
class AWSRekognitionDetectFacesRequest
-
An array of facial attributes you want to be returned. A DEFAULT subset of facial attributes (BoundingBox, Confidence, Pose, Quality, and Landmarks) is always returned. You can request specific facial attributes in addition to the default list by using ["DEFAULT", "FACE_OCCLUDED"], or just ["FACE_OCCLUDED"]. You can request all facial attributes by using ["ALL"]. Requesting more attributes may increase response time.
If you provide both ["ALL", "DEFAULT"], the service uses a logical "AND" operator to determine which attributes to return (in this case, all attributes).
Note that while the FaceOccluded and EyeDirection attributes are supported when using DetectFaces, they aren't supported when analyzing videos with StartFaceDetection and GetFaceDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<NSString *> *_Nullable attributes;
Swift
var attributes: [String]? { get set }
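As a rough sketch (assuming the AWS SDK for iOS is linked; this snippet is not part of the generated reference), the attributes list might be populated as follows, using the failable-initializer pattern that appears in the Amazon Rekognition developer guide examples:

import AWSRekognition

// Sketch: build a DetectFaces request asking for the default attribute
// subset (BoundingBox, Confidence, Pose, Quality, Landmarks) plus FaceOccluded.
func makeDetectFacesRequest() -> AWSRekognitionDetectFacesRequest? {
    guard let request = AWSRekognitionDetectFacesRequest() else {
        return nil // the generated initializer is imported into Swift as failable
    }
    // Use ["ALL"] instead to request every supported facial attribute
    // (this may increase response time).
    request.attributes = ["DEFAULT", "FACE_OCCLUDED"]
    return request
}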
-
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Declaration
Objective-C
@property (nonatomic, strong) AWSRekognitionImage *_Nullable image;
Swift
var image: AWSRekognitionImage? { get set }
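To illustrate how image and attributes fit together in a call, here is a hedged sketch that assumes a default AWSServiceConfiguration has already been registered with the SDK; the bucket and key names are placeholders:

import AWSRekognition

// Sketch: point the request at an image stored in S3 and call DetectFaces.
func runDetectFaces() {
    guard let request = AWSRekognitionDetectFacesRequest(),
          let image = AWSRekognitionImage(),
          let s3Object = AWSRekognitionS3Object() else {
        return
    }
    s3Object.bucket = "my-bucket"           // placeholder bucket name
    s3Object.name = "photos/portrait.jpg"   // placeholder object key
    image.s3Object = s3Object               // SDK callers can pass raw bytes via image.bytes instead
    request.image = image
    request.attributes = ["DEFAULT"]

    AWSRekognition.default().detectFaces(request) { response, error in
        if let error = error {
            print("DetectFaces failed: \(error)")
        } else if let faces = response?.faceDetails {
            print("Detected \(faces.count) face(s)")
        }
    }
}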