AWSRekognitionDetectLabelsRequest
Objective-C
@interface AWSRekognitionDetectLabelsRequest
Swift
class AWSRekognitionDetectLabelsRequest
-
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 bucket do not need to be base64-encoded.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Declaration
Objective-C
@property (nonatomic, strong) AWSRekognitionImage *_Nullable image;
Swift
var image: AWSRekognitionImage? { get set }
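As an illustration only (the AWSRekognitionS3Object helper and the bytes/s3Object property names come from the broader AWSRekognition SDK rather than this page, and the bucket and key are placeholders), an image stored in Amazon S3 can be referenced instead of being passed as base64-encoded bytes:
Objective-C
#import <AWSRekognition/AWSRekognition.h>

// Reference an image already stored in S3; no base64 encoding is needed.
AWSRekognitionS3Object *s3Object = [AWSRekognitionS3Object new];
s3Object.bucket = @"my-bucket";          // placeholder bucket name
s3Object.name = @"photos/example.jpg";   // placeholder object key

AWSRekognitionImage *image = [AWSRekognitionImage new];
image.s3Object = s3Object;

// Alternatively, pass the image as raw bytes; when calling through an AWS SDK,
// the bytes typically do not need to be base64-encoded:
// image.bytes = imageData;              // NSData holding the image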
-
Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.
Declaration
Objective-C
@property (nonatomic, strong) NSNumber *_Nullable maxLabels;
Swift
var maxLabels: NSNumber? { get set }
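Continuing that sketch, the request object documented here carries the image together with the label limit; maxLabels is an NSNumber, so a boxed integer literal is enough:
Objective-C
AWSRekognitionDetectLabelsRequest *request = [AWSRekognitionDetectLabelsRequest new];
request.image = image;      // AWSRekognitionImage built above
request.maxLabels = @10;    // return no more than the 10 highest-confidence labels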
-
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with confidence lower than this specified value.
If MinConfidence is not specified, the operation returns labels with confidence values greater than or equal to 55 percent.
Declaration
Objective-C
@property (nonatomic, strong) NSNumber *_Nullable minConfidence;
Swift
var minConfidence: NSNumber? { get set }
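To complete the sketch, the request can be sent with the generated AWSRekognition service client. The defaultRekognition client, the completionHandler variant of detectLabels, and the response’s labels array are drawn from the wider SDK rather than this page, and the default client assumes a default service configuration has already been registered:
Objective-C
request.minConfidence = @75;   // omit labels below 75 percent confidence (default: 55)

// Requires AWSServiceManager's defaultServiceConfiguration to be set beforehand.
AWSRekognition *rekognition = [AWSRekognition defaultRekognition];
[rekognition detectLabels:request
        completionHandler:^(AWSRekognitionDetectLabelsResponse * _Nullable response,
                            NSError * _Nullable error) {
    if (error) {
        NSLog(@"DetectLabels failed: %@", error);
        return;
    }
    for (AWSRekognitionLabel *label in response.labels) {
        NSLog(@"%@ (%@%%)", label.name, label.confidence);
    }
}];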