AWSRekognitionGetLabelDetectionRequest
Objective-C
@interface AWSRekognitionGetLabelDetectionRequest
Swift
class AWSRekognitionGetLabelDetectionRequest
-
Defines how to aggregate the returned results. Results can be aggregated by timestamps or segments.
Declaration
Objective-C
@property (nonatomic) AWSRekognitionLabelDetectionAggregateBy aggregateBy;
Swift
var aggregateBy: AWSRekognitionLabelDetectionAggregateBy { get set }
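A minimal sketch of setting the aggregation mode on a request. The .segments and .timestamps case names are assumed to mirror the service's SEGMENTS and TIMESTAMPS values, and the failable-style initializer follows the SDK's generated model conventions.

import AWSRekognition

// Build a request that groups results by video segment instead of by timestamp.
func makeSegmentAggregatedRequest(jobId: String) -> AWSRekognitionGetLabelDetectionRequest? {
    guard let request = AWSRekognitionGetLabelDetectionRequest() else { return nil }
    request.jobId = jobId
    request.aggregateBy = .segments   // assumed Swift case for the SEGMENTS value
    return request
}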
-
Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable jobId;
Swift
var jobId: String? { get set }
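A sketch of carrying the job identifier from a StartLabelDetection response into a GetLabelDetection request. The default client and the response's labels property follow the generated SDK naming; error handling is kept minimal.

import AWSRekognition

func fetchLabels(after startResponse: AWSRekognitionStartLabelDetectionResponse) {
    guard let request = AWSRekognitionGetLabelDetectionRequest() else { return }
    request.jobId = startResponse.jobId   // identifier from the initial StartLabelDetection call

    AWSRekognition.default().getLabelDetection(request) { response, error in
        if let error = error {
            print("GetLabelDetection failed: \(error)")
            return
        }
        print(response?.labels ?? [])     // detected labels for this page of results
    }
}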
-
Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
Declaration
Objective-C
@property (nonatomic, strong) NSNumber *_Nullable maxResults;
Swift
var maxResults: NSNumber? { get set }
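A sketch of requesting smaller pages than the 1000-result default; the hypothetical helper below only configures the request.

import AWSRekognition

func makePagedRequest(jobId: String) -> AWSRekognitionGetLabelDetectionRequest? {
    guard let request = AWSRekognitionGetLabelDetectionRequest() else { return nil }
    request.jobId = jobId
    request.maxResults = 100   // values above 1000 are capped at 1000 by the service
    return request
}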
-
If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable nextToken;
Swift
var nextToken: String? { get set }
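A sketch of paging through results with nextToken, assuming the SDK's default Rekognition client has already been configured: keep passing the returned token back until the service stops returning one.

import AWSRekognition

func fetchAllLabels(jobId: String) {
    let rekognition = AWSRekognition.default()

    func fetchPage(token: String?) {
        guard let request = AWSRekognitionGetLabelDetectionRequest() else { return }
        request.jobId = jobId
        request.nextToken = token                 // nil on the first call
        rekognition.getLabelDetection(request) { response, error in
            guard error == nil, let response = response else { return }
            // ... consume response.labels for this page ...
            if let next = response.nextToken {    // more labels remain to retrieve
                fetchPage(token: next)
            }
        }
    }

    fetchPage(token: nil)
}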
-
Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
Declaration
Objective-C
@property (nonatomic) AWSRekognitionLabelDetectionSortBy sortBy;
Swift
var sortBy: AWSRekognitionLabelDetectionSortBy { get set }
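A sketch of grouping the Labels array alphabetically by label name. The .name case is assumed to mirror the service's NAME value; omitting sortBy leaves the default TIMESTAMP ordering.

import AWSRekognition

func makeNameSortedRequest(jobId: String) -> AWSRekognitionGetLabelDetectionRequest? {
    guard let request = AWSRekognitionGetLabelDetectionRequest() else { return nil }
    request.jobId = jobId
    request.sortBy = .name   // assumed Swift case for the NAME value
    return request
}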