AWSRekognitionModerationLabel
Objective-C
@interface AWSRekognitionModerationLabel
Swift
class AWSRekognitionModerationLabel
Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.
confidence
Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.
If you don’t specify the MinConfidence parameter in the call to DetectModerationLabels, the operation returns labels with a confidence value greater than or equal to 50 percent.
Declaration
Objective-C
@property (nonatomic, strong) NSNumber *_Nullable confidence;
Swift
var confidence: NSNumber? { get set }
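The default behavior above can be reproduced on the client side when re-filtering results. A minimal sketch, using a simplified stand-in struct that mirrors this class's properties (the real values would come from a DetectModerationLabels response, not the sample data shown here):

```swift
import Foundation

// Stand-in mirroring AWSRekognitionModerationLabel's properties, so the
// filtering logic can be shown self-contained, without the AWS SDK.
struct ModerationLabel {
    let name: String?
    let parentName: String?
    let confidence: NSNumber?
    let taxonomyLevel: NSNumber?
}

// Mimics the service-side default: without MinConfidence, only labels
// with a confidence of at least 50 percent are returned.
func filterByConfidence(_ labels: [ModerationLabel],
                        minConfidence: Double = 50.0) -> [ModerationLabel] {
    labels.filter { ($0.confidence?.doubleValue ?? 0) >= minConfidence }
}

// Hypothetical sample labels for illustration.
let labels = [
    ModerationLabel(name: "Smoking", parentName: "Drugs & Tobacco",
                    confidence: NSNumber(value: 92.5),
                    taxonomyLevel: NSNumber(value: 2)),
    ModerationLabel(name: "Alcohol", parentName: "",
                    confidence: NSNumber(value: 34.1),
                    taxonomyLevel: NSNumber(value: 1)),
]
let confident = filterByConfidence(labels)
```

Passing a different `minConfidence` mirrors supplying the MinConfidence request parameter yourself.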
name
The label name for the type of unsafe content detected in the image.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable name;
Swift
var name: String? { get set }
parentName
The name for the parent label. Labels at the top level of the hierarchy have the parent label "".
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable parentName;
Swift
var parentName: String? { get set }
taxonomyLevel
The level of the moderation label with regard to its taxonomy, from 1 to 3.
Declaration
Objective-C
@property (nonatomic, strong) NSNumber *_Nullable taxonomyLevel;
Swift
var taxonomyLevel: NSNumber? { get set }
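Together, parentName and taxonomyLevel let you rebuild the label hierarchy from the flat list a response returns. A minimal sketch, using tuples with hypothetical sample values in place of real AWSRekognitionModerationLabel instances:

```swift
// Hypothetical sample data; real labels come from a DetectModerationLabels
// response. Tuples stand in for (name, parentName, taxonomyLevel).
let labels: [(name: String, parentName: String, taxonomyLevel: Int)] = [
    ("Drugs & Tobacco", "", 1),            // top level: parentName is ""
    ("Smoking", "Drugs & Tobacco", 2),
    ("Smoking Products", "Smoking", 3),
]

// Each label points back to its parent by name, so grouping by
// parentName recovers the tree: the "" bucket holds the top level,
// every other bucket holds a label's direct children.
let byParent = Dictionary(grouping: labels, by: { $0.parentName })
let topLevel = byParent[""]?.map { $0.name } ?? []
let childrenOfSmoking = byParent["Smoking"]?.map { $0.name } ?? []
```

Because taxonomyLevel runs from 1 to 3, the resulting tree is at most three levels deep.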