AWSRekognitionModerationLabel

@interface AWSRekognitionModerationLabel

Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.

  • The confidence that Amazon Rekognition has that the label was correctly identified.

    If you don’t specify the MinConfidence parameter in the call to DetectModerationLabels, the operation returns labels with a confidence value greater than or equal to 50 percent.

    Declaration

    Objective-C

    @property (readwrite, strong, nonatomic) NSNumber *_Nullable confidence;

    Swift

    var confidence: NSNumber? { get set }
  • The label name for the type of unsafe content detected in the image.

    Declaration

    Objective-C

    @property (readwrite, strong, nonatomic) NSString *_Nullable name;

    Swift

    var name: String? { get set }
  • The name for the parent label. Labels at the top level of the hierarchy have the parent label "".

    Declaration

    Objective-C

    @property (readwrite, strong, nonatomic) NSString *_Nullable parentName;

    Swift

    var parentName: String? { get set }
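
The three properties above can be read together when handling a DetectModerationLabels response. The following is a minimal sketch, not a definitive implementation: it assumes the wider AWS SDK for iOS types (AWSRekognition, AWSRekognitionDetectModerationLabelsRequest, AWSRekognitionImage) are available and that the SDK has been configured with credentials and a region elsewhere; imageData is a hypothetical Data value holding the image bytes.

```swift
import AWSRekognition

// Sketch only: request setup and service configuration are assumptions
// drawn from the broader AWS SDK for iOS, not from this class reference.
let request = AWSRekognitionDetectModerationLabelsRequest()!
let image = AWSRekognitionImage()!
image.bytes = imageData          // hypothetical Data containing the JPEG/PNG
request.image = image
request.minConfidence = 60       // omit to use the 50 percent default

AWSRekognition.default().detectModerationLabels(request) { response, error in
    guard error == nil, let labels = response?.moderationLabels else { return }
    for label in labels {
        // Top-level labels in the taxonomy have an empty parentName.
        let parent = (label.parentName?.isEmpty == false)
            ? label.parentName! : "(top level)"
        print("\(label.name ?? "?") under \(parent): \(label.confidence ?? 0)%")
    }
}
```

Because confidence is an NSNumber and minConfidence behaves the same way, numeric literals bridge directly in Swift; checking parentName for the empty string is how a caller distinguishes top-level labels from second-level ones.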