Classes

The following classes are available globally.

  • Structure containing the estimated age range, in years, for a face.

    Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.

    Declaration

    Objective-C

    @interface AWSRekognitionAgeRange

    Swift

    class AWSRekognitionAgeRange
  • Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.

    Declaration

    Objective-C

    @interface AWSRekognitionAsset

    Swift

    class AWSRekognitionAsset
  • Declaration

    Objective-C

    @interface AWSRekognitionAssociateFacesRequest

    Swift

    class AWSRekognitionAssociateFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionAssociateFacesResponse

    Swift

    class AWSRekognitionAssociateFacesResponse
  • Provides face metadata for the faces that are associated to a specific UserID.

    Declaration

    Objective-C

    @interface AWSRekognitionAssociatedFace

    Swift

    class AWSRekognitionAssociatedFace
  • Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.

    Declaration

    Objective-C

    @interface AWSRekognitionAudioMetadata

    Swift

    class AWSRekognitionAudioMetadata
  • An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.

    Declaration

    Objective-C

    @interface AWSRekognitionAuditImage

    Swift

    class AWSRekognitionAuditImage
  • Indicates whether or not the face has a beard, and the confidence level in the determination.

    Declaration

    Objective-C

    @interface AWSRekognitionBeard

    Swift

    class AWSRekognitionBeard
  • A filter that allows you to control the black frame detection by specifying the black levels and pixel coverage of black pixels in a frame. As videos can come from multiple sources, formats, and time periods, they may contain different standards and varying noise levels for black frames that need to be accounted for. For more information, see StartSegmentDetection.

    Declaration

    Objective-C

    @interface AWSRekognitionBlackFrame

    Swift

    class AWSRekognitionBlackFrame
  • Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) values represent the top-left corner of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

    The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).

    The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.

    The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.

    Declaration

    Objective-C

    @interface AWSRekognitionBoundingBox

    Swift

    class AWSRekognitionBoundingBox
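
    Example

    As a worked illustration of the ratio math above, here is a minimal Swift sketch that converts the ratio-based values into pixel coordinates, assuming the 700x200 image from the example (the image size and values are assumptions for the sketch, not part of the API):

    import AWSRekognition
    import CoreGraphics

    let imageWidth = 700.0, imageHeight = 200.0

    let box = AWSRekognitionBoundingBox()
    box?.left = 0.5     // 350 / 700
    box?.top = 0.25     // 50 / 200
    box?.width = 0.1    // 70 / 700
    box?.height = 0.35  // 70 / 200

    if let left = box?.left, let top = box?.top,
       let width = box?.width, let height = box?.height {
        // Multiply each ratio by the corresponding image dimension.
        let pixelRect = CGRect(x: left.doubleValue * imageWidth,
                               y: top.doubleValue * imageHeight,
                               width: width.doubleValue * imageWidth,
                               height: height.doubleValue * imageHeight)
        print(pixelRect)  // (350.0, 50.0, 70.0, 70.0)
    }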
  • Provides information about a celebrity recognized by the RecognizeCelebrities operation.

    Declaration

    Objective-C

    @interface AWSRekognitionCelebrity

    Swift

    class AWSRekognitionCelebrity
  • Information about a recognized celebrity.

    Declaration

    Objective-C

    @interface AWSRekognitionCelebrityDetail

    Swift

    class AWSRekognitionCelebrityDetail
  • Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionCelebrityRecognition

    Swift

    class AWSRekognitionCelebrityRecognition
  • Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.

    Declaration

    Objective-C

    @interface AWSRekognitionCompareFacesMatch

    Swift

    class AWSRekognitionCompareFacesMatch
  • Declaration

    Objective-C

    @interface AWSRekognitionCompareFacesRequest

    Swift

    class AWSRekognitionCompareFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCompareFacesResponse

    Swift

    class AWSRekognitionCompareFacesResponse
  • Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.

    Declaration

    Objective-C

    @interface AWSRekognitionComparedFace

    Swift

    class AWSRekognitionComparedFace
  • Type that describes the face Amazon Rekognition chose to compare with the faces in the target image. This contains a bounding box for the selected face and the confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.

    Declaration

    Objective-C

    @interface AWSRekognitionComparedSourceImageFace

    Swift

    class AWSRekognitionComparedSourceImageFace
  • Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for CreateStreamProcessor. Including this setting in the CreateStreamProcessor request enables you to use the stream processor for label detection. You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary.

    Required parameters: [Labels]

    Declaration

    Objective-C

    @interface AWSRekognitionConnectedHomeSettings

    Swift

    class AWSRekognitionConnectedHomeSettings
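
    Example

    A minimal sketch of defining these settings. The label names ("PERSON", "PET", "PACKAGE") and the AWSRekognitionStreamProcessorSettings container are assumptions to verify against the service documentation:

    import AWSRekognition

    // Detect people, pets, and packages; drop detections below 80% confidence.
    let connectedHome = AWSRekognitionConnectedHomeSettings()
    connectedHome?.labels = ["PERSON", "PET", "PACKAGE"]  // Labels is required
    connectedHome?.minConfidence = 80

    // Attach to the settings object passed in a CreateStreamProcessor request.
    let settings = AWSRekognitionStreamProcessorSettings()
    settings?.connectedHome = connectedHome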
  • The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects.

    Declaration

    Objective-C

    @interface AWSRekognitionConnectedHomeSettingsForUpdate

    Swift

    class AWSRekognitionConnectedHomeSettingsForUpdate
  • Information about an inappropriate, unwanted, or offensive content label detection in a stored video.

    Declaration

    Objective-C

    @interface AWSRekognitionContentModerationDetection

    Swift

    class AWSRekognitionContentModerationDetection
  • Declaration

    Objective-C

    @interface AWSRekognitionReplicateProjectVersionRequest

    Swift

    class AWSRekognitionReplicateProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionReplicateProjectVersionResponse

    Swift

    class AWSRekognitionReplicateProjectVersionResponse
  • Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.

    Declaration

    Objective-C

    @interface AWSRekognitionCoversBodyPart

    Swift

    class AWSRekognitionCoversBodyPart
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateCollectionRequest

    Swift

    class AWSRekognitionCreateCollectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateCollectionResponse

    Swift

    class AWSRekognitionCreateCollectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateDatasetRequest

    Swift

    class AWSRekognitionCreateDatasetRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateDatasetResponse

    Swift

    class AWSRekognitionCreateDatasetResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateFaceLivenessSessionRequest

    Swift

    class AWSRekognitionCreateFaceLivenessSessionRequest
  • A session settings object. It contains settings for the operation to be performed. It accepts arguments for OutputConfig and AuditImagesLimit.

    Declaration

    Objective-C

    @interface AWSRekognitionCreateFaceLivenessSessionRequestSettings

    Swift

    class AWSRekognitionCreateFaceLivenessSessionRequestSettings
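
    Example

    A short sketch of the two settings named above, using the LivenessOutputConfig type described later in this list; the bucket name and key prefix are placeholders:

    import AWSRekognition

    // Store audit images in the caller's own bucket.
    let outputConfig = AWSRekognitionLivenessOutputConfig()
    outputConfig?.s3Bucket = "my-liveness-audit-bucket"  // placeholder; S3Bucket is required
    outputConfig?.s3KeyPrefix = "sessions/"              // placeholder

    let settings = AWSRekognitionCreateFaceLivenessSessionRequestSettings()
    settings?.outputConfig = outputConfig
    settings?.auditImagesLimit = 2  // keep up to two audit images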
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateFaceLivenessSessionResponse

    Swift

    class AWSRekognitionCreateFaceLivenessSessionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectRequest

    Swift

    class AWSRekognitionCreateProjectRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectResponse

    Swift

    class AWSRekognitionCreateProjectResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectVersionRequest

    Swift

    class AWSRekognitionCreateProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectVersionResponse

    Swift

    class AWSRekognitionCreateProjectVersionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateStreamProcessorRequest

    Swift

    class AWSRekognitionCreateStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateStreamProcessorResponse

    Swift

    class AWSRekognitionCreateStreamProcessorResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateUserRequest

    Swift

    class AWSRekognitionCreateUserRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateUserResponse

    Swift

    class AWSRekognitionCreateUserResponse
  • A custom label detected in an image by a call to DetectCustomLabels.

    Declaration

    Objective-C

    @interface AWSRekognitionCustomLabel

    Swift

    class AWSRekognitionCustomLabel
  • Feature-specific configuration for the training job. The configuration provided for the job must match the feature type parameter associated with the project. If the configuration and feature type do not match, an InvalidParameterException is returned.

    Declaration

    Objective-C

    @interface AWSRekognitionCustomizationFeatureConfig

    Swift

    class AWSRekognitionCustomizationFeatureConfig
  • Configuration options for Content Moderation training.

    Declaration

    Objective-C

    @interface AWSRekognitionCustomizationFeatureContentModerationConfig

    Swift

    class AWSRekognitionCustomizationFeatureContentModerationConfig
  • Describes updates or additions to a dataset. A single update or addition is an entry (JSON Line) that provides information about a single image. To update an existing entry, you match the source-ref field of the update entry with the source-ref field of the entry that you want to update. If the source-ref field doesn’t match an existing entry, the entry is added to the dataset as a new entry.

    Required parameters: [GroundTruth]

    Declaration

    Objective-C

    @interface AWSRekognitionDatasetChanges

    Swift

    class AWSRekognitionDatasetChanges
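
    Example

    To make the update mechanism concrete, this sketch builds one JSON Line and attaches it as the GroundTruth payload; the manifest fields and S3 path are simplified, hypothetical examples of the SageMaker Ground Truth format:

    import AWSRekognition

    // One JSON Line per image; source-ref identifies the entry to update,
    // or adds a new entry if no existing source-ref matches.
    let jsonLine = #"{"source-ref":"s3://my-bucket/images/dog.jpg","my-label":1}"#

    let changes = AWSRekognitionDatasetChanges()
    changes?.groundTruth = jsonLine.data(using: .utf8)  // GroundTruth is required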
  • A description for a dataset. For more information, see DescribeDataset.

    The status fields Status, StatusMessage, and StatusMessageCode reflect the last operation on the dataset.

    Declaration

    Objective-C

    @interface AWSRekognitionDatasetDescription

    Swift

    class AWSRekognitionDatasetDescription
  • Describes a dataset label. For more information, see ListDatasetLabels.

    Declaration

    Objective-C

    @interface AWSRekognitionDatasetLabelDescription

    Swift

    class AWSRekognitionDatasetLabelDescription
  • Statistics about a label used in a dataset. For more information, see DatasetLabelDescription.

    Declaration

    Objective-C

    @interface AWSRekognitionDatasetLabelStats

    Swift

    class AWSRekognitionDatasetLabelStats
  • Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see ProjectDescription.

    Declaration

    Objective-C

    @interface AWSRekognitionDatasetMetadata

    Swift

    class AWSRekognitionDatasetMetadata
  • The source that Amazon Rekognition Custom Labels uses to create a dataset. To use an Amazon SageMaker format manifest file, specify the S3 bucket location in the GroundTruthManifest field. The S3 bucket must be in your AWS account. To create a copy of an existing dataset, specify the Amazon Resource Name (ARN) of an existing dataset in DatasetArn.

    You need to specify a value for DatasetArn or GroundTruthManifest, but not both. If you supply both values, or if you don’t specify any values, an InvalidParameterException occurs.

    For more information, see CreateDataset.

    Declaration

    Objective-C

    @interface AWSRekognitionDatasetSource

    Swift

    class AWSRekognitionDatasetSource
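
    Example

    A sketch of the either/or rule above: set exactly one of GroundTruthManifest or DatasetArn (the bucket and key shown are placeholders):

    import AWSRekognition

    // Option 1: create the dataset from a SageMaker Ground Truth manifest in S3.
    let s3Object = AWSRekognitionS3Object()
    s3Object?.bucket = "my-training-bucket"       // placeholder; must be in your account
    s3Object?.name = "manifests/output.manifest"  // placeholder key

    let manifest = AWSRekognitionGroundTruthManifest()
    manifest?.s3Object = s3Object

    let source = AWSRekognitionDatasetSource()
    source?.groundTruthManifest = manifest

    // Option 2 (mutually exclusive): copy an existing dataset instead.
    // source?.datasetArn = "arn:aws:rekognition:..."  // placeholder; do not set both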
  • Provides statistics about a dataset. For more information, see DescribeDataset.

    Declaration

    Objective-C

    @interface AWSRekognitionDatasetStats

    Swift

    class AWSRekognitionDatasetStats
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteCollectionRequest

    Swift

    class AWSRekognitionDeleteCollectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteCollectionResponse

    Swift

    class AWSRekognitionDeleteCollectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteDatasetRequest

    Swift

    class AWSRekognitionDeleteDatasetRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteDatasetResponse

    Swift

    class AWSRekognitionDeleteDatasetResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteFacesRequest

    Swift

    class AWSRekognitionDeleteFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteFacesResponse

    Swift

    class AWSRekognitionDeleteFacesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteProjectPolicyRequest

    Swift

    class AWSRekognitionDeleteProjectPolicyRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteProjectPolicyResponse

    Swift

    class AWSRekognitionDeleteProjectPolicyResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteProjectRequest

    Swift

    class AWSRekognitionDeleteProjectRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteProjectResponse

    Swift

    class AWSRekognitionDeleteProjectResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteProjectVersionRequest

    Swift

    class AWSRekognitionDeleteProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteProjectVersionResponse

    Swift

    class AWSRekognitionDeleteProjectVersionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteStreamProcessorRequest

    Swift

    class AWSRekognitionDeleteStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteStreamProcessorResponse

    Swift

    class AWSRekognitionDeleteStreamProcessorResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteUserRequest

    Swift

    class AWSRekognitionDeleteUserRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteUserResponse

    Swift

    class AWSRekognitionDeleteUserResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeCollectionRequest

    Swift

    class AWSRekognitionDescribeCollectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeCollectionResponse

    Swift

    class AWSRekognitionDescribeCollectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeDatasetRequest

    Swift

    class AWSRekognitionDescribeDatasetRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeDatasetResponse

    Swift

    class AWSRekognitionDescribeDatasetResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectVersionsRequest

    Swift

    class AWSRekognitionDescribeProjectVersionsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectVersionsResponse

    Swift

    class AWSRekognitionDescribeProjectVersionsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectsRequest

    Swift

    class AWSRekognitionDescribeProjectsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectsResponse

    Swift

    class AWSRekognitionDescribeProjectsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeStreamProcessorRequest

    Swift

    class AWSRekognitionDescribeStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeStreamProcessorResponse

    Swift

    class AWSRekognitionDescribeStreamProcessorResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectCustomLabelsRequest

    Swift

    class AWSRekognitionDetectCustomLabelsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectCustomLabelsResponse

    Swift

    class AWSRekognitionDetectCustomLabelsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectFacesRequest

    Swift

    class AWSRekognitionDetectFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectFacesResponse

    Swift

    class AWSRekognitionDetectFacesResponse
  • The background of the image with regard to image quality and dominant colors.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsImageBackground

    Swift

    class AWSRekognitionDetectLabelsImageBackground
  • The foreground of the image with regard to image quality and dominant colors.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsImageForeground

    Swift

    class AWSRekognitionDetectLabelsImageForeground
  • Information about the quality and dominant colors of an input image. Quality and color information is returned for the entire image, foreground, and background.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsImageProperties

    Swift

    class AWSRekognitionDetectLabelsImageProperties
  • Settings for the IMAGE_PROPERTIES feature type.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsImagePropertiesSettings

    Swift

    class AWSRekognitionDetectLabelsImagePropertiesSettings
  • The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsImageQuality

    Swift

    class AWSRekognitionDetectLabelsImageQuality
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsRequest

    Swift

    class AWSRekognitionDetectLabelsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsResponse

    Swift

    class AWSRekognitionDetectLabelsResponse
  • Settings for the DetectLabels request. Settings can include filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification of a maximum number of dominant colors.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsSettings

    Swift

    class AWSRekognitionDetectLabelsSettings
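
    Example

    A sketch combining both filter types described above; the filter strings are hypothetical examples, and the property names are assumed to follow the generated model classes:

    import AWSRekognition

    // GENERAL_LABELS: include one category, exclude one label (example values).
    let generalLabels = AWSRekognitionGeneralLabelsSettings()
    generalLabels?.labelCategoryInclusionFilters = ["Animals and Pets"]
    generalLabels?.labelExclusionFilters = ["Cat"]

    // IMAGE_PROPERTIES: cap the number of dominant colors returned.
    let imageProperties = AWSRekognitionDetectLabelsImagePropertiesSettings()
    imageProperties?.maxDominantColors = 5

    let settings = AWSRekognitionDetectLabelsSettings()
    settings?.generalLabels = generalLabels
    settings?.imageProperties = imageProperties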
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectModerationLabelsRequest

    Swift

    class AWSRekognitionDetectModerationLabelsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectModerationLabelsResponse

    Swift

    class AWSRekognitionDetectModerationLabelsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectProtectiveEquipmentRequest

    Swift

    class AWSRekognitionDetectProtectiveEquipmentRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectProtectiveEquipmentResponse

    Swift

    class AWSRekognitionDetectProtectiveEquipmentResponse
  • A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectTextFilters

    Swift

    class AWSRekognitionDetectTextFilters
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectTextRequest

    Swift

    class AWSRekognitionDetectTextRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectTextResponse

    Swift

    class AWSRekognitionDetectTextResponse
  • A set of parameters that allow you to filter out certain results from your returned results.

    Declaration

    Objective-C

    @interface AWSRekognitionDetectionFilter

    Swift

    class AWSRekognitionDetectionFilter
  • Declaration

    Objective-C

    @interface AWSRekognitionDisassociateFacesRequest

    Swift

    class AWSRekognitionDisassociateFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDisassociateFacesResponse

    Swift

    class AWSRekognitionDisassociateFacesResponse
  • Provides face metadata for the faces that are disassociated from a specific UserID.

    Declaration

    Objective-C

    @interface AWSRekognitionDisassociatedFace

    Swift

    class AWSRekognitionDisassociatedFace
  • A training dataset or a test dataset used in a dataset distribution operation. For more information, see DistributeDatasetEntries.

    Required parameters: [Arn]

    Declaration

    Objective-C

    @interface AWSRekognitionDistributeDataset

    Swift

    class AWSRekognitionDistributeDataset
  • Declaration

    Objective-C

    @interface AWSRekognitionDistributeDatasetEntriesRequest

    Swift

    class AWSRekognitionDistributeDatasetEntriesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDistributeDatasetEntriesResponse

    Swift

    class AWSRekognitionDistributeDatasetEntriesResponse
  • A description of the dominant colors in an image.

    Declaration

    Objective-C

    @interface AWSRekognitionDominantColor

    Swift

    class AWSRekognitionDominantColor
  • The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person’s face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.

    Declaration

    Objective-C

    @interface AWSRekognitionEmotion

    Swift

    class AWSRekognitionEmotion
  • Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.

    Declaration

    Objective-C

    @interface AWSRekognitionEquipmentDetection

    Swift

    class AWSRekognitionEquipmentDetection
  • The evaluation results for the training of a model.

    Declaration

    Objective-C

    @interface AWSRekognitionEvaluationResult

    Swift

    class AWSRekognitionEvaluationResult
  • Indicates the direction the eyes are gazing in (independent of the head pose), as determined by pitch and yaw.

    Declaration

    Objective-C

    @interface AWSRekognitionEyeDirection

    Swift

    class AWSRekognitionEyeDirection
  • Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

    Declaration

    Objective-C

    @interface AWSRekognitionEyeOpen

    Swift

    class AWSRekognitionEyeOpen
  • Indicates whether or not the face is wearing eyeglasses, and the confidence level in the determination.

    Declaration

    Objective-C

    @interface AWSRekognitionEyeglasses

    Swift

    class AWSRekognitionEyeglasses
  • Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

    Declaration

    Objective-C

    @interface AWSRekognitionFace

    Swift

    class AWSRekognitionFace
  • Structure containing attributes of the face that the algorithm detected.

    A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality.

    GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don’t have a FaceAttributes input parameter:

    • GetCelebrityRecognition

    • GetPersonTracking

    • GetFaceSearch

    The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.

    Declaration

    Objective-C

    @interface AWSRekognitionFaceDetail

    Swift

    class AWSRekognitionFaceDetail
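
    Example

    To illustrate the attribute selection above, a hedged sketch of a DetectFaces call that requests all facial attributes instead of the default set (the S3 location is a placeholder, and the client is assumed to be configured with credentials):

    import AWSRekognition

    func detectAllFaceAttributes() {
        let s3Object = AWSRekognitionS3Object()
        s3Object?.bucket = "my-photo-bucket"  // placeholder
        s3Object?.name = "photos/group.jpg"   // placeholder

        let image = AWSRekognitionImage()
        image?.s3Object = s3Object

        guard let request = AWSRekognitionDetectFacesRequest() else { return }
        request.image = image
        request.attributes = ["ALL"]  // ["DEFAULT"] returns only the default five

        AWSRekognition.default().detectFaces(request) { response, error in
            // FaceDetail objects now carry the full attribute set
            // (AgeRange, Emotions, Smile, and so on).
            response?.faceDetails?.forEach {
                print($0.ageRange?.low ?? 0, "-", $0.ageRange?.high ?? 0)
            }
        }
    }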
  • Information about a face detected in a video analysis request and the time the face was detected in the video.

    Declaration

    Objective-C

    @interface AWSRekognitionFaceDetection

    Swift

    class AWSRekognitionFaceDetection
  • Provides face metadata. In addition, it provides the confidence in the match of this face with the input face.

    Declaration

    Objective-C

    @interface AWSRekognitionFaceMatch

    Swift

    class AWSRekognitionFaceMatch
  • FaceOccluded should return “true” with a high confidence score if a detected face’s eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. FaceOccluded should return “false” with a high confidence score if common occurrences that do not impact face verification are detected, such as eyeglasses, lightly tinted sunglasses, strands of hair, and others.

    You can use FaceOccluded to determine if an obstruction on a face negatively impacts using the image for face matching.

    Declaration

    Objective-C

    @interface AWSRekognitionFaceOccluded

    Swift

    class AWSRekognitionFaceOccluded
  • Object containing both the face metadata (stored in the backend database) and facial attributes that are detected but aren’t stored in the database.

    Declaration

    Objective-C

    @interface AWSRekognitionFaceRecord

    Swift

    class AWSRekognitionFaceRecord
  • Input face recognition parameters for an Amazon Rekognition stream processor. Includes the collection to use for face recognition and the face attributes to detect. Defining the settings is required in the request parameter for CreateStreamProcessor.

    Declaration

    Objective-C

    @interface AWSRekognitionFaceSearchSettings

    Swift

    class AWSRekognitionFaceSearchSettings
  • The predicted gender of a detected face.

    Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn’t use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.

    Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.

    We don’t recommend using gender binary predictions to make decisions that impact an individual’s rights, privacy, or access to services.

    Declaration

    Objective-C

    @interface AWSRekognitionGender

    Swift

    class AWSRekognitionGender
  • Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories. To see a list of label categories, see Detecting Labels.

    Declaration

    Objective-C

    @interface AWSRekognitionGeneralLabelsSettings

    Swift

    class AWSRekognitionGeneralLabelsSettings
  • Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.

    Declaration

    Objective-C

    @interface AWSRekognitionGeometry

    Swift

    class AWSRekognitionGeometry
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityInfoRequest

    Swift

    class AWSRekognitionGetCelebrityInfoRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityInfoResponse

    Swift

    class AWSRekognitionGetCelebrityInfoResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityRecognitionRequest

    Swift

    class AWSRekognitionGetCelebrityRecognitionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityRecognitionResponse

    Swift

    class AWSRekognitionGetCelebrityRecognitionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetContentModerationRequest

    Swift

    class AWSRekognitionGetContentModerationRequest
  • Contains metadata about a content moderation request, including the SortBy and AggregateBy options.

    Declaration

    Objective-C

    @interface AWSRekognitionGetContentModerationRequestMetadata

    Swift

    class AWSRekognitionGetContentModerationRequestMetadata
  • Declaration

    Objective-C

    @interface AWSRekognitionGetContentModerationResponse

    Swift

    class AWSRekognitionGetContentModerationResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceDetectionRequest

    Swift

    class AWSRekognitionGetFaceDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceDetectionResponse

    Swift

    class AWSRekognitionGetFaceDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceLivenessSessionResultsRequest

    Swift

    class AWSRekognitionGetFaceLivenessSessionResultsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceLivenessSessionResultsResponse

    Swift

    class AWSRekognitionGetFaceLivenessSessionResultsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceSearchRequest

    Swift

    class AWSRekognitionGetFaceSearchRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceSearchResponse

    Swift

    class AWSRekognitionGetFaceSearchResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetLabelDetectionRequest

    Swift

    class AWSRekognitionGetLabelDetectionRequest
  • Contains metadata about a label detection request, including the SortBy and AggregateBy options.

    Declaration

    Objective-C

    @interface AWSRekognitionGetLabelDetectionRequestMetadata

    Swift

    class AWSRekognitionGetLabelDetectionRequestMetadata
  • Declaration

    Objective-C

    @interface AWSRekognitionGetLabelDetectionResponse

    Swift

    class AWSRekognitionGetLabelDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetPersonTrackingRequest

    Swift

    class AWSRekognitionGetPersonTrackingRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetPersonTrackingResponse

    Swift

    class AWSRekognitionGetPersonTrackingResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetSegmentDetectionRequest

    Swift

    class AWSRekognitionGetSegmentDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetSegmentDetectionResponse

    Swift

    class AWSRekognitionGetSegmentDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetTextDetectionRequest

    Swift

    class AWSRekognitionGetTextDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetTextDetectionResponse

    Swift

    class AWSRekognitionGetTextDetectionResponse
  • The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.

    Declaration

    Objective-C

    @interface AWSRekognitionGroundTruthManifest

    Swift

    class AWSRekognitionGroundTruthManifest
  • Shows the results of the human-in-the-loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

    Declaration

    Objective-C

    @interface AWSRekognitionHumanLoopActivationOutput

    Swift

    class AWSRekognitionHumanLoopActivationOutput
  • Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.

    Required parameters: [HumanLoopName, FlowDefinitionArn]

    Declaration

    Objective-C

    @interface AWSRekognitionHumanLoopConfig

    Swift

    class AWSRekognitionHumanLoopConfig
  • Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.

    Declaration

    Objective-C

    @interface AWSRekognitionHumanLoopDataAttributes

    Swift

    class AWSRekognitionHumanLoopDataAttributes
  • Provides the input image either as bytes or an S3 object.

    You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

    For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

    You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

    The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

    If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

    For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionImage

    Swift

    class AWSRekognitionImage
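
    Example

    A short sketch of the two input modes described above; the file path and bucket name are placeholders, and an AWS SDK handles the base64 encoding of the Bytes property for you:

    import Foundation
    import AWSRekognition

    // Mode 1: raw bytes read from the local file system.
    let localImage = AWSRekognitionImage()
    localImage?.bytes = try? Data(contentsOf: URL(fileURLWithPath: "/tmp/face.jpg"))

    // Mode 2: an object already stored in S3 (no base64 encoding needed);
    // the bucket must be in the same region as the Rekognition endpoint.
    let s3Object = AWSRekognitionS3Object()
    s3Object?.bucket = "my-photo-bucket"  // placeholder
    s3Object?.name = "photos/face.jpg"    // placeholder

    let s3Image = AWSRekognitionImage()
    s3Image?.s3Object = s3Object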
  • Identifies face image brightness and sharpness.

    Declaration

    Objective-C

    @interface AWSRekognitionImageQuality

    Swift

    class AWSRekognitionImageQuality
  • Declaration

    Objective-C

    @interface AWSRekognitionIndexFacesRequest

    Swift

    class AWSRekognitionIndexFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionIndexFacesResponse

    Swift

    class AWSRekognitionIndexFacesResponse
  • An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).

    Declaration

    Objective-C

    @interface AWSRekognitionInstance

    Swift

    class AWSRekognitionInstance
  • The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionKinesisDataStream

    Swift

    class AWSRekognitionKinesisDataStream
  • Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionKinesisVideoStream

    Swift

    class AWSRekognitionKinesisVideoStream
  • Specifies the starting point in a Kinesis stream to start processing. You can use either the producer timestamp or the fragment number; one of the two is required. If you use the producer timestamp, you must provide the time in milliseconds. For more information about fragment numbers, see Fragment.

    Declaration

    Objective-C

    @interface AWSRekognitionKinesisVideoStreamStartSelector

    Swift

    class AWSRekognitionKinesisVideoStreamStartSelector
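
    Example

    A brief sketch of the two starting points; set one or the other, and note the producer timestamp is given in milliseconds (the fragment number is a placeholder):

    import AWSRekognition

    let selector = AWSRekognitionKinesisVideoStreamStartSelector()

    // Start from a producer timestamp, expressed in milliseconds.
    selector?.producerTimestamp = NSNumber(value: 1_700_000_000_000)

    // Or start from a specific fragment instead (one of the two is required):
    // selector?.fragmentNumber = "9134385233..."  // placeholder fragment number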
  • The known gender identity for the celebrity that matches the provided ID. The known gender identity can be Male, Female, Nonbinary, or Unlisted.

    Declaration

    Objective-C

    @interface AWSRekognitionKnownGender

    Swift

    class AWSRekognitionKnownGender
  • Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.

    Declaration

    Objective-C

    @interface AWSRekognitionLabel

    Swift

    class AWSRekognitionLabel
  • A potential alias for a given label.

    Declaration

    Objective-C

    @interface AWSRekognitionLabelAlias

    Swift

    class AWSRekognitionLabelAlias
  • The category that applies to a given label.

    Declaration

    Objective-C

    @interface AWSRekognitionLabelCategory

    Swift

    class AWSRekognitionLabelCategory
  • Information about a label detected in a video analysis request and the time the label was detected in the video.

    Declaration

    Objective-C

    @interface AWSRekognitionLabelDetection

    Swift

    class AWSRekognitionLabelDetection
  • Contains the specified filters that should be applied to a list of returned GENERAL_LABELS.

    Declaration

    Objective-C

    @interface AWSRekognitionLabelDetectionSettings

    Swift

    class AWSRekognitionLabelDetectionSettings
  • Indicates the location of the landmark on the face.

    Declaration

    Objective-C

    @interface AWSRekognitionLandmark

    Swift

    class AWSRekognitionLandmark
  • Declaration

    Objective-C

    @interface AWSRekognitionListCollectionsRequest

    Swift

    class AWSRekognitionListCollectionsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListCollectionsResponse

    Swift

    class AWSRekognitionListCollectionsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListDatasetEntriesRequest

    Swift

    class AWSRekognitionListDatasetEntriesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListDatasetEntriesResponse

    Swift

    class AWSRekognitionListDatasetEntriesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListDatasetLabelsRequest

    Swift

    class AWSRekognitionListDatasetLabelsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListDatasetLabelsResponse

    Swift

    class AWSRekognitionListDatasetLabelsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListFacesRequest

    Swift

    class AWSRekognitionListFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListFacesResponse

    Swift

    class AWSRekognitionListFacesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListProjectPoliciesRequest

    Swift

    class AWSRekognitionListProjectPoliciesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListProjectPoliciesResponse

    Swift

    class AWSRekognitionListProjectPoliciesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListStreamProcessorsRequest

    Swift

    class AWSRekognitionListStreamProcessorsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListStreamProcessorsResponse

    Swift

    class AWSRekognitionListStreamProcessorsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListTagsForResourceRequest

    Swift

    class AWSRekognitionListTagsForResourceRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListTagsForResourceResponse

    Swift

    class AWSRekognitionListTagsForResourceResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListUsersRequest

    Swift

    class AWSRekognitionListUsersRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListUsersResponse

    Swift

    class AWSRekognitionListUsersResponse
  • Contains settings that specify the location of an Amazon S3 bucket used to store the output of a Face Liveness session. Note that the S3 bucket must be located in the caller’s AWS account and in the same region as the Face Liveness endpoint. Additionally, the Amazon S3 object keys are auto-generated by the Face Liveness system.

    Required parameters: [S3Bucket]

    Declaration

    Objective-C

    @interface AWSRekognitionLivenessOutputConfig

    Swift

    class AWSRekognitionLivenessOutputConfig
  • Contains metadata for a UserID matched with a given face.

    Declaration

    Objective-C

    @interface AWSRekognitionMatchedUser

    Swift

    class AWSRekognitionMatchedUser
  • Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionModerationLabel

    Swift

    class AWSRekognitionModerationLabel
  • Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

    Declaration

    Objective-C

    @interface AWSRekognitionMouthOpen

    Swift

    class AWSRekognitionMouthOpen
  • Indicates whether or not the face has a mustache, and the confidence level in the determination.

    Declaration

    Objective-C

    @interface AWSRekognitionMustache

    Swift

    class AWSRekognitionMustache
  • The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.

    Required parameters: [SNSTopicArn, RoleArn]

    Declaration

    Objective-C

    @interface AWSRekognitionNotificationChannel

    Swift

    class AWSRekognitionNotificationChannel
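
    Example

    A minimal sketch of wiring the completion-status topic into a Start* video operation; both ARNs are placeholders, and the Swift property casing (snsTopicArn) is assumed from the generated declaration:

    import AWSRekognition

    let channel = AWSRekognitionNotificationChannel()
    // Topic name begins with "AmazonRekognition", as required when using the
    // AmazonRekognitionServiceRole permissions policy (see above).
    channel?.snsTopicArn = "arn:aws:sns:us-east-1:111122223333:AmazonRekognitionTopic"  // placeholder
    channel?.roleArn = "arn:aws:iam::111122223333:role/RekognitionVideoRole"            // placeholder

    let request = AWSRekognitionStartLabelDetectionRequest()
    request?.notificationChannel = channel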
  • The S3 bucket and folder location where training output is placed.

    Declaration

    Objective-C

    @interface AWSRekognitionOutputConfig

    Swift

    class AWSRekognitionOutputConfig
  • A parent label for a label. A label can have 0, 1, or more parents.

    Declaration

    Objective-C

    @interface AWSRekognitionParent

    Swift

    class AWSRekognitionParent
  • Details about a person detected in a video analysis request.

    Declaration

    Objective-C

    @interface AWSRekognitionPersonDetail

    Swift

    class AWSRekognitionPersonDetail
  • Details and path tracking information for a single time a person’s path is tracked in a video. Amazon Rekognition operations that track people’s paths return an array of PersonDetection objects with elements for each time a person’s path is tracked in a video.

    For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionPersonDetection

    Swift

    class AWSRekognitionPersonDetection
  • Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.

    Declaration

    Objective-C

    @interface AWSRekognitionPersonMatch

    Swift

    class AWSRekognitionPersonMatch
  • The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.

    An array of Point objects makes up a Polygon. A Polygon is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionPoint

    Swift

    class AWSRekognitionPoint
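
    Example

    Mirroring the ratio math above, a sketch that maps a returned Polygon (an array of Point objects) onto pixel coordinates; the image size is assumed for the example:

    import AWSRekognition
    import CoreGraphics

    // Convert ratio-based polygon points into pixel coordinates.
    func pixelPolygon(from polygon: [AWSRekognitionPoint],
                      imageWidth: Double, imageHeight: Double) -> [CGPoint] {
        polygon.compactMap { point in
            guard let x = point.x, let y = point.y else { return nil }
            return CGPoint(x: x.doubleValue * imageWidth,
                           y: y.doubleValue * imageHeight)
        }
    }
    // X=0.5, Y=0.25 on a 700x200 image maps to the (350, 50) pixel, as above.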
  • Indicates the pose of the face as determined by its pitch, roll, and yaw.

    Declaration

    Objective-C

    @interface AWSRekognitionPose

    Swift

    class AWSRekognitionPose
  • A description of an Amazon Rekognition Custom Labels project. For more information, see DescribeProjects.

    Declaration

    Objective-C

    @interface AWSRekognitionProjectDescription

    Swift

    class AWSRekognitionProjectDescription
  • Describes a project policy in the response from ListProjectPolicies.

    Declaration

    Objective-C

    @interface AWSRekognitionProjectPolicy

    Swift

    class AWSRekognitionProjectPolicy
  • A description of a version of an Amazon Rekognition project.

    Declaration

    Objective-C

    @interface AWSRekognitionProjectVersionDescription

    Swift

    class AWSRekognitionProjectVersionDescription
  • Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.

    Declaration

    Objective-C

    @interface AWSRekognitionProtectiveEquipmentBodyPart

    Swift

    class AWSRekognitionProtectiveEquipmentBodyPart
  • A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.

    Declaration

    Objective-C

    @interface AWSRekognitionProtectiveEquipmentPerson

    Swift

    class AWSRekognitionProtectiveEquipmentPerson
  • Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.

    Required parameters: [MinConfidence, RequiredEquipmentTypes]

    Declaration

    Objective-C

    @interface AWSRekognitionProtectiveEquipmentSummarizationAttributes

    Swift

    class AWSRekognitionProtectiveEquipmentSummarizationAttributes
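
    Example

    A sketch of the two required parameters noted above; the equipment-type strings (FACE_COVER, HEAD_COVER, HAND_COVER) mirror the service's PPE types and should be verified against the API reference:

    import AWSRekognition

    let summarization = AWSRekognitionProtectiveEquipmentSummarizationAttributes()
    summarization?.minConfidence = 80  // ignore detections below 80% confidence
    summarization?.requiredEquipmentTypes = ["FACE_COVER", "HEAD_COVER"]

    let request = AWSRekognitionDetectProtectiveEquipmentRequest()
    request?.summarizationAttributes = summarization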
  • Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).

    To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location (BoundingBox) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment.

    Declaration

    Objective-C

    @interface AWSRekognitionProtectiveEquipmentSummary

    Swift

    class AWSRekognitionProtectiveEquipmentSummary
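
    Example

    Following the counting guidance above, a sketch that totals each category from a returned summary (each array holds person IDs, so its size is the per-category total):

    import AWSRekognition

    func printTotals(for summary: AWSRekognitionProtectiveEquipmentSummary) {
        let with = summary.personsWithRequiredEquipment?.count ?? 0
        let without = summary.personsWithoutRequiredEquipment?.count ?? 0
        let indeterminate = summary.personsIndeterminate?.count ?? 0
        print("with PPE: \(with), without PPE: \(without), indeterminate: \(indeterminate)")
    }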
  • Declaration

    Objective-C

    @interface AWSRekognitionPutProjectPolicyRequest

    Swift

    class AWSRekognitionPutProjectPolicyRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionPutProjectPolicyResponse

    Swift

    class AWSRekognitionPutProjectPolicyResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionRecognizeCelebritiesRequest

    Swift

    class AWSRekognitionRecognizeCelebritiesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionRecognizeCelebritiesResponse

    Swift

    class AWSRekognitionRecognizeCelebritiesResponse
  • Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox or Polygon to set a region of the screen.

    A word, face, or label is included in the region if it is more than half in that region. If there is more than one region, the word, face, or label is compared with all regions of the screen. Any object of interest that is more than half in a region is kept in the results.

    Declaration

    Objective-C

    @interface AWSRekognitionRegionOfInterest

    Swift

    class AWSRekognitionRegionOfInterest
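
    Example

    A short sketch that restricts text detection to one region using a BoundingBox, relying on the more-than-half inclusion rule described above (the region values are arbitrary examples):

    import AWSRekognition

    // Look for text only in the top-left quadrant of the image.
    let box = AWSRekognitionBoundingBox()
    box?.left = 0.0
    box?.top = 0.0
    box?.width = 0.5
    box?.height = 0.5

    let region = AWSRekognitionRegionOfInterest()
    region?.boundingBox = box

    let filters = AWSRekognitionDetectTextFilters()
    filters?.regionsOfInterest = [region].compactMap { $0 }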
  • The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels.

    Declaration

    Objective-C

    @interface AWSRekognitionS3Destination

    Swift

    class AWSRekognitionS3Destination
  • Provides the S3 bucket name and object name.

    The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

    For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.

    Declaration

    Objective-C

    @interface AWSRekognitionS3Object

    Swift

    class AWSRekognitionS3Object
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesByImageRequest

    Swift

    class AWSRekognitionSearchFacesByImageRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesByImageResponse

    Swift

    class AWSRekognitionSearchFacesByImageResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesRequest

    Swift

    class AWSRekognitionSearchFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesResponse

    Swift

    class AWSRekognitionSearchFacesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchUsersByImageRequest

    Swift

    class AWSRekognitionSearchUsersByImageRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchUsersByImageResponse

    Swift

    class AWSRekognitionSearchUsersByImageResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchUsersRequest

    Swift

    class AWSRekognitionSearchUsersRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchUsersResponse

    Swift

    class AWSRekognitionSearchUsersResponse
  • Provides face metadata, such as the FaceId, BoundingBox, and Confidence, of the input face used for a search.

    Declaration

    Objective-C

    @interface AWSRekognitionSearchedFace

    Swift

    class AWSRekognitionSearchedFace
  • Contains data regarding the input face used for a search.

    Declaration

    Objective-C

    @interface AWSRekognitionSearchedFaceDetails

    Swift

    class AWSRekognitionSearchedFaceDetails
  • Contains metadata about a User searched for within a collection.

    Declaration

    Objective-C

    @interface AWSRekognitionSearchedUser

    Swift

    class AWSRekognitionSearchedUser
  • A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.

    Declaration

    Objective-C

    @interface AWSRekognitionSegmentDetection

    Swift

    class AWSRekognitionSegmentDetection
  • Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.

    Declaration

    Objective-C

    @interface AWSRekognitionSegmentTypeInfo

    Swift

    class AWSRekognitionSegmentTypeInfo
  • Information about a shot detection segment detected in a video. For more information, see SegmentDetection.

    Declaration

    Objective-C

    @interface AWSRekognitionShotSegment

    Swift

    class AWSRekognitionShotSegment
  • Indicates whether or not the face is smiling, and the confidence level in the determination.

    Declaration

    Objective-C

    @interface AWSRekognitionSmile

    Swift

    class AWSRekognitionSmile
  • Declaration

    Objective-C

    @interface AWSRekognitionStartCelebrityRecognitionRequest

    Swift

    class AWSRekognitionStartCelebrityRecognitionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartCelebrityRecognitionResponse

    Swift

    class AWSRekognitionStartCelebrityRecognitionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartContentModerationRequest

    Swift

    class AWSRekognitionStartContentModerationRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartContentModerationResponse

    Swift

    class AWSRekognitionStartContentModerationResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceDetectionRequest

    Swift

    class AWSRekognitionStartFaceDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceDetectionResponse

    Swift

    class AWSRekognitionStartFaceDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceSearchRequest

    Swift

    class AWSRekognitionStartFaceSearchRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceSearchResponse

    Swift

    class AWSRekognitionStartFaceSearchResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartLabelDetectionRequest

    Swift

    class AWSRekognitionStartLabelDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartLabelDetectionResponse

    Swift

    class AWSRekognitionStartLabelDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartPersonTrackingRequest

    Swift

    class AWSRekognitionStartPersonTrackingRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartPersonTrackingResponse

    Swift

    class AWSRekognitionStartPersonTrackingResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartProjectVersionRequest

    Swift

    class AWSRekognitionStartProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartProjectVersionResponse

    Swift

    class AWSRekognitionStartProjectVersionResponse
  • Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStartSegmentDetectionFilters

    Swift

    class AWSRekognitionStartSegmentDetectionFilters
  • Declaration

    Objective-C

    @interface AWSRekognitionStartSegmentDetectionRequest

    Swift

    class AWSRekognitionStartSegmentDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartSegmentDetectionResponse

    Swift

    class AWSRekognitionStartSegmentDetectionResponse
  • Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStartShotDetectionFilter

    Swift

    class AWSRekognitionStartShotDetectionFilter
  • Declaration

    Objective-C

    @interface AWSRekognitionStartStreamProcessorRequest

    Swift

    class AWSRekognitionStartStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartStreamProcessorResponse

    Swift

    class AWSRekognitionStartStreamProcessorResponse
  • Filters for the technical cue segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStartTechnicalCueDetectionFilter

    Swift

    class AWSRekognitionStartTechnicalCueDetectionFilter
  • A set of optional parameters that let you specify the criteria that text must meet to be included in your response. WordFilter filters words by their bounding-box height and width and by minimum confidence. RegionOfInterest restricts detection to a specific region of the screen. A sketch of building these filters follows this entry.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStartTextDetectionFilters

    Swift

    class AWSRekognitionStartTextDetectionFilters
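
    For illustration, a minimal Swift sketch of building these filters. The property names (wordFilter, regionsOfInterest, and the AWSRekognitionDetectionFilter fields) are assumed to follow the SDK's usual lowercase-first bridging of the API shapes.

    import AWSRekognition

    // Hedged sketch: assumes AWSRekognitionDetectionFilter exposes
    // minConfidence / minBoundingBoxHeight / minBoundingBoxWidth.
    let wordFilter = AWSRekognitionDetectionFilter()
    wordFilter?.minConfidence = 80          // drop words below 80% confidence
    wordFilter?.minBoundingBoxHeight = 0.05 // ratios of the overall frame size
    wordFilter?.minBoundingBoxWidth = 0.02

    let box = AWSRekognitionBoundingBox()
    box?.top = 0.0; box?.left = 0.0; box?.width = 1.0; box?.height = 0.25
    let region = AWSRekognitionRegionOfInterest()
    region?.boundingBox = box               // only search the top quarter of the screen

    let filters = AWSRekognitionStartTextDetectionFilters()
    filters?.wordFilter = wordFilter
    filters?.regionsOfInterest = [region].compactMap { $0 }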
  • Declaration

    Objective-C

    @interface AWSRekognitionStartTextDetectionRequest

    Swift

    class AWSRekognitionStartTextDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartTextDetectionResponse

    Swift

    class AWSRekognitionStartTextDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStopProjectVersionRequest

    Swift

    class AWSRekognitionStopProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStopProjectVersionResponse

    Swift

    class AWSRekognitionStopProjectVersionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStopStreamProcessorRequest

    Swift

    class AWSRekognitionStopStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStopStreamProcessorResponse

    Swift

    class AWSRekognitionStopStreamProcessorResponse
  • This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessingStartSelector

    Swift

    class AWSRekognitionStreamProcessingStartSelector
  • Specifies when to stop processing the stream. You can specify a maximum amount of time to process the video.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessingStopSelector

    Swift

    class AWSRekognitionStreamProcessingStopSelector
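
    A minimal sketch, assuming the MaxDurationInSeconds field is bridged as a maxDurationInSeconds property:

    import AWSRekognition

    // Hedged sketch: stop stream processing after at most two minutes of video.
    let stopSelector = AWSRekognitionStreamProcessingStopSelector()
    stopSelector?.maxDurationInSeconds = 120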
  • An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results. A creation sketch follows this entry.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessor

    Swift

    class AWSRekognitionStreamProcessor
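
    A hedged sketch of creating a face-search stream processor. The request shape (input, output, name, settings, roleArn) mirrors CreateStreamProcessor, but treat the ARNs and collection ID as placeholders:

    import AWSRekognition

    // Hedged sketch: read from a Kinesis video stream and write face matches
    // to a Kinesis data stream. All ARNs below are placeholders.
    let input = AWSRekognitionStreamProcessorInput()
    let videoStream = AWSRekognitionKinesisVideoStream()
    videoStream?.arn = "arn:aws:kinesisvideo:us-east-1:111122223333:stream/example"
    input?.kinesisVideoStream = videoStream

    let output = AWSRekognitionStreamProcessorOutput()
    let dataStream = AWSRekognitionKinesisDataStream()
    dataStream?.arn = "arn:aws:kinesis:us-east-1:111122223333:stream/example-out"
    output?.kinesisDataStream = dataStream

    let faceSearch = AWSRekognitionFaceSearchSettings()
    faceSearch?.collectionId = "my-collection"            // hypothetical collection
    faceSearch?.faceMatchThreshold = 90
    let settings = AWSRekognitionStreamProcessorSettings()
    settings?.faceSearch = faceSearch

    let request = AWSRekognitionCreateStreamProcessorRequest()
    request?.name = "example-processor"
    request?.input = input
    request?.output = output
    request?.settings = settings
    request?.roleArn = "arn:aws:iam::111122223333:role/RekognitionStreamRole"

    AWSRekognition.default().createStreamProcessor(request!).continueWith { task in
        if let error = task.error { print("CreateStreamProcessor failed: \(error)") }
        return nil
    }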
  • Allows you to opt in to or out of sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams.

    Required parameters: [OptIn]

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorDataSharingPreference

    Swift

    class AWSRekognitionStreamProcessorDataSharingPreference
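
    A minimal sketch, assuming the required OptIn field is bridged as a boolean optIn property:

    import AWSRekognition

    // Hedged sketch: opt this stream processor out of data sharing.
    let preference = AWSRekognitionStreamProcessorDataSharingPreference()
    preference?.optIn = false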
  • Information about the source streaming video.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorInput

    Swift

    class AWSRekognitionStreamProcessorInput
  • The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.

    Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition detects a person at second 2, a pet at second 4, and a person again at second 5, it sends two notifications: one for the person at second 2 and one for the pet at second 4. The repeat person detection at second 5 triggers no additional notification because a person has already been reported.

    Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete.

    Required parameters: [SNSTopicArn]

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorNotificationChannel

    Swift

    class AWSRekognitionStreamProcessorNotificationChannel
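
    A minimal sketch; the ARN is a placeholder, and snsTopicArn is the assumed Swift bridging of the required SNSTopicArn field:

    import AWSRekognition

    // Hedged sketch: route first-detection and end-of-session notifications
    // to an SNS topic (placeholder ARN).
    let channel = AWSRekognitionStreamProcessorNotificationChannel()
    channel?.snsTopicArn = "arn:aws:sns:us-east-1:111122223333:rekognition-events"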
  • Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorOutput

    Swift

    class AWSRekognitionStreamProcessorOutput
  • Input parameters used in a streaming video analyzed by an Amazon Rekognition stream processor. You can use FaceSearch to recognize faces in a streaming video, or you can use ConnectedHome to detect labels.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorSettings

    Swift

    class AWSRekognitionStreamProcessorSettings
  • The stream processor settings that you want to update. ConnectedHome settings can be updated to detect different labels with a different minimum confidence.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorSettingsForUpdate

    Swift

    class AWSRekognitionStreamProcessorSettingsForUpdate
  • The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.

    You get the training summary S3 bucket location by calling DescribeProjectVersions.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionSummary

    Swift

    class AWSRekognitionSummary
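
    A hedged sketch of reading the summary location from DescribeProjectVersions; the traversal (projectVersionDescriptions -> evaluationResult -> summary -> s3Object) mirrors the API model, and the project ARN is a placeholder:

    import AWSRekognition

    // Hedged sketch: print the S3 location of a model version's training summary.
    let request = AWSRekognitionDescribeProjectVersionsRequest()
    request?.projectArn = "arn:aws:rekognition:us-east-1:111122223333:project/example/123"

    AWSRekognition.default().describeProjectVersions(request!).continueWith { task in
        if let s3 = task.result?.projectVersionDescriptions?.first?
                .evaluationResult?.summary?.s3Object {
            print("Training summary: s3://\(s3.bucket ?? "")/\(s3.name ?? "")")
        }
        return nil
    }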
  • Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionSunglasses

    Swift

    class AWSRekognitionSunglasses
  • Declaration

    Objective-C

    @interface AWSRekognitionTagResourceRequest

    Swift

    class AWSRekognitionTagResourceRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionTagResourceResponse

    Swift

    class AWSRekognitionTagResourceResponse
  • Information about a technical cue segment. For more information, see SegmentDetection.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTechnicalCueSegment

    Swift

    class AWSRekognitionTechnicalCueSegment
  • The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition creates the test dataset for you from a temporary split of the training dataset.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTestingData

    Swift

    class AWSRekognitionTestingData
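
    A minimal sketch, assuming AutoCreate is bridged as a boolean autoCreate property:

    import AWSRekognition

    // Hedged sketch: ask Amazon Rekognition to split the training dataset
    // instead of supplying explicit test assets.
    let testingData = AWSRekognitionTestingData()
    testingData?.autoCreate = true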
  • SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTestingDataResult

    Swift

    class AWSRekognitionTestingDataResult
  • Information about a word or line of text detected by DetectText.

    The DetectedText field contains the text that Amazon Rekognition detected in the image.

    Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.

    For more information, see Detecting text in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTextDetection

    Swift

    class AWSRekognitionTextDetection
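
    To make the Id/ParentId relationship concrete, a hedged sketch that groups detected words under their parent lines. It assumes the SDK's usual renaming of the Id field to identifier and the Type field to types.

    import AWSRekognition

    // Hedged sketch: collect the words of each detected line, keyed by the
    // line's Id (a word's parentId). Word Ids also index words within a line.
    func groupWordsByLine(_ detections: [AWSRekognitionTextDetection]) -> [NSNumber: [String]] {
        var lines: [NSNumber: [String]] = [:]
        for detection in detections where detection.types == .word {
            guard let parent = detection.parentId,
                  let text = detection.detectedText else { continue }
            lines[parent, default: []].append(text)
        }
        return lines
    }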
  • Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTextDetectionResult

    Swift

    class AWSRekognitionTextDetectionResult
  • The dataset used for training.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTrainingData

    Swift

    class AWSRekognitionTrainingData
  • The data validation manifest created for the training dataset during model training.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTrainingDataResult

    Swift

    class AWSRekognitionTrainingDataResult
  • A face that IndexFaces detected, but didn’t index. Use the Reasons response attribute to determine why a face wasn’t indexed.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUnindexedFace

    Swift

    class AWSRekognitionUnindexedFace
  • Face details inferred from the image but not used for search. The Reasons response attribute explains why a face wasn't used for the search.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUnsearchedFace

    Swift

    class AWSRekognitionUnsearchedFace
  • Contains metadata, such as FaceId, UserID, and Reasons, for a face that could not be associated.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUnsuccessfulFaceAssociation

    Swift

    class AWSRekognitionUnsuccessfulFaceAssociation
  • Contains metadata, such as FaceId, UserID, and Reasons, for a face that could not be deleted.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUnsuccessfulFaceDeletion

    Swift

    class AWSRekognitionUnsuccessfulFaceDeletion
  • Contains metadata, such as FaceId, UserID, and Reasons, for a face that could not be disassociated.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUnsuccessfulFaceDisassociation

    Swift

    class AWSRekognitionUnsuccessfulFaceDisassociation
  • Declaration

    Objective-C

    @interface AWSRekognitionUntagResourceRequest

    Swift

    class AWSRekognitionUntagResourceRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionUntagResourceResponse

    Swift

    class AWSRekognitionUntagResourceResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionUpdateDatasetEntriesRequest

    Swift

    class AWSRekognitionUpdateDatasetEntriesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionUpdateDatasetEntriesResponse

    Swift

    class AWSRekognitionUpdateDatasetEntriesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionUpdateStreamProcessorRequest

    Swift

    class AWSRekognitionUpdateStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionUpdateStreamProcessorResponse

    Swift

    class AWSRekognitionUpdateStreamProcessorResponse
  • Metadata of the user stored in a collection.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUser

    Swift

    class AWSRekognitionUser
  • Provides UserID metadata along with the confidence in the match of this UserID with the input face.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUserMatch

    Swift

    class AWSRekognitionUserMatch
  • Contains the Amazon S3 bucket location of the validation data for a model training job.

    The validation data includes error information for individual JSON Lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.

    You get the ValidationData object for the training dataset (TrainingDataResult) and the test dataset (TestingDataResult) by calling DescribeProjectVersions.

    The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionValidationData

    Swift

    class AWSRekognitionValidationData
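
    As with the training summary, you reach the validation data through DescribeProjectVersions. A hedged sketch, with the traversal (trainingDataResult -> validation -> assets -> groundTruthManifest) assumed from the API shapes and a placeholder project ARN:

    import AWSRekognition

    // Hedged sketch: print the S3 location of the training validation manifest.
    let request = AWSRekognitionDescribeProjectVersionsRequest()
    request?.projectArn = "arn:aws:rekognition:us-east-1:111122223333:project/example/123"

    AWSRekognition.default().describeProjectVersions(request!).continueWith { task in
        if let asset = task.result?.projectVersionDescriptions?.first?
                .trainingDataResult?.validation?.assets?.first,
           let s3 = asset.groundTruthManifest?.s3Object {
            print("Validation manifest: s3://\(s3.bucket ?? "")/\(s3.name ?? "")")
        }
        return nil
    }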
  • Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionVideo

    Swift

    class AWSRekognitionVideo
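
    A minimal sketch of pointing a start operation at a stored video; the bucket and key are placeholders:

    import AWSRekognition

    // Hedged sketch: start label detection on an .mp4 stored in S3.
    let s3Object = AWSRekognitionS3Object()
    s3Object?.bucket = "my-video-bucket"   // placeholder bucket
    s3Object?.name = "clips/example.mp4"   // placeholder key

    let video = AWSRekognitionVideo()
    video?.s3Object = s3Object

    let request = AWSRekognitionStartLabelDetectionRequest()
    request?.video = video

    AWSRekognition.default().startLabelDetection(request!).continueWith { task in
        if let jobId = task.result?.jobId { print("Started job \(jobId)") }
        return nil
    }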
  • Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition video operation.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionVideoMetadata

    Swift

    class AWSRekognitionVideoMetadata
  • Undocumented

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionResources : NSObject
    
    /** Returns the shared resources instance. */
    + (instancetype)sharedInstance;
    
    /** Returns the bundled Amazon Rekognition API definition as a JSON dictionary. */
    - (NSDictionary *)JSONObject;
    
    @end

    Swift

    class AWSRekognitionResources : NSObject
  • This is the API Reference for Amazon Rekognition Image, Amazon Rekognition Custom Labels, Amazon Rekognition Stored Video, and Amazon Rekognition Streaming Video. It provides descriptions of actions, data types, common parameters, and common errors.

    Amazon Rekognition Image

    Amazon Rekognition Custom Labels

    Amazon Rekognition Video Stored Video

    Amazon Rekognition Video Streaming Video

    See more

    Declaration

    Objective-C

    @interface AWSRekognition

    Swift

    class AWSRekognition
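
    A hedged end-to-end sketch of configuring and calling the client. The region, Cognito identity pool, image asset name, and parameter values are all placeholders:

    import AWSCore
    import AWSRekognition
    import UIKit

    // Hedged sketch: register a default service configuration, then detect
    // labels in an in-memory image.
    let credentials = AWSCognitoCredentialsProvider(
        regionType: .USEast1,
        identityPoolId: "us-east-1:EXAMPLE-POOL-ID")          // placeholder pool
    let configuration = AWSServiceConfiguration(
        region: .USEast1, credentialsProvider: credentials)
    AWSServiceManager.default().defaultServiceConfiguration = configuration

    let image = AWSRekognitionImage()
    image?.bytes = UIImage(named: "example")?                 // hypothetical asset
        .jpegData(compressionQuality: 0.9)

    let request = AWSRekognitionDetectLabelsRequest()
    request?.image = image
    request?.maxLabels = 10
    request?.minConfidence = 75

    AWSRekognition.default().detectLabels(request!).continueWith { task in
        for label in task.result?.labels ?? [] {
            print("\(label.name ?? "?"): \(label.confidence ?? 0)")
        }
        return nil
    }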