Classes

The following classes are available globally.

  • Structure containing the estimated age range, in years, for a face.

    Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionAgeRange

    Swift

    class AWSRekognitionAgeRange
  • Assets are the images that you use to train and evaluate a model version. Assets are referenced by SageMaker Ground Truth manifest files.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionAsset

    Swift

    class AWSRekognitionAsset
  • Indicates whether or not the face has a beard, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionBeard

    Swift

    class AWSRekognitionBeard
  • Identifies the bounding box around the label, face, or text. The left (x-coordinate) and top (y-coordinate) values represent the left and top sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

    The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).

    The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.

    The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionBoundingBox

    Swift

    class AWSRekognitionBoundingBox
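
    Because the returned values are ratios, drawing a detected box requires scaling by the image's pixel dimensions. A minimal sketch of that conversion in plain CoreGraphics; the clamping step is one way to handle the out-of-bounds case noted above:

    import CoreGraphics

    /// Converts the ratio-based box returned by Amazon Rekognition into
    /// pixel coordinates for a given image size.
    func pixelRect(left: CGFloat, top: CGFloat, width: CGFloat, height: CGFloat,
                   imageSize: CGSize) -> CGRect {
        let rect = CGRect(x: left * imageSize.width,
                          y: top * imageSize.height,
                          width: width * imageSize.width,
                          height: height * imageSize.height)
        // Values can be negative or exceed 1.0 for partially visible faces,
        // so clamp the box to the image bounds before drawing.
        return rect.intersection(CGRect(origin: .zero, size: imageSize))
    }

    // Example from the description: a 700x200 image with left = 0.5 and
    // top = 0.25 puts the box's top-left corner at pixel (350, 50).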
  • Provides information about a celebrity recognized by the RecognizeCelebrities operation.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionCelebrity

    Swift

    class AWSRekognitionCelebrity
  • Information about a recognized celebrity.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionCelebrityDetail

    Swift

    class AWSRekognitionCelebrityDetail
  • Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionCelebrityRecognition

    Swift

    class AWSRekognitionCelebrityRecognition
  • Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionCompareFacesMatch

    Swift

    class AWSRekognitionCompareFacesMatch
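
    A minimal sketch of calling CompareFaces and reading each match, assuming the SDK's usual failable model initializers and AWSTask call pattern; the image data arguments and threshold are placeholders:

    import AWSRekognition

    func compareFaces(sourceBytes: Data, targetBytes: Data) {
        guard let request = AWSRekognitionCompareFacesRequest(),
              let source = AWSRekognitionImage(),
              let target = AWSRekognitionImage() else { return }
        source.bytes = sourceBytes
        target.bytes = targetBytes
        request.sourceImage = source
        request.targetImage = target
        request.similarityThreshold = 80   // drop matches below 80% similarity

        AWSRekognition.default().compareFaces(request).continueWith { task -> Any? in
            for match in task.result?.faceMatches ?? [] {
                // Face is the bounding box in the target image; Similarity is
                // the confidence that it matches the source face.
                print("similarity:", match.similarity ?? 0)
            }
            return nil
        }
    }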
  • Declaration

    Objective-C

    @interface AWSRekognitionCompareFacesRequest

    Swift

    class AWSRekognitionCompareFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCompareFacesResponse

    Swift

    class AWSRekognitionCompareFacesResponse
  • Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionComparedFace

    Swift

    class AWSRekognitionComparedFace
  • Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionComparedSourceImageFace

    Swift

    class AWSRekognitionComparedSourceImageFace
  • Information about an unsafe content label detection in a stored video.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionContentModerationDetection

    Swift

    class AWSRekognitionContentModerationDetection
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateCollectionRequest

    Swift

    class AWSRekognitionCreateCollectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateCollectionResponse

    Swift

    class AWSRekognitionCreateCollectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectRequest

    Swift

    class AWSRekognitionCreateProjectRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectResponse

    Swift

    class AWSRekognitionCreateProjectResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectVersionRequest

    Swift

    class AWSRekognitionCreateProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateProjectVersionResponse

    Swift

    class AWSRekognitionCreateProjectVersionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateStreamProcessorRequest

    Swift

    class AWSRekognitionCreateStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionCreateStreamProcessorResponse

    Swift

    class AWSRekognitionCreateStreamProcessorResponse
  • A custom label detected in an image by a call to DetectCustomLabels.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionCustomLabel

    Swift

    class AWSRekognitionCustomLabel
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteCollectionRequest

    Swift

    class AWSRekognitionDeleteCollectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteCollectionResponse

    Swift

    class AWSRekognitionDeleteCollectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteFacesRequest

    Swift

    class AWSRekognitionDeleteFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteFacesResponse

    Swift

    class AWSRekognitionDeleteFacesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteStreamProcessorRequest

    Swift

    class AWSRekognitionDeleteStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDeleteStreamProcessorResponse

    Swift

    class AWSRekognitionDeleteStreamProcessorResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeCollectionRequest

    Swift

    class AWSRekognitionDescribeCollectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeCollectionResponse

    Swift

    class AWSRekognitionDescribeCollectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectVersionsRequest

    Swift

    class AWSRekognitionDescribeProjectVersionsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectVersionsResponse

    Swift

    class AWSRekognitionDescribeProjectVersionsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectsRequest

    Swift

    class AWSRekognitionDescribeProjectsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeProjectsResponse

    Swift

    class AWSRekognitionDescribeProjectsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeStreamProcessorRequest

    Swift

    class AWSRekognitionDescribeStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDescribeStreamProcessorResponse

    Swift

    class AWSRekognitionDescribeStreamProcessorResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectCustomLabelsRequest

    Swift

    class AWSRekognitionDetectCustomLabelsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectCustomLabelsResponse

    Swift

    class AWSRekognitionDetectCustomLabelsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectFacesRequest

    Swift

    class AWSRekognitionDetectFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectFacesResponse

    Swift

    class AWSRekognitionDetectFacesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsRequest

    Swift

    class AWSRekognitionDetectLabelsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectLabelsResponse

    Swift

    class AWSRekognitionDetectLabelsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectModerationLabelsRequest

    Swift

    class AWSRekognitionDetectModerationLabelsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectModerationLabelsResponse

    Swift

    class AWSRekognitionDetectModerationLabelsResponse
  • A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionDetectTextFilters

    Swift

    class AWSRekognitionDetectTextFilters
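
    A minimal sketch of building these filters, assuming wordFilter, minConfidence, and regionsOfInterest as the Swift property names corresponding to the fields above:

    import AWSRekognition

    func makeTextFilters() -> AWSRekognitionDetectTextFilters? {
        guard let filters = AWSRekognitionDetectTextFilters(),
              let wordFilter = AWSRekognitionDetectionFilter(),
              let region = AWSRekognitionRegionOfInterest(),
              let box = AWSRekognitionBoundingBox() else { return nil }

        // Keep only words detected with at least 90% confidence.
        wordFilter.minConfidence = 90

        // Look for text only in the top half of the image (ratio coordinates).
        box.left = 0
        box.top = 0
        box.width = 1
        box.height = 0.5
        region.boundingBox = box

        filters.wordFilter = wordFilter
        filters.regionsOfInterest = [region]
        return filters
    }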
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectTextRequest

    Swift

    class AWSRekognitionDetectTextRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionDetectTextResponse

    Swift

    class AWSRekognitionDetectTextResponse
  • A set of parameters that allow you to filter out certain results from your returned results.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionDetectionFilter

    Swift

    class AWSRekognitionDetectionFilter
  • The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person’s face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionEmotion

    Swift

    class AWSRekognitionEmotion
  • The evaluation results for the training of a model.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionEvaluationResult

    Swift

    class AWSRekognitionEvaluationResult
  • Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionEyeOpen

    Swift

    class AWSRekognitionEyeOpen
  • Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionEyeglasses

    Swift

    class AWSRekognitionEyeglasses
  • Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionFace

    Swift

    class AWSRekognitionFace
  • Structure containing attributes of the face that the algorithm detected.

    A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality.

    GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don’t have a FaceAttributes input parameter.

    • GetCelebrityRecognition

    • GetPersonTracking

    • GetFaceSearch

    The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionFaceDetail

    Swift

    class AWSRekognitionFaceDetail
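
    A minimal sketch of requesting all facial attributes from DetectFaces with the Attributes input parameter described above, assuming the SDK's usual call pattern:

    import AWSRekognition

    func detectAllAttributes(imageBytes: Data) {
        guard let request = AWSRekognitionDetectFacesRequest(),
              let image = AWSRekognitionImage() else { return }
        image.bytes = imageBytes
        request.image = image
        request.attributes = ["ALL"]   // omit to get only the default attributes

        AWSRekognition.default().detectFaces(request).continueWith { task -> Any? in
            for face in task.result?.faceDetails ?? [] {
                print("confidence:", face.confidence ?? 0,
                      "age:", face.ageRange?.low ?? 0, "to", face.ageRange?.high ?? 0)
            }
            return nil
        }
    }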
  • Information about a face detected in a video analysis request and the time the face was detected in the video.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionFaceDetection

    Swift

    class AWSRekognitionFaceDetection
  • Provides face metadata. In addition, it provides the confidence in the match of this face with the input face.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionFaceMatch

    Swift

    class AWSRekognitionFaceMatch
  • Object containing both the face metadata (stored in the backend database) and facial attributes that are detected but aren’t stored in the database.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionFaceRecord

    Swift

    class AWSRekognitionFaceRecord
  • Input face recognition parameters for an Amazon Rekognition stream processor. FaceSearchSettings is a request parameter for CreateStreamProcessor.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionFaceSearchSettings

    Swift

    class AWSRekognitionFaceSearchSettings
  • The predicted gender of a detected face.

    Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn’t use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.

    Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.

    We don’t recommend using gender binary predictions to make decisions that impact an individual’s rights, privacy, or access to services.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionGender

    Swift

    class AWSRekognitionGender
  • Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionGeometry

    Swift

    class AWSRekognitionGeometry
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityInfoRequest

    Swift

    class AWSRekognitionGetCelebrityInfoRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityInfoResponse

    Swift

    class AWSRekognitionGetCelebrityInfoResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityRecognitionRequest

    Swift

    class AWSRekognitionGetCelebrityRecognitionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetCelebrityRecognitionResponse

    Swift

    class AWSRekognitionGetCelebrityRecognitionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetContentModerationRequest

    Swift

    class AWSRekognitionGetContentModerationRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetContentModerationResponse

    Swift

    class AWSRekognitionGetContentModerationResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceDetectionRequest

    Swift

    class AWSRekognitionGetFaceDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceDetectionResponse

    Swift

    class AWSRekognitionGetFaceDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceSearchRequest

    Swift

    class AWSRekognitionGetFaceSearchRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetFaceSearchResponse

    Swift

    class AWSRekognitionGetFaceSearchResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetLabelDetectionRequest

    Swift

    class AWSRekognitionGetLabelDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetLabelDetectionResponse

    Swift

    class AWSRekognitionGetLabelDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetPersonTrackingRequest

    Swift

    class AWSRekognitionGetPersonTrackingRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetPersonTrackingResponse

    Swift

    class AWSRekognitionGetPersonTrackingResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionGetTextDetectionRequest

    Swift

    class AWSRekognitionGetTextDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionGetTextDetectionResponse

    Swift

    class AWSRekognitionGetTextDetectionResponse
  • The S3 bucket that contains the Ground Truth manifest file.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionGroundTruthManifest

    Swift

    class AWSRekognitionGroundTruthManifest
  • Shows the results of the human-in-the-loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionHumanLoopActivationOutput

    Swift

    class AWSRekognitionHumanLoopActivationOutput
  • Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.

    Required parameters: [HumanLoopName, FlowDefinitionArn]

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionHumanLoopConfig

    Swift

    class AWSRekognitionHumanLoopConfig
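
    A minimal sketch of attaching a human review loop to a DetectModerationLabels request. The loop name and flow definition ARN are placeholders, and the Swift property names are assumed from the required parameters above:

    import AWSRekognition

    func moderateWithHumanReview(imageBytes: Data) {
        guard let request = AWSRekognitionDetectModerationLabelsRequest(),
              let image = AWSRekognitionImage(),
              let humanLoop = AWSRekognitionHumanLoopConfig() else { return }
        image.bytes = imageBytes
        request.image = image

        // Both parameters are required; the ARN names an Amazon A2I flow definition.
        humanLoop.humanLoopName = "my-moderation-loop"
        humanLoop.flowDefinitionArn =
            "arn:aws:sagemaker:us-east-1:123456789012:flow-definition/my-flow"
        request.humanLoopConfig = humanLoop

        AWSRekognition.default().detectModerationLabels(request).continueWith { task -> Any? in
            // A missing HumanLoopArn means the input did not trigger human review.
            print(task.result?.humanLoopActivationOutput?.humanLoopArn ?? "no review triggered")
            return nil
        }
    }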
  • Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionHumanLoopDataAttributes

    Swift

    class AWSRekognitionHumanLoopDataAttributes
  • Provides the input image either as bytes or an S3 object.

    You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

    For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

    You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

    The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

    If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

    For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource Based Policies in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionImage

    Swift

    class AWSRekognitionImage
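
    A minimal sketch of the two options described above, supplying an image either as bytes or as an S3 object; the bucket and key are placeholders:

    import AWSRekognition

    // Option 1: pass the image as bytes loaded from the local file system;
    // the SDK handles base64 encoding for you.
    func imageFrom(fileURL: URL) -> AWSRekognitionImage? {
        guard let image = AWSRekognitionImage(),
              let data = try? Data(contentsOf: fileURL) else { return nil }
        image.bytes = data
        return image
    }

    // Option 2: reference an object already stored in S3. The bucket must be
    // in the same region as the Amazon Rekognition endpoint you call.
    func imageFrom(bucket: String, key: String) -> AWSRekognitionImage? {
        guard let image = AWSRekognitionImage(),
              let s3Object = AWSRekognitionS3Object() else { return nil }
        s3Object.bucket = bucket
        s3Object.name = key
        image.s3Object = s3Object
        return image
    }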
  • Identifies face image brightness and sharpness.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionImageQuality

    Swift

    class AWSRekognitionImageQuality
  • Declaration

    Objective-C

    @interface AWSRekognitionIndexFacesRequest

    Swift

    class AWSRekognitionIndexFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionIndexFacesResponse

    Swift

    class AWSRekognitionIndexFacesResponse
  • An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionInstance

    Swift

    class AWSRekognitionInstance
  • The Kinesis data stream to which Amazon Rekognition streams the analysis results of an Amazon Rekognition stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionKinesisDataStream

    Swift

    class AWSRekognitionKinesisDataStream
  • Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionKinesisVideoStream

    Swift

    class AWSRekognitionKinesisVideoStream
  • Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionLabel

    Swift

    class AWSRekognitionLabel
  • Information about a label detected in a video analysis request and the time the label was detected in the video.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionLabelDetection

    Swift

    class AWSRekognitionLabelDetection
  • Indicates the location of the landmark on the face.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionLandmark

    Swift

    class AWSRekognitionLandmark
  • Declaration

    Objective-C

    @interface AWSRekognitionListCollectionsRequest

    Swift

    class AWSRekognitionListCollectionsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListCollectionsResponse

    Swift

    class AWSRekognitionListCollectionsResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListFacesRequest

    Swift

    class AWSRekognitionListFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListFacesResponse

    Swift

    class AWSRekognitionListFacesResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionListStreamProcessorsRequest

    Swift

    class AWSRekognitionListStreamProcessorsRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionListStreamProcessorsResponse

    Swift

    class AWSRekognitionListStreamProcessorsResponse
  • Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionModerationLabel

    Swift

    class AWSRekognitionModerationLabel
  • Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionMouthOpen

    Swift

    class AWSRekognitionMouthOpen
  • Indicates whether or not the face has a mustache, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionMustache

    Swift

    class AWSRekognitionMustache
  • The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see api-video.

    Required parameters: [SNSTopicArn, RoleArn]

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionNotificationChannel

    Swift

    class AWSRekognitionNotificationChannel
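
    A minimal sketch of the completion-status channel; the ARNs are placeholders, and SNSTopicArn/roleArn are assumed as the Swift property names for the required parameters above:

    import AWSRekognition

    func makeNotificationChannel() -> AWSRekognitionNotificationChannel? {
        guard let channel = AWSRekognitionNotificationChannel() else { return nil }
        // Both parameters are required: the topic that receives the completion
        // status, and an IAM role that allows Rekognition to publish to it.
        channel.SNSTopicArn = "arn:aws:sns:us-east-1:123456789012:RekognitionVideoTopic"
        channel.roleArn = "arn:aws:iam::123456789012:role/RekognitionServiceRole"
        return channel
    }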
  • The S3 bucket and folder location where training output is placed.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionOutputConfig

    Swift

    class AWSRekognitionOutputConfig
  • A parent label for a label. A label can have 0, 1, or more parents.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionParent

    Swift

    class AWSRekognitionParent
  • Details about a person detected in a video analysis request.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionPersonDetail

    Swift

    class AWSRekognitionPersonDetail
  • Details and path tracking information for a single time a person’s path is tracked in a video. Amazon Rekognition operations that track people’s paths return an array of PersonDetection objects with elements for each time a person’s path is tracked in a video.

    For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionPersonDetection

    Swift

    class AWSRekognitionPersonDetection
  • Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionPersonMatch

    Swift

    class AWSRekognitionPersonMatch
  • The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.

    An array of Point objects, Polygon, is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionPoint

    Swift

    class AWSRekognitionPoint
  • Indicates the pose of the face as determined by its pitch, roll, and yaw.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionPose

    Swift

    class AWSRekognitionPose
  • A description of an Amazon Rekognition Custom Labels project.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionProjectDescription

    Swift

    class AWSRekognitionProjectDescription
  • The description of a version of a model.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionProjectVersionDescription

    Swift

    class AWSRekognitionProjectVersionDescription
  • Declaration

    Objective-C

    @interface AWSRekognitionRecognizeCelebritiesRequest

    Swift

    class AWSRekognitionRecognizeCelebritiesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionRecognizeCelebritiesResponse

    Swift

    class AWSRekognitionRecognizeCelebritiesResponse
  • Specifies a location within the frame that Rekognition checks for text. Uses a BoundingBox object to set a region of the screen.

    A word is included in the region if the word is more than half in that region. If there is more than one region, the word will be compared with all regions of the screen. Any word more than half in a region is kept in the results.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionRegionOfInterest

    Swift

    class AWSRekognitionRegionOfInterest
  • Provides the S3 bucket name and object name.

    The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

    For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionS3Object

    Swift

    class AWSRekognitionS3Object
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesByImageRequest

    Swift

    class AWSRekognitionSearchFacesByImageRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesByImageResponse

    Swift

    class AWSRekognitionSearchFacesByImageResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesRequest

    Swift

    class AWSRekognitionSearchFacesRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionSearchFacesResponse

    Swift

    class AWSRekognitionSearchFacesResponse
  • Indicates whether or not the face is smiling, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionSmile

    Swift

    class AWSRekognitionSmile
  • Declaration

    Objective-C

    @interface AWSRekognitionStartCelebrityRecognitionRequest

    Swift

    class AWSRekognitionStartCelebrityRecognitionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartCelebrityRecognitionResponse

    Swift

    class AWSRekognitionStartCelebrityRecognitionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartContentModerationRequest

    Swift

    class AWSRekognitionStartContentModerationRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartContentModerationResponse

    Swift

    class AWSRekognitionStartContentModerationResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceDetectionRequest

    Swift

    class AWSRekognitionStartFaceDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceDetectionResponse

    Swift

    class AWSRekognitionStartFaceDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceSearchRequest

    Swift

    class AWSRekognitionStartFaceSearchRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartFaceSearchResponse

    Swift

    class AWSRekognitionStartFaceSearchResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartLabelDetectionRequest

    Swift

    class AWSRekognitionStartLabelDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartLabelDetectionResponse

    Swift

    class AWSRekognitionStartLabelDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartPersonTrackingRequest

    Swift

    class AWSRekognitionStartPersonTrackingRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartPersonTrackingResponse

    Swift

    class AWSRekognitionStartPersonTrackingResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartProjectVersionRequest

    Swift

    class AWSRekognitionStartProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartProjectVersionResponse

    Swift

    class AWSRekognitionStartProjectVersionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStartStreamProcessorRequest

    Swift

    class AWSRekognitionStartStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartStreamProcessorResponse

    Swift

    class AWSRekognitionStartStreamProcessorResponse
  • A set of optional parameters that let you set the criteria that text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStartTextDetectionFilters

    Swift

    class AWSRekognitionStartTextDetectionFilters
  • Declaration

    Objective-C

    @interface AWSRekognitionStartTextDetectionRequest

    Swift

    class AWSRekognitionStartTextDetectionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStartTextDetectionResponse

    Swift

    class AWSRekognitionStartTextDetectionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStopProjectVersionRequest

    Swift

    class AWSRekognitionStopProjectVersionRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStopProjectVersionResponse

    Swift

    class AWSRekognitionStopProjectVersionResponse
  • Declaration

    Objective-C

    @interface AWSRekognitionStopStreamProcessorRequest

    Swift

    class AWSRekognitionStopStreamProcessorRequest
  • Declaration

    Objective-C

    @interface AWSRekognitionStopStreamProcessorResponse

    Swift

    class AWSRekognitionStopStreamProcessorResponse
  • An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessor

    Swift

    class AWSRekognitionStreamProcessor
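
    A minimal sketch that wires the pieces above into a CreateStreamProcessor request: the Kinesis video stream input, Kinesis data stream output, and face search settings. All names and ARNs are placeholders, and the Swift property names are assumed from the API field names:

    import AWSRekognition

    func createStreamProcessor() {
        guard let request = AWSRekognitionCreateStreamProcessorRequest(),
              let input = AWSRekognitionStreamProcessorInput(),
              let videoStream = AWSRekognitionKinesisVideoStream(),
              let output = AWSRekognitionStreamProcessorOutput(),
              let dataStream = AWSRekognitionKinesisDataStream(),
              let settings = AWSRekognitionStreamProcessorSettings(),
              let faceSearch = AWSRekognitionFaceSearchSettings() else { return }

        // Source: the Kinesis video stream that carries the streaming video.
        videoStream.arn = "arn:aws:kinesisvideo:us-east-1:123456789012:stream/my-video-stream"
        input.kinesisVideoStream = videoStream

        // Sink: the Kinesis data stream that receives the analysis results.
        dataStream.arn = "arn:aws:kinesis:us-east-1:123456789012:stream/my-results-stream"
        output.kinesisDataStream = dataStream

        // Face search: match faces in the video against an existing collection.
        faceSearch.collectionId = "my-collection"
        faceSearch.faceMatchThreshold = 80
        settings.faceSearch = faceSearch

        request.name = "my-stream-processor"
        request.input = input
        request.output = output
        request.settings = settings
        request.roleArn = "arn:aws:iam::123456789012:role/RekognitionStreamRole"

        AWSRekognition.default().createStreamProcessor(request).continueWith { task -> Any? in
            if let arn = task.result?.streamProcessorArn {
                print("created:", arn)
            } else if let error = task.error {
                print("failed:", error)
            }
            return nil
        }
    }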
  • Information about the source streaming video.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorInput

    Swift

    class AWSRekognitionStreamProcessorInput
  • Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorOutput

    Swift

    class AWSRekognitionStreamProcessorOutput
  • Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionStreamProcessorSettings

    Swift

    class AWSRekognitionStreamProcessorSettings
  • The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.

    You get the training summary S3 bucket location by calling DescribeProjectVersions.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionSummary

    Swift

    class AWSRekognitionSummary
  • Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionSunglasses

    Swift

    class AWSRekognitionSunglasses
  • The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTestingData

    Swift

    class AWSRekognitionTestingData
  • A SageMaker Ground Truth format manifest file representing the dataset used for testing.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTestingDataResult

    Swift

    class AWSRekognitionTestingDataResult
  • Information about a word or line of text detected by DetectText.

    The DetectedText field contains the text that Amazon Rekognition detected in the image.

    Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.

    For more information, see Detecting Text in the Amazon Rekognition Developer Guide.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTextDetection

    Swift

    class AWSRekognitionTextDetection
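
    A minimal sketch of grouping detected words under their parent lines via ParentId, as described above, assuming the SDK surfaces ParentId as a parentId property:

    import AWSRekognition

    /// Groups WORD detections under the line they belong to. Words carry a
    /// parentId that points at the Id of their line; lines have no parent.
    func wordsByLine(_ detections: [AWSRekognitionTextDetection])
        -> [NSNumber: [AWSRekognitionTextDetection]] {
        var grouped: [NSNumber: [AWSRekognitionTextDetection]] = [:]
        for detection in detections {
            if let parent = detection.parentId {
                grouped[parent, default: []].append(detection)
            }
        }
        return grouped
    }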
  • Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTextDetectionResult

    Swift

    class AWSRekognitionTextDetectionResult
  • The dataset used for training.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTrainingData

    Swift

    class AWSRekognitionTrainingData
  • A SageMaker Ground Truth format manifest file that represents the dataset used for training.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionTrainingDataResult

    Swift

    class AWSRekognitionTrainingDataResult
  • A face that IndexFaces detected, but didn’t index. Use the Reasons response attribute to determine why a face wasn’t indexed.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionUnindexedFace

    Swift

    class AWSRekognitionUnindexedFace
  • Video file stored in an Amazon S3 bucket. Amazon Rekognition Video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionVideo

    Swift

    class AWSRekognitionVideo
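
    A minimal sketch of starting label detection on a stored video; the bucket and key are placeholders, and the returned JobId is what you pass to GetLabelDetection to collect the results:

    import AWSRekognition

    func startLabelDetection(bucket: String, key: String) {
        guard let request = AWSRekognitionStartLabelDetectionRequest(),
              let video = AWSRekognitionVideo(),
              let s3Object = AWSRekognitionS3Object() else { return }
        s3Object.bucket = bucket
        s3Object.name = key          // e.g. an .mp4, .mov, or .avi file
        video.s3Object = s3Object
        request.video = video

        AWSRekognition.default().startLabelDetection(request).continueWith { task -> Any? in
            // Pass this job ID to GetLabelDetection to retrieve the labels.
            print("job id:", task.result?.jobId ?? "none")
            return nil
        }
    }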
  • Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionVideoMetadata

    Swift

    class AWSRekognitionVideoMetadata
  • Undocumented

    See more

    Declaration

    Objective-C

    @interface AWSRekognitionResources : NSObject
    
    + (instancetype)sharedInstance;
    
    - (NSDictionary *)JSONObject;
    
    @end

    Swift

    class AWSRekognitionResources : NSObject
  • This is the Amazon Rekognition API reference.

    See more

    Declaration

    Objective-C

    @interface AWSRekognition

    Swift

    class AWSRekognition
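
    A minimal sketch of one-time client setup before calling any of the operations above; the Cognito identity pool ID is a placeholder:

    import AWSCore
    import AWSRekognition

    // Typically done once, e.g. in application(_:didFinishLaunchingWithOptions:).
    let credentials = AWSCognitoCredentialsProvider(
        regionType: .USEast1,
        identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000")
    let configuration = AWSServiceConfiguration(
        region: .USEast1,
        credentialsProvider: credentials)
    AWSServiceManager.default().defaultServiceConfiguration = configuration

    // All Rekognition operations are then available on the shared client.
    let rekognition = AWSRekognition.default()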