Classes
The following classes are available globally.
-
Structure containing the estimated age range, in years, for a face.
Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.
Declaration
Objective-C
@interface AWSRekognitionAgeRange
Swift
class AWSRekognitionAgeRange
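As an illustration, here is a minimal Swift sketch that reads the estimated range from an age range attached to a detected face (the helper name is ours, not part of the SDK; low and high are nullable NSNumber properties):
Swift
import AWSRekognition

// Format the estimated age range of a detected face.
func describeAgeRange(_ ageRange: AWSRekognitionAgeRange) -> String {
    let low = ageRange.low?.intValue ?? 0
    let high = ageRange.high?.intValue ?? 0
    return "Estimated age: \(low)-\(high) years"
}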
-
Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.
Declaration
Objective-C
@interface AWSRekognitionAsset
Swift
class AWSRekognitionAsset
-
Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionAudioMetadata
Swift
class AWSRekognitionAudioMetadata
-
Indicates whether or not the face has a beard, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionBeard
Swift
class AWSRekognitionBeard
-
Identifies the bounding box around the label, face, text, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).
The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
Declaration
Objective-C
@interface AWSRekognitionBoundingBox
Swift
class AWSRekognitionBoundingBox
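Since the values are ratios, they are typically scaled back to pixels before drawing. A minimal sketch (the function is illustrative, not part of the SDK):
Swift
import AWSRekognition
import CoreGraphics

// Convert a ratio-based bounding box into pixel coordinates for the
// given image size. Ratios can be negative or exceed 1 at image edges.
func pixelRect(for box: AWSRekognitionBoundingBox, imageSize: CGSize) -> CGRect {
    let left = CGFloat(box.left?.doubleValue ?? 0)
    let top = CGFloat(box.top?.doubleValue ?? 0)
    let width = CGFloat(box.width?.doubleValue ?? 0)
    let height = CGFloat(box.height?.doubleValue ?? 0)
    return CGRect(x: left * imageSize.width,
                  y: top * imageSize.height,
                  width: width * imageSize.width,
                  height: height * imageSize.height)
}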
-
Provides information about a celebrity recognized by the RecognizeCelebrities operation.
Declaration
Objective-C
@interface AWSRekognitionCelebrity
Swift
class AWSRekognitionCelebrity
-
Information about a recognized celebrity.
Declaration
Objective-C
@interface AWSRekognitionCelebrityDetail
Swift
class AWSRekognitionCelebrityDetail
-
Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionCelebrityRecognition
Swift
class AWSRekognitionCelebrityRecognition
-
Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
Declaration
Objective-C
@interface AWSRekognitionCompareFacesMatch
Swift
class AWSRekognitionCompareFacesMatch
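A sketch of consuming these matches from a CompareFaces response (the 90% threshold and the helper name are arbitrary examples):
Swift
import AWSRekognition

// Keep only target-image faces whose similarity to the source face
// meets the threshold.
func strongMatches(in matches: [AWSRekognitionCompareFacesMatch],
                   threshold: Float = 90) -> [AWSRekognitionComparedFace] {
    return matches.compactMap { match in
        guard let similarity = match.similarity?.floatValue,
              similarity >= threshold else { return nil }
        return match.face
    }
}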
-
Declaration
Objective-C
@interface AWSRekognitionCompareFacesRequest
Swift
class AWSRekognitionCompareFacesRequest
-
Declaration
Objective-C
@interface AWSRekognitionCompareFacesResponse
Swift
class AWSRekognitionCompareFacesResponse
-
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
Declaration
Objective-C
@interface AWSRekognitionComparedFace
Swift
class AWSRekognitionComparedFace
-
Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
Declaration
Objective-C
@interface AWSRekognitionComparedSourceImageFace
Swift
class AWSRekognitionComparedSourceImageFace
-
Information about an unsafe content label detection in a stored video.
Declaration
Objective-C
@interface AWSRekognitionContentModerationDetection
Swift
class AWSRekognitionContentModerationDetection
-
Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionCoversBodyPart
Swift
class AWSRekognitionCoversBodyPart
-
Declaration
Objective-C
@interface AWSRekognitionCreateCollectionRequest
Swift
class AWSRekognitionCreateCollectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionCreateCollectionResponse
Swift
class AWSRekognitionCreateCollectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionCreateProjectRequest
Swift
class AWSRekognitionCreateProjectRequest
-
Declaration
Objective-C
@interface AWSRekognitionCreateProjectResponse
Swift
class AWSRekognitionCreateProjectResponse
-
Declaration
Objective-C
@interface AWSRekognitionCreateProjectVersionRequest
Swift
class AWSRekognitionCreateProjectVersionRequest
-
Declaration
Objective-C
@interface AWSRekognitionCreateProjectVersionResponse
Swift
class AWSRekognitionCreateProjectVersionResponse
-
Declaration
Objective-C
@interface AWSRekognitionCreateStreamProcessorRequest
Swift
class AWSRekognitionCreateStreamProcessorRequest
-
Declaration
Objective-C
@interface AWSRekognitionCreateStreamProcessorResponse
Swift
class AWSRekognitionCreateStreamProcessorResponse
-
A custom label detected in an image by a call to DetectCustomLabels.
Declaration
Objective-C
@interface AWSRekognitionCustomLabel
Swift
class AWSRekognitionCustomLabel
-
Declaration
Objective-C
@interface AWSRekognitionDeleteCollectionRequest
Swift
class AWSRekognitionDeleteCollectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionDeleteCollectionResponse
Swift
class AWSRekognitionDeleteCollectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionDeleteFacesRequest
Swift
class AWSRekognitionDeleteFacesRequest
-
Declaration
Objective-C
@interface AWSRekognitionDeleteFacesResponse
Swift
class AWSRekognitionDeleteFacesResponse
-
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectRequest
Swift
class AWSRekognitionDeleteProjectRequest
-
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectResponse
Swift
class AWSRekognitionDeleteProjectResponse
-
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectVersionRequest
Swift
class AWSRekognitionDeleteProjectVersionRequest
-
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectVersionResponse
Swift
class AWSRekognitionDeleteProjectVersionResponse
-
Declaration
Objective-C
@interface AWSRekognitionDeleteStreamProcessorRequest
Swift
class AWSRekognitionDeleteStreamProcessorRequest
-
Declaration
Objective-C
@interface AWSRekognitionDeleteStreamProcessorResponse
Swift
class AWSRekognitionDeleteStreamProcessorResponse
-
Declaration
Objective-C
@interface AWSRekognitionDescribeCollectionRequest
Swift
class AWSRekognitionDescribeCollectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionDescribeCollectionResponse
Swift
class AWSRekognitionDescribeCollectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectVersionsRequest
Swift
class AWSRekognitionDescribeProjectVersionsRequest
-
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectVersionsResponse
Swift
class AWSRekognitionDescribeProjectVersionsResponse
-
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectsRequest
Swift
class AWSRekognitionDescribeProjectsRequest
-
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectsResponse
Swift
class AWSRekognitionDescribeProjectsResponse
-
Declaration
Objective-C
@interface AWSRekognitionDescribeStreamProcessorRequest
Swift
class AWSRekognitionDescribeStreamProcessorRequest
-
Declaration
Objective-C
@interface AWSRekognitionDescribeStreamProcessorResponse
Swift
class AWSRekognitionDescribeStreamProcessorResponse
-
Declaration
Objective-C
@interface AWSRekognitionDetectCustomLabelsRequest
Swift
class AWSRekognitionDetectCustomLabelsRequest
-
Declaration
Objective-C
@interface AWSRekognitionDetectCustomLabelsResponse
Swift
class AWSRekognitionDetectCustomLabelsResponse
-
Declaration
Objective-C
@interface AWSRekognitionDetectFacesRequest
Swift
class AWSRekognitionDetectFacesRequest
-
Declaration
Objective-C
@interface AWSRekognitionDetectFacesResponse
Swift
class AWSRekognitionDetectFacesResponse
-
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsRequest
Swift
class AWSRekognitionDetectLabelsRequest
-
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsResponse
Swift
class AWSRekognitionDetectLabelsResponse
-
Declaration
Objective-C
@interface AWSRekognitionDetectModerationLabelsRequest
Swift
class AWSRekognitionDetectModerationLabelsRequest
-
Declaration
Objective-C
@interface AWSRekognitionDetectModerationLabelsResponse
Swift
class AWSRekognitionDetectModerationLabelsResponse
-
Declaration
Objective-C
@interface AWSRekognitionDetectProtectiveEquipmentRequest
Swift
class AWSRekognitionDetectProtectiveEquipmentRequest
-
Declaration
Objective-C
@interface AWSRekognitionDetectProtectiveEquipmentResponse
Swift
class AWSRekognitionDetectProtectiveEquipmentResponse
-
A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
Declaration
Objective-C
@interface AWSRekognitionDetectTextFilters
Swift
class AWSRekognitionDetectTextFilters
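A sketch of assembling these filters (the thresholds are arbitrary examples; the optional-init style follows AWS’s published iOS samples):
Swift
import AWSRekognition

// Require words to be at least 80% confident and 5% of the image
// height, and only look for text in the top half of the image.
let wordFilter = AWSRekognitionDetectionFilter()
wordFilter?.minConfidence = 80
wordFilter?.minBoundingBoxHeight = 0.05

let regionBox = AWSRekognitionBoundingBox()
regionBox?.left = 0
regionBox?.top = 0
regionBox?.width = 1
regionBox?.height = 0.5

let region = AWSRekognitionRegionOfInterest()
region?.boundingBox = regionBox

let filters = AWSRekognitionDetectTextFilters()
filters?.wordFilter = wordFilter
filters?.regionsOfInterest = [region!]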
-
Declaration
Objective-C
@interface AWSRekognitionDetectTextRequest
Swift
class AWSRekognitionDetectTextRequest
-
Declaration
Objective-C
@interface AWSRekognitionDetectTextResponse
Swift
class AWSRekognitionDetectTextResponse
-
A set of parameters that allow you to filter out certain results from your response.
Declaration
Objective-C
@interface AWSRekognitionDetectionFilter
Swift
class AWSRekognitionDetectionFilter
-
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person’s face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
Declaration
Objective-C
@interface AWSRekognitionEmotion
Swift
class AWSRekognitionEmotion
-
Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionEquipmentDetection
Swift
class AWSRekognitionEquipmentDetection
-
The evaluation results for the training of a model.
Declaration
Objective-C
@interface AWSRekognitionEvaluationResult
Swift
class AWSRekognitionEvaluationResult
-
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionEyeOpen
Swift
class AWSRekognitionEyeOpen
-
Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionEyeglasses
Swift
class AWSRekognitionEyeglasses
-
Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
Declaration
Objective-C
@interface AWSRekognitionFace
Swift
class AWSRekognitionFace
-
Structure containing attributes of the face that the algorithm detected.
A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality.
GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don’t have a FaceAttributes input parameter.
GetCelebrityRecognition
GetPersonTracking
GetFaceSearch
The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.
Declaration
Objective-C
@interface AWSRekognitionFaceDetail
Swift
class AWSRekognitionFaceDetail
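For Amazon Rekognition Image, a sketch of requesting all attributes with DetectFaces (imageData is assumed to hold JPEG or PNG bytes, and a default AWSServiceConfiguration is assumed to be registered):
Swift
import AWSRekognition

let image = AWSRekognitionImage()
image?.bytes = imageData               // assumed: raw JPEG/PNG data

let request = AWSRekognitionDetectFacesRequest()
request?.image = image
request?.attributes = ["ALL"]          // omit to get default attributes only

AWSRekognition.default().detectFaces(request!) { response, error in
    for face in response?.faceDetails ?? [] {
        let low = face.ageRange?.low ?? 0
        let high = face.ageRange?.high ?? 0
        print("Estimated age \(low)-\(high), smiling: \(face.smile?.value?.boolValue ?? false)")
    }
}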
-
Information about a face detected in a video analysis request and the time the face was detected in the video.
Declaration
Objective-C
@interface AWSRekognitionFaceDetection
Swift
class AWSRekognitionFaceDetection
-
Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.
Declaration
Objective-C
@interface AWSRekognitionFaceMatch
Swift
class AWSRekognitionFaceMatch
-
Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren’t stored in the database.
Declaration
Objective-C
@interface AWSRekognitionFaceRecord
Swift
class AWSRekognitionFaceRecord
-
Input face recognition parameters for an Amazon Rekognition stream processor. FaceSearchSettings is a request parameter for CreateStreamProcessor.
Declaration
Objective-C
@interface AWSRekognitionFaceSearchSettings
Swift
class AWSRekognitionFaceSearchSettings
-
The predicted gender of a detected face.
Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn’t use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.
Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.
We don’t recommend using gender binary predictions to make decisions that impact an individual’s rights, privacy, or access to services.
Declaration
Objective-C
@interface AWSRekognitionGender
Swift
class AWSRekognitionGender
-
Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.
Declaration
Objective-C
@interface AWSRekognitionGeometry
Swift
class AWSRekognitionGeometry
-
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityInfoRequest
Swift
class AWSRekognitionGetCelebrityInfoRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityInfoResponse
Swift
class AWSRekognitionGetCelebrityInfoResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityRecognitionRequest
Swift
class AWSRekognitionGetCelebrityRecognitionRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityRecognitionResponse
Swift
class AWSRekognitionGetCelebrityRecognitionResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetContentModerationRequest
Swift
class AWSRekognitionGetContentModerationRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetContentModerationResponse
Swift
class AWSRekognitionGetContentModerationResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetFaceDetectionRequest
Swift
class AWSRekognitionGetFaceDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetFaceDetectionResponse
Swift
class AWSRekognitionGetFaceDetectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetFaceSearchRequest
Swift
class AWSRekognitionGetFaceSearchRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetFaceSearchResponse
Swift
class AWSRekognitionGetFaceSearchResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetLabelDetectionRequest
Swift
class AWSRekognitionGetLabelDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetLabelDetectionResponse
Swift
class AWSRekognitionGetLabelDetectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetPersonTrackingRequest
Swift
class AWSRekognitionGetPersonTrackingRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetPersonTrackingResponse
Swift
class AWSRekognitionGetPersonTrackingResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetSegmentDetectionRequest
Swift
class AWSRekognitionGetSegmentDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetSegmentDetectionResponse
Swift
class AWSRekognitionGetSegmentDetectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionGetTextDetectionRequest
Swift
class AWSRekognitionGetTextDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionGetTextDetectionResponse
Swift
class AWSRekognitionGetTextDetectionResponse
-
The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.
Declaration
Objective-C
@interface AWSRekognitionGroundTruthManifest
Swift
class AWSRekognitionGroundTruthManifest
-
Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.
Declaration
Objective-C
@interface AWSRekognitionHumanLoopActivationOutput
Swift
class AWSRekognitionHumanLoopActivationOutput
-
Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
Required parameters: [HumanLoopName, FlowDefinitionArn]
Declaration
Objective-C
@interface AWSRekognitionHumanLoopConfig
Swift
class AWSRekognitionHumanLoopConfig
-
Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.
Declaration
Objective-C
@interface AWSRekognitionHumanLoopDataAttributes
Swift
class AWSRekognitionHumanLoopDataAttributes
-
Provides the input image either as bytes or an S3 object.
You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.
For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.
You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource Based Policies in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionImage
Swift
class AWSRekognitionImage
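A sketch of both ways to supply an image (the bucket, key, and localImageURL are placeholders; the SDK handles base64 encoding of in-memory bytes for you):
Swift
import AWSRekognition

// Option 1: bytes loaded from the local file system.
let inlineImage = AWSRekognitionImage()
inlineImage?.bytes = try? Data(contentsOf: localImageURL)   // placeholder URL

// Option 2: an object already stored in S3; no encoding needed, but the
// bucket must be in the same region as your Rekognition calls.
let s3Object = AWSRekognitionS3Object()
s3Object?.bucket = "my-bucket"           // placeholder
s3Object?.name = "photos/example.jpg"    // placeholder

let s3Image = AWSRekognitionImage()
s3Image?.s3Object = s3Object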
-
Identifies face image brightness and sharpness.
Declaration
Objective-C
@interface AWSRekognitionImageQuality
Swift
class AWSRekognitionImageQuality
-
Declaration
Objective-C
@interface AWSRekognitionIndexFacesRequest
Swift
class AWSRekognitionIndexFacesRequest
-
Declaration
Objective-C
@interface AWSRekognitionIndexFacesResponse
Swift
class AWSRekognitionIndexFacesResponse
-
An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).
Declaration
Objective-C
@interface AWSRekognitionInstance
Swift
class AWSRekognitionInstance
-
The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionKinesisDataStream
Swift
class AWSRekognitionKinesisDataStream
-
Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionKinesisVideoStream
Swift
class AWSRekognitionKinesisVideoStream
-
Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.
Declaration
Objective-C
@interface AWSRekognitionLabel
Swift
class AWSRekognitionLabel
-
Information about a label detected in a video analysis request and the time the label was detected in the video.
Declaration
Objective-C
@interface AWSRekognitionLabelDetection
Swift
class AWSRekognitionLabelDetection
-
Indicates the location of the landmark on the face.
Declaration
Objective-C
@interface AWSRekognitionLandmark
Swift
class AWSRekognitionLandmark
-
Declaration
Objective-C
@interface AWSRekognitionListCollectionsRequest
Swift
class AWSRekognitionListCollectionsRequest
-
Declaration
Objective-C
@interface AWSRekognitionListCollectionsResponse
Swift
class AWSRekognitionListCollectionsResponse
-
Declaration
Objective-C
@interface AWSRekognitionListFacesRequest
Swift
class AWSRekognitionListFacesRequest
-
Declaration
Objective-C
@interface AWSRekognitionListFacesResponse
Swift
class AWSRekognitionListFacesResponse
-
Declaration
Objective-C
@interface AWSRekognitionListStreamProcessorsRequest
Swift
class AWSRekognitionListStreamProcessorsRequest
-
Declaration
Objective-C
@interface AWSRekognitionListStreamProcessorsResponse
Swift
class AWSRekognitionListStreamProcessorsResponse
-
Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionModerationLabel
Swift
class AWSRekognitionModerationLabel
-
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionMouthOpen
Swift
class AWSRekognitionMouthOpen
-
Indicates whether or not the face has a mustache, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionMustache
Swift
class AWSRekognitionMustache
-
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see api-video.
Required parameters: [SNSTopicArn, RoleArn]
Declaration
Objective-C
@interface AWSRekognitionNotificationChannel
Swift
class AWSRekognitionNotificationChannel
-
The S3 bucket and folder location where training output is placed.
Declaration
Objective-C
@interface AWSRekognitionOutputConfig
Swift
class AWSRekognitionOutputConfig
-
A parent label for a label. A label can have 0, 1, or more parents.
Declaration
Objective-C
@interface AWSRekognitionParent
Swift
class AWSRekognitionParent
-
Details about a person detected in a video analysis request.
Declaration
Objective-C
@interface AWSRekognitionPersonDetail
Swift
class AWSRekognitionPersonDetail
-
Details and path tracking information for a single time a person’s path is tracked in a video. Amazon Rekognition operations that track people’s paths return an array of PersonDetection objects with elements for each time a person’s path is tracked in a video.
For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionPersonDetection
Swift
class AWSRekognitionPersonDetection
-
Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.
Declaration
Objective-C
@interface AWSRekognitionPersonMatch
Swift
class AWSRekognitionPersonMatch
-
The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
An array of Point objects, Polygon, is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionPoint
Swift
class AWSRekognitionPoint
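As with BoundingBox, the ratios are usually scaled back to pixels. A minimal sketch (the helper is illustrative, not part of the SDK):
Swift
import AWSRekognition
import CoreGraphics

// Scale a ratio-based Polygon into pixel coordinates.
func pixelPolygon(from polygon: [AWSRekognitionPoint],
                  imageSize: CGSize) -> [CGPoint] {
    return polygon.map { point in
        CGPoint(x: CGFloat(point.x?.doubleValue ?? 0) * imageSize.width,
                y: CGFloat(point.y?.doubleValue ?? 0) * imageSize.height)
    }
}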
-
Indicates the pose of the face as determined by its pitch, roll, and yaw.
Declaration
Objective-C
@interface AWSRekognitionPose
Swift
class AWSRekognitionPose
-
A description of an Amazon Rekognition Custom Labels project.
Declaration
Objective-C
@interface AWSRekognitionProjectDescription
Swift
class AWSRekognitionProjectDescription
-
The description of a version of a model.
Declaration
Objective-C
@interface AWSRekognitionProjectVersionDescription
Swift
class AWSRekognitionProjectVersionDescription
-
Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentBodyPart
Swift
class AWSRekognitionProtectiveEquipmentBodyPart
-
A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentPerson
Swift
class AWSRekognitionProtectiveEquipmentPerson
-
Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.
Required parameters: [MinConfidence, RequiredEquipmentTypes]
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentSummarizationAttributes
Swift
class AWSRekognitionProtectiveEquipmentSummarizationAttributes
-
Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required PPE (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).
To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location (BoundingBox) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentSummary
Swift
class AWSRekognitionProtectiveEquipmentSummary
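A sketch of deriving totals from the summary arrays, as described above (property names assume the SDK’s usual lowercase-first generation):
Swift
import AWSRekognition

// Category totals are the sizes of the ID arrays; each element is a
// person ID matching a ProtectiveEquipmentPerson in the Persons array.
func reportPPECounts(_ summary: AWSRekognitionProtectiveEquipmentSummary) {
    let withPPE = summary.personsWithRequiredEquipment?.count ?? 0
    let withoutPPE = summary.personsWithoutRequiredEquipment?.count ?? 0
    let unknown = summary.personsIndeterminate?.count ?? 0
    print("With PPE: \(withPPE), without: \(withoutPPE), indeterminate: \(unknown)")
}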
-
Declaration
Objective-C
@interface AWSRekognitionRecognizeCelebritiesRequest
Swift
class AWSRekognitionRecognizeCelebritiesRequest
-
Declaration
Objective-C
@interface AWSRekognitionRecognizeCelebritiesResponse
Swift
class AWSRekognitionRecognizeCelebritiesResponse
-
Specifies a location within the frame that Rekognition checks for text. Uses a BoundingBox object to set a region of the screen.
A word is included in the region if the word is more than half in that region. If there is more than one region, the word will be compared with all regions of the screen. Any word more than half in a region is kept in the results.
Declaration
Objective-C
@interface AWSRekognitionRegionOfInterest
Swift
class AWSRekognitionRegionOfInterest
-
Provides the S3 bucket name and object name.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionS3Object
Swift
class AWSRekognitionS3Object
-
Declaration
Objective-C
@interface AWSRekognitionSearchFacesByImageRequest
Swift
class AWSRekognitionSearchFacesByImageRequest
-
Declaration
Objective-C
@interface AWSRekognitionSearchFacesByImageResponse
Swift
class AWSRekognitionSearchFacesByImageResponse
-
Declaration
Objective-C
@interface AWSRekognitionSearchFacesRequest
Swift
class AWSRekognitionSearchFacesRequest
-
Declaration
Objective-C
@interface AWSRekognitionSearchFacesResponse
Swift
class AWSRekognitionSearchFacesResponse
-
A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionSegmentDetection
Swift
class AWSRekognitionSegmentDetection
-
Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned in the response from GetSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionSegmentTypeInfo
Swift
class AWSRekognitionSegmentTypeInfo
-
Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionShotSegment
Swift
class AWSRekognitionShotSegment
-
Indicates whether or not the face is smiling, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionSmile
Swift
class AWSRekognitionSmile
-
Declaration
Objective-C
@interface AWSRekognitionStartCelebrityRecognitionRequest
Swift
class AWSRekognitionStartCelebrityRecognitionRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartCelebrityRecognitionResponse
Swift
class AWSRekognitionStartCelebrityRecognitionResponse
-
Declaration
Objective-C
@interface AWSRekognitionStartContentModerationRequest
Swift
class AWSRekognitionStartContentModerationRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartContentModerationResponse
Swift
class AWSRekognitionStartContentModerationResponse
-
Declaration
Objective-C
@interface AWSRekognitionStartFaceDetectionRequest
Swift
class AWSRekognitionStartFaceDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartFaceDetectionResponse
Swift
class AWSRekognitionStartFaceDetectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionStartFaceSearchRequest
Swift
class AWSRekognitionStartFaceSearchRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartFaceSearchResponse
Swift
class AWSRekognitionStartFaceSearchResponse
-
Declaration
Objective-C
@interface AWSRekognitionStartLabelDetectionRequest
Swift
class AWSRekognitionStartLabelDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartLabelDetectionResponse
Swift
class AWSRekognitionStartLabelDetectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionStartPersonTrackingRequest
Swift
class AWSRekognitionStartPersonTrackingRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartPersonTrackingResponse
Swift
class AWSRekognitionStartPersonTrackingResponse
-
Declaration
Objective-C
@interface AWSRekognitionStartProjectVersionRequest
Swift
class AWSRekognitionStartProjectVersionRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartProjectVersionResponse
Swift
class AWSRekognitionStartProjectVersionResponse
-
Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionStartSegmentDetectionFilters
Swift
class AWSRekognitionStartSegmentDetectionFilters
-
Declaration
Objective-C
@interface AWSRekognitionStartSegmentDetectionRequest
Swift
class AWSRekognitionStartSegmentDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartSegmentDetectionResponse
Swift
class AWSRekognitionStartSegmentDetectionResponse
-
Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
Declaration
Objective-C
@interface AWSRekognitionStartShotDetectionFilter
Swift
class AWSRekognitionStartShotDetectionFilter
-
Declaration
Objective-C
@interface AWSRekognitionStartStreamProcessorRequest
Swift
class AWSRekognitionStartStreamProcessorRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartStreamProcessorResponse
Swift
class AWSRekognitionStartStreamProcessorResponse
-
Filters for the technical segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
Declaration
Objective-C
@interface AWSRekognitionStartTechnicalCueDetectionFilter
Swift
class AWSRekognitionStartTechnicalCueDetectionFilter
-
Set of optional parameters that let you set the criteria text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.
Declaration
Objective-C
@interface AWSRekognitionStartTextDetectionFilters
Swift
class AWSRekognitionStartTextDetectionFilters
-
Declaration
Objective-C
@interface AWSRekognitionStartTextDetectionRequest
Swift
class AWSRekognitionStartTextDetectionRequest
-
Declaration
Objective-C
@interface AWSRekognitionStartTextDetectionResponse
Swift
class AWSRekognitionStartTextDetectionResponse
-
Declaration
Objective-C
@interface AWSRekognitionStopProjectVersionRequest
Swift
class AWSRekognitionStopProjectVersionRequest
-
Declaration
Objective-C
@interface AWSRekognitionStopProjectVersionResponse
Swift
class AWSRekognitionStopProjectVersionResponse
-
Declaration
Objective-C
@interface AWSRekognitionStopStreamProcessorRequest
Swift
class AWSRekognitionStopStreamProcessorRequest
-
Declaration
Objective-C
@interface AWSRekognitionStopStreamProcessorResponse
Swift
class AWSRekognitionStopStreamProcessorResponse
-
An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
Declaration
Objective-C
@interface AWSRekognitionStreamProcessor
Swift
class AWSRekognitionStreamProcessor
-
Information about the source streaming video.
Declaration
Objective-C
@interface AWSRekognitionStreamProcessorInput
Swift
class AWSRekognitionStreamProcessorInput
-
Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionStreamProcessorOutput
Swift
class AWSRekognitionStreamProcessorOutput
-
Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.
Declaration
Objective-C
@interface AWSRekognitionStreamProcessorSettings
Swift
class AWSRekognitionStreamProcessorSettings
-
The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.
You get the training summary S3 bucket location by calling DescribeProjectVersions.
Declaration
Objective-C
@interface AWSRekognitionSummary
Swift
class AWSRekognitionSummary
-
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionSunglasses
Swift
class AWSRekognitionSunglasses
-
Information about a technical cue segment. For more information, see SegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionTechnicalCueSegment
Swift
class AWSRekognitionTechnicalCueSegment
-
The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset.
Declaration
Objective-C
@interface AWSRekognitionTestingData
Swift
class AWSRekognitionTestingData
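A minimal sketch of opting into the automatic 80/20 split instead of supplying your own test assets (autoCreate is a boolean-valued NSNumber):
Swift
import AWSRekognition

// Let Amazon Rekognition Custom Labels carve the test set out of the
// training dataset.
let testingData = AWSRekognitionTestingData()
testingData?.autoCreate = true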
-
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.
Declaration
Objective-C
@interface AWSRekognitionTestingDataResult
Swift
class AWSRekognitionTestingDataResult
-
Information about a word or line of text detected by DetectText.
The DetectedText field contains the text that Amazon Rekognition detected in the image.
Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.
For more information, see Detecting Text in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionTextDetection
Swift
class AWSRekognitionTextDetection
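A sketch of using ParentId to collect the words that make up one detected line (assumes the SDK exposes the segment type as the types property and the parent identifier as parentId):
Swift
import AWSRekognition

// Gather the detected words belonging to a given line, in index order.
func words(inLine lineId: NSNumber,
           from detections: [AWSRekognitionTextDetection]) -> [String] {
    return detections
        .filter { $0.types == .word && $0.parentId == lineId }
        .compactMap { $0.detectedText }
}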
-
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
Declaration
Objective-C
@interface AWSRekognitionTextDetectionResult
Swift
class AWSRekognitionTextDetectionResult
-
The dataset used for training.
Declaration
Objective-C
@interface AWSRekognitionTrainingData
Swift
class AWSRekognitionTrainingData
-
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during training.
Declaration
Objective-C
@interface AWSRekognitionTrainingDataResult
Swift
class AWSRekognitionTrainingDataResult
-
A face that IndexFaces detected, but didn’t index. Use the Reasons response attribute to determine why a face wasn’t indexed.
Declaration
Objective-C
@interface AWSRekognitionUnindexedFace
Swift
class AWSRekognitionUnindexedFace
-
Contains the Amazon S3 bucket location of the validation data for a model training job.
The validation data includes error information for individual JSON lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.
You get the ValidationData object for the training dataset (TrainingDataResult) and the test dataset (TestingDataResult) by calling DescribeProjectVersions.
The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.
Declaration
Objective-C
@interface AWSRekognitionValidationData
Swift
class AWSRekognitionValidationData
-
Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.
Declaration
Objective-C
@interface AWSRekognitionVideo
Swift
class AWSRekognitionVideo
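A sketch of starting a stored-video operation with this type (bucket and key are placeholders; production code would also configure a NotificationChannel and later call GetLabelDetection with the returned JobId):
Swift
import AWSRekognition

let s3Object = AWSRekognitionS3Object()
s3Object?.bucket = "my-video-bucket"    // placeholder
s3Object?.name = "clips/example.mp4"    // placeholder

let video = AWSRekognitionVideo()
video?.s3Object = s3Object

let request = AWSRekognitionStartLabelDetectionRequest()
request?.video = video

AWSRekognition.default().startLabelDetection(request!) { response, error in
    if let jobId = response?.jobId {
        // Poll GetLabelDetection with this JobId once analysis completes.
        print("Started job:", jobId)
    } else if let error = error {
        print("StartLabelDetection failed:", error)
    }
}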
-
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
Declaration
Objective-C
@interface AWSRekognitionVideoMetadata
Swift
class AWSRekognitionVideoMetadata
-
Undocumented
Declaration
Objective-C
@interface AWSRekognitionResources : NSObject

+ (instancetype)sharedInstance;

- (NSDictionary *)JSONObject;

@end
Swift
class AWSRekognitionResources : NSObject
-
This is the Amazon Rekognition API reference.
Declaration
Objective-C
@interface AWSRekognition
Swift
class AWSRekognition