Classes
The following classes are available globally.
Structure containing the estimated age range, in years, for a face.
Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.
Declaration
Objective-C
@interface AWSRekognitionAgeRange
Swift
class AWSRekognitionAgeRange
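As a quick illustration, here is a minimal Swift sketch of reading a returned range; it assumes the bounds are exposed as the NSNumber properties low and high.
import AWSRekognition

// Minimal sketch: format an estimated age range for display. Assumes the
// class exposes its bounds as the NSNumber properties low and high.
func describeAgeRange(_ range: AWSRekognitionAgeRange) -> String {
    return "Estimated age: \(range.low ?? 0) to \(range.high ?? 0) years"
}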
Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.
Declaration
Objective-C
@interface AWSRekognitionAsset
Swift
class AWSRekognitionAsset
Declaration
Objective-C
@interface AWSRekognitionAssociateFacesRequest
Swift
class AWSRekognitionAssociateFacesRequest
Declaration
Objective-C
@interface AWSRekognitionAssociateFacesResponse
Swift
class AWSRekognitionAssociateFacesResponse
Provides face metadata for the faces that are associated to a specific UserID.
Declaration
Objective-C
@interface AWSRekognitionAssociatedFace
Swift
class AWSRekognitionAssociatedFace
Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionAudioMetadata
Swift
class AWSRekognitionAudioMetadata
An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.
Declaration
Objective-C
@interface AWSRekognitionAuditImage
Swift
class AWSRekognitionAuditImage
Indicates whether or not the face has a beard, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionBeard
Swift
class AWSRekognitionBeard
A filter that allows you to control the black frame detection by specifying the black levels and pixel coverage of black pixels in a frame. As videos can come from multiple sources, formats, and time periods, they may contain different standards and varying noise levels for black frames that need to be accounted for. For more information, see StartSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionBlackFrame
Swift
class AWSRekognitionBlackFrame
Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).
The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
Declaration
Objective-C
@interface AWSRekognitionBoundingBox
Swift
class AWSRekognitionBoundingBox
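To make the ratio arithmetic above concrete, here is a minimal Swift sketch that maps a returned bounding box onto a known image size; the helper function is illustrative and not part of the SDK, and left, top, width, and height are the NSNumber fields described above.
import AWSRekognition
import CoreGraphics

// Illustrative helper: convert ratio-based bounding box values to pixels.
// For a 700x200 image, left = 0.5 and top = 0.25 map to the (350, 50)
// pixel coordinate. Values can be negative or exceed 1 at the image edge.
func pixelRect(for box: AWSRekognitionBoundingBox,
               imageWidth: CGFloat, imageHeight: CGFloat) -> CGRect {
    return CGRect(x: CGFloat(box.left?.doubleValue ?? 0) * imageWidth,
                  y: CGFloat(box.top?.doubleValue ?? 0) * imageHeight,
                  width: CGFloat(box.width?.doubleValue ?? 0) * imageWidth,
                  height: CGFloat(box.height?.doubleValue ?? 0) * imageHeight)
}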
Provides information about a celebrity recognized by the RecognizeCelebrities operation.
Declaration
Objective-C
@interface AWSRekognitionCelebrity
Swift
class AWSRekognitionCelebrity
Information about a recognized celebrity.
Declaration
Objective-C
@interface AWSRekognitionCelebrityDetail
Swift
class AWSRekognitionCelebrityDetail
Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionCelebrityRecognition
Swift
class AWSRekognitionCelebrityRecognition
Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
Declaration
Objective-C
@interface AWSRekognitionCompareFacesMatch
Swift
class AWSRekognitionCompareFacesMatch
Declaration
Objective-C
@interface AWSRekognitionCompareFacesRequest
Swift
class AWSRekognitionCompareFacesRequest
Declaration
Objective-C
@interface AWSRekognitionCompareFacesResponse
Swift
class AWSRekognitionCompareFacesResponse
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
Declaration
Objective-C
@interface AWSRekognitionComparedFace
Swift
class AWSRekognitionComparedFace
Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
Declaration
Objective-C
@interface AWSRekognitionComparedSourceImageFace
Swift
class AWSRekognitionComparedSourceImageFace
Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for CreateStreamProcessor. Including this setting in the CreateStreamProcessor request enables you to use the stream processor for label detection. You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary.
Required parameters: [Labels]
Declaration
Objective-C
@interface AWSRekognitionConnectedHomeSettings
Swift
class AWSRekognitionConnectedHomeSettings
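A minimal sketch of configuring these settings in Swift; the label names and confidence value are examples, and labels and minConfidence are assumed to mirror the Labels and MinConfidence parameters described above.
import AWSRekognition

// Minimal sketch: ask the stream processor to notify on people and packages.
func makeConnectedHomeSettings() -> AWSRekognitionConnectedHomeSettings? {
    guard let settings = AWSRekognitionConnectedHomeSettings() else { return nil }
    settings.labels = ["PERSON", "PACKAGE", "PET"]  // object classes to detect
    settings.minConfidence = 80                     // minimum confidence, in percent
    return settings
}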
The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects.
Declaration
Objective-C
@interface AWSRekognitionConnectedHomeSettingsForUpdate
Swift
class AWSRekognitionConnectedHomeSettingsForUpdate
Information about an inappropriate, unwanted, or offensive content label detection in a stored video.
Declaration
Objective-C
@interface AWSRekognitionContentModerationDetection
Swift
class AWSRekognitionContentModerationDetection
Contains information regarding the confidence and name of a detected content type.
Declaration
Objective-C
@interface AWSRekognitionContentType
Swift
class AWSRekognitionContentType
Declaration
Objective-C
@interface AWSRekognitionReplicateProjectVersionRequest
Swift
class AWSRekognitionReplicateProjectVersionRequest
Declaration
Objective-C
@interface AWSRekognitionReplicateProjectVersionResponse
Swift
class AWSRekognitionReplicateProjectVersionResponse
Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionCoversBodyPart
Swift
class AWSRekognitionCoversBodyPart
Declaration
Objective-C
@interface AWSRekognitionCreateCollectionRequest
Swift
class AWSRekognitionCreateCollectionRequest
Declaration
Objective-C
@interface AWSRekognitionCreateCollectionResponse
Swift
class AWSRekognitionCreateCollectionResponse
Declaration
Objective-C
@interface AWSRekognitionCreateDatasetRequest
Swift
class AWSRekognitionCreateDatasetRequest
Declaration
Objective-C
@interface AWSRekognitionCreateDatasetResponse
Swift
class AWSRekognitionCreateDatasetResponse
Declaration
Objective-C
@interface AWSRekognitionCreateFaceLivenessSessionRequest
Swift
class AWSRekognitionCreateFaceLivenessSessionRequest
A session settings object. It contains settings for the operation to be performed. It accepts arguments for OutputConfig and AuditImagesLimit.
Declaration
Objective-C
@interface AWSRekognitionCreateFaceLivenessSessionRequestSettings
Swift
class AWSRekognitionCreateFaceLivenessSessionRequestSettings
Declaration
Objective-C
@interface AWSRekognitionCreateFaceLivenessSessionResponse
Swift
class AWSRekognitionCreateFaceLivenessSessionResponse
Declaration
Objective-C
@interface AWSRekognitionCreateProjectRequest
Swift
class AWSRekognitionCreateProjectRequest
Declaration
Objective-C
@interface AWSRekognitionCreateProjectResponse
Swift
class AWSRekognitionCreateProjectResponse
Declaration
Objective-C
@interface AWSRekognitionCreateProjectVersionRequest
Swift
class AWSRekognitionCreateProjectVersionRequest
Declaration
Objective-C
@interface AWSRekognitionCreateProjectVersionResponse
Swift
class AWSRekognitionCreateProjectVersionResponse
Declaration
Objective-C
@interface AWSRekognitionCreateStreamProcessorRequest
Swift
class AWSRekognitionCreateStreamProcessorRequest
Declaration
Objective-C
@interface AWSRekognitionCreateStreamProcessorResponse
Swift
class AWSRekognitionCreateStreamProcessorResponse
Declaration
Objective-C
@interface AWSRekognitionCreateUserRequest
Swift
class AWSRekognitionCreateUserRequest
Declaration
Objective-C
@interface AWSRekognitionCreateUserResponse
Swift
class AWSRekognitionCreateUserResponse
A custom label detected in an image by a call to DetectCustomLabels.
Declaration
Objective-C
@interface AWSRekognitionCustomLabel
Swift
class AWSRekognitionCustomLabel
Feature-specific configuration for the training job. The configuration provided for the job must match the feature type parameter associated with the project. If the configuration and feature type do not match, an InvalidParameterException is returned.
Declaration
Objective-C
@interface AWSRekognitionCustomizationFeatureConfig
Swift
class AWSRekognitionCustomizationFeatureConfig
Configuration options for Content Moderation training.
Declaration
Objective-C
@interface AWSRekognitionCustomizationFeatureContentModerationConfig
Swift
class AWSRekognitionCustomizationFeatureContentModerationConfig
Describes updates or additions to a dataset. A single update or addition is an entry (JSON Line) that provides information about a single image. To update an existing entry, you match the source-ref field of the update entry with the source-ref field of the entry that you want to update. If the source-ref field doesn’t match an existing entry, the entry is added to the dataset as a new entry.
Required parameters: [GroundTruth]
Declaration
Objective-C
@interface AWSRekognitionDatasetChanges
Swift
class AWSRekognitionDatasetChanges
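For illustration only, a hedged sketch of assembling one update entry as a JSON Line; the source-ref value and label field are hypothetical, and groundTruth is assumed to carry the JSON Lines bytes this class describes.
import AWSRekognition

// Hypothetical update entry: source-ref selects the entry to update; an
// unmatched source-ref is added to the dataset as a new entry.
let jsonLine = #"{"source-ref":"s3://my-bucket/images/cat.jpg","my-label":1}"#

let changes = AWSRekognitionDatasetChanges()
changes?.groundTruth = jsonLine.data(using: .utf8)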
A description for a dataset. For more information, see DescribeDataset.
The status fields Status, StatusMessage, and StatusMessageCode reflect the last operation on the dataset.
Declaration
Objective-C
@interface AWSRekognitionDatasetDescription
Swift
class AWSRekognitionDatasetDescription
Describes a dataset label. For more information, see ListDatasetLabels.
Declaration
Objective-C
@interface AWSRekognitionDatasetLabelDescription
Swift
class AWSRekognitionDatasetLabelDescription
Statistics about a label used in a dataset. For more information, see DatasetLabelDescription.
Declaration
Objective-C
@interface AWSRekognitionDatasetLabelStats
Swift
class AWSRekognitionDatasetLabelStats
Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see ProjectDescription.
Declaration
Objective-C
@interface AWSRekognitionDatasetMetadata
Swift
class AWSRekognitionDatasetMetadata
The source that Amazon Rekognition Custom Labels uses to create a dataset. To use an Amazon Sagemaker format manifest file, specify the S3 bucket location in the GroundTruthManifest field. The S3 bucket must be in your AWS account. To create a copy of an existing dataset, specify the Amazon Resource Name (ARN) of an existing dataset in DatasetArn.
You need to specify a value for DatasetArn or GroundTruthManifest, but not both. If you supply both values, or if you don’t specify any values, an InvalidParameterException exception occurs.
For more information, see CreateDataset.
Declaration
Objective-C
@interface AWSRekognitionDatasetSource
Swift
class AWSRekognitionDatasetSource
Provides statistics about a dataset. For more information, see DescribeDataset.
Declaration
Objective-C
@interface AWSRekognitionDatasetStats
Swift
class AWSRekognitionDatasetStats
Declaration
Objective-C
@interface AWSRekognitionDeleteCollectionRequest
Swift
class AWSRekognitionDeleteCollectionRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteCollectionResponse
Swift
class AWSRekognitionDeleteCollectionResponse
Declaration
Objective-C
@interface AWSRekognitionDeleteDatasetRequest
Swift
class AWSRekognitionDeleteDatasetRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteDatasetResponse
Swift
class AWSRekognitionDeleteDatasetResponse
Declaration
Objective-C
@interface AWSRekognitionDeleteFacesRequest
Swift
class AWSRekognitionDeleteFacesRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteFacesResponse
Swift
class AWSRekognitionDeleteFacesResponse
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectPolicyRequest
Swift
class AWSRekognitionDeleteProjectPolicyRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectPolicyResponse
Swift
class AWSRekognitionDeleteProjectPolicyResponse
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectRequest
Swift
class AWSRekognitionDeleteProjectRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectResponse
Swift
class AWSRekognitionDeleteProjectResponse
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectVersionRequest
Swift
class AWSRekognitionDeleteProjectVersionRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteProjectVersionResponse
Swift
class AWSRekognitionDeleteProjectVersionResponse
Declaration
Objective-C
@interface AWSRekognitionDeleteStreamProcessorRequest
Swift
class AWSRekognitionDeleteStreamProcessorRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteStreamProcessorResponse
Swift
class AWSRekognitionDeleteStreamProcessorResponse
Declaration
Objective-C
@interface AWSRekognitionDeleteUserRequest
Swift
class AWSRekognitionDeleteUserRequest
Declaration
Objective-C
@interface AWSRekognitionDeleteUserResponse
Swift
class AWSRekognitionDeleteUserResponse
Declaration
Objective-C
@interface AWSRekognitionDescribeCollectionRequest
Swift
class AWSRekognitionDescribeCollectionRequest
Declaration
Objective-C
@interface AWSRekognitionDescribeCollectionResponse
Swift
class AWSRekognitionDescribeCollectionResponse
Declaration
Objective-C
@interface AWSRekognitionDescribeDatasetRequest
Swift
class AWSRekognitionDescribeDatasetRequest
Declaration
Objective-C
@interface AWSRekognitionDescribeDatasetResponse
Swift
class AWSRekognitionDescribeDatasetResponse
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectVersionsRequest
Swift
class AWSRekognitionDescribeProjectVersionsRequest
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectVersionsResponse
Swift
class AWSRekognitionDescribeProjectVersionsResponse
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectsRequest
Swift
class AWSRekognitionDescribeProjectsRequest
Declaration
Objective-C
@interface AWSRekognitionDescribeProjectsResponse
Swift
class AWSRekognitionDescribeProjectsResponse
Declaration
Objective-C
@interface AWSRekognitionDescribeStreamProcessorRequest
Swift
class AWSRekognitionDescribeStreamProcessorRequest
Declaration
Objective-C
@interface AWSRekognitionDescribeStreamProcessorResponse
Swift
class AWSRekognitionDescribeStreamProcessorResponse
Declaration
Objective-C
@interface AWSRekognitionDetectCustomLabelsRequest
Swift
class AWSRekognitionDetectCustomLabelsRequest
Declaration
Objective-C
@interface AWSRekognitionDetectCustomLabelsResponse
Swift
class AWSRekognitionDetectCustomLabelsResponse
Declaration
Objective-C
@interface AWSRekognitionDetectFacesRequest
Swift
class AWSRekognitionDetectFacesRequest
Declaration
Objective-C
@interface AWSRekognitionDetectFacesResponse
Swift
class AWSRekognitionDetectFacesResponse
The background of the image with regard to image quality and dominant colors.
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsImageBackground
Swift
class AWSRekognitionDetectLabelsImageBackground
The foreground of the image with regard to image quality and dominant colors.
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsImageForeground
Swift
class AWSRekognitionDetectLabelsImageForeground
Information about the quality and dominant colors of an input image. Quality and color information is returned for the entire image, foreground, and background.
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsImageProperties
Swift
class AWSRekognitionDetectLabelsImageProperties
Settings for the IMAGE_PROPERTIES feature type.
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsImagePropertiesSettings
Swift
class AWSRekognitionDetectLabelsImagePropertiesSettings
The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsImageQuality
Swift
class AWSRekognitionDetectLabelsImageQuality
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsRequest
Swift
class AWSRekognitionDetectLabelsRequest
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsResponse
Swift
class AWSRekognitionDetectLabelsResponse
Settings for the DetectLabels request. Settings can include filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification of a maximum number of dominant colors.
Declaration
Objective-C
@interface AWSRekognitionDetectLabelsSettings
Swift
class AWSRekognitionDetectLabelsSettings
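A minimal sketch of combining both filter types in one settings object; the filter values are examples, and the property names are assumed to mirror the GENERAL_LABELS and IMAGE_PROPERTIES shapes described above.
import AWSRekognition

// Minimal sketch: include one label category, exclude one label, and cap the
// number of dominant colors returned for image properties.
func makeDetectLabelsSettings() -> AWSRekognitionDetectLabelsSettings? {
    guard let settings = AWSRekognitionDetectLabelsSettings(),
          let general = AWSRekognitionGeneralLabelsSettings(),
          let imageProperties = AWSRekognitionDetectLabelsImagePropertiesSettings()
    else { return nil }
    general.labelCategoryInclusionFilters = ["Animals and Pets"]  // example category
    general.labelExclusionFilters = ["Insect"]                    // example label
    imageProperties.maxDominantColors = 5
    settings.generalLabels = general
    settings.imageProperties = imageProperties
    return settings
}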
Declaration
Objective-C
@interface AWSRekognitionDetectModerationLabelsRequest
Swift
class AWSRekognitionDetectModerationLabelsRequest
Declaration
Objective-C
@interface AWSRekognitionDetectModerationLabelsResponse
Swift
class AWSRekognitionDetectModerationLabelsResponse
Declaration
Objective-C
@interface AWSRekognitionDetectProtectiveEquipmentRequest
Swift
class AWSRekognitionDetectProtectiveEquipmentRequest
Declaration
Objective-C
@interface AWSRekognitionDetectProtectiveEquipmentResponse
Swift
class AWSRekognitionDetectProtectiveEquipmentResponse
A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
Declaration
Objective-C
@interface AWSRekognitionDetectTextFilters
Swift
class AWSRekognitionDetectTextFilters
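A minimal sketch of both criteria; the thresholds and region are arbitrary examples, and wordFilter and regionsOfInterest are assumed to mirror the WordFilter and RegionOfInterest parameters named above.
import AWSRekognition

// Minimal sketch: keep only reasonably large, confident words inside the top
// portion of the image.
func makeTextFilters() -> AWSRekognitionDetectTextFilters? {
    guard let filters = AWSRekognitionDetectTextFilters(),
          let wordFilter = AWSRekognitionDetectionFilter(),
          let region = AWSRekognitionRegionOfInterest(),
          let box = AWSRekognitionBoundingBox() else { return nil }
    wordFilter.minConfidence = 80            // percent
    wordFilter.minBoundingBoxHeight = 0.05   // ratio of image height
    wordFilter.minBoundingBoxWidth = 0.02    // ratio of image width
    box.left = 0.0; box.top = 0.0; box.width = 1.0; box.height = 0.3
    region.boundingBox = box
    filters.wordFilter = wordFilter
    filters.regionsOfInterest = [region]
    return filters
}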
Declaration
Objective-C
@interface AWSRekognitionDetectTextRequest
Swift
class AWSRekognitionDetectTextRequest
Declaration
Objective-C
@interface AWSRekognitionDetectTextResponse
Swift
class AWSRekognitionDetectTextResponse
A set of parameters that allows you to filter out certain results from your returned results.
Declaration
Objective-C
@interface AWSRekognitionDetectionFilter
Swift
class AWSRekognitionDetectionFilter
Declaration
Objective-C
@interface AWSRekognitionDisassociateFacesRequest
Swift
class AWSRekognitionDisassociateFacesRequest
Declaration
Objective-C
@interface AWSRekognitionDisassociateFacesResponse
Swift
class AWSRekognitionDisassociateFacesResponse
Provides face metadata for the faces that are disassociated from a specific UserID.
Declaration
Objective-C
@interface AWSRekognitionDisassociatedFace
Swift
class AWSRekognitionDisassociatedFace
A training dataset or a test dataset used in a dataset distribution operation. For more information, see DistributeDatasetEntries.
Required parameters: [Arn]
Declaration
Objective-C
@interface AWSRekognitionDistributeDataset
Swift
class AWSRekognitionDistributeDataset
Declaration
Objective-C
@interface AWSRekognitionDistributeDatasetEntriesRequest
Swift
class AWSRekognitionDistributeDatasetEntriesRequest
Declaration
Objective-C
@interface AWSRekognitionDistributeDatasetEntriesResponse
Swift
class AWSRekognitionDistributeDatasetEntriesResponse
A description of the dominant colors in an image.
Declaration
Objective-C
@interface AWSRekognitionDominantColor
Swift
class AWSRekognitionDominantColor
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person’s face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
Declaration
Objective-C
@interface AWSRekognitionEmotion
Swift
class AWSRekognitionEmotion
Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionEquipmentDetection
Swift
class AWSRekognitionEquipmentDetection
The evaluation results for the training of a model.
Declaration
Objective-C
@interface AWSRekognitionEvaluationResult
Swift
class AWSRekognitionEvaluationResult
Indicates the direction the eyes are gazing in (independent of the head pose), as determined by the pitch and yaw of the gaze.
Declaration
Objective-C
@interface AWSRekognitionEyeDirection
Swift
class AWSRekognitionEyeDirection
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionEyeOpen
Swift
class AWSRekognitionEyeOpen
Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionEyeglasses
Swift
class AWSRekognitionEyeglasses
Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
Declaration
Objective-C
@interface AWSRekognitionFace
Swift
class AWSRekognitionFace
Structure containing attributes of the face that the algorithm detected.
A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality.
GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don’t have a FaceAttributes input parameter:
GetCelebrityRecognition
GetPersonTracking
GetFaceSearch
The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.
Declaration
Objective-C
@interface AWSRekognitionFaceDetail
Swift
class AWSRekognitionFaceDetail
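As a hedged sketch of the attribute selection described above, the following builds a DetectFaces request that asks for every FaceDetail attribute; the S3 location is a placeholder.
import AWSRekognition

// Minimal sketch: request all facial attributes instead of the defaults
// (BoundingBox, Confidence, Landmarks, Pose, and Quality).
func makeDetectFacesRequest() -> AWSRekognitionDetectFacesRequest? {
    guard let request = AWSRekognitionDetectFacesRequest(),
          let image = AWSRekognitionImage(),
          let s3Object = AWSRekognitionS3Object() else { return nil }
    s3Object.bucket = "my-input-bucket"     // placeholder bucket
    s3Object.name = "photos/group.jpg"      // placeholder key
    image.s3Object = s3Object
    request.image = image
    request.attributes = ["ALL"]            // "DEFAULT" returns only the defaults
    return request
}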
Information about a face detected in a video analysis request and the time the face was detected in the video.
Declaration
Objective-C
@interface AWSRekognitionFaceDetection
Swift
class AWSRekognitionFaceDetection
Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.
Declaration
Objective-C
@interface AWSRekognitionFaceMatch
Swift
class AWSRekognitionFaceMatch
FaceOccluded should return “true” with a high confidence score if a detected face’s eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. FaceOccluded should return “false” with a high confidence score if common occurrences that do not impact face verification are detected, such as eye glasses, lightly tinted sunglasses, strands of hair, and others.
You can use FaceOccluded to determine if an obstruction on a face negatively impacts using the image for face matching.
Declaration
Objective-C
@interface AWSRekognitionFaceOccluded
Swift
class AWSRekognitionFaceOccluded
Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren’t stored in the database.
Declaration
Objective-C
@interface AWSRekognitionFaceRecord
Swift
class AWSRekognitionFaceRecord
Input face recognition parameters for an Amazon Rekognition stream processor. Includes the collection to use for face recognition and the face attributes to detect. Defining the settings is required in the request parameter for CreateStreamProcessor.
Declaration
Objective-C
@interface AWSRekognitionFaceSearchSettings
Swift
class AWSRekognitionFaceSearchSettings
The predicted gender of a detected face.
Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn’t use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.
Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.
We don’t recommend using gender binary predictions to make decisions that impact an individual’s rights, privacy, or access to services.
Declaration
Objective-C
@interface AWSRekognitionGender
Swift
class AWSRekognitionGender
Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories. To see a list of label categories, see Detecting Labels.
Declaration
Objective-C
@interface AWSRekognitionGeneralLabelsSettings
Swift
class AWSRekognitionGeneralLabelsSettings
Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.
Declaration
Objective-C
@interface AWSRekognitionGeometry
Swift
class AWSRekognitionGeometry
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityInfoRequest
Swift
class AWSRekognitionGetCelebrityInfoRequest
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityInfoResponse
Swift
class AWSRekognitionGetCelebrityInfoResponse
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityRecognitionRequest
Swift
class AWSRekognitionGetCelebrityRecognitionRequest
Declaration
Objective-C
@interface AWSRekognitionGetCelebrityRecognitionResponse
Swift
class AWSRekognitionGetCelebrityRecognitionResponse
Declaration
Objective-C
@interface AWSRekognitionGetContentModerationRequest
Swift
class AWSRekognitionGetContentModerationRequest
Contains metadata about a content moderation request, including the SortBy and AggregateBy options.
Declaration
Objective-C
@interface AWSRekognitionGetContentModerationRequestMetadata
Swift
class AWSRekognitionGetContentModerationRequestMetadata
Declaration
Objective-C
@interface AWSRekognitionGetContentModerationResponse
Swift
class AWSRekognitionGetContentModerationResponse
Declaration
Objective-C
@interface AWSRekognitionGetFaceDetectionRequest
Swift
class AWSRekognitionGetFaceDetectionRequest
Declaration
Objective-C
@interface AWSRekognitionGetFaceDetectionResponse
Swift
class AWSRekognitionGetFaceDetectionResponse
Declaration
Objective-C
@interface AWSRekognitionGetFaceLivenessSessionResultsRequest
Swift
class AWSRekognitionGetFaceLivenessSessionResultsRequest
Declaration
Objective-C
@interface AWSRekognitionGetFaceLivenessSessionResultsResponse
Swift
class AWSRekognitionGetFaceLivenessSessionResultsResponse
Declaration
Objective-C
@interface AWSRekognitionGetFaceSearchRequest
Swift
class AWSRekognitionGetFaceSearchRequest
Declaration
Objective-C
@interface AWSRekognitionGetFaceSearchResponse
Swift
class AWSRekognitionGetFaceSearchResponse
Declaration
Objective-C
@interface AWSRekognitionGetLabelDetectionRequest
Swift
class AWSRekognitionGetLabelDetectionRequest
Contains metadata about a label detection request, including the SortBy and AggregateBy options.
Declaration
Objective-C
@interface AWSRekognitionGetLabelDetectionRequestMetadata
Swift
class AWSRekognitionGetLabelDetectionRequestMetadata
Declaration
Objective-C
@interface AWSRekognitionGetLabelDetectionResponse
Swift
class AWSRekognitionGetLabelDetectionResponse
Declaration
Objective-C
@interface AWSRekognitionGetMediaAnalysisJobRequest
Swift
class AWSRekognitionGetMediaAnalysisJobRequest
Declaration
Objective-C
@interface AWSRekognitionGetMediaAnalysisJobResponse
Swift
class AWSRekognitionGetMediaAnalysisJobResponse
Declaration
Objective-C
@interface AWSRekognitionGetPersonTrackingRequest
Swift
class AWSRekognitionGetPersonTrackingRequest
Declaration
Objective-C
@interface AWSRekognitionGetPersonTrackingResponse
Swift
class AWSRekognitionGetPersonTrackingResponse
Declaration
Objective-C
@interface AWSRekognitionGetSegmentDetectionRequest
Swift
class AWSRekognitionGetSegmentDetectionRequest
Declaration
Objective-C
@interface AWSRekognitionGetSegmentDetectionResponse
Swift
class AWSRekognitionGetSegmentDetectionResponse
Declaration
Objective-C
@interface AWSRekognitionGetTextDetectionRequest
Swift
class AWSRekognitionGetTextDetectionRequest
Declaration
Objective-C
@interface AWSRekognitionGetTextDetectionResponse
Swift
class AWSRekognitionGetTextDetectionResponse
The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file.
Declaration
Objective-C
@interface AWSRekognitionGroundTruthManifest
Swift
class AWSRekognitionGroundTruthManifest
Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.
Declaration
Objective-C
@interface AWSRekognitionHumanLoopActivationOutput
Swift
class AWSRekognitionHumanLoopActivationOutput
Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
Required parameters: [HumanLoopName, FlowDefinitionArn]
Declaration
Objective-C
@interface AWSRekognitionHumanLoopConfig
Swift
class AWSRekognitionHumanLoopConfig
Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.
Declaration
Objective-C
@interface AWSRekognitionHumanLoopDataAttributes
Swift
class AWSRekognitionHumanLoopDataAttributes
Provides the input image either as bytes or an S3 object.
You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.
For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.
You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionImage
Swift
class AWSRekognitionImage
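A minimal sketch of the two input styles described above; the file path and bucket name are placeholders.
import AWSRekognition
import Foundation

// Option 1: raw bytes, for example loaded from the local file system. When
// calling through an AWS SDK, you generally do not need to base64-encode
// the bytes yourself.
let imageFromBytes = AWSRekognitionImage()
imageFromBytes?.bytes = FileManager.default.contents(atPath: "/tmp/photo.jpg")

// Option 2: an object already stored in S3, in the same region as the
// Rekognition endpoint being called.
let s3Object = AWSRekognitionS3Object()
s3Object?.bucket = "my-input-bucket"
s3Object?.name = "photos/photo.jpg"
let imageFromS3 = AWSRekognitionImage()
imageFromS3?.s3Object = s3Object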
Identifies face image brightness and sharpness.
Declaration
Objective-C
@interface AWSRekognitionImageQuality
Swift
class AWSRekognitionImageQuality
Declaration
Objective-C
@interface AWSRekognitionIndexFacesRequest
Swift
class AWSRekognitionIndexFacesRequest
Declaration
Objective-C
@interface AWSRekognitionIndexFacesResponse
Swift
class AWSRekognitionIndexFacesResponse
An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).
Declaration
Objective-C
@interface AWSRekognitionInstance
Swift
class AWSRekognitionInstance
The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionKinesisDataStream
Swift
class AWSRekognitionKinesisDataStream
Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionKinesisVideoStream
Swift
class AWSRekognitionKinesisVideoStream
Specifies the starting point in a Kinesis stream to start processing. You can use the producer timestamp or the fragment number. One of either producer timestamp or fragment number is required. If you use the producer timestamp, you must put the time in milliseconds. For more information about fragment numbers, see Fragment.
Declaration
Objective-C
@interface AWSRekognitionKinesisVideoStreamStartSelector
Swift
class AWSRekognitionKinesisVideoStreamStartSelector
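A minimal sketch of the producer-timestamp variant, which must be given in milliseconds; producerTimestamp is assumed to mirror the parameter described above (set fragmentNumber instead to use a fragment number).
import AWSRekognition
import Foundation

// Minimal sketch: start processing from "now", expressed in milliseconds.
let startSelector = AWSRekognitionKinesisVideoStreamStartSelector()
startSelector?.producerTimestamp = NSNumber(value: Int64(Date().timeIntervalSince1970 * 1000))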
The known gender identity for the celebrity that matches the provided ID. The known gender identity can be Male, Female, Nonbinary, or Unlisted.
Declaration
Objective-C
@interface AWSRekognitionKnownGender
Swift
class AWSRekognitionKnownGender
Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.
Declaration
Objective-C
@interface AWSRekognitionLabel
Swift
class AWSRekognitionLabel
A potential alias for a given label.
Declaration
Objective-C
@interface AWSRekognitionLabelAlias
Swift
class AWSRekognitionLabelAlias
The category that applies to a given label.
Declaration
Objective-C
@interface AWSRekognitionLabelCategory
Swift
class AWSRekognitionLabelCategory
Information about a label detected in a video analysis request and the time the label was detected in the video.
Declaration
Objective-C
@interface AWSRekognitionLabelDetection
Swift
class AWSRekognitionLabelDetection
Contains the specified filters that should be applied to a list of returned GENERAL_LABELS.
Declaration
Objective-C
@interface AWSRekognitionLabelDetectionSettings
Swift
class AWSRekognitionLabelDetectionSettings
Indicates the location of the landmark on the face.
Declaration
Objective-C
@interface AWSRekognitionLandmark
Swift
class AWSRekognitionLandmark
Declaration
Objective-C
@interface AWSRekognitionListCollectionsRequest
Swift
class AWSRekognitionListCollectionsRequest
Declaration
Objective-C
@interface AWSRekognitionListCollectionsResponse
Swift
class AWSRekognitionListCollectionsResponse
Declaration
Objective-C
@interface AWSRekognitionListDatasetEntriesRequest
Swift
class AWSRekognitionListDatasetEntriesRequest
Declaration
Objective-C
@interface AWSRekognitionListDatasetEntriesResponse
Swift
class AWSRekognitionListDatasetEntriesResponse
Declaration
Objective-C
@interface AWSRekognitionListDatasetLabelsRequest
Swift
class AWSRekognitionListDatasetLabelsRequest
Declaration
Objective-C
@interface AWSRekognitionListDatasetLabelsResponse
Swift
class AWSRekognitionListDatasetLabelsResponse
Declaration
Objective-C
@interface AWSRekognitionListFacesRequest
Swift
class AWSRekognitionListFacesRequest
Declaration
Objective-C
@interface AWSRekognitionListFacesResponse
Swift
class AWSRekognitionListFacesResponse
Declaration
Objective-C
@interface AWSRekognitionListMediaAnalysisJobsRequest
Swift
class AWSRekognitionListMediaAnalysisJobsRequest
Declaration
Objective-C
@interface AWSRekognitionListMediaAnalysisJobsResponse
Swift
class AWSRekognitionListMediaAnalysisJobsResponse
Declaration
Objective-C
@interface AWSRekognitionListProjectPoliciesRequest
Swift
class AWSRekognitionListProjectPoliciesRequest
Declaration
Objective-C
@interface AWSRekognitionListProjectPoliciesResponse
Swift
class AWSRekognitionListProjectPoliciesResponse
Declaration
Objective-C
@interface AWSRekognitionListStreamProcessorsRequest
Swift
class AWSRekognitionListStreamProcessorsRequest
Declaration
Objective-C
@interface AWSRekognitionListStreamProcessorsResponse
Swift
class AWSRekognitionListStreamProcessorsResponse
Declaration
Objective-C
@interface AWSRekognitionListTagsForResourceRequest
Swift
class AWSRekognitionListTagsForResourceRequest
Declaration
Objective-C
@interface AWSRekognitionListTagsForResourceResponse
Swift
class AWSRekognitionListTagsForResourceResponse
Declaration
Objective-C
@interface AWSRekognitionListUsersRequest
Swift
class AWSRekognitionListUsersRequest
Declaration
Objective-C
@interface AWSRekognitionListUsersResponse
Swift
class AWSRekognitionListUsersResponse
Contains settings that specify the location of an Amazon S3 bucket used to store the output of a Face Liveness session. Note that the S3 bucket must be located in the caller’s AWS account and in the same region as the Face Liveness endpoint. Additionally, the Amazon S3 object keys are auto-generated by the Face Liveness system.
Required parameters: [S3Bucket]
Declaration
Objective-C
@interface AWSRekognitionLivenessOutputConfig
Swift
class AWSRekognitionLivenessOutputConfig
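A minimal sketch; the bucket and prefix are placeholders, and s3Bucket and s3KeyPrefix are assumed to mirror the S3Bucket and S3KeyPrefix parameters of this structure.
import AWSRekognition

// Minimal sketch: the bucket must be in the caller's account and in the same
// region as the Face Liveness endpoint; object keys are auto-generated.
let outputConfig = AWSRekognitionLivenessOutputConfig()
outputConfig?.s3Bucket = "my-liveness-audit-bucket"
outputConfig?.s3KeyPrefix = "liveness-sessions"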
Contains metadata for a UserID matched with a given face.
Declaration
Objective-C
@interface AWSRekognitionMatchedUser
Swift
class AWSRekognitionMatchedUser
Configuration for Moderation Labels Detection.
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisDetectModerationLabelsConfig
Swift
class AWSRekognitionMediaAnalysisDetectModerationLabelsConfig
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisInput
Swift
class AWSRekognitionMediaAnalysisInput
Description for a media analysis job.
Required parameters: [JobId, OperationsConfig, Status, CreationTimestamp, Input, OutputConfig]
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisJobDescription
Swift
class AWSRekognitionMediaAnalysisJobDescription
Details about the error that resulted in failure of the job.
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisJobFailureDetails
Swift
class AWSRekognitionMediaAnalysisJobFailureDetails
Summary that provides statistics on the input manifest and any errors identified in the input manifest.
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisManifestSummary
Swift
class AWSRekognitionMediaAnalysisManifestSummary
Object containing information about the model versions of selected features in a given job.
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisModelVersions
Swift
class AWSRekognitionMediaAnalysisModelVersions
Configuration options for a media analysis job. Configuration is operation-specific.
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisOperationsConfig
Swift
class AWSRekognitionMediaAnalysisOperationsConfig
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisOutputConfig
Swift
class AWSRekognitionMediaAnalysisOutputConfig
Contains the results for a media analysis job created with StartMediaAnalysisJob.
Declaration
Objective-C
@interface AWSRekognitionMediaAnalysisResults
Swift
class AWSRekognitionMediaAnalysisResults
Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionModerationLabel
Swift
class AWSRekognitionModerationLabel
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionMouthOpen
Swift
class AWSRekognitionMouthOpen
Indicates whether or not the face has a mustache, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionMustache
Swift
class AWSRekognitionMustache
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.
Required parameters: [SNSTopicArn, RoleArn]
Declaration
Objective-C
@interface AWSRekognitionNotificationChannel
Swift
class AWSRekognitionNotificationChannel
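A minimal sketch with placeholder ARNs; the topic name begins with AmazonRekognition so it works with the AmazonRekognitionServiceRole policy mentioned above, and the property names are assumed to mirror SNSTopicArn and RoleArn.
import AWSRekognition

// Minimal sketch: placeholder ARNs for the completion-status topic and the
// role Rekognition assumes to publish to it.
let channel = AWSRekognitionNotificationChannel()
channel?.snsTopicArn = "arn:aws:sns:us-east-1:111122223333:AmazonRekognitionVideoTopic"
channel?.roleArn = "arn:aws:iam::111122223333:role/RekognitionVideoRole"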
The S3 bucket and folder location where training output is placed.
Declaration
Objective-C
@interface AWSRekognitionOutputConfig
Swift
class AWSRekognitionOutputConfig
A parent label for a label. A label can have 0, 1, or more parents.
Declaration
Objective-C
@interface AWSRekognitionParent
Swift
class AWSRekognitionParent
Details about a person detected in a video analysis request.
Declaration
Objective-C
@interface AWSRekognitionPersonDetail
Swift
class AWSRekognitionPersonDetail
Details and path tracking information for a single time a person’s path is tracked in a video. Amazon Rekognition operations that track people’s paths return an array of PersonDetection objects with elements for each time a person’s path is tracked in a video.
For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionPersonDetection
Swift
class AWSRekognitionPersonDetection
Information about a person whose face matches a face(s) in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.
Declaration
Objective-C
@interface AWSRekognitionPersonMatch
Swift
class AWSRekognitionPersonMatch
The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
An array of Point objects makes up a Polygon. A Polygon is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionPoint
Swift
class AWSRekognitionPoint
Indicates the pose of the face as determined by its pitch, roll, and yaw.
Declaration
Objective-C
@interface AWSRekognitionPose
Swift
class AWSRekognitionPose
A description of an Amazon Rekognition Custom Labels project. For more information, see DescribeProjects.
Declaration
Objective-C
@interface AWSRekognitionProjectDescription
Swift
class AWSRekognitionProjectDescription
Describes a project policy in the response from ListProjectPolicies.
Declaration
Objective-C
@interface AWSRekognitionProjectPolicy
Swift
class AWSRekognitionProjectPolicy
A description of a version of an Amazon Rekognition project.
Declaration
Objective-C
@interface AWSRekognitionProjectVersionDescription
Swift
class AWSRekognitionProjectVersionDescription
Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentBodyPart
Swift
class AWSRekognitionProtectiveEquipmentBodyPart
A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentPerson
Swift
class AWSRekognitionProtectiveEquipmentPerson
Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.
Required parameters: [MinConfidence, RequiredEquipmentTypes]
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentSummarizationAttributes
Swift
class AWSRekognitionProtectiveEquipmentSummarizationAttributes
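A minimal sketch; the equipment types and confidence value are examples, and minConfidence and requiredEquipmentTypes are assumed to mirror the required parameters listed above.
import AWSRekognition

// Minimal sketch: summarize face covers and hand covers detected with at
// least 80 percent confidence.
let summarization = AWSRekognitionProtectiveEquipmentSummarizationAttributes()
summarization?.minConfidence = 80
summarization?.requiredEquipmentTypes = ["FACE_COVER", "HAND_COVER"]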
Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).
To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location (BoundingBox) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment.
Declaration
Objective-C
@interface AWSRekognitionProtectiveEquipmentSummary
Swift
class AWSRekognitionProtectiveEquipmentSummary
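Following the counting approach described above, a minimal Swift sketch that derives the per-category totals from the array sizes:
import AWSRekognition

// Minimal sketch: each array holds person IDs that match the ID field of a
// ProtectiveEquipmentPerson in the Persons array of the response.
func logTotals(for summary: AWSRekognitionProtectiveEquipmentSummary) {
    let withPPE = summary.personsWithRequiredEquipment?.count ?? 0
    let withoutPPE = summary.personsWithoutRequiredEquipment?.count ?? 0
    let indeterminate = summary.personsIndeterminate?.count ?? 0
    print("wearing: \(withPPE), not wearing: \(withoutPPE), indeterminate: \(indeterminate)")
}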
Declaration
Objective-C
@interface AWSRekognitionPutProjectPolicyRequest
Swift
class AWSRekognitionPutProjectPolicyRequest
Declaration
Objective-C
@interface AWSRekognitionPutProjectPolicyResponse
Swift
class AWSRekognitionPutProjectPolicyResponse
Declaration
Objective-C
@interface AWSRekognitionRecognizeCelebritiesRequest
Swift
class AWSRekognitionRecognizeCelebritiesRequest
Declaration
Objective-C
@interface AWSRekognitionRecognizeCelebritiesResponse
Swift
class AWSRekognitionRecognizeCelebritiesResponse
Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox or Polygon to set a region of the screen.
A word, face, or label is included in the region if it is more than half in that region. If there is more than one region, the word, face, or label is compared with all regions of the screen. Any object of interest that is more than half in a region is kept in the results.
Declaration
Objective-C
@interface AWSRekognitionRegionOfInterest
Swift
class AWSRekognitionRegionOfInterest
The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels.
Declaration
Objective-C
@interface AWSRekognitionS3Destination
Swift
class AWSRekognitionS3Destination
Provides the S3 bucket name and object name.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
Declaration
Objective-C
@interface AWSRekognitionS3Object
Swift
class AWSRekognitionS3Object
Declaration
Objective-C
@interface AWSRekognitionSearchFacesByImageRequest
Swift
class AWSRekognitionSearchFacesByImageRequest
Declaration
Objective-C
@interface AWSRekognitionSearchFacesByImageResponse
Swift
class AWSRekognitionSearchFacesByImageResponse
Declaration
Objective-C
@interface AWSRekognitionSearchFacesRequest
Swift
class AWSRekognitionSearchFacesRequest
Declaration
Objective-C
@interface AWSRekognitionSearchFacesResponse
Swift
class AWSRekognitionSearchFacesResponse
Declaration
Objective-C
@interface AWSRekognitionSearchUsersByImageRequest
Swift
class AWSRekognitionSearchUsersByImageRequest
Declaration
Objective-C
@interface AWSRekognitionSearchUsersByImageResponse
Swift
class AWSRekognitionSearchUsersByImageResponse
Declaration
Objective-C
@interface AWSRekognitionSearchUsersRequest
Swift
class AWSRekognitionSearchUsersRequest
Declaration
Objective-C
@interface AWSRekognitionSearchUsersResponse
Swift
class AWSRekognitionSearchUsersResponse
Provides face metadata such as the FaceId, BoundingBox, and Confidence of the input face used for search.
Declaration
Objective-C
@interface AWSRekognitionSearchedFace
Swift
class AWSRekognitionSearchedFace
Contains data regarding the input face used for a search.
Declaration
Objective-C
@interface AWSRekognitionSearchedFaceDetails
Swift
class AWSRekognitionSearchedFaceDetails
Contains metadata about a User searched for within a collection.
Declaration
Objective-C
@interface AWSRekognitionSearchedUser
Swift
class AWSRekognitionSearchedUser
A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionSegmentDetection
Swift
class AWSRekognitionSegmentDetection
Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionSegmentTypeInfo
Swift
class AWSRekognitionSegmentTypeInfo
Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionShotSegment
Swift
class AWSRekognitionShotSegment
Indicates whether or not the face is smiling, and the confidence level in the determination.
Declaration
Objective-C
@interface AWSRekognitionSmile
Swift
class AWSRekognitionSmile
Declaration
Objective-C
@interface AWSRekognitionStartCelebrityRecognitionRequest
Swift
class AWSRekognitionStartCelebrityRecognitionRequest
Declaration
Objective-C
@interface AWSRekognitionStartCelebrityRecognitionResponse
Swift
class AWSRekognitionStartCelebrityRecognitionResponse
Declaration
Objective-C
@interface AWSRekognitionStartContentModerationRequest
Swift
class AWSRekognitionStartContentModerationRequest
Declaration
Objective-C
@interface AWSRekognitionStartContentModerationResponse
Swift
class AWSRekognitionStartContentModerationResponse
Declaration
Objective-C
@interface AWSRekognitionStartFaceDetectionRequest
Swift
class AWSRekognitionStartFaceDetectionRequest
Declaration
Objective-C
@interface AWSRekognitionStartFaceDetectionResponse
Swift
class AWSRekognitionStartFaceDetectionResponse
Declaration
Objective-C
@interface AWSRekognitionStartFaceSearchRequest
Swift
class AWSRekognitionStartFaceSearchRequest
Declaration
Objective-C
@interface AWSRekognitionStartFaceSearchResponse
Swift
class AWSRekognitionStartFaceSearchResponse
Declaration
Objective-C
@interface AWSRekognitionStartLabelDetectionRequest
Swift
class AWSRekognitionStartLabelDetectionRequest
Declaration
Objective-C
@interface AWSRekognitionStartLabelDetectionResponse
Swift
class AWSRekognitionStartLabelDetectionResponse
Declaration
Objective-C
@interface AWSRekognitionStartMediaAnalysisJobRequest
Swift
class AWSRekognitionStartMediaAnalysisJobRequest
Declaration
Objective-C
@interface AWSRekognitionStartMediaAnalysisJobResponse
Swift
class AWSRekognitionStartMediaAnalysisJobResponse
Declaration
Objective-C
@interface AWSRekognitionStartPersonTrackingRequest
Swift
class AWSRekognitionStartPersonTrackingRequest
Declaration
Objective-C
@interface AWSRekognitionStartPersonTrackingResponse
Swift
class AWSRekognitionStartPersonTrackingResponse
Declaration
Objective-C
@interface AWSRekognitionStartProjectVersionRequest
Swift
class AWSRekognitionStartProjectVersionRequest
Declaration
Objective-C
@interface AWSRekognitionStartProjectVersionResponse
Swift
class AWSRekognitionStartProjectVersionResponse
Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
Declaration
Objective-C
@interface AWSRekognitionStartSegmentDetectionFilters
Swift
class AWSRekognitionStartSegmentDetectionFilters
Declaration
Objective-C
@interface AWSRekognitionStartSegmentDetectionRequest
Swift
class AWSRekognitionStartSegmentDetectionRequest
Declaration
Objective-C
@interface AWSRekognitionStartSegmentDetectionResponse
Swift
class AWSRekognitionStartSegmentDetectionResponse
Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
Declaration
Objective-C
@interface AWSRekognitionStartShotDetectionFilter
Swift
class AWSRekognitionStartShotDetectionFilter
Declaration
Objective-C
@interface AWSRekognitionStartStreamProcessorRequest
Swift
class AWSRekognitionStartStreamProcessorRequest
Declaration
Objective-C
@interface AWSRekognitionStartStreamProcessorResponse
Swift
class AWSRekognitionStartStreamProcessorResponse
Filters for the technical segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
Declaration
Objective-C
@interface AWSRekognitionStartTechnicalCueDetectionFilter
Swift
class AWSRekognitionStartTechnicalCueDetectionFilter
Set of optional parameters that let you set the criteria text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.
Declaration
Objective-C
@interface AWSRekognitionStartTextDetectionFilters
Swift
class AWSRekognitionStartTextDetectionFilters
Declaration
Objective-C
@interface AWSRekognitionStartTextDetectionRequest
Swift
class AWSRekognitionStartTextDetectionRequest
Declaration
Objective-C
@interface AWSRekognitionStartTextDetectionResponse
Swift
class AWSRekognitionStartTextDetectionResponse
Declaration
Objective-C
@interface AWSRekognitionStopProjectVersionRequest
Swift
class AWSRekognitionStopProjectVersionRequest
Declaration
Objective-C
@interface AWSRekognitionStopProjectVersionResponse
Swift
class AWSRekognitionStopProjectVersionResponse
Declaration
Objective-C
@interface AWSRekognitionStopStreamProcessorRequest
Swift
class AWSRekognitionStopStreamProcessorRequest
Declaration
Objective-C
@interface AWSRekognitionStopStreamProcessorResponse
Swift
class AWSRekognitionStopStreamProcessorResponse
This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessingStartSelectorSwift
class AWSRekognitionStreamProcessingStartSelector -
Specifies when to stop processing the stream. You can specify a maximum amount of time to process the video.
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessingStopSelectorSwift
class AWSRekognitionStreamProcessingStopSelector -
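A minimal sketch of the two selectors in a StartStreamProcessor call, assuming a label detection stream processor whose name is a placeholder; the kvsStreamStartSelector spelling is the Swift-imported name of the Objective-C KVSStreamStartSelector property as I understand the importer's renaming, so verify against the header:

import AWSRekognition

// Sketch: starting a previously created label detection stream processor,
// bounded to at most two minutes of processing.
func startBoundedStreamProcessor() {
    guard let request = AWSRekognitionStartStreamProcessorRequest(),
          let startSelector = AWSRekognitionStreamProcessingStartSelector(),
          let kvsSelector = AWSRekognitionKinesisVideoStreamStartSelector(),
          let stopSelector = AWSRekognitionStreamProcessingStopSelector() else { return }

    request.name = "my-label-stream-processor"  // placeholder

    // Begin processing at the current point in the Kinesis video stream,
    // identified here by a producer timestamp in milliseconds.
    kvsSelector.producerTimestamp = NSNumber(value: Int64(Date().timeIntervalSince1970 * 1000))
    startSelector.kvsStreamStartSelector = kvsSelector  // assumed Swift name
    request.startSelector = startSelector

    // Stop after at most 120 seconds of processing.
    stopSelector.maxDurationInSeconds = 120
    request.stopSelector = stopSelector

    AWSRekognition.default().startStreamProcessor(request).continueWith { (task) -> Any? in
        if let error = task.error {
            print("StartStreamProcessor failed: \(error)")
        }
        return nil
    }
}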
An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessorSwift
class AWSRekognitionStreamProcessor -
Allows you to opt in to or opt out of sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams.
Required parameters: [OptIn]
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessorDataSharingPreferenceSwift
class AWSRekognitionStreamProcessorDataSharingPreference -
Information about the source streaming video.
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessorInputSwift
class AWSRekognitionStreamProcessorInput -
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition detects a person at second 2, a pet at second 4, and a person again at second 5, Amazon Rekognition sends 2 object class detected notifications, one for a person at second 2 and one for a pet at second 4.
Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete.
Required parameters: [SNSTopicArn]
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessorNotificationChannelSwift
class AWSRekognitionStreamProcessorNotificationChannel -
Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessorOutputSwift
class AWSRekognitionStreamProcessorOutput -
Input parameters used in a streaming video analyzed by an Amazon Rekognition stream processor. You can use FaceSearch to recognize faces in a streaming video, or you can use ConnectedHome to detect labels.
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessorSettingsSwift
class AWSRekognitionStreamProcessorSettings -
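The preceding entries (the stream processor itself, its input, output, data sharing preference, and settings) come together in a CreateStreamProcessor call. The sketch below wires them up for face search; every ARN, name, and collection ID is a placeholder, not a value from this reference:

import AWSRekognition

// Sketch: creating a face search stream processor from a Kinesis video
// stream to a Kinesis data stream. All identifiers are placeholders.
func createFaceSearchStreamProcessor() {
    guard let request = AWSRekognitionCreateStreamProcessorRequest(),
          let input = AWSRekognitionStreamProcessorInput(),
          let kinesisVideo = AWSRekognitionKinesisVideoStream(),
          let output = AWSRekognitionStreamProcessorOutput(),
          let kinesisData = AWSRekognitionKinesisDataStream(),
          let settings = AWSRekognitionStreamProcessorSettings(),
          let faceSearch = AWSRekognitionFaceSearchSettings(),
          let dataSharing = AWSRekognitionStreamProcessorDataSharingPreference() else { return }

    request.name = "my-face-search-processor"                          // placeholder
    request.roleArn = "arn:aws:iam::123456789012:role/RekognitionRole" // placeholder

    // Source: the Kinesis video stream carrying the live video.
    kinesisVideo.arn = "arn:aws:kinesisvideo:us-east-1:123456789012:stream/my-kvs/1"
    input.kinesisVideoStream = kinesisVideo
    request.input = input

    // Sink: face search results go to a Kinesis data stream.
    kinesisData.arn = "arn:aws:kinesis:us-east-1:123456789012:stream/my-results"
    output.kinesisDataStream = kinesisData
    request.output = output

    // Search incoming faces against an existing collection.
    faceSearch.collectionId = "my-collection"  // placeholder
    faceSearch.faceMatchThreshold = 85
    settings.faceSearch = faceSearch
    request.settings = settings

    // Opt out of data sharing for this processor (ignored if the account
    // has already opted out at the account level).
    dataSharing.optIn = false
    request.dataSharingPreference = dataSharing

    AWSRekognition.default().createStreamProcessor(request).continueWith { (task) -> Any? in
        if let error = task.error {
            print("CreateStreamProcessor failed: \(error)")
        } else if let arn = task.result?.streamProcessorArn {
            print("Created stream processor \(arn)")
        }
        return nil
    }
}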
The stream processor settings that you want to update. ConnectedHome settings can be updated to detect different labels with a different minimum confidence.
See moreDeclaration
Objective-C
@interface AWSRekognitionStreamProcessorSettingsForUpdateSwift
class AWSRekognitionStreamProcessorSettingsForUpdate -
The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.
You get the training summary S3 bucket location by calling DescribeProjectVersions.
See moreDeclaration
Objective-C
@interface AWSRekognitionSummarySwift
class AWSRekognitionSummary -
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
See moreDeclaration
Objective-C
@interface AWSRekognitionSunglassesSwift
class AWSRekognitionSunglasses -
Declaration
Objective-C
@interface AWSRekognitionTagResourceRequestSwift
class AWSRekognitionTagResourceRequest -
Declaration
Objective-C
@interface AWSRekognitionTagResourceResponseSwift
class AWSRekognitionTagResourceResponse -
Information about a technical cue segment. For more information, see SegmentDetection.
See moreDeclaration
Objective-C
@interface AWSRekognitionTechnicalCueSegmentSwift
class AWSRekognitionTechnicalCueSegment -
The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition uses the training dataset to create a test dataset with a temporary split of the training dataset.
See moreDeclaration
Objective-C
@interface AWSRekognitionTestingDataSwift
class AWSRekognitionTestingData -
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.
See moreDeclaration
Objective-C
@interface AWSRekognitionTestingDataResultSwift
class AWSRekognitionTestingDataResult -
Information about a word or line of text detected by DetectText.
The DetectedText field contains the text that Amazon Rekognition detected in the image.
Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.
For more information, see Detecting text in the Amazon Rekognition Developer Guide.
See moreDeclaration
Objective-C
@interface AWSRekognitionTextDetectionSwift
class AWSRekognitionTextDetection -
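A short sketch of walking DetectText results with these identifiers, grouping WORD detections under their parent LINE. The property names used (identifier, parentId, types) reflect how this SDK typically renames the API's Id and Type fields; treat them as assumptions to verify against the model header:

import AWSRekognition

// Sketch: reconstruct lines of text from DetectText output by grouping
// WORD detections under their parent LINE via ParentId.
func printWordsGroupedByLine(_ detections: [AWSRekognitionTextDetection]) {
    let lines = detections.filter { $0.types == .line }
    let words = detections.filter { $0.types == .word }

    for line in lines {
        guard let lineId = line.identifier else { continue }
        print("Line \(lineId): \(line.detectedText ?? "")")
        // Words carry their line's Id as ParentId; the word Id doubles as
        // the word's index within the line.
        let children = words
            .filter { $0.parentId == lineId }
            .sorted { ($0.identifier?.intValue ?? 0) < ($1.identifier?.intValue ?? 0) }
        for word in children {
            print("  word \(word.identifier ?? 0): \(word.detectedText ?? "")"
                  + " (confidence: \(word.confidence ?? 0))")
        }
    }
}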
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
See moreDeclaration
Objective-C
@interface AWSRekognitionTextDetectionResultSwift
class AWSRekognitionTextDetectionResult -
The dataset used for training.
See moreDeclaration
Objective-C
@interface AWSRekognitionTrainingDataSwift
class AWSRekognitionTrainingData -
The data validation manifest created for the training dataset during model training.
See moreDeclaration
Objective-C
@interface AWSRekognitionTrainingDataResultSwift
class AWSRekognitionTrainingDataResult -
A face that IndexFaces detected, but didn’t index. Use the Reasons response attribute to determine why a face wasn’t indexed.
See moreDeclaration
Objective-C
@interface AWSRekognitionUnindexedFaceSwift
class AWSRekognitionUnindexedFace -
Face details inferred from the image but not used for search. The response attribute contains reasons why a face wasn’t used for search.
See moreDeclaration
Objective-C
@interface AWSRekognitionUnsearchedFaceSwift
class AWSRekognitionUnsearchedFace -
Contains metadata, such as FaceId, UserID, and Reasons, for a face that was unsuccessfully associated.
See moreDeclaration
Objective-C
@interface AWSRekognitionUnsuccessfulFaceAssociationSwift
class AWSRekognitionUnsuccessfulFaceAssociation -
Contains metadata, such as FaceId, UserID, and Reasons, for a face that was unsuccessfully deleted.
See moreDeclaration
Objective-C
@interface AWSRekognitionUnsuccessfulFaceDeletionSwift
class AWSRekognitionUnsuccessfulFaceDeletion -
Contains metadata, such as FaceId, UserID, and Reasons, for a face that was unsuccessfully disassociated.
See moreDeclaration
Objective-C
@interface AWSRekognitionUnsuccessfulFaceDisassociationSwift
class AWSRekognitionUnsuccessfulFaceDisassociation -
Declaration
Objective-C
@interface AWSRekognitionUntagResourceRequestSwift
class AWSRekognitionUntagResourceRequest -
Declaration
Objective-C
@interface AWSRekognitionUntagResourceResponseSwift
class AWSRekognitionUntagResourceResponse -
Declaration
Objective-C
@interface AWSRekognitionUpdateDatasetEntriesRequestSwift
class AWSRekognitionUpdateDatasetEntriesRequest -
Declaration
Objective-C
@interface AWSRekognitionUpdateDatasetEntriesResponseSwift
class AWSRekognitionUpdateDatasetEntriesResponse -
Declaration
Objective-C
@interface AWSRekognitionUpdateStreamProcessorRequestSwift
class AWSRekognitionUpdateStreamProcessorRequest -
Declaration
Objective-C
@interface AWSRekognitionUpdateStreamProcessorResponseSwift
class AWSRekognitionUpdateStreamProcessorResponse -
Metadata of the user stored in a collection.
See moreDeclaration
Objective-C
@interface AWSRekognitionUserSwift
class AWSRekognitionUser -
Provides UserID metadata along with the confidence in the match of this UserID with the input face.
See moreDeclaration
Objective-C
@interface AWSRekognitionUserMatchSwift
class AWSRekognitionUserMatch -
Contains the Amazon S3 bucket location of the validation data for a model training job.
The validation data includes error information for individual JSON Lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.
You get the ValidationData object for the training dataset (TrainingDataResult) and the test dataset (TestingDataResult) by calling DescribeProjectVersions. The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.
See moreDeclaration
Objective-C
@interface AWSRekognitionValidationDataSwift
class AWSRekognitionValidationData -
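A hedged sketch of retrieving these manifest locations with DescribeProjectVersions (the project ARN is a caller-supplied placeholder):

import AWSRekognition

// Sketch: print the validation-data manifest locations for each version
// of a Custom Labels project.
func printValidationManifests(projectArn: String) {
    guard let request = AWSRekognitionDescribeProjectVersionsRequest() else { return }
    request.projectArn = projectArn

    AWSRekognition.default().describeProjectVersions(request).continueWith { (task) -> Any? in
        guard let versions = task.result?.projectVersionDescriptions else {
            print("DescribeProjectVersions failed: \(String(describing: task.error))")
            return nil
        }
        for version in versions {
            // The assets array holds a single Asset whose GroundTruthManifest
            // points at the S3 location of the validation data.
            if let asset = version.trainingDataResult?.validation?.assets?.first,
               let s3Object = asset.groundTruthManifest?.s3Object {
                print("Training validation manifest: s3://\(s3Object.bucket ?? "")/\(s3Object.name ?? "")")
            }
            if let asset = version.testingDataResult?.validation?.assets?.first,
               let s3Object = asset.groundTruthManifest?.s3Object {
                print("Testing validation manifest: s3://\(s3Object.bucket ?? "")/\(s3Object.name ?? "")")
            }
        }
        return nil
    }
}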
Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.
See moreDeclaration
Objective-C
@interface AWSRekognitionVideoSwift
class AWSRekognitionVideo -
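For example, a StartLabelDetection call might point Video at an .mp4 in S3 like this (bucket and key are placeholders):

import AWSRekognition

// Sketch: start an asynchronous label detection job on a stored video.
func startLabelDetection() {
    guard let request = AWSRekognitionStartLabelDetectionRequest(),
          let video = AWSRekognitionVideo(),
          let s3Object = AWSRekognitionS3Object() else { return }

    s3Object.bucket = "my-video-bucket"   // placeholder
    s3Object.name = "clips/backyard.mp4"  // placeholder (.mp4, .mov, or .avi)
    video.s3Object = s3Object

    request.video = video
    request.minConfidence = 70  // only return labels detected with >= 70% confidence

    AWSRekognition.default().startLabelDetection(request).continueWith { (task) -> Any? in
        if let error = task.error {
            print("StartLabelDetection failed: \(error)")
        } else if let jobId = task.result?.jobId {
            print("Poll GetLabelDetection with job ID \(jobId)")
        }
        return nil
    }
}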
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
See moreDeclaration
Objective-C
@interface AWSRekognitionVideoMetadataSwift
class AWSRekognitionVideoMetadata -
Undocumented
See moreDeclaration
Objective-C
@interface AWSRekognitionResources : NSObject
+ (instancetype)sharedInstance;
- (NSDictionary *)JSONObject;
@endSwift
class AWSRekognitionResources : NSObject -
This is the API Reference for Amazon Rekognition Image, Amazon Rekognition Custom Labels, Amazon Rekognition Stored Video, and Amazon Rekognition Streaming Video. It provides descriptions of actions, data types, common parameters, and common errors.
Amazon Rekognition Image
Amazon Rekognition Custom Labels
Amazon Rekognition Video Stored Video
Amazon Rekognition Video Streaming Video
Declaration
Objective-C
@interface AWSRekognitionSwift
class AWSRekognition
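A minimal end-to-end sketch of configuring the client and calling DetectLabels; the Cognito identity pool ID, bucket, and key are placeholders:

import AWSCore
import AWSRekognition

// Sketch: configure the SDK with Cognito credentials, then detect labels
// in an image stored in S3.
func configureAndDetectLabels() {
    let credentials = AWSCognitoCredentialsProvider(
        regionType: .USEast1,
        identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000")  // placeholder
    let configuration = AWSServiceConfiguration(
        region: .USEast1,
        credentialsProvider: credentials)
    AWSServiceManager.default().defaultServiceConfiguration = configuration

    guard let request = AWSRekognitionDetectLabelsRequest(),
          let image = AWSRekognitionImage(),
          let s3Object = AWSRekognitionS3Object() else { return }

    s3Object.bucket = "my-image-bucket"  // placeholder
    s3Object.name = "photos/garden.jpg"  // placeholder
    image.s3Object = s3Object
    request.image = image
    request.maxLabels = 10
    request.minConfidence = 75

    AWSRekognition.default().detectLabels(request).continueWith { (task) -> Any? in
        if let error = task.error {
            print("DetectLabels failed: \(error)")
        } else if let labels = task.result?.labels {
            for label in labels {
                print("\(label.name ?? "?"): \(label.confidence ?? 0)")
            }
        }
        return nil
    }
}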