AWSRekognitionGetSegmentDetectionResponse
Objective-C
@interface AWSRekognitionGetSegmentDetectionResponse
Swift
class AWSRekognitionGetSegmentDetectionResponse
-
An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionAudioMetadata *> *_Nullable audioMetadata;
Swift
var audioMetadata: [AWSRekognitionAudioMetadata]? { get set }
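A minimal sketch of reading these values, assuming response is an AWSRekognitionGetSegmentDetectionResponse obtained from a getSegmentDetection call:
Swift
import AWSRekognition

// Sketch: inspect each audio stream reported in the response.
func printAudioStreams(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    for audio in response.audioMetadata ?? [] {
        let codec = audio.codec ?? "unknown codec"
        let channels = audio.numberOfChannels?.intValue ?? 0
        let sampleRate = audio.sampleRate?.intValue ?? 0
        let durationMs = audio.durationMillis?.int64Value ?? 0
        print("\(codec): \(channels) channel(s), \(sampleRate) Hz, \(durationMs) ms")
    }
}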
-
Job identifier for the segment detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartSegmentDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable jobId;
Swift
var jobId: String? { get set }
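A minimal sketch of the intended flow, using the SDK's completion-handler form of getSegmentDetection; "my-job-id" is a placeholder for the JobId returned by StartSegmentDetection:
Swift
import AWSRekognition

// Sketch: fetch results for a job started earlier with StartSegmentDetection.
let rekognition = AWSRekognition.default()
guard let request = AWSRekognitionGetSegmentDetectionRequest() else { fatalError() }
request.jobId = "my-job-id" // placeholder JobId from StartSegmentDetection
rekognition.getSegmentDetection(request) { response, error in
    if let error = error {
        print("GetSegmentDetection failed: \(error)")
    } else if let response = response {
        print("Job \(response.jobId ?? "?") returned \(response.segments?.count ?? 0) segment(s)")
    }
}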
-
Current status of the segment detection job.
Declaration
Objective-C
@property (nonatomic) AWSRekognitionVideoJobStatus jobStatus;
Swift
var jobStatus: AWSRekognitionVideoJobStatus { get set }
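Callers typically branch on this value before consuming results; a minimal sketch:
Swift
import AWSRekognition

// Sketch: branch on the job status before reading results.
func handle(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    switch response.jobStatus {
    case .inProgress:
        print("Still processing; poll again later.")
    case .succeeded:
        print("Done: \(response.segments?.count ?? 0) segment(s) on this page.")
    case .failed:
        print("Job failed; see statusMessage for details.")
    default:
        print("Unrecognized job status.")
    }
}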
-
A job identifier specified in the call to StartSegmentDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable jobTag;
Swift
var jobTag: String? { get set }
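One use of the echoed tag is correlating a completed job with whatever started it; a minimal sketch, with a hypothetical tag value:
Swift
import AWSRekognition

// Sketch: route a finished job by the JobTag supplied to StartSegmentDetection.
// "nightly-archive-scan" is a hypothetical tag value.
func route(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    if response.jobTag == "nightly-archive-scan" {
        print("Archive scan finished.")
    } else {
        print("Job with tag \(response.jobTag ?? "none") finished.")
    }
}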
-
If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of segments.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable nextToken;
Swift
var nextToken: String? { get set }
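A minimal pagination sketch: feed each NextToken back into the next GetSegmentDetection call until no token is returned:
Swift
import AWSRekognition

// Sketch: page through all segments for a job by chaining NextToken.
func fetchAllSegments(jobId: String,
                      nextToken: String? = nil,
                      collected: [AWSRekognitionSegmentDetection] = [],
                      completion: @escaping ([AWSRekognitionSegmentDetection]) -> Void) {
    guard let request = AWSRekognitionGetSegmentDetectionRequest() else { return }
    request.jobId = jobId
    request.nextToken = nextToken
    AWSRekognition.default().getSegmentDetection(request) { response, error in
        guard error == nil, let response = response else {
            completion(collected) // stop on error, returning what we have
            return
        }
        let all = collected + (response.segments ?? [])
        if let token = response.nextToken {
            fetchAllSegments(jobId: jobId, nextToken: token, collected: all, completion: completion)
        } else {
            completion(all) // no more pages
        }
    }
}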
-
An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionSegmentDetection *> *_Nullable segments;
Swift
var segments: [AWSRekognitionSegmentDetection]? { get set }
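A minimal sketch of walking one page of segments; technical cues and shots are distinguished here by which nested detail object is present:
Swift
import AWSRekognition

// Sketch: print each detected segment with its timestamps.
func printSegments(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    for segment in response.segments ?? [] {
        let start = segment.startTimestampMillis?.int64Value ?? 0
        let end = segment.endTimestampMillis?.int64Value ?? 0
        if let cue = segment.technicalCueSegment {
            print("TECHNICAL_CUE \(start)-\(end) ms, confidence \(cue.confidence ?? 0)")
        } else if let shot = segment.shotSegment {
            print("SHOT #\(shot.index ?? 0) \(start)-\(end) ms, confidence \(shot.confidence ?? 0)")
        }
    }
}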
-
An array containing the segment types requested in the call to StartSegmentDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionSegmentTypeInfo *> *_Nullable selectedSegmentTypes;
Swift
var selectedSegmentTypes: [AWSRekognitionSegmentTypeInfo]? { get set }
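A minimal sketch of reporting the requested types; this assumes the iOS SDK's usual mapping of the service's Type member to a types property:
Swift
import AWSRekognition

// Sketch: list the segment types the job was asked to detect.
func printRequestedTypes(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    for info in response.selectedSegmentTypes ?? [] {
        // Assumption: the SDK exposes the service's Type member as `types`.
        let name = info.types == .technicalCue ? "TECHNICAL_CUE"
                 : info.types == .shot ? "SHOT" : "UNKNOWN"
        print("\(name), model version \(info.modelVersion ?? "?")")
    }
}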
-
If the job fails, StatusMessage provides a descriptive error message.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable statusMessage;
Swift
var statusMessage: String? { get set }
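A minimal sketch of surfacing the message on failure:
Swift
import AWSRekognition

// Sketch: log the service's error message when a job fails.
func logIfFailed(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    if response.jobStatus == .failed {
        print("Segment detection failed: \(response.statusMessage ?? "no details provided")")
    }
}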
-
Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.
Declaration
Objective-C
@property (nonatomic, strong) AWSRekognitionVideo *_Nullable video;
Swift
var video: AWSRekognitionVideo? { get set }
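A minimal sketch of describing an S3-hosted video for a start operation; the bucket and object key are placeholders:
Swift
import AWSRekognition

// Sketch: point a start operation such as StartSegmentDetection at a video
// in S3. Bucket and object key are placeholders.
guard let video = AWSRekognitionVideo(),
      let s3Object = AWSRekognitionS3Object() else { fatalError() }
s3Object.bucket = "my-video-bucket"
s3Object.name = "clips/episode-01.mp4"
video.s3Object = s3Object
// `video` would then be assigned to the start request's video property.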
-
Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionVideoMetadata *> *_Nullable videoMetadata;
Swift
var videoMetadata: [AWSRekognitionVideoMetadata]? { get set }
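A minimal sketch of reading the stream information; the array currently holds a single element:
Swift
import AWSRekognition

// Sketch: print the metadata of the analyzed video stream.
func printVideoMetadata(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    guard let meta = response.videoMetadata?.first else { return }
    let width = meta.frameWidth?.intValue ?? 0
    let height = meta.frameHeight?.intValue ?? 0
    let fps = meta.frameRate?.doubleValue ?? 0
    let durationMs = meta.durationMillis?.int64Value ?? 0
    print("\(meta.codec ?? "?") \(meta.format ?? "?"), \(width)x\(height) @ \(fps) fps, \(durationMs) ms")
}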