AWSRekognitionGetSegmentDetectionResponse
Objective-C
@interface AWSRekognitionGetSegmentDetectionResponse
Swift
class AWSRekognitionGetSegmentDetectionResponse
-
An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionAudioMetadata *> *_Nullable audioMetadata;
Swift
var audioMetadata: [AWSRekognitionAudioMetadata]? { get set }
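As a minimal Swift sketch of reading these values from a response: the AWSRekognitionAudioMetadata property names used here (codec, numberOfChannels, sampleRate, durationMillis) are assumed from the service's AudioMetadata shape and may differ in your SDK version.
import AWSRekognition

// Sketch only: prints metadata for each audio stream on this page of results.
// The property names on AWSRekognitionAudioMetadata are assumptions; verify
// them against your SDK headers.
func printAudioMetadata(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    for audio in response.audioMetadata ?? [] {
        print("codec: \(audio.codec ?? "unknown"), " +
              "channels: \(audio.numberOfChannels ?? 0), " +
              "sample rate: \(audio.sampleRate ?? 0) Hz, " +
              "duration: \(audio.durationMillis ?? 0) ms")
    }
}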
-
Current status of the segment detection job.
Declaration
Objective-C
@property (nonatomic) AWSRekognitionVideoJobStatus jobStatus;
Swift
var jobStatus: AWSRekognitionVideoJobStatus { get set }
-
If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of segments.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable nextToken;
Swift
var nextToken: String? { get set }
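As a rough sketch of how the pagination token is typically used, the Swift snippet below pages through all results for a job. The client call getSegmentDetection(_:completionHandler:), AWSRekognition.default(), and the request fields jobId, nextToken, and maxResults follow the SDK's usual generated pattern, but treat the exact names as assumptions to verify against your SDK version; a registered default AWSServiceConfiguration is also assumed.
import AWSRekognition

// Sketch only: collects every page of segments for a completed job by
// following nextToken until it is nil.
func fetchAllSegments(jobId: String,
                      completion: @escaping ([AWSRekognitionSegmentDetection]) -> Void) {
    let rekognition = AWSRekognition.default()
    var collected: [AWSRekognitionSegmentDetection] = []

    func fetchPage(token: String?) {
        // Request type and field names assumed from the SDK's generated pattern.
        guard let request = AWSRekognitionGetSegmentDetectionRequest() else {
            completion(collected)
            return
        }
        request.jobId = jobId
        request.nextToken = token
        request.maxResults = 1000

        rekognition.getSegmentDetection(request) { response, error in
            guard let response = response, error == nil else {
                completion(collected)
                return
            }
            collected.append(contentsOf: response.segments ?? [])
            if let next = response.nextToken {
                fetchPage(token: next)   // more pages remain
            } else {
                completion(collected)    // last page reached
            }
        }
    }

    fetchPage(token: nil)
}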
-
An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type, the array is sorted by timestamp values.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionSegmentDetection *> *_Nullable segments;
Swift
var segments: [AWSRekognitionSegmentDetection]? { get set }
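As an illustration of that ordering, a minimal Swift sketch that walks the array in the order it is returned. The AWSRekognitionSegmentDetection timestamp properties used here (startTimestampMillis, endTimestampMillis, durationMillis) are assumed from the service's SegmentDetection shape; verify them against your SDK headers.
import AWSRekognition

// Sketch only: segments arrive grouped by segment type and ordered by
// timestamp within each type, so printing them in array order preserves that
// grouping. The timestamp property names are assumptions.
func printSegments(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    for segment in response.segments ?? [] {
        let start = segment.startTimestampMillis ?? 0
        let end = segment.endTimestampMillis ?? 0
        let duration = segment.durationMillis ?? 0
        print("segment from \(start) ms to \(end) ms (\(duration) ms)")
    }
}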
-
An array containing the segment types requested in the call to StartSegmentDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionSegmentTypeInfo *> *_Nullable selectedSegmentTypes;
Swift
var selectedSegmentTypes: [AWSRekognitionSegmentTypeInfo]? { get set }
-
If the job fails, StatusMessage provides a descriptive error message.
Declaration
Objective-C
@property (nonatomic, strong) NSString *_Nullable statusMessage;
Swift
var statusMessage: String? { get set }
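Taken together with JobStatus, this supports a simple check before reading results. A sketch in Swift, assuming the AWSRekognitionVideoJobStatus cases import as .inProgress, .succeeded, and .failed (mirroring the service's IN_PROGRESS, SUCCEEDED, and FAILED values); confirm the case names against your SDK version.
import AWSRekognition

// Sketch only: gate result processing on the job status and surface the
// error message when the job failed.
func handle(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    switch response.jobStatus {
    case .succeeded:
        print("Job finished with \(response.segments?.count ?? 0) segments on this page")
    case .inProgress:
        print("Job still running; poll again later")
    case .failed:
        print("Job failed: \(response.statusMessage ?? "no status message")")
    default:
        print("Unexpected job status")
    }
}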
-
Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.
Declaration
Objective-C
@property (nonatomic, strong) NSArray<AWSRekognitionVideoMetadata *> *_Nullable videoMetadata;
Swift
var videoMetadata: [AWSRekognitionVideoMetadata]? { get set }
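A brief Swift sketch of reading that single entry. The AWSRekognitionVideoMetadata property names used here (codec, format, frameRate, durationMillis) are assumed from the service's VideoMetadata shape; verify them against your SDK headers.
import AWSRekognition

// Sketch only: reads the first (and currently only) VideoMetadata entry.
// The property names are assumptions; check your SDK version.
func printVideoMetadata(_ response: AWSRekognitionGetSegmentDetectionResponse) {
    guard let video = response.videoMetadata?.first else {
        print("No video metadata on this page")
        return
    }
    print("codec: \(video.codec ?? "unknown"), " +
          "format: \(video.format ?? "unknown"), " +
          "frame rate: \(video.frameRate ?? 0) fps, " +
          "duration: \(video.durationMillis ?? 0) ms")
}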