public class GetContentModerationResult
extends java.lang.Object
implements java.io.Serializable
| Constructor and Description |
|---|
| GetContentModerationResult() |
| Modifier and Type | Method and Description |
|---|---|
| boolean | equals(java.lang.Object obj) |
| java.lang.String | getJobStatus() The current status of the unsafe content analysis job. |
| java.util.List<ContentModerationDetection> | getModerationLabels() The detected unsafe content labels and the time(s) they were detected. |
| java.lang.String | getModerationModelVersion() Version number of the moderation detection model that was used to detect unsafe content. |
| java.lang.String | getNextToken() If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels. |
| java.lang.String | getStatusMessage() If the job fails, StatusMessage provides a descriptive error message. |
| VideoMetadata | getVideoMetadata() Information about a video that Amazon Rekognition analyzed. |
| int | hashCode() |
| void | setJobStatus(java.lang.String jobStatus) The current status of the unsafe content analysis job. |
| void | setJobStatus(VideoJobStatus jobStatus) The current status of the unsafe content analysis job. |
| void | setModerationLabels(java.util.Collection<ContentModerationDetection> moderationLabels) The detected unsafe content labels and the time(s) they were detected. |
| void | setModerationModelVersion(java.lang.String moderationModelVersion) Version number of the moderation detection model that was used to detect unsafe content. |
| void | setNextToken(java.lang.String nextToken) If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels. |
| void | setStatusMessage(java.lang.String statusMessage) If the job fails, StatusMessage provides a descriptive error message. |
| void | setVideoMetadata(VideoMetadata videoMetadata) Information about a video that Amazon Rekognition analyzed. |
| java.lang.String | toString() Returns a string representation of this object; useful for testing and debugging. |
| GetContentModerationResult | withJobStatus(java.lang.String jobStatus) The current status of the unsafe content analysis job. |
| GetContentModerationResult | withJobStatus(VideoJobStatus jobStatus) The current status of the unsafe content analysis job. |
| GetContentModerationResult | withModerationLabels(java.util.Collection<ContentModerationDetection> moderationLabels) The detected unsafe content labels and the time(s) they were detected. |
| GetContentModerationResult | withModerationLabels(ContentModerationDetection... moderationLabels) The detected unsafe content labels and the time(s) they were detected. |
| GetContentModerationResult | withModerationModelVersion(java.lang.String moderationModelVersion) Version number of the moderation detection model that was used to detect unsafe content. |
| GetContentModerationResult | withNextToken(java.lang.String nextToken) If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels. |
| GetContentModerationResult | withStatusMessage(java.lang.String statusMessage) If the job fails, StatusMessage provides a descriptive error message. |
| GetContentModerationResult | withVideoMetadata(VideoMetadata videoMetadata) Information about a video that Amazon Rekognition analyzed. |
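The following is a minimal sketch of how this result object is typically consumed. The AmazonRekognition client, the GetContentModerationRequest type with its withJobId(...) setter, and the package names in the imports are assumptions drawn from the surrounding SDK rather than from this page; only the getters listed in the summary above are documented here.

```java
import java.util.List;

import com.amazonaws.services.rekognition.AmazonRekognition;                 // assumed client interface
import com.amazonaws.services.rekognition.model.ContentModerationDetection;
import com.amazonaws.services.rekognition.model.GetContentModerationRequest; // assumed request type
import com.amazonaws.services.rekognition.model.GetContentModerationResult;

public class ModerationResultExample {

    // Reads one page of results for a previously started unsafe content analysis job.
    static void printModerationPage(AmazonRekognition rekognition, String jobId) {
        GetContentModerationRequest request = new GetContentModerationRequest()
                .withJobId(jobId);                                            // assumed request setter

        GetContentModerationResult result = rekognition.getContentModeration(request);

        System.out.println("Job status:    " + result.getJobStatus());
        System.out.println("Model version: " + result.getModerationModelVersion());

        List<ContentModerationDetection> labels = result.getModerationLabels();
        System.out.println("Detections on this page: " + labels.size());
    }
}
```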
public java.lang.String getJobStatus()

The current status of the unsafe content analysis job.

Constraints:
Allowed Values: IN_PROGRESS, SUCCEEDED, FAILED

Returns:
The current status of the unsafe content analysis job.

See Also:
VideoJobStatus
public void setJobStatus(java.lang.String jobStatus)

The current status of the unsafe content analysis job.

Constraints:
Allowed Values: IN_PROGRESS, SUCCEEDED, FAILED

Parameters:
jobStatus - The current status of the unsafe content analysis job.

See Also:
VideoJobStatus
public GetContentModerationResult withJobStatus(java.lang.String jobStatus)

The current status of the unsafe content analysis job.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Allowed Values: IN_PROGRESS, SUCCEEDED, FAILED

Parameters:
jobStatus - The current status of the unsafe content analysis job.

See Also:
VideoJobStatus
public void setJobStatus(VideoJobStatus jobStatus)

The current status of the unsafe content analysis job.

Constraints:
Allowed Values: IN_PROGRESS, SUCCEEDED, FAILED

Parameters:
jobStatus - The current status of the unsafe content analysis job.

See Also:
VideoJobStatus
public GetContentModerationResult withJobStatus(VideoJobStatus jobStatus)

The current status of the unsafe content analysis job.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Allowed Values: IN_PROGRESS, SUCCEEDED, FAILED

Parameters:
jobStatus - The current status of the unsafe content analysis job.

See Also:
VideoJobStatus
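Because the allowed status values are IN_PROGRESS, SUCCEEDED, and FAILED, callers usually poll getJobStatus() until the job leaves IN_PROGRESS. A rough sketch, continuing the ModerationResultExample class above (the client call, the request type, and the polling interval are assumptions):

```java
// Polls until the unsafe content analysis job finishes; a sketch, not production retry logic.
static GetContentModerationResult waitForJob(AmazonRekognition rekognition, String jobId)
        throws InterruptedException {
    while (true) {
        GetContentModerationResult result = rekognition.getContentModeration(
                new GetContentModerationRequest().withJobId(jobId)); // assumed request type
        String status = result.getJobStatus();                       // IN_PROGRESS, SUCCEEDED, or FAILED
        if (!"IN_PROGRESS".equals(status)) {
            return result;                                           // SUCCEEDED or FAILED
        }
        Thread.sleep(5_000);                                         // arbitrary polling interval
    }
}
```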
public java.lang.String getStatusMessage()

If the job fails, StatusMessage provides a descriptive error message.

Returns:
If the job fails, StatusMessage provides a descriptive error message.
public void setStatusMessage(java.lang.String statusMessage)

If the job fails, StatusMessage provides a descriptive error message.

Parameters:
statusMessage - If the job fails, StatusMessage provides a descriptive error message.
public GetContentModerationResult withStatusMessage(java.lang.String statusMessage)

If the job fails, StatusMessage provides a descriptive error message.

Returns a reference to this object so that method calls can be chained together.

Parameters:
statusMessage - If the job fails, StatusMessage provides a descriptive error message.
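When the returned status is FAILED, the status message is where the reason is reported. A small sketch, continuing the class above and using only getters documented on this page:

```java
// Surfaces the descriptive error message that accompanies a FAILED job.
static void reportFailure(GetContentModerationResult result) {
    if ("FAILED".equals(result.getJobStatus())) {
        // StatusMessage is only meaningful when the job failed.
        System.err.println("Moderation job failed: " + result.getStatusMessage());
    }
}
```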
public VideoMetadata getVideoMetadata()

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.

Returns:
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.
public void setVideoMetadata(VideoMetadata videoMetadata)

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.

Parameters:
videoMetadata - Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.
public GetContentModerationResult withVideoMetadata(VideoMetadata videoMetadata)

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.

Returns a reference to this object so that method calls can be chained together.

Parameters:
videoMetadata - Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.
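Since Videometadata is returned on every page, it can be read from any response in a paginated sequence. A sketch, continuing the class above (add com.amazonaws.services.rekognition.model.VideoMetadata to the imports); the getDurationMillis() and getCodec() getters on VideoMetadata are assumptions and are not documented on this page:

```java
// Prints basic information about the analyzed video.
static void printVideoInfo(GetContentModerationResult result) {
    VideoMetadata metadata = result.getVideoMetadata();
    System.out.println("Duration (ms): " + metadata.getDurationMillis()); // assumed getter
    System.out.println("Codec:         " + metadata.getCodec());          // assumed getter
}
```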
public java.util.List<ContentModerationDetection> getModerationLabels()

The detected unsafe content labels and the time(s) they were detected.

Returns:
The detected unsafe content labels and the time(s) they were detected.
public void setModerationLabels(java.util.Collection<ContentModerationDetection> moderationLabels)

The detected unsafe content labels and the time(s) they were detected.

Parameters:
moderationLabels - The detected unsafe content labels and the time(s) they were detected.
public GetContentModerationResult withModerationLabels(ContentModerationDetection... moderationLabels)

The detected unsafe content labels and the time(s) they were detected.

Returns a reference to this object so that method calls can be chained together.

Parameters:
moderationLabels - The detected unsafe content labels and the time(s) they were detected.
public GetContentModerationResult withModerationLabels(java.util.Collection<ContentModerationDetection> moderationLabels)

The detected unsafe content labels and the time(s) they were detected.

Returns a reference to this object so that method calls can be chained together.

Parameters:
moderationLabels - The detected unsafe content labels and the time(s) they were detected.
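Each ContentModerationDetection pairs a label with the time at which it was detected. A sketch, continuing the class above; getTimestamp(), getModerationLabel(), getName(), and getConfidence() come from the ContentModerationDetection and ModerationLabel model classes and are assumptions, not part of this page:

```java
// Prints each detected label with its timestamp (milliseconds from the start of the video).
static void printDetections(GetContentModerationResult result) {
    for (ContentModerationDetection detection : result.getModerationLabels()) {
        System.out.printf("%8d ms  %s (%.1f%%)%n",
                detection.getTimestamp(),                        // assumed getter
                detection.getModerationLabel().getName(),        // assumed getters
                detection.getModerationLabel().getConfidence());
    }
}
```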
public java.lang.String getNextToken()

If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.

Constraints:
Length: maximum 255

Returns:
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.
public void setNextToken(java.lang.String nextToken)

If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.

Constraints:
Length: maximum 255

Parameters:
nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.
public GetContentModerationResult withNextToken(java.lang.String nextToken)

If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Length: maximum 255

Parameters:
nextToken - If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of unsafe content labels.
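A truncated response is recognized by a non-null NextToken, which is passed back on the next request until no token is returned. A pagination sketch, continuing the class above (add java.util.ArrayList to the imports; the request setters are assumptions):

```java
// Follows NextToken until every page of unsafe content labels has been retrieved.
static List<ContentModerationDetection> fetchAllDetections(AmazonRekognition rekognition,
                                                           String jobId) {
    List<ContentModerationDetection> all = new ArrayList<>();
    String nextToken = null;
    do {
        GetContentModerationRequest request = new GetContentModerationRequest()
                .withJobId(jobId)                 // assumed request setters
                .withNextToken(nextToken);
        GetContentModerationResult result = rekognition.getContentModeration(request);
        all.addAll(result.getModerationLabels());
        nextToken = result.getNextToken();        // null once the last page has been returned
    } while (nextToken != null);
    return all;
}
```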
public java.lang.String getModerationModelVersion()

Version number of the moderation detection model that was used to detect unsafe content.

Returns:
Version number of the moderation detection model that was used to detect unsafe content.
public void setModerationModelVersion(java.lang.String moderationModelVersion)

Version number of the moderation detection model that was used to detect unsafe content.

Parameters:
moderationModelVersion - Version number of the moderation detection model that was used to detect unsafe content.
public GetContentModerationResult withModerationModelVersion(java.lang.String moderationModelVersion)

Version number of the moderation detection model that was used to detect unsafe content.

Returns a reference to this object so that method calls can be chained together.

Parameters:
moderationModelVersion - Version number of the moderation detection model that was used to detect unsafe content.
public java.lang.String toString()

Returns a string representation of this object; useful for testing and debugging.

Overrides:
toString in class java.lang.Object

See Also:
Object.toString()
public int hashCode()

Overrides:
hashCode in class java.lang.Object
public boolean equals(java.lang.Object obj)

Overrides:
equals in class java.lang.Object