public class PutRecordsRequestEntry
extends java.lang.Object
implements java.io.Serializable
Represents a single record submitted as part of a PutRecords request.
Constructor and Description |
---|
PutRecordsRequestEntry() |

Modifier and Type | Method and Description |
---|---|
boolean | equals(java.lang.Object obj) |
java.nio.ByteBuffer | getData(): The data blob to put into the record, which is base64-encoded when the blob is serialized. |
java.lang.String | getExplicitHashKey(): The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
java.lang.String | getPartitionKey(): Determines which shard in the stream the data record is assigned to. |
int | hashCode() |
void | setData(java.nio.ByteBuffer data): The data blob to put into the record, which is base64-encoded when the blob is serialized. |
void | setExplicitHashKey(java.lang.String explicitHashKey): The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
void | setPartitionKey(java.lang.String partitionKey): Determines which shard in the stream the data record is assigned to. |
java.lang.String | toString(): Returns a string representation of this object; useful for testing and debugging. |
PutRecordsRequestEntry | withData(java.nio.ByteBuffer data): The data blob to put into the record, which is base64-encoded when the blob is serialized. |
PutRecordsRequestEntry | withExplicitHashKey(java.lang.String explicitHashKey): The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
PutRecordsRequestEntry | withPartitionKey(java.lang.String partitionKey): Determines which shard in the stream the data record is assigned to. |
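The fluent with* methods return this entry, so a record can be configured in a single chained expression and then submitted as part of a PutRecords call. The following is a minimal usage sketch, not a definitive recipe: it assumes the companion PutRecordsRequest, PutRecordsResult, and AmazonKinesisClientBuilder classes from the same SDK, and the stream name "example-stream" is hypothetical.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
import com.amazonaws.services.kinesis.model.PutRecordsResult;

public class PutRecordsExample {
    public static void main(String[] args) {
        // Build one entry per record using the chainable with* methods.
        List<PutRecordsRequestEntry> entries = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            entries.add(new PutRecordsRequestEntry()
                    .withPartitionKey("device-" + i) // decides which shard receives the record
                    .withData(ByteBuffer.wrap(
                            ("payload-" + i).getBytes(StandardCharsets.UTF_8))));
        }

        // Submit all entries in a single PutRecords call.
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();
        PutRecordsResult result = kinesis.putRecords(new PutRecordsRequest()
                .withStreamName("example-stream") // hypothetical stream name
                .withRecords(entries));

        System.out.println("Failed records: " + result.getFailedRecordCount());
    }
}
```

Because each entry carries its own partition key, the records in one request may be distributed across different shards.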
public java.nio.ByteBuffer getData()
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Constraints:
Length: 0 - 1048576
Returns:
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
public void setData(java.nio.ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Constraints:
Length: 0 - 1048576
Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
public PutRecordsRequestEntry withData(java.nio.ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Returns a reference to this object so that method calls can be chained together.
Constraints:
Length: 0 - 1048576
Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
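Because the payload plus the partition key must stay within the 1 MB (1,048,576-byte) record limit described above, a producer may want to validate sizes before building an entry. The sketch below assumes the partition key is counted as its UTF-8 byte length; the exact byte accounting performed by the service is not spelled out on this page.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class RecordSizeCheck {

    // Maximum record size from the Length constraint above: 1,048,576 bytes (1 MB).
    private static final int MAX_RECORD_SIZE_BYTES = 1_048_576;

    // Returns true if the payload plus the partition key fits in one record.
    // Counting the partition key as its UTF-8 byte length is an assumption of this sketch.
    static boolean fitsInOneRecord(byte[] payload, String partitionKey) {
        int keyBytes = partitionKey.getBytes(StandardCharsets.UTF_8).length;
        return payload.length + keyBytes <= MAX_RECORD_SIZE_BYTES;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[900_000];      // well under the limit
        String partitionKey = "sensor-42";       // hypothetical key

        if (fitsInOneRecord(payload, partitionKey)) {
            PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                    .withPartitionKey(partitionKey)
                    .withData(ByteBuffer.wrap(payload));
            System.out.println("Entry holds " + entry.getData().remaining() + " bytes of data.");
        } else {
            System.out.println("Payload too large for a single record; split it first.");
        }
    }
}
```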
public java.lang.String getExplicitHashKey()
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
Constraints:
Pattern: 0|([1-9]\d{0,38})
Returns:
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
public void setExplicitHashKey(java.lang.String explicitHashKey)
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
Constraints:
Pattern: 0|([1-9]\d{0,38})
Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
public PutRecordsRequestEntry withExplicitHashKey(java.lang.String explicitHashKey)
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
Returns a reference to this object so that method calls can be chained together.
Constraints:
Pattern: 0|([1-9]\d{0,38})
Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
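An explicit hash key bypasses the MD5 hash of the partition key, which lets a producer pin a record to a particular shard's hash key range. The sketch below uses a hypothetical hash key value (2^127, a 39-digit decimal string that satisfies the pattern constraint above); in practice the target value would come from the stream's shard descriptions.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class ExplicitHashKeyExample {
    public static void main(String[] args) {
        // Hypothetical starting hash key of a target shard (2^127). In practice this
        // value would be read from the stream's shard descriptions.
        String targetHashKey = "170141183460469231731687303715884105728";

        // The explicit hash key overrides the MD5 hash of the partition key, so this
        // record is routed by targetHashKey rather than by whatever "audit-log" hashes to.
        PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                .withPartitionKey("audit-log")
                .withExplicitHashKey(targetHashKey)
                .withData(ByteBuffer.wrap("event".getBytes(StandardCharsets.UTF_8)));

        System.out.println(entry);
    }
}
```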
public java.lang.String getPartitionKey()
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Constraints:
Length: 1 - 256
Returns:
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
public void setPartitionKey(java.lang.String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Constraints:
Length: 1 - 256
Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
public PutRecordsRequestEntry withPartitionKey(java.lang.String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Returns a reference to this object so that method calls can be chained together.
Constraints:
Length: 1 - 256
Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
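The description above states that Kinesis maps each partition key to a 128-bit integer with MD5 and assigns the record to a shard based on that value. The sketch below reproduces that hashing client-side purely as an illustration of why records sharing a key land on the same shard; the actual mapping is performed by the service.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PartitionKeyHashDemo {

    // Maps a partition key to a non-negative 128-bit integer via MD5,
    // mirroring the mechanism described above (illustration only).
    static BigInteger hashKeyFor(String partitionKey) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest); // signum 1 keeps the value non-negative
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // The same partition key always yields the same 128-bit value, which is why
        // records sharing a key are assigned to the same shard.
        System.out.println(hashKeyFor("device-7"));
        System.out.println(hashKeyFor("device-7")); // identical to the line above
        System.out.println(hashKeyFor("device-8")); // a different value
    }
}
```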
public java.lang.String toString()
Returns a string representation of this object; useful for testing and debugging.
Overrides:
toString in class java.lang.Object
See Also:
Object.toString()
public int hashCode()
Overrides:
hashCode in class java.lang.Object
public boolean equals(java.lang.Object obj)
Overrides:
equals in class java.lang.Object
Copyright © 2018 Amazon Web Services, Inc. All Rights Reserved.