
We recommend new projects start with resources from the AWS provider.

AWS Cloud Control v1.28.0 published on Monday, May 19, 2025 by Pulumi

aws-native.kinesisfirehose.getDeliveryStream


    Resource Type definition for AWS::KinesisFirehose::DeliveryStream

    Using getDeliveryStream

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getDeliveryStream(args: GetDeliveryStreamArgs, opts?: InvokeOptions): Promise<GetDeliveryStreamResult>
    function getDeliveryStreamOutput(args: GetDeliveryStreamOutputArgs, opts?: InvokeOptions): Output<GetDeliveryStreamResult>
    def get_delivery_stream(delivery_stream_name: Optional[str] = None,
                            opts: Optional[InvokeOptions] = None) -> GetDeliveryStreamResult
    def get_delivery_stream_output(delivery_stream_name: Optional[pulumi.Input[str]] = None,
                            opts: Optional[InvokeOptions] = None) -> Output[GetDeliveryStreamResult]
    func LookupDeliveryStream(ctx *Context, args *LookupDeliveryStreamArgs, opts ...InvokeOption) (*LookupDeliveryStreamResult, error)
    func LookupDeliveryStreamOutput(ctx *Context, args *LookupDeliveryStreamOutputArgs, opts ...InvokeOption) LookupDeliveryStreamResultOutput

    > Note: This function is named LookupDeliveryStream in the Go SDK.

    public static class GetDeliveryStream 
    {
        public static Task<GetDeliveryStreamResult> InvokeAsync(GetDeliveryStreamArgs args, InvokeOptions? opts = null)
        public static Output<GetDeliveryStreamResult> Invoke(GetDeliveryStreamInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetDeliveryStreamResult> getDeliveryStream(GetDeliveryStreamArgs args, InvokeOptions options)
    public static Output<GetDeliveryStreamResult> getDeliveryStream(GetDeliveryStreamArgs args, InvokeOptions options)
    
    fn::invoke:
      function: aws-native:kinesisfirehose:getDeliveryStream
      arguments:
        # arguments dictionary
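A filled-in YAML invocation might look like the following sketch (the stream name, variable name, and output name are all hypothetical):

```yaml
variables:
  stream:
    fn::invoke:
      function: aws-native:kinesisfirehose:getDeliveryStream
      arguments:
        deliveryStreamName: my-firehose-stream
outputs:
  streamArn: ${stream.arn}
```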

    The following arguments are supported:

    DeliveryStreamName string
    The name of the Firehose stream.
    DeliveryStreamName string
    The name of the Firehose stream.
    deliveryStreamName String
    The name of the Firehose stream.
    deliveryStreamName string
    The name of the Firehose stream.
    delivery_stream_name str
    The name of the Firehose stream.
    deliveryStreamName String
    The name of the Firehose stream.

    getDeliveryStream Result

    The following output properties are available:

    Arn string
    The Amazon Resource Name (ARN) of the delivery stream, such as arn:aws:firehose:us-east-2:123456789012:deliverystream/delivery-stream-name.
    DeliveryStreamEncryptionConfigurationInput Pulumi.AwsNative.KinesisFirehose.Outputs.DeliveryStreamEncryptionConfigurationInput
    Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
    ExtendedS3DestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Outputs.DeliveryStreamExtendedS3DestinationConfiguration

    An Amazon S3 destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.

    HttpEndpointDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Outputs.DeliveryStreamHttpEndpointDestinationConfiguration
    Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
    RedshiftDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Outputs.DeliveryStreamRedshiftDestinationConfiguration

    An Amazon Redshift destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.

    SplunkDestinationConfiguration Pulumi.AwsNative.KinesisFirehose.Outputs.DeliveryStreamSplunkDestinationConfiguration
    The configuration of a destination in Splunk for the delivery stream.
    Tags List<Pulumi.AwsNative.Outputs.Tag>

    A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

    You can specify up to 50 tags when creating a Firehose stream.

    If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify whether users have permission to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags fail with an AccessDeniedException such as the following.

    AccessDeniedException

    User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.

    For an example IAM policy, see Tag example.
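To avoid the AccessDeniedException above, the caller's identity-based policy must allow firehose:TagDeliveryStream alongside firehose:CreateDeliveryStream. A minimal hypothetical policy statement (the account ID and resource ARN are illustrative, not from this page):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:CreateDeliveryStream",
        "firehose:TagDeliveryStream"
      ],
      "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/*"
    }
  ]
}
```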

    Arn string
    The Amazon Resource Name (ARN) of the delivery stream, such as arn:aws:firehose:us-east-2:123456789012:deliverystream/delivery-stream-name.
    DeliveryStreamEncryptionConfigurationInput DeliveryStreamEncryptionConfigurationInputType
    Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
    ExtendedS3DestinationConfiguration DeliveryStreamExtendedS3DestinationConfiguration

    An Amazon S3 destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.

    HttpEndpointDestinationConfiguration DeliveryStreamHttpEndpointDestinationConfiguration
    Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
    RedshiftDestinationConfiguration DeliveryStreamRedshiftDestinationConfiguration

    An Amazon Redshift destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.

    SplunkDestinationConfiguration DeliveryStreamSplunkDestinationConfiguration
    The configuration of a destination in Splunk for the delivery stream.
    Tags []Tag

    A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

    You can specify up to 50 tags when creating a Firehose stream.

    If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify whether users have permission to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags fail with an AccessDeniedException such as the following.

    AccessDeniedException

    User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.

    For an example IAM policy, see Tag example.

    arn String
    The Amazon Resource Name (ARN) of the delivery stream, such as arn:aws:firehose:us-east-2:123456789012:deliverystream/delivery-stream-name.
    deliveryStreamEncryptionConfigurationInput DeliveryStreamEncryptionConfigurationInput
    Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
    extendedS3DestinationConfiguration DeliveryStreamExtendedS3DestinationConfiguration

    An Amazon S3 destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.

    httpEndpointDestinationConfiguration DeliveryStreamHttpEndpointDestinationConfiguration
    Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
    redshiftDestinationConfiguration DeliveryStreamRedshiftDestinationConfiguration

    An Amazon Redshift destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.

    splunkDestinationConfiguration DeliveryStreamSplunkDestinationConfiguration
    The configuration of a destination in Splunk for the delivery stream.
    tags List<Tag>

    A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

    You can specify up to 50 tags when creating a Firehose stream.

    If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify whether users have permission to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags fail with an AccessDeniedException such as the following.

    AccessDeniedException

    User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.

    For an example IAM policy, see Tag example.

    arn string
    The Amazon Resource Name (ARN) of the delivery stream, such as arn:aws:firehose:us-east-2:123456789012:deliverystream/delivery-stream-name.
    deliveryStreamEncryptionConfigurationInput DeliveryStreamEncryptionConfigurationInput
    Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
    extendedS3DestinationConfiguration DeliveryStreamExtendedS3DestinationConfiguration

    An Amazon S3 destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.

    httpEndpointDestinationConfiguration DeliveryStreamHttpEndpointDestinationConfiguration
    Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
    redshiftDestinationConfiguration DeliveryStreamRedshiftDestinationConfiguration

    An Amazon Redshift destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.

    splunkDestinationConfiguration DeliveryStreamSplunkDestinationConfiguration
    The configuration of a destination in Splunk for the delivery stream.
    tags Tag[]

    A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

    You can specify up to 50 tags when creating a Firehose stream.

    If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify whether users have permission to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags fail with an AccessDeniedException such as the following.

    AccessDeniedException

    User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.

    For an example IAM policy, see Tag example.

    arn str
    The Amazon Resource Name (ARN) of the delivery stream, such as arn:aws:firehose:us-east-2:123456789012:deliverystream/delivery-stream-name.
    delivery_stream_encryption_configuration_input DeliveryStreamEncryptionConfigurationInput
    Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
    extended_s3_destination_configuration DeliveryStreamExtendedS3DestinationConfiguration

    An Amazon S3 destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.

    http_endpoint_destination_configuration DeliveryStreamHttpEndpointDestinationConfiguration
    Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
    redshift_destination_configuration DeliveryStreamRedshiftDestinationConfiguration

    An Amazon Redshift destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.

    splunk_destination_configuration DeliveryStreamSplunkDestinationConfiguration
    The configuration of a destination in Splunk for the delivery stream.
    tags Sequence[Tag]

    A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

    You can specify up to 50 tags when creating a Firehose stream.

    If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify whether users have permission to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags fail with an AccessDeniedException such as the following.

    AccessDeniedException

    User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.

    For an example IAM policy, see Tag example.

    arn String
    The Amazon Resource Name (ARN) of the delivery stream, such as arn:aws:firehose:us-east-2:123456789012:deliverystream/delivery-stream-name.
    deliveryStreamEncryptionConfigurationInput Property Map
    Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
    extendedS3DestinationConfiguration Property Map

    An Amazon S3 destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, the update requires some interruptions.

    httpEndpointDestinationConfiguration Property Map
    Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
    redshiftDestinationConfiguration Property Map

    An Amazon Redshift destination for the delivery stream.

    Conditional. You must specify only one destination configuration.

    If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, the update requires some interruptions.

    splunkDestinationConfiguration Property Map
    The configuration of a destination in Splunk for the delivery stream.
    tags List<Property Map>

    A set of tags to assign to the Firehose stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the Firehose stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

    You can specify up to 50 tags when creating a Firehose stream.

    If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify whether users have permission to create tags. If you do not provide this permission, requests to create new Firehose streams with IAM resource tags fail with an AccessDeniedException such as the following.

    AccessDeniedException

    User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.

    For an example IAM policy, see Tag example.
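The tag constraints described above (a tag is a key-value pair, at most 50 per stream) can be checked client-side before calling the API. A minimal sketch, not part of any SDK:

```python
def validate_tags(tags):
    """Check a list of {'key': ..., 'value': ...} dicts against the
    documented 50-tag limit and basic shape. Raises ValueError on failure."""
    if len(tags) > 50:
        raise ValueError("a Firehose stream accepts at most 50 tags")
    for tag in tags:
        if not tag.get("key"):
            raise ValueError("each tag needs a non-empty key")
    return True
```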

    Supporting Types

    DeliveryStreamBufferingHints

    IntervalInSeconds int
    The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    SizeInMbs int
    The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    IntervalInSeconds int
    The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    SizeInMbs int
    The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    intervalInSeconds Integer
    The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    sizeInMbs Integer
    The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    intervalInSeconds number
    The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    sizeInMbs number
    The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    interval_in_seconds int
    The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    size_in_mbs int
    The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    intervalInSeconds Number
    The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
    sizeInMbs Number
    The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference .
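The valid ranges live in the Firehose API Reference, as the descriptions above note. A client-side sketch of that validation follows; the bounds used here (interval 0-900 seconds, size 1-128 MB, with defaults 300 and 5) are assumptions to verify against the current API Reference, not values stated on this page:

```python
def validate_buffering_hints(interval_in_seconds=300, size_in_mbs=5):
    """Sanity-check BufferingHints values. Bounds and defaults are
    assumptions taken from the Firehose API Reference, not this page."""
    if not 0 <= interval_in_seconds <= 900:
        raise ValueError("IntervalInSeconds must be between 0 and 900")
    if not 1 <= size_in_mbs <= 128:
        raise ValueError("SizeInMBs must be between 1 and 128")
    return {"intervalInSeconds": interval_in_seconds, "sizeInMbs": size_in_mbs}
```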

    DeliveryStreamCloudWatchLoggingOptions

    Enabled bool
    Indicates whether CloudWatch Logs logging is enabled.
    LogGroupName string

    The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use.

    Conditional. If you enable logging, you must specify this property.

    LogStreamName string

    The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery.

    Conditional. If you enable logging, you must specify this property.

    Enabled bool
    Indicates whether CloudWatch Logs logging is enabled.
    LogGroupName string

    The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use.

    Conditional. If you enable logging, you must specify this property.

    LogStreamName string

    The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery.

    Conditional. If you enable logging, you must specify this property.

    enabled Boolean
    Indicates whether CloudWatch Logs logging is enabled.
    logGroupName String

    The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use.

    Conditional. If you enable logging, you must specify this property.

    logStreamName String

    The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery.

    Conditional. If you enable logging, you must specify this property.

    enabled boolean
    Indicates whether CloudWatch Logs logging is enabled.
    logGroupName string

    The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use.

    Conditional. If you enable logging, you must specify this property.

    logStreamName string

    The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery.

    Conditional. If you enable logging, you must specify this property.

    enabled bool
    Indicates whether CloudWatch Logs logging is enabled.
    log_group_name str

    The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use.

    Conditional. If you enable logging, you must specify this property.

    log_stream_name str

    The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery.

    Conditional. If you enable logging, you must specify this property.

    enabled Boolean
    Indicates whether CloudWatch Logs logging is enabled.
    logGroupName String

    The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use.

    Conditional. If you enable logging, you must specify this property.

    logStreamName String

    The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery.

    Conditional. If you enable logging, you must specify this property.
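The conditional noted above (both names are required once logging is enabled) is easy to express as a pre-flight check. A sketch, not SDK behavior:

```python
def validate_cloudwatch_logging(enabled=False, log_group_name=None,
                                log_stream_name=None):
    """If CloudWatch Logs logging is enabled, both the log group and the
    log stream name must be provided, per the conditional in the docs."""
    if enabled and not (log_group_name and log_stream_name):
        raise ValueError(
            "LogGroupName and LogStreamName are required when logging is enabled")
    return {"enabled": enabled, "logGroupName": log_group_name,
            "logStreamName": log_stream_name}
```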

    DeliveryStreamCopyCommand

    DataTableName string
    The name of the target table. The table must already exist in the database.
    CopyOptions string
    Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference .
    DataTableColumns string
    A comma-separated list of column names.
    DataTableName string
    The name of the target table. The table must already exist in the database.
    CopyOptions string
    Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference .
    DataTableColumns string
    A comma-separated list of column names.
    dataTableName String
    The name of the target table. The table must already exist in the database.
    copyOptions String
    Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference .
    dataTableColumns String
    A comma-separated list of column names.
    dataTableName string
    The name of the target table. The table must already exist in the database.
    copyOptions string
    Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference .
    dataTableColumns string
    A comma-separated list of column names.
    data_table_name str
    The name of the target table. The table must already exist in the database.
    copy_options str
    Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference .
    data_table_columns str
    A comma-separated list of column names.
    dataTableName String
    The name of the target table. The table must already exist in the database.
    copyOptions String
    Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference .
    dataTableColumns String
    A comma-separated list of column names.
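The three CopyCommand fields map onto a Redshift COPY statement. The exact statement Firehose issues is internal to the service; the sketch below only illustrates how the fields fit together (the S3 location parameter is an assumption for illustration):

```python
def build_copy_statement(data_table_name, s3_location,
                         data_table_columns=None, copy_options=None):
    """Assemble an illustrative Redshift COPY statement from
    CopyCommand-style fields: target table, optional column list,
    and optional trailing COPY options."""
    columns = f" ({data_table_columns})" if data_table_columns else ""
    options = f" {copy_options}" if copy_options else ""
    return f"COPY {data_table_name}{columns} FROM '{s3_location}'{options}"
```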

    DeliveryStreamDataFormatConversionConfiguration

    Enabled bool
    Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
    InputFormatConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamInputFormatConfiguration
    Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
    OutputFormatConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOutputFormatConfiguration
    Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
    SchemaConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSchemaConfiguration
    Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
    Enabled bool
    Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
    InputFormatConfiguration DeliveryStreamInputFormatConfiguration
    Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
    OutputFormatConfiguration DeliveryStreamOutputFormatConfiguration
    Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
    SchemaConfiguration DeliveryStreamSchemaConfiguration
    Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
    enabled Boolean
    Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
    inputFormatConfiguration DeliveryStreamInputFormatConfiguration
    Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
    outputFormatConfiguration DeliveryStreamOutputFormatConfiguration
    Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
    schemaConfiguration DeliveryStreamSchemaConfiguration
    Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
    enabled boolean
    Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
    inputFormatConfiguration DeliveryStreamInputFormatConfiguration
    Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
    outputFormatConfiguration DeliveryStreamOutputFormatConfiguration
    Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
    schemaConfiguration DeliveryStreamSchemaConfiguration
    Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
    enabled bool
    Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
    input_format_configuration DeliveryStreamInputFormatConfiguration
    Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
    output_format_configuration DeliveryStreamOutputFormatConfiguration
    Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
    schema_configuration DeliveryStreamSchemaConfiguration
    Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
    enabled Boolean
    Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
    inputFormatConfiguration Property Map
    Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
    outputFormatConfiguration Property Map
    Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
    schemaConfiguration Property Map
    Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
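Assembled into one value, the four properties above might look like the following TypeScript sketch. It is a plain object in the camel-cased TypeScript form listed above; every ARN, account ID, and Glue name is a placeholder, not a real resource.

```typescript
// Sketch of a DataFormatConversionConfiguration value.
// All identifiers below (account ID, database, table, role ARN) are placeholders.
const dataFormatConversionConfiguration = {
    enabled: true, // the three companion settings below are required only when true
    inputFormatConfiguration: {
        deserializer: {
            openXJsonSerDe: {}, // parse incoming JSON with the OpenX SerDe
        },
    },
    outputFormatConfiguration: {
        serializer: {
            parquetSerDe: {}, // write records out in the Parquet format
        },
    },
    schemaConfiguration: {
        catalogId: "123456789012",        // placeholder AWS account ID
        databaseName: "example_database", // placeholder Glue database
        tableName: "example_table",       // placeholder Glue table
        roleArn: "arn:aws:iam::123456789012:role/example-firehose-role",
        region: "us-east-1",
    },
};
```

Setting `enabled: false` while keeping the rest of the object preserves the conversion details without applying them, per the `enabled` description above.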

    DeliveryStreamDeserializer

    HiveJsonSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHiveJsonSerDe
    The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
    OpenXJsonSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOpenXJsonSerDe
    The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
    HiveJsonSerDe DeliveryStreamHiveJsonSerDe
    The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
    OpenXJsonSerDe DeliveryStreamOpenXJsonSerDe
    The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
    hiveJsonSerDe DeliveryStreamHiveJsonSerDe
    The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
    openXJsonSerDe DeliveryStreamOpenXJsonSerDe
    The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
    hiveJsonSerDe DeliveryStreamHiveJsonSerDe
    The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
    openXJsonSerDe DeliveryStreamOpenXJsonSerDe
    The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
    hive_json_ser_de DeliveryStreamHiveJsonSerDe
    The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
    open_x_json_ser_de DeliveryStreamOpenXJsonSerDe
    The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
    hiveJsonSerDe Property Map
    The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
    openXJsonSerDe Property Map
    The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
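The two deserializers are mutually exclusive: a `deserializer` value carries exactly one of them. A sketch of each option follows (property names as in the TypeScript form above; the option values shown are illustrative assumptions, not requirements):

```typescript
// Option 1: the native Hive / HCatalog JsonSerDe.
const hiveDeserializer = {
    hiveJsonSerDe: {}, // defaults are often sufficient; see DeliveryStreamHiveJsonSerDe
};

// Option 2: the OpenX SerDe, with illustrative settings.
const openXDeserializer = {
    openXJsonSerDe: {
        caseInsensitive: true, // match JSON keys to column names regardless of case
    },
};
```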

    DeliveryStreamDynamicPartitioningConfiguration

    Enabled bool
    Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
    RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRetryOptions
    Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
    Enabled bool
    Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
    RetryOptions DeliveryStreamRetryOptions
    Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
    enabled Boolean
    Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
    retryOptions DeliveryStreamRetryOptions
    Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
    enabled boolean
    Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
    retryOptions DeliveryStreamRetryOptions
    Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
    enabled bool
    Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
    retry_options DeliveryStreamRetryOptions
    Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
    enabled Boolean
    Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
    retryOptions Property Map
    Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
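A minimal dynamic partitioning value combining the two properties above might look like this sketch (the retry duration is an illustrative choice, not a recommended value):

```typescript
// Sketch of a DeliveryStreamDynamicPartitioningConfiguration value.
const dynamicPartitioningConfiguration = {
    enabled: true, // turn on partitioning of the stream by partition keys
    retryOptions: {
        durationInSeconds: 300, // how long Firehose retries delivery to the S3 prefix
    },
};
```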

    DeliveryStreamEncryptionConfiguration

    KmsEncryptionConfig Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamKmsEncryptionConfig
    The AWS Key Management Service ( AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
    NoEncryptionConfig Pulumi.AwsNative.KinesisFirehose.DeliveryStreamEncryptionConfigurationNoEncryptionConfig
    Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    KmsEncryptionConfig DeliveryStreamKmsEncryptionConfig
    The AWS Key Management Service ( AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
    NoEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig
    Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    kmsEncryptionConfig DeliveryStreamKmsEncryptionConfig
    The AWS Key Management Service ( AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
    noEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig
    Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    kmsEncryptionConfig DeliveryStreamKmsEncryptionConfig
    The AWS Key Management Service ( AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
    noEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig
    Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    kms_encryption_config DeliveryStreamKmsEncryptionConfig
    The AWS Key Management Service ( AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
    no_encryption_config DeliveryStreamEncryptionConfigurationNoEncryptionConfig
    Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    kmsEncryptionConfig Property Map
    The AWS Key Management Service ( AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
    noEncryptionConfig "NoEncryption"
    Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
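The two properties above are alternatives: supply a KMS key or opt out explicitly, not both. The sketch below assumes the TypeScript property name for the KMS key ARN follows Pulumi's camel-casing of the CloudFormation `AWSKMSKeyARN` field; the key ARN itself is a placeholder.

```typescript
// Alternative 1: encrypt with an AWS KMS key (placeholder ARN; property name
// assumed from Pulumi's camel-casing of CloudFormation's AWSKMSKeyARN).
const kmsEncrypted = {
    kmsEncryptionConfig: {
        awskmsKeyArn: "arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000",
    },
};

// Alternative 2: explicitly no encryption ("NoEncryption" is the only valid value,
// per the YAML form above).
const unencrypted = {
    noEncryptionConfig: "NoEncryption",
};
```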

    DeliveryStreamEncryptionConfigurationInput

    KeyType Pulumi.AwsNative.KinesisFirehose.DeliveryStreamEncryptionConfigurationInputKeyType

    Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK . For more information about CMKs, see Customer Master Keys (CMKs) .

    You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.

    To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.

    KeyArn string
    If you set KeyType to CUSTOMER_MANAGED_CMK , you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK , Firehose uses a service-account CMK.
    KeyType DeliveryStreamEncryptionConfigurationInputKeyType

    Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK . For more information about CMKs, see Customer Master Keys (CMKs) .

    You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.

    To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.

    KeyArn string
    If you set KeyType to CUSTOMER_MANAGED_CMK , you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK , Firehose uses a service-account CMK.
    keyType DeliveryStreamEncryptionConfigurationInputKeyType

    Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK . For more information about CMKs, see Customer Master Keys (CMKs) .

    You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.

    To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.

    keyArn String
    If you set KeyType to CUSTOMER_MANAGED_CMK , you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK , Firehose uses a service-account CMK.
    keyType DeliveryStreamEncryptionConfigurationInputKeyType

    Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK . For more information about CMKs, see Customer Master Keys (CMKs) .

    You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.

    To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.

    keyArn string
    If you set KeyType to CUSTOMER_MANAGED_CMK , you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK , Firehose uses a service-account CMK.
    key_type DeliveryStreamEncryptionConfigurationInputKeyType

    Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK . For more information about CMKs, see Customer Master Keys (CMKs) .

    You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.

    To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.

    key_arn str
    If you set KeyType to CUSTOMER_MANAGED_CMK , you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK , Firehose uses a service-account CMK.
    keyType "AWS_OWNED_CMK" | "CUSTOMER_MANAGED_CMK"

    Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK . For more information about CMKs, see Customer Master Keys (CMKs) .

    You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.

    To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.

    keyArn String
    If you set KeyType to CUSTOMER_MANAGED_CMK , you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK , Firehose uses a service-account CMK.
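Per the rules above, `keyArn` accompanies `CUSTOMER_MANAGED_CMK` and is omitted for the default `AWS_OWNED_CMK`. A sketch of the customer-managed case (placeholder key ARN):

```typescript
// Sketch of a DeliveryStreamEncryptionConfigurationInput using a customer-managed CMK.
const encryptionConfigurationInput = {
    keyType: "CUSTOMER_MANAGED_CMK", // the other valid value is "AWS_OWNED_CMK" (the default)
    // keyArn is required only when keyType is CUSTOMER_MANAGED_CMK; must be a symmetric CMK.
    keyArn: "arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000",
};
```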

    DeliveryStreamEncryptionConfigurationInputKeyType

    DeliveryStreamEncryptionConfigurationNoEncryptionConfig

    DeliveryStreamExtendedS3DestinationConfiguration

    BucketArn string
    The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    RoleArn string
    The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints
    The buffering option.
    CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    CompressionFormat Pulumi.AwsNative.KinesisFirehose.DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
    The compression format. If no value is specified, the default is UNCOMPRESSED .
    CustomTimeZone string
    The time zone you prefer. UTC is the default.
    DataFormatConversionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDataFormatConversionConfiguration
    The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
    DynamicPartitioningConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDynamicPartitioningConfiguration
    The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
    EncryptionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamEncryptionConfiguration
    The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption .
    ErrorOutputPrefix string
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    FileExtension string
    Specifies a file extension. It overrides the default file extension.
    Prefix string
    The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    S3BackupConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    BucketArn string
    The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    RoleArn string
    The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    BufferingHints DeliveryStreamBufferingHints
    The buffering option.
    CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    CompressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
    The compression format. If no value is specified, the default is UNCOMPRESSED .
    CustomTimeZone string
    The time zone you prefer. UTC is the default.
    DataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration
    The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
    DynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration
    The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
    EncryptionConfiguration DeliveryStreamEncryptionConfiguration
    The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption .
    ErrorOutputPrefix string
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    FileExtension string
    Specifies a file extension. It overrides the default file extension.
    Prefix string
    The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    ProcessingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    S3BackupConfiguration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    S3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    bucketArn String
    The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    roleArn String
    The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    bufferingHints DeliveryStreamBufferingHints
    The buffering option.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    compressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
    The compression format. If no value is specified, the default is UNCOMPRESSED .
    customTimeZone String
    The time zone you prefer. UTC is the default.
    dataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration
    The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
    dynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration
    The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
    encryptionConfiguration DeliveryStreamEncryptionConfiguration
    The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption .
    errorOutputPrefix String
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    fileExtension String
    Specifies a file extension. It overrides the default file extension.
    prefix String
    The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    processingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    s3BackupConfiguration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    s3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    bucketArn string
    The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    roleArn string
    The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    bufferingHints DeliveryStreamBufferingHints
    The buffering option.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    compressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
    The compression format. If no value is specified, the default is UNCOMPRESSED .
    customTimeZone string
    The time zone you prefer. UTC is the default.
    dataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration
    The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
    dynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration
    The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
    encryptionConfiguration DeliveryStreamEncryptionConfiguration
    The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption .
    errorOutputPrefix string
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    fileExtension string
    Specifies a file extension. It overrides the default file extension.
    prefix string
    The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    processingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    s3BackupConfiguration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    s3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    bucket_arn str
    The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    role_arn str
    The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    buffering_hints DeliveryStreamBufferingHints
    The buffering option.
    cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    compression_format DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat
    The compression format. If no value is specified, the default is UNCOMPRESSED .
    custom_time_zone str
    The time zone you prefer. UTC is the default.
    data_format_conversion_configuration DeliveryStreamDataFormatConversionConfiguration
    The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
    dynamic_partitioning_configuration DeliveryStreamDynamicPartitioningConfiguration
    The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
    encryption_configuration DeliveryStreamEncryptionConfiguration
    The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption .
    error_output_prefix str
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    file_extension str
    Specifies a file extension. It overrides the default file extension.
    prefix str
    The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    processing_configuration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    s3_backup_configuration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    s3_backup_mode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    bucketArn String
    The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    roleArn String
    The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    bufferingHints Property Map
    The buffering option.
    cloudWatchLoggingOptions Property Map
    The Amazon CloudWatch logging options for your Firehose stream.
    compressionFormat "UNCOMPRESSED" | "GZIP" | "ZIP" | "Snappy" | "HADOOP_SNAPPY"
    The compression format. If no value is specified, the default is UNCOMPRESSED .
    customTimeZone String
    The time zone you prefer. UTC is the default.
    dataFormatConversionConfiguration Property Map
    The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
    dynamicPartitioningConfiguration Property Map
    The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
    encryptionConfiguration Property Map
    The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption .
    errorOutputPrefix String
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    fileExtension String
    Specifies a file extension. It overrides the default file extension.
    prefix String
    The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
    processingConfiguration Property Map
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    s3BackupConfiguration Property Map
    The configuration for backup in Amazon S3.
    s3BackupMode "Disabled" | "Enabled"
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
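Only `bucketArn` and `roleArn` are required above; the rest are optional tuning. A sketch pulling a few of them together (all ARNs are placeholders, and the enum strings come from the YAML form listed above):

```typescript
// Sketch of an ExtendedS3DestinationConfiguration value (placeholder ARNs).
const extendedS3DestinationConfiguration = {
    bucketArn: "arn:aws:s3:::example-bucket", // required: destination bucket
    roleArn: "arn:aws:iam::123456789012:role/example-firehose-role", // required: delivery role
    compressionFormat: "GZIP",       // default would be "UNCOMPRESSED"
    customTimeZone: "UTC",           // UTC is also the default
    prefix: "data/",                 // delivered objects land under this prefix
    errorOutputPrefix: "errors/",    // failed records land under this prefix
    bufferingHints: {
        intervalInSeconds: 300,      // flush at most every 5 minutes...
        sizeInMBs: 5,                // ...or whenever 5 MB accumulates
    },
    s3BackupMode: "Disabled",        // the other valid value is "Enabled"
};
```

Note the one-way constraint called out above: once `s3BackupMode` is `"Enabled"` on a live stream, it cannot be updated back to `"Disabled"`.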

    DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat

    DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode

    DeliveryStreamHiveJsonSerDe

    TimestampFormats List<string>
    Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
    TimestampFormats []string
    Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
    timestampFormats List<String>
    Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
    timestampFormats string[]
    Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
    timestamp_formats Sequence[str]
    Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
    timestampFormats List<String>
    Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
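    For illustration, the timestamp formats above can be combined into a deserializer in Property Map (plain-dict) form. The format strings below are assumed sample values; this sketches the shape of the configuration, not a complete stream definition.

    ```python
    # Sketch: a Hive JSON SerDe configuration for record format conversion,
    # in Property Map (camelCase dict) form. Format strings follow JodaTime
    # DateTimeFormat syntax; "millis" is the special value for epoch
    # milliseconds. The format strings shown are illustrative.
    hive_json_ser_de = {
        "timestampFormats": [
            "yyyy-MM-dd'T'HH:mm:ss",  # ISO-style timestamps in the input JSON
            "millis",                 # epoch-millisecond timestamps
        ],
    }

    # The SerDe is nested under a deserializer inside the input format
    # configuration (see DeliveryStreamInputFormatConfiguration below).
    input_format_configuration = {
        "deserializer": {"hiveJsonSerDe": hive_json_ser_de},
    }
    ```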

    DeliveryStreamHttpEndpointCommonAttribute

    AttributeName string
    The name of the HTTP endpoint common attribute.
    AttributeValue string
    The value of the HTTP endpoint common attribute.
    AttributeName string
    The name of the HTTP endpoint common attribute.
    AttributeValue string
    The value of the HTTP endpoint common attribute.
    attributeName String
    The name of the HTTP endpoint common attribute.
    attributeValue String
    The value of the HTTP endpoint common attribute.
    attributeName string
    The name of the HTTP endpoint common attribute.
    attributeValue string
    The value of the HTTP endpoint common attribute.
    attribute_name str
    The name of the HTTP endpoint common attribute.
    attribute_value str
    The value of the HTTP endpoint common attribute.
    attributeName String
    The name of the HTTP endpoint common attribute.
    attributeValue String
    The value of the HTTP endpoint common attribute.

    DeliveryStreamHttpEndpointConfiguration

    Url string
    The URL of the HTTP endpoint selected as the destination.
    AccessKey string
    The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
    Name string
    The name of the HTTP endpoint selected as the destination.
    Url string
    The URL of the HTTP endpoint selected as the destination.
    AccessKey string
    The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
    Name string
    The name of the HTTP endpoint selected as the destination.
    url String
    The URL of the HTTP endpoint selected as the destination.
    accessKey String
    The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
    name String
    The name of the HTTP endpoint selected as the destination.
    url string
    The URL of the HTTP endpoint selected as the destination.
    accessKey string
    The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
    name string
    The name of the HTTP endpoint selected as the destination.
    url str
    The URL of the HTTP endpoint selected as the destination.
    access_key str
    The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
    name str
    The name of the HTTP endpoint selected as the destination.
    url String
    The URL of the HTTP endpoint selected as the destination.
    accessKey String
    The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
    name String
    The name of the HTTP endpoint selected as the destination.

    DeliveryStreamHttpEndpointDestinationConfiguration

    EndpointConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointConfiguration
    The configuration of the HTTP endpoint selected as the destination.
    S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
    Describes the configuration of a destination in Amazon S3.
    BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints
    The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
    CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
    Describes the Amazon CloudWatch logging options for your delivery stream.
    ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
    Describes the data processing configuration.
    RequestConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointRequestConfiguration
    The configuration of the request sent to the HTTP endpoint specified as the destination.
    RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRetryOptions
    Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
    RoleArn string
    Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
    S3BackupMode string
    Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
    SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for HTTP Endpoint destination.
    EndpointConfiguration DeliveryStreamHttpEndpointConfiguration
    The configuration of the HTTP endpoint selected as the destination.
    S3Configuration DeliveryStreamS3DestinationConfiguration
    Describes the configuration of a destination in Amazon S3.
    BufferingHints DeliveryStreamBufferingHints
    The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
    CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    Describes the Amazon CloudWatch logging options for your delivery stream.
    ProcessingConfiguration DeliveryStreamProcessingConfiguration
    Describes the data processing configuration.
    RequestConfiguration DeliveryStreamHttpEndpointRequestConfiguration
    The configuration of the request sent to the HTTP endpoint specified as the destination.
    RetryOptions DeliveryStreamRetryOptions
    Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
    RoleArn string
    Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
    S3BackupMode string
    Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
    SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for HTTP Endpoint destination.
    endpointConfiguration DeliveryStreamHttpEndpointConfiguration
    The configuration of the HTTP endpoint selected as the destination.
    s3Configuration DeliveryStreamS3DestinationConfiguration
    Describes the configuration of a destination in Amazon S3.
    bufferingHints DeliveryStreamBufferingHints
    The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    Describes the Amazon CloudWatch logging options for your delivery stream.
    processingConfiguration DeliveryStreamProcessingConfiguration
    Describes the data processing configuration.
    requestConfiguration DeliveryStreamHttpEndpointRequestConfiguration
    The configuration of the request sent to the HTTP endpoint specified as the destination.
    retryOptions DeliveryStreamRetryOptions
    Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
    roleArn String
    Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
    s3BackupMode String
    Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
    secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for HTTP Endpoint destination.
    endpointConfiguration DeliveryStreamHttpEndpointConfiguration
    The configuration of the HTTP endpoint selected as the destination.
    s3Configuration DeliveryStreamS3DestinationConfiguration
    Describes the configuration of a destination in Amazon S3.
    bufferingHints DeliveryStreamBufferingHints
    The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    Describes the Amazon CloudWatch logging options for your delivery stream.
    processingConfiguration DeliveryStreamProcessingConfiguration
    Describes the data processing configuration.
    requestConfiguration DeliveryStreamHttpEndpointRequestConfiguration
    The configuration of the request sent to the HTTP endpoint specified as the destination.
    retryOptions DeliveryStreamRetryOptions
    Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
    roleArn string
    Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
    s3BackupMode string
    Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
    secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for HTTP Endpoint destination.
    endpoint_configuration DeliveryStreamHttpEndpointConfiguration
    The configuration of the HTTP endpoint selected as the destination.
    s3_configuration DeliveryStreamS3DestinationConfiguration
    Describes the configuration of a destination in Amazon S3.
    buffering_hints DeliveryStreamBufferingHints
    The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
    cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
    Describes the Amazon CloudWatch logging options for your delivery stream.
    processing_configuration DeliveryStreamProcessingConfiguration
    Describes the data processing configuration.
    request_configuration DeliveryStreamHttpEndpointRequestConfiguration
    The configuration of the request sent to the HTTP endpoint specified as the destination.
    retry_options DeliveryStreamRetryOptions
    Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
    role_arn str
    Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
    s3_backup_mode str
    Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
    secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for HTTP Endpoint destination.
    endpointConfiguration Property Map
    The configuration of the HTTP endpoint selected as the destination.
    s3Configuration Property Map
    Describes the configuration of a destination in Amazon S3.
    bufferingHints Property Map
    The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
    cloudWatchLoggingOptions Property Map
    Describes the Amazon CloudWatch logging options for your delivery stream.
    processingConfiguration Property Map
    Describes the data processing configuration.
    requestConfiguration Property Map
    The configuration of the request sent to the HTTP endpoint specified as the destination.
    retryOptions Property Map
    Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
    roleArn String
    Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
    s3BackupMode String
    Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
    secretsManagerConfiguration Property Map
    The configuration that defines how you access secrets for HTTP Endpoint destination.
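    Putting the pieces above together, an HTTP endpoint destination might look like the following Property Map sketch. The endpoint URL, names, and ARNs are placeholders, not real values, and only a subset of the optional fields is shown.

    ```python
    # Sketch: an HTTP endpoint destination configuration in Property Map
    # (camelCase dict) form. All URLs and ARNs below are placeholders.
    http_endpoint_destination_configuration = {
        "endpointConfiguration": {
            "url": "https://example.com/firehose-ingest",  # placeholder endpoint
            "name": "example-endpoint",                    # placeholder name
            # "accessKey" omitted here; secretsManagerConfiguration is the
            # alternative for supplying credentials.
        },
        "s3Configuration": {
            "bucketArn": "arn:aws:s3:::example-backup-bucket",            # placeholder
            "roleArn": "arn:aws:iam::111122223333:role/firehose-role",    # placeholder
        },
        "roleArn": "arn:aws:iam::111122223333:role/firehose-role",        # placeholder
        # Back up only the documents the endpoint rejected, rather than all data:
        "s3BackupMode": "FailedDataOnly",
    }
    ```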

    DeliveryStreamHttpEndpointRequestConfiguration

    CommonAttributes List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointCommonAttribute>
    Describes the metadata sent to the HTTP endpoint destination.
    ContentEncoding Pulumi.AwsNative.KinesisFirehose.DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
    Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
    CommonAttributes []DeliveryStreamHttpEndpointCommonAttribute
    Describes the metadata sent to the HTTP endpoint destination.
    ContentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
    Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
    commonAttributes List<DeliveryStreamHttpEndpointCommonAttribute>
    Describes the metadata sent to the HTTP endpoint destination.
    contentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
    Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
    commonAttributes DeliveryStreamHttpEndpointCommonAttribute[]
    Describes the metadata sent to the HTTP endpoint destination.
    contentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
    Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
    common_attributes Sequence[DeliveryStreamHttpEndpointCommonAttribute]
    Describes the metadata sent to the HTTP endpoint destination.
    content_encoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding
    Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
    commonAttributes List<Property Map>
    Describes the metadata sent to the HTTP endpoint destination.
    contentEncoding "NONE" | "GZIP"
    Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
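    As a sketch, a request configuration combining content encoding with common attributes might look like this in Property Map form; the attribute name and value are hypothetical metadata, not required fields.

    ```python
    # Sketch: an HTTP endpoint request configuration in Property Map form.
    # contentEncoding must be "NONE" or "GZIP"; commonAttributes carries
    # arbitrary key/value metadata sent with every request. The attribute
    # shown is a hypothetical example.
    http_endpoint_request_configuration = {
        "contentEncoding": "GZIP",  # compress request bodies before delivery
        "commonAttributes": [
            {"attributeName": "environment", "attributeValue": "staging"},
        ],
    }
    ```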

    DeliveryStreamHttpEndpointRequestConfigurationContentEncoding

    DeliveryStreamInputFormatConfiguration

    Deserializer Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDeserializer
    Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
    Deserializer DeliveryStreamDeserializer
    Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
    deserializer DeliveryStreamDeserializer
    Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
    deserializer DeliveryStreamDeserializer
    Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
    deserializer DeliveryStreamDeserializer
    Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
    deserializer Property Map
    Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
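    The either/or rule above (exactly one SerDe, never both) can be mirrored client-side before submitting a configuration. The helper below is an illustrative sketch, not part of the SDK, using Property Map key names.

    ```python
    # Sketch: validate that a deserializer Property Map sets exactly one of
    # the two SerDes, mirroring the server-side rule that rejects requests
    # where both are non-null. This helper is illustrative, not an SDK API.
    def validate_deserializer(deserializer: dict) -> None:
        set_serdes = [
            key for key in ("hiveJsonSerDe", "openXJsonSerDe")
            if deserializer.get(key) is not None
        ]
        if len(set_serdes) != 1:
            raise ValueError(
                "specify exactly one of hiveJsonSerDe or openXJsonSerDe"
            )

    # Valid: only the OpenX JSON SerDe is set.
    validate_deserializer({"openXJsonSerDe": {"caseInsensitive": True}})
    ```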

    DeliveryStreamKmsEncryptionConfig

    AwskmsKeyArn string
    The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
    AwskmsKeyArn string
    The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
    awskmsKeyArn String
    The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
    awskmsKeyArn string
    The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
    awskms_key_arn str
    The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
    awskmsKeyArn String
    The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.

    DeliveryStreamOpenXJsonSerDe

    CaseInsensitive bool
    When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
    ColumnToJsonKeyMappings Dictionary<string, string>
    Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
    ConvertDotsInJsonKeysToUnderscores bool

    When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.

    The default is false.

    CaseInsensitive bool
    When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
    ColumnToJsonKeyMappings map[string]string
    Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
    ConvertDotsInJsonKeysToUnderscores bool

    When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.

    The default is false.

    caseInsensitive Boolean
    When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
    columnToJsonKeyMappings Map<String,String>
    Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
    convertDotsInJsonKeysToUnderscores Boolean

    When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.

    The default is false.

    caseInsensitive boolean
    When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
    columnToJsonKeyMappings {[key: string]: string}
    Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
    convertDotsInJsonKeysToUnderscores boolean

    When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.

    The default is false.

    case_insensitive bool
    When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
    column_to_json_key_mappings Mapping[str, str]
    Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
    convert_dots_in_json_keys_to_underscores bool

    When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.

    The default is false.

    caseInsensitive Boolean
    When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
    columnToJsonKeyMappings Map<String>
    Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
    convertDotsInJsonKeysToUnderscores Boolean

    When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.

    The default is false.
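    The three OpenX JSON SerDe options above compose as in this Property Map sketch; the mapping shown is the example from the descriptions (the Hive keyword timestamp mapped to a column named ts).

    ```python
    # Sketch: an OpenX JSON SerDe configuration in Property Map form.
    open_x_json_ser_de = {
        "caseInsensitive": True,  # default: lowercase JSON keys before deserializing
        # "timestamp" is a Hive keyword, so map the JSON key "timestamp"
        # to a column named "ts":
        "columnToJsonKeyMappings": {"ts": "timestamp"},
        # Replace dots in JSON key names with underscores, e.g. "a.b" -> "a_b"
        # (Hive does not allow dots in column names); defaults to False:
        "convertDotsInJsonKeysToUnderscores": True,
    }
    ```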

    DeliveryStreamOrcSerDe

    BlockSizeBytes int
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    BloomFilterColumns List<string>
    The column names for which you want Firehose to create bloom filters. The default is null .
    BloomFilterFalsePositiveProbability double
    The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
    Compression string
    The compression code to use over data blocks. The default is SNAPPY .
    DictionaryKeyThreshold double
    Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
    EnablePadding bool
    Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false .
    FormatVersion string
    The version of the file to write. The possible values are V0_11 and V0_12 . The default is V0_12 .
    PaddingTolerance double

    A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.

    For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.

    Kinesis Data Firehose ignores this parameter when EnablePadding is false .

    RowIndexStride int
    The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
    StripeSizeBytes int
    The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
    BlockSizeBytes int
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    BloomFilterColumns []string
    The column names for which you want Firehose to create bloom filters. The default is null .
    BloomFilterFalsePositiveProbability float64
    The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
    Compression string
    The compression code to use over data blocks. The default is SNAPPY .
    DictionaryKeyThreshold float64
    Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
    EnablePadding bool
    Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false .
    FormatVersion string
    The version of the file to write. The possible values are V0_11 and V0_12 . The default is V0_12 .
    PaddingTolerance float64

    A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.

    For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.

    Kinesis Data Firehose ignores this parameter when EnablePadding is false .

    RowIndexStride int
    The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
    StripeSizeBytes int
    The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
    blockSizeBytes Integer
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    bloomFilterColumns List<String>
    The column names for which you want Firehose to create bloom filters. The default is null.
    bloomFilterFalsePositiveProbability Double
    The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
    compression String
    The compression code to use over data blocks. The default is SNAPPY.
    dictionaryKeyThreshold Double
    Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
    enablePadding Boolean
    Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
    formatVersion String
    The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
    paddingTolerance Double

    A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.

    For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.

    Kinesis Data Firehose ignores this parameter when EnablePadding is false.

    rowIndexStride Integer
    The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
    stripeSizeBytes Integer
    The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
    blockSizeBytes number
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    bloomFilterColumns string[]
    The column names for which you want Firehose to create bloom filters. The default is null.
    bloomFilterFalsePositiveProbability number
    The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
    compression string
    The compression code to use over data blocks. The default is SNAPPY.
    dictionaryKeyThreshold number
    Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
    enablePadding boolean
    Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
    formatVersion string
    The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
    paddingTolerance number

    A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.

    For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.

    Kinesis Data Firehose ignores this parameter when EnablePadding is false.

    rowIndexStride number
    The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
    stripeSizeBytes number
    The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
    block_size_bytes int
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    bloom_filter_columns Sequence[str]
    The column names for which you want Firehose to create bloom filters. The default is null.
    bloom_filter_false_positive_probability float
    The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
    compression str
    The compression code to use over data blocks. The default is SNAPPY.
    dictionary_key_threshold float
    Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
    enable_padding bool
    Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
    format_version str
    The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
    padding_tolerance float

    A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.

    For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.

    Kinesis Data Firehose ignores this parameter when EnablePadding is false.

    row_index_stride int
    The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
    stripe_size_bytes int
    The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
    blockSizeBytes Number
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    bloomFilterColumns List<String>
    The column names for which you want Firehose to create bloom filters. The default is null.
    bloomFilterFalsePositiveProbability Number
    The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
    compression String
    The compression code to use over data blocks. The default is SNAPPY.
    dictionaryKeyThreshold Number
    Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
    enablePadding Boolean
    Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
    formatVersion String
    The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
    paddingTolerance Number

    A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.

    For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.

    Kinesis Data Firehose ignores this parameter when EnablePadding is false.

    rowIndexStride Number
    The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
    stripeSizeBytes Number
    The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
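The padding-tolerance arithmetic described above (5 percent of a 64 MiB stripe reserved within a 256 MiB block) can be verified with a short calculation. This is a plain sketch; `orc_padding_reserve` is an illustrative helper, not part of any SDK:

```python
MIB = 1024 * 1024

def orc_padding_reserve(stripe_size_bytes: float, padding_tolerance: float) -> float:
    """Bytes reserved for block padding: a fraction of the stripe size."""
    return stripe_size_bytes * padding_tolerance

# Defaults from the tables above: 64 MiB stripes, 0.05 tolerance.
print(orc_padding_reserve(64 * MIB, 0.05) / MIB)  # 3.2 MiB within a 256 MiB block
```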

    DeliveryStreamOutputFormatConfiguration

    Serializer Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSerializer
    Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
    Serializer DeliveryStreamSerializer
    Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
    serializer DeliveryStreamSerializer
    Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
    serializer DeliveryStreamSerializer
    Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
    serializer DeliveryStreamSerializer
    Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
    serializer Property Map
    Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
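The "if both are non-null, the server rejects the request" rule above is easy to mirror client-side before deploying. A minimal sketch, assuming a hypothetical `validate_serializer` helper and dict keys matching the camelCase property names on this page:

```python
def validate_serializer(serializer: dict) -> None:
    """Reject a serializer that sets both the ORC SerDe and the Parquet SerDe."""
    if serializer.get("orcSerDe") is not None and serializer.get("parquetSerDe") is not None:
        raise ValueError("serializer: orcSerDe and parquetSerDe are mutually exclusive")

validate_serializer({"parquetSerDe": {"compression": "SNAPPY"}})  # accepted
```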

    DeliveryStreamParquetSerDe

    BlockSizeBytes int
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    Compression string
    The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
    EnableDictionaryCompression bool
    Indicates whether to enable dictionary compression.
    MaxPaddingBytes int
    The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
    PageSizeBytes int
    The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
    WriterVersion string
    Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
    BlockSizeBytes int
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    Compression string
    The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
    EnableDictionaryCompression bool
    Indicates whether to enable dictionary compression.
    MaxPaddingBytes int
    The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
    PageSizeBytes int
    The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
    WriterVersion string
    Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
    blockSizeBytes Integer
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    compression String
    The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
    enableDictionaryCompression Boolean
    Indicates whether to enable dictionary compression.
    maxPaddingBytes Integer
    The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
    pageSizeBytes Integer
    The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
    writerVersion String
    Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
    blockSizeBytes number
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    compression string
    The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
    enableDictionaryCompression boolean
    Indicates whether to enable dictionary compression.
    maxPaddingBytes number
    The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
    pageSizeBytes number
    The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
    writerVersion string
    Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
    block_size_bytes int
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    compression str
    The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
    enable_dictionary_compression bool
    Indicates whether to enable dictionary compression.
    max_padding_bytes int
    The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
    page_size_bytes int
    The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
    writer_version str
    Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
    blockSizeBytes Number
    The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
    compression String
    The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
    enableDictionaryCompression Boolean
    Indicates whether to enable dictionary compression.
    maxPaddingBytes Number
    The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
    pageSizeBytes Number
    The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
    writerVersion String
    Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
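The documented bounds for the Parquet SerDe (block size at least 64 MiB, page size at least 64 KiB, three compression codes, two writer versions) can be sketched as a client-side sanity check. `check_parquet_serde` is an illustrative helper, not SDK behavior:

```python
MIB, KIB = 1024 * 1024, 1024

def check_parquet_serde(block_size_bytes: int = 256 * MIB,
                        page_size_bytes: int = 1 * MIB,
                        compression: str = "SNAPPY",
                        writer_version: str = "V1") -> None:
    """Validate a Parquet SerDe configuration against the documented bounds."""
    assert block_size_bytes >= 64 * MIB, "block size minimum is 64 MiB"
    assert page_size_bytes >= 64 * KIB, "page size minimum is 64 KiB"
    assert compression in {"UNCOMPRESSED", "SNAPPY", "GZIP"}
    assert writer_version in {"V1", "V2"}

check_parquet_serde()  # the documented defaults pass
```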

    DeliveryStreamProcessingConfiguration

    Enabled bool
    Indicates whether data processing is enabled (true) or disabled (false).
    Processors List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessor>
    The data processors.
    Enabled bool
    Indicates whether data processing is enabled (true) or disabled (false).
    Processors []DeliveryStreamProcessor
    The data processors.
    enabled Boolean
    Indicates whether data processing is enabled (true) or disabled (false).
    processors List<DeliveryStreamProcessor>
    The data processors.
    enabled boolean
    Indicates whether data processing is enabled (true) or disabled (false).
    processors DeliveryStreamProcessor[]
    The data processors.
    enabled bool
    Indicates whether data processing is enabled (true) or disabled (false).
    processors Sequence[DeliveryStreamProcessor]
    The data processors.
    enabled Boolean
    Indicates whether data processing is enabled (true) or disabled (false).
    processors List<Property Map>
    The data processors.
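Putting the two tables together: a processing configuration is an enabled flag plus a list of processors, each carrying a type and a parameter list. A sketch of that shape, assuming a hypothetical helper and a placeholder Lambda ARN (LambdaArn is one of the Firehose processor parameter names):

```python
def lambda_processing_configuration(lambda_arn: str) -> dict:
    """Shape of a ProcessingConfiguration with a single Lambda processor."""
    return {
        "enabled": True,
        "processors": [{
            "type": "Lambda",
            "parameters": [
                {"parameterName": "LambdaArn", "parameterValue": lambda_arn},
            ],
        }],
    }

# Placeholder ARN for illustration only.
cfg = lambda_processing_configuration(
    "arn:aws:lambda:us-east-1:123456789012:function:transform")
```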

    DeliveryStreamProcessor

    Type DeliveryStreamProcessorType
    The type of processor. Valid values: Lambda.
    Parameters []DeliveryStreamProcessorParameter
    The processor parameters.
    type DeliveryStreamProcessorType
    The type of processor. Valid values: Lambda.
    parameters List<DeliveryStreamProcessorParameter>
    The processor parameters.
    type DeliveryStreamProcessorType
    The type of processor. Valid values: Lambda.
    parameters DeliveryStreamProcessorParameter[]
    The processor parameters.
    type DeliveryStreamProcessorType
    The type of processor. Valid values: Lambda.
    parameters Sequence[DeliveryStreamProcessorParameter]
    The processor parameters.

    DeliveryStreamProcessorParameter

    ParameterName string
    The name of the parameter. Currently the supported default values are 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, for which it is 256 KB.
    ParameterValue string
    The parameter value.
    ParameterName string
    The name of the parameter. Currently the supported default values are 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, for which it is 256 KB.
    ParameterValue string
    The parameter value.
    parameterName String
    The name of the parameter. Currently the supported default values are 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, for which it is 256 KB.
    parameterValue String
    The parameter value.
    parameterName string
    The name of the parameter. Currently the supported default values are 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, for which it is 256 KB.
    parameterValue string
    The parameter value.
    parameter_name str
    The name of the parameter. Currently the supported default values are 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, for which it is 256 KB.
    parameter_value str
    The parameter value.
    parameterName String
    The name of the parameter. Currently the supported default values are 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, for which it is 256 KB.
    parameterValue String
    The parameter value.
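The defaults quoted in the description (3 retries, a 60-second buffer interval, a 1 MB buffering hint everywhere except Splunk's 256 KB) can be captured as a small lookup. This is an illustrative helper, not an API:

```python
# Defaults from the description above; values are strings, as ParameterValue requires.
PROCESSOR_PARAMETER_DEFAULTS = {
    "NumberOfRetries": "3",
    "BufferIntervalInSeconds": "60",
}

def default_buffer_size_mb(destination: str) -> float:
    """Default BufferSizeInMBs hint: 1 MB for all destinations except Splunk (256 KB)."""
    return 0.25 if destination == "Splunk" else 1.0
```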

    DeliveryStreamProcessorType

    DeliveryStreamRedshiftDestinationConfiguration

    ClusterJdbcurl string
    The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
    CopyCommand Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCopyCommand
    Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
    RoleArn string
    The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
    S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
    The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
    CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    Password string
    The password for the Amazon Redshift user that you specified in the Username property.
    ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRedshiftRetryOptions
    The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
    S3BackupConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamRedshiftDestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Amazon Redshift.
    Username string
    The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
    ClusterJdbcurl string
    The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
    CopyCommand DeliveryStreamCopyCommand
    Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
    RoleArn string
    The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
    S3Configuration DeliveryStreamS3DestinationConfiguration
    The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
    CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    Password string
    The password for the Amazon Redshift user that you specified in the Username property.
    ProcessingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    RetryOptions DeliveryStreamRedshiftRetryOptions
    The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
    S3BackupConfiguration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    S3BackupMode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Amazon Redshift.
    Username string
    The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
    clusterJdbcurl String
    The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
    copyCommand DeliveryStreamCopyCommand
    Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
    roleArn String
    The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
    s3Configuration DeliveryStreamS3DestinationConfiguration
    The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    password String
    The password for the Amazon Redshift user that you specified in the Username property.
    processingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    retryOptions DeliveryStreamRedshiftRetryOptions
    The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
    s3BackupConfiguration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    s3BackupMode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Amazon Redshift.
    username String
    The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
    clusterJdbcurl string
    The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
    copyCommand DeliveryStreamCopyCommand
    Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
    roleArn string
    The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
    s3Configuration DeliveryStreamS3DestinationConfiguration
    The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    password string
    The password for the Amazon Redshift user that you specified in the Username property.
    processingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    retryOptions DeliveryStreamRedshiftRetryOptions
    The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
    s3BackupConfiguration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    s3BackupMode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Amazon Redshift.
    username string
    The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
    cluster_jdbcurl str
    The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
    copy_command DeliveryStreamCopyCommand
    Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
    role_arn str
    The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
    s3_configuration DeliveryStreamS3DestinationConfiguration
    The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
    cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    password str
    The password for the Amazon Redshift user that you specified in the Username property.
    processing_configuration DeliveryStreamProcessingConfiguration
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    retry_options DeliveryStreamRedshiftRetryOptions
The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. The default value is 3600 seconds (60 minutes).
    s3_backup_configuration DeliveryStreamS3DestinationConfiguration
    The configuration for backup in Amazon S3.
    s3_backup_mode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Amazon Redshift.
    username str
    The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
    clusterJdbcurl String
    The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
    copyCommand Property Map
    Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
    roleArn String
    The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide .
    s3Configuration Property Map
    The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
    cloudWatchLoggingOptions Property Map
    The CloudWatch logging options for your Firehose stream.
    password String
    The password for the Amazon Redshift user that you specified in the Username property.
    processingConfiguration Property Map
    The data processing configuration for the Kinesis Data Firehose delivery stream.
    retryOptions Property Map
The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. The default value is 3600 seconds (60 minutes).
    s3BackupConfiguration Property Map
    The configuration for backup in Amazon S3.
    s3BackupMode "Disabled" | "Enabled"
    The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the Firehose stream to disable it.
    secretsManagerConfiguration Property Map
    The configuration that defines how you access secrets for Amazon Redshift.
    username String
    The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.

    DeliveryStreamRedshiftDestinationConfigurationS3BackupMode

    DeliveryStreamRedshiftRetryOptions

    DurationInSeconds int
    The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
    DurationInSeconds int
    The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
    durationInSeconds Integer
    The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
    durationInSeconds number
    The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
    duration_in_seconds int
    The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
    durationInSeconds Number
    The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.

    DeliveryStreamRetryOptions

    DurationInSeconds int
    The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
    DurationInSeconds int
    The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
    durationInSeconds Integer
    The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
    durationInSeconds number
    The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
    duration_in_seconds int
    The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
    durationInSeconds Number
    The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.

    DeliveryStreamS3DestinationConfiguration

    BucketArn string
    The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
    RoleArn string
    The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
    BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints
    Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
    CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    CompressionFormat Pulumi.AwsNative.KinesisFirehose.DeliveryStreamS3DestinationConfigurationCompressionFormat
    The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    EncryptionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamEncryptionConfiguration
    Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
    ErrorOutputPrefix string
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    Prefix string
    A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
    BucketArn string
    The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
    RoleArn string
    The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
    BufferingHints DeliveryStreamBufferingHints
    Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
    CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    CompressionFormat DeliveryStreamS3DestinationConfigurationCompressionFormat
    The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    EncryptionConfiguration DeliveryStreamEncryptionConfiguration
    Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
    ErrorOutputPrefix string
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    Prefix string
    A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
    bucketArn String
    The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
    roleArn String
    The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
    bufferingHints DeliveryStreamBufferingHints
    Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    compressionFormat DeliveryStreamS3DestinationConfigurationCompressionFormat
    The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    encryptionConfiguration DeliveryStreamEncryptionConfiguration
    Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
    errorOutputPrefix String
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    prefix String
    A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
    bucketArn string
    The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
    roleArn string
    The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
    bufferingHints DeliveryStreamBufferingHints
    Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    compressionFormat DeliveryStreamS3DestinationConfigurationCompressionFormat
    The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    encryptionConfiguration DeliveryStreamEncryptionConfiguration
    Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
    errorOutputPrefix string
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    prefix string
    A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
    bucket_arn str
    The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
    role_arn str
    The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
    buffering_hints DeliveryStreamBufferingHints
    Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
    cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
    The CloudWatch logging options for your Firehose stream.
    compression_format DeliveryStreamS3DestinationConfigurationCompressionFormat
    The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    encryption_configuration DeliveryStreamEncryptionConfiguration
    Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
    error_output_prefix str
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    prefix str
    A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
    bucketArn String
    The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
    roleArn String
    The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
    bufferingHints Property Map
    Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
    cloudWatchLoggingOptions Property Map
    The CloudWatch logging options for your Firehose stream.
    compressionFormat "UNCOMPRESSED" | "GZIP" | "ZIP" | "Snappy" | "HADOOP_SNAPPY"
    The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
    encryptionConfiguration Property Map
    Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
    errorOutputPrefix String
    A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
    prefix String
    A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.

    DeliveryStreamS3DestinationConfigurationCompressionFormat

    DeliveryStreamSchemaConfiguration

    CatalogId string
    The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
    DatabaseName string

    Specifies the name of the AWS Glue database that contains the schema for the output data.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.

    Region string
    If you don't specify an AWS Region, the default is the current Region.
    RoleArn string

    The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.

    TableName string

    Specifies the AWS Glue table that contains the column information that constitutes your data schema.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.

    VersionId string
    Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST , Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
    CatalogId string
    The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
    DatabaseName string

    Specifies the name of the AWS Glue database that contains the schema for the output data.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.

    Region string
    If you don't specify an AWS Region, the default is the current Region.
    RoleArn string

    The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.

    TableName string

    Specifies the AWS Glue table that contains the column information that constitutes your data schema.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.

    VersionId string
    Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST , Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
    catalogId String
    The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
    databaseName String

    Specifies the name of the AWS Glue database that contains the schema for the output data.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.

    region String
    If you don't specify an AWS Region, the default is the current Region.
    roleArn String

    The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.

    tableName String

    Specifies the AWS Glue table that contains the column information that constitutes your data schema.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.

    versionId String
    Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST , Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
    catalogId string
    The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
    databaseName string

    Specifies the name of the AWS Glue database that contains the schema for the output data.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.

    region string
    If you don't specify an AWS Region, the default is the current Region.
    roleArn string

    The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.

    tableName string

    Specifies the AWS Glue table that contains the column information that constitutes your data schema.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.

    versionId string
    Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST , Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
    catalog_id str
    The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
    database_name str

    Specifies the name of the AWS Glue database that contains the schema for the output data.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.

    region str
    If you don't specify an AWS Region, the default is the current Region.
    role_arn str

    The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.

    table_name str

    Specifies the AWS Glue table that contains the column information that constitutes your data schema.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.

    version_id str
    Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST , Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
    catalogId String
    The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
    databaseName String

    Specifies the name of the AWS Glue database that contains the schema for the output data.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.

    region String
    If you don't specify an AWS Region, the default is the current Region.
    roleArn String

    The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.

    tableName String

    Specifies the AWS Glue table that contains the column information that constitutes your data schema.

    If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.

    versionId String
    Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST , Firehose uses the most recent version. This means that any updates to the table are automatically picked up.

    DeliveryStreamSecretsManagerConfiguration

    Enabled bool
Specifies whether you want to use the Secrets Manager feature. When set to True, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to False, Firehose falls back to the credentials in the destination configuration.
    RoleArn string
    Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide a role, it overrides any destination-specific role defined in the destination configuration. If you do not provide a role, Firehose uses the destination-specific role. This parameter is required for Splunk.
    SecretArn string
    The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True .
    Enabled bool
Specifies whether you want to use the Secrets Manager feature. When set to True, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to False, Firehose falls back to the credentials in the destination configuration.
    RoleArn string
    Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide a role, it overrides any destination-specific role defined in the destination configuration. If you do not provide a role, Firehose uses the destination-specific role. This parameter is required for Splunk.
    SecretArn string
    The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True .
    enabled Boolean
Specifies whether you want to use the Secrets Manager feature. When set to True, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to False, Firehose falls back to the credentials in the destination configuration.
    roleArn String
    Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide a role, it overrides any destination-specific role defined in the destination configuration. If you do not provide a role, Firehose uses the destination-specific role. This parameter is required for Splunk.
    secretArn String
    The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True .
    enabled boolean
Specifies whether you want to use the Secrets Manager feature. When set to True, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to False, Firehose falls back to the credentials in the destination configuration.
    roleArn string
    Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide a role, it overrides any destination-specific role defined in the destination configuration. If you do not provide a role, Firehose uses the destination-specific role. This parameter is required for Splunk.
    secretArn string
    The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True .
    enabled bool
    Specifies whether you want to use the Secrets Manager feature. When set to True, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to False, Firehose falls back to the credentials in the destination configuration.
    role_arn str
    Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide a role, Firehose uses the destination-specific role. This parameter is required for Splunk.
    secret_arn str
    The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True .
    enabled Boolean
    Specifies whether you want to use the Secrets Manager feature. When set to True, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to False, Firehose falls back to the credentials in the destination configuration.
    roleArn String
    Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide a role, Firehose uses the destination-specific role. This parameter is required for Splunk.
    secretArn String
    The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True .

    DeliveryStreamSerializer

    OrcSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOrcSerDe
    A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC .
    ParquetSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamParquetSerDe
    A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet .
    OrcSerDe DeliveryStreamOrcSerDe
    A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC .
    ParquetSerDe DeliveryStreamParquetSerDe
    A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet .
    orcSerDe DeliveryStreamOrcSerDe
    A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC .
    parquetSerDe DeliveryStreamParquetSerDe
    A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet .
    orcSerDe DeliveryStreamOrcSerDe
    A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC .
    parquetSerDe DeliveryStreamParquetSerDe
    A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet .
    orc_ser_de DeliveryStreamOrcSerDe
    A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC .
    parquet_ser_de DeliveryStreamParquetSerDe
    A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet .
    orcSerDe Property Map
    A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC .
    parquetSerDe Property Map
    A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet .

    DeliveryStreamSplunkBufferingHints

    IntervalInSeconds int
    Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
    SizeInMbs int
    Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
    IntervalInSeconds int
    Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
    SizeInMbs int
    Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
    intervalInSeconds Integer
    Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
    sizeInMbs Integer
    Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
    intervalInSeconds number
    Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
    sizeInMbs number
    Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
    interval_in_seconds int
    Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
    size_in_mbs int
    Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
    intervalInSeconds Number
    Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
    sizeInMbs Number
    Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
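    The defaults documented above (60 seconds, 5 MB) can be summarized in a short sketch. `resolve_buffering_hints` is a hypothetical helper for illustration only, not part of the Pulumi SDK:

    ```python
    # Hypothetical helper illustrating the documented Splunk buffering defaults;
    # not part of the Pulumi SDK.
    def resolve_buffering_hints(interval_in_seconds=None, size_in_mbs=None):
        """Apply the documented defaults: 60 seconds and 5 MB."""
        return {
            "intervalInSeconds": interval_in_seconds if interval_in_seconds is not None else 60,
            "sizeInMbs": size_in_mbs if size_in_mbs is not None else 5,
        }

    # With no values specified, the Splunk defaults apply:
    # resolve_buffering_hints() -> {"intervalInSeconds": 60, "sizeInMbs": 5}
    ```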

    DeliveryStreamSplunkDestinationConfiguration

    HecEndpoint string
    The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
    HecEndpointType Pulumi.AwsNative.KinesisFirehose.DeliveryStreamSplunkDestinationConfigurationHecEndpointType
    This type can be either Raw or Event .
    S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
    The configuration for the backup Amazon S3 location.
    BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkBufferingHints
    The buffering options. If no value is specified, the default values for Splunk are used.
    CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    HecAcknowledgmentTimeoutInSeconds int
    The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
    HecToken string
    This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
    ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
    The data processing configuration.
    RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkRetryOptions
    The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
    S3BackupMode string

    Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly , Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents , Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly .

    You can update this backup mode from FailedEventsOnly to AllEvents . You can't update it from AllEvents to FailedEventsOnly .

    SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Splunk.
    HecEndpoint string
    The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
    HecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType
    This type can be either Raw or Event .
    S3Configuration DeliveryStreamS3DestinationConfiguration
    The configuration for the backup Amazon S3 location.
    BufferingHints DeliveryStreamSplunkBufferingHints
    The buffering options. If no value is specified, the default values for Splunk are used.
    CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    HecAcknowledgmentTimeoutInSeconds int
    The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
    HecToken string
    This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
    ProcessingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration.
    RetryOptions DeliveryStreamSplunkRetryOptions
    The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
    S3BackupMode string

    Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly , Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents , Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly .

    You can update this backup mode from FailedEventsOnly to AllEvents . You can't update it from AllEvents to FailedEventsOnly .

    SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Splunk.
    hecEndpoint String
    The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
    hecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType
    This type can be either Raw or Event .
    s3Configuration DeliveryStreamS3DestinationConfiguration
    The configuration for the backup Amazon S3 location.
    bufferingHints DeliveryStreamSplunkBufferingHints
    The buffering options. If no value is specified, the default values for Splunk are used.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    hecAcknowledgmentTimeoutInSeconds Integer
    The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
    hecToken String
    This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
    processingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration.
    retryOptions DeliveryStreamSplunkRetryOptions
    The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
    s3BackupMode String

    Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly , Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents , Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly .

    You can update this backup mode from FailedEventsOnly to AllEvents . You can't update it from AllEvents to FailedEventsOnly .

    secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Splunk.
    hecEndpoint string
    The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
    hecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType
    This type can be either Raw or Event .
    s3Configuration DeliveryStreamS3DestinationConfiguration
    The configuration for the backup Amazon S3 location.
    bufferingHints DeliveryStreamSplunkBufferingHints
    The buffering options. If no value is specified, the default values for Splunk are used.
    cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    hecAcknowledgmentTimeoutInSeconds number
    The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
    hecToken string
    This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
    processingConfiguration DeliveryStreamProcessingConfiguration
    The data processing configuration.
    retryOptions DeliveryStreamSplunkRetryOptions
    The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
    s3BackupMode string

    Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly , Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents , Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly .

    You can update this backup mode from FailedEventsOnly to AllEvents . You can't update it from AllEvents to FailedEventsOnly .

    secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Splunk.
    hec_endpoint str
    The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
    hec_endpoint_type DeliveryStreamSplunkDestinationConfigurationHecEndpointType
    This type can be either Raw or Event .
    s3_configuration DeliveryStreamS3DestinationConfiguration
    The configuration for the backup Amazon S3 location.
    buffering_hints DeliveryStreamSplunkBufferingHints
    The buffering options. If no value is specified, the default values for Splunk are used.
    cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
    The Amazon CloudWatch logging options for your Firehose stream.
    hec_acknowledgment_timeout_in_seconds int
    The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
    hec_token str
    This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
    processing_configuration DeliveryStreamProcessingConfiguration
    The data processing configuration.
    retry_options DeliveryStreamSplunkRetryOptions
    The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
    s3_backup_mode str

    Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly , Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents , Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly .

    You can update this backup mode from FailedEventsOnly to AllEvents . You can't update it from AllEvents to FailedEventsOnly .

    secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration
    The configuration that defines how you access secrets for Splunk.
    hecEndpoint String
    The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
    hecEndpointType "Raw" | "Event"
    This type can be either Raw or Event .
    s3Configuration Property Map
    The configuration for the backup Amazon S3 location.
    bufferingHints Property Map
    The buffering options. If no value is specified, the default values for Splunk are used.
    cloudWatchLoggingOptions Property Map
    The Amazon CloudWatch logging options for your Firehose stream.
    hecAcknowledgmentTimeoutInSeconds Number
    The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
    hecToken String
    This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
    processingConfiguration Property Map
    The data processing configuration.
    retryOptions Property Map
    The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
    s3BackupMode String

    Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly , Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents , Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly .

    You can update this backup mode from FailedEventsOnly to AllEvents . You can't update it from AllEvents to FailedEventsOnly .

    secretsManagerConfiguration Property Map
    The configuration that defines how you access secrets for Splunk.
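    The one-way update rule for s3BackupMode described above can be expressed as a small check. `is_valid_backup_mode_update` is an illustrative helper, not an SDK function:

    ```python
    # Illustrative check for the documented one-way s3BackupMode update rule;
    # not part of the Pulumi SDK.
    def is_valid_backup_mode_update(current: str, desired: str) -> bool:
        """FailedEventsOnly -> AllEvents is allowed; the reverse is not."""
        if current == desired:
            return True
        return current == "FailedEventsOnly" and desired == "AllEvents"
    ```

    For example, updating from FailedEventsOnly to AllEvents passes the check, while attempting the reverse does not.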

    DeliveryStreamSplunkDestinationConfigurationHecEndpointType

    DeliveryStreamSplunkRetryOptions

    DurationInSeconds int
    The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
    DurationInSeconds int
    The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
    durationInSeconds Integer
    The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
    durationInSeconds number
    The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
    duration_in_seconds int
    The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
    durationInSeconds Number
    The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
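    Because DurationInSeconds excludes the acknowledgment-wait periods, the remaining retry budget can be sketched as follows; the helper name and parameters are illustrative, not part of the SDK:

    ```python
    # Illustrative accounting of the Splunk retry budget: the documented
    # DurationInSeconds covers retry time only, excluding acknowledgment waits.
    def retry_budget_remaining(duration_in_seconds: int,
                               total_elapsed: int,
                               ack_wait_elapsed: int) -> int:
        """Time left for retries; acknowledgment waits don't consume the budget."""
        retry_elapsed = total_elapsed - ack_wait_elapsed
        return max(duration_in_seconds - retry_elapsed, 0)

    # A 300-second budget after 200 s of wall-clock time, 150 s of which was
    # spent waiting for acknowledgments, still has 250 s of retry time left.
    ```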

    Tag

    Key string
    The key name of the tag
    Value string
    The value of the tag
    Key string
    The key name of the tag
    Value string
    The value of the tag
    key String
    The key name of the tag
    value String
    The value of the tag
    key string
    The key name of the tag
    value string
    The value of the tag
    key str
    The key name of the tag
    value str
    The value of the tag
    key String
    The key name of the tag
    value String
    The value of the tag

    Package Details

    Repository
    AWS Native pulumi/pulumi-aws-native
    License
    Apache-2.0