Kamelets user guide
Speaking technically, a Kamelet is a resource that can be installed on any Kubernetes cluster. The following is an example of a Kamelet that we’ll use to discuss the various parts:
apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: telegram-text-source (1)
  annotations: (2)
    camel.apache.org/kamelet.icon: "data:image/svg+xml;base64,PD94bW..."
  labels: (3)
    camel.apache.org/kamelet.type: "source"
spec:
  definition: (4)
    title: "Telegram Text Source"
    description: |-
      Receive all text messages that people send to your Telegram bot.

      # Instructions
      Description can include Markdown and guide the final user to configure the Kamelet parameters.
    required:
      - botToken
    properties:
      botToken:
        title: Token
        description: The token to access your bot on Telegram
        type: string
        x-descriptors:
          - urn:alm:descriptor:com.tectonic.ui:password
  dataTypes: (5)
    out:
      default: text
      types:
        text:
          mediaType: text/plain
          # schema:
  template: (6)
    from:
      uri: telegram:bots
      parameters:
        authorizationToken: "#property:botToken"
      steps:
        - convert-body-to:
            type: "java.lang.String"
            charset: "UTF8"
        - filter:
            simple: "${body} != null"
        - log: "${body}"
        - to: "kamelet:sink"
1 | The Kamelet ID, to be used in integrations that want to leverage the Kamelet |
2 | Annotations such as the icon provide additional display features for the Kamelet |
3 | Labels allow users to query Kamelets, e.g. by kind ("source" vs. "sink") |
4 | Description of the Kamelet and its parameters in JSON-schema specification format |
5 | The data types that the Kamelet produces. Data type specifications contain the media type of the output and may also include a schema. |
6 | The route template defining the behavior of the Kamelet |
At a high level (more details are provided later), a Kamelet resource describes:
- A metadata section containing the ID (metadata → name) of the Kamelet and other information, such as the type of Kamelet ("source" or "sink")
- A JSON-schema specification (definition) containing the set of parameters that you can use to configure the Kamelet
- An optional section containing information about the input and output expected by the Kamelet (dataTypes)
- A Camel flow in YAML DSL containing the implementation of the Kamelet (template)
Once installed on a Kubernetes namespace, the Kamelet can be used by any integration in that namespace.
Kamelets can be installed on a Kubernetes namespace with a simple command:
kubectl apply -f telegram-text-source.kamelet.yaml
Kamelets are standard YAML files, but their common extension is .kamelet.yaml to help IDEs recognize them and provide auto-completion (in the future).
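Once applied, you can check that the Kamelet is available in the namespace; kamelets is the plural resource name registered by the Camel K CRDs:
kubectl get kamelets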
Using Kamelets in Integrations
Kamelets can be used in integrations as if they were standard Camel components. For example, suppose that you’ve created the telegram-text-source Kamelet in the default namespace on Kubernetes; then you can write the following integration to use the Kamelet:
- from:
    uri: "kamelet:telegram-text-source?botToken=XXXXYYYY"
    steps:
      - to: "log:info"
URI properties ("botToken") match the corresponding parameters in the Kamelet definition.
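Assuming the route above is saved in a file named telegram-route.yaml (a name chosen here just for illustration), it can be run with the Camel K CLI:
kamel run telegram-route.yaml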
Kamelets can also be used multiple times in the same route definition. This usually happens with sink Kamelets.
Suppose that you’ve defined a Kamelet named "my-company-log-sink" in your Kubernetes namespace, then you can write a route like this:
- from:
    uri: "kamelet:telegram-text-source?botToken=XXXXYYYY"
    steps:
      - to: "kamelet:my-company-log-sink?bucket=general"
      - filter:
          simple: '${body} contains "Camel"'
      - to: "kamelet:my-company-log-sink?bucket=special"
The "my-company-log-sink" will obviously define what it means to write a log in the enterprise system and what is concretely a "bucket".
Configuration
When using a Kamelet, the instance parameters (e.g. "botToken", "bucket") can be passed explicitly in the URI, or you can use properties. Properties can also be loaded implicitly by the operator from Kubernetes secrets (see below).
1. URI based configuration
You can configure the Kamelet by passing the configuration parameters directly in the URI, as in:
- from:
    uri: "kamelet:telegram-text-source?botToken=the-token-value"
    ...
In this case, "the-token-value" is passed explicitly in the URI (you can also pass a custom property placeholder as value).
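For example, instead of the literal value you can reference a standard Camel property placeholder, which is resolved at runtime from the integration properties (the property name telegram.token is chosen here just for illustration):
- from:
    uri: "kamelet:telegram-text-source?botToken={{telegram.token}}"
    ...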
2. Property based configuration
An alternative way to configure the Kamelet is to provide configuration parameters as properties of the integration.
Take for example a different version of the integration above:
- from:
    uri: "kamelet:telegram-text-source"
    steps:
      - to: "kamelet:my-company-log-sink"
      - filter:
          simple: '${body} contains "Camel"'
      - to: "kamelet:my-company-log-sink/mynamedconfig"
The integration above does not contain URI query parameters, and the last URI ("kamelet:my-company-log-sink/mynamedconfig") contains a path parameter with value "mynamedconfig".
The integration above needs some configuration in order to run properly. The configuration can be provided in a property file:
# Configuration for the Telegram source Kamelet
camel.kamelet.telegram-text-source.botToken=the-token-value
# General configuration for the Company Log Kamelet
camel.kamelet.my-company-log-sink.bucket=general
# camel.kamelet.my-company-log-sink.xxx=yyy
# Specific configuration for the Company Log Kamelet corresponding to the named configuration "mynamedconfig"
camel.kamelet.my-company-log-sink.mynamedconfig.bucket=special
# When using "kamelet:my-company-log-sink/mynamedconfig", the bucket will be "special", not "general"
Then the integration can be run with the following command:
kamel run kamelet-properties-route.yaml --property file:kamelet-example.properties
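As a sketch of an alternative, single properties can also be passed on the command line with the same key format, instead of using a property file:
kamel run kamelet-properties-route.yaml --property camel.kamelet.telegram-text-source.botToken=the-token-value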
3. Implicit configuration using secrets
Property based configuration can also happen implicitly, by creating secrets in the namespace that the operator will use to determine the Kamelets configuration.
To use implicit configuration via secret, we first need to create a configuration file holding only the properties of a named configuration.
# Only configuration related to the "mynamedconfig" named config
camel.kamelet.my-company-log-sink.mynamedconfig.bucket=special
# camel.kamelet.my-company-log-sink.mynamedconfig.xxx=yyy
We can create a secret from the file and label it so that it will be picked up automatically by the operator:
# Create the secret from the property file
kubectl create secret generic my-company-log-sink.mynamedconfig --from-file=mynamedconfig.properties
# Bind it to the named configuration "mynamedconfig" of the "my-company-log-sink" Kamelet
kubectl label secret my-company-log-sink.mynamedconfig camel.apache.org/kamelet=my-company-log-sink camel.apache.org/kamelet.configuration=mynamedconfig
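You can verify that the labels were applied as expected, since the operator matches secrets on exactly these labels:
# Show the labels attached to the secret
kubectl get secret my-company-log-sink.mynamedconfig --show-labels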
You can now write an integration that uses the Kamelet with the named configuration:
- from:
    uri: "timer:tick"
    steps:
      - setBody:
          constant: "Hello"
      - to: "kamelet:my-company-log-sink/mynamedconfig"
You can run this integration without specifying other parameters: the Kamelet endpoint will be implicitly configured by the Camel K operator, which automatically mounts the secret into the integration Pod.
Kamelet versioning
Kamelets provided in a catalog are generally meant to work with a given runtime version (the same one for which they are released). However, when you create a Kamelet and publish it to a cluster, you may want to store and use different versions. If a Kamelet is provided with more than the main version, you can specify which version to use in your Integration by adding the kameletVersion parameter. For instance:
- from:
    uri: "kamelet:telegram-text-source?kameletVersion=v2"
    steps:
      - to: "log:info"
The operator will automatically pick the right version and use it at runtime. If no version is specified, the default one is used.
Binding Kamelets in Pipes
In some contexts (for example "serverless") users often want to leverage the power of Apache Camel to be able to connect to various sources/sinks, without doing additional processing (such as transformations or other enterprise integration patterns).
A common use case is that of Knative event sources, for which the Apache Camel developers provide the concepts of Kamelets and Pipes. Kamelets represent an evolution of Camel route templates, providing opinionated and easy-to-use connectors to various components and services. Kamelets allow a declarative style of binding sources and sinks, where data produced by a source is optionally transformed by action steps and pushed to a given sink, via a resource named Pipe.
Binding to Knative
A Pipe allows you to move data from a system described by a Kamelet towards a Knative destination, or from a Knative channel/broker to another external system described by a Kamelet. This means Pipes may act as event sources and sinks for the Knative eventing broker in a declarative way.
For example, here is a Pipe that connects a Kamelet Telegram source to the Knative broker:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-to-knative
spec:
  source: (1)
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink: (2)
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
1 | Reference to the source that provides data |
2 | Reference to the sink where data should be sent to |
This binding takes the telegram-text-source Kamelet, configures it using specific properties ("botToken"), and makes sure that messages produced by the Kamelet are forwarded to the Knative Broker named "default".
Note that source and sink are specified as standard Kubernetes object references in a declarative way.
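Since a Pipe is a standard Kubernetes resource, the binding above can be created and inspected with kubectl (assuming it is saved as telegram-to-knative.yaml):
kubectl apply -f telegram-to-knative.yaml
kubectl get pipes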
Knative eventing uses the CloudEvents data format by default. You may want to set some properties that specify the event attributes such as the event type.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-to-knative
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink:
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
    properties:
      type: org.apache.camel.telegram.events (1)
1 | Sets the event type attribute of the CloudEvent produced by this Pipe |
This way you may specify event attributes before publishing to the Knative broker. Note that Camel uses the default CloudEvents event type org.apache.camel.event for events produced by Camel.
You can overwrite CloudEvent event attributes on the sink using the ce.overwrite. prefix when setting a property.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-to-knative
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink:
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
    properties:
      type: org.apache.camel.telegram.events
      ce.overwrite.ce-source: my-source (1)
1 | Use "ce.overwrite.ce-source" to explicitly set the CloudEvents source attribute. |
The example shows how we can reference the "telegram-text-source" resource in a Pipe. It’s contained in the source section because it’s a Kamelet of type "source". A Kamelet of type "sink", by contrast, can only be used in the sink section of a Pipe.
Under the covers, a Pipe creates an Integration resource that implements the binding, but all the details of how to connect with Telegram and how to forward the data to the Knative broker are fully transparent to the end user. For instance, the Integration uses a SinkBinding concept under the covers in order to retrieve the Knative broker endpoint URL.
In the same way you can also connect a Kamelet source to a Knative channel.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-to-knative-channel
spec:
  source: (1)
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink: (2)
    ref:
      kind: InMemoryChannel
      apiVersion: messaging.knative.dev/v1
      name: messages
1 | Reference to the source that provides data |
2 | Reference to the Knative channel that acts as the sink where data should be sent to |
When reading data from Knative, you just need to specify, for instance, the Knative broker as the source of the Pipe. Events consumed from the Knative event stream will be pushed to the given sink of the Pipe.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: knative-to-slack
spec:
  source: (1)
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
    properties:
      type: org.apache.camel.event.messages
  sink: (2)
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: slack-sink
    properties:
      channel: "#my-channel"
      webhookUrl: the-webhook-url
1 | Reference to the Knative broker source that provides data |
2 | Reference to the sink where data should be sent to |
Once again, the Pipe provides a declarative way of creating event sources and sinks for Knative eventing. In the example, all events of type org.apache.camel.event.messages get forwarded to the given Slack channel using the Webhook API.
When consuming events from the Knative broker, you most likely need to filter and select the events to process. You can do that with the properties set on the Knative broker source reference, for instance filtering by the event type as shown in the example. The filter possibilities include CloudEvent attributes such as the event type, source, subject and extensions.
In the background Camel K will automatically create a Knative Trigger resource for the Pipe that uses the filter attributes accordingly.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: camel-event-messages
spec:
  broker: default (1)
  filter:
    attributes:
      type: org.apache.camel.event.messages
      myextension: my-extension-value
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1 (2)
      kind: Service
      name: camel-service
    uri: /events/camel.event.messages
1 | Reference to the Knative broker source that provides data |
2 | Reference to the Camel K integration/pipe service |
The trigger calls the Camel K integration service endpoint URL and pushes events with the given filter attributes to the Pipe. All properties that you have set on the Knative broker source reference will be set as filter attributes on the trigger resource (except for reserved properties such as name and cloudEventsType).
Note that Camel K creates the trigger resource only for Knative broker type event sources. In case you reference a Knative channel as a source in a Pipe, Camel K assumes that the channel and the trigger are already present. Camel K will only create the subscription for the integration service on the channel.
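For orientation, the subscription created for a channel source is conceptually similar to the following Knative resource (a hand-written sketch; the actual resource names are generated by the operator):
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: camel-channel-subscription
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: messages
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: camel-service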
Binding to a Kafka Topic
The example seen in the previous paragraph can also be configured to push data to a Strimzi Kafka topic (Kamelets can also be configured to pull data from topics).
To do so, you need to:
- Install Strimzi on your cluster
- Create a Strimzi Kafka cluster using a plain listener and no authentication
- Create a Strimzi KafkaTopic named my-topic

Refer to the Strimzi documentation for instructions on how to do that.
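For orientation, a minimal KafkaTopic resource could look like the following sketch (it assumes a Strimzi cluster named my-cluster; check the Strimzi documentation for the exact schema and apiVersion):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1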
The following binding can be created to push data into the my-topic topic:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-text-source-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink:
    ref: (1)
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
1 | Kubernetes reference to a Strimzi KafkaTopic |
After creating it, messages will flow from Telegram to Kafka.
Binding to an explicit URI
An alternative way to use a Pipe is to configure the source/sink to be an explicit Camel URI. For example, the following binding is allowed:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: telegram-text-source-to-channel
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: telegram-text-source
    properties:
      botToken: the-token-here
  sink:
    uri: https://mycompany.com/the-service (1)
1 | Pipe with an explicit URI as sink |
This Pipe explicitly defines a URI where data is going to be pushed.
The uri option is also conventionally used in Knative to specify a non-Kubernetes destination. To comply with the Knative specifications, in case an "http" or "https" URI is used, Camel will send CloudEvents to the destination.
Binding with data types
When referencing Kamelets in a binding, users may choose from one of the supported input/output data types provided by the Kamelet. The supported data types are declared on the Kamelet itself and give additional information about the used header names, content type and content schema.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: my-sample-source-to-log
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: my-sample-source
    data-types: (1)
      out:
        format: text-plain (2)
  sink:
    uri: "log:info"
1 | Specify the output data type on the referenced Kamelet source. |
2 | Select text-plain as an output data type of the my-sample-source Kamelet. |
The very same Kamelet my-sample-source may also provide a CloudEvents specific data type as an output, which fits perfectly for binding to a Knative broker.
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: my-sample-source-to-knative
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: my-sample-source
    data-types:
      out:
        format: application-cloud-events (1)
  sink:
    ref:
      kind: Broker
      apiVersion: eventing.knative.dev/v1
      name: default
1 | Select application-cloud-events as an output data type of the my-sample-source Kamelet. |
Information about the supported data types can be found on the Kamelet itself.
apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: my-sample-source
  labels:
    camel.apache.org/kamelet.type: "source"
spec:
  definition:
    # ...
  dataTypes:
    out: (1)
      default: text-plain (2)
      types: (3)
        text-plain:
          description: Output type as plain text.
          mediaType: text/plain
        application-cloud-events:
          description: CloudEvents specific representation of the Kamelet output.
          mediaType: application/cloudevents+json
          schema: (4)
            # ...
          dependencies: (5)
            - "camel:cloudevents"
  template:
    from:
      uri: ...
      steps:
        - to: "kamelet:sink"
1 | Declared output data types of this Kamelet source |
2 | The output data type used by default |
3 | List of supported output types |
4 | Optional JSON-schema describing the application/cloudevents+json data type |
5 | Optional list of additional dependencies that are required by the data type. |
This way users may choose the best Kamelet data type for a specific use case when referencing Kamelets in a binding.
Error Handling
You can configure an error handler in order to specify what to do when some event ends up in failure. See the Pipes Error Handler User Guide for more detail.
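For orientation, a Pipe-level error handler may look like the following sketch, which assumes the log error handler described in that guide (refer to it for the authoritative options):
spec:
  # ...
  errorHandler:
    log:
      parameters:
        maximumRedeliveries: 3
        redeliveryDelay: 2000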
Trait via annotations
You can easily tune your Pipe with trait configuration by adding .metadata.annotations. Let’s have a look at the following example:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timer-2-log-annotation
  annotations: (1)
    trait.camel.apache.org/logging.level: DEBUG
    trait.camel.apache.org/logging.color: "false"
spec:
  source:
    uri: timer:foo
  sink:
    uri: log:bar
1 | Include .metadata.annotations to specify the list of traits we want to configure |
In this example, we’ve set the logging trait to specify certain configuration we want to apply. You can do the same with all the traits available, just by setting trait.camel.apache.org/trait-name.trait-property with the expected value.
If you need to specify an array of values, the syntax is trait.camel.apache.org/trait.conf: "[\"opt1\", \"opt2\", …]"
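For example, assuming the camel trait’s properties option (which takes an array of key=value entries), such an annotation could look like:
trait.camel.apache.org/camel.properties: "[\"my.key1=val1\", \"my.key2=val2\"]"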
Troubleshooting
A Kamelet is translated into a Route used by the Integration. In order to troubleshoot any possible issue, you can have a look at the dedicated troubleshooting section.
Kamelet Specification
We’re now going to describe the various parts of the Kamelet in more detail.
Metadata
The metadata section contains important information related to the Kamelet as a Kubernetes resource.
name | Description | Type | Example |
---|---|---|---|
name | ID of the Kamelet, used to refer to the Kamelet in external routes | string | E.g. telegram-text-source |
namespace | The Kubernetes namespace where the resource is installed | string | |
The following annotations and labels are also defined on the resource:
name | Description | Type | Example |
---|---|---|---|
camel.apache.org/kamelet.icon | An optional icon for the Kamelet in URI data format | string | E.g. "data:image/svg+xml;base64,PD94bW..." |
trait.camel.apache.org/trait-name.trait-property | An optional configuration setting for a trait | string | E.g. trait.camel.apache.org/logging.level: DEBUG |
name | Description | Type | Example |
---|---|---|---|
camel.apache.org/kamelet.type | Indicates if the Kamelet can be used as source, action or sink. | enum: source, action, sink | E.g. source |
Definition
The definition part of a Kamelet contains a valid JSON-schema document describing general information about the Kamelet and all defined parameters.
name | Description | Type | Example |
---|---|---|---|
title | Display name of the Kamelet | string | E.g. "Telegram Text Source" |
description | A markdown description of the Kamelet | string | E.g. "Receive all text messages that people send to your telegram bot." |
required | List of required parameters (complies with JSON-schema spec) | array: string | |
properties | Map of properties that can be configured on the Kamelet | map: string → schema | |
Each property defined in the Kamelet has its own schema (normally a flat schema, containing only 1 level of properties). The following table lists some common fields allowed for each property.
name | Description | Type | Example |
---|---|---|---|
title | Display name of the property | string | E.g. "Token" |
description | Simple text description of the property | string | E.g. "The token to access your bot on Telegram" |
type | JSON-schema type of the property | string | E.g. string |
x-descriptors | Specific aids for the visual tools | array: string | E.g. urn:alm:descriptor:com.tectonic.ui:password |
Data shapes
Kamelets are designed to be plugged as sources or sinks into more general routes, so they can accept data as input and/or produce their own data. To help visual tools and applications understand how to interact with the Kamelet, the specification of a Kamelet also includes information about the type of data that it manages.
# ...
spec:
  # ...
  dataTypes:
    out: (1)
      default: json
      types:
        json: (2)
          mediaType: application/json
          schema: (3)
            properties:
              # ...
1 | Defines the type of the output |
2 | Name of the data type |
3 | Optional JSON-schema definition of the output |
Data shapes can be indicated for the following channels:
- in: the input of the Kamelet, in case the Kamelet is of type sink
- out: the output of the Kamelet, for both source and sink Kamelets
- error: an optional error data shape, for both source and sink Kamelets
Data shapes contain the following information:
name | Description | Type | Example |
---|---|---|---|
scheme | A specific component scheme that is used to identify the data shape | string | |
format | The data shape name used to identify and reference the data type in a Pipe when choosing from multiple data type options | string | E.g. json |
mediaType | The media type of the data | string | E.g. application/json |
headers | Optional map of message headers that get set with the data shape, where the map keys represent the header name and the value defines the header type information | map | |
dependencies | Optional list of additional dependencies that are required for this data type (e.g. Json marshal/unmarshal libraries) | array: string | E.g. camel:cloudevents |
schema | An optional JSON-schema definition for the data | object | |
Flow
Each Kamelet contains a YAML-based Camel DSL that provides the actual implementation of the connector.
For example:
spec:
  # ...
  template:
    from:
      uri: telegram:bots
      parameters:
        authorizationToken: "#property:botToken"
      steps:
        - convert-body-to:
            type: "java.lang.String"
            charset: "UTF8"
        - filter:
            simple: "${body} != null"
        - log: "${body}"
        - to: "kamelet:sink"
Source and sink flows will connect to the outside route via the kamelet:source or kamelet:sink special endpoints:
- A source Kamelet must contain a call to kamelet:sink
- A sink Kamelet must start from kamelet:source

The kamelet:source and kamelet:sink endpoints are special endpoints that are only available in Kamelet route templates and will be replaced with actual references at runtime.
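For illustration, a minimal sink Kamelet could look like the following sketch; it reuses the my-company-log-sink name from the earlier examples and simply logs incoming messages together with the configured bucket:
apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: my-company-log-sink
  labels:
    camel.apache.org/kamelet.type: "sink"
spec:
  definition:
    title: "My Company Log Sink"
    required:
      - bucket
    properties:
      bucket:
        title: Bucket
        description: The name of the bucket to write logs to
        type: string
  template:
    from:
      uri: kamelet:source
      steps:
        - log: "bucket={{bucket}} body=${body}"
Note how the template starts from kamelet:source: messages sent to the Kamelet by the outer route enter there.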
Kamelets contain a single route template written in YAML DSL, as in the previous example. Kamelets, however, can also contain additional sources in the spec → sources field. Those sources can be of any kind (not necessarily route templates) and will be added once to all the integrations where the Kamelet is used. Their main role is to do advanced configuration of the integration context where the Kamelet is used, such as registering beans in the registry or adding customizers.
KEDA enabled Kamelets
Some Kamelets are enhanced with KEDA metadata to allow users to automatically configure autoscalers on them. Kamelets with KEDA features can be distinguished by the presence of the annotation camel.apache.org/keda.type, which is set to the name of a specific KEDA autoscaler.
A KEDA enabled Kamelet can be used in the same way as any other Kamelet, in a binding or in an integration. KEDA autoscalers are not enabled by default: they need to be manually enabled by the user via the keda trait.
In a Pipe, the KEDA trait can be enabled using annotations:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: my-keda-binding
  annotations:
    trait.camel.apache.org/keda.enabled: "true"
spec:
  source:
    # ...
  sink:
    # ...
In an integration, it can be enabled using kamel run args, for example:
kamel run my-keda-integration.yaml -t keda.enabled=true
Make sure that my-keda-integration uses at least one KEDA enabled Kamelet, otherwise enabling KEDA (without other options) will have no effect.
For information on how to create KEDA enabled Kamelets, see the KEDA section in the development guide.