Camel Kubernetes plugin
This plugin helps you get started with running Camel applications on Kubernetes. Make sure to meet these prerequisites for running Camel integrations on Kubernetes:
- Install the Kubernetes command line tooling (kubectl)
- Connect to a running Kubernetes cluster where you want to run the Camel integration
You can connect to a remote Kubernetes cluster or set up a local cluster. To set up a local Kubernetes cluster, you have a variety of options.
Camel JBang is able to interact with any of these Kubernetes platforms (remote or local).
Running Camel routes on Kubernetes is quite simple with Camel JBang. In fact, you can develop and test your Camel route locally with Camel JBang and then promote the same source to run as an integration on Kubernetes.
The Camel JBang Kubernetes functionality is provided as a command plugin. This means you need to enable the kubernetes plugin first to use the subcommands in Camel JBang.
camel plugin add kubernetes
You should see the kubernetes plugin listed as an installed plugin.
camel plugin get
NAME COMMAND DEPENDENCY DESCRIPTION
kubernetes kubernetes org.apache.camel:camel-jbang-plugin-kubernetes Run Camel applications on Kubernetes
Now Camel JBang is able to run the subcommands offered by the plugin. You can inspect the help page to see the list of available plugin subcommands.
camel kubernetes --help
Kubernetes export
The Kubernetes plugin works with the Camel JBang export functionality. The project export generates a proper Maven/Gradle project following one of the available runtime types: Quarkus, Spring Boot or camel-main.
When you export the project with the Kubernetes plugin, the exported project holds all the information (e.g. sources, properties, dependencies) and is ready to build, push and deploy the application to Kubernetes, too. The export generates a Kubernetes manifest (kubernetes.yml) that holds all resources (e.g. Deployment, Service, ConfigMap) required to run the application on Kubernetes.
You can create a project export with the following command.
camel kubernetes export route.yaml --dir some/path/to/project
The command receives one or more source files (e.g. Camel routes) and performs the export. As a result you will find the Maven/Gradle project sources generated into the given project path.
The default runtime of the project is Quarkus. You can adjust the runtime with an additional command option such as --runtime=quarkus.
If you want to run this application on Kubernetes you need to build the container image, push it to a registry and deploy the application to Kubernetes.
The Camel JBang Kubernetes plugin provides a run command that combines these steps (export, container image build, push, deploy) into a single command.
You can now navigate to the generated project folder and build the project artifacts for instance with this Maven command.
./mvnw package -Dquarkus.container-image.build=true
According to the runtime type (e.g. quarkus) defined for the export, this builds and creates a Quarkus application artifact JAR in the Maven build output folder (e.g. target/route-1.0-SNAPSHOT.jar).
The option -Dquarkus.container-image.build=true also builds a container image that is ready for deployment to Kubernetes. More precisely, the exported project uses the very same tooling and options as any plain Quarkus/Spring Boot application would. This means you can easily customize the container image and all settings provided by the runtime provider (e.g. Quarkus or Spring Boot) after the export.
The Kubernetes deployment resources are automatically generated with the export, too.
You can find the Kubernetes manifest in src/main/kubernetes/kubernetes.yml.
For instance, the option -Dquarkus.kubernetes.deploy=true uses this manifest to trigger the Kubernetes deployment as part of the Maven build.
./mvnw package -Dquarkus.kubernetes.deploy=true
You will see the Deployment on Kubernetes shortly after this command has finished.
The Camel JBang Kubernetes export command provides several options to customize the exported project.
Option | Description |
---|---|
--trait-profile | The trait profile to use for the deployment. |
--service-account | The service account used to run the application. |
--dependency | Adds a dependency that should be included; use the "camel:" prefix for a Camel component, "mvn:org.my:app:1.0" for a Maven dependency. |
--build-property | Maven/Gradle build properties (syntax: --build-property=prop1=foo). |
--property | Add a runtime property or properties file from a path, a config map or a secret (syntax: [my-key=my-value,file:/path/to/my-conf.properties,[configmap,secret]:name]). |
--config | Add a runtime configuration from a ConfigMap or a Secret (syntax: [configmap,secret]:name[/key], where name represents the configmap/secret name and key optionally represents the configmap/secret key to be filtered). |
--resource | Add a runtime resource from a ConfigMap or a Secret (syntax: [configmap,secret]:name[/key][@path], where name represents the configmap/secret name, key optionally represents the configmap/secret key to be filtered and path represents the destination path). |
--open-api | Add an OpenAPI spec (syntax: [configmap,file]:name). |
--env | Set an environment variable in the integration container, for instance "-e MY_VAR=my-value". |
--volume | Mount a volume into the integration container, for instance "-v pvcname:/container/path". |
--connect | A Service that the integration should bind to, specified as [[apigroup/]version:]kind:[namespace/]name. |
--source | Add a source file to your integration; this is added to the list of files given as arguments of the command. |
--annotation | Add an annotation to the integration. Use name=value pairs like "--annotation my.company=hello". |
--label | Add a label to the integration. Use name=value pairs like "--label my.company=hello". |
--trait | Add a trait configuration to the integration. Use name=value pairs like "--trait trait.name.config=hello". |
--image | An image built externally (for instance via CI/CD). Enabling it will skip the integration build phase. |
--image-registry | The image registry to hold the app container image. |
--image-group | The image registry group used to push images to. |
--image-builder | The image builder used to build the container image (e.g. docker, jib, podman, s2i). |
--cluster-type | The target cluster type. Special configurations may be applied to different cluster types such as Kind or Minikube. |
--profile | The developer profile, used to pick a specific configuration file following the naming style application-<profile>.properties. |
The Kubernetes plugin export command also inherits all options from the standard Camel JBang export command.
See the possible options by running camel kubernetes export --help for more details.
Kubernetes manifest options
The Kubernetes manifest (kubernetes.yml) describes all resources to successfully run the application on Kubernetes. The manifest usually holds the deployment, a service definition, config maps and much more.
You can use several options on the export command to customize this manifest with traits. The trait concept originated in Camel K, where the Camel K operator uses traits to configure the Kubernetes resources managed by an integration. You can use the same options to customize the Kubernetes manifest that is generated as part of the project export.
The trait configuration is applied in the following order of precedence:
1. The --trait command option values
2. Any annotation starting with the prefix trait.camel.apache.org/*
3. Any properties with the prefix camel.jbang.trait.* from the profile-specific configuration application-<profile>.properties, for the profile defined by the command option --profile
4. Any properties with the prefix camel.jbang.trait.* from the default configuration application.properties
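The precedence rules above can be sketched as a simple merge, where earlier sources win. The function and the sample values are illustrative, not part of the actual tooling:

```python
TRAIT_PROP_PREFIX = "camel.jbang.trait."
TRAIT_ANNOTATION_PREFIX = "trait.camel.apache.org/"

def resolve_trait_config(cli_traits, annotations, profile_props, default_props):
    """Merge trait settings from the four sources; later updates overwrite
    earlier ones, so we apply them from lowest to highest precedence."""
    resolved = {}
    for key, value in default_props.items():
        if key.startswith(TRAIT_PROP_PREFIX):
            resolved[key[len(TRAIT_PROP_PREFIX):]] = value
    for key, value in profile_props.items():
        if key.startswith(TRAIT_PROP_PREFIX):
            resolved[key[len(TRAIT_PROP_PREFIX):]] = value
    for key, value in annotations.items():
        if key.startswith(TRAIT_ANNOTATION_PREFIX):
            resolved[key[len(TRAIT_ANNOTATION_PREFIX):]] = value
    # --trait command options have the highest precedence.
    resolved.update(cli_traits)
    return resolved

config = resolve_trait_config(
    cli_traits={"container.port": "8088"},
    annotations={"trait.camel.apache.org/container.name": "my-container"},
    profile_props={"camel.jbang.trait.container.port": "9090"},
    default_props={"camel.jbang.trait.container.image-pull-policy": "IfNotPresent"},
)
# The CLI option wins over the profile property for container.port.
```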
Container trait options
The container specification is part of the Kubernetes Deployment resource and describes the application container image, exposed ports and health probes for example.
The container trait is able to customize the container specification with the following options:
Property | Type | Description |
---|---|---|
container.port | int | To configure a different port exposed by the container (default 8080). |
container.port-name | string | To configure a different port name for the port exposed by the container. It defaults to http. |
container.service-port | int | To configure under which service port the container port is to be exposed (default 80). |
container.service-port-name | string | To configure under which service port name the container port is to be exposed (default http). |
container.name | string | The application container name. |
container.image | string | The application container image to use for the Deployment. |
container.image-pull-policy | string | The pull policy: Always, Never or IfNotPresent. |
container.request-cpu | string | The minimum amount of CPU required. |
container.request-memory | string | The minimum amount of memory required. |
container.limit-cpu | string | The maximum amount of CPU allowed. |
container.limit-memory | string | The maximum amount of memory allowed. |
The syntax to specify container trait options is as follows:
camel kubernetes export Sample.java --trait container.[key]=[value]
You may specify these options with the export command to customize the container specification.
camel kubernetes export Sample.java --trait container.name=my-container --trait container.port=8088 --trait container.imagePullPolicy=IfNotPresent --trait container.request-cpu=0.005 --trait container.request-memory=100Mi
This results in the following container specification in the Deployment resource.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
camel.apache.org/integration: sample
name: sample
spec:
selector:
matchLabels:
camel.apache.org/integration: sample
template:
metadata:
labels:
camel.apache.org/integration: sample
spec:
containers:
- image: quay.io/sample:1.0-SNAPSHOT (1)
imagePullPolicy: IfNotPresent (2)
name: my-container (3)
ports:
- containerPort: 8088 (4)
name: http
protocol: TCP
resources:
requests:
memory: 100Mi
cpu: '0.005'
1 | Container image running the application |
2 | Customized image pull policy |
3 | Custom container name |
4 | Custom container port exposed |
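As a rough illustration, the mapping from container trait options to the generated container specification can be sketched like this. This is a simplified model of the behavior shown above, not the plugin's actual code:

```python
def container_spec(image, traits):
    """Build a minimal Deployment container entry from container.* trait options."""
    spec = {
        "image": image,
        "name": traits.get("container.name", "sample"),
        "ports": [{
            "containerPort": int(traits.get("container.port", 8080)),
            "name": traits.get("container.port-name", "http"),
            "protocol": "TCP",
        }],
    }
    if "container.image-pull-policy" in traits:
        spec["imagePullPolicy"] = traits["container.image-pull-policy"]
    # Resource requests are only emitted when at least one value is set.
    requests = {}
    if "container.request-cpu" in traits:
        requests["cpu"] = traits["container.request-cpu"]
    if "container.request-memory" in traits:
        requests["memory"] = traits["container.request-memory"]
    if requests:
        spec["resources"] = {"requests": requests}
    return spec

spec = container_spec("quay.io/sample:1.0-SNAPSHOT", {
    "container.name": "my-container",
    "container.port": "8088",
    "container.image-pull-policy": "IfNotPresent",
    "container.request-cpu": "0.005",
    "container.request-memory": "100Mi",
})
```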
Labels and annotations
You may need to add labels or annotations to the generated Kubernetes resources. By default, the generated resources will have the label camel.apache.org/integration set to the exported project name.
You can add labels and annotations with these options on the export command:
camel kubernetes export Sample.java --annotation [key]=[value] --label [key]=[value]
camel kubernetes export Sample.java --annotation project.team=camel-experts
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
project.team: camel-experts (1)
labels:
camel.apache.org/integration: sample
name: sample
spec:
selector:
matchLabels:
camel.apache.org/integration: sample
template:
metadata:
labels:
camel.apache.org/integration: sample
spec:
containers:
- image: quay.io/sample:1.0-SNAPSHOT
name: sample
1 | Custom deployment annotation |
Environment variables
The environment trait sets environment variables on the container specification.
The environment trait provides the following configuration options:
Property | Type | Description |
---|---|---|
environment.vars | []string | A list of environment variables to be added to the integration container. The syntax is KEY=VALUE, e.g. MY_VAR=my-value. |
The syntax to specify environment trait options is as follows:
camel kubernetes export Sample.java --trait environment.[key]=[value]
There is also a shortcut option --env that you can use.
camel kubernetes export Sample.java --env [key]=[value]
camel kubernetes export Sample.java --trait environment.vars=MY_ENV=foo --env FOO_ENV=bar
This results in the following container specification in the Deployment resource.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
camel.apache.org/integration: sample
name: sample
spec:
selector:
matchLabels:
camel.apache.org/integration: sample
template:
metadata:
labels:
camel.apache.org/integration: sample
spec:
containers:
- image: quay.io/sample:1.0-SNAPSHOT
name: sample
env: (1)
- name: MY_ENV
value: foo
- name: FOO_ENV
value: bar
1 | Environment variables set in the container specification |
Service trait options
The Service trait enhances the Kubernetes manifest with a Service resource so that the application can be accessed by other components in the same namespace. The service resource exposes the application with a protocol (e.g. TCP/IP) on a given port and uses either the ClusterIP, NodePort or LoadBalancer type.
The Camel JBang plugin automatically inspects the Camel routes for exposed HTTP services and adds the service resource when applicable. This means when one of the Camel routes exposes an HTTP service (for instance by using the platform-http component), the Kubernetes manifest also contains a Kubernetes Service resource in addition to the Deployment.
You can customize the generated Kubernetes service resource with trait options:
Property | Type | Description |
---|---|---|
service.type | string | The type of service to be used, either 'ClusterIP', 'NodePort' or 'LoadBalancer'. |
service.service-port | int | To configure under which service port the container port is to be exposed (default 80). |
service.service-port-name | string | To configure under which service port name the container port is to be exposed (default http). |
Knative service trait options
Knative serving defines a set of resources on Kubernetes to handle Serverless workloads with automatic scaling and scale-to-zero functionality.
When Knative serving is available on the target Kubernetes cluster, you may want to use the Knative service resource instead of a regular Kubernetes service resource. The Knative service trait will create such a resource as part of the Kubernetes manifest.
You need to enable the Knative service trait with the --trait knative-service.enabled=true option. Otherwise the Camel JBang export will always create a regular Kubernetes service resource.
The trait offers the following options for customization:
Property | Type | Description |
---|---|---|
knative-service.enabled | bool | Can be used to enable or disable a trait. All traits share this common property. |
knative-service.annotations | map[string]string | The annotations added to the route. This can be used to set Knative service specific annotations. CLI usage example: -t "knative-service.annotations.'haproxy.router.openshift.io/balance'=true" |
knative-service.class | string | Configures the Knative autoscaling class property (e.g. to set hpa.autoscaling.knative.dev or kpa.autoscaling.knative.dev autoscaling). Refer to the Knative documentation for more information. |
knative-service.autoscaling-metric | string | Configures the Knative autoscaling metric property (e.g. to set concurrency based or cpu based autoscaling). Refer to the Knative documentation for more information. |
knative-service.autoscaling-target | int | Sets the allowed concurrency level or CPU percentage (depending on the autoscaling metric) for each Pod. Refer to the Knative documentation for more information. |
knative-service.min-scale | int | The minimum number of Pods that should be running at any time for the integration. It’s zero by default, meaning that the integration is scaled down to zero when not used for a configured amount of time. Refer to the Knative documentation for more information. |
knative-service.max-scale | int | An upper bound for the number of Pods that can be running in parallel for the integration. Knative has its own cap value that depends on the installation. Refer to the Knative documentation for more information. |
knative-service.rollout-duration | string | Enables gradually shifting traffic to the latest Revision and sets the rollout duration. It’s disabled by default and must be expressed as a Golang time.Duration string representation, rounded to second precision. |
knative-service.visibility | string | Setting cluster-local visibility exposes the Knative service only inside the cluster, not externally. Refer to the Knative documentation for more information. |
Connecting to Knative
The previous section described how the exported Apache Camel application can leverage the Knative service resource with auto-scaling as part of the deployment to Kubernetes.
Apache Camel also provides a Knative component that lets you easily interact with Knative eventing and Knative serving.
The Knative component enables you to exchange data with the Knative eventing broker and other Knative services deployed on Kubernetes. The Camel JBang Kubernetes plugin provides some autoconfiguration options when connecting with the Knative component. The export command assists you in configuring both the Knative component and the Kubernetes manifest for connecting to Knative resources on the Kubernetes cluster.
Knative trigger
The concept of a Knative trigger allows you to consume events from the Knative eventing broker. In case your Camel route uses the Knative component as a consumer you may need to create a trigger in Kubernetes in order to connect your Camel application with the Knative broker.
The Camel JBang Kubernetes plugin is able to automatically create this trigger for you.
The following Camel route uses the Knative event component and references a Knative broker by its name. The plugin inspects the code and automatically generates the Knative trigger as part of the Kubernetes manifest that is used to run the Camel application on Kubernetes.
- from:
uri: knative:event/camel.evt.type?name=my-broker
steps:
- to: log:info
The route consumes Knative events of type camel.evt.type. If you export this route with the Camel JBang Kubernetes plugin, you will see a Knative trigger being generated as part of the Kubernetes manifest (kubernetes.yml).
camel kubernetes export knative-route.yaml
The generated export project can be deployed to Kubernetes and as part of the deployment the trigger is automatically created so the application can start consuming events.
The generated trigger looks as follows:
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: my-broker-knative-route-camel-evt-type
spec:
broker: my-broker
filter:
attributes:
type: camel.evt.type
subscriber:
ref:
apiVersion: v1
kind: Service
name: knative-route
uri: /events/camel-evt-type
The trigger uses a default filter on the event type CloudEvents attribute and calls the Camel application via the exposed Kubernetes service resource.
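The naming scheme visible above (broker name, application name, and the event type with dots replaced by dashes) can be sketched as follows. The helper functions are illustrative, not the plugin's actual code:

```python
def sanitize(value):
    """Turn an event type like 'camel.evt.type' into a resource-safe token."""
    return value.replace(".", "-").lower()

def trigger_name(broker, app, event_type):
    """Compose the Trigger resource name from broker, app and event type."""
    return f"{broker}-{app}-{sanitize(event_type)}"

def subscriber_path(event_type):
    """Compose the subscriber URI path the trigger calls on the service."""
    return f"/events/{sanitize(event_type)}"

name = trigger_name("my-broker", "knative-route", "camel.evt.type")
path = subscriber_path("camel.evt.type")
# name == "my-broker-knative-route-camel-evt-type"
# path == "/events/camel-evt-type"
```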
The Camel application is automatically configured to expose an HTTP service so incoming events are handed over to the Camel route. You can review the Knative service resource configuration that makes Camel configure the Knative component. The configuration has been automatically created in src/main/resources/knative.json in the exported project.
Here is an example of the generated knative.json file:
{
"resources" : [ {
"name" : "camel-evt-type",
"type" : "event",
"endpointKind" : "source",
"path" : "/events/camel-evt-type",
"objectApiVersion" : "eventing.knative.dev/v1",
"objectKind" : "Broker",
"objectName" : "my-broker",
"reply" : false
} ]
}
The exported project has everything configured to run the application on Kubernetes. Of course, you need Knative eventing installed on your target cluster, and you need to have a Knative broker named my-broker available in the target namespace.
Now you can just deploy the application using the Kubernetes manifest and see the Camel route consuming events from the broker.
Knative channel subscription
Knative channels represent another form of producing and consuming events from the Knative broker. Instead of using a trigger you can create a subscription for a Knative channel to consume events.
The Camel route that connects to a Knative channel in order to receive events looks like this:
- from:
uri: knative:channel/my-channel
steps:
- to: log:info
The Knative channel is referenced by its name. The Camel JBang Kubernetes plugin will inspect your code to automatically create a channel subscription as part of the Kubernetes manifest. You just need to export the Camel route as usual.
camel kubernetes export knative-route.yaml
The code inspection recognizes the Knative component that references the Knative channel and the subscription automatically becomes part of the exported Kubernetes manifest.
Here is an example subscription that has been generated during the export:
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
name: my-channel-knative-route
spec:
channel:
apiVersion: messaging.knative.dev/v1
kind: Channel
name: my-channel
subscriber:
ref:
apiVersion: v1
kind: Service
name: knative-route
uri: /channels/my-channel
The subscription connects the Camel application with the channel so each event on the channel is sent to the Kubernetes service resource that also has been created as part of the Kubernetes manifest.
The Camel Knative component uses a service resource configuration internally to create the proper HTTP service. You can review the Knative service resource configuration that makes Camel configure the Knative component. The configuration has been automatically created in src/main/resources/knative.json in the exported project.
Here is an example of the generated knative.json file:
{
"resources" : [ {
"name" : "my-channel",
"type" : "channel",
"endpointKind" : "source",
"path" : "/channels/my-channel",
"objectApiVersion" : "messaging.knative.dev/v1",
"objectKind" : "Channel",
"objectName" : "my-channel",
"reply" : false
} ]
}
Assuming that you have Knative eventing installed on your cluster and that you have set up the Knative channel my-channel, you can start consuming events right away. The deployment of the exported project uses the Kubernetes manifest to create all required resources including the Knative subscription.
Knative sink binding
When connecting to a Knative resource (Broker, Channel, Service) in order to produce events for Knative eventing, you probably want to use a SinkBinding that resolves the URL to the Knative resource for you. The sink binding is a Kubernetes resource that makes Knative eventing automatically inject the resource URL into your Camel application on startup. The Knative URL injection uses environment variables (K_SINK, K_CE_OVERRIDES) on your deployment. The Knative eventing operator will automatically resolve the Knative resource (e.g. a Knative broker URL) and inject the value so your application does not need to know the actual URL when deploying.
The Camel JBang Kubernetes plugin leverages the sink binding concept for all routes that use the Knative component as an output.
The following route produces events on a Knative broker:
- from:
uri: timer:tick
steps:
- setBody:
constant: Hello Camel !!!
- to: knative:event/camel.evt.type?name=my-broker
The route produces events of type camel.evt.type and pushes them to the broker named my-broker. At this point the actual Knative broker URL is unknown. The sink binding resolves the URL and injects its value at deployment time using the K_SINK environment variable.
The Camel JBang Kubernetes plugin export inspects such a route and automatically creates the sink binding resource for us. The sink binding is part of the exported Kubernetes manifest and is created on the cluster as part of the deployment.
A sink binding resource created by the export command looks as follows:
camel kubernetes export knative-route.yaml
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
finalizers:
- sinkbindings.sources.knative.dev
name: knative-route
spec:
sink:
ref:
apiVersion: eventing.knative.dev/v1
kind: Broker
name: my-broker
subject:
apiVersion: apps/v1
kind: Deployment
name: knative-route
In addition to creating the sink binding, the Camel JBang plugin also takes care of configuring the Knative Camel component. The Knative component uses a configuration file that you can find in src/main/resources/knative.json. As you can see, the configuration uses the K_SINK injected property placeholder as the broker URL.
{
"resources" : [ {
"name" : "camel-evt-type",
"type" : "event",
"endpointKind" : "sink",
"url" : "{{k.sink}}",
"objectApiVersion" : "eventing.knative.dev/v1",
"objectKind" : "Broker",
"objectName" : "my-broker",
"reply" : false
} ]
}
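Conceptually, the {{k.sink}} placeholder resolves to the K_SINK environment variable that the sink binding injects at deployment time. A minimal sketch of that resolution, for illustration only:

```python
import os

def resolve_sink_url(config_url, env=os.environ):
    """Resolve the {{k.sink}} placeholder against the injected K_SINK variable."""
    if config_url == "{{k.sink}}":
        url = env.get("K_SINK")
        if url is None:
            raise RuntimeError("K_SINK not injected - is the SinkBinding ready?")
        return url
    # Any other URL is taken as-is.
    return config_url

# Simulate the environment the sink binding injects into the Deployment.
url = resolve_sink_url(
    "{{k.sink}}",
    env={"K_SINK": "http://broker-ingress.knative-eventing.svc.cluster.local/default/my-broker"},
)
```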
As soon as the Kubernetes deployment for the exported project has started, the sink binding will inject the K_SINK environment variable so that the Camel application is ready to send events to the Knative broker.
The sink binding concept works for Knative Broker, Channel and Service resources. You just reference the resource by its name in your Camel route when sending data to the Knative component as the output of the route (to("knative:event|channel|endpoint/<resource-name>")).
Mount trait options
The mount trait is able to configure volume mounts on the Deployment resource in order to inject data from Kubernetes resources such as config maps or secrets.
There are also shortcut options like --volume, --config and --resource for the mount trait. These options are described in more detail in the next section. For now, let’s have a look at the pure mount trait configuration options.
The mount trait provides the following configuration options:
Property | Type | Description |
---|---|---|
mount.configs | []string | A list of configurations pointing to a configmap/secret. The configurations are expected to be UTF-8 resources, as they are processed by the runtime Camel context and parsed as property files. They are also made available on the classpath to ease their usage directly from the route. Syntax: [configmap|secret]:name[/key], where name represents the resource name and key optionally represents the resource key to be filtered. |
mount.resources | []string | A list of resources (text or binary content) pointing to a configmap/secret. The resources are expected to be any resource type (text or binary content). The destination path can be either a default location or any path specified by the user. Syntax: [configmap|secret]:name[/key][@path], where name represents the resource name, key optionally represents the resource key to be filtered and path represents the destination path. |
mount.volumes | []string | A list of Persistent Volume Claims to be mounted. Syntax: [pvcname:/container/path]. |
The syntax to specify mount trait options is as follows:
camel kubernetes export Sample.java --trait mount.[key]=[value]
camel kubernetes export Sample.java --trait mount.configs=configmap:my-data --trait mount.volumes=my-pvc:/container/path
This results in the following container specification in the Deployment resource.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
camel.apache.org/integration: sample
name: sample
spec:
selector:
matchLabels:
camel.apache.org/integration: sample
template:
metadata:
labels:
camel.apache.org/integration: sample
spec:
containers:
- image: quay.io/sample:1.0-SNAPSHOT
name: sample
volumeMounts:
- mountPath: /etc/camel/conf.d/_configmaps/my-data (1)
name: my-data
readOnly: true
- mountPath: /container/path (2)
name: my-pvc
readOnly: false
volumes:
- name: my-data (3)
configMap:
name: my-data
- name: my-pvc (4)
persistentVolumeClaim:
claimName: my-pvc
1 | The config map my-data mounted into the container with default mount path for configurations |
2 | The volume mounted into the container with given path |
3 | The config map reference as volume spec |
4 | The persistent volume claim my-pvc |
ConfigMaps, volumes and secrets
In the previous section we have seen how to mount volumes, configs, resources into the container.
The Kubernetes export command provides some shortcut options for adding configmaps and secrets as volume mounts. The syntax is as follows:
camel kubernetes export Sample.java --config [key]=[value] --resource [key]=[value] --volume [key]=[value]
The options expect the following syntax:
Option | Syntax |
---|---|
--config | Add a runtime configuration from a ConfigMap or a Secret (syntax: [configmap|secret]:name[/key], where name represents the configmap or secret name and key optionally represents the configmap or secret key to be filtered). |
--resource | Add a runtime resource from a ConfigMap or a Secret (syntax: [configmap|secret]:name[/key][@path], where name represents the configmap or secret name, key optionally represents the configmap or secret key to be filtered and path represents the destination path). |
--volume | Mount a volume into the integration container, for instance "--volume pvcname:/container/path". |
camel kubernetes export Sample.java --config secret:my-credentials --resource configmap:my-data --volume my-pvc:/container/path
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
camel.apache.org/integration: sample
name: sample
spec:
selector:
matchLabels:
camel.apache.org/integration: sample
template:
metadata:
labels:
camel.apache.org/integration: sample
spec:
containers:
- image: quay.io/sample:1.0-SNAPSHOT
name: sample
volumeMounts:
- mountPath: /etc/camel/conf.d/_secrets/my-credentials
name: my-credentials (1)
readOnly: true
- mountPath: /etc/camel/resources.d/_configmaps/my-data
name: my-data (2)
readOnly: true
- mountPath: /container/path
name: my-pvc (3)
readOnly: false
volumes:
- name: my-credentials (4)
secret:
secretName: my-credentials
- name: my-data (5)
configMap:
name: my-data
- name: my-pvc (6)
persistentVolumeClaim:
claimName: my-pvc
1 | The secret configuration volume mount |
2 | The config map resource volume mount |
3 | The volume mount |
4 | The secret configuration volume |
5 | The config map resource volume |
6 | The persistent volume claim volume |
The trait volume mounts follow some best practices in specifying the mount paths in the container. Configurations and resources, as well as secrets and configmaps, use different paths in the container. The Camel application is automatically configured to read these paths as resource folders, so you can use the mounted data in the Camel routes via classpath reference, for instance.
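The path conventions described in this section can be sketched as follows. The helpers and the exact path layout are inferred from the generated Deployment examples above, not an official API:

```python
def mount_path(kind, name, is_resource=False):
    """Default container mount path for a configmap/secret: configurations
    go below /etc/camel/conf.d and resources below /etc/camel/resources.d."""
    base = "/etc/camel/resources.d" if is_resource else "/etc/camel/conf.d"
    subdir = {"configmap": "_configmaps", "secret": "_secrets"}[kind]
    return f"{base}/{subdir}/{name}"

def parse_ref(ref):
    """Parse the [configmap|secret]:name[/key] syntax used by --config/--resource."""
    kind, _, rest = ref.partition(":")
    name, _, key = rest.partition("/")
    return kind, name, key or None

kind, name, key = parse_ref("secret:my-credentials")
path = mount_path(kind, name)
# path == "/etc/camel/conf.d/_secrets/my-credentials"
```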
Ingress trait options
The ingress trait enhances the Kubernetes manifest with an Ingress resource to expose the application to the outside world. This requires a Service resource to be present in the Kubernetes manifest.
The Camel JBang plugin automatically creates an Ingress resource when a Service resource is generated by the service trait for the Camel route.
The ingress trait provides the following configuration options:
Property | Type | Description |
---|---|---|
|
| Can be used to enable or disable a trait. All traits share this common property (default |
|
| The annotations added to the ingress. This can be used to set controller specific annotations, e.g., when using the NGINX Ingress controller. |
|
| To configure the host exposed by the ingress. |
|
| To configure the path exposed by the ingress (default |
|
| To configure the path type exposed by the ingress. One of Exact, Prefix, ImplementationSpecific (default to Prefix). |
|
| To automatically add an Ingress Resource whenever the route uses an HTTP endpoint consumer (default |
The syntax to specify ingress trait options is as follows:
camel kubernetes export Sample.java --trait ingress.[key]=[value]
You may specify these options with the export command to customize the Ingress Resource specification.
camel kubernetes export Sample.java --trait ingress.host=example.com --trait ingress.path=/sample(/|$)(.*) --trait ingress.pathType=ImplementationSpecific --trait ingress.annotations=nginx.ingress.kubernetes.io/rewrite-target=/\$2 --trait ingress.annotations=nginx.ingress.kubernetes.io/use-regex=true
This results in the following Ingress resource specification.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations: (1)
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
labels:
app.kubernetes.io/name: sample
name: sample
spec:
ingressClassName: nginx
rules:
- host: example.com
http:
paths:
- backend:
service:
name: route-service
port:
name: http (2)
path: /sample(/|$)(.*) (3)
pathType: ImplementationSpecific (4)
1 | Custom annotations configuration for ingress behavior |
2 | Service port name |
3 | Custom ingress backend path |
4 | Custom ingress backend path type |
Route trait options
The Route trait enhances the Kubernetes manifest with a Route resource to expose the application to the outside world. This requires a Service resource to be present in the Kubernetes manifest.
You need to enable the OpenShift profile with the --trait-profile=openshift option.
The Camel JBang plugin automatically creates a Route resource when a Service resource is generated by the service trait for the Camel route.
The Route trait provides the following configuration options:
Property | Type | Description |
---|---|---|
route.enabled | bool | Can be used to enable or disable a trait. All traits share this common property. |
route.annotations | map[string]string | The annotations added to the route. This can be used to set OpenShift route specific annotations. |
route.host | string | To configure the host exposed by the route. |
route.tls-termination | string | The TLS termination type, like edge, passthrough or reencrypt. |
route.tls-certificate | string | The TLS certificate contents or file (use the file: prefix to read from a file, e.g. file:/tmp/tls.crt). |
route.tls-key | string | The TLS certificate key contents or file (file: prefix supported). |
route.tls-ca-certificate | string | The TLS CA certificate contents or file (file: prefix supported). |
route.tls-destination-ca-certificate | string | The destination CA certificate contents or file (file: prefix supported). |
route.tls-insecure-edge-termination-policy | string | To configure how to deal with insecure traffic, e.g. Allow, Disable or Redirect. |
The syntax to specify route trait options is as follows:
camel kubernetes export Sample.java --trait route.[key]=[value]
You may specify these options with the export command to customize the Route Resource specification.
camel kubernetes export Sample.java --trait-profile=openshift --trait route.host=example.com -t route.tls-termination=edge -t route.tls-certificate=file:/tmp/tls.crt -t route.tls-key=file:/tmp/tls.key
This results in the following Route resource in the Kubernetes manifest.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: route-service
spec:
host: example.com (1)
port:
targetPort: http (2)
tls:
certificate: | (3)
...
key: | (4)
...
termination: edge (5)
to: (6)
kind: Service
name: route-service
1 | Custom route host |
2 | Service port name |
3 | Custom route TLS certificate content |
4 | Custom route TLS certificate key content |
5 | Custom route TLS termination |
6 | Service Resource reference |
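The file: references passed to the route trait options expect PEM files on the local machine at export time. For trying out edge termination, a throwaway self-signed certificate can be generated with openssl; the /tmp paths and the example.com common name are placeholders matching the example above:

```shell
# Create a self-signed certificate/key pair to feed into
# -t route.tls-certificate=file:/tmp/tls.crt and -t route.tls-key=file:/tmp/tls.key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=example.com" \
  -keyout /tmp/tls.key -out /tmp/tls.crt
```

Do not use a self-signed certificate like this beyond local experiments; production routes need a certificate issued for the real route host.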
OpenAPI specifications
You can mount OpenAPI specifications into the application container with this trait.
The openapi trait provides the following configuration options:
Property | Type | Description |
---|---|---|
openapi.configmaps | []string | The configmaps holding the spec of the OpenAPI |
The syntax to specify openapi trait options is as follows:
camel kubernetes export Sample.java --trait openapi.[key]=[value]
camel kubernetes export Sample.java --trait openapi.configmaps=configmap:my-spec
There is also a shortcut option --open-api=configmap:my-configmap. |
camel kubernetes export Sample.java --open-api configmap:[name-of-configmap]
This results in the following Deployment resource with the config map mounted into the application container.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
camel.apache.org/integration: sample
name: sample
spec:
selector:
matchLabels:
camel.apache.org/integration: sample
template:
metadata:
labels:
camel.apache.org/integration: sample
spec:
containers:
- image: quay.io/sample:1.0-SNAPSHOT
name: sample
volumeMounts:
- mountPath: /etc/camel/resources.d/_configmaps/my-spec
name: my-spec (1)
readOnly: true
volumes:
- name: my-spec (2)
configMap:
name: my-spec
1 | OpenAPI specification volume mount |
2 | Volume referencing the config map holding the OpenAPI specification |
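The config map referenced by the openapi trait has to exist in the cluster before the deployment starts. A minimal sketch of preparing one follows; the file name petstore.json and the config map name my-spec are assumptions, and the kubectl step requires a reachable cluster:

```shell
# Write a minimal (empty) OpenAPI document to mount into the container.
cat > petstore.json <<'EOF'
{ "openapi": "3.0.0", "info": { "title": "petstore", "version": "1.0" }, "paths": {} }
EOF
# Create the config map the trait will mount (requires cluster access):
# kubectl create configmap my-spec --from-file=petstore.json
```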
Deploy to OpenShift
By default, the Kubernetes manifest is suited for plain Kubernetes platforms. In case you are targeting OpenShift as a platform you may want to leverage special resources such as Route, ImageStream or BuildConfig.
You can set the --cluster-type=openshift option on the export command to tell the Kubernetes plugin to create a Kubernetes manifest specifically suited for OpenShift.
Also, the default image builder for OpenShift clusters is S2I. This means that by setting the cluster type you automatically switch from the default Jib builder to S2I. Of course, you can still tell the plugin to use Jib with the --image-builder=jib option. The image may then get pushed to an external registry (e.g. docker.io or quay.io) so OpenShift can pull it as part of the deployment in the cluster.
When using S2I you may need to explicitly set the --image-group option to the project/namespace name in the OpenShift cluster. This is because S2I will push the container image to an image repository that uses the OpenShift project/namespace name as part of the image coordinates in the registry: image-registry.openshift-image-registry.svc:5000/<project name>/<name>:<tag> |
When using S2I as an image build option the Kubernetes manifest also contains an ImageStream and BuildConfig resource. Both resources are automatically added/removed when creating/deleting the deployment with the Camel Kubernetes JBang plugin.
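To make the --image-group requirement concrete, the internal registry coordinates S2I pushes to are composed as shown below; the project name demo and application name sample are hypothetical:

```shell
# How the internal OpenShift registry image coordinates are composed for S2I.
PROJECT=demo        # the OpenShift project/namespace, i.e. the --image-group value
NAME=sample         # the application name
TAG=1.0-SNAPSHOT    # the application version
echo "image-registry.openshift-image-registry.svc:5000/${PROJECT}/${NAME}:${TAG}"
# → image-registry.openshift-image-registry.svc:5000/demo/sample:1.0-SNAPSHOT
```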
Kubernetes run
The run command combines several steps into one single command. The command performs a project export to a temporary folder, builds the project artifacts as well as the container image, pushes the image to an image registry and finally performs the deployment to Kubernetes using the generated Kubernetes manifest (kubernetes.yml).
camel kubernetes run route.yaml --image-registry=kind
When connecting to a local Kubernetes cluster you may need to specify the image registry where the application container image gets pushed to. The run command is able to automatically configure the local registry when using predefined names such as kind or minikube.
Use the --image-group or the --image option to customize the container image.
camel kubernetes run route.yaml --image-registry=kind --image-group camel-experts
The command above builds and pushes the container image localhost:5001/camel-experts/route:1.0-SNAPSHOT.
camel kubernetes run route.yaml --image quay.io/camel-experts/demo-app:1.0
The --image option forces the container image group, name and version as well as the image registry.
Customize the Kubernetes manifest
The run command provides the same options to customize the Kubernetes manifest as the export command. You may want to add environment variables, mount secrets and configmaps, adjust the exposed service and many other things with trait options as described in the export command section.
Auto reload with --dev option
The --dev option runs the application on Kubernetes and automatically adds a file watcher that listens for changes to the Camel route source files. When the sources change, the process automatically rebuilds and redeploys. The command constantly prints the logs to the output, so you can see the changes being applied directly to the Kubernetes deployment.
camel kubernetes run route.yaml --image-registry=kind --dev
You need to terminate the process to stop the dev mode. This automatically removes the Kubernetes deployment from the cluster on shutdown.
On macOS hosts the file watch mechanism used by the --dev option is known to be much slower and less stable than on other operating systems such as Linux. This is due to limited native file operation support for Java processes on macOS. |
Show logs
To inspect the log output of a running deployment call:
camel kubernetes logs --name=route
The command connects to the running integration Pod on the cluster and streams the log output. Just terminate the process to stop printing the logs.
The --name option should point to a previously exported project (either via the run or the export command).