In this section you can find detailed information about Pepr and how to use it.
npx pepr init
Initialize a new Pepr Module.
Options:
--confirm - Skip verification prompt when creating a new module.
--description <string> - Explain the purpose of the new module.
--name <string> - Set the name of the new module.
--skip-post-init - Skip npm install, git init, and VSCode launch.
--errorBehavior <audit|ignore|reject> - Set an errorBehavior.
npx pepr update
Update the current Pepr Module to the latest SDK version. This command is not recommended for production use; instead, we recommend Renovate or Dependabot for automated updates.
Options:
--skip-template-update - Skip updating the template files
npx pepr dev
Connect a local cluster to a local version of the Pepr Controller to do real-time debugging of your module. Note that npx pepr dev assumes a K3d cluster is running by default. If you are working with Kind or another docker-based K8s distro, you will need to pass the --host host.docker.internal option to npx pepr dev. If working with a remote cluster, you will have to give Pepr a host path to your machine that is reachable from the K8s cluster.
NOTE: This command, by necessity, installs resources into the cluster you run it against. Generally, these resources are removed once the pepr dev session ends, but there are two notable exceptions: the pepr-system namespace and the PeprStore CRD. These can’t be auto-removed because they’re global in scope and doing so would risk wrecking any other Pepr deployments that are already running in-cluster. If (for some strange reason) you’re not pepr dev-ing against an ephemeral dev cluster and need to keep the cluster clean, you’ll have to remove these hold-overs yourself (or not)!
Options:
-h, --host [host] - Host to listen on (default: “host.k3d.internal”)
--confirm - Skip confirmation prompt
npx pepr deploy
Deploy the current module into a Kubernetes cluster, useful for CI systems. Not recommended for production use.
Options:
-i, --image [image] - Override the image tag
--confirm - Skip confirmation prompt
--pullSecret <name> - Deploy imagePullSecret for Controller private registry
--docker-server <server> - Docker server address
--docker-username <username> - Docker registry username
--docker-email <email> - Email for Docker registry
--docker-password <password> - Password for Docker registry
--force - Force deploy the module, override manager field
npx pepr monitor
Monitor Validations for a given Pepr Module or all Pepr Modules.
Usage:
npx pepr monitor [options] [module-uuid]
Options:
-h, --help - Display help for command
npx pepr uuid
Module UUID(s) currently deployed in the cluster with their descriptions.
Options:
[uuid] - Specific module UUID
npx pepr build
Create a zarf.yaml and K8s manifest for the current module. This includes everything needed to deploy Pepr and the current module into production environments.
Options:
-e, --entry-point [file] - Specify the entry point file to build with. (default: “pepr.ts”)
-n, --no-embed - Disables embedding of deployment files into output module. Useful when creating library modules intended solely for reuse/distribution via NPM.
-r, --registry-info [<registry>/<username>] - Registry Info: Image registry and username. Note: You must be signed into the registry.
-o, --output-dir [output directory] - Define where to place build output
--timeout [timeout] - How long the API server should wait for a webhook to respond before treating the call as a failure
--rbac-mode [admin|scoped] - Rbac Mode: admin, scoped (choices: “admin”, “scoped”, default: “admin”)
-i, --custom-image [custom-image] - Custom Image: Use custom image for Admission and Watcher Deployments.
--registry [GitHub, Iron Bank] - Container registry: Choose container registry for deployment manifests.
-v, --version <version> - The version of the Pepr image to use in the deployment manifests. Example: '0.27.3'
--withPullSecret <imagePullSecret> - Image Pull Secret: Use image pull secret for controller Deployment.
-z, --zarf [manifest|chart] - The Zarf package type to generate: manifest or chart (default: manifest).
npx pepr kfc
Execute a kubernetes-fluent-client command. This command is a wrapper around kubernetes-fluent-client.
Usage:
npx pepr kfc [options] [command]
If you are unsure of what commands are available, you can run npx pepr kfc to see the available commands.
For example, to generate usable types from a Kubernetes CRD, you can run npx pepr kfc crd [source] [directory]. This will generate the types for the [source] CRD and output the generated types to the [directory].
You can learn more about the kubernetes-fluent-client here.
To use, import the sdk
from the pepr
package:
import { sdk } from "pepr";
containers
Returns a list of all containers in a pod. Accepts the following parameters:
Usage:
Get all containers
const { containers } = sdk;
let result = containers(peprValidationRequest)
Get only the standard containers
const { containers } = sdk;
let result = containers(peprValidationRequest, "containers")
Get only the init containers
const { containers } = sdk;
let result = containers(peprValidationRequest, "initContainers")
Get only the ephemeral containers
const { containers } = sdk;
let result = containers(peprValidationRequest, "ephemeralContainers")
getOwnerRefFrom
Returns the owner reference for a Kubernetes resource as an array. Accepts the following parameters:
Usage:
const { getOwnerRefFrom } = sdk;
const ownerRef = getOwnerRefFrom(kubernetesResource);
writeEvent
Write a K8s event for a CRD. Accepts the following parameters:
Usage:
const { writeEvent } = sdk;
writeEvent(
kubernetesResource,
event,
"Warning",
"ReconciliationFailed",
"uds.dev/operator",
process.env.HOSTNAME,
);
sanitizeResourceName
Returns a sanitized resource name to make the given name a valid Kubernetes resource name. Accepts the following parameter:
Usage:
const { sanitizeResourceName } = sdk;
const sanitizedResourceName = sanitizeResourceName(resourceName)
Looking for information on the Pepr mutate helpers? See Mutate Helpers for information on those.
A Pepr Module is the collection of files that make up a Pepr project. A new Pepr Module is created by using the npx pepr init command.
An action is a discrete set of behaviors defined in a single function that acts on a given Kubernetes GroupVersionKind (GVK) passed in during the admission controller lifecycle. Actions are the atomic operations that are performed on Kubernetes resources by Pepr.
For example, an action could be responsible for adding a specific label to a Kubernetes resource, or for modifying a specific field in a resource’s metadata. Actions can be grouped together within a Capability to provide a more comprehensive set of operations that can be performed on Kubernetes resources.
Actions are Mutate(), Validate(), Watch(), Reconcile(), and Finalize(). Both Mutate and Validate actions run during the admission controller lifecycle, while Watch and Reconcile actions run in a separate controller that tracks changes to resources, including existing resources; the Finalize action spans both the admission phase and afterward.
Let’s look at some example actions that are included in the HelloPepr
capability that is created for you when you npx pepr init
:
In this first example, Pepr is adding a label and annotation to a ConfigMap with the name example-1
when it is created. Comments are added to each line to explain in more detail what is happening.
// When(a.<Kind>) filters which GroupVersionKind (GVK) this action should act on.
When(a.ConfigMap)
// This limits the action to only act on new resources.
.IsCreated()
// This limits the action to only act on resources with the name "example-1".
.WithName("example-1")
// Mutate() is where we define the actual behavior of this action.
.Mutate(request => {
// The request object is a wrapper around the K8s resource that Pepr is acting on.
request
// Here we are adding a label to the ConfigMap.
.SetLabel("pepr", "was-here")
// And here we are adding an annotation.
.SetAnnotation("pepr.dev", "annotations-work-too");
// Note that we are not returning anything here. This is because Pepr is tracking the changes in each action automatically.
});
In this example, a Validate action rejects any ConfigMap in the pepr-demo
namespace that has no data.
When(a.ConfigMap)
.IsCreated()
.InNamespace("pepr-demo")
// Validate() is where we define the actual behavior of this action.
.Validate(request => {
// If data exists, approve the request.
if (request.Raw.data) {
return request.Approve();
}
// Otherwise, reject the request with a message and optional code.
return request.Deny("ConfigMap must have data");
});
In this example, a Watch action acts on the name and phase of any ConfigMap. Watch actions run in a separate controller that tracks changes to resources, including existing resources, so that you can react to changes in real-time. It is important to note that Watch actions are not run during the admission controller lifecycle, so they cannot be used to modify or validate resources. They also may run multiple times for the same resource, so it is important to make sure that your Watch actions are idempotent. In a future release, Pepr will provide a better way to control when a Watch action is run to avoid this issue.
When(a.ConfigMap)
// Watch() is where we define the actual behavior of this action.
.Watch((cm, phase) => {
Log.info(cm, `ConfigMap ${cm.metadata.name} was ${phase}`);
});
There are many more examples in the HelloPepr
capability that you can use as a reference when creating your own actions. Note that each time you run npx pepr update
, Pepr will automatically update the HelloPepr
capability with the latest examples and best practices for you to reference and test directly in your Pepr Module.
In some scenarios involving Kubernetes Resource Controllers or Operator patterns, opting for a Reconcile action could be more fitting. Comparable to the Watch functionality, Reconcile is responsible for monitoring the name and phase of any Kubernetes Object. It operates within the Watch controller dedicated to observing modifications to resources, including those already existing, enabling responses to alterations as they occur. Unlike Watch, however, Reconcile employs a Queue to sequentially handle events once they are returned by the Kubernetes API. This allows the operator to handle bursts of events without overwhelming the system or the Kubernetes API. It provides a mechanism to back off when the system is under heavy load, enhancing overall stability and maintaining the state consistency of Kubernetes resources, as the order of operations can impact the final state of a resource.
When(WebApp)
.IsCreatedOrUpdated()
.Validate(validator)
.Reconcile(async instance => {
const { namespace, name, generation } = instance.metadata;
if (!instance.metadata?.namespace) {
Log.error(instance, `Invalid WebApp definition`);
return;
}
const isPending = instance.status?.phase === Phase.Pending;
const isCurrentGeneration = generation === instance.status?.observedGeneration;
if (isPending || isCurrentGeneration) {
Log.debug(instance, `Skipping pending or completed instance`);
return;
}
Log.debug(instance, `Processing instance ${namespace}/${name}`);
try {
// Set Status to pending
await updateStatus(instance, { phase: Phase.Pending });
// Deploy Deployment, ConfigMap, Service, ServiceAccount, and RBAC based on instance
await Deploy(instance);
// Set Status to ready
await updateStatus(instance, {
phase: Phase.Ready,
observedGeneration: instance.metadata.generation,
});
} catch (e) {
Log.error(e, `Error configuring for ${namespace}/${name}`);
// Set Status to failed
void updateStatus(instance, {
phase: Phase.Failed,
observedGeneration: instance.metadata.generation,
});
}
});
Mutating admission webhooks are invoked first and can modify objects sent to the API server to enforce custom defaults. After an object is sent to Pepr’s Mutating Admission Webhook, Pepr will annotate the object to indicate the status.
After a successful mutation of an object in a module with UUID static-test, and capability name hello-pepr, expect to see this annotation: static-test.pepr.dev/hello-pepr: succeeded
.
SetLabel
SetLabel
is used to set a label on a Kubernetes object as part of a Pepr Mutate action.
For example, to add a label when a ConfigMap is created:
When(a.ConfigMap)
.IsCreated()
.Mutate(request => {
request
// Here we are adding a label to the ConfigMap.
.SetLabel("pepr", "was-here")
// Note that we are not returning anything here. This is because Pepr is tracking the changes in each action automatically.
});
RemoveLabel
RemoveLabel
is used to remove a label on a Kubernetes object as part of a Pepr Mutate action.
For example, to remove a label when a ConfigMap is updated:
When(a.ConfigMap)
.IsUpdated()
.Mutate(request => {
request
// Here we are removing a label from the ConfigMap.
.RemoveLabel("remove-me")
// Note that we are not returning anything here. This is because Pepr is tracking the changes in each action automatically.
});
SetAnnotation
SetAnnotation
is used to set an annotation on a Kubernetes object as part of a Pepr Mutate action.
For example, to add an annotation when a ConfigMap is created:
When(a.ConfigMap)
.IsCreated()
.Mutate(request => {
request
// Here we are adding an annotation to the ConfigMap.
.SetAnnotation("pepr.dev", "annotations-work-too");
// Note that we are not returning anything here. This is because Pepr is tracking the changes in each action automatically.
});
RemoveAnnotation
RemoveAnnotation
is used to remove an annotation on a Kubernetes object as part of a Pepr Mutate action.
For example, to remove an annotation when a ConfigMap is updated:
When(a.ConfigMap)
.IsUpdated()
.Mutate(request => {
request
// Here we are removing an annotation from the ConfigMap.
.RemoveAnnotation("remove-me");
// Note that we are not returning anything here. This is because Pepr is tracking the changes in each action automatically.
});
Looking for some more generic helpers? Check out the Module Author SDK for information on other things that Pepr can help with.
After the Mutation phase comes the Validation phase where the validating admission webhooks are invoked and can reject requests to enforce custom policies.
Validate does not annotate the objects that are allowed into the cluster, but the validation webhook can be audited with npx pepr monitor. Read the monitoring docs for more information.
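For illustration, here is a minimal Validate sketch (the Pod check and names are illustrative, not from the original examples) that denies a request with both a message and the optional status code referenced in the earlier ConfigMap example's comment:
When(a.Pod)
  .IsCreated()
  .Validate(request => {
    // Reject Pods that request privileged containers; otherwise approve.
    const privileged = request.Raw.spec?.containers?.some(
      c => c.securityContext?.privileged,
    );
    if (privileged) {
      // Deny with a message and an (assumed) optional status code.
      return request.Deny("Privileged containers are not allowed", 403);
    }
    return request.Approve();
  });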
Reconcile functions the same as Watch but is tailored for building Kubernetes Controllers and Operators because it processes callback operations in a Queue, guaranteeing ordered and synchronous processing of events, even when the system may be under heavy load.
Ordering can be configured to operate in one of two ways: as a single queue that maintains ordering of operations across all resources of a kind (default) or with separate processing queues per resource instance.
See Configuring Reconcile for more on configuring how Reconcile behaves.
Kubernetes supports efficient change notifications on resources via watches. Pepr uses the Watch action for monitoring resources that previously existed in the cluster and for performing long-running asynchronous events upon receiving change notifications on resources, as watches are not limited by timeouts.
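As an illustrative sketch (the Deployment kind, names, and follow-up work here are assumptions, not part of the original examples), a Watch action can safely await longer-running work such as a server-side apply with the K8s client shown later in the OnSchedule examples:
When(a.Deployment)
  .IsCreatedOrUpdated()
  .Watch(async (deploy, phase) => {
    // Watch callbacks are not bound by admission webhook timeouts, so
    // longer-running async work like this Apply is safe here.
    await K8s(kind.ConfigMap).Apply({
      metadata: {
        name: `watch-status-${deploy.metadata?.name}`,
        namespace: deploy.metadata?.namespace,
      },
      data: { lastPhase: `${phase}` },
    });
  });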
A specialized combination of Pepr’s Mutate & Watch functionalities that allows a module author to run logic while Kubernetes is Finalizing a resource (i.e. cleaning up related resources after a deletion request has been accepted).
This method will:
Inject a finalizer into the metadata.finalizers field of the requested resource during the mutation phase of the admission.
Watch appropriate resource lifecycle events & invoke the given callback.
Remove the injected finalizer from the metadata.finalizers field of the requested resource.
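A minimal sketch of how this can look in a module (assuming a Finalize() callback chained after Watch(), mirroring the other action examples; the logic is illustrative):
When(a.ConfigMap)
  .IsCreatedOrUpdated()
  .Watch(async cm => {
    // Normal watch logic while the resource exists.
    Log.info(`ConfigMap ${cm.metadata?.name} was created or updated`);
  })
  .Finalize(async cm => {
    // Runs while Kubernetes is finalizing the resource; once this resolves,
    // the injected finalizer is removed and deletion can complete.
    Log.info(`Cleaning up after ConfigMap ${cm.metadata?.name}`);
  });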
You can use the Alias function to include a user-defined alias in the logs for Mutate, Validate, and Watch actions. This can make for easier debugging since your user-defined alias will be included in the action’s logs. This is especially useful when you have multiple actions of the same type in a single module.
The below example uses Mutate, Validate, Watch, and Reconcile actions with the Alias function:
When(a.Pod)
.IsCreatedOrUpdated()
.Alias("mutate")
.Mutate((po, logger) => {
logger.info(`alias: mutate ${po.Raw.metadata.name}`);
});
When(a.Pod)
.IsCreatedOrUpdated()
.Alias("validate")
.Validate((po, logger) => {
logger.info(`alias: validate ${po.Raw.metadata.name}`);
return po.Approve();
});
When(a.Pod)
.IsCreatedOrUpdated()
.Alias("watch")
.Watch((po, _, logger) => {
logger.info(`alias: watch ${po.metadata.name}`);
});
When(a.Pod)
.IsCreatedOrUpdated()
.Alias("reconcile")
.Reconcile((po, _, logger) => {
logger.info(`alias: reconcile ${po.metadata.name}`);
});
This will result in log entries that include the alias when creating a Pod:
Logs for Mutate When Pod red
is Created:
{"level":30,"time":1726632368808,"pid":16,"hostname":"pepr-static-test-6786948977-6hbnt","uid":"b2221631-e87c-41a2-94c8-cdaef15e7b5f","namespace":"pepr-demo","name":"/red","gvk":{"group":"","version":"v1","kind":"Pod"},"operation":"CREATE","admissionKind":"Mutate","msg":"Incoming request"}
{"level":30,"time":1726632368808,"pid":16,"hostname":"pepr-static-test-6786948977-6hbnt","uid":"b2221631-e87c-41a2-94c8-cdaef15e7b5f","namespace":"pepr-demo","name":"/red","msg":"Processing request"}
{"level":30,"time":1726632368808,"pid":16,"hostname":"pepr-static-test-6786948977-6hbnt","msg":"Executing mutation action with alias: mutate"}
{"level":30,"time":1726632368808,"pid":16,"hostname":"pepr-static-test-6786948977-6hbnt","alias":"mutate","msg":"alias: mutate red"}
{"level":30,"time":1726632368808,"pid":16,"hostname":"pepr-static-test-6786948977-6hbnt","uid":"b2221631-e87c-41a2-94c8-cdaef15e7b5f","namespace":"pepr-demo","name":"hello-pepr","msg":"Mutation action succeeded (mutateCallback)"}
{"level":30,"time":1726632368808,"pid":16,"hostname":"pepr-static-test-6786948977-6hbnt","uid":"b2221631-e87c-41a2-94c8-cdaef15e7b5f","namespace":"pepr-demo","name":"/red","res":{"uid":"b2221631-e87c-41a2-94c8-cdaef15e7b5f","allowed":true,"patchType":"JSONPatch","patch":"W3sib3AiOiJhZGQiLCJwYXRoIjoiL21ldGFkYXRhL2Fubm90YXRpb25zL3N0YXRpYy10ZXN0LnBlcHIuZGV2fjFoZWxsby1wZXByIiwidmFsdWUiOiJzdWNjZWVkZWQifV0="},"msg":"Check response"}
{"level":30,"time":1726632368809,"pid":16,"hostname":"pepr-static-test-6786948977-6hbnt","uid":"b2221631-e87c-41a2-94c8-cdaef15e7b5f","method":"POST","url":"/mutate/c1a7fb6e3f2ab9dc08909d2de4166987520f317d53b759ab882dfd0b1c198479?timeout=10s","status":200,"duration":"1 ms"}
Logs for Validate When Pod red
is Created:
{"level":30,"time":1726631437605,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","uid":"731eff93-d457-4ffc-a98c-0bcbe4c1727a","namespace":"pepr-demo","name":"/red","gvk":{"group":"","version":"v1","kind":"Pod"},"operation":"CREATE","admissionKind":"Validate","msg":"Incoming request"}
{"level":30,"time":1726631437606,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","uid":"731eff93-d457-4ffc-a98c-0bcbe4c1727a","namespace":"pepr-demo","name":"/red","msg":"Processing validation request"}
{"level":30,"time":1726631437606,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","uid":"731eff93-d457-4ffc-a98c-0bcbe4c1727a","namespace":"pepr-demo","name":"hello-pepr","msg":"Processing validation action (validateCallback)"}
{"level":30,"time":1726631437606,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","msg":"Executing validate action with alias: validate"}
{"level":30,"time":1726631437606,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","alias":"validate","msg":"alias: validate red"}
{"level":30,"time":1726631437606,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","uid":"731eff93-d457-4ffc-a98c-0bcbe4c1727a","namespace":"pepr-demo","name":"hello-pepr","msg":"Validation action complete (validateCallback): allowed"}
{"level":30,"time":1726631437606,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","uid":"731eff93-d457-4ffc-a98c-0bcbe4c1727a","namespace":"pepr-demo","name":"/red","res":{"uid":"731eff93-d457-4ffc-a98c-0bcbe4c1727a","allowed":true},"msg":"Check response"}
{"level":30,"time":1726631437606,"pid":16,"hostname":"pepr-static-test-6786948977-j7f9h","uid":"731eff93-d457-4ffc-a98c-0bcbe4c1727a","method":"POST","url":"/validate/c1a7fb6e3f2ab9dc08909d2de4166987520f317d53b759ab882dfd0b1c198479?timeout=10s","status":200,"duration":"5 ms"}
Logs for Watch and Reconcile When Pod red
is Created:
{"level":30,"time":1726798504518,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing reconcile action with alias: reconcile"}
{"level":30,"time":1726798504518,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"reconcile","msg":"alias: reconcile red"}
{"level":30,"time":1726798504518,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing watch action with alias: watch"}
{"level":30,"time":1726798504518,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"watch","msg":"alias: watch red"}
{"level":30,"time":1726798504521,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing reconcile action with alias: reconcile"}
{"level":30,"time":1726798504521,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"reconcile","msg":"alias: reconcile red"}
{"level":30,"time":1726798504521,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing watch action with alias: watch"}
{"level":30,"time":1726798504521,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"watch","msg":"alias: watch red"}
{"level":30,"time":1726798504528,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing reconcile action with alias: reconcile"}
{"level":30,"time":1726798504528,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"reconcile","msg":"alias: reconcile red"}
{"level":30,"time":1726798504528,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing watch action with alias: watch"}
{"level":30,"time":1726798504528,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"watch","msg":"alias: watch red"}
{"level":30,"time":1726798510464,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing watch action with alias: watch"}
{"level":30,"time":1726798510464,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"watch","msg":"alias: watch red"}
{"level":30,"time":1726798510466,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","msg":"Executing reconcile action with alias: reconcile"}
{"level":30,"time":1726798510466,"pid":16,"hostname":"pepr-static-test-watcher-6dc69654c9-5ql6b","alias":"reconcile","msg":"alias: reconcile red"}
Note: The Alias function is optional and can be used to provide additional context in the logs. You must pass the logger object as shown above to the action to use the Alias function.
Looking for some more generic helpers? Check out the Module Author SDK for information on other things that Pepr can help with.
A capability is a set of related actions that work together to achieve a specific transformation or operation on Kubernetes resources. Capabilities are user-defined and can include one or more actions. They are defined within a Pepr module and can be used in both MutatingWebhookConfigurations and ValidatingWebhookConfigurations. A Capability can have a specific scope, such as mutating or validating, and can be reused in multiple Pepr modules.
When you npx pepr init
, a capabilities
directory is created for you. This directory is where you will define your capabilities. You can create as many capabilities as you need, and each capability can contain one or more actions. Pepr also automatically creates a HelloPepr
capability with a number of example actions to help you get started.
Defining a new capability can be done via a VSCode Snippet generated during npx pepr init
.
Create a new file in the capabilities
directory with the name of your capability. For example, capabilities/my-capability.ts
.
Open the new file in VSCode and type create
in the file. A suggestion should prompt you to generate the content from there.
If you prefer not to use VSCode, you can also modify or copy the HelloPepr capability to meet your needs instead.
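For reference, a minimal hand-written capability might look like the following sketch (the names and namespace are illustrative; the generated HelloPepr capability remains the authoritative example):
import { Capability, a } from "pepr";

// Define the capability: its name, description, and the namespaces it is scoped to.
export const MyCapability = new Capability({
  name: "my-capability",
  description: "Adds a tracking label to ConfigMaps",
  namespaces: ["my-namespace"],
});

// Destructure When to register actions against this capability.
const { When } = MyCapability;

When(a.ConfigMap)
  .IsCreated()
  .Mutate(request => {
    request.SetLabel("managed-by", "my-capability");
  });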
Pepr has an NPM org managed by Defense Unicorns, @pepr
, where capabilities are published for reuse in other Pepr Modules. You can find a list of published capabilities here.
You also can publish your own Pepr capabilities to NPM and import them. A couple of things you’ll want to be aware of when publishing your own capabilities:
Reusable capability versions should use the format 0.x.x
or 0.12.x
as examples to determine compatibility with other reusable capabilities. Before 1.x.x
, we recommend binding to 0.x.x
if you can for maximum compatibility.
pepr.ts
will still be used for local development, but you’ll also need to publish an index.ts
that exports your capabilities. When you build & publish the capability to NPM, you can use npx pepr build -e index.ts
to generate the code needed for reuse by other Pepr modules.
See Pepr Istio for an example of a reusable capability.
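As a small illustrative sketch (assuming a capability defined in capabilities/my-capability.ts), the published index.ts only needs to re-export your capabilities:
// index.ts - the entry point used with `npx pepr build -e index.ts`
export { MyCapability } from "./capabilities/my-capability";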
The nature of admission controllers and general watch operations (the Mutate
, Validate
and Watch
actions in Pepr) make some types of complex and long-running operations difficult. There are also times when you need to share data between different actions. While you could manually create your own K8s resources and manage their cleanup, this can be very hard to track and keep performant at scale.
The Pepr Store solves this by exposing a simple, Web Storage API-compatible mechanism for use within capabilities. Additionally, as Pepr runs multiple replicas of the admission controller along with a watch controller, the Pepr Store provides a unique way to share data between these different instances automatically.
Each Pepr Capability has a Store
instance that can be used to get, set and delete data as well as subscribe to any changes to the Store. Behind the scenes, all capability store instances in a single Pepr Module are stored within a single CRD in the cluster. This CRD is automatically created when the Pepr Module is deployed. Care is taken to make the read and write operations as efficient as possible by using K8s watches, batch processing and patch operations for writes.
The .subscribe() and onReady() methods enable real-time updates, allowing you to react to changes in the data store instantaneously.
// Example usage for Pepr Store
Store.setItem("example-1", "was-here");
Store.setItem("example-1-data", JSON.stringify(request.Raw.data));
Store.onReady(data => {
Log.info(data, "Pepr Store Ready");
});
const unsubscribe = Store.subscribe(data => {
Log.info(data, "Pepr Store Updated");
unsubscribe();
});
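A further illustrative sketch (the ConfigMap trigger and key names are assumptions) showing the getItem and setItemAndWait helpers described below, placed in a Watch action where awaiting is safe from admission timeouts:
When(a.ConfigMap)
  .IsCreated()
  .Watch(async cm => {
    // Resolves only once the key/value is actually visible in the store.
    await Store.setItemAndWait("last-configmap", cm.metadata?.name ?? "unknown");

    // Read it back; getItem returns null if the key doesn't exist.
    const last = Store.getItem("last-configmap");
    Log.info(`Last ConfigMap recorded in the store: ${last}`);
  });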
getItem(key: string): Retrieves a value by its key. Returns null if the key doesn’t exist.
setItem(key: string, value: string): Sets a value for a given key. Creates a new key-value pair if the key doesn’t exist.
setItemAndWait(key: string, value: string): Sets a value for a given key. Creates a new key-value pair if the key doesn’t exist. Resolves a promise when the new key and value show up in the store. Note - Async operations in Mutate and Validate are susceptible to timeouts.
removeItem(key: string): Deletes a key-value pair by its key.
removeItemAndWait(key: string): Deletes a key-value pair by its key and resolves a promise when the key and value no longer show up in the store. Note - Async operations in Mutate and Validate are susceptible to timeouts.
clear(): Clears all key-value pairs from the store.
subscribe(listener: DataReceiver): Subscribes to store updates.
onReady(callback: DataReceiver): Executes a callback when the store is ready.
The Kubernetes Fluent Client supports the creation of TypeScript typings directly from Kubernetes Custom Resource Definitions (CRDs). The files it generates can be directly incorporated into Pepr capabilities and provide a way to work with strongly-typed CRDs.
For example (below), Istio CRDs can be imported and used as though they were intrinsic Kubernetes resources.
Using the kubernetes-fluent-client to produce a new type looks like this:
npx kubernetes-fluent-client crd [source] [directory]
The crd
command expects a [source]
, which can be a URL or local file containing the CustomResourceDefinition(s)
, and a [directory]
where the generated code will live.
The following example creates types for the Istio CRDs:
user@workstation$ npx kubernetes-fluent-client crd https://raw.githubusercontent.com/istio/istio/master/manifests/charts/base/crds/crd-all.gen.yaml crds
Attempting to load https://raw.githubusercontent.com/istio/istio/master/manifests/charts/base/crds/crd-all.gen.yaml as a URL
- Generating extensions.istio.io/v1alpha1 types for WasmPlugin
- Generating networking.istio.io/v1alpha3 types for DestinationRule
- Generating networking.istio.io/v1beta1 types for DestinationRule
- Generating networking.istio.io/v1alpha3 types for EnvoyFilter
- Generating networking.istio.io/v1alpha3 types for Gateway
- Generating networking.istio.io/v1beta1 types for Gateway
- Generating networking.istio.io/v1beta1 types for ProxyConfig
- Generating networking.istio.io/v1alpha3 types for ServiceEntry
- Generating networking.istio.io/v1beta1 types for ServiceEntry
- Generating networking.istio.io/v1alpha3 types for Sidecar
- Generating networking.istio.io/v1beta1 types for Sidecar
- Generating networking.istio.io/v1alpha3 types for VirtualService
- Generating networking.istio.io/v1beta1 types for VirtualService
- Generating networking.istio.io/v1alpha3 types for WorkloadEntry
- Generating networking.istio.io/v1beta1 types for WorkloadEntry
- Generating networking.istio.io/v1alpha3 types for WorkloadGroup
- Generating networking.istio.io/v1beta1 types for WorkloadGroup
- Generating security.istio.io/v1 types for AuthorizationPolicy
- Generating security.istio.io/v1beta1 types for AuthorizationPolicy
- Generating security.istio.io/v1beta1 types for PeerAuthentication
- Generating security.istio.io/v1 types for RequestAuthentication
- Generating security.istio.io/v1beta1 types for RequestAuthentication
- Generating telemetry.istio.io/v1alpha1 types for Telemetry
✅ Generated 23 files in the istio directory
Observe that the kubernetes-fluent-client
has produced the TypeScript types within the crds
directory. These types can now be utilized in the Pepr module.
user@workstation$ cat crds/proxyconfig-v1beta1.ts
// This file is auto-generated by kubernetes-fluent-client, do not edit manually
import { GenericKind, RegisterKind } from "kubernetes-fluent-client";
export class ProxyConfig extends GenericKind {
/**
* Provides configuration for individual workloads. See more details at:
* https://istio.io/docs/reference/config/networking/proxy-config.html
*/
spec?: Spec;
status?: { [key: string]: any };
}
/**
* Provides configuration for individual workloads. See more details at:
* https://istio.io/docs/reference/config/networking/proxy-config.html
*/
export interface Spec {
/**
* The number of worker threads to run.
*/
concurrency?: number;
/**
* Additional environment variables for the proxy.
*/
environmentVariables?: { [key: string]: string };
/**
* Specifies the details of the proxy image.
*/
image?: Image;
/**
* Optional.
*/
selector?: Selector;
}
/**
* Specifies the details of the proxy image.
*/
export interface Image {
/**
* The image type of the image.
*/
imageType?: string;
}
/**
* Optional.
*/
export interface Selector {
/**
* One or more labels that indicate a specific set of pods/VMs on which a policy should be
* applied.
*/
matchLabels?: { [key: string]: string };
}
RegisterKind(ProxyConfig, {
group: "networking.istio.io",
version: "v1beta1",
kind: "ProxyConfig",
});
The generated types can be imported into Pepr directly; there is no additional logic needed to make them work.
import { Capability, K8s, Log, a, kind } from "pepr";
import { Gateway } from "../crds/gateway-v1beta1";
import {
PurpleDestination,
VirtualService,
} from "../crds/virtualservice-v1beta1";
export const IstioVirtualService = new Capability({
name: "istio-virtual-service",
description: "Generate Istio VirtualService resources",
});
// Use the 'When' function to create a new action
const { When, Store } = IstioVirtualService;
// Define the configuration keys
enum config {
Gateway = "uds/istio-gateway",
Host = "uds/istio-host",
Port = "uds/istio-port",
Domain = "uds/istio-domain",
}
// Define the valid gateway names
const validGateway = ["admin", "tenant", "passthrough"];
// Watch Gateways to get the HTTPS domain for each gateway
When(Gateway)
.IsCreatedOrUpdated()
.WithLabel(config.Domain)
.Watch(vs => {
// Store the domain for the gateway
Store.setItem(vs.metadata.name, vs.metadata.labels[config.Domain]);
});
The OnSchedule feature allows you to schedule and automate the execution of specific code at predefined intervals or schedules. This feature is designed to simplify recurring tasks and can serve as an alternative to traditional CronJobs. This code is designed to be run at the top level on a Capability, not within a function like When.
OnSchedule is designed for targeting intervals equal to or larger than 30 seconds due to the storage mechanism used to archive schedule info.
Create a recurring task execution by calling the OnSchedule function with the following parameters:
name - The unique name of the schedule.
every - An integer that represents the frequency of the schedule in number of units.
unit - A string specifying the time unit for the schedule (e.g., seconds, minute, minutes, hour, hours).
startTime - (Optional) A UTC timestamp indicating when the schedule should start. All date times must be provided in GMT. If not specified the schedule will start when the schedule store reports ready.
run - A function that contains the code you want to execute on the defined schedule.
completions - (Optional) An integer indicating the maximum number of times the schedule should run to completion. If not specified the schedule will run indefinitely.
Update a ConfigMap every 30 seconds:
OnSchedule({
name: "hello-interval",
every: 30,
unit: "seconds",
run: async () => {
Log.info("Wait 30 seconds and create/update a ConfigMap");
try {
await K8s(kind.ConfigMap).Apply({
metadata: {
name: "last-updated",
namespace: "default",
},
data: {
count: `${new Date()}`,
},
});
} catch (error) {
Log.error(error, "Failed to apply ConfigMap using server-side apply.");
}
},
});
Refresh an AWSToken every 24 hours, with a delayed start of 30 seconds, running a total of 3 times:
OnSchedule({
name: "refresh-aws-token",
every: 24,
unit: "hours",
startTime: new Date(new Date().getTime() + 1000 * 30),
run: async () => {
await RefreshAWSToken();
},
completions: 3,
});
During the build phase of Pepr (npx pepr build --rbac-mode [admin|scoped]
), you have the option to specify the desired RBAC mode through specific flags. This allows fine-tuning the level of access granted based on requirements and preferences.
npx pepr build --rbac-mode admin
Description: The service account is given cluster-admin permissions, granting it full, unrestricted access across the entire cluster. This can be useful for administrative tasks where broad permissions are necessary. However, use this mode with caution, as it can pose security risks if misused. This is the default mode.
npx pepr build --rbac-mode scoped
Description: The service account is provided just enough permissions to perform its required tasks, and no more. This mode is recommended for most use cases as it limits potential attack vectors and aligns with best practices in security. The admission controller’s primary mutating or validating action doesn’t require a ClusterRole (as the request is not persisted or executed while passing through admission control). However, if you have a use case where the admission controller’s logic involves reading other Kubernetes resources or taking additional actions beyond just validating, mutating, or watching the incoming request, appropriate RBAC settings should be reflected in the ClusterRole. See how in Updating the ClusterRole.
If encountering unexpected behaviors in Pepr while running in scoped mode, check to see if they are related to RBAC.
kubectl logs -n pepr-system -l app | jq
# example output
{
"level": 50,
"time": 1697983053758,
"pid": 16,
"hostname": "pepr-static-test-watcher-745d65857d-pndg7",
"data": {
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "configmaps \"pepr-ssa-demo\" is forbidden: User \"system:serviceaccount:pepr-system:pepr-static-test\" cannot patch resource \"configmaps\" in API group \"\" in the namespace \"pepr-demo-2\"",
"reason": "Forbidden",
"details": {
"name": "pepr-ssa-demo",
"kind": "configmaps"
},
"code": 403
},
"ok": false,
"status": 403,
"statusText": "Forbidden",
"msg": "Dooes the ServiceAccount permissions to CREATE and PATCH this ConfigMap?"
}
kubectl auth can-i
SA=$(kubectl get deploy -n pepr-system -o=jsonpath='{range .items[0]}{.spec.template.spec.serviceAccountName}{"\n"}{end}')
# Can i create configmaps as the service account in pepr-demo-2?
kubectl auth can-i create cm --as=system:serviceaccount:pepr-system:$SA -n pepr-demo-2
# example output: no
SA=$(kubectl get deploy -n pepr-system -o=jsonpath='{range .items[0]}{.spec.template.spec.serviceAccountName}{"\n"}{end}')
kubectl describe clusterrole $SA
# example output:
Name: pepr-static-test
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
peprstores.pepr.dev [] [] [create delete get list patch update watch]
configmaps [] [] [watch]
namespaces [] [] [watch]
As discussed in the Modes section, the admission controller’s primary mutating or validating action doesn’t require a ClusterRole (as the request is not persisted or executed while passing through admission control). However, if you have a use case where the admission controller’s logic involves reading other Kubernetes resources or taking additional actions beyond just validating, mutating, or watching the incoming request, appropriate RBAC settings should be reflected in the ClusterRole.
Step 1: Figure out the desired permissions. (kubectl create clusterrole --help
is a good place to start figuring out the syntax)
kubectl create clusterrole configMapApplier --verb=create,patch --resource=configmap --dry-run=client -oyaml
# example output
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: configMapApplier
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- patch
Step 2: Update the ClusterRole in the dist
folder.
...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pepr-static-test
rules:
- apiGroups:
- pepr.dev
resources:
- peprstores
verbs:
- create
- get
- patch
- watch
- apiGroups:
- ''
resources:
- namespaces
verbs:
- watch
- apiGroups:
- ''
resources:
- configmaps
verbs:
- watch
- create # New
- patch # New
...
Step 3: Apply the updated configuration
The /metrics
endpoint provides metrics for the application that are collected via the MetricsCollector
class. It uses the prom-client
library and performance hooks from Node.js to gather and expose the metrics data in a format that can be scraped by Prometheus.
The MetricsCollector
exposes the following metrics:
pepr_errors: A counter that increments when an error event occurs in the application.
pepr_alerts: A counter that increments when an alert event is triggered in the application.
pepr_mutate: A summary that provides the observed durations of mutation events in the application.
pepr_validate: A summary that provides the observed durations of validation events in the application.
pepr_cache_miss: A gauge that provides the number of cache misses per window.
pepr_resync_failure_count: A gauge that provides the number of unsuccessful attempts at receiving an event within the last seen event limit before re-establishing a new connection.
Field | Description | Example Values |
---|---|---|
PEPR_MAX_CACHE_MISS_WINDOWS | Maximum number of windows to emit pepr_cache_miss metrics for | default: Undefined |
Method: GET
URL: /metrics
Response Type: text/plain
Status Codes:
Response Body: The response body is a plain text representation of the metrics data, according to the Prometheus exposition formats. It includes the metrics mentioned above.
GET /metrics
# HELP pepr_errors Mutation/Validate errors encountered
# TYPE pepr_errors counter
pepr_errors 5
# HELP pepr_alerts Mutation/Validate bad api token received
# TYPE pepr_alerts counter
pepr_alerts 10
# HELP pepr_mutate Mutation operation summary
# TYPE pepr_mutate summary
pepr_mutate{quantile="0.01"} 100.60707900021225
pepr_mutate{quantile="0.05"} 100.60707900021225
pepr_mutate{quantile="0.5"} 100.60707900021225
pepr_mutate{quantile="0.9"} 100.60707900021225
pepr_mutate{quantile="0.95"} 100.60707900021225
pepr_mutate{quantile="0.99"} 100.60707900021225
pepr_mutate{quantile="0.999"} 100.60707900021225
pepr_mutate_sum 100.60707900021225
pepr_mutate_count 1
# HELP pepr_validate Validation operation summary
# TYPE pepr_validate summary
pepr_validate{quantile="0.01"} 201.19413900002837
pepr_validate{quantile="0.05"} 201.19413900002837
pepr_validate{quantile="0.5"} 201.2137690000236
pepr_validate{quantile="0.9"} 201.23339900001884
pepr_validate{quantile="0.95"} 201.23339900001884
pepr_validate{quantile="0.99"} 201.23339900001884
pepr_validate{quantile="0.999"} 201.23339900001884
pepr_validate_sum 402.4275380000472
pepr_validate_count 2
# HELP pepr_cache_miss Number of cache misses per window
# TYPE pepr_cache_miss gauge
pepr_cache_miss{window="2024-07-25T11:54:33.897Z"} 18
pepr_cache_miss{window="2024-07-25T12:24:34.592Z"} 0
pepr_cache_miss{window="2024-07-25T13:14:33.450Z"} 22
pepr_cache_miss{window="2024-07-25T13:44:34.234Z"} 19
pepr_cache_miss{window="2024-07-25T14:14:34.961Z"} 0
# HELP pepr_resync_failure_count Number of retries per count
# TYPE pepr_resync_failure_count gauge
pepr_resync_failure_count{count="0"} 5
pepr_resync_failure_count{count="1"} 4
If using the Prometheus Operator, the following ServiceMonitor
example manifests can be used to scrape the /metrics
endpoint for the admission
and watcher
controllers.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: admission
spec:
selector:
matchLabels:
pepr.dev/controller: admission
namespaceSelector:
matchNames:
- pepr-system
endpoints:
- targetPort: 3000
scheme: https
tlsConfig:
insecureSkipVerify: true
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: watcher
spec:
selector:
matchLabels:
pepr.dev/controller: watcher
namespaceSelector:
matchNames:
- pepr-system
endpoints:
- targetPort: 3000
scheme: https
tlsConfig:
insecureSkipVerify: true
Pepr fully supports WebAssembly. Depending on the language used to generate the WASM, certain files can be too large to fit into a Secret or ConfigMap. Due to this limitation, users have the ability to incorporate *.wasm and any other essential files during the build phase, which are then embedded into the Pepr Controller container. This is achieved through adding an array of files to the includedFiles section under pepr in the package.json.
NOTE - In order to instantiate the WebAssembly module in TypeScript, you need the WebAssembly type. This is accomplished by adding "DOM" to the lib array in the compilerOptions section of the tsconfig.json. Ex: "lib": ["ES2022", "DOM"]. Be aware that adding the DOM lib will add a lot of extra types to your project and your developer experience will be impacted in terms of intellisense.
WASM support is achieved through adding files as layers atop the Pepr controller image; these files are then able to be read by the individual capabilities. The key components of WASM support are: the includedFiles section of the pepr block of the package.json; npx pepr build with the -r option specifying registry info (ex: npx pepr build -r docker.io/cmwylie19); and the controller Deployment.
Create a simple Go function that you want to call from your Pepr module
package main
import (
"fmt"
"syscall/js"
)
func concats(this js.Value, args []js.Value) interface{} {
fmt.Println("PeprWASM!")
stringOne := args[0].String()
stringTwo := args[1].String()
return fmt.Sprintf("%s%s", stringOne, stringTwo)
}
func main() {
done := make(chan struct{}, 0)
js.Global().Set("concats", js.FuncOf(concats))
<-done
}
Compile it to a wasm target and move it to your Pepr module
GOOS=js GOARCH=wasm go build -o main.wasm
cp main.wasm $YOUR_PEPR_MODULE/
Copy the wasm_exec.js
from GOROOT
to your Pepr Module
cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" $YOUR_PEPR_MODULE/
Update the polyfill to add globalThis.crypto
in the wasm_exec.js
since we are not running in the browser. This is needed directly under: (() => {
// Initialize the polyfill
if (typeof globalThis.crypto === 'undefined') {
globalThis.crypto = {
getRandomValues: (array) => {
for (let i = 0; i < array.length; i++) {
array[i] = Math.floor(Math.random() * 256);
}
},
};
}
After adding the files to the root of the Pepr module, reference those files in the package.json
:
{
"name": "pepr-test-module",
"version": "0.0.1",
"description": "A test module for Pepr",
"keywords": [
"pepr",
"k8s",
"policy-engine",
"pepr-module",
"security"
],
"engines": {
"node": ">=18.0.0"
},
"pepr": {
"name": "pepr-test-module",
"uuid": "static-test",
"onError": "ignore",
"alwaysIgnore": {
"namespaces": [],
"labels": []
},
"includedFiles":[
"main.wasm",
"wasm_exec.js"
]
},
...
}
Update the tsconfig.json
to add “DOM” to the compilerOptions
lib:
{
"compilerOptions": {
"allowSyntheticDefaultImports": true,
"declaration": true,
"declarationMap": true,
"emitDeclarationOnly": true,
"esModuleInterop": true,
"lib": [
"ES2022",
"DOM" // <- Add this
],
"module": "CommonJS",
"moduleResolution": "node",
"outDir": "dist",
"resolveJsonModule": true,
"rootDir": ".",
"strict": false,
"target": "ES2022",
"useUnknownInCatchVariables": false
},
"include": [
"**/*.ts"
]
}
Import the wasm_exec.js
in the pepr.ts
import "./wasm_exec.js";
Create a helper function to load the wasm file in a capability and call it during an event of your choice
// At the top of the capability file
import { readFileSync } from "fs";

async function callWASM(a, b) {
  // Instantiate the Go WASM runtime provided by wasm_exec.js
  const go = new globalThis.Go();
  // Read the wasm file that was embedded into the controller image
  const wasmData = readFileSync("main.wasm");
  let concated: string;
  await WebAssembly.instantiate(wasmData, go.importObject).then(wasmModule => {
    go.run(wasmModule.instance);
    // concats is registered on the global scope by the Go program
    concated = global.concats(a, b);
  });
  return concated;
}
When(a.Pod)
.IsCreated()
.Mutate(async pod => {
try {
let label_value = await callWASM("loves","wasm")
pod.SetLabel("pepr",label_value)
}
catch(err) {
Log.error(err);
}
});
Build your Pepr module with the registry specified.
npx pepr build -r docker.io/defenseunicorns
This document outlines how to customize the build output through Helm overrides and package.json
configurations.
By default, the store values are displayed in logs. To redact them, you can set the PEPR_STORE_REDACT_VALUES environment variable to true in the package.json file or directly on the Watcher or Admission Deployment. The default value is undefined.
{
"env": {
"PEPR_STORE_REDACT_VALUES": "true"
}
}
You can display warnings in the logs by setting the PEPR_NODE_WARNINGS
environment variable to true
in the package.json
file or directly on the Watcher or Admission Deployment
. The default value is undefined
.
{
"env": {
"PEPR_NODE_WARNINGS": "true"
}
}
The log format can be customized by setting the PINO_TIME_STAMP
environment variable in the package.json
file or directly on the Watcher or Admission Deployment
. The default value is a partial JSON timestamp string representation of the time. If set to iso
, the timestamp is displayed in an ISO format.
Caution: attempting to format time in-process will significantly impact logging performance.
{
"env": {
"PINO_TIME_STAMP": "iso"
}
}
With ISO:
{"level":30,"time":"2024-05-14T14:26:03.788Z","pid":16,"hostname":"pepr-static-test-7f4d54b6cc-9lxm6","method":"GET","url":"/healthz","status":200,"duration":"1 ms"}
Default (without):
{"level":30,"time":"1715696764106","pid":16,"hostname":"pepr-static-test-watcher-559d94447f-xkq2h","method":"GET","url":"/healthz","status":200,"duration":"1 ms"}
The Watch configuration is a part of the Pepr module that allows you to watch for specific resources in the Kubernetes cluster. The Watch configuration can be customized by specific environment variables of the Watcher Deployment, which can be set in the env field of the package.json or in the helm values.yaml file.
Field | Description | Example Values |
---|---|---|
PEPR_RESYNC_FAILURE_MAX | The maximum number of times to fail on a resync interval before re-establishing the watch URL and doing a relist. | default: "5" |
PEPR_RETRY_DELAY_SECONDS | The delay between retries in seconds. | default: "10" |
PEPR_LAST_SEEN_LIMIT_SECONDS | Max seconds to go without receiving a watch event before re-establishing the watch | default: "300" (5 mins) |
PEPR_RELIST_INTERVAL_SECONDS | Amount of seconds to wait before a relist of the watched resources | default: "600" (10 mins) |
The Reconcile Action allows you to maintain ordering of resource updates processed by a Pepr controller. The Reconcile configuration can be customized via an environment variable on the Watcher Deployment, which can be set in the package.json or in the helm values.yaml file.
Field | Description | Example Values |
---|---|---|
PEPR_RECONCILE_STRATEGY | How Pepr should order resource updates being Reconcile()’d. | default: "kind" |
Available Options | Description |
---|---|
kind | separate queues of events for Reconcile()’d resources of a kind |
kindNs | separate queues of events for Reconcile()’d resources of a kind, within a namespace |
kindNsName | separate queues of events for Reconcile()’d resources of a kind, within a namespace, per name |
global | a single queue of events for all Reconcile()’d resources |
Below are the available Helm override configurations, which you can put in the values.yaml after you have built your Pepr module.
Parameter | Description | Example Values |
---|---|---|
secrets.apiToken | Kube API-Server Token. | Buffer.from(apiToken).toString("base64") |
hash | Unique hash for deployment. Do not change. | <your_hash> |
namespace.annotations | Namespace annotations | {} |
namespace.labels | Namespace labels | {"pepr.dev": ""} |
uuid | Unique identifier for the module | hub-operator |
admission.* | Admission controller configurations | Various, see subparameters below |
watcher.* | Watcher configurations | Various, see subparameters below |
Subparameter | Description |
---|---|
failurePolicy | Webhook failure policy [Ignore, Fail] |
webhookTimeout | Timeout seconds for webhooks [1 - 30] |
env | Container environment variables |
image | Container image |
annotations | Deployment annotations |
labels | Deployment labels |
securityContext | Pod security context |
readinessProbe | Pod readiness probe definition |
livenessProbe | Pod liveness probe definition |
resources | Resource limits |
containerSecurityContext | Container’s security context |
nodeSelector | Node selection constraints |
tolerations | Tolerations to taints |
affinity | Node scheduling options |
terminationGracePeriodSeconds | Optional duration in seconds the pod needs to terminate gracefully |
Note: Replace * within admission.* or watcher.* to apply settings specific to the desired subparameter (e.g. admission.failurePolicy).
Below are the available configurations through package.json
.
Field | Description | Example Values |
---|---|---|
uuid | Unique identifier for the module | hub-operator |
onError | Behavior of the webhook failure policy | reject , ignore |
webhookTimeout | Webhook timeout in seconds | 1 - 30 |
customLabels | Custom labels for namespaces | {namespace: {}} |
alwaysIgnore | Conditions to always ignore | {namespaces: []} |
includedFiles | For working with WebAssembly | ["main.wasm", "wasm_exec.js"] |
env | Environment variables for the container | {LOG_LEVEL: "warn"} |
rbac | Custom RBAC rules (requires building with rbacMode: scoped ) | {"rbac": [{"apiGroups": ["<apiGroups>"], "resources": ["<resources>"], "verbs": ["<verbs>"]}]} |
rbacMode | Configures module to build binding RBAC with principal of least privilege | scoped , admin |
These tables provide a comprehensive overview of the fields available for customization within the Helm overrides and the package.json
file. Modify these according to your deployment requirements.
The following example demonstrates how to add custom RBAC rules to the Pepr module.
{
"pepr": {
"rbac": [
{
"apiGroups": ["pepr.dev"],
"resources": ["customresources"],
"verbs": ["get", "list"]
},
{
"apiGroups": ["apps"],
"resources": ["deployments"],
"verbs": ["create", "delete"]
}
]
}
}
Filters are functions that take an AdmissionReview or Watch event and return a boolean. They are used to filter out resources that do not meet certain criteria, so that only resources relevant to the user-defined admission or watch process are acted on.
When(a.ConfigMap)
// This limits the action to only act on new resources.
.IsCreated()
// Namespace filter
.InNamespace("webapp")
// Name filter
.WithName("example-1")
// Label filter
.WithLabel("app", "webapp")
.WithLabel("env", "prod")
.Mutate(request => {
request
.SetLabel("pepr", "was-here")
.SetAnnotation("pepr.dev", "annotations-work-too");
});
Filters
.WithName("name"): Filters resources by name.
.WithNameRegex(/^pepr/): Filters resources by name using a regex.
.InNamespace("namespace"): Filters resources by namespace.
.InNamespaceRegex(/(.*)-system/): Filters resources by namespace using a regex.
.WithLabel("key", "value"): Filters resources by label. (Can be multiple.)
.WithDeletionTimestamp(): Filters resources that have a deletion timestamp.
Notes:
WithDeletionTimestamp() does not work on Delete through the Mutate or Validate methods because the Kubernetes Admission Process does not fire the DELETE event with a deletion timestamp on the resource.
WithDeletionTimestamp() will match on an Update event during Admission (Mutate or Validate) when pending-deletion permitted changes (like removing a finalizer) occur.
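An illustrative sketch (assumption: these filters behave the same when chained before a Watch action) combining the regex and deletion-timestamp filters:
When(a.Pod)
  .IsUpdated()
  .InNamespaceRegex(/(.*)-system/)
  .WithNameRegex(/^pepr/)
  .WithDeletionTimestamp()
  .Watch(pod => {
    // Fires for matching Pods that are pending deletion.
    Log.info(`Pod ${pod.metadata?.name} is terminating`);
  });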