Ansible Event-Driven Automation
Curated set of Event-Driven Ansible content.
The following set of content is included within this collection:
| Name | Description |
|---|---|
| juniper.eda.k8s | Respond to events within a Kubernetes cluster. |
The `juniper.eda.k8s` extension provides a reliable means for taking action upon Kubernetes events. It provides mechanisms to:

- Filter watched resources by `api_version`, `kind`, `name`, `namespace`, `label_selectors`, `field_selectors`, and `changed_fields`.
- Receive an `INIT_DONE` event once the initial listing of existing resources is complete.
- Receive `ADDED`, `MODIFIED`, and `DELETED` events for resources matching the filters.

This extension is implemented with the `kubernetes_asyncio` client, which is non-blocking, ensuring that the EDA activation will be responsive.
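The exact event payload isn't spelled out above, so here is a hedged sketch of the two event shapes, inferred from the rulebook conditions used throughout this document (field values are illustrative):

```yaml
# Sketch of event shapes (inferred from the rulebook examples below; values illustrative).

# Emitted once after the initial listing of existing resources:
---
type: INIT_DONE
resources:
  kind: ConfigMapList    # "<Kind>List" for the watched kind
  items: []              # assumed: the resources that existed at startup

# Emitted for each subsequent change:
---
type: ADDED              # or MODIFIED / DELETED
resource:
  kind: ConfigMap
  metadata:
    name: my-config      # illustrative
    namespace: default   # illustrative
    labels: {}
```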
The following example shows how to use the Kubernetes event source plugin within an Ansible Rulebook:
```yaml
---
- name: Listen for ConfigMaps across all namespaces
  hosts: all
  sources:
    - juniper.eda.k8s:
        api_version: v1
        kind: ConfigMap
  rules:
    - name: Existing ConfigMaps
      condition: event.type == "INIT_DONE" and event.resources.kind == "ConfigMapList"
      action:
        debug:
          msg: "INIT_DONE: ConfigMaps: {{ event.resources }}"
    - name: ConfigMap Added
      condition: event.type == "ADDED"
      action:
        debug:
          msg: "ADDED: ConfigMap {{ event.resource.metadata.namespace }}/{{ event.resource.metadata.name }}"
```
You can also listen for any number of object kinds in the same rulebook activation. For example:
```yaml
---
- name: Listen for Namespace or Pod
  hosts: all
  sources:
    - juniper.eda.k8s:
        kinds:
          - api_version: v1
            kind: Namespace
          - api_version: v1
            kind: Pod
            label_selectors:
              - app: myapp
  rules:
    - name: Existing Namespaces
      condition: event.type == "INIT_DONE" and event.resources.kind == "NamespaceList"
      action:
        debug:
          msg: "INIT_DONE: Namespaces: {{ event.resources }}"
    - name: Namespace Added/Modified/Deleted
      condition: event.resource.kind == "Namespace"
      action:
        debug:
          msg: "{{ event.type }}: Namespace {{ event.resource.metadata.name }}"
    - name: Existing Pods
      condition: event.type == "INIT_DONE" and event.resources.kind == "PodList"
      action:
        debug:
          msg: "INIT_DONE: Pods: {{ event.resources }}"
    - name: Pod Added/Modified/Deleted
      condition: event.resource.kind == "Pod"
      action:
        debug:
          msg: "{{ event.type }}: Pod {{ event.resource.metadata.namespace }}/{{ event.resource.metadata.name }} with labels {{ event.resource.metadata.labels }}"
```
The event source can also be configured to monitor specific fields on a resource:
```yaml
---
- name: Listen for ConfigMaps across all namespaces
  hosts: all
  sources:
    - juniper.eda.k8s:
        api_version: v1
        kind: ConfigMap
        changed_fields:
          - data
          - metadata.annotations.foo
  rules:
    - name: Modified ConfigMap Specific Fields
      condition: event.resource.kind == "ConfigMap" and event.type == "MODIFIED"
      action:
        debug:
          msg: "{{ event.resource.metadata.name }} changed foo to {{ event.resource.metadata.annotations.foo }}"
```
When running in a Kubernetes environment, the rulebook activation pod will look for the service account secret that's typically injected into the pod, so no configuration is required.
Otherwise, the following parameters can be specified manually:
| Key | Alias | Purpose |
|---|---|---|
| kubeconfig | | Path to Kubernetes config file |
| context | | Kubernetes context |
| host | | Kubernetes API host |
| api_key | | Authorization key for the Kubernetes API server |
| username | | Kubernetes user |
| password | | Kubernetes user password |
| validate_certs | verify_ssl | Boolean to specify whether SSL verification is required |
| ca_cert | ssl_ca_cert | Path to the certificate authority file used to validate the server certificate |
| client_cert | cert_file | Path to client certificate file |
| client_key | key_file | Path to client key file |
| proxy | | URL for proxy server |
| no_proxy | | Disable proxy (even if proxy is configured) |
| proxy_headers | | Dictionary of proxy headers to use. See k8s.py for details. |
| persist_config | | Boolean to configure persistence of the client configuration |
| impersonate_user | | User to impersonate |
| impersonate_groups | | List of groups to impersonate |
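For example, here is a hedged sketch of a source configured with explicit connection settings, using parameter names from the table above (all paths and values are illustrative):

```yaml
sources:
  - juniper.eda.k8s:
      # Illustrative values; substitute the credentials for your own cluster.
      kubeconfig: /home/user/.kube/config
      context: my-cluster
      validate_certs: true
      ca_cert: /etc/ssl/certs/my-ca.crt
      api_version: v1
      kind: ConfigMap
```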
In order for the watch to work, the (service) account associated with the authenticated user must be authorized to get, list, and watch the types specified.
Here is how you might configure a cluster role:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eda-service-account-role
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["sriovnetwork.openshift.io"]
    resources: ["sriovnetworks"]
    verbs: ["get", "list", "watch"]
```
This is how you might configure a cluster role binding to a service account:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eda-service-account-binding
subjects:
  - kind: ServiceAccount
    name: default # Replace with the name of your service account
    namespace: aap # Replace with the namespace of your service account
roleRef:
  kind: ClusterRole
  name: eda-service-account-role
  apiGroup: rbac.authorization.k8s.io
```
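If you use a dedicated service account rather than default, a minimal manifest for the subject referenced by the binding might look like this (name and namespace are illustrative):

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default # Match the subject name in the ClusterRoleBinding above
  namespace: aap # Match the subject namespace
```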
The following parameters are supported in the watch configuration.
| Config | Purpose | Default |
|---|---|---|
| api_version | Version of the kinds to watch | v1 |
| kind | Kind to watch | |
| kinds | List of kinds to watch. It's a list of dictionaries including all of the configuration values in this list except kinds. For each kind, it will use the top-level value as the default. For example, if namespace is set at the top level but not in one of the kind list entries, the namespace from the top will be used as a default. | |
| name | Name of the kind to watch | |
| namespace | Namespace to watch for kind | |
| label_selectors | Labels to filter resources | [] |
| field_selectors | Fields to filter resources | [] |
| changed_fields | Filter modified events by specific fields | [] |
| ignore_modified_deleted | Filter out MODIFIED events marked for deletion. | False |
| log_level | Log level. One of CRITICAL, ERROR, INFO, DEBUG, NOTSET | INFO |
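To illustrate the defaulting behavior described for kinds, here is a hedged sketch in which the top-level namespace and label_selectors act as defaults for each entry unless overridden (all names are illustrative):

```yaml
sources:
  - juniper.eda.k8s:
      namespace: myapp-ns        # illustrative; default for every kinds entry
      label_selectors:
        - app: myapp             # illustrative; also inherited as a default
      kinds:
        - api_version: v1
          kind: Pod              # inherits namespace and label_selectors above
        - api_version: v1
          kind: ConfigMap
          namespace: other-ns    # overrides the top-level default
          changed_fields:
            - data
      ignore_modified_deleted: true
      log_level: DEBUG
```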
To build the decision environment image, Docker is required.
You'll need to set the environment variables `RH_USERNAME` and `RH_PASSWORD` in the `.env` file at the root of your repo. For example:

```
RH_USERNAME=jsmith
RH_PASSWORD=XXXXXXXXXXXXXX
```
Then `make image` will create an image named `juniper-eda-de:latest`.
To publish an image, you'll need to set `REGISTRY_URL` in your `.env` file to point to the location of the docker registry you use to publish Decision Environments. For example:

```
REGISTRY_URL=s-artifactory.juniper.net/de
```
Then, simply run `make image` again, and in addition to rebuilding (if needed), the image `juniper-eda-de:latest` will be tagged and pushed to the location specified in the `REGISTRY_URL`.
The following tools are recommended for development of this collection:

1. [brew.sh](https://brew.sh) -- only needed for Mac OS X
2. pyenv
3. pipenv
4. pre-commit
Install Homebrew:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

If you have an ARM-based Mac, make sure the following is in your `~/.zprofile`:
```bash
eval "$(/opt/homebrew/bin/brew shellenv)"
```
For an Intel-based Mac, you may have to add this to `~/.zprofile` instead:

```bash
eval "$(/usr/local/bin/brew shellenv)"
```
Run the following command to install pyenv:
```bash
brew install xz pyenv
```
Add this to your `~/.zprofile` and restart your shell:

```bash
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
```
On Linux, install pyenv with the installer script:

```bash
curl https://pyenv.run | bash
```
To set it up in your shell, follow these instructions: https://github.com/pyenv/pyenv?tab=readme-ov-file#b-set-up-your-shell-environment-for-pyenv
On Ubuntu, you'll need to install some packages to build Python properly:
```bash
sudo apt -y install build-essential liblzma-dev libbz2-dev zlib1g zlib1g-dev libssl-dev libffi-dev libsqlite3-dev libncurses-dev libreadline-dev
```
Download the aos-sdk from the Juniper Download page for Apstra. Select the option for the Apstra Automation Python 3 SDK. The SDK is a closed-source project. Juniper Networks is actively working to split out the Apstra client code and open-source it, as that is the only part needed for this collection.
The file that's downloaded will have either a `.whl` or a `.dms` extension. Just move the file to the expected location. For example: `mv ~/Downloads/aos_sdk-0.1.0-py3-none-any.dms build/wheels/aos_sdk-0.1.0-py3-none-any.whl`.
Run the setup make target:
```bash
make setup
```
Optional: Follow the pipenv command completion setup instructions. Only do this if pipenv is installed in your global Python interpreter.
To use the development environment after setting everything up, simply run the commands:
```bash
pipenv install --dev
pipenv shell
```
This will start a new interactive shell in which the known supported version of Ansible and the dependencies required to use the Apstra SDK are installed.
License: Apache 2.0
| Ansible Automation Platform version |
|---|
| 2.4 |
| 2.5 |