The Red Hat Ecosystem Catalog is the official source for discovering and learning more about the Red Hat Ecosystem of both Red Hat and certified third-party products and services.
We’re the world’s leading provider of enterprise open source solutions—including Linux, cloud, container, and Kubernetes. We deliver hardened solutions that make it easier for enterprises to work across platforms and environments, from the core datacenter to the network edge.
Project Status: stable
Operator Version: v2
When you deploy a Graph Lakehouse cluster using the operator, the following set of images is used for the actual cluster deployment. Reference docker commands to pull the latest release of each are given below.
```sh
docker pull registry.connect.redhat.com/cambridgesemantics/anzograph-operator:latest
docker pull registry.connect.redhat.com/cambridgesemantics/anzograph-db:latest
docker pull registry.connect.redhat.com/cambridgesemantics/anzograph-frontend:latest
docker pull registry.connect.redhat.com/cambridgesemantics/anzograph:latest
```
A Graph Lakehouse cluster can be deployed using the following steps.
```sh
# Create Namespace, mention name of your namespace in metadata.name
$ kubectl create -f deploy/v1_namespace_default.yaml

# Setup Service Account
$ kubectl create -f deploy/default_v1_serviceaccount_anzograph-operator.yaml --namespace <namespace>

# Setup RBAC
$ kubectl create -f deploy/default_rbac.authorization.k8s.io_v1_role_anzograph-operator.yaml --namespace <namespace>
$ kubectl create -f deploy/default_rbac.authorization.k8s.io_v1_rolebinding_anzograph-operator.yaml --namespace <namespace>
$ kubectl create -f deploy/rbac.authorization.k8s.io_v1_clusterrole_anzograph-operator.yaml
$ kubectl create -f deploy/rbac.authorization.k8s.io_v1_clusterrolebinding_anzograph-operator.yaml

# Setup the CRD
$ kubectl create -f deploy/crds/apiextensions.k8s.io_v1_customresourcedefinition_anzographs.anzograph.clusters.cambridgesemantics.com.yaml

# Deploy anzograph-operator
$ kubectl create -f deploy/default_apps_v1_deployment_anzograph-operator.yaml --namespace <namespace>

# Deploy AnzoGraph Custom Resource (CR), i.e. AnzoGraph cluster deployment
$ kubectl apply -f deploy/default_anzograph.clusters.cambridgesemantics.com_v2_anzograph_azg01.yaml --namespace <namespace>
```
NOTE: Edit the operator deployment and the CR deployment with the correct docker image details before applying them.
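For reference, the image is set at the standard `apps/v1` Deployment path. A minimal sketch of the relevant excerpt (the container name and metadata shown here are illustrative; check the shipped manifest for the actual values):

```yaml
# Excerpt of deploy/default_apps_v1_deployment_anzograph-operator.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anzograph-operator
spec:
  template:
    spec:
      containers:
        - name: anzograph-operator
          # Replace with the operator image and tag you intend to run
          image: registry.connect.redhat.com/cambridgesemantics/anzograph-operator:latest
```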
To uninstall, delete the resources in reverse order.

```sh
# Delete AnzoGraph CR
$ kubectl delete -f deploy/default_anzograph.clusters.cambridgesemantics.com_v2_anzograph_azg01.yaml --namespace <namespace>

# Delete anzograph-operator
$ kubectl delete -f deploy/default_apps_v1_deployment_anzograph-operator.yaml --namespace <namespace>

# Delete RBAC
$ kubectl delete -f deploy/default_rbac.authorization.k8s.io_v1_role_anzograph-operator.yaml --namespace <namespace>
$ kubectl delete -f deploy/default_rbac.authorization.k8s.io_v1_rolebinding_anzograph-operator.yaml --namespace <namespace>
$ kubectl delete -f deploy/rbac.authorization.k8s.io_v1_clusterrole_anzograph-operator.yaml
$ kubectl delete -f deploy/rbac.authorization.k8s.io_v1_clusterrolebinding_anzograph-operator.yaml

# Delete Service Account
$ kubectl delete -f deploy/default_v1_serviceaccount_anzograph-operator.yaml --namespace <namespace>

# Delete CRD
$ kubectl delete -f deploy/crds/apiextensions.k8s.io_v1_customresourcedefinition_anzographs.anzograph.clusters.cambridgesemantics.com.yaml
```
## Persistence
The Graph Lakehouse custom resource utilizes PVs (disks) to store the Graph Lakehouse configuration and, if enabled, persisted data. By default (when no annotations are specified), these PVCs/PVs are deleted when the Graph Lakehouse custom resource is deleted. It is often preferable to change this default behavior using the annotations described in the next section.
## Graph Lakehouse Custom Resource (CR) annotations
Annotation keys and values can only be strings; all values must be string-encoded.
| Name | Description | Default |
|---|---|---|
| cambridgesemantics/configStorageInGB | Size for config Persistent Volume Claim | 5Gi |
| cambridgesemantics/spillStorageInGB | Size for spill Persistent Volume Claim | 20Gi |
| cambridgesemantics/dataStorageInGB | Size for data Persistent Volume Claim | |
| cambridgesemantics/retainConfigStorage | Whether to retain config Persistent Volume Claim | false |
| cambridgesemantics/retainSpillStorage | Whether to retain spill Persistent Volume Claim | false |
| cambridgesemantics/retainDataStorage | Whether to retain data Persistent Volume Claim | false |
| cambridgesemantics.com/skip-lb-check | Whether to skip ensuring of service LoadBalancer | false |
You can opt into config, spill, and data persistence by setting `cambridgesemantics/retainConfigStorage`, `cambridgesemantics/retainSpillStorage`, and `cambridgesemantics/retainDataStorage` to `true`.
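For example, to retain all three volumes when the CR is deleted, the annotations (string-encoded, as noted above) can be set on the CR metadata. A sketch, with the kind and apiVersion inferred from the CRD name and the CR name purely illustrative:

```yaml
apiVersion: anzograph.clusters.cambridgesemantics.com/v2
kind: AnzoGraph
metadata:
  name: azg01
  annotations:
    # Values must be string-encoded, hence the quotes
    cambridgesemantics/retainConfigStorage: "true"
    cambridgesemantics/retainSpillStorage: "true"
    cambridgesemantics/retainDataStorage: "true"
```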
Please note that data persistence is not supported if:
* you change the "shape" of the Graph Lakehouse cluster (different slice/vCPU, different count of AZG database nodes), or
* you change the database container of the CR.
In those cases, make sure you export the (in-memory) data of the Graph Lakehouse instance before you redeploy, and re-load the data afterwards.
## Upgrading Graph Lakehouse Custom Resource
Please note that Graph Lakehouse dynamic deployments (embedded from within Anzo) do not currently support custom resource upgrades from the Graph Studio / Anzo user interface. For standalone installs, a few manual steps must be performed before `kubectl apply`. We intend to introduce complete upgrade support in future releases.
1. Persistence must be enabled in the original CR deployment. This can be achieved by setting annotations such as:
```yaml
annotations:
  cambridgesemantics/configStorageInGB: "5Gi"
cambridgesemantics/retainConfigStorage: "true"
cambridgesemantics/dataStorageInGB: "30"
cambridgesemantics/retainDataStorage: "true"
  cambridgesemantics/spillStorageInGB: "32Gi"
cambridgesemantics/retainSpillStorage: "false"
```
2. Stop Graph Lakehouse Database
```sh
export AZG_NS=default
export AZG_CR=azg01
kubectl --namespace="${AZG_NS}" get pod --selector="app_mgmt=anzograph-mgmt-grpc,cluster_name=${AZG_CR},apps.kubernetes.io/pod-index=0" --output=name \
| xargs -I{} kubectl --namespace="${AZG_NS}" exec {} -- bash -c 'azgctl -stop'
```
3. Prepare the database for new UDXs
```sh
export AZG_NS=default
export AZG_CR=azg01
kubectl --namespace="${AZG_NS}" get pod --selector="app_mgmt=anzograph-mgmt-grpc,cluster_name=${AZG_CR}" --output=name \
| xargs -I{} kubectl --namespace="${AZG_NS}" exec {} -- bash -c 'rm -rf lib/udx/{.activated,*}'
```
4. Remove spill files in `/opt/anzograph/spill/`
```sh
export AZG_NS=default
export AZG_CR=azg01
kubectl --namespace="${AZG_NS}" get pod --selector="app_mgmt=anzograph-mgmt-grpc,cluster_name=${AZG_CR}" --output=name \
| xargs -I{} kubectl --namespace="${AZG_NS}" exec {} -- bash -c 'rm -f spill/*'
```
5. Delete persisted data (you have a backup, right?)
```sh
export AZG_NS=default
export AZG_CR=azg01
kubectl --namespace="${AZG_NS}" get pod --selector="app_mgmt=anzograph-mgmt-grpc,cluster_name=${AZG_CR}" --output=name \
| xargs -I{} kubectl --namespace="${AZG_NS}" exec {} -- bash -c 'rm -fr persistence/*'
```
6. Uninstall current Graph Lakehouse CR
```sh
export AZG_NS=default
export AZG_CR=azg01
kubectl --namespace="${AZG_NS}" delete anzograph/${AZG_CR}
```
7. Install CR with same name and updated CR yaml file
```sh
export AZG_NS=default
export AZG_CR=azg01
kubectl --namespace="${AZG_NS}" apply -f <cr-yaml-file>
```
The following table lists the configurable parameters for Graph Lakehouse and their default values (CR API version: v2).
| Parameter | Description | Default |
|---|---|---|
| metadata.name | Name of CR | |
| metadata.namespace | Namespace of CR | |
| metadata.labels | Dictionary of (key: val) as labels of CR | |
| spec.db.nodeConfig.spec | Configuration specification for Graph Lakehouse pods | |
| spec.db.nodeConfig.spec.replicas | Number of pods for Graph Lakehouse | 1 |
| spec.db.nodeConfig.spec.serviceName | Name of headless service for Graph Lakehouse | anzograph- |
| spec.db.nodeConfig.spec.template.spec.serviceAccountName | Service account name for pods | anzograph-operator |
| spec.db.nodeConfig.spec.template.spec.containers.x.Name | Name of Graph Lakehouse container | db |
| spec.db.nodeConfig.spec.template.spec.containers.y.Name | Name of sidecar container, if sidecar logging is enabled | logger |
| spec.db.deployLoggerSidecar | Set this to true to enable sidecar logging | false |
| spec.db.service | Database loadbalancer service attributes, of type v1.Service | commented, please uncomment to add value |
| spec.db.volumes | List of persistent volumes for Graph Lakehouse | commented, please uncomment to add value |
| spec.db.volumes.[i].name | Name for persistent volume | |
| spec.db.volumes.[i].mountPath | Path where persistent volume should be mounted inside container | |
| spec.db.volumes.[i].pv | Attributes to configure persistent volume, of type v1.PersistentVolume | |
| spec.db.volumes.[i].pvc | Attributes to configure persistent volume claim, of type v1.PersistentVolumeClaim | |
| spec.db.volumes.[i].deletePVC | Set this to true if you want to delete PVC after CR deletion | false |
| spec.db.settingsProfile | Named settings bundles/profiles to configure Graph Lakehouse | standalone |
| spec.db.settingsConfContent | When settingsProfile is 'custom', use this to override default settings(dictionary of key: val) | |
| spec.db.license | User provided license string(BYOL) | "" |
| spec.frontend.nodeConfig.spec | Configuration specification for Graph Lakehouse Frontend pods | |
| spec.frontend.nodeConfig.spec.replicas | Number of pods for Graph Lakehouse Frontend | 1 |
| spec.frontend.nodeConfig.spec.serviceName | Name of headless service for Graph Lakehouse Frontend | anzograph- |
| spec.frontend.nodeConfig.spec.template.spec.serviceAccountName | Service account name for pods | anzograph-operator |
| spec.frontend.nodeConfig.spec.template.spec.containers.x.Name | Name of Graph Lakehouse Frontend container | frontend |
| spec.frontend.service | Frontend loadbalancer service attributes, of type v1.Service | commented, please uncomment to add value |
| spec.frontend.volumes | List of persistent volumes for Graph Lakehouse Frontend | commented, please uncomment to add value |
| spec.frontend.volumes.[i].name | Name for persistent volume | |
| spec.frontend.volumes.[i].mountPath | Path where persistent volume should be mounted inside container | |
| spec.frontend.volumes.[i].pv | Attributes to configure persistent volume, of type v1.PersistentVolume | |
| spec.frontend.volumes.[i].pvc | Attributes to configure persistent volume claim, of type v1.PersistentVolumeClaim | |
| spec.frontend.volumes.[i].deletePVC | Set this to true if you want to delete PVC after CR deletion | false |
| spec.uiCredentials.uiCredentials | Name of existing secret for frontend credentials | |
| spec.uiCredentials.grpcCredentials | Name of existing secret for gRPC credentials | |
| spec.uiCredentials.keystoreCredentials | Name of existing secret for frontend keystore | |
| spec.deployFrontend | Set this to true if you want to deploy frontend for Graph Lakehouse | false |
| spec.uiUserCerts.uiUserServiceCert | Graph Lakehouse UI access certificate | commented, please uncomment to add value |
| spec.uiUserCerts.uiUserServiceKey | Graph Lakehouse UI access certificate key | commented, please uncomment to add value |
| spec.uiUserCerts.uiUserCACert | Graph Lakehouse UI access ca certificate | commented, please uncomment to add value |
| spec.dbCertificate | Graph Lakehouse DB certificate resource to be issued from cert-manager | commented, please uncomment to add value |
| spec.frontendCertificate | Graph Lakehouse UI certificate resource to be issued from cert-manager | commented, please uncomment to add value |
Documentation: https://docs.cambridgesemantics.com/
The following information was extracted from the containerfile and other sources.
| Summary | Graph Lakehouse® Operator, ubi9 Image |
|---|---|
| Description | Graph Lakehouse® Operator lets a user deploy and manage life-cycle of Graph Lakehouse® DB. |
| Provider | Cambridge Semantics |
| Maintainer | https://altair.com/customer-support |
| Repository name | Graph Lakehouse® Operator, ubi9 Image |
|---|---|
| Image version | 3.2.5 |
| Architecture | amd64 |
Use the following instructions to get images from a Red Hat container registry using registry service account tokens. You will need to create a registry service account before completing any of the following tasks.
First, you will need to add a reference to the appropriate secret and repository to your Kubernetes pod configuration via an imagePullSecrets field.
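A minimal sketch of such a pod configuration (the pod, container, and secret names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anzograph-operator-pod
spec:
  containers:
    - name: operator
      image: registry.connect.redhat.com/cambridgesemantics/anzograph-operator:latest
  imagePullSecrets:
    # Secret created from your registry service account token
    - name: <pull-secret-name>
```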
Then, log in to the registry with the service account token and pull the image, either from the command line (on a system with podman installed, or with the docker service installed and running) or from the OpenShift dashboard.
You can also get images from a Red Hat container registry using your Red Hat login, again with either podman or docker. For best practices, it is recommended to use registry service account tokens when pulling content for OpenShift deployments.
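A typical command sequence for either tool might look like the following (a sketch; substitute your own Red Hat login or service account token when prompted):

```sh
# With podman installed
podman login registry.connect.redhat.com
podman pull registry.connect.redhat.com/cambridgesemantics/anzograph-operator:latest

# Or, with the docker service installed and running
docker login registry.connect.redhat.com
docker pull registry.connect.redhat.com/cambridgesemantics/anzograph-operator:latest
```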