I recently had to integrate HashiCorp's Vault with our SSO provider Keycloak using OpenID Connect.

[Image: Keycloak, by marcus-povery.co.uk]
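As a rough sketch of the Vault side (the discovery URL, client ID, secret and role name below are placeholders, not our actual values), the integration boils down to enabling the OIDC auth method and pointing it at the Keycloak realm:

# enable the OIDC auth method
vault auth enable oidc

# point Vault at the Keycloak realm (placeholder URL and credentials)
vault write auth/oidc/config \
    oidc_discovery_url="https://keycloak.example.com/auth/realms/myrealm" \
    oidc_client_id="vault" \
    oidc_client_secret="<client-secret>" \
    default_role="default"

# map a role to token policies and the allowed redirect URIs
vault write auth/oidc/role/default \
    allowed_redirect_uris="https://vault.example.com/ui/vault/auth/oidc/oidc/callback" \
    user_claim="sub" \
    token_policies="default"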


In this post I'd like to explain my view of Kapitan's inventory, the inventory/classes and inventory/targets directories, how I understand and use them, and give a few examples.

If you need a little introduction, you can read my previous post "Kapitan — rise and shine", and you can also follow along with the whole Kapitan blog.

Overview

[Diagram: the overall components]
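As a minimal sketch of how the two directories interact (the file name and values here are hypothetical), a target pulls in classes and can override their parameters:

# inventory/targets/my-target.yml (hypothetical example)
classes:
  - common               # pulls in inventory/classes/common.yml
parameters:
  target_name: my-target
  namespace: my-target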


Here is why I will use Kapitan over Helm, Kustomize or Pulumi to ease our Kubernetes configuration management.

Why Kapitan?

I was searching for a new tool we could use to ease our Kubernetes configuration management, so I personally tested Helm, Kustomize, Pulumi and Kapitan. Each tool test took around one to two weeks.

Helm, for me personally, felt much like Jinja2 templating, just with Go templates instead. Yes, I know it is much more than that, but not for my purposes, so I saw little benefit in switching to it.


Here I try to help you with the Kustomize issues I had and how I got around them. You can follow for more updates in the near future.

[Image by https://ordina-jworks.github.io/]

Basic Preparations

$ brew install kustomize
[...]
$ mkdir -p myrepo/base/sonarqube
$ cd myrepo
$ kustomize create

Case 1: Preparation

Replace your ingress hostname with kustomize configMapGenerator + vars

base/sonarqube/ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonarqube-ingress
  labels:
    app.kubernetes.io/name: sonarqube
    app.kubernetes.io/instance: default
    app.kubernetes.io/version: …
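The excerpt cuts the manifest short; to sketch the configMapGenerator + vars idea, the kustomization could generate a ConfigMap holding the hostname and expose it as a var (the names and hostname here are hypothetical):

# base/sonarqube/kustomization.yaml (sketch)
configMapGenerator:
  - name: ingress-config
    literals:
      - INGRESS_HOST=sonarqube.example.com

vars:
  - name: INGRESS_HOST
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: ingress-config
    fieldref:
      fieldpath: data.INGRESS_HOST

The Ingress then uses host: $(INGRESS_HOST); since Ingress hosts are not among kustomize's default substitution paths, a small configurations: file adding spec/rules/host to varReference is typically also needed.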

I recently tried to create an additional user in my Kubernetes cluster. I searched for hours and found nothing.

With this post I'd like to help you find the solution quicker.

First off, there are no "Users" like in LDAP or Active Directory in Kubernetes (K8s); that is just another name for a ServiceAccount (SA). ServiceAccounts are bound to a particular Namespace (NS). Unfortunately, Users and ServiceAccounts get mixed up quite a bit by Kubernetes.

I’d like to start with an overview over the cluster. Then I will choose a clusterrole (edit) to permit my Serviceaccount to access…


In this post I try to help you get along with certain Harbor issues.

Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Having a registry closer to the build and run environment can improve the image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control and activity auditing.


1. GC or jobs are not executed, either manually or automatically
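The excerpt ends before the fix; as a first, hedged diagnostic step (assuming a docker-compose based Harbor installation with its default container names), I would start with the jobservice logs:

# check the jobservice container and read its recent logs
docker ps | grep jobservice
docker logs --tail 100 harbor-jobservice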


In the following article I try to describe all the configurations I came across while creating and destroying Rook.io with Ceph.

[Image from sakuragawa.moe]

As my company moves on from simple docker-compose, especially for internal services like DNS, LDAP, RADIUS etc., we're now challenged to migrate everything step by step into Kubernetes applications.

Therefore we took a look at k3OS (all in one) and a combination of RancherOS (ROS) + Rancher Kubernetes Engine (RKE) to get an easily deployable and understandable (at least for our purposes) setup.

Once we had decided on ROS + RKE, we quickly came upon the next steps.
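The excerpt stops here; for orientation, a bare-bones Rook/Ceph rollout follows Rook's example manifests (the file names below come from Rook's examples directory; the exact layout varies by release):

# apply Rook's example manifests in order
kubectl apply -f common.yaml     # namespaces and RBAC
kubectl apply -f operator.yaml   # the Rook operator
kubectl apply -f cluster.yaml    # the CephCluster resource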


I tried to set up k3s on Alpine Linux; this is what I came up with.

[Photo by Kevin Horvat on Unsplash]

Explanation section

Experienced users can skip ahead to "Prepare Alpine Linux".
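For reference, upstream's documented quick-install one-liner is:

curl -sfL https://get.k3s.io | sh -

The Alpine preparation in this post is essentially about making that run cleanly on Alpine (which uses OpenRC rather than systemd, for a start).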

What is Alpine Linux?


I’m working on a new dev solution where I came accross required persistent volumes with nfs. This is how I solved it.

Requirements:

NFS-Server

[Image by sysadmins.co.za]

Installing NFS (if not yet installed):

apt install nfs-kernel-server nfs-common

Edit /etc/default/nfs-common to enable the Kubernetes provisioner to remote-lock files:

vim /etc/default/nfs-common
# find NEED_STATD= and replace it with:
NEED_STATD=yes

Starting/restarting the NFS server (if not yet started):

systemctl status nfs-server
systemctl restart nfs-server

Create the NFS-Share:

mkdir -p /opt/nfs/k8s_test
# not recommended but will do it for the test
chmod -R 777 /opt/nfs

Enable the NFS-Share:

# to open it for every…
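The excerpt is cut off at this point; a typical /etc/exports entry for this share (the subnet below is a placeholder matching the permissive test setup above) would look like:

# append to /etc/exports (CIDR is a placeholder)
/opt/nfs 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)

# then reload the export table
exportfs -ra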

NOTE: this is WIP

Terraform VMware provider

Get information
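Both snippets below also reference a datacenter data source that the excerpt doesn't show; something like this is assumed (the datacenter name is a placeholder):

data "vsphere_datacenter" "dc" {
  name = "dc-01" # placeholder: your vSphere datacenter name
}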

Get your hosts' resource pool IDs by selecting the host(s)

variable "hosts" {
default = [
"esx-01.example.com",
"esx-02.example.com",
"esx-03.example.com"
]
}
data "vsphere_resource_pool" "resource_pool" {
count = length(var.hosts)
name = "${data.vsphere_host.hosts.*.name[count.index]}/Resources"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
output "resourcepool_by_hosts" {
value = data.vsphere_resource_pool.resource_pool
}

Get your hosts' resource pool IDs by selecting the resource pools

variable "hosts" {
default = [
"esx-01.example.com",
"esx-02.example.com",
"esx-03.example.com"
]
}
data "vsphere_host" "hosts" {
count = length(var.hosts)
name = var.hosts[count.index]
datacenter_id = data.vsphere_datacenter.dc.id
}
output "resourcepool_by_resourcepool" {
value = data.vsphere_host.hosts
}

The module call:

module "kcluster_db" {
  source     = "../terraform/modules/terraform-module-vmware_vm"
  host_count = 1…

D. Heinrich

Working as an IT operations engineer at NeXenio, a spin-off of the Hasso Plattner Institute building products around a digital workspace.
