kubectl create secret generic azure-secrets --from-env-file <(jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" azure-credentials.json)
Install the operator with Helm
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets
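To check the installation, you can verify that the operator pods come up and that the CRDs were registered; the label selector below assumes the chart's default labels:
kubectl get pods -l app.kubernetes.io/name=external-secrets
kubectl get crds | grep external-secrets.io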
Creating an Azure Service Principal
provider "azurerm" {
features {}
}
provider "kubernetes" {
config_path = "~/.kube/config"
}
provider "azuread" {
tenant_id = data.azurerm_client_config.current.tenant_id
}
data "azurerm_client_config" "current" {}
resource "azuread_application" "eso_app" {
display_name = "external-secrets-operator"
}
resource "azuread_service_principal" "eso_sp" {
application_id = azuread_application.eso_app.application_id
}
resource "azuread_service_principal_password" "eso_sp_password" {
service_principal_id = azuread_service_principal.eso_sp.id
value = random_password.sp_password.result
end_date = "2099-01-01T00:00:00Z"
}
resource "random_password" "sp_password" {
length = 32
special = true
}
resource "azurerm_role_assignment" "kv_role_assignment" {
principal_id = azuread_service_principal.eso_sp.id
role_definition_name = "Key Vault Secrets User"
scope = azurerm_key_vault.example.id
}
# Optional: build a JSON document in the Azure SDK format (like azure-credentials.json)
locals {
azure_credentials = jsonencode({
clientId = azuread_service_principal.eso_sp.application_id
clientSecret = azuread_service_principal_password.eso_sp_password.value
tenantId = data.azurerm_client_config.current.tenant_id
subscriptionId = data.azurerm_client_config.current.subscription_id
activeDirectoryEndpointUrl = "https://login.microsoftonline.com"
resourceManagerEndpointUrl = "https://management.azure.com/"
activeDirectoryGraphResourceId = "https://graph.windows.net/"
sqlManagementEndpointUrl = "https://management.core.windows.net:8443/"
galleryEndpointUrl = "https://gallery.azure.com/"
managementEndpointUrl = "https://management.core.windows.net/"
})
}
# Create the Kubernetes Secret for the service principal
resource "kubernetes_secret" "azure_secrets" {
  metadata {
    name      = "azure-secrets"
    namespace = "default"
  }
  # The kubernetes provider base64-encodes values under "data" automatically,
  # so the JSON is passed in as plain text here.
  data = {
    "azure-credentials.json" = local.azure_credentials
  }
}
Explanation of the resources:
- azuread_application: creates the Azure AD application that represents the service principal.
- azuread_service_principal: creates the service principal for the application.
- azuread_service_principal_password: creates a password (client secret) for the service principal.
- azurerm_role_assignment: assigns the service principal the Key Vault Secrets User role so it can access the Key Vault.
- kubernetes_secret: creates a Kubernetes Secret that stores the Azure credentials in the azure-credentials.json format. The JSON is generated via the local variable. Note that the SecretStore below reads the keys clientId and clientSecret, so either store those keys individually or adjust authSecretRef accordingly.
Or via the CLI:
$ az ad sp create-for-rbac --name "external-secrets-operator" --sdk-auth > azure-credentials.json
$ kubectl create secret generic azure-secrets --from-env-file <(jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" azure-credentials.json)
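As a quick sanity check, you can list which keys ended up in the secret (this assumes jq is available):
kubectl get secret azure-secrets -o json | jq -r '.data | keys[]'
kubectl get secret azure-secrets -o jsonpath='{.data.clientId}' | base64 -d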
Create the SecretStore
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: example-secret-store
spec:
  provider:
    azurekv:
      tenantId: "xxxxx-xxx-xxxx-xxxx-xxxxxxx"
      vaultUrl: "https://xxxxx.vault.azure.net"
      authSecretRef:
        clientId:
          name: azure-secrets
          key: clientId
        clientSecret:
          name: azure-secrets
          key: clientSecret
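After applying the manifest the store should report a valid/ready status; a minimal check, assuming the manifest was saved as secret-store.yaml (the file name is just an example):
kubectl apply -f secret-store.yaml
kubectl get secretstore example-secret-store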
Create the ExternalSecret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example-external-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: example-secret-store
  target:
    name: secret-to-be-created
    creationPolicy: Owner
  data:
    - secretKey: secret-key
      remoteRef:
        key: secret-key
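Once the ExternalSecret has synced, the operator creates the target secret; a rough way to verify, assuming the manifest was saved as external-secret.yaml:
kubectl apply -f external-secret.yaml
kubectl get externalsecret example-external-secret
kubectl get secret secret-to-be-created -o yaml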
Optional: add the repo
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
Search the repo for available charts:
helm search repo argo
NAME CHART VERSION APP VERSION DESCRIPTION
argo/argo 1.0.0 v2.12.5 A Helm chart for Argo Workflows
argo/argo-cd 7.6.6 v2.12.4 A Helm chart for Argo CD, a declarative, GitOps...
argo/argo-ci 1.0.0 v1.0.0-alpha2 A Helm chart for Argo-CI
argo/argo-events 2.4.8 v1.9.2 A Helm chart for Argo Events, the event-driven ...
argo/argo-lite 0.1.0 Lighweight workflow engine for Kubernetes
argo/argo-rollouts 2.37.7 v1.7.2 A Helm chart for Argo Rollouts
argo/argo-workflows 0.42.4 v3.5.11 A Helm chart for Argo Workflows
argo/argocd-applicationset 1.12.1 v0.4.1 A Helm chart for installing ArgoCD ApplicationSet
argo/argocd-apps 2.0.1 A Helm chart for managing additional Argo CD Ap...
argo/argocd-image-updater 0.11.0 v0.14.0 A Helm chart for Argo CD Image Updater, a tool ...
argo/argocd-notifications 1.8.1 v1.2.1 A Helm chart for ArgoCD notifications, an add-o...
Pull the desired Helm chart
helm pull argo/argo-cd
Unpack the Helm chart
tar -xvf argo-cd-7.6.6.tgz
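The unpacked chart can then be installed from the local directory, for example with a custom values file; the values.yaml and the argocd namespace here are only assumptions:
helm install argo-cd ./argo-cd -n argocd --create-namespace -f values.yaml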
Add the Helm repo
helm repo add jetstack https://charts.jetstack.io --force-update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.15.3 \
--set crds.enabled=true
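A quick check that cert-manager (controller, webhook and cainjector) is running and its CRDs are registered:
kubectl get pods -n cert-manager
kubectl get crds | grep cert-manager.io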
Configure a ClusterIssuer
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ta@example.com
    privateKeySecretRef:
      name: example-issuer-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
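After applying it, the issuer should become Ready once the ACME account has been registered; a sketch assuming the manifest was saved as clusterissuer.yaml:
kubectl apply -f clusterissuer.yaml
kubectl get clusterissuer letsencrypt
kubectl describe clusterissuer letsencrypt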
! Spot node pool can’t be a default node pool, it can only be used as a secondary pool.
Here is the default node pool:
resource "azurerm_kubernetes_cluster" "aks" {
name = "aks-demo-cluster"
location = azurerm_resource_group.aks_rg.location
resource_group_name = azurerm_resource_group.aks_rg.name
dns_prefix = "aksdemocluster"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_DS2_v2"
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = "azure"
load_balancer_sku = "Standard"
}
tags = {
environment = "Demo"
}
}
Here is a spot node pool:
resource "azurerm_kubernetes_cluster_node_pool" "spot_pool" {
name = "spotpool"
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
vm_size = "Standard_DS2_v2"
node_count = 1
enable_auto_scaling = true
min_count = 1
max_count = 3
priority = "Spot"
eviction_policy = "Delete"
spot_max_price = -1 # pay up to the current spot price (no cap)
tags = {
environment = "Spot"
}
}
Explanation:
- default_node_pool: a minimal node pool that is used to create the cluster initially.
- azurerm_kubernetes_cluster_node_pool: creates an additional node pool that supports spot instances and autoscaling. This pool is attached to the main cluster.
Important notes:
- Autoscaling and spot instances are configured in additional node pools, separate from the default node pool.
- When using a spot node pool, it is good practice to always keep a regular node pool (such as default_node_pool), because spot instances can be evicted at any time (see the check below).
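AKS labels and taints spot nodes (kubernetes.azure.com/scalesetpriority=spot), so workloads need a matching toleration; a rough way to inspect the spot nodes once the pool is up:
kubectl get nodes -l kubernetes.azure.com/scalesetpriority=spot -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints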
First, a service principal has to be created so that a CI/CD pipeline can log in automatically:
az login
az account set --subscription "your-subscription-id"
az ad sp create-for-rbac --name "<your-sp-name>" --role="Contributor" --scopes="/subscriptions/<your-subscription-id>"
A service principal can also be created for a single resource group:
az ad sp create-for-rbac --name "sp-pipeline" --role="Contributor" --scopes="/subscriptions/xxxx-xxxx-44f0-8fbe-xxxxx/resourceGroups/pg_xxxxx-171"
This is the output:
{
"appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"displayName": "<your-sp-name>",
"password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
These credentials can be set as environment variables for Terraform:
export ARM_CLIENT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export ARM_CLIENT_SECRET="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export ARM_SUBSCRIPTION_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export ARM_TENANT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
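The same credentials can also be used to log in non-interactively with the Azure CLI, which is handy for testing the service principal before wiring it into a pipeline:
az login --service-principal --username "$ARM_CLIENT_ID" --password "$ARM_CLIENT_SECRET" --tenant "$ARM_TENANT_ID"
az account set --subscription "$ARM_SUBSCRIPTION_ID"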
In a GitHub Actions pipeline it could look like this:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v1
      - name: Azure Login
        env:
          ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
          ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
          ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
          ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
        run: terraform init
Create the Terraform main.tf (Terraform is still executed locally).
Then run terraform apply to deploy the VM.
# Provider configuration
provider "azurerm" {
features {}
}
# Use the existing resource group
data "azurerm_resource_group" "existing_rg" {
name = "rg_xxxxxxxx-171" # Deine bestehende Resource Group
}
# Virtual network
resource "azurerm_virtual_network" "vnet" {
name = "myVnet"
address_space = ["10.0.0.0/16"]
location = data.azurerm_resource_group.existing_rg.location
resource_group_name = data.azurerm_resource_group.existing_rg.name
}
# Subnet
resource "azurerm_subnet" "subnet" {
name = "mySubnet"
resource_group_name = data.azurerm_resource_group.existing_rg.name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.0.1.0/24"]
}
# Network interface
resource "azurerm_network_interface" "nic" {
  name                = "myNic"
  location            = data.azurerm_resource_group.existing_rg.location
  resource_group_name = data.azurerm_resource_group.existing_rg.name
  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
    # Attach the public IP defined below so the VM is reachable via SSH
    public_ip_address_id          = azurerm_public_ip.public_ip.id
  }
}
# Virtual machine
resource "azurerm_linux_virtual_machine" "vm" {
  name                            = "myVM"
  resource_group_name             = data.azurerm_resource_group.existing_rg.name
  location                        = data.azurerm_resource_group.existing_rg.location
  size                            = "Standard_B1s"
  admin_username                  = "azureuser"
  admin_password                  = "Password4321"
  # Required because password authentication is used instead of an SSH key
  disable_password_authentication = false
  network_interface_ids           = [azurerm_network_interface.nic.id]
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }
  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}
# Public IP address
resource "azurerm_public_ip" "public_ip" {
name = "myPublicIP"
location = data.azurerm_resource_group.existing_rg.location
resource_group_name = data.azurerm_resource_group.existing_rg.name
allocation_method = "Dynamic"
}
# Network security group for the VM
resource "azurerm_network_security_group" "nsg" {
name = "myNSG"
location = data.azurerm_resource_group.existing_rg.location
resource_group_name = data.azurerm_resource_group.existing_rg.name
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# Associate the network interface with the security group
resource "azurerm_network_interface_security_group_association" "nic_nsg" {
network_interface_id = azurerm_network_interface.nic.id
network_security_group_id = azurerm_network_security_group.nsg.id
}
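For the local run mentioned above, the usual workflow applies, roughly:
terraform init
terraform validate
terraform plan -out=tfplan
terraform apply tfplan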
For Terraform to be able to deploy from the pipeline as well, the tfstate has to be available there, i.e. a shared backend containing the Terraform state has to be used.
To do this, a storage account is created in Azure in which the state file can be stored. This can also be done with Terraform: extend main.tf with the storage resources and run terraform apply again:
# Create the storage account
resource "azurerm_storage_account" "storage" {
name = "tfstatedemovm"
location = data.azurerm_resource_group.existing_rg.location
resource_group_name = data.azurerm_resource_group.existing_rg.name
account_tier = "Standard"
account_replication_type = "LRS"
}
# Create the storage container
resource "azurerm_storage_container" "container" {
name = "tfstate"
storage_account_name = azurerm_storage_account.storage.name
container_access_type = "private"
}
output "storage_account_name" {
value = azurerm_storage_account.storage.name
}
output "container_name" {
value = azurerm_storage_container.container.name
}
Now you can switch to the Azure Storage backend; add the following block to main.tf:
terraform {
backend "azurerm" {
resource_group_name = "rg-xxxxxxx-171"
storage_account_name = "tfstatedemovm"
container_name = "tfstate"
key = "terraform.tfstate"
}
}
terraform init will now notice that the backend configuration has changed and suggest migrating the Terraform state to the Azure backend with terraform init -migrate-state:
$ terraform init
Initializing the backend...
╷
│ Error: Backend configuration changed
│
│ A change in the backend configuration has been detected, which may require migrating existing state.
│
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
╵
$ terraform init -migrate-state
Initializing the backend...
Terraform detected that the backend type changed from "local" to "azurerm".
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the
newly configured "azurerm" backend. An existing non-empty state already exists in
the new backend. The two states have been saved to temporary files that will be
removed after responding to this query.
Previous (type "local"): /tmp/terraform3111667706/1-local.tfstate
New (type "azurerm"): /tmp/terraform3111667706/2-azurerm.tfstate
Do you want to overwrite the state in the new backend with the previous state?
Enter "yes" to copy and "no" to start with the existing state in the newly
configured "azurerm" backend.
Enter a value: yes
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v4.1.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
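To double-check that the state now lives in the Azure backend, the blob can be listed (depending on your permissions you may additionally need --auth-mode login or an account key):
az storage blob list --account-name tfstatedemovm --container-name tfstate --output table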
Now a service connection still needs to be created.
In the project settings, click Service Connections
Click "Azure Resource Manager" and then "Service principal (automatic)"



Create an azure-pipelines.yml
The azureSubscription is the name of the service connection just created (demoVM)
trigger:
  - main # or your branch name

pool:
  vmImage: 'ubuntu-latest'

variables:
  ARM_CLIENT_ID: $(servicePrincipalId)
  ARM_CLIENT_SECRET: $(servicePrincipalKey)
  ARM_SUBSCRIPTION_ID: $(subscriptionId)
  ARM_TENANT_ID: $(tenantId)

steps:
  - checkout: self
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'demoVM' # the name of the Azure service connection in Azure DevOps
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        # Install Terraform
        curl -o- https://raw.githubusercontent.com/tfutils/tfenv/master/install.sh | bash
        export PATH="$HOME/.tfenv/bin:$PATH"
        tfenv install latest
        tfenv use latest
        # Run Terraform commands
        terraform init
        terraform plan -out=tfplan
        terraform apply -auto-approve tfplan
      workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
Install the test framework
dotnet add package xunit
dotnet add package xunit.runner.visualstudio
dotnet add package Microsoft.NET.Test.Sdk
Visual Studio Code Extension
There is an extension for .NET tests in Visual Studio Code:
- Install the .NET Core Test Explorer extension: it lets you run tests directly from VS Code and view the results.
To install the extension:
- In VS Code, open the Extensions view (Ctrl+Shift+X), search for .NET Core Test Explorer and install it.
Create the web app and the test project
dotnet new webapp -n WebAppDemo -o WebAppDemo
dotnet new xunit -n WebAppTest -o WebAppTest
Reference the web app from the test project
cd WebAppTest
dotnet add reference ../WebAppDemo/WebAppDemo.csproj
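The tests can then be run from the test project, for example:
dotnet test
dotnet test --logger "console;verbosity=detailed"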
Install the mssql extension
dotnet add WebAppDemo package Microsoft.EntityFrameworkCore.SqlServer
In Visual Studio Code, search the extensions for "mssql"
Optional: run a local MSSQL Docker container (Linux)
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=xxxx" \
-p 1433:1433 --name localhost --hostname localhost -d \
mcr.microsoft.com/mssql/server:2022-latest
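To check that the container accepts connections, sqlcmd from the image can be used; this sketch assumes the sa password chosen above and the mssql-tools18 path shipped with the 2022 image:
docker exec -it localhost /opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P "xxxx" -C -Q "SELECT @@VERSION"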
- Log in at https://dev.azure.com/
- Create a new project

Create an epic under Work Items

Create a feature under Work Items

Link the feature to the created epic (Add Link/Existing Item)

Create a PBI (Product Backlog Item) and click it to add a user story

Sort the backlog items (drag and drop) and switch to the board view

The product owner has to approve the PBIs (move them into the column or click on State and select Approved)

Configure the sprints under Project Settings

Here the PBIs can be assigned to the sprints via drag and drop.

Here tasks can be created for the PBIs (Sprints -> Taskboard)

Configure the Bitwarden backend:
bw config server https://bitwarden.example.de/
Saved setting `config`.
Log in with the API key (created/read via the GUI):
Set the ClientID and ClientSecret as environment variables
export BW_CLIENTID="organization.f2c7e8e7-4370-47f6-96ab-4xxxxxxx"
export BW_CLIENTSECRET="2aOkxxxxxxxx"
$ bw login --apikey
? client_id: organization.xxxxx-4370-47f6-xxxx-47a8axxxx7db
? client_secret: 2aOkWAretz45563456qLCu3UvrGgj
You are logged in!
To unlock your vault, use the `unlock` command. ex:
$ bw unlock
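bw unlock returns a session key that the other commands need; a typical follow-up (the item name is only an example):
export BW_SESSION="$(bw unlock --raw)"
bw list items --search "example-item"
bw get password "example-item"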
spec:
  type: LoadBalancer
  externalIPs:
    - 192.168.0.10
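Whether the address is actually picked up can be checked on the Service afterwards; the service name here is a placeholder:
kubectl get svc <service-name> -o wide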