
admin
sudo apt install gnome-tweaks
gsettings set org.gnome.mutter workspaces-only-on-primary false
Encrypt a file with KMS:
DEK=$(aws kms generate-data-key --key-id ${KMS_KEY_ID} --key-spec AES_128)
DEK_PLAIN=$(echo "$DEK" | jq -r '.Plaintext' | base64 -d | xxd -p)
DEK_ENC=$(echo "$DEK" | jq -r '.CiphertextBlob')
# This key must be stored alongside the encrypted artifacts; without it we cannot decrypt them
base64 -d <<< "$DEK_ENC" > key.enc
openssl enc -aes-128-cbc -e -in ${TARGET}.zip -out ${TARGET}.zip.enc -K ${DEK_PLAIN:0:32} -iv 0
# openssl warns "hex string is too short, padding with zero bytes to length" -- expected: "-iv 0" is zero-padded to a full 16-byte IV
task: [artifacts:encrypt] rm -rf ${TARGET}.zip
Decrypt a file with KMS:
DEK=$(aws kms decrypt --ciphertext-blob fileb://key.enc --output text --query Plaintext | base64 -d | xxd -p)
openssl enc -aes-128-cbc -d -K ${DEK:0:32} -iv 0 -in ${TARGET}.zip.enc -out ${TARGET}.zip
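The openssl key handling can be sanity-checked locally without AWS. A minimal sketch, assuming only openssl is installed: a random hex string stands in for DEK_PLAIN, and the explicit 32-zero IV is exactly what `-iv 0` gets padded to:

```shell
# stand-in for DEK_PLAIN: 16 random bytes as 32 hex chars (an AES-128 key)
DEK_PLAIN=$(openssl rand -hex 16)
IV=00000000000000000000000000000000   # what "-iv 0" is zero-padded to

echo "demo payload" > demo.txt
openssl enc -aes-128-cbc -e -in demo.txt -out demo.enc -K "${DEK_PLAIN:0:32}" -iv "$IV"
openssl enc -aes-128-cbc -d -in demo.enc -out demo.dec -K "${DEK_PLAIN:0:32}" -iv "$IV"
cmp demo.txt demo.dec && echo "round trip OK"
```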
Unpack all wheels in the current directory:
ls *.whl | xargs -n1 wheel unpack
Prerequisites: kubebuilder
Initialize a new project:
kubebuilder init --owner tasanger --domain sidecar-demo.ta.vg --repo sidecar-demo
$ kubectl api-resources | grep Pod
pods                       po    v1                       true   Pod
podtemplates                     v1                       true   PodTemplate
horizontalpodautoscalers   hpa   autoscaling/v2           true   HorizontalPodAutoscaler
pods                             metrics.k8s.io/v1beta1   true   PodMetrics
poddisruptionbudgets       pdb   policy/v1                true   PodDisruptionBudget
No new resource is needed, so only the controller is created:
$ kubebuilder create api --group core --version v1 --kind Pod
INFO Create Resource [y/n]
n
INFO Create Controller [y/n]
y
INFO Writing kustomize manifests for you to edit...
INFO Writing scaffold for you to edit...
INFO internal/controller/suite_test.go
INFO internal/controller/pod_controller.go
INFO internal/controller/pod_controller_test.go
INFO Update dependencies:
$ go mod tidy
Edit pod_controller.go
func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    l := log.FromContext(ctx)

    pod := &corev1.Pod{}
    if err := r.Get(ctx, req.NamespacedName, pod); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    l.Info("Pod", "Name", pod.Name, "Namespace", pod.Namespace)
    return ctrl.Result{}, nil
}
Create a launch.json in the .vscode directory:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch Package",
            "type": "go",
            "request": "launch",
            "mode": "auto",
            "program": "./cmd"
        }
    ]
}
If your main.go file is located in the cmd directory, make sure the build process takes that directory into account. Adjust the launch.json in your .vscode directory so that the cmd directory is used:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch",
            "type": "go",
            "request": "launch",
            "mode": "auto",
            "program": "${workspaceFolder}/cmd",
            "env": {},
            "args": [],
            "showLog": true,
            "trace": "verbose"
        }
    ]
}
eventbridge.tf:
data "aws_subnet" "subnet_id_A" {
  filter {
    name   = "tag:aws:cloudformation:logical-id"
    values = ["PrivateSubnetA"]
  }
}

data "aws_subnet" "subnet_id_B" {
  filter {
    name   = "tag:aws:cloudformation:logical-id"
    values = ["PrivateSubnetB"]
  }
}

data "aws_subnet" "subnet_id_C" {
  filter {
    name   = "tag:aws:cloudformation:logical-id"
    values = ["PrivateSubnetC"]
  }
}
### Lambda Function
## Create Lambda Role
resource "aws_iam_role" "announcement-nodegroup-rollout" {
  name               = "announcement-nodegroup-rollout"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
##
## Create LogGroup
resource "aws_cloudwatch_log_group" "announcement-nodegroup-rollout" {
  name              = "/aws/lambda/announcement-nodegroup-rollout"
  retention_in_days = 7
}
resource "aws_iam_policy" "announcement-nodegroup-rollout" {
  name        = "announcement-nodegroup-rollout"
  path        = "/"
  description = "IAM policy for logging from a lambda"
  policy      = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
resource "aws_iam_role_policy_attachment" "announcement-nodegroup-rollout" {
  role       = aws_iam_role.announcement-nodegroup-rollout.name
  policy_arn = aws_iam_policy.announcement-nodegroup-rollout.arn
}
##
resource "aws_lambda_function" "announcement-nodegroup-rollout" {
  description      = "Send Nodegroup Rollout Announcement to Teams"
  filename         = data.archive_file.lambda_zip.output_path
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  function_name    = "announcement-nodegroup-rollout"
  role             = aws_iam_role.announcement-nodegroup-rollout.arn
  timeout          = 180
  runtime          = "python3.12"
  handler          = "lambda_function.lambda_handler"
  memory_size      = 256

  vpc_config {
    subnet_ids         = [data.aws_subnet.subnet_id_A.id, data.aws_subnet.subnet_id_B.id, data.aws_subnet.subnet_id_C.id]
    security_group_ids = lookup(var.security_group_ids, var.stage)
  }

  environment {
    variables = {
      stage       = var.stage
      webhook_url = var.webhook_url
    }
  }

  tags = {
    Function        = "announcement-nodegroup-rollout"
    Customer        = "os/tvnow/systems"
    CustomerProject = "r5s-announcements"
  }
}
data "archive_file" "lambda_zip" {
  type       = "zip"
  source_dir = "${path.module}/lambda-nodegroup-rollout-source/package"
  # keep the zip outside source_dir, otherwise the archive would include itself on re-runs
  output_path = "${path.module}/lambda-nodegroup-rollout-source/lambda_function.zip"
}
###
resource "aws_lambda_permission" "announcement-nodegroup-rollout" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.announcement-nodegroup-rollout.function_name
  principal     = "events.amazonaws.com"
  source_arn    = module.eventbridge.eventbridge_rule_arns["logs"]
}
module "eventbridge" {
  source  = "terraform-aws-modules/eventbridge/aws"
  version = "v2.3.0"

  create_bus = false

  rules = {
    logs = {
      name          = "announcement-UpdateNodegroupVersion"
      description   = "announcement-UpdateNodegroupVersion"
      event_pattern = jsonencode({ "detail" : { "eventName" : ["UpdateNodegroupVersion"] } })
    }
  }

  targets = {
    logs = [
      {
        name = "announcement-UpdateNodegroupVersion"
        arn  = aws_lambda_function.announcement-nodegroup-rollout.arn
      }
    ]
  }
}
lambda_function.py:
import urllib3
import json
import os


class TeamsWebhookException(Exception):
    """custom exception for failed webhook call"""
    pass


class ConnectorCard:
    def __init__(self, hookurl, http_timeout=60):
        self.http = urllib3.PoolManager()
        self.payload = {}
        self.hookurl = hookurl
        self.http_timeout = http_timeout

    def text(self, mtext):
        self.payload["text"] = mtext
        return self

    def send(self):
        headers = {"Content-Type": "application/json"}
        r = self.http.request(
            'POST',
            self.hookurl,
            body=json.dumps(self.payload).encode('utf-8'),
            headers=headers, timeout=self.http_timeout)
        if r.status == 200:
            return True
        raise TeamsWebhookException(r.reason)


def lambda_handler(event, context):
    nodegroup = event['detail']['requestParameters']['nodegroupName']
    print(nodegroup)
    if "r5s-default" not in nodegroup:
        return {
            'statusCode': 200,
            'body': json.dumps('Announcing only default Node Group')
        }
    stage = os.environ['stage']
    webhook_url = os.environ['webhook_url']
    # close the template file handle properly
    with open("announcement.tpl", "r") as tpl:
        message = tpl.read().replace('###STAGE###', stage)
    ConnectorCard(webhook_url).text(message).send()
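For reference, a trimmed sketch of the EventBridge event the handler receives (a CloudTrail "AWS API Call via CloudTrail" event matching the rule above; the cluster and nodegroup names are made up):

```json
{
  "detail-type": "AWS API Call via CloudTrail",
  "source": "aws.eks",
  "detail": {
    "eventName": "UpdateNodegroupVersion",
    "requestParameters": {
      "clusterName": "r5s-dev",
      "nodegroupName": "r5s-default-20240101"
    }
  }
}
```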
announcement.tpl:
<h1><b>r5s Node Rollout gestartet</b></h1>
Soeben wurde auf r5s ###STAGE### ein Node Rollout gestartet (angestoßen durch Updates, Config Changes,...).<br />
Environment variables:
stage = dev|preprod|prod
webhook_url: <webhook url>
- helm diff -n $namespace upgrade jira-exporter jira-exporter --values jira-exporter/values.yaml --allow-unreleased
    --set ingress.tls[0].hosts[0]="jira-prometheus-exporter.cloud"
    --set ingress.hosts[0].host="jira-prometheus-exporter.cloud"
    --set image.tag=$VERSION
    --set secretstore=$secretstore
    --set secretstore_role=$secretstore_role
    --set sa_role=$sa_role
    --set env=$ENV
- helm upgrade --install jira-exporter jira-exporter -f jira-exporter/values.yaml -n $namespace --create-namespace $DRYRUN
    --set ingress.tls[0].hosts[0]="jira-prometheus-exporter.cloud"
    --set ingress.hosts[0].host="jira-prometheus-exporter.cloud"
    --set image.tag=$VERSION
    --set secretstore=$secretstore
    --set secretstore_role=$secretstore_role
    --set sa_role=$sa_role
    --set env=$ENV
Since ingress is a map/list structure, it also has to be set with the corresponding index syntax.
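The two ingress --set flags above expand to this values structure (a sketch of just the overridden keys, not the chart's full values.yaml):

```yaml
ingress:
  hosts:
    - host: jira-prometheus-exporter.cloud
  tls:
    - hosts:
        - jira-prometheus-exporter.cloud
```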
kubectl rollout restart deployment/subgraph-pbf
Synopsis
Manage the rollout of one or many resources.
Valid resource types include:
- deployments
- daemonsets
- statefulsets
kubectl rollout SUBCOMMAND
Examples
# Rollback to the previous deployment
kubectl rollout undo deployment/abc
# Check the rollout status of a daemonset
kubectl rollout status daemonset/foo
# Restart a deployment
kubectl rollout restart deployment/abc
# Restart deployments with the 'app=nginx' label
kubectl rollout restart deployment --selector=app=nginx
Here a list of PIDs is built first, and each process is then limited to 50% CPU. The [/] in the grep pattern keeps grep from matching its own process line:
ps faux | grep '[/]usr/local/bin/node --' | awk '{print $2}' | xargs -I{} cpulimit -p {} -l 50
git reset --soft HEAD^
This command can be repeated to reset further commits.
git reset --soft HEAD^^^
This resets the last 3 commits.
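What the soft reset does can be seen in a throwaway repo (a sketch; file and commit names are made up): the commits disappear, but their changes remain staged.

```shell
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com && git config user.name demo

# four commits c1..c4, each adding one file
for i in 1 2 3 4; do echo "$i" > "f$i"; git add "f$i"; git commit -qm "c$i"; done

git reset --soft HEAD^^^        # same as HEAD~3: drop the last 3 commits
git log --oneline               # only c1 is left
git diff --cached --name-only   # f2, f3, f4 are still staged
```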
helm upgrade --install jira-exporter -f jira-exporter/values.yaml jira-exporter