Virtual Thoughts

Virtualisation, Storage and various other ramblings.

End to end automation with CircleCI and ArgoCD Part 2 – CircleCI Configuration

CircleCI is a continuous integration technology that is capable of building extremely complex pipelines. Being cloud-hosted and offering a free tier makes it very easy to get up and running.

Sign Up and Set-Up

Simply navigate to CircleCI’s website and log in with either GitHub or Bitbucket.

Select your org:

Select your project. For me, it’s CircleCI-Webapp-CI (Feel free to fork it or leverage your own)

The next step is to create a CircleCI config file that will dictate the steps in the pipeline. CircleCI provides a template to work with:

Clicking on “Start Building” will prompt you with two options: either let CircleCI apply the starter config, or add one of your own. As I’d already created one myself, I selected “Add Manually”.

We’re then prompted to create .circleci/config.yml in the respective GitHub repo.

Click here for my full config.yml file. This can be a bit daunting, so let’s break it down:

version: 2.1
jobs:

Here we define the version of the CircleCI configuration schema, and in this pipeline we configure Jobs. A Job is a wrapper around a collection of Steps, and Steps are a series of commands. In this example, we have:

version: 2.1
jobs:
  Test:
    - {steps}
  Build:
    - {steps}
  Publish:
    - {steps}
  GenerateYAML:
    - {steps}

Therefore, we have four jobs in this pipeline: Test, Build, Publish and GenerateYAML.

Test Job

jobs:
  Test:
    docker:
      - image: cimg/go:1.13
    steps:
      - checkout
      - run: go test -v ./...

In this job, we define that it should run in a Docker container, hence the docker declaration. In CircleCI, we can run jobs in either a VM or a container.

As this application is written in Go, using the CircleCI image cimg/go is convenient as it contains the dev tools we need, such as go test. main_test.go includes the test for this app.
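
For illustration only, here is a minimal, self-contained sketch of the kind of test that go test -v ./... would pick up in main_test.go. The handler it exercises (demoHandler) is a hypothetical stand-in, not the actual code from the repo:

package main

import (
    "io"
    "net/http"
    "net/http/httptest"
    "testing"
)

// demoHandler is a hypothetical stand-in for the webappdemo handler.
func demoHandler(w http.ResponseWriter, r *http.Request) {
    io.WriteString(w, "Hello from webappdemo")
}

// TestDemoHandler exercises the handler via httptest, the style of unit
// test that "go test -v ./..." discovers and runs in the Test job.
func TestDemoHandler(t *testing.T) {
    req := httptest.NewRequest(http.MethodGet, "/", nil)
    rec := httptest.NewRecorder()

    demoHandler(rec, req)

    if rec.Code != http.StatusOK {
        t.Fatalf("expected status 200, got %d", rec.Code)
    }
    if rec.Body.Len() == 0 {
        t.Fatal("expected a non-empty response body")
    }
}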

Build Job

jobs:
  Build:
    docker:
      - image: cimg/go:1.13
    steps:
      - checkout
      - run: go get -v -t -d ./...
      - run: go build -o webappdemo
      - persist_to_workspace:
          root: .
          paths:
            - ./webappdemo
            - ./deployment.yaml

Similarly to before, the first declaration is the Docker image to use. Note: each job in CircleCI runs in its own container or VM, which is why we have to declare which image to use.

As this is using the cimg/go image, we have access to the dev tools go get (downloads application dependencies) and go build (compiles the code). -o specifies the output name; here the application is compiled into an executable called webappdemo.

Workspaces are incredibly helpful for saving the output of a job to be used by others. As mentioned previously, jobs run in their own container/VM, so once a job has finished its data is gone unless we save it as an artefact or place it into a workspace. In this example, we’re saving the compiled application and the YAML manifest for the Kubernetes deployment – which is already present in the GitHub repo.

Publish Job

jobs:
  Publish:
    docker:
      - image: cimg/go:1.13
    steps:
      - checkout
      - setup_remote_docker
      - attach_workspace:
          at: /tmp/workspace
      - run: |
          cp /tmp/workspace/webappdemo .
          TAG=0.1.$CIRCLE_BUILD_NUM
          docker build -t virtualthoughts/circleci-webappdemo:$TAG .
          echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
          docker push virtualthoughts/circleci-webappdemo:$TAG

This job is responsible for generating a new Docker image for our application with an appropriate tag. The Dockerfile exists in the application repo and comprises:

FROM ubuntu:20.10
RUN mkdir /app
COPY webappdemo /app/
WORKDIR /app
CMD ["/app/webappdemo"]
EXPOSE 80

setup_remote_docker provisions a separate, remote Docker environment and configures the job’s container to use it. Consequently, any docker commands in this job run against that remote environment. This is required to build the Docker image.

This job requires the output of the previous job, which was saved into the workspace. Therefore we attach the workspace to acquire the files.

      - attach_workspace:
          at: /tmp/workspace

This gives us access to the files we persisted from the Build job.

      - run: |
          cp /tmp/workspace/webappdemo .
          TAG=0.1.$CIRCLE_BUILD_NUM
          docker build -t virtualthoughts/circleci-webappdemo:$TAG .
          echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
          docker push virtualthoughts/circleci-webappdemo:$TAG

We copy the compiled application into the current directory and leverage one of CircleCI’s built-in environment variables ($CIRCLE_BUILD_NUM) to generate the image tag and build the Docker image.

This image is then pushed to Dockerhub. To prevent sensitive information from appearing in config.yml, leverage CircleCI’s environment variables (CircleCI > Project > Project Settings > Environment Variables).

GenerateYAML Job

jobs:
  GenerateYAML:
    docker:
      - image: cimg/base:2020.06
    steps:
      - attach_workspace:
          at: /tmp/workspace
      - run: |
          TAG=0.1.$CIRCLE_PREVIOUS_BUILD_NUM
          git clone https://github.com/David-VTUK/CircleCI-Webapp-CD /tmp/CircleCI-Webapp-CD
          cd /tmp/CircleCI-Webapp-CD
          cp /tmp/workspace/deployment.yaml .
          sed -i 's/\(circleci-webappdemo\)\(.*\)/\1:'$TAG'/' ./deployment.yaml
          git config credential.helper 'cache --timeout=120'
          git config user.email "CircleCI@virtualthoughts.co.uk"
          git config user.name "CircleCI"
          git add .
          git commit -m "Update via CircleCI"
          # Push quietly to prevent showing the token in log
          git push -q https://$GITHUB_PERSONAL_TOKEN@github.com/David-VTUK/CircleCI-Webapp-CD.git master

This job:

  • Clones the CD repo (This is what ArgoCD will monitor).
  • Retrieves the template deployment.yaml from the Workspace.
  • Replaces $TAG in the following snippet with the image tag generated by the Build process, and consequently what has been tagged in Dockerhub:
    spec:
      containers:
      - name: webapp-demo
        image: virtualthoughts/circleci-webappdemo:$TAG
  • Pushes this YAML file into a separate repo using the $GITHUB_PERSONAL_TOKEN environment variable.
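
To make the sed substitution concrete, suppose the tag resolves to 0.1.42 (an illustrative value only). The image line in deployment.yaml would be rewritten from:

    image: virtualthoughts/circleci-webappdemo:$TAG

to:

    image: virtualthoughts/circleci-webappdemo:0.1.42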

Workflow

Workflows are a way to schedule Jobs. They aren’t required, but they help to manage jobs and influence order, dependencies and other characteristics. Additionally, breaking up a workflow can help with troubleshooting, as a failed step relates to a specific Job.

workflows:
  version: 2
  Test-Build-Publish:
    jobs:
      - Test
      - Build:
          requires:
            - Test
      - Publish:
          requires:
            - Build
      - GenerateYAML:
          requires:
            - Publish

This workflow sets the order for this pipeline. The first job is Test, followed by Build, Publish and GenerateYAML. Each job after Test specifies a requires: field, so if a preceding job fails, the entire workflow fails.

With this YAML file in place, “Start Building” can be selected:

Any commit to this repo will trigger the defined pipeline, which we can view in CircleCI:

Additionally, the Docker image has been updated:

Part 1 – Overview

Part 3 – ArgoCD Configuration

End to end automation with CircleCI and ArgoCD Part 1 – Overview

Part 1 of this 3 part series goes through the process of setting up a CI/CD pipeline leveraging CircleCI and ArgoCD. The overall architecture is depicted below:

CI/CD Delivery Pipeline

The steps that will be implemented/accommodated are:

  1. Developer commits code to a GitHub repo that is monitored by CircleCI.
  2. CircleCI will perform the following tasks on all commits into the master branch:
    1. Test the code. In this example, we’re leveraging go test.
    2. If Testing completes successfully, build (compile) the code.
    3. If building the code completes successfully, construct a Docker image to accommodate the service and push this image to DockerHub.
    4. If creating the Docker Image completes successfully, construct a basic deployment YAML file including the tag of the image that was recently uploaded.
    5. Commit the YAML file to a separate GitHub repo, monitored by ArgoCD.
  3. Argo CD will deploy the YAML manifest:
    1. Automatically into the Test cluster
    2. With manual approval into the Prod cluster

Part 2 – CircleCI Configuration

Part 3 – ArgoCD Configuration

Rancher, vSphere Network Protocol Profiles and static IP addresses for k8s nodes [Updated 2023]

Edit: This post has been updated to reflect changes in newer versions of Rancher.

Note: As mentioned by Jonathan in the comments, disabling cloud-init’s initial network configuration is recommended. To do this, create a file:

/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

To contain:

network: {config: disabled}

In your VM template.

Networking configuration for k8s nodes (or VMs in general) in on-premises environments is usually applied in one of two ways – DHCP or static. For some, DHCP is not a popular option, and static addresses can be time-consuming to manage, particularly when there’s no IPAM feature in Rancher. In this blog post I go through how to leverage vSphere Network Protocol Profiles in conjunction with Rancher and Cloud-Init to reliably and predictably apply static IP addresses to deployed nodes.

Create the vSphere Network Protocol Profile

Navigate to Datacenter > Configure > Network Protocol Profiles and click “Add”.

Provide a name for the profile and assign it to one, or a number of port groups.

Next, define the network parameters for this profile. The IP Pool and IP Pool Range are of particular importance here – we will use this pool of addresses to assign to our Rancher-deployed K8s nodes.

After adding any other network configuration items the profile will be created and associated with the previously specified port group.

Create a cluster

In Rancher, navigate to Cluster Management > Create > vSphere

In the cloud-init config, we add a script to extract the OVF environment that vSphere provides via the Network Protocol Profile and configure the underlying OS accordingly. In this case, Ubuntu 22.04 using Netplan:

Code snippet:

#cloud-config
write_files:
  - path: /root/test.sh
    content: |
      #!/bin/bash
      vmtoolsd --cmd 'info-get guestinfo.ovfEnv' > /tmp/ovfenv
      IPAddress=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
      SubnetMask=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.ip.0.netmask" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
      Gateway=$(sed -n 's/.*Property oe:key="guestinfo.interface.0.route.0.gateway" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)
      DNS=$(sed -n 's/.*Property oe:key="guestinfo.dns.servers" oe:value="\([^"]*\).*/\1/p' /tmp/ovfenv)

      cat > /etc/netplan/01-netcfg.yaml <<EOF
      network:
        version: 2
        renderer: networkd
        ethernets:
          ens192:
            addresses:
              - $IPAddress/24
            gateway4: $Gateway
            nameservers:
              addresses: [$DNS]
      EOF

      sudo netplan apply
runcmd:
  - bash /root/test.sh
bootcmd:
  - growpart /dev/sda 3
  - pvresize /dev/sda3
  - lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
  - resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

What took me a little while to figure out is that this feature is essentially a glorified transport mechanism for a bunch of key/value pairs – how they are leveraged is down to external scripting/tooling. VMware Tools will not do this magic for us.
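
For reference, the OVF environment that vmtoolsd returns is an XML document carrying these key/value pairs, which is exactly what the sed expressions in the script parse. A trimmed, illustrative fragment (the addresses here are made-up examples) looks something like:

    <PropertySection>
      <Property oe:key="guestinfo.interface.0.ip.0.address" oe:value="172.16.10.51"/>
      <Property oe:key="guestinfo.interface.0.ip.0.netmask" oe:value="255.255.255.0"/>
      <Property oe:key="guestinfo.interface.0.route.0.gateway" oe:value="172.16.10.1"/>
      <Property oe:key="guestinfo.dns.servers" oe:value="172.16.10.2"/>
    </PropertySection>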

Next, we configure the vApp portion of the cluster (this is how we consume the Network Protocol Profile):

The format is param:portgroup. For example, ip:VDS-MGMT-DEFAULT will resolve to an IP address from the pool we defined earlier – vSphere will take an IP out of the pool and assign it to each VM associated with this template. This can be validated from the UI:
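
Tying the two together, each vApp property maps a guestinfo key (the ones the cloud-init script parses) to a param:portgroup value. A rough sketch follows; ip: is the parameter shown above, while netmask, gateway and dns are assumed parameter names, so treat this as illustrative rather than definitive:

    guestinfo.interface.0.ip.0.address    = ip:VDS-MGMT-DEFAULT
    guestinfo.interface.0.ip.0.netmask    = netmask:VDS-MGMT-DEFAULT
    guestinfo.interface.0.route.0.gateway = gateway:VDS-MGMT-DEFAULT
    guestinfo.dns.servers                 = dns:VDS-MGMT-DEFAULT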

What we essentially do with the cloud-init script is extract this and apply it as a configuration to the VM.

This could be seen as the best of both worlds – Leveraging vSphere Network Profiles for predictable IP assignment whilst avoiding DHCP and the need to implement many Node Templates in Rancher.
