GitHub: https://github.com/thoughtfault/gonjalla-DeimosC2

Summary

We deploy a DeimosC2 teamserver and forwarder to Njalla, a privacy-focused ("bulletproof") domain registrar and hosting provider, and configure both with Terraform and Ansible. To accomplish this we need to add server and domain management support to the gonjalla Go library, which currently only supports changing resource records, and then expose that functionality through the terraform-provider-njalla Terraform provider. Finally, we write the Terraform code and Ansible playbooks that deploy and configure our command and control infrastructure.

I have contributed these changes to the upstream repos, but you could of course apply the same approach to other providers without existing support. https://github.com/Sighery/terraform-provider-njalla https://github.com/Sighery/gonjalla

Detection

Adversaries often rent VPSs to use during their operations. Providers that accept privacy coins such as Monero are particularly attractive, as they provide a defense against attribution and legal repercussions. Defenders should be aware of these VPS providers and incorporate their assets into their threat hunting activities.

I suggest putting together a list of hacking/security-related forums and combining them with keywords. For example: (site:cracked.io | site:xss.is … ) & (“bulletproof” | “anonymous” …)

Read some of these posts and collect VPS providers, then search their assets with sites like https://bgp.he.net/net/185.193.125.0/24. You could also spin up servers yourself and note their public IPs. This doesn't work too well on Njalla, because you receive one IP address per VPS and so cannot cheaply enumerate their address space; it is easier with a provider that has a pay-as-you-go model. Finally, search for communication with these addresses across your environment, as in the sketch below.
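
As a minimal sketch of that last step, the following Go program checks observed addresses against a list of provider ranges. The CIDR list and hard-coded IPs are placeholders (185.193.125.0/24 is just the example block linked above); in practice you would load ranges from your research and addresses from your network logs.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Illustrative ranges only; build this list from your own research.
	providerCIDRs := []string{"185.193.125.0/24"}

	var nets []*net.IPNet
	for _, cidr := range providerCIDRs {
		_, network, err := net.ParseCIDR(cidr)
		if err != nil {
			panic(err)
		}
		nets = append(nets, network)
	}

	// In practice these would come from your proxy or firewall logs.
	observed := []string{"185.193.125.73", "8.8.8.8"}

	for _, s := range observed {
		ip := net.ParseIP(s)
		for _, network := range nets {
			if network.Contains(ip) {
				fmt.Printf("%s falls within suspicious range %s\n", ip, network)
			}
		}
	}
}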

Gonjalla modifications

We will add support for servers to the gonjalla library (https://github.com/Sighery/gonjalla) first, since the terraform provider depends on it. Each of the new functions is a thin wrapper around the library's existing Request helper, sketched below.
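
For context, here is a rough sketch of the shape of that Request helper (endpoint and Authorization header as documented in Njalla's public API docs; error handling simplified). This is an illustration only; see the library's source for the real implementation.

package gonjalla

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
)

// Request is a sketch only -- the actual helper already exists in gonjalla.
func Request(token string, method string, params map[string]interface{}) ([]byte, error) {
	body, err := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  method,
		"params":  params,
	})
	if err != nil {
		return nil, err
	}

	req, err := http.NewRequest("POST", "https://njal.la/api/1/", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Njalla "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	// The JSON-RPC envelope carries the payload under "result".
	var envelope struct {
		Result json.RawMessage `json:"result"`
	}
	if err := json.Unmarshal(raw, &envelope); err != nil {
		return nil, err
	}
	return envelope.Result, nil
}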

Contents of servers.go

package gonjalla

import (
	"encoding/json"
)

// Server struct contains data returned by api calls that deal with server state
type Server struct {
	Name        string   `json:"name"`
	Type        string   `json:"type"`
	ID          string   `json:"id"`
	Status      string   `json:"status"`
	Os          string   `json:"os"`
	Expiry      string   `json:"expiry"`
	Autorenew   bool     `json:"autorenew"`
	SSHKey      string   `json:"ssh_key"`
	Ips         []string `json:"ips"`
	ReverseName string   `json:"reverse_name"`
	OsState     string   `json:"os_state"`
}

// ListServers returns a listing of all servers for a given account
func ListServers(token string) ([]Server, error) {
	params := map[string]interface{}{}

	data, err := Request(token, "list-servers", params)
	if err != nil {
		return nil, err
	}

	type Response struct {
		Servers []Server `json:"servers"`
	}

	var response Response
	err = json.Unmarshal(data, &response)
	if err != nil {
		return nil, err
	}

	return response.Servers, nil
}

// ListServerImages returns a listing of the available server images
func ListServerImages(token string) ([]string, error) {
	params := map[string]interface{}{}

	data, err := Request(token, "list-server-images", params)
	if err != nil {
		return nil, err
	}

	type Response struct {
		Images []string `json:"images"`
	}

	var response Response
	err = json.Unmarshal(data, &response)
	if err != nil {
		return nil, err
	}

	return response.Images, nil
}

// ListServerTypes returns a listing of the available server types
func ListServerTypes(token string) ([]string, error) {
	params := map[string]interface{}{}

	data, err := Request(token, "list-server-types", params)
	if err != nil {
		return nil, err
	}

	type Response struct {
		Types []string `json:"types"`
	}

	var response Response
	err = json.Unmarshal(data, &response)
	if err != nil {
		return nil, err
	}

	return response.Types, nil
}

// StopServer stops a server from running.  Server data will not be destroyed.
func StopServer(token string, id string) (Server, error) {
	params := map[string]interface{}{
		"id": id,
	}

	var server Server

	data, err := Request(token, "stop-server", params)
	if err != nil {
		return server, err
	}

	err = json.Unmarshal(data, &server)
	if err != nil {
		return server, err
	}

	return server, nil
}

// StartServer starts a server
func StartServer(token string, id string) (Server, error) {
	params := map[string]interface{}{
		"id": id,
	}

	var server Server

	data, err := Request(token, "start-server", params)
	if err != nil {
		return server, err
	}

	err = json.Unmarshal(data, &server)
	if err != nil {
		return server, err
	}

	return server, nil
}

// RestartServer restarts a server
func RestartServer(token string, id string) (Server, error) {
	params := map[string]interface{}{
		"id": id,
	}

	var server Server

	data, err := Request(token, "restart-server", params)
	if err != nil {
		return server, err
	}

	err = json.Unmarshal(data, &server)
	if err != nil {
		return server, err
	}

	return server, nil
}

// ResetServer resets a server with new server settings.  Server data WILL be destroyed
func ResetServer(token string, id string, os string, publicKey string, instanceType string) (Server, error) {
	params := map[string]interface{}{
		"id":      id,
		"os":      os,
		"ssh_key": publicKey,
		"type":    instanceType,
	}

	var server Server

	data, err := Request(token, "reset-server", params)
	if err != nil {
		return server, err
	}

	err = json.Unmarshal(data, &server)
	if err != nil {
		return server, err
	}

	return server, nil
}

// AddServer creates a new server.
func AddServer(token string, name string, instanceType string, os string, publicKey string, months int) (Server, error) {
	params := map[string]interface{}{
		"name":    name,
		"type":    instanceType,
		"os":      os,
		"ssh_key": publicKey,
		"months":  months,
	}

	var server Server

	data, err := Request(token, "add-server", params)
	if err != nil {
		return server, err
	}

	err = json.Unmarshal(data, &server)
	if err != nil {
		return server, err
	}

	return server, nil
}

// RemoveServer removes a server. Server data WILL be destroyed.
func RemoveServer(token string, id string) (Server, error) {
	params := map[string]interface{}{
		"id": id,
	}

	var server Server

	data, err := Request(token, "remove-server", params)
	if err != nil {
		return server, err
	}

	err = json.Unmarshal(data, &server)
	if err != nil {
		return server, err
	}

	return server, nil
}

We also need to add a couple of functions for domain management. These require the time package in addition to encoding/json.

// CheckTask checks the status of a task given a task id
func CheckTask(token string, id string) (string, error) {
	params := map[string]interface{}{
		"id": id,
	}

	data, err := Request(token, "check-task", params)
	if err != nil {
		return "", err
	}

	// Fields must be exported, or json.Unmarshal will silently ignore them.
	type Response struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	var response Response
	err = json.Unmarshal(data, &response)
	if err != nil {
		return "", err
	}

	return response.Status, nil
}

// RegisterDomain registers a domain given a domain name and desired term
// length, blocking until the registration task completes
func RegisterDomain(token string, domain string, years int) error {
	params := map[string]interface{}{
		"domain": domain,
		"years":  years,
	}

	data, err := Request(token, "register-domain", params)
	if err != nil {
		return err
	}

	// Exported field so json.Unmarshal can populate it.
	type Response struct {
		Task string `json:"task"`
	}

	var response Response
	err = json.Unmarshal(data, &response)
	if err != nil {
		return err
	}

	// Poll until the registration task reports the domain as active.
	for {
		status, err := CheckTask(token, response.Task)
		if err != nil {
			return err
		}

		if status == "active" {
			break
		}

		time.Sleep(5 * time.Second)
	}

	return nil
}
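
Since RegisterDomain blocks until the registration task goes active, a caller only needs to check the error. A minimal sketch (again assuming the token comes from an environment variable, and noting that a successful call costs real money):

package main

import (
	"log"
	"os"

	"github.com/Sighery/gonjalla"
)

func main() {
	token := os.Getenv("NJALLA_API_TOKEN")

	// This will actually purchase the domain if the token is valid.
	if err := gonjalla.RegisterDomain(token, "myverybaddomain.com", 1); err != nil {
		log.Fatalf("register-domain failed: %v", err)
	}
	log.Println("domain registered and active")
}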

Terraform provider modifications

Next, we will add support for servers and domains in the njalla terraform provider (https://github.com/Sighery/terraform-provider-njalla). Currently, this provider only supports management of resource records.

First, we will create our resource files. These files define the CRUD operations that Terraform uses to bring our resources to the desired state.

Contents of resource_server.go:

package njalla

import (
	"context"
	"fmt"
	"time"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

	"github.com/Sighery/gonjalla"
)

func resourceServer() *schema.Resource {
	return &schema.Resource{
		CreateContext: resourceServerCreate,
		ReadContext:   resourceServerRead,
		UpdateContext: resourceServerUpdate,
		DeleteContext: resourceServerDelete,

		Schema: map[string]*schema.Schema{
			"name": {
				Type:		schema.TypeString,
				Required:	true,
				Description:	"Name for the server",
			},
			"instance_type": {
				Type:		schema.TypeString,
				Required:	true,
				Description:	"Instance type for the server",
			},
			"os": {
				Type:		schema.TypeString,
				Required:	true,
				Description:	"OS type for the server",
			},
			"public_key": {
				Type:		schema.TypeString,
				Required:	true,
				Description:	"Public key material for this server",
			},
			"months": {
				Type:		schema.TypeInt,
				Required:	true,
				Description:	"Number of months to buy the server for",
			},
			"public_ip": {
				Type:		schema.TypeString,
				Computed:	true,
				Description:	"Public IPv4 address of this server",
			},
		},

		Importer: &schema.ResourceImporter{
			StateContext: resourceServerImport,
		},
	}
}

func resourceServerCreate(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	config := m.(*Config)

	name := d.Get("name").(string)
	instanceType := d.Get("instance_type").(string)
	os := d.Get("os").(string)
	publicKey := d.Get("public_key").(string)
	months := d.Get("months").(int)

	server, err := gonjalla.AddServer(config.Token, name, instanceType, os, publicKey, months)
	if err != nil {
		return diag.FromErr(err)
	}

	// Record the new server's ID in state so the following Read can find it.
	d.SetId(server.ID)

	return resourceServerRead(ctx, d, m)
}

func resourceServerRead(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	config := m.(*Config)

	var diags diag.Diagnostics

	// Poll until Njalla has assigned the server an IP address, re-fetching
	// the server list on each iteration.
	for {
		servers, err := gonjalla.ListServers(config.Token)
		if err != nil {
			return diag.FromErr(err)
		}

		found := false
		for _, server := range servers {
			if d.Id() == server.ID {
				found = true

				if len(server.Ips) == 0 {
					break // still provisioning; poll again
				}

				d.Set("instance_type", server.Type)
				d.Set("os", server.Os)
				d.Set("public_key", server.SSHKey)
				d.Set("public_ip", server.Ips[0])

				return diags
			}
		}

		if !found {
			break
		}

		time.Sleep(5 * time.Second)
	}

	// The server no longer exists; remove it from state.
	d.SetId("")
	return diags
}

func resourceServerUpdate(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	config := m.(*Config)

	id := d.Id()
	os := d.Get("os").(string)
	publicKey := d.Get("public_key").(string)
	instanceType := d.Get("instance_type").(string)

	_, err := gonjalla.ResetServer(config.Token, id, os, publicKey, instanceType)
	if err != nil {
		return diag.FromErr(err)
	}

	return resourceServerRead(ctx, d, m)
}

func resourceServerDelete(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	config := m.(*Config)

	_, err := gonjalla.RemoveServer(config.Token, d.Id())
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId("")

	var diags diag.Diagnostics
	return diags
}

func resourceServerImport(
	ctx context.Context, d *schema.ResourceData, m interface{},
) ([]*schema.ResourceData, error) {

	id := d.Id()

	config := m.(*Config)

	servers, err := gonjalla.ListServers(config.Token)
	if err != nil {
		return nil, fmt.Errorf(
			"Listing servers failed: %s", err.Error(),
		)
	}

	for _, server := range servers {
		if id == server.ID {
			d.SetId(id)
			d.Set("name", server.Name)
			d.Set("instance_type", server.Type)
			d.Set("os", server.Os)
			d.Set("public_key", server.SSHKey)
			d.Set("public_ip", server.Ips[0])

			return []*schema.ResourceData{d}, nil
		}
	}

	return nil, fmt.Errorf("Couldn't find server with id %s", id)
}

Contents of resource_domain.go

package njalla

import (
	"context"
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

	"github.com/Sighery/gonjalla"
)

func resourceDomain() *schema.Resource {
	return &schema.Resource{
		CreateContext: domainCreate,
		ReadContext:   domainRead,
		UpdateContext: domainUpdate,
		DeleteContext: domainDelete,

		Schema: map[string]*schema.Schema{
			"name": {
				Type:		schema.TypeString,
				Required:	true,
				Description:	"Name of the domain",
			},
			"years": {
				Type:		schema.TypeInt,
				Required:	true,
				Description:	"Number of years to register the domain for",
			},
		},

		Importer: &schema.ResourceImporter{
			StateContext: domainImport,
		},
	}
}

func domainCreate(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	config := m.(*Config)

	name := d.Get("name").(string)
	years := d.Get("years").(int)

	err := gonjalla.RegisterDomain(config.Token, name, years)
	if err != nil {
		return diag.FromErr(err)
	}

	// Domains are identified by name in state.
	d.SetId(name)

	return domainRead(ctx, d, m)
}

func domainRead(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {
	config := m.(*Config)

	var diags diag.Diagnostics

	domains, err := gonjalla.ListDomains(config.Token)
	if err != nil {
		return diag.FromErr(err)
	}

	for _, domain := range domains {
		if d.Id() == domain.Name {
			d.Set("name", domain.Name)
			return diags
		}
	}

	// The domain no longer exists in the account; remove it from state.
	d.SetId("")
	return diags
}

func domainUpdate(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {

	// The Njalla API offers no way to modify a registered domain, so an
	// update is just a state refresh.
	return domainRead(ctx, d, m)
}


func domainDelete(
	ctx context.Context, d *schema.ResourceData, m interface{},
) diag.Diagnostics {

	// Domains cannot be deleted through the API. Just remove the domain from Terraform state instead.
	d.SetId("")

	var diags diag.Diagnostics
	return diags
}

func domainImport(
	ctx context.Context, d *schema.ResourceData, m interface{},
) ([]*schema.ResourceData, error) {

	name := d.Id()

	config := m.(*Config)

	domains, err := gonjalla.ListDomains(config.Token)
	if err != nil {
		return nil, fmt.Errorf(
			"Listing domains failed: %s", err.Error(),
		)
	}

	for _, domain := range domains {
		if name == domain.Name {
			d.SetId(domain.Name)
			d.Set("name", domain.Name)

			return []*schema.ResourceData{d}, nil
		}
	}

	return nil, fmt.Errorf("Couldn't find domain with id %s", name)
}

Next, we need to add our server and domain resources to the ResourcesMap in provider.go:

ResourcesMap: map[string]*schema.Resource{
	"njalla_server":       resourceServer(), // New resource
	"njalla_domain":       resourceDomain(), // New resource
	"njalla_record_txt":   resourceRecordTXT(),
	"njalla_record_a":     resourceRecordA(),
	"njalla_record_aaaa":  resourceRecordAAAA(),
	"njalla_record_mx":    resourceRecordMX(),
	"njalla_record_cname": resourceRecordCNAME(),
	"njalla_record_caa":   resourceRecordCAA(),
	"njalla_record_ptr":   resourceRecordPTR(),
	"njalla_record_ns":    resourceRecordNS(),
	"njalla_record_tlsa":  resourceRecordTLSA(),
	"njalla_record_naptr": resourceRecordNAPTR(),

Making modifications work

tree -L 1
.
├── gonjalla
└── terraform-provider-njalla

I added this line to the top of terraform-provider-njalla/go.mod. This ensures our modified local copy of gonjalla is used instead of the published module:

replace github.com/Sighery/gonjalla => ../gonjalla

To tell Terraform to use our custom provider, we create a .terraformrc in our home directory. With the dev_overrides block below, Terraform will look for the provider binary (produced by running go build in the repository) under /home/USERNAME/terraform-provider-njalla instead of downloading it from the registry.

provider_installation {
  dev_overrides {
    "registry.terraform.io/Sighery/njalla" = "/home/USERNAME/terraform-provider-njalla"
  }

  direct {}
}

Terraforming

tree
.
├── files
│   ├── DeimosC2.service
│   ├── DeimosC2.sh
│   ├── forwarder.service.j2
│   └── forwarder.sh
├── install-forwarder.yml
├── install-teamserver.yml
├── main.tf
├── terraform.tfvars
└── variables.tf

Our main.tf will provision and configure a DeimosC2 teamserver and a forwarder, and create an A record pointing at the forwarder's IP. The remote-exec provisioners simply wait for SSH to become reachable before the local-exec provisioners run the Ansible playbooks.

Contents of main.tf

terraform {
  required_version = ">= 0.13"

  required_providers {
    njalla = {
      source  = "Sighery/njalla"                                                                                                                                                                                                            version = "~> 0.10.0"
    }
  }
}

resource "tls_private_key" "this" {
        algorithm       = "RSA"
        rsa_bits        = 4096
}

resource "local_file" "this" {
        filename                = "${var.key_name}.pem"
        content                 = tls_private_key.this.private_key_pem
        file_permission = "0600"
}

resource "njalla_server" "teamserver" {
        name            = "teamserver"
        instance_type   = var.teamserver_instance_type
        os              = var.teamserver_os
        public_key      = tls_private_key.this.public_key_openssh
        months          = 1

        depends_on      = [local_file.this]

        provisioner "remote-exec" {
                connection {
                        host            = self.public_ip
                        user            = "root"
                        private_key     = file("./${var.key_name}.pem")
                }
                inline  = ["echo connected"]
        }

        provisioner "local-exec" {
                command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook --inventory '${self.public_ip},' --private-key=${var.key_name}.pem --extra-vars \"operator_ip='${var.operator_ip}'\" install-teamserver.yml"
        }
}

resource "njalla_server" "forwarder" {
        name            = "forwarder"
        instance_type   = var.forwarder_instance_type
        os              = var.forwarder_os
        public_key      = tls_private_key.this.public_key_openssh
        months          = 1

        depends_on      = [local_file.this]

        provisioner "remote-exec" {
                connection {
                        host            = self.public_ip
                        user            = "root"
                        private_key     = file("./${var.key_name}.pem")
                }
                inline  = ["echo connected"]
        }

        provisioner "local-exec" {
                command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook --inventory '${self.public_ip},' --private-key=${var.key_name}.pem --extra-vars \"teamserver_ip='${njalla_server.teamserver.public_ip}'\" install-forwarder.yml"
        }
}

resource "njalla_domain" "this" {
		name    = "${var.domain_name}"
		years   = 1
}

resource "njalla_record_a" "this" {
        domain  = "${var.domain_name}"
        ttl     = 10800
        content = njalla_server.forwarder.public_ip
}

Our variables are declared in variables.tf

variable "domain_name" {
        type            = string
        description     = "Domain to use"
}

variable "key_name" {
        type            = string
        description     = "The saved private key name"
}

variable "operator_ip" {
        type            = string
        description     = "The IPv4 address of the jumpbox/operator"
}

variable "teamserver_instance_type" {
        type            = string
        description     = "The instance type of the teamserver"
}

variable "forwarder_instance_type" {
        type            = string
        description     = "The instance type of the forwarder"
}

variable "teamserver_os" {
        type            = string
        description     = "The os of the teamserver"
}

variable "forwarder_os" {
        type            = string
        description     = "The os of the forwarder"
}

We set our variables in terraform.tfvars. With everything in place, running terraform init and terraform apply brings up the full deployment.

domain_name                     = "myverybaddomain.com"
key_name                        = "mykey"
operator_ip                     = "x.x.x.x"
teamserver_instance_type        = "njalla1"
teamserver_os                   = "ubuntu2004"
forwarder_instance_type         = "njalla1"
forwarder_os                    = "ubuntu2004"

Ansible

Our main.tf file will run these playbooks to configure the teamserver and forwarder, respectively.

Contents of install-teamserver.yml:

- name: Install DeimosC2 teamserver
  hosts: all
  remote_user: root
  tasks:
    - name: Install packages
      ansible.builtin.apt:
        name:
          - unzip
          - python3-pip
        state: present
        update_cache: yes
        cache_valid_time: 3600

    - name: Allow management traffic from operator IP
      community.general.ufw:
        rule: allow
        port: '{{ item }}'
        proto: tcp
        src: '{{ operator_ip }}'
      loop:
        - 22
        - 8443
      notify:
        - Enable ufw

    - name: Deny management traffic from everywhere
      community.general.ufw:
        rule: deny
        port: '{{ item }}'
        proto: tcp
        src: '0.0.0.0/0'
      loop:
        - 22
        - 8443
      notify:
        - Enable ufw

    - name: Allow C2 traffic
      community.general.ufw:
        rule: allow
        port: '{{ item }}'
        src: '0.0.0.0/0'
      loop:
        - 443
        - 4443
        - 4444
      notify:
        - Enable ufw

    - name: Get link for latest release of DeimosC2
      ansible.builtin.uri:
        url: https://api.github.com/repos/DeimosC2/DeimosC2/releases/latest
        return_content: true
      register: json_response

    - name: Get latest release of DeimosC2
      ansible.builtin.unarchive:
        src: "{{ json_response.json.assets[1].browser_download_url}}"
        dest: /opt/
        remote_src: yes
      args:
        creates: /opt/DeimosC2

    - name: Install requirements.txt (ansible.builtin.pip is broken here)
      ansible.builtin.shell:
        "pip3 install -r /opt/requirements.txt"

    - name: Copy service script
      ansible.builtin.copy:
        src: files/DeimosC2.sh
        dest: /opt/DeimosC2.sh
        mode: '0755'

    - name: Copy service file
      ansible.builtin.copy:
        src: files/DeimosC2.service
        dest: /etc/systemd/system
        mode: '0644'
      notify:
        - Enable DeimosC2

  handlers:
    - name: Enable ufw
      community.general.ufw:
        state: enabled

    - name: Enable DeimosC2
      ansible.builtin.service:
        name: DeimosC2
        state: restarted
        enabled: yes
        daemon_reload: yes

Contents of install-forwarder.yml:

- name: Install c2 forwarders
  hosts: all
  remote_user: root
  tasks:
    - name: Install socat
      ansible.builtin.apt:
        name: socat
        state: present
        update_cache: yes
        cache_valid_time: 3600

    - name: Copy service script
      ansible.builtin.copy:
        src: files/forwarder.sh
        dest: /opt/forwarder.sh
        mode: '0755'

    - name: Copy service file
      ansible.builtin.template:
        src: files/forwarder.service.j2
        dest: /etc/systemd/system/forwarder.service
        mode: '0644'
      notify:
        - Enable forwarder

  handlers:
    - name: Enable forwarder
      ansible.builtin.service:
        name: forwarder
        state: restarted
        enabled: yes
        daemon_reload: yes

Our service files allow us to manage the scripts/binaries through systemd.

Contents of DeimosC2.service

[Unit]
Description=DeimosC2

[Service]
Type=simple
ExecStart=/opt/DeimosC2.sh
WorkingDirectory=/opt

[Install]
WantedBy=graphical.target

Contents of forwarder.service.j2

[Unit]
Description=forwarder

[Service]
Type=simple
ExecStart=/opt/forwarder.sh {{ teamserver_ip }}
WorkingDirectory=/opt

[Install]
WantedBy=graphical.target

Our forwarder shell script starts socat forwarders for the specified ports.

#!/bin/bash

# Start a backgrounded socat listener on each C2 port, forwarding traffic to
# the teamserver IP passed as the first argument.
echo -n 443 4443 4444 | xargs -d ' ' -I% bash -c "nohup socat TCP4-LISTEN:%,fork TCP4:$1:% &"

# Keep the script alive so systemd (Type=simple) sees the service as running.
while true; do sleep 1000000; done