How to provision local SaltStack Master to work with VMware Aria Automation Config Cloud (aka SaltStack Config)

Recently I had to install and configure a SaltStack Master in my home lab and connect this master to my Aria Automation Config Cloud (aka SaltStack Config; I will use both names in this post) instance.

Even though the official documentation has improved a lot, there are still some pitfalls, especially if you are not experienced with the Salt setup.

In this blog post you will see, step by step, the entire process of preparing CentOS 8, installing the Salt components, and configuring the master to join the Cloud RaaS instance.

My new SaltStack Master will run on a CentOS 8 box:

4.18.0-348.2.1.el8_5.x86_64 #1 SMP

Step 1 – Prepare CentOS 8

The first thing you need to ensure is that the firewall is not blocking the Salt master ports (4505 and 4506).

[root@saltmaster02 ~]# firewall-cmd --permanent --add-port=4505-4506/tcp
success
[root@saltmaster02 ~]# firewall-cmd --reload
success
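
To verify that the ports are now open, you can list them (the output shown assumes no other ports were added to the zone):

[root@saltmaster02 ~]# firewall-cmd --list-ports
4505-4506/tcp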

You can find the details on the official page of the Salt Project: https://docs.saltproject.io/en/latest/topics/tutorials/firewall.html

You also need to ensure that the gcc package is installed. If it is not available, simply run:

sudo yum install gcc python3-devel

Step 2 – Install Salt on your Salt master

You must install the Salt master service and Salt minion service, plus a few more packages if needed, on the Salt master. The following instructions install the latest Salt release on CentOS 8 (RHEL 8).

In the Salt master’s terminal, run the following commands to install the Salt Project repository and key:

sudo rpm --import https://repo.saltproject.io/py3/redhat/8/x86_64/latest/SALTSTACK-GPG-KEY.pub
curl -fsSL https://repo.saltproject.io/py3/redhat/8/x86_64/latest.repo | sudo tee /etc/yum.repos.d/salt.repo

Run the following command to refresh the repository metadata:

sudo yum clean expire-cache

Then install the salt-master service and salt-minion service on your Salt master:

sudo yum install salt-master
sudo yum install salt-minion

# Optional packages
sudo yum install salt-ssh
sudo yum install salt-syndic
sudo yum install salt-cloud
sudo yum install salt-api

Enable and start the services for salt-master, salt-minion, and any other Salt components you installed:

sudo systemctl enable salt-master && sudo systemctl start salt-master
sudo systemctl enable salt-minion && sudo systemctl start salt-minion
sudo systemctl enable salt-syndic && sudo systemctl start salt-syndic
sudo systemctl enable salt-api && sudo systemctl start salt-api
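
To confirm that the services came up correctly, you can check their status:

sudo systemctl status salt-master salt-minion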

See the Salt Install guide for information about installing Salt on other operating systems.

Step 3 – Create initial master configuration

Create a master.conf file in the /etc/salt/minion.d directory. In this file, set the Salt master’s IP address to point to itself:

master: localhost
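
As a one-liner sketch, the file can also be created like this:

echo "master: localhost" | sudo tee /etc/salt/minion.d/master.conf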

Restart the Salt master service and Salt minion service (the services have been enabled in the previous step):

sudo systemctl restart salt-master
sudo systemctl restart salt-minion

Step 4 – Install and configure the Master Plugin

After you install Salt on your on-premises infrastructure, you must install and configure the Master Plugin (SSEAPE), which enables your Salt masters to communicate with Aria Automation Config (SaltStack Config) Cloud.

To install and configure the Master Plugin, you first need to install the required Python libraries. Log in to your local master and run:

sudo pip3 install pyjwt
sudo pip3 install pika

Download the latest Master Plugin wheel from Customer Connect. You will find the file in the package highlighted in the following picture.

Figure 01: Package containing the SSEAPE Plugin.

The Master Plugin is included in the Automated Installer .tar.gz file. After you download and extract the .tar.gz file, you can find the Master Plugin in the sse-installer/salt/sse/eapi_plugin/files directory.

Put the wheel file into your /root directory and install the Master Plugin by manually installing the Python wheel. Use the following example command, replacing SSEAPE-file-name.whl with the exact name of the wheel file:

sudo pip3 install SSEAPE-file-name.whl --prefix /usr

Verify that the /etc/salt/master.d directory exists, and create it if needed.
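
The check and creation can be combined into one step, since mkdir -p does nothing if the directory already exists:

sudo mkdir -p /etc/salt/master.d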

Run the following command to generate the master configuration file.

sudo sseapi-config --all > /etc/salt/master.d/raas.conf

If running this command causes an error, see Troubleshooting SaltStack Config Cloud.

Restart the Salt master service.

sudo systemctl restart salt-master

Step 5 – Generate an API token

Before you can connect your Salt master to Aria Automation Config Cloud, you must generate an API token using the Cloud Services Console. This token is used to authenticate your Salt master with VMware Cloud Services.

NOTE: You must hold the same role(s) as the role(s) you are configuring for the token. For example, if you are assigning the Organization Administrator role to the new token, you must be an Organization Administrator yourself! Please also see: https://docs.vmware.com/en/VMware-Cloud-services/services/Using-VMware-Cloud-Services/GUID-E2A3B1C1-E9AD-4B00-A6B6-88D31FCDDF7C.html

To generate an API token:

On the Cloud Services Console toolbar, click your user name and select My Account > API Tokens.

Click Generate Token.

Figure 02: Generate new API token.

Enter a name for the token.

Select the token’s Time to Live (TTL). The default duration is six months. Note: A non-expiring token can be a security risk if compromised. If this happens, you must revoke the token.

Define scopes for the token. To access the Aria Automation Config Cloud service, you must select the Organization Admin or Organization Owner roles as well as the Salt Master Service Role.

Figure 03: Specify the Organization and Service Roles.

(Optional) Set an email preference to receive a reminder when your token is about to expire.

Click Generate. The newly generated API token appears in the Token Generated window.

Save the token value to a secure location. After you generate the token, you will only be able to see the token’s name on the API Tokens page, not the token value itself. To regenerate the token, click Regenerate.

Step 6 – Connect your Salt master to Aria Automation Config (SaltStack Config) Cloud

After you have generated an API token, you use it to connect your Salt master to Aria Automation Config Cloud.

To connect your Salt master, first set an environment variable to store the API token you created in the previous step:

export CSP_API_TOKEN=<api token value>

Run the sseapi-config join command to connect your Salt master to Aria Automation Config Cloud, replacing the ssc-url and csp-url values with your region-specific URLs from the following table.

Region name  | SSC URL                                      | CSP URL
US           | https://ssc-gateway.mgmt.cloud.vmware.com    | https://console.cloud.vmware.com
DE (Germany) | https://de.ssc-gateway.mgmt.cloud.vmware.com | https://console.cloud.vmware.com
IN (India)   | https://in.ssc-gateway.mgmt.cloud.vmware.com | https://console.cloud.vmware.com
Region-specific URLs

Run the sseapi-config join command:

sudo sseapi-config join --ssc-url <SSC URL> --csp-url <CSP URL>

In my example the command will be:

sseapi-config join --ssc-url https://ssc-gateway.mgmt.cloud.vmware.com --csp-url https://console.cloud.vmware.com

If you need to redo the joining process, re-run the sseapi-config join command and pass the flag --override-oauth-app.

sseapi-config join --ssc-url <SSC URL> --csp-url <CSP URL> --override-oauth-app

The --override-oauth-app flag deletes the OAuth app used to get an access token and recreates it.

Restart the Salt master service.

sudo systemctl restart salt-master

Repeat this process for each Salt master. Note: After you connect each Salt master to Aria Automation Config Cloud, you can delete the API token. It is only required for connecting your Salt masters.

After you run the sseapi-config command, an OAuth app is created in your Organization for each Salt master. Salt masters use the OAuth app to get an access token which is appended to every request to Aria Automation Config Cloud. You can view the details of the OAuth app by selecting Organization > OAuth Apps.

The command also creates pillar data called CSP_AUTH_TOKEN on the Salt master. Pillars are structures of data stored on the Salt master and passed through to one or more minions that have been authorized to access that data. The pillar data is stored in /srv/pillar/csp.sls and contains the client ID, the secret, your organization ID, and CSP URL. If you need to rotate your secret, you can re-run the sseapi-config join command.

Example pillar data:

CSP_AUTH_TOKEN:
  csp_client_id: kH8wIvNxMJEGGmk7uCx4MBfPswEw7PpLaDh
  csp_client_secret: ebH9iuXnZqUOkuWKwfHXPjyYc5Umpa00mI9Wx3dpEMlrUWNy95
  csp_org_id: 6bh70973-b1g2-716c-6i21-i9974a6gdc85
  csp_url: https://console.cloud.vmware.com
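
To check that an authorized minion can actually read this data, you can, for example, query the pillar directly on the minion (assuming the pillar top file assigns csp.sls to it):

salt-call pillar.get CSP_AUTH_TOKEN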

Step 7 – Accept Salt master keys

After you have connected your Salt master(s) to Aria Automation Config Cloud, you must accept the Salt master's key in the Aria Automation Config Cloud user interface.

You must have the Superuser role in SaltStack Config Cloud to accept the Salt master’s key.

To accept the Salt master’s key:

  1. Log in to the SaltStack Config Cloud user interface.
  2. From the top left navigation bar, click the Menu, then select Administration to access the Administration workspace. Click the Master Keys tab.
  3. Check the box next to the master key to select it. Then, click Accept Key.
  4. If you have already connected your Salt minions to your Salt master, an alert appears indicating that you have pending minion keys to accept. To accept these minion keys, go to Minion Keys > Pending.
    1. Check the boxes next to your minions to select them. Then, click Accept Key.
    The key is now accepted. After several seconds, the minion appears under the Accepted tab and in the Targets workspace.

Figure 04: Accepted master keys.

You can verify that your Salt master and Salt minions are communicating by running a test.ping command in the Aria Automation Config Cloud user interface.
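
Alternatively, the same check can be run from the Salt master's command line:

salt '*' test.ping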

Stay safe.

Thomas – https://twitter.com/ThomasKopton

Salt Extension Modules for VMware – Quick How-To

My fellow colleague Vincent Riccio described here in his blog post the open-source SaltStack modules that provide hooks into components such as VMware Cloud on AWS, NSX-T, and vSphere.
These modules are a fantastic way to implement prescriptive configuration management across various VMware infrastructure components, using the same solution you use for software and configuration management of your operating systems and applications: vRealize Automation SaltStack Config.

In this blog post, I will show you how easy it is to install and use the Salt Extension Modules for VMware using the vSphere vCenter module as an example.

Pre-Requisites

I have modified the following Quickstart to fit into my SaltStack setup.

The components running in my lab for this quick demo are:

  • vRealize Automation SaltStack Config instance
  • SaltStack minion on a Linux VM

The next picture shows my Salt minion running on a CentOS 8 Linux VM. This will be the dedicated minion I use to execute the VMware modules.

Figure 1: Salt minion for the extension modules.

Configuration Steps

Step 1: We need to provide basic information to let SaltStack connect to the vCenter Server. Usually, we use Salt pillars to specify such configuration variables. In the next picture, you see the pillar I have created for my vCenter instance.

Figure 2: Salt pillar containing vCenter login information.

Please be aware that the user name is case-sensitive.
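
For reference, the pillar shown in Figure 2 corresponds to the following structure (values redacted; it matches the salt-call output in step 3):

vmware_config:
  host: vc-demo.xxx.xxx
  password: xxxxxxx
  user: Administrator@demo.local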

Step 2: Update the target, in my use case the dedicated minion, to include the data in this pillar.

Figure 3: Updating the target with the pillar data.

Step 3: With the following command, executed on the target Salt minion, we can check whether the pillar has been applied and the minion has all the needed information.

[root@tk-lin-131 ~]# salt-call pillar.items
local:
    ----------
    vmware_config:
        ----------
        host:
            vc-demo.xxx.xxx
        password:
            xxxxxxx
        user:
            Administrator@demo.local

Step 4: Install the Salt Extension Modules for VMware on the minion with the following command as described in the Quickstart.

$ salt-call pip.install saltext.vmware

In case you receive an error pointing to an outdated pip version, simply upgrade pip on the minion:

python3 -m pip install --upgrade pip

Step 5: Check if the modules are available on your minion (the output is truncated to display only the relevant modules):

[root@tk-lin-131 ~]# salt-call --local sys.list_modules
local:
    - nsxt_compute_manager
    - nsxt_ip_blocks
    - nsxt_ip_pools
    - nsxt_license
    - nsxt_manager
    - nsxt_policy_segment
    - nsxt_policy_tier0
    - nsxt_policy_tier1
    - nsxt_transport_node
    - nsxt_transport_node_profiles
    - nsxt_transport_zone
    - nsxt_uplink_profiles
    - vmc_dhcp_profiles
    - vmc_direct_connect
    - vmc_distributed_firewall_rules
    - vmc_dns_forwarder
    - vmc_nat_rules
    - vmc_networks
    - vmc_public_ip
    - vmc_sddc
    - vmc_sddc_host
    - vmc_security_groups
    - vmc_security_rules
    - vmc_vpn_statistics
    - vmware_cluster
    - vmware_cluster_drs
    - vmware_cluster_ha
    - vmware_datacenter
    - vmware_datastore
    - vmware_dvswitch
    - vmware_esxi
    - vmware_folder
    - vmware_license_mgr
    - vmware_tag
    - vmware_vm

Step 6: Check if the minion is successfully connecting to the vCenter specified in the pillar and if the modules are working as expected (output truncated for visibility):

[root@tk-lin-131 ~]# salt-call vmware_datacenter.list
local:
    - Demo-Datacenter
[root@tk-lin-131 ~]# salt-call vmware_cluster.get cluster_name=HP-Cluster datacenter_name=Demo-Datacenter
local:
    ----------
    drs:
        ----------
        advanced_settings:
            ----------
        default_vm_behavior:
            fullyAutomated
        enable_vm_behavior_overrides:
            True
        enabled:
            True
        vmotion_rate:
            3
    drs_enabled:
        True

Step 7: Now we can start creating Salt state files, which will be an integral and prescriptive part of our configuration management.

The following state file is just a very simple example. It configures a few security settings on all my ESXi hosts in the vCenter we specified in the pillar in step 1.

# Set the maximum password age on all ESXi hosts in the configured vCenter
set_sec_config_max_days:
  module.run:
    - name: vmware_esxi.set_advanced_config
    - config_name: Security.PasswordMaxDays
    - config_value: 99998

# Set the time (in seconds) after which a locked account is unlocked
set_sec_config_unlock_time:
  module.run:
    - name: vmware_esxi.set_advanced_config
    - config_name: Security.AccountUnlockTime
    - config_value: 899

Step 8: We can apply this state file to our dedicated minion using, for example, a Salt job, as shown in the next picture.

Figure 4: Applying the state file as Salt job.
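
For a quick test outside of a Salt job, the state can also be applied directly on the minion, assuming the state file is served to that minion as, for example, esxi_security.sls (the file name is hypothetical):

salt-call state.apply esxi_security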

Step 9: In the last step we can finally check the outcome. We can use the corresponding get command on our minion or just review the settings in vCenter.

[root@tk-lin-131 ~]# salt-call vmware_esxi.get_advanced_config config_name=Security
local:
    ----------
    hp-demo01.xxx.yyy:
        ----------
        Security.AccountLockFailures:
            5
        Security.AccountUnlockTime:
            899
        Security.PasswordHistory:
            0
        Security.PasswordMaxDays:
            99998
        Security.PasswordQualityControl:
            retry=3 min=disabled,disabled,disabled,7,7

Figure 5: Advanced configuration of an ESXi host in vCenter.

Some final notes

Please note that after changing a state file, it may take Salt a few seconds to reflect the change in the virtual file system. If you run a Salt job immediately after changing the state file, Salt may use the old version.
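
On a standalone Salt master you could force a refresh with the fileserver runner; whether this also applies to the SaltStack Config file server is an assumption on my part:

salt-run fileserver.update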

In my example, I used an execution module. Usually, you would use a state module to check a setting and only apply a configuration if there is a deviation. At the time of writing this post, the ESXi state module does not support checking the Advanced Configuration. Since this is an open-source module, anyone can try to implement it. :-)

Stay safe.

Thomas – https://twitter.com/ThomasKopton