Determine disk space requirements
Resource requirements depend on the number of PCAP files you want to process daily. Use the following initial specifications as a starting point:
Files per busy hour | Average packets per file | vCPU | Memory (GB) | Disk (GB), 3-day retention | Disk (GB), 90-day retention |
50 | 2.5K | 12 | 48 | 50 | 150 |
100 | 2.5K | 16 | 64 | 50 | 150 |
50 | 25K | 16 | 64 | 75 | 1000 |
25 | 250K | 24 | 96 | 150 | 10000 |
Note: Baseline assumptions: 100 files processed per day, 3-day retention period.
The Agility Monitoring Stack, which includes a comprehensive metrics, logging, and tracing platform, requires allocating additional resources:
CPU: 2
Memory: 4GB
Disk: 40GB
The disk requirement applies to the boot disk when it is the sole storage option, or to the external disk when one is attached instead.
Note: The values associated with AGILITY application or its monitoring stack can be customized according to specific requirements and file sizes.
Choose your installation
A simple and straightforward way to utilize AGILITY is by provisioning an existing Cloud image. This option is available in both public and private cloud environments.
Access to the VM image is provided by B-Yond.
On-Premises Virtualization Platforms
Public Clouds
AWS: The AMI (Amazon Machine Image) ID will be shared with the target account.
Azure: The Azure VM image will be shared with the target subscription/tenant.
Google Cloud: The Google Cloud VM image will be shared with the target organization.
If you are using other Cloud providers or virtualization solutions, you may need to convert the qcow2 or VMware disk images to the format required by your platform. Consult the documentation of your specific provider or platform for instructions on image conversion.
Using the B-Yond provided images is recommended as they are pre-configured and optimized for running AGILITY.
Begin Installation
OpenStack
From the email sent from B-Yond, download the qcow2 image specifically configured for OpenStack.
(As an administrator) Create an image:
glance image-create --disk-format qcow2 --container-format bare --file ./Agility-X.YY.Z-AlmaLinux-X-GenericCloud-X.Y-YYYYMMDD.x86_64.qcow2 --min-disk 25 --min-ram 2048 --name Agility-X.Y.Z
(As an administrator) Create a member for the glance image:
glance member-create <image-id> <member-id>
(As an administrator) Accept the membership for the glance image:
glance member-update <image-id> <member-id> accepted
(As a user) Create a VM using the image (at minimum use m1.medium, which is 2 vCPU / 4096 MB RAM / 40 GB disk):
openstack server create --flavor <your-flavor> --image <image-id> agility --nic net-id=<network-id> --security-group <your-security-group> --key-name <your-key>
Next: Go to Access the VM.
VMware ESXi
From the email sent from B-Yond, download the provided VMware disk image specifically configured for VMware virtualization environments.
To import a virtual machine stored on a VMware Hosted product to an ESX/ESXi host, run:
vmkfstools -i virtual_machine.vmdk /vmfs/volumes/datastore/my_virtual_machine_folder/virtual_machine.vmdk
Create the VM using the imported disk. Options:
Guest OS: Other Linux (64-bit)
Using the console, log in as root (password: almalinux).
Set up a static network configuration, e.g. using nmtui.
Increase the VM disk size:
- Increase the disk size from ESXi.
- Rescan the device: echo 1 > /sys/class/block/sda/device/rescan
- Recreate partition 2 with fdisk: printf "d\n\nn\n\n\n\np\nw\n" | fdisk /dev/sda
- Increase the filesystem size: xfs_growfs /dev/sda2
Configure SSH options, e.g. set authorized keys for the default cloud user almalinux or another user.
Note: For ESXi 8.0, use Guest OS: Other Linux (64-bit) and enable the LSI Logic Parallel SCSI controller option.
Next: Go to Access the VM.
Public Clouds (AWS, Azure, GCP, etc.)
Follow the procedures specified by your Cloud provider. These procedures typically include the following steps:
Image selection: Choose the AGILITY VM image obtained from B-Yond or the converted image.
Shape specification: Specify the number of virtual CPUs (vCPUs) and RAM for the instance.
Boot disk specification: Define the size and type of the boot disk.
Networking configuration: Configure the network settings for the VM.
Public SSH key(s): Provide the SSH key(s) that will be used to access the VM.
Provide a cloud-init script to run (this is generally an optional step).
Note: The VM boot time might take between 5 and 10 minutes in total.
Next: Go to Access the VM.
Access the VM
SSH in using the cloud user and the associated private key:
– Generic Cloud: almalinux
– AWS AMI: ec2-user
ssh -i <private_key> <cloud-user>@<vm_ip>
Verify that all components are up and running:
sudo su -
kubectl get pods -A
All Kubernetes pods should be in Running and Ready status.
Warning: When some pods are not running (e.g. Kafka, ZooKeeper), deleting them might fix the issue, but a VM reboot is recommended instead.
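The Running/Ready check above can be scripted for repeated use. A minimal sketch, assuming kubectl's default column layout (NAMESPACE, NAME, READY, STATUS, ...); the not_ready helper name is illustrative, not part of the product:

```shell
#!/usr/bin/env bash
# Sketch: flag pods that are not Running with all containers ready.
# Intended input: the output of `kubectl get pods -A --no-headers`.
not_ready() {
  awk '{
    split($3, r, "/")                          # READY column, e.g. "1/1"
    if ($4 != "Running" || r[1] != r[2])
      print $1 "/" $2 ": " $3 " " $4
  }'
}

# Example with canned output (one healthy pod, one crash-looping):
printf '%s\n' \
  'agility  api-0    1/1  Running           0  5m' \
  'kafka    kafka-0  0/1  CrashLoopBackOff  3  5m' | not_ready
```

Against a live VM you would pipe in real data, e.g. `kubectl get pods -A --no-headers | not_ready`; an empty result means everything is up.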
Access the user interface (UI):
Open your browser and enter the AGILITY VM IP, e.g. https://10.0.0.1/cv/
Use the following credentials:
username: agility-admin@b-yond.com
password: agility-admin@b-yond.com
Note: The default password must be changed after the first login. It can later be modified following the Manage Agility Local Users section.
Next: Configuration
Configure DNS Servers (Optional)
The DNS server is by default provided via DHCP. This section is relevant if you need to specify an additional DNS server or if the DHCP option is unavailable.
There should be at least 1 (one) nameserver defined for AGILITY in the VM.
To configure nameservers, domain search suffixes, etc., use the NetworkManager tool:
Check the current DNS configuration:
cat /etc/resolv.conf
Example output:
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 169.254.169.254
Identify the network connection to configure:
sudo nmcli con show
Example output:
NAME         UUID                                  TYPE      DEVICE
System eth0  5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  eth0
cni0         4e4c9ecf-cc82-49eb-bda6-99317c953691  bridge    cni0
flannel.1    021d7133-dad5-4a02-a035-b11009ac943a  vxlan     flannel.1
Add a new DNS server:
sudo nmcli con mod <connection-name> +ipv4.dns <dns-server-ip>
sudo nmcli con up <connection-name>
For example, to add Google’s DNS server to the device eth0, the commands are:
sudo nmcli con mod "System eth0" +ipv4.dns 8.8.8.8
sudo nmcli con up "System eth0"
To verify the change, run again:
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 169.254.169.254
Removing DHCP DNS
If you need to remove the DNS server specified by DHCP, run the following commands:
sudo nmcli con mod <connection-name> ipv4.ignore-auto-dns yes
sudo nmcli con up <connection-name>
This will leave only the DNS servers configured manually.
Changing the domain name
If you need to change the domain name, use the ipv4.dns-search option. Ensure that the correct fully qualified domain name (FQDN) is set beforehand using the hostnamectl set-hostname command.
Execute the following commands:
sudo nmcli con mod <connection-name> +ipv4.dns-search <domain>
sudo nmcli con up <connection-name>
For example, to add a domain name to the search list (here example.com), run:
$ sudo nmcli con mod "System eth0" +ipv4.dns-search example.com
$ sudo nmcli con up "System eth0"
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 8.8.8.8
nameserver 169.254.169.254
Configure System Clock (Optional)
This section is crucial for situations where non-default NTP servers are required or when there are limitations in accessing external public ones.
AGILITY VM facilitates clock synchronization using the Chrony service, which is enabled by default and synchronizes with a pool of public NTP servers.
Using a Custom NTP Server
To synchronize the VM clock with a specific NTP server:
Check the current configured servers:
chronyc sources
Example output:
$ chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 68.64.173.196                 2  10   377   132  +6288us[+6288us] +/- 119ms
^- tick.srs1.ntfo.org            3   9   377    57   -693us[ -693us] +/- 148ms
^* ntp1.wiktel.com               1  10   377   738   +362us[ +214us] +/-  22ms
^+ 23.150.40.242                 2  10   377   103  -1361us[-1361us] +/-  32ms
Add your server definition in the file /etc/chrony.conf:
server <my-server-ip>
For example, using a public cloud NTP server:
echo "server 169.254.169.254" | sudo tee -a /etc/chrony.conf
Comment out the entry pool 2.almalinux.pool.ntp.org iburst to enforce using only the specified NTP server:
sudo sed -i '/^pool 2\.almalinux\.pool\.ntp\.org iburst/s/^/#/' /etc/chrony.conf
Restart the Chrony service:
sudo systemctl restart chronyd
Check that the changes were applied (wait until the status changes from ^? to ^*; it might take several minutes):
chronyc sources
Example output:
$ chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.254               2   6     3    51   -491us[ -491us] +/-  23ms
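Rather than re-running chronyc sources by hand until the ^* marker appears, the wait can be scripted. A sketch; the wait_for_sync helper and its retry/interval defaults are illustrative, not part of the product:

```shell
#!/usr/bin/env bash
# Sketch: poll chronyc until a source is selected (the "^*" marker).
wait_for_sync() {
  local tries=${1:-30} interval=${2:-10}      # ~5 minutes by default
  for ((i = 0; i < tries; i++)); do
    # A selected (synchronized) source line starts with "^*"
    if chronyc sources | grep -q '^\^\*'; then
      echo "clock synchronized"
      return 0
    fi
    sleep "${interval}"
  done
  echo "timed out waiting for NTP sync" >&2
  return 1
}
```

Usage would simply be `wait_for_sync` after restarting chronyd, or `wait_for_sync 60 5` for a tighter polling interval.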
Enable NTP and trigger a synchronization:
sudo timedatectl set-ntp true
sudo chronyc -a makestep
Verify the clock is synchronized:
timedatectl
Example output:
$ timedatectl
               Local time: Mon 2024-03-11 22:05:42 UTC
           Universal time: Mon 2024-03-11 22:05:42 UTC
                 RTC time: Mon 2024-03-11 22:05:43
                Time zone: UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
To confirm Chrony tracking, run the command:
chronyc tracking
The output also shows the configured NTP server:
Reference ID    : A9FEA9FE (169.254.169.254)
Stratum         : 3
Ref time (UTC)  : Mon Mar 11 22:06:03 2024
System time     : 0.000000751 seconds slow of NTP time
Last offset     : -0.000023889 seconds
RMS offset      : 0.000017941 seconds
Frequency       : 18.960 ppm slow
Residual freq   : -0.001 ppm
Skew            : 0.011 ppm
Root delay      : 0.000524478 seconds
Root dispersion : 0.010530258 seconds
Update interval : 1026.3 seconds
Leap status     : Normal
Ensure the Chrony service is available after reboot:
sudo systemctl enable chronyd
Configure the time zone
Time zone definitions are stored in the /usr/share/zoneinfo directory. To set your system to the appropriate time zone, such as Europe/Paris, execute the following command:
sudo timedatectl set-timezone Europe/Paris
Additionally, you can confirm your current time zone by inspecting the /etc/localtime file:
ls -l /etc/localtime
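On most Linux systems /etc/localtime is a symlink into /usr/share/zoneinfo, so the zone name can be recovered from the link target. A small sketch; the zone_from_link helper name is illustrative:

```shell
#!/usr/bin/env bash
# Sketch: derive the time zone name from a localtime symlink target,
# e.g. the output of `readlink -f /etc/localtime`.
zone_from_link() {
  # Strip everything up to and including the zoneinfo directory.
  sed 's|.*/zoneinfo/||' <<<"$1"
}

zone_from_link "/usr/share/zoneinfo/Europe/Paris"   # prints: Europe/Paris
```

On a live VM this pairs with readlink, e.g. `zone_from_link "$(readlink -f /etc/localtime)"`.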
Next: Configuration
Attach an External Disk (Optional)
In cases where external disk attachment is necessary, follow these steps. The exact procedure depends on the type of external disk used.
Prepare the VM
Access the VM using ssh
Stop the processes
sudo su -
systemctl stop k3s
Place the persisted data into a different location:
mv /var/lib/rancher/k3s/storage /var/lib/rancher/k3s/storage-bkp
NFS example
Create a directory on the VM to serve as the mount point for the NFS share:
sudo mkdir -p /var/lib/rancher/k3s/storage
Edit the /etc/fstab file as root using a text editor, such as nano or vim:
sudo nano /etc/fstab
Add an entry at the end of the /etc/fstab file to specify the NFS share and the mount point. The entry should follow this format:
<NFS_server_IP_or_hostname>:<remote_directory> <local_mount_point> nfs defaults 0 0
Replace <NFS_server_IP_or_hostname> with the IP address or hostname of the NFS server, <remote_directory> with the path of the directory you want to mount, and <local_mount_point> with the path of the local mount point you created in Step 1.
For example, if the NFS server IP address is 192.168.1.100 and the remote directory you want to mount is /data, the entry would look like this:
192.168.1.100:/data /var/lib/rancher/k3s/storage nfs defaults 0 0
Save the changes and exit the text editor.
To mount all entries listed in /etc/fstab, use the mount -a command.
Ensure that your VM has network connectivity to the NFS server and that you have the necessary permissions to access the NFS share.
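Composing the fstab entry from variables and reviewing it before appending helps avoid typos that could leave the VM unbootable. A sketch using the example values from the text (the variable names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: build the NFS fstab entry from variables. The server/export values
# below are the examples from the text, not real infrastructure.
NFS_SERVER=192.168.1.100
NFS_EXPORT=/data
MOUNT_POINT=/var/lib/rancher/k3s/storage

ENTRY="${NFS_SERVER}:${NFS_EXPORT} ${MOUNT_POINT} nfs defaults 0 0"
echo "${ENTRY}"
# Review the printed line, then append it as root:
#   echo "${ENTRY}" | sudo tee -a /etc/fstab
```

After appending, `sudo mount -a` applies the entry, as described above.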
Block volume example
Your cloud provider gives you the ability to provision block storage and attach the disk to your VM. Follow the recommended procedures; they may involve executing several iscsi commands.
Once attached, format the disk (e.g., sdb):
export DEV_PATH=sdb
export MOUNT_PATH=/var/lib/rancher/k3s/storage
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/${DEV_PATH}
sudo mkdir -p ${MOUNT_PATH}
sudo mount -o discard,defaults /dev/${DEV_PATH} ${MOUNT_PATH}
sudo chmod 775 ${MOUNT_PATH}
Persist the changes:
sudo cp /etc/fstab /etc/fstab.backup
UUID=$(sudo blkid -s UUID -o value /dev/${DEV_PATH})
echo $UUID
echo "UUID=${UUID} ${MOUNT_PATH} ext4 _netdev,nofail 0 2" | sudo tee -a /etc/fstab
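Because the fstab line mixes a command substitution with literal fields, building it in a variable first makes the tee step less error-prone. A sketch with a placeholder UUID (a real run would take the value from blkid):

```shell
#!/usr/bin/env bash
# Sketch: compose the block-volume fstab line. The UUID below is a
# placeholder; normally it comes from: sudo blkid -s UUID -o value /dev/sdb
UUID="example-uuid"
MOUNT_PATH=/var/lib/rancher/k3s/storage

LINE="UUID=${UUID} ${MOUNT_PATH} ext4 _netdev,nofail 0 2"
echo "${LINE}"
# Review the printed line, then append it as root:
#   echo "${LINE}" | sudo tee -a /etc/fstab
```

The _netdev,nofail options keep the VM bootable even if the volume is detached later.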
Restore data
Note: These steps assume you have the necessary permissions and understand the implications of deleting the old data. Exercise caution while performing these operations.
Copy the data to the newly mounted external location:
sudo su -
cp -R /var/lib/rancher/k3s/storage-bkp/* /var/lib/rancher/k3s/storage/
Start the processes:
systemctl start k3s
Wait a few seconds and ensure that all services are in the Running state:
kubectl get pods -n agility
Verify that the system is functioning correctly by performing tasks in the UI.
Once you have confirmed everything is working as expected, you can delete the old data:
rm -fr /var/lib/rancher/k3s/storage-bkp