Servers
You can use different server setups depending on how you intend to use the ftrack service. It is often more convenient and flexible to use virtual machines instead of bare-metal servers.
The default configuration of ftrack requires 30GB RAM and 10 CPUs at runtime in the Kubernetes cluster. If you have less than that, you need to scale down the number of replicas, which will affect the performance of the service.
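The exact way to scale down replicas depends on how ftrack is deployed in your cluster. As a minimal sketch, assuming the services run as deployments in an ftrack namespace, you could reduce one of them with kubectl:
# Scale a deployment down to a single replica.
# DEPLOYMENT_NAME and the "ftrack" namespace are placeholders - use the names from your installation.
kubectl scale deployment DEPLOYMENT_NAME --replicas=1 -n ftrack
# Verify the new replica count.
kubectl get deployments -n ftrack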
Here are some recommended server setups for different production sizes:
Small production
- Database server with MariaDB 10.11. The database should have its own dedicated server where it can consume all resources. It should have the following as a starting point:
- At least 2 CPU cores
- 10GB RAM
- 50GB SSD
- Be prepared to scale up as the database grows in size and CPU usage approaches 100%.
- Single server Kubernetes cluster with:
- At least 15 CPU cores
- 40GB RAM
- 100GB disk
Example of a small production: Uses the standard ftrack setup and parts of the ftrack API for pipeline integration. Recommended for productions of 5-50 users. It is possible to run a larger production on a single server if it has more CPU cores and RAM.
Large production
- Database server with MariaDB 10.11. The database should have its own dedicated server where it can consume all resources. It should have the following as a starting point:
- 10 CPU cores
- 30GB RAM
- 100GB SSD
- Be prepared to scale up as the database grows in size and CPU usage approaches 100%.
- Kubernetes cluster with at least three servers (one master and at least two nodes). The master can be smaller as it is only responsible for the Kubernetes API, but it should have at least 4GB RAM and two cores. You can add more nodes if needed and increase the number of replicas for the various services. Nodes should have at least:
- 10 CPU cores
- 15GB RAM
- 100GB disk
Example of a large production: Utilizes the full potential of the ftrack API to build automation and customized pipeline integrations. Recommended for 50 to 300 users. For more than 300 users, the design allows for further scaling suited to your particular workflow and requirements.
Staging
- You can use a single server to run both the database and Kubernetes cluster. If you wish to limit memory usage, you can scale down all services so that they use a single replica.
Are you using a cloud provider?
If you are using a cloud provider such as AWS or GCP, it is possible to run ftrack there instead and use some of their managed services. For AWS, please read our article about which services to use, and use it as inspiration if you are on GCP.
Kubernetes cluster
A Kubernetes (https://kubernetes.io/) cluster is required to run ftrack on-prem. If you already have a working cluster, you can use that cluster and skip this section.
If you don't have a Kubernetes cluster, don't worry – they are easy to set up. There are even one-click options available to get a simple cluster up and running.
The main benefits of Kubernetes are its scalability and reliability when running a cluster on multiple servers. However, Kubernetes can run on a single server. Even if the benefits of scalability are lost, a single server will still be sufficient for some production workflows.
There are several ways to set up a production-grade Kubernetes cluster, like using kubeadm or k3s (by Rancher). Below, we've shown how to set up a single-node Kubernetes cluster using the certified Kubernetes distribution k3s on a single server. Note that you can use a different version of k3s, but avoid using the latest and greatest.
# On a Rocky Linux 9 server start a Kubernetes cluster with k3s.
export INSTALL_K3S_BIN_DIR=/usr/bin
export INSTALL_K3S_VERSION=v1.28.13+k3s1
curl -sfL https://get.k3s.io | sh -
# Set the config globally so that helm can find it.
mkdir -p ~/.kube
kubectl config view --raw > ~/.kube/config
# After about 30 seconds the following should show the node as ready.
kubectl get nodes
For a more scalable and durable setup, we recommend establishing a multi-node cluster wherein the workloads do not run on the master node.
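If you take that route, a common approach is to taint the control-plane node so that ordinary workloads are not scheduled on it. A minimal sketch, assuming a node name taken from kubectl get nodes:
# Prevent regular workloads from being scheduled on the control-plane node.
# "master-node" is a placeholder for the actual node name shown by "kubectl get nodes".
kubectl taint nodes master-node node-role.kubernetes.io/control-plane:NoSchedule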
After setting up a Kubernetes cluster, verify that the cluster is working as expected by making sure that all system pods are running:
kubectl get pods -n kube-system
We recommend setting SELinux to permissive mode, since it can cause problems depending on which Kubernetes distribution you are using; k3s has support for SELinux in recent versions. The OS firewall can also cause problems; either follow our firewall recommendation below or disable it.
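As an example, SELinux can be switched to permissive mode on Rocky Linux 9 like this (adjust to your own security policy):
# Switch SELinux to permissive mode immediately.
setenforce 0
# Make the change persistent across reboots.
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Verify the current mode.
getenforce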
Kubectl
kubectl is the Kubernetes command-line tool. You should have access to this once you set up your cluster. Learn about installing kubectl here:
https://kubernetes.io/docs/tasks/tools/install-kubectl/
Helm
Helm is a Kubernetes package manager that you can use to install software. ftrack requires Helm version 3 for installation. You can download the binary here:
https://github.com/helm/helm/releases/
Add the binary to your system like this:
curl -O https://get.helm.sh/helm-v3.16.1-linux-amd64.tar.gz
tar -xvf helm-v3.16.1-linux-amd64.tar.gz
cp ./linux-amd64/helm /usr/bin/
Now verify that helm is working as expected by running:
helm list
and make sure it does not say something like "Error: Kubernetes cluster unreachable". If it does, then you need to go back to the k3s installation guide above and make sure you created the "~/.kube/config" file.
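If helm cannot find the cluster even though the config file exists, one way to point it at the kubeconfig explicitly is:
# Point helm (and kubectl) at the kubeconfig created earlier.
export KUBECONFIG=~/.kube/config
helm list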
Firewall
Even though ftrack will not be accessed from the internet, we highly recommend protecting all servers used for running ftrack with a firewall, with only the web traffic ports 80 and 443 open. If you have a cluster with multiple nodes, or the database on a separate server, you must allow communication between those servers.
On Rocky 9, the firewall must also be configured to allow Kubernetes to communicate with itself.
firewall-cmd --permanent --add-port=6443/tcp #apiserver
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 #pods
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 #services
firewall-cmd --reload
After making changes to the firewall you should also restart the k3s service. Read more about k3s ports here.
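For example, assuming k3s was installed with the script above (which creates a k3s systemd service):
# Restart k3s so it picks up the new firewall configuration.
systemctl restart k3s
# For a multi-node cluster you may also need to open the ports k3s uses between
# nodes, for example 8472/udp (flannel VXLAN) and 10250/tcp (kubelet metrics).
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload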
Verify connection to the cluster
Before continuing with the installation, we recommend verifying that the connection to the cluster is working as expected, both from within the cluster and from outside. A k3s cluster will have an ingress controller running which will accept incoming traffic. If you visit the IP address of your cluster master or nodes in a browser, you should see a plain text message like:
404 page not found
And from the master using command line:
# curl localhost
404 page not found
The output shows that your cluster is responding on port 80 but responds with a 404 because there is nothing to show. If you get a different error such as "connection refused" then your cluster is not working as expected.
Please ensure that the connection to the cluster is working before you continue with the installation. If you are not using k3s, you might have to install the nginx ingress controller manually using helm.
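If you do need to install it yourself, a typical helm-based installation of the ingress-nginx controller looks roughly like this (chart defaults may need adjusting for your environment):
# Add the ingress-nginx chart repository and install the controller.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace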
Database
ftrack requires a database server running MariaDB 10.11. A dedicated server for the database is required for production setups.
MariaDB can be installed on Rocky 9 using the process below.
For any other distro, please read the instructions here: https://downloads.mariadb.org/mariadb/repositories/
Add the following content to /etc/yum.repos.d/MariaDB.repo
# MariaDB 10.11 RedHatEnterpriseLinux repository list - created 2024-09-18 12:13 UTC
# https://mariadb.org/download/
[mariadb]
name = MariaDB
# rpm.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
# baseurl = https://rpm.mariadb.org/10.11/rhel/$releasever/$basearch
baseurl = https://mirror.group.one/mariadb/yum/10.11/rhel/$releasever/$basearch
# gpgkey = https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
gpgkey = https://mirror.group.one/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck = 1
Then install MariaDB:
dnf install MariaDB-server MariaDB-client
systemctl start mariadb
systemctl enable mariadb
Configure and add a default database
/usr/bin/mysqladmin -u root password 'rootpass'
# Create database and ftrack user.
mysql -u root -prootpass -e 'create database ftrack'
mysql -u root -prootpass -e 'grant all privileges on ftrack.* to "ftrack_user"@"localhost" identified by "ftrack_pass"'
mysql -u root -prootpass -e 'grant SUPER on *.* to "ftrack_user"@"localhost"'
# Allow ftrack_user access from remote hosts.
mysql -u root -prootpass -e 'grant all privileges on ftrack.* to "ftrack_user"@"%" identified by "ftrack_pass"'
mysql -u root -prootpass -e 'grant SUPER on *.* to "ftrack_user"@"%"'
# Import an existing database.
curl -o /tmp/database.sql https://s3-eu-west-1.amazonaws.com/ftrack-deployment/localinstall/backups/workflows/database.sql
mysql --user=ftrack_user --password=ftrack_pass ftrack < /tmp/database.sql
Firewall
The database server must allow access on port 3306 from the Kubernetes cluster containers.
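On a Rocky 9 database server using firewalld, that could look like the sketch below; the source subnet is an example and must match the addresses your cluster nodes or pods connect from.
# Allow MariaDB traffic (port 3306) from the cluster.
# 10.0.0.0/24 is an example subnet - replace it with your cluster's addresses.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port port="3306" protocol="tcp" accept'
firewall-cmd --reload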
Verify database connection
Before continuing with the installation, we recommend verifying that the connection to the database is working as expected from within the cluster.
You can run a container interactively and test from it:
kubectl run mariadb -i --tty --image=mariadb:10.11 --restart=Never --rm -- /bin/bash
After a few moments you should have a bash shell, where you can run something like the following:
mysql -h IP_ADDRESS_OF_MARIADB -u ftrack_user -pftrack_pass -e 'select username from ftrack.user'
The connection verification must be successful before installing ftrack.
Performance
Before using the database in production, it is important to configure it for optimal performance. Please read about database performance here.
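As a rough illustration only, a dedicated MariaDB server is often tuned with InnoDB settings along these lines in /etc/my.cnf.d/server.cnf; the values below are placeholders and should be sized to your hardware and the guidelines referenced above.
# Illustrative example only - size these values to your server.
[mysqld]
innodb_buffer_pool_size = 8G
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 2
max_connections = 500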
File storage
All files uploaded to ftrack must be stored somewhere.
If you are running a single-node cluster or a small cluster dedicated to running ftrack, storage can be a simple folder or mounted folder on your servers. However, we recommend using network-attached storage, such as NFS, mounted directly from the cluster.
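For example, an NFS export could be mounted at the data folder location on each node like this (the server name and export path are placeholders):
# Mount an NFS export at /ftrack/data; replace the server and export path with your own.
mkdir -p /ftrack/data
mount -t nfs nfs-server.example.com:/export/ftrack /ftrack/data
# To make the mount persistent across reboots, add a matching line to /etc/fstab:
# nfs-server.example.com:/export/ftrack  /ftrack/data  nfs  defaults  0 0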
The data folder must contain two folders – attachments and tmp – both of which are used by ftrack. Below you can find an example of how to create this data folder if you were to locate it at /ftrack/data.
mkdir -p /ftrack /ftrack/data /ftrack/data/attachments /ftrack/data/tmp
# Download demo data and add it as /ftrack/data/attachments.
curl -o files.tar.gz https://s3-eu-west-1.amazonaws.com/ftrack-deployment/localinstall/backups/workflows/files.tar.gz
tar -xf files.tar.gz -C /ftrack/data/
# Update permissions to ensure server can read/write.
chown -R 500:500 /ftrack/data
chmod -R a+rwX /ftrack/data
ftrack writes files as uid 500 and gid 500 by default.