Platform Setup
Installing Kamea requires setting up a few things first.
Customer Preparation
Several steps require having rights on the Azure Subscription that you probably won't have. So, you'll have to make some requests to the customer's IT department.
- Ask for the creation of a Resource Group for the development environment.
- Ask for rights on the Azure Resource Group with a Contributor role to create resources.
- Warn the customer to check that all required resources are available for creation on their Azure Tenant.
- Ask for the creation of an Application Registration so that the pipeline has resource creation rights, and ask for it to be given the Owner role on your resource group. You'll need the application ID and secret.
- Ask for the creation of the SaaS InfluxDB resource (creating it requires having the user account that will "sign" the subscription, so it must be done from a customer user account).
- If using AD B2C, and if the customer creates the tenant (which means you don't have the admin role):
  - Request the following roles on it: Application Administrator and External ID User Flow Administrator.
  - You'll have to create the App Registration for the Graph API users' access. Once it's done, you must request an admin to grant consent on it.
Once everything is ready, the setup can be started. A few manual operations are required before automatically deploying the Azure Resources.
Required Resources
- App service plan
- App service
- Application Insights
- Azure Container Apps
- Azure Container Apps environment (note that it requires the Microsoft.App and Microsoft.ContainerService resource providers to be registered on the Azure subscription, even though the docs list Microsoft.ContainerService as required only for AKS)
- Azure Maps Account
- Azure IoT Hub Device Provisioning Service
- B2C Tenant (Active Directory)
- CDN profiles
- Disk (if using VM)
- Function app
- IoT Hub
- Log Analytics workspace
- Network Interface (if using VM)
- Network security group (if using VM)
- Public IP address (if using VM)
- SaaS subscription
- Service Bus Namespace
- SQL Database
- Storage account
- Virtual Machine (if necessary)
InfluxDB
Log into the customer's cloud InfluxDB instance, and:
- Note its URL (with the trailing slash after the domain). It should look like this: https://westeurope-1.azure.cloud2.influxdata.com/
- Create an API Token that has read & write access to the buckets. Note its value.
- Go to the Organization menu, and note its ID and name.
Use the values you've noted to fill the following environment variables on GitLab: INFLUXDB_ORG (the organization name), INFLUXDB_ORG_ID, INFLUXDB_TOKEN, INFLUXDB_URL.
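As a sanity check before filling the variables, the URL format described above (https scheme, trailing slash) can be verified with a small helper. This is a hypothetical snippet for illustration, not part of Kamea's code:

```python
from urllib.parse import urlparse

def normalize_influx_url(url: str) -> str:
    """Ensure the InfluxDB URL uses https and has the trailing slash
    the platform expects, e.g. https://westeurope-1.azure.cloud2.influxdata.com/"""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"INFLUXDB_URL must use https, got: {url!r}")
    return url if url.endswith("/") else url + "/"

# Missing trailing slashes are fixed rather than rejected:
print(normalize_influx_url("https://westeurope-1.azure.cloud2.influxdata.com"))
```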
GitLab
First, make sure that the GitLab runners are available. Otherwise, the CI/CD pipelines won't run. Also, make sure that the container registry feature is available on your GitLab instance.
Create a deploy token with the read_registry permission. Add its name and value in the GitLab environment variables GITLAB_REGISTRY_USERNAME and GITLAB_REGISTRY_TOKEN.
Create a login/password for SQL Server, and add the environment variables ADMIN_SQL_LOGIN and ADMIN_SQL_PASSWORD.
With the information given by the customer after creating the pipeline Application Registration on Azure, create the environment variables ARM_CLIENT_ID and ARM_CLIENT_SECRET.
You'll easily find the values for these two variables in Azure: ARM_TENANT_ID (find it in the Azure Active Directory resource - not the B2C one, the main one) and ARM_SUBSCRIPTION_ID (find it in the Resource Group overview).
Additionally, set these environment variables. Adjust them to your use case if needed (especially the SKUs):
Token Expiration
When creating the access token, be mindful of the expiration date. Set it far enough in the future to avoid disrupting the CI/CD pipeline. However, for security reasons, don't set it to "never expire". A good practice is to set it to 1 year and create a reminder to rotate it before expiration. See Periodic infrastructure tasks for a complete list of tokens that need to be monitored and rotated.
Key | Value |
---|---|
ARM_SKIP_PROVIDER_REGISTRATION | true |
AZURE_RESOURCE_GROUP | resource group name |
DOC_GITLAB_PAGES_BASE_URL | GitLab Pages path |
DOC_STORAGE_ACCOUNT_NAME | Name of the storage account used to deploy the documentation as a static website (only when deploying the doc to Azure) |
PROJECT_NAME | prefix for the Azure resources |
SKU_CDN | Standard_Microsoft |
SKU_NAME_IOT_HUB | S1 |
SKU_NAME_MANAGEMENT_API_DATABASE | Basic |
SKU_REPLICATION_TYPE_STORAGE_ACCOUNT_API | LRS |
SKU_REPLICATION_TYPE_STORAGE_ACCOUNT_FRONTEND | LRS |
SKU_SERVICE_BUS | Standard |
SKU_NAME_WEBAPP_PLAN | B2 |
SKU_TIER_STORAGE_ACCOUNT_API | Standard |
SKU_TIER_STORAGE_ACCOUNT_FRONTEND | Standard |
SKU_REDIS_ACA_MEMORY | 1.0Gi |
SKU_REDIS_ACA_VCPU | 0.5 |
KAMEA_PROJECT_DIR | ${CI_PROJECT_DIR} (Kamea's root; change it if Kamea is used as a submodule) |
DOCKER_AUTH_CONFIG | See https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#determine-your-docker_auth_config-data |
USE_REDIS_FOR_TELEMETRIES | Set it to true if you want to use Redis as a telemetries database. Don't set it otherwise. |
USE_MQTT | Set it to true if you want to use MQTT as a communication interface for your devices. Don't set it otherwise. |
USE_IOTHUB | Set it to true if you want to use Azure IoT Hub as a communication interface for your devices. Don't set it otherwise. |
DB_CA_PATH | PEM Certificate of PostgreSQL root CA |
You'll have to add other environment variables during the following steps.
Business Apps
Links can be added from the built-in front-ends to the customer's business apps. This is done by using additional environment variables. Add one environment variable per business app in GitLab with the following format:
- Key: anything starting with BUSINESS_APP. For instance, BUSINESS_APP_DEMO is a valid name.
- Value: url,materialIconName,displayName
  - url: URL where the business app is hosted
  - materialIconName: name of the material icon that will be used to show the app in the app list menu
  - displayName: name of the application in the app list menu
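The comma-separated value format above can be illustrated with a small parser. This is a hypothetical sketch, not Kamea's actual parsing code (note that it assumes none of the three fields contains a comma):

```python
from dataclasses import dataclass

@dataclass
class BusinessApp:
    url: str
    material_icon_name: str
    display_name: str

def parse_business_app(value: str) -> BusinessApp:
    """Parse the 'url,materialIconName,displayName' value of a
    BUSINESS_APP_* GitLab variable."""
    parts = [p.strip() for p in value.split(",")]
    if len(parts) != 3:
        raise ValueError("expected 'url,materialIconName,displayName'")
    return BusinessApp(*parts)

app = parse_business_app("https://erp.example.com,dashboard,ERP")
```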
Customer Mirror
After creating the Kamea mirror for the customer, go into the GitLab repository, then create an access token named kamea-ci-token, with write_repository as scope and Maintainer as role. Remember to change the expiration date.
Insert the token into the mirror's HTTPS git URL like this:
- https://kamea-ci-token:token@https_url_kamea_customer_mirror_git
Put this into a variable named KAMEA_MIRROR_"CUSTOMER_NAME" on the Kamea repo. Remember to mask it.
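Building that URL by hand is error-prone (the token must be URL-encoded if it contains special characters). A hypothetical helper, using placeholder values for the mirror URL and token:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def mirror_url(https_git_url: str, token: str, user: str = "kamea-ci-token") -> str:
    """Embed the access-token credentials into the mirror's HTTPS git URL,
    producing https://kamea-ci-token:<token>@<host>/<path>."""
    scheme, netloc, path, query, frag = urlsplit(https_git_url)
    cred = f"{quote(user)}:{quote(token, safe='')}"
    return urlunsplit((scheme, f"{cred}@{netloc}", path, query, frag))

# Placeholder host and token, for illustration only:
print(mirror_url("https://gitlab.example.com/customer/kamea.git", "glpat-xxxx"))
```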
Authentication
Kamea uses OAuth 2.0 with OpenID Connect (OIDC) to authenticate users. Consequently, one of the first things to set up is an OAuth 2.0 / OIDC-compliant identity provider. The authentication process is agnostic of the identity provider, but the APIs interact with it (to create users, for instance), so the chosen identity provider must be supported by the platform.
Requirements
The identity provider must:
- Support the Authorization Code flow with PKCE
- Have APIs to control the creation of users
Identity Provider Setup
For now, Kamea is compatible with:
- Auth0 - Be sure to add the expected environment variables in GitLab (you can read them in the API source code).
- Azure Active Directory B2C
Front-End Configuration
The front-end needs to be configured to use your IDP application. Read the readme.md file in the source code folder core/apps/client/environments to understand how to do it.
Source Code & Pipelines
Copy the source code from Witekio's GitLab to the customer's one. This will trigger a pipeline execution. However, the first execution is going to fail during the terraform:plan stage. This is expected: Terraform needs to be executed twice, because some resources must exist before the creation of others can be planned. For example, if the Caddy resource is not created, Terraform cannot estimate how many whitelist rules it must create for Caddy to properly have access to the API.
After the first pipeline has been triggered, wait for it to fail during the terraform:plan job. Then, in the pipelines menu of GitLab, manually trigger a pipeline on the main branch. A few environment variables will be requested:
- ENVIRONMENT_NAME: Depends on your setup. In Kamea's default configuration, it should be dev or prod.
- RUN_TARGETED_TERRAFORM: true. This is the option that will trigger a Terraform execution while targeting a subset of the resources, solving the initial problem.
- RUN_COMPLETE_TERRAFORM: Leave it empty, or remove it.
- DOCKER_TAG: Depends on your environment and on how the Docker images are built. Use the tag value that matches your environment.
- For the other values, set them according to your needs.
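The interaction of the two Terraform switches above can be sketched as follows. This is illustrative logic only; the real behavior lives in Kamea's GitLab CI configuration:

```python
def terraform_mode(variables: dict) -> str:
    """Decide which Terraform flavor a manual pipeline run should execute,
    based on the RUN_* variables described above (illustrative only)."""
    targeted = variables.get("RUN_TARGETED_TERRAFORM", "").lower() == "true"
    complete = variables.get("RUN_COMPLETE_TERRAFORM", "").lower() == "true"
    if targeted and complete:
        raise ValueError("set only one of RUN_TARGETED_TERRAFORM / RUN_COMPLETE_TERRAFORM")
    if targeted:
        return "targeted"   # plan/apply restricted to a subset of resources
    if complete:
        return "complete"   # plan/apply over all resources
    return "default"

print(terraform_mode({"RUN_TARGETED_TERRAFORM": "true"}))  # targeted
```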
Note
MQTT support includes additional steps. Before going further, follow this guide.
After filling those values, execute the pipeline. Once the terraform:plan
job is finished, check its output. If the output matches what is expected, manually trigger the terraform:apply
job of the same pipeline. Note that this job will not be executed automatically.
When the targeted Terraform execution has been applied, restart the failed jobs of the initial pipeline. They will run successfully and complete Kamea's infrastructure setup.
Configure Redirect URLs in the IDP
Once the applications have been deployed, the redirect URLs must be configured in your identity provider.
Initialization Endpoint
When everything has been correctly set up, the first data must be initialized on your instance. It includes the root tenant, the first admin account, etc. To do that, send a POST request to the endpoint /init. It does not require any authentication but can only be called once. The following parameters are expected in the request body:
{
"adminMail": "...",
"adminFirstName": "...",
"adminLastName": "...",
"rootTenantName": "...",
"rootTenantShortName": "..."
}
Those resources will be created:
- Root tenant
- Test tenant
- Interfaces / Codecs / Channels
- Device types
- Permissions, and entity type/permission associations
- Admin role
- Admin group: has the admin role on both created tenants
- Admin account, put in the admin group
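A call to /init can be sketched with the standard library. The payload keys come from the section above; the base URL and field values are placeholders, and send_init is a hypothetical helper (it is defined here but not executed):

```python
import json
from urllib import request

def build_init_payload(admin_mail, first_name, last_name, tenant_name, short_name):
    """Assemble the body expected by the one-shot /init endpoint."""
    payload = {
        "adminMail": admin_mail,
        "adminFirstName": first_name,
        "adminLastName": last_name,
        "rootTenantName": tenant_name,
        "rootTenantShortName": short_name,
    }
    missing = [k for k, v in payload.items() if not v]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return payload

def send_init(base_url, payload):
    """POST the payload to <base_url>/init (no authentication required,
    callable only once per instance)."""
    req = request.Request(
        base_url.rstrip("/") + "/init",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)

payload = build_init_payload("admin@example.com", "Ada", "Lovelace", "Root", "root")
# send_init("https://your-api-domain.com", payload)
```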
Front-End Deployment Options
Kamea Angular front-ends can be deployed in several ways.
Note
To properly understand how to configure a web server or CDN to serve Angular applications, read this documentation.
Caddy
By default, Kamea uses Caddy to host the front-ends. Since Caddy provides both web server and reverse proxy capabilities, Kamea uses the same instance for serving the front-end and reverse-proxying to the API.
In this configuration, when requesting the front-ends, the first URL segment is used to route the user to one of the front-ends. By default, management and settings are used in the first segment to differentiate between the front-ends. This means that the front-ends will be accessible at these URLs:
- https://your-caddy-domain.com/management
- https://your-caddy-domain.com/settings
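The first-segment routing described above can be sketched as a small helper. This is hypothetical illustration code, not Kamea's actual routing (which is done by Caddy); it assumes the default /frontend mount path:

```python
def route_frontend(path, frontends=("management", "settings")):
    """Return the front-end folder that serves a request path, based on
    its first URL segment, or None if the request goes to the API proxy."""
    segments = [s for s in path.split("/") if s]
    if segments and segments[0] in frontends:
        return f"/frontend/{segments[0]}"
    return None

print(route_frontend("/management/devices"))  # /frontend/management
```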
To make the front-end files accessible to Caddy, mount a volume at an arbitrary path in the Caddy container. Then, in the folder in which the volume is mounted, copy the front-ends (one sub-folder per front-end). Finally, provide the environment variable SPA_ROOT_PATH to Caddy to indicate where the front-end folders are.
By default, the volume is mounted in /frontend. So, the front-end files will be in the folders:
- /frontend/management
- /frontend/settings
To instruct Caddy to serve these front-ends, a file named caddyfile.extension is used. The default one provided in Kamea contains these lines:
import serve_frontend management
import serve_frontend settings
The instruction import serve_frontend foo means that Caddy will serve the content of the folder /frontend/foo as a single-page application front-end.
To serve new front-ends, the caddyfile.extension file must be overridden by another one, using the same syntax. Note that when providing a new one, the default one will not be used, so be sure to include the management & settings lines if you want to keep them in your setup.
Providing a new extension file is done when building the Caddy Docker image, by providing the variable caddyfile_ext_path and making it target your own file. See the Caddy Dockerfile for more information.
When deployed in Azure, an Azure Storage Account is used as a file share to store the front-end files. It is then mounted in the Caddy App Service as a volume.
To use Caddy to serve the front-ends, set the variable serve_frontend_from_caddy to true when executing Terraform.
Azure Storage Account
When running on Azure, Storage Accounts can be used to host the front-ends. In order to do that, the Static Website feature must be enabled.
Currently, Storage Accounts do not provide any way to configure routing rules. Consequently, another tool is needed to route all requests to the index.html files. When using Storage Accounts, it is advised to use either Azure CDN or Azure Front Door to provide routing rules and control cache expiration.
Optional
Custom Domain
By default, all Azure resources have a domain name automatically generated and provided by Azure. For instance, the App Services domain names are <name of the app service>.azurewebsites.net. Most of these domain names can be customized. Providing a custom domain name also requires providing a certificate; otherwise, the resources cannot provide HTTPS connections. Several ways of providing certificates are available, but here we are going to focus on the simplest, least expensive, and most direct one: using Azure-managed certificates. They are free, and Azure automatically manages the renewal, which is very convenient.
Note: These operations will require some DNS configuration.
App Services
Create the environment variable CUSTOM_DOMAIN_REVERSE_PROXY in GitLab BEFORE deploying your infrastructure. Set its value to your domain name without https:// and without a trailing slash.
The main App Service whose domain needs to be customized is the reverse proxy because it is the one that is publicly exposed.
Find more information in the official Azure documentation
First, select the reverse proxy App Service resource on Azure. Note its default domain name. Go to the Custom domains menu, and note its Custom Domain Verification ID.
Create the following DNS records in your DNS provider:
Type | Host | Value |
---|---|---|
CNAME | your custom domain name | App Service default domain name |
TXT | asuid.your custom domain name | App Service Custom Domain Verification ID |
Wait for the domain name propagation. It might take a few hours.
Note: The custom domain is created automatically by Terraform if configured properly. The next paragraph explains how to proceed if Terraform did not create it. If you skip this paragraph, resume your reading at the one just after to configure the certificate (never done through Terraform). To check that it was correctly created, look at the Custom Domains menu in your App Service, and check that yours is present.
Return to the Custom Domains menu in the App Service, and click on the Add custom domain button. In the blade that opens on the right, enter your custom domain, and click on Validate. In the dropdown list, select CNAME. Click the Add hostname button at the bottom. You should now see your custom domain in the list, but it is not yet associated with a certificate.
Go to the TLS/SSL settings menu. Select the Private Key Certificates (.pfx) tab and click on Create App Service Managed Certificate. In the dropdown list, select your custom domain. If everything has been set up properly, it should indicate that the hostname is eligible for certificate creation. Click on Create.
Return to the Custom Domains menu in the App Service. Click on the Add binding button next to your custom domain name. In the dropdown list, select your domain name. In the next list, select the certificate you created in the previous step. Select SNI SSL in the last list and click on Add Binding.
Wait for everything to take effect. Your API now has HTTPS enabled on your custom domain name.
Update the environment.ts files of your front-ends to make them target the correct domain name.
Azure CDN Endpoints
Create the environment variables CUSTOM_DOMAIN_MGMT_FRONT and CUSTOM_DOMAIN_SETTINGS_FRONT in GitLab BEFORE deploying your infrastructure. Set their values to your domain names without https:// and without a trailing slash.
The Angular front-ends are deployed through Azure CDN endpoints. Follow these instructions to customize their domain names.
Note
Find more information in the custom domain setup documentation and in the HTTPS setup documentation
These steps must be followed for every CDN endpoint.
Select your Azure CDN endpoint resource and note its default domain name.
Create the following DNS record in your DNS provider:
Type | Host | Value |
---|---|---|
CNAME | your custom domain name | Azure CDN Endpoint default domain name |
Note
The custom domain is created automatically by Terraform if configured properly. The next paragraph explains how to proceed if Terraform did not create it. To check that it was correctly created, look at the Custom Domains menu in your CDN endpoint, and check that yours is present and that it has a TLS certificate.
Return to the Azure CDN endpoint resource. Go to the Custom domains menu. Click on the + Custom domain button. In the blade, input your custom domain name in the Custom hostname field. Wait for the validation, and click on Add.
In the Custom domains list, click on the domain you created. Enable the Custom domain HTTPS checkbox, and select CDN managed as the Certificate management type. Validate the form, and wait for the process to complete. It might take up to an hour to propagate.
Update your Identity Provider to add your new front-end domain name to the list of redirect URLs.