Local development setup

This guide walks you through setting up a local development environment for the platform, running as many of its elements as possible on your machine.

Overview

The overview of the platform elements can be found on the technical index page.

In this guide, all of the elements will be launched locally, except for:

  • Azure IoT Hub
  • Azure Service Bus
  • InfluxDB database
  • Azure Storage Account

For these services, you could use live ones or install your own. This guide does not document how to set them up locally.

Depending on your needs, you might not have to launch everything locally. You can also simply pick what you need from this guide.

This guide explains how to set up the environment and launch the following locally:

  • the device management front-end
  • the settings front-end
  • the Management API
  • the WebSocket Server (WSS)
  • the Azure functions
  • the Redis database

All of the code is in this project's monorepository. With all of these applications running, you can test Kamea ingestion over both HTTP and Azure IoT Hub.

Prerequisites

Running Kamea locally requires a complete Node.js development environment, with the Docker CLI available. It is advised to use the same Node.js version as the one used in the Dockerfiles that run the applications. For instance, check core/apps/api/management-api/Dockerfile to see which version runs the API in production, and use that version locally.

Azure Service Bus setup

One issue when running the platform locally is that the Azure Service Bus (ASB) cannot run outside of Azure. If you use the same one as one of your live environments, you will encounter conflicts between your local Azure Functions and the remote ones, since they will share the same data topics and subscriptions.

Consequently, before launching everything locally, it is advised to create an ASB dedicated to the local development. ASB is cheap in its standard version, so it will not increase the cost of your Azure subscription in any significant way.

Once it is created, you will need to set up the topics and subscriptions. It can be cumbersome and take some time, so a script is available in the dev-tools folder to initialize the ASB according to your needs. The script can also clear everything you have created. Before running it, you need to copy your ASB connection string. Then, open a terminal at the root of the project, and execute those commands:

cd dev-tools
npm i
npm run setup-service-bus

Follow the instructions, and everything will be set up automatically. It is advised to use a different suffix per developer to avoid conflicts.

Note

When created through the script, subscriptions and topics are configured to auto-delete on idle after 30 days.

Then, configure your environment variables to use the topic and subscription names created by the script. To find the variables to set, check the .envrc.example files of the Management API and WSS, and the local.settings.example.json file of each of your Azure Functions. For the Azure Functions, it is strongly recommended to provide the environment variables in a local.settings.json file (git-ignored) next to the example file.
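As an illustration, a git-ignored local.settings.json could look like the sketch below. The "IsEncrypted"/"Values" envelope and the AzureWebJobsStorage and FUNCTIONS_WORKER_RUNTIME settings are standard Azure Functions conventions; the SERVICE_BUS_* names and all values are hypothetical placeholders — the authoritative list is in each function's local.settings.example.json.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",

    "SERVICE_BUS_CONNECTION_STRING": "<your dev ASB connection string>",
    "SERVICE_BUS_TOPIC_NAME": "telemetry-jdoe",
    "SERVICE_BUS_SUBSCRIPTION_NAME": "telemetry-subscription-jdoe"
  }
}
```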

This setup allows you to run the whole ingestion chain locally without conflicts, and without having to reconfigure anything every day.

There is one limitation, which is easily overcome: IoT Hub device lifecycle events. These messages are sent directly from IoT Hub to the ASB, so to receive them locally you need to add a new custom endpoint pointing to your new ASB, configured the same way as the endpoint that already handles the lifecycle events for the real environment.
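As a sketch, the custom endpoint and lifecycle route could be created with the Azure CLI along the following lines. All resource names here are placeholders, and flag names vary between CLI versions, so check `az iot hub routing-endpoint create --help` (or the newer `az iot hub message-endpoint` commands) before running anything:

```shell
# Add a Service Bus topic endpoint pointing at the dev ASB (names are placeholders)
az iot hub routing-endpoint create \
  --hub-name my-iot-hub \
  --resource-group my-resource-group \
  --endpoint-name local-dev-asb \
  --endpoint-type servicebustopic \
  --endpoint-resource-group my-resource-group \
  --endpoint-subscription-id "<subscription-id>" \
  --connection-string "<dev ASB topic connection string>"

# Route device lifecycle events to that endpoint, mirroring the live route
az iot hub route create \
  --hub-name my-iot-hub \
  --resource-group my-resource-group \
  --route-name local-dev-lifecycle \
  --source devicelifecycleevents \
  --endpoint-name local-dev-asb \
  --enabled true
```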

Redis

Redis must be started before running the backend applications. Build the Docker image from the Dockerfile in core/apps/redis, then run it with these commands:

cd core/apps/redis
docker build -t redis-kamea:local .
docker run -d -p 6379:6379 -e REDIS_READWRITE_PWD=secretValue redis-kamea:local

Redis is configured to persist data. It is strongly advised to attach a volume to the container to avoid losing data at each restart.
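For example, assuming the image keeps Redis's default /data directory for persistence (check the Dockerfile in core/apps/redis to confirm), a named volume can be attached like this, reusing the image name and password from the commands above:

```shell
# Create a named volume once, then mount it so data survives container restarts
docker volume create redis-kamea-data
docker run -d -p 6379:6379 \
  -e REDIS_READWRITE_PWD=secretValue \
  -v redis-kamea-data:/data \
  redis-kamea:local
```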

Management API / WSS

The API code is in core/apps/api/management-api, and the WSS is in core/apps/api/websocket-server

Requirements

  • Some functionalities, like authentication, depend on external services. These services have been set up either manually during the platform setup, or automatically while applying Terraform (see the infrastructure folder). The API & WSS expect those services to be available. Note that for some of them, you can use a local version. For example, you can install a local InfluxDB database instead of using the cloud version.

  • The API needs a PostgreSQL database. You can spin up one locally in any way you like, as long as you set the API's environment variables accordingly in your .envrc file.

  • Make sure your .envrc files look like the .envrc.example files present in the API & WSS folders, and review them to check your environment variables match your expectations.

  • If you want to load some initial fixtures, check out the Loading initial platform data guide.

  • Redis must be running.
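As an illustration of the PostgreSQL requirement, you could run a throwaway container and mirror its settings in the API's .envrc. Everything below is a placeholder sketch: the container command assumes Docker is available, and the DB_* variable names are the ones that appear in the direnv output shown later in this guide — the authoritative list is in the API's .envrc.example.

```shell
# Hypothetical: a throwaway PostgreSQL container for local development
# docker run -d --name kamea-pg -p 5432:5432 \
#   -e POSTGRES_USER=kamea -e POSTGRES_PASSWORD=secretValue \
#   -e POSTGRES_DB=kamea postgres:16

# Matching .envrc entries for the API (placeholder values)
export DB_HOST=localhost
export DB_PORT=5432
export DB_NAME=kamea
export DB_USER=kamea
export DB_PASSWORD=secretValue
```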

Execution

At the project root, run the following commands:

npm install

npm run start:management-api # To start the API
npm run start:websocket-server # To start the WSS

Note

The above commands also support a -debug suffix that lets you attach a debugger to the API & WSS while they are running.

Windows specifics

On Windows, you cannot use the global launch command, and you need to set the environment variables yourself for the API to start.

You can either do that manually, or use direnv (recommended).

This setup assumes you're using Git Bash as a command prompt.

Steps marked with a (1) are specific to Git Bash. If you're using another terminal, you must find the appropriate way to hook it to direnv. See the direnv documentation for more details.

  • Create a folder such as C:\tools to hold the direnv.exe file, and add that folder to the Windows PATH

  • Go to https://github.com/direnv/direnv/releases to find the latest release of direnv.

  • Click on direnv.windows-amd64.exe to download the file

  • Copy the file to C:\tools and rename it to direnv.exe

  • Open the Git Bash terminal and execute echo ~ to find the Windows location of the user's home directory (1)

  • If a .bash_profile file doesn't exist in the user's home directory, create one (1)

  • Follow the instructions at https://direnv.net/docs/hook.html#bash and add the following line to the .bash_profile created above (direnv's instructions mention .bashrc, but Git Bash runs login shells, which read .bash_profile): eval "$(direnv hook bash)" (1)

  • From your chosen terminal, navigate to the directory containing the .envrc file and enter the command:

direnv allow

The result should look like this:

$ direnv allow
direnv: loading ~/Documents/DEV/iot-manager/core/apps/api/management-api/.envrc
direnv: export +CORS_ORIGIN +DB_HOST +DB_NAME +DB_PASSWORD +DB_PORT +DB_USER +JWT_AUDIENCE +JWT_ISSUER +JWT_ISSUER_WELL_KNOWN_URL +LOG_LEVELS [...]

You're set! You can now run the following at the root of the management-api folder:

npm run start

Please note that after every change to the .envrc file, you must execute the command again:

direnv allow

Azure functions

The Azure functions code is in core/apps/azure-functions.

Requirements

  • The func command to start the functions
  • Setting environment variables

More information about the two elements above can be found in the Azure functions readme file.

The Azure functions require a database to retrieve information about the telemetries they handle. Make sure the environment variables target the same database as the API, so that the expected database schema and data exist.

Launching

In the functions directory:

npm install
npm run start

Front-end

Requirements

The front-end environments need to be configured before execution. Create a JSON configuration file that matches the template defined in core/apps/client/environments/environment.template.json.

Then, set the SPA_ENVIRONMENT_FILE_PATH environment variable to the configuration file path. When run with their respective npm run start commands, the front-ends will automatically fetch this file and use it to target the proper environment.
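For instance, the steps above could look like this. The file path and the JSON key shown are hypothetical — the real keys come from environment.template.json, which you should copy and fill in instead:

```shell
# Write a minimal configuration file (hypothetical key shown; base yours on
# core/apps/client/environments/environment.template.json)
cat > /tmp/kamea-environment.json <<'EOF'
{
  "apiUrl": "http://localhost:3000"
}
EOF

# Point the front-ends at the configuration file
export SPA_ENVIRONMENT_FILE_PATH=/tmp/kamea-environment.json
```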

The API must run before executing the front-ends.

Note

Executing the WSS is optional. It is used by the device management front-end, but the front-end will still work without it; it will just raise an error when loading a device details page, since the websocket cannot be opened. If you don't need to test that, skipping the WSS avoids running an additional command when working locally.

Launching

In the monorepository root directory, the following commands will start the device management front-end:

npm install
npm run start:front

This will start the front-end app that lives in core/apps/client/management-front-end.

Additionally, you can launch the settings front-end:

npm run start:settings

This will start the front-end app that lives in core/apps/settings-app.

You will be able to access both front-ends through the links displayed once the startup has finished.