VRE Backend API and Scheduler
=============================


To develop on this software, you can set up a development environment using the steps below.

The Research Workspaces project uses the following packages / software:

  • Redis
  • Django
  • TUSD (The Upload Server Daemon)

First we need to check out the code.

git clone https://git.web.rug.nl/VRE/Broker.git


Redis is used to store the scheduled background actions. For development we use the default Redis setup without authentication. Install Redis with the default package manager. On Debian-based systems:

sudo apt install redis-server


The Django code consists of three parts: a REST API server, a background scheduler and a demo portal. For development we use all three parts. They all run from the same Python 3 virtual environment.


First we need to create the virtual environment, using Python 3.

python3 -m venv venv

This gives us a virtual Python environment at the location venv in the root of the code directory. Next we need to install the required libraries:

source venv/bin/activate
pip install -r VRE/requirements.txt
pip install -r VRE/requirements-dev.txt


Out of the box the REST API server needs only two settings to work. These settings need to be placed in a .env file located in the VRE/VRE folder. There should be a .env.example file which you can use as a template.

The minimal settings that need to be set are:

  • SECRET_KEY: A unique, secret key. Used for cookie/session encryption
  • DEBUG: Enable debug mode
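
SECRET_KEY only needs to be a long random string. One way to generate one with just the Python standard library (Django also ships django.core.management.utils.get_random_secret_key, not used here to keep the snippet dependency-free):

```python
# Generate a random value suitable for the SECRET_KEY setting.
# secrets.token_urlsafe(n) returns a URL-safe text string built
# from n random bytes.
import secrets

key = secrets.token_urlsafe(50)
print(key)
```

Paste the printed value into the .env file as the SECRET_KEY value.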

Then we can set up and start the REST API server with the following commands.

source venv/bin/activate
./VRE/manage.py migrate
./VRE/manage.py loaddata university_initial_data
./VRE/manage.py loaddata virtual_machine_initial_data
./VRE/manage.py loaddata vre_apps_initial_data
./VRE/manage.py createsuperuser

And start with:

source venv/bin/activate
./VRE/manage.py runserver

Now you can access the REST API server documentation at http://localhost:1337/api/redoc/ and the admin interface at http://localhost:1337/admin/. Note that runserver listens on port 8000 by default; pass the port explicitly (./VRE/manage.py runserver 1337) to match these URLs.

There are more settings available. These can be added to the .env file of the REST API.


.. literalinclude:: ../VRE/VRE/env.example
    :language: bash


The scheduler is used for background tasks such as creating new workspaces or other long-running actions. The scheduler needs the same Python 3 environment as the REST API, so here we assume that the virtual environment is set up correctly.

source venv/bin/activate
./VRE/manage.py run_huey
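
Conceptually, the scheduler is a worker loop that picks jobs off a queue that the REST API filled. A stdlib-only sketch of that shape (the real project uses Huey via run_huey; the job name below is made up):

```python
# Illustration of the scheduler's role: the REST API enqueues a
# long-running job and a worker processes it in the background.
# The real project uses Huey; this only mirrors the queue/worker shape.
import queue
import threading

jobs: "queue.Queue[str]" = queue.Queue()
done: list[str] = []

def worker() -> None:
    while True:
        job = jobs.get()
        if job is None:          # sentinel value: stop the worker
            break
        done.append(f"finished {job}")
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
jobs.put("create-workspace-42")  # what the REST API would enqueue
jobs.put(None)
t.join()
print(done)  # → ['finished create-workspace-42']
```

The run_huey process plays the role of the worker thread here, while the REST API plays the role of the producer.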


We also need a TUSD user for API communication between the REST API and the TUSD server. Create a new user in the REST API admin: go to http://localhost:1337/admin/auth/user/add/ and create the user. When the user is created, go to the API tokens and select the token for the TUSD user. We need the key and secret of this token later. Make sure the TUSD user has superuser status; this is required.


NGINX is used in multiple places in this project. This means that we will create multiple virtual hosts (VHosts) to get everything working correctly.

We do not cover SSL setups in this document.


First install NGINX with LUA support through the package manager. For Debian-based systems this would be:

sudo apt install nginx libnginx-mod-http-lua



LUA is used in NGINX so we can handle some dynamic data on the server side. All LUA code should be placed in the folder /etc/nginx/lua.

sudo ln -s /opt/deploy/VRE/nginx/lua /etc/nginx/lua


After installing the packages, create a symbolic link in /etc/nginx/sites-enabled so that a new VHost is created.

Important parts of the VHost configuration:


.. literalinclude:: ../../Upload_Server/nginx/tus.vhost.conf
    :language: bash

There should also be a lua folder in /etc/nginx. This can be a symbolic link to the LUA folder that is provided with this project.

To test whether NGINX is configured correctly, run nginx -t; it should report success:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful


TUS = The Upload Server: a resumable upload server that speaks HTTP. It runs as a stand-alone server behind the NGINX server. This is needed because NGINX manipulates the headers so that extra information is added to the uploads.

It is even possible to run a TUS instance in a different location (Amsterdam, for example), as long as the TUS server is reachable by the NGINX frontend server and can post webhooks back to the REST API server.
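
The core idea behind a resumable upload is simple: the client asks the server how many bytes it already has and continues sending from that offset. A pure in-memory sketch of that idea (no HTTP involved; in the tus protocol the offset is reported via the Upload-Offset header):

```python
# Minimal illustration of the resumable-upload idea behind TUS:
# resume sending from the byte offset the server already holds.
def resume_upload(server_bytes: bytearray, payload: bytes) -> int:
    offset = len(server_bytes)             # what HEAD + Upload-Offset would report
    server_bytes.extend(payload[offset:])  # what a PATCH from that offset sends
    return len(server_bytes)

store = bytearray(b"hello ")               # a previous attempt got this far
total = resume_upload(store, b"hello world")
print(total, store.decode())  # → 11 hello world
```

This is why an interrupted upload can continue in place instead of restarting from byte zero.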


The service is started with a simple bash script, which makes sure that all settings are loaded and the right parameters are passed to the TUSD Go daemon.

The daemon needs the following required settings:

  • WEBHOOK_URL: The full URL of the REST API server, used to post updates during uploads.
  • DROPOFF_API_HAWK_KEY: The key of the token that was created on the REST API server for communication with it.
  • DROPOFF_API_HAWK_SECRET: The secret value that belongs to the DROPOFF_API_HAWK_KEY token.

This information can be placed in a .env file in the same folder as the startup script (startup.sh). An example .env file:


.. literalinclude:: ../../Upload_Server/.env.example
    :language: bash
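
To give an idea of what the key/secret pair is used for: Hawk authentication computes an HMAC over a normalized request string using the shared secret, and sends it along with the key id so the server knows which secret to verify with. A much-simplified sketch (the real Hawk normalized string also includes host, port, payload hash and ext data; all values below are made up):

```python
# Simplified illustration of Hawk-style request signing: the secret keys
# an HMAC-SHA256 over a normalized request string; the key id only tells
# the server which secret to use. Not a complete Hawk implementation.
import base64
import hashlib
import hmac

def hawk_mac(secret: str, ts: str, nonce: str, method: str, path: str) -> str:
    normalized = "\n".join(["hawk.1.header", ts, nonce, method, path]) + "\n"
    digest = hmac.new(secret.encode(), normalized.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

mac = hawk_mac("example-secret", "1600000000", "j4h3g2", "POST", "/api/v1/uploads/")
print(mac)
```

Because only the key id travels in clear text, leaking a request does not leak the secret itself.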

In the startup.sh script there are some default variables that can be overridden by adding them to the .env file above.


.. literalinclude:: ../../Upload_Server/startup.sh
    :language: bash
    :lines: 5-16

This starts the TUS server on TCP port 1080.

Data storage
------------

The upload data is stored in a folder that is configured in the TUS startup command. This should be a folder that is writable by the user running the TUS instance. Make sure that the upload folder is not directly accessible by the webserver; otherwise, the uploaded files could be downloaded directly.


TUS is capable of handling hooks based on uploaded files. There are two types of hooks: 'normal' hooks and webhooks. It is not possible to run both hook systems at the same time, due to the blocking nature of the pre-create hook, so we use the 'normal' hook system. That means custom scripts are run; those scripts can then post the data to a webserver, which gives webhook functionality on top of the 'normal' hooks. At the moment, only an HTTP API call is done in the hook system; there is no actual file movement yet. For now we have used the following hooks:

  • pre-create: This hook runs when a new upload starts. It triggers the REST API server to store the upload in the database and to check whether the upload is allowed, based on a unique upload URL and a unique upload code.
  • post-finish: This hook runs when an upload is finished. It updates the REST API server with the file size and the actual (unique) filename on disk.

An example of a hook as used in this project is the pre-create.py script.


.. literalinclude:: ../../Upload_Server/hooks/pre-create.py

This hook uses the same data payload as TUS would use for its webhook system, so using 'normal' hooks or using webhooks with the REST API server should both work out of the box.
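
As a rough illustration of what such a hook receives and decides: tusd feeds the hook a JSON description of the upload on stdin, and a non-zero exit status rejects the upload. The field names and validation below are illustrative only; the exact payload layout depends on the tusd version, and the real pre-create.py checks the unique upload URL and code against the REST API instead.

```python
# Illustrative pre-create style check: accept the upload only when the
# metadata carries a filename. (The real hook validates the unique
# upload URL/code against the REST API server.)
import json

def validate(payload: str) -> bool:
    hook = json.loads(payload)
    meta = hook.get("Upload", {}).get("MetaData", {})
    return bool(meta.get("filename"))

example = json.dumps({"Upload": {"MetaData": {"filename": "data.csv"}}})
print(validate(example))  # → True
print(validate("{}"))     # → False
```

In the actual 'normal' hook, returning False would correspond to exiting with a non-zero status, which makes tusd abort the upload before any data is stored.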