# Research Workspace Broker
The Research Workspace Broker is the heart of the Research Workspace. It is an API that handles all actions and requests, coming either from the web interface or from direct API calls.
This API is responsible for:

- Serving as the backend for the researchers' web interface
- Inviting other researchers to your project
- Handling the creation of virtual workspaces (VRW / OpenStack)

More to come ...
The Broker is built with the Django framework in combination with Django REST Framework. The installation is fairly straightforward. After checkout, the code lives in the `Broker` directory; the setup steps below assume this directory is your working directory.

For a full Docker setup, see DOCKER.md.
- The Broker depends on Redis and PostgreSQL. These can be installed using docker-compose, or by installing the services directly on your system.
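As a sketch, a minimal docker-compose definition for these two dependencies could look like the following. Image tags, database names, and credentials here are illustrative assumptions; the repository's docker-compose.yaml.example is the authoritative template.

```yaml
# Illustrative only - adjust versions and credentials to your environment
services:
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: broker        # assumption: pick your own database name
      POSTGRES_USER: broker
      POSTGRES_PASSWORD: change-me
    ports:
      - "5432:5432"
```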
- Then check out the Git repository:

  ```shell
  git clone https://git.web.rug.nl/VRE/Broker.git
  ```
- Create a Python 3 virtual environment using venv (or virtualenvwrapper):

  ```shell
  cd Broker
  python3 -m venv [env_name]
  ```

- Activate the Python 3 virtual environment:

  ```shell
  source [env_name]/bin/activate
  ```
- Install all the required Python 3 modules:

  ```shell
  pip install -r VRE/requirements.txt
  pip install -r VRE/requirements-dev.txt
  ```
- Create a `.env` config file and adjust the environment settings. At least enable `DEBUG` in development:

  ```shell
  cp VRE/VRE/env.example VRE/VRE/.env
  ```
- Create the database structure and load the needed initial data:

  ```shell
  VRE/manage.py migrate
  VRE/manage.py loaddata virtual_machine_initial_data
  VRE/manage.py loaddata university_initial_data
  ```
- Create a superuser (admin) account to log in to the admin part of the API:

  ```shell
  VRE/manage.py createsuperuser
  ```
- Start the Django application:

  ```shell
  VRE/manage.py runserver
  ```
Now you can open the Django admin at http://localhost:8000/admin/. If you do not see any styling on the page, make sure you have enabled `DEBUG` or set up static files for Django.
In order to get the API running, you need to specify some settings. This is done through environment variables that Django reads during startup. You can set them in a `.env` file or export them manually as shell environment variables. An example env file can be found at VRE/VRE/env.example, which can be used as a template. The location of the env file should be VRE/VRE/.env. All variables have a short explanation above them describing what they do or are used for.
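As an illustration of what such a `.env` file amounts to, here is a minimal, hand-rolled sketch of parsing one into environment variables. The Broker presumably uses a dotenv-style library for this; the parser below is a simplified assumption that ignores quoting and multi-line values.

```python
import os

def load_env(path):
    """Parse simple KEY=VALUE lines from a .env file.

    Comments (#) and blank lines are skipped; values are also exported
    to os.environ unless the variable is already set there.
    """
    loaded = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip()
            os.environ.setdefault(key, value)
            loaded[key] = value
    return loaded

# Tiny demonstration with a throwaway file
with open("/tmp/example.env", "w") as fh:
    fh.write("# comment line\nDEBUG=True\nTIME_ZONE=Europe/Amsterdam\n")

settings = load_env("/tmp/example.env")
print(settings["DEBUG"])      # True
print(settings["TIME_ZONE"])  # Europe/Amsterdam
```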
```
# A uniquely secret key
# https://docs.djangoproject.com/en/dev/ref/settings/#secret-key
SECRET_KEY=@wb=#(f4uc0l%e!5*eo+aoflnxb(@!l9!=c5w=4b+x$=!8&vy%'

# Disable debug in production
# https://docs.djangoproject.com/en/dev/ref/settings/#debug
DEBUG=False

# Allowed hosts that Django does serve. Use a comma separated list.
# Take care when NGINX is proxying in front of Django
# https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
ALLOWED_HOSTS=127.0.0.1,localhost

# All internal IPs for Django. Use a comma separated list
# https://docs.djangoproject.com/en/dev/ref/settings/#internal-ips
INTERNAL_IPS=127.0.0.1

# Enter the database url connection. Enter all parts, even the port numbers: https://github.com/jacobian/dj-database-url
# By default a local sqlite3 database is used.
DATABASE_URL=sqlite:///db.sqlite3

# The location on disk where the static files will be placed during deployment. Setting is required
# https://docs.djangoproject.com/en/dev/ref/settings/#static-root
STATIC_ROOT=

# Enter the default timezone for the visitors when it is not known.
# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TIME_ZONE
TIME_ZONE=Europe/Amsterdam

# Email settings
# https://docs.djangoproject.com/en/dev/ref/settings/#email-host
# EMAIL_HOST=

# Email user name
# https://docs.djangoproject.com/en/dev/ref/settings/#email-host-user
# EMAIL_HOST_USER=

# Email password
# https://docs.djangoproject.com/en/dev/ref/settings/#email-host-password
# EMAIL_HOST_PASSWORD=

# Email server port number to use. Default is 25
# https://docs.djangoproject.com/en/dev/ref/settings/#email-port
# EMAIL_PORT=

# Does the email server support TLS?
# https://docs.djangoproject.com/en/dev/ref/settings/#email-use-tls
# EMAIL_USE_TLS=

# https://docs.djangoproject.com/en/dev/ref/settings/#default-from-email
DEFAULT_FROM_EMAIL=Do not reply<email@example.com>

# The sender address. This needs to be one of the allowed domains due to SPF checks.
# The code will use a reply-to header to make sure that replies go to the researcher and not this address
EMAIL_FROM_ADDRESS=Do not reply<firstname.lastname@example.org>

# The Redis server is used for background tasks. Enter the variables below.
# Leave the password empty if authentication is not enabled.

# The hostname or IP where the Redis server is running. Default is localhost
REDIS_HOST=localhost

# The Redis port number on which the server is running. Default is 6379
REDIS_PORT=6379

# The Redis password when authentication is enabled
# REDIS_PASSWORD=

# The number of connections to be made inside a connection pool. Default is 10
REDIS_CONNECTIONS=10

# Enter the full path to the web-based file uploading without the Study ID part.
# The Study ID will be added to this url based on the visitor.
DROPOFF_BASE_URL=http://localhost:8000/dropoffs/

# Enter the full url to the NGINX service that is in front of the TUSD service. By default that is http://localhost:1090
DROPOFF_UPLOAD_HOST=http://localhost:1090

# Which file extensions are **NOT** allowed to be uploaded.
# By default the extensions exe,com,bat,lnk,sh are not allowed
DROPOFF_NOT_ALLOWED_EXTENSIONS=exe,com,bat,lnk,sh

# Sentry settings
# Enter the full Sentry DSN string. This should contain a key and a project
SENTRY_DSN=
```
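If you need a fresh value for `SECRET_KEY`, one way to generate it with only the Python standard library is sketched below. Django also ships its own helper for this; the character set used here is an assumption modelled on typical Django keys.

```python
import secrets
import string

# Character set close to the one Django's own key generator uses
ALPHABET = string.ascii_lowercase + string.digits + "!@#$%^&*(-_=+)"

def generate_secret_key(length: int = 50) -> str:
    """Return a cryptographically random string usable as a SECRET_KEY."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(len(generate_secret_key()))  # 50
```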
For some actions we need a background scheduler system. This system relies on Redis, so make sure you have Redis installed. The scheduler can be started with the run_scheduler.sh script in the root of the repository:

```shell
./run_scheduler.sh
```
This will load the Python3 virtual environment and start the background scheduler. Keep the console open.
We use NGINX as a proxy in front of the API. This is not mandatory, but can be useful when the API gets busy.
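A minimal sketch of such a proxy configuration is shown below. The server name and filesystem paths are illustrative assumptions; the nginx directory in the repository contains the actual configuration used by the project.

```nginx
# Illustrative reverse proxy in front of the Django application
server {
    listen 80;
    server_name broker.example.org;  # assumption: replace with your hostname

    location /static/ {
        # Serve collected static files directly; should match STATIC_ROOT
        alias /srv/broker/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```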
In order to log in, SurfConext is used. This requires a setup at SurfConext and an ini file with the SurfConext credentials. There is an example file called surfnet_conext_secrets.ini.example which can be used as a template. Copy that file to surfnet_conext_secrets.ini at the same location and fill in the values given by Surfnet:

```shell
cp surfnet_conext_secrets.ini.example surfnet_conext_secrets.ini
```
```
# Authentication settings
# https://mozilla-django-oidc.readthedocs.io/en/stable/installation.html

# The Client ID, which is the Entity ID in SurfConext
OIDC_RP_CLIENT_ID=

# The Client secret that is created with the Entity ID. This will be shown only once
OIDC_RP_CLIENT_SECRET=

# The encryption algorithm. Default is RS256
OIDC_RP_SIGN_ALGO=RS256

# The following urls should be loaded automatically when PR #309 is merged: https://github.com/mozilla/mozilla-django-oidc/pull/309
# For now, we have to manually add those urls.
# The source is at: https://connect.surfconext.nl/.well-known/openid-configuration (production)
# The source is at: https://connect.test.surfconext.nl/.well-known/openid-configuration (testing)

# This is the 'authorization_endpoint' url
OIDC_OP_AUTHORIZATION_ENDPOINT=https://connect.surfconext.nl/oidc/authorize

# This is the 'token_endpoint' url
OIDC_OP_TOKEN_ENDPOINT=https://connect.surfconext.nl/oidc/token

# This is the 'userinfo_endpoint' url
OIDC_OP_USER_ENDPOINT=https://connect.surfconext.nl/oidc/userinfo

# This is the 'jwks_uri' url. Needed due to the `OIDC_RP_SIGN_ALGO` choice
OIDC_OP_JWKS_ENDPOINT=https://connect.surfconext.nl/oidc/certs
```
In order to use the VRW (Virtual Research Workspace) you need to configure a special user that is allowed to make the API calls for VRW updates.
TODO: MORE INFO
For OpenStack integration we need a clouds.yaml file in which the API credentials and the API entry point are stored. This file needs to be stored at the root of the VRE Broker project, next to this README.md file. Otherwise the clouds.yaml file will not be found and the OpenStack calls will fail.
When this integration is set up, it is possible to create virtual machines on the OpenStack platform.

Below is an example clouds.yaml file. The name hpc on line 2 is mandatory for the RUG OpenStack setup.
```yaml
clouds:
  hpc:
    auth:
      auth_url: [API_ENTRY_POINT]
      application_credential_id: "[API_CREDENTIALS]"
      application_credential_secret: "[API_CREDENTIALS_SECRET]"
    region_name: "RegionOne"
    interface: "public"
    identity_api_version: 3
    auth_type: "v3applicationcredential"
```