This was a bit rushed, so the deployment uses a single Docker Compose file, docker-compose.prod.yml, in the root of the repository.
The instructions below describe how to configure the deployment.
Be sure to use the proper hostname (batdetectai.kitware.com) in all locations that require it.
I created a client service which has its own Dockerfile and builds the Vue client app.
The client service also runs a reverse proxy that routes /api and /admin requests to the Django server.
The client will need to be built with a different Client ID for accessing the server.
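As a rough sketch of what that reverse-proxy rule can look like (this is not the config shipped with the client image; the `django` upstream name, port, and paths are assumptions):

```nginx
# Hypothetical sketch -- the real config is built into the client image.
# Assumes the Django service is reachable as "django" on port 8000.
server {
    listen 80;
    server_name batdetectai.kitware.com;

    # Serve the built Vue client
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }

    # Proxy API and admin traffic to the Django server
    location ~ ^/(api|admin) {
        proxy_pass http://django:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```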
Remember to run `git lfs pull` to download the ONNX model used for inference.
The ONNX model file is in the /assets folder and is bind-mounted into the containers.
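In docker-compose terms, such a bind mount looks roughly like the fragment below (a sketch; the service name and container path are assumptions, not copied from docker-compose.prod.yml):

```yaml
# Hypothetical fragment of a compose file
services:
  django:
    volumes:
      # Bind-mount the repo's /assets folder (containing the ONNX model)
      - ./assets:/opt/batai/assets
```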
Figure out the proper template to use: either ./prod/.env.kitware-production.template or ./prod/.env.nabat-production.template.
Copy the chosen template to ./prod/.env.production and change the default passwords.
- Run `source ./dev/export-env.sh ./prod/.env.production` to load environment variables for the production docker compose file.
- Run `docker compose -f ./prod/docker-compose.prod.yml run --rm django ./manage.py migrate`.
- Run `docker compose -f ./prod/docker-compose.prod.yml run --rm django ./manage.py createsuperuser` and follow the prompts to create your own user.
- Run `docker compose -f ./prod/docker-compose.prod.yml run --rm django ./manage.py makeclient --username your.super.user@email.address --uri https://batdetectai.kitware.com/`.
- Run `docker compose -f ./prod/docker-compose.prod.yml run --rm django ./manage.py loaddata species` to load species data into the database.
- Run `docker compose -f ./prod/docker-compose.prod.yml run --rm django ./manage.py collectstatic` to collect the static files.
- Run `docker compose -f ./prod/docker-compose.prod.yml up` to start the server; add `-d` to run it in the background.
- After creating the basic application, log into the Django admin at batdetectai.kitware.com/admin and test logging in/out and uploading data to the server.
- Then run `docker compose -f ./prod/docker-compose.prod.yml run --rm django ./manage.py loadGRTS`. This may take a few minutes because it downloads and loads a large amount of data into the database, including the CONUS GRTS cells, the Alaska/Canada GRTS cells, and the Hawaii GRTS cells. A progress bar is shown for each shapefile as it is processed and loaded.
A systemd service can automatically start and launch the server.
Create the following unit file at /etc/systemd/system/batai.service using sudo:
```ini
[Unit]
Description=batai-server
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=/bin/sleep 10
Environment=PATH=/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin
Restart=always
User=bryon
Group=docker
TimeoutStartSec=300
RestartSec=20
WorkingDirectory=/opt/batai
# Shut down the container (if running) when the unit is started
ExecStartPre=docker compose down
# Start the container when the unit is started
# (systemd does not run a shell, so wrap the `source` call in bash -c)
ExecStart=/bin/bash -c 'source ./dev/export-env.sh ./prod/.env.production && docker compose -f ./prod/docker-compose.prod.yml up'
# Stop the container when the unit is stopped
ExecStop=docker compose down

[Install]
WantedBy=multi-user.target
```
After creating the file, run `sudo systemctl enable batai.service`.
Then to start the server use `sudo systemctl start batai.service`, and to stop it use `sudo systemctl stop batai.service`.
If you get redirect URI errors when logging in, go into the Django admin interface and make sure the URI has the trailing '/' on the URL.
There is no email server connected, so users need to be individually approved and have their email verified by an admin.
In order to use AWS S3 as your storage service, you'll need to make sure to start the Django application with the correct configuration.
As a starting point for configuring your environment, see dev/.env.prod.s3.template for a list of environment variables that you'll need to populate for your deployment.
- `DJANGO_SETTINGS_MODULE`: this should be set to `bats_ai.settings.nabat_production`. This tells Django which set of settings to use for the web server. The `nabat_production` module will configure S3 settings.
- `DJANGO_DATABASE_URL`: a Postgres connection string, e.g. `postgres://user:password@postgres:5432/django`.
- `DJANGO_CELERY_BROKER_URL`: used to make sure Django can send tasks to the `celery` service. For example, if using RabbitMQ, it might look like this: `amqp://rabbitmq:5672`.
- `AWS_*` and `DJANGO_STORAGE_BUCKET_NAME`: used to make sure the application can connect to your S3 bucket.
- `DJANGO_BATAI_NABAT_API_URL` (optional): the location of the NABat GraphQL endpoint used to retrieve information about files in NABat.
- `DJANGO_BATAI_SAVE_SPECTROGRAM_CONTOURS` (optional, default `false`): controls whether Celery spectrogram tasks (recording upload and NABat import pipelines) extract contours from compressed spectrogram masks and save them to `PulseMetadata.contours`. When `false` or unset, contour extraction is skipped and stored contours are empty, which lowers DB storage size. Set to `true` if you need pulse contour data (e.g. the spectrogram contour overlay in the client).
- `VITE_API_ROUTE`: this tells the Vue application where the backend (Django) API can be found.
- `DJANGO_BATAI_URL_PATH`: this allows the Django application to be mounted at a subpath in a URL. It is used by the Django application itself and by the nginx configuration at nginx.subpath.template.
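A small stand-alone helper like the sketch below (not part of the repo; the variable list is taken from the descriptions above, trimmed to the non-optional ones) can sanity-check that the main variables are set before bringing the stack up:

```python
import os

# Hypothetical helper script: report which of the required
# environment variables are missing or empty.
REQUIRED = [
    "DJANGO_SETTINGS_MODULE",
    "DJANGO_DATABASE_URL",
    "DJANGO_CELERY_BROKER_URL",
    "DJANGO_STORAGE_BUCKET_NAME",
    "VITE_API_ROUTE",
]


def missing_vars(environ=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if environ is None else environ
    return [name for name in REQUIRED if not env.get(name)]


if __name__ == "__main__":
    for name in missing_vars():
        print(f"missing: {name}")
```

Running it after `source ./dev/export-env.sh ./prod/.env.production` prints one `missing:` line per unset variable.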