TBCare PPTI: Software architecture, environment, and CI/CD

josh sudung
5 min read · May 10, 2020


When it comes to creating an application, the first thing that comes to mind (besides, of course, deciding on a framework to use) is usually a proper architecture design that fits the app’s requirements and scale. Yes, the early stages are the most important part, and usually (for me, at least) the most frustrating one if we don’t know or understand what is expected of the application going forward.

TBCare for PPTI Depok

A little bit of context

Compared with some of the other PPL projects we saw, TBCare had a clear initial requirement in terms of its business process, so that’s a plus. To summarize the initial requirement: PPTI needs a mobile application for its field officers to track and log new TBC investigation cases, and to update those cases whenever a monitoring visit is due (which the officers do regularly). On top of that, PPTI also needs a web application for its admins, or rather the relevant higher-ups, to track and recapitulate all of the cases logged by the field officers. Naively speaking, TBCare is basically a bunch of forms for inputting/editing TBC case data, and a bunch of tables for serving case data and its details.

Architecture

It is pretty clear that both the mobile app and the web app will share some sort of general-purpose server-side API capable of doing CRUD operations on the data. (As far as we were concerned back then, there were only a couple of data models to operate on: Case and Account. It grew more complicated later, but the general interactions remain the same.)

So we decided to use a single monolithic server-side backend for our client-side frontends (a BFF architecture). We don’t necessarily need a microserviced backend, as there are no operations other than CRUD. As for the frameworks, our choices are Django REST Framework for the backend, and React Native and React for the mobile app and the web app respectively.

TBCare architecture
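
To give a feel for what that backend involves, here is a minimal sketch of the kind of CRUD API described above, written with Django REST Framework. The model fields and route names are illustrative assumptions, not TBCare’s actual schema:

# models.py / views.py / urls.py, condensed (field names are assumptions)
from django.db import models
from rest_framework import routers, serializers, viewsets

class Case(models.Model):
    patient_name = models.CharField(max_length=255)
    status = models.CharField(max_length=50)
    reported_at = models.DateTimeField(auto_now_add=True)

class CaseSerializer(serializers.ModelSerializer):
    class Meta:
        model = Case
        fields = "__all__"

class CaseViewSet(viewsets.ModelViewSet):
    # ModelViewSet provides list/retrieve/create/update/destroy out of
    # the box, which is exactly the CRUD both frontends need.
    queryset = Case.objects.all()
    serializer_class = CaseSerializer

# Expose the viewset at /cases/
router = routers.DefaultRouter()
router.register(r"cases", CaseViewSet)
urlpatterns = router.urls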

Environment setup, Integration, Deployment

As required by our PPL guidelines, we had to have two separate, independent environments: staging and production. Work has to go through the staging environment first, before being iteratively approved and merged into the production environment. Both environments are set up the same way, as follows:

  1. For our Django backend server, we chose Heroku as our service provider, as it offers PaaS out of the box. We don’t need to go through the hassle of doing the basic DevOps work, and it’s also free (with the free dyno, at least).
  2. To serve our web app, we used Netlify, as our web frontend is serverless/static only. It is also free.
  3. Our mobile app is built natively as an Android APK, or as an iOS IPA (for now, we have only built the APK).
  4. As for the database, both environments use an independent PostgreSQL DB, hosted on their respective integrated Heroku Postgres Hobby Dev add-on, which is automatically configured to work with our Django backend (a settings sketch follows this list).
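
Heroku Postgres exposes its connection string through a DATABASE_URL environment variable. Here is a minimal sketch of the matching Django database settings, assuming the commonly used dj-database-url package (the local fallback URL is hypothetical):

import dj_database_url

DATABASES = {
    # Heroku injects DATABASE_URL automatically; fall back to a local
    # database when running outside Heroku.
    "default": dj_database_url.config(
        default="postgres://localhost:5432/tbcare", conn_max_age=600
    )
}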

This is what our final architecture design looks like, for both our staging and production environments:

TBCare environment

Continuous Integration & Deployment Pipeline

As required by the PPL guidelines, we are using GitLab CI as our CI/CD service. From this point forward, I am going to use TBCare’s web application configuration as an example.

First things first, we created a CI script specifically for GitLab in the root folder: .gitlab-ci.yml. The stages are the usual ones: test, build, and deploy. Since we use static code analysis to maintain and monitor code quality, bugs, and vulnerabilities in our app, we added a sonar-scanner stage right after the testing stage to update the analysis with the latest code coverage.

TBCare Web App CI Script
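
Since the script itself is shown as an image above, here is a condensed sketch of what a pipeline with those stages can look like. The job commands are illustrative assumptions, not our exact script:

stages:
  - test
  - sonar-scanner
  - build
  - deploy

test:
  stage: test
  image: nezappl/web:stage   # our custom pre-installed image, explained below
  script:
    - npm test -- --coverage

sonar-scanner:
  stage: sonar-scanner
  script:
    - sonar-scanner   # updates the analysis with the latest coverage

build:
  stage: build
  image: nezappl/web:stage
  script:
    - npm run build
  artifacts:
    paths:
      - build/

deploy:
  stage: deploy
  only:
    - staging
    - master
  script:
    - sh deploy.sh   # selects the correct Netlify site per branch, see below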

To save time, we used a custom Docker image, nezappl/web:stage, modified from the latest ubuntu image, with all the packages and modules needed for a particular stage pre-installed. This way, we avoid installing those modules every time the runner runs that stage (in this case, the test and build stages). Below is the image history:

Our custom image at Docker Hub, with the needed modules pre-installed
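
The actual Dockerfile is not shown here, but a plausible sketch of how such an image can be built looks like this (the Node setup is an assumption, based on what a React build needs):

FROM ubuntu:latest

# Pre-install the toolchain the test and build stages need, so the
# runner does not reinstall it on every pipeline run.
RUN apt-get update && apt-get install -y curl \
    && curl -fsSL https://deb.nodesource.com/setup_12.x | bash - \
    && apt-get install -y nodejs \
    && rm -rf /var/lib/apt/lists/*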

Notice that we needed to explicitly select the variables used for the staging and production (master) environments, so that our application is deployed to the correct environment (in this case, the correct Netlify app) for each corresponding branch:

# Pick the Netlify site and token matching the branch being built.
# (Exporting them lets the netlify-cli child process read them.)
if [[ "${CI_COMMIT_REF_NAME}" == "staging" ]]; then
  export NETLIFY_SITE_ID=$STAGING_NETLIFY_SITE_ID
  export NETLIFY_AUTH_TOKEN=$STAGING_NETLIFY_AUTH_TOKEN
elif [[ "${CI_COMMIT_REF_NAME}" == "master" ]]; then
  export NETLIFY_SITE_ID=$MASTER_NETLIFY_SITE_ID
  export NETLIFY_AUTH_TOKEN=$MASTER_NETLIFY_AUTH_TOKEN
fi
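
With the right site selected, the deploy step itself boils down to a single netlify-cli call (our exact flags are not shown above, so this one is illustrative):

netlify deploy --dir=build --prod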

We then defined the secret keys and variables used in the CI script (and in the source code, if there are any) in the environment variables section of the GitLab CI settings. Each value defined here is secret and should be treated that way: no sensitive value should be declared as an explicit constant in the app’s source code.

TBCare Web environment variables
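
As a small illustration of that rule (the variable name here is hypothetical), a secret is read from the environment at runtime instead of being committed:

# settings.py: the key lives in GitLab CI variables / Heroku config vars
import os

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]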

And it’s done. We have pipelined each integration and deployment phase automatically. All we need to do is push our work, and it will automatically be deployed to our web app: ppti-staging.netlify.app for the staging app, and ppti.netlify.app for the production app. (We haven’t merged our work into the production environment yet, so be patient!) 😃

That is all for this article, thanks for reading!

Written as an assignment for my software engineering project individual review.
