!!! abstract "Abstract"
    This chapter covers the preparation work: the architecture defined for the solution and the setup of the development environment. (4 min read)
The solution must be integrated with the rest of the application by interacting with the backend through a proper bus. Therefore, this is the general architecture that the service had to fit into to allow the integration.
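As a rough illustration of what interacting with the backend through the bus means in practice, the following is a minimal consumer sketch, assuming a RabbitMQ broker (the one used elsewhere in this repository for testing) and the `pika` client; the queue name and payload shape are hypothetical, not the actual implementation:

```python
# Minimal consumer sketch (illustrative only): listen for recommendation
# requests arriving on the bus. Queue name and payload shape are assumptions.
import json

import pika


def on_request(channel, method, properties, body):
    """Handle one recommendation request coming from the backend."""
    request = json.loads(body)
    print("Received request for client:", request.get("clientId"))
    channel.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="recipes.requests")  # hypothetical queue name
channel.basic_consume(queue="recipes.requests", on_message_callback=on_request)
channel.start_consuming()
```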
These are the steps that the application takes in the normal flow:
`recipes/{clientId}`

It is important to highlight that if this process, from step 4 to step 6, takes longer than a configured window in seconds, the Recipes API will return a default response to the App, which is just the header of the recipes retrieved in step 2.
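As a sketch only (function names and the window value are assumptions, not the actual implementation), this fallback behaviour can be expressed as:

```python
# Sketch of the timeout window on the Recipes API side (hypothetical names).
import concurrent.futures

RECOMMENDATION_TIMEOUT_S = 5  # the configured window, value assumed
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)


def get_recipes(client_id, header_recipes, compute_recommendations):
    """Return the computed recommendations, or the default header response on timeout."""
    future = _pool.submit(compute_recommendations, client_id)
    try:
        return future.result(timeout=RECOMMENDATION_TIMEOUT_S)
    except concurrent.futures.TimeoutError:
        # Steps 4 to 6 exceeded the configured window: fall back to the
        # header of the recipes retrieved in step 2.
        return header_recipes
```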
At this point in the project, we had neither proper scaffolding nor a boilerplate to set up the code repository. Therefore, a new repository was started from scratch, where the following aspects were addressed to start the development:
- `Matech.Starvapp.RecipesRecomender.Analytics`

The repository was published here. You can check how the initial structure was defined in Pull Request #1, obtaining the following layout:
├── README.md <- The top-level README for developers using this project
│
├── starvapprecom <- Core package of this project.
│ ├── approaches <- Module to develop approaches for recipes recommendations
│ ├── scrapers <- Module to handle scrapers of recipes
│ └── utils <- Module to provide utils in general
│
├── data <- Folder for datasets to be used locally
│
├── docs <- Project documentation and resources
│
├── notebooks <- Place to store all Jupyter notebooks
│
├── scripts <- Scripts to execute services and other functions
│
├── tests <- Unit and System tests of the core library
│
├── tools <- Tools for the development of this project
│
└── requirements.txt <- File that specifies the dependencies for this project
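Assuming `starvapprecom` and its submodules are regular Python packages and the repository root is on the `PYTHONPATH`, the core code can then be used the same way from notebooks, scripts, and tests:

```python
# The modules shown in the tree above become directly importable once the
# repository root is on PYTHONPATH (or the package is installed).
from starvapprecom import approaches, scrapers, utils
```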
All the services in StarvApp are deployed through Kubernetes, so each of them must provide a `Dockerfile` that allows the DevOps team to perform the actual deployment. Therefore, for this repository we provided:
- `Dockerfile`, which creates and serves the recommender solution
- `Dockerfile_rabbitmq`, which creates a RabbitMQ server for testing purposes
- `docker-compose.yml`, which configures the connection between these two services for testing purposes (see the sketch below)

In addition, to ease local development, there are tools to create and use a virtualenv, which is easier to handle for local tests.
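To sanity-check that composed setup locally, a quick smoke test could publish a message to the RabbitMQ container started by `docker-compose.yml`; this is a hypothetical sketch, again assuming the `pika` client and an illustrative queue name:

```python
# Hypothetical smoke test: publish one message to the local RabbitMQ container
# brought up by docker-compose.yml and confirm the broker accepts it.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="recipes.requests")  # same illustrative queue as above
channel.basic_publish(
    exchange="",
    routing_key="recipes.requests",
    body=json.dumps({"clientId": "test-client"}),
)
connection.close()
print("Message published to the local RabbitMQ broker")
```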
Building on that boilerplate, the development continued in PR #2, where the following aspects were solved:
- `Dockerfile` to build and serve the application
- `docker-compose.yml` to quickly test a deployment before delivering the application

After that, the development proceeded to enhance and fix some aspects, and in PR #7 the repository evolved to have these aspects done:
You can check these configurations in the actual code repository.