For the moment I have a Spring Boot configuration with a RESTful API. The server needs a DB connection to read and write data, and it also relies on some environment variables.
I created a new Dockerfile and edited docker-compose.yml and docker-compose.override.yml.
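For reference, a minimal sketch of what the compose override could look like for the Spring service; the service name, build path, and DB credentials here are placeholders, not our actual config (only the SPRING_DATASOURCE_* names are standard Spring Boot properties):

```yaml
# docker-compose.override.yml (sketch; names and paths are placeholders)
services:
  optaplanner:
    build: ./optaplanner          # directory containing the new Dockerfile
    ports:
      - "8080:8080"
    environment:
      # Standard Spring Boot datasource env vars
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/planner
      SPRING_DATASOURCE_USERNAME: ${DB_USER}
      SPRING_DATASOURCE_PASSWORD: ${DB_PASSWORD}
    depends_on:
      - db
```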
Our CB Dashboard currently won’t be able to track either the deployment status or the logs from the Spring app, as it only stores information about the main web process, which is the Django backend in your case.
We’ll need to populate an environment variable for the Django app that will contain the URL for the Spring service after its initial deployment; that is one way to do it.
@jorge.m, it sounds like we’ll need to run another dyno for this project, which will cause an additional cost.
An alternative, and perhaps cleaner, solution is to create a separate web app on our platform and run it independently from the main app. This way you’ll have complete control over the Spring Boot deployment, access to logs, etc. Using this approach, you should still be able to have a single docker-compose.spring.yml (as a naming example) file for local development, provided you have both projects cloned to your computer and use relative paths.
Infrastructure-wise, the resources required will be the same: another DB instance and another dyno (except that a separate app also requires another repo). The rest you should probably discuss with either @whitney3 or @anand.
One important thing to consider besides the point above is security: since you are running a web-based API on a public URL, you should probably think about securing and limiting access to it.
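As a minimal illustration of the kind of access control I mean: a shared-secret header check you could wire into a Spring filter. The class and method names are hypothetical, and this is just the comparison logic, kept framework-free:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of a shared-token check for a public API endpoint.
// In the real app this would sit inside a Spring filter; here it is
// only the core comparison, with no framework dependencies.
public class TokenCheck {
    // Constant-time comparison avoids leaking token prefixes via timing.
    public static boolean isAuthorized(String presented, String expected) {
        if (presented == null || expected == null) {
            return false;
        }
        return MessageDigest.isEqual(
                presented.getBytes(StandardCharsets.UTF_8),
                expected.getBytes(StandardCharsets.UTF_8));
    }
}
```

The caller would read the expected token from an environment variable and compare it against a request header before letting the request through.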
In the meantime, let me provide some more context to help come up with the most suitable infrastructure:
OptaPlanner will run only once per week, and it’s a process that can take anywhere from several minutes up to a few hours, depending on the number of constraints configured. Ideally we would need to maximize performance and use resources on demand, only while the application is running.
There is no real need for another DB instance or another repo. However, we would need access to the logs for debugging purposes. Having another dyno sounds like the easiest solution if we can provide a way to see the logs of the Java app.
All the documentation is based on an infrastructure that uses Spring Boot. A workaround that doesn’t use Spring Boot could bring additional difficulties, so I would avoid that path to mitigate risks.
Extra apps are priced at $199/mo and up, although the advanced and enterprise tiers pay a reduced rate on multiple apps. We define new apps as new repos.
New dynos on an existing app are much more cost-effective, so we recommend this approach when customers scale up. They are added onto a user’s monthly plan cost for actual compute minutes used as a pass-through expense. Optimization engines are generally compute-intensive, so it’s wise to get a sense of the costs here with a few trial runs.
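To make the trial-run idea concrete, the back-of-envelope math is just runs per month × hours per run × hourly rate. All the figures below are made-up placeholders, not our actual pricing; the trial runs would supply the real runtime and rate:

```java
// Back-of-envelope pass-through dyno cost estimate.
// All numbers used here are hypothetical placeholders.
public class DynoCostEstimate {
    public static double monthlyCost(double runsPerMonth,
                                     double hoursPerRun,
                                     double dollarsPerHour) {
        return runsPerMonth * hoursPerRun * dollarsPerHour;
    }

    public static void main(String[] args) {
        // e.g. a weekly run (~4.3/month), 2 hours each,
        // at a hypothetical $0.60/hour compute rate
        System.out.printf("~$%.2f/month%n", monthlyCost(4.3, 2.0, 0.60));
    }
}
```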
@anand A new dyno sounds like the best solution, and it would make a lot of sense to do a few trial runs. However, we would need a few dev iterations to add all the constraints that will affect the optimization engine; only then could we have an idea of the costs. Ideally, if there are extra costs, the client should be informed as soon as possible to avoid surprises. Can we estimate a cost range based on the first point in my previous message?
@amador.jesus, thanks for trying that out. This sort of memory requirement would need a dyno that costs up to $250 a month.
It would probably be more cost-effective to run it as a scheduled task rather than as part of a web process. Then you pay for this sort of hardware only while the process is running, not for the whole month.
@jorge.m, no: a scheduled task is by definition recurring, not invoked on demand.
It sounds like putting this Java code on GCP Cloud Run or AWS Lambda makes more sense from a performance and cost perspective, since that would allow us to trigger it at any time and incur costs only while it runs.
From an architectural standpoint, it would be much better to set up two independent databases (one being a data lake) and pipe the data into the database that OptaPlanner uses. Or, if you have to share a database, it would probably be better to migrate to GCP Cloud SQL or one of the AWS DB products so we can manage the users ourselves: Heroku provides us connection URLs, but they may change, which would require a manual update on the OptaPlanner side.
Yes, to run on Lambda or GCP Cloud Run this would have to be an API, or it could be invoked using a messaging service.
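For the API-invocation route, the trigger from the Django side (or a scheduler) is just an HTTP call to the Spring endpoint. A sketch using the JDK's built-in HTTP client; the URL path, header name, and token are hypothetical placeholders:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

// Sketch of triggering the solver service over HTTP.
// The /solve path and X-Api-Token header are assumptions, not a real API.
public class SolverTrigger {
    public static HttpRequest buildTriggerRequest(String baseUrl, String token) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/solve"))
                .timeout(Duration.ofSeconds(30))
                .header("X-Api-Token", token)   // shared-secret auth header
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }
}
```

Sending it is then one call to `HttpClient.newHttpClient().send(request, ...)`; on Cloud Run the same request simply cold-starts the container, so we pay only for the solve window.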