Deployment failed: Java app to be deployed via Dockerfile

That’s a good question. Perhaps @Crowdbotics_Dan or @anand could chime in.

The main database needs to be shared with OptaPlanner because OptaPlanner is going to edit the database directly. For example:
“I need the best employee to do this work”
OptaPlanner will find the best employee and assign them to that work; to do that, it needs to read the employees table and write to the work table.
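
To make this concrete, here is a minimal sketch of the kind of domain class OptaPlanner assigns during solving. The class and field names are hypothetical (the real project's entities would mirror the shared employees/work tables), and it assumes the optaplanner-core dependency:

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// Hypothetical domain classes for illustration only.
class Employee {
    String name;
    Employee(String name) { this.name = name; }
}

@PlanningEntity
public class WorkItem {

    private String description; // a row from the work table

    // OptaPlanner picks a value from the "employeeRange" provider
    // (the employees read out of the shared database); the matching
    // @ValueRangeProvider would live on the solution class. The
    // resulting assignment is what gets written back to the work table.
    @PlanningVariable(valueRangeProviderRefs = "employeeRange")
    private Employee employee;

    public Employee getEmployee() { return employee; }
    public void setEmployee(Employee employee) { this.employee = employee; }
}
```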

Does it actually need 1.2GB of memory? Can you optimize it so that it doesn’t require as much?

1.2GB seems horribly excessive, even for a Java app.

Well, I could set some limits to optimize these initial test records. I also found this note in the OptaPlanner examples:

@Crowdbotics_Dan @charath if we optimize the limits and add those params to the VM so that memory is used only when needed, do we have your consent to proceed?

I already set some limits and added some query filters. It works, but with some memory errors.
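
For reference, a quick way to confirm what heap the JVM actually got on the dyno after tuning flags like -Xmx (a plain-JDK check, nothing OptaPlanner-specific; class name is ours):

```java
// Prints the heap limits the running JVM actually received,
// useful when verifying -Xmx / JAVA_TOOL_OPTIONS took effect.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("max heap:   %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("total heap: %d MB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("free heap:  %d MB%n", rt.freeMemory() / (1024 * 1024));
    }
}
```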

@charath @Crowdbotics_Dan @dmitrii.k It seems that @amador.jesus is not having any luck reducing the memory quota. From what we are reading in the OptaPlanner documentation, the only thing that can be done is to make sure that memory is used only while the process is running.

Likewise, it seems that Lambda is not suitable: the maximum execution time is 15 minutes, and OptaPlanner can run in the background for longer than that, depending on the number of constraints and the size of the data set.

Can you assess?

Linear optimization tools are compute-intensive.

As I mentioned last time, I discourage the use of this tool; it’s overkill for small datasets. Since you indicated it’s a hard customer requirement, please get sign-off from the customer on the extra dyno cost of $250+/mo.

Thanks @anand. I will take that into consideration if we don’t get any good answer from the OptaPlanner community. We might find that Lambda’s 15-minute limit is enough. @dmitrii.k also proposed GCP Cloud Run. Is there any impediment to using that?

I see that GCP has the same 15-minute limit. So the best way to go would be to use Lambda for the MVP, so we can track performance as we add more and more constraints iteratively. If at some point the processing time gets close to the limit, the client will have to sign off on a new dyno.

@anand @Crowdbotics_Dan @charath let me know if you are OK with the proposed solution. If so, let’s proceed with AWS Lambda.

This really isn’t my call. My personal opinion is that you’ll be better off either not using the library or getting the client to sign off on the hosting costs.

@anand @charath @dmitrii.k @Crowdbotics_Dan We got approval from the client for an extra dyno. It’s worth mentioning that OptaPlanner performs significantly slowly under our current infrastructure, a Docker Compose setup shared with the main Django app. Will performance improve once it has a new dyno used only for OptaPlanner?

Another question: the client tends to prefer an upgrade (+$299) to a multi-app plan over adding a new dyno (+$250) because of the number of perks she would get. Could we list, or point to, the main differences? The key factor here is performance, as OptaPlanner is a huge memory and CPU consumer.

Just a follow-up here, as the client is awaiting a response. Thanks in advance :slight_smile:

@jorge.m,

The performance should definitely improve, because you’re currently running it on a significantly underpowered dyno.

It’s worth mentioning that Heroku doesn’t run docker-compose (or swarm); it runs off heroku.yml and currently doesn’t support any of Docker’s networking features. Adding another dyno will therefore require a different way of communicating, not a RESTful API.
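
For illustration, the docker-free channel between dynos under the same app would be the shared database. A rough JDBC sketch (the table, columns, and job semantics are invented; assumes the Postgres JDBC driver is on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch of database-mediated communication: the web dyno inserts
// rows into a hypothetical solver_jobs table, the OptaPlanner dyno
// polls for them.
public class JobPoller {
    public static void main(String[] args) throws Exception {
        // Heroku's Java buildpack exposes JDBC_DATABASE_URL; a
        // Dockerfile deploy may need to derive it from DATABASE_URL.
        String url = System.getenv("JDBC_DATABASE_URL");
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT id FROM solver_jobs WHERE status = 'pending'");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println("Would start solving job " + rs.getLong("id"));
            }
        }
    }
}
```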

Thanks @dmitrii.k. I will defer to @amador.jesus on the questions related to deployment on the new dyno, taking your latest message into consideration.

And apologies for insisting, but I need to provide some comparison info on the two available options:

I don’t know if a multi-app upgrade will allow you to have a $250 dyno; @anand is the best person to answer that.

But if you want to use a RESTful API for communication, multi-app is pretty much the only way.
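
For example, under multi-app the OptaPlanner app could expose a small HTTP endpoint for the Django app to call. A minimal plain-JDK sketch (the /solve path and response shape are made up):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Bare-bones HTTP front for the solver process. Heroku injects the
// PORT env var; the /solve endpoint here just acknowledges a job.
public class SolverService {
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/solve", exchange -> {
            byte[] body = "{\"status\": \"queued\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(202, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```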

@anand looking forward to your insight on that comparison.

@dmitrii.k We have an alternative to a RESTful API: a command-line interface that could be used on the new dyno. However, we have a few technical questions regarding both solutions (multi-app or new dyno):

1- OptaPlanner needs to connect to the Django DB. How can the two apps communicate in each solution (multi-app or new dyno)? Do both solutions involve separate servers?

2- In terms of security, what policies are suggested in each case?

Fine with us to upgrade the plan to $299 and add the super-dyno on that plan.

@jorge.m, a dyno is a separate server, which means command-line communication will not be possible. Different dynos under the same app can communicate only through the database and messaging. Dynos under different apps can communicate using a RESTful API on top of that. A command-line API is available only if you’re invoking the process from a web process running on the same dyno.

  1. MongoDB isn’t free on Heroku anymore. The only remaining provider is ObjectRocket, and their cheapest package is $95 a month. Alternatively, you can sign up for cloud.mongodb.com, but that will most likely be paid as well. Both solutions mean you’ll have separate servers.

  2. For the RESTful API, you should use SSL and secure tokens for authentication. Alternatively, you can use a messaging service, e.g. RabbitMQ (see the sketch below).
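
For example, a token-authenticated HTTPS call from one app to the other could look like this (URL, path, and env-var name are placeholders, not the project's real values):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the Django-side call, shown in Java for consistency:
// a shared-secret bearer token over HTTPS.
public class SolveRequest {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("SOLVER_API_TOKEN"); // shared secret
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://solver-app.herokuapp.com/solve"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"workId\": 42}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```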

You can run the Java processes on the same server as your web app, but then you’ll need to upgrade that dyno.