Intermittent server timeouts on an endpoint

Priority: Normal
App ID: globalnewsapp - 18666

We are seeing intermittent server timeouts or 503 responses on an endpoint. As reported in a previous thread in the tech-channel, a memory error sometimes occurs. The endpoint returns a filtered set of posts, limited to 20 instances per query; it is already filtered to the user’s selected categories and excludes users that are blocked or muted. I have been testing the endpoint since Monday, and the worst response time is ~4 seconds.

Here is the queryset and serializer used for the endpoint.


from django.db.models import Q


class PostViewSet(ModelViewSet):
    def get_queryset(self):
        if self.action != "list":
            return Post.objects.valid_posts()

        user = self.request.user
        categories = list(user.categories.values_list("id", flat=True))
        blocked = list(user.blocked.values_list("id", flat=True))
        muted = list(user.muted_users.values_list("id", flat=True))
        excluded = list(set(blocked + muted))

        # NOTE: parts of the filter below were garbled in the paste; the
        # category term is reconstructed from the description above.
        return (
            Post.objects.valid_posts()
            .filter(
                Q(categories__id__in=categories)
                | Q(author=user)
            )
            .exclude(author__id__in=excluded)
            .prefetch_related("trending_topics", "ratings")
        )


class PostSerializer(serializers.ModelSerializer):
    media = MediaSerializer(many=True, required=False)
    author = FeedUserSerializer()
    trending_topics = TrendingTopicSerializer(required=False, many=True)
    share_count = serializers.ReadOnlyField()
    comment_count = serializers.ReadOnlyField()
    is_shared = serializers.ReadOnlyField()
    trending_post = serializers.ReadOnlyField()
    location = LocationSerializer()
    citations_count = serializers.ReadOnlyField()
    original_post = SharedPostSerializer()
    user_rating = serializers.SerializerMethodField()

    def get_user_rating(self, obj):
        user = self.context.get("user")
        # NOTE: the guard below was garbled in the paste and is reconstructed.
        if user and obj.ratings.exists():
            rating = get_object_or_None(PostRating, post=obj, user=user)
            if rating:
                return rating.rating

    class Meta:
        model = Post
        exclude = []  # field list truncated in the original paste

cc @dilara

It’s not the size of your response that’s driving the response time up; it’s the complexity of your query. Here’s what resident Djangonista @andresmachado had to say:

“The problem here is the reverse relations. They’re triggering a lot of queries for each id-list definition in the top portion of the code. Ideally, all of it would be bundled inside subqueries, which would improve the performance.”

That should sort things out for you. Happy coding!

Hey @mcruspero!

@anand is correct about the Python code.

But another thing that came to mind is that the memory errors may be related to OptaPlanner running in the same container. I posted a link to this blog before - Spring Boot on Heroku with Docker, JDK 11 & Maven 3.5.x - codecentric AG Blog - but I’m not sure whether your team was able to implement it. It provides good insight into memory management and optimisation for Spring apps running in a container.
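The usual knob this kind of setup points at is capping the JVM’s share of the container’s memory so the Spring Boot/OptaPlanner process can’t starve everything else in the dyno. A hypothetical entrypoint tweak along those lines (the 60% figure is illustrative, not taken from the blog post):

```shell
# Illustrative sketch: let the JVM size its heap from the container
# limit (-XX:+UseContainerSupport is the default on JDK 10+) and cap
# it at a percentage that leaves headroom for the rest of the dyno.
JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=60.0"
exec java $JAVA_OPTS -jar target/app.jar
```

Without an explicit cap, the JVM may size its heap from the host’s memory rather than the container limit, which is a common cause of containerized apps being OOM-killed.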

Hi ppl!

As Anand mentioned, the problem may be the number of reverse relations being queried in the viewset; each call to the endpoint will generate a large number of queries against your database.

Some of the causes can also be related to unoptimized data modeling.

Another point of concern is your serializer. If you run a debug tool like django-debug-toolbar, you’ll see the number of queries being made. The serializer is not optimized: it has four nested serializers, fields declared using ReadOnlyField, and another reverse relation queried inside a serializer method field.

I’d review your endpoint, as it seems to be doing too much; I recommend separating its responsibilities, which might help.