Degraded performance

Incident Report for Napta

Postmortem

We would like to share an overview of a service incident that occurred on January 27, 2026, affecting the performance of the Napta application. Users experienced significant slowness and latency when accessing the platform. Below is a timeline of events, an explanation of the cause, and the actions we are taking to prevent recurrence.

Timeline of Events

  • 3:19 PM CET: Our team applied a configuration change to enable a new feature. Following this, the application began experiencing degraded performance.
  • 3:25 PM CET: Our monitoring systems triggered high-latency alarms, and the issue was formally detected by our engineering team.
  • 3:27 PM CET: The engineering team initiated a rollback of the configuration change to stop the performance degradation.
  • 3:29 PM CET: To accelerate recovery, the team manually suspended the internal processing service to reduce load on the database.
  • 3:36 PM CET: Database activity returned to normal levels, and application response times fully stabilized.

Root Cause

The performance degradation was triggered by the activation of a new background processing feature. Upon activation, the system attempted to catch up on a large volume of pending data updates that had accumulated over the previous days.

This sudden influx of operations created a traffic spike that temporarily overloaded our primary database. As a result, all users experienced significantly slower response times until the backlog was brought under control and the system stabilized.
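To make the failure mode concrete, here is a minimal sketch in Python. The function names and the asyncio worker are illustrative assumptions, not our actual code: the entire multi-day backlog is dispatched at once, with no cap on concurrency, so background writes monopolize the database and user-facing queries queue behind them.

    import asyncio

    async def fetch_pending_updates():
        # Stands in for reading the multi-day backlog from the database.
        return list(range(50_000))

    async def apply_update(update_id):
        await asyncio.sleep(0.01)  # stands in for one database write

    async def catch_up_unbounded():
        pending = await fetch_pending_updates()
        # Every pending update is dispatched at once; each task would
        # hold a database connection, so a large backlog exhausts the
        # pool and user-facing queries stall behind background work.
        await asyncio.gather(*(apply_update(u) for u in pending))

    asyncio.run(catch_up_unbounded())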

Action Plan

To prevent similar incidents, we are implementing the following improvements:

  • Traffic Control: We are adjusting our system configurations to strictly cap the number of background operations that run simultaneously, so that background work cannot crowd out user-facing traffic.
  • Data Volume Management: We are updating our processing logic to drain accumulated data in small, paced batches, preventing massive "catch-up" spikes. A sketch of both of these mitigations follows this list.
  • Enhanced Testing: We are introducing stricter load testing protocols in our staging environments to better simulate high-volume scenarios before releasing changes to production; an example scenario follows the sketch below.
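As a concrete illustration of the first two items, here is a minimal sketch in Python. The names, limits, and batch sizes are illustrative assumptions, not our production configuration: a semaphore caps how many background operations run at once, and the backlog is drained in small batches with a pause between them.

    import asyncio

    MAX_CONCURRENT = 5     # hard cap on simultaneous background operations
    BATCH_SIZE = 100       # updates drained per batch
    BATCH_PAUSE_S = 1.0    # pause between batches to leave headroom

    async def apply_update(update_id, sem):
        async with sem:                # never more than MAX_CONCURRENT at once
            await asyncio.sleep(0.01)  # stands in for one database write

    async def catch_up_throttled(pending):
        sem = asyncio.Semaphore(MAX_CONCURRENT)
        for start in range(0, len(pending), BATCH_SIZE):
            batch = pending[start:start + BATCH_SIZE]
            await asyncio.gather(*(apply_update(u, sem) for u in batch))
            await asyncio.sleep(BATCH_PAUSE_S)  # spread the backlog over time

    asyncio.run(catch_up_throttled(list(range(1_000))))

Capping concurrency bounds how many database connections background work can hold at once, and batching with pauses spreads a multi-day backlog over time instead of replaying it in a single burst.

For the third item, a load test along the following lines can reproduce a catch-up burst in staging before a change ships. Locust is used here purely as an example tool; the endpoint and payload are hypothetical, not our actual API.

    from locust import HttpUser, task, between

    # Illustrative Locust scenario: many simulated users push updates
    # in quick succession, approximating the catch-up spike.
    class BurstUpdateUser(HttpUser):
        wait_time = between(0.1, 0.5)  # aggressive pacing to create a burst

        @task
        def push_update(self):
            self.client.post("/api/updates", json={"payload": "x"})

A scenario like this is launched from the locust command line against a staging host and ramped up until it approximates the observed traffic spike.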

Closing Remarks

We sincerely apologize for the disruption this slowdown caused to your operations. We understand the importance of a fast and reliable platform and are committed to continuously improving our systems to minimize the risk of future incidents. If you have any further questions, please don’t hesitate to contact our support team.

Posted Jan 27, 2026 - 17:21 UTC

Resolved

This incident has been resolved.
Sorry for the inconvenience; a postmortem will be shared soon.
Posted Jan 27, 2026 - 14:42 UTC

Monitoring

A fix has been implemented and we are monitoring the results.
Posted Jan 27, 2026 - 14:36 UTC

Identified

The issue has been identified and a fix is being implemented.
Posted Jan 27, 2026 - 14:32 UTC

Investigating

We are currently experiencing degraded performance on app.napta.io; our team is looking into this.
Posted Jan 27, 2026 - 14:19 UTC
This incident affected: Application.