Degraded performance

Incident Report for Napta

Postmortem

On March 14th, 2025, Napta experienced an incident affecting platform performance. We would like to share a detailed timeline of events, the root cause, and the actions taken to prevent similar issues in the future.

Timeline of Events

2:40 PM CET

Our monitoring systems detected a sudden increase in response times across the platform.

3:10 PM CET

The root cause was identified and mitigated. Response times returned to normal and the platform became fully responsive again.

Root Cause

The issue was caused by a process that unexpectedly acquired a global lock on a specific client database and held it for an undetermined amount of time. The lock prevented other transactions from executing, which triggered timeouts and created bottlenecks across the platform for all of our clients.
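For readers interested in what this failure mode looks like in practice, the sketch below shows how blocked and blocking sessions can be surfaced on a PostgreSQL-style database. The engine, connection string, and query are illustrative assumptions only; this report does not name the specific database technology involved.

```python
import psycopg2

# Illustrative query (assuming a PostgreSQL-style database):
# list sessions that are waiting on a lock, together with the
# session currently holding the lock that blocks them.
QUERY = """
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
WHERE blocked.wait_event_type = 'Lock';
"""

# Hypothetical connection string for the example.
conn = psycopg2.connect("dbname=example user=example")
with conn.cursor() as cur:
    cur.execute(QUERY)
    for row in cur.fetchall():
        print(row)
conn.close()
```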

Action Plan

To prevent similar incidents, we have introduced stricter timeout configurations that automatically terminate blocking database operations after a short period, preventing prolonged lock situations. A rough illustration of this kind of safeguard is sketched below.
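The snippet below is a minimal sketch, assuming a PostgreSQL-style database, of session-level timeouts that cancel statements waiting on a lock and terminate transactions left idle while holding one. The setting names and values are assumptions for illustration, not the exact configuration deployed on our platform.

```python
import psycopg2

# Hypothetical connection string; the actual database engine and
# configuration used by the platform are not described in this report.
conn = psycopg2.connect("dbname=example user=example")

with conn.cursor() as cur:
    # Cancel any statement that waits on a lock for more than 5 seconds.
    cur.execute("SET lock_timeout = '5s'")
    # Cancel any statement that runs longer than 30 seconds overall.
    cur.execute("SET statement_timeout = '30s'")
    # Terminate sessions left idle inside an open transaction,
    # a common way for a lock to be held indefinitely.
    cur.execute("SET idle_in_transaction_session_timeout = '60s'")

conn.commit()
conn.close()
```

In practice, limits like these would typically be set at the database or connection-pool level rather than per session, so that every client connection inherits them.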

Closing Remarks

We sincerely apologize for the inconvenience this incident may have caused. We remain committed to maintaining a reliable and high-performing platform, and continue to improve our monitoring and safeguards accordingly.

If you have any further questions, please don’t hesitate to contact our support team.

Posted Mar 24, 2025 - 08:55 UTC

Resolved

We have identified the root cause of the issue and successfully resolved it. A postmortem report will be published soon.
We sincerely apologize for any inconvenience this may have caused.
Posted Mar 14, 2025 - 14:10 UTC

Investigating

We are currently experiencing degraded performance on app.napta.io; our team is looking into this.
Posted Mar 14, 2025 - 13:40 UTC
This incident affected: Application.