Date/Time of Incident: February 25, 2026, around 21:01 UTC
Duration: ~11 minutes
Reported by: Monitoring & Health Checks (automatic detection)
Services Affected: Axinom DRM Licensing Service (Europe region)
Severity: Minor
On February 25, 2026, the Licensing Service in the Europe region became partially unavailable for approximately 11 minutes. The issue originated from a network-level security incident at our infrastructure provider, which triggered automated mitigation systems that inadvertently over-blocked traffic, affecting services beyond those directly targeted. Automatic health checks detected the issue and traffic was partially rerouted to alternate regions.
Approximately 20% of license requests received in the Europe region during the incident were delayed or failed entirely.
Our infrastructure provider experienced a network-level security incident. While the incident was successfully contained, the automated mitigation systems over-blocked traffic, eventually affecting our Licensing Service in the Europe region. This was an infrastructure-level event entirely external to Axinom's platform — no customer data or DRM systems were compromised. Our infrastructure provider shared this:
"The issue was caused by a DDoS attack which was mitigated but blocked too much traffic affecting UDP. Due to the over-blocking, other services not directly using UDP were influenced too. The issue was solved; we will check further measures to avoid this in the future. All systems are accessible again. Thank you for your understanding."
We will update this page as soon as we learn more from our provider.
The incident lasted longer than usual because our health monitoring did not immediately classify it as major and therefore did not trigger a complete failover to another cloud provider. Under normal outage conditions, our failover process completes in under one minute. We are currently reviewing and optimizing our failover strategy to better handle this type of infrastructure-level disruption.
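To illustrate the classification step described above, here is a minimal sketch of severity-based failover logic. All names, thresholds, and structures are illustrative assumptions for this writeup, not Axinom's actual implementation: a "major" classification would trigger a full failover to another provider, while a "minor" one (as in this incident, with roughly 20% of requests affected) would only reroute part of the traffic.

```python
# Hypothetical sketch of severity-based failover classification.
# Thresholds and names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class HealthSample:
    region: str
    error_rate: float  # fraction of failed/delayed license requests (0.0-1.0)

MAJOR_ERROR_RATE = 0.5  # assumed threshold for full cross-provider failover
MINOR_ERROR_RATE = 0.1  # assumed threshold for partial regional rerouting

def classify(sample: HealthSample) -> str:
    """Classify incident severity from a health-check sample."""
    if sample.error_rate >= MAJOR_ERROR_RATE:
        return "major"
    if sample.error_rate >= MINOR_ERROR_RATE:
        return "minor"
    return "healthy"

def failover_action(sample: HealthSample) -> str:
    """Map severity to the corresponding mitigation action."""
    severity = classify(sample)
    if severity == "major":
        return f"fail over all {sample.region} traffic to alternate provider"
    if severity == "minor":
        return f"reroute part of {sample.region} traffic to alternate regions"
    return "no action"
```

Under these assumed thresholds, a ~20% error rate lands in the "minor" band, which matches the partial rerouting observed during this incident rather than a sub-minute full failover.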
The issue was resolved by our infrastructure provider, whose team is reviewing mitigation measures to prevent similar over-blocking in future incidents. In parallel, our team is evaluating improvements to our failover mechanisms for infrastructure-level disruptions of this kind.