Previous incidents
Incident Report - Database Performance Outage
Resolved Jan 22 at 11:34am HST
Duration: 21:30 - 22:15 CET (45 minutes)
Impact: Service unavailable during this period
Root Cause: Primary database performance degradation prevented proper log recording and system operations
Status: Resolved - All systems returned to normal
Partial service disruption - Resolved
Resolved Jan 11 at 02:50am HST
We experienced a partial service outage from 1:00 AM to 1:30 PM CET today due to abnormally high call volumes over the weekend. In response to this traffic spike, we disabled logging temporarily to stabilize the platform and prevent further degradation.
The issue has been identified and resolved. All services are now returning to normal operations. We are currently monitoring the situation closely and will continue investigating the root cause of the elevated traffic volume.
We apologize fo...
Complete Application Shutdown
Resolved Jan 07 at 04:37am HST
Service has been fully restored. All systems are back to normal operation.
Network Outage
Resolved Dec 22 at 11:18pm HST
We experienced a network infrastructure issue this morning that made our Scrapin API services unavailable for approximately 90 minutes. Our technical team was mobilized early this morning and successfully resolved the issue.
A network connectivity problem affected our service delivery starting at 7:30 AM CET. Our engineering team identified and addressed the root cause, restoring full service by 9:00 AM CET.
During this window, API requests and platform access were unavailable....
INVESTIGATING - API Performance Issues
Resolved Dec 18 at 07:34am HST
The performance issues we experienced have been resolved. API response times are now back to normal levels. We continue to monitor our systems closely to ensure stability. Thank you for your patience.
Complete service outage
Resolved Dec 10 at 05:30am HST
All systems are operating normally on our end. Our reverse proxy has returned to baseline functionality, and we have deployed a secondary reverse proxy to provide redundancy and prevent similar incidents in the future.
API Endpoint performance degradation
Resolved Dec 10 at 05:07am HST
The connectivity issues we experienced earlier have been resolved. All services are now operating normally. We appreciate your patience and apologize for any inconvenience this may have caused.
Performance & Data Access Incident
Resolved Dec 09 at 06:23am HST
The network connectivity issue affecting our primary data sources has been resolved. All systems are operating normally and data consistency has been restored.
Data Retrieval Issue - Resolved
Resolved Dec 07 at 07:16am HST
We identified an issue beginning at 4:15 PM CET where a limited number of profiles were being returned with empty data fields. We implemented a workaround at 5:35 PM CET that successfully resolved the issue. All profiles are now returning complete data as expected. We apologize for any inconvenience this may have caused.
Network Connectivity Issue - Scraping temporarily unavailable
Resolved Dec 06 at 07:27am HST
All services have been fully restored and are operating normally. Thank you for your patience.
Minor data inconsistencies on Profile endpoints
Resolved Dec 08 at 03:46am HST
The "contractType" issues on "positions" are now fixed, and we do not expect further stability problems going forward.
Performance Incident
Resolved Nov 24 at 09:55am HST
Incident windows: yesterday, between 4:30 PM and 7:30 PM UTC
We experienced performance degradation on our ScrapIn API during two separate windows last evening. Our team responded quickly to mitigate the impact and stabilize the service.
What happened: Increased query load resulted in elevated response times and brief periods of service instability across certain endpoints.
Our response: We implemented immediate optimizations and load balancing adjustments to restore normal operation.
Current status: Performance h...
Planned service degradation: Switching to cache-only mode for stability
Resolved Nov 19 at 08:19am HST
After experiencing multiple incidents throughout the day, we have made the decision to temporarily disable activity and job endpoints to focus on comprehensive system stabilization. Profile and company endpoints will continue to operate using cached data only during this period.
We recognize that applying quick fixes to isolated issues is not sustainable. Instead, we are taking the necessary time to implement robust, long-term solutions to prevent future disruptions. This maintenance period ...
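The cache-only mode described above can be sketched roughly as follows. This is an illustrative model only, assuming a simple in-process cache keyed by endpoint and identifier; the endpoint names, TTL, and cache structure are assumptions for the sketch, not our actual implementation.

```python
import time

CACHE = {}               # (endpoint, key) -> (payload, stored_at)
CACHE_TTL = 24 * 3600    # serve entries up to a day old during the freeze
DISABLED = {"activity", "job"}   # endpoints taken offline for stabilization

def handle_request(endpoint: str, key: str):
    """Serve only cached data; live scraping is suspended."""
    if endpoint in DISABLED:
        # Activity and job endpoints are disabled outright.
        return 503, {"error": "endpoint temporarily disabled for maintenance"}
    entry = CACHE.get((endpoint, key))
    if entry is not None:
        payload, stored_at = entry
        if time.time() - stored_at <= CACHE_TTL:
            # Profile and company endpoints return cached data; no upstream call.
            return 200, payload
    return 404, {"error": "no cached data available"}
```

In this model, a cache miss returns 404 rather than triggering a live fetch, which is what keeps load off the systems under repair.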
Night instability resolved - fallback systems activated
Resolved Nov 18 at 04:53pm HST
We experienced intermittent instability between 1:00 AM and 3:30 AM CET. Our fallback systems were automatically activated to maintain service availability. All services are now stable and operating normally.
Profile endpoints missing positions, schools and skills data
Resolved Nov 18 at 07:17am HST
The issue has been resolved. Profile endpoints are now returning positions, schools, and skills data as expected. We will continue to monitor the situation closely over the next few hours.
Service Instability - Detailed Incident Report
Resolved Nov 12 at 12:00am HST
We experienced multiple cascading issues over the past 24 hours. Here's the timeline:
Yesterday 3:00 PM - 6:00 PM CET: Our primary data source experienced an outage. Our automatic failover to a secondary source activated, but we quickly reached rate limits on that backup system.
Around 7:30 PM CET: We encountered elevated 429/424 rate limit errors from the secondary source. Our team manually switched back to the primary source after confirming its status had improved.
Around 10:00 PM CET: ...
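The failover behavior in this timeline can be sketched as below: try the primary source, fall back to the secondary on an outage, and treat 429/424 responses from the backup as a signal that its rate limit is exhausted. The function names and the RateLimited exception are stand-ins for illustration, not our internal code.

```python
class RateLimited(Exception):
    """Raised when a data source answers with a 429/424 rate-limit error."""

def fetch_with_failover(fetch_primary, fetch_secondary):
    try:
        return fetch_primary()
    except RateLimited:
        raise            # primary is throttling, not down; don't hammer the backup
    except Exception:
        # Primary outage: fail over to the secondary source.
        pass
    try:
        return fetch_secondary()
    except RateLimited:
        # The backup's quota is exhausted (the 429/424 errors described
        # above); surface this so operators can switch back manually.
        raise
```

The sketch makes the failure mode visible: automatic failover only helps while the secondary source has quota left, after which manual intervention is needed.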
Database performance degradation
Resolved Nov 11 at 01:05am HST
We're investigating high database load following an internal migration. Our team is actively implementing read replicas and performance optimizations. We're monitoring closely and will update you as the situation evolves.
Intermittent HTTP Request Issues
Resolved Nov 04 at 12:40am HST
We've identified and resolved the intermittent request failures that occurred earlier. The issue was caused by elevated traffic spikes overwhelming our reverse proxy, which temporarily prevented some requests from reaching our backend servers. We've implemented improved rate limiting and traffic management on our proxy layer to prevent this from occurring in the future. All systems are now operating normally.
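A minimal sketch of the kind of rate limiting a proxy layer can apply is a token bucket per client. The rates below are illustrative, not our production values.

```python
import time

class TokenBucket:
    """Token-bucket limiter: steady refill rate with a bounded burst."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request rejected (e.g. answered with HTTP 429)
```

Shedding excess requests at the proxy this way keeps a traffic spike from reaching the backend servers in the first place.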