nopCommerce After 9 months system keeps going slow (restarting?) multiple times an hour

2 weeks ago
Hi,

Looking for any advice on this. I have spent the last couple of days trying various strategies to resolve this issue, with only limited success (mitigating the worst of it rather than solving the underlying problem).

nopCommerce version: 4.60.2
Operating system: Microsoft Windows NT 10.0.20348.0
ASP.NET info: .NET 7.0.18
Hosted on Azure App Service (currently Premium v3 P3v3 - 8 vCPU / 32 GB RAM)
- Normally the "Committed Memory" is between 2 GB and 3 GB
- CPU averages about 25% and goes above 60% about twice an hour
Azure SQL: 100 DTU (same region as the app service)
- CPU sits below 5%
Number of Orders in System: 20,185 (Is this an issue?)
The number of requests to the system has remained constant at 30,000-40,000 per day, according to Azure logs.

The system is really quick and works great 95% of the time. The other 5% of the time, all requests sit for between 20s and 50s before the page renders, which is driving my customer crazy. This used to happen occasionally, but now it happens hourly.

Things I have tried (some left in place, some reverted):

- Using System -> Maintenance: cleared out guest customers, abandoned shopping carts, and already-sent emails
- Upgraded the Azure App Service (see above)
- Upgraded Azure SQL (see above)
- Used Redis for "distributed" caching (lowered RAM usage but did not fix the issue)
- Used memory for "distributed" caching
- Blocked unregistered users from browsing the site (they now all get the login page)
- Set the default cache time to various values, from 5 minutes up to 600 minutes (currently 600)
- Reduced the default results per page to 10
- Looked at the top 10 DB queries (they seem fine)
- Swapped from 64-bit hosting to 32-bit and back again
- Forced Azure to swap the physical host

Is there a log file anywhere (not the database) showing whether the system restarts? I am currently investigating whether this could be the issue, i.e. something crashing and forcing a nopCommerce restart on a regular basis.
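In the meantime I am logging restarts myself by hooking the host lifetime events, so a plain text file shows every start and stop. This is only a rough sketch of the idea (a bare-bones Program.cs rather than the actual nopCommerce startup, and the log path is just an example - I believe anything under D:\home on Azure App Service, e.g. the LogFiles folder, survives a process restart):

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Append a timestamp on every start/stop so unexpected restarts are easy to spot.
// The path is only an example - point it at somewhere that survives restarts.
var logPath = Path.Combine(builder.Environment.ContentRootPath, "App_Data", "lifetime.log");
void Log(string message) =>
    File.AppendAllText(logPath, $"{DateTime.UtcNow:O} {message}{Environment.NewLine}");

app.Lifetime.ApplicationStarted.Register(() => Log("application started"));
app.Lifetime.ApplicationStopping.Register(() => Log("application stopping"));
app.Lifetime.ApplicationStopped.Register(() => Log("application stopped"));

app.MapGet("/", () => "ok"); // placeholder endpoint just to make the sketch runnable
app.Run();
```

The portal's "Diagnose and solve problems" blade also has a restart detector ("Web App Restarted", if I remember the name right), which is useful for lining restarts up with the slow responses.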

Any other ideas or thoughts would be appreciated, thank you!
2 weeks ago
Found it. It was a process running in the "Schedule tasks" that was making too many external HTTP calls for the Azure hosting, causing an error, which made Azure Application Insights restart the system - hence the 40s-50s delays in responses.
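For anyone who lands here with the same symptoms: if the underlying error is outbound connection / SNAT port exhaustion (common on App Service when a task opens lots of short-lived HTTP connections), the usual mitigation is to reuse a single HttpClient and cap how many calls are in flight at once. A rough sketch of that kind of throttle - the class and method names are made up for illustration, not the actual task code:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Rough sketch of throttling outbound calls from a scheduled task.
// ThrottledFetcher/FetchAsync are made-up names; the key points are a
// single shared HttpClient and a cap on concurrent requests.
public static class ThrottledFetcher
{
    // One HttpClient for the lifetime of the app, so each call does not
    // burn a new outbound (SNAT) port.
    private static readonly HttpClient Client = new() { Timeout = TimeSpan.FromSeconds(30) };

    // Allow at most 5 requests in flight at any time.
    private static readonly SemaphoreSlim Throttle = new(5);

    public static async Task<string> FetchAsync(string url, CancellationToken ct = default)
    {
        await Throttle.WaitAsync(ct);
        try
        {
            using var response = await Client.GetAsync(url, ct);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync(ct);
        }
        finally
        {
            Throttle.Release();
        }
    }
}
```

The scheduled task would then call something like ThrottledFetcher.FetchAsync for each endpoint instead of creating a new HttpClient per request and firing everything at once.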