In a typical Django application, performance is largely determined by the database queries your views generate. You can inspect those SQL queries with tools like django-debug-toolbar during development, or with an APM such as Elastic APM or Scout APM in production.
But even when you're using these kinds of tools, a slow query can still reach production by accident. For example, a query may be fast enough in your development or staging environment, but if your production database has an order of magnitude more rows, the same query will be much slower there.
An extremely slow query (a couple of seconds, or worse, far more) will severely limit the number of requests your application can handle. While you can set a timeout on requests in your WSGI server (such as Gunicorn), your database could still be busy executing the query in the background. And in background tasks or cron jobs, where monitoring tends to be less thorough, slow queries can strike when you least expect them.
If you're using Django with MySQL (5.7 or later), you can prevent long-running queries from bringing down your application by setting max_execution_time:
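One way to set it per connection is through the `init_command` database option, which MySQL runs each time Django opens a connection. A minimal sketch of the settings; the database name and the 2000 ms limit are placeholder values:

```python
# settings.py -- a sketch; the name and the 2000 ms limit are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "OPTIONS": {
            # Runs on every new connection. The value is in milliseconds
            # and applies to SELECT statements only.
            "init_command": "SET max_execution_time=2000",
        },
    }
}
```

Pick a limit comfortably above your slowest legitimate query, so the timeout only catches genuine runaways.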
When a SELECT query exceeds the timeout, MySQL interrupts its execution and reports an error, which Django raises as an OperationalError:

Query execution was interrupted, maximum statement execution time exceeded
In the traceback you can find the query responsible for the timeout and perform the necessary optimizations.
You could also set max_execution_time globally in MySQL, but setting it per connection like this means the limit only affects queries from your Django application, leaving you free to run more time-consuming queries manually.
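In such a manual session you can also lift the limit for that connection only. A sketch of what you might run in the mysql client (the table name is hypothetical):

```sql
-- 0 means no limit; SET SESSION affects only the current connection.
SET SESSION max_execution_time = 0;
SELECT COUNT(*) FROM some_big_table;
```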