What is a slow SQL query?
When we execute a SQL statement, the database returns not only the result set but also the statement's execution time. In MySQL, slow queries are queries whose execution time exceeds the `long_query_time` threshold (10 seconds by default); such queries are recorded in the slow query log.
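A minimal check-and-enable sketch using MySQL's slow-log system variables (the 1-second threshold below is illustrative, not a recommendation):

```sql
-- See whether the slow query log is on and where it is written
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';

-- Turn it on and lower the threshold to 1 second for the running server
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
```

Note that `SET GLOBAL long_query_time` affects new connections; existing sessions keep their session-level value.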
What impact will slow SQL have?
Slow queries degrade MySQL's performance. A small number of slow queries will delay the system's response time. As their number grows, DDL operations can be blocked, lock contention increases and may lead to data inconsistency, large amounts of memory may be occupied, and the rising system load affects the processing of other requests. With a large number of slow queries, the service can hang and the system becomes paralyzed, seriously affecting users and even the development of the business.
How to analyze slow query situation?
- Use `show processlist` to view the queries currently executing, and kill the slow SQL that has the greatest impact.
- Use `explain` to analyze how a slow SQL statement executes: which index is chosen and how effective it is, the estimated number of scanned rows, the join type, and so on.
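As a sketch of those two steps (the `orders` table and connection id 12345 are hypothetical):

```sql
-- List current connections; the Time column shows how long each statement has run
SHOW FULL PROCESSLIST;

-- Terminate the connection running a harmful slow query (12345 is a hypothetical Id)
KILL 12345;

-- Inspect the plan of a suspect statement: the key column shows the chosen index,
-- the rows column the estimated number of rows to examine
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```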
How to deal with slow queries?
Data caching
For data that changes infrequently, cache the query results and refresh the cache when the underlying table changes, reducing the number of queries.
SQL statement optimization
(1) Query only the fields you need instead of using `select *` to fetch every column.
(2) Avoid listing too many values in a `where ... in (...)` condition.
(3) Use `limit` efficiently. For example, to find the oldest person in the user table, use `select age from user order by age desc limit 1` instead of the `max` function.
(4) In an `in` subquery, put the table with the larger data volume on the left and the smaller one on the right.
(5) Keep the value types in `where` conditions consistent with the column types in the table structure; implicit type conversion can prevent index use.
(6) Don't `join` too many tables.
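A few of the rules above, sketched against a hypothetical `user` table:

```sql
-- (1) Select only the columns you need
SELECT id, name FROM user WHERE status = 1;      -- rather than SELECT *

-- (3) LIMIT instead of an aggregate over the whole result
SELECT age FROM user ORDER BY age DESC LIMIT 1;  -- rather than SELECT MAX(age)

-- (5) Keep value types consistent with column types: if phone is a VARCHAR,
-- quote the value so implicit conversion does not block the index
SELECT id FROM user WHERE phone = '13800000000'; -- not WHERE phone = 13800000000
```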
Index optimization
(1) Create indexes on columns with high discrimination (selectivity): more rows are filtered out during retrieval, fewer lookups and table returns are needed, and the index is more effective.
(2) Prefer indexing columns with small storage footprints; the index itself takes less space, and smaller data types compare faster during execution.
(3) Create indexes on columns that appear frequently in `where` conditions.
(4) Limit the number of indexes: while they speed up queries, each index adds maintenance cost to every insert, update, and delete.
(5) Follow the leftmost-prefix principle so that query conditions hit composite indexes as often as possible.
(6) Avoid situations that invalidate an index, for example:
a. Conditions on multiple fields combined with `or` require an index on each field, otherwise no index is used;
b. A `like` pattern starting with % invalidates the index, while a pattern ending with % does not;
c. Using `!=` or `<>` in a condition;
d. Applying functions to columns in `where` conditions;
e. Sorting on multiple fields in mixed directions; they should all be ascending or all descending.
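The invalidation cases above can be verified with `explain`; a sketch assuming a hypothetical `user` table with these indexes:

```sql
CREATE INDEX idx_name ON user (name);
CREATE INDEX idx_age  ON user (age);

EXPLAIN SELECT * FROM user WHERE name LIKE 'Li%';    -- prefix match: idx_name usable
EXPLAIN SELECT * FROM user WHERE name LIKE '%Li';    -- leading %: full table scan
EXPLAIN SELECT * FROM user WHERE UPPER(name) = 'LI'; -- function on column: no index
EXPLAIN SELECT * FROM user WHERE name = 'Li' OR age = 20; -- works only because
                                                          -- both columns are indexed
```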
(7) Clean up unused indexes promptly to reduce storage and maintenance overhead.
(8) Use a forced index (`force index`) when necessary.
Optimization of table structure
(1) Delete redundant columns
(2) Appropriately add intermediate tables
(3) When the data volume grows too large, split the database or shard the tables.
Upgrade hardware configuration
(1) Increase the number of CPUs and improve concurrent processing performance
(2) Increase memory and improve SQL operating efficiency
(3) Expand hard disk capacity to store more data; the amount of free space remaining also affects SQL performance.
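On the memory point, the single most influential MySQL setting is usually the InnoDB buffer pool; a sketch (the 4 GB size is illustrative):

```sql
-- How much memory is reserved for caching data and index pages
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Resize it online (supported since MySQL 5.7); 4 GB here is illustrative --
-- a common rule of thumb is around 70% of RAM on a dedicated database server
SET GLOBAL innodb_buffer_pool_size = 4294967296;
```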
In fact, optimizing database performance is a complex process. Even for a single SQL statement, the effectiveness of an index changes as the table's data volume grows. We must therefore consider the SQL statements, the data size, the database configuration, the network and service configuration, caching, and many other technical points together, and keep adjusting and optimizing as the business data evolves, in order to find the best optimization plan, reduce response time, improve service performance, and give users of the system a better experience.