SQL problem
Of these three SQL statements, the third behaves differently from the first two.

The first SQL statement left-joins tbl1 to tbl2. If tbl2 contains many records with the same id (say 10,000 of them), the intermediate result after the left join is very large, and the grouping then has to scan all of it to find the maximum, so performance is relatively poor. On the other hand, even if tbl1 contains multiple records with the same id, item, and name, the final result will contain no duplicate records.
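The original statements are not shown in the question, so the sketch below reconstructs a plausible version of this first variant (join first, then GROUP BY with MAX) against a hypothetical schema, using SQLite from Python so it can actually run. The tables tbl1(id, item, name) and tbl2(id, time) are assumptions, not the real ones.

```python
import sqlite3

# Hypothetical schema reconstructed from the discussion.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (id INTEGER, item TEXT, name TEXT);
CREATE TABLE tbl2 (id INTEGER, time TEXT);
INSERT INTO tbl1 VALUES (1, 'a', 'x'), (2, 'b', 'y');
INSERT INTO tbl2 VALUES (1, '2020-01-01'), (1, '2020-06-01'), (2, '2020-03-01');
""")

# Variant 1: join first, aggregate afterwards. Every matching tbl2 row
# is materialised before GROUP BY collapses it, so the intermediate
# result can be huge when ids repeat in tbl2.
rows = conn.execute("""
    SELECT t1.id, t1.item, t1.name, MAX(t2.time) AS latest
    FROM tbl1 t1
    LEFT JOIN tbl2 t2 ON t2.id = t1.id
    GROUP BY t1.id, t1.item, t1.name
    ORDER BY t1.id
""").fetchall()
print(rows)  # [(1, 'a', 'x', '2020-06-01'), (2, 'b', 'y', '2020-03-01')]
```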

The second SQL statement groups tbl2 first, in a subquery. If there are many records with the same id, this filters out most of the data before the left join, which improves performance. However, if tbl1 contains duplicate rows, the query result may also contain duplicate records. If tbl1 is known to be free of duplicates, this statement is the recommended one; if duplicates are possible, you can add DISTINCT to the SELECT, although that will cost some performance.
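A runnable sketch of this second variant, again against the hypothetical schema assumed above: tbl2 is pre-aggregated in a derived table before the join. A deliberate duplicate row in tbl1 shows the caveat, and DISTINCT removes it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (id INTEGER, item TEXT, name TEXT);
CREATE TABLE tbl2 (id INTEGER, time TEXT);
-- duplicate row in tbl1 to demonstrate the caveat
INSERT INTO tbl1 VALUES (1, 'a', 'x'), (1, 'a', 'x');
INSERT INTO tbl2 VALUES (1, '2020-01-01'), (1, '2020-06-01');
""")

# Variant 2: aggregate tbl2 first, then join the much smaller result.
variant2 = """
    SELECT {distinct} t1.id, t1.item, t1.name, m.latest
    FROM tbl1 t1
    LEFT JOIN (SELECT id, MAX(time) AS latest FROM tbl2 GROUP BY id) m
           ON m.id = t1.id
"""
dup = conn.execute(variant2.format(distinct="")).fetchall()
dedup = conn.execute(variant2.format(distinct="DISTINCT")).fetchall()
print(dup)    # duplicates from tbl1 survive: two identical rows
print(dedup)  # [(1, 'a', 'x', '2020-06-01')]
```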

The third SQL statement is basically incorrect: it returns a single record whose id is the maximum id, whose item is the maximum item, whose name is the maximum name, and whose time is the maximum time among the joinable rows, each maximum taken independently of the others. The resulting row need not correspond to any actual record, so this SQL statement is effectively meaningless.
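The sketch below illustrates why taking MAX of every column independently is wrong (again with the hypothetical schema assumed earlier): the single output row mixes values from different source rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (id INTEGER, item TEXT, name TEXT);
CREATE TABLE tbl2 (id INTEGER, time TEXT);
INSERT INTO tbl1 VALUES (1, 'z', 'x'), (2, 'a', 'y');
INSERT INTO tbl2 VALUES (1, '2020-06-01'), (2, '2020-01-01');
""")

# Variant 3: MAX of each column with no GROUP BY. Each maximum is
# computed independently, so the result row matches no real record:
# id comes from one row, item from another, time from a third.
row = conn.execute("""
    SELECT MAX(t1.id), MAX(t1.item), MAX(t1.name), MAX(t2.time)
    FROM tbl1 t1
    LEFT JOIN tbl2 t2 ON t2.id = t1.id
""").fetchone()
print(row)  # (2, 'z', 'y', '2020-06-01') -- a Frankenstein row
```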

To test the performance of an SQL query on Oracle, you can inspect the execution plan to see how the statement is executed, or simply measure the execution time. The details get quite involved; a book on the art of Oracle programming is recommended for the relevant explanations. (The book title in the original is garbled by machine translation: the Chinese name for Oracle was rendered literally as "Oracle Bone Inscriptions".)
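On Oracle the tools would be EXPLAIN PLAN and DBMS_XPLAN.DISPLAY; as a runnable stand-in, the sketch below uses SQLite's analogous EXPLAIN QUERY PLAN from Python to show the idea of reading a plan, e.g. checking that the join probes an index instead of scanning.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (id INTEGER, item TEXT, name TEXT);
CREATE TABLE tbl2 (id INTEGER, time TEXT);
CREATE INDEX idx_tbl2_id ON tbl2(id);
""")

# SQLite's analogue of Oracle's EXPLAIN PLAN: each row describes one
# step of the chosen execution strategy (scans, index searches, etc.).
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT t1.id, MAX(t2.time)
    FROM tbl1 t1
    LEFT JOIN tbl2 t2 ON t2.id = t1.id
    GROUP BY t1.id
""").fetchall()
for step in plan:
    print(step)
```

The exact plan text varies by SQLite version, but with the index in place the join side of the plan should report an index search on idx_tbl2_id rather than a full scan of tbl2.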

If it's SQL Server, I'm sorry, I don't know how to analyze it. But one thing holds in general: when joining, if a table is a secondary table (tbl2 above can be considered one), then greatly reducing the amount of data entering the join improves performance considerably. A large data volume not only consumes CPU time but, more importantly, may add many I/O operations, which cost a lot of time.
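The size difference described above can be made concrete with the 10,000-row scenario from earlier in the answer (hypothetical schema as before): counting the rows the join must materialise with and without pre-aggregating tbl2.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (id INTEGER, item TEXT, name TEXT);
CREATE TABLE tbl2 (id INTEGER, time TEXT);
""")
conn.execute("INSERT INTO tbl1 VALUES (1, 'a', 'x')")
# 10,000 tbl2 rows sharing one id, as in the scenario above.
conn.executemany("INSERT INTO tbl2 VALUES (1, ?)",
                 [(f"t{i:05d}",) for i in range(10000)])

# Rows the join materialises before aggregation (variant 1):
wide = conn.execute("""
    SELECT COUNT(*) FROM tbl1 t1
    LEFT JOIN tbl2 t2 ON t2.id = t1.id
""").fetchone()[0]

# Rows after pre-aggregating tbl2 in a subquery (variant 2):
narrow = conn.execute("""
    SELECT COUNT(*) FROM tbl1 t1
    LEFT JOIN (SELECT id, MAX(time) AS latest FROM tbl2 GROUP BY id) m
           ON m.id = t1.id
""").fetchone()[0]

print(wide, narrow)  # 10000 1
```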