We had performance issues cropping up in a particular part of our code when an install had a very large number of clients/users. After analyzing the code, we made a number of optimizations that we hoped would improve the situation.
To quantify whether the changes were actually having the desired effect, we needed to track performance with and without them. I wrote a test script that ran through the problem code with an increasing number of users and logged the time taken to a statistics table in a database. I ran the script a few times, against different databases, with and without the optimizations in place. Then I connected Yellowfin to the statistics table and could quickly see that the optimizations were having a significant effect.
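For illustration, here is a minimal sketch of that kind of harness in Python. The function run_problem_code(), the table name perf_stats, and the use of SQLite are all assumptions for the example; the real script ran against PostgreSQL and SQL Server.

```python
import sqlite3
import time

def run_problem_code(num_users):
    """Placeholder (hypothetical) for the code path under test."""
    ...

def main():
    conn = sqlite3.connect("stats.db")
    # Statistics table the charting tool is later pointed at.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS perf_stats (
               run_label  TEXT,     -- e.g. 'optimized' or 'baseline'
               num_users  INTEGER,  -- client/user count for this pass
               elapsed_ms REAL,     -- time taken, in milliseconds
               recorded_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    run_label = "optimized"  # flip to 'baseline' for the unoptimized build
    for num_users in (100, 1_000, 10_000, 100_000):  # increasing user counts
        start = time.perf_counter()
        run_problem_code(num_users)
        elapsed_ms = (time.perf_counter() - start) * 1000
        conn.execute(
            "INSERT INTO perf_stats (run_label, num_users, elapsed_ms)"
            " VALUES (?, ?, ?)",
            (run_label, num_users, elapsed_ms),
        )
        conn.commit()
    conn.close()

if __name__ == "__main__":
    main()
```

Running the script once per configuration (database × optimized/baseline) leaves one table that a BI tool can slice by run_label and num_users.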
In both figures, the Y-axis is the time taken to run through the code in question (msec); lower is better. The X-axis in fig 1 is the total elapsed time to run the test groups; in fig 2 it is the number of clients/users in the system.

Fig 1: Total time to create and flatten, with and without optimization.
SQL Server's performance is better than PostgreSQL's overall, but that's mostly because it's running on a more powerful machine.
This was a great way to visualize the performance improvement from optimizing our code.
Fig 2: Total time for the process, comparing PostgreSQL and SQL Server, with and without optimization.