Tuesday, September 25, 2012

Premium C++ error detection is now available for Visual Studio 2012

Just two weeks ahead of the general public release of Visual Studio 2012, the small Russia-based company OOO "Program Verification Systems" (Co Ltd) announced the availability of a new version of its flagship product, PVS-Studio, which integrates fully into VS2012. PVS-Studio is a product that performs early diagnosis of programming errors in C/C++/C++11 code.

This is the first static analysis tool for C++ available on the new platform, complementing the built-in static analysis now found in all editions of Visual Studio.

Besides VS2012 support, this new version adds a large number of new diagnostic rules to the vast set of heuristics already available. Andrey Karpov, CTO of the company, said: "Our rapid development cycle (we usually release a new version each month) allows lightning-fast delivery of heuristics for detecting new error patterns that our researchers discover. As our updates are free for all existing customers, this greatly increases the value of the product."

Considering the no-obligations trial download (you won't even be asked to enter your e-mail!), this product is worth trying for projects of any size and complexity.

Contributing back to the ecosystem

The team that created PVS-Studio is well known within C++ communities for invaluable contributions to various open-source projects. They use these projects as a playground when developing new rule sets and always report the bugs they discover, helping to make these products better.

While working on the newest version, they validated the source code of all C++ libraries (including MFC) shipped with Visual Studio 2012, and to the team's sheer surprise a few issues popped up despite the overall high quality of the code. These issues are rather typical and were easily caught by PVS-Studio. A detailed description of the issues can be found in their recent blog post, and a bug was filed on the Microsoft Connect site. Hopefully these issues will be fixed in the very next update for Visual Studio, as some of them might actually pose a stability or security threat to applications built with these libraries.

Meanwhile, the team that created PVS-Studio continues its fight to improve the quality of native code.

Thursday, September 20, 2012

The cost of freeing memory

There is a common topic of discussion among C/C++ developers: avoiding extra memory allocations to optimize an application's performance. Indeed, careless memory allocations can kill the performance of the best algorithm out there.

During one of the performance exercises I did on my project I noticed a strange hotspot at the place where memory is freed. Later on I did a brief investigation into the matter and found no further information except this short discussion on Stack Overflow: http://stackoverflow.com/questions/5625842/cost-of-memory-deallocation-and-potential-compiler-optimizations-c. It mentions that although there are no guarantees about the cost of freeing memory, developers might expect it to be no higher than the cost of memory allocation.

As there were no numbers out there, I decided to run a simple benchmark, and the results were quite surprising. When the number of allocations is small, the cost of deallocation is below the cost of allocation, but as the count approaches millions of allocations, the cost of freeing memory increases significantly (this, of course, depends on the compiler used).

Without further ado here are the numbers:

# of allocations    Visual C++ 10    Intel C++ 12.1
1K                   51.44%           46.95%
10K                  97.61%           96.55%
100K                100.13%          100.95%
1M                  106.33%          105.95%
2M                  134.35%          102.48%
3M                  157.52%          109.76%
4M                  176.04%          113.73%
5M                  193.23%          119.14%

Numbers here are the relative costs of freeing compared to the costs of allocation. For example, 50% means that deallocating memory takes half as long as allocating it.

As you can see, with the Visual C++ compiler deallocation can take almost twice as long as allocation! Intel's compiler also shows a growing deallocation cost, but the growth is much slower.

What's the practical application of this knowledge? In rare cases where memory deallocation becomes a bottleneck, it might be useful to emulate manual garbage collection: e.g., remember all blocks that are to be freed, and then, when the application is idle, do the clean-up. Not sure if any real-world application would benefit from such an optimization though :)

Another possible optimization is simply skipping the cost of freeing memory when the application is terminating: when a process exits, its memory will be reclaimed by the OS in any case. This might speed up shutdown in some cases.

For those who want to know, here is some information about the benchmark application:

1) It allocates the needed number of memory blocks via malloc() calls
2) Block sizes are random, between 1 and 1024 bytes
3) Deallocation order is also random
4) All random-number generation is taken out of the timed loops (i.e., the vectors with block sizes and the deallocation order are prepared before the memory allocation starts)