I create an array of 100 float elements, each equal to 0.1f, and sum them up using the following straightforward code:
#include <algorithm>
#include <array>
#include <iostream>
#include <numeric>

std::array<float, 100> a;
std::fill(a.begin(), a.end(), 0.1f);
std::cout << std::accumulate(a.begin(), a.end(), 0) << std::endl;
The expected result is 10. Why does this simple program print 0?
A short note on how you can switch CMake options in Qt Creator with a single click.
The compile-time integer factorization implementation described before is a good benchmark for the compilation performance of C++ template meta-programs. In the middle of the year I'd like to publish the compilation performance of four compilers in their latest versions and compare them to the initial benchmarks.
The metafactor C++ library was primarily developed for compile-time factorization of integers up to the 32-bit unsigned maximum. As a side effect, using the algorithms of this library, I could generate a large list of primes at compile time. It turns out that a list of the primes less than 65536 (16-bit unsigned integers) can be generated by the Clang 3.8 C++ compiler within 5 minutes of compilation time. Such a list is enough to check any 32-bit unsigned integer for primality, or it can be used for other purposes.
The previous article explained how to factorize an integer N at compile time using variadic templates from C++11. This article benchmarks the implementation across different compilers. The benchmarks not only probe the limits of the tested compilers, but also identify the optimal strategy for practical applications.
It is well known that C++ templates are Turing-complete, so in theory any algorithm can be implemented to execute at compile time. But what about practice? How far can currently available compilers go? In this article I'd like to explore the limits of the compilers, but first I will explain a relatively simple and suitable factorization algorithm.