Does the effort pay off? Generic programming increases code readability, robustness, and reuse, and it eases parallel programming.

The curtain down and nothing settled?

Our little tutorial has come to an end (for this time). You may rightfully ask yourself whether such simple functions as a sum or a reduction warrant such an effort. Well, first of all, I believe a tutorial should try to keep things as simple as possible (but not simpler, as Einstein said). But even for this simple case, there are many arguments for the use of generic components.

Indeed, the initial effort seems high (compared to a single simple loop, which would have done the job in most cases). But on the one hand, with some experience, many details can be solved more compactly using tools like Boost enable_if and the new features of C++11 (an advanced exercise ... or a TODO for me!). On the other hand, most of the infrastructure (like the iterator type maps) can be reused, in this case for other sequence algorithms.

Advantages of the generic approach

By the mere fact that our implementation had to be generic, we were led to a number of interesting questions that, in principle, must also be answered in a non-generic implementation, but often remain hidden and implicit (like the neutral element for a maximum computation, or a too narrow type for accumulation). We did not discuss here how to detect and handle too narrow value types. The presence of such a feature in a generic component would be a strong argument for using it instead of a hand-written loop!

Readability and robustness of application code can be increased considerably by using generic components. In general, code is easier to understand (its intent is more explicit) if it uses a function call that exhibits its purpose via its interface, instead of explicit loops with implicit purpose. So, I feel that

float mx = reduce(c.begin(), c.end(), std::max<float>);
is already clearer than the corresponding explicit loop, and shorter as well (and we can do better, see below). It is also less error prone, because one does not have to deal with border cases manually, like the correct initialisation (in the case of possibly empty sequences), which is not entirely trivial, as we have seen. And finally, the code is more robust in the presence of changes in data representation, as the generic components make far fewer assumptions about such details than a conventional implementation must.

The advantage of clarity and compactness can fade away if the necessary operators have to be assembled painstakingly. Defining thin, more specialized wrappers around the most generic versions can help:

template<class It>
typename value_type<It>::result
max(It begin, It end) { ... }
leading to shorter, clearer code:
float mx = max(c.begin(), c.end());
There is still a redundancy: the double mention of the container c (whose name can be long in practice). We can do away with this by using a different interface on top of iterators, based on ranges, as offered by Boost Range:
float mx = max(c);
which is hard to beat. Some people argue that range-based interfaces should indeed be preferred to iterator-based ones (see e.g. A. Alexandrescu's presentation Iterators Must Go at BoostCon 2009).


There is no light without dark. As you will have noticed, generic programming has a certain learning curve (which I hopefully could flatten a bit). We take a somewhat more abstract point of view: starting from a simple sum, we ended up with type maps and neutral elements. As with many things in life, this is also a matter of experience and practice.

Evidently, implementing generic components entails a certain overhead. One has to use these components repeatedly in order to reap the benefits discussed before. Thus, generic components belong in libraries, and such libraries often become possible or useful precisely because of the significant increase in generality attainable with generic programming! (A C library providing sum functions for the basic types does not exist, for good reason.) Using parallel programming as an example, we'll discuss this further.

The most frequent criticism, however, does not address generic programming per se, but rather insufficient language support:

  • The C++ template syntax is often cumbersome.
  • Typical nesting of generic code can lead to incomprehensible compiler error messages.
  • Repeated use of template functions or classes can entail additional compilation effort, compared to non-parameterized alternatives.

While these problems are annoying and can be roadblocks at times, they are of a transient nature (I know this is weak consolation when fighting with template error messages!). But if one compares the compile speed and error handling of template code today with ten years ago, one sees significant improvements. Language extensions like lambdas (in C++11) and concepts (in C++1y?) help a lot, or will hopefully do so in the not-too-distant future! Looking around, the example of the D programming language shows that template syntax for an imperative language can be both simple and powerful. So I feel optimistic that languages (including C++) will evolve to support generic programming as well as other paradigms.

Now, let's look at a very hot use case.

Parallel Programming and GP

As you have certainly guessed, there is more to a simple reduce. If we think about parallel or vectorized versions, things start to lose their simplicity. The appeal of such versions is that the user does not have to see or know about the parallel implementation, which is of course highly attractive. It is indeed generic programming that makes such solutions practical at all, because otherwise we would need a new implementation, full of technical details, for each minor (or not so minor) change of the parameters.

Thus, we can essentially decouple the choice of the parallel framework from the application code. We can test parallel code separately, even before the application proper is written. And often we can reuse those complex algorithmic components, because they are already tested! Being able to concentrate on the application specifics makes its design much clearer and simpler.

And finally …

Besides parallelization, there are of course many more applications of generic programming to more complex algorithms just waiting to be realized. Performance aspects need to be discussed, and I have a lot of ideas for further uses of generic programming in the back of my head. So stay tuned ... and in the meantime, consider the following: