A new textbook titled "Kernelization: Theory of Parameterized Preprocessing", co-authored by our very own Daniel Lokshtanov together with Fedor V. Fomin, Saket Saurabh and Meirav Zehavi, has now been published by Cambridge University Press.

The topic of the book is Kernelization, or rather, how to teach computers what to forget. We are often faced with a situation where we have a lot of data, perhaps too much! If we know the purpose for which the data will be used, this leads to a natural notion of irrelevant data - parts that can be removed without changing the outcome of using the data for the prescribed purpose. Kernelization is the study of how this irrelevant data can be identified and stripped away, leaving the core (or kernel!) intact.

The textbook is the result of almost 10 years of intense procrastination and is aimed at graduate students and researchers. Each chapter comes with exercises, making the book a suitable source for a graduate-level course.
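To make the idea concrete, here is a minimal sketch (not taken from the book's text) of one of the best-known kernelizations: Buss's rule for the Vertex Cover problem, which asks whether a graph has at most k vertices touching every edge. The function name and representation are illustrative.

```python
def vertex_cover_kernel(edges, k):
    """Shrink a Vertex Cover instance (edges, k) to an equivalent kernel.

    Rule applied exhaustively: a vertex of degree > k must belong to
    every cover of size <= k, so take it, delete its edges, and
    decrement k. Isolated vertices are irrelevant and disappear
    automatically, since we only store edges.

    Returns (kernel_edges, k') or None if the instance has no solution.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:  # v is forced into any small cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    # Every remaining vertex has degree <= k, so k vertices can cover
    # at most k*k edges: more edges than that means a no-instance.
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k
```

For example, `vertex_cover_kernel([(1, 2), (1, 3), (1, 4)], 1)` forces vertex 1 into the cover and returns an empty kernel, a trivial yes-instance. Whatever remains after the rules fire has size bounded by a function of k alone, which is exactly the guarantee the book's theory is about.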