Abstract:
Compressive Sensing is a research area that has developed intensively over the last few years. It is used in applications where the number of samples is significantly lower than that required by the Nyquist-Shannon sampling theorem. The samples should be taken randomly, and the signal has to be sparse in one of its domains. If these and some additional constraints are satisfied, the signal can be accurately reconstructed even when a significant number of samples is missing. The algorithms used for signal reconstruction can generally be divided into two groups: convex optimisation algorithms and greedy algorithms. It is important to note that the missing samples may result from the sampling strategy itself or may appear as a consequence of discarding impaired signal samples. The latter case may originate from applying L-estimation based robust statistics: after L-estimation, the original signal is reduced to a noise-free, randomly positioned set of samples. This is the point where Compressive Sensing and robust estimation theory complement each other. Thus, by combining these two areas, we are able to denoise signals corrupted by heavy-tailed noise, thereby obtaining an ideally filtered signal. Clearly, the analysis performed within robust statistics theory can be very useful and directly applicable to Compressive Sensing. The idea of this Special Issue was to bring attention to these complementary areas and to draw on the benefits of the comprehensive studies of robust statistics, which preceded Compressive Sensing theory and were intensively pursued fifteen years earlier.
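As a purely illustrative sketch of this complementarity (not taken from any paper in this Special Issue), the following NumPy example corrupts a signal that is sparse in the frequency domain with impulsive, heavy-tailed noise, applies an L-estimation style trimming step that discards the largest-magnitude samples, and then reconstructs the signal from the surviving, randomly positioned samples using Orthogonal Matching Pursuit as a representative greedy algorithm. All parameter choices (signal length, sparsity, impulse count, trimming fraction) are arbitrary assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 128, 3                          # signal length and sparsity (assumed)
n = np.arange(N)

# Test signal, sparse in the frequency domain: a sum of K cosines.
ks = np.arange(1, N // 2)              # candidate frequencies (skip DC)
freqs = rng.choice(ks, size=K, replace=False)
x = sum(np.cos(2 * np.pi * f * n / N) for f in freqs)

# Heavy-tailed noise model: large impulses at 20 random positions.
y = x.copy()
pos = rng.choice(N, size=20, replace=False)
y[pos] += 50 * rng.standard_normal(20)

# L-estimation step: sort samples by magnitude and discard the largest
# 25%, which contains (almost all of) the impulses. The survivors form
# a randomly positioned, nearly noise-free set of measurements.
keep = np.sort(np.argsort(np.abs(y))[: int(0.75 * N)])
b = y[keep]

# Dictionary of cosines and sines, restricted to the kept positions.
A = np.hstack([np.cos(2 * np.pi * np.outer(n, ks) / N),
               np.sin(2 * np.pi * np.outer(n, ks) / N)])
A_sub = A[keep]
norms = np.linalg.norm(A_sub, axis=0)  # for normalized atom selection

# Orthogonal Matching Pursuit: greedily pick the atom most correlated
# with the residual, then re-fit all picked atoms by least squares.
residual, support = b.copy(), []
for _ in range(2 * K):                 # a few spare iterations for safety
    corr = (A_sub.T @ residual) / norms
    corr[support] = 0                  # never reselect an atom
    support.append(int(np.argmax(np.abs(corr))))
    coef, *_ = np.linalg.lstsq(A_sub[:, support], b, rcond=None)
    residual = b - A_sub[:, support] @ coef

x_hat = A[:, support] @ coef           # full-length reconstruction
print("reconstruction MSE:", np.mean((x_hat - x) ** 2))
```

The printed mean squared error gives a quick check of how well this toy pipeline recovers the tones from the trimmed sample set; the L-estimation and Compressive Sensing reconstruction schemes treated in the contributed papers are, of course, considerably more refined.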