Abstract: A well-known fact in the field of text compression is that entropy is a weak model when the input contains repetitions. To address this, decades of research have produced a myriad of so-called dictionary compressors. In this talk I will show that these techniques are different solutions to the same elegant combinatorial problem: finding a small set of positions - a "string attractor" - capturing all of the text's substrings. I will first show reductions between string attractors and dictionary compressors, indicating that these objects lie at the core of dictionary compression. One of these reductions will then be used to obtain an optimal-time random-access data structure based on string attractors. Using similar ideas, I will show that universal compressed text indexing can also be achieved on top of these combinatorial objects. Finally, I will sketch the NP-completeness proof of the smallest-attractor problem and present several open algorithmic challenges.
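To make the central definition concrete: a set of positions Γ is a string attractor of a text T if every substring of T has at least one occurrence that spans (crosses) a position in Γ. The following is a minimal brute-force verifier sketching that definition; the function name and interface are my own, and no attempt is made at efficiency (the talk's results concern much more refined algorithms).

```python
def is_string_attractor(text, gamma):
    """Brute-force check that gamma (a set of 0-based positions into text)
    is a string attractor: every distinct substring of text must have at
    least one occurrence whose interval contains a position of gamma."""
    n = len(text)
    gamma = set(gamma)
    for length in range(1, n + 1):
        # enumerate the distinct substrings of this length
        for sub in {text[i:i + length] for i in range(n - length + 1)}:
            # does some occurrence of sub cross an attractor position?
            covered = any(
                any(s <= p < s + length for p in gamma)
                for s in range(n - length + 1)
                if text[s:s + length] == sub
            )
            if not covered:
                return False
    return True
```

For example, on the text "abab" the position set {1, 2} is a string attractor (every substring has an occurrence touching index 1 or 2), while {0} is not, since no occurrence of "b" contains position 0.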