Computer architectures are becoming increasingly complicated to meet applications' continuously growing demands for performance, security, and sustainability. The design and engineering space of architectural components and policies involves many factors, and it is not intuitive how these factors interact with each other or how they affect architectural behavior. Automatically finding the best architecture for a specific application and set of requirements is even more challenging. Meanwhile, architecture design must cope with growing non-determinism from lower-level technologies. Emerging technologies inherently exhibit statistical properties, such as the wearout phenomenon in NEMs, PCM, ReRAM, etc. Due to manufacturing and process variations, there is also variability among different devices and within the same device (e.g., different cells on the same memory chip). Hence, to better understand and control architectural behavior, we introduce a statistical perspective on architecture design: by specifying the architectural design goals and the desired statistical properties, we guide the architecture design with these statistical properties and exploit a series of techniques to achieve them.
We explore the statistical architecture design methodology in three cases: 1) Herniated Hash Tables: Exploiting Multi-Level Phase Change Memory for In-Place Data Expansion; 2) Lemonade from Lemons: Harnessing Device Wearout to Create Limited-Use Security Architectures; 3) Memory Cocktail Therapy: A General Learning-Based Framework to Optimize Dynamic Trade-offs in NVMs. In these three works, we introduce techniques such as address mapping, prefetching, redundant encoding, regression, and regularization to control system behavior and achieve the desired statistical properties.
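To illustrate the flavor of the regression and regularization techniques mentioned above, the following is a minimal, self-contained sketch (not the actual framework's implementation): a one-feature ridge regression in closed form, fitting a hypothetical architectural metric (e.g., observed latency) against a single tuning knob. The data values, function name, and knob are all illustrative assumptions; the point is only that the regularization term shrinks the fitted weight, trading a little bias for robustness to noisy measurements.

```python
# Illustrative sketch, not the authors' code. Ridge regression with a single
# feature and no intercept has the closed-form solution
#   w = (sum x*y) / (sum x^2 + lam),
# where lam is the L2 regularization strength.

def ridge_fit_1d(xs, ys, lam):
    """Fit y ~ w * x by ridge regression (closed form, no intercept)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Hypothetical measurements: knob setting -> observed latency.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

w_plain = ridge_fit_1d(xs, ys, lam=0.0)  # ordinary least squares (lam = 0)
w_ridge = ridge_fit_1d(xs, ys, lam=5.0)  # regularization shrinks the weight

print(w_plain, w_ridge)
```

In a realistic setting the model would have many architectural features and the regularization strength would be chosen by cross-validation; the shrinkage behavior shown here carries over directly.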