OVERVIEW


The performance/power-efficiency wall is the major challenge facing HPC today. At the heart of the problem, the hurdle to the full exploitation of today's computing technologies ultimately lies in the gap between the applications' demands and the underlying computing architecture: the closer the computing system matches the structure of the application, the more efficiently the available computing power is exploited. Consequently, enabling a deeper customization of architectures to applications is the main pathway towards computational power efficiency.

The MANGO project will build on this consideration and will set inherent architecture-level support for application-based customization as one of its underlying pillars. In addition to mere performance and power-efficiency, it is of paramount importance to meet new nonfunctional requirements posed by emerging classes of applications. In particular, a growing number of HPC applications demand some form of time-predictability, or more generally Quality-of-Service (QoS), particularly in those scenarios where correctness depends on both performance and timing requirements and the failure to meet either of them is critical. Examples of such time-critical applications include:

  • online video transcoding: the server-side, on-the-fly conversion of video content, which involves very computation-intensive operations on huge amounts of data to be performed within near real-time deadlines.
  • medical imaging: characterized by both stringent low-latency requirements and massive computational demand.
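To make the notion of a near real-time deadline concrete, the sketch below measures a simple QoS metric: how many frames in a processing loop exceed a per-frame time budget. All names and numbers here are illustrative assumptions, not part of the MANGO project; `transcode_frame` is a hypothetical stand-in for a computation-intensive transcoding step.

```python
import time

# Hypothetical soft deadline: one frame every 1/30 s (30 fps playback)
FRAME_DEADLINE_S = 1 / 30

def transcode_frame(frame):
    """Placeholder for a computation-intensive transcoding step."""
    return frame[::-1]

def count_deadline_misses(frames, deadline_s=FRAME_DEADLINE_S):
    """Process each frame and count how many exceed the soft deadline.

    The miss count is a simple QoS metric: a hard real-time system would
    require it to be zero, while a best-effort HPC system ignores it.
    """
    misses = 0
    for frame in frames:
        start = time.perf_counter()
        transcode_frame(frame)
        elapsed = time.perf_counter() - start
        if elapsed > deadline_s:
            misses += 1
    return misses

frames = [b"frame-data" * 1000 for _ in range(10)]
print(count_deadline_misses(frames))
```

The point of the sketch is that correctness here is two-dimensional: the output must be right *and* each `elapsed` must stay within `deadline_s`, which is exactly the kind of constraint traditional "the faster, the better" HPC metrics do not capture.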

Time-predictability and QoS, unfortunately, are a relatively unexplored area in HPC. While traditional HPC systems are based on a "the faster, the better" principle, real-time behavior is a feature typically found in systems used for mission-critical applications, where timing constraints usually prevail over performance requirements. In such scenarios, the most straightforward way of ensuring isolation and time-predictability is resource overprovisioning, which is in striking contrast with power/performance optimization.


In fact, predictability, power, and performance appear to be three inherently diverging perspectives on HPC. We collectively refer to this range of tradeoffs, captured in the Figure above, as the PPP space. The combined optimization of PPP figures is made even more challenging by new delivery models, such as outsourced and cloud-based HPC, which are dramatically widening the amount and the type of HPC demand. Cloud computing enables resource usage and business model flexibility, but it inherently requires virtualization and large-scale capacity computing support, where many unrelated, competing applications with very different workloads are served concurrently.