Schedule (recorded talks):
10:00 - 10:15am ET Opening remarks Organizers
10:15 - 10:45am ET Quantum Computer Systems Research: Origins and Surprises Over the Last 20 Years
Fred Chong
University of Chicago

Recent advances in quantum computing hardware have created substantial excitement about the potential of these technologies. Yet there remains both tremendous opportunity and a shortfall in efforts that take a systems view of quantum computing. In this talk, I will describe how the systems perspective has evolved over the last 20 years, covering some of the opportunities, successes, and failures that have occurred along the way.

10:45 - 11:15am ET Deep Learning for System Security and Privacy: The Obstacles Ahead of Us
Ramon Canal
Universitat Politècnica de Catalunya/Barcelona Supercomputing Center

The upsurge of connected devices, applications, and services has raised important cybersecurity challenges due to the exponential increase in attacks and their sophistication. These attacks cast a shadow on the adoption of smart mobile applications and services, a problem amplified by the large number of different platforms that provide data, storage, computation, and application services to end users. All of this makes system security complex and challenging. Deep learning is now at the core of our cybersecurity mechanisms. In this talk, we will review the main DL techniques in use and identify where they are useful and where they are not. Then, we will identify the obstacles we face when pushing for better cybersecurity detection and protection mechanisms for infrastructure (i.e., the network level), software (i.e., malware), and privacy (i.e., privacy preservation).

11:15 - 11:45am ET Datacenter Research in Academia: Challenges and Opportunities
Christina Delimitrou
Cornell University

Cloud systems have become increasingly prevalent, which has motivated a wealth of research directions targeting both their hardware and software design. Not having access to production applications places academic research at a disadvantage in demonstrating the representativeness of its results. In this talk, I will discuss the challenges and opportunities that datacenter research in academia presents, and some ways to address the lack of production applications and systems for some of the newer cloud paradigms, such as microservices, serverless compute, and hardware acceleration.

11:45am - 12:00pm ET Hardware Prefetcher Aggressiveness Controllers: Do We Need Them All the Time?
Anuj Mishra
IIT Kanpur

Hardware prefetching is a well-known latency-hiding technique for improving performance. A hardware prefetcher predicts future memory references and brings data into the cache before the processor demands it. In many-core systems, however, prefetchers can increase contention for shared resources such as DRAM bandwidth, degrading overall system performance. Research on controlling prefetcher aggressiveness (i.e., prefetch degree and distance) has focused on heterogeneous workloads containing a mix of prefetch-friendly applications (those that benefit from prefetching) and prefetch-unfriendly ones (those that do not). However, a large number of server workloads are multi-threaded and homogeneous in nature. We show that prefetch aggressiveness controllers that perform well on heterogeneous workloads by classifying and grouping applications provide only marginal utility on homogeneous workloads. Our findings show that grouping or clustering mechanisms are not needed for controlling aggressiveness; instead, we make a case for a simple online profiler that decides whether to keep prefetchers ON or OFF.

12:00 - 12:30pm ET Coffee/Meal Break
12:30 - 1:00pm ET How to Develop a Bad Research Tool
Jason Lowe-Power
UC Davis

In the style of David Patterson's "How to Give a Bad Talk" and "How to Have a Bad Career", this talk will discuss how to develop bad tools. Software artifacts such as prototype systems, simulators, RTL designs, and data analysis tools are paramount to computer systems research. In some cases, the tools developed for a project can have more impact than research papers. This impact comes not from the inherent beauty or cleverness of the artifact, but from other people using and building on it. In this talk, I will draw on my own experience developing bad tools (and good tools!) and look at some of the unsuccessful (and successful!) examples from the community.

1:00 - 1:30pm ET The Hardware Lottery
Sara Hooker
Google Brain

The hardware lottery describes when a research idea wins because it is suited to the available software and hardware, not because the idea is superior to alternative research directions. Examples from early computer science history illustrate how hardware lotteries can delay research progress by casting successful ideas as failures. These lessons are particularly salient given the advent of domain-specialized hardware, which makes it increasingly costly to stray off the beaten path of research ideas.

1:30 - 2:00pm ET Searching for Bugs in Real Computer Systems
Caroline Trippel
Stanford University

In this talk, I will give an overview of my PhD experience of finding bugs in real computer systems. The key takeaway from this talk is that many impactful research ideas lie at the intersection of disciplines. In particular, my early dissertation research focused on applying formal analysis techniques that had proven useful in reasoning about parallel program correctness to reason about the correctness of hardware-level concurrency. This work identified a series of deficiencies in the 2016 RISC-V ISA's memory consistency model (MCM) specification and discovered bugs in some commercial compilers. Later, in an effort to further motivate the importance of hardware correctness, my work made the observation that the same sorts of techniques we were using to efficiently verify the correctness of processor concurrency could be used to evaluate processor security. From this observation, we designed a hardware security verification tool that identified two new variants of the now-famous Meltdown and Spectre attacks that impacted Intel processors.

2:00 - 2:30pm ET The End.

Bringing historical perspectives to NOPE

At MICRO 2019, Lynn Conway gave a keynote describing her personal story and the "techno-social" factors that led to her contributions being undervalued and unseen. Inspired by her perspective and call to action, we aim to rebrand the aims and goals of NOPE to be more inclusive of historical perspectives and descriptions of how technical concepts came to be commonplace.

NOPE will remain a place for open, honest post-mortems of research projects that ran into unexpected limitations and resulted in lessons learned. In addition, it will offer a venue to discuss contributions that have been underappreciated or misconstrued over time. In this way, the goals of NOPE are to reflect on negative outcomes, shed light on the origin of ideas, and offer a venue to revise our understanding and uncover opportunities to move forward by reflecting on mistakes that can be made throughout the research process.

Why do we need to talk about negative results?

Not all research projects end with positive results. Sometimes ideas that sound enticing at first run into unexpected complexity or high overheads, or turn out to be simply infeasible. Such projects often end up in the proverbial researcher's drawer, and the community as a whole remains unaware of dead-end or hard-to-advance research directions. NOPE is a venue that encourages publishing such results in all their "badness".

What is a "good failure"?

The best negative results help us learn from our mistakes. They can illuminate hidden obstacles or demonstrate why we need a change of course. An ideal submission to NOPE has a novel idea that sounds plausible from first principles or design intuition, but yields little to no improvement (in performance, power, area, …) in practice. The paper drills down into the reasons for the lack of improvement and proposes a plausible explanation, such as shifting technology trends or unexpected implementation complexity.

Prior NOPE Workshops

NOPE 2020 Lausanne, Switzerland
NOPE 2019 Providence, RI
NOPE 2017 Cambridge, MA
NOPE 2016 Taipei, Taiwan
NOPE 2015 Waikiki, HI
(with MICRO 2015-2017)

Call for Papers

Our goal is to find papers that the community can learn from but that might otherwise have trouble finding a suitable venue, so we take a broad view of what constitutes a "negative" result.

We invite submissions from all sub-areas of systems and computer architecture. Submissions should focus on discussing historical perspectives, uncovering a misunderstood or misappropriated technical concept, or analyzing the reasons for failure, especially in light of underlying assumptions. Submissions based on opinion, speculation, or non-fundamental circumstances ("there was a bug in the simulator") are discouraged, as they do not provide concrete evidence as to whether an idea is bad. Topics of interest include:

Important dates

Paper submission: March 25, 2021
Author notification: March 29, 2021
Camera-ready version: April 8, 2021

Organizers


Lillie Pentecost, Harvard
Udit Gupta, Harvard
David Brooks, Harvard
Brandon Reagen, NYU / Facebook
Svilen Kanev, Google
Bob Adolf, Harvard

Program Committee

Chris Batten, Cornell
Luis Ceze, University of Washington
Tipp Moseley, Google
Thomas Wenisch, University of Michigan


Questions? Send us an email.

Submission Guidelines

We believe in substance over style, and we encourage authors to prioritize delivering their message over conforming to a particular template. That being said, we anticipate papers will be 2 pages, and we encourage authors to use a two-column format. Papers need not be anonymized.

Additionally, we ask that you include a short, one-paragraph abstract in your submission email. It should be suitable for inclusion on the NOPE website and in program handouts.

Authors of accepted submissions are expected to prepare a poster or short presentation to foster discussion.


Please submit your papers via email to: by 5:00pm ET on the deadline.