Thinking about the future is important for taking effective action in the present. While futures thinking by specialists and elites can be useful, it risks not taking account of the knowledge and values of the public. The field of participatory futures aims to correct this by developing democratic and inclusive processes for people to explore and develop the futures they want.

With this goal in mind, Nesta have been exploring the idea of participatory futures, and have collected many examples of how it can be done. A report currently being developed will push this further. It will clarify what participatory futures is and share available best practice and methods. We have done some initial thinking in this area as well and would like to contribute our findings to this work. In this post, we will outline a series of observed trends that are relevant to participatory futures, propose a way of categorising different methods depending on what one is trying to achieve, and share some future lines of inquiry.

Political and social trends provide new opportunities for the use of participatory methods, and new technologies offer new ways of participating. Digital tools can help scale participatory futures across large populations and can enable access to rich, interactive visions of the future.

Through our initial research, we came across the following interesting trends in participatory futures.

Collective intelligence

The field of collective intelligence could provide new ways of doing participatory futures that combine the capabilities of groups of people with machines. Emerging technologies such as machine learning are making this increasingly feasible. An example is Climate CoLab, an open problem-solving platform from MIT aimed at exploring and solving complex problems.

Example methods: hybrid forecasting, collaborative argument-mapping software.

Participatory governance

Movements around participatory local governance are gaining prominence, and many are using digital technology to support their work. The municipalist movement, for example, seeks to build bottom-up forms of governance using participatory methods, and participatory budgeting projects in Paris, Madrid, and Mexico City have used digital tools. One such tool is Empatia, which provides an environment for testing participatory systems.

Example methods: citizens' assemblies, participatory budgeting.

Immersive experiences

There is a strand of futures work that puts people in immersive environments so that they can experience the future and use that experience as a stimulus for thought. Emerging technologies such as virtual and augmented reality (VR and AR) are making these experiences much more immersive and can support more constructive discussions about the future. For example, VR and AR have been used to facilitate participatory urban planning decisions. Games can also add immersion. For example, IMPACT is a game where participants play different roles in the future and see how future changes could impact those roles. The Block by Block project uses the Minecraft game as a space for children to participate in designing their environment.

Example methods: serious games, speculative design, VR-enabled participatory urban planning.

Creative activism

There has been a trend towards using creative methods in activism. Not all of this is futures-focussed, but some is. For example, temporary autonomous zones such as Burning Man or Freetown Christiania in Copenhagen provide an enclave for a new way of living without having to change the whole of society.

Example methods: temporary autonomous zones, prefigurative intervention, legislative theatre.

Focusing participation on neglected voices

Although all participatory futures methods aim to widen participation, some are particularly focussed on including people that tend to be neglected in discussions about the future. For example, MH:2K involves young people in mental health work as citizen researchers. Similarly, the Guardian’s Gene Gap project involves five different UK communities to help identify different stories to tell about gene editing. Afrofuturism uses science fiction to imagine and explore science, technology, and cultures of the future from the perspectives of the African diaspora.

Example methods: citizen journalism, citizen science, participatory international development.

What types of participatory futures methods are there?

The abundance of different methods for engaging people in conversations about the future makes choosing an appropriate method challenging – where to begin? You could start by asking yourself two questions: Which type of question are you asking about the future? And which actors will be driving the process?

Type of question:

  • Predictive: asks "What kind of future can we expect?" Example outputs: predictions, scenarios, trends.
  • Value-based: asks "What kind of future do we want?" Example outputs: values, visions, ideologies, speculative designs.
  • Strategic: asks "How can we get the future we want?" Example outputs: plans, strategies.

Driving actors:

  • Top-down: initiated by traditional authorities (e.g. local governments), with control remaining with the initiating authority.
  • Bottom-up: initiated and controlled by members of the public.

Together, these two variables form a framework in which we can place methods.

  • Top-down, predictive: forecasting competitions, crowdsourcing platforms.
  • Top-down, value-based: speculative design, citizens' assemblies, 21st century town meetings.
  • Top-down, strategic: participatory backcasting.
  • Bottom-up, predictive: betting markets.
  • Bottom-up, value-based: temporary autonomous zones, prefigurative politics.
  • Bottom-up, strategic: legislative theatre, online petitions.
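For readers who prefer code, the two-variable framework above can also be sketched as a simple lookup table. This is purely illustrative: the `suggest_methods` helper is hypothetical, not a tool we have built, and the cell placements follow the grid above.

```python
# The 2x3 framework as a lookup table:
# (driving actors, question type) -> example methods.
# Placements are illustrative, following the grid in the text above.
FRAMEWORK = {
    ("top-down", "predictive"): ["forecasting competitions", "crowdsourcing platforms"],
    ("top-down", "value-based"): ["speculative design", "citizens' assemblies",
                                  "21st century town meetings"],
    ("top-down", "strategic"): ["participatory backcasting"],
    ("bottom-up", "predictive"): ["betting markets"],
    ("bottom-up", "value-based"): ["temporary autonomous zones", "prefigurative politics"],
    ("bottom-up", "strategic"): ["legislative theatre", "online petitions"],
}

def suggest_methods(actors, question_type):
    """Return candidate methods for a given combination of driving actors and question type."""
    return FRAMEWORK.get((actors, question_type), [])
```

For example, `suggest_methods("top-down", "strategic")` would point towards participatory backcasting.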

In addition to the type of question and driving actors that form these categories, there are several other variables that it might be useful to consider:

  1. What are participants contributing? People can contribute a wide variety of inputs, such as predictions, observations, knowledge, values, goals, preferences, concerns, theories, visions, or framings.
  2. Design of the process. How will people be brought together to think about the future? This includes how participants are selected, how they participate and contribute, how they are coordinated, what the output of the process is, and how that output is used.
  3. Practical considerations. There are also practical variables such as: money and time cost of running the process, time and energy required from participants, knowledge requirements for participation, political complexity of the topic.

Where next?

This post summarises some initial ideas based on a small amount of research; more in-depth research will challenge and refine them. Further work could also explore:

  • What can we learn from the long history of participatory methods more broadly?
  • What outcomes do we want from participatory futures, and how do we measure them and build up an evidence base?
  • How do we ensure that participatory futures methods are genuinely participatory, and are not co-opted by powerful groups and individuals?
  • Which futures tasks are most suited to participatory methods, vs expert methods? How can experts and the public work together most effectively?

Interested in learning more about participatory futures? You could start by checking out Participedia, a repository of participatory projects and methods. Beautiful Trouble similarly presents a database of creative activism techniques. Involve’s participation knowledge base has a wealth of information related to participatory methods. And finally, we’ve made our own research spreadsheet available for you to download and modify as you wish.

Getting a better understanding of participatory futures methods is an important part of the wider project of democratising futures thinking. We’re glad that Nesta is pushing this field forward and are excited to see further work in this area.

We helped the Humanitarian Innovation Fund translate research into actionable next steps for the humanitarian sector.

Research often leads to piles of information that are hard to act on. If you write this information up without synthesising and communicating it effectively, you will end up with an ineffective report. Because of this, we focus intensively on synthesis and communication in all of our work.

We recently did this kind of synthesis work for Elrha’s Humanitarian Innovation Fund (HIF). They support organisations developing innovations in humanitarian assistance, and they’ve noticed that it’s often difficult to scale these innovations. They wanted to write a report to help the humanitarian sector understand why scaling is difficult and take action to enable it. We helped them translate their experience and research findings into a set of clear and actionable challenges for the humanitarian sector.

Using challenges to structure thinking

We structured the report around challenges because they are a good way to stimulate action. Challenges are brief statements of a problem, the reasons for the problem, and how it might be solved. They help the reader quickly understand the situation and provide focus for a community of practitioners.

We based our challenges on research that had identified barriers to scale and recommendations for the sector. This research drew on the HIF’s experience in helping innovators scale their projects and on research carried out by Spring Impact, who are experts in scaling social innovation. We analysed this research and proposed a set of challenges and a structure for the report that we refined with the HIF team.

Five key challenges stood out:

  • Too few humanitarian innovations are geared to scale
  • The humanitarian sector has insufficient embedded knowledge and skills for supporting innovations to scale
  • There is a lack of appropriate and adequate funding for scaling innovation in the sector
  • There is insufficient evidence of the impact of humanitarian innovations
  • The humanitarian ecosystem significantly frustrates efforts to scale humanitarian innovation

We developed the following structure to describe each challenge:

  • Barriers: What is causing the challenge and what are the consequences?
  • Current activity: What is the humanitarian sector currently doing about this challenge?
  • Calls to action: What do different humanitarian actors need to do at both an operational and systemic level to address this challenge?
  • Questions for the sector to consider: A series of provocations to encourage the sector to think differently about the challenge.

This structure gives humanitarian actors an understanding of the challenge, provides detail on what’s causing it, and gets them thinking about how they can solve it.

Opening up conversations

It might seem trivial, but something as simple as how research or insights are framed can shape the kind of conversations they enable. Identifying limitations and barriers is important, but advancing informed proposals on what needs to happen to address them can generate much more meaningful conversations.

This report represented an opportunity for the HIF to reflect on their work and consolidate their position as a leader in humanitarian innovation. By articulating concrete challenges and next steps for the sector, they now have a valuable tool they can use to work with stakeholders to unlock the systemic change needed to help innovations to scale.

To learn more, read the full report: Too tough to scale? Challenges to scaling innovation in the humanitarian sector.

How rapid prototyping can be used to accelerate life science software development

Building life science software products isn’t trivial. This blog post focuses on how our BioDesign team use prototyping to help kick off the process.

We often work on projects in their early stages when there is no user interface, or even before any coding has begun. At this point, there may only be a list of scientific and technical requirements, a founders’ broad vision for a product, or simply an initial hypothesis that has never been validated. Our response to this is to produce a prototype as quickly as possible, to turn an idea into something tangible.

This is part of our design-led approach, and we do it for a variety of reasons. Prototypes can be used to understand what people want from a software tool in user research interviews, or to explain the purpose of a new product as part of an investor pitch deck. Sometimes, these can be used as a tool to encourage internal discussion and to help create alignment in a team.

We prototyped a new tool for repurposing existing drugs at the BioDataHack 2018 hackathon at the Wellcome Sanger Institute, and were part of the winning team for the OpenTargets challenge.

In general, a prototype is a stimulus: a talking point for a conversation. Because prototypes are quick and cheap to produce, they are a great way to explore the possible workflows, features, and interactions of a new software product. This is particularly relevant to scientific software, where design patterns are not yet fully established. By trialling options before committing to code, it is possible to save time and money by avoiding building products that people don’t understand or need.

The prototypes that we initially create are ordinarily mockups made in user interface (UI) design software such as Sketch. These are essentially drawings of a UI, and can be made dynamic using additional software that mimics interactivity. We use Marvel, which is essentially an online slide show of static screens linked by hotspot buttons. These can be made to feel very similar to a real interface, in a fraction of the time it takes to develop interfaces in code.

Project TrackBook is a version-controlled, shareable lab notebook. In just under 24 hours we designed, built, and filmed this prototype, and conducted initial user research at the eLife Innovation Sprint 2018.

At BioDesign, we use prototyping to…

Understand scientific users

As life science software becomes more sophisticated and widespread, its range of users broadens. For example, users of genetic analysis software can range from clinical geneticists to doctors and patients.

We use prototypes in interviews with users as a stimulus. Showing people something tangible helps focus conversations on the research question, and lessens the possibility of people misunderstanding what you are trying to do. For scientific products this allows you to understand the right level and type of scientific detail to include, and the context of use for your product. This insight is then used to help determine which features to build, and how workflows should be structured to match scientific work patterns.

Communicate an idea

Scientific software products are, by definition, based on scientific concepts, which can be complex for newcomers to understand. This can make it challenging to attract customers or investment in the early stages of a new product. One way to clearly communicate the potential utility of a new tool is through a prototype. For example, a prototype can be used to show a use case for the tool, helping people to ground the concept in real working practices.

Build alignment

It is not unusual for scientific products to have multiple stakeholders: universities, funding organisations, non-profits, for-profits… A prototype represents a first attempt at a consensus view of a product, and will therefore highlight areas in which people are not in agreement. By collating stakeholder feedback on the prototype, we can facilitate productive discussions about differences in ideas, and a path to alignment on a single vision for the product.

We think that rapid prototyping is one of the most useful design approaches that is not yet widespread in the scientific sector. As scientific software tools move to become software products, methods such as prototyping can allow these to be more focused on user needs and ultimately be more useful for people.

BioDesign team member Simon recently joined around 150 participants at the Wellcome Trust BioData Hackathon. Focused on finding novel ways to use biological data to improve healthcare, teams had 2 days to design, develop and present their solutions. We were thrilled to be a part of the winning team for the Open Targets challenge to identify drugs with the potential to be repurposed.

The Wellcome Trust’s Genome Campus is always an enjoyable place to visit. This time I was there for the BioData Hackathon, alongside around 150 participants with backgrounds in statistics, bioinformatics, genomics, medicine, design, entrepreneurship and more. We’d all come together to come up with new ways to use biological data to improve health outcomes.

Choosing a challenge

Although tempted by all of the challenges, I chose the one hosted by Open Targets, “How can we predict opportunities to repurpose drugs to treat unmet patient need?”, as the intention was not only to come up with novel data analysis methods, but also to consider who might use the idea and in what context.

The focus of this challenge was the potential for existing drugs to be used as treatments for new symptoms and diseases. As is widely documented, the process of developing new drugs is both incredibly expensive and very likely to fail. Drugs that are safe and effective are a scarce resource.

I quickly joined up with a fantastic team (Elodie, Ken, Joni, Rebecca, and Robert) with a variety of skills in bioinformatics, statistics, R, and Python. Robert acted as a great project manager, making sure people knew what their jobs were and that we were pulling together for one aim.

Who needs to repurpose drugs?

My first job was to understand the possible uses for a tool that could help identify existing drugs with the potential to be repurposed. Mentors for each of the challenges were available to provide guidance. Andrew Hercules, UX designer at Open Targets, proved a great help, sharing his own insights from previous research on the possible uses for a drug repurposing tool.

We also took inspiration from the opening talks, in particular, Gemma Chandratillake’s talk on the importance of improving diagnoses and therapies for rare disease.

After discussing a few different angles, we decided to pursue the idea of developing a tool that would help clinical researchers identify possible treatments for patients with unusual sets of symptoms. These could be used in research projects, or possibly to help inform prescriptions (although this would require significant validation).

Matching drugs to symptoms

The idea behind the tool was to separate diseases into their constituent symptoms, and then match those symptoms to drugs known to treat them in any context. By searching for a collection of symptoms, you could then find a list of drugs that have been used to treat at least one of them. The better a drug matched the input symptoms, the higher the score it would be given.
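As a rough sketch, the matching logic described above might look like the following. The drug-to-symptom mapping and the `score_drugs` function are hypothetical simplifications for illustration; the actual hackathon code worked from Open Targets drug, phenotype, and target data.

```python
# Hypothetical sketch of symptom-to-drug matching.
# The drug -> treated-symptoms mapping is made up for illustration.
DRUG_SYMPTOMS = {
    "drug_a": {"fever", "joint_pain"},
    "drug_b": {"fever", "rash", "fatigue"},
    "drug_c": {"fatigue"},
}

def score_drugs(query_symptoms):
    """Score each drug by the fraction of the query symptoms it is known to treat."""
    query = set(query_symptoms)
    scores = {}
    for drug, treated in DRUG_SYMPTOMS.items():
        matched = query & treated
        if matched:  # keep only drugs treating at least one input symptom
            scores[drug] = len(matched) / len(query)
    # A higher score means a better match to the full set of input symptoms
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Searching for `["fever", "rash"]` would rank drug_b (which treats both) above drug_a (which treats only one).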

To test that the algorithm could work, the team used example data provided by the Open Targets team, including drug, phenotype, and target information. By the end of the first day (with a particular shout-out to Joni), we had working code. Although not perfect, there was evidence that the approach could work, as a number of existing repurposed drugs were identified as high scorers. Of course, this approach also pulled out drugs specifically targeted at these symptoms. Finding a way to differentiate these would be a high priority for any further development.

Thinking about users

In the meantime, I had got to work on a simple mockup of a possible UI. We imagined it as a search engine that would accept either a disease or a set of symptoms as input. It would then return a list of drugs that have been recorded as treating some of those symptoms, and so could potentially be repurposed.

We decided to simplify this as much as possible. Scores were represented as different ‘buckets’ (from 1 to 5 stars) to give an indication of how strong a match was. Basic visualisations were also included to summarise key information about the results: the top drug and target matches, how much each symptom contributed to the search results, and the development phases of the drugs included. Indicating the development phase of a drug is important because drugs that have already been approved are the lowest-hanging fruit for repurposing. Each visualisation would also be interactive, allowing filtering on the different properties.
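The star ‘buckets’ could be produced with a simple mapping along these lines (a sketch with assumed thresholds; the exact bucketing the team used is not documented here):

```python
import math

def to_stars(score, max_stars=5):
    """Bucket a match score in the range (0, 1] into 1-5 stars.

    The thresholds are an assumption for illustration: the score is
    scaled to the number of stars, rounded up, then clipped to range.
    """
    return max(1, min(max_stars, math.ceil(score * max_stars)))
```

Under this scheme a perfect match (score 1.0) receives five stars, while weaker matches fall into lower buckets.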

Pitching the idea

From this point we focused on developing a good pitch, summarising the problem, our back-end approach, and a click-through of the prototype UI.

Four other excellent teams had also joined the challenge, and pitched their ideas to a team of judges. Each concept took a different perspective, including an idea to draw inspiration from Netflix’s personalised suggestion algorithm and a working tool using graph and machine learning technologies.

The hackathon drew to a close with winners announced for each challenge. We were incredibly flattered to find out that our team had won our challenge amongst some very strong competition. The aim is now for the idea to be carried forward, to validate the algorithmic approach as well as to better understand possible contexts of use and user needs. We hope that this can be the seed of a new tool helping to find treatments for people with rare diseases.

The Open Targets challenge winners certificate


Last month we travelled the world to discuss our BioDesign team’s work. We presented at the Pistoia Alliance UXLS Conference in Boston and the OpenVis Conf 2018 workshops in Paris.

We spoke about 5 principles of good information experience that we’ve found to be especially helpful in the design of software for the life sciences. In this post we explain these principles, why we think they matter, and how to use them in practice.

Types of life science software

A key part of software design for the life sciences is understanding and shaping information experience. This means working to ensure that users know how data is processed, and offering opportunities to manipulate the ways in which a tool works. Most importantly, it is about making it as easy as possible to find patterns and insight in data.

Below we discuss 5 design principles that we use when designing software for the life sciences and how they can be implemented.

Transparency in Information Analysis

One of the main reasons for developing life science software is to automate part of an analytical process through the use of algorithms, easing the burden on scientists. However, most of an algorithm’s logic tends to be abstracted away from users: it becomes a ‘black box’ that no one can see into to make sense of.

Algorithm logic is hidden from user

Unless people understand how a software algorithm works, they are unlikely to be able to trust it. For example, in clinical genetics labs like the one that Simon worked in, the sensitive nature of the work means that a tool won’t be used at all if lab scientists don’t have an understanding of the principles it is based on.

This means it’s a really good idea to find a way to open up the black box, showing the steps that an algorithm is taking so that users can compare its mechanism to their own understanding of the process.

Opening the black box of a life science algorithm

A great way to do this is visually. A visual explanation can be a very quick way to describe functionality, while not revealing so much that IP is at risk.

Going further, if the method of revealing the steps in the process is also interactive, this not only makes the tool easier to understand but also makes it customisable, potentially increasing its utility and trustworthiness so that it can be implemented in more contexts.

Manipulating the logic builds understanding and trust

In our ‘Opening the black box’ case study we explain how we applied this first principle to help users understand the working of a clinical genetic analysis tool.

Information Hierarchy

A major challenge in the life sciences is the scale and richness of biological data. When using software tools for accessing or manipulating large biological datasets, it is easy to become overwhelmed, miss what you are looking for, and miss opportunities for discovery.

Understanding the information priorities of users – which data they want to compare and consume, and in what order – is key to preventing confusion.

In practical terms, applying the principles of information hierarchy can be quite simple. For example, this could be a matter of prioritising information using colour, size, and order of different information types.

Prioritising information using colour, size, and order

Other approaches involve moving less important (or more technical) information a click away behind drop-downs or on secondary pages. Standard design patterns proven in other domains are equally useful in the life sciences.

Bringing more important information to the front

It’s important to bear in mind that information priority is not universal. Depending on what they’re using a software tool for, users will have different opinions on what they need to see first, what second, and so on. They will also have preferences regarding the level of control they need over the system’s functionality.

Users have different preferences for information prioritisation

For example, bioinformaticians may need to know the type of sequencing platforms and the exact pipeline settings used in their experiments, while clinical scientists may prioritise access to journal articles in which disease symptoms are discussed. This means that designing the right level of interface customisability for different user types relies heavily on user research to understand preferences.

Flexible Workflows

Scientific workflows can switch rapidly from the ordinary to the novel as researchers respond to signals in their data. Software interfaces need to support streamlined completion of routine tasks as well as facilitating detours for more in-depth data exploration.

Designing flexible workflows for quick routine tasks and easy explorations

This can be challenging to implement in the life science context. Workflows can include multiple steps, protocols can vary from one lab to another, and best practices constantly evolve.

One way of addressing this is to treat different analytical features modularly, suggesting popular next steps from one module to another. Another approach is a sandpit style of software, in which no workflows are enforced, in favour of maximum flexibility (although this can create barriers for newcomers if the interface is too overwhelming).

Understanding what scientific users want to accomplish, their context, and their limitations is especially important for designing optimal workflows. At BioDesign we do this by observing how people use their current tools, and by prototyping and testing scenarios based on known use cases.

Encouraging Exploration of Results

Scientific information inherently lends itself to visualisation. Modern web browsers allow for visual representations that are multidimensional, dynamic, and interactive. However, because data in the life sciences is vast and multidimensional, it is often challenging to capture all significant information in a single visualisation format.

Challenge to capture all information in a single format

We believe it is important to give people the opportunity to view their data from a variety of perspectives, making it possible to find patterns, pull insights out of data, and generate new hypotheses.

View data from a variety of angles

Of course, each visualisation type has its own strengths and weaknesses. Each graphic will highlight or emphasise certain aspects of the data, but may obscure or distort others. By paying close attention to what these factors are, and by testing extensively with users, it is possible to develop novel and complementary visualisations that improve a scientist’s ability to make a discovery.

Complementary and novel visualisations

This principle guided our work on Sequence Bundles, a novel method for visualising sequence alignments that we designed and published as an open-source software tool. Users of Sequence Bundles can expose protein and DNA patterns that other visualisation methods fail to surface.

Design-led Research

User research is essential to producing good software. Without it, it becomes easy to over-engineer functionality, or build confusing user interfaces (UIs). It is especially important for software development in the life sciences, where functionality is often complex and there are fewer instances of known design patterns that can be reliably followed.

At BioDesign, we use a design-led research approach in our work. This means that we produce design prototypes as early as possible, in which we capture our best understanding of user interactions, workflows, information hierarchies, and data visualisations. We then use these prototypes in research interviews to test ideas and assumptions with users and experts.

Design-led research helps in understanding user needs and testing designs early

A prototype can be a click-through mock-up, a diagram of a workflow, a screen from a proposed software UI, or a map of ideas: anything that clearly indicates the concept or proposal you want to test with users. We record interviews, then pull out and collate insights to build up a picture of users’ needs, understanding, and intentions. Using this feedback as a basis for iteration allows for rapid improvement and refinement of the software’s design.

We have found the design-led research approach to be most effective in the early stages of development, where open user feedback is instrumental in defining features, interaction models, and the scope of life science software.

Design Matters

Many hopes are placed in modern life science software: from providing genetic diagnoses to patients, to automating complex experiments, to identifying new drug targets, to organising entire domains of knowledge. Pivotal to all these promises is how we interact with information and data. Designing good information experiences for the life sciences will help these tools to meet their potential.

At the most basic level they will be more efficient, but they can also be more compelling, delightful, and understandable. At their best, well-designed information experiences will enable scientists to find patterns that would otherwise have been missed, and to formulate new hypotheses that can push research forward.