
Evaluating Processes and Outcomes

Successful evaluation of a Dialogue & Deliberation effort requires organizers to reflect on the program’s success at every stage: early planning and preliminary activities, project governance and program development, the Dialogue & Deliberation event itself, and, finally, the reporting and dissemination of impacts and outcomes. Program evaluation aims to understand the outcomes of a specific process and whether the desired impact was achieved. When used well, evaluation can be a powerful learning tool for you and your team.

What Does Effective Evaluation Look Like?

Watch the video below to hear perspectives on effective evaluation approaches from Karen Peterman (Catalyst Consulting Group) and ASTC Community Science Dialogue & Deliberation Fellow David Valentine (Science Museum of Minnesota).


Co-designing the evaluation plan with the community partner is essential to ensure that the target outcomes and strategies used to measure them align with the community’s values and with the event’s goals. Evaluation should be discussed in initial planning meetings as an integral part of the event planning process. These evaluation conversations will also help with the design process, as they can clarify goals.

Designing an evaluation plan requires making two interrelated decisions: 1) what data you would like to gather and 2) how you would like to (and are able to) gather those data. You and your partners should also discuss how to assess the health of your relationships throughout the process, as well as how and with whom you will share evaluation outcomes. Although it can be tempting to try to maximize the volume of data you collect, keep in mind that collecting and analyzing more data may be burdensome for both participants and your team. Targeted data collection often yields clearer insights than wide-ranging efforts.

Once you’ve made some decisions about what information you want, you can consider the methods that would be appropriate (and feasible) for gathering it. Surveys are the most straightforward form of evaluation and can produce clear, quantitative results. For example, brief entry and/or exit surveys can help you gather initial feedback to determine how successful the event was in recruiting and engaging your target audience. When more in-depth information is desired, interviews can help reveal nuanced perspectives. One weakness shared by both surveys and interviews is that they are obtrusive, meaning that respondents are aware the evaluation is occurring, which may influence their behavior. To prevent this influence, you may consider unobtrusive measures, like reviewing participants’ notes, gathering facilitators’ observations, or looking for changes in local policy.

The evaluation process may be run entirely by the same team that led the Dialogue & Deliberation event itself; it may incorporate internal evaluation experts from the museum or the community partner organization; or it may involve hiring external evaluators. Each of these choices has different advantages. Working on the evaluation with the original team can help ensure that the evaluation stays tightly focused on the most important outcomes and can, of course, save resources. Some evaluations are designed exclusively to provide information for you and your team to make improvements. For instance, Team-Based Inquiry takes you through four primary steps: question, investigate, reflect, and improve. This type of evaluation may be ideal if you do not have grant reporting requirements or other reasons to share evaluation results widely, and it can be a good place to start for smaller projects and newer teams.

However, if you have the resources available or are in the process of applying for a grant, we recommend that you set aside funding to hire an external evaluator, as their expertise can help you design the best possible evaluation. If you do choose to hire external evaluators, there are several important decisions to make while selecting them. First, you should know that evaluators come from a wide range of professional backgrounds, some of which will be better suited to this process than others. Additionally, it is essential to make sure the evaluator(s) you select have experience working with the community and are trained in culturally responsive evaluation practices.

Data collection for community science projects can be particularly difficult for several reasons. Unlike a school or work setting, you cannot assume you will see the same participants day in and day out. You must also consider that staffing changes may occur at your organization or your community partner’s organization. Your partners and participants may come from groups that are rightfully cautious about sharing information about themselves. They may speak different languages or hold different opinions on the value of data collection.

Here are some things to consider when it comes to effective and ethical data collection in community science:

  1. Keep it short. Although it may be tempting to gather as much data as possible, this approach can quickly overwhelm respondents and cause them to drop out of data collection altogether. Carefully consider what information is most important and remove extraneous questions. It is also important to know your audience; for example, young children will likely have shorter attention spans than adults.
  2. Be transparent. Ensure that respondents know what data you are gathering, why you are gathering it, and how it will be used. This serves the dual purpose of assuring respondents that it is safe to share information with you and letting them know why it is important, so they are motivated to respond.
  3. Compensate. Partners and participants should receive equitable compensation for their efforts whenever possible, and this includes work on evaluation activities like filling out surveys or giving interviews.  

Ideally, evaluation results can be used to improve future Dialogue & Deliberation efforts. This goal of future improvement should inform all steps of the evaluation process but will likely be most obvious in the final stages, as you and your team work to turn the data you’ve collected into actionable suggestions. Evaluation in this context can be viewed as a cycle of continuous improvement: information gained about partner relationships can strengthen collaboration, while information received from participants can help you design more effective Dialogue & Deliberation events.

Evaluation results can also be used to gain a clearer understanding of community desires. While the entire Dialogue & Deliberation process should move toward proposed actions, other outcomes may not be immediately clear simply by attending and listening to conversations. Depending on the design of your event, facilitator notes, participant-made artwork, or voting results may help your team better understand participant needs, desires, impressions, and perspectives. You can even share initial evaluation results with participants and ask for their opinions to check your assumptions.

Learn more about data analysis methods: Data Analysis (Catalyst Consulting Group)