How Do We Know if Science Communication Training is Working?

By Erica Goldman and COMPASS

Mar 11, 2014


“Don’t blame the ruler.”

Now a few weeks out from the AAAS meeting in Chicago, the punch line of Rick Tankersley’s talk at our #GradSciComm session still niggles in the back of my mind.

[Image] We can’t manage what we don’t measure. Investing in the evaluation of science communication training will help ensure that scientists become proficient in desired skills. (Photo: Barbara.K, CC BY-NC-ND 2.0)

Rick, currently on rotation as a program officer in the Division of Graduate Education at the National Science Foundation (NSF), runs a presentation boot camp for ocean scientists. The boot camp focuses on a set of 11 presentation skills, ranging from organization to timing to visuals and language. He also places a strong emphasis on evaluation, and has developed a rigorous rubric to assess whether his trainings, and others’, result in communication growth. The instrument measures growth concretely: for each skill identified as a target goal of a particular training, it rates the level of proficiency attained, from basic to expert.
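The post doesn’t reproduce Rick’s rubric, but the underlying idea, rating each target skill on an ordinal proficiency scale before and after a training and comparing the two, is simple to sketch. The skill names, level labels, and scores below are illustrative assumptions, not his actual instrument:

```python
# Minimal sketch of rubric-based growth measurement. The skills,
# proficiency levels, and ratings are hypothetical examples, not
# Rick Tankersley's actual rubric.

# Ordinal proficiency scale, lowest to highest.
LEVELS = ["basic", "developing", "proficient", "advanced", "expert"]

def growth(pre: dict[str, str], post: dict[str, str]) -> dict[str, int]:
    """Return per-skill change in proficiency level (post minus pre)."""
    return {
        skill: LEVELS.index(post[skill]) - LEVELS.index(pre[skill])
        for skill in pre
    }

# Hypothetical before/after ratings for two of the eleven skills.
pre  = {"organization": "basic",      "visuals": "developing"}
post = {"organization": "proficient", "visuals": "developing"}

print(growth(pre, post))  # {'organization': 2, 'visuals': 0}
```

The point of an instrument like this is that “did the training work?” stops being a matter of impression: each skill either moved up the scale or it didn’t.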

What he found is that even sophisticated communication training programs can fail to generate communication growth if they don’t include all of the elements of what he calls the “communications triad”: training, feedback, and practice. Rick applied his evaluation methods to NSF’s GK-12 program, a now-discontinued effort that funded STEM graduate students to teach in K-12 classrooms. An explicit goal of the program was for students to improve their ability to communicate with non-peer audiences. But despite providing tremendous opportunities for feedback and practice, the program lacked a dedicated communication training component, and participants failed to show significant communication growth.

Rick’s findings weigh heavily on me as we move forward with a roadmap for making effective communication training an integral part of STEM graduate education through our #GradSciComm effort. The risks feel considerable. If we know that all three elements are necessary, how can we be sure that agencies and universities strive to systematically support the whole triad: training, feedback, and practice?

But it goes beyond that. I think what is really niggling at me is that we don’t have a systematic way to know whether future investments in building graduate students’ communication skills will yield the desired returns. How will we know that the next generation of scientists is becoming proficient in a set of communication-specific core competencies?

What I realize is that it is more than “don’t blame the ruler.” It’s “use the ruler!” Use the ruler to inform how you develop the goals of your training program, whether you seek changes in proficiency in oral presentation skills or changes in societal attitudes toward scientists as agents of change. Map out how you are going to measure the changes you hope to see. Then do it.

But this is not an easy lift for communication training programs and university courses. Effective evaluation and monitoring take time, resources, and a new set of skills from the social sciences that will require thoughtful integration and capacity building. We at COMPASS ask this question about our own programs all the time. We robustly incorporate the three elements of the communications triad in our work, and we can clearly see the impact of our work in the world. But we also realize that developing and using the appropriate ruler to measure the impact of our trainings on scientists over time will require a dedicated push. We’d like to take this on in a rigorous way, but we need time, money, and appropriate collaborators to make it happen.

As the national conversation unfolds urging agencies and universities to invest in more and better communication training for the next generation of scientists, we hear loud and clear that a parallel investment needs to be made in mechanisms for monitoring and evaluating that training. We’ll be working through some ideas for how to do that as the #GradSciComm group develops its recommendations to NSF and other agencies. We’d love to hear your thoughts as well. How can we invest in growing capacity for communication skills training while making sure we know what that investment will return?

This post was transferred from its original location at www.compassonline.org to www.COMPASSscicomm.org, August 2017.
