Citizen Science Evaluation & Planning

In collaboration with the Smithsonian Environmental Research Center (SERC), we are testing evaluation tools designed to establish contextually appropriate means of evaluating scientific productivity. Because citizen science projects vary widely in their goals and practices, measuring their science outcomes requires a holistic approach, so we are developing evaluation and planning procedures that can be applied across a variety of contexts. Working with SERC's diverse range of citizen science projects lets us improve the robustness and generalizability of a science outcomes inventory and planning toolkit that will also be useful to the broader citizen science community.

Citizen science can be considered both a methodology and a phenomenon. On the methodological side, this study focuses on the contextualized evaluation and planning process. On the phenomenological side, we focus on understanding these projects' evolutionary patterns and the impact of key decisions on project development, addressing three research questions: (1) What are the typical stages of project development and the longitudinal patterns of project evolution in citizen science projects? (2) What events or conditions influence project management decisions in citizen science projects? (3) How does structured evaluation of project outputs support project evolution and decision making?

The products of this research will include improved citizen science project management and evaluation processes. We also anticipate new insights into project dynamics and resource requirements that can be used to establish reasonable, evidence-based resource allocations and performance expectations for local-to-regional field-based citizen science projects, supporting more effective project management and improved...

ADVANCE seed grant awarded

It’s official: the Open Knowledge Lab’s newest project, a study of how researchers assess data, has been funded under the UMD ADVANCE seed grant program! Lab Director Wiggins will work with Dr. Melissa Kenney and her team on a study of climate indicators—data visualizations with brief text descriptions and links to provenance describing the data sources and analysis processes—and how scientists assess the data when these pieces of content are delivered in different ways.

Right now, there’s a big push for scientific data to be shared and re-used, but sharing data effectively is harder than it sounds. First, there’s a lot of “extra work” involved, and the payoff to the sharer isn’t always obvious or direct. Second, without that extra work (or sometimes in spite of it), using data collected by someone else is often simply harder from an analytical standpoint, even if it saves a lot of time and money on data collection.

There are many reasons why re-using scientific data is challenging, but right from the start, you have to figure out whether the data set in front of you will be useful at all. That assessment is an especially challenging task and remains a significant bottleneck in data discovery, so we hope the results of this study can help reduce this barrier to effective data discovery and use. At the end of the day, if representing data sets in a particular way helps convey their value to potential data consumers more effectively, then it would clearly be worth the relatively small added effort required to...