The OER Advocacy Toolkit is adapted from the Open Educational Resources Advocacy Toolkit by the Council of Australian University Librarians and is licensed under a Creative Commons Attribution 4.0 International licence, except where otherwise stated.
Each previous stage of the Toolkit has offered Evaluation Checkpoints to recognise the role of constant monitoring and evaluative processes. Iterative evaluation is essential for making adjustments throughout the advocacy life cycle, leading to a full evaluation at the conclusion of your activities.
Likewise, the Toolkit has stressed the importance of setting clear advocacy goals because you can't evaluate the effectiveness of your actions without understanding the purpose behind them.
Most advocates will measure their outcomes, such as the:
These are relatively easy measures to gauge and are standard inclusions in outcomes-based reporting. However, it's just as important to measure the effectiveness of your advocacy strategy in:
Measuring and reporting these outcomes is more challenging, but it is part of a deliberate and purposeful advocacy strategy.
Evaluating your program using local data, evidence and case studies that demonstrate your program's successes will strengthen your advocacy efforts. This is a good time to look back at your advocacy action plan goals for a point of comparison and ask yourself:
For example, when the program purpose is to increase the usage of open textbooks at your institution, your report can include the:
Using the previous example, you may have set the following as markers of success:
Below are some tools and resources you can use to evaluate your OER program against its intended goals:
Keep in mind that results and achievements can't always be measured in numbers. Case studies are important for collecting and describing qualitative data like barriers, solutions and complex stakeholder exchanges.
Just as the outcomes of advocacy activities require evaluation, so too does the practice of advocacy. This can be challenging, as advocacy rarely lends itself to linear design and execution, usually requires iterative development, and its outcomes can't always be defined with precise quantitative measures. Rather, the success of this type of evaluation is predicated on asking the right questions, asking those questions of the right people, and professional reflection.
Using existing tools and frameworks provides a foundation for action, and you can consider localising these resources or aligning them more closely with your outcomes. The resources below will support you in evaluating your advocacy:
A key part of evaluating your advocacy is assessing whether you're gaining traction with stakeholders. Here are several questions you can use or integrate into advocacy workshop feedback planning:
These types of questions lend themselves well to qualitative data collection such as semi-structured interviews. Always build re-use and repurposing into data collection to maximise the outputs. A semi-structured interview could become a short video asset or a featured case study that can then be hosted on the institutional website or used in subsequent workshops or presentations as a local example of practice to inspire others. Given the limited resources available at most institutions, OER advocates need to plan how to strategically leverage the freedoms of OER for their own practice.
Identifying the purpose of your program and how to measure its success from the outset is crucial to the process of evaluating your OER advocacy program.
OER and open textbooks can be a transformative strategy for addressing digital access, learning material costs and inclusive experiences for higher education students (Lambert & Fadel, 2022). Success may be multilayered and multidimensional, encompassing pedagogical, economic, social justice and cultural outcomes, so you'll need to plan from the beginning of your OER program how best to capture the story you want to tell about its rollout and delivery.
In terms of measuring success, you'll need a plan for collecting data transparently to inform future decision-making and reporting. Some examples of successful deliverables and outcomes include:
University libraries are increasingly identifying the expanded adoption of OER and open textbooks as a strategy to mitigate the cost and digital access risks associated with commercial textbook provision in the digital age (Lambert & Fadel, 2022).
You should have a plan for what data you need to collect, why you need to collect it and how you will collect it for the purposes of decision-making and reporting – particularly when measuring the success of economic outcomes from OER programs. Collection is only the first step, though: the second step in your data strategy is to decide how larger measures like student savings will be calculated, analysed and presented to various audiences.
Where possible, use actual cost savings as reported by the faculty or school teaching the course or drawn from your bookshop's data. Without an average per-student, per-course savings figure, arriving at a total savings estimate would be extremely difficult.
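To make the arithmetic concrete, below is a minimal sketch of how a total savings estimate might be calculated. All course codes, enrolment numbers and per-student savings figures are hypothetical; substitute your own institution's data and preferred averaging method.

```python
# Minimal sketch of a total savings estimate (all figures hypothetical).
# Each tuple: (course code, students enrolled, estimated saving per student),
# ideally drawn from faculty reports or bookshop data on the replaced textbook.
courses = [
    ("BIOL101", 320, 95.00),
    ("PSYC110", 450, 120.00),
    ("STAT150", 210, 80.00),
]

total_savings = sum(students * saving for _, students, saving in courses)
total_students = sum(students for _, students, _ in courses)
average_per_student = total_savings / total_students

print(f"Estimated total student savings: ${total_savings:,.2f}")
print(f"Average saving per student, per course: ${average_per_student:,.2f}")
```

Reporting both the headline total and the per-student average lets you tailor the same data to different audiences.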
Various groups have modelled how to arrive at an estimate. For example:
These statistics are derived from sources in the United States, where textbook costs (as well as textbook publisher models for access) have come under increasing scrutiny. You'll note that many US-based projects use affordability as the key driver and reporting outcome – a decision based on localised contexts and funding arrangements. At your institution, affordability may be only one reason (and not necessarily the primary one) for wider engagement.
Presently, there is little national data on textbook use, affordability and access in Australia. However, reports on student finances and student poverty often reference educational costs and their impact on Australian students. Two suggested starting points are:
While open education practices (OEP) may have a pedagogical rather than social justice focus, those that aim to empower learners may have a positive impact on human rights or equity in at least two ways:
Adapted from 'Calculating and Reporting Student Savings' by Jeff Gallant in The OER Starter Kit for Program Managers by Abbey K. Elder, Stefanie Buck, Jeff Gallant, Marco Seiferle-Valencia and Apurva Ashok, licensed under a CC BY 4.0 licence.
Bali, M., Cronin, C., & Jhangiani, R. S. (2020). Framing open educational practices from a social justice perspective. Journal of Interactive Media in Education, 2020(1), 10. http://doi.org/10.5334/jime.565 (licensed under a CC BY 4.0 licence)
Lambert, S., & Fadel, H. (2022). Open textbooks and social justice: A national scoping study. National Centre for Student Equity in Higher Education (NCSEHE). https://www.ncsehe.edu.au/wp-content/uploads/2022/02/Lambert_OpenTextbooks_FINAL_2022.pdf
Advocacy is a distinctly iterative cycle where it is commonplace to encounter barriers such as stakeholder indifference, disagreement, administrative or policy blocks, and lack of understanding. It's important for advocates to continuously debrief amongst themselves about these common barriers to develop the right solutions.
This method of collective debriefing can be thought of as incremental advocacy evaluation. It's a continuous social learning process of planning, reflecting, sharing and acting.
These debriefs can be more effective when you:
Documenting these changes becomes an important part of defining workflows and processes. This will mature your advocacy from a single-instance, ad hoc activity to one that is ready for repeatable and transferable application across your institution.
Ultimately, advocacy is about creating opportunities for change and transformation of practice. There are a number of common approaches and frameworks irrespective of discipline, and advocacy evaluation tools are usually contextualised by practitioners as part of iterative design and debriefing.
The following examples – drawn from non-government organisation (NGO), healthcare and social movements – are a good starting point for OER advocates: