OER Advocacy Toolkit

What Does It Mean to Evaluate OER Advocacy?

Each previous stage of the Toolkit has offered Evaluation Checkpoints in recognition of the role of constant monitoring and evaluation. Iterative evaluation is essential for making adjustments throughout the advocacy life cycle, culminating in a full evaluation at the conclusion of your activities.

Likewise, the Toolkit has stressed the importance of setting clear advocacy goals because you can't evaluate the effectiveness of your actions without understanding the purpose behind them.

Most advocates will measure their outcomes, such as the:

  • number of students benefitting from OER
  • number of staff engaged with OER
  • amount of cost savings to students
  • number of OER produced by the institution.

These are relatively easy measures to gauge and are typically included in outcomes-based reporting. However, it's just as important to measure the effectiveness of your advocacy strategy in:

  • influencing stakeholders
  • changing attitudes
  • transforming behaviours
  • building partnerships.

Measuring and reporting these outcomes is more challenging, but it is part of a deliberate and purposeful advocacy strategy.

Evaluating Your OER Program

Evaluating your program using local data, evidence and case studies that demonstrate its successes will strengthen your advocacy efforts. This is a good time to look back at your advocacy action plan goals as a point of comparison and ask yourself:

  1. What was the original purpose of your program? Did your program meet this purpose, and how can you evidence this?

For example, when the program purpose is to increase the usage of open textbooks at your institution, your report can include the:

  • number of open textbook workshops conducted
  • attendance (consider breaking down the numbers by discipline or faculty)
  • correlation between attendance and open textbook adoptions (are there instances of staff adopting an open text without attending a workshop?)
  • open textbook adoptions across the university (consider reporting based on discipline or year of study – were adoptions more prevalent in first-year courses?)
  • number of students engaging with open textbooks.

  2. What expectations did you set for what success looks like?

Using the previous example, you may have set the following as markers of success (a simple progress-check sketch follows the list):

  • all first-year lecturers from three specific disciplines will attend an open textbook workshop
  • thirty per cent of those attending will set an open textbook in their course
  • a learning and teaching event will be held at the end of semester featuring some of these lecturers to help disseminate this practice.
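
If you've recorded figures like these, a few lines of code can turn them into a progress report against your markers. The sketch below is purely illustrative – the counts and variable names are hypothetical placeholders, not data from the Toolkit:

```python
# Illustrative sketch: comparing recorded program figures against the
# success markers above. All counts are hypothetical placeholders.

first_year_lecturers = 24      # lecturers across the three target disciplines
attended_workshop = 19         # of those, how many attended a workshop
adopted_open_textbook = 7      # attendees who set an open textbook in a course

attendance_rate = attended_workshop / first_year_lecturers
adoption_rate = adopted_open_textbook / attended_workshop

ATTENDANCE_TARGET = 1.00       # all first-year lecturers attend
ADOPTION_TARGET = 0.30         # thirty per cent of attendees adopt

print(f"Attendance: {attendance_rate:.0%} (target {ATTENDANCE_TARGET:.0%})")
print(f"Adoption:   {adoption_rate:.0%} (target {ADOPTION_TARGET:.0%})")
```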

Below are some tools and resources you can use to evaluate your OER program against its intended goals:

Keep in mind that results and achievements can't always be measured in numbers. Case studies are important for collecting and describing qualitative data like barriers, solutions and complex stakeholder exchanges.

Evaluating Your OER Advocacy

Just as the outcomes of advocacy activities require evaluation, so too does the practice of advocacy. This can be challenging as advocacy rarely lends itself to linear design and execution, usually requires iterative development, and its outcomes can't always be defined with precise quantitative measures. Rather, the success of this type of evaluation is predicated on asking the right questions, asking those questions of the right people, and professional reflection.

Using existing tools and frameworks provides a foundation for action, and you can consider localising these resources or aligning them more closely with your outcomes. The resources below will support you in evaluating your advocacy:

A key part of evaluating your advocacy is assessing whether you're gaining traction with stakeholders. Here are several questions you can use or integrate into advocacy workshop feedback planning:

  • Have stakeholders explicitly signalled or proven that they:
    • will change their practices as a result of your advocacy?
    • aim to influence those around them to change their practices as a result of your advocacy?
    • changed their own beliefs and attitudes as a result of your advocacy?

These types of questions lend themselves well to qualitative data collection such as semi-structured interviews. Always build re-use and repurposing into data collection to maximise outputs. A semi-structured interview could become a short video asset or a featured case study that can then be hosted on the institutional website or used in subsequent workshops and presentations as a local example of practice to inspire others. Given the limited resources available at most institutions, OER advocates need to plan how to strategically leverage the freedoms of OER for their own practice.

Evaluating Economic and Social Justice Outcomes

Identifying the purpose of your program and how to measure its success from the outset is crucial to the process of evaluating your OER advocacy program.

OER and open textbooks can be a transformative strategy for addressing digital access, learning material costs and inclusive experiences for higher education students (Lambert & Fadel, 2022). Success may be multilayered and multidimensional, encompassing pedagogical, economic, social justice and cultural outcomes, so you'll need to plan from the beginning of your OER program how best to capture the story you want to tell about its rollout and delivery.

In terms of measuring success, you'll need a plan for collecting data transparently to inform future decision-making and reporting. Some examples of successful deliverables and outcomes include:

  • economic outcomes
  • equitable (social justice) outcomes.

Economic Outcomes

University libraries increasingly identify the expanded adoption of OER and open textbooks as a strategy to mitigate the cost and digital access risks associated with commercial textbook provision in the digital age (Lambert & Fadel, 2022).

You should have a plan for what data you need to collect, why you need it and how you will collect it for the purposes of decision-making and reporting – particularly when measuring the success of economic outcomes from OER programs. The second step in your data strategy is to decide how larger measures like student savings will be calculated, analysed and presented to various audiences.

Where possible, use actual cost savings as reported by the faculty or school teaching the course, or drawn from your bookshop's data. Without an average per-student, per-course savings figure, arriving at a total savings estimate is extremely difficult.

Various groups have modelled how to arrive at an estimate (a worked calculation sketch follows this list). For example:

  • At the 2016 Open Education Conference, Nicole Allen and David Wiley presented multiple cost savings studies and concluded that US$100 was a reasonable per-course savings estimate (Allen & Wiley, 2016).
  • In 2018, SPARC and Lumen Learning used disaggregated IPEDS data to reach a more specific estimate of US$116.94 per course (Nyamweya, 2018).
  • The National Association of College Stores used college store data to reach a US$82 per-textbook average. This differs slightly from a per-course average, as it includes low-cost scholarly monographs, novels and trade publications, which are often assigned as a group of required resources for one course (Open Oregon Educational Resources, 2018).
  • OpenStax used the 2015–2016 NCES National Postsecondary Student Aid Restricted-Use Data File and an internally calculated average of 7 courses per year likely to require a textbook to reach a US$79.37 per-course textbook cost average (Ruth, 2018).
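
As a rough illustration of how such figures translate into a total savings estimate, the sketch below multiplies enrolments in adopting courses by each study's per-course figure. Only the per-course estimates come from the studies cited above; the enrolment numbers are hypothetical placeholders for your own institutional data:

```python
# Illustrative sketch: estimating total student savings from open textbook
# adoptions. Per-course figures are from the studies cited above; the
# enrolment numbers are hypothetical placeholders for your own data.

PER_COURSE_ESTIMATES_USD = {
    "Allen & Wiley (2016)": 100.00,
    "SPARC / Lumen Learning (Nyamweya, 2018)": 116.94,
    "OpenStax (Ruth, 2018)": 79.37,
}

# Enrolled students in each course section that adopted an open textbook.
adopting_course_enrolments = [120, 85, 240, 60]
total_enrolments = sum(adopting_course_enrolments)

for source, per_course in PER_COURSE_ESTIMATES_USD.items():
    print(f"{source}: US${total_enrolments * per_course:,.2f} estimated savings")
```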

These statistics are derived from sources in the United States, where textbook costs (as well as publisher models for access) have come under increasing scrutiny. You'll note that many US-based projects use affordability as the key driver and reporting outcome – a decision based on localised contexts and funding arrangements. At your institution, affordability may be only one reason (and perhaps not the primary reason) for wider engagement.

Presently, there is little national data on textbook use, affordability and access in Australia. However, reports on student finances and student poverty often reference educational costs and their impact on Australian students. Two suggested starting points are:

Equitable (Social Justice) Outcomes

While open education practices (OEP) may have a pedagogical rather than social justice focus, those that aim to empower learners may have a positive impact on human rights or equity in at least two ways:

  • When used with individuals in marginalised communities – Creating or adapting content can increase representation of diverse identities and marginalised groups.
  • In the long-term development of students as citizens who learn how they can empower others once they're in a position to do so – OEP can open up scholarship and knowledge access to populations who would otherwise not be able to afford it. Although many of the OEP discussed here don't initially have transformative effects, their openness in itself may begin to affect mindsets and cultures to facilitate transformative change (Bali, Cronin, & Jhangiani, 2020).

Attributions

Adapted from 'Calculating and Reporting Student Savings' by Jeff Gallant in The OER Starter Kit for Program Managers by Abbey K. Elder, Stefanie Buck, Jeff Gallant, Marco Seiferle-Valencia and Apurva Ashok, licensed under a CC BY 4.0 licence.

References

Bali, M., Cronin, C., & Jhangiani, R. S. (2020). Framing open educational practices from a social justice perspective. Journal of Interactive Media in Education, 2020(1), 10. http://doi.org/10.5334/jime.565 (licensed under a CC BY 4.0 licence)

Lambert, S., & Fadel, H. (2022). Open textbooks and social justice: A national scoping study. National Centre for Student Equity in Higher Education (NCSEHE). https://www.ncsehe.edu.au/wp-content/uploads/2022/02/Lambert_OpenTextbooks_FINAL_2022.pdf

Iterative Debriefing

Advocacy is a distinctly iterative cycle where it is commonplace to encounter barriers such as stakeholder indifference, disagreement, administrative or policy blocks, and lack of understanding. It's important for advocates to continuously debrief amongst themselves about these common barriers to develop the right solutions.

This method of collective debriefing can be thought of as incremental advocacy evaluation. It's a continuous social learning process of planning, reflecting, sharing and acting.

These debriefs can be more effective when you:

  • create an informal and welcoming meeting environment that lends itself to honesty and frankness
  • focus on identifying barriers
  • commit to understanding stakeholder indifference/disagreement (in order to overcome it), rather than just rejecting it
  • reflect on successful advocacy, why it worked and how to further leverage this influence
  • reflect on ineffective advocacy, why it didn’t work and how to do this differently
  • develop solutions creatively, rather than in a linear or conventional way.

Documenting these changes becomes an important part of defining workflows and processes. This will mature your advocacy from a single-instance, ad hoc activity to one that is ready for repeatable, transferable application across your institution.

Tools for Evaluating Advocacy

Ultimately, advocacy is about creating opportunities for change and the transformation of practice. A number of common approaches and frameworks apply irrespective of discipline, and advocacy evaluation tools are usually contextualised by practitioners as part of iterative design and debriefing.

The following examples – drawn from the non-government organisation (NGO), healthcare and social movement sectors – are a good starting point for OER advocates: