By Caitlin Kennedy and Nadia Clifton


Program evaluation is nothing new for public libraries, and attention to this aspect of library work has only grown in recent years as libraries, like all public entities, have come under pressure to justify their value in the face of shrinking or stagnant budgets. Still, a focus on teaching and learning can greatly expand our understanding of the purpose and process of evaluation, and can generate new data that makes a compelling case for the services and resources we offer. In this chapter, we will explore how assessment and evaluation methods from informal and formal education environments can be effectively applied to STEAM learning experiences in the public library.

Introducing Assessment and Evaluation

Librarians, teachers, and researchers have studied and written about the impact of continuous, outcomes-based evaluation and assessment of library instruction in a variety of contexts. Assessment has been defined as “all those activities undertaken by teachers — and by their students in assessing themselves — that provide information to be used as feedback to modify teaching and learning activities” (Black & Wiliam, 2010, p. 82). While this definition was developed to describe assessment of learning in K-12 classrooms, it applies in a public library STEAM instruction context as well. In an informal learning setting like the public library, assessment typically does not consist of traditional classroom measures of achievement like grades and tests; in fact, these quantitative measurements can be perceived as “antithetical” to the very nature of the informal learning experience (National Research Council, 2009, pp. 54, 56). Instead, methods of “authentic assessment” are often more useful; these methods present real-world scenarios and invite students to “integrate and apply what they have learned through contextualized tasks” (Frey, 2018). Authentic assessment has been used by librarians to measure the effectiveness of active learning strategies in information literacy sessions, as well as by instructors in schools and informal learning contexts to measure students’ science learning outcomes (Carter, 2013; National Research Council, 2009, p. 57). Although there is currently a relative lack of research about assessment and evaluation of public library STEAM instruction, librarians can learn from past research findings about information literacy and STEAM assessment in both K-12 and higher education settings.

Ideally, in any learning context, “the improvement of outcomes should lie at the heart of assessment efforts” (National Research Council, 2009, p. 55). While many public librarians may agree with this sentiment, it can be difficult to measure and achieve learning gains in an informal learning context. These learning environments are typically student-centered, self-directed, and “a rich source for nurturing curiosity and motivation to learn” (Cun, Abramovich, & Smith, 2019, p. 39). Students often have more agency in informal learning contexts, and they may play a significant role in setting the goals and standards for success in their learning environment (National Research Council, 2009, p. 57). When students engage in this type of self-directed learning, they have opportunities to “make decisions about the information they want to experience” instead of following a “prescribed or predetermined” sequence of activities; cognitive scientists and education researchers tend to agree that this self-directed approach improves students’ learning outcomes, though they have proposed a range of different theories to explain why and to what extent this is true (Gureckis & Markant, 2012; National Research Council, 2009, p. 56). Regardless of the effectiveness of this learning model and its impact on student learning, it can be a real challenge to plan assessments in self-directed learning contexts. Librarians must anticipate learners’ preferences and goals, while also taking into account their own expectations and goals for students, as well as the public library’s broader institutional values and goals (National Research Council, 2009).

Therefore, plans for library assessment should begin with “a clear statement of purpose” that describes what librarians hope to achieve through their assessment practices and demonstrates that they understand their community of learners (Oakleaf, 2010, p. 81). Additionally, because of the many stakeholders that impact public library instruction, “a list of agreed-upon overarching goals and specific, measurable learning outcomes is a necessary element of any assessment plan” (Oakleaf, 2010, p. 83). Writing a clear assessment plan ensures that public librarians will implement the form of assessment best suited to their shared goals, whether formative or summative, quantitative or qualitative, internally focused or externally focused (Oakleaf, 2010, p. 81).

Defining Formative and Summative Assessment

There are two primary categories of assessment that are mentioned consistently throughout the library instruction and K-12 literature: formative and summative assessment. Formative assessment is conducted during instruction, while summative assessment takes place after instruction has occurred (Frey, 2018). Two additional categories of assessment are often used to supplement formative and summative assessment: pre-assessment, which happens before instruction begins, and confirmative assessment, which happens well after instruction has ended (Booth, 2011, p. 139). Pre-assessment reveals students’ prior knowledge and establishes the learning context, while confirmative assessment evaluates students’ retention of knowledge and skills and their ability to apply what they have learned (Booth, 2011, p. 139). See Table 1 below for a summary of the definitions and goals of each type of assessment, as well as examples of how each method could be applied in a public library context to assess STEAM instruction.

Table 1: Definitions, Goals, and Examples of Assessment
Type of Assessment | Definition, Goals, and Examples
Pre-Assessment
  • Definition: assessment that occurs before instruction
  • Goals: evaluate students’ prior knowledge and establish the learning context
  • Examples: 
    • Pre-tests
      • Assess students’ knowledge or feelings about a topic before instruction
Formative Assessment
  • Definition: assessment that occurs during instruction
  • Goals: provide real-time feedback that allows teachers to adapt their instructional approach 
  • Examples:
    • Observations
      • Observe students’ responses during instruction
    • In-class discussions
      • Ask clarifying questions like, “What questions do you have for me?”
    • Selected response or constructed response 
      • Use ed tech tools to allow students to provide anonymous feedback
    • Self-assessment
      • Design an opportunity for students to reflect and evaluate their own learning
Summative Assessment
  • Definition: assessment that measures students’ learning after instruction has occurred 
  • Goals: measure the effectiveness of instruction and demonstrate the impact of instruction
  • Examples:
    • Tests
      • Measure students’ mastery of content or skills
    • Learning artifacts (projects, papers, etc.)
      • Use a rubric or other evaluation tool to measure the quality of student work 
    • Exit tickets (surveys, self-evaluations)
      • Ask students to reflect on their learning at the end of class
Confirmative Assessment
  • Definition: assessment that takes place well after instruction has occurred
  • Goals: evaluate students’ retention of knowledge and skills and their ability to apply what they have learned
  • Examples:
    • Post-tests or projects
      • Design a follow-up opportunity for students to demonstrate or apply their learning after instruction  

Types of Formative Assessment

Formative assessment “provides actionable evidence to guide instruction” (Frey, 2018). By capturing a snapshot of students’ learning and their gaps in knowledge and understanding, librarians can use formative assessment to adapt their instructional approach in order to better meet students’ needs (Frey, 2018). Summative assessment, meanwhile, is more useful to measure students’ learning after (and often, as a result of) the instruction they have received; the learning measured could be students’ “knowledge acquisition” or it could be “the overall impact of the learning experience” on students (Booth, 2011, p. 142). Formative assessment can take a variety of forms, including “on-the-fly assessment, planned-for interaction, and curriculum-embedded assessment” (Heritage, 2007, p. 141). 

  • On-the-fly formative assessment happens “spontaneously” when the teacher changes direction in response to students’ feedback during the lesson (Heritage, 2007, p. 141). For example, the youth librarians at Radnor Memorial Library planned a STEAM workshop for elementary school students to learn geometric principles and express their creativity by creating glow stick shape designs (go.unc.edu/math-club). They planned this lesson with fourth and fifth grade students’ prior knowledge and skills in mind. However, if kindergartners and toddlers had ultimately shown up for the workshop, the librarians would have needed to reframe the lesson “on-the-fly” by asking questions to assess learners’ current knowledge of shapes and geometry. To establish a shared learning context among workshop participants of different ages, the librarians could then “provide a quick ‘pop-up’ lesson” to introduce basic geometric principles (Heritage, 2007, p. 141).
  • Planned-for-interaction formative assessment is developed by instructors before the session to “elicit students’ thinking during the course of instruction” (Heritage, 2007, p. 141). This type of assessment allows librarians to decide in advance what types of questions they will ask to evaluate students’ learning and to facilitate critical thinking and reflection. It also helps establish a timeline for assessment; by planning formative assessment in advance, librarians create opportunities to connect their assessment choices more explicitly with the content being delivered at different points in the class. For example, the librarians leading the geometry workshop at the Radnor Memorial Library could use planned-for-interaction formative assessment at key points in the workshop to encourage reflection and solicit feedback from students. They could start the workshop with a think-pair-share activity in which students discuss what they already know about geometry with a partner. After the think-pair-share, librarians could encourage students to reflect by asking, “What did you just learn from your partner?” Later in the workshop, before starting the hands-on activity, the librarians could pause to clarify students’ understanding by asking, “What questions do you have for me about this activity?”
  • Curriculum-embedded formative assessment can be used to “solicit feedback at key points in a learning sequence” or as a “part of ongoing classroom activities” (Heritage, 2007, p. 141). The key to this type of assessment is repetition — it happens as part of students’ “regular classroom activity” (Heritage, 2007, p. 141). This type of assessment may not be feasible in a one-shot library instruction scenario; however, it could be implemented in the context of a club or an ongoing workshop series. For example, at the Radnor Memorial Library, the STEAM geometry workshop was offered as part of the “Crazy 8’s Math Club” (http://go.unc.edu/SMArt-Kids). To implement curriculum-embedded formative assessment, the librarians could develop a long-term opportunity for club members to apply and reflect on their learning. For example, participants might receive sketchbooks or journals that they use in each math club meeting to apply and reflect on their learning related to different STEAM topics. Considering the students’ journal entries over a period of time would allow librarians to identify patterns, continuously assess students’ learning gains and knowledge gaps, and clarify their goals for future math club instruction. 

Summative Assessment Methods and Tools

Summative assessment is less common in public libraries than in K-12 and higher education settings, where traditional summative assessment methods like tests and papers are often used to measure students’ progress at the end of a course or grade level. These summative assessments may be used by institutions and school administrators to evaluate teachers’ success as classroom instructors, as well as to measure students’ performance. While standardized tests are likely not the most effective form of public library assessment, other summative assessment methods may provide public librarians with evidence of student learning that can be used to advocate for the importance of public library instruction. Summative assessment methods that could be applied in a public library setting include tools like surveys, questionnaires, and student reflections; librarians may also wish to experiment with more complex measurement systems like rubrics, matrices, and portfolios (Booth, 2011, p. 143).

  • Educational technology (ed tech) tools can be used to facilitate summative assessment in public library STEAM learning contexts. Padlet, Poll Everywhere, Mentimeter, Kahoot, Socrative, Flipgrid, Microsoft Forms, Google Forms, and Google Docs are all examples of free online tools that can be used to evaluate students’ learning after library instruction has occurred.* These assessment tools are often used to measure students’ content knowledge gains, but they can also be used to measure students’ affective learning outcomes. For example, at the end of the STEAM workshop at Radnor Memorial Library, librarians could measure students’ affective learning by using a Google Form survey to ask students questions like, “What did you like about today’s activity?” or “How did you feel about today’s activity?” Students could spend ten minutes at the end of the workshop filling out the Google Form and submitting their survey responses; a short sketch of how those responses might be tallied appears after this list. Of course, this summative assessment could also be delivered without the use of technology; the librarians could facilitate a group discussion or think-pair-share to elicit responses, or they could ask students to write or draw their responses. Librarians should always consider their audience when selecting a summative assessment tool. For example, while a Google Form may be the best option to assess a group of high school students, it would not be helpful in an early learning context. Whatever the tool, librarians should tailor their assessment design to meet the needs of their unique community of learners.
    • * Please note that these assessment tools could also be used to facilitate formative assessment during the instruction session! The difference is that summative assessment occurs at the end of the instruction, with a goal of retroactively evaluating the quality of instruction and extent of students’ learning, whereas formative assessment is integrated throughout the session with a goal of incorporating student feedback into the instruction as it occurs. 
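
Because many of these tools export responses as a spreadsheet, the results can be summarized with only a few lines of code. The sketch below is a minimal, hypothetical Python example: the file name and column headers are placeholders rather than the actual export format of any particular tool, and the two questions mirror the affective-learning prompts suggested above.

import csv
from collections import Counter

def summarize_responses(path="steam_workshop_feedback.csv"):
    """Tally a Likert-style rating and collect open-ended comments.

    The file name and column headers are hypothetical placeholders for a
    survey export; adjust them to match the actual form.
    """
    rating_question = "How did you feel about today's activity? (1-5)"
    comment_question = "What did you like about today's activity?"
    rating_counts = Counter()
    comments = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Count how many students chose each rating.
            rating_counts[row[rating_question]] += 1
            # Keep any non-empty free-text comments for later review.
            if row[comment_question].strip():
                comments.append(row[comment_question].strip())
    return rating_counts, comments

if __name__ == "__main__":
    counts, comments = summarize_responses()
    for rating, n in sorted(counts.items()):
        print(f"Rating {rating}: {n} students")
    print(f"{len(comments)} open-ended comments collected")

Even a summary this simple can be saved alongside attendance numbers as a record of how a session was received.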

Using Summative Assessment as an Advocacy Tool

A common question is why public librarians should devote time to summative assessment. Unlike K-12 teachers — who are often compensated, promoted, or demoted according to their students’ standardized test results — public librarians are usually not evaluated based on the results of summative assessments. If their patrons are not being graded and public librarians are not able to use summative assessment to improve instruction during class, then why should they bother? It may seem like an unnecessary or extra step for an already very busy group of instructors.

However, summative assessment can play an important advocacy role for those engaged in public library instruction. Historically, assessment research in public libraries has been conducted with an end goal of demonstrating “the value of the public library within a community” — including both “social and economic benefits” — and youth librarians can take this same approach by using summative assessment methods to demonstrate the social and economic impact of STEAM instruction. In a climate where “public libraries are consistently being challenged to do more with less” and “to convey to their communities the value of the library,” summative assessment ought to be viewed as a critical component of the public librarian’s job description (Public Library Association, 2019).

Formative and Summative Assessment and STEAM Learning

Researchers have undertaken both formative and summative assessment projects to make a case for the value of STEAM in informal learning contexts that are similar to public library settings, such as afterschool and summer programs. For example, After School Matters and Columbia College Chicago used content knowledge tests and attitude surveys to demonstrate the value of their STEAM summer programs for underrepresented high school students (Caplan, 2018). In order to measure students’ learning gains and the changes in their perceptions of STEAM over the course of a six-week summer program, pre-tests were distributed to students on the first day of the program and post-tests were distributed on the last day. The pre-test (a knowledge test and attitude survey) is an example of pre-assessment, while the post-test is an example of summative assessment.

There was a statistically significant gain in students’ content knowledge between the pre-tests and post-tests, but the majority of students started out reporting positive attitudes toward STEAM and therefore did not demonstrate statistically significant changes in attitude (Caplan, 2018). The researchers connected high school students’ participation in this program — and their gains in STEAM content knowledge — with larger economic and social problems, such as the shortage of engineers and the lack of underrepresented students in STEM majors (Caplan, 2018). By demonstrating the STEAM summer program’s impact on students’ content knowledge, they were able to link their program’s learning outcomes with larger goals of equity and economic growth. Librarians could use pre-tests and post-tests to make a similar case for expanding and funding STEAM instruction for youth in public library settings; by demonstrating students’ learning gains and/or changes in students’ attitudes toward STEAM, they could demonstrate the public library’s long-term economic and social impact.
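
To give a concrete sense of how such a pre-test/post-test comparison works, the short sketch below runs a paired t-test on invented, matched scores for ten hypothetical students. The numbers and the 0.05 threshold are illustrative assumptions, not data from the Caplan (2018) study.

# Minimal sketch of a pre-test/post-test comparison using a paired t-test.
# All scores below are invented placeholders for illustration only.
from scipy import stats

pre_scores = [42, 55, 61, 38, 70, 49, 58, 44, 66, 52]
post_scores = [58, 63, 72, 51, 78, 60, 66, 59, 74, 61]

# Paired t-test: each student's post-test score is compared with
# that same student's pre-test score.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean gain: {mean_gain:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The gain is statistically significant at the 0.05 level.")
else:
    print("The gain is not statistically significant at the 0.05 level.")

A paired test is used here because the same students take both tests; librarians partnering with university researchers (as in the Spotlight below) could ask their partners to run this kind of analysis on real program data.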

Developing Assessment Tools

Once librarians have established a set of assessment goals and considered different assessment strategies, the next steps are to create a realistic timeline for continuous engagement and to design appropriate assessment tools (Oakleaf, 2010, pp. 83-85). These tools may facilitate either qualitative or quantitative assessment methods, and they may look radically different depending on the audience, learning context, and desired learning outcomes: “Assessment design is based on knowing who is being assessed … where they are being assessed … and what learning is being assessed” (Cun, Abramovich, & Smith, 2019, p. 40). To facilitate effective STEAM assessment, it is especially critical for librarians to understand the relationship between their audience, learning context, and subject matter.

Measuring a Public Library Makerspace with an Assessment Matrix

In response to the rise of makerspace programs in public libraries, Cun, Abramovich, and Smith (2019) set out to design an assessment matrix specifically to measure learning in the makerspace context. As an example of STEAM learning, public library makerspaces are unique in that they offer “free opportunities for patrons to learn and create through play and exploration” (Cun, Abramovich, & Smith, 2019, p. 40). Whereas school and university makerspaces are limited to student audiences, the public library makerspace is free and accessible to all patrons. Cun, Abramovich, and Smith (2019) conducted research at a makerspace in the central library of a mid-sized city, which quickly revealed both the opportunities and the barriers of assessment in an informal learning context. First, it was challenging to design one assessment tool that worked for everyone, because of the wide range of learners’ comfort levels and skills with makerspace technologies; this challenge holds true in any STEAM library instruction context. Students will have different levels of prior knowledge related to STEAM topics, and they may or may not already identify as being interested in and knowledgeable about STEAM. Instructors should be aware of how learners’ identities will impact their responses to STEAM instruction and assessment, and they should design assessments that acknowledge that different learning outcomes will be appropriate for different learners. For example, “developing interest in science” may be the outcome most appropriate for a patron new to the makerspace, whereas a more experienced student could end up “understanding science knowledge” or “engaging in scientific reasoning” (National Research Council, 2009, pp. 58, 61, 66).

Because of the many different audiences who visit the public library makerspace and the variety of learning opportunities they may pursue, Cun, Abramovich, and Smith (2019) settled on an assessment matrix in order to make more explicit connections between summative and formative assessment and the type of learning that occurs in makerspaces, as well as to “help librarians integrate library makerspaces into the assessment practices they already use for understanding patron needs and expanding services” (p. 46). While the matrix worked especially well for the complex library making environment, it may be helpful for librarians in any STEAM instruction context to explicitly map out connections between their audience, learning outcomes, and assessment strategies. This process helps uncover possibilities for better aligning assessment tools with patrons’ learning needs and goals. Because the ultimate “purpose of public libraries is to offer resources and services for patrons to meet their learning goals and needs,” this strategy is especially crucial in the public library context (Cun, Abramovich, & Smith, 2019, p. 45).
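
For librarians who want to sketch such a mapping before building a full matrix, even a simple table of audience, outcome, and strategies can serve as a planning aid. The Python sketch below is purely illustrative: the audiences, outcomes, and strategies are invented for this chapter and are not the matrix published by Cun, Abramovich, and Smith (2019).

# Illustrative mapping from (audience, learning outcome) to assessment strategies.
# All entries are invented examples, not the published makerspace assessment matrix.
assessment_matrix = {
    ("new makerspace visitor", "developing interest in science"): [
        "observation notes during open exploration (formative)",
        "exit ticket: 'What would you like to try next time?' (summative)",
    ],
    ("returning teen maker", "understanding science knowledge"): [
        "think-pair-share on how the 3D printer works (formative)",
        "rubric applied to the finished design file (summative)",
    ],
    ("math club member", "engaging in scientific reasoning"): [
        "journal prompt after each meeting (curriculum-embedded)",
        "follow-up project presented to a new cohort (confirmative)",
    ],
}

# Look up the strategies planned for one audience and outcome.
for strategy in assessment_matrix[("math club member", "engaging in scientific reasoning")]:
    print(strategy)

Writing the mapping down in any form, whether in a spreadsheet, a document, or a structure like the one above, makes gaps visible: audiences with no planned assessment, or outcomes measured only summatively.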

Opportunities and Benefits of Assessment

Once you have planned and implemented your assessment, what do you do with the results? The sections above touched on some of the reasons to assess STEAM programs, such as using summative assessment as an advocacy tool. Assessment allows you to understand what your students are learning and how you can improve your programming, and it is also an excellent avenue for communication and advocacy. One of the best things you can do with your assessment results is to share them! Sharing the results of your assessment allows you to communicate the value of the library and its programs to all stakeholders, and it opens the door to community partnerships that benefit everyone.

Using Assessment to Communicate with Stakeholders 

Sharing assessment results is a great way to communicate the value of libraries and their programming to all library stakeholders (Charles, 2015; Samson & McLure, 2007; Seeber, 2013). For example, Berkeley College created a curriculum map by aligning information literacy competencies with the College’s core courses, with specific courses within disciplines, and with course assessments (Charles, 2015). By doing so, the librarians could use the map and their own assessment results to demonstrate to faculty and administrators the value that librarians added to College courses. Just as importantly, the library was also able to communicate the value of information literacy to students.

Public libraries can use the curriculum at local schools to inform their STEAM programming, and proper assessment can reveal how the library is contributing to student learning and engagement in STEAM topics. This may be especially useful because many students visit the library during after-school hours, and assessment results can build support for these programs. While very early learners may only be interested in play and creation, sharing learning outcome results with older children and teens can help them understand how they can apply their program experience outside of the library setting.

When you are able to communicate the value of your programs, you can leverage support to continue and grow them. “In a simple way, the process of sharing assessment results serves as an advertisement for the library and an affirmation of its mission” (Seeber, 2013, p. 362). It is important to remember, however, that assessment should not be conducted with the goal of marketing. The priority should be to improve learner outcomes and instructional skill; marketing and advertising are simply an added benefit.

Using Assessment to Build Partnerships

Sharing assessment results can also help break down barriers and form partnerships with other departments and organizations (Seeber, 2013). Seeber explored how the University Library at Colorado State University – Pueblo was able to break down barriers and form partnerships between librarians and course instructors by sharing the results of the library’s information literacy instruction. Even though librarians at CSU-Pueblo are tenure track, the perception that they are not “real” professors persists among both the librarians and the course instructors. This is similar to how patrons of public libraries, and even the librarians themselves, may not think of librarians as “real” instructors or teachers. This may be the case even if librarians are planning STEAM programs complete with learning outcomes that benefit participants. Seeber writes,

By including teaching faculty in the discussion of student evaluations, the library is positioning itself as being on equal footing with those instructors who are collecting assignments and assigning grades. In a time of thin resources and increased emphasis on performance, creating a mentality of teamwork and partnership on campus can provide a solid boost to morale. (p. 362) 

Think about partners with whom you can share and discuss the creation and results of assessment. It may be that you work with other youth librarians within your library system, or with colleagues in a different department of your own library. You might reach out to the school librarians at local schools, or, if you are near a university, there may be students in Education, Psychology, Sociology, or Library Science departments who would be happy to work with you. Forming partnerships and sharing resources can cut down your labor in the long run and open up more opportunities for advocacy, even as you contribute to your partners’ goals and support student learning.

Spotlight: Radnor Memorial Library

Carrie Sturgill, Head of Youth Services, and her colleague Andrea Elson, Youth Services Librarian, at Radnor Memorial Library in Wayne, Pennsylvania, run the Science, Math and the Arts (S.M.Art) Kids program and other STEM programs for youth. In its fifth year as of 2019, S.M.Art Kids follows the STEM trend by mixing literature with a hands-on STEAM activity. When asked about the role of libraries in STEAM education, they shared, “The libraries are a third place of learning. We’re not school. We’re not home. We’re just something really uniquely different from that. And so it should be exciting and it’s an opportunity to explore new technology and things that kids are curious about.”

Currently, Carrie and Andrea’s assessment is based on verbal and anecdotal feedback from participants and their parents. Parents love to share, and they will tell the librarians when their kids can’t stop talking about a topic, when kids are checking out more books on the topic, and when parents take the kids to explore the topic outside of the library setting, like at a museum. These conversations are indications that the event went well, and the librarians document them and keep them alongside program statistics. The librarians also use the level of chaos during a program as an indicator of how well the activity is going. “If we just literally can’t control the kids because they’re making themselves into toilet paper mummies, we’ve lost them.”

In order to develop more formal ways of evaluating their programs, Carrie and Andrea are beginning a partnership with the psychology department at nearby Villanova University. A new faculty member who teaches a psychology lab that studies school-age children reached out to the library because she wanted to be more involved in the community. This benefits the psychology students, because they can gather data and conduct research, and in return the library will have more formalized tools to use for assessment.

Carrie and Andrea want to tangibly show what and how kids are learning. “I never learned about [assessment] in library school and I think it’s an area that we haven’t really been given a whole lot of resources on, so the fact that we have a team of people to help us out with that” is very exciting. After taking a library marketing class, they began to think about advocacy work and how to explain the value and role of libraries. “It was such a different way of thinking about this type of work, having to advocate for it and explain its value and what our role is, and also sort of putting boundaries in place for what I provide as an information expert, so that other partners and organizations can build on it.” Radnor Memorial’s budding partnership with Villanova University’s psychology department is a perfect example of how beneficial it can be to team up on assessment efforts.

You can see examples of Radnor Memorial’s STEAM programs at their SMArt Kids blog: http://go.unc.edu/SMArt-Kids. Table 2 below has examples of possible ways to assess some of the programs.

Table 2: Possible STEAM Assessment at Radnor Memorial Library

These are examples of how the youth librarians at Radnor Memorial Library could assess the S.M.Art Kids Program.

Type of Assessment | Example
Pre-Assessment
  • Before the next Crazy 8’s Math Club meeting, ask students to watch a video about geometry and then take an online quiz to gauge their learning. 
Formative Assessment
  • In the Monsterology workshop, students design and build boats that will hold treasure and float. Mid-workshop, ask them to stop and reflect on their engineering skills by sharing their progress so far with another student. 
Summative Assessment
  • At the Electric Gingerbread Man workshop, ask students to draw a picture, chart, or diagram to explain what they learned about circuits.
Confirmative Assessment
  • Ask students from the Monsterology workshop to return and help lead a boat building workshop for a new group of students.

 

References

Black, P., & Wiliam, D. (2010). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan Magazine, 92(1), 81-90. doi:10.1177/003172171009200119

Booth, C. (2011). Reflective Teaching, Effective Learning: Instructional Literacy for Library Educators. Chicago: American Library Association.

Caplan, M. (2018). Assessment of the impact of summer STEAM programs on high school participants’ content knowledge and attitude towards STEAM careers. ASEE IL-IN Section Conference. 2. https://docs.lib.purdue.edu/aseeil-insectionconference/2018/k12ol/2

Carter, T. M. (2013). Use what you have: Authentic assessment of in-class activities. Reference Services Review, 41(1), 49-61. doi:10.1108/00907321311300875

Charles, L. H. (2015). Using an information literacy curriculum map as a means of communication and accountability for stakeholders in higher education. Journal of Information Literacy, 9(1), 47–61. https://doi.org/10.11645/9.1.1959

Cun, A., Abramovich, S., & Smith, J. M. (2019). An assessment matrix for library makerspaces. Library and Information Science Research, 41(1), 39-47. doi:10.1016/j.lisr.2019.02.008

Frey, B. (2018). Authentic Assessment. In The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation (Vols. 1-4). Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781506326139

Frey, B. (2018). Formative Assessment. In The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation (Vols. 1-4). Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781506326139

Frey, B. (2018). Summative Assessment. In The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation (Vols. 1-4). Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781506326139

Gureckis, T. M., & Markant, D. B. (2012). Self-directed learning: A cognitive and computational perspective. Perspectives on Psychological Science, 7(5) 464-481. https://doi.org/10.1177/1745691612454304

Heritage, M. (2007). Formative assessment: What do teachers need to know and do? Phi Delta Kappan, 89(2), 140–145. https://doi.org/10.1177/003172170708900210

Jackson, S., Hansen, C., & Fowler, L. (2005). Using selected assessment data to inform information literacy program planning with campus partners. Research Strategies, 20, 44–56. doi:10.1016/j.resstr.2005.10.004

National Research Council. (2009). Learning Science in Informal Environments: People, Places, and Pursuits. Washington, D.C.: The National Academies Press. Retrieved from www.nap.edu/catalog.php?record_id=12190 

Oakleaf, M. (2010). Writing information literacy assessment plans: A guide to best practice. Communications in Information Literacy, 3(2), 80-90. doi:10.15760/comminfolit.2010.3.2.73

Paberza, K. (2010). Towards an assessment of public library value: Statistics on the policy makers’ agenda. Performance Measurement and Metrics, 11(1), 83-92. doi:10.1108/14678041011026892

Public Library Association. (2019). Advocacy. http://www.ala.org/pla/leadership/advocacy

Radnor Memorial Library. (2016). Crazy 8’s Math Club: Geometry. [Website]. Retrieved from https://radnorlibrarysmartkids.wordpress.com/2016/11/03/crazy-8s-math-club-geometry/

Radnor Memorial Library. (n.d.). S.M.Art Kids: Science, Math, and the Arts at Radnor Memorial Library. [Website]. Retrieved from https://radnorlibrarysmartkids.wordpress.com/

Samson, S., & McLure, M. (2007). Library instruction assessment through 360°. Public Services Quarterly, 3(1/2), 9–28. https://doi.org/10.1300/J295v03n01_02

Seeber, K. (2013). Using assessment results to reinforce campus partnerships. College & Undergraduate Libraries, 20(3–4), 352–365. https://doi.org/10.1080/10691316.2013.829366