We know that tutoring is effective, yet there is a crucial gap in the evidence about which forms of implementation work best for schools and colleges in different contexts. As a result, five tutoring providers (TP Tutors, Get Further, Coachbright, Equal Education and Mannings Tutors) partnered with ImpactEd to evaluate the impact of their tutoring models on pupil outcomes and to identify which approaches to implementation are most effective across different settings.
The evaluation ran from March 2023 to October 2023 and looked at implementation factors such as group size, mode of delivery, time of tutoring, session length, tutor qualifications, pupil selection and pupil attendance at sessions. It also examined the impact of tutoring on two non-cognitive pupil outcomes: motivation and self-efficacy.
What was the impact?
The findings below offer an initial insight into the impact of tutoring on motivation and self-efficacy. Validated surveys were used to measure both outcomes: the Intrinsic Value subscale of the Motivated Strategies for Learning Questionnaire (MSLQ) for motivation, and the MSLQ Self-Efficacy subscale for self-efficacy. We will be discussing these findings, along with findings from a wide range of tutoring evaluations we've delivered, in our Talking Impact webinar on the 14th of December at 1:30pm.
Impact on self-efficacy: Primary and Secondary
Impact on self-efficacy: Further Education
Impact on motivation: Primary and Secondary
Impact on motivation: Further Education
Limitations
There are some important limitations to this evaluation that should be considered when assessing its findings:
Inconsistent representation of the sample: Not all pupils completed the same surveys. Of the five participating tutoring organisations, four are represented in the motivation data and three in the self-efficacy data.
No control group design: This evaluation does not feature a control group. Control groups allow us to assess whether changes between pre- and post-intervention data occur only among the individuals who received the intervention, or whether they reflect a wider background trend in the population. Without a control group, we cannot confidently conclude that these changes are associated with the intervention rather than some other background factor.
Not all pupils participated in surveys: Not all pupils who received tutoring responded to the surveys. This means the pupils who did respond may differ in important ways from the wider cohort, and the data is therefore subject to selection bias.
On the 14th of December we'll be discussing these findings from the Joint Tutoring report, alongside what we've learned about successful implementation of tutoring and insights from our other work in this area. For more information about the event, please go here and keep an eye out on socials!