Workgroup 3: assessment and accountability

Short summary

The areas of assessment and accountability present both the greatest potential for, and the greatest barriers to, change in education. Both act as drivers of practice (including curriculum and pedagogy).

Government policy, in the form of the National Curriculum and the strategic direction set for assessment and accountability, is therefore a critical lever for influencing educational practice.

Big Data is a rapidly developing area in education which could radically enhance the effective use of data, including helping to address some of the challenges of assessing knowledge-age skills and attributes.

This section is divided into three major subsections: Accountability, Assessment, and Big Data. For each, Workgroup 3 provides a brief summary of the issues, followed by some indicative recommendations.


Accountability

Education providers’ practices are driven by the accountability frameworks within which they have to operate. For Early Years, schools, FE and Skills, and Initial Teacher Education (ITE) this is the respective Ofsted Framework. There is a general move towards greater coherence and commonality across these four Ofsted Frameworks. For Higher Education, the QAA sets the accountability expectations.

A key challenge in this area is to ensure that accountability frameworks remain relevant with regard to how digital technology is used. Given the rapid pace of innovation and changing learner expectations, specific guidance on technology is generally unhelpful in the long term. Instead, an approach that encourages reflection and strategic planning to enhance learning, teaching and assessment with digital technology could be adopted across sectors.

Accountability Recommendations

  1. Government should make it a statutory requirement for all education providers to describe and justify how they use digital technology to enhance learning across the curriculum (including for assessment) as part of the teaching and learning policies on their public websites by September 2015. This should not be an additional burden; it should simply form part of existing reporting.
  2. The relevant accountability frameworks, such as the Ofsted frameworks/criteria for Early Years, Schools, FE and Skills, and ITE, should evolve so that they require providers to explain and justify how they use digital technology across the curriculum to enhance learning. From September 2015, it should not be possible to obtain a rating of Good or Outstanding from Ofsted, or from the QAA for HE, without having provided a good or outstanding justification for the provider's use of digital technology to enhance learning across the curriculum.
  3. The criteria for employment as an Ofsted or QAA inspector should include a requirement to maintain a sound understanding of the ways in which digital technology impacts on their subject area(s), pedagogy, and other aspects of education within their sector expertise.
  4. Ofsted and the QAA should ensure that all inspectors engage with regular professional learning opportunities, including recent and relevant experience, so that they can meet the specific requirement in Recommendation 3 above – ETAG member organisations could support this effort.


Assessment

Digital technologies have changed much, from the way we work to the way we shop, from the way families communicate to the memberships we enjoy. This presents a double-edged sword for assessment. On the one hand, the capabilities and skills sought extend far beyond knowledge retention, including, for example, the critiquing and collaborating capabilities that PISA is now embracing. On the other hand, if digital technology allows us to do things differently, it is time to take a critical look at how we currently manage high-stakes testing. Two years ago, for example, a meeting of European education ministers warmly debated open, internet-connected examinations.

This is yet another area where being clear about where we are going helps to guide small steps today, and wastes fewer resources pursuing blind alleys. Digital technology-enabled assessment is certainly not simply a screen-based analogue of paper-based assessment. Moving a multiple-choice paper to a multiple-choice screen misses both the opportunity and the need to do more. There is potential for digital technology-enabled assessment to enable the rich learning experiences that traditional assessment is often presented as reducing[1].

Assessment recommendations

  1. The DfE should set a strategic goal for General Qualifications (GQs) across the majority of subjects, including English, Maths and Science, to move towards utilising digital technology-enabled assessment.
  2. The DfE and Ofqual should actively support and encourage the Awarding Bodies to develop digital technology-enabled assessment for GQs, starting with pilots across a range of subject areas in 2015/2016.
  3. The JCQ should develop a framework for the implementation of digital technology-enabled assessment for GQs in 2015.

» A more detailed explanation of this workgroup’s thinking on assessment »

Big data

In education, as in other areas of life, the way we use technology increasingly generates large amounts of data. The collection, analysis and use of this data, either in real time or in the long term, is transforming every aspect of our lives – often empowering us as individuals as we have seen for example in Health.

Making intelligent use of data for learning and teaching is beginning to demonstrate real impact for three key stakeholder groups:

  • Learners (and their families), who need access to, and clear information about, their own data;
  • Teachers, trainers and employers, who need to make use of analytics to improve learning design and delivery;
  • Providers, national agencies and industry, where nation-wide and international use of analytics of key data sets can be encouraged to respond to market demand, employer needs and competition.
Put simply: better data for individuals, institutions and policymakers will be essential if everyone is to play their part in making learning better.
A simple example: students’ phones know how fast they come into, or leave, educational establishments – a helpful marker for engagement on “special” days. Yet that data and its aggregates are currently unavailable to either institutions or individuals. There is also a flip side: should we, for example, have no practical control over the intimate data being collected about learners, particularly children, by technology already in use? Or should young adults leave education without being aware of what data employers use in recruitment? Should we not encourage responsible use of technology and learning analytics to inform and support learners in managing their personal information online and interacting with others? Industry will step up its use of sophisticated tools for data collection and analysis. We cannot afford to do nothing.
This applies across all sectors of education and informal/adult/work-based learning. A number of things are brought together under this heading, including the use of ‘big data’, (social) learning analytics and education analytics (see the OU definition of terms).
Although big data has the potential to move all sectors forward, there are perhaps particular challenges in schools. In our consultations, concern was repeatedly expressed that around 80% of schools are tied to a single management information system (MIS) from a single supplier. Many felt that more robust competition – and perhaps data exchange protocols – might be needed to deliver the kinds of new analytics that schools, teachers and students need now.


Big Data Recommendations

  1. The government should make data science and analytics a funding priority.
  2. All educational organisations should be required to put in place clear public statements about their collection, use and analysis of data, and to inform students about this (including the role played by third-party services).
  3. The government should establish a task force to develop proposals for a national framework for sharing key data across education, including between students, that addresses issues of data protection and ownership.
  4. The concern about the lack of effective competition in the MIS market should be addressed.

[1] For example, there is considerable evidence that digital technology-enabled assessment systems based on ‘comparative judgement’ are both more flexible and more robust than traditional systems, allowing genuinely open-ended and problem-based tasks to be used for robust assessment. There is a strong need to examine both the reliability and the validity of traditional assessment. Paper-based, standardised tasks are often seen as highly reliable: we can rely on them providing the same measure repeatedly. They stand up less well to scrutiny when we ask whether they are valid, i.e. whether they actually measure what we think is valuable. There is little point in reliably measuring understanding and competencies that we do not hold to be valuable. Research suggests that comparative judgement not only supports tasks that yield more valid measures of learners, but is also more reliable. See Pollitt (2012).
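To make the footnote's idea concrete, the following is a minimal sketch of how comparative judgement can produce scores: judges repeatedly compare pairs of pieces of work, and a statistical model (here the Bradley-Terry model, one common choice, fitted with a simple iterative algorithm) turns those pairwise preferences into a quality scale. The scripts "A", "B", "C" and the win counts are purely illustrative assumptions, not real assessment data.

```python
def bradley_terry(wins, items, iterations=100):
    """Estimate a quality score for each item from pairwise judgements.

    wins[(a, b)] = number of judgements preferring item a over item b.
    Uses the standard minorise-maximise update for the Bradley-Terry model.
    """
    score = {item: 1.0 for item in items}
    for _ in range(iterations):
        new_score = {}
        for i in items:
            # total judgements won by item i
            num = sum(wins.get((i, j), 0) for j in items if j != i)
            # weighted count of comparisons involving item i
            den = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0)) / (score[i] + score[j])
                for j in items if j != i
            )
            new_score[i] = num / den if den else score[i]
        # normalise so the scores sum to the number of items
        total = sum(new_score.values())
        score = {i: s * len(items) / total for i, s in new_score.items()}
    return score

# Illustrative data: judges compared each pair of three scripts ten times.
wins = {("A", "B"): 8, ("B", "A"): 2,
        ("A", "C"): 9, ("C", "A"): 1,
        ("B", "C"): 7, ("C", "B"): 3}
scores = bradley_terry(wins, ["A", "B", "C"])
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # script A judged strongest, then B, then C
```

The point of the sketch is that no judge ever assigns a mark: the scale emerges from many quick, relative decisions, which is what allows open-ended tasks to be assessed reliably.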