
Thursday, 18 September 2014

Assessment beyond Levels: the benefits of using standardised assessments for planning and monitoring purposes



Arrangements for assessment in the new curriculum give schools greater opportunity to develop assessment frameworks best suited to their own school populations. Although ‘assessment without levels’ is a new concept for a generation of teachers brought up on tracking and evaluating progress by reference to levels, and may initially pose challenges for school leaders, in practice it gives schools greater freedom to develop their own assessment.
In this new climate, where schools are accountable for demonstrating year-on-year progress without using levels, diagnostic, standardised assessments are an excellent way of supporting other school assessments as evidence of progress. Standardised assessments from CEM at the University of Durham provide valid and reliable information on pupils’ academic profiles and can work alongside any school assessments already in place; they are key in providing informed evidence for progress tracking and for benchmarking pupils’ performance against national norms. In addition, CEM assessments can be used not only to establish a baseline for every pupil or a particular cohort, but also to complement teacher assessment in monitoring learning and to assist with reporting progress to parents and other stakeholders. Many schools looking for reliable ways of demonstrating year-on-year progress through externally verified standardised tests have already embedded CEM assessments into their assessment frameworks, both as evidence of progress for inspection purposes and for their own self-evaluation of teaching and learning.
What makes CEM assessments particularly useful to teachers and school leaders is not only the range of assessments available to suit different ages and key stages, but also their reliability in terms of future predictions – a feature especially useful for target-setting on an individual and cohort basis. Since test development at CEM, the largest provider of standardised school assessments in the world, is research-based, CEM assessments are trialled on very large samples, which underpins their reliability. CEM have been delivering baseline tests and value-added measures for over 30 years to over 3,000 secondary schools worldwide.

Range of Assessments
CEM assessments cover all stages of education, from Nursery/EYFS to Post-16, and CEM’s Pscales+ assessments are aimed at supporting pupils with special educational needs.


Aspects: Nursery/EYFS (ages 3 – 4)
PIPS: Primary (ages 5 – 11)
InCAS: Primary (ages 5 – 11)
MidYIS: KS3 (ages 11 – 14)
INSIGHT: End of KS3 (ages 13 – 14)
Yellis: KS4 (ages 14 – 16)
Alis: KS5 (ages 16+)
CEM IBE: KS5 (ages 16+, IB courses)
Pscales+: All key stages (ages 3 – 19), special schools

CEM assessments are adaptive and computer-based, meaning each pupil is challenged at a level that is appropriate to them. They are easy to administer, and schools receive rapid, comprehensive analysis and feedback, including analytical software.
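
To illustrate what ‘adaptive’ means here, the toy Python sketch below raises the item difficulty after a correct answer and lowers it after an incorrect one, homing in on a pupil’s level. It is a deliberately simplified staircase, not CEM’s actual item-selection algorithm, which is far more refined.

    # Toy sketch of adaptive testing: each answer moves the next item's
    # difficulty up or down, so pupils mostly see items near their own level.
    def adaptive_test(answers_correctly, n_items=10, start=50.0, step=8.0):
        difficulty = start
        for _ in range(n_items):
            if answers_correctly(difficulty):   # pupil attempts an item
                difficulty += step
            else:
                difficulty -= step
            step = max(2.0, step / 2)           # smaller moves as we home in
        return difficulty                       # rough estimate of pupil level

    # A pupil who can answer items below difficulty 62 settles near that level:
    print(adaptive_test(lambda d: d < 62))      # -> 62.0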

Primary Schools: aims and benefits of CEM assessments
How can CEM data help schools to measure attainment and track progress?
At primary level, Aspects, PIPS and InCAS assessments provide an objective baseline measure which is invaluable for tracking progress at individual and cohort level. In the absence of levels, they provide an objective, external evaluation and standardisation against national norms or age-equivalent scores and, alongside internal school assessment data, assist with year-on-year tracking of progress. Information from on-entry assessments helps with pupil profiling by establishing an initial ability baseline which can be used to inform future teaching and learning, and school curriculum planning. The detailed data obtained over time allow teachers to identify pupils’ gaps in learning, monitor individual progress and plan future teaching and learning activities. At whole-school level, the data not only provide robust evidence of progress that can be used for accountability purposes, but can also support monitoring of teaching and learning at cohort level; inform curriculum planning and evaluation; identify staff professional development needs; and support school improvement.

Can CEM data be used by teachers for target-setting?
As PIPS and InCAS data identify gaps in learning, they can help with individual target-setting for improvement and with mapping pupil progress. Easy access to detailed analysis of performance at cohort level – for example by class, year group or subject area – can support school leaders in identifying targets for particular cohorts and in monitoring and evaluating earlier interventions or particular targets. The predictive nature of the assessments assists with curriculum planning and target-setting for whole cohorts against school expectations or national accountability measures (floor standards).

How can the CEM data help parents understand their child’s attainment and progress?

PIPS and InCAS data provide a wealth of information that can support school assessment data in reporting attainment and progress to parents. Within the changing school landscape, parents need simple and accurate information about their children’s learning in an easy-to-understand format. CEM reports identify where children are with their learning against their own prior achievement and/or national standards in a visual format, and parental feedback suggests that parents find PIPS data a particularly useful tool in reporting progress. InCAS data show progress based on age-standardised scores, so parents can see how pupils’ abilities relate to their chronological age. Where there is an indication of a weakness, parents always welcome early identification. In instances of high achievement, CEM data can be used to report on the appropriate level of challenge. One school which introduced sharing PIPS data with parents reported higher levels of parental engagement and improved parental participation at school events.

How can the CEM assessment data help with providing evidence of progress for school inspectors?
Alongside school assessment data, CEM assessments provide an extra layer of external, robust evidence of pupil progress and attainment. As inspections are focused on the impact of school processes, the ability to demonstrate how a school’s assessment framework contributes to improved outcomes for pupils is crucial to evaluating the effectiveness of teaching and learning, and pupil achievement. In the absence of levels, the data from CEM assessments can be used to support teacher assessment data as evidence of progress at individual and cohort level. The information from baseline assessments can also be used to establish pupils’ ability levels on entry as compared with national norms. CEM assessment data are a source of reliable, standardised information for inspectors and allow school leaders to demonstrate how the data are used for school self-evaluation and improvement. One school reported that PIPS data helped to establish pupils’ ability levels during inspection; PIPS data, along with other school assessment data, were used as evidence of outstanding progress made by pupils.

How can CEM assessments inform standardisation and national benchmarking in the absence of levels?
Comparisons with national norms allow teachers to put school assessment data into context and assist with benchmarking pupils’ performance, which is particularly useful when schools work in greater isolation and need to demonstrate progress between standardised key stage assessments. CEM assessments are in tune with the new way of assessing progress and attainment in the new curriculum, where assessments at the end of key stage 2 will report “secondary readiness” by way of national standardisation (comparison with others). Under the new accountability measures, schools will be expected to get at least 85% of their pupils to the new “secondary ready” standard, which is expected to be reported on a standardised national scale of 80 – 130, with a score of 100 described as the “secondary ready” standard. The new floor standards will be based on key stage 2 results and pupil progress, and schools will be expected to track the rate of progress from a new baseline assessment in Reception. As CEM assessments already provide similar baseline data and can be used for tracking progress throughout the primary years, based on national standardisation and evidence of individual attainment and progress, together with local school data based on teacher assessment they can provide robust evidence of progress benchmarked against national standards. For comparison purposes, PIPS assessment feedback is standardised with an average pupil scoring 50. CEM data are ideally placed to contribute to the broad range of information that can be made available to parents and the wider public about school performance, and to a fair and transparent school accountability system.
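
To make the mechanics of standardisation concrete, here is a minimal Python sketch that places a raw score on a reporting scale by comparison with a norm group. The norm-group mean and standard deviation, and the scale SDs (15 on the 100-centred scale, 10 on the PIPS-style 50-centred scale), are assumptions chosen for illustration, not published CEM or DfE parameters.

    # Illustrative only: convert a raw score to a standardised score via a
    # z-score against a national norm group, then re-express it on the scale.
    def standardise(raw, norm_mean, norm_sd, scale_mean, scale_sd):
        z = (raw - norm_mean) / norm_sd        # position relative to norm group
        return scale_mean + scale_sd * z       # score on the reporting scale

    # KS2-style scale: reported 80-130, with 100 as "secondary ready"
    print(round(standardise(34, norm_mean=30, norm_sd=6,
                            scale_mean=100, scale_sd=15)))   # -> 110

    # PIPS-style scale: the average pupil scores 50
    print(round(standardise(34, norm_mean=30, norm_sd=6,
                            scale_mean=50, scale_sd=10)))    # -> 57

The same raw performance maps to 110 on one scale and 57 on the other; the scales differ only in how the comparison with the norm group is expressed.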

How can CEM tests help with screening for SEN, EAL and More Able Pupils?
The diagnostic nature of CEM assessments provides data for early identification of special educational needs or EAL needs that can be a barrier to learning. At the other end of the spectrum, early identification of pupils who are particularly able (Gifted & Talented) can assist with curriculum planning and inform teaching and learning to a greater degree. Early identification of specific learning needs is crucial to providing intervention at an appropriate level, and the data from subsequent assessments can be used to evaluate school interventions. Diagnostic assessments in InCAS Reading and Maths are supported by research-based remediation advice to empower teachers in diagnosing specific learning difficulties and providing the right level of support.

How do CEM assessments contribute to pupil voice?
As pupil engagement in learning is key to motivation and improvement, CEM assessments, through measuring attitudes, can provide valuable information for teachers and school leaders regarding pupils’ attitudes to learning. At the primary stage, attitudes to mathematics, reading and school are assessed. These data can help with identifying mindsets towards learning in different areas of the school curriculum and with bridging the pastoral and academic dimensions of the school. The data can also be used for mentoring learning and developing cross-curricular thinking and study skills to improve pupils’ attitudes towards learning. This information is an invaluable addition to academic and pastoral profiling, and can contribute significantly to reporting and discussions with parents.

Secondary Schools: aims and benefits of CEM assessments
The benefits of CEM assessments (MidYIS, INSIGHT, Yellis, Alis and CEM IBE) at secondary school level can be demonstrated through their ability to add an extra dimension to other school data used for monitoring teaching and learning at subject, department/faculty and whole-school level. As students progress through their school career, more information becomes available about their progress and attainment. These data provide additional information about students’ academic profiles, making CEM assessments a very accurate and reliable tool for future predictions at individual subject level. This type of information is very useful for curriculum planning at secondary school level and for careers advice, including various subject options and/or qualifications at GCSE level and beyond. MidYIS tests have an established reputation for delivering accurate baseline data that can be used as a springboard for progress measures as well as for validating “secondary readiness” data. Invaluable feedback for progress tracking includes predictions and chances graphs for external examinations as well as a full progress reporting system, currently based on value-added. CEM assessment data can be fully integrated into schools’ information management systems for analysis purposes, and can be used by teachers and leaders to inform school planning and monitoring and to evaluate school effectiveness.

How can CEM data help schools with progress measures and provide standardisation for benchmarking?
As secondary schools move from the five A* to C benchmark performance measure to Progress 8 and average point score (APS), CEM assessments can provide crucial evidence to inform tracking and monitoring of student progress. As the new accountability measures will assess students on their progress relative to their starting points, schools will need reliable baseline data for accurate progress monitoring. MidYIS can provide an accurate and stable alternative or addition to the new end of key stage 2 secondary readiness measures. In the absence of key stage 3 national standardised assessments, INSIGHT tests can deliver useful data for progress tracking in core subjects with reference to national norms and inform curriculum planning, including qualification choices at 14+. Since ‘progress’ is key to the new accountability measures, schools will need to develop robust evidence for demonstrating progress, which can be supported by standardised CEM assessment data. All CEM assessment feedback delivers standardised data and, in the case of MidYIS, the average student score is 100. This standardisation can be used by school leaders as extra validation for progress tracking and provides a national benchmark for identifying the ability levels of particular cohorts of students, as well as of individual students. As the tests are designed to measure a ‘typical’ student’s performance, the data inform teachers about the required level of preparation for external examinations. Benchmarking pupils’ performance against others of the same age can also assist school leaders with evaluating the school’s teaching and learning or particular interventions, as well as with reporting to parents and inspectors.
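
The value-added idea behind such progress measures can be sketched in a few lines of Python: fit a prediction line from baseline to outcome on a norm sample, then report a pupil’s achieved result minus the predicted one. All numbers below are invented for illustration, and CEM’s actual value-added methodology is considerably more sophisticated.

    # Minimal value-added illustration: predict outcome from baseline using a
    # norm sample, then report achieved minus predicted (the 'value added').
    def fit_line(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return my - b * mx, b                  # intercept, slope

    # Hypothetical norm sample: baseline scores vs later GCSE points
    baseline = [85, 95, 100, 105, 115, 120]
    outcome  = [38, 44, 48, 50, 56, 58]
    a, b = fit_line(baseline, outcome)

    predicted = a + b * 110                    # pupil with baseline score 110
    value_added = 57 - predicted               # pupil actually achieved 57
    print(f"predicted {predicted:.1f}, value-added {value_added:+.1f}")
    # -> predicted 52.8, value-added +4.2

A positive residual suggests progress beyond what students with similar starting points typically make; aggregated across a cohort, this is the essence of a value-added measure.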

How can the feedback data be used to inform target-setting?
Feedback from CEM secondary school assessments can support target-setting at individual and cohort level, including targets for specific subject areas, and can provide performance indicators for Post-16 education. In the absence of levels, CEM data can provide reliable baseline information for setting minimum individual and group targets in different subject areas. This information can be used by subject teachers for monitoring individual attainment and for triangulation with other teacher assessment data. It can also be useful to tutors for monitoring student effort in meeting minimum targets, and for providing a basis for informed teacher-student learning dialogues. The predictive nature of the assessments can assist with curriculum planning and target-setting for whole cohorts against school expectations or national accountability measures. For reporting, monitoring and accountability purposes in the absence of levels, information about likely examination performance can assist with on-track monitoring and with establishing individual and cohort positions against expected targets. Target predictions are a useful tool in informing teachers about setting appropriate challenge at student and class level and, at whole-school level, can contribute to evaluating the effectiveness of teaching and learning.

To what extent can CEM data be used for reporting purposes?
Comprehensive feedback from CEM assessments, which includes predictions and chances graphs related to external examinations as well as a full value-added reporting system, is a reliable way of communicating progress to parents and other stakeholders, often providing external validation of school assessment data. Parental feedback indicates that parents value assessment feedback that pinpoints their child’s position in comparison with other students or national standards. Indeed, one of the reasons for removing national curriculum levels is that parents found levels confusing, as they attached an abstract number to particular attainment without identifying students’ strengths or weaknesses. Since CEM assessment data are standardised against a nationally representative sample of schools, and the feedback includes the percentile band into which the student’s score falls and identifies the ability band (A – D) to which the student’s score belongs, parents can be reliably informed about their child’s ability profile. This type of reporting helps with setting future targets, and Individual Pupil Record Sheets (IPRS) provide an accurate record which summarises all the baseline information for a particular student on one sheet. Easy access to data manipulation and analysis can help senior leaders with the analysis of predicted performance for particular cohorts or groups of students. This extra layer of evidence for pupil progress, with predictions based on ability profiles, provides substantiation regarding school effectiveness when reporting to school inspectors. This evidence is particularly useful in the absence of levels, where schools are expected to provide reliable data for accountability purposes and demonstrate year-on-year progress between key stage 2 and end of key stage 4 qualifications. CEM assessment data can also support evidence for the effectiveness of school interventions and can provide evidence for reporting pupil achievement by comparison with different groups of students in the school and nationally.
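
As a simplified picture of this kind of feedback, the sketch below maps a standardised score to a percentile and then to an A – D band. The mean-100/SD-15 scale and the reading of A – D as quartile bands (A = top quarter) are assumptions made for illustration, not CEM’s published definitions.

    # Hedged sketch: standardised score -> percentile -> A-D band.
    # Mean/SD and the quartile reading of the bands are assumptions.
    from statistics import NormalDist

    def report(score, mean=100.0, sd=15.0):
        pct = NormalDist(mean, sd).cdf(score) * 100    # percentile rank
        band = "DCBA"[min(3, int(pct // 25))]          # quartile band, A = top
        return round(pct), band

    print(report(108))   # -> (70, 'B')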

How can CEM tests help with screening for SEN, EAL and More Able Pupils?
The diagnostic nature of CEM assessments provides data for early identification of special educational needs or EAL needs that can be a barrier to learning. At the other end of the spectrum, early identification of students who are particularly able (Gifted & Talented) can assist with curriculum planning and inform teaching and learning to a greater degree, as challenge can be more effectively matched to students’ skills and abilities. Early identification of specific learning needs is crucial to providing intervention at an appropriate level and preventing failure at a later stage. The data from subsequent assessments can be used to evaluate school interventions.

How can feedback from CEM secondary assessments assist school leadership with accountability measures and school improvement?
MidYIS, INSIGHT, Yellis and Alis/CEM IBE standardised assessment data can be used to support effective school self-evaluation and promote school improvement by identifying a school’s strengths and weaknesses and by reference to national benchmarking. CEM feedback can also facilitate monitoring of standards over time, where statistical process control charts show yearly progress measures against statistical significance boundaries. In the absence of levels and of common benchmarking at key stage 3, schools need to ensure that their assessment frameworks provide reliable data for demonstrating students’ progress over time and attainment relative to their baseline on entry. Schools therefore need to develop systems for tracking student progress in order to present data in support of self-evaluation statements about the progress made by students. Although Ofsted do not have any predetermined view as to what specific system schools should use, inspectors’ main interest is whether the approach adopted by the school is effective in measuring what progress pupils are making and how this relates to their expected level of progress. To assist with reliable evidence of students’ progress over time – one of the key indicators of school effectiveness – INSIGHT assessments can provide value-added measures of progress compared with pupils of similar abilities in other schools. The feedback from these tests can also show progress from MidYIS tests (progress at key stage 3) and can give predicted grades at GCSE level in a wide range of subjects – invaluable data for school leaders with regard to external validation of the effectiveness of teaching and learning, and an aid to curriculum and improvement planning.
Regarding accountability measures, from September 2016 schools will be required to provide information via a standard ‘snapshot’ on the school website, with performance tables giving more detailed information, including access to data of interest to Ofsted, parents and other interested parties. In order to provide accurate and reliable information for the new accountability measures, CEM secondary school assessment data can offer school leaders robust evidence of pupil progress, and the standardised pupil scores for reading, maths and science enable school leaders to make comparisons between students and classes for monitoring, planning and evaluation purposes. School leaders can also benefit from PARIS software, which is designed to provide detailed assessment data analysis and produce reports and value-added information for cohorts of students, to help with progress monitoring and GCSE predictions. This type of data analysis can also be very useful for validating school processes and inspection evidence.
In one school, senior leaders used MidYIS data to monitor teaching and learning and adjusted staffing where the data demonstrated poor cohort progress in a subject area in which long-term teacher absence had been covered by non-specialist staff. In this case, the intervention resulted in recruiting a specialist teacher to cover the absence, and progress improved.

Wednesday, 28 May 2014

Assessing Teacher Assessment

Dr Joanna Goodman, an education consultant and Fellow of the CIEA, considers the importance of professional development in assessment for all teachers, as schools enter a new dawn of developing their own processes.

Giving schools greater freedom in assessing students’ learning between different key stages in the national curriculum has raised new issues over teacher assessment and the role of the teacher-assessor. Teachers have always made evaluative judgements about pupils’ performance, but the new curriculum seems to be placing increasing demands on teachers as assessors. Putting teacher assessment at the heart of learning will have implications for schools in terms of planning, training and developing assessment leads. Furthermore, moderation procedures will need to be reviewed to ensure standardisation.

If we are serious about the high quality of school-based assessment, then we need to be serious about developing identified staff whose prime responsibility would be overseeing assessment and moderation within a school or within groups of schools. If, moreover, we want to develop teacher assessment that is valid and reliable (meaning it measures the constructs it is designed to measure, and does so consistently), we may have to consider developing teachers’ assessment skills as part of their professional development. Since assessment is such an integral part of any educational process (Gipps and Murphy, 1994; Earl, 2003), and since its use has greatly increased in schools over the last two decades, it is likely to have an enormous impact on the way teachers see their teaching and the way pupils experience their education (Tymms, 2000; Gibbs et al., 2002; Goodman, 2011). So perhaps now is a good time to start the discussion on the need to develop teachers as skilled assessors. It may be crucial to future teaching and learning development to establish to what extent effective teacher assessment skills could contribute to improved teaching and learning outcomes.

The changing school environment and educational climate call for greater scrutiny of assessment practices in schools. Effective use of assessment that can consistently inform future planning and teaching is key to improving learning outcomes for young people. Talking to teachers and school leaders, I sense a certain degree of anxiety regarding the increased importance being placed on teacher assessment, and a need for greater professional dialogue about developing teachers’ assessment skills. With increasing pressure for improved outcomes, there is a need for greater specialisation of skills within teachers’ professional development.

So where do we start? Perhaps with defining assessment, what it may mean in different contexts, and the purposes it can serve. Do we consider assessment a form of evaluation and of making judgements about learning that feed into the learning process, or do we think of assessment as a measuring device? What is it that we want to assess? How are we going to assess it, and why? Indeed, considering what makes a ‘good’ assessment, Professor Robert Coe, Director of CEM at the University of Durham, identifies a 47-question checklist which includes construct validity, content validity, criterion-related validity, reliability, freedom from biases, robustness, educational value and accessibility.

“An assessment is never a neutral event”, Stobart (2008) asserts, “…if it is for selection, then there may be high-stakes outcomes for the individual taking it. If it is for accountability purposes, then there may be consequences for the school… The task is to make the test good enough to encourage effective teaching and learning” (ibid.). For these reasons, it is worth re-assessing teacher assessment skills when evaluating the effectiveness of assessment practices.

REFERENCES:
Earl, L.M. (2003). Assessment As Learning: Using Classroom Assessment to Maximize Student Learning. London: Sage Publications.
Gipps, C., Hargreaves, E. and McCallum (2000). In S. Askew (ed.), Feedback for Learning. London: RoutledgeFalmer.
Gipps, C. and Murphy, P. (1994). A Fair Test? Assessment, Achievement and Equity. Buckingham: Open University Press.
Goodman, J. (2011). Assessment Practices in an Independent School: The Spirit versus the Letter. King’s College London.
Stobart, G. (2008). Testing Times: The Uses and Abuses of Assessment. Oxon: Routledge.
Tymms, P. (2000). Baseline Assessment and Monitoring in Primary Schools: Achievements, Attitudes and Value-added Indicators. London: David Fulton Publishers.
http://ciea.co.uk/makethegrade/assessing-teacher-assessment/

Saturday, 15 March 2014

Assessment without Levels



‘Assessment without levels’ is a new concept for many teachers who have been brought up on levels as a device for measuring and reporting progress. However, reclaiming teacher assessment for the benefit of improving students’ learning should be viewed as a liberating opportunity for schools to pursue their unique aspirations and to create assessment tailored to their school populations and aims. It gives schools greater freedom to focus on building their learning cultures with the ultimate aim of improving pupils’ outcomes.

The new national curriculum, based on knowledge, understanding and mastery of learning, creates not only challenges but also exciting opportunities for schools from September 2014 and beyond, and signals the most radical changes in education for decades. Ultimately, its effectiveness will be judged by school leaders’ ability and innovative attitude in embracing the opportunity open to them to develop their own ‘broad and balanced’ school curriculum, well matched to their own particular settings and situations. So far, many assumptions have been made, in particular by the press, about the concept of a curriculum based on ‘knowledge and understanding’, with references to rote learning and the regurgitation of facts. This is not what the new curriculum appears to be about.

Knowledge in the wider, Confucian sense is about developing the expertise needed for competency in any area. According to Confucius,

“Only when things are investigated is knowledge extended; only when knowledge is extended are thoughts sincere; only when thoughts are sincere are minds rectified; only when minds are rectified are the characters of persons cultivated…”

Instant access to information has never been easier. With this follows the need for well-developed critical thinking skills to eliminate bias. The knowledge required to progress one’s understanding onto new and higher-order thinking levels is exactly what is meant by ‘knowledge and understanding’ in the new curriculum, as I see it.

However, the most thrilling feature of the new national curriculum is the departure from ‘levels’. For about a quarter of a century, and a generation of teachers, schools have been attaching an abstract numerical value, and a label, to a learning standard attained. On reflection, this practice has been, quite frankly, meaningless, and did little to improve educational standards. Since measuring cannot of itself bring improvement, just as weighing a pig cannot fatten it, a different focus is needed now.

Assessing without levels, for the first time in a generation, gives schools the freedom to focus on building learning cultures suited to their circumstances, separate from managerial and accountability cultures, which serve a different purpose and are not aimed at improving learning. Through reclaiming teacher assessment, schools have been freed from the constraints of having to link teacher assessment to levels (a measuring and accountability purpose) and are now free to develop their own teacher assessment focused on formative processes during production (the learning process). This, however, requires a shift in thinking for a generation of teachers brought up on national curriculum levels and APP.

As assessment is central to learning, and since there is now greater freedom to develop assessment to guide learning between key stages, schools have a chance to define what type of institution they really aspire to be by defining what skills and competencies they value and aim to develop in their pupils. The deliberate removal of levels from the new national curriculum is aimed at developing learning cultures and requires a fresh look at teacher assessment – the type of assessment that develops pupil engagement, feeds forward, leads to learning independence and, during the process, is not linked to any measures.

It seems that putting teacher assessment at the heart of learning has its own challenges, as schools struggle with the concept of assessing without levels and look towards ‘one size fits all’ solutions and ready-made ‘toolkits’. Having the autonomy to develop their own assessment, many now struggle with the prospect of life without levels. The preoccupation with HOW has obscured the need for WHAT. And it is WHAT that needs to be answered first, before moving on to HOW and WHY.

For improved outcomes at accountability stages, schools must concentrate first on identifying what they want to assess and on developing teachers’ confidence in the formative aspect of assessment, without reference to levels to describe progress in numbers (or grades). The quality of formative assessment, and how it is embedded within the teaching and learning process, is crucial to improving learning standards, and this is why schools have been given the freedom to develop their own approaches to assessing without levels.

There is overwhelming evidence (Black and Wiliam, 1998) that the summative use of assessment (giving every piece of work a level or a grade) can distract from the formative aim of improving learning and focusing on next steps, which is what leads to pupils’ progress and learning success.

Within the changing assessment climate, this shift in attention away from accountability measures (progress and attainment at the end of the learning process) needs to occur so that schools can develop high quality assessment for learning strategies to guide and scaffold pupils’ learning, perhaps starting with high quality initial assessment to inform future teaching.

Monday, 20 January 2014

What Works in Education: from myths and fads to evidence-based learning

Teachers are increasingly demanding training that is based on solid and robust research evidence for what really works in education and leads to improved learning.

For far too long, teachers’ CPD and subsequent practice followed various fads and trends that, on the surface, seemed attractive to implement as part of classroom practice. The appeal of many myths, like ‘brain gym’, explicit reliance on different ‘learning styles’, or what has been termed ‘accelerated learning’ techniques, has been based on the premise that relatively simple lists of strategies, if followed, could lead to big learning gains and improvement in pupil engagement. Although these approaches promised research-based foundations in how the brain works, the fact remains that many of these myths, sold to teachers as ‘real evidence’, often lack clear scientific proof that any of the suggested classroom strategies lead to improved learning. It would appear that these quick-fix fads have simply been sold to the teaching profession as short-cuts to improvement based on snippets of inconclusive or out-of-context research.

Dr Hilary Leevers, head of education and learning at the Wellcome Trust, agrees:

“Neuromyths” can merely perpetuate misconceptions about the brain. Of greater concern is when they influence how we are raised or educated. You may be familiar with the idea of different types of learner. For example, if you are a “visual learner” you need content delivered primarily visually. But there is very little scientific evidence to support this idea, and labelling pupils by type of learner and delivering content accordingly limits the richness of their learning experience and may reduce what is learned. (The Guardian, 7 January 2014)

Indeed, labelling pupils can have a negative influence on learning and progress. Moreover, these unproven myths have not only contributed to a stream of ineffective classroom practices that could be referred to as ‘educational fads’; they have also over-simplified some neurological research findings for the sake of popular-psychology appeal, promising instant classroom success. Another reason why these myths can be so damaging is that teachers have come to expect ready-made lists of effective strategies to follow in class and, in some cases, this has led to a tick-list teaching style characterised by little reflection about what really leads to improved learning and quality outcomes for young people. In contrast, substantive evidence-based research into better teaching and learning that results in improved progress cannot be reduced to tick-lists; it is characterised by an approach embedded in the teaching and learning process.

I feel that teachers need some help in distinguishing between solid, evidence-based research into which strategies, if consistently applied, really bring big learning gains, and myths that result in seemingly quick fixes but have little to do with improvement in learning outcomes or with developing the pupil learning autonomy essential for long-term success. It is also critical to emphasise the need for deeper reflection and honest self-evaluation of teachers’ own practices, so that robust research-based evidence is seen in terms of an ‘approach’ rather than a list of ritualised classroom strategies.

Investment in high quality CPD for teachers, based on solid academic research findings into what really works in education – and there is ample evidence-based research regarding which approaches lead to improved outcomes – is absolutely key if we are to improve everyday classroom practice and the long-term prospects of our young people. It is about elucidating what is ‘real’ and what is a ‘myth’ so teachers can make informed judgements about the rationale behind their teaching methods.

When it comes to research-based evidence, it seems appropriate to mention the research into assessment for learning (AfL) as an example of a wide evidence-based study into improving learning outcomes. Despite the effectiveness of the AfL approach, based on evidence of ‘effect sizes’[i] between 0.4 and 0.7 – among the biggest found for educational interventions (Black and Wiliam, 1998, backed up by Hattie’s research into the effectiveness of classroom interventions) – this approach to better teaching and learning can still be poorly understood by teachers and policy-makers, who seem conditioned into thinking that ‘assessment’ can only be reflected quantitatively, rather than qualitatively during the process, even though it ultimately leads to improved standards that can be demonstrated in quantitative as well as qualitative terms.

Seemingly, this lack of in-depth grasp of what AfL means in practice highlights the need for teachers’ greater awareness of robust, research-based evidence, so they can make more informed choices about the classroom practices most likely to lead to improvement and sustainable learning.

High quality, evidence-based training is crucial to institutional learning and continuous teacher professional development for improved standards in teaching and learning.


References:

Black, P. and Wiliam, D. (1998). Inside the Black Box: Raising Standards through Classroom Assessment. London: nferNelson.

Hattie, J. and Yates, G. (2014). Visible Learning and the Science of How We Learn. Oxon: Routledge.



[i] Learning gains are measured by comparing (a) the average improvement in pupils’ scores on tests with (b) the range (spread) of scores found for typical groups of pupils on the same tests. The ratio of (a) divided by (b) is the ‘effect size’.
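
As a worked illustration of that calculation, the short Python sketch below uses invented scores and takes the comparison group’s standard deviation as the measure of spread, which is the usual convention; none of the numbers come from the studies cited.

    # Worked effect-size example with invented scores: (a) the average gain
    # over a comparison group, divided by (b) the spread of typical scores.
    from statistics import mean, stdev

    comparison   = [42, 47, 50, 53, 58]    # typical pupils on the same test
    intervention = [46, 50, 53, 57, 59]    # pupils taught with AfL strategies

    gain = mean(intervention) - mean(comparison)   # (a) average improvement
    spread = stdev(comparison)                     # (b) spread of scores
    print(f"effect size = {gain / spread:.2f}")    # -> effect size = 0.50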