“Assessing the impact of coding and STEM educational initiatives” is the discussion group held at the ALL DIGITAL Summit on October 10th, 2019, in Bologna, led by Davide Marocco (University of Naples) and Monica Divitini (NTNU). This workshop was supported by the CODINC project.
In this blog post you can read about the discussion group and learn more about how to assess the impact of a project.
Veronique De Leener, founder and director of Maks vzw and practitioner with refugees and migrants, moderated the Discussion Group.
To begin, Veronique gave the example of Capital Digital, a project that teaches children and young people how to code. The young people are trained first and then run the workshops with the children in a peer-to-peer approach. She said that after taking part in these workshops, youngsters obtained better results at school and improved both personal and educational skills. However, these are stories shared by the participants: are they enough to measure the impact of the action?
Veronique pointed out that there is a need for researchers to work together to have a more scientific view on impact measurement. She then gave the floor to Davide who introduced the topic of the impact assessment of extracurricular digital creative activities.
Davide pointed out that evaluation concerns the collection of data to measure the impact and effectiveness of an action or a project. The idea of impact assessment raises a number of questions, such as how you can tell whether an activity is actually good, or which dimensions you want to explore. These questions came up, for instance, in the context of the CODINC project, where several dimensions come together: the acquisition of knowledge (hard skills – learning how to code), a strong social aspect due to the interaction of the students among themselves and with their teachers, and a personal dimension as well (soft skills).
Davide gave an overview of the tools that they used to measure the impact of the CODINC project.
The first tool used was the sociometric test, aimed at measuring the social relationships within the group. This approach is characterised by an affective-relational aspect, which refers to the relationships established between the members of a group, and a functional aspect, which concerns the organisation of the group and is aimed at understanding the relationships established in order to achieve a common goal.
The second tool tested was the self-efficacy scale. The idea behind it is that getting better at doing something may change how people see themselves. Self-efficacy is therefore not a measure of the skills a person possesses, but of the belief that person has in what he or she is able to do with those skills in different situations. This test, however, did not yield many results in the context of the CODINC project and was therefore ruled out.
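To make the idea concrete, self-efficacy scales are typically scored by averaging a respondent's ratings of belief statements on a Likert scale. The sketch below assumes invented items and answers, not the actual CODINC questionnaire:

```python
# Hypothetical answers from one student on a pre-test: each item is a
# statement like "I can solve a coding problem even when it is hard",
# rated 1 (not at all true) to 5 (exactly true). Invented for illustration.
answers = {"item1": 4, "item2": 3, "item3": 5}

def self_efficacy_score(answers):
    """The scale score is the mean of the item ratings: a measure of
    belief in one's own abilities, not of the abilities themselves."""
    return sum(answers.values()) / len(answers)

print(self_efficacy_score(answers))  # → 4.0
```

Comparing this score before and after an activity is what would show whether the experience changed the students' self-belief.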
Finally, the TATS Scale was used to assess the relationship between teachers and students. The TATS measured the democratic or authoritative attitude of teachers towards students. Interestingly, across the 500 students it was observed that the students' perception of the authoritative attitude differed between the pre- and post-test: the authoritative approach scored higher for the majority of students after they had experienced being teachers themselves.
As an example, Davide showed a sociogram, a graphical representation of the affective-relational aspect in secondary schools before and after the test was taken. A sociogram is composed of nodes and lines: nodes represent the members of a group, while lines indicate the relationships (positive or negative) between them.
The sociogram makes it possible to see, in an immediate and visual way, whether the cohesion, the relationships and therefore the dynamics of the class group have changed. However, Davide remarked that this alone probably doesn't tell us much about the impact of the project, because after 5 months students would get closer anyway.
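As a minimal sketch of what lies behind such a picture: a sociogram can be encoded as a set of directed choices, where each student names peers they would (+1) or would not (−1) like to work with, and one simple index of cohesion is the number of mutual positive ties. The names and choices below are invented for illustration:

```python
# Each key is a (chooser, chosen) pair; the value is the sign of the
# relationship. Invented data, not from the actual CODINC sociogram.
choices = {
    ("Ana", "Ben"): +1,
    ("Ben", "Ana"): +1,   # Ana and Ben choose each other: a mutual tie
    ("Ana", "Carl"): -1,
    ("Carl", "Ben"): +1,  # not reciprocated, so not a mutual tie
}

def mutual_positive_pairs(choices):
    """Count pairs who chose each other positively (a cohesion index)."""
    count = sum(
        1
        for (a, b), v in choices.items()
        if v == +1 and choices.get((b, a)) == +1
    )
    return count // 2  # each mutual pair is counted twice

print(mutual_positive_pairs(choices))  # → 1 (the Ana–Ben pair)
```

Running the same count on the pre- and post-test choices is what lets the sociogram show whether class cohesion has changed.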
This is why a control class was also used: both tests were applied there, but without undertaking any activities.
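The logic of the control class can be sketched as a simple difference-in-differences: compare the pre/post change in the project class with the change in the class that did nothing, so that whatever would have happened anyway over 5 months is subtracted out. All scores below are invented for illustration:

```python
# Hypothetical cohesion scores (e.g. from the sociometric test) for a
# project class and a control class, before and after the 5 months.
def mean(xs):
    return sum(xs) / len(xs)

project_pre, project_post = [3.0, 2.5, 3.5], [4.0, 4.5, 4.0]
control_pre, control_post = [3.0, 3.0, 3.5], [3.2, 3.1, 3.6]

project_change = mean(project_post) - mean(project_pre)
control_change = mean(control_post) - mean(control_pre)

# The difference of the two changes estimates the effect of the
# activities beyond what ordinary class life would bring anyway.
effect = project_change - control_change
print(round(effect, 2))
```

If the control class improves just as much as the project class, `effect` is near zero and the activities cannot claim the credit.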
Davide concluded that it is therefore possible to imagine a more quantitative approach to measuring the impact of an action, even on social relationships. However, he also stressed that doing this is not an easy task! These analyses were not always well received by the schools, so it is important to find a balance between the different needs.
Davide gave the floor to Monica who gave an overview of the tools that they used in the UMI-Sci-Ed project (Exploring Ubiquitous computing, Mobile computing and Internet-of-things to promote Science Education). The aim of the project was to enhance the attractiveness and inclusiveness of science education and careers for young people through the use of the latest technologies.
She started by saying that collecting relevant data to measure the impact of a project is difficult and time-demanding, and that there isn't one methodology that fits all situations. But it is important to do it nonetheless.
She underlined that it is important to identify what needs to be evaluated. In this case, it was the students' attitude towards the use of new tools. But the aim was also to evaluate the impact of the action on teachers, not only on students (how does the action change teacher training?).
She presented the UMI-Sci-Ed evaluation toolbox. In creating the toolbox, they defined for each tool when it is reasonable to use it.
Afterwards, Monica introduced the actual steps of the evaluation process, stressing that it is important to negotiate with stakeholders what needs to be evaluated and that data should be collected following evaluation guidelines.
An interesting aspect to take into account is that high satisfaction does not necessarily translate into high intention. Although students responded positively on satisfaction, their responses to the question “do you want to attend something similar?” were more negative.
Monica concluded that, although expectations might be high, the reality proves to be complex and difficult since evaluation activities are time consuming. It is essential to find a balance between an engaging experience and the need to collect data.
Finally, she said that the feeling of being a bit lost is normal. What is critical, though, is to have the support of the stakeholders in order to create awareness about the importance of evaluation and assessment. This is something to push for!
At the end of the session, some time was allocated to allow the participants to ask questions.
To the question “what would you do differently?”, Monica responded that it is important to always adapt the method to each situation.
Veronique asked whether the stories from participants could be used as a tool to evaluate the impact of the project. Monica said that they could, but only if they are collected in a systematic way, so as to see what aspects they have in common.
It was also pointed out that bringing formal and non-formal education together allows students who don't see themselves progressing at school to discover that they are good at something more practical, which is promising! Veronique suggested that training should be offered to the parents too, as they are often afraid of everything that is not traditional.
At the end, one participant added that asking “how do you feel today?” is as important as having numbers and statistics, and also that there are certain activities whose impact can only be measured after, say, 10 years. This probably also shows the limits of assessing the impact of a project. Improving the way data are collected and analysed is a great way to see whether an initiative has been successful, but is it enough? Is there something else that we are leaving out?