Evaluation reports – summary and lessons to be learned

Earlier this year we submitted four Horizon 2020 proposals. In July we received the bad news: none of them is getting funded. What went wrong?

Horizon 2020 applications are evaluated by independent experts, and the process includes three phases: individual evaluation, consensus group and panel review. More details about it can be found here. Each criterion (Excellence; Impact; Quality and Efficiency of the Implementation) can bring a maximum of 5 points, for a total of 15.

EPICS – Epistemological Insights on Citizen Science (total score 12)

Positive points:

  • The project was evaluated as ambitious and progressive with clear research objectives, methodology and work plan, management structures and procedures, complementary partners and very well addressed resource attribution. Direct participation of citizen scientists in delivering the targeted actions is considered innovative as well as the application of participatory design and ethnographical research methods. Inclusion of grassroots initiatives, public and societal engagement are particularly strong. Exploitation goals, data management plan and communications plan are detailed. Outputs are aligned with impacts.


Negative points:

  • Activities only partially address the topic, with limited focus on incentivising career scientists – little is said of plans to identify barriers and enablers to their involvement. The targeted knowledge base of existing citizen science projects is not significant, and the involvement of policy makers is not evident. The gender dimension is only broadly considered (but not how it will be addressed in research terms). Dissemination is not fully detailed, nor is it clear how results will be exploitable by wider stakeholder audiences. Some minor remarks: the potential risks of combining citizen science and ethnography are not fully considered, and the rationale for the attribution of lead roles is not fully described in all cases.

EVENTS – Understanding the Evolution of Society and Science through Citizen Science (total score: 7)

This was a different type of call, evaluated on only two criteria; the maximum score is 10, not 15.

Positive points:

  • Approach and methodology are evaluated as innovative; objectives consistent with the exploitation strategy; the knowledge gap well investigated with objectives addressing it. The proposal is beyond the state of the art in relation to the ethnographic approach. The use of stakeholder knowledge is clear and the gender dimension precisely specified.


Negative points:

  • The topic description in the work programme is only partially discussed; the objectives may be achievable but are not specific or easy to measure. The concepts of trust and accountability can be interpreted in many ways and need to be examined critically. Literature on the analysis and clarification of these concepts was not fully taken into account. The fellowship programme lacks evidence; greater representation from ethics experts is needed.
  • Little evidence is provided of how participants will yield insights into the reproducibility of citizen ethnography. Certain claims, such as improving citizen science projects, are difficult to assess. There is not enough detail on how the knowledge base will be significantly advanced beyond the state of the art. It should be clearer how the results will contribute to the expected impacts. Vague statements should be avoided.

Future calls to consider related to citizen science (all opening on 10 December 2019 with a deadline of 15 April 2020; more details available in the work programme):

SwafS-19-2020: Taking stock and re-examining the role of science communication
SwafS-23-2020: Grounding RRI in society with a focus on citizen science
SwafS-27-2020: Hands-on citizen science and frugal innovation (this one especially suitable to “re-use” the EPICS application)
SwafS-29-2020: The ethics of technologies with high socio-economic impact
SwafS-30-2020: Responsible Open Science: an ethics and integrity perspective
SwafS-31-2020: Bottom-up approach to build SwafS knowledge base

CYCLOPS – Cycle of Creativity: a Leading Open Intellectual Property System for the European Union (total score: 9.50)

Positive points:

  • The overall scope and goals are very ambitious, the proposal is well structured and demonstrates awareness of the current issues. Some WPs are well thought through (WP5 and WP2 – “Giving all stakeholders a voice”, Edgeryders leading) – very likely to lead to new insights. The outputs are clearly defined and very well linked with the expected impacts and there is a possibility for the network to be created beyond the scope of the project. Critical risks for implementation and contingencies are addressed and management well described.


Negative points:

  • The duration is short for the proposed activities, and their feasibility is not justified. Much of the work is unspecified and open; some research questions are unclear and vague. The results to be achieved are numerous and connected with each other in a rather complex way. Work packages focused on collecting the views of stakeholders are limited by a reduced set of languages, and it is not clearly explained how the selection of languages, countries and issues is representative of the digital single market and its cultural and societal make-up. More detailed interfacing with senior lawyers would be needed to prevent mischaracterization and oversimplification of law for the purposes of empirical work.
  • The interdisciplinary approach is not clearly expressed in all sections. The distribution of activities foresees very reduced cooperation among the partners, while dissemination and communication activities lack detail. The risks related to the project’s ambitious goals are not sufficiently considered.

AI SET – Arts Inspired Socio Economic Transformation (total score: 7)

Positive points:

  • The concept is compelling and has imaginative components, especially in terms of how a material object can collect and mediate experiences of mobility, social change, place and identity, as well as improve well-being among returning migrant populations. It is a bold, ambitious and multi-stranded project that combines data-driven enterprises with the everyday worlds of craft, objects and local, regional and global populations. The proposal is strong on stakeholder knowledge, the work plan is clear and the work packages are appropriately structured. The consortium has a high degree of expertise; art-specific approaches and competences in the field of digital economy are well covered. Particular strength: the online element involving the ethnographic mapping – carefully defined and given adequate weight. (great point for us!)


Negative points:

  • Connections between the existing field of research and the concrete case are not sufficiently described, and links between socio-economic transformation and the core issues of the project are not fully addressed. It is not clear how social inclusion will be achieved or how the project will contribute to the further integration of the arts in EU policies and strategic goals. The objectives remain general.
  • The use of social media and mobilising communities through online methods are not sufficiently explained (!), and the development of new products and services is not given sufficient attention. The project’s innovative character is also limited in terms of scope and impact. Academic knowledge and the interdisciplinary approach are not considered well enough. Dissemination activities are not concrete, different target groups are not given enough attention, and it is not clear how the partners would complement each other.

What is certainly clear from the evaluations is that our methodology is recognized as innovative and our work package as very relevant for the consortia. We need to stress the interdisciplinary approach more and emphasize the close collaboration between the partners. The dissemination part was often evaluated as weak.

What next? @alberto and I will look in more detail at the upcoming calls to find the best way to reuse some of the proposals above and improve them based on this feedback.

Stay tuned!


This is a very useful synthesis, @marina, thanks a lot! I will reflect on it.


My first reaction was that there seems to be an issue of “granularity” – the bigger (more overarching) elements seem to have got better ratings than the more specific ones.


And how do you interpret that @martin? What should we do to improve our scores?

I did not know the process of consortium building and proposal writing; hence, I have no insight into ‘who missed what & when’. My view is shaped along the following lines:

  1. Assuming that you had a standard process of ‘consortium building and proposal writing’, the distribution of strengths and weaknesses will follow the ‘standard model’ (whatever that is) that leads to a typical H2020 success rate for single proposals of, let’s assume, 15% (the figure may be lower). This means that being in four proposals has a ‘likelihood to gain’ of: 0.0005 for 4 grants; 0.0115 for 3 grants; 0.0975 for 2 grants; 0.3685 for 1 grant; and 0.5220 for 0 grants.

  2. Taking a success rate of 12% raises the likelihood of ‘no grant in four’ to 60% and lowers the likelihood of ‘one grant in four’ to 33%. Hence, statistically your outcome is not shocking!

  3. What remains to be checked is the assumption of an ‘H2020 standard process of consortium building and proposal writing’. You may have faced less favourable conditions, but the statistics do not indicate it!

  4. Under the assumption of an ‘H2020 standard process of consortium building and proposal writing’, the most likely root cause ‘to have messed it up, in the end’ has to be something that damaged features of the proposal needed to exceed the ‘85%-is-good’ level. That means that the devil is in ‘what the writer considers less relevant’ (= small or finer granularity).
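The figures in points 1 and 2 above follow from the binomial distribution: with four independent proposals and a per-proposal success rate p, the probability of winning exactly k grants is C(4, k) · p^k · (1 − p)^(4 − k). A minimal Python sketch to reproduce them (a plain sanity check, not part of any H2020 tooling; the independence assumption is of course a simplification):

```python
# Reproduce the 'likelihood to gain' figures: number of grants won out of
# n submitted proposals, modelled as a binomial distribution with per-proposal
# success rate p (assuming independent outcomes).
from math import comb

def grant_probabilities(n, p):
    """Return [P(0 grants), P(1 grant), ..., P(n grants)] for n proposals."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

for p in (0.15, 0.12):
    probs = grant_probabilities(4, p)
    print(f"p = {p:.0%}: " + ", ".join(f"P({k})={pr:.4f}" for k, pr in enumerate(probs)))
```

With p = 15% this gives 0.5220, 0.3685, 0.0975, 0.0115, 0.0005 for 0–4 grants, matching the figures quoted above; with p = 12%, ‘no grant in four’ rises to roughly 60% and ‘one grant in four’ drops to roughly 33%.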

Against the background of the above, my reading of the excerpts from the proposal assessments is that too many of the small features, considered less relevant, that are needed to exceed the ‘85%-is-good’ level were missed. It is hard, but a proposal has to be written to match the ‘95%-is-good’ level to have a fair chance in a statistical sense and not be a pure lottery.

My reading of proposals is/was ‘from the end’ (= what does the proposer plan to do once the work proposed ‘here’ is achieved?). When applicants have an idea about step ‘N+1’, they more often have a more comprehensive understanding of ‘what to do now’ (step ‘N’). My reading of the excerpts from the proposal assessments is that suchlike features were missing too frequently.

I hope that these lines help in handling this less-than-comforting experience.


Very interesting, Martin, so glad to hear your thoughts. Thanks for the feedback!

@marina, could I move this one to the proposal writing category? I think having these types of analysis easily accessible might be a good draw.


OK, but this category is under the IOH, correct? Does that mean that the topics for proposal writing need to be somehow related to NGI?

Alberto also just brought this up. Good point that I had overlooked. I will not move those general examples there in that case. Do you know of NGI-related examples?

I think that for now there is nothing specifically related to the NGI that we could move. I’ll keep this in mind for the future!
